Preface
Mainstream deep learning compilers based on the polyhedral model, such as Tensor Comprehensions (Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions), rely on the schedule tree representation; this post gives a brief introduction.
Source: http://impact.gforge.inria.fr/impact2017/papers/impact-17-a-general-compilation-algorithm-to-parallelize-and-optimize-counted-loops-with-dynamic-data-dependences_slides.pdf
Schedule tree node types
Filter node: selects a subset of the statement instances for the subtree below it to schedule.
Band node: schedules all statement instances that reach the node.
Sequence and set nodes: the children of a sequence node execute in the given order, while the children of a set node execute in an unspecified order.
Domain node and context node: these are straightforward; the explanation in the figure above makes them clear.
Mark node: allows the user to mark specific subtrees of the schedule tree.
Extension node: introduces auxiliary computations that are not part of the original iteration domain, which is useful, e.g., for introducing statements that copy data to and from shared memory.
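To make these node types concrete, here is a hand-written sketch of a schedule tree for two statements S and T where all of S's iterations run before T's, written in isl's YAML-like textual notation (the exact syntax can differ slightly between isl versions; this is an illustration, not output from a tool):

```yaml
# Domain node: the set of all statement instances to be executed.
domain: "[N] -> { S[i] : 0 <= i < N; T[i] : 0 <= i < N }"
child:
  # Sequence node: children execute in the order listed.
  sequence:
  - filter: "{ S[i] }"                  # Filter node: only S's instances below
    child:
      schedule: "[{ S[i] -> [(i)] }]"   # Band node: order S's instances by i
  - filter: "{ T[i] }"                  # Filter node: only T's instances below
    child:
      schedule: "[{ T[i] -> [(i)] }]"   # Band node: order T's instances by i
```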
Example (from Tensor Comprehensions):
Schedule tree operations
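Transformations such as tiling are expressed as rewrites of the tree. As an illustration only, here is a minimal toy Python model (not isl's actual API) that shows how filter, band, and sequence nodes determine an execution order, and how a tiling operation rewrites a band node:

```python
# Toy schedule-tree model. Real polyhedral compilers use isl's schedule
# trees; this simplified dict-based structure only illustrates the semantics.

def run(node, instances):
    """Return the ordered list of statement instances the subtree executes."""
    kind = node["kind"]
    if kind == "domain":      # root: introduces the full instance set
        return run(node["child"], node["instances"])
    if kind == "sequence":    # children execute one after another, in order
        return [s for child in node["children"] for s in run(child, instances)]
    if kind == "filter":      # restrict to a subset of the statements
        kept = [s for s in instances if s[0] in node["stmts"]]
        return run(node["child"], kept)
    if kind == "band":        # order instances by the (multi-dim) schedule
        return sorted(instances, key=node["schedule"])
    raise ValueError(kind)

def tile_band(band, size):
    """Schedule-tree operation: tile a 1-D band into two schedule dimensions."""
    old = band["schedule"]
    return {"kind": "band",
            "schedule": lambda s: (old(s)[0] // size, old(s)[0] % size)}

# Two statements S and T over i in [0, 6); S's loop runs before T's loop.
tree = {
    "kind": "domain",
    "instances": [("S", i) for i in range(6)] + [("T", i) for i in range(6)],
    "child": {
        "kind": "sequence",
        "children": [
            {"kind": "filter", "stmts": {"S"},
             "child": {"kind": "band", "schedule": lambda s: (s[1],)}},
            {"kind": "filter", "stmts": {"T"},
             "child": {"kind": "band", "schedule": lambda s: (s[1],)}},
        ],
    },
}

print(run(tree, []))  # all S instances in order of i, then all T instances
```

Tiling the S band (e.g. `tile_band(..., 2)`) regroups its iterations into tiles of two without changing this band's execution order; in real compilers such rewrites go through isl's schedule-tree API, and the toy `tile_band` above only mimics the effect.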
References
Note: the content of this post is largely excerpted/adapted from the references annotated in the text and listed below; corrections are welcome.
S. Verdoolaege, S. Guelton, T. Grosser, and A. Cohen. Schedule Trees. In 4th Workshop on Polyhedral Compilation Techniques (IMPACT, associated with HiPEAC), Vienna, Austria, Jan. 2014. http://impact.gforge.inria.fr/impact2014/papers/impact2014-verdoolaege.pdf
O. Zinenko, L. Chelini, and T. Grosser. Declarative Transformations in the Polyhedral Model. Research Report RR-9243, Inria; ENS Paris; ETH Zurich; TU Delft; IBM Zürich, 2018. hal-01965599. https://hal.inria.fr/hal-01965599/document
N. Vasilache, O. Zinenko, T. Theodoridis, P. Goyal, Z. DeVito, W. S. Moses, S. Verdoolaege, A. Adams, and A. Cohen. Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions. arXiv:1802.04730, 2018.
T. Grosser, S. Verdoolaege, and A. Cohen. Polyhedral AST Generation Is More Than Scanning Polyhedra. ACM Transactions on Programming Languages and Systems (TOPLAS), 2015.
S. Verdoolaege, J. C. Juega, A. Cohen, J. I. Gómez, C. Tenllado, and F. Catthoor. Polyhedral Parallel Code Generation for CUDA. ACM Transactions on Architecture and Code Optimization (TACO), 2013. https://dl.acm.org/doi/pdf/10.1145/2400682.2400713
V. Elango, N. Rubin, M. Ravishankar, H. Sandanagobalane, and V. Grover. Diesel: DSL for Linear Algebra and Neural Net Computations on GPUs. In MAPL 2018.