Streaming Tensor Program: A streaming abstraction for dynamic parallelism
Abstract
Dynamic behaviors are becoming prevalent in many tensor applications, such as machine learning, where widely used models contain data-dependent tensor shapes and control flow. However, the limited expressiveness of prior programming abstractions for spatial dataflow accelerators (SDAs) forces these dynamic behaviors to be implemented statically or left unoptimized. To address these challenges, we present Streaming Tensor Programs (STeP), a streaming abstraction that enables dynamic tensor workloads to run efficiently on SDAs. STeP introduces flexible routing operators, an explicit memory hierarchy, and symbolic-shape semantics that expose dynamic data rates and tensor dimensions. These capabilities unlock new optimizations, such as dynamic tiling, dynamic parallelization, and configuration time-multiplexing, that adapt SDA execution to dynamic behaviors while preserving dataflow efficiency. Evaluated with a cycle-approximate simulator on representative LLM layers and a full model with real-world traces, STeP enables dynamic tiling that breaks the Pareto-optimal frontier of prior work, dynamic parallelization that improves latency by ∼2.72x, and configuration time-multiplexing that increases compute utilization by ∼2.64x over prior SDA abstractions and their implementations.
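To make the idea of symbolic-shape streams concrete, here is a minimal NumPy sketch, not the STeP API, that illustrates the intuition: elements flowing through a stream have data-dependent shapes known only at runtime, and a dynamic tiler splits each element as it arrives rather than padding everything to a static worst case. The names `token_stream`, `dynamic_tiles`, and `streamed_matmul` are hypothetical and chosen only for this illustration.

```python
# Hypothetical sketch of symbolic-shape streaming and dynamic tiling.
# This is an illustration of the concept, not the STeP abstraction itself.
import numpy as np

def token_stream(lengths, d_model=64):
    """Yield activations with data-dependent sequence lengths."""
    rng = np.random.default_rng(0)
    for n in lengths:                      # n is only known at runtime
        yield rng.standard_normal((n, d_model))

def dynamic_tiles(x, max_tile_rows=16):
    """Split x along its dynamic dimension; the last tile may be ragged."""
    for start in range(0, x.shape[0], max_tile_rows):
        yield x[start:start + max_tile_rows]

def streamed_matmul(xs, w, max_tile_rows=16):
    """Consume a stream of ragged activations, tile each element
    dynamically, and emit results. A static-shape abstraction would
    instead pad every element to the worst-case length."""
    for x in xs:
        parts = [tile @ w for tile in dynamic_tiles(x, max_tile_rows)]
        yield np.concatenate(parts, axis=0)

w = np.random.default_rng(1).standard_normal((64, 64))
for y in streamed_matmul(token_stream([5, 37, 12]), w):
    print(y.shape)   # (5, 64), (37, 64), (12, 64): no worst-case padding
```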
BibTeX
@inproceedings{sohn-gina2026,
  title={Streaming Tensor Program: A streaming abstraction for dynamic parallelism},
  author={Gina Sohn and Genghan Zhang and Konstantin Hossfeld and Jungwoo Kim and Nathan Sobotka and Nathan Zhang and Olivia Hsu and Kunle Olukotun},
  booktitle={International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS)},
  note={To appear},
  year={2026}
}