Adaptive Self-improvement LLM Agentic System for ML Library Development
Abstract
ML libraries, often written in architecture-specific programming languages (ASPLs) that target domain-specific architectures, are key to efficient ML systems. However, writing these high-performance ML libraries is challenging because it requires expert knowledge of both ML algorithms and the ASPL. Large language models (LLMs), on the other hand, have shown general coding capabilities. Challenges nonetheless remain when using LLMs to generate ML libraries in ASPLs because 1) the task is complicated even for human experts and 2) code examples are scarce due to the esoteric and evolving nature of ASPLs. We present an adaptive self-improvement agentic system that enables LLMs to perform such complex reasoning under limited data by iteratively improving their capability through self-generated experience. To evaluate the effectiveness of our system, we construct a benchmark from a typical ML library and generate ASPL code with both open- and closed-source LLMs on this benchmark. Our results show improvements over a baseline single LLM.
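The core idea of iterative self-improvement through self-generated experience can be sketched as a simple loop: the agent proposes candidate programs, verifies them, and folds the verified programs back into its context as examples for the next round. The sketch below is purely illustrative; `fake_llm` and `evaluate` are hypothetical stand-ins and do not reflect the paper's actual system or API.

```python
# Minimal sketch of an adaptive self-improvement loop (illustrative only).
# The agent keeps candidates that pass an evaluator and reuses them as
# in-context experience; fake_llm and evaluate are hypothetical stand-ins.

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call; the 'program' it returns simply records
    how many accumulated examples were present in the prompt."""
    return f"program(num_examples={prompt.count('EXAMPLE')})"

def evaluate(program: str) -> bool:
    """Stand-in evaluator (in practice: compile-and-test against an ASPL
    toolchain); here it accepts every candidate for illustration."""
    return True

def self_improve(task: str, rounds: int = 3) -> list[str]:
    experience: list[str] = []  # self-generated, verified examples
    for _ in range(rounds):
        # Condition the next generation on all experience gathered so far.
        prompt = task + "".join(f"\nEXAMPLE: {e}" for e in experience)
        candidate = fake_llm(prompt)
        if evaluate(candidate):  # keep only verified programs
            experience.append(candidate)
    return experience

history = self_improve("write an ASPL kernel")
```

Each round's prompt grows with verified experience, so later generations are conditioned on more (self-produced) data, which is the mechanism the abstract describes for operating under limited external examples.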
BibTeX
@article{zhanggenghan2025,
  title={Adaptive Self-improvement LLM Agentic System for ML Library Development},
  author={Genghan Zhang and Weixin Liang and Olivia Hsu and Kunle Olukotun},
  journal={International Conference on Learning Representations (ICLR) Workshop on Reasoning and Planning for LLMs},
  year={2025},
  month={May}
}