Understanding the Challenges in Iterative Generative Optimization with LLMs
Abstract
Generative optimization with large language models hinges on implicit design decisions about what the optimizer may edit and what learning evidence it receives; these choices significantly affect success across applications.
Generative optimization uses large language models (LLMs) to iteratively improve artifacts (such as code, workflows, or prompts) using execution feedback. It is a promising approach to building self-improving agents, yet in practice it remains brittle: despite active research, only 9% of surveyed agents used any automated optimization. We argue that this brittleness arises because, to set up a learning loop, an engineer must make "hidden" design choices: what can the optimizer edit, and what is the "right" learning evidence to provide at each update? We investigate three factors that affect most applications: the starting artifact, the credit horizon for execution traces, and the batching of trials and errors into learning evidence. Through case studies on MLAgentBench, Atari, and BigBench Extra Hard (BBEH), we find that these design decisions can determine whether generative optimization succeeds, yet they are rarely made explicit in prior work. Different starting artifacts determine which solutions are reachable in MLAgentBench, truncated traces can still improve Atari agents, and larger minibatches do not monotonically improve generalization on BBEH. We conclude that the lack of a simple, universal way to set up learning loops across domains is a major hurdle to productionization and adoption, and we provide practical guidance for making these choices.
Community
LLM-driven code modification, prompt changes, and design proposals have become the norm. However, this "revision" (optimization) process does not escape the fundamental laws of machine learning. In this paper, we study three factors in this optimization process that make it difficult to obtain good outcomes from the LLM.
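The loop described above can be sketched abstractly. This is a minimal illustration, not the paper's implementation: `propose` stands in for an LLM call that rewrites the artifact, `evaluate` stands in for executing it, and the `batch` parameter mirrors the "batching trials into learning evidence" choice the paper studies. All names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trial:
    artifact: str   # the editable object: code, workflow, or prompt
    score: float    # execution outcome used for selection
    trace: str      # execution feedback shown to the optimizer

def optimize(initial: str,
             evaluate: Callable[[str], tuple[float, str]],
             propose: Callable[[str, list[Trial]], str],
             steps: int = 5,
             batch: int = 2) -> Trial:
    """Generic generative-optimization loop (illustrative sketch).

    `propose(artifact, recent_trials)` models an LLM revision step that
    sees the current best artifact plus a minibatch of recent trials;
    `evaluate(artifact)` returns (score, trace) from execution.
    """
    best = Trial(initial, *evaluate(initial))
    history = [best]
    for _ in range(steps):
        candidate = propose(best.artifact, history[-batch:])
        trial = Trial(candidate, *evaluate(candidate))
        history.append(trial)
        if trial.score > best.score:   # greedy selection over revisions
            best = trial
    return best
```

The paper's "hidden" design choices show up directly as arguments here: `initial` is the starting artifact, the content of `trace` reflects the credit horizon, and `batch` controls how many trials are aggregated into learning evidence per update.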
We also provide a framework for evaluating LLMs on Atari games by having them write code that controls the game's interface.
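The paper's actual harness is not shown here; as a rough sketch of the code-as-policy pattern under stated assumptions, an LLM-written policy function can be compiled from source and rolled out against a Gym-style `reset`/`step` interface. The environment below is a toy stand-in (a paddle chasing a fixed target), not a real Atari game, and every name is hypothetical.

```python
# Hypothetical LLM output: a policy written as Python source.
POLICY_SRC = """
def policy(obs):
    # Move right (action 2) while the target is right of the paddle,
    # otherwise move left (action 3).
    ball_x, paddle_x = obs
    return 2 if ball_x > paddle_x else 3
"""

class StubEnv:
    """Tiny Gym-like stand-in for an Atari interface (reset/step)."""
    def reset(self):
        self.ball_x, self.paddle_x, self.t = 30, 10, 0
        return (self.ball_x, self.paddle_x)

    def step(self, action):
        self.paddle_x += 1 if action == 2 else -1
        self.t += 1
        reward = 1.0 if self.paddle_x == self.ball_x else 0.0
        done = self.t >= 50
        return (self.ball_x, self.paddle_x), reward, done

def rollout(policy_src, env):
    """Compile the generated policy and accumulate episode reward."""
    ns = {}
    exec(policy_src, ns)  # unsafe for untrusted code; fine for a sketch
    policy, obs, total, done = ns["policy"], env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total
```

The episode return from `rollout` is exactly the kind of execution feedback a generative optimizer would fold back into its next revision of `POLICY_SRC`.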
Get this paper in your agent:
hf papers read 2603.23994

