The illusion of diminishing returns: measuring long horizon execution in LLMs
Sinha, Akshit; Arun, Arvindh; Goel, Shashwat; Staab, Steffen; Geiping, Jonas
24 April 2026
Sinha, Akshit, Arun, Arvindh, Goel, Shashwat, Staab, Steffen and Geiping, Jonas
(2026)
The illusion of diminishing returns: measuring long horizon execution in LLMs.
ICLR 2026: The Fourteenth International Conference on Learning Representations, Rio de Janeiro, Brazil.
23-27 Apr 2026. 1 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
Does continued scaling of large language models (LLMs) yield diminishing returns? In this work, we show that short-task benchmarks may give an illusion of slowing progress, as even marginal gains in single-step accuracy can compound into exponential improvements in the length of tasks a model can successfully complete. We then argue that the failures of LLMs when simple tasks are made longer arise from mistakes in execution, rather than an inability to reason. We therefore propose isolating execution capability by explicitly providing the knowledge and plan needed to solve a long-horizon task. First, we find that larger models can correctly execute significantly more turns even when small models have near-perfect single-turn accuracy. We then observe that the per-step accuracy of models degrades as the number of steps increases. This is not just due to long-context limitations: curiously, we observe a self-conditioning effect, where models become more likely to make mistakes when the context contains their errors from prior turns. Self-conditioning is not mitigated by simply scaling model size. However, we find that thinking mitigates self-conditioning and also enables execution of much longer tasks in a single turn. We conclude by benchmarking frontier thinking models on the length of tasks they can execute in a single turn. Overall, by focusing on the ability to execute, we hope to reconcile debates on how LLMs can solve complex reasoning problems yet fail at simple tasks when made longer, and highlight the massive benefits of scaling model size and sequential test-time compute for long-horizon tasks.
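The abstract's compounding argument can be made concrete with a back-of-the-envelope sketch. Note the assumptions are ours, not the paper's: independent per-step errors, a 50% task-success threshold, and the function name are all illustrative. If per-step accuracy is p, a length-H task succeeds with probability p^H, so the longest task solvable at a given reliability is H = floor(log(target) / log(p)).

```python
import math

def horizon_at_success(p, target=0.5):
    """Longest task length H with p**H >= target, assuming
    independent per-step accuracy p (an illustrative model,
    not the paper's exact formulation)."""
    return math.floor(math.log(target) / math.log(p))

# A marginal per-step gain compounds into a much longer solvable horizon:
for p in (0.90, 0.99, 0.999):
    print(f"per-step accuracy {p}: horizon {horizon_at_success(p)}")
```

Under these assumptions, moving per-step accuracy from 99% to 99.9% stretches the 50%-reliability horizon from 68 steps to 692, roughly a tenfold gain from a sub-1% accuracy improvement, which is the sense in which short-task benchmarks can understate progress.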
This record has no associated files available for download.
More information
Published date: 24 April 2026
Venue - Dates:
ICLR 2026: The Fourteenth International Conference on Learning Representations, Rio de Janeiro, Brazil, 2026-04-23 - 2026-04-27
Identifiers
Local EPrints ID: 511094
URI: http://eprints.soton.ac.uk/id/eprint/511094
PURE UUID: 4b822cc2-f286-4856-81e4-005a0ba9c3ce
Catalogue record
Date deposited: 01 May 2026 16:36
Last modified: 02 May 2026 01:51
Contributors
Author: Akshit Sinha
Author: Arvindh Arun
Author: Shashwat Goel
Author: Steffen Staab
Author: Jonas Geiping