CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis

arXiv:2503.23145

Anjiang Wei\(^{1}\), Tarun Suresh\(^{2}\), Jiannan Cao\(^{3}\), Naveen Kannan\(^{1}\), Yuheng Wu\(^{1}\), Kai Yan\(^{2}\), Thiago S. F. X. Teixeira\(^{4}\), Ke Wang\(^{5}\), Alex Aiken\(^{1}\)
\(^1\)Stanford University \(^2\)University of Illinois Urbana-Champaign \(^3\)MIT \(^4\)Intel \(^5\)Visa Research

Abstract

Inductive program synthesis, or programming by example, requires synthesizing functions from input-output examples that generalize to unseen inputs. While large language model agents have shown promise in programming tasks guided by natural language, their ability to perform inductive program synthesis is underexplored. Existing evaluation protocols rely on static sets of examples and held-out tests, offering no feedback when synthesized functions are incorrect and failing to reflect real-world scenarios such as reverse engineering. We propose CodeARC, the Code Abstraction and Reasoning Challenge, a new evaluation framework where agents interact with a hidden target function by querying it with new inputs, synthesizing candidate functions, and iteratively refining their solutions using a differential testing oracle. This interactive setting encourages agents to perform function calls and self-correction based on feedback. We construct the first large-scale benchmark for general-purpose inductive program synthesis, featuring 1114 functions. Among 18 models evaluated, o3-mini performs best with a success rate of 52.7%, highlighting the difficulty of this task. Fine-tuning LLaMA-3.1-8B-Instruct on curated synthesis traces yields up to a 31% relative performance gain. CodeARC provides a more realistic and challenging testbed for evaluating LLM-based program synthesis and inductive reasoning.
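As a rough sketch of the differential testing oracle mentioned above, such an oracle can be thought of as running a candidate and the hidden target on the same pool of inputs and reporting the first disagreement as a counterexample. The interface below is an illustrative assumption, not CodeARC's actual API.

# Illustrative sketch of a differential testing oracle (hypothetical
# interface, not CodeARC's actual API): run the candidate and the hidden
# target on the same inputs and report the first disagreement.
from typing import Any, Callable, Iterable, Optional, Tuple

def differential_test(
    candidate: Callable[..., Any],
    hidden_target: Callable[..., Any],
    test_inputs: Iterable[Tuple[Any, ...]],
) -> Optional[Tuple[Tuple[Any, ...], Any, Any]]:
    """Return (inputs, expected, got) for the first disagreement, or None."""
    for args in test_inputs:
        expected = hidden_target(*args)
        got = candidate(*args)
        if got != expected:
            return args, expected, got
    return None

# Example: a candidate that agrees with a (normally hidden) target on all
# queried inputs returns None, i.e., no counterexample is found.
# differential_test(lambda x: x * 2, lambda x: x + x, [(1,), (2,), (3,)])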


Leaderboard

Model | Annotated # I/O | Annotated # Oracle | Annotated Success (%) | Anonymized # I/O | Anonymized # Oracle | Anonymized Success (%)
Llama-3.2-3B | 28.3 | 1.9 | 11.0 | 29.3 | 2.0 | 4.8
Mixtral-8x7B | 27.4 | 1.9 | 20.3 | 28.5 | 1.9 | 12.0
Llama-3.1-8B | 28.0 | 1.8 | 19.3 | 28.6 | 1.9 | 13.7
Mixtral-8x22B | 26.7 | 1.8 | 25.1 | 28.1 | 1.9 | 15.0
QwQ-32B | 24.6 | 1.8 | 20.0 | 25.7 | 1.9 | 15.4
Qwen2.5-7B | 26.9 | 1.8 | 29.2 | 28.3 | 1.9 | 15.8
Llama-3.2-11B | 27.3 | 1.8 | 24.9 | 28.3 | 1.9 | 16.1
gpt-4o-mini | 27.0 | 1.8 | 26.1 | 27.9 | 1.8 | 18.5
Llama-3.2-90B | 26.2 | 1.8 | 28.4 | 27.7 | 1.9 | 19.7
Llama-3.1-70B | 26.9 | 1.8 | 30.1 | 27.9 | 1.9 | 20.0
Qwen2.5-72B | 25.5 | 1.7 | 30.1 | 27.1 | 1.8 | 21.6
Llama-3.1-405B | 24.2 | 1.7 | 38.6 | 26.0 | 1.8 | 26.7
gpt-4o | 23.4 | 1.7 | 37.8 | 25.2 | 1.8 | 28.7
DeepSeek-V3 | 23.7 | 1.7 | 37.7 | 25.1 | 1.8 | 29.5
claude3.7-sonnet | 23.6 | 1.7 | 39.0 | 24.6 | 1.7 | 33.8
DeepSeek-R1 | 18.6 | 1.6 | 49.8 | 20.3 | 1.7 | 41.3
o1-mini | 21.0 | 1.6 | 53.2 | 21.5 | 1.6 | 47.7
o3-mini | 15.6 | 1.5 | 59.5 | 16.0 | 1.6 | 52.7
Table 1: CodeARC Leaderboard Results. # I/O represents the average number of input-output examples queried; # Oracle shows the average number of differential testing oracle calls; Success (%) indicates the percentage of problems successfully solved.


Overview

Figure 1: Overview of CodeARC. Our framework evaluates LLMs' reasoning capabilities in inductive program synthesis. The agent begins with input-output examples, interacts with the hidden target function via function calls to gather new examples, and invokes a differential testing oracle to check the correctness of its synthesized function, using the oracle's feedback for self-reflection and refinement.
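To make the interaction protocol concrete, the following is a minimal sketch of the loop in Figure 1. The agent, oracle, and budget interfaces are illustrative assumptions rather than CodeARC's actual API.

# Minimal sketch of the interactive synthesis loop in Figure 1. The agent,
# hidden target, and oracle interfaces are illustrative assumptions, not
# CodeARC's actual API.
def synthesis_loop(agent, hidden_target, oracle, seed_examples, max_rounds=10):
    examples = list(seed_examples)           # initial input-output pairs
    for _ in range(max_rounds):
        # 1. Query the hidden target on new inputs chosen by the agent.
        for args in agent.propose_queries(examples):
            examples.append((args, hidden_target(*args)))
        # 2. Synthesize a candidate function from all examples gathered so far.
        candidate = agent.synthesize(examples)
        # 3. Ask the differential testing oracle to check the candidate;
        #    a returned counterexample is fed back for refinement.
        counterexample = oracle(candidate)
        if counterexample is None:
            return candidate                 # candidate accepted as correct
        examples.append(counterexample)      # (inputs, expected output) pair
    return None                              # interaction budget exhausted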


Dataset

Source | Functions | Lines of Code (Min) | Lines of Code (Max) | Lines of Code (Avg)
HumanEval+ | 78 | 7 | 56 | 18.5
MBPP+ | 131 | 2 | 21 | 3.9
APPS | 905 | 2 | 74 | 9.5
CodeARC (Annotated) | 1114 | 2 | 74 | 9.5
CodeARC (Anonymized) | 1114 | 2 | 74 | 9.5
Table 2: Function count and code complexity statistics across dataset sources. The CodeARC benchmark is constructed from existing programming datasets (HumanEval+, MBPP+, and APPS). The Annotated version includes function names as synthesis hints, while the Anonymized version removes these semantic clues to increase difficulty.
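For illustration, a single task instance in each variant might look roughly like the following; the field names, structure, and example function are hypothetical, not the benchmark's actual schema.

# Hypothetical sketch of one task instance in each dataset variant
# (field names and the example are illustrative, not the benchmark's
# actual schema). Both variants share the same hidden target and
# input-output examples; only the name hint differs.
annotated_task = {
    "function_name": "remove_duplicates",    # semantic hint kept
    "examples": [(([1, 1, 2],), [1, 2]),
                 ((["a", "b", "a"],), ["a", "b"])],
}
anonymized_task = {
    "function_name": "f",                    # semantic hint removed
    "examples": [(([1, 1, 2],), [1, 2]),
                 ((["a", "b", "a"],), ["a", "b"])],
}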


Scaling Trend

Figure 2: Scaling trend on CodeARC.


Case Study

Figure 3: Case Study. The model queries edge cases, synthesizes a comparison function, receives a counterexample from the oracle, and corrects it with a set-based solution.
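As a hypothetical reconstruction of this pattern (the actual task in Figure 3 may differ), suppose the hidden target tests whether two lists contain the same distinct elements: a first candidate that compares sorted lists fails on duplicates, and the oracle's counterexample points the model toward a set-based fix.

# Hypothetical reconstruction of the Figure 3 pattern (the actual benchmark
# task may differ). Suppose the hidden target checks whether two lists
# contain the same distinct elements.

def candidate_v1(a, b):
    # First attempt: compare the sorted lists element by element.
    return sorted(a) == sorted(b)

def candidate_v2(a, b):
    # Refinement after the oracle's counterexample: ignore multiplicity by
    # comparing the sets of elements instead.
    return set(a) == set(b)

# A counterexample of the kind the oracle might return: duplicates make the
# two candidates disagree, and only the set-based version matches the target.
a, b = [1, 1, 2], [2, 1]
print(candidate_v1(a, b))  # False
print(candidate_v2(a, b))  # True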


BibTeX

@article{wei2025codearc,
    title={CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis},
    author={Wei, Anjiang and Suresh, Tarun and Cao, Jiannan and Kannan, Naveen and Wu, Yuheng and Yan, Kai and Teixeira, Thiago SFX and Wang, Ke and Aiken, Alex},
    journal={arXiv preprint arXiv:2503.23145},
    year={2025}
}