VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation

arXiv:2504.15659

Anjiang Wei\(^{1}\), Huanmi Tan\(^{2}\), Tarun Suresh\(^{3}\), Daniel Mendoza\(^{1}\),
Thiago S. F. X. Teixeira\(^{4}\), Ke Wang\(^{5}\), Caroline Trippel\(^{1}\), Alex Aiken\(^{1}\)

\(^1\)Stanford University \(^2\)CMU \(^3\)UIUC \(^4\)Intel \(^5\)Visa Research

Abstract

Recent advances in Large Language Models (LLMs) have sparked growing interest in applying them to Electronic Design Automation (EDA) tasks, particularly Register Transfer Level (RTL) code generation. While several RTL datasets have been introduced, most focus on syntactic validity rather than functional validation with tests, leading to training examples that compile but may not implement the intended behavior. We present VERICODER, a model for RTL code generation fine-tuned on a dataset validated for functional correctness. This fine-tuning dataset is constructed with a novel methodology that combines unit test generation with feedback-directed refinement. Given a natural language specification and an initial RTL design, we prompt a teacher model (GPT-4o-mini) to generate unit tests and to iteratively revise the RTL design based on simulation results from those tests. When necessary, the teacher model also updates the tests to ensure they comply with the natural language specification. As a result, every example in our dataset is functionally validated and consists of a natural language description, an RTL implementation, and passing tests. Fine-tuned on this dataset of over 125,000 examples, VERICODER achieves state-of-the-art functional correctness on VerilogEval and RTLLM, with relative gains of up to 71.7% and 27.4%, respectively. An ablation study further shows that models trained on our functionally validated dataset outperform those trained on functionally non-validated datasets, underscoring the importance of high-quality data for RTL code generation.
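The feedback-directed refinement loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper callables (`teacher_generate_tests`, `simulate`, `teacher_revise`) and the retry budget are assumptions standing in for the GPT-4o-mini teacher prompts and the Verilog simulator used in the actual pipeline.

```python
def refine_example(spec, rtl, teacher_generate_tests, simulate, teacher_revise,
                   max_iters=5):
    """Iteratively revise an RTL design until the generated tests pass.

    spec: natural language specification
    rtl:  initial RTL design (Verilog source, as a string)
    teacher_generate_tests(spec, rtl) -> testbench source (hypothetical helper)
    simulate(rtl, tests) -> (passed: bool, log: str)  (hypothetical helper)
    teacher_revise(spec, rtl, tests, log) -> (new_rtl, new_tests)

    Returns a functionally validated (spec, rtl, tests) triple, or None
    if no passing design is found within the retry budget.
    """
    tests = teacher_generate_tests(spec, rtl)
    for _ in range(max_iters):
        passed, log = simulate(rtl, tests)
        if passed:
            # Every retained dataset example is a validated triple:
            # description, RTL implementation, and passing tests.
            return spec, rtl, tests
        # The teacher may fix the design, the tests, or both, keeping
        # them consistent with the natural language specification.
        rtl, tests = teacher_revise(spec, rtl, tests, log)
    return None  # discard examples that never pass
```

In the paper's pipeline, the initial (spec, RTL) pairs come from an existing non-validated dataset, and only examples that reach a passing simulation are kept for fine-tuning.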




Overview

Figure 1: LLM-guided dataset augmentation overview.


Comparison with Other Datasets

Figure 2: Comparison of Verilog fine-tuning dataset construction approaches.


Motivating Example

Figure 3: Natural language specification (left) and the corresponding buggy and corrected Verilog designs (middle and right). The specification and buggy design are from the original dataset, which lacks tests, while the test and corrected design are generated by a teacher model (GPT-4o-mini) and included in our validated dataset.


Results

Figure 4: Results on VerilogEval and RTLLM.


BibTeX

@article{wei2025vericoder,
  title={VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation},
  author={Wei, Anjiang and Tan, Huanmi and Suresh, Tarun and Mendoza, Daniel and Teixeira, Thiago SFX and Wang, Ke and Trippel, Caroline and Aiken, Alex},
  journal={arXiv preprint arXiv:2504.15659},
  year={2025}
}