Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, in solving Python coding problems in HumanEval and MBPP. This raises a natural question: can LLMs achieve code completion performance comparable to that of human developers? Unfortunately, existing manually crafted or simple (e.g., single-line) code generation benchmarks cannot answer this question, since their tasks fail to represent real-world software development. In addition, existing benchmarks often rely on weak code correctness metrics, which can yield misleading conclusions.

To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) compared with existing benchmarks. Each task in REPOCOD includes 313.5 developer-written test cases on average for more reliable correctness evaluation. In our evaluation of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, indicating the need for stronger LLMs that can help developers in real-world software development.
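
The pass@1 numbers above follow the standard unbiased pass@k estimator of Chen et al. (2021); the sketch below shows how such scores are typically computed (the exact scoring harness used for REPOCOD may differ):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total generated samples per problem
    c: samples that pass all test cases
    k: attempt budget; pass@1 means a single attempt
    """
    if n - c < k:
        return 1.0  # every size-k draw contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 the estimator reduces to c/n, the fraction of passing
# samples: 3 correct out of 10 generations gives 0.30.
print(pass_at_k(n=10, c=3, k=1))  # 0.3
```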
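
Cyclomatic complexity, the second difficulty statistic cited above, counts the linearly independent paths through a function (roughly, decision points plus one). A minimal illustration using the radon package, a common Python tool for this metric and not necessarily the tooling used by the REPOCOD authors:

```python
from radon.complexity import cc_visit  # pip install radon

SOURCE = '''
def sign(x):
    if x > 0:
        return 1
    elif x < 0:
        return -1
    return 0
'''

# Each `if`/`elif` adds one decision point on top of the base
# complexity of 1, so `sign` scores 3; REPOCOD's canonical
# solutions average 9.00 on this scale.
for block in cc_visit(SOURCE):
    print(block.name, block.complexity)  # sign 3
```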