diff --git a/index.html b/index.html
index 41d4dd7..63b9cba 100644
--- a/index.html
+++ b/index.html
@@ -60,11 +60,11 @@

- Shanchao Liang,
+ Shanchao Liang,
- Yiran Hu,
+ Yiran Hu,
- Nan Jiang
+ Nan Jiang, Lin Tan
@@ -72,7 +72,7 @@

- Purdue Univeristy
Under Submission
+ Purdue University
Under Submission
@@ -112,7 +112,7 @@

-
@@ -142,7 +142,7 @@

Leaderboard

-
+
@@ -210,7 +210,7 @@

Leaderboard

BM25 Sparse-Retrieval
Rank
-
+
@@ -361,7 +361,7 @@

Notes on Experiments

2. Generation details: Each LLM generates one output per instance in REPOCOD using greedy decoding. Generated outputs must be correctly indented to avoid syntax errors; a sketch of this setup is shown below.
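For concreteness, the following is a minimal sketch of such a generation setup, assuming a HuggingFace causal LM; the model name, prompt handling, and token budget are illustrative assumptions rather than REPOCOD's exact evaluation harness.

    import ast
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative model choice, not necessarily one of the ten LLMs evaluated in REPOCOD.
    MODEL_NAME = "deepseek-ai/deepseek-coder-6.7b-instruct"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    def generate_completion(prompt: str, max_new_tokens: int = 512) -> str:
        """Produce one deterministic completion for a single instance."""
        inputs = tokenizer(prompt, return_tensors="pt")
        output_ids = model.generate(
            **inputs,
            do_sample=False,          # greedy decoding: no sampling
            num_return_sequences=1,   # one output per instance
            max_new_tokens=max_new_tokens,
        )
        # Keep only the newly generated tokens after the prompt.
        new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
        return tokenizer.decode(new_tokens, skip_special_tokens=True)

    def is_syntactically_valid(completed_function: str) -> bool:
        """Parsing the completion catches indentation and other syntax errors."""
        try:
            ast.parse(completed_function)
            return True
        except SyntaxError:
            return False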

- Please checkout our paper for more details.
+ Please check out our paper for more details.

@@ -378,16 +378,10 @@

Notes on Experiments

Abstract

- Large language models (LLMs) have shown remarkable ability in code generation with more than 90 pass@1 in
- solving Python coding problems in HumanEval and MBPP. Such high accuracy leads to the question: can LLMs
- replace human programmers? Existing manual crafted, simple, or single-line code generation benchmarks
- cannot answer this question due to their gap with real-world software development. To answer this
- question, we propose REPOCOD, a code generation benchmark with 980 problems collected from 11 popular
- real-world projects, with more than 58% of them requiring file-level or repository-level context
- information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the
- highest average cyclomatic complexity (9.00) compared to existing benchmarks. In our evaluations on ten
- LLMs, none of the models can achieve more than 30 pass@1 on REPOCOD, disclosing the necessity of building
- stronger LLMs that can help developers in real-world software development.
+ Large language models (LLMs) have achieved high accuracy, i.e., more than 90 pass@1, in solving Python coding problems in HumanEval and MBPP. Thus, a natural question arises: can LLMs achieve code completion performance comparable to that of human developers? Unfortunately, one cannot answer this question using existing manually crafted or simple (e.g., single-line) code generation benchmarks, since such tasks fail to represent real-world software development. In addition, existing benchmarks often use weak code correctness metrics, which can lead to misleading conclusions.
+

+

+ To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) compared to existing benchmarks. Each task in REPOCOD includes 313.5 developer-written test cases on average for better correctness evaluation. In our evaluations of ten LLMs, none of the models achieves more than 30 pass@1 on REPOCOD, indicating the necessity of building stronger LLMs that can help developers in real-world software development.
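As an illustration of the complexity statistic above, the sketch below approximates cyclomatic complexity for a canonical solution using Python's standard ast module; counting branch points this way is an assumption made for illustration, not REPOCOD's measurement tooling.

    import ast
    import textwrap

    # Decision points counted by this rough approximation of cyclomatic complexity.
    _DECISION_NODES = (
        ast.If, ast.For, ast.While, ast.IfExp,
        ast.And, ast.Or, ast.ExceptHandler, ast.comprehension,
    )

    def approx_cyclomatic_complexity(func_source: str) -> int:
        """Return 1 plus the number of decision points in the function source."""
        tree = ast.parse(func_source)
        decisions = sum(isinstance(node, _DECISION_NODES) for node in ast.walk(tree))
        return 1 + decisions

    EXAMPLE = textwrap.dedent('''
        def classify(x):
            if x < 0:
                return "negative"
            for _ in range(3):
                if x % 2 == 0 and x > 10:
                    return "big even"
            return "other"
    ''')
    print(approx_cyclomatic_complexity(EXAMPLE))  # 5: one base path plus 4 decision points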

Dense Dense-Retrieval
Rank