Language Models for Code Completion: A Practical Evaluation
Transformer-based language models for automatic code completion have shown great promise so far, yet the evaluation of these models rarely uses real data. This study provides both quantitative and qualitative assessments of three public code language models when completing real-world code. We first developed an open-source IDE extension, Code4Me, for the online evaluation of the models. We collected real auto-completion usage data for over a year from more than 1200 users, resulting in over 600K valid completions. These models were then evaluated using six standard metrics across twelve programming languages. Next, we conducted a qualitative study of 1690 real-world completion requests to identify the reasons behind the poor model performance. A comparative analysis of the models' performance in online and offline settings was also performed, using benchmark synthetic datasets and two masking strategies. Our findings suggest that while developers utilize code completion across various languages, the best results are achieved for mainstream languages such as Python and Java. InCoder outperformed the other models across all programming languages, highlighting the significance of training data and objectives. Our study also revealed that offline evaluations do not accurately reflect real-world scenarios. Upon qualitative analysis of the models' predictions, we found that 66.3
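The abstract mentions six standard evaluation metrics but does not list them in this excerpt. As an illustrative sketch only, the snippet below shows two metrics commonly used when scoring code completions against the code a developer actually accepted: exact match and character-level edit similarity. The function names and example strings are hypothetical, not taken from the paper or the Code4Me extension.

```python
# Illustrative sketch: two metrics often used for code completion evaluation.
# These are assumptions for demonstration, not the paper's exact metric suite.
from difflib import SequenceMatcher


def exact_match(prediction: str, ground_truth: str) -> bool:
    """True when the predicted completion equals the accepted code,
    ignoring leading/trailing whitespace."""
    return prediction.strip() == ground_truth.strip()


def edit_similarity(prediction: str, ground_truth: str) -> float:
    """Character-level similarity in [0, 1], based on matching subsequences."""
    return SequenceMatcher(None, prediction, ground_truth).ratio()


if __name__ == "__main__":
    pred = "return os.path.join(base, name)"       # hypothetical model output
    truth = "return os.path.join(base_dir, name)"   # hypothetical accepted code
    print(exact_match(pred, truth))                 # False
    print(round(edit_similarity(pred, truth), 3))   # high, but below 1.0
```

In an online setting such as the one described here, these scores would be computed per completion request and then aggregated per model and per programming language.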
Further reading
- Access the paper on arXiv.org