
Aligning Offline Metrics and Human Judgments of Value for Code Generation Models

By Victor Dibia and others at Microsoft Research and the University of Chicago
Large language models have demonstrated great potential to assist programmers in generating code. For such human-AI pair programming scenarios, we empirically demonstrate that while generated code is most often evaluated in terms of its functional correctness (i.e., whether generations pass available unit tests), correctness does not fully capture (e.g., may…
June 13, 2023
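
For context, "functional correctness" in this setting is typically measured by executing a model's generation against a task's unit tests. The sketch below is a minimal, hypothetical illustration of such a check, not the paper's actual evaluation harness; the function and test names are made up for the example.

```python
def passes_unit_tests(generated_code: str, tests: list) -> bool:
    """Return True if the generated code defines behavior that passes all tests.

    Each test is a callable that receives the execution namespace and raises
    AssertionError (or any exception) on failure.
    """
    namespace = {}
    try:
        exec(generated_code, namespace)  # define the candidate function(s)
        for test in tests:
            test(namespace)
        return True
    except Exception:
        return False


# Example: evaluate one generation for a toy "add" task.
def test_add(ns):
    assert ns["add"](2, 3) == 5


generation = "def add(a, b):\n    return a + b"
print(passes_unit_tests(generation, [test_add]))  # True
```

As the abstract argues, a binary pass/fail signal like this is the common offline metric, but it may not align with the value human programmers actually derive from a generation.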