---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
pretty_name: CanItEdit
size_categories:
  - n<1K
source_datasets:
  - original
task_categories:
  - text2text-generation
task_ids: []
tags:
  - code-generation
  - code
paperswithcode_id: canitedit
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: name
      dtype: string
    - name: full_name
      dtype: string
    - name: before
      dtype: string
    - name: after
      dtype: string
    - name: tests
      dtype: string
    - name: instruction_descriptive
      dtype: string
    - name: instruction_humane
      dtype: string
    - name: taxonomy
      struct:
        - name: change_kind
          dtype: string
        - name: libraries
          sequence: string
        - name: topic
          dtype: string
  splits:
    - name: test
      num_bytes: 301982
      num_examples: 54
  download_size: 136181
  dataset_size: 301982
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions

CanItEdit is a benchmark for evaluating LLMs on instructional code editing, the task of updating a program given a natural language instruction. The benchmark contains 54 hand-crafted Python programs with before and after code blocks, two types of natural language instructions (descriptive and lazy), and a hidden test suite.
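
Each record pairs a `before` program with its reference `after` version, both instruction variants, and the hidden `tests`, plus a `taxonomy` describing the change. Below is a minimal sketch of loading and inspecting the test split with the Hugging Face `datasets` library; the repository id `nuprl/CanItEdit` is an assumption, so substitute the actual id if it differs.

```python
from datasets import load_dataset

# Load the single "test" split (54 examples).
# NOTE: the repository id is an assumption; adjust it to the actual dataset id.
ds = load_dataset("nuprl/CanItEdit", split="test")

example = ds[0]
print(example["full_name"])                # problem identifier
print(example["taxonomy"]["change_kind"])  # category of the requested change
print(example["instruction_descriptive"])  # detailed instruction
print(example["instruction_humane"])       # informal ("lazy") instruction
print(example["before"])                   # program to be edited
```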

The dataset’s dual natural language instructions test models under two scenarios (a prompt-construction sketch follows the list):

  1. Descriptive: Detailed instructions replicate situations where a user provides a precise specification or another model outlines a plan, similar to Reflexion-style prompting.
  2. Lazy: Informal instructions resemble the terse queries users typically give LLMs for code generation.
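
For illustration, here is a minimal sketch of turning either instruction style into an edit prompt for a chat model; the prompt template and the commented-out `query_model` call are hypothetical placeholders, not the exact format used in the paper's evaluation harness.

```python
def build_edit_prompt(example: dict, style: str = "descriptive") -> str:
    """Build one edit prompt from a CanItEdit record.

    `style` selects between the detailed ("descriptive") and informal
    ("humane", i.e. lazy) instruction; the template itself is illustrative.
    """
    instruction = example[f"instruction_{style}"]
    return (
        "## Code Before:\n"
        f"{example['before']}\n"
        "## Instruction:\n"
        f"{instruction}\n"
        "## Code After:\n"
    )

# Hypothetical usage: `query_model` stands in for any LLM completion call.
# prompt = build_edit_prompt(ds[0], style="humane")
# edited_code = query_model(prompt)
```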

For more information and results, see our paper.

## Citation

If you use our work, please cite our paper as follows:

@misc{cassano2023edit,
      title={Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions}, 
      author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha},
      year={2023},
      eprint={2312.12450},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}

## How To Evaluate

All the code for evaluating the benchmark can be found in our GitHub repository.
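
As a rough illustration of what the evaluation involves (the official harness in the GitHub repository should be used for reported numbers), the sketch below appends an example's hidden `tests` to a candidate edited program and runs the result in a subprocess, treating a zero exit code as a pass. Running model-generated code like this should be sandboxed in practice.

```python
import subprocess
import sys
import tempfile

def passes_tests(candidate_program: str, tests: str, timeout_s: int = 30) -> bool:
    """Check a candidate edit against an example's hidden test suite.

    Illustrative only: the official harness handles sandboxing, timeouts,
    and pass@k aggregation.
    """
    # Write the candidate program followed by its tests to a temporary file.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_program + "\n\n" + tests)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout_s
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

# Sanity check: the reference "after" program should pass its own tests.
# print(passes_tests(example["after"], example["tests"]))
```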