---
dataset_info:
  features:
    - name: zip
      dtype: string
    - name: filename
      dtype: string
    - name: contents
      dtype: string
    - name: type_annotations
      sequence: string
    - name: type_annotation_starts
      sequence: int64
    - name: type_annotation_ends
      sequence: int64
  splits:
    - name: train
      num_bytes: 4206116750
      num_examples: 548536
  download_size: 1334224020
  dataset_size: 4206116750
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# ManyTypes4Py-Reconstructed

This is a reconstruction of the original Python source code behind the ManyTypes4Py dataset, introduced in the following paper:

A. M. Mir, E. Latoškinas and G. Gousios, "ManyTypes4Py: A Benchmark Python Dataset for Machine Learning-based Type Inference," IEEE/ACM International Conference on Mining Software Repositories (MSR), 2021, pp. 585-589

The artifact (v0.7) for ManyTypes4Py does not have the original Python files. Instead, each file is pre-processed into a stream of types without comments, and the contents of each repository are stored in a single JSON file. This reconstructed dataset has raw Python code.
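
For a quick look at the schema above, the train split can be streamed with the `datasets` library so the full ~4 GB of Parquet files does not have to be downloaded up front. This is a minimal sketch; the Hub id used below is an assumption and should be replaced with this repository's actual id.

```python
from datasets import load_dataset

# Assumption: substitute the actual Hub id of this dataset repository.
ds = load_dataset("nuprl/manytypes4py", split="train", streaming=True)

row = next(iter(ds))
print(row["zip"], row["filename"])          # originating repository archive and file path
print(row["contents"][:200])                # raw Python source text
print(list(zip(row["type_annotations"][:3],
               row["type_annotation_starts"][:3],
               row["type_annotation_ends"][:3])))
```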

More specifically:

  1. We extract the list of repositories from the "clean" subset of ManyTypes4Py, which are the repositories that type-check with mypy.

  2. We attempt to download all repositories, but only succeed in fetching 4,663 (out of ~5.2K).

  3. We augment each file with the text of each type annotation, as well as their start and end positions (in bytes) in the code; see the sketch below.
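
As a minimal sketch of how these fields fit together (assuming the offsets index into the UTF-8 encoding of `contents` and that the end offset is exclusive), each annotation's text can be recovered from the raw source like this:

```python
def annotation_spans(row):
    """Yield (annotation, start, end) triples, checking that the recorded
    byte offsets actually point at the annotation text in the source."""
    raw = row["contents"].encode("utf-8")
    for text, start, end in zip(
        row["type_annotations"],
        row["type_annotation_starts"],
        row["type_annotation_ends"],
    ):
        span = raw[start:end].decode("utf-8")
        # Assumption: end offsets are exclusive, so the slice should equal the stored text.
        assert span == text, (span, text)
        yield text, start, end
```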