Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.
Error code: DatasetGenerationCastError
Message: An error occurred while generating the dataset.

All the data files must have the same columns, but at some point there are 47 new columns ({'feat_PrimalInitialGap', 'feat_Gap', 'feat_preinteger', 'feat_#MCP', 'feat_GlbRed', 'feat_RHS_dynamic', 'feat_pre_columns', 'feat_obj_dynamic', 'feat_LessThan', 'feat_pre_row', 'feat_GreaterThan', 'feat_DualInitialGap', 'feat_GlbFix', 'feat_PrimalDualGap', 'feat_has_varub', 'File Name', 'feat_Nodes', 'Log Name', 'feat_Columns', 'feat_IKN', 'feat_Nonzeros', 'feat_LPit/n', 'feat_#Cuts', 'feat_#Sepa', 'feat_PAC', 'feat_obj_density', 'feat_IntInf', 'feat_#Conf', 'feat_M01', 'feat_CON', 'feat_Equality', 'feat_GapClosed', 'feat_Active', 'feat_has_varlb', 'feat_COV', 'feat_CAR', 'feat_per_i', 'feat_per_b', 'feat_KNA', 'feat_Rows', 'feat_Coe_dynamic', 'feat_Time', 'feat_BIN', 'feat_MI', 'feat_PAR', 'feat_EQK', 'feat_Symmetries'}) and 1 missing columns ({'NAME          COPTPROB'}).

This happened while the csv dataset builder was generating data using

hf://datasets/SEVANTORY/BenLOC/table_data/feat/feat_indset.csv (at revision c464d4d1f790101c4306163b2d7430cbd4ac1e1f)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
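As an illustrative sketch of diagnosing this (not the dataset's own tooling, and file names below are hypothetical), a short script can group CSV files by their header row, so that files with a divergent header — such as one whose first line is actually an MPS `NAME` record — can be moved into their own configuration:

```python
import csv
from collections import defaultdict

def group_by_header(paths):
    """Group CSV files by their header row, so files with
    mismatched columns can be split into separate configs."""
    groups = defaultdict(list)
    for path in paths:
        with open(path, newline="") as f:
            header = tuple(next(csv.reader(f), []))
        groups[header].append(path)
    return groups
```

Files that end up in different groups cannot share one `csv` builder configuration; per the linked docs, they belong in separate `configs` entries of the dataset card.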


Preview rows: the previewed file is in MPS format, beginning with "NAME COPTPROB" and "OBJSENSE MIN", followed by a ROWS section listing one N (objective) row and L (less-than) constraint rows C1 through C145 (with gaps). This is why the csv builder reports a single column named "NAME          COPTPROB".
End of preview.
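Since the previewed file is MPS rather than CSV, a minimal sketch (illustrative, not part of the dataset's tooling) of reading the row senses from an MPS ROWS section:

```python
def parse_mps_rows(lines):
    """Collect (sense, name) pairs from the ROWS section of an
    MPS file: N = objective, L = <=, G = >=, E = equality."""
    rows, in_rows = [], False
    for line in lines:
        token = line.split()
        if not token:
            continue
        if token[0] == "ROWS":
            in_rows = True
        elif in_rows and token[0] in ("N", "L", "G", "E"):
            rows.append((token[0], token[1]))
        elif in_rows:  # a new section header ends ROWS
            break
    return rows
```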

Datasets of ML4MOC

Presolved data is stored in ./instances. After the datasets are set up, the folder structure looks as follows:

instances/
  MIPLIB/                   -> 1065 instances
  set_cover/                -> 3994 instances
  independent_set/          -> 1604 instances
  nn_verification/          -> 3104 instances
  load_balancing/           -> 2286 instances

Dataset Description

MIPLIB

A heterogeneous dataset drawn from MIPLIB 2017, a well-established benchmark for evaluating MILP solvers. It includes a diverse set of particularly challenging mixed-integer programming (MIP) instances.

Set Covering

This dataset consists of instances of the classic Set Covering Problem. Each instance requires finding the minimum number of sets whose union covers all elements of a universe. The problem is formulated as a MIP.
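The standard set-covering formulation (a sketch; the dataset's exact model may differ in details) over a universe $U$ and given sets $S_1, \dots, S_n$ is:

```latex
\min \sum_{j=1}^{n} x_j
\quad \text{s.t.} \quad
\sum_{j:\, e \in S_j} x_j \ge 1 \;\; \forall e \in U,
\qquad x_j \in \{0, 1\},
```

where $x_j = 1$ selects set $S_j$.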

Maximum Independent Set

This dataset addresses the Maximum Independent Set Problem. Each instance is modeled as a MIP, with the objective of maximizing the size of the independent set.
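For a graph $G = (V, E)$, the standard formulation (a sketch; details of the dataset's model may vary) is:

```latex
\max \sum_{v \in V} x_v
\quad \text{s.t.} \quad
x_u + x_v \le 1 \;\; \forall (u, v) \in E,
\qquad x_v \in \{0, 1\},
```

where $x_v = 1$ places vertex $v$ in the independent set.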

NN Verification

In this “Neural Network Verification” dataset, the problem of verifying whether a neural network is robust to input perturbations is posed as a MIP. The MIP formulation is described in the paper On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models (Gowal et al., 2018). Each input on which to verify the network gives rise to a different MIP.
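A key ingredient of such formulations is the mixed-integer encoding of a ReLU activation $y = \max(0, \hat{y})$ with known pre-activation bounds $l \le \hat{y} \le u$; a standard big-M sketch (not necessarily the exact constraints used in this dataset) is:

```latex
y \ge 0, \quad y \ge \hat{y}, \quad
y \le \hat{y} - l\,(1 - z), \quad
y \le u\,z, \quad z \in \{0, 1\},
```

where $z = 1$ forces $y = \hat{y}$ (active) and $z = 0$ forces $y = 0$ (inactive).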

Load Balancing

This dataset is from a NeurIPS 2021 competition. The problem deals with apportioning workloads, where the apportionment is required to be robust to the failure of any single worker. Each instance is modeled as a MILP, using a bin-packing formulation with apportionment.

Dataset Splitting

Each dataset was split into a training set $D_{\text{train}}$ and a testing set $D_{\text{test}}$, following an approximate 80-20 split. Moreover, we stratified the split by time and by "optimality", meaning that the proportion of instances solved to optimality under each parameter setting is kept similar across the training and testing sets. This ensures a balanced representation of both temporal variation and parameter efficiency in our data partitions.
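The split described above can be sketched as a stratified 80-20 partition (an illustrative implementation, not the authors' code): group instances by a label such as whether a parameter setting reached optimality, then take roughly 80% of each group for training:

```python
import random
from collections import defaultdict

def stratified_split(items, label_fn, train_frac=0.8, seed=0):
    """Split items roughly train_frac / (1 - train_frac) while
    keeping each label's proportion similar in both partitions."""
    groups = defaultdict(list)
    for item in items:
        groups[label_fn(item)].append(item)
    rng = random.Random(seed)
    train, test = [], []
    for members in groups.values():
        rng.shuffle(members)
        cut = round(len(members) * train_frac)
        train.extend(members[:cut])
        test.extend(members[cut:])
    return train, test
```

Because the cut is taken within each label group, the share of "optimal" instances in the test set matches the training set up to rounding.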
