lhoestq (HF Staff) committed
Commit c56fea0 · verified · 1 Parent(s): 19cd970

Add 'legal_professional' config data files

README.md CHANGED
@@ -242,6 +242,14 @@ configs:
     path: law/val-*
   - split: dev
     path: law/dev-*
+- config_name: legal_professional
+  data_files:
+  - split: test
+    path: legal_professional/test-*
+  - split: val
+    path: legal_professional/val-*
+  - split: dev
+    path: legal_professional/dev-*
 dataset_info:
 - config_name: accountant
   features:
@@ -1113,6 +1121,36 @@ dataset_info:
     num_examples: 5
   download_size: 83562
   dataset_size: 92043
+- config_name: legal_professional
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 121985
+    num_examples: 215
+  - name: val
+    num_bytes: 12215
+    num_examples: 23
+  - name: dev
+    num_bytes: 6974
+    num_examples: 5
+  download_size: 125081
+  dataset_size: 141174
 ---
 
 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
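As a sanity check on the new config's metadata, the per-split byte and example counts in the `dataset_info` block should sum to the declared `dataset_size`. A minimal sketch, standard library only, with the values copied from the diff above:

```python
# Split metadata for the new legal_professional config, copied from the
# dataset_info block added in this commit.
splits = {
    "test": {"num_bytes": 121985, "num_examples": 215},
    "val": {"num_bytes": 12215, "num_examples": 23},
    "dev": {"num_bytes": 6974, "num_examples": 5},
}

# The declared dataset_size is the sum of the split byte counts.
dataset_size = sum(s["num_bytes"] for s in splits.values())
total_examples = sum(s["num_examples"] for s in splits.values())

print(dataset_size)    # 141174, matching the declared dataset_size
print(total_examples)  # 243 questions across the three splits
```

The split layout (large `test`, small `val`, 5-example `dev` for few-shot prompting) matches the other C-Eval configs in the card.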
legal_professional/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3d2fe07b6e85cec739ea9ff2ed9adcefd6b23ea9ae3ab6af220a8245cdf2841
+size 15227
legal_professional/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a2abce1c5080cdfe3a2991a7f6728a7a07b8548d310a008b8ab8669975553c7
+size 93336
legal_professional/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07ed092ba4e5c3154bb8e54cce4dd20fcf4760f3d970704d4f4b82723bdce9c7
+size 16518
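Each file above is checked in as a Git LFS pointer rather than as the Parquet data itself: three `key value` lines giving the spec version, a `sha256:<hex-digest>` object id, and the blob size in bytes. A minimal sketch of parsing one such pointer (standard library only; the pointer text is copied from the `dev` split above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key -> value dict.

    Each non-empty line has the form "key value"; the oid value itself
    is a "hash-algo:hex-digest" pair.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer contents of legal_professional/dev-00000-of-00001.parquet,
# copied verbatim from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e3d2fe07b6e85cec739ea9ff2ed9adcefd6b23ea9ae3ab6af220a8245cdf2841
size 15227
"""

fields = parse_lfs_pointer(pointer)
algo, digest = fields["oid"].split(":", 1)
print(algo, int(fields["size"]))  # sha256 15227

# The three blob sizes in this commit sum to the download_size (125081)
# declared for the legal_professional config in README.md.
blob_sizes = {"dev": 15227, "test": 93336, "val": 16518}
print(sum(blob_sizes.values()))  # 125081
```

This also explains why `download_size` (compressed Parquet blobs) differs from `dataset_size` (in-memory bytes of the loaded splits).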