The following instruction datasets were used for the instruction tuning.

- Japanese
  - `lmsys-chat-1m-synth-ja-wo-pii-and-template-instructions`
    - Single-turn Japanese synthetic instruction dataset derived from the [lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset [\[Zhang+, ICLR24\]](https://openreview.net/forum?id=BOfDKxfwt0). The first-turn user instructions were translated into Japanese via DeepL machine translation, and the assistant responses were generated using the [Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) model. Rejection sampling (n=6) was applied, with [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) serving as a judge (a sketch of this best-of-n selection appears after this list). As implied by the dataset name, conversations that contain personally identifiable information (PII) or template-based user instructions have been removed. Duplicate instructions have also been removed.
  - `filtered-magpie-ultra-ja`
    - A Japanese variant of the `filtered-magpie-ultra-en` dataset, machine-translated into Japanese using [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it).
  - `gemma-magpie`
    - A Japanese synthetic Q&A dataset generated from scratch using [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it). User instructions were created with topic-specific prompts, and assistant responses were then generated for these instructions. The conversations were heuristically filtered for quality and length (see the filtering sketch after this list).
- English
  - `lmsys-chat-1m-synth-en-wo-pii-and-template-instructions`
    - Similar to `lmsys-chat-1m-synth-ja-wo-pii-and-template-instructions`, but this version uses the original English user instructions. The assistant responses were also generated in English. Rejection sampling was not applied in this version.
  - `filtered-magpie-ultra-en`
    - A subset of the [magpie-ultra](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1) dataset, developed following the MAGPIE recipe [\[Xu+, arXiv24\]](https://arxiv.org/abs/2406.08464) using [Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct). This subset includes only samples rated as 'average,' 'good,' or 'excellent' (see the rating-filter sketch after this list).
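
The rejection-sampling step described above amounts to best-of-n selection: generate several candidate responses per instruction and keep the one the judge model scores highest. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' actual pipeline; `generate_response` and `judge_score` are hypothetical placeholders standing in for calls to Llama-3.1-405B-Instruct (generator) and Llama-3.1-70B-Instruct (judge).

```python
# A minimal best-of-n sketch, NOT the authors' pipeline. `generate_response`
# and `judge_score` are hypothetical placeholders for calls to the generator
# (Llama-3.1-405B-Instruct) and the judge (Llama-3.1-70B-Instruct).
import random

def generate_response(instruction: str) -> str:
    # Placeholder: in the real pipeline this would query the generator model.
    return f"candidate answer to: {instruction}"

def judge_score(instruction: str, response: str) -> float:
    # Placeholder: in the real pipeline the judge model would rate the
    # response and a numeric score would be parsed from its output.
    return random.random()

def rejection_sample(instruction: str, n: int = 6) -> str:
    """Generate n candidates and keep the one the judge scores highest."""
    candidates = [generate_response(instruction) for _ in range(n)]
    return max(candidates, key=lambda c: judge_score(instruction, c))
```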
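The heuristic quality-and-length filtering used for `gemma-magpie` is not spelled out in this card. The sketch below only shows the general shape such a filter could take; every threshold and rule in it is an illustrative assumption, not the actual criteria.

```python
# Illustrative heuristics only; the gemma-magpie filtering rules and
# thresholds below are assumptions, not taken from the model card.
def keep_conversation(instruction: str, response: str) -> bool:
    # Drop pairs that are too short to be informative or suspiciously long.
    if not 10 <= len(instruction) <= 2_000:
        return False
    if not 20 <= len(response) <= 4_000:
        return False
    # Drop degenerate generations, e.g. a response that echoes the prompt.
    if response.strip() == instruction.strip():
        return False
    return True

pairs = [
    ("日本の伝統的な祭りについて教えてください。", "日本各地には季節ごとに多様な祭りがあります。"),
    ("?", "!"),  # fails the length checks and is filtered out
]
filtered = [p for p in pairs if keep_conversation(*p)]
```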
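Reproducing the rating-based subsetting behind `filtered-magpie-ultra-en` is straightforward with the Hugging Face `datasets` library. The sketch below assumes the quality annotations are stored in a column named `quality`; check the dataset card for the actual schema.

```python
from datasets import load_dataset  # pip install datasets

# Keep only samples annotated as average, good, or excellent.
# The column name `quality` is an assumption; verify it against the card.
ds = load_dataset("argilla/magpie-ultra-v0.1", split="train")
keep = {"average", "good", "excellent"}
filtered = ds.filter(lambda ex: ex["quality"] in keep)
print(f"kept {len(filtered)} of {len(ds)} samples")
```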
## Risks and Limitations