Quazim0t0 committed on
Commit 55f1803 · verified · 1 Parent(s): 1aaf513

Update README.md

Files changed (1): README.md (+2 -1)
README.md CHANGED
@@ -181,7 +181,7 @@ To facilitate broader testing and real-world inference, **GGUF Full and Quantize
 ### **Loading LoRA Adapters with `transformers` and `peft`**
 To load and apply the LoRA adapters on Phi-4, use the following approach:

-```Scores
+```python

 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Quazim0t0__Phi4.Turn.R1Distill_v1.5.1-Tensors-details)
@@ -195,3 +195,4 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |GPQA (0-shot) | 2.46|
 |MuSR (0-shot) | 7.04|
 |MMLU-PRO (5-shot) |45.75|
+
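The `+```python` fence fixed by this commit opens a code block whose body is not shown in the diff. For reference, a minimal sketch of loading LoRA adapters on Phi-4 with `transformers` and `peft` could look like the following; the base model id `microsoft/phi-4` and the adapter repository id are assumptions for illustration, not taken from this commit.

```python
# Minimal sketch (not the repository's own snippet): load a Phi-4 base model,
# then attach LoRA adapters with peft. The adapter repo id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "microsoft/phi-4"                    # assumed Phi-4 base weights
adapter_id = "Quazim0t0/Phi4.Turn.R1Distill_v1.5.1"  # placeholder: replace with the actual adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Apply the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Quick generation check with the adapted model.
prompt = "Briefly explain what a LoRA adapter is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```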