alexmarques committed (verified) · Commit 382ed99 · Parent(s): 6feca67

Update README.md

Files changed (1): README.md (+34 -0)
@@ -430,4 +430,38 @@ lm_eval \
   --tasks truthfulqa \
   --num_fewshot 0 \
   --batch_size auto
+ ```
+
+ #### OpenLLM v2
+ ```
+ lm_eval \
+   --model vllm \
+   --model_args pretrained="neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4096,tensor_parallel_size=1,enable_chunked_prefill=True \
+   --apply_chat_template \
+   --fewshot_as_multiturn \
+   --tasks leaderboard \
+   --batch_size auto
+ ```
+
+ #### HumanEval and HumanEval+
+ ##### Generation
+ ```
+ python3 codegen/generate.py \
+   --model neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w8a8 \
+   --bs 16 \
+   --temperature 0.2 \
+   --n_samples 50 \
+   --root "." \
+   --dataset humaneval
+ ```
+ ##### Sanitization
+ ```
+ python3 evalplus/sanitize.py \
+   humaneval/neuralmagic--Meta-Llama-3.1-70B-Instruct-quantized.w8a8_vllm_temp_0.2
+ ```
+ ##### Evaluation
+ ```
+ evalplus.evaluate \
+   --dataset humaneval \
+   --samples humaneval/neuralmagic--Meta-Llama-3.1-70B-Instruct-quantized.w8a8_vllm_temp_0.2-sanitized
  ```
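
The evaluation step reports pass@k scores over the 50 samples generated per task (`--n_samples 50`). For reference, pass@k is conventionally computed with the unbiased estimator from the HumanEval paper; a minimal sketch of that formula (the function name and the sample counts below are illustrative, not part of this repository):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations is correct,
    given that c of the n generations passed the unit tests."""
    if n - c < k:
        # Fewer than k failures exist, so any k-sample draw contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers: 50 generations per task, 35 of them passing.
# For k=1 the estimator reduces to the pass rate c/n.
print(pass_at_k(50, 35, 1))
```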