Uploaded model

  • Developed by: Kuniho
  • License: apache-2.0
  • Finetuned from model: llm-jp/llm-jp-3-13b

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
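For reference, a typical Unsloth + TRL supervised fine-tuning setup looks roughly like the sketch below. This is a generic illustration, not the exact recipe used for this model: the training data, LoRA rank, and hyperparameters are placeholders.

>>> # Hypothetical SFT sketch (placeholder data and hyperparameters, not the actual training run)
>>> from unsloth import FastLanguageModel
>>> from trl import SFTTrainer
>>> from transformers import TrainingArguments
>>> from datasets import load_dataset
>>> max_seq_length = 2048
>>> model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=max_seq_length,
    load_in_4bit=True,  # QLoRA-style 4-bit base weights
)
>>> model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (placeholder)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
>>> dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder dataset
>>> trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",      # column holding the formatted prompt + response
    max_seq_length=max_seq_length,  # newer TRL versions take these two via SFTConfig instead
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
>>> trainer.train()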

How to generate the benchmark outputs

Install the libraries (shell commands; prefix each with ! when running in a notebook cell):

pip install -U bitsandbytes
pip install -U transformers
pip install -U accelerate
pip install -U datasets

Then, in Python:
>>> from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
>>> import torch
>>> from tqdm import tqdm
>>> import json
>>> HF_TOKEN = "xxxxx"  # enter your own Hugging Face access token
>>> model_name = "Kuniho/kh_llm-jp-3-13b-it"
>>> bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)
>>> # Load Model
>>> model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN
)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, token=HF_TOKEN)
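As an optional smoke test (not part of the original procedure), you can confirm that the quantized model loads and generates before running the full benchmark:

>>> test_input = tokenizer.encode("### 指示\nこんにちは\n### 回答:\n", add_special_tokens=False, return_tensors="pt").to(model.device)
>>> with torch.no_grad():
    test_output = model.generate(test_input, max_new_tokens=20, do_sample=False)[0]
>>> print(tokenizer.decode(test_output[test_input.size(1):], skip_special_tokens=True))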

Load the dataset

>>> datasets = []
>>> with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
      line = line.strip()
      item += line
      if item.endswith("}"):
        datasets.append(json.loads(item))
        item = ""

Inference

>>> results = []
>>> for data in tqdm(datasets):
  input = data["input"]

  prompt = f"""### 指示
  {input}
  ### 回答:
  """

  tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
  with torch.no_grad():
      outputs = model.generate(
          tokenized_input,
          max_new_tokens=100,
          do_sample=False,
          repetition_penalty=1.2
      )[0]
  output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)

  results.append({"task_id": data["task_id"], "input": input, "output": output})

Save the results as a JSONL file

>>> import re
>>> model_name = re.sub(".*/", "", model_name)
>>> with open(f"./{model_name}-outputs.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)  # ensure_ascii=False for handling non-ASCII characters
        f.write('\n')
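As an optional check (not part of the original procedure), you can read one record back to confirm the file was written correctly:

>>> with open(f"./{model_name}-outputs.jsonl", encoding="utf-8") as f:
    first = json.loads(f.readline())
>>> print(first["task_id"], first["output"][:50])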