# Model Card for TwinDoc/RedWhale-2-12B
RedWhale-2-12B was created by up-scaling Llama 3.1 8B to a 12B model with TLI and then continuing pretraining. The pretraining was carried out on a Korean corpus. TLI is a model up-scaling method that duplicates transformer layers.
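As an illustration only, the sketch below shows the general idea of layer-duplication up-scaling on a Llama-style checkpoint with Hugging Face Transformers. The donor model and the duplicated layer indices are assumptions for demonstration; the actual TLI recipe behind RedWhale-2-12B-TLI is not specified here.

```python
import copy
import torch
from transformers import AutoModelForCausalLM

# Donor model: Llama 3.1 8B has 32 decoder layers (hypothetical starting point).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", torch_dtype=torch.bfloat16
)

# Assumption: duplicate the upper half of the decoder stack (not the actual TLI recipe).
duplicate = set(range(16, 32))

new_layers = torch.nn.ModuleList()
for i, layer in enumerate(model.model.layers):
    new_layers.append(layer)
    if i in duplicate:
        new_layers.append(copy.deepcopy(layer))  # insert a copy right after the original

# Re-index the attention modules so KV-cache bookkeeping matches the new depth.
for idx, layer in enumerate(new_layers):
    layer.self_attn.layer_idx = idx

model.model.layers = new_layers
model.config.num_hidden_layers = len(new_layers)
print(len(new_layers))  # 48 layers, roughly a 12B-parameter model
```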
## Model Details
### Model Description
- Developed by: AgileSoda
- Model type: Llama
- Language(s) (NLP): Korean
- License: [More Information Needed]
- Finetuned from model [optional]: TwinDoc/RedWhale-2-12B-Instruct
- Foundation Model: RedWhale-2-12B-TLI
### Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
## Uses
The RedWhale-2-12B model is used in the same way as the meta-llama/Llama-3.1-8B model. Refer to the official documentation of the serving engine you want to use. The following is an example.
### Direct Use
Usage with Transformers: the example code below was written with transformers == 4.48.1.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# bfloat16 weights with automatic device placement (useful for multi-GPU loading)
loading_args = {"torch_dtype": torch.bfloat16, "device_map": "auto"}

model = AutoModelForCausalLM.from_pretrained("TwinDoc/RedWhale-2-12B", **loading_args)
tokenizer = AutoTokenizer.from_pretrained("TwinDoc/RedWhale-2-12B")

text = "대한민국의 수도는 "  # "The capital of South Korea is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
"<|begin_of_text|>λνλ―Όκ΅μ μλλ 1000λ§μ¬ λͺ
μ΄μμ΄ κ±°μ£Όνκ³ μλ μμΈλ‘ λνλλ λμ¬μ§μ΄λ€. λ³Έ μ°κ΅¬μμλ μμΈμ μ€μ¬μ λνλ΄λ 4λλ¬Έ μμ λμ¬μ§λ‘ μ μνκ³ , κ·Έ κ²½κ³λ₯Ό λΆμ
μ°, μΈμμ°, λ¨μ°, λμ°μΌλ‘ ꡬλΆνλ 4μ°μ μ°μ€κΈ°μ λλ‘λ‘ κ΅¬μ±λλ 8κ°μ λ³μ κ²½κ³λ‘ μ νλ€. κ΅ν 곡κ°μ κ΄μ μμ μ°λ¦¬λλΌμ"
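Serving engines that support Llama 3.1 should work the same way. As one hedged example with vLLM (the engine choice, sampling parameters, and `tensor_parallel_size` below are illustrative assumptions, not an officially documented configuration):

```python
from vllm import LLM, SamplingParams

# tensor_parallel_size is an assumption; set it to the number of GPUs you actually have.
llm = LLM(model="TwinDoc/RedWhale-2-12B", dtype="bfloat16", tensor_parallel_size=4)
params = SamplingParams(temperature=0.7, max_tokens=100)

outputs = llm.generate(["대한민국의 수도는 "], params)
print(outputs[0].outputs[0].text)
```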
### Out-of-Scope Use
Because this model has only undergone pretraining, it has no instruction-following ability. Rather than using it directly for a specific task, we recommend using it as a base model for fine-tuning.
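As a minimal sketch of that recommendation, the example below attaches LoRA adapters with peft before supervised fine-tuning. The hyperparameters and target modules are illustrative assumptions, not the settings used to train RedWhale-2-12B-Instruct.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "TwinDoc/RedWhale-2-12B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TwinDoc/RedWhale-2-12B")

# Illustrative LoRA settings; tune them for your own task and data.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

# Train `model` on your instruction data, e.g. with the transformers Trainer or trl's SFTTrainer.
```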
## Training Details
### Training Data
- Dataset information
- The maximum sequence length of the pretraining data is 8192 (a packing sketch follows this list).
- Download dataset
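The preprocessing pipeline is not published; the following is only a rough sketch of how a corpus could be packed into 8192-token blocks for pretraining. The document separator and the packing scheme are assumptions, and the corpus strings are toy examples.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TwinDoc/RedWhale-2-12B")
MAX_LENGTH = 8192  # matches the stated pretraining max length

def pack_corpus(texts):
    """Concatenate tokenized documents and split them into fixed-size blocks."""
    ids = []
    for text in texts:
        ids.extend(tokenizer(text, add_special_tokens=False)["input_ids"])
        ids.append(tokenizer.eos_token_id)  # document separator (assumption)
    return [ids[i:i + MAX_LENGTH] for i in range(0, len(ids), MAX_LENGTH)]

blocks = pack_corpus(["첫 번째 한국어 문서입니다.", "두 번째 문서입니다."])  # toy corpus
print(len(blocks), len(blocks[0]))
```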
### Training Procedure
## Compute Infrastructure
### Hardware
- NVIDIA L40 48GB × 4