
---
license: mit
language:
  - en
base_model: MatteoKhan/Mistral-LLaMA-Fusion
library_name: transformers
tags:
  - fine-tuned
  - cosmetic-domain
  - lora
  - mistral
  - llama
  - rtx4060-optimized
---

# 💄 CosmeticAdvisor: Expert Model for Beauty & Cosmetic Queries

## 📌 Overview

Mistral-LLaMA-Fusion-Cosmetic is a domain-specialized language model fine-tuned on a dataset of cosmetic-related queries. Built from the powerful Mistral-LLaMA-Fusion, this version benefits from LoRA-based fine-tuning and GPU optimization on an RTX 4060.

🔗 **Created by:** Matteo Khan
🎓 **Affiliation:** Apprentice at TW3 Partners (Generative AI Research)
📝 **License:** MIT

🔗 [Connect on LinkedIn](https://www.linkedin.com/in/matteo-khan-a10309263/)

## 🔍 Base Model

[MatteoKhan/Mistral-LLaMA-Fusion](https://huggingface.co/MatteoKhan/Mistral-LLaMA-Fusion)

## 🧠 Model Details

- **Architecture:** Mistral + LLaMA fusion
- **Technique:** Fine-tuned with LoRA (Low-Rank Adaptation)
- **Base Model:** MatteoKhan/Mistral-LLaMA-Fusion
- **Training Dataset:** Proprietary dataset (Parquet) of user queries in the cosmetic and beauty domain
- **Training Hardware:** RTX 4060 (8 GB VRAM), 3 epochs

## 🎯 Intended Use

This model is optimized for:

- ✅ Responding to beauty & cosmetic product questions
- ✅ Assisting with cosmetic product recommendations
- ✅ Enhancing chatbots in beauty domains
- ✅ Cosmetic-focused creative content generation

πŸ› οΈ Technical Details Fine-tuning Method: LoRA (r=8, Ξ±=16, dropout=0.05)

Quantization: 4-bit NF4 via bitsandbytes

Training Strategy: Gradient checkpointing + mixed precision (fp16)

Sequence Length: 256 tokens

Batch Strategy: Batch size 1 + gradient accumulation 16
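The training script itself is not published; the following is a minimal sketch of how the settings above map onto the `transformers` and `bitsandbytes` APIs. The output directory and variable names are illustrative, not taken from the original setup.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# 4-bit NF4 quantization via bitsandbytes, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "MatteoKhan/Mistral-LLaMA-Fusion",
    quantization_config=bnb_config,
    device_map="auto",
)
model.gradient_checkpointing_enable()  # trade compute for memory on 8 GB VRAM

# Batch size 1 with 16 accumulation steps gives an effective batch size of 16
training_args = TrainingArguments(
    output_dir="cosmetic-advisor-lora",  # illustrative path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    fp16=True,  # mixed precision, as noted above
)
```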

## 🧪 Training Configuration (LoRA)

```python
from peft import LoraConfig, TaskType

peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    bias="none",
)
```
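This configuration would typically be attached to the quantized base model via `peft`; a short sketch, reusing the `model` variable from the previous block:

```python
from peft import get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # standard prep for 4-bit training
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the LoRA adapters remain trainable
```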

## 🚀 How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MatteoKhan/CosmeticAdvisor"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What skincare products are best for oily skin?" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ⚠️ Limitations May hallucinate or provide incorrect information

## ⚠️ Limitations

- May hallucinate or provide incorrect information
- Knowledge is limited to cosmetic domain-specific data

- Should not replace professional dermatological advice

## 🧾 Citation

If you use this model in your research, please cite:

```bibtex
@misc{mistralllama2025cosmetic,
  title={Mistral-LLaMA-Fusion-Cosmetic},
  author={Matteo Khan},
  year={2025},
  note={Fine-tuned for cosmetic domain},
  url={https://huggingface.co/MatteoKhan/CosmeticAdvisor}
}
```
