---
license: mit
language:
- en
base_model:
- MatteoKhan/Mistral-LLaMA-Fusion
library_name: transformers
tags:
- fine-tuned
- cosmetic-domain
- lora
- mistral
- llama
- rtx4060-optimized
---

# 💄 CosmeticAdvisor: Expert Model for Beauty & Cosmetic Queries

## 📌 Overview
**CosmeticAdvisor** (Mistral-LLaMA-Fusion-Cosmetic) is a domain-specialized language model fine-tuned on a dataset of cosmetic-related queries. Built on Mistral-LLaMA-Fusion, it was adapted with LoRA-based fine-tuning and trained on a single RTX 4060 GPU.

🔗 **Created by:** Matteo Khan
🎓 **Affiliation:** Apprentice at TW3 Partners (Generative AI Research)
📝 **License:** MIT
🔗 [Connect on LinkedIn](https://www.linkedin.com/in/matteo-khan-a10309263/)

## 🔍 Base Model
[MatteoKhan/Mistral-LLaMA-Fusion](https://huggingface.co/MatteoKhan/Mistral-LLaMA-Fusion)

## 🧠 Model Details
- **Architecture:** Mistral + LLaMA fusion
- **Technique:** Fine-tuned with LoRA (Low-Rank Adaptation)
- **Base Model:** MatteoKhan/Mistral-LLaMA-Fusion
- **Training Dataset:** Proprietary dataset (Parquet) of user queries in the cosmetic and beauty domain
- **Training Hardware:** RTX 4060 (8 GB VRAM), 3 epochs

## 🎯 Intended Use
This model is optimized for:
- ✅ Answering questions about beauty and cosmetic products
- ✅ Assisting with cosmetic product recommendations
- ✅ Powering chatbots in the beauty domain
- ✅ Generating cosmetic-focused creative content

## 🛠️ Technical Details
- **Fine-tuning Method:** LoRA (r=8, α=16, dropout=0.05)
- **Quantization:** 4-bit NF4 via bitsandbytes
- **Training Strategy:** Gradient checkpointing + mixed precision (fp16)
- **Sequence Length:** 256 tokens
- **Batch Strategy:** Batch size 1 with gradient accumulation of 16 (effective batch size 16)

A sketch of how these settings fit together is given in the appendix at the end of this card.

## 🧪 Training Configuration (LoRA)
```python
from peft import LoraConfig, TaskType

peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
)
```

## 🚀 How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MatteoKhan/CosmeticAdvisor"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What skincare products are best for oily skin?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For 8 GB GPUs, see the 4-bit loading sketch in the appendix.

## ⚠️ Limitations
- May hallucinate or provide incorrect information
- Knowledge is limited to cosmetic domain-specific data
- Should not replace professional dermatological advice

## 🧾 Citation
If you use this model in your research, please cite:
```bibtex
@misc{mistralllama2025cosmetic,
  title={Mistral-LLaMA-Fusion-Cosmetic},
  author={Matteo Khan},
  year={2025},
  note={Fine-tuned for cosmetic domain},
  url={https://huggingface.co/MatteoKhan/CosmeticAdvisor}
}
```
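
## 🧰 Appendix: Putting the Training Settings Together
This card reports 4-bit NF4 quantization, gradient checkpointing, fp16 mixed precision, a 256-token sequence length, and batch size 1 with 16 gradient-accumulation steps. The sketch below shows one way these settings combine in a QLoRA-style setup; it is not the original training script, and the output path and learning rate are illustrative assumptions.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import (
    LoraConfig,
    TaskType,
    get_peft_model,
    prepare_model_for_kbit_training,
)

base = "MatteoKhan/Mistral-LLaMA-Fusion"

# 4-bit NF4 quantization via bitsandbytes, as reported under "Technical Details".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=bnb_config,
    device_map="auto",
)

# Gradient checkpointing is the reported training strategy; this helper also
# prepares the quantized model for k-bit LoRA training.
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)

# Same LoRA configuration as in "Training Configuration (LoRA)" above.
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# Reported batch strategy: batch size 1 with 16 gradient-accumulation steps,
# fp16 mixed precision, 3 epochs.
training_args = TrainingArguments(
    output_dir="cosmetic-advisor-lora",  # hypothetical path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    fp16=True,
    learning_rate=2e-4,                  # assumption, not stated in the card
)
# A Trainer would then be constructed with the tokenized Parquet dataset
# (sequences truncated to 256 tokens); the dataset itself is proprietary.
```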
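
## 💡 Appendix: 4-bit Inference on Low-VRAM GPUs
The snippet in "How to Use" loads full-precision weights, which may not fit on 8 GB GPUs such as the RTX 4060 this model was trained on. Below is a minimal sketch, assuming `bitsandbytes` is installed, that loads the model in 4-bit NF4 for inference; generation settings are left at their defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "MatteoKhan/CosmeticAdvisor"

# Load in 4-bit NF4 to keep memory use well under 8 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "What skincare products are best for oily skin?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```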