---
language: en
license: apache-2.0
tags:
- social-media
- content-analysis
- deepseek
- llama
- unsloth
datasets:
- custom
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
widget:
- text: >-
    Let me show you how to track your expenses with this simple spreadsheet
    template. First, create columns for date, category, and amount. Then, use
    the SUM function to automatically calculate your total spending...
---

# Social Media Content Analyzer

This model is fine-tuned from DeepSeek-R1-Distill-Llama-8B to analyze social media content and generate:

- Detailed content critiques covering:
  - Hook effectiveness
  - Reliability factor
  - Relatability
  - Shareability
- Attention-grabbing titles optimized for TikTok, Instagram Reels, or YouTube Shorts
## Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "umarfarzan/social-media-content-analyzer"
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

def generate_content_analysis(transcript, confidence_score):
    # Build the prompt in the same format used during fine-tuning
    prompt = f"""Below is a transcript from a social media video along with its confidence score.
Your task is to analyze the content and provide a detailed content critique analyzing the hook, reliability factor, relatability, and shareability.
### Transcript:
{transcript}
### Confidence Score:
{confidence_score}
### Content Critique:"""
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        max_new_tokens=1000,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9,
    )
    # Decode the full sequence and keep only the generated critique
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response.split("### Content Critique:")[1].strip()

# Example usage
transcript = "Let me show you how to track your expenses with this simple spreadsheet template..."
score = 88
critique = generate_content_analysis(transcript, score)
print(critique)
```
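
The card also advertises title generation for TikTok, Instagram Reels, and YouTube Shorts, but the prompt template used for that task during fine-tuning is not documented here. The sketch below simply reuses the structure of the critique prompt with a hypothetical `### Titles:` section header, so treat the exact wording as an assumption rather than the canonical format.

```python
def generate_titles(transcript, confidence_score):
    # NOTE: this prompt layout mirrors the critique prompt above; the actual
    # title-generation template used in training is an assumption here.
    prompt = f"""Below is a transcript from a social media video along with its confidence score.
Your task is to generate attention-grabbing titles optimized for TikTok, Instagram Reels, or YouTube Shorts.
### Transcript:
{transcript}
### Confidence Score:
{confidence_score}
### Titles:"""
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response.split("### Titles:")[1].strip()

print(generate_titles(transcript, score))
```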
## Training

This model was fine-tuned using Unsloth on a dataset of social media content with expert annotations.
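
The exact training recipe is not published with this card. Purely as an illustration, a typical Unsloth LoRA fine-tune of DeepSeek-R1-Distill-Llama-8B looks like the sketch below; the dataset file, sequence length, LoRA rank, and trainer hyperparameters are placeholders rather than the values used for this model, and depending on your `trl` version some arguments may belong in `SFTConfig` instead of `SFTTrainer`.

```python
# Minimal Unsloth LoRA fine-tuning sketch (hyperparameters are illustrative,
# not the ones used to train this model).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: expects a "text" column with prompts formatted as above
dataset = load_dataset("json", data_files="annotated_social_media.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```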