Paper: SmolVLM: Redefining small and efficient multimodal models (arXiv:2504.05299)
Article: Welcome Gemma 3: Google's all new multimodal, multilingual, long context open LLM (Mar 12)
Article: Introducing Three New Serverless Inference Providers: Hyperbolic, Nebius AI Studio, and Novita 🔥 (Feb 18)
Article: From Chunks to Blocks: Accelerating Uploads and Downloads on the Hub (Feb 12)
Collection: AMD-OLMo, a series of 1-billion-parameter language models trained by AMD on AMD Instinct™ MI250 GPUs, based on OLMo (4 items, updated Oct 31, 2024)