
Mert Erbak PRO

merterbak

AI & ML interests

NLP and Image Processing

Organizations

Open-Source AI Meetup · MLX Community · Social Post Explorers · Hugging Face Discord Community · open/ acc · AI Starter Pack

merterbak's activity

reacted to meg's post with 🔥 about 16 hours ago
reacted to their post with 🚀🔥 about 16 hours ago
posted an update about 16 hours ago
reacted to clem's post with 🔥 about 16 hours ago
Energy is a massive constraint for AI, but do you even know how much energy your ChatGPT conversations use?

We're trying to change this by releasing ChatUI-energy, the first interface that shows you in real time how much energy your AI conversations consume. Great work from @jdelavande, powered by Spaces & TGI, and available for a dozen open-source models like Llama, Mistral, Qwen, Gemma, and more.

jdelavande/chat-ui-energy

Should all chat interfaces have this? Just like ingredients have to be shown on products you buy, we need more transparency in AI for users!
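
For a rough sense of scale, here's a back-of-the-envelope sketch of what a single reply can cost in energy. This is not how ChatUI-energy measures it; the power draw and throughput figures are illustrative assumptions:

```python
# Back-of-the-envelope estimate of the energy one chat reply consumes.
# GPU_POWER_WATTS and TOKENS_PER_SECOND are illustrative assumptions,
# not measurements from ChatUI-energy or TGI.

GPU_POWER_WATTS = 400.0    # assumed average draw of one inference GPU
TOKENS_PER_SECOND = 50.0   # assumed generation throughput for the model

def reply_energy_wh(output_tokens: int) -> float:
    """Energy in watt-hours to generate `output_tokens` tokens."""
    seconds = output_tokens / TOKENS_PER_SECOND
    return GPU_POWER_WATTS * seconds / 3600.0

print(f"~{reply_energy_wh(500):.2f} Wh for a 500-token reply")
# ~1.11 Wh, roughly a 10 W LED bulb running for seven minutes
```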
  • 2 replies
reacted to their post with 🚀 4 days ago
reacted to their post with 👀🔥 5 days ago
posted an update 5 days ago
Here’s a cool paper I found: “Massive Image Embedding Benchmark (MIEB).” It’s a new benchmark for testing how good image embedding models are: 130 tasks grouped into 8 categories, such as image search, classification, clustering similar images, answering questions based on images, and understanding documents, and it covers 38 languages.

The authors tested 50 models and found that no single model was best at everything. Some models were great at recognizing text inside images but struggled with more complicated tasks like matching images and text that appear together.

Paper: https://arxiv.org/pdf/2504.10471v1
Code: https://github.com/embeddings-benchmark/mteb
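
Since the benchmark ships through the mteb package linked above, here's a minimal sketch of running one image task. The model and task names are assumptions for illustration; query `mteb.get_tasks()` for what MIEB actually includes:

```python
# Minimal sketch of evaluating an image embedding model with mteb.
# The model and task names below are assumptions for illustration;
# check the mteb docs for the models and tasks MIEB actually supports.
import mteb

# Image tasks need a model that can embed images, e.g. a CLIP-style model.
model = mteb.get_model("openai/clip-vit-base-patch32")  # assumed supported

tasks = mteb.get_tasks(tasks=["CIFAR10Clustering"])     # assumed task name
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")

for result in results:
    print(result.task_name, result.scores)
```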
  • 2 replies
reacted to their post with 🚀 9 days ago
reacted to thomwolf's post with 🚀 9 days ago
If you've followed the progress of robotics in the past 18 months, you've likely noticed how robotics is increasingly becoming the next frontier that AI will unlock.

At Hugging Face—in robotics and across all AI fields—we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!

You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen community and team here on the Hub at pollen-robotics.

We're so excited to build and share more open-source robots with the world in the coming months!
  • 1 reply
reacted to their post with 🔥 9 days ago
posted an update 9 days ago
OpenAI published 2 benchmark datasets on Hugging Face 🔥
openai/mrcr
openai/graphwalks
MRCR tests how well a model can find the right answer when many similar questions are spread out in a long context. Graphwalks checks whether a model can follow steps through a big graph and find the correct nodes by reasoning over its structure.
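
Both are plain datasets on the Hub, so you can peek at them with the datasets library. A quick sketch; the split name and schema here are assumptions, so check each dataset card:

```python
# Quick look at the two OpenAI benchmark datasets on the Hub.
# The "train" split and the printed fields are assumptions; inspect
# each dataset card for the real schema.
from datasets import load_dataset

mrcr = load_dataset("openai/mrcr", split="train")
graphwalks = load_dataset("openai/graphwalks", split="train")

print(mrcr.column_names)        # discover the schema instead of guessing
print(graphwalks.column_names)
print(mrcr[0])                  # one long-context retrieval example
```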
reacted to their post with 🚀 12 days ago
reacted to their post with 🔥 13 days ago
posted an update 13 days ago
OpenAI has released BrowseComp, an open-source benchmark designed to evaluate the web-browsing capabilities of AI agents. The dataset comprises 1,266 questions that challenge AI models to navigate the web and uncover complex, obscure information. Crafted by human trainers, the questions are intentionally difficult: unsolvable by another person in under ten minutes, and beyond the reach of existing models like ChatGPT (with and without browsing) and an early version of OpenAI's Deep Research tool.

Blog Post: https://openai.com/index/browsecomp/
Paper: https://cdn.openai.com/pdf/5e10f4ab-d6f7-442e-9508-59515c65e35d/browsecomp.pdf
Code in the simple-evals repo: https://github.com/openai/simple-evals
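
The actual grading lives in the simple-evals repo; as a rough picture of what a BrowseComp-style run looks like, here's a hypothetical sketch. `browse_agent` is a placeholder, and the real grading is model-based rather than exact-match:

```python
# Hypothetical sketch of a BrowseComp-style eval loop.
# `browse_agent` is a placeholder for the browsing agent under test;
# simple-evals grades answers with a model grader, not exact match.
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    answer: str

def browse_agent(question: str) -> str:
    """Placeholder: plug in the web-browsing agent you want to evaluate."""
    raise NotImplementedError

def grade(predicted: str, reference: str) -> bool:
    # Crude normalization; the real benchmark grades semantically.
    return predicted.strip().lower() == reference.strip().lower()

def run_eval(examples: list[Example]) -> float:
    """Fraction of questions the agent answers correctly."""
    correct = sum(grade(browse_agent(ex.question), ex.answer) for ex in examples)
    return correct / len(examples)
```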
replied to their post 13 days ago

Thanks for sharing. He denied that it would come in hours, but I hope it’s soon. :)

reacted to their post with ❤️ 13 days ago
reacted to their post with 👀🔥 14 days ago