AI & ML interests

The AI community building the future.

Recent Activity

angt posted an update 2 days ago
I'm excited to share that https://installama.sh is up and running! 🚀

On Linux / macOS / FreeBSD, it's easier than ever:
curl https://installama.sh | sh


And Windows just joined the party 🥳
irm https://installama.sh | iex

Stay tuned for new backends on Windows!
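
If you'd rather not pipe a script straight from the network into your shell, the same install works in two steps so you can read it first. A minimal sketch for the Unix side:

# Fetch the installer, review it, then run it explicitly
curl -fsSL https://installama.sh -o installama.sh
less installama.sh
sh installama.sh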
angt posted an update 7 days ago
🚀 installama.sh update: Vulkan & FreeBSD support added!

The fastest way to install and run llama.cpp has just been updated!

We are expanding hardware and OS support to make local AI even more accessible. This includes:

🌋 Vulkan support for Linux on x86_64 and aarch64.
😈 FreeBSD support (CPU backend) on x86_64 and aarch64 too.
✨ Lots of small optimizations and improvements under the hood.

Give it a try right now:
curl angt.github.io/installama.sh | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh
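
To check that the server came up, llama.cpp's server exposes a /health endpoint. A minimal sketch, assuming it listens on the default 127.0.0.1:8080 (adjust if installama.sh binds elsewhere in your setup):

# Returns an OK status once the model is loaded and ready
curl http://127.0.0.1:8080/health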
angt posted an update 16 days ago
One command line is all you need...

...to launch a local llama.cpp server on any Linux box or any Metal-powered Mac 🚀

curl angt.github.io/installama.sh | MODEL=unsloth/gpt-oss-20b-GGUF sh


Learn more: https://github.com/angt/installama.sh
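
Once the server is running, it speaks the OpenAI-compatible chat API, so plain curl (or any OpenAI client) can talk to it. A minimal sketch, again assuming the default 127.0.0.1:8080 address:

# Send one chat message to the locally served model
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}]}'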
lunarflu posted an update 28 days ago
💸🤑 You don’t need 100 GPUs to train something amazing!

Our Smol Training Playbook teaches you a better path to world-class LLMs, for free!

Check out the #1 trending space on 🤗 :
HuggingFaceTB/smol-training-playbook
AdinaY posted an update 30 days ago
Kimi K2 Thinking is now live on the hub 🔥

moonshotai/Kimi-K2-Thinking

✨ 1T MoE for deep reasoning & tool use
✨ Native INT4 quantization = 2× faster inference
✨ 256K context window
✨ Modified MIT license
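
For a quick first call without local hardware, one option is Hugging Face's OpenAI-compatible router. A minimal sketch, assuming at least one inference provider hosts the model and HF_TOKEN holds a valid access token:

# Route a chat completion to Kimi-K2-Thinking via an inference provider
curl https://router.huggingface.co/v1/chat/completions \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"moonshotai/Kimi-K2-Thinking","messages":[{"role":"user","content":"Hello!"}]}'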
AdinaY posted an update about 1 month ago
Chinese open-source AI in October wasn’t about bigger models; it was about real-world impact 🔥

https://huggingface.co/collections/zh-ai-community/october-2025-china-open-source-highlights

✨ Vision-Language & OCR wave 🌊
- DeepSeek-OCR: 3B
- PaddleOCR-VL: 0.9B
- Qwen3-VL: 2B / 4B / 8B / 32B / 30B-A3B
- Open-Bee: Bee-8B-RL
- Z.ai Glyph: 10B

OCR is industrializing; the real game now is understanding the (long-context) document, not just reading it.

✨ Text generation: scale or innovation?
- MiniMax-M2: 229B
- Antgroup Ling-1T & Ring-1T
- Moonshot Kimi-Linear: linear-attention challenger
- Kwaipilot KAT-Dev

Efficiency is the key.

✨ Any-to-Any & World-Model: one step closer to the real world
- BAAI Emu 3.5
- Antgroup Ming-flash-omni
- HunyuanWorld-Mirror: 3D

Aligning with the global “world model” trend.

✨ Audio & Speech + Video & Visual: releases from entertainment labs to delivery platforms
- SoulX-Podcast TTS
- LongCat-Audio-Codec & LongCat-Video by delivery platform Meituan
- xiabs DreamOmni 2

Looking forward to what's next 🚀
abidlabs posted an update about 1 month ago
Why I think local, open-source models will eventually win.

The most useful AI applications are moving toward multi-turn agentic behavior: systems that take hundreds or even thousands of iterative steps to complete a task, e.g. Claude Code, computer-control agents that click, type, and test repeatedly.

In these cases, the power of the model lies not in how smart it is per token, but in how quickly it can interact with its environment and tools across many steps. In that regime, model quality becomes secondary to latency.

An open-source model that can call tools quickly, check that the right thing was clicked, or verify that a code change actually passes tests can easily outperform a slightly “smarter” closed model that has to make remote API calls for every move.
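
To put rough, purely hypothetical numbers on it: an agent that takes 2,000 tool calls at ~300 ms of network round-trip each spends about ten minutes waiting on the wire alone, while the same loop at ~20 ms of local latency waits well under a minute.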

Eventually, the balance tips: it becomes impractical for an agent to rely on remote inference for every micro-action. Just as no one would tolerate a keyboard that required a network request per keystroke, users won’t accept agent workflows bottlenecked by latency. All devices will ship with local, open-source models that are “good enough” and the expectation will shift toward everything running locally. It’ll happen sooner than most people think.