
Running local models on Macs gets faster with Ollama's MLX support

Technology · April 1, 2026
Apple Silicon Macs get a performance boost thanks to better unified memory usage.

Ollama, a runtime for running large language models on a local machine, has introduced support for Apple's open source MLX framework for machine learning. Ollama also says it has improved caching performance and now supports Nvidia's NVFP4 format for model compression, making memory usage in certain models much more efficient.
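For readers curious what "running a local model" looks like in practice, the snippet below is a minimal sketch of querying a locally served model through Ollama's Python client. It assumes the Ollama server is already running on the machine and that a model has been pulled; the model name and prompt are illustrative and not drawn from the article.

```python
# Minimal sketch: chatting with a locally served model via the Ollama Python client.
# Assumes the Ollama server is running locally and the `ollama` package is installed
# (pip install ollama). The model name is illustrative; use any model you have pulled
# with `ollama pull <name>`.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[
        {"role": "user", "content": "Explain unified memory on Apple Silicon in one sentence."}
    ],
)

# The reply text lives under the message content of the response.
print(response["message"]["content"])
```

Because Ollama handles the model download, quantization format, and hardware backend (including the new MLX path on Apple Silicon) behind the same interface, application code like this does not need to change when the runtime gets faster.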

Combined, these developments promise significantly improved performance on Macs with Apple Silicon chips (M1 or later)—and the timing couldn't be better, as local models are starting to gain steam in ways they haven't before outside researcher and hobbyist communities.

The recent runaway success of OpenClaw—which raced its way to over 300,000 stars on GitHub, made headlines with experiments like Moltbook and became an obsession in China in particular—has many people experimenting with running models on their machines.

Source: Ars Technica
