Curated lists of AI models by device capability, RAM tier, and use case
Optimize local AI performance on Apple Silicon. Covers Metal GPU acceleration, unified memory advantages, and the best models for each Mac chip generation.

Run vision-capable AI models like LLaVA on your own hardware. Analyze images, describe photos, and extract text, all locally, without sending data to the cloud.

Different AI tasks need different models. Find the best model for coding, conversational chat, and document-based RAG based on your hardware and needs.

A complete guide to running AI models on Apple Silicon Macs. Which models work on M1, M2, and M3 chips, how much RAM you need, and real performance benchmarks.

With 16GB RAM, you can run powerful models like Qwen 2.5 14B and Mistral Small. The complete list of models, performance expectations, and setup commands.

32GB RAM unlocks professional-grade models like Qwen 2.5 32B and Mixtral 8x7B. Here is exactly what to run and how to get the best performance from each.
