How to Install LM Studio — The Easiest Way to Run Local AI
Download, install, and start chatting with AI models in under 5 minutes using LM Studio. No terminal needed — everything runs through a beautiful desktop app.
LM Studio is the easiest way to run AI models on your computer. Download the app, pick a model, and start chatting. No terminal, no configuration, no technical knowledge required.
Why LM Studio?
- Graphical interface — point and click, no command line
- Built-in model search — find and download models directly in the app
- Chat interface — ChatGPT-like experience, but local and private
- Free for personal use
- Works on macOS, Windows, and Linux
Minimum Requirements
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 8 GB | 16 GB |
| Storage | 10 GB free | 50 GB free |
| CPU | Any modern CPU | Apple M-series or NVIDIA GPU |
| Internet | Needed only to download models | Not needed for chatting |
Step 1: Download LM Studio
- Go to lmstudio.ai
- Click Download for your operating system:
  - macOS: Download the `.dmg` file
  - Windows: Download the `.exe` installer
  - Linux: Download the `.AppImage` file
Step 2: Install
macOS
- Open the downloaded `.dmg` file
- Drag LM Studio to your Applications folder
- Launch LM Studio from Applications
Windows
- Run the downloaded `.exe` installer
- Follow the installation wizard
- Launch LM Studio from the Start menu
Linux
```bash
chmod +x LM-Studio-*.AppImage
./LM-Studio-*.AppImage
```
Step 3: Download Your First Model
- Open LM Studio
- Click the Search icon (magnifying glass) in the sidebar
- Search for a model. Good starters:
  - Llama 3.1 8B — great all-rounder
  - Qwen 2.5 7B — excellent for coding
  - Mistral 7B — fast and conversational
- Click Download on the model card
- Wait for the download to complete (1-5 GB depending on the model)
Step 4: Start Chatting
- Click the Chat icon in the sidebar
- Select your downloaded model from the dropdown at the top
- The model loads into memory (takes 5-15 seconds the first time)
- Start typing your message and press Enter
- The AI responds locally on your hardware — no internet needed
Step 5: Explore the Interface
LM Studio has several useful features:
Chat Tab:
- Conversation history
- System prompt configuration
- Temperature and sampling controls
- Multiple conversation threads
Search Tab:
- Browse available models
- Filter by size, quantization, and popularity
- See RAM requirements before downloading
Local Server Tab:
- Run an OpenAI-compatible API server
- Use LM Studio as a backend for other applications
- Configure the port and CORS settings
Step 6: Use the API Server (Optional)
LM Studio can act as a drop-in replacement for the OpenAI API:
- Go to the Local Server tab (developer icon in sidebar)
- Click Start Server
- The server runs at `http://localhost:1234/v1`
You can now point any OpenAI-compatible application to this URL.
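As a minimal sketch, here's how you might call the local server from Python using only the standard library. The model name below is a placeholder; LM Studio serves whichever model you have loaded:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

def build_chat_request(prompt, model="llama-3.1-8b"):
    """Build an OpenAI-style chat-completions request for the local server.

    The model name is a placeholder; LM Studio answers with whatever
    model is currently loaded.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt):
    """Send the prompt and return the assistant's reply.

    Requires the server to be running (click Start Server first).
    """
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

# Once the server is started:
# print(chat("Say hello in one sentence."))
```

Because the request format follows the OpenAI chat-completions shape, any client library that lets you override the base URL should work the same way.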
Tips for Best Performance
- Close other apps — free up RAM for the model
- Use Q4_K_M quantization — best quality/size balance
- Match model size to your RAM — 8B models for 8GB RAM, 14B for 16GB
- Enable GPU acceleration — LM Studio auto-detects NVIDIA GPUs and Apple Metal
- Use the right model for the task — check our 8GB RAM model guide for recommendations
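The "match model size to your RAM" rule can be sketched as a back-of-envelope formula: a Q4_K_M model averages roughly 4.5 bits per weight on disk, plus some runtime overhead for the KV cache and buffers. The constants below are rough assumptions, not exact figures:

```python
def estimated_ram_gb(params_billions, bits_per_weight=4.5, overhead=1.2):
    """Rough RAM estimate for a quantized model.

    Q4_K_M averages about 4.5 bits per weight; the 1.2 factor is a rough
    allowance for the KV cache and runtime buffers, not an exact number.
    """
    return params_billions * bits_per_weight / 8 * overhead

# An 8B model at Q4_K_M needs roughly 5-6 GB, so it fits in 8 GB of RAM;
# a 14B model lands near 9-10 GB, which calls for 16 GB.
print(round(estimated_ram_gb(8), 1))   # ~5.4
print(round(estimated_ram_gb(14), 1))  # ~9.4
```

This matches the guidance above: leave a few gigabytes free for the OS and other apps rather than sizing the model to your total RAM.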
LM Studio vs Ollama
Not sure which tool to use? Our Ollama vs LM Studio comparison breaks it down in detail. Short version:
- Choose LM Studio if you prefer a graphical interface and ease of use
- Choose Ollama if you're comfortable with the terminal or want a lightweight, scriptable setup
Summary
LM Studio takes you from zero to chatting with a local AI model in minutes: download the app, search for a model, and start typing. It's the simplest way to experience local AI.
Next Steps
- Getting Started with Local AI — broader overview of local AI
- Ollama vs LM Studio — detailed comparison
- Best AI Models for 8GB RAM — model recommendations