Ollama
A quick guide to setting up Ollama for local AI model execution with Cline.
📋 Prerequisites
Windows, macOS, or Linux computer
Cline installed in VS Code
🚀 Setup Steps
1. Install Ollama
Visit ollama.com
Download and install for your operating system
2. Choose and Download a Model
Browse models at ollama.com/search
Select a model and copy its run command:
ollama run [model-name]
Open your terminal and run the command:
Example:
ollama run llama2
✨ Your model is now ready to use within Cline!
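If you prefer to script the download instead of using the CLI, here is a minimal sketch using Ollama's local REST API (`POST /api/pull`) with Python and the `requests` package. It assumes Ollama is already running on the default port and only covers the download step, not the interactive chat that `ollama run` starts.

```python
import requests

# Download the model through Ollama's local REST API (equivalent to the
# download portion of `ollama run llama2`). Assumes Ollama is running on
# the default port 11434.
response = requests.post(
    "http://localhost:11434/api/pull",
    json={"name": "llama2", "stream": False},
    timeout=600,  # first downloads can take several minutes
)
response.raise_for_status()
print(response.json().get("status"))  # prints "success" when the model is ready
```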
3. Configure Cline
Open VS Code
Click the Cline settings icon
Select "Ollama" as API provider
Enter configuration:
Base URL:
http://localhost:11434/
(default value, can be left as is)
Select the model from your available options
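To confirm the Base URL is reachable and see which model names will show up in Cline, you can query Ollama's `/api/tags` endpoint. A small sketch, assuming Python with `requests` installed:

```python
import requests

# Query the same Base URL Cline uses; /api/tags lists every model that has
# been downloaded locally, i.e. the names you can select in Cline.
base_url = "http://localhost:11434"
tags = requests.get(f"{base_url}/api/tags", timeout=5).json()
for model in tags.get("models", []):
    print(model["name"])  # e.g. "llama2:latest"
```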
⚠️ Important Notes
Start Ollama before using it with Cline
Keep Ollama running in the background
The first model download may take several minutes
🔧 Troubleshooting
If Cline can't connect to Ollama, work through this checklist (a diagnostic sketch follows below):
Verify Ollama is running
Check base URL is correct
Ensure model is downloaded
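The sketch below automates the checklist above. It assumes Python with `requests`; `BASE_URL` and `MODEL` are placeholders for your own Cline configuration.

```python
import requests

BASE_URL = "http://localhost:11434"  # adjust if you changed the Base URL in Cline
MODEL = "llama2"                     # the model you selected in Cline

# 1) Is Ollama running and reachable at the base URL?
try:
    requests.get(BASE_URL, timeout=5).raise_for_status()
except requests.RequestException as exc:
    raise SystemExit(f"Ollama is not reachable at {BASE_URL}: {exc}")

# 2) Has the model actually been downloaded?
names = [m["name"] for m in requests.get(f"{BASE_URL}/api/tags", timeout=5).json()["models"]]
if not any(name.split(":")[0] == MODEL for name in names):
    raise SystemExit(f"'{MODEL}' is not downloaded yet; run `ollama pull {MODEL}` first")

print("Ollama is running and the model is available.")
```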
Need more info? Read the Ollama Docs.