Ollama
A quick guide to setting up Ollama for local AI model execution with Cline.
You'll need a Windows, macOS, or Linux computer with Cline installed in VS Code.
1. Install Ollama
Visit ollama.com and download the installer for your operating system, then run it.
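Once the installer finishes, you can confirm the CLI is available from a terminal (a minimal check; your version number will differ):

```sh
ollama --version
```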
2. Choose and Download a Model
Browse the available models at ollama.com/library.
Select a model and copy its run command from the model page.
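The command shown on the model page follows this general pattern (a sketch; `<model-name>` stands in for the exact tag listed on the page):

```sh
ollama run <model-name>
```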
Open your terminal and run the command you copied. For example:
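Assuming you chose a Llama model (llama3.2 here is only an illustration; substitute whichever model you picked):

```sh
# Downloads the model if needed, then starts an interactive session
ollama run llama3.2
```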
✨ Your model is now ready to use within Cline!
3. Configure Cline
Open VS Code
Click the Cline settings icon
Select "Ollama" as the API provider
Enter the configuration:
Base URL: http://localhost:11434/ (the default value; it can be left as is)
Select the model from your available options (see the check below if none appear)
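If you want to confirm which models Ollama is serving at that base URL, you can query its API directly; the /api/tags endpoint returns the models you have downloaded locally:

```sh
# Lists locally available models as JSON (assumes the default base URL)
curl http://localhost:11434/api/tags
```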
Start Ollama before using it with Cline (see below for starting it manually)
Keep Ollama running in the background while you work
The first model download may take several minutes
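If the Ollama desktop app isn't already running the server for you, one way to start it manually is from a terminal (leave it running while you use Cline):

```sh
# Starts the Ollama server on the default port 11434
ollama serve
```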
If Cline can't connect to Ollama, run through these checks (the commands below can help verify them):
Verify Ollama is running
Check that the base URL is correct
Ensure the model is downloaded
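Two quick checks from a terminal, assuming the default base URL:

```sh
# Should respond with "Ollama is running" if the server is reachable
curl http://localhost:11434/

# Lists the models that have been downloaded locally
ollama list
```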
Need more info? See the Ollama documentation.