Ollama

A quick guide to setting up Ollama for local AI model execution with Cline.

📋 Prerequisites

  • Windows, macOS, or Linux computer

  • Cline installed in VS Code

🚀 Setup Steps

1. Install Ollama

  • Download and install Ollama for your operating system from ollama.com (Linux users can also use the install script shown below)
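
  • On Linux, the official install script is the quickest route; on macOS and Windows, download the installer instead. A minimal sketch, assuming curl is available:

    # Linux: install Ollama with the official install script
    curl -fsSL https://ollama.com/install.sh | sh

    # Confirm the installation
    ollama --version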

2. Choose and Download a Model

  • Browse models at ollama.com/search

  • Select model and copy command:

    ollama run [model-name]

  • Open your Terminal and run the command:

    • Example:

      ollama run llama2
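
  • If you prefer to download a model without starting an interactive chat, you can use ollama pull, and ollama list to see what is already installed (a sketch; substitute the model you chose):

    # Download the model without opening a chat session
    ollama pull llama2

    # List the models available on this machine
    ollama list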

✨ Your model is now ready to use within Cline!

3. Configure Cline

  1. Open VS Code

  2. Click Cline settings icon

  3. Select "Ollama" as API provider

  4. Enter configuration:

    • Base URL: http://localhost:11434/ (default value, can be left as is)

    • Select the model from your available options (the check below shows which models Ollama is serving)
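
  5. (Optional) Verify that Ollama is reachable at that Base URL. A quick check, assuming the default port 11434:

    # Lists the models Ollama has downloaded locally
    curl http://localhost:11434/api/tags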

⚠️ Important Notes

  • Start Ollama before using it with Cline (see the command below)

  • Keep Ollama running in background

  • First model download may take several minutes
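
  • If the Ollama app or service is not already running, you can start the server manually (on macOS and Windows the desktop app normally handles this; on Linux the installer usually sets up a background service):

    # Start the Ollama server in the foreground; leave this terminal open
    ollama serve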

🔧 Troubleshooting

If Cline can't connect to Ollama:

  1. Verify Ollama is running (see the checks below)

  2. Check base URL is correct

  3. Ensure model is downloaded
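
  A quick way to run these checks from a terminal, assuming the default Base URL:

    # Should print "Ollama is running" if the server is up at the default address
    curl http://localhost:11434/

    # Shows which models are currently loaded into memory
    ollama ps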

Need more info? Read the Ollama Docs.
