How to Connect 3sparks Chat to Ollama

Run AI models locally with Ollama's simple and efficient platform.

What is Ollama?

Ollama is an open-source tool that makes it easy to run large language models locally. It provides a simple way to download, run, and manage AI models on your computer.

Prerequisites

  • A computer with at least 8GB RAM (16GB recommended)
  • macOS, Windows, or Linux operating system

Step-by-Step Setup Guide

1. Install Ollama

Visit ollama.ai and follow the installation instructions for your operating system:

macOS:

Download and run the installer from the website

Windows:

Download and run the Windows installer

Linux:

curl -fsSL https://ollama.ai/install.sh | sh
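Whichever installer you used, you can confirm that the ollama command is available by checking its version in a terminal. If the installation succeeded, this should print a version number:

ollama --version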

2. Download a Model

Open your terminal or command prompt and run:

ollama pull mistral

This downloads the Mistral model, which offers a good balance of performance and resource usage.
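To confirm the download finished, you can list the models Ollama has stored locally; "mistral" should appear in the output:

ollama list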

3. Start Ollama

Ollama should start automatically after installation. If it doesn't:

  • On macOS: Launch Ollama from Applications
  • On Windows: Run Ollama from the Start menu
  • On Linux: Run ollama serve in your terminal
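To check that the server is actually listening, you can send a quick request to the default address (this assumes Ollama is on its standard port, 11434):

curl http://localhost:11434

If everything is running, Ollama should reply with a short "Ollama is running" message.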

4. Connect 3sparks Chat

  1. Open 3sparks Chat
  2. Click the settings icon in the top right
  3. Select "Ollama" as your model provider
  4. Enter the server URL (usually http://localhost:11434)
  5. Select your model (e.g., "mistral")
  6. Click "Save"
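The server URL you entered is Ollama's local HTTP API, so you can test the same endpoint yourself before saving. The request below is a quick check that assumes the default address and the mistral model pulled in step 2:

curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Hello", "stream": false}'

If this returns a JSON response with generated text, 3sparks Chat should be able to reach the same endpoint.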

Troubleshooting

Connection Failed
Ensure Ollama is running and the URL is correct. Try running ollama serve in your terminal.

Model Not Found
Make sure you've downloaded the model using ollama pull model-name.

Performance Issues
Try using a smaller model like "tinyllama" or ensure no other resource-intensive applications are running.
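If connections keep failing or a model can't be found, a single request can confirm both that the server is reachable and which models it has downloaded (assuming the default URL):

curl http://localhost:11434/api/tags

This returns a JSON list of the locally available models; any model you select in 3sparks Chat should appear there.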

Popular Models

  • mistral - Good all-around model
  • llama2 - Meta's open-source model
  • codellama - Specialized for coding tasks
  • tinyllama - Lightweight model for basic tasks
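Any of these can be downloaded with the same command used in step 2, for example:

ollama pull codellama

Once the pull finishes, you can select the new model in 3sparks Chat's settings as described in step 4.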