Run AI models locally with Ollama's simple and efficient platform.
What is Ollama?
Ollama is an open-source tool that makes it easy to run large language models locally. It provides a simple way to download, run, and manage AI models on your computer.
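In practice, most of this happens through a few terminal commands. The model name below is just an example:

ollama pull mistral    # download a model from the Ollama library
ollama run mistral     # chat with it interactively in the terminal
ollama list            # show the models installed on this machine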
Prerequisites
A computer with at least 8GB RAM (16GB recommended)
Several gigabytes of free disk space per model (the Mistral model used in this guide is roughly 4GB)
macOS, Windows, or Linux operating system
Step-by-Step Setup Guide
1. Install Ollama
Visit ollama.ai and follow the installation instructions for your operating system:
macOS:
Download and run the installer from the website
Windows:
Download and run the Windows installer
Linux:
curl -fsSL https://ollama.ai/install.sh | sh
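Whichever installer you used, you can verify the install by checking the version from a terminal:

ollama --version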
2. Download a Model
Open your terminal or command prompt and run:
ollama pull mistral
This downloads the Mistral model, which strikes a good balance between output quality and resource usage.
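Once the download finishes, you can sanity-check the model with a one-off prompt straight from the terminal (the prompt text is arbitrary; this requires the Ollama server to be running, which it normally is after installation, see step 3):

ollama run mistral "Say hello in five words"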
3. Start Ollama
Ollama should start automatically after installation. If it doesn't:
On macOS: Launch Ollama from Applications
On Windows: Run Ollama from the Start menu
On Linux: Run ollama serve in your terminal
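To confirm the server is up, request the root endpoint; a running server answers with a short status message ("Ollama is running"):

curl http://localhost:11434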
4. Connect 3sparks Chat
Open 3sparks Chat
Click the settings icon in the top right
Select "Ollama" as your model provider
Enter the server URL (the default is http://localhost:11434)
Select your model (e.g., "mistral")
Click "Save"
Troubleshooting
Connection Failed
Ensure Ollama is running and that the server URL in 3sparks Chat matches it. If the server isn't running, start it with ollama serve in your terminal.
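On macOS and Linux you can also check whether anything is listening on the default port; no output means the server isn't running:

lsof -i :11434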
Model Not Found
Make sure you've downloaded the model with ollama pull model-name, and that the model name selected in 3sparks Chat matches the installed name exactly.
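To see the exact names of the models installed on your machine, run:

ollama list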
Performance Issues
Try a smaller model such as "tinyllama", or close other resource-intensive applications while the model is generating.
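For example, to switch to a lighter model, pull it and then select it in the 3sparks Chat settings:

ollama pull tinyllama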