Introduction
Ollama is a fantastic tool for running large language models (LLMs) locally. However, in some environments, you might need to access the internet through a proxy server. This blog post will guide you on how to configure Ollama to work seamlessly behind a proxy, whether you're using it directly on your system or within a Docker container.
Why Use a Proxy?
There are several reasons why you might need to use a proxy:
- Network Restrictions: Your organization's network might require all internet traffic to go through a proxy.
- Security: Proxies can add an extra layer of security by masking your IP address.
- Content Filtering: Proxies can be used to filter out certain types of content.
Configuring Ollama to Use a Proxy
Ollama primarily needs a proxy for pulling models from the internet, which it does via HTTPS requests. The key environment variable to use is `HTTPS_PROXY`.

Important Note: Avoid setting `HTTP_PROXY`, as it's not used for model pulls and might interfere with client connections to the Ollama server.
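Before committing the setting to a service manager, you can sanity-check the proxy from a shell. A minimal sketch (stop any already-running Ollama instance first; the proxy URL and model name are placeholders):

```
# Run the server with the proxy set for this shell only
export HTTPS_PROXY=https://your.proxy.server:port
ollama serve

# Then, from another terminal, pull a model;
# the download should route through the proxy
ollama pull llama3
```

If the pull succeeds, the proxy works and you can make the setting permanent as described below.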
Setting `HTTPS_PROXY` on Different Platforms
The method for setting environment variables varies depending on your operating system:
macOS
If you're running Ollama as a macOS application:
- Use `launchctl` to set the `HTTPS_PROXY` environment variable:

  ```
  launchctl setenv HTTPS_PROXY "https://your.proxy.server:port"
  ```

- Restart the Ollama application.
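You can confirm the variable is visible to newly launched applications with `launchctl getenv`. Note that values set this way do not survive a reboot, so you may need to reapply the setting after restarting your Mac:

```
launchctl getenv HTTPS_PROXY
```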
Linux
If Ollama is running as a systemd service:
- Edit the Ollama service file:

  ```
  systemctl edit ollama.service
  ```

- Add the following under the `[Service]` section:

  ```
  [Service]
  Environment="HTTPS_PROXY=https://your.proxy.server:port"
  ```

- Save the file and exit the editor.

- Reload systemd and restart Ollama:

  ```
  systemctl daemon-reload
  systemctl restart ollama
  ```
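To confirm the override took effect, you can inspect the unit's effective environment (the output lists every `Environment=` entry for the service):

```
systemctl show --property=Environment ollama
```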
Windows
On Windows, Ollama inherits user and system environment variables:
- Quit Ollama from the taskbar.
- Search for "environment variables" in the Settings (Windows 11) or Control Panel (Windows 10).
- Click on "Edit environment variables for your account".
- Create a new variable named `HTTPS_PROXY` and set its value to your proxy server's address (e.g., `https://your.proxy.server:port`).
- Click "OK" to save.
- Restart the Ollama application.
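If you prefer the command line, `setx` writes the same per-user variable. Note that `setx` only affects processes started afterwards, so restart Ollama (and your terminal) after running it:

```
setx HTTPS_PROXY "https://your.proxy.server:port"
```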
Using Ollama with a Proxy in Docker
If you're running Ollama inside a Docker container, you have two main options:
1. Pass `HTTPS_PROXY` to the Container

When starting the container, use the `-e` flag to pass the `HTTPS_PROXY` environment variable:
```
docker run -d -e HTTPS_PROXY=https://your.proxy.server:port -p 11434:11434 ollama/ollama
```
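If you manage the container with Docker Compose instead, the same variable goes in the service definition. A minimal sketch (the service name is illustrative; the image and port mapping mirror the `docker run` example above):

```
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    environment:
      - HTTPS_PROXY=https://your.proxy.server:port
```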
2. Configure the Docker Daemon
Alternatively, you can configure the Docker daemon itself to use a proxy. Docker provides documentation for doing this on macOS, Windows, Linux, and for the Docker daemon with systemd.
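For example, on a Linux host where Docker runs under systemd, the daemon can be given the proxy via a drop-in unit file. A sketch, assuming the standard drop-in location:

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTPS_PROXY=https://your.proxy.server:port"
```

After creating the file, reload systemd and restart Docker (`sudo systemctl daemon-reload && sudo systemctl restart docker`) so the daemon picks up the setting.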
Handling Self-Signed Certificates
If your proxy uses a self-signed certificate, you'll need to install it as a system certificate. For Docker, this might involve creating a custom Docker image:
```
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```
Then, build and run the image:
```
docker build -t ollama-with-ca .
docker run -d -e HTTPS_PROXY=https://your.proxy.server:port -p 11434:11434 ollama-with-ca
```
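If Ollama runs directly on a Linux host rather than in Docker, the same idea applies to the host's trust store. A sketch for Debian/Ubuntu-style systems (the `my-ca.pem` filename is illustrative):

```
sudo cp my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
sudo update-ca-certificates
sudo systemctl restart ollama
```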
Conclusion
Configuring Ollama to work behind a proxy is straightforward, thanks to its use of environment variables. By setting `HTTPS_PROXY` correctly, you can ensure that Ollama is able to download models and operate normally even in environments with network restrictions. Remember to install any necessary certificates if your proxy uses them, and you'll be up and running with Ollama in no time!