Default behavior
By default, Ollama listens on 127.0.0.1:11434, so it only accepts connections from the local machine; applications running on other machines cannot connect. The listening state looks like this:
root@docker:~/.ollama# netstat -tunlp | grep 11
tcp 0 0 127.0.0.1:34861 0.0.0.0:* LISTEN 1136/containerd
tcp 0 0 127.0.0.1:11434 0.0.0.0:* LISTEN 72188/ollama
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 9118/sshd: root@pts
tcp 0 0 127.0.0.1:6011 0.0.0.0:* LISTEN 23254/sshd: root@pt
tcp6 0 0 ::1:6010 :::* LISTEN 9118/sshd: root@pts
tcp6 0 0 ::1:6011 :::* LISTEN 23254/sshd: root@pt
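You can confirm the local-only binding with a quick check (a sketch assuming curl is installed; 192.168.1.10 stands in for the host's LAN IP, which is an illustrative value, not from the output above):

```shell
# Succeeds when run on the Ollama host itself:
curl http://127.0.0.1:11434/api/version

# From another machine, the same request against the host's LAN IP
# fails with "connection refused" under the default binding:
curl http://192.168.1.10:11434/api/version
```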
Configuration steps
First, run systemctl status ollama to find the location of the .service unit file:
root@docker:~# systemctl status ollama
● ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-02-05 08:53:06 UTC; 19min ago
Main PID: 25690 (ollama)
Tasks: 23 (limit: 19104)
Memory: 6.3G (peak: 11.3G)
CPU: 17min 56.220s
CGroup: /system.slice/ollama.service
├─25690 /usr/local/bin/ollama serve
└─42633 /usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af3670404>
Feb 05 09:07:14 docker ollama[25690]: llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
Feb 05 09:07:14 docker ollama[25690]: llama_new_context_with_model: CPU output buffer size = 2.34 MiB
Feb 05 09:07:14 docker ollama[25690]: llama_new_context_with_model: CPU compute buffer size = 302.75 MiB
Feb 05 09:07:14 docker ollama[25690]: llama_new_context_with_model: graph nodes = 986
Feb 05 09:07:14 docker ollama[25690]: llama_new_context_with_model: graph splits = 1
Feb 05 09:07:14 docker ollama[25690]: time=2025-02-05T09:07:14.650Z level=INFO source=server.go:594 msg="llama runner started in 2.51 seconds"
Feb 05 09:07:14 docker ollama[25690]: [GIN] 2025/02/05 - 09:07:14 | 200 | 2.586010861s | 127.0.0.1 | POST "/api/generate"
Feb 05 09:08:16 docker ollama[25690]: [GIN] 2025/02/05 - 09:08:16 | 200 | 2.854109831s | 127.0.0.1 | POST "/api/chat"
Feb 05 09:09:10 docker ollama[25690]: [GIN] 2025/02/05 - 09:09:10 | 200 | 58.04us | 127.0.0.1 | HEAD "/"
Feb 05 09:09:10 docker ollama[25690]: [GIN] 2025/02/05 - 09:09:10 | 200 | 1.144422ms | 127.0.0.1 | GET "/api/tags"
Stop the ollama service:
systemctl stop ollama
Then edit /etc/systemd/system/ollama.service and add Environment="OLLAMA_HOST=0.0.0.0:11434" under the [Service] section. After the change, the file looks like this:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0:11434"
[Install]
WantedBy=default.target
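As an alternative sketch (not part of the original steps): systemd also supports drop-in override files, which leave the installed unit file untouched and survive reinstalls:

```shell
# Create a drop-in directory and override file for the ollama service.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF
```

The daemon-reload and restart steps below are needed either way.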
Then run the following commands:
systemctl daemon-reload
systemctl restart ollama
Finally, confirm that port 11434 is no longer bound to 127.0.0.1:
root@docker:~/.ollama# netstat -tunlp | grep 11
tcp 0 0 127.0.0.1:34861 0.0.0.0:* LISTEN 1136/containerd
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 9118/sshd: root@pts
tcp 0 0 127.0.0.1:6011 0.0.0.0:* LISTEN 23254/sshd: root@pt
tcp6 0 0 :::11434 :::* LISTEN 73183/ollama
tcp6 0 0 ::1:6010 :::* LISTEN 9118/sshd: root@pts
tcp6 0 0 ::1:6011 :::* LISTEN 23254/sshd: root@pt
Other machines can now connect to the service.
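With the new binding in place, any HTTP client on the LAN can reach the API. A minimal Python sketch using only the standard library (the host IP 192.168.1.10 and the helper names are illustrative assumptions, not from the original text):

```python
import json
import urllib.request

def api_url(host: str, port: int, path: str) -> str:
    """Build an Ollama API endpoint URL for a given host and port."""
    return f"http://{host}:{port}{path}"

def list_models(host: str = "192.168.1.10", port: int = 11434) -> dict:
    """Fetch the installed-model list from a remote Ollama via GET /api/tags."""
    with urllib.request.urlopen(api_url(host, port, "/api/tags"), timeout=5) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Print the name of each model the remote instance has pulled.
    for model in list_models().get("models", []):
        print(model["name"])
```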