=== ollama-2080 ===
* Install NVIDIA drivers: https://www.xda-developers.com/nvidia-stopped-supporting-my-gpu-so-i-started-self-hosting-llms-with-it/
** Pin the driver version so you don't have to re-run the NVIDIA installer every time the kernel gets updated
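** A minimal sketch of the pin, assuming an apt-based distro and that the installed metapackage is <pre>nvidia-driver-550</pre> (substitute whatever version you actually installed): <pre>sudo apt-mark hold nvidia-driver-550   # keep apt from swapping the driver out during upgrades</pre>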
* Install <pre>ollama</pre> with <pre>curl -fsSL https://ollama.com/install.sh | sh</pre>
* Use ollama to pull and run deepseek-r1:8b
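** For example (the <pre>run</pre> subcommand also pulls the model automatically if it's missing): <pre>ollama pull deepseek-r1:8b   # download the model weights
ollama run deepseek-r1:8b    # start an interactive chat session</pre>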
* Verify: http://ollama.local:11434/ should show the message <pre>Ollama is running.</pre>
* In <pre>~/git/openwebui</pre>, run <pre>docker compose up</pre>
** Note: newer docker uses <pre>docker compose</pre>, not <pre>docker-compose</pre>
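** A minimal <pre>docker-compose.yml</pre> sketch for that directory, assuming the stock open-webui image and that <pre>ollama.local</pre> resolves to the ollama-2080 box: <pre>services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                                 # web UI on host port 3000
    environment:
      - OLLAMA_BASE_URL=http://ollama.local:11434   # where ollama is listening
    volumes:
      - open-webui:/app/backend/data                # persist chats and settings
volumes:
  open-webui:
</pre>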
* I had to do some hole-punching in ufw so open-webui could see ollama-2080
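** Something like this on ollama-2080 (the LAN subnet here is an assumption; use your own): <pre>sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp   # let LAN hosts reach ollama</pre>
** Note: if ollama only listens on 127.0.0.1, you may also need to set <pre>OLLAMA_HOST=0.0.0.0</pre> in its systemd unit so it accepts connections from the LAN at all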
* Useful commands
** <pre>sudo ss -plnt   # Lists ports this machine is listening on
ip -4 a          # Get this machine's IP address on the local network
</pre>