= gpubox Setup Guide =
== Hosted Services ==
{| class="wikitable"
|+ On gpubox
|-
! Hostname:Port !! Description
|-
| [https://gpubox.local:8006/ gpubox.local:8006] || Proxmox admin
|-
| [http://dockerhost.local:3000/ dockerhost.local:3000] || Open WebUI (to play with LLMs)
|-
| [https://ipmi-compute-2-171.local/ ipmi-compute-2-171.local] || IPMI
|}
 
= gpubox Setup =


== Bare Metal Configuration ==
* Upload <pre>debian-13.3.0-amd64-netinst.iso</pre> to storage through the Proxmox web UI
* Create a minimal Debian 13 template
** <pre>sudo apt update && sudo apt full-upgrade -y</pre>
** <pre>apt install -y ufw fail2ban curl git zsh sudo net-tools</pre>
* Make a user called <pre>deb</pre> with sudo
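The user-creation step above can be sketched as follows (run as root inside the template VM; <pre>deb</pre> is the username this guide uses):
<pre>
# Create the user (Debian's adduser prompts for a password interactively)
adduser deb
# Add the user to the sudo group
usermod -aG sudo deb
</pre>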
* Install nvidia drivers https://www.xda-developers.com/nvidia-stopped-supporting-my-gpu-so-i-started-self-hosting-llms-with-it/
** Pin the driver version so you don't have to re-run the nvidia installer every time the kernel gets updated
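One way to pin apt-installed drivers is to hold the packages (a sketch — the exact package names below are an assumption and depend on how the driver was installed; if you used NVIDIA's .run installer instead, registering the module with DKMS is the usual way to survive kernel updates):
<pre>
# Hold the driver packages so kernel upgrades don't pull in a new driver version
# (package names are examples; list yours with: dpkg -l | grep nvidia)
sudo apt-mark hold nvidia-driver nvidia-kernel-dkms
apt-mark showhold   # confirm the hold is in place
</pre>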
* Install ollama with <pre>curl -fsSL https://ollama.com/install.sh | sh</pre>
* Use ollama to pull and run deepseek-r1:8b
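The pull-and-run step above looks like:
<pre>
# Download the model weights, then start an interactive session with it
ollama pull deepseek-r1:8b
ollama run deepseek-r1:8b
</pre>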
* <pre>sudo ufw allow from 10.0.0.0/24 to any port 11434 proto tcp</pre>
* Verify: http://ollama.local:11434/ should show the message <pre>Ollama is running.</pre>
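The verification step can also be done from a shell on another host in the allowed 10.0.0.0/24 range (hostname as in the table above):
<pre>
# Hit ollama's root endpoint; it responds with a plain-text status message
curl -s http://ollama.local:11434/
# Expected response: Ollama is running
</pre>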
=== imgtotext ===
* Install ollama as above
* <pre>ollama run hf.co/noctrex/ZwZ-8B-GGUF:Q8_0</pre> from the page https://huggingface.co/noctrex/ZwZ-8B-GGUF (I clicked the image-to-text tag and looked at the trending models)
* Verify: http://imgtotext.local:11434/ should show the message <pre>Ollama is running.</pre>


=== dockerhost ===
=== ai-conductor ===
* TBD
=== If you're using a 1080 Ti or 1080 ===
<pre>sudo apt purge "*nvidia*"
sudo apt autoremove --purge
</pre>
Then reboot.


== Key Commands ==