Glad I could help ;)
You can get different results, sometimes better, sometimes worse, and most of the time differently phrased (e.g. the Gemma models by Google like to make bullet lists and sometimes tell me where they got the information from). There are models specifically trained / fine-tuned for different tasks (mostly coding, but also writing stories, answering medical questions, describing what is in a picture, speaking different languages, running on smaller / bigger hardware, etc.). Have a look at Ollama's library of models, which is outright tiny compared to e.g. Hugging Face.
Also, I don't trust OpenAI and the others to keep company data or code snippets from work confidential when I feed those in.
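If you want to compare a few of them yourself, pulling and running a model is a one-liner each; a minimal example, with the model tag as a placeholder (check Ollama's library for current tags):

ollama pull gemma:2b
ollama run gemma:2b "Why is the sky blue?"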
If you're lucky, you just set it to the wrong version; mine uses 10.3.0 (see below).
I tried running the Docker container first as well, but gave up: there are separate versions for CUDA and ROCm, which come packaged with it as well and therefore make it unnecessarily big.
I am running it natively on Fedora. I installed it with the setup script from the top of the docs:
curl -fsSL https://ollama.com/install.sh | sh
After that I created a service file (also described in the linked docs) so that it starts at boot (so I can just boot my PC and forget about it without needing to log in).
The crucial part for the GPU in question (RX 6700XT) was this line under the [Service] section:
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
As you said, this sets the environment variable for ROCm. Also, to be able to reach it from outside localhost (for my server):
Environment="OLLAMA_HOST=0.0.0.0"
I have my gaming PC running as Ollama host when I need it (RX 6700XT, with ROCm doing the heavy lifting). The PC idles at ~50W and draws up to 200W when generating an answer. It is plenty fast, though.
My mini PC home server runs Open WebUI with access to this "Ollama instance", but also to OpenAI's API for when I just need a quick answer and therefore don't turn on my PC.
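In case it helps anyone with a similar setup, this is roughly how Open WebUI gets pointed at a remote Ollama instance; a sketch of the documented docker run, where <gaming-pc-ip> is a placeholder for your host's address:

docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://<gaming-pc-ip>:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main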
It's a reference to the latest Fireship video… I think:
Anna's Archive would be the go-to, I think. You can choose the language in the sidebar.
She certainly can't break anything by using it; you can spin up a Docker container and see for yourself. It's also localized in a lot of languages.
I don't think my own mother would do well with this, mainly because I don't think she knows the difference between a PDF and a Word or LibreOffice document. But apart from that, it is really simple.
Have used it for some months and it's great. I mostly use it for basic stuff like splitting / merging PDFs, because I'm too lazy to look up the pdftk command.
But there are many more features like sanitizing (removing embedded JS code) or OCR (which works great).
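For comparison, these are the pdftk incantations it saves me from looking up (merging two files, then extracting a page range):

# merge two PDFs into one
pdftk a.pdf b.pdf cat output merged.pdf
# pull pages 1-5 out into a separate file
pdftk in.pdf cat 1-5 output part1.pdf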
I hope it went well :) I was completely ready to roll back by changing the image tag to v2, but didn't need to.
Bricked my PC twice because of the bootloader and couldn't repair it. From now on I just nuke my system if something is fucky and have a shell script do the installing of packages etc.
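The script doesn't have to be fancy; mine boils down to something like this (a sketch assuming Fedora's dnf, with a made-up package list):

#!/bin/bash
# post-install: put my usual packages back after a fresh install
set -e
sudo dnf install -y git vim htop podman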
I had a friend come over to my place to fix her laptop's Wi-Fi. After about an hour of searching for any setting in Windows that I could have missed, I coincidentally found a forum post where someone pointed out that this could be due to a hardware Wi-Fi switch…
Loop Habit Tracker app on Android: https://github.com/iSoron/uhabits
It's in the Google Play Store and on F-Droid, I believe.
Let's hope that this is not a production system. I want to say that you'd have to try hard to do something that stupid, but then again, knowing myself, you can cause a lot of trouble with a single command in some CLI somewhere.
I'm so looking forward to this. When I tried to use a tmpfs / ramdisk, transcoding would simply stop because there was no space left.
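For anyone hitting the same wall: you can at least choose the size when mounting the tmpfs, it just can't grow beyond that (example mount; path and size are arbitrary):

sudo mkdir -p /mnt/transcodes
sudo mount -t tmpfs -o size=16G tmpfs /mnt/transcodes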
That escalated quickly.
In my case it’s more like something from the 400 range
I distrohopped for a little while when I built my new gaming rig two years ago and can confirm:
Fedora KDE spin is the way.
Yes, since we have similar GPUs you could try the following to run it in a Docker container on Linux, taken from here and slightly modified:
#!/bin/bash
model=microsoft/phi-2
# share a volume with the Docker container to avoid downloading weights every run
volume=<path-to-your-data-directory>/data

docker run -e HSA_OVERRIDE_GFX_VERSION=10.3.0 -e PYTORCH_ROCM_ARCH="gfx1031" \
  --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v "$volume":/data \
  ghcr.io/huggingface/text-generation-inference:1.4-rocm --model-id "$model"
Note how the ROCm version has a different image tag, and that you need to mount your GPU devices into the container. The two environment variables are specific to my (and maybe also your) GPU architecture. It will need a while to download, though.
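Once the container is up, you can sanity-check it with a quick request against the standard text-generation-inference /generate endpoint (the prompt is just an example):

curl 127.0.0.1:8080/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 50}}'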
Like one of the comments mentioned: there is yt-dlp, for now at least.
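Basic usage, in case anyone hasn't tried it yet (the URL is a placeholder):

# download the best available video+audio
yt-dlp "https://example.com/watch?v=..."
# or just extract the audio
yt-dlp -x --audio-format mp3 "https://example.com/watch?v=..."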