Use local and open source models if you care about privacy.
I think people who use local and open source models would probably already know not to feed passwords to ChatGPT.
I absolutely agree. Use something like ollama. Do keep in mind that it takes a fair amount of computing resources to run these models: ~5GB RAM and a ~3GB file size for the smaller ollama-uncensored models.
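For anyone who wants to try it, here's a minimal sketch of querying a locally running ollama server over its default HTTP API on port 11434. The model name is just whatever you've pulled, e.g. with `ollama pull llama2-uncensored`; swap in your own:

```python
import requests  # pip install requests

# Ask the local ollama server (default port 11434) for a completion.
# Assumes the model has already been pulled, e.g. `ollama pull llama2-uncensored`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2-uncensored",
        "prompt": "Why is the sky blue?",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```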
It’s not great, but an old GTX GPU can be had cheaply if you look around refurb; as long as there’s a warranty, you’re gold. Stick it into a 10-year-old Xeon workstation off eBay and you can easily have a machine with 8 cores, 32GB RAM, and a solid GPU for under $200.
It’s the RAM requirement that stings right now. I believe I’ve got the specs, but I was told (or am misremembering) a 64 GB RAM requirement for a model.
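The 64 GB figure depends entirely on model size and quantization. You can ballpark it as parameter count × bytes per weight, plus some overhead for the KV cache and runtime; a rough sketch (the ~20% overhead factor is my own assumption):

```python
def approx_ram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough RAM estimate: weight bytes plus ~20% for KV cache and runtime overhead."""
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

# A small 7B model at 4-bit quantization fits in a few GB...
print(f"7B  @ 4-bit: ~{approx_ram_gb(7, 4):.1f} GB")
# ...while a 70B model at ~6-bit lands right around that 64 GB figure.
print(f"70B @ 6-bit: ~{approx_ram_gb(70, 6):.0f} GB")
```

So the smaller quantized models run fine in well under 32GB; the 64 GB numbers come from the big 70B-class models.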