![](https://programming.dev/pictrs/image/fbb0fc6c-aade-4537-ac38-36a7e6437814.jpeg)
![](https://lemmy.dbzer0.com/pictrs/image/a18b0c69-23c9-4b2a-b8e0-3aca0172390d.png)
Show me a music store I can purchase music from on my phone through an app, and I’ll purchase it.
They added a video player with version 3, I think.
Now the question is - are they open-sourcing the original Winamp, or the awful replacement?
We all mess up! I hope that helps - let me know if you see any improvement!
I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to that guide, you shouldn’t install the Linux Nvidia driver inside WSL (the Windows host driver is shared with WSL); you only want the cuda-toolkit metapackage. I’d follow the instructions from that link closely.
You may also run into performance issues within WSL due to the virtual machine overhead.
Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)
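If it helps, here’s the quick sanity check I’d run once the toolkit is installed (this assumes you have PyTorch installed in the WSL environment - any CUDA-aware library would do):

```python
# Quick sanity check that CUDA is actually visible from inside WSL.
# Assumes PyTorch is installed in this environment (pip install torch).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA version PyTorch was built against:", torch.version.cuda)
```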
It should be split between VRAM and regular RAM, at least if it’s a GGUF model. Maybe it’s not, and that’s what’s wrong?
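If you’re running it through llama-cpp-python, that split is controlled by `n_gpu_layers` - here’s a rough sketch (the model file name is hypothetical):

```python
# Sketch of splitting a GGUF model between VRAM and system RAM with
# llama-cpp-python. The model file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-70b.Q4_K_M.gguf",  # hypothetical GGUF file
    n_gpu_layers=40,  # layers offloaded to VRAM; the rest stay in RAM
    n_ctx=4096,       # context window
)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```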
Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)
I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?
Unfortunately, I don’t expect it to remain free forever.
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
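A quick way to check: poll `nvidia-smi` while a response is generating. Rough sketch:

```python
# Poll GPU utilization while a response is generating. If utilization
# sits near 0% and VRAM usage doesn't jump, inference is running on the CPU.
import subprocess
import time

for _ in range(10):
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        text=True,
    )
    print(out.strip())
    time.sleep(1)
```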
It’s a W3C-managed standard, but there’s a ton of behavior the specification leaves unspecified that platforms are free to define for themselves.
The standard doesn’t impose a 500-character limit, but nothing says there can’t be one.
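For context, posts travel between servers as ActivityStreams objects, and nothing in the object model caps the length of `content` - here’s a minimal sketch of a Note (the actor URL is made up):

```python
# A minimal ActivityStreams "Note" object, the shape most microblogging
# posts take on the wire. The spec puts no length cap on "content";
# limits like 500 characters are imposed by individual platforms.
import json

note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://example.social/users/alice",  # made-up actor
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "content": "Hello, fediverse!",
}
print(json.dumps(note, indent=2))
```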
My go-to solution for this is the Android FolderSync app with an SFTP connection.
I’m not familiar with creating fonts specifically, but you’ll want to commit any resources necessary to recreate the font file, including any build scripts to help ease the process and instructions specifying compatible versions of tooling (FontForge in this case). Don’t include FontForge in the repository, of course.
The compiled font files should go under the repository’s Releases on GitHub.
Git isn’t really designed for binary resources, but as long as they’re not too large, they’ll be fine. You just won’t have a meaningful way to diff changes.
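As a rough example of the kind of build script I mean - FontForge ships Python bindings, so regenerating the binaries can be a few lines (file names are hypothetical):

```python
# Example build script: regenerate distributable font files from the
# FontForge source file. File names are hypothetical.
# Run with FontForge's bundled interpreter: fontforge -script build.py
import fontforge

font = fontforge.open("MyFont.sfd")  # the source file you commit
font.generate("MyFont.ttf")          # artifacts you attach to a release
font.generate("MyFont.otf")
```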
Correction: migrated to GitLab, but I don’t expect they’ll want to keep it there.
The yuzu repository is already wiped.
On Android, it moved SMS messages out of the shared SMS store and into Signal’s own database upon receipt, which was more secure.
Of course!
Valid, but not standard and more inconvenient.
Additionally, you act like query strings can’t be used to track you when they certainly can.
Most of the advantages of Gemini are implemented in the client and not the protocol itself.
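To illustrate the tracking point - `utm_*` campaign parameters ride along in query strings all the time, and stripping them is trivial:

```python
# Query strings are a common tracking vector: utm_* parameters record
# which campaign or source a click came from. Stripping them is simple.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def strip_tracking(url: str) -> str:
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if not k.startswith("utm_")]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://example.com/page?id=42&utm_source=newsletter"))
# -> https://example.com/page?id=42
```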
The Docker client communicates with the daemon over a UNIX socket. If you mount that socket into a container that has a Docker client, that client can talk to the host’s Docker instance.
It’s entirely optional.
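Here’s a rough sketch of what that looks like with the Docker SDK for Python, assuming the host socket has been mounted into the container:

```python
# Inside a container started with:
#   docker run -v /var/run/docker.sock:/var/run/docker.sock ...
# a Docker client talks to the *host's* daemon through the mounted socket.
import docker  # pip install docker

client = docker.DockerClient(base_url="unix:///var/run/docker.sock")
for container in client.containers.list():
    print(container.name, container.status)  # these are the host's containers
```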
It depends on the model you run. Mistral, Gemma, or Phi are great for the majority of devices, even with CPU-only or integrated-graphics inference.
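For example, a CPU-only run with llama-cpp-python looks like this (the model file name is hypothetical; `n_gpu_layers=0` keeps everything in RAM):

```python
# CPU-only inference with a small quantized model via llama-cpp-python.
# The model file name is hypothetical; n_gpu_layers=0 keeps it all in RAM.
from llama_cpp import Llama

llm = Llama(model_path="./phi-3-mini.Q4_K_M.gguf", n_gpu_layers=0, n_ctx=2048)
out = llm("Q: What is a GGUF file? A:", max_tokens=96)
print(out["choices"][0]["text"])
```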