How does this analogy work at all? LoRA is chosen by the modifier to be low-rank to accommodate some desktop/workstation memory constraint, not because the other weights are “very hard” to modify if you happen to have the necessary compute and I/O. The development of LoRA has also largely been directed by storage reduction (hence not too many layers modified) and by preserving generalizability (since training generalizable models is hard). The Kronecker-product variants, in particular, were first developed in the context of federated learning, not desktop/workstation fine-tuning. (LoRA is also fully capable of modifying all weights; it is rather a technique for doing so in a correlated fashion to reduce the size of the gradient update.) And much of LoRA’s development happened in the context of otherwise fully open datasets (e.g. LAION) that are simply not manageable in desktop/workstation settings.
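To make the rank point concrete: the adapter replaces the full weight update ΔW with a product B·A of rank r, so only A and B carry gradients. A minimal PyTorch sketch (class and parameter names are mine, not from any particular library):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # base weights stay frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # r x d_in
        self.B = nn.Parameter(torch.zeros(d_out, r))         # d_out x r, zero init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + scale * (B @ A), a rank-r update of W,
        # but only d_out*r + r*d_in parameters receive gradients.
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```

Note that nothing stops r from being min(d_out, d_in), at which point the “low-rank” update can express any full-weight modification; a small r is purely a memory/storage trade-off.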
This narrow perspective on “source” discounts the actual usefulness of compute/training here. Datasets from e.g. LAION to Common Crawl have been available for some time, along with training code (sometimes independently reproduced) for the Imagen diffusion model or GPT. It was only when e.g. GPT-J came along, with somebody investing in the compute (including working out how to scale it to their specific cluster), that the result became useful.
This is a very shallow analogy. Fine-tuning is rather the standard technical approach to reduce compute, even if you have access to the code and all training data. Hence there has always been a rich and established ecosystem for fine-tuning, regardless of “source.” Patching closed-source binaries is not the standard approach, since compilation is far less computationally intensive than today’s large-scale training.
Java bytecode is a far-fetched example. The JVM assumes a specific architecture particular to the CPU-dominant world in which it was developed, and Java bytecode cannot be trivially executed (efficiently) on a GPU or FPGA, for instance.
And by the way, the issue of weight portability is far more relevant than the forced comparison to (simple) code can capture. Today’s large-scale training code is usually very specific to a particular cluster (or TPU, WSE), as opposed to the resulting weights. Even if you got hold of somebody’s training code, you would often have to reinvent the wheel to scale it to your own particular compute hardware, interconnect, I/O pipeline, etc. This is not commodity open source you run on your home PC or workstation.
The situation is somewhat different and nuanced. For weights there are fine-tuning tools (LoRA/LoHa, PEFT, etc.), which presents a different situation than with program binaries. You can see that despite e.g. LLaMA being “compiled,” others can meaningfully build on it to make models that surpass the previous iteration (see e.g. WizardLM 2 recently, in relation to LLaMA 2). Weights are also architecture-independent to a much larger degree than binaries (you can usually train/inference with the same weights across GPUs, Google TPUs, Cerebras WSEs, etc.).
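As a rough illustration of how turnkey this tooling has become, attaching LoRA adapters with the Hugging Face peft library takes a few lines; the checkpoint name and hyperparameters here are illustrative only, not a recommendation:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; any causal LM checkpoint works the same way.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # which layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```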
Unless Valve can find or pay a company to do a custom packaging of an Nvidia GPU with x86 (like the Intel Kaby Lake-G SoC with an in-package Radeon), very unlikely. The handheld form factor makes an out-of-package discrete GPU very difficult.
And making Nvidia itself warm up to x86 is just unrealistic at this point. Even if e.g. Nintendo demanded it, the entire gaming market (see AMD’s anemic 2024Q1 gaming results vs. data center and AI) is unlikely to be compelling enough for Nvidia to take an interest in x86 development instead of continuing with their ARM-based Grace “superchip.”
There is even a sentence in README.md that makes it explicit:
The source files in this repo are for historical reference and will be kept static, so please don’t send Pull Requests suggesting any modifications to the source files […]
Probably from the FAQ pane on the Kickstarter page:
What about Steamdeck support?
Will be 100% supported
Last updated: Tue, April 23 2024 10:55 AM PDT
Have people actually checked the versions there before making the suggestion?
F-Droid: Version 3.5.4 (13050408), suggested, added on Feb 23, 2023
Google Play: Updated on Aug 27, 2023
https://f-droid.org/en/packages/org.videolan.vlc/
https://play.google.com/store/apps/details?id=org.videolan.vlc
The problem seems to be squarely with VLC themselves.
See https://knowyourmeme.com/memes/u-mad, in particular: “Due to the agitating nature of the phrase, it is often considered a form of trolling.”
The “you mad bro” is found in internal Valve communication (Valve COO Scott Lynch to Erik Johnson and Newell, i.e., in the sense of Johnson/Newell being “mad,” not Sweeney). It was notably not sent as a response to Sweeney. Another outlet already tripped over this and had to issue a correction: https://www.gamingonlinux.com/2024/03/valve-coo-on-epics-tim-sweeney-you-mad-bro-when-launching-the-epic-store/
This is not quite as sensational as some people are framing it.
Retention, or the lack thereof, when cold-stored.
In terms of SD cards or standard NAND, not even Nintendo does that. Nintendo builds Macronix XtraROM into their Game Cards, a proprietary Flash memory with a claimed 20-year cold-storage retention. And they introduced the 64 GB version only after a lengthy delay, in 2020. So the (lack of) cold-storage performance of standard NAND Flash seems to be viewed by some in the industry as not ready for prime time. Macronix discussed it many years back in a DigiTimes article: https://www.digitimes.com/news/a20120713PR201.html.
And Sony and Microsoft are both still building Blu-ray-based consoles.
Yes. If you mean “CLI” as in installing via e.g. pacman: it is a GUI (Electron) application, so I expect it will install straight from e.g. KDE Discover and then run without you touching the shell.
Installing podman-compose with the immutable filesystem is fairly straightforward, since it is just a single Python file (https://github.com/containers/podman-compose/blob/devel/podman_compose.py), which you can basically install anywhere in your PATH. You can also first bootstrap pip (`python3 get-pip.py --user` with get-pip.py from https://github.com/pypa/get-pip) and then do `pip3 install --user podman-compose`.
There might be several misunderstandings:
So what you want is already available, and no Docker Desktop is actually needed.
There was a GNOME FAQ that described “guh-NOME,” or IPA /ɡˈnəʊm/, as the official pronunciation, due to the emphasis on the G as in GNU. It also acknowledged that many pronounce it “NOME,” or /nəʊm/: https://stuff.mit.edu/afs/athena/astaff/project/aui/html/pronunciation.html
Undervolting gives the chip additional power and thermal headroom, and can improve situations where throttling would otherwise set in.
Yes. But one should also note that only a limited range of Intel GPUs supports SR-IOV.
From https://www.gamingonlinux.com/2023/10/tony-hawks-pro-skater-1-2-adds-offline-support-for-steam-deck/ :
You may be able to get it to work on desktop Linux too in offline mode by using
SteamDeck=1 %command%
as a Steam launch option for the game, which likely won’t work for Windows since the Steam Deck is just a Linux machine.
Tom Clancy’s The Division 2 runs decently on the Steam Deck, and has semi-official (or de facto official?) support: the developer purposefully switched to a Linux/Wine-compatible EAC earlier this year, and referenced Steam Deck support in the corresponding patch notes.
This summary (and sadly, also the GoL title) has somewhat buried the lede: the firmware update that comes with 3.5.1 Preview adds undervolting controls, with the obvious implication of improved battery life.
Yes, it does. For example, Hans-Kristian Arntzen declared the DirectX Raytracing (DXR) implementation in VKD3D-proton feature-complete in February 2023 (https://github.com/HansKristian-Work/vkd3d-proton/issues/154#issuecomment-1434761594). And since November 2023 / release 2.11, VKD3D-proton in fact runs with DXR enabled by default (https://github.com/HansKristian-Work/vkd3d-proton/releases/tag/v2.11).