This looks like something stupid, but it doesn’t really look like KKK costumes. I could understand if your point is that even a vague resemblance might be in poor taste, but “closely resembling” seems like a stretch.
I wouldn’t recommend Docker for a production environment either, but there are plenty of container-based solutions that use OCI-compatible images just fine, and they are very widely used in production. Having said that, plenty of people run Docker images in a homelab setting and they work fine. I don’t like running rootful containers under a system daemon, but calling it a giant mess doesn’t seem fair in my experience.
XML aims to be both human-readable and machine-readable, but manages neither. It’s only really worth it if you actually need the complexity or extensibility; otherwise it’s just a major pain to map XML structures to any sensible type representation. I’ve been forced to work with some of the protocols that people like to present as examples of good XML usage and I hate every single one of them.
Fuck YAML though. That spec is longer and more complex than any other markup language I know of and it doesn’t have a single fully compliant implementation.
C has not aged well, despite its popularity in many applications. I’m grateful for the incredible body of work that kernel developers have assembled over the decades, but there are some very useful aspects of Rust that might help alleviate some of the hurdles that aspiring contributors face. This was not a push by Rust evangelists, but an attempt to enable modernization efforts, at least for new driver development. If it doesn’t work out, that’s fair enough, but I’m grateful for the willingness - especially of Linus - to try something new.
If I may ask: how practical is monitoring / administering rootless quadlets? I’m running rootless podman containers via systemd for home use, but splitting the single rootless user into multiple has proven to be quite the pain.
The whole premise of this discussion, going by your initial comment, was technological progress and growth. That means refining existing models and training new ones, which is going to cost a lot of energy. The way this industry is going, even privacy-conscious usage of open-source models will contribute to the insane energy usage by creating demand and popularizing the technology.
Do we really need to grow our energy consumption as a society by such a disproportionate amount?
With Blu-ray rips, I don’t really see any way to avoid that unfortunately, unless someone else has already added the hashes for your release. Most people use it to scan their encoded releases, which will (in most cases) have already been added to AniDB by the release group. I’m a bit surprised, though, that none of your rips are recognized. Have you checked the AniDB pages for your series to see if anyone uploaded hashes for Blu-ray rips?
Grouping seasons into a series folder doesn’t work well in some cases, because that’s not the way they are released in Japan. A new season is (most of the time) effectively an entirely new show entry. Show seasons are mostly a North American thing. No matter which software you use, there are always going to be some minor issues if you group seasons into one entry.
Shoko compares a file’s ED2K hash against the AniDB database. The filename doesn’t matter for automatic detection. Have a look at the log to see if there are any issues. It’s entirely possible that AniDB just doesn’t have the hashes for the raw Blu-ray rip. In that case you can either manually link them in Shoko, connecting the AniDB episode ID to the file hash, or create new file entries on AniDB with your specific hashes.
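For reference, the ED2K hash itself isn’t magic: it’s MD4 over fixed-size chunks, then MD4 over the concatenated chunk digests. A rough Python sketch (assuming your hashlib build still exposes MD4, which newer OpenSSL builds may not):

```python
import hashlib

ED2K_CHUNK_SIZE = 9_728_000  # ED2K splits files into 9,728,000-byte chunks


def ed2k_hash(path: str) -> str:
    """Rough sketch of the ED2K hash: MD4 per chunk, then MD4 over all chunk digests."""
    # MD4 availability depends on your OpenSSL build; this may raise ValueError.
    chunk_digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(ED2K_CHUNK_SIZE)
            if not chunk:
                break
            chunk_digests.append(hashlib.new("md4", chunk).digest())
    if not chunk_digests:
        # Empty file: hash of an empty chunk.
        chunk_digests.append(hashlib.new("md4", b"").digest())
    if len(chunk_digests) == 1:
        # Files that fit in a single chunk use that chunk's digest directly.
        return chunk_digests[0].hex()
    # Note: clients disagree on files that are an exact multiple of the chunk size
    # (some append an empty-chunk digest); this sketch ignores that edge case.
    return hashlib.new("md4", b"".join(chunk_digests)).digest().hex()


print(ed2k_hash("episode01.mkv"))  # hypothetical file name
```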
Shoko also has rate limits. The problem is that AniDB does rate limiting in an extremely stupid way for a UDP API and doesn’t even have the decency to define clear time limits.
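On the client side all you can really do is space your requests out yourself. A rough sketch of the idea in Python, where the delay value is my own guess rather than anything AniDB clearly documents:

```python
import socket
import time

MIN_DELAY = 4.0  # assumed seconds between UDP requests; not an officially documented limit

_last_request = 0.0


def throttled_send(sock: socket.socket, payload: bytes, addr: tuple[str, int]) -> bytes:
    """Send one UDP request, waiting out the self-imposed delay first."""
    global _last_request
    wait = MIN_DELAY - (time.monotonic() - _last_request)
    if wait > 0:
        time.sleep(wait)
    sock.sendto(payload, addr)
    _last_request = time.monotonic()
    data, _ = sock.recvfrom(4096)
    return data
```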
The only things that are slow are dnf’s repository check and some migration scripts in certain Fedora packages. If that’s the price I need to pay to get seamless updates and upgrades across major versions for nearly a decade, then I can live with that.
I tried using ConnMan to set up a WireGuard connection once. It was not a good experience and ultimately led nowhere, due to missing feature support.
If anything, he gets most of his inspiration from macOS.
The joke in the OP stops at the beginning of the joke explanation. If you just share your honest opinion like that in a shitposting community, you can’t expect everyone to “play along” with your “joke”.
Pretty sure the registry path for official images is “library” (at least it used to be), so it should be “docker.io/library/debian”, though I can’t double-check at the moment.
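In case it helps, the short-name expansion works roughly like this; a hypothetical sketch of the rule, not Docker’s actual resolution code:

```python
def normalize_image_ref(name: str) -> str:
    """Expand a short image name the way Docker-style tooling roughly does.

    Illustrative sketch only, not the real normalization logic.
    """
    parts = name.split("/", 1)
    # A first component containing '.' or ':' (or 'localhost') is treated as a registry host.
    if len(parts) == 2 and ("." in parts[0] or ":" in parts[0] or parts[0] == "localhost"):
        return name
    if len(parts) == 1:
        # Bare names like 'debian' map to the official-image namespace.
        return f"docker.io/library/{name}"
    # 'user/image' maps to that user's namespace on Docker Hub.
    return f"docker.io/{name}"


print(normalize_image_ref("debian"))        # docker.io/library/debian
print(normalize_image_ref("grafana/loki"))  # docker.io/grafana/loki
```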
I haven’t used TypeScript in a classically OOP way and it never felt like I was being urged to do so either.
There are two ways to save your work in either program. You can save your actual project, so you can continue editing at some point without losing things like layers, image quality, and other information, or you can export your work into some kind of image file, optionally compressing it and discarding extra information about the project. I think GIMP even offers a shortcut for this called “Overwrite …”. How is this an issue?
“wayland doesn’t seem to support nvidia as well as X does, just due to development focus”
YMMV with Nvidia, but that has nothing to do with development focus and everything to do with Nvidia’s refusal to use the same interfaces Intel and AMD use. Most of the way Nvidia works or doesn’t work with X or Wayland comes down to Nvidia’s driver stack. Personally, I’ve not had many positive experiences with Nvidia on X.
“Like yes major releases and distros are moving to wayland now, that just means they find it stable enough to start doing development on it.”
That happened literal years ago. The reason you’re only noticing now might be that KDE has gotten their Wayland implementation to a reasonably stable point. GNOME has supported Wayland for some time now, and other DEs probably don’t have the resources to move on from X. I don’t see the distros that are only switching over now as major contributors to any development specific to Wayland.
I don’t take issue with your preferences. Maybe you’re better off with X for now, that’s fine, but you make it sound like Wayland is just full of issues and has barely even entered some kind of pre-release state for software masochists.
Some kind of new replicant detection method?