• 0 Posts
  • 80 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • For what it’s worth, I played the NES release of DQ1, and then a translation of the Japan-only SNES release of DQ2 recently (I actually beat DQ2 last week), and I found DQ2 to be a much better game than DQ1 overall. DQ1 was… interesting, but it was very much a game that did not respect the player’s time in the least, to the point of expecting you to fight literally hundreds of battles to grind up enough gold for gear and enough experience to survive. The most charitable thing I can say about it is that the battle system was so rudimentary and so grindy that the gameplay felt more like it was focused on resource management–there was tension in deciding whether you could afford to take another fight or needed to return to town and spend money sleeping at an inn to heal (setting your grind back at least 1-2 fights, given how piddly gold and XP drops were), in weighing MP spent on healing against the risk of dying to the next monster, and so on.

    DQ2, meanwhile, was a much more robust and much less grindy game–the simple addition of multiple party members and multiple enemies in a single battle meant your gold and XP gains were multiplied compared to the first game. While it still demanded grinding, it was much more reasonable about it, and it felt much closer to the “modern” JRPGs you’re used to seeing.


  • The problem is that there’s no incentive for employees to stay beyond a few years, which means there’s also no incentive for employers to invest in them. Why spend months or years training someone if they’ll leave after the second year?

    But then you have to ask why employees aren’t loyal any longer, and that’s because pensions and benefits have eroded, and your pay stops keeping up the longer you stay at one company. Why stay at a company for 20, 30, or 40 years when you can come out way ahead financially by hopping jobs every 2-4 years?
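
    To put rough numbers on that last point, here’s a minimal sketch in Python. The starting salary, the 3% annual raise for staying, the 15% bump per hop, and the 3-year hop cadence are all made-up assumptions, not figures from anywhere:

    ```python
    # Hypothetical comparison: stay put for a 3% annual raise, or hop
    # jobs every 3 years for a 15% bump. All numbers are assumptions.
    START_SALARY = 60_000
    YEARS = 12

    loyal = hopper = float(START_SALARY)
    for year in range(1, YEARS + 1):
        loyal *= 1.03        # assumed annual merit raise for staying
        if year % 3 == 0:
            hopper *= 1.15   # assumed raise from switching employers

    print(f"After {YEARS} years: loyal ~${loyal:,.0f}, hopper ~${hopper:,.0f}")
    # => After 12 years: loyal ~$85,546, hopper ~$104,940
    ```

    Even with the hopper getting no raises at all between moves, they pull ahead within a couple of hops under these assumptions, which is the incentive problem in a nutshell.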


  • Ah, yes, you don’t have an actual rebuttal, so everything is just “propaganda” and “cyberpunk dystopia”, as if a world where snake oil salesmen hawk freaking AI-powered vibrators and vaguepost about the benefits of AI while downplaying or ignoring its very real, very measurable harms, while an entire cottage industry of people making a living on their creative endeavors is forced into wage-slave office jobs, isn’t even more of a dystopia.

    Try actually talking to an artist sometime, bud. I don’t know of a single one who’s actually okay with AI, and if you weren’t either blind or an “ideas guy” salivating at the thought of having a personal slave to make (shitty, barely functional, vapid) shit without paying someone with the actual necessary skills, you’d agree too.


  • Ideally? It means that AI companies have to throw away their trained models entirely, pay for licenses they may not be able to afford, and go out of business as a result, at which point everyone snaps out of the cult of AI, realizes it’s as overhyped as blockchain, and pretends it never happened. Pardon me while I find a flea to play the world’s tiniest violin. More realistically, open models will be restricted to FOSS works and the public domain, while commercial models pay for licenses from copyright holders.

    Like, what, you think I haven’t thought through this exact issue before and reached the exact conclusion your leading questions are so transparently pushing: that open models will be restricted to public-domain works while commercial models can obtain a license? Yeah, duh. And you know what? I. Don’t. Care. Commercial models can be (somewhat) more easily regulated, and even in the absolute worst case, creators will at least have a mechanism to opt out of the artist-crushing machine.


  • Yeah, no, stop with the goddamn tone policing. I have zero interest in vagueposting and high-horse riding.

    As for what I want, I want generative AI banned entirely, or at minimum restricted to training on works that are either in the public domain or that the person building the training set has received explicit, opt-in consent to use. That’s the supposed gold standard everyone demands for the wide-scale collection and processing of the personal data we generate just through our normal, everyday activities; why should it be any different for the wide-scale collection and processing of the stuff we actually put effort into creating?


  • Huh? How does that follow at all? Ruling that the specific use of training LLMs–which absolutely flunks the “amount and substantiality of the portion taken” test (since it’s taking the whole damn work) and the “effect on the market” test (fucking DUH)–isn’t fair use in no way impacts parody or R34. It’s the same kind of logic the GOP uses when they say “if the IRS cracks down on billionaires evading taxes, then Blue Collar Joe is going to get audited!”

    Fuck outta here with that insane clown logic.


  • So because corps abuse copyright, I should be fine with AI companies taking whatever I write–all the journal entries, short stories, blog posts, tweets, comments, etc.–and putting it through their models without being asked, with no ability to opt out? My artist friends should be fine with their galleries being used to train the very AI models that are actively being used to deprive them of their livelihood, without any ability to say “I don’t want the fruits of my labor used this way”?


  • Hell, that article is also all about Google Books, which is an entirely different beast from generative AI. One of the key points from the circuit judge was that Google Books’ use of copyrighted material “…[maintains] respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders.” The appeals court, in upholding the ruling that Google Books’ use of copyrighted content is fair use, ruled “the revelations do not provide a significant market substitute for the protected aspects of the originals.”

    If you think that gen AI doesn’t provide a significant market substitute for the artwork created by the artists and authors used to train these models, or that it doesn’t adversely impact their rights, then you’re utterly delusional.


  • An actual technical answer: apparently, it’s because while the PS5 and Xbox Series X are technically regular x86-64 architecture, they’re designed so the CPU and GPU share a single pool of memory with no loss in performance. This makes it easy to allocate a shitload of RAM for the GPU to store textures very quickly, but it also means that as the games industry shifts from developing for the PS4/Xbox One first (which had much smaller memory budgets to work with) to the PS5/XSX first, VRAM requirements are spiking, because it’s a lot easier to port to PC if you just keep the assumption that the GPU can store 10-15 GB of texture data at once instead of refactoring your code to reduce VRAM usage.
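
    To put that in perspective with some back-of-the-envelope math, here’s a minimal sketch. The texture count and sizes are assumptions for illustration, not numbers from any real engine; the constants reflect BC7 block compression (roughly 1 byte per texel) and a full mip chain adding about a third on top of the base level:

    ```python
    # Rough texture-memory budget math. All assumptions, not real-game data.
    def texture_bytes(width: int, height: int,
                      bytes_per_texel: float = 1.0,  # ~1 byte/texel for BC7
                      mips: bool = True) -> float:
        base = width * height * bytes_per_texel
        return base * 4 / 3 if mips else base  # full mip chain adds ~1/3

    resident = 500  # hypothetical count of 4K textures kept in memory at once
    total = resident * texture_bytes(4096, 4096)
    print(f"~{total / 2**30:.1f} GiB of memory just for textures")  # ~10.4 GiB
    ```

    Keeping ~10 GiB of textures resident is no problem on a console that can hand the GPU most of a 16 GB unified pool, but it blows right past a typical 8 GB desktop card, hence the VRAM squeeze on PC.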