The conversation/free-for-all around the role of automated “AI”-based game development rolls on with a few thoughts from Tom Hall, co-founder of id Software and one of the creators of the original DOOM, who says he’s (Commander) keen on the prospect of “ethical” uses for such tools in gamedev, but worries that reliance on them “will homogenize games, sort of like AAA games are now”.

Speaking to Sektor.sk, Hall said he was “excited” by “how AI could be used ethically to be more of a core element of the game, so it’s almost like a game that you’re playing and it’s playing you, in a sense, or it knows what you want. It could generate things for you, or enable different gameplay, it can adapt much more seamlessly to what you’re doing, or just sensibly create more game content.”

But he added: “I don’t want it to just willy-nilly be procedural, everything AI, and just not have any crafting to it, because that will homogenize games, sort of like a lot of AAA games are now. They’re just kind of like I attack the monster, oh, it’s attacking, I’ll roll out of the way. It’s all kind of the same stuff. And that’s what I don’t want to happen to games because of AI. I want it to enable us to make cooler things, and more amazing things, but there still needs to be a sense of craft.”

  • nanoUFO@sh.itjust.worksM · 1 year ago

    Is there any ethical AI? All they do is take data people posted online and then profit off it, with the original creators not getting a say in whether their data gets used, or any share of the profits derived from it.

      • sugar_in_your_tea@sh.itjust.works · 1 year ago

        We did this in a previous org. Basically, we had a bunch of user-generated data, users would then classify a sample of that data, and then we’d train our model on those classifications.
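        That sample-then-label workflow can be sketched in a few lines. This is a minimal pure-Python illustration, not the commenter's actual system — the data, labels, and bag-of-words scoring are all hypothetical:

```python
from collections import defaultdict

# Hypothetical workflow: users label a sample of user-generated
# text, then a toy bag-of-words model is "trained" on those labels.
labeled_sample = [
    ("love this update", "positive"),
    ("great new feature", "positive"),
    ("crashes on startup", "negative"),
    ("terrible performance", "negative"),
]

# Training step: count how often each word appears under each label.
word_counts = defaultdict(lambda: defaultdict(int))
for text, label in labeled_sample:
    for word in text.split():
        word_counts[word][label] += 1

def classify(text):
    # Score each label by summing counts of words seen in training;
    # unseen words contribute nothing.
    scores = defaultdict(int)
    for word in text.split():
        for label, n in word_counts[word].items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else "unknown"

print(classify("great update"))  # → positive
```

        A real pipeline would use a proper model, but the shape is the same: the users supply the ground truth, so no scraped third-party data is involved.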

        I don’t see how it would work in game dev though, unless they’re using AI to customize an NPC’s behavior based on the player’s actions (e.g. teaching an enemy to block player attacks). Generating models and whatnot would just have too small a data set to work with.
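        The enemy-learns-to-block idea doesn't even need a neural network — simple frequency counting gets you surprisingly far. A toy sketch (class and method names are made up for illustration):

```python
from collections import Counter

class AdaptiveEnemy:
    """Toy enemy that learns to block the player's most frequent
    attack direction from observed attacks alone."""

    def __init__(self):
        self.seen = Counter()

    def observe(self, attack_direction):
        # Record each attack the player throws.
        self.seen[attack_direction] += 1

    def choose_block(self):
        # Block the direction the player uses most often;
        # default to "high" before any attacks have been seen.
        if not self.seen:
            return "high"
        return self.seen.most_common(1)[0][0]

enemy = AdaptiveEnemy()
for d in ["left", "left", "overhead", "left"]:
    enemy.observe(d)
print(enemy.choose_block())  # → left
```

        Crucially, the "training data" here is generated by the player during play, which sidesteps the scraped-data ethics problem entirely.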

    • geosoco@kbin.socialOP · 1 year ago

      arguably no?

      Though Getty did introduce their new AI today, trained only on images they own the copyright to. Arguably still not ethical, but at least it’s data they own.

        • geosoco@kbin.socialOP · 1 year ago

          I didn’t dig too much into it, but my guess would be no.

          Even if you could verify that, it’s still an ethical grey area: it’s taking works they paid photographers for and using them to generate new works, potentially without crediting the original photographers. Their own website tells people they have to credit the original photographer, and I’d be surprised if the AI lists all the works it used to create an image.

  • DeathbringerThoctar@lemmy.world · 1 year ago

    I’m 1000% behind an Anachronox reboot or sequel. The cliffhanger at the end of the original game is one of the largest unanswered questions of my childhood.

  • DigitalPaperTrail@kbin.social · 1 year ago

    I’m not a fan of games that are designed by committee, and I fear AI-generated games would take that to the 11th degree.

    Given that, I feel very specific aspects could still be vastly improved by AI, like games that implement procedural generation; I read his mention of procedural as fearing everything becoming procedural, rather than AI supplementing its pre-existing applications. Those kinds of games hit a plateau at a certain point in the gameplay loop: the limitations of the tiles or combinations of assets start to become very predictable, and at that point the generation no longer achieves what it set out to do.

    Also to take into account: AI needs a dataset to train on, and avoiding the homogenization he fears would involve producing datasets for specific tasks and differentiating them from one another. To me, devs producing and selling these unique datasets is inevitable, and there’s definitely going to be a lot of “shovelware”-quality datasets thrown around — the ethics of the data many of them contain will definitely be questionable.

    It’s a really mixed bag.