• Mereo@lemmy.ca
    1 year ago

    I read the article but felt like writing a “generic” comment about AI, since various studios also want to replace writers with AI. I’ve been thinking about this for a long time.

    • cryshlee@lemm.ee
      1 year ago

      I agree with your points. The term “AI” is a buzzword, and “machine learning” is the more accurate term for what powers tools like ChatGPT or Midjourney. I think it’s very important to differentiate between the different tools and not use a catch-all term such as “AI”, because that leads to widespread demonization of all tools that use machine learning, when the truth is that some models are exploitative while others are not.

      I think working towards accurate language should be a priority when litigating the use of machine learning, but people also have a responsibility to do their due diligence in learning what machine learning is and what it can do.

      • Mikina@programming.dev
        1 year ago

        Ranting about this at length was one of my last posts on Reddit. The whole AI situation feels exactly like COVID, where you had many “expert doctors” doomsaying that vaccines would be the end of us all.

        I’ve seen a few podcasts with industry AI experts where the guest managed to mention stuff like “to me, it felt sentient” or “in a few years, we will have AGIs that are a hundred times more intelligent than us. Imagine a standard commoner of the time, with an IQ around 70, talking to Einstein; now the smartest people alive will be the commoners compared to the AI.” That’s such bullshit. From my admittedly limited ML knowledge from college, I don’t see any way that anything using machine learning can become AGI - or get smarter than humans.

        Because ML needs feedback. And we can’t give feedback on something that’s more intelligent than we are. It’s as if the lowest commoner in the metaphor were staring over Einstein’s shoulder, and Einstein would only continue working on a theory if the commoner agreed that this was the correct approach. If the commoner said no, Einstein would try an entirely different approach. I don’t see how that can get smarter than people. Or how it can learn to “escape into the internet and destroy humanity”. Unless I’m missing a major advancement in ML algorithms, I can’t imagine any approach I know being capable of something like that. It just doesn’t work like that; it’s by definition not possible. (But if anyone knows more about the topic and disagrees with me, please let me know - I would really love to discuss this, since it’s pretty important to me.)
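        To make the “evaluator ceiling” argument concrete, here’s a toy sketch - not any real training algorithm, and all the numbers are invented for illustration. A hill-climbing learner is scored only by an evaluator that can’t distinguish answers beyond a certain level of quality, so the learner plateaus at the evaluator’s ceiling rather than at the true optimum:

```python
import random

# Toy illustration (all values invented): a learner can only improve
# as far as its evaluator can actually judge.

def true_quality(x):
    """Actual quality of an answer; the best answer sits at x = 0.9."""
    return -(x - 0.9) ** 2

def evaluator_feedback(x):
    """The evaluator can't tell answers beyond 0.6 apart, so
    everything above that level gets the same rating."""
    return true_quality(min(x, 0.6))

def train(steps=1000, seed=0):
    """Hill-climbing driven purely by evaluator feedback."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.uniform(-0.05, 0.05)
        if evaluator_feedback(candidate) > evaluator_feedback(x):
            x = candidate
    return x

x = train()
# x plateaus around 0.6 - the evaluator's ceiling - and never
# reaches the true optimum at 0.9.
```

        Of course this is a caricature; it only shows the narrow point that a learner optimizing a judge’s signal is bounded by what the judge can perceive.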

        But, on an entirely different point: ML will be better than people at the single task it’s given. Give it someone’s profile, built from data collected about them from the internet and smart devices, and let the model select a marketing campaign, email text or video to show them to convince them to vote for XY. Given enough time to experiment and train, the model will get results - and there’s nothing you can do about it. Even if you know you’re being manipulated, the ML model knows that too - based on your data - and will figure out a way to manipulate you anyway. That’s what I’m worried about, especially since Facebook has had years of data from billions of users to train and perfect its model. The Facebook feed is an ML training wet dream.
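        The campaign-selection loop described above is essentially a bandit problem. Here’s a minimal epsilon-greedy sketch - the campaign names and response rates are entirely made up - showing how such a system converges on whichever variant gets the best response, with no understanding of why:

```python
import random

# Hypothetical sketch: an epsilon-greedy bandit learning which campaign
# variant draws the best response. All names and rates are invented.
CAMPAIGNS = ["email_a", "email_b", "video_c"]

# True response rates, unknown to the optimizer in advance.
TRUE_RATE = {"email_a": 0.02, "email_b": 0.05, "video_c": 0.11}

def simulate_response(campaign, rng):
    return rng.random() < TRUE_RATE[campaign]

def optimize(trials=20000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    shown = {c: 0 for c in CAMPAIGNS}
    clicked = {c: 0 for c in CAMPAIGNS}
    rate = lambda c: clicked[c] / shown[c] if shown[c] else 0.0
    for _ in range(trials):
        if rng.random() < epsilon:
            choice = rng.choice(CAMPAIGNS)          # explore
        else:
            choice = max(CAMPAIGNS, key=rate)       # exploit best so far
        shown[choice] += 1
        clicked[choice] += simulate_response(choice, rng)
    return max(CAMPAIGNS, key=rate)

best = optimize()  # converges on the most persuasive variant
```

        The unsettling part is that real systems run this loop per audience segment, with far richer features than a three-item list.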

        We’re fucked; the only way to defend yourself is to avoid any kind of personalized content. Google search, the YT feed, news sites, streaming services… anything that’s personalized will potentially be able to manipulate you. That’s the biggest issue with ML.