• Artyom@lemm.ee
    link
    fedilink
    English
    arrow-up
    17
    ·
    11 months ago

    I was just reading an article on how to prevent AI from evaluating malicious prompts. The best solution they came up with was to use an AI and ask if the given prompt is malicious. It’s turtles all the way down.

    • Sanctus@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      11 months ago

      Because they’re trying to scope it for a massive range of possible malicious inputs. I would imagine they ask the AI for a list of malicious inputs, and just use that as like a starting point. It will be a list a billion entries wide and a trillion tall. So I’d imagine they want something that can anticipate malicious input. This is all conjecture though. I am not an AI engineer.