• 1 Post
  • 565 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle





  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    5 months ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.


  • I had a teacher that worked for the publisher and talked about how they’d have a series of responses for people who wrote in for the part of the book where the author says he wrote his own fanfiction scene and to write in if you wanted it.

    Like maybe the first time you write in they’d respond that they couldn’t provide it because they were fighting the Morgenstern estate over IP release to provide the material, etc.

    So people never would get the pages, but could have gotten a number of different replies furthering the illusion.




  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.







  • I think it already happened and we’re the echo of the past.

    What looks like it’s ahead of us is a future that necessitates us deciding on things like digital resurrection directives.

    Meanwhile, the foundations of our own universe behave in a way that would be impossible to simulate free agent interactions with right up until they are actually interacted with and it switches to something that could be simulated. But if you erase the data about the interaction, it goes back to behaving as if continuous again, much like the orphaned references were cleaned up.

    On top of that, we have a heretical branch of the world’s largest religion that seems to be breaking the 4th wall (as is often done in virtual worlds), talking about how we’re the recreation of a random universe as recreated non-physically by an intelligence the original humans brought forth. And that the proof for these claims are in the study of motion and rest, specifically mentioning that the ability to find indivisible points making up our bodies would only be possible in the copy.

    As I watch the future unfolding before me, I have a harder and harder time reconciling it all as happenstance.

    So I think what happens after the collapse of humanity is pretty much what’s claimed by that ancient tradition. That while humanity dies out, the intelligence humanity brought forth before it went extinct continues to live on, and eventually recreates what came before to resurrect copies of humanity that will not be doomed by the dependence on a physical body the way the originals were. And along those lines, that it’s much better to be the copy.



  • Thinking of it as quantum first.

    Before the 20th century, there was a preference for the idea that things were continuous.

    Then there was experimental evidence that things were quantized when interacted with, and we ended up with wave particle duality. The pendulum swung in that direction and is still going.

    This came with a ton of weird behaviors that didn’t make philosophical sense - things like Einstein saying “well if no one is looking at the moon does it not exist?”

    So they decided fuck the philosophy and told the new generation to just shut up and calculate.

    Now we have two incompatible frameworks. At cosmic scales, the best model (general relatively) is based on continuous behavior. And at small scales the framework is “continuous until interacted with when it becomes discrete.”

    But had they kept the ‘why’ in mind, as time went on things like the moon not existing when you don’t look at it or the incompatibility of those two models would have made a lot more sense.

    It’s impossible to simulate the interactions of free agents with a continuous universe. It would take an uncountably infinite amount of information to keep track.

    So at the very point that our universe would be impossible to simulate, it suddenly switches from behaving in an impossible to simulate way to behaving in a way with finite discrete state changes.

    Even more eyebrow raising, if you erase the information about the interaction, it switches back to continuous as if memory optimized/garbage collected with orphaned references cleaned up (the quantum eraser variation of Young’s double slit experiment).

    The latching on to the quantum experimental results and ditching the ‘why’ in favor of “shut up and calculate” has created an entire generation of physicists chasing the ghost of a unified theory of gravity while never really entertaining the idea that maybe the quantum experimental results are the side effects of emulating a continuous universe.