Basically: you ask for poems forever, and LLMs start regurgitating training data:

  • Sanctus@lemmy.world
    7 months ago

    A breakdown in the weighting system is the most probable cause. Don’t get me wrong, I am not an AI engineer or scientist, just a regular CS bachelor, so my reply probably won’t be as detailed or low-level as yours. But I would look at what is going on with whatever algorithm determines the weighting. I don’t know whether LLMs restructure the weighting for the next most probable word, or if it’s like a weighted drop table.
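
    The “weighted drop table” intuition roughly matches how next-token sampling works: the model assigns a probability to each candidate token, and one is drawn at random in proportion to those probabilities. A minimal toy sketch (the tokens and probabilities below are made up for illustration):

    ```python
    import random

    # Hypothetical next-token distribution: the model "weights" each
    # candidate token, much like a weighted drop table in a game.
    weights = {"the": 0.5, "a": 0.3, "poem": 0.15, "sing": 0.05}

    def sample_next_token(weights):
        # Weighted random draw; the weights themselves are not modified.
        tokens, probs = zip(*weights.items())
        return random.choices(tokens, weights=probs, k=1)[0]

    print(sample_next_token(weights))  # e.g. "the" about half the time
    ```

    The real distribution comes from a softmax over the model’s logits and covers the whole vocabulary, but the sampling step itself is this simple.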

      • Modva@lemmy.world
        7 months ago

        My fun guesswork here is that I don’t think the neural net weights change during querying, only during training. Otherwise the models could be permanently damaged by users.
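
        That guess is right: inference only reads the weights, it never writes them, so no number of queries can alter the model. A toy sketch of the distinction (hypothetical model and values):

        ```python
        class TinyModel:
            def __init__(self):
                # Weights are fixed once training finishes.
                self.weights = [0.2, -0.5, 1.1]

            def query(self, inputs):
                # Inference: a read-only computation over the frozen weights.
                return sum(w * x for w, x in zip(self.weights, inputs))

        model = TinyModel()
        before = list(model.weights)
        model.query([1.0, 2.0, 3.0])    # any number of queries...
        assert model.weights == before  # ...leaves the weights untouched
        ```

        In practice frameworks make this explicit; PyTorch, for example, disables gradient tracking at inference time with `torch.no_grad()`, so weight updates only happen in the training loop.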