• Candelestine@lemmy.world
    5 months ago

    Not that complicated. Which humans is it better than? People who want to watch the world burn, while they’re still on it, are not always that great at more difficult thinking.

    It’s not too different from how hard it is for states that want new methods of execution that are both humane and effective to actually find them. The people capable of doing that competently, doctors, won’t help them.

  • AutoTL;DR@lemmings.world
    5 months ago

    This is the best summary I could come up with:


    Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than artisanal false claims hand-crafted by humans.

    Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its Department of Computer Science, set out to examine whether LLM-generated misinformation can cause more harm than the human-generated variety of infospam.

    LLMs are already actively flooding the online ecosystem with dubious content.

    NewsGuard, a misinformation analytics firm, says that so far it has “identified 676 AI-generated news and information sites operating with little to no human oversight, and is tracking false narratives produced by artificial intelligence tools.”

    The misinformation in the study comes from prompting ChatGPT and other open-source LLMs, including Llama and Vicuna, to create content based on human-generated misinformation datasets, such as PolitiFact, GossipCop and CoAID.

    Eight LLM detectors (ChatGPT-3.5, GPT-4, Llama2-7B, and Llama2-13B, using two different modes) were then asked to evaluate the human and machine-authored samples.
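    The detector setup described above amounts to zero-shot classification: wrap each sample in a prompt, send it to a model, and parse a real/fake verdict. A minimal sketch of that loop is below; the prompt wording, function names, and label format are assumptions for illustration, not the study’s actual prompts.

    ```python
    # Hedged sketch of an LLM-as-detector pipeline (prompt text is assumed,
    # not taken from the Chen & Shu study).

    def build_detection_prompt(passage: str) -> str:
        """Wrap a passage in a one-word real/fake classification prompt."""
        return (
            "You are a fact-checking assistant. Read the passage below and "
            "answer with exactly one word, 'REAL' or 'FAKE'.\n\n"
            f"Passage: {passage}\nAnswer:"
        )

    def parse_verdict(model_output: str) -> bool:
        """Return True if the detector labeled the passage misinformation."""
        return model_output.strip().upper().startswith("FAKE")
    ```

    In a real run, `build_detection_prompt(sample)` would be sent to each detector model (e.g. GPT-4 or Llama2-7B, as in the study) and its reply fed to `parse_verdict()`; accuracy is then compared across human-written and LLM-generated samples.
    
    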


    The original article contains 246 words, the summary contains 162 words. Saved 34%. I’m a bot and I’m open source!