When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. Bernklau had worked for years as a courts reporter, and the AI chatbot had falsely blamed him for the very crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

    • EpeeGnome@lemm.ee · 2 months ago

      Yes, hallucination is now the standard term for this, but it’s a complete misnomer. A hallucination is when something that does not actually exist is perceived as if it were real. LLMs do not perceive, and therefore can’t hallucinate. I know, the word is stuck now and fighting against it is like trying to hold back the tide, but it really annoys me and I refuse to use it. The phenomenon would be better described as confabulation.

      • chiisana@lemmy.chiisana.net · 2 months ago

        The models are not wrong. An LLM is nothing but a statistical model that’s really good at predicting the next word likely to follow, based on the prior information it was given. It doesn’t have any understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct to their design.

        The users’ assumption/expectation of the output being factual is what is wrong. “Hallucination” is a fancy word in an attempt to make users not feel as upset when the output passage doesn’t match their assumption/expectation.
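        To make that concrete, here is a minimal sketch of what “predicting the next likely word” means: a toy bigram model in Python (my own illustrative example with a made-up corpus, nothing like the scale or architecture of a real LLM), which produces fluent-looking sentences with no notion of whether they are true.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then generate text by sampling a statistically likely continuation.
# Illustration of the principle only; not how Copilot or any real LLM works.
import random
from collections import defaultdict

corpus = (
    "the reporter covered the trial . "
    "the defendant was convicted . "
    "the reporter covered the defendant ."
).split()

# Record which words follow each word (a bigram "model").
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 7) -> str:
    """Pick each next word purely by how often it followed the previous one."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# One possible run prints: "the reporter covered the defendant was convicted ."
# Grammatical-looking and statistically plausible, yet it merges the reporter
# into the defendant's sentence: the model has no concept of truth, only of
# what tends to follow what.
```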

        • snooggums@lemmy.world · 2 months ago
          The users’ assumption/expectation of the output being factual is what is wrong.

          So randomly spewing out bullshit is the actual design goal of AI models? Why does it exist at all?

    • mindlesscrollyparrot@discuss.tchncs.de · 2 months ago
      Sure, but which of these factors do you think were relevant to the case in the article? The AI seems to have had a large corpus of documents relating to the reporter. Those articles presumably stated clearly that he was the reporter and not the defendant. We are left with “incorrect assumptions made by the model”. What kind of assumption would that be?

      In fact, all of the results are hallucinations. It’s just that some of them happen to be good answers and others are not. Instead of labelling the bad answers as hallucinations, we should be labelling the good ones as confirmation bias.

      • femtech@midwest.social · 2 months ago

        It was an incorrect assumption based on his name being in the article. It should have listed him only as the author, not as part of the cases.