Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • Eheran@lemmy.world · 6 months ago

    No, “hallucination” is a really good term. The output can be super confident and seemingly correct but still completely made up.

    • richieadler@lemmy.myserv.one · 6 months ago (edited)

      It’s a really bad term because it’s usually associated with a mind, and LLMs are nothing of the sort.