• krashmo@lemmy.world
    3 months ago

    In what way is presenting factually incorrect information as if it’s true not a bad thing?

    • 0laura@lemmy.dbzer0.com
      3 months ago

      LLMs operate on tokens, not letters, so this is expected behavior. A hammer is bad at controlling a computer, and that's okay. The issue is the people telling you to use a hammer to operate a computer, not the hammer's inability to do so.
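
A toy sketch of the point above: the model receives opaque token IDs, not characters. The split and vocabulary below are hypothetical (real tokenizers such as BPE produce different splits), but they show why letter counts are invisible at the token level:

```python
# Hypothetical token split -- real tokenizers differ, but the model
# only ever sees the token IDs, never the individual letters.
vocab = {"straw": 101, "berry": 102}       # toy vocabulary (assumed)
tokens = [vocab["straw"], vocab["berry"]]  # what the model "sees"

# Counting a letter needs the characters themselves, which code has
# direct access to but the model does not:
count = "strawberry".count("r")

print(tokens)  # [101, 102]
print(count)   # 3
```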

        • vcmj@programming.dev
          3 months ago

          It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. It's actually possible for an AI to get this answer consistently correct these days.
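
The script such a model would need is a one-liner. The word and letter here are assumed from the usual "how many r's in strawberry" example the thread is alluding to:

```python
# Count occurrences of a letter in a word -- the kind of trivial
# script a code-interpreter model could run instead of guessing
# from its token representation.
word = "strawberry"   # assumed example word
letter = "r"

count = word.count(letter)
print(f"'{letter}' appears {count} times in '{word}'")  # 3
```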