• htrayl@lemmy.world · 5 months ago (+7/−19)

    That’s also pretty true for people, unfortunately. People are remarkably bad at differentiating fact from fiction.

    • kaffiene@lemmy.world · 5 months ago (+13/−1)

      No, that’s not it at all. People know that they don’t know some things. LLMs do not.

      • sugar_in_your_tea@sh.itjust.works · 5 months ago (+2)

        Exactly. The LLM isn’t “thinking”; it’s just matching inputs to outputs with some randomness thrown in. If your data is high quality, the answers will be appropriate a lot of the time. If your data is poor, it’ll output surprising things more often.
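
        To make that concrete, here’s a toy sketch (made-up words and scores, nothing from any real model) of what “matching inputs to outputs with some randomness” means: score the possible next words, turn the scores into probabilities, and sample one, with a “temperature” knob controlling how much randomness gets thrown in.

        ```python
        # Toy illustration only: hypothetical scores for the next word after
        # "The capital of France is". Real models score tens of thousands of tokens.
        import math
        import random

        logits = {"Paris": 9.0, "Lyon": 4.0, "banana": 1.0}

        def sample_next(logits, temperature=1.0):
            # Softmax: convert scores into probabilities, scaled by temperature.
            scaled = {word: score / temperature for word, score in logits.items()}
            total = sum(math.exp(v) for v in scaled.values())
            probs = {word: math.exp(v) / total for word, v in scaled.items()}
            # Pick a word at random according to those probabilities.
            words, weights = zip(*probs.items())
            return random.choices(words, weights=weights, k=1)[0]

        # Low temperature: almost always the most likely word.
        # High temperature: more "surprising" picks, true or not.
        print(sample_next(logits, temperature=0.5))
        print(sample_next(logits, temperature=2.0))
        ```

        Nothing in that loop checks whether the output is true; it only checks whether it’s likely given the data.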

        It’s a really cool technology for how much we get out of how little effort we put in, but it’s not “thinking” in any sense of the word. If you want it to “think,” you’ll need to put in a lot more effort.

        • Richard@lemmy.world · 5 months ago (+1)

          Your brain is also “just” matching inputs to outputs using complex statistics, a huge number of interconnects, and clever mixed digital-analog ionic circuitry.

          • sugar_in_your_tea@sh.itjust.works · 5 months ago (+1)

            At a super high level, sure. But human brains have also had tens of thousands of years (perhaps hundreds of thousands) to develop, so it’s not like a newborn baby is working from a blank slate; there’s a ton of evolutionary circuitry in there that influences things.

            That’s why an algorithm trained on human data will never quite work like a human. That doesn’t mean it’s not intelligent; it just needs to be measured against a different set of criteria. That’s also why I think the Turing test is a bad metric, since an LLM could just find “proper” responses from a bunch of existing conversations without having to reason about the conversation at all.
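
            Purely as a hypothetical sketch of what I mean (a tiny made-up corpus, not how any real chatbot is built): a bot that just retrieves the closest recorded exchange and parrots back the stored reply can sound “proper” in conversation without reasoning about anything.

            ```python
            # Toy lookup "chatbot": no understanding, just nearest-match retrieval
            # over a pile of existing conversations (tiny made-up corpus here).
            from difflib import SequenceMatcher

            corpus = [
                ("What's the weather like?", "Pretty gloomy today, bring an umbrella."),
                ("Do you think machines can think?", "That's a deep question! What do you think?"),
                ("Tell me a joke", "Why did the chicken cross the road? To get to the other side."),
            ]

            def respond(prompt: str) -> str:
                # Find the stored prompt most similar to the incoming one...
                best = max(
                    corpus,
                    key=lambda pair: SequenceMatcher(None, prompt.lower(), pair[0].lower()).ratio(),
                )
                # ...and return its canned reply, no reasoning involved.
                return best[1]

            print(respond("do you believe a machine can think?"))
            ```

            It gives plausible answers for anything close to its corpus, which is exactly why “sounds convincing in conversation” is a weak bar for intelligence.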

            Real intelligence, imo, would need to be able to learn to solve puzzles without seeing similar puzzles. That’s more the domain of other “AI” fields like neural networks and machine learning. But each field approaches problems in a different, limited way, so general AI will be quite complicated unless we find a new approach.