• 0 Posts
  • 44 Comments
Joined 6 months ago
Cake day: May 19th, 2024


  • Do we know how human brains reason? Not really… Do we have an abundance of long chains of reasoning we can use as training data?

    …no.

    So we don’t have the training data to get language models to talk through their reasoning, especially not in novel or personable ways.

    But also - even if we did, that wouldn’t produce ‘thought’ any more than a book about thought can produce thought.

    Thinking is relational. It requires an internal self-awareness. No amount of discussing that in text makes a book suddenly conscious.

    This is the idea that “sentience can’t come from semantics”… More is needed than that.


  • Guarantee it would be a widely used substance if it weren’t for the smell… People would be making sculptures out of it and fixing up cracks in their homes. It would be considered innocent and fun, and some would alter their diets to get a particular consistency.

    Incredibly gross to us, and probably still unhygienic. Maybe that’s why it smells, to keep us away from it!

  • How about, as a baseline benchmark: something autonomous that makes choices of its own will, and does long-term learning that influences the choices it makes.

    LLMs don’t qualify. They’re trained, retain information within a conversation, then forget it once the conversation is closed. They do no long-term learning after their initial training, so they’re forever trapped regurgitating within the parameters set by their training data at the time they were trained.

    That’s just a very fancy way to search and read out the training data. Definitely not an active intelligence in there.

    They also don’t have any autonomy; they’re not active of their own accord when they’re not being addressed. They’re not sitting there thinking, so they have no internal personal landscape of thought, no place in which a private intelligence can be at play.

    They’re inert.



  • Oof, programmers calling LLMs “AI” - that’s embarrassing. Glorified text generators don’t need ethics. What’s the risk? Making the Internet’s worst texts available? Who cares.

    I’m from an era when The Anarchist Cookbook and the Unabomber’s manifesto were both widely available - and I’m betting they still are.

    There’s no obligation to protect people from “dangerous text” - there might be an obligation to allow people access to it, though.