• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 12th, 2023




  • Models are geared toward producing the response a human rates best, not necessarily the correct answer. A model’s first answer is based on the probability of autocompleting from a huge sample of data, and versions with memory adjust later responses according to how well the human is accepting the answers. There is no actual processing of the answers, although that may change in the latest variations being worked on, where components cycle through hundreds of generated attempts at a problem to verify and pick the best answer. Basically, rather than spit out the first autocomplete answer, the model has subprocessing to weed out the junk and narrow in on a hopefully good result (a rough sketch of the idea follows). Still not AGI, but it’s more useful than the first LLMs.
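
    A minimal sketch of that best-of-n idea in Python. Here generate and score are hypothetical stand-ins for a model call and a verifier; a real system would use an actual LLM plus something like a reward model or unit tests, and this isn’t any particular lab’s pipeline.

    ```python
    import random

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a single LLM completion.
        return f"candidate {random.randint(0, 9999)} for: {prompt}"

    def score(prompt: str, answer: str) -> float:
        # Hypothetical stand-in for a verifier (reward model, tests, etc.).
        return random.random()

    def best_of_n(prompt: str, n: int = 100) -> str:
        # Sample n candidates and keep the highest-scoring one,
        # instead of trusting the first autocomplete.
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda c: score(prompt, c))

    print(best_of_n("Explain why the sky is blue."))
    ```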


  • For what it’s worth, I’ve heard there’s a major update coming sometime this month for Kbin, the first in a while, so it might get a lot better (with the expected new bugs to chase). On desktop it’s been mostly okay with occasional glitches, but the real question, and the one that’s hard to measure, is how well content is federating in and out. It’s hard to have a baseline when all the software is beta at best.




  • Where their creativity lies at the moment is a controlled remixing of previous work. In some areas that fits the definition of creativity, such as artistic images or some literature; less so in things that require precision, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is (usually) actually thinking throughout the process about what to add and what to take away. Right now, human feedback on the results is still essential. I can’t think of a single example where we’ve unleashed LLMs into the world confident enough in their output not to filter it. It’s still only a generation tool, albeit a very complex one.

    What’s troubling throughout the whole explosion of LLMs is how safety is still an afterthought, or a “we’ll figure it out” mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I’m not even sure I can say they haven’t sparked something: an AGI that gains awareness fast enough sure isn’t going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…




  • Good counterpoint. I meant not possible for us to figure out, rather than impossible in principle. We may simply be running out of time more than anything. Maybe that’s why the top people are so eager to push into the unknown (aside from the profitability, of course): they see we have a small window of high-tech opportunity, and if we’re too cautious and slow we’ll miss it. Naturally the big assumption is that AGI will be aligned and able to help us fix things, rather than the often-portrayed versions that decide we are the problem, or that the problems are insurmountable and the AI turns itself off.


  • To continue the thought: even if the alignment problem within AI could be solved (I don’t think it fully can), who is developing this AI, and who determines that it matches up with human needs? Listening to the experts acknowledge the issues and dangers, and in the next sentence indulge “but if we can do it” fantasies, is always concerning. It’s yet another example of a few people determining the rest of humanity’s future at very high risk. Our best luck would be if AGI and beyond simply isn’t possible, and even then “dumb” AI still has similar misalignment issues - we see them in current language models, yet ignore the red flags and make things more powerful.

    I forgot to add - I’m totally on the side of our AI overlords and Roko’s Basilisk.


  • The internet as most people know it and as companies depend on it isn’t that old.

    The difference being discussed here is a single copy vs. the potential for redundancy. The best way for something to outlive even the places it’s stored is repetition (a toy sketch of the idea follows). That goes against both how we’ve grown things on the internet so far and the talk of competition among instances where the biggest one wins. It’s far better to have many groups that share information in some way but remain their own entities, not dependent on the rest.
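
    A toy illustration of that redundancy argument in Python, not ActivityPub or any real federation protocol: content fanned out to several independent instances survives the loss of any one of them, where a single copy would not.

    ```python
    # Three hypothetical, independent instances, each with its own storage.
    replicas: dict[str, dict[str, str]] = {
        "instance-a": {}, "instance-b": {}, "instance-c": {},
    }

    def publish(post_id: str, body: str) -> None:
        # Repetition: push a copy to every instance instead of storing one.
        for store in replicas.values():
            store[post_id] = body

    def fetch(post_id: str) -> str | None:
        # Any surviving replica can serve the post.
        for store in replicas.values():
            if post_id in store:
                return store[post_id]
        return None

    publish("post-1", "hello fediverse")
    del replicas["instance-a"]   # one instance disappears entirely
    print(fetch("post-1"))       # still retrievable: "hello fediverse"
    ```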


  • I get the concern, but long-term persistence is probably a rarity. The internet is still young. If anything, a federated group of linked communities will last far longer than a single server, even a large corporation’s. For the weeks that Lemmy et al. have been growing, how best to build communities that connect and last has been an ongoing question.