True of many things we take for granted now. It would be a different world entirely. Another non-computer example would be the 3-point seat belt that Volvo left as an open patent, saving countless lives over the past decades.
Or a different “feel” when turned on vs. off (more resistance or something). They spent effort printing all that text to show where the switch was when a universal 0/1 would have made it clear.
I can’t think of any example of a button or switch that, by itself, makes clear whether it is engaged or not. A button could be assumed to be on when pressed in, but that isn’t always the case - emergency stops, for example, work the other way.
Models are geared toward producing the response humans will rate best, not necessarily the correct answer itself. The first answer is based on the probability of autocompleting from a huge sample of data, and versions that have a memory adjust later responses to how well the human is accepting the answers. There is no actual processing of the answers, although that may be coming in the latest variations being worked on, where components cycle through hundreds of generated attempts at a problem to try to verify and pick the best answers. Basically, rather than spit out the first autocomplete answer, it has subprocessing to weed out the junk and narrow in on a hopefully good result. Still not AGI, but it’s more useful than the first LLMs.
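In toy terms, that “generate many, pick the best” idea looks roughly like the Python sketch below; `generate()` and `score()` are hypothetical stand-ins for a model call and a verifier pass, not anyone’s real API:

```python
import random

# Stand-in for an LLM call: returns one autocompleted candidate answer.
def generate(prompt: str) -> str:
    return f"candidate {random.randint(0, 999)} for: {prompt}"

# Stand-in for a verifier pass (another model call, unit tests, a checker...).
def score(answer: str) -> float:
    return random.random()

def best_of_n(prompt: str, n: int = 100) -> str:
    # Instead of returning the first autocomplete, sample many candidates
    # and keep whichever one the verifier rates highest.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("What is 17 * 24?", n=10))
```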
For what it’s worth, I’ve heard there’s a major update coming for Kbin sometime this month, the first in a while, so it might get a lot better (with the expected new bugs to chase). On desktop it’s been mostly okay with occasional glitches, but the real question, and one that’s hard to measure, is how well things are getting in and out of the fediverse. It’s hard to have a baseline when all the software is beta at best.
It all changes so fast. For a video source that keeps up with the latest developments, I’d recommend the YouTube channel “AI Explained”.
In the context of LLMs, I think that means giving them access to their own outputs in some way.
That’s what the AutoGPTs do (as well as others; there are so many now): they pick the task apart into smaller pieces and feed the results back in, building up a final result, and that works a lot better than a single one-shot input. The biggest advantage, and the main reason these were developed, was to keep the LLM on course without deviating.
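The core loop is something like this toy sketch; `llm()` is a hypothetical stand-in for the model call, and the real tools add planning, tool use, and stopping logic on top:

```python
# Hypothetical stand-in for the underlying model call.
def llm(prompt: str) -> str:
    return f"<result of: {prompt[:40]}...>"

def run_task(goal: str, steps: int = 5) -> str:
    # Ask the model to break the goal into smaller subtasks.
    plan = llm(f"Break this goal into numbered subtasks: {goal}")
    results: list[str] = []
    for _ in range(steps):
        # Feed the accumulated results back in with every step, which is
        # what keeps the model anchored to the original goal.
        done = "\n".join(results)
        results.append(llm(f"Goal: {goal}\nPlan: {plan}\nDone so far:\n{done}\nDo the next step."))
    # Finally, fold the step results into one answer.
    return llm("Combine these step results into a final answer:\n" + "\n".join(results))

print(run_task("Research and summarize a topic"))
```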
Where their creativity lies at the moment seems to be a controlled mixing of previous things, which fits the definition of creativity in some areas, such as artistic images or some literature, and less so in things that require precision to work, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is (usually) actually thinking throughout the process about what to add and subtract. Right now human feedback on the results is still important. I can’t think of any example where we’ve successfully unleashed LLMs into the world confident enough of their output not to filter it. It’s still only a tool of generation, albeit a very complex one.
What’s troubling throughout the whole explosion of LLMs is how safety remains an afterthought, a “we’ll figure it out” mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I’m not even sure I can say they haven’t sparked something, as an AGI that gains awareness fast enough sure isn’t going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…
Hallucinations come from the training weights settling on whatever counts as a satisfactory-looking answer for the output. A future AGI, or LLMs guided by one, would look at the human responses and work out why the answers weren’t good enough, but current LLMs can’t do that. I’ll admit I don’t know how the longer-memory versions work, but there’s still no actual thinking; possibly they just wrap the previously generated text up with the new request to steer the next answer closer.
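If that guess is right, the “memory” could be as simple as this sketch, with `llm()` again a hypothetical stand-in for the model:

```python
# Hypothetical stand-in for the model.
def llm(prompt: str) -> str:
    return f"<reply conditioned on {len(prompt)} chars of context>"

history: list[str] = []

def chat(user_message: str) -> str:
    # The "memory" is nothing more than replaying the transcript: earlier
    # turns are pasted in front of the new request so the next autocomplete
    # leans toward a consistent-looking answer. No thinking involved.
    transcript = "\n".join(history)
    reply = llm(f"{transcript}\nUser: {user_message}\nAssistant:")
    history.append(f"User: {user_message}")
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Why was my last answer wrong?"))
```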
Maybe Lemmy Explorer will help? You can look at instance stats as well as communities. You can also switch to Kbin magazines using the top-right menu, though that view doesn’t have as much info, just subscriber counts.
Good counterpoint. I meant “not possible for us to figure out” rather than impossible outright. We may simply be running out of time more than anything. Maybe that’s why the top people are so eager to push into the unknown (aside from the profitability, of course): they see we have a small window of high-tech opportunity, and by being too cautious and slow we’ll miss it. Naturally, the big assumption is that AGI will be aligned and able to help us fix things, rather than the often-portrayed versions that decide we are the problem, or that the problems are insurmountable and the AI turns itself off.
To continue the thought: even if the alignment problem within AI could be solved (I don’t think it fully can), who is developing this AI and determining that it matches up with human needs? Listening to the experts acknowledge the issues and dangers and then, in the next sentence, speculate about “but if we can do it” fantasies is always concerning. Yet another example of a few determining the rest of humanity’s future at very high risk. Our best luck would be if AGI and beyond simply isn’t possible, and even then the “dumb” AI have similar misalignment issues - we see them in current language models, and yet ignore the flags and make things more powerful.
I forgot to add - I’m totally on the side of our AI overlords and Roko’s Basilisk.
The internet as most people know it and as companies depend on it isn’t that old.
The difference being discussed here is a single existence vs. the potential for redundancy. The best way for something to outlive even the places it’s stored is repetition. That goes against both how we’ve grown things on the internet so far and the talk about competition among instances where the biggest one wins. It’s far better for there to be many groups that share information in some way but are their own entities, not dependent on the rest.
I get the concern, but long-term persistence is probably a rarity. The internet is still young. If anything, a federated group of linked communities will last far longer than a single server of even a large corporation. In the weeks that Lemmy et al. have been growing, how best to develop communities that connect and last has been an ongoing question.
It’s different software displaying the same content, like using Outlook, Gmail, or Thunderbird to read the same email. You can go to kbin.social and compare it to what you’ve got. There’s also a bunch of scripts posted in kbinStyles to further customize the looks (those might get wrapped into later versions of kbin).
Kbin has the ability to filter out specifics, but it’s simple wildcard matching, not regex.
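I don’t know Kbin’s exact matcher, but the gap between the two is roughly what Python’s fnmatch vs. re modules show: wildcards only give you `*` and `?`, while regex adds alternation, character classes, and so on:

```python
import fnmatch
import re

title = "Breaking: crypto scam #4512 strikes again"

# Wildcard style: * matches any run of characters, ? matches exactly one.
print(fnmatch.fnmatch(title, "*crypto*"))      # True
print(fnmatch.fnmatch(title, "*scam #????*"))  # True

# Regex can express things wildcards can't, like alternation and digit classes.
print(bool(re.search(r"crypto|nft", title)))   # True
print(bool(re.search(r"#\d{4}\b", title)))     # True
```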
Lemmyverse includes Kbin instances as well; you just have to select them in the upper right. I suppose there’s a reason we can’t see both at the same time.
I haven’t seen them in production yet, but for years I’ve heard of the idea of infrared detection in car systems to show warm bodies better at night on a screen or heads-up display. There was also the idea of pairing that with IR lighting and road markings to light up the road better - like having high beams on without blinding other drivers, which is far too common these days.