• 0 Posts
  • 53 Comments
Joined 8 months ago
Cake day: March 3rd, 2024



  • If Lemmy and the other fediverse discussion platforms had developed more slowly and naturally, there might have been more of a country/instance symmetry. But anyone who was around when the Reddit implosion and migration happened knows it was total chaos, and where a new user signed up was a grab bag. Lemmy and the rest were not ready for such a shift, and now that everyone has been in a place or two for a while, short of a closure or blocking there’s no reason to move to a matching country instance, if one even exists. People mainly look for popularity, activity, themes, and engagement, and if that’s found on the other side of the globe, it works.




  • Is there a longer video anywhere? Looking closely, I have to wonder where the hell that deer came from. There’s a car ahead of the Tesla in the same lane; I presume it quickly moved back in once it passed the deer? And the deer didn’t spook or anything from that car?

    This would have been hard for a human driver to avoid hitting, but I know the issue is that the right equipment would have done better than human vision, which should be the goal. And it didn’t detect the impact either, since it didn’t stop.

    But I just think it’s peculiar that the deer just popped up there without any sign of motion.




  • Maybe it’s not the right place for it, but the mentions of AI safety, and safety in general, don’t match the actual definition of AI safety as used in AI research. There’s no mention of the alignment constraints that need to be held to. This reads as “we need to be careful who gets access to this” rather than any warning to AI companies about their direction and haste.




  • The flaw in the question is assuming there is a clear dividing line between species. Evolutionary change is a continuous process. We only draw dividing lines where we see enough differences, either between long-dead specimens in the fossil record or between living populations. The question has no answer, only a long explanation of why that isn’t how any of this works.


  • Even a hypothetically true artificial general intelligence would still not be a moral agent

    That’s a deep rabbit hole that can’t be stated as a known fact. It’s absolutely true right now with LLMs, but at some point the line could be crossed. If, when, how, and by what definition is a long-running debate that’s nowhere near resolved.

    It’s entirely possible that an AGI/ASI could come about and be both superintelligent and self-conscious and still have no sense of morality. But how can we, at human level, even comprehend what’s possible? That’s the real danger: we have no idea what we could be heading towards.




  • Keep in mind that at its core, an LLM is a probability-based autocompletion mechanism built on the vast training data it was fed. A fine-tuned coding LLM has data better suited to producing coding solutions. So when you ask it to generate code for a very specific purpose, it’s much more likely to find a mesh of matches that will work well most of the time. Be more generic in your request and you can get all sorts of things, some that even look good at first glance but have flaws that will break them. The LLM doesn’t understand the code it gives you, nor can it reason about whether it will function. (There’s a toy sketch of the autocomplete idea at the end of this comment.)

    Think of an analogy where you Googled a coding question, took the first twenty hits, and merged all the results together into one answer. An LLM does a better job than this, but the idea is similar. If the data it was trained on was flawed from the beginning, such as some of the hits you might find on Reddit or Stack Overflow, how can it possibly give you perfect results every time? The analogy also shows why a much narrower query may work more often for coding: if you Google a niche question, you’ll find more accurate, or at least more relevant, results than if you run a general search and paste together anything that looks close.

    Basically, if you can help the LLM home in on the better data from the start, you’re more likely to get what may be good code. The two sketches below illustrate both points.
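
    To make the “probability autocompletion” point concrete, here’s a toy sketch. Everything in it (the token table, the probabilities) is made up for illustration; a real LLM conditions on the whole context with a neural network, not a lookup table, but the next-token idea is the same:

    ```python
    import random

    # Made-up conditional probabilities: given the previous token, how likely
    # is each candidate next token? (A real model conditions on the entire
    # context window, not just the last token.)
    NEXT_TOKEN_PROBS = {
        "def":   {"main": 0.4, "parse": 0.35, "foo": 0.25},
        "parse": {"(": 0.9, ":": 0.1},
    }

    def complete(prompt_tokens, steps=2):
        """Append up to `steps` tokens, sampling each from the table above."""
        tokens = list(prompt_tokens)
        for _ in range(steps):
            dist = NEXT_TOKEN_PROBS.get(tokens[-1])
            if dist is None:
                break  # nothing "learned" for this context
            choices, weights = zip(*dist.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return tokens

    print(complete(["def"]))  # e.g. ['def', 'parse', '(']
    ```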
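
    And a second toy sketch for the “home in on the better data” point: a vague prompt leaves many completions about equally plausible (high entropy), while a specific prompt concentrates the probability on a few good ones (low entropy). The distributions are invented purely for illustration:

    ```python
    import math

    # Hypothetical next-step distributions for a vague vs. a specific prompt.
    GENERIC  = {"sort": 0.2, "search": 0.2, "parse": 0.2, "print": 0.2, "loop": 0.2}
    SPECIFIC = {"binary_search": 0.85, "linear_search": 0.15}

    def entropy_bits(dist):
        """Shannon entropy: how spread out (uncertain) the distribution is."""
        return -sum(p * math.log2(p) for p in dist.values())

    print(entropy_bits(GENERIC))   # ~2.32 bits: many equally plausible continuations
    print(entropy_bits(SPECIFIC))  # ~0.61 bits: the model is far more "decided"
    ```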