Hard drive space might be bottlenecked by the holographic principle.
> people need to use these tools responsibly
Have you met people…?
That’s tragic; LG monitors used to be great.
But it’s trivial to torrent their content at whatever resolution I want…
Look, we’re talking people who call ninety-nine “four twenty ten nine”; you can’t expect them to name things properly.
That’s a good rule of thumb… but it’s probably not enough; no reasonable definition would call Jupiter a star or even a brown dwarf, or the Solar System a binary system, yet the Sol-Jupiter barycentre is outside the Sun… (the whole system’s barycentre is sometimes inside the Sun, but that’s because Saturn’s, Uranus’s, and Neptune’s pulls cancel Jupiter’s).
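For anyone who wants to check the arithmetic, here’s a rough two-body sketch (the masses and distances are approximate textbook values, and it ignores eccentricity and the other planets entirely):

```csharp
// Two-body barycentre distance from the primary's centre: d = a * m2 / (m1 + m2).
// Approximate textbook values; eccentricity and the other planets ignored.
const double sunMassKg     = 1.989e30;
const double jupiterMassKg = 1.898e27;
const double jupiterSmaKm  = 7.785e8; // Jupiter's semi-major axis
const double sunRadiusKm   = 6.957e5;

double barycentreKm = jupiterSmaKm * jupiterMassKg / (sunMassKg + jupiterMassKg);

Console.WriteLine($"Barycentre: {barycentreKm:E2} km from the Sun's centre"); // ~7.42E+005
Console.WriteLine($"Sun radius: {sunRadiusKm:E2} km");                        // ~6.96E+005
Console.WriteLine($"Outside the Sun: {barycentreKm > sunRadiusKm}");          // True, by ~7%
```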
I’d call the barycentre thing a necessary but not sufficient requirement; a proper definition of double planet should probably also take into account other factors like the relative mass and density of the bodies, and their minimum and maximum distance.
I use a 13-year-old PC because a newer one would be infected with Windows 11. (The company refuses to migrate to Linux because some of the software they use isn’t compatible.)
And I’m saying that I could have been that developer if I were twenty years younger.
They’re not bad developers, they just haven’t yet been hurt enough to develop protective mechanisms against scams like these.
They are not the problem. The scammers selling LLMs as something they’re not are.
I was lucky enough to not have access to LLMs when I was learning to code.
Plus, over the years I’ve developed a good thick protective shell (or callus) of cynicism, spite, distrust, and absolute seething hatred towards anything involving computers, which younger developers yet lack.
No. LLMs are very good at scamming people into believing they’re giving correct answers. It’s practically the only thing they’re any good at.
Don’t blame the victims, blame the scammers selling LLMs as anything other than fancy but useless toys.
Having to deal with pull requests defecated by “developers” who blindly copy code from chatgpt is a particularly annoying and depressing waste of time.
At least back when they blindly copied code from stack overflow they had to read through the answers and comments and try to figure out which one fit their use case better and why, and maybe learn something… now they just assume the LLM is right (despite the fact that they asked the wrong question and even if they had asked the right one it’d’ve given the wrong answer) and call it a day; no brain activity or learning whatsoever.
I’m both, and while I do hate myself, I don’t think it’s related, so I’m not sure I get it.
(I hate computers more, though, except when they’re turned off; no bugs when they’re off. But they’re the only thing I’m good enough at to make a living off of.)
> makes it sound like they’re all equal, and there hasn’t been any progression
Programming peaked with Lisp (and SQL for database stuff).
Every “progression” made since Lisp has been other languages adding features to (partially, but not quite completely) do stuff that could already be done in Lisp, only less well implemented (though probably with fewer parentheses).
They are all flawed and they all encourage some bad design patterns.
On the other hand, Lisp.
What’s worse is that half the coordinates probably ended up as dates…
Are search engines worse than they used to be?
Definitely.
Am I still successfully using them several times a day to learn how to do what I want to do (and to help colleagues who use LLMs instead of search engines learn how to do what they want to do once they get frustrated enough to start swearing loudly enough for me to hear them)?
Also yes. And it’s not taking significantly longer than it did when they were less enshittified.
Are LLMs a viable alternative to search engines, even as enshittified as they are today?
Fuck, no. They’re slower, they’re harder and more cumbersome to use, their results are useless on a good day and harmful on most, and they give you no context or sources to learn from; so, best-case scenario, you get a suboptimal, partial, buggy solution to your problem which you can’t learn anything useful from (even worse, if you learn it as the correct solution you’ll never learn why it’s suboptimal or, more probably, downright harmful).
If search engines ever get enshittified to the point of being truly useless, the alternative isn’t LLMs. The alternative is to grab a fucking book (after making sure it wasn’t defecated by an LLM), like we did before search engines were a thing.
> I’ve been finding it a lot harder recently to find what I’m looking for when it comes to coding knowledge on search engines
Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.
That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take; and where an LLM would simply give me a single answer (which I’d then have to test and fix), a search engine gives me multiple commented answers which I can compare and learn from.
I remembered another example: I was checking a pull request and it wouldn’t compile; the programmer had apparently used an obscure `internal` function to check if a string was empty instead of `string.IsNullOrWhiteSpace()` (in C#, `internal` means “I designed my classes wrong and I don’t have time to redesign them from scratch; this member should be `private` or `protected`, but I need to access it from outside the class hierarchy, so I’ll allow other classes in the same assembly to access it, but not ones outside the assembly”; similar use case as `friend` in C++; it’s used a lot in the standard .NET libraries).
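A minimal sketch of what that looks like (the class and member names here are made up for illustration, not the actual .NET internals):

```csharp
// Hypothetical example, compiled into assembly A:
public static class StringHelpers
{
    // Callable from any code in assembly A; invisible outside it.
    internal static bool IsEmptyInternal(string s) => s.Length == 0;
}

// Elsewhere in assembly A this compiles fine:
//   bool ok = StringHelpers.IsEmptyInternal("");
// In assembly B, which references A, the same call fails with:
//   error CS0122: 'StringHelpers.IsEmptyInternal(string)' is
//   inaccessible due to its protection level
```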
Now, that particular `internal` function isn’t documented practically anywhere, and being `internal` it can’t be used outside its particular library, so it wouldn’t pop up in any example the coder might have seen… but .NET is open source, and the library’s source code is on GitHub, so chatgpt/copilot has been trained on it, so that’s where the coder must have gotten it from.
The thing, though, is that LLMs, being essentially statistics engines that’ll just spit out the most statistically likely token after a given sequence of tokens, have no way whatsoever to “know” that a function is `internal`. Or `private`, or `protected`, for that matter. That function is used in the code they’ve been trained on to check if a string is empty, so they’re just as likely to output it as `string.IsNullOrWhiteSpace()` or `string.IsNullOrEmpty()`.
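(For what it’s worth, those two public helpers aren’t even interchangeable with each other, never mind with some undocumented `internal` one; they only differ on whitespace-only input, but that difference matters:)

```csharp
// Both are real .NET methods; both return true for null and "".
Console.WriteLine(string.IsNullOrEmpty("   "));      // False
Console.WriteLine(string.IsNullOrWhiteSpace("   ")); // True
```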
Hell, `if (condition)` and `if (!condition)` are probably also equally likely in most places… and I for one don’t want to have to debug code generated by something that can’t tell those apart.
It could be, in a monkeys-with-typewriters sort of way… 🤷‍♂️
That’s their end goal: no choice whatsoever, you watch 30 minutes of ads, followed by the 30 second video the algorithm wants you to watch (which is also an ad), 30 more minutes of ads, and so on.
And, since they also own chrome, you can’t go to any other page without first spending at least six hours watching youtube.