Did I fall into a 1999 Slashdot comment section somehow?
I put a bike on a trunk rack on the back of our Toyota. The collision-avoidance system thought something was behind the car and kept slamming on the brakes while I tried to back out of the driveway.
Then there’s the lane assist that jerks the wheel while going through construction zones, because the lines on the road don’t match up with where you need to be.
It actually did, but not in a way people expected at the time that movie was made. It changed a lot underneath the hood.
We shall break into the desktop and laptop market! Let’s start by severing ties with one of the most successful companies to do that so far.
The x86 license itself doesn’t matter much anymore. Those patents expired a long time ago. The early x86_64 patents are held by AMD, but those are also expiring soon.
There are later advancements past that which are held by both Intel and AMD, so you still can’t make a modern x86 CPU on your own. Soon, you’ll be able to make a CPU with an instruction set compatible with the first Athlon 64-bit processors, but that’s as far as it goes.
That’s exactly what I’m getting at. AI is about pushing the boundary. Once the boundary is crossed, it’s not AI anymore.
Those chess engines don’t play like human players. If you were to look at how they determine their moves, you might conclude they’re not intelligent at all by the same metrics you’re using to dismiss ChatGPT. But at this point, they are almost impossible for humans to beat.
AI as a field of computer science is mostly about pushing computers to do things they weren’t good at before. Recognizing colored blocks in an image was AI until someone figured out a good way to do it. Playing chess at grandmaster levels was AI until someone figured out how to do it.
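To make the chess point concrete: a classic engine isn’t doing anything that looks like human intuition, it’s doing brute-force tree search with a numeric evaluation function. Here’s a minimal sketch of that idea (the minimax algorithm) over a toy two-ply game tree; the tree and scores are made up purely for illustration, not taken from any real engine.

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Return the best achievable score from `state` by exhaustive search."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)  # leaf: just score the position
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    # The maximizer picks the highest score, the minimizer the lowest.
    return max(scores) if maximizing else min(scores)

# Toy 2-ply game tree encoded as nested lists; leaves are scores.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s

best = minimax(tree, 2, True, children, evaluate)
print(best)  # → 3: the maximizer picks the branch whose worst case is best
```

Real engines add pruning, hashing, and far better evaluation, but the core is still this kind of search: "intelligent" play emerging from a mechanism nothing like a human's.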
Along the way, it created a lot of really important tools. Things like optimizing compilers, virtual memory, and runtime environments. The way computers work today was built off of a lot of things out of the old MIT CSAIL labs. Saying “there’s no I to this AI” is an insult to their work.
Censoring topics is the least of the issues with the AI bubble.
It’s really difficult and expensive for a home user to do a 3-2-1 backup properly, especially once you push past a few TB.
My NAS uses a pair of SAS drives, and they make noises at boot up that would be concerning in a desktop. They’re quite obnoxious. But I keep them in part of the house where they don’t bother me.
Right, I think the future isn’t Intel v AMD, it’s AMD v ARM v RISC-V. Might be hard to break into the desktop and laptop space, but Linux servers don’t have the same backwards compatibility issues with x86. That’s a huge market.
It was also a big surprise when Intel just gave up. The industry was getting settled in for a David v Goliath battle, and then Goliath said this David kid was right.
There are some streaming video sites that deliberately block Firefox. It used to be that Firefox didn’t support the necessary web standards, but now it does. The sites put up blocks telling you to use Chrome, and never got around to taking them down.
WinRAR did make piles of money by focusing on the commercial market. They really didn’t care if home users went past the free trial period, but they did care if you were a business.
I don’t know what WinAmp does, or ever did.
It’s more likely to survive the company if it’s FOSS. The app was dormant for a long time.
The Samsung Galaxy line is 15 years old, and it was excellent. First Android device I felt had decent performance. I think my first one was a Galaxy S III, which would have been 2012.
The Zenfone that’s out now uses a Snapdragon. There’s probably a good reason for that.
As it exists now, no. The models are reaching their limit, and they aren’t good enough. They can’t absorb any more information than they have, and more training iterations aren’t making them better. They’ll do some useful things; the longest black hole jet ever observed was recently found in part through AI classification of astronomy data. It’s going to get implemented into existing tools and that’s about it. It won’t be enough to justify the money that’s already been dumped in.
Historically, the field has been very bursty. Lots of money gets dumped into it, it makes some big improvements, and then hits a wall. Funding dries up because it’s not meeting goals anymore, and the whole thing goes into slumber for a decade or two. A new breakthrough eventually comes, and then money gets dumped in again. We’ve about maxed out what the last breakthrough can give us. I expect we’ll need at least one more cycle of this before AGI works out.
Sounds like demand shaping is already done, but not in a way that’s helpful to renewables.
One of the things with AI is that it’s a largely constant load factor. Nuclear is really good for that.
However, I highly doubt any of these new nuclear plants are finished before the AI bubble bursts. SMRs haven’t even been proven in practice yet, and this is the first good news they’ve had in a while. Restarting Three Mile Island isn’t expected to be complete before 2028. The hype bubble could easily burst in the next year, and even if it doesn’t, keeping it going to 2028 is highly unlikely.
So we’ll probably have some new nuclear around that isn’t going into AI, because those datacenters will be dead when the hype passes. Might as well use them, I guess.
You’re aware Linux basically runs the Internet, right?