I get the impression you don’t think this was a good thing to have done, hence the interrogation?
Not very satisfying answers I’m afraid, they were probably 8-10 and I have no idea how we got onto the topic since this was 15+ years ago.
They were actually pretty grateful, feeling it had set them up for a lot of positive realizations down the line. We play D&D now and they’re working on their masters, so I guess they weren’t too badly scarred…
I had a former summer camp kid come up and credit me with having given them their “first existential crisis” (for explaining that when you die, “you just cease”) which I am proud of.
We can see what it’s sending to Facebook, though, and it’s not constant. There’s plenty it does send and receive, but this isn’t hypothetical speculation: we can just see that it’s not using your microphone for that, or sending anything like audio data. You can check this yourself; Wireshark is free and the packet specifications are available.
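For a rough sense of scale (my numbers, not from any actual capture — the bitrates are just common voice-codec ballparks), constant mic streaming would be really hard to hide in a packet capture:

```python
# Back-of-envelope: how much upload would constant mic streaming generate?
# Bitrates are rough assumptions (low-bitrate voice codecs run ~12-64 kbps).

SECONDS_PER_DAY = 24 * 60 * 60

def daily_upload_mb(bitrate_kbps: float) -> float:
    """Megabytes uploaded per day at a given audio bitrate."""
    bits_per_day = bitrate_kbps * 1000 * SECONDS_PER_DAY
    return bits_per_day / 8 / 1_000_000

for kbps in (12, 24, 64):
    print(f"{kbps} kbps audio -> ~{daily_upload_mb(kbps):.0f} MB/day")
```

Even at the stingiest bitrate that’s on the order of 130 MB/day of upload to one endpoint, which would stick out in a capture like a sore thumb.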
Setting aside confirmation bias (idk, because it’s boring?): so people you’re in a book club with, an established group that’s very easy to associate you with, were discussing Dell laptops… and you think it’s strange you got looped in? If three people from your book club all looked up Dells later, or earlier, or etc. etc., why wouldn’t they figure you might also be interested in Dell laptops? An approach that doesn’t require NLP on god only knows how much hypothetical audio taken from pockets, and works much better?
Extremely old news, but still very cool.
We used to have one of these roaming around my college compsci lab, hooked up to a big red Bluetooth button that would recompile the neurological structure when pressed. When we were feeling particularly nasty (or they were waxing particularly poetic), we used to challenge the humanities majors to push the button and ‘kill’ the worm.
I’m not particularly proud of the fact that I made quite a few people break down completely over the implications of asking them to do that - or, more sadistically, by repeatedly pressing the button and asking them why it mattered. I got punched in the face by a vegan for that one, which was fair enough tbh. Anyways, the reality of the project really isn’t something most people are prepared to address.
IDK, this one seems pretty unsurprising. That’s a damned tiny package, I can’t really see another way of fastening it together.
Now if it were me, that would be a reason not to make the damn thing in the first place but what do I know…
Uhm… this seems like preemptive and somewhat unrealistic horn-tooting, since they’re actively testing delivery drones in my neighborhood in Seattle, and delivery robots are commonplace on the sidewalks of many cities (London, SF, LA). They’re cute, too!
Demanding more responsible regulation is reasonable and just. Time and time again we’re shown that attempts to prevent the broad adoption of a well-understood technology are about as effective as the war on drugs. I just think a different strategy is going to be needed here.
Good.
Sure, but that’s not a monopolistic practice. That’s just a MAP (minimum advertised price), which is an incredibly common agreement. Hell, it’s better than most MAP contracts, because they only take a 30% cut of sales through Steam, even if the dev is selling Steam keys through an alternate storefront.
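Quick illustration of why that key arrangement is dev-friendly (hypothetical $20 price and 10% third-party-store cut, the only real number here is Steam’s 30%):

```python
# Hypothetical $20 game: compare dev take-home per copy sold
# through Steam (30% cut) vs. as a Steam key sold on another
# storefront (Valve takes 0% on keys; assume that store takes 10%).

PRICE = 20.00

def dev_revenue(price: float, storefront_cut: float) -> float:
    """Developer take-home per copy after the storefront's cut."""
    return price * (1 - storefront_cut)

print(f"sold via Steam:      ${dev_revenue(PRICE, 0.30):.2f}")
print(f"key via 10% store:   ${dev_revenue(PRICE, 0.10):.2f}")
```

So the dev pockets more per copy on every key sold elsewhere, while Valve still hosts the downloads and updates for those copies.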
Have you read the filings? The complaints are that Steam listings for a game have to match the game’s lowest price anywhere, that keys can’t be sold for less than the Steam listing (I’m not really sure how this differs from the first complaint), and that Steam takes too big a cut of the proceeds. That last one is particularly hilarious, in that they’re bringing this lawsuit to a court that applies US business law, which pointedly does not hold that ‘being too greedy’ is a problem (outside of price-gouging laws, which are not relevant here…)
Totally unrelated to the OP but damn I’m impressed that DDG is such a large percentage of the market. 1.28% sounds like nothing but I can’t even guess how many millions of searches that is, that’s absolutely wild. well done duckfriends.
LMFAO. The audacity of calling the token limit a “rolling context window” like it’s a desirable feature and not just a design aspect of every LLM…
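To be fair to the marketers, the mechanism itself is trivial - it really is just “forget the oldest tokens once you’re full.” A toy sketch (tokens are just words here, purely illustrative; real LLMs use subword tokens):

```python
from collections import deque

# Toy "rolling context window": keep only the most recent N tokens,
# silently evicting the oldest ones as new text arrives.

class RollingContext:
    def __init__(self, max_tokens: int):
        # deque with maxlen drops items from the left once full
        self.window = deque(maxlen=max_tokens)

    def add(self, text: str) -> None:
        self.window.extend(text.split())

    def context(self) -> str:
        return " ".join(self.window)

ctx = RollingContext(max_tokens=5)
ctx.add("please remember this very important secret phrase")
print(ctx.context())  # the earliest words have already been evicted
```

Calling the eviction a feature is the audacious part, not the eviction itself.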
A world where we all go insane from explaining “we can’t just ‘hack’ your bitlocker key” over and over to every older relative we have…
Yikes. Well. I’ll be over here, conspiring with the other NASA lizard people on how best to deceive you by politely answering questions on a site where maaaaybe 20 total people will actually read it. Good luck getting your head around it, there’s lots of papers out there that might help (well, assuming I’m not lying to you about those, too).
Ah, to clarify: model collapse is still an issue - one for which mitigation techniques are already being developed and applied, and have been for a while. While LLM-generated content is currently harder to train on, there’s no reason that must always hold true - this paper actually touches on that weird aspect! Right now we have to design with model collapse in mind and work to mitigate it manually, but as the technology improves, it’s theorized that we’ll hit a point where models coalesce toward stability, not collapse, even when fed training data that was generated by an LLM. I’ve seen the concept called Generative Bootstrapping or the Bootstrap Ladder (it’s new enough that we haven’t all agreed on a name yet; we can only hope someone comes up with something better, because wow, the current ones suck…). We’re even seeing some models start to do this coalesce-toward-stability thing, though only in some extremely niche applications. Only time will tell whether all models can do this stable coalescing, or whether it’s only possible in some cases.
My original point, though, was just that this headline is fairly sensationalist, and that people shouldn’t take too much hope from this collapse, because we’re both aware of it and working to mitigate it (exactly as the paper itself cautions us to do).
Edge comes pre-enabled with a ton of microsoft’s crappy AI - Bing chat, copilot, etc.
Wow, this is a peak bad-science-reporting headline. I hate to be the one to break the news, but no, this is deeply misleading. We all want AI to hit its downfall, but these issues with recursive training data and training on small datasets have been near enough solved for 5+ years now. The Nature paper is interesting because it explains the modality of how specific kinds of recursion impact broadly across model types, but this doesn’t mean AI is going to crawl back into Pandora’s box. The opposite, in fact, since this will let us design even more robust systems.
Not amazed, just depressed.