• 0 Posts
  • 38 Comments
Joined 11 months ago
Cake day: October 11th, 2023



  • Extremely old news, but still very cool.

    We used to have one of these roaming around my college comp-sci lab, hooked up to a big red Bluetooth button that would recompile the neurological structure when pressed. When we were feeling particularly nasty (or they were waxing particularly poetic), we used to challenge the humanities majors to push the button and ‘kill’ the worm.

    I’m not particularly proud of the fact that I made quite a few people break down completely over the implications of asking them to do that - or, more sadistically, by repeatedly pressing the button and asking them why it mattered. I got punched in the face by a vegan for that one, which was fair enough tbh. Anyways, the reality of the project really isn’t something most people are prepared to address.



  • Uhm… This seems like preemptive and somewhat unrealistic horn-tooting, since they’re actively testing delivery drones in my neighborhood of Seattle, and delivery robots are already commonplace on the sidewalks of many cities (London, SF, LA). They’re cute, too!

    Demanding more responsible regulation is reasonable and just. But time and time again we’re shown that attempts to prevent the broad adoption of a well-understood technology are about as effective as the war on drugs. I just think a different strategy is going to be needed here.




  • Have you read the filings? The complaints are that Steam listings for a game have to match the lowest price offered for the game anywhere, that keys can’t be sold for less than the Steam listing (I’m not really sure how this differs from the first complaint), and that Steam takes too big a cut of the proceeds. That last one is particularly hilarious, in that they’re bringing this lawsuit to a court that applies US business law, which pointedly does not hold that ‘being too greedy’ is a problem (outside of price-gouging laws, which aren’t relevant here…)






  • Ah, to clarify: Model Collapse is still an issue - one for which mitigation techniques are already being developed and applied, and have been for a while. Yes, LLM-generated content is currently harder to train against, but there’s no reason that must always hold true - this paper actually touches on that odd aspect. Right now we have to design with model collapse in mind and work to mitigate it manually (a minimal sketch of one common mitigation follows this comment), but as the technology improves, it’s theorized that we’ll hit a point at which models coalesce towards stability, not collapse, even when fed training data that was generated by an LLM.

    I’ve seen the concept called Generative Bootstrapping or the Bootstrap Ladder (it’s a new enough concept that we haven’t all agreed on a name for it yet - we can only hope someone comes up with something better, because wow, the current ones suck…). We’re even seeing some models start to do this coalesce-towards-stability thing, though only in a few extremely niche applications. Only time will tell whether all models can manage that stable coalescing, or whether it’s only possible in some cases.

    My original point, though, was just that this headline is fairly sensationalist, and that people shouldn’t take too much hope from this collapse: we’re both aware of it and actively working to mitigate it (exactly as the paper itself cautions us to do).
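    A minimal sketch of the kind of manual mitigation I mean, assuming a simple fine-tuning pipeline (the names and the 50/50 ratio are mine, purely illustrative): never train on synthetic text alone - anchor every batch with a fixed fraction of real, human-authored data so self-training drift can’t compound.

    ```python
    import random

    REAL_FRACTION = 0.5  # illustrative; projects tune this ratio empirically

    def mixed_batch(real_corpus, synthetic_corpus, batch_size):
        """Sample one training batch anchored by real data.

        Keeping human-authored examples in every batch bounds the drift
        that pure self-training would otherwise compound each generation.
        """
        n_real = int(batch_size * REAL_FRACTION)
        batch = random.sample(real_corpus, n_real)
        batch += random.sample(synthetic_corpus, batch_size - n_real)
        random.shuffle(batch)
        return batch
    ```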



  • Wow, this is a peak bad-science-reporting headline. I hate to be the one to break the news, but no, this is deeply misleading. We all want AI to hit its downfall, but these issues with recursive training data and training on small datasets have been near enough solved for 5+ years now. The Nature paper is interesting because it explains how specific kinds of recursion affect models broadly, across model types; it doesn’t mean AI is going to crawl back into Pandora’s box. The opposite, in fact, since this will let us design even more robust systems.
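    As a toy illustration of that point - my own sketch, not anything from the Nature paper - consider repeatedly fitting a Gaussian to data and then sampling a new dataset from the fit. Replacing the data each generation lets sampling error compound until the distribution collapses; accumulating new samples alongside the original data keeps the fit anchored, which is exactly the kind of design choice that makes recursive training survivable.

    ```python
    import random
    import statistics

    def fit_and_sample(data, n):
        """Fit a Gaussian to `data`, then draw n fresh samples from the fit."""
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        return [random.gauss(mu, sigma) for _ in range(n)]

    real = [random.gauss(0.0, 1.0) for _ in range(200)]
    replaced, accumulated = list(real), list(real)

    for generation in range(100):
        replaced = fit_and_sample(replaced, 50)         # discard old data
        accumulated += fit_and_sample(accumulated, 50)  # keep old data too

    print("replace std:   ", statistics.pstdev(replaced))     # shrinks toward 0
    print("accumulate std:", statistics.pstdev(accumulated))  # stays close to 1
    ```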