Been having similar thoughts. We’ll very soon be back in the age of internment camps, reporting on neighbors’ behavior, and minorities looking over their shoulders even more than usual. I am ashamed of us.
This is a good analogy, and is one big reason I won’t trust any AI until the ‘answers’ are guaranteed and verifiable. I’ve worked with people who needed to have every single thing they worked on double-checked for accuracy/quality, and my takeaway is that it’s usually faster to just do it myself. Doing a properly thorough review of someone else’s work, knowing that they historically produce crap, takes just about as long as doing the work myself from scratch. This has been true in every field I’ve worked in, from academia to tech.
I will not be using any of Apple’s impending AI features, they all seem like a dangerous joke to me.
Exactly. I wish more people had this view of interns. Unpaid ones, at the very least.

I worked with a few, and my colleagues would often throw spreadsheets at them and have them do meaningless cleanup work that no one would ever look at. Whenever it was my turn to ‘find work’ for the interns, I would just have them fully shadow me and do the work I was doing, as I was doing it. Essentially duplicating the work, but with my products being the ones held to final submission standards. They had some great ideas, which I incorporated into the final versions, and they got to see what the role was actually like by doing the work without worrying about messing anything up or bearing any actual responsibility.

Interns are supposed to benefit from having the internship. The employer, by accepting the responsibility of having interns, shouldn’t expect to get anything out of it other than the satisfaction of helping someone gain experience. Maybe a future employee, if you treat them well.
Yeah totally, that’s an important distinction. Paid interns are definitely different from unpaid interns, and can legally do essentially the same work as a regular paid employee.
The way the distinction was explained to me is that an unpaid intern is essentially a student of the company; they are there to learn, and they often get university credit for the internship. A paid internship is essentially an entry-level job, with the expectation that you might get more on-the-job training than a ‘normal’ employee.
This article doesn’t say if the intern was paid, but it does say the company reported the behavior to the intern’s university, so I’d guess it was unpaid.
I work at a small tech company, by no means big tech. I know it’s common for interns to be treated as employees, but that’s usually a violation of labor law: one of those things that’s extremely common, yet no less illegal.
The US Department of Labor has a seven-part test to help determine whether an intern is classified properly. #6, the extent to which the intern’s work complements, rather than displaces, the work of paid employees, is particularly relevant here.
There’s very little detail in the article. I’d be curious to find out exactly what the intern’s responsibilities were, because based on the description in the article this looks like a failure of management, not the intern. Interns should never have direct access to production systems.

In fact, in most parts of the world (though probably not China, I don’t know), interns are there to learn. They’re not supposed to do work that would otherwise be assigned to a paid employee, because that would make them an employee, not an intern. Interns can shadow a paid employee to learn from them on the job, but they really shouldn’t have any actual responsibilities beyond gaining experience for when they go on the job market.
Blaming the intern seems like a serious shift of responsibility. The fact that the intern was able to do this at all is the fault of management for not supervising their intern.
Think about it this way: remember those upside-down answer keys in the back of your grade school math textbook? Now imagine if those answer keys included just as many incorrect answers as correct ones. How would you know whether you were right or wrong without asking your teacher? Until an LLM can guarantee a correct answer, and back it up with real citations, it will continue to do more harm than good.
That’s a very cool concept. I’d definitely be willing to participate in a platform that has that kind of trust system baked in, as long as it respected my privacy and couldn’t broadcast how much time I spend on specific things etc. Instance owners would also potentially get access to some incredibly personal and lucrative user data, so protections would have to be strict. But I guess there are a lot of ways to get at positive user engagement in a non-invasive way. I think it could solve a lot of current and potential problems. I wish I was confident the majority of users would be into it, but I’m not so sure.
For sure, it’s not an easy problem to address. But I’m not willing to give up on it just yet. Bad actors will always find ways to break the rules and fly under the radar, but we should be making new rules and working to improve these platforms in good faith, with the assumption that most people want healthy communities that follow the rules.
I think by default bots should not be allowed anywhere. But if that’s a bridge too far, then their use should have to be regularly justified and explained to communities. Maybe it should even be a rule that their full code has to be released on a regular basis, so users can review it themselves and be sure nothing fishy is going on. I’m specifically thinking of the Media Bias Fact Checker Bot (I know, I harp on it too much). It’s basically a spammer bot at this point, cluttering up our feeds even when it can’t figure out the source, and providing bad and inaccurate information when it can. And mods refuse to answer for it.
This is awesome, we need more rules like this, and Khan is absolutely nailing it. But I’m worried it won’t stick. Companies have banked on our absentmindedness and laziness, and have made tons of money because of it. I don’t think they’ll give that up without a fight, but hopefully they lose. Unless the Supreme Court gets involved, in which case we can all but guarantee they’d rule against these consumer protections.
“Too often, businesses make people jump through endless hoops just to cancel a subscription,” FTC Chair Lina Khan said in a statement. “The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”
It’s such a basic and obvious consumer protection.
I guess I’ve been under a rock, but I hadn’t heard of this company until now. Did they really name themselves Nikola Motor? Were they expecting to be bought out by Tesla or something? This would be like me opening a store called George next to an existing store called Washington. Weird.
This seems like a great way to turn architects into spellcheckers and glorified model trainers, and to make buildings incredibly unsafe. This is one of those use cases that strikes me as wholly irresponsible and dangerous. I understand that a lot of this kind of work is time consuming and difficult, but if you tell me a chatbot helped plan and design a building, I’m not setting foot inside it.
Bingo. They should invest in their own company, they have the money. There’s no reason for taxpayers to play any part in this.
As of October 2024, Microsoft has a market cap of $3.109 trillion. So uh, fuck that.
Wow, it’s hard to know just how impactful this will be, but it sounds like they’ve got something here.
…its batteries, which it said avoid using metals such as lithium, cobalt, graphite and copper, providing a cost reduction of up to 40% compared to lithium-ion batteries.
Altech said its batteries are completely fire and explosion proof, have a life span of more than 15 years and operate in all but the most extreme conditions.
That’s huge, especially the fire and explosion proof part.
I’ll be honest, I went into this intending to poke holes, but it was a surprisingly thorough article about researchers spending a year trying to figure out practical uses for AI. I for one am still not convinced there’s a practical or truly ethical use at the moment, but I’m glad to see researchers trying. Their results were decidedly mixed, and I still think the trade-offs don’t work in our favor, but it was a remarkably balanced article with a fair amount of subtlety on an issue that needs to be examined critically. They admitted that hallucinations are still a huge wildcard that no one knows how to deal with, which is rare. The headline is dumb, but given how skeptical and distrustful I am of this massive AI bubble, I’m glad there are still researchers putting in the work to figure this shit out.
Wait, I never used Snapchat, so I could be totally off base, but don’t Snapchat messages get automatically deleted? Isn’t that the whole point? Haven’t they already been caught deceiving users into thinking their deleted photos were actually gone? This just seems so gross.
Having a device that can be used for surveillance is not the same thing as someone actively choosing to report their neighbor.