An ankle weight is less damaging. I doubt if gait recognition is all that valid anyway.
the paper’s ostensibly liberal/progressive line
They’re aligned with the Liberal party, a centrist party that is seldom, if ever, progressive. The Guardian does put up some articles by progressives on occasion, but they also publish articles by conservatives. When the Labour Party was led by Corbyn, the Guardian was consistently critical of Labour policy and bought into the rightwing press’s phony accusations that Corbyn was antisemitic. Overall, the Guardian’s core politics are those of the metropolitan bourgeoisie, as can also be seen in their lifestyle and media commentary, as well as their general smugness. And on economic matters, their coverage is utterly useless. On that, the Economist and the FT are far superior, despite the occasionally odious politics of their editorial pages.
I still read the Graun, though, since the rest of the British press is far, far worse.
A couple of years late, but OK.
An even better alternative is to replace it with nothing. The Twitter-like messaging paradigm is only good for trivia and rumor-mongering.
This is a privacy intrusion that should be banned nationally.
And some subreddits have fascist mods who arbitrarily ban anyone who isn’t alt-right or worse.
Interoperability is a big job, but the extent to which it matters varies widely according to the use case. There are layers of standards atop other standards, some new, some near deprecation. There are some extremely large and complex datasets that need a shit-ton of metadata to decipher or even extract. Some more modern dataset standards have that metadata baked into the file, but even then there are corner cases. And the standards for zero-trust security enclaves, discoverability, non-repudiation, attribution, multidimensional queries, notification and alerting, and pub/sub are all relatively new, so we occasionally encounter operational situations that the standards authors didn’t anticipate.
TripAdvisor has better content. Too many Google reviews give a business 1 star because the review author was too stupid to check working hours, or has some incredibly rare digestive condition that they didn’t bother to communicate to the eatery before ordering. Or they expect their Basque waiter to speak fluent Latvian, or to accommodate a walk-in party of 20.
Isn’t Yelp a pretty easily replaceable thing?
Yelp is at this stage a completely worthless thing. All it ever was is an aggregator of semi-literate reviews, and a shakedown racket against businesses that pissed off some Karen.
is all but guaranteed to be possible
It’s more correct to say it “is not provably impossible.”
Someone, somewhere along the line, almost certainly coded rate(2025) = 2*rate(2024). And someone approved that going into production.
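A minimal sketch of the kind of hardcoded special case being alleged here (the function name, base rate, and years are all hypothetical; nobody has seen the actual code):

```python
# Hypothetical illustration of a hardcoded rate-doubling bug.
def premium_rate(year: int) -> float:
    base = 100.0  # made-up 2024 rate
    if year == 2025:
        # Someone "handled" 2025 by just doubling last year's rate.
        return 2 * premium_rate(2024)
    return base

print(premium_rate(2024))  # 100.0
print(premium_rate(2025))  # 200.0
```

The point is that code like this passes every unit test written against it, because it does exactly what it says; only a sanity check against the real-world requirement would flag it.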
If they aren’t liable for what their product does, who is?
The users who claim it’s fit for the purpose they are using it for. Now if the manufacturers themselves are making dodgy claims, that should stick to them too.
If a self-driving car kills someone, the programming of the car is at least partially to blame
No, it is not. It is the use to which the system has been put that is the point at which blame can be assigned. That is what should be verified and validated. That’s where some person is signing on the dotted line that the system is fit for use for that particular purpose.
I can write a simplistic algorithm to guide a toy drone autonomously. So let’s say I GPL it. If an airplane manufacturer then drops that code into an airliner and fails to test it correctly in scenarios resembling real-life use of that plane, they’re the ones who fucked up, not me.
No liability should apply while coding. When that code is deployed for use, there should be liability if it is unfit for its intended use. If your AI falsely denies my insurance claim, your ass should be on the line.
Yeah, all these systems do is worsen the already bad signal/noise ratio in online discourse.
Unless there is a huge disclaimer before every interaction saying “THIS SYSTEM OUTPUTS BOLLOCKS!” then it’s not good enough. And any commercial enterprise that represents any AI-generated customer interaction as factual or correct should be held legally accountable for making that claim.
There are probably already cases where AI is being used for life-and-limb decisions, probably with a do-nothing human rubber stamp in the loop to give plausible deniability. People will be maimed and killed by these decisions.
They are a product of a lack of control over the statistical output.
OK, so describe how you control that output so that hallucinations don’t occur. Does the anti-hallucination training set exceed the size of the original LLM’s training set? How is it validated? If it’s validated by human feedback, then how much of that validation feedback is required, and how do you know that the feedback is not being used to subvert the model rather than to train it?
It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.
You’re attempting to redefine “bug.”
Software bugs are faults, flaws, or errors in computer software that result in unexpected or unanticipated outcomes. They may appear in various ways, including undesired behavior, system crashes or freezes, or erroneous and insufficient output.
From a software testing point of view, a correctly coded realization of an erroneous algorithm is a defect (a bug). It fails validation (a test for fitness for use) rather than verification (a test that the code correctly implements the erroneous algorithm).
This kind of issue arises not only with LLMs, but with any software that includes some kind of model within it. The provably correct realization of a crap model is still crap.
I’m no AI fanboy, but what you just described was the feedback cycle during training.
Leave your phone at home. Ride a bike or walk, don’t drive (defeats gait recognition, ANPR, and in-car tracker software). ANPR cameras can also be disabled with black spray paint. Wear a hoodie. Use a VPN and an adblocker when you are online. Practice skeet shooting so you can shoot down drones. Also jam them if you can.