“AI” long predates LLM bullshit.
Hallucinations aren’t a problem with the actually medically useful tools he’s talking about. Machine learning is being used to draw extra attention to abnormalities that humans may miss.
It’s completely unrelated to LLM nonsense.
Except the summary is almost always literally the content that sites ask linking sites to show.
They have “please show this preview instead of a boring plain link” code.
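That "please show this preview" code is typically a set of Open Graph meta tags in the page head. A minimal sketch of reading them with Python's stdlib HTML parser (the example page and its tag values are made up for illustration):

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects Open Graph properties (og:title, og:description, ...)."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property", "")
        if prop.startswith("og:") and "content" in a:
            self.og[prop] = a["content"]

# A hypothetical page head carrying the preview the site asks linkers to show:
page = """
<html><head>
<meta property="og:title" content="Example Article">
<meta property="og:description" content="The summary the site asks linkers to show.">
<meta property="og:image" content="https://example.com/thumb.png">
</head><body>...</body></html>
"""

parser = OGParser()
parser.feed(page)
print(parser.og["og:title"])
```

The site, not the linker, chooses what goes in those tags, which is the point: the "preview" is the content the site explicitly published for linkers to display.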
Of course they aren’t, because they’re not required to, and money is money.
The fun part is that if it actually were restricted to collecting data for law enforcement? It would be a pretty obvious (though probably still not enforced because the courts suck) violation of your rights against searches without due process of law. But because it’s “publicly available”, they can pretend that it’s not really a search.
They’re struggling because they’re not learning, or learning how to learn.
LLM outputs aren’t reliable. Using one for your research is doing the exact opposite of the steps that are required to make good decisions.
The prerequisite to making a good decision is learning the information relevant to it; you then use that information to determine your options and the likely outcomes of those paths. Internalizing the problem space is fundamental to the process. You need to actually understand the space you're making a decision about in order to make a good decision. The effort is the point.
Evaluating sources and consolidating the information they contain into concise, organized structures is how your brain learns. The notes aren’t the goal of note taking. They’re simply the process you use to internalize the information.
There’s a place for more formal writing.
But the point of using precise, formal language is the intent behind it. If you're just RNG-ing it, it loses all meaning.
They’re asking for a jury trial. That’s what I’m referring to.
The scary part is that they think (and are probably correct) that they have a good chance of convincing a random jury that it’s totally fine.
It sets an absolutely obscene precedent that a government can globally restrict information. Even terrible global actors like Russia and China haven't succeeded at that.
Yes, that precedent is 1000 orders of magnitude more harm than India losing access (which they won't, because the entirety of Wikipedia is open source and would be mirrored in the country instantly; but even if they did, it is literally impossible to get anywhere near the harm of the precedent this sets).
Being available elsewhere is entirely irrelevant. Wikipedia must stand against totalitarian censorship to resemble a reputable organization.
Complying is unforgivable.
Not only can they, trivially; they unconditionally must.
It is not possible to ever be a reputable organization again if, when forced to choose between censoring content globally for an authoritarian government and shutting down in that country, censoring content globally is something you genuinely consider. Open, fact-based information is their entire reason for existing.
No, I have no interest in digging through their history. But it's trivial to do. Any random no-name site can do it in 5 minutes with any source of the geo-mapping information, with virtually no knowledge required. It is not work.
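To illustrate how little work geo-restriction is: a sketch of a country lookup plus block check using only the stdlib. The two-entry table and the addresses (documentation ranges) are hypothetical stand-ins; a real deployment would load a full GeoIP database, which is just a much longer version of the same table:

```python
import ipaddress

# Hypothetical stand-in for a real GeoIP database: (network, country) pairs.
GEO_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), "IN"),
    (ipaddress.ip_network("198.51.100.0/24"), "US"),
]

def country_for(ip):
    """Return the country code for an IP, or None if it isn't in the table."""
    addr = ipaddress.ip_address(ip)
    for net, country in GEO_TABLE:
        if addr in net:
            return country
    return None

def should_block(ip, blocked=frozenset({"IN"})):
    """Decide whether to withhold content for this request's source IP."""
    return country_for(ip) in blocked

print(should_block("203.0.113.7"))   # True
print(should_block("198.51.100.9"))  # False
```

This is the whole mechanism: map the request's IP to a country, compare against a block list. There is no deep expertise involved, which is why restricting locally rather than globally is entirely feasible.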
GDPR regulators can do literally nothing to a site that doesn't have finances under their jurisdiction except block it, and they shouldn't even be able to do that. No one else will enforce their fines for them. It's no different from Russia fining Google more money than exists. You can't just magically rob someone because you're a country.
Yes, they do. They’ve done it in the past.
It literally doesn’t matter what Indian courts rule. Being banned from India is orders and orders of magnitude more acceptable than blocking a single article anywhere else on the planet. It single-handedly eliminates all of their credibility.
India isn’t capable of enforcing fines against an organization that doesn’t operate in their country, and there’s no chance a US court will enforce such an unhinged judgement. They can’t be forced to pay.
They already have the capability to block content locally.
There isn’t a worse option than allowing a government to globally block an article.
You can’t give a deranged dictatorship global censorship authority.
That keeps the entire planet from access to information.
Everywhere else on the planet, in order for a device to be cleared for sale, that specific model undergoes heavy testing for regulatory compliance by a government agency.
“The specs said it was fine” is literally never going to be a valid legal defense, and making that argument will get you laughed out of court. Either it’s actually certified to be used as you’re allowing it to be used, or you get the hammer dropped on you, as you should.
The carrier doesn’t decide that.
I literally quoted the part that required carriers to block ineligible phones.
They should have just blocked India.
Censoring factual articles globally is an extremely bad precedent to set for yourself.
If those sites think that being linked to is a service they’re providing Google (which demanding payment implies), then Google is just fulfilling their wishes.