Normally that would have been the preferred solution, but since IANA has experienced all kinds of shenanigans on similar occasions, they have decided not to allow ccTLDs to outlive their former country anymore.
> override the auto driving
I must be tired right now, but I don’t see how a remote operator could have driven better in this situation.
You can’t get away from someone blocking your car in traffic without risking hitting them or other people or vehicles.
You probably meant they ought to drive away regardless of what they hit, if it helps the passenger escape a dire situation? But I have to wonder if a remote operator would agree to be put on the spot like that.
It’s been removed in most of the US.
If you like this you may like Chrome too, because that’s exactly how Google is trying to do things now.
Here’s the thing. I don’t want my browser to do things under the hood. It’s either protecting my privacy or it’s not. That means it’s either sending cookies to the website I’m visiting or it’s not.
When Firefox takes it upon itself to bypass cookies and collect information about me, that’s surprising and unpredictable and may fail in ways unique to Firefox. It’s one more thing to worry about.
If Mozilla wants to protect me outright and overtly, they can offer an “allow cookies” button like LibreWolf does, or like what you get with the CAD add-on (Cookie AutoDelete).
If they won’t do that then stick to blocking third-party cookies and get out of the way.
I don’t want Firefox to second-guess what I want to share with anybody, and assuming I want to share anything with advertisers, even anonymized data, is an abuse of my trust.
We don’t owe advertisers anything, btw. They’re a parasitic industry and the sooner it dies and we move on the better.
It will fall through much faster than that. I’m thinking two years, tops.
You don’t have to install drivers or CUPS on client devices. Linux and Android support IPP out of the box. Just make sure your CUPS on the server is multicasting to the LAN.
You may need to install Avahi on the server if it’s not there already (that’s what does the actual multicasting). The printer(s) should then automagically appear in the print dialogs of apps on Linux clients and in the print service on Android.
On Linux a printer may take a few seconds to appear after you turn it on, and may not appear while it’s off. On Android it shows up anyway, as long as the CUPS server is on.
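If you want to verify the server is actually advertising printers over mDNS, here’s a minimal sketch using the third-party zeroconf Python package (the 5-second browse window is an arbitrary choice):

```python
# Browse the LAN for IPP printers advertised via mDNS/DNS-SD.
# Requires: pip install zeroconf. Run from any machine on the LAN.
import time

from zeroconf import ServiceBrowser, ServiceListener, Zeroconf


class PrinterListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            print(f"found: {name} at {info.parsed_addresses()}:{info.port}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"gone: {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass


zc = Zeroconf()
# "_ipp._tcp.local." is the DNS-SD service type CUPS/Avahi use for IPP.
browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())
try:
    time.sleep(5)  # give mDNS responses a few seconds to arrive
finally:
    zc.close()
```

If nothing shows up here, the clients won’t see the printer either, so check Avahi and the CUPS sharing settings on the server.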
From what I understand, OP’s images aren’t the same image, just very similar.
Short of all using the same WordPress-or-whatnot hoster, that is.
That’s the thing: it’s common practice. It’s basically a given nowadays for shared web hosting to use one IP for a few dozen websites, or for a service to leverage a load/geo-balancer with 20 IPs into a CDN serving static assets for thousands of domains.
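You can see this for yourself by resolving a few domains and grouping them by address; on shared hosting you’ll routinely find unrelated sites behind the same IP. A toy sketch (the domain names are placeholders):

```python
# Resolve a handful of domains and group them by IP address.
import socket
from collections import defaultdict

domains = ["example.com", "example.net", "example.org"]  # placeholders

by_ip = defaultdict(list)
for d in domains:
    try:
        by_ip[socket.gethostbyname(d)].append(d)
    except socket.gaierror:
        pass  # didn't resolve; skip it

for ip, names in by_ip.items():
    print(ip, "->", ", ".join(names))
```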
> with infrastructure the size of twitter you can also blackhole their whole IP range
Just one note: services the size of Twitter typically use cloud infrastructure, so if you block that indiscriminately you risk blocking a lot of unrelated stuff.
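The blunt-instrument nature of a range block is easy to see with the stdlib ipaddress module; the /24 below is RFC 5737 documentation space standing in for a cloud provider’s range:

```python
# Every address in the blocked range is caught, related or not.
import ipaddress

blocked = ipaddress.ip_network("203.0.113.0/24")  # stand-in range

for addr in ["203.0.113.7", "203.0.113.200", "198.51.100.5"]:
    hit = ipaddress.ip_address(addr) in blocked
    print(addr, "blocked" if hit else "allowed")
```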
It stops working occasionally but they release fixed versions pretty fast.
Bayesian filters are statistical; they have nothing to do with machine learning.
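For the record, here’s the whole trick in a few lines: word counts plus Bayes’ rule with add-one smoothing. The training messages are made up:

```python
# Minimal naive Bayes spam scorer: frequency counting + Bayes' rule.
import math
from collections import Counter

spam = ["win money now", "free money offer"]             # toy corpus
ham = ["meeting notes attached", "lunch tomorrow maybe"]

spam_counts = Counter(w for msg in spam for w in msg.split())
ham_counts = Counter(w for msg in ham for w in msg.split())
vocab = set(spam_counts) | set(ham_counts)


def log_odds(message: str) -> float:
    """log P(spam|msg) - log P(ham|msg), with add-one smoothing."""
    score = math.log(len(spam) / len(ham))  # prior odds
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score


print(log_odds("free money"))        # positive -> leans spam
print(log_odds("meeting tomorrow"))  # negative -> leans ham
```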
You should consider whether you really want to integrate your application super tightly with the HTTP protocol.
Will it always be used exclusively over a RESTful HTTP API that you control, with exactly one hop to the client, or only hops that can be trusted never to alter the HTTP metadata significantly? In that case you can afford to make HTTP codes semantically relevant for your app.
But maybe you need to pass data through multiple different types of layers and mechanisms (socket protocols, pub-sub, file storage etc.). In that case you want all your semantics to be independent of any form of transport.
It’s a perfectly fine way of doing things as long as it’s consistent and the spec is clear.
HTTP is a transport layer. You don’t have to use its codes for your application layer. It’s often done that way but it’s not the only way.
In the example above the transport layer is saying “OK I’ve delivered your output” which is technically correct. It’s not concerned with logical errors inside what it was transporting, just with the delivery itself.
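One common way to get that transport independence is an error envelope: the payload itself carries the outcome, and the transport only reports delivery. A sketch with invented field names:

```python
# Application-level error envelope: the outcome travels inside the
# payload, so it survives any transport (HTTP 200, pub-sub, a file).
# Field names ("ok", "result", "error") are invented for illustration.
import json


def make_response(result=None, error_code=None, error_msg=None) -> str:
    envelope = {
        "ok": error_code is None,
        "result": result,
        "error": None if error_code is None else {
            "code": error_code,
            "message": error_msg,
        },
    }
    return json.dumps(envelope)


# The consumer checks "ok", never the transport's status code.
print(make_response(result={"user_id": 42}))
print(make_response(error_code="USER_NOT_FOUND", error_msg="no such user"))
```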
If any client app is blindly converting the body to JSON without checking (at the very least) content type and size, they deserve what they get.
If you want to make it part of your API spec to always return JSON that’s one thing, but don’t do it to make up for poorly written clients. There’s no end of ways in which clients can fail. Sticking to a clear spec is the only way to preserve your sanity.
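For what “not blindly converting” might look like in practice, a stdlib-only sketch (the size cap is an arbitrary choice):

```python
# Defensive client: verify content type and size before json.loads().
import json
import urllib.request

MAX_BODY = 1024 * 1024  # refuse to parse bodies over 1 MiB (arbitrary cap)


def fetch_json(url: str):
    with urllib.request.urlopen(url) as resp:
        ctype = resp.headers.get_content_type()
        if ctype != "application/json":
            raise ValueError(f"expected JSON, got {ctype}")
        body = resp.read(MAX_BODY + 1)
        if len(body) > MAX_BODY:
            raise ValueError("response body too large")
        return json.loads(body)
```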
They buy the hardware once then sell services based on it.
Because AI reversed the ratio.
It’s much worse. Generally speaking, projects in large corporations at least try to make sense and to have a decent chance of returning something of value. But with AI projects it’s like they all went insane; they disregard basic things, common sense, fundamental logic etc.
They typically use internal personnel and are parsimonious about it, so you’re right about that.
Well, probably not just Nvidia, but the next likely beneficiaries are in the same range (Microsoft etc.).
Yes, but it’s unregulated, and like most unregulated TLDs it has become a cesspool of malware and dark dealings. I don’t think anybody would want that to happen to .io.