This is fine. I support archiving the Internet.
It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate limited) scraping.
The only bots we need to worry about are the ones that POST, not the ones that GET.
It’s not fine. They are not archiving the internet.
I had to ban their user agent after very aggressive scraping that would have taken down our servers. Fuck this shitty behaviour.
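For anyone wanting to do the same, a rough sketch of that ban in nginx, assuming the crawler still sends a User-Agent containing "Bytespider" (they also randomize user agents at times, so this only catches the requests that identify themselves):

    # http{} context: flag requests whose User-Agent matches the ByteDance crawler
    map $http_user_agent $is_bytespider {
        default        0;
        ~*bytespider   1;
    }

    # server{} context: refuse flagged requests outright
    if ($is_bytespider) {
        return 403;
    }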
Isn’t there a way to limit requests so that traffic isn’t bringing down your servers?
They obfuscate their traffic by randomizing user agents, so it’s either add a global rate limit or let them ass fuck you.
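Rough sketch of what that global limit can look like in nginx (the numbers are made up; tune them to your traffic):

    # http{} context: a per-IP bucket (easy to dodge with many IPs) plus one
    # shared bucket for the whole vhost, i.e. the "global" limit
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    limit_req_zone $server_name zone=global:1m rate=100r/s;

    # server{} context: apply both, answer 429 when either bucket overflows
    limit_req zone=perip burst=20 nodelay;
    limit_req zone=global burst=50 nodelay;
    limit_req_status 429;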
The article said all source IPs can be traced back to ByteDance. Wouldn’t it be possible to block them? Maybe even block all IPs of a specific ASN?
They can be tracked back one by one but if you have any amount of traffic it’s a constant game of cat and mouse.
You can block entire ASNs until they start using residential proxies provided by less ethical companies. Then you end up blocking all of France or destroying user experience by enforcing a captcha on everyone.
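The mechanics of the ASN block itself are simple enough; a sketch with nginx’s geo module (the prefixes below are documentation placeholders, the real list would come from whois/BGP data for the AS you want to block):

    # http{} context: map the client address to a flag based on the ASN's
    # announced prefixes
    geo $blocked_asn {
        default           0;
        203.0.113.0/24    1;   # placeholder prefix, not a real ByteDance range
        198.51.100.0/24   1;   # placeholder prefix
    }

    # server{} context
    if ($blocked_asn) {
        return 403;
    }

The painful part is keeping that prefix list current once the traffic moves to residential proxies.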
Why do they need to hit a website like that? Wouldn’t it just scrape the data and frig off? What is the point of creating that much traffic?
I had to block ByteSpider at work because it can’t even parse HTML correctly and just hammers the same page, accounting for sometimes 80% of the traffic hitting a customer’s site and taking it down.
The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it’s all GETs, they hit years-old content that’s not cached and use up the majority of the CPU time on the web servers.
Scraping is okay; using up a whole 8 vCPU instance for days to feed AI models is not. They actively use dozens of IPs to bypass the rate limits too, so they’re basically DDoS’ing whoever they scrape with no fucks given. I’ve been woken up by the pager way too often due to ByteSpider.
My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.
I think a common nginx config is to just redirect malicious bots to some well-cached terabyte file. I think Hetzner hosts one, iirc.
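Something like this, as a rough sketch (the redirect target is a placeholder for whatever big, heavily cached file you point them at; matching on the user agent only works for bots that identify themselves):

    # server{} context: bounce self-identified scraper bots to a huge static
    # file hosted elsewhere instead of letting them hit the app
    if ($http_user_agent ~* "(bytespider|bytedance)") {
        return 301 https://example.com/10GB.bin;
    }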
https://github.com/iamtraction/ZOD
A 42 kB ZIP file which decompresses into 4.5 PB.
Wouldn’t it be trivial to defend against that with a hash check if the size matches?
Though I guess it’s possible to create your own that differs.
ByteDance ain’t looking to build an archival tool. This is to train gen AI models.
This is neither archiving nor rate limited, if the AI training purpose and the scraping 25 times faster than a large company didn’t make that obvious.
GET requests can still overload a system.
The type of request is not relevant; it’s the cost of the request that’s the issue. We long ago stopped serving static HTML documents that can be cached. Tons of requests trigger complex searches or computations which are expensive server-side. This type of behavior basically ruins the internet and pushes everything into closed gardens and behind logins.
Sounds like you need to fire your sysadmin
It has nothing to do with a sysadmin. It’s impossible for a given request to require zero processing power. Therefore there will always be an upper limit to how many GET requests can be handled, even if it’s a small amount of processing power per request.
For a business it’s probably not a big deal, but if it’s a self hosted site it quickly can become a problem.
Caches can be configured locally to use near-zero processing power, or moved to the last mile to use zero processing power (by your hardware).
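For the static, boring-GET case that’s basically a few lines of nginx in front of the app; a sketch (paths, upstream, and TTLs are placeholders):

    # http{} context: a small on-disk cache in front of the app server
    proxy_cache_path /var/cache/nginx keys_zone=pages:50m max_size=5g inactive=7d;

    # inside server{}: serve repeat GETs from the cache instead of the app,
    # and let downstream CDNs/browsers hold the page for a day
    location / {
        proxy_pass        http://127.0.0.1:8080;   # placeholder upstream
        proxy_cache       pages;
        proxy_cache_valid 200 1h;
        expires           1d;
    }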
Near zero isn’t zero though. And not everyone is using caching.
Right, that’s why I said you should fire your sysadmin if they aren’t caching or can’t manage to get the cache down to zero load for static content served to simple GET requests.
Not every GET request is simple enough to cache, and not everyone is running something big enough to need a sysadmin.