I don’t like Cloudflare, but it’s nice that they allow people to stop AI scraping if they want to
They do have a point though. It would be great to let per-prompt searches through, but not mass scraping
I believe a lot of websites don’t want either, though
Does it not need to be scraped to be indexed, assuming it’s semi-typical RAG stuff?
Uh, are they admitting they are trying to circumvent technological protections set up to restrict access to a system?
Isn’t that a literal computer crime?
No-no, see. When an AI-first company does it, it’s actually called courageous innovation. Crimes are for poor people
*puts on evil hat* Cloudflare should DRM their protection, then DMCA Perplexity and other US-based “AI” companies into oblivion. Side effect: might break the Internet.
Worth it.
Gee, that’s a real [removed], ain’t it, Perplexity?
They can’t get their AI to check a box that says “I am not a robot”? I’d think that’d be a first-year comp sci student level task. And robots.txt files were basically always voluntary compliance anyway.
Cloudflare actually fully fingerprints your browser and even sells that data. That’s your IP, TLS fingerprint, operating system, full browser environment, installed extensions, GPU capabilities, etc. It’s all tracked before the box even shows up; in fact, the box is there to give the runtime more time to fingerprint you.
reCAPTCHA v2 does way more than check whether the box was checked.
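To make that concrete, here is a rough, entirely hypothetical Python sketch of what “way more than a checkbox” means: many weak signals folded into one trust score. Every signal name and weight below is made up for illustration; neither Cloudflare nor Google publishes their actual models.

```python
# Hypothetical sketch of signal-based bot scoring -- NOT Cloudflare's or
# Google's actual logic, just an illustration of "more than a checkbox".
from dataclasses import dataclass

@dataclass
class RequestSignals:
    ip_reputation: float           # 0.0 (known-bad) .. 1.0 (clean)
    tls_fingerprint_common: bool   # does the TLS ClientHello match a mainstream browser?
    ua_matches_tls: bool           # does the User-Agent agree with the TLS fingerprint?
    has_webgl: bool                # headless browsers often lack GPU/WebGL capabilities
    mouse_movement_seen: bool      # human-like pointer activity before the checkbox

def trust_score(s: RequestSignals) -> float:
    """Combine signals into a 0..1 trust score (weights are invented)."""
    score = 0.4 * s.ip_reputation
    score += 0.2 if s.tls_fingerprint_common else 0.0
    score += 0.2 if s.ua_matches_tls else 0.0
    score += 0.1 if s.has_webgl else 0.0
    score += 0.1 if s.mouse_movement_seen else 0.0
    return score

# A headless scraper with a spoofed User-Agent still scores low, because the
# TLS fingerprint and missing GPU give it away regardless of the checkbox.
bot = RequestSignals(0.9, tls_fingerprint_common=False, ua_matches_tls=False,
                     has_webgl=False, mouse_movement_seen=False)
print(trust_score(bot))  # 0.36 -> likely challenged or blocked
```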
Can’t believe I’ve lived to see Cloudflare be the good guys
It’s insane that anyone would side with Cloudflare here. To this day I can’t visit many websites like nexusmods just because I run Firefox on Linux. The Cloudflare turnstile just refreshes infinitely and has been for months now.
Cloudflare is the biggest cancer on the web, fucking burn it.
Linux and Firefox here. No problem at all with Cloudflare, despite having more or less as many privacy-preserving add-ons as possible. I even spoof my user agent to the latest Firefox ESR on Linux.
Something must be wrong with your setup.
That’s not how it works. CF uses thousands of variables to estimate a trust score and block people, so just because it works for you doesn’t mean it works for everyone.
Same goes the other way: just because it doesn’t work for you doesn’t mean it should go away.
That technology has its uses, and Cloudflare is probably aware that there are still some false positives, and is probably working on it as we write.
The decision is the website owner’s to make, weighing the advantage of filtering out the majority of bots against the disadvantage of losing some legitimate traffic to false positives. If you get a Cloudflare challenge, chances are they decided the former vastly outweighs the latter.
Now there are some self-hosted alternatives, like Anubis, but business clients prefer SaaS like Cloudflare over having to maintain their own software. Once again, it is their choice and their liberty to make it.
lmao imagine shilling for corporate Cloudflare like this. Also, false positives vs false negatives are fundamentally not equal.
“Cloudflare is probably aware that there are still some false positives, and is probably working on it as we write.”
The main issue with Cloudflare is that it’s mostly bullshit. It does not report any stats to the admins on how many users were rejected, or any false positive rates, and happily puts everyone under the “evil bot” umbrella. So people from low-trust-score environments, like Linux users or IPs from poorer countries, are at a significant disadvantage and left without a voice.
I’m literally a security dev working with Cloudflare anti-bot myself (not by choice). It’s a useful tool for corporate but a really fucking bad one for the health of the web, much worse than any LLM agent or crawler, period.
I’m on Linux with Firefox and have never had that issue before (particularly nexusmods which I use regularly). Something else is probably wrong with your setup.
Thirded. All three (Linux, FF, nexus)
ZERO ISSUES.
In my case, it’s usually the VPN.
“Wrong with my setup”: that’s not how the internet works.
I’m based in Southeast Asia and often work on the road, so IP rating is probably the deciding factor in my fingerprint score.
Either way, this is in no way acceptable.
omg ur a hacker
Did you mean Edge on Windows? 'Cause if so, welcome in!
Here comes the ridiculous offer to buy Google Chrome with money they don’t have: easy, delicious scraping directly from the source, the users themselves
You could say they are… Perplexed.
Perplexity argues that a platform’s inability to differentiate between helpful AI assistants and harmful bots causes misclassification of legitimate web traffic.
So, I assume Perplexity uses appropriate, identifiable User-Agent headers to allow hosts to decide whether to serve them one way or another?
And I’m assuming that if the robots.txt states their user agent isn’t allowed to crawl, it obeys, right? :P
No. As per the article, their argument is that they are not web crawlers generating an index; they are user-action-triggered agents working live for the user.
Except it’s not a live user hitting 10 sites all at the same time, trying to crawl the entire site… Live users cannot do that.
That said, if my robots.txt forbids them from hitting my site, as a proxy, they obey that, right?
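For reference, honoring robots.txt takes about a dozen lines with Python’s standard library. A minimal sketch, using the “PerplexityBot” user-agent token Perplexity documents for its crawler (the hosts’ complaint is precisely that requests don’t arrive labeled like this); the URLs are placeholders:

```python
# Minimal sketch of what "obeying robots.txt" looks like for a polite agent.
import urllib.robotparser
import urllib.request

ROBOTS_URL = "https://example.com/robots.txt"   # placeholder site
TARGET_URL = "https://example.com/some/page"
USER_AGENT = "PerplexityBot"

rp = urllib.robotparser.RobotFileParser(ROBOTS_URL)
rp.read()  # fetch and parse the site's robots.txt

if rp.can_fetch(USER_AGENT, TARGET_URL):
    req = urllib.request.Request(TARGET_URL, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
else:
    print(f"robots.txt disallows {USER_AGENT} on {TARGET_URL}; skipping.")
```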
It’s not up to the host to decide whom to serve content to. The web is intended to be user-agent agnostic.
yeah, it’s almost like there was already a system for this in place
This is a nice CloudFlare ad
yeah. still not worth dealing with fucking cloudflare. fuck cloudflare.
I’m out of the loop, what’s wrong with Cloudflare?
Centralization, mostly, but also their hands-off approach to most fascist content.
DEATH TO CLOUDFLARE!
That would be terrible for a lot of people as they are the only company providing such services that doesn’t charge for traffic.
They can use web.archive.org as a CDN (I do that to Cloudflare websites). But honestly, Cloudflare or not, the internet is broken.
Can you explain, please? How can I use archive.org as a CDN for my website?
That’s the entire point, dipshit. I wish we got one of the cool techno dystopias rather than this boring corporate idiot one.
I’m still holding out for Stephen Hawking to mail out Demon Summoning programs.
I’ve developed my own agent for assisting me with researching a topic I’m passionate about, and I ran into the exact same barrier: Cloudflare intercepts my request and is clearly checking if I’m a human using a web browser. (For my network requests, I’ve defined my own user agent.)
So I use that as a signal that the website doesn’t want automated tools scraping their data. That’s fine with me: my agent just tells me that there might be interesting content on the site and gives me a deep link. I can extract the data and carry on my research on my own.
I completely understand where Perplexity is coming from, but at scale, implementations like Perplexity’s are awful for the web. (Edited for clarity)
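That “treat the challenge as a signal” approach is simple to implement. A minimal sketch, assuming Python with the third-party requests library; the user-agent string is made up, but the cf-mitigated response header is the one Cloudflare documents for detecting its challenge pages:

```python
# Sketch of the "back off when challenged" behavior described above.
# Cloudflare challenge responses carry a `cf-mitigated: challenge` header,
# which makes them easy to detect without parsing the page.
import requests

USER_AGENT = "my-research-agent/0.1 (+https://example.com/about)"  # self-identifying UA

def fetch_or_defer(url: str) -> str | None:
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    if resp.headers.get("cf-mitigated") == "challenge":
        # The site is asking for a human; don't fight it -- surface a deep link.
        print(f"Site challenged us; open it yourself: {url}")
        return None
    resp.raise_for_status()
    return resp.text

html = fetch_or_defer("https://example.com/interesting-article")
```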
I hate to break it to you, but not only does Cloudflare do this sort of thing, so do Akamai, AWS, and virtually every other CDN provider out there. And far from being awful, it’s actually protecting the web.
We use Akamai where I work, and they inform us in real time when a request comes from a bot, and further classify it as one of a dozen or so bot types (search engine crawlers, analytics bots, advertising bots, social networks, AI bots, etc.). It also informs us if it’s somebody impersonating a well-known bot like Google. So we can easily allow search engines to crawl our site while blocking AI bots, bots impersonating Google, and so on.
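To illustrate what that looks like on the origin side, here is a toy sketch of routing on an edge-supplied bot category. The header name and category labels below are invented for illustration; real Akamai Bot Manager deployments configure the actual mechanism per customer:

```python
# Hypothetical sketch of origin-side logic acting on a bot classification
# forwarded by the edge. Header name and category values are made up.
ALLOWED_BOT_CATEGORIES = {"search_engine", "monitoring"}
BLOCKED_BOT_CATEGORIES = {"ai_crawler", "impersonator"}

def handle_request(headers: dict[str, str]) -> int:
    """Return an HTTP status code based on the edge's bot verdict."""
    category = headers.get("x-edge-bot-category")  # hypothetical header
    if category is None:
        return 200   # not classified as a bot: serve normally
    if category in ALLOWED_BOT_CATEGORIES:
        return 200   # e.g. let search crawlers index the site
    if category in BLOCKED_BOT_CATEGORIES:
        return 403   # block AI bots and Google impersonators
    return 200       # unknown category: fail open

print(handle_request({"x-edge-bot-category": "ai_crawler"}))  # 403
```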
When I said “implementations like this are awful for the web,” I meant that automation through AI is awful for the web. It takes away from the original content creators without any attribution and hits their bottom line.
My story was supposed to be one about responsible AI, but somehow I screwed that up in my summary.