• daniskarma@lemmy.dbzer0.com · 4 days ago

    AI does not triple traffic. It’s a completely irrational statement to make.

There’s a very limited number of companies training big LLMs, and those companies only train a model a few times per year. I would bet that the number of requests per year for a given resource by an AI scraper is in the dozens at most.

Using as much energy as is available per scrape doesn’t even make physical sense. What does that sentence even mean?

    • grysbok@lemmy.sdf.org · 4 days ago

      You’re right. AI didn’t just triple the traffic to my tiny archive’s site. It way more than tripled it. After implementing Anubis, we went from 3000 ‘unique’ visitors down to 20 in half a day. Twenty is a much more expected number for a small college archive in the summer. And that’s with no fine-tuning of Anubis, just the default settings.
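
      (Aside for anyone wondering what Anubis actually does: it sits in front of the site and makes each new client solve a small SHA-256 proof-of-work puzzle in JavaScript before handing out a pass cookie. Cheap for one real visitor, expensive for a scraper pretending to be thousands of fresh visitors. Here’s a rough Python sketch of the general idea, not Anubis’s actual code; the challenge string and difficulty value are made up:)

      ```python
      import hashlib
      import itertools

      def solve(challenge: str, difficulty: int) -> int:
          """Grind nonces until sha256(challenge + nonce) starts with `difficulty` zero hex digits."""
          target = "0" * difficulty
          for nonce in itertools.count():
              digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
              if digest.startswith(target):
                  return nonce

      def verify(challenge: str, nonce: int, difficulty: int) -> bool:
          """Server side: a single hash, so checking is cheap even though solving isn't."""
          digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
          return digest.startswith("0" * difficulty)

      # A real browser pays the solve cost once and keeps its cookie; a scraper
      # that shows up as thousands of fresh clients pays it thousands of times.
      nonce = solve("example-challenge", 4)  # difficulty 4 is a made-up value
      assert verify("example-challenge", nonce, 4)
      ```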

      I was getting constant outage reports. Now I’m not.

      For us, it’s not about protecting our IP. We want folks to find our information; that’s why we write finding aids, scan materials, and accession collections. But letting bots siphon it all up inefficiently was denying everyone access to it.

      And if you think bots aren’t inefficient, explain why Facebook requests my robots.txt 10 times a second.
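
      (That figure comes straight out of the access log, and anyone can check their own. A minimal sketch of the counting, assuming a combined-format nginx/Apache log at a hypothetical path; field positions vary with your log format:)

      ```python
      from collections import Counter

      # Count robots.txt requests per second in a combined-format access log.
      # The path and the field layout are assumptions; adjust for your server.
      hits = Counter()
      with open("/var/log/nginx/access.log") as log:
          for line in log:
              parts = line.split('"')
              if len(parts) < 2 or "robots.txt" not in parts[1] or "[" not in line:
                  continue
              # The timestamp sits between '[' and ']', e.g. [10/Jul/2025:14:01:02 +0000]
              second = line.split("[", 1)[1].split("]", 1)[0]
              hits[second] += 1

      for second, count in hits.most_common(5):
          print(f"{second}  ->  {count} robots.txt requests in that second")
      ```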