Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.

Some AI vendors, including Google, OpenAI and Apple, allow website owners to block the bots they use for data scraping and model training by amending their site’s robots.txt, the text file that tells bots which pages they can access on a website. But, as Cloudflare points out in a post announcing its bot-combating tool, not all AI scrapers respect this.
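
For illustration, here is what that opt-out looks like in practice. The user-agent tokens below (GPTBot for OpenAI, Google-Extended for Google, Applebot-Extended for Apple) are the ones those vendors document for AI-training crawlers; a real deployment would use whatever list fits the site.

```
# robots.txt: opt out of AI-training crawlers that honor it
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```

Compliance is voluntary, though: a scraper that ignores robots.txt fetches those pages anyway, which is exactly the gap Cloudflare's new tool is meant to close.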

  • MigratingtoLemmy@lemmy.world · 5 months ago

    Cloudflare’s free CDN offering is a MiTM: you use their certificates only so your traffic can pass through their network. On top of that, they control a large share of Internet infrastructure (comparable to Microsoft and Google). I hate all of these companies and specifically use Quad9 until I get my own DNS resolver running. It probably doesn’t matter to the end user, but I’m happy to see a technical crowd on Lemmy that shares my views on big tech.
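
    For anyone who wants to see that termination point directly, here is a minimal Python sketch (the hostname is a hypothetical placeholder for any site proxied through Cloudflare) that prints who issued the certificate the edge actually presents:

    ```python
    import socket
    import ssl

    # Hypothetical placeholder; substitute any site proxied through Cloudflare.
    hostname = "example.com"

    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            # For a proxied site the issuer is a Cloudflare (or partner) CA,
            # not the origin's own certificate: the TLS session terminates
            # at Cloudflare's edge, which is the MiTM point described above.
            print("issuer: ", cert["issuer"])
            print("subject:", cert["subject"])
    ```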

    • 𝕸𝖔𝖘𝖘@infosec.pub · 4 months ago

      It can matter to the end user. I had to spoof my user agent because I was using a beta version of Firefox, and Cloudflare thought I was a bot. Sites still don’t load sometimes at work (it just keeps cycling through the “checking to make sure you’re a human” bullshit), regardless of browser. It’s a single point of failure for much of the web. Not that long ago (last year, I think), Cloudflare pushed some bad config files to prod and about half the web broke. Cloudflare can also arbitrarily block websites (and has done so), since they’re the ones serving the content. In theory, CF is a great service. In practice, they’ve abused it enough that we really shouldn’t trust them again.
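
      (For reference, the user-agent workaround above boils down to presenting a mainstream release string instead of a beta one. Here is a minimal sketch of the same idea using Python's requests library, assuming a hypothetical Cloudflare-fronted URL; the header value is just an example stable-Firefox identifier.)

      ```python
      import requests

      # Hypothetical placeholder for a Cloudflare-fronted site.
      url = "https://example.com"

      # Present a mainstream stable-release Firefox identifier instead of a
      # beta build that bot detection may flag as unusual.
      headers = {
          "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) "
                        "Gecko/20100101 Firefox/126.0"
      }

      resp = requests.get(url, headers=headers)
      print(resp.status_code)
      ```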