Google published documentation explaining its testing of Web Bot Auth, an experimental IETF protocol that can help websites cryptographically verify some automated requests from bots and AI agents.
The protocol adds another verification layer by letting agents sign HTTP requests with cryptographic keys. Websites can then check those signatures against published public keys to confirm that a request actually came from the agent it claims to be from.
What’s New
Web Bot Auth uses HTTP Message Signatures (RFC 9421) to let automated clients sign outgoing requests. A bot holds a private key, publishes its public key at a known URL, and signs each request. The receiving website checks the signature against the public key to confirm identity.
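To illustrate the mechanics, here is a minimal Python sketch of the verification side, assuming an Ed25519 key and a pre-parsed set of covered components. This is a simplified illustration, not a full RFC 9421 implementation: a production verifier would also need to parse the Signature-Input header, canonicalize each covered component, and handle key rotation.

```python
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def build_signature_base(components: dict[str, str], params: str) -> bytes:
    # RFC 9421 signature base: one line per covered component,
    # followed by the @signature-params pseudo-component.
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines).encode()


def verify_signature(public_key_bytes: bytes, signature_b64: str,
                     components: dict[str, str], params: str) -> bool:
    # Returns True only if the base64-encoded signature matches the
    # signature base under the agent's published Ed25519 public key.
    key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        key.verify(base64.b64decode(signature_b64),
                   build_signature_base(components, params))
        return True
    except InvalidSignature:
        return False
```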
Google says a subset of Google-Agent requests are signed and can be authenticated as https://agent.bot.goog. Signed requests include a Signature-Agent HTTP header set to g="https://agent.bot.goog", and the corresponding signature can be verified using public keys published in that domain’s .well-known directory.
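Here is a sketch of how a site might discover those keys. The .well-known path and JSON Web Key Set format below follow the Web Bot Auth architecture draft’s key-directory convention, not anything specific to Google’s documentation, so treat them as assumptions:

```python
import json
import urllib.request

SIGNATURE_AGENT_URL = "https://agent.bot.goog"


def fetch_agent_key_directory(agent_url: str) -> dict:
    # The Web Bot Auth architecture draft publishes an agent's
    # verification keys as a JSON Web Key Set at a fixed .well-known
    # path under the Signature-Agent domain.
    directory_url = f"{agent_url}/.well-known/http-message-signatures-directory"
    with urllib.request.urlopen(directory_url) as resp:
        return json.load(resp)


# Usage: keys = fetch_agent_key_directory(SIGNATURE_AGENT_URL)
```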
According to Google’s documentation, bot-detection services, content delivery networks (CDNs), and web application firewalls (WAFs) already support the protocol. The IETF draft is authored by Thibault Meunier of Cloudflare and Sandor Major of Google, and Cloudflare publishes a reference implementation on GitHub.
The IETF Web Bot Auth Working Group was chartered in early 2026 with milestones for standards-track specifications and a best current practice document.
What Google Is Not Doing Yet
Not all Google user agents are participating. The documentation says Google is testing with “some AI agents hosted on Google infrastructure” but does not name which ones beyond the Google-Agent user-triggered fetcher.
Even for participating agents, not every request is signed. The documentation recommends that sites continue relying on IP addresses, reverse DNS, and user-agent strings as their primary verification methods while signed traffic rolls out gradually.
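For reference, the established check that guidance points to is forward-confirmed reverse DNS, sketched here in Python. The accepted domains follow Google’s long-published Googlebot verification guidance:

```python
import socket


def is_verified_googlebot(client_ip: str) -> bool:
    # Forward-confirmed reverse DNS: resolve the IP to a hostname,
    # confirm it belongs to Google's crawler domains, then resolve the
    # hostname back and make sure it maps to the same IP.
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)     # reverse lookup
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward lookup
        return client_ip in forward_ips
    except OSError:  # covers socket.herror and socket.gaierror
        return False
```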
The Internet-Draft could change as the working group develops the standard.
Why This Matters
Bot impersonation has been a persistent problem. Scrapers and bad actors can spoof user-agent strings to disguise their traffic as Googlebot or other legitimate crawlers, making it harder for site owners to tell real bot traffic from fake.
We covered this issue when Google’s Martin Splitt warned that “not everyone who claims to be Googlebot actually is Googlebot.” The available verification methods at the time were reverse DNS lookups and IP range checks. Web Bot Auth would add a layer that can’t be forged without the agent’s private key.
For sites already using a CDN or WAF that supports the protocol, verification may happen automatically. For everyone else, the experimental status means there is no urgency to act. The documentation recommends treating existing verification as the default and Web Bot Auth as supplementary.
Looking Ahead
Web Bot Auth is still moving through the standards process, and Google’s implementation remains experimental.
For now, the practical change is visibility. Websites may start seeing signed requests from some Google-Agent traffic, while existing verification methods remain the default.
The next question is whether more AI agents adopt signed requests, and whether hosting providers make verification automatic for websites that don’t want to manage keys.