We’re honored that the OWASP community invited us to write the anti-spoofing and bot-protection section of its new GenAI security project, Securing Agentic Applications Guidance.
In writing our contribution, our objectives were simple: give developers practical tactics for proving an agent’s identity and for partnering their agents with bot-mitigation systems. These tactics help ensure (1) that agents cannot be spoofed or impersonated by bad actors, and (2) that trusted agents can carry out their tasks without being blocked by bot-mitigation tools.
The Challenge of AI Agent Identity
AI agents are the next leap forward in automation. They can do more than just crawl websites; they can reason in real time, manage complex workflows like online shopping or cloud resource management, and even control other agents. But this power creates a critical new challenge: how can we trust that an AI agent is who it says it is, and is doing what it is supposed to do?
For years, bot management has distinguished “good” bots from “bad” ones by checking details like IP addresses and User-Agent strings. But these signals are fragile: attackers can easily copy them, letting malicious bots impersonate legitimate ones.
Modern autonomous agents operate across multiple APIs, interface with other agents, and make real-time decisions, all conditions that make spoofing both easier for adversaries and costlier for defenders. The result is that trustworthy agents risk being blocked, while sophisticated fraud could slip through.
To build a trustworthy “Agentic Web,” we need an updated framework for identifying and managing bots.
The new OWASP guidelines, Securing Agentic Applications, chart a path from today’s reliance on fragile heuristics toward a model based on strong, verifiable identity.
How to Build Verifiable Agent Identity: Challenges and Solutions
| Challenge | What We Recommend | Why It Works |
| --- | --- | --- |
| Proving an agent’s identity | Combine cryptographic request signatures, mutual TLS, and public-key verification (see the sketch below the table). | Each request can be traced to a single, non-forgeable keypair, eliminating spoofed traffic. |
| Speaking the language of bot-mitigation platforms | Publish stable identifiers (static IPs, session IDs, iteration IDs) and sign every HTTP request. | These signals map cleanly onto how leading mitigation tools separate “good” automation from abuse. |
| Creating a common naming layer | Support emerging standards such as the Agent Name Service (ANS), which encode protocol, capability, and provider metadata in a verifiable format. | A shared registry lets websites, APIs, and peer agents resolve identity without bespoke allow-lists. |
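To make the first row concrete, here is a minimal Python sketch of signing an outbound request with an Ed25519 keypair, loosely following the HTTP Message Signatures structure from RFC 9421. The key ID, covered components, and endpoint are illustrative, and a production agent would load its private key from a secure store rather than generating one at runtime.

```python
import base64
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo keypair; a real agent would load its long-lived private key
# from a secure store instead of generating one per run.
private_key = Ed25519PrivateKey.generate()

def sign_request(method: str, path: str, authority: str, key_id: str) -> dict:
    """Build RFC 9421-style Signature-Input and Signature headers."""
    created = int(datetime.now(timezone.utc).timestamp())
    # Signature parameters: which components are covered, when, and by which key.
    params = (
        '("@method" "@path" "@authority")'
        f';created={created};keyid="{key_id}";alg="ed25519"'
    )
    # The "signature base": covered components one per line, then the params.
    base = (
        f'"@method": {method.upper()}\n'
        f'"@path": {path}\n'
        f'"@authority": {authority}\n'
        f'"@signature-params": {params}'
    )
    signature = private_key.sign(base.encode())
    return {
        "Signature-Input": f"sig1={params}",
        "Signature": f"sig1=:{base64.b64encode(signature).decode()}:",
    }

headers = sign_request("GET", "/inventory", "api.example.com", key_id="agent-key-1")
```

A verifier holding the agent’s public key rebuilds the same signature base from the received request and checks the signature, which is what ties each request back to a single, non-forgeable keypair.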
Industry Shift: From Guesswork to Cryptographic Proof
The security industry is moving toward a more robust model for verifying automated agents. The emerging standard combines HTTP Message Signatures and mutual TLS (mTLS) to create a pattern for production-grade bot verification.
This approach doesn’t rely on identifiers that are easy to forge; instead, it uses cryptographic keys that an attacker cannot fake without first stealing them, which prevents spoofing and man-in-the-middle attacks.
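Beneath the message layer, mutual TLS lets the server authenticate the agent’s transport connection as well. Here is a minimal client-side sketch using Python’s requests library, assuming the operator has provisioned a client certificate; the file paths and endpoint are placeholders:

```python
import requests

# Present a client certificate so the server can authenticate the agent
# (mutual TLS), and pin the CA bundle used to validate the server.
response = requests.get(
    "https://api.example.com/resource",
    cert=("agent-client.crt", "agent-client.key"),  # agent's client identity
    verify="ca-bundle.pem",
)
response.raise_for_status()
```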
Initiatives like LOKA’s Universal Agent Identity Layer and the open-source Agent Network Protocol (ANP) are pushing forward this shift, building toward a future where agents have verifiable identities backed by public key infrastructure (PKI).
How to Keep Your Agent Off the Blocklist: A Guide to Transparency and Trust
As bot security systems become more perceptive and more stringent, transparency and good behavior will be increasingly important for ensuring that your legitimate agent isn’t accidentally flagged as a malicious bot. OWASP’s guidance recommends taking a proactive approach:
- Identify Transparently: Don’t hide who you are. On every request, your agent should provide a clear identity bundle, including a declarative User-Agent string, a stable IP address if possible, and, most importantly, a cryptographic signature so that your identity cannot be spoofed (a header sketch follows this list). Adopting the ANS protocol will be a key part of this.
- Declare Intent and Respect Policy: State clearly what your agent is trying to do. Use headers like X-Agent-Intent to advertise the bot’s purpose, whether that is crawling, paying, or assisting. The more clearly an agent declares its intentions, the easier it is for mitigation services to permit its actions.
- Anticipate the Next Tier of Trust: Prepare for a future where identity is persistent, with each authenticated agent having a stable agent ID, as well as the ability to generate a unique tag for each session. Over time, each agent will build a verifiable reputation that other systems can query.
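As a rough illustration of the three points above, the sketch below assembles a declarative identity bundle for one outbound request. Apart from User-Agent, the header names (X-Agent-Intent, X-Agent-Id, X-Agent-Session, X-Agent-Iteration) and their values are conventions we use for illustration, not registered standards:

```python
import uuid

# Stable, long-lived agent identifier plus a fresh tag per session.
AGENT_ID = "example-corp/shopping-agent"
SESSION_ID = str(uuid.uuid4())

def identity_headers(intent: str, iteration: int) -> dict:
    """Assemble the declarative part of the identity bundle."""
    return {
        "User-Agent": "ShoppingAgent/1.2 (+https://example.com/agent-info)",
        "X-Agent-Intent": intent,             # e.g. "crawl", "pay", "assist"
        "X-Agent-Id": AGENT_ID,
        "X-Agent-Session": SESSION_ID,
        "X-Agent-Iteration": str(iteration),  # ties retries to one workflow step
    }

headers = identity_headers("crawl", iteration=1)
```

In practice, these declarative headers would travel alongside the Signature and Signature-Input headers shown earlier, so the whole bundle is attributable to the agent’s keypair.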
Announcing the HUMAN Verified AI Agent Project
To help accelerate this shift, HUMAN Security is excited to introduce our new HUMAN Verified AI Agent initiative. This new open-source registry serves as a showcase for AI agents that use HTTP Message Signatures, as well as a practical guide for developers who want to implement the standard in their own agentic projects.
The demo illustrates a future where only verified AI agents can interact and take actions online. By leveraging the HTTP Message Signatures protocol, implementing Agent-to-Agent (A2A) communication (including support for Google’s A2A framework), and integrating the OWASP ANS identity framework, we demonstrate what trusted and secure agentic interactions on the internet can look like.
We invite AI agent developers to explore the repository and register their agents to become part of this foundational trust layer. Read more about the HUMAN Verified AI Agent Project here.
The Future: AI Agents with Digital Driver’s Licenses
The controls outlined earlier provide guidance that developers can act on right now. But OWASP’s guidance also looks forward to a possible long-term solution in section 4.3, envisioning decentralized identity registries as a logical next step.
This model includes several key components:
- A Decentralized Registry: A shared, tamper-resistant registry where legitimate agents are listed. Anyone can access an agent’s official record without relying on any single authority.
- Decentralized Identifier (DID): The agent’s unique, long-lived identifier stored in the registry. No one else can claim it, although it can be updated or revoked if needed.
- Public Keys: Each agent’s DID resolves to a document containing the agent’s cryptographic public keys. The agent signs requests or messages with its private key (which it keeps secret), and anyone can instantly verify these signatures using the agent’s public key. If the signature checks out, you know you’re dealing with the real agent (a verification sketch follows this list).
- Verifiable Credentials: Trusted organizations can issue agents cryptographically signed credentials for specific capabilities, like “authorized to make payments” or “officially owned by Company X.” Services can quickly check these credentials to decide what actions the agent can perform.
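To make the verification flow concrete, here is a hedged sketch of checking an agent’s signature against the public key published in its DID document. The resolver URL, document layout, and single-Ed25519-key assumption are all illustrative; real DID methods use standard resolver libraries and multibase or JWK key encodings:

```python
import base64
import json
import urllib.request

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def resolve_did(did: str) -> dict:
    """Fetch a DID document from a hypothetical HTTP resolver; real
    deployments would use a did:web or did:key resolver library."""
    url = f"https://resolver.example.org/1.0/identifiers/{did}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def verify_agent_signature(did: str, message: bytes, signature_b64: str) -> bool:
    """Check a signed message against the public key in the DID document.
    Assumes one Ed25519 key stored as raw base64; production documents
    may use other encodings and list several keys."""
    doc = resolve_did(did)
    key_b64 = doc["verificationMethod"][0]["publicKeyBase64"]
    public_key = Ed25519PublicKey.from_public_bytes(base64.b64decode(key_b64))
    try:
        public_key.verify(base64.b64decode(signature_b64), message)
        return True
    except InvalidSignature:
        return False
```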
OWASP sees this approach as valuable both for agents running within organizations, where secure internal communication and trust is critical, and for agents interacting across the open internet. While promising, this decentralized identity model is still emerging, and certain foundational protocols and definitions remain under active development.
The Road Ahead: Charting a Course for Verifiable Agent Identity
By moving toward this model, we can build an agentic web where trust is not assumed but proven. No single registry or cryptographic scheme will solve agent identity overnight. Standards like ANS are still new, and large-scale certificate authorities for agents don’t yet exist. Even so, we believe the principles above lay a durable foundation. OWASP has committed to updating the guidance as protocols mature, and we intend to stay involved.
We’re grateful to the OWASP GenAI Security Project team for the invitation and peer review, and to our fellow contributors across the working group who helped sharpen these ideas. Inside HUMAN, special thanks go to the research, engineering, and threat-intelligence teams whose day-to-day experience informed every recommendation.
Open standards only succeed when builders adopt them. We hope our contribution gives developers a head start on shipping agents that can identify themselves in an airtight way, and we look forward to collaborating with the community as the Agentic AI ecosystem evolves.