Implementing agentic commerce’s core features, such as automated transactions, depends on being able to identify the agents performing those transactions and to distinguish legitimate agents from malicious actors.
And as fraudsters leverage AI in ever more sophisticated ways, static multifactor authentication won’t be sufficient to establish the required level of trust.
Airtight verification increasingly requires adaptive measures that use machine learning to detect and respond to suspicious activity in real time. But trust must also be programmable at the protocol level, not merely inferred from behavior.
To address this challenge, HUMAN has released an open-source demonstration called HUMAN Verified AI Agent, which shows how agents can cryptographically authenticate themselves using HTTP Message Signatures (RFC 9421). In this model, every request from an AI agent is signed with the agent’s private key and verified against the corresponding public key at a gateway before any downstream API interaction occurs. Each agent has a unique, cryptographically bound identity, resolvable via the OWASP Agent Name Service (ANS), that makes its requests traceable, verifiable, and governable.
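To make the model concrete, here is a minimal sketch of what RFC 9421-style signing and gateway-side verification can look like, using Ed25519 keys via Python’s `cryptography` package. The key ID, covered components, and helper functions are illustrative assumptions, not HUMAN’s released implementation.

```python
# Sketch of RFC 9421 request signing (agent side) and verification
# (gateway side) with Ed25519. Names like KEY_ID are hypothetical.
import base64
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

KEY_ID = "agent-key-1"  # hypothetical key identifier an ANS record might point to
COMPONENTS = ["@method", "@authority", "@path"]  # covered components


def signature_base(method: str, authority: str, path: str, params: str) -> bytes:
    # RFC 9421 signature base: one line per covered component,
    # terminated by the @signature-params line.
    lines = [
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": {params}',
    ]
    return "\n".join(lines).encode()


def sign_request(key: Ed25519PrivateKey, method: str, authority: str, path: str) -> dict:
    # Agent side: sign the request and attach the two signature headers.
    quoted = " ".join(f'"{c}"' for c in COMPONENTS)
    params = f'({quoted});created={int(time.time())};keyid="{KEY_ID}"'
    sig = key.sign(signature_base(method, authority, path, params))
    return {
        "Signature-Input": f"sig1={params}",
        "Signature": f"sig1=:{base64.b64encode(sig).decode()}:",
    }


def verify_request(public_key, headers: dict, method: str, authority: str, path: str) -> bool:
    # Gateway side: rebuild the signature base from the received
    # Signature-Input params and check the signature before proxying
    # the request to any downstream API.
    params = headers["Signature-Input"].split("=", 1)[1]
    sig = base64.b64decode(headers["Signature"].split(":")[1])
    try:
        public_key.verify(sig, signature_base(method, authority, path, params))
        return True
    except Exception:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    headers = sign_request(key, "POST", "api.example.com", "/orders")
    assert verify_request(key.public_key(), headers, "POST", "api.example.com", "/orders")
    print("signature verified")
```

In a real deployment, the gateway would not hold the public key locally; it would resolve it from the agent’s ANS-registered identity, so that every signature checks back to a known, governable agent.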
As agentic commerce scales, systems like these will be essential for controlling which agents can act, which actions they are allowed to take, and how their behavior is traced, ultimately forming a trust layer for the agentic internet.