
Agentic Commerce Security: The Zero-Signal Crisis and How to Solve It

Dark web community posts mentioning "AI Agent" surged over 450% in the past six months, according to Visa's Payment Ecosystem Risk and Control (PERC) team. Fraudsters are already adapting to agentic commerce. Your fraud detection systems were not built for this.



Last updated: March 2026

The shift from human-driven to agent-driven purchasing is not an incremental change in e-commerce. It is a structural break in how transactions are authenticated, monitored, and secured. When an AI agent executes a purchase, it eliminates the entire behavioral signal layer that modern fraud prevention depends on. Device fingerprints, browser telemetry, mouse movements, typing cadence, session duration, geolocation correlation – all of it disappears.

This article examines the security implications of agentic commerce, the emerging standards designed to address them, and what security teams need to implement now to protect their organizations.


The Zero-Signal Crisis Explained

Traditional fraud detection relies on a rich set of device-level and behavioral signals to distinguish legitimate customers from bad actors. When an AI agent initiates a purchase, those signals vanish entirely.

| Signal Category | Human Shopping | Agent Shopping |
| --- | --- | --- |
| Device fingerprint | Available | Absent or synthetic |
| Browser behavior | Mouse movement, scroll patterns, hesitation | None |
| Session duration | Natural variance (minutes) | Milliseconds |
| Geolocation | Correlates with billing address | Data center IP |
| Typing patterns | Unique biometric signature | Not applicable |
| Purchase velocity | Human-speed, sequential | Hundreds per minute |

The result is a visibility gap: legitimate agent-initiated purchases become indistinguishable from automated bot fraud. A properly authorized AI agent buying groceries on behalf of a consumer produces the exact same signal profile as a credential-stuffing attack executing unauthorized purchases at scale.

This is not a theoretical concern. Friendly fraud already accounts for approximately 75% of all chargebacks, costing merchants an estimated $132 billion annually. As agentic commerce scales – McKinsey projects $3-5 trillion globally by 2030 – the zero-signal problem will amplify these losses dramatically unless new trust infrastructure is established.

Fraudsters are also weaponizing the technology. Visa’s research documents attackers manipulating agentic shopping logic to steer consumers toward fraudulent websites engineered to appear trustworthy, exploiting the trust consumers place in their AI agents.


New Fraud Vectors in Agentic Commerce

Agentic commerce introduces attack surfaces that did not exist in traditional e-commerce:

Agent impersonation. Without a standardized identity layer, malicious software can present itself as a legitimate shopping agent. A fraudulent agent could intercept payment credentials, redirect purchases, or exfiltrate consumer data while appearing to function normally.

Scope escalation. An agent authorized for a narrow task (e.g., “buy groceries under $100”) could be exploited to exceed its mandate – purchasing unauthorized categories, exceeding spending limits, or transacting with unapproved merchants.

Delegation chain attacks. When agents invoke sub-agents or third-party services, each handoff creates an opportunity for interception or manipulation. The longer the delegation chain, the larger the attack surface.

Intent manipulation. Attackers can craft adversarial inputs that cause agents to misinterpret consumer instructions, triggering purchases the consumer never intended.

Chargeback exploitation. Traditional chargeback reason codes map poorly to agent-initiated transactions. A consumer claiming “I didn’t authorize this” becomes far more complex when the authorization was delegated to an agent whose scope is ambiguous. This creates fertile ground for friendly fraud at scale.


The Verifiable Intent Standard

On March 5, 2026, Mastercard and Google introduced the most significant development in agentic commerce trust to date: the Verifiable Intent standard.

Verifiable Intent is an open-source, standards-based trust layer that creates tamper-resistant records capturing three critical elements:

  1. The cardholder authorizing the AI agent – cryptographic proof that a specific consumer delegated authority.
  2. The consumer’s specific instructions – the intent scope defining what the agent is permitted to do.
  3. The agent-merchant interaction – an auditable record of the transaction that resulted from the agent’s actions.
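The standard's actual schema and cryptography are defined by Mastercard and Google. Purely to illustrate the tamper-resistance idea behind those three elements, the Python sketch below models a delegation record with invented field names, using HMAC-SHA256 as a stand-in for the standard's public-key signatures:

```python
import hashlib
import hmac
import json

# Hypothetical sketch only: the real Verifiable Intent schema is defined by
# Mastercard and Google. Field names are invented, and HMAC-SHA256 stands in
# for the standard's public-key signatures to keep the example self-contained.
def build_intent_record(delegation_key: bytes) -> dict:
    record = {
        "cardholder": "card_ref_abc123",           # who delegated authority
        "agent": "agent.example.com/shopper-v1",   # which agent was authorized
        "intent_scope": {                          # the consumer's instructions
            "max_amount_usd": 100,
            "categories": ["groceries"],
            "expires": "2026-04-01T00:00:00Z",
        },
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["proof"] = hmac.new(delegation_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_intent_record(delegation_key: bytes, record: dict) -> bool:
    unsigned = {k: v for k, v in record.items() if k != "proof"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(delegation_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("proof", ""), expected)
```

Because the proof covers the canonicalized record, any later change to the scope, the agent, or the cardholder reference invalidates it, which is what makes the record usable as dispute evidence.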

The technical foundation draws on established standards from the FIDO Alliance, EMVCo, IETF, and W3C. A key design principle is Selective Disclosure: each transaction party receives only the minimum information necessary for their role. The merchant sees proof of authorization and payment validity without accessing the consumer’s full profile. The payment network sees delegation proof without accessing purchase details.

This creates a cryptographic audit trail that fundamentally changes dispute resolution. When a consumer disputes an agent-initiated purchase, the Verifiable Intent record shows what was explicitly authorized, the agent’s action log shows what it actually did, and the merchant’s transaction record shows what was delivered. If the agent acted within scope, the chargeback claim weakens. If the agent exceeded scope, liability shifts to the agent platform.

Partners in the initiative include Google, Fiserv, IBM, Checkout.com, Basis Theory, and Getnet.


Know Your Agent (KYA) Framework

Analogous to Know Your Customer (KYC) requirements in financial services, Know Your Agent (KYA) is an emerging framework that establishes identity and trust for AI agents operating in commerce environments. KYA requires answering three questions before an agent can transact:

  1. Who or what is the agent? Establishing the agent’s identity, its developer, and its technical provenance.
  2. What is it permitted to do? Defining the scope of authorized actions – transaction limits, merchant categories, product types, and time windows.
  3. On whose behalf does it act? Verifying the delegation chain from the consumer to the agent with cryptographic proof.

KYA is designed to work alongside traditional KYC, not replace it. The consumer’s identity is still verified through existing mechanisms. KYA adds a verification layer for the autonomous intermediary acting on their behalf.
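The three KYA questions map naturally onto a pre-transaction gate. The following Python sketch is illustrative only; the registry, credential fields, and check order are hypothetical, not a published KYA API:

```python
from dataclasses import dataclass, field

# Hypothetical KYA gate: registry and field names are invented for
# illustration; no published KYA API is implied.
@dataclass
class AgentCredential:
    agent_id: str
    developer: str                                         # question 1: who is it?
    allowed_categories: set = field(default_factory=set)   # question 2: scope
    max_amount_usd: float = 0.0
    principal: str = ""                                    # question 3: for whom?

KNOWN_AGENTS = {
    "agent-001": AgentCredential(
        agent_id="agent-001",
        developer="Example Labs",
        allowed_categories={"groceries"},
        max_amount_usd=100.0,
        principal="consumer-42",
    )
}

def kya_check(agent_id: str, principal: str, category: str, amount: float):
    cred = KNOWN_AGENTS.get(agent_id)
    if cred is None:
        return False, "unknown agent"             # fails question 1
    if category not in cred.allowed_categories or amount > cred.max_amount_usd:
        return False, "outside authorized scope"  # fails question 2
    if principal != cred.principal:
        return False, "delegation mismatch"       # fails question 3
    return True, "ok"
```

In a real deployment the registry lookup would be a call to a network-level directory (such as Visa's) and the delegation check would verify a cryptographic proof rather than a string match.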

The Cloud Security Alliance’s Agentic Trust Framework (published February 2026) operationalizes KYA principles by applying Zero Trust architecture to AI agents: never trust an agent by default, always verify identity, scope, and intent before permitting any action.

NIST’s AI Agent Standards Initiative, announced in February 2026 through its Center for AI Standards and Innovation (CAISI), is working to formalize agent identity and interoperability standards at the federal level.


Visa Trusted Agent Protocol (TAP)

Visa launched the Trusted Agent Protocol in October 2025, developed in collaboration with Cloudflare, Adyen, Shopify, Stripe, and Microsoft. TAP provides a network-level identity layer for AI agents engaged in commerce.

The protocol operates on three principles:

Cryptographic identity. Agents must register public keys in a Visa-managed directory before initiating transactions. When an agent makes a request, it cryptographically signs the HTTP message using its private key. Merchants and Visa verify the signature against the registered public key using the Web Bot Auth standard for HTTP message signatures.

Agent-bound tokens. Payment credentials are bound to specific authenticated agents through shared payment tokens (SPTs). An agent cannot transfer its payment authorization to another agent or process. This prevents credential forwarding and limits the blast radius of a compromised agent.

Fraud risk scoring. SPTs carry fraud risk scores that allow merchants to make informed acceptance decisions. A newly registered agent with no transaction history receives a higher risk score than an established agent with a clean behavioral record.
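To make the cryptographic identity step concrete, the sketch below builds a simplified RFC 9421-style signature base over a request's covered components and signs it. Real Web Bot Auth signatures use registered asymmetric keys (such as Ed25519) and the full signature-base serialization; HMAC-SHA256 is used here only to keep the Python sketch self-contained:

```python
import hashlib
import hmac

# Simplified sketch in the spirit of RFC 9421 HTTP Message Signatures.
# Real Web Bot Auth uses registered asymmetric keys and a stricter
# signature-base format; HMAC keeps this example dependency-free.
def signature_base(method: str, authority: str, path: str, created: int) -> bytes:
    # Covered components are serialized one per line, like RFC 9421's base.
    lines = [
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": created={created}',
    ]
    return "\n".join(lines).encode()

def sign_request(key: bytes, method, authority, path, created) -> str:
    return hmac.new(key, signature_base(method, authority, path, created),
                    hashlib.sha256).hexdigest()

def verify_request(key: bytes, method, authority, path, created, sig) -> bool:
    expected = sign_request(key, method, authority, path, created)
    return hmac.compare_digest(expected, sig)
```

Because the signature covers the method, host, path, and a creation timestamp, a merchant can detect both tampering and replay of a captured request against a different endpoint.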

TAP is currently in sandbox with over 30 partners actively building integrations and more than 100 ecosystem partners worldwide. Visa is also piloting the protocol in Asia Pacific markets throughout 2026.


PCI DSS Implications for AI Agents

The PCI Security Standards Council has published AI principles for payment environments that carry direct implications for agentic commerce platforms. Three PCI DSS requirements demand particular attention:

Requirement 7 – Least Privilege Access. AI agents must operate under strict least-privilege controls. An agent authorized to process a payment must never have access to the full card vault. Scoped tokens – not raw Primary Account Numbers (PANs) – must be used for all agent-initiated transactions. Platforms that allow agents to handle unmasked card data face immediate compliance exposure.

Requirement 10 – Audit and Logging. Logging must be sufficient to audit the complete chain from user authorization through agent reasoning to payment execution. This includes prompt inputs, the agent’s decision process, and every action taken. Traditional transaction logs that record only the final API call are insufficient. The entire delegation and decision chain must be traceable.

Documentation and Oversight. Organizations must document agent privileges, validate AI outputs against expected behavior, and maintain human oversight mechanisms. Failure to document how agents interact with cardholder data environments risks violating foundational PCI DSS requirements for access control, auditability, and secure authentication.

For platforms using vault-first architectures where agents handle only tokenized aliases (never raw card data), the compliance posture is significantly stronger. Agents that interact exclusively with payment tokens rather than PANs reduce PCI scope and limit the damage potential of a compromised agent.
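The vault-first pattern can be illustrated with a toy Python vault: the agent's code path only ever sees an opaque alias, and only the vault, which sits inside PCI scope, can act on it. All class and function names here are hypothetical:

```python
import secrets

# Toy sketch: in production this is a PCI-scoped tokenization service.
# The agent process never touches this class's internal storage.
class CardVault:
    def __init__(self):
        self._store = {}  # alias -> PAN, held only inside the vault boundary

    def tokenize(self, pan: str) -> str:
        alias = "tok_" + secrets.token_hex(8)
        self._store[alias] = pan
        return alias

    def charge(self, alias: str, amount_usd: float) -> bool:
        # Only the vault resolves the alias; callers pass tokens only.
        return alias in self._store and amount_usd > 0

def agent_purchase(vault: CardVault, alias: str, amount_usd: float) -> bool:
    # The agent's entire payment surface is the token, never the raw PAN,
    # which is what keeps the agent out of Requirement 7's card-vault scope.
    return vault.charge(alias, amount_usd)
```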


Consumer Protection: The Liability Question

When an AI agent makes an unauthorized or harmful purchase, the liability question has no settled answer. Courts and regulators are applying existing frameworks – agency law, product liability, contract law – to a technology those frameworks were never designed to address.

Current state: Under most existing legal frameworks, the merchant remains the merchant of record and retains responsibility for fraud, chargebacks, and disputes. The deploying organization (the business that released the agent) bears primary liability, treated similarly to an employer’s vicarious liability for employee actions.

Emerging shift: Verifiable Intent and similar delegation-proof mechanisms may redistribute liability. If cryptographic evidence demonstrates the agent exceeded its authorized scope, responsibility could shift upstream to the agent platform. This represents a meaningful protection for merchants against the rising tide of friendly fraud.

Authorization gap: The fundamental unresolved question is how to prove a consumer authorized an agent to make a specific purchase. Traditional click-to-confirm consent does not apply to autonomous agents. Blanket authorization (“handle my shopping”) may not satisfy regulatory requirements for informed consent on individual transactions.

Cooling-off periods: Existing return and cancellation rights were designed for deliberate human purchases. Whether these protections apply – and how – when an agent executes an instant purchase remains legally untested.

The EU’s revised Product Liability Directive extends strict liability to software and AI, treating a defective AI agent like a defective physical product. This is the most significant near-term liability risk for agentic commerce providers operating in European markets.


Regulatory Landscape: EU AI Act, LGPD, and FTC

No jurisdiction has enacted regulation specifically addressing agentic commerce. Organizations must navigate a patchwork of existing laws applied to novel circumstances.

EU AI Act (high-risk enforcement: August 2, 2026). Agentic commerce systems that influence financial decisions or handle sensitive data are likely classified as high-risk. This triggers obligations around technical documentation, risk assessments, human oversight, transparency, accuracy, and cybersecurity. Penalties reach up to 35 million EUR or 7% of global annual revenue. Critically, the AI Act predates mainstream agentic commerce and contains no specific provisions for autonomous purchasing agents.

Brazil’s LGPD. Enforcement has escalated significantly, with over EUR 12 million in fines in Q1 2025 alone. ANPD is actively targeting companies that use personal data in AI processing without adequate consent. LGPD’s breach notification requirement, which ANPD expects to be met within roughly three business days, creates particular urgency for platforms handling payment data through AI agents. Brazil’s Bill 2338/2023, currently before the Chamber of Deputies, would establish explicit rights to explanation and human review for automated decisions.

United States – FTC. The FTC has signaled reduced appetite for AI-specific rulemaking, favoring an enforcement-driven approach under existing Section 5 authority (unfair and deceptive practices). However, the Colorado AI Act (effective June 30, 2026) requires both developers and deployers of high-risk AI systems to exercise “reasonable care” to protect consumers from algorithmic discrimination – the most comprehensive state-level AI consumer protection law to date. A December 2025 Executive Order proposes federal preemption of inconsistent state AI laws, but the outcome remains uncertain.

UK ICO. The UK Information Commissioner’s Office published a “Tech Futures” report in January 2026 specifically addressing agentic AI, recommending purpose limitation at each processing stage, data minimization, user control over agent data access, and human approval before accessing personal information.


Best Practices for Merchants

Based on the emerging standards, regulatory requirements, and threat landscape, security teams should prioritize the following:

  1. Require agent identification at checkout. Integrate with Visa TAP or Mastercard Agent Pay to verify agent identity before processing transactions. Do not accept unidentified agent traffic.

  2. Implement vault-first payment architecture. Ensure AI agents never handle raw card data. Use tokenization services so agents interact exclusively with payment aliases, reducing PCI scope and limiting breach impact.

  3. Log complete delegation chains. Store Verifiable Intent records alongside transaction data. Record the full path from user authorization through agent action to payment execution with cryptographic signatures at each step.

  4. Enforce agent-specific transaction limits. Set maximum amounts, category restrictions, and velocity limits for agent-initiated purchases. An agent authorized for routine purchases should not be able to execute high-value transactions without escalation.

  5. Require out-of-band confirmation for high-risk transactions. For purchases exceeding defined thresholds (amount, category, or frequency), require direct consumer confirmation through a separate channel – push notification, SMS, or messaging platform.

  6. Build agent behavioral profiles. Replace device-based fraud signals with agent-level behavioral baselines: typical purchase frequency, amount ranges, merchant categories, and time patterns. Flag deviations for review.

  7. Publish agent-specific policies. Define and communicate clear return, dispute, and liability policies for agent-initiated transactions. Ambiguity in policies creates chargeback exposure.

  8. Prepare for EU AI Act compliance. Conduct risk classification analysis, complete required documentation, implement human oversight mechanisms, and prepare conformity assessments before the August 2026 deadline.

  9. Implement rate limiting per agent and per user. Prevent runaway agents from executing excessive transactions. Set velocity limits that align with reasonable consumer purchasing patterns.

  10. Design graceful degradation. If agent identity cannot be verified, fall back to human-in-the-loop confirmation rather than blocking the transaction outright or processing it without verification.
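Items 4, 6, and 9 above reduce to a per-agent policy check with a sliding velocity window. A minimal Python sketch, with invented thresholds and names:

```python
import time
from collections import deque

# Illustrative policy combining amount limits, category restrictions, and
# velocity limits; thresholds and names are invented for this example.
class AgentPolicy:
    def __init__(self, max_amount_usd=100.0, categories=("groceries",),
                 max_tx_per_hour=5):
        self.max_amount_usd = max_amount_usd
        self.categories = set(categories)
        self.max_tx_per_hour = max_tx_per_hour
        self._recent = deque()  # timestamps of allowed transactions

    def allow(self, amount_usd, category, now=None):
        now = time.time() if now is None else now
        # Drop timestamps outside the one-hour velocity window.
        while self._recent and now - self._recent[0] > 3600:
            self._recent.popleft()
        if amount_usd > self.max_amount_usd:
            return False, "amount exceeds limit; escalate to human"
        if category not in self.categories:
            return False, "category not authorized"
        if len(self._recent) >= self.max_tx_per_hour:
            return False, "velocity limit hit; require out-of-band confirmation"
        self._recent.append(now)
        return True, "ok"
```

The denial reasons map to the graceful-degradation principle in item 10: a failed check should route to human confirmation rather than silently blocking or silently processing.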


Frequently Asked Questions

What is the zero-signal crisis in agentic commerce?

The zero-signal crisis refers to the disappearance of traditional fraud detection signals when AI agents execute purchases instead of humans. Device fingerprints, browser behavior, typing patterns, geolocation correlation, and session timing – the signals that fraud models rely on to distinguish legitimate customers from attackers – are absent in agent-initiated transactions. This creates a visibility gap where authorized agent purchases are indistinguishable from automated bot fraud.

How does Mastercard’s Verifiable Intent standard work?

Verifiable Intent creates tamper-resistant cryptographic records that capture three elements: proof that the cardholder authorized the agent, the specific scope of the consumer’s instructions, and an auditable record of the agent-merchant interaction. Built on FIDO Alliance, EMVCo, IETF, and W3C standards, it uses Selective Disclosure to share only the minimum data needed with each transaction party. This provides a chain of evidence for dispute resolution without exposing unnecessary consumer data.

Who is liable when an AI agent makes an unauthorized purchase?

No jurisdiction has definitively answered this question. Under current legal frameworks, the deploying organization (the business operating the agent) typically bears primary liability, analogous to employer liability for employee actions. The EU’s revised Product Liability Directive extends strict liability to software, treating defective AI like a defective product. With Verifiable Intent or similar delegation proofs, liability may shift based on whether the agent acted within or exceeded its authorized scope.

Does PCI DSS apply to AI agents handling payment data?

Yes. The PCI Security Standards Council’s published AI principles make clear that PCI DSS requirements apply to AI systems in payment environments. Requirement 7 (least privilege) mandates that agents use scoped tokens, not raw PANs. Requirement 10 (audit logging) requires tracing the full chain from user authorization through agent reasoning to payment execution. Organizations must document agent privileges and maintain human oversight over AI systems interacting with cardholder data.

How should merchants prepare for the EU AI Act’s August 2026 deadline?

Merchants operating in EU markets should: (1) determine whether their AI commerce systems qualify as high-risk under the Act, (2) complete required risk assessments and technical documentation, (3) implement human oversight mechanisms for significant purchasing decisions, (4) ensure transparency about AI involvement in transactions, and (5) prepare conformity assessment documentation. Non-compliance penalties reach up to 35 million EUR or 7% of global annual revenue.

What is the difference between Visa TAP and Mastercard’s approach?

Visa’s Trusted Agent Protocol (TAP) focuses on network-level agent identity through cryptographic HTTP message signatures and a Visa-managed public key directory. Agents register their keys, sign requests, and merchants verify signatures. Mastercard’s approach centers on Agentic Tokens that encode cardholder identity, agent identity, and authorized scope into a single cryptographic token, combined with the Verifiable Intent standard for consent documentation. Both are underpinned by Cloudflare’s Web Bot Auth technology and are complementary rather than competing.

How does Brazil’s LGPD affect agentic commerce platforms?

LGPD imposes strict requirements on AI-mediated transactions in Brazil: a valid legal basis for each data processing stage, prompt breach notification (expected within roughly three business days), rights to explanation and human review for automated decisions, and non-discrimination protections. ANPD enforcement has escalated aggressively, with over EUR 12 million in fines in early 2025. Platforms operating WhatsApp-based commerce agents in Brazil face particular scrutiny given the volume of personal data processed through messaging interactions.


This article is intended for security professionals, fraud prevention teams, and compliance officers evaluating agentic commerce risk. The regulatory landscape is evolving rapidly. Organizations should consult legal counsel for jurisdiction-specific guidance and review this analysis quarterly against emerging standards and enforcement actions.

Hexagon Team

Published March 8, 2026

