8 min read / Jonathan Gill

The FCA Admits It Doesn't Know Who's Liable When AI Pays Your Bills

Agentic payments are here. The regulations aren't. What UK businesses need to know about the liability gap, and what to do before the rules land.

fca ai-regulation payments uk-sme fintech agentic-ai

The Digital Regulation Cooperation Forum, the body coordinating the UK’s four main digital regulators, has published a call for views on agentic AI. Buried in the document is an admission that should concern every UK business using AI in financial workflows.

They listed the areas where regulatory clarity is “most needed.” Data protection came first. Liability came second.

The regulators are asking industry for help figuring out who’s responsible when an AI agent makes a payment that goes wrong. That isn’t a theoretical concern. The technology to do this is already deployed.

What Agentic Payments Actually Mean

Right now, every payment you make involves a human decision. You click “pay.” You enter the card number. You approve the transaction. UK payment regulations, including Strong Customer Authentication requirements, assume this human-in-the-loop model.

Agentic payments change that. An AI system has delegated authority to initiate transactions without asking you each time. It compares energy tariffs and switches providers automatically. It rebalances your investment portfolio. It pays invoices it deems legitimate based on patterns it’s learned.

This isn’t a future scenario. Anthropic’s Computer Use capability, launched in October 2024 and significantly enhanced since, allows Claude to navigate websites, fill forms, and execute transactions. OpenAI’s Operator, announced in 2025, is designed specifically for autonomous task completion including payments. These tools can already access banking interfaces and initiate transfers.

The infrastructure for AI-initiated payments exists. The liability framework doesn’t.

We’ve Been Here Before

I know this pattern. I’ve lived it.

Watchfinder didn’t take online card payments until 3D Secure was in place. That was a deliberate decision. Selling high-value watches online without a clear liability framework wasn’t a risk worth taking. One wave of chargebacks on luxury goods could have been serious. We waited until the liability question was answered, then moved.

3D Secure gave us that answer. The additional authentication step (that familiar redirect to your bank for a code) looks like friction. In practice it's a liability transfer. When a customer authenticates and the transaction is later disputed as fraudulent, liability shifts from the merchant to the card issuer. The bank has skin in the game. Suddenly they're far more motivated to fight fraud proactively.

That liability clarity is what unlocked online payments for businesses like ours. It didn’t just protect us from chargebacks. It changed the entire dynamic with the banks. When they carry the liability, they build better fraud detection. Everyone in the chain has an incentive to make it work.

But the liability shift had a gap. It covered fraud: transactions where the cardholder didn’t make the purchase. It didn’t cover friendly fraud: a buyer signs for the goods, then tells their bank the delivery never happened and that the signature isn’t theirs. Computing reported on this loophole, citing retail CIOs describing losses with no recourse. Mastercard confirmed it existed. Visa pointed retailers to their acquirer. The acquirer confirmed it was real and offered nothing useful.

One CIO put it plainly: “Banks do not lose any money so have no interest in fixing the situation.”

We weren’t alone in pushing for a fix. Retailers across the industry were lobbying couriers to photograph deliveries: simple proof that goods had actually been handed over. Every logistics company said no. Too operationally complex. Not worth the cost. The incentive wasn’t there.

Then Covid hit. Contactless delivery became mandatory overnight. Couriers photographing parcels at the door became standard practice globally within months. A change the industry had resisted for years happened in weeks, because suddenly everyone had the same forcing function.

The non-delivery fraud problem didn’t disappear, but the evidence base changed. Proof of delivery photos made fraudulent claims significantly harder to sustain.

The lesson: industries don’t fix liability gaps voluntarily. They fix them when they’re forced to: by regulation, by crisis, or by a competitor who moves first and sets the standard.

The underlying dynamic is worth naming: rules written to solve one problem created a new gap that criminals exploited, and because liability didn't sit clearly anywhere, the incentive to fix it didn't exist.

That is exactly the situation the FCA now faces with agentic payments. The technology is here. The liability question isn’t answered. And until it is, the incentive structures that make the whole system work (banks fighting fraud proactively, businesses building responsibly) don’t exist in the same way.

Businesses waiting for full regulatory clarity before deploying agentic AI are making a reasonable call. But the ones who build governance frameworks now, before the rules land, will be the ones who move fastest when clarity does arrive. The liability framework will come. The question is whether you’re ready when it does.

The Regulatory Gap

Strong Customer Authentication, the requirement for two-factor verification on significant transactions, assumes a human authenticates each payment. When an AI agent makes the decision, that assumption breaks.

The FCA’s existing rules don’t accommodate delegated authority to algorithms. They don’t specify who’s liable when an AI agent is compromised and makes fraudulent payments. They don’t clarify how dispute resolution works when neither the payer nor the payee is a human.

The DRCF’s call for views explicitly acknowledges this gap. They’re asking industry to help them think through liability frameworks for autonomous systems. That exercise is necessary. It’s also slow. And businesses are already deploying these capabilities.

How the UK Is Approaching This

The UK government has rejected an EU-style AI Act in favour of a sector-by-sector approach. Rather than creating a single AI regulator or comprehensive legislation, existing bodies adapt their frameworks to new technology. The DRCF coordinates between the FCA (financial services), Ofcom (communications), ICO (data protection), and CMA (competition).

This approach has advantages. It’s flexible. It avoids sweeping rules that become outdated before they’re implemented. It lets regulators specialise in their domains.

It also has a significant drawback: uncertainty. While Brussels has published detailed rules on AI liability, London is still asking questions. The DRCF’s Thematic Innovation Hub, launched in 2025 with agentic AI as its first focus, provides a single point of entry for innovators. But entry to what, exactly, remains unclear.

For UK fintechs and financial services firms, this creates a strategic dilemma. Build agentic payment features now, under uncertain liability rules, and risk expensive retrofitting later? Or wait for clarity and potentially lose ground to competitors in jurisdictions with clearer frameworks?

UK vs EU: A Regulatory Arbitrage Risk

The EU AI Act takes a different approach. It classifies AI systems by risk level and imposes specific obligations on high-risk applications, including many financial services use cases. The rules are prescriptive. They’re also clear.

UK regulators have explicitly chosen not to follow this path. The government maintains that existing sectoral regulation, adapted by expert bodies, is preferable to blanket AI legislation.

That philosophy is defensible. But it creates a timing problem. The DRCF’s call for views closed recently. Feedback takes time to process. Guidance takes time to draft. In the meantime, the technology isn’t waiting.

There’s a genuine risk that regulatory uncertainty pushes agentic payment innovation out of the UK. If you’re building a product that relies on autonomous financial decision-making, and you have a choice between launching under clear EU rules or uncertain UK guidance, the calculation isn’t difficult.

What the Industry Is Doing While Regulators Think

Here’s the uncomfortable truth: the payments industry isn’t waiting for the FCA.

Mastercard announced Agent Pay in April 2025: a dedicated framework for AI agents to transact on behalf of consumers. It includes a "smart credential" system for agent authentication, intent verification (agents must declare what they're paying for), and a proposed "shared liability model" with consumer liability capped at £50 for verified agent transactions. Their Chief Product Officer put it bluntly: "By 2027, we expect AI agents to be making billions of transactions. We're building the trust layer for that future."

Visa followed with their Intelligent Commerce initiative in April 2025: a "Visa Agent Credential" standard and APIs for merchants to verify agent authenticity. Partnerships with Anthropic, Microsoft, and Shopify.

Stripe launched an Agent Toolkit in early 2025: developer infrastructure for AI agents to create payments, manage subscriptions, and handle refunds, with fraud detection tuned specifically for AI-initiated transactions.
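What these schemes share is a common shape: a credential for the agent, a declared intent, and hard limits checked before any money moves. Here's a minimal sketch of that shape in Python. Every name in it is hypothetical and corresponds to no real Mastercard, Visa, or Stripe API; it illustrates the checks, not any network's implementation.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical structures only: these names do not correspond to any
# real card network or payment provider API.

@dataclass(frozen=True)
class AgentMandate:
    """Delegated authority a user grants to an AI agent."""
    agent_id: str                        # credential identifying the agent
    allowed_categories: frozenset[str]   # purposes the agent may pay for
    per_txn_limit_pence: int             # hard cap per transaction
    expires_at: datetime                 # mandates should not be open-ended

@dataclass(frozen=True)
class PaymentIntent:
    """What the agent declares before paying (intent verification)."""
    agent_id: str
    category: str        # declared purpose, e.g. "energy" or "saas"
    amount_pence: int
    payee: str

def authorise(mandate: AgentMandate, intent: PaymentIntent,
              now: datetime) -> tuple[bool, str]:
    """Check a declared intent against the mandate before money moves."""
    if intent.agent_id != mandate.agent_id:
        return False, "unknown agent credential"
    if now >= mandate.expires_at:
        return False, "mandate expired"
    if intent.category not in mandate.allowed_categories:
        return False, "undeclared or disallowed purchase category"
    if intent.amount_pence > mandate.per_txn_limit_pence:
        return False, "amount exceeds per-transaction limit"
    return True, "authorised"
```

The point of the sketch is that every rejection reason doubles as an audit trail: when a transaction is refused or disputed, there is a recorded, checkable rule that explains why.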

The pattern is clear: the card networks are trying to own the standards conversation. Whoever establishes the liability framework first shapes whatever regulation eventually follows, and they know it. The DRCF's call for views isn't happening in a vacuum. The networks are already building frameworks that regulators will likely end up ratifying rather than writing from scratch.

But not everyone is moving. Revolut has stated it won’t offer autonomous payment products until there’s regulatory clarity. Starling’s CEO has publicly described the space as “the Wild West.” UK Finance is calling for statutory clarity rather than fragmented industry standards.

The market is splitting into two camps: move fast and establish norms, or wait for the rules and move clean. Both are defensible strategies. The risk in the first camp is building infrastructure that gets overruled. The risk in the second is falling behind competitors who shaped the conversation.

What Businesses Should Actually Do

The liability framework will arrive eventually. When it does, the cost of retrofitting governance will be significantly higher than building it in now. Three things to address this quarter:

Map your AI touchpoints. Every place AI touches financial workflows (invoice processing, expense approvals, subscription management, purchase orders) needs documentation. You can't govern what you haven't identified.

Define autonomy levels. Be explicit about what AI can recommend, what it can decide with human review, and what it can execute without intervention. These boundaries should be written down, not assumed.

Document the decision logic. When an AI agent initiates a payment, what criteria did it use? Can you explain that to an auditor? Can you reconstruct the reasoning if something goes wrong? If the answer is no, you have a problem that regulation will eventually expose.
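Written down, those boundaries can be as simple as a policy table plus an audit record. A minimal sketch in Python; the workflow names, autonomy tiers, and record fields are illustrative assumptions, not a prescribed schema:

```python
import json
from datetime import datetime, timezone
from enum import Enum

# Illustrative only: explicit autonomy tiers and an audit record for
# AI-initiated payment decisions. All names are hypothetical.

class Autonomy(Enum):
    RECOMMEND = "recommend"        # AI suggests, a human decides
    HUMAN_REVIEW = "human_review"  # AI decides, a human approves
    EXECUTE = "execute"            # AI acts without intervention

# The written-down boundaries: one entry per financial workflow.
POLICY = {
    "invoice_processing": Autonomy.HUMAN_REVIEW,
    "subscription_renewal": Autonomy.EXECUTE,
    "purchase_orders": Autonomy.RECOMMEND,
}

def record_decision(workflow: str, action: str, criteria: dict) -> str:
    """Log enough context to reconstruct the reasoning for an auditor."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "autonomy": POLICY[workflow].value,
        "action": action,
        "criteria": criteria,  # the rules and thresholds the agent applied
    }
    return json.dumps(entry)
```

The structure matters more than the tooling: a lookup that answers "what was this system allowed to do?" and a log that answers "why did it do this?" are what an auditor, or a dispute process, will ask for.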

The Cost of Waiting

The DRCF’s admission that liability clarity is needed isn’t a criticism of their approach. It’s an honest assessment of where things stand. The regulators are engaging seriously with a complex problem. That engagement takes time.

Businesses don’t have that luxury. The tools for agentic payments are available now. Competitors are already experimenting. Customers are already expecting more autonomous services.

The firms that thrive won’t be the ones who wait for perfect clarity. They’ll be the ones who build governance frameworks robust enough to accommodate whatever rules eventually arrive. Governance built before the rules land costs a fraction of retrofitting compliance after the fact.

The FCA hasn’t told you what the liability framework will look like. They’ve told you that it’s coming. That’s enough to act on.


Jonathan Gill is founder of Squared Lemons, helping UK businesses adopt AI without the corporate waffle.


FAQ

What are agentic payments?

Agentic payments occur when an AI agent autonomously initiates financial transactions on behalf of a user, for example booking travel, paying invoices, or purchasing software without the user confirming each transaction manually. Tools such as Anthropic's Computer Use and OpenAI's Operator already enable this type of autonomous financial action.

Who is liable when an AI agent makes an unauthorised payment?

The Digital Regulation Cooperation Forum has acknowledged that current UK regulations do not clearly assign liability when an AI agent causes a financial loss. Payment regulation was written assuming a human authorises each transaction; agentic systems introduce a delegated-authority model that existing frameworks do not yet cover.

How are card networks responding to AI-initiated payments?

Mastercard has launched Agent Pay, Visa has introduced Intelligent Commerce, and Stripe has released an Agent Toolkit to enable AI agents to initiate payments within controlled parameters. These schemes create authenticated credentials for AI agents, much as 3D Secure added an authentication step that shifted liability for human card transactions.

What should UK businesses do about agentic payment risks now?

The article recommends three steps: map every place where AI touches your payment workflows, define the level of autonomy each AI system is permitted (for example, authorised to spend up to a set amount without approval), and document the decision logic so that if a disputed transaction occurs you can demonstrate what triggered it.