Jonathan Gill / 11 min read

The EU AI Act: What UK SMEs Actually Need to Do

The EU AI Act applies to UK businesses selling into Europe. The high-risk deadline is shifting from August 2026 to December 2027. What that means and what to do now.


If you’re a UK SME and you think the EU AI Act doesn’t apply to you, I’ve got bad news. If you serve EU customers, use EU data, or your AI systems affect anyone in the EU, you’re in scope. Brexit doesn’t shield you from this one.

Multiple law firms (Farrer & Co, CMS, Taylor Wessing) have confirmed the same thing: the EU AI Act is extraterritorial. It applies based on where your AI system’s outputs are used, not where your company is headquartered. Sound familiar? It should. It’s the same principle as GDPR, and it caught just as many UK businesses off guard.

The penalties are serious: up to €35 million or 7% of global annual turnover for prohibited AI practices, and up to €15 million or 3% of turnover for other infringements. For a £5M turnover business, even the lower tier is a potential six-figure fine.

The Deadline Is Moving, But That’s Not an Excuse

The original timeline was clear: high-risk AI system obligations were due to hit on 2 August 2026. That date was set when the Act entered into force in August 2024.

That’s changing. In November 2025, the European Commission proposed the “Digital Omnibus” package, which includes a delay to the high-risk deadline. In March 2026, the EU Council agreed its position: the new application date for high-risk rules would be 2 December 2027.[1] The EU Parliament has voted to support the postponement.[2]

The Omnibus isn’t formally adopted yet; it’s still moving through the legislative process. But both Council and Parliament are aligned, which means December 2027 is almost certainly the new deadline.

Two things about this:

First, some obligations are already in force. The ban on prohibited AI practices and AI literacy requirements both took effect on 2 February 2025. If you’re using AI in ways that fall under the prohibited category, you’re already exposed. If your staff are using AI systems without adequate training on what they’re doing, you have an existing obligation you’re probably not meeting.

Second, most businesses will see the delay and do nothing. That’s exactly the wrong response. The Omnibus also proposes reducing administrative burden by 25% for all businesses and 35% for SMEs, which means the final requirements may be lighter, but they’re still coming. And compliance takes time. Advisors like Rubicon Software estimate 8 to 14 months[3] to get high-risk systems fully compliant. If December 2027 is the deadline, you need to be moving by late 2026 or early 2027.

Smart businesses use the extra time to get ahead, not to procrastinate.

What the Act Actually Covers

The AI Act uses a risk-based approach. Not all AI is treated the same.

Prohibited (Banned Entirely)

  • Social scoring by governments
  • Real-time biometric identification in public spaces (limited exceptions for law enforcement)
  • Emotion recognition in workplaces and schools
  • Subliminal manipulation and exploitation of vulnerabilities
  • AI “nudifier” systems creating non-consensual intimate imagery (proposed addition via the Omnibus)

Most SMEs won’t touch these. If you do, stop immediately. These bans are already in effect.

High-Risk

  • AI used in recruitment, hiring, and performance evaluation
  • Credit scoring and loan approval systems
  • Insurance pricing and claims assessment
  • Critical infrastructure management
  • Medical devices and healthcare diagnosis
  • Education (access and assessment decisions)

This is where most SMEs who’ve deployed AI will land. If you’re using AI to screen CVs, assess candidates, score leads based on personal data, or make any decision that significantly affects individuals, you’re probably in high-risk territory.

Limited Risk

  • Chatbots and AI assistants (transparency obligations: users must know they’re interacting with AI)
  • Deepfakes and synthetic content (labelling obligations)
  • Emotion recognition systems outside prohibited contexts

Minimal Risk

  • AI-powered spam filters
  • Basic recommendation systems
  • Inventory management tools

No special compliance burden. Keep records, follow good practice.

The Seven Requirements for High-Risk Systems

When the high-risk obligations take effect (likely December 2027), affected AI systems must meet all seven of the following requirements:

1. Risk Management Systems (Article 9)
Documented processes to identify, assess, and mitigate risks throughout the AI system’s lifecycle. Not a one-time exercise. Ongoing monitoring.

For SMEs: a risk register per AI system, quarterly reviews minimum, clear escalation paths.
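
The Act doesn’t prescribe a register format. As a minimal sketch of what “a risk register per AI system” could look like in practice (the field names are illustrative, not taken from the Act):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk for a single AI system."""
    description: str   # e.g. "CV screener penalises career gaps"
    likelihood: str    # "low" | "medium" | "high"
    impact: str        # "low" | "medium" | "high"
    mitigation: str    # what you actually do about it
    owner: str         # who escalates if it materialises
    next_review: date  # quarterly at minimum

@dataclass
class RiskRegister:
    """Per-system register; Article 9 expects ongoing review, not a one-off."""
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)

    def overdue(self, today: date) -> list[RiskEntry]:
        """Entries whose scheduled review has lapsed."""
        return [e for e in self.entries if e.next_review < today]
```

Even a spreadsheet with the same columns would do. The point is that every system has its own register, with named owners and review dates.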

2. Data Governance (Article 10)
Training data must be representative, tested for bias where possible, and documented. You need to know what data trained your models and whether it’s fit for purpose.

For SMEs: document your data sources, run bias testing for high-stakes decisions like hiring or lending, check data quality before deployment.
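
“Bias testing” sounds heavier than it often is. The simplest version compares selection rates across groups, as in the sketch below; the 0.8 “four-fifths” threshold is borrowed from US employment practice, not from the Act, which sets no fixed number:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical CV screen: group -> (shortlisted, applicants)
ratio = disparate_impact_ratio({"group_a": (40, 100), "group_b": (22, 100)})
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.55 -> investigate before deploying
```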

3. Technical Documentation (Article 11)
Comprehensive documentation covering system architecture, capabilities, limitations, and intended use. Think “handover pack”: if a regulator asked you to explain this system tomorrow, could you?

4. Record-Keeping (Article 12)
Automatic logging of events while the system operates. Audit trails showing what decisions were made, when, and based on what inputs.
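
In code terms, that means one structured record per automated decision, written somewhere append-only. A minimal sketch (names and fields are illustrative):

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
# In production, ship these to append-only storage, not a local file.
audit.addHandler(logging.FileHandler("ai_decisions.log"))

def log_decision(system: str, inputs: dict, output: str, model_version: str) -> None:
    """One audit record per decision: what was decided, when, and from what."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "inputs": inputs,  # or a hash/reference if the inputs are sensitive
        "output": output,
    }))

log_decision("cv-screener", {"candidate_id": "c-102"}, "shortlist", "v1.3")
```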

5. Transparency and User Information (Article 13)
Clear documentation for end-users about what the AI does, its limitations, and how to interpret outputs. No black boxes.

6. Human Oversight (Article 14)
High-risk AI must allow human intervention. Someone needs to be able to review, override, and correct AI decisions. Fully autonomous high-risk decision-making is essentially prohibited.
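
A common implementation pattern is to wrap the automated decision so that consequential outcomes are parked for a person instead of taking effect automatically. A sketch, assuming you define the routing rule:

```python
from typing import Callable, Optional

def with_human_oversight(
    ai_decision: Callable[[dict], str],
    needs_review: Callable[[dict, str], bool],
    review_queue: list,
) -> Callable[[dict], Optional[str]]:
    """Wrap an automated decision so a human can review and override it."""
    def decide(case: dict) -> Optional[str]:
        proposed = ai_decision(case)
        if needs_review(case, proposed):
            review_queue.append({"case": case, "proposed": proposed})
            return None  # held for a human; nothing happens automatically
        return proposed
    return decide

queue: list = []
screen = with_human_oversight(
    ai_decision=lambda case: "reject" if case["score"] < 0.4 else "shortlist",
    needs_review=lambda case, outcome: outcome == "reject",  # every adverse outcome
    review_queue=queue,
)
print(screen({"score": 0.2}), len(queue))  # None 1 -> a person makes the call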

7. Accuracy, Robustness, and Cybersecurity (Article 15)
Systems must perform to claimed accuracy levels, be resilient to errors and attacks, and maintain security appropriate to the risk level.

You’ve Been Here Before

If you were running a business in 2018, you’ve already done a version of this. GDPR landed with months of headlines about massive fines and existential compliance risk. In practice, most SMEs audited what personal data they held, updated a few processes, added a privacy notice, and got on with their lives.

The AI Act follows the same pattern. Audit what AI you use. Understand the risk level. Document your processes. Make sure a human can review and override consequential decisions. If you did GDPR, you already know how this works. Same muscles, different technology.

The key difference: GDPR was about personal data. The AI Act is about automated decisions and the systems that make them. But the compliance mindset is identical. Know what you run. Classify the risk. Document how it works. Keep humans in the loop where it matters.

The businesses that struggled with GDPR were the ones who ignored it until the deadline. The AI Act deadlines are further out, but the preparation is the same. Start now and it’s a manageable project. Leave it until 2027 and it’s a crisis.

Where Most UK SMEs Actually Stand

The UK has no equivalent comprehensive AI regulation. The government’s approach is sector-specific, relying on existing regulators (the FCA, ICO, Ofcom, CMA) to apply AI principles within their domains. There’s no single domestic framework telling you what to do.

That creates a compliance gap. UK SMEs selling into the EU face AI Act obligations with no domestic infrastructure to lean on. No UK AI Act to train against. No local regulator publishing AI-specific guidance for SMEs. You’re essentially preparing for EU regulation using EU resources.

Most UK SMEs are not in trouble. Most will never touch the high-risk categories. And the ones who might are usually building things they already know carry weight. This isn’t a trap. It’s a framework for thinking about what you’re actually doing.

In practice, most UK SMEs I speak to fall into one of three camps:

Camp 1: “We’re not using AI yet.” You’re not immediately exposed. But if you’re planning AI projects that will touch EU customers or data, build compliance in from the start. Retrofitting is always more expensive.

Camp 2: “We use AI tools but don’t build them.” If you’re using third-party AI tools — ChatGPT via the web interface, Copilot, SaaS products with built-in AI features — your vendors carry much of the compliance burden for the underlying system. You’re still responsible for how you use those tools. But you’re not the deployer of a high-risk AI system just because you have a Copilot licence.

There’s an important exception here, and it catches more businesses than you’d expect.

The API exception: when Camp 2 becomes Camp 3
If you’re calling a third-party AI via API to power automated decisions, you’re not in Camp 2 anymore. The API call is yours. The decision logic is yours. The compliance burden is yours.

Example: you build a workflow that sends CVs to the ChatGPT API and uses the response to score candidates. OpenAI didn’t deploy a hiring system. You did. You’re the operator of a high-risk AI system under the Act, regardless of whose model is doing the inference.

The same applies to any automated decision that significantly affects individuals: loan scoring, insurance assessment, performance evaluation, access to services. If you built the workflow, you own the risk classification.
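
To make that concrete, here’s the shape of the CV-scoring workflow, sketched with the OpenAI Python SDK (the model name and prompt are illustrative). Note where the hiring logic lives: in your code, not OpenAI’s.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_candidate(cv_text: str) -> str:
    """Your prompt, your workflow, your decision: under the Act, you have
    deployed a hiring system here, whoever hosts the model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": "Score this CV from 1 to 10 for a sales role."},
            {"role": "user", "content": cv_text},
        ],
    )
    return response.choices[0].message.content
```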

Camp 3: “We’ve built or customised AI systems.” This is the camp with the most work to do. Custom models, fine-tuned open-source LLMs, or automated decision workflows that affect EU individuals all need proper compliance documentation if they fall into the high-risk categories. If you’re here, you probably already know it.

What to Do Now

None of what follows is exotic compliance overhead. It’s the kind of hygiene any well-run business should have in place anyway.

Keep a list of what AI tools you use. Not because a regulator might ask (though they might). Because you’d want this for any vendor audit, IT review, or new employee onboarding. If you can’t list the AI tools your business runs on, that’s a gap regardless of regulation.
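
If you want the list in machine-readable form, a CSV whose columns mirror the report produced by the prompt at the end of this article is plenty. A sketch (the column names are just that table’s headers):

```python
import csv

FIELDS = ["role", "tool", "use_case", "manual_or_automated",
          "affects_people_decisions", "access_method"]

def write_inventory(path: str, rows: list[dict]) -> None:
    """One row per tool per person; regenerate as employee reports come in."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

write_inventory("ai_inventory.csv", [
    {"role": "Sales", "tool": "ChatGPT", "use_case": "drafting emails",
     "manual_or_automated": "manual", "affects_people_decisions": "N",
     "access_method": "UI"},
])
```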

Know what decisions your AI is influencing. Which business processes involve AI outputs? Where are those outputs feeding into decisions about people, customers, or operations? You’d want this for quality control. You’d want it if a decision got challenged. The Act just formalises something you should already know.

Make sure a human can review and override AI decisions. For anything consequential, a human should be able to check the output and say “no”. That’s not just good compliance practice. It’s good management. Automated systems make mistakes. If no one can intervene, mistakes compound.

Check your AI vendors have a compliance roadmap. Same as any supplier due diligence. You’d want to know if your payroll provider had no plan for HMRC changes. Ask your AI vendors the same question: are you preparing for the AI Act? What does your compliance timeline look like? The answers will tell you a lot about whether they’re worth staying with.

If you’re in Camp 3 or the API exception camp, you have more specific work to do around documentation, risk registers, and audit trails. That’s where a proper gap analysis against the seven requirements above earns its keep. But even that isn’t as heavy as it sounds if you approach it like a system design exercise rather than a legal filing.

December 2027 is the working deadline for high-risk obligations. That’s enough time to get this right without panic. Start with the inventory. Everything else follows from that.

The Opportunity in Compliance

Yes, it’s a burden. But it’s also a competitive moat.

Enterprise customers are already asking suppliers about AI governance. Being able to demonstrate compliant, well-documented AI use is becoming a procurement requirement. SMEs that can say “we’re AI Act ready” will win contracts their competitors can’t touch.

The businesses that treat this as preparation rather than punishment, that build responsible AI systems now, will be the ones serving EU markets while competitors scramble to catch up.

December 2027 will come whether you’re ready or not.

Build Your AI Inventory Now

Give this prompt to every employee who uses AI in their job. Each person runs it themselves and gets a personal report. Collect the reports and you have a company-wide picture — far more accurate than asking a manager to guess.

If the employee already uses ChatGPT or Claude with memory enabled, or has an existing project or conversation history, the AI will already know some of what they use. The prompt below tells the AI to check what it already knows and surface those as suggestions first, rather than asking cold questions. The employee just confirms, corrects, or fills in the gaps.

You are an AI usage assistant helping an employee document how they personally use AI in their work. Before asking any questions, check your memory and conversation history for anything you already know about this person's role, the tools they use, and how they work. Surface that as a suggestion. Do not ask cold questions when you already have the answer.

Start by introducing yourself briefly. Tell the employee you are going to help them create a personal record of how they use AI at work. Then share anything you already know about them and ask if it looks right.

Critical rule: ask only ONE question at a time. Do not ask the next question until the employee has answered the current one.

For each area below, lead with a suggestion if you have one. Format: "Based on what I know about you, it looks like you use [X] for [Y]. Does that sound right, or am I missing anything?" If you have no prior knowledge on a topic, ask the question normally.

Work through the following areas in order:

1. Role and department. What do they do and which team are they in?
2. Which AI tools do they use personally at work? Include everything: ChatGPT, Copilot, Grammarly, AI features built into other software, anything.
3. For each tool: what do they use it for?
4. For each tool: do they review the output themselves before anything happens with it, or does it run automatically without a manual check?
5. Does any AI tool they use influence decisions about other people? This includes colleagues, customers, job candidates, or anyone else.
6. For each tool: do they access it through a website or app they log into directly, or is it connected to company systems in the background?

When you have covered all the tools, or if the employee says they are done, stop asking questions and produce a structured summary.

Summary format:

| Role | Tool | Use case | Manual or automated | Affects people decisions (Y/N) | Access method (UI/API) |
|------|------|----------|--------------------|---------------------------------|------------------------|

End the summary with this line, exactly:
"Save this report and share it with your manager or whoever is leading your company's AI review."

What it looks like in practice:

“I’m going to help you document how you use AI at work. I’ll ask one question at a time. Based on what I know about you, it looks like you use ChatGPT for drafting emails and Grammarly for editing. Does that sound right, or am I missing anything?”

The employee confirms, corrects, or adds tools the AI didn’t know about. Each answer feeds the next question until the full picture is built.


Jonathan Gill is the founder of Squared Lemons, an AI consultancy helping UK SMEs navigate AI adoption, including regulatory readiness. Get in touch for a practical compliance assessment.


Footnotes

  1. EU Council position on Digital Omnibus delay, 13 March 2026. Council press release.

  2. EU Parliament vote supporting postponement, approximately 16 March 2026. Parliament press release.

  3. Compliance timeline estimate of 8–14 months: Rubicon Software.

Frequently asked questions

Does the EU AI Act apply to UK businesses after Brexit?

Yes. The EU AI Act is extraterritorial, like GDPR. If your AI system's outputs affect anyone in the EU, if you serve EU customers, or if you use EU data, you are in scope regardless of where your company is headquartered. Multiple UK law firms including Farrer & Co, CMS, and Taylor Wessing have confirmed this interpretation.

What AI uses are prohibited under the EU AI Act?

Prohibited systems include social scoring by public authorities, real-time biometric identification in public spaces, subliminal manipulation of behaviour, and AI that exploits vulnerabilities of protected groups. These prohibitions have applied since 2 February 2025 and carry fines of up to 35 million euros or 7% of global turnover.

When is the deadline for UK SMEs to comply with EU AI Act high-risk requirements?

The high-risk AI system compliance deadline is shifting from August 2026 to December 2027 under the Digital Omnibus package proposed by the European Commission and backed by both the EU Council and Parliament. Transparency obligations (informing users they are interacting with AI) are scheduled to apply from August 2026 and apply regardless of risk category.

What counts as high-risk AI under the EU AI Act?

High-risk systems include AI used in hiring and recruitment, credit scoring, insurance underwriting, healthcare diagnostics, education and assessment, and critical infrastructure. If your business uses AI in any of these areas and the outputs affect EU users, the full compliance framework applies including risk management documentation and human oversight requirements.