Policy

How Regulation Will Shape AI Search in the UK and EU

Ross Williams, Founder, Fortitude Media · 12 min read

EU AI Act, UK AI Safety Institute, and emerging regulation will impose transparency requirements, citation obligations, and right-to-be-recommended.

The Regulatory Landscape

The AI search revolution is hitting a regulatory headwind in the UK and EU. Unlike the US (where AI regulation is fragmented and minimal), the EU and UK are building comprehensive regulatory frameworks specifically for AI systems, including AI search and recommendation systems.

This matters enormously to your business because:

  1. AI Overviews and agent recommendations will be subject to regulation
  2. These regulations will change how AI systems evaluate and cite businesses
  3. Non-compliance will face penalties
  4. Competitive dynamics will shift as regulation favors transparency and fairness

The regulatory framework is still forming, but the direction is clear: move toward transparency, fairness, and consumer/business protection.

The EU AI Act: The Primary Force

The EU AI Act entered into force in August 2024, with most of its obligations applying from August 2026. It's the first comprehensive AI regulation globally and is shaping how regulation develops worldwide.

Key Provisions Affecting AI Search

The AI Act classifies AI systems by risk level:

High-Risk AI Systems

AI search and recommendation systems may increasingly be classified as high-risk because they:

  • Materially impact business viability (recommendations affect which vendors are chosen)
  • Affect fundamental rights (access to information, fair competition)
  • Can discriminate or create biased outcomes

Requirements for high-risk AI systems:

  1. Transparency and Documentation

    • AI systems must document their evaluation criteria
    • Decision-making logic must be explainable
    • Training data must be documented
  2. Accuracy and Performance Standards

    • Systems must meet defined accuracy standards
    • Bias and discrimination must be monitored
    • Performance must be regularly audited
  3. Human Oversight

    • Significant decisions require human review
    • Audit trails must be maintained
    • Systems must be explainable to regulators
  4. Rights for Affected Parties

    • Individuals can request explanation of decisions
    • Businesses can request why they weren't recommended
    • Disputes can be escalated to regulators
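
The audit-trail and explainability requirements above can be sketched as a minimal record structure. This is an illustrative assumption of what such a record might contain, not a format prescribed by the Act; all field and class names are hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class RecommendationAuditRecord:
    """Illustrative audit-trail entry for one AI-generated vendor recommendation."""
    query: str                    # the user's question
    vendors_evaluated: list[str]  # every vendor considered, not just those shown
    criteria: dict[str, float]    # evaluation criteria and the weight applied to each
    recommendation: str           # the vendor ultimately recommended
    explanation: str              # human-readable reason, supporting explainability
    human_reviewed: bool          # whether a person signed off on the decision
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = RecommendationAuditRecord(
    query="best CRM for a 50-person sales team",
    vendors_evaluated=["VendorA", "VendorB", "VendorC"],
    criteria={"feature_fit": 0.4, "price": 0.3, "support": 0.3},
    recommendation="VendorB",
    explanation="Highest weighted score; strongest support SLAs for this team size.",
    human_reviewed=True,
)
print(asdict(record)["recommendation"])
```

A regulator auditing such a log could check that every recommendation carries an explanation and a human-review flag, which is the substance of points 1 and 3 above.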

Implications for AI Search

Under the EU AI Act, an AI search system recommending vendors must:

  • Explain its evaluation criteria to users
  • Show why it ranked vendors in a particular order
  • Be auditable by regulators
  • Prevent bias in evaluation

This is already changing how AI systems operate. Google's AI Overviews now show sources more explicitly. Claude asks users to verify whether its recommendations are accurate.

Timeline

  • Phase 1 (2026): Requirements apply to all AI systems in EU
  • Phase 2 (2027-2028): Enforcement and penalties begin
  • Phase 3 (2028+): Full enforcement with substantial fines

The UK AI Safety Institute's Approach

The UK has chosen a different regulatory path than the EU, focused on industry-led governance with light-touch regulation. Understanding this distinction is important because UK-based businesses and EU-based businesses face different compliance burdens.

The UK's Principle-Based Regulatory Philosophy

The UK AI Safety Institute (established November 2023) operates from a different philosophy than the EU:

Rather than creating prescriptive rules ("You must do X"), the UK sets principles and expects industry to achieve compliance through self-regulation. This approach has several implications:

  1. More Flexibility for Innovation

    • Companies have more latitude in HOW they achieve compliance
    • Regulatory sandboxes allow testing of new approaches
    • Requirements evolve as the technology matures
    • This means faster innovation, but also more uncertainty
  2. Sector-Specific Rather Than One-Size-Fits-All

    • Search and recommendation systems get tailored guidance
    • Different rules for healthcare AI, financial AI, etc.
    • Guidance issued in waves (currently focused on large language models)
    • More practically relevant than blanket regulation
  3. Collaboration Over Enforcement

    • Regulatory approach is "let's work together" rather than "comply or face fines"
    • Emphasis on industry standards and best practices
    • Enforcement is softer (warnings, orders to improve) rather than heavy-handed fines
    • This requires good faith engagement from industry

Key Principles for AI Search Systems

UK AI Safety Institute principles for recommendation and search systems:

  1. Transparency: Users should understand why recommendations are made
  2. Fairness: Evaluation criteria should not discriminate
  3. Accountability: Someone must be responsible for outcomes
  4. Safety: System behavior should be safe and predictable
  5. Contestability: Users/businesses should be able to contest decisions

What This Means in Practice for AI Search

For a business operating in the UK:

  • You should be able to request explanation of why an AI search system didn't recommend you
  • AI systems should demonstrate they're not systematically biased
  • There should be a process for challenging unfair exclusion
  • But the standards aren't as rigorous as the EU AI Act requires

Timeline and Enforcement

  • 2026: Principles guidance published; industry adoption encouraged (soft approach)
  • 2027: Compliance monitoring begins; regulators check on adoption
  • 2028+: Enforcement through individual sector regulations (probably firmer)

EU vs UK: Which Is Stricter?

For businesses trying to operate in both markets:

  • EU: Strict, prescriptive, heavy penalties (up to €35 million or 7% of global annual turnover)
  • UK: Flexible, principle-based, lighter enforcement

Strategy: Most global businesses will adopt EU standards (the higher bar) to serve both markets. In practice, then, EU compliance standards will set the bar for UK businesses too, even though formal UK requirements are softer.


Key Differences from EU Approach

The UK's AI approach (outlined in the UK AI Safety Institute framework):

  1. Principle-Based Rather Than Rule-Based

    • Sets principles (safety, fairness, transparency) rather than prescriptive rules
    • Industry has more flexibility in how to achieve compliance
    • More agile approach to emerging risks
  2. Sector-Specific Guidance

    • Different guidance for search/recommendation, healthcare, finance, etc.
    • AI search systems get tailored requirements
    • Evolves as the sector matures
  3. Softer Enforcement

    • Penalties exist but are less severe than EU
    • Focus on collaboration and improvement
    • Regulatory sandboxes for innovation

Citation Obligations and Transparency

One of the most significant regulatory requirements emerging is citation obligations for AI search systems.

What This Means

When an AI system makes a recommendation or answers a question, it must:

  1. Cite its sources
  2. Be transparent about which sources it weighted most heavily
  3. Allow users to verify the sources
  4. Correct or update recommendations if cited sources are outdated

Current Implementation

Major AI platforms are already implementing this:

  • ChatGPT shows sources when answering queries
  • Perplexity explicitly weights source credibility
  • Google AI Overviews cite sources
  • Claude shows which sources informed its recommendation

Future Requirements Under Regulation

Regulatory evolution will likely require:

  1. Standardized Citation Format

    • AI systems will need to cite sources in machine-readable format
    • Businesses will be able to track when they're cited
    • Automated compliance monitoring becomes possible
  2. Access to Citation Data

    • Businesses should be able to request how often they're cited
    • Regulators should be able to audit citation patterns
    • Discrimination in citation practices can be challenged
  3. Right to Correct Information

    • If cited information is inaccurate, you can request correction
    • AI systems must have processes to address corrections
    • False citations can be challenged
  4. Right to Know When You're Not Cited

    • For some high-value queries, businesses have a right to be evaluated
    • Being excluded from consideration can be challenged (in extreme cases)
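
No standardized citation schema exists yet. The following is a hypothetical sketch of what a machine-readable citation record might contain, so that a business could programmatically check how it is being cited; every field name and value here is an assumption for illustration.

```python
# Hypothetical machine-readable citation record an AI platform might expose.
citation = {
    "answer_id": "ans-20270114-001",
    "query": "best project management tools for distributed teams",
    "sources": [
        {"url": "https://example.com/report", "weight": 0.6, "retrieved": "2027-01-10"},
        {"url": "https://example.org/review", "weight": 0.4, "retrieved": "2027-01-12"},
    ],
}

# A business could verify it appears among the weighted sources:
cited_urls = [s["url"] for s in citation["sources"]]
print("example.com cited:", any("example.com" in u for u in cited_urls))
```

A record like this would make requirements 1 and 2 above auditable: citation frequency becomes countable, and source weighting becomes inspectable.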

Business Implications

This regulation makes visibility to AI systems a legal right, not just a competitive advantage. It means:

  • You can verify you're being cited fairly
  • You can challenge if you're being unfairly excluded
  • You have recourse if AI systems misrepresent you

The Right to Be Recommended

The most controversial emerging regulation is the concept of a "right to be recommended" or at minimum, a right to fair evaluation. This concept could fundamentally reshape competitive dynamics if enacted.

The Problem It Addresses

As AI systems become gatekeepers for procurement decisions, there's a real risk of anti-competitive behavior:

  • Systematic vendor preference: An AI system could be trained or configured to favor vendors based on corporate relationships (e.g., if Google AI Overviews favored Google Cloud).
  • Competitive freezeout: Smaller vendors not visible to AI systems could be systematically excluded from consideration, regardless of merit.
  • Market concentration: If dominant platforms control the AI systems, they control which competitors are even evaluated.
  • Hidden biases: Unlike traditional markets where you can see competitors, in AI-mediated markets, exclusion is invisible.

A concrete illustration: if Google's AI Overviews systematically recommended Google Cloud over AWS or Azure when synthesizing cloud infrastructure advice, that would be anti-competitive and potentially illegal.

What Regulation Might Require

Emerging discussions in EU and UK regulatory bodies suggest three potential requirements:

  1. Non-Discrimination Requirements

    • AI systems cannot favor vendors based on corporate relationships or ownership
    • Evaluation criteria must be applied consistently across all vendors
    • Bias against smaller vendors or new entrants must be prevented and auditable
    • Systems must demonstrate they treat similar companies similarly
  2. Fair Access Requirements

    • Vendors should have documented ability to be evaluated fairly
    • Evaluation criteria should be published or publicly available (at least at high level)
    • Clear process for challenging unfair exclusion
    • Right to request explanation for non-inclusion
    • Dispute resolution mechanism
  3. Transparency and Audit Rights

    • Competitors can challenge recommendation patterns
    • Regulators can audit AI evaluation logic and training data
    • Systematic bias can be challenged in courts
    • Vendors should be able to see citation frequency and understand why they are or aren't cited

Real-World Business Scenario

A B2B SaaS company discovers that when enterprise customers ask their AI assistant "Best project management tools for distributed teams," the AI strongly recommends Asana, Monday.com, and Jira, but rarely mentions the SaaS company despite comparable features and better pricing.

Under proposed regulation:

  • The company could request from the AI platform: "Why aren't we being recommended? What criteria are we failing?"
  • The platform would need to explain its evaluation logic
  • If evaluation is based on bias (e.g., these platforms pay for ads, yours doesn't) or hidden relationships, the company could challenge it
  • Regulators could audit to verify non-discrimination

Likelihood and Timeline

This is still emerging:

  • The EU's Digital Markets Act already applies competition rules to designated digital gatekeepers, and regulators are examining how it extends to AI systems
  • The UK Competition and Markets Authority (CMA) is investigating AI's competitive impact
  • Business groups are lobbying for fair evaluation requirements

Timing expectations:

  • 2026-2027: Regulatory frameworks clarified
  • 2027-2028: Initial enforcement attempts
  • 2028+: Jurisprudence develops through court cases

Business Implications

This regulation would create both protection and opportunity:

For smaller vendors: Protection from being frozen out. If you can demonstrate you meet evaluation criteria but aren't being recommended, you'll have recourse. This levels the playing field against entrenched incumbents.

For large vendors: Constraint on being able to game recommendations. You can't rely on hidden relationships or paid placement to dominate AI recommendations.

For all vendors: Incentive to meet transparent evaluation criteria rather than trying to game systems. This actually favors legitimate vendors who have real merit.

What You Should Do Now

To prepare for a "right to be recommended" world:

  1. Document your competitive advantages: Be clear about why you're superior (cost, features, outcomes) so you can reference this if challenging unfair exclusion.
  2. Meet objective criteria: Ensure you meet any published or emerging evaluation standards (data protection, compliance, uptime, etc.).
  3. Track your visibility: Monitor how often you appear in AI-generated recommendations vs. competitors. If the pattern is unfair, document it.
  4. Build third-party validation: Analyst recognition, customer reviews, media mentions are hard to ignore in evaluation logic.
  5. Join industry groups: Advocate for fair evaluation standards in your industry.
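
Point 3 above can be sketched as a simple monitoring script. Assume you periodically save the AI-generated answers for your key queries (manually or via each platform's own export tools); the script below then counts brand mentions over time. The saved-answers data and the brand names are illustrative assumptions.

```python
from collections import Counter

# Saved AI answers for one tracked query, collected over several weeks (illustrative data).
saved_answers = [
    "For distributed teams, Asana and Monday.com are strong choices.",
    "Consider Jira for engineering-heavy teams; Asana suits general PM work.",
    "Asana, Jira, and YourSaaS all support async workflows well.",
]

brands = ["YourSaaS", "Asana", "Monday.com", "Jira"]

# Count how many answers mention each brand at least once.
mentions = Counter()
for answer in saved_answers:
    for brand in brands:
        if brand.lower() in answer.lower():
            mentions[brand] += 1

# Share of voice: fraction of answers in which each brand appears.
share_of_voice = {b: mentions[b] / len(saved_answers) for b in brands}
print(share_of_voice)
```

A log like this, kept over months, is exactly the kind of documentation that would support a future unfair-exclusion challenge.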

Data Privacy and AI Training

A critical but often-overlooked regulation: how AI systems train on and use business data.

GDPR and Business Data

The General Data Protection Regulation (GDPR) applies to any organisation processing the personal data of EU residents. As AI systems train on web content, they process business information that can include protected personal data (staff profiles, customer testimonials, named case studies).

Key Issues

  1. Training Data Transparency

    • AI systems should disclose what data they trained on
    • Businesses can request their data be excluded from training
    • Opt-out mechanisms are being mandated
  2. Right to Object

    • GDPR's right to object belongs to individual data subjects, but personal data on your site (staff bios, customer testimonials) is covered
    • For non-personal business content, copyright, database rights, and terms of service provide the equivalent lever
    • Enforcement is ramping up through 2026-2027
  3. Competitive Data Use

    • Information in your case studies, benchmarks, or proprietary research is your intellectual property
    • If AI systems use it in ways that benefit competitors, that may infringe copyright or database rights
    • Litigation on these questions is already beginning

Practical Implication

This means:

  • You should add robots.txt directives to opt out of AI training crawlers (compliance is voluntary, but the major platforms honor their published opt-outs)
  • Your confidential content (research, methodologies) has some protection
  • Violations can be challenged legally
  • Competitive intelligence through AI training might be restricted
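
The robots.txt directives mentioned above might look like the following. GPTBot (OpenAI), ClaudeBot (Anthropic), Google-Extended (Google's AI-training control), and CCBot (Common Crawl) are real crawler tokens at the time of writing, but check each platform's current documentation; the list changes, the example paths are placeholders, and compliance is voluntary.

```txt
# robots.txt — opt out of AI training crawlers while allowing normal search indexing
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Keep confidential research out of all crawlers (placeholder path)
User-agent: *
Disallow: /research/internal/
```

Note that Google-Extended controls AI-training use without affecting normal Google Search indexing, which is why blocking it doesn't harm conventional SEO.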

Business Opportunity

If competitors' confidential information is being used to train AI systems unfairly, you have legal grounds to challenge it.

Compliance Implications for Businesses

For a typical B2B business, what does this regulatory landscape mean?

If You Sell to EU Companies

You are affected because:

  • Your potential customers' AI procurement systems will be subject to EU AI Act
  • Your vendor information will be evaluated under transparency and fairness requirements
  • You should ensure your information meets EU standards

Requirements:

  • Ensure your website has accurate, verifiable information
  • Document your claims with evidence
  • Remove misleading or unsubstantiated claims
  • Have processes to correct inaccurate information

If You Sell to UK Companies

You are affected because:

  • UK AI regulation will likely converge, at least partially, with the EU's (it is currently lighter-touch)
  • Best practice is to assume similar requirements as EU

If You're a Data-Heavy Business

You are affected because:

  • Your data is training material for AI systems
  • You should manage what data is publicly available
  • You should use robots.txt and terms of service to protect competitive data

If You Sell AI Systems

You are heavily affected:

  • Your AI systems must meet EU AI Act standards
  • Documentation and auditability are required
  • Enforcement will be strict

What to Prepare for Now

Given this regulatory trajectory, here's what B2B companies should do now:

Immediate Actions (Q2-Q4 2026)

  1. Audit Your Public Information

    • Ensure all claims are substantiated with evidence
    • Remove hyperbole and unsubstantiated marketing claims
    • Document methodology and research behind your claims
  2. Prepare Citation Management

    • Track where you're cited in AI systems
    • Set up processes to monitor and verify accuracy
    • Create templates for requesting corrections
  3. Document Your Evaluation Criteria

    • If you have B2B procurement processes, document how you evaluate vendors
    • Show it's transparent and non-discriminatory
    • Be prepared to explain it to regulators
  4. Review Data Use

    • Check your robots.txt for sensitive data exclusions
    • Review what confidential information is public
    • Determine if you need to protect additional information

Medium-Term Actions (2027)

  1. Prepare for Audit

    • Assume your website and claims will be audited by regulators
    • Have documentation ready
    • Create audit trails showing how claims are substantiated
  2. Implement Correction Processes

    • Have formal processes for addressing inaccurate citations
    • Train team on how to request AI system corrections
    • Budget for potential need to respond to challenges
  3. Develop Fair Evaluation Claims

    • If you offer AI systems or recommendations, ensure they're fair
    • Document non-discrimination
    • Prepare for regulatory review
  4. Join Industry Groups

    • Participate in industry conversations about AI regulation
    • Contribute to standard-setting
    • Build relationships with regulators

Long-Term Strategy (2027-2028+)

  1. Embrace Transparency

    • Make your evaluation criteria public
    • Show your research and methodology openly
    • Build trust through transparency
  2. Build Defensibility

    • Document everything
    • Keep detailed records of decisions and reasoning
    • Be prepared to explain to regulators
  3. Advocate for Fair Rules

    • Engage with regulatory bodies
    • Push for fair, non-discriminatory standards
    • Ensure smaller businesses aren't disadvantaged

Frequently Asked Questions

Will regulation kill AI search recommendations?

No, but it will make them more transparent and fair. Regulation will shift from "fastest growing wins" to "most transparent and fair wins". This favors businesses doing things legitimately and punishes those gaming the system.

Will the EU AI Act actually be enforced?

Yes. Early enforcement is already beginning. Fines can reach €35 million or 7% of global annual turnover for the most severe violations. Major AI platforms are implementing compliance now. Expect aggressive enforcement in 2027-2028.

How do I know if I'm compliant?

If all your public claims are substantiated, you're transparent about methodology, and you don't discriminate, you're likely compliant. If you have hyperbolic marketing claims, hidden methodologies, or unfair evaluation practices, you're at risk.

What about US-based AI systems?

They still must comply if they serve EU/UK markets. US-based systems will implement EU compliance as a minimum because losing the EU market is too costly.

Can I challenge an AI system that doesn't recommend me?

Not yet, legally, but you'll likely be able to in 2027-2028. For now, contact the platform and request an explanation. Document the response; these exchanges will become precedent for future regulation.

How much will compliance cost?

For most B2B businesses, compliance is mostly about being honest and transparent. If you already are, the cost is near zero. If you need to rebuild your claims and evidence, it could be £20K-£100K depending on scope.

Will regulated AI search be less useful?

Not less useful, but slower to develop and more transparent. Systems that recommended based on hidden criteria will need to explain themselves. This is slower but fairer.

How does this change sales?

Sales teams will shift from "convince the prospect" to "validate what the AI recommended". This is simpler but requires preparation.
Ross Williams

Founder, Fortitude Media

Ross Williams is the founder of Fortitude Media, specialising in AI visibility and content strategy for B2B companies.
