How Regulation Will Shape AI Search in the UK and EU
The EU AI Act, the UK AI Safety Institute, and emerging regulation are set to impose transparency requirements, citation obligations, and potentially a "right to be recommended".

The Regulatory Landscape
The AI search revolution is hitting a regulatory headwind in the UK and EU. Unlike the US (where AI regulation is fragmented and minimal), the EU and UK are building comprehensive regulatory frameworks specifically for AI systems, including AI search and recommendation systems.
This matters enormously to your business because:
- AI Overviews and agent recommendations will be subject to regulation
- These regulations will change how AI systems evaluate and cite businesses
- Non-compliance will carry penalties
- Competitive dynamics will shift as regulation favors transparency and fairness
The regulatory framework is still forming, but the direction is clear: toward transparency, fairness, and consumer and business protection.
The EU AI Act: The Primary Force
The EU AI Act entered into force on August 1, 2024, with obligations phasing in through 2027. It's the first comprehensive AI regulation globally and is shaping how regulation develops worldwide.
Key Provisions Affecting AI Search
The AI Act classifies AI systems by risk level:
High-Risk AI Systems
AI search and recommendation systems may be classified as high-risk where they:
- Materially impact business viability (recommendations affect which vendors are chosen)
- Affect fundamental rights (access to information, fair competition)
- Can discriminate or create biased outcomes
Requirements for high-risk AI systems:
1. Transparency and Documentation
- AI systems must document their evaluation criteria
- Decision-making logic must be explainable
- Training data must be documented
2. Accuracy and Performance Standards
- Systems must meet defined accuracy standards
- Bias and discrimination must be monitored
- Performance must be regularly audited
3. Human Oversight
- Significant decisions require human review
- Audit trails must be maintained
- Systems must be explainable to regulators
4. Rights for Affected Parties
- Individuals can request explanation of decisions
- Businesses can request why they weren't recommended
- Disputes can be escalated to regulators
Implications for AI Search
Under the EU AI Act, an AI search system recommending vendors must:
- Explain its evaluation criteria to users
- Show why it ranked vendors in a particular order
- Be auditable by regulators
- Prevent bias in evaluation
This is already changing how AI systems operate. Google's AI Overviews now show sources more explicitly. Claude asks users to verify whether its recommendations are accurate.
Timeline
- August 2024: The Act enters into force
- February 2025: Prohibitions on unacceptable-risk AI practices apply
- August 2025: Obligations for general-purpose AI models apply
- August 2026: Most remaining obligations, including transparency requirements, apply
- August 2027: Requirements for high-risk systems embedded in regulated products apply
The UK AI Safety Institute's Approach
The UK has chosen a different regulatory path than the EU, focused on industry-led governance with light-touch regulation. Understanding this distinction is important because UK-based businesses and EU-based businesses face different compliance burdens.
The UK's Principle-Based Regulatory Philosophy
The UK AI Safety Institute (launched in November 2023) operates from a different philosophy than the EU:
Rather than creating prescriptive rules ("You must do X"), the UK sets principles and expects industry to achieve compliance through self-regulation. This approach has several implications:
1. More Flexibility for Innovation
- Companies have more latitude in HOW they achieve compliance
- Regulatory sandboxes allow testing of new approaches
- Requirements evolve as the technology matures
- This means faster innovation, but also more uncertainty
2. Sector-Specific Rather Than One-Size-Fits-All
- Search and recommendation systems get tailored guidance
- Different rules for healthcare AI, financial AI, etc.
- Guidance issued in waves (currently focused on large language models)
- More practically relevant than blanket regulation
3. Collaboration Over Enforcement
- Regulatory approach is "let's work together" rather than "comply or face fines"
- Emphasis on industry standards and best practices
- Enforcement is softer (warnings, orders to improve) rather than heavy-handed fines
- This requires good faith engagement from industry
Key Principles for AI Search Systems
UK AI Safety Institute principles for recommendation and search systems:
- Transparency: Users should understand why recommendations are made
- Fairness: Evaluation criteria should not discriminate
- Accountability: Someone must be responsible for outcomes
- Safety: System behavior should be safe and predictable
- Contestability: Users/businesses should be able to contest decisions
What This Means in Practice for AI Search
For a business operating in the UK:
- You should be able to request an explanation of why an AI search system didn't recommend you
- AI systems should demonstrate they're not systematically biased
- There should be a process for challenging unfair exclusion
- That said, the standards aren't as rigorous as those the EU AI Act imposes
Timeline and Enforcement
- 2026: Principles guidance published; industry adoption encouraged (soft approach)
- 2027: Compliance monitoring begins; regulators check on adoption
- 2028+: Enforcement through individual sector regulations (probably firmer)
EU vs UK: Which Is Stricter?
For businesses trying to operate in both markets:
- EU: Strict, prescriptive, heavy penalties (up to €35 million or 7% of global annual turnover)
- UK: Flexible, principle-based, lighter enforcement
Strategy: Most global businesses will adopt EU standards (the higher bar) to serve both markets. In practice, this means UK regulation will push toward EU compliance standards even though formal UK requirements are softer.
Key Differences from EU Approach
The UK's AI approach (outlined in the UK AI Safety Institute framework):
1. Principle-Based Rather Than Rule-Based
- Sets principles (safety, fairness, transparency) rather than prescriptive rules
- Industry has more flexibility in how to achieve compliance
- More agile approach to emerging risks
2. Sector-Specific Guidance
- Different guidance for search/recommendation, healthcare, finance, etc.
- AI search systems get tailored requirements
- Evolves as the sector matures
3. Softer Enforcement
- Penalties exist but are less severe than EU
- Focus on collaboration and improvement
- Regulatory sandboxes for innovation
Citation Obligations and Transparency
One of the most significant regulatory requirements emerging is citation obligations for AI search systems.
What This Means
When an AI system makes a recommendation or answers a question, it must:
- Cite its sources
- Be transparent about which sources it weighted most heavily
- Allow users to verify the sources
- Correct or update recommendations if cited sources are outdated
Current Implementation
Major AI platforms are already implementing this:
- ChatGPT shows sources when answering queries
- Perplexity explicitly weights source credibility
- Google AI Overviews cite sources
- Claude shows which sources informed its recommendation
Future Requirements Under Regulation
Regulatory evolution will likely require:
1. Standardized Citation Format
- AI systems will need to cite sources in machine-readable format
- Businesses will be able to track when they're cited
- Automated compliance monitoring becomes possible
2. Access to Citation Data
- Businesses should be able to request how often they're cited
- Regulators should be able to audit citation patterns
- Discrimination in citation practices can be challenged
3. Right to Correct Information
- If cited information is inaccurate, you can request correction
- AI systems must have processes to address corrections
- False citations can be challenged
4. Right to Know When You're Not Cited
- For some high-value queries, businesses have a right to be evaluated
- Being excluded from consideration can be challenged (in extreme cases)
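No machine-readable citation standard exists yet. Purely as a hypothetical sketch, a citation record exposed by an AI platform for audit purposes might look something like this (every field name below is invented for illustration):

```json
{
  "query": "best project management tools for distributed teams",
  "generated_at": "2026-04-01T09:30:00Z",
  "citations": [
    {
      "url": "https://example.com/pm-tools-comparison",
      "publisher": "Example Media",
      "weight": 0.42,
      "retrieved_at": "2026-03-28T00:00:00Z"
    },
    {
      "url": "https://example.org/remote-work-survey",
      "publisher": "Example Research",
      "weight": 0.21,
      "retrieved_at": "2026-03-30T00:00:00Z"
    }
  ]
}
```

A format along these lines is what would make the requirements above practical: businesses could track citation frequency automatically, and regulators could audit weighting patterns.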
Business Implications
This regulation makes visibility to AI systems a legal right, not just a competitive advantage. It means:
- You can verify you're being cited fairly
- You can challenge if you're being unfairly excluded
- You have recourse if AI systems misrepresent you
What "Right to Be Recommended" Could Mean for Your Business
The most controversial emerging regulation is the concept of a "right to be recommended" or, at minimum, a right to fair evaluation. This concept could fundamentally reshape competitive dynamics if enacted.
The Problem It Addresses
As AI systems become gatekeepers for procurement decisions, there's a real risk of anti-competitive behavior:
- Systematic vendor preference: An AI system could be trained or configured to favor vendors based on corporate relationships (for example, a platform steering recommendations toward its own cloud services).
- Competitive freezeout: Smaller vendors not visible to AI systems could be systematically excluded from consideration, regardless of merit.
- Market concentration: If dominant platforms control the AI systems, they control which competitors are even evaluated.
- Hidden biases: Unlike traditional markets where you can see competitors, in AI-mediated markets, exclusion is invisible.
Illustrative scenario: if Google's AI Overviews systematically recommended Google Cloud over AWS or Azure when synthesizing cloud infrastructure advice, that would be anti-competitive and potentially illegal.
What Regulation Might Require
Emerging discussions in EU and UK regulatory bodies suggest three potential requirements:
1. Non-Discrimination Requirements
- AI systems cannot favor vendors based on corporate relationships or ownership
- Evaluation criteria must be applied consistently across all vendors
- Bias against smaller vendors or new entrants must be prevented and auditable
- Systems must demonstrate they treat similar companies similarly
2. Fair Access Requirements
- Vendors should have documented ability to be evaluated fairly
- Evaluation criteria should be published or publicly available (at least at high level)
- Clear process for challenging unfair exclusion
- Right to request explanation for non-inclusion
- Dispute resolution mechanism
3. Transparency and Audit Rights
- Competitors can challenge recommendation patterns
- Regulators can audit AI evaluation logic and training data
- Systematic bias can be challenged in courts
- Vendors should be able to see citation frequency and understand why they are or aren't cited
Real-World Business Scenario
A B2B SaaS company discovers that when enterprise customers ask their AI assistant "Best project management tools for distributed teams," the AI strongly recommends Asana, Monday.com, and Jira, but rarely mentions the SaaS company despite comparable features and better pricing.
Under proposed regulation:
- The company could request from the AI platform: "Why aren't we being recommended? What criteria are we failing?"
- The platform would need to explain its evaluation logic
- If evaluation is based on bias (e.g., these platforms pay for ads, yours doesn't) or hidden relationships, the company could challenge it
- Regulators could audit to verify non-discrimination
Likelihood and Timeline
This is still emerging:
- The EU's Digital Markets Act already applies competition rules to designated digital gatekeepers, and regulators are examining how it extends to AI systems
- The UK Competition and Markets Authority (CMA) is investigating AI's competitive impact
- Business groups are lobbying for fair evaluation requirements
Timing expectations:
- 2026-2027: Regulatory frameworks clarified
- 2027-2028: Initial enforcement attempts
- 2028+: Jurisprudence develops through court cases
Business Implications
This regulation would create both protection and opportunity:
For smaller vendors: Protection from being frozen out. If you can demonstrate you meet evaluation criteria but aren't being recommended, you'll have recourse. This levels the playing field against entrenched incumbents.
For large vendors: Constraint on being able to game recommendations. You can't rely on hidden relationships or paid placement to dominate AI recommendations.
For all vendors: Incentive to meet transparent evaluation criteria rather than trying to game systems. This actually favors legitimate vendors who have real merit.
What You Should Do Now
To prepare for a "right to be recommended" world:
- Document your competitive advantages: Be clear about why you're superior (cost, features, outcomes) so you can reference this if challenging unfair exclusion.
- Meet objective criteria: Ensure you meet any published or emerging evaluation standards (data protection, compliance, uptime, etc.).
- Track your visibility: Monitor how often you appear in AI-generated recommendations vs. competitors. If the pattern looks unfair, document it.
- Build third-party validation: Analyst recognition, customer reviews, and media mentions are hard for evaluation logic to ignore.
- Join industry groups: Advocate for fair evaluation standards in your industry.
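Tracking visibility can start very simply. As a minimal sketch, assuming you periodically save AI-generated answers to your category's key queries, you can measure each brand's share of mentions (the brand "AcmePM" and the answer texts below are purely illustrative):

```python
from collections import Counter

def mention_share(answers, brands):
    """Count how often each brand appears across saved AI answers
    (case-insensitive substring match) and return each brand's
    share of all brand mentions."""
    counts = Counter()
    for text in answers:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = sum(counts.values())
    return {b: (counts[b] / total if total else 0.0) for b in brands}

# Illustrative: three saved answers to "best project management tools"
answers = [
    "Asana and Jira are popular choices for distributed teams.",
    "Many teams use Asana; Monday.com is another option.",
    "Jira suits engineering-heavy organisations.",
]
shares = mention_share(answers, ["Asana", "Jira", "Monday.com", "AcmePM"])
# AcmePM's share is 0.0 here: exactly the kind of pattern worth documenting
```

Run monthly against the same query set and you have a defensible record of whether your visibility is improving or you are being systematically excluded.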
Data Privacy and AI Training
A critical but often-overlooked regulation: how AI systems train on and use business data.
GDPR and Business Data
The General Data Protection Regulation (GDPR) applies to organizations processing the personal data of people in the EU. As AI systems train on web content, they may process personal data embedded in business information, such as staff bios, named customer quotes, and contact details.
Key Issues
1. Training Data Transparency
- AI systems should disclose what data they trained on
- Businesses can request their data be excluded from training
- Opt-out mechanisms are being mandated
2. Right to Object
- Under GDPR, individuals (not companies) have the right to object to processing of their personal data
- For non-personal website content, the EU's text-and-data-mining opt-out under copyright law is the relevant mechanism for refusing AI training use
- Enforcement in this area is expected to ramp up through 2026-2027
3. Competitive Data Use
- Information in your case studies, benchmarks, or proprietary research is your data
- If AI systems use it to train competitors, that's potentially a violation
- Litigation is beginning on these issues
Practical Implication
This means:
- You should add robots.txt directives to stop AI crawlers from using your content as training data
- Your confidential content (research, methodologies) has some protection
- Violations can be challenged legally
- Competitive intelligence through AI training might be restricted
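As a sketch of the robots.txt point, the following ruleset asks common AI training crawlers to stay away while leaving ordinary search crawling untouched. The user-agent tokens shown (GPTBot, ClaudeBot, Google-Extended, CCBot) are the ones those operators publish, but robots.txt is advisory: only compliant crawlers honor it.

```
# Ask common AI training crawlers not to crawl this site
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Everything else (including regular search engine crawlers) stays allowed
User-agent: *
Allow: /
```

Note the trade-off: blocking training crawlers protects proprietary content but can also reduce your visibility in AI-generated answers, so many businesses block only sensitive sections (e.g., `Disallow: /research/`) rather than the whole site.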
Business Opportunity
If competitors' confidential information is being used to train AI systems unfairly, you have legal grounds to challenge it.
Compliance Implications for Businesses
For a typical B2B business, what does this regulatory landscape mean?
If You Sell to EU Companies
You are affected because:
- Your potential customers' AI procurement systems will be subject to EU AI Act
- Your vendor information will be evaluated under transparency and fairness requirements
- You should ensure your information meets EU standards
Requirements:
- Ensure your website has accurate, verifiable information
- Document your claims with evidence
- Remove misleading or unsubstantiated claims
- Have processes to correct inaccurate information
If You Sell to UK Companies
You are affected because:
- UK AI regulation is likely to converge at least partially with the EU's (it is currently lighter-touch)
- Best practice is to assume similar requirements as EU
If You're a Data-Heavy Business
You are affected because:
- Your data is training material for AI systems
- You should manage what data is publicly available
- You should use robots.txt and terms of service to protect competitive data
If You Sell AI Systems
You are heavily affected:
- Your AI systems must meet EU AI Act standards
- Documentation and auditability are required
- Enforcement will be strict
What to Prepare for Now
Given this regulatory trajectory, here's what B2B companies should do now:
Immediate Actions (Q2-Q4 2026)
1. Audit Your Public Information
- Ensure all claims are substantiated with evidence
- Remove hyperbole and unsubstantiated marketing claims
- Document methodology and research behind your claims
2. Prepare Citation Management
- Track where you're cited in AI systems
- Set up processes to monitor and verify accuracy
- Create templates for requesting corrections
3. Document Your Evaluation Criteria
- If you have B2B procurement processes, document how you evaluate vendors
- Show it's transparent and non-discriminatory
- Be prepared to explain it to regulators
4. Review Data Use
- Check your robots.txt for sensitive data exclusions
- Review what confidential information is public
- Determine if you need to protect additional information
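When reviewing data use, you can verify how your robots.txt rules actually resolve with Python's standard-library parser before deploying them. A small sketch with an invented ruleset that keeps OpenAI's GPTBot out of a research section:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: GPTBot is excluded from /research/,
# everything else remains crawlable.
robots_txt = """\
User-agent: GPTBot
Disallow: /research/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check how the rules resolve for specific paths
research_blocked = not parser.can_fetch("GPTBot", "/research/benchmarks.html")
blog_allowed = parser.can_fetch("GPTBot", "/blog/post.html")
```

A quick check like this catches the common mistake of a rule that is broader (or narrower) than intended before a crawler ever sees it.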
Medium-Term Actions (2027)
1. Prepare for Audit
- Assume your website and claims will be audited by regulators
- Have documentation ready
- Create audit trails showing how claims are substantiated
2. Implement Correction Processes
- Have formal processes for addressing inaccurate citations
- Train team on how to request AI system corrections
- Budget for potential need to respond to challenges
3. Develop Fair Evaluation Claims
- If you offer AI systems or recommendations, ensure they're fair
- Document non-discrimination
- Prepare for regulatory review
4. Join Industry Groups
- Participate in industry conversations about AI regulation
- Contribute to standard-setting
- Build relationships with regulators
Long-Term Strategy (2027-2028+)
1. Embrace Transparency
- Make your evaluation criteria public
- Show your research and methodology openly
- Build trust through transparency
2. Build Defensibility
- Document everything
- Keep detailed records of decisions and reasoning
- Be prepared to explain to regulators
3. Advocate for Fair Rules
- Engage with regulatory bodies
- Push for fair, non-discriminatory standards
- Ensure smaller businesses aren't disadvantaged
Ross Williams
Founder, Fortitude Media
Ross Williams is the founder of Fortitude Media, specialising in AI visibility and content strategy for B2B companies.