How to Audit Your Existing Content for AI Readiness
Evaluate existing content for LLM citation likelihood. Framework for depth scoring, freshness assessment, structural analysis, and prioritizing updates.

Summary: Most existing content was written for Google, not for large language models. A content audit for AI readiness is a strategic exercise that identifies which pieces deserve investment, which need updating, which should be rewritten, and which should be retired. This framework helps you evaluate your entire content library systematically, prioritize work, and measure progress.
Why AI Readiness Audits Matter
Content written for Google search ranking operates on different principles than content optimized for LLM citation. Your existing library probably contains:
- Keyword-optimized pieces that rank well but don't get cited
- Shallow content that covers topics but doesn't go deep
- Outdated pieces with stale data or references
- Content without clear structure or authority signals
- Lots of commodity knowledge, little original insight
Some of this content is worth updating. Some should be rewritten completely. Some should be retired. And some might already be AI-ready and just need minor optimization.
A systematic audit tells you:
- Which pieces have the highest ROI for updating
- Which topics need net-new content instead
- Which content should be consolidated or retired
- Where you're thin on depth for important topics
- Which articles can be quick wins versus major rewrites
Without a framework, you end up spending enormous time and money updating content that shouldn't be updated, missing obvious priorities, and spreading effort across low-impact pieces.
The Audit Framework
This framework evaluates content across five dimensions:
- Depth (0-25 points) — How comprehensively does this content treat the topic?
- Freshness (0-25 points) — How current is the publication date, data, and references?
- Structure (0-20 points) — How well organized is the content for LLM parsing?
- Authority (0-20 points) — What authority signals does the article carry?
- Originality (0-10 points) — How much unique perspective or data does it contain?
Total: 100 points. Scoring:
- 80-100: AI-ready or close. Invest in minor optimization.
- 60-79: Partially ready. Meaningful update or targeted rewrite.
- 40-59: Significant work needed. Full rewrite or consolidate with other content.
- Below 40: Retire or heavily redesign.
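The scoring bands above map directly to actions, so they are easy to automate once you have totals in a spreadsheet export. A minimal sketch (the function name and strings are illustrative, not part of the framework):

```python
def recommended_action(total_score: int) -> str:
    """Map a 0-100 audit total to the action tier described above."""
    if not 0 <= total_score <= 100:
        raise ValueError("score must be between 0 and 100")
    if total_score >= 80:
        return "AI-ready or close: invest in minor optimization"
    if total_score >= 60:
        return "Partially ready: meaningful update or targeted rewrite"
    if total_score >= 40:
        return "Significant work needed: full rewrite or consolidate"
    return "Retire or heavily redesign"
```

Running this over every row of an audit spreadsheet gives you a first-pass action column to sanity-check by hand.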
Depth Scoring
Depth measures how comprehensively the article treats its topic. This is the most important dimension for LLM citation.
Scoring criteria (0-25 points):
0-5 points: Shallow or surface-level treatment
- Article covers topic in 800 words or less
- Each section receives 2-3 paragraphs maximum
- No subsections or layered explanation
- Topic treated at single level of abstraction
- Few supporting details or examples
- No edge cases or constraints discussed
Example: "5 Benefits of Cloud Migration" with one paragraph per benefit.
6-10 points: Basic coverage
- Article is 1,200-1,500 words
- Main sections are defined but minimally elaborated
- Some supporting detail but often generic
- Limited exploration of why or how
- Examples are present but illustrative rather than instructive
Example: "Cloud Migration Guide" that covers planning, execution, and optimization at surface level.
11-15 points: Moderate depth
- Article is 1,800-2,200 words
- Clear sections with substantial explanation
- Includes supporting detail and specific examples
- Begins to address edge cases or variations
- Multiple explanatory angles covered
- Some analysis of success factors or failure modes
Example: "Implementing Cloud Migration in Enterprise Environments" that covers architecture, organizational change, and common obstacles.
16-20 points: Deep treatment
- Article is 2,400-3,200 words
- Sections are granular and thoroughly explained
- Includes specific examples, case studies, or data
- Addresses edge cases, constraints, and limitations
- Multiple explanatory angles with trade-off analysis
- Clear explanation of causal mechanisms
Example: "Enterprise Cloud Migration: Architecture, Organizational Change, and Hidden Implementation Costs" with specific cost data, change management patterns, and real examples.
21-25 points: Exhaustive treatment
- Article is 3,000+ words with disciplined structure
- Subsection hierarchy is clear and purpose-driven
- Specific data points, original research, or proprietary insights
- Comprehensive edge case and constraint coverage
- Sophisticated comparative analysis
- Multiple perspectives with clear reasoning about which applies when
- Anticipates reader questions and addresses them
Example: Comprehensive piece on cloud migration including cost models, vendor comparison frameworks, organizational change patterns, and when not to migrate.
Scoring tip: This isn't about word count alone. A 2,500-word article that repeats ideas scores lower than a 2,000-word article with dense, non-redundant explanation. Count substantive content, not filler.
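One crude way to apply the "count substantive content, not filler" tip at scale is to measure how often an article repeats itself verbatim. This is a hypothetical heuristic, not a replacement for actually reading the piece; it only catches exact duplicate sentences, not paraphrased repetition:

```python
def redundancy_ratio(text: str) -> float:
    """Share of exact-duplicate sentences after lowercasing and trimming.
    A rough filler signal only; paraphrased repetition slips through."""
    sentences = [s.strip().lower() for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    return 1 - len(set(sentences)) / len(sentences)
```

A ratio well above zero on a long article is a cue to re-read it before awarding a high depth score.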
Freshness Assessment
Freshness evaluates how current the content is and whether it reflects present-day reality.
Scoring criteria (0-25 points):
0-5 points: Very stale
- Published 3+ years ago
- No update date indicated
- References data from 3+ years prior
- Outdated vendor/product references
- Reflects outdated industry practices
- Statistics are clearly obsolete
6-10 points: Moderately dated
- Published 2-3 years ago
- References data that's 2-3 years old
- Some vendor/product references are current, some outdated
- Foundational concepts are current, but practices may have evolved
- No update date
11-15 points: Reasonably current
- Published 12-24 months ago
- References data from last 18 months
- Most vendor/product references current
- Practices are current but not cutting-edge
- Some evergreen content that doesn't require dates
16-20 points: Current
- Published within last 12 months
- References current data and year
- Modern vendor/product references
- Reflects current best practices
- May have an update date indicating recent refresh
21-25 points: Very current
- Published within last 6 months
- References current data (within 3 months)
- Cutting-edge examples and vendor references
- Clear update dates on refreshed content
- Time-sensitive information is clearly marked
- Signals that content is actively maintained
Scoring tip: Publication date alone isn't sufficient. An article from 2022 that discusses timeless concepts is fresher than an article from 2024 that references outdated data. Look at the content currency, not just the publication date.
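The date-based part of the freshness bands can be scripted as a first pass, with the stale-data caveat from the tip above built in: the older of "months since publication" and "months since the newest data point" drives the band. The function and band labels are an illustrative sketch; a reviewer should still override it for evergreen content:

```python
from datetime import date


def months_between(earlier: date, later: date) -> int:
    """Whole calendar months from earlier to later."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)


def freshness_band(published: date, newest_data: date, today: date) -> str:
    """First-pass freshness band; stale data caps the score even if the
    publication date is recent."""
    age = max(months_between(published, today), months_between(newest_data, today))
    if age <= 6:
        return "21-25: very current"
    if age <= 12:
        return "16-20: current"
    if age <= 24:
        return "11-15: reasonably current"
    if age <= 36:
        return "6-10: moderately dated"
    return "0-5: very stale"
```

Note how a recent article citing three-year-old data lands in the bottom band, matching the scoring tip.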
Structural Analysis
Structure measures whether the article is organized in a way that LLMs can parse effectively.
Scoring criteria (0-20 points):
0-5 points: Poor structure
- Unclear or missing H1
- Inconsistent heading hierarchy (jumps levels, mixes H2 and H4)
- No table of contents
- Long body paragraphs (600+ words without section breaks)
- No visual breaks or formatting
- Difficult to identify main concepts
6-10 points: Basic structure
- Clear H1
- Mostly consistent heading hierarchy
- Some subsections but not comprehensive
- Medium-length paragraphs (300-500 words)
- Some list formatting
- Main concepts are identifiable but not clearly delimited
11-15 points: Good structure
- Clear, specific H1
- Consistent heading hierarchy (H1-H3 or H1-H4)
- Table of contents present
- Logical subsection breakdown
- Clear topic sentences and summary statements
- Visual formatting (lists, callouts) used appropriately
- FAQ section (if applicable)
16-20 points: Excellent structure
- Precise, semantic H1
- Clean H1-H2-H3 hierarchy
- Comprehensive table of contents with anchor links
- Each section clearly delineated
- Summary statements at section conclusions
- Consistent visual formatting
- FAQ section with schema-ready Q&A
- Supporting elements (code blocks, diagrams, matrices) clearly labeled
Scoring tip: Run the article through an automated heading hierarchy checker (browser extension or online tool). Inconsistent hierarchy is an objective failing. Check whether an LLM could segment this content into discrete knowledge units—if not, the structure needs work.
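If your content lives in markdown, the hierarchy check in the tip above is a few lines of code rather than a browser extension. A minimal sketch that flags skipped levels in ATX-style headings (the function name is illustrative):

```python
import re


def heading_level_jumps(markdown: str) -> list[str]:
    """Flag places where heading levels skip (e.g. an H2 followed by an H4)."""
    issues = []
    prev_level = 0
    for line in markdown.splitlines():
        match = re.match(r"^(#{1,6})\s+\S", line)
        if not match:
            continue
        level = len(match.group(1))
        if prev_level and level > prev_level + 1:
            issues.append(f"H{prev_level} -> H{level}: {line.strip()}")
        prev_level = level
    return issues
```

An empty result means the hierarchy never skips a level; each entry in the list is an objective structural failing to fix before scoring.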
Authority Signal Evaluation
Authority signals measure whether the article carries credibility cues that LLMs weight.
Scoring criteria (0-20 points):
0-5 points: Minimal authority signals
- Generic byline ("By [Company] Editorial Team")
- No author credentials or expertise context
- No data sources cited
- No references to research or external sources
- No case studies or examples
- Unclear organizational expertise level
6-10 points: Some authority signals
- Named author but limited credential detail
- Some data sources mentioned but not rigorously cited
- A few external references
- General examples but not case studies
- Some topical consistency across company articles
- Organizational expertise implied but not explicit
11-15 points: Moderate authority signals
- Named author with expertise context ("Jane Chen, 10 years in data infrastructure")
- Multiple data sources with proper attribution
- 4-5 substantive external references
- 1-2 case study examples
- Clear topical consistency across author/company content
- Organizational expertise in domain is evident
16-20 points: Strong authority signals
- Named author with specific credentials and experience
- Multiple proprietary data sources
- 6+ high-quality external references including academic/authoritative sources
- 2-3 substantial case studies with specific results
- Clear author topical authority across multiple articles
- Organizational expertise is demonstrable and specialized
- Proprietary research or original data analysis included
Scoring tip: Check whether the author is a consistent expert across multiple pieces. A single article with great credentials still doesn't signal authority the way five related articles from the same author do. Also evaluate reference quality: citing industry analyst reports is stronger than citing general news articles.
Originality and Insight Assessment
Originality measures how much unique perspective, data, or insight the article contains.
Scoring criteria (0-10 points):
0-2 points: Commodity knowledge
- Article covers topic the same way every competitor covers it
- No unique perspective or insight
- Generic information available everywhere
- No proprietary data
- No unique framing or recontextualization
- Essentially a paraphrase of existing knowledge
3-4 points: Slight originality
- Some unique angle or framing
- Mostly commodity knowledge with one unique element
- Limited proprietary insight
- Slightly different perspective than common coverage
5-6 points: Moderate originality
- Clear unique perspective on familiar topic
- Some proprietary data or insight
- Topic is recontextualized in useful way
- Synthesis of existing knowledge with novel angle
- Insight that competitors aren't documenting
7-8 points: Substantial originality
- Significant unique perspective backed by proprietary data
- Recontextualizes topic in ways competitors haven't
- Includes research or analysis others haven't conducted
- Synthesis that produces genuinely new insight
- Perspective that only this organization can offer
9-10 points: Highly original
- Proprietary research, data, or methodology
- Unique framing that redefines how the topic is understood
- Insight that has clear competitive moat
- Perspective competitors literally cannot replicate without access to your data/experience
- Findings or conclusions not documented elsewhere
Scoring tip: This is harder to score objectively. Do a competitor audit: search for the same topic at top competitors. If your article says the same things they say, it's commodity knowledge. If it offers something unique, score accordingly. Proprietary data is the easiest to score—if you have it, originality is higher.
Prioritization Matrix
Once you've scored all content, you need a prioritization matrix to decide what to do with each piece.
Create a 2x2 matrix:
- X-axis: Audit Score (lower scores on the left, higher scores on the right)
- Y-axis: Strategic Importance (low on bottom, high on top)
This creates four quadrants:
High Importance / High Score (80+): Maintain & Optimize
- These are your best pieces. Minor optimization only.
- Action: Update data, add FAQ section if missing, ensure structure is excellent.
- Effort: Low (1-2 days per article).
High Importance / Medium Score (60-79): Targeted Rewrite
- Important topics that aren't optimally treated.
- Action: Rewrite for depth, add proprietary insights, restructure for clarity.
- Effort: Medium (3-5 days per article).
High Importance / Low Score (below 60): Major Rewrite or Net-New
- Critical topics that aren't well-covered. Often better to write fresh.
- Action: Either deeply rewrite or retire and write replacement content.
- Effort: High (5-10 days per article) or create new piece.
Low Importance / Any Score: Retire or Consolidate
- Content on low-priority topics, regardless of quality.
- Action: Retire individual pieces; consolidate with related content if strategic.
- Effort: Remove from content architecture.
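The four quadrants reduce to a simple lookup from audit score and strategic importance, which is handy when triaging a large library. A sketch of the decision rule described above (names and strings are illustrative):

```python
def prioritize(score: int, high_importance: bool) -> str:
    """Quadrant action from audit score and strategic importance."""
    if not high_importance:
        return "Retire or consolidate"
    if score >= 80:
        return "Maintain & optimize"
    if score >= 60:
        return "Targeted rewrite"
    return "Major rewrite or net-new"
```

Sorting the output by quadrant gives you the backlog for the implementation timeline below.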
Strategic importance is determined by:
- Is this a topic your target customers ask about?
- Is this aligned with your core expertise domain?
- Does this support your business positioning?
- Would competitors prioritize this topic?
High strategic importance: topics your customers need solved, topics where you have unique insight, topics central to your value prop.
Low strategic importance: tangential topics, commodity knowledge, nice-to-have content.
Implementation Timeline
With scores and prioritization, you need a realistic timeline.
Phase 1: Quick Wins (Month 1)
Focus on high-importance/high-score articles.
- Add FAQ sections (if missing)
- Update publication dates
- Refresh outdated data
- Add summary statements if structure is weak
- Effort: 10-15 articles of 1-2 days each
Phase 2: Medium Priority (Months 2-4)
Targeted rewrites of high-importance/medium-score pieces.
- Deepen explanations
- Add proprietary data or insights
- Restructure for clarity
- Add case studies or examples
- Effort: 5-8 articles of 3-5 days each
Phase 3: Strategic Rewrites (Months 4-6)
Major rewrites or net-new content for high-importance/low-score pieces.
- Complete rewrites from scratch if content is weak
- Or write new supplementary content
- Build topical clusters around core themes
- Effort: 3-4 articles of 5-10 days each
Phase 4: Retirement and Consolidation (Months 2-6, ongoing)
Remove low-importance content.
- Retire or redirect low-strategic-value pieces
- Consolidate overlapping content
- Clean up your content library
- Effort: Review and clean (1-2 days per month)
The timeline assumes you have roughly 100 pieces of content. Scale up or down based on your library size.
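To sanity-check whether the plan fits your team's capacity, you can total the writer-days implied by the phase estimates. The counts below are the midpoints of the ranges given above, so treat the result as a rough order of magnitude, not a commitment:

```python
# Midpoints of the article counts and per-article effort from the plan above
# (assumed values; adjust to your own library and team).
PHASES = {
    "quick wins": (12.5, 1.5),          # 10-15 articles, 1-2 days each
    "targeted rewrites": (6.5, 4.0),    # 5-8 articles, 3-5 days each
    "strategic rewrites": (3.5, 7.5),   # 3-4 articles, 5-10 days each
}


def estimated_days(phases: dict = PHASES) -> float:
    """Total writer-days implied by the phase midpoints."""
    return sum(count * days for count, days in phases.values())
```

At the midpoints this comes to roughly 70 writer-days across six months, before Phase 4 cleanup time.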
Ross Williams
Founder, Fortitude Media
Ross Williams is the founder of Fortitude Media, specialising in AI visibility and content strategy for B2B companies.


