AI SEO audit tool - see how ChatGPT, Perplexity, and Google AI Overviews read your site
AI search is a parallel channel to traditional blue links: LLM-driven engines like ChatGPT, Perplexity, and Google AI Overviews pull answers from the open web and cite the sites they trust. MetricSpot scores both - you get an AI-readability score and a traditional SEO score from the same audit, with a fix-it list for each.
No card needed. Results in 30 seconds.
Try it on any URL.
What an AI SEO audit checks
Eleven AI-readability rules run on every audit. Each one maps to a public documentation page you can hand to a developer or a client - open the rule to read what it checks, why it matters for AI search, and how to fix it. The same rules ship in the audit report, the white-label PDF, and the API response, so what you read here is what your client reads on delivery.
llms.txt for AI agents
An /llms.txt manifest at the root of your site tells LLM crawlers which pages are canonical and how to read them. Emerging spec, low cost, high upside.
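A minimal sketch of the format, following the draft spec at llmstxt.org (Markdown: an H1 title, a blockquote summary, then H2 sections of annotated links). All names and URLs below are placeholders:

```txt
# Example Co

> Example Co makes widgets. This site documents our products, pricing, and API.

## Docs

- [Getting started](https://example.com/docs/start.md): install and first run
- [API reference](https://example.com/docs/api.md): endpoints and auth

## Optional

- [Blog](https://example.com/blog.md): release notes and tutorials
```

The spec is still emerging, so keep the file short and point only at pages you consider canonical.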
Read the check →

Declare a policy in /agents.txt
A companion file that states your data-use policy for AI agents - useful when you want to permit or restrict training-set scraping separately from indexing.
Read the check →

Allow AI crawlers in robots.txt
Check that GPTBot, PerplexityBot, ClaudeBot, Googlebot, and other AI user agents aren't accidentally blocked. The default WordPress robots.txt blocks several.
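A robots.txt sketch that allows the major AI crawlers explicitly while keeping your normal disallow rules; the user-agent tokens below are the published ones, but check each vendor's docs before shipping:

```txt
# Explicitly allow AI crawlers
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Everyone else: your existing default rules
User-agent: *
Disallow: /admin/
```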
Read the check →

Answer-first content
AI engines pull the first quotable answer block. Pages that lead with a direct answer get cited; pages that bury the answer below 800 words of preamble don't.
Read the check →

Author attribution
Author bios, credentials, and Person schema give the engine an entity to cite. Anonymous posts are routinely skipped by Perplexity and AI Overviews.
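A minimal Person block, as one way to give engines a citable author entity (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Technical SEO Lead",
  "url": "https://example.com/authors/jane-doe",
  "sameAs": ["https://www.linkedin.com/in/janedoe"]
}
</script>
```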
Read the check →

JSON-LD structured data
Article, Product, HowTo, and FAQ schema turn prose into machine-readable facts. LLM grounding pipelines lean heavily on JSON-LD when it's available.
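For example, a bare-bones Article block (placeholder values) gives a grounding pipeline the headline, author, and dates without it having to parse your prose:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to audit a site for AI search",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2026-01-10",
  "dateModified": "2026-02-01"
}
</script>
```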
Read the check →

FAQPage schema for FAQs
Question-and-answer sections marked up with FAQPage schema are the single highest-conversion pattern for AI-overview citations on how-to and product pages.
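A single-question FAQPage sketch (placeholder text) showing the Question/acceptedAnswer shape engines extract from:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do I need an llms.txt file?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "It is optional but cheap: a small manifest that tells LLM crawlers which pages are canonical."
    }
  }]
}
</script>
```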
Read the check →

Organization schema
An Organization JSON-LD block with name, logo, sameAs, and contactPoint identifies you as an entity that LLMs can resolve and cite by brand name.
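A sketch with the four fields named above, placeholder values throughout:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": ["https://www.linkedin.com/company/example"],
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "customer support",
    "email": "support@example.com"
  }
}
</script>
```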
Read the check →

Semantic HTML
`article`, `section`, `header`, `nav`, and `main` let crawlers parse content structure without running JavaScript. Div-soup pages lose context to LLM parsers.
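A minimal landmark skeleton, shown here as one reasonable layout rather than the only correct one:

```html
<body>
  <header><!-- site banner and nav --></header>
  <main>
    <article>
      <header><h1>Page title</h1></header>
      <section>
        <h2>First topic</h2>
        <p>Body copy the crawler can attribute to this section.</p>
      </section>
    </article>
  </main>
  <footer><!-- contact, legal --></footer>
</body>
```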
Read the check →

Visible last-updated date
Engines prefer fresh sources. A visible last-updated date - not just a publish date - earns citations on time-sensitive topics.
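One common pattern (placeholder date): a visible `<time>` element, mirrored by `dateModified` in the page's schema, so humans and crawlers read the same freshness signal:

```html
<p>Last updated: <time datetime="2026-02-01">1 February 2026</time></p>
```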
Read the check →

Content-type schema
Mark each page with its specific schema.org type (Article, Product, HowTo, Recipe, LocalBusiness) so the engine knows which fact pattern to extract.
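As a sketch, a product page would carry Product rather than a generic WebPage type, so the engine knows to look for price and availability (placeholder values):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A placeholder product for illustration.",
  "offers": {
    "@type": "Offer",
    "price": "29.00",
    "priceCurrency": "USD"
  }
}
</script>
```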
Read the check →

GEO vs traditional SEO
The AI-search category is still settling on vocabulary. Four terms come up in searches; they overlap heavily but each captures a slightly different angle.
| Term | What it means |
|---|---|
| GEO (Generative Engine Optimization) | Optimizing for AI engines that generate answers - Perplexity, ChatGPT Search, Google AI Overviews. The output is a synthesized paragraph with citations, not a ranked list. Optimization targets are the inputs to that paragraph: schema, citation-friendly facts, fresh dates, named authors. |
| AEO (Answer Engine Optimization) | Making your content structured enough that engines can pull a direct answer. Overlaps with GEO; AEO leans more on FAQ schema, definition blocks, and quotable opening sentences. The two terms are largely interchangeable in 2026. |
| LLM SEO | Umbrella term that some practitioners use as a synonym for GEO and others use to mean the broader practice of optimizing for any LLM-based surface (chatbots, agents, code assistants). |
| Traditional SEO | Optimizing for ranked blue links on Google and Bing. Still the larger channel by volume; AI-search optimization complements it rather than replaces it. |
Same site, different surfaces. MetricSpot scores both, and most of the AI-readiness signals (schema, semantic HTML, freshness, author signals) feed your traditional rankings as well. If you've ever been asked 'do I need to redo my SEO for AI search?', the honest answer is mostly no. In our experience most of the signals overlap; the new work is at the edges (llms.txt, agents.txt, answer-first formatting, Person and Organization schema), and the upside compounds across both channels.
Inside an AI SEO audit report
Your AI-readability findings appear as a severity-color-coded list - one row per rule, with a short explanation of what we found on your page and a link to the matching documentation. The same data lands in the PDF report, branded for your agency on paid plans.
- Severity pills: info, minor, major, critical - same scale used across every MetricSpot module so you can triage at a glance.
- Per-rule findings: the exact element, file, or pattern we matched against on your page (e.g., 'no llms.txt at /llms.txt').
- How-to-fix link: every finding deep-links to its /docs/ page with the actionable fix and references.
- AI-readability sub-score: a 0-100 module score that feeds into your overall MetricSpot score.
AI SEO audit tool pricing
AI-readability ships on every plan, including the free tier. Paid plans add unlimited audits, scheduled re-runs, and a fully white-label PDF.
Free
$0/mo
Try the platform. No card, no commitment.
- 10 audits per month (1 per site per 24h)
- All ten score modules
- PDF download with our branding
- Multilingual reports
Starter
$29/mo
For freelancers running monthly reports.
- Up to 5 tracked domains
- 50 audits per month
- Fully white-labeled PDF reports
- Custom brand kit (logo, color, footer)
Pro
$49/mo
For agencies, freelancers, and resellers.
- Everything in Starter
- Scheduled re-audits (weekly, biweekly, or monthly)
- Unlimited tracked domains
- Email reports directly to clients
Need to hand the report to a client? Every paid plan ships the same AI-readability findings inside a PDF that's branded for your agency - your logo, colors, and contact info instead of MetricSpot's.
MetricSpot is itself MCP-ready: every check on this page is also exposed to AI agents via our Model Context Protocol server. Hosted clients connect to `mcp.metricspot.com`; local clients (Claude Code, Cursor, Zed) install with `npx @metricspot/mcp-server`. See the agent integration guide for tool specs, auth, and sample responses.
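For local clients that read the common `mcpServers` config shape (Claude Desktop and similar tools), the setup is roughly this - the exact file location and key names depend on your client, so treat it as illustrative:

```json
{
  "mcpServers": {
    "metricspot": {
      "command": "npx",
      "args": ["@metricspot/mcp-server"]
    }
  }
}
```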
FAQ
Is AI search big enough to optimize for?
It's the fastest-growing search surface of 2025-2026. Google AI Overviews increasingly appear above the blue links on informational queries, and Perplexity + ChatGPT Search send a referral stream that doesn't appear in Google Analytics' default channel grouping. Even if AI traffic is a small share of your total today, the signals that earn AI citations (schema, semantic HTML, named authors) also strengthen your traditional rankings - so you get two channels' worth of upside from one round of work.
What's the difference between AI readability and traditional SEO?
Traditional SEO optimizes for a ranked list of links that a human clicks. AI readability optimizes for the inputs an LLM pulls into a generated answer - usually schema, citation-friendly facts, named authors, fresh dates, and structured HTML. Most of the signals are shared, but the failure modes differ: a thin page can still rank traditionally but rarely earns an AI citation, and a page hidden behind JavaScript can rank with Google's renderer but is invisible to most LLM crawlers.
Do I need an llms.txt file?
It's emerging - not a Google ranking factor, not yet supported by every LLM crawler. We check for it as an info-severity rule because the cost to add it is near zero and a handful of LLM clients already prefer sites that publish one. Treat it as cheap insurance against a spec that may or may not become standard.
Will optimizing for AI search hurt my Google rankings?
No. The AI-readiness signals MetricSpot checks (JSON-LD, semantic HTML, answer-first prose, named authors, freshness) overlap heavily with the signals Google's traditional ranker rewards. We've never seen a site improve AI readability and lose Google traffic from it.
How often should I re-audit?
Monthly is the typical cadence for active sites; quarterly is fine for sites that publish less often. Re-audit immediately after any template change, schema rollout, or robots.txt update - those are the changes that most often regress AI readability without anyone noticing.
Does the audit query ChatGPT or Perplexity directly to see if my site is cited?
No - that's a different category of tool (visibility tracking) and we don't ship it today. MetricSpot scores your site's readiness signals: the things on your page that determine whether an LLM can cite you. It doesn't ask ChatGPT a prompt and watch for your URL. If citation tracking is your main need, pair MetricSpot with a dedicated AI visibility tracker; if your goal is to fix the signals first, MetricSpot is the audit.
Which AI engines is this aimed at?
The signals are engine-agnostic - schema, semantic HTML, llms.txt, and named authors are read by every LLM-based engine we know of: ChatGPT Search, Perplexity, Google AI Overviews, Claude, Brave Leo, You.com, and the long tail of agent-based search clients.
Can I run a full AI-readability audit on the free plan?
Yes. The free plan includes the full AI-readability module on every audit, with the same per-rule findings as the paid plans. Paid plans add unlimited audits, scheduled re-runs, and a white-label PDF.
Stop writing SEO reports by hand.
Run an audit, brand the PDF, send to your client. In five minutes.
Start your first audit