Lighthouse accessibility score

MetricSpot runs Google Lighthouse and reports the accessibility score (0–100). A score below 90 means automated checks for contrast, ARIA, keyboard access, and form labels are failing.

What this check does

Runs the Lighthouse accessibility audit (a curated subset of axe-core rules) against the page and reports the weighted score. MetricSpot fails the check below 90.

Why it matters

The score is a coarse but honest signal. Lighthouse only catches issues that can be verified programmatically against the rendered DOM — roughly a third of WCAG success criteria. Anything it flags is almost certainly real, not a false positive.

  • Real users are excluded. Roughly 1 in 5 visitors relies on assistive tech at some point: screen readers, keyboard-only navigation, high-contrast modes, voice control. A page scoring 60 is likely to be a serious barrier for many of them.
  • Legal exposure. In the EU, the European Accessibility Act applies to public-facing commercial sites as of June 2025. In the US, the DOJ has stated explicitly that ADA Title III covers websites. Failing-score sites are a frequent target for shakedown lawsuits.
  • SEO and AI. Google’s page-experience signals include accessibility-adjacent metrics. AI crawlers work from the same DOM and semantics that screen readers do — broken semantics hurt extraction.

How to fix it

Open Lighthouse (Chrome DevTools → Lighthouse tab → Accessibility) and run it against the failing URL. The report lists every failing audit grouped by severity, with the offending DOM nodes highlighted.

Focus on the highest-weighted items first — Lighthouse weights audits by impact, not count:

  • Color contrast (color-contrast) — text ≥ 4.5:1, large text ≥ 3:1. Use a tool like Stark or DevTools’ contrast indicator.
  • Image alt text (image-alt) — every <img> needs an alt attribute, even if empty for decorative images. See Image alt text.
  • Form labels (label) — every input needs an explicit <label for="…">. See Form input labels.
  • Document language (html-has-lang) — <html lang="en"> on the root. See Declare page language.
  • Link text (link-name) — no “click here”, no empty <a>. See Descriptive link text.
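The contrast thresholds in the list above come from WCAG’s relative-luminance formula. A minimal TypeScript sketch of the calculation (the helper names and hex-parsing approach are illustrative, not part of any MetricSpot or Lighthouse API):

```typescript
// Linearize one sRGB channel per the WCAG 2.x relative-luminance definition.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a #rrggbb color.
function luminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const r = channel((n >> 16) & 0xff);
  const g = channel((n >> 8) & 0xff);
  const b = channel(n & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), range 1–21.
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio("#000000", "#ffffff").toFixed(1)); // "21.0"
// Mid-grey on white falls short of the 4.5:1 body-text threshold.
console.log(contrastRatio("#999999", "#ffffff") >= 4.5); // false
```

This is why “looks readable” is not a defense: a ratio around 2.8:1, as in the grey-on-white case, fails the color-contrast audit even though the text is legible to many sighted users.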

Run axe-core in CI to catch regressions:

bun add -d @axe-core/playwright

// e.g. tests/a11y.spec.ts
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("home page is accessible", async ({ page }) => {
  await page.goto("/");
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});
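If a hard gate of zero violations is too strict for an existing codebase, you can triage by impact first. A hedged sketch of a helper for that (the `Violation` interface below mirrors a slice of the fields axe-core returns on `results.violations`; the `triage` function itself is illustrative, not an axe API):

```typescript
// Minimal slice of the axe-core violation shape (illustrative, not the full type).
interface Violation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical" | null;
  nodes: { target: string[] }[];
}

const impactRank = { minor: 1, moderate: 2, serious: 3, critical: 4 } as const;

// Keep only violations at or above a given impact, sorted worst-first.
function triage(
  violations: Violation[],
  minImpact: keyof typeof impactRank = "serious",
): string[] {
  return violations
    .filter((v) => v.impact !== null && impactRank[v.impact] >= impactRank[minImpact])
    .sort((a, b) => impactRank[b.impact!] - impactRank[a.impact!])
    .map((v) => `${v.impact}: ${v.id} (${v.nodes.length} node${v.nodes.length === 1 ? "" : "s"})`);
}

// Hypothetical results from a run:
const sample: Violation[] = [
  { id: "color-contrast", impact: "serious", nodes: [{ target: [".nav a"] }, { target: ["footer p"] }] },
  { id: "region", impact: "moderate", nodes: [{ target: ["div"] }] },
  { id: "image-alt", impact: "critical", nodes: [{ target: ["img.hero"] }] },
];

console.log(triage(sample));
// → ["critical: image-alt (1 node)", "serious: color-contrast (2 nodes)"]
```

In CI you could assert `triage(results.violations).length === 0` while the backlog of moderate and minor findings is worked down, then tighten `minImpact` over time.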

Manual checks Lighthouse can’t do

Tab through the page top-to-bottom — is the focus order logical? Is the focus ring visible? Turn on VoiceOver (Cmd+F5 on macOS) or NVDA (Windows) and read one page end-to-end. A Lighthouse score of 100 is a floor, not a ceiling.

Frequently asked questions

Why was my score so different on a re-run?

Lighthouse runs in a headless Chrome instance with throttled CPU. Network noise, ad scripts, and consent-banner timing can shift the score by 5–10 points. Run three times and take the median; treat the trend, not the spot value, as truth.
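The median-of-three advice is easy to script. A minimal sketch — the `categories.accessibility.score` path matches the shape of Lighthouse’s JSON report (a 0–1 fraction), while the helper names and the sample run data are illustrative:

```typescript
// Lighthouse JSON reports carry each category score as a 0–1 fraction.
interface LighthouseReport {
  categories: { accessibility: { score: number } };
}

// Convert to the familiar 0–100 scale.
function accessibilityScore(report: LighthouseReport): number {
  return Math.round(report.categories.accessibility.score * 100);
}

// Median of an odd number of scores: sort, take the middle element.
function median(scores: number[]): number {
  const sorted = [...scores].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// Three hypothetical runs of the same page: judge the median, not any single run.
const runs: LighthouseReport[] = [
  { categories: { accessibility: { score: 0.88 } } },
  { categories: { accessibility: { score: 0.93 } } },
  { categories: { accessibility: { score: 0.91 } } },
];

console.log(median(runs.map(accessibilityScore))); // → 91
```

Tracking that median over time (rather than alerting on any single sub-90 run) is what “treat the trend, not the spot value, as truth” looks like in practice.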

Lighthouse is 100 — am I done?

No. Lighthouse audits about 30% of WCAG. The other 70% (logical reading order, error-recovery flow, focus management, content-meaning conveyed by color alone) requires manual review. A 100 score means the automated baseline is clean; ship the manual review on top.

Where does the score come from in the audit?

MetricSpot calls the Lighthouse engine during the audit and stores the score. If the run fails (the page is behind a login, blocks headless browsers, or times out) you’ll see a separate Lighthouse accessibility unavailable finding instead.

Last updated 2026-05-11