GET /api/audits/:id
Fetch a single audit by id with score, per-module breakdown, and the full findings array. Poll this after queueing with POST /api/audits.
What this endpoint does
GET /api/audits/:id returns the full audit envelope for an audit your account owns: scores, per-module breakdown, and every rule that was evaluated. This is the endpoint to poll after queueing with POST /api/audits.
- Idempotent and cacheable on the client side: an audit’s findings never change once `status === "completed"`.
- Includes `plan` (the user’s current effective plan) and `cooldown` (when the next audit on the same URL is allowed), which the dashboard uses to render action buttons.
- Returns `404` if the id doesn’t exist, `403` if the audit belongs to a different account, `401` if the Bearer token is missing or invalid.
Why it matters
GET /api/audits/:id is the readback for any audit you queued. It’s the call you put on a 3-5 second polling loop after POST /api/audits, and it’s the call you make later when an agent wants to revisit findings without re-running the audit.
Concrete workflows:
- An audit-on-PR bot polls every 3 seconds until `status` is `completed`, then ranks the `findings` by `severity` and posts the worst three to the PR.
- A content reviewer agent receives the audit id from a webhook and pulls the findings to draft a fix list ordered by module.
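The polling half of these workflows can be sketched as follows. This is a minimal example, not part of the API: the `fetch_audit` callable and the `poll_audit` helper are illustrative names, and the caller supplies the HTTP layer.

```python
import time

def poll_audit(fetch_audit, audit_id, interval=3.0, deadline=90.0):
    """Poll until the audit reaches a terminal status or the deadline passes.

    `fetch_audit` is any callable returning the decoded JSON envelope for an
    id (e.g. a thin wrapper around GET /api/audits/:id), which keeps the
    loop testable without real HTTP.
    """
    waited = 0.0
    while True:
        data = fetch_audit(audit_id)
        if data["audit"]["status"] in ("completed", "failed"):
            return data
        if waited >= deadline:
            # Still queued/running after the deadline; surface that state
            # to the caller instead of blocking forever.
            return data
        time.sleep(interval)
        waited += interval
```

Injecting the fetch function keeps the loop unit-testable and lets the same helper back both the PR bot and the webhook-driven reviewer.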
How to use it
The id is the integer returned by POST /api/audits. Bearer auth is required.
Request
```http
GET /api/audits/12345 HTTP/1.1
Host: app.metricspot.com
Authorization: Bearer ms_live_xxxxxxxxxxxxxxxxxxxxxxxx
```
curl
```shell
curl https://app.metricspot.com/api/audits/12345 \
  -H "authorization: Bearer ms_live_xxxxxxxxxxxxxxxxxxxxxxxx"
```
Node fetch
```javascript
const res = await fetch("https://app.metricspot.com/api/audits/12345", {
  headers: { authorization: "Bearer ms_live_xxxxxxxxxxxxxxxxxxxxxxxx" },
});
const data = await res.json();
console.log(data.audit.status, data.audit.score);
console.log(`${data.findings.length} findings`);
```
Python httpx
```python
import httpx

r = httpx.get(
    "https://app.metricspot.com/api/audits/12345",
    headers={"authorization": "Bearer ms_live_xxxxxxxxxxxxxxxxxxxxxxxx"},
    timeout=30.0,
)
data = r.json()
for f in data["findings"]:
    if not f["passed"] and f["severity"] in ("major", "critical"):
        print(f["severity"], f["rule_id"], f["title"])
```
Response
200 OK:
```json
{
  "audit": {
    "id": 12345,
    "user_id": 42,
    "domain": "example.com",
    "url": "https://example.com",
    "status": "completed",
    "score": 78,
    "audit_version": 7,
    "raw": {
      "module_scores": {
        "technical": 92,
        "onpage": 71,
        "performance": 84,
        "ai": 65,
        "modern_seo": 80,
        "social": 88,
        "accessibility": 74,
        "privacy": 82,
        "readability": 70,
        "tech_stack": 100,
        "organic_traffic": 100
      }
    },
    "perf_status": "ready",
    "error_message": null,
    "started_at": "2026-05-14T10:18:05.000Z",
    "completed_at": "2026-05-14T10:18:24.000Z",
    "created_at": "2026-05-14T10:18:04.000Z"
  },
  "findings": [
    {
      "id": 9001,
      "module": "onpage",
      "rule_id": "onpage.meta-description",
      "severity": "major",
      "passed": false,
      "title": "Missing meta description",
      "recommendation": "Add a 140-160 character meta description summarizing the page.",
      "data": {}
    }
  ],
  "plan": "starter",
  "cooldown": { "ready_at": "2026-05-15T10:18:04.000Z" }
}
```
Fields:
- `audit.status`: one of `queued`, `running`, `completed`, `failed`. Findings are only stable once `completed`.
- `audit.score`: integer 0 to 100, or `null` while queued or running.
- `audit.raw.module_scores`: per-module score 0 to 100. Modules that were not applicable for this run return `null`.
- `audit.perf_status`: `ready` once PageSpeed Insights returns; `pending` while PSI is still in flight.
- `audit.error_message`: populated only when `status === "failed"`.
- `findings[]`: every rule evaluated, including the ones that passed. Filter on `passed: false` to get just the failures.
- `findings[].severity`: `info`, `minor`, `major`, `critical`. Sort descending by severity for the most useful fix list.
- `findings[].data`: rule-specific extra data (e.g. measured pixel sizes, missing tag count).
- `plan`: the user’s effective plan: `free`, `starter`, or `pro`.
- `cooldown.ready_at`: ISO timestamp of the next allowed audit for this URL.
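As an example of consuming `module_scores`, the sketch below (helper names are hypothetical) skips non-applicable (`null`) modules and picks the weakest one:

```python
def applicable_scores(module_scores):
    """Drop modules that returned null (not applicable for this run)."""
    return {m: s for m, s in module_scores.items() if s is not None}

def weakest_module(module_scores):
    """Name of the lowest-scoring applicable module, or None if none apply."""
    scores = applicable_scores(module_scores)
    return min(scores, key=scores.get) if scores else None
```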
Error envelope
```json
{ "error": "Not found" }
```
Common errors
| Code | When | Action |
|---|---|---|
| `UNAUTHORIZED` (401) | Missing or invalid Bearer token | Mint a key at app.metricspot.com/settings/api-keys |
| `FORBIDDEN` (403) | Audit belongs to a different account | Use a key for the owning account, or call GET /api/audits to list yours |
| `AUDIT_NOT_FOUND` (404) | `:id` doesn’t exist | Call GET /api/audits and use a real id |
| `INVALID_URL` (400) | `:id` not a positive integer | Pass an integer id |
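The table above translates directly into a small dispatch helper. A sketch only: the `explain_error` name and the message strings are illustrative, not API output.

```python
NEXT_STEP = {
    401: "mint a key at app.metricspot.com/settings/api-keys",
    403: "use a key for the owning account, or call GET /api/audits to list yours",
    404: "call GET /api/audits and use a real id",
    400: "pass a positive integer id",
}

def explain_error(status_code, body):
    """Combine the error envelope with the suggested next step for logging."""
    error = body.get("error", "unknown error")
    action = NEXT_STEP.get(status_code, "retry or report")
    return f"{error} ({status_code}): {action}"
```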
Frequently asked questions
How often should I poll?
3 to 5 seconds is the sweet spot. Faster than that wastes round trips; slower delays your downstream comment or notification. Most audits complete inside the first 30 seconds; bail out at 90 seconds and surface the still-running state to the user.
Are findings sorted in a useful order?
They are returned in insertion order (by id ascending), which mirrors how the audit pipeline runs the modules. For a human-facing fix list, sort client-side by severity (critical > major > minor > info), then by module.
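That client-side ordering can be sketched as follows (the `fix_list` helper is hypothetical):

```python
# Lower rank = more severe; used as the primary sort key.
SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2, "info": 3}

def fix_list(findings):
    """Failed findings ordered by severity (worst first), then by module."""
    failed = [f for f in findings if not f["passed"]]
    return sorted(failed, key=lambda f: (SEVERITY_ORDER[f["severity"]], f["module"]))
```

Because Python's sort is stable, findings with equal severity and module keep their pipeline insertion order.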
What happens if I fetch an audit while it’s still running?
You’ll get status: "queued" or status: "running" with an empty findings array and score: null. The other fields (id, domain, url, created_at) are stable from the first response.
Does this consume any plan allowance?
No. Only POST /api/audits (and only when it succeeds) decrements the monthly counter. GET /api/audits/:id is unmetered.
Last updated 2026-05-14