Agent readability

Agent readability for the web.

AI agents — ChatGPT, Claude, Copilot, Cursor — are becoming a primary way people discover and consume the web. a14y is the open spec, the versioned scorecard, and the tools that score any site for how discoverable, parseable, and comprehensible it is to those agents. Documentation sites are a common high-value target, but the scorecard works for marketing sites, product pages, help centers — anything agents might read.

npx a14y your-site.com

Read the spec →

Run the scorecard yourself

Same engine, two surfaces. Both produce the same score for the same URL and scorecard version.

CLI

Audit any page or whole site from your terminal. Outputs a scored text report, JSON, or a Markdown fix-list ready to hand to a coding agent.

npm install -g a14y
a14y your-site.com

Chrome Extension

Score any page you're looking at, right in the browser. Switch scorecard versions from the popup; jump straight to the failing check's fix guidance.

Chrome Web Store

View source →

Coming soon to the Chrome Web Store. For now, load the unpacked build from GitHub.

How teams use it

Run once for a baseline, fix the biggest offenders, re-run in CI — or hand the whole fix-list to a coding agent and let it ship the changes.

Step 1
Audit

a14y your-site.com scores a single page. Add --mode site to crawl the whole site. Pin a specific --scorecard version for reproducibility.
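The audit variants above, side by side. Flag names are the ones this page mentions; the version placeholder is illustrative, not a real release:

```shell
# Score a single page
a14y your-site.com

# Crawl and score the whole site
a14y your-site.com --mode site

# Pin a scorecard version so the score is reproducible
# (<version> is a placeholder — substitute a real scorecard version)
a14y your-site.com --scorecard <version>
```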

Step 2
Report

Add --output agent-prompt to get a Markdown fix-list that a coding agent can consume directly — every failure with a link to the detection logic and the fix.
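A minimal sketch of producing that fix-list, assuming the report is written to stdout so it can be redirected (the output filename is illustrative):

```shell
# Emit a Markdown fix-list a coding agent can consume directly
a14y your-site.com --output agent-prompt > a14y-fixes.md
```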

Step 3
Improve

Ship the fixes, re-run with --fail-under in CI, and watch the score climb. Every scorecard version stays reproducible forever so historical scores trend cleanly.
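A minimal CI gate using the flags named above. The threshold of 80 and the pairing with --mode site are illustrative assumptions, not defaults documented here:

```shell
# In CI: exit non-zero (failing the build) if the site-wide score
# drops below the chosen threshold
a14y your-site.com --mode site --fail-under 80
```

Because --fail-under makes the command's exit code reflect the score, it drops into any CI system's pass/fail step without extra scripting.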