About TestAccessibility

TestAccessibility is a small, public-focused project: a free URL scanner, shareable report pages, and plain-language explainers so teams can find obvious barriers faster.

What it is

You paste a URL. We load the page in a browser, run automated accessibility rules (via axe-core), and save a public report you can link to in tickets, Slack, or email. The site also publishes guides and issue pages so the same vocabulary shows up in scans and documentation.
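The step from raw rule output to a shareable report can be sketched as a small summarizing function. The input shape below (a `violations` array whose entries carry `id`, `impact`, and `nodes`) follows axe-core's documented results object; the fields of the saved record (`buildReport`, `severityCounts`, and so on) are illustrative names, not TestAccessibility's actual schema:

```javascript
// Sketch: turn an axe-core results object into a shareable report record.
// The input shape (violations: [{ id, impact, nodes }]) matches axe-core's
// output; the record's field names are illustrative, not the real schema.
function buildReport(url, results, scannedAt = new Date().toISOString()) {
  const severityCounts = { critical: 0, serious: 0, moderate: 0, minor: 0 };
  for (const v of results.violations) {
    if (v.impact in severityCounts) {
      severityCounts[v.impact] += v.nodes.length; // one count per affected element
    }
  }
  return {
    url,
    scannedAt,
    ruleCount: results.violations.length,
    severityCounts,
    rules: results.violations.map(v => ({ id: v.id, impact: v.impact })),
  };
}

// Example with a hand-written, axe-like result:
const report = buildReport('https://example.com', {
  violations: [
    { id: 'image-alt', impact: 'critical', nodes: [{}, {}] },
    { id: 'color-contrast', impact: 'serious', nodes: [{}] },
  ],
});
console.log(report.severityCounts); // { critical: 2, serious: 1, moderate: 0, minor: 0 }
```

Counting affected elements per severity, rather than rules alone, is what makes two reports of the same URL comparable after a fix.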

Why it exists

Many teams still ship accessibility problems that automated tools catch early—missing labels, broken contrast, empty button names, bad heading order. The goal here is not to replace experts or manual QA, but to lower the cost of the first pass: something shareable, repeatable, and honest about limits.

What automated testing can detect

  • Many failures that map to programmatic rules: contrast ratios, presence of accessible names, duplicate IDs, invalid ARIA, missing alt text on images exposed to the accessibility tree.
  • Structural signals: skipped heading levels, empty headings, some landmark issues.
  • Regressions between deploys when you rescan the same URL pattern.
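"Programmatic rule" means the check reduces to a formula. Contrast is the clearest case: WCAG 2.x defines relative luminance from linearized sRGB channels and takes the ratio (L1 + 0.05) / (L2 + 0.05). A minimal sketch, with the formulas taken from the WCAG definition and the function names mine:

```javascript
// WCAG 2.x contrast ratio between two sRGB colors given as [r, g, b] in 0-255.
// The luminance and ratio formulas follow the WCAG definition; the helper
// names are illustrative.
function relativeLuminance([r, g, b]) {
  const lin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(2)); // "21.00"
```

Because the answer is a number compared against a threshold (4.5:1 for normal text at level AA), a tool can decide it without human judgment, which is exactly why checks like this sit on the "detectable" side of the line.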

What it cannot detect

  • Whether instructions are understandable in context, or whether error messages help users fix mistakes.
  • Full coverage of WCAG—many success criteria need human judgment, multi-step tasks, or assistive technology.
  • Problems that only appear after login, behind feature flags, or in states the single page load does not reach.
  • Legal compliance; regulators and courts look beyond tooling.

Who should use it

Developers checking a staging URL, content editors validating a template, agencies sending a client a concrete artifact, and internal teams who want a dated snapshot after a release. It is most useful when someone owns follow-up: file issues, fix, rescan.

How public reports help teams

A shared report URL beats screenshots alone: it records the scanned URL and scan time, a score summary, severity counts, and per-rule details. You can compare new reports after fixes instead of arguing from memory. The report is a starting point for discussion, not a certificate.
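Comparing two scans of the same URL can be as simple as diffing their severity counts. A sketch, assuming each report carries a severity-count map like the one above (the field names and sample numbers are invented for illustration):

```javascript
// Diff severity counts between two scans of the same URL.
// Negative numbers mean fewer issues in the newer report.
function diffSeverity(before, after) {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  const delta = {};
  for (const k of keys) {
    delta[k] = (after[k] ?? 0) - (before[k] ?? 0);
  }
  return delta;
}

// Hypothetical counts from a scan before and after a round of fixes:
const before = { critical: 3, serious: 5, moderate: 2, minor: 4 };
const after  = { critical: 0, serious: 2, moderate: 2, minor: 5 };
console.log(diffSeverity(before, after));
// { critical: -3, serious: -3, moderate: 0, minor: 1 }
```

A diff like this turns "it feels better now" into a concrete artifact for the ticket, which is the point of a dated, linkable report.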

When manual testing is still necessary

Use a keyboard for every interactive control on primary flows. Try a screen reader on one critical journey (e.g. checkout or signup). Test with real content, not only empty states. For high-risk or regulated launches, budget time with accessibility specialists and users with disabilities—not only automation.

Try the scanner

No account is required for public scans and reports.