How to Quantify Accessibility Improvements

To quantify accessibility improvements, track the number of WCAG issues identified in an audit, categorize them by severity, and measure how many are resolved and validated over time. The most reliable baseline comes from a manual audit that reports issues against specific WCAG success criteria. Progress is measured by comparing the original issue count to the remaining open issues after remediation and validation. Severity weighting, criterion coverage, and validation status all factor into how meaningful the number actually is. Raw issue counts alone do not tell the full story. Context around severity, scope, and conformance level is what makes the data useful.

Core Metrics for Measuring Accessibility Progress
Total issues identified: baseline count from the initial audit report
Issues resolved and validated: fixes confirmed by an auditor, not just marked complete
Severity distribution: critical, high, medium, low breakdown across the report
WCAG criterion coverage: which success criteria currently pass versus fail
Conformance level reached: progress toward full WCAG 2.1 AA or 2.2 AA

Start With a Real Baseline

You cannot quantify improvements without a baseline. That baseline comes from a manual accessibility audit conducted against a WCAG standard, typically WCAG 2.1 AA or 2.2 AA.

Automated scans cannot serve this role. They flag only approximately 25% of issues, and they cannot confirm WCAG conformance. Any number produced by a scan represents a sliver of reality, not the full picture.

An audit report from a qualified accessibility auditor gives you the complete issue list mapped to specific success criteria. That list is what every future measurement compares against.

What Should You Actually Count?

Raw issue count is the starting point, but it is rarely the most useful number on its own. A site with 300 low-severity issues is in better shape than a site with 40 critical ones.

Track these counts separately: total issues by severity (critical, high, medium, low), issues per WCAG success criterion to see patterns, issues per page or screen to identify problem areas, and issues by component type such as forms, navigation, and media.

This segmentation lets you report progress in ways that actually map to user impact and legal exposure, not just a shrinking number.
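As a rough sketch, the segmented counts described above can be tallied from a flat issue list exported from a tracker or spreadsheet. The field names and sample issues here are illustrative assumptions, not a standard audit schema:

```python
from collections import Counter

# Hypothetical audit issues; field names are illustrative, not a standard schema.
issues = [
    {"severity": "critical", "criterion": "1.1.1", "page": "/checkout", "component": "forms"},
    {"severity": "high",     "criterion": "2.4.7", "page": "/checkout", "component": "forms"},
    {"severity": "low",      "criterion": "1.4.3", "page": "/home",     "component": "navigation"},
]

# One Counter per segmentation axis: severity, success criterion, page, component.
by_severity  = Counter(i["severity"]  for i in issues)
by_criterion = Counter(i["criterion"] for i in issues)
by_page      = Counter(i["page"]      for i in issues)
by_component = Counter(i["component"] for i in issues)

print(by_page.most_common(1))  # highlights the worst problem area
```

The same export can feed all four views, so the segmentation costs nothing beyond recording severity, criterion, page, and component on each issue as it is logged.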

Measure Remediation, Then Validation

Fixing an issue and confirming it is fixed are two different things. A developer marking a ticket complete is not the same as an auditor verifying the fix meets the success criterion.

Meaningful quantification separates these two stages. The remediation count reflects issues the team has addressed in code. The validation count reflects issues an auditor has reviewed and confirmed resolved.

The validated number is the one that matters for conformance claims. It is the figure you can defend in a procurement review, a legal inquiry, or an Accessibility Conformance Report (ACR).

Use Severity Weighting for Better Reporting

If leadership wants a single progress percentage, weight issues by severity. A critical issue resolved counts for more than a low-severity cosmetic problem.

One practical approach is to assign point values (critical = 4, high = 3, medium = 2, low = 1) and report the percentage of total weighted points resolved. The Risk Factor or User Impact prioritization formulas used during audit review can inform these weights.

This method prevents a team from reporting 80% progress while every critical issue is still open.
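A minimal sketch of the weighted-points approach, using the point values suggested above (the issue list and `validated` flag are illustrative assumptions):

```python
# Severity weights from the point values above: critical=4, high=3, medium=2, low=1.
WEIGHTS = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def weighted_progress(issues):
    """Percentage of severity-weighted points that are resolved AND validated."""
    total = sum(WEIGHTS[i["severity"]] for i in issues)
    done = sum(WEIGHTS[i["severity"]] for i in issues if i["validated"])
    return 100.0 * done / total if total else 100.0

issues = [
    {"severity": "critical", "validated": False},
    {"severity": "low", "validated": True},
    {"severity": "low", "validated": True},
]

# Raw count says 2 of 3 issues closed (67%); weighted progress is 2 of 6 points.
print(round(weighted_progress(issues), 1))  # -> 33.3
```

Note the gap in the example: two of three issues are closed, but because the open one is critical, weighted progress sits at a third, which is exactly the distortion this method is meant to surface.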

Track Progress Against WCAG Criteria

A second useful view is criterion-level progress. How many WCAG 2.1 AA success criteria currently pass across your audited scope? How many still have at least one open issue?

This view maps directly to what goes into a VPAT and ACR, where each criterion is evaluated as Supports, Partially Supports, Does Not Support, or Not Applicable. Watching criteria move from Does Not Support to Supports over time is one of the cleanest ways to show measurable accessibility progress.
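A simplified sketch of deriving criterion-level status from the open-issue list. This collapses the four ACR designations to two: a criterion is treated as Supports only when every issue filed against it has been validated as fixed. Real ACRs also use Partially Supports and Not Applicable, which require auditor judgment rather than counting, so treat this as a progress view, not a conformance claim:

```python
def criterion_status(issues, audited_criteria):
    """Map each audited success criterion to a simplified ACR-style status."""
    open_count = {}
    for issue in issues:
        if not issue["validated"]:
            open_count[issue["criterion"]] = open_count.get(issue["criterion"], 0) + 1
    return {
        c: ("Supports" if open_count.get(c, 0) == 0 else "Does Not Support")
        for c in audited_criteria
    }

# Hypothetical data: one criterion remediated and validated, one still open.
issues = [
    {"criterion": "1.1.1", "validated": True},
    {"criterion": "1.4.3", "validated": False},
]
status = criterion_status(issues, ["1.1.1", "1.4.3", "2.4.7"])
print(status)
```

Only pass criteria that were actually audited into `audited_criteria`; a criterion with zero issues because it was never tested should not be reported as Supports.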

How Do You Report Progress Over Time?

Snapshots are useful, but trendlines tell the real story. A monthly or quarterly report that shows issue counts, severity distribution, and criterion status over time gives leadership a clear view of whether the work is moving in the right direction.

Reports should include the baseline issue count from the original audit, current open issues broken down by severity, issues validated since the last report, new issues introduced from new features or content, and criterion-level conformance status.

That last point catches something teams often miss. Accessibility is not static. New code, new pages, and new content can introduce new issues, which is why remediation and validation work is ongoing rather than a one-time project.
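The report contents listed above can be captured as one record per reporting period, so trendlines fall out of comparing consecutive snapshots. The field names here are illustrative, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class ProgressSnapshot:
    # Field names are illustrative; map them to whatever your tracker exports.
    period: str                  # e.g. "2024-Q3"
    baseline_issues: int         # total from the original audit
    open_by_severity: dict       # e.g. {"critical": 0, "high": 2, ...}
    validated_since_last: int    # fixes confirmed by an auditor this period
    new_issues_introduced: int   # regressions from new features or content
    criteria_supporting: int     # success criteria with no open issues

snap = ProgressSnapshot(
    period="2024-Q3",
    baseline_issues=120,
    open_by_severity={"critical": 0, "high": 2, "medium": 5, "low": 11},
    validated_since_last=14,
    new_issues_introduced=3,
    criteria_supporting=41,
)
print(snap.period, sum(snap.open_by_severity.values()), "issues still open")
```

Tracking `new_issues_introduced` as its own field is what keeps the trendline honest: a period where validations and regressions cancel out looks flat, which is exactly the signal that the work is maintenance rather than progress.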

Pair the Numbers With Qualitative Evidence

Numbers are persuasive, but user evaluation with people who rely on assistive technology adds evidence that a spreadsheet cannot. A screen reader user completing a checkout flow that previously blocked them is a meaningful data point, even if it does not fit neatly into an issue counter.

Combining quantitative audit data with qualitative user evaluation gives you the fullest possible view of accessibility improvement.

What tools help quantify accessibility improvements?

A spreadsheet works for smaller projects. For larger efforts, an accessibility platform built to track audit issues automates severity rollups, criterion status, validation workflows, and progress reporting. The key requirement is that the data source be an audit, not a scan.

How often should progress be measured?

Monthly reporting works well during active remediation. Quarterly reporting is reasonable once a product has reached conformance and is in a maintenance cycle. Major product releases should trigger a targeted re-evaluation regardless of schedule.

Can you quantify improvements without an audit?

Not meaningfully. Scan-based metrics move without reflecting real user impact, and they miss most of what actually matters. An audit is the only reliable baseline for conformance-grade measurement.

Audit data is the foundation of any honest accessibility measurement program. Numbers without that foundation tend to flatter the work rather than describe it.

For help setting up a measurable accessibility program, contact Kris Rivenburgh.