Validation is the step where an auditor reviews your fixes to confirm that previously identified accessibility issues have been properly resolved. It happens after remediation and before any conformance claim is made. Without validation, you have no way to know whether your fixes actually work.
The concept is simple. An audit identifies issues. Your development team remediates those issues. Then the auditor goes back through each one to verify the fix holds up across assistive technologies, browsers, and devices. Validation closes the loop.
| Aspect | Detail |
|---|---|
| When it happens | After remediation, before conformance is claimed |
| Who performs it | The original auditor or auditing team |
| What is evaluated | Each previously identified issue, one by one |
| Goal | Confirm fixes resolve the original issue without creating new ones |
| Standard | WCAG 2.1 AA or WCAG 2.2 AA conformance criteria |

What Happens During Validation?
The auditor takes the original audit report and works through each issue that was flagged. For every item your team marked as fixed, the auditor evaluates the updated page or screen to confirm the fix meets the relevant WCAG success criterion.
This is not a quick scan. The auditor uses assistive technologies like screen readers, keyboard navigation, and magnification tools to verify that the remediation works in real conditions. A fix that looks correct in the code but breaks under a screen reader is not a valid fix.
Each issue gets one of three outcomes: resolved, partially resolved, or still present. Partially resolved means the fix left part of the original nonconformance intact, or it addressed the original issue but introduced a new problem in the process.
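The decision that follows from these outcomes is simple: if any issue is anything other than fully resolved, another remediation round is needed. A minimal sketch of that bookkeeping (the enum names and the sample results are illustrative, not taken from any auditing tool):

```python
from enum import Enum

class Outcome(Enum):
    """The three possible outcomes for each issue during validation."""
    RESOLVED = "resolved"
    PARTIALLY_RESOLVED = "partially resolved"
    STILL_PRESENT = "still present"

def needs_another_round(outcomes):
    """Another remediation round is required unless every issue is fully resolved."""
    return any(o is not Outcome.RESOLVED for o in outcomes)

# Example validation results for three audited issues
results = [Outcome.RESOLVED, Outcome.PARTIALLY_RESOLVED, Outcome.RESOLVED]
print(needs_another_round(results))  # True: one fix was only partially resolved
```

The point of the sketch is that "partially resolved" and "still present" are treated identically for scheduling purposes: both send the issue back to the development team.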
Why Can’t Developers Validate Their Own Fixes?
Developers are essential to remediation. But self-review introduces the same blind spots that made the issue possible in the first place. A developer who wrote the fix is the least likely person to catch a flaw in it.
Independent validation from an auditor provides objectivity. The auditor is evaluating against WCAG conformance criteria, not against what the developer intended. This distinction matters. Accessibility issues are defined by how assistive technology users experience the content, not by what the code appears to do.
Audits should always be fully manual, and the validation process follows the same standard. Every fix is evaluated by a human auditor, not by an automated scan.
How Does Validation Relate to the Audit Process?
Validation is a phase within the larger accessibility project lifecycle. Here is how the sequence flows:
First, an accessibility audit identifies issues against a WCAG standard, typically WCAG 2.1 AA or WCAG 2.2 AA. The audit report documents each issue with its location, the relevant success criterion, and remediation guidance.
Second, your development team works through the report and fixes each issue. This is remediation.
Third, validation. The auditor re-evaluates each fix. Issues confirmed as resolved are closed out. Issues that remain open go back to your team for another round of remediation.
Some projects complete validation in a single pass. Others require two or three rounds, depending on the complexity of the digital asset and how accurately the initial fixes were implemented.
What Does the Auditor Check During Validation?
The auditor checks the specific element and context described in the original audit report. If the issue was a missing form label on a login page, the auditor navigates to that login page, interacts with the form using a screen reader, and confirms the label is now correctly associated.
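The label-association case can be illustrated with a small static check of the kind an auditor might script as a first pass. This is a stdlib sketch, not a real auditing tool: it covers only the `for`/`id` and `aria-label` cases, and it is no substitute for verifying the announcement in an actual screen reader.

```python
from html.parser import HTMLParser

class LabelCheck(HTMLParser):
    """Collects label 'for' targets and input attributes from a markup string."""
    def __init__(self):
        super().__init__()
        self.label_targets = set()
        self.inputs = []  # list of (id, aria-label) pairs

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label" and "for" in a:
            self.label_targets.add(a["for"])
        elif tag == "input" and a.get("type") != "hidden":
            self.inputs.append((a.get("id"), a.get("aria-label")))

def unlabeled_inputs(markup):
    """Return indexes of inputs with no associated <label> and no aria-label."""
    checker = LabelCheck()
    checker.feed(markup)
    return [i for i, (id_, aria) in enumerate(checker.inputs)
            if not aria and (id_ is None or id_ not in checker.label_targets)]

fixed = '<label for="user">Username</label><input id="user" type="text">'
broken = '<input id="user" type="text">'
print(unlabeled_inputs(fixed))   # [] -> label correctly associated
print(unlabeled_inputs(broken))  # [0] -> first input is unlabeled
```

A real validation pass would then confirm, with a screen reader, that the label is actually announced when the input receives focus.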
Beyond the individual fix, the auditor also checks for regressions. A regression is when a fix for one issue creates a new issue elsewhere. For example, adding an ARIA label to a button might resolve one problem but cause a duplicate announcement in certain screen readers. Validation catches this.
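The duplicate-announcement pattern described above can be approximated with a heuristic check: a button whose `aria-label` merely repeats its visible text is a candidate for redundant announcement in some screen readers. Whether duplication actually occurs varies by screen reader and browser, so this sketch (stdlib only, names are illustrative) flags candidates for human review rather than deciding pass or fail:

```python
from html.parser import HTMLParser

class ButtonAudit(HTMLParser):
    """Records each button's aria-label alongside its visible text content."""
    def __init__(self):
        super().__init__()
        self.buttons = []       # list of [aria_label, visible_text]
        self._in_button = False

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self._in_button = True
            self.buttons.append([dict(attrs).get("aria-label"), ""])

    def handle_data(self, data):
        if self._in_button:
            self.buttons[-1][1] += data

    def handle_endtag(self, tag):
        if tag == "button":
            self._in_button = False

def redundant_labels(markup):
    """Flag buttons whose aria-label simply repeats the visible text."""
    audit = ButtonAudit()
    audit.feed(markup)
    return [i for i, (aria, text) in enumerate(audit.buttons)
            if aria and aria.strip().lower() == text.strip().lower()]

print(redundant_labels('<button aria-label="Search">Search</button>'))  # [0]
print(redundant_labels('<button>Search</button>'))                      # []
```

An aria-label on an icon-only button (where it adds information the visible content lacks) would not be flagged, which matches the intent: the label is only suspect when it duplicates what is already announced.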
The evaluation covers both the technical implementation (is the code correct?) and the functional experience (does it work for a person using assistive technology?). Both must pass.
Does Validation Involve Automated Scans?
Automated scans and validation are separate activities. Scans flag only approximately 25% of issues, which means they miss the majority of what a manual evaluation identifies. Validation requires the same depth as the original audit.
An auditor may use automated tools as a preliminary check during validation, but the determination of whether a fix passes is always a human judgment call. No scan can confirm that a screen reader interaction is correct, that focus order is logical, or that an alternative text description is meaningful.
How Long Does Validation Take?
Turnaround depends on the number of issues being reviewed and the complexity of the digital asset. A web app with 80 identified issues will take longer to validate than an informational website with 15.
Most validation rounds take a few business days. If all fixes are implemented correctly the first time, a single round may be all that is needed. In practice, a second round is common because some fixes are partially correct or introduce minor regressions.
What Happens After Validation Is Complete?
Once every issue is confirmed as resolved, WCAG conformance can be claimed for the evaluated scope. This is the point where certification documents, accessibility statements, or an ACR (if a VPAT was part of the project) reflect the current state of conformance.
Conformance is a snapshot. It represents the state of the digital asset at the time of validation. Any subsequent code changes, content updates, or third-party integrations can introduce new issues. That is why periodic re-evaluation is recommended.
Can Validation Be Skipped If the Fixes Seem Obvious?
No. Skipping validation means you have no independent confirmation that your remediation worked. Even simple fixes can behave differently across assistive technologies. Without validation, a conformance claim is unsupported.
Is Validation the Same as User Evaluation With Assistive Technology?
They overlap but serve different purposes. Validation confirms that specific identified issues have been fixed according to WCAG criteria. User evaluation with assistive technology assesses the overall experience of the digital asset from the perspective of someone with a disability. Both are valuable. Validation is targeted; user evaluation is broad.
How Many Rounds of Validation Should a Project Expect?
Most accessibility projects go through one to three rounds. The number depends on the accuracy of the initial remediation and the total issue count. Projects with strong developer guidance from the audit report often complete validation in fewer rounds.
Validation is what separates a real accessibility project from a checkbox exercise. It is the auditor’s confirmation that the work actually holds up for the people who depend on it.
Contact Kris Rivenburgh to discuss your accessibility audit, remediation, or validation needs.