Accessibility Audits: Mobile Apps vs Web Apps

Accessibility audits for mobile apps and web apps share the same goal: identifying issues against WCAG 2.1 AA or WCAG 2.2 AA. But the way each audit is conducted, the environments evaluated, and the types of issues identified all differ significantly between the two.

A web app audit evaluates content rendered in a browser. A native mobile app audit evaluates content rendered by a device’s operating system. That distinction changes the scope, the tools an auditor uses, and the cost of the engagement.

Key Differences Between Mobile App and Web App Audits
| Factor | Web App Audit | Mobile App Audit |
| --- | --- | --- |
| Evaluation Environment | Desktop and mobile browsers | iOS and Android native environments |
| Assistive Technology | NVDA, JAWS, VoiceOver (browser) | VoiceOver (iOS), TalkBack (Android) |
| Applicable Standard | WCAG 2.1 AA or WCAG 2.2 AA | WCAG 2.1 AA or WCAG 2.2 AA |
| Typical Cost | Lower per screen on average | Higher due to multi-environment evaluation |
| Common Issue Types | Keyboard navigation, color contrast, form labels | Touch target size, gesture alternatives, screen reader labeling |

What Does a Web App Audit Cover?

A web app audit evaluates screens or pages rendered in a browser environment. The auditor works through each unique screen state, checking conformance against WCAG success criteria. Desktop and mobile browser environments are both in scope.

Keyboard navigation is central to a web app evaluation. Every interactive element, from form fields to modal dialogs, needs to be operable without a mouse. The auditor also evaluates how screen readers like NVDA and JAWS interpret the page structure, including headings, landmarks, and ARIA attributes.
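Some of these checks can be scripted. As a toy illustration (the HTML snippet, function name, and ids below are invented for the example), a sketch in Python of one check an auditor might automate, flagging form inputs that have neither an associated label nor an aria-label:

```python
from html.parser import HTMLParser

class LabelCheck(HTMLParser):
    """Collect <label for="..."> targets and <input> elements."""

    def __init__(self):
        super().__init__()
        self.labeled_ids = set()   # ids referenced by a <label for="...">
        self.inputs = []           # (id, has_aria_label) for each <input>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.labeled_ids.add(attrs["for"])
        elif tag == "input":
            self.inputs.append((attrs.get("id"), "aria-label" in attrs))

def unlabeled_inputs(html):
    """Return ids of inputs with no label association and no aria-label."""
    checker = LabelCheck()
    checker.feed(html)
    return [i for i, has_aria in checker.inputs
            if not has_aria and i not in checker.labeled_ids]

snippet = """
<label for="email">Email</label>
<input id="email" type="text">
<input id="search" type="text">
"""
print(unlabeled_inputs(snippet))  # → ['search']
```

A script like this only catches the mechanical half of the problem; whether the label text is actually meaningful still requires a human auditor.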

Because browsers provide a relatively standardized rendering layer, the behavior of HTML elements is more predictable. That consistency means web app audits tend to move through screens at a slightly faster pace than their mobile counterparts.

What Changes with a Native Mobile App Audit?

A native mobile app runs directly on the device operating system, not inside a browser. That changes what the auditor evaluates and how.

Instead of keyboard navigation, the auditor evaluates touch interactions: swipe gestures, tap targets, pinch-to-zoom behavior, and whether custom gestures have accessible alternatives. Touch target sizing is a frequent source of issues on mobile, especially with densely packed interfaces.
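The sizing rule itself is simple enough to sketch. WCAG 2.2's SC 2.5.8 Target Size (Minimum) calls for targets of at least 24 by 24 CSS pixels at AA; the check below ignores the criterion's exceptions (spacing, inline targets, equivalent controls), and the element data is invented for the example:

```python
# Minimum target dimension per WCAG 2.2 SC 2.5.8 (AA), in CSS pixels.
MIN_CSS_PX = 24

def undersized_targets(elements):
    """Return names of tap targets smaller than 24x24 CSS pixels."""
    return [e["name"] for e in elements
            if e["width"] < MIN_CSS_PX or e["height"] < MIN_CSS_PX]

toolbar = [
    {"name": "back",   "width": 44,  "height": 44},
    {"name": "share",  "width": 20,  "height": 20},  # too small
    {"name": "submit", "width": 120, "height": 36},
]
print(undersized_targets(toolbar))  # → ['share']
```

In a real audit the auditor measures rendered sizes on device and then applies the exceptions by judgment, which is why this stays a manual check.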

Screen reader evaluation shifts to VoiceOver on iOS and TalkBack on Android. Each has its own interaction model. An element that reads correctly with VoiceOver might behave differently under TalkBack, so both platforms need to be evaluated separately. This is one of the primary reasons mobile app audits carry a higher cost.

Why Does Environment Matter So Much?

A web app lives in the browser. The browser provides built-in accessibility support for standard HTML elements. Native apps rely on platform-specific accessibility APIs, which differ between iOS and Android.

On iOS, accessibility properties are set through the UIAccessibility API (for example, accessibilityLabel and accessibilityTraits). On Android, they are configured through AccessibilityNodeInfo. When developers build custom controls, they need to implement accessibility properties correctly for each platform. If they miss one, the auditor will identify the gap.
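A conceptual sketch (plain Python, not runnable against either platform) of how the same custom control's properties map to each API. The property names on the right-hand side are real platform fields; the mapping itself is an illustration, not production code:

```python
# A custom "Play" button described platform-neutrally (invented structure).
CONTROL = {"label": "Play", "role": "button", "disabled": False}

def to_ios(control):
    # UIAccessibility: accessibilityLabel carries the name;
    # accessibilityTraits carries role and state.
    traits = ["button"]
    if control["disabled"]:
        traits.append("notEnabled")
    return {"accessibilityLabel": control["label"],
            "accessibilityTraits": traits}

def to_android(control):
    # AccessibilityNodeInfo: contentDescription carries the name;
    # className hints the role; isEnabled carries the state.
    return {"contentDescription": control["label"],
            "className": "android.widget.Button",
            "isEnabled": not control["disabled"]}

print(to_ios(CONTROL))
print(to_android(CONTROL))
```

The point of the sketch: the same semantic information must be expressed twice, in two different vocabularies, which is exactly where custom controls tend to fall short on one platform or the other.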

Orientation changes also come into play with mobile. WCAG's Orientation criterion (SC 1.3.4) requires content to be usable in both portrait and landscape modes unless a specific orientation is essential to the function. Web apps adapt through responsive design. Mobile apps need to explicitly support both orientations, and many do not.

Are the WCAG Criteria Different for Each?

The standard is the same. Both mobile app and web app audits evaluate against WCAG 2.1 AA or WCAG 2.2 AA. There is no separate mobile version of WCAG.

However, certain success criteria carry more weight in mobile contexts. Target size requirements (addressed directly at AA by SC 2.5.8 Target Size (Minimum) in WCAG 2.2) apply to both, but touch interfaces surface these issues more frequently. Gesture-based interactions also map to criteria around pointer gestures (SC 2.5.1) and motion actuation (SC 2.5.4) that are less commonly triggered in browser-based apps.

How Does Cost Compare?

Mobile app audits generally cost more than web app audits of comparable scope. Two factors drive this.

First, mobile audits require evaluation across at least two platforms: iOS and Android. Each platform has its own screen reader, its own accessibility API, and its own set of behavioral quirks. Evaluating both effectively doubles the assistive technology evaluation work.

Second, mobile app screens often contain complex custom components, such as carousels, bottom sheets, and gesture-driven navigation, that require more time to evaluate than standard web elements. Custom components in web apps also take longer, but mobile development frameworks produce more of them by default.

For organizations budgeting a digital accessibility project, mobile app audits typically fall at the higher end of per-screen pricing. Web app audits sit lower, though complexity still plays a role. A web app with heavy interactive functionality may cost as much as a mobile app with simple screens.

Can You Use Automated Scans for Either?

Automated scans exist for both web and mobile environments, but they serve the same limited role in each. Scans only flag approximately 25% of issues. They cannot determine WCAG conformance for a web app or a mobile app.

For web apps, browser-based scanning tools can check DOM structure, color contrast, and some ARIA usage. For mobile apps, platform-specific tools like Accessibility Scanner (Android) or Xcode’s Accessibility Inspector (iOS) can catch certain surface-level issues.
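The contrast check is one of the few that is fully mechanical, which is why scanners handle it well. A sketch of the computation, following the WCAG 2.x formulas for relative luminance of sRGB colors and contrast ratio:

```python
def _channel(c8):
    """Linearize one 8-bit sRGB channel per the WCAG 2.x formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) tuple of 0-255 values."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio; AA requires
# at least 4.5:1 for normal text and 3:1 for large text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

What a scanner cannot decide is whether a given piece of text is decorative, disabled, or part of a logo, all of which change whether the criterion applies.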

In both cases, a manual accessibility audit conducted by a qualified auditor is the only way to determine WCAG conformance. Scans are a separate activity from an audit.

What About ACR and VPAT Considerations?

If your product needs an ACR (Accessibility Conformance Report, the completed document based on the VPAT, or Voluntary Product Accessibility Template), the audit scope determines what goes into it. A web app ACR documents conformance for browser-based content. A mobile app ACR documents conformance for native iOS and Android environments.

Some products span both. A SaaS company might have a web app and a companion mobile app. In that case, the ACR may cover both under a single document with separate sections, or two ACRs may be produced. The WCAG edition of the VPAT is the default for most organizations in this situation.

Procurement teams reviewing ACRs will look for evidence that the evaluation covered all relevant environments. An ACR that only documents web app conformance when a mobile app exists leaves a gap that buyers will notice.

Do I Need Separate Audits for iOS and Android?

Not necessarily separate audits, but both platforms need to be evaluated within a single mobile audit engagement. A thorough mobile audit includes VoiceOver evaluation on iOS and TalkBack evaluation on Android. If your app only exists on one platform, the scope narrows accordingly and the cost reflects that.

Should I Audit My Web App or Mobile App First?

Prioritize the asset with the larger user base or the one facing compliance pressure. If a procurement contract requires an ACR for your web app, start there. If your mobile app is the primary product and ADA compliance is a concern, prioritize that.

How Often Should Mobile and Web App Audits Be Repeated?

After any significant release or product change that affects the user interface. New features, redesigned screens, or framework migrations can all introduce accessibility issues. There is no fixed calendar requirement, but audit results go stale as the product evolves. Most organizations that maintain conformance re-evaluate after each major release cycle.

The environment changes the audit, but the objective stays constant: identifying every issue that prevents WCAG conformance and giving your team a clear remediation path.

Contact Kris Rivenburgh to scope an accessibility audit for your web app or mobile app.