Comprehensive Accessibility & Inclusive Design Audit

1. General Information

This form gathers essential details about the system under audit and the auditing context.

 

System/Project Name

Version or Release

Primary URL or Location

System Type

 

Audit Start Date

Lead Auditor Name

Contact Email

2. Scope & Context

Define the boundaries and objectives of this audit.

 

Accessibility Guidelines to Evaluate Against

Is this a first-time audit?

 

Describe any previous informal checks or known pain points:

 

Summarize key findings and remediations from the last audit:

User Groups in Scope

Are third-party components (plugins, widgets, payment gateways) in scope?

Is legacy content (older documents, multimedia) included?

Any out-of-scope items or constraints?

3. Assistive Technology Testing Matrix

List the environments and tools used to validate compatibility.

 

Assistive Technologies & Platforms

| #  | AT Tool   | Platform   | Browser/Version | Tested? | Severity of Issues (0 = None, 5 = Critical) | Notes/Known Bugs               |
|----|-----------|------------|-----------------|---------|---------------------------------------------|--------------------------------|
| 1  | JAWS      | Windows 11 | Edge 122        | Yes     | 2                                           | Focus order issue on cart page |
| 2  | VoiceOver | iOS        | Safari 17       | Yes     | 0                                           |                                |
| 3  |           |            |                 |         |                                             |                                |
| 4  |           |            |                 |         |                                             |                                |
| 5  |           |            |                 |         |                                             |                                |
| 6  |           |            |                 |         |                                             |                                |
| 7  |           |            |                 |         |                                             |                                |
| 8  |           |            |                 |         |                                             |                                |
| 9  |           |            |                 |         |                                             |                                |
| 10 |           |            |                 |         |                                             |                                |

4. Perceivable - Text Alternatives & Media

Evaluate how information is presented to users regardless of their sensory abilities.

 

Do decorative images have empty (null) alt attributes?

Do informative images have descriptive alt text?

 

Provide an example of well-written alt text found:

 

Describe the shortcomings:

Are there any charts or graphs?

 

Is data also available in an accessible table or description?

Are captions provided for prerecorded video?

Are transcripts provided for audio-only content?

Rate the quality of text alternatives overall
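
The alt-text questions above lend themselves to a quick pre-screen before manual review. A minimal sketch in TypeScript (runnable as plain JavaScript in a browser dev-tools console); the heuristics are illustrative, not part of the form:

```ts
// Flag <img> elements whose alt handling likely fails WCAG 1.1.1.
// Heuristic only: every hit still needs human judgment, and correctly
// decorative images (alt="") will not appear here.
const suspects = Array.from(document.images).filter(img =>
  !img.hasAttribute("alt") ||                   // missing alt is always a failure
  /\.(png|jpe?g|gif|svg|webp)$/i.test(img.alt)  // filename reused as alt text
);
console.table(
  suspects.map(img => ({ src: img.currentSrc, alt: img.getAttribute("alt") }))
);
```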

5. Perceivable - Adaptable & Distinguishable

Can the content be understood when CSS styling is disabled?

Is semantic HTML used correctly (headings, lists, landmarks)?

Is color alone used to convey information?

 

Describe where color is the only indicator:

Is there sufficient contrast between text and background?

 

Which components failed contrast checks?

Is text resizable up to 200% without loss of content or functionality?

Are images of text avoided except for logos?
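
For the contrast question, auditors can reproduce the WCAG computation directly. A sketch of the relative-luminance formula, assuming sRGB channel values in the 0 to 255 range (AA requires 4.5:1 for normal text and 3:1 for large text):

```ts
// WCAG 2.x relative luminance and contrast ratio.
function luminance(r: number, g: number, b: number): number {
  const [R, G, B] = [r, g, b].map(v => {
    const s = v / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const [l1, l2] = [luminance(...fg), luminance(...bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

contrastRatio([118, 118, 118], [255, 255, 255]); // ~4.54, just passes AA
```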

6. Operable - Keyboard & Navigation

Assess whether all functionality is available via the keyboard and whether users have enough time to interact.

 

Is a visible focus indicator present on all interactive elements?

 

List elements where focus is missing or unclear:

Is there a logical tab order throughout the interface?

 

Describe where tab order jumps illogically:

Are keyboard traps present?

 

Detail the trap location and steps to reproduce:

Are skip links provided to bypass repetitive content?

Do motion animations respect the prefers-reduced-motion preference?

Are time limits present?

 

Which user controls are available?
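
Two of the keyboard questions above can be spot-checked from the console. A hedged sketch: positive tabindex values are the usual culprit behind illogical tab order, and the media query mirrors what motion-aware code should consult:

```ts
// Positive tabindex values override DOM order and commonly break tab flow.
const overridden = Array.from(
  document.querySelectorAll<HTMLElement>("[tabindex]")
).filter(el => el.tabIndex > 0);
console.table(
  overridden.map(el => ({ tag: el.tagName, tabIndex: el.tabIndex }))
);

// The preference that animation code should respect.
const reduceMotion =
  window.matchMedia("(prefers-reduced-motion: reduce)").matches;
console.log("prefers-reduced-motion:", reduceMotion);
```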

7. Operable - Input Modalities

Examine pointer, gesture, and voice inputs.

 

Are click/tap targets at least 24×24 CSS pixels?

Are complex gestures (multi-finger, drawing) accompanied by simple alternatives?

Is drag-and-drop functionality keyboard accessible?

Do form labels activate or focus their associated controls when clicked, enlarging the effective pointer target?

Is accidental activation prevented (e.g., confirmatory buttons)?
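
The 24×24 CSS-pixel floor comes from WCAG 2.5.8 Target Size (Minimum). A rough measurement sketch; it ignores the criterion's spacing and inline exceptions, so flagged elements need manual confirmation:

```ts
// Measure rendered interactive targets against the 24x24 minimum.
const small = Array.from(
  document.querySelectorAll<HTMLElement>(
    "a[href], button, input, select, [role='button']"
  )
)
  .map(el => ({ el, rect: el.getBoundingClientRect() }))
  .filter(({ rect }) => rect.width > 0 && (rect.width < 24 || rect.height < 24));

console.table(
  small.map(({ el, rect }) => ({
    tag: el.tagName,
    width: Math.round(rect.width),
    height: Math.round(rect.height),
  }))
);
```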

8. Understandable - Readability & Predictability

Is the primary language identified in HTML?

Are abbreviations or jargon expanded or explained?

Are error messages clear and actionable?

Are form labels and instructions persistently visible?

Are navigation menus consistent across pages?

Are changes of context (new window, focus shift) predictable?
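
The language question maps to WCAG 3.1.1 and is a one-line check; the second line is a crude screen for focus-triggered context changes and only surfaces candidates for manual review:

```ts
// Document language (empty string means the lang attribute is missing).
console.log("lang:", document.documentElement.lang || "(missing)");

// Inline onfocus handlers are worth inspecting for unexpected context changes.
console.log(
  document.querySelectorAll("[onfocus]").length,
  "elements with inline onfocus handlers"
);
```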

9. Understandable - Input Assistance

Evaluate support provided during data entry.

 

Are required fields clearly indicated?

Are input errors automatically detected and described?

Are suggestions provided for fixing errors?

Is data validated client-side before submission?

Is a confirmation page shown before final irreversible actions?
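
In markup terms, "automatically detected and described" usually means aria-invalid plus aria-describedby. A minimal sketch, assuming a hypothetical field with id "email":

```ts
// Programmatically associate an error message with its field so screen
// readers announce it when the field receives focus. The #email id is an
// assumption for illustration.
const field = document.querySelector<HTMLInputElement>("#email");
if (field) {
  const error = document.createElement("p");
  error.id = "email-error";
  error.textContent = "Enter an email address in the form name@example.com.";
  field.insertAdjacentElement("afterend", error);
  field.setAttribute("aria-invalid", "true");
  field.setAttribute("aria-describedby", "email-error");
}
```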

10. Robust - Compatibility & Standards

Ensure content is robust enough for various assistive technologies.

 

Is valid HTML/CSS used?

Are ARIA roles and properties used correctly?

Are status messages conveyed via role='status' or aria-live regions?

Are duplicate IDs avoided?

Do custom components expose the correct name, role, and state?
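
Two of the robustness checks above reduce to small scripts. A sketch of a status message that screen readers announce without a focus change, plus a duplicate-ID scan:

```ts
// Live regions should exist in the DOM before their text is updated,
// hence the deferred update.
const status = document.createElement("div");
status.setAttribute("role", "status"); // implicit aria-live="polite"
document.body.append(status);
setTimeout(() => {
  status.textContent = "3 results found";
}, 100);

// Duplicate IDs break label/control and ARIA relationships.
const ids = Array.from(document.querySelectorAll("[id]"), el => el.id);
const duplicates = [...new Set(ids.filter((id, i) => ids.indexOf(id) !== i))];
console.log("duplicate IDs:", duplicates);
```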

11. WCAG 2.2 New Success Criteria

Checkpoints introduced in WCAG 2.2.

 

Rate compliance for each new criterion:

| Success Criterion                                 | Fail | Partial | Pass | N/A |
|---------------------------------------------------|------|---------|------|-----|
| 2.4.11 Focus Not Obscured (Minimum) (AA)          |      |         |      |     |
| 2.4.12 Focus Not Obscured (Enhanced) (AAA)        |      |         |      |     |
| 2.4.13 Focus Appearance (AAA)                     |      |         |      |     |
| 2.5.7 Dragging Movements (AA)                     |      |         |      |     |
| 2.5.8 Target Size (Minimum) (AA)                  |      |         |      |     |
| 3.2.6 Consistent Help (A)                         |      |         |      |     |
| 3.3.7 Redundant Entry (A)                         |      |         |      |     |
| 3.3.8 Accessible Authentication (Minimum) (AA)    |      |         |      |     |
| 3.3.9 Accessible Authentication (Enhanced) (AAA)  |      |         |      |     |

12. Automated vs Manual Testing

Document the automated tools used and the manual testing performed.

 

Automated tools used

Did automated tools find any issues?

 

Approximate number of unique issues flagged:

Describe manual testing scenarios performed:

Were cognitive walkthroughs conducted?

Were user tests with people with disabilities included?

13. Issue Severity & Prioritization

Log and prioritize issues discovered.

 

Accessibility Issues Log

| #  | Issue ID | WCAG Criterion      | Description                  | Severity | Effort | Owner/Team    | Target Resolution Date |
|----|----------|---------------------|------------------------------|----------|--------|---------------|------------------------|
| 1  | A11Y-42  | 2.4.7 Focus Visible | No focus ring on primary CTA | High     | Simple | Design System | 2025-08-31             |
| 2  |          |                     |                              |          |        |               |                        |
| 3  |          |                     |                              |          |        |               |                        |
| 4  |          |                     |                              |          |        |               |                        |
| 5  |          |                     |                              |          |        |               |                        |
| 6  |          |                     |                              |          |        |               |                        |
| 7  |          |                     |                              |          |        |               |                        |
| 8  |          |                     |                              |          |        |               |                        |
| 9  |          |                     |                              |          |        |               |                        |
| 10 |          |                     |                              |          |        |               |                        |

14. Remediation & Roadmap

Plan next steps and assign resources.

 

Is a phased remediation roadmap defined?

Are design tokens or an accessible component library available?

Will regression testing include accessibility checks?

Is training for developers/designers scheduled?

Outline key milestones:

15. Statement & Sign-off

Provide declaration and accountability.

 

Draft accessibility statement (optional):

Is executive sign-off obtained?

Lead Auditor Signature

Sign-off Date

 

Analysis for IT Accessibility & Inclusive Design Audit Form

Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.

 

Overall Form Strengths

This audit form is a best-practice example of how to structure a specialist compliance checklist. It mirrors the WCAG 2.2 table of contents, which shortens the learning curve for seasoned auditors and guarantees that no guideline is forgotten. The progressive disclosure pattern—yes/no questions first, then conditional open-ended fields—keeps the perceived length low while still capturing rich qualitative evidence. Built-in data-quality checks such as the 24×24 pixel minimum, severity scales, and date pickers reduce transcription errors and make later aggregation into executive dashboards trivial.

 

Another strength is the deliberate balance between prescriptive mandatory fields and generous optional space. Mandatory questions are limited to high-value identifiers and critical WCAG clauses, so an audit can be legally signed off even if the auditor chooses not to record tertiary details. Optional narrative boxes and the testing-results table encourage thoroughness without penalizing consultants who work under tight procurement deadlines. Finally, the form is future-proofed: the matrix rating block for WCAG 2.2 success criteria can be expanded simply by editing the sub-question array, while the Assistive Technology Testing Matrix already anticipates new AT tools or OS versions.

 

Question: System/Project Name

System/Project Name is the single most important database key for every downstream workflow—issue tracking, remediation tickets, procurement approvals, and regulatory filings. By forcing the auditor to supply a concise, human-readable label up-front, the form guarantees that all exported spreadsheets, Jira projects, and VPAT documents will share a common identifier, eliminating the classic problem of duplicate or cryptic product codes that plague enterprise archives.

 

From a user-experience standpoint, placing this field first leverages the psychological principle of commitment and consistency: once the auditor has typed the product name, they are mentally invested in completing the remainder of the audit. The generous placeholder text (“e.g., Customer Self-Service Portal”) silently teaches the expected format, reducing the likelihood of vague entries such as “Homepage” that would later require clarification.

 

Data-quality implications are equally significant. Because the field is short and validated as plain text, it can be indexed, sorted, and searched with minimal cleansing. This pays dividends when compliance officers need to generate a quarterly report listing every audited system or when procurement wants evidence that a new vendor product has already been cleared.

 

Question: Primary URL or Location

Primary URL or Location acts as both a legal reference point and a practical doorway for re-testing. Regulators such as the U.S. Access Board or the EU’s enforcement bodies require an exact URI when they reproduce audit findings; a missing or broken link can invalidate an entire Section 508 or EN 301 549 submission. Making this field mandatory therefore shields the organization from contractual or regulatory push-back.

 

Technically, the URL field is the anchor for automated regression testing. Accessibility scanners can be scheduled to revisit precisely this address, diffing new results against the baseline captured in this form. Because the form stores the canonical URL only once, there is no risk of drift between the auditor’s report and the CI/CD pipeline, a common failure mode when URLs are recorded in separate spreadsheets or wiki pages.
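
As a sketch of that regression hook, assuming the open-source axe-core engine has been injected into the audited page (the ambient declaration narrows its API to the one call used here; the real library exposes much more):

```ts
// Ambient declaration for the injected axe-core global.
declare const axe: {
  run: (context?: Node) => Promise<{ violations: unknown[] }>;
};

axe.run(document).then(results => {
  console.log(
    `${results.violations.length} rule violations at ${location.href}`
  );
});
```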

 

Privacy considerations are minimal but worth noting: the URL may reveal internal project names or version numbers. However, because the form itself is intended for internal consultants, the exposure risk is acceptable and far outweighed by the operational benefits of traceability.

 

Question: System Type

System Type is the pivot around which the entire audit methodology rotates. WCAG techniques differ materially between websites, web applications, mobile apps, desktop software, and APIs; for instance, keyboard traps surface differently in a native iOS app than on the web, whereas touch-target size is paramount on mobile. By forcing a choice, the form ensures that the auditor's tool-chain, test scripts, and pass/fail criteria are aligned with the correct modality, preventing costly re-work.

 

The single-choice widget is supplemented by a conditional “Other” text box, a design compromise that preserves data integrity while remaining inclusive of emerging technologies such as VR kiosks or embedded hardware. This approach yields clean categorical data for metrics dashboards while still allowing edge-case documentation.

 

From a governance perspective, knowing the system type enables automatic routing of the finished audit to the correct approval board—web audits flow to the Chief Digital Officer, mobile audits to the Mobile Center of Excellence, and so on—accelerating sign-off and reducing email churn.

 

Question: Audit Start Date

Audit Start Date is essential for SLA tracking and regulatory chronology. Many public-sector contracts stipulate that remediation must be completed within a fixed number of days from the “audit baseline”; without an authoritative start date, the countdown cannot be enforced. Making the field mandatory protects the organization from legal disputes where vendors claim remediation deadlines were ambiguous.

 

The date picker widget standardizes the format (ISO 8601), eliminating locale confusion between U.S. MM/DD/YYYY and European DD/MM/YYYY representations. This small UX decision prevents an entire class of schedule misalignment bugs that plague international teams.
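
A short illustration of the ambiguity the picker removes: the same calendar date renders differently per locale, while the ISO form is canonical. The UTC constructor and timeZone option are only there to keep the example deterministic across machines:

```ts
// Months are zero-based, so 7 is August.
const d = new Date(Date.UTC(2025, 7, 31));
d.toLocaleDateString("en-US", { timeZone: "UTC" }); // "8/31/2025"
d.toLocaleDateString("en-GB", { timeZone: "UTC" }); // "31/08/2025"
d.toISOString().slice(0, 10);                       // "2025-08-31"
```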

 

Finally, the date field enables longitudinal analytics: compliance managers can plot defect density over time, identify seasonal spikes, and correlate audit velocity with release cycles, all of which are impossible if the date is optional or inconsistently entered.

 

Question: Lead Auditor Name

Lead Auditor Name fulfills multiple compliance frameworks that demand a clearly identified “responsible party.” Section 508 and WCAG-EM both require that audit reports contain contact information for the individual who attests to the accuracy of findings. Making this field mandatory therefore future-proofs the document for potential deposition or regulatory inquiry.

 

Internally, the name field drives accountability and coaching. Reviewers can quickly spot patterns—if one auditor consistently logs high-severity issues while another does not, the QA team can investigate whether training gaps or lenient scoring are at play. Over time, this creates a feedback loop that raises the competency of the entire audit practice.

 

From a project-management perspective, the lead auditor becomes the default assignee for clarification questions during remediation sprints, reducing the cycle time spent hunting for subject-matter experts.

 

Question: Contact Email

Contact Email is the primary asynchronous communication channel for every stakeholder loop that follows the audit: developers asking for clarification, procurement officers negotiating vendor fixes, and regulators requesting raw data. By mandating a valid email address, the form guarantees that silence never blocks remediation and that an audit can never become an “orphan” record.

 

Validation occurs at point of entry (format checking plus domain confirmation), preventing typos that would otherwise generate support tickets weeks later. This small UX friction is justified because the cost of a bounced email can be an entire re-audit cycle.
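
A sketch of the point-of-entry format check; genuine domain confirmation (an MX lookup or a verification email) must happen server-side and is out of scope here:

```ts
// Pragmatic format check: one @, no whitespace, a dot in the domain part.
function looksLikeEmail(value: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}

looksLikeEmail("auditor@example.com"); // true
looksLikeEmail("auditor@example");     // false, no top-level domain
```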

 

Privacy is handled proportionally: the form only requests a business email, not personal data, and the resulting database can be encrypted at rest to satisfy GDPR or HIPAA requirements.

 

Question: Accessibility Guidelines to Evaluate Against

Accessibility Guidelines to Evaluate Against determines the legal ceiling for the entire engagement. WCAG 2.2 AA is the default for most legislation, but some EU contracts require EN 301 549, while U.S. federal agencies still reference Section 508. By forcing the auditor to tick at least one guideline, the form prevents the catastrophic scenario where an audit is conducted against an incorrect standard, rendering all findings non-compliant and exposing the organization to contractual penalties.

 

The multiple-choice widget also permits blended audits—an agency might require WCAG 2.2 AA plus selected AAA criteria such as Focus Appearance. Capturing this nuance up-front ensures that the testing protocol and tooling (e.g., color-contrast ratios, focus-ring pixel counts) are calibrated to the correct thresholds, eliminating costly re-testing.

 

Data aggregations benefit as well: compliance dashboards can instantly show how many systems meet each guideline, allowing executives to prioritize remediation budgets based on legal risk exposure.

 

Question: User Groups in Scope

User Groups in Scope personalizes the audit to real-world harm. A motor-impairment-only scope will emphasize keyboard operability and target size, whereas a blind/low-vision scope will stress screen-reader compatibility and alt-text quality. Making this selection mandatory guarantees that no disability cohort is inadvertently omitted, a common oversight that can result in discriminatory user experiences and ADA lawsuits.

 

Multiple-choice granularity allows intersectional coverage—auditors can select both Deaf/Hard-of-Hearing and Cognitive/Learning, prompting the checklist to include captions and plain-language checks. This design pattern is superior to a single “all users” checkbox because it forces conscious prioritization and resource allocation.

 

Finally, the chosen groups feed into the Testing Matrix section, where each AT tool is mapped to the relevant user cohort, ensuring traceability between claim and evidence.

 

Question: Do decorative images have empty (null) alt attributes?

Do decorative images have empty (null) alt attributes? is a binary acid-test for WCAG 1.1.1 compliance. Failing to give decorative images an empty alt results in redundant screen-reader chatter that fatigues blind users, while incorrectly nulling informative images hides critical functionality. Because this single mistake propagates across every page template, the form elevates it to mandatory status, ensuring that auditors verify a representative sample rather than ignoring the issue.

 

The yes/no framing is intentionally black-and-white; there is no “partial” answer, mirroring the pass/fail nature of WCAG success criteria. This design decision removes scoring ambiguity and accelerates peer review.

 

Follow-up text boxes capture evidence, so if the answer is “no,” the auditor must list URLs and remediation steps, creating a ready-to-paste defect log for Jira or Azure Boards.
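
One high-yield place to sample for wrongly nulled images is image-only links, which end up with no accessible name. A heuristic console sketch (it skips links that carry an aria-label, and its hits still need manual confirmation):

```ts
// Links whose only content is an empty-alt image have no accessible name.
const unnamedImageLinks = Array.from(
  document.querySelectorAll<HTMLAnchorElement>("a[href]:not([aria-label])")
).filter(a => {
  const imgs = Array.from(a.querySelectorAll("img"));
  return (
    (a.textContent ?? "").trim() === "" &&
    imgs.length > 0 &&
    imgs.every(img => (img.getAttribute("alt") ?? "") === "")
  );
});
console.log(unnamedImageLinks);
```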

 

Question: Can the content be understood when CSS styling is disabled?

Can the content be understood when CSS styling is disabled? validates the underlying semantic structure, a cornerstone of WCAG 1.3.1 (Info and Relationships). If reading order collapses or controls vanish without CSS, the product fails for screen-reader users, who browse the page as a linearized DOM. Mandatory enforcement guarantees that auditors perform this quick but revealing test early, before time is wasted on pixel-perfect visual reviews that may ultimately be irrelevant.

 

The question also acts as a proxy for code quality: teams that rely on CSS for content injection or layout hacks typically surface here, giving management an early warning that refactoring may be required.

 

Because the test is environment-agnostic (any browser dev-tools can disable styles), the form does not need extra apparatus, making compliance verification lightweight and repeatable in continuous-integration pipelines.
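
The test itself is a few lines in any dev-tools console; disabling every stylesheet approximates the reading order exposed to assistive technology:

```ts
// Toggle all stylesheets off, then read the page top to bottom.
for (const sheet of Array.from(document.styleSheets)) {
  sheet.disabled = true;
}
```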

 

Mandatory Question Analysis for IT Accessibility & Inclusive Design Audit Form

Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.

Mandatory Field Analysis

System/Project Name
Justification: This field serves as the unique human-readable identifier for every downstream workflow—issue repositories, VPAT documents, and procurement approvals. Without a mandatory name, audit records risk ambiguous labels such as “Homepage,” making it impossible to aggregate metrics or prove compliance history during regulatory inquiries.

 

Primary URL or Location
Justification: The URL is the canonical reference point required by Section 508, WCAG-EM, and EN 301 549 reports. A missing or incorrect link invalidates reproducibility, exposes the organization to legal challenges, and prevents automated regression scanners from revisiting the exact scope in future release cycles.

 

System Type
Justification: WCAG techniques diverge sharply across websites, mobile apps, desktop software, and APIs. Forcing the auditor to declare the modality ensures that the correct checklist, tooling, and pass/fail thresholds are applied, eliminating costly re-audits caused by category misalignment.

 

Audit Start Date
Justification: Many contracts mandate remediation within N days from the baseline audit. A mandatory start date enforces SLA accountability, enables longitudinal analytics, and removes ambiguity in jurisdictions where regulatory countdowns are legally binding.

 

Lead Auditor Name
Justification: Compliance frameworks require a clearly identified responsible party who attests to the accuracy of findings. Making the name mandatory guarantees traceability for potential depositions, supports internal coaching programs, and expedites clarification requests during remediation sprints.

 

Contact Email
Justification: Email is the primary asynchronous channel for every stakeholder loop that follows the audit. A validated, mandatory address prevents orphaned records, eliminates support tickets caused by bounced messages, and ensures that clarification questions never block remediation timelines.

 

Accessibility Guidelines to Evaluate Against
Justification: The selected standard—WCAG 2.2 AA, Section 508, EN 301 549, or other—sets the legal ceiling for the engagement. Missing this declaration risks auditing against the wrong criteria, nullifying all findings and exposing the organization to contractual penalties or discrimination lawsuits.

 

User Groups in Scope
Justification: Accessibility is not one-size-fits-all. Mandating the selection of user groups ensures that no disability cohort is inadvertently omitted, protecting the organization from ADA litigation and guaranteeing that testing protocols cover the specific assistive technologies relevant to each group.

 

Do decorative images have empty (null) alt attributes?
Justification: This is a binary WCAG 1.1.1 requirement with zero tolerance for error. A mandatory yes/no answer forces auditors to verify a representative sample, preventing screen-reader fatigue caused by redundant alt text and ensuring that decorative images are correctly silenced.

 

Can the content be understood when CSS styling is disabled?
Justification: Semantic structure and reading order are foundational to WCAG 1.3.1. Making this test mandatory catches linearization failures early, before effort is wasted on visual polishing that may ultimately be irrelevant to assistive-technology users.

 

Lead Auditor Signature
Justification: A digital signature provides non-repudiation and fulfills most regulatory requirements that an audit report must be personally attested. Mandating the signature creates legal defensibility and signals executive-level accountability for accessibility compliance.

 

Sign-off Date
Justification: The sign-off date locks the audit version and triggers remediation SLAs. Without a mandatory date, stakeholders cannot enforce contractual deadlines or prove timeliness during regulatory reviews, potentially voiding compliance claims.

 

Overall Mandatory Field Strategy Recommendation

The form strikes an effective balance: only high-impact identifiers and critical WCAG checkpoints are mandatory, while narrative evidence and tertiary details remain optional. This design maximizes completion rates without sacrificing data quality, because an audit can be legally signed off once the core pass/fail criteria are captured. To further optimize, consider making optional fields conditionally mandatory—for example, if “Are keyboard traps present?” is answered “yes,” the follow-up description could be required before the form can be finalized. Similarly, when third-party components are in scope, the auditor could be compelled to list them, ensuring that contractual liability is clearly apportioned.

 

Finally, provide persistent visual indicators (red asterisk or “required” tag) next to mandatory fields and group them early in each section. This reduces cognitive load and prevents users from reaching the signature block only to discover missing required cells. Overall, the current mandatory field footprint is lean yet legally robust; maintain this discipline as WCAG evolves, and resist stakeholder pressure to over-mandate, which historically correlates with a 20–30% drop in form completion rates.

 
