1 of 35

Conformance

Summer 2025


2 of 35

Discussion Plan

  1. Requirement applicability for conformance (How to scope a claim)
  2. Ways of asserting conformance
  3. Foundational vs. Supplemental
    1. Exercise using Supplemental Requirements
    2. Criteria for placing a requirement in foundational or supplemental
      1. Does foundational have a large or small scope?
  4. Is foundational sufficient for base conformance or part of base conformance?
  5. Conformance options based on current decisions
    • How does WCAG 3 treat Not Applicable?
  6. What does conformance at different levels mean?
  7. Representative sampling (non-normative content)
  8. Third-party content (non-normative content)


3 of 35

  1. Requirement applicability for conformance (How to scope a claim)


4 of 35

Terminology

  • Component (Developing) - grouping of interactive elements for a distinct function
  • Conformance scope (Developing) - set of views and/or pages selected to be part of a conformance claim. Where a view or page is part of a process, all the views or pages in the process must be included
  • Page (Developing) - non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together
  • Process (Developing) - series of views or pages associated with user actions, where actions required to complete an activity are performed, often in a certain order, regardless of the technologies used or whether it spans different sites or domains
  • View (Developing) - content that is actively available in a viewport including that which can be scrolled or panned to, and any additional content that is included by expansion while leaving the rest of the content in the viewport actively available
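As a discussion aid, the developing terms above can be pictured as a simple data model. This is only a sketch: the class names, fields, and the completeness check are assumptions for illustration, not part of any draft definition.

```python
# A sketch of the developing terminology as a data model. All class and field
# names are assumptions for discussion, not draft WCAG 3 definitions.
from dataclasses import dataclass, field


@dataclass
class Component:
    """Grouping of interactive elements serving a distinct function."""
    name: str


@dataclass
class View:
    """Content actively available in a viewport, including content that can be
    scrolled, panned, or expanded while the rest stays actively available."""
    uri: str
    components: list[Component] = field(default_factory=list)


@dataclass
class Process:
    """Series of views/pages whose actions complete an activity, regardless of
    technology or whether it spans different sites or domains."""
    name: str
    views: list[View] = field(default_factory=list)


@dataclass
class ConformanceScope:
    """Views/pages selected for a claim; if any view belongs to a process,
    every view in that process must be included."""
    views: list[View]
    processes: list[Process] = field(default_factory=list)

    def is_complete(self) -> bool:
        selected = {view.uri for view in self.views}
        return all(view.uri in selected
                   for process in self.processes
                   for view in process.views)
```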


5 of 35

Requirement applicability for conformance

  • What is our minimal unit of conformance?
    • In WCAG 2, the conformance unit is the page, set of pages, or process
    • In WCAG 3, the conformance unit(s) could be component/page/view/process
    • Some requirements may need to be evaluated at both a smaller unit and a larger unit (the minimum isn’t exclusive)
  • Can we include, in each requirement, the smallest unit of conformance/evaluation to which it can apply? (see the sketch after this list)
    • Some requirements could apply to components and larger scopes (ex: contrast)
    • Some requirements apply to pages/views and larger scopes (ex: heading structure)
    • Some requirements apply to sets of pages/views and larger scopes (ex: consistent navigation)
  • What groupings make sense (page/view, process, conformance scope) to use in conformance claims?
  • What should a note (like WCAG-EM) cover, separately?
    • Potential Content for Notes
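A minimal sketch of the idea in the list above: each requirement carries the smallest unit it can be evaluated at, and anything at or below the unit being evaluated applies. The enum, requirement names, and mapping are illustrative assumptions only.

```python
# Sketch: each requirement declares the smallest unit it can be evaluated at.
# The enum, requirement names, and mapping are illustrative assumptions only.
from enum import IntEnum


class Unit(IntEnum):
    COMPONENT = 1
    VIEW = 2       # page/view
    VIEW_SET = 3   # set of pages/views
    PROCESS = 4


# Smallest unit at which each example requirement can be evaluated.
MINIMUM_UNIT = {
    "Text contrast": Unit.COMPONENT,       # also evaluable at larger units
    "Heading structure": Unit.VIEW,
    "Consistent navigation": Unit.VIEW_SET,
}


def applicable_at(unit: Unit) -> list[str]:
    """Requirements whose minimum unit is no larger than the unit evaluated;
    the minimum is not exclusive, so contrast also applies to views and processes."""
    return [name for name, minimum in MINIMUM_UNIT.items() if minimum <= unit]


print(applicable_at(Unit.COMPONENT))  # ['Text contrast']
print(applicable_at(Unit.VIEW))       # ['Text contrast', 'Heading structure']
```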


6 of 35

Summary from 5 August Meeting

  • Minutes
  • General agreement that there is a lot of value in providing hooks for reporting on how well a component meets the applicable guidance
  • The group was polled and the preference, with some disagreement, was to write a note on evaluating accessibility of components. Other options discussed:
    • Including a category in an evaluation methodology document for components that was essentially “accessibility-ready” or some other name for a conformance-like state of components (this had slightly less support than a full taskforce)
    • Including component as a minimum unit of conformance
    • Creating a special class of conformance within the WCAG conformance section


7 of 35

2. Ways of asserting (stating, discussing) conformance


8 of 35

Conformance, Compliance & Reporting

Conformance

  • “satisfying all the requirements listed under the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim” - WCAG 3
  • Historically, conformance has meant meeting 100% of the technical requirements, with requirements assigned to levels
  • Associated with technical standard

Reporting

  • A report of the state of conformance (as determined by rulemakers or the report format) and/or of how something is conforming or has progressed at the time of evaluation
    • Example: Accessibility Conformance Report - “a document that explains how information and communication technology (ICT) products such as software, hardware, electronic content, and support documentation meets…[standard]” - Section 508
  • Reporting is active and ongoing. A report is a static result of reporting.

Compliance

  • How well something meets law, regulation, policy, contract, or procurement requirements
  • Compliance relates to laws, regulations, and policy - it is not in W3C and AG remit but providing hooks for compliance is within scope
  • General exceptions (and reasonable-effort concepts) are relevant to Compliance
    • Example: online maps and mapping services, as long as essential information is provided in an accessible digital manner for maps intended for navigational use (EAA?)
  • Associated with statutory or institutional requirements


9 of 35

How do people assert / report on conformance or compliance?

  • ACR (Accessibility Conformance Report)
    • VPAT (Voluntary Product Accessibility Template)
  • Accessibility Statement
  • Internal checklists
  • WCAG EM Report Tool or similar
  • Badges
  • ATAG Report Tool
  • Part of a purchase contract (RFP checklist)
  • Narrative statements (website, email, document, communication, some static reference)
  • Scores & Dashboards
  • What else?


10 of 35

What things could people report against WCAG 3?

  • Number of provisions passed or failed
  • Level achieved
  • Functional performance criteria (needs) supported
  • Issue severity (First Aid)
  • Progress
  • Assertions (processes that have been completed)
  • Dates: of report and when acquired
  • Whether archival vs. active
  • Areas where you go above and beyond
  • Error density
  • Scope of testing
  • Risk profile or organizational impact
  • Roadmap of what’s next
  • AT compatibility
  • Remediation policy or timeline
  • Responsibilities (teams that need to fix issues)
  • Version tested
  • Version of report
  • Overall score / badge
  • Important features/details
  • How to use certain features
  • Methodology & tools used
  • Who evaluated
  • Challenges encountered (software/hardware/organization)
  • Qualifications of author/evaluator
  • If and how AI was used (this may be difficult to determine and is not always known)


Not everything we list here will go into the normative content. Some items can go into notes or understanding documents, or not be included in any W3C documentation. The sketch below shows one possible, non-normative shape for a report record.
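Every field name in this sketch is an assumption for discussion, not an agreed or normative WCAG 3 reporting format.

```python
# Illustrative shape for a report record; every field name is an assumption
# for discussion, not a normative or agreed WCAG 3 reporting format.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date


@dataclass
class ReportRecord:
    scope_of_testing: str
    version_tested: str
    report_date: date
    level_achieved: str | None = None              # named level, if any
    provisions_passed: int = 0
    provisions_failed: int = 0
    functional_needs_supported: list[str] = field(default_factory=list)
    assertions_completed: list[str] = field(default_factory=list)
    methodology_and_tools: list[str] = field(default_factory=list)
    evaluators: list[str] = field(default_factory=list)
    remediation_timeline: str | None = None
    archival: bool = False                         # archival vs. active report
```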

11 of 35

Additional points about reports

  • Reporting changes based on who the report is going to
    • Features for end users; technical deficiencies for policy/contract audiences
  • Reporting needs to reference static content
  • Can divide things reported based on audience (different people can validate different information)


12 of 35

3. Foundational or Supplemental


13 of 35

Level Exercise

  • Survey Results

Previously agreed upon criteria for foundational level:

  • Safety Issues
  • Needed for AT to work
  • Likely to prevent task completion even with ideal current AT support

Possible criteria

  • Universal applicability?
  • Feasibility?
  • Impact?
  • Testability?


14 of 35

4. Size/scope of foundational


15 of 35

Level exercise results

Criteria for foundational requirements:

  • Must be a requirement
    • A testable statement about the interface.
  • Universal applicability
    • Must apply to all interfaces (across scales), unless a condition is built into the requirement.
  • Feasible
    • Achievable with reasonable effort across various scales of product/organisation.

Would lead to foundational rather than supplemental:

  • Prevents safety issues
  • Needed for AT to work
  • Needed for task completion even with ideal current AT support


16 of 35

Examples

Foundational:

  • Equivalent text alternatives are available for images that convey content.
    • Could be fulfilled by UA, AI, or author. “Available” is a minimum bar.

Supplemental:

  • Enhanced features that allow users to interact with captions are available.
    • Not useful for all content/scenarios; feasibility is a question.

For discussion:

  • Captions are available for all live audio content
    • Level depends on feasibility; discussion is needed.

Note the flexibility of requirement placement: in the writing process, each requirement can be adjusted to be smaller or larger.

  • Exceptions can be added to increase feasibility and make a requirement appropriate for foundational
  • A scope can be broadened, which reduces feasibility and makes a requirement more appropriate for supplemental


17 of 35

Minimal foundational approach

Foundational

  • Descriptive transcripts
  • Pointer pressure alternative
  • Speech alternative
  • Error notification
  • No visual motion

Supplemental

  • Captions prerecorded
  • Captions live
  • Audio descriptions prerecorded
  • Comparable keyboard effort (assertion)
  • Error persists (based on slide 13, or removed?)
  • Unnecessary steps (assertion?)
  • No misleading wording (may be assertion)
  • No artificial pressure
  • Comparable risk


18 of 35

More comparable to WCAG 2.x A & AA approach

Foundational

  • Descriptive Transcripts
  • Audio descriptions prerecorded
  • Captions prerecorded
  • Captions live
  • Pointer pressure alternative
  • Speech alternative
  • Error notification

Supplemental

  • Comparable keyboard effort (assertion)
  • Error persists (based on slide 13, or removed?)
  • Unnecessary steps (assertion?)
  • No misleading wording (may be assertion)
  • No artificial pressure
  • Comparable risk


19 of 35

Notes from Discussion

  • Meeting Minutes
  • General agreement that the terminology “foundational” and “supplemental” does not adequately convey what these concepts mean
  • Comments on minimal foundational approach
    • Potentially more flexible
    • May result in less accessible products than WCAG 2.x
    • Forces people to look at all supplemental requirements to evaluate which are appropriate
  • Comments on larger foundational approach
    • More closely aligns with WCAG 2.x A & AA


20 of 35

5. Use of “Not Applicable” (N/A)


21 of 35

How should N/A work?

Based on previous conversations, supplemental requirements will be reported on using:

  • Modules
  • Points or Percentages

When an item is Not Applicable (N/A) it can be:

  • Ignored (it doesn’t add or subtract from the score)
  • Counted as a success
  • Varies based on the requirement

In modules, N/A becomes pass, since every item in the module would be tested

In Points and Percentages, N/As affect the score (see the sketch below):

  • When ignored, N/As reduce the overall number of requirements and assertions being tested, so the remaining items are weighted more heavily and the N/A items carry no weight
  • When counted as a success, an N/A is weighted equally with the other requirements and assertions
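A minimal sketch of the two N/A treatments described above, assuming a flat, unweighted score. The item names and results are placeholders, and one applicable item is assumed to fail so the two treatments diverge.

```python
# Sketch of the two N/A treatments described above. The item names, results,
# and the flat (unweighted) scoring are illustrative assumptions only.
PASS, FAIL, NA = "pass", "fail", "n/a"


def supplemental_score(results: dict[str, str], na_treatment: str) -> float:
    """Percentage score for a set of supplemental results.

    na_treatment "ignore": N/A items drop out of numerator and denominator.
    na_treatment "pass":   N/A items count as successes.
    """
    if na_treatment == "ignore":
        applicable = [r for r in results.values() if r != NA]
        return 100 * sum(r == PASS for r in applicable) / len(applicable)
    if na_treatment == "pass":
        return 100 * sum(r in (PASS, NA) for r in results.values()) / len(results)
    raise ValueError(f"unknown treatment: {na_treatment}")


# Scope with no flashing and no multimedia (as on the next slide), assuming
# one applicable item fails so the treatments give different results.
results = {
    "No flashing": NA,
    "Numerical alternatives": PASS,
    "Plain language review": FAIL,
    "Media alternatives style guide": NA,
    "Audio description volume": NA,
}
print(supplemental_score(results, "ignore"))  # 50.0 - 1 of 2 applicable items
print(supplemental_score(results, "pass"))    # 80.0 - N/A weighted like a pass
```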


22 of 35

Example - 5 Supplemental Requirements & Assertions

  • No flashing - Content does not include flashing
  • Numerical alternatives - Alternatives are provided for numerical information such as statistics
  • Plain language review - Content author(s) conduct plain language reviews to check against plain language guidance appropriate to the language used. This includes checking that:
    • the verb tense used is easiest to understand in context;
    • content is organized into short paragraphs; and
    • paragraphs of informative content begin with a sentence stating the aim or purpose of the content.
  • Media alternatives style guide - Content author(s) follow a style guide that includes guidance on media alternatives.
  • Audio description volume - A mechanism is available that allows users to control the audio description volume independently from the audio volume of the video and to change the language of the audio description, if multiple languages are provided.


Let’s assume the scope has no flashing and no multimedia.

23 of 35

Example - Effect of N/A on Scoring


| Short Name                     | Points or % (N/A = 0) | Points or % (N/A = 1) | Points or % (Varies) |
| No flashing                    | 0                     | 1                     | 1                    |
| Numerical alternatives         | 1                     | 1                     | 1                    |
| Plain language review          | 1                     | 1                     | 1                    |
| Media alternatives style guide | 0                     | 1                     | 0                    |
| Audio description volume       | 0                     | 1                     | 0                    |
| Total                          | 2 (40%)               | 5 (100%)              | 3 (60%)              |

24 of 35

Simple vs Large/Complex

  • A simple site could have a lot of NAs, making it harder to score at a higher level (unless NAs count positively, or it’s scored proportionally to non-NA items).

  • A large / complex site could have almost every requirement in scope; therefore, mistakes/issues are more likely.
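A worked illustration of the first bullet, using invented numbers: a simple site passes its only applicable supplemental item while four items are N/A.

```python
# Invented numbers: a simple site passes its only applicable supplemental
# item, while four items are N/A.
total, passes, nas = 5, 1, 4

print(100 * passes / total)          # 20.0  - N/A scored as 0 punishes the simple site
print(100 * (passes + nas) / total)  # 100.0 - N/A counted positively
print(100 * passes / (total - nas))  # 100.0 - proportional to the non-N/A items
```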


25 of 35

Wordsmithing requirement categories:

“Foundational” alternatives: Core / Primary / Minimum / Preliminary

“Supplemental” alternatives: Secondary / Advanced


26 of 35

Key Points from Discussion

  • Meeting Minutes
  • We must make sure that smaller sites are not punished
  • Scoring would only apply to the supplemental criteria
  • Percentage of the items that apply
    • The % would stay the same across different functionality
  • Points allow creators to see increase over time
  • We want scoring to better reflect user experience
    • Need to be careful to avoid it being read as an “accessibility score”; it doesn’t capture the importance of each issue.
    • The goal of this isn’t scoring accessibility, the goal is to encourage people to do more
    • Weighted scoring may also help.
  • Is it more important to understand score by disability categories?
    • Explore prioritization based on EN 301 549
    • Also look at energy labels
    • Conformance vs. Reporting (Question: Can reporting be used to better reflect user experience?)


27 of 35

6. Conformance at different levels


28 of 35

What is the purpose of providing different levels?

  • Encourage continual improvement
    • Side effect: Allowing for prioritisation of criteria.
  • Encourage people to look at all requirements, assertions, and best practices
  • Enable regulators to select different levels for different services
  • Incorporate an incentive to go beyond the base level (as in WCAG 2.x A & AA)
  • Three levels encourage people to aim for the middle level
  • Levels facilitate ramping up
  • What else?


29 of 35

Sample set of Supplemental Requirements & Assertions

Supplemental Requirements

  • Audio description volume
  • Error linked
  • No flashing
  • Numerical alternatives
  • Readable text style (enhanced)
  • Visually organized
  • Risk statements

Assertions

  • Comparable keyboard effort
  • Media alternatives style guide
  • Plain language review
  • No harm from algorithms
  • Disturbing content

Test Mapping


Assumptions

  • We are using the larger foundational set (roughly equivalent to WCAG 2 A & AA)
  • All foundational are met
  • Not Applicable counts as pass

30 of 35

Percentages of Supplemental (assumes all foundational met)


Supplemental requirements and assertions mapped to functional needs:

  • Error linked
  • No flashing
  • Numerical alternatives
  • Visually organized
  • Risk statements
  • Comparable keyboard effort
  • Media alternatives style guide
  • Plain language review
  • No harm from algorithms
  • Disturbing content
  • No flashing
  • Visually organized

Percentage of supplemental items by functional need:

  • Overall: 25% / 58% / 17%
  • With limited Memory: 20% / 80% / 20%
  • With limited Language: 0% / 100% / 0%
  • With limited Knowledge: 33% / 67% / 0%
  • With limited Perception: 20% / 80% / 50%
  • With limited Executive function (attention, reasoning): 33% / 67% / 0%
  • With limited Self Regulation: 0% / 100% / 0%
  • With limited Fine motor control: 33% / 66% / 0%
  • With limited Gross motor control: 50% / 50% / 0%
  • With limited Voice: 0% / 100% / 0%
  • Without typical Physical attributes: 0% / 100% / 0%
  • Without or with limited Hearing: 0% / 100% / 0%
  • Without or with limited Sight: 14% / 57% / 14%
  • With Sensory Sensitivity: 50% / 50% / 50%
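A sketch of how percentages like those above could be derived from a mapping of supplemental items to the functional needs they support. The mapping, results, and need names below are invented placeholders, not the group’s actual test mapping.

```python
# Sketch of deriving a percentage per functional need from a mapping of
# supplemental items to the needs they support. The mapping, results, and
# need names are invented placeholders, not the group's actual test mapping.
ITEM_TO_NEEDS = {
    "No flashing": ["Sensory Sensitivity"],
    "Numerical alternatives": ["limited Knowledge", "limited Language"],
    "Plain language review": ["limited Language", "limited Memory"],
    "Error linked": ["limited Memory", "limited Sight"],
}
RESULTS = {  # True = met (counting N/A as pass, per the assumptions above)
    "No flashing": True,
    "Numerical alternatives": True,
    "Plain language review": False,
    "Error linked": True,
}


def percent_met_per_need() -> dict[str, float]:
    per_need: dict[str, list[bool]] = {}
    for item, needs in ITEM_TO_NEEDS.items():
        for need in needs:
            per_need.setdefault(need, []).append(RESULTS[item])
    return {need: 100 * sum(met) / len(met) for need, met in per_need.items()}


print(percent_met_per_need())
# {'Sensory Sensitivity': 100.0, 'limited Knowledge': 100.0,
#  'limited Language': 50.0, 'limited Memory': 50.0, 'limited Sight': 100.0}
```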

31 of 35

Which is more useful for conformance?

Levels set based on

  • overall %
  • % by functional needs

Questions

  • Is it preferable to incorporate functional needs meta-data, and save functional needs as a method of reporting?
  • Would it be beneficial to create recommended sets of supplemental requirements and assertions based on some qualifier (industry, type of product, etc.)?
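A sketch contrasting the two level-setting options above, assuming a hypothetical 50% threshold and treating “overall” as a simple average of the per-need percentages; neither rule is an agreed WCAG 3 mechanism.

```python
# Sketch contrasting level rules, assuming a hypothetical 50% threshold and
# treating "overall" as a simple average; neither rule is an agreed mechanism.
def meets_by_overall(per_need: dict[str, float], threshold: float = 50.0) -> bool:
    overall = sum(per_need.values()) / len(per_need)
    return overall >= threshold


def meets_by_each_need(per_need: dict[str, float], threshold: float = 50.0) -> bool:
    # Every functional need must individually reach the threshold.
    return all(pct >= threshold for pct in per_need.values())


per_need = {"limited Memory": 80.0, "limited Language": 100.0,
            "limited Sight": 20.0, "Sensory Sensitivity": 50.0}
print(meets_by_overall(per_need))    # True  - the average (62.5%) clears 50%
print(meets_by_each_need(per_need))  # False - limited Sight falls below 50%
```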


32 of 35

7. Representative sampling

32

33 of 35

Representative sampling (non-normative content)

  • Is representative sampling already covered by how we defined scope and definitions?
  • Is there a way we would consider representative sampling that would affect conformance or how we write requirements?


34 of 35

8. Third-party content


35 of 35

Third-party content (non-normative content)

  • Gathering perspectives on what could go into a policy doc.
  • Would any of that affect how we write the conformance model?

Previous conversation

Types of 3rd party content

  • Selected and built in parts of a site/product/etc.
  • News articles, etc., that are pulled in
  • Content created by individuals
  • Advertising
  • API
  • AI generated?

Possible intersections with normative WCAG content

  • Authoring tools (if included)
  • Possibly exceptions
  • Possible methods (assertion?) for user generated content
