1 of 32

W3C Accessibility Guidelines (WCAG) 3 First Draft - What to look for

These slides: bit.ly/3onBM1T

2 of 32

What is WCAG 3.0?

  • A new set of accessibility guidelines from the W3C that succeeds (but doesn’t deprecate) WCAG 2
  • Slight name change to “W3C Accessibility Guidelines” to reflect the broader scope.
  • The code-name "Silver" now applies to the Task Force and project
  • WCAG 3.0 Introduction

3 of 32

Alert!

This is a first draft

  • There will be many changes to come.
  • We won’t be done for years.
  • Tool developers should exercise caution and stay current with changes the group is working on.

4 of 32

Research 2016 - 2018

5 of 32

First Draft

6 of 32

A New Structure for the Guidelines

  • Guidelines: high-level, plain-language version of the content for managers, policy makers, and individuals who are new to accessibility. How-To sections describe the guideline.
  • Outcomes: testable criteria that include information on how to score the outcome in an optional Conformance Claim.
  • Methods: detailed information on how to meet the outcome, code samples, working examples, resources, as well as information about testing and scoring the method.

7 of 32

Comparison to WCAG 2.x Structure

WCAG 2              WCAG 3
Guidelines          Guidelines
Success criteria    Outcomes
Techniques          Methods
Understanding       How To
Principles          Tags (TBD)

8 of 32

  • Guideline: Provide text alternatives for non-text content.
  • Outcomes: Provides text alternatives for non-text content for user agents and assistive technologies. This allows users who are unable to perceive and/or understand the non-text content to determine its meaning.
  • Methods (see the sketch after this list):
    • Text alternative for image of text (HTML)
    • Functional images (HTML, PDF, ePub)
    • Decorative images (HTML, PDF, ePub)
    • Author control of text alternatives (ATAG)
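
As an illustration of the HTML methods above, here is a minimal TypeScript/DOM sketch of the functional and decorative image patterns; the file names and alt text are hypothetical:

```typescript
// Functional image: the alt text conveys the image's purpose.
const logo = document.createElement("img");
logo.src = "/images/logo.png";          // hypothetical file
logo.alt = "Example Corp home page";    // announced by assistive technologies

// Decorative image: an empty alt tells assistive technologies to skip it.
const divider = document.createElement("img");
divider.src = "/images/divider.png";    // hypothetical file
divider.alt = "";
```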

9 of 32

Structure of Text Alternatives Method

  • Introduction
  • Description
  • Examples
  • Tests
  • Resources

10 of 32

Structure of the How-To

  • Get started
  • Plan
  • Design
  • Develop
  • Examples
  • Resources

11 of 32

Guidelines in First Public Draft

  • Text alternatives - a direct migration from a WCAG 2.x success criterion.
  • Clear words - new guidance that could not previously be included in WCAG 2.x.
  • Captions - an example of adapting WCAG 2.x guidance to emerging technologies, such as virtual reality on the web.
  • Structured content - a migration and merging of several previously unrelated WCAG 2.x success criteria.
  • Visual contrast of text - a migration of WCAG 2.x Color Contrast with substantial changes; it also merges the AA and AAA success criteria.

12 of 32

Conformance

13 of 32

Changes from WCAG 2

WCAG 2                                    WCAG 3
Evaluate by page                          Evaluate by site or product (or subset)
A, AA, AAA                                Critical errors
Perfection or fail                        Point system
AA is mostly used for regulations         Bronze will be recommended for regulations
Success criteria have the same            Guidelines are customized for the tests
true/false evaluation                     and scoring that is most appropriate

14 of 32

Critical Error

An accessibility problem that will stop a user from being able to complete a process. Critical errors include:

  1. Items that stop a user from completing the task if they exist anywhere in the view (examples: flashing content, a keyboard trap, audio with no pause);
  2. Errors that, when located within a process, mean the process cannot be completed (example: a submit button not in the tab order);
  3. Errors that, when aggregated within a view or across a process, cause failure (example: a large amount of confusing, ambiguous language).

15 of 32

Point System

The point system has three levels (a rough data model is sketched after this list):

  1. Testing the views and processes (Atomic tests)
  2. Assigning the Outcome score
  3. Calculating the overall score
    1. The overall score for the site or product
    2. The overall score by disability category
    3. Whether it meets Bronze level
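
As a rough data model of these three levels (a sketch only; every type and field name here is an assumption, not part of the draft):

```typescript
// Level 1: an atomic test result, scoped to a view or a process.
interface AtomicResult {
  outcome: string;              // e.g. "text-alternatives"
  scope: "view" | "process";
  score: number;                // 0-100, per the atomic scoring rules
  criticalErrors: number;
}

// Level 2: the score assigned to each outcome.
interface OutcomeScore {
  outcome: string;
  rating: 0 | 1 | 2 | 3 | 4;    // Very Poor (0) to Excellent (4)
}

// Level 3: the overall scores.
interface OverallScore {
  total: number;                              // average of outcome ratings
  byDisabilityCategory: Map<string, number>;  // overall score per category
  meetsBronze: boolean;
}
```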

16 of 32

Scoring Atomic Tests

Goal: To allow more flexible tests, make them easy to score consistently, and provide a way to allow some bugs as long as they do not block the user.

Testing is scoped to either a “view” or a “process”. Each outcome has a section that shows how it is scored.

  • Tests that result in a pass or fail condition will be assigned 100% or 0%.
  • Tests at the element level that can be consistently counted will be assigned a percentage.
  • Tests that apply to content without clear boundaries will be scored using a rating scale.
  • Critical errors within processes will be identified and result in a score of 0. (These four rules are sketched in code after the note below.)

Note: The intent is to include “holistic tests” in a later draft.
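
A minimal sketch of the four scoring rules above, assuming a simple test representation; the type names and the 0-4 rating-to-percentage mapping are purely illustrative:

```typescript
// All names are assumptions; only the four rules above are encoded.
type AtomicTest =
  | { kind: "pass-fail"; passed: boolean }
  | { kind: "countable"; passing: number; total: number }
  | { kind: "rating"; rating: 0 | 1 | 2 | 3 | 4 };

function scoreAtomicTest(test: AtomicTest, criticalErrors: number): number {
  if (criticalErrors > 0) return 0;              // critical errors zero the score
  switch (test.kind) {
    case "pass-fail":
      return test.passed ? 100 : 0;              // pass/fail -> 100% or 0%
    case "countable":                            // element-level percentage
      return test.total === 0 ? 100 : (test.passing / test.total) * 100;
    case "rating":
      return (test.rating / 4) * 100;            // rating scale, normalized
  }
}
```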

17 of 32

Example: Text Alternatives

Two types of tests:

  • Automated test for the presence of alternative text - scored as a percentage of total images.
  • Manual test for whether the images in the process or task have appropriate alternative text - scored as a percentage of total images, plus any critical errors.

As an example, a result of the tests: 83% of images have appropriate alternative text with no critical errors. (A worked version of this arithmetic follows.)
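
A worked version of that arithmetic, with hypothetical counts chosen to reproduce the 83% figure:

```typescript
// Hypothetical counts chosen to reproduce the 83% in the example above.
const totalImages = 30;
const imagesWithAppropriateAlt = 25;
const criticalErrors = 0;

const score = Math.round((imagesWithAppropriateAlt / totalImages) * 100); // 83
console.log(
  `${score}% of images have appropriate alternative text, ` +
  `${criticalErrors} critical errors`
);
```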

18 of 32

Example: Text Alternative Outcome rating

19 of 32

Overall Scoring

  • The results from the atomic tests are aggregated across views and used, along with the existence of critical errors, to assign a rating (Very Poor to Excellent, 0 to 4) per outcome.
  • The ratings are then averaged for a total score (sketched in code after the notes below).
  • A total score is also calculated by the functional category(ies) the outcomes support (easy for a tool to do).
  • Bronze level is proposed to be a score of at least 3.5 in each disability category and overall.

Note: The guidelines can be used for good practice without using scoring or conformance.

Note: We are discussing whether it should be possible to pass with a critical error if the rest of the site is good enough.
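
A minimal sketch of the averaging described above; the types and the category mapping are assumptions:

```typescript
interface OutcomeRating {
  outcome: string;
  rating: number; // 0-4, from Very Poor to Excellent
}

// Average the per-outcome ratings into the total score.
function totalScore(ratings: OutcomeRating[]): number {
  if (ratings.length === 0) return 0;
  return ratings.reduce((sum, r) => sum + r.rating, 0) / ratings.length;
}

// The same averaging, restricted to the outcomes that support one
// functional (disability) category; `supports` is a hypothetical mapping.
function categoryScore(
  ratings: OutcomeRating[],
  supports: (outcome: string) => boolean
): number {
  return totalScore(ratings.filter(r => supports(r.outcome)));
}
```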

20 of 32

Claiming Conformance

  • Claiming conformance is optional.
  • The scope can be all content within a digital product.
  • The scope is usually one or more sub-sets of the whole, based on processes and views.
  • The scores are normalized (averaged) to a 0-4 score for each applicable outcome.
  • Must have:
    • no critical errors, and
    • at least a 3.5 average total score, and
    • at least a 3.5 score within each functional category (disability type).

Note: These numbers are placeholders at the moment; they need testing. The sketch below encodes the conditions as written.
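
The three conditions above, expressed as a predicate; per the note, the 3.5 thresholds are placeholders:

```typescript
// The three conditions above as a predicate; 3.5 is a placeholder value.
function meetsBronze(
  criticalErrors: number,
  total: number,                    // 0-4 average across all outcomes
  byCategory: Map<string, number>   // 0-4 average per functional category
): boolean {
  return (
    criticalErrors === 0 &&
    total >= 3.5 &&
    Array.from(byCategory.values()).every(score => score >= 3.5)
  );
}
```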

21 of 32

Claiming Conformance - levels

  • Bronze - the proposal in the previous slides.
  • Silver & Gold level are still under development. We want to have more usability-oriented testing for Silver level and Gold level.

22 of 32

Benefits of the Proposed Testing and Critical Errors

  • Sites can have a small percentage of accessibility bugs as long as they are not critical errors for people with disabilities.
  • Tests vary by guideline and method, so the most appropriate test can be used.
  • More needs of people with disabilities that were previously too hard to measure with a true/false test can be included.
  • If a critical error is encountered, you can stop testing that guideline because it fails. Some testers report that this speeds up conformance testing (as distinct from testing for bugs).

23 of 32

Demonstrating that the Conformance Works

Accessibility Metrics

  • Validity - is the score what the product should get?
  • Reliability - can the score be reproduced?
  • Sensitivity - does the score reflect the severity of the accessibility barriers, and are different disabilities treated equally?
  • Adequacy - does a minor change in accessibility produce a minor change in the score?
  • Complexity - is it going to be harder to test, and if so, by how much?

Metrics and Plan for Evaluating a Scoring Proposal

Slide deck introducing the metrics testing - still in development

24 of 32

Proof-of-Concept Tools

25 of 32

Testing Clear Words

To test Clear Words, you can use the Silver Writer tool.

  • This is a prototype tool to demonstrate the feasibility of scoring language.
  • We expect that accessibility tool makers will refine and improve this idea.
  • The Silver Writer tool is a fork of XKCD’s Simple Writer, which was originally created by the author to help him write his book Thing Explainer.
  • That app was re-created and open-sourced, and we in turn forked it to see whether it could be used to assess WCAG 3 Clear Words scoring.
  • Because the original word list was very restrictive, we added two extra lists containing the top 3,000 and 5,000 words from Google’s Trillion Word Corpus (note: the lists were not edited to remove not-safe-for-work content).

26 of 32

Using the Clear Words Tool

To use the Clear Words tool,

  1. Type, or copy and paste, text into the “Enter Words Here” text input.
  2. “Less simple” words will be listed in the “Less Simple Words” section that appears at the bottom of the page.
  3. Duplicate words are removed from the “Less Simple” list, but plurals are currently still counted as different words. This will probably change.
  4. The different corpora can be toggled using the “Change Word List” select element. (The core check is sketched below.)
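
A sketch of the core check such a tool might perform; the real Silver Writer differs in detail, and the tiny word list here is hypothetical:

```typescript
// A sketch of the core check; the real tool differs in detail.
function lessSimpleWords(text: string, wordList: Set<string>): string[] {
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  const flagged = words.filter(word => !wordList.has(word));
  return Array.from(new Set(flagged)); // duplicates removed; plurals still
                                       // count as separate words, as noted
}

// Usage with a tiny hypothetical word list:
const simple = new Set(["the", "cat", "sat", "on", "mat"]);
console.log(lessSimpleWords("The cat sat on the chairs", simple)); // ["chairs"]
```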

27 of 32

Testing Visual Contrast

Research - this is a substantial change from WCAG 2, and there is voluminous information behind it, including a table of changes from WCAG 2 to WCAG 3. This work continues to evolve.

APCA Test Tool - this tool is developed and maintained by Andy Somers, a contributor to WCAG 3. See his GitHub repo for additional information.

Note that there is still a manual lookup component, which we have kept for now because we want the formulas to be clear. We expect that tool developers will build much simpler versions.
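
For tool developers, here is a common first step that contrast formulas start from: standard sRGB relative luminance (this is what WCAG 2's contrast math uses; APCA derives luminance with its own transfer curve and constants, so see the APCA repo for the actual formula):

```typescript
// Standard sRGB relative luminance (as used by WCAG 2's contrast math).
// APCA applies its own curve and constants; this is only the common
// starting point, not the APCA formula.
function relativeLuminance(r: number, g: number, b: number): number {
  // r, g, b are 0-255; linearize each channel, then apply the sRGB weights.
  const linearize = (channel: number): number => {
    const s = channel / 255;
    return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}
```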

28 of 32

Feedback

29 of 32

Feedback Questions

  • Editor’s Notes contain explanations for decisions and ask questions we particularly want feedback on.
  • There is a longer list of feedback questions with contextual information.
  • Please focus your comments on the structure, not the text of the guidelines.

30 of 32

To Submit Feedback

To submit feedback, file an issue in the W3C silver GitHub repository. Please file one issue per discrete comment. If filing issues in GitHub is not feasible, send email to public-agwg-comments@w3.org (comment archive). The Working Group requests comments on this draft be sent by 26 February 2021.

31 of 32

FAQ & Other Ways to Contribute

  1. The plain language editors have only worked on the summary sections. We need more people to help with plain language editing.
  2. Part of the reason the draft is so long right now is all the editor’s notes. Another part is that we need more plain language editors.
  3. This draft version is in the official W3C spec template, which we have to do for W3C reasons. We want to move it into a content management system so that pages can be easily searched, filtered, and dynamically built. We plan to include searchable tags.
  4. We would love your help! Join the Silver Community Group. We have groups forming to work on new guidelines. See the Sub-group list.

32 of 32

Questions?

Thanks for your interest. If you have a survey link, please complete it.

Jeanne Spellman, jspellman@spellmanconsulting.com

Twitter & Github: @jspellman

For more info than you ever wanted, see our wiki page.

Thanks to Alastair Campbell of Nomensa, co-chair of AGWG. I ripped some of his slides; the good slides are his.

This slide deck is at: bit.ly/3onBM1T