1 of 24

How to Write Guidelines

...and support material for WCAG 3

2 of 24

WCAG 3 Structure

3 of 24

Comparison to WCAG 2.x Structure

WCAG 2 → WCAG 3

  • Guidelines → Guidelines
  • Success criteria → Outcomes
  • Techniques → Methods
  • Understanding → How To
  • Principles → Tags (TBD)

4 of 24

A New Structure for the Guidelines

  • Guidelines: high-level, plain-language versions of the content for managers, policy makers, and individuals who are new to accessibility. How-To sections describe each guideline.
  • Outcomes: testable criteria that include information on how to score the outcome in an optional Conformance Claim.
  • Methods: detailed information on how to meet the outcome, including code samples, working examples, and resources, as well as information about testing and scoring the method.

5 of 24

WCAG 3 Structure

6 of 24

  • Guideline: Provide text alternatives for non-text content.
  • Outcomes: Provides text alternatives for non-text content for user agents and assistive technologies. This allows users who are unable to perceive and/or understand the non-text content to determine its meaning.
  • Methods:
    • Text alternative for image of text (HTML)
    • Functional images (HTML, PDF, ePub)
    • Decorative images (HTML, PDF, ePub)
    • Author control of text alternatives (ATAG)

7 of 24

Structure of Text Alternatives Method

  • Introduction
  • Description
  • Examples
  • Tests
  • Resources

8 of 24

  • Get started
  • Plan
  • Design
  • Develop
  • Examples
  • Resources

9 of 24

Scoring & Conformance

10 of 24

Changes from WCAG 2

WCAG 2 → WCAG 3

  • Evaluate by page → Evaluate by site or product (or subset)
  • A, AA, AAA → Critical errors
  • Perfection or fail → Point system
  • AA is mostly used for regulations → Bronze will be recommended for regulations
  • Success criteria have the same true/false evaluation → Guidelines are customized for the tests and scoring that are most appropriate

11 of 24

Critical Error

An accessibility problem that will stop a user from being able to complete a process. Critical errors include:

  1. Items that will stop a user from completing the task if they exist anywhere in the view (examples: flashing, keyboard trap, audio with no pause);
  2. Errors that, when located within a process, mean the process cannot be completed (example: submit button not in tab order);
  3. Errors that, when aggregated within a view or across a process, cause failure (example: a large amount of confusing, ambiguous language).

12 of 24

Point System

The point system has 3 levels:

  1. Testing the views and processes (Atomic tests)
  2. Assigning the Outcome score
  3. Calculating the overall score
    1. The overall score for the site or product
    2. The overall score by disability category
    3. Whether it meets Bronze level

13 of 24

Scoring Atomic Tests

Goal: To allow more flexible tests, make them easy to score consistently, and provide a way to tolerate bugs that do not block the user.

Testing is scoped to either a “view” or a “process”. Each outcome has a section that shows how it is scored.

  • Tests that result in a pass or fail condition will be assigned 100% or 0%.
  • Tests at the element level that can be consistently counted will be assigned a percentage.
  • Tests that apply to content without clear boundaries will be scored using a rating scale.
  • Critical errors within processes will be identified and result in a score of 0.

Note: The intent is to include “holistic tests” in a later draft.
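As a rough sketch, the scoring rules above could be applied like this (the function name, test kinds, and data layout are illustrative assumptions, not part of the draft):

```python
def score_atomic_test(result):
    """Score one atomic test per the draft rules (sketch; data shapes assumed)."""
    # A critical error within a process zeroes out the score.
    if result.get("critical_error"):
        return 0.0
    kind = result["kind"]
    if kind == "pass_fail":
        # Pass/fail tests are assigned 100% or 0%.
        return 100.0 if result["passed"] else 0.0
    if kind == "element_count":
        # Countable element-level tests are assigned a percentage.
        if result["total"] == 0:
            return 100.0  # nothing to fail: treat as fully passing
        return 100.0 * result["passing"] / result["total"]
    if kind == "rating_scale":
        # Content without clear boundaries gets a rating scale
        # (assumed here to be 0-4, mapped to a percentage).
        return 100.0 * result["rating"] / 4
    raise ValueError(f"unknown test kind: {kind}")
```

For example, an element-level test where 5 of 6 images pass would score `score_atomic_test({"kind": "element_count", "passing": 5, "total": 6})`, about 83%.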

14 of 24

Example: Text Alternatives

Two types of tests:

  • Automated: checks for the presence of alternative text (percentage of total images)
  • Manual: checks whether the images in the process or task have appropriate alternative text (percentage of total images, plus critical errors)

An example result of these tests: 83% of images have appropriate alternative text, with no critical errors.
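The automated part of this test can be sketched with Python's standard `html.parser` module (a minimal illustration; real tooling would also handle ARIA attributes, `role="presentation"`, and similar cases):

```python
from html.parser import HTMLParser

class AltTextCounter(HTMLParser):
    """Count <img> elements and how many carry an alt attribute."""
    def __init__(self):
        super().__init__()
        self.total = 0
        self.with_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            # An empty alt="" is a deliberate authoring choice (decorative
            # image), so this automated check only counts presence.
            if any(name == "alt" for name, _ in attrs):
                self.with_alt += 1

def alt_text_percentage(html):
    counter = AltTextCounter()
    counter.feed(html)
    if counter.total == 0:
        return 100.0
    return 100.0 * counter.with_alt / counter.total

html = """
<img src="a.png" alt="Chart of results">
<img src="b.png" alt="">
<img src="c.png">
"""
print(round(alt_text_percentage(html)))  # 2 of 3 images have alt -> 67
```

Whether each alternative is *appropriate* is the manual test; presence is all an automated check like this can establish.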

15 of 24

Example: Text Alternative Outcome rating

16 of 24

Overall Scoring

  • The results from the atomic tests are aggregated across views and used, along with the existence of critical errors, to assign a rating (Very Poor to Excellent, 0 to 4) per outcome.
  • The ratings are then averaged for a total score.
  • A total score is also calculated for each functional category the outcomes support (easy for a tool).
  • Bronze level is proposed to be a score of at least 3.5 in each disability category and overall.

Note: The guidelines can be used for good practice without using scoring or conformance.

Note: We are discussing whether it should be possible to pass with a critical error if the rest of the site is good enough.
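Under the proposal above, the overall calculation could look roughly like this (the data layout and category names are invented for illustration; only the 0-4 scale and the 3.5 Bronze threshold come from the proposal):

```python
BRONZE_THRESHOLD = 3.5  # proposed minimum, overall and per category

def overall_scores(outcomes):
    """outcomes: list of {"rating": 0-4 float, "categories": [names]} dicts.

    Returns the overall mean rating, the mean per functional category,
    and whether the proposed Bronze level is met on every axis.
    """
    total = sum(o["rating"] for o in outcomes) / len(outcomes)
    by_category = {}
    for o in outcomes:
        for cat in o["categories"]:
            by_category.setdefault(cat, []).append(o["rating"])
    category_means = {c: sum(r) / len(r) for c, r in by_category.items()}
    meets_bronze = (total >= BRONZE_THRESHOLD and
                    all(m >= BRONZE_THRESHOLD for m in category_means.values()))
    return total, category_means, meets_bronze

# Hypothetical ratings for three outcomes:
outcomes = [
    {"rating": 4.0, "categories": ["vision"]},
    {"rating": 3.5, "categories": ["vision", "motor"]},
    {"rating": 3.0, "categories": ["motor"]},
]
total, per_category, bronze = overall_scores(outcomes)
print(total)         # (4.0 + 3.5 + 3.0) / 3 = 3.5
print(per_category)  # vision: 3.75, motor: 3.25
print(bronze)        # False: the motor category falls below 3.5
```

Note how the per-category requirement bites here: the overall mean reaches 3.5, but one category drags below the threshold, so Bronze is not met.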

17 of 24

WCAG 3 Writing Process

18 of 24

Writing Process

19 of 24

Scope for SubGroup

  • A short bullet list:
    • what is in scope
    • what is out of scope
  • Milestones or an expectation of how long this group is expected to last

20 of 24

User Needs

Goal: To put user needs at the center.

Resources: Functional Needs Master list

Used for: background information to drive the process and the Get Started tabs

Complete the following:

  1. List group(s) of people with disabilities (Who) and the barriers they experience (Why)
  2. List of categories from Functional Needs (for conformance scoring)
  3. Identify common needs and unique needs (for testing and conflicts)

Completion: You are done when you have a list of groups and the barriers they experience, and a list of the common and unique needs of each group.

21 of 24

Outcomes

  • How you measure that the user need has been met
    • Similar to success criterion
    • Plain language
    • Simple phrasing and verb tenses
    • Active voice
  • Multiple Outcomes have an “AND” relationship: You must do “this” AND “this” AND “this”
  • As many outcomes as are needed to meet the user need
  • Associate the disability categories with each outcome
  • Identify any critical errors (if appropriate)

22 of 24

Critical Errors

  • Distinguish between errors that can be tolerated as bugs and errors that must be fixed.
  • Can be different for different Outcomes.
  • Three types of critical errors; any can apply:
    • Items that will stop a user from completing the task if they exist anywhere on the page (examples: flashing, keyboard trap, audio with no pause)
    • Errors that, when located within a process, mean the process cannot be completed (example: submit button not in tab order)
    • Errors that, when aggregated within a view or across a process, cause failure (example: a large amount of confusing, ambiguous language)

23 of 24

Example

Alt text

24 of 24

Plain Language

What is plain language?

  • Plain language is content written clearly so that readers can easily find, understand, and use the information.

Know your audience.

  • Find out what your readers need to know—not what you want to tell them.

How to do it:

  • Organize your content in a logical way.
  • Put the most important info first and follow with details.
  • Write in active—instead of passive—voice.
  • Use common, everyday words.
  • Write short sentences.
  • Use contractions.
  • Remove excess words.
  • Speak directly to your readers: Use “you” and other pronouns.
  • Use lists and tables to structure content.