
# Accessibility audit

This directory contains the tooling for the WCAG AA audit pass described in S39. Two complementary tools:

- **pa11y-ci** — broad scan across anonymous routes. Runs from a static URL list in `pa11y-config.json`. Cheap to run, good for catching regressions on the main pages.
- **axe-core via Puppeteer** — focused scan on the routes that need a logged-in session (settings, dashboard, new-repo form, notifications). Drives a headless Chromium through a real sign-in.
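
For reference, a pa11y-ci URL list generally takes the shape of a `defaults` block plus a `urls` array. A sketch only — the exact keys and routes in this project's `pa11y-config.json` may differ:

```json
{
  "defaults": {
    "standard": "WCAG2AA",
    "timeout": 30000
  },
  "urls": [
    "http://127.0.0.1:8080/",
    "http://127.0.0.1:8080/signup",
    "http://127.0.0.1:8080/login",
    "http://127.0.0.1:8080/explore"
  ]
}
```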

Automated tools catch ~30% of accessibility issues. Pair these with manual screen-reader passes (NVDA on Windows / VoiceOver on macOS) and keyboard-only navigation.

## Prerequisites

```sh
# One-time, in your CI runner or workstation:
npm i -g pa11y-ci puppeteer @axe-core/puppeteer
```

You also need a running shithub instance with seeded data. For local runs:

```sh
make dev-db dev-storage dev-migrate dev
# in another terminal, sign up an account so the auth flow has
# a known credential. Then export it:
export SHITHUB_USER=alice
export SHITHUB_PASS=<the-password-you-set>
export SHITHUB_URL=http://127.0.0.1:8080
```

## Running

```sh
# Anonymous routes:
make audit-a11y-pa11y

# Authenticated + diff/issue views:
make audit-a11y-axe

# Both:
make audit-a11y
```

Both tools exit non-zero on a high-severity (critical / serious) violation. CI gates on a clean run; lower-severity findings go to the audit record (`docs/internal/a11y-audit-record.md`).
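
The gating logic amounts to a small filter over axe-style results. A sketch, not this repo's actual runner — only the `violations`/`impact` fields follow axe-core's standard result shape; the function name is hypothetical:

```javascript
// Return the process exit code for a scan: non-zero when any violation
// is high-severity (critical or serious), zero otherwise.
// `results` follows the axe-core shape: { violations: [{ id, impact, ... }] }.
function gateExitCode(results) {
  const HIGH = new Set(["critical", "serious"]);
  const blocking = results.violations.filter((v) => HIGH.has(v.impact));
  for (const v of blocking) {
    console.error(`[a11y] ${v.impact}: ${v.id}`);
  }
  return blocking.length > 0 ? 1 : 0;
}
```

Lower-severity violations (`moderate`, `minor`) still appear in the tool's report; they just don't fail the run, which is what lets CI gate strictly while the audit record tracks the rest.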

## What's covered

| Tool       | Scope |
|------------|-------|
| pa11y-ci   | `/`, `/signup`, `/login`, `/explore`, `/-/health` |
| axe-runner | dashboard, settings/profile, settings/security/2fa, `/new`, `/notifications` |
| Manual SR  | every primary form + diff view + admin surfaces |

## What's NOT covered by automation

- **Screen-reader semantics** that pass programmatic checks but read poorly. Labelling old/new lines in the diff view for SR users is the canonical example — the `aria-label` checks pass; whether the resulting spoken output makes sense needs a human.
- **Keyboard order** that diverges from visual order in CSS-positioned layouts. `tabindex` audits help but don't catch every case.
- **Modal focus trapping** under unusual interaction sequences.
- **Color contrast** in user content (avatars, custom topic colors).

Findings from the manual audits are recorded in `docs/internal/a11y-audit-record.md` along with the dispositions (fixed / accepted-with-rationale).
