Andie Choi

Building accessibility standards from zero

Have you ever heard a text-to-speech device read a URL aloud, letter by letter? It’s not pleasant.

Amazon’s SP-API documentation had no way to measure or improve accessibility. No one asked me to fix this. I founded a program, coordinated across 8 writers, platform vendors, and engineering, and built it into the team’s workflow. The minimum accessibility score rose from 65% to 73%.

Role: Founded and led without a mandate; coordinated across writers, engineering, and platform vendors
Scope: 647 pages across 6 documentation sets on 2 hosting platforms
Timeline: November 2023 – January 2026


The problem

Amazon’s Selling Partner API documentation had no systematic way to measure or improve accessibility. There was no tooling in place, no content standards for accessible writing, and no feedback loop to tell us whether our docs were getting better or worse.

That meant we had no way of knowing whether the 16% of the global population living with a disability could actually navigate our content. The team had good instincts but no shared definition of what “accessible documentation” looked like in practice, no measurement, and no accountability.


Making the case

I evaluated two tooling options: Amazon’s internal Accessibility Evaluator and IBM’s open-source Equal Access Checker. I tested both against our documentation.

I wrote a proposal documenting the problem, the tooling evaluation, and a measurement plan. The proposal was approved.


Running the pilot

I scoped a 6-week pilot targeting 91 pages: the highest-traffic onboarding and API content.

To bring 8 writers up to speed, I created shared standards, detailed in the next section.


Turning ambiguity into standards

There were no existing accessibility conventions for documentation. I had to define what “accessible” meant in practice for this team, then make the standards concrete enough that 8 writers could apply them consistently.

  1. Alt text. Defined guidance for describing diagrams so screen reader users get the same understanding a sighted user does.
  2. Heading uniqueness. Standardized unique, descriptive heading conventions to replace repeated labels that broke assistive navigation.
  3. Visual position language. Replaced position-only instructions (“click the button on the right”) with element-label pairings that work without sight.
  4. Empty and skipped headings. Flagged and removed empty headings and skipped heading levels that broke the navigational hierarchy for assistive technology users.
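Several of these standards are mechanically checkable. As a minimal sketch (not the tooling the team actually used), a short Python script with the standard-library HTML parser can flag three of the patterns above: images without alt text, duplicate headings, and empty or skipped heading levels. The sample page and issue messages are illustrative.

```python
from html.parser import HTMLParser

class DocAuditParser(HTMLParser):
    """Flags content-level issues described above: images without
    alt text, duplicate headings, and empty or skipped heading
    levels (e.g. an h2 followed directly by an h4)."""

    HEADINGS = {"h1": 1, "h2": 2, "h3": 3, "h4": 4, "h5": 5, "h6": 6}

    def __init__(self):
        super().__init__()
        self.issues = []
        self.seen_headings = set()   # lowercase heading text seen so far
        self.last_level = 0          # most recent heading level
        self._in_heading = None
        self._heading_text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.issues.append("img missing alt text")
        if tag in self.HEADINGS:
            level = self.HEADINGS[tag]
            if self.last_level and level > self.last_level + 1:
                self.issues.append(f"skipped heading level: {tag}")
            self.last_level = level
            self._in_heading = tag
            self._heading_text = ""

    def handle_data(self, data):
        if self._in_heading:
            self._heading_text += data

    def handle_endtag(self, tag):
        if tag == self._in_heading:
            text = self._heading_text.strip()
            if not text:
                self.issues.append(f"empty heading: {tag}")
            elif text.lower() in self.seen_headings:
                self.issues.append(f"duplicate heading: {text!r}")
            else:
                self.seen_headings.add(text.lower())
            self._in_heading = None

# Hypothetical page fragment exhibiting all three problems
page = """
<h1>Orders API</h1>
<h3>Request</h3>
<img src="flow.png">
<h3>Request</h3>
"""
parser = DocAuditParser()
parser.feed(page)
for issue in parser.issues:
    print(issue)
```

A check like this catches the structural cases; judgment calls such as whether alt text conveys the same understanding as the diagram still need a human reviewer.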

Measuring results honestly

What improved:

What didn’t work:


Making it outlast me

A program that depends on one person isn’t a program. I built this to run without me.

Evaluated additional tooling. IBM + WAVE together covered the most ground. IBM handled severity-based reporting; WAVE handled interactive keyboard navigation testing.

Deployed automated monitoring. The AWS Level Access Crawler generates daily accessibility reports. The first full report showed 1,877 issues that were all platform-level false positives. Zero actionable content issues.

Built it into the workflow. Each week, the on-call writer reviews the crawler report, creates tickets, and updates the false-positive list. The program continues without depending on any single person.
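The weekly triage step can be sketched in a few lines: filter the crawler report against the maintained false-positive list, and turn whatever remains into tickets. The report fields, rule names, and false-positive keys below are hypothetical; the real AWS Level Access crawler output differs.

```python
# Hypothetical false-positive list maintained by the on-call writer.
# Keyed by (component, rule) so platform-level noise is suppressed
# without hiding the same rule when it fires on real content.
known_false_positives = {
    ("platform-nav", "banner-landmark-missing"),
    ("platform-footer", "link-contrast"),
}

# Hypothetical crawler report: one dict per reported issue.
report = [
    {"component": "platform-nav", "rule": "banner-landmark-missing", "page": "/orders"},
    {"component": "content", "rule": "img-alt-missing", "page": "/feeds"},
]

actionable = [
    issue for issue in report
    if (issue["component"], issue["rule"]) not in known_false_positives
]

for issue in actionable:
    print(f"ticket: {issue['rule']} on {issue['page']}")
```

Keeping the false-positive list as data rather than tribal knowledge is what lets the rotation work: any on-call writer can run the same filter and get the same ticket queue.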


From colleagues

“She went above and beyond in improving accessibility for our documentation, making thorough case studies and implementing lasting process improvements that helped our whole team create better work.”
Jack Evoniuk, Technical Writer

“Andie has consistently demonstrated exceptional skill as a Technical Writer, combining strong technical knowledge with a deep empathy for users of all backgrounds and abilities.”
Gibran Waldron, Business Analyst

“She took ownership of projects that had meaningful impact on user experience, inclusivity, and content quality.”
Wendy Kurko Giberson, Senior Technical Writer