Ghost Inspector recently integrated the Axe accessibility testing library to enable automated accessibility checks as part of its test scripts. Used sensibly, this could be a useful tool to include in our suites, particularly if we don’t currently have a strategy for testing accessibility. Much like the screenshot comparison feature, we should look for opportunities to easily increase our coverage.
What it’s great for
One of the main challenges with accessibility is keeping it present in the mindset of the delivery team throughout development. If, for example, the contract for the site we’re developing doesn’t stipulate accessibility compliance, how can we justify the time and effort spent improving accessibility? The first logical step is to raise awareness, and a tool that lets us introduce checks quickly is a great way to do this. It can lay the groundwork for establishing a more accessible site.
What it’s not great for
As always with automated solutions, it’s important to understand (and communicate) the limitations. Axe isn’t a replacement for a full accessibility audit. Because it’s the DOM being scrutinised, rather than what a user actually sees, you can’t be certain that what’s visible on the site is compliant. For example, just because an image has alt text doesn’t mean the alt text has anything to do with that image. Not all guidelines can be checked automatically; this UK Government article provides a really useful overview of how automated accessibility tools perform.
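The alt text example makes the limitation concrete: a DOM-level check can only confirm the attribute exists, not that it describes the image. A toy version of such a presence check (illustrative only, not Axe’s actual rule implementation):

```javascript
// A DOM-level check can only see that alt text is present and non-empty,
// not that it describes the image. Both images below pass the same check.
function hasAltText(img) {
  return typeof img.alt === 'string' && img.alt.trim().length > 0;
}

console.log(hasAltText({ src: 'cat.jpg', alt: 'A tabby cat asleep' })); // true
console.log(hasAltText({ src: 'cat.jpg', alt: 'image_0042' })); // true, but useless to a screen reader user
```

Judging whether the second alt text is actually meaningful still requires a human reviewer, which is exactly where the automated tooling stops.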
Integrating the checks into your tests
Interestingly, the Check Accessibility function has been implemented as a test step, which means you can run multiple checks throughout a single journey (unlike screenshot comparison). We still need to be mindful of trying to cover too much in one script. There are multiple options for whether a Check Accessibility step should cause a test run failure, but it makes little sense to fail the run where your script is also checking other functionality – you want to test these two things distinctly. It therefore makes sense either to create new tests specifically for accessibility checks, or to mark these steps as optional so they can be manually reviewed later. Consider the most important pages first.
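The fail-or-flag decision is configured in Ghost Inspector’s UI, but the underlying logic is simple enough to sketch in plain JavaScript. Here `violations` mirrors the shape of axe-core’s results, and the `optional` flag stands in for marking a step as optional – both names are illustrative, not Ghost Inspector’s API:

```javascript
// Decide whether an accessibility step should fail the whole test run.
// `violations` mirrors axe-core's results.violations array; `optional`
// mimics marking the step as optional for later manual review.
function shouldFailRun(violations, { optional = false } = {}) {
  if (optional) return false; // optional steps are reported, never fatal
  return violations.length > 0; // any violation fails a mandatory step
}

const violations = [{ id: 'image-alt', impact: 'critical', nodes: [] }];
console.log(shouldFailRun(violations)); // true
console.log(shouldFailRun(violations, { optional: true })); // false
```

The optional route keeps functional assertions and accessibility findings distinct, which is the separation the paragraph above argues for.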
How to handle the results
The results will always require a level of analysis and scrutiny. First, we should check the reported selectors against our application to see exactly what triggered each failure; perhaps it isn’t something we need to be concerned about. We should also understand the severity of an issue, and the time cost of resolving it. Of course, raising awareness across the team is one of the main objectives of integrating these tests, so always ensure the team is alerted to potential accessibility concerns.
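Axe reports each violation with an `impact` level and the CSS selectors of the offending nodes, which is enough structure to script a first-pass triage. A minimal sketch – the results shape follows axe-core’s documented output, while the grouping itself is just one possible approach:

```javascript
// Group axe-core violations by impact, keeping the selectors that
// identify each failing element so they can be located in the app.
function triage(results) {
  const byImpact = {};
  for (const v of results.violations) {
    const selectors = v.nodes.flatMap((n) => n.target);
    (byImpact[v.impact] ||= []).push({ rule: v.id, selectors });
  }
  return byImpact;
}

// Example shaped like a (trimmed) axe-core results object
const results = {
  violations: [
    { id: 'image-alt', impact: 'critical', nodes: [{ target: ['img.close'] }] },
    { id: 'landmark-unique', impact: 'moderate', nodes: [{ target: ['nav'] }] },
  ],
};
console.log(triage(results).critical[0].selectors); // [ 'img.close' ]
```

Sorting findings by impact first gives the team an obvious place to start when weighing severity against the cost of a fix.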
Assuming we’re introducing awareness of accessibility where there was previously little or none, we should take the opportunity to advocate for accessibility throughout the process. We shouldn’t be relying on GI tests to catch issues; we should be designing and implementing our features with accessibility in mind. If the Axe tests identify major concerns, a formal accessibility audit could be a consideration.
The existing johnlewis.com script from the Dynamics Tests article has been updated to include Check Accessibility steps for the three different page types the journey contains. The script is available to download at the bottom of this article.
Below are the results for the home page. Let’s take a look at a few of the failures in more detail.
The image lacking alt text is a close (“X”) icon. This (rightly or wrongly) isn’t actually keyboard-navigable, so the missing alt text isn’t apparent when using a screen reader. There’s a good case for making it both keyboard-accessible and giving it alt text: the banner appears on every page, and having it repeated could be an annoyance, so users should be able to dismiss it.
Two of the links without discernible text are icons in the header region. From a screen reader perspective, the links get read as “My Account Link” and “Wishlist Link”, which is accurate and concise. We meet the requirement “Link text and alternate text for images, when used as links, must be discernible by a screen reader, must not have a duplicate label, and must be focusable”, so perhaps there is no need for further attention.
There are multiple <nav> landmarks. This is fine in itself, but each should ideally carry additional labelling describing the specific purpose of that navigation element (usually by adding an aria-label).
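The rule Axe is enforcing here is that landmarks sharing a role must be distinguishable. The check can be sketched over a simplified list of landmarks; the data shape below is illustrative, not Axe’s internal representation:

```javascript
// Flag landmarks that share a role but lack a distinguishing label –
// the situation behind the "multiple <nav> landmarks" finding.
function unlabelledDuplicates(landmarks) {
  const byRole = {};
  for (const l of landmarks) (byRole[l.role] ||= []).push(l);
  return Object.values(byRole)
    .filter((group) => group.length > 1) // role appears more than once...
    .flatMap((group) => group.filter((l) => !l.label)); // ...and lacks a label
}

const landmarks = [
  { role: 'navigation', label: null }, // main nav, unlabelled: flagged
  { role: 'navigation', label: 'Footer links' }, // labelled: fine
  { role: 'banner', label: null }, // unique role: fine
];
console.log(unlabelledDuplicates(landmarks).length); // 1
```

In markup terms the fix is a one-line attribute on each repeated landmark, e.g. an aria-label of “Primary” on one nav and “Footer links” on the other.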
The home page contains no H1 heading, though other pages across the site appear to. A review of heading levels would be useful regardless; for example, there are multiple H1s on some pages, making the main purpose of the page unclear.
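The heading review boils down to one rule: a page should declare exactly one H1. A small sketch of that check over a page’s heading levels (the function and data shape are illustrative, not part of Axe or Ghost Inspector):

```javascript
// Check a page's heading levels against the "exactly one H1" rule
// behind both findings: no H1 on the home page, multiple H1s elsewhere.
function headingIssue(levels) {
  const h1Count = levels.filter((l) => l === 1).length;
  if (h1Count === 0) return 'no H1: the main purpose of the page is undeclared';
  if (h1Count > 1) return 'multiple H1s: the main purpose of the page is unclear';
  return null; // exactly one H1, no issue
}

console.log(headingIssue([2, 2, 3])); // no-H1 message (the home page case)
console.log(headingIssue([1, 1, 2])); // multiple-H1s message
console.log(headingIssue([1, 2, 2, 3])); // null
```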