Web Magazine for Information Professionals

Accessibility Testing and Reporting With TAW3

Patrick Lauke gives a run-down of the free TAW3 tool to aid in accessibility testing of Web pages.

When it comes to assessing a Web site's accessibility, any Web designer should know by now that simply running the mark-up through an automated testing tool is not enough. Automated tools are limited: they test purely for syntax, for easily ascertained "yes or no" situations, and against a set of (sometimes quite arbitrary) heuristics, which are often based on the tool developers' own interpretation of the accessibility guidelines.

Nonetheless, automated checkers are a useful tool in the arsenal of accessibility-conscious designers, provided that their results are checked for false positives/negatives [1] and backed up by the necessary manual checks, carried out by a knowledgeable human tester who is familiar with any potential access issues and how they manifest themselves in a Web site.

This article gives a quick run-down of Test de Accesibilidad Web 3 (TAW3) [2], a free tool - developed by the Spanish Fundación CTIC (Centre for the Development of Information and Communication Technologies in Asturias) - to test Web pages against WCAG 1.0.

TAW3 is available both as an online version (similar to other tools such as Cynthia [3] and Wave [4]) and as a stand-alone Java application. In this article, we will concentrate on the stand-alone version, which in version 3 is now also available in English.

Application Interface

The interface consists of a single window, divided into three main areas:

Figure 1: General layout of the TAW3 interface

  1. standard menu bar;
  2. quick access buttons;
  3. analyser tab, which is further broken down into: a) analysis scope, b) action buttons, c) analyser settings, d) analysis result, e) source code view.

The analyser panel represents the main work area of the application. It is possible to work on any number of analyser panels within the same application window, which can act independently.

Figure 2: Scope section of the analyser tab, showing the different options available to spider multiple pages of a Web site

In the scope section of each analyser we can choose to test a single page, or to "spider" a site by defining the types of links to follow, the depth level and the overall number of pages to test.
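
As an aside, the following Python sketch illustrates the general principle behind this kind of spidering - a breadth-first crawl bounded by a depth level and an overall page limit. It is purely illustrative, and bears no relation to TAW's actual implementation:

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen


    class LinkExtractor(HTMLParser):
        """Collect the href of every anchor element in a page."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)


    def spider(start_url, max_depth=2, max_pages=20):
        """Gather up to max_pages URLs, following links down to max_depth."""
        seen, queue, pages = {start_url}, deque([(start_url, 0)]), []
        while queue and len(pages) < max_pages:
            url, depth = queue.popleft()
            try:
                html = urlopen(url).read().decode("utf-8", "replace")
            except OSError:
                continue
            pages.append(url)  # this page would now be handed to the analyser
            if depth < max_depth:
                parser = LinkExtractor()
                parser.feed(html)
                for href in parser.links:
                    absolute = urljoin(url, href)
                    # a real crawler would also filter by link type and site
                    if absolute not in seen:
                        seen.add(absolute)
                        queue.append((absolute, depth + 1))
        return pages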

Figure 3: Summary of the analysis of a single page

After the initial automated part of the analysis has been completed, TAW presents us with a summary of all issues it found, as well as the number of points that require human judgement on the part of the tester.

Figure 4: Analyser tab showing a list of priority 1 issues found on the current page

Switching to the individual "Priority 1", "Priority 2" and "Priority 3" tabs, we get a comprehensive list of all the WCAG 1.0 checkpoints. From here, we can see which particular part of the page's mark-up triggered an error or requires human review.

Right-clicking on any of the checkpoints brings up a context menu to aid the tester in assessing the related issues:

Figure 5: 'Techniques' context menu option for an individual checkpoint, which opens the relevant part of the WCAG 1.0 page in a Web browser

Figure 6: For 'visual checking' of the current Web page a customised version of the page is opened in a Web browser

Of course, this step can be further complemented by additional manual checking methodologies, such as those outlined in my previous article, "Evaluating Web Sites for Accessibility with Firefox" [6].

Figure 7: 'Validity level' context menu for an individual checkpoint

Figure 8: Checkpoint annotation dialog window

Where necessary, it is possible to make specific annotations for each checkpoint tested (for instance, to back up a particular validity level that was chosen). So, for a comprehensive test, we work through each of the checkpoints, ensuring that all automated checks have yielded the correct result and assigning a validity level to all of the human checks.

Figure 9: Checkpoints checklist dialog window

At any point, we can get a slightly more compact checklist for the current analyser via the Checkpoint › Checklist menu option.

Figure 10: An HTML Guidelines settings dialog, with two priority 3 'until user agents' checkpoints unchecked.

An interesting feature of TAW is the ability to create sub-sets of WCAG 1.0 to test against. From the "Guidelines settings" dialog we can choose not only which level of conformance we're assessing (single A, double A, triple A), but also exclude specific checkpoints. For instance, if we make a conscious judgement that the priority 3 checkpoint 10.4 'Until user agents handle empty controls correctly, include default, place-holding characters in edit boxes and text areas' is no longer relevant (i.e. that the 'until user agents' clause has been satisfied for all user agents in common use today), we can explicitly omit this point from our testing regime.
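
Conceptually, testing against such a sub-set simply means dropping the excluded checkpoints from the results before review, as this minimal Python illustration shows (the checkpoint numbers and result structure are my own invention, not TAW's internal format):

    # checkpoints we have consciously decided to omit from the regime
    excluded = {"10.4", "10.5"}

    results = [
        {"checkpoint": "1.1", "outcome": "fail"},
        {"checkpoint": "10.4", "outcome": "requires human check"},
    ]

    # only results for checkpoints still in the testing regime survive
    filtered = [r for r in results if r["checkpoint"] not in excluded]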

Figure 11: User check dialog and the associated regular expression attribute wizard

Although very limited in scope (particularly due to a bug, see below), TAW also allows us to define additional checks via the 'User checkings' dialog. Currently, we can only select the HTML element the check applies to and (either directly or with the help of a regular expression wizard) define which attribute this element is required, or not allowed, to have in order to pass the check. For instance, we could create a rule to ensure that all BLOCKQUOTE elements found in a page also have a CITE attribute.
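
To make this example concrete, here is a rough Python equivalent of such a BLOCKQUOTE/CITE check - a sketch of the idea only, not of how TAW implements it internally:

    from html.parser import HTMLParser


    class BlockquoteCiteCheck(HTMLParser):
        """Flag every BLOCKQUOTE element that lacks a CITE attribute."""

        def __init__(self):
            super().__init__()
            self.failures = 0

        def handle_starttag(self, tag, attrs):
            # html.parser lowercases tag and attribute names for us
            if tag == "blockquote" and "cite" not in dict(attrs):
                self.failures += 1


    checker = BlockquoteCiteCheck()
    checker.feed('<blockquote cite="http://example.com/source">quoted</blockquote>'
                 '<blockquote>no source given</blockquote>')
    print(checker.failures)  # 1 - only the second BLOCKQUOTE fails the check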

Figure 12: An HTML 'TAW report' opened in a Web browser

Once we are finished with the overall test, we can save our results in three different ways: as an HTML summary (which simply presents the frequency of errors encountered in tabular format), as an HTML "TAW report" (which adds markers and error descriptions to the analysed page, in the same way as TAW's online version) and as an Evaluation and Report Language (EARL) [7] file (a recent RDF-based XML format created specifically with this type of application in mind).

Figure 13: An EARL report opened in Notepad
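
One practical benefit of EARL being an open XML format is that reports can be processed outside TAW itself. As a sketch, the following Python snippet tallies the validity outcomes found in a report; note that matching on a 'validity' element is my assumption, based on the terminology of the EARL 1.0 working draft [7], and may not correspond exactly to TAW's output:

    import xml.etree.ElementTree as ET
    from collections import Counter

    RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"


    def count_outcomes(path):
        """Tally the validity outcomes recorded in an EARL report file."""
        outcomes = Counter()
        for elem in ET.parse(path).iter():
            local = elem.tag.rsplit("}", 1)[-1]  # strip any namespace prefix
            if local == "validity":
                # the value may be element text or an rdf:resource URI
                value = (elem.text or elem.get(RDF + "resource") or "").strip()
                if value:
                    outcomes[value.rsplit("#", 1)[-1]] += 1
        return outcomes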

Problems and Bugs

The main problem with TAW lies with its interface. Currently, there is a great deal of redundancy: in most cases, the same function can be accessed via a button, a menu option and a context menu. This is not a bad thing in itself, but it becomes confusing when it's not carried through consistently. For instance, the checkpoint checklist dialog provides buttons for "Visual checking" and "Checkpoint annotations", but offers the tester no way to set the checkpoints' validity levels.

The order in which some steps need to be carried out also matters, but the interface gives no indication of this. For instance, "Reports" and "Guidelines" settings need to be chosen before an analysis is started; changing the settings in an existing analyser pane has no effect, even after hitting the "refresh" button.

Some of the errors reported by the automated test are contentious and based on an interpretation of the relevant WCAG checkpoint. For example, the tool fails priority 2 checkpoint 3.5 "Use header elements to convey document structure and use them according to specification" if, for whatever reason, header levels "jump" by more than one level (e.g. an H3 immediately following an H1, completely skipping H2) - neither WCAG nor the HTML specification explicitly forbids this (although it's admittedly not the best of practices).
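
For illustration, a simplified stand-in for this kind of heading-order test might look like the following Python sketch (my approximation of the behaviour described above, not TAW's actual logic):

    from html.parser import HTMLParser

    HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}


    class HeadingJumpCheck(HTMLParser):
        """Record every place where heading levels jump by more than one."""

        def __init__(self):
            super().__init__()
            self.previous = 0
            self.jumps = []

        def handle_starttag(self, tag, attrs):
            if tag in HEADINGS:
                level = int(tag[1])
                if self.previous and level > self.previous + 1:
                    self.jumps.append((self.previous, level))
                self.previous = level


    checker = HeadingJumpCheck()
    checker.feed("<h1>Title</h1><h3>Skipped a level</h3>")
    print(checker.jumps)  # [(1, 3)] - the H1-to-H3 jump that gets flagged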

Figure 14: Analyser summary of spidering multiple pages within a Web site

Although the tool allows for the "spidering" of an entire site, if the same issue appears on multiple pages (which can easily happen, particularly when a site is based on a common template), it is currently not possible to review the issue once and then set its validity level across all tested pages. This makes using the tool on more than one page at a time quite a tedious experience.

Guideline settings, and even the entire current TAW project, can be saved for future use, but surprisingly the developers opted for a proprietary file format. Personally, I would have welcomed the use of some flavour of XML, which would have opened up these files for potential reuse and manipulation outside of the TAW application itself.

Figure 15: Application bug: the context menu for user-defined checkpoints does not allow you to set the validity level

Lastly, what I can only assume is a bug in the program: although it is possible to create user-defined checks, the tool does not allow the tester to set their validity level - the option is simply greyed out. This renders the whole concept of user checks fairly pointless for anything other than purely automated checks, whose validity the software can determine without any margin for error or need for human judgement.

Conclusion

TAW3 is certainly not as powerful and comprehensive as some of the commercial, enterprise-level testing packages (such as the accessibility module of Watchfire WebXM [8]). In its current implementation, the application has a few bugs and an overall clumsy interface, which make it look unnecessarily complicated and confusing at first glance.

However, the strong emphasis on human checking and some of the advanced features (like the capability to export an EARL report, to test only against a sub-set of WCAG 1.0, and to create user-defined checks) make this a very interesting free tool for small-scale testing.

References

  1. Davies, M. "Sitemorse fails due diligence",
    http://www.isolani.co.uk/blog/access/SiteMorseFailsDueDiligence
  2. Fundación CTIC, Test de Accesibilidad Web online tool and download page,
    http://www.tawdis.net/taw3/cms/en
  3. HiSoftware Cynthia Says,
    http://www.contentquality.com/
  4. Wave 3.0 Accessibility Tool,
    http://wave.webaim.org/
  5. Fundación Sidar, 'HERA: Cascading Style Sheets for Accessibility Review' tool,
    http://www.sidar.org/ex_hera/
  6. Lauke, P.H. "Evaluating Web Sites for Accessibility with Firefox", Ariadne 44, July 2005,
    http://www.ariadne.ac.uk/issue44/lauke/
  7. W3C Evaluation and Reporting Language (EARL) 1.0 Schema,
    http://www.w3.org/TR/2005/WD-EARL10-Schema-20050909/
  8. Watchfire WebXM product page,
    http://www.watchfire.com/products/webxm/

Author Details

Patrick H. Lauke
Web Editor
University of Salford

Email: p.h.lauke@salford.ac.uk
Web site: http://www.salford.ac.uk
Personal web site: http://www.splintered.co.uk
