Developing Usable Online Information for a Web Authoring Tool

This section covers the following topics:

- Defining a usable system
- The importance of navigation strategies
- The value of heuristic evaluation
- Existing usability checklists


Related Research

Various researchers [1, 3, 7, 8, 9, 10] have presented methods for collecting usability data; however, only Chignell [3] and Mehlenbacher [8] discuss the usability of documentation. To date, researchers have addressed:

- defining a usable system [8]
- the importance of navigation strategies [3]
- the value of heuristic evaluation [9]

The following sections take a brief look at these ideas, as well as existing usability checklists, before moving on to describe how the concepts were applied to test the usability of a software user guide.

Defining a Usable System

Mehlenbacher [8] reasoned that before one can study the usability of a system, one must first determine the characteristics of a usable system. In a paper at SIGDOC '93, he summarized eight ways to evaluate online systems and documents and concluded that a usable system is one that is accessible, maintainable, visually consistent, comprehensive, accurate, and oriented around the tasks that users intend to perform.

The Importance of Navigation Strategies

Chignell and Valdez [3] postulated that providing users with better access to information requires an understanding of the navigation strategies that they use to find the information. They studied a number of such strategies in experiments conducted at the University of Toronto.

The experiments compared the navigation strategies used in printed documents and in online documents, where users can follow predefined paths or links by selecting highlighted terms. The results showed that linear actions dominated in both media.

From their studies, Chignell and Valdez concluded that online documents should incorporate elements of the "book" metaphor, including such navigation devices as a table of contents and an index. Finding that there is no single measure that tests hypertext usability, they also determined that hypertext authoring requires many of the same technical communication skills, such as concise writing and indexing, that printed material requires.

The Value of Heuristic Evaluation

According to Nielsen [9], heuristic evaluation is an informal method of usability analysis. In user interface design, heuristic evaluation occurs when a number of evaluators are presented with an interface design and asked to comment on it according to a set of guidelines. Nielsen warns that it is important to have several evaluators who conduct their evaluations independently. His experience indicates that three to five evaluators can usually identify about 75 per cent of the usability problems in a particular design.
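
Nielsen's estimate follows from a simple probabilistic model of problem discovery that he has described elsewhere: if each evaluator independently finds a given problem with probability lambda, then n evaluators together find a share of 1 - (1 - lambda)^n of all problems. The short Python sketch below illustrates the arithmetic; the detection rate of 0.31 is an assumed value chosen so that the curve matches the 75 per cent figure quoted above, not data reported in this paper.

    def share_of_problems_found(n_evaluators, detection_rate=0.31):
        """Expected share of usability problems found by n independent evaluators.

        Assumes each evaluator finds a given problem with probability
        detection_rate (lambda); 0.31 is an assumed illustrative value,
        not a figure taken from this paper.
        """
        return 1.0 - (1.0 - detection_rate) ** n_evaluators

    for n in range(1, 6):
        print(f"{n} evaluator(s): {share_of_problems_found(n):.0%}")
    # 1: 31%, 2: 52%, 3: 67%, 4: 77%, 5: 84%

Under this assumed detection rate, the expected yield levels off quickly, which is consistent with Nielsen's advice that a handful of independent evaluators is usually sufficient.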

Because usability guidelines often include several hundred items, Nielsen reduced the number of rules to nine basic usability principles:

- use simple and natural dialogue
- speak the users' language
- minimize user memory load
- be consistent
- provide feedback
- provide clearly marked exits
- provide shortcuts
- provide good error messages
- prevent errors

In addition to reviewing the related research on usability, we also considered existing checklists.

Existing Usability Checklists

Several existing checklists served as models in the development of our checklists; for example, the Ravden [10] checklist for human-computer interfaces, as well as checklists prepared by members of the Society for Technical Communication (STC), including those by Hartshorn [4], Henkle [5], and a number of STC competition managers [11].

The Ravden checklist includes about 120 detailed questions in various categories, including visual clarity, consistency, compatibility, and information feedback. The Hartshorn checklist contains a list of questions to test a user guide or reference guide for retrievability, ease of understanding, tasks, technical accuracy, and presentation. The Henkle checklist has over 200 questions and asks evaluators to explain why a user guide does not meet a quality guideline. The STC checklist is a list of questions for evaluating various publications in the technical publications competition.
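
To make the structure of such checklists concrete, the following Python sketch shows one way a category-based checklist might be represented and tallied. The class names, the sample category, and the pass/fail scoring rule are our own illustrative assumptions; none of the cited checklists prescribes this representation.

    from dataclasses import dataclass, field

    @dataclass
    class ChecklistSection:
        # One category of questions, e.g. "Visual clarity" in the Ravden checklist.
        name: str
        questions: list = field(default_factory=list)

    @dataclass
    class Checklist:
        title: str
        sections: list

        def score(self, answers):
            # Fraction of all questions answered "yes"; unanswered items count as "no".
            total = sum(len(s.questions) for s in self.sections)
            passed = sum(1 for s in self.sections
                         for q in s.questions if answers.get(q, False))
            return passed / total if total else 0.0

    # Hypothetical usage: two questions, one answered "yes".
    guide_review = Checklist("User guide review", [
        ChecklistSection("Visual clarity", [
            "Is each page clearly identified?",
            "Is important information highlighted?",
        ]),
    ])
    print(guide_review.score({"Is each page clearly identified?": True}))  # 0.5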

