UX 2.0 Usability planning
Posted on September 4, 2009
So the first part of the usability research taking place in the UX 2.0 project is to perform a usability inspection of selected digital libraries (DLs). In order to do this, two things had to be decided:
- What DLs to inspect
- How to perform the inspection
In this entry I map out how these decisions were made and their implications.
The most difficult part of selecting DLs to inspect was narrowing the list, because there are so many DLs out there. How many would give sufficient breadth of information for comparison? What criteria should be used for inclusion, and which libraries should be excluded? These were questions the team had to answer. As we wanted to compare our findings with the evaluation of the digital library library@nesc later in the project, it seemed appropriate to exclude commercial publisher digital libraries and focus on public digital libraries. In addition, the WorldCat usability testing report published for the ALA Annual in July 2009 found that academic users favour searching local, national and worldwide collections together, whereas public library patrons are more interested in resources that are geographically close. This led us to think in terms of the geographic reach of DLs and the differences between them. As a result we selected five digital libraries, one representing each geographic reach: worldwide, continental, nationwide, regional and local. From this the following DLs were selected:
| Geographic reach | Digital library | Web address |
| --- | --- | --- |
| Worldwide | World Digital Library | http://www.wdl.org/en/ |
| Nationwide (UK) | British Library | http://www.bl.uk |
| Local (Edinburgh) | Edinburgh University Aqua Browser | http://aquabrowser.lib.ed.ac.uk/ |
The next thing to do was to decide how to conduct the inspection. There are a number of well-known and commonly used usability methodologies available. Several factors helped to narrow the choice:
- Scope: the inspection was proposed as a quick appraisal of the current state of digital libraries, not as a detailed evaluation.
- Time-scales: the short time-scale meant the inspection had to be done quickly, so user testing would not be achievable at this stage in the project.
Consequently, it would not be possible to evaluate the usefulness of each DL as outlined in the triptych methodology of Tsakonas and Papatheodorou (2007). Factors such as relevance, format, reliability and coverage would not be examined at this time. Instead the focus would be on the usability of the system to the user: ease of use, aesthetics, navigation, terminology and learnability. As digital libraries generally have a well-developed strategy and scope, it is more important to focus attention on the structure, skeleton and surface of the DL, as explained by Jesse James Garrett in his book ‘The Elements of User Experience’. This includes information architecture, navigation design and visual design such as page format, colours and typography.
With all this in mind it was decided that a heuristic evaluation would be suitable. However, Jakob Nielsen, co-creator of the method, points out that heuristic evaluations are more effective when carried out by more than one evaluator. As there are no other usability specialists working on this project, that would not be possible; but since this inspection is intended as a quick evaluation of current DLs, it was not considered detrimental to the research. To limit this issue, the cognitive walk-through method will also be integrated into the inspection. Formal task scenarios will not be created, but typical user tasks such as searching will be considered when evaluating each DL. It is hoped that doing so will highlight any barriers to task success when it's not possible to test with actual users.
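The walk-through part of the plan can be sketched as a simple checklist generator. This is only an illustration of how I might structure the inspection, not project code: the four questions are the standard cognitive walk-through questions from Wharton et al., while the example task steps are hypothetical.

```python
# The four standard cognitive walk-through questions (Wharton et al.),
# asked at every step of a typical user task.
WALKTHROUGH_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect they want?",
    "If the correct action is performed, will the user see that progress is being made?",
]

def walk_through(task_steps):
    """Yield (step, question) pairs to answer while inspecting a DL."""
    for step in task_steps:
        for question in WALKTHROUGH_QUESTIONS:
            yield step, question

# Illustrative steps for a typical search task; no formal scenario needed.
search_steps = [
    "locate the search box",
    "enter a query",
    "interpret the results page",
]

for step, question in walk_through(search_steps):
    print(f"{step}: {question}")
```

Answering each pair in turn for every DL keeps the informal walk-through consistent across libraries without the overhead of full task scenarios.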
For anyone who is unsure what a heuristic evaluation and a cognitive walk-through entail, I plan to explain both in my next blog post.
So after deciding on the digital libraries to inspect and the method to inspect them, I am now at the stage of analysing each library and collecting my findings. Every usability expert has their own way of doing this, but I find it works best to familiarise myself with each site first, then jot down brief notes on each issue accompanied by a screen grab. After that, issues will be written up in detail, assigned a severity rating and discussed. In addition, positive findings and the development of collaborative or personalised systems (if any) will also be examined. Finally, each DL will be compared and contrasted and conclusions drawn.
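One way to keep the write-up consistent across five libraries is to record each finding in a uniform structure. The sketch below is a hypothetical note-taking aid, assuming Nielsen's familiar 0–4 severity scale; the field names and helper are my own illustration, not part of the project's methodology.

```python
from dataclasses import dataclass

# Nielsen's 0-4 severity ratings for usability problems.
SEVERITY_LABELS = {
    0: "not a usability problem",
    1: "cosmetic problem",
    2: "minor problem",
    3: "major problem",
    4: "usability catastrophe",
}

@dataclass
class Finding:
    library: str        # e.g. "World Digital Library"
    description: str    # brief note on the issue (or positive observation)
    severity: int       # 0-4 on Nielsen's scale; use 0 for positive findings
    screenshot: str = ""   # path to the accompanying screen grab, if any
    positive: bool = False # record things the DL does well, too

def summarise(findings):
    """Group findings per library, most severe first, for the write-up."""
    by_library = {}
    for f in findings:
        by_library.setdefault(f.library, []).append(f)
    return {
        lib: sorted(fs, key=lambda f: f.severity, reverse=True)
        for lib, fs in by_library.items()
    }
```

Sorting by severity within each library makes the final compare-and-contrast step easier, since the most serious barriers surface at the top of each library's list.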
I hope this has helped to provide insight into the early stages of the usability research taking place. Please feel free to comment or discuss any aspect of the methodology.