Researching Usability

UX 2.0 Usability planning

Posted on: September 4, 2009

So the first part of the usability research taking place in the UX 2.0 project is to perform a usability inspection of selected digital libraries (DLs). In order to do this, two things had to be decided:

  1. What DLs to inspect
  2. How to perform the inspection

In this entry I have mapped out how these decisions were made and their implications.

The most difficult thing about selecting DLs to inspect was narrowing the list, because there are so many DLs out there. How many will give a sufficient breadth of information for comparison? What criteria should be used and which should be excluded? These were all questions the team had to answer. As we wanted to compare our findings with the evaluation of the digital library library@nesc later in the project, it seemed appropriate to exclude commercial publisher digital libraries and focus on public digital libraries. In addition, the findings from the WorldCat usability testing report published for the ALA Annual in July 2009 revealed that academic users favour searching local, national and worldwide collections together, whereas public library patrons are interested in resources which are geographically close. This led us to think in terms of the geographic reach of DLs and the differences between them. As a result we selected five digital libraries, each representing a different geographic reach: worldwide, continental, nationwide, regional and local. The following DLs were selected:

Attribute | Digital Library | Web address
Worldwide | World Digital Library | http://www.wdl.org/en/
Continental (Europe) | Europeana | http://www.europeana.eu
Nationwide (UK) | British Library | http://www.bl.uk
Regional (Scotland) | SCRAN | http://www.scran.ac.uk/
Local (Edinburgh) | Edinburgh University Aqua Browser | http://aquabrowser.lib.ed.ac.uk/

The next thing to do was to decide how to conduct the inspection. There are a number of well-known and commonly used usability methodologies available. Two factors affecting the scope of this inspection helped to narrow the choice:

  • Scope: the inspection was proposed as a quick appraisal of the current state of digital libraries and was not intended as a detailed evaluation.
  • Time-scales: the short time-scale meant that the inspection had to be completed quickly, so user testing would not be achievable at this stage of the project.

Consequently, it would not be possible to evaluate the usefulness of each DL as outlined by Tsakonas and Papatheodorou (2007) in their triptych methodology. Factors such as relevance, format, reliability and coverage would not be examined at this time. Instead, the focus would be on the usability of the system for the user: ease of use, aesthetics, navigation, terminology and learnability. As digital libraries generally have a well-developed strategy and scope, it is more important to focus attention on the structure, skeleton and surface of the DL, as explained by Jesse James Garrett in his book ‘The Elements of User Experience’. This includes information architecture, navigation design and visual design elements such as page format, colours and typography.

With all this in mind it was decided that a heuristic evaluation would be suitable. However, Jakob Nielsen, co-creator of the heuristics used in this method, points out that heuristic evaluations are more effective when carried out by more than one evaluator. As there are no other specialists working on this project, this would not be possible. Since the inspection is intended as a quick evaluation of current DLs, though, this was not considered detrimental to the research. To limit the issue, the cognitive walk-through method will also be integrated into the inspection. Formal task scenarios will not be created, but typical user tasks such as searching will be considered when evaluating each DL. It is hoped that doing so will highlight any barriers to task success when it is not possible to test with actual users.

For anyone who is unsure what a heuristic evaluation and a cognitive walk-through entail, I plan to explain these in my next blog post.

So after deciding on the digital libraries to inspect and the method for inspecting them, I am now at the stage of analysing each library and collecting my findings. Every usability expert has their own way of doing this, but I find that familiarising myself with each site first, then jotting down brief notes on each issue accompanied by a screen grab, works best for me. After that, issues will be written up in detail, assigned a severity rating and discussed. In addition, positive findings and the development of collaborative or personalised systems (if any) will also be examined. Finally, each DL will be compared and contrasted and conclusions drawn.
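For anyone curious about how I keep these notes organised, here is a minimal sketch, in Python and purely illustrative rather than part of any project toolkit, of the kind of record I keep for each issue: the DL it was found in, the user task being walked through, the heuristic it relates to, a brief description, a severity rating on Nielsen's 0-4 scale, and a pointer to the screen grab. The field names and the example values are my own invention.

    from dataclasses import dataclass

    # Severity loosely follows Nielsen's 0-4 scale:
    # 0 = not a problem, 2 = minor issue, 4 = usability catastrophe.

    @dataclass
    class UsabilityIssue:
        library: str          # which DL the issue was found in
        task: str             # typical user task being walked through
        heuristic: str        # heuristic the issue relates to
        description: str      # brief note on the problem
        severity: int         # 0-4 severity rating
        screenshot: str = ""  # filename of the accompanying screen grab

    # A placeholder record, for illustration only
    example = UsabilityIssue(
        library="World Digital Library",
        task="Search for an item and browse the results",
        heuristic="Visibility of system status",
        description="(brief note on the problem goes here)",
        severity=2,
        screenshot="wdl_search_results.png",
    )

Keeping the notes structured in roughly this way makes it straightforward to group issues by library or by severity when writing them up.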

I hope this has helped to provide insight into the early stages of the usability research taking place. Please feel free to comment or discuss any aspect of the methodology.

3 Responses to "UX 2.0 Usability planning"

[…] help to inform the evaluation currently taking place on existing digital libraries (see previous blog). Highlights of the event will be posted next […]

First of all, how lovely to see someone else doing (digital) library usability! I was really pleased when I saw your message come through on the JISC list, and I’ve enjoyed reading the analyses you’ve posted thus far.

In relation to this post, how did you decide on a definition of ‘digital library’? There is quite a range of systems in your analysis, including a novel interface to a standard library system, a couple of systems that host the content themselves, and some meta-indexing/search facilities (and the BL site includes elements of all of these), and I’m interested in how the different purpose/approach of each system plays into your analysis.

I’m very excited to read the rest of your reviews, I think what you’re doing is fascinating!

Thanks danamckay for your comment. We tried to cast a wide net when considering a definition of ‘digital library’ so that we could select a range of different types, as you have noticed. We have approached the analysis in two ways: the first is to examine the basic structure common to each DL, such as search. The second is to consider the end user and evaluate the different ways each DL has approached the task of enhancing the user experience. As the purpose of each system differs, we have tried to avoid directly comparing the DLs, instead highlighting the way that functions have been implemented and evaluating how successful they are.

