Researching Usability


Many thanks to everyone who made it along to our event yesterday. I hope it was informative and provided some guidance on planning your own user research. I've uploaded my slides from the morning presentation to Slideshare for those who might want to bookmark them, or for anyone who didn't make it along. Based on the feedback we received, I think it's fair to say the whole day was a great success. If you were there and wish to send your feedback (good and bad), please feel free to add your comments here or drop me a line.

If you hadn't already heard, the UCD Day was the last activity from the UX2.0 project, which officially ended last month. It's been a great 24 months working on this project, and while putting together the slides for the UCD Day, I realised just how much work we had produced. Hopefully all of the documents, reports and blog posts we contributed during that time will be put to good use by others. I'm hoping to continue this blog in some capacity once I settle into a new role, but fear it might go a little quiet in the short term.

Thanks for visiting the blogs and project website/wiki and I hope you’ll continue to enjoy reading my posts in the future.


EDIT: I forgot to mention, those who came to the prototyping session (and those who didn't) who are interested in trying Balsamiq themselves can do so through our pilot scheme. Please remember to register as an EASE friend first to gain access. It's a great opportunity to try Balsamiq and decide whether it's worth purchasing a licence. It really is a great tool, and easy to use too!

As the UX2 project wraps up this month, we have planned an event which we hope will introduce user-centred design and usability in digital libraries to others through our case studies.

The event is free and funded by JISC, meaning it's primarily aimed at those in the education sector who are new to usability, user-centred design and user interface prototyping.

It takes place on Tuesday 17th May and runs from 0930 to 1700.

The practical session in the afternoon has a limited number of places, so early registration is recommended. During the session you'll get the chance to create your own wireframes, first on paper and then using the prototyping tool Balsamiq. Our project recently funded a licence which makes Balsamiq available for others to try out. For more information on this, visit the Wiki:


0900-0930 Registration

0930-1030 User-centred design and faceted search user interface (Boon Low, Lorraine Paterson)
This session will introduce user experience and user-centred design concepts and how these relate to open-source development. The complexities of UIs will be highlighted through a discussion of usability issues and existing UI design patterns, in particular faceted search.

1030-1100 Break

1100-1230 Usability and user research: case studies and current practice (Lorraine Paterson, Boon Low)
This session will introduce user research methods and highlight examples from case studies to demonstrate how these methods can be used. A guide to usability test techniques will be provided including a best practice guide and practical advice.


1330-1700 Practical: Prototyping for beginners (max 12 participants) (Neil Allison, Liza Zamboglou, Lorraine Paterson)
This non-technical introduction to prototyping will be a mixture of presentations, discussion and practical activities. It will cover the basics of prototyping – the benefits, some case study examples and a range of approaches to consider. Practical activities will involve iterating the design of a simple interface; first with paper and then with the online prototyping tool Balsamiq.


For the FULL DAY including practical workshop register here:

If the full day event books up or you only want to attend the morning presentations, register here:

For more information including details on the instructors visit our project Wiki:

Bias is an issue that anyone gathering user data is wary of. Whether it's usability testing, face-to-face interviews or online questionnaires, bias can affect the strength and integrity of results in a variety of ways. Question design is one of the most influential factors and should therefore be given careful consideration. Leading questions can inadvertently give participants an idea of the desired answer and influence their response. However, sampling bias can also have a significant effect on research results and is often overlooked by researchers.

I was reading Europeana's Annual Report this week and noticed that the results from their online visitor survey were, on the whole, very positive. Reading the survey report in more detail, I realised that sample bias may be affecting the results. Data from online visitor surveys are normally gathered using an intercept, which invites a visitor to participate in the research when they arrive at the site. Anyone visiting the site who receives this invitation is eligible to participate, making respondents 'self-selected'. This means that they decide to participate, not the researcher. Their motivation for participating may be related to the topic of the survey or to the incentive provided to garner interest. Consequently their participation is unlikely to produce a representative sample.

For example, those who participated in Europeana's survey are more likely to have been motivated by their enthusiasm for and interest in the website. Certainly those who are apathetic or indifferent to the website are less likely to have participated. This is supported by the proportion of participants who were regular visitors to the site. Only 8.6% of participants were first-time visitors, and the results from these participants were generally more positive than those from participants who had visited the site before. It would be interesting to find out whether a larger sample of first-time users would alter these results.
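To illustrate how a skewed sample mix can move an overall score, here is a minimal sketch of post-stratified weighting. All the numbers and the function name are mine, invented for illustration (they are not Europeana's data): it simply re-weights each segment's average score by that segment's share of the real visitor population.

```python
# Hypothetical example: re-weighting survey scores when one segment
# (e.g. first-time visitors) is under-represented in a self-selected sample.
# All figures below are invented for illustration.

def weighted_score(segment_scores, population_shares):
    """Post-stratified mean: weight each segment's average score
    by its share of the overall visitor population."""
    assert abs(sum(population_shares.values()) - 1.0) < 1e-9
    return sum(segment_scores[s] * population_shares[s] for s in segment_scores)

# Average satisfaction (on a 1-5 scale) per segment in the sample
sample_scores = {"first_time": 3.2, "returning": 4.4}

# Taking the sample mix at face value: first-timers are only 8.6% of respondents
naive = weighted_score(sample_scores, {"first_time": 0.086, "returning": 0.914})

# Suppose web analytics showed half of all visitors are actually first-timers
adjusted = weighted_score(sample_scores, {"first_time": 0.5, "returning": 0.5})

print(round(naive, 2))     # 4.3 - dominated by the over-represented segment
print(round(adjusted, 2))  # 3.8 - the overall score shifts once re-weighted
```

This only works when you know the true population mix from another source (such as web analytics), which is exactly the information an intercept survey on its own cannot give you.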

So what can researchers do to prevent sample bias in their results? It is very difficult to remove sample bias completely, especially in online surveys where the researcher has no control over who participates. Generally speaking, visitor surveys will always carry the risk of bias, so the aims of the survey should take this into account. Designing a mixture of open and closed questions will provide some insight into participants' motivation. Descriptive answers, which require more thought, are less likely to be fully answered by those motivated only by the incentive. They also give users the opportunity to provide their own feedback. It is interesting to note that Europeana did not do this, leading some participants to email their comments to the researchers. Providing an optional section at the end of the survey for final comments could have yielded rich feedback not obtained through closed questions. Indeed, the comments Europeana received often described situations where users had trouble using the site or disliked a particular design feature.

Avoid asking questions about the user's overall opinion of the system before they have used all of its features, as the answers will not be accurate. For example, 67% of users stated they had never used the "My Europeana" feature and were therefore unable to provide feedback on it. Usability testing often provides more insight into these issues by gathering this information retrospectively, after asking a user to carry out tasks using the site. If it is possible to use survey software which can do this, it is recommended, because it is more likely to gather meaningful results. It is only after trying to complete a task that a user can accurately describe their experience.

It is worth noting that Europeana have also conducted user testing with eye tracking, in addition to focus groups and expert evaluations. The results of these are due to be published soon and I look forward to reading them. It will be interesting to compare the results against our heuristic inspection of Europeana and other DLs.

Yesterday I bought a book I had seen online which sounded very relevant to the work I'm currently doing. Few books deal directly with both usability and libraries, so I decided it was worth reading. Now that I have finished it, I thought it would be worthwhile writing a short review for others. I hope it's helpful.

User Centred Library Websites: Usability Evaluation Methods by Carole A. George

As the name suggests, this book provides the reader with methods that can be used to gather feedback from users. It specifically discusses usability evaluation methods and helps the reader determine which methods to use, and when, to get the best feedback at different stages of the design cycle. Aimed predominantly at inexperienced or amateur usability specialists, it explains everything very simply and even provides a useful glossary at the back. It is also well laid out, with information that is easy to digest. Any reader could expect to get through this book within a day, but could also dip into relevant sections for more detail as and when needed.

Each method is discussed under several headings such as 'What is this?', 'How long will it take?' and 'What do I need?', as well as advantages and disadvantages. Information is broken down in a way that will be useful to anyone conducting a usability study. I particularly liked the templates provided at the back of the book, in addition to the examples throughout, which ensure that the reader is organised and prepared. The only thing missing was a list of recommended books for further reading. As the book provides only an overview, a reader wanting to put a method into practice might need more detail.

Although the book is a useful resource when choosing a suitable usability method, it makes few references to library systems beyond some tailored example questions. In this sense the title is slightly misleading. The usability methods mentioned in the book can be applied to any interface, including e-commerce websites or software packages. Instead, it assumes that the reader is likely to be someone responsible for, or involved in, the analysis of a digital library, and tailors the book to their level of knowledge. As a usability professional, I did not gain many new insights from it. However, it is valuable as a knowledge refresher and can plug gaps in knowledge. For example, it provided a great formula for measuring task completion rate.
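I won't reproduce the book's formula verbatim here, but the standard completion-rate calculation it covers is simple enough to sketch (the function name and example figures are mine):

```python
def task_completion_rate(successes, attempts):
    """Standard usability metric: the percentage of task attempts
    that ended in success."""
    if attempts == 0:
        raise ValueError("no attempts recorded")
    return 100.0 * successes / attempts

# e.g. 8 of 10 participants completed the task
print(task_completion_rate(8, 10))  # 80.0
```

In practice you would compute this per task rather than across the whole study, since a single averaged figure can hide one badly failing task among several easy ones.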

Overall this book is great for anyone with limited knowledge or experience of using different usability methods. It was very easy to read and provided some great templates. I'm sure I'll pick it up again before conducting my own usability studies.
