Posted October 9, 2011
I’m very pleased to announce that the article written for Library Hi Tech based on research conducted earlier this year is now available. For more information and to read the article in full follow the link below. You will need to be logged in to read the full text.
- Library Hi Tech, Vol. 29, Iss. 3, pp. 412-423
Many thanks to everyone who made it along to our event yesterday. I hope that it was informative and provided some guidance on planning your own user research. I’ve uploaded my slides from the morning presentation to Slideshare for those who might want to bookmark them, or for anyone who didn’t make it along. Based on the feedback we received, I think it’s fair to say the whole day was a great success. If you were there and wish to send your feedback (good and bad), please feel free to add your comments here or drop me a line.
If you hadn’t already heard, the UCD day was the last activity from the UX2.0 project which officially ended last month. It’s been a great 24 months working on this project and while putting together the slides for the UCD Day, I realised just how much work we have produced. Hopefully all of the documents, reports and blog posts which we contributed during that time will be put to good use by others. I’m hoping to continue this blog in some capacity once I settle into a new role but fear it might go a little quiet in the short-term.
Thanks for visiting the blogs and project website/wiki and I hope you’ll continue to enjoy reading my posts in the future.
EDIT: I forgot to mention, those who came to the prototyping session (and those who didn’t) who are interested in trying Balsamiq themselves can do so through our pilot scheme: https://www.wiki.ed.ac.uk/display/UX2/JISC+Balsamiq+Pilot. Please remember to register as an EASE friend first to gain access. It’s a great opportunity to try Balsamiq and determine if it’s worth purchasing a licence. It really is a great tool and easy to use too!
As the UX2 project wraps up this month, we have planned an event which we hope will introduce user-centred design and usability in digital libraries to others through our case studies.
The event is free and funded by JISC meaning it’s primarily aimed at those in the education sector who are new to usability, user-centred design and user interface prototyping.
It takes place on Tuesday 17th May and runs from 0930-1700.
The practical session in the afternoon has a limited number of places so early registration is recommended. During the session you’ll get the chance to create your own wireframes first in paper then using the prototyping tool, Balsamiq. Our project recently funded a licence which makes Balsamiq available to others to try out. For more information on this visit the Wiki: https://www.wiki.ed.ac.uk/display/UX2/JISC+Balsamiq+Pilot
0930-1030 User-centred design and faceted search user interface (Boon Low, Lorraine Paterson)
This session will introduce user experience and user-centred design concepts and how these relate to open-source development. The complexities of UIs will be highlighted through a discussion of usability issues and existing UI design patterns, in particular faceted search.
1100-1230 Usability and user research: case studies and current practice (Lorraine Paterson, Boon Low)
This session will introduce user research methods and highlight examples from case studies to demonstrate how these methods can be used. A guide to usability test techniques will be provided including a best practice guide and practical advice.
1330-1700 Practical: Prototyping for beginners (max 12 participants) (Neil Allison, Liza Zamboglou, Lorraine Paterson)
This non-technical introduction to prototyping will be a mixture of presentations, discussion and practical activities. It will cover the basics of prototyping – the benefits, some case study examples and a range of approaches to consider. Practical activities will involve iterating the design of a simple interface; first with paper and then with the online prototyping tool Balsamiq.
For the FULL DAY including practical workshop register here: http://www.nesc.ac.uk/esi/events/1196/
If the full day event books up or you only want to attend the morning presentations, register here: http://www.nesc.ac.uk/esi/events/1199/
For more information including details on the instructors visit our project Wiki: https://www.wiki.ed.ac.uk/display/UX2/UCD+Day
Following on from the usability testing of the desktop prototype digital library, we conducted more user research on the mobile prototype. The prototype is similar to the full version but with some services removed to create a simpler interface. The facet navigation situated in the right column of the desktop version is now provided within a ‘Refine’ link. The ability to bookmark items is also available. As before, the prototype is based on Blacklight, an open-source Ruby on Rails discovery interface which has been further developed throughout the project. The prototype indexes the catalogues provided by the National e-Science Centre (NeSC) at The University of Edinburgh and CERN, the European Organisation for Nuclear Research.
The data capture and test methods are detailed here alongside the main findings.
On the 10th and 11th March 2011, usability testing was conducted on the UX2 mobile-optimised digital library prototype with six selected students from the University of Edinburgh (UoE). Each test was carried out as a one-to-one session and comprised an explanation of the research, a set of task-based scenarios, and a short post-test interview and word-choice questionnaire. A full description of the prototype and the changes made to the desktop version to optimise it for mobile devices will be documented shortly on the associated project blog, Enhancing User Interactions in Digital Libraries.
Six participants were recruited from the same list which was originally compiled for the focus group recruitment in January. Only those who owned an iPhone were invited to take part, as this was the platform the prototype had been optimised for. Each participant was given a £10 Amazon voucher as payment for their time. Each session lasted between 50 and 65 minutes. A few additional statistics on the participants’ profiles are provided below:
- All 6 participants owned an Apple device (iPhone 3GS, iPhone 4 or iPod touch)
- 3 participants had attended our focus group in January, 3 had not
- 4 participants were undergraduates, 2 were postgraduates
- 50:50 male to female ratio
- All were on a pay monthly contract for their mobile phone device
To record the testing we used a similar set-up to that suggested by Harry Brignull on his blog. It was fairly low-cost (approx £80 excluding Morae software) and required two webcams, a small piece of perspex (approx 35cm x 10cm), testing software (we used Morae v3 as it allowed us to capture from two webcams), cheap plastic mobile phone cases for each type of handset being tested (two in this case), and velcro to attach them to the perspex. It was very easy to mould the perspex to shape by heating it gently above something readily available: a toaster! The Logitech C905 webcam we used had useful advanced settings which allowed us to mirror and flip the image, making it easy to decipher what was going on (see image). Overall the set-up worked well as it was lightweight and relatively unobtrusive. The camera stayed in the best position at all times, allowing us to see exactly what was going on, while a second webcam recorded participants’ interactions and body language.
The scenarios used were adapted from those used for the desktop usability tests. They were designed to test a variety of the prototype’s features. The main focus was the facet navigation system (Refine link), the presentation of information in the item detail page and the bookmarking service:
- As part of your coursework, your lecturer has asked you to read a recent presentation on fusion. Using the prototype, can you find a suitable presentation published in the last 2-3 years?
- You have to write an essay on quantum mechanics, can you use the prototype to find several resources which will help you get started? If you wanted to save these items to refer to later when you’re at a computer, how would you do this?
- You are looking for a paper on Grid Computing to read on the bus. Can you use the prototype to find a suitable digital resource?
- You are looking for a presentation given by Harald Bath at the EGEE summer school; can you find the video or audio presentation to watch at the gym?
Word Choice Results
The aim of the word choice exercise was to elicit an overall impression of participants’ experience using the prototype. The exercise presented participants with a list of 97 adjectives (both positive and negative) at the end of the session and asked them to pick those they felt were descriptive of the prototype.
Each of the words shown in the image above was ticked at least once. The larger and darker the word is, the more often it was ticked by participants.
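To make the sizing rule concrete, the tally behind such a word cloud can be sketched in a few lines of Python. This is an illustrative sketch only, not the tool we used; the example selections below are hypothetical, not our participants’ actual answers:

```python
from collections import Counter

def word_cloud_weights(selections, min_size=12, max_size=36):
    """Tally how often each adjective was ticked across participants
    and map each count to a font size for a simple word cloud."""
    counts = Counter(word for chosen in selections for word in chosen)
    top = max(counts.values())
    # Linear scale: the most-ticked word gets max_size, a single tick
    # gets proportionally less, never below min_size.
    return {word: min_size + (max_size - min_size) * n / top
            for word, n in counts.items()}

# Hypothetical ticks from three participants:
picks = [
    {"Accessible", "Clean", "Useful"},
    {"Accessible", "Straightforward"},
    {"Accessible", "Clean"},
]
weights = word_cloud_weights(picks)
# "Accessible" (ticked by all three) receives the largest size.
```

The same counts also drive the shading: darker for higher counts, using any monotone mapping from count to colour.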
The most prominent words were all positive: Accessible, Straightforward, Easy to use, Clean and Useful. These words were chosen by 5 of the 6 participants surveyed. Other positive words include Clear, Convenient, Effective, Efficient, Flexible, Relevant and Usable. Some of these words are particularly interesting as they could relate directly to the ability to access the service anywhere using a mobile device.
Some of the negative words selected include: Simplistic (could also be positive), Ordinary, Ambiguous, Inadequate, Ineffective, Old and Confusing. However some of these words were used by a minority of participants and were therefore not as visible. The words are not all that surprising considering the mobile website is a prototype with fewer services than the desktop version.
Finding Information (Navigation and refine)
Some students noticed that there was no universal ‘Home’ link throughout the site. Students felt that a shortcut back to the home page would be a good idea, despite the fact that the search field was present on the results page. This suggests a desire from students to be able to skip pages instead of navigating back through pages one at a time.
Many of the students stated a need for an advanced search facility. Often students looked for either an Advanced search link near the search field or a drop-down menu where they could specify details such as Author, Title or Subject/Keyword. As this is a feature they encounter in other search services, they still expect to have access to it on a mobile device. An example of a mobile library site providing such a service is North Carolina State University (see image).
The desire for an advanced search was problematic for users who did not see the ‘Refine’ link at the top of the page. Those users struggled to complete tasks and consequently their experience of the prototype suffered. Failing to use the Refine service made using the prototype much more difficult. This was demonstrated when students were asked to search for items from a specific year (task 1): without the Refine service, students had no way of knowing whether they had found every relevant result short of looking at every page of results. This finding suggests it is imperative that the visibility of the Refine link be improved. When the link was pointed out, students said they had not seen it due to its location at the top of the screen, or had not fully understood its label; they had been looking for an ‘Advanced’ label located close to the search form and search results.
Those students who did use the refine service wanted to filter by more than one item at a time. This was particularly evident in task 1, when students were searching for presentations published in the last 2-3 years. Students found it quite laborious to select one year at a time to review the results. They wanted to be able to search a period of time, either the last 5 years or by setting the range themselves (possibly using a form or sliders). This finding was also revealed during the desktop prototype testing. It demonstrates that this is an important criterion for students in making searching easier, and remains so when using the service on a mobile device.
The refine service itself was generally well received when used. The facets provided were considered useful to students and listed in a logical order of importance. One student realised that the items listed in the year facet were not necessarily the last ten years, but rather the top ten results. This caused some confusion at first and could affect the success of students’ tasks. In addition, students found using the refine service laborious at times, with unnecessary steps involved which could prove troublesome on a mobile device. Two task flows are outlined below to show the steps involved in a typical task using the refine service:
Select ‘Refine’ > Select ‘2010’ from Year facet > [view results] > Go back to ‘Refine’ > Remove 2010 > [view results] > Go back to ‘Refine’ > Select ‘2009’…
Select ‘Refine’ > Select ‘2010’ > [view results] > Remove 2010 from results page > [view results] > Go back to ‘Refine’ > Select ‘2009’…
Note: actions in square brackets happen automatically in the process.
Several students used the first system to remove facets perhaps because they did not notice the option to do this on the results page. Consequently, the additional number of steps involved appeared to affect the user’s experience. This was also one of the reasons that students wanted to be able to select a range of dates at once instead of going backwards and forwards within the refine service. One student suggested a tick box option where they could select all the items within each facet they wanted to use to narrow their results at once.
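The tick-box idea amounts to applying a set of facet values in a single step rather than one value per round trip. A minimal sketch of that behaviour is below; this is purely illustrative and is not the project’s Blacklight implementation, and the item fields and sample records are hypothetical:

```python
def refine(items, years=None, formats=None):
    """Filter search results by several facet values at once,
    avoiding the back-and-forth of applying one value at a time."""
    results = items
    if years:
        results = [i for i in results if i["year"] in years]
    if formats:
        results = [i for i in results if i["format"] in formats]
    return results

# Hypothetical catalogue records:
catalogue = [
    {"title": "Fusion overview", "year": 2010, "format": "presentation"},
    {"title": "Grid computing", "year": 2008, "format": "paper"},
    {"title": "Quantum mechanics intro", "year": 2009, "format": "presentation"},
]

# Task 1 in one refine step: presentations from 2009 or 2010.
recent_talks = refine(catalogue, years={2009, 2010}, formats={"presentation"})
```

With this approach the date-range request also falls out naturally: the user ticks (or a slider generates) the set of years once, instead of selecting, viewing, removing and reselecting each year in turn.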
Saving Information (Bookmarking service)
When students were asked to save information to refer to later as part of task 2, some used the bookmarking service. Those who did found it relatively easy to bookmark an item and retrieve it afterwards. The feedback suggested that it was clear when an item had been saved, and the folder icon at the top of the screen was clearly visible. However, there appears to be a bug when a user tries to bookmark an item before logging in: upon selecting the ‘Bookmark this’ link and completing the login form, users are taken to their bookmarks page without the item listed. The user has to go back to the results page and attempt the action again for it to be successful. In addition, although users could see the number of items bookmarked next to the folder icon, it might be more difficult to spot under different lighting conditions; the white text on a pale green background makes it less likely to stand out.
There was also a desire for additional features within the bookmarking service which would make it a more effective tool. Additional information on each item including author, date, location and most importantly, shelfmark, would make it easier for users to distinguish items with the same title and locate a number of saved items in the library quickly. Students also wanted to be able to categorise bookmarks into sections, similar to the folder system in a browser’s bookmarking service. Being able to export bookmarks by emailing them to an address was also considered a useful feature to provide.
Although the usability of the bookmarking service was considered to be good, its usefulness was not apparent to every student. Many students have their own system in place for recording items of interest in the library: writing details down on paper, copying and pasting text into a separate document, or simply minimising the browser to open again at the relevant moment. The Safari browser on iPhones allows users to have several windows open at once and automatically saves the state of all windows when it is closed. One participant in particular used this system to park information in order to retrieve it when required, and consequently did not envisage themselves using the bookmarking system. It is interesting to note that of the students who participated in the study, none stated that they used the existing bookmarking system available on the University’s own library website, ‘My List (Bookbag)’.
Although the UX2 library is a prototype, students felt that the level of information provided on each item was for the most part adequate. Being able to view documents was often expected by students but was not always possible. One student questioned whether everyone would be able to preview documents if they did not have a Google account, as this is requested upon selecting the electronic document. This could be problematic, especially because feedback from the focus groups indicated an aversion to downloading files to mobile devices due to limited data storage.
In addition to previewing documents, the links to information sources under the heading ‘Links, Files’ were not easily understood by students. The links were not easy to identify because they are not presented as conventional blue, underlined text. The long URLs and small text size also made it difficult for students to guess where the links might lead. In many situations, the links did not meet user expectations: students would select a link expecting it to lead to the full-text item when instead it went to an external website. Students wanted to be warned when links led to an external (and often not mobile-optimised) website. Although this was an issue during the desktop usability tests, it became even more apparent among students using the prototype on a smaller screen.
Something which was requested by students not only during the usability tests but also in the focus groups was the ability to easily access a map of libraries. Having such information would make it much easier to locate books, particularly when students are not familiar with a given library. Students felt a link to such a map could be provided on the item information page, located under the library holding data to make it easy to access. There was also a desire for a simple system which informs students when an item is available or on loan. A colour-coding system or simple icon was suggested, which could be displayed in the results page next to each item: green for available, red for on loan. A library which has gone some way to addressing this need is NCSU, which provides users with the opportunity to filter out items which are not available (see earlier image).
Post test interview
Overall, students felt that the prototype was fairly effective in helping them find information. Those who did not see the ‘Refine’ link naturally believed it could have been clearer. Another student stated that the quality of results was sometimes an issue, suggesting the need for improvement. The timeliness of resources often depended on the subject students were studying; in subjects where research moves quickly, such as technology, recent publications are much more important. Students were asked to state two things they particularly liked about the prototype. Their answers are listed below:
- Refine page (2)
- Item information page
- Level of information provided – not overloaded
- Font style is modern
- Minimalist and contemporary design (2)
- Simple search
- Bookmarking system (2)
- Being able to filter results by type e.g. book, presentation
Some things that students believed could be improved:
- Refine search – visibility and task flow
- Design, particularly the home page – no logo, name or clear description of purpose (2)
- Provide a link to home page throughout the site (2)
- Provide search options next to search form
- Improve the date range of the Date facet
- Tips to guide you through the website
- Visibility and source of links on item page
- Provide an additional option to search/narrow results by library
Observation of the usability tests showed that participants coped well undertaking tasks using a smaller screen. The biggest issue was the visibility of the refine page, which contained the facet navigation service. When participants were not aware of this option, their experience of using the prototype was severely compromised. Those who did use the refine service were able to complete tasks more efficiently, but found the number of steps involved unnecessary. This suggests further work is required on the implementation of the facet navigation service to improve its usefulness and usability. Although some of the students appreciated the minimalist nature of the prototype, there was still a desire to undertake more than just simple searches. The bookmarking service was on the whole well received and was considered useful with the addition of a few more features. However, the uptake of such a service is still unknown, as students often had existing bookmarking systems in place.
Last week UX2 were fortunate to be invited back to present at the Scottish UPA’s regular meeting. Having introduced our project and the work we were doing at an event last year, this was a great opportunity to provide an update on the work which has taken place over the last 12 months while also sharing our latest research findings on mobile library services. The slides from the night are now available on Slideshare and have also been provided below. As the project winds up it was great to be able to highlight our work to other usability professionals. We were pleased to find out that researchers at Napier University Library were also in attendance. We hope the presentation was helpful and informative to them and everyone else who gave up their evening to attend.
As the project embarks on usability testing using mobile devices, it was important to evaluate mobile-specific research methods and understand the important differences between desktop usability testing and testing on mobile devices. The most important thing to be aware of when designing and testing for mobile devices is that it IS different to traditional testing on desktop computers. Additional differences are provided below:
- You may spend hours seated in front of the same computer, but mobile context is ever-changing. This impacts (amongst other things) the users’ locations, their attention, their access to stable connectivity, and the orientation of their devices.
- Desktop computers are ideal for consumption of lengthy content and completion of complex interactions. Mobile interactions and content should be simple, focused, and should (where possible) take advantage of unique and useful device capabilities.
- Mobile devices are personal, often carrying a wealth of photos, private data, and treasured memories. This creates unique opportunities, but privacy is also a real concern.
- There are many mobile platforms, each with its own patterns and constraints. The more you understand each platform, the better you can design for it.
- And then there are tablets. As you may have noticed, they’re larger than your average mobile device. We’re also told they’re ideal for reading.
- The desktop is about broadband, big displays, full attention, a mouse, keyboard and comfortable seating. Mobile is about poor connections, small screens, one-handed use, glancing, interruptions, and (lately), touch screens.
Field or Laboratory Testing?
As our interaction with mobile devices happens in a different way to desktop computers, it seems a logical conclusion that the context of use is important in order to observe realistic behaviour. Brian Fling states in his book that you should “go to the user, don’t have them come to you” (Fling, 2009). However, testing users in the field has its own problems, especially when trying to record everything going on during tests (facial expressions, screen capture and hand movements). While contextual inquiries using diary studies are beneficial, they also have drawbacks, as they rely on the participant to provide an accurate account of their behaviour, which is not always easy to achieve even with the best intentions. Carrying out research in a coffee shop, for example, provides the real-world environment which maximises external validity (Demetrius Madrigal & Bryan McClain, Usability for Mobile Devices). However, for those for whom field studies are impractical for one reason or another, simulating a real-world environment within a testing lab has been adopted. Researchers believe this can also help to provide the external validity which traditional lab testing cannot (Madrigal & McClain, 2011). Researchers have attempted a variety of techniques to do this, listed below:
- Playing music or videos in the background while a participant carries out tasks
- Periodically inserting people into the test environment to interact with the participant, acting as a temporary distraction
- Distraction tasks including asking participants to stop what they are doing, perform a prescribed task and then return to what they’re doing (e.g. Whenever you hear the bell ring, stop what you are doing and write down what time it is in this notebook.) (Madrigal & McClain, 2010)
- Having participants walk on a treadmill while carrying out tasks (continuous speed and varying speed)
- Having participants walk at a continuous speed on a course that is constantly changing (such as a hallway with fixed obstructions)
- Having participants walk at varying speeds on a course that is constantly changing (Kjeldskov & Stage, 2003)
Although realism and context of use would appear important to the validity of research findings, previous research has challenged this assumption. A comparison of the usability findings of a field test and a realistic laboratory test (where the lab was set up to recreate a realistic setting, such as a hospital ward) found that there was little added value in taking the evaluation into the field (Kjeldskov et al., 2004). The research revealed that lab participants on average experienced 18.8% usability problems compared to field participants who experienced 11.8%. In addition to this, 65 man-hours were spent on the field evaluation compared to 34 man-hours for the lab evaluation, almost half the time.
Subsequent research has provided additional evidence to suggest that lab environments are as effective in uncovering usability issues (Kaikkonen et al., 2005). In this study, researchers did not attempt to recreate a realistic mobile environment, instead comparing their field study with a traditional laboratory usability test set-up. They found that the same issues were uncovered in both environments, although laboratory tests found more cosmetic or low-priority issues than the field, and the frequency of findings in general varied (Kjeldskov & Stage, 2004). The research did find benefits of conducting a mobile evaluation in the field: it was able to inadvertently evaluate the difficulty of tasks by observing participant behaviour, as participants would stop, often look for a quieter spot, and ignore outside distractions in order to complete a task. This is something that would be much more difficult to capture in a laboratory setting. The research also found that the field study provided a more relaxed setting which influenced how much verbal feedback the participant provided; however, this is contradicted by other studies which found the opposite to be true (Kjeldskov & Stage, 2004).
Both studies concluded that the laboratory tests provided sufficient information to improve the user experience, in one case without trying to recreate a realistic environment. Both found field studies to be more time-consuming. Unsurprisingly this also means the field studies are more expensive and require more resources to carry out. It’s fair to say that running a mobile test in the lab will provide results similar to running the evaluation in the field. If time, money and/or access to equipment is an issue it certainly won’t be a limitation to test in a lab or empty room with appropriate recording equipment. Many user experience practitioners will agree that any testing is always better than none at all. However, there will always be exceptions where field testing will be more appropriate. For example, if a geo-based mobile application is being evaluated this will be easier to do in the field than in the laboratory.
Deciding how to capture data is something UX2 is currently thinking about. Finding the best way to capture all relevant information is trickier on mobile devices than desktop computers. Various strategies have been adopted by researchers, a popular one being the use of a sled which the participant can hold comfortably and have a camera positioned above to capture the screen. In addition to this it is possible to capture the mobile screen using specialised software specific to each platform (http://www.uxmatters.com/mt/archives/2010/09/usability-for-mobile-devices.php). If you are lucky enough to have access to Morae usability recording software, they have a specific setting for testing mobile devices which allows you to record from two cameras simultaneously; one to capture the mobile device and the other to capture body language. Other configurations include a lamp-cam which clips to a table with the camera positioned in front of the light. This set-up does not cater for an additional camera to capture body language and would require a separate camera set up on a tripod. A more expensive solution is the ELMO-cam, specifically their document camera, which is stationary and requires the mobile device to remain static on the table. This piece of kit is more likely to be found in specialised research laboratories which can be hired for the purpose of testing.
Based on the findings from previous research, the limitations of the project and its current mobile service development stage, it seems appropriate for the UX2 project to conduct initial mobile testing in a laboratory. Adapting a meeting room with additional cameras and using participants’ own mobile devices (where owners of a specific device are recruited) will provide the best solution and should uncover as many usability issues as if testing took place in the field. A subsequent blog post will provide more details of our own test methods, with reflections on their success.
Fling, B., (2009). Mobile Design and Development, O’Reilly, Sebastopol, CA, USA.
Kaikkonen, A., Kallio, T., Kekäläinen, A., Kankainen, A. and Cankar, M. (2005). Usability Testing of Mobile Applications: A Comparison between Laboratory and Field Testing, Journal of Usability Studies, Vol. 1, Issue 1.
Kjeldskov, J., Stage, J. (2004). New techniques for usability evaluation of mobile systems, International Journal of Human-Computer Studies, Issue 60.
Kjeldskov, J., Skov, M.B., Als, B.S. and Høegh, R.T. (2004). Is It Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field, in Proceedings of the 5th International Mobile HCI 2004 Conference, Udine, Italy, Springer-Verlag.
Roto, V., Oulasvirta, A., Haikarainen, T., Kuorelahti, J., Lehmuskallio, H. and Nyyssönen, T. (2004) Examining Mobile Phone Use in the Wild with Quasi-Experimentation, Helsinki Institute for Information Technology Technical Report.
Tamminen, S., Oulasvirta, A., Toiskallio, K., Kankainen, A. (2004). Understanding mobile contexts. Special issue of Journal of Personal and Ubiquitous Computing, Issue 8
The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:
The Blog-Health-o-Meter™ reads Fresher than ever.
A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 4,700 times in 2010. That’s about 11 full 747s.
In 2010, there were 35 new posts, growing the total archive of this blog to 50 posts. There were 43 pictures uploaded, taking up a total of 9MB. That’s about 4 pictures per month.
The busiest day of the year was October 6th with 159 views. The most popular post that day was Realism in testing search interfaces.
Where did they come from?
The top referring sites in 2010 were twitter.com, ux2.nesc.ed.ac.uk, boonious.typepad.com, informationdesign.org, and experiencesolutions.co.uk.
Some visitors came searching, mostly for sampling issues in online surveys, usability, is success model, liza zamboglou, and lorraine paterson.
Attractions in 2010
These are the posts and pages that got the most views in 2010.
Realism in testing search interfaces October 2010
UPA Conference 2010: Day 1 June 2010
User Research and Persona Creation Part 1: Data Gathering Methods July 2010
Sampling bias in online surveys March 2010