Researching Usability


As the project embarks on usability testing with mobile devices, it is important to evaluate mobile-specific research methods and to understand how testing on mobile devices differs from desktop usability testing. The most important thing to be aware of when designing and testing for mobile devices is simply that it IS different from traditional testing on desktop computers. Further differences are outlined below:

  • You may spend hours seated in front of the same computer, but mobile context is ever-changing. This impacts (amongst other things) the users’ locations, their attention, their access to stable connectivity, and the orientation of their devices.
  • Desktop computers are ideal for consumption of lengthy content and completion of complex interactions. Mobile interactions and content should be simple, focused, and should (where possible) take advantage of unique and useful device capabilities.
  • Mobile devices are personal, often carrying a wealth of photos, private data, and treasured memories. This creates unique opportunities, but privacy is also a real concern.
  • There are many mobile platforms, each with its own patterns and constraints. The more you understand each platform, the better you can design for it.
  • And then there are tablets. As you may have noticed, they’re larger than your average mobile device. We’re also told they’re ideal for reading.
  • The desktop is about broadband, big displays, full attention, a mouse, keyboard and comfortable seating. Mobile is about poor connections, small screens, one-handed use, glancing, interruptions, and (lately), touch screens.

~ It’s About People Not Devices by Stephanie Rieger and Bryan Rieger (UX Booth, 8th February 2011)

Field or Laboratory Testing?

As our interaction with mobile devices happens in a different way to that with desktop computers, it seems a logical conclusion that the context of use matters if we are to observe realistic behaviour. Brian Fling states in his book that you should “go to the user, don’t have them come to you” (Fling, 2009). However, testing users in the field has its own problems, especially when trying to record everything going on during tests (facial expressions, screen capture and hand movements). Contextual enquiries using diary studies are beneficial, but they also have drawbacks: they rely on the participant to provide an accurate account of their own behaviour, which is not always easy to achieve, even with the best intentions. Carrying out research in a coffee shop, for example, provides a real-world environment that maximizes external validity (Madrigal & McClain, Usability for Mobile Devices). However, where field studies are impractical for one reason or another, researchers have adopted the approach of simulating a real-world environment within a testing lab, which they believe can also help to provide the external validity that traditional lab testing cannot (Madrigal & McClain, 2011). Researchers have attempted a variety of techniques to do this, some of which are listed below:

[Image: participant walking on a treadmill during a usability test, from Kjeldskov & Stage (2004)]

  • Playing music or videos in the background while a participant carries out tasks
  • Periodically inserting people into the test environment to interact with the participant, acting as a temporary distraction
  • Distraction tasks, where participants are asked to stop what they are doing, perform a prescribed task and then return to the original task (e.g. “Whenever you hear the bell ring, stop what you are doing and write down what time it is in this notebook.”) (Madrigal & McClain, 2010); a simple timer sketch for administering such prompts appears after this list
  • Having participants walk on a treadmill while carrying out tasks (continuous speed and varying speed)
  • Having participants walk at a continuous speed on a course that is constantly changing (such as a hallway with fixed obstructions)
  • Having participants walk at varying speeds on a course that is constantly changing (Kjeldskov & Stage, 2004)
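
As an aside, the bell-based distraction task above is easy to administer with a small script. The sketch below is a minimal, illustrative example (not taken from the cited studies): it sounds a prompt at random intervals and logs when each prompt fired, so the moderator can later compare the log against the participant’s notebook. The interval values and file name are arbitrary.

```python
# Minimal sketch of a moderator-side timer for the "bell" distraction task:
# it sounds a prompt at random intervals and logs the prompt times so they
# can later be checked against the participant's notebook entries.
import random
import time
from datetime import datetime

MIN_GAP_SECONDS = 60    # shortest gap between prompts (illustrative value)
MAX_GAP_SECONDS = 180   # longest gap between prompts (illustrative value)
PROMPT_COUNT = 5        # number of prompts per session (illustrative value)

def run_distraction_prompts(log_path="prompt_log.txt"):
    with open(log_path, "a") as log:
        for i in range(PROMPT_COUNT):
            time.sleep(random.randint(MIN_GAP_SECONDS, MAX_GAP_SECONDS))
            print("\a", end="", flush=True)  # terminal bell; swap in a real sound if needed
            stamp = datetime.now().strftime("%H:%M:%S")
            log.write(f"Prompt {i + 1} at {stamp}\n")
            print(f"Prompt {i + 1} sounded at {stamp}")

if __name__ == "__main__":
    run_distraction_prompts()
```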

Although realism and context of use would appear important to the validity of research findings, previous research has challenged this assumption. A study comparing the usability findings of a field test with those of a realistic laboratory test (where the lab was set up to recreate a realistic setting, such as a hospital ward) found little added value in taking the evaluation into the field (Kjeldskov et al., 2004). The research revealed that lab participants on average experienced 18.8% usability problems, compared to 11.8% for field participants. In addition, 65 man-hours were spent on the field evaluation compared to 34 man-hours for the lab evaluation, almost half the time.

Subsequent research has provided further evidence that lab environments are just as effective in uncovering usability issues (Kaikkonen et al., 2005). In this study, the researchers did not attempt to recreate a realistic mobile environment, instead comparing their field study with a traditional laboratory usability test set-up. They found that the same issues emerged in both environments, although laboratory tests uncovered more cosmetic or low-priority issues than the field, and the frequency of findings in general varied (Kjeldskov & Stage, 2004). The research did find benefits of conducting a mobile evaluation in the field. It inadvertently captured the difficulty of tasks through participant behaviour: participants would stop, often look for a quieter spot, and shut out outside distractions in order to complete the task, something that would be much more difficult to observe in a laboratory setting. The field study also provided a more relaxed setting, which influenced how much verbal feedback participants gave, although other studies have found the opposite to be true (Kjeldskov & Stage, 2004).

Both studies concluded that laboratory tests provided sufficient information to improve the user experience, in one case without even trying to recreate a realistic environment. Both also found field studies to be more time-consuming and, unsurprisingly, more expensive and resource-intensive to carry out. It is fair to say that running a mobile test in the lab will provide results similar to running the evaluation in the field, so if time, money and/or access to equipment is an issue, testing in a lab or an empty room with appropriate recording equipment is not a serious limitation. Many user experience practitioners will agree that any testing is better than none at all. There will, however, always be exceptions where field testing is more appropriate; a geo-based mobile application, for example, will be easier to evaluate in the field than in the laboratory.

Capturing Data

Deciding how to capture data is something the UX2 project is currently thinking about. Capturing all of the relevant information is trickier on mobile devices than on desktop computers. Researchers have adopted various strategies, a popular one being a sled that the participant can hold comfortably, with a camera positioned above it to capture the screen. It is also possible to capture the mobile screen using specialised software specific to each platform (http://www.uxmatters.com/mt/archives/2010/09/usability-for-mobile-devices.php). If you are lucky enough to have access to Morae usability recording software, it has a specific setting for testing mobile devices that allows you to record from two cameras simultaneously: one to capture the mobile device and the other to capture body language. Other configurations include a lamp-cam, which clips to a table with the camera positioned in front of the light. This set-up does not cater for an additional camera to capture body language, which would require a separate camera on a tripod. A more expensive solution is ELMO’s document camera, which is stationary and requires the mobile device to remain static on the table. This piece of kit is more likely to be found in specialised research laboratories, which can be hired for the purpose of testing.

[Image: lamp-cam configurations, courtesy of Barbara Ballard]
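
For those without access to Morae, a rough approximation of the two-camera set-up can be scripted. The sketch below is a minimal, hypothetical example using OpenCV (not part of the UX2 project’s actual tooling): it assumes two USB cameras are attached at indices 0 and 1, one positioned over the device or sled and one facing the participant, and records each to its own file.

```python
# Minimal sketch: record two USB cameras at once (e.g. one over the device
# sled, one on the participant) into separate video files using OpenCV.
# Assumes cameras are available at indices 0 and 1; adjust to your hardware.
import cv2

def record_session(out_device="device_view.avi", out_face="participant_view.avi",
                   fps=15.0, duration_seconds=600):
    cam_device = cv2.VideoCapture(0)   # camera over the handset / sled
    cam_face = cv2.VideoCapture(1)     # camera facing the participant
    if not (cam_device.isOpened() and cam_face.isOpened()):
        raise RuntimeError("Could not open both cameras")

    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    size_device = (int(cam_device.get(cv2.CAP_PROP_FRAME_WIDTH)),
                   int(cam_device.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    size_face = (int(cam_face.get(cv2.CAP_PROP_FRAME_WIDTH)),
                 int(cam_face.get(cv2.CAP_PROP_FRAME_HEIGHT)))

    writer_device = cv2.VideoWriter(out_device, fourcc, fps, size_device)
    writer_face = cv2.VideoWriter(out_face, fourcc, fps, size_face)

    # Frames are read sequentially, so the two streams are only roughly in sync.
    for _ in range(int(fps * duration_seconds)):
        ok_d, frame_d = cam_device.read()
        ok_f, frame_f = cam_face.read()
        if not (ok_d and ok_f):
            break  # stop if either camera drops out
        writer_device.write(frame_d)
        writer_face.write(frame_f)

    for resource in (cam_device, cam_face, writer_device, writer_face):
        resource.release()

if __name__ == "__main__":
    record_session()
```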

Conclusion

Based on the findings of previous research, the limitations of the project and the current stage of its mobile service development, it seems appropriate for the UX2 project to conduct its initial mobile testing in a laboratory. Adapting a meeting room with additional cameras and using participants’ own mobile devices (recruiting for a specific device where necessary) will provide the best solution and should uncover as many usability issues as testing in the field. A subsequent blog post will provide more detail on our own test methods, with reflections on their success.

References

Fling, B. (2009). Mobile Design and Development. O’Reilly, Sebastopol, CA, USA.

Kaikkonen, A., Kallio, T., Kekäläinen, A., Kankainen, A. and Cankar, M. (2005). Usability Testing of Mobile Applications: A Comparison between Laboratory and Field Testing. Journal of Usability Studies, Vol. 1, Issue 1.

Kjeldskov, J. and Stage, J. (2004). New Techniques for Usability Evaluation of Mobile Systems. International Journal of Human-Computer Studies, Vol. 60.

Kjeldskov, J., Skov, M.B., Als, B.S. and Høegh, R.T. (2004). Is It Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field. In Proceedings of the 5th International Mobile HCI 2004 Conference, Udine, Italy. Springer-Verlag.

Roto, V., Oulasvirta, A., Haikarainen, T., Kuorelahti, J., Lehmuskallio, H. and Nyyssönen, T. (2004). Examining Mobile Phone Use in the Wild with Quasi-Experimentation. Helsinki Institute for Information Technology Technical Report.

Tamminen, S., Oulasvirta, A., Toiskallio, K. and Kankainen, A. (2004). Understanding Mobile Contexts. Special issue of the Journal of Personal and Ubiquitous Computing, Issue 8.

Thank you to everyone who managed to attend the Scottish Usability Professionals Association event last night. We hope that the presentation was informative and look forward to the possibility of presenting the project findings next year. The presentation slides are now available below:

Any additional questions can be asked by leaving a comment below.

So the first part of the usability research taking place in the UX 2.0 project is to perform a usability inspection of selected digital libraries (DLs). In order to do this, two things had to be decided:

  1. What DLs to inspect
  2. How to perform the inspection

In this entry I have mapped out how these decisions were made and the implications.

The most difficult thing about selecting DLs to inspect was narrowing down the list, because there are so many DLs out there. How many will give a sufficient breadth of information for comparison? What criteria should be used, and which libraries should be excluded? These were all questions that the team had to answer. As we wanted to compare our findings with the evaluation of the library@nesc digital library later in the project, it seemed appropriate to exclude commercial publishers’ digital libraries and focus on public digital libraries. In addition, the findings of the WorldCat usability testing report published for the ALA Annual in July 2009 revealed that academic users favour searching local, national and worldwide collections together, whereas public library patrons are more interested in resources that are geographically close. This led us to think in terms of the geographic reach of DLs and the differences between them. As a result, we selected five digital libraries, one for each geographic level: worldwide, continental, nationwide, regional and local. The following DLs were chosen:

  • Worldwide: World Digital Library (http://www.wdl.org/en/)
  • Continental (Europe): Europeana (http://www.europeana.eu)
  • Nationwide (UK): British Library (http://www.bl.uk)
  • Regional (Scotland): SCRAN (http://www.scran.ac.uk/)
  • Local (Edinburgh): Edinburgh University Aqua Browser (http://aquabrowser.lib.ed.ac.uk/)

The next thing to do was to decide how to conduct the inspection. There are a number of well-known and commonly used usability methodologies available, and a number of factors affecting the scope of this inspection helped to narrow the choice:

  • Scope: the inspection was proposed as a quick appraisal of the current state of digital libraries and was not intended as a detailed evaluation.
  • Time-scales: the short time-scale meant that the inspection had to be done quickly. As a result, user testing would not be achievable at this stage in the project.

Consequently, it would not be possible to evaluate the usefulness of each DL as outlined by Tsakonas and Papatheodorou (2007) in their triptych methodology. Factors such as relevance, format, reliability and coverage would not be examined at this time. Instead, the focus would be on the usability of the system for the user, such as ease of use, aesthetics, navigation, terminology and learnability. As digital libraries generally have a well-developed strategy and scope, it is more important to focus attention on the structure, skeleton and surface of the DL, as described by Jesse James Garrett in his book ‘The Elements of User Experience’. This includes information architecture, navigation design and visual design, such as page format, colours and typography.

With all this in mind, it was decided that a heuristic evaluation would be suitable. However, Jakob Nielsen, co-creator of the heuristic evaluation method, points out that heuristic evaluations are better when carried out by more than one evaluator. As there are no other specialists working on this project, this would not be possible; however, as this inspection is intended as a quick evaluation of current DLs, it was not considered detrimental to the research. To help limit this issue, the cognitive walk-through method will also be integrated into the inspection. Formal task scenarios will not be created, but typical user tasks such as searching will be considered when evaluating each DL. It is hoped that doing so will highlight any barriers to task success when it’s not possible to test with actual users.

For anyone who is unsure what a heuristic evaluation and cognitive walk-through entail, I plan to explain these in my next blog post.

So, after deciding on the digital libraries to inspect and the method for inspecting them, I am now at the stage of analysing each library and collecting my findings. Every usability expert has their own way of doing this, but I find that familiarising myself with each site first, then jotting down brief notes on each issue accompanied by a screen grab, works best for me. After that, issues will be written up in detail, assigned a severity rating and discussed. In addition, positive findings and the development of collaborative or personalised features (if any) will also be examined. Finally, each DL will be compared and contrasted and conclusions drawn.
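
To keep the write-up consistent across the five libraries, it can help to record findings in a simple, uniform structure. The sketch below is one hypothetical way of doing this in Python; the field names, example issues and the 0–4 severity scale are illustrative, not the project’s actual data. Each finding captures the library, the heuristic involved, a severity rating and the screen grab, and counts can then be summarised per library for the comparison stage.

```python
# Minimal sketch of a structure for logging heuristic-evaluation findings:
# each issue records the library, the heuristic it relates to, a severity
# rating and the screenshot taken at the time, so issues can later be
# written up, compared and counted per library.
from collections import Counter
from dataclasses import dataclass

# 0-4 severity scale commonly used for heuristic evaluations
SEVERITY_LABELS = {0: "not a problem", 1: "cosmetic", 2: "minor",
                   3: "major", 4: "catastrophic"}

@dataclass
class Finding:
    library: str            # e.g. "Europeana"
    heuristic: str          # e.g. "Match between system and the real world"
    description: str
    severity: int           # 0-4
    screenshot: str         # path to the screen grab
    positive: bool = False  # record good practice as well as problems

# Illustrative entries only, not real findings from the inspection
findings = [
    Finding("Europeana", "Visibility of system status",
            "No feedback while a search is running", 2, "shots/europeana_01.png"),
    Finding("SCRAN", "Consistency and standards",
            "Terminology differs between search and browse pages", 3, "shots/scran_02.png"),
]

# Summarise issue counts per library to support the final comparison
per_library = Counter(f.library for f in findings if not f.positive)
for library, count in per_library.items():
    worst = max(f.severity for f in findings if f.library == library)
    print(f"{library}: {count} issue(s), worst severity: {SEVERITY_LABELS[worst]}")
```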

I hope this has helped to provide insight into the early stages of the usability research taking place. Please feel free to comment or discuss any aspect of the methodology.

