Researching Usability

Archive for the ‘User Research’ Category

I’m very pleased to announce that the article written for Library Hi Tech based on research conducted earlier this year is now available. For more information and to read the article in full follow the link below. You will need to be logged in to read the full text.

http://www.emeraldinsight.com/journals.htm?articleid=1949248&ini=aob

– Library Hi Tech, Vol. 29, Iss. 3, pp. 412-423

As the project embarks on usability testing with mobile devices, it was important to evaluate mobile-specific research methods and understand the key differences between usability testing on desktop computers and on mobile devices. The most important thing to be aware of when designing and testing for mobile devices is that it IS different from traditional testing on desktop computers. Further differences are summarised below:

  • You may spend hours seated in front of the same computer, but mobile context is ever-changing. This impacts (amongst other things) the users’ locations, their attention, their access to stable connectivity, and the orientation of their devices.
  • Desktop computers are ideal for consumption of lengthy content and completion of complex interactions. Mobile interactions and content should be simple, focused, and should (where possible) take advantage of unique and useful device capabilities.
  • Mobile devices are personal, often carrying a wealth of photos, private data, and treasured memories. This creates unique opportunities, but privacy is also a real concern.
  • There are many mobile platforms, each with its own patterns and constraints. The more you understand each platform, the better you can design for it.
  • And then there are tablets. As you may have noticed, they’re larger than your average mobile device. We’re also told they’re ideal for reading.
  • The desktop is about broadband, big displays, full attention, a mouse, keyboard and comfortable seating. Mobile is about poor connections, small screens, one-handed use, glancing, interruptions, and (lately), touch screens.

~ It’s About People Not Devices by Stephanie Rieger and Bryan Rieger (UX Booth, 8th February 2011)

Field or Laboratory Testing?

As our interaction with mobile devices happens in a different way to desktop computers, it seems a logical conclusion that the context of use is important in order to observe realistic behaviour. Brian Fling states in his book that you should “go to the user, don’t have them come to you” (Fling, 2009). However, testing users in the field has its own problems, especially when trying to record everything going on during tests (facial expressions, screen capture and hand movements). While contextual enquiries using diary studies are beneficial, they also have drawbacks: they rely on the participant to provide an accurate account of their own behaviour, which is not always easy to achieve even with the best intentions. Carrying out research in a coffee shop, for example, provides a real-world environment which maximises external validity (Madrigal & McClain, 2010). For those for whom field studies are impractical, an alternative is to simulate a real-world environment within a testing lab. Researchers believe such simulations can also provide external validity which traditional lab testing cannot (Madrigal & McClain, 2011). Researchers have attempted a variety of techniques to achieve this, listed below:

participant on a treadmill

Image from Kjeldskov & Stage (2004)

  • Playing music or videos in the background while a participant carries out tasks
  • Periodically inserting people into the test environment to interact with the participant, acting as a temporary distraction
  • Distraction tasks, in which participants are asked to stop what they are doing, perform a prescribed task and then return to what they were doing (e.g. “Whenever you hear the bell ring, stop what you are doing and write down what time it is in this notebook.”) (Madrigal & McClain, 2010) – a minimal timer sketch for this task follows the list
  • Having participants walk on a treadmill while carrying out tasks (continuous speed and varying speed)
  • Having participants walk at a continuous speed on a course that is constantly changing (such as a hallway with fixed obstructions)
  • Having participants walk at varying speeds on a course that is constantly changing (Kjeldskov & Stage, 2004)
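
The bell-based distraction task above is straightforward to script. As a purely illustrative sketch (our own, not something the cited studies describe), a moderator could run a timer like the following during a session; the session length and the gaps between bells are assumed values to be tuned per study:

```python
import random
import time
from datetime import datetime

SESSION_MINUTES = 20        # assumed session length
MIN_GAP, MAX_GAP = 60, 180  # assumed gap between bells, in seconds

def run_distraction_timer(log_path="distraction_log.txt"):
    """Ring the terminal bell at random intervals and log each ring,
    so distractions can later be matched against the session recording."""
    end = time.time() + SESSION_MINUTES * 60
    with open(log_path, "a") as log:
        while time.time() < end:
            time.sleep(random.uniform(MIN_GAP, MAX_GAP))
            print("\a", end="", flush=True)  # sound the terminal bell
            stamp = datetime.now().strftime("%H:%M:%S")
            log.write(f"Bell rung at {stamp}\n")
            print(f"Bell rung at {stamp}")

if __name__ == "__main__":
    run_distraction_timer()
```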

Although realism and context of use would appear important to the validity of research findings, previous research has called this assumption into question. A comparison of a field test with a realistic laboratory test (where the lab was set up to recreate a realistic setting, such as a hospital ward) found that there was little added value in taking the evaluation into the field (Kjeldskov et al., 2004). The research revealed that lab participants on average experienced 18.8% of usability problems, compared to 11.8% for field participants. In addition, 65 man-hours were spent on the field evaluation compared to 34 man-hours for the lab evaluation, roughly half the time.

Subsequent research has provided additional evidence that lab environments are just as effective at uncovering usability issues (Kaikkonen et al., 2005). In this study, researchers did not attempt to recreate a realistic mobile environment, instead comparing their field study with a traditional laboratory usability test set-up. They found that the same issues were uncovered in both environments, although laboratory tests found more cosmetic or low-priority issues than the field tests, and the frequency of findings in general varied (Kjeldskov & Stage, 2004). The research did find benefits of conducting a mobile evaluation in the field. It inadvertently revealed the difficulty of tasks through participant behaviour: participants would stop, often look for a quieter spot, and ignore outside distractions in order to complete the task. This is something that would be much more difficult to capture in a laboratory setting. The research also found that the field study provided a more relaxed setting, which influenced how much verbal feedback the participants provided, although this is contradicted by other studies which found the opposite to be true (Kjeldskov & Stage, 2004).

Both studies concluded that the laboratory tests provided sufficient information to improve the user experience, in one case without even trying to recreate a realistic environment. Both found field studies to be more time-consuming; unsurprisingly, this also means field studies are more expensive and require more resources to carry out. It is fair to say that running a mobile test in the lab will provide results similar to running the evaluation in the field. If time, money and/or access to equipment are an issue, it is no great limitation to test in a lab or an empty room with appropriate recording equipment. Many user experience practitioners will agree that any testing is better than none at all. However, there will always be exceptions where field testing is more appropriate: a geo-based mobile application, for example, will be easier to evaluate in the field than in the laboratory.

Capturing data

Deciding how to capture data is something UX2 is currently thinking about. Capturing all relevant information is trickier on mobile devices than on desktop computers. Various strategies have been adopted by researchers, a popular one being a sled which the participant can hold comfortably, with a camera positioned above it to capture the screen. In addition, it is possible to capture the mobile screen using specialised software specific to each platform (http://www.uxmatters.com/mt/archives/2010/09/usability-for-mobile-devices.php). If you are lucky enough to have access to Morae usability recording software, it has a specific setting for testing mobile devices which allows you to record from two cameras simultaneously: one to capture the mobile device and the other to capture body language. Other configurations include a lamp-cam, which clips to a table with the camera positioned in front of the light. This set-up does not cater for an additional camera to capture body language and would require a separate camera set up on a tripod. A more expensive solution is the ELMO document camera, which is stationary and requires the mobile device to remain static on the table. This piece of kit is more likely to be found in specialised research laboratories, which can be hired for the purpose of testing.
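
For teams without access to Morae, the dual-feed idea can be approximated with open-source tools. Below is a minimal sketch using OpenCV; this is our own assumed set-up, not the project's or Morae's actual implementation, and the camera indices, codec and frame rate all depend on the machine:

```python
import cv2  # OpenCV (pip install opencv-python)

def record_two_cameras(seconds=60, fps=20.0):
    """Record two webcams at once - one aimed at the device or sled,
    one at the participant - writing each feed to its own file."""
    caps = [cv2.VideoCapture(0), cv2.VideoCapture(1)]  # assumed indices
    for i, cap in enumerate(caps):
        if not cap.isOpened():
            raise RuntimeError(f"Camera {i} not found")
    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    writers = []
    for i, cap in enumerate(caps):
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writers.append(cv2.VideoWriter(f"camera_{i}.avi", fourcc, fps, (w, h)))
    # Grabbing both frames in the same loop keeps the two files roughly in sync.
    for _ in range(int(seconds * fps)):
        for cap, writer in zip(caps, writers):
            ok, frame = cap.read()
            if ok:
                writer.write(frame)
    for cap in caps:
        cap.release()
    for writer in writers:
        writer.release()

if __name__ == "__main__":
    record_two_cameras()
```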

lamp-cam configurations

Lamp-cam, image courtesy of Barbara Ballard

Conclusion

Based on the findings from previous research, the limitations of the project and the current stage of its mobile service development, it seems appropriate for the UX2 project to conduct its initial mobile testing in a laboratory. Adapting a meeting room with additional cameras and using participants' own mobile devices (except where a participant with a specific device is recruited) should provide the best solution and uncover as many usability issues as testing in the field would. A subsequent blog post will provide more details of our own test methods, with reflections on their success.

References

Fling, B., (2009). Mobile Design and Development, O’Reilly, Sebastopol, CA, USA.

Kaikkonen, A., Kallio, T., Kekäläinen, A., Kankainen, A. and Cankar, M. (2005). Usability Testing of Mobile Applications: A Comparison between Laboratory and Field Testing, Journal of Usability Studies, Vol. 1, Issue 1.

Kjeldskov, J. and Stage, J. (2004). New techniques for usability evaluation of mobile systems, International Journal of Human-Computer Studies, Vol. 60.

Kjeldskov, J., Skov, M.B., Als, B.S. and Høegh, R.T. (2004). Is It Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field, in Proceedings of the 5th International Mobile HCI 2004 Conference, Udine, Italy, Springer-Verlag.

Roto, V., Oulasvirta, A., Haikarainen, T., Kuorelahti, J., Lehmuskallio, H. and Nyyssönen, T. (2004). Examining Mobile Phone Use in the Wild with Quasi-Experimentation, Helsinki Institute for Information Technology Technical Report.

Tamminen, S., Oulasvirta, A., Toiskallio, K. and Kankainen, A. (2004). Understanding mobile contexts, Personal and Ubiquitous Computing, Vol. 8.

In November we ran a survey of all students at the university to investigate students' mobile device demographics, Internet habits and attitudes towards mobile library services. The survey ran for two weeks and in that time 1,716 students responded. This was helped by an incentive prize draw of £50 Amazon vouchers – very attractive at this time of year.

In addition to demographic data and data on students' mobile devices, students were also asked about their mobile Internet habits and specifically their exposure to mobile university and library services. Finally, they were asked which of the proposed mobile library services they would find most useful. The survey was loosely based on the research conducted by Information Services (IS) in March 2010 on mobile university services. Doing so allowed some comparisons to be made while also investigating mobile library services in depth. Findings from the IS survey can be found here: http://www.projects.ed.ac.uk/areas/itservices/integrated/ITS045/Other_documents/MobileSurvey2010.shtml.

The survey is part of the wider research within the UX2.0 project and relates to Objective 3 (deliverable 5.3), which is to evaluate the user experience in specific contexts involving real user communities. The quantitative data gathered from the survey will be supplemented by the focus group planned for the start of next year. The findings from the survey will help to shape the direction of the focus group, which in turn should corroborate the survey's findings. Check back for a report on the focus group in the near future.

Some of the headline findings are provided below. The full report can be viewed here: http://bit.ly/ux2mobilesurvey.

  • 66.5% of students surveyed own a smartphone. This is an increase of 17.3% from the IS survey in March 2010.
  • Apple iPhones accounted for 21.9% of smart handsets, followed closely by Nokia at 20% and Samsung at 15.3%.
  • 68% of students have pay monthly contracts.
  • 74% of students have either a contract that provides unlimited access to the Internet or one that provides sufficient access to meet their needs.
  • Services which students access online most frequently (several times a day or daily) are websites in general, email and social networks.
  • Activities which students are least likely to carry out on their mobile handsets regularly included downloading content and uploading images to photo sharing networks.
  • The biggest frustrations students experience using the Internet on mobile handsets included slow or limited connection speeds, poorly designed mobile websites or sites without a mobile-compatible version, and the limitations of a small screen with small buttons.
  • The highest proportion of students surveyed stated that they had not tried to access library services using their mobile device.
  • The top 3 potential University library services which students would find most useful are:
    • Search library catalogue
    • Reserve items on loan
    • View your library record – see your charges summary and which books are reserved, requested, booked and loaned

It was interesting to find that over half of the students surveyed had not tried to access library services on their mobile device. Students do seem to use their mobiles for other university services such as student email, the MyEd student portal and WebCT, as well as general university information such as shuttle bus timetables. Perhaps a mobile-optimised library website with useful and easily accessible information, coupled with effective communication that these services exist, would encourage students to use library services on the move more often.

Students do seem open to the idea of mobile library services, the most useful being access to library records, the ability to search the library catalogue and databases, reserve items on loan and locate a shelf mark. However, their usefulness depends on implementation, as many students reported that websites not optimised for smaller screens can be very difficult to use.

Limitations of smaller screens:

“It is slower than a computer and the screen is too small for a full-sized website. The library should have a special mobile website which does not take long to load and is easy to navigate on a small screen.”

Implementation of mobile websites:

“Websites should be tailored in order to be suitable for rendering on small screens. Endless scrolling makes for bad usability.”

Percentage of students who rate each library mobile service as “Very useful” or “Generally useful”

Rank % Library Service
1. 93 View your library record – see your charges summary and which books are reserved, requested, booked and loaned
2. 92.5 Search library catalogue
3. 90 Reserve items on loan
4. 89 Search library databases
5. 87 Locate a shelf mark
6. 84.5 Check PC availability in library
7. 82 Request an item through inter-library loan
8. 71 View maps of libraries
9. 67 Receive alerts relating to library information or services etc.
10. 64 Library Maps & Locations using GPS – find your way around University libraries
11. 58 Library statistics i.e. top books in different categories and popular searches
12. 55 Friend Locator – see where friends are in the library and contact them to meet up
13. 54 Read reviews others have left on items in the library
14. 52 Share items that you’ve found and/or read in the library that you think others will find useful
15. 49 Rate and review items from the library
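
For readers curious how a table like the one above is produced: each percentage is a simple "top two box" score over the usefulness ratings for that service. A tiny illustrative sketch (the service name and responses below are invented, not survey data):

```python
from collections import Counter

responses = ["Very useful", "Generally useful", "Not useful",
             "Very useful", "No opinion", "Generally useful"]

def top_two_box(ratings):
    """Percentage of ratings that are 'Very useful' or 'Generally useful'."""
    counts = Counter(ratings)
    positive = counts["Very useful"] + counts["Generally useful"]
    return 100.0 * positive / len(ratings)

print(f"Search library catalogue: {top_two_box(responses):.1f}%")  # 66.7%
```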

Last week during our regular project meetings we decided on a roadmap for the user research aspect for the last few months of the project (is it that time already?!). In the spirit of transparency I decided to publish my plan here. I’ll of course be blogging along the way, reporting findings and evaluating the work as I go. If you can point me to any existing research that I should include in my investigation, please feel free to leave a comment with a link at the end and thanks in advance!

Background

This research will examine the current use of Web 2.0 developments in digital libraries, their use scenarios and applications. The scope of the remaining research will be centred on services which diffuse the digital library through mobile technology. The aim of this mobile research is to identify areas that may be relevant to The University of Edinburgh (UoE) and evaluate a prototype mobile library service.

Project Aim

To enhance the user experience of digital libraries through technological developments centred on usability inspection and evaluation.

Project Objectives

  1. To undertake usability inspection and contemporary UX techniques research
  2. To enhance digital libraries with state-of-the-art technologies
  3. To evaluate user experience in specific contexts involving real user communities

Mobile Research Aims:

1. Review current mobile digital library landscape, how services are diffused using mobile platforms and what UoE can learn (Obj3)

Formative evaluation informed by existing mobile digital library services and usability studies of mobile library services will be undertaken. This will help to provide a clear picture of the mobile digital library landscape, which will inform the project's own development work. Existing mobile usability research will also provide insights into established user-centred design processes which can be adapted for this project.

Output: Blog series reviewing the trends in mobile digital library services, highlighting successful services and identifying what UoE can learn from other projects. In addition, a list of existing mobile digital library resources will be created as a resource for others.

2. Review good UX practices for mobile applications & websites as well as usability evaluation techniques (Obj1)

As mobile usability is a relatively new subject for the project, research will be conducted on usability practices for mobile design and development. In addition, mobile evaluation methodologies will be identified and incorporated into the prototype evaluation (Obj2).

Output: Blog post which highlights good mobile UX resources and describes the evaluation technique which will be applied to the project.

3. Investigate what users want from a mobile library service (Obj3)

Continuing on from the Mobile Services survey conducted by Information Services (IS) in March 2010, a subsequent survey will be conducted with UoE students, focusing on mobile library services. The findings will provide insight into the types of services users would find useful, and this will hopefully influence the direction of development. The research will also help to support the ongoing mobile services development by IS, as it will provide additional data which can be benchmarked against their previous survey.

The quantitative data gathered from the survey will be supplemented with a focus group involving those likely to use the service (end users) and those helping to provide it (staff, developers, librarians). The findings will not only help to qualify the findings from the survey but also, by including all stakeholders, provide a broad perspective on how a digital library service should be shaped.

Output: Survey report detailing findings and outcomes from first focus group with stakeholders.

4. Evaluate the usability of the prototype mobile library service (Obj1)

The usability of the higher-fidelity prototype will be evaluated with representative users. These one-on-one sessions will take place with a small number of users (6-12) and will be conducted using a simulator, a smartphone or the user's own mobile handset. Qualitative data will be captured and reported. The objective of this usability study is to ensure the success of the prototype and provide a use case for digital library services at the University of Edinburgh and beyond.

Output: Summary of findings from focus group and detailed usability test report.


In a previous blog I evaluated the progress of the data gathering stage of persona creation for both Aquabrowser UX and UX2.0. As the data gathering has now been completed and analysed, we have the beginnings of our personas. It therefore seemed a good time to reflect on the process as well as document and review our methods. In this first of three blogs detailing our persona creation, I will talk about the data gathering methods and reflect on their success.

Originally the plan had been to create personas by conducting qualitative research and validating the segments with quantitative data. Unfortunately we underestimated the time and resources required to conduct the qualitative research, and as such were unable to validate the personas using quantitative research. Although this extra step is valuable when you want to explore multiple segmentation models and back up the qualitative findings with quantitative data, personas created without it are still valid and extremely useful. As this was the first time the team had conducted persona data gathering, it took longer than anticipated; given the restrictions on time and budget for this project, the additional validation was always an ambition rather than a guarantee. I've stepped through the process we used below so that others can adopt it if needed. The process is a good template for conducting all types of interviews, not just those for persona creation.

1. Designing the interviews

When designing the interview questions the team first set about defining the project goals. This was used as a basis for the interview questions and also helped to ensure that the questions covered all aspects of the project goals.

Goal 1: In relation to University of Edinburgh digital library services, and AquaBrowser, identify and understand;

  • User general profiles (demographic, academic background, previous experiences)
  • User behaviours (e.g. information seeking) / use patterns
  • User attitudes (inc. recommendations)
  • User goals (functional, fit for purpose)
  • Data, content, resource requirements

To keep the interview semi-structured and conversational, the questions created were used primarily as prompts for the interviewer and to ensure that interviewees provided all the information being sought. More general questions were posed as a springboard for open discussion. Each question represented a section of discussion, with approximately six questions in total, and each question in turn had a series of prompts. The opening questions are detailed below:

  1. Could you tell me a bit about yourself…?
  2. Thinking now about how you work and interact with services online, what kind of activities do you typically do when you sit down at the computer?
  3. I want to find out more about how you use different library services, can you tell me what online library services you have used?
  4. We want to know how you go about finding information…What strategy do you have for finding information?
  5. Finally, we'd like to ask you about your own opinions of the library services… a. What library or search services are you satisfied with and why? b. Why do you choose <mentioned services> over other services?

Interviewees were also given the opportunity at the end of the interview to add anything they felt was valuable to the research, or which they just wanted to get off their chest. Several prompt questions were modified or added for the librarian interview script; otherwise the scripts were very similar.

When the interview was adapted into a script for the interviewer, introductory and wrap-up sections were added to explain the purpose of the interview and put the interviewees at ease. These sections also provided prompts to the interviewer to ensure permission was obtained beforehand and that the participant was paid at the end.

2. Piloting the interview

The script was piloted on a colleague not involved in the project a few days before the interviews began. This provided an opportunity to tweak the wording of some questions so they were clearer, to time the interview and ensure it could be conducted in approximately 45 minutes, and to help the team become more familiar with the questions and prompts. Necessary changes were then made to the script used for the first 'real' interview.

3. Recruitment – Creating a screener

In order to recruit a range of users at the university, a screener was devised. This provided information on each participant's use patterns and some basic demographic details. It also allowed us to establish each participant's availability, as the interviews were to be conducted over a four-week period in June and July, and made it easier to collect contact details from users who had already agreed to take part. As with most user research where incentives are involved, there is always the danger that participants will be motivated by the reward and consequently say whatever they need to say in order to be selected. As we were looking for users familiar with Aquabrowser and Voyager (the 'Classic' catalogue), we disguised these questions among questions about other library services. This prevented the purpose of the research from being exposed to participants. The screener questions we used are detailed below:

Screener questions:

  1. Please confirm that you are willing to take part in a 45-minute interview. Note: There will be a £15 book voucher provided for taking part in an interview.
  2. In addition to interviews, we are also recruiting for website testing sessions (45-60 min). Would you be interested in taking part?
    Note: There will be a £15 book voucher provided for taking part in a website testing session.
  3. What do you do at the university? Undergrad: 1st/2nd/3rd/4th/Post grad/PhD/Library staff/Teaching staff/Research staff/other.
  4. What is your department or program of study?
  5. Which of the following online services do you use at the University and how many hours a week do you spend on each? Classic catalogue/ Aquabrowser catalogue/Searcher/E-Journal search/ PubMed/ My Ed/Web of Knowledge/Science/Science Direct.
  6. How much time per week do you spend in any of Edinburgh University libraries? None/Less than 1 hour a week/1-3 hours a week/4-10 hours a week/More than 10 hours a week.
  7. Please state your preferred mode of contact to arrange an interview date/time.
  8. Please leave your name.
  9. Please leave relevant contact details: Email and/or telephone number.

Thank you very much for your time. If you are selected to participate in the current study, we'll be in touch to see when would be the best time for your session.
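
Once responses like these come in, shortlisting candidates is easy to script. The sketch below is hypothetical: the column names are invented for illustration and do not reflect how the screener data was actually stored.

```python
import csv

def shortlist(path="screener_responses.csv"):
    """Return willing respondents, flagging catalogue users so that a few
    non-users can also be kept for comparison (as described above)."""
    selected = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["willing_to_interview"] != "yes":
                continue
            uses_catalogue = (row["aquabrowser_hours"] != "0"
                              or row["classic_catalogue_hours"] != "0")
            selected.append((row["name"], row["role"], uses_catalogue))
    return selected

for name, role, uses_catalogue in shortlist():
    print(name, role, "catalogue user" if uses_catalogue else "comparison")
```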

4. Recruitment – Strategy

A link to the screener was publicised through a variety of streams. An announcement was created and placed in the MyEd portal which every person within the university has access to (staff and students). In addition to this, an event was created which was also visible within the events section of MyEd. Several email invitations were sent via mailing lists requesting participation. These lists included the School of Physics, Divinity and Information Services staff.

To encourage students and staff to participate, an incentive was provided: a £15 book voucher for those who agreed to take part in an interview. The screener was launched on 21st May and ran until the interviews were completed on 15th July. Interviews were scheduled to take place over four weeks beginning on 17th June. On average, six interviews were carried out each week over two separate days. These days varied, but often fell on Tuesdays, Thursdays and Fridays, influenced by the availability of team members to carry out the interviews. Each participant was given the opportunity to name their preferred location for the interview: interviews that take place in the user's own environment are more likely to put the participant at ease and consequently produce better results. However, every participant ended up opting to meet at a mutually convenient location – the main library. This public venue is familiar to all participants and centrally located, making it less intimidating and easy to find. It also enabled more interviews to be conducted in a short period of the day, as travelling to various locations was not required.

Participants were recruited based on a number of factors: their position in the university (student, staff, etc.) and their familiarity (or, in some cases, lack of familiarity) with library services, especially Aquabrowser and Voyager (the Classic catalogue). Individuals who spent a reasonable amount of time using these services were of interest, but a number of individuals who did not were also recruited to provide comparisons. Availability was obviously also an important factor, and anyone who was not available in June and/or July was excluded.

Although the screener sped up the recruitment process, there were still a number of individuals on the list who did not respond to follow-up email requests to participate. This is always frustrating when people have apparently registered their interest by completing the screener. Despite this, we managed to recruit 19 participants from a list of 82 respondents, approximately a 23% response rate. Unfortunately, two of these 19 individuals dropped out at the last minute: one did not show up and another cancelled due to ill health. As these cancellations occurred on the last day of interviews and did not affect an under-represented demographic group, the decision was taken not to recruit replacements and to conclude the data gathering stage with 17 participants.

Unfortunately, some groups were under-represented. The biggest concern was the limited number of staff, and in particular lecturers, in the study. This ultimately meant that this group could not be adequately analysed, and time limitations made it difficult to undertake additional strategies to target these individuals. The data gathered could only provide personas representing those interviewed, and consequently a persona for faculty staff was not possible. Any future persona development work should ensure that a variety of lecturers and researchers are interviewed as part of the process.

5. Conducting the interviews

Before the interviews began, several preparations had to be made: booking audio recording equipment, sourcing a digital camera for images and creating consent forms for both audio and visual documentation. Two team members were present for every interview: one took notes while the other interviewed the participant. These roles were swapped for each new interview, giving both team members the chance to act as interviewer and note-taker. After discussing how best to take notes, it was decided that a printed template for each interview, which the note-taker could complete, would be a good strategy. This helped to keep notes in context and made note-taking as efficient as possible, reducing the danger of important information being lost. The note-taker also recorded a time stamp each time something ambiguous or important was said, so that it could be clarified later by listening to the recordings.
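
The time-stamping habit is also easy to support with a trivial tool. As a minimal illustrative sketch (not the printed template we actually used), each note typed during a session is saved with a clock time so it can be matched against the audio recording afterwards:

```python
from datetime import datetime

def take_notes(out_path="interview_notes.txt"):
    """Append each typed note to a file with a clock-time stamp."""
    print("Type notes and press Enter; enter an empty line to finish.")
    with open(out_path, "a") as out:
        while True:
            note = input("> ")
            if not note:
                break
            stamp = datetime.now().strftime("%H:%M:%S")
            out.write(f"[{stamp}] {note}\n")

if __name__ == "__main__":
    take_notes()
```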

After each interview the team transferred the notes onto the project Wiki. Doing so allowed a post-interview discussion to take place in which both members reflected on their understanding of the interviewee's comments. It also provided the opportunity to review the interview and discuss improvements or changes that could be made to the next one. This was particularly useful for the first few interviews.

Having two team members present for each interview was time-consuming but also provided many benefits. During the interview it gave the interviewer a chance to consult with someone else before concluding the interview. Often the note-taker may have noted something the interviewer overlooked and this time at the end ensured that any comments made earlier could be addressed. In addition, any missed questions or prompts were asked at this point. It is also beneficial for all team members to be present when it comes to analysing the data at the end. Individuals do not have to rely heavily on the notes of another to become familiar with the participant. This is particularly important when it is not possible to have transcripts of each interview made. Finally, having a note-taker removes the burden from the interviewer who does not have to keep pausing the interview to note down anything that has been said. As these interviews were intended to be semi-structured and have a discursive feel, having a note-taker was crucial to ensuring that this was achieved.

Conclusion

Overall the interviews were quite successful. However, in future more interviews with staff should be conducted, in addition to a web survey, so that the segmentation can be validated. Full access to the documents created during the process and the resources consulted throughout can be found on the project Wiki.

In the second part of the series, guest blogger Liza Zambolgou will discuss the segmentation process and how we analysed the data collected from the interviews.

