Researching Usability


Many thanks to everyone who made it along to our event yesterday. I hope it was informative and provided some guidance on planning your own user research. I’ve uploaded my slides from the morning presentation to Slideshare for those who might want to bookmark them, or for anyone who didn’t make it along. Based on the feedback we received, I think it’s fair to say the whole day was a great success. If you were there and wish to send your feedback (good and bad), please feel free to add your comments here or drop me a line.

If you hadn’t already heard, the UCD day was the last activity from the UX2.0 project which officially ended last month. It’s been a great 24 months working on this project and while putting together the slides for the UCD Day, I realised just how much work we have produced. Hopefully all of the documents, reports and blog posts which we contributed during that time will be put to good use by others. I’m hoping to continue this blog in some capacity once I settle into a new role but fear it might go a little quiet in the short-term.

Thanks for visiting the blogs and project website/wiki and I hope you’ll continue to enjoy reading my posts in the future.

Lorraine

EDIT: I forgot to mention, those who came to the prototyping session (and those who didn’t) who are interested in trying Balsamiq themselves can do so through our pilot scheme: https://www.wiki.ed.ac.uk/display/UX2/JISC+Balsamiq+Pilot. Please remember to register as an EASE friend first to gain access. It’s a great opportunity to try Balsamiq and determine if it’s worth purchasing a licence. It really is a great tool and easy to use too!


Introduction

Following on from the usability testing of the desktop prototype digital library, we conducted further user research on the mobile prototype. The prototype is similar to the full version but with some services removed to create a simpler interface. The facet navigation situated in the right column of the desktop version is now provided behind a ‘Refine’ link. The ability to bookmark items is also available. As before, the prototype is based on an open source Ruby on Rails discovery interface, Blacklight, which has been further developed throughout the project. The prototype indexes the catalogues provided by the National e-Science Centre at The University of Edinburgh (NeSC) and CERN, the European Organisation for Nuclear Research.

The data capture and test methods are detailed here alongside the main findings.

Method

On the 10th and 11th March 2011, usability testing was conducted on the UX2 mobile-optimised digital library prototype with six selected students from the University of Edinburgh (UoE). Each test was carried out as a one-to-one session and comprised an explanation of the research and task-based scenarios, followed by a short post-test interview and word-choice questionnaire. A full description of the prototype and the changes made to the desktop version to optimise it for mobile devices will be documented shortly on the associated project blog, Enhancing User Interactions in Digital Libraries.

View of mobile device from webcam setup

Six participants were recruited from the same list originally compiled for the focus group recruitment in January. Only those who owned an iPhone were invited to take part, as this was the platform the prototype had been optimised for. Each participant was given a £10 Amazon voucher as payment for their time. Each session lasted between 50 and 65 minutes. A few additional statistics on the participants’ profiles are provided below:

  • All 6 participants owned an Apple device (iPhone 3GS, iPhone 4 or iPod touch)
  • 3 participants had attended our focus group in January, 3 had not
  • 4 participants were undergraduates, 2 were postgraduates
  • 50:50 male to female ratio
  • All were on a pay-monthly contract for their mobile phone

Complete setup with webcam positioned on perspex above the phone, plus an additional webcam to capture body language.

To record the testing we used a similar setup to that suggested by Harry Brignull on his blog. It was fairly low-cost (approx. £80 excluding Morae software) and required two webcams, a small piece of perspex (approx. 35cm x 10cm), testing software (we used Morae v3 as it allowed us to capture from two webcams), cheap plastic mobile phone cases for each type of handset being tested (two in this case) and velcro to attach them to the perspex. It was very easy to mould the perspex to shape by heating it gently above something readily available: a toaster! The Logitech C905 webcam we used had great advanced settings which allowed us to mirror and flip the image, making it easy to decipher what was going on (see image). Overall the setup worked well as it was lightweight and relatively unobtrusive. The camera remained in the best position at all times, which allowed us to see exactly what was going on, while a second webcam recorded participants’ interactions and body language.

Task scenarios

The scenarios used were adapted from those used for the desktop usability tests. They were designed to test a variety of the prototype’s features. The main focus was the facet navigation system (Refine link), the presentation of information in the item detail page and the bookmarking service:

  1. As part of your coursework your lecturer has asked you to read a recent presentation on fusion. Using the prototype, can you find a suitable presentation published in the last 2-3 years?
  2. You have to write an essay on quantum mechanics, can you use the prototype to find several resources which will help you get started? If you wanted to save these items to refer to later when you’re at a computer, how would you do this?
  3. You are looking for a paper on Grid Computing to read on the bus. Can you use the prototype to find a suitable digital resource?
  4. You are looking for a presentation given by Harald Bath at the EGEE summer school; can you find the video or audio presentation to watch at the gym?

Word Choice Results

The aim of the word choice exercise was to elicit an overall impression of the participants’ experience using the prototype. At the end of the session, participants were presented with a list of 97 adjectives (both positive and negative) and asked to pick those they felt were descriptive of the prototype.

Word cloud results

Each of the words shown in the image above was ticked at least once. The larger and darker the word, the more often it was ticked by participants.
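As a rough illustration of how the tick data behind such a cloud can be tallied and sized, here is a minimal Python sketch. The response lists and the sizing rule are invented for the example; they are not our actual data.

```python
from collections import Counter

# Hypothetical tick data: each inner list holds the adjectives one
# participant selected from the 97-word checklist.
responses = [
    ["Accessible", "Clean", "Easy to use"],
    ["Accessible", "Straightforward", "Useful"],
    ["Accessible", "Clean", "Confusing"],
]

# Tally how often each word was ticked across all participants.
counts = Counter(word for picks in responses for word in picks)

# Scale font size linearly with tick count so more frequent words
# render larger in the cloud (base 12pt plus 6pt per extra tick).
def font_size(word, base=12, step=6):
    return base + step * (counts[word] - 1)

print(counts["Accessible"])      # 3
print(font_size("Accessible"))   # 24
```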

The most prominent words were all positive: Accessible, Straightforward, Easy to use, Clean and Useful. These words were chosen by 5 of the 6 participants surveyed. Other positive words included Clear, Convenient, Effective, Efficient, Flexible, Relevant and Usable. Some of these words are particularly interesting as they could directly relate to the ability to access the service anywhere using a mobile device.

Some of the negative words selected include: Simplistic (which could also be positive), Ordinary, Ambiguous, Inadequate, Ineffective, Old and Confusing. However, some of these words were used by a minority of participants and were therefore not as visible. The words are not all that surprising considering the mobile website is a prototype with fewer services than the desktop version.

Findings

Finding Information (Navigation and refine)

Some students noticed that there was no universal ‘Home’ link throughout the site. Students felt that a shortcut back to the home page would be a good idea, despite the fact that the search field was present on the results page. This suggests a desire from students to be able to skip pages instead of navigating back through pages one at a time.

NCSU Mobile search form

Many of the students stated a need to have an advanced search facility. Often students were looking for either an Advanced search link near the search field or a drop-down menu where they could specify details such as Author, Title, Subject/Keyword. As this is a feature they encounter in other search services, they still expect to have access to it on a mobile device. An example of a mobile library site providing such a service is North Carolina State University (see image).

The desire for an advanced search was problematic for users who did not see the ‘Refine’ link at the top of the page. Those users struggled to complete tasks and consequently their experience of the prototype suffered. Failing to use the Refine service made using the prototype much more difficult. This was demonstrated when students were asked to search for items from a specific year (task 1): without it, students had no way of knowing whether they had seen every relevant result short of looking at every page of results. This finding suggests that it is imperative to address the visibility of the Refine service. When the Refine link was pointed out to students, they stated that they had not seen the link due to its location at the top of the screen, or had not fully understood its label. These students were looking for an ‘Advanced’ label located close to the search form and search results.

Those students who did use the refine service wanted to filter by more than one item at a time. This was particularly evident in task 1, when students were searching for presentations published in the last 2-3 years. Students found it quite laborious to select one year at a time to review the results. They wanted to be able to search a period of time, either the last five years or a range they set themselves (possibly using a form or sliders). This finding was also revealed during the desktop prototype testing. It demonstrates that this is an important criterion for making searching easier and remains so when using the service on a mobile device.
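To illustrate what a date-range facet might look like behind the scenes, here is a minimal Python sketch of an inclusive year-range filter. The records and field names are hypothetical; this is not the prototype’s actual Blacklight implementation.

```python
# Hypothetical catalogue records; only the fields needed for the sketch.
records = [
    {"title": "Fusion overview", "year": 2010},
    {"title": "Grid computing intro", "year": 2008},
    {"title": "Quantum mechanics primer", "year": 2009},
]

def filter_by_year_range(items, start, end):
    """Return items published between start and end inclusive,
    replacing the one-year-at-a-time facet selection."""
    return [r for r in items if start <= r["year"] <= end]

recent = filter_by_year_range(records, 2009, 2011)
print([r["title"] for r in recent])
# ['Fusion overview', 'Quantum mechanics primer']
```

A form or pair of sliders on the Refine page could feed the `start` and `end` values directly, removing the back-and-forth the students described.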

The refine service itself was generally well received when used. The facets provided were considered useful and were listed in a logical order of importance. One student realised that the items listed in the year facet were not necessarily the last ten years, but rather the top ten results. This caused some confusion at first and could affect the success of students’ tasks. In addition, students found using the refine service laborious at times, with unnecessary steps involved which could prove troublesome on a mobile device. The task flow below outlines the steps involved in a typical task using the refine service:

Select ‘Refine’ > Select ‘2010’ from Year facet > [view results] > Go back to ‘Refine’ > Remove 2010 > [view results] > Go back to ‘Refine’ > Select ‘2009’…

or

Select ‘Refine’ > Select ‘2010’ >[view results] > Remove 2010 from results page > [view results] > Go back to ‘Refine’ > Select ‘2009’…

Note: actions in square brackets happen automatically in the process.
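Counting only the manual actions in the two flows above (the bracketed result refreshes happen automatically) makes the overhead easy to see. A small sketch, with a hypothetical multi-select flow added for comparison; the step names are illustrative:

```python
# Model each refine flow as the list of manual actions a student performs
# (automatic result refreshes are excluded from the count).
flow_via_refine_page = [
    "open Refine", "select 2010", "reopen Refine",
    "remove 2010", "reopen Refine", "select 2009",
]
flow_via_results_page = [
    "open Refine", "select 2010",
    "remove 2010 on results page", "reopen Refine", "select 2009",
]
# A hypothetical multi-select facet lets the student tick both years at once.
flow_multi_select = ["open Refine", "tick 2009 and 2010", "apply"]

print(len(flow_via_refine_page))   # 6
print(len(flow_via_results_page))  # 5
print(len(flow_multi_select))      # 3
```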

Several students used the first flow to remove facets, perhaps because they did not notice the option to do this on the results page. Consequently, the additional number of steps involved appeared to affect users’ experience. This was also one of the reasons that students wanted to be able to select a range of dates at once instead of going backwards and forwards within the refine service. One student suggested a tick-box option where they could select all the items within each facet at once to narrow their results.

Saving Information (Bookmarking service)

When students were asked to save information to refer to later as part of task 2, some used the bookmarking service. Those who did found it relatively easy to bookmark an item and retrieve it afterwards. The feedback suggested that it was clear when an item had been saved and that the folder icon at the top of the screen was clearly visible. However, there appears to be a bug when a user tries to bookmark an item before logging in. Upon selecting the ‘Bookmark this’ link and completing the login form, users are taken to their bookmarks page without the item listed; the user has to go back to the results page and attempt the action again for it to succeed. In addition, although users could see the number of items bookmarked next to the folder icon, it might be more difficult to spot under different lighting conditions: the white text on a pale green background makes it less likely to stand out.
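One common way to fix this kind of bug is to stash the attempted bookmark in the session before redirecting to the login form, then apply it once login completes. The sketch below models that logic in plain Python purely for illustration; it is not the prototype’s actual Rails code, and the function and key names are invented.

```python
# Minimal model of the login round trip: the item a user tried to
# bookmark is remembered across login instead of being dropped
# (the bug observed in testing).
def request_bookmark(session, item_id):
    if not session.get("user"):
        session["pending_bookmark"] = item_id  # stash before login
        return "redirect:login"
    session.setdefault("bookmarks", []).append(item_id)
    return "redirect:bookmarks"

def complete_login(session, user):
    session["user"] = user
    pending = session.pop("pending_bookmark", None)
    if pending is not None:
        session.setdefault("bookmarks", []).append(pending)
    return "redirect:bookmarks"

session = {}
request_bookmark(session, "record-42")  # not logged in yet
complete_login(session, "student1")
print(session["bookmarks"])  # ['record-42']
```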

There was also a desire for additional features within the bookmarking service which would make it a more effective tool. Additional information on each item including author, date, location and most importantly, shelfmark, would make it easier for users to distinguish items with the same title and locate a number of saved items in the library quickly. Students also wanted to be able to categorise bookmarks into sections, similar to the folder system in a browser’s bookmarking service. Being able to export bookmarks by emailing them to an address was also considered a useful feature to provide.

Although the usability of the bookmarking service was considered good, its usefulness was not apparent to every student. Many students have their own system in place for recording items of interest in the library: writing details down on paper, copying and pasting text to a separate document, or simply minimising the browser to open again at the relevant moment. The Safari browser on iPhones allows users to have several windows open at once and automatically saves the state of all windows when it is closed. One participant in particular used this system to park information in order to retrieve it when required, and consequently did not envisage themselves using the bookmarking system. It is interesting to note that none of the students who participated in the study stated that they used the existing bookmarking system available in the University’s own library website, ‘My List (Bookbag)’.

Reviewing Information

Although the UX2 library is a prototype, students felt that the level of information provided on each item was for the most part adequate. Being able to view documents was often expected by students but was not always possible. One student questioned whether everyone would be able to preview documents if they did not have a Google account, as this is requested upon selecting the electronic document. This could be problematic, especially because feedback from the focus groups indicated an aversion to downloading files to mobile devices due to limited data storage.

In addition to previewing documents, the links to information sources under the heading ‘Links, Files’ were not easily understood by students. The links were not easy to identify because they are not presented as conventional blue, underlined text. The long URLs and small text size also made it difficult for students to guess where the links might lead. In many situations, the links did not meet user expectations: students would select a link expecting it to lead them to the full-text item when instead it went to an external website. Students wanted to be warned when links lead to an external (and often not mobile-optimised) website. Although this was an issue during the desktop usability tests, it became even more apparent with students using the prototype on a smaller screen.
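A simple way to provide such a warning is to compare each link’s hostname against the site’s own and append a label when they differ. A minimal sketch follows; the hostname and label wording are invented for the example.

```python
from urllib.parse import urlparse

SITE_HOST = "library.example.ac.uk"  # hypothetical prototype hostname

def link_label(url, text):
    """Append a warning when a link leaves the prototype, so users
    know they may land on a site not optimised for mobile."""
    host = urlparse(url).netloc
    if host and host != SITE_HOST:
        return f"{text} (external site)"
    return text

print(link_label("http://cds.cern.ch/record/1", "Full record"))
# Full record (external site)
print(link_label("/catalog/1", "Item details"))
# Item details
```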

Something which was requested by students not only during the usability tests but also in the focus groups was the ability to easily access a map of libraries. Having such information would make it much easier to locate books, particularly when students are not familiar with a particular library. Students felt a link to such a map could be provided on the item information page, located under the library holding data to make it easy to access. There was also a desire for a simple system which informs students when an item is available or on loan. A colour-coding system or simple icon was suggested which could be displayed next to each item in the results page: green for available, red for on loan. A library which has gone some way to addressing this need is NCSU, which gives users the opportunity to filter out items which are not available (see earlier image).
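The suggested colour coding could be as simple as a lookup from loan status to indicator colour. A tiny sketch, using the statuses and colours students suggested plus an assumed grey fallback:

```python
# Hypothetical mapping from loan status to the colour-coded indicator
# students suggested for the results list.
STATUS_COLOURS = {"available": "green", "on loan": "red"}

def availability_badge(status):
    # Fall back to grey for statuses the scheme doesn't cover.
    return STATUS_COLOURS.get(status, "grey")

print(availability_badge("available"))  # green
print(availability_badge("on loan"))   # red
print(availability_badge("recalled"))  # grey
```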

Post test interview

Overall, students felt that the prototype was fairly effective in helping them find information. Those who did not see the ‘Refine’ link naturally believed it could have been clearer. Another student stated that the quality of results was sometimes an issue, suggesting the need for improvement. The timeliness of resources was often dependent on the subject students were studying; in subjects where research moves quickly, such as technology, recent publications are much more important. Students were asked to state two things they particularly liked about the prototype. Their answers are listed below:

  • Refine page (2)
  • Item information page
  • Level of information provided – not overloaded
  • Font style is modern
  • Minimalist and contemporary design (2)
  • Simple search
  • Bookmarking system (2)
  • Being able to filter results by type e.g. book, presentation

Some things that students believed could be improved:

  • Refine search – visibility and task flow
  • Design, particularly the home page – no logo, name or clear description of purpose (2)
  • Provide a link to home page throughout the site (2)
  • Provide search options next to search form
  • Improve the date range of the Date facet
  • Tips to guide you through the website
  • Visibility and source of links on item page
  • Provide an additional option to search/narrow results by library

Conclusion

Observation of the usability tests showed that participants coped well undertaking tasks using a smaller screen. The biggest issue was the visibility of the refine page, which contained the facet navigation service. When participants were not aware of this option, their experience of using the prototype was severely compromised. Those who did use the refine service were able to complete tasks more efficiently, but found the number of steps involved unnecessary. This suggests further work is required on the implementation of the facet navigation service to improve its usefulness and usability. Although some of the students appreciated the minimalist nature of the prototype, there was still a desire to undertake more than just simple searches. The bookmarking service was on the whole well received and was considered useful with the addition of a few more features. However, the uptake of such a service is still unknown, as students often had existing bookmarking systems in place.

Last week UX2 were fortunate to be invited back to present at the Scottish UPA’s regular meeting. Having introduced our project and the work we were doing at an event last year, this was a great opportunity to provide an update on the work which has taken place over the last 12 months while also sharing our latest research findings on mobile library services. The slides from the night are now available on Slideshare and have also been provided below. As the project winds up, it was great to be able to highlight our work to other usability professionals. We were pleased to find out that researchers at Napier University Library were also in attendance. We hope the presentation was helpful and informative to them and everyone else who gave up their evening to attend.

As the project embarks on usability testing using mobile devices, it was important to evaluate mobile-specific research methods and understand the key differences between desktop usability testing and testing on mobile devices. The most important thing to be aware of when designing and testing for mobile is that it IS different to traditional testing on desktop computers. Additional differences are provided below:

  • You may spend hours seated in front of the same computer, but mobile context is ever-changing. This impacts (amongst other things) the users’ locations, their attention, their access to stable connectivity, and the orientation of their devices.
  • Desktop computers are ideal for consumption of lengthy content and completion of complex interactions. Mobile interactions and content should be simple, focused, and should (where possible) take advantage of unique and useful device capabilities.
  • Mobile devices are personal, often carrying a wealth of photos, private data, and treasured memories. This creates unique opportunities, but privacy is also a real concern.
  • There are many mobile platforms, each with its own patterns and constraints. The more you understand each platform, the better you can design for it.
  • And then there are tablets. As you may have noticed, they’re larger than your average mobile device. We’re also told they’re ideal for reading.
  • The desktop is about broadband, big displays, full attention, a mouse, keyboard and comfortable seating. Mobile is about poor connections, small screens, one-handed use, glancing, interruptions, and (lately), touch screens.

~ It’s About People Not Devices by Stephanie Rieger and Bryan Rieger (UX Booth, 8th February 2011)

Field or Laboratory Testing?

As our interaction with mobile devices happens differently to that with desktop computers, it seems a logical conclusion that the context of use is important in order to observe realistic behaviour. Brian Fling states in his book that you should “go to the user, don’t have them come to you” (Fling, 2009). However, testing users in the field has its own problems, especially when trying to record everything going on during tests (facial expressions, screen capture and hand movements). While contextual enquiries using diary studies are beneficial, they also have drawbacks, as they rely on the participant to provide an accurate account of their behaviour, which is not always easy to achieve even with the best intentions. Carrying out research in a coffee shop, for example, provides the real-world environment which maximises external validity (Madrigal & McClain, 2010). However, for those for whom field studies are impractical for one reason or another, simulating a real-world environment within a testing lab has been adopted; researchers believe this can also help to provide the external validity which traditional lab testing cannot (Madrigal & McClain, 2010). Researchers have attempted a variety of techniques to do this, listed below:

participant on a treadmill

Image from Kjeldskov & Stage (2004)

  • Playing music or videos in the background while a participant carries out tasks
  • Periodically inserting people into the test environment to interact with the participant, acting as a temporary distraction
  • Distraction tasks including asking participants to stop what they are doing, perform a prescribed task and then return to what they’re doing (e.g. Whenever you hear the bell ring, stop what you are doing and write down what time it is in this notebook.) (Madrigal & McClain, 2010)
  • Having participants walk on a treadmill while carrying out tasks (continuous speed and varying speed)
  • Having participants walk at a continuous speed on a course that is constantly changing (such as a hallway with fixed obstructions)
  • Having participants walk at varying speeds on a course that is constantly changing (Kjeldskov & Stage, 2003)

Although realism and context of use would appear important to the validity of research findings, previous research has challenged this assumption. A comparison of the usability findings of a field test and a realistic laboratory test (where the lab was set up to recreate a realistic setting such as a hospital ward) found that there was little added value in taking the evaluation into the field (Kjeldskov et al., 2004). The research revealed that lab participants on average experienced 18.8% usability problems compared to field participants who experienced 11.8%. In addition, 65 man-hours were spent on the field evaluation compared to 34 man-hours for the lab evaluation, almost half the time.

Subsequent research has provided additional evidence to suggest that lab environments are as effective at uncovering usability issues (Kaikkonen et al., 2005). In this study, researchers did not attempt to recreate a realistic mobile environment, instead comparing their field study with a traditional laboratory usability test set-up. They found that the same issues arose in both environments. Laboratory tests found more cosmetic or low-priority issues than the field tests, and the frequency of findings in general varied (Kjeldskov & Stage, 2004). The research did find benefits of conducting a mobile evaluation in the field: it was able to inadvertently evaluate the difficulty of tasks by observing participant behaviour; participants would stop, often look for a quieter spot, and ignore outside distractions in order to complete the task. This is something that would be much more difficult to capture in a laboratory setting. The research also found that the field study provided a more relaxed setting which influenced how much verbal feedback the participant provided; however, this is contradicted by other studies which found the opposite to be true (Kjeldskov & Stage, 2004).

Both studies concluded that laboratory tests provided sufficient information to improve the user experience, in one case without trying to recreate a realistic environment. Both found field studies to be more time-consuming; unsurprisingly, this also means they are more expensive and require more resources. It’s fair to say that running a mobile test in the lab will provide results similar to running the evaluation in the field. If time, money and/or access to equipment is an issue, it certainly won’t be a limitation to test in a lab or empty room with appropriate recording equipment. Many user experience practitioners will agree that any testing is better than none at all. However, there will always be exceptions where field testing will be more appropriate; for example, a geo-based mobile application will be easier to evaluate in the field than in the laboratory.

Capturing data

Deciding how to capture data is something UX2 is currently thinking about. Finding the best way to capture all relevant information is trickier on mobile devices than on desktop computers. Various strategies have been adopted by researchers, a popular one being the use of a sled which the participant can hold comfortably, with a camera positioned above to capture the screen. In addition, it is possible to capture the mobile screen using specialised software specific to each platform (http://www.uxmatters.com/mt/archives/2010/09/usability-for-mobile-devices.php). If you are lucky enough to have access to Morae usability recording software, it has a specific setting for testing mobile devices which allows you to record from two cameras simultaneously: one to capture the mobile device and the other to capture body language. Other configurations include a lamp-cam which clips to a table with the camera positioned in front of the light. This set-up does not cater for an additional camera to capture body language and would require a separate camera set up on a tripod. A more expensive solution is the ELMO-cam, specifically their document camera, which is stationary and requires the mobile device to remain static on the table. This piece of kit is more likely to be found in specialised research laboratories which can be hired for the purpose of testing.

lamp-cam configurations

Lamp-cam, image courtesy of Barbara Ballard

Conclusion

Based on the findings from previous research, the limitations of the project and the current stage of its mobile service development, it seems appropriate for the UX2 project to conduct initial mobile testing in a laboratory. Adapting a meeting room with additional cameras and using participants’ own mobile devices (where owners of a specific device are recruited) will provide the best solution and uncover as many usability issues as if the testing took place in the field. A subsequent blog post will provide more details of our own test methods with reflections on their success.

References

Fling, B., (2009). Mobile Design and Development, O’Reilly, Sebastopol, CA, USA.

Kaikkonen, A., Kallio, T., Kekäläinen, A., Kankainen, A. and Cankar, M. (2005). Usability Testing of Mobile Applications: A Comparison between Laboratory and Field Testing, Journal of Usability Studies, Vol. 1, Issue 1.

Kjeldskov, J., Stage, J. (2004). New techniques for usability evaluation of mobile systems, International Journal of Human-Computer Studies, Issue 60.

Kjeldskov, J., Skov, M.B., Als, B.S. and Høegh, R.T. (2004). Is It Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field, in Proceedings of the 5th International Mobile HCI 2004 Conference, Udine, Italy, Springer-Verlag.

Roto, V., Oulasvirta, A., Haikarainen, T., Kuorelahti, J., Lehmuskallio, H. and Nyyssönen, T. (2004) Examining Mobile Phone Use in the Wild with Quasi-Experimentation, Helsinki Institute for Information Technology Technical Report.

Tamminen, S., Oulasvirta, A., Toiskallio, K., Kankainen, A. (2004). Understanding mobile contexts. Special issue of Journal of Personal and Ubiquitous Computing, Issue 8

Last month a usability study was carried out on the UX2 digital library prototype. The study involved 10 student participants who tried to complete a variety of tasks using the prototype. The report is now available to read in full and can be accessed via the library (http://bit.ly/ux2usabilityreport1).

The prototype is based on an open source Ruby on Rails discovery interface, Blacklight, which has been further developed for the project to provide additional features. Existing component services have been ‘mashed up’ to generate the UX2.0 digital library. The prototype currently indexes the catalogues provided by the National e-Science Centre at The University of Edinburgh (NeSC) and CERN, the European Organisation for Nuclear Research. The report presents the findings of the usability testing (work package 2 – WP2.3) of the prototype, which was conducted with representative users at the university. The study reveals a range of issues uncovered by observing participants using the prototype to complete tasks. The findings outlined in the report provide a number of recommendations for changes to the prototype in order to improve the user’s experience.

In order to identify and fully explain the technology responsible for each issue in the report, supplementary blog posts will be published on the project website in stages, as and when developmental changes are made (follow the ux2 Twitter account for announcements). It is hoped that this supplementary development documentation will make the work more accessible to other digital library developers and the wider JISC community. Some of the main findings from the report are summarised below.

The study highlighted several positive aspects of the prototype:

  • Allowing users to narrow results using the faceted navigation links was useful.
  • Providing users with details of the item content including full text preview, video and presentation slides was informative.
  • Allowing users to bookmark items and add notes was considered a useful feature.
  • Overall the layout and design was clean, simple and useful.

These positive findings from the testing are reflected in the word cloud questionnaire participants were asked to complete:

UX2 word cloud

However there were some usability issues with the prototype:

  • It was not obvious when the system saved previously selected facets in the Scope, often misleading participants’ expectations.
  • The external ‘Other’ links were not relevant to participants and were often mistrusted or considered a distraction.
  • It was not clear when an item had a full-text preview feature.
  • Links to information resources were not prominent and were often missed by participants.
  • The format of text within the item details page made it difficult to read and consequently participants often ignored it.

There were also a few lessons learned from the user study which I thought would be useful to share:

  1. Recruiting participants via telephone does not necessarily guarantee attendance. Two participants did not show up for their slots after arranging the appointment by phone and sending an email confirmation. However, this could also have been affected by the time of year. It transpired that many students had coursework deadlines that same week, and the absent students did say they forgot because of a heavy workload.
  2. User generated tasks are not easy to replicate using prototypes. This was not unexpected but something which was tried anyway. As suspected, it was difficult to create a task which could generate results when using such a specialised and relatively small database. However, when it was successful it did return some useful findings.
  3. It’s difficult to facilitate usability tests and log details using Morae. Any usability practitioner will tell you that it’s important to concentrate on observing the participant and interacting with them, and to avoid breaking the flow by stopping to take detailed notes. I found it impossible to observe a participant, engage with what they were doing and log behaviour in Morae, so I would recommend you recruit a note-taker if this is important for your usability study.

Findings from day 2 of the UPA 2010 conference are detailed in the second part of my UPA blog.

Ethnography 101: Usability in Plein Air by Paul Bryan

Studying users in their natural environment is key to designing innovative, break-through web sites rather than incrementally improving existing designs. This session gives attendees a powerful tool for understanding their customers’ needs. Using the research process presented, attendees will plan research to support design of a mobile e-commerce application.

As our AquabrowserUX project contemplates an ethnographic study, this presentation seemed vital to better understanding the methods involved. Paul Bryan provided a very interesting insight into running such studies, explaining when to use ethnography and a typical project structure. The audience also got the chance to plan an ethnographic study using a hypothetical project.

Some basic information that I gathered from the talk is listed below:

What is ethnography?

  • It takes place in the field
  • It is observation
  • It uses interviews to clarify observations
  • It pays attention to context and artifacts
  • It utilises a coding system for field notes to help with analysis

Some examples of ethnographic studies include:

  • A 10-page diary study, with one page completed by a participant each day
  • An in-home study using observation, interviews and photo montages created by participants to provide perspectives on subjects
  • A department store observation including video capture

Ethnography should be used to bring insight into a set of behaviours and answer the research question in the most economical way. It should also be used to:

  • Identify fundamental experience factors
  • Innovate the mundane
  • Operationalize key concepts
  • Discover the unspeakable, things which participants aren’t able to articulate themselves
  • Understand cultural variations

A proposed structure of an ethnographic project would be as follows:

  1. Determine research questions or focus
  2. Determine location and context
  3. Determine data capture method (this is dependent on question 1)
  4. Design data capture instruments
  5. Recruit
  6. Obtain access to the field
  7. Set up tools and materials
  8. Conduct research, including note taking
  9. Reduce data to essential values
  10. Code the data
  11. Report findings and recommendations, including a highlights video where possible
  12. Determine follow-up research

As I suspected, ethnography is not for the faint-hearted (or light-pocketed) because it clearly takes a lot of time and people-power to conduct a thorough ethnographic study. It seems that as a result, only large companies (or possibly academic institutes) get the chance to do it, which is a shame because it is such an informative method. For example, all video footage recorded must be examined minute by minute and transcribed. My favourite quote of the session was in response to a question about the right number of participants required for a study. Naturally this depends heavily on the nature of the project. Paul summed it up by comparing it to love: “When you know you just know”. Such a commonly asked question in user research is often a difficult one to answer exactly, so I liked this honest answer.

When analysing the data collected, Paul suggested a few techniques. Using the transcribed footage, go through it to develop themes (typically 5-10). In the example of a clothes shopping study these might be fit, value, appeal, style, appropriateness etc. Creating a table of quotes and mapping them to coded themes helps to validate the themes. He also recommended that you focus on behaviours in ethnography, capture cases at opposite ends of the user spectrum, and always look for unseen behaviours.
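The quote-to-theme table Paul describes can be kept in a spreadsheet, but it is also easy to sketch programmatically. The snippet below is a minimal illustration only: the themes and quotes are hypothetical examples in the style of the clothes-shopping study, not data from any real project.

```python
from collections import Counter

# Hypothetical coded transcript fragments from a clothes-shopping study:
# each quote has been tagged with one of the themes developed during analysis.
coded_quotes = [
    ("It hugs in all the right places", "fit"),
    ("I'd never pay that much for a plain tee", "value"),
    ("The colour really caught my eye", "appeal"),
    ("It runs small, I had to size up", "fit"),
    ("Not something I could wear to the office", "appropriateness"),
]

def theme_counts(quotes):
    """Tally how many quotes support each theme.

    Themes with very few supporting quotes may need merging or rethinking,
    which is exactly the validation step the table of quotes provides.
    """
    return Counter(theme for _, theme in quotes)

for theme, n in theme_counts(coded_quotes).most_common():
    print(f"{theme:16} {n}")
```

Sorting by count surfaces the dominant themes first, which makes it easier to spot a theme that has too little evidence behind it to stand on its own.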

Designing Communities as Decision-Making Experiences by Tharon Howard and Wendy R. Howard

What can you do when designing an online community to maximize user experience? This presentation, based on two decades of managing successful online communities, will teach participants how to design sustainable online communities that attract and retain a devoted membership by providing them with “contexts for effective decision-making.”

This topic was interesting on a more personal level because it dealt with themes from my MSc dissertation on online customer communities. Tharon has recently published a book on the subject called ‘Design to Thrive’ which sounds really interesting. He and Wendy co-presented their knowledge of online communities, detailing why you would create one, the difference between a community and a social network and the different types of users in a community. Their culinary acronym ‘RIBS’ (Remuneration, Influence, Belonging and Significance) provided a heuristic framework to follow in creating a successful community.

They pointed out that the main difference between a social network and a community is the shared purpose among members. Normally a community is developed around a theme or subject whereas social networks are created as a platform for individuals to broadcast information of interest to them and not necessarily on one topic. An online community is a useful resource when you want to build one-to-one relationships, share information quickly and easily and create a seed-bed where collective action can grow. I think this is true but that developments in social networks such as groups and categorised information means that social networking sites are beginning to provide communities within their systems.

Back to the RIBS acronym: Tharon talked about remuneration as the first heuristic for community creators. A mantra which he provided is as follows:

The most important remuneration community managers have to offer is the experience of socially constructing meaning about topics and events users wish to understand.

It is important to reward members for giving back to the community, as this not only rewards those members but also ensures the continuation of the community through active participation; “It’s a two-way street“. Such rewards can include features that are ‘unlocked’ by active members and mentoring for new members (noobs). Tharon also states that influence in a community is often overlooked by managers. Members need to feel they can influence the direction of the community if they are to remain active participants. Providing exit surveys, an advisory council, a ‘report a problem’ link and rigorously enforcing published policies will all help to ensure influence is incorporated into an online community.

Belonging is apparently often overlooked as well. Including shared icons, symbols or rituals to represent a community allows members to bond through common values and goals. A story of origin, an initiation ritual, a levelling-up ceremony and symbols of rank all provide the sense of belonging which is important to a community member.

Significance is the building and maintenance of a community brand for those in the community. It’s a common characteristic of people to want to be part of an exclusive group, and exclusivity seems to increase the desire to join in many cases. By celebrating your community ‘celebrities’ and listing (often well-known) members in a visitor centre section of the community you can allude to its exclusivity. Making the community invite-only also helps to increase its significance.

Touchdown! Remote Usability Testing in the Enhancement of Online Fantasy Gaming by Ania Rodriguez and Kerin Smollen

This session presents a case study on how ESPN/Disney with the assistance of Key Lime Interactive improved the user experience and increased registrations of their online fantasy football and baseball gaming through the effective use of moderated and automated remote usability studies.

This topic was the first of another series of short (this time 40-minute) presentations. As before, the time limitation often impacted on the detail within each talk. Understandably, speakers struggled to get through everything within the time allocated and either had to rush through slides or cut short questions at the end. Unfortunately this happened in the talk by Ania Rodriguez and Kerin Smollen. Although it was an interesting case of how ESPN (Smollen) collaborated with Key Lime Interactive (Rodriguez) to conduct remote testing, it was not the type of remote testing I was hoping to learn more about. I already have some experience running an unmoderated test and was more interested to hear detail on moderated remote testing. However, I was encouraged to hear that UserZoom came out favourably as the software of choice for this remote study. I have been interested in using this software for a while and will hopefully get the chance to use it at some point in the future.

Multiple Facilitators in One Study: How to Establish Consistency by Laurie Kantner and Lori Anschuetz

In best practice for user research, a single researcher facilitates all study sessions to minimize variation. For larger studies, assigning one facilitator may miss an opportunity, such as catching select participants or delivering timely results. This presentation provides guidelines, with case study examples, for establishing consistency in multiple-facilitator studies.

Another short presentation which gave advice on how to ensure consistency when several facilitators work on a project. It may not seem like rocket science, but the best method used to capture information was a spreadsheet with various codes for observations. This document is shared and updated by each facilitator to ensure everything is accurately captured. It’s not a perfect system, and learning lessons and selecting facilitators carefully will help to reduce issues later, but it seemed to work well in this case. Where most usability professionals would balk at the idea of multiple facilitators, which is often considered bad practice, it is too often a necessity in time-constrained projects which may even be spread around the world. Indeed, Laurie and Lori suggest that multiple facilitators can bring benefits, including more than one perspective; as the old proverb goes – ‘two heads are better than one‘!
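The shared coding sheet idea can be enforced quite simply in software: agree the code list up front and reject anything outside it, so entries stay comparable across facilitators. Below is a minimal Python sketch of that idea; the code names, facilitator initials and observations are all hypothetical, and a shared spreadsheet is stood in for by an in-memory CSV buffer.

```python
import csv
import io

# Hypothetical observation codes agreed by the team before sessions begin,
# so every facilitator logs observations using the same vocabulary.
CODES = {
    "NAV": "navigation difficulty",
    "TERM": "terminology confusion",
    "ERR": "user error",
    "POS": "positive reaction",
}

def log_observation(writer, facilitator, participant, code, note):
    """Append one coded observation; reject codes outside the shared list."""
    if code not in CODES:
        raise ValueError(f"Unknown code {code!r}; agree new codes with the team first")
    writer.writerow([facilitator, participant, code, note])

# In practice this would be a shared spreadsheet; a StringIO stands in here.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["facilitator", "participant", "code", "note"])
log_observation(writer, "LK", "P1", "NAV", "Could not find the refine link")
log_observation(writer, "LA", "P2", "POS", "Liked the bookmark feature")
print(buffer.getvalue())
```

Rejecting unknown codes at entry time is the programmatic equivalent of the careful facilitator selection and lesson-learning mentioned above: it stops individual note-taking habits from silently diverging mid-study.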

Creating Richer Personas – Making the Mobile, International and Forward Thinking by Anthony Sampanes, Michele Snyder, Brent White and Lynn Rampoldi-Hnilo

Personas are a great way to get development teams in sync with a new space and their users. This presentation discusses solutions to extending personas to include novel types of information such as mobile behavior, cultural differences, and ways to promote forward thinking.


Examples of personas from presentation

This 40 minute presentation provided lots of information that was useful to me (and the projects) and that presented new ways of working with personas. However, with additional time it would have been great to go over the data collection methods in more detail as this is something we are currently undertaking in AquabrowserUX.

Traditionally, personas have been limited to desktop users. However, this is changing as doing things on the move is now possible with the aid of smartphones. The presenters indicated that they found little literature on cross-cultural or mobile personas, which was a shock. The internationalisation of business and the development of smartphones are not new, so I am surprised that more practitioners have not been striving to capture these elements in their own personas.

The team observed people to understand how they use smartphones to do new things, the key tasks conducted, the tools used, and their context and culture. Shadowing people over a day, surveys (including on-the-spot surveys), image diaries and interviews with industry people were all used to capture data. The outcome they discovered was that mobile users differ from other users in so many ways that they should be considered uniquely. Consequently, personas were created that focused on the mobile user, not what they did elsewhere (other personas were used for that purpose). The final personas included a variety of information and, importantly, images as well. Sections included ‘About Me’, ‘Work’, ‘Mobile Life’ and ‘My Day’.

In addition to mobile life, cultural differences were integrated throughout the persona. To incorporate a forward-thinking section called ‘Mobile Future’ researchers asked participants what they would want their phone to do in the future that it can’t currently do. This provided an opportunity for the personas to grow and not become outdated too quickly.

I hope the slides are available soon because I would love to read the personas in more detail. Outdated personas have always been a problem and were even discussed by delegates over lunch the day before. It is great to see how one organisation has tried to tackle this issue.

Usability of e-Government Web Forms from Around the World by Miriam Gerver

Government agencies worldwide are turning from paper forms to the Internet as a way for citizens to provide the government with information, a transition which has led to both successes and challenges. This presentation will highlight best practices for e-government web forms based on usability research in different countries.

Unfortunately, technical issues impacted to some extent on the final presentation of the day. Fortunately, the presenter had the foresight to prepare handouts of the slides, which came in very handy. The bulk of the presentation provided insights into good and bad practices of government web forms around the globe. One thing that characterises government forms is the legal requirement for certain information to be included. For example, a ‘Burden Statement’ must be provided according to US law; this includes information on the expected time to complete the form. Although this information is useful and should be on the page, its implementation in the form is not always ideal, as other delegates pointed out. Position and labelling mean that users may never find this information and consequently never be aware that it exists.

I was impressed that some forms are designed with a background colour which matches the paper version, as this helps to maintain consistency and avoid confusion. An issue raised in working with paper and digital forms is the potential problem of people using both simultaneously or copying work from a paper copy to the online version. By greying out irrelevant questions instead of hiding them, users can follow along with the corresponding paper questions, avoiding potential confusion. I was also surprised to hear that some government forms allow you to submit the form with errors. If the form is important, it makes sense that users are encouraged to submit it in any circumstances; however, users are also encouraged to fix errors before submitting where possible.

Below I have provided some insights and experiences from the presentations I attended on the first day of the UPA conference.

Bayerischer Hof Hotel

Bayerischer Hof Hotel, Munich - location of UPA2010

Opening Keynote: Technology in Cultural Practice by Rachel Hinman

In the keynote, Rachel shares her thoughts on the challenges and opportunities the current cultural watershed will present to our industry as well as the metamorphosis our field must undergo in order to create great experience across different cultures.


Opening Keynote presentation by Rachel Hinman

In the opening presentation Rachel recounted her experiences collecting data in countries such as Uganda and India and how her findings affect how we design for different cultures. One of the most interesting points she made was the discovery that people in Uganda don’t have a word for information. People there equate the term ‘information’ with news, which is the type of information they would be most interested in receiving through their phone. The key message from her presentation was that the design of technology for cultures other than our own should begin from their point of view; we should not impose our own cultural norms on them. For example, the metaphor of books and libraries is alien to Indians who do not see this icon in their culture on a daily basis. Conducting research similar to Rachel’s will help us design culturally appropriate metaphors which can be applied and understood.

Using Stories Effectively in User Experience Design by Whitney Quesenbery and Kevin Brooks

Stories are an effective way to collect, analyze and share qualitative information from user research, spark design imagination and help us create usable products. Come learn the basics of storytelling and leave having crafted a story that addresses a design problem. You’ve probably been telling stories all along – come learn ways to do it more effectively.

This was a great presentation to kick off the conference. Whitney and Kevin were very engaging and talked passionately about the subject. There was lots of hands-on contribution from the audience, with a couple of short tasks to undertake with your neighbour (which also provided a great way to meet someone new at the same time). The overriding message from this presentation was the importance of listening: asking a question such as ‘tell me about that‘ and then just SHUT UP and listen! I found this lesson particularly important as I know it’s a weakness of mine. I often have to stop myself from jumping in when a participant (or anyone) is talking, even if just to empathise or agree with them. The exercise in listening to your neighbour speak for a minute without saying a word was very difficult. It’s true that being a good listener is one of the most important skills you can learn as a usability professional.

Learning to listen and when to speak are key to obtaining the ‘juicy’ information from someone. Often those small throw-away comments are not noticed by the storyteller but if you know how to identify a fragment that can grow into a story you often reveal information which illuminates the data you have collected. Juicy information often surprises and contradicts common beliefs and is always clear, simple and most of all, compelling.

Usability professionals will often tell you that a participant feels the need to please in an interview or test and will often say things such as ‘yes that was easy to use’ even when you clearly saw them struggling. Whitney and Kevin recognise this common problem and provide a strategy to overcome it. Allow participants to feel that they have fulfilled their duty to the interviewer and then provide space at the end of the interview for final comments. Simply asking “Anything you might want to tell us about ...” allows participants to reveal their true feelings.

So once you have collected some stories, how do you apply them to your work? Whitney and Kevin suggest that stories add richness to personas (descriptions of user segments) and provide more than just data. For example, when communicating a persona’s needs and goals you can create a story around the data to give them a more realistic feel. Stories in personas can provide perspective, generate imagery which suggests emotional connections and give the persona a voice through the language used.

Stories can also be used to create scenarios for usability testing. Often tasks force the participant to do something they might not ordinarily do. If you can create a scenario and ask the participants to ‘finish the story‘, it’s often easier for them to imagine themselves in the situation and realistically attempt the task. My favourite example came from an audience member who described the problems she’d had when asking women to try to record a football game using a prototype UI (the prototype did not allow for any other task). The women would be very unlikely to record such a thing in normal circumstances and so struggled to attempt the task. When the facilitator changed the scenario by adding a story, the women were much happier to attempt it: ‘Your boyfriend is out and has asked you to record the football for him, can you do this?’

You can also collect stories during usability tests. The opening interview is often an opportunity to collect stories and use them to set up a task. It allows you to evaluate your tasks and check they match the stories and it also allows you to generate a new task from the story.

There was so much to learn during the presentation; things which might seem obvious but are very often overlooked were discussed. Whitney and Kevin have recently published a book on the same subject, which I bought as a direct result of attending their presentation. On first glance it looks full of insightful techniques and tips for incorporating storytelling more consciously into your research. I look forward to reading it!

Express Usability by Sarah Weise and Linna Manomaitis

For those with a strong foundation in usability. Learn to improve your website or application and educate your clients in as little as a week. Take away express approaches to traditional analysis and presentation techniques, and immediately apply them to your own projects.

This was the first of three short 30-minute presentations that took place in the afternoon. A good idea in theory, but often there was just not enough time to go into sufficient detail on specific subjects. In this presentation Sarah presented a technique to apply to limited projects to ensure you get the best out of the time and resources available. The ‘fixed price menu’ style of ‘data gathering activity’ (appetizers), ‘analysis activity’ (mains) and ‘deliverable styles’ (desserts) were used in various combinations on projects they had undertaken. The list of client needs was useful for identifying what the client is looking to get from the project, while the methods available ranged from heuristic evaluation and usability testing to focus groups and interviews. The unique approach to project management was quite fun and clearly effective; however, I was expecting more information on how methods had been adapted to be used in an ‘express’ way. Perhaps the limited time prohibited this, but I’m guessing it would have suited general practitioners more than experienced practitioners.

Agile UX: The Good, The Bad and the Potentially Ugly by Thyra Rauch

Agile development and UCD/UX processes can exist in harmony, even after a rocky start. This case study describes what I did as part of an Agile team. I will discuss our methods, share our success factors, things that were originally roadblocks, and concerns about future issues.

Thyra presented her own experiences of working within an Agile User Centred Design (UCD) framework very concisely, maximising her time by sticking closely to her presentation while also covering the basics of Agile and allowing sufficient time for questions and discussion at the end. Useful diagrams illustrated the different iterations involved in Agile UCD and the roles the designers, developers and user experience team play at various stages. She explained how Cycle 0 is used to plan and gather customer data, create profiles and recruit participants (with a parallel track for developers), while Cycle 1 tests the mock-ups, and this is repeated iteratively in Cycle 2 onwards. She points out that Agile includes a waterfall system while also iterating such processes frequently in a short space of time (every few weeks). Cycle 2 onwards is used to test three things within one session: previous designs (now coded), the current iteration of the design in a prototype, and future ideas as paper mock-ups.

She points out that good communication is essential to Agile UCD. Regular weekly meetings with the UX team, management, designers, developers etc. and daily 15-minute scrums discussing the project status and issues both ensure the success of user centred design, as they provide a process everyone can follow while also keeping everyone on the team on the same page. Thyra discourages having a team on more than one project at a time to prevent the work becoming diluted. She also warns of the dangers of conducting Agile UCD with teams distributed globally, as this makes it significantly harder to hold regular meetings when the time difference prohibits this.

Evaluating Touch Gesture Usability by Kevin Arthur

Multi-finger gestures on touchscreens and touchpads are becoming increasingly popular, but they don’t always work well for users. I’ll discuss the challenges of obtaining reliable measures of gesture usability and will present techniques for testing gestures, with examples from tests that evaluated multi-finger pinch, rotate, and swipe gestures on touchpads.

Another interesting presentation, which provided a method for evaluating the usability of touch gestures in technology. Some of the interesting take-aways included questions worth posing when evaluating touch gestures: do users understand them, and are they satisfying to use? A finding from Kevin’s research revealed that although the gestures perhaps looked intuitive, many users found them difficult to master. The slides from this presentation, including an outline of the test framework and questionnaires used, are available online.

Design for Happiness by Pieter Desmet

The ability to design products with a positive emotional impact is of great importance to the design research community and of practical relevance to the discipline of design. Emotion is a primary quality of human existence, and all of our relationships – those with inanimate objects as well as those with people – are enriched with and influenced by emotions. Not only do emotions have a considerable influence on purchase decisions, post-purchase satisfaction, and product attachment, but also on the general happiness of the people who own and use them. The emotions that we experience daily, including those we experience in response to the designed objects that surround us, have been shown to be main determinants of our general well-being. In his lecture, Desmet discusses the role of product design in emotional experiences, and proposes some opportunities to develop design strategies to conceptualise products that contribute to the happiness of their users.


Slide from 'Designing for Happiness' presentation

This presentation was always going to be interesting as it deals with such an elusive subject to measure, and it lived up to expectations. Emotional design is very elusive, and often the designer must trust their instincts. Although not easy to design for, all products evoke emotions, which means emotions are too important to ignore. Emotions can make products succeed or fail, so emotion must be part of the design process.

Pieter referred to ‘sleeping demons’ as concerns inherent within users which designers do not want to wake. Often designers spend too long concentrating on solving problems or removing negative emotions from a design. This bias in appraisal theory means the focus is often narrowed. New theories are being developed which focus on positive emotion. Broaden and Build Theory deals with thought-action patterns, broadening the design focus to discover and build on personal resources that help people flourish and be happy.

The 40% theory states that happiness originates from three areas: 50% from the temperament or genes you are born with, 10% from circumstances and 40% from conscious thought. This suggests that learning to be happier means learning how to think differently, e.g. positive thinking. So where does design fit in? Pieter succinctly stated that ‘you cannot make a sail trip without a sail boat‘. Products affect our happiness as sources, as resources to be used in obtaining goals and as sources of meaning: design for rich experiences (savour), design for engagement (find or identify and attain goals) or design for the impact of positive emotions on human-product interactions (fascination). This reiterates his point on the importance of designing for happiness as much as designing to solve problems and remove negative emotions.

He also points out that it is possible to enjoy things which incite fear, e.g. horror films, sports, rollercoasters etc. We can enjoy negative emotions as long as conditions such as a barrier are placed between the user and the ‘danger’. This should therefore also be taken into consideration when designing for happiness.

Overall, a very insightful first day and one which set my expectations high for day 2! If you attended UPA 2010 and have your own experiences of day 1 (perhaps you attended different presentations), please feel free to leave a comment below.
