Researching Usability

User Research and Persona Creation Part 1: Data Gathering Methods

Posted on: July 30, 2010

In a previous blog I evaluated the progress of the data gathering stage of persona creation for both Aquabrowser UX and UX2.0. Now that the data gathering has been completed and analysed, we have the beginnings of our personas. It therefore seemed a good time to reflect on the process and to document and review our methods. In the first of three blogs detailing our persona creation, I will talk about the data gathering methods and reflect on their success.

Originally the plan had been to create personas by conducting qualitative research and validating the segments with quantitative data. Unfortunately we underestimated the time and resources required to conduct the qualitative research and as such were unable to validate the personas quantitatively. Although that extra step is valuable when you want to explore multiple segmentation models and back up the qualitative findings with quantitative data, personas created without it are still valid and extremely useful. As this was the first time the team had conducted persona data gathering, it took longer than anticipated; coupled with the restrictions on time and budget for this project, the additional validation was always more of an ambition than a certainty. I’ve stepped through the process we used below so that others can adopt it if needed. It is a good template for conducting all types of interviews, not just those for persona creation.

1. Designing the interviews

When designing the interview questions the team first set about defining the project goals. This was used as a basis for the interview questions and also helped to ensure that the questions covered all aspects of the project goals.

Goal 1: In relation to University of Edinburgh digital library services, and AquaBrowser, identify and understand;

  • User general profiles (demographic, academic background, previous experiences)
  • User behaviours (e.g. information seeking) / use patterns
  • User attitudes (inc. recommendations)
  • User goals (functional, fit for purpose)
  • Data, content, resource requirements

To keep the interview semi-structured and more conversational, the questions created were used primarily as prompts for the interviewer and to ensure that the interviewees provided all the information being sought. General questions were posed as a springboard for more open discussion. Each question represented a section of discussion, with approximately six questions in total, and each question in turn had a series of prompts. The opening questions are detailed below:

  1. Could you tell me a bit about yourself…?
  2. Thinking now about how you work and interact with services online, what kind of activities do you typically do when you sit down at the computer?
  3. I want to find out more about how you use different library services, can you tell me what online library services you have used?
  4. We want to know how you go about finding information…What strategy do you have for finding information?
  5. Finally, we’d like to ask you about your own opinions of the library services… a. What library or search services are you satisfied with and why? b. Why do you choose <mentioned services> over other services?

Interviewees were also given the opportunity at the end of the interview to add anything they felt was valuable to the research or which they just wanted to get off their chest. Several prompt questions were modified or added for the librarian interview script; otherwise the scripts were very similar.

When the interview was adapted into a script for the interviewer, introductory and wrap-up sections were added to explain the purpose of the interview and put the interviewees at ease. These sections also provided prompts to the interviewer to ensure permission was obtained beforehand and that the participant was paid at the end.

2. Piloting the interview

The script was piloted on a colleague not involved in the project a few days before the interviews began. This provided an opportunity to tweak the wording of some of the questions so they were clearer, time the interview to ensure it could be conducted in approximately 45 minutes and also help the team to become more familiar with the questions and prompts. Necessary changes were consequently made to the script to be used for the first ‘real’ interview.

3. Recruitment – Creating a screener

In order to recruit a range of users at the university, a screener was devised. This would provide information on each participant’s use patterns and some basic demographic details. It also allowed us to find out the availability of each participant as the interviews were intended to be conducted over a four-week period in June and July. It also made it easier to collect contact details from users who had already agreed to take part. As with most user research where incentives are involved, there is always the danger that participants will be motivated by the reward of payment and consequently will say whatever they need to say in order to be selected. As we were looking for users who were familiar with Aquabrowser and Voyager (The ‘Classic’ catalogue), we disguised these questions among other library services. This prevented the purpose of the research from being exposed to the participant. The screener questions we used are detailed below:

Screener questions:

  1. Please confirm whether you are willing to take part in a 45-minute interview. Note: There will be a £15 book voucher provided for taking part in an interview.
  2. In addition to interviews we are also recruiting for website testing sessions (45-60 min). Would you be interested in taking part?
    Note: There will be a £15 book voucher provided for taking part in a website testing session.
  3. What do you do at the university? Undergrad: 1st/2nd/3rd/4th/Post grad/PhD/Library staff/Teaching staff/Research staff/other.
  4. What is your department or program of study?
  5. Which of the following online services do you use at the University and how many hours a week do you spend on each? Classic catalogue / Aquabrowser catalogue / Searcher / E-Journal search / PubMed / MyEd / Web of Knowledge / Science / Science Direct.
  6. How much time per week do you spend in any of Edinburgh University libraries? None/Less than 1 hour a week/1-3 hours a week/4-10 hours a week/More than 10 hours a week.
  7. Please state your preferred mode of contact to arrange an interview date/time.
  8. Please leave your name.
  9. Please leave relevant contact details: Email and/or telephone number.

Thank you very much for your time.  If you are selected to participate in the current study, we’ll be in touch to see when would be the best time for your session.
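To make the selection criteria concrete, screener responses like these can be shortlisted programmatically. The sketch below is purely illustrative – the field names, data and thresholds are my assumptions, not the project's actual tooling – but it shows the shape of the filtering described here: prioritise respondents familiar with the target catalogues, keep a few non-users for comparison, and drop anyone unavailable during the interview window.

```python
# Hypothetical shortlisting of screener respondents.
# Field names ("role", "hours", "available") are assumed, not taken
# from the actual screener export used in this project.

respondents = [
    {"name": "A", "role": "Undergrad", "hours": {"Aquabrowser": 3, "Classic": 2}, "available": True},
    {"name": "B", "role": "PhD", "hours": {"Aquabrowser": 0, "Classic": 0}, "available": True},
    {"name": "C", "role": "Library staff", "hours": {"Aquabrowser": 5, "Classic": 1}, "available": False},
]

def catalogue_hours(r):
    """Weekly hours spent on the two catalogues of interest."""
    return r["hours"].get("Aquabrowser", 0) + r["hours"].get("Classic", 0)

# Exclude anyone unavailable during the June/July interview window.
available = [r for r in respondents if r["available"]]

# Familiar users are the priority; some non-users are kept for comparison.
familiar = [r for r in available if catalogue_hours(r) > 0]
comparison = [r for r in available if catalogue_hours(r) == 0]

shortlist = familiar + comparison[:1]
print([r["name"] for r in shortlist])  # → ['A', 'B']
```

In practice this kind of pass only produces a candidate pool; the final selection still balanced roles and demographics by hand, as described in the recruitment section below.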

4. Recruitment – Strategy

A link to the screener was publicised through a variety of streams. An announcement was created and placed in the MyEd portal which every person within the university has access to (staff and students). In addition to this, an event was created which was also visible within the events section of MyEd. Several email invitations were sent via mailing lists requesting participation. These lists included the School of Physics, Divinity and Information Services staff.

To encourage students and staff to participate, an incentive was provided: a £15 book voucher was promised to those who agreed to take part in an interview. The screener was launched on 21st May and ran until the interviews were completed on 15th July. Interviews were scheduled to take place over four weeks beginning on 17th June. On average six interviews were carried out each week, spread over two separate days. These days varied, but often fell on Tuesdays, Thursdays and Fridays, depending on the availability of team members to carry out the interviews. Each participant was given the opportunity to name their preferred location for the interview; interviews conducted in the user’s own environment are more likely to put the participant at ease and consequently produce better data. However, every participant ended up opting to meet at a mutually convenient location – the main library. This public venue is familiar to all participants and centrally located, making it less intimidating and easy to find. It also enabled more interviews to be conducted in a short period of the day, as travelling between locations was not required.

Participants were recruited based on a number of factors: their position in the university (student, staff, etc.) and their familiarity (or in some cases lack of familiarity) with library services, especially Aquabrowser and Voyager (Classic catalogue). Individuals who spent a reasonable amount of time using these services were of interest, but a number of individuals who did not were also recruited to provide comparisons. Availability was also an important factor, and anyone who was not available in June and/or July was excluded.

Although the screener sped up the recruitment process, there were still a number of individuals on the list who did not respond to follow-up email requests to participate. This is always frustrating when people have apparently registered their interest by completing the screener. Despite this we managed to recruit 19 participants from a list of 82 respondents – approximately a 23% response rate. Unfortunately, of these 19 individuals, two dropped out at the last minute: one did not show up and another cancelled due to ill health. As these cancellations occurred on the last day of interviews and did not represent an under-represented demographic group, the decision was taken not to recruit replacements and to conclude the data gathering stage with 17 participants.
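As a quick sanity check on the figures above, the arithmetic works out as follows (trivial, but shown to make the numbers concrete):

```python
# Recruitment numbers as reported in the post.
screener_respondents = 82   # completed the screener
recruited = 19              # agreed to and were scheduled for interviews
dropped_out = 2             # one no-show, one cancellation

response_rate = recruited / screener_respondents
completed = recruited - dropped_out

print(f"{response_rate:.0%}")  # → 23%
print(completed)               # → 17
```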

Unfortunately some groups were under-represented. The biggest concern was the limited number of staff, and in particular lecturers, in the study. This ultimately meant that this group could not be adequately analysed. Time limitations made it difficult to undertake additional strategies to target these individuals. The data gathered could only provide personas representing those interviewed, and consequently a persona for faculty staff was not possible. Any future persona development work should ensure that a variety of lecturers and researchers are interviewed as part of the process.

5. Conducting the interviews

Before the interviews began, several preparations had to be made: booking audio recording equipment, sourcing a digital camera for images and creating consent forms for both audio and visual documentation. Two team members were present for every interview; one would take notes while the other interviewed the participant. These roles were swapped for each new interview, giving both team members the chance to be both interviewer and note-taker. After discussing how best to take notes, it was decided that having a printed template for the note-taker to complete during each interview would be a good strategy. This would keep notes in context as much as possible, make the note-taking process efficient and reduce the danger of important information being lost. The note-taker would also record a time stamp each time something ambiguous or important was said, so that it could be clarified later by listening to the recordings.

After each interview the team transferred the notes onto the project Wiki. Doing so allowed a post-interview discussion to take place where both members reflected on their understanding of the interviewees comments. It also provided the opportunity to review the interview and discuss improvements or changes that could be made to the next one. This was particularly useful for the first few interviews.

Having two team members present for each interview was time-consuming but provided many benefits. It gave the interviewer a chance to consult with the note-taker before concluding the interview: often the note-taker had spotted something the interviewer overlooked, and this time at the end ensured that any such comments could be addressed and any missed questions or prompts asked. It is also beneficial for all team members to have been present when it comes to analysing the data, as individuals do not have to rely heavily on someone else’s notes to become familiar with a participant. This is particularly important when it is not possible to have transcripts made of each interview. Finally, having a note-taker removes a burden from the interviewer, who does not have to keep pausing the interview to write down what has been said. As these interviews were intended to be semi-structured and have a discursive feel, having a note-taker was crucial to achieving this.

Conclusion

Overall the interviews were quite successful. However, in future more interviews with staff should be conducted in addition to a web survey so that the segmentation can be validated. Full access to the documents created during the process and the resources consulted throughout can be found on the project Wiki.

In the second part of the series, guest blogger Liza Zambolgou will discuss the segmentation process and how we analysed all of the data collected from the interviews.


10 Responses to "User Research and Persona Creation Part 1: Data Gathering Methods"

I appreciate the transparency of your work…your willingness to share your process. My one comment is that when I can’t do contextual inquiry but can interview, I like to focus on asking people to describe their most recent relevant activity rather than asking them to describe their usual practice. The former tends to be more grounded in actuality whereas the latter invites people to engage in wishful thinking. So instead of asking people, “How do you go about finding information?”, I ask, “What’s the last time you had to find an article?”. Then I ask them to walk me through the experience as they remember it, including why they were looking for it. What I find is that the remembered experience is much messier and informative than the idealized description of imagined typical behavior. Of course if they haven’t done a particular activity fairly recently, then the memory tends to be too clouded to be very useful. One approach I’ve taken is to interview people on their way out of the library, asking them about what they’ve just done.

Again, thanks for posting this. I look forward to seeing further postings, and especially the results of your work!

Hi Mark,

Thanks for your comment and the point you make. I agree that providing context is key to getting good information from a participant. I should have added in my post that we often did ask participants to describe what they did the last time they were in front of their computer. Participants often used their last experience as their point of reference without too much prompting, although I agree that re-wording the question would ensure this happens every time.

We were lucky that we had a PC set up in the interview space which often acted as a useful prop when discussing the user’s experience. Participants would show us what they tried to do last time in order to replicate the results for us. This provided us with lots of useful data; however, we had to be very careful that the interview didn’t turn into a usability test.

We did do some contextual enquiry before we started the interviews – we grabbed people leaving the library or observed them casually while they were standing at the kiosks. Although we got lots of good data from this technique we ran into a number of problems which led us to take the decision to conduct interviews:

1. Often participants were unwilling to spend more than 10 minutes answering questions and did not give very detailed answers.
2. We conducted the contextual enquiry during exams which made it more difficult to approach people or to target a variety of people – most individuals were undergraduate students studying for exams. If we had more time we would have liked to conduct another round of contextual enquiry at the start of term.
3. Very few students we met in the library knew of or had used Aquabrowser (our principal subject of research). This was due to internal issues but meant that we were unable to find participants ad hoc who could answer questions on it.

We are finalising the personas this week so check back soon to see them! 🙂

Sounds great! I look forward to the personas.

Hi Lorraine 🙂
wowzer this is a really thorough post! lots of good tips for the other UX projects to learn from which is great. I’m interested in how much user recognition there was for the different library services … did users who weren’t heavy users of some of the services recognise the names and understand what you were referring to. I have an (un-tested) theory that users don’t take much notice of what a service is called and are more focused on the functionality of the service … in other words they could be using a particular service but not recognise the name when they’re asked about it – does that ring true with your findings?

By the way I think there could be a typo in the response rate above … It says 4% but I make it more like a 23% response rate … maths skills are not my strong point though so there is a good chance that I’m wrong 🙂

Cheers, Helen

Hi Helen thanks for your comment.

Your un-tested theory has some truth – often we found that participants didn’t know the names of services (particularly internal services). In their screener responses they would sometimes answer that they didn’t use Aquabrowser, for instance, and then during the interview reveal they had tried it but didn’t know it by name. Others knew Voyager (Classic catalogue) only as the ‘library catalogue’ and were unaware of any other services or alternatives. Unfortunately this is not the focus of our research, but it does present a case for a ‘one-stop shop’ type service for some users. Something worth closer examination.

You are right about the numbers error and are very kind to suggest it as a typo – admittedly it is I who has the poor math skills. 🙂 Thank you for the clarification, I am surprised I hadn’t noticed it!

