Researching Usability


Last Thursday I attended a webinar organised by AmbITion called Tracking Impact. It was presented by David Sim from Open Brolly and 4TM, who discussed ways to track your organisation’s activities online. The webinar was streamed live to almost 50 people at one point, but I was lucky enough to attend in person as it took place close to the office. The full webinar is available on the AmbITion website, as are the presentation slides. So instead of a full report on the presentation, I have listed some of the points from the event that were of interest to me and the project.

Research conducted suggests Facebook is a good tool for asking short questions

Asking short questions is an effective way to generate a dialogue between an organisation and its patrons, or between users themselves. Questions which are easy for people to answer, such as asking for favourite recipes or opinions on a book or film, often get the biggest response. Not only does this generate interest in the organisation, it also provides valuable information about your users, something which can otherwise cost time and money to obtain. In addition, the analytical tools available on Facebook provide more detailed profile information about your users than almost anywhere else, and this makes for valuable data. Although I agree with these findings, it would have been nice to know who conducted the research so I could read it for myself.

Retweeting is a good gauge of influence and expertise

This may seem obvious to some but is definitely worth pointing out. If an individual or organisation is using Twitter and their content is retweeted by others, this demonstrates authority on a subject and the value others place in the information provided. There is something powerful about the ‘collective wisdom’ or ‘collective intelligence’ demonstrated by retweets, which is characteristic of Web 2.0 (Högg et al. 2006). Retweeting behaviour goes some way towards measuring impact and influence within social networks and is therefore one useful tool in an otherwise difficult-to-measure environment.
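
If you wanted to turn this into a number, a crude influence signal could be as simple as retweets per tweet. Here is a minimal sketch in Python (the data is invented, and this is my own illustration rather than anything from the webinar):

    # Crude influence signal: average retweets per tweet (invented data).
    tweets = [
        {"text": "New inspection report published", "retweets": 12},
        {"text": "Heading to the office",           "retweets": 0},
        {"text": "List of free UX ebooks",          "retweets": 7},
    ]

    retweet_rate = sum(t["retweets"] for t in tweets) / len(tweets)
    print(f"Average retweets per tweet: {retweet_rate:.1f}")

A real measure would of course need to account for audience size and topic, but even this simple ratio separates accounts whose content spreads from those whose content does not.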

To search for an exact phrase, use inverted commas

OK, so for any librarians reading, this will seem like a no-brainer, but I have to admit I never considered it until it was pointed out. This tip was provided in relation to using Google Alerts to monitor an organisation or brand, but it applies to any type of search. Keywords are often common words and can return a lot of irrelevant information; searching on an exact phrase in inverted commas reduces the amount of unwanted information getting through. For example, a search for the exact phrase “digital library usability” returns far fewer, more relevant results than the same three words entered separately. I have now amended my own Google Alerts to get more accurate results and have also used the minus (-) operator to filter out irrelevant results for the UX2 project tag, as it turns out ‘UX2’ is also a piece of sound equipment!

Social media will become more useful

This was one of David’s predictions for the future during the Q&A which followed the presentation. I have to agree that the semantic web will make it much easier for information to be shared and accessed at the moment a user needs it. It’s a shame there wasn’t more time for discussion on this subject as it’s something I am keen to explore in more detail.


A few developments to the project this week mean that we have some good news to announce:

After revisions to our proposal, we got confirmation this week that our JISC Enhancing LMS: AquaBrowser UX study has been accepted! If you have been following the blog and the usability inspection report which evaluated five digital libraries, you will know that Edinburgh University’s AquaBrowser was one of the DLs we investigated. Ideas that originally spun out of my blog post led to a full-blown proposal to evaluate the system, which will now be conducted alongside the UX2 project. It will greatly enhance the UX2 project, providing the opportunity to conduct extensive user research, including personas and usability testing, which can be compared against the heuristic inspection results. More details will be announced on the UX2 project website once the project kicks off next week, so stay tuned for more information.

Included in the AquaBrowser project is the opportunity to attend the UPA 2010 Conference in Munich this May. Attending will make it possible to learn more about personas and measuring the effectiveness of Web 2.0, among many other subjects. I’m excited to be going, so please get in touch if you will also be there and would like to meet up. I will of course blog about the conference while I’m there.

On the subject of the Usability Professionals’ Association, the UX2 team will be talking about the project at the next event of the Scottish chapter. The talk takes place next week, on 20th April at 6:30pm, and during the presentation we hope to spread the word about our work while also discussing some of the inspection report findings. We will also cover the development work taking place, with a demonstration of the prototype. If you live in or around Edinburgh and would like to attend, please visit the SUPA website for more information. If you can’t make it but are still interested, we hope to put the slides from the presentation online afterwards.

Information System Success Model

The next theoretical model I look at this week is the Information System Success Model, first developed by William H. DeLone and Ephraim R. McLean in 1992. As the name suggests, the framework was designed to provide a comprehensive way of measuring the success of an information system. The premise is that “system quality” measures technical success; “information quality” measures semantic success; and “use”, “user satisfaction”, “individual impact” and “organisational impact” measure effectiveness success. This was streamlined into a revised version in 2003 which highlights the three main components of an information service: “information” (content), “system” and “service” (see diagrams below). Each of these factors has an impact on the user: their intention to use (discussed in the TAM framework as acceptance of a system) and their actual usage of the system. These factors in turn influence user satisfaction, and together they provide an indication of the ultimate impact, or net benefits, of the system for the user, group of users, organisation or industry. The net benefits can be scaled, so the researcher can decide the context in which they are measured, keeping the model useful in any situation.
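
To make the structure concrete, here is a minimal sketch (my own illustration, not code from DeLone and McLean) representing the revised model as a directed graph of constructs, with an edge for each ‘influences’ relationship described above:

    # A minimal sketch of the revised (2003) DeLone & McLean model as a
    # directed graph. The construct names follow the model; the
    # representation itself is just my illustration.
    REVISED_SUCCESS_MODEL = {
        "information quality": ["intention to use", "use", "user satisfaction"],
        "system quality":      ["intention to use", "use", "user satisfaction"],
        "service quality":     ["intention to use", "use", "user satisfaction"],
        "intention to use":    ["use"],
        "use":                 ["user satisfaction", "net benefits"],
        "user satisfaction":   ["use", "net benefits"],
        "net benefits":        ["intention to use", "user satisfaction"],  # feedback loop
    }

    def downstream(construct, model=REVISED_SUCCESS_MODEL, seen=None):
        """Return every construct reachable from the given one."""
        seen = set() if seen is None else seen
        for nxt in model.get(construct, []):
            if nxt not in seen:
                seen.add(nxt)
                downstream(nxt, model, seen)
        return seen

    # Everything that system quality ultimately influences:
    print(downstream("system quality"))

The feedback loop from net benefits back to intention and satisfaction is what makes the model suitable for evaluating a system over time rather than at a single point.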

There are a number of similarities between the Success Model and the other models examined in this blog, including ITF and TAM. In addition to the parallels between the Success Model’s ‘intention to use’ and ‘use’ and TAM’s acceptance constructs, there are overlaps with the Interactive Triptych Framework. Examining the system and content individually as a means of understanding the impact on the user’s behaviour (intentions and usage) mirrors the ITF’s treatment of the usability and usefulness of the system and content to the user. In the same vein, usefulness and usability are also paramount when evaluating user acceptance in the Technology Acceptance Model. In this respect all three frameworks are similar. Where the revised Success Model differs is in its application of these measurements. Where TAM evaluates whether a system will be accepted by users, the Success Model can generate a list of measurable benefits which can be used to gauge a system’s success. This provides the opportunity to evaluate success over time, as users become more familiar with a system.

DeLone and McLean believe that the rapidly changing environment of information systems does not require an entirely new set of measures. They recommend identifying success measures which already exist and have been validated through previous application, enhancing and modifying them where necessary; new, untested measures should be adopted only as a last resort. ITF and TAM have demonstrated enough similarity in their approach to rule them out as entirely new. While TAM has received extensive testing in previous research, ITF is still relatively young, so adopting the ITF model for the UX2.0 project will hopefully further the research in this area.

Girl Geek Dinners: Edinburgh

This week I attended the 3rd Girl Geek Dinner in Edinburgh, hosted by the Informatics Forum at Edinburgh University. Girl Geeks is for women (and men!) interested in technology, creativity and computing. The speakers, Emma McGrattan and Lesley Eccles, provided entertaining, candid and very interesting talks on their own experiences of working in the technology sector. I attended the first dinner in Edinburgh last year and have noticed how successful the events have become thanks to the wonderful work done by the organisers. They attract a real mixture of professionals and students with a variety of interests, and the passion for technology that everyone brings always leaves me with real optimism and inspiration for the future. Long may these events continue.

Free Digital UX Books

For those who missed it or don’t follow me on Twitter, I came across a useful list of free user experience ebooks compiled by Simon Whatley (link provided via @BogieZero). I personally recommend Search User Interfaces by Marti A. Hearst. If you know of any other free ebooks, please feel free to leave a link here or on Simon’s blog.

Power, Perception and Performance (P3)

As part of the ongoing literature review I’ve been researching some of the theoretical models created or adapted to evaluate information systems. Over the last couple of weeks I’ve been blogging about the Technology Acceptance Model (TAM), which researchers have used to show its effectiveness at determining user acceptance of specific systems. The paper by Andrew Dillon and Michael Morris, Power, Perception and Performance: From Usability Engineering to Technology Acceptance with the P3 Model of User Response (1999), reveals limitations of the framework from a usability engineering perspective. It is not clear how well TAM predicts usage when testing prototypes, as research using TAM to date has involved the testing of complete systems. If the functionality is limited or incomplete, how adequately can participants rate its usefulness? They are also less likely to be able to rate the system’s ease of use if the interface features have not all been designed yet.

The data collection method is also critiqued because it relies on self-ratings from participants. Studies have shown that users’ ratings change with repeated exposure to a system over time, and that they may shift independently of the usability of the interface. This also relates to last week’s blog, which suggested that what users say and what they do are not always the same. Self-ratings provide quantitative feedback from users, but ideally this data should be gathered alongside observation conducted at regular intervals, so that any growth in users’ self-efficacy with the system is reflected in the results.

The issues raised certainly provide a strong argument against using TAM if you are a designer looking for issues to fix. TAM will tell you whether a system is likely to be accepted by users but may not provide insight into why. It is more beneficial to IS professionals or managers who want to know if a system is likely to be used, for example when considering the procurement of a new IS.

The P3 model developed by Dillon and Morris uses three aspects (power, perception and performance) to assess a user’s ability to use a system. A system’s power indicates its potential to serve the user’s tasks, while perception and performance measure the user’s behavioural reactions. Dillon and Morris believe that the P3 model predicts the capability to use a system through effectiveness and efficiency, while TAM reveals the perception of the system. These different constructs make them independent entities which should not be directly compared: “The P3 model is an attempt at providing a unified model of use that supports both the process of design and clarifies the relationship between usability and acceptability.”
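
As a rough sketch of how the three constructs might be recorded side by side (my own framing, not Dillon and Morris’s actual instrument):

    from dataclasses import dataclass

    @dataclass
    class P3Assessment:
        """One user's assessment of a system under the P3 model (illustrative)."""
        power: float        # fit between the system's functionality and the user's tasks
        perception: float   # the user's subjective rating, as TAM-style measures capture
        performance: float  # observed effectiveness and efficiency on real tasks

    # Invented example: a capable system that users rate poorly, the kind of
    # mismatch that perception-only measures such as TAM would miss.
    assessment = P3Assessment(power=0.9, perception=0.4, performance=0.8)
    print(assessment)

Keeping the three values separate, rather than collapsing them into one score, is what lets the model support both design work and acceptance prediction.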

Useful program: NutshellMail

I was alerted to this wonderfully simple tool by Mike Coulter during his AmbITion presentation, Listening Online. Trying it out is the simplest thing and takes less than a minute to set up. The website describes the program as follows:

NutshellMail takes copies of all your latest updates in your social networking and email accounts and places them in a snapshot email.

It’s a great way to manage multiple accounts and could be useful for those of you who either can’t access your social media accounts throughout the day or have so many people in your network that you find it difficult to monitor your feeds effectively. Last week I blogged about the limited usefulness of Twitter groups because of the way they are accessed. Well, I might be eating my words now, because NutshellMail includes the most recent results from your groups in each email, along with any other accounts you choose to connect, including LinkedIn, Facebook and MySpace. You can also schedule the emails to arrive at a time that best suits you, making them less likely to get lost among all the emails that await you every morning! Although another piece of mail in your inbox might not sound like the ideal solution for some people, I’m willing to give it a try to see if it makes life a little easier.

Remote Research by Nate Bolt and Tony Tulathimutte

I was alerted to a competition this week in which UX Booth were giving away three copies of the book Remote Research. As I’ve conducted some remote studies myself, this was a topic that interested me. I thought I would try my luck and, lo and behold, I actually won a copy, which I have already received! Books by the publisher, Rosenfeld Media, are always informative; I already own Web Form Design by Luke Wroblewski and Card Sorting by Donna Spencer. Looking through the contents, it appears this book continues the trend. Most notable is the chapter entitled ‘The Challenges of Remote Testing’. The debate of remote testing versus direct testing has been ongoing for a while and looks set to continue. This chapter discusses some of the possible pitfalls, which will hopefully help readers make informed decisions about how they conduct user research and select the best tools to meet their needs. I look forward to reading it; the simple design of Rosenfeld books makes them quick and easy to digest. I hope to write my own review here once I’ve finished.

Heuristic report

This week the heuristic inspection report was published and is available to read; feedback is very welcome. The document is available in Word or PDF format from the NeSC digital library: http://bit.ly/ux2inspectionreport. It is a sizeable document, so thanks in advance for taking the time to read it! 🙂

Not what you know, nor who you know, but who you know already

This is a research paper written in collaboration between myself, Hazel Hall and Gunilla Widén-Wulff. The research was undertaken when I first graduated from my Masters in 2007, and this week I received the good news that it will be published in Libri: International Journal of Libraries and Information Services at some point this year. The paper examines online information sharing behaviours through the lens of social exchange theory. My contribution was the investigation into the commenting behaviour of undergraduate students at Edinburgh’s Napier University as part of their coursework. I’m very excited by this news as it is only my second publication. I look forward to seeing it in print and will provide details here if it becomes available online.

TAM part 2: revised acceptance model by Bernadette Szajna

Another paper I read this week was ‘Empirical Evaluation of the Revised Technology Acceptance Model’ by Bernadette Szajna (1996). In this paper Szajna uses the revised Technology Acceptance Model (TAM) from Davis et al. (1989) to measure user acceptance of an electronic mail system in a longitudinal study over a 15-week period. By collecting data from participants at different points in the study, she was able to show that self-reported usage differed from actual usage and may therefore not be appropriate as a surrogate measure. This supports what those who run usability tests have been saying for a while: what users say and what they do are seldom the same. In user research terms, this means that observing what users do during their interaction with a system is as important as what they say about their experience.
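
A toy illustration of the kind of check Szajna’s longitudinal design makes possible, comparing what participants said against what the system logged (the numbers are invented, and her actual analysis was statistical rather than this simple):

    # Toy comparison of self-reported vs. logged weekly email usage.
    self_reported_hours = [5, 3, 8, 2, 6]   # what participants said
    logged_hours        = [2, 3, 4, 1, 2]   # what the system recorded

    gap = sum(s - l for s, l in zip(self_reported_hours, logged_hours)) / len(logged_hours)
    print(f"Average over-report: {gap:.1f} hours per participant")

Even this crude difference makes the point: if the two columns disagree, the self-report cannot stand in for the behavioural measure.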

In addition, the paper revealed that “unless users perceive an IS as being useful at first, its ease of use has no effect on the formation of intention”. This struck a chord with me because, as a usability professional, I often assume that ease of use is a barrier to the usefulness of a system; if users do not know how to manipulate the interface, they are unable to discover the (possibly useful) information below the surface. Then, when I was considering the usefulness of Twitter groups, I realised they follow the same pattern. Groups are a recent addition to Twitter which allow those you are following to be categorised into self-named groups; their most obvious application is as a means for users to separate their professional connections from personal ones. In theory it is a good idea, and one which I thought I might use, as separating out different networks would certainly make them easier to monitor. I can’t imagine it being too difficult to set up a group if I wished, but the problem is that I never considered it useful for me to do so, and consequently I never did (note: I created a private group today to test my theory). The reason in this case is that I rarely use Twitter’s website to monitor or communicate with those I’m following; client applications such as TweetDeck can do this for me. I’m sure there are a few people out there who have created groups and view them regularly, but could these people be in the minority? I’d be interested to test my theory, so any comments on your own Twitter group behaviour are welcome.

My conclusion is that (for me) the limited usefulness of the groups tool was a greater barrier to use than the ease of creating a group, supporting Szajna’s findings. This illustrates how important usefulness is to the user acceptance of technology, and it is therefore something that should be evaluated in every system to ensure success.

Mendeley Webinars

Lastly, the Mendeley directors are hosting webinars which will provide an introduction to its features, including inserting citations and using the collaborative tools. The webinars will be held on Tuesday, February 23, 2010, 5:00 PM – 6:00 PM GMT and Wednesday, February 24, 2010, 9:00 AM – 10:00 AM GMT. I have signed up for the Wednesday webinar and look forward to learning more. So far I’ve managed to add items to my library and connect with others online, but I don’t feel I have fully exploited its features and am having difficulties amending my bibliography in Word. Hopefully the webinar will provide help and advice.

You might have noticed the different title for this week’s round-up. The reason for the change is to make each week’s title a bit more meaningful to readers. I also suspect that navigating old posts will be easier if the titles allude to the content rather than forcing people to remember the date they were written. It’s an experiment for now and I might tweak it in the future, so feedback is always welcome.

This week the team have been making some final edits to the inspection report. Although the content was completed last week, a few minor changes have been made to orientate readers through the report, provide better context and tweak the layout. It is expected to be finalised next week (promise!), so I will post download details when it is available.

Measuring the user’s experience: pleasure and satisfaction

Something I have been reading about recently is the idea of measuring the playfulness and pleasure of digital libraries. In a short paper, ‘Evaluating the User’s Experience with Digital Libraries’, Toms, Dufour and Hesemeier devise a method of assessing the entertainment value of digital libraries by adapting an e-commerce experiential value scale. It struck me reading this paper that there is little research on this aspect of evaluation. As with the ITF framework, many evaluation models focus on the usability, usefulness and performance of a digital library. However, there appears to be scope for libraries to be more than just a means of finding, acquiring and using information (Toms et al.). This becomes important as new features and services are added to digital libraries. The heuristic inspection that UX2 carried out provides evidence that digital libraries are already doing this: bringing people together through social media and using new UI patterns that provide a more engaging experience than traditional search systems. Good examples include the ‘Stuff’ feature provided by Scran and the timeline and map used by the World Digital Library.

Satisfaction is another term used when evaluating digital libraries. Myke Gluck’s paper ‘Exploring the Relationship Between User Satisfaction and Relevance in Information Systems’ (1995) revealed a strong relationship between user satisfaction, the relevance of retrieved items and the process of retrieving them. This supports the idea that there is a connection between the performance of a system and its usefulness to the user. It also reveals that the usability of the UI affects satisfaction, supporting the need to evaluate an information system holistically. As usefulness and usability are both determinants of the user acceptance of digital libraries (as discussed in last week’s blog), satisfaction is an influential factor in the success of a digital library.
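
The kind of relationship Gluck reports is easy to picture with a toy correlation between satisfaction and relevance ratings. A minimal sketch with invented numbers:

    # Toy Pearson correlation between satisfaction and relevance ratings.
    from statistics import mean, stdev

    satisfaction = [4, 5, 2, 3, 5, 1]   # invented 1-5 ratings
    relevance    = [4, 4, 2, 3, 5, 2]

    def pearson(xs, ys):
        """Sample Pearson correlation coefficient."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    print(f"r = {pearson(satisfaction, relevance):.2f}")

A strong positive r on data like this is the quantitative shape of Gluck’s finding: when users judge the retrieved items relevant, they tend to report being satisfied.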

BBC Virtual Revolution Series

Back in November I blogged about the documentary series being created by the BBC on the World Wide Web. I realised this week that it’s now finished, and the first episode aired last Saturday. I plan to watch it on iPlayer this weekend before the next episode airs. If you want to know more about the documentary and watch the episodes, you can do so on the BBC website.

Nanocrowd

This week Phil Bradley blogged about the movie search engine Nanocrowd. I decided to check it out for myself and was impressed. The autocomplete (or autosuggest) system prevents users from misspelling words, reducing the chance of returning no results. The only thing that seems to be missing is information about each movie. A synopsis, loaded directly from Amazon, appears when a user hovers over the film link; however, users are more likely to select the link and expect to find information on the following page. Although there is a ‘movie in a nutshell’ word cloud in the right-hand column, the body of the page is blank. It would be nice to have the synopsis in this space, or at least a link pointing users in the right direction; alternatively, the word cloud could be moved into the body of the page so people are more likely to notice it. Overall, this is a great tool for exploring movie genres and discovering new films. I’ll certainly be using it next time I’m searching for a film that matches my mood.

My second round-up of the new year and already my last one for January. This month has flown by!

Technology Acceptance Model (TAM)

Returning my attention to the evaluation of the Interactive Triptych Framework, which I first blogged about in November, has included investigating other evaluation concepts. One such concept, discussed by Tsakonas and Papatheodorou (2006), is the Technology Acceptance Model (TAM). This model, which seeks to understand user acceptance of computer systems, was first put forward by Fred D. Davis in 1989 in his paper ‘Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology’. It was later used by Thong, Hong and Tam in 2002 to understand user acceptance of digital libraries in their paper ‘Understanding user acceptance of digital libraries: what are the roles of interface characteristics, organizational context, and individual differences?’

Thong, Hong and Tam state that TAM has been used frequently by researchers to explain and predict user acceptance of information technology. It is based predominantly on the belief that a person’s intention to adopt an information system is affected by two beliefs: perceived ease of use and perceived usefulness. Ease of use is commonly described as the ease with which people can employ a particular tool, or other human-made object, to achieve a particular goal. Usefulness is defined as the extent to which a person believes that using the tool or system will benefit their task performance.
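
Schematically, TAM is often operationalised as a weighted combination of the two beliefs predicting intention to use. A minimal sketch (the weights below are invented; Davis-style studies estimate them empirically from survey responses):

    # Schematic TAM prediction: intention as a weighted sum of the two
    # beliefs. The weights are illustrative only; in practice they are
    # estimated by regression on Likert-scale survey data.
    def behavioural_intention(perceived_usefulness, perceived_ease_of_use,
                              w_pu=0.6, w_peou=0.3):
        return w_pu * perceived_usefulness + w_peou * perceived_ease_of_use

    # Invented example on a 1-7 scale: a useful but hard-to-use system still
    # scores higher than an easy but useless one.
    print(behavioural_intention(perceived_usefulness=6, perceived_ease_of_use=3))  # 4.5
    print(behavioural_intention(perceived_usefulness=2, perceived_ease_of_use=7))  # 3.3

The asymmetry in the example mirrors the finding discussed earlier in this blog: usefulness tends to dominate ease of use in forming intention.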

TAM appears to provide a manageable framework for evaluating the main barriers to user acceptance: ease of use and usefulness. One difference between TAM and the ITF is the absence of a performance attribute. The role of the evaluation period of the project will be to identify the most suitable framework for assessing the technological outcomes. Historically, performance has been missing from similar research, yet it would be required if a holistic approach were being sought. If the ITF is selected for UX2.0, one of the challenges will be to design a data gathering system (or systems) that can accurately and thoroughly investigate the performance aspect of digital libraries. This could include questionnaires, interviews, observation and web metrics.

One thing the Thong et al. paper considered was the influence of individual differences and organisational context on user acceptance of digital libraries. External factors such as these are more difficult to control or change, as they concern the experience and knowledge of users and the accessibility and visibility of the system within the organisation. These factors can affect the perceived ease of use and perceived usefulness of a system and are therefore worth investigating. Methodologies such as contextual inquiry have the potential to address these factors by building an understanding of typical user groups, from which appropriate personas can be generated. This strengthens the argument for using this data gathering method in the project.

iPad

Well, everyone has been talking about it for weeks (apparently), so as a curious non-Apple user I thought I would tune in to see what the fuss was about. It turns out Apple went with one of my least favourite names for their new device, but that aside, it certainly looks interesting. I guess time will tell how successful it is, but the lower-than-expected price will certainly help. The general reaction to the new product was a lot of disappointment and scepticism (mine included at times), but I’m told the reaction was similar for the iPhone, and look at it now! If you want to read why the iPad will succeed from a usability perspective, check out the blog post by Econsultancy.

(Image: fun Apple tablet created for a local iPad event, hosted by Moo Cafeteria)

