Researching Usability


Power, Perception and Performance (P3)

As part of the ongoing literature review I’ve been researching some of the theoretical models created or adapted to evaluate information systems. Over the last couple of weeks I’ve been blogging about the Technology Acceptance Model (TAM), which researchers have used to determine user acceptance of specific systems. The paper by Andrew Dillon and Michael Morris, Power, Perception and Performance: From Usability Engineering to Technology Acceptance with the P3 Model of User Response (1999), reveals limitations of the framework from a usability engineering perspective. It is not clear how well TAM predicts usage when testing prototypes, as research using TAM to date has involved the testing of complete systems. If the functionality is limited or incomplete, how adequately can participants rate its usefulness? Similarly, participants are less likely to be able to rate the system’s ease of use if the interface features have not all been designed yet.

The data collection method was also critiqued because it relies on self-ratings from the participants. Studies have shown that users’ ratings change with repeated exposure to a system over time and that they may shift independently of the usability of the interface. This also relates to last week’s blog, which suggested that what users say and what they do are not always the same. Self-ratings provide quantitative feedback from users, but ideally this data should be supplemented with observation conducted at regular intervals to capture any shifts in system self-efficacy.

The issues raised certainly provide a strong argument against using TAM if you are a designer looking for issues to fix. TAM will tell you if a system is likely to be accepted by users but may not provide insight into why. It is more beneficial to IS professionals or managers who want to know if a system is likely to be used, for example when considering the procurement of a new IS.

The P3 model developed by Dillon and Morris uses three aspects (power, perception, and performance) to assess a user’s ability to use a system. A system’s power indicates its potential to serve the user’s tasks, while perception and performance measure the user’s behavioural reactions. Dillon and Morris believe that the P3 model predicts the capability to use a system through effectiveness and efficiency, while TAM reveals the perception of the system. Consequently these different constructs make them independent entities which should not be compared: “The P3 model is an attempt at providing a unified model of use that supports both the process of design and clarifies the relationship between usability and acceptability.”

Useful program: Nutshell Mail

I was alerted to this wonderfully simple tool by Mike Coulter during his Ambition presentation, Listening Online. Trying it out is the simplest thing and takes less than a minute to set up. The website describes the program as follows:

NutshellMail takes copies of all your latest updates in your social networking and email accounts and places them in a snapshot email.

It’s a great way to manage multiple accounts and could be useful for those of you who either can’t access your social media accounts throughout the day or have so many people in your network that you find it difficult to monitor your feeds effectively. Last week I blogged about the limited usefulness of Twitter Groups because of the way they are accessed. Well, I might be eating my words now, because NutshellMail gives you the most recent results from your groups in each email, along with any other accounts you choose to connect, including LinkedIn, Facebook and MySpace. You can also schedule the emails to arrive at a time that best suits you. That way it’s less likely to get lost amongst all the emails that await you every morning! Although another piece of mail in your inbox might not sound like the ideal solution for some people, I’m willing to give it a try to see if it does make life a little easier.

Remote Research by Nate Bolt and Tony Tulathimutte

I was alerted to a competition this week in which UX Booth were giving away three copies of the book Remote Research. As I’ve conducted some remote studies myself, this was a topic that interested me. I thought I would try my luck and, lo and behold, I actually won a copy, which I have already received! Books by the publisher Rosenfeld Media are always informative; I already own Web Form Design by Luke Wroblewski and Card Sorting by Donna Spencer, and looking through the contents this book looks set to continue the trend. Most notable is the chapter entitled ‘The Challenges of Remote Testing’. The debate of remote testing versus direct testing has been ongoing for a while and looks set to continue. This chapter discusses some of the possible pitfalls, which will hopefully help readers make informed decisions on how they conduct user research and select the best tools to meet their needs. I look forward to reading this book; the simple design of Rosenfeld books makes them quick and easy to digest. I hope to write my own review here once I’ve finished it.


Heuristic report

This week the heuristic inspection report has been published and is available to read. The document is available in Word or as a PDF from the NeSC digital library: http://bit.ly/ux2inspectionreport. It is a sizeable document, so thanks in advance for taking the time to read it; feedback is very welcome! 🙂

Not what you know, nor who you know, but who you know already

This is a research paper which was a collaboration between myself, Hazel Hall and Gunilla Widén-Wulff. The research was undertaken when I first graduated from my Masters in 2007, and this week I received the good news that it will be published in Libri: International Journal of Libraries and Information Services at some point this year. The paper examines online information sharing behaviours through the lens of social exchange theory. My contribution was the investigation into the commenting behaviour of undergraduate students at Edinburgh’s Napier University as part of their coursework. I’m very excited by this news as it is only my second publication. I look forward to seeing it in print and will provide details here if it becomes available online.

TAM part 2: revised acceptance model by Bernadette Szajna

Another paper which I read this week was ‘Empirical Evaluation of the Revised Technology Acceptance Model’ by Bernadette Szajna (1996). In this paper Szajna uses the revised Technology Acceptance Model (TAM) from Davis et al. (1989) to measure user acceptance of an electronic mail system over a 15-week longitudinal study. By collecting data from participants at different points in the study she was able to show that self-reported usage differed from actual usage and that, as a consequence, it may not be appropriate as a surrogate measure. This supports what those who’ve been running usability tests have been saying for a while: what users say and what they do are seldom the same. In user research terms this means that observing what users do during their interaction with a system is as important as what they say about their experience.

In addition the paper revealed that “unless users perceive an IS as being useful at first, its ease of use has no effect on the formation of intention”. This struck a chord with me because as a usability professional I often assume that ease of use is a barrier to the usefulness of a system; if users do not know how to manipulate the interface they are unable to discover the (possibly useful) information below the surface. Then, when I was considering the usefulness of Twitter Groups, I realised that it followed the same pattern. Twitter Groups is a recent addition to Twitter which allows the people you follow to be categorised into self-named groups; its most obvious application is letting users separate their professional connections from their personal ones. In theory it is a good idea and one which I thought I might use, as separating out different networks would certainly make them easier to monitor. I can’t imagine it being too difficult to set up a group if I so wished, but the problem is that I never considered it useful for me to do so and consequently I never did (note: I created a private group today to test my theory). The reason in this case is that I rarely use Twitter’s website to monitor or communicate with those I’m following; clients such as TweetDeck can do this for me. I’m sure there are a few people out there who have created groups and view them regularly, but could they be in the minority? I’d be interested to test my theory, so any comments on your own Twitter group behaviour are welcome.

My conclusion is that (for me) the lack of perceived usefulness of the groups tool was a greater barrier to use than any difficulty in creating a group, which is consistent with Szajna’s findings. This illustrates how important usefulness is to the user acceptance of technology and is therefore something that should be evaluated in every system to ensure success.

Mendeley Webinars

Lastly, Mendeley’s directors are hosting webinars which will provide an introduction to its features, including inserting citations and using the collaborative tools. The webinars will be held on Tuesday, February 23, 2010, 5:00 PM – 6:00 PM GMT and Wednesday, February 24, 2010, 9:00 AM – 10:00 AM GMT respectively. I have signed up for the webinar on Wednesday and look forward to learning more. So far I’ve managed to add items to my library and connect with others online, but I don’t feel I have exploited its features fully and am having difficulties amending my bibliography in Word. Hopefully this webinar will provide help and advice.

My second round-up of the new year and already my last one for January. It seems that this month has flown by quite quickly!

Technology Acceptance Model (TAM)

Returning my attention to the evaluation of the Interactive Triptych Framework, which I first blogged about in November, has included the investigation of other evaluation concepts. One such concept, discussed by Tsakonas and Papatheodorou (2006), is the Technology Acceptance Model (TAM). This model, which seeks to understand user acceptance of computer systems, was first put forward by Fred D. Davis in 1989 in his paper ‘Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology’. It was later used by Thong, Hong and Tam in 2002 to understand user acceptance of digital libraries in their paper ‘Understanding user acceptance of digital libraries: what are the roles of interface characteristics, organizational context, and individual differences?’

Thong, Hong and Tam state that TAM has been used frequently by researchers to explain and predict user acceptance of information technology. It is predominantly based on the premise that a person’s intention to adopt an information system is affected by two beliefs: perceived ease of use and perceived usefulness. Ease of use is commonly described as the ease with which people can employ a particular tool or other human-made object in order to achieve a particular goal. Usefulness is defined as the extent to which a person believes using the tool or system will benefit their task performance.
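These relationships are often estimated in TAM studies as a simple structural model. As a rough sketch (the notation and weights here are my own illustrative shorthand, not taken from the papers above), it might be written as:

```latex
% Behavioural intention (BI) is driven by perceived usefulness (PU)
% and perceived ease of use (PEOU); PEOU also acts on BI indirectly
% by shaping PU. The beta terms are regression weights estimated
% from questionnaire data, and the epsilon terms are error.
\begin{align*}
  \mathrm{PU}  &= \beta_{1}\,\mathrm{PEOU} + \varepsilon_{1} \\
  \mathrm{BI}  &= \beta_{2}\,\mathrm{PU} + \beta_{3}\,\mathrm{PEOU} + \varepsilon_{2} \\
  \mathrm{Use} &\approx f(\mathrm{BI})
\end{align*}
```

Reading it this way makes Szajna’s point (discussed above) concrete: if perceived usefulness is low, the indirect path from ease of use through usefulness contributes little, so ease of use alone does not generate an intention to use.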

TAM appears to provide a manageable framework which can evaluate the main barriers to user acceptance: ease of use and usefulness. One difference between TAM and the ITF is the absence of a performance attribute. The role of the evaluation period of the project will be to identify the most suitable framework to use when assessing the technological outcomes. Historically, performance has been missing from similar research and would be required if a holistic approach was being sought. If the ITF is selected for ux2, one of the challenges will be to design a data gathering system (or systems) that can accurately and thoroughly investigate the performance aspect of digital libraries. This could include questionnaires, interviews, observation and web metrics.

One thing that the Thong et al. paper considered was the influence of individual differences and organisational context on user acceptance of digital libraries. External factors such as these are more difficult to control or change as they deal with the experience and knowledge of users and the accessibility/visibility of the system within the organisation. These factors can affect the perceived ease of use and perceived usefulness of a system and are therefore worth investigating. Methodologies such as contextual enquiry have the potential to address these factors by understanding typical user groups to generate appropriate personas. This strengthens the argument for using this data gathering method in the project.

iPad

Well, everyone has been talking about it for weeks (apparently), so as a curious non-Apple user I thought I would tune in to see what the fuss was about. Turns out Apple went with one of my least favourite names for their new device, but that aside it certainly looks interesting. I guess time will tell how successful it is, but the lower than expected price will certainly help. The general reaction to the new product was disappointment and scepticism (mine included at times), but I’m told the reaction was similar for the iPhone and look at it now! If you want to read why the iPad will succeed from a usability perspective, check out the blog by Econsultancy.

Fun Apple tablet created for a local iPad event, hosted by Moo Cafeteria

