Researching Usability

Archive for the ‘Round-up’ Category

Last Thursday I attended a webinar organised by AmbITion called Tracking Impact. It was presented by David Sim from Open Brolly and 4TM, who discussed ways to track your organisation’s activities online. As a webinar it was streamed live to almost 50 people at one point, but I was lucky enough to attend in person due to its close proximity to the office. The full webinar is available on the AmbITion website and the presentation slides are also available. Consequently, instead of a full report on the presentation I have listed some things which came out of the event that were of interest to me and the project.

Research conducted suggests Facebook is a good tool for asking short questions

Asking short questions is an effective way to generate a dialogue between an organisation and its patrons, or between users. Questions which are easy for people to answer, such as favourite recipes or opinions on a book or movie, often get the biggest response. Not only does this generate interest in the organisation, it also provides valuable information on your users, something which can cost time and money to obtain. In addition, the analytical tools available on Facebook provide more detailed profile information on your users than almost anywhere else, and this provides valuable data. Although I agree with these findings, it would have been nice to know who conducted the research so I could read it for myself.

Retweeting is a good gauge of influence and expertise

This may seem obvious to some but it is definitely worth pointing out. If an individual or organisation is using Twitter and their content is retweeted by others, this demonstrates an authority on a subject and the value others place in the information provided. There is something powerful about the ‘collective wisdom’ or ‘collective intelligence’ demonstrated by retweets which is characteristic of Web 2.0 (Högg et al. 2006). Retweeting behaviour goes some way towards measuring impact and influence within social networks and is therefore one tool in an otherwise difficult-to-measure environment.

To search an exact phrase use inverted commas

OK, so for any librarians reading this will seem like a no-brainer, but I have to admit I never considered it until it was pointed out. This tip was provided in relation to using Google Alerts to monitor an organisation or brand, but it also applies to any type of search. Keywords are often common and can return a lot of irrelevant information. Using exact phrase searches can reduce the amount of unwanted information getting through. I have now amended my own Google Alerts to get more accurate results and have also used the minus (-) technique to remove any results matching the UX2 project tag, as it turns out it’s also a piece of sound equipment!
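The two techniques above can be sketched as a tiny helper. This is a hypothetical example of my own (the function name and sample terms are not from the webinar):

```python
# A hypothetical sketch of the exact-phrase and minus techniques.
# The helper name and sample terms are my own examples.

def build_query(phrase, exclude=None):
    """Wrap the phrase in inverted commas so the search engine
    matches it exactly, and prefix unwanted terms with a minus."""
    query = f'"{phrase}"'
    for term in exclude or []:
        query += f" -{term}"
    return query

print(build_query("researching usability"))
# "researching usability"
print(build_query("UX2", exclude=["mixer"]))
# "UX2" -mixer
```

The same string works in a normal Google search box or in a Google Alert.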

Social media will become more useful

This was one of David’s predictions for the future during the Q&A which followed the presentation. I have to agree that the semantic web will make it much easier for information to be shared and accessed when a user needs it. It’s a shame there wasn’t more time for discussion on this subject as it’s something which I am keen to explore in more detail.

A few developments to the project this week have meant that we have some good news to announce:

After revisions to our proposal, we got confirmation this week that our JISC Enhancing LMS: AquaBrowser UX study had been accepted! If you have been following the blog and the usability inspection report which evaluated five digital libraries, you will know that Edinburgh University’s AquaBrowser was one of the DLs we investigated. Ideas that originally spun out of my blog post led to a full-blown proposal to evaluate the system, which will now be conducted alongside the UX2 project. It will greatly enhance the UX2 project, providing the opportunity to conduct extensive user research, including personas and usability testing, which can be compared to the heuristic inspection results. More details will be announced on the UX2 project website once it kicks off next week, so stay tuned for more information.

Included in the AquaBrowser project is the opportunity to attend the UPA 2010 Conference in Munich this May. By attending it will be possible to learn more about personas and measuring the effectiveness of Web 2.0, among many other subjects. I’m excited to be attending, so please get in touch if you will also be there and would like to meet up. I will of course blog about the conference while I’m there.

On the subject of the Usability Professionals Association, the UX2 team will be talking about the project at the next event for the Scottish chapter. The talk takes place next week on 20th April at 6:30pm, and during the presentation we hope to spread the word about our work while also discussing some of the inspection report findings. We will also discuss the development work taking place, with a demonstration of the prototype. If you live in and around Edinburgh and would like to attend, please visit the SUPA website for more information. If you can’t make it but are still interested, we are hoping to put the slides from the presentation online afterwards.

Information System Success Model

The next theoretical model that I look at this week is the Information System Success Model, first developed by William H. DeLone and Ephraim R. McLean in 1992. As the title suggests, the framework was designed to create a comprehensive way of measuring the success of an information system. The premise is that “system quality” measures technical success; “information quality” measures semantic success; and “use”, “user satisfaction”, “individual impacts” and “organisational impacts” measure effectiveness success. This was later streamlined into a revised version in 2003 which highlights the three main components of an information service: “information” (content), “system”, and “service” (see diagrams below). Each of these factors has an impact on the user: their intention to use (discussed in the TAM framework as ‘acceptance of a system’) and their actual usage of a system. These factors in turn influence user satisfaction, and this provides an indication of the ultimate impact of the system on the user, group of users, organisation or industry. The net benefits can be scaled so the researcher can decide the context in which they are to be measured, keeping the model useful in any situation.
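As a rough sketch, the revised model’s flow from the three quality dimensions through use and satisfaction to net benefits might look like this. The model itself is conceptual and prescribes no formula, so the averaging and the ratings below are purely my own illustration:

```python
# Illustrative only: the Success Model describes relationships,
# not equations, so the simple averaging here is an invented stand-in.

# The three antecedent dimensions of the revised model (rated, say, 1-7)
quality = {"information": 6, "system": 5, "service": 4}

# Quality shapes use (and intention to use) and user satisfaction...
use = sum(quality.values()) / len(quality)
satisfaction = use  # in practice measured separately, e.g. by survey

# ...which together indicate the net benefits, scoped to whatever
# context (user, group, organisation, industry) the researcher chooses.
net_benefits = (use + satisfaction) / 2
print(net_benefits)  # 5.0
```

The useful point the structure captures is that no single number is “success”: a researcher picks the scope of the net benefits before measuring anything.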

There are a number of similarities between the Success Model and other models examined in this blog, including ITF and TAM. In addition to the parallels between the Success Model’s ‘intention to use’ and ‘use’ and TAM’s acceptance model, there are also overlaps with the Interactive Triptych Framework. Examining the system and content individually as a means of understanding the impact on the user’s behaviour (intentions and usage) is mirrored in the usability and usefulness of the system and content to the user in ITF. In the same vein, usefulness and usability are also paramount when evaluating user acceptance in the Technology Acceptance Model. In this respect all three frameworks are similar. Where the Revised Success Model differs is in its application of these measurements. Where TAM evaluates whether a system will be accepted by users, the Success Model can generate a list of measurable benefits which can be used to gauge the system’s success. This provides the opportunity to evaluate the success of a system over time, as users become more familiar with it.

DeLone and McLean believe that the rapidly changing environment of information systems does not require an entirely new set of measures. They recommend identifying success measures which already exist and have been validated through previous application, enhancing and modifying them where necessary. New, untested measures should be adopted only as a last resort. The similarities ITF and TAM have demonstrated in their approaches rule them out as entirely new measures. While TAM has received extensive testing in previous research, ITF is still relatively young. Adopting the ITF model for the UX2.0 project will hopefully further the research in this area.

Girl Geek Dinners: Edinburgh

This week I attended the 3rd Girl Geek Dinner in Edinburgh, hosted by The Informatics Forum at Edinburgh University. Girl Geeks is for women (and men!) interested in technology, creativity and computing. The speakers, Emma McGrattan and Lesley Eccles, provided entertaining, candid and very interesting talks on their own experiences working in technological sectors. I attended the first dinner in Edinburgh last year and have noticed how successful it has become thanks to the wonderful work done by the organisers. The events attract a real mixture of professionals and students with a variety of interests. The passion for technology that everyone brings to the event always leaves me with real optimism and inspiration for the future. Long may these types of events continue.

Free Digital UX Books

For those who missed it or don’t follow me on Twitter, I came across a useful list of free user experience ebooks compiled by Simon Whatley (link provided via @BogieZero). I personally recommend reading Search User Interfaces by Marti A. Hearst. If you know of any other free ebooks please feel free to leave a link here or on Simon’s blog.

Power, Perception and Performance (P3)

As part of the ongoing literature review I’ve been researching some of the theoretical models created or adapted to evaluate information systems. Over the last couple of weeks I’ve been blogging about the Technology Acceptance Model (TAM), which has been used by researchers to show its effectiveness at determining user acceptance of specific systems. The paper by Andrew Dillon and Michael Morris, Power, Perception and Performance: From Usability Engineering to Technology Acceptance with the P3 Model of User Response (1999), reveals limitations of the framework from a usability engineering perspective. It is not clear how well TAM predicts usage when testing prototypes, as research using TAM to date has involved the testing of complete systems. If the functionality is limited or incomplete, how adequately can participants rate its usefulness? Additionally, they are less likely to be able to rate the system’s ease of use if the interface features have not all been designed yet.

The data collection method was also critiqued because it relies on self-ratings from the participants. Studies have shown that users’ ratings change with repeated exposure to a system over time and may shift independently of the usability of the interface. This also relates to last week’s blog, which suggested that what users say and what they do are not always the same. Self-ratings provide quantitative feedback from users, but ideally this data should be gathered alongside observation conducted at regular intervals to capture any changes in system self-efficacy.

The issues raised certainly provide a strong argument against using TAM if you are a designer looking for issues to fix. TAM will tell you if a system is likely to be accepted by users but may not provide insight into why. It is more beneficial to IS professionals or managers who want to know if a system is likely to be used, for example when considering the procurement of a new IS.

The P3 model developed by Dillon and Morris uses three aspects (power, perception, and performance) to assess a user’s ability to use a system. A system’s power indicates its potential to serve the user’s tasks. Perception and performance measure the user’s behavioural reactions. Dillon and Morris believe that the P3 model predicts the capability to use a system through effectiveness and efficiency, while TAM reveals the perception of the system. Consequently these different constructs make them independent entities which should not be compared: “The P3 model is an attempt at providing a unified model of use that supports both the process of design and clarifies the relationship between usability and acceptability.”

Useful program: NutshellMail

I was alerted to this wonderfully simple tool by Mike Coulter during his AmbITion presentation, Listening Online. Trying it out is the simplest thing and takes less than a minute to set up. The website describes the program as follows:

NutshellMail takes copies of all your latest updates in your social networking and email accounts and places them in a snapshot email.

It’s a great way to manage multiple accounts and could be useful for those of you who either can’t access your social media accounts throughout the day or have so many people in your network that you find it difficult to monitor your feeds effectively. Last week I blogged about the limited usefulness of Twitter groups because of the way they are accessed. Well, I might be eating my words now because NutshellMail gives you the most recent results from your groups in each email, along with any other accounts you choose to connect, including LinkedIn, Facebook and MySpace. You can also schedule the emails to arrive at a time that best suits you. That way it’s less likely to get lost amongst all the emails that await you every morning! Although another piece of mail in your inbox might not sound like the ideal solution for some people, I’m willing to give it a try to see if it does make life a little easier.

Remote Research by Nate Bolt and Tony Tulathimutte

I was alerted to a competition this week in which UX Booth were giving away three copies of the book Remote Research. As I’ve conducted some remote studies myself, this was a topic that interested me. I thought I would try my luck and, lo and behold, I actually won a copy, which I have already received! Books by the publisher Rosenfeld Media are always informative – I already own Web Form Design by Luke Wroblewski and Card Sorting by Donna Spencer. Looking through the contents, it looks like this book continues the trend. Most notable is the chapter entitled ‘The Challenges of Remote Testing’. The debate of remote testing versus direct testing has been ongoing for a while and looks set to continue. In this chapter some of the possible pitfalls are discussed, which will hopefully help users make informed decisions on how they conduct user research and select the best tools to meet their needs. I look forward to reading this book; the simple design of Rosenfeld books makes them quick and easy to digest. If there is interest, I hope to write my own review here once I’ve finished it.

Heuristic report

This week the heuristic inspection report has been published and is available to read; feedback is very welcome. The document is available in Word or as a PDF from the NeSC digital library. It is a sizeable document, so thanks in advance for taking the time to read it! 🙂

Not what you know, nor who you know, but who you know already

This is a research paper which was a collaboration between myself, Hazel Hall and Gunilla Widén-Wulff. The research was undertaken when I first graduated from my Masters in 2007, and this week I received the good news that it will be published in Libri: International Journal of Libraries and Information Services at some point this year. The paper examines online information sharing behaviours through the lens of social exchange theory. My contribution was the investigation into the commenting behaviour of undergraduate students at Edinburgh Napier University as part of their coursework. I’m very excited by this news as it is only my second publication. I look forward to seeing it in print and will provide details here if it becomes available online.

TAM part 2: revised acceptance model by Bernadette Szajna

Another paper which I read this week was ‘Empirical Evaluation of the Revised Technology Acceptance Model’ by Bernadette Szajna (1996). In this paper Szajna uses the revised Technology Acceptance Model (TAM) from Davis et al. (1989) to measure user acceptance of an electronic mail system over a 15-week longitudinal study. By collecting data from participants at different points in the study she was able to reveal that self-reported usage differed from actual usage and that, as a consequence, it may not be appropriate as a surrogate measure. This supports what those who’ve been running usability tests have been saying for a while: what users say and what they do are seldom the same. In user research terms this means that observing what users do during their interaction with a system is as important as what they say about their experience.

In addition, the paper revealed that “unless users perceive an IS as being useful at first, its ease of use has no effect on the formation of intention”. This struck a chord with me because, as a usability professional, I often assume that ease of use is a barrier to the usefulness of a system; if a user does not know how to manipulate the interface they are unable to discover the (possibly useful) information below the surface. Then, when I was considering the usefulness of Twitter groups, I realised that they began to follow the same pattern. Twitter groups are a recent addition to Twitter and allow those you are following to be categorised into self-named groups. For example, their best application is as a means for users to differentiate their professional connections from personal ones. In theory it is a good idea, and one which I thought I might use, as a way of separating out different networks would certainly make them easier to monitor. I can’t imagine it being too difficult to set up a group if I so wished, but the problem is that I never considered it useful for me to do so and consequently I never did (note: I created a private group today to test my theory). The reason in this case is that I rarely use Twitter’s website to monitor or communicate with those I’m following. There are many different clients, such as TweetDeck, that can do this for me. I’m sure there are a few people out there who have created groups and view them regularly, but could these people be in the minority? I’d be interested to test my theory, so any comments on your own Twitter group behaviour are welcome.

My conclusion is that (for me) the usefulness of the groups tool was a greater barrier to use than the ease of creating a group, verifying Szajna’s findings. This illustrates how important usefulness is to the user acceptance of technology and is therefore something that should be evaluated in every system to ensure success.

Mendeley Webinars

Lastly, Mendeley’s directors are hosting webinars which will provide an introduction to its features, including inserting citations and using the collaborative tools. The webinars will be held on Tuesday, February 23, 2010, 5:00–6:00 PM GMT and Wednesday, February 24, 2010, 9:00–10:00 AM GMT respectively. I have signed up for the webinar on Wednesday and look forward to learning more. So far I’ve managed to add items to my library and connect with others online, but I don’t feel I have exploited its features fully and am having difficulties amending my bibliography in Word. Hopefully this webinar will provide help and advice.

So you might have noticed the different title for this week’s round-up. The reason for this change is to make each week’s title a bit more meaningful to readers. I also suspect that navigating old posts will be easier if the titles allude to the content rather than forcing people to remember the date a post was written. It’s an experiment for now and I might tweak it in the future, so feedback is always welcome.

This week the team have been doing some final edits to the inspection report. Although the content was completed last week, a few minor changes have been made to orientate readers through the report, provide better context and tweak the layout. It is expected to be finalised next week (promise!), so I will post download details when it is available.

Measuring the user’s experience: pleasure and satisfaction

Something that I was reading about recently is the idea of measuring the playfulness and pleasure of digital libraries. In a short paper by Toms, Dufour and Hesemeier, ‘Evaluating the User’s Experience with Digital Libraries’, they devise a method of assessing the entertainment value of digital libraries by adapting an e-commerce experiential value scale. It struck me reading this paper that there is little research on this aspect of evaluation. As with the ITF framework, many evaluation models focus on the usability, usefulness and performance of a digital library. However, there appears to be scope for libraries to be more than just a means of finding, acquiring and using information (Toms et al.). This becomes important as new features and services are added to digital libraries. The heuristic inspection that UX2 carried out provides evidence to support this idea and suggests that digital libraries are already doing this: bringing people together through social media and using new UI patterns that provide a more engaging experience than traditional search systems. Good examples include the ‘Stuff’ feature provided by Scran and the timeline and map used by the World Digital Library.

Satisfaction is another term used when evaluating digital libraries. Myke Gluck’s paper ‘Exploring the Relationship Between User Satisfaction and Relevance in Information Systems’ (1995) revealed a strong relationship between user satisfaction, the relevance of retrieved items and the process of retrieving the item. This supports the idea that there is a connection between the performance of a system and its usefulness to the user. It also reveals that the usability of the UI affects satisfaction, supporting the need to evaluate an information system by adopting a holistic approach. As usefulness and usability are both determinants in the user acceptance of digital libraries (as discussed in last week’s blog), satisfaction is an influential factor in the success of a digital library.
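A relationship of the kind Gluck describes could be checked with a simple correlation between paired ratings. Here is a hypothetical sketch; the numbers are made up by me, not taken from the paper:

```python
# Hypothetical data: paired user satisfaction and relevance ratings
# (e.g. 7-point scales), invented purely for illustration.
from statistics import mean

satisfaction = [2, 3, 4, 5, 6, 7]
relevance = [1, 3, 3, 5, 6, 6]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A value near 1 would indicate the kind of strong positive
# relationship Gluck reports between satisfaction and relevance.
print(round(pearson(satisfaction, relevance), 2))  # 0.96
```

Of course, real studies like Gluck’s gather many more responses and control for the retrieval process as well, but the basic check is this simple.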

BBC Virtual Revolution Series

Back in November I blogged about the documentary series being created by the BBC on the World Wide Web. I realised this week that it’s now finished and the first episode aired last Saturday. I plan to watch it on iPlayer this weekend before the next episode airs. If you want to know more about the documentary and watch the episodes, you can do so on their website.


This week Phil Bradley blogged about the movie search engine Nanocrowd. I decided to check it out for myself and was impressed. The autocomplete or autosuggest system prevents users from misspelling words, reducing the chance of returning no results. The only thing that seems to be missing is information on the movie itself. Synopsis information appears when a user hovers over the film link; this information is loaded directly from Amazon. However, users are more likely to select the film link and expect to find information on the following page. Although there is a ‘movie in a nutshell’ word cloud in the right-hand column, the body of the page is blank. It would be nice to have things like the synopsis in this space, or at least a link pointing users in the right direction. Alternatively, the word cloud could be moved into the body of the page so people are more likely to notice it. Overall, this is a great tool for exploring movie genres and discovering new films. I’ll certainly be using it next time I’m searching for a film that matches my mood.

My second round-up of the new year and already my last one for January. It seems that this month has flown by quite quickly!

Technology Acceptance Model (TAM)

Returning my attention to the evaluation of the Interactive Triptych Framework, which I first blogged about in November, has included the investigation of other evaluation concepts. One such concept, discussed by Tsakonas and Papatheodorou (2006), is the Technology Acceptance Model (TAM). This model, which seeks to understand acceptance of computer systems, was first put forward by Fred D. Davis in 1989 in his paper ‘Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology’. It was later used by Thong, Hong and Tam in 2002 to understand user acceptance of digital libraries in their paper ‘Understanding user acceptance of digital libraries: what are the roles of interface characteristics, organisational context, and individual differences?’

Thong, Hong and Tam state that TAM has been used frequently by researchers to explain and predict user acceptance of information technology. It is predominantly based on the belief that a person’s intention to adopt an information system is affected by two beliefs: the perceived ease of use and the perceived usefulness. Ease of use is commonly described as the ease with which people can employ a particular tool or other human-made object in order to achieve a particular goal. Usefulness is defined as the extent to which a person believes using the tool or system will benefit their task performance.
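As a minimal sketch, intention can be pictured as a function of those two beliefs. The weights below are invented placeholders of my own; real TAM studies estimate such coefficients from survey data by regression:

```python
# A minimal, hedged sketch of TAM's core idea: behavioural intention
# as a function of perceived usefulness (PU) and perceived ease of
# use (PEOU). The 0.6/0.4 weights are invented for illustration only.

def intention_to_use(pu, peou, w_pu=0.6, w_peou=0.4):
    """Weighted sum of the two beliefs, each rated on, say, a 7-point scale."""
    return w_pu * pu + w_peou * peou

# A system seen as useful but a little awkward still scores well...
print(round(intention_to_use(6, 4), 2))  # 5.2
# ...while an easy-to-use but not very useful one scores lower.
print(round(intention_to_use(2, 7), 2))  # 4.0
```

The point of the sketch is simply that both beliefs feed intention, with usefulness typically carrying more weight.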

It feels that TAM provides a manageable framework which can evaluate the main barriers to user acceptance: ease of use and usefulness. One difference between TAM and ITF is the absence of a performance attribute. The role of the evaluation period of the project will be to identify the most suitable framework to use when assessing the technological outcomes. Historically, performance has been missing from similar research and would be required if a holistic approach were being sought. If the ITF is selected for UX2, one of the challenges will be to design a data gathering system (or systems) that can accurately and thoroughly investigate the performance aspect of digital libraries. This could include questionnaires, interviews, observation and web metrics.

One thing that the Thong et al. paper considered was the influence of individual differences and organisational context on user acceptance of digital libraries. External factors such as these are more difficult to control or change as they deal with the experience and knowledge of users and the accessibility/visibility of the system within the organisation. These factors can affect the perceived ease of use and perceived usefulness of a system and are therefore worth investigating. Methodologies such as contextual enquiry have the potential to address these factors by understanding typical user groups to generate appropriate personas. This strengthens the argument for using this data gathering method in the project.


Well, everyone has been talking about it for weeks (apparently), so as a curious non-Apple user I thought I would tune in to see what the fuss was about. It turns out Apple went with one of my least favourite names for their new device, but that aside, it certainly looks interesting. I guess time will tell how successful it is, but marketing it at a lower-than-expected price will certainly help. A lot of disappointment and scepticism (me included at times) was the general reaction to the new product, but I’m told the reaction was similar for the iPhone and look at it now! If you want to read why the iPad will succeed from a usability perspective, check out the blog post by Econsultancy.

Fun Apple tablet created for a local iPad event, hosted by Moo Cafeteria


Firstly, apologies for the long gap in blog entries. My Dad passed away in December, aged just 57, from skin cancer. Consequently I took an unscheduled break from work, and with the holidays on top of that and then catching up on everything when I got back, the blog has been neglected. However, since my return I’ve managed to finalise the summative report on the heuristic evaluation which I’ve been blogging about. I’m hoping to write a separate blog post with some of my conclusions, as requested, in the next week or so. This week’s round-up is slightly shorter than usual but normal service will resume next week!

Now that the summative report is complete, attention has turned to the next six months of the project. Researching the evaluation process with regard to existing frameworks and structures is next on the agenda and will inform the evaluation of UX2 outputs. Evaluation of the technological outcomes from the project will then be conducted and will include persona research followed by usability and usefulness testing of the prototype.

Alan Cooper talks about qualitative research in his book ‘About Face 3: The Essentials of Interaction Design’. In the book he uses the term ‘persona hypothesis’, which is a good way to describe the first stage in synthesising personas. It attempts to answer questions such as ‘What different sorts of people might use this product?’, ‘How might their needs and behaviours vary?’ and ‘What ranges of behaviour and types of environments need to be explored?’ In the case of this project these questions relate to the type of library user who would visit library@nesc, what their needs would be and how they vary. Borrowing from existing persona research will help to generate such a hypothesis and enable recruitment for subsequent user interviews. The existing persona research being reviewed for the project is documented on the UX2 wiki.

Re-visiting the definition of a digital library

This week has been pretty busy, filled with lots of meetings and preparation for the project meeting which we are hosting on the 15th of December. This week UX2 have been re-visiting the definition of the digital library and came to the (perhaps obvious) conclusion that it is something which cannot be tied down to one definitive version. I’ve been reading ‘Evaluation of Digital Libraries: An Insight into Useful Applications and Methods’, edited by Giannis Tsakonas and Christos Papatheodorou. The definition by Jesse H. Shera mentioned in the introduction was one which resonated with me, as it seemed to touch on what we are trying to achieve in our project:

…contributing to the total communication system in society…

Though the library is an instrumentality created to maximise the utility of graphic records for the benefit of society, it achieves that goal by working with the individual and through the individual it reaches society.  (Shera, 1972:48)

Too often it feels like definitions concentrate on the technical parameters of a digital library and on differentiating it from the traditional library. This idea describes a common goal of both traditional and digital libraries: the interaction with individuals and society. Including users in the evaluation of a digital library is something which we hope to do at each stage in the project, because social and individual benefits, and the feedback between them, are important criteria to evaluate. Whatever definition is used, there seem to be four critical elements which should be present, in addition to the digitised format, for a digital library to be correctly labelled: curation, preservation, archiving and cataloguing.

Interactive Information Retrieval (IIR)

Another term which was discussed during the meeting was Interactive Information Retrieval. It came up during the Designing User Interface tutorial which UX2 attended at ECDL09. Some of the examples discussed involved multifaceted ways of retrieving information. I started to think that there might be a better term for describing these particular interfaces, because IIR can describe most forms of interaction with digital libraries, from simple to complex and unique. A term was floated which might better describe IIR that uses multifaceted/Web 2.0 interaction: Immersive Interactive Information Retrieval (I²R)? The dictionary defines ‘immersive’ as “pertaining to immersing or plunging into something”. I think this could describe the synchronous interaction that takes place when using Web 2.0 technology, because the interaction is immediate and does not have to stop and start, keeping the user’s experience fluid and continuous. If there is an existing term for the type of interaction I am talking about, I would be interested to find out.


For some Friday fun I thought I would share a few word clouds that I generated through the services Wordle and Tweet Cloud. I’ve known about Wordle for a while but never used it in anger. Earlier this week I heard people at the Online09 conference tweeting about using it in conjunction with a CV, which seemed like a good idea. It got me thinking about word clouds as a quick way of giving someone a snapshot of a person’s ideas and interests. I therefore decided to create one for this blog and one for my delicious links to see what patterns were emerging. The resulting images are below.

Tweet Cloud does the same kind of thing, grabbing data from all your tweets over a specified period (day/week/month/year). The clouds aren’t quite as impressive as the Wordle ones and you can’t customise the design yet, but it’s a great idea and something I imagine will grow in interest as people seek to analyse their tweets. As I will be marking my first anniversary on Twitter on the 9th Dec, I thought it would be appropriate to include a cloud from a year of tweets to see what it looked like. I was pleased to discover that the three most used words were: usability, thanks and blog! 🙂

Wordle blog

Wordle delicious links

Tweet Cloud: a year of tweets
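For anyone curious what sits at the heart of tools like Wordle and Tweet Cloud, the core step is simply counting word frequencies and scaling font sizes by the counts. A minimal sketch in Python (the stop-word list here is a hypothetical shortened one; the real services use fuller lists and do the rendering for you):

```python
from collections import Counter
import re

# Hypothetical, abbreviated stop-word list for illustration only.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "for"}

def word_frequencies(text, top_n=10):
    """Count how often each word appears, ignoring case and stop words.
    A cloud tool would then scale each word's font size by its count."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

# e.g. a year of tweets pasted in as one string:
print(word_frequencies("usability thanks blog usability thanks usability"))
```

The most frequent words end up largest in the cloud, which is why recurring themes such as “usability” jump out immediately.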


You might have noticed that I recently added a link to my Mendeley profile in the right-hand column of the blog. I finally got round to taking a closer look at it after hearing good things back in September. I’ve created a public folder for the ux2 project which I want to use to keep track of papers relevant to the project. For those who do not know what I’m talking about, Mendeley is a network which allows researchers to create a bibliographic database (there is also a video presentation on YouTube). So far it seems a good way of organising papers because they are tagged and categorised for you. Recommendations aren’t available yet, but I can see it being very useful and a good way to find new material.

I did run into a couple of usability issues when trying to find other members and build my network. Firstly, if you want to check your email for contacts, it is not possible to do so with a university email address (which is likely to be a common problem for academics); currently you can only search Gmail, Yahoo, Hotmail, AOL and GMX. Secondly, the search system within ‘Find People’ was not as intuitive as I had hoped. It turns out the search form searches your contacts by default. I did not realise this at first, for two reasons:

  1. The search is very responsive: it starts searching your contacts immediately as you type, highlighting where the letters appear in your contacts’ names. Responsiveness is not necessarily a bad thing, but I wrongly assumed it meant the form only searched my contacts and started to think I had to go elsewhere to search the whole system.
  2. I did not read the message which appeared once I submitted my search and, as a result, drew the wrong conclusions again (see image). I did what users often do and scanned the words quickly. The first part I read was ‘None of your contacts match this search term’, which only reinforced my conclusion that the form searched my existing contacts alone. I therefore did not continue to the second line, which said ‘Click Search to include all Mendeley users in the results’. The word ‘Search’ was linked, making it a different colour, so it stood out from the rest of the text; again I read only ‘Click Search’ and quickly dismissed it, because I felt I had already completed that action when I clicked Search the first time. I spent quite a while looking around the rest of the site for a way to search all members and eventually concluded it wasn’t possible yet. To Mendeley’s credit, when I tweeted my difficulties I got a quick response, which shows how proactive they are at fixing problems. It wasn’t until I asked a colleague for help that I finally realised the message asks users to click Search a second time to search all members. I think it would be more intuitive if the form searched all users by default and gave users the choice of searching all users or just their contacts; otherwise it is likely to confuse. If something requires instructions to be used correctly, it is often a sign that it is not intuitive!
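The change I am suggesting is small: make the wide search the default and the narrow one an explicit choice. A sketch of that behaviour (all names here are hypothetical illustrations, not Mendeley’s actual code):

```python
def search_people(query, contacts, all_users, scope="all"):
    """Search people by name. Defaults to searching every user,
    which is what a first-time searcher most likely expects;
    pass scope="contacts" to restrict results to existing contacts.
    Hypothetical sketch; names and signature are illustrative only."""
    pool = contacts if scope == "contacts" else all_users
    q = query.lower()
    return [name for name in pool if q in name.lower()]
```

With this default, a single click on Search does what most users assume it does, and the contacts-only filter becomes an opt-in rather than a surprise.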

Anyway, I still believe that Mendeley has the potential to be a useful tool for researchers and I will continue to use it. Please feel free to connect with me if you also have an account.

COI Usability Toolkit

This week COI announced a usability toolkit aimed at public sector websites (although anyone can access it). The toolkit provides good-practice guides on a variety of topics, including search form design and search results design. Having briefly looked at some of the guides, I found they communicate the main points in a clear and concise manner, with annotated illustrations that help time-poor readers get a good understanding very quickly. There is also a section where you can test your knowledge, which helps to reinforce users’ learning. It is predominantly aimed at those with little or no previous knowledge of usability and avoids technical jargon and code.

To use the toolkit visit


Infomaki is an open-source, lightweight usability testing tool developed by The New York Public Library’s Digital Experience Group, which I came across this week. It is based on the ideas behind fivesecondtest, a tool I have come across in the past. On the same premise, it asks users to answer a single question, either multiple choice or a design question asking where on a page they would click to complete a specified task. You can also compare two designs and test users’ recall of features. The beauty of the concept is that it demands very little of the user’s time: answering one question can take only a few seconds, which proved attractive to users, as the response rate was very high. So much so that the developers found that 90% of users wanted to answer more than one question, behaviour which is difficult to elicit through traditional market research methods.
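To make the one-question model concrete, here is a rough sketch of the two question types described above. This is purely illustrative, in Python; it is not Infomaki’s actual data model or code:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One question shown to a visitor: either multiple choice
    or a 'where would you click?' screenshot task.
    Illustrative sketch only, not Infomaki's implementation."""
    prompt: str
    kind: str                                    # "choice" or "click"
    options: list = field(default_factory=list)  # used when kind == "choice"
    answers: list = field(default_factory=list)  # collected responses

    def record(self, answer):
        # For "click" questions the answer is an (x, y) coordinate;
        # for "choice" questions it must be one of the options.
        if self.kind == "choice" and answer not in self.options:
            raise ValueError("answer must be one of the options")
        self.answers.append(answer)

q = Question("Where would you click to search the catalogue?", kind="click")
q.record((120, 45))  # one visitor's click position on the screenshot
```

Because each response is just one answer to one question, the cost to the participant is seconds, which is exactly why the response volumes are so high.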

Evidence suggests that tools like this are extremely successful at gathering a large volume of quantitative data, which can help to back up one-to-one usability test data. The developers also plan to incorporate features that collect demographic data, which will add even more value to the tool as it will help in the construction of user personas.

Tools such as Infomaki are particularly useful to those working in digital libraries without a dedicated user research team. Being open source, it is free and can be set up by anyone interested in gathering data about their digital library.

There is more information on the software available, in addition to the article on Code4Lib by Michael Lascarides, which can be found here:

Usability Week, Berlin

My ux2 colleague Boon returned from Nielsen Norman Group Usability Week in Berlin with lots of knowledge to share. One of the many things that came out of his time there was the idea of persona creation for digital libraries. After following a couple of leads we were pointed to the work of the Max Planck Digital Library and the personas they created. If anyone else knows of other work on persona creation for digital libraries, please let us know, thanks.
