Researching Usability

Archive for the ‘Evaluation’ Category

Screen shot of University of Edinburgh’s AquaBrowser with resource discovery services highlighted.

Background

The aim of the AquabrowserUX project was to evaluate the user experience of AquaBrowser at the University of Edinburgh (UoE). The AquaBrowser catalogue is a relatively new digital library service at UoE, provided alongside the Classic catalogue delivered via Voyager, which has been established at the university for a number of years. A holistic evaluation was conducted throughout, with a number of activities taking place: a contextual enquiry of library patrons within the library environment, stakeholder interviews for persona creation, and usability testing.

Intended outcome(s)

The objectives of the project were three-fold:

  1. To undertake user research and persona development. Information gathered from the contextual enquiry and stakeholder interviews was used to create a set of personas which will benefit the project and the wider JISC community. The methodologies and processes used were fully documented in the project blog.
  2. To evaluate the usefulness of resource discovery services. Contextual enquiry was conducted to engage a broader base of users. The study examined usefulness both on site and off site, providing a more in-depth understanding of usage and behavioural patterns.
  3. To evaluate the usability of resource discovery systems. Using the personas derived from the user research, typical end users were recruited to test the usability of the AquaBrowser interface. A report was published which discusses the findings and makes recommendations on how to improve the usability of UoE’s AquaBrowser.

The challenge

There were a number of logistical issues that arose after the project kicked off. It became apparent that none of the team members had significant experience in persona development. In addition, the external commitments of subcontracted team members meant that progress was slower than anticipated. A period of learning was needed to research established methodologies and processes for conducting interviews and analysing data. Consequently the persona development took longer than anticipated, which delayed the recruitment of participants for usability testing (Obj3). The delay also meant that participants would be recruited during the summer months, when the university is traditionally much quieter. Recruiting during this period was a potential risk but did not end up being problematic for the project. However, the extra time spent gathering qualitative data meant that it was not possible to validate the segmentation with quantitative data. This was perhaps too ambitious for a project of this scale.

The challenge of conducting the contextual enquiry within the library was to find participants willing to be observed and interviewed afterwards. The timing once again made this difficult, as it took place during exams. The majority of people in the library at that time were focussed on one task: revising for exams. This meant that persuading them to spend ten minutes talking to researchers was understandably difficult. In addition, the users common in the library at that time were largely limited to those revising, whose needs were specific to a task and did not necessarily represent their behaviour at other times of the year.

Ensuring that real use data and participation was captured during the contextual enquiry was also a challenge. Capturing natural behaviour in context is often difficult to achieve and carries a risk of influence from the researcher. For example, to observe students in the library ethically it is necessary to inform subjects that they are being observed. However, the act of informing users may cause them to change their behaviour. In longitudinal studies the researcher is reliant on the participant self-reporting issues and behaviour, something which they are not always qualified to do effectively.

Recruitment for the persona interviews and usability testing posed a challenge, not only in finding enough people but also the right type of people. Finding users from a range of backgrounds, with differing levels of exposure to AquaBrowser, who fitted one of the personas was potentially difficult and time-consuming. As it turned out, recruitment of staff (excluding librarians) proved to be difficult and was something that we did not manage to overcome.

Established practice

Resource discovery services for libraries have evolved significantly. There is an increasing use of dynamic user interfaces. Faceted searching, for example, provides a “navigational metaphor” for Boolean search operations. AquaBrowser is a leading OPAC product which provides faceted searching and new resource discovery functions in the form of its dynamic Word Cloud. Early studies have suggested a propensity of faceted searching to result in serendipitous discovery, even for domain experts.

Closer to home, the University of Edinburgh library conducted usability research earlier this year to understand users’ information seeking behaviour and identify issues with the current digital service, in order to create a more streamlined and efficient system. The National Library of Scotland also conducted a website assessment and user research on their digital library services in 2009. This research included creating a set of personas. Beyond this, the British Library are also in the process of conducting their own user research and persona creation.

The LMS advantage

Creating a set of library personas benefits the University of Edinburgh and the wider JISC community. The characteristics and information seeking behaviour outlined in the personas have been shown to be effective templates for the successful recruitment of participants for user studies. They can also help shape future developments in library services when consulted during the design of new services. The persona hypothesis can also be carried over to other institutions that may want to create their own set of personas.

The usability test report highlights a number of issues, outlined in Conclusions and Recommendations, which the university, AquaBrowser and other institutions can learn from. The methodology outlined in the report also provides guidance to those conducting usability testing for the first time and looking to embark on in-house recruitment instead of using external companies.

Key points for effective practice

  • To ensure realism of tasks in usability testing, user-generated tasks should be created with each participant.
  • Involve as many stakeholders as possible. We did not succeed in recruiting academic staff and were therefore unable to evaluate this user group. However, the cooperation with Information Services through project member Liza Zamboglou generated positive collaboration with the library during the contextual enquiry, persona interviews and usability testing.
  • Findings from the user research and usability testing suggest that the resource discovery services provided by AquaBrowser for UoE can be improved to make them more useful and easier to operate.
  • Looking back over the project and the methods used to collect user data we found that contextual enquiry is a very useful method of collecting natural user behaviour when used in conjunction with other techniques such as interviews and usability tests.
  • The recruitment of participants was successful despite the risks highlighted above. The large number of respondents demonstrated that recruitment of students is not difficult when a small incentive is provided and can be achieved at a much lower cost than if a professional recruitment company had been used.
  • It is important to consider the timing of any recruitment before undertaking a user study. To maximise potential respondents, it is better to recruit during term time than between terms or during quieter periods. Although the response rate during the summer was still sufficient for persona interviews, the response rate during the autumn term was much greater. Academic staff should also be recruited separately through different streams in order to ensure all user groups are represented.

Conclusions and recommendations

Overall, the project outcomes from each of the objectives have been successfully delivered. The user research provided a great deal of data which enabled a set of personas to be created. This artifact will be useful to UoE’s digital library by providing a better understanding of its users, which will come in handy when embarking on any new service design. The process undertaken to create the personas was also fully documented, and this in itself is a useful template for others to follow for their own persona creation.

The usability testing has provided a report (Project Posts and Resources) which clearly identifies areas where the AquaBrowser catalogue can be improved. The usability report makes recommendations that, if implemented, have the potential to improve the user experience of UoE’s AquaBrowser. Based on the findings from the usability testing and contextual enquiry, it is clear that the issue of AquaBrowser’s context and its position alongside the other OPAC (Classic) must be resolved. The opportunity for UoE to add additional services such as an advanced search and a bookmarking system would also go far in improving the experience. We recommend that AquaBrowser and other institutions also take a look at the report to see where improvements can be made. Evidence from the research found that the current representation of the Word Cloud is a significant issue and should be addressed.

The personas can be quantified and used to inform future recruitment and design. All too often users are considered too late in a design (or redesign and restructuring) process. Assumptions are made about ‘typical’ users which are based more on opinion than on fact. With concrete research behind comprehensive personas it is much easier to ensure that developments will benefit the primary user group.

Additional information

Project Team

  • Boon Low, Project Manager, Developer, boon.low@ed.ac.uk – University of Edinburgh National e-Science Centre
  • Lorraine Paterson, Usability Analyst, l.paterson@nesc.ac.uk – University of Edinburgh National e-Science Centre
  • Liza Zamboglou, Usability Consultant, liza.zamboglou@ed.ac.uk – Senior Manager, University of Edinburgh Information Services
  • David Hamill, Usability Specialist, web@goodusability.co.uk – Self-employed

Project Website

PIMS entry

Project Posts and Resources

Project Plan

Conference Dissemination

User Research and Persona Development (Obj1)

Usefulness of Resource Discovery Services (Obj2)

Usability of Resource Discovery Services (Obj3)

AquabrowserUX Final Project Post

Now that the usability testing has been concluded, it seemed an appropriate time to evaluate our recruitment process and reflect on what we learned. Hopefully this will provide useful pointers to anyone looking to recruit for their own usability study.

Recruiting personas

As stated in the AquabrowserUX project proposal (Objective 3), the personas that were developed would help in recruiting representative users for the usability tests. Having learned some lessons from the persona interview recruitment, I made a few changes to the screener and added some new questions. The screener questions can be seen below. The main changes included additional digital services consulted when seeking information, such as Google|Google Books|Google Scholar|Wikipedia|National Library of Scotland, and an open question asking students to describe how they search for information such as books or journals online. The additional options reflected the wider range of services students consult as part of their study; the persona interviews demonstrated that these are not limited to university services.

The open question had two purposes. Firstly, it collected valuable details from students in their own words, which helped to identify which persona or personas the participant fitted. Secondly, it went some way to revealing how good the participant’s written English was and, potentially, how talkative they were likely to be in the session. Although this is no substitute for telephone screening, it certainly helped, and we found that every participant we recruited was able to talk comfortably during the test. As recruitment was being done by myself rather than outsourced to a third party, this seemed the easiest solution at the time.

When recruiting personas, the main things I was looking for were the user’s information seeking behaviour and habits. I wanted to know what users typically do when looking for information online and the services they habitually use to help. The questions in the screener were designed to identify these things while also differentiating respondents into one type of (but not always exclusive) persona.

Screener Questions

The user research will be taking place over a number of dates. Please specify all the dates you will be available if selected to take part.

26th August | 27th August | 13th September | 14th September

What do you do at the university?

Undergraduate 1st | 2nd | 3rd | 4th | 5th year | Masters/Postgraduate | PhD

What is your program of study?

Which of the following online services do you use when searching for information, and roughly how many hours a week do you spend on each?

Classic catalogue | Aquabrowser catalogue | Searcher | E-journals | MyEd | PubMed | Web of Knowledge/Science | National Library of Scotland | Google Books | Google Scholar | Google | Wikipedia

How many hours a week do you spend using them?

Never|1-3 hours|4-10 hours|More than 10 hours

How much time per week do you spend in any of Edinburgh University libraries?

Never|1-3 hours|4-10 hours|More than 10 hours

Tell me about the way you search for information such as books or journals online.
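As an aside, the usage questions above lend themselves to a rough, automated first pass at persona matching. The sketch below is purely illustrative and was not part of the project: the persona names, the services linked to them, and the weightings are hypothetical placeholders, and in practice judgement was still needed to decide which persona a respondent fitted.

    # Hypothetical sketch: scoring one screener response against persona profiles.
    # The persona names, the services linked to them, and the usage-band weights
    # are illustrative placeholders, not the project's actual personas.
    PERSONA_PROFILES = {
        "course-focused undergraduate": {"Google", "Wikipedia", "Classic catalogue"},
        "research postgraduate": {"E-journals", "Web of Knowledge/Science", "Google Scholar"},
    }

    # Map the screener's usage bands to rough weekly hours.
    USAGE_BANDS = {"Never": 0, "1-3 hours": 2, "4-10 hours": 7, "More than 10 hours": 12}


    def score_personas(usage_answers: dict[str, str]) -> dict[str, int]:
        """Return a rough score per persona from the 'hours per week' answers."""
        return {
            persona: sum(
                USAGE_BANDS.get(usage_answers.get(service, "Never"), 0)
                for service in services
            )
            for persona, services in PERSONA_PROFILES.items()
        }


    # Example respondent: mainly an e-journal and Google Scholar user.
    respondent = {"E-journals": "4-10 hours", "Google Scholar": "1-3 hours", "Google": "1-3 hours"}
    print(score_personas(respondent))
    # {'course-focused undergraduate': 2, 'research postgraduate': 9}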

Things we learned

There are a number of things we would recommend doing when recruiting participants, which I’ve listed below:

  1. Finalise recruitment by telephone, not email. Not surprisingly, I found that it’s better to finalise recruitment by telephone once you have received a completed screener. It is quicker to recruit this way as you can determine a suitable slot and confirm a participant’s attendance within a few minutes rather than waiting days for a confirmation email. It also provides insight into how comfortable the respondent is when speaking to a stranger which will affect the success of your testing.
  2. Screen out anyone with a psychology background. This is something of an accepted norm amongst professional recruitment agencies but something which I forgot to include in the screener. In the end I only recruited one PhD student with a Masters in psychology, so it did not prove much of a problem in this study. Often these individuals do not carry out tasks in the way they normally would, instead examining the task and often trying to beat it. This can provide inaccurate results which aren’t always useful.
  3. Beware of participants who only want to take part to get the incentive. They will often answer the screener questions in a way they think will ensure selection rather than honestly. We had one respondent who stated that they used every website listed more than 10 hours a week (the maximum value provided). It immediately raised flags and consequently that person was not recruited (a rough sketch of this kind of check follows this list).
  4. Be prepared for the odd wrong answer. On occasion, we found out during the session that something the participant said they had used was in fact something they had never seen before, and vice versa. This was particularly tricky because students often aren’t aware of Aquabrowser by name and are therefore unable to accurately describe their use of it.
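Following on from point 3 above, a screener response can also be sanity-checked automatically before anyone is invited. This is a hypothetical sketch rather than anything used in the study:

    # Hypothetical check: flag respondents who select the maximum usage band for
    # every service listed, which (as in point 3) suggests they are answering to
    # get selected rather than honestly.
    MAX_BAND = "More than 10 hours"


    def looks_like_straight_lining(usage_answers: dict[str, str]) -> bool:
        """Return True if every listed service was reportedly used in the top band."""
        return bool(usage_answers) and all(
            band == MAX_BAND for band in usage_answers.values()
        )


    suspicious = {"Classic catalogue": MAX_BAND, "Google": MAX_BAND, "Wikipedia": MAX_BAND}
    print(looks_like_straight_lining(suspicious))  # True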

Useful resources

For more information on recruiting better research participants check out the article by Jim Ross on UX Matters: http://www.uxmatters.com/mt/archives/2010/07/recruiting-better-research-participants.php. There is also a similarly useful article by Abhay Rautela on Cone Trees with tips on conducting your own DIY recruitment for usability testing: http://www.conetrees.com/2009/02/articles/tips-for-effective-diy-participant-recruitment-for-usability-testing/.

Have I missed anything? If there is something I’ve not covered here please feel free to leave a comment and I’ll make sure I respond. Thanks

As mentioned in a previous blog post, a review of how libraries are currently engaging with Web 2.0 was proposed as part of the ongoing research for Workpackage 2. This is not currently a top priority for the project, which means the posts will be published over a period of time. However, this first part of the review introduces the data gathering method and some of the current theory of attitudes towards Twitter.

MinXuan Lee wrote about the 5 Stages of Twitter Acceptance in her slideshow, ‘How Twitter Changed My Life’. She effectively describes the range of behaviours typically displayed by people to represent their experience of Twitter. Each stage (Denial, Presence, Dumping, Conversing, and Microblogging) maps attitudes towards Twitter before using it, through to its use for ‘true microblogging’. Users can often identify with these stages at some point during their experience. Rightly or wrongly, some users may only aspire to ‘dumping’ and not have the desire to ‘converse’ or write a microblog. This review hopes to find out what stage of acceptance libraries on Twitter are currently at. Furthermore, the findings will suggest which libraries are more successful on Twitter and the reasons behind this. This will provide other libraries with an idea of how to get the best from Twitter and ensure that it meets their needs.

Information was gathered on a random sample of libraries with existing Twitter accounts. The accounts were predominantly sourced through the Libraries and Web 2.0 Wiki and CILIP’s Twitter Libraries List. Thirty libraries were selected which had their own dedicated Twitter accounts. Any accounts which served a wider audience, such as a council, were not included. Data was gathered on each library using tweetstats.com between 22nd and 26th March 2010. Using this tool in addition to Twitter it was possible to gather the following information:

  • Number of followers
  • Number following
  • Number of lists users have created
  • Number of tweets to date
  • Does the library retweet content created by others?
  • Does the library reply to tweets?
  • What Twitter clients does the library use to create tweets (in order of use)?
  • Date joined Twitter (month/year)
  • Does the library have a Facebook page?
  • Does the library have a Flickr page?
  • Does the library have any other social media accounts, if so what?
  • Does the library have their own blog or news feed with comment facility?

The initial data was gathered and placed on the UX2 wiki page for everyone to access.
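For anyone repeating this kind of audit, the fields above map naturally onto a simple structured record. The sketch below is illustrative only (the project recorded the data on the wiki rather than in code) and the field names are my own:

    from dataclasses import dataclass, field


    # Illustrative record for the per-library data gathered from Twitter and TweetStats.
    @dataclass
    class LibraryTwitterAccount:
        name: str
        followers: int
        following: int
        lists_count: int                      # number of lists, as recorded in the study
        tweets_to_date: int
        retweets_others: bool                 # retweets content created by others?
        replies_to_tweets: bool
        clients_used: list[str] = field(default_factory=list)  # in order of use
        joined: str = ""                      # month/year the account joined Twitter
        has_facebook: bool = False
        has_flickr: bool = False
        other_social_media: list[str] = field(default_factory=list)
        has_blog_with_comments: bool = False


    # Example entry (values are invented for illustration).
    example = LibraryTwitterAccount(
        name="Example City Library", followers=350, following=120, lists_count=15,
        tweets_to_date=480, retweets_others=True, replies_to_tweets=False,
        clients_used=["Twitter.com", "TweetDeck"], joined="06/2009",
    )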

A number of questions arose while conducting the research which will be discussed in future blog posts. More questions will hopefully be added as they arise:

  • Do libraries advertise their Twitter account (and other social media pages) elsewhere, e.g. on the library website?
  • If libraries have few followers, is there a reason for this? If so what?
  • What are the most popular Twitter clients used among libraries?
  • Is the level of engagement among libraries related to the type of Twitter client they use?
  • To what extent do libraries ‘Converse’ using Twitter?
  • Which level of ‘Twitter Acceptance’ are most libraries aligned to?
  • What should libraries do if they want to engage more with people on Twitter?

Some initial findings are that many libraries are using Twitter mainly as a broadcast medium and less as a microblogging medium. A high number of libraries are also still using Twitter.com to communicate rather than third-party clients.

Overview of Twitter Use in Libraries

For those not involved in Higher and Further Education, JISC provides a service called JISCMail which facilitates knowledge sharing within the UK using mailing list based services. One of these mailing lists, Web 2.0, is for anyone interested in Web 2.0 and its use in libraries. This month there has been an ongoing and interesting discussion (via the public library mailing list) on the evaluation of Web 2.0 services in libraries. This is something which is relevant to our project and in particular Workpackage 2: to undertake usability inspection and contemporary UX techniques research. Out of this discussion it seemed appropriate to examine how libraries are currently engaging with Web 2.0 and the impact it is having on users.

Something which Phil Bradley said in his response to the discussion stood out:

Measurement and evaluation has to be linked more to the activity than anything else

Quite often we concentrate on the technology or platform, such as Twitter or Facebook, when often it’s the experience that should be considered first. Often there are multiple tools available to do the same job. In addition, tools are increasingly working together to ‘mash up’ technology into a single service for end users. Users are more interested in obtaining information to fulfil their needs or complete specific tasks than in the technology being used.

I am hoping to write a series of blog posts on this subject providing a snapshot of Web 2.0 use in libraries. This will help to identify trends as well as the more innovative things being done which other libraries might be able to learn from. It won’t be exhaustive but will hopefully provide a grounding for the project’s development work while also being useful to those working in libraries.

I have included the list of resources which were provided to the mailing list during the discussion. Thank you to everyone who contributed, the information will provide a valuable starting point for the overview. If you know of other resources please feel free to post them here.

Edinburgh Coffee Morning Quiz of the Year

This morning Edinburgh’s techy collective lined up to take part in the annual social media quiz organised by Mike Coulter. As a regular at the weekly coffee mornings, I was excited to take part this year and test my knowledge. As it turned out I have a lot to learn, especially in knowing the meaning of acronyms like ASCII, JPG and USB, recognising company headquarters and famous tech faces, or knowing the Google protocol for RSS feeds. The winner will be announced later today but I doubt our team are anywhere in the running. As they say, it’s the taking part that counts! Below is an image taken by Brendan MacNeill of me dutifully writing our answers down. Thanks to Mike for organising such a fun event for us geeks.

Information System Success Model

The next theoretical model that I look at this week is the Information System Success Model, first developed by William H. DeLone and Ephraim R. McLean in 1992. As the title suggests, the framework was designed to provide a comprehensive way of measuring the success of an information system. The premise is that “systems quality” measures technical success; “information quality” measures semantic success; and “use”, “user satisfaction”, “individual impacts” and “organisational impacts” measure effectiveness success. The model was later streamlined into a revised version in 2003 which highlights the three main components of an information service: “information” (content), “system”, and “service” (see diagrams below). Each of these factors has an impact on the user: their intention to use the system (discussed in the TAM framework as ‘acceptance of a system’) and their actual usage of it. These factors in turn influence user satisfaction, and this provides an indication of the ultimate impact of the system on the user, group of users, organisation or industry. The net benefits can be scaled, so the researcher can decide the context in which they are to be measured, keeping the model useful in any situation.
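The original post included diagrams of the revised model at this point. As a rough stand-in (an illustrative simplification of the structure just described, not DeLone and McLean’s own notation), the causal links can be summarised like this:

    # Simplified sketch of the revised (2003) Information System Success Model:
    # each construct on the left influences the constructs listed on the right.
    # In the full model, net benefits also feed back into use and satisfaction.
    SUCCESS_MODEL_LINKS = {
        "information quality": ["intention to use / use", "user satisfaction"],
        "system quality": ["intention to use / use", "user satisfaction"],
        "service quality": ["intention to use / use", "user satisfaction"],
        "intention to use / use": ["user satisfaction", "net benefits"],
        "user satisfaction": ["intention to use / use", "net benefits"],
    }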

There are a number of similarities between the Success Model and other models examined in this blog, including ITF and TAM. In addition to the parallels between the Success Model’s ‘Intention to use’ and ‘Use’ and TAM’s acceptance model, there are also overlaps with the Interactive Triptych Framework. Examining the system and content individually as a means of understanding the impact on the user’s behaviour (intentions and usage) mirrors ITF’s assessment of the usability and usefulness of the system and content to the user. In the same vein, usefulness and usability are also paramount when evaluating user acceptance in the Technology Acceptance Model. In this respect all three frameworks are similar. Where the Revised Success Model differs is in its application of these measurements. Where TAM evaluates whether a system will be accepted by users, the Success Model can generate a list of measurable benefits which can be used to gauge the system’s success. This provides the opportunity to evaluate the success of a system over time, as users become more familiar with it.

DeLone and McLean believe that the rapidly changing environment of information systems does not require an entirely new set of measures. They recommend identifying success measures which already exist and have been validated through previous application, enhancing and modifying them where necessary. New, untested measures should be adopted only as a last resort. ITF and TAM have demonstrated enough similarity in their approach to rule them out as new. While TAM has received extensive testing in previous research, ITF is still relatively young. Adopting the ITF model for the UX2.0 project will hopefully further the research in this area.

Girl Geek Dinners: Edinburgh

This week I attended the 3rd Girl Geek Dinner in Edinburgh, hosted by The Informatics Forum at Edinburgh University. Girl Geeks is for women (and men!) interested in technology, creativity and computing. The speakers, Emma McGrattan and Lesley Eccles, provided entertaining, candid and very interesting talks on their own experiences of working in technological sectors. I attended the first dinner in Edinburgh last year and have noticed how successful it has become, thanks to the wonderful work done by the organisers. The events attract a real mixture of professionals and students with a variety of interests. The passion for technology that everyone brings to the event always leaves me with real optimism and inspiration for the future. Long may these types of events continue.

Free Digital UX Books

For those who missed it or don’t follow me on Twitter, I came across a useful list of free user experience Ebooks compiled by Simon Whatley (link provided via @BogieZero). I personally recommend reading Search User Interfaces by Marti A. Hearst. If you know of any other free Ebooks please feel free to leave a link here or on Simon’s blog.

Bias is an issue that anyone gathering user data is wary of. Whether it’s usability testing, face-to-face interviews or online questionnaires, bias can affect the strength and integrity of results in a variety of ways. Question design is one of the most influential factors and should therefore be given careful consideration. Leading questions can inadvertently give participants an idea of the desired answer and influence their response. However, sampling bias can also have a significant effect on research results and is often overlooked by researchers.

I was reading Europeana’s Annual Report this week and noticed that the results from their online visitor survey were on the whole very positive. Reading the survey report in more detail, I realised it was possible that sample bias may be affecting the survey results. Data from online visitor surveys is normally gathered using an intercept which invites a visitor to participate in the research when they arrive at the site. Anyone visiting the site who receives this invite is eligible to participate, making them ‘self-selected’. This means that they decide to participate, not the researcher. Their motivation for participating may be related to the topic of the survey or the incentive provided to garner interest. Consequently their participation is unlikely to provide a representative sample.

For example, those who participated in Europeana’s survey are more likely to be motivated by their enthusiasm and interest in the website. Certainly those who are apathetic or indifferent to the website are less likely to have participated. This is supported by the proportion of participants who were regular visitors to the site. Only 8.6% of participants were first time visitors, and the results from these participants were generally more positive than those from participants who had visited the site before. It would be interesting to find out if a larger sample of first time users would alter these results.

So what can researchers do to prevent sample bias in their results? It is very difficult to completely remove sample bias, especially in online surveys where the researcher has no control over who participates. Generally speaking, visitor surveys will always carry the risk of bias, so the aims of the survey should take this into account. Designing a mixture of open and closed questions will provide some insight into the participant’s motivation. Descriptive answers which require more thought are less likely to be fully answered by those motivated by the incentive. This also provides the added benefit of giving users the opportunity to provide their own feedback. It is interesting to note that Europeana did not do this, leading some participants to email their comments to the researchers. Providing an optional section at the end of the survey for final comments could have provided rich feedback not obtained through closed questions. Indeed, the comments Europeana received often described situations where users had trouble using the site or disliked a particular design feature.

Avoid asking questions which relate to the user’s overall opinion of the system before they have used all the features, as this will not provide accurate results. For example, 67% of users stated they had never used the “My Europeana” feature before and were therefore unable to provide feedback on it. Usability testing often provides more insight into these issues by gathering this information retrospectively, after asking a user to carry out tasks using the site. If it’s possible to use survey software which can do this then it is recommended, because it is more likely to gather meaningful results. It is only after trying to complete a task that a user will be able to accurately describe their experience.

It is worth noting that Europeana have also conducted user testing with eyetracking in addition to focus groups and expert evaluations. The results of these are due to be published soon and I look forward to reading them. It will be interesting to compare the results against our heuristic inspection of Europeana and other DLs.

Heuristic report

This week the heuristic inspection report has been published and is available to read. If you would like to read it, feedback is very welcome. The document is available in Word or as a PDF from the NeSC digital library: http://bit.ly/ux2inspectionreport. It is a sizeable document so thanks in advance for taking the time to read it! 🙂

Not what you know, nor who you know, but who you know already

This is a research paper which was a collaboration between myself, Hazel Hall and Gunilla Widén-Wulff. The research was undertaken when I first graduated from my Masters in 2007, and this week I received the good news that it will be published in Libri: International Journal of Libraries and Information Services at some point this year. The paper examines online information sharing behaviours through the lens of social exchange theory. My contribution was the investigation into the commenting behaviour of undergraduate students at Edinburgh’s Napier University as part of their coursework. I’m very excited by this news as it is only my second publication. I look forward to seeing it in print and will provide details here if it becomes available online.

TAM part 2: revised acceptance model by Bernadette Szajna

Another paper which I read this week was ‘Empirical Evaluation of the Revised Technology Acceptance Model’ by Bernadette Szajna (1996). In this paper Szajna uses the revised Technology Acceptance Model (TAM) from Davis et al. (1989) to measure user acceptance of an electronic mail system over a 15-week period in a longitudinal study. By collecting data from participants at different points in the study she was able to reveal that self-reported usage differed from actual usage and that, as a consequence, it may not be appropriate as a surrogate measure. This supports what those who’ve been running usability tests have been saying for a while: what users say and what they do are seldom the same. In user research terms this means that observing what users do during their interaction with a system is as important as what they say about their experience.

In addition, the paper revealed that “unless users perceive an IS as being useful at first, its ease of use has no effect on the formation of intention”. This struck a chord with me because, as a usability professional, I often assume that ease of use is a barrier to the usefulness of a system; if a user does not know how to manipulate the interface they are unable to discover the (possibly useful) information below the surface. Then, when I was considering the usefulness of Twitter groups, I realised that it began to follow the same pattern. Twitter groups is a recent addition to Twitter and available to users. It allows those you are following to be categorised into self-named groups; its best application, for example, is as a means for users to differentiate their professional connections from personal ones. In theory it is a good idea and one which I thought I might use, as a way of separating out different networks would certainly make them easier to monitor. I can’t imagine it being too difficult to set up a group if I so wished, but the problem is that I never considered it useful for me to do so and consequently I never did (note: I created a private group today to test my theory). The reason in this case is that I rarely use Twitter’s website to monitor or communicate with those I’m following. There are many different client managers, such as TweetDeck, which can do this for me. I’m sure there are a few people out there who have created groups and view them regularly, but could these people be in the minority? I’d be interested to test my theory so any comments on your own Twitter group behaviour are welcome.

My conclusion is that (for me) the usefulness of the groups tool was a greater barrier to use than the ease of creating a group, verifying Szajna’s findings. This illustrates how important usefulness is to the user acceptance of technology and is therefore something that should be evaluated in every system to ensure success.

Mendeley Webinars

Lastly, Mendeley’s directors are hosting webinars which will provide an introduction to its features, including inserting citations and using the collaborative tools. The webinars will be held on Tuesday, February 23, 2010, 5:00 PM – 6:00 PM GMT and Wednesday, February 24, 2010, 9:00 AM – 10:00 AM GMT respectively. I have signed up for the webinar on Wednesday and look forward to learning more. So far I’ve managed to add items to my library and connect with others online but don’t feel I have exploited its features fully, and am having difficulty amending my bibliography in Word. Hopefully this webinar will provide help and advice.

