
Uncovering User Perceptions of Research Activity Data

Cecilia Loureiro-Koechlin discusses the outcomes and lessons learned from user tests performed on the Oxford Blue Pages, a tool designed to display information about researchers and their activities at the University of Oxford.

Competition, complex environments and the need for sophisticated resources and collaborations compel Higher Education institutions (HEIs) to look for innovative ways to support their research processes and to improve the quality and dissemination of their research outcomes. Access to, management of and sharing of information about research activities and researchers (who, what, when and where) lie at the heart of these needs and drivers for improvement. The planning of new research needs to take account of current and previous related activities, and of relevant expertise for collaboration, which may cross subject field boundaries. Research outcomes are published in renowned academic journals and conferences, and are also disseminated by other means, such as university and subject field Web sites. Information about these various forms of dissemination is also essential to the development of research strategies. Yet in some places information of this type is not easily accessible, nor is it presented in ways that provide a clear overview either within a research field or of the interconnections within and beyond an institution.

How could technologies make information about research accessible in order to support research and related activities in HEIs? As technologies develop, new and sophisticated ways of managing and shaping data about research emerge. Data can be aggregated from multiple sources, classified and presented in multiple ways. Similarities between sets of data can be identified and transformed into connections, which in turn become useful sets of information for research purposes. However, as all these developments evolve, we need to make sure they remain in line with the changing needs of academia. The question to ask, therefore, is: how can we assess the usefulness of these technologies to research?

This article discusses the testing of the Oxford Blue Pages, a Web service designed to display multiple views of information about research gathered from many and disparate sources. Through tests, insights from potential users were obtained which guided the development process of the software. These tests also proved to be extremely effective in gathering perspectives on the usefulness of the tool in several research and administrative contexts. These perspectives complement an initial analysis of stakeholders' interests and provide information about new uses both for the Blue Pages and for information about research.

Building the Research Information Infrastructure (BRII) is a JISC-funded project [1] which aims to support the efficient sharing of Research Activity Data (RAD) captured from a wide range of sources. BRII develops an infrastructure that harvests and archives RAD, together with Web services which disseminate and reuse this kind of data, using a lightweight solution based on semantic web technologies. Phases of the project include: a stakeholder analysis to collect views from interested parties (e.g., academics and administrators); an iterative development process which uses information collected in the analysis phase; and an embedding and sustainability phase in which user acceptance is assessed and strategies to support the expansion of the research information infrastructure are designed. Outcomes of BRII include: an application programming interface (API) for harvesting and querying data; a collection of ontologies and taxonomies used to organise and classify data; a themed Web site; and the Oxford Blue Pages displaying RAD in creative ways. By facilitating access to RAD, BRII expects to improve the research visibility of the institution and its research impact, as well as to boost collaboration.
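
To make this 'lightweight semantic web' approach concrete, the following is a minimal, self-contained sketch of storing and querying RAD as RDF, using the Python rdflib library. The rad: vocabulary and the sample data are invented for illustration; they are not BRII's actual ontologies or API.

    # A minimal sketch of the semantic-web approach to RAD, using rdflib.
    # The rad: vocabulary and the sample data are invented for illustration;
    # they are not BRII's actual ontologies or API.
    from rdflib import Graph

    turtle_data = """
    @prefix rad: <http://example.org/rad/> .

    rad:alice a rad:Person ;
        rad:name "Alice Smith" ;
        rad:worksOn rad:proj1 .

    rad:proj1 a rad:ResearchProject ;
        rad:title "Gene Mapping" ;
        rad:fundedBy rad:epsrc .

    rad:epsrc a rad:Funder ;
        rad:name "EPSRC" .
    """

    g = Graph()
    g.parse(data=turtle_data, format="turtle")

    # A SPARQL query across the aggregated data: which researchers work
    # on projects funded by EPSRC?
    q = """
    PREFIX rad: <http://example.org/rad/>
    SELECT ?name ?title WHERE {
        ?person rad:name ?name ; rad:worksOn ?project .
        ?project rad:title ?title ; rad:fundedBy rad:epsrc .
    }
    """
    for name, title in g.query(q):
        print(f"{name} works on {title}")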

RAD are data that describe researchers, projects, funding and outputs. RAD can take the form of long descriptions, as in researchers' biographies, or of lists of words or phrases, such as lists of research interests or of projects in one department. The language of RAD tends to be academic (i.e., specialised terminology), requiring readers to have some knowledge of the subject. BRII gathers publicly available RAD about research in Oxford from internal and external sources, and is dependent on the quality of the data available in those sources. (BRII does not collect sensitive data such as financial information, in conformance with confidentiality agreements.) Within the University of Oxford, RAD are displayed in online sources such as departmental, project and researchers' Web sites, as well as in other sources, such as departmental databases and spreadsheets. RAD about Oxford are also found in external sites such as research councils' Web sites, online journals and databases. Sources of RAD are independent of and disconnected from one another, fulfilling only the purposes of their creators. Content and format depend on and reflect the underlying research culture of the people and activities that are represented. Data about scientists describe their activities as discrete blocks of research, generally carried out in groups and with sophisticated resources. RAD in the humanities describe researchers' interests in longer narratives; their activities are not as clearly delimited as the scientists', but they are recognisable by their outputs (e.g., books).

The main drivers behind the implementation of existing RAD-oriented resources (e.g., Web sites) are certifying ownership, disseminating work and promoting collaboration, as well as increasing research visibility and reputation [2]. Indeed, RAD provide new means of scholarly communication. One common use of RAD is as metadata for research outputs. However, on their own, they can provide comprehensive pictures of research expertise and activities across an institution and/or research fields. These depictions can be useful both to research experts and to non-experts. RAD can be used: to discover research activities and the staff working on them; to uncover connections among people and research fields; and to identify strengths and weaknesses within a department or research area. RAD are also essential in administrative and academic processes, such as finding specific expertise for a project, designing research strategies and writing reports at departmental or divisional level. Needs for RAD also vary in scope: researchers need specialised, narrow-scope information, while administrators or consultants need general information whose scope covers a number of subject fields.


Figure 1: The Oxford Blue Pages, three screenshots from a work in progress version

The Oxford Blue Pages is one tool that BRII is developing to access and display RAD (see Figure 1). Its aim is to provide a directory of research expertise within and beyond the University of Oxford. Through an intuitive design and various search options, it provides access to, and views of, RAD from different angles and depths. The Blue Pages display RAD in the form of objects which represent the different aspects of research at a macro level: 'people', 'research projects', 'funders' and 'academic units'.

The 'people' and 'research projects' objects comprise the core of the Blue Pages. Their information is presented in the form of profiles containing biographies and scientific explanations, such as theoretical and methodological summaries of research, as well as lists of publications. Access to full-text publications is provided from the Oxford University Research Archive (ORA) [3] or from other external sources where available. To a certain extent, profiles can provide insights into aspects of research at a micro level.

Connections to other objects have been implemented within profiles. For example, researchers' profiles contain lists of internal and external collaborators, with links to their profiles if they work in Oxford. Researchers' subject fields may become links to lists of other researchers and research activities in the same area. 'Funders' and 'academic units' are 'connection objects' through which information about people and activities can be found. Their profiles contain short descriptive information and lists of related researchers and activities. For example, the profile of the Engineering and Physical Sciences Research Council (EPSRC), a funding body, contains a list of people and projects funded by the Council; a list of publications resulting from specific funded partnerships could also be generated.

Other kinds of connections can also be unveiled through the Blue Pages, particularly connections which are not necessarily acknowledged by the original creators of RAD sources. For example, a search for the keyword 'genetics' returns lists of people, research projects and publications from sources across research fields (i.e., not only within the medical sciences but also from other areas such as philosophy and law). Users interested in overviews will find those lists extremely useful, while for users interested in specific details, links are displayed to researchers' and projects' profiles.
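
As an illustration of how such objects and connections might be navigated, the sketch below models the four object types as plain Python records and answers the two kinds of question described above: finding collaborators through shared projects, and searching for a keyword across research fields. All names, fields and data are invented for the example.

    # Toy model of the four Blue Pages object types and their connections.
    # All names, fields and data are invented for illustration.
    people = {
        "p1": {"name": "A. Smith", "unit": "u1", "projects": ["r1"]},
        "p2": {"name": "B. Jones", "unit": "u2", "projects": ["r1", "r2"]},
    }
    projects = {
        "r1": {"title": "Genetics and Law", "funder": "f1",
               "keywords": {"genetics", "law"}},
        "r2": {"title": "Medical Genetics", "funder": "f1",
               "keywords": {"genetics", "medicine"}},
    }
    units = {"u1": "Faculty of Law", "u2": "Medical Sciences Division"}
    funders = {"f1": "EPSRC"}

    def collaborators(person_id):
        """People connected to person_id through shared projects."""
        mine = set(people[person_id]["projects"])
        return [p["name"] for pid, p in people.items()
                if pid != person_id and mine & set(p["projects"])]

    def keyword_search(keyword):
        """Cross-field search: projects and people matching a keyword."""
        hits = [rid for rid, r in projects.items() if keyword in r["keywords"]]
        names = [p["name"] for p in people.values()
                 if set(p["projects"]) & set(hits)]
        return [projects[rid]["title"] for rid in hits], names

    print(collaborators("p1"))         # -> ['B. Jones']
    print(keyword_search("genetics"))  # projects and people across fields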

The uses of the Blue Pages outlined above have clear implications for scholarly communication processes. It is expected that the Blue Pages will play a role in the dissemination of Oxford's research outcomes, as well as in the discovery and sharing of research knowledge and expertise, which are crucial to starting collaborations. The Blue Pages can also serve administrative and strategic purposes. At departmental and University levels there is a need to discover hidden connections which may lead to future institutional collaboration. This need may emerge from sponsors wanting to fund original, interdisciplinary research, or from a department's research strategy which identifies gaps or weaknesses in its current research. The Blue Pages will connect information offered at any level to whatever other information is available in the research information infrastructure.

User Tests: Approach and Design

The Oxford Blue Pages, like any other software application, was subject to examination to evaluate its design in relation to the needs of its target audience. With diverse research and administrative communities as potential users, the Blue Pages needed to be designed to cater for a variety of uses. The assessment of the quality of the data and connections presented in the tool assisted the development of the research information infrastructure and the processes of data harvesting. The infrastructure and the Blue Pages are separate entities, and data harvesting is part of the former. However, as infrastructure and data harvesting are rather abstract concepts, invisible to the majority of users, a direct evaluation was not possible. A mediated assessment of RAD quality through the Blue Pages was therefore the best way to obtain further feedback from non-technical people.

The evaluation of the Blue Pages was designed in line with the development strategy [4]. The development of the Blue Pages (and thus the harvesting of RAD) was guided by agile principles. Agile approaches address 'the problems of rapid change: changes in market forces, systems requirements, implementation technology and project staff occurring within a single project's development period' [5]. Agile methodologies place emphasis on individuals and their interactions, on working software, on collaboration with users and customers, and on responding to change [6].

Agile methodologies are adaptive rather than predictive [7]. They de-emphasise up-front analysis and design, and minimise the documentation effort [8]. This does not mean that no analysis is performed; rather, analysis, design and programming are done concurrently, in increments. Software evaluations are carried out after each increment to assess the latest additions and modifications. Their outcomes are the inputs to design, which may lead to further adjustments and new functionality.

User tests were found to be the best method for evaluating the Blue Pages because they provided the BRII team with first-hand feedback from real users and the opportunity to establish a rapport with a cross-section of them. The aim was to achieve user satisfaction by allowing users to offer their input at different stages of development. Problems found by users are unlikely to be discovered by other methods, and witnessing users testing the software has a greater impact on developers [9]. The majority of tests were carried out in the field, i.e., in volunteer testers' offices, to provide a real-life set-up. Although expensive and time-consuming, this strategy allowed BRII to promote the project across the University [4]. The design of the tests was influenced by methods such as Heuristic Evaluation [10][11] and Concurrent and Retrospective Think Aloud protocols. Heuristic evaluation is a method applied at early stages of development [9], in which usability and technology experts perform evaluations following particular usability guidelines. The think-aloud method attempts to capture testers' thoughts. In concurrent think-aloud tests 'participants are asked to vocalize whatever they are thinking as they negotiate the user interface' [12]. This allows the capture of rich information which cannot be obtained in any other way. However, the method can be contrived, biased and misleading, as the tester is asked to do two things at the same time [13]. In retrospective think-aloud tests participants perform the test silently but are asked for feedback at the end of the session. One drawback of this approach is that testers are likely to forget what they were thinking while doing the tasks [14].


Figure 2: Screenshot of the Blue Pages mock-up

Following Bowman et al. [15], the Blue Pages tests involved: one of the developers of the Blue Pages; a facilitator who conducted the session; and a volunteer who acted as a tester. The tests were carried out in rounds of seven to fifteen tests, each lasting 45 minutes on average. As the Blue Pages are still under development, this article reports only on the first four rounds of user tests, which took place over a period of four months. The first round of tests started after a mock-up version of the Blue Pages was finished; the mock-up was made using the software application Balsamiq [16] (see Figure 2). This first round was designed as a combination of heuristic evaluation and user testing. The volunteer testers participating in this early round were IT and library experts who performed a dual role: they are familiar (though not expert) with the concepts of usability, research and research outputs, and they are also part of the potential user base of the Oxford Blue Pages and RAD in general. Outcomes of this round were used to build the first live version of the software, which was then employed in the subsequent rounds. Users for these rounds were selected from the stakeholder groups identified in BRII's stakeholder analysis [2]: researchers, administrators and strategists.

In addition to the groups mentioned above, a fourth group was included, identified during BRII's networking and dissemination activities. This group comprises business consultants who work in collaboration with researchers and who need to respond to requests for specific expertise from industry partners. It was found that their interests in, and requirements of, RAD represented a mixture of those of administrators and strategists. In addition, however, they need to keep records both of researchers who have worked or are working with industry, and of researchers who are willing to participate in industrial partnerships.

While the BRII team tried to recruit as many testers as possible from each of the mentioned groups, more emphasis was placed on obtaining high-quality feedback from in-depth conversations during the tests. This qualitative approach promoted the discovery of hidden patterns of thought and behaviour among testers. Many issues were brought up which would not have been identified had a quantitative approach been taken. This was particularly important in the area of perceived relevance of the Blue Pages and RAD in volunteers' daily activities.

Scripts, Setup and Feedback

Scripts containing tasks and questions were designed for each round of tests (see Figure 3). Tasks were designed to assess the usability of the Blue Pages and to obtain insights into volunteer testers' understanding of the tool in relation to their work. Each task required the volunteer to access at least one functionality. Volunteers were asked to tell the facilitator everything they were thinking while doing the tasks: for example, why they chose one option over another, why they liked or disliked the interface and its functionality, whether they found a functionality easy or difficult to use, or whether they could not understand it. After the last task, volunteers were invited to reflect briefly on the tasks they had carried out, and to explain whether similar tasks would be relevant to their own work. Questions were asked at the beginning and end of the tests. Initial questions were asked in rounds one and two, and were designed to assess testers' first impressions of the interface, particularly the home page, before using the tool. Debriefing questions were asked at the end to assess testers' perceptions of the usefulness of the Blue Pages and of RAD in general (discussed in the next section).


Figure 3: User test script – Second round

The atmosphere of the tests was conversational and relaxed rather than formal. While the laptop with the software was being prepared by the developer, the facilitator broke the ice by asking the volunteer about his or her job and by providing a brief overview of the project. Little information about the Blue Pages was imparted, so as not to bias the tester. Testers were asked to sit between the facilitator and the developer so that all three had a clear view of the screen. A copy of the script was given to the tester, but most of the time the facilitator read the tasks and questions aloud. The developer and the facilitator recorded comments from testers separately. Tests were carried out until feedback reached saturation point, that is, until testers were reporting similar issues consistently and enough information had been gathered to work on another iteration. At the end of each round, meetings between the developers and the facilitator were organised to discuss feedback and to design a strategy for the next development iteration.

Usability, Perceived Ease of Use and Perceived Usefulness

User tests had two main objectives. First, they were necessary to identify usability problems in the user interface. User tests are the most reliable way to achieve usability in software applications [17]. By usability is meant the ease of use of the tool as well as its learnability [15]; this is reflected in the way the interface portrays concepts and contexts and allows their exploration and manipulation. Accordingly, user tests are used to discover major problems in the interface which could 'result in human error, terminate the interaction and lead to frustration on the part of the user' [13]. The term usability also covers the way testers understand concepts and interactions, and the speed at which they can start applying this knowledge when they use the software. Tests are important in uncovering volunteers' perceived ease of use of a tool, that is, 'the degree to which a person believes that using a particular system would be free of effort' [18]. Understanding volunteer testers' perceptions of ease of use is of particular importance to the design of the look and feel of the interface and the terminology used in it.

A second objective was to assess the relevance and usefulness of the Blue Pages and RAD to users' work. By running tests in contexts related to their actual jobs, we were able to reveal the tool's perceived usefulness to the volunteers. Perceived usefulness is the 'degree to which a person believes that using a particular system would enhance his or her job performance' [18]. Through the tests, the BRII team confirmed the needs, benefits and potential uses of RAD identified in the stakeholder analysis. The tests also identified previously unknown benefits and uses, as well as issues related to sensitivities regarding the nature and shareability of data. These are discussed further in the next section.

Ease of use and usefulness are complementary concepts. For example, users who think an application is useful to their job will still not use it if they find it hard to use. Moreover, 'perceived ease of use is predicted to influence perceived usefulness, because the easier a system is to use, the more useful it can be' [19]. Both perceived ease of use and perceived usefulness are determinants of user acceptance of technology [20]: users will adopt a particular technology if they find it useful to their work and easy to use. Understanding perceived ease of use and perceived usefulness helps software designers and test facilitators to assess why users accept or reject a specific technology. Such insights are important factors in the design of systems, as they can lead to crucial alterations to an application and to the processes of its development. They can also influence work undertaken to promote the application.

Lessons Learned

The Oxford Blue Pages user tests proved to be an eye-opener. The information gathered formed the basis for further implementation of the tool, as well as for corrections and improvements to the existing code. The first round of tests, carried out with the Balsamiq mock-up, was a time-saver, as several design decisions were made in a short period of time without the need to produce any code. Subsequent rounds built on the first by adding more detailed input, which I discuss below. For analytical purposes this discussion is divided into outcomes related to software usability and outcomes related to perceptions of usefulness.

Test Outcomes Leading to Usability Improvements

This section explains the main outcomes which led to usability improvements in the Blue Pages user interface. These outcomes are classified according to the four main aspects of the Blue Pages layout and functionality:

1. Display of Information and Categories of Information

As mentioned above, the Blue Pages display information about four main categories of data: people, academic units, funders and research projects. These four objects have to be easily identifiable within the interface. Tests were useful in designing ways to organise the data which represent these objects. One example is the home page, which contains links to the four main categories of information. The four links are presented in a simple style to facilitate easy identification (see the top left screenshot in Figure 1). Volunteers understood this clearly and were able to foresee the kinds of attributes they would find under each category of data. For example, all volunteers understood that publications could be found under 'people.'

Some of them also expected publications to be found under 'academic units' and 'research projects.' Volunteers liked the minimalistic approach and stated that this made the home page user-friendly and easy to use. Another example is the researcher's profile page (see the bottom left screenshot in Figure 1). This page is a representation of an instance of the people object. The 'About' tab contains information basic to that object. Other tabs contain connections to other objects which, seen from another perspective, could be thought of as part of that object (see the 'Research Projects' tab, which lists research projects in which the researcher is working or has worked). Volunteers stated that they liked the tabs because they separated large amounts of information into short sets while keeping all these sets linked together. They also found the tabs easy to navigate.

2. Display of Connections between Objects

These connections are relations between objects and their attributes. For example, researchers are affiliated to one or more academic units, researchers work on projects, and researchers work (collaborate) with other researchers; research activities are carried out by researchers and are sponsored by funders. User tests provided information with which to assess and display these and other connections. Some were easy to assess, for example researchers' affiliations: testers understood them as attributes of researchers and stated that they should be displayed next to, or close to, the names. Testers also expected to find more information about these affiliations by clicking on the names of the academic units displayed in the profiles.

Other kinds of connections were more difficult to convey because of their complex nature. One example is research collaboration (see the right-hand screenshot in Figure 1). Research collaborations are displayed in the 'Collaborators' tab in researchers' profiles. Collaboration is a relationship which involves two or more researchers working together in some kind of research activity. The Blue Pages display the names of researchers who have some form of connection to the researcher on screen. This information is extracted from various sources, such as co-authored publications, research projects or industry partnerships. These sources correspond to the nature of the collaboration, which is also displayed alongside each name. Various mechanisms for organising this information were implemented on the basis of feedback from user tests, one example being the grouping of data: lists can either show the nature of the collaboration (publications or projects) and, when expanded, the name(s) of the people collaborating, or show the names of collaborators and, when expanded, the nature of their collaboration(s). A sketch of this grouping logic is shown below.
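
The sketch assumes collaboration records arrive as simple (collaborator, nature-of-collaboration) pairs; the record format and data are invented for illustration.

    # Sketch of the two groupings of collaboration data described above.
    # The record format and data are invented for illustration.
    from collections import defaultdict

    # (collaborator, nature of collaboration) pairs for one researcher,
    # extracted from e.g. co-authored publications and shared projects.
    records = [
        ("J. Smith", "co-authored publication"),
        ("J. Smith", "research project"),
        ("M. Jones", "research project"),
        ("L. Chen", "industry partnership"),
    ]

    def group_by_nature(records):
        """Nature of collaboration first; expand to see names."""
        groups = defaultdict(list)
        for name, nature in records:
            groups[nature].append(name)
        return dict(groups)

    def group_by_collaborator(records):
        """Collaborator name first; expand to see natures."""
        groups = defaultdict(list)
        for name, nature in records:
            groups[name].append(nature)
        return dict(groups)

    print(group_by_nature(records))
    print(group_by_collaborator(records))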

Testers also asked for more information to be displayed such as collaborators' institutions and countries. This information is particularly important for administrators at divisional and University level who need to classify information.

3. Browsing and Searching

User tests helped the BRII team to identify three kinds of users: browsers, searchers, and users who do both. Tests showed no relation between these preferences and a volunteer's background (researcher, administrator, etc.). From these three user profiles, data were gathered to design the browse and search facilities, and to design the browse and search results pages so that they can be easily navigated. When given a specific keyword, browsers accessed one of the four links on the home page, which displayed lists of objects belonging to that category (e.g., lists of people). The testers then accessed detailed information in profiles by clicking on the links in the lists.

Although browsers may be interested in one specific piece of information, they appreciate coming across other kinds of information while browsing, which may give them a picture of what is going on in their area or in the University. One important issue identified was the ordering and presentation of data. Tests showed that no confusion arose when browsing for people or funders, as users expected them to be ordered by (sur)name. However, this was not the case for the other two categories of data.

In the case of 'academic units,' for example, users preferred information to be organised by the name which represents the main topic of the research area: they would like to find 'Faculty of Philosophy' listed under 'P' rather than 'F.' In the case of research projects, volunteers did not want them listed by name but by research topic or keyword. These two examples show how volunteer testers in general use research topics or keywords to find information (one possible implementation of this ordering is sketched below). Searchers looked for information using a combination of keywords in the simple or advanced search options. They expected search results to be ordered by relevance and to contain enough information to show why they were relevant to the search keyword(s). They were concerned about the time and effort they needed to invest in using the tool. Volunteers who both browsed and searched chose between the two depending on the information to hand: if they knew the name of a person, they used search; if they did not know the correct spelling of a name, they used browse.
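
One way to implement the ordering testers asked for is to sort academic units by a key which strips generic organisational prefixes, so that the topic word comes first. This is a sketch only; the prefix list is illustrative, not exhaustive.

    # Sketch: alphabetise academic units by their topic word rather than
    # their organisational prefix, so 'Faculty of Philosophy' sorts under P.
    # The prefix list is illustrative, not exhaustive.
    PREFIXES = ("Faculty of ", "Department of ", "Institute of ", "School of ")

    def topic_key(unit_name: str) -> str:
        for prefix in PREFIXES:
            if unit_name.startswith(prefix):
                return unit_name[len(prefix):]
        return unit_name

    units = ["Faculty of Philosophy", "Department of Zoology",
             "Said Business School", "Institute of Archaeology"]
    for u in sorted(units, key=topic_key):
        print(u)
    # -> Institute of Archaeology, Faculty of Philosophy,
    #    Said Business School, Department of Zoology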

4. The Terminology

The terms used to describe categories of information are crucial if users are to understand what they can find in the Blue Pages; incorrect wording misleads and confuses users. Tests were used to decide how to name the four main categories. While testers understood the people and funders concepts, they had problems with academic units and research projects. This does not mean that they could not understand the concepts (although some of them were initially confused) but that, in their opinion, other words or terms should be used to facilitate understanding.

A series of names has been tested since round one. Before 'academic units,' names such as 'departments' and 'units' were used, but rejected because testers stated that they did not represent the full range of places where research is carried out in the University. At the time of writing, 'academic unit' is in use, although some late tests have shown that this name may sow confusion among administrative users, who see the term as representing the organisational structure of the University, which may not include informal structures such as research groups. Terms such as 'research' and 'research activity,' the predecessors of 'research projects,' were discarded because volunteers could not make sense of them. The current usage is 'research project,' although volunteer testers have requested clarification of how this term encompasses other kinds of activity, such as research trials and surveys. As a consequence, the design decision has been made to add a short descriptive explanation under the name of each category on the home page.

Discovering Users' Perceptions of RAD

User tests helped to uncover patterns of perception among potential users. These patterns correlate with the different stakeholder groups; that is, volunteer testers belonging to the same group had similar thoughts. This information helped the BRII team build a clearer picture of the Blue Pages' target audience, its interests and its intentions. What follows is a summary of these insights.

The Value of RAD in the Research Infrastructure

While administrative staff and strategists stated that making researchers and their research more accessible through the Blue Pages would be beneficial in publicising academic units and the University, researchers had their doubts. Administrators and strategists were excited about the idea of having aggregated RAD in one place, so that they would not need to do the aggregation manually. They agreed that, as the Blue Pages will be open to the general public, they will facilitate exposure, nationally and internationally, to research communities and potential sponsors. Moreover, by providing an institutional and standardised view of all research taking place within the University, they will facilitate access to RAD for non-experts such as potential students.

Researchers, on the other hand, stated that they disseminate their research through specialised channels such as publications and conferences; the Blue Pages did not seem academic enough for them. However, after using the Blue Pages themselves, researchers began to recognise that being in the Blue Pages would do no harm, particularly because it would require little or no effort on their part. One researcher said that the Blue Pages could constitute a gateway through which a new user base (e.g., students, new researchers and researchers from other fields) could access their research.

Uses of the Blue Pages in Research Activity

Strategists, administrators and researchers agreed that the Blue Pages will be extremely useful in the planning of future activities. Some administrators stated that they would monitor the Blue Pages frequently, searching for news and any information that could lead to new initiatives. The main and obligatory condition for use, though, administrators and strategists stated, was that the Blue Pages should have excellent coverage of data: in terms of quantity (data from most research units within the University and from relevant external sources) and in terms of the nature of the data (covering the whole range of research fields, from the natural sciences to the humanities). Only then would they see value in using the service instead of consulting the several sources which, they knew, collectively held more complete data.

Researchers, for their part, stated that they would use the Blue Pages to look for people with particular expertise within the University with whom they could collaborate. However, they were also interested in good coverage within their research fields, which tend to cross institutional boundaries. Because their first impression of the Blue Pages was that it was restricted to researchers at Oxford University, researchers said it had limited value. After seeing the collaborators page (see the right-hand screenshot in Figure 1), however, some researchers stated that they would be interested in external researchers who have worked with Oxford people.

Uses of the Blue Pages for Administration Purposes

Administrators stated that the Blue Pages, with their aggregated information and added connections among categories of data, would be a time-saver. Currently administrators have to gather information about research from multiple sources and organise it manually according to their needs. They asked for ways to export or download RAD from the Blue Pages in formats which would allow them to manipulate the data further.
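
As an illustration of the kind of export administrators asked for, the sketch below writes aggregated records out as CSV for further manipulation in a spreadsheet. The field names and records are invented for the example.

    # Sketch of exporting aggregated RAD to CSV for reuse in spreadsheets.
    # The field names and records are invented for illustration.
    import csv

    records = [
        {"researcher": "A. Smith", "project": "Gene mapping",
         "funder": "EPSRC", "unit": "Department of Zoology"},
        {"researcher": "B. Jones", "project": "Medical ethics",
         "funder": "Wellcome Trust", "unit": "Faculty of Philosophy"},
    ]

    with open("rad_export.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["researcher", "project", "funder", "unit"])
        writer.writeheader()
        writer.writerows(records)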

Uses of the Blue Pages for Building Strategies

Knowing who is doing what, and where, in the University is extremely useful for the evaluation of current research and for the formulation of future plans. Strategists asked whether the Blue Pages could answer questions such as who is collaborating with universities in China, or who is working with Microsoft. Information like this is also extremely useful for promotional tours and the writing of University brochures.

Further Suggestions

The following are two examples of suggestions from volunteer testers which correspond to possible further additions to the research information infrastructure, the Blue Pages and other services. It is worth noting that they are not part of a plan for future development; these suggestions, and further ideas which emerged from the tests, will be evaluated by BRII at a later point. What is worth highlighting, however, is the volunteers' ability to come up with innovative and realistic suggestions once they had experienced the Blue Pages, and the usefulness of the tests in allowing facilitators and developers to acquire further insights from volunteer testers' comments.

The suggestions were:

Conclusions

User tests are a good way to assess the usability of software applications, as well as to obtain data on how these applications will be used in real life. These data are important in the design of strategies to improve the acceptance of software. By running tests with users in their own working environments, users can have a say in the development process: they feel comfortable and are able to engage with the tool in realistic contexts (for example, pretending they are using the tool for a real assignment). Developers, for their part, get first-hand input from users by observing and talking to them. They can see with their own eyes how users use the tool, and what its strengths and weaknesses are. Developers can also ask users for their impressions of how the application could be improved and of how they would use it.

An important conclusion of this exercise is that allowing users to interact with a technology changes their expectations and perceptions of that technology. This is particularly important for the understanding of rather abstract concepts such as the research information infrastructure and RAD. Most volunteer testers understood these notions better after accessing data through the Blue Pages. They were then able to ask technical and procedural questions about data harvesting, permissions and privacy issues, and to contribute their points of view. Volunteers were also able to foresee many more benefits than they had originally predicted.

References

  1. More information about the BRII project can be found on the project's Web site http://brii.ouls.ox.ac.uk/ and weblog http://brii-oxford.blogspot.com/
  2. Loureiro-Koechlin, C., Stakeholder analysis. Exploratory study into the requirements and uses for research activity data at the University of Oxford. 2009
    http://ora.ouls.ox.ac.uk/objects/uuid%3A1df69991-cd37-445b-a4c7-3573ce80c36e
  3. Oxford University Research Archive http://ora.ouls.ox.ac.uk/
  4. Rowley, D.E., Usability testing in the field: bringing the laboratory to the user. In Proceedings of the SIGCHI conference on human factors in computing systems: celebrating interdependence, 1994.
  5. Cockburn, A., Highsmith, J., Agile software development: the people factor, 2001
    http://alistair.cockburn.us/Agile+software+development:+the+people+factor
  6. Cockburn, A., Agile software development: software through people. 2001: Addison Wesley.
  7. Fowler, M., The New Methodology. 2005 http://www.martinfowler.com/articles/newMethodology.html
  8. Rising, L., Agile Methods: What's it all about? In DDC-I Online News, November 2001 http://ddci.com/NewsArchive/news_vol2num9.php#Agile
  9. Jeffries, R., Desurvire, H., Usability testing vs. heuristic evaluation: was there a contest? SIGCHI Bulletin, 1992. 24(4): p. 39-41.
  10. Nielsen, J., Molich R., Heuristic evaluations of user interfaces. In Proceedings of CHI'90, 1990.
  11. Nielsen, J., Finding usability problems through heuristic evaluation. In CHI '92: Proceedings of the SIGCHI conference on Human factors in computing systems, 1992. New York, NY, USA: ACM Press.
  12. Wildman, D., Getting the most from paired-user testing. Interactions, 1995. 2(3): p. 21-27.
  13. Norman, K.L. and E. Panizzi, Levels of automation and user participation in usability testing. Interacting with computers, 2006. 18(2): p. 246-264.
  14. Norman, K.L., Murphy, E.B., Usability testing of an Internet Form for the 2004 Overseas Enumeration Test: A comparison of think aloud and retrospective reports. In Proceedings of the Human Factors Society 48th Annual Meeting, 2004. New Orleans.
  15. Bowman, D., Gabbard, J.L., Hix, D., A survey of usability evaluation in virtual environments: classification and comparison of methods. Presence: Teleoperators and Virtual Environments, 2002. 11(4): p. 404-424.
  16. Balsamiq Mockups http://www.balsamiq.com/
  17. Woolrych, A., Cockton, G., Why and when five test users aren't enough. In Proceedings of IHM-HCI 2001 Conference, 2001. Toulouse.
  18. Davis, F.D., Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 1989. 13(3): p. 319-340.
  19. Carter, L., Bélanger, F., The utilization of e-government services: citizen trust, innovation and acceptance factors. Information Systems Journal, 2005. 15(1): p. 5-25.
  20. Davis, F.D., Bagozzi, R.P., Warshaw, P.R., User acceptance of computer technology: a comparison of two theoretical models. Management Science, 1989. 35(8): p. 982-1003.

Author Details

Cecilia Loureiro-Koechlin
Project Analyst
Systems and e-Research Service - Oxford University Library Services

Email: cecilia.loureiro-koechlin@ouls.ox.ac.uk
Web site: http://brii.ouls.ox.ac.uk
