
Overview of all keyword tags in articles


This page provides an overview of 42 tags, ordered by trending factor. Column headings allow re-sorting by other criteria, and filters can be adjusted to display subsets of tags and narrow the focus to specific items of interest (see the FAQs on filtering for usage tips).

Each entry lists the term, a brief description, total articles, total usage, trending factor and charts.

bath information and data services

Bath Information and Data Services (BIDS) provided bibliographic database services to the academic community in the UK from 1991 to 2005. BIDS' academic and scholarly journals services are now incorporated into IngentaConnect (www.ingentaconnect.com). (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.2%.
Total articles: 3. Total usage: 2.

data curation for e-science

The DTI and the Research Councils are committing £118M to a government-industry programme on e-Science. The reason for this investment is that Grid technology is seen as the natural successor to the world wide web, and the UK wants to take a leading role in order to develop solutions for its scientists and opportunities for its industry. The world wide web has revolutionised the way companies do business and fundamentally altered people's personal lives, but it can no longer cope with the demands being placed on it by science. The world wide web allows very easy access to information; the Grid allows that same easy access to computing power, data processing and communication of the results. The opportunities are immense: it will allow the efficient manipulation of vast amounts of information, such as that contained in the human genome or the results from experiments in CERN's new Large Hadron Collider. It will also make it possible to mine data again and again, comparing existing data sets collected for one purpose with new and previously unrelated information, so generating new knowledge.

This consultancy will establish the current provision and future requirements for curation of primary research data being generated within e-science in the UK. This will include the e-science core programme but is anticipated to extend beyond this to other e-science research and primary research data. A consultancy report will provide a synthesis of findings and make recommendations for future action. The consultancy will support aims to manage JISC involvement in e-Science and the Research Grid, and to work in partnership to support the research community through activities such as its digital preservation programme. Project start date: 2003-02-01. Project end date: 2004-02-02. (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 1.

data train project

The DataTrain project aims to build on findings and tools developed in the Incremental project (JISC 07/09 funding strand) by developing disciplinary-focussed data management training modules for post-graduate courses in Archaeology and Social Anthropology at the University of Cambridge. To this end, the project will develop training modules for each of the two departments, and pilot these as part of the departments' postgraduate training provision in spring 2011. Beyond this, the modules would be embedded within research methods courses in each department. To extend its impact, the project would also make the training resources available through the support provision of the University of Cambridge's institutional repository and via the Archaeology Data Service (ADS) and Digital Curation Centre (DCC). Project start date: 2010-08-01. Project end date: 2011-07-31. (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 1.

data without boundaries

The Data without Boundaries (DwB) project exists to support equal and easy access to official microdata for the European Research Area, within a structured framework where responsibilities and liability are equally shared. Europe needs a comprehensive and easy-to-access research data infrastructure to be able to continuously produce cutting-edge research and reliable policy evaluations. (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 4.

datagovuk

data.gov.uk is a UK Government project to open up almost all non-personal data acquired for official purposes for free re-use. Sir Tim Berners-Lee and Professor Nigel Shadbolt are the two key figures behind the project. The beta version of data.gov.uk has been online since 30 September 2009, and by January 2010 more than 2,400 developers had registered to test the site, provide feedback and start experimenting with the data. When the project was officially launched in January 2010 it contained 2,500 data sets, and developers had already built a site that showed the location of schools according to the rating assigned to them by the education watchdog Ofsted. (Excerpt from Wikipedia article: Data.gov.uk)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 4.

datashare

DataShare, led by Edina, arises from an existing UK consortium of data support professionals working in departments and academic libraries in universities (Data Information Specialists Committee-UK), and builds on an international network with a tradition of data sharing and data archiving dating back to the 1960s in the social sciences. By working together across four universities and internally with colleagues already engaged in managing open access repositories for e-prints, this partnership will introduce and test a new model of data sharing and archiving to UK research institutions. By supporting academics within the four partner institutions who wish to share datasets on which written research outputs are based, this network of institution-based data repositories develops a model for the deposit of 'orphaned datasets', filling a niche currently served neither by centralised subject-domain data archives/centres/grids nor by e-print based institutional repositories (IRs). The project's overall aim is to contribute to new models, workflows and tools for academic data sharing within a complex and dynamic information environment which includes increased emphasis on stewardship of institutional knowledge assets of all types; new technologies for doing e-Research; new research council policies and mandates; and the growth of the Open Access / Open Data movement. Project start date: 2007-03-01. Project end date: 2009-03-31. (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.5%.
Total articles: 8. Total usage: 19.

dealing with data

UKOLN was asked to undertake a small-scale consultancy for JISC to investigate the relationships between data centres and institutions which may develop data repositories. The resulting direction-setting report will be used to advance the digital repository development agenda within the JISC Capital programme (2006 - 2009), to assist in the co-ordination of research data repositories and to inform an emerging Vision and Roadmap. The study includes a synthesis of some of the lessons learned from the projects within the Digital Repositories programme that were concerned with research data. Project start date: 2006-11-01. Project end date: 2007-05-31. (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.3%.
Total articles: 5. Total usage: 6.

british oceanographic data centre

The British Oceanographic Data Centre (BODC) is a national facility for looking after and distributing data about the marine environment. BODC deal with a range of physical, chemical and biological data, which help scientists provide answers to both local questions (such as the likelihood of coastal flooding) and global issues (such as the impact of climate change). BODC is the designated marine science data centre for the UK’s Natural Environment Research Council (NERC). The centre provides a resource for science, education and industry, as well as the general public. BODC is hosted by the National Oceanography Centre (NOC), Liverpool. (Excerpt from Wikipedia article: British Oceanographic Data Centre)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 3.

codata

The Committee on Data for Science and Technology (CODATA) was established in 1966 as an interdisciplinary committee of the International Council for Science. It seeks to improve the compilation, critical evaluation, storage, and retrieval of data of importance to science and technology. The CODATA Task Group on Fundamental Constants was established in 1969. Its purpose is to periodically provide the international scientific and technological communities with an internationally accepted set of values of the fundamental physical constants and closely related conversion factors for use worldwide. The first such CODATA set was published in 1973, with later sets in 1986, 1998 and 2002, and a fifth in 2006. The latest version, Ver. 6.0, called "2010CODATA", was published on 2011-06-02. The CODATA recommended values of fundamental physical constants are published at the NIST Reference on Constants, Units, and Uncertainty. (Excerpt from Wikipedia article: CODATA)

Percentage of Ariadne articles tagged with this term: 0.3%.
Total articles: 6. Total usage: 37.

council of european social science data archives

CESSDA is an umbrella organisation for social science data archives across Europe. Since the 1970s the members have worked together to improve access to data for researchers and students. CESSDA research and development projects and Expert Seminars enhance exchange of data and technologies among data organisations. Preparations are underway to move CESSDA into a new organisation known as CESSDA European Research Infrastructure Consortium (CESSDA ERIC). (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 3.

authority data

Functional Requirements for Authority Data (FRAD), formerly known as Functional Requirements for Authority Records (FRAR), is a conceptual entity-relationship model developed by the International Federation of Library Associations and Institutions (IFLA) for relating the data that are recorded in library authority records to the needs of the users of those records, and for facilitating the sharing of those data. (Excerpt from Wikipedia article: FRAD)

Percentage of Ariadne articles tagged with this term: 0.3%.
Total articles: 6. Total usage: 8.

data compression

In computer science and information theory, data compression, source coding or bit-rate reduction is the process of encoding information using fewer bits than the original representation would use. Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed (the option of decompressing the video in full before watching it may be inconvenient, and requires storage space for the decompressed video). The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and uncompress the data. (Excerpt from Wikipedia article: Data compression)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 2.
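
As a small illustration of the trade-off described above, here is a minimal Python sketch using the standard-library zlib module: it losslessly compresses a deliberately repetitive byte string, checks the round trip, and reports the size saving. The sample payload and compression level are arbitrary choices for the example.

    import zlib

    # A deliberately repetitive payload, so the lossless compressor has
    # redundancy to exploit.
    original = b"Ariadne keyword tags " * 200

    # Level 9 spends more CPU time to get a smaller output; level 1 would be
    # faster but larger (the trade-off described in the entry above).
    compressed = zlib.compress(original, 9)

    # Compressed data must be decompressed before it can be used again, and
    # the round trip must restore it exactly.
    assert zlib.decompress(compressed) == original

    print(len(original), "bytes ->", len(compressed), "bytes")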

datamining

Data mining (the analysis step of the knowledge discovery in databases, or KDD, process), a relatively young and interdisciplinary field of computer science, is the process of discovering new patterns from large data sets using methods at the intersection of artificial intelligence, machine learning, statistics and database systems. The overall goal of the data mining process is to extract knowledge from a data set in a human-understandable structure; besides the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of found structure, visualization and online updating. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indexes. These patterns can then be seen as a kind of summary of the input data, and used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation nor result interpretation and reporting are part of the data mining step, but they do belong to the overall KDD process as additional steps. (Excerpt from Wikipedia article: Data mining)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 1.
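
One of the tasks named above, anomaly detection, can be sketched with nothing more than the Python standard library: flag any record whose value lies unusually far from the mean. The data set and the two-standard-deviation threshold are invented for this toy example; real data mining work would use purpose-built tools.

    from statistics import mean, pstdev

    # Toy "large data set": daily download counts with one unusual record.
    downloads = [120, 132, 128, 125, 131, 127, 129, 940, 126, 130]

    mu = mean(downloads)
    sigma = pstdev(downloads)

    # Treat a record as an anomaly if it is more than two population standard
    # deviations from the mean (an arbitrary threshold for this tiny sample).
    anomalies = [x for x in downloads if abs(x - mu) > 2 * sigma]

    print("mean:", round(mu, 1), "stdev:", round(sigma, 1))
    print("anomalies:", anomalies)   # expected: [940]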

dublin core metadata initiative

The Dublin Core Metadata Initiative (DCMI), incorporated as an independent entity (separating from OCLC) in 2008, provides an open forum for the development of interoperable online metadata standards for a broad range of purposes and business models. DCMI's activities include consensus-driven working groups, global conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices. (Excerpt from Wikipedia article: Dublin Core Metadata Initiative)

Percentage of Ariadne articles tagged with this term: 2.4%.
Total articles: 41. Total usage: 59.
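
As a small illustration of the kind of record the DCMI standards describe, the Python sketch below assembles an XML fragment with four Dublin Core elements (title, creator, date, identifier) in the dc elements namespace; the wrapper element and the values are invented for the example.

    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"   # Dublin Core elements namespace
    ET.register_namespace("dc", DC)

    # The "record" wrapper is an arbitrary container for this sketch; in
    # practice DC elements are embedded in a host format (RDF/XML, OAI-PMH,
    # HTML meta tags, and so on).
    record = ET.Element("record")
    for name, value in [
        ("title", "Overview of keyword tags"),       # invented values
        ("creator", "Example Author"),
        ("date", "2012-01-01"),
        ("identifier", "http://www.ariadne.ac.uk/"),
    ]:
        ET.SubElement(record, "{%s}%s" % (DC, name)).text = value

    print(ET.tostring(record, encoding="unicode"))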

learning object metadata

Learning Object Metadata is a data model, usually encoded in XML, used to describe a learning object and similar digital resources used to support learning. The purpose of learning object metadata is to support the reusability of learning objects, to aid discoverability, and to facilitate their interoperability, usually in the context of online learning management systems (LMS). (Excerpt from Wikipedia article: Learning Object Metadata)

Percentage of Ariadne articles tagged with this term: 0.7%.
Total articles: 13. Total usage: 30.
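
By way of a rough, unvalidated sketch of what such an XML encoding can look like, the Python snippet below builds a fragment following the commonly seen general/title/string pattern of the IEEE LOM XML binding; the namespace, structure and title here are illustrative assumptions rather than a normative LOM instance.

    import xml.etree.ElementTree as ET

    # Illustrative only: element names follow the general/title/string pattern
    # often seen in IEEE LOM records, but this is not a validated instance.
    LOM = "http://ltsc.ieee.org/xsd/LOM"
    ET.register_namespace("", LOM)

    lom = ET.Element("{%s}lom" % LOM)
    general = ET.SubElement(lom, "{%s}general" % LOM)
    title = ET.SubElement(general, "{%s}title" % LOM)
    string = ET.SubElement(title, "{%s}string" % LOM, {"language": "en"})
    string.text = "Introductory statistics module"   # invented learning object

    print(ET.tostring(lom, encoding="unicode"))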

lexical database

A lexical database is a lexical resource which has an associated software environment which permits access to its contents. The database may be custom-designed for the lexical information or a general-purpose database into which lexical information has been entered. Information typically stored in a lexical database includes the lexical category and synonyms of words, as well as semantic relations between different words or sets of words. (Excerpt from Wikipedia article: Lexical database)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 2.
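
To make the description above concrete, here is a minimal Python sketch that treats the standard-library sqlite3 module as a toy lexical database: one table stores each word's lexical category, another stores a synonym relation, and a join answers a lookup. The schema and vocabulary are invented for the example.

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Minimal schema: entries (word plus lexical category) and one semantic
    # relation between words (here, synonymy).
    cur.execute("CREATE TABLE entry (word TEXT PRIMARY KEY, category TEXT)")
    cur.execute("CREATE TABLE synonym (word TEXT, other TEXT)")

    cur.executemany("INSERT INTO entry VALUES (?, ?)",
                    [("archive", "noun"), ("repository", "noun")])
    cur.executemany("INSERT INTO synonym VALUES (?, ?)",
                    [("archive", "repository"), ("repository", "archive")])

    # Look up synonyms of a word together with their lexical category.
    cur.execute("""SELECT s.other, e.category
                   FROM synonym s JOIN entry e ON e.word = s.other
                   WHERE s.word = ?""", ("archive",))
    print(cur.fetchall())   # [('repository', 'noun')]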

library data

An integrated library system (ILS), also known as a library management system (LMS), is an enterprise resource planning system for a library, used to track items owned, orders made, bills paid, and patrons who have borrowed. An ILS usually comprises a relational database, software to interact with that database, and two graphical user interfaces (one for patrons, one for staff). Most ILSes separate software functions into discrete programs called modules, each of them integrated with a unified interface. Examples of modules might include: acquisitions (ordering, receiving, and invoicing materials); cataloging (classifying and indexing materials); circulation (lending materials to patrons and receiving them back); serials (tracking magazine and newspaper holdings); the OPAC (public interface for users). Each patron and item has a unique ID in the database that allows the ILS to track its activity. Larger libraries use an ILS to order and acquire, receive and invoice, catalog, circulate, track and shelve materials. (Excerpt from Wikipedia article: Integrated library system)

Percentage of Ariadne articles tagged with this term: 0.6%.
Total articles: 10. Total usage: 15.
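
The relational core described above can be sketched in a few lines. The Python snippet below uses sqlite3 to mock up a tiny circulation module in which each patron and item has a unique ID and a loans table links them; all table and column names are invented for illustration and bear no relation to any particular ILS product.

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Toy ILS core: unique IDs for patrons and items, plus a loans table
    # recording circulation (lending and return) by referencing those IDs.
    cur.executescript("""
        CREATE TABLE patron (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE item   (id INTEGER PRIMARY KEY, title TEXT);
        CREATE TABLE loan   (patron_id INTEGER REFERENCES patron(id),
                             item_id   INTEGER REFERENCES item(id),
                             due_date  TEXT);
    """)

    cur.execute("INSERT INTO patron VALUES (1, 'A. Reader')")
    cur.execute("INSERT INTO item VALUES (42, 'Ariadne, issue 1')")
    cur.execute("INSERT INTO loan VALUES (1, 42, '2012-02-01')")

    # Which items are on loan, to whom, and when are they due back?
    cur.execute("""SELECT p.name, i.title, l.due_date
                   FROM loan l
                   JOIN patron p ON p.id = l.patron_id
                   JOIN item i ON i.id = l.item_id""")
    print(cur.fetchall())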

machine-readable data

Machine-readable data is data (or metadata) in a format that can be understood by a computer. There are two types: human-readable data that is marked up so that it can also be read by machines (examples: microformats, RDFa), and data file formats intended principally for machines (RDF, XML, JSON). (Excerpt from Wikipedia article: Machine-readable data)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 3.
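
A tiny Python sketch of the second type mentioned above, a format intended principally for machines: an invented record (mirroring this tag's counts) is serialised to JSON with the standard library and read back.

    import json

    # An invented record describing one tag, in a machine-readable format.
    record = {"term": "machine-readable data", "articles": 1, "usage": 3}

    text = json.dumps(record)    # serialise to a JSON string
    print(text)

    parsed = json.loads(text)    # any JSON-aware program can read it back
    assert parsed["articles"] == 1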

metadata model

Metadata modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined class of problems. Meta-modeling is the analysis, construction and development of the frames, rules, constraints, models and theories applicable and useful for modeling in a predefined class of problems. The metadata side of the diagram consists of a concept diagram. This is basically an adjusted class diagram as described in Booch, Rumbaugh and Jacobson (1999). Important notions are concept, generalization, association, multiplicity and aggregation. (Excerpt from Wikipedia article: Metadata model)

Percentage of Ariadne articles tagged with this term: 0.5%.
Total articles: 9. Total usage: 14.
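
The notions listed above (concept, generalization, association, multiplicity, aggregation) map naturally onto a small class model. The Python sketch below is one illustrative rendering of that mapping, not a formal meta-model; all class names and fields are invented.

    from dataclasses import dataclass, field
    from typing import List

    # Generalization: Book is a specialisation of the broader concept Resource.
    @dataclass
    class Resource:
        title: str

    @dataclass
    class Book(Resource):
        isbn: str = ""

    # Aggregation with multiplicity 0..*: a Collection groups many Resources.
    @dataclass
    class Collection:
        name: str
        members: List[Resource] = field(default_factory=list)

    # Association with multiplicity 1..1: a Record describes exactly one Resource.
    @dataclass
    class Record:
        about: Resource
        note: str = ""

    c = Collection("Sample collection", [Book("Metadata basics", isbn="0-000")])
    r = Record(about=c.members[0], note="toy example")
    print(c)
    print(r)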

metadata schema registry

A metadata schema registry is a network service that stores and makes available information about the metadata schemas in use by other services. (Excerpt from JISC Information Environment Glossary)

Percentage of Ariadne articles tagged with this term: 0.4%.
Total articles: 7. Total usage: 16.
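
At its simplest, the service described above is a lookup from a schema identifier to information about that schema. The Python sketch below fakes such a registry with an in-memory mapping and a query function; the entries and field names are invented for illustration, and a real registry would expose this over the network to other services.

    # Toy in-memory stand-in for a metadata schema registry: it maps a schema
    # name to descriptive information that other services could query.
    REGISTRY = {
        "dc": {
            "title": "Dublin Core Metadata Element Set",
            "namespace": "http://purl.org/dc/elements/1.1/",
        },
        "lom": {
            "title": "IEEE Learning Object Metadata",
            "namespace": "http://ltsc.ieee.org/xsd/LOM",
        },
    }

    def describe(schema):
        """Return what the registry knows about a schema (KeyError if absent)."""
        return REGISTRY[schema]

    print(describe("dc")["namespace"])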