
Overview of all keyword tags in articles


This page provides an overview of 42 tags, ordered by trending factor. Column headings allow re-sorting by other criteria, and filters can be applied to display subsets of tags and narrow the focus to specific items of interest (see the FAQs on filtering for usage tips).

Each entry below gives the term, a brief description, the percentage of Ariadne articles tagged with the term, the total number of articles using it, and its total usage count.

british oceanographic data centre

The British Oceanographic Data Centre (BODC) is a national facility for looking after and distributing data about the marine environment. BODC deal with a range of physical, chemical and biological data, which help scientists provide answers to both local questions (such as the likelihood of coastal flooding) and global issues (such as the impact of climate change). BODC is the designated marine science data centre for the UK’s Natural Environment Research Council (NERC). The centre provides a resource for science, education and industry, as well as the general public. BODC is hosted by the National Oceanography Centre (NOC), Liverpool. (Excerpt from Wikipedia article: British Oceanographic Data Centre)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 3.

codata

The Committee on Data for Science and Technology (CODATA) was established in 1966 as an interdisciplinary committee of the International Council for Science. It seeks to improve the compilation, critical evaluation, storage, and retrieval of data of importance to science and technology. The CODATA Task Group on Fundamental Constants was established in 1969. Its purpose is to periodically provide the international scientific and technological communities with an internationally accepted set of values of the fundamental physical constants and closely related conversion factors for use worldwide. The first such CODATA set was published in 1973, with subsequent sets in 1986, 1998 and 2002, and a fifth in 2006. The latest version, Ver. 6.0, called "2010CODATA", was published on 2 June 2011. The CODATA recommended values of fundamental physical constants are published at the NIST Reference on Constants, Units, and Uncertainty. (Excerpt from Wikipedia article: CODATA)

Percentage of Ariadne articles tagged with this term: 0.3%.
Total articles: 6. Total usage: 37.

council of european social science data archives

CESSDA is an umbrella organisation for social science data archives across Europe. Since the 1970s the members have worked together to improve access to data for researchers and students. CESSDA research and development projects and Expert Seminars enhance exchange of data and technologies among data organisations. Preparations are underway to move CESSDA into a new organisation known as CESSDA European Research Infrastructure Consortium (CESSDA ERIC). (Excerpt from this source)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 3.

datacite

DataCite is an international consortium which aims to improve data citation in order to establish easier access to scientific research data on the Internet, increase acceptance of research data as legitimate, citable contributions to the scientific record, and support data archiving that will permit results to be verified and re-purposed for future study. (Excerpt from Wikipedia article: DataCite)

Percentage of Ariadne articles tagged with this term: 0.8%.
Total articles: 14. Total usage: 93.

uk data archive

The UK Data Archive is a national centre of expertise in data archiving in the United Kingdom (UK). It houses the largest collection of digital data in the social sciences and humanities in the UK. Located in Colchester, the UK Data Archive is a specialist centre of the University of Essex. It is funded by the Economic and Social Research Council (ESRC), the Joint Information Systems Committee (JISC) and the University of Essex. (Excerpt from Wikipedia article: UK Data Archive)

Percentage of Ariadne articles tagged with this term: 1.7%.
Total articles: 30. Total usage: 56.

authority data

Functional Requirements for Authority Data (FRAD), formerly known as Functional Requirements for Authority Records (FRAR), is a conceptual entity-relationship model developed by the International Federation of Library Associations and Institutions (IFLA) for relating the data recorded in library authority records to the needs of the users of those records, and for facilitating the sharing of that data. (Excerpt from Wikipedia article: FRAD)

Percentage of Ariadne articles tagged with this term: 0.3%.
Total articles: 6. Total usage: 8.

bibliographic database

A bibliographic database is a database of bibliographic records, an organized digital collection of references to published literature, including journal and newspaper articles, conference proceedings, reports, government and legal publications, patents, books, etc. In contrast to library catalogue entries, a large proportion of the bibliographic records in bibliographic databases describe analytics (articles, conference papers, etc.) rather than complete monographs, and they generally contain very rich subject descriptions in the form of keywords, subject classification terms, or abstracts. (Excerpt from Wikipedia article: Bibliographic database)

Percentage of Ariadne articles tagged with this term: 1.7%.
Total articles: 30. Total usage: 40.

big data

In information technology, big data consists of datasets that grow so large that they become awkward to work with using on-hand database management tools. Difficulties include capture, storage, search, sharing, analytics, and visualization. This trend continues because of the benefits of working with larger and larger datasets, which allow analysts to "spot business trends, prevent diseases, combat crime." Though a moving target, current limits are on the order of terabytes, exabytes and zettabytes of data. Scientists regularly encounter this problem in meteorology, genomics, connectomics, complex physics simulations, biological and environmental research, Internet search, finance and business informatics. Data sets also grow in size because they are increasingly being gathered by ubiquitous information-sensing mobile devices, aerial sensory technologies (remote sensing), software logs, cameras, microphones, radio-frequency identification readers, wireless sensor networks and so on. Every day, 2.5 quintillion bytes of data are created, and 90% of the data in the world today was created within the past two years. (Excerpt from Wikipedia article: Big data)

Percentage of Ariadne articles tagged with this term: 0.8%.
Total articles: 14. Total usage: 97.

data compression

In computer science and information theory, data compression, source coding or bit-rate reduction is the process of encoding information using fewer bits than the original representation would use. Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed (the option of decompressing the video in full before watching it may be inconvenient, and requires storage space for the decompressed video). The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and uncompress the data. (Excerpt from Wikipedia article: Data compression)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 2.
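To make the lossless case concrete, here is a minimal sketch using Python's standard-library zlib module (a DEFLATE-based codec); the sample text and compression level are chosen purely for illustration.

```python
import zlib

# Some repetitive sample text; repetition is what lossless codecs exploit.
original = ("Ariadne keyword tag listing. " * 100).encode("utf-8")

# Compress at the default level; higher levels trade CPU time for smaller output.
compressed = zlib.compress(original, level=6)

# Decompression must reproduce the input exactly -- the codec is lossless.
restored = zlib.decompress(compressed)
assert restored == original

print(f"original:   {len(original)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({len(compressed) / len(original):.1%} of original size)")
```

The trade-off described above appears directly: a higher compression level shrinks the output further but costs more processing on both ends.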

data model

A data model in software engineering is an abstract model that documents and organizes the business data for communication between team members and is used as a plan for developing applications, specifically for how data is stored and accessed. A data model explicitly determines the structure of data or structured data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually data models are specified in a data modeling language. (Excerpt from Wikipedia article: Data model)

Percentage of Ariadne articles tagged with this term: 2.3%.
Total articles: 40. Total usage: 101.
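As a toy illustration of a data model fixing the structure of data independently of how it is stored, the sketch below expresses two entities and a one-to-many relationship as Python dataclasses; the entity and field names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

# Two entities and a one-to-many relationship between them.
@dataclass
class Author:
    author_id: int
    name: str

@dataclass
class Article:
    article_id: int
    title: str
    year: int
    authors: List[Author] = field(default_factory=list)

# The model determines the structure of the data before any storage decision is made.
a = Author(author_id=1, name="A. Example")
print(Article(article_id=10, title="On Data Models", year=2012, authors=[a]))
```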

datamining

Data mining (the analysis step of the knowledge discovery in databases process, or KDD), a relatively young and interdisciplinary field of computer science, is the process of discovering new patterns from large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics and database systems. The overall goal of the data mining process is to extract knowledge from a data set in a human-understandable structure; besides the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of found structure, visualization and online updating. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indexes. These patterns can then be seen as a kind of summary of the input data, and used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, but they all belong to the overall KDD process as additional steps. (Excerpt from Wikipedia article: Data mining)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 1.
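A minimal sketch of one of the tasks named above, cluster analysis, assuming scikit-learn is installed; the six two-dimensional records are invented for illustration.

```python
from sklearn.cluster import KMeans

# Two obvious groups of 2-D records.
records = [
    [1.0, 1.2], [0.9, 1.1], [1.1, 0.8],   # group near (1, 1)
    [8.0, 8.1], [7.9, 8.3], [8.2, 7.8],   # group near (8, 8)
]

# Ask for two clusters and read back which cluster each record fell into.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(records)
print(labels)  # e.g. [0 0 0 1 1 1] -- a previously unspecified grouping of the records
```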

dublin core metadata initiative

The Dublin Core Metadata Initiative (DCMI), incorporated as an independent entity (separating from OCLC) in 2008, provides an open forum for the development of interoperable online metadata standards for a broad range of purposes and business models. DCMI's activities include consensus-driven working groups, global conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices. (Excerpt from Wikipedia article: Dublin Core Metadata Initiative)

Percentage of Ariadne articles tagged with this term: 2.4%.
Total articles: 41. Total usage: 59.

educational data mining

Educational Data Mining (EDM) describes a research field concerned with the application of data mining to information generated from educational settings (e.g., universities and intelligent tutoring systems). At a high level, the field seeks to develop methods for exploring this data, which often has multiple levels of meaningful hierarchy, in order to discover new insights about how people learn in the context of such settings. A key area of EDM is mining computer logs of student performance. Another key area is mining enrollment data. Key uses of EDM include predicting student performance, and studying learning in order to recommend improvements to current educational practice. EDM can be considered one of the learning sciences, as well as an area of data mining. A related field is learning analytics. (Excerpt from Wikipedia article: Educational Data Mining)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 13.

learning object metadata

Learning Object Metadata is a data model, usually encoded in XML, used to describe a learning object and similar digital resources used to support learning. The purpose of learning object metadata is to support the reusability of learning objects, to aid discoverability, and to facilitate their interoperability, usually in the context of online learning management systems (LMS). (Excerpt from Wikipedia article: Learning Object Metadata)

Percentage of Ariadne articles tagged with this term: 0.7%.
Total articles: 13. Total usage: 30.
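A rough sketch of a learning-object description serialised as XML with Python's standard library; the element names are simplified placeholders rather than the exact IEEE LOM schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal, illustrative learning-object record.
lo = ET.Element("learningObject")
general = ET.SubElement(lo, "general")
ET.SubElement(general, "title").text = "Introduction to Metadata"
ET.SubElement(general, "language").text = "en"
educational = ET.SubElement(lo, "educational")
ET.SubElement(educational, "typicalAgeRange").text = "18-99"

# A repository or LMS would store and exchange records along these lines.
print(ET.tostring(lo, encoding="unicode"))
```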

lexical database

A lexical database is a lexical resource with an associated software environment that permits access to its contents. The database may be custom-designed for the lexical information or a general-purpose database into which lexical information has been entered. Information typically stored in a lexical database includes the lexical category and synonyms of words, as well as semantic relations between different words or sets of words. (Excerpt from Wikipedia article: Lexical database)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 2. Total usage: 2.
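A toy, in-memory sketch of the kind of information a lexical database stores (lexical category and synonyms); the entries are invented, and real systems record far richer semantic relations.

```python
# word -> lexical category and synonym set (illustrative entries only)
lexicon = {
    "fast":  {"category": "adjective", "synonyms": {"quick", "rapid"}},
    "quick": {"category": "adjective", "synonyms": {"fast", "rapid"}},
    "run":   {"category": "verb",      "synonyms": {"sprint", "jog"}},
}

def synonyms(word: str) -> set:
    """Return the stored synonym set for a word, or an empty set if unknown."""
    return lexicon.get(word, {}).get("synonyms", set())

print(synonyms("fast"))  # {'quick', 'rapid'}
```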

library data

An integrated library system (ILS), also known as a library management system (LMS), is an enterprise resource planning system for a library, used to track items owned, orders made, bills paid, and patrons who have borrowed. An ILS usually comprises a relational database, software to interact with that database, and two graphical user interfaces (one for patrons, one for staff). Most ILSes separate software functions into discrete programs called modules, each of them integrated with a unified interface. Examples of modules might include: acquisitions (ordering, receiving, and invoicing materials); cataloging (classifying and indexing materials); circulation (lending materials to patrons and receiving them back); serials (tracking magazine and newspaper holdings); the OPAC (public interface for users). Each patron and item has a unique ID in the database that allows the ILS to track its activity. Larger libraries use an ILS to order and acquire, receive and invoice, catalog, circulate, track and shelve materials. (Excerpt from Wikipedia article: Integrated library system)

Percentage of Ariadne articles tagged with this term: 0.6%.
Total articles: 10. Total usage: 15.
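A minimal sketch of the relational core described above, using Python's built-in sqlite3 module; the table and column names are invented for illustration rather than taken from any particular ILS.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Each patron and item gets a unique ID; loans link the two.
db.executescript("""
    CREATE TABLE patrons (patron_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE items   (item_id   INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE loans   (loan_id   INTEGER PRIMARY KEY,
                          patron_id INTEGER REFERENCES patrons(patron_id),
                          item_id   INTEGER REFERENCES items(item_id),
                          due_date  TEXT);
""")
db.execute("INSERT INTO patrons VALUES (1, 'A. Reader')")
db.execute("INSERT INTO items   VALUES (7, 'Cataloguing Rules')")
db.execute("INSERT INTO loans   VALUES (NULL, 1, 7, '2012-12-01')")

# The circulation module's core question: what has this patron borrowed?
for row in db.execute("""SELECT items.title, loans.due_date
                         FROM loans JOIN items USING (item_id)
                         WHERE loans.patron_id = 1"""):
    print(row)
```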

machine-readable data

Machine-readable data is data (or metadata) which is in a format that can be understood by a computer. There are two types: human-readable data that is marked up so that it can also be read by machines (examples: microformats, RDFa), and data file formats intended principally for machines (RDF, XML, JSON). (Excerpt from Wikipedia article: Machine-readable data)

Percentage of Ariadne articles tagged with this term: 0.1%.
Total articles: 1. Total usage: 3.
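A short sketch contrasting a human-readable rendering of a record with a machine-readable JSON serialisation of the same data; the field names are invented for the example.

```python
import json

record = {"title": "Machine-readable data", "articles": 1, "usage": 3}

# Human-readable rendering: fine for a person, awkward for a program to parse.
print(f"'{record['title']}' appears in {record['articles']} article(s).")

# Machine-readable rendering: any JSON parser recovers the exact structure.
encoded = json.dumps(record)
assert json.loads(encoded) == record
print(encoded)
```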

metadata model

Metadata modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful for some predefined class of problems. Meta-modeling is the analysis, construction and development of the frames, rules, constraints, models and theories applicable and useful for modeling in a predefined class of problems. The metadata side of such a model consists of a concept diagram, essentially an adjusted class diagram as described in Booch, Rumbaugh and Jacobson (1999). Important notions are concept, generalization, association, multiplicity and aggregation. (Excerpt from Wikipedia article: Metadata model)

Percentage of Ariadne articles tagged with this term: 0.5%.
Total articles: 9. Total usage: 14.

metadata schema registry

A metadata schema registry is a network service that stores and makes available information about the metadata schemas in use by other services. (Excerpt from JISC Information Environment Glossary)

Percentage of Ariadne articles tagged with this term: 0.4%.
Total articles: 7. Total usage: 16.

microdata

Microdata is a WHATWG HTML5 specification used to nest semantics within existing content on web pages. Search engines, web crawlers, and browsers can extract and process Microdata from a web page and use it to provide a richer browsing experience for users. Microdata uses a supporting vocabulary to describe an item and name-value pairs to assign values to its properties. Microdata helps technologies such as search engines and web crawlers better understand what information is contained in a web page, providing better search results. Microdata is an attempt to provide a simpler way of annotating HTML elements with machine-readable tags than the similar approaches of using RDFa and Microformats. (Excerpt from Wikipedia article: Microdata)

Percentage of Ariadne articles tagged with this term: 0.3%.
Total articles: 5. Total usage: 7.
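A small sketch of how a consumer might extract Microdata name-value pairs from a page using only Python's standard-library HTML parser; the snippet follows the usual itemscope/itemprop pattern but is simplified compared with what a real crawler would do.

```python
from html.parser import HTMLParser

SNIPPET = """
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Ada Lovelace</span>
  <span itemprop="jobTitle">Mathematician</span>
</div>
"""

class ItempropCollector(HTMLParser):
    """Collect itemprop name-value pairs from text content."""
    def __init__(self):
        super().__init__()
        self._current = None          # itemprop name we are inside, if any
        self.properties = {}          # collected name -> value pairs

    def handle_starttag(self, tag, attrs):
        self._current = dict(attrs).get("itemprop")

    def handle_data(self, data):
        if self._current and data.strip():
            self.properties[self._current] = data.strip()
            self._current = None

parser = ItempropCollector()
parser.feed(SNIPPET)
print(parser.properties)  # {'name': 'Ada Lovelace', 'jobTitle': 'Mathematician'}
```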