Overview of all keyword tags in articles

This page provides an overview of 44 tags, ordered by trending factor. The column headings allow re-sorting by other criteria, and the expanding tab below lets you adjust filters to display subsets of tags and narrow the focus to specific items of interest (see the FAQs on filtering for usage tips).

data compression

In computer science and information theory, data compression, source coding or bit-rate reduction is the process of encoding information using fewer bits than the original representation would use. Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed (the option of decompressing the video in full before watching it may be inconvenient, and requires storage space for the decompressed video). The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and uncompress the data. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_compression">Wikipedia article: Data compression</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
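The lossless round trip and the size-versus-CPU trade-off described above can be demonstrated with Python's standard zlib module; the sample text is invented for illustration:

```python
import zlib

# Highly repetitive input, so it compresses well.
text = b"the quick brown fox jumps over the lazy dog " * 100

compressed = zlib.compress(text, 9)   # level 9: best compression, most CPU
restored = zlib.decompress(compressed)

assert restored == text               # lossless: the round trip is exact
assert len(compressed) < len(text)    # fewer bits than the original
```

Raising the compression level trades CPU time for output size, the same trade-off the entry describes for video codecs.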

data management

Data management comprises all the disciplines related to managing data as a valuable resource. The official definition provided by DAMA International, the professional organization for those in the data management profession, is: "Data Resource Management is the development and execution of architectures, policies, practices and procedures that properly manage the full data lifecycle needs of an enterprise." This definition is fairly broad and encompasses a number of professions which may not have direct technical contact with lower-level aspects of data management, such as relational database management. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_management">Wikipedia article: Data management</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.

data mining

Data mining (the analysis step of the "Knowledge Discovery in Databases" process, or KDD), an interdisciplinary subfield of computer science, is the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_mining">Wikipedia article: Data Mining</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
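Cluster analysis, one of the pattern-discovery tasks underlying data mining, can be sketched with a minimal one-dimensional k-means in plain Python; the naive seeding strategy and the sample values are illustrative assumptions, not a production algorithm:

```python
def kmeans_1d(values, k, iterations=20):
    """Partition 1-D values into k clusters by iterative mean-refinement."""
    centroids = sorted(values)[:k]  # naive seeding: the k smallest values
    clusters = []
    for _ in range(iterations):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centroids, clusters = kmeans_1d(data, k=2)  # converges to means near 1 and 10
```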

data model

A data model in software engineering is an abstract model, that documents and organizes the business data for communication between team members and is used as a plan for developing applications, specifically how data is stored and accessed. A data model explicitly determines the structure of data or structured data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually data models are specified in a data modeling language. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_model">Wikipedia article: Data model</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
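As a small sketch of "a plan for how data is stored and accessed", the two hypothetical entities below fix field names, types, and a reference between them before any storage technology is chosen:

```python
from dataclasses import dataclass

@dataclass
class Author:
    author_id: int
    name: str

@dataclass
class Article:
    article_id: int
    title: str
    author_id: int  # foreign-key-style reference to an Author

# The model determines structure; the values are invented.
a = Author(1, "Ada Lovelace")
art = Article(10, "Notes on the Analytical Engine", author_id=a.author_id)
```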

data set

A data set (or dataset) is a collection of data, usually presented in tabular form. Each column represents a particular variable. Each row corresponds to a given member of the data set in question. Its values for each of the variables, such as height and weight of an object or values of random numbers. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_set">Wikipedia article: Data set</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
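The tabular structure described above, columns as variables, rows as members, each cell a datum, can be read with Python's csv module; the names and measurements are invented:

```python
import csv
import io

# Header row names the variables; each following row is one member.
raw = "name,height_cm,weight_kg\nalice,170,60\nbob,182,75\n"
rows = list(csv.DictReader(io.StringIO(raw)))

heights = [int(r["height_cm"]) for r in rows]  # one variable, all members
```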

data visualisation

Data visualization is the study of the visual representation of data, meaning "information which has been abstracted in some schematic form, including attributes or variables for the units of information". Data visualization is closely related to Information graphics, Information visualization, Scientific visualization and Statistical graphics. In the new millennium data visualization has become an active area of research, teaching and development. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_visualization">Wikipedia article: Data visualization</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
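At its simplest, visualization maps values onto a visual attribute. The sketch below maps invented yearly counts onto bar length in a plain-text chart:

```python
# Each value becomes a bar whose length encodes the count.
counts = {"2009": 4, "2010": 7, "2011": 12}
chart = "\n".join(f"{year} {'#' * n}" for year, n in counts.items())
print(chart)
```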

data visualization

Data visualization is the study of the visual representation of data, meaning "information which has been abstracted in some schematic form, including attributes or variables for the units of information". Data visualization is closely related to Information graphics, Information visualization, Scientific visualization and Statistical graphics. In the new millennium data visualization has become active area of research, teaching and development. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_visualization">Wikipedia article: Data visualization</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.

database

A database is a system intended to organize, store, and retrieve large amounts of data easily. It consists of an organized collection of data for one or more uses, typically in digital form. One way of classifying databases involves the type of their contents, for example: bibliographic, document-text, statistical. Digital databases are managed using database management systems, which store database contents, allowing data creation and maintenance, and search and other access. (Excerpt from <a href="http://en.wikipedia.org/wiki/Database">Wikipedia article: Database</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
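The organize/store/retrieve cycle, and the role of a database management system, can be shown with Python's built-in sqlite3 module; the titles are invented:

```python
import sqlite3

# In-memory SQLite database: create, populate, then query via SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, title TEXT)")
con.executemany("INSERT INTO doc (title) VALUES (?)",
                [("Metadata primer",), ("Linked data intro",)])

# Search and retrieval handled by the DBMS, not application code.
titles = [t for (t,) in con.execute(
    "SELECT title FROM doc WHERE title LIKE '%data%' ORDER BY id")]
```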

datamining

Data mining (the analysis step of the knowledge discovery in databases process, or KDD), a relatively young and interdisciplinary field of computer science, is the process of discovering new patterns from large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics and database systems. The overall goal of the data mining process is to extract knowledge from a data set in a human-understandable structure; besides the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of found structure, visualization and online updating. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indexes. These patterns can then be seen as a kind of summary of the input data, and used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation nor result interpretation and reporting are part of the data mining step, but they do belong to the overall KDD process as additional steps. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_mining">Wikipedia article: Data mining</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
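Anomaly detection, one of the pattern types named above, can be sketched by flagging records far from the mean; the two-standard-deviation threshold and the readings are illustrative choices:

```python
from statistics import mean, stdev

# Invented sensor readings with one obvious outlier.
readings = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 42.0, 10.0]

m, s = mean(readings), stdev(readings)
# Flag any reading more than two standard deviations from the mean.
anomalies = [r for r in readings if abs(r - m) > 2 * s]
```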

dublin core metadata initiative

The Dublin Core Metadata Initiative (DCMI), which incorporated as an independent entity (separating from OCLC) in 2008, provides an open forum for the development of interoperable online metadata standards for a broad range of purposes and business models. DCMI's activities include consensus-driven working groups, global conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices. (Excerpt from <a href="http://en.wikipedia.org/wiki/DCMI">Wikipedia article: Dublin Core Metadata Initiative</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.

educational data mining

Educational Data Mining (EDM) describes a research field concerned with the application of data mining to information generated from educational settings (e.g., universities and intelligent tutoring systems). At a high level, the field seeks to develop methods for exploring this data, which often has multiple levels of meaningful hierarchy, in order to discover new insights about how people learn in the context of such settings. A key area of EDM is mining computer logs of student performance. Another key area is mining enrollment data. Key uses of EDM include predicting student performance, and studying learning in order to recommend improvements to current educational practice. EDM can be considered one of the learning sciences, as well as an area of data mining. A related field is learning analytics. (Excerpt from <a href="http://en.wikipedia.org/wiki/Educational_data_mining">Wikipedia article: Educational Data Mining</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.

geospatial data

A geographic information system (GIS), geographical information system, or geospatial information system is a system that captures, stores, analyzes, manages and presents data with reference to geographic location. In the simplest terms, GIS is the merging of cartography, statistical analysis and database technology. GIS may be used in archaeology, geography, cartography, remote sensing, land surveying, public utility management, natural resource management, precision agriculture, photogrammetry, urban planning, emergency management, landscape architecture, navigation, aerial video and localized search engines. (Excerpt from <a href="http://en.wikipedia.org/wiki/Geographic_information_system">Wikipedia article: Geographic information system</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
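A basic operation behind much geospatial analysis is great-circle distance between coordinates. Below is a standard haversine sketch (mean Earth radius 6371 km; the city coordinates are approximate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

london = (51.5074, -0.1278)
paris = (48.8566, 2.3522)
d = haversine_km(london, paris)  # roughly 340 km
```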

learning object metadata

Learning Object Metadata is a data model, usually encoded in XML, used to describe a learning object and similar digital resources used to support learning. The purpose of learning object metadata is to support the reusability of learning objects, to aid discoverability, and to facilitate their interoperability, usually in the context of online learning management systems (LMS). (Excerpt from <a href="http://en.wikipedia.org/wiki/Learning_object_metadata">Wikipedia article: Learning Object Metadata</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
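A Learning Object Metadata record is usually XML. The fragment below is a simplified, hand-written LOM-like example, not a validated IEEE LOM instance, parsed with Python's ElementTree:

```python
import xml.etree.ElementTree as ET

# Simplified LOM-like fragment; element names are illustrative.
lom = """
<lom>
  <general>
    <title><string language="en">Intro to metadata</string></title>
  </general>
</lom>
"""

title = ET.fromstring(lom).findtext("general/title/string")
```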

lexical database

A lexical database is a lexical resource which has an associated software environment database which permits access to its contents. The database may be custom-designed for the lexical information or a general-purpose database into which lexical information has been entered. Information typically stored in a lexical database includes lexical category and synonyms of words, as well as semantic relations between different words or sets of words. (Excerpt from <a href="http://en.wikipedia.org/wiki/Lexical_database">Wikipedia article: Lexical database</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.

library data

An integrated library system (ILS), also known as a library management system (LMS), is an enterprise resource planning system for a library, used to track items owned, orders made, bills paid, and patrons who have borrowed. An ILS usually comprises a relational database, software to interact with that database, and two graphical user interfaces (one for patrons, one for staff). Most ILSes separate software functions into discrete programs called modules, each of them integrated with a unified interface. Examples of modules might include: acquisitions (ordering, receiving, and invoicing materials); cataloging (classifying and indexing materials); circulation (lending materials to patrons and receiving them back); serials (tracking magazine and newspaper holdings); the OPAC (public interface for users). Each patron and item has a unique ID in the database that allows the ILS to track its activity. Larger libraries use an ILS to order and acquire, receive and invoice, catalog, circulate, track and shelve materials. (Excerpt from <a href="http://en.wikipedia.org/wiki/Integrated_library_system">Wikipedia article: Integrated library system</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.

linked data

Linked Data describes a method of publishing structured data, so that it can be interlinked and become more useful. It builds upon standard Web technologies, such as HTTP and URIs, but rather than using them to serve web pages for human readers, it extends them to share information in a way that can be read automatically by computers. This enables data from different sources to be connected and queried. Tim Berners-Lee, director of the World Wide Web Consortium, coined the term in a design note discussing issues around the Semantic Web project. (Excerpt from <a href="http://en.wikipedia.org/wiki/Linked_Data">Wikipedia article: Linked Data</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
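Underneath, Linked Data is triples of (subject, predicate, object) whose terms are URIs. A tiny in-memory triple store with pattern matching sketches the idea; the example.org URIs are hypothetical, while the purl.org URIs are real Dublin Core terms:

```python
# Each fact is a (subject, predicate, object) triple of URIs or literals.
triples = [
    ("http://example.org/ariadne", "http://purl.org/dc/terms/title",
     "Ariadne"),
    ("http://example.org/ariadne", "http://purl.org/dc/terms/creator",
     "http://example.org/ukoln"),
]

def match(s=None, p=None, o=None):
    """Return triples matching the given pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

hits = match(p="http://purl.org/dc/terms/title")
```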

machine-readable data

Machine-readable data is data (or metadata) which is in a format that can be understood by a computer. There are two types: human-readable data that is marked up so that it can also be read by machines (examples: microformats, RDFa), and data file formats intended principally for machines (RDF, XML, JSON). (Excerpt from <a href="http://en.wikipedia.org/wiki/Machine-readable_data">Wikipedia article: Machine-readable data</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
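The contrast drawn above can be made concrete: the same fact as a sentence only a human parses easily, and as a JSON record a program parses directly; the title and year are invented:

```python
import json

# Human-readable only: extracting the year needs language processing.
sentence = "The article 'Linked data intro' was published in 2011."

# Machine-readable: a program gets the same fact with one call.
record = json.dumps({"title": "Linked data intro", "year": 2011})
parsed = json.loads(record)
```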

metadata

Metadata can be defined literally as "data about data," but the term is normally understood to mean structured data about digital (and non-digital) resources that can be used to help support a wide range of operations. These might include, for example, resource description and discovery, the management of information resources (including rights management) and their long-term preservation. In the context of digital resources, there exists a wide variety of metadata formats. Viewed on a continuum of increasing complexity, these range from the basic records used by robot-based Internet search services, through relatively simple formats like the Dublin Core Metadata Element Set (DCMES) and the more detailed Text Encoding Initiative (TEI) header and MARC formats, to highly specific formats like the FGDC Content Standard for Digital Geospatial Metadata, the Encoded Archival Description (EAD) and the Data Documentation Initiative (DDI) Codebook. (Excerpt from <a href="http://www.ukoln.ac.uk/metadata/">this source</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
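A minimal "structured data about a resource" sketch using Dublin Core element names; the descriptive values are assumptions made up for illustration:

```python
# DCMES-style element names; values are hypothetical.
record = {
    "title": "Overview of keyword tags",
    "creator": "Ariadne editorial team",   # assumed value
    "date": "2012",                        # assumed value
    "format": "text/html",
    "subject": ["metadata", "data management"],
}

# Structured fields support the operations the entry lists,
# e.g. discovery by subject term.
discoverable = "metadata" in record["subject"]
```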

metadata model

Metadata modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful for some predefined class of problems. Meta-modeling is the analysis, construction and development of the frames, rules, constraints, models and theories applicable and useful for the modeling in a predefined class of problems. The meta-data side of the diagram consists of a concept diagram. This is basically an adjusted class diagram as described in Booch, Rumbaugh and Jacobson (1999). Important notions are concept, generalization, association, multiplicity and aggregation. (Excerpt from <a href="http://en.wikipedia.org/wiki/Metadata_modeling">Wikipedia article: Metadata model</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.

metadata schema registry

A metadata schema registry is a network service that stores and makes available information about the metadata schemas in use by other services. (Excerpt from <a href="http://www.ukoln.ac.uk/distributed-systems/jisc-ie/arch/glossary/">JISC Information Environment Glossary</a>)

Percentage of Ariadne articles tagged with this term: [term_node_prcnt_1]%.
