'Buzz' tags used most often over past 52 weeks (RFU)

This page provides an overview of 617 keyword tags in Ariadne, ordered by recent frequent usage.

data compression

In computer science and information theory, data compression, source coding or bit-rate reduction is the process of encoding information using fewer bits than the original representation would use. Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed (the option of decompressing the video in full before watching it may be inconvenient, and requires storage space for the decompressed video). The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and uncompress the data. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_compression">Wikipedia article: Data compression</a>)
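The trade-off the excerpt describes — smaller storage at the cost of extra processing — can be seen with Python's standard-library zlib module. This is an illustrative sketch, not part of Ariadne; the sample payload is invented, and being lossless, the round trip restores the input exactly.

```python
import zlib

# A repetitive payload compresses well; random data would not.
original = b"the quick brown fox " * 200

compressed = zlib.compress(original, level=9)   # level 9: best compression, slowest
restored = zlib.decompress(compressed)          # the extra processing step

assert restored == original                     # lossless: round trip is exact
print(len(original), "bytes ->", len(compressed), "bytes")
```

Raising the compression level trades CPU time for a smaller output, a small-scale version of the scheme-design trade-offs the excerpt mentions.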

data management

Data management comprises all the disciplines related to managing data as a valuable resource. The official definition provided by DAMA International, the professional organization for those in the data management profession, is: "Data Resource Management is the development and execution of architectures, policies, practices and procedures that properly manage the full data lifecycle needs of an enterprise." This definition is fairly broad and encompasses a number of professions which may not have direct technical contact with lower-level aspects of data management, such as relational database management. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_management">Wikipedia article: Data management</a>)

data mining

Data mining (the analysis step of the "Knowledge Discovery in Databases" process, or KDD), an interdisciplinary subfield of computer science, is the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_mining">Wikipedia article: Data Mining</a>)
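One of the pattern-discovery methods the excerpt alludes to is cluster analysis. The following is a minimal one-dimensional k-means sketch in pure Python, written for illustration only — real data mining work would use a dedicated library, and the data and function name here are invented.

```python
# Minimal 1-D k-means sketch: assign each value to its nearest centre,
# then recompute each centre as the mean of its cluster, and repeat.
def kmeans_1d(values, centres, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centres}
        for v in values:
            nearest = min(centres, key=lambda c: abs(c - v))
            clusters[nearest].append(v)
        centres = [sum(vs) / len(vs) for vs in clusters.values() if vs]
    return sorted(centres)

# Two obvious groups, around 2 and around 40.
data = [1, 2, 3, 2, 40, 41, 39, 42]
print(kmeans_1d(data, centres=[0.0, 50.0]))   # -> [2.0, 40.5]
```

The discovered centres summarise the input data, which is exactly the "understandable structure" the excerpt describes as the goal of the mining step.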

data model

A data model in software engineering is an abstract model that documents and organizes the business data for communication between team members and is used as a plan for developing applications, specifically for how data is stored and accessed. A data model explicitly determines the structure of data or structured data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually data models are specified in a data modeling language. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_model">Wikipedia article: Data model</a>)
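Fixing the structure of data explicitly, as the excerpt describes, can be sketched with Python dataclasses; the `Article` and `Author` types and their fields below are hypothetical, chosen only to illustrate the idea.

```python
# A data model pins down structure before any data is stored or exchanged.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Author:
    name: str
    orcid: Optional[str] = None       # hypothetical optional attribute

@dataclass
class Article:
    title: str
    authors: List[Author] = field(default_factory=list)

article = Article("Data models in practice", [Author("A. Writer")])
print(article.title, len(article.authors))
```

The type declarations play the role of a (very small) data modeling language: they determine what structure every `Article` instance must have.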

data set

A data set (or dataset) is a collection of data, usually presented in tabular form. Each column represents a particular variable. Each row corresponds to a given member of the data set in question and lists its values for each of the variables, such as the height and weight of an object. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_set">Wikipedia article: Data set</a>)
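The row/column structure the excerpt describes can be sketched directly in Python; the variable names and values below are invented for illustration.

```python
# A tiny tabular data set: each row is a member, each column a variable.
columns = ("name", "height_cm", "weight_kg")
rows = [
    ("alpha", 170, 65),
    ("beta",  182, 80),
]

# Column access: collect one variable (height) across all members.
heights = [row[columns.index("height_cm")] for row in rows]
print(len(rows), "rows; mean height:", sum(heights) / len(heights))
```

Each individual value in `rows` — 170, say — is a datum in the excerpt's terminology.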

data visualisation

Data visualization is the study of the visual representation of data, meaning "information which has been abstracted in some schematic form, including attributes or variables for the units of information". Data visualization is closely related to Information graphics, Information visualization, Scientific visualization and Statistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_visualization">Wikipedia article: Data visualization</a>)
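A schematic representation of data, in the excerpt's sense, can be as simple as a text bar chart; this standard-library sketch uses invented counts purely for illustration.

```python
# Minimal text bar chart: one bar per label, scaled to a fixed width.
counts = {"2009": 12, "2010": 30, "2011": 21}

def bar_chart(data, width=30):
    top = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / top * width)
        lines.append(f"{label} | {bar} {value}")
    return "\n".join(lines)

print(bar_chart(counts))
```

Even this crude chart makes the relative sizes of the values visible at a glance, which is the core purpose the excerpt describes.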

data visualization

Data visualization is the study of the visual representation of data, meaning "information which has been abstracted in some schematic form, including attributes or variables for the units of information". Data visualization is closely related to Information graphics, Information visualization, Scientific visualization and Statistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_visualization">Wikipedia article: Data visualization</a>)

database

A database is a system intended to organize, store, and retrieve large amounts of data easily. It consists of an organized collection of data for one or more uses, typically in digital form. One way of classifying databases involves the type of their contents, for example: bibliographic, document-text, statistical. Digital databases are managed using database management systems, which store database contents, allowing data creation and maintenance, and search and other access. (Excerpt from <a href="http://en.wikipedia.org/wiki/Database">Wikipedia article: Database</a>)
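The store-and-retrieve cycle the excerpt describes can be shown with the standard-library sqlite3 module, a small database management system bundled with Python; the table and titles below are invented for illustration.

```python
# Organized storage and retrieval with an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO doc (title) VALUES (?)",
                 [("Data model",), ("Data mining",)])

# Search/access: retrieve rows matching a condition, in a defined order.
found = conn.execute(
    "SELECT title FROM doc WHERE title LIKE 'Data m%' ORDER BY title"
).fetchall()
print(found)
conn.close()
```

The database management system handles storage, maintenance and search, so the application only states *what* data it wants, not *how* to find it.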

datamining

Data mining (the analysis step of the knowledge discovery in databases process, or KDD), a relatively young and interdisciplinary field of computer science, is the process of discovering new patterns from large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics and database systems. The overall goal of the data mining process is to extract knowledge from a data set in a human-understandable structure; besides the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of found structures, visualization and online updating. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indexes. These patterns can then be seen as a kind of summary of the input data, and used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation nor result interpretation and reporting are part of the data mining step, but they do belong to the overall KDD process as additional steps. (Excerpt from <a href="http://en.wikipedia.org/wiki/Data_mining">Wikipedia article: Data mining</a>)
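Of the tasks this excerpt lists, anomaly detection is perhaps the simplest to sketch: flag records that lie far from the bulk of the data. This z-score rule is one common heuristic, not *the* method, and the readings below are invented.

```python
# Anomaly detection sketch: flag values more than 2 standard
# deviations from the mean (a simple z-score rule).
from statistics import mean, stdev

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 25.0]   # one unusual record

m, s = mean(readings), stdev(readings)
anomalies = [x for x in readings if abs(x - m) / s > 2]
print(anomalies)   # -> [25.0]
```

The flagged records are the "unusual records" of the excerpt; the threshold (here 2) is a tuning choice, one of the "interestingness" decisions the KDD process has to make.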

dc terms

The Dublin Core Metadata Initiative provides a namespace specifying frequently used metadata terms: http://dublincore.org/documents/2010/10/11/dcmi-terms/ (Excerpt from <a href="http://dublincore.org/documents/2010/10/11/dcmi-terms/">this source</a>)
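A resource description built from that namespace might look as follows; the record values are hypothetical, and only the namespace URI and term names (`title`, `creator`, `date`) come from the DCMI terms vocabulary.

```python
# Sketch: describing a resource with fully qualified DCMI metadata terms.
DCTERMS = "http://purl.org/dc/terms/"

record = {
    DCTERMS + "title":   "Data compression",   # hypothetical values,
    DCTERMS + "creator": "A. Writer",          # for illustration only
    DCTERMS + "date":    "2011-01-30",
}

# Each key is an unambiguous, namespace-qualified term.
print(sorted(key.rsplit("/", 1)[1] for key in record))
```

Qualifying every term with the dcterms namespace is what lets independent systems agree on what "title" or "creator" means when records are exchanged.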
