Overview of trending keyword tags
This page provides an overview of 299 recently trending keyword tags, ordered by trending factor. The column headings allow re-sorting by other criteria, and the expandable filter panel lets you display subsets of tags and narrow the focus to specific keywords of interest (see the FAQs on filtering for usage tips).
Note: This page displays only recently trending keywords; see our overview of keyword tags for a comprehensive keyword inventory.
| Term | Description | Trending factor | Statistics |
|---|---|---|---|
| data | The term data refers to qualitative or quantitative attributes of a variable or set of variables. Data (plural of "datum") are typically the results of measurements and can be the basis of graphs, images, or observations of a set of variables. Data are often viewed as the lowest level of abstraction from which information and then knowledge are derived. Raw data, i.e. unprocessed data, refers to a collection of numbers, characters, images or other outputs from devices that collect information to convert physical quantities into symbols. (Excerpt from Wikipedia article: Data) | 16080 | |
| research | Research can be defined as the search for knowledge, or as any systematic investigation, with an open mind, to establish novel facts, usually using a scientific method. The primary purpose of basic research (as opposed to applied research) is the discovery, interpretation, and development of methods and systems for the advancement of human knowledge on a wide variety of scientific matters of our world and the universe. (Excerpt from Wikipedia article: Research) | 10486 | |
| drupal | Drupal is a free and open source content management system (CMS) and content management framework (CMF) written in PHP and distributed under the GNU General Public License. It is used as a back-end system for at least 1.5% of all websites worldwide, ranging from personal blogs to corporate, political, and government sites including whitehouse.gov and data.gov.uk. It is also used for knowledge management and business collaboration. (Excerpt from Wikipedia article: Drupal) | 9754.5 | |
| sushi | The Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol standard (ANSI/NISO Z39.93-2007) defines an automated request and response model for the harvesting of electronic resource usage data utilizing a Web services framework. Built on SOAP, with a versioned Web Services Description Language (WSDL) and an XML schema defining the syntax of the SUSHI protocol, this standard is intended to replace the time-consuming user-mediated collection of usage data reports. SUSHI was designed to be both generalised and extensible, so that it could be used to retrieve a variety of usage reports. An extension designed specifically to work with COUNTER reports is provided with the standard, as these are expected to be the most frequently retrieved usage reports. (Excerpt from this source) | 9300 | |
| cerif | CERIF (Common European Research Information Format) emerged first as a simple standard, not unlike a library catalogue card or the present DC (Dublin Core Metadata Standard), and was intended as a data exchange format. It was based on records describing projects, with persons and organisational units as attributes. However, it was soon realised that in practice this CERIF91 standard was inadequate: it was too rigid in format, did not handle repeating groups of information, was not multilingual or multi-character-set, and did not represent the universe of interest in a sufficiently rich way. A new group of experts was convened and CERIF2000 was generated. Its essential features are: (a) it has the concept of objects or entities with attributes, such as project, person and organisational unit; (b) it supports n:m relationships between them (and recursively on any of them) using 'linking relations', thus providing rich semantics including roles and time; (c) it is fully internationalised in language and character set; (d) it is extensible without prejudicing the core data model, thus providing guaranteed interoperability at least at the core level without precluding even richer intercommunication. It is designed for use both in data exchange (data file transfer) and in heterogeneous distributed query/result environments. CERIF2004 released minor improvements in consistency. CERIF2006 implemented substantial improvements to the model, in particular the introduction of a so-called Semantic Layer, which makes the model flexible and scalable for application in very heterogeneous environments. (Excerpt from this source) | 8769.6 | |
| vufind | VuFind is an open source library search engine that allows users to search and browse beyond the resources of a traditional OPAC. Developed by Villanova University, version 1.0 was released in July 2010 after two years in beta. VuFind operates with a simple, Google-like interface and offers flexible keyword searching. While most commonly used for searching catalog records, VuFind can be extended to search other library resources including but not limited to: locally cached journals, digital library items, and institutional repositories and bibliographies. The software is also modular and highly configurable, allowing implementers to choose system components to best fit their needs. As of March 2012, a total of 64 institutions are running live instances of VuFind, including the Georgia Tech Library, the London School of Economics, the National Library of Ireland, Yale University, and the DC Public Library. (Excerpt from Wikipedia article: VuFind) | 7918.1 | |
| data set | A data set (or dataset) is a collection of data, usually presented in tabular form. Each column represents a particular variable. Each row corresponds to a given member of the data set in question, and lists its values for each of the variables, such as the height and weight of an object or values of random numbers. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows. (Excerpt from Wikipedia article: Data set) | 5753.3 | |
| big data | In information technology, big data consists of datasets that grow so large that they become awkward to work with using on-hand database management tools. Difficulties include capture, storage, search, sharing, analytics, and visualization. This trend continues because of the benefits of working with larger and larger datasets, allowing analysts to "spot business trends, prevent diseases, combat crime." Though a moving target, current limits are on the order of terabytes, exabytes and zettabytes of data. Scientists regularly encounter this problem in meteorology, genomics, connectomics, complex physics simulations, biological and environmental research, Internet search, finance and business informatics. Data sets also grow in size because they are increasingly being gathered by ubiquitous information-sensing mobile devices, aerial sensory technologies (remote sensing), software logs, cameras, microphones, radio-frequency identification readers, wireless sensor networks and so on. Every day, 2.5 quintillion bytes of data are created, and 90% of the data in the world today was created within the past two years. (Excerpt from Wikipedia article: Big data) | 5121.1 | |
| raptor | The Retrieval, Analysis, and Presentation Toolkit for usage of Online Resources (RAPTOR) project was designed to build a free-to-use, open source software toolkit for reporting e-resource usage statistics (from Shibboleth IdPs and EZProxy) in a user-friendly manner suitable for non-technical staff. Given the current economic climate and likelihood of tightening funding, understanding the usage of e-resources is becoming increasingly important, as it allows an institution to understand which resources it needs to keep subscribing to, and which it may wish to unsubscribe from (potentially resulting in cost savings). (Excerpt from this source) | 5000 | |
| data citation | Data citation refers to the practice of providing a reference to data in the same way as researchers routinely provide a bibliographic reference to printed resources. The need to cite data is starting to be recognised as one of the key practices underpinning the recognition of data as a primary research output rather than as a by-product of research. While data has often been shared in the past, it is rarely, if ever, cited in the same way as a journal article or other publication might be. If datasets were cited, they would achieve a validity and significance within the cycle of activities associated with scholarly communications and recognition of scholarly effort. (Excerpt from this source) | 3537.3 | |
| cloud computing | Cloud computing refers to the provision of computational resources on demand via a computer network. In the traditional model of computing, both data and software are fully contained on the user's computer; in cloud computing, the user's computer may contain almost no software or data (perhaps a minimal operating system and web browser only), serving as little more than a display terminal for processes occurring on a network of computers far away. A common shorthand for a provider's cloud computing service (or even an aggregation of all existing cloud services) is "The Cloud". (Excerpt from Wikipedia article: Cloud computing) | 3525.9 | |
| data management | Data management comprises all the disciplines related to managing data as a valuable resource. The official definition provided by DAMA International, the professional organization for those in the data management profession, is: "Data Resource Management is the development and execution of architectures, policies, practices and procedures that properly manage the full data lifecycle needs of an enterprise." This definition is fairly broad and encompasses a number of professions which may not have direct technical contact with lower-level aspects of data management, such as relational database management. (Excerpt from Wikipedia article: Data management) | 3137.1 | |
| ocr | Optical character recognition, usually abbreviated to OCR, is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files, to computerize a record-keeping system in an office, or to publish the text on a website. OCR makes it possible to edit the text, search for a word or phrase, store it more compactly, display or print a copy free of scanning artifacts, and apply techniques such as machine translation, text-to-speech and text mining to it. OCR is a field of research in pattern recognition, artificial intelligence and computer vision. (Excerpt from Wikipedia article: Optical character recognition) | 3103.1 | |
| repositories | A repository in publishing, and especially in academic publishing, is a real or virtual facility for the deposit of academic publications, such as academic journal articles. Deposit of material in such a site may be mandatory for a certain group, such as a particular university's doctoral graduates in a thesis repository, or published papers from those holding grants from a particular government agency in a subject repository, or, sometimes, in their own institutional repository. Or it may be voluntary, as is usually the case for technical reports at a university. (Excerpt from Wikipedia article: Repository) | 2515.5 | |
| oer | Open educational resources (OER) are "digitised materials offered freely and openly for educators, students and self-learners to use and reuse for teaching, learning and research." As a mode of production and dissemination, OER are not involved in awarding degrees nor in providing academic or administrative support to students. However, OER materials are beginning to be integrated into open and distance education. Some OER producers have involved themselves in social media to increase their content visibility and reputation. OER include different kinds of digital assets. Learning content includes courses, course materials, content modules, learning objects, collections, and journals. Tools include software that supports the creation, delivery, use and improvement of open learning content, searching and organization of content, content and learning management systems, content development tools, and on-line learning communities. Implementation resources include intellectual property licenses that govern open publishing of materials, design principles, and localization of content. They also include materials on best practices such as stories, publication, techniques, methods, processes, incentives, and distribution. (Excerpt from Wikipedia article: Open Educational Resources) | 2373.5 | |
| archives | An archive is a collection of historical records, or the physical place they are located. Archives contain primary source documents that have accumulated over the course of an individual or organization's lifetime, and are kept to show the function of an organization. In general, archives consist of records that have been selected for permanent or long-term preservation on grounds of their enduring cultural, historical, or evidentiary value. Archival records are normally unpublished and almost always unique, unlike books or magazines for which many identical copies exist. This means that archives (the places) are quite distinct from libraries with regard to their functions and organization, although archival collections can often be found within library buildings. (Excerpt from Wikipedia article: Archive) | 1636.7 | |
| bs8878 | Web accessibility refers to the inclusive practice of making websites usable by people of all abilities and disabilities. When sites are correctly designed, developed and edited, all users can have equal access to information and functionality. For example, when a site is coded with semantically meaningful HTML, with textual equivalents provided for images and with links named meaningfully, this helps blind users using text-to-speech software and/or text-to-Braille hardware. When text and images are large and/or enlargeable, it is easier for users with poor sight to read and understand the content. When links are underlined (or otherwise differentiated) as well as coloured, this ensures that colour-blind users will be able to notice them. When clickable links and areas are large, this helps users who cannot control a mouse with precision. When pages are coded so that users can navigate by means of the keyboard alone, or a single switch access device alone, this helps users who cannot use a mouse or even a standard keyboard. When videos are closed captioned or a sign language version is available, deaf and hard of hearing users can understand the video. When flashing effects are avoided or made optional, users prone to seizures caused by these effects are not put at risk. And when content is written in plain language and illustrated with instructional diagrams and animations, users with dyslexia and learning difficulties are better able to understand the content. When sites are correctly built and maintained, all of these users can be accommodated without impacting the usability of the site for non-disabled users. (Excerpt from Wikipedia article: BS 8878) | 1600 | |
| sharepoint | Microsoft SharePoint is a web application platform developed by Microsoft. First launched in 2001, SharePoint is typically associated with web content management and document management systems, but it is actually a much broader platform of web technologies, capable of being configured to suit a wide range of solution areas. SharePoint is designed as a central application platform for common enterprise web requirements. SharePoint's multi-purpose design allows for management, scaling, and provisioning of a broad variety of business applications. It provides a layer of management and abstraction from the web server, with the ultimate goal of enabling business users to leverage web features without having to understand technical aspects of web development. SharePoint also contains pre-defined 'applications' for commonly requested functionality, such as intranet portals, extranets, websites, document & file management, collaboration spaces, social tools, enterprise search and business intelligence. Other common use-cases for SharePoint include process integration, system integration, workflow automation, and providing core infrastructure for third-party solutions (such as ERP, CRM, BI, and social enterprise packages). (Excerpt from Wikipedia article: SharePoint) | 1578 | |
| solr | Solr is an open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, and rich document (e.g., Word, PDF) handling. Providing distributed search and index replication, Solr is highly scalable. Solr is written in Java and runs as a standalone full-text search server within a servlet container such as Apache Tomcat. Solr uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it easy to use from virtually any programming language. Solr's powerful external configuration allows it to be tailored to almost any type of application without Java coding, and it has an extensive plugin architecture for when more advanced customization is required. Apache Lucene and Apache Solr have both been produced by the same ASF development team since the projects merged in 2010. It is common to refer to the technology or products as Lucene/Solr or Solr/Lucene. (Excerpt from Wikipedia article: Solr) | 1472.4 | |
| api | An application programming interface (API) is a particular set of rules and specifications that a software program can follow to access and make use of the services and resources provided by another particular software program that implements that API. It serves as an interface between different software programs and facilitates their interaction, similar to the way the user interface facilitates interaction between humans and computers. (Excerpt from Wikipedia article: API) | 1358.5 | |
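The solr entry above notes that Solr exposes REST-like HTTP APIs with JSON responses, usable from virtually any programming language. As a minimal sketch of that idea (not an official client): the snippet below builds a `/select` query URL and parses a response in the shape Solr returns. The host, core name (`catalog`), field names, and the sample response are all hypothetical, for illustration only.

```python
# Sketch: querying a Solr core over its HTTP API, requesting JSON output.
# Hypothetical host/core/fields; the response below is a fabricated example
# shaped like Solr's standard JSON response, to show how it is parsed.
import json
from urllib.parse import urlencode

def build_select_url(base_url, query, rows=10, fields=None):
    """Build a Solr /select request URL (wt=json asks for a JSON response)."""
    params = {"q": query, "rows": rows, "wt": "json"}
    if fields:
        params["fl"] = ",".join(fields)  # fl restricts which fields are returned
    return f"{base_url}/select?{urlencode(params)}"

url = build_select_url("http://localhost:8983/solr/catalog",
                       "title:data", rows=5, fields=["id", "title"])

# Parsing a response in Solr's standard JSON shape:
sample_response = json.loads("""
{"responseHeader": {"status": 0},
 "response": {"numFound": 2, "start": 0,
              "docs": [{"id": "1", "title": "Big data"},
                       {"id": "2", "title": "Data citation"}]}}
""")
docs = sample_response["response"]["docs"]
```

In a real deployment the URL would be fetched with any HTTP client; the point is only that the query interface is plain HTTP parameters and the result plain JSON.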


(complete, paged)