Web Magazine for Information Professionals

What Do Application Profiles Reveal about the Learning Object Metadata Standard?

Jean Godby assesses the customised subsets of metadata elements that have been defined by 35 projects using the LOM standard to describe e-learning resources.

A Metadata Standard for Learning Objects

As learning objects grow in number and importance, institutions are faced with the daunting task of managing them. Like familiar items in library collections, learning objects need to be organised by subject and registered in searchable repositories. But they also introduce special problems. As computer files, they are dependent on a particular hardware and software environment. And as materials with a pedagogical intent, they are associated with metrics such as learning objectives, reading levels and methods for evaluating student performance. The conventional wisdom is that a learning object should be accompanied by a metadata record, whose minimal form would contain the information typically found in the description of a book or journal article, such as title, author, subject, and a unique identifier. But a more complete record would describe the technical and educational context required to activate the learning object and connect it with others to create a rich educational experience for an appropriate audience.

These needs motivated the development of the IEEE Learning Object Metadata (LOM) standard [1]. The most recent draft of the standard, issued in 2002, states that LOM was designed “to facilitate the search, evaluation, acquisition, and use of learning objects - by learners or instructors or automated software processes.” The standard proposes a set of 77 elements, distributed among the nine categories shown in Table 1.

Table 1 The high-level LOM categories, as described in the Draft Standard

Category

Description

General

This category groups the general information that describes this learning object as a whole.

LifeCycle

This category describes the history and current state of this learning object and those entities that have affected this learning object during its evolution.

Meta-Metadata

This category describes this metadata record itself (rather than the learning object that this record describes).

Technical

This category describes the technical requirements and characteristics of this learning object.

Educational

This category describes the key educational or pedagogic characteristics of this learning object.

Rights   

This category describes the intellectual property rights and conditions of use for this learning object.

Relation

This category defines the relationship between this learning object and other learning objects, if any.

Annotation

This category provides comments on the educational use of this learning object, and information on when and by whom the comments were created.

Classification

This category describes where this learning object falls within a particular classification system.

Though all elements are optional, the fullest expression supports three levels of description:

1. The first, and perhaps most familiar, is that of an object resembling a bibliographic record, encoded in a coarse-grained standard designed for computer files, such as Dublin Core (DC) [2] or the Metadata Object Description Schema (MODS) [3]. Most of these elements are subsumed in the General category, which lists the title, author, unique identifier and language of the resource, as well as keywords indicating a subject. A lightweight bibliographic record could be completed by filling out the Relation and Rights elements and enhanced with the Classification element, which would index the resource with controlled vocabulary from a standard subject classification scheme.

2. Most of the remaining elements represent LOM innovations that describe a learning object’s potentially complex social and technical context. The LifeCycle element captures the fact that learning objects may have a revision history to which many parties have contributed. The Technical element lists the software and hardware requirements for accessing the learning object. The Educational and Annotation elements describe critical details of the learning experience created by the learning object. For example, who is the intended audience? How much interaction does it require? How long does it take? How difficult is it? How have instructors used it?

3. Finally, the Meta-Metadata element acknowledges the fact that the descriptive record for a learning object is also a piece of intellectual property, which must be maintained separately because a description has its own author, language, and controlled vocabulary.

Application Profiles Derived from LOM

The LOM standard is currently being tested by members of the e-learning community who are committed to open access and interoperability. High-profile projects that manage collections of education metadata in East Asia, Europe, and North America have adopted variants of the standard. Metadata schemas that are being designed for the needs of specialised education communities also strive to be LOM-compliant. Among them is the standard developed by the U.S. Department of Education-sponsored Gateway to Educational Materials (GEM) Project [4], which manages learning objects for primary and secondary school students. The Sharable Content Object Reference Model (SCORM) standard [5] also has a LOM-compliant section, which is enhanced with elements required for describing the unique technical requirements of learning objects designed for military and industrial training. In addition, the Dublin Core community has developed extensions for education metadata which are derived from LOM or which can be easily mapped to it [6].

These activities imply that the LOM standard has achieved a high degree of informal acceptance in communities that develop education metadata. But a closer look reveals an adoption rate that is less than comprehensive. Because LOM elements are optional, projects that have adopted the LOM standard usually define application profiles, or subsets of metadata standards, which follow certain established guidelines: they retain the broad semantics of the elements and may combine elements from multiple standards but introduce no new definitions. According to Heery and Patel [7], application profiles can achieve a reasonable compromise between interoperability and the unique needs of locally defined projects.

To monitor progress on these goals for the LOM community, Friesen [8] and Campbell [9] have assembled the data from application profiles defined by 29 projects in Asia, Europe, and North America into a machine-processable format. The raw data is coded in a spreadsheet whose rows represent the 77 LOM elements and whose columns represent the projects, identified by Friesen and Campbell, that create and maintain the profiles. Each cell contains a value indicating whether an element in a given profile is mandatory, recommended, or optional. The Friesen study summarises this data in a graphic that represents the three levels of adoption as successively darker shades of red. The bands of dark red, which represent mandatory elements, are prominent in the LOM General category and can be used to create descriptions that roughly correspond to a Dublin Core record. Friesen also tried to obtain records that comply with these profiles, but the effort was largely unsuccessful because the sample size was small and not widely distributed across the profiles. Nevertheless, the small number of records he examined reveals that the LOM standard can be used to create sophisticated descriptions of learning objects, though the recommended elements in the application profiles were often unused.

In the rest of this article, I update this analysis of LOM application profiles. My study builds on the Friesen/Campbell spreadsheet, which they kindly made available to me, and includes profiles that were not available earlier. It attempts a more detailed interpretation and addresses the following questions:

1. Which elements are most widely adopted? Can a viable record be assembled from the most popular elements-i.e., one which would describe the unique characteristics of learning objects, or at least support discovery and harvesting?

2. What are the prospects for interoperability in a complex metadata scheme that consists entirely of optional elements?

3. What can be learned about the motivation for developing an application profile-either from the coded data or the supporting documentation? Should the LOM standard be revised?

The study of application profiles represents a shorthand for a much more extensive analysis, which would critically review the arguments that motivated the profiles and examine records that conform to them. But a study of recommendations for use in real-world contexts gives a high-level view of these important issues and provides a framework for future discussion.

Selection criteria

I retained all of the data in the Friesen and Campbell spreadsheet and added six more profiles from the United States, Europe and Australia. Two criteria governed my selection. First, the profiles must be accessible from a public Web site and represent a persistent organisation or project that has been creating or collecting learning object metadata since the LOM standard was published in 2001. Second, the LOM application profile must consist primarily of LOM elements. Unfortunately, these criteria eliminated three well-known learning object initiatives: the MERLOT Project [10], whose metadata record is based on a LOM profile that has not been made public; GEM [4], which is committed to compliance with LOM but has not published a detailed description of the relationship between the two element sets; and Dublin Core Education [6], which proposes to use a small number of LOM elements to enhance a Dublin Core record.

The sample is skewed toward English-language projects, a consequence of the fact that the three primary contributors to this study live and work in English-speaking countries. Nevertheless, the sample contains application profiles from non-English-language projects in Central Europe, China and Japan.

Data Tabulation

In addition to codes representing three levels of recommended compliance with the LOM standard, the Friesen/Campbell spreadsheet has missing cells and shows evidence of several coding conventions whose differing intents can no longer be recovered. To reduce confusion, the data I added is as simple as possible: it has just two levels - yes and no - and ignores any conditional logic that may be encoded in a given profile (for example: Lifecycle.contribute is mandatory if Lifecycle.contribute.role is present). A Perl script reduces the original data in the spreadsheet to the same binary classification by scoring all cell entries of M (mandatory), Y (yes), R (required) and Auto (automatic) as yes; all other data, including blank entries, is scored as no.
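The reduction is simple enough to sketch in a few lines. The original used a Perl script; the Python version below is an equivalent illustration rather than the original code, and the sample row is invented:

```python
# Collapse the three-level compliance codes to a binary yes/no score.
# The yes codes are those named in the text: M, Y, R, and Auto.
YES_CODES = {"M", "Y", "R", "Auto"}

def to_binary(cell):
    """Score a spreadsheet cell as yes or no; blanks and all other codes are no."""
    return "yes" if cell and cell.strip() in YES_CODES else "no"

# One hypothetical spreadsheet row: a LOM element as scored by four profiles.
row = ["M", "", "optional", "Auto"]
print([to_binary(cell) for cell in row])  # → ['yes', 'no', 'no', 'yes']
```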

A Core LOM Record

The first priority in this study is to identify a core or composite record that would support interoperability across repositories of learning object metadata created in different institutions and provide some measure of confidence that the elements defined in the LOM standard are perceived as useful. Since the rows in the spreadsheet represent LOM elements, while the columns represent institutions that have created application profiles, it should be a simple matter of collapsing the columns across rows to obtain the most popular elements.
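Collapsing the columns amounts to counting, for each LOM element, how many profiles score it yes. A minimal sketch, assuming the binary-coded spreadsheet is held as one dictionary per profile (the profile names and data here are invented):

```python
# Hypothetical binary-coded data: one {LOM element: yes/no} dictionary per profile.
profiles = {
    "ProfileA": {"General.Title": "yes", "General.Description": "yes",
                 "Educational.SemanticDensity": "no"},
    "ProfileB": {"General.Title": "yes", "General.Description": "no",
                 "Educational.SemanticDensity": "no"},
}

# Collapse the columns: count the yes scores for each element across all profiles.
yes_counts = {}
for profile in profiles.values():
    for element, value in profile.items():
        if value == "yes":
            yes_counts[element] = yes_counts.get(element, 0) + 1

# Rank the elements, most commonly recommended first.
ranked = sorted(yes_counts.items(), key=lambda item: -item[1])
print(ranked)  # → [('General.Title', 2), ('General.Description', 1)]
```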

Despite the relaxed criteria for scoring that I have adopted in this study, no element scored yes in all 35 profiles. As Table 2a shows, the most commonly recommended elements are General.Description (34) and General.Title (33). The remaining entries in Table 2a represent the one-fourth of the elements with the highest yes scores, which have been assembled into the LOM record structure to promote readability. Below this cutoff point, the yes score rapidly drops to less than 50%. By contrast, Table 2b represents the one-fourth of the elements with the lowest yes scores.

Table 2a: The most commonly recommended LOM elements

LOM Element                         Count   Dublin Core equivalent
General                               28
   Title                              33    Title
   Description                        34    Description
   Identifier                         31    Identifier
   Language                           25    Language
   Keyword                            26    Subject
Lifecycle                             29
   Contribute                         29
      Entity                          27    Contributor or Publisher
      Role                            23
      Date                            24    Date
Meta-Metadata
   MetadataScheme                     22
Technical
   Format                             25    Format
   Location                           22    Identifier
Rights                                24    Rights
   Cost                               24
   CopyrightAndOtherRestrictions      25
Classification                        22
   Purpose                            25    Subject

Total: 35 profiles          Average yes score: 26.3

Table 2b: The least commonly recommended LOM elements

LOM Element                         Count   Dublin Core equivalent
General
   Structure                           5
   Coverage                            6    Coverage
Technical
   OrComposite                         2
      Name                             6
      MaximumVersion                   5
      MinimumVersion                   4
   OtherPlatformRequirements           8
   InstallationRemarks                 7
   Duration                            6
Educational
   SemanticDensity                     4
   Difficulty                          8
   Language                            6
Relation                                    Relation
   Kind                                8
   Description                         4
      Catalog                          4
      Entry                            4
Annotation                             8

Total: 35 profiles          Average yes score: 5.6

In Table 2a, a composite LOM record emerges that gives substance to the impressions recorded in Friesen’s analysis [8]. As he observed, most of the recommended elements map to Dublin Core equivalents. The result is remarkably coherent, considering that it is produced from a crude tabulation and that the projects from which the data was obtained were implemented by the early adopters of a new standard who were not necessarily working with one another’s knowledge. Viewed as a Dublin Core surrogate, the composite record has just two redundancies - for subjects and identifiers - which is perhaps a reflection of disagreement or confusion about how to represent this information. For example, the LOM standard defines General.Identifier as a “globally unique identifier” and Technical.Location as “a string that is used to access this learning object” - a fine distinction that may not be universally necessary. Nevertheless, the record could be easily converted to the simplified Dublin Core specification promoted by the Open Archives Initiative (OAI) [11] to support the coarse degree of interoperability required for discovery through federated searching or harvesting. But with an average yes score of 26.3, the result is not a resounding consensus.
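The conversion is mechanical once the equivalences in Table 2a are fixed. The sketch below applies a crosswalk built from those equivalences to a flat record; the function name and record layout are my own invention, and the Contribute mapping in particular would depend on the value of Role:

```python
# Crosswalk from the composite LOM elements of Table 2a to unqualified
# Dublin Core, following the equivalences listed in the table.
LOM_TO_DC = {
    "General.Title": "title",
    "General.Description": "description",
    "General.Identifier": "identifier",
    "General.Language": "language",
    "General.Keyword": "subject",
    "Lifecycle.Contribute.Entity": "contributor",  # or publisher, depending on Role
    "Lifecycle.Contribute.Date": "date",
    "Technical.Format": "format",
    "Technical.Location": "identifier",            # one of the two redundancies
    "Rights": "rights",
    "Classification.Purpose": "subject",           # the other redundancy
}

def to_oai_dc(lom_record):
    """Map a flat {LOM element: value} record to a {DC element: [values]} record."""
    dc = {}
    for element, value in lom_record.items():
        dc_name = LOM_TO_DC.get(element)
        if dc_name:
            dc.setdefault(dc_name, []).append(value)
    return dc

record = {"General.Title": "Intro to LOM", "Technical.Format": "text/html"}
print(to_oai_dc(record))  # → {'title': ['Intro to LOM'], 'format': ['text/html']}
```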

In the composite record shown in Table 2b, assembled from the least commonly recommended elements, only two of the elements map to Dublin Core equivalents: General.Coverage and Relation, neither of which is widely used, according to studies of usage patterns in Dublin Core-encoded records [12]. Perhaps the elements in Table 2b are only locally useful or should be candidates for refinement by the sponsors of the LOM standard. Of these, Educational.SemanticDensity is the most controversial. This element attracts many comments in users’ guides for creators of learning object metadata because it seems to be ill-defined. Though the LOM standard defines semantic density as “the degree of conciseness of a learning object,” it is defined slightly differently in another description as “a subjective measure of a resource’s usefulness relative to size or duration.” Many practitioners have commented that the LOM education elements are especially problematic and difficult to use. Perhaps the problems stem from the fact that these elements blur the distinction between educational purpose and material form, or that they fail to address more fundamental issues such as how learning objects should be defined and how their granularity should be measured, as Campbell [9] observed.

In addition to the educational elements, the LOM elements that define the technical features of learning objects - other than those that map to the Dublin Core elements Format and Identifier - have also failed to win widespread recommendation, as shown by the short list of technical elements in Table 2a and the long list in Table 2b. And the Relation element shows the same imbalance. As noted in the description of the CELEBRATE profile, Relation is optional because implementation is complex and time-consuming: multiple relationships require multiple instances of the category, and it is unclear how to associate learning objects in multiple languages. Thus the authors of the CELEBRATE profile recommend omitting the Relation element unless the considerable effort required to identify and document the related resources can be justified [13]. Taken together, these observations about the Educational, Technical, and Relation elements imply that the parts of the LOM record designed to distinguish learning objects from other electronic resources - by establishing the context for access, presentation to an appropriate audience, and evaluation - are among those least commonly recommended.

Table 2 is derived from an overall tabulation of the spreadsheet, but further regularities emerge from separate tabulations that divide the 35 application profiles into the three obvious geographic regions represented by the sample: 18 are from Europe, 8 from North America, and 9 from the Pacific. Since each region has highly visible projects that must answer to local government agencies and educational institutions - UK LOM Core [14] and CELEBRATE [13] in Europe; CanCore [15] and SCORM [5] in North America; and IMS [16], the Le@rning Federation [17] and CELTS (China) [18] in the Pacific - there is no reason to expect that the recommendations will result in records that are fully interoperable with those developed in distant locations.

Table 3 shows the results of separate tabulations by geographic region. As in Table 2, only the top fourth of the elements are shown, sorted by yes count and organised into the LOM record structure. These tabulations reveal slightly more detail than the overall tabulation. For example, the data in the Europe and North America profiles reflect an apparent interest in rights data that would be more detailed than an unqualified Dublin Core description. Moreover, some of the LOM innovations are given greater prominence, such as the Meta-Metadata elements in the North American profiles and the Educational categories in all three profiles. Nevertheless, a more striking observation about the data summarised in Table 3 is that when the application profiles are tabulated by geographic region, the composite records simply represent different proposals for descriptions that could have been largely encoded in a minimal Dublin Core record. And when all three regions are overlaid, only four of the high-level LOM categories and four subcategories emerge as common. This data suggests that the prospects for interoperability among LOM application profiles are greatest within a single region and they suffer when two regions are involved. But if records are to be shared across arbitrary projects without regard to geographic location, additional coordination will be required to ensure their compatibility.

Table 3: Composite LOM records from three geographic regions

Europe (18 profiles)
   General
      Identifier
      Title
      Language
      Description
   Lifecycle
      Contribute
         Role
         Entity
         Date
   Technical
      Format
      Location
   Educational
      LearningResourceType
   Rights
      Cost
      CopyrightAndOtherRestrictions
      Description
   Classification
      Purpose

Pacific (9 profiles)
   General
      Identifier
      Title
      Language
      Description
      Keyword
   Lifecycle
      Contribute
         Entity
         Date
   Technical
      Format
   Educational
      IntendedEndUserRole
      Context
   Classification
      Purpose
      TaxonPath
         Source
         Taxon

North America (8 profiles)
   General
      Identifier
      Title
      Description
      Keyword
   Lifecycle
      Status
   Meta-Metadata
      Identifier
      MetadataScheme
   Technical
      Format
      Location
   Educational
      TypicalLearningTime
   Rights
      Cost
      CopyrightAndOtherRestrictions
   Relation
      Identifier

All three regions (35 profiles)
   General
      Identifier
      Title
      Description
   Lifecycle
   Technical
      Format
   Educational

Interoperability among Projects

The foregoing discussion identified the LOM elements that are most and least likely to be recommended in application profiles by collapsing the columns in the Friesen/Campbell spreadsheet and nesting the application profiles within elements. Additional insights can be obtained by collapsing the rows in the spreadsheet, which has the effect of nesting the elements within the profiles. This tabulation provides a first draft of the answer to one of the most important questions posed by the presence of 35 application profiles created by projects on nearly every continent: will they be able to pool or share their resources?

The answer requires the calculation of an agreement score, which can be straightforwardly obtained by representing each application profile as a vector of yes and no values. For every pair of vectors, the agreement score is incremented if the yes and no values match in a given location. This value is a formal expression of the fact that two institutions might, for example, recommend the use of General.Description or refrain from recommending Educational.SemanticDensity. The results are sorted by agreement score for all unique pairs.

Table 4 shows the 12 pairs of application profiles that have achieved the highest agreement scores. SCORM Content Aggregation and SCORM Shareable Content Object have perfect agreement according to the coding standards adopted here. In other high-scoring pairs, the application profiles usually originate from the same geographic region or from the same enterprise. (Links to the application profiles cited in this section are listed in the Appendix at the end of this article).

Table 4: Application profiles with the highest pairwise agreement scores

Score   Application profiles                                            Same region?
77      SCORM Content Aggregation and SCORM Shareable Content Object    Y
76      CELT (China) and Celebrate
75      LT Scotland Objects and LT Scotland Resources                   Y
70      Learning-and-Teaching-Scotland and LT Scotland Objects          Y
69      Curriculum Online and SingCORE
69      SCORM Shareable Content Object and SCORM Asset                  Y
69      SCORM Content Aggregation and SCORM Asset                       Y
69      CanCore 1.9 and SDLF
68      Japan JST and Japan EHDO                                        Y
68      CanCore and TBITS39
68      LT Scotland and LT Scotland Resources                           Y
68      CanCore and UfI                                                 Y

No pair of profiles shows perfect disagreement, even at the bottom of the ranked list. The lowest agreement scores result from application profiles that show the greatest divergence in the scope of their recommendations. For example, the Eisenhower National Clearinghouse (ENC) profile was developed in the context of a project [19] whose goal is to create a collection of detailed original records for the National Science Digital Library (NSDL) [20], a U.S. National Science Foundation-funded initiative. Accordingly, the ENC profile leaves open the possibility of defining a potentially rich LOM record and recommends the use of all but two of the elements. By contrast, CanCore 1.9 defines a minimal record, with the goal of establishing a ‘meta-profile’ that promotes interoperability among projects throughout Canada. Yet despite differences in goals and scope, the agreement score for ENC and CanCore 1.9 is not zero because the elements recommended for CanCore 1.9 are common to both, which suggests that ENC and CanCore-compliant records can interoperate.

The ranked agreement scores can also be used to identify the application profiles that have the greatest and least resemblance to the other profiles in the sample. The application profiles shown in the left half of Table 5 appeared most frequently in the top one-fourth of the ranked list of pairwise agreements. Among them are English-language meta-profiles from three regions: RDN from the United Kingdom [21]; IMS Core from Australia [16]; and TBITS39 from Canada [15], all of which recommend a large number of elements that map to Dublin Core. By contrast, the application profiles in the right half of Table 5 appear most frequently in the bottom one-fourth of the ranked list, indicating that they show the greatest disagreement with the other profiles in the sample. Many, but not all, are projects with local or regional scope.

Table 5: Application profiles at the top and bottom of the ranked agreements

Top             Bottom
RDN             ENC
IMS CORE        CanCore 1.9
COSE            SDLF
TBITS39         Learning and Teaching Scotland
Japan EHDO      ENCORE
CELT (China)    Resl

Finally, the agreement scores can be used to focus on a single application profile of interest and assess its prospects for interoperability with a range of peers. As an illustration, Table 6 shows the ranked pairwise agreement scores involving SCORM Content Aggregation. The profiles showing the highest pairwise agreement with SCORM Content Aggregation are, in descending order of agreement:

  1. from the same enterprise (SCORM)
  2. from IMS Core, an English-language project in the Pacific region
  3. from Europe and North America and
  4. from non-English-language projects in the Pacific

The data in Table 6 supports a hypothesis that is also suggested by Table 3: the best prospects for interoperability are local, and they degrade as institutional, linguistic, and cultural boundaries are crossed.

Table 6: Ranked agreements with SCORM Content Aggregation
Key: R=related enterprise; N=North America; E=Europe; P=Pacific

Profile                             Score   Region
 1. SCORM Shareable Content Object    77      R
 2. SCORM Asset                       69      R
 3. IMS CORE                          66      P
 4. COSE                              59      E
 5. RDN                               57      E
 6. NGfL Scotland Content             57      E
 7. Metakka                           56      E
 8. UfI                               56      E
 9. TBITS39                           55      N
10. CanCore                           55      N
11. UK Common Framework Core          54      E
12. Japan EHDO                        54      P
13. Normetic                          53      N
14. HLSI                              53      E
15. Japan NICER                       51      P
16. Curriculum Online                 50      E
17. BECTA NLN                         50      E
18. Japan JST                         50      P
19. Japan ALIC & eLC                  50      P
20. CELT (China)                      49      P
21. The Le@rning Federation           48      P
22. SingCore                          48      P

The Learning Object Metadata Standard: A short progress report

The foregoing discussion presented an analysis of 35 LOM application profiles and described a methodology that can be re-administered if more data becomes available. It yields an abstract view of the metadata schema that is a major contender to become a worldwide standard for describing learning objects. The results can be cited in answers to the three questions posed at the beginning of the previous section.

A Viable Record

The tabulations illustrated in Tables 2 and 3 suggest that a viable record can be assembled from the most highly recommended LOM elements. It is descriptively similar to an unqualified Dublin Core record and exhibits regional variation. Such a record lacks the elements for describing the educational, social, and technical contexts required for a successful interaction with a learning object. But application profiles designed primarily for the management of locally produced records, such as ENC, include most of these elements and support a rich description. By contrast, meta-profiles such as CanCore, RDN, and UK LOM Core, which are designed to promote interoperability among similar projects, have far fewer recommended fields.

Perhaps the composite record that emerges from the recommendations in the application profiles is adequate for supporting resource discovery. But since subject data is sparsely represented, and subject classification schemes for learning objects are still under active development, discovery strategies for LOM records will probably be restricted to known-item searching.

Prospects for Interoperability

The results of this study contain a mixture of good and bad news. The good news is that, despite a large sample, the application profiles examined here exhibit a relatively small subset of the logically possible variations, which can be sensibly interpreted as an OAI record capable of supporting limited discovery and harvesting. But it is important to keep in mind that the data from this study gives only a best-case estimate of interoperability prospects because it has analysed only application profiles - or templates for records - and not the records themselves, which are still difficult to obtain on the large scale required for an empirical study. Friesen [22] offers a glimpse of the real data in his analysis of 250 records from five projects, two of which are publicly available and included in my analysis. His tabulations of the most frequently used elements largely overlap with the results of my study and support the conclusion that the recommendations documented in the LOM profiles are, in fact, being implemented.

Nevertheless, this evidence constitutes only a precondition for interoperability. A study of application profiles must make the simplifying assumption that profiles are interoperable if they recommend the same elements. But, as Powell [23] points out, two application profiles might both use LOM.Classification.Purpose and still fail to interoperate because this element could be used to annotate different facets of the resource, such as pedagogical intent and position within a knowledge hierarchy. Though consistency could be enforced by record editing tools that implement the profiles and recommended controlled vocabularies, such tools are not yet widely available.

These observations imply that it is premature to assess the prospect for interoperability of LOM records, especially those created by geographically and culturally distant institutions. Indeed, studies of interoperability for records describing much better-understood resources are only now beginning to appear in the research literature. For example, Ruddy [24] describes the technical and social requirements for the shared access of repositories of digital mathematics monographs by Cornell University, the University of Michigan, and the University Library of Göttingen. Such studies can provide important clues about how to design a rigorous test of interoperability and define a set of priorities for metadata description. As Ruddy argues, the record format must be adequate for local needs and must be consistent with the formats of institutions most likely to share records. If these requirements are met and the record format is based on a recognised standard, it is possible to start thinking more globally.

Motivations for Developing Application Profiles

Perhaps the most compelling issue raised by this study is existential: why have so many application profiles been developed around a core of LOM elements - enough to generate a significant body of data for analysis?

Viewed from a high level, the variability results from two countervailing pressures. On the one hand, there is a pressure to conform because the development and promotion of an interoperable metadata standard is a long-term intellectual effort that few institutions have the resources to undertake. But there is also substantial pressure to innovate. Web-accessible learning objects present a host of new problems for access and management, which are only partly addressed by a relatively new metadata standard that is still being tested. An application profile is a sensible response to these conditions, and the needs of the material may dictate whether the elements will be drawn primarily from LOM or from more relevant standards.

For example, the WebDance Project [25], which was not included in my study, attempts to catalogue e-learning materials for dance education and has developed an application profile based on LOM, Dublin Core Education, and MPEG. Since the major problem for the WebDance Project is the management of multimedia files while the educational issues are subordinate, the application profile emphasises MPEG elements and recommends only a subset of LOM elements, thus failing the major criterion for inclusion in this analysis. Other projects have developed extensive subject and educational vocabulary that is only locally relevant and may propose extensions or namespaces that are not part of the LOM standard. Still other projects are relatively new, with small collections, so they have not yet encountered a pressing need for the LOM elements that describe relationships to other versions or similar resources.

Thus it is possible to argue that the large number of application profiles represents an expected state of affairs for a new genre of electronic resources and a new standard for describing them. The institutional investment in these profiles implies that LOM has a chance of meeting these needs even if it must be supplemented or modified. But LOM must demonstrate its value in the face of competition from other educational metadata standards, such as Dublin Core Education and EDNA [26].

Some Recommendations

Given that the learning object community is in a state of flux, what must stakeholders do to increase the odds that LOM is widely adopted in the fullest possible form?

Following the empirical orientation of the study I have described in this paper, I believe that successful projects with a significant investment in LOM need to mature and report their experiences. The learning object community would benefit if such projects could share pieces that are now missing from the formal statement of the standard and would be candidates for incorporation into future versions, such as syntactic bindings, the most useful extensions to controlled vocabulary, and protocols for communicating with like-minded projects.

This study suggests that the LOM standard could be conceptualised as a series of layers defined by need. The first layer would define a core set of elements for minimal interoperability and metadata harvesting. The core set could be obtained from the composite records illustrated here, with minor editing to eliminate redundancies and to make desirable elements such as subject and identifier mandatory. The second layer would contain pointers to data of local interest, such as descriptions of the context for accessing, executing, and interacting with the learning object. The outer layer would collect the lifecycle elements for the subset of learning objects that will eventually have a long revision history and complex relationships to other objects, a subset that will perhaps always remain a small percentage of the total. This model, suggested by usage patterns that are already observable, could certainly be refined; but the problems it addresses must be solved, because the context for the management of learning objects is growing more complex. In the future, learning object metadata collections must interoperate not only with one another, but with other digital repositories developed by libraries and cultural heritage institutions [27]. The groundwork is now being prepared.
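The layered model described above can be sketched in a few lines of code. The sketch below is illustrative only: the assignment of LOM elements to layers is a hypothetical example, not part of the standard, and the element names are informal dotted shorthand for LOM category/element pairs. The idea is simply that a record must supply the core layer for minimal interoperability, while the local-context and lifecycle layers remain optional.

```python
# A hypothetical layering of LOM elements, following the model sketched
# in the text: a mandatory core, optional local context, and lifecycle
# elements for the minority of objects with long revision histories.
# Layer membership here is an illustrative assumption, not the standard.

LAYERS = {
    "core": {
        "general.title", "general.identifier",
        "general.keyword", "general.description",
    },
    "local_context": {
        "technical.format", "technical.requirement",
        "educational.context",
    },
    "lifecycle": {
        "lifecycle.version", "lifecycle.contribute", "relation.kind",
    },
}

def profile_layers(record: dict) -> dict:
    """Report, per layer, which elements a metadata record supplies."""
    supplied = set(record)
    return {layer: sorted(elements & supplied)
            for layer, elements in LAYERS.items()}

def meets_core(record: dict) -> bool:
    """True if the record supplies every element in the core layer."""
    return LAYERS["core"] <= set(record)

# A minimal record that satisfies the core layer and adds one
# local-context element; titles and identifiers are made up.
record = {
    "general.title": "Introduction to Fractions",
    "general.identifier": "urn:example:lo-101",
    "general.keyword": "mathematics",
    "general.description": "A short interactive lesson.",
    "technical.format": "text/html",
}
```

A harvesting service built on this model could accept any record for which `meets_core` returns true, while richer repositories exchange the outer layers as well.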

Appendix

Thirty-five application profiles were analysed in this study. The raw data includes multiple versions of profiles from CanCore, Curriculum Online, SCORM and LT Scotland.

The CanCore site has a useful service, currently in beta form, which lists some of the major profiles and allows users to specify three levels of detail about them. http://phenom.educ.ualberta.ca/n/cancore/

ALIC Advanced Learning Infrastructure Consortium, Japan. http://www.alic.gr.jp/eng/index.htm

BECTA NLN (British Educational Communications and Technology Agency National Learning Network)
BECTA: http://www.becta.org.uk/index.cfm

NLN: http://www.nln.ac.uk/materials/default.asp

CanCore, CanCore 1.9 http://www.cancore.ca/guidelines/CanCore%20Guidelines%20version%201.1.doc

CELEBRATE http://users.utu.fi/lasnir/docs/CELEBRATE_app_prof.doc

CELT (China) Centre for Learning Technology. http://www.cetis.ac.uk/content/20010817102511

COSE Creation of Study Environments. http://www.staffs.ac.uk/COSE/
Application profile described in http://www.staffs.ac.uk/COSE/cosenew/metadata_used_in_cose.doc

Curriculum Online http://www.dfes.gov.uk/curriculumonline/

EHDO (Employment and Human Resources Development Organization of Japan) “LOM Activity and Implementation in Japan.” http://jtc1sc36.org/doc/36N0720.pdf

ENC (Eisenhower National Clearinghouse) http://www.enc.org/
Application profile described in http://www.dlib.org/dlib/september03/lightle/09lightle.html

ENCORE (Enriching Courses with Resources) http://lib.derby.ac.uk/encore/encore.html

Application profile described in http://lib.derby.ac.uk/encore/publications3rdpge.html

FAILTE (Facilitating Access to Information on Learning Technology for Engineers) http://www.failte.ac.uk Description of application profile in http://www.staffs.ac.uk/COSE/

FERL (Further Education Resources for Learning) http://ferl.becta.org.uk/display.cfm?page=1
Description of application profile in http://metadata.cetis.ac.uk/usage_survey/cs_ferl.pdf

HLSI (High Level Skills for Industry Repository) http://www.hlsi.org.uk/
Description of application profile in http://metadata.cetis.ac.uk/usage_survey/cs_hlsi.pdf

IMS CORE (IMS Global Learning Consortium, Inc.) http://imsproject.org/

JST (Japan Science and Technology) “LOM Activity and Implementation in Japan.” http://jtc1sc36.org/doc/36N0720.pdf

Learning and Teaching Scotland http://www.ltscotland.org.uk/
Application profile accessible at: http://www.ltscotland.org.uk/files/lts_info_model_v0p7.xls

The Le@arning Federation http://www.thelearningfederation.edu.au/tlf/newcms/d2.asp
Application profile accessible at: http://www.thelearningfederation.edu.au/repo/cms2/tlf/published/8519/Metadata_Application_Profile_1_3.pdf

LT Scotland Objects, LT Scotland Resources Learning and Teaching Scotland http://www.ltscotland.org.uk/

Metakka (Finland) http://www.tieke.fi/metakka/lom.nsf

NGFL Scotland Content National Grid for Learning. http://www.ltscotland.org.uk/ngflscotland/

NICER (National Information Center for Educational Resources), Japan http://www.nicer.go.jp/english/

Normetic http://www.profetic.org:16080/normetic/
Application profile described in http://www.profetic.org:16080/normetic/IMG/pdf/annexeG.pdf

RDN (Resource Discovery Network) RDN/LTSN application profile:
http://www.rdn.ac.uk/publications/rdn-ltsn/ap/

RESL (Reusable Educational Software Library) http://www.resl.ac.uk/
Description of application profile in http://metadata.cetis.ac.uk/usage_survey/cs_resl.pdf

SCORM Asset, SCORM Shareable Content http://adlnet.org/

SDLF (Smart Learning Design Framework) http://www.digitalmedia.uow.edu.au/sldf.html

SingCORE (E-Learning Competency Centre) http://www.ecc.org.sg/cocoon/ecc/website/standards/singcore.standards

TBITS39 (Treasury Board Information Management Standard)
http://www.cio-dpi.gc.ca/its-nit/index_e.asp

UfI (University for Industry) http://www.ufi.com/
Application profile described in http://www.openline-learning.net/MILO/MILOProfile01.htm

UK Common Framework Core http://www.ukoln.ac.uk/metadata/education/uklomcore/

Acknowledgements

I have benefited greatly from discussions and correspondence with Debbie Campbell, Norm Friesen, Neil McLean, and Jian Qin. They are, of course, not responsible for the shortcomings of this work.

References

(Links accessed 15 September 2004)

  1. Draft Standard for Learning Object Metadata. Learning Technology Standards Committee of the IEEE. July 2002. http://grouper.ieee.org/LTSC/wg12/files/LOM_1484_12_1_v1_Final_Draft.pdf
  2. Dublin Core Metadata Initiative. 2004. http://dublincore.org/
  3. Metadata Object Description Schema. Library of Congress. 2004. http://www.loc.gov/standards/mods/mods-userguide-announce.html
  4. Gateway to Educational Materials. 2004. http://geminfo.org/
  5. “SCORM Overview.” Advanced Distributed Learning. 2003. http://www.adlnet.org/index.cfm?fuseaction=scormabt
  6. “DCMI Education Working Group.” Dublin Core Metadata Initiative. 2003. http://dublincore.org/groups/education/
  7. Rachel Heery and Manjula Patel. “Application profiles: mixing and matching metadata schemas.” Ariadne. Issue 25, September 2000. http://www.ariadne.ac.uk/issue25/app-profiles/
  8. Norm Friesen. Survey of LOM Implementations. CanCore. September 2003. http://www.cancore.ca/lomsurvey.doc
  9. Lorna M. Campbell. “UK LOM Core Update.” PowerPoint presentation, CETIS. September 2003. http://metadata.cetis.ac.uk/sig_meetings/lon_presentations/mdsig_040903_uklomcore.ppt
  10. MERLOT: Multimedia Education Resource for Learning and Online Teaching. 2004. http://www.merlot.org/Home.po
  11. Open Archives Initiative. 2004. http://www.openarchives.org/
  12. Ellen Knutsen, Carole Palmer and Michael Twidale. “Tracking Metadata Use for Digital Collections.” In DC-2003: Proceedings of the International DCMI Metadata Conference and Workshop, 2003, pp. 241-242. http://www.siderean.com/dc2003/706_Poster49-color.pdf
  13. CELEBRATE Metadata Application Profile. May 2003. http://users.utu.fi/lasnir/docs/CELEBRATE_app_prof.doc
  14. UK Learning Object Metadata Core. 2003. http://www.ukoln.ac.uk/metadata/education/uklomcore/
  15. “CanCore Learning Object Metadata.” Version 1.1. CanCore Initiative. Athabasca University, Edmonton, Alberta. 2002. http://www.cancore.ca/guidelines/CanCore%20Guidelines%20version%201.1.doc
  16. “IMS Global Learning Consortium, Inc.” 2004. http://www.imsglobal.org/
  17. The Learning Federation. 2003. http://www.thelearningfederation.edu.au/tlf/newcms/d2.asp
  18. CELT. Centre for Learning Technology. http://www.cetis.ac.uk/content/20010817102511
  19. Kimberly S. Lightle and Judith Ridgway. Generation of XML Records across Multiple Metadata Standards. DLIB Magazine. September 2003. http://www.dlib.org/dlib/september03/lightle/09lightle.html
  20. NSDL: National Science Digital Library. 2004. http://www.nsdl.org/
  21. Andy Powell. RDN/LTSN LOM Application Profile. Version 1.0. University of Bath. 2004. http://www.rdn.ac.uk/publications/rdn-ltsn/ap/
  22. Norm Friesen. International LOM Survey: Report. DLIST, 2004. http://dlist.sir.arizona.edu/archive/00000403/
  23. Andy Powell, personal communication.
  24. David Ruddy. “A distributed digital library of mathematical monographs: technical aspects.” PowerPoint presentation at the Digital Library Federation Spring Forum. April 2004. http://www.diglib.org/forums/Spring2004/Ruddy0404_files/frame.htm
  25. “WebDANCE: 3D dance for All using virtual Cultural E-learning tools.” Version 1.0. University of the Aegean. Mytilene, Greece. January, 2003. http://www.aegean.gr/culturaltec/webdance/reports/WebDANCE_metadata_overview.pdf
  26. “The EDNA Metadata Standard.” Education Network Australia. 2004. http://www.edna.edu.au/metadata
  27. Neil McLean and Clifford Lynch. “Interoperability between library information services and learning environments: bridging the gaps.” A joint white paper on behalf of the IMS Global Learning Consortium and the Coalition for Networked Information. May 10, 2004. http://www.imsglobal.org/digitalrepositories/CNIandIMS_2004.pdf

Author Details

Carol Jean Godby
Research Scientist
OCLC Online Computer Library Center
6600 Frantz Rd.
Dublin, Ohio 43017 USA

Email: godby@oclc.org
Web site: http://www.oclc.org/research/staff/godby.htm
