Violet B. Fox, Editor
Disciplinary Differences: LCSH and Keyword Assignment for ETDs from Different Disciplines
Margaret Beecher Maurer & Shadi Shakeri
ABSTRACT: This research concerns the frequency of assignment of author-supplied keyword strings and cataloger-supplied subject heading strings within a library catalog. The results reveal that, on average, more author-assigned keywords and more cataloger-assigned Library of Congress Subject Headings were assigned to works emerging from the arts & humanities than to works emerging from the social sciences and the science, technology, engineering, and mathematics (STEM) disciplines. STEM disciplines in particular received less topical metadata, in part because of the under-assignment of name/title, geographical, and corporate subject headings. These findings suggest how librarians could deepen their understanding of how topical access functions within academic disciplines.
KEYWORDS: Library of Congress Subject Headings (LCSH), electronic theses or dissertations (ETDs), disciplinary differences, student-author keyword assignment, graduate student acculturation/enculturation, cataloging for different academic disciplines, topical access
Chronology in Cataloging Chinese Archaeological Reports: An Investigation of Cultural Bias in the Library of Congress Classification
Junli Diao & Haiyun Cao
ABSTRACT: This article first examines cultural limitations embedded in the Eurocentric Library of Congress Classification and calls for catalogers' sensitivity to authors' cultural backgrounds when cataloging archaeological materials from Bronze Age China. It then discusses the ambiguity in Library of Congress Subject Headings Manual H1225 and presents a debate on the necessity of including Chinese dynastic information in constructing subject headings by comparing facets extracted from this manual with title patterns of Chinese archaeological reports. Furthermore, this article elaborates on the significance of the chronological issue from three different perspectives: Faceted Application of Subject Terminology headings, local library users' needs, and next-generation catalogs.
KEYWORDS: Catalogers, cataloging, subject cataloging, classification, classification systems, Library of Congress Classification, Library of Congress Subject Headings (LCSH)
VIOLET B. FOX
Welcome to the news column. Its purpose is to disseminate information on any aspect of cataloging and classification that may be of interest to the cataloging community. This column is not intended solely for news items; it also serves to document discussions of interest as well as news concerning you, your research efforts, and your organization. Please send any pertinent materials, notes, minutes, or reports to Violet Fox via email or by phone at 320-363-3032. Following their publication in CCQ, news columns will be freely available from the CCQ website.
We would appreciate receiving items having to do with:
Research and Opinion
American Library Association Midwinter Meeting: Cataloging Norms Interest Group
Boston, Massachusetts, January 9, 2016
Submitted by Kathryn Lybarger, Head of Cataloging and Metadata, University of Kentucky Libraries
Mike Chopey, Catalog Librarian at the University of Hawaii at Manoa Libraries, spoke about his project to improve metadata for the library's materials in endangered languages in his presentation, "Enhancing Access to Pacific-Language Resources at the University of Hawaii at Manoa and in OCLC WorldCat." Many MARC language codes represent general language groups without specific codes for the individual languages; for example, the MARC code "paa" for "Papuan (Other)" represents 164 distinct languages. Working with graduate assistants in linguistics and cataloging, the project identified the specific language of each piece (around 18,000 total) and updated the catalog records to reflect it using Ethnologue codes and subject headings. Future steps include creating an online thesaurus/database to collect all of these codes and headings and make them available as linked open data.
Peggy Griesinger, Metadata and Cataloging Librarian at George Mason University Libraries, presented "Bridging the Gap between Metadata Librarians and Art Conservators" in which she described her experience in a collaborative project. Her advice for metadata librarians collaborating with domain experts included: (1) Embed yourself in that new domain, shadow professionals, attend meetings, and generally soak up their culture; (2) Appreciate their materials, view them in appropriate context to understand which aspects are important to record and why; (3) Translate your questions and advice into their language rather than making them learn yours, to make it clear why your advice will help them, and to more seamlessly integrate new processes into their existing workflows.
Andrea Payant, Data Management Cataloger; Betty Rozum, Data Services Coordinator and Undergraduate Research Librarian; and Liz Woolcott, Head of Cataloging and Metadata Services, from Utah State University presented "Where's the Data?" Their project tracked data sets and faculty publications using the online catalog and institutional repository. They described what data management plans are and why they are needed (beyond just being required by many granting agencies). They also described the need for data to be deposited in a repository and actually available, rather than available only in theory, as in "will provide access to other scientists upon request." They described how libraries can help by cataloging data (using Describing Archives: A Content Standard (DACS) archival principles if necessary) to provide access to research data now and into the future.
American Library Association Midwinter Meeting: Library of Congress Bibliographic Framework (BIBFRAME) Initiative Update Forum
Boston, Massachusetts, January 10, 2016
Submitted by Donna Frederick, Metadata Librarian, University of Saskatchewan, Canada
The Bibliographic Framework (BIBFRAME) update forum consisted of seven parts, which were presented by numerous speakers.
An introduction was provided by Sally McCallum, Chief of the Network Development and MARC Standards office at the Library of Congress. She stressed that libraries are entering a time of dramatic change that will likely last for the next decade or decade and a half. She put a call out to the library community to begin serious thought and discussion about who will lead the change and how it will be carried out. She pointed out that while the larger library community tends to look to the Library of Congress to set standards and provide leadership, the types of changes we will see require the collaborative efforts of the larger library community and not just one institution, department, or person. She suggested that those who have ideas and an interest in participating should begin to step forward and begin the work of leading libraries through the new era we are entering.
The second section was an update on the Library of Congress BIBFRAME pilot, presented by Beacher Wiggins, Director for Acquisitions and Bibliographic Access at the Library of Congress. He described the early phases of this project to test the BIBFRAME editor, launched in the spring of 2015, and the challenges encountered in the process, including difficulty getting the BIBFRAME editor running and MARC records not converting. Eventually the challenges were overcome and training and testing began. Because ongoing cataloging work needs to happen during the pilot, both MARC and BIBFRAME data are being produced in parallel. So far, BIBFRAME data have been created for about 900 items. Work will continue for a few more months, after which an evaluation will occur and the pilot will be restarted. It is expected that the pilot will run for a year or two before BIBFRAME enters the production phase. In the meantime, training materials and webinars can be found on the Library of Congress website.
The third section was an update on BIBFRAME (BF) vocabulary development, presented by Sally McCallum. She gave a summary of work on BF vocabulary development since 2014 and discussed how both the vocabulary and the understanding of BF in general have evolved during the past year. MARC to BF transformation tools and other resources are available on the Library of Congress website. She described how discussions and findings from experimental work on BF 1.0 led to a realization of the need for a revision of the basic model, which has come to be known as BF 2.0. Draft specifications for the revised model are available at . Examples of changes to the vocabulary include changing "holding annotation" to "item," and changing "authority" to "agent and concepts." In the case of the latter change, "agents" can include people, families, organizations, meetings, or jurisdictions, while "concepts" include topics, places, times, events, and works. There are still many issues to be worked out, including making a clear differentiation between datatype properties and object properties; distinguishing types by class; and defining reciprocal properties where appropriate. It was also recognized that there is a need to separate administrative and descriptive metadata so that, for example, libraries can record how a description was created in addition to the description itself. A study done by AVPreserve addressed issues regarding technical, structural, and preservation metadata; their report, "BIBFRAME AV Assessment," is available at . Work on BF 2.0 is continuing, and the expectation is that the model, principles, naming, and MARC mapping should be available sometime in the spring of 2016.
The fourth section, "One Supplier's Approach to BF/Linked Data," was presented by Tiziana Possemato of Casalini Libri. Possemato described a process whereby a library can enrich MARC records to simplify an automated BF conversion using an open source conversion product called ALIADA. For example, International Standard Name Identifier (ISNI), Virtual International Authority File (VIAF), and Library of Congress/Name Authority Cooperative (NACO) name authority Uniform Resource Identifiers (URIs) associated with access points in MARC records can be added to a MARC field in subfield 0, paired with a subfield 2 that identifies the source of the URI. ALIADA is able to read MARCXML, convert it to Resource Description Framework (RDF), and make use of any existing relevant linked data. The conversion process typically makes use of multiple linked data ontologies. The resulting object-oriented Functional Requirements for Bibliographic Records (FRBRoo)/BF output is characterized by a three-layer architecture: "persons" or "works" (typically VIAF or Name Authority File [NAF] data), "instances," and "items" (holdings data).
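As a rough illustration of the subfield 0/subfield 2 pairing Possemato described (the heading and identifier below are invented for the example, not drawn from the presentation), an enriched MARC access point might look like this:

```
100 1# $a Example, Author, $d 1950-
       $0 http://viaf.org/viaf/123456789
       $2 viaf
```

Here subfield 0 carries the URI for the person, and subfield 2 records that the URI came from VIAF, giving a converter such as ALIADA an unambiguous link to reuse.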
The fifth section was an overview of the Linked Data for Production (LD4P) project that included descriptions of projects underway or completed. Presenters included Jennifer Baxmeyer of Princeton University ("Archives and Annotations"), Melanie Wacker of Columbia University ("Art and Museums"), and Chiat Naun Chew of Cornell University ("Rare Books and Hip Hop LPs"). Baxmeyer described a challenge with a Derrida collection that was notable for its marginalia, markings, notes, and inserted papers. The library wanted to be able to create annotations for this notable content and also represent relationships among the people who wrote inscriptions. They found that Encoded Archival Description (EAD) was overly textual and focused on strings, while MARC was not rich enough to represent this type of content. A project to extend BF for item-level descriptions was described as a solution for this challenge. Wacker discussed the potential of BF for dealing with works of art and museum objects, as BF is designed to be usable for different content models and to accommodate different needs for resource description. There are a number of different projects ongoing already, and she spoke of a literature review she undertook to see what research and work have already been done and what is known. It became apparent that there are a few remaining issues to be resolved, including a lack of some properties relevant for works of art and problems describing events. Columbia University is experimenting with Karma, which is used to map museum metadata to RDF, in combination with Vitro to see if the known problems and limitations can be overcome. Finally, Chiat Naun described a unique collection of Hip Hop LPs whose album covers were annotated by the disc jockey Afrika Bambaataa.
Librarians at Cornell University Library undertook a project to catalog this collection using the cataloging guidelines of the Rare Books and Manuscripts Section (RBMS) of the Association of College and Research Libraries, a division of the American Library Association. They soon realized that with linked data it would be possible to leverage external data sources relevant to the collection in order to improve the quality and usefulness of the discovery data. For example, MusicBrainz is an open encyclopedia of popular musical recordings that makes its metadata available as linked open data. Linked data sources such as MusicBrainz may be the type of external ontology that would be highly useful to include in BF.
The sixth section was an OCLC update provided by John Chapman, Product Manager for OCLC's WorldShare Metadata Services. Chapman described three key areas of work at OCLC: the BF model and vocabulary, developments in production services, and experiments in visualization. OCLC is collaborating with the Library of Congress to explore what it means to implement BF on a large scale. Chapman also explained that OCLC had some difficulty with BF 1.0 and finds BF 2.0 more suitable because it distinguishes among the levels of description: "content" is the work or expression, "product" is the manifestation, and "copy" is the item. The Person Identity Lookup Pilot Project, which uses linked data from sources such as VIAF and ISNI to extract URIs that will map all of the identities for a single person, was also discussed. Finally, Chapman briefly mentioned the Entity JS project.
The seventh and final section was a Zepheira update provided by Eric Miller, President of Zepheira, Inc. Miller shared some key messages from the ongoing work Zepheira is doing with BF and linked data: learn by iterating, work rapidly, and learn from what you do as you go along. Libraries should not wait until things are "finished" but can start to do things with linked data now. He also talked about some key benefits of libraries publishing linked data. While published data is often used by Google, the work is about more than what can be accessed via Google; once published to the web, the data can be used with many applications, so there are many possibilities.
American Library Association 2016 Midwinter Meeting: Program for Cooperative Cataloging (PCC) Participants Meeting
Boston, Massachusetts, January 10, 2016
Submitted by Donna Frederick, Metadata Librarian, University of Saskatchewan, Canada
The PCC Participants Meeting began with a discussion of the PCC's Vision, Mission, and Strategic Directions, given by Kate Harcourt, PCC Chair. This new plan is intended to reconceive the strategy of the PCC to reflect changes in the information and technology environment and to assist with the transition from older practices and policies toward new ways of thinking and making decisions.
The introductory talk was followed by a Harvard Library panel discussion on strategies and options for expanding community participation in creating identifiers and authority data. Panel members included: Mary Jane Cuneo, Senior Cataloger and NACO contact; Steven Riel, Manager of Serials Cataloging; Christine Fernsebner Eslao, Metadata Management Librarian; and Honor Moody, Cataloger in the Arthur and Elizabeth Schlesinger Library on the History of Women in America. The initial discussion was on reframing authority work as identity management in our emerging metadata environment and the related conceptual shift from thinking about "strings" to thinking about "things" when creating and managing identity data. For example, the NACO training and review process is very detailed and time consuming because participants must learn to accurately compose complex strings of data. Issues explored included how to lower the NACO training threshold, which is currently a barrier for many librarians and institutions that might otherwise be active in authority data creation. Other data sources such as VIAF and ISNI have demonstrated that it is possible to do effective authority work without creating complex strings of data. With regard to concerns over a loss of quality, the question of what "quality" is and how it can be measured was posed. A suggestion was made that if the process of creating authority data were simplified, more time could be spent working on disambiguation issues and creating needed documentation. There were also discussions around the topics of creating sustainable data, creating an understanding of the metadata lifecycle, and how to make use of the knowledge of expert communities and other sources of data.
The panel then went on to discuss the special challenges of creating authority data for corporate bodies. For example, it can be difficult to determine which identity to use for a body and when a name change means that a new identity needs to be created. Different stakeholders require different definitions or different ways to conceptualize corporate bodies. Fortunately, linked data is able to accommodate these different needs but the definitions must be clear and our current practices do not include procedures for this. Other issues include the understanding that linked data cannot exist when there are blind references and that changes in corporations over time are often more complex than simple name changes. While services like the International Standard Serial Number (ISSN) Portal have been able to handle name changes over time, there is no established way to reflect complex corporate histories in a meaningful or useful way for the purpose of discovery.
Michelle Durocher, Head of Metadata Management/Metadata Creation for Harvard Library, gave a presentation called "Muted Metadata: A NACO Lite Proposal for Zines." While there is a need to create URIs for zines in order to catalog them, the resource format is particularly problematic because many zine authors use only their first name, use a "fake" name, or change their name over time or according to the topic they are writing about. There are also privacy concerns as some zines were originally created in the pre-Internet age and authors did not foresee the implications of publishing personal or sensitive information on the web. Because of this, authors may need to control the data which is published about them. It is also recognized that some of the data may need to be masked during the life of the author and other data may never be made public. The need for privacy and control over certain aspects of the data should be balanced with the ability to read and identify the source of the data.
The final section of the meeting began with a demonstration of the user experience in searching Harvard's discovery system. The demonstration revealed that authors are typically listed in multiple formats, with books tending to list authors one way, conference proceedings another way, and journal articles one or more additional ways. The problem of name variation was particularly noticeable for journal articles. The key issue identified was that the discovery system searches for strings rather than things, and each form of an author's name is a different string. Harvard librarians suggested that we need "radical collaboration" in libraries to resolve the existing problems, and that we should measure our success in terms of our ability to build a new information ecosystem that resolves poor user experiences such as the one demonstrated. It is possible that linking relationships among entities is actually more important than disambiguating unrelated entities. A final question concerned identity management: whether there is some confusion between the information technology sector and libraries as to what identity management is, and a lack of common understanding of why it is important.
American Library Association Midwinter Meeting: Heads of Cataloging Interest Group
Boston, Massachusetts, January 11, 2016
Submitted by Donna Frederick, Metadata Librarian, University of Saskatchewan, Canada
The ALCTS CaMMS Heads of Cataloging Interest Group session featured presentations by Nancy Lorimer, Head of Metadata Services at Stanford University, and Katherine Wisser, Professor of Library and Information Science at Simmons College.
Nancy Lorimer began her presentation, "Authorities, Entities, Real World Objects, and ... Cats?: Moving from Authority Creation to Identity Management," by discussing how linked data projects in libraries are sometimes created in isolation and can result in dumps of data on the web that create problems for others to address. The presence of these dumped data is a less than ideal situation. The LD4P initiative is a two-year project to develop real workflows and practices for creating and using linked data in libraries; the desired outcome is to develop practices through which the impact of disjointed efforts can be overcome. Lorimer discussed the concept of "entities" in the linked data context. In traditional cataloging, entities are the access points, while in BF they are "works," "instances," and "items." In other schemas, entities may include "agents," "corporations," "geographical locations," "concepts," and "events." She explained how name authority data for a cat called "Kit" can be represented by a URI, which essentially resolves to a web page containing authority data. She showed that there is an essential difference between BF 1.0, which uses the "agent" entity, essentially a textual label for the cat, and BF 2.0, which uses the "person" entity, a real-world object represented by a linked data identifier such as one from VIAF. It is a challenge that many bibliographic records currently contain "headings" that lack corresponding authority data. While this is not a problem in MARC, an entity that has nothing to link to is not allowed in linked data. She pointed out that unlike the implementation of Resource Description and Access (RDA), there will be no "Day 1" for BF. It appears that libraries need to add URIs to legacy MARC metadata (in subfield zero). Considering that every access point could need a subfield zero, there is potentially a lot of data to load, and it may take a very long time to achieve.
These subfields would also require maintenance that includes protecting them from being overwritten by automated heading update services.
Lorimer described a project at Stanford University in which legacy metadata was enriched with URIs for authority data; the local environment uses SearchWorks for discovery, a Symphony Integrated Library System (ILS), and a digital repository based on the Metadata Object Description Schema (MODS). One experiment added URIs to local authority records without actually inserting a URI in each bibliographic record. If or when a URI needed to be inserted, that could be done using MarcEdit. However, this would only work for those headings for which there are authority records. In addition, analytical 7xx fields can be difficult to match, depending on the system and whether or not the subfields can be isolated.
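The batch-enrichment idea described here can be sketched in a few lines of Python. This is a hedged illustration only, not Stanford's workflow or MarcEdit's behavior: the lookup table, the flat-string field format, and the `enrich_heading` helper are all invented for the example.

```python
# Minimal sketch of subfield-zero enrichment: given a MARC-style heading
# field as a flat string and a lookup table of known authority URIs,
# append a $0 subfield when the heading matches.

# Hypothetical lookup table mapping established headings to authority URIs.
AUTHORITY_URIS = {
    "Twain, Mark, 1835-1910": "http://id.loc.gov/authorities/names/n79021164",
}

def enrich_heading(field: str) -> str:
    """Append a $0 URI to a heading field if one is known.

    `field` is a flat string such as "100 1  $aTwain, Mark, 1835-1910".
    Fields that already carry a $0, or whose heading is not in the
    lookup table, are returned unchanged.
    """
    if "$0" in field:
        return field  # already enriched; avoid duplicate subfields
    # Use the $a subfield value as the heading to look up.
    for part in field.split("$")[1:]:
        if part.startswith("a"):
            heading = part[1:].strip().rstrip(".")
            uri = AUTHORITY_URIS.get(heading)
            if uri:
                return f"{field}$0{uri}"
    return field

print(enrich_heading("100 1  $aTwain, Mark, 1835-1910"))
# -> 100 1  $aTwain, Mark, 1835-1910$0http://id.loc.gov/authorities/names/n79021164
```

A real workflow would also have to guard the added subfields against being overwritten by automated heading update services, as noted above.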
In terms of dealing with new identifiers, URIs will inevitably need to be created. Lorimer described a model where cataloging and authority vendors could create URIs and link authority data. She also discussed three options for creating new identifiers for use in BF: creating authority data in MARC, creating an Open Researcher and Contributor ID (ORCID) or ISNI, or, as a last resort, creating some sort of local identifier and adding that identifier to BF. In the future, catalogers may want to create ISNI identifiers when they create Name Authority Records (NARs). There may also be situations where there is not time to create a NAR and ISNI does not apply (e.g., animals or fictitious characters), so some sort of local identifier will need to be created. There are many vocabularies published in BF, but work still needs to be done to reconcile conflicts and duplication. To some extent, VIAF has addressed issues created by using multiple vocabularies, but there are still concerns. Lorimer noted that her library will need to reconcile the URIs it has created with recognized ones such as NARs or ISNIs, but it currently does not have a proposal for how to accomplish this. Even in VIAF, there are sometimes mistakes when the software does not detect or incorrectly interprets language or script variations. There is considerable work to be done on vocabulary reconciliation. Lorimer recognized that in the short run it will likely not be possible to create all of the required identifiers. The more important outcome of experimentation and investigation appears to be to find effective and efficient ways to create, store, and manage identifiers locally.
Lorimer's presentation slides are available at
Katherine Wisser's presentation, "Encoded Archival Context: Corporate Bodies, Persons, and Families," focused on the topic of "context control," which is similar to but distinct from authority control. She explained the history and development of authority control in archival contexts as it was first introduced by "The Three Dutchmen" (Muller, Feith, and Fruin) and later expanded on in the development of standards such as the General International Standard Archival Description (ISAD(G)), the International Standard Archival Authority Record for Corporate Bodies, Persons and Families (ISAAR (CPF)), and Describing Archives: A Content Standard (DACS). From this foundation a standard for context control, Encoded Archival Context: Corporate Bodies, Persons and Families (EAC-CPF), was conceptualized in 1998. EAC-CPF was developed and refined over several years and then fully adopted by the Society of American Archivists (SAA) in 2011. She went on to explain some of the key EAC-CPF concepts, such as the differences among single identities, multiple identities (where a person may have different roles or have held different offices over time), and collaborative identities (where a number of people comprise a single identity). She also demonstrated the structure of EAC-CPF data and showed how elements can be mapped to ISAAR (CPF). She then discussed various applications of EAC-CPF in library and archival contexts for linking the documents and other artifacts in collections in meaningful and complex ways to both their creators and other historical persons. Examples include the "Connecting the dots" project, which describes manuscript collections at Harvard and Yale (), the National Library of Australia's Trove project, which links significant biographical and historical information ( ), and Archives Portal Europe, which links archival and biographical information from countries across Europe ( ).
These projects demonstrate the powerful discovery environments that the use of EAC-CPF can create to expose relationships and permit researchers to explore linkages among persons, places, events, institutions, and historical periods. Wisser concluded by talking about the Social Networks and Archival Context (SNAC) Cooperative, a new international cooperative that is currently seeking new members ( ) and is interested in hearing feedback and taking questions on the topic of archival context.
Katherine Wisser's presentation slides are available at.
Additional reports from the American Library Association [ALA] Midwinter Meeting from committees and interest groups that report to the ALA Association for Library Collections and Technical Services (ALCTS) or to the Cataloging and Metadata Management Section (CaMMS) are available at and .
Mashcat 2016
Boston, Massachusetts, January 13, 2016
Submitted by Amber Billey, Metadata Librarian, Columbia University
On the Wednesday after ALA Midwinter had ended and almost everyone had already returned home, a group of technical services librarians, systems librarians, and library technology developers gathered at the Simmons College School of Library and Information Science for a one-day post-conference. It was the first in-person #mashcat meeting held in the United States. Mashcat describes itself as "a loose group of library cataloguers, developers and anyone else with an interest in how library catalogue data can be created, manipulated, used and re-used by computers and software." Their aim is "to work together and bridge the communications gap that has sometimes gotten in the way of building the best tools we possibly can to manage library data" (). Conversations began on Twitter, using the #mashcat hashtag during regularly scheduled topic-based chats.
It was a full day of activities, with a panel discussion, presentations, lightning talks, and a brainstorming session about zines. Galen Charlton, Vernica Downey, Christina Harlow, and other members of the Mashcat community started the day off with a panel discussion on "Why mashcat?" They discussed ways to make IT accessible in libraries: create documentation, use codes of ethics like the Hacker School rules, hold and attend meetings, invite people in to demonstrate that the group is welcoming, and look to see who is missing and then invite those people.
The second presentation was a prepared talk by Kate Deibel entitled "Shall We Become Two-Headed Monsters? Cross-Disciplinary and Multiliteracy Perspectives for Mashcat's Goals," in which Deibel explored what the emerging cataloger-coder role means for the profession.
Margaret "Annie" Glerum and Ethan Fenichel gave the third presentation, entitled "Tech-Savvy Catalogers Remediate Messy Series Fields to Alleviate Display Issues in the State University System of Florida Libraries Consortial Interface." They discussed how they cleaned up messy series data by creating a process that compared local records to their corresponding OCLC master records, and how they deployed it across 12 universities and 209,671 problematic bibliographic records.
Jacob Shelby presented "Finding Aid-LD: Implementing Linked Data in a Finding Aid Environment." Shelby proposed a framework for publishing finding aids as Linked Data and explained proof-of-concept applications. He also outlined the advantages and challenges, what it would take to publish Linked Data finding aids, and steps that can be taken to prepare finding aids for a Linked Data environment.
Marlene Harris and Galen Charlton discussed systems and data migration in their talk, "Migratory Patterns... and Antipatterns." They emphasized how successful migrations depend on getting everyone involved with the migration to talk to each other about the process.
Alison Babeu presented "Building and Rebuilding the Perseus Catalog, or CTS, Blacklight, and Github, oh my!" Babeu discussed how the success of a project to migrate the Perseus Catalog from various systems relied on the collaborative work and relationship among the digital librarian, the head of software development, and the digital library analyst. She also described their creation of Metadata Object Description Schema (MODS) and Metadata Authority Description Schema (MADS) data, the utilization of Canonical Text Services, the challenges of selecting and implementing an open source catalog system, the challenges of making all bibliographic data and source code open and well documented, and the opportunities of building relationships between those with traditional and newer professional roles.
There were three lightning talks in the afternoon. Erin Leach discussed empathy and kindness in collaboration with "There's Only One Rule I Know of ... You've Got to be Kind." Anne Neatrour and Liz Woolcott presented "Library Workflow Exchange: Because We're All Tired of Asking 'Who has Already Done This?'" They introduced a new repository website for library workflow documentation, Library Workflow Exchange (). Katherine Deibel presented, "Value Sensitive Design." Value sensitive design (VSD) is a design approach that has been applied to many domains, and that emphasizes the importance of identifying and respecting human values throughout interactions between society and technology. Deibel suggested that this approach would be useful when redesigning cataloging methods and the technology that drives discovery to be more inclusive and better represent the diversity of our users.
The final session of the day wrapped up with "Zine Union Catalog Project with Dreams of Open Linked Data." Amber Billey, Violet Fox, and Jenna Freedman facilitated group discussions to gather thoughts and suggestions on ways to approach building a union catalog for zines utilizing Open Linked Data.
Mashcat 2016's organizers and volunteers included Galen Charlton, Ethan Fenichel, Christina Harlow, Kathryn Lybarger, Shana McDanold, Candy Schwartz, and Patrick Shea. Presentation slides are available at.
World-wide review of FRBR-Library Reference Model
Submitted by Pat Riva, Associate University Librarian, Collection Services, Concordia University, Montreal
You are invited to comment on FRBR-Library Reference Model, available at, as part of a world-wide review. Comments are due by May 1, 2016. Please see the files at
The FRBR-Library Reference Model (FRBR-LRM) was developed in response to the need to unify the three separately developed conceptual models (FRBR, Functional Requirements for Authority Data [FRAD], Functional Requirements for Subject Authority Data [FRSAD]) and consolidate them into a single, consistent model covering all aspects of bibliographic data. FRBR-LRM aims to be a high-level conceptual reference model developed within an entity-relationship modeling framework. The FRBR Review Group worked actively towards a consolidated model starting in 2010. In 2013, the FRBR Review Group constituted a Consolidation Editorial Group (CEG) responsible for drafting this model document.
To assist in the evaluation of the FRBR-Library Reference Model, a Transition Mapping is being made available as complementary material. The Transition Mapping begins with a brief overview of the principal differences between FRBR-LRM and the three preceding models in the Functional Requirements (FR) family. The Transition Mapping then charts each user task, entity, attribute, and relationship declared in the FRBR, FRAD, and FRSAD models and maps each to its treatment in FRBR-LRM. While these mappings are not a part of the model itself, and do not constitute a formal International Federation of Library Associations and Institutions (IFLA) standard, they are made available to aid in the understanding of the consolidated model and to assist in the transition of implementations. Comments on the Transition Mapping are also welcome at this time.
Comments will be reviewed by the FRBR Review Group and by the CEG.
Please send all comments to Chris Oliver (), Chair of the FRBR Review Group, by May 1, 2016.
The Library of Congress to cancel the subject heading "illegal aliens"
Submitted by Janis L. Young, Policy and Standards Division, Library of Congress
In response to constituent requests, the Policy and Standards Division of the Library of Congress, which maintains Library of Congress Subject Headings (LCSH), has investigated the possibility of canceling or revising the heading Illegal aliens. The Policy and Standards Division (PSD) also explored the possibility of revising the broader term Aliens. It concluded that the meaning of Aliens is often misunderstood and should be revised to Noncitizens, and that the phrase illegal aliens has become pejorative. The heading Illegal aliens will therefore be canceled and replaced by two headings, Noncitizens and Unauthorized immigration, which may be assigned together to describe resources about people who illegally reside in a country.
Other headings that include the word aliens or the phrase illegal aliens (e.g., Church work with aliens; Children of illegal aliens) will also be revised. All of the revisions will appear on a Tentative List and be approved no earlier than May 2016; the revision of existing bibliographic records will commence shortly thereafter.
For background on the history and purpose of the headings Aliens and Illegal aliens, the rationale for the revisions to LCSH, and a description of the scope of the project, please see the full announcement on the Library of Congress website at.
Questions or comments on these revisions may be directed to Libby Dechman () in the Policy and Standards Division.
Removal of GMDs from WorldCat
Submitted by Cynthia Whitacre, OCLC Manager, WorldCat Quality Team
Can anyone believe it has been three years since RDA was fully implemented by the Library of Congress for all cataloging? As stated in the OCLC RDA Policy Statement (), OCLC agreed, in conjunction with the Program for Cooperative Cataloging (PCC), that General Material Designations (GMDs, found in MARC field 245 subfield h) would be retained for a period of three years in existing WorldCat bibliographic records that were not re-cataloged according to RDA. That three-year period is quickly coming to an end on March 31, 2016. We are therefore announcing that OCLC will begin deleting GMDs from existing bibliographic records starting in April 2016. The removal of GMDs is part of the ongoing work to introduce RDA elements and practices to existing WorldCat records.
GMDs will not disappear overnight. This will be a gradual process over the next few years. We will start in April with WorldCat record #1 and move systematically forward, first adjusting records with English language of cataloging (field 040 subfield b = eng). When other records are updated for reasons apart from the systematic walk through the database, GMDs may be removed from those as well. When GMDs are removed, OCLC will ensure that the appropriate 33X fields for content, media, and carrier types are added to records whenever possible. Because we are using automated methods to make these changes, it may not always be possible to add all three fields. We will err on the side of omitting a field rather than adding an incorrect one.
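The logic described here — stripping the GMD from 245 subfield h and adding only those 33X fields that can be derived safely — can be illustrated with a minimal sketch. This is purely a hypothetical illustration on a toy record structure, not OCLC's actual software; the GMD-to-media-type mapping and the record layout are assumptions for the example:

```python
# Toy sketch of a GMD-removal pass, under assumed conventions:
# a record is modeled as {tag: list of (subfield_code, value) pairs}.

# A GMD reliably implies an RDA media type (337), but content (336)
# and carrier (338) usually need evidence beyond the GMD.
GMD_TO_MEDIA = {
    "sound recording": "audio",
    "videorecording": "video",
    "electronic resource": "computer",
    "microform": "microform",
}

def remove_gmd(record):
    title = record.get("245", [])
    gmds = [v.strip("[] ").lower() for c, v in title if c == "h"]
    # Strip the GMD from 245 whether or not it can be mapped.
    record["245"] = [(c, v) for c, v in title if c != "h"]
    if gmds and gmds[0] in GMD_TO_MEDIA and "337" not in record:
        record["337"] = [("a", GMD_TO_MEDIA[gmds[0]]), ("2", "rdamedia")]
    # 336 and 338 are omitted here rather than guessed, mirroring the
    # stated policy of leaving a field out over adding a wrong one.
    return record

rec = {"245": [("a", "Kind of blue /"),
               ("h", "[sound recording]"),
               ("c", "Miles Davis.")]}
remove_gmd(rec)
```

After the call, the 245 field no longer carries a subfield h, and a 337 field with the term "audio" and source "rdamedia" has been added; a real process would also consult fixed fields and the physical description before supplying 336 and 338.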
Please note that members of the OCLC cooperative may continue to contribute new unique records to WorldCat formulated according to any cataloging code they are currently using. OCLC does not require libraries to catalog using RDA. Any institution wishing to add GMDs to bibliographic records may do so through local editing, but please do not add GMDs to an OCLC master record.
Additionally, for those who have access to WorldShare Collection Manager, you can configure WorldCat updates (a one-time process) to automatically receive updated WorldCat records once the GMD has been removed and 33X fields have been added. See.
Questions regarding these changes may be sent to.
Cataloging Defensively Series
Submitted by Jay Weitz, Senior Consulting Database Specialist, Data Services and WorldCat Quality Management, OCLC
At the Music OCLC Users Group (MOUG) annual meeting in Cincinnati, I presented the PowerPoint "Cataloging Sound Recordings Defensively: 'When to Input a New Record' in the Age of DDR" as part of the "Ask Everything" session on March 2, 2016. That PowerPoint is now linked from the OCLC "Cataloging Defensively" page at. (The Sound Recordings presentation itself may be found at .)
It is my intention to create a series of "Cataloging Defensively" presentations for various specific types of bibliographic materials over the coming months. One on the topic of videorecordings is scheduled to be presented at the Online Audiovisual Catalogers (OLAC) membership meeting during the ALA Annual Conference in June 2016.
The "Cataloging Defensively" presentations are not cataloging workshops, per se, but are designed to give some background on how OCLC's Duplicate Detection and Resolution (DDR) software deals with bibliographic records, both generally and for the specific bibliographic format in the title. They should help catalogers use MARC 21 and the instructions in both RDA and Anglo-American Cataloguing Rules, Second Edition (AACR2) to the best advantage in making sure that DDR performs appropriately when encountering a record that is legitimately unique according to the descriptive conventions.
From that OCLC "Cataloging Defensively" page there are also links to the more general 2010 "Cataloging Defensively" slides and recorded Webinar, as well as to "Cataloging Maps Defensively," which was presented in January 2016 to the Map and Geospatial Information Round Table (MAGIRT) Cataloging and Classification Committee. We hope that you find these presentations useful.