Volume 51, no. 4, 2013


Book Reviews

Library Classification Trends in the 21st Century by Rajendra Kumbhar
Reviewed by Judy Jeng

Logic and the Organization of Information by Martin Frické
Reviewed by Seth van Hooland

Practical Strategies for Cataloging Departments edited by Rebecca L. Lubas
Reviewed by Linda Smith Griffin


Cataloging News, Robert Bothmann, News Editor

Original Articles

The UNC-Chapel Hill RDA Boot Camp: Preparing LIS Students for Emerging Topics in Cataloging and Metadata
Madeline Veitch, Jane Greenberg, Caroline Keizer & Wanda Gunther

ABSTRACT: The implementation of Resource Description and Access (RDA) in 2013 or after will have a powerful impact on the skill set required of new library and information science professionals. This article chronicles the development of an RDA "boot camp" at UNC-Chapel Hill's School of Information and Library Science. Curriculum for the three-hour camp included a review of relevant theoretical frameworks and a hands-on exercise creating RDA records. Findings from a post-boot camp survey point to areas for further growth in cataloging and metadata course development and suggest that students are eager for more practical experience with emerging schema.

KEYWORDS: Resource Description and Access (RDA), cataloging education, library and information science education, cataloging curriculum, training, cataloging standards

Name Authority Work in Public Libraries
Susan K. Burke & Jay Shorten

ABSTRACT: Random samples of U.S. public libraries were surveyed in summer 2010 to ascertain their name authority cataloging practices. Comparisons of authority work processes and types of authority work done were made among public libraries of different sizes. Results were compared to previous studies of public and academic libraries.

KEYWORDS: authority control, catalogers, cataloging, surveys, public libraries, college and university libraries

Subject Headings in Spanish: The lcsh-es.org Bilingual Database
Michael Kreyche

ABSTRACT: Spanish is one of the most widely spoken languages in the world and the various subject heading lists in the language reflect its geographic diversity. Catalogers assigning Spanish subject headings typically must rely on a variety of different sources in different formats. The lcsh-es.org database unites several of these sources in a single search interface to simplify the work of Spanish language subject catalogers and encourage collaboration. A look at current developments suggests that high-level international agreement on linked data technology and policy bodes well for the future of multilingual subject authorities.

KEYWORDS: subject headings, multilingual authorities, authority control, Spanish language, linked data

Evolving Landscape in Name Authority Control
Jinfang Niu

ABSTRACT: This article presents a conceptual framework for library name authority control, including methods for disambiguating agents that share the same name and for collocating works of agents who use multiple names. It then discusses the identifier solutions tried or proposed in the library community for name authority control, analyzes the various identity management systems emerging outside of the library community, and envisions future trends in name authority control.

KEYWORDS: authority control, bibliographic data, interoperability, catalogers, cataloging, cataloging standards

General Notes in Catalog Records versus FRBR User Tasks
Michele Seikel

ABSTRACT: This article analyzes the literature concerning uses of notes in bibliographic records and also certain grammatical conventions used by catalogers to communicate information about the resources they are describing. It shows that these types of data do not aid the Functional Requirements for Bibliographic Records (FRBR) user tasks in the resource discovery process. It also describes how general notes are addressed in Resource Description and Access (RDA), and advocates that cataloging practices involving most general notes and such conventions as bracketing and abbreviations should be discontinued with the widespread use of RDA.

KEYWORDS: descriptive cataloging, general notes, MARC 21 note fields, FRBR user tasks


Jakobsonian Library Science? A Response to Jonathan Tuttle's Article "The Aphasia of Modern Subject Access"
David Bade

ABSTRACT: This article responds to Jonathan Tuttle's article "The Aphasia of Modern Subject Access," in which Roman Jakobson's semiology of "shared codes" consisting of preexisting signs is offered as the explanation for two redundant linguistic tools associated with cataloging: LCSH and LCC. The article criticizes Tuttle's terminology, his semiology, and his argument that selection and combination are both necessary for the operation of language but each is associated with only one of these tools.

"The Aphasia of Modern Subject Access" by Jonathan Tuttle appears in Cataloging & Classification Quarterly, Vol. 50, Issue 4, 2012, pp. 263-275. doi: 10.1080/01639374.2011.641199. Jonathan Tuttle's "Jakobsonian Library Science? A Response to David Bade" appears in Cataloging & Classification Quarterly, Vol. 51, Issue 4, 2013, pp. 439-440. doi: 10.1080/01639374.2013.763321.

KEYWORDS: subject cataloging, Jonathan Tuttle, Roman Jakobson, linguistics

Jakobsonian Library Science? A Response to David Bade
Jonathan Tuttle

This article comments on "Jakobsonian Library Science? A Response to Jonathan Tuttle's Article 'The Aphasia of Modern Subject Access'" by David Bade, appearing in Cataloging & Classification Quarterly, Vol. 51, Issue 4, 2013, pp. 428-438. doi: 10.1080/01639374.2012.750637. Jonathan Tuttle's "The Aphasia of Modern Subject Access" appears in Cataloging & Classification Quarterly, Vol. 50, Issue 4, 2012, pp. 263-275. doi: 10.1080/01639374.2011.641199.



Cataloging News
Robert Bothmann, News Editor

Welcome to the news column. Its purpose is to disseminate information on any aspect of cataloging and classification that may be of interest to the cataloging community. This column is not just intended for news items, but serves to document discussions of interest as well as news concerning you, your research efforts, and your organization. Please send any pertinent materials, notes, minutes, or reports to: Robert Bothmann, Memorial Library, Minnesota State University, Mankato, ML 3097, PO Box 8419, Mankato, MN 56002-8419 (email: robert.bothmann@mnsu.edu, phone: 507-389-2010). News columns will typically be available prior to publication in print from the CCQ website at http://catalogingandclassificationquarterly.com/.

We would appreciate receiving items having to do with:

Research and Opinion

  • Abstracts or reports of on-going or unpublished research
  • Bibliographies of materials available on specific subjects
  • Analysis or description of new technologies
  • Calls for papers
  • Comments or opinions on the art of cataloging

Events

  • Notes, minutes, or summaries of meetings, etc. of interest to catalogers
  • Publication announcements
  • Description of grants
  • Description of projects

People

  • Announcements of changes in personnel
  • Announcements of honors, offices, etc.


Research and Opinion

BIBFRAME Model Document

The Library of Congress (LC) announced in late November 2012 the introduction of a draft data model for Web-based bibliographic description for the Bibliographic Framework (BIBFRAME) Initiative. The new encoding model is called BIBFRAME, and the current model document provides a high-level overview. The document, Bibliographic Framework as a Web of Data: Linked Data Model and Supporting Services, is available at http://www.loc.gov/marc/transition/pdf/marcld-report-11-21-2012.pdf. LC also announced that it will partner with the British Library, Deutsche Nationalbibliothek, George Washington University, National Library of Medicine (United States), OCLC, and Princeton University, collectively known as the Early Experimenters, for experimentation and testing of BIBFRAME. A report and update from the Early Experimenters will be presented at the 2013 American Library Association Midwinter Meeting on Sunday, January 27, from 10:30 a.m. to noon in the Conference Center of the Washington Convention Center, Room 304. Additional information about the BIBFRAME model can be found in Kevin Ford's presentation at the Semantic Web in Libraries conference in Cologne, Germany, November 28, 2012: http://bit.ly/FordBIBFRAME-SemanticWebKoln2012


Increasing Visibility of Libraries in the Global Information Space: Linking Library Data to the Rest of the World

Notes on the Seminar “Global Interoperability and Linked Data in Libraries,” June 18–19, 2012, Florence, Italy
Submitted by Ginevra Peruginelli, Istituto di Teoria e Tecniche dell’Informazione Giuridica, Florence, Italy

Recent advances in network technologies for the publication and use of structured data in open formats, combined with revised policies for the reuse of public data, open up new scenarios for the construction of innovative information services based on public data. Alongside the open data phenomenon, the rapid growth of the linked open data movement, supported by Semantic Web technologies and by the Resource Description Framework (RDF) model, marks the emergence of a new digital landscape.

The basic idea of linked data is the re-use of the architectural principles of the "traditional" web of documents for sharing data on a global scale by combining simplicity with decentralization and openness. The key advantages of this model are interoperability and flexibility: the value of the data increases significantly when multiple datasets, created and published by different providers, can be freely reused and interconnected by third parties without technical barriers.
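The triple model behind this interconnection can be sketched in a few lines of Python; the datasets and URIs below are invented for illustration, not drawn from any seminar presentation:

```python
# Hypothetical URIs and data, invented for illustration only.
# Linked data states facts as (subject, predicate, object) triples,
# with URIs naming the subjects and predicates.
library_data = {
    ("http://example.org/book/42", "http://purl.org/dc/terms/title",
     '"Don Quijote"'),
    ("http://example.org/book/42", "http://purl.org/dc/terms/creator",
     "http://example.org/person/cervantes"),
}

# A second provider independently publishes a fact about the same person.
authority_data = {
    ("http://example.org/person/cervantes",
     "http://xmlns.com/foaf/0.1/name", '"Miguel de Cervantes"'),
}

# Because both datasets name things with shared URIs, interconnection
# is simply set union -- no format negotiation, no technical barriers.
merged = library_data | authority_data

def to_ntriples(triples):
    """Serialize triples in N-Triples style: URIs bracketed, literals quoted."""
    lines = []
    for s, p, o in sorted(triples):
        obj = o if o.startswith('"') else "<" + o + ">"
        lines.append("<{}> <{}> {} .".format(s, p, obj))
    return "\n".join(lines)

print(to_ntriples(merged))
```

The point of the sketch is the union step: two providers who never coordinated beyond agreeing on URIs produce a combined graph that a third party can query.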

The adoption of the linked open data paradigm opens new opportunities in all fields of knowledge and in particular in the context of bibliographic data, due to the huge amount and the intrinsic cultural value of collected data held by libraries. Against this background, the Seminar “Global Interoperability and Linked Data in Libraries” focused on data enrichment and interconnection exploring possible solutions to many library issues, like enhanced Web searching, authority control, classification, data portability, and disambiguation.

The Seminar was organized by the University of Florence together with the Istituto centrale per il catalogo unico delle biblioteche italiane (ICCU), Biblioteca nazionale centrale di Firenze (BNCF), Casalini Libri, City of Florence, Conferenza dei rettori delle università italiane (CRUI), Associazione italiana biblioteche (AIB), Istituto di teoria e tecniche dell’informazione giuridica del Consiglio nazionale delle ricerche (ITTIG-CNR), and Fondazione Rinascimento digitale, Associazione italiana Editori (AIE).

The goal of the Seminar was twofold:
  • To focus on interoperability and open linked data in libraries, analyzing standards, experiences, and best practices for adopting Semantic Web technologies in this area.
  • To bring together researchers, data producers, and public actors in innovation to explore current experiences and discuss the challenges of new ways of representing, describing, and delivering contents on the Web.

The Seminar was organized into four sessions dedicated to:

  1. Linked data as a new paradigm of data interconnection;
  2. Publishing value vocabularies as linked data;
  3. Standards and applications;
  4. Experiences from libraries and the public administration.

The sessions were split between presentations and discussion, allowing creative and open debate. The goal was to contribute to the scientific development of linked data in libraries, particularly by building the evidence base to evaluate the efficacy, efficiency, and rigor of methods and techniques associated with this model for bibliographic data.

Session 1, chaired by Daniela Tiscornia of the National Research Council, focused on the importance of data interconnection with a critical approach on the issue, and included the following speakers:

  • Karen Coyle presented linked data as a possibly disruptive, but essentially logical, evolution of library data that responds to the World Wide Web as our global information and communication environment.
  • Aldo Gangemi (Istituto di Scienze e Tecnologie della Cognizione of CNR), by illustrating some Italian case studies, addressed the current state of the art and evolution scenarios of semantic technologies for linked data production, exploitation, and managing identity and interoperability of administrative entities and data.
  • Giovanni Tummarello (Digital Enterprise Research Institute; National University of Ireland–Galway, and Fondazione Bruno Kessler—FBK) explored the technological frontier of the web of data and its perspectives.
  • Tom Baker (Dublin Core Metadata Initiative), based on his experience with the Dublin Core Metadata Set, focused on emerging Semantic Web–based services and open publishing models for both data and content.
  • Paola Mazzucchi (mEDRA) addressed the issue of the role of technology in ensuring data interconnection.
  • Federico Morando (Politecnico di Torino—Nexa Center) focused on legal interoperability as a main requirement for making government data compatible with business and community data, discussing existing public licenses and the main open data licenses developed by European governments.

Session 2, chaired by Prof. Mauro Guerrini of the University of Florence, addressed practical experiences around Europe. Speakers for this session were:

  • Alan Danskin (British Library) examined Resource Description and Access (RDA) as the new cataloging standard providing guidance and instruction on how to identify and record attributes or properties of resources for discovery purposes. Recent developments such as the publication of the RDA element set and vocabularies on the Open Metadata Registry as linked open data were illustrated, together with challenges and the potential of linked open data in the broader framework of bibliographic control.
  • Kevin Ford (Library of Congress [LC]) illustrated LC's objective to publish in 2012 another of its largest authority files (after Subject Headings and Names) as linked data: LC Classification, relying on the MARC (Machine Readable Cataloging) Classification format. Reference was made to how mapping from LC Classification to the Metadata Authority Description Schema in RDF (MADS/RDF) or the Simple Knowledge Organization System (SKOS) has been challenging. With comparison to the publication of Library of Congress Subject Headings (LCSH) and Names at ID.LOC.GOV, this paper examined the issues encountered—and how those challenges were addressed—during the conversion of LC Classification to MADS/RDF and SKOS for release as linked data at ID.LOC.GOV.
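The general shape of such a conversion, expressing classification entries as SKOS-style concepts, can be sketched roughly as follows; the class notations, captions, and base URI are invented placeholders, not actual LC Classification data or the actual ID.LOC.GOV conversion:

```python
# Sketch only: notations and captions are invented examples.
SKOS = "http://www.w3.org/2004/02/skos/core#"

entries = [
    # (notation, caption, broader-notation-or-None)
    ("X",        "Example top class", None),
    ("X100-199", "Example subclass",  "X"),
]

def classification_to_skos(entries, base="http://example.org/class/"):
    """Express each classification entry as SKOS-style triples."""
    triples = []
    for notation, caption, broader in entries:
        uri = base + notation
        triples.append((uri, SKOS + "notation", '"%s"' % notation))
        triples.append((uri, SKOS + "prefLabel", '"%s"@en' % caption))
        if broader is not None:
            # The scheme's hierarchy becomes an explicit, machine-readable link.
            triples.append((uri, SKOS + "broader", base + broader))
    return triples

for s, p, o in classification_to_skos(entries):
    print(s, p, o)
```

One reason the real mapping is challenging, as Ford's talk suggests, is that classification data (ranges, tables, captions with internal structure) carries more than this flat notation/label/broader pattern can express.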
  • Joan S. Mitchell and Michael Panzer (OCLC) explored the history, use cases, and future plans for making the Dewey Decimal Classification (DDC) system linked data. In particular, the linking process to GeoNames was examined as an example of cross-domain vocabulary alignment.
  • Marie-Veronique Leroi (Ministry of Culture and Communication, France) illustrated Linked Heritage, a collaborative Terminology Management Platform (TMP) for a network of multilingual thesauri and controlled vocabularies. Reference was made to the Athena project, whose recommendations are followed in the work on the development of the TMP. This platform will allow any cultural institution to register, SKOSify, and manage its terminology in a collaborative way, thus providing a network of multilingual and cross-domain terminologies.
  • Giovanni Bergamin and Anna Lucarelli (Central National Library of Florence) analyzed the potential to implement the Nuovo Soggettario of the National Library of Florence as a linked-data service.
  • Tommaso Agnoloni, Ginevra Peruginelli, Maria Teresa Sagri, and Elisabetta Marinai (Istituto di Teoria e Tecniche dell’Informazione Giuridica of the CNR) analyzed the advantages of linked open data models and the use of data for new purposes, pointing out the added value of explicitly representing data in standard Web formats (extensible markup language [XML], RDF, Universal Resource Identifier [URI]). To achieve these benefits, the project of new advanced services to be made available from the DoGi-Legal Literature database (one of the most valuable sources for online access to legal doctrine, created and managed by the Institute of Legal Information Theory and Techniques of the CNR) was illustrated, and the schema of the data representing the database in RDF format was described. This will make the DoGi database interoperable with different data and service providers (libraries, publishers, information services for accessing national and European legal information). In particular, the focus was on the goal of promoting semantic interoperability between the DoGi classification scheme and other semantic indexing tools in the legal domain.

Session 3, chaired by Rossella Caffo of ICCU, included experiences and technical issues, with the following speakers:

  • Pat Riva (Bibliothèque et Archives nationales du Québec, Canada; International Federation of Library Associations and Institutions [IFLA]; Functional Requirements for Bibliographic Records [FRBR] Review Group) illustrated the work of the FRBR Review Group, which is in charge of the review and maintenance of IFLA's family of conceptual models and of developing guidelines and interpretive documents to assist in the application of the models (FRBR, Functional Requirements for Authority Data [FRAD], Functional Requirements for Subject Authority Data [FRSAD]). Emphasis was put on the interrelationships between the models with regard to the subject entities and relationships and the “naming” entities, as well as on the extension work made in conjunction with the FRBR/International Committee for Documentation-Conceptual Reference Model [CIDOC-CRM] Harmonisation Working Group. Reference was also made to the development of a series of namespaces for the entities, relationships, attributes, and user tasks as defined in the three models.
  • Elena Escolano Rodríguez (previously Chair of the International Standard Bibliographic Description [ISBD] Review Group) addressed the issue of ISBD adaptation to the Semantic Web of bibliographic data in linked data format to “enhance the portability of bibliographic data in the Semantic Web environment and the interoperability of the ISBD with other content standards,” as stated in the Purpose of the Consolidated ISBD, 2010. Declarations in RDF, standard definitions, and translations are essential to make multilingualism work effectively in the new Semantic Web environment.
  • Michael Hopwood (EDItEUR) and Patrizia Martini (Istituto Centrale per il Catalogo Unico delle biblioteche italiane—ICCU) analyzed the potential for linked data collaboration between the commercial and cultural sectors by illustrating Linked Heritage, a European project intended to provide new content from the public and private sectors to Europeana. These sectors’ different roles and objectives can provide added value: commercial metadata offer new information and services to Europeana users, while consistency and granularity are provided by library standards, thus allowing effective resource discovery. WP4, the project's public–private partnership work package, will create ad hoc metadata mappings to make interoperability work effectively.
  • Axel Kaschte (Ex Libris) explored the importance of library linked data models and illustrated the research Ex Libris is involved in and how this work can be utilized by innovative libraries.
  • Tiziana Possemato (Atcult) identified the role of software producers in enhancing the linked data model.
  • Gordon McKenna (Collections Trust) described his experience with heritage information in a linked-data environment.
  • Jan Brase (German National Library of Science and Technology), by illustrating the mission of the DataCite consortium, underlined the need for citable data sets that can be cross-linked from journal articles, helping scientists gain credit for making their data available.
  • Maurizio Lunghi, Chiara Cirinnà, and Emanuele Bellini (Fondazione Rinascimento Digitale, Florence) addressed the issue of trust and persistence of Internet resources, and the need for their authenticity, integrity, provenance, and relations with other pieces of information. Certification systems using Uniform Resource Name (URN) technology like the persistent identifiers for digital objects, for authors, and for bodies can help to refine the quality of information retrievable from the Internet and to increase its usability and potential development.

Session 4, chaired by Maria Letizia Sebastiani of the Central National Library of Florence, covered different experiences at national and local levels. Speakers at this session were:

  • Roberto Moriondo (Regione Piemonte) described the experience of a public administration authority, the Regione Piemonte.
  • Giovanni Menduni (Politecnico di Milano) and Gianluca Vannuccini (City of Florence) illustrated the experience of the City of Florence. Following an internal structured assessment process, each department named an Open Data referee and analyzed which available public data stores were eligible to be opened up in a suitable Web site section. The W3C Linked Data star-rating scheme was taken into account, and a number of datasets mapped into RDF have been implemented. The main efforts in this field are now focused on improving the RDF-mapped portion of the whole data store (the museums dataset was recently published) and on enhancing the adopted dictionaries. Due to the lack of specific and easy-to-use semantic standards for public administration, a home-made dictionary has been adopted, but the need is felt for collaboration with other public bodies, such as central national bodies, on government ontology standardization.
  • Gabriele Messmer (Bayerische Staatsbibliothek, Germany) illustrated the German experience on linking library metadata to the Web. In 2011 the libraries of Bavaria, Berlin, and Brandenburg decided to publish their shared network catalog with nearly 23 million records as open data and as linked open data (in March 2012 this data pool won the second prize in the first German-wide programming competition “Apps for Germany”). The steps of the project were presented together with the Europeana Libraries project where more than 5 million records will be ingested and published as linked open data.
  • Romain Wenz (Bibliothèque Nationale de France [BNF]) presented the new project of the BNF, which brings together data from catalogs (MARC), archives (Encoded Archival Description [EAD]), and digital resources (Dublin Core). It makes links and publishes Web pages with already about 750,000 linked resources. All raw data are also displayed in RDF and available with an Open Licence. The importance of authority files and identifiers to build this kind of service is illustrated, together with a first feedback on how users have been reacting to it and what kind of content is being used.
  • Martin Malmsten (National Library of Sweden) explained how, as part of a strategic investment in openness, the Swedish National Library has released the National Bibliography and accompanying authority file as open data with a Creative Commons Zero license, effectively putting it in the public domain. Emphasis is given to the need for ways to track and respond to changes in other datasets, as data become more interconnected and distributed.
  • Paola Manoni (Biblioteca Apostolica Vaticana) focused on the application profiles recently implemented in the Vatican Library's new discovery tool, which interacts with interoperability standards and manages different metadata. The Library's plans for access to Web-based digitized manuscript collections were also illustrated.
  • Finally, Roman Nanni (Biblioteca Leonardiana, Vinci) described the e-Leo project as an example of a digital archive for the history of Renaissance techniques and science manuscripts.

Through these presentations, the Florence Seminar discussed the benefits of linked data for libraries, while offering suggestions on practical ways in which libraries can participate in the development of the Semantic Web. This responds to the strong need for library data to be connected to the rest of the world and made open to the general public for many different uses. The participation of different stakeholders, involved in a variety of activities reflecting the trend toward enhanced exchange of data, should ensure a fruitful exchange of professional experiences, new market opportunities, and the development of publishing and dissemination models.

NETSL Programs at the NELA Annual Conference 2012

Submitted by Jennifer Eustis, Catalog/Metadata Librarian, University of Connecticut Libraries,
Storrs, Connecticut, United States

Every year, the New England Technical Services Librarians (NETSL) sponsor programs at the New England Library Association (NELA) Annual Conference. This year, NETSL sponsored two well-attended programs. The first was AACR2 and RDA: Key Differences, a two-session program consisting of a theoretical first hour and a hands-on workshop, presented by Steven Arakawa, Librarian for Training and Documentation at Yale University. The second was Using ORCID and Author Identifiers, presented by Dr. Micah Altman, Director of Research at MIT Libraries.

Steven Arakawa is in charge of all the RDA training for Yale University Libraries. His presentation was at once detailed and fun. It might seem contradictory to use the term “fun” in relation to RDA, but Steven was able to present his content in such a way that people could laugh about certain rules and still learn a great deal about RDA. Steven's first hour focused on the key differences between RDA and the Anglo-American Cataloguing Rules, Second Edition (AACR2). In the next session, the attendees got to try their hand at creating records in RDA. Unlike with many of the webinars on RDA, the attendees really appreciated this hands-on experience. The exercises even generated a great discussion of how to apply certain RDA rules. His presentation materials can be found at the NELA Annual Conference 2012 Web site under Tuesday morning at http://nelaconference.weebly.com/programs.html#tueam.

Micah Altman was recently named to the board of ORCID, the Open Researcher and Contributor ID registry (http://orcid.org). Micah introduced people to ORCID. Essentially, in the realm of journals, and in particular in the sciences, there is a problem with people being cited for what is essentially someone else's work because two or more researchers share the same name. There have been attempts, such as Scopus, to address this problem. ORCID attempts to go one step further by bringing together those who have something in place, like Scopus or PubMed, and linking this to the work done by other publishers and institutions. The service will allow researchers to control their own profiles, keep track of their publications, and search for collaborators and funding. This work can be done directly by researchers or by a third party, and includes integration with other services such as Scopus and Google Scholar.

Both programs received positive feedback and left NETSL attendees wanting to learn more about RDA and ORCID.

Conference Reports from the 2012 OLAC Conference, October 18–21, 2012, Albuquerque, New Mexico

Submitted by Jan Mayo, OLAC Newsletter Conference Reports Column Editor,
East Carolina University, Greenville, NC, United States

Editor's note: Presentation materials may be accessed in the conference Dropbox folder at http://bit.ly/OLAC2012-ConferenceMaterials. Workshop Session summaries have been edited to include descriptive elements. Readers should view the full conference report in the Online Audiovisual Catalogers, Inc. (OLAC) Newsletter for the tips, rules, and how-to statements not included in this reprint. The full conference report may be accessed from the OLAC Newsletter, volume 32, number 4, December 2012: http://olacinc.org/drupal/newsletters/enews/2012Dec/reports.html

Managing Cataloging Departments, or, the Accidental Leader (Preconference Session)
Presented by Rebecca Lubas, University of New Mexico Libraries and
Bobby Bothmann, Minnesota State University, Mankato
—Reported by Christina Hennessey, Loyola Marymount University

The first pre-conference of OLAC 2012 was led by Rebecca Lubas, Director of Cataloging and Discovery Services, University of New Mexico, and Bobby Bothmann, Associate Professor and Metadata & Emerging Technologies Librarian at Minnesota State University, Mankato. Both have been a major force in OLAC for many years and both contributed to the recent book, Practical Strategies for Cataloging Departments (Libraries Unlimited, 2011). They spoke to an international crowd of approximately 30 librarians, mostly department heads.

Bothmann led the first section, Implementing RDA: Resource Description and Access, an update to a presentation Bothmann and Lubas gave at the American Library Association (ALA) in 2010 on a similar topic. Bothmann started with some positive words on RDA. In addition to his regular job, he teaches cataloging for the University of Illinois at Urbana-Champaign, and explained how those new to cataloging are “getting it” much quicker with RDA than they did with AACR2.

There are implementation questions to answer. Is my system RDA-compliant? That is an easy one to answer: RDA is still in MARC, so if your system is MARC-compliant, it is RDA-compliant. Will your system take RDA records? Yes, but your system might not display the RDA fields or download all the fields correctly, so you need to work with your systems department to fix that. Do I have to use RDA? No. But the Program for Cooperative Cataloging (PCC) and the Library of Congress will implement RDA in March 2013, so more and more records will be RDA.

Something to help get you and your staff ready for RDA is to start using the vocabulary. Review appendices I–L in the RDA Toolkit for the correct relationship terms. Talk about terms in cataloger staff meetings. Unlearn MARC-speak: do not say 245|c, but learn the names of metadata elements and use them.
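As a rough illustration of this advice, a crosswalk from MARC shorthand to RDA element names could start as a simple lookup table; the handful of mappings below are illustrative, not a complete or authoritative list:

```python
# Partial, illustrative crosswalk from MARC 21 field/subfield shorthand
# to RDA element names -- not an authoritative or complete mapping.
MARC_TO_RDA = {
    "245|a": "title proper",
    "245|c": "statement of responsibility",
    "250|a": "designation of edition",
    "264|c": "date of publication",
    "300|a": "extent",
}

def rda_name(marc_shorthand):
    """Return the RDA element name for a MARC shorthand, if known."""
    return MARC_TO_RDA.get(marc_shorthand, "unknown -- extend the crosswalk")

print(rda_name("245|c"))  # statement of responsibility
```

A table like this could be posted near staff desks or used in a quiz at a cataloger staff meeting, so that the element names, not the tags, become the working vocabulary.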

As part of your planning, create a calendar for the implementation, and decide who will learn and who will teach. When will the library start using RDA completely (or in phases)? Review your indexing and decide on the core elements for the library. To really get the group thinking, we then split up into groups to create RDA implementation plans for a small library, and discussed the plans in a larger group.

Lubas led the second section, Using contractors, vendor cataloging products, and “insourcing.” Throw out your assumptions when deciding whether to contract or not, including both “in-house is the most expensive” and “outsourcing is evil.” Sometimes we do not outsource because we think we have local exceptions that make this impossible. Do a rigorous assessment of those processes and know the reasons behind them. Make sure you include all stakeholders in these decisions as they may know a reason in another department or process that requires this exception.

When evaluating a new vendor, managers should develop a pilot project involving all staff who touch the material during processing, not just those in your department. Always review the record samples the vendor provides, including loading them into your system to see how it handles them. Sometimes “in-sourcing” is a better idea after all. If special handling of items is required, it may be cheaper to do it in-house.

It is important to assess and re-assess once you have chosen a vendor. Vendors and contractors have turnover just like in libraries, and your product and quality of service may change after the time of setup. The final thought to take away from this section: use your cataloger-hours wisely and constantly review your blend of in-house work and vendor work.

Bothmann led the third section, Training. You should have learning objectives in any training you do: what should the trainee know at the end of the session, the end of the module, and then at the end of the entire training. Trainers should define their vocabulary with the trainee, define “quality” (not just “I know it when I see it”), define “acceptable,” and be consistent in reviewing work.

The group was given the following scenario to discuss in groups: you are training a new hire to copy catalog electronic books with AACR2. Name four learning objectives that define what the trainee will be able to do after the training. This led to a list of things that will help in training: written policies, written procedures (to support the policy statements), specific steps and instructions with details on how to accomplish tasks, and workflow diagrams.

Another scenario was posed to the group for discussion in small groups: you have made the decision to use RDA for books and e-resources beginning January 2013, and AACR2 for non-print media until September 2013. Sketch a policy addressing your rationale. This was a great scenario to jump-start the attendees into thinking about RDA planning and discussing RDA plans with other departments. It is easier to get money for training and the RDA Toolkit when administration hears, “I have a plan.” This section concluded with an excellent list of training blogs, resources, and selected readings on the topic, all of which are available on the presentation link.

Lubas led the final section, Managing catalogers—the human factor. Many of those in cataloging management ended up there because they were good catalogers, not necessarily good managers, and may not have had any management training. The plight of the middle manager lies in balancing the overall goals of the library against the job satisfaction and needs of employees; the two may pull in opposite directions.

Even if you have worked with your catalogers for many years as a manager, or are newly promoted out of the cataloging staff, you may not know your fellow catalogers as well as you should. It may be awkward, but schedule mini-interviews with your staff to review their skill inventories and educational backgrounds. You will often be surprised by skills or interests you did not know your co-workers had. You can also find out your employees’ goals during this time, both short term and long term.

Map Cataloging (Preconference Session)
Presented by Paige Andrew, Pennsylvania State University Libraries
—Reported by Scott Piepenburg, University of Wisconsin–Stevens Point

This session focused on the bare essentials of map cataloging, particularly identifying the different parts of maps, terminology, techniques, resources, and a brief history of map cataloging. The 20 attendees were each supplied a comprehensive binder with far more material than could be covered; in fact, the material distributed represents the material used by Andrew during his standard full-day workshops.

Before one can catalog maps, or cartographic materials, knowledge of the relevant terms and features of maps is necessary. Using various maps, Andrew provided examples of neat lines, legends, statements of responsibility, and, most importantly, depictions of scale (a ratio expressing how distance is represented on the map). With that knowledge, one can begin to complete a MARC cataloging record using the rules specified in AACR2 and other resources, notably Cartographic Materials: A Manual of Interpretation for AACR2, 2002 revision. While mentioned at times, RDA principles were not covered in depth because they have not been fully defined for cartographic resources.

Perhaps one of the most challenging activities of map cataloging is determining what the actual title is, particularly if there are maps on both sides of a sheet or if there are multiple maps on a single side. If there are multiple maps, are the maps equal in emphasis or is there a main map with inset maps and/or ancillary maps? If the map is folded, one will also need to address the issue of a cover versus a panel for a title. In some cases, there may be a personal-name main entry if the name of the cartographer or person responsible for creating the map is listed.

One of the most confusing areas of map cataloging, as well as the most critical, is the creation and entry of the 255 tag, which records the scale of the map. Some maps state the scale verbally, denoting how much distance an inch on the map represents; many times the scale is listed as “1 in. = 10 miles.” In such cases the cataloger will need to perform the necessary math to convert this into a ratio entry in the 255. Once that is calculated, the same value can be entered, minus the initial part of the ratio, in the 034 MARC tag. If no scale is given, the cataloger will need to use the Natural Scale Indicator provided to each attendee to calculate the scale of the map. This process highlighted just how important a good magnifying lens is to the cataloging of maps. Particularly helpful was the statement, “The larger the scale, the greater the detail, and the smaller the scale, the less the detail.” For this writer, at least, this has always been a vexing problem.
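The arithmetic behind that conversion is simple enough to sketch: a verbal scale of “1 in. = 10 miles” becomes a representative fraction by expressing the right-hand side in inches. The helper below is a hypothetical illustration, not code from the workshop:

```python
# Convert a verbal map scale ("1 in. = N miles") into the representative
# fraction recorded in the MARC 255 field and, as a bare denominator,
# in the 034 field. Hypothetical helper for illustration only.

INCHES_PER_MILE = 5280 * 12  # 63,360 inches per mile

def scale_denominator(miles_per_inch: float) -> int:
    """Return the denominator N of the representative fraction 1:N."""
    return round(miles_per_inch * INCHES_PER_MILE)

# "1 in. = 10 miles" works out to 1:633,600
n = scale_denominator(10)
print(f"255 $a Scale 1:{n:,}")  # 255 $a Scale 1:633,600
print(f"034 $b {n}")            # 034 $b 633600
```

The 034 records only the denominator because the numerator of a representative fraction is always 1.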

The session then moved on to creating the 300 tag. There was a discussion about how many “maps” one actually has on a sheet, particularly if the same rendering appears on both sides in different languages (tête-bêche) or if there are multiple maps of the same size on a single side. Coloration was also discussed: if the map was printed in a single “color,” it was not noted as being in color, but if the map was printed in black with blue for rivers, it was considered to be in color. Physical size measurements were explained as measuring between the neat lines, top to bottom and then side to side, both open and, where appropriate, folded. This highlighted the need for another tool, a tape measure, preferred over a meter stick for its portability and flexibility.

The session concluded with each table being given a map and a workform and being expected to create those areas of the record as covered in class. Each group had a different map, and there was significant sharing of ideas, questions, and comments between the groups. One group even had a laptop with access to OCLC, permitting practice in actual real-time online cataloging.

As Andrew mentioned, this class was far from comprehensive, as it did not cover historical maps, atlases, or globes, but it served its purpose in introducing the nomenclature, methodology, and standards that should be followed in the cataloging of cartographic resources. Having many maps available as examples and to practice on served to make the material presented more relevant than simple lecture alone.

Plenary Sessions

“Big, Social, and Media-Rich”
Opening Keynote Address by Eric Childress, OCLC
—Reported by Erminia Chao, Brigham Young University

Eric R. Childress is a consulting project manager in OCLC Research and is active in ALCTS (Association for Library Collections and Technical Services division of the American Library Association). He is on several advisory boards, gives vital project management support for OCLC Research Initiatives, and contributes to various research projects. He is at the forefront of new technologies and is considered to be a modern guru in developing and adapting cataloging methods to the ever changing nature of resources. In his presentation for OLAC, Childress spent most of the lecture focusing on the overarching patterns, specifically the virtualization of audiovisual materials, which are affecting media resources.

The first portion of his lecture he labeled “Big Patterns”: the movements currently trending and shaping how people interact with media. Social media seems to be the greatest of these, having changed how people interface with audiovisual material. Companies such as Apple and Google are succeeding tremendously by providing the means to access it.

Creativity also seems to be going through a transformation: once under the “exclusive reign” of professional publishing houses and corporations, works are now produced with cheap publishing methods, published primarily electronically, and owned by their creators. Libraries are also slowly becoming electronic and social with Web sites such as Goodreads, Academia.edu, and Mendeley.

Childress then went into detailing specific patterns and sharing interesting “tidbits” about where audiovisual media is heading today. He shared his observations regarding videogames, music, television, and film.

He first explained that gaming is moving away from consoles, such as the Xbox 360, and that game sales are dropping in many international markets. However, sales of digitally formatted games specifically designed for mobile platforms such as phones and tablets are rising. This is in part due to game development becoming more accessible to a wider independent community and the increased availability of mobile technologies.

Childress then proceeded to demonstrate how consumers are interacting with the music industry. While sales of compact discs are dropping, LP sales have actually increased over 39% since 2010. Meanwhile cloud-based streaming services such as Pandora and Spotify, a trend evident for several years now, are rising rapidly. Like the published word, music is straying from record labels and becoming very much an independent market.

Television viewing and on-demand content, Childress points out, are becoming less and less attached to a given schedule, or even to the television itself; the DVR and streaming have become the preferred viewing methods. Netflix, YouTube, network Web sites, Hulu, and other streaming sites give audiences the ability to watch their favorite shows wherever and whenever they would like, and many times without commercials.

For films, DVD and box office sales are declining, due in part to the higher cost of movie tickets and the rise of the Blu-ray format. Piracy and the boom of streaming and low-cost movie rentals have also taken a toll on the movie industry's sales. The entertainment industry is beginning to find ways to sell “live experiences,” possible only in theaters or by purchasing newly released films; making films in 3D is one such method. The increasing virtualization of audiovisual materials has not left the movie industry unscathed.

Childress finished his presentation by noting how large the Harry Potter franchise has become, worth over 20 billion dollars, all springing from an aspiring amateur author. He also noted that 50 years ago the first installment in the James Bond movie tradition, Dr. No, was released, as was the Beatles hit song “Love Me Do.”

“Post-Modern Cataloging: It's All AV Now!”
Closing Keynote Address by Lynne Howarth, University of Toronto
—Reported by Bojana Skarich, Michigan State University Libraries

Keynote speaker Lynne Howarth is the Associate Dean of Research and a professor at the Faculty of Information at the University of Toronto. Her research interests include knowledge organization standards and systems, as well as the evaluation of libraries’ technical services. Howarth's professional memberships include the Canadian Committee on Cataloguing, the IFLA Classification and Indexing Section, and IFLA's ISBD Review Group.

The theme of the conference this year was “post-modern cataloging.” Howarth chose this theme for her presentation to reflect the dramatic shift of the media landscape in the last 20 years. She contends that media creation and management have moved from an “expert/gatekeepers” model to a “new player” model. In the first model, media is created and managed by “old guard” entities such as newspapers, television networks, and publishers. In the second model, media creation and management is more diffuse, constructed and recycled by an “everyman creative class” via social media, retail, devices, and so on. The key players in this model are Google, Amazon, Facebook, Twitter, and many millions of media users worldwide. Howarth led the attendees on an amusing tour of the 2012 conference that tied the sessions presented back to the conference theme.

This new media landscape presents some challenges and opportunities, and the new bibliographic framework, RDA, seeks to address some of the new ways that information is being created, accessed, and used. Howarth called the previous era of cataloging a “flat earth perspective,” focused on AACR and MARC as the definitive standards and syntax to which catalogers look in order to describe increasingly complex media formats. In order to move toward more sophisticated data organization and access techniques, we as catalogers and librarians must be flexible and creative in adopting new bibliographic standards and technologies, or, in other words, be open to “seeing the world as round.” What will cataloging in the postmodern age look like? Howarth said that in her cataloging classes she teaches about a 50/50 mix of AACR2 and RDA. She said that eventually AACR2 will be phased out and will make way for a 100% emphasis on RDA and the FRBR conceptual model. She said new Library and Information Science (LIS) graduates will need to be “bilingual,” aware of both AACR2 and its successor.

Although RDA is still very much under development, Howarth emphasized that change is a constant in the cataloging world: “the world thinks, as catalogers, that we are the paragons of fixity. But of course we’re constantly changing, because the world is changing. I’m looking forward to the renaissance, structured data, linked data.” To some catalogers, especially those working for decades with AACR and AACR2, such a huge change can seem daunting. Howarth, however, is more optimistic: “I would like to say very gently to people: try to keep an open mind. RDA is not coming in fully formed … it's a piece of work.” By this definition, it is still being fleshed out, refined, and adapted to emerging media formats. RDA's success thus depends on how readily we adopt it and are able to further develop it based on our library patrons’ searching behaviors and needs. During the OLAC conference, there were countless workshop speakers who echoed the same idea. They would remark, “this is how I interpret RDA rule number X. You may interpret it differently and you may also be right in doing so.” Thus we in the cataloging community, with our involvement and commitment, will be responsible for developing best practices for using RDA.

A very interesting point that Howarth touched on was, in researcher Peter H. Lisius’ words, that “RDA should find a way to provide more consistency for accessing audio-visual materials.” For although there are more or less consistent rules and interpretations for print materials in RDA, when it comes to cataloging maps, music CDs, DVDs, and streaming video, there are inconsistencies and varying interpretations of the application of these rules. One of OLAC's roles has been to examine and recommend a set of best practices for these special types of material. On serving this professional development need, Howarth remarked that “OLAC will continue to be a first-rate conference for leading-edge, hands-on applied (and theoretical) exposure to trends and applications in AV/media cataloging. It is timely and relevant.” She is hopeful about the future of cataloging, even though there are many things still uncertain: “The one constant is change. I've been in this game for a long time. We’re doing well. You are so well-positioned for an engaging form of cataloging.” Indeed, the future is in our hands, and it's time to roll up our sleeves.

“PCC Practice for Assigning Motion Picture and Television Program Uniform Titles”
Presented by Peter Lisius, Kent State University Libraries
—Reported by Jan Mayo, East Carolina University

Recipient of the 2010 OLAC Research Grant, Peter Lisius, Music and Media Catalog Librarian, Kent State University Libraries, presented the results of his research into inconsistencies in the construction of uniform titles representing motion pictures, television programs, and radio programs. Lisius eventually narrowed the scope of his research to motion pictures and television programs, but the topic was still large enough to generate two articles, “PCC Practice for Assigning Uniform Titles for Motion Pictures: Principle versus Practice” and “PCC Practice for Assigning Uniform Titles for Television Programs: Principle versus Practice.”

Lisius summarized both of his papers, providing examples to illustrate the kinds of situations where uniform titles are needed. He shared the results of his searching in WorldCat for uniform titles for motion pictures and found that only about a third followed the PCC practice for assigning uniform titles. For television programs, the percentage was similar.

He feels that these materials would be more discoverable if greater use was made of uniform titles and hopes that RDA will find a way to ensure more consistency in providing such access points. Lisius recommends universal adoption of standards for motion picture and television program access points.

“From Carrier to Equivalence: Cataloging Reproductions in an RDA/FRBR Environment”
Presented by Morag Boyd, Ohio State University and Kevin Furniss, Tulane University
—Reported by Sandy Roe, Illinois State University

Recipients of the 2008 OLAC Research Grant, Kevin Furniss, Serials and Electronic Resources Catalog Librarian, Tulane University, and Morag Boyd, Head, Special Collections Cataloging, The Ohio State University, spoke of their research into the cataloging of reproductions now that we are “on our way to RDA.” Boyd began with a discussion of their findings: that RDA makes a clearer separation between content and carrier, that we have new opportunities to describe relationships, and that FRBR and clustering have the potential to bring increased clarity to users and to catalogers. They recommend that we shift our focus to cataloging the manifestation-in-hand, agree on a consistent approach to the treatment of reproductions, and leverage bibliographic relationships. Boyd pointed out that while systems are linking data in better ways than in the past, those linkages could be further utilized to make more meaningful user displays.

Boyd went on to lay out the theoretical groundwork beginning with the RDA glossary definitions for reproduction and facsimile. She navigated us through RDA 1.11 Facsimiles and Reproductions; RDA Section 8 Recording relationships between works, expressions, manifestations, and items; how we record relationships using RDA (RDA 24.4); and lastly the LC-PCC PS for RDA 27.1, which makes “related manifestation” a core element for the Library of Congress for reproductions.

With this background established, Furniss took the floor to move the presentation from theory to practice. He chose to illustrate his points by working through an RDA/MARC 21 record for an electronic book. Vendor e-book records require various levels of attention, and the audience was ready to tackle this one right along with the speaker. He reminded the audience that the Form fixed field should no longer be “s” for electronic but “o” for online. The Provider-Neutral (P-N) guidelines are important here, and he directed the audience to “Provider-Neutral E-Resource MARC Record Guide: P-N/RDA version,” a working draft for the Program for Cooperative Cataloging that includes revisions to March 1, 2012, with instructions for several fields including 264, 336, 337, 338, and 588. He recommended that we all use the option in the Edit menu of OCLC Connexion called “insert from cited record” because it automates the completion of the 776 linking entry field with information from the corresponding record for the title in print. He made it clear that if you are cataloging a Digital Library Federation (DLF) or HathiTrust title, you no longer have a provider-neutral situation and should not be using those guidelines. Furniss praised (and recommended) the documentation available from North Carolina State University Libraries on RDA (https://staff.lib.ncsu.edu/confluence/display/MNC/RDA). He urged us not to forget the RDA Appendix B abbreviations. Finally, remember never to include local information in your 856 field! This portion of the session helped to anchor the new rules into the already familiar MARC record, and Furniss’ mix of new instructions and reminders was easy to follow and useful.

Boyd and Furniss concluded their session by reviewing some of the problems that they see still remaining, and by encouraging the cataloging community to reconcile inconsistent cataloging approaches for reproductions by keeping the focus on content equivalence. RDA gives us some new opportunities, but the cataloging community needs to seize them. Make the expression of relationships ingrained in our cataloging practice, or, in their words, “essentialize bibliographic relationships.”

Workshops and Seminars

“Best Practices for Batchloading E-Serials”
Presented by Bonnie Parks, University of Portland
—Reported by Anna Goslen, Swarthmore College

Bonnie Parks, Technology and Catalog Librarian at the University of Portland, presented an introductory workshop covering batchloading records for serials and other e-resources. Parks reviewed common types of e-resource content, such as e-journals, reference books, technical books, content from multiple publishers/aggregator databases, and streaming media. Records for these resources can come from a variety of sources.

Batchloading records for serials and e-resources can present several challenges. Large batches of records must be edited and loaded into systems quickly, efficiently, and accurately for discovery by users. Detecting duplicates across multiple packages can be difficult. Methods for tracking updates, additions, and deletions to packages must be determined, and practices vary by publisher. Within local environments, staffing can be a challenge if quality workflows are not in place. Additionally, one must determine if staff members possess the necessary skills for performing the work or if further training is needed.

Parks discussed four keys to success in batchloading serial and e-resource records: communication, workflow, documentation, and training. Parks concluded the workshop with a series of batch processing examples, primarily using MARCEdit.

“Cataloging Digital Images”
Presented by Vicki Sipe, University of Maryland, Baltimore County
—Reported by Autumn Faulkner, Michigan State University

Vicki Sipe, catalog librarian at University of Maryland, Baltimore County, gave a beautifully presented talk about the special approach needed for cataloging digital images. Her enthusiasm for images made her presentation a joy.

Her focus centered on digital images of physical photographs, drawings, and so on rather than born-digital materials. When cataloging an image, one's visual literacy is an essential tool. Sipe displayed a photograph of a hay-baling team working outside a barn and asked the audience to note all observations about objects in the photo and any important information about what might be going on. This, she stressed, is a skill machines still do not have—only librarians, trained in analysis and confident in their judgment, can make these kinds of decisions about content and subject.

To wrap up, Sipe showed examples of records encoded in both MARC 21 and Dublin Core, as well as an RDA record. Although a few differences exist in presentation between these schemas, and also between an AACR2 record and an RDA record, catalogers of digital images should continue to rely on the Descriptive Cataloging of Rare Materials (Graphics), the Art & Architecture Thesaurus, the Thesaurus for Graphic Materials, and other standards for graphic materials to create records for digital images. Using this approach will ensure rich, consistent description and access for users regardless of encoding or presentation.

“Constructing RDA Access Points”
Presented by Adam L. Schiff, University of Washington Libraries
—Reported by Deborah Ryszka, University of Delaware

Adam Schiff, principal cataloger, University of Washington, gave an in-depth and thorough presentation on the changes and differences between the construction of headings in AACR2 and the construction of access points in RDA. Schiff's goals were for the participants in his workshop to understand the key changes in constructing access points in RDA as compared to AACR2, to obtain some hands-on experience in constructing these access points, to gain familiarity with the changes in terminology between the two codes, and to review some of the new MARC 21 fields for recording attributes. There was a specific emphasis in his presentation on the access points and RDA specifics that were of interest to the OLAC audience. His examples and practice exercises at the end of his presentation focused on the types of materials OLAC catalogers regularly encounter.

This workshop was full of practical instruction on how to construct access points using the rules in RDA. In the last part of the workshop, the attendees spent time working through the exercises Schiff had prepared for them. These exercises reinforced the many topics he covered throughout his presentation. Participants had several opportunities to create authorized access points for motion pictures, television programs, and specific episodes of television programs. Additionally, attendees were able to create access points for creators and other individuals associated with moving images. In this section of the workshop, Schiff briefly covered some of the relevant RDA rules (chapter 9) for constructing personal name access points. He noted that fictitious characters, such as Miss Piggy and Uggie (the dog in the motion picture The Artist), could now have access points as creators or contributors under RDA rule 9.0.

Schiff's presentation was accompanied by detailed and comprehensive documentation that can be accessed at his Web site (http://faculty.washington.edu/aschiff/), along with other relevant RDA documentation he has created.

“E-Serials Cataloging Using the CONSER Standard Record”
Presented by Steve Shadle, University of Washington Libraries
—Reported by Jan Mayo, East Carolina University

Steve Shadle, Serials Access Librarian at the University of Washington Libraries, presented a condensed version of his workshop on e-serials cataloging using the Cooperative Online Serials (CONSER) Standard Record (CSR). His goals for the session were to provide an overview of the CSR, create an e-serial catalog record using the CONSER RDA Cataloging Checklist, and compare current CSR practice with anticipated CONSER RDA practice.

He began by listing what will not change in RDA. The definition of a serial remains the same, serials will continue to be described as a whole, successive entry will be employed, the basis of description will remain the first issue, serials will be cataloged using the main entry/preferred access point, and the PCC provider-neutral policy will remain in force. Shadle described the CSR as a means to reduce redundancy within the catalog record; it emphasizes access points, simplifies record creation and maintenance, and establishes a mandatory element set (or floor). See http://www.loc.gov/catdir/cpso/conserdoc.pdf for more detail.

Shadle's content-rich presentation covered a lot of rules and how-to in a scant two hours. The slides of the serial record for the journal Stigma Research and Action as we worked through it really helped to put the CSR in context, while illustrating the differences between AACR2 and RDA.

“FRBR, Facets, and Discoverability of Moving Image Materials in Libraries”
Presented by Kelley McGrath, University of Oregon
—Reported by Israel Yáñez, Sacramento State University

This well-attended workshop was a report on the project involving the development of a user-centric prototype interface for moving image resources based on the FRBR conceptual model. To see what a search interface that focused on movies and versions (instead of publications) would look like, OLAC funded the development of a prototype. This prototype was built on a small scale, with limited data points, few fields and records, and a simplified data model.

McGrath gave more details about plans to take the prototype further and develop a centralized discovery interface that incorporates the FRBR model and faceted navigation. These details included extracting existing MARC data and transforming it into normalized FRBR-based data, creating a backend interface for ongoing creation and management of metadata, and agreeing on guidelines for catalogers.

Why base the prototype and the discovery interface on FRBR? The FRBR conceptual model allows us to focus on the movie or the work while providing contextual information to aid in the selection of a particular version. It also enables the library community to share the creation of movie-level records while reducing redundancy of work. An added benefit is more complete and accurate metadata.

One of the challenges in faithfully applying the FRBR model to moving images is that the creation and realization of a moving image work is most often a collaborative effort involving many people. The line between work and expression can seem blurred. As a practical compromise, the Moving Image Work-Level Records Task Force proposed the idea of a work/primary expression (WPE).

The WPE includes the standard FRBR work and the primary expression. The primary expression usually refers to the first public release of a moving image work. The expected advantage of a WPE-level record is that it would provide all the metadata for re-use with a new expression or manifestation. In addition, other expressions would be more contextually meaningful when contrasted with the WPE.

McGrath went on to briefly talk about faceted navigation and its user-centric benefits. She compared faceted navigation to a traditional library catalog interface. She pointed out the flexibility that faceted navigation provides the user, who is able to start faceted browsing at any of the FRBR entity levels.

For the final portion of the session, McGrath talked about controlled machine-actionable data and how it can support the desired discovery interface with more readable displays and faceted access. She broadly covered extracting data from existing manifestation records, clustering manifestations by work, and creating provisional work records from the data in those clusters. She had more detailed slides available on the Web that participants could access after the workshop.

McGrath made a case for several things catalogers can do now to put more machine-actionable data in our current records: use MARC field 130 for uniform titles when applicable, code the 257 field for country of producing entity, use field 046 $k for the original release date, use field 041 $h for the original language regardless of whether a translation is involved, and use relator codes and relator terms.
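As a sketch of what those recommendations produce, the fields might look like the following in mnemonic (line-mode) MARC. The bibliographic values here are illustrative examples of my own, not taken from the presentation:

```python
# Mock-up of the machine-actionable MARC fields recommended in the
# session, rendered in mnemonic (line-mode) format. All bibliographic
# values are hypothetical examples, not from the presentation.

fields = [
    ("130", "0 ", [("a", "Casablanca (Motion picture)")]),  # uniform title
    ("257", "  ", [("a", "United States")]),                # country of producing entity
    ("046", "  ", [("k", "1942")]),                         # original release date
    ("041", "1 ", [("a", "fre"), ("h", "eng")]),            # $h = original language
    ("700", "1 ", [("a", "Curtiz, Michael,"),
                   ("d", "1886-1962,"),
                   ("e", "film director.")]),               # relator term in $e
]

def mnemonic(tag, indicators, subfields):
    """Render one field as a line-mode MARC string (backslash = blank indicator)."""
    ind = indicators.replace(" ", "\\")
    body = " ".join("$" + code + " " + value for code, value in subfields)
    return "=" + tag + "  " + ind + " " + body

for f in fields:
    print(mnemonic(*f))
```

Coding these fields consistently is what makes work-level clustering and faceting, as described in the session, possible downstream.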

McGrath encourages anyone interested in participating in this project to contact her at kelleym@uoregon.edu.

Presented by Zoe Chao and Rob Olendorf, University of New Mexico
—Reported by Julie Renee Moore, California State University, Fresno

Robert Olendorf is a Research Data Librarian at the University of New Mexico, and Zoe Chao is a Metadata Librarian at the University of New Mexico. Together, this dynamic duo presented a general (and often humorous) introduction to metadata. Robert's background in science provided an especially interesting and fresh metadata vantage point.

Memorable in this workshop, Chao provided a vivid mental image of The Terminator in the phone booth, looking for Sarah Connor. Chao explained that The Terminator was exercising the FRBR user tasks: to find, identify, select, and obtain. This was surely the most entertaining explanation of the FRBR user tasks, ever!

Various definitions of metadata were provided. A concise version is that metadata is structured data that facilitates an action, such as to find, identify, select, and obtain—and to provide organization and management of the data. The FRBR concepts were briefly explained, including the Group 1 entities (Work, Expression, Manifestation, and Item); the Group 2 entities and responsibility relationships; and the Group 3 entities and subject relationships. FRBR provides catalogers and metadata specialists with instructions on which attributes are required, a common language, and a framework for extension and schema creation.

We looked at the Open Archives Initiative (OAI). The various types of metadata were explained, including representation metadata; descriptive metadata (examples: MARC, Dublin Core, and EAD); technical metadata, which explains how a digital object was created (examples: Metadata for Images in XML Standard [MIX], National Information Standards Organization [NISO] Z39.87, and TextMD); preservation and provenance metadata, which explains how a digital object is archived and preserved as well as its history of ownership and changes (examples: OAIS and PREservation Metadata: Implementation Strategies [PREMIS]); and rights metadata, which defines intellectual property rights and permissions (examples: Creative Commons licenses, software licenses, and the GNU General Public License).

The concept of “domain” was explained as a blueprint for constructing an application profile. An application profile is a mixture of existing namespaces, including schemas, vocabularies, and definitions. The benefits of a well-planned application profile include consistency and clear guidelines to follow.

“Sound Cataloging”
Presented by Jay Weitz, OCLC
—Reported by Scott M. Dutkiewicz, Clemson University Libraries

“If there is only one thing you take away from this workshop, it should be that compact discs can only have a date of … 1982 or later!” This is an example of the practicality of the workshop presented by Jay Weitz, OCLC Senior Consulting Database Specialist, author of the OLAC Newsletter column “Questions and Answers,” and longtime OCLC liaison to OLAC.

In many respects, such as when to input a new record, sources of information, dates, and titles, there is little difference between AACR2 and RDA. An interesting discussion took place over the distinction between type codes j (musical) and i (nonmusical). Some materials blur that divide, such as exercise music with spoken instructions. The best practice is to choose by the predominance of music (Type “j”) or spoken instruction (Type “i”), to catalog read-along materials as nonmusical sound recordings (Type “i”) with accompanying text, and to catalog recorded theses as sound recordings (Type “i” or “j”) as appropriate.

The remainder of the presentation covered the 024 field, statements of responsibility, and other audio formats such as DVD audio, streaming audio, and the Playaway. Close study of this material is essential for sound format catalogers at any level of experience. Weitz offered a wealth of experience, a valuable historical perspective, and a passion for cataloging that benefited and inspired workshop participants.

“Video Cataloging”
Presented by Jay Weitz, OCLC
—Reported by Maureen Puffer-Rothenberg, Valdosta State University

Throughout his presentation, Jay Weitz displayed both detailed knowledge and dry wit; his issues with RDA guidelines were couched in curmudgeonly good humor, and the room appreciated both his humor and his optimism that stakeholders will wrestle RDA's thornier aspects into submission.

Weitz emphasized throughout that RDA is “still very much in flux”; several organizations, such as the Joint Steering Committee for Development of RDA (JSC), the Committee on Cataloging: Description and Access (CC:DA), and Machine-Readable Bibliographic Information (MARBI), are working to “tame” RDA and to develop and document best practices.

There is not a lot of difference between AACR2 and RDA regarding sources of information for DVD-Video. The “chief” source of information (title frames for video) is now the “preferred” source. In RDA, if the preferred source does not provide the required information, we use a permanently affixed label, or “embedded metadata in textual form.” Lacking the preferred source, RDA's instruction to use “another source forming part of the resource itself” allows us a lot of leeway, but best practices coming out of the visual materials cataloging community might change that.

Weitz ended with a brief discussion of languages: a DVD can house many language versions of the same content, what with subtitles, captions, dubbing, and so on. We code for the language of the main content. Many questions about how to distinguish among language expressions are still up in the air.





©Taylor & Francis