Cataloging & Classification Quarterly

Volume 34, no. 4, 2002

 


 

CATALOGING NEWS

 

 Sandy Roe, News Editor

 

Welcome to the news column.  Its purpose is to disseminate information on any aspect of cataloging and classification that may be of interest to the cataloging community.  This column is not just intended for news items, but serves to document discussions of interest as well as news concerning you, your research efforts, and your organization.  Please send any pertinent materials, notes, minutes, or reports to: Sandy Roe; Memorial Library; Minnesota State University, Mankato; Mankato, MN 56001-8419 (email: skroe@ilstu.edu).  Phone: 507-389-2155.  News columns will typically be available prior to publication in print from the CCQ website at http://www.catalogingandclassificationquarterly.com.

 

We would appreciate receiving items having to do with:

 

Research and Opinion 

 Events 

 People

 EVENTS

 

Moving Beyond the Catalog: Bibliographic Access in a Web World, Conference presented by NELINET, December 11, 2001, College of the Holy Cross, Worcester, Mass.

 

As we continue to grapple with the challenges of representing new and emerging media in our catalogs, and as cataloging rules and data formats evolve to answer those challenges, we discover even newer horizons which, in our more elevated (or simply optimistic) moments, we are determined to incorporate into technical services practice. Chief among these is the pressing imperative to provide integrated access to the most diverse types of metadata. These originate with an ever-expanding set of information providers representing many constituencies, from faculty-created research databases and visual image archives, to electronic dissertations and digitized collections of unique cultural materials. The NELINET conference, "Moving Beyond the Catalog," brought together a variety of practitioners and vendor representatives, who shared their experiences and reflections on a situation which sometimes seems to be of fractal complexity.

 

Michael Kaplan, Director of Product Management for Ex Libris (USA) Inc., gave the keynote address, "Wherefore Bibliographic Access: The Next Stage in the Web (R)Evolution." Mr. Kaplan wove together a number of themes as he explored the pressures driving access models beyond those provided by traditional bibliographic data structures. He first reminded the audience of Gresham's Law, which can be stated as "Bad coinage drives out good coinage." He derived a corollary, Kaplan's Law, which states that "Cheaper or quicker cataloging will generally prevail over more expensive or slower cataloging."

 

There are multiple factors which make Kaplan's Law operational. The first is the managerial need to drive down the cost of production by whatever means are most efficient and produce the desired results in a given situation. This broad principle interlaces with some library/information-specific conditions: the need to reduce backlogs, the need to complete retrospective conversion (as materials without online metadata will be "relegated to the dustbins of scholarship"), and the increasing preference of many users for full-text materials online in addition to the metadata needed to access them. In other words, the universe addressed by bibliographic control continues to expand and diversify, putting continual downward pressures on slower or more expensive cataloging.

 

The issues before us, however, go beyond cataloging policies and practices, extending to a larger conception of information architecture. Mr. Kaplan reviewed a number of recommendations resulting from the Library of Congress' Bicentennial Conference on Bibliographic Control for the New Millennium (http://lcweb.loc.gov/catdir/bibcontrol). These include recommendations which address not only the availability of standard records for electronic resources, but also collaboration with other communities to develop diverse but interoperable standards, as appropriate for diverse types of resources. These and other discussions imply an information architecture which goes beyond the "single-bucket" model in which traditional cataloging is central, and also beyond the inadequacies of even the smartest keyword-driven search engines.

 

The "portal" or "gateway" model now under discussion seems to promise a solution to the problems of bringing together heterogeneous information resources. The underlying assumption is the desirability of a single point of access to multiple resources, and a single interface "view" for each user (perhaps customizable). But if the gateway model assumes that greatly diverging query formats and data standards will result in a single "bucket" of unified search results, there is still reason for skepticism. Mr. Kaplan posited, instead, multiple "buckets", brought together for the user via a unified interface. For the balance of his talk, he discussed Ex Libris' MetaLib and SFX technologies, as one approach to a workable library gateway.

 

MetaLib and SFX represent "front door" and "back door" approaches, respectively, to the problem of bringing multiple, diverse resources together for end users. MetaLib uses a knowledge base, which may be used directly as provided by Ex Libris or customized by local librarians, to mediate between a unified query interface and a unified display of results. This knowledge base includes both descriptive knowledge (general information which describes a resource), and functional knowledge (the rules that define the flow of data, interface, and manner of searching for a given resource). Authentication and authorization are handled by MetaLib, and individuals may customize their view in "MyLibrary" fashion. SFX uses the OpenURL specification (which is being developed as a NISO standard) to allow interconnection among diverse resources at the level of the individual resource itself, whether that is a full-text retrieval or catalog/metadata record. Similar to MetaLib, a library may query Ex Libris' own SFX server or install its own. These technologies, as well as others available or in development from other vendors, have the promise of a new level of assistance for those navigating the labyrinth of various OPACs, A&I services, web sites, electronic journals, full-text databases and more.
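By way of illustration, an OpenURL of this period was simply a resolver address with a citation carried as key-value pairs; the resolver, such as an SFX server, decides which services to offer for that citation. The short Python sketch below assembles one such link. The resolver hostname, the ISSN, and the citation values are invented for the example, and the parameter names follow the draft OpenURL syntax.

    from urllib.parse import urlencode

    RESOLVER = "http://sfx.example.edu/resolver"  # hypothetical SFX server

    citation = {                  # key-value pairs in the draft OpenURL style
        "genre": "article",
        "issn": "1234-5679",      # invented ISSN
        "date": "2002",
        "volume": "34",
        "issue": "4",
        "spage": "3",
        "aulast": "Kaplan",
    }

    print(RESOLVER + "?" + urlencode(citation))
    # http://sfx.example.edu/resolver?genre=article&issn=1234-5679&date=2002...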

 

Bill Carney, Consulting Marketing Analyst, OCLC Online Computer Library Center Inc., followed with an overview of current developments titled "Helping Libraries Help People: OCLC's Plan to Expand WorldCat." Among the "megatrends" shaping OCLC's evolution are a dramatically increased demand for quality metadata worldwide, a looming cataloger shortage, competition for skilled librarians from the private sector, and globalization of information exchange. In response, OCLC has articulated several strategic principles: 1) Libraries must and will play a crucial role in the digital world; 2) Weave libraries into the Web and the Web into libraries; 3) Enhance services around an enriched WorldCat; and 4) Simplify access for the patron.

 

There are a number of aspects of globalization that are pertinent to OCLC's development. Support for UNICODE as the "native data representation" is essential to providing multilingual/multiscript sorting, indexing, and thesaural support, but it is not possible in OCLC's present environment. Global networking and linking technologies are coming to the fore, as is the need for explicit support for non-MARC21 formats and data types. Of great importance is OCLC's implementation of the multilevel entity-relationship model outlined in Functional Requirements for Bibliographic Records (or FRBR), published by the International Federation of Library Associations and Institutions in 1998 (http://www.ifla.org/VII/s13/frbr/frbr.pdf). FRBR highlights four basic tasks of information seekers – to find, identify, select, and obtain – and a four-level model for information-bearing entities, from the abstract "work" to the particular physical "item." Implementation of the FRBR data model will allow OCLC the flexibility to provide different views of bibliographic records, consisting of diverse combinations of data elements, and suited to a wider variety of end user needs than is possible at present.
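For readers who think in data structures, the FRBR levels nest roughly as in the Python sketch below. The class and field names are invented for illustration; this is not OCLC's implementation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Item:                    # a single physical or digital copy
        barcode: str

    @dataclass
    class Manifestation:           # a publication embodying an expression
        publisher: str
        year: int
        items: List[Item] = field(default_factory=list)

    @dataclass
    class Expression:              # one realization of a work, e.g. a translation
        language: str
        manifestations: List[Manifestation] = field(default_factory=list)

    @dataclass
    class Work:                    # the abstract intellectual creation
        title: str
        expressions: List[Expression] = field(default_factory=list)

    # A catalog built on this model can answer "show every edition, translation,
    # and copy of this work" by walking the tree, supporting the user tasks of
    # finding, identifying, selecting, and obtaining.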

 

To accomplish these goals, OCLC will move to a radically different online environment, and plans to phase out the Passport interface by the end of 2003. OCLC's new Cataloging & Metadata Services program plans to develop a suite of metadata capture services, metadata linking services, consulting and contract services, and online cataloging services. Several expected outcomes can be highlighted here. The WorldCat database will expand to include metadata for a great variety of resource types. Libraries will be able to use a greater variety of metadata standards, such as EAD and ONIX, and will be able to catalog material in a greater range of languages and scripts, such as Hebrew and Cyrillic. Existing and new services and products will be integrated into a "single entry-point, browser-accessible, web-based service," or "library services desktop," allowing for both greater productivity and simplicity of use.

 

Mr. Carney concluded with a quick tour of OCLC's CORC, CatExpress, FirstSearch, and web-based Interlibrary Loan, relating these existing products to the planned new services. For information on OCLC's migration plans, see http://www.oclc.org/strategy/cataloging/guidetomigration.pdf.

 

Before the lunch break, Fay Zipkowitz and Gabriel Hamilton spoke on "Preserving a Culture: the Steven Spielberg Digital Yiddish Library at the National Yiddish Book Center." Prof. Zipkowitz is Director of the Yiddish Book Department and Yiddish Cataloger, and Mr. Hamilton is Director of Information Technology for the Center, which is located at Hampshire College in Amherst, Massachusetts. The Center is embarking on a project to digitize the written artifacts of an entire culture, one which thrived from the last quarter of the 19th century through the mid-20th century. Catalog records for the digitized materials have been created and will be available through the major utilities. The project presents a number of challenges. Because Yiddish is written in Hebrew characters, transliteration presents problems for cataloging, such as conflicting orthography and multiple forms of names. The nature of the materials themselves, their quality or rarity, also factors into the process. Some microfiche materials, for example, were filmed with so little attention to quality control that they are barely, if at all, usable. By contrast, there are items (such as the memorial books published after World War II) that really belong in research collections, and should be digitized nondestructively by another, more appropriate institution.

 

Robin Wendler, Metadata Analyst at Harvard University, opened the afternoon with a subject that must be on the minds of many: "When More is Too Much: The Challenges of Separate Catalogs, Super-Catalogs, and Cross-Catalog Searching." She addressed the multiple factors which drive the continuing diversity among catalogs and databases, and which frustrate efforts to create a seamless, "one-stop" megaresource.

 

The number and variety of catalogs available both inside and outside Harvard continues to proliferate, and although Ms. Wendler focused on research institutions, this picture also holds, to varying extents, for higher education institutions in general. Besides the by-now traditional ILSs representing mainstream library collections, there are new catalogs for specialized "pockets" of library materials, new or newly accessible catalogs of non-library collections, and commercial undertakings with library- and non-library-like content.

 

Why are there so many catalogs? Four major forces operate to guarantee this situation: their development and operation by different organizations, differing functional requirements, differences in granularity, and variations in aspects of metadata.

 

The Harvard College Libraries include a multiplicity of independent administrative units, serving a variety of distinct constituencies, and setting their own independent policies. In some cases, these libraries may have stronger ties to peer units outside the university than to other units at Harvard. Although the multiple libraries serve distinct user groups, the picture is complicated by a blurring of the edges between one collection and another, especially when information is digital. Individual users may not care about administrative boundaries, and become frustrated or confused by jurisdictional policies resulting in different means of access. There are cultural barriers to coordinated efforts within the library system, even when some units have the knowledge and expertise that other units lack.

 

Different catalogs may be specialized to provide different functions. A traditional OPAC, for example, will be used to request retrieval of materials from remote storage, among other purposes. Image catalogs must allow browsing of image result sets; geospatial information will be searched using "bounding polygons." Each of these uses requires different technical specifications, data structures, and interface designs.

 

The question of granularity addresses the level of detail available in description and retrieval. A catalog may be used to retrieve metadata for "atomic items" such as single images, "composite items" such as books, or collection-level/archival metadata. If metadata for all of these levels are treated equally in a particular catalog, records for major resources may be obscured in result sets. Mixing levels of granularity also makes search optimization more problematic.

 

The metadata itself will vary along a number of dimensions. Principles of description may be based in different concepts, corresponding to varying contexts and historical practices. Traditional library cataloging proceeds from describing the "object in hand," while cataloging of images may focus on describing the object pictured, and archival principles concentrate on the materials as they are organized at the collection level. These three very different approaches will necessarily result in divergent metadata element sets and structures, from the flat structures of the typical MARC record to the multi-level organization available via EAD (Encoded Archival Description). Metadata will also vary according to the standards of different library/user communities; this affects issues such as the amount of detail present in description, the choice or absence of controlled vocabularies, etc. Lastly, metadata differs semantically, based on the context in which the material is held and the way it is expected to be sought or used. For example, a date in image metadata may signify the date an art object was created or the date a photograph was taken; in the case of astronomical images, the date and time help to identify the very content of the image.

 

While these factors force the continuing proliferation of new catalogs, indexes, and databases, there is still the demand for "one-stop shopping," for some type of unifying technology that will make the complexity as transparent as possible for the end user. But here a paradox emerges. Smaller catalogs are simply easier to use for most people: there are fewer functions to learn, and they are used more often. Smaller, local catalogs can be tailored to known audiences. At the same time, this requires users with more complex needs to consult multiple catalogs, which reintroduces the confusion. Additionally, relegating different sets of metadata to different catalogs or databases implies disciplinary boundaries, and as Ms. Wendler put it, "Any boundary you draw when setting the scope of a catalog annoys someone."

 

Ms. Wendler concluded by discussing the pros and cons of various models of catalog integration. The portal or gateway model is currently popular; interestingly, there is an increasing demand for specialized, discipline-specific portals, somewhat contradicting the original notion of a portal's value. A second model is distributed search, or broadcast search to multiple servers. This model requires a complex front-end system, an understanding of each target system's requirements, and heavy maintenance. Performance in this situation is sensitive to the weakest link. Systems may have non-parallel functions or use different terminology, and consolidating results into a single set (the "single bucket" referred to by Michael Kaplan) is difficult. The third model is the "super-catalog," in which metadata is aggregated ahead of time from many different catalogs. In this model, aggregated data may require homogenizing; this is most difficult to do for terminology, and may force the metadata presented toward the lowest common denominator. Finally, there is the possibility of navigation between catalogs; SFX is an example of a technology used for this purpose. In summary, at present every catalog integration approach dumbs down access while broadening the scope of search and retrieval. While further work must be done to improve this situation, it is also essential to educate end users now with respect to what they are likely to find under any model of catalog integration.
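The trade-offs of the distributed-search model are easy to see in outline. The Python sketch below broadcasts one query to several targets and consolidates the results; the target interface, the timeout, and the crude title-based deduplication are assumptions made for illustration, not any vendor's actual design.

    from concurrent.futures import ThreadPoolExecutor

    def broadcast_search(targets, query, timeout=10):
        """Send one query to every target catalog and merge the results."""
        results = []
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(t["search"], query) for t in targets]
            for f in futures:
                try:
                    # Overall response time is bounded by the slowest target:
                    # the "weakest link" problem.
                    results.extend(f.result(timeout=timeout))
                except Exception:
                    pass  # a failed or slow target silently degrades the set
        # Consolidating into a "single bucket" needs a shared match key;
        # a normalized title is a crude stand-in for real record matching.
        seen, merged = set(), []
        for record in results:
            key = record.get("title", "").strip().lower()
            if key and key not in seen:
                seen.add(key)
                merged.append(record)
        return merged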

 

The afternoon included three additional presentations describing local policies and practices. Jill Thomas, Digital Resources Cataloger at Boston College, discussed BC's implementation of MetaLib as an extension of its Ex Libris OPAC. She demonstrated MetaLib's ability to pull together results from heterogeneous information sources, using as examples the phrase "islamic painting" searched simultaneously in BC's OPAC and the Islamic Art and Architecture database, and "thomas aquinas" searched in the OPAC and in Philosopher's Index. Among the difficulties presented by the MetaLib project are the need to "double catalog" resources in both MARC and MetaLib formats, and the steep learning curve the project presents for patrons, librarians, and Ex Libris staff alike. On the positive side, though, MetaLib "dramatically leverages" the searching power of OPACs, and (of particular importance in higher education) it has the potential to unify the efforts of multiple database creators, including faculty.

 

Keith Glavash, Head of Document Services, MIT Libraries, described the processes involved in "Building Catalog Records for Electronic Theses at MIT." At present, most theses at MIT are received in paper; few are "born digital," as this format is not yet required. There are approximately 100,000 theses in the Libraries' collections, with about 2,000 more added each year. Of these, over 6,000 popular theses have been scanned and are freely available at http://theses.mit.edu/. The theses are cataloged after archival processing and filming and/or scanning has taken place. However, because scanned theses are added to the digital collection through a separate workflow, catalog records do not carry links to the online formats. It is a goal to eliminate this discrepancy, and for all online theses to be reflected as such in catalog records. This will involve re-ordering the processing workflow, and potentially requiring more keyword and abstract information from thesis authors.

 

Cecilia Piccolo Tittemore, Head of Cataloging and Metadata Services, Dartmouth College, surveyed "Access to Digital Resources in the Dartmouth College Catalog." She discussed the evolution of Dartmouth's policies and procedures, which have been based on a constant pragmatic assessment of the options available, given certain assumptions. Among the "givens" at Dartmouth is that the catalog is identified as the central repository of information about web resources, in contrast to multiple alphabetical lists of resources. Selection of digital resources is bibliographer-driven. Dartmouth uses a separate-record approach to the problem of multiple electronic versions, again driven pragmatically by the need to move batches of data in and out of the system as expeditiously as possible.

 

Much of Ms. Tittemore's talk, however, was devoted to intriguing speculations as to the future of the catalog record itself. We have asked catalog records to take on a number of additional burdens in recent years, particularly for the control of digital resources. The definition of "the catalog" seems to be developing toward including both what the library owns and what it licenses access to, in addition to a selection of valuable "free" web sites. It is not necessarily clear that this is how the catalog is best defined. Much of what has transpired in cataloging and catalog design is based in expediency; the tools that catalogers use have been, in general, the best available for providing the desired access to this range of materials. However, in a world which features an enormous array of mutating digital resources, it may be that the AACR-based bibliographic description has become a very elaborate "unique identifier." Ms. Tittemore wondered if it might be possible to explicitly develop a system of identifiers to represent works or their expressions, connected to a host of other resources which amplify or expand on the entity represented by the identifier. What would we need, she asked, to add back on to that identifier to make this system a viable basis for a catalog? These speculations are in the spirit of much of the work being done today on new data structures and developing cataloging codes, much of it stemming from IFLA's Functional Requirements for Bibliographic Records, as mentioned by Bill Carney, and also from findings on the frontiers of research in interoperability. How much of description, indeed, can we "split" in the quest for a new kind of modular flexibility in presenting data, and how much, indivisible by nature, must remain "lumped?"

 

David Miller, Head of Technical Services

Levin Library, Curry College, Milton, Massachusetts

 

 

Meeting Minutes of the ALCTS Technical Services Directors of Large Research Libraries Discussion Group (“Big Heads”), held during the Midwinter American Library Association Meeting, New Orleans, LA, January 18, 2002

 

Welcome, introductions and announcements (Larry Alford, Chair)

 

Introductions. For announcements see Miscellaneous.

 

OhioLINK switch to basing serials subscription costs on electronic rather than print

 

Tom Sanville - philosophy behind proposal and vendor/publisher reactions

 

Tom Sanville, Director of OhioLINK, described the shift or “flip” in e-journal license pricing that OhioLINK is implementing. In this new structure the electronic is treated as the primary medium with print as the optional medium.

 

In most electronic licenses the bulk of the annual expense is for the library's print renewals, paid through the traditional channel of serials agents to publishers; Sanville called this the Print-Plus-Electronic model.  In the Electronic-Plus-Print model the bulk of the library's annual expense is paid to the publisher via OhioLINK, with a minor amount going via the serials agent. It is a way of re-arranging the cash flow. A major goal of this restructuring is to do it in a way that maintains the economic necessities of libraries, serials agents, and publishers. That means three things: the total net revenue that the publisher receives from the library (via OhioLINK) and from the serials agent must be maintained; the library must not increase the net funds it expends; and the serials agent must maintain the dollar margin between the amount it pays the publisher and the amount it receives from the library. With the majority of the funds attached to the electronic license rather than to the print subscriptions, adjustments need to be made for the negative impact on the serials agent of having lower revenues on which to receive discounts, charge service fees, etc. Under the Print-Plus model each library has to conduct an annual inventory of print renewals: a laborious, error-prone process. Over time, as print subscriptions disappear, continuing to use them as the primary focus for payment becomes a less and less accurate reflection of reality.
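A small worked example may make the constraint concrete. All dollar figures in the Python sketch below are invented for illustration:

    # Print-Plus-Electronic: the library pays the serials agent, which keeps
    # its dollar margin and passes the rest on to the publisher.
    library_spend = 100_000
    agent_margin = 8_000
    publisher_net = library_spend - agent_margin          # 92,000

    # Electronic-Plus-Print: the bulk moves to the electronic license paid
    # via OhioLINK; a minor amount still flows through the serials agent.
    via_consortium = 90_000
    via_agent = 10_000
    agent_to_publisher = via_agent - agent_margin         # 2,000

    assert via_consortium + via_agent == library_spend           # library pays no more
    assert via_consortium + agent_to_publisher == publisher_net  # publisher is whole
    # The agent's dollar margin (8,000) is likewise unchanged: the flip
    # merely re-arranges the cash flow.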

 

[For a fuller description of the “flip” in the price structure of electronic and print journals when brought through a consortium-wide license, see the article by and the interview with Tom Sanville in The Charleston Advisor.  To obtain them, go to http://charlestonco.com/tcarvws.cfm, choose Browse Columns by Author, and look for Sanville, Tom.  Both the article and the interview will be listed. – J. Hopkins] 

 

Carol Diedrichs - proposal from perspective of large research library

 

Carol said that from a collections perspective she likes flip pricing.  It puts the costs where they belong: on the electronic side.  From the technical services perspective, however, flip pricing creates more issues. She is accustomed to prepaying for subscriptions starting in the spring, and she gets a prepayment discount from serials agents for doing so. In the Electronic-Plus model the interest on the prepayment comes from the publisher through OhioLINK. One disadvantage of the Electronic-Plus model is that OhioLINK does not provide the electronic invoice that serials agents did. 

 

This is the first full year of the program. What is now needed is to have conversations with serials agents to learn how this change has worked for them: Have they gotten enough income?

 

Larry Alford (U. of North Carolina) asked whether the serials agent gets the same discount from a publisher for an electronic version of a title as it does for the print version.  Tom Sanville said there is a compensating factor to ensure that each party doesn't lose.  The publisher has to provide a larger discount for the electronic version of a title so the serials agent doesn't lose; e.g., the cost for print was $20,000 and for the electronic version it is $2,000. If the publisher bases its discount on a percentage of price, the serials agent loses.
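Working those figures through shows why a percentage-based discount fails. In the sketch below only the $20,000 and $2,000 prices come from the example; the 10% discount rate is an assumption:

    print_price, electronic_price = 20_000, 2_000
    rate = 0.10                                   # assumed agent discount rate

    old_margin = rate * print_price               # $2,000 margin on print
    reused_rate_margin = rate * electronic_price  # only $200 if the same % is reused

    # To keep the agent's dollar margin whole, the discount rate on the
    # much smaller electronic price must grow tenfold:
    compensating_rate = old_margin / electronic_price
    print(compensating_rate)                      # 1.0, i.e., 100%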

 

Bob Wolven (Columbia) asked about the reaction from publishers.  Tom Sanville said that they had been willing to work with OhioLINK.  He doesn’t know about their future intentions.

 

Duane Arenales (NLM) commented that although publishers still bundle pricing, we will see more and more publishers unbundling.  Tom Sanville said they are trying to make the system indifferent to print vs. electronic subscriptions.  That approach allows those publishers who wish to stop issuing print to do so. 

 

Carol Pitts Diedrichs (OSU) said that many of the libraries in Ohio have given up print because OhioLINK provides archival access to the electronic version.

 

Rosann Bazirjian (Penn State) asked about the impact of flip pricing on the bibliographic utilities. Carol said she was unaware of any impact on the utilities.

 

Cynthia Shelton (UCLA) asked what they anticipate basing pricing on. Tom Sanville said that past print expenditures are still a starting point for negotiations.

 

Joyce Ogburn (Washington) said that the University of Washington had flipped prices for Science Direct; that saves them the more than 8% sales tax due under Washington state law.  They use a serials agent to do so. The serials agent loses its discount, as does the library, which has to pay a small processing fee, but it is worth it. Tom Sanville noted that in Europe electronic versions, but not print versions, are subject to VAT. 

 

Joyce Ogburn (Washington) noted we all will have to struggle with the question of what to base prices on.

 

Tom Sanville pointed out that even if you spend the same amount of money as you did for print subscriptions alone, you get more titles. 

 

Larry Alford (UNC): Is it still true that libraries are showing much greater use of titles than they did when they had access only to print?  Tom Sanville said yes, that the average use of both print and electronic titles has gone up dramatically.

 

A question about whether he had any information on publishers' plans to change their pricing models went unanswered. 

 

Budgeting for electronic subscriptions - any new ideas for how to improve the process? (Harriette Hemmasi, Indiana University, discussion leader)

 

Harriette noted that in tracking electronic resources it is difficult to know what the cost is, let alone what it is based on. Developing a shadow system to track these costs is very labor intensive, but they have no choice except to do it.

 

Larry Alford (UNC) wondered whether anyone has figured out how to do it cheaply.

 

Beth Picknally Camden (UVA) said that some agency at the University of Virginia keeps track.

 

Judi Nadler said the University of Chicago is also looking more closely at use statistics for a wide range of materials.  There is no model to say what is a good and what is a bad cost per use.

 

Larry Alford (UNC) warned that we need to be very cautious about using cost per use; research libraries have to collect materials with low use.

 

Someone said that a high cost per use is no guarantee of the research value of material.

 

Carol Pitts Diedrichs (OSU) noted that there is a heightened sense of urgency in dealing with electronic resources.  Publishers need to adopt best practices, such as not cutting off a resource immediately when a contract expires.

 

According to Catherine Tierney, Stanford also had subscriptions cease as contracts expired.

 

Larry Alford said that the University of North Carolina at Chapel Hill had provisions for automatic renewal in their contracts.

 

Cynthia Shelton and Lee Leighton reported that the University of California is working on collecting data on the use of both print and electronic resources. Brian Schottlaender is the principal investigator.

 

Library of Congress Action Plan for Bibliographic Control

 

Beacher Wiggins - Library of Congress Plan for Action - implications for large research libraries

 

Larry Alford - Program for Cooperative Cataloging participation

 

Karen Calhoun - ALCTS Task Force on the LC Action plan

 

Beacher Wiggins distributed copies of the current version (rev. Dec. 19, 2001) of the LC Action Plan (http://lcweb.loc.gov/catdir/bibcontrol/actionplan.html).  It introduced the concept of a principal investigator.  The Cataloging Directorate had received some internal funding to carry out some of the work. It had also received commitments from several organizations (both within LC, such as the Network Standards Office, and outside, e.g., ALA divisions such as ALCTS and RUSA) to cooperate with the Cataloging Directorate.  He would be meeting with potential collaborators later in the conference.  Getting our institutions involved is another way the Big Heads can help.

 

Larry Alford (UNC), Chair of the PCC, said that the role of the Program for Cooperative Cataloging (PCC) is largely collaborative, working with other groups that are taking lead roles, e.g., developing specifications for aggregated databases, especially methods of showing additions and deletions.  PCC is also involved in training and continuing education.

 

Karen Calhoun (Cornell) talked about the ALCTS Advisory Task Force on the Action Plan (established at the 2001 annual conference in San Francisco), which she chairs. John Byrum of LC is on it, as are Judy Mansfield (LC), Carlen Ruschoff (Md.), Priscilla Kaplan (Florida Center for Library Automation), Brian Schottlaender (Univ. of California), Mark Sandler (Univ. of Michigan), and Steve Shadle (Univ. of Washington).  ALCTS is keenly interested in developing a plan to provide access to selected web resources. Its strategic plan also has a strong commitment to continuing education of catalogers.  The Task Force will determine what parts of the action plan would be suitable for ALCTS to be involved in, what groups in ALCTS should be involved (or which should be set up), etc.

 

Numbers 5.1 and 5.3 are action items that LC has asked ALCTS to take the lead on.  Action 5.1 says: 'Address educational needs through improved curricula in library and information science schools and through continuing education for cataloging practitioners by: promoting consensus on determination of "Core Competencies;" devising training in the two areas of "Mind set and values" and "Managing operations;" developing Toolkits; and identifying other mechanisms to meet these needs'; and 5.3 says 'Promote the use and understanding of standards for describing Web resources through education, targeted outreach, etc.' The principal investigator for the library and information science (LIS) education aspects of 5.1 and 5.3 is Ingrid Hsieh-Yee (Catholic University of America).

 

Joyce Ogburn (Washington) asked about the involvement of various ALCTS education committees.  Karen Calhoun said they are all involved.

 

Copy Cataloging survey: Discussion and follow-up (Arno Kastner and Judi Nadler)

 

UC Berkeley experience with CatME - Lee Leighton

 

Other techniques to improve productivity

 

CORE record standard - Why is it accepted for copy operations but not for original cataloging?

 

Arno Kastner (NYU) distributed copies of the December 14, 2001 report (http://www.nyu.edu/library/bobst/research/tsd/catasurv.htm). The survey was based on the assumption that our libraries do not fully maximize the use of copy cataloging; the results of the survey verified that assumption. There is too much scrutiny of records, and too high a staff level is involved. Among the reasons for the involvement of higher levels of staff: historical patterns, workflow issues, and the difficulty of changing administrators' points of view.

 

Arno noted that about 25% of the libraries are now doing cataloging at the point of receipt. He asked the group: What kind of training did you provide to acquisitions staff? What fields did you determine should be examined?

 

Catherine Tierney (Stanford) said that what drives their changes are the economic realities they deal with; it's a trade-off between the efficiencies of looking at only a few key fields vs. checking everything.

 

Judi Nadler (Chicago) said one can bring the quality level of staff up; she doesn't think it worse to take a bit longer. At Chicago they did a survey comparing the quality of records completed in Acquisitions and Cataloging; it was a wash. They maximized the quality of the work done in Acquisitions and decreased the time it takes to get a book on the shelf.

 

Carol Pitts Diedrichs (OSU) said that the people in Acquisitions and Cataloging who work on cataloging are of the same level, and OSU is careful to keep them that way. They have identified an acceptable level of quality that can result in greater speed of getting material to the shelf. They focus on access points in record review.

 

Barbara Henigman said that the University of Illinois at Urbana-Champaign takes the same approach. Less emphasis is placed on type of record (DLC or others).

 

Bob Wolven (Columbia) said many institutions use students and they don't get the same level of training as full time staff.

 

Joyce Ogburn (Washington) said the factors they consider in assigning staff are language and subject expertise (they have some original catalogers who are not professionals; they do some copy).

 

Cynthia Clark said that at NYPL the Acquisitions staff has traditionally been classified lower than cataloging staff; she has therefore not pushed to transfer work from cataloging. She hopes in the future to reclassify staff and redefine who can do what.

 

Sally Sinn (NAL) said she identified two themes in the report: (1) at what point in the process copy cataloging is done, and by what level of staff; and (2) what is the level of trust in the copy, regardless of source? They need to improve the training of catalogers to make them more knowledgeable about PCC standards.

 

Catherine Tierney (Stanford) commented that numbers are important. Improved access is less measurable.

 

Carol Pitts Diedrichs (OSU) noted that the increase in vendor records in the utility databases is a counterbalance.

 

Lee Leighton said that at Berkeley, more than half of the original catalogers are non-professionals; everyone did everything. Then vendor records hit, without call numbers and many without subject headings. Berkeley had a crash program before Christmas 2001 to deal with the current German and Italian backlogs. Anyone who could classify turned to it; they did about 1,500 records using CatME.

 

Duane Arenales (NLM): What can we do collectively to get more usable records from vendors?

 

Lee Leighton (Berkeley) said it would be a tremendous burden on vendors to ask them to use AACR2 levels of cataloging and to provide call numbers that would be of no use to many of their customers.

 

Judi Nadler (Chicago) said that without paying for it, there is no way to get better records.

 

Larry Alford (UNC) said there is general agreement that vendor records are useful for acquisitions purposes.

 

Jane Ouderkirk (Harvard) said it would be helpful for workflow to have vendor records coded differently from cataloging records.

 

Lee Leighton (Berkeley) said it would be helpful for libraries to upgrade vendor records in the utilities instead of their local systems. When Berkeley upgrades records in their local system, they send them to OCLC and RLIN but the changes Berkeley makes to the records are not reflected in OCLC.

 

Harriette Hemmasi (Indiana) said it would be a wonderful entrepreneurial opportunity for someone to upgrade these records.

 

Duane Arenales (NLM) suggested pushing description to vendors and having librarians add the value of providing access through call numbers and subject headings.

 

Bob Wolven (Columbia) said we would need to determine what we need in description. Columbia isn't too bothered by vendor records; they do save keying of basic information.

 

Larry Alford (UNC) reiterated what Lee Leighton had said earlier: that a major issue is that many libraries are upgrading the records but the upgrades are not getting into the utilities.

 

Glenn Patton (OCLC), speaking from the audience, reported that one of the things they are trying to deal with is handling upgraded records coming in from local systems. A recent PCC task group had asked them to take mixed batches of records (set holdings only, records upgraded to PCC standards, etc.). They know that a significant percentage of the upgraded records (based on figures from Cornell) are vendor or other foreign MARC records. They have managed to split the files into sheep and goats, but they still need to replace member records with the upgraded records.

 

Karen Smith-Yoshimura (RLG), also speaking from the audience, said that RLG treats all book vendor records as "non-standard". Non-English cataloging records are also treated as lower-level cataloging than English-language cataloging. Each record is retained in the RLG Union Catalog; RLG matches records on descriptive elements to group records for the same title together. The record representing the "best" cataloging (English-language cataloging following AACR2 rules) is the one you see first. An English-language cataloging record, even one at minimal level, will display ahead of a book-vendor or non-English cataloging record for the same title.
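The display preference described amounts to a ranking rule over each group of matched records. The Python sketch below is an illustrative reconstruction, not RLG's actual code; the tiers and record fields are invented:

    def display_rank(record):
        """Lower rank displays first within a group of matched records."""
        if record["cataloging_language"] == "eng" and not record["vendor"]:
            return 0   # English-language cataloging, even minimal level
        if not record["vendor"]:
            return 1   # then non-English cataloging
        return 2       # book-vendor ("non-standard") records last

    cluster = [
        {"source": "vendor", "vendor": True,  "cataloging_language": "eng"},
        {"source": "member", "vendor": False, "cataloging_language": "ger"},
        {"source": "member", "vendor": False, "cataloging_language": "eng"},
    ]
    cluster.sort(key=display_rank)   # the "best" cataloging surfaces first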

 

Catherine Tierney said that Stanford has noticed an improvement in vendor records recently. They are worth having.

 

Judi Nadler (Chicago) commented that the survey shows we all use copy cataloging and we all do some tweaking; however the added value we provide is not shared. She said she was given hope by what Glenn Patton said about batch loading changes at OCLC; it is a question of credits.

 

Joyce Ogburn (Washington) commented that we should contribute by providing (and training) catalogers to work for the vendors.

 

Cynthia Shelton said that UCLA needs to maintain two workflows; they need to touch material once to get it into Acquisitions and then again to update it and get the records into OCLC.

 

Bob Wolven reported that Columbia has been trying to fully catalog current Russian monographs; he wondered what effect that effort had had on other libraries. Joan Swanekamp (Yale) and Larry Alford (UNC) agreed that their Slavic backlogs had decreased. [After the meeting Catherine Tierney reported that Stanford had seen an increase in Slavic copy cataloging in the last few months.]

 

Lee Leighton (Berkeley) said that cooperative agreements to specialize in certain areas fall apart with changes in cataloging staffing. Duke would applaud anything that would move things along cooperatively.

 

Catherine Tierney said that Stanford had changed its model from one of having original catalogers focusing on things that had aged two years to emphasizing doing current receipts. They had a crash program to deal with the backlog so they could switch to the new model. You can't allow backlogs to develop because they kill access.

 

Jane Ouderkirk said that Harvard budgets the equivalent of a full-time cataloger to pay for OCLC's TechPro whenever a backlog develops in an area. Local catalogers do the material of highest research value; TechPro gets the rest.

 

Duane Arenales (NLM) said we need to globalize.

 

Sally Sinn (NAL) said she agreed with what Bob Wolven said and what Duane Arenales is urging. Big Heads will continue to discuss this topic at lunch and probably at next summer's meeting as well.

 

Larry Alford (UNC) commented that the Banush report (http://www.loc.gov/catdir/pcc/bibco/coretudefinal.html) showed we are willing to accept PCC core records but less willing to create them ourselves.

 

Instant access demands and implications for workflow (Katharine Farrell) [Because of time constraints this item was not discussed.]

 

Recommendations of CRL Task Force on Collections (Judith Nadler)

 

The CRL Assessment Task Force was created in January 2001 to review the content of the CRL collections and to determine how those collections could be made more visible and accessible to the CRL membership. It was chaired by Ross Atkinson of Cornell.

 

Several broad assumptions underlie the recommendations made by the Task Force.

  1. Bibliographic access can range from traditional, full MARC cataloging to less traditional web lists of a collection's holdings;

  2. The way in which an uncataloged collection can be made accessible and useful depends upon the nature of the collection and the use to which it is likely to be put;

  3. Within the option of traditional cataloging, homogeneity of standards is not a prerequisite;

  4. Subject access is very important to information discovery and should be provided to the degree possible;

  5. Engaging the CRL staff in the assessment and in the recommendations will be a great learning experience and will be vital to its success and implementation.

 

At present CRL catalogs most of its journals and newspapers and most of the materials received or produced for the Area Studies Microform Projects. CRL also catalogs its microform sets though most of those sets are not analyzed. It is financially unrealistic to assume that the Center's remaining uncataloged collections can be cataloged.

 

The Task Force decided that there are two main options for enhancing the visibility and use of the Center's uncataloged collections.

 

       1.   To catalog those collections that represent significant, uncontrolled segments of publications or scholarship and that have the potential for broad use across a range of disciplines. All records should always be loaded in OCLC and RLIN. Future deposits to CRL should include cataloging records whenever possible.

 

       The collection most in need of cataloging is the international dissertations. CRL currently holds nearly 800,000 of them, and cataloging them is estimated to be a four-year project.

 

        Larry Alford (UNC) asked why the Big Heads couldn't each take a share of those dissertations and catalog them.

 

        Jane Ouderkirk (Harvard) said she would prefer to contribute some money to have it done.

 

       2.   To create clear and well-constructed Web-based lists of the remaining uncataloged collections that warrant such effort. Such lists should be accurate, detailed and keyword rich so that they can be easily accessed by standard search engines. The lists should be created in such a way as to make them key candidates for linking from members' Web pages and other related bibliographic databases. These lists should be maintained and enhanced on an ongoing basis.

 

Candidates for such lists include:

       A.   U.S. College Catalogs

       B.   International Bank Reports

       C.   Textbooks

       D.   Microform sets

 

Judi asked this group to express support for the great value of these collections.

 

 [The CRL Task Force report can be found in the CRL Newsletter, Volume XXI, Number 2, December '01/January '02: http://wwwcrl.uchicago.edu/info/focus/Focus%20in%20pdf/1201Focus.pdf]

 

CAROL: Collections and Acquisitions Research Online (Bob Nardini - Chair, ALCTS CMDS Quantitative Measures for Collection Development Committee)

 

Bob Nardini spoke on CAROL, an information clearinghouse for library research. It is a web-based database where researchers in any area of library collections or acquisitions can log a description of their research and locate other researchers active in areas of interest to them. The purpose of CAROL is to encourage research by providing a central source of information about ongoing work. Programming for it was done by Christian Boissonas and his daughter. Mr. Nardini urged the Big Heads to encourage their colleagues to enter their projects on CAROL. Its scope covers all systematic, planned research, whether intended for publication or in-house use.

 

Joyce Ogburn (Washington) said she had been thinking of setting up a repository of in-house studies; this might serve that purpose.

 

Agenda topics for summer 2002 [This was not discussed.]

 

Miscellaneous

 

Beacher Wiggins spoke briefly on LC's effort to upgrade its Endeavor system. As of February 16, 2002, they will bring up a new version of Voyager. For 1-2 weeks, while they do so, the LC catalog will be unavailable for input/update. The Web OPAC will be accessible, current through February 15. There will be only limited distribution of cataloging records by CDS (Cataloging Distribution Service): all distribution will cease during the implementation period, except for CONSER and JACKPHY records. LC will use an interim database to process CIP publications. The CIP records will not be loaded into the upgraded LC database and distributed until after the upgrade implementation is complete. LC may offer access to MARC formatted authorities in mid-spring, as a separate upgrade. The delay with the authorities implementation stems from Voyager's still not being fully able to deal with MARC21 diacritics. LC will keep users updated through various LC web pages.

 

LC has not received mail since Oct. 17, 2001; there is a backlog of some 3,500,000 items. The only mail they can currently accept is that coming via courier, FedEx, or UPS.

 

Judith Hopkins

State University of New York at Buffalo

