Volume 49, no. 4, 2011



 

Columns

Cataloging News, Robert Bothmann, News Editor

The International Observer
It's about Time! Temporal Aspects of Metadata Management in the Work of Isabelle Boydens
David Bade

 

Original Articles

Cooperative e-book cataloging in the OhioLINK Library Consortium
Carrie A. Preston

ABSTRACT: Since 2004, members of OhioLINK's Database Management and Standards Committee have worked together to produce and distribute bibliographic records for over 44,000 electronic books. Using historical evidence, as well as the personal experience of key personnel, this paper examines the ways in which division of labor, cataloging standards, and procedures are negotiated within the consortium. Two case studies illustrate the ways in which cooperative e-book cataloging projects are created, developed, and adapted in response to changing circumstances. Challenges to current practices are discussed, and recommendations are offered to other libraries and consortia preparing to embark on cooperative cataloging projects.

KEYWORDS: Cooperative cataloging, Electronic books, Cataloging administration/management, Case studies, Cataloging research, College and university libraries


Who's doing what? Findability and Author-supplied ETD Metadata in the Library Catalog
Margaret Beecher Maurer, Sevim McCutcheon, and Theda Schwing

ABSTRACT: Kent State University Libraries' ETD cataloging process features contributions by authors, by the ETDcat application, and by catalogers. Who is doing what, and how much of it is findable in the library catalog? An empirical analysis is performed featuring simple frequencies within the KentLINK catalog, articulated by the use of a newly devised rubric. The researchers sought the degree to which the ETD authors, the applications, and the catalogers can supply accurate, findable metadata. Further development of combinatory cataloging processes is suggested. The method of examining the data and the rubric are provided as a framework for other metadata analysis.

KEYWORDS: ETD cataloging, automatic metadata generation, author-supplied metadata, electronic theses & dissertations, cataloging applications, keyword searching, metadata, findability


The End of the Line? A Case Study of a Cataloging Department Achieving BIBCO Status
Paige G. Andrew

ABSTRACT: The Cataloging and Metadata Services Department of the Pennsylvania State University Libraries successfully achieved full participation in the Library of Congress' Monographic Bibliographic Record Cooperative Program of the PCC (BIBCO) in 2010. Reaching this goal means that the catalogers at Penn State are now fully engaged in all of the cooperative cataloging initiatives overseen by the PCC, as well as participating in cooperative quality control efforts for three formats within the OCLC Enhance Program. This article describes the journey towards achieving both BIBCO status and full participation in all cooperative programs and the importance for doing so. It also serves as a means to invite other institutions to take the next steps on their journey towards full participation in cooperative cataloging, or to join us in these important efforts.

KEYWORDS: BIBCO, Cooperative cataloging, PCC, OCLC Enhance


Is There a Future for Library Catalogers?
Michael A. Cerbo II

ABSTRACT: Is there a future for the library cataloger? For the past thirty years this debate has intensified with the continued growth of online resources and greater access to the World Wide Web. Many are concerned that library administrators believe budgetary resources would be better spent on other matters, leaving library users with an overabundance of electronic information to muddle through on their own. This article focuses on the future of the cataloging profession and its importance to the needs of library patrons.

KEYWORDS: Cataloging, Catalogers, Training, Cataloging administration / management, Cataloging research

 

The International Observer

It's about Time!
Temporal Aspects of Metadata Management in the Work of Isabelle Boydens

DAVID BADE
Column Editor

One of the principal emphases in Roy Harris's writings on language and communication is the element of time in communicative action. According to Erik Hollnagel, one of the critical features of all work is the temporal dimension. Both the cognitive psychologist Dietrich Dörner and the professor of management Guy Callender found that a failure to consider temporal developments was characteristic of managers whose decision making produced catastrophic failures in simulated (Dörner) and real (Callender) situations. The temporal dimensions of database construction, management, and use have been a matter of frequent reflection on my part, but I have never pursued the issue in any depth (not enough time, perhaps). Nor do I recall running across any sustained discussion of the issue in the literature of library and information science during my searching over the past decade. The failure is clearly mine, as Isabelle Boydens' 1999 book Informatique, normes et temps proves. My only excuse is that searching EBSCO's Library, Information Science & Technology Abstracts with Full Text, Wilson's Library Literature and Information Science Retrospective, and LISA (CSA Illumina), all of them, today (26 January 2011) turns up not a single reference to Boydens.1 That is shocking, so shocking in fact that someone ought to find out what it is these databases are doing and why a prolific author like Boydens is not in any of them. Her book is in OCLC WorldCat, but only two U.S. libraries were listed as having copies. That is all the justification I need for this issue's column, and indeed, for The International Observer column itself.

Like Carlo Revelli's work discussed in the last International Observer column, the number of Boydens' publications and the length of Informatique, normes et temps present just too much to go through in detail. At times the level of technical detail goes over my head, and at other times I had difficulty imagining how this would translate into cataloging practice (i.e., my world) as opposed to database management efforts by programmers (definitely not my world). Yet even in what were for me the heaviest sections of her monograph, the synthesis at the end of each chapter (a very nice feature of her book) invariably increased my level of understanding and led me to some heavy underlining. Rather than a detailed review of twenty years of publications, I shall comment on a few themes in her major work Informatique, normes et temps that particularly interested me, and follow that with comments on some of her other publications as well as those of her former student Seth van Hooland.

Before discussing some of the particulars of her published work (chiefly in French, but a few papers in English), I would like to mention two notable characteristics of her writing. First of all, in spite of the technical topic, her writing is easy to read and understand. In many academic papers every statement is justified by one, two, or ten references which are then either never discussed, or when discussed at all, the discussion only proves that the authors did not really read the works cited. Other authors insist on filling half the book with quotations and the other half with footnotes (the pot is calling the kettle black here and he is well aware of it). Readers of Isabelle Boydens' book will suffer none of these horrors. References, quotations, and footnotes are neither obtrusive nor excessive, and are always pertinent, with their importance for the argument made clear in the discussion that follows. One might think this is just a matter of style, but good writing like hers is generally the product of clear thinking, and that is the second characteristic to note.

When Boydens mentions or quotes Raymond Aron, Fernand Braudel, Norbert Elias, Gilbert Hottois, Friedrich Nietzsche, Max Weber, or Ludwig Wittgenstein (which she does), the reference is presented as the basis for a discussion that takes those remarks and elaborates what they mean for database management. Not only information scientists but philosophers, sociologists, and historians inform her discussion; they do not merely embellish it. Theories developed outside of information science and long before its origins inform her work, principally hermeneutics and historical criticism. Her discussion of metadata (she uses the term 'méta-information' and has been publishing on the topic since 1993) in the third chapter of Informatique, normes et temps ("Bases de données et incertitude") is a good example of how the depth of her analysis is rooted in theories and concepts that have been developed outside LIS. If one compares her discussion of metadata with the widely acclaimed works of philosopher and metadata advocate David Weinberger, the latter comes off as Timothy Leary to Boydens' Edmund Husserl. Her 'méta-information' is neither everything (and therefore nothing) nor miscellaneous (and therefore meaningless) but is theoretically introduced in the chapter on the problem of uncertainty in databases. With a notion of metadata rigorously and theoretically defined in its philosophical, social, and technical significance, she clarifies many issues that plague metadata managers and is able on that theoretical basis to proceed towards the development of automated methods for managing particularly difficult problems.

The chief issue she addresses in her book appears simple enough. Time, she insists, is a part of the real world that must be taken into consideration by database managers from the very beginning. The problem? The world changes, whether or not the data we use in our decision making about that real world changes with it. Her metadata is, therefore, not just meta- as in higher generalization (metaphysics), but meta- as in change (metamorphosis). Her book is, however, more than just a treatise on information science whose author had more time on her hands than most. The preface is a very good indication of the diversity of approaches she takes to database management as well as the multiple means for addressing the problems of temporal databases. In that preface four authors address the four principal dialogues that inform her monograph. In her book information science confronts public administration and management (Alain Pirotte), hermeneutics (Françoise D'Hautcourt), history (Jean-Philippe Genet), and epistemology (Jean-Louis Besson). That was more than enough to propel me through the next 550 pages.

After a general introduction to the problem that her book addresses and the structure of her argument, the first three chapters present the state of the art in research on how to evaluate and improve the quality of databases. Included are a discussion and marvelous critique of the MIT "data quality management" research program of the early 1990s, a discussion of current methods of managing data quality, and the problem of uncertainty that was at the heart of her critique of that research: "the question of the accuracy of a database can find no satisfactory response because of the absence of referentiality that would allow the validation of the adequacy of the information [in the database] to the real world it represents." Uncertainty, she notes, "designates that which is neither fixed nor determined a priori."

For operational reasons, most current database models rest on the hypothesis of a closed world: all the facts not included in the database are interpreted as false. Theoretically, a database is thus considered as complete (all the values logically deriving from a given state of the database are present) and coherent (all the present values are correct), inside the area specified by the schema of the database. Nevertheless, in practice databases come to have incomplete or incoherent values.2

And how do databases come to have incomplete or incoherent values? Time changes the world that the data must accurately reflect if the database is to be useful at all. Time also changes the database structures, schema, and sources of data, as well as the social forces that determine what information needs to be in the database, e.g., database users' desiderata and legislation, as well as discoveries and inventions, all of which may make all-important what was previously insignificant (and vice versa).

One of Boydens' chief arguments concerns the limitations of the TDQM (Total Data Quality Management) approach associated with Redman, Wang, and MIT in the 1990s. She begins her critique by noting that the solution proposed by these researchers "has as its object the improvement of procedures for treating the inadequacy of formally identifiable and measurable errors (incoherencies, incompleteness, programming errors)".3 Boydens reminds the reader that beyond such formal errors there is a particularly important matter that TDQM approaches to data quality do not address at all: the human interpretation of information.

The measure of the accuracy of a fact rests upon a hypothetical bijective relation between a value v contained in the database and the corresponding true value v'. Yet because of the absence of referentiality in any empirical domain of application, what these authors call the "correct value v" is in absolute terms "unknowable". For example, to verify the validity of the name of a salaried worker presupposes that one has available a precise and determinate definition of the concept of "salaried worker". Yet the juridical and informatic norms that permit the representation of the concept, exactly like the real world alongside it, never cease to evolve. ... A fortiori, to verify the validity of the names of several hundreds of thousands of individuals supposes that one can identify the whole of the population at every moment.4

She proceeds to outline the "ontological foundations" set forth in a famous paper by Wand and Wang5 and three postulates upon which those foundations rest:

1. The world is composed of discrete, unequivocal elements that are clearly identifiable and perceptible;

2. Combinations and knowledge of these elements are governed by laws;

3. It is possible to establish a bijective relation between the observable reality and its informational representation by virtue of the isomorphism that links the one to the other.6

Her critique begins simply enough: "Wand and Wang's approach is tautological."7 Ockham gets the credit for refuting Wand and Wang's logic seven centuries ago, and Gilbert Hottois leads her to the observation that "the absence of a 'contradictory' observation is not sufficient to prove the validity of a proposition but only a temporary indication."8

She then develops an argument on the basis of census databases, the semantics of which are, she insists, "precisely characterised by the absence of any isomorphism with the corresponding reality and that for three reasons."9 First, a census can never be complete. Second, the evaluation of census data can never rest on the available sources. Finally, all economic and statistical observations are artificially attributed to a given period of time when in fact there is always of necessity a discrepancy between the state of the real world and its measurement.10 A few pages later she quotes Stuart Madnick:

"There are often real reasons why different people, different societies, different countries, different functions, different organizations may look at the same picture and see something different. To assume that this can be prevented is a mistake. We must accept the fact that there is diversity in the world."11

Boydens herself goes further, arguing that "even in the case of a single fact and of a single observer, an unequivocal informational representation of 'observable reality' is illusory."12 This leads her to suggest in the final sentence of the first chapter that the question the TDQM researchers asked ("Is the information contained in the database correct?") should be replaced by the question "How is information constructed over time?" This change in the question we ask of databases is itself a splendid example of her thesis and the reason for the new question: the world changes for us because we ask and expect different things of it at different times; that being so, both our data and our metadata need to change to reflect that new state of the world. This is quite a strong claim, namely that neither data nor metadata have an ontological status that remains unchanged regardless of the user, the uses, and the questions that ground each use of the database.

As mentioned above, chapter three theoretically puts metadata in its place: the management of uncertainty and change. Boydens argues that systems of metadata should be constructed on the basis of four tasks: the identification of a minimal group of metadata based on usage, a compromise between economy and completeness of information, assessment of the organization within which the metadata will function, and an effort at minimizing manual labor.13 It is preferable, she insists, "to forego schema enrichment rather than adding elements" if the organization lacks the human resources that updating such metadata would require, since the provision of partial or dubious metadata would be "a remedy worse than the problem being addressed."14 The validity of probabilistic or "fuzzy" indexing in a real (and therefore uncertain) environment is liable to be even more uncertain than the original uncertainty of the reality it is intended to represent.15

The third part of the book describes the database that the author studied, the Belgian social security database (LATG), and the principal methodologies informing her approach: heuristics, historical criticism, and hermeneutics. Historians, like database users, "confront the absence of referentiality":

In order to verify the correctness of some value, one must have available a normative reference. Yet, in an empirical domain of application, that reference does not exist. ... In order to verify the correctness of the information contained in a database, one must ideally know a priori a reality that only that database allows one to know.16

For the historian, the past is past and inaccessible directly, while in a database, each item of information it contains reflects a different past state of affairs that has changed since the effort of data collection. This remains true even in a "live" system, the only difference being the shorter time interval between data capture and data use. The opacity of a networked system decreases as the capacity of access increases, yet

from node to node, from context to context, information is transformed in the act of circulating. And the user, further and further from the source producing the information, does not necessarily have the resources that would allow him to decode the meaning of the data obtained.17

Any information scientist who cites the Belgian philosopher Gilbert Hottois (and Boydens does) gets my attention, but anyone who quotes the British philosopher Robin George Collingwood not only gets my attention but gives me great pleasure. In her chapter on hermeneutics for databases, we get to read Collingwood's own words in French translation:

Data, on the one hand, and principles of interpretation on the other, are the two elements of all historical thought. But they do not exist separately and then undergo a combination. They exist together or not at all.18

She follows this with the remark "the interpretation of the same concept varies according to the place, the period, the context and even the author. Scientific questions themselves have their own histories."19 This issue is further elucidated in the sections entitled "The interpretation of the norm interacts with interpretations of the facts"20 and "The interpretation of the facts interacts with that of the norm."21 With her introduction of Braudel's notion of time levels and Elias's evolutive continua, we have entered into the heart of the problem that she addresses and her approach to dealing with it.

That is a very brief summary of the first two parts/six chapters of Boydens' book. The remaining sections deal with the management of the flow and change of data, proposed methods for automating as much of that effort as possible, and general conclusions. In this book and in subsequent publications Boydens illustrates "how hermeneutics, embodied through the use of a temporal framework, can help to interpret changes in the quality of empirical databases and lead the way to operational recommendations."22 The management strategies described in her publications apply "to all information systems whose structure evolves according to the interpretation of the realities that they aim to grasp. This is particularly true of empirical databases, in which the homogeneity of the formal codifications clashes with the heterogeneity of the empirical categories."23

One of the most interesting aspects of studying the LATG database was that the data really mattered: mattered to the people affected (pensioners, the unemployed) and to the responsible administrative agencies. National and international legal regimes enforced both the collection of the data and its interpretation in a never-ending and frequently retrospective sequence of changing and sometimes conflicting laws. To construct and manage a database that really matters requires a very different mindset than that frequently encountered in the library literature. Related to that is another matter that I should have noted myself, and long ago, but never have: database quality is not only a matter of human knowledge, ignorance, and error, nor of human error directed and exacerbated by bad policies; rather, it is above all a matter of the very structure of existence in time. It is that insight that has informed Boydens' work since the early 1990s, and its implications are now being pursued not only by Boydens herself but also by some of her present and former students. A few remarks on that growing body of research follow.

Many of the themes treated in depth in Informatique, normes et temps were discussed in a few earlier papers that may be easier to find and to read than the monograph. Using historical criticism to think about database management was the topic of two early papers "Informatique et qualité de l'information. Application de la critique historique à l'étude des informations issues de bases de données" (1993) and "La critique historique face aux sources informatiques" (1996).24 Metadata was the topic of a few papers of the late 1990s and the past decade.25 Managing data transformation and data quality over time were the particular concerns of three papers in 1998,26 though this is really the basic concern in all of her published work. Since the publication of her monograph in 1999, Boydens has touched on topics such as controlled vocabulary,27 the conflict between the disorder in the real world and the order constructed within a database,28 Web2.0,29 and the semantic web.30

A number of recent papers in English present in concise and updated form both the theoretical approach and the operational strategies that she has developed during the past decade. Her latest paper describes the use of her approach to data quality in electronic government in Belgium. In E-government databases "the pooling of data and dematerialization of procedures demands interoperability between sectors and departments, and this potentially multiplies the interpretation difficulties to be overcome."31 She describes the three methods developed for dealing with these problems in Belgian government databases:

1. Master Data Management is a general methodology to analyze and improve the quality of the concepts and flows judged to be the most fundamental within the information system.

2. Anomalies and Management Strategies are an original operational approach that we applied in the scope of our research about interpretation of the Belgian social security database.

3. Documentation of Application and Services aims to present an electronic data dictionary (glossaires de la sécurité sociale) that was implemented in Belgium to improve interpretation of e-government databases by the Belgian Data Quality Competency Center presented in the introduction.32

"Hermeneutics applied to the quality of empirical databases" (with Seth Van Hooland) is a short but comprehensive introduction in English to Boyden's approach to database management,33 and a forthcoming paper coauthored with Van Hooland and Eva Méndez Rodriguez, also in English, is a very interesting and iconoclastic empirical study of user-generated metadata in cultural heritage institutions.34 In this latter paper the authors note that studies of user-generated metadata in Web2.0 environments have focused on the usefulness and efficacy of such metadata for current users, but the authors are particularly interested in other questions, namely the responsibility of cultural heritage institutions towards the past we are preserving and future uses of those materials. (It is amazing how differently we can evaluate a practice depending on the questions we ask of it!) In a 2010 paper she describes centralized/hierarchical and distributed/anarchic systems of knowledge organization in the Western world from the medieval era to our own.35

Two of the papers mentioned above were written in collaboration with Seth van Hooland, current holder of the chair in Digital Information at the Information and Communication Science department of the Université Libre de Bruxelles and a former student of Boydens. One of the major topics that van Hooland has researched is the changing nature of description and the role of changing technologies and government policies in that development. That is one of the central issues discussed in his thesis36 and in personal communication he has indicated that he is working on a paper looking at the successive records created for a single object over the past 150 years, in collaboration with museums and libraries in Brussels and Berlin. His paper (in Dutch) on the history of metadata looks at how successive technologies bring us different kinds of metadata and consequently different possibilities, from the card catalogue to Web2.0.37 Other papers in English and French deal with metadata, folksonomies, and ontologies in museum collection databases.38

I am just beginning to think about the philosophical and practical implications of taking time seriously when thinking about databases and when using them. Boydens and Van Hooland have given me much to consider along those lines. But they are not armchair philosophers: both really understand the nuts and bolts of information technologies, and their intention is not only to understand how metadata works (the limitations and possibilities that metadata, however produced, offers to users of databases) but also to ameliorate the limitations and enhance the possibilities by operationalizing metadata management. That is where they go way beyond anything I have done or ever will do.

It is not often that anyone manages to open up a whole new dimension to any field, but by bringing the problems and questions of heuristics, hermeneutics, and the study of history into research on database management, Boydens has not only broken new ground in approaches to database management but also opened up an entirely new dimension of reflection on cataloging and classification. Those whose interest veers toward the philosophical questions associated with cataloging and classification will find Boydens' publications, from the first paper to the forthcoming, among the most interesting research produced during the last few decades.

*********

The next column, now in preparation, will be devoted to the work of a number of French anthropologists and information scientists who have been studying the adoption, adaptation, use, misuse, abuse, and sometimes rejection and disuse of new technologies in airline cockpits, museums, and of course libraries: Victor Scardigli, Joëlle Le Marec, Sophie Deshayes, Emmanuel Souchier, and Yves Jeanneret.

 


 

NOTES

1 For a complete list of Prof. Boydens' publications see her web page:
http://www.ulb.ac.be/cours/iboydens/

2 Isabelle Boydens, Informatique, normes et temps (Bruxelles: Bruylant, 1999), 100.

3 Ibid., 57.

4 Ibid., 58.

5 Yair Wand and Richard Y. Wang, "Anchoring Data Quality Dimensions in Ontological Foundations," Communications of the ACM 39, no. 11 (November 1996): 86-95.

6 Boydens, op. cit., 62.

7 Ibid., 63.

8 Ibid., 64.

9 Ibid., 65.

10 Ibid., 65-66.

11 Madnick, quoted in Boydens, ibid., 68.

12 Ibid., 68.

13 Ibid., 117.

14 Ibid.

15 Ibid., 118.

16 Ibid., 144.

17 Ibid., 152.

18 Collingwood, from The Philosophy of History (London: Published for the Historical Association by G. Bell and Sons, 1930), quoted by Boydens, ibid., 161.

19 Boydens, ibid., 161.

20 Ibid., 163.

21 Ibid., 164.

22 Isabelle Boydens and Seth van Hooland, "Hermeneutics Applied to the Quality of Empirical Databases," Journal of Documentation 67, no. 2 (2011): 287.

23 Ibid.

24 Isabelle Boydens, "Informatique et qualité de l'information. Application de la critique historique à l'étude des informations issues de bases de données," Belgisch Tijdschrift voor Nieuwste Geschiedenis. Revue belge d'histoire contemporaine 3-4 (1993): 399-439; Isabelle Boydens, "La critique historique face aux sources informatiques," in Actes de la Journée de l'histoire contemporaine 1996 - session "Internet pour les historiens" - Vereniging voor Geschiedenis en Informatica (VGI), Université Catholique de Louvain-La-Neuve, 27 April 1996: 15-17.

25 Isabelle Boydens, "Les systèmes de méta-information, instruments d'interprétation critique des sources informatiques," History and Computing 1, no. 8 (January 1996): 11-23; Isabelle Boydens, "Les systèmes de méta-information," Techno, publication technique de la SmalS-MvM, no. 1 (April 1997); Isabelle Boydens, "E-gouvernement en Belgique: Un retour riche d'expériences," L'informatique Professionnelle, no. 217 (October 2003): 29-35.

26 Isabelle Boydens, "Analyser le processus de transformation de l'information: du "stemma codicum" au "data tracking"," in Roelants-Abraham J., éd., Information et documentation: du réel au virtuel (Bruxelles: Infodoc-ULB, 1998): 57-70; Isabelle Boydens, "Evaluer et améliorer la qualité des bases de données," Techno, publication technique de la SmalS-MvM, no. 7 (January 1998); Isabelle Boydens, "Managing Time in Historical and Contemporaneous Databases," in International Congress on Historical Information Systems, November 6th-8th 1997 (Vitoria-Gasteiz: Juntas Generales de Alava, 1998): 159-172.

27 Isabelle Boydens, "Déploiement coopératif d'un dictionnaire électronique de données administratives," Document numérique 5, no. 3/4 (2001): 27-43.

28 Isabelle Boydens, "Les bases de données sont-elles solubles dans le temps?" La Recherche, hors série no. 9 ("Ordre et désordre") (November-December 2002): 32-34.

29 Isabelle Boydens, E. Bruillard, P.-A. Caron, G. Gallezot and D.K. Schneider, "Entretien," Revue distance et savoirs 7, no. 3 (Numéro spécial: Informations scientifiques et pratiques numériques académiques, ed. by Timini I., DelaMotte E. and Peraya D. Paris: Editions Hermès Sciences-Lavoisier): 479-500.

30 Isabelle Boydens, "Du "Web sémantique" au "Web pragmatique"," Research Note - SmalS-MvM, no. 5 (April 2004), 19 p. (slides).

31 Isabelle Boydens, "Strategic Issues Relating to Data Quality for E-government: Learning from an Approach Adopted in Belgium," in S. Assar, I. Boughazala and I. Boydens, eds., Practical Studies in E-Government: Best Practices from Around the World (New York: Springer, 2011): 113-130; quotation from p. 128.

32 Ibid., 121.

33 Isabelle Boydens and Seth van Hooland, "Hermeneutics Applied to the Quality of Empirical Databases," Journal of Documentation 67, no. 2 (2011): 279-289.

34 Seth van Hooland, Eva Méndez Rodriguez and Isabelle Boydens, "Between Commodification and Sense-making. On the Double-sided Effect of User-generated Metadata within the Cultural Heritage Sector," in Paul F. Marty and Michelle M. Kazmer, eds, "Involving Users in the Co-Construction of Digital Knowledge in Libraries, Archives, and Museums," special issue of Library Trends, (forthcoming).

35 Isabelle Boydens, "Hiérarchie et anarchie : dépasser l'opposition entre organisation centralisée et distribuée?" Les Cahiers du Numérique (Numéro thématique « Organisation des connaissances et Web 2.0 », Widad Mustafa El Hadi and Michelle Hudon, eds) 6, no. 3 (2010): 77-101.

36 Seth van Hooland, "Metadata Quality in the Cultural Heritage Sector: Stakes, Problems and Solutions" (PhD diss., Université Libre de Bruxelles, 2009). Available at: http://homepages.ulb.ac.be/~svhoolan/these.pdf.

37 Seth van Hooland and Hein Vanhee, "Van steekkaart tot webinterface. De evolutie van metadatabeheer in de culturele erfgoedsector," in B. de Nil and J. Walterus, Erfgoed 2.0. (Brussel: FARO, 2009): 87-106. Available at: http://ia700200.us.archive.org/20/items/VanSteekkaartTotWebinterface.DeEvolutieVanMetadatabeheerBinnenDe/Erfgoed2.0_vanhooland_vanhee_87_106.pdf.

38 Seth van Hooland, "Spectator Becomes Annotator: Possibilities Offered by User-generated Metadata for Image Databases." Paper presented at Immaculate Catalogues: Taxonomy, Metadata and Resource Discovery in the 21st Century, 13-15 September 2006, University of East Anglia, UK, available at: http://homepages.ulb.ac.be/~svhoolan/Usergeneratedmetadata.pdf; Seth van Hooland, "Entre formalisation et déconstruction: état de l'art critique de l'application documentaire des ontologies et folksonomies dans le domaine de l'indexation du patrimoine culturel numérique," in Organisation des connaissances et société des savoirs: Concepts, usages, acteurs. Actes du colloque ISKO 2007, Université Paul Sabatier IUT, Toulouse, 7 and 8 June 2007 (Toulouse, 2007): 33-47; Seth van Hooland, Yves Bontemps, and Seth Kaufman, "Answering the Call for More Accountability: Applying Data Profiling to Museum Metadata," in Proceedings of the International Conference on Dublin Core and Metadata Applications, 22-26 September 2008, Berlin (Berlin: Dublin Core Metadata Initiative, 2008): 93-103.

 

 

Cataloging News
Robert Bothmann, News Editor

Welcome to the news column. Its purpose is to disseminate information on any aspect of cataloging and classification that may be of interest to the cataloging community. This column is not intended just for news items, but serves to document discussions of interest as well as news concerning you, your research efforts, and your organization. Please send any pertinent materials, notes, minutes, or reports to: Robert Bothmann, Memorial Library, Minnesota State University, Mankato, ML 3097, PO Box 8419, Mankato, MN 56002-8419 (email: robert.bothmann@mnsu.edu, phone: 507-389-2010). News columns will typically be available prior to publication in print from the CCQ website at http://catalogingandclassificationquarterly.com/.

We would appreciate receiving items having to do with:

Research and Opinion

  • Abstracts or reports of on-going or unpublished research
  • Bibliographies of materials available on specific subjects
  • Analysis or description of new technologies
  • Call for papers
  • Comments or opinions on the art of cataloging

Events

  • Notes, minutes, or summaries of meetings, etc. of interest to catalogers
  • Publication announcements
  • Description of grants
  • Description of projects

People

  • Announcements of changes in personnel
  • Announcements of honors, offices, etc.

 

Research and Opinion



Call for Papers -- CCQ Special Issue on the RDA Testing Experience



A special issue of Cataloging & Classification Quarterly will be devoted to the experiences of catalog and metadata librarians at formal and informal RDA test sites during the recent U.S. testing phase. We invite submissions from professionals in cataloging and metadata, as well as other related disciplines. Submissions should address an aspect related to the RDA test, including but not limited to implementation, teaching, and training.

The guest editors of the special issue, Drs. Sylvia D. Hall-Ellis and Robert O. Ellett, Jr., welcome the submission of papers for consideration. Instructions for authors can be found at http://www.informaworld.com/0163-9374. The deadline for submissions is May 16, 2011. To submit a paper, please use the ScholarOne submission system.

Cataloging & Classification Quarterly is dedicated to gathering and sharing information in the field of bibliographic organization. This highly respected journal considers the full spectrum of creation, content, management, use, and usability of bibliographic records and catalogs, including the principles, functions, and techniques of descriptive cataloging; the wide range of methods of subject analysis and classification; provision of access for all formats of materials; and policies, planning, and issues connected to the effective use of bibliographic data in catalogs and discovery tools. The journal welcomes papers of practical application as well as scholarly research. All manuscripts are peer reviewed. Once published, papers are widely available through Taylor & Francis' Informaworld database and other outlets.



Stanford University Libraries Reports on Testing RDA


RDA (RDA: Resource Description & Access) testing was a stimulating and collaborative undertaking at Stanford. Our early decision to adopt RDA for all original cataloging provided a full immersion experience, allowing us to concentrate fully on the new guidelines rather than going back and forth between AACR2 (Anglo-American Cataloguing Rules, second edition) and RDA. We experimented, but also put RDA to the less exciting, but nevertheless important, test of making it work in a production environment. And for the most part it works well, though I did encounter a few problems and unanswered questions along the way.

Generally, I found that RDA provides both more flexibility and more precision than AACR2. As someone who catalogs music associated with many languages (many of which I don't know), I appreciate the option to transcribe capitalization as given, instead of following ISBD practice. I am pleased that the edition statement and music presentation statement are now combined into one element, a nice simplification of AACR2. The introduction of the concept of encoding format for audio and video, e.g., CD audio, is very welcome (although there are a few issues to work through), and I am ecstatic that I can record "vinyl" or "shellac" (base medium) in the extent statement for archival sound recordings. I also found the content type/media type/carrier type group much more powerful than the AACR2 GMD (general material designation). This was particularly evident when I cataloged a serial made up primarily of interleaved sound sheets containing musical performances, speeches, and sounds, such as the first EKG (electrocardiogram) recording. All this could be expressed through RDA in a way that was impossible in AACR2. Unfortunately, our system can do nothing with these fields at the moment, but I am optimistic that they will be very useful.

To me, however, one of the most valuable innovations of RDA is its formalization of relationships between resources and the vocabulary developed to express those relationships. The value of this new emphasis is especially clear in sound recording cataloging, with its prevalence of multiple works, related works, contributors, and creators. For the most part the vocabulary is well developed for music, though the addition of the relationship "Reissue of" would be useful.

I did run into some areas where RDA was simply not clear, and perhaps contradictory. This was particularly evident in RDA's approach to performers/performing groups as creators. RDA permits performing groups to be creators of works in some instances, but does not seem to cover individual performers or groups of performers, such as an unnamed jazz ensemble. Why should these really be any different from a named performing group? To make it more confusing, RDA does use individual performers (e.g., Amy Winehouse, Earl Hines) as examples of creators. This becomes even more complicated when adding relator terms: performer is not a valid relator term for a creator.

Also thorny are the rules for access points for expressions. The guidelines provide a wealth of vocabulary for identifying expressions; the problem is how to put it to practical use. For now, I have resorted to using only the expression access points that were available through AACR2 (e.g., translations, arrangements). This is something that will need to be worked out by the larger music community.

Finally, RDA lacks guidelines for recording producers, sound technicians, or other technical credits for sound recordings; the relevant RDA instructions apply only to film and video. It seems peculiar to restrict the transcription of this type of data by format, a practice that to me is inconsistent with RDA's overall model.

In the end, however, the most difficult part of using RDA has been shoehorning the code into the MARC format. The MARC to RDA mapping in the RDA Toolkit has been an invaluable resource to me for specific mappings, but it is clear that MARC and RDA do not play well together. The flat file structure and lack of granularity in MARC weaken many of the innovative aspects of RDA and in several cases multiple RDA elements are packed into single MARC fields or subfields. For the cataloging community to truly realize the innovations of RDA, either the MARC format has to change drastically or we need to move to something new. I do believe, though, that RDA is a good first step to the transformation of cataloging.

Nancy Lorimer
Head, Music Technical Services
Stanford University Libraries



Stanford University Libraries is one of the few libraries that implemented RDA for original cataloging immediately upon commencement of the test period. Being thrown into the deep end of the pool has been an invigorating experience for me and my colleagues, as we had many weeks of intense study, discussion, and experimentation with RDA. While many materials quickly became routine, we remain frustrated by many unanswered questions.

We combined the Library of Congress' (LC) institutional decisions on RDA's options with local decisions, which continue to evolve. The most striking of these was to transcribe the capitalization as given on the piece, instead of following ISBD (International Standard Bibliographic Description) practice (also an option in RDA). This was an interesting experiment. We discovered that it was more difficult to transcribe and proofread, but is quite handy when harvesting existing data. It was not difficult to get used to spelling out abbreviations. Recording the RDA form on every NAR (name authority record) during the test period took up a great deal of our time. The addition of relator terms to name added entries is trickier than expected, since for many works the role of the person or body is unknown or is not covered by the list of relationships in RDA appendix I. Most of the important relationships for videos are covered, but lacunae for other materials included video game creators as well as a name for the relationship of a conference to its proceedings. We transcribe and trace all creators, which gets messy on books with multiple authors (no more rule of three) but is surprisingly easy for videos, since RDA defines only a "filmmaker" and screenwriter as creators! My cataloger's judgment is to also trace directors and stars when appropriate. Cataloger's judgment turns out to be a large part of RDA.

I am pleased with the addition of the 257 and 046$k for place and date of original production, as well as the 336, 337 and 338 fields as replacements and expansions of the GMD. However, some computer-oriented materials get short shrift in the new system. I'm doubling the content types "two-dimensional moving image" and "computer program" for video games and adding all applicable media types for multimedia, but am concerned that our systems will not be able to combine these in a meaningful way, especially since multiple terms could mean either a single piece with multiple characteristics or a piece with accompanying material of a different type.

I was experimenting with terms in common usage in the 300 field, such as "DVD video," but have also found that more difficult than expected. There is no consensus on terminology and no guidance on specificity (e.g., DVD, or DVD-video, or DVD+R?). Observing records of other catalogers, there was wide variety in interpretation of the terminology and even in which data elements went into the 300 fields. This is an area where we will need a community standard.

Interpretation of the RDA text is often challenging, and it remains almost as book-centric as AACR2. For example, it is difficult to ascertain whether a source of title note is required for non-textual and non-video materials. The instructions on transcribing the statement of responsibility are incoherent when applied to video credits. While it is clear that separate added entries are required for each language version of a resource (no more "Spanish and English" or "Polyglot"), it is unclear whether subtitled versions constitute a language expression.

A big change is the lack of any instructions comparable to AACR2 1.1G1 for collections of works in which one is predominant, as is the case with a video with bonus materials. This makes it impossible to use title frames as the chief source; one must find a collective title (we have been using the label or container) and transcribe the titles and first statement of responsibility of the feature and the bonus materials in the 505, adding 730s to trace the titles.

Internet materials are not necessarily considered published, but there is no definition of "published." Online reproductions are cataloged as the reproduction and not as the print with a 533. Information about the original goes into a 776 note, which (like the 533) has insufficient subfields for coding those data elements in a machine-retrievable way. It does not appear that the provider-neutral approach is compatible with RDA.

It has become clear that one of the biggest problems with RDA is actually MARC, which has no way of distinguishing the various FRBR (Functional Requirements for Bibliographic Records) levels as required, no way to associate data elements with a particular work in a collection, and no way to unambiguously encode many data elements. This will be a bigger challenge to overcome than the mere adoption of RDA.

Greta de Groat
Discovery Metadata Librarian
Stanford University Libraries



Stanford University Libraries Reports on Adoption of RDA for Original Cataloging


Stanford is one of several libraries that have decided to continue to use RDA for original cataloging now that the test period is over. We made this decision before we started the test: if we found nothing that permanently impeded us from doing our work, we would continue to use RDA. The reasons for our decision are discussed below.

Our primary reason for adopting RDA is RDA itself. RDA is based on the FRBR and FRAD conceptual models, and is neutral in its relationship to communication and display formats, such as MARC and ISBD. It is designed to be flexible enough to appeal to many metadata communities and is extensible enough to handle all types of resources. RDA allows the library community more freedom to accept and reuse data from other communities through traditional cataloging methods and through linked data, freeing our catalogers to concentrate on value-added data, rather than reworking what is already there. In return, our very rich data pool can link out to others, whether those data pools use RDA or not. The controlled vocabularies, relationship designators, and element set of RDA can be key building blocks in this process. We hope that the internal flexibility of RDA will permit us to make far better use of our data in the future. The ability to incorporate linked data into the RDA framework, the shift to the FRBR model, and the reduced prominence of MARC, were strong drivers in our decision to adopt the standard.

Another aspect of RDA that appealed was its greater emphasis on internationalization of the rules. The emphasis on transcribing what you see, choice of cataloging language, flexibility of capitalization rules, the reduction in use of abbreviations, and the adoption of international symbols, all seem to be logical steps in making our rules less Anglo-American, and thus more relevant globally.

This is not to say that we are entirely enamored with all aspects of RDA. The rules are imperfect, as any new code is bound to be, and we expect there will be changes and refinements, as there were in AACR2. And like most other institutions, we are constrained by the limitations of the MARC format (its flat-file structure, its lack of granularity), which currently prevent adoption of some of the more innovative changes in RDA. We are also curious to see how organizations such as PCC (Program for Cooperative Cataloging), LC, or even OCLC will balance the inbuilt flexibility of RDA with the traditional desire for consistent application of the rules.

One final reason for adopting RDA permanently at Stanford was that trainers and catalogers invested a great deal of time and energy in learning to use RDA and the RDA Toolkit and in making RDA work in a production environment. To us it seemed inefficient and wasteful to expend all that effort learning something new, only to drop it completely and go back to our old ways for what may be a finite amount of time. If we went back to AACR2 and RDA were then adopted, we would have to relearn much of what we had done during the test period.

Meanwhile, we wait impatiently for the decision of the national libraries, and hope for a positive outcome regarding RDA. While RDA is by no means perfect, at Stanford we feel it is an overdue and essential first step in the transformation of library cataloging.

Nancy Lorimer
Head, Music Technical Services,
Stanford University Libraries

With input from Philip Schreur
Head, Metadata Department,
Stanford University Libraries



Writing on Cataloging Research


Have you conducted some cataloging- and/or classification-related research in the past year? Do you have some ideas for writing and publishing a journal article and need some help moving forward? Pick up a copy of Wendy Laura Belcher's Writing Your Journal Article in 12 Weeks: A Guide to Academic Publishing Success (Los Angeles: Sage, 2009). ISBN 978-1-4129-5701-4 (pbk.), $51.95.

This textbook/workbook-style manual provides no-nonsense, down-to-earth plans and guidelines for successful writing. The first exercise is a self-reflection exercise to help you discover potential text you may have already written or worked on and left behind. The hand-holding nature of this workbook can help you create a new writing and research habit.


Events



Faceted Subject Access Interest Group Formed


The Cataloging and Classification Section (CCS) of the Association for Library Collections and Technical Services (ALCTS) division of the American Library Association (ALA) has a new interest group that will begin meeting at the ALA Annual Conference in June. The purpose of the new interest group is to "discuss theory and applications related to subject terminology intended for faceted application." For more information interested parties should visit the interest group's ALA Connect page: http://connect.ala.org/node/132161.


Publication Announcements


Blum, Rudolf. Kallimachos: The Alexandrian Library and the Origins of Bibliography. Wisconsin Studies in Classics. Madison, Wis.: University of Wisconsin Press, 2011.

Coyle, Karen. Understanding the Semantic Web: Bibliographic Data and Metadata. Library Technology Reports. Chicago: American Library Association, 2010.

Falk, Patricia K. and Stefanie Dennis Hunker. Cataloguing Outside the Box: A Practical Guide to Cataloguing Special Collections Materials. Oxford: Chandos Pub., 2010.

Giaretta, David. Advanced Digital Preservation. New York: Springer, 2011.

Hall-Ellis, Sylvia D., Stacey L. Bowers, Christopher Hudson, Joanne Patrick, and Claire Williamson. Librarian's Handbook for Seeking, Writing, and Managing Grants. Santa Barbara, Calif.: Libraries Unlimited, 2011.

Hider, Philip. Information Resource Description: Creating and Managing Metadata. London: Facet Publishing, 2011.

Kumbhar, Rajendra. Library Classification Trends in the 21st Century. Oxford: Chandos Publishing, 2011.

Lubas, Rebecca L., ed. Practical Strategies for Cataloging Departments. Third Millennium Cataloging. Santa Barbara, Calif.: Libraries Unlimited, 2011.

Stokes, Roy. A Bibliographical Companion. Lanham, Md.: The Scarecrow Press, Inc., 2011.

Weber, Mary Beth and Fay Angela Austin. Describing Electronic, Digital, and Other Media Using AACR2 and RDA: A How-to-do-it Manual and CD-ROM for Librarians. New York: Neal-Schuman Publishers, 2011.


People



Kelley McGrath Cited by OLAC

The Online Audiovisual Catalogers gave a special citation to Kelley McGrath at the 2011 membership meeting held during the Midwinter American Library Association meeting in San Diego. The text of the citation is as follows:

Long-time OLAC member and current Committee on Cataloging: Description and Access liaison Kelley McGrath has conceived and spearheaded an effort to improve user access to moving image materials held by libraries and archives, inspired by the Functional Requirements for Bibliographic Records (FRBR). With the help of programmer Chris Fitzpatrick and funding from OLAC, Kelley has created the OLAC Work-Centric Moving Image Discovery Interface Prototype. Leveraging the FRBR model and faceted search, the new discovery interface presents users with one result per FRBR work, similar to results obtained by searching the Internet Movie Database (IMDb). Users are then invited to narrow their selection by choosing aspects such as format and language of soundtrack. The goal is to provide a discovery interface based on the characteristics that searchers typically value, resulting in a more streamlined, easily-understood display. This discovery interface prototype would not have come to fruition without Kelley's vision. The OLAC Executive Board applauds Kelley's accomplishment and we eagerly anticipate her progress in moving the prototype into a production environment.


Marielle Veve, 2011 Esther J. Piercy Award Recipient

The Association for Library Collections & Technical Services (ALCTS) division of the American Library Association (ALA) has named Marielle Veve, cataloging and metadata librarian at the University of Tennessee's Hodges Library, as the 2011 recipient of the distinguished Esther J. Piercy Award. The Piercy Award recognizes librarians with no more than ten years of professional experience who make significant contributions to the profession in library collections and technical services and show outstanding promise for leadership and continued contributions to the profession. See http://www.americanlibrariesmagazine.org/news/ala/piercy-award-marielle-veve.

 

 


 

©Taylor & Francis