Article Summary for Lecture #5—Delsey

In “Standards in Descriptive Cataloguing: Two Perspectives on the Past Twenty Years” Tom Delsey, writing in 1989, analyzes the growing significance of two factors that he believes have furthered the development of standardized cataloguing rules and practices. Calling the first of these the “economics of shared cataloguing,” Delsey asserts that, despite slight differences in language, culture, and approach, the “interchangeability of records” made possible by the creation and refinement of cataloguing standards has been economically beneficial to the industry. The second factor influencing the growth of standardization is the application of computer technology, which, according to Delsey, pressured cataloguers to refocus their methods toward increasing the “precision and logic” associated with bibliographic records. This new focus eventually led to the emergence of “horizontal standardization,” which Delsey defines as the process of applying structures consistently across a variety of material-specific formats instead of creating structures specific to the material types being described. (51-52, 54, 56)

Delsey states that the most basic objective of shared cataloguing is eliminating the inefficiencies associated with duplicating records. Complicating this goal, however, are differences in cataloguing practice brought about by cultural variation, multiple approaches to diverse audiences, and historical precedent. In spite of these differences, contends Delsey, cataloguers have succeeded in developing a system of rules and standards that, while perhaps not universally observed, are internationally compatible. He mentions landmarks in the development of international cataloguing standards, spearheaded by the IFLA, which include the International Standard Bibliographic Description (ISBD)—originally for monographs in 1971 but expanded to include other formats—and successful Anglo-American collaboration expressed in AACR1 and AACR2. The economic benefits of standardization, Delsey continues, cannot be fully realized without successful implementation of the rules across the board. Fortunately, cataloguing agencies have been proactive in creating supporting mechanisms aimed at eliminating divergent practices and promoting uniform compliance with the rules. Some of these efforts have been collaborative, such as those of the ABACUS Group, and, according to Delsey, have done much to minimize the potential for deviation and, by extension, to secure the economic benefits of shared cataloguing. (51-54)

Delsey states that, unlike more traditional methods, computerized cataloguing has a low tolerance for deviation. Simple typographical errors easily accommodated by the card catalog are problematic for machine-readable records. This simple reality, says Delsey, led to a reassessment of cataloguing practices that ushered in a new era of bibliographic recording based on precision and logic. This “second wave” of cataloguing standardization, as he calls it, was characterized by a focus on creating structures “horizontally” across material-specific formats and, thus, extending the bibliographic framework beyond monographs and serials. These rules were embodied in the ISBD(G) (1977) and adhered to in both AACR2 and IFLA’s ISBDs. Despite several revisions, adaptations, and extensions since then, Delsey claims, the model has “proven sufficiently flexible” in maintaining compatibility between records of different formats. Interestingly, Delsey concludes by anticipating newer technological approaches based on database models emphasizing relationships among entities. (56-59)

While I am not wholly qualified to critique Delsey’s historical assessment of cataloguing standardization, I feel that, conceptually, he is spot on. Efficiency is a goal of every industry, and behind that goal is an economic impetus. Thus, it makes perfect sense for cataloguing agencies to pursue standardization of rules and uniformity in implementation in order to reduce the duplication of records, thereby maximizing efficiency and increasing economic gains. Perhaps the only surprising element of this process is the degree of international cooperation, which, due to political and cultural differences, can often be difficult to obtain. Delsey also provides an accurate description of the advantages—and disadvantages—of computerized cataloguing. Due to its inherent limitations, computer technology has greatly increased the need for accurate record keeping and, in so doing, has created stricter standards for the industry. While this may on the surface appear to be a disadvantage, the possibilities associated with these technological advances are immense. Interestingly, Delsey foreshadows the emergence of the FRBR entity-relationship model as a “third wave” in cataloguing development, anticipating some of the issues surrounding it—such as physical versus intellectual attributes and problems associated with multiple expressions of a work. On the whole, therefore, Delsey has done well in explaining key factors affecting the development of descriptive cataloguing and in predicting further changes to the profession. (59-60)

Article Summary for Lecture #4—O’Neill

In “FRBR: Application of the Entity-Relationship Model to Humphry Clinker” Edward O’Neill explores both “benefits and drawbacks” of the IFLA’s entity-relationship (FRBR) model by applying it to The Expedition of Humphry Clinker by Tobias Smollett. As a widely held and previously studied work of midlevel complexity, explains O’Neill, Clinker offers a suitable lens through which to examine the Group 1 entities—work, expression, manifestation, and item—of the FRBR model. He assumes that, if the FRBR model works for Clinker, it will work for similar works as well. O’Neill states that his goal with this study was to “go beyond organizing bibliographic records to organizing the bibliographic objects represented by bibliographic records.” (152, 153)

O’Neill begins by defining FRBR’s entity-relationship model and differentiating it from current cataloguing methods. Current practices, he argues, are characterized by a focus “on a single bibliographic unit” and, as such, are greatly limited in their collocating capabilities. (151) FRBR’s hierarchical structure, on the other hand, provides a way to identify the “relationships between the entities” of a given work, thus improving the organizational and navigational functions within a bibliographic system. O’Neill then goes on to define each of the Group 1 entities, emphasizing the relationships between them as the “most important aspects of the FRBR model.” (152) His “FRBRization” of Clinker, however, reveals several problems that threaten to undermine the overall functionality of the FRBR system.
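The Group 1 hierarchy O’Neill works with can be pictured as nested data: a work realized through expressions, embodied in manifestations, exemplified by items. The following is only an illustrative sketch of that structure—the class names and the sample publication data are my own, not drawn from the article or the IFLA report:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A single physical copy of a manifestation."""
    call_number: str

@dataclass
class Manifestation:
    """A physical embodiment of an expression (e.g., a particular edition)."""
    publisher: str
    year: int
    items: list = field(default_factory=list)

@dataclass
class Expression:
    """An intellectual realization of the work (e.g., a revised or annotated text)."""
    label: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:
    """The abstract intellectual creation at the top of the Group 1 hierarchy."""
    title: str
    expressions: list = field(default_factory=list)

# Hypothetical example: one work, two expressions, one recorded manifestation
work = Work("The Expedition of Humphry Clinker")
original = Expression("original 1771 text")
annotated = Expression("text with editor's introduction and notes")
work.expressions = [original, annotated]
original.manifestations.append(Manifestation("W. Johnston", 1771))

# Collocation across the hierarchy: every manifestation of the work,
# regardless of which expression it embodies
all_manifestations = [m for e in work.expressions for m in e.manifestations]
```

The collocating benefit O’Neill describes falls out of the structure: once records are linked this way, gathering all manifestations of a work is a simple traversal rather than a string-matching exercise.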

The key problem, claims O’Neill, involves the expression entity. The IFLA report states that any revisions or modifications of a work, regardless of their significance, constitute a new expression of that work. O’Neill argues, however, that this clear-cut definition is contradicted by the amount of flexibility allowed in its interpretation. In the case of Clinker, supplemental materials added to the work since its original publication in 1771 led to 48 different expressions among the 157 records examined. O’Neill sees many of the modifications as rather insignificant and therefore questions the added complexity. For those revisions of greater significance, however, the bibliographic records often failed to supply sufficient information to accurately identify a new expression. The highly inconsistent and ever-evolving cataloguing methods of identifying editors, illustrators, and bibliographies associated with Clinker rendered any attempt to accurately differentiate between the work’s varied expressions a monumental endeavor. Clearly identifying the different expressions of a work, according to the results of O’Neill’s study, remains a great challenge to the overall effectiveness of the FRBR model.

Despite the problem of identifying expressions, O’Neill states that the remaining entity-relationships are sound and provide a “powerful means” to improve bibliographic functionality. He also suggests, as an alternative to the expression entity, the inclusion of “additional manifestation attributes” that might offer a way to “generate custom expression-like bibliographic record displays.” (158) I think that O’Neill might be on to something here. For a work like Clinker, a clearer method of sorting through the roles of contributors, such as editors and illustrators, seems to be all that is needed to streamline a design built on the FRBR model. A potential weakness of this study, of course, is its limited focus. Perhaps O’Neill’s assumption that the conclusions are transferable to similar works is accurate—but perhaps it is not. Similar studies are needed. In the end, however, it seems that the entity-relationship model still presents more benefits than drawbacks, especially when considering the scarcity of works as complex as Clinker. For the host of other works that lack Clinker’s level of complexity, FRBR methods can be applied more consistently, completely, and without ambiguity.

Article Summary for Lecture #3—Yee

In her article “‘Wholly Visionary’: The American Library Association, the Library of Congress, and the Card Distribution Program,” Martha Yee offers a brief history of the development of the Library of Congress’s card distribution program and describes its subsequent impact on the trajectory of cataloguing procedures among American libraries. Yee asserts that, due to its highly cooperative nature, the LC card program resulted in the standardization of cataloguing rules and techniques within the nation’s library system and “produced the equivalent of a national bibliography.” She therefore admires the efforts of those, such as Melvil Dewey and Herbert Putnam, who were instrumental in the program’s development, calling it an “ingenious scheme” and referring to the “courage it took” to engage in such a monumental undertaking. She closes with a discussion of the changing landscape of the cataloguing profession, expressing concern over matters involving technology and the allocation of funds, and questions the LC’s increasingly precarious role within the industry. (68, 72)

Yee points to three attempts during the second half of the nineteenth century to “centralize” cataloguing work within American libraries. Founded in 1876, the ALA was an adamant supporter of this cause and contributed many resources to the establishment of more universal procedures. The first attempt described by Yee involved partnering with the nation’s book publishers to obtain cataloguing information as new books were released. This endeavor failed for a number of reasons, according to Yee, the most important perhaps being the lack of standard cataloguing rules. The second attempt, the creation of a cataloguing “bureau” within the ALA, suffered from low subscription rates and from the same lack of standardization. A third option involved both the creation of standardized rules for cataloguing and a national agency for administering and controlling this industry. This idea found eventual success with the LC’s card distribution program. (69)

The establishment of the LC card system did not happen overnight, however, and was not without its controversies. Yee describes the program’s development as a “clash of objectives” over such matters as the interests of large research libraries versus smaller ones, and which classification system to implement. In addition, adequate funding was required for the program to succeed. In the end, the program was designed to accommodate larger scholarly libraries—the idea being that smaller libraries would benefit from this approach as well—and the decision was made to create a new classification system, with its own list of subject headings, rather than adopt the one created by Dewey. Due mostly to Putnam’s political connections, the program benefited from the support of Congress and the ALA, and so suitable funding was never an issue. The card program officially went into effect in 1901 and, due to several factors noted by Yee, achieved remarkable success. Available funding meant that the LC was adequately staffed to handle the workload associated with the project. Importantly, the program was “legitimatized” both through standardization and the presence of a national cataloguing agency. Finally, Yee contends that, because the program emerged during the period of Carnegie’s philanthropy toward libraries, when the nature of cataloguing was already shifting, the timing was right for the project to take root. (73)

In closing, I agree with Yee’s assertions concerning the significance of the LC cards. The creation of the card system was indeed an undertaking of monumental scope that had far-reaching consequences with regard to the expansion and dissemination of knowledge. So, yes, there is something in this narrative that is “profoundly democratic” and perhaps “peculiarly American.” I am not so sure that I share her concerns about the changing nature of the cataloguing industry, however. Much of this concern can be attributed simply to anxiety associated with the passing of a “golden age.” Indeed, change often brings out pessimism in people. Yes, there are changes to the profession—technological and otherwise—taking place every day, and sometimes these changes outpace original objectives and render traditional techniques obsolete. That said, with a clear focus on the “goals” Yee alludes to in making use of “humanity’s entire cultural record,” and perhaps a little of the courage exemplified by Putnam, cataloguing professionals today can continue to make a positive contribution to the societies of which they are a part. (74-76)

Reflections on the Principle of Least Effort

In chapter eight of his book Library Research Models, Thomas Mann discusses the generally accepted, but often ignored, Principle of Least Effort, which, put simply, states that researchers typically utilize the information resources that are most easily accessible—regardless of their quality. Mann embraces the principle as common knowledge and includes a sampling of the extensive body of literature (within a variety of disciplines) that attests to the principle’s universal influence. As such, Mann endeavors neither to challenge nor to support the principle directly but, rather, laments the fact that library and information professionals have chosen to overlook it when designing bibliographic and other information organizing systems. While other practitioners pay it “lip service,” Mann views the principle as a genuine determinant of information-seeking behavior and, therefore, suggests a new model that would make the “best sources for researchers’ inquiries” more readily available. (Mann 91, 92)

In The Intellectual Foundations of Information Organization, Elaine Svenonius offers an analysis of bibliographical theory and practice that, while not written in direct support of Mann’s overall purpose, supports the essence of his point of view. In chapter two of her book, Svenonius evaluates the five objectives of a full-featured bibliographic system—finding, collocating, choice, acquisition, and navigation—tracing their development, examining their current level of implementation, and noting problems and limitations associated with each. Throughout her assessment, the author stresses the links—with respect to both design and evaluation—between the objectives and user needs. She explains, for example, that the fifth objective is derived in part from the simple reality that users require guidance within the ever-expanding bibliographic universe. (Svenonius 18-20) This assertion fits well alongside those of Mann, who cleverly employs a pinball analogy to illustrate his position that the level of guidance provided within a system (the “slope of the gameboard”) outweighs users’ skills in determining the results of information-seeking activity. (Mann 92, 93)

Svenonius also addresses the various arguments both for and against full-featured systems. Key arguments against full-featured bibliographic systems include cost and a lack of user need or capability. Add to this list Mann’s mention of the tendency among designers to attribute failures to “laziness” among users, instead of accepting blame for the inadequacies of their systems. (Mann 98, 100) Arguments in favor of advanced systems, according to Svenonius, stem from the belief that users are “shortchanged” by systems offering less. These include the views that 1) the varying degree of user needs and capabilities should not limit a system’s level of sophistication; 2) the difficulty users experience while seeking information should be addressed and remedied by the system; and 3) the advance of knowledge depends upon the ultimate success of a system’s design and functionality. (Svenonius 28-29) While Mann would agree with these arguments in a general sense, he would caution against the first argument’s implication that a system should be designed to cater to scholars, as research shows that they are just as inclined to follow the Principle of Least Effort as other users.

In “The Invisible Substrate of Information Science” Marcia Bates adds texture to this discussion by illustrating some of the lesser-known and under-explored aspects of the information science paradigm. In addition to the usual elements—gathering, organizing, retrieving, and disseminating information—that define the field, there is also a “below-the-water-line” portion of information science, which, according to Bates, is best described as a “meta-discipline” because its practice cuts across a number of content disciplines. (Bates 1043-1044) Key to Bates’ assessment of the field are theory and methodology. Specifically, she argues that information science is characterized by an emphasis on information expertise (representational and organizational skills) over content expertise (subject knowledge). What further distinguishes the field, continues Bates, is that it focuses primarily on “recorded information and people’s relationship to it.” She refers to the study of this relationship as the “intellectual domain” of information science and urges researchers to delineate the “parameters and variables” of this area of study. (Bates 1048) Collectively, Mann, Svenonius, and Bates make a strong case for basing system design on bibliographic objectives that fully support user needs by incorporating all that is known about information-seeking behavior and the information science paradigm.

I feel personally that, as a general rule of information-seeking behavior, the Principle of Least Effort is valid—although there are exceptions to every rule. Yes, there are slackers who are content with whatever the system affords them. But I tend to believe that for every slacker there is a dedicated researcher who is willing to go the extra mile to find the information he or she needs. Most users probably fall somewhere between these extremes. It makes sense, therefore, to design a system that accommodates, as universally as possible, users at each point on the information-seeking behavioral spectrum. Mann’s main point—simple but sound—is that good information needs to be readily accessible to those who desire it. System designers should keep this goal in mind. If they do, I believe the systems they fashion will increase access to quality information and, therefore, successfully serve the greatest number of users—dedicated researchers and slackers alike.