Issue 23, 2014-01-17
Hack your life with 10 New Year’s resolutions from Code4Lib Journal.
The Road to Responsive: University of Toronto Libraries’ Journey to a New Library Catalogue Interface
With the recent surge in the mobile device market and an ever-expanding patron base with increasingly divergent levels of technical ability, the University of Toronto Libraries embarked on the development of a new catalogue discovery layer to fit the needs of its diverse users. The result: a mobile-friendly, flexible, and intuitive web application, built on Responsive Web Design principles, that brings the full power of a faceted library catalogue to users without compromising quality or performance.
Standards-based metadata in digital library collections is commonly less than standard. Limitations brought on by routine cataloging errors, sporadic use of authority and controlled vocabularies, and systems that cannot effectively handle text encoding lead to pervasive quality issues. This paper describes the use of Linked Data for enhancement and quality control of existing digital collections metadata. We provide practical recipes for transforming uncontrolled text values into semantically rich data, performing automated cleanup on hand-entered fields, and discovering new information from links between legacy metadata and external datasets.
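As a hedged illustration of the "uncontrolled text to semantically rich data" recipe (this is not the paper's code), the Python sketch below reconciles a free-text name against the Library of Congress name authorities using the public suggest service at id.loc.gov; the exact endpoint and its OpenSearch-style JSON response are assumptions to verify against the current service documentation.

```python
import requests

def reconcile_name(raw_value):
    """Try to match an uncontrolled name string to an LC Name Authority URI.

    Assumes the public suggest service at id.loc.gov and its
    OpenSearch-style response: [query, [labels], [descriptions], [uris]].
    """
    resp = requests.get(
        "https://id.loc.gov/authorities/names/suggest/",
        params={"q": raw_value},
        timeout=10,
    )
    resp.raise_for_status()
    query, labels, _, uris = resp.json()
    # Return the first (label, URI) pair, or None if nothing matched.
    return (labels[0], uris[0]) if uris else None

print(reconcile_name("Twain, Mark"))
```

A matched URI can then be stored alongside the original string, giving downstream processes a stable link into the external dataset.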
The University of North Texas (UNT) and the Oklahoma Historical Society (OHS) are collaborating to digitize, process, and make publicly available more than one million photographs from the Oklahoma Publishing Company’s historic photo archive. The project, started in 2013, is expected to span a year and a half and will result in digitized photographs and metadata available through The Gateway to Oklahoma History. The project team developed the workflow described in this article to meet the specific criterion that all of the metadata work occurs in two locations simultaneously.
This article discusses how the WSLS-TV News Digitization Project at the University of Virginia Libraries was the catalyst for creating a more formalized project workflow and, eventually, a Project Management Office. The project revealed the need for better coordination among groups in the library and for more transparent processes. By creating well-documented policies and processes, the new project workflow clarified roles, improved communication, and created greater transparency. The new processes enabled staff to understand how decisions are made and how resources are allocated, which allowed them to work more efficiently.
Audio digitization is becoming essential to many libraries. As more and more audio files are digitally preserved, the workflows for handling those digital objects need to be examined to ensure efficiency. In some instances, files are manipulated manually when it would be more efficient to manipulate them programmatically. This article describes a time-saving solution to the problem of splitting master audio files into sub-item tracks.
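The article's own workflow is not reproduced here, but a minimal sketch of the general technique, driving ffmpeg from Python to cut a master file into tracks, might look like the following; the file names and cue points are invented for illustration.

```python
import subprocess

# Hypothetical cue sheet: (start, end, output name) for each sub-item track.
TRACKS = [
    ("00:00:00", "00:04:32", "side-a-track-01.wav"),
    ("00:04:32", "00:09:10", "side-a-track-02.wav"),
]

def split_master(master="master.wav"):
    """Cut a master audio file into tracks without re-encoding."""
    for start, end, outfile in TRACKS:
        subprocess.run(
            ["ffmpeg", "-i", master, "-ss", start, "-to", end,
             "-c", "copy", outfile],
            check=True,
        )

split_master()
```

Because the audio stream is copied rather than re-encoded, each track is produced quickly and without generational loss.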
A prototype Digital Video Library was developed as part of a project, funded by the National Network of Libraries of Medicine, to assist rural primary care clinics with the diagnosis of autism. The Digital Video Library takes play sample videos generated by a rural clinic and makes them available to experts at the Autism Spectrum Disorders (ASD) Clinic at The University of Alabama. The experts are able to annotate segments of the video using an integrated version of the Childhood Autism Rating Scale, Second Edition, Standard Version (CARS2). The Digital Video Library then extracts the annotated segments and provides a robust search and browse feature. The videos can then be accessed by the subject's primary care physician. This article summarizes the development and features of the Digital Video Library.
The Unix environment offers librarians and archivists high-quality tools for quickly transforming born-digital and digitized assets, such as resizing videos, creating access copies of digitized photos, and making fair-use reproductions of audio recordings. These tools, such as ffmpeg, lame, sox, and ImageMagick, can apply one or more manipulations to digital assets without the need to manually process individual items, which is error-prone, time-consuming, and tedious. This article provides information on getting started with the Unix environment and taking advantage of these tools for batch processing.
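As a hedged sketch of that batch-processing pattern (not code from the article), the loop below drives ImageMagick's convert utility from Python to turn a folder of TIFF masters into resized JPEG access copies; the directory layout, size, and quality setting are illustrative, and convert is assumed to be on the PATH.

```python
import pathlib
import subprocess

MASTERS = pathlib.Path("masters")   # hypothetical folder of TIFF masters
ACCESS = pathlib.Path("access")
ACCESS.mkdir(exist_ok=True)

# Create a resized JPEG access copy for every TIFF in the masters folder.
for tiff in sorted(MASTERS.glob("*.tif")):
    jpeg = ACCESS / (tiff.stem + ".jpg")
    subprocess.run(
        ["convert", str(tiff),
         "-resize", "1024x1024>",   # shrink only images larger than this
         "-quality", "85",
         str(jpeg)],
        check=True,
    )
    print(f"{tiff.name} -> {jpeg.name}")
```

The same loop structure works for any of the tools named above; only the command list changes.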
Audio and video content forms an integral, important, and expanding part of the digital collections in libraries and archives worldwide. While these memory institutions are familiar with, and well versed in, the management of more conventional materials such as books, periodicals, ephemera, and images, the handling of audio content (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content.
FFmpeg is comprehensive, well-established open source software capable of the full range of audio/video processing tasks, such as encoding, decoding, transcoding, muxing, demuxing, streaming, and filtering. It can also handle a wide range of audio and video formats, a particular challenge for memory institutions. It comes with a command-line interface, as well as a set of developer libraries that can be incorporated into applications.
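To make the command-line interface concrete, here is a small hedged example (not from the article) that inspects a file's streams with ffprobe and then transcodes it to an H.264/AAC access derivative; the input file name and the target encoding settings are assumptions chosen for illustration.

```python
import json
import subprocess

def probe(path):
    """Return container and stream metadata for a media file via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def make_access_copy(master, access="access.mp4"):
    """Transcode a preservation master to an H.264/AAC access derivative."""
    subprocess.run(
        ["ffmpeg", "-i", master,
         "-c:v", "libx264", "-crf", "23",   # reasonable-quality H.264 video
         "-c:a", "aac", "-b:a", "128k",     # compressed stereo audio
         access],
        check=True,
    )

info = probe("master.mov")                  # hypothetical input file
print([s["codec_name"] for s in info["streams"]])
make_access_copy("master.mov")
```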