Issue 16, 2012-02-03
The winter months bring us festivities like Mardi Gras. Here at the Code4Lib Journal, we present you with a veritable feast to indulge in as our mid-winter festival offering. Consume slowly, to fully appreciate the myriad flavors and enjoy the richness of the fare.
In creating a mobile-optimized website for Drexel University Libraries, we have striven to preserve the seamless transition between platforms that our desktop users experience. We employ separate technology and coding solutions to make the Drupal, WordPress, and HTML sections of the site mobile-optimized, while continuously improving the mobile user experience in terms of design, usability, and site performance. This paper details how, through extensive research, design, and development, we found the best solution for creating a consistent mobile experience for our users.
On June 2, 2011, Bing, Google, and Yahoo! announced the joint effort Schema.org. When the big search engines talk, Web site authors listen. This article is an introduction to Microdata and Schema.org. The first section describes what HTML5, Microdata, and Schema.org are, and the problems they have been designed to solve. With this foundation in place, the second section provides a practical tutorial on how to use Microdata and Schema.org, using a real-life example from the cultural heritage sector. Along the way, some tools for implementers will also be introduced. Issues with applying these technologies to cultural heritage materials will crop up, along with opportunities to improve the situation.
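For instance, a short Microdata fragment (illustrative only, not drawn from the article) attaches Schema.org types and properties to ordinary HTML elements to describe a museum object:

```html
<!-- Hypothetical example: a painting described with Schema.org Microdata.
     itemscope/itemtype declare the item's type; itemprop names each property. -->
<div itemscope itemtype="http://schema.org/Painting">
  <h1 itemprop="name">The Scream</h1>
  <p>By
    <span itemprop="creator" itemscope itemtype="http://schema.org/Person">
      <span itemprop="name">Edvard Munch</span></span>,
    <time itemprop="dateCreated" datetime="1893">1893</time>.
  </p>
</div>
```

Search engines that consume Schema.org markup can read the painting's name, creator, and creation date directly from these attributes without changing what the page displays.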
Using VuFind, XAMPP, and Flash Drives to Build an Offline Library Catalog for Use in a Liberal Arts in Prison Program
When Grinnell College expanded its Liberal Arts in Prison Program to include the First Year of College Program in the Newton Correctional Facility, the Grinnell College Libraries needed to find a way to support the research needs of inmates who had no access to the Internet. The Libraries used VuFind, running on XAMPP installed on flash drives, to provide access to the Libraries’ catalog. Once a student identified a book, the Libraries delivered it on request. This article describes the process of getting VuFind operating in an environment with no Internet access and limited control of the computing environment.
When library end-users search the online catalogue for works by a particular author, they will typically get a long list containing different translations and editions of all the books by that author, sorted by title or date of issue. In an attempt to bring some order to this chaos, the Pode project has applied a method of automated FRBRization based on the information contained in MARC records. The project has also experimented with RDF representation to demonstrate how an author’s complete production can be presented as a short and lucid list of unique works, which can easily be browsed by their different expressions and manifestations. Furthermore, by linking instances in the dataset to matching or corresponding instances in external datasets, the presentation has been enriched with additional information about authors and works.
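The grouping step can be sketched in a few lines. This is a toy illustration, assuming a normalized author plus uniform-title key identifies a work; it is not the Pode project's actual algorithm, and the records are invented:

```python
# Toy FRBRization sketch: cluster catalogue records into "works" so that
# translations and editions collapse under one heading.
from collections import defaultdict

records = [  # hypothetical MARC-derived fields
    {"author": "Hamsun, Knut", "uniform_title": "Sult", "title": "Hunger", "lang": "eng"},
    {"author": "Hamsun, Knut", "uniform_title": "Sult", "title": "Sult", "lang": "nor"},
    {"author": "Hamsun, Knut", "uniform_title": "Pan", "title": "Pan", "lang": "nor"},
]

def work_key(rec):
    """Normalize author and uniform title into a work identifier."""
    return (rec["author"].strip().lower(), rec["uniform_title"].strip().lower())

works = defaultdict(list)
for rec in records:
    works[work_key(rec)].append(rec)

# Each work now lists its expressions/manifestations.
for (author, title), expressions in sorted(works.items()):
    print(f"{author}: {title} ({len(expressions)} expressions)")
```

Real MARC data requires far more careful key construction (uniform titles are often absent or inconsistent), which is exactly where the project's work lies.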
Presenting results as dynamically generated co-authorship subgraphs in semantic digital library collections
Semantic web representations of data are by definition graphs, and these graphs can be explored using concepts from graph theory. This paper demonstrates how semantically mapped bibliographic metadata, combined with a lightweight software architecture and Web-based graph visualization tools, can be used to generate dynamic authorship graphs in response to typical user queries, as an alternative to more common text-based results presentations. It also shows how centrality measures and path analysis techniques from social network analysis can be used to enhance the visualization of query results. The resulting graphs require modestly more cognitive engagement from the user but offer insights not available from text.
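As a sketch of the underlying idea (an assumption for illustration, not the paper's architecture or data), hypothetical query results can be turned into a weighted co-authorship graph and the authors ranked by degree centrality using only the standard library:

```python
# Build a co-authorship graph from bibliographic records and rank authors
# by degree centrality. All records here are invented.
from itertools import combinations
from collections import defaultdict

records = [  # hypothetical query results: each paper's author list
    {"title": "Paper A", "authors": ["Smith", "Jones", "Lee"]},
    {"title": "Paper B", "authors": ["Smith", "Lee"]},
    {"title": "Paper C", "authors": ["Jones", "Patel"]},
]

# Undirected graph: an edge links two authors on the same paper;
# the edge weight counts their shared papers.
graph = defaultdict(lambda: defaultdict(int))
for rec in records:
    for a, b in combinations(sorted(rec["authors"]), 2):
        graph[a][b] += 1
        graph[b][a] += 1

# Degree centrality: fraction of the other authors each author is linked to.
n = len(graph)
centrality = {a: len(nbrs) / (n - 1) for a, nbrs in graph.items()}
for author in sorted(centrality, key=centrality.get, reverse=True):
    print(author, round(centrality[author], 2))
```

The same adjacency structure feeds directly into path analysis (e.g. breadth-first search between two authors) and into Web-based visualization tools that accept node/edge lists.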
A dentograph is a visualization of a library’s collection built on the idea that a classification scheme is a mathematical function mapping one set of things (books or the universe of knowledge) onto another (a set of numbers and letters). Dentographs can visualize aspects of just one collection or can be used to compare two or more collections. This article describes how to build them, with examples and code using Ruby and R, and discusses some problems and future directions.
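The classification-as-function idea can be sketched briefly. The article's own examples use Ruby and R; this hypothetical Python version, with an invented call-number list, only illustrates mapping call numbers to grid cells and counting holdings per cell:

```python
# Dentograph sketch: a classification scheme as a function from call
# numbers to coordinates, with a count of holdings at each coordinate.
from collections import Counter

call_numbers = ["QA76.9", "QA76.73", "PS3515", "QA95", "PS3566"]  # invented

def classify(cn):
    """Map an LC-style call number to a (letter class, hundreds band) cell."""
    letters = "".join(c for c in cn if c.isalpha())
    digits = "".join(c for c in cn.split(".")[0] if c.isdigit())
    band = (int(digits) // 100) * 100 if digits else 0
    return (letters, band)

grid = Counter(classify(cn) for cn in call_numbers)
for cell, count in sorted(grid.items()):
    print(cell, count)
```

Plotting the resulting cells as a heat map, with count as intensity, gives a crude one-collection dentograph; comparing two collections means computing two such grids over the same cells.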
This paper explores how to integrate data across a hybrid relational database and XML-based management system. It examines specifically how XSLT’s SQL extension can be used to communicate information between SQL tables and TEI-conformant XML documents to make data-centric content more manageable and flexible, and thereby leverage the strengths of both systems. In what follows, readers will learn about some of the methods, benefits, and shortcomings of XSLT’s SQL extension in the context of Encyclopedia Virginia, an open-access publication of the Virginia Foundation for the Humanities that utilizes a suite of digital humanities and digital library XML vocabularies, such as TEI and METS.
Many who would benefit the most from timesaving bibliographic managers hesitate to adopt the technology because of the difficulty of importing legacy bibliographies developed over years. Existing shortcuts rely on manual reformatting or on re-searching online databases for the records – often almost as laborious as retyping the references. Ref2RIS was developed to automate the task of converting a bibliography in specific citation styles from common word-processing document formats into the widely used RIS format. It uses the Unix stream editor sed and the conversion options of Apple’s textutil. It can be invoked as a series of simple shell commands, or more simply as a drag-and-drop AppleScript application on Mac OS X 10.4 or higher.
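The flavor of one such conversion rule can be shown in a simplified, hypothetical analogue (the real tool is a set of sed substitutions plus textutil, and needs many rules per citation style): a single regex that parses an "Author (Year). Title. Journal, Volume, Pages." pattern and emits the corresponding RIS tags.

```python
# Hypothetical single-rule analogue of a Ref2RIS conversion: one citation
# style pattern mapped to RIS tags. Real bibliographies need many rules.
import re

PATTERN = re.compile(
    r"(?P<author>[^(]+)\((?P<year>\d{4})\)\.\s*"
    r"(?P<title>[^.]+)\.\s*(?P<journal>[^,]+),\s*"
    r"(?P<volume>\d+),\s*(?P<pages>[\d–-]+)\."
)

def to_ris(citation):
    """Convert one matching citation to an RIS record, or None if unmatched."""
    m = PATTERN.match(citation)
    if not m:
        return None  # unmatched styles would need their own rules
    start, _, end = m["pages"].replace("–", "-").partition("-")
    lines = [
        "TY  - JOUR",
        f"AU  - {m['author'].strip()}",
        f"PY  - {m['year']}",
        f"TI  - {m['title'].strip()}",
        f"JO  - {m['journal'].strip()}",
        f"VL  - {m['volume']}",
        f"SP  - {start}",
    ]
    if end:
        lines.append(f"EP  - {end}")
    lines.append("ER  - ")
    return "\n".join(lines)

print(to_ris("Smith, J. (2010). On widgets. Widget Studies, 4, 12-19."))
```

sed expresses the same idea as stream substitutions over the textutil-extracted plain text, which is what makes the whole pipeline scriptable.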
Throughout the library community, examples can be found of development projects evolving into mission-critical components of an organization’s workflow. How these projects make that move is unique and varied, but there has been little discussion of how these projects affect their developers and the project community. What responsibilities does a developer have to ensure the long-term viability of a project? Does simply freeing the code meet those long-term responsibilities, or is there an implied commitment to provide long-term “care and feeding” to project communities built up over time? Code4Lib represents a group of developers consistently looking to build the next big thing. I’d like to step back, look at some of my own experiences with the long-term impacts that come with developing successful projects and communities, and offer library developers food for thought as they consider their own ongoing responsibilities to their projects and user communities.