Creating a Custom Queueing System for a Makerspace Using Web Technologies
This article details the changes made to the queueing system used by Virginia Tech University Libraries’ 3D Design Studio as the space was decommissioned and reabsorbed into the new Prototyping Studio makerspace. The new service, with its greatly expanded machine and tool offerings, required a revamp of the underlying data structure and presented an opportunity to rethink the React and Electron app used previously, with the goal of making the queue more maintainable and easier to deploy going forward. The new Prototyping Queue application uses a modular design with auto-building forms and queues to improve the upgradeability of the app. We also moved away from React and Electron in favor of a web app that loads from the local filesystem of the studio computer and is built on the Svelte framework, using IBM’s Carbon Design components for the frontend. The deployment process was also streamlined, now relying on git and Windows Batch scripts to automate updating the app as changes are committed to the repository.
Editorial: On FOSS in Libraries
Some thoughts on the state of free and open source software in libraries.
Annif Analyzer Shootout: Comparing text lemmatization methods for automated subject indexing
Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments were performed using the open source Annif toolkit for automated subject indexing and classification, but the findings should also generalize to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods outperform the baseline methods in most cases, particularly for Finnish and Swedish text, but not for English, where the baseline methods are most effective. The differences between lemmatization methods are quite small. This systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
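To make the normalization step concrete, the sketch below contrasts a lowercase-only baseline with Snowball stemming for a few Finnish tokens, using NLTK as a stand-in; the analyzers, lemmatizers, and corpora actually compared in the paper are not reproduced here.

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("finnish")

def normalize_baseline(tokens):
    """Baseline: lowercase only, with no morphological normalization."""
    return [t.lower() for t in tokens]

def normalize_stemmed(tokens):
    """Stemming: collapse inflected forms before they reach the classifier."""
    return [stemmer.stem(t.lower()) for t in tokens]

tokens = ["Kirjastojen", "kokoelmat", "kasvavat"]
print(normalize_baseline(tokens))  # lowercased forms, inflections intact
print(normalize_stemmed(tokens))   # inflectional endings stripped
```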
Citation Needed: Adding Citations to CONTENTdm Records
The Tennessee State Library and Archives and the Illinois State Library identified a need to add citation information to individual image records in OCLC’s CONTENTdm (https://www.oclc.org/en/contentdm.html). Experience with digital archives at both institutions showed that citation information was one of the most requested features. Unfortunately, CONTENTdm does not natively display citation information about image records; to add this functionality, custom JavaScript had to be written that would interact with the underlying React environment and parse out or retrieve the appropriate metadata to dynamically build record citations. Detailed code and a description of methods for building two different models of citation generators are presented.
An XML-Based Migration from Digital Commons to Open Journal Systems
The Oregon Library Association has produced its peer-reviewed journal, the OLA Quarterly (OLAQ), since 1995, and OLAQ was published in Digital Commons beginning in 2014. When the host institution decided to move away from Bepress, its new repository solution was no longer a good match for OLAQ. Oregon State University and the University of Oregon agreed to move the journal into their joint instance of Open Journal Systems (OJS), and a small team from OSU Libraries carried out the migration project. The OSU project team declined to use PKP’s existing migration plugin for a number of reasons, instead pursuing a metadata-centered migration pipeline from Digital Commons to OJS. We used custom XSLT to convert tabular data exported from Bepress into PKP’s Native XML schema, which we imported using the OJS Native XML Plugin. This approach provided a high degree of control over the journal’s metadata and a robust ability to test and make adjustments along the way. The article discusses the development of the transformation stylesheet and the metadata mapping and cleanup work involved, as well as the advantages and limitations of this migration strategy.
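As an illustration of this kind of pipeline step, the sketch below applies an XSLT stylesheet to an exported XML file using Python’s lxml library; the file names are placeholders and the actual stylesheet targeting PKP’s Native XML schema is not reproduced here.

```python
from lxml import etree

# Parse the intermediate XML produced from the Bepress tabular export.
source = etree.parse("olaq-export.xml")

# Compile a stylesheet that maps exported fields to PKP's Native XML schema.
transform = etree.XSLT(etree.parse("bepress-to-native.xsl"))

result = transform(source)

# Serialize the transformed document for import via the OJS Native XML Plugin.
with open("olaq-native.xml", "w", encoding="utf-8") as out:
    out.write(str(result))
```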
Choose Your Own Educational Resource: Developing an Interactive OER Using the Ink Scripting Language
Learning games are games created with the purpose of educating, as well as entertaining, players. This article describes the potential of interactive fiction (IF), a type of text-based game, to serve as learning games. After summarizing the basic concepts of interactive fiction and learning games, the article describes common interactive fiction programming languages and tools, including Ink, a simple markup language that can be used to create choice-based text games that play in a web browser. The final section of the article includes code putting the concepts of Ink, interactive fiction, and learning games into action, drawing on part of an interactive OER created by the author in December 2020.
Machine Learning Based Chat Analysis
The BYU library implemented a machine learning-based tool to perform various text analysis tasks on transcripts of chat-based interactions between patrons and librarians. These tasks included estimating patron satisfaction and classifying queries into various categories such as Research/Reference, Directional, Tech/Troubleshooting, Policy/Procedure, and others. An accuracy of 78% or better was achieved for each category. This paper details the implementation and explores potential applications for the text analysis tool.
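The sketch below shows the general shape of such a transcript classifier using scikit-learn; the category labels come from the article, but the sample transcripts, model choice, and pipeline details are illustrative only and do not represent the BYU implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled transcripts; in practice these come from real chat logs.
transcripts = [
    "Where can I find peer-reviewed articles on soil chemistry?",
    "Is the third floor group study room open tonight?",
    "My VPN connection to the databases keeps dropping.",
]
labels = ["Research/Reference", "Directional", "Tech/Troubleshooting"]

# TF-IDF features feed a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(transcripts, labels)

# Classify a new incoming chat query.
print(model.predict(["How do I cite a government report in APA style?"]))
```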
Managing Electronic Resources Without Buying into the Library Vendor Singularity
Over the past decade, the library automation market has faced continuing consolidation. Many vendors in this space have pushed towards monolithic and expensive Library Services Platforms. Other vendors have taken “walled garden” approaches that force vendor lock-in through a lack of interoperability. For these reasons and others, many libraries have turned to open-source Integrated Library Systems (ILSes) such as Koha and Evergreen. These systems offer more flexibility and interoperability options, but tend to be developed with a focus on public libraries and legacy print resource functionality. They lack tools important to academic libraries such as knowledge bases, link resolvers, and electronic resource management systems (ERMs). Several open-source ERM options exist, including CORAL and FOLIO. This article analyzes the current state of these and other options for libraries considering supplementing their open-source ILS, whether run independently, hosted, or in a consortial environment.
Natural Language Processing in the Humanities: A Case Study in Automated Metadata Enhancement
The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser-known writers and a goal of expanding research in this field. Each novel is analyzed using a custom metadata schema that emphasizes race-related elements, covering literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
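As one small example of the word sense disambiguation step mentioned above, the following sketch uses NLTK’s Lesk implementation; the BBIP pipeline, corpus, and schema-specific logic are not reproduced here.

```python
# Requires the NLTK 'punkt' and 'wordnet' data packages to be installed.
from nltk import word_tokenize
from nltk.wsd import lesk

sentence = "She walked along the bank of the river at dusk."
tokens = word_tokenize(sentence)

# Pick the WordNet sense of "bank" best supported by the surrounding context.
sense = lesk(tokens, "bank", pos="n")
if sense is not None:
    print(sense.name(), "-", sense.definition())
```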
Programming Poetry: Using a Poem Printer and Web Programming to Build Vandal Poem of the Day
Vandal Poem of the Day (VPOD) is a public poetry initiative led by the Center for Digital Inquiry and Learning (CDIL) at the University of Idaho Library. For four academic years VPOD has published contemporary poems daily in collaboration with award-winning poetry presses and journals. This article details the project’s genesis and history, focusing on two aspects of the project: 1) the customized WordPress site, CSS, and plugins that enable the layout, publication, and social media promotion of the poetry and 2) the innovative means we have developed for promoting the site using receipt printers. The latter portion includes details and code related to two different physical computing projects that use receipt printers (one using a Raspberry Pi and the other a recycled library circulation printer) to print individual VPOD poems on demand.
Editorial Edit
A few words about our editors. A farewell to one editor. A solicitation for new editors.
Wayfinding Serendipity: The BKFNDr Mobile App
Librarians and staff at St. John’s University Libraries created BKFNDr, a beacon-enabled mobile wayfinding app designed to help students locate print materials on the shelves at two campus libraries. Concept development, technical development, evaluation and UX implications, and financial considerations are presented.
Adaptation: the Continuing Evolution of the New York Public Library’s Digital Design System
A design system is crucial for sustaining both the continuity and the advancement of a website’s design. But it’s hard to create such a system when content, technology, and staff are constantly changing. This is the situation faced by the Digital team at the New York Public Library. When those are the conditions of the problem, the design system needs to be modular, distributed, and standardized, so that it can withstand constant change and provide a reliable foundation. NYPL’s design system has gone through three major iterations, each a step towards the best way to manage design principles across an abundance of heterogeneous content and many contributors who brought different skills to the team and department at different times. Starting from an abstracted framework that provided a template for future systems, then a specific component system for a new project, and finally a system of interoperable components and layouts, NYPL’s Digital team continues to grow and adapt its digital design resource.
Getting Real in the Library: A Case Study at the University of Florida
In the fall of 2014, the University of Florida (UF) Marston Science Library, in partnership with UF IT, opened a new computer lab for students to learn and develop mobile applications. The Mobile Application Development Environment (MADE@UF) features both software and circulating technology for students to use in an unstructured and minimally-staffed environment. As the technological landscape has shifted in the past few years, virtual and augmented reality have become more prominent and prevalent, signaled by companies like Facebook, Google, and Microsoft making significant financial investments in these technologies. During this evolution, MADE@UF has migrated to focus more on virtual and augmented reality, and we will discuss the opportunities and challenges that hosting and managing such a space has provided to the science library and its staff.
DIY DOI: Leveraging the DOI Infrastructure to Simplify Digital Preservation and Repository Management
This article describes methods for how staff with modest technical expertise can leverage the DOI (Digital Object Identifier) infrastructure in combination with third party storage and preservation solutions to build safer, more useful, and easier to manage repositories at much lower cost than is normally possible with standalone systems. It also demonstrates how understanding the underlying mechanisms and questioning the assumptions of technology metaphors such as filesystems can lead to seeing and using tools in new and more powerful ways.
Extending Omeka for a Large-Scale Digital Project
In September 2016, the department of Special Collections and Archives, Kent State University Libraries, received a Digital Dissemination grant from the National Historical Publications and Records Commission (NHPRC) to digitize roughly 72,500 pages from the May 4 collection, which documents the May 1970 shootings of thirteen students by Ohio National Guardsmen at Kent State University. This article highlights the project team’s efforts to adapt its Omeka instance, modifying the interface and ingestion processes to support presenting unique archival collections online. These modifications include an automated method to create folder-level links on the relevant finding aids upon ingestion; integration of the open source Tesseract engine to provide OCR for uploaded files; automated PDF creation from the raw image files using Ghostscript; and integration of Mirador to present a folder-level display that reflects archival organization as it occurs in the physical collections. These adaptations, which have been shared via GitHub, will be of interest to other institutions looking to present archival material in Omeka.
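A minimal sketch of how OCR and PDF steps might be chained with the command-line Tesseract and Ghostscript tools is shown below; the file paths are placeholders, and the project’s actual Omeka ingestion hooks and exact workflow are not reproduced here.

```python
import subprocess
from pathlib import Path

def ocr_to_pdf(image_path):
    """Run Tesseract on one page image, producing a searchable PDF alongside it."""
    out_stem = image_path.with_suffix("")
    subprocess.run(["tesseract", str(image_path), str(out_stem), "pdf"], check=True)
    return out_stem.with_suffix(".pdf")

def combine_pdfs(page_pdfs, combined):
    """Merge per-page PDFs into a single folder-level PDF with Ghostscript."""
    subprocess.run(
        ["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=pdfwrite",
         f"-sOutputFile={combined}"] + [str(p) for p in page_pdfs],
        check=True,
    )

# Placeholder folder of page images; the real workflow is driven from Omeka.
pages = sorted(Path("folder_001").glob("*.tif"))
combine_pdfs([ocr_to_pdf(p) for p in pages], Path("folder_001.pdf"))
```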
Linked Data is People: Building a Knowledge Graph to Reshape the Library Staff Directory
One of our greatest library resources is people. Most libraries have staff directory information published on the web, yet most of this data is trapped in local silos, PDFs, or unstructured HTML markup. With this in mind, the library informatics team at Montana State University (MSU) Library set a goal of remaking our people pages by connecting the local staff database to the Linked Open Data (LOD) cloud. In pursuing linked data integration for library staff profiles, we have realized two primary use cases: improving the search engine optimization (SEO) for people pages and creating network graph visualizations. In this article, we will focus on the code to build this library graph model as well as the linked data workflows and ontology expressions developed to support it. Existing linked data work has largely centered around machine-actionable data and improvements for bots or intelligent software agents. Our work demonstrates that connecting your staff directory to the LOD cloud can reveal relationships among people in dynamic ways, thereby raising staff visibility and bringing an increased level of understanding and collaboration potential for one of our primary assets: the people that make the library happen.
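The sketch below illustrates one way a staff directory entry can be expressed as linked data with Python’s rdflib and schema.org terms; the URIs, names, and properties are illustrative and do not represent MSU Library’s actual graph model.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")

g = Graph()
g.bind("schema", SCHEMA)

# Hypothetical staff member and organization URIs.
person = URIRef("https://www.lib.example.edu/directory/jdoe")
library = URIRef("https://www.lib.example.edu/#org")

g.add((person, RDF.type, SCHEMA.Person))
g.add((person, SCHEMA.name, Literal("Jane Doe")))
g.add((person, SCHEMA.jobTitle, Literal("Metadata Librarian")))
g.add((person, SCHEMA.worksFor, library))
# Link the local record out to the LOD cloud, e.g. a matching ORCID profile.
g.add((person, SCHEMA.sameAs, URIRef("https://orcid.org/0000-0000-0000-0000")))

print(g.serialize(format="turtle"))
```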
What’s New? Deploying a Library New Titles Page with Minimal Programming
With a new titles web page, a library has a place to show faculty, students, and staff the items it is purchasing for its community. However, heavy programming knowledge, a LAMP stack (Linux, Apache, MySQL, PHP), or APIs often stand between a library’s data and a new titles web page. Without IT staff, a new titles page can seem nearly impossible or not worth the effort. Here we demonstrate how a small liberal arts college combined its acquisitions data with a Google Sheet, HTML, and a little JavaScript to create a new titles web page that is dynamic and engaging for its users.
Editorial: Some Numbers
Wherein the Journal’s most popular article and other small mysteries are revealed.
Creation of a Library Tour Application for Mobile Equipment using iBeacon Technology
We describe the design, development, and deployment of a library tour application utilizing Bluetooth Low Energy devices known as iBeacons. The tour application serves as library orientation for incoming students. Students visit stations in the library with mobile equipment running a special tour app. When the app detects a nearby beacon, it automatically plays a video describing the current location. After the tour, students are assessed against the defined learning objectives.
Special attention is given to the issues encountered during development, deployment, content creation, and testing of an application that depends on functioning hardware, and to the necessity of appointing a project manager to limit scope, define priorities, and create an actionable plan for the experiment.
Extracting, Augmenting, and Updating Metadata in Fedora 3 and 4 Using a Local OpenRefine Reconciliation Service
When developing local collections, librarians and archivists often create detailed metadata which then gets stored in collection-specific silos. At times this metadata could be used to augment other collections, but the software does not provide native support for updating and augmenting object relationships. This article describes a project that updated author metadata in one collection using a local reconciliation service generated from another collection’s authority records. Because the Goddard Library is on the cusp of a migration from Fedora 3 to Fedora 4, this article addresses the challenges in updating Fedora 3 and the ways Fedora 4’s architecture will allow for easier updates.
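The sketch below outlines the general shape of a local reconciliation endpoint of this kind, using Flask; the matching logic and authority data are placeholders, not the Goddard Library implementation.

```python
import json

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical authority records keyed by preferred form of name.
AUTHORITIES = {"Sagan, Carl": "auth-0001", "Hubble, Edwin": "auth-0002"}

SERVICE_METADATA = {
    "name": "Local author reconciliation",
    "defaultTypes": [{"id": "person", "name": "Person"}],
}

def match(name):
    """Return candidate matches for one queried name (exact match only here)."""
    if name in AUTHORITIES:
        return [{"id": AUTHORITIES[name], "name": name, "score": 100, "match": True}]
    return []

@app.route("/reconcile", methods=["GET", "POST"])
def reconcile():
    queries = request.values.get("queries")
    if not queries:
        # A bare request returns the service metadata OpenRefine expects.
        return jsonify(SERVICE_METADATA)
    payload = json.loads(queries)
    return jsonify({key: {"result": match(q.get("query", ""))}
                    for key, q in payload.items()})

if __name__ == "__main__":
    app.run(port=8000)
```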
Integration of Library Services with Internet of Things Technologies
The SELIDA framework is an integration layer of standardized services that takes an Internet-of-Things approach for item traceability in the library setting. The aim of the framework is to provide tracing of RFID tagged physical items among or within various libraries. Using SELIDA we are able to integrate typical library services—such as checking in or out items at different libraries with different Integrated Library Systems—without requiring substantial changes, code-wise, in their structural parts. To do so, we employ the Object Naming Service mechanism that allows us to retrieve and process information from the Electronic Product Code of an item and its associated services through the use of distributed mapping servers. We present two use case scenarios involving the Koha open source ILS and we briefly discuss the potential of this framework in supporting bibliographic Linked Data.
3D Adaptive Virtual Exhibit for the University of Denver Digital Collections
While the gaming industry has taken the world by storm with its three-dimensional (3D) user interfaces, current digital collection exhibits presented by museums, historical societies, and libraries are still limited to a two-dimensional (2D) interface display. Why can’t digital collections take advantage of this 3D interface advancement? The prototype discussed in this paper presents to the visitor a 3D virtual exhibit containing a set of digital objects from the University of Denver Libraries’ digital image collections, giving visitors an immersive experience when viewing the collections. In particular, the interface is adaptive to the visitor’s browsing behaviors and alters the selection and display of the objects throughout the exhibit to encourage serendipitous discovery. Social media features were also integrated to allow visitors to share items of interest and to create a sense of virtual community.
Transforming Knowledge Creation: An Action Framework for Library Technology Diversity
This paper articulates an action framework for library technology diversity consisting of five dimensions and grounded in knowledge creation, the academic library’s fundamental vision. The framework focuses on increasing diversity in library technology efforts, driven by the desire for transformation and inclusiveness within and across the dimensions. The dimensions are people; content and pedagogy; embeddedness and the global perspective; leadership; and a fifth dimension that brings it all together.
Training the Next Generation of Open Source Developers: A Case Study of OSU Libraries & Press’ Technology Training Program
The Emerging Technologies & Services department at Oregon State University Libraries & Press has implemented a training program that teaches its technology student employees how and why to engage in open source community development. This article outlines what the department has done to implement this program, discusses the benefits it has seen as a result of these changes, and describes what it viewed as necessary to build and promote a culture of engagement in open communities.
Exposing Library Services with AngularJS
This article provides an introduction to the JavaScript framework AngularJS and specific AngularJS modules for accessing library services. It shows how information such as search suggestions, additional links, and availability can be embedded in any website. The ease of reuse may encourage more libraries to expose their services via standard APIs to allow usage in different contexts.
Indexing Bibliographic Database Content Using MariaDB and Sphinx Search Server
Fast retrieval of digital content has become mandatory for library and archive information systems. Many software applications have emerged to handle the indexing of digital content, from low-level ones such as Apache Lucene to more RESTful and web-services-ready ones such as Apache Solr and ElasticSearch. Solr’s popularity among library software developers makes it the de facto standard software for indexing digital content. For content (full-text content or bibliographic descriptions) already stored inside a relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx Search Server (Sphinx) is a suitable alternative. This article covers an introduction to using Sphinx with MariaDB databases to index database content, as well as some examples of Sphinx API usage.
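As a taste of what querying Sphinx from application code can look like, the sketch below talks to searchd over SphinxQL (the MySQL wire protocol, by default on port 9306) using PyMySQL; the index name and fields assume a bibliographic index has already been defined in sphinx.conf against the MariaDB source.

```python
import pymysql

# Connect to the Sphinx searchd SphinxQL listener, not to MariaDB itself.
conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")

with conn.cursor() as cur:
    # Full-text match against a hypothetical "biblio" index defined in sphinx.conf.
    cur.execute(
        "SELECT id, WEIGHT() AS relevance FROM biblio "
        "WHERE MATCH(%s) ORDER BY relevance DESC LIMIT 10",
        ("open source cataloging",),
    )
    for record_id, relevance in cur.fetchall():
        print(record_id, relevance)

conn.close()
```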
Review of DigitalSignage.com
Digital signage has been used in the commercial sector for decades. As display and networking technologies become more advanced and less expensive, it is surprisingly easy to implement a digital signage program at a minimal cost. In the fall of 2011, the University of Florida (UF), Health Sciences Center Library (HSCL) initiated the use of digital signage inside and outside its Gainesville, Florida facility. This article details UF HSCL’s use and evaluation of DigitalSignage.com signage software to organize and display its digital content.
The Road to Responsive: University of Toronto Libraries’ Journey to a New Library Catalogue Interface
With the recent surge in the mobile device market and an ever-expanding patron base with increasingly divergent levels of technical ability, the University of Toronto Libraries embarked on the development of a new catalogue discovery layer to fit the needs of its diverse users. The result: a mobile-friendly, flexible, and intuitive web application that brings the full power of a faceted library catalogue to users without compromising quality or performance, employing Responsive Web Design principles.
Using a Raspberry Pi as a Versatile and Inexpensive Display Device
This article covers the process by which a library took some unused equipment and added a cheap computing device to produce very inexpensive but effective electronic signage. Hardware and software issues as well as a step-by-step guide through the process are included.