Stefano Cossu, Ruven Pillay, Glen Robson, and Michael D. Smith published the results of a classic measurement-based experiment in Evaluating HTJ2K as a Drop-In Replacement for JPEG2000 with IIIF. They compared the effects of using different image formats (TIFF, JPEG 2000, and High Throughput JPEG 2000, abbreviated HTJ2K) in the context of IIIF requests.
Jennifer Ye Moon-Chung’s article Standardization of Journal Title Information from Interlibrary Loan Data: A Customized Python Code Approach shows that if we want to work with log files containing data entered by users of an interlibrary loan service, we first have to clean the input: normalize values such as ISSN and ISBN numbers, and make use of external services to retrieve standardized title information.
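To give a flavor of what such cleaning involves, here is a minimal sketch (not taken from the article itself) of normalizing user-entered ISSN values in Python: strip stray punctuation, then validate the standard ISSN check digit before emitting the canonical NNNN-NNNN form.

```python
import re

def normalize_issn(raw):
    """Normalize a user-entered ISSN to canonical NNNN-NNNN form.

    Returns None if the value is not a structurally valid ISSN.
    """
    # Keep only digits and the final 'X' check character.
    cleaned = re.sub(r"[^0-9Xx]", "", raw).upper()
    if len(cleaned) != 8:
        return None
    # ISSN check digit: weighted sum of the first 7 digits
    # (weights 8 down to 2); check digit is (11 - sum mod 11) mod 11,
    # with the value 10 written as 'X'.
    total = sum(int(d) * w for d, w in zip(cleaned[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    if cleaned[7] != expected:
        return None
    return f"{cleaned[:4]}-{cleaned[4:]}"

# Messy but valid input is canonicalized; a bad check digit is rejected.
print(normalize_issn(" 0378 5955 "))  # → 0378-5955
print(normalize_issn("0378-5954"))    # → None
```

A cleaned identifier like this can then be sent to an external lookup service to fetch the standardized journal title, as the article describes.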
Several noteworthy analyses have called attention to the limitations of text corpora that lack proper metadata for the individual documents. Erin Wolfe, in ChronoNLP: Exploration and Analysis of Chronological Textual Corpora, presents a web-based application called ChronoNLP to support work with corpora in which the chronological dimension is important. ChronoNLP enables the combination of temporal trend analysis with a variety of natural language processing approaches.
Do you use (or plan to use) FOLIO in the backend? Then Aaron Neslin and Jaime Taylor’s review A Very Small Pond: Discovery Systems That Can Be Used with FOLIO in Academic Libraries is for you. They examine all available commercial and open source front-end options. For this review they talked with library system administrators to learn about the practicalities. A plus: information about each tool’s accessibility support.
Elizabeth Joan Kelly’s Supporting Library Consortia Website Needs: Two Case Studies shows how the central unit of a library consortium can support its partner institutions with customizable central services even when the details of their requirements differ.
The paper by Vlastimil Krejčíř, Alžbeta Strakošová, and Jan Adler, From DSpace to Islandora: Why and How, is the only one from Europe in this issue and, as far as I remember, the first one from the Czech Republic. The authors compare the two popular repository platforms (technology stack, data structure, customization, etc.) and describe the process of migrating several services from one to the other.
Creating a Full Multitenant Back End User Experience in Omeka S with the Teams Module, written by Alexander Dryden and Daniel G. Tracy, highlights the problems of content and rights separation in Omeka S when an institution wants to run a single instance for multiple projects. The authors not only make their usage scenarios clear, but also provide a solution: a new, open source module they wrote themselves.
The Forgotten Disc: Synthesis and Recommendations for Viable VCD Preservation by Andrew Weaver and Ashley Blewer introduces the reader to the preservation of Video CD content: what the format is, where it was popular, how to save its bitstream (including the metadata), and how to view and further manipulate it.
Krista L. Gray’s Breathing Life into Archon: A Case Study in Working with an Unsupported System is a nice account of how a devoted archivist with some programming knowledge can keep a piece of legacy software in sync with changing requirements. Personally, I find it endearing that the author admits her limitations. I hope it will encourage others with similar backgrounds to follow a similar path.
How do we select open source software to back our services? There are some answers to this frequently asked question, but I don’t think the library community (or, considering the research software landscape, the whole academic community) has yet confronted every possible aspect. Jenn Colt’s An introduction to using metrics to assess the health and sustainability of library open source software projects sheds light on four recent metrics that scrutinize the behavior of the development community behind a piece of software.
Finally, let’s party like it’s 2023! Kent Fitch’s Searching for meaning rather than keywords and returning answers rather than links presents his experiments with Large Language Models (yes, including ChatGPT). Instead of taking a bird’s-eye view, the paper discusses the advantages, disadvantages, limitations, and costs of down-to-earth use cases.
Many thanks to the authors for bringing all these to Code4Lib!