It has been a great pleasure to be the coordinating editor of C4LJ issue 46, and to quote Eric’s editorial last issue, “Library technology is a wonderful community in that we are encouraged to share our solutions with each other and extend them.”
This issue features several articles that focus on sharing tools built with freely accessible software such as Google Sheets Add-Ons and APIs, Python, and R. The recipes and ideas shared have inspired me to try a few ‘hacks’ of my own, and prompted me to think more about what becomes possible as the computing tool set grows both more powerful and more accessible to us ‘mere mortals’. Combine that with the increasing conglomeration of bibliographic data in shared cloud ILS systems, and one starts to wonder what ‘library collections data at scale’ might look like. WorldCat provides insight into some slices of some of the world’s collections, and the increasing popularity of Alma has brought many academic library bibliographic records into a shared cloud environment. How long will it be before tools exist that allow those of us at the local level to explore these ‘big data’ sets in ways customized to our local needs?
Over the past year I’ve been watching the growth of shared print programs such as the Rosemont Alliance and the emerging Partnership for Shared Book Collections, and have seen the challenges that a lack of clean data and open tools presents to these organizations. Last year C4LJ ran an article on machine learning which explored some of the possibilities of applying tools such as TensorFlow or Keras (https://journal.code4lib.org/articles/13671) to library data, and this issue features an article on using Natural Language Processing to create clean metadata. I’m curious to know who else has tried these types of tools in real-world projects. If you have, please consider submitting your work to C4LJ.
Here’s looking forward to learning more from you all.