
Student Learning Outcome 6
There are countless systems, tools, and open source services that support some aspect of libraries and information institutions. From library management systems (ILSs) to OPACs, discovery layers, archives management systems, and so much more, technology is vital in libraries for connecting users with information. As my interest in the technical services side of librarianship grows, I have come to understand that there is no end to the technology libraries use, both in public services and behind the scenes in cataloging, data, processing, and collections management. Throughout this program I’ve learned to read MARC records, organize metadata and read tags, create my own collections, and apply all of this to my work in the library.
From a practical standpoint, it made sense for me to take as many cataloging, metadata, and adjacent courses as I could, because analytics is my mind’s default processor. Structured data is what I’m about and what I want to pursue. I strive for consistency and standards, which is why I’m interested in linked data and how it can help the technical services field. As I have learned in my classes, it all has to make sense to both the creator and the end user. Open access is another passion of mine, and making items and collections searchable is part of that movement.
Digital is where most, if not all, collections are going or are planned to go. At Laupus Library, we have one floor of stacks because most health sciences resources are electronic databases, journals, and apps. It makes sense because the healthcare field never slows down and is always adapting to advances in technology. Digital also fosters research through access: digitization projects are making collections readily available to users without the demand of in-person visitation. That’s why metadata and standards are so important when creating records for collections. We want the items to be searchable and the records to be easy to understand.
While taking my Digital Libraries course, I realized the Metadata class I took beforehand had served as a good foundation, giving me an introduction to standards, elements, and fields. The assignment for my first digital collection was to use CONTENTdm as the platform to create it, and we could select the items to add. We were asked to pick different types of items, which helped in choosing how to describe each analog object in the format field. Simple Dublin Core was the metadata standard we used, and LCSH supplied the subject terms that made our items accessible. The images of the items were uploaded to CONTENTdm to complete the records and form our digital collection. Our professor showed us features like editing the landing page to make our collection presentable. CONTENTdm proved to be manageable and user friendly, which made my experience using it fun.
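The record-building steps above, choosing a Simple Dublin Core element for each piece of information and an LCSH-style subject term, can be sketched in Python. This is only an illustration: the item and its values are hypothetical, and real CONTENTdm work happens through its own interface rather than hand-built XML.

```python
import xml.etree.ElementTree as ET

# Namespace for the Simple Dublin Core 15-element set
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def dc_record(fields):
    """Build one item record from a mapping of DC element -> value."""
    record = ET.Element("record")
    for element, value in fields.items():
        ET.SubElement(record, f"{{{DC}}}{element}").text = value
    return record

# Hypothetical analog item described for a digital collection
record = dc_record({
    "title": "Hand-stitched quilt, ca. 1940",
    "format": "Textile",                  # describes the analog object
    "subject": "Quilts--North Carolina",  # an LCSH-style heading
    "date": "1940",
})
print(ET.tostring(record, encoding="unicode"))
```

The format field carries the analog description while the image itself is uploaded separately, which mirrors how the platform keeps descriptive metadata and the digital surrogate side by side.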
Metadata and metadata harvesting haven’t reached the point of perfection yet. There are still things that need to be cleaned up after a migration, like crosswalk mismatches between schemas and application profiles, which leads to manual intervention by metadata specialists and librarians. It might seem that in this modern time, with so much technology available, the human part could be removed from data and metadata input. But metadata is still finding its footing in the information profession, expanding from its conception in the 1960s, and the processes by which we create metadata are still being refined.
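The crosswalk-mismatch problem can be made concrete with a small sketch: when a source field has no target in the crosswalk, the record can’t be migrated automatically and gets flagged for a person to review. The field names and mappings here are illustrative, not drawn from any real application profile.

```python
# Illustrative crosswalk between two schemas; real crosswalks are
# rarely this simple and rarely cover every field one-to-one.
CROSSWALK = {
    "dc:title": "mods:titleInfo/mods:title",
    "dc:creator": "mods:name",
    "dc:date": "mods:originInfo/mods:dateIssued",
}

def migrate(record):
    """Map source fields through the crosswalk; collect mismatches."""
    mapped, needs_review = {}, []
    for field, value in record.items():
        if field in CROSSWALK:
            mapped[CROSSWALK[field]] = value
        else:
            needs_review.append(field)  # left for a metadata specialist
    return mapped, needs_review

mapped, needs_review = migrate({
    "dc:title": "Greenville street scene",
    "dc:creator": "Unknown photographer",
    "dc:provenance": "Gift of the Smith family",  # no target field
})
print(needs_review)  # prints ['dc:provenance']
```

The `needs_review` list is exactly the "human part" that can’t yet be removed: someone has to decide where an unmapped field belongs, or whether it belongs at all.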
Just when I thought my Metadata class would be the only time I would work with or convert anything to XML, I was interested to learn that metadata was carving out a place in the Digital Libraries course. In this assignment we walked some actual library data through a migration using programs like OpenRefine, Oxygen, and Islandora. Guest speakers from UNCG’s own Jackson Library came to our class to explain what the transition to Islandora has been like for them, and they were candid about how features from the old system translate into Islandora. We received the data from the Greenville Public Library Collection within Jackson Library. The idea was to clean and edit the data in the spreadsheet so it would correlate and transfer into OpenRefine, where the XML files were created from the template. Having used Oxygen in my Metadata class, I knew how important the next step is: cleaning and validating the XML, especially when examining the data in the fields against the MODS standard. The validated XML files were then ingested into Islandora, where everything, including the data edited back in the spreadsheet phase, came out looking correct. A pretty decent metadata migration, step by step through the different programs.
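The shape of that migration, cleaning tabular data, mapping each row into MODS XML, then checking the XML before ingest, can be sketched in a few lines of Python. The spreadsheet columns and values below are invented stand-ins, and the well-formedness check here is a much weaker stand-in for the schema validation that a tool like Oxygen performs.

```python
import csv
import io
import xml.etree.ElementTree as ET

MODS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS)

# Stand-in for the exported spreadsheet; the columns are hypothetical
SHEET = """title,date
greenville  public library exterior ,1958
"""

def clean(value):
    """Spreadsheet-phase cleanup: collapse stray whitespace."""
    return " ".join(value.split())

def to_mods(row):
    """Map one cleaned row into a minimal MODS record."""
    mods = ET.Element(f"{{{MODS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS}}}title").text = clean(row["title"])
    origin = ET.SubElement(mods, f"{{{MODS}}}originInfo")
    ET.SubElement(origin, f"{{{MODS}}}dateIssued").text = clean(row["date"])
    return mods

for row in csv.DictReader(io.StringIO(SHEET)):
    xml = ET.tostring(to_mods(row), encoding="unicode")
    ET.fromstring(xml)  # well-formedness check; real schema validation was done in Oxygen
    print(xml)
```

Each stage corresponds to one tool in the actual workflow: the cleanup to the spreadsheet phase, the template mapping to OpenRefine, the check to Oxygen, and the resulting files are what a platform like Islandora would ingest.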
Explain Artifact
This Metadata course really let me test things for myself to get better acquainted with processes, systems, and technology. There are plenty of different open source programs and repository platforms that use certain metadata standards, which makes harvesting between two systems a bit tedious. Some metadata standards don’t have a perfect crosswalk, so it takes some discretion to put the data in a corresponding field that makes sense. With so many options for schemas, standardization really plays a role when it comes to interoperability. As long as metadata is collected and input differently by different librarians and institutions, the need for universal processes and linked data becomes more apparent.
Through this course, I had the opportunity to develop my own application profile, as a metadata librarian would do in a library. The important aspects of this assignment were to understand the lifecycle of metadata, identify any issues with my application profile, and suggest solutions for improvement. The project started as a group assignment in which my partner and I developed an application profile that would crosswalk metadata from one schema, Dublin Core, to our profile, which drew on several different schemas like MODS-Lite, CDWA-Lite, and VRA Core. I used this application profile in the final solo part of the assignment to assess how well the crosswalk worked. As the first step, I used OAI-PMH to harvest the metadata because it works best for interoperability. Then I input the schema we developed into Oxygen with the fields we chose. This created the XML file, which was uploaded into Omeka to showcase the end result as a record in a digital library platform. I reflected in this assignment that, overall, the application profile worked, but my partner and I had missed one of the fields and that data didn’t transfer. Regardless, it was still a success through each stage of the different technology I used.
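The harvesting step above rests on the OAI-PMH protocol, which is just an HTTP request with a verb and a metadata prefix, answered with XML. A minimal sketch, assuming a hypothetical repository URL and using a canned response slice instead of a live request:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Step 1: build the harvest request (the endpoint URL is hypothetical)
BASE = "https://repository.example.edu/oai"
request_url = BASE + "?" + urlencode(
    {"verb": "ListRecords", "metadataPrefix": "oai_dc"})
print(request_url)

# Step 2: pull DC fields out of a (canned) slice of an OAI-PMH response
RESPONSE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Sample harvested item</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords>
</OAI-PMH>"""

NS = {"dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(RESPONSE)
titles = [t.text for t in root.findall(".//dc:title", NS)]
print(titles)  # prints ['Sample harvested item']
```

Because every OAI-PMH repository answers the same verbs in the same XML envelope, the harvested Dublin Core can then be crosswalked into a local application profile regardless of which system produced it, which is why the protocol works so well for interoperability.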
Patti Wilson
Writer & Librarian