LEADS Blog

Final Post: Julaine Clunis Wrap Up

This has been quite an amazing experience for me, and I am very grateful for the opportunity.

As was noted in my previous posts, my task was to find a method or approach for matching terms to similar terms in the primary vocabularies and for making the terminology more consistent to support analytics.
I explored two methods for term matching. 

Method 1

The first method used OpenRefine and its reconciliation services via the API of the focus vocabulary. A Python script matched terms in the DPLA dataset with terms from LCSH, LCNAF, and AAT. This method is very time-consuming: using only a small sample of the dataset, consisting of about 796,508 terms, took about 5-6 hours and returned only about 16% matching terms (these were exact matches). While this method can certainly be used to find exact matches, testing should be done to ascertain whether the slow speed has to do with the specs and connection of the testing machine. However, the method did not prove useful for fuzzy matches: variant and compound terms were completely ignored unless they matched exactly. Below is an example of the results returned through the reconciliation process.
[Screenshot: sample results returned by the reconciliation service]
The scripts used for reconciliation are open source and freely available via GitHub and may be used and modified to suit the needs of the task at hand.
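Those scripts are the authoritative reference; purely as an illustration of the general pattern, a reconciliation query can also be sent to an OpenRefine-style endpoint directly from Python. This is a minimal sketch, not the project's GitHub scripts, and the endpoint URL is a placeholder rather than the actual LCSH/LCNAF/AAT service used.

```python
# Minimal sketch of calling an OpenRefine-style reconciliation endpoint directly.
# RECON_ENDPOINT is a placeholder; substitute the URL of the service actually used.
import json
import requests

RECON_ENDPOINT = "https://example.org/reconcile"  # placeholder, not a real service

def reconcile(terms, limit=3):
    """Send a batch of terms and return candidate matches for each term."""
    queries = {f"q{i}": {"query": t, "limit": limit} for i, t in enumerate(terms)}
    resp = requests.post(RECON_ENDPOINT, data={"queries": json.dumps(queries)})
    resp.raise_for_status()
    results = resp.json()
    return {
        terms[i]: [
            (c["name"], c["score"], c.get("match", False))
            for c in results[f"q{i}"]["result"]
        ]
        for i in range(len(terms))
    }

if __name__ == "__main__":
    print(reconcile(["kerosene lamps", "finance--law and legislation"]))
```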
Method 2
The second method involved obtaining the data locally and then constructing a workflow inside the Alteryx Data Analytics platform. To obtain the data, Apache Jena was used to convert the N-Triple files from the Library of Congress and the Getty into comma-separated values (CSV) format for easy manipulation. These files could then be pulled into the workflow.
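Jena handled the conversion in the project itself. Purely as an illustration of the same idea in Python, a small rdflib script can read an N-Triples file and write the labels out as CSV; this sketch assumes the labels are plain skos:prefLabel triples and that the file fits in memory (which is part of why Jena is a better fit for the full vocabularies), and the file names are placeholders.

```python
# Illustrative N-Triples to CSV conversion with rdflib (the project used Apache Jena).
# Assumes labels are recorded as skos:prefLabel; file names are placeholders.
import csv
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("lcsh_subjects.nt", format="nt")   # loads the whole file into memory

with open("lcsh_labels.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["uri", "label"])
    for concept, _, label in g.triples((None, SKOS.prefLabel, None)):
        writer.writerow([str(concept), str(label)])
```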
[Screenshot: the matching workflow in Alteryx]
The first step was some data preparation and cleaning: removing leading and trailing spaces, converting all labels to lowercase, and removing extraneous characters. We then added unique identifiers and source labels to the data, to be used later in the process. The data was then joined on the label field to obtain exact matches. This process returned more exact-match results than the previous method with the same data, and even with the full (not sample) dataset the entire process took a little under 5 minutes. The data that did not match was then processed through a fuzzy match tool, where algorithms such as key match, Levenshtein, and Jaro, or combinations of these, can be used to find non-exact matches.
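For readers without Alteryx, a rough pandas equivalent of the preparation and exact-match join steps might look like the sketch below; the file names and the "label" column are assumptions rather than the actual project schema.

```python
# Rough Python/pandas stand-in for the Alteryx preparation and exact-match join.
# File names and column names are illustrative assumptions.
import pandas as pd

def prepare(path, source_name):
    df = pd.read_csv(path)
    # Trim whitespace, lowercase, and strip extraneous characters from labels.
    df["label_clean"] = (
        df["label"]
        .astype(str)
        .str.strip()
        .str.lower()
        .str.replace(r"[^\w\s,-]", "", regex=True)
    )
    # Add a unique identifier and a source tag for use later in the workflow.
    df["row_id"] = [f"{source_name}-{i}" for i in range(len(df))]
    df["source"] = source_name
    return df

dpla = prepare("dpla_subjects.csv", "DPLA")
lcsh = prepare("lcsh_labels.csv", "LCSH")

# Inner join on the cleaned label field gives the exact matches;
# everything left over goes on to fuzzy matching.
exact = dpla.merge(lcsh, on="label_clean", suffixes=("_dpla", "_lcsh"))
unmatched = dpla[~dpla["label_clean"].isin(lcsh["label_clean"])]
```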
Each algorithm returns different results, and more study needs to be given to which method, or which combination of methods, yields the best and most consistent results.
What is true of all of the algorithms, though, is that a match score lower than 85% seems to result in matches that are not quite correct, with correct matches interspersed. Even high match scores from the character Levenshtein algorithm display this problem, with LCSH compound terms in particular. For example, [finance–law and legislation] is shown as a match with [finance–law and legislation–peru]. While these are similar, should they be considered any kind of match for the purposes of this exercise? If so, how should the match be described?
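To make the threshold behaviour concrete, here is a small, self-contained sketch of a normalized character Levenshtein score applied to the compound-term example above. It is illustrative only and does not reproduce the Alteryx fuzzy match tool or its Jaro variants.

```python
# Threshold-based fuzzy matching with a normalized Levenshtein score.
# The 0.85 cutoff mirrors the ~85% figure discussed above.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution
            ))
        previous = current
    return previous[-1]

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

THRESHOLD = 0.85
pair = ("finance--law and legislation", "finance--law and legislation--peru")
score = similarity(*pair)
# About 0.82: the strings are close on characters even though the extra
# subdivision changes the meaning.
print(f"{pair}: {score:.2f}, above threshold: {score >= THRESHOLD}")
```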
[Screenshots: fuzzy match results using the character Levenshtein algorithm]

Still, despite these problems, trying various algorithms and varying the match thresholds returns many more matches than the exact-match method alone. This method also seems useful for matching terms that use the LCSH compound-term style with close matches in AAT. Below are some examples of the results.
[Screenshot: fuzzy match results, character-level best of Levenshtein & Jaro]
[Screenshot: fuzzy match results, word-level best of Levenshtein & Jaro]
In the second image, we can look at the kerosene lamps example. In the DPLA data, the term seems to have been entered using the LCSH format as [lamp–kerosene], but the algorithm shows it as a close match with the term [lamp, kerosene] in AAT.
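One possible pre-processing step, shown here purely as an illustration and not as part of the project workflow, would be to normalize LCSH-style subdivision separators before matching, so that a heading like [lamp–kerosene] lines up with the AAT form.

```python
# Illustrative only: normalize LCSH-style subdivision separators so that a
# heading like "lamp--kerosene" can be compared with the AAT form "lamp, kerosene".
import re

def normalize_heading(term: str) -> str:
    # Split on the double-hyphen (or en dash) subdivision separator.
    parts = [p.strip() for p in re.split(r"\s*(?:--|–)\s*", term.lower()) if p.strip()]
    return ", ".join(parts)

print(normalize_heading("lamp--kerosene"))                # lamp, kerosene
print(normalize_heading("Finance--Law and legislation"))  # finance, law and legislation
```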
The results from these algorithms need to be studied and refined further so that the best results can be obtained consistently. I hope to look at these results in more depth for a paper or conference at some point and come up with a recommended, usable workflow.
This is where I was at the end of the ten weeks, and I am hoping to find time to look more deeply at this problem. I welcome any comments or thoughts, and again want to say how grateful I am for the opportunity to work on this project.

Julaine Clunis 
LEADS Blog

Working with LCSH

One of my goals after working with this new tool is to obtain the entire LCSH dataset and try to do the matching on the local machine, in part because the previous method, while effective, may not scale well. We want to test whether we can get better results by downloading the data and checking it ourselves.
Since the formats in which the data is made available are not ideal for manipulation, my next task will be to try to convert the data from N-Triples (nt) format to CSV using Apache Jena. The previous fellow made some notes about trying this method, so I will be reading his instructions and seeing if I can replicate the process, having never used Jena before.
Once I obtain the data in a format that I can use, I will add it to my current workflow and see what the results look like. Hopefully the results will be useful and something that can be scaled up.
— Julaine Clunis
LEADS Blog

New Data Science Tool

For the project, we are also interested in matching against AAT, so we have written a SPARQL query to get subject terms from the Getty AAT and downloaded the results in JSON format.
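As a rough sketch only, pulling AAT preferred labels into JSON from the Getty SPARQL endpoint with Python's SPARQLWrapper might look something like the following. The query shown uses generic SKOS predicates and may differ from the query we actually wrote; the output file name is a placeholder.

```python
# Hedged sketch: fetch AAT labels from the Getty SPARQL endpoint and save as JSON.
import json
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://vocab.getty.edu/sparql")
endpoint.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX aat:  <http://vocab.getty.edu/aat/>
    SELECT ?concept ?label WHERE {
      ?concept skos:inScheme aat: ;
               skos:prefLabel ?label .
      FILTER (lang(?label) = "en")
    }
    LIMIT 1000
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

with open("aat_terms.json", "w", encoding="utf-8") as f:
    json.dump(results, f)
```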
With data in these different formats, I needed to find a way to work with both and evaluate the data in one type of file against the other. In the search for answers I came across a data analytics tool that can be used for data preparation, blending, advanced and predictive analytics, and other data science tasks, and that can take inputs from various file formats and work with them directly.
A unique feature of the tool is the ability to build a workflow that can be exported and shared, and that other members of a team can use as is or, if need be, probably turn into code fairly easily.
I have managed to join the JSON and CSV files and check for matches, and was able to identify exact matches after performing some transformations. The tool also has a fuzzy match function which I am still trying to figure out and get working in an effective workflow that can be reproduced. I suspect that will take up quite a bit of my time.

Julaine Clunis
LEADS Blog

Clustering

One of the things we have noticed about the dataset is that, beyond duplicate terms, there are subject terms that are spelled or entered differently by the contributing institutions but refer to the same thing. We have been thinking about using clustering applications to look at the data and see what kinds of matches are returned.
It was necessary to first do some reading on what the different clustering methods do and how they might work for our data. We ended up trying clustering with several key collision methods (fingerprint, n-gram fingerprint) as well as nearest-neighbor methods using Levenshtein distance. They return different results, and we are still reviewing those results before performing any merges. It is possible for terms to look the same or seem similar but in fact be different, so it is not as simple as merging everything that matches.
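As an illustration of the key collision idea, here is a minimal fingerprint-style clustering function, similar in spirit to the fingerprint method in OpenRefine but not the exact implementation we used. Terms that normalize to the same key are grouped as candidate duplicates for human review rather than merged automatically.

```python
# Minimal fingerprint key-collision clustering sketch.
import re
import unicodedata
from collections import defaultdict

def fingerprint(term: str) -> str:
    # Strip accents, lowercase, remove punctuation, then sort unique tokens.
    text = unicodedata.normalize("NFKD", term).encode("ascii", "ignore").decode()
    text = re.sub(r"[^\w\s]", " ", text.lower())
    tokens = sorted(set(text.split()))
    return " ".join(tokens)

def cluster(terms):
    groups = defaultdict(list)
    for t in terms:
        groups[fingerprint(t)].append(t)
    # Only keys shared by more than one distinct value are interesting clusters.
    return [v for v in groups.values() if len(set(v)) > 1]

sample = ["Kerosene lamps", "lamps, kerosene", "Lamps -- Kerosene", "Civil War"]
for group in cluster(sample):
    print(group)   # the three lamp variants collapse to one cluster
```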
One important question to answer is how accurate the clusters are and whether we can trust the results enough to go ahead and merge automatically. My feeling is that a lot of human oversight is needed to evaluate the clusters.
Another thing we want to test is how much faster the reconciliation process would be if we accepted and merged the results from the clustering, and whether it is worth the time to do so; that is, if we cluster and then do string matching, is there an improvement in the results, or are they basically the same?

Julaine Clunis
LEADS Blog

Julaine Clunis, Week 1: Getting Started

Hi everyone!

This is Julaine, and my assignment is with the Digital Public Library of America (DPLA). The DPLA has more than 3 million unique subject headings, only a portion of which come from controlled vocabularies, which can lead to various issues when records use slight term variations or synonyms for the same concept.
The aim of my project is to continue working on the development and testing of an effective method for analyzing record content and matching that content, including keywords, with relevant controlled terms from a defined list, in an effort to create a consistent vocabulary that aids users, can be reliably re-ingested, and consistently supports analytics.
I have spent the last couple of days reading through a ton of documentation about the work already completed on this project, familiarizing myself with the DPLA Metadata Application Profile, and getting set up with the software and data recommended for use. I have been exploring Apache Spark for the first time and am slowly finding my way around it (downloading, installing, and setting up the environment on my machine and reviewing tutorials), so I haven't really done much yet in terms of coming up with solutions to this problem, as I am just getting to know the tools and the data.
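Just to give a sense of what that setup work involves, a first PySpark smoke test might look something like the sketch below; the file path and field name are placeholders, since I have not yet worked with the actual DPLA export.

```python
# Minimal PySpark smoke test: start a local session and peek at a data file.
# "dpla_sample.json" and the "subject" field are placeholders, not the real
# DPLA export layout.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dpla-subject-exploration")
    .master("local[*]")          # run locally while getting set up
    .getOrCreate()
)

df = spark.read.json("dpla_sample.json")
df.printSchema()                 # inspect the structure of the records
df.select("subject").show(10, truncate=False)

spark.stop()
```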
My mentors have been incredibly supportive and helpful and make themselves available to me in several ways. I expect I will learn a lot from working with them and am feeling really thankful for that. We use various tools such as Slack, Zoom and email to stay in touch so I am feeling positive about having access to direction or support if and when I need it.
Well, that is about all I have to report at this time.
I wish everyone the best of luck going forward with their projects.

Julaine Clunis