Sam Grabus exploring the canals in Utrecht
News & Events

MRC’s Sam Grabus presents at Digital Humanities 2019, in Utrecht

MRC PhD student Sam Grabus and Temple University’s Peter Logan presented their paper at Digital Humanities 2019 in Utrecht, the Netherlands, on Thursday, July 11th.

Sam Grabus presenting at DH 2019 in Utrecht, demonstrating how the HIVE tool maps naturally extracted keywords to controlled vocabulary terms.

The presentation, entitled “Knowledge Representation: Old, New, and Automated Indexing,” shared comparative topic-relevance results from automatically indexing 19th-century Encyclopedia Britannica entries with two controlled vocabularies: a historical knowledge organization system developed by Ephraim Chambers and the contemporary Library of Congress Subject Headings.

LEADS Blog

Jamillah Gabriel: Moments in Time

The Tule Lake exit phone book (FAR) data represents the majority of the information available about the many Japanese American citizens who passed through the internment camp system. In most cases, this data, in conjunction with the limited data from the entry file, represents all of the information that is available. While there is not much here that allows us to paint a complete picture of their lives, we are at least able to conceive of some select moments in time, which I attempt to do in the case of Mrs. Kashi.


Above: FAR exit file

 


Above: Exit record for Mrs. Mitsuye Kashi

Mrs. Mitsuye Kashi was born on April 24, 1898 in the southern division of Honshu, Japan. Eighteen years later, she would arrive in the US, and later become an American citizen, marry Jutaro Kashi, and have a son, Tomio. Before internment, she and her family lived in Sacramento, California. But in 1942, the family was moved to a local assembly center on Walerga Road and, soon after, assigned to the Sacramento internment camp. At some point, the family was transferred to the Tule Lake internment camp, where, sadly, Mrs. Kashi would spend her last days. On June 4, 1943, less than a year after arriving at Tule Lake, Mrs. Kashi committed suicide. She was 45 years old. We have no record of why she committed suicide, but one can assume that life in the internment camp was unbearable for her.

In September, just three months later, Mrs. Kashi’s son was sent away to Central Utah Project, or the Topaz camp, which was a segregation center for dissidents. There are no records of what happened to Tomio afterwards. Her husband was released on June 28, 1944 and upon final departure from the camp, became a resident of Santa Fe, New Mexico.

LEADS Blog

Rongqian Ma: Week 3 – Visualizing the Date Information

During week 3 I focused on working with the date information in the manuscript data. As with the geographical data, working with date information means working with variants. The dates are given as descriptive text (e.g., “early 15th century”), and the style of description varies across the collection. Most of the time the dates appear as ranges (e.g., 1425-1450), and there is a lot of overlap between the ranges. Ambiguity exists across the dataset, mostly because the date information was collected and pieced together from the texts of the manuscripts. Additionally, some manuscripts appear to have been produced and refined over different time periods – for example, with texts created during the 14th and 15th centuries and illustrations/decorations added later. So my first task was to regroup the date information and make it clearer for visualization. This graph shows how I color-coded the dataset and grouped the data into five general categories: before the 15th century, 1400-1450 (first half of the 15th century), 1450-1500 (second half of the 15th century), the 16th century, and cross-temporal/multiple periods.
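To make the regrouping concrete, here is a minimal Python sketch of the kind of rule-based binning this involves; the parsing rules and the sample strings are my own illustrative assumptions, not the project’s actual code.

```python
import re

def group_period(date_text):
    """Map a free-text date description to one of five general period labels."""
    text = date_text.lower()
    years = [int(y) for y in re.findall(r"\b1[0-9]{3}\b", text)]
    # Entries spanning more than one broad period (e.g. a 14th-century text
    # with decoration added in the late 15th century) get their own group.
    if years and max(years) - min(years) >= 100:
        return "cross-temporal/multiple periods"
    if "14th century" in text or (years and max(years) < 1400):
        return "before the 15th century"
    if "16th century" in text or (years and min(years) >= 1500):
        return "16th century"
    if "15th century" in text:
        return "1400-1450" if ("early" in text or "first half" in text) else "1450-1500"
    if years:
        return "1400-1450" if max(years) <= 1450 else "1450-1500"
    return "unknown"

for d in ["early 15th century", "1425-1450", "ca. 1480", "1350, decoration added 1490"]:
    print(d, "->", group_period(d))
```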


Based on the groupings, I created multiple line graphs, histograms, and bar charts to visualize the temporal distribution of book of hours production from different angles. The still visualizations helped me find some interesting insights – for example, the production of books of hours increased from the 1450s onward, roughly the same period as the invention of the printing press.

 

But one problem with the still graphs is that they can’t effectively combine the date information with the other information in the dataset, in order to explore the relationships between various aspects of the manuscript data and to display the “ecosystem” of book of hours production and circulation in medieval Europe. Some questions that might be answered by interactive graphs include: Was the book of hours especially popular in certain countries or regions during particular periods? And did the decorations or styles of the genre change over time? To explore more interactive approaches, I am also experimenting with TimelineJS to create a chronological gallery for the book of hours collection. TimelineJS is a storytelling tool that allows me to integrate time information, images of sample books of hours, and descriptive texts into the presentation. I am currently discussing this idea with my mentor, and I look forward to sharing more about it in the next few weeks’ blogs.
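As a side note on the TimelineJS idea: the tool can be fed a JSON document with a list of events, so the gallery could be generated directly from the cleaned data. Below is a minimal sketch of building that structure in Python; the sample records, image URLs, and output filename are hypothetical.

```python
import json

manuscripts = [  # hypothetical sample records
    {"title": "Book of Hours, Use of Rome", "year": 1455,
     "image": "https://example.org/ms001.jpg",
     "note": "Text ca. 1455; decoration added later."},
    {"title": "Book of Hours, Use of Sarum", "year": 1505,
     "image": "https://example.org/ms002.jpg",
     "note": "Early 16th-century production."},
]

timeline = {
    "title": {"text": {"headline": "Book of Hours Collection",
                       "text": "A chronological gallery"}},
    "events": [
        {
            "start_date": {"year": str(m["year"])},
            "text": {"headline": m["title"], "text": m["note"]},
            "media": {"url": m["image"], "caption": m["title"]},
        }
        for m in manuscripts
    ],
}

with open("timeline.json", "w") as f:
    json.dump(timeline, f, indent=2)
```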

 

Best,

Rongqian

 

LEADS Blog

LEADS Blog #3: Set up a `virtualenv` for yamz!

 

Set up a `virtualenv` for yamz!

Hanlin Zhang

July 9th, 2019

 

This week I solved a Google OAuth login problem caused by incompatible Python environments. There are typically multiple versions of Python installed on the same machine – for example, I have Python 2.7.10 (shipped with macOS), Python 2.7.16 (Anaconda), and Python 3.7.1 (Anaconda) on my laptop – which can create compatibility issues. In our case, we know yamz requires Python 2, but the real problem is that there are different versions of Python 2, and unexpected errors may occur if the program is installed under the “wrong” Python setup. The good news is that Bridget is able to run yamz successfully with the following configuration:

 

Python 2.7.10 on macOS Mojave 10.14.5

 

However, I was unable to reproduce the same result at first, since the program kept throwing an error message. I did the initial debugging with help from Bridget, but I was still unable to solve the problem until John Kunze, our LEADS mentor, shed light on isolating the Python environment with `virtualenv`. John suspected the error was caused by running yamz on an Anaconda distribution of Python:

 

Python 2.7.16 (Anaconda) on macOS Mojave 10.14.5

 

which keeps conflicting with the system’s default one. However, this can be solved with a Python package called `virtualenv`. According to its documentation (see https://virtualenv.pypa.io/en/latest/), `virtualenv` is able to “create isolated Python environments”: it takes a specified Python interpreter on my laptop and builds a self-contained virtual environment in which to run the program, much like running a virtual machine just for Python on my laptop.
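For anyone who wants to reproduce this isolation step, here is a minimal sketch of the terminal workflow; the environment name, interpreter path, and requirements file are illustrative assumptions rather than instructions from the yamz readme.

```
$ pip install virtualenv
$ virtualenv -p /usr/bin/python2.7 yamz-env   # point at the system Python 2.7
$ source yamz-env/bin/activate                # everything installed now stays inside yamz-env
(yamz-env) $ python --version                 # should report the interpreter chosen above
(yamz-env) $ pip install -r requirements.txt  # install yamz dependencies in isolation
```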

Luckily, `virtualenv` solved the problem and now I’m able to log in! Furthermore, I can now isolate the Python environment, which allows me to investigate further how the Python version affects installing yamz. I’m going to try installing yamz on several different Python versions. Since Anaconda distributions are so common right now, I think it is worth testing Anaconda Python and putting the results in the new readme file. I’m curious whether the login problem was caused by Anaconda Python itself or by a conflict between my laptop’s default Python and the Anaconda distribution I installed later.

 

To learn more about `virtualenv`:

  • Virtualenv and why you should use virtual environments

https://www.youtube.com/watch?v=N5vscPTWKOk&t=139s

  • Working Effectively with Python Virtual Environments (Virtualenv)

https://www.youtube.com/watch?v=8KWVEc6vFgA&t=53s

 

LEADS Blog

Week 3: Metadata – data about data

 

LEADS site: Repository Analytics & Metrics Portal

 

In the third week, I worked on downloading metadata from the institutional repositories. We had already prepared a script to download the metadata based on the RAMP dataset we want to analyze. However, because each period always brings new requests for different documents, we must download the metadata for every unique requested URL in order to have a complete set.
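As an illustration of that step (this is my own sketch, not our actual script), the metadata embedded in an IR item page can be pulled for each unique requested URL; many repositories expose Dublin Core or Highwire “citation_*” tags in the page head, though the exact tag names and the sample URL below are assumptions.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaTagCollector(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from an HTML page."""
    def __init__(self):
        super().__init__()
        self.records = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name, content = attrs.get("name"), attrs.get("content")
            if name and content and (name.startswith("DC.") or name.startswith("citation_")):
                self.records.setdefault(name, []).append(content)

def harvest(url):
    """Fetch one item landing page and return its embedded metadata fields."""
    parser = MetaTagCollector()
    with urlopen(url) as resp:
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    return parser.records

# Hypothetical unique URLs taken from the RAMP request data:
for url in ["https://scholarworks.example.edu/handle/1/2345"]:
    print(url, harvest(url))
```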
 
Besides gathering the metadata, I also did some analysis of the metadata and the institutional repositories. From my observations, I found that there are some metadata terms that institutional repositories commonly use, and also some unique terms that are used by only a few IRs.
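A quick sketch of that comparison, using hypothetical per-repository term sets: which terms does every IR share, and which appear in only one?

```python
terms_by_ir = {  # hypothetical metadata terms per institutional repository
    "IR_A": {"dc.title", "dc.creator", "dc.date", "dc.subject"},
    "IR_B": {"dc.title", "dc.creator", "dc.date", "dcterms.abstract"},
    "IR_C": {"dc.title", "dc.creator", "dc.rights"},
}
common = set.intersection(*terms_by_ir.values())
unique = {ir: terms - set.union(*(t for k, t in terms_by_ir.items() if k != ir))
          for ir, terms in terms_by_ir.items()}
print("shared by all IRs:", common)
print("unique per IR:", unique)
```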
 
At the weekly meeting, we gathered some ideas we want to focus on, and in the upcoming weeks we will tackle these research questions under the supervision of Prof. Arlitsch and Jonathan.
 
Nikolaus Parulian
LEADS Blog

Jamillah Gabriel: Deep Diving into the Data

This past week has been spent delving into the datasets available to me in order to get a better sense of the lives of internees of the Japanese American internment camps, from entry to exit. What this means is that I’m looking at the entry data, exit data, and incident cards to glean a better understanding of life during this time. Some of the data that help me in this endeavor are details about the first camp where a person entered the system, the assembly center they were taken to before getting to the camp, the date they first arrived at camp, other camps they may have been transferred to or from, the camp they last stayed at before exiting, their final departure date, the destination after their departure from the camp, birthdate, birthplace, and where they lived before internment (among many other details). The incident cards represent the recordkeeping system that documented various “offenses” that took place within the camps; they were typically only written up for people who violated camp rules or, in some cases, to keep records of deaths within the camp. Not every internee has incident cards, so there are silences and erasures within these archival records that might never be uncovered. But what one can do is gather up all of these details and possibly try to glean from them a narrative about the life of the internee imprisoned in these camps.

This is what I’m currently working on and I hope to share a little bit about select people in coming weeks. One of the most important things to consider is the sensitivity of these records as not all data can be publicly divulged at this point. NARA, the current steward of the records, has asked that we adhere to the restriction of 75 years when disclosing data. In other words, any records taking place after July 8, 1944 cannot be revealed. This is something I’ll have to keep in mind going forward in terms of how to best present the data in ways that both highlight and privilege the narratives and stories of the people unjustly imprisoned in these camps.

 

LEADS Blog

Week 3: Bridging NHM collections to biodiversity occurrence datasets – an example with land snails

To recap what I was trying to do: I wanted to find a species in Taiwan that also happened to be mentioned in the Proceedings of the Academy of Natural Sciences.
We chose Taiwan as our geographic point of interest because it has been historically complex in terms of sovereignty and will probably be an interesting example of shifting geopolitical realities.
This whole week I have been fleshing out the use case around the example we gathered from the Biodiversity Heritage Library — the land snail species “Pupinella swinhoei sec. H. Adams 1866”.
 
The idea is to bring the natural history museum (NHM) literature closer to real-life biodiversity occurrence datasets. I then gathered an occurrence dataset from GBIF, searching on the scientific name “Pupinella swinhoei”. The aggregated GBIF dataset contains 50 occurrence records across 18 institutions (18 datasets), ranging from the year 1700 to now.
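For readers who want to see how such a dataset can be pulled, here is a minimal sketch against GBIF’s public occurrence search API; the endpoint and field names are standard GBIF/Darwin Core ones, but the record handling is my own illustration rather than the project’s actual workflow.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({"scientificName": "Pupinella swinhoei", "limit": 50})
with urlopen("https://api.gbif.org/v1/occurrence/search?" + params) as resp:
    results = json.load(resp)["results"]

for rec in results:
    # countryCode is mostly "TW", but the sovereignty at the time of
    # collection still has to be reconciled separately.
    print(rec.get("year"), rec.get("countryCode"),
          rec.get("institutionCode"), rec.get("datasetKey"))
```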
 

(Different colors indicate different data sources.)
Though the ‘countryCode’ field mostly indicates that the records are from TW (Taiwan), that may not reflect the sovereignty of the region in the relevant time period. To merge these datasets with the sovereignty at the time, I first examined two of the 18 data sources: the MCZ dataset versus the NSSM dataset.
In 1700, Taiwan was a county within Qing Dynasty China.
And in the 1930s, Taiwan was a colonized region of Japan.
I have some preliminary results from merging these two datasets’ sovereignty fields using a logic-based taxonomy alignment approach. However, since I am preparing a conference submission based on this use case, I don’t want to jinx anything! (Fingers crossed.)
If I am allowed to share more about the paper, I promise to discuss it more in the next blog post!
Yi-Yun Cheng
PhD student, Research Assistant
School of Information Sciences, University of Illinois at Urbana-Champaign
Twitter: @yiyunjessica

 

LEADS Blog

Minh Pham, Week 2: Mapping data out with aesthetics and readability

 

In week 2, I focused on refining the visualizations I did in week 1 to better visualize and understand one of the three large datasets (so far) in the project. Thanks to the visualizations, I now have some sense of the information-seeking behaviors of users who use institutional repositories (IRs) to search for and download information, including the devices used, device differences across geolocations, time of search, and factors affecting their clicks and clickthroughs.

 

To improve the aesthetics of the visualizations, I paid attention to color contrast, graphic resolution, color ramps, color transparency, shapes, and the scales of the x and y axes. To enhance readability, I tried not to present too much information in a single visual, following Miller’s law of “the magical number seven, plus or minus two,” so that people will not feel overwhelmed when looking at the visual and processing the information.
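As a generic illustration of those adjustments (not the project’s actual plotting code), a matplotlib sketch might tune contrast, transparency, resolution, and axis scale like this; the data here are simulated.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
downloads = rng.lognormal(mean=2.0, sigma=1.0, size=500)  # simulated download counts

fig, ax = plt.subplots(figsize=(6, 4), dpi=150)           # higher-resolution output
ax.hist(downloads, bins=40, color="#2c7fb8", alpha=0.7,   # high-contrast color, mild transparency
        edgecolor="white")
ax.set_xscale("log")                                      # rescale a heavily skewed axis
ax.set_xlabel("Downloads per item (log scale)")
ax.set_ylabel("Number of items")
ax.set_title("Simulated IR download distribution")
fig.tight_layout()
fig.savefig("downloads_hist.png")
```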

 

Besides visualizing the information that struck me as interesting in the first dataset, I also tried to wrangle the other datasets. Nikolaus managed to harvest the metadata relevant to each URL, which means we can look into the metadata content related to each search. However, it also creates a challenge for me: how to turn unstructured string data into structured data. This is not something I do often, but I am excited to brush up my skills in working with text data in the coming weeks.

 

Minh Pham



LEADS Blog

LEADS Blog #2: Deploying yamz on my machine!

 

Deploying yamz on my machine!

Hanlin Zhang

July 3rd, 2019

 

Last week was a tough week for me. I have been working closely with Bridget and John to set up a local yamz environment on my machine. Both John and Bridget are super helpful and very experienced in software development and problem-solving. I asked John a question I have had since I started reading the readme document: what does the ‘xxx’ mark in the readme file stand for? I noticed a lot of ‘xxx’ marks in the readme of yamz.net (https://github.com/vphill/yamz); for instance, there are a couple of lines that start with ‘xxx’, such as:

 

xxx do this in a separate “local_deploy” dir?

xxx user = reader?

 

I was really interested in what those lines mean. Based on my experience with yamz, most of the lines starting with ‘xxx’ are pretty useful and definitely worth reading first. John said that in the world of software development, the ‘xxx’ mark stands for a problem waiting to be solved, or for a comment so critical that it should be attended to immediately. It seems my intuition was right, but the mark is also confusing to people without development experience. We are going to rewrite the readme file this summer to make it more reader-friendly. Meanwhile, I’m still debugging an error I’ve encountered while developing:

 

flask_oauth.OAuthException

 

OAuth

 

According to Margaret Rouse (see the link below), OAuth “allows an end user’s account information to be used by third-party services, such as Facebook, without exposing the user’s password”. The central idea of OAuth is to reduce the number of times a password is required to establish an identity, and instead to have trusted parties issue tokens, for both security and convenience. But it also raises the question: to what extent do we trust Google, Facebook, Twitter, etc. as gatekeepers of our personal identity? What price are we paying to use their services in lieu of money? Will it stop at “we run ads”?
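To make the flow concrete, here is a minimal sketch of the OAuth 2.0 authorization-code exchange that sits behind a “Log in with Google” button; it is not the yamz implementation, and the client ID, secret, redirect URI, and code are placeholders.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

CLIENT_ID = "your-app.apps.googleusercontent.com"  # issued when registering the app with Google
CLIENT_SECRET = "your-client-secret"
REDIRECT_URI = "http://localhost:5000/authorized"

def exchange_code_for_token(code):
    """Trade the one-time authorization code for an access token."""
    body = urlencode({
        "code": code,                      # sent back to REDIRECT_URI after the user consents
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "grant_type": "authorization_code",
    }).encode()
    with urlopen("https://oauth2.googleapis.com/token", data=body) as resp:
        return json.load(resp)             # contains the access token; the app never sees the password
```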

 

To read more:

  • What does XXX mean in a comment?

https://softwareengineering.stackexchange.com/questions/65467/what-does-xxx-mean-in-a-comment

  • OAuth

https://searchmicroservices.techtarget.com/definition/OAuth

 

LEADS Blog

Rongqian Ma: Week 2 – Visualizing complexities of places/locations in the manuscript data

During week 2 I started working with the data that records the geolocations where the manuscripts were produced and used. Something I didn’t quite realize before delving into the data is that these are not simply names of places, but geo-information represented in different formats and with different connotations. The variety in the geolocation data shows up in several ways: a) missing data (i.e., N/A); b) different units of description – regions, nation-states, and cities; c) uncertain information (marked with “?”) carried over from the original manuscript records; d) geographies that changed over different historical periods, which makes their inconsistency over time hard to visualize; and e) single vs. multiple locations in a single data entry.

Facing this situation, I spent some time cleaning and reformatting the data as well as thinking about strategies for visualizing it. I merged the city information with the country/nation-state information and also did some searching on old geographies such as Flanders (and discovered its complexities…). The geographies also shift over time, which is hard to present in a single visual. I created a pie chart showing the proliferation and popularity of the book of hours in certain areas, and multiple bar charts showing the merged categories (e.g., city information, different sections of the Flanders area). I also found a map of Europe during the Middle Ages (the time period represented in the dataset) and added other information (e.g., percentages) to the map, which I think may be a more straightforward way to communicate the geographical distribution of book of hours production.

As the geographical data are necessarily related to the temporal data and to other categories concerning the content and decoration of the manuscripts, my next step is to create more interactive visualizations that can connect different categories of the dataset. I’m excited to work with the complexities of the manuscript data, which also remind me of a somewhat similar case I encountered before with Chinese manuscripts, where the date information was represented in various formats, especially in a combination of the old Chinese style and the Western calendar style. Standardization might not always be a good way to communicate the ideas behind the data, and visualizing the complexity is a challenge.
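To give a flavor of the cleaning step, here is a minimal sketch of normalizing such free-text place values; the lookup table, the sample values, and the handling rules are hypothetical, not the project’s actual script.

```python
PLACE_TO_REGION = {  # hypothetical lookup table
    "paris": "France",
    "rouen": "France",
    "bruges": "Flanders",
    "ghent": "Flanders",
    "utrecht": "Netherlands",
}

def normalize_place(raw):
    """Return (regions, uncertain) for one raw place entry."""
    if not raw or raw.strip().upper() in {"N/A", "NA"}:
        return ([], False)
    uncertain = "?" in raw                       # keep the cataloguer's doubt as a flag
    parts = [p.strip(" ?") for p in raw.replace(" and ", ";").split(";")]
    regions = [PLACE_TO_REGION.get(p.lower(), p) for p in parts if p]
    return (sorted(set(regions)), uncertain)

for raw in ["Bruges?", "Paris and Rouen", "N/A", "Flanders"]:
    print(raw, "->", normalize_place(raw))
```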