LEADS Blog

Minh Pham, Week 2: Mapping data out with aesthetics and readability

 

In week 2, I focused on refining the visualizations I created in week 1 to better understand one of the three large datasets we have in the project so far. Thanks to these visualizations, I now have some sense of the information-seeking behaviors of users who search and download from institutional repositories (IR): the devices they use, how device use differs by geolocation, when they search, and the factors affecting their clicks and clickthroughs.

 

To improve the aesthetics of the visualizations, I paid attention to color contrast, graphic resolution, color ramps, color transparency, shapes, and the scales of the x and y axes. To enhance readability, I tried not to present too much information in a single visual, following Miller's law of "The Magical Number Seven, Plus or Minus Two," so that people will not feel overwhelmed when looking at the visual and processing the information.
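As a concrete illustration of these tweaks, here is a minimal matplotlib sketch (not my actual project code); the file name and the column names such as position, clicks, and device are assumptions made up for the example.

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical sample of the repository usage data.
visits = pd.read_csv("ir_visits_sample.csv")

fig, ax = plt.subplots(figsize=(8, 5), dpi=150)      # higher dpi for a sharper figure

for device, group in visits.groupby("device"):
    ax.scatter(group["position"], group["clicks"],
               label=device, alpha=0.4, s=20)        # transparency reduces overplotting

ax.set_yscale("log")                                 # rescale a heavily skewed axis
ax.set_xlabel("Search result position")
ax.set_ylabel("Clicks (log scale)")
ax.legend(title="Device")
fig.tight_layout()
fig.savefig("clicks_by_device.png")

Keeping each figure to one comparison (here, clicks by device) is also an easy way to respect the seven-plus-or-minus-two guideline.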

 

Besides visualizing the parts of the first dataset that struck me as interesting, I also tried to wrangle the other datasets. Nikolaus managed to harvest the metadata relevant to each URL, which means we can look into the metadata content related to each search. However, it also creates a challenge for me: how to turn unstructured string data into structured data. This is not something I do often, but I am excited to brush up my skills in working with text data in the coming weeks.

 

Minh Pham



LEADS Blog

LEADS Blog #2 Deploying yamz on my machine!

 

Deploying Yamz on my machine!

Hanlin Zhang

July 3rd, 2019

 

Last week was a tough week for me. I had been working closely with Bridget and John to set up a local YAMZ environment on my machine. Both John and Bridget are super helpful and very experienced in software development and problem-solving. One question I asked John as soon as I started reading the readme document was: what does the 'xxx' mark stand for? I noticed a lot of 'xxx' marks in the readme of yamz.net (https://github.com/vphill/yamz); for instance, there are a couple of blocks that start with 'xxx', such as:

 

xxx do this in a separate “local_deploy” dir?

xxx user = reader?

 

I was really curious about what those lines mean. Based on my experience with YAMZ, most of the lines starting with 'xxx' are pretty useful and definitely worth reading first. John explained that in the world of software development, an 'xxx' mark stands for a problem waiting to be solved, or a comment so critical that it should be paid attention to immediately. It seems my intuition was right, but the convention is confusing to people without development experience. We are going to rewrite the readme file this summer to make it more reader-friendly. Meanwhile, I'm still debugging an error I've encountered while developing:

 

flask_oauth.OAuthException

 

OAuth

 

According to Margaret Rouse (see the link below), OAuth "allows an end user's account information to be used by third-party services, such as Facebook, without exposing the user's password". The central idea of OAuth is to reduce the number of times a password is required to establish an identity, and instead to ask trusted parties to issue access tokens, for both security and convenience. But it also raises the question of how far we should trust Google, Facebook, Twitter, etc. as gatekeepers for our personal identity. What price are we paying to use their services in lieu of money? Will it stop at 'we run ads'?
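To make the flow more concrete, here is a minimal sketch of the OAuth 2.0 authorization-code dance using the requests-oauthlib library. This is not YAMZ's actual code (YAMZ uses the older flask_oauth extension), and the client ID, secret, and redirect URI below are placeholders.

import os
from requests_oauthlib import OAuth2Session

# Placeholder credentials; a real application registers these with Google.
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "http://localhost:5000/callback"

os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"   # allow http://localhost while testing

oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI, scope=["openid", "email"])

# Step 1: send the user to Google to log in and grant access.
authorization_url, state = oauth.authorization_url("https://accounts.google.com/o/oauth2/v2/auth")
print("Visit this URL and authorize:", authorization_url)

# Step 2: Google redirects back with a one-time code; exchange it for a token.
redirect_response = input("Paste the full redirect URL here: ")
token = oauth.fetch_token("https://oauth2.googleapis.com/token",
                          client_secret=CLIENT_SECRET,
                          authorization_response=redirect_response)

# Step 3: the token, not the password, is what proves the user's identity to the app.
profile = oauth.get("https://www.googleapis.com/oauth2/v3/userinfo").json()
print(profile.get("email"))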

 

To read more:

  • What does XXX mean in a comment?

https://softwareengineering.stackexchange.com/questions/65467/what-does-xxx-mean-in-a-comment

  • OAuth

https://searchmicroservices.techtarget.com/definition/OAuth

 

LEADS Blog

Rongqian Ma: Week 2 – Visualizing complexities of places/locations in the manuscript data

During week 2 I started working with the data that record where the manuscripts were produced and used. Something I didn't quite realize before I delved into the data is that these are not simply place names, but geo-information represented in different formats and with different connotations. The variety of the geolocation data shows up in several ways:

a) missing data (i.e., N/A);
b) different units, presented as regions, nation-states, and cities, respectively;
c) uncertain information (i.e., "?") indicated in the original manuscripts;
d) geographies that changed over historical periods, which makes the inconsistency of geographies over time hard to visualize;
e) single vs. multiple locations represented in one data entry.

Facing this situation, I spent some time cleaning and reformatting the data and thinking about strategies for visualizing it. I merged all the city information with the country/nation-state information and did some searching on older geographies such as Flanders (and discovered its complexities...). The geographies also shift over time, which is hard to present in a single visual. I created a pie chart showing the proliferation and popularity of the book of hours in certain areas, and multiple bar charts showing the merged categories (e.g., city information, different sections of the Flanders area). I also found a map of Europe during the Middle Ages (the time period represented in the dataset) and added other information (e.g., percentages) to the map, which I think may be a more straightforward way to communicate the geographical distribution of book of hours production.

As the geographical data are necessarily related to the temporal data and to other categories describing the content and decorations of the manuscripts, my next step is to create more interactive visualizations that connect different categories of the dataset. I'm excited to work with the complexities of the manuscript data, which reminded me of a relatively similar case I encountered before with Chinese manuscripts, where the date information was represented in various formats, especially in a combination of the old Chinese style and the Western calendar style. Standardization might not always be the best way to communicate the ideas behind the data, and visualizing the complexity is a challenge.
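As a rough illustration of the kind of cleanup involved, here is a small pandas sketch; the column name, the city-to-region lookup, and the sample values are all invented for the example.

import pandas as pd

# Invented sample of the kinds of values found in the place-of-origin column.
manuscripts = pd.DataFrame({"origin": ["Bruges", "Flanders?", "Paris", "N/A", "Ghent; Bruges"]})

# Hypothetical lookup that rolls cities up into larger historical regions.
CITY_TO_REGION = {"Bruges": "Flanders", "Ghent": "Flanders", "Paris": "France"}

def normalize(value):
    uncertain = "?" in value                              # uncertain entries marked with "?"
    places = [p.strip().rstrip("?") for p in value.split(";")]
    regions = sorted({CITY_TO_REGION.get(p, p) for p in places if p not in ("N/A", "")})
    return {"regions": regions, "uncertain": uncertain, "multiple": len(places) > 1}

cleaned = manuscripts["origin"].apply(normalize).apply(pd.Series)
print(pd.concat([manuscripts, cleaned], axis=1))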
LEADS Blog

Week 2: Kai Li: It’s all about MARC

It's been two very busy weeks since my last update. It has almost become common sense that getting your hands dirty with data is the most important thing in any data science project. That is exactly what I have been doing in my project.

The scope of my project is one million records of books published in the US and UK since the mid-20th century. The dataset turns out to be even larger than I originally imagined: in XML format, the final data is a little under 6 gigabytes, which is close to the largest dataset I have ever used. As someone who has (very unfortunately) developed quite solid skills for parsing XML data in R, the size of the file became the first major problem I had to solve in this project: I could not load the whole XML file into R because it would exceed the string size limit that R allows (2 GB). But thanks to this limitation of R, I had the chance to re-learn XML parsing in the Python environment. By reusing some code written by Vic last year, the new parser came together without too much friction.
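For readers curious what streaming a file of that size looks like, here is a minimal sketch using Python's built-in iterparse; it is not the project's actual parser, and the file name is a placeholder.

import xml.etree.ElementTree as ET

# MARCXML namespace; "records.xml" stands in for the real 6 GB file.
NS = "{http://www.loc.gov/MARC21/slim}"

def iter_records(path):
    # Stream the file record by record instead of loading it all into memory.
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == NS + "record":
            yield elem
            elem.clear()          # free memory for elements we are done with

count = 0
for record in iter_records("records.xml"):
    count += 1
print(count, "records parsed")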

According to Karen Coyle (who, BTW, is one of my heroes in the world of library cataloging), the development of MARCXML represents how this (library cataloging) community missed the chance to fit its legacy data into the newer technological landscape (Coyle, 2015, p. 53). She definitely has a point here: while MARCXML does an almost perfect job translating the countless MARC fields, subfields, and indicators into the structure of XML, it doesn't do anything beyond that. It keeps all the inconveniences of the MARC format, especially the disconnection between text and semantics, which is the reason we had the publisher entity problem in the first place.

[A part of one MARC record]

Some practical problems also emerge from this characteristic of MARCXML. The first is that data hosted in the XML format keeps all the punctuation of the MARC records. The use of punctuation is required by the International Standard Bibliographic Description (ISBD), which was developed in the early 1970s (Gorman, 2014) and has been one of the most important cataloging principles in the MARC21 system. Punctuation in bibliographic data mainly serves the needs of printed catalog users: it is said to give them more context about the information printed in the physical catalog (which, if you noticed, no one is using today). Not surprisingly, this is a major source of problems for the machine-readability of library bibliographic data: different punctuation is supposed to be used when the same piece of data appears before different subfields within the same field, a context that is totally irrelevant to the data per se. An example of a publisher statement is offered below, in which London and New York are followed by different punctuation because they are followed by different subfields:

[An example of a 260 field]
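One way to handle this during cleaning is simply to strip trailing ISBD punctuation from subfield values after they have been pulled out of the XML. Here is a minimal sketch; the 260 subfield values are made up, and the regular expression is just one reasonable heuristic, not the project's actual rule.

import re

# Made-up 260 subfield values, with the trailing ISBD punctuation that encodes
# print-catalog layout rather than meaning.
subfields_260 = [("a", "London ;"), ("a", "New York :"), ("b", "Routledge,"), ("c", "2004.")]

def strip_isbd(value):
    # Remove trailing punctuation such as " :", " ;", ",", "/", or ".".
    return re.sub(r"\s*[:;,/.]+\s*$", "", value)

cleaned = [(code, strip_isbd(value)) for code, value in subfields_260]
print(cleaned)   # [('a', 'London'), ('a', 'New York'), ('b', 'Routledge'), ('c', '2004')]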

The second practical problem is that a single semantic unit in the MARC format may contain one to many data values. This data structure makes it extremely difficult for machines to understand the meaning of the data. A notable example is character positions 24-27 of the 008 field (https://www.loc.gov/marc/bibliographic/bd008b.html). For book records, these positions represent what types of content the described resource is or contains. This semantic unit has 28 values that catalogers may use, including bibliographies, catalogs, and so on, and up to four values can be assigned to a single record. The problem is that, even though a single value (such as "b") can be very meaningful, values like "bcd" are much less so. In this case, this single data point in the MARC format has to be transformed into more than two dozen binary fields indicating whether a resource contains each type of content or not, so that the data can be meaningfully used in the next step.
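A minimal sketch of that transformation is below; the code-to-label mapping lists only a few of the 28 codes, and the sample 008 string is invented for illustration.

# Expand the 008/24-27 "nature of contents" codes into one boolean field per code.
# Only a handful of the 28 codes are listed here.
CONTENT_CODES = {
    "b": "bibliographies",
    "c": "catalogs",
    "d": "dictionaries",
    "e": "encyclopedias",
}

def expand_content_codes(field_008):
    codes = field_008[24:28]          # character positions 24-27
    return {label: code in codes for code, label in CONTENT_CODES.items()}

# Invented 008 value with "b" (bibliographies) at position 24.
sample_008 = "850423s1985    enk      b    001 0 eng d"
print(expand_content_codes(sample_008))
# {'bibliographies': True, 'catalogs': False, 'dictionaries': False, 'encyclopedias': False}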

While cleaning the MARC data can be quite challenging, it is still really fun for me to use my past skills to solve this problem and get new perspectives on what I did in the past.

REFERENCES

Coyle, K. (2015). FRBR, before and after: a look at our bibliographic models. American Library Association.

Gorman, M. (2014). The Origins and Making of the ISBD: A Personal History, 1966–1978. Cataloging & Classification Quarterly, 52(8), 821–834. https://doi.org/10.1080/01639374.2014.929604

LEADS Blog

California Digital Library

California Digital Library – YAMZ (Week 2)
Bridget Disney
This week, I’ve been learning more about YAMZ. Going through the install process has been tedious but I have (barely) achieved a working instance. I was able to start the web server and display YAMZ on my localhost, and learned a bit in the process, so that was exciting!    
The difference from the live site is that I don't have any data in my PostgreSQL database yet. Here's where things get a little bit murky. To add a term, I have to log in to the system via Google. The login didn't seem to be working, so I changed some code to make it work on my local installation. However, it could be that the login was only intended for use with the Heroku (not local) system, so what I really need to do is somehow bypass the login when it runs on my computer. So it's back to the drawing board.
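One common pattern for that kind of bypass is to gate the OAuth step behind a configuration flag, so a local instance can fake a logged-in session. The snippet below is only a sketch of the idea, not YAMZ's actual code; the flag, routes, and user id are invented.

from flask import Flask, redirect, session, url_for

app = Flask(__name__)
app.secret_key = "dev-only-secret"        # placeholder; never use in production
app.config["LOCAL_DEPLOY"] = True         # hypothetical flag for a local install

def start_google_oauth():
    # Placeholder for the real OAuth redirect used in the hosted deployment.
    return "The Google OAuth flow would start here.", 501

@app.route("/login")
def login():
    if app.config["LOCAL_DEPLOY"]:
        # Skip Google entirely and pretend a developer account is logged in.
        session["user_id"] = "local-dev-user"
        return redirect(url_for("index"))
    return start_google_oauth()

@app.route("/")
def index():
    return f"Logged in as {session.get('user_id', 'anonymous')}"

if __name__ == "__main__":
    app.run(debug=True)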
Even when I do log in successfully, I am getting error messages – still working on those! They look like they might have something to do with one of the subsystems that YAMZ uses.
After going through all that, Hanlin and I had a very useful Zoom session with John Kunze, our mentor, and the plans have been adjusted slightly. The directions for installing YAMZ have changed because it's been a few years and the versions of the software it depends on have moved on. Also, the free hosting tier has limitations, so the system needs to be moved from Heroku to Amazon's AWS. Hanlin and I are therefore revising the directions in a Google Doc to document the new process.
John is working to get us direct access to the CDL server which requires us to VPN into our respective universities and then connect to the YAMZ servers. When that is all set up, we will work through the challenge of figuring out how to proceed to move code from development to production environments.
In the meantime, looking through the code, I see there are also two Python components I need to get up to speed on: Flask (a micro framework used for the user interface) and Django (a web framework for use with HTML).
LEADS Blog

Week 02 – Historical Society of Pennsylvania

This week's work could be defined by data gathering and meetings. I handled a lot of logistics, such as creating a communication plan with Caroline Hayden, my mentor at the Historical Society of Pennsylvania (HSP). I was also able to discuss the project with last year's Fellow, Karen Boyd, who gave me a great overview from her perspective. I'd previously viewed Karen's lightning talk about her work at HSP, but being able to discuss what she did and what she thinks a good next phase would be helped me figure out the scope of my own work on the project. Along with the coordination with Caroline and Karen, the LEADS Fellows had an online meeting where we discussed what we've been doing since leaving our boot camp in Philadelphia. I enjoyed hearing how other people's work is progressing and am excited to begin the next stage of my own.

Alyson Gamble
Doctoral Student, Simmons University
LEADS Blog

Week 2: Understanding the limitation of data – What we can’t do

LEADS site: Repository Analytics & Metrics Portal

 

 

After developing some visualizations to understand the relationships between columns in the RAMP dataset, we had a follow-up meeting to discuss the results.
The visualizations I discussed at the meeting focus on aggregations of the categorical values in the RAMP dataset, including the number of visits for each index and each domain name (URL), the number of visitors for citable and non-citable content, the number of visits by user device, and histograms for position, clicks, and clickthroughs.
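For a concrete picture, here is a rough pandas sketch of those aggregations; the file name and the column names (url, citableContent, device, position, clicks, clickThrough) are assumptions based on the description above, not the exact RAMP schema.

import matplotlib.pyplot as plt
import pandas as pd

# Assumed CSV export of the RAMP page-click data.
ramp = pd.read_csv("ramp_sample.csv")

# Aggregations over the categorical columns.
clicks_per_url = ramp.groupby("url")["clicks"].sum().sort_values(ascending=False)
clicks_by_citable = ramp.groupby("citableContent")["clicks"].sum()
clicks_by_device = ramp.groupby("device")["clicks"].sum()

# Distributions of the numeric columns.
ramp[["position", "clicks", "clickThrough"]].hist(bins=30)
plt.tight_layout()
plt.savefig("ramp_histograms.png")

print(clicks_per_url.head(10))
print(clicks_by_citable)
print(clicks_by_device)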
In the meeting, we also discussed the possibility of incorporating external data such as the metadata for each index. One of our mentors, Jonathan, has been trying to merge metadata into the older (2018) RAMP data, and we can also extract the metadata from the newer dataset that we want to focus on analyzing.
What I will do next for this dataset is extract that metadata to make the data richer, so we can understand more about user behavior through the metadata and form a research question to focus on for the RAMP dataset.
Nikolaus Parulian

 

LEADS Blog

Jamillah Gabriel: From Relocation to Internment to Detention (and Everything in Between)

In the past couple of weeks, a flurry of articles has been published about concentration camps and their place in American society and history. My mentor shared them with me, and I have found them useful in contextualizing my work with the Japanese American internment cards. I'm reminded of how my LEADS project and the data I'm working with are still relevant today, when concentration camps can't be relegated to the past and, in fact, are very much a reincarnated racist reality in the present. Three of the four articles sent to me (listed below) connect the history of Japanese American internment camps with current issues around the migrant detention camps that have been implemented to detain migrant children crossing the border from Mexico, and they highlight the fact that this, unfortunately, is history repeating itself. For instance, Ft. Sill, which is now a migrant detention center, was founded in 1869 and was once "a relocation camp for Native Americans, a boarding school for Native children separated from their families, and an internment camp for 700 Japanese American men in 1942" (Hennessy-Fiske, 2019). Its unmitigated and irreconcilable history is a continued legacy of racial difference, segregation, and discrimination. All of the articles reinforce the importance of the project that I (and two other LEADS fellows before me) am working on, but the last piece, written by the granddaughter of a survivor of the Japanese American incarcerations, is truly the most motivating factor for this work: so that former internees and their family members can know their own histories.

 

 

References:

Friedman, M. (2019, June 19). American concentration camps: A history lesson for Liz Cheney. The Typescript. Retrieved from http://thetypescript.com/american-concentration-camps-a-history-lesson-for-liz-cheney

Hennessy-Fiske, M. (2019, June 22). Japanese internment camp survivors protest Ft. Sill migrant detention center. Los Angeles Times. Retrieved from https://www.latimes.com/nation/la-na-japanese-internment-fort-sill-2019-story.html

 Provost, L. (2019, June 22). Prepared for arrest: Japanese-Americans protest at Fort Sill over incoming migrant children. The Duncan Banner. Retrieved from https://www.duncanbanner.com/news/prepared-for-arrest-japanese-americans-protest-at-fort-still-over/article_789070aa-9542-11e9-8107-9fcd6387dce9.html

 Sakurai, C. (2019, June 25). More than a name in the census: Piecing together the story of my grandmother’s life. National Japanese American Historical Society. Retrieved from https://www.facebook.com/notes/national-japanese-american-historical-society/more-than-a-name-in-the-census-piecing-together-the-story-of-my-grandmothers-lif/2679119588783598

 

Jamillah R. Gabriel, PhD Student, MLIS, MA
School of Information Sciences
University of Illinois at Urbana-Champaign
jrg3@illinois.edu

 

LEADS Blog

Week 2: Elaborating on our multi-level alignment idea and an initial exploration on the BHL collection

This week I explored the multi-level alignment idea further, and I am almost convinced that we can turn it into a 'dataset merging' problem.
The dataset merging idea is not new. For example, one paper from my PhD advisor briefly discusses how to merge taxonomic data: Towards Best-effort merge on taxonomically organized data.
But our group at UIUC (in collaboration with systematics experts from ASU) has mainly been working on the actual alignment of taxonomic names rather than on 'dataset merging'.
For the dataset merging idea, our proposal is pretty simple.
If we can align taxonomic names, we should also be able to align other things in the dataset such as spatial information (in our case, countries/areas).
Naturally, finding the intersection between my project site, the Academy of Natural Sciences, and my own interest in taxonomy became the priority for this week. The task I set for myself was to find a species that is endemic to or popular across Taiwan (my geographical point of interest) and that also happens to appear somewhere in the text of either the proceedings or the journals of the Academy of Natural Sciences.
The quest went on with me fascinated (and slightly sidetracked) by all the orchid populations and varieties Taiwan has. To my surprise, one news article (in Chinese) mentioned that Taiwan has more than 0.9 billion moth orchids!
Then I went on to create our dataset merging idea first around the orchids:
Basically, the idea is that if we have two occurrence datasets on orchids, we can merge them as in the figure shown above, with each column being one 'taxonomy alignment problem' of its own.
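A toy pandas sketch of that idea is below; the taxon names, areas, and counts are invented, and the real alignment step is of course much richer than an exact-match merge.

import pandas as pd

# Two made-up occurrence datasets that share a taxon-name column and an area column.
ds1 = pd.DataFrame({
    "taxon": ["Phalaenopsis aphrodite", "Phalaenopsis equestris"],
    "area": ["Taiwan", "Taiwan"],
    "count_ds1": [12, 7],
})
ds2 = pd.DataFrame({
    "taxon": ["Phalaenopsis aphrodite", "Phalaenopsis lobbii"],
    "area": ["Taiwan", "Vietnam"],
    "count_ds2": [30, 4],
})

# An outer merge keeps every record; each merge key (taxon, area) is effectively
# one small alignment problem of its own.
merged = pd.merge(ds1, ds2, on=["taxon", "area"], how="outer")
print(merged)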
Just as I was almost set on going with the beautiful orchid flowers, I finally turned back to BHL, searched the keyword "Taiwan", and limited the Titles to the "Academy of Natural Sciences". This is when I found a whole new world of Mollusca (snails)!
The entry that returned results for the intersection of "Taiwan" and "ANS" is from the Proceedings of the Academy of Natural Sciences, v. 57, 1905, and the title of the page/chapter is: "Catalogue of the Land and Fresh-water Mollusca of Taiwan (Formosa) with descriptions of new species".
As the BHL search interface shows, the scientific names on this page were also extracted and displayed in the bottom left corner. Having made this breakthrough on the Mollusca (possibly endemic to Taiwan), I will begin working with this species on the dataset merging idea next week!
 
Yi-Yun Cheng
PhD student, Research Assistant
School of Information Sciences, University of Illinois at Urbana-Champaign
Twitter: @yiyunjessica

 

LEADS Blog

Week 2-3: Sonia Pascua, I am one of the “mixes” in the Metadata Mixer

LEADS site: Digital Scholarship Center
Project title: SKOS of the 1910 Library of Congress Subject Headings

 

On June 13, 2019, I presented our LEADS-4-NDP project at the Metadata Mixer.

I started my lightning talk by discussing the bigger picture of our project.

The Digital Scholarship Center has an ongoing project, the Nineteenth-Century Knowledge Project, which is building the most extensive open digital collections available today for studying the structure and transformation of 19th-century knowledge, using historic editions of the Encyclopedia Britannica. The project is making great progress toward establishing controlled vocabulary terms for metadata consistency and interoperability, and it utilizes the vocabularies in HIVE, especially LCSH.

Our project works on the SKOS-ification of the 1910 LCSH.

The hypothesis we would like to explore is that there may be a gap, which we call a "vocabulary divide," between the vocabularies of the past and the present. With the current (2016) version of LCSH already in HIVE, we aim to add the 1910 version of LCSH to serve research that uses resources from the past, especially 19th-century knowledge.

Above is our conceptual model. As shown, the 1910 LCSH is first digitized to a text format for easy manipulation of the words. From that text, whether in CSV, XLS, or DocX format, the RDF/XML representation is constructed for HIVE integration. Once the 1910 LCSH is in HIVE, it can be used as a tool for automatic indexing.
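As a rough illustration of the CSV-to-RDF/XML step, here is a minimal rdflib sketch; the file name, the column names, and the base namespace are all hypothetical.

import csv
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical input: a CSV of the digitized 1910 headings with columns
# "id", "prefLabel", and "broader_id".
BASE = Namespace("http://example.org/lcsh1910/")

g = Graph()
g.bind("skos", SKOS)

with open("lcsh_1910.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        concept = BASE[row["id"]]
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(row["prefLabel"], lang="en")))
        if row.get("broader_id"):
            g.add((concept, SKOS.broader, BASE[row["broader_id"]]))

g.serialize("lcsh_1910.rdf", format="xml")   # RDF/XML for HIVE integration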
In the 5-minute talk, I was able to present the proof of concept.
We formulated use cases based on the two data sets, the 1910 LCSH and the 2016 LCSH, and devised four scenarios for data analysis. The gap, or "vocabulary divide," is verified and validated through these use cases.
A simulation was conducted with the word "Absorption." An article about the sun was taken from the 1911 Encyclopedia Britannica and subjected to text analysis using TagCrowd, which extracted the frequencies of the words in the article. For subject cataloging, which was done manually, descriptors were selected to represent the ABOUTNESS of the article; the 1910 LCSH was used for indexing, and a vocabulary was generated. The same process was then executed using the 2016 LCSH in HIVE for automatic indexing. The case study fell under scenario 2, meaning the word "Absorption" intersects both data sets; in other words, the term has existed from 1910 through 2016.
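To make the comparison concrete, here is a small Python sketch of the frequency-and-lookup step; the article snippet and the two vocabulary sets are tiny stand-ins invented for illustration.

import re
from collections import Counter

# Stand-in for the 1911 Encyclopedia Britannica article text.
article = "The sun radiates energy; absorption of that energy by the atmosphere warms the air."
words = re.findall(r"[a-z]+", article.lower())
frequencies = Counter(words)

lcsh_1910 = {"absorption", "sun", "spectrum"}        # illustrative only
lcsh_2016 = {"absorption", "solar radiation"}        # illustrative only

for term, count in frequencies.most_common(10):
    in_1910 = term in lcsh_1910
    in_2016 = term in lcsh_2016
    if in_1910 and in_2016:
        note = "scenario 2: present in both vocabularies"
    elif in_1910 or in_2016:
        note = "present in only one vocabulary"
    else:
        note = "not a subject term"
    print(f"{term} ({count}): {note}")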