The Web of Science Group honored Drexel CCI’s Dr. Erjia Yan with the 2019 Eugene Garfield Award for Innovation in Citation Analysis at the 17th International Conference on Scientometrics and Informetrics in Rome. Read more about the award here.
Author: Sam Grabus
LEADS Forum: January 24th, 2020
After two successful years of the LEADS-4-NDP program, the Metadata Research Center and Drexel CCI will host a LEADS forum on Friday, January 24th, here at Drexel University.
This event is an opportunity for LEADS advisory board members, mentors, and fellows from both cohorts of the LEADS program to get together. The forum will include a panel of project mentors, student presentations, breakout groups, and an opportunity to discuss different models for continuing the LEADS program.
What: LEADS-4-NDP Forum
Date: January 24th, 2020
Time: 10am – 3pm
Where: 3675 Market St, Quorum, floor 2
Drexel University
Philadelphia, PA
Forum agenda: TBA.
Final Post: Julaine Clunis Wrap Up
—
Final Post: Kai Li: Wrapping Up the OCLC Project
In this project, we applied network analysis and community detection methods to identify meaningful publisher clusters based on the ISBN publisher codes they use. From my perspective, this unsupervised learning approach was selected because no large-scale baseline test exists for this task, which makes a supervised approach trained on real-world data impractical.
In the end, we produced yearly publisher clusters that hopefully reflect the relationships between publishers in a given year. That said, community detection methods are difficult to combine with temporal considerations. The year may not be a fully meaningful unit for analyzing how publishers are connected to each other (the relationship between any two publishers may well change in the middle of a given year), but we still hope that this approach to publisher clustering generates more granular results than using data from all years at once. The next step, which turned out to be much more substantial than expected, is to evaluate the results manually. We hope this project will be published in the near future.
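In rough outline, the clustering step can be sketched as follows. This is a minimal illustration rather than the project's actual code: the input format, the sample records, and the choice of networkx's greedy modularity routine are all assumptions.

```python
# Minimal sketch: link publisher name strings that share an ISBN publisher
# (registrant) code, then detect communities year by year.
# The input format and sample records below are assumptions for illustration.
from collections import defaultdict
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical input: (year, publisher_name, isbn_publisher_code) tuples
records = [
    (1995, "Penguin Books", "0-14"),
    (1995, "Penguin Group", "0-14"),
    (1995, "Viking Press", "0-670"),
    (1995, "Viking", "0-670"),
]

def yearly_publisher_clusters(records):
    """Build one graph per year and return its detected publisher communities."""
    codes_by_year = defaultdict(lambda: defaultdict(set))
    for year, name, code in records:
        codes_by_year[year][code].add(name)

    clusters = {}
    for year, code_to_names in codes_by_year.items():
        g = nx.Graph()
        for names in code_to_names.values():
            g.add_nodes_from(names)
            # connect every pair of names that appear with the same publisher code
            g.add_edges_from(combinations(names, 2))
        clusters[year] = [set(c) for c in greedy_modularity_communities(g)]
    return clusters

print(yearly_publisher_clusters(records))
# e.g. {1995: [{'Penguin Books', 'Penguin Group'}, {'Viking Press', 'Viking'}]}
```

In practice the edges would come from millions of bibliographic records rather than a toy list, and the particular community detection algorithm is itself a design choice that shapes what the resulting clusters mean.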
Despite its limitations, I learned a lot from this project. It was the first time I had to work with library metadata at a really large scale. As almost my first project too large to handle in R, I gained extensive experience using Python to process XML data. Along the way, I also read a lot about the publishing industry, whose relationship to our project proved to be more than significant.
The last point above is also one that I wish I had better appreciated at the beginning of this project. The most challenging part of this project was not any technical issue, but the complexity of the reality that we aim to understand through data analysis. Publishers and imprints can mean very different things in different social and data contexts, and there are different approaches to clustering them, each with its own meaning underlying the clusters. My lack of appreciation for the realities of the publishing industry prevented me from foreseeing the difficulties of evaluating the results. In a way, this means that field knowledge is fundamental to any algorithmic understanding of this topic (or of other topics data scientists have to work on), and, to a lesser extent, that any automatic method is only secondary to the final solution to this question.
Rongqian Ma; Week 8-10
Week 8-10: Re-organizing place and date information. Based on the problems that appeared in the current version of the visualizations, I performed another round of data cleaning and modification, especially for the date and geography information. With the goal of reducing the number of categories in each visualization, I merged more of the data into broader categories. For example, all city information was merged into countries, single dates (e.g., 1470) were merged into their corresponding time periods (in the case of the year 1470, the 1450-1475 period), and inconsistencies in the data across the time and geography categories were further reconciled. As demonstrated in the following example, the new version of the visualizations is cleaner in terms of the number of categories and more readable. Over the last couple of weeks, I have also discussed the visualizations and the problems I encountered with my mentor, and we have worked together on the data merge. I am also working on a potential poster submission to iConference 2020.
Example:
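In rough terms, the date and place merging described above could look like the following pandas sketch. This is a hypothetical illustration only; the column names, period boundaries, and city-to-country mapping are assumptions.

```python
# Hypothetical sketch of the merging step: map single years to broader time
# periods and roll city names up into countries. All names and values are assumed.
import pandas as pd

periods = [(1450, 1475), (1475, 1500), (1500, 1525)]  # assumed period boundaries

def year_to_period(year):
    """Map a single year (e.g. 1470) to its broader period label (e.g. 1450-1475)."""
    for start, end in periods:
        if start <= year < end:
            return f"{start}-{end}"
    return "other"

city_to_country = {"Paris": "France", "Bruges": "Belgium"}  # assumed mapping

df = pd.DataFrame({"year": [1470, 1498], "place": ["Paris", "Bruges"]})
df["period"] = df["year"].apply(year_to_period)
df["country"] = df["place"].map(city_to_country).fillna(df["place"])
```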
Rongqian Ma; Week 6-7: Exploring Timeline JS for the Stories of Book of Hours
Alyson Gamble, Week 5: Historical Society of Pennsylvania
—
Bridget Disney, California Digital Library – YAMZ
- Bridging the gap between librarians and computer science knowledge
- Maintaining the continuity of ongoing projects
Jamillah Gabriel: Python Functions for Merging and Visualizing
This past week, I’ve been working on a Python function that merges the two different datasets (WRA and FAR) so as to simplify the process of querying the data.
The reason for merging the data was to find a simpler alternative to the previous search function developed by Densho, which relied on if/else and for loops to pull data from each dataset separately.
Now, one can search the data for a particular person and retrieve all of the available information about that person in a simple query; after the merge, the output for a given person can be formatted as a list of their combined records.
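A rough sketch of what such a merge-and-lookup pair of functions might look like in pandas follows; the join key and column names are assumptions, not the actual WRA/FAR schema.

```python
# Hypothetical merge-and-lookup helpers in pandas; the join key ("name") and
# column names are assumptions, not the actual WRA/FAR field names.
import pandas as pd

def merge_datasets(wra: pd.DataFrame, far: pd.DataFrame, key: str = "name") -> pd.DataFrame:
    """Combine the WRA and FAR records on a shared identifier."""
    return wra.merge(far, on=key, how="outer", suffixes=("_wra", "_far"))

def lookup(merged: pd.DataFrame, person: str, key: str = "name") -> list:
    """Return all available information about one person as a list of records."""
    return merged[merged[key] == person].to_dict("records")

# Usage (hypothetical data):
# merged = merge_datasets(wra_df, far_df)
# lookup(merged, "Some Name")
```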
In addition to this, I’ve also played with some basic visualizations in Python, displaying some of the data as pie charts. I’m hoping to spend the last week working on more visualizations and functions for querying the data.
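For the pie charts, a minimal matplotlib sketch along these lines would be enough; the categories and counts below are placeholders, not the actual data.

```python
# Minimal pie-chart sketch with matplotlib; labels and counts are placeholders.
import matplotlib.pyplot as plt

labels = ["Category A", "Category B", "Category C"]  # hypothetical categories
sizes = [45, 30, 25]                                 # hypothetical counts

plt.pie(sizes, labels=labels, autopct="%1.1f%%")
plt.title("Distribution of records (placeholder data)")
plt.show()
```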