HIVE Evaluation Plan


  • Driving factors for the HIVE evaluation plan:
  1. What research will have the most impact for HIVE long-term?
  2. The HIVE evaluation deliverables outlined in the grant
  • High-level areas and some brainstorming notes follow below.

1. Log analysis

  • The general idea here is to log HIVE use: what users are doing and how they are using HIVE. Gathering log data would give us a picture of HIVE use, likely points of frustration, how people are exploring HIVE, and perhaps some indication of user needs. We could also attach an optional survey of some sort. (A minimal sketch of what such an analysis might look like follows this list.)
  • We could also encourage experimentation to increase log data via DataONE, our advisory board, Dryad depositors, and even Elena.
  • Log analysis data gathering might be contextualized, in part, by users willing to complete a voluntary survey linked to HIVE, or by a feedback window/box saying something like “tell us about your HIVE experience.”
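As a starting point for discussion, here is a minimal sketch of how we might tally usage from HIVE logs. The tab-separated record format (timestamp, session ID, action, vocabulary) and the file name hive_usage.log are assumptions made for illustration, not HIVE’s actual log schema.

```python
# Minimal sketch of HIVE log analysis, assuming a hypothetical tab-separated
# format: timestamp <TAB> session_id <TAB> action <TAB> vocabulary.
# Field names and the log path are illustrative, not HIVE's actual schema.
from collections import Counter

actions = Counter()
vocabularies = Counter()
sessions = set()

with open("hive_usage.log", encoding="utf-8") as log:
    for line in log:
        fields = line.rstrip("\n").split("\t")
        if len(fields) != 4:
            continue  # skip malformed lines rather than crash mid-analysis
        timestamp, session_id, action, vocabulary = fields
        sessions.add(session_id)
        actions[action] += 1
        vocabularies[vocabulary] += 1

print(f"{len(sessions)} distinct sessions")
print("Most common actions:", actions.most_common(5))
print("Most used vocabularies:", vocabularies.most_common(5))
```

Even counts this simple would tell us which HIVE features and vocabularies get used, and which sessions end quickly (a possible frustration signal); a survey link could then add context to the numbers.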

2. HIVE IR performance study

  • The HIVE proposal indicates plans for a HIVE performance study measuring precision, recall, and related factors, and the Dryad proposal also identifies IR research as a priority; however, both indications are quite general and lack detail. We really need to home in on this. I shared, at some point, Tague-Sutcliffe’s “The Pragmatics of Information Retrieval Experimentation, Revisited” (10.1016/0306-4573(92)90005-K); in my [Jane’s] opinion, this (or something similar) is a must-read as we plan a full-fledged IR study. A log analysis can inform our design as well. (A sketch of the basic measures appears after this list.)
  • An IR study could, to some degree, integrate and build on methodologies worked out in master’s paper research conducted by Jacki Sherman and Lina Huang. I have placed drafts of their work in Dropbox at My Dropbox\Dryad\HIVE\master’sPapers. (Note: these are master’s papers, not polished articles; additionally, Lina’s paper is still a draft, and I believe the title has changed. For context, Jacki compared smart HIVE with the NCBO BioPortal, and Lina’s paper is a usability study.) [I [jg] am still looking for a draft of Maddy’s master’s paper, which focused on some aspect of machine learning and HIVE.]
  • During the most recent discussion, Hollie, Ryan, and Jane spoke briefly about the pros and cons of running an IR study in a laboratory setting versus a naturalistic setting (the scientist’s desktop). There was a general consensus that having scientists (Dryad depositors) compare smart HIVE and simple HIVE would be useful. We didn’t talk much about automatic performance measures, and I’d like to see us give this more time, with Bob Losee’s input too.
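To make “precision, recall, and related factors” concrete, here is a minimal sketch of how suggested terms could be scored against a human-assigned gold standard. The term lists are invented for illustration; the actual gold standard and scoring protocol are exactly what the study design needs to settle.

```python
# Minimal sketch of set-based evaluation measures for term suggestion.
# The gold-standard and suggested term lists below are invented examples.

def precision_recall_f1(suggested, gold):
    """Score a set of suggested terms against a gold-standard set."""
    suggested, gold = set(suggested), set(gold)
    true_positives = len(suggested & gold)
    precision = true_positives / len(suggested) if suggested else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: terms a depositor assigned vs. terms HIVE suggested.
gold = ["population dynamics", "salinity", "estuaries"]
suggested = ["salinity", "estuaries", "water temperature", "tides"]
p, r, f = precision_recall_f1(suggested, gold)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```

Running smart HIVE and simple HIVE over the same deposits and comparing these scores would give us the automatic side of the study, alongside whatever the lab or desktop sessions tell us.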

3. Librarian/informatician users

  • Some preliminary and useful work has been conducted via the HIVE advisory board.
  • Lina has done some exploration here via her master’s paper, and in our most recent brainstorming meeting, Hollie, Ryan, and Jane agreed that the basic HIVE evaluation framework is in place for workshop participants, but the protocol and questions will need to be modified as each workshop is offered.
  • Hollie will check on IRB requirements and related issues with workshop participants.