Searching on Health Information Databases: A Search Interface Including Thesaurus Term and Tree Browsers is More Effective than a Simple Search Interface
Keywords: information literacy, information retrieval
Abstract

A Review of:
Mu, X., Lu, K., & Ryu, H. (2014). Explicitly integrating MeSH thesaurus help into health information retrieval systems: An empirical user study. Information Processing and Management, 50(1), 24-40. http://dx.doi.org/10.1016/j.ipm.2013.03.005
Objectives – To compare the effectiveness of a search interface with built-in MeSH thesaurus terms and tree browsers (MeSHMed) to a simple search interface (SimpleMed) in supporting health information retrieval. Researchers also examined the contribution of the MeSH term and tree browser components towards effective information retrieval, and assessed whether and how these elements influenced users’ search methods and strategies.
Design – Empirical comparison study.
Setting – A four-year university in the United States of America.
Subjects – 45 undergraduate and postgraduate students from 12 different academic departments.
Methods – Researchers recruited 55 students from a wide range of disciplines using flyers posted across a university campus; 10 were subsequently excluded. Participants were paid a small stipend for taking part in the study.
The authors developed two information retrieval systems, SimpleMed and MeSHMed, to search across a test collection, OHSUMED, a database of 348,566 Medline citations used in information retrieval research. SimpleMed includes a search browser and a popup window displaying record details. The MeSHMed search interface includes two additional browsers, one for looking up details of MeSH terms and another showing where a term fits into the MeSH tree structure. The search tasks had two parts: to define a key biomedical term, and to explore the association between concepts. After a brief tutorial covering the key functions of both systems, which avoided suggesting that one interface was better than the other, each participant searched for six topics, three on each interface, allocated randomly using a 6x6 Latin square design.
The study tracked participants’ perceived topic familiarity using a 9-point Likert scale, measured before and after each search, with changes in score recorded. It examined the time spent in each search system, as recorded objectively by system logs, to measure engagement with the search task. Finally, the study examined whether participants found an answer to the set question, and whether that response was wrong, partially correct, or correct. Participants were asked about the proportion of time they spent on each of the system components, and transaction log data were used to capture transitions between the search components. Participants also added comments to a questionnaire after the search phase of the experiment.
Main results – The baseline mean topic familiarity scores were similar for both interfaces: SimpleMed’s mean was 2.01 (standard deviation 1.43), compared to MeSHMed’s mean of 2.08 (standard deviation 1.60). Topic familiarity change scores were averaged over the three questions on each interface and compared using a paired-sample two-tailed t-test, which showed a statistically significant difference between the mean change in topic familiarity scores for SimpleMed and MeSHMed.
A total of 46 (17%) of the questions were not answered: 34 (74%) when participants were using SimpleMed and 12 (26%) when using MeSHMed. A chi-squared test showed an association between the interface used and whether the answer was correct, suggesting that MeSHMed users were less likely to answer questions incorrectly. Question-answer scores correlated positively with topic familiarity change scores, indicating that participants whose familiarity with the topic improved the most were more likely to answer the question correctly.
The mean time spent overall on the two interfaces was not significantly different, though the researchers report only total times and test statistics, not mean times. On the MeSHMed interface, participants on average rated the Term Browser as the most useful component and spent the most time using it. The Tree Browser was rated as contributing the least to the search task, and participants spent the least time in this part of the interface.
Patterns of transitions between the components are reported; the most common were from the Search Browser to the popup records, and from the Term Browser to the Search Browser and vice versa. These observations suggest that participants were verifying terms and moving back and forth between components to carry out iterative and more accurate searches. The authors identified seven typical patterns and described four different combinations of transitions between components.
Based on questionnaire feedback, participants found the Term Browser helpful for defining the medical terms used and for suggesting additional terms to add to their searches. The Tree Browser allowed participants to see how terms relate to each other and helped identify related terms, despite several negative comments about this feature. Almost all participants (43 of 45) preferred MeSHMed for searching, finding the extra components helpful for producing better results.
Conclusion – MeSHMed was shown to be more effective than SimpleMed for improving topic familiarity and finding correct answers to the set questions. Most participants preferred the MeSHMed interface, with its Term Browser and Tree Browser, over the straightforward SimpleMed interface. Both MeSHMed components contributed to the search process: the Term Browser was particularly helpful for defining terms and developing new concepts, and the Tree Browser added a view of the relationships between terms. The authors suggest that health information retrieval systems include visible and accessible thesaurus searching to assist with developing search strategies.
The Creative Commons-Attribution-Noncommercial-Share Alike License 4.0 International applies to all works published by Evidence Based Library and Information Practice. Authors will retain copyright of the work.