Issues in Science and Technology Librarianship
Head & Assistant Professor
Gemmill Library of Engineering, Mathematics & Physics
University of Colorado, Boulder
Julia R. Bordeaux
Red Rocks Community College
In this paper, we explore how librarians can teach students to deconstruct the concept of authority in STEM fields through guiding questions and consideration of contextual needs. We argue that examining the complexities of common signifiers of authority, such as peer review, citation rates, and types of sources, as well as exploring contextual factors such as authority in academic and professional settings, is key to developing an understanding of the ACRL Framework for Information Literacy frame "Authority is constructed and contextual" in the sciences. For each signifier of authority, we present ways to approach and discuss it in the classroom.
In the context of information literacy, the construction of authority in the sciences and applied sciences can appear deceptively simple: peer-reviewed journal articles are authoritative. But the reality is more complex. While philosophers, historians of science, and scientists themselves have explored the complexity of authority in the sciences, these explorations rarely take place in undergraduate science classes. Librarians teaching information literacy in the sciences face the challenge of teaching students to question and deconstruct authority in one of the disciplines where it is least obviously amenable to dissection and, at least apparently, functions fairly smoothly. Teaching students to apply a critical lens to scientific authority and question established standards without undermining science can feel fraught. While we strive to encourage students to consider the nuances and degrees of authority in the sciences, they must still be able to participate in a highly structured scientific discourse and understand the unique affordances of that discourse.
In order to teach the construction and contextualization of authority as part of information literacy, it is important for librarians to review and deconstruct the idea of authority in the field. What 'counts' as authority in science? If we asked students at the beginning of their college careers, they might answer 'my professors and my textbook,' but over the course of their academic careers, they are expected to recognize the additional authority of primary scientific research. Although peer-reviewed scholarly journals play a central role, handbooks, tacit knowledge, data sets, and other more complex authoritative sources are also used in day-to-day practice. To complicate matters further, once students have graduated and become practicing engineers or non-academic scientists, the standards for authority change, and the answer might be 'the specs my team agreed to' or 'the document my manager handed me.'
In addition to exploring the broad range of authoritative sources, there is plenty to discuss about the authority of peer-reviewed articles. In an ideal world, these articles are based on experiments and observations conducted following the scientific method, written up by highly-trained experts, peer reviewed by equally highly-trained experts who identify flaws, revised to correct errors, published in journals that serve as markers of quality, and then cited according to their worth. This process seems quite clear, but it is deceptively simplistic. The following sections illustrate just how quickly this picture of authority becomes qualified.
Thus, the information literate science student must be aware of the complex nature of authority across STEM fields and familiar with the scholarly norms of their own chosen field.
In this paper, we will explore the construction of authority in the sciences and then highlight some examples of how it may be discussed across the STEM fields. It is important to remember that different disciplines, indeed perhaps different arenas of the same discipline, may have very different concepts of authority. We will identify and explore the construction of four key signifiers of authority in the sciences: peer review, personal academic authority, citation rates, and types of sources. We will also explore the contextual nature of authority beyond academia. For each signifier, we will suggest ways to approach the discussion in the library classroom.
Many academic librarians are engaging with how to contextualize the ACRL Framework for Information Literacy for Higher Education in ways that are meaningful to students in particular disciplines. Several authors have explored aspects of situating information literacy in the STEM disciplines (Souchek & Meier 1997; Brown & Krumholz 2002; Fosmire 2012; Clairoux et al. 2013), and further literature clearly suggests that science students benefit from discipline-focused information literacy in an academic context (Grafstein 2002; Manuel 2004; Fosmire 2012; Fosmire & Radcliffe 2012; Douglas et al. 2015). This benefit extends beyond the academic context into the work world; there is evidence that engineers who have been exposed to information literacy coursework consult formal information sources (e.g., journals, standards, and patents) at a higher rate than those without (Douglas et al. 2015). Manuel (2004) in particular has argued that science information literacy competencies can only be taught successfully within the discipline and identified unique issues in STEM, such as a particularly collaborative workflow and a reliance on complex data analysis, which must be understood by librarians for effective teaching. Though these characteristics have become more common across disciplines as digital scholarship has developed, they are particularly entrenched in the sciences. In a similar vein, Grafstein (2002) conceives of librarians as responsible for more general information literacy skills, such as assessing authority, and faculty as responsible for more discipline-specific skills, such as assessing methodological validity. Although Grafstein (2002) differentiates between general and discipline-specific skills, the general skills she identifies still have significant disciplinary variations.
Scaramozzino (2010) further supports situated information literacy, noting that students in the sciences must have a basic understanding of scientific content and process, develop understandings of subject-specific mores in scholarly communication, and understand how to relate literature research to laboratory research in order to apply their information literacy skills effectively. We explore the relatively general but still discipline-specific practices of identifying and understanding the construction of signifiers of authority in the sciences.
While we advocate for contextualization, of course it must be noted that an excessive focus on a single field and context is undesirable. As Wilkinson notes in his critique of the ACRL Framework for Information Literacy, students should master a broad range of concepts, not just a single narrow framework (Wilkinson 2014). Students must be exposed to various ways of thinking outside of their discipline in order to fully understand how scholars interact with information. By teaching the Framework to emphasize transfer of skills from one context to another, however, we can avoid this narrowing of focus (Kuglitsch 2015). By asking students to compare ways authority is constructed outside of the sciences with ways it is constructed in the sciences, or even to compare differences among the sciences, we can not only enhance student understanding of authority in the immediate case, but prepare them to manage future questions of authority in any context.
According to the frame "Authority is constructed and contextual," in the ACRL Framework for Information Literacy for Higher Education (2015):
information resources reflect their creators' expertise and credibility, and are evaluated based on the information need and the context in which the information will be used. Authority is constructed in that various communities may recognize different types of authority. It is contextual in that the information need may help to determine the level of authority required.
Situated in the sciences, this means that information literate students understand not only how scientific authority is constructed within a peer-reviewed journal process, but also how it is constructed around other types of publications, such as professional blogs. Additionally, they need to understand how context affects the weighting of different types of authority. When is currency most important, when is access to data sets paramount, and when does the authority derived from a conventional publication model weigh most heavily? To do this, students need to develop a definition of authority within their discipline and learn to identify signifiers of authority within that discipline, both formal (publication in a peer-reviewed journal vs. publication in a serious magazine) and informal (a blog by a scientist on a recognized science blogging platform vs. an enthusiast's personal blog). The frame "Authority is constructed and contextual" is written in a way that "recognizes that authoritative information can be found through a variety of formal and informal channels," supporting a nuanced exploration of the concept of authority (Carncross 2015 248). Understanding that authority can come in many forms allows students to understand the complexities of authority and how it might impact the choice of formats they consult. This in turn can help them understand when it is most appropriate to use different formats as they develop their own authoritative voices in the sciences.
Understanding the concept of authority has practical implications for both developing researchers and informed citizens. An expert understanding of this frame leads to "an attitude of informed skepticism and an openness to new perspectives, additional voices, and changes in schools of thought" (ACRL 2015 4). For those students who will become scientists, there is a clear professional need for this approach; for those who will not, it is a foundational skill to being an informed citizen. It is empowering for students, whatever their professional goals, to understand that they are forming their own authoritative voices and to understand issues of authority in their chosen fields.
Peer review is typically viewed as a quality control mechanism, an examination by peers of the authors' assertions, to ensure that they are significant, accurate, original, unbiased, and methodologically sound. Elmborg (2006 196) characterized it as appearing "obsessive, even paranoid, to outsiders." Given this level of dedication, it is easy to assume that peer review ensures a constant advance of truth. But Ioannidis's (2005 0696) empirical work on study design suggests that most studies are "simply accurate measures of the prevailing bias." Acknowledging these questions can help students understand science as a form of knowledge produced by humans and inflected with the same concerns and biases as any human enterprise--while continuing to value the unique affordances of this kind of information. It is imperative to teach students the importance of the rigor associated with the peer-review process and that its goal is to evaluate and improve the work in question. But it is also important to discuss how much time it takes for an article to go through the publishing cycle and how peer review can sometimes fail to ensure correctness.
The replication crisis is one point at which to enter the discussion in the classroom. More and more studies are being called into question and often retracted (Van Noorden 2011). Reasons range from obvious misconduct, to incorrect data from faulty instruments, to small mistakes in complex models and code whose implications become more serious with every replication. In several instances, peer review has missed problems such as poor training of researchers in experimental design or utilization of inappropriate methodologies (Campbell 2014 Jul 13). In another case, Springer retracted 64 articles after discovering fake e-mail addresses supplied for their peer reviewers (Kaplan 2015 Aug 18). Regardless of whether there has been an increase in misconduct or just increased attention to it, students must understand that the process of peer review is both valuable and fallible.
One way to teach students this concept is to have them look at retracted articles, either reading the articles directly or supplementing them with more accessible material such as the Retraction Watch blog, which can make challenging peer-reviewed literature more understandable to undergraduate readers. Librarians can begin discussion by asking why the retraction might have occurred. Is it a case where someone consciously manipulated data? Used unreliable or invalid methodologies? Interpreted data too hopefully? Or is it a scenario where results were reported in good faith according to accepted science, but were ultimately shown to be wrong? This activity may be best suited for upper-division or graduate students, since many lower-division students do not have the methodological expertise to critique papers. However, in lower-division classes, an awareness that scholarly scientific publishing is vulnerable to retraction can spark the realization that, while peer-reviewed scientific articles are a foundational source for making scholarly arguments, they are not infallible.
As well as understanding the fallibility of peer review, students also need to understand that conflating peer review with the article format and relying on that as a signifier of authority can slow the conversation of science. The peer-review process can be lengthy. In fast-moving fields, such as computer science, a peer-reviewed conference presentation might be more credible than a peer-reviewed journal article, since the length of the journal publication process means articles are inherently less cutting edge and the most innovative and important results are shared at conferences. Similarly, the poster plus published abstract format is the customary way for geologists to communicate their newest research. Consequently, it is important that students understand their particular discipline's approach to grey literature, abstracts, and conference papers in order to properly incorporate authoritative research that may either not yet be formally peer reviewed, or be peer reviewed but in a less familiar format.
In STEM, unlike many other fields where single or small-group authorship is the norm, the construction of individual versus large-group authority introduces challenges. These range from basic authorship conventions--who is an author?--to the larger questions raised by authorship roles within massively multi-authored papers. First, students should learn conventions related to author order on smaller papers. They should learn to identify practices in their field and understand that other fields may have different practices. For example, in some science fields, the first author is the prime intellectual mover and the last author is the overseeing authority--perhaps the head of the lab in which the work was performed or the principal investigator on the grant that funds the work--while middle authors are active contributors to the paper, perhaps designing a test, writing sections, or conducting statistical analysis. However, in others, it is more common that the first author is the overseeing authority.
For more advanced students, particularly those considering a graduate career in STEM, it is useful to dig further. Who is a guest author--included despite having made little active intellectual contribution to a paper, but perhaps making a reputational or situational contribution? Who is a ghost author--someone who made significant intellectual contributions but is not included in the author list? The recent prevalence of these practices has been explored in several disciplines (Jabbehdari & Walsh 2017). Since these practices are, by nature, hidden, it is challenging to approach them in the classroom. For fields that register protocols, however, it is possible to identify a protocol, match it to the published article, and compare who was listed on the protocol with who was listed as an author (Gøtzsche et al. 2007).
Beyond the relatively straightforward conventions of small multi-author papers (say, papers with three to seven authors), enormous multi-author studies that run to potentially hundreds of authors also raise important questions. How is it, students may wonder, that all the most important experimental physicists have names that begin with 'A'? (The answer is that such papers typically list authors alphabetically.) This also raises the question of who even qualifies as an author in a particular field. In some fields, a lab technician would never be an author; in others, it would be unthinkable to exclude them from the author list. Does someone who designs the experimental equipment count as an author? That depends on the field and the local situation. What about someone who contributes a statistical analysis technique, or code for processing data? What about someone whose grant funding paid for the study, as a component of their larger research program, but who wasn't actively involved in the study? All of these questions are contextually dependent, and it is important for students to learn and discuss conventions in their fields.
Large-group multi-author papers also raise other, less practical questions of authority. Who is actually responsible for the work as a whole if, as might be the case, no one person understands the whole thing? This becomes an even more vexing question as research teams begin to answer complex, interdisciplinary questions, where perhaps no team member can understand every aspect of the research question or methodology. Is it imperative for a work to be attached to an individual with a full understanding, or is it acceptable for no one to have a full view of the research? What does that mean in terms of ways of knowing? Does it matter how the reputation of a group was constructed? According to Longino (2015), "the consequence [of large group authorship] is an experimental result...[wherein] the evidence for which is not fully understood by any single participant in the experiment." Now the reader is faced not only with the question of identifying authority among a group, but with identifying some kind of group authority. Students need not answer these questions here, but it is important that they ask them.
These questions, both practical and theoretical, can be raised in many ways. A librarian might, for example, ask the class to read and discuss as a group a small excerpt of a disciplinarily relevant definition of authorship, such as that from the International Committee of Medical Journal Editors (ICMJE n.d.). Or a librarian might select one of the many lively discussions about authorship etiquette on message boards aimed at academic scientists, such as Science Careers Forum, which are particularly interesting as evidence of practicing researchers speaking within their community of practice. Topics like whether supplying material counts for authorship, working out how a lab supervisor contributes to a paper, and whether data processing counts can all open up discussion in the classroom as well as online (Science Careers Forum n.d.). For a more formal entry point with less kvetching, trade literature for career scientists could also open the conversation, such as that in Science Careers (Venkatraman 2010). This is a view of what is actually happening in the scientific publishing process, rather than simply what is supposed to happen. Students could be asked to review a conversation and use it to identify what practicing scientists agree upon and what seems to be up for debate.
Understanding these aspects of authority is key for students who hope eventually to practice as scientists. However, it can be useful to others as well. Understanding the phenomenon of ghost authors, for example, has a very concrete impact on an individual who, later in life, is evaluating evidence for or against a particular treatment or health concern, or one who is trying to draw nuanced conclusions about how to vote on questions that rely on differentiating between scientific controversy and denialism.
Once an article is peer reviewed and published, a typical next step in establishing authority is citation rate. Tools such as Web of Science are sought after for their ability to identify citation rates. However, foregrounding high citation rates as a simple indicator of authority can be problematic. Articles may be highly cited for many reasons--among them because they are actively disagreed with, or because, although foundational to a field, they have become passé in practical terms. In these cases, a high citation rate can be misleading. Consider, for example, the work of Andrew Wakefield, whose discredited work on a link between vaccination and autism was widely cited and, for very different reasons, still is: it is now commonly cited as an example of research misconduct rather than as foundational work. Citation rates are unchanged by retraction, so it is important to raise this question in instruction sessions.
One might do this by asking students why they cite work in their research, drawing out reasons ranging from supporting an assumption, to contextualizing a result, to disagreeing with a fundamental premise of one of the cited authors. Applying their experience as authors of term and position papers to the scholarly literature can help make it more apparent why high citation rates are not a sign of correctness and validity. It also provides an opportunity to discuss other kinds of authority that are appealed to in the media, which is particularly important for the public understanding of science. For example, students might examine a news article discussing a health authority and catalog the criteria used to identify them as an expert, comparing them to the criteria scientists might use to assess an expert. This can help clarify the gap between scientific claims and media claims, perhaps building increased trust in science.
Traditionally, scientific communication has relied heavily on the peer-reviewed article to communicate discoveries and establish intellectual priority, but, although they are a primary vehicle for communication, peer-reviewed articles have never been the sole way to convey information. In recent years, particularly, direct products of research, such as computer code and data sets, are being shared increasingly often. Thus, information literate science students must be able to assess authority outside of the peer-reviewed article and learn to interact with these sources, which are typically less formally vetted.
While librarians are unlikely to have the disciplinary experience to teach students to assess the validity of a particular piece of code or a data set, they should acknowledge the importance of these sources and include them in discussions of potentially authoritative work. Students must learn to engage with code in order to evaluate it as a part of the research product; while this is obviously the case in computer science, it is increasingly a part of other scientific investigations. Additionally, as research shifts to involve manipulating very large data sets, the code used to parse and analyze such sets becomes as much a part of the research record as the data itself. Even if librarians cannot teach students how to assess such code, they should raise the question with students, just as they have begun to integrate questions about accessing data sets.
Another arena to discuss with students is the importance of tacit knowledge, indigenous knowledge, and user knowledge. Understanding people as authoritative information sources is important in many fields: lab techs and equipment managers often hold key tacit knowledge needed to successfully complete experiments. In addition to the simple reality that students should be able to assess and understand alternative authorities, this has clear practical applications in certain fields. In restoration biology and ecology, for example, indigenous knowledge may be the best or indeed only avenue to historical information that enables study and re-creation of ecosystems. For engineers, an understanding that the user of a system is the best authority on its use can pay deep dividends or prevent tragic mishaps that might occur when systems are used improperly or misunderstood. This approach applies to any observational field, or any field where science and design are applied in interaction with humans.
In addition to these increasingly recognized forms of knowledge, there are many other more traditional types of sources that are as credible as articles written in peer-reviewed journals when used in the proper context. Many undergraduate science students, and indeed more advanced students, use textbooks, handbooks, general encyclopedias, and books when learning about a new topic (Yatcilla and Fosmire 2014). Handbooks in particular provide basic data useful for scientists such as specific characterizations of materials or chemical properties (Yatcilla and Fosmire 2014). These have a different sense of authority than a peer reviewed article, but they establish a baseline of knowledge and often are sources for standard lab practices and data. A student who insists on finding recent peer-reviewed articles on basic photosynthesis will be gravely disappointed. Librarians should approach these sources by providing clear examples of when and why they are authoritative and exploring via discussion when their authority is appropriate, and when it may be lacking or simply inapplicable to a given situation.
After determining and learning to identify authority in the academy, another question is how authority is viewed in other intellectual cultures. Fosmire and Radcliffe (2012) note that engineers often take a least-effort approach to information gathering and rely heavily on colleagues and personal collections because they provide the lowest barrier to locating information, even though they might be of limited scope. Understanding this is part of fitting in to the professional culture. A student who graduates and blindly insists on only peer-reviewed articles (admittedly a mythical student) will not fit in to the engineering workplace. Researchers with Project Information Literacy have identified the tension between local tacit knowledge and published knowledge as a major challenge for students transitioning into professional life (Head 2012). This tactic has its own costs, since locating an appropriate colleague takes time as well as intellectual and social effort, but knowing when the local authority is more important than the published authority will help students function in the workplace (Fosmire & Radcliffe 2012). Consequently, it is key for students to understand how to evaluate the context of authority so they can deploy it effectively.
In order to address this challenge, librarians can discuss with students why and how authority is viewed differently by the academy and by practicing professionals. One option, particularly well-suited to classes with significant contact with industry, might be for students to speak with professionals in their chosen field and report back to their professor or librarian on how practitioners go about deciding whether or not something has authority, and what weight that authority has in a given situation. A less time-intensive option, perhaps more suited to a single class session, might be for librarians to present students with a scenario and several source types (peer-reviewed journal articles, handbooks, standards, a fictional expert) and ask them to think about the advantages and drawbacks of each. Which is most thorough? Most current? Most time efficient? What situations might change the weight of each factor?
The complexity of authority in STEM fields is deeper than it might at first appear, and teaching students a more nuanced view of authority is essential to moving them beyond a novice understanding. A student with an expert understanding of authority in the sciences will understand the importance, but also the fallibility, of peer review; the variety of authorship roles and citation practices; how authority might be conveyed in new formats; and how contextual requirements affect the type of authority needed. While information literate students in STEM disciplines must understand the conventions behind scientific communication and authority in order to participate in that community of practice, it is equally important for them to be able to question that authority and understand where its weak points lie. Critical approaches to authority are too often thought to apply only to disciplines conceived of as relatively subjective; while there are perhaps fewer obvious ways to approach this in the sciences, it is no less essential. It is only by critically investigating authority in the sciences that students will be ready to take their places as responsible creators, readers, and users of scientific information, regardless of their ultimate professional goals.
Campbell, H. 2014 Jul 13. The corruption of peer review is harming scientific credibility. Wall Street Journal. [accessed 2016 Nov 3]. http://www.wsj.com/articles/hank-campbell-the-corruption-of-peer-review-is-harming-scientific-credibility-1405290747
Carncross, M. 2015. Redeveloping a course with the Framework for Information Literacy for Higher Education: From skills to process. College & Research Libraries News 76:248-273. DOI: 10.5860/crln.76.5.9309
Clairoux, N., Desbiens, S., Clar, M., Dupont, P. & St-Jean, M. 2013. Integrating information literacy in health sciences curricula: a case study from Québec. Health Information & Libraries Journal 30:201-211. DOI: 10.1111/hir.12025
Douglas, K.A., Epps, A.S.V., Mihalec-Adkins, B., Fosmire, M. & Purzer, S. 2015. A comparison of beginning and advanced engineering students' description of information skills. Evidence Based Library and Information Practice 10:127-143. DOI: 10.18438/B8TK5Z
Gøtzsche, P.C., Hróbjartsson, A., Johansen, H.K., Haahr, M.T., Altman, D.G. & Chan, A-W. 2007. Ghost authorship in industry-initiated randomised trials. PLOS Medicine 4:e19. DOI: 10.1371/journal.pmed.0040019
International Committee of Medical Journal Editors [ICMJE]. n.d. Defining the Role of Authors and Contributors. [accessed 2016 Dec 29]. http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
Kaplan, S. 2015 Aug 18. Major publisher retracts 64 scientific papers in fake peer review outbreak. Washington Post. [accessed 2016 Nov 3]. https://www.washingtonpost.com/news/morning-mix/wp/2015/08/18/outbreak-of-fake-peer-reviews-widens-as-major-publisher-retracts-64-scientific-papers/
Longino, H. 2015. The social dimensions of scientific knowledge. In: Zalta, E.N., editor. The Stanford Encyclopedia of Philosophy. Spring 2015. [accessed 2015 Jul 2]. http://plato.stanford.edu/archives/spr2015/entries/scientific-knowledge-social/
Science Careers Forum. n.d. Biotech, Pharmaceutical, Faculty, Postdoc jobs on Science Careers. [accessed 2017 Jul 7]. http://scforum.sciencecareers.org/viewforum.php?f=1
Venkatraman, V. 2010. Conventions of scientific authorship. Science Careers. [accessed 2017 Jul 7]. http://www.sciencemag.org/careers/2010/04/conventions-scientific-authorship
Wilkinson, L. 2014. The problem with threshold concepts. Sense & Reference. [accessed 2015 May 15]. https://senseandreference.wordpress.com/2014/06/19/the-problem-with-threshold-concepts/
Yatcilla, J.K. & Fosmire, M. 2014. Research in the sciences. In: Keeran, P. & Levine-Clark, M., editors. Research within the Disciplines: Foundations for Reference and Library Instruction. 2nd edition. Lanham, Maryland: Rowman & Littlefield Publishers.
This work is licensed under a Creative Commons Attribution 4.0 International License.