In the Name of Neutrality - Laura Saunders
28 April 2021
Jason Ertz: On behalf of my colleagues at the College of DuPage Library, DePaul University Library, and Moraine Valley Community College Library, it gives me great pleasure to introduce our first keynote speaker: Dr. Laura Saunders. Dr. Saunders is an associate professor at Simmons University's School of Library and Information Science. She teaches and conducts research in areas of reference, instruction, information literacy, and intellectual freedom. She has a strong interest in the connections between information literacy and social justice issues, as well as in the impact of mis- and disinformation. In April of 2018 she ran the Know News symposium, which brought together 80 librarians, journalists, and allied professionals to discuss collaborative responses to the challenge of misinformation. Her articles have appeared in a variety of journals including College and Research Libraries, The Journal of Academic Librarianship, and Communications in Information Literacy. She has written or edited several books, most recently Reference and Information Services: An Introduction, sixth edition, co-edited with Melissa Wong. Laura has a PhD and a Master of Library and Information Science, both from Simmons College, and a Bachelor of Arts degree in English literature from Boston University. She serves as a trustee for the Somerville Public Library in Somerville, MA, and she is the 2019 recipient of Simmons University's Provost Award for excellence in graduate teaching.
Her presentation today, titled In the Name of Neutrality: What We Mean by Neutrality and What It Means for Us, will explore various conceptualizations of neutrality as related to library instruction and information literacy, and how these definitions take on new meanings when applied to different aspects of our professional practice. Our discussion will examine questions like: Is neutrality possible? Is it something we value as a profession? And, if so, what does neutrality look like in the library classroom? So please join me in welcoming Laura Saunders. Dr. Saunders, thank you for coming.
Laura Saunders (she/her): Thank you so much, Jason, and thank you, everyone, for being here. I just want to mention that I am joining you from Somerville, Massachusetts, as Jason mentioned, and that these are the ancestral lands of the Pawtucket and Massachusett Nations. I want to acknowledge the stewardship of those people, the continued existence of Indigenous peoples, and my respect for those who continue to call these lands home today.
I'm really excited to be here today, even if only virtually; I would have loved to join you in person, and maybe have a donut with you. But I'm really excited to be doing this, and I want to thank the summit committee for inviting me, and in particular Tish Hayes for coordinating all the logistics and answering all of my questions. I’m also honored to be in the company of Fobazi Ettarh and Lisa Hinchliffe, whose work has helped to shape my own thinking, including about the topics I’ll be speaking about today.
In particular, in discussing vocational awe, Ettarh (2018) challenges us to take a step back and critically reflect on our profession. To question the systems and structures that have shaped our profession, to recognize when those systems and structures are flawed or even oppressive, and to challenge, and change them.
Today I’d like to take up that challenge with regard to the question of neutrality, especially as it relates to information literacy and instruction. We have been struggling with the idea of neutrality for a long time, and the debates seem to raise more questions than they answer: Should we try to be neutral? Is it even possible to be neutral? Is it an ideal to be constantly worked toward? Or is it a threat to social justice and civil society that undermines our real work and better values?
A few issues make these debates more challenging. First, our professional associations don’t actually give us a lot of guidance. The word neutrality is not in our professional code of ethics or our values statements, making it hard to know exactly what the professional expectations really are with regard to neutrality. And in our professional discussions, we’re not always clear in how we use the word neutrality. Finally, while we’ve debated neutrality across various functional areas like reference and cataloging, there is less attention to what neutrality would look like within the realm of information literacy instruction specifically.
In this talk, I want to explore some of these lesser-discussed areas, of what neutrality is, how we use the word, and what it would mean for instruction. But before I really dive in, I need to offer a few caveats:
First, I need to acknowledge my own positionality as a white, cis-gender able-bodied woman in a field dominated by white, cis-gender, able-bodied women. I don’t bring a very unique perspective to this discussion. As part of the dominant culture in the field and in the United States generally, I see myself and my perspectives mirrored back to me in all areas of our practice and service, from how our facilities are set up, to assumptions inherent in our reference practices and classification systems, to the kinds of books that make up the bulk of our collections. Perhaps more importantly, neutrality is often discussed in relation to stances on intellectual freedom and freedom of speech as well as social justice topics like diversity, equity, and inclusion; and anti-oppressive practices or activism. Because of my privileged position, the status quo on these issues can be a very comfortable place for me, and certainly my perspectives on these issues are colored by my privilege. So, for instance, while I might be offended, disgusted, or appalled by certain viewpoints, I’ve never felt my own safety to be directly threatened by persons or groups espousing those viewpoints. In a way, intellectual freedom has some of the “awe” around it that Ettarh speaks of. Many of us have taken it as a given that intellectual freedom is necessary to a democracy and that democracy and intellectual freedom are both inherently good systems, above reproach. But these goods or benefits have not been distributed equally. As someone who is coming from a privileged background, I need to explicitly acknowledge that positionality before I can critique or challenge the system.
Second, I want to acknowledge that I am not a lawyer or a philosopher. All of my background and training are in LIS, and I have no doubt that there are plenty of you in the audience today who have more knowledge than I do in this area. And that brings me to my third caveat: keynote addresses are often one-directional. The speaker is usually invited because they are considered knowledgeable, and they come to impart their wisdom. I don’t feel that way about this topic, both for the reasons I just mentioned and because this topic is infinitely complex. My own thinking on these topics tends to be in constant flux. So I am not here to give you answers on this topic, and rather than an “address” I’d rather think of this as the opening of a dialogue. There will be time for Q&A and discussions at the end. I would be more than happy to continue this conversation in other outlets and venues, and I am honestly approaching it with an open mind.
So, let’s start that conversation now!
As I see it, the two main questions up for debate are “should we be neutral,” and “can we be neutral”? The former tends to assume that neutrality is, or has been historically, a value of the profession and the question is whether neutrality is an appropriate value, especially if it comes into conflict with other professional, institutional, or personal values. The latter generally sets aside the argument about whether neutrality is appropriate, positing instead that neutrality is not even possible because of existing and inherent inequities and biases.
But before we address whether neutrality is appropriate or possible, we need to address what we mean by neutrality in library science. The OED defines neutrality as “not taking sides in a controversy, dispute, or disagreement,” but I’ve noticed in LIS that we often conflate neutrality with positions that are decidedly not neutral. The Presidential Panel on neutrality at the 2018 ALA Midwinter conference was a case in point: across the board, panelists on the side of “neutrality” were actually arguing for intellectual freedom and freedom of speech. But even if we describe intellectual freedom as maintaining viewpoint neutrality in terms of, say, collections, upholding intellectual freedom is not a neutral action but is a stance in favor of a very particular position.
And that is not the only other definition we have in mind when we discuss neutrality. Recently, one of my students (Scott & Saunders, 2021) did a survey of over 500 public librarians from around the country, and the vast majority, almost 70%, defined neutrality as being objective. But neutrality and objectivity aren’t the same thing. Again, going back to the dictionaries, the OED defines being objective as “Not unduly or improperly influenced or inclined; unprejudiced, impartial,” which is really not the same as neutrality. So if you’re being objective, you can take a side, as long as you do your best not to allow your biases to influence your position. Now, we can argue about how easy or even likely it is that we can suppress our biases when making choices, but either way, it seems to me that neutrality and objectivity are two different things.
So it turns out that when we talk about neutrality in LIS, we are really talking about at least three different things: intellectual freedom, objectivity, and not taking sides. That lack of precision in our terminology means that at least some of the time we’re talking past each other and misunderstanding each other. People will likely react differently if I say librarians should be neutral than if I say librarians should be objective, or that they should support intellectual freedom and oppose censorship.
But also, each of these three things could mean different things in terms of information literacy and instruction. So, I want to look at each of these things—intellectual freedom, objectivity, and not taking sides—and what they could mean for us as librarians, especially in relation to information literacy and instruction. And I’m going to couch this in terms of physical access—or the ability to find, locate, and access information—and intellectual access—or our ability to understand, evaluate, and use that information.
Let’s start with intellectual freedom and physical access. This part might not seem as directly relevant to information literacy and instruction—even though we do teach people how to access information in library instruction—but it’s important because that physical access is the first step in understanding, evaluating and using information. If we can’t access the information, the rest doesn’t matter.
Emily Knox describes intellectual freedom as the “right to access the whole of the information universe without fear of reprisal from the ‘powers that be’” (Knox 2015, 11). And unlike neutrality, which as I mentioned does not appear in our professional standards, this freedom has long been a core value of the library profession, and our support of it is explicitly stated in our code of ethics, the Library Bill of Rights, and the Freedom to Read statement. But why has intellectual freedom historically been so important to us? Well, the idea is that intellectual freedom is related to what I would refer to as physical access to information, our ability to actually get our hands on information whether it’s in hard copy or virtual. And the further argument is that we have a right to access information, even if that right is not explicitly identified. So, in the United States, we have certain “inalienable” rights, which include freedom of speech, freedom of the press, a right to the pursuit of happiness and, although it was only extended to all adults much later in our history, we also have the right to vote.
Now, a right to access information isn’t included in our constitutional rights, but some legal scholars (Bishop, 2011; Weeramantry, 1995) have made the argument that it is implied or an ancillary right, meaning that it underpins all of our other rights. I’m simplifying a bit here, but basically the argument is that we need access to a wide range of information, including different viewpoints, in order to exercise our other rights. We need access to information to make good decisions about our own health and safety and that will help us to create a safe and healthy environment, and we need information to decide how we want to vote: who do we want to vote for, what platforms, policies, and ballot initiatives do we want to support? Without good information we can’t fully exercise all of our other rights.
And of course, the library plays a role in facilitating this right to access, not just in building and organizing collections, but in teaching people the skills needed to navigate and search these collections in order to engage in that access.
Okay, so this right to access information has been recognized, but we need to be clear that, like nearly any other right you can think of, it is not absolute. There are limitations to access that are also baked into our system. It’s illegal to distribute obscene materials. Privacy laws limit other people’s ability to access things like our medical records. And in libraries, we routinely limit access through our collection decisions, by using Internet filters, and even through things like cataloging practices that carry implicit value judgments or policies like charging fines that might disproportionately impact some of our patrons.
And these limits to intellectual freedom are exacerbated by the way the benefits of and limitations to that right are distributed.
And of course we know that distribution is not equitable. We know that factors like the digital divide, geography, policy, and systemic racism hinder access to information. And we also know that people from marginalized groups not only face disproportionate barriers to access, but also disproportionate detriments and drawbacks when they do have access. For instance, they are more likely to have their data misused, or to be targets of online bullying, doxing, stalking, and so on. And at the same time we know that some groups, like white supremacists and other hate groups, are very skilled at manipulating information and access to information in order to indoctrinate and radicalize followers. And I think this is where we begin to see the conflict between intellectual freedom, sometimes stylized as neutrality, and social justice. Because if people do not have equitable access to information, can they equitably exercise their intellectual freedom rights? And if that access and that ability are not equitable, should we be supporting that freedom?
From this perspective, could we understand intellectual freedom as a white ideal that basically helps to maintain the status quo by protecting those people who already have access and voice, and is therefore incompatible with a social justice framework that seeks to empower marginalized individuals and communities?
And if that is the case, and if we are already making choices that impact access, should we just embrace our gatekeeping role and use it for good? Why not use our position to amplify marginalized voices and filter out harmful, offensive or hateful information that could only be hurtful to some and could be motivating to others? If our motives are good, if we have our patrons’ best interests in mind, and are only seeking to avoid harm, why not make some good decisions about what people access?
These are questions that I, personally, am struggling with right now. On the face of it, I think this proposition of good gatekeeping has some validity in terms of harm reduction. But at the same time, it raises some concerns. Recently, I had Kade Crockford, who is Director of the Technology for Liberty Program at the ACLU of Massachusetts, speak in my intellectual freedom course, and they made the point that information is power, and when you control information you control people. This isn’t a new idea, of course, but in the context of a course on intellectual freedom, and at a time when I, my students, and it would seem much of the profession are struggling with this question of intellectual freedom as perhaps incompatible with social responsibility, I think this idea raises the question of when gatekeeping turns into censorship and control. And, in the end, aren’t censorship and control also tools of the dominant culture to keep the status quo?
As Emily Knox (2015) eloquently explains, when we try to control access to information, we are engaging in an act of power that can impact an individual’s and a community’s identity, and that makes certain assumptions about the nature of knowledge. We assume there is only one way to “read” the information that is being accessed. We assume we know how people will understand the information they are receiving, and what they will do with it. And again that sets up a potentially problematic power dynamic.
Some gatekeeping might seem justified in order to protect people from harmful or offensive things, but who decides what is harmful? Who decides who needs to be protected and who is then left without the ability, or the information, to make a decision for themselves? If we take on an expansive gatekeeping role, even with the best of intentions, are we putting ourselves in a paternalistic position of deciding what’s best for someone, and by extension, further marginalizing if not even infantilizing some of our patrons? Are we limiting their ability to make a good decision by denying them access to the full scope of the information, however misleading, harmful, or offensive? In other words, could there be any unintended, and perhaps even harmful, outcomes of good gatekeeping?
To illustrate this point, I want to share two brief stories. The first story relates to the first question of who is the gatekeeper. Recently, I was part of a conversation about making choices about what to include in a collection, and one person stated, “as a white librarian, it becomes my responsibility to make decisions that will protect my patrons.” I have no doubt this person’s heart was in the right place. I have no doubt that she was coming from a place of care. But, as they say, the road to hell is paved with good intentions. For me, this statement raises that question of who is making the gatekeeping decisions and what the impact of those decisions is. It brings me back to the fact that our profession is over 80% white, and if we white librarians are, even with all of the best intentions in the world, deciding what is safe or appropriate for our entire community, I think that is problematic.
Now, we know we can’t avoid the gatekeeping role altogether—we will continue to make decisions about what is officially part of our collection, what we spend our limited purchasing money on. We can address this issue in part by including more, and more diverse, voices into the role of gatekeeper. We can work to diversify the profession. I, personally, can work to attract a more diverse student body to LIS. I can continuously work to decolonize my curriculum so all students see themselves and their perspectives reflected. I can work to support students and make them feel included, valued, and integrated into my courses, into the larger program, and into the profession so that eventually we have a more diverse group of librarians making these decisions. But changing the demographics takes time. In the meantime, we can actively seek out those voices and perspectives in other ways: for instance by building boards of trustees and advisory boards that reflect our patron community.
But having a more diverse set of gatekeepers doesn’t necessarily address the second question, about unintended consequences. After all, the gatekeeping role involves a power dynamic, and I worry that in making decisions for patrons, even with the best of intentions, even in the name of keeping them safe, we are exercising that power in a paternalistic way that ultimately infantilizes our patrons and, in so doing, undermines their ability to make informed decisions. Sometimes you need the context, including the problematic information, in order to make a decision. Which brings me to my second story.
I was actually teaching my intellectual freedom course as part of a study abroad program in Rome. We had some extra money in the budget, so I bought tickets to visit Castel Sant’Angelo, and through a friend of a friend, I set up an informal, unofficial “tour.” The tour was supposed to be in English, but that night the guide said he was tired and asked me to translate for him. The tour started off fine, and he was describing how the castle was originally built as a monument during peace times, but was turned into a fortress as the Roman Empire began going to war. And at this point in the tour he said, in Italian, “back then, just like now, the Arabs came over and started causing trouble.” And then he paused and waited for me to translate. I hesitated only for a second, and in that second I decided I wouldn’t repeat what he said because I thought it was offensive, and I was afraid that my students would be upset. So I just said, “the Roman Empire started engaging in battles and turned the monument into a fortress.” But I had forgotten the guide spoke English—he stopped me and told me to explain it the way he had said it, and then he repeated himself. I was terribly embarrassed, but I explained to the students that he wanted me to give them a more accurate translation, and then I repeated what he said. At that point, one of the students broke off from the group and went off on his own. At the end of the day, when we regrouped, I apologized to the students, and at that point the student who had left told me, “I was almost more mad at you for not translating what he said than at him for saying it, because I thought he was saying something like that, but without you translating I couldn’t be sure, and I couldn’t decide if I wanted to stay for the tour and listen to this guy.”
I’ve returned to this story over and over as an example that people should be able to access as much information as possible, and that fullest access is necessary in order for people to make informed decisions in all areas of their lives. Yes, what that guide said was offensive, but my students needed to know what he said, decide for themselves how they felt about it, and also decide if they wanted to be a part of that tour at all. These questions around gatekeeping and censorship actually make me wonder if it is possible that intellectual freedom is not only not incompatible with social responsibility, but might actually be an act of social responsibility in itself, in that it gives our patrons power and agency, including the power to decide for themselves what types and sources of information to avoid, ignore, or challenge. I’m still not sure there is a “right” answer with regard to intellectual freedom and physical access, but returning to the idea of neutrality, either way, we are choosing a side: either we are engaging in gatekeeping and controlling access, or we are doing our best to facilitate access and intellectual freedom. And supporting intellectual freedom is not a neutral position.
I just want to end this section by saying that I don’t think support of intellectual freedom has to be a zero-sum game. I certainly don’t believe that we need to strive for perfectly balanced collections where we have one-to-one matches of perspectives, or that we need to purchase materials that are misleading, unsound, or offensive just to say that we have them. That’s the old “just in case” mentality for collections. With the current abundance of information and overall ease of access, we can take a “just in time” approach that allows us to facilitate access when people want it without having to anticipate every possible request. And I don’t see why we can’t amplify marginalized voices, and try to increase their reach and access to them through our purchasing, displays, the examples we use in classes and guides, or the sources we seek when we work with students, without necessarily denying other voices’ right to exist.
Now, so far, I’ve been focused on physical access to information, but, as I said, physical access is only the first step. Not all information is created equal, and if people are going to use information to inform their decisions and to exercise their rights, they need to be able to evaluate the information they access, to separate the good from the bad, the reliable and trustworthy from the misleading, deceptive, and destructive. Which leads to the idea of intellectual access and information literacy:
Intellectual access has to do with a person’s ability to evaluate, understand, and use the information they have physically accessed. And I and others have made the argument before (Saunders, 2013; Sturges & Gastinger, 2010) that if access to information is a human right, then access to instruction in how to understand and use that information should also be a human right. We could have access to all the best information in the world, but if we can’t understand it, it is meaningless. And just as intellectual access supports physical access by providing meaning to the information, it also supports intellectual freedom in the sense that we can’t truly exercise our right to intellectual freedom unless we can make sense of the information that we access.
And of course, we instruction librarians have an opportunity to play a major role in facilitating intellectual access by teaching people how to evaluate information. And returning to the idea of neutrality, information literacy instruction is incompatible with the idea of not taking a side. To begin with, teaching itself is not a neutral act. Education takes place within a political and cultural system that privileges certain kinds of knowledge and certain ways of demonstrating that knowledge, and higher education is driven by systems of accountability that often focus on workforce development, which has real potential for replicating existing systems. So if we are going to engage in a teaching role at all, we have to be aware of how weighted that is.
And information literacy is another political and cultural system within that system of education. For too long, we’ve presented information and information literacy as neutral or common goods, even as we privileged certain sources of information and certain formats and outlets over others. And when we begin to think about teaching people how to evaluate information, we have to be clear that we are making value judgements. We know that not all information is created equal, and we can’t help people evaluate information without ultimately deciding that some pieces of information are better than others. And this brings us to the idea of objectivity. Now, you might recall that in the survey my student did, the vast majority of people equated neutrality with objectivity. But, just as intellectual freedom is not neutrality, objectivity is not the same as neutrality either. Remember, neutrality means not taking a side. When we evaluate information, we are choosing sides in the sense of making judgements about the value of that information.
The point of objectivity, though, is that we do our best to not let our personal biases and assumptions influence us.
If we embrace the position of objectivity, then our role is to help patrons develop critical thinking skills to evaluate information by using the best evidence depending on the context. But I want to be clear that this is not just about teaching them to evaluate for authority or fact-check or weigh evidence. In fact, I would argue that we have to move away from oversimplified notions of what makes for “good” or “authoritative” information and from exclusively Western or white criteria for examination. More than that, we have to critique some of these systems of authority, like peer review, and acknowledge and integrate new ways of knowing. We have to teach students to be reflective in order for this to work. And to do that, I think we need to expand toward a broader examination of the systems and infrastructures that impact our intellectual access, including the individual cognitive, the social, and the technical or systemic constraints that can interfere with our ability to intellectually access information. That is, we have physically located the information, and we have the cognitive capacity to understand the information, but our biases, backgrounds, and values, as well as the technical infrastructures we use to access information, all influence our ability to interpret, evaluate, or accept the information. In teaching people to evaluate information, we need to extend our instruction to address these biases and influences and take into account how they impact every aspect of information, from creation to dissemination, from access to understanding.
So how do we do this? What do we know and what do we still need to know about these topics?
At this point, I think we are pretty well informed about the reality of cognitive biases, and within the field it seems like we’re taking substantial steps to inform ourselves about these biases and how they might impact information access across the three areas. We know that people tend to seek out and are more likely to believe information that confirms their world view; we know that algorithms and social media tend to exacerbate these issues to some extent; and we know that people employ shortcuts or heuristics when assessing and interpreting information and that these heuristics are, at least initially, automatic and cannot be controlled.
So we know that our biases and backgrounds impact our social access to information, and that these issues are exacerbated by technology. What is somewhat less clear is the extent to which we can overcome these biases. There is a lot of research out there to show that these biases exist, but less about combatting them. However, there are a few encouraging studies. Daniel Kahneman (2013), one of the first scientists to identify cognitive biases, tells us that our brains operate on two modes or levels. Our fast brain makes initial judgements automatically and un- or subconsciously. Our slow brain can make more informed judgements, but this is effortful and we tend to avoid it. However, he also explains that experts develop their own heuristics that are generally more accurate than the layperson’s. This suggests at least that training can improve evaluation, which has been confirmed by other studies, including one that showed training through “serious games,” including video games, might be more effective at reducing fundamental attribution error and confirmation bias than lectures (Shaw et al., 2018). And other research also supports this position. Another study found that deliberation, or slow thinking, led to more accurate evaluations of fake news (Bago & Rand, 2020).
Definitely, this is an area where we need to know more. Cognitive scientists, psychologists, political scientists, and educators, including librarians, need to learn more about how these biases work and the extent to which training can mitigate them. But some of the early research seems promising. In addition to our own biases, we’re constantly learning more about how the systems we use to access information impact our understanding of that information. Thanks to the work of people like Safiya Noble (2018) and others, we know that the information infrastructure exacerbates these issues by exploiting our biases. These researchers have made us more aware of how the programs and algorithms behind the technology we rely on to access information impact what we see and when and how we see it. We are beginning to understand that “free” social media and search platforms that were once viewed as great equalizers have hidden economic and social costs that, unsurprisingly, benefit those who already hold power while often further oppressing those who are already marginalized. We have learned that bad actors are able to access our data and manipulate these platforms to promote disinformation and drive partisan divisions, and that, at least in some cases, those abilities were enabled by the policies of the companies controlling those platforms.
These issues all impact social access to information by driving who sees what, by promoting suspicions and fears, and by eroding trust. What’s more, even if our patrons and students are aware of some of these issues, most don’t feel like they have a strong understanding of them or what to do about them. In their recent study on information literacy in the age of algorithms, Alison Head and her colleagues (2020, p. 14) described “a tangle of resignation and indignation” toward algorithm-driven platforms. These students are hungry for more training and information and are frustrated by the fact that in most cases their faculty are not addressing these topics. And research from the Pew Research Center confirms that most American adults want more training on these digital literacy topics, and they want it from the library.
So, assuming that there is value in training people in this kind of reflection with regard to information, the final question is: how do we do that? We can do this in two main ways: surfacing these issues and helping our students to confront them, and being empathetic in our approach.
A lot of it is about raising awareness: helping our patrons to recognize and acknowledge these biases and their impacts, and to identify biases and assumptions within our cultural and political structures, including our information ecosystems and the library systems we’ve developed. We can do this in big and little ways. We can start with ourselves. Even when we are a guest in someone else’s classroom, and don’t have direct influence on the content, the assignments, or the ways of knowing that are privileged, we can use techniques that Lisa Hinchliffe (2016) and Emily Drabinski (n.d.) have described to integrate examples into our teaching that normalize issues that have been historically marginalized, that surface tensions and problems, and that challenge the status quo. We can explore sources and voices that have not been recognized in the scholarly conversation, and we can problematize systems like peer review. And from there, we can begin to challenge students to think about their own enculturated biases, and eventually we can empower them to challenge and work for change. I often remind my own students that it wasn’t a group of LIS students who lobbied the Library of Congress to change our controlled vocabulary from “illegal aliens” to “undocumented workers,” but a group of undergraduates who recognized a problem and were willing to address it. Now you might ask what this has to do with objectivity… but remember that objectivity is about mitigating bias… and for many of us, that bias is white and Western.
I think the Framework for Information Literacy offers some good guidance for this kind of instruction. The Authority Is Constructed and Contextual frame encourages us to examine our own and others’ biases and how they impact evaluation, and the Scholarship as Conversation and Information Has Value frames remind users to consider whose voices are allowed into the conversation and are invested with authority, which helps us to think about issues of information insiders and outsiders, and how trust is built. All of these frames point to the power structures in information creation and evaluation and, to a lesser extent, I think the Information Creation as a Process frame alerts us to the information structures and how they impact understanding and evaluation.
The Framework gives us an excellent start, but I think we could take it even further. In particular, except briefly in Information Has Value, the Framework does not really address the information infrastructure issues I raised earlier, especially with regard to the social and economic impacts of how information is created and shared on social media platforms, how our data are collected, commoditized, and used, and how our biases are exploited.
Given how much we’ve learned about these issues even just in the few years since the Framework was adopted, I would argue that we need to rethink how we might address some of this. The exciting thing about the Framework is that it was presented from the beginning as a living document and an ongoing conversation, and we have always been encouraged to adapt it. As some of you might know, I’ve suggested in the past that while the Framework does integrate attention to these issues of social justice, bias and representation, the attention is somewhat scattered throughout the Framework and as such might get buried. To that end, I wrote an article in Communications in Information Literacy proposing a new seventh frame that would bring these ideas together in one place.
The basic idea for the frame, which I called information social justice, is that:
Information is created within existing power structures, and those power structures can impact the production and dissemination of information as well as distort, suppress, or misrepresent information. To understand and use information most effectively, users must be able to examine and interrogate the power structures that impact that information, and analyze the ways that information can be used both to inform and to misinform.
The accompanying knowledge practices and dispositions focus on recognizing, interrogating, and challenging the power structures that control information and on employing informed skepticism and reflective practice when evaluating information. They also focus on empowering learners and developing their agency so that they can challenge and change problematic systems. Again, I don’t see this proposed frame as an answer or an end in itself. I have hoped it might be a conversation starter, and a way to spur some discussion about how we address questions of social justice within our instruction and, perhaps more importantly, what outcomes we would want for students. Do we want them to be good consumers in an existing information ecosystem? Or do we want them to be critical and reflective change agents?
At this point, I would encourage us to keep adding to and refining the Framework as a whole, informed by what we are learning about the impacts of technology and our own cognition, and to bring all of this into the library classroom.
Now, I also suggested that if we are going to teach people to be reflective and to challenge their own biases, we have to do it with some empathy. Understanding how people’s biases, cultural values, and backgrounds can impact their interactions with information, we have to be careful not to invalidate people’s experiences and prior knowledge even as we might challenge them to rethink their understanding of certain information.
Rather, we should build on their experiences and prior knowledge while we help them to think about how their lives and understandings are impacted by the powerful forces that wield information and the infrastructure that facilitates that power, and show them how developing their critical thinking abilities can allow them to make better decisions about how they interact with information, what they choose to believe, how they make their data available or what steps they take to protect that data, and ultimately how understanding information gives them the ability to challenge some of those existing structures. Imagine how empowering this approach could be for our students: at a time when they are feeling indignant and resigned, to give them a sense of agency.
But just as we are asking our students to be reflective and to recognize and mitigate their biases, we have to do the same thing. As we teach things like location and evaluation of information, we have to ask ourselves: are we letting our own biases and assumptions influence us? Are we pointing learners toward certain kinds of information, certain perspectives or sources, because we are uncritically accepting them as authoritative, and likewise ignoring or steering them away from other information? Are we privileging certain kinds of knowledge and invalidating others? Ideally, our approach to instruction and evaluation of information should strive to be objective.
Now, I’ve discussed intellectual freedom in terms of physical access and objectivity in instruction and evaluation. Is objectivity incompatible with intellectual freedom? No. The distinction here is that we are placing value on and making judgements about the information… not the person, or their information needs or abilities. In fact, I might even take this a step further. In providing information literacy instruction with a focus on evaluation of information and critical thinking, we are teaching people how to think, not what to think. In that way, we’re still supporting their intellectual freedom, but we are trying to ensure that intellectual freedom rests on a solid foundation of knowledge.
So, to my mind, objectivity and instruction are not incompatible with intellectual freedom. But I would say that none of these three things—objectivity, instruction, and intellectual freedom—is compatible with neutrality in the sense of not taking a side. With each of these things, we are making a decision and we are taking a side. We have to decide whether to support intellectual freedom or to engage in good gatekeeping. We have to decide to apply our knowledge of information sources and structures to evaluate information and to help our patrons learn how to do that.
So what about this question of a neutral profession, in the sense of not taking sides? At this point, I honestly don’t believe that there is any professional expectation that we should be neutral. All of the professional guidance I see supports our taking on various roles related to intellectual freedom, objectivity, instruction, and evaluation of information—none of which are neutral. I think that the idea that librarians are supposed to be neutral is a misreading of the guidance and the intent behind the guidance.
Now, I keep saying that our professional guidance points us in a certain direction, and that those directions are not neutral, and I keep saying that we have to make a decision. But couldn’t we decide not to decide? I want to be clear that I actually disagree when people say that neutrality is a myth or an impossibility. I do think it is possible to choose not to take a side, to decide not to decide. I just don’t think it is desirable to do that. Because while choosing not to choose a side might be a neutral action, we have to acknowledge that the outcome is not neutral—we’re simply putting things off and, in the end the loudest and most powerful voices will generally have their way. And to be clear, being neutral tends to absolve us of making a decision, but that does not necessarily absolve us of responsibility. If things turn out a certain way because of our lack of action, are we less responsible for that outcome than we would be for an outcome that we caused through our actions?
So I’d like to conclude with one final story. This one isn’t my own story, it’s one that I’m borrowing, but I’m doing it because I think it illustrates my point. I’d like to share the story of Dante and the Great Refusal. When I was an undergraduate at Boston University, I had the opportunity to take several classes on Dante’s Divine Comedy, so I spent a lot of time in the Inferno. If you’re familiar with this work, you know that it’s the story of Dante’s love for a woman named Beatrice who has died and gone to heaven. In order to join her in heaven, Dante has to travel through the nine circles of hell and through Purgatory. As Dante first enters hell, passing through the Gate with the famous inscription “abandon all hope you who enter here,” one of the first people he sees is someone that he identifies as the person who made the Great Refusal. Now there are a lot of theories about who this person is, and one explanation is that it’s Pontius Pilate, who famously or infamously refused to make a decision himself about Jesus’s fate, but instead left it up to the will of the people. And there are a few other theories of who this person might be. In the end, though, who this person is doesn’t really matter to my story so much as the fact that this person ends up in hell because he refused to make a decision. He refused to take a side. So, if you’ll forgive the somewhat religious overtones here, I don’t know if we want to trust our fate to a 700-year-old Italian poem but, if we take Dante at his word, then I would say being neutral might be possible, but it might also be damning. Thank you.
References
Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General. https://doi.org/10.1037/xge0000729
Bishop, C.A. (2011). Access to information as a human right (law and society). LFB Scholarly Publishing LLC
Drabinski, E. (n.d.). Teaching the radical catalog. http://www.emilydrabinski.com/wp-content/uploads/2012/06/drabinski_radcat.pdf
Ettarh, F. (2018, January 10). Vocational awe and librarianship: The lies we tell ourselves. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2018/vocational-awe/
Head, A.J., Fister, B., & Macmillan, M. (2020). Information literacy in the age of algorithms. Project Information Literacy. https://projectinfolit.org/pubs/algorithm-study/pil_algorithmstudy_2020-01-15.pdf
Hinchliffe, L.J. (2016). Loading examples to further human rights education. In Pagowsky, N. & McElroy, K. (eds), Critical Library Pedagogy Handbook, Volume 1: Essays and Workbook Activities (pp. 75-84). ACRL.
Kahneman, D. (2013). Thinking, fast and slow. Farrar, Straus and Giroux.
Knox, E. J. M. (2017). Opposing censorship in difficult times. Library Quarterly, 87(3), 268–276. https://doi.org/10.1086/692304
Knox, E. J. M. (2015). Book banning in 21st-century America. Beta Phi Mu Scholars Series. Rowman & Littlefield.
Knox, E. (2011). Intellectual freedom. Public Services Quarterly, 7(1/2), 49–55. https://doi.org/10.1080/15228959.2010.520593
Noble, S. (2018). Algorithms of oppression. NYU Press.
Saunders, L. (2013a). Information as weapon: Propaganda, politics, and the role of libraries. In Mueller, D. M. (Ed.). Imagine, Innovate, Inspire: The Proceedings of the ACRL 2013 Conference, Indianapolis, IN: ACRL.
Scott, D., & Saunders, L. (2021). Neutrality in public libraries: How are we defining one of our core values? Journal of Librarianship and Information Science, 53(1), 153-166.
Shaw, A., Kenski, K., Stromer-Galley, J., Mikeal Martey, R., Clegg, B. A., Lewis, J. E., Folkestad, J. E., & Strzalkowski, T. (2018). Serious efforts at bias reduction: The effects of digital games and avatar customization on three cognitive biases. Journal of Media Psychology: Theories, Methods, and Applications, 30(1), 16–28. https://doi.org/10.1027/1864-1105/a000174
Sturges, P., & Gastinger, A. (2010). Information literacy as a human right. Libri, 60(3), 195–202. https://doi.org/10.1515/libr.2010.017
Weeramantry, C.G. (1995). Access to information: A new human right. The right to know. Asian Yearbook of International Law, 4(102). 10.
Questions and Answers
Jason Ertz: Thank you, Dr. Saunders. We appreciate that. That was wonderful.
Jason Ertz: So I do have a couple of questions right now, but if anybody has any more questions for Dr Saunders, go ahead and pop those in the Q & A box. I will start with the two that I have right now and we can go from there.
Jason Ertz: So the first question I do have from the audience is: Should we focus on training librarians on biases and cultural competencies as much as we train them for library science and instructional skills?
Laura Saunders (she/her): I would say yes. I would say that we can't really teach people how to teach, and we can't teach them how to evaluate information themselves or teach other people how to evaluate information, without understanding these cognitive processes. It's taken me a while to get here, because this wasn't a part of my training either. But I really think that unless we understand people's mental models and how they think and how these biases get in the way, I don't really think that we can teach well.
And I want to be clear: within LIS we've done a lot of research into information behaviors, but we've often done it in a way that's very, to use that word, 'neutral,' right? So we've looked at it as though these are just sort of mental models that exist outside of the person. And so I think what we need to do now, hopefully the direction we will move in, is to contextualize this a lot more with cognitive science, and to do that, I think we need to bring other people into the conversation. I don't know that this is something that we can do by ourselves in LIS, so I think we should be working together with the scientists and psychologists and so on.
Jason Ertz: Might we gauge the size of the effect of a given cognitive bias to better inform the nature and extent of attention we give to it?
Laura Saunders (she/her): Wow, that's a really good question.
I'll have to be honest, I don't know. As far as I can remember, I've never really thought about it that way before, and I've never seen it addressed that way in the literature. Although one thing I will say is that I actually just did a presentation earlier this week for the ACRL Instruction Section on enabling the teachable moment, and in some ways, I guess, I was almost talking about things in this way. Sometimes I'll get questions about, well, what do you do when you're trying to teach and somebody just doesn't want to learn? You know, this idea of a reluctant learner. What I was trying to do in this presentation was to reframe this: are they necessarily reluctant learners? Is it a problem with the learner, or is it a problem with the context and our approach?
Right, and so I guess part of what I was saying is that in some cases there may be some bandwidth issues in terms of the students, where they are cognitively as well as where they might be with other issues. You know, if they're anxious, anxiety can get in the way of critical thinking. But to keep it on cognitive biases, there could be cognitive biases that are getting in their way that have something to do with where they are developmentally and cognitively, and I don't know that we can directly measure that. But I do think that we can assess it in some sort of informal way, to try and get a sense of what the stumbling block is here and where this person is in their thinking. So, using the framework of the zone of proximal development, if somebody is really at an early stage in their thinking on some of these topics and those cognitive biases are really in the way, we might not be able to move them from point A to point Z, but we might be able to move them from A to B. I don't know if this is quite answering that question, but I think it's the best that I can do right now.
Jason Ertz: As a library instructor, I will have faculty request that students find three peer-reviewed sources. When I have sought to dive deeper into some of these unique topics, I find non-peer-reviewed sources that are better than the peer-reviewed pool of articles. How would I get faculty buy-in to adjust the requirements so the student can use that good piece of information that does not check the peer-review box?
Laura Saunders (she/her): I acknowledge this is really challenging. Unfortunately, as I alluded to, we are often guests in other people's classes as instruction librarians, and therefore we often don't have a lot of direct influence on how the content is presented or how the assignments are structured, and that absolutely can be a drawback. I would hope, and this might be a little overly optimistic of me, but I would hope that if a faculty member has extended the invitation, then they're acknowledging our abilities and expertise in this area, and so they might be open to a conversation. So I think that we might be able to just try to engage them and say, 'I was reading over the assignment prompt, this looks great, but as I was preparing for the session, I came across these things...' So try to engage them in a conversation about what they're doing, where the limits might be, and how maybe it's going to lead to better outcomes for their students if they think about it in a slightly different way.
You know, like I said, that might be overly optimistic. A lot of times faculty maybe don't even have the time to engage in these conversations, or their own sort of cognitive bandwidth to rethink some of these assignments, but I would at least try. If we can't do that, then I think we have to be a little more careful how we tread here, but I think that we could acknowledge with the students as we're working through this: here are the things that you're going to want to do for this assignment. Thinking ahead, when you're out in the business world, or when you're working on your career, or when you're trying to make decisions about your personal life, you might also want to look at these other things. So kind of bring this into the situation as something that goes beyond what's in the classroom, so that hopefully it doesn't seem like you're contradicting the faculty member, but you're just bringing in some additional information.
Jason Ertz: Yeah, great, that's what I was thinking: we start where they are and help them kind of work through that, and then deal with the instructor's parameters.
Jason Ertz: Such an important topic to address with our students, but for many of us the only interaction, the only opportunity for us to teach our students, is through one-shot instruction sessions or when a student comes to the reference desk. How can you possibly integrate this topic in that environment?
Laura Saunders (she/her): Yeah, like I said, it's definitely challenging, but there are some really good models out there already. I alluded to this really quickly in my speech, and I did see in the chat that someone said they'd like to read the speech; I'd be more than happy to provide the text along with the citations so that you can follow up on this. But in particular, Emily Drabinski and Lisa Hinchliffe have written about how they try to integrate some of these things into their one-shot sessions. And like I said, sometimes, not to belittle it, it's as simple as using examples that are going to help to surface some of these ideas and some of these tensions and to get students thinking about them and talking about them. Or again, and I'm sorry, I'm only seeing a few things that pop up in the chat, but as someone said there, we can also ask the students directly: why do you think xyz, why do you think you're being asked to look at this, or what would you think about it if it was presented this way? So again, kind of challenging them to start thinking about these things themselves rather than presenting it to them in a whole package. So I do think that there are some ways to do this with one-shot instruction, but absolutely, I have to acknowledge that it's a lot more challenging. The other part of it is that these can be really challenging conversations, especially when you're challenging somebody's worldview. And in a one-shot session you don't have a whole lot of time to build trust with your audience, so that they can understand where these challenges are coming from and that, again, we're trying to do this with empathy and that we've got a focus on a certain outcome.
Whereas, you know, I, for instance, in an intellectual freedom class have an entire semester to build that trust, so I do know that it's challenging but I will try to provide those citations that I think kind of provide us with a start.
Jason Ertz: Do you think that's different than the instruction that goes on in a reference interview?
Laura Saunders (she/her): Oh yes, thank you. So, in a reference interview, I think that in some ways we've probably got a little bit more leeway, because we're not necessarily trying to fit our instruction into some pre-existing system that a faculty member has set up. Of course, one of the main reasons students might come to us is probably because of an imposed query and an assignment, so we are still working within a structure that a faculty member has imposed. But we don't have that faculty member there; we're not in their space anymore, now we're in our space. And so I think that we have a little bit more leeway to work with the student, and depending on what our culture is like and what our workspaces are like and all of that, we may have more time too, because in a one-shot session, even if we've got 60 minutes as a whole, let's say, we only have a couple minutes to interact with each student. In a reference interview, all of our attention can be on this one student for maybe 15 minutes, maybe 20 minutes. And so I think we can spend a little bit more time talking them through what they're doing, bringing in different perspectives, asking them those challenging questions. And there's some good writing out there; one thing that leaps to mind quickly is that Dane Ward wrote an article about structuring the reference interview around information literacy outcomes. This was the old information literacy standards, but I think it could be adapted so that we're going into each interview with that framework in mind.
Jason Ertz: Wonderful Thank you.
Jason Ertz: All right, we'll just keep going; they're coming in now. So what do you think would be the best approach to getting faculty in other disciplines to be more open to accepting non-traditional sources in their research assignments?
Laura Saunders (she/her): So I think this is challenging, and it's interesting because I actually just fairly recently finished a study where I was looking at faculty perspectives on mis/disinformation across disciplines, and there were a couple of interesting things. One thing was that there weren't as many statistically significant differences across disciplines as I might have hypothesized; it seemed like it was more the individual faculty member than the disciplinary perspective that determined whether misinformation is or isn't seen as a problem, or does or doesn't impact the field. But with that said, there were definitely those people who are really far along in their thinking and those people who are not so far along. So there were faculty members who were already talking about how they were changing their teaching methods to address things like searching Google and using Wikipedia and using social media as an information source, and I think one thing we can do is find those people who are already thinking this way and start working with them. Because, to an extent, there might be some snowball effect here: once you've got a few people on board who are saying, hey, this is the way they're doing it, and often these are sort of the leaders in their department as well, they're more on the cutting edge, then they might bring other people along. And once you do something in one person's classroom, it might be easier to do it in other people's classrooms.
But with that said, even though the differences across disciplines weren't necessarily statistically significant, there were some differences, and I would say that there are some fields, probably not surprisingly things like social work and psychology, that are moving a little bit more quickly on some of these questions, as opposed to some fields where, again, it can vary by instructor, but there might be a little more reticence. Just to put it bluntly, I was more likely to hear things like, 'Well, I teach accounting and that's just math, so these questions don't come up here.' And I would be like, I'm not sure I buy that. But again, just like with our students, I think we have to start with a person where they are and try to move them forward slowly by having open dialogues and discussions, and by having those faculty who are early adopters, for lack of a better word, or who are champions of these things, collaborate with us as sort of leaders in their field.
Jason Ertz: Which audiences should we as librarians and information professionals be most focused on trying to impact and teach how to evaluate information and understand their biases? Young students, high school, undergrad, grad, or adults? Are there creative ways to reach these audiences where they are?
Laura Saunders (she/her): That again is an excellent question, and my answer might sound flippant, but my answer seriously would be 'yes, all of these.' If we're just starting to talk about some of these issues of cognitive bias and how to mitigate it at the undergraduate level, that's late. Hopefully these questions are being discussed much earlier, and so again here I think that we in the field of academic librarianship can do some work to partner with our peers in elementary and high schools. I know in Massachusetts they've done this, and I think they're probably doing it in lots of other places, where our state library system will run a 'Your high school seniors, my college freshmen' kind of event and share information. But ideally, we have to start addressing these topics really early on, because those cognitive biases are baked in, and the earlier we can start to unlearn them, I think, the better. But that also doesn't mean we give up on the other end, and I know ALA, for instance, has just put out that practitioners' guide to media literacy for adults. So I think it really does span the spectrum, and I think we can be creative in terms of how we reach people. It doesn't all have to be classroom instruction, and it certainly doesn't all have to be college classroom instruction. We can run events, we can have one-on-one dialogues with faculty, with students, with other people, and we can use social media, as many people are already doing, to try and have some of these discussions or to raise some of these questions in a space and with a group that might not otherwise engage with us.
Jason Ertz: Yeah, that's great. I just had a presentation in a public library on mis/disinformation. You gotta get out there a little bit. All right, the next question is: I couldn't help but think as you spoke about the safety of inaction for those who are not actively harmed by the status quo. If someone doesn't have a fully developed plan, there is a tendency to remain the same rather than make a wrong choice. Do you have any suggestions on how to approach this?
Laura Saunders (she/her): I completely resonate with that. Like I said at the beginning, the status quo is a pretty safe place for me too, and it's challenging even doing a discussion like this, even doing a talk like this. You know, I worry not just about inaction and action, but what if I take action and it's not the right action? What if I do something that ends up unintentionally causing more harm? And that can then become a sort of self-fulfilling prophecy where I start to feel paralyzed. So sometimes it's inaction because I'm comfortable, and sometimes it's inaction actually because I'm uncomfortable.
But I think the first thing that I would do is just try to acknowledge that. If this is something I believe in, then inaction just shouldn't be an option. If this is something that I think is important, as comfortable as I might be, as safe as it might be, it should be something that I want to see changed. If that's the case, then I have to work towards that change. Also, just acknowledge the fact that it's not necessarily helpful to not take action: we're probably enabling the status quo, and we're also not being true to our own values. So these are just broad ways of thinking about it, to try and talk ourselves into the fact that, yes, this might be risky, yes, it might be uncomfortable, but it's something that I believe in. And especially recognizing that people from marginalized communities often don't have that choice, we should recognize that having that choice is such a privilege, and it's a responsibility as well to use that privilege in a way that is going to help undo some of these structures that I've been talking about. So look at it as a privilege and as a responsibility, and then think about the small steps that you can start with.
Just as we're talking about trying to move our students along a certain path, and trying to maybe move our faculty along a certain path, we have our own developmental path to work along. And while we might wish that we were also at point Z, maybe we're not there yet, but what are the steps that we can take to start moving in the right direction? And so, like I said, maybe it just starts with reading, attending some sessions, thinking about these things, educating ourselves a little bit more about what these realities are, whether it's the realities of systemic racism or the realities of cognitive bias. Whatever areas we want to learn more about, you start with that, and there's so much information out there: so many books, so many online videos, and things like that. And then start thinking about, "How can I start to put this into action?" We can ask: what could I do today? What could I do next month? What could I do next year? So what's one small step that I could take? Maybe it is just using a different example in my instruction sessions, you know, using an example that highlights different experiences or brings in different voices. Maybe that's a relatively easy step that I could take early on.
And then starting to think about, "What can I start to work towards on a longer-term basis?" Maybe it's starting to integrate more of these examples, maybe it's starting to actually raise some of these questions. Maybe it's raising them with my colleagues, with the other people who are doing instruction or who are engaging with the other systems and services in our library, and maybe it's starting to have those conversations in other areas, with the faculty that we're working with, with our students. So it's about giving ourselves short- and long-term goals to make it feel a little bit more manageable.
Jason Ertz: All right, a couple more. I think we've got about 10 minutes left, everyone, so if there are questions that we don't get to with Dr. Saunders, I'll forward those questions to her and we'll try to compile similar ones, which can perhaps help you later on. So the next one is: Since this topic and the summit lend themselves more to reflection within the academic library environment, how do we address the topic of information literacy in our public libraries?
Laura Saunders (she/her): Yeah, that's a great question. So I've been trying to make some inroads there myself. You know, a lot of my work does focus on academic libraries, but, as Jason mentioned, I'm on the board of trustees for my public library. A lot of my students are planning to go on a public library career path, so I really try to stay up to date with that, and I've done lots of trainings in public libraries as well. So I think that we've got to do a few things. The first is to think about who our audience is in the public library. This goes back to that other question of who our audience is, and who we are trying to teach or engage with. And although our audience in a public library could be any age and any background, in a lot of cases I think we're focusing on programming and education for adults. So there are a couple of things that I would say here. The first is to think about who our adults are, what their needs are, what their interests are. As I alluded to really briefly in my talk, Pew Research did quite a bit of research on this topic a couple of years ago, and they uncovered the fact that not only do most American adults trust their library, but they are actually very interested in instruction from their library, and in particular in digital literacy instruction. I think they were using that term to encompass things that I would maybe consider more information literacy; I think of "information literacy" as more of the umbrella term. But at any rate, the point is that people are interested in this. We also have to keep in mind that there are going to be a wide range of backgrounds and a wide range of starting points that people are going to be coming to us from.
So I think that we can try to learn as much about our potential audience as possible, and then think about what outcomes we are trying to work towards. On the one hand, we're much more open in a public library. We're not working within somebody else's classroom or somebody else's assignment of "find three peer-reviewed articles," etc. We are able to set some outcomes and some goals for ourselves, but again, they should align with who our audience is and what they might be interested in. And then we also have to think about the fact that our audience in a public library is different. Trying to think of how to phrase this: essentially, our learners in an academic library have an obligation to learn, however more or less engaged they feel with that obligation. They are there, and they are working towards a particular outcome. In a public library, it's much different. You might expect that someone who elects to come to an instruction session or program is interested in it, but again, they don't have a particular obligation to engage with us. So we really do need to think about meeting them where they are. Sometimes we can do this through some little pre-assessments. This works in academic classrooms as well, obviously, but we can do things like asking some open-ended questions, or running some polls, to try and get a sense of where our audience is before we jump into a topic.
We should also think about whatever outcomes we want. You know, when people ask me that question: what about those people who just don't want to learn, or who just don't want to engage? One of the things I try to do, in addition to reframing this so it's not a problem with the learner, is to think about what the outcome is that we want here. Are we trying to change someone's mind? Are we trying to convince them of something? Are we trying to engage in a discussion? Are we trying to help them develop skills that they are going to go on to apply in their own context? So we should try to think about all of those issues as well. And then, finally, one other thing that I would think about is, again, building those collaborations between academic and public libraries, and between public libraries and school libraries, but also thinking about the extent to which we might be able to bring in, especially if we're located near an academic center, some people from those other fields who are doing some of this work as well and coming at it from a different perspective.
Jason Ertz: Well, we're going to do one more question, and I'm going to blend two that seem fairly similar. So, last question, everyone. As you note, our information infrastructures are imbued with exploitative power imbalances meant to reinforce systems of bias. They are pervasive. How do we, especially over the course of a semester, combat the dread that naturally occurs when students think critically about these structures? The tendency seems to go towards a distrust of everything — that's cynicism, right? And this is from the other person's question: how do we handle this trajectory from being skeptical to being cynical, in many cases?
Laura Saunders (she/her): Yeah, that's an excellent question. So there are a couple of things that I would say here. First of all, going through this stage of cynicism is actually part of the developmental stages — although, you know, cynicism might not be exactly the right word. But if we look at Perry's developmental stages, one of the things Perry talks about is that learners initially tend to approach things believing everything has a right and wrong answer, and they're really interested in having someone just tell them what the right answer is. Then, eventually, they get to a point where they start to recognize that there is more nuance, that there are various perspectives, and that it's not necessarily just a right or wrong answer. But at this stage, they tend to start to feel overwhelmed and start to have the reaction that you're talking about, where they start feeling like there's no point, all these positions are equally valid, or maybe they're all equally bad, you can't trust anything, and there's this sense of resignation and giving up. If that were a stopping point, then that would definitely be a problem. But we can see that it's just part of the developmental process and help them understand that, yes, it may seem this way and that's frustrating, but — and this is the final stage, when learners start to understand that there is nuance — even if there's not necessarily always a single right and wrong answer, there is better and worse information; there are better answers and worse answers.
And so they can get past that initial stage of feeling really stuck. So I think we have to recognize where people are and try to meet them where they are, once again. And, as I said, one of the things that we know tends to make people dig their heels in is when they feel like their worldview is being challenged. So approach it so that it feels less personal — less about "yeah, you're wrong about this" and more about "let's talk about how we're thinking about this, let's talk about how we're deciding which information we want to trust, and what might be the outcomes of trusting this." Really focus on the information and the processes and the skills over the personal aspects of it, as much as possible. And, again, I recognize that that's not always terribly easy to do. And I do know we might not be able to get every student to that point by the end of a class or the end of the semester, but our hope is that we're moving them along that way, and hopefully recognizing that for some of them it may take more time, and that's okay.
Jason Ertz: Wonderful. Well, thank you very much, Dr. Saunders, for a very thought-provoking presentation of ideas and the discussion afterwards — as much as we can discuss in a group of 300.
Laura Saunders (she/her): Thank you so much, I really, really appreciate the opportunity.
Jason Ertz: Thank you, everybody, for your great questions and for coming to the kickoff keynote of our Information Literacy Summit. I should say that the first breakout session will start in 15 minutes.
Thank you again, Dr. Saunders, and we look forward to seeing everybody in the breakouts.