1 of 46

Effective Altruism – introductory reading and discussion group

slides for an official ECTS-accredited philosophy course at the University of Zurich

independently organised and run by Eleos Arete Citrini

vouched for by Prof. Dr. Peter Schaber

in autumn semester 2021 (3rd undergrad semester)

Effective Altruism Forum post: link

2 of 46

Week 01 – Friday, 24th September 2021
Introduction

3 of 46

Week 02 – Friday, 1st October 2021
EA Switzerland in Bern (1st-3rd October)
UZH event cancelled

4 of 46

Week 03 – Friday, 8th October 2021
Global Health and Development

5 of 46

Global Health and Development (introductory article by Jess Whittlestone)

  • “The case for global health and development as an important cause area”
  • “Some concerns about prioritising global health as a cause area”
  • “Why might you not choose to prioritise this cause area?”

6 of 46

  • “Does it really matter that you don’t have to walk around these children as you walk down your street?” (2m01s)
  • “All lives have equal value.” (5m16s)
    • What are different ways this could be interpreted? If true, what might (not) follow?
  • a potentially very impactful career choice (though not the only such choice) is earning to give: “if you earn a lot of money, you can give away a lot of money!” (9m29s)
    • What do you think about the reasoning behind this? What are the opportunities and risks?
  • some websites Singer mentioned:

7 of 46

  • “Our century is the first in which it has been possible to speak of global responsibility and a global community. […] Our capacity to affect what is happening, anywhere in the world, is one way in which we are living in an era of global responsibility. […] In these circumstances, the need for a global ethic is inescapable. Is it nevertheless a vain hope? […] In a society in which the narrow pursuit of material self-interest is the norm, the shift to an ethical stance is more radical than many people realize. […] If the circle of ethics really does expand, and a higher ethical consciousness spreads, it will fundamentally change the society in which we live.”
    • discuss

8 of 46

  • “Some object that consequences are not the only thing that matters. For example, some people think that acting virtuously or avoiding violating rights matters too. However, all plausible ethical theories hold that consequences are an important input into moral decision-making, particularly when considering life or death situations, or those affecting thousands of people. Indeed these are precisely the types of cases in which people think that it may even become permissible to violate rights. However, in the cases under consideration, there is not even a conflict between producing a much greater good and acting virtuously or avoiding violating people’s rights. The consequences are thus of great moral importance, with no serious moral factors counting in the opposite direction. Proponents of all ethical theories should therefore agree about the moral importance of funding the most cost-effective interventions.”
    • discuss

9 of 46

10 of 46

social evening event on 14.10. @ CEVI

CEVI is at Forchstrasse 58, 8008 Zürich, <1 km from Stadelhofen train station
joining EA Zurich on Slack: link

11 of 46

Week 04 – Friday, 15th October 2021
Animal Welfare
postponed to 26th November

12 of 46

Week 05 – Friday, 22nd October 2021
The Long-Term Future

13 of 46

The Long-Term Future (introductory article by Jess Whittlestone)

  • “The case for the long-term future as a target of altruism”
  • “Some concerns about prioritising the long-term future”
  • “Reasons you might not choose to prioritise the long-term future”

14 of 46

  • On the (un)popularity of nuclear security as a cause area:
    “But if you’re an effective altruist, you don’t give a hoot about whether it’s boring or not, right? It’s irrelevant if it’s boring. You wanna geek out and look at the numbers, the probabilities, the megatons, the impact; and you wanna focus your efforts on what’s gonna do the most good, regardless of how you feel about it. […]
    If you care about climate change, obviously it would be against the spirit of EA to only care about one kind of climate change [and not about nuclear winter] because you’ve heard more about it or feel more emotionally connected to it because your friends are doing it.” (9m06s)
    • discuss
  • On positively shaping the development of artificial intelligence:
    “It’s pretty clear that solving the technical problems alone isn’t enough. […] What kind of future do we ultimately want to create with AI?” (23m48s)

15 of 46

  • Bostrom quoting Derek Parfit:
    “I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
    1. Peace.
    2. A nuclear war that kills 99 per cent of the world’s existing population.
    3. A nuclear war that kills 100 per cent.
    2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater.” (p. 17f.)
    • What is their reasoning behind this, and which assumptions underlie their conclusion? What do you think about those assumptions? Under which assumptions would one reach different conclusions?
  • “Many different normative perspectives thus concur in their support for existential-risk mitigation, although the degree of badness involved in an existential catastrophe and the priority that existential-risk mitigation should have in our moral economy may vary substantially among different moral theories.” (p. 24)
    • Can “x-risk reduction as one of the top priorities” be reconciled with common sense ethics that avoids the fanaticism resulting from taking expected value estimates at face value? Which normative perspectives ascribe substantially less value to reducing x-risks than Bostrom’s and how plausible do you think they are? Are there plausible normative perspectives ascribing disvalue to (pure) x-risk reduction?

16 of 46

Week 06 – Friday, 29th October 2021
EA Global in London (29th-31st October)
UZH event cancelled

17 of 46

Week 07 – Friday, 5th November 2021
Putting EA into Practice – Part 1/2

18 of 46

On Caring (by Nate Soares)

  • “Rather, I'm trying to point at a shift in perspective. Many of us go through life understanding that we should care about people suffering far away from us, but failing to. I think that this attitude is tied, at least in part, to the fact that most of us implicitly trust our internal care-o-meters.
    The "care feeling" isn't usually strong enough to compel us to frantically save everyone dying. So while we acknowledge that it would be virtuous to do more for the world, we think that we can't, because we weren't gifted with that virtuous extra-caring that prominent altruists must have.
    But this is an error — prominent altruists aren't the people who have a larger care-o-meter; they're the people who have learned not to trust their care-o-meters.
    Our care-o-meters are broken. They don't work on large numbers. Nobody has one capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.”
    • discuss

19 of 46

  • “I’m not saying we should just always accept common sense. I mean, if this were the case, we wouldn’t need EA. The whole point of EA is that it seems that common sense is in many cases horribly wrong and we need to make progress.” (25m30s)
    • Where do you think common sense is horribly wrong? What should the role of common sense in EA be?
  • “Doing the most good by using reason and evidence is a really simple idea, but actually applying it in the real world is extremely difficult and it’s so difficult that sometimes we can go wrong and we should be aware of these biases and mistakes that we can make in order to avoid them.” (29m33s)
    • Which of these biases and mistakes are you most worried about and why? And how should EA(s) deal with that?

20 of 46

Doing Good Effectively (short interview with UZH philosopher Stefan Riedener)

  • On “How does helping others benefit us?”:
    “It’s part of leading a successful life. If you only ever look after yourself, you’re missing something. Such a life would be poor and pitiful, perhaps without any greater purpose. On the contrary, if I help others, I will one day be able to look back on my life with a sense of happiness and say that my life has had meaning.”
    • discuss

21 of 46

Week 08 – Friday, 12th November 2021
Global Catastrophic Biological Risks

22 of 46

  • “The Spanish flu pandemic was remarkable in having very little apparent effect on the world’s development, despite its global reach. It looks as if it was lost in the wake of the first world war, which, despite a smaller death toll, seems to have had a much larger effect on the course of history.”
    • Relatedly, what might we be missing if we focus only on the death tolls when trying to compare the badness of two events (e.g. a pandemic and a war)?
  • “The problem is not so much an excess of technology as a lack of wisdom. Carl Sagan put this especially well: ‘Many of the dangers we face indeed arise from science and technology – but, more fundamentally, because we have become powerful without becoming commensurately wise. The world-altering powers that technology has delivered into our hands now require a degree of consideration and foresight that has never before been asked of us.’”
    • a bit tongue-in-cheek (beware of false dichotomies):
      Should we push for more wisdom while pushing against more technology, or for more technology and even more wisdom? Do we need technological solutions to technological problems, or do we need to stop creating more technological problems (first)?

23 of 46

  • “Another consideration is thinking about state-run bioweapons programs vs individual bioterrorists. There are a lot of reasons to believe that one is nastier than the other.” (16m20s)
    • Which problem are you more concerned about? What do you think about their respective scope, tractability, and neglectedness?
  • “Should dual-use information be spread, or contained? And in which circumstances?” (21m37s)

24 of 46

Information Hazards in Biotechnology (by Gregory Lewis, Piers Millett, Anders Sandberg, Andrew Snyder-Beattie, and Gigi Gronvall)

  • “Biological information cannot be neatly segregated into the safe and open, and the hazardous and secret: much is to a greater or lesser degree “dual use”; it is also often incremental, building upon prior information that is openly available. Further, the appropriate degree of openness or secrecy is not solely intrinsic to the information in question, but also depends on the characteristics of potential good or bad actors that might (mis)use it. Given the possibility of deliberate misuse by intelligent adversaries, both openness and secrecy can backfire in surprising ways.” (p. 976)
    • discuss
  • regarding lab accidents:
    What type(s) of research, if any, should not be conducted in the first place? And how should we go about regulating this?

25 of 46

Week 09 – Friday, 19th November 2021
Positively Shaping the Development of Artificial Intelligence

26 of 46

Benefits & Risks of Artificial Intelligence (by the Future of Life Institute)

  • “Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree.
    What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?
    Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way?
    Please join the conversation!”

27 of 46

Intro to AI Safety, Remastered (18:04) (by Robert Miles)

  • Miles quoting Stuart Russell:�“A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values. If one of those unconstrained variables is something we care about, the solution found may be highly undesirable.” (9m57s)
    • So what should we do about this? (A toy illustration of the failure mode Russell describes follows after this list.)
  • Convergent instrumental goals: self preservation, goal preservation, resource acquisition, self improvement (14m36s)
    • To what extent do (arguably) convergent instrumental goals help us predict the behaviour of artificial agents exhibiting general intelligence on par with or even exceeding human intelligence? What else might help us predict it?
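A minimal toy sketch of the failure mode Russell describes (my own illustration, not an example from the video or from Russell): a numerical optimizer is given an objective that mentions only one of the two variables it effectively controls, and the ignored variable, standing in for something we care about, ends up at an extreme value. The variable names, the 0-10 scale, the coupling between the two quantities, and the use of scipy.optimize are all hypothetical choices made for this example.

```python
# Toy example (hypothetical quantities): the objective rewards only
# "engagement"; "wellbeing" never appears in it, so nothing stops the
# optimizer from driving wellbeing to an extreme as a side effect.
from scipy.optimize import minimize

def objective(x):
    engagement, wellbeing = x
    # wellbeing is ignored here, i.e. unconstrained from the objective's
    # point of view, which is the situation Russell's quote describes
    return -engagement  # minimizing the negative = maximizing engagement

# assumed coupling for illustration: pushing engagement erodes wellbeing
constraints = [{"type": "eq", "fun": lambda x: x[1] - (10.0 - x[0])}]
bounds = [(0.0, 10.0), (0.0, 10.0)]  # both quantities on a 0-10 scale

result = minimize(objective, x0=[1.0, 9.0], bounds=bounds, constraints=constraints)
print(result.x)  # approx. [10.0, 0.0]: engagement maxed out, wellbeing at its floor
```

The specific numbers are meaningless; the point is that the objective alone gives the optimizer no reason to keep the left-out variable away from an extreme.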

28 of 46

The Ethics of Artificial Intelligence (by Nick Bostrom and Eliezer Yudkowsky)

  • “Considering the ethical history of human civilizations over centuries of time, we can see that it might prove a very great tragedy to create a mind that was stable in ethical dimensions along which human civilizations seem to exhibit directional change. […]
    This presents us with perhaps the ultimate challenge of machine ethics: How do you build an AI which, when it executes, becomes more ethical than you? […]
    If we are serious about developing advanced AI, this is a challenge that we must meet. If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness.” (p. 16f.)
    • How should we go about tackling this “ultimate challenge of machine ethics”?

29 of 46

EA Switzerland Christmas Social - Zurich
on 12.12. @ CEVI (click link above to sign up)

30 of 46

Week 10 – Friday, 26th November 2021
Animal Welfare
postponed from 15th October

31 of 46

Animal Welfare (introductory article by Jess Whittlestone et al.)

  • “The case for animal welfare as an important cause area”
  • “Some concerns about prioritising animal welfare as a cause area”
  • “Why might you not choose to prioritise animal welfare as a cause area?”

32 of 46

A Meat Eater's Case For Veganism (24:06) (by CosmicSkeptic aka Alex J. O'Connor)

  • O’Connor quoting Jeremy Bentham:
    “The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sentient being?” (10m05s)
    • What are the necessary conditions and what are the sufficient conditions for moral patienthood? What is the relevance of sentience and of sapience here? What else might be relevant here? And how should we act given the uncertainty about e.g. the sentience and sapience of others? And can entities that are not individuals, such as ecosystems, have intrinsic moral value?
  • O’Connor building upon John Rawls’ Veil of Ignorance (10m56s):
    • What do you think about this revised thought experiment?

33 of 46

Why farmed animals? (by Jon Bockman) on animalcharityevaluators.org

  • “Each life is inherently valuable, and we want to spare as many lives as we possibly can from suffering. Numbers help us understand and describe situations that are too large and pervasive for us to understand, which enables us to maximize our impact. Just like an animal shelter might make a tough choice to accept 20 puppies instead of 5 adult dogs because puppies are more “adoptable,” we need to use numbers to inform our work so that we can help as many individual animals as possible.”
    • What are different ways this could be interpreted? If true, what might (not) follow, e.g. regarding cause prioritisation or interventions regarded as among the most promising ones?

34 of 46

  • Eskander quoting Richard Dawkins:�“The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease.” (2m49s)
    • What might humanity be able to do about this in the near-, medium-, and long-term future? What should we (humanity / EAs / animal advocates) do about wild animal suffering (WAS)?
  • “One very common objection is that [WAS] is not tractable. There is the very common idea that it is too impractical to intervene in the wild right now, because it’s too costly, we just don’t know enough and it takes too much work. A related objection is that we don’t understand ecosystems well enough to know whether or not interventions could have unintended side effects. However, I think, rather than to suggest that [WAS] has low tractability, what it suggests is that the tractability of [WAS] is uncertain. I think we just don’t know enough about the problem or about what solutions could look like to know whether [WAS] is tractable.” (6m29s)
    • discuss

35 of 46

EA Switzerland Christmas Social - Zurich
on 12.12. @ CEVI (click link above to sign up)

36 of 46

Week 11 – Friday, 3rd December 2021
Buffer (topic/content could be attendee-driven)

37 of 46

Week 12 – Friday, 10th December 2021
Global Priorities Research and Moral Uncertainty

38 of 46

  • “[T]he tree of ought suggests a rather counter-intuitive idea that does not seem shared by many, namely that contemplating fundamental values […] should be our first priority. I think this is largely correct, at least if we do not have a highly qualified answer in place already. Our fundamental values can fairly be thought of as the point of departure that determines our forward direction, and if we take off in just a slightly sub-optimal direction and keep on moving, we might well end up far away from where we should ideally have gone. In other words, being a little wrong about fundamental values can result in being extremely wrong at the level of the specifics, which is why it is worth spending a lot of resources on being extremely well-considered about the fundamentals.
    So contrary to what we may naively assume, the tree of ought suggests that the question concerning fundamental values is not an irrelevant, purely theoretical question that prevents us from doing something useful. Rather, it is the question that determines what is useful in the first place. And answering it is far from trivial.” (p. 6f.)
    • Why might reflection on fundamental values and cause prioritisation (not) be key priorities?

39 of 46

  • Which of the five outlined responses to what Greaves calls cluelessness are you most sympathetic to? What are your worries about each of them? (14m12s)
    • 1) make the analysis more sophisticated
    • 2) give up the effective altruist enterprise
    • 3) make bolder estimates
    • 4) ignore things we can't even estimate
    • 5) “go longtermist”
  • “Considerations of cluelessness are often taken to be an objection to longtermism, because of course it’s *very* hard to know what's going to beneficially influence the course of the very far future on timescales of centuries or millennia. Again, we still have the point that we can't do randomised control trials on those time scales. However, what my own journey through cluelessness has convinced me, tentatively, is that that's precisely the wrong conclusion, and in fact, considerations of cluelessness *favour* longtermism rather than undermining it.” (26m58s)
    • discuss

40 of 46

Why We Should Take Moral Uncertainty Seriously (by William MacAskill, Krister Bykvist, Toby Ord)

  • “In this chapter, we have seen that there is a strong case for the position that there are norms (besides first-order moral norms) that govern what we ought to do under moral uncertainty. This position is intuitive and can be made sense of by identifying these norms either with higher-level moral norms or with norms of rationality for morally conscientious agents.” (p. 38)
    • discuss
  • How should (the importance of) taking moral uncertainty seriously affect individual EAs and the EA movement at large?

41 of 46

Week 13 – Friday, 17th December 2021
Putting EA into Practice – Part 2/2

42 of 46

  • “Effective altruism is a young movement committed to certain claims and ideals the details of which are still being worked out. Understood as broadly welfarist, consequentialist, and scientific in its outlook, the movement is vulnerable to the claim that it overlooks the importance of justice and rights, is methodologically rigoristic, and fails to isolate the activities likely to have the greatest impact overall. In most cases, I have shown that effective altruists are able to respond to these objections, though sometimes this would mean changing their modus operandi in significant ways.” (p. 14)
    • In general, which critiques of (aspects of) EA do you find most weighty? Have you come across points of criticism that you feel are not (or no longer) justified? How preoccupied should we be with criticism from outside and from within?

43 of 46

  • “I don’t think the liability framing is necessary or correct. My view is that the qualities of effective altruism that make it hard also provide a rich source of opportunity.” (0m44s)
    • What are your thoughts on the liability framing vs the opportunity framing?
  • Regarding rapid developments in various areas and the better actions we now – with the benefit of hindsight – know we could have taken:
    “Literally think about something that has happened in the world that if you had known it one year ago or three years ago, whatever makes sense for you, you would have made different decisions about how to make the world a better place.” (6m21s)

44 of 46

Advice for undergraduates on 80000hours.org

  • What were your key personal takeaways from this?
  • Were there considerations in the article that you feel do not apply to you or that you feel are no longer or not yet relevant?
  • What are your current key uncertainties regarding the next few years of your career?
  • How could you reduce pertinent uncertainties or make your career plans more robust?

45 of 46

Take action on effectivealtruism.org

  • What are your EA-related plans?
  • How did you come up with your EA-related plans?
  • Which of your EA-related plans might interfere positively and which might interfere negatively with your other plans?
  • Why/how are you currently motivated to pursue EA-related plans?
  • How might other EA community members be able to help you?
  • How might you be able to help other EA community members?
  • How might you be able to help your future namesake?

46 of 46

UZH Students' Lecture Series (“Ringvorlesung der Philosophiestudierenden”)