Swimming Against the Tide
Forget shrimp. Electrons are the new hotness!
Amaze! of the week: Good News … from Vox?
Article of the week (which prompted the below): Do Shrimp Suffer? by Gabriel Vasquez-Peterson.
Song of the week: The Chicks “The Long Way Around”
Reminder of the week: You can unsubscribe at the link at the bottom (sorry it isn’t easier).
tl;dr - EV-measuring-contests are leading loads of people to waste their lives, causing much more real, intense, unnecessary suffering.
Longer tl;dr
Utilitarianism is fatally flawed. (“Biting the Philosophical Bullet,” p. 379 here; posts)
Expected values of “total suffering” are delusions.
It literally took me three decades to recognize this.
Sorry.
Preface: Mutual Mental Masturbation
You work to reduce cruelty to chickens? But there are more fishes!
No, no - focus on shrimp!
Honey is the cruelest product!
Loser! There is a holocaust right on your face!
You monster - electrons FTW!
(The shrimp (obv) and honey are real; I don’t want to link to them. More on why you need to Save The Electrons below.)
Introduction: Mathletes against Progress
An earlier post about fish and advocacy came to mind when, as mentioned above, Gabriel dared to question the sea bugs mafia. That led me to think of the two pieces I reproduce below.
This matters because loads of intellectually-talented keyboard kommandos waste their time trying to one-up each other about their concern-of-the-moment’s expected values (EV). They argue we should care more about bugs than a mother watching her child suffer to death.
These EV Activists are like bulldogs*, attacking those who are “wrong on the internet,” rather than doing real (but boring) work to create actual progress. They would rather preach to other mathletes than help those whose probability of intense suffering is 100%.
Am I doing that with this column? Probably. And I spend my days working to reduce cruelty to chickens, not eradicating Guinea worm. Guilty, as explained in the chapter “Biting the Philosophical Bullet” referenced above (p. 379 here).
Sorry. I did try this.
*Apologies to bulldogs.
Bullet Biting: Short Edition
from 2022; please see p. 379 here for full argument
I have written a fair amount about the problems I have with utilitarianism and “effective” altruism (1, 2, 3). Recently [2022], I found this discussion by Holden Karnofsky to be a good example of the fundamental error at play:
I’ll give my own set of hypothetical worlds:
World D has 10^18 flourishing, happy people.
World E has 10^18 horribly suffering people, plus some even larger number (N) of people whose lives are mediocre/fine/”worth living” but not good.
There has to be some “larger number N” such that you prefer World E to World D. That’s a pretty wacky seeming position too!
I don’t think there is any number N that works here. That is, you can’t offset horrible suffering with any number of other people.
As Holden notes, taking a non-utilitarian position leads to some counter-intuitive outcomes, like not valuing more happy people over fewer happy people. But as I've written elsewhere, “It is the summing across individuals that really gets me. There is no entity experiencing ‘all the suffering [happiness, net utility] in the universe.’ Only individuals suffer -- the universe doesn’t suffer [or experience aggregate happiness or net utility].”
It is just our intuition that more is better. I held this view for most of my life, but it is now obvious that this assumption is simply false. It feels right to want more total happiness, but “total happiness” doesn’t actually exist in our universe. Only individuals experience (finite) happiness. There is absolutely no ethical relevance to “the total net happiness in the universe.” It doesn’t exist.
Total net anything is just a dream that exists in the minds of utilitarians.
Once you see the folly in maximizing a fictitious variable, you avoid morally offensive conclusions. For example, you aren’t ethically obligated to torture someone in order to provide a slight pleasure to N others. (You also aren't personally ethically obligated to have as many children as possible, a consequence of utilitarianism that Holden tries to hand-wave away.)
You might wonder why I continue to flog this issue. It is because it is upsetting that so many smart individuals dedicate their 80,000 hours trying to one-up each other's expected value while there is so much acute and unnecessary suffering in the world.
Every moment of an electron's existence is suffering
Note: I did not write this, but I wish I had.
Excerpts:
Scale: If we think there is only a 1% chance of panpsychism being true (the lowest possible estimate on prediction websites such as Metaculus, so highly conservative), then this still amounts to at least 10^78 electrons impacted in expectation.
Neglectedness: Basically nobody thinks about electrons, except chemists, physicists, and computer engineers. And they only think about what electrons can do for them, not what they can do for the electrons. This amounts to a moral travesty far larger than factory farms.
Tractability: It is tremendously easy to affect electrons, as shown by recent advances in computer technology, based solely on the manipulation of electrons inside wires.
Electrons are suicidal
If electrons can only sense the charges of their neighbors, know that positrons are positively charged, and know that contact with a positron means the pair will immediately annihilate (all reasonable assumptions by any metric), then the only reason an electron would travel in the opposite direction of an electric field is in the hope of colliding with a positron, thereby ending its existence.
This means every moment of an electron’s existence is pain, and multiplying out this pain by an expected 10^78 produces astronomical levels of expected suffering.
Since pain is worse than pleasure is good, and it seems highly unlikely that electrons would run toward certain death if their lives were pleasurable, this possibility dominates the moral calculus in this scenario.
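For what it’s worth, the “Scale” arithmetic above is just a single expected-value multiplication. A minimal sketch, assuming the ~10^80 figure often quoted as the order of magnitude for electrons in the observable universe (the excerpt only states the 10^78 result, so that input is my assumption):

```python
# Expected-value arithmetic from the "Scale" point, done with exact integers.
# Assumption: ~10^80 electrons in the observable universe (not stated in the post).
p_panpsychism_percent = 1          # the "highly conservative" 1% estimate
n_electrons = 10**80               # assumed total electron count

# Electrons "impacted in expectation" = probability * population
expected_impacted = n_electrons * p_panpsychism_percent // 100

print(expected_impacted == 10**78)  # True
```

Which is exactly the point of the satire: multiply any probability, however tiny, by a big enough population and the “expected suffering” swamps everything else.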
Epistemic status: Certain
Related past post: Why Not Fish?
Chart by Ben Davidow.



Great post! Definitely agree with not obsessing on expected values. Especially with dubious claims of extreme "suffering" which may mostly just be nociception.
However, to counter-argue: when faced with the choice of helping two individuals in extreme suffering versus one, isn’t it "better" to assist the two rather than the one?