
Could Longtermism Cause Short-Term Damage?

William MacAskill’s What We Owe the Future is an audacious plea to help future humans through longtermist thinking, but it is blind to what we need now.

What We Owe the Future
William MacAskill
Basic Books
August 2022

Many ideas with great potential to cause damage have at their core at least one fundamentally sound principle. In his new book What We Owe the Future—a white paper for the general public about the philosophy known as longtermism—philosopher William MacAskill defines the school of thought under the heading “The Silent Billions”:

Future people count. There could be a lot of them. We can make their lives go better.

It’s a mind-opening concept and difficult to push back on. Who—short of a cultist who believes the Earth will be destroyed at 5:45 pm on a specific Thursday in 2035 (obviating any need to worry about the future)—would not appreciate that moral logic? One of the oldest arguments for improving the world has been an appeal to make life better for future generations.

However, longtermism, as MacAskill describes it, takes the concept further. Billions of years further. What We Owe the Future looks into the future to calculate just how many people the actions we take today can impact. Estimating that about 100 billion humans have lived up to now and positing that the species might eventually “take to the stars”, he believes that actions taken today could ultimately affect untold trillions of future people. This makes sense, as does his description of modern-day climate change action, which by definition potentially makes things better for the as-yet-unborn, as “proof of concept” for longtermist thinking. Cutting fossil fuel emissions radically now might not have an immediate impact on extreme weather or species extinction rates, but it can slow ice cap melting and reduce sea level rise over the next century.

Again, this is all good. Then MacAskill takes a flier on theories more obscure to the average reader. Assuming that climate change is a settled topic (if only it were), he uses longtermism to address “neglected” issues he sees as “at least as important” as climate change: “the ascent of artificial intelligence, preventing engineered pandemics, and averting technological stagnation.”

You will notice a similarity between these priorities: Each involves quantifiable technology, has a certain hard sci-fi glint to the concept, and does not directly engage with the lives of humans. This is not unusual for the quant-like math-brain figures who tend to cluster on the wonkier fringes of longtermism and can seem more comfortable doing charitable work with a whiteboard and some Excel spreadsheets than dispensing meals to disaster victims. Commentator David Brooks described this personality type as “one of those people who loves humanity in general but not the particular humans immediately around.”

For his part, a moral philosopher like MacAskill seems driven by a true imperative to take dramatic action to help people. His writing about the abolition of slavery (which, contrary to common perception, was still immensely profitable for the British economy when it was outlawed) shows a passion for doing the hard work for the right cause. Nevertheless, there is a somewhat bloodless aspect to his writing that can strip it of emotional impact.

That may be intentional. By definition, longtermism demands a degree of cold-bloodedness. Since resources will always be more limited than needs, any effort or money dedicated to the future means less is available for the present. Most of the U.S. National Institutes of Health’s $45 billion budget goes toward medical research; there, a dollar spent studying lung cancer is a dollar that cannot go toward Alzheimer’s research. People will debate which trade-offs to make, but the necessity of trade-offs between short- and long-term priorities is noncontroversial.

Those trade-offs are a central part of the Effective Altruism (EA) movement, from which the longtermist school grew. EA tries in part to bring math-based utilitarianism to what is generally called (for lack of a better term) charity. EA givers do everything they can to maximize the impact of their giving. In addition to donating a large percentage of their incomes to charitable causes, they also study how to ensure those dollars do the most good for the most people. It’s Moneyball for charity and can be highly effective. Longtermists take that perspective and apply it not to eradicating malaria but to improving things for the future.

So why does longtermism generate such debate, and in some quarters, even fear? This is partially due to longtermism’s tendency to attract finance and tech types who seem more comfortable plotting out Mars colonization than figuring out what to do with the homeless tent city down the block.

What We Owe the Future is intentionally low-key and calm in approach. An associate professor at the University of Oxford, MacAskill uses a voice with a certain soothing cadence that is likely useful in the classroom and follows this style of progression: “If A is true, then B is likely to happen down the road, and that would be bad, meaning we should probably do C to stop B.” As a writer, MacAskill can be a persuasive rationalist. His nightmare scenarios about advanced AGI (artificial general intelligence) being used by corporate or state actors for malicious ends are more plausible than one might imagine.

Where MacAskill and some of his colleagues get crosswise with intellectual opponents is in their narrowness of approach. MacAskill’s book rings with an insistence on intensely driving time and resources towards those few sci-fi scenarios in the Very Bad Someday If True category, instead of the Very Bad Right Now and Definitely True category. The former is sexier to a certain kind of imaginative thinker (whiteboards, Excel, data crunching). That is how you get passages like this:

…even if superintelligent AGI were to kill us all, civilisation would not come to an end. Rather, society would continue in digital form, guided by the AGI’s values. What’s at stake when navigating the transition to a world with advanced AI, then, is not whether civilisation continues but which civilisation continues.

An easy counter to this somewhat abstruse argument is “You lost me after ‘kill us all’.” Not because such a scenario is impossible to imagine—though the number of very smart people working on the killer AGI problem because they believe in Terminator 2‘s Skynet is a bit disturbing—but because what is left to discuss if humanity is extinct?

Another aspect of What We Owe the Future that makes its pitch less persuasive is how it skips past the strongest critiques. Like any good debater, MacAskill anticipates arguments and sets up strawmen to be easily knocked down. He even includes an appendix with responses. They generally make sense: in response to “Future people can take care of themselves”, MacAskill argues we are causing problems for those future people, and “it’s easier to avoid burning coal than it is to suck carbon dioxide out of the atmosphere.” Very true.

Tellingly, however, MacAskill does not spend much time addressing the most salient pushback against his kind of longtermism, namely, “How much present-day suffering do we ignore in favor of helping stop things that might happen to people who might be born millions of years in the future?” The longtermist case too often relies on shock-and-awe math: when there is potential to help even a few percent of trillions of future humans, even massive present-day efforts can seem trivial, since they may help mere millions. That is, of course, unless you are one of those sick, homeless, addicted, depressed, unemployed, unsafe, starving people living now who are not deemed worthy of investment by the longtermist algorithm.

What We Owe the Future is audacious, big-picture thinking whose premise could lead some people to do good, even great things. It discourages presentism and encourages the kind of optimistic, can-do approach to the future required for these times. It is also one of those books (Yuval Noah Harari’s Sapiens also comes to mind) that takes such a high-altitude view of humanity that it is hard to see any of the people at all.

RATING 5 / 10