Molly White's Citation Needed - Effective obfuscation
Episode Date: November 30, 2023
Silicon Valley's "effective altruism" and "effective accelerationism" only give a thin philosophical veneer to the industry's same old impulses. Originally published on November 25, 2023.
Transcript
I'm Molly White, and you're listening to the audio feed of the citation-needed newsletter.
You can see the text version of the newsletter online at newsletter.mollywhite.net.
Effective obfuscation
Silicon Valley's effective altruism and effective accelerationism only give a thin philosophical veneer to the industry's same old impulses.
This issue was originally published on November 25, 2023.
As Sam Bankman-Fried rose and fell, people outside of Silicon Valley began to hear about effective altruism, or EA, for the first time.
Then rifts emerged at OpenAI, with the ouster and then reinstatement of CEO Sam Altman,
and the newer phrase, effective accelerationism, often abbreviated to e/acc on Twitter, began to enter the mainstream.
Both ideologies ostensibly center on improving the fate of humanity, offering anyone who adopts the label an easy way to brand themselves as a deep-thinking do-gooder.
At the most surface level, both sound reasonable. Who wouldn't want to be effective in their altruism after all?
And surely it's just a simple fact that technological development would accelerate given that newer advances build off the old, right?
But scratching the surface of both reveals their true form: a twisted morass of Silicon Valley
techno-utopianism, inflated egos, and greed, same as it always was.
Effective altruism
The one-sentence description of effective altruism sounds like a universal goal rather than an obscure
pseudo-philosophy.
After all, most people are altruistic to some extent, and no one wants to be
ineffective in their altruism. From the group's website, quote, effective altruism is a research
field and practical community that aims to find the best ways to help others and put them into
practice. Pretty benign stuff, right? Dig a little deeper, and the rationalism and utilitarianism
emerges. Unsatisfied with the generally subjective attempts to evaluate the potential positive impact of
putting one's financial support towards, say, reducing malaria in Africa versus ending factory farming,
versus helping the local school district hire more teachers, effective altruists try to reduce
these enormously complex goals into so-called impartial, quantitative equations.
In order to establish such a rubric in which to confine the messy, squishy, human problems they
have claimed to want to solve, they had to establish a philosophy. And effective altruists
dove into the philosophy side of things with both feet. Countless hours have been spent around
coffee tables in Bay Area housing co-ops, debating the morality of prioritizing local causes
above ones that are more geographically distant, or whether to prioritize the rights of animals
alongside the rights of human beings.
Thousands of posts and far more comments have been typed on sites like Less Wrong,
where individuals earnestly fling around jargon about Bayesian mindset and quality-adjusted life years.
The problem with removing the messy, squishy, human part of decision-making is you can end up with an ideology like effective altruism,
one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.
Take, for example, the widely held belief among effective altruists that it is more effective for a person to take an extremely high-paying job than to work for a nonprofit,
because the impact of donating lots of money is far higher than the impact of one individual's work.
The hypothetical person described in this belief, I will note, tends to be a student at an elite university rather than an average person on the street,
a detail that, I think, is illuminating about effective altruism's demographic makeup.
Anyway, this is a useful way to justify working for a company that many others might view as ethically dubious,
say a defense contractor developing weapons, a technology firm building surveillance tools,
or a company known to use child labor.
It's also an easy way to justify life's luxuries.
If every hour of my time is so precious that I must maximize the amount of it spent
earning so I may later give, then it's only logical to hire help to do my housework
or order takeout every night or hire a car service instead of using public transit.
The philosophy has also justified other not-so-altruistic things.
One of effective altruism's ideological originators, William MacAskill, has urged people not to boycott sweatshops.
Quote, there is no question that sweatshops benefit those in poor countries, he says.
Taken to the extreme, someone could feasibly justify committing massive fraud or other types of wrongdoing
in order to obtain billions of dollars that they could, maybe someday, donate to worthy causes.
You know, hypothetically.
Other issues arise when it comes to the task of evaluating who should be prioritized when it comes to aid.
A prominent contributor to the effective altruist ideology, Peter Singer, wrote an essay in 1971,
arguing that a person should feel equally obligated to save a child halfway around the world
as they do a child right next to them.
Since then, EAs have taken this even further.
Why prioritize a child next to you when you could help ease the suffering of more children somewhere else?
Why help a child next to you today when you could instead help hypothetical children born 100 years from now,
or help artificial sentient beings 1,000 years from now?
The focus on future artificial sentience has become particularly prominent in recent times,
with effective altruist emerging as something of a synonym for so-called AI safety advocates or AI doomers.
Despite their contemporary prominence in AI debates,
these tend not to be the thoughtful researchers who have spent years advocating for responsible and ethical development of machine learning systems
and trying to ground discussions about the future of AI in what is probable and plausible.
Instead, these are people who believe that artificial general intelligence, that is a truly
sentient, hyper-intelligent, artificial being, is inevitable, and that one of the most important
tasks is to slowly develop AI such that this inevitable superintelligence is beneficial to humans
and not an existential threat. This brings us to the competing ideology, effective accelerationism.
While effective altruists view artificial intelligence as an existential risk that could threaten humanity,
and often push for a slower timeline in developing it, though they push for developing it nonetheless,
there is a group with a different outlook, the effective accelerationists.
This idea has been embraced by some powerful figures in the tech industry,
including Andreessen Horowitz's Marc Andreessen, who published a manifesto in October
in which he worshipped the, quote, techno-capital machine, end quote, as a force destined to bring
about an, quote, upward spiral, if not constrained by those who concern themselves with such concepts as
ethics, safety, or sustainability. Those who seek to place guardrails around technological
development are no better than murderers, he argues, for putting themselves in the way of
development that might produce life-saving AI.
This is the core belief of effective accelerationism: that the only ethical choice is to put the
pedal to the metal on technological progress, pushing forward at all costs, because the hypothetical
upside far outweighs the risks identified by those they brush aside as doomers or
decels, short for decelerationists.
Despite their differences on AI, effective altruism and effective accelerationism share much in common in addition to the similar names.
Just like effective altruism, effective accelerationism can be used to justify nearly any course of action an adherent wants to take.
Both philosophies embrace as a given the idea of a super-powerful artificial general intelligence being just around the corner,
an assumption that leaves little room for discussion of the many ways that AI is harming real people today.
This is no coincidence.
When you can convince everyone that AI might turn everyone into paper clips tomorrow,
or on the flip side might cure every disease on earth,
it's easy to distract people from today's issues of ghost labor,
algorithmic bias, and erosion of the rights of artists and others.
This is incredibly convenient for the powerful individuals and companies who stand to profit from AI.
And like effective altruists, effective accelerationists are fond of waxing philosophical,
often with great verbosity and with great surety that their ideas are the peak of rational thought.
Effective accelerationists in particular also like to suggest that their ideas are grounded in scientific concepts like thermodynamics and
biological adaptation, a strategy that seems designed to woo the technologist types who are primed
to put more stock in something that sounds scientific, even if it's nonsense.
For example, the inaugural substack post defining effective accelerationism's principles and tenets,
name-drops the Jarzynski-Crooks fluctuation-dissipation theorem, and suggests that thermodynamic
bias will ensure only positive outcomes reward those
who insatiably pursue technological development.
Effective accelerationists also claim to have, quote,
no particular allegiance to the biological substrate, end quote,
with some believing that humans must inevitably forego these limiting,
fleshy forms of ours, quote, to spread to the stars, end quote,
embracing a future that they see mostly, if not entirely, revolving around machines.
A new coat of philosophical paint.
It is interesting, isn't it, that these supposedly deeply considered philosophical movements that
emerged from Silicon Valley all happen to align with their adherents becoming disgustingly wealthy.
Where are the once-billionaires who discovered, after their adoption of some effective-ism they
picked up in the tech industry, that their financial situation was indefensible?
The tech thought leaders who coalesced and wrote 5,000-word manifestos about community aid and responsible software development.
While there are, in fairness, some effective altruists who truly do donate a substantial portion of their earnings and live rather modestly,
there are others like Sam Bankman Fried, who reaped the reputational benefits of being seen to be a philanthropist,
without ever seeming to actually donate much of his once mind-boggling wealth towards the charitable causes he claimed were so fundamental to his life's work.
Other prominent figures in the effective altruism community rationalized spending 15 million pounds, or around 19 million U.S. dollars,
not on malaria bed nets or even AI research, but on a palatial manor in Oxford to host their conferences.
Similarly, effective accelerationism has found resonance among Silicon Valley elites such as Marc Andreessen,
a billionaire who can think of nothing better to do with his money than purchase three different mansions all in Malibu.
With each manifesto, from his 2011 Why Software Is Eating the World, to his 2020 It's Time to Build,
to his most recent, Andreessen reveals himself as a shallow thinker
who has not meaningfully built anything in about 20 years,
but who is incredibly fearful of losing grip
on where he's sunk his teeth into the neck of the technology industry.
He can see that the shine has worn off
on some of the techno-utopianism
that marked some of the earlier days of the industry,
and painful lessons have taught investors and the public alike
that not every company and founder is inherently benevolent.
The move fast and break things era in tech has had devastating consequences, and the once unquestioning admiration of larger-than-life technology personalities like himself has since morphed into suspicion and disdain.
The threats to his livelihood are many. Workers are organizing in ways they never have in the technology sector.
His firm's reputation has been tarnished by going all in on web3 and crypto,
a decision even he doesn't seem to understand.
The public is meaningfully grappling with the idea that maybe not every technology that can be built
should be built. Legislators are sorely aware of their past missteps in giving technology
a free pass in the name of innovation and are increasingly suspicious of the power of tech
firms and monopolies. They're actively considering regulatory change to place limits on once
lawless sectors like cryptocurrency, and hearings to discuss AI regulation were scheduled at
breakneck speed after the emergence of some of the more powerful large language models.
Despite the futuristic language of his manifesto, its message is clear.
Andreessen wants to go back.
Back to a time when technology founders were revered and when obstacles between him and staggering
profits were nearly non-existent.
When people weren't so mean to billionaires, but instead admired them for, quote,
undertaking the hero's journey, rebelling against the status quo, mapping uncharted territory,
conquering dragons, and bringing home the spoils for our community.
The same is true of the broader effective accelerationism philosophy,
which speaks with sneering derision of those who would slow development and thus profits,
or advocate for caution. In a world that is waking up to the externalities of unbridled growth, in terms of
climate change and of technology's build-now-and-work-the-kinks-out-later philosophy,
in terms of things like online radicalization, negative impacts of social media, and the degree of
surveillance creeping into everyday life, effective accelerationists too are yearning for the past.
More than two sides to this coin.
Some have fallen into the trap, particularly in the wake of the OpenAI saga,
of framing the so-called AI debate as a face-off between the effective altruists and the effective accelerationists.
Despite the incredibly powerful and wealthy people who are either self-professed members of one or the other camp,
or whose ideologies align quite closely, it's important to remember that there are
far more than two sides to this story. Rather than embrace either of these regressive ideologies,
both of which are better suited to indulging the wealthy in retroactively justifying their choices
than to influencing any important decision-making, it would be better to look to the present
and the realistic future, and the expertise of those who have been working to improve technology
for the betterment of all, rather than just for themselves and the few
just like them. We've already tried out having a tech industry led by a bunch of techno-utopianists
and those who think they can reduce everything to markets and equations. Let's try something new
and not just give new names to the old. Thanks for listening to this issue of the citation-needed
newsletter. To learn how to support my work, visit mollywhite.net slash support. If you would like to read
the text versions of these episodes, sign up to receive the newsletter in your email, or support
my work on a recurring basis, go to newsletter.mollywhite.net.
