I was a superstitious kid, avoidant of sidewalk cracks and black cats, a kid who slept facedown to avoid exposing my neck to vampires. I harbored a vague terror that naming my fears out loud would make them come true. So instead, I went to Yahoo with them. I thought Yahoo could tell me, specifically, the chances that my sister would die. I used the baroque, quotation-mark-heavy syntax common at the time—“ewing sarcoma” and “death,” “ewing sarcoma” and “prognosis”—but came up blank. I never did get up the nerve to take the question to a human being who might be able to answer.
Today, more than one billion websites exist. We often complain about the abundance of digital content as if it had been thrust at us without our consent, leaving us with little choice but to consume it, like guests at a dinner table who, presented with a too-heaping plateful of rice and curry, have no option but total ingestion to avoid offending the host. The truth is that before the content existed, there were people like me who, through the act of searching, communicated a desire for answers. That is, for content.
We millennials just happened to be graduating from high school under particular economic and social conditions. We were living through the longest economic expansion the United States had ever experienced, driven in part by free-trade policies, under the WTO and deals like the North American Free Trade Agreement, that privileged U.S. corporations and the managerial class running them over everyone else in the world. Those policies had oriented the United States toward the high-tech invention taking place in cities like Seattle and away from the manufacturing in the middle of the country. They had also attracted immigrants, including people fleeing growing inequality in Mexico that could be traced at least partly to NAFTA. Millennials weren’t ambitious, tech savvy, or diversity oriented by nature, any more than Gen Xers (having become adults during a recession) were slackers by nature, or the Greatest Generation (having become adults during the deadliest war ever) were great by nature.
It started, at least as far back as I was conscious of it, with the Kingdome. They wanted a new stadium, so we gifted it to them. Then they wanted South Lake Union, so we gifted it to them. Then they asked for the elephant. They wanted to have it. By then it felt like we didn’t have much choice in the matter. We gifted it to them… Amazon wanted control of all the buying and selling in the world. We gifted it to them.
The French sociologist Pierre Bourdieu, in 1986, famously named three different types of capital: economic capital, which can be measured by one’s financial access; cultural capital, which can be measured by one’s access to culture, academic credentials, knowledge, and information; and social capital, which can be measured by the status conferred by one’s social connections. Each of these three forms of capital, Bourdieu argued, can be exchanged for the others. To the extent that capitalism’s staying power rests on the myth that its benefits are readily available to anyone, it’s interesting to consider the first big internet companies through Bourdieu’s lens. If it could be said that Amazon, with its infinite catalog of cheap products, appealed to our desire for economic capital, and Google, with its infinite provision of information, to our desire for cultural capital, then Facebook was about social capital.
If an alien were, in this moment, to access Earth’s internet from deep space and search for the word “apple,” the first result would probably not be a fruit. If it searched for “Amazon,” it would probably not immediately find a river. If it searched for “alphabet,” it would probably not discover, at first, any of our four thousand written systems of communication. What would it make of this planet we inhabit? What kinds of colonized creatures would it picture living here?
So I ask myself: is it possible to invent a technology of communication ourselves that can overcome what separates and hierarchizes us? Or would any technology only reinscribe the separations and hierarchies that already exist? In the end, perhaps it is better to do what is more difficult: to keep trying to improve our communication using the free tool we already have, which is language. Perhaps what makes life easier is not always better.
To be fair—to myself—I had published “Ghosts” partly as a provocation. I wanted to bring attention to a promise I imagined AI companies might make in the future—that they could help us tell our own stories—and then demonstrate that promise getting uncannily close to coming true while ultimately being broken. At the same time, I wanted to demonstrate my own complicity in taking the bait in the first place. But I could also see an argument that, whatever my intent, the exercise of “Ghosts” had been fundamentally corrupt—that my desire to make a point mattered little, set against my collusion with the harmful practices required for “Ghosts” to exist. A small part of me even hoped critics—at least one critic—would censure me for it.
Instead, the opposite happened. One writer cited my piece in a hot take with the headline “Rather Than Fear AI, Writers Should Learn to Collaborate with It,” using it as evidence that people and AI “will learn to coexist, with humans using their deep learning-based counterparts to enhance and improve their prose.” Teachers assigned “Ghosts” in writing classes, requiring students to produce their own AI collaborations. A beloved indie filmmaker emailed me looking for advice on using AI in a screenplay. A venture capitalist invited me to help judge an AI writing contest for established authors and journalists. Knowing that the proliferation of AI language models would depend on convincing people that the benefits were worth the costs, I was starting to feel that I’d contributed to making that case.
The machine-generated falsehoods compelled me to assert my own consciousness by writing against the falsehoods.
As far as I could tell, what distinguished the productization of AI so far had been not its impressiveness but the speed with which corporations had insinuated it into our lives despite its frightening unimpressiveness.
[Shoshana] Zuboff writes that the propagandists of surveillance capitalism present its surveillance apparatus—which began online and has now found its way into our homes, automobiles, and bodies—“as the product of technological forces that operate beyond human agency and the choices of communities, an implacable movement that originates outside history and exerts a momentum that in some vague way drives toward the perfection of the species and the planet.” This rhetoric of the inevitable has already been successful, she adds, to the extent that it has provoked in citizens a sense of resigned helplessness. But there’s no natural law by which power and wealth should necessarily keep accruing to the same people, just because they have in the recent past; if human history has shown us anything, it’s that, for better or worse, human beings are endlessly capable of twisting circumstances to our will. Zuboff cites Hannah Arendt’s characterization of the concept of will itself as the mental device oriented to the future, just as memory serves as the mental device oriented to the past.
I don’t know what it is called when mutual understanding breaks down—when, for example, a speaker believes he is making a declaration, and his audience believes he is making a joke. “We have made a soft promise to investors that, once we build this sort of generally intelligent system, basically we will ask it to figure out a way to generate an investment return for them,” Sam Altman told one onstage interviewer, years before ChatGPT came out, when OpenAI hadn’t yet monetized its research. Laughter gently emanated from the audience, and Altman offered the slightest smile. “It sounds like an episode of ‘Silicon Valley’—it really does, I get it, you can laugh, it’s all right,” he said. “But it is what I actually believe is going to happen.” The laughter subsided.
[…]
“Imagine a world in which we cure cancer, Alzheimer’s, and other crippling diseases in just a few years’ time,” Senator Todd Young, a Republican who helped come up with the plan [to invest in AI], said at a press conference… This language sounded familiar to me. I wondered if Young had read Altman’s manifesto. Then I wondered if the statement had been written by Altman. Then I wondered if it had been written by one of Altman’s products. Young was speaking with a straight face. This time no one laughed.
[Ursula K. Le Guin] ended her speech with a reminder that since the future is not yet written, we have the chance to write it ourselves: “We live in capitalism, its power seems inescapable—but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.”