The Only Ethical Model for AI is Socialism
Today's large language models (LLMs) are produced from data generated by the public, so they can't ethically be owned by any one individual. Socialism is the only fair way to govern this technology.
Let's start with a hypothetical person called "Kath"—short for "Kathénas," Greek for "everyone." Kath has friends, a home, children, and a 9-to-5 job in a Western economy. Kath spends her day—as we all do—feeding the vast array of databases driving today's AI. That data, generated from online activity, can include everything from purchase history and health status to probable religious beliefs and sexual orientation. It includes language culled from emails, YouTube videos, web pages, Instagram posts, and other content she has posted. Her uploaded images and her actions on social media, as well as her video-watching behavior and preferences, are also tracked. Kath's highly valuable data is sifted, organized, and loaded into massive databases. As she sleeps, a phone-based sleep tracker records her sleep patterns.
Now she awakens. Kath's phone alarm is set to an unusual listening choice: Nyabinghi Rastafarian chants from the 1960s. That selection will be used to refine future music suggestions for her and other users with a similar profile. Kath skims news stories and friends' posts on Facebook as she eats breakfast—"liking" some, watching a few videos while skipping others, and adding comments to one or two. Each action is recorded. Her parents want pictures of the kids, so Kath logs onto a photo-sharing site, where a CAPTCHA requires her to complete a simple test to identify letters or images before she can upload her photos. This test helps to train an image-recognition AI system. Her uploaded pictures feed yet another AI. Unbeknownst to her, something that closely resembles—or that may even be identical to—her youngest child's drawing of an airplane in red crayon will appear next week on an AI site after a user types, "Draw an airplane like a child."
Kath calls the publicly subsidized rail service to book a seat for an upcoming business trip, which entails a "conversation" with a voice-driven system that has a woman's name. It initiates their exchange with a cheerful greeting in a feminine voice: "Hi! I'm Julie!" The system's behavior and Kath's responses are tracked. Her next call is to her insurance company; she speaks to a call center operator. That conversation—the words she speaks, the emotions she expresses, and even minute inflections in her voice—will later be analyzed.
Kath then asks Alexa to play some string quartet music; that, too, is tracked. She makes a phone call. Are her words also mined? We don’t think so—at least not yet—but we can’t know for sure. It’s technically possible to harvest phone calls in this way, and the tech world has become increasingly secretive about its internal processes, including what it mines and why.
At work, almost everything about Kath is monitored: typing speed, number of pauses, even webcam information that includes “biometric data such as eye movements, body shifts, and facial expressions,” which is used to determine whether she is focused on her work and paying attention during video calls. Soon, it is rumored, this data may be used to create the AI that replaces her.
As she heads toward the break room for a cup of coffee, an algorithm using details about her life, including her income and occupation, predicts the date of her death.
Researchers, academics, and AI company CEOs believe that new AI technology will radically transform our lives—or is already doing so—but there are some fundamental misconceptions about what it is and how it works.1 Most people are aware that today's "AI" mines online data and responds to prompts with sentences or images that resemble human responses. But what is it?
The term "AI" has had different meanings over the years. When I worked on primitive corporate versions in the early 1990s, we used an "expert system" model that recorded and replicated the behavior of a few human experts. Other models preceded that one, and others would follow, but none generated widespread interest and use—until now. Today, ChatGPT and other AI tools represent a very different approach: the "large language" model, or LLM. LLMs weren't possible in the '90s. It took billions of people spending decades online to build the vast reservoirs of data they need. An LLM may sound human at times, but it doesn't have a mind. What it does have is unimaginably large amounts of data.
Our data.
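For readers who want that contrast in concrete terms, here is a deliberately tiny sketch, written in Python with rules invented purely for illustration, of the old expert-system approach: the judgments of a few human experts, transcribed by hand into explicit if/then rules, with no training data at all.

```python
# A toy illustration of a 1990s-style "expert system" (hypothetical rules,
# not any real product's code): human experts' judgments are written down
# by hand as if/then rules, and the program simply replays them.

def loan_risk(income: float, years_employed: int, prior_defaults: int) -> str:
    """Hand-coded rules transcribed from a few human loan officers."""
    if prior_defaults > 0:
        return "high risk"    # Rule 1: any past default is disqualifying.
    if income > 80_000 and years_employed >= 3:
        return "low risk"     # Rule 2: stable, well-paid applicants are safe.
    return "medium risk"      # Everything else falls through to a default.

print(loan_risk(income=95_000, years_employed=5, prior_defaults=0))  # "low risk"
```

An LLM contains nothing like these hand-written rules; its behavior comes entirely from statistical patterns in the text we have all produced.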
Why does that matter? Here’s a hypothetical: imagine that a corporation somehow managed to patent language. Not a computer language, not a new catchphrase or marketing slogan, but language itself. From that moment on, everyone who used written or oral language as a medium of communication would need the corporation’s permission. That, in effect, is what AI corporations have done—not with language, but with this new collective creation. They have appropriated our work—our poems, our stories, our emails written for socializing or work—for themselves.
AI resembles a public resource in many ways, but it isn’t like a green field or a body of water—parts of the natural world that existed before humanity came along. LLM-model AI was created by humans, billions of them, as they used the internet. They refine it and add to it every day. But it isn’t a consciously created individual work, like a novel or a painting. Like language, it arises organically from human social activity.
Chatbots don’t operate by magic or by the superhuman intelligence of their owners. They require force—massive amounts of brute force, in computing power and therefore in energy consumption—to mine the incomprehensibly large databases that result from the world's people spending time online. Their often lifelike answers and seeming bursts of creativity—as well as their mistakes—are the result of human-generated data manipulated on a mass scale.
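To see that principle at its smallest possible scale, consider a toy word predictor, sketched here in Python. It is nothing like a production chatbot in scale or architecture, and the tiny "corpus" below is invented for illustration, but it shows the basic move: every "prediction" is a statistic computed over text that human beings wrote.

```python
# A toy bigram "language model" (illustrative only; real LLMs are neural
# networks trained on vastly larger bodies of human writing). Every word it
# "predicts" is just a statistic over text that people wrote.
import random
from collections import defaultdict

corpus = (
    "the people wrote the words and the model repeats the words the people wrote"
).split()

# Count which word tends to follow which in the human-written corpus.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling a plausible next word from the counts."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Scale the same move up to billions of documents and enormous amounts of computing power and energy, and the output can sound startlingly human, while still consisting of nothing but patterns in our own words.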
LLMs are a new social and economic entity—chimeras of speech, art, and culture. A chatbot is a collectivity. Because it’s produced by everyone, it can’t ethically be owned by any single individual. Socialism—collective, democratic ownership—is the only reasonable way to govern AI. Right now, we don’t own or control our collective creations, and they’re being used to harm us: to hijack our time, redirect wealth upward, and exploit the world’s resources.
We’ve known for a long time that internet addiction exists. We now know why it exists—or, at least, why so little is being done to address it. The “insatiable” demand for the information that drives LLMs provides a huge incentive for tech companies to keep people online, milking them for ever-increasing reams of data. For-profit corporations and their systems designers are invested in manipulating us into staying online and engaging in data-rich activities—what researcher Riccardo Chianella describes as “quantifiable measures of engagement” (clicks, likes, views, etc.) that can serve as “a proxy for the product's ability to generate profit.” Any time we spend offline—exercising, socializing, reading a book, marching in the streets—diverts us from our true purpose, which, according to those who profit from this system, is to feed the databases. In that sense, all online life can be seen as a form of labor we engage in for the benefit of these companies.
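What does a "quantifiable measure of engagement" look like from the inside? The sketch below is entirely hypothetical; the field names, weights, and scoring formula are invented for illustration and do not describe any particular company's ranking system. But recommendation feeds are typically ordered by some such score, and the score, not the user's well-being, is what the system is built to maximize.

```python
# A hypothetical engagement score of the kind described above: invented
# weights and field names, not any company's actual ranking algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    clicks: int
    likes: int
    watch_seconds: float

def engagement_score(post: Post) -> float:
    """Collapse raw engagement signals into one number a feed can be ranked by."""
    return 1.0 * post.clicks + 2.0 * post.likes + 0.1 * post.watch_seconds

feed = [
    Post(clicks=40, likes=3, watch_seconds=10),
    Post(clicks=5, likes=30, watch_seconds=300),
]
# Sort the feed so the most "engaging" content comes first.
feed.sort(key=engagement_score, reverse=True)
print([round(engagement_score(p), 1) for p in feed])
```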
We're all toiling in the AI mines. Hour by hour, minute by minute, we type, enter, and click. We're paid for our piecework in micro-doses of dopamine that keep us coming back for more. And we're harmed in the process. As Thomas Moller-Nielsen observes, rampant cellphone use has distanced us from our physical companions and degraded our mental health. Researchers have also reported negative mental and physical health effects from social media use and from time spent online in general.
When we think about some of the other problems created by AI—its massive environmental footprint, its enabling of mass killing when used by militaries, its potential to disrupt jobs and entire creative professions like art and music—it’s easy to forget that many of these problems are created by the context in which AI has been developed: namely, our economic, social, and political system. The culture of capitalism is built around principles of competition and the concentration of wealth. Those principles are antithetical to the fundamentally collaborative nature of AI. That’s why the socialist approach is the only equitable way to manage AI.
The philosopher G. A. Cohen has used the metaphor of a camping trip to make a non-threatening, relatable case for socialism. In his 2009 book Why Not Socialism?, he writes,
You and I and a whole bunch of other people go on a camping trip. There is no hierarchy among us; our common aim is that each of us should have a good time, doing, so far as possible, the things that he or she likes best. [...] [W]e have, for example, pots and pans, oil, coffee, fishing rods, canoes, a soccer ball, decks of cards, and so forth. And, as is usual on camping trips, we avail ourselves of those facilities collectively. [...] [T]hey are under collective control for the duration of the trip. [...] [O]ur mutual understandings, and the spirit of the enterprise, ensure that there are no inequalities to which anyone could mount a principled objection.
In camping, as well as other contexts, Cohen observes, people tend to “cooperate” and “take for granted” things like “equality and reciprocity.”
Now imagine that everything this hypothetical group of campers used—apples, nuts, fish, the beauty of the landscape—had been generated by the campers themselves. What if all of it had been produced by their collective labor—and by their recreation, their teaching and learning activities, and their private communications? And what if they were told they had no rights to any of it?
That's how society treats LLMs today: corporations are allowed to control and monetize these products as if they were their own inventions. But remember, these programs are humanity's collective work, which means that private ownership of AI is theft. There is no equitable way to own and govern this kind of AI other than democratic socialism. That would be easier under a primarily socialist system but, as an inherently socialist technology, AI can be organized socialistically even in a regulated market economy—and even in the United States.
What would a socialist AI economy look like?
People could be compensated for the use of their collective intellectual property in several ways under a “democratic socialist AI” system. Revenues might be used to enhance education and build public media, or people might receive small stipends for their role in the greater system. Unless people are compensated for their contributions in some form, however, the system will remain inherently exploitative.
People should also be allowed to see, understand, and vote on the algorithms that are used to filter the information they see. And the public must have both free access to information and democratic control over the "stickiness," or addictive design, of any online technology they use. Anything less is an infringement of basic freedoms. As for privacy rights, these could be established in advance and voted on by participants. Individuals might be given the option to opt out of the system and keep their output for their own use without sharing in the common resource.
Most of all, the AI economy could be used to advance the flourishing of human and non-human life. That’s an idea as old as socialism itself. How old? In the early days of automation, the socialist revolutionary Che Guevara explicitly called for human liberation through technology. In a speech at the National Sugar Plenary Meeting in Camagüey, Cuba, in February 1963, he said: “We are trying to turn machines into liberating instruments [...] to achieve the most important thing we must achieve: individuals developed to the full.” Last year, Nathan J. Robinson wrote in Jacobin that technology could “liberate us from drudgery rather than hurl us into poverty.”
“Liberation” could take many forms, providing people with both the tools and the means to learn, to experiment, to debate, and to produce art. It could also result in dramatically shorter working hours for the vast majority of people, something that was widely expected to happen in the early days of automation. Consider the 1960s-era “Jetsons” cartoon. The family’s sole earner worked a few hours a week at a factory yet earned enough to have a house in the sky, a family-sized flying car, and a robot maid. The show’s premise was based on the widely believed assumption that increases in productivity would be equally shared between employers and employees—as they had been between World War II and the end of the 1960s. In fact, the first wave of automation gave rise to serious discussions about the upcoming “crisis of leisure.” The “crisis” never happened, of course. Since the 1970s, Americans have not benefited from shorter work hours the way other peer nations have, and wages have not kept up with productivity. Why? Because it’s good for corporate profit, and in a capitalist system, profit is paramount. (Senator Bernie Sanders recently called for a 32-hour workweek with no reduction in pay. What’s not to like? And it’s hardly a new idea. “Throughout the economy,” the New York Times reported in 1964, “there are the beginnings of movement toward the four[-]day week.”)
In a socialist scenario where we prioritize human needs over profit, some workers could find themselves out of a job. As a way to compensate these displaced workers, Robinson proposed an interesting scenario: “How about this: once the job that you train for is automated, you get an automation pension and get to relax for the rest of your life. Everyone will be praying their job is next on the list to go.”
As for other concomitant crises we face, from climate to war—which are worsened by the current use of AI—the socialist ethic would free us from the destructive power of profit seeking, allowing us to truly address these problems with the urgency they require. Similarly, any “existential” risks that may be posed by AI (that’s a highly controversial subject in its own right) could be addressed by enacting a moratorium on activities that increase such risks (similar to the desire expressed by Arab nations to create a Middle East free of weapons of mass destruction).
The organizer’s first task is to help people understand that whatever their ideology, the only fair way to govern AI is through democratic socialism. That means explaining that AI is both humanity’s commons and its collective creation. Organizers must understand and communicate AI’s real nature, as well as the unjust nature of the current system of ownership. They must present AI’s real and immediate threats to working people and explain how we can address them through common action.
Should current government structures administer AI? What about workplace democracy—especially in the global workplace that is the internet? Perhaps a federalized or distributed system of governance would be better, or a network of guild-based professions or other affinities. As those questions are explored, new institutions will come into being.
What about the workplace? Because AI absorbs the actions of human workers, it’s the fruit of their labors. That means they should decide how to use it, democratically, through some form of cooperative ownership or system of worker governance.
An Atlantic profile of the corporation known as “OpenAI,” the private creator of ChatGPT, includes this description of the firm’s headquarters:
Enter its lobby from the street, and the first wall you encounter is covered by a mandala, a spiritual representation of the universe, fashioned from circuits, copper wire, and other materials of computation.
It is a two-dimensional rendering of a multidimensional cosmos, an infinity reduced to circuitry: on silicon, as it is in heaven. But this is not spirituality; it is pathology. It reinforces the tech industry’s idea that so-called “artificial intelligence” programs are mysterious and magical, perhaps even alive, and that only these industry mavericks have the esoteric knowledge needed to manage them. But this technology isn’t their creation; it is ours. Even the industry’s harshest critics have missed this point because conventional political thinking hasn’t kept pace with the technology. AI is different from past innovations in ways that render today’s political, legal, and regulatory tools obsolete. We’ll need to explore different ideas and alternative ways of thinking. Fortunately, a 19th-century school of thought has already begun that conversation.
Notes
1. As computer scientist and entrepreneur Erik J. Larson writes in his book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, “our AI culture has simplified ideas about people, while expanding ideas about technology.” Some people believe, for example, that chatbots are, or are becoming, sentient. This is a mistake. Popular fears of so-called robot uprisings or a homicidal superintelligence are also unlikely to come true, at least in the near future. It is likewise a misconception to believe these systems “think,” since actual consciousness would require scientific breakthroughs that have yet to be approached in the real world.