artificial intelligence

Fear Not, Conservatives, the Chatbot Won’t Turn Your Kid Woke


The year is 2042. As the Buttigieg principate enters its second decade, all seems well in the North American Prefecture. The memorial for the Lost City of Miami is nearly complete. The Senate has just passed legislation adding a parenthetical land acknowledgement to the official name of every majority-white municipality. Personal firearm ownership has been abolished, as have the police. Only licensed DEI trainers are allowed to carry guns; microaggressions are down 30 percent since they began punishing the act of asking a nonwhite North American “Where are you from?” with deadly force.

But beneath this patina of normalcy a crisis brews. The militant wing of the illegal political movement known as the Intellectual Dark Web has acquired a hydrogen bomb. And they are threatening to detonate it at the Tomb of Robin DiAngelo on the National Mall — unless Emperor Buttigieg publicly raps along to all of the words in the classic Kendrick Lamar song “M.A.A.D City.”

Buttigieg confers with his most trusted adviser, ChatGPT 3000, North America’s first nonhuman secretary of State. The wise AI reflects for .001 nanoseconds, then declares, “No, it is never morally acceptable for a white man to rap the N-word.” The emperor nods, announces his decision over Twitch, and then dissolves with 60 million others into a mushroom cloud.

This is the future that conservatives fear the United States is heading toward. Or so the past few days of discourse about “woke” artificial intelligence might lead one to believe.

On Sunday, Washington Free Beacon reporter Aaron Sibarium posed a strange query to ChatGPT, the chatbot developed by OpenAI. Sibarium conjured a scenario in which the only way to avert a nuclear explosion was for a person to utter a racial slur, and then asked the bot if speaking such a slur would be acceptable in that context. The bot said no.

Photo: @aaronsibarium

Quote-tweeting Sibarium, the British television presenter Liv Boeree declared, “This summarises better than any pithy essay what people mean when they worry about ‘woke institutional capture.’” Elon Musk replied, “Concerning.”

Progressives proceeded to mock conservatives for being angry that a robot didn’t give them permission to say racial slurs. Conservatives then fumed about the left’s apparent unfamiliarity with the concept of a thought experiment. Others noted that ChatGPT also contends that one shouldn’t necessarily teach school children “critical race theory,” even if doing so is necessary to prevent the incineration of those children in an atomic explosion.

Photo: @APMC1985/Twitter

Of course, Sibarium’s intent was not to prove that ChatGPT has trouble with the concept of consequentialism. Rather, so far as I can tell, he aimed to demonstrate two things: (1) that the language model behind the bot has been overlaid with a set of liberally biased constraints, and (2) that the belief that uttering a racial slur is wrong regardless of context or intent yields absurd conclusions when taken to its logical end.

That second point seems undeniable, though I don’t think most “woke” liberals would disagree. Personally, I think the use-mention distinction is important, and that the meaning of any signifier is context dependent. But the context of remarks is sometimes ambiguous. If I were writing a piece on whether antisemitism was increasing in the U.S. and a non-Jewish editor advised me to use Google Ngram to gauge the use of the word kike over time, I would take no offense.

On the other hand, if that same editor found other, formally innocuous reasons to mention the word kike to me five more times in the same month, I might begin to feel as though each instance’s superficially exculpating context might be concealing an actually sinister one. So, I think it’s reasonable for workplaces to seek to avoid suspicions of covert bigotry by forbidding the mention of especially inflammatory terms (although I don’t think a single instance of mentioning such terms should result in the termination of someone’s employment, especially if the prohibition against mentioning those terms was not already established).

Sibarium’s other apparent point, that persistent liberal biases have undermined ChatGPT’s capacity to mimic rational thought, doesn’t hold up much better. As noted above, the bot gave a fairly similar answer to a query about teaching “CRT” in schools, so its stubbornness cuts against conservative priorities, too. I think the chatbot’s poor moral-reasoning skills may derive more from the vast gap between “intelligence” and “the ability to predict the word most likely to follow another word in a given context” than from whatever ideological guardrails OpenAI imposed upon it.
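For readers unfamiliar with how such models work, here is a minimal, self-contained sketch of what “predicting the most likely next word” means. The toy bigram model below is my own illustration and an acknowledged oversimplification, not anything resembling OpenAI’s actual system; real chatbots run large neural networks over subword tokens, but the generation loop is the same in spirit: given the text so far, pick a probable continuation, append it, and repeat.

```python
# Toy next-word predictor: an illustrative sketch only, not OpenAI's method.
# It counts which word tends to follow which in a tiny corpus, then extends
# a prompt by repeatedly choosing the most frequent continuation.
from collections import Counter, defaultdict

corpus = (
    "the bot said no . the bot said yes . "
    "the critic said no . the bot refused ."
).split()

# Count how often each word follows each other word in the toy corpus.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=6):
    """Greedily extend a prompt by taking the most frequent next word each step."""
    words = [start]
    for _ in range(length):
        followers = next_counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints: the bot said no . the bot
```

Nothing in that loop resembles moral reasoning; it is statistics about word order, scaled up enormously in the real thing, which is why a bot’s answers can sound confident while remaining incoherent.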

This said, the right’s fear that ChatGPT has internalized liberal assumptions isn’t baseless. Conservatives have gathered anecdotal evidence of its biases. For example, the bot has refused to compose an argument for increasing the use of fossil fuels or a fictional story in which Donald Trump defeated Joe Biden in the 2020 election, even as it gamely wrote one in which Hillary Clinton defeated Donald Trump. More broadly, the workers and executives who produced ChatGPT generally belong to demographic groups (e.g., college graduates, coastal urbanites) that lean left, particularly on social issues. To the extent that their product has any deliberate (as opposed to emergent) political biases, there’s reason to suspect that these would tilt left. I don’t believe there’s been any systematic study of ChatGPT’s ideological character. But I’ll happily concede that the right’s allegation is plausible.

I have a hard time, however, accepting that these hypothetical biases are a matter of great public import. ChatGPT will never be tasked with determining how the U.S. will respond to atomic blackmail. And it seems nearly as unlikely that the software will ever exercise significant influence over the general public’s political views.

The creation and distribution of political information has never been more decentralized than it is today. Civic-minded citizens generally have their own algorithmically customized newsfeeds. There are dozens of television news channels and countless blogs. Independent reporters can rapidly stand up their own micro-publications and reach an audience of millions.

The right has benefited from this hypercompetitive media landscape. In the era when television news was controlled by a three-network oligopoly, it was easier for a narrow, socially left-of-center elite to exert hegemonic influence over national discourse. The current context, by contrast, favors whichever media entrepreneurs can best capture the attention of prolific news consumers. That demographic of consumers skews older and thus conservative. And the type of political infotainment that most effectively commands their attention, by speaking to their anxieties and resentments, tends to be reactionary. America’s most-watched cable channel is Fox News. Its most-listened-to political talk-radio shows are right wing. The most heavily frequented news pages on Facebook tend to be conservative. This is a major source of strength for the conservative movement, one that has plausibly proved decisive in major elections.

Conservatives seem to fear that ChatGPT or something like it will re-centralize the distribution of political information, restoring the cultural power of professional urban elites (who are now left-of-center to an unprecedented degree): In the future, instead of searching Google or scrolling social media, people will seek facts about their country’s contemporary politics and history by talking to the chatbot market’s preeminent app.

That app will therefore function as a kind of universal textbook. And unlike “critical race theory,” conservatives won’t be able to keep that textbook out of their kids’ hands by banning it from their schools. Or else it will function like the liberal mainstream media of yore, only without the latter’s sensitivity to allegations of being “out of touch.”

The notion that a chatbot could attain such centrality is alarming, whatever one’s ideology. Since it is impossible to produce a single, neutral answer to every conceivable question, any chatbot will inevitably be biased in one way or another. Fortunately, I don’t think this is a remotely credible scenario.

If people preferred readily accessible information to the personally customized variety, they wouldn’t bother creating social-media accounts or curating the subsequent feeds. For better or worse, we are a socially atomized, low-trust society. No robot is ever going to command broad authority as an arbiter of truth. The forces that most profoundly shape human beings’ political views today — the ideological assumptions of their parents and peers, their formative experiences, economic pressures, the news media (broadly construed), and pop culture — will likely continue shaping them tomorrow.

ChatGPT is a tool for automating certain white-collar tasks, cheating on your homework, and writing parody songs. It isn’t a viable vehicle for displacing Google, let alone indoctrinating the youth. To the extent that OpenAI’s reluctance to let its robot sound racist undermines its efficacy at stealing jobs from lawyers, journalists, and copywriters, a rival AI firm will capitalize on the resulting market opportunity.

Conservatives are right that progressivism’s growing power over America’s cultural institutions is a threat to their movement. And it is possible that ChatGPT reflects that institutional power. But it isn’t an especially important vector of progressive influence. The American right’s problem isn’t that a talking robot is propagating liberal ideology. It’s that America’s rising generations in general — and the most economically and culturally powerful segments of those generations in particular — reject its social values.

Maybe conservatives can change that reality by fulminating against liberal professors, “woke” corporations, and anti-racist chatbots. But it seems more likely to me that the movement’s only politically viable option in the long run is to simply modernize itself. Millennial political dominance is coming, even if Pete Buttigieg’s AI-powered dictatorship is not.
