The first mention of artificial intelligence in the Congressional Record dates back to 1964, when Senator Hubert Humphrey marveled at machines “that read, that remember, that improve their performance.” Even in that innocent time, politicians had canned takes about such technology. “The computer age is young; but already, let us admit, some laymen in policymaking positions have tended to make three types of speeches on the computer,” Humphrey said. There are speeches of “sheer awe.” Then there are those touting the hours of leisure and convenience computers would provide their human masters. And then the doomsayers: “Good-bye jobs; hello breadlines.”
It turns out that the doomsayers may have had a point. The political understanding of technology, however, has hardly gotten any more sophisticated. At a May 16 hearing on what the federal government should do about the widespread adoption of AI, Senator Richard Blumenthal of Connecticut played a recording of a computer-generated voice that sounded uncannily like him reciting a speech that an AI program had written in his style. This was no neat trick, according to Blumenthal. It was an ominous harbinger: “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or Vladimir Putin’s leadership?”
This is the kind of hawkish talk surrounding technology that is fashionable these days in Washington, where lawmakers are mulling a wholesale ban of TikTok because of national-security concerns. “Our goal is to demystify and hold accountable those new technologies, to avoid some of the mistakes of the past,” said Blumenthal, a reference to the way Facebook was allowed to metastasize across the globe, leading to a rise in political disinformation — another congressional hobbyhorse. To his credit, Blumenthal noted that AI could lead to a “new industrial revolution, the displacement of millions of workers, the loss of huge numbers of jobs,” yet it seems clear that Congress is more concerned with scoring points against China and Russia than with addressing the very real threat that AI poses to American workers. Worse, Congress’s preparation for this task is about where it was in the 1960s.
In recent months, we have seen two AI programs explode in popularity: DALL-E, which generates images from a text description, and ChatGPT, which can answer questions, write term papers, and even tell convincing lies in language that mimics human speech. Sam Altman, whose company, OpenAI, created both DALL-E and ChatGPT, was the star witness at the hearing, the rare Silicon Valley ringleader to ask Congress for more regulation. He suggested that the federal government create licenses that would ensure developers were thoroughly testing AI models before their release into the wild. Still, the message was that AI is set to expand. “The era of AI cannot be another era of ‘Move fast and break things,’” said Christina Montgomery, IBM’s chief privacy and trust officer, who testified at the hearing. “But we don’t have to slam the brakes on innovation, either.” We need, she said, “clear, reasonable policy” and “sound guardrails.” What that might mean is anyone’s guess.
Altman is trying to bend the debate toward more benign terrain. AI won’t kill jobs, he claims, but “tasks.” Jobs have a sense of mission to them; they are crucial for an economy. In their highest, longest form — careers — they can give meaning to one’s life. Tasks are different. Tasks are the things that, when you have enough money, someone else can be paid to do. It’s not called JobRabbit, after all.
But Altman’s AI can do much more than tasks. Machines have been replacing the cumbersome parts of economic production for centuries, literally taking work out of human hands. AI does something different. It goes after your mind. Lawyers, coders, writers, and designers are among the occupations in the crosshairs of ChatGPT and its successors. Already, the ranks of the knowledge-worker class are being thinned. Among the reasons Hollywood writers are striking is to prevent studios from automating their jobs. IBM is pausing hiring for roughly 7,800 roles that could be replaced by AI. BT Group, the largest broadband provider in Britain, is culling up to 55,000 jobs by 2030 as it transitions to AI. In May, BuzzFeed CEO Jonah Peretti, who had axed the site’s news division the previous month, announced, “Over the next few years, generative AI will replace the majority of static content, and audiences will begin to expect all content to be curated and dynamic with embedded intelligence.”
This is not the kind of predicament that can be solved by investing in new job training — that old Washington solution to the slow death of the manufacturing sector and the spread of global free trade. The entire American system has tilted toward the knowledge economy in recent decades, and there are only so many law-school graduates who can be turned into plumbers. Maybe it’s too much to expect that a body of lawmakers, median age 65, will grasp that new technology requires new ways of thinking. AI is liable to wreck people’s livelihoods — not to mention their sense of self — in ways that will be impossible to roll back once it becomes ingrained. It’s easier to ban TikTok because it’s owned by a Chinese company, and Facebook’s lobbyists (wait, aren’t they the bad guys?) have been pressuring lawmakers to kick TikTok out of the country. On May 17, the governor of Montana signed a bill banning TikTok in the state, a pointless victory in the war against Chinese surveillance.
Altman played down the more apocalyptic scenarios in his testimony. “I believe that there will be far greater jobs on the other side of this and that the jobs of today will get better,” he said. He envisions a world in which people are freer to go after “more satisfying projects” that are commensurate with rising standards of living, the inverse of what David Graeber warned of in his book Bullshit Jobs, which described a society flooded with meaningless work done by middle managers in service of one-percenters who need people to safeguard their immense stores of capital.
But at least those bullshit jobs pay. And here’s another problem: The pandemic has already reshaped work in ways that have given people’s lives more meaning, and it’s not because of people like Altman. Power in the economy has undergone a fundamental shift back to employees, allowing them to work where they want, commute less, and negotiate for higher pay and better benefits. Job satisfaction is at its highest level in 36 years. If the recent moves by BuzzFeed and IBM are any indication, big business sees AI as a way to keep labor costs low and claw back some of the power it has lost.
AI’s proponents are right that it’s too late to reverse course or to freeze AI’s development in place. What the government can do is control how companies use it. Think of an AI as the most advanced, hardest-working employee a boss could ever hope for. The State Department already regulates highly skilled workers from foreign countries with visas, making the Googles of the world prove that they can’t fill a position with an American worker before hiring someone from abroad. These are called H-1B visas, and the federal government has an annual cap on how many it issues. Why not impose similar “visa” requirements on companies outsourcing their work to AI? And then tax them on the excess profits anyway to strengthen a safety net that will likely be strained in the coming years?
Two days after Altman’s testimony, Senators Michael Bennet and Peter Welch introduced a bill that would create a new federal agency to regulate internet platforms like ChatGPT — a kind of FDA for the automated content we consume. This agency would be staffed with experts to establish and enforce rules for digital platforms to keep people from getting harmed or tricked. It’s a good idea, except that we are not just consumers of AI — we are its competitors. And by the time the government realizes this, it may already be too late.