
Is AI Coming for Coders First?


Lots of industries are speculating about what AI means for their futures. It could be an “especially big deal” for the legal profession and may “reinvent” market research. It could change what it means to be a graphic designer. It could empty out call centers. Will it make white-collar office workers around the world massively more productive, or redundant, or neither, or both? An open letter calling for a “pause” on “giant AI experiments,” signed by an unusual assortment of public figures including Elon Musk, Yuval Noah Harari, and Andrew Yang, frames the question as hopelessly as possible: “Should we automate away all the jobs, including the fulfilling ones?”

These are mostly predictions, made from afar, about one industry’s plans for all the others. Inside the tech industry, though, there is a bit more confidence about where AI automation will matter the most, the soonest. The future may as well be on auto-complete: AI is obviously coming for software first. Where else would it start?

As a feeling, this is understandable. Capital is draining from the rest of the start-up scene and flooding into AI. Big tech firms are announcing major AI investments at the same time that they’re shedding thousands of other workers. It’s all anyone in the industry can talk about, and true believers are everywhere; if you’re already the sort of person who is anxious that you’re not working on the next big thing, it only follows — emotionally, to you, the staff software engineer at a company that has nothing to do with AI — that the next big thing may just crush you underfoot, or at least change your job in unpredictable ways.

But the idea that software development will be among the first fields to feel the consequences of LLM-based AI is based on more than a nervous vibe. While the public was playing with experimental chatbots such as ChatGPT and image generators like DALL-E and Midjourney for the first time, coders were using AI assistants — some based on the same underlying technology — at work. GitHub Copilot, a coding assistant developed by Microsoft and OpenAI, “analyzes the context in the file you are editing, as well as related files, and offers suggestions” about what may come next, with the intention of speeding up programming. Recently, it has become more ambitious and assertive and will attempt a wider range of programming tasks, including debugging and code commenting.
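
What that looks like in practice is easiest to show with a toy case. In the hypothetical sketch below, the completion is invented for illustration rather than captured from Copilot itself: a programmer has typed only the import, the signature, and the docstring, and the function body is the kind of suggestion an assistant infers from that context.

    import re

    def slugify(title: str) -> str:
        """Convert an article title into a URL-friendly slug."""
        # From the signature and docstring above, an assistant can
        # plausibly propose the two lines below.
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    print(slugify("Is AI Coming for Coders First?"))  # is-ai-coming-for-coders-first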

Reviews of Copilot have ranged from rapturous to mixed; at the very least, it’s a pretty good auto-complete for a lot of coding tasks, suggesting that its underlying model has done an impressive amount of “learning” about how basic software works. Game developer Tyler Glaiel found that GPT-4 was unable to solve tricky and novel programming test problems and that, like its content-generating cousins, it has a tendency to “make shit up” anyway, which “can waste a lot of time.” Still, on the question of whether GPT-4 can “actually write code,” he gave it some credit:

Given a description of an algorithm or a description of a well-known problem with plenty of existing examples on the web, yeah GPT-4 can absolutely write code. It’s mostly just assembling and remixing stuff it’s seen, but TO BE FAIR … a lot of programming is just that.
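
For a sense of what counts as a “well-known problem with plenty of existing examples on the web,” consider binary search, which appears in thousands of tutorials and textbooks. The version below is written by hand as an illustration, but it is exactly the sort of boilerplate an LLM can assemble on demand, having seen countless variants of it in its training data.

    def binary_search(items: list[int], target: int) -> int:
        """Return the index of target in a sorted list, or -1 if absent."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    assert binary_search([1, 3, 5, 7, 9], 7) == 3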

Former Twitter VP and Googler Jason Goldman assessed the technology from the perspective of a common industry type: a manager who can’t really code.

OpenAI was early to release usable AI coding tools, but this week, Google announced that it was partnering with Replit, a popular software development environment, on a general-purpose coding assistant. In an interview with Semafor, Replit’s CEO, a rather excited Amjad Masad, described coding as “almost the perfect use case for LLMs,” and said that, eventually, his company’s goal was for its assistant to become “completely autonomous,” such that it can be treated like an extra employee.

This month, Paul Kedrosky and Eric Norlin of SK Ventures published a more extensive bull case for AI software development:

The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself. Sure, chat AIs can perform swimmingly at producing undergraduate essays, or spinning up marketing materials and blog posts (like we need more of either), but such technologies are terrific to the point of dark magic at producing, debugging, and accelerating software production quickly and almost costlessly.

This, they say, is in part because “[s]oftware is even more rule-based and grammatical than conversational English, or any other conversational language,” and “programming is a good example of a predictable domain.” In their view — fairly optimistic, but also, you know, they’re investors — this will let people make and use software where they previously never would have been able to, quickly relieving “society’s technical debt” before triggering waves of unpredictable innovation.
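
The “predictable domain” point is easy to see in a snippet like the one below, an invented example rather than one of theirs: once the file-opening line is typed, convention all but dictates the line that follows, and that regularity is exactly what next-token prediction exploits.

    import json

    # Write a tiny file first so the example is self-contained.
    with open("config.json", "w") as f:
        json.dump({"debug": True}, f)

    # Once the line below is typed, convention all but dictates the next
    # one; code has far fewer plausible continuations than prose does.
    with open("config.json") as f:
        config = json.load(f)

    print(config)  # {'debug': True}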

And hey — maybe! It’s clear, in any case, that the software industry is highly exposed to the effects, whatever they may be, of its newest creations and that its workers and employers have been quick to test and adopt them. It makes some sense that the effects of LLM automation on labor — fewer jobs, more jobs, different jobs, wage pressure, displacement — will manifest early, if not first, somewhere in the industry where it’s first and most thoroughly deployed and where it seems especially capable.

One such somewhere is within the companies creating the AI tools. Google is a software company that hopes to provide AI-based software to its users and customers at other firms; it’s also an employer with more than 150,000 employees that just slashed 6 percent of its workforce. In his layoff announcement, Google CEO Sundar Pichai directly cited the company’s investment in AI. “Being constrained in some areas allows us to bet big on others,” he wrote. “Pivoting the company to be AI-first years ago led to groundbreaking advances across our businesses and the whole industry,” he continued, emphasizing the “substantial opportunity in front of us with AI.” This can be read two ways. Google is certainly in an enviable position to sell and provide AI products to others. It’s also, perhaps, the ideal customer for its own allegedly productivity-enhancing tools: dozens of offices full of coders and product managers and emailers and deck-makers and meeting-holders — not to mention countless lower-paid contractors spread around the world — testing out tools, en masse, in a single corporate environment. Before Google really finds out what its products will do for and to its customers, and their workers, it’ll probably start to find out what those products will do for itself. In the event that LLM tools turn out to be massively overhyped and don’t produce much utility or change, well, Google would be among the first to know that, too, although it might not be terribly eager to share its findings.

In its own analysis, OpenAI suggested that certain tech jobs would be highly exposed to LLM-based tools. “We discover that roles heavily reliant on science and critical-thinking skills show a negative correlation with exposure,” the company claimed, “while programming and writing skills are positively associated with LLM exposure.” It also estimated that “around 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by the introduction of LLMs, while approximately 19 percent of workers may see at least 50 percent of their tasks impacted.”

Now, one of the leading LLM companies in the world would say that, and it’s easy for a company with fewer than 400 employees to speculate wildly about what will happen to everyone else. (OpenAI does, however, utilize thousands of foreign contract laborers to help clean up its models, doing work that is potentially quite “exposed” to near-term automation.) It’s also the sort of prediction that may interest Microsoft, OpenAI’s biggest funder by far and its partner on a bunch of AI-powered features in popular software such as Windows, Office, and, of course, GitHub. Like Google, Microsoft has been cutting costs, mostly by eliminating thousands of jobs, including some from GitHub’s overseas teams. Its investment in AI can likewise be interpreted in two ways: as a bet on a new sort of product from which it may make money and, more immediately, as an investment in automation that simply saves it some money on labor, like a new machine on a factory floor.
