The tech website CNET recently attracted attention for having used artificial intelligence to generate dozens of articles over the past few months — without informing readers. Futurism, which broke that story, uncovered a slightly less sinister, slightly more pathetic twist this week: A lot of CNET’s work was heavily plagiarized from the very humans AI journalism was supposed to replace. Futurism reports:
> The bot’s misbehavior ranges from verbatim copying to moderate edits to significant rephrasings, all without properly crediting the original. In at least some of its articles, it appears that virtually every sentence maps directly onto something previously published elsewhere.
The bot cribbed not only from CNET’s competitors but also from properties such as Forbes that are owned by CNET’s parent company, Red Ventures. After the initial reports inspired significant backlash, Red Ventures announced it would pause its AI journalism for now.
As Futurism points out, plagiarism was hardly the only problem here: CNET’s bot-generated articles also included some basic factual errors. But all the copying raises the question: If the machines are so damn smart, why do they have to rely on the work of their pea-brain inferiors as if they were a desperate high-school student or Melania Trump giving a speech at the Republican National Convention?
AI systems like the one CNET employed work by scouring enormous amounts of human-generated data and “learning” to write from it, so perhaps some level of copying was inevitable. But the point is to sound subtly like a human, not to replicate exactly what one specific person has written. That the bot couldn’t distinguish between the two is a somewhat reassuring development for journalists worried about being replaced; maybe next we’ll find out the bot blew deadlines and filed fake expense reports too.
But the recent release of ChatGPT, the uncannily human-sounding, possibly revolutionary text generator, has highlighted how much better AI is getting at mimicking humans. And, so far, there’s no indication ChatGPT suffers from the same copycat flaws as CNET’s bot journalism. At least it still gets things wrong frequently. At this point, humans have to take what we can get.