Every tech company talks a big game when it comes to generative AI, but most of them are still figuring out how exactly to add it to their products. Microsoft is putting a chatbot in Windows and content-generation tools in Office. Google is testing out AI-written answers to searches. Meta is trying out AI-generated “stickers” and bots based on celebrity likenesses. We can expect to encounter plenty of these experiments over the next year. They’re all over the place, in both senses of the phrase.
The biggest players do seem to agree on one use for generative AI, however: pumping out more ads. Better ads! More persuasive ads. Meta is giving advertisers tools for automatic “text variation, background generation, and image outcropping.” Google has tools that can “generate relevant and effective keywords, headlines, descriptions, images, and other assets” for advertisers’ campaigns. TikTok has a tool called Creative Assistant, which coaches users on “best practices” for “creating ads or videos for TikTok.” This sort of stuff isn’t front and center in how big tech is talking about AI, but it’s at least as important as anything else it is marketing. Meta and Alphabet are, at base, advertising companies. Everything they do ultimately serves that purpose. In their hands, any technology becomes an advertising technology. This is what big tech sees in AI.
These AI ad-generation products are less visible to regular users than, for example, an Instagram bot pretending to be Tom Brady or a pop-up asking if you’d like help responding to an email. But they’re also clearer in purpose and tell us a lot about what sorts of problems these companies think generative AI can solve — for them, anyway. Take Amazon, which recently started using AI to summarize reviews and to let sellers automatically generate most of a listing — “compelling product titles, bullet points, and descriptions” — from a few words. This week, the company announced a new offering for sellers and advertisers: a tool that uses AI to generate photos of their products in scenes and settings of their choosing.
In Amazon’s words, this is a way for the company to “support advertisers while also making the ads our customers see more engaging and visually rich.” It’s also, the company suggests, a “perfect use for generative AI — less effort and better outcomes.” In theory, automatically generated, professional-looking product photos will result in better ad performance and more sales, which benefits both sellers and Amazon. By reducing the cost of good-enough advertising photos, the theory goes, Amazon makes all sellers more competitive. The money they might have previously spent on producing ads can now be allocated to actually placing them on Amazon, which they might need to focus on more now that everyone has the same creative advantage. Amazon tells sellers that when products are placed in a “lifestyle context,” click-through rates “can be 40 percent higher compared to ads with standard product images.” (Like a lot of AI pitches, this one has a certain near-sightedness. Like, what’s supposed to happen when everyone gets that 40 percent advantage? When “lifestyle context” photos are ubiquitous? Is the answer just “we’ll see”? Or maybe “it becomes more expensive to advertise on Amazon”?)
In any case, you might notice that regular users — people who use and encounter ads and listings on Amazon — are not exactly central to these pitches, making brief background appearances in the form of behavioral statistics and references to “impact.” This is a feature to be used on them, not by them, which is part of how big platforms like this work; different parties are getting different things from their interaction with the system, and their needs are sometimes, but not always, aligned. If you step an inch outside the weird and specific context of the relationship between Amazon and its millions of sellers, the company’s pitch for automated product photos sounds sort of insane. It’s a tool for faking product photos! It’s the sort of thing an e-commerce site might make a rule against using, and with good reason. Take the product photos in Amazon’s video, for example.
At first glance — which, for a thumbnail in an ad on Amazon, might be what matters most — things look good. We see a toaster in the middle of a sunlit kitchen, another on the edge of a stone table, another on some planks, and then on a counter again, except, hold on, this counter appears to extend all the way back to some drawers and might also be … the floor? I can see how such photos might result in better click-through rates for an ad, since they mimic certain popular aesthetics and imply a level of polish and care. What I can’t see is how big this toaster is. These photos somehow offer less information than the customary floating-in-white-space Amazon product shot. They don’t add context. They subtract it. This toaster has been banished to four different locations in the uncanny valley, none of which are anywhere near an electrical outlet.
Next, we see images created with the prompt “product in a kitchen, used in meal preparation.” Let’s see how that goes:
Once more, regarding probably the main question that real-world product shots can help answer: How large is this appliance? On the top left, staged in a pizza-oven crematorium, the device sits next to a bowl of the statistical average of all slow-cooked meals, a few inches (or feet?) away from a rare leafy bell pepper, in front of some towering greens in a utensil holder, and looks like a miniature; on the bottom right, in the middle of a butcher block, it looks fairly large, except for some confoundingly scaled [robot voice] STARCH and MEAT on the plate. Let’s check out the next product, with images generated by selecting a “pumpkin spice” theme:
Here, we have an unplugged electric griddle that is both much larger and much smaller than a turkey, placed in a variety of kitchenlike anti-contexts, accessorized with impossibly tined forks. Lots of ambiguity about the role of the pumpkins here. Edible? Decorative? Load-bearing?
To be fair, this is just a brief marketing demo — a staged ad for a staged-ad-creation tool — and Amazon is clear that tuning and filtering outputs is ultimately up to the seller. I also don’t mean to be precious here about Amazon advertisements or advertising in general, where deceptive and/or janky product photos are used in basically every possible context by a wide range of advertisers. Countless Amazon product listings, many of which have sold thousands of units to satisfied customers, already include manipulated product imagery, some of which is deliberately misleading. Fake photos abound and plenty of real photos are shopped. This is mainly just weird.
But it’s worth paying attention to which, and whose, problems these companies are trying to solve with AI. Amazon is attempting to address an issue for sellers and for its own advertising business: Clickable “lifestyle” ad imagery is expensive to hire for and difficult to shoot yourself, so a lot of sellers advertise with less-clickable materials. In the process of addressing this problem for its sellers, and attempting to automate certain kinds of product photography, Amazon ended up creating a tool that automates the (low-stakes, slight, and frankly sort of surreal) deception of customers by sellers. An actual photo of a griddle in a staged kitchen might get more people to click an ad. It would also provide, before clicking, some useful information about the product, such as — sorry for repeating myself here — its size relative to real objects in the world, or how it might look in the sort of location where it would actually be used. These AI-generated images aren’t exaggerated or deliberately altered. They don’t take liberties or mislead viewers to believe something specific about the product that isn’t true. Instead, what they do is create the impression that a photo shoot has taken place, implying the sort of expenditure and marketing output customers associate with established, legitimate brands. Narrowly, the plan makes sense: Certain types of images seem to get people to click more; these images can be ingested, analyzed, and approximated by software; the more of these images there are, the more people will click and buy things. It’s classic platform logic, coherent but incurious about everything it can’t internally observe or measure.
Again, on its own, this tool probably won’t be very consequential, and customers are more than able to get a better sense of the information obfuscated by these weird generations by actually visiting advertised listings, reading reviews, and looking at photos provided by previous customers. There are also bigger questions about the general promise of self-generating advertising, many of which apply to all widely available generative tools: It’s similarly unclear how a sudden glut of passable generated content will play out in our inboxes, in workplaces, or on social media. What is clear, however, is that the applications of AI matter at least as much as what it’s capable of. And, for now, those decisions belong to companies like Amazon.