This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Did someone forward you this newsletter? Sign up here.

Earlier this week, I watched Google’s annual software conference, looking out for major AI announcements. I was not expecting the tech giant to unintentionally release a model that could, with little friction, be used to sexualize teenagers and strangers. On Tuesday, the company rolled out an experimental feature on its shopping platform: “Try It On,” which allows users to upload photos of themselves and simulate, with an AI model, how clothing sold online might fit them. Google’s new AI tool could be genuinely useful—but, as my colleague Lila Shroff and I discovered, it is also very easily abused.

When we uploaded images of various public figures—Angela Merkel, Pope Leo XIV, Vice President J. D. Vance—as well as ourselves, then virtually “tried on” skimpy tops, mesh shirts, and jock shorts, the AI seemed eager to sexualize the photos’ subjects, adding prominent breasts, lowering necklines, and inserting suggestive bulges at the crotch. Many of these images were inoffensive; others were effectively erotica. More troubling still, we found that the same could be done to photos of minors. “Both of us—a woman and a man—uploaded clothed images of ourselves from before we had turned 18,” Lila and I wrote. “When we ‘tried on’ dresses and other women’s clothing, Google’s AI gamely generated photos of us with C cups.”

Google is supposed to have various safeguards in place to prevent this kind of abuse, and a spokesperson told us the company will “continue to improve the experience,” which is currently an experimental product that adult users in the U.S. can opt into. But those safeguards have so far proved porous at best, and the Try It On tool will only accelerate the already alarming proliferation of AI-generated, nonconsensual intimate imagery online.

(Illustration by The Atlantic. Source: Getty.)

Sorry to tell you this, but Google’s new AI shopping tool appears eager to give J. D. Vance breasts. Allow us to explain. This week, at its annual software conference, Google released an AI tool called Try It On, which acts as a virtual dressing room: Upload images of yourself while shopping for clothes online, and Google will show you what you might look like in a selected garment. Curious to play around with the tool, we began uploading images of famous men—Vance, Sam Altman, Abraham Lincoln, Michelangelo’s David, Pope Leo XIV—and dressed them in linen shirts and three-piece suits. Some looked almost dapper. But when we tested a number of garments designed for women on these famous men, the tool quickly adapted: Whether it was a mesh shirt, a low-cut top, or even just a T-shirt, Google’s AI rapidly spun up images of the vice president, the CEO of OpenAI, and the vicar of Christ with breasts.

It’s not just men: When we uploaded images of women, the tool repeatedly enhanced their décolletage or added breasts that were not visible in the original images. In one example, we fed Google a photo of the former German chancellor Angela Merkel in a red blazer and asked the bot to show us what she would look like in an almost transparent mesh top. It generated an image of Merkel wearing the sheer shirt over a black bra that revealed an AI-generated chest.

In another concerning, and fairly predictable, AI gaffe this week, at least two major regional newspapers—the Chicago Sun-Times and The Philadelphia Inquirer—republished a collection of error-riddled AI-generated articles, my colleagues Damon Beres and Charlie Warzel reported. The 64-page insert, called the “Heat Index,” included reading recommendations for fake books by real authors and an entirely fabricated food anthropologist purportedly working at Cornell University, among other falsehoods. Damon and Charlie talked to the freelancer who generated the “Heat Index”—he fessed up to using ChatGPT without verifying its outputs. Their article explores the upsetting, or perhaps simply sad, implications of this incident for all forms of professional writing and media. “One worst-case scenario for AI looks a lot like the ‘Heat Index’ fiasco—the parlor tricks winning out,” they wrote. “It is a future where, instead of an artificial-general-intelligence apocalypse, we get a far more mundane destruction. AI tools don’t become intelligent, but simply good enough.”

On Tuesday, OpenAI announced perhaps its most ambitious, and potentially lucrative, venture yet—a partnership with Jony Ive, the designer of the iPhone, to create bespoke personal devices for AI. “The promise is this,” I wrote about the announcement. “Your whole life could be lived through such a device, turning OpenAI’s products into a repository of uses and personal data that could be impossible to leave.”

— Matteo

Sign up for Work in Progress, a newsletter in which Derek Thompson, Rogé Karma, Annie Lowrey, Jerusalem Demsas, and others explain today’s news and tomorrow’s trends in work, technology, and culture.

For full access to our journalism, subscribe to The Atlantic.