Where are we on the whole robots-replacing-humans thing? Sometimes I’ll read a story — say, “Amazon Plans to Replace More Than Half a Million Jobs With Robots” or “Woman Wins Court Case by Using ChatGPT as a Lawyer” — and the answer is clearly, like, tomorrow. But then I’ll read other things — about malfunctioning smart glasses or anti-AI marketing tactics — and the answer is not anytime soon. It’s possible that I’m asking the wrong question. In its current phase, AI seems less likely to replace us and more likely to collaborate with us.

And that’s … good, I guess? Having AI as a co-worker is a less scary proposition than being replaced by it entirely. But on the other hand: AI is your co-worker!! You’re no longer competing against your cubicle mates Walter, Shannon and Philip for that promotion; you’re up against a machine. And even weirder, you’re probably training that machine to behave just like you. Eventually, it’ll be able to do your day job a lot faster and more efficiently, since it doesn’t commute to the office, drink the cold brew on tap or take bathroom breaks.

Which brings me to the 100 ex-investment bankers who are earning $150 an hour to help OpenAI build an AI banker that operates just like any other junior analyst. The project goes by the code name “Mercury,” and the task is relatively straightforward: “Participants are asked to create their models in Excel and they’re also expected to follow industry norms for formatting the models, including for areas like margin sizes and italicizing percentages,” Bloomberg News’ Omar El Chmouri writes.

Matt Levine finds the mission “culturally pleasing” in that it mirrors all the trauma-bonding that happens on Wall Street: “When you arrive at an investment bank fresh out of college, you will be asked to prepare materials for client meetings and to build financial models in Excel,” he writes. “How can you demonstrate that you are good at the job?
You just got there; you are unlikely to have any brilliant insights into the client’s needs. Pretty much you’re going to put together the materials your bosses tell you to, using pages from previous client pitches. You can do this sloppily in a way that embarrasses your bosses … Or you can just do it perfectly, and then the VP will like you.”

If these ex-bankers train the models properly, the VP will always like the AI, right? Well, not exactly. Humans make mistakes. And if humans are the ones training the AI, then the AI might make mistakes, too. Just look at this embarrassing blunder made by OpenAI’s vice president for science over the weekend: “GPT-5 just found solutions to 10 (!) previously unsolved Erdős problems,” Kevin Weil said in a since-deleted post on X. Turns out, that wasn’t quite true. “The company’s latest model had simply scraped answers off the internet and regurgitated them as its own,” Parmy Olson writes.

Artificial general intelligence is still very much a moonshot. OpenAI and Nvidia might be hoping that their “machines will be able to reason and discover the answers to thorny problems in business and society,” writes Parmy, “but the Erdős error is a stark reminder that the large language models underpinning the generative AI boom mostly pretend to be good at reasoning. They are still glorified pattern-matching tools.”

Yet that’s where it all comes full circle: “Glorified pattern-matcher” is the textbook job description of a junior banker! So, yeah. Maybe those roles are about to be replaced by robots after all.

The CEOs Are Not All Right

Eeep! The “vibecession” has made its way to the C-suite: Jonathan Levin says CEOs “remain largely out of sync with a US stock market near all-time highs and profits that are expected to grow meaningfully across large-cap and small-cap US stocks.” What’s going on? He chalks it up to “the confluence of events in recent years” that’s left consumers — and now business executives — unsettled.
In times of uncertainty, a natural gut reaction is to panic. But if you’re, say, the co-founder of a certain software firm who has spent north of $1 billion over the past few decades trying to convince the public that you are a sincerely good person who runs a good company, maybe don’t panic! Of course, I’m talking about Marc Benioff, who had a very public meltdown last week that Beth Kowitt says “wasn’t just a little PR oopsie. It was a full-on torching of the carefully constructed narrative he’s spent so many years creating.”

The long and the short of it is that the Salesforce CEO told the New York Times that he was totally on board with the National Guard going into San Francisco. “Benioff’s comments were more than the general bootlicking that most tech CEOs have done to curry favor with Trump,” Beth writes, but they signal a strange turn in corporate leadership: “Rather than try to rehabilitate their image, tech CEOs in particular are coming off as craven and increasingly removed from their employees and customers.”

Case in point? Gautam Mukunda says a handful of US tech companies — not Salesforce, thankfully — have started encouraging, or even requiring, workers to follow China’s “996” model, which involves working from 9 a.m. to 9 p.m., six days a week. “This is a mistake,” Gautam argues. “You can’t grind your way to breakthrough ideas, and overwork kills the curiosity and creativity that innovation depends on. In fact, if you’re in a job where 996 doesn’t hurt your ability to do your work well, you’re likely to be one of the first people replaced by AI.”

Welp. There’s that idea again.