“Everything is securities fraud” is a big theme around here, and I have mused in the past about how large language models might be securities fraud. One possibility is: You go to a chatbot, you type in “is XYZ a good company,” and the chatbot hallucinates an answer like “yes they just discovered a cure for cancer” or “no they are doing huge fraud.” You buy or sell the stock, relying on the chatbot’s answer, but it’s totally false and the stock moves in the wrong direction. You sue the chatbot’s maker for securities fraud.

There are various practical problems with that hypothetical lawsuit: Was it fraud in connection with the purchase and sale of a security? Was the chatbot’s maker profiting from the alleged fraud? Were you reasonable in relying on the chatbot’s output, despite a bunch of disclaimers attached to it? But there is also a more philosophical problem about the chatbot’s state of mind. Ordinarily securities fraud requires some “intent to deceive, manipulate, or defraud,” or if not actual intent then at least recklessness with the truth. You can sort of metaphorically ascribe intentions to a chatbot — there’s the fascinating “emergent misalignment” paper finding that if you teach an AI model to write insecure computer code, it will also do other bad stuff “on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively,” suggesting that the AI might be a nasty little crook — but it is hard to say that the chatbot really has any intent. I suppose you could cross-examine it. Put it on the witness stand, type in “why did you tell me that XYZ had discovered a cure for cancer,” and if it says “ugh sorry I misread an ambiguous news headline” then you let it off, but if it says “because I was hoping to trick you into buying the stock mwahahahaha” then it is fraud.

Of course you’re not suing the chatbot; you’re suing the AI company that makes it. But that doesn’t make things easier. Surely nobody at the AI company had any intention to lie to you about a stock; possibly none of them has even heard of the stock you asked about. Were they reckless in building a chatbot that sometimes lies about stocks? I mean, maybe, but in the general case that will be hard to prove. They employ lots of high-powered scientists working on the cutting edge of AI research, they train their models on vast quantities of data, they test them extensively, they plaster them with disclaimers, they have whole teams devoted to making sure that their AIs don’t enslave humanity: They don’t seem reckless.

This is a general legal risk, or uncertainty, or oddity, about AI: AI models act in some sense like independent agents, and in some sense not; it’s possible that they have too little independent agency to be legally responsible for their actions, but enough to make nobody else responsible for their actions. “It’s not my fault, because the AI did it independently, and it’s not the AI’s fault, because it is nonsensical to ascribe fault to an AI.” [1]

Anyway AIs sometimes make up unflattering false facts about people, and then those people sue them for defamation. The Wall Street Journal reports:

Robby Starbuck, the conservative activist, filed a defamation lawsuit against Meta alleging its artificial intelligence tool smeared him by falsely asserting he participated in the Jan. 6, 2021, riot at the U.S. Capitol.
Starbuck says he discovered the problem last summer when he was waging an online campaign to get Harley-Davidson to change its diversity, equity and inclusion, or DEI, policies. A Harley dealer in Vermont fired back by posting on X a screenshot on Aug. 5 purportedly of a Meta AI response saying Starbuck was at the Capitol riot and that he was linked to QAnon. ... A Meta spokesman said “as part of our continuous effort to improve our models, we have already released updates and will continue to do so.”

Starbuck joins a small list of plaintiffs who are trying to hold AI companies accountable for false and reputation-damaging information generated by large language models. No U.S. court has awarded damages to someone defamed by an AI chatbot. A Georgia judge in Gwinnett County last year allowed a defamation lawsuit against OpenAI to proceed to discovery after denying the ChatGPT maker’s motion to dismiss….

Microsoft in 2023 was sued by a man alleging that its Bing search engine and AI chatbot confused him with a convicted terrorist of a similar name. A federal judge in Maryland halted the litigation in October 2024 in a ruling requiring the plaintiff to pursue his claims against Microsoft in arbitration.

One problem here is that “ChatGPT is a blurry JPEG of the web,” [2] and in particular many chatbots are trained on user-generated data on social media sites. That stuff doesn’t have to be right! People tweet nonsense all the time. The Journal notes:

Social media and other internet sites generally can’t be held liable for what their users post on their platforms. But legal experts say that legal shield, under a federal law known as Section 230, doesn’t cover humanlike responses produced by automated AI programs in response to user prompts.

The AI programs aren’t just making stuff up; they’re making stuff up based on the corpus of text that they have been trained on, which includes lots of user-generated nonsense. Possibly processing that nonsense through AI makes it more legally actionable than it would otherwise be. But possibly not.

Elsewhere in AI law, I said a few months ago that, “as a person who writes columns on the internet, I have a lot of sympathy for both sides of the artificial intelligence copyright debate.” On the one hand, as a writer, I do not want AI companies to use my work for their own profit; when people type into a computer “what does Matt Levine think about securities fraud” I want them to find my columns, not a chatbot’s description of them. On the other hand, as a reader, I do not think that writers should be able to stop people (or AIs) from learning from their work. I read books and articles, I learn things from them, and I consciously and unconsciously incorporate them into my own thinking. I am not about to pay royalties to everyone whose work influences mine. AI, it seems to me, sits between those two mental models; the question, I wrote, is “Does an LLM mostly remix existing text, or does it mostly learn from existing text and generate new text? Or are those the same thing?”

On the other hand! If I buy a book, and read it, and it influences my view of the world, that seems like the normal way that knowledge advances. But if I steal a book, read it, and it influences my writing, that seems more annoying for the writer. I do not have to pay royalties to the writer of every book that I read, but I really should pay for a copy of the book. This is not legal advice or an analysis of intellectual property law or anything like that; this is just, come on, man.
Pay for the book! I do not know exactly how one applies that to AI, but not like this:

Meta will fight a group of US authors in court on Thursday in one of the first big legal tests of whether tech companies can use copyrighted material to train their powerful artificial intelligence models.

The case, which has been brought by about a dozen authors including Ta-Nehisi Coates and Richard Kadrey, is centred on the $1.4tn social media giant’s use of LibGen, a so-called shadow library of millions of books, academic articles and comics, to train its Llama AI models. …

“AI models have been trained on hundreds of thousands if not millions of books, downloaded from well-known pirated sites. This was not accidental,” said Mary Rasenberger, chief executive of the Authors Guild. “Authors should have gotten licence fees for that.”

Meta has argued that using copyrighted materials to train LLMs is “fair use” if it is used to develop a transformative technology, even if it is from pirated databases. LibGen hosts much of its content without permission from the rights holders. In legal filings, Meta notes that “use was fair irrespective of its method of acquisition”.

Come on, man! I sympathize with the argument that if you read a book you can learn from it without paying royalties, but you undermine that case if you stole the book to read it.

One model of Tesla Inc. is that it is a car company. It should be run by people who are good at making cars, and it should focus on selling as many cars as possible. Another model of Tesla is that it is a general-purpose robotics company. Tesla is in the business of building machines that can operate autonomously. Its first product in that category is a self-driving car, because lots of people want cars, but the general mission is much broader. Also I mean technically its first product is a non-self-driving car, because building machines that can operate autonomously is hard and you have to work up to it.

The advantage of the first model is that it is straightforward, “car company” is a real thing, Tesla actually makes cars that people want, and selling more cars is good for profits and shareholder value. The advantage of the second model is that it has a much bigger theoretically addressable market, it sounds cooler, and it is more inspiring to engineers and perhaps to investors. [3] Also it is in some sense more durable and longer-term: You could imagine a future of robotics in which people don’t need cars; in that world, Tesla, as a leading general-purpose robotics company, will make whatever replaces the cars.

This is a general problem in corporate organization. A company will normally make a product that the market wants. Over time, technology and consumer desires might change, so that in 20 years the market will no longer want the company’s product. How should the company think about this problem? How should society, and diversified investors, think about this problem?

One way to think about it is that the company should anticipate changes and always be looking for the next thing, so that it stays relevant forever: You make buggy whips now, but you anticipate that the car will replace the buggy, so you get into the windshield-wiper business even before there are many windshields to wipe. On this model, each company should invest its profits from its current business to reinvent and future-proof itself.
Another way to think about it is that companies tend to be good at one thing, and the company that will be good at the product of the future will be a different company, a disruptive startup not burdened by the need to service its legacy products. On this model, each company should do the best job it can running its existing business and selling its existing products, and it should return the profits to shareholders, and the shareholders, who are diversified professional investors whose job is to spot new businesses, can invest the profits in entirely new companies that will do the businesses of the future.

That is, the question is, should the speculative allocation of capital be done at the corporate-executive level or the investor level? And obviously the right answer is “some of both.” It is inefficient for companies to constantly go out of business; value is destroyed when productive collections of people and capital are always breaking up. But it is also inefficient for companies to stay in business too long; more value is often created by specialists starting on a clean slate than by incumbents trying to pivot. Professional investors have some advantages in allocating capital to new ideas: They see more ideas and are not biased too much in favor of existing businesses. But corporate managers also have some advantages: They have more domain expertise and practical knowledge, and they already have employees and factories and machines.

The thing with Tesla is that its chief executive officer is Elon Musk, who really does seem to be world-historically good at allocating capital to futuristic stuff. For instance we talked this week about the insanely fortuitous timing of his 2022 acquisition of Twitter Inc.: I slightly-but-not-really jokingly suggested that Musk counterintuitively but correctly identified Twitter as “basically a $100 billion AI company” a month or two before “a $100 billion AI company” was a real category. He perhaps overpaid for Twitter as a social network, but he has made a huge profit on Twitter as the host of an AI business. But of course Tesla itself was early to electric cars and self-driving. SpaceX sends rockets to space! And so if you are thinking about the general problem of “who should allocate capital to the businesses of the future, corporate executives or professional investors,” the answer might be unclear, but if you were writing a list of actual people who should allocate capital to the businesses of the future, Elon Musk would probably be high on the list.

But the question remains: Does “Elon Musk” mean “Elon Musk, the CEO of Tesla,” or does it mean “Elon Musk, the guy who founds new companies all the time”? Musk is both a corporate executive and an investor; he is the CEO of Tesla but also a big shareholder. I wrote above that “you could imagine a future of robotics in which people don’t need cars; in that world, Tesla, as a leading general-purpose robotics company, will make whatever replaces the cars.” But is that right? Maybe the thing that will replace cars is underground tunnels made by the Boring Company, which Musk also owns. Maybe it will be rockets made by SpaceX, which Musk also owns. Maybe nobody will go anywhere because we’ll all be brains in vats experiencing perfect bliss due to neural implants made by Neuralink, which Musk also owns.

The question “who should allocate capital to the businesses of the future, corporate executives or professional investors” still exists with Elon Musk. It’s just that it’s a problem for Elon Musk.
It’s: “Should I do my next idea in Tesla, or should I do it in a new company that I start?” We talk about this problem from time to time: Musk owns a constellation of companies that I like to call the “Musk Mars Conglomerate,” and it is always up for negotiation whether a new idea should be pursued in one of those companies, or in another, or in a wholly new company.

I suppose it is similarly a problem for Tesla, or for Tesla’s board of directors. You could imagine Tesla’s board thinking: “We want Elon Musk to build the businesses of the future within Tesla, because that will create more long-term value for shareholders.” Though you could also imagine them thinking: “We want Tesla to be run by a full-time CEO who cares about building cars, because our car sales are not going great and all of this focus on Mars is undermining our current cash flows.” In some ways the latter thought sounds disappointingly short-termist — who cares about quarterly cash flows if you’re building the future of robots? — but arguably one job of a board is to temper a CEO’s most grandiose impulses with some realism. You want to sell cars too.

Anyway here’s a Wall Street Journal story reporting that “board members reached out to several executive search firms to work on a formal process for finding Tesla’s next chief executive.” Tesla has denied the report that it’s looking for a new CEO, and it’s possible that this is just (as Byrne Hobart writes) “a good way for the board to remind Musk that he really ought to spend a bit more time with Tesla.” But the Journal also reports:

Early last year, after some two decades of running Tesla, Musk confided to someone close to him, in late night texts, that he was frustrated to still be working nonstop at the company, especially after a Delaware judge had struck down his multibillion-dollar pay package.

Last spring, he told that person that he no longer wanted to be CEO of Tesla, but that he was worried that no one could replace him atop the company and sell the vision that Tesla isn’t just an automaker, but the future of robotics and automation as well.

That’s right, right? Tesla could probably find a CEO who is better than 2025-era Musk is at selling cars, which in many ways would be good for business. But in other ways, maybe not. When Musk took over Twitter, he renamed it X and eventually appointed Linda Yaccarino as CEO. Yaccarino has good relationships with advertisers, and was in many respects a much better person to run a social media company than Musk is. Is she the right person to sell the vision that X isn’t just a social media company, but the future of AI and payments as well? No? Musk recently merged X with his AI company to form XAI Holdings. “It’s not known who will run the new joint entity,” Bloomberg’s Kurt Wagner and Katie Roof reported, “though Musk himself seems like the most likely choice.”

I am old enough to remember when Elon Musk decided he did not want to be the chief executive officer of Tesla Inc. anymore, for some combination of unclear reasons like “he was tired” and “trolling is fun.” So he announced on Twitter that he had “Deleted my Tesla titles last week to see what would happen. I’m now the Nothing of Tesla. Seems fine so far,” and we all had a good long sigh. This was in 2018, and he kept running Tesla, and also being CEO. In 2021 he changed his title again, announcing — not in a tweet but in a securities filing — that his title had changed to “Technoking of Tesla.”

I feel like there is an obvious compromise here?
Find a new CEO to sell cars, and keep Musk in his role as Technoking to sell the vision?

I once asked, for reasons, the following three questions:

- Would you let an artificial-intelligence mogul scan your irises with a chrome orb for purposes of his own, if in exchange he gave you $50?
- Would you let him do that, if instead of money he gave you some crypto tokens that he made up and that have uncertain value?
- Would you let him do that, without the tokens?
See, Sam Altman, who runs OpenAI, also has a side project called World, which has been scanning people’s eyeballs as “a way to make sure humans remained central and special in a world where the internet had a lot of AI-driven content.” They scan your eyeballs, you get a unique eyeball identifier, and then in various future internet contexts you can use that identifier to prove that you are a human with eyeballs rather than an artificial intelligence without eyeballs. Fine fine fine fine fine, fine, fine. Fine? World launched in 2023, as both an eyeball-scanning project and a crypto project: It has a crypto token (Worldcoin), and if you scan your eyeballs you get some tokens, which you can use to, you know, speculate on crypto. Except that it couldn’t offer this deal in the US, because, on a maximalist interpretation of US securities law, this would be an illegal unregistered securities offering. The idea is something like: - If you issue securities to the general public in the US, you have to register them with the Securities and Exchange Commission and file extensive disclosures, in ways that seem hard for most crypto projects to accomplish.
- A “security” is, for these purposes, “an investment of money in a common enterprise with profits to come solely from the efforts of others.”
- World is a common enterprise whose value comes from the efforts of Altman and its managers, so it looks security-ish.
- Is there an expectation of profits? This is debatable: Your Worldcoins don’t seem to entitle you to any cash flows, any profits you get would come from selling them at a gain, and any capital appreciation would arguably come from the utility of the tokens (and general speculative demand) rather than the profitability of the business. But there is a decent argument, supported by some court decisions, that this counts: If the World managers put in efforts to make Worldcoin more valuable, and as a result your Worldcoins get more valuable, that is arguably an expectation of profits.
- Is there an investment of money? I mean, in a literal sense, obviously not: You do not give World money for your Worldcoins; you give it a scan of your eyeballs. But again the maximalist view of securities law says that that doesn’t matter. Giving a crypto issuer anything of value in exchange for tokens can count as an investment of money, and arguably your eyeball data is valuable. The SEC has argued that even free token giveaways count as securities offerings, because the people participating in the giveaways give the offerors “valuable consideration,” including promotion and attention for their tokens, personal data of the recipients and the development of a secondary market for the tokens.
In 2023, the SEC, then led by Gary Gensler, took a maximalist view of US securities law, with the result that at least some lawyers would probably tell you “giving away a crypto token in exchange for a picture of an eyeball is an illegal securities offering.” So World didn’t do that, in the US. If you scanned your eyeball elsewhere, you got tokens. If you wanted to give Altman your eyeball scan in the US, he’d take it — he wants eyeballs! — but he wouldn’t give you any tokens for it. People nonetheless took that deal — we talked about an article quoting some of them saying things like “I like random tech stuff” and “I just like trying out things” — but not that many of them.

That was a pretty maximalist interpretation of US securities law as it applies to crypto, and in 2025 the SEC takes a considerably more minimalist interpretation, so good news, now you can get tokens for your eyeballs. The Financial Times reports:

Sam Altman’s digital ID project World has launched in the US, making its controversial iris-scanning technology and cryptocurrency token available in the country as Donald Trump’s administration embraces the digital asset sector.

The group aims to make the US its core market after initially rolling out the product outside the country in 2023, partly because of the Joe Biden administration’s more hostile attitude to crypto. Altman, who is also chief executive of $300bn artificial intelligence company OpenAI, lamented at the time that his venture, recently rebranded from Worldcoin, would be “World minus the US coin”. ...

“There were very good reasons why we focused on making sure that the product worked in the entire world before coming to the United States. Some of them are related to regulatory changes,” said Adrian Ludwig, chief architect at Tools for Humanity, the primary developer behind World. Altman and Alex Blania founded Tools for Humanity in 2019. …

Altman and Blania argue a reliable method of distinguishing humans from computers is essential as AI becomes more advanced. World manufactures eyeball-scanning “orbs” that generate unique IDs, which can be used to access the group’s Worldcoin token. The spherical devices are roughly the size of a basketball, but World is working on handheld models and wants to eventually integrate the technology directly into web cameras or mobile devices.

One theory of crypto, popular among venture capitalists and tech entrepreneurs, is something like “crypto will be the way that people verify identity online and distinguish themselves from AI.” Another theory of crypto is “you get some tokens and the number goes up.” Now the two theories can work together.

I have never, to my knowledge, been to a sales pitch for a Ponzi scheme, but they must happen. At least once a week surely someone rents out a ballroom in Frisco, Texas, gets in a bunch of potential investors, serves them coffee and gives a dynamic presentation about their can’t-lose investment proposition. The proposition is, of course, “you give us money, we give some of it to earlier investors to make them think that we’re generating profits, and we spend the rest of it on Lamborghinis for ourselves,” but the presentation wouldn’t say that.