A growing number of former OpenAI employees are taking up legal cover offered by a prominent legal scholar, who says he’s protecting their right to publicize the perils of the company’s unfettered race to develop artificial general intelligence.
Lawrence Lessig, a professor at Harvard Law School, said in an interview that he’s examining “every legal option,” including suing OpenAI if necessary, to protect the former employees. Lessig is now representing 15 former OpenAI employees, five more than when he wrote about his mission last week in an opinion piece for CNN, he said.
“I was eager to help them get to a place where they could assert the freedom that I think they need given the potential threat of AI technology — to identify risks that are not being appropriately dealt with,” Lessig said. “So that was their motivation, and I was keen to help them.”
Lessig is the founder of Creative Commons, a trailblazing organization that advocates for public access to copyrighted and licensed content. He also represented Frances Haugen, the Facebook whistleblower who leaked internal documents revealing the platform’s societal harms. He taught at Stanford Law School, where he founded the Center for Internet and Society.
OpenAI, based in San Francisco, is at the center of an escalating debate about the benefits and dangers of artificial general intelligence, or AGI, the nascent technology that simulates and, some fear, threatens to surpass and replace human intelligence and capabilities. OpenAI says its mission is to “ensure that artificial general intelligence benefits all of humanity.”
Daniel Kokotajlo thinks that’s not quite right. The former OpenAI researcher, hired to forecast the future of AI, said in a blog post that he lost confidence the company would “behave responsibly” when AGI is achieved. Last week, he signed an open letter, “A Right to Warn about Advanced Artificial Intelligence,” which warns that losing control of autonomous AI systems could result in human extinction.
The letter, a manifesto of principles AI companies should ideally adhere to, was signed by ten other former OpenAI employees, four by name and six anonymously, along with two former Google DeepMind employees. Geoffrey Hinton, the so-called “Godfather of AI,” endorsed the letter as well.
The letter’s publication seemed to coincide with Kokotajlo’s interview in a New York Times story, followed by his appearance on the newspaper’s Hard Fork podcast. In the podcast, Kokotajlo explains that it’s “tricky” for him to disclose details about what’s going wrong at OpenAI because he’s still bound by the company’s confidentiality agreements.
Kokotajlo said that OpenAI’s policy was to withhold vested equity from departing employees (in his case totaling about $1.7 million) unless they signed an agreement not to disparage the company. Kokotajlo said he refused to sign it and was prepared to forfeit his equity.
The agreements, first reported by Vox a month earlier, had prompted an immediate response from OpenAI CEO Sam Altman, who posted on X.
“we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement),” Altman wrote in the May 18 post in his typical (and some might say annoying) all lowercase style. “vested equity is vested equity, full stop.”
“this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have,” Altman added.
In the podcast, Kokotajlo also explained how Lessig entered the picture.
After the Vox story, he said, “This lawyer, Larry Lessig, offered pro bono services, and so I engaged with him. Various people were reaching out to me, and I was saying, ‘If you feel similarly to me, here’s a good lawyer, you should get in touch.’”
Lessig told Gazetteer that Kokotajlo is free to talk. Under California law, he said, vested options are treated as wages, which can’t be rescinded when an employee leaves a company unless the employee agrees to terms allowing such restrictions beyond what they’ve already signed.
“I get to walk away with the wages I got, and they can’t say the only way we’re going to give your wages is if you sign something called a non-disparagement agreement,” Lessig said. “That seems pretty clear, and I expect the company is eventually going to get around to clarifying that — and that’s an important problem for them to solve.”
In the Hard Fork podcast, Kokotajlo equivocates, sometimes uncomfortably, when pressed to provide specific examples of how OpenAI is recklessly speeding ahead with its AI development. Hard Fork co-host Casey Newton told Kokotajlo that his warnings weren’t connecting with ordinary people because they use ChatGPT and have “trouble understanding how this is going to end the world.”
Kokotajlo responded that one doesn’t need OpenAI’s internal secrets to understand the dangers posed by AGI. The risks are evident from what is already publicly available, he said in the podcast.
Kokotajlo didn’t point to it, but it’s worth noting that the New York Times doesn’t need to look far for existing examples of the dangerous misuse of AI, no AGI needed. Besides suing OpenAI for allegedly stealing news stories to train its AI systems, the newspaper reported OpenAI’s disclosure last month that its artificial intelligence had been put to deceptive use by “covert influence operations.”
OpenAI’s models were used by Russia to create AI-generated content manipulating information about the country’s invasion of Ukraine, the company said. Other targeted subjects included the war in Gaza, elections in India, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments, OpenAI said.
Despite Lessig’s analysis that OpenAI’s critics are free to talk, and Altman’s apparent promise, the professor said the former employees still feel a cloud of potential legal liability hanging over them if they speak out.
“We’re all trying to be careful about that,” Lessig said. “I can't tell you the advice that I'm giving them, but that's certainly an issue for them to address.”
Was Kokotajlo taking a risk by speaking out on the podcast or otherwise?
“I'm not going to talk about how much of a risk I think he's taking, but I do know that this has been the concern everybody has had — about whether they can address these issues of public import without facing, you know, their own risk of liability,” Lessig said.
The professor said it might be necessary for the former OpenAI employees to sue the company to protect their right to criticize it.
“We’re certainly looking at every legal option,” Lessig said, adding that it has been his hope that the company would step up and do the right thing by giving Kokotajlo and others the freedom to speak out without needing to worry about lawsuits.
“The objective is to get the freedom Daniel is talking about,” Lessig said, “and we're looking at every possible way we can engage to get that, including legislation, or litigation.”