A hacker has stolen OpenAI's secrets, raising fears that China could do the same

Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s artificial intelligence technologies.

The hacker stole details from discussions in an online forum where employees talked about OpenAI’s latest technologies, but was unable to break into the systems where the company hosts and develops its artificial intelligence, according to two people familiar with the incident.

OpenAI executives disclosed the incident to employees at an all-hands meeting at the company’s San Francisco offices in April 2023, according to the two people, who discussed sensitive company information on the condition of anonymity.

But executives decided not to share the news publicly because no customer or partner information was stolen, the two people said. Executives did not consider the incident a national security threat because they believed the hacker was a private individual with no known ties to a foreign government. The company did not notify the FBI or anyone else in law enforcement.

For some OpenAI employees, the news raised fears that foreign adversaries like China could steal AI technology that, while now primarily a business and research tool, could ultimately jeopardize U.S. national security. It also prompted questions about how seriously OpenAI was taking security and exposed rifts within the company over the risks of AI.

After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technologies do not cause serious harm, sent a memo to the company’s board of directors, arguing that OpenAI was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Leopold Aschenbrenner, a former OpenAI researcher, touched on the security breach in a podcast last month, reiterating his concerns. Credit: via YouTube

Mr. Aschenbrenner said OpenAI fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He touched on the breach in a recent podcast, but details of the incident have not previously been reported. He said OpenAI’s security was not strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his departure,” said an OpenAI spokeswoman, Liz Bourgeois. Referring to the company’s efforts to build artificial general intelligence, a machine that can do everything the human brain can do, she added: “While we share his commitment to building safe AI, we disagree with many of the statements he has since made about our work.”

Fears that a hack of an American tech company could have ties to China are not unreasonable. Last month, Brad Smith, Microsoft’s president, testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a wide-ranging attack on federal government networks.

However, under federal and California law, OpenAI cannot bar people from working at the company based on their nationality, and policy researchers have said that excluding foreign talent from U.S. projects could significantly impede the progress of AI in the United States.

“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s chief security officer, told The New York Times in an interview. “There are risks involved, and we need to understand those.”

(The Times is suing OpenAI and its partner, Microsoft, alleging that they are infringing its copyright on news content related to artificial intelligence systems.)

OpenAI isn’t the only company building ever more powerful systems using rapidly improving AI technology. Some of them, most notably Meta, which owns Facebook and Instagram, are freely sharing their designs with the world as open source software. They believe that the dangers posed by current AI technologies are small, and that sharing code allows engineers and researchers across the industry to identify and solve problems.

Today’s AI systems can help spread misinformation online, including text, still images, and increasingly video. They’re also starting to take away some jobs.

Companies like OpenAI and its competitors Anthropic and Google add protections to their AI apps before offering them to individuals and businesses, in the hopes of preventing people from using the apps to spread misinformation or cause other problems.

But there’s little evidence that today’s AI technologies pose a significant risk to national security. Studies by OpenAI, Anthropic, and others over the past year showed that AI was not significantly more dangerous than search engines. Daniela Amodei, Anthropic’s cofounder and the company’s president, has said that the company’s latest AI technology would not pose a significant risk if its designs were stolen or freely shared with others.

“If it were owned by someone else, could it be extremely damaging to a large part of society? Our answer is, ‘No, probably not,’” she told The Times last month. “Could it accelerate something for a bad actor in the future? Maybe. It’s really speculative.”

But researchers and tech executives have long feared that AI could one day fuel the creation of new biological weapons or help break into government computer systems. Some even believe it could destroy humanity.

Several companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a safety and security committee to explore how it should address the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to OpenAI’s board of directors.

“We started investing in security years before ChatGPT,” said Mr. Knight. “We’re on a journey to not only understand and anticipate risks, but also to build our resilience.”

Federal officials and state lawmakers are also pushing government regulations that would bar companies from releasing certain AI technologies and fine them millions if their technologies cause harm. But experts say those dangers are still years or even decades away.

Chinese companies are building their own systems that are nearly as powerful as leading U.S. systems. By some measures, China has eclipsed the U.S. as the world’s largest producer of AI talent, with the country producing nearly half of the world’s top AI researchers.

“It’s not crazy to think that China will soon surpass the United States,” said Clément Delangue, CEO of Hugging Face, a company that hosts many of the world’s open-source AI projects.

Some researchers and national security officials argue that the mathematical algorithms underlying current AI systems, while not dangerous today, could become so, and are calling for tighter controls on AI laboratories.

“Even if the worst-case scenarios are relatively low probability, if they have a high impact, then it’s our responsibility to take them seriously,” Susan Rice, a former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, said at an event in Silicon Valley last month. “I don’t think it’s science fiction, as many like to say.”
