OpenAI Security Breach: Hacker Steals Internal AI Design Details from ChatGPT Maker

In early 2023, sources said a hacker infiltrated OpenAI's internal messaging systems, stealing information about AI designs. The breach, not publicly disclosed, has sparked concerns about the company's security measures and potential national security risks.

Sources Reveal Hacker Stole AI Design Details from OpenAI's Employee Forum, Security Concerns Rise

Two sources acquainted with the incident reported that the hacker obtained information from an online forum where employees discussed OpenAI's most recent technologies. The hacker did not, however, gain access to the systems where the company houses and develops its artificial intelligence.

According to a recent report by The New York Times, citing two individuals who discussed confidential information about the company on the condition of anonymity, OpenAI executives disclosed the incident to employees during an all-hands meeting at the company's San Francisco offices in April 2023 and informed the board of directors.

However, according to the two individuals, the executives elected not to disclose the information to the public, believing that no information regarding consumers or partners had been compromised. The executives did not consider the incident a threat to national security, as they thought the perpetrator was a private individual with no known affiliations with a foreign government. The company did not inform the F.B.I. or any other law enforcement agency.

For certain OpenAI employees, the news sparked concerns that foreign adversaries, such as China, could acquire A.I. technology that, while primarily used for research and work, could potentially jeopardize U.S. national security. This incident also raised concerns regarding the severity of OpenAI's security protocols and revealed internal divisions within the organization regarding the potential hazards of artificial intelligence.

After the breach, Leopold Aschenbrenner, a technical program manager at OpenAI dedicated to preventing the severe harm caused by future A.I. technologies, sent a memo to the company's board of directors. He contended that the company needed to take more measures to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Former OpenAI Manager Alleges Political Motives Behind Termination Following Security Breach Disclosure

Mr. Aschenbrenner claimed that OpenAI terminated him this spring for disclosing additional information outside the organization, and he contended that his termination was politically motivated. He referenced the breach in a recent podcast, though the specifics of the incident had not been previously disclosed. He stated that OpenAI's security measures were not strong enough to protect key secrets if a foreign actor were to infiltrate the company.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” an OpenAI spokeswoman, Liz Bourgeois, said. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, “While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

It is not unreasonable to harbor concerns that a breach of an American technology company may have connections to China. Last month, Brad Smith, the president of Microsoft, testified on Capitol Hill regarding how Chinese hackers exploited the technology giant's systems to conduct a comprehensive attack on federal government networks.

Nevertheless, federal and California law prohibit OpenAI from excluding individuals from employment at the company based on their nationality. Policy researchers have also suggested that excluding foreign talent from U.S. projects could impede the advancement of AI in the United States.

“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s head of security, told The New York Times in an interview. “It comes with some risks, and we need to figure those out.”

OpenAI is not the only organization developing increasingly powerful systems built on swiftly evolving AI technology. Some of these companies, most notably Meta, the owner of Facebook and Instagram, readily share their designs as open-source software with the rest of the world. They argue that the risks posed by today's AI technologies are minimal, and that sharing code enables engineers and researchers across the industry to identify and resolve issues.
