OpenAI has reportedly revamped its security operations to safeguard against corporate espionage. According to the Financial Times, the company accelerated its security measures following the release of a competing model by Chinese startup DeepSeek in January, which OpenAI claims improperly copied its models using “distillation” techniques.
The enhanced security protocols include “information tenting” policies that restrict staff access to sensitive algorithms and new products. For instance, during the development of OpenAI’s o1 model, only verified team members who were briefed on the project could discuss it in shared office areas, as reported by the FT.
Additionally, OpenAI is now isolating proprietary technology on offline computer systems, implementing biometric access controls (including fingerprint scans for employees), and enforcing a “deny-by-default” internet policy that requires explicit approval for external connections. The company has also increased physical security at its data centers and expanded its cybersecurity team.
These changes reflect broader concerns about foreign adversaries attempting to steal OpenAI’s intellectual property. However, amid ongoing talent-poaching wars among American AI companies and frequent leaks of CEO Sam Altman’s internal comments, OpenAI may also be addressing security challenges from within.
SOURCE: TECH CRUNCH