A recent security issue underscores the challenges of maintaining safety in the age of AI. Researchers have identified a critical vulnerability in the NLWeb protocol, which Microsoft touted just a few months ago at its Build conference. Designed to provide ChatGPT-like search capabilities across websites and apps, NLWeb has already been deployed with clients such as Shopify, Snowflake, and TripAdvisor.
The vulnerability allows remote attackers to read sensitive files, including system configuration details and API keys for OpenAI or Gemini. Alarmingly, it’s a classic path traversal flaw, exploitable by simply visiting a malformed URL. While Microsoft has addressed the flaw, it raises questions about how such a basic issue could slip through the cracks amid the company’s renewed focus on security.
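The exact NLWeb bug hasn’t been published in detail, so the sketch below is a hypothetical, deliberately vulnerable file handler, not NLWeb’s actual code (the names `STATIC_ROOT` and `serve_file_vulnerable` are invented for illustration). It shows the general shape of a path traversal flaw: untrusted input is joined onto a base directory without checking where the resolved path lands, so encoded `../` segments in a URL walk right out of the intended folder.

```python
# Hypothetical illustration of a classic path traversal flaw --
# NOT NLWeb's actual code. The handler joins untrusted input onto
# a base directory without verifying where the path resolves.
from pathlib import Path

STATIC_ROOT = Path("/srv/app/static")

def serve_file_vulnerable(requested: str) -> bytes:
    # BUG: "../" segments in `requested` survive the join, so
    # "../.env" resolves to /srv/app/.env, outside STATIC_ROOT.
    target = STATIC_ROOT / requested
    return target.read_bytes()

# An attacker needs nothing more than a crafted URL, e.g.:
#   GET /static/..%2f.env
# which decodes to "../.env" and hands back the server's secrets.
```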
“This case study serves as a critical reminder that as we build new AI-powered systems, we must reassess the impact of classic vulnerabilities, which can now jeopardize not only servers but also the ‘brains’ of AI agents,” stated Aonan Guan, a senior cloud security engineer at Wyze, who, along with Lei Wang, reported the flaw to Microsoft.
Guan and Wang flagged the vulnerability to Microsoft on May 28, shortly after NLWeb’s launch. Microsoft issued a fix on July 1 but has not assigned a Common Vulnerabilities and Exposures (CVE) identifier to the issue, which would help raise awareness and make the flaw easier to track. The researchers have urged Microsoft to issue one, but the company has been hesitant.
“This issue was responsibly reported, and we have updated the open-source repository,” said Microsoft spokesperson Ben Hope in a statement to The Verge. “Microsoft does not use the affected code in any of our products. Customers utilizing the repository are automatically protected.”
Guan cautioned that NLWeb users “must pull and vend a new build version to eliminate the flaw,” as any public-facing NLWeb deployment remains susceptible to unauthorized access to .env files containing crucial API keys.
While leaking a .env file can be serious for any web application, Guan believes it’s “catastrophic” for an AI agent. “These files hold API keys for LLMs like GPT-4, which serve as the agent’s cognitive engine,” he explained. “An attacker not only steals a credential but effectively usurps the agent’s ability to think, reason, and act, potentially resulting in significant financial losses from API abuse or the creation of malicious clones.”
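Microsoft hasn’t published the details of its patch, but the standard mitigation for this class of bug is to resolve the requested path and refuse anything that escapes the permitted root, with an extra rule against serving dotfiles like .env at all. Below is a minimal sketch hardening the hypothetical handler from earlier; again, these are illustrative names, not NLWeb’s code.

```python
# A minimal hardened version of the hypothetical handler above --
# a sketch of the standard mitigation, not Microsoft's actual patch.
from pathlib import Path

STATIC_ROOT = Path("/srv/app/static").resolve()

def serve_file_safe(requested: str) -> bytes:
    # Resolve symlinks and ".." segments first, then confirm the
    # result still lives under STATIC_ROOT before touching the file.
    target = (STATIC_ROOT / requested).resolve()
    if not target.is_relative_to(STATIC_ROOT):  # Python 3.9+
        raise PermissionError("path escapes the static root")
    # Defense in depth: never serve dotfiles such as .env.
    if any(part.startswith(".") for part in target.relative_to(STATIC_ROOT).parts):
        raise PermissionError("dotfiles are not served")
    return target.read_bytes()
```

Checking the resolved path, rather than the raw string, is what defeats encoded `../` sequences; the dotfile rule is defense in depth, so that even a future bypass still can’t reach credential files.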
Meanwhile, Microsoft is adding native support for the Model Context Protocol (MCP) to Windows, even as security researchers warn of the risks. If the NLWeb vulnerability is any indication, the company will need to balance the rapid rollout of new AI features against the security scrutiny they demand.
SOURCE: THE VERGE