Meta has resolved a security vulnerability that allowed users of its AI chatbot to access the private prompts and generated responses of others.
Sandeep Hodkasia, founder of the security testing firm AppSecure, told TechCrunch that Meta paid him a $10,000 bug bounty for privately disclosing the flaw, which he reported on December 26, 2024.
Meta implemented a fix on January 24, 2025, and confirmed that there was no evidence of malicious exploitation of the bug.
Hodkasia identified the flaw while investigating how Meta AI enables logged-in users to edit their prompts to regenerate text and images. He discovered that when users edited their prompts, Meta’s back-end servers assigned unique identifiers to the prompts and their AI-generated responses. By analyzing network traffic during the editing process, Hodkasia found that he could alter the unique identifier, allowing access to prompts and responses from other users.
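In security terms, this is the classic pattern of an insecure direct object reference (IDOR). The short Python sketch below illustrates the general attack pattern; the endpoint URL, authorization header, and identifier values are hypothetical placeholders, since the actual request format Meta AI uses has not been published.

```python
import requests

# Hypothetical values: the real endpoint, auth scheme, and ID format
# used by Meta AI are not public.
API_URL = "https://ai.example.com/api/prompts"
HEADERS = {"Authorization": "Bearer ATTACKER_OWN_SESSION_TOKEN"}

def fetch_prompt(prompt_id: int):
    """Request the prompt/response record stored under a given identifier.

    On a vulnerable server, any authenticated session can retrieve any
    record, because ownership of the identifier is never checked.
    """
    resp = requests.get(f"{API_URL}/{prompt_id}", headers=HEADERS, timeout=10)
    return resp.json() if resp.ok else None

# Suppose the attacker's own edit request revealed ID 123456. Simply
# substituting a nearby identifier returns another user's private data.
other_users_record = fetch_prompt(123455)
```

Because the server accepted the attacker's own valid session, the only thing protecting each record was the identifier itself.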
The vulnerability indicated that Meta’s servers were not adequately verifying whether the requesting user was authorized to view specific prompts and responses. Hodkasia noted that the prompt identifiers were “easily guessable,” potentially enabling malicious actors to scrape users’ original prompts by rapidly changing these identifiers with automated tools.
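The standard remedy for this class of bug is a per-record ownership check on the server. The Flask sketch below, with a hypothetical route, data store, and authentication helper, illustrates the kind of verification the report implies was missing; it is a sketch of the general pattern, not Meta's actual fix.

```python
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

# Hypothetical in-memory store mapping prompt IDs to records; a real
# system would do a database lookup here.
PROMPTS = {
    1001: {"owner": "alice", "prompt": "draft a poem", "response": "..."},
    1002: {"owner": "bob", "prompt": "plan a trip", "response": "..."},
}

def current_user() -> str:
    # Placeholder for real session authentication.
    return g.get("user", "alice")

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id: int):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The essential authorization check: reject requests for records the
    # authenticated user does not own. Omitting this check is what turns
    # guessable identifiers into an IDOR vulnerability.
    if record["owner"] != current_user():
        abort(403)
    return jsonify(record)
```

With this check in place, guessing identifiers yields only 403 responses, so sequential IDs alone no longer expose other users' data.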
In a statement to TechCrunch, Meta confirmed the bug was fixed in January and emphasized that no abuse had been detected. Meta spokesperson Ryan Daniels stated that the company rewarded the researcher for his discovery.
This news comes as tech giants race to launch and enhance their AI products, amid growing concerns over security and privacy risks. Meta AI’s stand-alone app, released earlier this year to compete with rivals like ChatGPT, faced challenges after some users unintentionally shared what they believed were private conversations with the chatbot.
SOURCE: TechCrunch