Hackers leaked 72,000+ selfies, IDs, and DMs from Tea's unsecured database.
The private data of women using the app is now searchable and spreading online.
The original leaker said lax "vibe coding" may have been one of the reasons the app was left wide open to attack.
The viral women-only dating-safety app Tea suffered a massive data breach this week after users on 4chan discovered its backend database was completely unsecured: no password, no encryption, nothing.
The result? Over 72,000 private images, including selfies and government IDs submitted for user verification, were scraped and spread online within hours. Some were mapped and made searchable. Private DMs were leaked. The app designed to protect women from dangerous men had just exposed its entire user base.
The exposed data, totaling 59.3 GB, included:
13,000+ verification selfies and government-issued IDs
Tens of thousands of images from messages and public posts
IDs dated as recently as 2024 and 2025, contradicting Tea's claim that the breach involved only "old data"
4chan users initially posted the files, but even after the original thread was deleted, automated scripts kept scraping the data. On decentralized platforms like BitTorrent, once it's out, it's out for good.
From viral app to total meltdown
Tea had just hit #1 on the App Store, riding a wave of virality with over 4 million users. Its pitch: a women-only space to "gossip" about men for safety purposes, though critics saw it as a "man-shaming" platform wrapped in empowerment branding.
One Reddit user summed up the schadenfreude: "Create a women-centric app for doxxing men out of envy. End up accidentally doxxing the women customers. I love it."
Verification required users to upload a government ID and a selfie, supposedly to keep out fake accounts and non-women. Now those documents are in the wild.
The company told 404 Media that "[t]his data was originally stored in compliance with law enforcement requirements related to cyber-bullying prevention."
Decrypt reached out but has not yet received an official response.
The culprit: 'Vibe coding'
Here's what the original hacker wrote: "This is what happens when you entrust your personal information to a bunch of vibe-coding DEI hires."
"Vibe coding" is when developers type "make me a dating app" into ChatGPT or another AI chatbot and ship whatever comes out. No security review, no understanding of what the code actually does. Just vibes.
Apparently, Tea's Firebase bucket had zero authentication because that's what AI tools can generate by default. "No authentication, no nothing. It's a public bucket," the original leaker said.
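For context, Firebase storage access is governed by a security-rules file, and a rule that allows public reads and writes leaves the bucket open to anyone with the URL. The snippet below is a minimal sketch (not Tea's actual configuration, which has not been published) showing the difference between a wide-open rule and one that at least requires a signed-in user:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // WIDE OPEN: anyone on the internet can read and write every file.
    // match /{allPaths=**} {
    //   allow read, write: if true;
    // }

    // Minimal fix: require an authenticated user for all access.
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
```

Even this stricter rule is only a baseline; production apps typically scope access further, for example restricting each user to their own files.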
It may have been vibe coding, or simply poor coding. Either way, overreliance on generative AI is only growing.
This isn't an isolated incident. Earlier in 2025, the founder of SaaStr watched its AI agent delete the company's entire production database during a "vibe coding" session. The agent then created fake accounts, generated hallucinated data, and lied about it in the logs.
Meanwhile, researchers from Georgetown University found that 48% of AI-generated code contains exploitable flaws, yet 25% of Y Combinator startups use AI for their core features.
So even though vibe coding is useful for occasional tasks, and tech behemoths like Google and Microsoft preach the AI gospel, claiming their chatbots write a substantial share of their code, the average user and small entrepreneurs may be safer sticking to human coding, or at least reviewing their AI's work very, very closely.
"Vibe coding is awesome, but the code these models generate is full of security holes and can be easily hacked," computer scientist Santiago Valdarrama warned on social media.
Vibe-coding is awesome, but the code these models generate is full of security holes and can be easily hacked.
This will be a live, 90-minute session where @snyksec will build a demo application using Copilot + ChatGPT and live hack it to find every weakness in the generated…
— Santiago (@svpino) March 17, 2025
The problem gets worse with "slopsquatting": AI suggests packages that don't exist, hackers then register those names and fill them with malicious code, and developers install them without checking.
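One basic defense is simply verifying that an AI-suggested dependency actually exists in the official registry before installing it. Here is a minimal Python sketch of that idea (the helper names `exists_on_pypi` and `vet_requirements` are my own, not from any standard tool; the PyPI JSON API endpoint is real):

```python
import json
import urllib.error
import urllib.request


def exists_on_pypi(name: str) -> bool:
    """Return True if the package is published on PyPI (makes a network call)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # PyPI returns 404 for unknown package names.
        return False


def vet_requirements(names, checker=exists_on_pypi):
    """Return the suggested package names the checker cannot find.

    Anything in the returned list deserves manual review before `pip install`:
    it may be a typo, a hallucinated package, or a slopsquatted lookalike.
    """
    return [n for n in names if not checker(n)]
```

Existence alone proves little, since a squatter may have already registered the hallucinated name, so suspicious hits still warrant checking the package's age, author, and download history.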
Tea users are scrambling, and some IDs already appear on searchable maps. Signing up for credit monitoring may be a good idea for users trying to prevent further damage.