What began as a harmless trend, turning selfies into lovable "Studio Ghibli-style" AI images, has taken a sinister turn. AI-powered tools, once celebrated for artistic creativity, are now being manipulated to craft fake identities, forge documents, and plan digital scams. This isn't science fiction. It's happening right now, and India is already feeling the ripple effects. AI tools like ChatGPT and image generators have captured the public imagination.
But while most users explore them for productivity and entertainment, cybercriminals have reverse-engineered their potential for deception. By combining text-based AI prompts with image manipulation, fraudsters are producing shockingly realistic fake IDs, particularly Aadhaar and PAN cards.
The Rise of AI-Fueled Scams
Using minimal details such as a name, date of birth, and address, attackers have been able to produce near-perfect replicas of official identification documents. Social media platforms like X (formerly Twitter) have been flooded with examples. One user, Yaswanth Sai Palaghat, raised alarm bells, saying:
"ChatGPT is generating fake Aadhaar and PAN cards instantly, which is a serious security risk. This is why AI should be regulated to some extent."

Another user, Piku, shared a chilling revelation:
"I asked AI to generate an Aadhaar card with just a name, date of birth, and address… and it created a nearly perfect copy. Now anyone can make a fake version… We often discuss data privacy, but who is selling these Aadhaar and PAN card datasets to AI companies to develop such models?"
While AI tools don't use actual personal information, the accuracy with which they mimic formats, fonts, and layout styles suggests that they have been exposed to real-world data, possibly through public leaks or open-source training materials. The Airoli Aadhaar incident is a notable example that might have provided a template for such operations.
Hackers are also coupling these digital forgeries with real data scavenged from discarded papers, old printers, or e-waste dumps. The result? Complete fake identities that can pass basic verification, leading to SIM card fraud, fake bank accounts, rental scams, and more.
Let that sink in: the same tools that generate anime-style selfies are now being weaponized to commit identity theft.
The Viral Shreya Ghoshal “Leak” That Wasn’t
While document fraud is worrying, misinformation and phishing campaigns are evolving with similar sophistication. Just last week, the Indian internet was abuzz with a supposed "leak" involving popular playback singer Shreya Ghoshal. Fans were shocked by headlines hinting at courtroom controversies and career-ending moments. But it was all fake.
According to cyber intelligence analyst Anmol Sharma, the leak was never real; it was a hyperlink. Sharma traced the viral content to newly created scam websites posing as news outlets, such as replaceyourselfupset.run and faragonballz.com.
"These websites were set up to look like credible news sources but were actually redirecting people to phishing pages and shady investment scams," he explained.

These sites mimicked trusted media layouts and used AI-generated images of Ghoshal behind bars or in tears to evoke emotional responses. The goal? To drive traffic to malicious domains that steal personal data or push crypto scams under fake brands like Lovarionix Liquidity.
Fake Doctors, Real Deaths
In an even more harrowing case, a man impersonating renowned UK-based cardiologist Dr. N John Camm performed over 15 heart surgeries at a respected hospital in Madhya Pradesh. Identified as Narendra Yadav, the impersonator fooled staff and patients alike at Mission Hospital in Damoh, leading to multiple patient deaths between December 2024 and February 2025.
According to official records, at least two fatalities have been linked to Yadav's actions. Victims' families, including Nabi Qureshi and Jitendra Singh, have recounted heartbreaking experiences involving aggressive surgeries and vanishing doctors.
While the case is still under investigation, it highlights the terrifying extent to which digital impersonation, possibly aided by fake credentials or manipulated documents, can be taken offline, resulting in real-world harm.
A Need for Privacy-Conscious AI Use
The growing misuse of AI has sparked concern among cybersecurity experts. Ronghui Gu, founder of CertiK, warns:
"Users should approach AI-based image generators with a healthy level of caution, particularly when it comes to sharing biometric information like facial images. Many of these platforms are storing user data to train their models, and without clear policies, there's no way to know whether images are being repurposed or shared with third parties."
The warning extends beyond image data. As AI tools become more integrated into everyday applications, from onboarding processes to document verification, the risk of misuse rises, especially in jurisdictions with weak data governance.
Ronghui Gu advises users to:
- Thoroughly review privacy policies before uploading data.
- Avoid sharing high-resolution or identifiable images.
- Use pseudonyms or secondary email addresses.
- Ensure the platform complies with data protection laws like GDPR or CCPA.
"Privacy-conscious usage requires a proactive approach and an understanding that convenience should never come at the cost of control over personal data," Ronghui Gu added.
A HiddenLayer report reinforces this, revealing that 77% of companies using AI have already faced security breaches, potentially exposing sensitive customer data. The takeaway? Even legitimate use of AI tools carries hidden risks, especially if the backend systems aren't secure.
A New Age of Cybercrime: Where a Selfie Starts the Scam
What started as playful AI-generated art is now being hijacked for fraud, identity theft, and misinformation. The same tools that power creativity are now powering chaos, and cybercriminals are getting smarter by the day.
India's digital ecosystem is becoming ground zero for these AI-driven scams. And the scariest part? This is just the beginning.
We can't afford to marvel at the technology while ignoring its darker edge. Regulators must move beyond lip service. Tech companies must be held accountable. And cybersecurity professionals need to treat generative AI not as a novelty, but as a real threat vector.
Because in this era, even something as harmless as a selfie could be weaponized.
And if we're not paying attention now, we'll be outrun by those who are.
Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.