“Help me, Appa!” echoed in the ears of a retired banker from Madurai when he received a call that sent chills down his spine. The voice on the other end, unmistakably his son’s, begged for help. The ransom demand, a mere ₹5,000, seemed suspiciously low, prompting him to check on his son directly. That’s when he discovered the truth: his son was safe all along. It was a scam, and the voice he had heard was generated using AI.
Not everyone has been so fortunate. Last year, PS Radhakrishnan, a retired government employee in Kerala, fell victim to a more sophisticated scam: criminals used deepfake technology to dupe him out of ₹40,000, blending manipulated voice and video to make the fraudulent plea seem real.
In Delhi, cybercriminals used AI voice cloning to con Lakshmi Chand Chawla, a senior citizen, out of ₹50,000. They played a cloned voice of his cousin’s son begging for help and claiming he had been kidnapped. In a panic, Chawla transferred the money via Paytm, only to learn later that the boy had never been in danger.
These chilling incidents highlight the rapid evolution of AI-driven scams, where technology is used to exploit trust. Scammers are no longer faceless; they now sound like loved ones in distress.
The Rise of AI-Driven Fraud
Independent researcher Rohini Lakshane explains the growing danger: “Fraudulent calls using voice cloning are extremely concerning. With India’s push toward digital financial transactions, many people now use systems like UPI, but awareness about digital security is still low. AI-facilitated crime will only exacerbate the issue.”
Scammers gather personal information from social media or phone calls, then use AI to clone voices, making their fake pleas seem legitimate. The emotional manipulation is powerful, often pushing victims to act before they can verify the caller’s identity.
Lakshane also warns of the “liar’s dividend”—a situation where, as these scams become more common, even genuine distress calls may be doubted, potentially delaying help when it’s truly needed.
Facecam.ai and the Evolution of Deepfakes
The threat extends beyond voice cloning. Tools like Facecam.ai have pushed deception even further, allowing real-time video deepfakes with just a single image. Facecam.ai was taken down after backlash over its potential misuse, but similar tools continue to thrive. Programs like Deep-Live-Cam allow users to swap faces in video calls, enabling impersonation of anyone, from celebrities to family members.
The implications are alarming. Fraud, manipulation, and reputational damage are now just a few clicks away. As deepfake technology becomes more accessible, the potential for harm grows exponentially.
In one high-profile case last year, scammers in Hong Kong used a deepfake video to impersonate a company’s CFO, resulting in a $25 million loss. But such scams are not confined to corporations or public figures; anyone can fall victim.
Can Personhood Credentials Solve the Problem?
With AI blurring the line between real and fake, one proposed solution is Personhood Credentials, a system to verify that the person behind a digital interaction is, in fact, real. Srikanth Nadhamuni, the CTO of Aadhaar, advocates for this system, believing biometric verification could help prevent AI-fueled deception. In a paper he co-authored in August, Nadhamuni argued that such credentials could ensure every online interaction is genuine.
But is it that simple? Lakshane raises concerns. “Personhood credentials might stop some scams, but they also open up serious privacy issues. What happens if these credentials are misused or denied?” she asks. India has already seen cases where people were wrongly declared dead or denied access to services due to identity verification errors.
The Balance Between Security and Privacy
As AI technology advances, striking the right balance between security and privacy becomes increasingly important. Personhood credentials may help combat AI-fueled fraud, but they also risk creating a world where digital interaction is tightly controlled by governments or corporations. Those without credentials could be silenced or excluded from essential services, and trust in real emergencies might erode.
As the line between reality and deception continues to blur, society faces difficult questions. Can we find a way to protect against AI-driven scams without sacrificing privacy and freedom? Or will we be forced to navigate a new digital landscape fraught with both technological advancement and its darker, more dangerous consequences?