Deepfake Video Scams: How to Spot and Protect Yourself
Learn to identify AI-generated deepfake videos used in scams. Understand how criminals use this technology and protect yourself and your loved ones.
By Marcin Piekarski • Founder & Web Developer • builtweb.com.au
AI-Assisted by: Prism AI (the collaborative AI assistance used in creating this guide)
Last Updated: 7 December 2025
TL;DR
Deepfakes use AI to generate realistic fake videos of real people. Scammers use them to impersonate executives, family members, and celebrities for fraud. Protect yourself by verifying requests through a separate channel, watching for visual glitches, and treating urgent requests for money or information with suspicion.
Why it matters
Deepfake technology has become accessible to criminals. In 2024, a Hong Kong company lost $25 million when an employee was tricked by a deepfake video call impersonating the company's CFO. These scams are becoming more common and more convincing. Anyone can be targeted.
How deepfake scams work
The technology
AI analyzes videos and photos of a person to learn:
- Their facial features and movements
- Their voice patterns and speech
- Their mannerisms and expressions
Then it generates new video that looks and sounds like that person saying whatever the scammer wants.
Common attack types
Executive impersonation
- Fake CEO/CFO on video calls
- Urgent wire transfer requests
- "Keep this confidential" pressure
- Targets: Finance teams, assistants
Family emergency scams
- Fake video of relative in distress
- Requests for bail money or emergency funds
- Creates panic to bypass rational thinking
- Targets: Elderly family members
Romance and trust scams
- Fake video calls with romantic interests
- Building relationships for financial fraud
- Used when victim requests video proof
- Targets: Online dating users
Celebrity endorsement fraud
- Fake videos of celebrities promoting scams
- Cryptocurrency schemes
- Investment frauds
- Targets: Social media users
Red flags to watch for
Visual clues
Current deepfakes often have telltale signs:
- Unnatural blinking — Too slow, too fast, or irregular
- Skin texture — Too smooth or plastic-looking
- Edge artifacts — Blurring around face edges
- Lighting inconsistencies — Face lit differently than background
- Hair issues — Strands don't move naturally
- Accessory glitches — Glasses or earrings behaving strangely
- Background warping — Slight distortions when head moves
Audio clues
- Robotic quality — Slightly unnatural speech patterns
- Sync issues — Lips don't quite match audio
- Breathing — Unnatural or missing breath sounds
- Background noise — Inconsistent ambient sounds
Behavioral clues
- Urgency — "This must happen now"
- Secrecy — "Don't tell anyone about this"
- Unusual requests — Asking for things they normally wouldn't
- Avoiding verification — Resisting callbacks or confirmation
How to protect yourself
The callback rule
Never act on a video request without verification through a separate channel.
If your "CEO" calls asking for a wire transfer:
- Say you'll call them right back
- Use a number you already have (not one they give you)
- Confirm the request directly
- If they resist verification, it's likely a scam
Family code words
Establish a family verification system:
- Create a secret code word only family knows
- Use it to verify emergency calls
- Change it if it might be compromised
- Never share it via text or email
Organizational safeguards
For businesses:
- Require multi-person approval for large transfers
- Establish verification protocols for video requests
- Train employees on deepfake awareness
- Create clear escalation procedures
Technical measures
- Enable two-factor authentication everywhere
- Be cautious about video calls from unknown numbers
- Verify meeting links come from legitimate sources
- Consider deepfake detection tools for high-risk situations
What to do if targeted
Don't panic
- Take a breath
- Don't act immediately
- The urgency is manufactured
Verify
- Contact the person through known channels
- Call their verified phone number
- Check with others who might know
Document
- Save the video if possible
- Note the contact method used
- Record any phone numbers or emails
Report
- FTC (US): reportfraud.ftc.gov
- IC3 (FBI): ic3.gov
- Action Fraud (UK): actionfraud.police.uk
- Your local police
- Your organization's security team
What's next
Stay informed about AI safety:
- AI Safety Basics — General safety principles
- Scam Watch — Latest AI scams to watch for
- AI Ethics — Broader ethical considerations
Frequently Asked Questions
Can any video be deepfaked?
In principle, yes—but creating a convincing deepfake requires training material: public videos and photos of the target. Public figures with lots of video content are easiest to fake. Private individuals with minimal online presence are harder to fake convincingly.
Are there tools to detect deepfakes?
Yes, deepfake detection tools exist (Intel FakeCatcher, Microsoft Video Authenticator), but they're not perfect. The best protection remains behavioral—verifying through separate channels rather than trusting any single video.
Is creating a deepfake illegal?
It depends on jurisdiction and intent. Using deepfakes for fraud is illegal everywhere. Creating them for satire or education may be legal. Many jurisdictions are passing specific deepfake laws, especially around elections and non-consensual intimate content.
About the Authors
Marcin Piekarski • Founder & Web Developer
Marcin is a web developer with 15+ years of experience, specializing in React, Vue, and Node.js. Based in Western Sydney, Australia, he's worked on projects for major brands including Gumtree, CommBank, Woolworths, and Optus. He uses AI tools, workflows, and agents daily in both his professional and personal life, and created Field Guide to AI to help others harness these productivity multipliers effectively.
Credentials & Experience:
- 15+ years web development experience
- Worked with major brands: Gumtree, CommBank, Woolworths, Optus, Nestlé, M&C Saatchi
- Founder of builtweb.com.au
- Daily AI tools user: ChatGPT, Claude, Gemini, AI coding assistants
- Specializes in modern frameworks: React, Vue, Node.js
Prism AI • AI Research & Writing Assistant
Prism AI is the AI ghostwriter behind Field Guide to AI—a collaborative ensemble of frontier models (Claude, ChatGPT, Gemini, and others) that assist with research, drafting, and content synthesis. Like light through a prism, human expertise is refracted through multiple AI perspectives to create clear, comprehensive guides. All AI-generated content is reviewed, fact-checked, and refined by Marcin before publication.
Capabilities:
- Powered by frontier AI models: Claude (Anthropic), GPT-4 (OpenAI), Gemini (Google)
- Specializes in research synthesis and content drafting
- All output reviewed and verified by human experts
- Trained on authoritative AI documentation and research papers
Transparency Note: All AI-assisted content is thoroughly reviewed, fact-checked, and refined by Marcin Piekarski before publication. AI helps with research and drafting, but human expertise ensures accuracy and quality.
Related Guides
AI and Privacy: What You Need to Know
Beginner • AI tools collect data to improve—but what happens to your information? Learn how to protect your privacy while using AI services.
AI and Kids: A Parent's Safety Guide
Beginner • Kids are using AI for homework, entertainment, and chatting. Learn how to keep them safe, teach responsible use, and set healthy boundaries.
AI Failure Modes and Mitigations: When AI Goes Wrong
Intermediate • Understand how AI systems fail and how to prevent failures. From hallucinations to catastrophic errors—learn to anticipate, detect, and handle AI failures gracefully.