With just a few seconds of audio, someone can clone a victim’s voice, call their bank, and potentially get access to … everything. Vocal deepfakes have gotten very good, but so has the technology built to detect them.
AI voice fraud is a prevalent and costly issue for Americans.
Banks are primary targets for AI voice fraud due to financial incentives.
Multi-factor authentication is crucial to defend against evolving AI threats.
Detection tools identify AI-generated content by analyzing distinctive acoustic features.
Regulations are needed for mandatory disclosure of AI-generated online content.
Deep Dive
Millions of Americans have reportedly lost money to AI voice scams, often thousands of dollars per incident.
The episode introduces a special series on the business of crime, focusing on defense against AI voice clones.
The discussion highlights the emerging use of AI technology to combat AI-generated fraud.
Banks are a primary target for AI voice fraud due to the potential for substantial financial gain.
Criminals exploit increased digitalization by recording just a few seconds of someone’s voice to create an AI clone.
These cloned voices are used to bypass voice verification systems on phone lines, enabling unauthorized transactions or account access.
Ben Coleman co-founded Reality Defender in 2021 to combat schemes like AI voice cloning, detecting AI by analyzing harmonic structures.
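Reality Defender’s actual detection models are proprietary, but the idea of inspecting harmonic structure can be illustrated with a toy example. The sketch below (all names and thresholds are hypothetical, not from the episode) computes how much of a signal’s spectral energy sits at integer multiples of a pitch: a simple harmonic-structure feature, nothing like a production deepfake detector.

```python
import numpy as np

def harmonic_energy_ratio(signal, sample_rate, f0, n_harmonics=5):
    """Toy feature: fraction of spectral energy near multiples of pitch f0.
    Real detectors use learned models; this only illustrates the idea of
    examining a voice's harmonic structure."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    harmonic = 0.0
    for k in range(1, n_harmonics + 1):
        # Sum a small window of bins around each harmonic of f0.
        idx = int(np.argmin(np.abs(freqs - k * f0)))
        harmonic += spectrum[max(idx - 2, 0):idx + 3].sum()
    return harmonic / spectrum.sum()

# A voiced-like tone (200 Hz fundamental plus harmonics) concentrates
# energy at multiples of f0; white noise does not.
sr = 16000
t = np.arange(sr) / sr
voiced = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 4))
noise = np.random.default_rng(0).standard_normal(sr)
print(harmonic_energy_ratio(voiced, sr, 200))  # high: energy is harmonic
print(harmonic_energy_ratio(noise, sr, 200))   # low: energy is spread out
```

This is only a sketch of the underlying signal-processing concept; commercial detectors combine many such features with trained classifiers.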
Reality Defender advises financial institutions to move beyond voice biometrics as a sole security measure, deeming it a significant vulnerability.
Mark Kwapazewski from PNC Bank emphasizes the necessity of multi-factor authentication and layered security.
PNC Bank employs multiple security layers, including location data, device information, personal details, and text message verification codes.
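PNC’s real systems are of course proprietary; the minimal sketch below (hypothetical names and a made-up threshold) only illustrates the layered-security principle described above, where no single factor, including a voice match, is enough on its own to authorize access.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    # Hypothetical factors, echoing the layers mentioned above.
    voice_match: bool      # voice biometric check
    device_known: bool     # recognized device information
    location_usual: bool   # location data consistent with the customer
    otp_valid: bool        # text-message verification code

def authenticate(attempt: LoginAttempt, required: int = 3) -> bool:
    """Toy layered check: several independent factors must pass, so a
    cloned voice alone cannot get through. Illustrative only."""
    passed = sum([attempt.voice_match, attempt.device_known,
                  attempt.location_usual, attempt.otp_valid])
    return passed >= required

# A cloned voice with no other matching factors is rejected:
print(authenticate(LoginAttempt(True, False, False, False)))  # False
# Voice plus known device, usual location: enough layers pass:
print(authenticate(LoginAttempt(True, True, True, False)))    # True
```

The design point is the one Kwapazewski makes: defeat of any single layer, such as voice verification, should never be sufficient by itself.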
Banks like PNC are implementing technology to block fraudulent calls, preventing scammers from spoofing bank numbers and misleading customers.
Voice cloning technology is also used to impersonate loved ones in distress; a 'safe word' strategy is suggested for identity verification.
Ben Coleman of Reality Defender advocates for vetting all online content for AI generation, citing rapid technological advancement outpacing regulation.
Coleman testified before Congress to advance regulations requiring disclosure of AI-generated content in online communications, such as Instagram, Zoom calls, and WhatsApp voice memos.