The rise of artificial intelligence has brought many wonders, but it has also opened the door to new types of cyber threats. One of the most concerning? Deepfakes. Once mainly a worry for social media, elections, and public discourse, deepfakes—hyper-realistic fake video and audio created with AI—now pose a serious threat to businesses, especially in the financial sector.
Deepfake Dangers on the Rise
Imagine receiving a call from your boss, asking you to transfer funds or share sensitive information. Their voice sounds exactly right, but in reality, it’s a fraudster using a deepfake audio clip. This isn’t science fiction; it’s a reality that companies are grappling with today.
Bill Cassidy, CIO of New York Life, highlighted the novelty and danger of these AI-enabled threats. Banks and financial services, in particular, are in the crosshairs. According to Kyle Kappel, U.S. Leader for Cyber at KPMG, the financial sector is facing these threats head-on as technology rapidly evolves.
The speed of this evolution was underlined by OpenAI’s recent demonstration of Voice Engine, a tool that can mimic a human voice from just a 15-second clip. Recognizing the potential for misuse, however, OpenAI has held back on releasing the technology to the public.
Real-World Impacts
The stakes are high. For instance, Chase Bank was tricked by an AI-generated voice in an experiment, highlighting the vulnerability of current systems. Deepfake incidents in the fintech sector are also surging: Sumsub reported a 700% increase in 2023 alone.
Fighting Back with Technology
The financial industry isn’t sitting still. Companies are actively seeking new solutions to counter these threats. New York Life, for example, is exploring startups and emerging technologies that can detect and combat deepfakes.
To strengthen security, some banks are reworking their identity verification processes. Alex Carriles of Simmons Bank described a shift away from static photo uploads for ID verification toward requiring customers to take live photos and selfies through the bank’s app, a basic form of liveness check. Capturing the image in real time makes it harder for scammers to pass off AI-generated images as real customers.
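To make the idea concrete, here is a minimal, hypothetical sketch in Python of how a challenge-based liveness check might gate a selfie verification step. The function names, thresholds, and checks are illustrative assumptions, not Simmons Bank’s actual implementation; a production system would rely on a vetted liveness-detection and face-matching SDK rather than the placeholder fields shown here.

```python
import time
import secrets
from dataclasses import dataclass

# Illustrative sketch only: a server-issued, one-time challenge is bound to a
# freshly captured selfie, so a reused or static image cannot pass verification.

CHALLENGE_TTL_SECONDS = 120  # how long an issued challenge stays valid

@dataclass
class SelfieCapture:
    challenge_id: str           # nonce the app embedded in this capture session
    captured_at: float          # client capture timestamp (epoch seconds)
    passed_motion_check: bool   # e.g., user blinked or turned head on cue
    face_match_score: float     # similarity vs. the ID document photo (0 to 1)

_issued_challenges: dict[str, float] = {}

def issue_challenge() -> str:
    """Server issues a one-time nonce the app must bind to the live capture."""
    challenge_id = secrets.token_hex(16)
    _issued_challenges[challenge_id] = time.time()
    return challenge_id

def verify_selfie(capture: SelfieCapture, match_threshold: float = 0.85) -> bool:
    """Accept only a fresh, challenge-bound, live capture that matches the ID photo."""
    issued_at = _issued_challenges.pop(capture.challenge_id, None)
    if issued_at is None:
        return False  # unknown or already-used challenge: likely a replayed image
    if time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False  # challenge expired: capture is not fresh
    if not capture.passed_motion_check:
        return False  # no liveness cue satisfied: could be a photo of a photo
    return capture.face_match_score >= match_threshold

if __name__ == "__main__":
    cid = issue_challenge()
    live = SelfieCapture(cid, time.time(), True, 0.93)
    print(verify_selfie(live))    # True: fresh, live, matching capture
    replay = SelfieCapture(cid, time.time(), True, 0.93)
    print(verify_selfie(replay))  # False: challenge already consumed
```

The design point is simply that the image must be produced on demand, inside the app, in response to a short-lived challenge; a pre-generated AI image has no valid challenge or liveness cue to present.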
A Mixed Response
Not every financial institution is convinced that the latest technology is the answer. KeyBank’s CIO, Amy Brady, considers the bank’s slow adoption of voice authentication technology a blessing in disguise, given the risks associated with deepfakes. For now, Brady is cautious about implementing new voice verification tools until more reliable methods for detecting fakes are available.
What’s Next?
The battle against deepfakes in the financial world is just beginning. As AI technology becomes more sophisticated, so too do the methods to detect and prevent its misuse. This ongoing arms race between cybersecurity professionals and bad actors underscores the need for vigilance and innovation in protecting financial transactions and personal information in the digital age.
As we navigate this challenging landscape, one thing is clear: the financial sector must stay on its toes, constantly adapting to counter these invisible yet potentially devastating threats.