'Deepfake' technologies, which have been on the rise recently, allow real people's voices to be imitated and reproduced in real time. The recent surge in generative artificial intelligence has enabled these deepfake technologies to be used far more widely. The Financial Times questions the security of digital banking in the face of these technologies.

Deepfake applications have long been used in areas such as film editing. However, according to information compiled by finansgundem.com, as the technology has spread, the likelihood of deepfakes falling into the wrong hands has grown. As a result, the use of deepfake content for theft and fraudulent transactions has become a serious risk factor.

Of course, as the technology has developed, deepfake detection tools on the market have also developed rapidly. These tools focus on detecting fraud signals in content using artificial intelligence. For example, inconsistencies in a photo's lighting, shadows and angles, or telltale distortions and blurs, are among the signs these tools look for.
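As a rough illustration of the kind of signal such tools layer many of, the sketch below flags regions of an image whose local sharpness deviates strongly from the rest of the picture, which can hint at pasted or regenerated areas. It is not any vendor's actual method; the file name and threshold are hypothetical placeholders.

```python
# Illustrative sketch only: a crude per-block sharpness consistency check.
import numpy as np
from PIL import Image

def block_sharpness_map(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """Variance of a simple 4-neighbour Laplacian, computed per block."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    h, w = lap.shape
    rows, cols = h // block, w // block
    scores = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = lap[r * block:(r + 1) * block, c * block:(c + 1) * block]
            scores[r, c] = tile.var()
    return scores

def inconsistency_score(path: str) -> float:
    """Higher values mean sharpness varies a lot across the image."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    scores = block_sharpness_map(gray)
    return float(scores.std() / (scores.mean() + 1e-9))

if __name__ == "__main__":
    score = inconsistency_score("suspect_photo.jpg")  # hypothetical file
    print("flag for review" if score > 2.0 else "no obvious sharpness anomaly")
```

Real detectors combine many such cues, typically learned by a model rather than hand-written, but the principle of hunting for statistical inconsistencies is the same.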

However, the recent explosion of AI models and AI chatbots like ChatGPT has made deepfake technology more convincing and more accessible. In other words, hackers no longer need advanced technical skills.
Michael Matias, CEO of Clarity, a start-up specializing in deepfake detection, was quoted in the FT as saying: "More advanced open-source AI models are being released. This is making deepfakes more common and pushing the technology further." Matias warned of easily accessible "killer apps" that give bad actors the capacity to produce very high-quality deepfakes quickly, easily and cheaply. What's more, these deepfakes can render some deepfake detection tools completely ineffective.

According to technology vendor ExpressVPN, millions of deepfakes are currently online, up from fewer than 15,000 in 2019. According to a survey by Regula, nearly 80 percent of enterprises see deepfake audio or video recordings as a threat to their operations.
Matthew Moynahan, CEO of identity verification provider OneSpan, said: "Businesses need to see this as the next generation of cybersecurity concern. We've solved almost all of the privacy and compliance issues. Now it's all about authentication."

Deepfake-related issues have indeed become a serious priority for companies. A report published in June by Transmit Security found that AI-generated deepfakes can be used to bypass biometric security systems, such as the facial recognition systems that protect customers' accounts, and to create fake identity documents. AI chatbots are now programmable enough to impersonate a trusted individual or a customer service agent. This raises the threat that people can be tricked into sharing personal data that can then be used in further attacks.

Haywood Talcove, CEO of risk solutions provider LexisNexis, points to behavioral biometrics as a way to combat this type of identity theft. The technique involves assessing and learning how a user behaves when using a device such as a smartphone or computer. If suspicious changes in that behavior are detected, the system raises an alert.

"These behavioral biometric systems look for thousands of clues that someone may not be who they claim to be," Talcove explained. For example, if a user is in a new section of a website they have never visited before, seems familiar with it and can navigate it quickly, this could be an indication of fraud.
Start-ups like BioCatch, as well as larger groups like LexisNexis, are among those developing this technology to continuously verify a user in real time.
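The core idea can be sketched in a few lines: compare a session's behavioral features against a per-user baseline and escalate when the deviation is large. The sketch below is a minimal illustration, not BioCatch's or LexisNexis's actual system; the feature names and alert threshold are assumptions made for the example.

```python
# Minimal behavioural-biometrics sketch: z-score deviation from a user's baseline.
from dataclasses import dataclass

FEATURES = ["avg_keystroke_interval_ms", "pages_per_minute", "mouse_idle_ratio"]

@dataclass
class UserProfile:
    mean: dict  # per-feature mean from past sessions
    std: dict   # per-feature standard deviation from past sessions

def session_risk(profile: UserProfile, session: dict) -> float:
    """Average absolute z-score of the session across all tracked features."""
    z_scores = []
    for f in FEATURES:
        sigma = max(profile.std[f], 1e-6)  # guard against zero variance
        z_scores.append(abs(session[f] - profile.mean[f]) / sigma)
    return sum(z_scores) / len(z_scores)

def should_alert(profile: UserProfile, session: dict, threshold: float = 3.0) -> bool:
    return session_risk(profile, session) > threshold

# Example: a user who normally types slowly and browses at a modest pace
profile = UserProfile(
    mean={"avg_keystroke_interval_ms": 180, "pages_per_minute": 4, "mouse_idle_ratio": 0.35},
    std={"avg_keystroke_interval_ms": 25, "pages_per_minute": 1.5, "mouse_idle_ratio": 0.1},
)
# Suspicious session: very fast, confident navigation through unfamiliar screens
session = {"avg_keystroke_interval_ms": 60, "pages_per_minute": 15, "mouse_idle_ratio": 0.05}
print(should_alert(profile, session))  # True -> trigger step-up authentication or review
```

Production systems track far richer signals continuously and learn the baseline with statistical or machine-learning models rather than fixed thresholds, but the comparison-against-a-personal-profile logic is the same.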

Still, there is a risk of counter-attacks. "Traditional fraud detection systems often rely on rule-based algorithms or pattern recognition techniques. However, AI-powered fraudsters can use deepfakes to evade these systems. By generating fake data or manipulating the patterns that AI models learn, fraudsters can trick algorithms into classifying fraudulent activities as legitimate," the report warned.
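The weakness the report describes is easy to see in a toy example: a rule-based check with fixed thresholds will pass any transaction whose features are deliberately shaped to sit inside the "normal" ranges the rules encode. The rules and feature names below are made up for illustration and are not taken from any real banking system.

```python
# Toy rule-based fraud check: fixed thresholds can be learned and gamed.
RULES = {
    "amount_eur": lambda v: v < 10_000,              # large transfers are flagged
    "new_payee": lambda v: v is False,               # first-time payees are flagged
    "login_from_home_country": lambda v: v is True,  # unusual geography is flagged
}

def rule_based_check(txn: dict) -> str:
    failed = [name for name, ok in RULES.items() if not ok(txn[name])]
    return "flagged: " + ", ".join(failed) if failed else "classified as legitimate"

# A crude fraud attempt trips the rules...
print(rule_based_check({"amount_eur": 45_000, "new_payee": True,
                        "login_from_home_country": False}))

# ...but an attacker who has learned the thresholds keeps every feature in range,
# e.g. repeated smaller transfers to an already-seen payee from a spoofed local session.
print(rule_based_check({"amount_eur": 9_500, "new_payee": False,
                        "login_from_home_country": True}))
```

This is why vendors increasingly layer behavioral signals and continuously retrained models on top of static rules: an attacker who mimics the expected pattern on one dimension still has to stay consistent across all of them.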