Do you feel confident that you could spot a deepfake?
That’s not a trick question. Instances of AI-generated deepfake content, whether videos or images, are growing in both volume and sophistication. In fact, the global market value for AI-generated deepfakes is expected to reach USD 79.1 million by the end of 2024.
Misinformation and fake news are on the rise – so much so that the World Economic Forum identified them as the biggest threat in its 2024 Global Risks Report. This assessment has certainly proven correct, with new instances hitting the headlines almost every week.
What’s become evident is that this threat does not discriminate – its targets range from board members of public companies to politicians and celebrities. The consequences are far-reaching. Consumers can be tricked into acting on false financial advice seemingly given by a trusted public figure, and businesses can suffer financial and reputational damage when staff fall for a deepfake or a senior leader is impersonated – as demonstrated by the Arup case earlier this year.
Traditional phishing and impersonation scams are still prevalent, of course, but this threat takes them to the next level and is even more difficult to spot. The ‘tells’ are less obvious, and impersonations can be scarily accurate. So, how can we combat it?
Industry giants have already taken measures to fight deepfakes. Just this month, Facebook and Instagram owner Meta announced plans to introduce facial recognition technology to crack down on scammers who fraudulently use celebrities in adverts.
This news caught our attention, as we believe that advanced biometric technology will be key in combating the deepfake threat, now and in the future. But taking this further, we want to shine a light on the role that secure digital identities can play in tandem.
At Thales, we’ve spoken at length about how secure digital identities can help protect our personal information while also creating more seamless online interactions. Here’s how they could specifically play a role against this growing threat.
Content Verification: Digital identities can verify the authenticity of content creators, ensuring that only verified individuals can upload and share media.
Traceability: By linking digital identities to content, it becomes easier to trace the origin of media, making it simpler to identify and remove deepfakes.
Real-Time Authentication: Digital identities can enable real-time authentication during live streams, ensuring that the person on camera is who they claim to be.
Access Control: Restricting access to sensitive systems and platforms through digital identities can prevent unauthorised users from creating or distributing deepfakes.
User Accountability: Digital identities can hold users accountable for the content they share, deterring the creation and spread of deepfakes through potential repercussions.
Enhanced Security Protocols: Combining digital identities with biometric verification adds an extra layer of security, making it harder for deepfake creators to bypass systems.
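To make the content verification and traceability ideas above more concrete, here is a minimal sketch in Python. It is purely illustrative and not a Thales product or protocol: the identity registry is a hypothetical in-memory dictionary, and HMAC is used as a simplified stand-in for the public-key signatures a real digital identity scheme would rely on.

```python
import hashlib
import hmac
import secrets

# Hypothetical registry binding verified digital identities to signing keys.
# In a real deployment this role would be played by a PKI or identity
# provider, not an in-process dictionary.
IDENTITY_KEYS = {"alice@example.com": secrets.token_bytes(32)}

def sign_content(identity: str, content: bytes) -> bytes:
    """Sign media with the key bound to a verified digital identity."""
    key = IDENTITY_KEYS[identity]  # raises KeyError for unverified identities
    return hmac.new(key, content, hashlib.sha256).digest()

def verify_content(identity: str, content: bytes, signature: bytes) -> bool:
    """Check that the content really originates from the claimed identity
    and has not been altered since it was signed."""
    key = IDENTITY_KEYS.get(identity)
    if key is None:
        return False  # unknown identity: the content cannot be vouched for
    expected = hmac.new(key, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

video = b"...media bytes..."
sig = sign_content("alice@example.com", video)
print(verify_content("alice@example.com", video, sig))    # authentic content
print(verify_content("alice@example.com", b"fake", sig))  # tampered content
```

The point of the sketch is the binding: only a holder of a verified identity can produce a valid signature, so any media that fails verification is immediately suspect, and any media that passes can be traced back to its registered creator.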
As we navigate this complex digital landscape, it’s clear that the integration of biometric technology and secure digital identities will be crucial in combating the deepfake threat. By ensuring content authenticity, enhancing traceability, and implementing robust security protocols, we can protect both individuals and organisations from the far-reaching consequences of deepfakes.
What measures do you think are most effective in addressing this issue? How can we, as a community, further innovate to stay ahead of these evolving threats?