When deepfakes first entered the public consciousness, they were a novelty act. But the potential for harm from these uncanny, AI-generated synthetic humans was acknowledged from the start. Now that the underlying technology has become more widely available and better understood, we’re starting to see more malicious use cases – and considering how to combat them.
In March 2022, Ukrainian president Volodymyr Zelensky popped up on Facebook and YouTube to announce his country’s surrender to Russian forces. It didn’t take long for the video to be flagged as a fake and removed, and for fingers to point at Russian propaganda (the video had also been posted to Telegram and Russian social network VKontakte). But the potential for deepfake disinformation is clear.
And it’s not only governments who need to worry. Banks and other businesses are increasingly using “liveness tests” as part of user verification, to check that still photos haven’t been used to fool facial recognition. But security company Sensity has found that many of these can be easily bypassed using deepfakes.
The Fight Against Deepfakes
“The powers that be no longer have to stifle information. They can now overload us with so much of it, there’s no way to know what’s factual or not. The ability to be an informed public is only going to worsen with advancing deep fake technology.”
Other tech companies are springing up to help fight deepfakes. Estonian startup Sentinel specialises in detecting whether media has been created using AI – essentially using algorithms to spot algorithmically created images.
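Sentinel’s actual detection pipeline isn’t public, but one heuristic that appears in deepfake-detection research is frequency analysis: the upsampling layers in generative models tend to leave tell-tale energy in an image’s high spatial frequencies. The sketch below is purely illustrative of that idea – the function name, cutoff value, and approach are assumptions, not Sentinel’s method – and a real detector would feed features like this (or raw pixels) into a trained classifier rather than thresholding a single number.

```python
# Illustrative heuristic only: measure how much of an image's spectral
# energy sits outside a low-frequency disc. Generative-model artifacts
# often show up as anomalies in this part of the spectrum.
import numpy as np

def high_freq_energy_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency disc.

    gray_image: 2-D float array of grayscale pixel values.
    cutoff: radius of the "low frequency" disc, as a fraction of image size.
    """
    # Power spectrum, shifted so the zero frequency sits at the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low = spectrum[radius <= cutoff * min(h, w)].sum()
    total = spectrum.sum()
    return float((total - low) / total)

# Example: score a (here, randomly generated) grayscale image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
score = high_freq_energy_ratio(img)
```

In practice such a score would be one feature among many; by itself it cannot distinguish a deepfake from, say, a heavily compressed real photo.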
And Spain’s OARO provides enterprise Identity and Access Management (IAM) solutions. It helps create an immutable data trail that allows its clients – in government, insurance, aviation and banking – to authenticate any image or video.
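OARO’s system isn’t publicly documented, but the general idea behind an “immutable data trail” can be sketched with hash chaining: each record stores a fingerprint of the media plus the hash of the previous record, so altering any image – or any earlier record – breaks the chain. The record layout and function names below are hypothetical illustrations, not OARO’s design.

```python
# Hypothetical sketch of a hash-chained media trail. Tampering with any
# registered media file, or with any earlier record, makes verify() fail.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class TrailRecord:
    media_hash: str   # SHA-256 fingerprint of the image/video bytes
    prev_hash: str    # hash of the previous record, linking the chain

def record_hash(rec: TrailRecord) -> str:
    return hashlib.sha256((rec.media_hash + rec.prev_hash).encode()).hexdigest()

def append(trail: list, media_bytes: bytes) -> None:
    """Register a new piece of media at the end of the trail."""
    prev = record_hash(trail[-1]) if trail else "0" * 64
    trail.append(TrailRecord(hashlib.sha256(media_bytes).hexdigest(), prev))

def verify(trail: list, media_items: list) -> bool:
    """Check every stored fingerprint and every link in the chain."""
    prev = "0" * 64
    for rec, media in zip(trail, media_items):
        if rec.media_hash != hashlib.sha256(media).hexdigest():
            return False          # media altered after registration
        if rec.prev_hash != prev:
            return False          # the trail itself was tampered with
        prev = record_hash(rec)
    return True

# Usage: register two items, then confirm they still match the trail.
trail: list = []
items = [b"passport-photo-bytes", b"claim-video-bytes"]
for media in items:
    append(trail, media)
```

Production systems typically anchor such chains in a distributed ledger or a trusted timestamping service, so no single party can quietly rewrite history.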
As it becomes harder for humans to spot fakes, we can only hope that tech can step in and identify the imposters.