AI seems to have been the big buzzword of the past couple of years, bolted on to all manner of products, both useful and... not so useful. Do you really need an AI-powered plant pot or hairbrush?
AI technology has evolved rapidly over the past few years, bringing significant benefits across many industries. However, as you can imagine (and we have spoken about this at length), it has also given rise to a host of new and evolving threats.
One of the most alarming developments, especially over the past 12 months, is the rising threat of “deepfake technology”, which uses AI to create highly convincing fake videos, audio, and images.
These deepfakes can mimic voices, faces, and even behaviours with such accuracy that they’re difficult to distinguish from reality. For businesses, this can spell trouble, and already has for those unprepared to combat these new threats.
Let's look at this in more detail.
Deepfakes can be just about any form of digital media, whether that be videos, audio, or images.
Deepfakes are media that have been artificially altered using machine learning, AI, and other digital tools to make a person appear to say or do something they never actually did, or to be somewhere they have never been. This works by training AI models on a large set of real data, such as audio, images, or video of a person, along with examples of them speaking or acting, and then using that information to generate entirely new, yet highly convincing, content.
How does it work?
At the core of deepfake generation is the use of a machine learning model called an autoencoder. Autoencoders are neural networks that learn to encode data provided to them into a compressed form and then decode it back into a format resembling the original input. This allows deepfake systems to generate convincing reproductions of faces, voices, or other attributes, which can then be manipulated or swapped to create fraudulent media.
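To make that a little more concrete, here is a minimal sketch of the shared-encoder, per-identity-decoder setup that classic face-swap deepfakes are built on. It assumes PyTorch and uses tiny, hypothetical layer sizes purely for illustration; real deepfake pipelines use far larger convolutional or generative models and vast amounts of training data.

```python
# Minimal, illustrative sketch of the autoencoder idea behind face swaps.
# Assumes PyTorch; all sizes and names here are hypothetical toy values.
import torch
import torch.nn as nn

LATENT = 128           # size of the compressed representation
IMG = 64 * 64 * 3      # flattened toy image size (real systems use convolutional nets)

class Encoder(nn.Module):
    """Compresses any face image into a small latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, LATENT))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image of one specific identity from a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()   # trained only on images of person A
decoder_b = Decoder()   # trained only on images of person B
opt_a = torch.optim.Adam(list(encoder.parameters()) + list(decoder_a.parameters()), lr=1e-3)
opt_b = torch.optim.Adam(list(encoder.parameters()) + list(decoder_b.parameters()), lr=1e-3)

def train_step(batch, decoder, optimiser):
    """One reconstruction step: encode a real face, decode it, minimise pixel error."""
    optimiser.zero_grad()
    recon = decoder(encoder(batch))
    loss = nn.functional.mse_loss(recon, batch)
    loss.backward()
    optimiser.step()
    return loss.item()

# Toy training data standing in for real photos of each person.
faces_a = torch.rand(16, IMG)
faces_b = torch.rand(16, IMG)
train_step(faces_a, decoder_a, opt_a)
train_step(faces_b, decoder_b, opt_b)

# The "swap": encode person A's face, but decode it with person B's decoder,
# producing person B's likeness with A's pose and expression.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a[:1]))
```

The key point is that the shared encoder learns general features such as pose, expression, and lighting, while each decoder learns one person’s appearance, so swapping decoders effectively swaps the face.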
It is worth noting that deepfakes are sometimes used for harmless entertainment, such as the deepfake movie reaction videos from comedian Charlie Hopkinson. But whilst some use the technology to create entertainment like this, there are also those who will use it for malicious purposes.
In 2024, a finance worker at a Hong Kong-based multinational firm was tricked into paying $25 million to cyber criminals who used deepfake technology to pose as the company’s chief financial officer in a video conference call.
That firm was later revealed to be the UK engineering group Arup: the criminals used AI-generated video of senior executives to convince the employee to authorise transfers totalling roughly £20 million. Despite the company’s secure systems, the fraudsters managed to bypass traditional security checks, proving just how dangerous deepfakes can be.
There have also been several notable cases of AI deepfakes impersonating celebrities and influencers such as Mr Beast, Taylor Swift, Jennifer Aniston and many others. These scams have impacted thousands of people around the world, cost them millions, and led countries to introduce laws such as the US “Take It Down Act”.
The ability of these AI models to accurately reproduce a person’s voice, appearance, and even mannerisms makes them a highly attractive tool for fraudsters impersonating trusted individuals within a business as part of a wider social engineering attack. This could include CEOs or finance directors, as seen in the examples above, where the deepfakes issue fraudulent requests for large sums of money or confidential data.
Traditional security systems, like password protection and even voice recognition, often fail to detect these AI-generated impostors, especially if those being targeted do not communicate with the impersonated person by video often enough to spot inconsistencies. Deepfakes exploit the trust people place in their colleagues and in the way they communicate, making a scam that much harder to spot.
To protect your business from threats such as these, it is essential to take a proactive approach. First and foremost, education is key: make sure your employees are regularly trained on the risks associated with deepfakes and the warning signs to look out for.
Employees should be taught to verify requests, particularly those involving money or sensitive information, whether through agreed authorisation codes or safe phrases, or by confirming the request directly over a separate, trusted channel such as email. Fostering a cautious, sceptical mindset among your employees can help them avoid falling victim to these kinds of fraud.
However, this should form part of a much wider cyber defence strategy, including multi-factor authentication across your business infrastructure and all online services. In addition, you should implement least-privilege access policies and privileged identity and access management as part of a wider Zero Trust approach to your cyber security strategy.
Finally, businesses should work with trusted cyber security experts, such as ourselves, who have a proven background in cyber defence and resilience. We can offer advice, deploy detection tools, and provide critical insight into the cyber security vulnerabilities your business faces as part of a detailed report.
It is essential that businesses stay ahead of the curve as cyber attacks become increasingly complex.
The rise of AI deepfake technology and attacks presents a significant challenge to businesses of all sizes.
However, by adopting comprehensive, holistic cyber security measures and fostering a vigilant company culture, businesses can protect themselves from these growing threats. It’s crucial to stay informed and be prepared for what’s coming next, which is why we try to share as much information as we can with businesses to educate them on the latest risks.
Want to find out more? Book a meeting with us for a FREE Cyber Security health check to help you understand whether your business is prepared to face these modern challenges.