In March, the FBI released a report declaring that malicious actors almost certainly will leverage “synthetic content” for cyber and foreign influence operations in the next 12-18 months. This synthetic content includes deepfakes, audio or video that is either wholly created or altered by artificial intelligence or machine learning to convincingly misrepresent someone as doing or saying something that was not actually done or said.
We’ve all heard the story about the CEO whose voice was imitated convincingly enough to initiate a wire transfer of $243,000. Now, the constant Zoom meetings of the anywhere-workforce era have created a wealth of audio and video data that can be fed into a machine learning system to create a compelling duplicate. And attackers have taken note: deepfake tools and services have seen a drastic uptick across the dark web, and attacks are already taking place.
In my role, I work closely with incident response teams, and earlier this month I spoke with several CISOs of prominent global companies about the rise in deepfake activity they have witnessed. Here are their top concerns.
Dark web tutorials
Recorded Future, a threat intelligence firm, noted that threat actors have turned to the dark web to offer customized services and tutorials that incorporate visual and audio deepfake technologies designed to bypass security measures. Just as ransomware evolved into ransomware-as-a-service (RaaS) models, we’re seeing deepfakes do the same. This intelligence from Recorded Future shows attackers going a step beyond the deepfake-fueled influence operations the FBI warned about earlier this year: the new goal is to use synthetic audio and video to evade security controls. Threat actors are also using the dark web, along with clearnet sources such as forums and messaging apps, to share tools and best practices for deepfake techniques aimed at compromising organizations.
Deepfake phishing
I’ve spoken with CISOs whose security teams have observed deepfakes being used in phishing attempts or to compromise business email and communication platforms like Slack and Microsoft Teams. Cybercriminals are taking advantage of the move to a distributed workforce to manipulate employees with a well-timed voicemail that mimics their boss’s speaking cadence, or a Slack message delivering the same request. Phishing campaigns via email or business communication platforms are the perfect delivery mechanism for deepfakes, because organizations and users implicitly trust them and they operate throughout a given environment.
Bypassing biometrics
The proliferation of deepfake technology also opens a Pandora’s box when it comes to identity. Identity is the common variable across networks, endpoints, and applications, and knowing who or what you are authenticating becomes pivotal to an organization’s security on its journey to Zero Trust. But when a technology exists that can imitate identity well enough to fool authentication factors such as biometrics, the risk of compromise grows. In a report from Experian outlining the five threats facing businesses this year, synthetic identity fraud, in which cybercriminals use deepfaked faces to dupe biometric verification, was identified as the fastest-growing type of financial crime. This will inevitably create significant challenges for businesses that rely on facial recognition software as part of their identity and access management strategy.
Distortion of digital reality
In today’s world, attackers can manipulate everything. Unfortunately, they are also among the first adopters of advanced technologies such as deepfakes. As cybercriminals move beyond using deepfakes purely for influence operations or disinformation, they will begin to use this technology to compromise organizations and gain access to their environments. This should serve as a warning to all CISOs and security professionals that we are entering a new reality of distrust and distortion at the hands of attackers.
Rick McElroy is principal cybersecurity strategist at VMware.