Many projected that deepfake videos would play a lead role in the 2020 elections, with the prospect of foreign interference and disinformation campaigns looming large in the lead-up to Election Day. Yet if there has been a surprise in campaign tactics this cycle, it is that these AI-generated videos have played a very minor role, little more than a cameo (so far, at least).
Deepfakes are videos doctored to alter reality, showing events or depicting speech that never happened, and giant leaps in the field of generative adversarial networks (GANs) have made them far more convincing. Because people tend to lend substantial credence to what they see and hear, deepfakes pose a very real danger.
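For readers curious about the mechanics, the sketch below illustrates, in broad strokes, the adversarial training loop GANs rely on: a generator learns to produce fake samples while a discriminator learns to tell real from fake, and each improves by competing with the other. This is a minimal illustration in PyTorch (our choice of framework, not one named in the article), with toy network sizes and dimensions that are assumptions rather than anything from a production deepfake pipeline.

```python
# Minimal sketch of the adversarial training loop behind GAN-based generation.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Train the discriminator to separate real data from generated fakes.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to produce fakes the discriminator labels as real.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Over many such steps the two networks push each other toward ever more realistic output, which is why GAN-generated video has become so hard to distinguish from the real thing.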
Worries about deepfakes influencing elections have been bubbling since the technology first surfaced several years ago, yet there were few instances of deepfakes in the 2020 U.S. elections or elections globally. One example is a deepfake showing former Vice President Joe Biden sticking out his tongue, which was retweeted by the president. In another, the prime minister of Belgium appeared in an online video saying the COVID-19 pandemic was linked to the “exploitation and destruction by humans of our natural environment.” Except she did not say this; the video was a deepfake.
These have been the exceptions. So far, political deepfakes have been mostly satirical and understood as fake. Some have even been used as part of a public service campaign to express the importance of saving democracy.
(The above video was created for representUS, a nonprofit and nonpartisan anti-corruption and good governance group, by an advertising agency using deepfake technology.)
The reason there have not been more politically motivated, malevolent deepfakes designed to stoke oppression, division, and violence is a matter of conjecture. One reason might be the bans some social media platforms have placed on media that has been manipulated or fabricated and passed off as real. That said, a well-made deepfake can be difficult to spot, and not all are detected. Many companies are developing AI tools to identify deepfakes but have yet to establish a foolproof method. One recently discussed detection tool claims 90% accuracy by analyzing the subtle changes in skin color caused by the human heartbeat.
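To make the heartbeat idea concrete, here is a rough sketch (not the cited tool's actual code) of how such a detector might score a clip. Real faces show a faint periodic color change driven by blood flow, a signal known as remote photoplethysmography, which synthetic faces often fail to reproduce. The function name, frequency band, and the assumption that face crops arrive as a NumPy array are all illustrative choices, not details from the article.

```python
# Illustrative sketch of heartbeat-based deepfake scoring, assuming face
# detection and frame extraction happen elsewhere. NumPy only.
import numpy as np

HEART_BAND_HZ = (0.7, 4.0)  # roughly 42-240 beats per minute

def pulse_strength(face_frames: np.ndarray, fps: float) -> float:
    """face_frames: array of shape (num_frames, H, W, 3), RGB face crops."""
    # Average the green channel over the face for each frame; green carries
    # the strongest blood-volume signal.
    signal = face_frames[..., 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Look for a dominant spectral peak inside the plausible heart-rate band.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= HEART_BAND_HZ[0]) & (freqs <= HEART_BAND_HZ[1])
    if not band.any() or spectrum.sum() == 0:
        return 0.0
    return spectrum[band].max() / spectrum.sum()
```

In this framing, a genuine face would tend to yield a noticeably higher score than a synthetic one; any threshold for calling a clip fake would have to be tuned on labeled data, and real detection systems combine many more cues than this single signal.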
At the same time, those creating deepfakes learn from published detection efforts and continue to advance their capabilities, producing ever more realistic-looking videos. More advanced tools for creating deepfakes are also proliferating. For example, recent developments designed to improve videoconferencing could be used to create more realistic deepfakes and evade detection.
Another reason we may not have seen more deepfakes targeting elections is that traditional means of falsification, such as selective editing, appear to work well enough. Finding a real video clip that shows a candidate saying, for example, that they will raise taxes is not difficult. Cutting those sound bites out of the larger context of the original clip and repurposing them to push an agenda is a common, if unethical, practice of political persuasion.
It might also be that greater energy is going into projects that yield more immediate commercial benefits, such as creating nude images of women based on pictures taken from social media.
Some see an upside to deepfakes, with positive uses eventually reducing the stigma associated with the technology. These positive uses are sometimes referred to not as deepfakes but as “synthetic videos,” even though the underlying technology is the same. Already there are synthetic corporate training videos. Some people claim synthetic videos could enhance education by recreating historical events and bringing historical figures back to life, making classrooms more engaging and interactive. And there are the just-for-fun uses, such as turning an Elon Musk image into a zombie.
Are deepfakes still a problem?
As of June this year, nearly 50,000 deepfakes have been detected online, an increase of more than 330% over the course of a year. The dangers are real. Faked videos could depict an innocent person participating in criminal activity, show soldiers committing atrocities, or show a world leader declaring war on another country, which could trigger a very real military response.
Speaking at a recent Cybertech virtual conference, former U.S. Cyber Command chief Maj. Gen. (ret.) Brett Williams said, “Artificial intelligence is the real thing. It is already in use by attackers. When they learn how to do deepfakes, I would argue this is potentially an existential threat.”
The implication is that those who would use deepfakes as part of an online attack have not yet mastered the technology, or at least have not figured out how to avoid leaving breadcrumbs that would lead back to the perpetrator. Perhaps these are also the most compelling reasons we have not seen more serious deepfakes in the current political campaigns: immature technology and fear of the source being discovered.
A recent report from the Center for Security and Emerging Technology echoes this observation. Among the report's key findings: “factors such as the need to avoid attribution, the time needed to train a machine learning model, and the availability of data will constrain how sophisticated actors use tailored deepfakes in practice.” The report concludes that tailored deepfakes produced by technically sophisticated actors will represent a greater threat in the future.
Even if deepfakes have not played a significant role in this election, it is likely only a matter of time before they impact elections, subvert democracy, and perhaps lead to military engagements.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.