When Google announced the Google News Initiative in March 2018, it pledged to release datasets that would help “advance state-of-the-art research” on fake audio detection, that is, the detection of AI-generated clips intended to mislead listeners or fool voice authentication systems. Today, it’s making good on that promise.
The Google News team and Google’s AI research division, Google AI, have teamed up to produce a speech corpus containing “thousands” of phrases spoken by the Mountain View company’s text-to-speech models. The phrases, drawn from English newspaper articles, are spoken by 68 different synthetic voices covering a variety of regional accents.
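To give a concrete sense of how a corpus like this might be assembled, here is a minimal sketch using the open-source gTTS library. The library, the sample phrases, and the accent variation via the `tld` parameter are all illustrative assumptions; Google has not disclosed which internal text-to-speech models produced the actual dataset.

```python
# A minimal corpus-generation sketch. gTTS, the sample phrases, and the
# accent variation via `tld` are illustrative assumptions, not Google's
# internal text-to-speech pipeline.
from gtts import gTTS

# Newspaper-style phrases standing in for the English articles Google cites.
phrases = [
    "The council approved the new transit budget on Tuesday.",
    "Officials said construction would begin early next spring.",
]

# Different top-level domains give gTTS different regional accents,
# loosely mirroring the accent variety across the corpus's 68 voices.
accents = ["com", "co.uk", "com.au"]

for i, phrase in enumerate(phrases):
    for tld in accents:
        gTTS(text=phrase, lang="en", tld=tld).save(f"clip_{i:04d}_{tld}.mp3")
```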
“Over the last few years, there’s been an explosion of new research using neural networks to simulate a human voice. These models, including many developed at Google, can generate increasingly realistic, human-like speech,” Daisy Stanton, a software engineer at Google AI, wrote in a blog post. “While the progress is exciting, we’re keenly aware of the risks this technology can pose if used with the intent to cause harm. … [That’s why] we’re taking action.”
The dataset is available to all participants of ASVspoof 2019, a competition that aims to foster the development of countermeasures against spoofed speech: specifically, systems that can distinguish between real and computer-generated speech.
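As an illustration of the kind of countermeasure ASVspoof targets, here is a minimal sketch of a real-versus-synthetic speech classifier. The file lists, the MFCC features, and the logistic regression model are illustrative assumptions, not an official challenge baseline.

```python
# A minimal real-vs-synthetic classifier sketch. The file lists, MFCC
# features, and logistic regression are illustrative assumptions, not
# an official ASVspoof baseline.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def utterance_features(path):
    # Load audio at 16 kHz mono and average MFCC frames into one
    # fixed-length vector per clip.
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled clips; ASVspoof distributes its own train/dev protocols.
bona_fide = ["real_0001.flac", "real_0002.flac"]
spoofed = ["fake_0001.flac", "fake_0002.flac"]

X = np.stack([utterance_features(p) for p in bona_fide + spoofed])
y = np.array([0] * len(bona_fide) + [1] * len(spoofed))  # 0 = real, 1 = spoofed

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X)[:, 1])  # probability each clip is spoofed
```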
“As we published in our AI Principles last year, we take seriously our responsibility both to engage with the external research community, and to apply strong safety practices to avoid unintended results that create risks of harm,” Stanton wrote. “We’re also firmly committed to Google News Initiative’s charter to help journalism thrive in the digital age, and our support for the ASVspoof challenge is an important step along the way.”
AI systems that can be used to generate misleading media have come under increased scrutiny recently. In September, members of Congress sent a letter to National Intelligence director Dan Coats requesting a report from intelligence agencies on the potential impact of deepfakes (videos in which AI digitally grafts one person’s face onto another’s body) on democracy and national security. Lawmakers also pressed Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey about the potential impact of manipulative deepfake videos during a congressional hearing in late 2018.
Fortunately, the fight against them appears to be ramping up. Last summer, members of DARPA’s Media Forensics program tested a prototype system that could automatically detect deepfakes and other manipulated images or videos, in part by looking for cues like unnatural blinking. And startups like Truepic, which raised an $8 million funding round in July, are experimenting with deepfake detection as a service.