
Supercomputers and facial recognition dominated the headlines this week in AI — but not necessarily in equal measure. Meta, the company formerly known as Facebook, announced it’s building a server cluster for AI research that it claims will be among the fastest of its kind. Meanwhile, the IRS quietly implemented a new program with a vendor, ID.me, that controversially uses facial recognition technology to verify the identity of taxpayers.

Meta’s new “AI supercomputer” — called AI Research SuperCluster (RSC) — is impressive, to be sure. Work on it began a year and a half ago, with phase one reaching the operational stage within the past few weeks. Currently, RSC features 760 Nvidia DGX A100 systems containing 6,080 connected GPUs, as well as custom cooling, power, networking, and cabling systems. Phase two will be completed by 2022, bringing RSC up to 16,000 total GPUs and the capacity to train AI systems “on datasets as large as an exabyte.”

Meta says RSC will be applied to training a range of systems across Meta’s businesses, including content moderation algorithms, augmented reality features, and experiences for the metaverse. But the company hasn’t announced plans to make RSC’s capabilities public, which many experts say highlights the resource inequalities in the AI industry.

“I think it’s important to remember that Meta spends money on big expensive spectacles because money is their strength — they can outspend people and get the big results, the big headlines they want that way,” Mike Cook, an AI researcher at Queen Mary University in London, told VentureBeat via email. “I absolutely hope Meta manages to do something interesting with this and we all get to benefit, but it’s really important that we put this in context — private labs like [Meta’s] redefine progress along these narrow lines that they excel at, so that they can position themselves as leaders.”

Large corporations dominate the list of “AI supercomputers,” unsurprisingly, given the costs involved in building such systems. Microsoft announced two years ago that it had created a 10,000-GPU AI supercomputer running on its Azure platform with research lab OpenAI. Nvidia has its own in-house supercomputer, Selene, which it uses for AI research, including training natural language and computer vision models.

Os Keyes, an AI ethicist at the University of Washington, characterized the trend as “worrying.” Keyes says that the direction of larger and more expensive AI compute infrastructure wrongly rewards “scale and hegemony,” while locking in “monolithic organizational forms” as the logical or efficient way of doing things.

“It says some interesting things about Meta — about where it’s choosing to focus efforts,” Keyes said. “That Meta’s direction of investment is in algorithmic systems demonstrates exactly how hard they’ve pinned themselves to ‘technosolutionism’ … It’s change driven by what impresses shareholders and what impresses the ‘California ideology,’ and that isn’t change at all.”

Aidan Gomez, the CEO of Cohere, a startup developing large language models for a range of use cases, called RSC a “major accomplishment.” But he stressed that it’s “another piece of evidence that only the largest organizations are able to develop upon and benefit from this technology.” While language models in particular have become more accessible in recent years, thanks to efforts like Hugging Face’s BigScience and EleutherAI, cutting-edge AI systems remain expensive to train and deploy. For example, training language models like Nvidia’s and Microsoft’s Megatron 530B can cost millions of dollars — not accounting for storage expenses. Inference — actually running the trained model — is another barrier. One estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year.
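To make the scale of that figure concrete, here is a minimal back-of-envelope sketch in Python. The hourly instance price and always-on utilization are assumptions chosen for illustration; they are not details from the estimate cited above.

```python
# Back-of-envelope: the cost of keeping a single large GPU instance running
# around the clock to serve a model. The hourly rate is an assumed figure for
# illustration only, not actual AWS pricing or the basis of the $87,000 estimate.
hourly_rate_usd = 9.93      # assumed on-demand price for a multi-GPU instance
hours_per_year = 24 * 365   # an always-on inference endpoint

annual_cost = hourly_rate_usd * hours_per_year
print(f"Estimated annual cost: ${annual_cost:,.0f}")  # roughly $87,000
```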

“The big push for us at Cohere is changing this and broadening access to the outputs of powerful supercomputer advances – large language models – through an affordable platform,” Gomez said. “Ultimately, we want to avoid the extremely resource-intensive situation where everyone needs to build their own supercomputer in order to get access to high quality AI.”

Facial recognition for taxes

In other news, the IRS announced this year that it’s contracting with ID.me, a Virginia-based facial recognition company, to verify taxpayers’ identities online. As reported by Gizmodo, starting this summer, users with an IRS.gov account will need to provide a government ID, a selfie, and copies of their bills to perform certain tasks, like getting a transcript online (but not to e-file taxes).

The IRS pitches the new measures as a way to “protect the security of taxpayers.” But ID.me has a problematic history, as evidenced by complaints from residents in the roughly 30 states that contracted with the company for unemployment benefit verification.

In New York, News10NBC detailed accounts of residents struggling to navigate ID.me’s system, including one woman who claimed she’d waited 19 weeks for her benefits. Some have suggested that people of color are more likely to be misidentified by the system — which wouldn’t be surprising or unprecedented. Gender and racial prejudices are a well-documented phenomenon in facial analysis algorithms, attributable to imbalances in the datasets used to train the algorithms. In a 2020 study, researchers showed that algorithms could even become biased toward facial expressions, like smiling, or different outfits — which might reduce their recognition accuracy.
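For readers unfamiliar with how such disparities are typically quantified, the sketch below uses synthetic data to show how a matcher trained on an imbalanced dataset can end up with different false non-match rates for different groups. The group names, score distributions, and threshold are illustrative assumptions, not figures from the studies mentioned above.

```python
import random

# Synthetic illustration: per-group false non-match rates for a face matcher.
# All numbers below are invented for demonstration purposes.
random.seed(0)

def simulated_match_score(group):
    # Assume the matcher saw far more training examples from "group_a",
    # so genuine pairs from "group_b" score lower on average.
    mean = 0.80 if group == "group_a" else 0.65
    return random.gauss(mean, 0.10)

threshold = 0.70  # accept a match above this similarity score

for group, n in [("group_a", 1000), ("group_b", 1000)]:
    scores = [simulated_match_score(group) for _ in range(n)]
    false_non_match_rate = sum(s < threshold for s in scores) / n
    print(f"{group}: false non-match rate = {false_non_match_rate:.1%}")
```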

Worryingly, ID.me hasn’t been fully honest about its technology’s capabilities. Contrary to some of ID.me’s public statements, the company matches faces against a large database — a practice that privacy advocates fear poses a security risk and could lead to “mission creep” from government agencies.
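The distinction at issue, comparing a selfie to a single claimed identity (one-to-one verification) versus searching it against an entire database (one-to-many identification), can be sketched in a few lines of Python. The embeddings, similarity threshold, and gallery below are invented stand-ins for illustration; they are not ID.me’s actual pipeline.

```python
import numpy as np

# Illustrative sketch of 1:1 verification vs. 1:N identification, using random
# vectors as stand-ins for the embeddings a real face-recognition model produces.
rng = np.random.default_rng(0)

def embed():
    return rng.normal(size=128)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

selfie = embed()
claimed_identity = embed()
gallery = {f"person_{i}": embed() for i in range(10_000)}  # hypothetical database

# One-to-one verification: compare the selfie only to the identity the user claims.
is_verified = cosine(selfie, claimed_identity) > 0.5  # 0.5 is an assumed threshold

# One-to-many identification: search the selfie against the whole gallery,
# the practice privacy advocates object to.
best_match = max(gallery, key=lambda name: cosine(selfie, gallery[name]))
print(is_verified, best_match)
```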

“This dramatically expands the risk of racial and gender bias on the platform,” Surveillance Technology Oversight Project executive director Albert Fox Cahn told Gizmodo. “More fundamentally, we have to ask why Americans should trust this company with our data if they are not honest about how our data is used. The IRS shouldn’t be giving any company this much power to decide how our biometric data is stored.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

