Microsoft today announced that it acquired Two Hat, an AI-powered content moderation platform, for an undisclosed amount. According to Xbox product services CVP Dave McCarthy, the purchase will combine the technology, research capabilities, teams, and cloud infrastructure of both companies to serve Two Hat’s existing and new customers and “multiple product and service experiences” at Microsoft.
“Working with the diverse and experienced team at Two Hat over the years, it has become clear that we are fully aligned with the core values inspired by the vision of founder, Chris Priebe, to deliver a holistic approach for positive and thriving online communities,” McCarthy said in a blog post. “For the past few years, Microsoft and Two Hat have worked together to implement proactive moderation technology into gaming and non-gaming experiences to detect and remove harmful content before it ever reaches members of our communities.”
Moderation
According to the Pew Research Center, 4 in 10 Americans have personally experienced some form of online harassment. Moreover, 37% of U.S.-based internet users say they’ve been the target of severe attacks — including sexual harassment and stalking — based on their sexual orientation, religion, race, ethnicity, gender identity, or disability. Children, in particular, are frequent targets of online abuse, with one survey finding a 70% increase in cyberbullying on social media and gaming platforms during the pandemic.
Priebe founded Two Hat in 2012 when he left his position as a senior app security specialist at Disney Interactive, Disney’s game development division. A former lead developer on the safety and security team for Club Penguin, Priebe was driven by a desire to tackle the issues of cyberbullying and harassment on the social web.
Today, Two Hat claims its content moderation platform — which combines AI, linguistics, and “industry-leading management best practices” — classifies, filters, and escalates more than a trillion human interactions a month, including messages, usernames, images, and videos. The company also works with Canadian law enforcement to train AI to detect new child exploitative material, such as content likely to be pornographic.
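Two Hat hasn’t published the internals of that pipeline, but the classify-filter-escalate flow it describes can be sketched roughly as below. Every name, threshold, and the keyword-based stand-in for a trained model is hypothetical, not Two Hat’s actual system.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    SAFE = 0
    QUESTIONABLE = 1
    SEVERE = 2

@dataclass
class Interaction:
    user_id: str
    kind: str       # e.g. "message", "username", "image", "video"
    content: str

def classify(interaction: Interaction) -> Severity:
    """Toy stand-in for a trained model scoring the content."""
    toxic_terms = {"hate", "kill"}  # placeholder lexicon
    hits = sum(term in interaction.content.lower() for term in toxic_terms)
    if hits >= 2:
        return Severity.SEVERE
    return Severity.QUESTIONABLE if hits == 1 else Severity.SAFE

def moderate(interaction: Interaction) -> str:
    """Filter low-severity content; escalate severe content to humans."""
    severity = classify(interaction)
    if severity is Severity.SEVERE:
        return "escalate"   # route to a human review queue
    if severity is Severity.QUESTIONABLE:
        return "filter"     # hide or mask before it reaches other users
    return "allow"

print(moderate(Interaction("u1", "message", "I hate you and I will kill you")))  # escalate
```

In a production system the keyword check would be replaced by multilingual machine learning classifiers and image/video models, with the escalation step feeding human moderators.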
“With an emphasis on surfacing online harms including cyberbullying, abuse, hate speech, violent threats, and child exploitation, we enable clients across a variety of social networks across the globe to foster safe and healthy user experiences for all ages,” Two Hat writes on its website.
Microsoft partnership
Several years ago, Two Hat partnered with Microsoft’s Xbox team to apply its moderation technology to communities in Xbox, Minecraft, and MSN. Two Hat’s platform allows users to decide the content they’re comfortable seeing — and what they aren’t — which Priebe believes is a key differentiator compared with AI-powered moderation solutions like Sentropy and Jigsaw’s Perspective API.
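Two Hat hasn’t detailed how those user-level controls are implemented; one plausible way to model them is a per-user severity threshold, as in this minimal, entirely hypothetical sketch.

```python
# Hypothetical per-user filter: each user picks the highest severity
# score (0 = clean, 1 = mild, 2 = severe) they are willing to see.
USER_THRESHOLDS = {"alice": 0, "bob": 1}

def score(text: str) -> int:
    """Toy stand-in for a toxicity model returning a severity score."""
    hits = sum(term in text.lower() for term in ("hate", "kill"))
    return min(hits, 2)

def visible_to(viewer: str, text: str) -> bool:
    """Show a message only if it falls within the viewer's comfort level."""
    return score(text) <= USER_THRESHOLDS.get(viewer, 0)

print(visible_to("alice", "I hate this map"))  # False: alice filters mild toxicity
print(visible_to("bob", "I hate this map"))    # True: bob tolerates it
```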
“We created one of the most adaptive, responsive, comprehensive community management solutions available and found exciting ways to combine the best technology with unique insights,” Priebe said in a press release. “As a result, we’re now entrusted with aiding online interactions for many of the world’s largest communities.”
It’s worth noting that semi-automated moderation remains an unsolved challenge. Last year, researchers showed that Perspective, a tool developed by Google and its subsidiary Jigsaw, often classified online comments written in African American Vernacular English as toxic. A separate study revealed that bad grammar and awkward spelling — like “Ihateyou love” instead of “I hate you” — make toxic content far more difficult for AI and machine learning detectors to spot.
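The spelling problem is easy to reproduce with any token-based filter. The toy check below (not Perspective or any production system) catches “I hate you” but misses the same message with the spaces removed.

```python
# Toy token-based toxicity check: it flags messages containing blocked
# tokens, so simply removing the spaces evades it.
BLOCKED = {"hate", "kill"}

def naive_flag(text: str) -> bool:
    return any(token in BLOCKED for token in text.lower().split())

print(naive_flag("I hate you"))  # True  - caught by the token match
print(naive_flag("Ihateyou"))    # False - same intent, slips past the filter
```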
As evidenced by competitions like the Fake News Challenge and Facebook’s Hateful Memes Challenge, machine learning algorithms also still struggle to gain a holistic understanding of words in context. Revealingly, Facebook admitted that it hasn’t been able to train a model to find new instances of a specific category of disinformation: misleading news about COVID-19. And Instagram’s automated moderation system once disabled Black users 50% more often than white users.
But McCarthy expressed confidence in the power of Two Hat’s product, which includes a user reputation system, supports 20 languages, and can automatically suspend, ban, and mute potentially abusive members of communities.
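Two Hat hasn’t described how its reputation system maps behavior to sanctions. A common pattern, sketched here with invented thresholds, is to accumulate strikes per user and escalate from muting to suspension to a ban.

```python
from collections import defaultdict

# Hypothetical escalation ladder: strikes accumulate per user and map to
# progressively stronger sanctions. Thresholds are invented for illustration.
strikes = defaultdict(int)

def sanction(user_id: str) -> str:
    strikes[user_id] += 1
    count = strikes[user_id]
    if count >= 5:
        return "ban"
    if count >= 3:
        return "suspend"
    return "mute"

for _ in range(5):
    print(sanction("u42"))  # mute, mute, suspend, suspend, ban
```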
“We understand the complex challenges organizations face today when striving to effectively moderate online communities. In our ever-changing digital world, there is an urgent need for moderation solutions that can manage online content in an effective and scalable way,” he said. “We’ve witnessed the impact they’ve had within Xbox, and we are thrilled that this acquisition will further accelerate our first-party content moderation solutions across gaming, within a broad range of Microsoft consumer services, and to build greater opportunity for our third-party partners and Two Hat’s existing clients’ use of these solutions.”