Harassment in the workplace affects employees of all backgrounds, genders, sexualities, and ethnicities — but disproportionately those in underrepresented groups. A 2018 survey by Stop Street Harassment found that 81% of women have experienced sexual harassment in their lifetime. And according to a UCLA School of Law study, half of LGBTQ workers have faced job discrimination at some point in their careers.
Work-from-home arrangements during the pandemic haven’t slowed or reversed the trend — in fact, they’ve accelerated it. A poll from TalentLMS and The Purple Campaign, published in 2021, found that more than one in four respondents had experienced unwelcome sexual behavior online since the start of the health crisis, whether via videoconferencing, email, or text message.
Beyond the emotional distress for everyone involved, there’s a dollars-and-cents motivation for companies to prevent and address abuse. Sexual harassment causes lasting damage to the employees who experience it, leading to higher turnover, lower productivity, and increased absenteeism and sick leave. Deloitte estimates that workplace sexual harassment costs $2.6 billion in lost productivity, or an average of $1,053 per victim.
Borrowing a page from social media’s playbook, a relatively new crop of startups is developing systems that use a combination of filters, algorithms, and other tools to flag potentially problematic messages before they reach employees. For example, Fairwords, which today raised $5.25 million in a Series A round led by Fintop Capital, uses a spell-check-like interface that notifies users of harmful language as they type and explains how the language might be interpreted. But some experts say that these platforms run the risk of normalizing employee surveillance, complicating their mission of reducing abuse.
Flagging abusive language
Fairwords, like its rivals CommSafe AI and Aware, uses AI to scan the messages that employees send through collaboration platforms including Slack, Microsoft Teams, Facebook Messenger, Skype, Zoom, and email. The company’s software sits on desktops and detects language that’s “non-compliant,” such as words and phrases that might fall under the categories of cyberbullying or sexual harassment. When Fairwords spots language that possibly runs afoul of workplace policies, it shows real-world examples of how the language has “been detrimental to businesses and careers.”
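Fairwords hasn’t published how its detection works under the hood. As a rough illustration of the general idea, though, a draft message can be checked against a configurable policy list before it’s sent. The `POLICY_PHRASES` table and `check_message` helper below are hypothetical stand-ins, not the company’s code:

```python
import re

# Hypothetical policy list mapping phrases to violation categories.
# A production system would rely on trained language models, not a
# static deny list; this only illustrates the concept.
POLICY_PHRASES = {
    "keep this off the record": "compliance risk",
    "don't tell hr": "harassment/retaliation risk",
}

def check_message(draft: str) -> list[tuple[str, str]]:
    """Return (phrase, category) pairs detected in a draft message."""
    hits = []
    lowered = draft.lower()
    for phrase, category in POLICY_PHRASES.items():
        # \b anchors ensure whole-phrase matches rather than substrings.
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, category))
    return hits

for phrase, category in check_message("Just keep this off the record, OK?"):
    print(f"Flagged '{phrase}': {category}")
```

A real-time tool of this kind would run a check like this on each keystroke or pause in typing, surfacing the category and educational examples before the user hits send.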
On the backend, managers can add policies to Fairwords for specific types of violations. The company claims that its software can also look for signs of bribery and corruption, collusion, and discrimination. Fairwords attempts to quantify violations across the workforce via an analytics dashboard, showing communications trends over time and the percentage of employees who revise their messages before sending.
“Fairwords helps C-suite compliance and HR leaders understand the health of their communications culture by providing anonymous dashboards that show information including number of words analyzed, number of words flagged, percent of flags repeated within 30 days, percent of flags repeated more than once within 30 days, average flags per user, top flags per term, and top flagged applications,” a Fairwords spokesperson told VentureBeat via email. “The dashboards can help organizational leaders understand what kind of language is being used in their organizations, how to categorize that language (harassment, discrimination, bullying, etc.), and the applications that people are using to communicate.”
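Metrics like these are simple aggregates over individual flag events. As a toy sketch of how such anonymous, organization-level numbers could be computed, consider the following; the event fields and metric names are loosely modeled on the spokesperson’s description, not on Fairwords’ actual schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FlagEvent:
    user_hash: str   # pseudonymous user ID; no names stored
    term: str        # the flagged word or phrase
    app: str         # e.g., "slack", "email"
    revised: bool    # did the user edit the message before sending?

def dashboard_metrics(events: list[FlagEvent], words_analyzed: int) -> dict:
    """Aggregate flag events into anonymous, org-level metrics."""
    users = {e.user_hash for e in events}
    flags = len(events)
    return {
        "words_analyzed": words_analyzed,
        "words_flagged": flags,
        "avg_flags_per_user": flags / max(len(users), 1),
        "pct_revised_before_send": 100 * sum(e.revised for e in events) / max(flags, 1),
        "top_flagged_terms": Counter(e.term for e in events).most_common(3),
        "top_flagged_apps": Counter(e.app for e in events).most_common(3),
    }
```

Keeping only a pseudonymous hash per user, as in this sketch, is one way a vendor could report trends without exposing individual identities, though how Fairwords does this internally isn’t public.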
Fairwords pitches the solution as a way to see which words or phrases are being flagged most often and which communications channels are the worst offenders — as well as detect unauthorized chat apps. But while the company claims that it anonymizes the data it collects, some employees might feel wary of software that analyzes their keystrokes.
“Fairwords is built to train employees first, providing them with immediate feedback and training as they type to help them write inclusive, compliant, and fair communications. Our mission is to elevate company cultures by improving the nature and quality of written communications, and our product is built so that employees can interact with it directly,” the spokesperson said.
According to a 2021 ExpressVPN survey, 59% of employees are wary of employer surveillance, while 43% believe that workplace monitoring software — which is largely legal in the U.S. — is a violation of trust. Beyond privacy concerns, part of the reason might be that some companies fail to alert staff that they’re being monitored at all: Digital.com found in a recent study that 14% of businesses haven’t notified staff about their monitoring activities.
The Fairwords spokesperson asserts that the platform gathers data anonymously, reporting on the organization in aggregate rather than on individuals. “Surveillance products, which we are not, monitor individuals and are not built to provide feedback or support to employees, and they are often not shared with employees until they’ve ‘caught’ someone doing something wrong,” they added. “We believe in a proactive and transparent approach to create improvement overall.”
Still, some experts say that software like Fairwords — whether guilty of enabling surveillance or not — can make a difference if properly and transparently implemented.
“Employees have a right to privacy, and companies should be transparent about whether they monitor their employees’ work-related communications … [But] platforms like Fairwords could be a great tool to sensitize and train people towards using inclusive and fair language in workplaces,” said Nabamallika Dehingia, a predoctoral fellow at the University of California, San Diego, who has coauthored studies on workplace harassment. “AI technologies like Fairwords could be used as a supplement to [other] policies in order to ensure that employees do not engage in offensive or abusive digital communications.”
Amir Karami, an associate professor at the University of South Carolina who specializes in social media and politics, agrees that these types of tools can be beneficial: they promote a positive culture, train employees, and help companies gauge how safe their workplace is. But he points out the problems they pose as well, such as potentially unwanted data collection.
“First, if the platforms collect personal data and the employees don’t know who has access to the data, this can create fear and reduce the trust level in the company,” Karami told VentureBeat via email. “Second, the employees might think that the data might be used for punishment. Third, if an employee uses an inappropriate word that wasn’t recognized by the platforms, they might assume that it is OK to use that word. Fourth, the platforms could create the stress of constant monitoring, which leads to lower job satisfaction and reduced employee retention.”
Flaws and meaningful change
One of the challenges is flagging offensive language while preventing bias from creeping into the system.
In a 2019 study, University of Washington scientists found that AI is more likely to label phrases in African American English (AAE) — a dialect spoken by many Black people in the U.S. — as toxic than their general American English equivalents, even though AAE speakers understand them as non-toxic. Other audits of AI-powered toxicity-detection systems have found that they struggle to recognize hate speech that uses reclaimed slurs like “queer.” At one point, for example, the hate speech detection systems Meta used on Facebook flagged comments denigrating white people more aggressively than attacks on other demographic groups.
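Audits like these typically surface skew by scoring paired phrases that carry the same intent in different dialects. The sketch below uses the open-source Detoxify library as an illustrative stand-in for the classifiers those studies examined; the phrase pairs are invented examples, not the studies’ data:

```python
# Spot-check a toxicity classifier for dialect skew, using the
# open-source Detoxify library (pip install detoxify) purely as an
# illustrative stand-in. The phrase pairs are invented examples.
from detoxify import Detoxify

model = Detoxify("original")

# Pairs expressing roughly the same intent in AAE-influenced vs.
# general American English registers.
pairs = [
    ("wassup, y'all good?", "hello, is everyone well?"),
    ("that talk was straight fire", "that talk was excellent"),
]

for aae_like, general in pairs:
    scores = model.predict([aae_like, general])["toxicity"]
    # Systematically higher scores in the first column suggest bias.
    print(f"{aae_like!r}: {scores[0]:.3f}  vs  {general!r}: {scores[1]:.3f}")
```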
There’s also evidence that AI misses toxic text a human could easily spot, particularly text with missing characters, extra spaces between characters, or numbers substituted for letters.
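One common, partial mitigation is to normalize obfuscated text before it reaches a classifier. A minimal sketch, with a deliberately small and illustrative substitution table:

```python
import re

# Map common number-for-letter substitutions back to letters.
# The table and examples are illustrative, not exhaustive.
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"})

def normalize(text: str) -> str:
    text = text.translate(LEET)  # "h4te" -> "hate"
    # Collapse runs of single letters separated by spaces: "h a t e" -> "hate".
    text = re.sub(r"\b(?:\w )+\w\b", lambda m: m.group(0).replace(" ", ""), text)
    return text.lower()

print(normalize("that was h 4 t e speech"))  # -> "that was hate speech"
```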
For its part, Fairwords says that it’s working to “constantly improve” its algorithms and make its backend systems easier to customize. “Fairwords detection currently uses pre-trained, patented natural language processing-based models,” the spokesperson told VentureBeat. “Shortly, we will be introducing updates that use both publicly-sourced training data for toxic language and customer data with user feedback (so-called human in the loop) to further improve detection effectiveness … We [currently] use anonymized customer data to help train the models, which includes the feedback from end-users when they indicate notifications are not accurate with classifications as to why. This helps the analytics adapt to domain-specific jargon and changing communications behavior.”
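In this context, “human in the loop” simply means routing end users’ corrections back into the training data. A simplified sketch of what such a feedback record could look like follows; every field name here is hypothetical:

```python
# Simplified sketch of a human-in-the-loop feedback record; all field
# names are hypothetical, modeled on the spokesperson's description.
from dataclasses import dataclass

@dataclass
class FlagFeedback:
    text: str             # the flagged passage (anonymized upstream)
    predicted_label: str  # e.g., "harassment"
    accurate: bool        # did the user agree with the flag?
    reason: str           # the user's explanation, if they disputed it

def to_training_example(fb: FlagFeedback) -> tuple[str, str]:
    """Disputed flags become counter-examples for the next training run."""
    label = fb.predicted_label if fb.accurate else f"not_{fb.predicted_label}"
    return fb.text, label

fb = FlagFeedback("kill the process", "violent language", False, "tech jargon")
print(to_training_example(fb))  # -> ('kill the process', 'not_violent language')
```

Feedback on false positives like the jargon example above is how a deployed model could, as the spokesperson describes, adapt to domain-specific language over time.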
Regardless of an AI system’s accuracy, Dehingia cautions that software alone isn’t the answer to workplace harassment or abuse. Instead, she says, it requires “larger normative shifts” that prioritize harassment prevention through “protective policies” and “strong measures for inclusion and diversity.”
“We … need data to track improvements and backsliding on these issues, and platforms designed to train are not necessarily those best suited to evaluate their own impact … Companies [also] need to develop a culture of inclusion and diversity that is top-down (leadership prioritizing diversity) as well as participatory in that it allows its employees to provide feedback and suggestions on making the workplace a safe environment,” Dehingia said. “Representation, especially in senior or leadership roles, is important … Institutional policies that discourage harassment and protect women and individuals belonging to racial and other social minority groups must be put in place.”