This week, as the heads of four of the largest and most powerful tech companies in the world were called before a virtual congressional antitrust hearing to answer questions about how they built and run their respective behemoths, you could see that the bloom is off the rose for Big Tech.

Facebook’s Mark Zuckerberg, once the rascally college-dropout boy genius you loved to hate, still doesn’t seem to grasp the magnitude of the problem of globally destructive misinformation and hate speech on his platform. Tim Cook struggles to defend the 30% cut Apple takes from some of its App Store developers’ revenue — a policy he didn’t even establish, and a vestige of Apple’s late-2000s vise grip on the mobile app market. The plucky young upstarts who founded Google are now both middle-aged and have stepped down from executive roles, quietly fading away while Alphabet and Google CEO Sundar Pichai runs the show. And Jeff Bezos wears the untroubled visage of the world’s richest man.

Amazon, Apple, Facebook, and Google all created tech products and services that have undeniably changed the world, some in ways that are undeniably good. But as these tech titans moved fast and broke things, they also largely excused themselves from asking difficult ethical questions about everything from how they built their business empires to the impact their products and services have on the people who use them.

As AI continues to lead the next wave of transformative technology, skating over these difficult questions is a mistake the world can’t afford to repeat. What’s more, AI technologies won’t actually work properly unless companies address the issues at their heart.

Big Tech’s tradition was to be smart and ruthless, but AI requires people to be smart and wise. Those working in AI have to not only ensure the efficacy of what they make, but also holistically understand the potential harms to the people their technology affects. That’s a more mature and just way of building world-changing technologies, products, and services. Fortunately, many prominent voices in AI are leading the field down that path.

This week’s best example was the widespread reaction to a service called Genderify, which promised to use natural language processing (NLP) to help companies identify customers’ gender using only their name, username, or email address. The entire premise is absurd and problematic, and when AI folks got ahold of it to put it through its paces, they predictably found it to be terribly biased (which is to say, broken).
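To see why researchers dismantled it so quickly, consider a deliberately naive sketch of the premise. Genderify never disclosed its model or data, so everything below is hypothetical — the lookup table, the guess_gender function, and the confidence numbers are invented for illustration. It only shows the general shape of mapping a name, username, or email to a gender label, and how easily that collapses.

```python
import re

# Hypothetical sketch only: Genderify never published its model or data, so this
# is not its code. A naive lookup like the one below illustrates the premise of
# guessing gender from a name, username, or email, and why it breaks down.

NAME_TABLE = {
    "james": ("male", 0.95),
    "mary": ("female", 0.96),
    # A real system would hold far more entries, but the failure mode is the
    # same: coverage and confidence skew toward whatever names it has seen.
}

def guess_gender(identifier: str) -> tuple[str, float]:
    """Return a (label, confidence) guess from a name, username, or email."""
    local_part = identifier.lower().split("@")[0]       # drop any email domain
    first_token = re.split(r"[^a-z]+", local_part)[0]   # crude "first name" guess
    return NAME_TABLE.get(first_token, ("unknown", 0.0))

if __name__ == "__main__":
    for example in ["Mary", "james.smith@example.com", "Alex", "dr_data_42", "Ngozi"]:
        print(f"{example} -> {guess_gender(example)}")
    # Gender-neutral names, handles with no real name in them, and names absent
    # from the table all fall through or get mislabeled, which is the kind of
    # bias testers surfaced within a day of Genderify's launch.
```

Even with a vastly larger table or a statistical model in place of the lookup, the approach inherits whatever cultural and linguistic skew its source data carries, which is the core of what testers found when they fed the real service real-world names and titles.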

Genderify was such a bad joke that it almost seemed like some kind of performance art. In any case, it was laughed off the internet. Just a day or so after it was launched, the Genderify site, Twitter account, and LinkedIn page were gone.

It’s frustrating to many in the field that such ill-conceived and poorly executed AI offerings keep popping up. But the swift and wholesale deletion of Genderify illustrates the growing influence of this new generation of principled AI researchers and practitioners.

The burgeoning AI sector is already experiencing the kind of reckoning Big Tech is only facing after decades. Other recent examples include an outcry over a paper that promised to use AI to identify criminality from people’s faces (really just AI phrenology), which led to the paper being withdrawn from publication. Landmark studies on bias in facial recognition have led to bans and moratoriums on the technology’s use in several U.S. cities, as well as a raft of legislation to eliminate or combat its potential abuses. Fresh research is finding intractable problems with bias in well-established data sets like 80 Million Tiny Images and the legendary ImageNet — and leading to immediate (if overdue) change. And there’s more.

Although advocacy groups play a role in pushing for change and posing tough questions, the authority for such inquiry, and the research-based proof behind it, is coming from people inside the field of AI — ethicists, researchers looking for ways to improve AI techniques, and actual practitioners.

There is, of course, an immense amount of work to be done and many more battles ahead as AI fuels the next dominant set of technologies. Look no further than problematic AI in surveillance, the military, the courts, employment, policing, and more.

But seeing tech giants like IBM, Microsoft, and Amazon pull back on massive investments in facial recognition is a sign of progress. It doesn’t actually matter whether their actions are narrative cover for a capitulation to other companies’ market dominance, a calculated move to avoid potential legislative punishment, or just a PR stunt. For whatever reason, these companies acknowledged the value of slowing down and reducing damage rather than continuing to “move fast and break things.”
