Artificial intelligence is not the savior of mankind.

All of the intelligence we program into an interface — a speaker like Amazon Echo, the chatbot you use at work, or a car that drives on its own — has to be tested in the real world by actual human beings, and until those tests are perfected, it won’t be saving anyone. Even then, this computer entity is really an extension of the human mind, isn’t it? An AI is only as smart as the humans who create and program it — nothing more and nothing less.

That’s why autonomous cars are so important. They won’t be the savior of all mankind, but the AI in cars will certainly save a few lives — perhaps even millions.

That’s why the recent news that Waymo (Google’s sister company) is running a fleet of 500 test vehicles, Chrysler Pacifica minivans outfitted with LIDAR sensors and other tech to keep the car on the straight and narrow, is so important. No autonomous car will ever reach full production standards at larger automakers like Chrysler and GM until real people, including kids, are sitting in the vehicle, as Waymo has explained.

Here’s why that is.

Every expert in the auto industry knows there are millions of variables when it comes to driving, and an AI has to collect and analyze all of that data. That’s why it is called machine learning, not machine learned. A cat and a dog run into the road at the same time; what does the car need to do? Five cars swerve around an object on a highway. Why? At high speeds, wind comes up from a valley and sweeps over the road, but at the same time, there’s a draft from a storm on the other side — one that happens once every year. How should the car react?

Self-driving cars need the full force of a major tech company behind them to discern how an AI will function in all of these conditions. Have you met any government regulators? They are not going to let cars drive themselves in every state on every road anytime soon, although Las Vegas and San Francisco seem fairly open to the idea in some areas. Tesla is beta-testing autonomous driving with real drivers, but Tesla is a tiny startup compared to Ford. For a company like Ford, the reputational risk is not to the handful of self-driving cars it might release a few years from now but to the millions of cars it sells without that tech.

And there’s another fact. Early adopters all have one thing in common: they are a small minority. The major automakers don’t want to sell to a small number of early adopters; they want to sell to your Aunt Judy, who drives to work in a Honda Civic today. Mass production of AI-enabled vehicles requires mass testing with hundreds or even thousands of drivers. There’s a lesson there for anyone creating a chatbot or AI-enabled software: test it for Aunt Judy. Your AI should have analyzed so much data, gone through so many rounds of usability testing, and worked with so many users at every level of technical ability that it’s almost foolproof.

What happens when you don’t test in the real world? An AI will crash and fail. It will lose market share. It will become Microsoft Tay. Google and Waymo are testing 500 minivans because they are thinking about production-scale artificial intelligence in cars. That’s the real goal.
