Waymo’s driverless cars have driven 6.1 million autonomous miles in Phoenix, Arizona, including 65,000 miles without a human behind the wheel, from 2019 through the first nine months of 2020. That’s according to data from a new report Waymo published today analyzing a portion of collisions involving Waymo One, the robo-taxi service the company launched in 2018. In total, Waymo’s vehicles were involved in 18 collisions with a pedestrian, cyclist, driver, or other object and experienced 29 disengagements — instances in which human drivers were forced to take control — that likely would have otherwise resulted in a collision.

Three independent studies in 2018 — by the Brookings Institution, the infrastructure design firm HNTB, and the Advocates for Highway and Auto Safety (AHAS) — found that a majority of people aren’t convinced of driverless cars’ safety. And Partners for Automated Vehicle Education (PAVE) reports a majority of Americans don’t think the technology is “ready for prime time.” These concerns are not without reason. In March 2018, Uber suspended testing of its autonomous Volvo XC90 fleet after one of its cars struck and killed a pedestrian in Tempe, Arizona. Separately, Tesla’s Autopilot driver-assistance system has been blamed for a number of fender benders, including one in which a Tesla Model S collided with a parked fire truck. Now the automaker’s Full Self-Driving beta program is raising new concerns.

Waymo has so far declined to sign onto efforts like Safety First For Automated Driving, a group of companies that includes Fiat Chrysler, Intel, and Volkswagen and is dedicated to a common framework for the development, testing, and validation of autonomous vehicles. However, Waymo is a member of the Self-Driving Coalition for Safer Streets, which launched in April 2016 with the stated goal of working “with lawmakers, regulators, and the public to realize the safety and societal benefits of self-driving vehicles.” Since October 2017, Waymo has released a self-driving report each year, ostensibly highlighting how its vehicles work and the technology it uses to ensure safety, albeit in a format some advocates say resembles marketing materials rather than regulatory filings.

Waymo says its Chrysler Pacifica minivans and Jaguar I-Pace electric SUVs — which have driven tens of billions of miles through computer simulations and 20 million miles (74,000 driverless) on public roads in 25 cities — were providing a combined 1,000 to 2,000 rides per week in the East Valley portion of the Phoenix metropolitan region by early 2020. (Waymo One reached 100,000 rides served in December 2019.) Between 5% and 10% of these trips were driverless — without a human behind the wheel. Prior to early October, when Waymo made fully driverless rides available to the public through Waymo One, contracted safety drivers rode in most cars to note anomalies and take over in the event of an emergency.

Waymo One, which initially offered driverless pickups only to a group of riders from Waymo’s Early Rider program, delivers rides with a fleet of over 600 autonomous cars from Phoenix-area locations 24 hours a day, seven days a week. It prompts customers to specify pickup and drop-off points before estimating the time to arrival and cost of the ride. As with a typical ride-hailing app, users can enter payment information and rate the quality of rides using a five-star scale.

Using its cloud simulation platform, Carcraft, Waymo says it predicts what might have transpired had a driver not taken over to avert a near-accident — what the company calls a counterfactual. Waymo leverages the outcomes of these counterfactual disengagement simulations individually and in aggregate. Engineers evaluate each counterfactual to identify potential collisions, near-misses, and other metrics. If the simulation outcome reveals an opportunity to improve the system’s behavior, the engineers use it to develop and test changes to software. The counterfactual is also added to a library of scenarios used to test future software.

At an aggregate level, Waymo uses results from counterfactuals to produce metrics relevant to a vehicle’s on-road performance.

While conceding that counterfactuals can’t predict exactly what would have occurred, Waymo asserts they can be more realistic than simulations because they use the actual behavior of the vehicles and objects up to the point of disengagement. Where counterfactuals aren’t involved, Waymo synthesizes sensor data for cars and models scenes in digitized versions of real-world environments. As virtual cars drive through the scenarios, engineers modify the scenes and evaluate possible situations by adding new obstacles (such as cyclists) or by modulating the speed of oncoming traffic to gauge how the vehicle would have reacted.
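
To make the counterfactual idea concrete, here is a minimal sketch of how such a simulation might be structured, assuming a simple point-mass vehicle model, a circular collision check, and hypothetical function names; none of it is drawn from Waymo’s actual tooling. The logged trajectories of other road users are replayed while the driving policy controls the virtual vehicle from the moment of disengagement onward, and the run reports whether contact would have occurred.

```python
# Illustrative only: a toy counterfactual disengagement simulation.
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float   # position (meters)
    y: float
    vx: float  # velocity (meters/second)
    vy: float

def step(state: State, ax: float, ay: float, dt: float) -> State:
    """Advance a point-mass vehicle model by one time step."""
    return State(
        x=state.x + state.vx * dt,
        y=state.y + state.vy * dt,
        vx=state.vx + ax * dt,
        vy=state.vy + ay * dt,
    )

def collided(a: State, b: State, radius: float = 1.5) -> bool:
    """Treat both agents as circles of the given radius and test for overlap."""
    return math.hypot(a.x - b.x, a.y - b.y) < 2 * radius

def counterfactual_collision(ego: State, logged_agents, policy,
                             horizon_s: float = 10.0, dt: float = 0.1) -> bool:
    """Replay other road users from the log while the driving policy controls
    the virtual ego vehicle from the disengagement point onward."""
    for i in range(int(horizon_s / dt)):
        others = [traj[min(i, len(traj) - 1)] for traj in logged_agents]
        ax, ay = policy(ego, others)      # what the software would have done
        ego = step(ego, ax, ay, dt)
        if any(collided(ego, other) for other in others):
            return True                   # a simulated contact event
    return False                          # no contact within the horizon

# Toy example: the ego car approaches a stopped vehicle 30 meters ahead at
# 10 m/s, and the policy brakes at 4 m/s^2 once the gap closes to 20 meters.
stopped_car = [State(30.0, 0.0, 0.0, 0.0)] * 100

def brake_policy(ego, others):
    gap = min(other.x - ego.x for other in others)
    return (-4.0, 0.0) if gap < 20.0 and ego.vx > 0 else (0.0, 0.0)

print(counterfactual_collision(State(0.0, 0.0, 10.0, 0.0),
                               [stopped_car], brake_policy))  # -> False
```

Varying the replayed agents (adding a cyclist, say, or changing the speed of oncoming traffic) amounts to editing the logged trajectories before rerunning the same loop.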

As part of a collision avoidance testing program, Waymo also benchmarks the vehicles’ capabilities in thousands of scenarios where immediate braking or steering is required to avoid collisions. The company says these scenarios test competencies crucial to reducing the likelihood of collisions caused by other road users.

Waymo analyzes counterfactuals to determine their severity based on the likelihood of injury, collision object, impact velocity, and impact geometry — methods the company developed using national crash databases and periodically refines to reflect new data. Events are assigned severity classes ranging from S0 (no injury expected) to S3 (possible critical injuries expected), with S1 and S2 covering progressively more serious injuries in between. Waymo says it determines this rating using the change in velocity and direction of force estimated for each vehicle.
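
As an illustration of that kind of rating, the toy function below maps an estimated change in velocity (delta-v) and airbag status to a coarse S0 through S3 class. The thresholds are hypothetical, not Waymo’s; the company’s actual approach also weights impact geometry, the type of object struck, and injury-risk models fit to national crash data.

```python
# Illustrative only: hypothetical delta-v thresholds for a coarse severity class.
def severity_class(delta_v_mph: float, airbag_deployed: bool) -> str:
    """Map an estimated change in velocity to an S0-S3 severity class."""
    if delta_v_mph < 5 and not airbag_deployed:
        return "S0"  # no injury expected
    if delta_v_mph < 15:
        return "S1"  # minor injuries possible
    if delta_v_mph < 30:
        return "S2"  # moderate to serious injuries possible
    return "S3"      # critical injuries possible

print(severity_class(3.0, False))   # -> "S0"
print(severity_class(22.0, True))   # -> "S2"
```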

Here is a breakdown of the collision data from January 1, 2019 through September 30, 2020, a period that covers 65,000 miles in driverless mode. The disengagement data spans January 1 to December 31, 2019, during which Waymo’s cars drove the aforementioned 6.1 million miles.

S0

  • Waymo cars were involved in one actual and two simulated events (i.e., events triggered by a disengagement) in which a pedestrian or cyclist struck stationary Waymo cars at low speeds.
  • Waymo vehicles had two “reversing collisions” (e.g., rear-to-front, rear-to-side, rear-to-rear) — one actual and one simulated — at speeds of less than three miles per hour.
  • Waymo cars were involved in one actual sideswipe and eight simulated sideswipes. A Waymo car made a lane change during one simulated sideswipe, while other cars made the lane change during the other simulated sideswipes and the actual sideswipe.
  • Waymo reported 11 actual rear-end collisions involving its cars and one simulated collision. In eight of the actual collisions, another car struck a Waymo car while it was stopped; in two of the actual collisions, another car struck a Waymo car moving at slow speeds; and in one of the actual collisions, another car struck a Waymo car while it was decelerating. The simulated collision modeled a Waymo car striking a decelerating car.
  • Waymo vehicles had four simulated angled collisions. Three of these collisions occurred when another car turned into a Waymo car while both were heading in the same direction. One of the collisions happened when a Waymo car turned into another car while heading in the same direction.

S1

  • While making a lane change, a Waymo vehicle was involved in a simulated sideswipe that didn’t trigger airbag deployment.
  • Waymo cars were involved in one actual and one simulated rear-end collision that didn’t trigger airbag deployment. In the first instance, a Waymo car was struck while traveling slowly, while in the second instance a Waymo car was struck while decelerating.
  • There were two actual rear-end collisions involving Waymo cars that triggered airbag deployments inside either the Waymo vehicles or other cars, one during deceleration and the other at slow speeds.
  • There were six simulated angled accidents without airbag deployments, as well as one actual angled accident and four simulated angled accidents with deployments.

Waymo points out that in the sole incident in which a Waymo car rear-ended another vehicle, that vehicle had swerved into the Waymo car’s lane and braked hard. The company also notes that one actual event triggered a Waymo car’s airbags and that two events would have been more severe had drivers not disengaged. However, Waymo says the severities it ascribed to the simulated collisions don’t account for secondary collisions that might have occurred after the simulated event.

Falling short

Taken as a whole, Waymo’s report, along with its newly released safety methodologies and readiness determinations, isn’t likely to satisfy critics who advocate for industry-standard self-driving vehicle safety metrics. Tellingly, Waymo didn’t detail accidents earlier in the Waymo One program or progress in the other places where it’s actively conducting car and semi-truck tests. Those tests take place in Michigan, Texas, Florida, Arizona, and Washington, some of which experience more challenging weather conditions than Phoenix. As mandated by law, Waymo was one of dozens of companies to release a California-specific disengagement report in February. This report showed that disengagement rates among Waymo’s 153 cars and 268 drivers in the state dropped from 0.09 per 1,000 self-driven miles (or one per 11,017 miles) in 2018 to 0.076 per 1,000 self-driven miles (one per 13,219 miles) in 2019. But Waymo has characterized disengagements as a flawed metric, arguing they don’t adequately capture improvements or their impact over time.
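
For readers checking the math, the per-1,000-mile rate and the miles-per-disengagement figure are simply reciprocals of each other; rounding in the published rates accounts for the small mismatches. A quick sketch:

```python
# Quick arithmetic behind the California figures quoted above.
def rate_per_1000_miles(disengagements: int, miles: float) -> float:
    """Disengagements per 1,000 miles of autonomous driving."""
    return disengagements / miles * 1000

def miles_per_disengagement(rate_per_1000: float) -> float:
    """Average miles driven between disengagements, given the rate."""
    return 1000 / rate_per_1000

print(miles_per_disengagement(0.09))    # ~11,111 miles (reported: 11,017)
print(miles_per_disengagement(0.076))   # ~13,158 miles (reported: 13,219)
```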

In 2018, the RAND Corporation published an Uber-commissioned report — “Measuring Automated Vehicle Safety: Forging a Framework” — that laid bare some of the challenges ahead. It suggested that local DMVs play a larger role in formalizing the demonstration process and proposed that companies and governments engage in data-sharing. A separate RAND report estimated it would take hundreds of millions to hundreds of billions of miles to demonstrate driverless vehicle reliability in terms of fatalities and injuries. And Waymo CEO John Krafcik admitted in a 2018 interview that he doesn’t think self-driving technology will ever be able to operate in all possible conditions without some human interaction.

In June, the U.S. National Highway Traffic Safety Administration (NHTSA) detailed the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) program, which claims to be a robust source of information about autonomous vehicle testing. The program’s goal is to shed light on the breadth of vehicle testing taking place across the country. The federal government maintains no database of autonomous vehicle reliability records, and while states like California mandate that companies testing driverless cars disclose how often humans are forced to take control of the vehicles, critics assert that those are imperfect measures of safety.

Some of the AV TEST tool’s stats are admittedly eye-catching, like the fact that program participants are reportedly conducting 34 shuttle, 24 autonomous car, and seven delivery robot trials in the U.S. But these stats aren’t especially informative. Major stakeholders like Pony.ai, Baidu, Tesla, Argo AI, Amazon, Postmates, and Motional apparently declined to provide data for the purposes of the tracking tool or have yet to make a decision. Moreover, several pilots don’t list the road type (e.g., “street,” “parking lot,” “freeway”) used in tests, and the entries for locations tend to be light on detail. Waymo reports it is conducting “Rain Testing” in Florida, for instance, but hasn’t specified the number and models of vehicles involved.

Waymo says it evaluates its cars’ performance based on the avoidance of crashes, completion of trips in driverless mode, and adherence to applicable driving rules. But absent a vetting process, Waymo has wiggle room to underreport or misrepresent these metrics. And because programs like AV TEST are voluntary, there’s nothing to prevent a company from demurring as testing continues during and after the pandemic.

Other federal efforts to regulate autonomous vehicles largely remain stalled. The Department of Transportation’s recently announced Automated Vehicles 4.0 (AV 4.0) guidelines request — but don’t mandate — regular assessments of self-driving vehicle safety. And they permit those assessments to be completed by the automakers rather than standards bodies. AHAS has also criticized AV 4.0 for its vagueness. And while the House of Representatives unanimously passed a bill that would create a regulatory framework for autonomous vehicles, dubbed the SELF DRIVE Act, it has yet to be taken up by the Senate. In fact, the Senate two years ago tabled a separate bill (the AV START Act) that had made its way through committee in November 2017.

Coauthors of the RAND reports say it’s important to test the results of self-driving software with a broad, agreed-upon framework in place. The University of Michigan’s MCity in January released a white paper laying out safety test parameters it believes could work — an “ABC” test concept of accelerated evaluation (focusing on the riskiest driving situations), behavior competence (scenarios that correspond to major motor vehicle crashes), and corner cases (situations that test limits of performance and technology). In this framework, on-road testing of completely driverless cars is the last step — not the first.

“They don’t want to tell you what’s inside the black box,” Matthew O’Kelly, who coauthored a recent report proposing a failure detection method for safety-critical machine learning, told VentureBeat. “We need to be able to look at these systems from afar without sort of dissecting them.”
