Last year, Waabi, a startup developing autonomous vehicle technology, launched out of stealth with $83.5 million in venture funding. Founded by Raquel Urtasun, previously the chief scientist and head of R&D at Uber ATG, Uber’s self-driving car division, Waabi claimed that one of the core components of its platform — a simulator to develop autonomous driving systems — was superior to those from rivals like Waymo and Cruise.

Self-driving cars aren’t developed exclusively on real-world test tracks and public roads. Nearly every major autonomous vehicle company has a software-based simulator to replay — or explore new — driving scenarios in digital environments. But while this software has improved over the years and been used to simulate tens of billions of autonomous miles, it has key limitations, according to experts like Wayne State University computer scientist Weisong Shi.

“[Simulators] should [ideally] include more diverse pedestrian models such as different genders, heights, and shapes, skateboarders, wheelchairs, strollers, including distracted and inattentive pedestrians. The ultimate goal should be to have pedestrians in simulation behave as real pedestrians behave on the road,” Shi told VentureBeat via email. “[They’re also lacking in] capabilities to test vehicle-to-infrastructure and vehicle-to-vehicle technology [and] adversarial situations such as adversarial cyberattacks that can confuse the vehicle perception system. [Finally, they need] scenarios that include roadside construction and other [situations] that need immediate decision-making ability.”

The topic is timely as self-driving companies ramp up their investments in logistics during the pandemic, which promises to change the way goods are delivered to — and from — businesses. Just this week, Waymo Via, Waymo’s shipping-focused division, announced a pilot with C. H. Robinson that’ll give the company access to 200,000 shippers and carriers. For its part, Intel’s Mobileye recently announced a partnership with startup Udelv to put automated delivery vehicles into service in the U.S. by 2023.

Simulating the road

As Urtasun explained to VentureBeat in an interview, there are, broadly speaking, three types of simulators used in the autonomous vehicle industry: (1) log replay, (2) virtual simulation for motion planning, and (3) virtual world rendering via graphics engines for perception training. Log replay simulators feed previously captured car sensor data back to a self-driving system to train it. Virtual simulation for motion planning places the system in an immersive virtual world, while virtual world rendering, which relies on artists to build those worlds, employs automated techniques like procedural modeling to make them larger.
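To make the first category concrete, a log-replay harness steps a driving stack through previously recorded sensor frames and measures how far its decisions drift from what was actually driven. The sketch below is purely illustrative: the SensorFrame log format and the stack.plan() interface are assumptions made for this example, not any company’s actual API.

```python
import json
from dataclasses import dataclass

@dataclass
class SensorFrame:
    timestamp: float           # seconds since the start of the log
    lidar_points: list         # [(x, y, z, intensity), ...]
    recorded_steering: float   # what was actually driven, in radians

def replay_log(log_path: str, stack) -> float:
    """Replay a recorded drive through the stack under test and report
    how far its steering commands diverge from the recorded ones."""
    total_error, n = 0.0, 0
    with open(log_path) as f:
        for line in f:
            frame = SensorFrame(**json.loads(line))
            command = stack.plan(frame)   # hypothetical planner interface
            total_error += abs(command - frame.recorded_steering)
            n += 1
    return total_error / max(n, 1)        # mean absolute steering divergence
```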

For example, simulators like Carcraft, one of Waymo’s tools, leverage real-world sensor data to model objects including cars, cyclists, and pedestrians for digital versions of self-driving systems to navigate. Engineers can modify the environment and evaluate different situations, such as adding cyclists, adjusting the time of day, or changing the speed of oncoming traffic.
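That sort of variation typically amounts to sweeping a handful of scenario parameters and generating one test case per combination. Here is a minimal, hypothetical sketch; the Scenario fields are invented for illustration and don’t reflect Carcraft’s internals.

```python
import itertools
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    time_of_day: str            # "dawn", "noon", "dusk", "night"
    oncoming_speed_mps: float   # speed of oncoming traffic
    num_cyclists: int           # cyclists added to the scene

base = Scenario(time_of_day="noon", oncoming_speed_mps=13.4, num_cyclists=0)

# Sweep the variables mentioned above: time of day, oncoming-traffic
# speed, and added cyclists, producing one test case per combination.
variants = [
    replace(base, time_of_day=t, oncoming_speed_mps=v, num_cyclists=c)
    for t, v, c in itertools.product(
        ["dawn", "noon", "dusk", "night"], [8.9, 13.4, 17.9], [0, 1, 2])
]
print(f"{len(variants)} scenarios generated from one seed scene")  # 36
```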

Waymo says that its simulators can automatically synthesize cars’ journeys in addition to replaying sensor logs, enabling engineers to recreate elements like raindrops and solar glare. A spokesperson for Cruise told VentureBeat that its simulator leverages “learned models” to predict the behavior of drivers, cyclists, and pedestrians. And Yandex, the Russian tech giant, says that it tests “thousands” of scenarios in simulation simultaneously and analyzes test cases against “thousands” of metrics.

But as capable as simulators are, they’re susceptible to technical shortcomings that make them imperfect substitutes for the real world. Ziyuan Zhong, a research assistant at Columbia specializing in autonomous systems, points out that some “errors” identified by simulators can be due to artifacts introduced by the simulation itself. Moreover, simulators can miss real-world errors because they fail to anticipate them, he says.

“[Some] simulated sensor information might never happen in the real world … [and the] simulator might not consider [certain] kinds of variation in [their] testing scenarios,” Zhong told VentureBeat via email. “For example, in the real world, someone can modify the appearance of their vehicles by attaching different kinds of stickers — which can potentially fool the vision model of the autonomous vehicle.”

These are well-founded concerns. In one commonly cited example, researchers placed stickers on a stop sign that made a self-driving car misread it as a 45-mile-per-hour speed limit sign. Another study, out of the University of Michigan, found that self-driving perception systems based on lidar, a sensing technology used by Waymo and Cruise, among others, can be fooled into “seeing” a nonexistent obstacle.
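The lidar attack works, at a high level, by injecting a small cluster of spoofed returns where no object exists. The toy sketch below illustrates the failure mode against a deliberately naive box-based detector; it is not the Michigan team’s actual attack, which targets trained perception models.

```python
import numpy as np

def naive_obstacle_ahead(points: np.ndarray, min_points: int = 20) -> bool:
    """Flag an obstacle when enough above-ground points land in a
    corridor directly ahead (x forward, y left, z up; meters)."""
    mask = ((points[:, 0] > 2) & (points[:, 0] < 10) &
            (np.abs(points[:, 1]) < 1.5) & (points[:, 2] > 0.3))
    return int(mask.sum()) >= min_points

rng = np.random.default_rng(0)
# A clean scan: ground-level returns only, nothing tall in the corridor.
ground = np.column_stack([rng.uniform(-50, 50, 5000),
                          rng.uniform(-20, 20, 5000),
                          rng.uniform(0.0, 0.2, 5000)])
print(naive_obstacle_ahead(ground))   # False: road reads as clear

# Inject a small spoofed cluster floating 1 m up, 5 m ahead.
spoof = rng.normal(loc=[5.0, 0.0, 1.0], scale=0.1, size=(30, 3))
print(naive_obstacle_ahead(np.vstack([ground, spoof])))  # True: phantom object
```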

“In terms of [the] ‘thoroughness’ [of simulators,] there are several perspectives to consider,” Zhong said. “First, the inherent limitations of a simulator might prevent one from testing certain scenarios. Second, the potential number of scenarios to be tested [is] infinite [and,] at this point, there is still no uniform consensus on the ultimate standard of ‘thoroughness’ for testing autonomous vehicles.”

Improvements down the line

Urtasun asserts that Waabi’s simulator solves many of these problems by leaning heavily on AI to design tests, assess skills, and teach self-driving systems to “learn to drive on their own.” Waabi’s simulator can build “digital twins” of the world from data and perform “near-real-time” sensor simulation, she says, automatically crafting scenarios to stress-test self-driving systems without human intervention.

“The first application of our self-driving technology will be in logistics, an industry that stands to gain the most from autonomy due to a chronic driver shortage and pervasive safety issues,” Urtasun said. “[S]imulation has been on the periphery of testing in self-driving technologies for a while now … The trouble is that the simulators being utilized today tend to solve for one particular problem, or focus on building efficacy in one area.”

Zhong and Shi are skeptical that Waabi’s solution addresses every problem with self-driving simulators today. From his observations, Zhong says that simulators need to “improve their realism of sensor simulation,” for example by adding “realistic noise” when simulating sensor information. The behaviors of background vehicles and pedestrians in simulators also need to be made more realistic, Zhong says, and simulators should allow users to design more “diverse” scenarios.
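The sensor-realism point is concrete enough to sketch. Real lidar ranges arrive with jitter and dropped returns that idealized simulators omit; a minimal noise model along the lines Zhong describes might look like the following, with parameter values that are illustrative assumptions rather than figures from any published simulator.

```python
import numpy as np

def add_lidar_noise(ranges, sigma_m=0.02, dropout_rate=0.01, seed=None):
    """Corrupt clean simulated lidar ranges (meters) with Gaussian range
    jitter plus random dropouts, a stand-in for the kind of realistic
    noise real sensors exhibit. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    noisy = ranges + rng.normal(0.0, sigma_m, size=ranges.shape)
    dropped = rng.random(ranges.shape) < dropout_rate
    noisy[dropped] = np.nan   # lost returns, e.g. absorbed or deflected beams
    return noisy

clean = np.linspace(1.0, 60.0, 900)   # one idealized 900-beam scan
print(add_lidar_noise(clean, seed=0)[:5])
```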

“In my opinion, until we have covered all the possible scenarios in simulation that a typical driver sees on the road, there is [a] need to do real-world testing to test for the edge cases,” Shi said. “I believe simulation testing is currently limited in that sense. The autonomous car can only learn to respond to different driving patterns by actually experiencing these scenarios.”

A longstanding challenge in simulations built from real data is that every scene must respond to a self-driving car’s movements, including from viewpoints the original sensors never recorded. Whatever angle isn’t captured in a photo or video has to be rendered using predictive models, which is why simulation has historically relied on computer-generated graphics and physics-based rendering that represent the world somewhat crudely.

“The reality is that all simulation is somewhat limited in that you have to calibrate it and validate it against reality,” Jonny Dyer, previously the director of engineering at Lyft’s Level 5 self-driving division, told VentureBeat in a previous interview. “It’s not going to replace driving on the roads anytime soon [because] you can’t do everything in simulation.”

Underlining the risks of putting too much faith in simulation, a study by researchers at the University of Geneva in Switzerland found that self-driving systems trained in simulators don’t always transfer their skills to the real road. After testing three publicly available state-of-the-art lane-keeping AI systems, the coauthors found that “poor photorealism” and “inadequate representation of the vehicle’s sensor[s] and of the environmental uncertainty sources” led the systems to fail on a real-world test track.

Other research has investigated whether self-driving systems might fail to detect dark-skinned pedestrians, given the biases ingrained in the computer vision models and datasets they are likely trained on.

“With the current simulators, we can test how an ego vehicle performs in different weather conditions, road terrains, urban and city driving, interaction with and response to traffic control devices such as traffic lights and stop signs, interactions with other road users such as pedestrians, cyclists, and more,” Shi said. “However, there is always room for improvement.”
