Quadcopters that autonomously spot signs of wear at industrial facilities are nothing new — French startup Sterblue, Clobotics, General Electric spinoff Avitas Systems, and Cyberhawk all employ robots to look after gas terminals, oil rigs, and other assets. One problem that remains largely uncracked in the drone inspection space, though, is localization — accurately working out a drone’s position relative to the thing it’s inspecting. GPS and inertial measurement units (IMUs) provide only relatively coarse tracking; more accurate pose data would improve consistency and let drones fly safely closer to inspection targets.

Toward that end, a newly published paper on the preprint server Arxiv.org (“Improving drone localization around wind turbines using monocular model-based tracking”) proposes a novel method of integrating images into drone navigation stacks for automated wind turbine inspection. “Due to harsh weather conditions, wind turbines can incur a wide range of structural damage, which can severely impact their power generation abilities,” the scientists explain. “Current best practice in visual inspection is the use of ground-based cameras with telephoto lenses, or manual inspection using climbing equipment. [But] both methods incur considerable cost in both the inspection itself, and the turbine downtime.”

Drone inspection

They have a point — ice can do enormous damage to turbines. Some wind farms report energy production losses of up to 20 percent due to icing, according to Canadian wind-industry consultancy TechnoCentre Éolien (TCE), and over time, ice shedding from blades can damage other blades or overstress internal components, necessitating costly repairs.

The researchers’ model-based approach involves mapping a 3D line-and-point skeleton representation of turbines to image data collected from drones’ front-facing monocular cameras. The matching is performed by a convolutional neural network trained on a set of 1,000 labeled photos of turbines from the internet, which translates the image data — along with estimated camera poses obtained from the drone’s GPS and IMU sensors — into a form that can be “easily” correlated to the skeleton model projection.
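To make that pipeline a bit more concrete, here is a minimal sketch of the projection step at its heart: rendering the 3D skeleton into the image plane using the camera pose estimated from GPS and IMU readings. This is not the authors’ code; the skeleton geometry, camera intrinsics, and pose values below are all illustrative stand-ins.

```python
# Illustrative sketch of model-based tracking's projection step (not the
# paper's implementation): project a 3D line-and-point turbine skeleton into
# the image using the world-to-camera pose estimated from GPS/IMU, so it can
# be compared against the features a CNN extracts from the camera frame.
import numpy as np

# Hypothetical skeleton: 3D segment endpoints in meters, world frame (z up).
SKELETON_SEGMENTS = np.array([
    [[0.0, 0.0, 0.0],  [0.0, 0.0, 80.0]],    # tower base -> hub
    [[0.0, 0.0, 80.0], [0.0, 0.0, 110.0]],   # blade pointing up
    [[0.0, 0.0, 80.0], [26.0, 0.0, 65.0]],   # blade at 120 degrees
    [[0.0, 0.0, 80.0], [-26.0, 0.0, 65.0]],  # blade at 240 degrees
])

# Illustrative camera intrinsics (640x480 image, ~500 px focal length).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_points(points_world, R, t, K):
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates.

    R (3x3) and t (3,) are the world-to-camera rotation and translation,
    here standing in for the pose estimate from the drone's GPS and IMU.
    """
    points_cam = points_world @ R.T + t        # into the camera frame
    pixels_h = points_cam @ K.T                # apply intrinsics
    return pixels_h[:, :2] / pixels_h[:, 2:3]  # perspective divide

# Pose as it might come from GPS/IMU: drone 120 m back and 60 m up,
# camera looking horizontally at the turbine's rotor plane.
R = np.array([[1.0, 0.0,  0.0],   # camera x = world x
              [0.0, 0.0, -1.0],   # camera y = down
              [0.0, 1.0,  0.0]])  # camera z = toward the turbine
cam_pos = np.array([0.0, -120.0, 60.0])
t = -R @ cam_pos

for seg in SKELETON_SEGMENTS:
    uv = project_points(seg, R, t, K)
    print(f"segment: {uv[0].round(1)} -> {uv[1].round(1)}")
```

The projected segments are the model-side half of the matching; the CNN supplies the image-side half.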

In tests, the approach “noticeably improve[d]” localization, the paper’s authors write, though they concede that more validation work remains: they didn’t have access to ground-truth pose estimates for the inspection flights, so they couldn’t quantitatively evaluate the end-to-end system. But they contend that their work lays a foundation for improved systems to come, including versions that incorporate additional sensors such as lidar and that estimate the turbine model’s parameters on the fly.
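The intuition behind that improvement is easiest to see with a second toy sketch: once the skeleton can be projected, a noisy GPS/IMU position estimate can be refined by nudging it until the projection lines up with keypoints detected in the image, a standard reprojection-error minimization. As before, every number here is made up, and the detector is simulated with added pixel noise.

```python
# Toy illustration of refining a GPS/IMU position estimate against image
# measurements via reprojection-error minimization (not the paper's exact
# method; all geometry and noise levels are invented for the example).
import numpy as np
from scipy.optimize import least_squares

# Skeleton keypoints (tower base, hub, three blade tips), meters, world frame.
POINTS_3D = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 80.0], [0.0, 0.0, 110.0],
                      [26.0, 0.0, 65.0], [-26.0, 0.0, 65.0]])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])  # camera attitude, assumed known here

def project(points, cam_pos):
    """Pinhole projection for a camera at cam_pos with fixed attitude R."""
    cam = (points - cam_pos) @ R.T
    return cam[:, :2] * (K[0, 0] / cam[:, 2:3]) + K[:2, 2]

# Simulate the detector's output: projection from the true pose plus noise.
true_pos = np.array([0.0, -120.0, 60.0])
observed = project(POINTS_3D, true_pos)
observed += np.random.default_rng(0).normal(0.0, 0.5, observed.shape)

def residuals(cam_pos):
    """Pixel error between the projected skeleton and the detections."""
    return (project(POINTS_3D, cam_pos) - observed).ravel()

gps_guess = true_pos + np.array([3.0, -4.0, 2.0])  # a few meters of GPS error
refined = least_squares(residuals, gps_guess).x
print("GPS/IMU guess:", gps_guess)
print("refined      :", refined.round(2))  # lands back near true_pos
```

The sketch only captures the basic geometry of why image measurements constrain the pose; in the paper’s actual system the matching runs on real CNN output rather than simulated detections, and orientation errors matter as well.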

“Results illustrate that the use of the image measurements significantly improves the [precision] of the localization over that obtained using GPS and IMU alone,” they wrote.
