

Ever notice that underwater images tend to be blurry and somewhat distorted? That’s because phenomena like light attenuation and back-scattering adversely affect visibility. To remedy this, researchers at Harbin Engineering University in China devised a machine learning algorithm that generates realistic synthetic underwater images, along with a second algorithm that trains on those images to both restore natural color and reduce haze. They say their approach matches the state of the art both qualitatively and quantitatively, and that it can process upwards of 125 frames per second on a single graphics card.
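To make those phenomena concrete: underwater degradation is often approximated with a simplified image-formation model in which each color channel is attenuated exponentially with distance and mixed with back-scattered ambient light. Below is a minimal sketch of that standard model in Python; the per-channel coefficients are illustrative placeholders, not values from the paper.

```python
import numpy as np

def degrade_underwater(clean, depth, beta=(0.8, 0.4, 0.1), ambient=(0.1, 0.4, 0.5)):
    """Apply a simplified underwater image-formation model.

    clean   -- H x W x 3 float array in [0, 1] (the in-air scene)
    depth   -- H x W float array of scene distances in meters
    beta    -- per-channel (R, G, B) attenuation coefficients; these are
               made-up example values, with red largest because red light
               attenuates fastest in water
    ambient -- per-channel background (veiling) light, typically blue-green
    """
    beta = np.asarray(beta, dtype=np.float32)
    ambient = np.asarray(ambient, dtype=np.float32)
    # Transmission falls off exponentially with distance, per channel.
    t = np.exp(-depth[..., None] * beta)  # H x W x 3
    # Direct component (attenuated scene) plus back-scattered veiling light.
    return clean * t + ambient * (1.0 - t)
```

Because red attenuates fastest, raw underwater footage skews blue-green, which is exactly the color cast a restoration network has to learn to undo.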

The team notes that most underwater image enhancement algorithms (such as those that adjust white balance) aren’t based on physical imaging models, making them poorly suited to the task. By contrast, this approach taps a generative adversarial network (GAN), a two-part AI model in which a generator attempts to fool a discriminator into classifying synthetic samples as real-world samples, to produce a set of images of specific survey sites. Those images are then fed into a second algorithm built on the U-Net architecture.
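In code, that arrangement might look something like the sketch below: a generator that turns a clean image and its depth map into a synthetic underwater image, a discriminator that judges realism, and a small U-Net-style network for restoration. This is a minimal PyTorch illustration of the general recipe the article describes, not the authors’ actual architectures; every layer size is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a clean RGB image plus its depth map (4 channels in)
    to a synthetic underwater-looking RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb, depth):
        return self.net(torch.cat([rgb, depth], dim=1))

class Discriminator(nn.Module):
    """Outputs patch-level logits: high for real underwater
    images, low for the generator's fakes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TinyUNet(nn.Module):
    """A two-level U-Net-style restorer: encode, downsample, upsample,
    then decode with a skip connection (expects even image dimensions)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU())
        self.dec = nn.Conv2d(64, 3, 3, padding=1)  # 64 = 32 skip + 32 upsampled

    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.down(e))
        return torch.sigmoid(self.dec(torch.cat([e, u], dim=1)))

def gan_step(gen, disc, opt_g, opt_d, rgb, depth, real_underwater):
    """One adversarial update: the discriminator learns to separate
    real from fake, then the generator learns to fool it."""
    bce = nn.BCEWithLogitsLoss()
    fake = gen(rgb, depth)

    # Discriminator: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_real, d_fake = disc(real_underwater), disc(fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    d_fake = disc(fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Once the generator produces convincing synthetic underwater images, each one comes paired with the clean image it was derived from, so the U-Net can be trained with an ordinary supervised loss (pixel-wise L1, for instance) to invert the degradation.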

The team trained the GAN on a corpus of labeled scenes containing 3,733 images and corresponding depth maps, chiefly of scallops, sea cucumbers, sea urchins, and other organisms living within indoor marine farms. They also drew on open data sets including NYU Depth, which comprises thousands of indoor images paired with depth maps.
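Pairing each image with its depth map is the key data-handling step, since the synthesizer needs distance information to model attenuation. A loader for that kind of corpus might look like the following sketch; the directory layout and file naming here are hypothetical, not taken from the paper.

```python
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class RGBDSceneDataset(Dataset):
    """Yields (rgb, depth) tensor pairs matched by filename.
    The data/marine_farm layout below is a made-up example."""
    def __init__(self, root="data/marine_farm"):
        root = Path(root)
        self.rgb_paths = sorted((root / "rgb").glob("*.png"))
        self.depth_dir = root / "depth"

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, i):
        path = self.rgb_paths[i]
        rgb = np.asarray(Image.open(path).convert("RGB"), np.float32) / 255.0
        depth = np.asarray(Image.open(self.depth_dir / path.name), np.float32)
        # Channels-first: 3 x H x W image, 1 x H x W depth map.
        return (torch.from_numpy(rgb).permute(2, 0, 1),
                torch.from_numpy(depth)[None])
```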

Post-training, the researchers compared the results of their twin-model approach against several baselines. Their technique, they point out, restores color more uniformly, and it recovers green-toned images well without destroying the underlying structure of the original input. It also generally manages to recover color while maintaining “proper” brightness and contrast, a task at which competing methods aren’t particularly adept.
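For readers wondering what “quantitatively” typically means in this line of work: full-reference metrics such as PSNR and SSIM score a restored frame against a ground-truth clean frame. The snippet below shows how such a comparison is commonly computed with scikit-image; the article doesn’t specify which metrics the researchers used, so treat this purely as illustration.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_restoration(restored: np.ndarray, reference: np.ndarray):
    """Full-reference comparison of a restored frame against ground truth.
    Both arrays are H x W x 3 floats in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim  # higher is better for both
```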

It’s worth noting that the researchers’ method isn’t the first to reconstruct frames from damaged footage. Cambridge Consultants’ DeepRay leverages a GAN trained on a data set of 100,000 still images to remove distortion introduced by an opaque pane of glass, and the open source DeOldify project employs a family of AI models including GANs to colorize and restore old images and film footage. Elsewhere, scientists at Microsoft Research Asia in September detailed an end-to-end system for autonomous video colorization; researchers at Nvidia last year described a framework that infers colors from just one colorized and annotated video frame; and Google AI in June introduced an algorithm that colorizes grayscale videos without manual human supervision.

