Cornell, US Navy raise bar for autonomous underwater imaging


27 May, 2022

Tests conducted by Cornell and the U.S. Navy used new algorithms to outperform state-of-the-art programming for autonomous underwater sonar imaging, significantly improving the speed and accuracy for identifying objects such as explosive mines, sunken ships, airplane black boxes, pipelines and corrosion on ship hulls.

Sea reconnaissance is filled with challenges that include murky waters, unpredictable conditions and vast areas of subaquatic terrain. Sonar is the preferred imaging method in most cases, but acoustic waves can be difficult to decipher, often requiring different angles and views of an object before it can be identified.

“If you have a lot of targets and they’re distributed over a large region, it takes a long time to classify them all,” said Silvia Ferrari, the John Brancaccio Professor of Mechanical and Aerospace Engineering, who led the research published May 24 in the IEEE Journal of Oceanic Engineering. “Sometimes an autonomous underwater vehicle won’t be able to finish the mission because it has limited battery life.”

To improve the capability of these vehicles, Ferrari’s research group teamed up with the Naval Surface Warfare Center, Panama City, and the Naval Undersea Warfare Center, Newport, Rhode Island. The team created and tested a new imaging approach called informative multi-view planning, which integrates information about where objects might be located with sonar processing algorithms that decide the optimal views and the most efficient path to obtain them. The planning algorithms take into account the sonar sensor’s field-of-view geometry along with each target’s position and orientation, and can make on-the-fly adjustments based on current sea conditions.
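To make the idea concrete, below is a minimal, hypothetical sketch of greedy informative view planning built from the same ingredients the article describes: a prior map of target positions and orientations, a sonar field-of-view model, and a travel-cost penalty. The target coordinates, scoring heuristic, and function names are illustrative assumptions for this sketch, not the team's published algorithm.

```python
"""Illustrative sketch of greedy informative view planning.
All geometry, weights, and names are assumptions made for illustration;
the published method is considerably more sophisticated."""
import math

# Hypothetical prior map: (x, y, orientation in radians) for each target.
targets = [(10.0, 5.0, 0.3), (40.0, 20.0, 1.2), (25.0, 35.0, 2.0)]

SONAR_RANGE = 15.0                  # assumed usable sonar range (m)
SONAR_HALF_FOV = math.radians(45)   # assumed half field-of-view angle


def view_quality(vehicle_xy, heading, target):
    """Score how informative a view of `target` is from `vehicle_xy` facing
    `heading`. Returns 0 if the target is outside the assumed field of view;
    otherwise rewards roughly broadside views and penalizes distance."""
    tx, ty, t_orient = target
    dx, dy = tx - vehicle_xy[0], ty - vehicle_xy[1]
    dist = math.hypot(dx, dy)
    if dist > SONAR_RANGE:
        return 0.0
    bearing = math.atan2(dy, dx)
    # Wrap the bearing-vs-heading difference into [-pi, pi] before testing FOV.
    if abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi) > SONAR_HALF_FOV:
        return 0.0
    aspect = abs(math.sin(bearing - t_orient))  # 1.0 when looking broadside
    return aspect / (1.0 + dist)


def plan_views(candidate_views, start_xy, travel_weight=0.05):
    """Greedily pick one view per target, trading view quality against travel
    distance from the previously chosen view (a crude proxy for battery use)."""
    plan, pos, remaining = [], start_xy, list(targets)
    while remaining:
        best = None
        for view_xy, heading in candidate_views:
            for tgt in remaining:
                gain = view_quality(view_xy, heading, tgt)
                cost = travel_weight * math.hypot(view_xy[0] - pos[0],
                                                  view_xy[1] - pos[1])
                score = gain - cost
                if gain > 0 and (best is None or score > best[0]):
                    best = (score, view_xy, heading, tgt)
        if best is None:
            break  # no reachable informative view for the remaining targets
        _, view_xy, heading, tgt = best
        plan.append((view_xy, heading, tgt))
        pos = view_xy
        remaining.remove(tgt)
    return plan


if __name__ == "__main__":
    # A coarse grid of candidate viewpoints and headings over the survey area.
    views = [((x, y), h)
             for x in range(0, 50, 10)
             for y in range(0, 50, 10)
             for h in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
    for view_xy, heading, tgt in plan_views(views, start_xy=(0.0, 0.0)):
        print(f"view from {view_xy} heading {heading:.2f} rad -> target at {tgt[:2]}")
```

The greedy trade-off between view quality and travel cost stands in for the paper's joint view-and-path optimization; it is meant only to show why modeling field-of-view geometry and target orientation changes which views, and which route, a vehicle should choose.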

In computer-simulated tests, the research team’s algorithms competed against state-of-the-art imaging methods on multi-target classification tasks. The new algorithms completed the tasks in half the time and identified targets with a 93% improvement in accuracy. In a second test, in which the targets were scattered more randomly, the new algorithms performed the imaging task more than 11% faster and with 33% better accuracy.

“Until these algorithms, we were never able to account for the orientation and some of the more complicated automatic target variables that influence the quality of the images,” Ferrari said. “Now we can accomplish the same imaging tasks with higher accuracy and in less time.”

As a final test, the algorithms were programmed into a REMUS-100 autonomous underwater vehicle tasked with identifying 40 targets scattered within an area of St. Andrew Bay off the coast of Florida. In their first undersea trial, the new algorithms matched the speed of the state-of-the-art algorithms while delivering equal or superior classification performance.

“Demonstrating the developed algorithms using an actual vehicle in sea trials is a very exciting achievement,” said Jane Jaejeong Shin, Ph.D. ’21, who is now an assistant professor of mechanical and aerospace engineering at the University of Florida. “This result shows the potential of these algorithms to be extended and applied more generally in similar underwater survey missions.”