This post originally appeared on Discover magazine’s blog The Crux.
By Lakshman Prasad
When the American painter Abbott H. Thayer published his book Concealing-Coloration in the Animal Kingdom in 1909, he put forth the hypothesis that animals' colors served one function and one function only: camouflage. While that theory has since been disproven (animal colors also play a role in threatening predators and attracting mates), his work made a significant impact on our understanding of camouflage and how it could be used in war. During World War I, both the French and the German militaries relied on his book to develop designs for camouflaging their soldiers, and it became required reading for the U.S. Army's newly launched unit of camoufleurs. Thayer observed that nature "obliterates" contrast in two ways: by blending into the environment and by breaking up outlines with arbitrary, disruptive patterns.
Thayer was right. Nature uses both blending and patterns to disguise itself. And it is exceedingly good at it. If you have any doubts, just watch this video of an octopus seamlessly blending into its surroundings:
Over the last hundred-plus years, humans have looked to nature to improve our ability to camouflage ourselves, and we've come a long way.
Of course, it’s not just the United States and our allies that have benefited from advances in camouflage. So have our adversaries. And it’s not just soldiers that use camouflage to blend into their surroundings. Facilities can also be camouflaged, as can the movement of people and equipment. These can present significant challenges to the military. How can we see what doesn’t want to be seen?
At Los Alamos National Laboratory, we study camouflage in nature to learn how we can identify things trying to disguise themselves. We do that by looking at marine organisms that are exceptionally good at the art of blending in: flounders, skates, cuttlefish, and octopuses.
Take flounders, for example. They're not completely flat, but they appear flat, with both eyes on top of their heads. Like the octopus, they can change both the color and texture of their skin to imitate the ocean floor. Identifying them is no easy task. Animal and human vision have evolved to perceive a complex visual world efficiently by relying on cues such as coherent edges, color contrast, and texture differences; natural camouflage has evolved to frustrate exactly those cues and escape detection. In the past, researchers have tried to unravel this conundrum by studying how vision works. We're taking the opposite approach: by studying how camouflage defeats vision, we search for clues about how visual perception works, and about how to detect what it misses.
Working with the Woods Hole Oceanographic Institution (WHOI) and the National Oceanic and Atmospheric Administration (NOAA), we're using their autonomous underwater vehicles (AUVs) to obtain datasets representing different kinds of camouflage. The AUVs take millions of images of the ocean floor. We've already developed an algorithmic framework for image segmentation and shape analysis based on geometric modeling of principles of perceptual organization. The goal of that work was to develop efficient automated methods for detecting and analyzing features in remote sensing imagery for national security and intelligence applications.
This algorithmic framework has since been used in follow-on projects beyond remote sensing, analyzing radiographs, biomedical imagery, and marine imagery, including characterizing the structure and texture of marine organisms and their habitats from images obtained by WHOI. For instance, we developed the first successful method for rapidly detecting and counting sea scallops. In working with marine imagery, we were surprised by the sensitivity of our methods in detecting subtle features, and even certain camouflage, in the presence of high clutter—a trait not shared by other image segmentation methods, which rely heavily on spectral cues. That promise inspired our quest to crack the camouflage code.
Still, most camouflage defeats our current detection capabilities. Our work aims to change that. Rather than search for clues in the confounding world of blending and disruptive colors, we look for telltale structural cues. Indeed, color-blind people are often better at detecting camouflaged objects, perhaps because they rely less on color and more on form and texture to discern the world around them.
In particular, we observe that the economy of animals' physical forms, shaped by the demands of motility and heat conservation, yields structural cues that hint at their possible location: a smoother edge, say, or a texture slightly different from the ambient background. Such cues are unlikely to be accidental, so they give us strong reason to look closer at the localized regions where they appear, using more powerful techniques that would be too expensive or slow to apply across an entire large image. This initial cuing can also help automate the generation of the large training sets that machine-learning algorithms need in order to be taught to recognize camouflage.
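The cuing idea can be sketched in a few lines of code. The snippet below is an illustrative toy, not the Laboratory's actual method: it scores closed contours by the variance of their turning angles, on the assumption that an animal's economical outline turns smoothly while background clutter turns erratically, and flags the smoothest contour for closer inspection.

```python
import numpy as np

def contour_smoothness(points):
    """Score a closed contour by the variance of its turning angles.

    Lower variance means a smoother, more regular outline -- the kind
    of economical shape an animal's body tends to have, as opposed to
    the jagged contours of rocks and sediment.
    """
    pts = np.asarray(points, dtype=float)
    # Edge vectors between consecutive contour points (wrapping around).
    vecs = np.roll(pts, -1, axis=0) - pts
    angles = np.arctan2(vecs[:, 1], vecs[:, 0])
    # Turning angle at each vertex, wrapped into [-pi, pi).
    turns = np.diff(angles, append=angles[:1])
    turns = (turns + np.pi) % (2 * np.pi) - np.pi
    return float(np.var(turns))

# Two hypothetical contours: a smooth ellipse (a body-like outline)
# and a jagged, rock-like blob with the same overall size.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = np.column_stack([3 * np.cos(t), 2 * np.sin(t)])
rng = np.random.default_rng(0)
rock = np.column_stack([(3 + rng.normal(0, 0.3, t.size)) * np.cos(t),
                        (2 + rng.normal(0, 0.3, t.size)) * np.sin(t)])

scores = {"ellipse": contour_smoothness(ellipse),
          "rock": contour_smoothness(rock)}
# The smoothest contour would be cued for more expensive analysis.
candidate = min(scores, key=scores.get)
print(candidate)
```

In a real pipeline, the contours would come from a segmentation of AUV imagery rather than synthetic curves, and a score like this would only nominate regions for slower, more discriminating detectors.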
Locating marine organisms in their natural habitats can help us not only better detect nefarious activities that could threaten national security, but better understand ocean biodiversity as well. We can use the technology to monitor fish populations and mitigate overfishing, a growing environmental concern. It can also give us clues as to how rising ocean temperatures are affecting fish populations.
When Abbott Thayer wrote his book more than a hundred years ago about how animals use their color to survive, I doubt he could have imagined all the technological advances that have made camouflage as sophisticated as it is today—and how good we’ve gotten at detecting it. Camouflage is nature’s best approximation of invisibility. Our job is to beat it at its own game.
Lakshman Prasad is a data scientist in the Intelligence and Space Research division of Los Alamos National Laboratory. His most recent paper on this topic can be found in the journal Methods of Oceanography.