GeoVisual Search works on two different imagery catalogs, showcasing search at multiple resolutions and at both national and global scale. You can explore each of these below.
Humans can quickly identify images that are similar, even if they have different colors, orientations, and resolutions.
Center Pivot Irrigation
While humans immediately recognize visual similarity, they cannot analyze the millions of individual scenes that satellites generate daily. We built GeoVisual Search to do just that.
Tile source maps
Divide the globe into a grid of tiles. We use multiple overlapping grids to capture features that would otherwise be split across tile boundaries.
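The overlapping-grid idea can be sketched as follows. This is an illustrative toy, not the Descartes Labs implementation: the tile size, offsets, and function name are all assumptions. Each grid is shifted by an offset, so a feature cut by one grid's tile boundary lands inside a tile of another grid.

```python
def tile_grids(width, height, tile=128, offsets=(0, 64)):
    """Yield (x, y, size) tile boxes for several grids over a
    width-by-height image, each grid shifted by an offset so that
    features split by one grid fall whole inside another."""
    for off in offsets:
        for y in range(-off, height, tile):
            for x in range(-off, width, tile):
                yield (x, y, tile)

# For a 256x256 image: the unshifted grid yields 4 tiles and the
# 64-pixel-shifted grid yields 9, for 13 boxes total.
boxes = list(tile_grids(256, 256))
```

In practice the offsets trade storage and compute against the chance of cutting a feature in half; more overlapping grids mean more tiles to index.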
For each image tile, use a neural network trained with deep learning, a form of artificial intelligence loosely inspired by the structure of the brain, to extract features (e.g., shapes, colors, textures). Features can span both the visible and non-visible spectrum.
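The interface of this step, tile pixels in, fixed-length feature vector out, can be shown with a toy stand-in. The real system uses a trained deep network; here a single random-weight layer with a ReLU only illustrates the shape of the computation. The tile size, band count, and feature dimension below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 128x128 tiles with 4 bands (e.g. R, G, B,
# plus one non-visible band such as near-infrared), mapped to a
# 512-dimensional feature vector.
W = rng.normal(size=(512, 128 * 128 * 4))

def extract_features(tile):
    """tile: (128, 128, 4) array of pixel values -> (512,) feature vector."""
    x = tile.reshape(-1) / 255.0          # flatten and scale to [0, 1]
    return np.maximum(W @ x, 0.0)         # linear layer + ReLU

tile = rng.integers(0, 256, size=(128, 128, 4))
feats = extract_features(tile)
```

A trained network replaces the random weights with learned ones, so that tiles containing similar structures (pivot circles, runways, turbines) map to nearby feature vectors.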
Given a query image, calculate a “visual distance” between the query features and the features extracted from each image in the comparison set.
The tiles with the smallest “visual distance” are visually similar to the query tile.
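The distance-and-rank step above amounts to a nearest-neighbor search over feature vectors. A minimal sketch, assuming Euclidean distance as the "visual distance" (the actual metric used is not stated in the text):

```python
import numpy as np

def nearest_tiles(query, catalog, k=5):
    """Return indices of the k catalog feature vectors with the
    smallest Euclidean "visual distance" to the query vector."""
    dists = np.linalg.norm(catalog - query, axis=1)
    return np.argsort(dists)[:k].tolist()

# Tiny demo with 2-D feature vectors: the query matches row 1 exactly,
# and row 2 is its next-closest neighbor.
catalog = np.array([[0.0, 1.0], [1.0, 0.0], [0.9, 0.1], [5.0, 5.0]])
query = np.array([1.0, 0.0])
top2 = nearest_tiles(query, catalog, k=2)
```

At the scale of billions of tiles, a brute-force scan like this is replaced by an approximate nearest-neighbor index, but the ranking principle is the same: smallest distance first.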
Descartes Labs created two base-layer map composites for GeoVisual Search, using Landsat 8 for global coverage and NAIP for the United States.
Aerial Imagery (NAIP)
Resolution: 1 meter per pixel
Coverage: United States
Number of pixels processed: 31.5 trillion
Number of tiles processed: 1.9 billion
Satellite Imagery (Landsat 8)
Resolution: 20 meters per pixel
Coverage: Global
Number of pixels processed: 3.4 trillion
Number of tiles processed: 205 million