
An Inductive Mapping with Convolutional Representations for Human Settlement Detection: Preliminary Results...

by Wadzanai D Lunga, Dilip R Patlolla, Hsiuhan Yang, Jeanette E Weaver, Budhendra L Bhaduri
Publication Type
Conference Paper
Conference Name
2017 IEEE International Geoscience and Remote Sensing Symposium
Conference Location
Fort Worth, Texas, United States of America

Undoubtedly, deep convolutional learning methods continue to improve image-level classification performance in computer vision and remote sensing applications. However, the spatio-temporal nature of remote sensing imagery poses further challenges in multiscale image understanding, and emerging opportunities include fine-grained and neighborhood-scale mapping with overhead imagery. Limitations due to the lack of ground truth at the relevant scales often mandate that these challenges be pursued as disjoint tasks. We test this premise and explore the representation spaces of a single deep convolutional network, and their visualization, to argue for a novel unified feature extraction framework. The objective is to reuse trained feature extractors, without retraining the network, on three remote sensing tasks: superpixel mapping, pixel-level segmentation, and semantic image visualization. By treating the same convolutional feature extractors as visual information extractors that encode different image representation spaces, we demonstrate preliminary inductive transfer learning potential in multiscale experiments that span edge-level detail up to semantic-level information.
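As a rough illustration of the core idea, not the paper's actual implementation, the sketch below taps feature maps at several depths of a single frozen, pretrained convolutional network. The backbone choice (torchvision's VGG16) and the tap indices are assumptions made here for illustration; the abstract does not specify the network used.

```python
# Minimal sketch: reuse one frozen convolutional backbone across tasks,
# without any retraining. VGG16 is a hypothetical stand-in backbone.
import torch
import torchvision.models as models

# Load a pretrained feature extractor and freeze its weights.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Illustrative tap depths: shallow activations carry edge-level detail,
# deeper ones carry increasingly semantic-level information.
TAP_LAYERS = {3: "edge-level", 15: "mid-level", 29: "semantic-level"}

def extract_representations(image: torch.Tensor) -> dict:
    """One forward pass; collect feature maps at several depths."""
    feats, x = {}, image
    with torch.no_grad():
        for idx, layer in enumerate(backbone):
            x = layer(x)
            if idx in TAP_LAYERS:
                feats[TAP_LAYERS[idx]] = x
    return feats

# The same frozen features could then feed superpixel mapping,
# pixel-level segmentation, or semantic image visualization.
reps = extract_representations(torch.randn(1, 3, 224, 224))
for name, f in reps.items():
    print(name, tuple(f.shape))
```

The design point this sketch captures is that one forward pass yields multiple representation spaces at once, so downstream tasks at different scales can share the extractor rather than each training its own.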