Title: Scene Classification in High Resolution Remotely Sensed Images Based on PCANet
Abstract: The rich information provided by high resolution remotely sensed images allows us to classify scenes by understanding their spatial and structural patterns. The key to scene classification with remotely sensed images lies in efficient feature learning and invariant image representations. While deep neural network-based approaches achieve good classification accuracy on remotely sensed images, they often must train millions of parameters and involve heavy iterative computation. In this paper, we propose a new framework for scene classification based on a simple PCANet, which is introduced to high resolution remotely sensed image classification for the first time. First, we verify the suitability of PCANet for classifying large-scale scenes from high resolution remotely sensed images. Then we explore the impact of PCANet parameters, including filter size, number of filters, and block overlap ratio, on classification accuracy. Lastly, we conduct comprehensive experiments on the public UC-Merced dataset to demonstrate the effectiveness of the approach. Experimental results show that the proposed framework achieves classification accuracy on par with state-of-the-art deep neural network-based methods without training a huge number of parameters. We demonstrate that the proposed framework can serve as the basis of a classification system that automatically scans large-scale high resolution satellite imagery to classify scenes.
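The abstract's central point is that PCANet learns its convolution filters in closed form, without the iterative training that deep neural networks require: the filters of each stage are simply the leading principal components of mean-removed image patches. The sketch below illustrates that one-stage filter-learning step; the function name, filter size, and filter count are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def learn_pca_filters(images, filter_size=7, num_filters=8):
    """Learn one stage of PCANet-style convolution filters.

    A minimal sketch: collect all overlapping patches from the input
    images, remove each patch's mean, and take the leading eigenvectors
    of the patch covariance as convolution kernels. No iterative
    optimization is involved.
    """
    k = filter_size
    patches = []
    for img in images:
        h, w = img.shape
        # Gather every overlapping k x k patch, flattened to a vector.
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())  # remove the patch mean
    X = np.stack(patches, axis=1)             # shape: (k*k, num_patches)
    # Eigenvectors of X X^T (the scatter matrix) are the PCA filters.
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(eigvals)[::-1]         # decreasing eigenvalue order
    filters = eigvecs[:, order[:num_filters]].T.reshape(num_filters, k, k)
    return filters

# Toy usage on random single-band "images".
rng = np.random.default_rng(0)
imgs = [rng.standard_normal((32, 32)) for _ in range(5)]
filters = learn_pca_filters(imgs, filter_size=7, num_filters=8)
print(filters.shape)  # (8, 7, 7)
```

In the full PCANet pipeline each image is convolved with these filters, a second PCA stage is applied to the filter responses, and the outputs are binarized and pooled into block-wise histograms before classification; the block overlap ratio studied in the paper controls how those histogram blocks tile the image.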
Publication Year: 2016
Publication Date: 2016-01-01
Language: en
Type: book-chapter
Indexed In: ['crossref']
Access and Citation
Cited By Count: 3