
Hyperspectral Image Classification Using Gradient Local Auto-Correlations

Chen Chen1, Junjun Jiang2, Baochang Zhang3, Wankou Yang4, Jianzhong Guo5

1. Department of Electrical Engineering, University of Texas at Dallas, Texas, USA
2. School of Computer Science, China University of Geosciences, Wuhan, China
3. School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
4. School of Automation, Southeast University, Nanjing, China
5. School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan, China
[email protected]

Abstract

Spatial information has been verified to be helpful in hyperspectral image classification. In this paper, a spatial feature extraction method utilizing the spatial and orientational auto-correlations of local image gradients is presented for hyperspectral imagery (HSI) classification. The Gradient Local Auto-Correlations (GLAC) method employs second order statistics (i.e., auto-correlations) to capture richer information from images than histogram-based methods (e.g., Histogram of Oriented Gradients), which use first order statistics (i.e., histograms). The experiments carried out on two hyperspectral images demonstrate the effectiveness of the proposed method compared to state-of-the-art spatial feature extraction methods for HSI classification.

1. Introduction

Hyperspectral imagery (HSI) captures a dense spectral sampling of reflectance values over a wide range of the spectrum [1]. This rich spectral information provides additional capabilities for many remote sensing applications, including environmental monitoring, crop analysis, and plant and mineral exploration.

In conventional HSI classification approaches, only the spectral signatures of the pixels in the image are considered. Classification techniques that use spectral values alone to assign a label to each pixel are so-called pixel-wise classifiers [2]. However, the spatial context in hyperspectral images is also useful for scene interpretation. During the last decade, there has been a great deal of effort in exploiting spatial features to improve HSI classification performance. In [3], a volumetric gray level co-occurrence matrix was used to extract texture features from hyperspectral images. In [4], a spectral-spatial preprocessing method was proposed to incorporate spatial features for HSI classification by employing a multihypothesis prediction strategy originally developed for compressed-sensing image reconstruction [5] and image super-resolution [6]. A 3-D discrete wavelet transform (3-D DWT) was employed in [7] to effectively capture the spatial information of hyperspectral images at different scales and orientations. 2-D Gabor filters were applied to selected bands or principal components of the hyperspectral image to extract Gabor texture features for classification [8, 9]. Morphological profiles (MPs) generated via a series of structuring elements were introduced in [10] to capture multiscale structural features for HSI classification. Owing to the effectiveness of MPs in characterizing spatial structural features, many MP-based features have been proposed for HSI classification, such as extended morphological profiles (EMPs) [11], attribute profiles (APs) [12], and extended multi-attribute profiles (EMAPs) [13]. In [14], local binary patterns (LBPs) and Gabor texture features were combined to enhance the discriminative power of the spatial features.

Spatial feature extraction plays a key role in improving HSI classification performance. In this paper, we introduce gradient local auto-correlations (GLAC) [15] and present a new spatial feature extraction method for hyperspectral images based on GLAC. The GLAC descriptor, which is based on second order statistics of gradients (the spatial and orientational auto-correlations of local image gradients), can effectively capture rich information from images and has been successfully used in motion recognition [22] and human detection [15, 23]. To the best of our knowledge, this is the first time image-gradient-based features have been used for hyperspectral image classification. Experimental results on two HSI datasets demonstrate the effectiveness of the proposed feature extraction method compared with several state-of-the-art spatial feature extraction methods for HSI classification.

The remainder of this paper is organized as follows. Section 2 describes the details of the GLAC descriptor and the classification framework. Section 3 presents the experimental results with two real hyperspectral datasets. Finally, Section 4 concludes the paper.

2. Methodology

2.1. Gradient local auto-correlations

The GLAC descriptor [15] is an effective tool for extracting shift-invariant image features. Let $I$ be an image region and $\mathbf{r} = (x, y)^t$ be a position vector in $I$. The magnitude and the orientation angle of the image gradient at each pixel can be represented by $n = \sqrt{(\partial I/\partial x)^2 + (\partial I/\partial y)^2}$ and $\theta = \arctan\left(\partial I/\partial x,\, \partial I/\partial y\right)$, respectively. The orientation $\theta$ is then coded into $D$ orientation bins by voting weights to the nearest bins to form a gradient orientation vector $\mathbf{f} \in \mathbb{R}^D$. With the gradient orientation vector $\mathbf{f}$ and the gradient magnitude $n$, the $N$th order auto-correlation function of local gradients can be expressed as follows:

$$R_N(d_0, \ldots, d_N, \mathbf{a}_1, \ldots, \mathbf{a}_N) = \int_I \omega\big[n(\mathbf{r}), n(\mathbf{r}+\mathbf{a}_1), \ldots, n(\mathbf{r}+\mathbf{a}_N)\big]\, f_{d_0}(\mathbf{r})\, f_{d_1}(\mathbf{r}+\mathbf{a}_1) \cdots f_{d_N}(\mathbf{r}+\mathbf{a}_N)\, d\mathbf{r}, \quad (1)$$

where $\mathbf{a}_i$ are displacement vectors from the reference point $\mathbf{r}$, $f_d$ is the $d$th element of $\mathbf{f}$, and $\omega(\cdot)$ indicates a weighting function. In the experiments reported later, $N \in \{0, 1\}$, $a_{1x}, a_{1y} \in \{\pm\Delta r, 0\}$, and $\omega(\cdot) \equiv \min(\cdot)$ were considered as suggested in [15], where $\Delta r$ represents the displacement interval in both the horizontal and vertical directions. For $N \in \{0, 1\}$, the formulation of GLAC is given by

$$\mathbf{F}_0:\; R_{N=0}(d_0) = \sum_{\mathbf{r} \in I} n(\mathbf{r})\, f_{d_0}(\mathbf{r}), \quad (2)$$

$$\mathbf{F}_1:\; R_{N=1}(d_0, d_1, \mathbf{a}_1) = \sum_{\mathbf{r} \in I} \min\big[n(\mathbf{r}), n(\mathbf{r}+\mathbf{a}_1)\big]\, f_{d_0}(\mathbf{r})\, f_{d_1}(\mathbf{r}+\mathbf{a}_1).$$

The spatial auto-correlation patterns of $(\mathbf{r}, \mathbf{r}+\mathbf{a}_1)$ are shown in Figure 1.

Figure 1. Configuration patterns of $(\mathbf{r}, \mathbf{r}+\mathbf{a}_1)$.

The dimensionality of the above GLAC features ($\mathbf{F}_0$ and $\mathbf{F}_1$) becomes $D + 4D^2$. Although the dimensionality of the GLAC features is high, the computational cost is low due to the sparseness of $\mathbf{f}$. In other words, Eq. (2) is applied to only a few non-zero elements of $\mathbf{f}$. It is also worth noting that the computational cost is invariant to the number of bins, $D$, since the sparseness of $\mathbf{f}$ does not depend on $D$.
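To make Eq. (2) concrete, below is a minimal NumPy sketch of the 0th- and 1st-order GLAC features for a single 2-D patch. It is illustrative rather than the authors' implementation: each pixel is assigned to its single nearest orientation bin (the paper votes weights into the nearest bins), the four displacement configurations of Figure 1 with interval Δr are assumed, and the function name and defaults are ours.

```python
import numpy as np

def glac_features(img, D=8, dr=4):
    """Sketch of 0th/1st-order GLAC for one 2-D patch.

    D  : number of orientation bins
    dr : displacement interval (Delta r) in pixels
    Output length is D + 4*D^2 (F0 plus four F1 configurations).
    """
    gy, gx = np.gradient(img.astype(float))            # dI/dy, dI/dx
    n = np.sqrt(gx ** 2 + gy ** 2)                      # gradient magnitude
    theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)       # orientation in [0, 2*pi)

    # Hard assignment to the nearest of D bins (a simplification of the
    # nearest-bin weight voting described in the paper).
    bins = np.floor(theta / (2 * np.pi / D)).astype(int) % D
    H, W = img.shape
    f = np.zeros((H, W, D))
    f[np.arange(H)[:, None], np.arange(W)[None, :], bins] = 1.0

    # F0: sum_r n(r) * f_{d0}(r)
    F0 = np.tensordot(n, f, axes=([0, 1], [0, 1]))      # shape (D,)

    # F1: sum_r min[n(r), n(r+a1)] * f_{d0}(r) * f_{d1}(r+a1)
    shifts = [(0, dr), (dr, 0), (dr, dr), (dr, -dr)]    # assumed Figure 1 configs
    F1 = []
    for dy, dx in shifts:
        # region where both r and r + a1 fall inside the patch
        y0, y1 = max(0, -dy), H - max(0, dy)
        x0, x1 = max(0, -dx), W - max(0, dx)
        n0 = n[y0:y1, x0:x1]
        n1 = n[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
        f0 = f[y0:y1, x0:x1]
        f1 = f[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
        wmin = np.minimum(n0, n1)[..., None, None]      # omega = min
        F1.append((wmin * f0[..., :, None] * f1[..., None, :]).sum(axis=(0, 1)))
    return np.concatenate([F0] + [m.ravel() for m in F1])
```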

2.2. Proposed classification framework

Hyperspectral images usually have hundreds of spectral bands. Therefore, extracting spatial features from each spectral band image creates a high computational burden. In [16], it was suggested to use several principal components (PCs) of the hyperspectral data to address this issue; however, any feature reduction technique could also be applied. In our spatial feature extraction method, principal component analysis (PCA) [17] is used to obtain the first $K$ PCs. In each PC, GLAC features are generated for the pixel of interest from its corresponding local image patch of size $w \times w$. The GLAC features from all PCs are then concatenated to form a single composite feature vector for each pixel, as illustrated in Figure 2. For classification, an extreme learning machine (ELM) [18] is employed due to its efficient computation and good classification performance [9, 19].

Figure 2. Graphical illustration of the procedure of extracting GLAC features from a hyperspectral image.
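The feature construction in Figure 2 can be sketched roughly as follows. This is a simplified illustration under our own assumptions: scikit-learn's PCA stands in for the PCA step of [17], the PC images are mirror-padded so that border pixels receive full $w \times w$ patches (a choice of ours, not stated in the paper), and the per-patch descriptor is passed in as a function (for example the glac_features sketch above). The RBF-kernel ELM classifier [18] used in the paper is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def glac_cube_features(cube, glac, K=4, w=21):
    """Per-pixel spatial features following the pipeline of Figure 2.

    cube : (rows, cols, bands) hyperspectral image
    glac : function mapping a 2-D patch to a 1-D GLAC feature vector
    K    : number of principal components kept
    w    : patch size (w x w window centred on the pixel of interest)
    """
    rows, cols, bands = cube.shape

    # PCA on the spectral vectors; keep the first K PCs as "band" images.
    pcs = PCA(n_components=K).fit_transform(cube.reshape(-1, bands).astype(float))
    pcs = pcs.reshape(rows, cols, K)

    # Mirror-pad so border pixels also get a full w x w neighbourhood.
    half = w // 2
    padded = np.pad(pcs, ((half, half), (half, half), (0, 0)), mode="symmetric")

    features = []
    for i in range(rows):
        for j in range(cols):
            patch = padded[i:i + w, j:j + w, :]
            # concatenate the GLAC vectors of the K PC patches
            features.append(np.concatenate([glac(patch[:, :, k]) for k in range(K)]))
    return np.asarray(features)   # (rows * cols, K * per-patch GLAC length)
```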

3. Experiments and analysis

In this section, we evaluate the proposed feature extraction method using two real hyperspectral datasets. In our experiments, the first 4 PCs (i.e., $K = 4$), which account for over 95% of the variance of the datasets, are considered. Three spatial feature extraction approaches, namely Gabor filters [8], EMAP [13], and LBP [14], are utilized for comparison with our proposed GLAC method. Moreover, classification using the spectral information only (denoted by Spec) is also conducted.

3.1. Experimental data

We use two widely used benchmarks for HSI classification: the Indian Pines and Pavia University datasets. Both datasets and their corresponding ground truth maps are obtained from a publicly available website [20]. The Indian Pines dataset was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Indian Pines test site in northwestern Indiana. The original data consist of 224 spectral bands, which were reduced to 200 bands after removal of 24 water-absorption bands. The dataset has a spatial dimension of $145 \times 145$ pixels with a spatial resolution of 20 m. There are 16 different land-cover classes in this dataset. The Pavia University dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) over Pavia, northern Italy. This dataset has 103 spectral bands, each with a spatial dimension of $610 \times 340$ pixels, and a spatial resolution of 1.3 m. The dataset consists of 9 different land-cover classes. The ground truth labels of the two datasets are shown in Figure 3. There are 10,249 labeled pixels for the Indian Pines dataset and 42,776 labeled pixels for the Pavia University dataset. Detailed information on the number of training and testing samples used for the two datasets is summarized in Tables 3 and 4, respectively.


Figure 3. Ground truth labels of two hyperspectral datasets: (a) Indian Pines; (b) Pavia University.

3.2. Parameter setting

In the proposed feature extraction method, two parameters of GLAC, $D$ and $\Delta r$, are important. First of all, we estimate the optimal parameter set $(D, \Delta r)$ for the GLAC descriptor. The training samples are randomly selected and the rest of the labeled samples are used for testing. The training and testing samples are fixed for the various parameter sets $(D, \Delta r)$. For simplicity, we set the patch size to $w = 21$ (i.e., $21 \times 21$ pixels) in this parameter tuning experiment. The classification results with various parameter sets for the two datasets are shown in Table 1 and Table 2, respectively. From these two tables, a larger number of bins $D$ generally yields better classification performance, but with higher dimensionality of the GLAC features. Moreover, a smaller value of $\Delta r$ achieves higher classification accuracy, since nearby local gradients are supposed to be highly correlated. Therefore, $D = 7$ and $\Delta r = 4$ are chosen for the Indian Pines dataset in terms of classification accuracy and feature dimensionality. Similarly, $D = 6$ and $\Delta r = 3$ are chosen for the Pavia University dataset.

After selecting the parameter set $(D, \Delta r)$, we study the patch size for the GLAC feature extraction method. The impact of different patch sizes is investigated, and the results are presented in Figure 4. The classification performance tends to reach its maximum at $w = 21$ for both datasets. In addition, the parameters of the ELM with a radial basis function (RBF) kernel are chosen as the ones that maximize the training accuracy by means of 5-fold cross-validation in all experiments.
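As an illustration of that parameter selection step, the snippet below runs a 5-fold cross-validated grid search. Note that an RBF-kernel SVM stands in for the kernel ELM of [18] (purely because it is readily available), and the features and labels are synthetic placeholders for the real GLAC training vectors.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in features/labels; in practice these would be the
# GLAC feature vectors and class labels of the training pixels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 60))
y_train = rng.integers(0, 5, size=200)

param_grid = {"C": [1, 10, 100, 1000], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # 5-fold CV
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```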

The default parameter settings for the competing methods (Gabor features, EMAP features, and LBP features) are adopted according to [8], [21], and [14], respectively.

Δr \ D     1      2      3      4      5      6      7      8
 1       75.7   85.6   88.2   89.9   90.5   90.7   90.7   91.2
 2       76.0   84.8   88.8   90.2   90.5   90.7   91.2   91.1
 3       75.8   84.7   86.7   89.0   89.4   90.1   90.8   91.0
 4       75.1   84.1   88.0   89.6   90.2   90.6   91.6   91.6
 5       74.1   83.3   87.5   88.7   89.5   89.7   90.2   90.4
 6       73.1   83.4   87.4   88.7   88.9   89.9   90.4   91.1
 7       74.0   80.8   88.5   88.7   89.9   90.4   90.5   90.7
 8       75.2   82.5   88.9   88.8   90.7   91.2   91.1   91.4

Table 1. Classification accuracy (%) of GLAC with different parameters (D, Δr) for the Indian Pines dataset.

Δr \ D     1      2      3      4      5      6      7      8
 1       71.8   82.3   82.3   85.5   84.7   83.9   85.3   84.5
 2       75.0   82.9   83.9   85.3   84.2   85.6   85.3   85.6
 3       76.1   82.9   84.2   84.2   84.9   87.2   87.1   87.5
 4       73.8   80.3   82.1   83.6   84.0   85.9   85.8   86.2
 5       72.6   79.8   81.7   83.4   84.1   85.8   85.9   86.5
 6       72.4   77.3   80.5   82.6   83.8   85.3   85.2   85.9
 7       70.3   76.7   80.1   82.4   83.0   85.2   85.1   86.5
 8       72.1   77.2   81.7   82.0   82.8   84.9   84.7   85.9

Table 2. Classification accuracy (%) of GLAC with different parameters (D, Δr) for the Pavia University dataset.

Figure 4. Classification performance versus different patch sizes.

3.3. Results

In order to quantify the efficacy of the proposed feature extraction method, we compare it with several state-of-the-art spatial feature extraction methods for HSI classification. To avoid any bias, the classification experiment is repeated 10 times with different realizations of randomly selected training and testing samples, and the classification performance (overall accuracy (OA) and kappa coefficient of agreement (κ)) is averaged over the 10 trials. The performance of the proposed method is shown in Tables 3 and 4 for the two experimental datasets.
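For reference, OA and κ can be computed from the true and predicted labels of the test set as in the small helper below (ours, for illustration; class labels are assumed to be encoded as integers 0, ..., C-1).

```python
import numpy as np

def oa_and_kappa(y_true, y_pred, num_classes):
    """Overall accuracy and kappa coefficient from integer-encoded labels."""
    cm = np.zeros((num_classes, num_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                            # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                                        # observed agreement (OA)
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2        # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa
```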


Class                        Samples           Features
                             Train    Test     Spec    Gabor   EMAP    LBP     GLAC
Alfalfa                        6       40      52.00   97.75   95.50   98.75   95.75
Corn-notill                   30     1398      56.61   83.58   79.77   88.28   86.98
Corn-mintill                  30      800      66.00   84.78   89.46   90.48   94.64
Corn                          24      213      75.63   98.26   96.20   98.40   98.92
Grass-pasture                 30      453      88.52   93.47   92.45   94.83   95.76
Grass-trees                   30      700      92.24   95.33   99.69   96.41   96.83
Grass-pasture-mowed            3       25      76.00   88.80   92.80   94.00   98.80
Hay-windrowed                 30      448      96.56   99.96   99.87   99.91  100.00
Oats                           2       18      47.22   85.00   85.56   87.78   83.33
Soybean-notill                30      942      71.70   91.24   87.87   91.92   93.27
Soybean-mintill               30     2425      57.79   78.94   88.81   84.06   86.98
Soybean-clean                 30      563      68.42   90.83   89.48   92.33   94.37
Wheat                         22      183      98.47   98.85   99.56   98.63   99.89
Woods                         30     1235      85.55   93.72   97.58   96.49   97.14
Build-Grass-Trees-Drives      30      356      69.21   98.62   95.70   99.78   98.74
Stone-Steel-Towers            10       83      86.63   93.25   91.20   93.98   93.73
Overall Accuracy (%)                           71.09   88.27   90.72   91.36   92.62
Kappa Coefficient                              0.6741  0.8673  0.8942  0.9019  0.9160

Table 3. The classification performance for the Indian Pines dataset.

Class                        Samples           Features
                             Train    Test     Spec    Gabor   EMAP    LBP     GLAC
Asphalt                       30     6601      67.07   70.23   76.23   80.34   80.38
Meadows                       30    18619      80.78   86.73   87.49   80.92   85.20
Gravel                        30     2069      76.34   80.39   74.53   95.02   93.57
Trees                         30     3034      92.20   83.78   93.27   73.83   80.92
Painted Metal Sheets          30     1315      99.38   99.64   98.13   92.10   97.03
Bare Soil                     30     4999      70.13   78.39   88.41   94.60   96.06
Bitumen                       30     1300      90.41   88.14   95.27   96.77   95.90
Self-Blocking Bricks          30     3652      68.31   85.72   91.85   93.31   95.17
Shadows                       30      917      94.56   77.40   98.91   75.27   84.07
Overall Accuracy (%)                           78.09   82.82   86.82   84.39   87.35
Kappa Coefficient                              0.7171  0.7763  0.8289  0.8005  0.8369

Table 4. The classification performance for the Pavia University dataset.

From the results, we can see that the performance of classification with spatial features is much better than that with the spectral signatures only (Spec). For example, GLAC produces over 20% and 9% higher accuracies than Spec for the Indian Pines and Pavia University datasets, respectively. This is because spatial features take advantage of local neighborhood information: adjacent pixels in the homogeneous regions of an HSI tend to share similar characteristics and belong to the same class. Among the various spatial features, GLAC achieves the highest classification accuracies for both datasets, which demonstrates that the GLAC features exhibit more discriminative power than the other features.

Figure 5 provides a visual inspection of the classification maps generated using different features for the Indian Pines dataset. As shown in this figure, the classification maps of the spatial-feature-based methods are less noisy and more accurate than that of the pixel-wise classification method (i.e., Spec).

We also report the computational complexity of different feature extraction methods on the Indian Pines dataset in Table 5.

Figure 5. Thematic maps resulting from classification for the Indian Pines dataset. (a) Ground truth map. (b) Spec: 69.77%. (c) Gabor: 87.41%. (d) EMAP: 91.15%. (e) LBP: 91.21%. (f) GLAC: 92.78%.

Experiments were carried out using MATLAB on an Intel i7 quad-core 3.4 GHz desktop computer with 8 GB of RAM. Although GLAC has the highest computational cost, it should be noted that GLAC feature extraction is performed independently on each PC, which means that the feature extraction can be parallelized. Thus, the speed of GLAC feature extraction over the PCs can be greatly improved.
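As an illustration of that parallelization, the sketch below distributes the per-PC extraction over worker processes with Python's standard concurrent.futures; extract_from_pc is a placeholder standing in for the GLAC extraction of Section 2.1.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def extract_from_pc(pc_image):
    # Placeholder: in the real pipeline this would compute the GLAC features
    # of every pixel's local patch in one principal-component image.
    return float(np.asarray(pc_image).mean())

def parallel_pc_features(pc_images):
    """Run the per-PC feature extraction in separate worker processes."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(extract_from_pc, pc_images))
```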


Features            Processing time (s)
GLAC (proposed)     15.21
EMAP                 1.12
LBP                  3.69
Gabor                2.17

Table 5. Processing times of different feature extraction methods on the Indian Pines dataset.

4. Conclusion and future work

In this paper, a spatial feature extraction method based on auto-correlations of local image gradients was proposed for hyperspectral imagery (HSI) classification. The gradient local auto-correlations (GLAC) features utilize the spatial and orientational auto-correlations of local gradients to describe the rich texture information in hyperspectral images. The experimental results on two standard datasets demonstrated the superior performance of GLAC over several state-of-the-art spatial feature extraction methods.

Although the proposed GLAC feature extraction method provides effective classification results, we believe that there is room for further improvement. In future work, we plan to extend GLAC to a 3D version (similar to a 3D Gabor filter or a 3D wavelet), thereby extracting features directly from a 3D hyperspectral image cube.

Acknowledgement

We acknowledge the support of the Natural Science Foundation of China under Contracts 61272052 and 61473086, the Program for New Century Excellent Talents of the University of Ministry of Education of China, and the key program of Hubei provincial department of education under grant D2014602.

References

[1] C. Chen, W. Li, E. W. Tramel, and J. E. Fowler, "Reconstruction of hyperspectral imagery from random projections using multihypothesis prediction," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 1, pp. 365-374, January 2014.

[2] Y. Tarabalka, J. A. Benediktsson, and J. Chanussot, “Spectral–spatial classification of hyperspectral imagery based on partitional clustering techniques,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 8, pp. 2973-2987, August 2009.

[3] H. Su, B. Yong, P. Du, H. Liu, C. Chen, and K. Liu, “Dynamic classifier selection using spectral-spatial information for hyperspectral image classification,” Journal of Applied Remote Sensing, vol. 8, no. 1, pp. 085095, August 2014.

[4] C. Chen, W. Li, E. W. Tramel, M. Cui, S. Prasad, and J. E. Fowler, “Spectral-spatial preprocessing using multihypothesis prediction for noise-robust hyperspectral image classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 4, pp. 1047-1059, April 2014.

[5] C. Chen, E. W. Tramel, and J. E. Fowler, “Compressed-sensing recovery of images and video using multihypothesis predictions,” Proceedings of the 45th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 2011, pp. 1193-1198.

[6] C. Chen, and J. E. Fowler, “Single-image super-resolution using multihypothesis prediction,” Proceedings of the 46th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 2012, pp. 608-612.

[7] Y. Qian, M. Ye, and J. Zhou, “Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4, pp. 2276-2291, Apr. 2013.

[8] W. Li, and Q. Du, “Gabor-filtering based nearest regularized subspace for hyperspectral image classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 4, pp. 1012–1022, Apr. 2014.

[9] C. Chen, W. Li, H. Su, and K. Liu, “Spectral-spatial classification of hyperspectral image based on kernel extreme learning machine,” Remote Sensing, vol. 6, no. 6, pp. 5795-5814, June 2014.

[10] M. Pesaresi, and J. Benediktsson, “A new approach for the morphological segmentation of high resolution satellite imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 2, pp. 309-320, Feb. 2001.

[11] J. A. Benediktsson, J. A. Palmason, and J. Sveinsson, “Classification of hyperspectral data from urban areas based on extended morphological profiles,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 480-491, March 2005.

[12] M. Dalla Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, “Morphological attribute profiles for the analysis of very high resolution images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 10, pp. 3747-3762, Oct. 2010.

[13] M. D. Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, “Extended profiles with morphological attribute filters for the analysis of hyperspectral data,” Int. J. Remote Sens., vol. 31, no. 22, pp. 5975-5991, Jul. 2010.

[14] W. Li, C. Chen, H. Su, and Q. Du, “Local binary patterns for spatial-spectral classification of hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3681-3693, July 2015.

[15] T. Kobayashi, and N. Otsu, “Image feature extraction using gradient local auto-correlation,” in ECCV 2008, Part I, vol. 5302, 2008, pp. 346-358.

[16] J. A. Richards, and X. Jia, Remote Sensing Digital Image Analysis: An Introduction. Berlin, Germany: Springer-Verlag, 2006.

[17] J. Ren, J. Zabalza, S. Marshall, and J. Zheng, “Effective Feature Extraction and Data Reduction in Remote Sensing Using Hyperspectral Imaging,” IEEE Signal Processing Magazine, vol. 31, no. 4, pp. 149-154, July 2014.

[18] G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern., Part B: Cybern., vol. 42, no. 2, pp. 513-529, Apr. 2012.

[19] C. Chen, R. Jafari, and N. Kehtarnavaz, “Action Recognition from Depth Sequences Using Depth Motion Maps-based Local Binary Patterns,” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Waikoloa Beach, HI, January 2015, pp. 1092-1099.

[20] http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes

[21] J. Li, P. R. Marpu, A. Plaza, J. M. Bioucas-Dias, and J. A. Benediktsson, “Generalized composite kernel framework for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 9, pp. 4816-4829, Sep. 2013.

[22] T. Kobayashi, and N. Otsu, “Motion recognition using local auto-correlation of space–time gradients,” Pattern Recognition Letters, vol. 33, no. 9, pp. 1188-1195, July 2012.

[23] T-K. Tran, N-N. Bui, and J-Y. Kim, “Human detection in video using poselet combine with gradient local auto correlation classifier,” Proceedings of 2014 International Conference on IT Convergence and Security, Beijing, China, Oct. 2014, pp. 1-4.