2.8. Multiscale Principal Component Analysis
Multiscale PCA (MSPCA) combines the ability of PCA to extract the cross-correlation between variables with the ability of wavelets to separate deterministic features from stochastic processes and to approximately decorrelate the autocorrelation among the measurements. Figure 2.3 illustrates the MSPCA procedure.

Figure 2.3. The MSPCA procedure.
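Figure 2.3 itself is not reproduced in this excerpt, but the kind of workflow it depicts can be sketched in code. The following is a minimal illustration of a standard MSPCA-style denoising pass, not necessarily the author's exact procedure: it assumes a Haar wavelet, a dyadic number of observations, and a simple rule of retaining the first k principal components of the wavelet coefficients at each scale (all function names are illustrative):

```python
import numpy as np

def haar_W(n):
    """Orthonormal Haar wavelet transform operator W = [H_L; G_L; ...; G_1]."""
    L = int(np.log2(n))
    rows = [np.full(n, 2.0 ** (-L / 2))]          # H_L: coarsest scaling row
    for m in range(L, 0, -1):                     # G_L down to G_1
        amp = 2.0 ** (-m / 2)
        for j in range(n // 2 ** m):
            row = np.zeros(n)
            row[j * 2 ** m : j * 2 ** m + 2 ** (m - 1)] = amp
            row[j * 2 ** m + 2 ** (m - 1) : (j + 1) * 2 ** m] = -amp
            rows.append(row)
    return np.vstack(rows)

def pca_filter(block, k):
    # Keep only the first k principal components of a coefficient block.
    _, _, Vt = np.linalg.svd(block, full_matrices=False)
    P = Vt[:k].T                  # loadings (variables x k)
    return block @ P @ P.T        # rank-k approximation of the block

def mspca_denoise(X, k=1):
    n = X.shape[0]
    L = int(np.log2(n))
    W = haar_W(n)
    WX = W @ X                    # wavelet coefficients, column by column
    out = np.empty_like(WX)
    out[0] = WX[0]                # keep the coarse approximation as-is
    start = 1
    for m in range(L, 0, -1):     # PCA-filter each scale's row block
        r = n // 2 ** m
        out[start:start + r] = pca_filter(WX[start:start + r], k)
        start += r
    return W.T @ out              # invert the orthonormal transform

X = np.random.default_rng(2).normal(size=(16, 4))
Xhat = mspca_denoise(X, k=1)
print(Xhat.shape)  # (16, 4)
```

Because W is orthonormal, reconstruction is simply W transposed applied to the filtered coefficients; keeping every component (k equal to the number of variables) returns X unchanged.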

To combine the benefits of PCA and wavelets, the measurements of each variable are decomposed into their wavelet coefficients, using the same wavelet for every variable. This transformation brings the data matrix X into a matrix WX, where W is an n × n orthonormal matrix representing the orthonormal wavelet transformation operator and containing the filter coefficients:

\[
W = \begin{bmatrix} H_L \\ G_L \\ G_{L-1} \\ \vdots \\ G_1 \end{bmatrix}
  = \begin{bmatrix}
h_{L,1} & h_{L,2} & \cdots & \cdots & \cdots & h_{L,n} \\
g_{L,1} & g_{L,2} & \cdots & \cdots & \cdots & g_{L,n} \\
g_{L-1,1} & \cdots & g_{L-1,n/2} & 0 & \cdots & 0 \\
0 & \cdots & 0 & g_{L-1,n/2+1} & \cdots & g_{L-1,n} \\
\vdots & & & & & \vdots \\
g_{1,1} & g_{1,2} & 0 & \cdots & \cdots & 0 \\
& & \ddots & \ddots & & \\
0 & \cdots & \cdots & 0 & g_{1,n-1} & g_{1,n}
\end{bmatrix}
\]

where G_m is the (n/2^m) × n matrix containing the wavelet (high-pass) filter coefficients corresponding to scale m = 1, 2, ..., L, and H_L is the matrix of scaling function (low-pass) filter coefficients at the coarsest scale. The matrix WX has the same size as the input data matrix X, but after wavelet decomposition the deterministic component of each variable in X is concentrated in a relatively small number of coefficients in WX, while the stochastic component of each variable is approximately decorrelated in WX and is spread over all components according to its power spectrum. Theorem 1 gives the relation between the PCA of X and that of WX.
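To make the structure of W concrete, the following sketch builds the operator for the Haar filters h = (1/√2, 1/√2) and g = (1/√2, −1/√2) with n = 8 (so L = 3); the wavelet choice and dimensions are assumptions for illustration only:

```python
import numpy as np

def haar_W(n):
    """Orthonormal Haar wavelet transform operator W = [H_L; G_L; ...; G_1].

    H_L holds the scaling (low-pass) filter row at the coarsest scale;
    each G_m holds the n / 2**m wavelet (high-pass) filter rows at scale m.
    """
    L = int(np.log2(n))
    rows = [np.full(n, 2.0 ** (-L / 2))]          # H_L: coarsest scaling row
    for m in range(L, 0, -1):                     # G_L down to G_1
        amp = 2.0 ** (-m / 2)
        for j in range(n // 2 ** m):
            row = np.zeros(n)
            row[j * 2 ** m : j * 2 ** m + 2 ** (m - 1)] = amp
            row[j * 2 ** m + 2 ** (m - 1) : (j + 1) * 2 ** m] = -amp
            rows.append(row)
    return np.vstack(rows)

n = 8
W = haar_W(n)
X = np.random.default_rng(0).normal(size=(n, 3))  # n observations, 3 variables
WX = W @ X                                        # wavelet coefficients, column-wise

print(np.allclose(W @ W.T, np.eye(n)))  # True: W is orthonormal
print(WX.shape == X.shape)              # True: WX has the same size as X
```

The nonzero supports of the rows reproduce the band structure of the matrix above: one full-support row per filter at the coarsest scale, down to length-2 supports at scale m = 1.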
Theorem 1. The principal component loadings found by the PCA of...
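The statement of Theorem 1 is truncated in this excerpt. One consequence of W being orthonormal can nevertheless be verified directly: (WX)^T (WX) = X^T W^T W X = X^T X, so X and WX have the same scatter matrix and therefore the same principal component loadings (up to sign). A quick numerical check, again assuming a Haar operator for W:

```python
import numpy as np

def haar_W(n):
    # Orthonormal Haar wavelet transform operator W = [H_L; G_L; ...; G_1].
    L = int(np.log2(n))
    rows = [np.full(n, 2.0 ** (-L / 2))]
    for m in range(L, 0, -1):
        amp = 2.0 ** (-m / 2)
        for j in range(n // 2 ** m):
            row = np.zeros(n)
            row[j * 2 ** m : j * 2 ** m + 2 ** (m - 1)] = amp
            row[j * 2 ** m + 2 ** (m - 1) : (j + 1) * 2 ** m] = -amp
            rows.append(row)
    return np.vstack(rows)

n = 16
X = np.random.default_rng(1).normal(size=(n, 4))  # n observations, 4 variables
WX = haar_W(n) @ X

# Orthonormality of W preserves the scatter matrix ...
print(np.allclose(X.T @ X, WX.T @ WX))            # True

# ... and hence the PCA loadings (eigenvectors), up to sign.
_, V_x = np.linalg.eigh(X.T @ X)
_, V_wx = np.linalg.eigh(WX.T @ WX)
print(np.allclose(np.abs(V_x), np.abs(V_wx)))     # True
```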

[...]

...means and become familiar with K-means clustering and its uses. We then closed that part with a different clustering method, the K-nearest-neighbors (KNN) algorithm, which is also discussed in this chapter. KNN is simple to implement and to program, and it is one of the oldest data clustering techniques; many applications already exist for it, and their number is still growing. PCA is also discussed in this chapter as a method for dimension reduction, followed by the discrete wavelet transform. The next chapter presents the combination of PCA and DWT, which is useful for de-noising. In this study, we have also examined neural network structure and modeling, which are in wide use today; backpropagation is one of the most common methods for training neural networks. For the last model, we discussed the autoregressive model and the strategies for choosing a model order.
