# Covariance Matrix Applications

Transcripts
Slide 1

Covariance Matrix Applications: Dimensionality Reduction

Slide 2

Outline: What is the covariance matrix? Example. Properties of the covariance matrix. Spectral Decomposition. Principal Component Analysis.

Slide 3

Covariance Matrix: The covariance matrix captures the variance and linear relationships in multivariate/multidimensional data. If the data is an N x D matrix, the covariance matrix is a D x D square matrix. Think of N as the number of data instances (rows) and D as the number of attributes (columns).

Slide 4

Covariance Formula: Let Data be an N x D matrix with column means μ_1, …, μ_D. Then Cov(Data) is the D x D matrix whose (i, j) entry is the average of (Data_ni − μ_i)(Data_nj − μ_j) over the N rows.
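As a sketch of the formula above, the snippet below computes the sample covariance of a small, made-up N x D data matrix by hand and checks it against NumPy's `np.cov` (which, like the manual version here, divides by N − 1; the slide's formula may instead use 1/N):

```python
import numpy as np

# Toy data: N = 5 instances (rows), D = 3 attributes (columns).
data = np.array([
    [2.0, 1.0, 4.0],
    [3.0, 2.0, 2.0],
    [4.0, 1.5, 5.0],
    [5.0, 3.0, 1.0],
    [6.0, 2.5, 3.0],
])
N, D = data.shape

# Manual covariance: center each column, then average the outer products.
centered = data - data.mean(axis=0)
cov_manual = centered.T @ centered / (N - 1)   # D x D sample covariance

# np.cov treats rows as variables by default, hence rowvar=False for N x D data.
cov_np = np.cov(data, rowvar=False)

print(cov_manual.shape)                  # (3, 3): a D x D square matrix
print(np.allclose(cov_manual, cov_np))   # True
```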

Slide 5

Example: Cov(R)

Slide 6

Moral: covariance can only capture linear relationships.
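A small illustration of this moral: y = x² is a perfect (but nonlinear) dependence, yet its covariance with x is exactly zero when x is symmetric about 0, so covariance alone would miss the relationship entirely.

```python
import numpy as np

# x symmetric about 0, y a deterministic nonlinear function of x.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2

# Population covariance: mean of centered products.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(cov_xy)  # 0.0 -- covariance sees no (linear) relationship at all
```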

Slide 7

Dimensionality Reduction: If you work in data analysis, it is common these days to be given a data set with many variables (dimensions). The information in these variables is often redundant: there are only a few sources of genuine information. Question: how can we identify these sources automatically?

Slide 8

Hidden Sources of Variance (diagram: observed variables X1, X2, X3, X4 linked to hidden sources H1, H2). Model: hidden sources are linear combinations of the original variables.

Slide 9

Hidden Sources: If the information the observed variables provided were distinct, the covariance matrix between the variables would be a diagonal matrix, i.e., the non-zero entries appear only on the diagonal. In particular, if H_i and H_j are independent then E[(H_i − μ_i)(H_j − μ_j)] = 0.

Slide 10

Hidden Sources: So the question is what the hidden sources should be. It turns out that the "best" hidden sources are the eigenvectors of the covariance matrix. If A is a d x d matrix, then (λ, x) is an eigenvalue-eigenvector pair if Ax = λx.

Slide 11

Explanation: We have two axes, X1 and X2. We want to project the data along the direction of maximum variance.

Slide 12

Covariance Matrix Properties: The covariance matrix is symmetric. Non-negative eigenvalues: 0 ≤ λ_1 ≤ λ_2 ≤ … ≤ λ_d, with corresponding eigenvectors u_1, u_2, …, u_d.
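These two properties (symmetry and non-negative eigenvalues, i.e. positive semi-definiteness) can be checked directly on any covariance matrix; random data is used here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(100, 4))     # N = 100 instances, D = 4 attributes

cov = np.cov(data, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)    # eigenvalues of a symmetric matrix, ascending

print(np.allclose(cov, cov.T))       # True: symmetric
print(np.all(eigvals >= -1e-12))     # True: non-negative, up to rounding error
```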

Slide 13

Principal Component Analysis: Also known as Singular Value Decomposition and Latent Semantic Indexing. A technique for data reduction: essentially, reduce the number of columns while losing minimal information. Also think of it in terms of lossy compression.
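The whole pipeline above can be sketched in a few lines: center the data, eigendecompose its covariance matrix, and project onto the top-k eigenvectors. The `pca` helper and the synthetic data below are illustrative, not part of the original slides.

```python
import numpy as np

def pca(data, k):
    """Project N x D data onto the top-k eigenvectors of its covariance matrix."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalue order
    components = eigvecs[:, ::-1][:, :k]     # reorder: top-k directions first
    return centered @ components             # N x k reduced representation

# 5 observed columns generated from only 2 hidden sources.
rng = np.random.default_rng(2)
sources = rng.normal(size=(200, 2))
data = sources @ rng.normal(size=(2, 5))

reduced = pca(data, k=2)
print(reduced.shape)  # (200, 2): same rows, far fewer columns
```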

Slide 14

Motivation: The bulk of data has a time component. For example, retail transactions or stock prices. The data set can be organized as an N x M table: N customers and the cost of the calls they made on each of 365 days. M << N.

Slide 15

Objective: Compress the data matrix X into Xc, such that the compression ratio is high and the mean error between the original and the compressed matrix is low. N could be on the order of millions and M on the order of hundreds.

Slide 16

Example database

Slide 17

Decision Support Queries: What was the amount of sales to GHI on July 11? Find the total sales to business customers for the week ending July 12.

Slide 18

Intuition behind SVD (figure: original axes x, y and rotated axes x', y'). Customers are 2-D points.

Slide 19

SVD Definition: An N x M matrix X can be expressed as X = U Λ V^T, where Λ is a diagonal r x r matrix.
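This factorization can be demonstrated directly with NumPy's `np.linalg.svd`, which returns the diagonal of Λ as a 1-D array of singular values in decreasing order (the matrix here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 4))                 # an N x M matrix, N = 6, M = 4

# full_matrices=False gives the compact form: U is N x r, Vt is r x M.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

print(np.all(np.diff(s) <= 0))              # True: singular values are decreasing
print(np.allclose(U @ np.diag(s) @ Vt, X))  # True: X = U Lambda V^T exactly
```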

Slide 20

SVD Definition: More importantly, X can be written as a sum of rank-one terms λ_i u_i v_i^T, where the λ_i are in decreasing order and k ≤ r.

Slide 21

Example

Slide 22

Compression: keep only the top k terms of the sum, where k ≤ r ≤ M.
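A sketch of rank-k compression on synthetic data (the `compress` helper and all numbers are illustrative): the matrix below is built from 3 hidden sources plus tiny noise, so keeping k = 3 terms stores k(N + M + 1) numbers instead of N·M while losing almost nothing.

```python
import numpy as np

def compress(X, k):
    """Rank-k approximation of X: keep the top k singular triplets."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(4)
# 50 x 10 matrix with rank-3 structure plus small noise.
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 10)) + 0.01 * rng.normal(size=(50, 10))

Xc = compress(X, k=3)
rel_err = np.linalg.norm(X - Xc) / np.linalg.norm(X)
print(rel_err < 0.05)  # True: tiny relative error from far fewer stored numbers
```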
