# 11 - Markov Chains.

Transcripts
Slide 1

11 - Markov Chains. Jim Vallandingham

Slide 2

Outline
- Irreducible Markov Chains
- Outline of Proof of Convergence to Stationary Distribution
- Convergence Example
- Reversible Markov Chains
- Monte Carlo Methods
- Hastings-Metropolis Algorithm
- Gibbs Sampling
- Simulated Annealing
- Absorbing Markov Chains

Slide 3

Stationary Distribution. As n approaches infinity, each row of P^n converges to the stationary distribution.

Slide 4

Stationary Dist. Illustration

Slide 5

Stationary Dist. Example. Long-term averages: 24% of time spent in state E1, 39% of time spent in state E2, 21% of time spent in state E3, 17% of time spent in state E4.

Slide 6

Stationary Distribution. Any finite, aperiodic, irreducible Markov chain will converge to a stationary distribution, regardless of the initial distribution. The outline of the proof requires linear algebra (Appendix B.19).
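This convergence claim is easy to check numerically. A minimal sketch in NumPy, using a small transition matrix that is an assumption for illustration (not one from the slides): two very different initial distributions are pushed through P^50 and land on the same stationary distribution.

```python
import numpy as np

# A hypothetical 3-state transition matrix (rows sum to 1). Any finite,
# aperiodic, irreducible chain would do; this matrix is an assumed example.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Two very different initial distributions.
mu1 = np.array([1.0, 0.0, 0.0])
mu2 = np.array([0.0, 0.0, 1.0])

# After many steps, both converge to the same stationary distribution.
Pn = np.linalg.matrix_power(P, 50)
print(mu1 @ Pn)   # approximately the stationary distribution
print(mu2 @ Pn)   # the same vector, regardless of the starting point
```

Multiplying either result by P again leaves it unchanged, which is exactly the stationarity property.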

Slide 7

L.A.: Eigenvalues. Let P be an s x s matrix. P has s eigenvalues, found as the s solutions of det(P - λI) = 0. Assume all eigenvalues of P are distinct.

Slide 8

L.A.: Left & right eigenvectors. Corresponding to each eigenvalue λ_i is a right eigenvector v_i and a left eigenvector u_i', for which P v_i = λ_i v_i and u_i' P = λ_i u_i'. Assume they are normalized so that u_i' v_i = 1.

Slide 9

L.A.: Spectral Expansion. We can express P in terms of its eigenvectors and eigenvalues: P = Σ_i λ_i v_i u_i'. This is called a spectral expansion of P.

Slide 10

L.A.: Spectral Expansion. If λ is an eigenvalue of P with corresponding left and right eigenvectors u' and v, then λ^n is an eigenvalue of P^n with the same left and right eigenvectors u' and v.

Slide 11

L.A.: Spectral Expansion. This implies that the spectral expansion of P^n can be written as: P^n = Σ_i λ_i^n v_i u_i'.

Slide 12

Outline of Proof. Going back to the proof: P is the transition matrix for a finite, aperiodic, irreducible Markov chain. P has one eigenvalue equal to 1; all other eigenvalues have absolute value < 1.

Slide 13

Outline of Proof. Choosing left and right eigenvectors for λ_1 = 1. Requirements: the right eigenvector is v_1 = (1, 1, ..., 1)', since each row of P sums to 1; the left eigenvector u_1' is a probability vector (entries sum to 1), which also satisfies the normalization u_1' v_1 = 1 (the definition of a left eigenvector for eigenvalue 1).

Slide 14

Outline of Proof. Also: it can be shown that there is a unique solution of u_1' P = u_1' that also satisfies u_1' 1 = 1. This is the same condition satisfied by the stationary distribution, so u_1' = π'.

Slide 15

Outline of Proof. P^n gives the n-step transition probabilities. The spectral expansion of P^n is: P^n = Σ_i λ_i^n v_i u_i'. Only one eigenvalue equals 1; the rest have absolute value < 1, so as n increases their terms vanish and P^n approaches v_1 u_1' = 1 π'.
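As a sanity check on this step, the spectral expansion can be reconstructed numerically. The 2x2 matrix below is an assumed example, not the one from the slides: rebuilding P^n from eigenvalues and eigenvectors matches direct matrix powering, and the λ = 1 term alone gives the limit 1 π'.

```python
import numpy as np

# A hypothetical row-stochastic matrix (an assumption, not from the slides).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Right eigenvectors are the columns of V; the rows of inv(V) are the
# left eigenvectors, automatically normalized so that u_i' v_i = 1.
lam, V = np.linalg.eig(P)
U = np.linalg.inv(V)

# Spectral expansion: P^n = sum_i lam_i^n * v_i u_i'
n = 20
Pn_spectral = sum(lam[i] ** n * np.outer(V[:, i], U[i]) for i in range(2))
print(np.allclose(Pn_spectral, np.linalg.matrix_power(P, n)))  # True

# As n grows, only the lam = 1 term survives: P^n -> v_1 u_1' = 1 pi'
k = int(np.argmax(lam.real))                # index of eigenvalue 1
limit = np.real(np.outer(V[:, k], U[k]))
print(limit)   # each row is the stationary distribution
```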

Slide 16

Convergence Example

Slide 17

Convergence Example Has Eigenvalues of :

Slide 18

Convergence Example. Has eigenvalues of: all less than 1 in absolute value, apart from λ_1 = 1.

Slide 19

Convergence Example. Left & right eigenvectors satisfying the normalization.

Slide 20

Convergence Example. Left & right eigenvectors satisfying the normalization; the left eigenvector for λ = 1 is the stationary distribution.

Slide 21

Convergence Example. Spectral expansion: as n grows, the terms with |λ| < 1 go to 0, leaving the stationary distribution.

Slide 22

Reversible Markov Chains

Slide 23

Reversible Markov Chains. Typically we move forward in "time" in a Markov chain: 1 → 2 → 3 → … → t. What about going in reverse in this chain: t → t-1 → t-2 → … → 1?

Slide 24

Reversible Markov Chains. (Diagram: an ancestor splitting into Species A and Species B, with "forward in time" and "back in time" directions along the tree.)

Slide 25

Reversible Markov Chains. Take a finite, irreducible, aperiodic Markov chain with stationary distribution π. During t steps, the chain will travel through states X_1, X_2, ..., X_t. Reverse chain: define Y_k = X_(t-k+1); then the reverse chain will travel through the same states in the opposite order.

Slide 26

Reversible Markov Chains. We want to show the process determining the reverse-chain sequence is also a Markov chain. Its typical element is found from the typical element of P, using: p*_ij = π_j p_ji / π_i.

Slide 27

Reversible Markov Chains. This is shown by using Bayes' rule to reverse the conditional probability. Intuitively: the future is independent of the past, given the present; and the past is independent of the future, given the present.

Slide 28

Reversible Markov Chains. The stationary distribution of the reverse chain is still π. This follows from the stationary distribution property π' P = π'.

Slide 29

Reversible Markov Chains. A Markov chain is said to be reversible if the reverse chain has the same transition matrix as the original. This only holds if π_i p_ij = π_j p_ji for all i, j (detailed balance).
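A short sketch of the reverse-chain construction, using assumed example matrices (not from the slides): the reverse chain is built from q_ij = π_j p_ji / π_i; a birth-death chain turns out to be reversible (detailed balance holds), while a cyclic chain does not.

```python
import numpy as np

def stationary(P):
    # Left eigenvector of P for eigenvalue 1, normalized to sum to 1.
    lam, U = np.linalg.eig(P.T)
    pi = np.real(U[:, np.argmax(np.real(lam))])
    return pi / pi.sum()

def reverse_chain(P):
    # Reverse-chain transitions: q_ij = pi_j * p_ji / pi_i.
    pi = stationary(P)
    return (P.T * pi) / pi[:, None]

# A birth-death chain (assumed example): detailed balance holds, so the
# reverse chain equals the original -- the chain is reversible.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(np.allclose(reverse_chain(P), P))   # True

# A cyclic chain (assumed example) is not reversible: running it backward
# means going around the cycle the other way.
C = np.array([[0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9],
              [0.9, 0.1, 0.0]])
print(np.allclose(reverse_chain(C), C))   # False
```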

Slide 30

Monte Carlo Methods

Slide 31

Markov Chain Monte Carlo. A class of algorithms for sampling from probability distributions. They involve building a Markov chain whose stationary distribution is the one we want. The state of the chain after a large number of steps is used as a sample of the desired distribution. We examine 2 algorithms: Gibbs sampling and simulated annealing.

Slide 32

Basic Problem. Find a transition matrix P such that its stationary distribution is the target distribution. We know the Markov chain will converge to its stationary distribution, regardless of the initial distribution. How can we find such a P whose stationary distribution is the target distribution?

Slide 33

Basic Idea. Construct a transition matrix Q, the "candidate-generating matrix". Modify it to have the correct stationary distribution. The modification involves inserting factors a_ij so that p_ij = q_ij a_ij. There are various approaches to choosing the a's.

Slide 34

Hastings-Metropolis. Goal: build an aperiodic, irreducible Markov chain having the prescribed stationary distribution. This produces a correlated sequence of draws from a target density that may be hard to sample using a classical independence method.

Slide 35

Hastings-Metropolis Process: choose a set of constants a_ij (the standard choice is a_ij = min(1, π_j q_ji / (π_i q_ij))) such that 0 < a_ij ≤ 1, and define p_ij = q_ij a_ij for i ≠ j. With probability a_ij, accept the state change; otherwise reject it, in which case the chain doesn't change value.

Slide 36

Hastings-Metropolis Example = (.4 .6) Q =

Slide 37

Hastings-Metropolis Example = (.4 .6) Q = P=

Slide 38

Hastings-Metropolis Example = (.4 .6) P= P 2 = P 50 =
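The slides' Q and P matrices aren't captured in the transcript, so the sketch below assumes a symmetric candidate-generating matrix Q and builds P from it with the Hastings-Metropolis factors; the rows of P^50 then recover the target π = (0.4, 0.6).

```python
import numpy as np

pi = np.array([0.4, 0.6])          # target distribution from the slide

# The slide's Q isn't in the transcript; assume a symmetric
# candidate-generating matrix for illustration.
Q = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Hastings-Metropolis: p_ij = q_ij * a_ij with
# a_ij = min(1, pi_j q_ji / (pi_i q_ij)), rejected mass folded into p_ii.
s = len(pi)
P = np.zeros((s, s))
for i in range(s):
    for j in range(s):
        if i != j:
            P[i, j] = Q[i, j] * min(1.0, pi[j] * Q[j, i] / (pi[i] * Q[i, j]))
    P[i, i] = 1.0 - P[i].sum()

print(np.linalg.matrix_power(P, 50))   # each row is approximately (0.4, 0.6)
```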

Slide 39

Algorithmic Description. Start with state E_1, then iterate: propose E' from q(E_t, E'); calculate the ratio a = π(E') q(E', E_t) / (π(E_t) q(E_t, E')). If a > 1, accept: E_(t+1) = E'. Else accept with probability a. If rejected, E_(t+1) = E_t.
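The loop above can be sketched as a generic sampler. The target and proposal below (a standard normal density with a random-walk proposal) are assumptions for illustration, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(target, propose, q_density, x0, n_steps):
    """Hastings-Metropolis sampler following the slide's loop.

    target:    unnormalized target density pi(.)
    propose:   draws a candidate E' given the current state E_t
    q_density: proposal density q(E, E')
    """
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = propose(x)
        # Ratio a = pi(E') q(E', E_t) / (pi(E_t) q(E_t, E'))
        a = (target(x_new) * q_density(x_new, x)) / (target(x) * q_density(x, x_new))
        if a > 1 or rng.random() < a:
            x = x_new              # accept
        samples.append(x)          # if rejected, E_(t+1) = E_t
    return np.array(samples)

# Hypothetical usage: sample a standard normal with a symmetric proposal,
# for which q cancels in the ratio.
target = lambda x: np.exp(-0.5 * x * x)
propose = lambda x: x + rng.normal(0.0, 1.0)
q_density = lambda x, y: 1.0
draws = metropolis_hastings(target, propose, q_density, 0.0, 20000)
print(draws[1000:].mean(), draws[1000:].std())   # roughly 0 and 1
```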

Slide 40

Gibbs Sampling

Slide 41

Gibbs Sampling Definitions. Let Y = (Y_1, ..., Y_d) be the random vector, and let p(y) be the distribution of Y. We define a Markov chain whose states are the possible values of Y.

Slide 42

Gibbs Sampling Process. Enumerate the vectors in some order 1, 2, ..., s, and associate vector j with the jth state of the chain. Transition probabilities: p_ij = 0 if vectors i and j differ by more than 1 component; if they differ by at most 1 component, p_ij is proportional to the conditional probability of the differing component given the rest.

Slide 43

Gibbs Sampling. Assume a joint distribution p(X, Y), and that we are looking to sample k values of X. Begin with a value y_0. Sample x_i using p(X | Y = y_(i-1)). Once x_i is found, use it to sample y_i from p(Y | X = x_i). Repeat k times.
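A minimal sketch of this two-variable loop, assuming a bivariate normal target with correlation rho = 0.8 (an assumed example, not from the slides); its full conditionals are univariate normals, so each step is an easy draw.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bivariate normal with correlation rho: the full conditionals are
# X | Y=y ~ N(rho*y, 1 - rho^2) and Y | X=x ~ N(rho*x, 1 - rho^2).
rho = 0.8
k = 20000
x, y = 0.0, 0.0
xs, ys = [], []
for _ in range(k):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))   # x_i ~ p(X | Y = y_{i-1})
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))   # y_i ~ p(Y | X = x_i)
    xs.append(x)
    ys.append(y)

xs, ys = np.array(xs), np.array(ys)
print(np.corrcoef(xs, ys)[0, 1])   # roughly rho
```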

Slide 44

Visual Example

Slide 45

Gibbs Sampling allows us to work with univariate conditional distributions instead of a complex joint distribution. The chain has a stationary distribution equal to the target distribution.

Slide 46

Why is Gibbs sampling a case of Hastings-Metropolis? If we define the candidate-generating probabilities q as the full conditionals, we can see that for Gibbs the acceptance probability a is always 1.

Slide 47

Simulated Annealing

Slide 48

Simulated Annealing. Goal: find the (approximate) minimum of some positive function, defined on an extremely large number of states s, and to find those states where this function is minimized. The value of the function for state E_j is f(E_j).

Slide 49

Simulated Annealing Process. Construct a neighborhood of every state: a set of states "close" to that state. The variable in the Markov chain can move to a neighbor in one step; moves outside the neighborhood are not permitted.

Slide 50

Simulated Annealing. Requirements of neighborhoods: if E_m is in the neighborhood of E_j, then E_j is in the neighborhood of E_m. The number of states in a neighborhood (N) is independent of the state. Neighborhoods are connected, so the chain can eventually get from any E_j to any E_m. If in state E_j, the next move must be within the neighborhood of E_j.

Slide 51

Simulated Annealing. Uses a positive parameter T. The aim is to have the stationary probability of each Markov chain state be π_j ∝ e^(-f(E_j)/T), with the constant of proportionality ensuring the probabilities sum to 1. The chain should visit states often enough to allow those with low values of f(·) to become prominent.

Slide 52

Simulated Annealing

Slide 53

Simulated Annealing. Large T values: all states in the current state's neighborhood are picked with approximately equal probability, and the stationary distribution of the chain tends to be uniform. Small T values: different states in a neighborhood have vastly different stationary probabilities; too small a T may get the chain stuck in a local minimum.

Slide 54

Simulated Annealing. The art is in picking the value of T: we want fast movement from one neighborhood to the next (large T), but also to pick out the states in each neighborhood with large stationary probabilities (small T).
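The whole procedure can be sketched in a few lines. The state space, the function f (minimum at state 42), the neighborhood structure, and the cooling schedule below are all assumptions for illustration, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed example: integer states 0..99, f positive with its minimum at 42.
f = lambda j: 1.0 + (j - 42) ** 2 / 100.0

def anneal(n_steps=20000, T0=10.0, cooling=0.9997):
    state, T = 0, T0
    for _ in range(n_steps):
        # Neighborhood: the two adjacent states (wrapping keeps the
        # neighborhood size N the same for every state).
        neighbor = (state + rng.choice([-1, 1])) % 100
        # Accept with probability min(1, e^{-(f(E') - f(E)) / T}):
        # downhill moves are always accepted, uphill moves sometimes.
        if rng.random() < np.exp(-(f(neighbor) - f(state)) / T):
            state = neighbor
        T *= cooling           # slowly lower the temperature
    return state

print(anneal())   # typically close to 42
```

Starting with a large T lets the chain roam between neighborhoods; as T shrinks, the stationary distribution e^(-f/T) concentrates on the low-f states, which is exactly the trade-off described above.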

Slide 55

SA Example

Slide 56

Absorbing Markov Chains

Slide 57

Absorbing Markov Chains. Absorbing state: a state which is impossible to leave, p_ii = 1. Transient state: a non-absorbing state in an absorbing chain.

Slide 58

Absorbing Markov Chains. Questions to answer: given that the chain starts at a particular state, what is the expected number of steps before being absorbed? Given that the chain starts at a particular state, what is the probability it will be absorbed by a particular absorbing state?

Slide 59

General Process. Use the treatment from Introduction to Probability (Grinstead). Convert the matrix into canonical form, and use transformations of it to answer these questions. Use a simple example throughout.

Slide 60

Canonical Form. Rearrange the states so that the transient states come first in P. This gives the block form: a t x t matrix Q (transient to transient), a t x r matrix R (transient to absorbing), an r x t zero matrix, and an r x r identity matrix, where t = # of transient states and r = # of absorbing states.

Slide 61

Drunkard's Walk Example. A man is walking home from a bar, with 4 blocks to walk: 5 states in total. Absorbing states: Corner 4 (Home) and Corner 0 (Bar). At each block he has an equal probability of going forward or backward.

Slide 62

Drunkard's Walk Example

Slide 63

Drunkard's Walk: Canonical Form.

Slide 64

Fundamental Matrix. For an absorbing Markov chain P, the fundamental matrix for P is: N = (I - Q)^(-1). The n_ij entry gives the expected number of times that the process is in transient state s_j if it started in transient state s_i (before absorption).
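Putting the canonical form and the fundamental matrix together for the drunkard's walk: Q and R below follow the example's setup (transient corners 1, 2, 3; absorbing corners 0 and 4), and N answers both questions posed earlier.

```python
import numpy as np

# Drunkard's walk: corners 0..4, absorbing at 0 (bar) and 4 (home).
# Canonical ordering puts the transient states (1, 2, 3) first.
Q = np.array([[0.0, 0.5, 0.0],     # transient -> transient block
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],          # transient -> absorbing block (to 0, to 4)
              [0.0, 0.0],
              [0.0, 0.5]])

# Fundamental matrix N = (I - Q)^{-1}: n_ij is the expected number of visits
# to transient state j, starting from transient state i, before absorption.
N = np.linalg.inv(np.eye(3) - Q)

t = N @ np.ones(3)    # expected number of steps before absorption
B = N @ R             # probability of absorption by each absorbing state
print(t)              # [3. 4. 3.]
print(B)              # [[0.75 0.25] [0.5 0.5] [0.25 0.75]]
```

So from corner 1 the walk takes 3 steps on average and ends at the bar with probability 0.75, matching the symmetry of the setup.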
