Signaalstroomdiagrammen en Markov‐ketens (Signal Flow Graphs and Markov Chains)
Author(s) -
Bakker W.
Publication year - 1964
Publication title -
Statistica Neerlandica
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.52
H-Index - 39
eISSN - 1467-9574
pISSN - 0039-0402
DOI - 10.1111/j.1467-9574.1964.tb00501.x
Subject(s) - signal flow graph , mathematics , markov chain , impulse response , impulse (physics) , markov process , stochastic matrix , convolution (computer science) , graph , algorithm , discrete mathematics , computer science , mathematical analysis , statistics , physics , quantum mechanics , machine learning , artificial neural network , electrical engineering , engineering
Summary The transient behavior of a simple two‐state discrete Markov process can be studied by means of matrix multiplication and z‐transforms. For linear systems the output equals the convolution of the input with the impulse response of the system. Taking z‐transforms avoids the convolution: the transform of the output equals the transform of the input multiplied by the transform of the impulse response of the system. The signal flow graph method recasts the matrix method of solving a system of simultaneous equations into a topological method. The simple two‐state discrete Markov process can be represented by such a flow graph, and it is shown how to simplify this flow graph step by step. Taking the unit impulse at time zero as input, the output of this system at time n turns out to be the n‐step transition probability. The results for the transmission in a network are cited (Mason and Zimmermann [8]); with these results the n‐step transition probability of a system can be given at once, without first simplifying the flow graph. Building on this result, signal flow graphs can be used to determine the probability of being in a given state for the first time, the average number of times the system visits a given state, and the probability of reaching a given state for the first time before another given state has been reached. Finally the results are extended to continuous Markov processes.
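The matrix and z‐transform approaches the summary describes can be sketched numerically. The snippet below (not from the article; the transition probabilities a and b are illustrative assumptions) computes the n‐step transition probability of a two‐state chain by repeated matrix multiplication and checks it against the standard closed form that partial‐fraction expansion of the z‐transform yields.

```python
import numpy as np

# Illustrative two-state chain: a = P(state 0 -> state 1), b = P(state 1 -> state 0).
# These values are assumptions for the sketch, not taken from the article.
a, b = 0.3, 0.5
P = np.array([[1 - a, a],
              [b, 1 - b]])

def n_step_matrix(P, n):
    # n-step transition probabilities by matrix multiplication: P^n
    return np.linalg.matrix_power(P, n)

def p00_closed_form(a, b, n):
    # Closed form for the (0,0) entry of P^n, as obtained via the z-transform:
    #   p00(n) = b/(a+b) + (a/(a+b)) * (1 - a - b)**n
    # The geometric term (1 - a - b)**n is the transient part; b/(a+b) is the limit.
    lam = 1.0 - a - b
    return b / (a + b) + (a / (a + b)) * lam ** n

# The two methods agree for every n.
for n in range(8):
    assert abs(n_step_matrix(P, n)[0, 0] - p00_closed_form(a, b, n)) < 1e-12
```

The closed form makes the transient behavior explicit: the n‐step probability decays geometrically, at rate 1 − a − b, toward the stationary value b/(a + b).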