In an earlier series of posts the emerging inhomogeneous Poissonian nature of network traffic was detailed. One implication of this trend is that not only network flows but also individual packets will be increasingly well described by Markov processes of various sorts. At EQ, we use some ideas from the edifice of information theory and the renormalization group to provide a mathematical infrastructure for viewing network traffic as (e.g.) realizations of inhomogeneous finite Markov processes (or countable Markov processes with something akin to a finite universal cover). An essentially equation-free (but idea-heavy) overview of this is given in our whitepaper “Scalable visual traffic analysis”, and more details and examples will be presented over time.
The question for now is: once you’ve got a finite Markov process, what do you do with it? There are some obvious things. For example, you could apply a Chebyshev-type inequality to detect when the traffic parameters change or the underlying assumptions break down (which, if the model is halfway decent, by definition indicates that something interesting is going on, even if it’s not malicious). This idea has been around in network security at least since Denning’s 1986-7 intrusion detection article, though, so it’s not likely to bear any more fruit (assuming it ever did). A better idea is to construct and exploit martingales. One way to do this to advantage, starting with an inhomogeneous Poisson process (or in principle, at least, a more general one-dimensional point process), was outlined here and here.
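As a toy illustration of the Chebyshev-type idea: model per-window packet counts as Poisson with a nominal rate, and flag any window whose count deviates from the nominal mean by more than $k$ standard deviations. Every number below (rate, window count, threshold, size of the rate shift) is made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: packet counts in fixed windows, modeled as Poisson
# with nominal rate lam.  For Poisson, mean = variance = lam, so Chebyshev
# gives P(|N - lam| >= k * sqrt(lam)) <= 1/k**2: large deviations are rare
# as long as the model holds.
lam, k = 50.0, 4.0
counts = rng.poisson(lam, size=200)                       # traffic matching the model
counts = np.concatenate([counts,
                         rng.poisson(3 * lam, size=20)])  # rate shift at window 200

alarms = np.abs(counts - lam) >= k * np.sqrt(lam)
print("alarm windows:", np.flatnonzero(alarms))
```

With these made-up numbers the shifted windows deviate by roughly 100 counts against a threshold of about 28, so essentially all of them trip the alarm, while the Chebyshev bound keeps the false-alarm rate on the first 200 windows controlled without any distributional fine print.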
Probably the best-known general technique for constructing martingales from Markov processes is the Dynkin formula. Although we don’t use this formula itself at present (after a good deal of tinkering and evaluation), a more general result in the same vein will help us introduce the Girsanov theorem for finite Markov processes, and thereby one of the tools we’ve developed for detecting changes in network traffic patterns.
The sketch below of a fairly general version of this formula for finite processes is adapted from a preprint of Ford (see Rogers and Williams IV.20 for a more sophisticated treatment).
Consider a time-inhomogeneous Markov process $X_t$ on a finite state space. Let $Q(t)$ denote the generator, and let $P(s,t)$ denote the corresponding transition kernel, i.e. $P_{jk}(s,t) := \mathbb{P}(X_t = k \mid X_s = j)$, where the Markov propagator is

$$P(s,t) = \mathcal{T}^\dagger \exp \left( \int_s^t Q(u) \, du \right)$$

and $\mathcal{T}^\dagger$ indicates the formal adjoint or reverse time-ordering operator. Thus, e.g., an initial distribution $p(0)$ is propagated as $p(t) = p(0) P(0,t)$. (NB. Kleinrock’s queueing theory book omits the time-ordering, which is a no-no.)
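To see numerically why the time-ordering matters, here is a sketch with a hypothetical two-state generator $Q(t)$ whose values at different times do not commute. The ordered product of short-time propagators (earliest factor leftmost, matching the row-vector convention above) is compared with the unordered shortcut of exponentiating the integrated generator; the rates and horizon are made up for illustration.

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for the
    small-norm 2x2 matrices used here)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def Q(t):
    """Hypothetical time-varying two-state generator; its values at
    different times do not commute, so ordering matters."""
    a, b = 1.0 + t, 2.0 - t
    return np.array([[-a, a], [b, -b]])

T, m = 1.0, 2000
dt = T / m

# Time-ordered propagator: ordered product of short-time exponentials,
# earliest times leftmost (distributions are row vectors acting on the left).
P = np.eye(2)
for i in range(m):
    P = P @ expm_series(Q(i * dt) * dt)

# Unordered shortcut: exponentiate the integrated generator directly.
Qint = sum(Q((i + 0.5) * dt) for i in range(m)) * dt
P_naive = expm_series(Qint)

gap = np.abs(P - P_naive).max()
print("ordered propagator:\n", P)
print("unordered shortcut:\n", P_naive)
print("max entrywise gap:", gap)
```

Both matrices are stochastic (rows sum to 1), but they differ noticeably entrywise; that gap is exactly the error incurred by dropping the time-ordering.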
Let $f$ be bounded and such that the map $t \mapsto f(t, \cdot)$ is $C^1$. Write $f_j(t) := f(t,j)$, assemble these into the column vector $f(t)$, and fix a partition $0 = t_0 < t_1 < \cdots < t_m = t$ of $[0,t]$. Now

$$f_{X_t}(t) - f_{X_0}(0) = \sum_k \left[ f_{X_{t_{k+1}}}(t_{k+1}) - f_{X_{t_k}}(t_k) \right]$$

and the Markov property gives that

$$\mathbb{E} \left[ f_{X_{t_{k+1}}}(t_{k+1}) - f_{X_{t_k}}(t_k) \, \middle| \, \mathcal{F}_{t_k} \right] = \sum_j P_{X_{t_k} j}(t_k, t_{k+1}) \, f_j(t_{k+1}) - f_{X_{t_k}}(t_k).$$
The notation $\mathcal{F}_t$ just indicates the history of the process (i.e., its natural filtration) at time $t$. The transition kernel satisfies

$$P(s,t) = I + Q(s) \cdot (t-s) + o(t-s),$$

a generalization of the time-homogeneous formula $P(s,t) = e^{(t-s)Q}$, so the RHS of the previous equation is $(t_{k+1} - t_k)$ times

$$\dot f_{X_{t_k}}(t_k) + \left( Q(t_k) f(t_k) \right)_{X_{t_k}}$$
plus a term that vanishes in the limit of vanishing mesh. The fact that the row sums of a generator are identically zero has been used to simplify the result.
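A quick numerical check of the small-time expansion, with a made-up frozen generator: the error of $I + Q(s)(t-s)$ against the exact kernel should shrink like the square of the step, which is what the $o(t-s)$ term promises.

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via truncated Taylor series."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

Qs = np.array([[-1.0, 1.0], [2.0, -2.0]])   # hypothetical generator Q(s); rows sum to zero

errs = []
for h in (0.1, 0.05, 0.025):
    # exact kernel over a step of length h vs. first-order approximation
    err = np.abs(expm_series(Qs * h) - (np.eye(2) + Qs * h)).max()
    errs.append(err)
    print(f"h = {h:5.3f}   max error = {err:.2e}")
```

Halving $h$ cuts the error by roughly a factor of four, consistent with a leading error term of $Q(s)^2 (t-s)^2 / 2$.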
Summing over $k$ and taking the limit as the mesh of the partition goes to zero shows that

$$M_t := f_{X_t}(t) - f_{X_0}(0) - \int_0^t \left[ \dot f_{X_s}(s) + \left( Q(s) f(s) \right)_{X_s} \right] ds$$

is a local martingale, or if $f$ is well behaved, a martingale.
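The martingale property can be checked by brute-force simulation. The sketch below uses a hypothetical two-state chain with time-varying rates and a hypothetical test function $f$, discretizes both the dynamics and the compensating integral with a small time step, and confirms that the sample mean of $M_t$ stays near zero (it is exactly zero in the limit, up to Monte Carlo and discretization error).

```python
import numpy as np

rng = np.random.default_rng(1)

def Q(t):
    """Hypothetical time-varying two-state generator."""
    a, b = 1.0 + t, 2.0 - t
    return np.array([[-a, a], [b, -b]])

f    = lambda t, j: np.cos(t) * (j + 1.0)    # hypothetical f_j(t)
dfdt = lambda t, j: -np.sin(t) * (j + 1.0)   # its time derivative

T, m, paths = 1.0, 400, 20000
dt = T / m

X = np.zeros(paths, dtype=int)   # all paths start in state 0
X0 = X.copy()
comp = np.zeros(paths)           # running compensator integral

for k in range(m):
    t = k * dt
    Qt = Q(t)
    fvec = np.array([f(t, 0), f(t, 1)])
    drift = np.array([dfdt(t, 0), dfdt(t, 1)]) + Qt @ fvec  # (df/dt + Q f)_j
    comp += drift[X] * dt
    # one Euler step of the chain: leave the current state with prob ~ rate * dt
    jump_prob = np.array([Qt[0, 1], Qt[1, 0]])[X] * dt
    X = np.where(rng.random(paths) < jump_prob, 1 - X, X)

M = f(T, X) - f(0.0, X0) - comp
print("sample mean of M_T:", M.mean(), "(std err", M.std() / np.sqrt(paths), ")")
```

The sample mean should be zero to within a few standard errors; shrinking `dt` tightens the agreement further.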
This can be generalized (see Rogers and Williams IV.21 and note that the extension to inhomogeneous processes is trivial): if $X$ is an inhomogeneous Markov process on a finite state space and $f$ is such that $s \mapsto f_{X_{s-}}(s)$ is locally bounded and previsible and $\int_0^t \left| \dot f_{X_s}(s) + \left( Q(s) f(s) \right)_{X_s} \right| ds < \infty$ for all $t$, then $M$ given by

$$M_t := f_{X_t}(t) - f_{X_0}(0) - \int_0^t \left[ \dot f_{X_s}(s) + \left( Q(s) f(s) \right)_{X_s} \right] ds$$
is a local martingale. Conversely, any local martingale null at 0 can be represented in this form for some $f$ satisfying the conditions above (except possibly local boundedness).
To reiterate, this result will be used to help introduce the Girsanov theorem for finite Markov processes in a future post, and later on we’ll also show how Girsanov can be used to arrive at a genuinely simple, scalable likelihood ratio test for identifying changes in network traffic patterns.