Effective statistical physics of Anosov systems

14 September 2010

We’ve just posted a paper titled “Effective statistical physics of Anosov systems” that details the physical relevance of the techniques we’ve used to characterize network traffic. The idea is that there appears to be a unique, well-defined effective temperature (and energy spectrum) for physical systems that are typical under the so-called chaotic hypothesis. In another whitepaper, also available on the arXiv and via our downloads page, we’ve demonstrated how statistical physics can be used to detect malicious or otherwise anomalous network traffic. The current paper completes the circle, presenting evidence that the same ideas can be fruitfully applied to nonequilibrium steady states.


Initial software release

24 August 2010

Our free/open-source visual network traffic monitoring software is now available for download at www.eqnets.com. A video of our enterprise system in action and technical documents detailing our approaches to traffic analysis, real-time interactive visualization and alerting are also available there.

Besides a zero-cost download option, we are also offering Linux-oriented installation media for under $100 and an enterprise version of our system with premium features such as configurable automatic alerting, nonlinear replay, and a 3D traffic display.

Discounts—including installation media for a nominal shipping and handling fee—are available to institutional researchers or in exchange for extensions to our platform.

The software can run in its entirety on a dedicated x86 workstation with four or more cores and a network tap, though our system also supports distributed hardware configurations. An average graphics card is sufficient to operate the visualization engine.

Thanks and enjoy!


Equilibrium Networks beta

19 March 2010

Our visual network traffic monitoring software (for background information, see our website) has successfully passed our internal tests, so we are packaging a Linux-oriented beta distribution that is planned for snail-mailing (no downloads–sorry, but export regulations still apply) on a limited basis before the end of the month. The beta includes premium features that will not be available with our planned free/open-source distribution later this year, but at this early stage we will be happy to provide a special license free of charge to a limited number of qualifying US organizations.

Participants in our beta program will be expected to provide timely and useful feedback on the software, e.g.

•    filling perceived gaps in documentation
•    proposing and/or implementing improvements
•    making feature requests or providing constructive criticism
•    providing testimonial blurbs or case studies
•    etc.

The software should be able to run in its entirety on a dedicated x86 workstation with four or more cores and a network tap (though you may prefer to try out distributed hardware configurations). If your organization is interested in participating in our beta program, please include a sentence or two describing your anticipated use of this visual network traffic monitoring software along with your organizational background, POC, and a physical address in an email to beta [at our domain name]. DVDs will only be mailed once you’ve accepted the EULA. Bear in mind that beta slots are limited. Enjoy!


Martingales from finite Markov processes, part 1

15 February 2010

An earlier series of posts detailed the emerging inhomogeneous Poissonian nature of network traffic. One implication of this trend is that not only network flows but also individual packets will be increasingly well described by Markov processes of various sorts. At EQ, we use some ideas from the edifice of information theory and the renormalization group to provide a mathematical infrastructure for viewing network traffic as (e.g.) realizations of inhomogeneous finite Markov processes (or countable Markov processes with something akin to a finite universal cover). An essentially equation-free (but idea-heavy) overview of this is given in our whitepaper “Scalable visual traffic analysis”, and more details and examples will be presented over time.

The question for now is, once you’ve got a finite Markov process, what do you do with it? There are some obvious things. For example, you could apply a Chebyshev-type inequality to detect when the traffic parameters change or the underlying assumptions break down (which, if the model is halfway decent, by definition indicates something interesting is going on–even if it’s not malicious). This idea has been around in network security at least since Denning’s 1986-7 intrusion detection article, though, so it’s not likely to bear any more fruit (assuming it ever did). A better idea is to construct and exploit martingales. One way to do this to advantage starting with an inhomogeneous Poisson process (or in principle, at least, more general one-dimensional point processes) was outlined here and here.
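To make the Chebyshev idea concrete, here is a minimal sketch (illustrative only, and not our production approach): it flags intervals whose packet counts stray more than k sample standard deviations from a trailing-window baseline. The window size, threshold, and synthetic Poisson traffic are all assumptions chosen for the example.

    import numpy as np

    def chebyshev_alarm(counts, window=100, k=4.0):
        """Flag intervals whose packet count deviates from the trailing-window
        mean by more than k sample standard deviations. Chebyshev's inequality
        bounds the in-model false-alarm rate by roughly 1/k^2 (exactly, if mu
        and sigma were the true moments), regardless of the distribution."""
        counts = np.asarray(counts, dtype=float)
        alarms = []
        for i in range(window, len(counts)):
            ref = counts[i - window:i]
            mu, sigma = ref.mean(), ref.std(ddof=1)
            if sigma > 0 and abs(counts[i] - mu) > k * sigma:
                alarms.append(i)
        return alarms

    # Toy example: Poisson packet counts whose rate jumps at interval 500.
    rng = np.random.default_rng(0)
    counts = np.concatenate([rng.poisson(20, 500), rng.poisson(60, 200)])
    print(chebyshev_alarm(counts)[:5])   # first flagged intervals, near the change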

Probably the most well-known general technique for constructing martingales from Markov processes is the Dynkin formula. Although we don’t use this formula ourselves at present (after a good deal of tinkering and evaluation), a similar but more general result will help us introduce the Girsanov theorem for finite Markov processes, and thereby one of the tools we’ve developed for detecting changes in network traffic patterns.

The sketch below of a fairly general version of this formula for finite processes is adapted from a preprint of Ford (see Rogers and Williams IV.20 for a more sophisticated treatment).

Consider a time-inhomogeneous Markov process X_t on a finite state space. Let Q(t) denote the generator, and let P(s,t) denote the corresponding transition kernel, i.e. P(s,t) = U^{-1}(s)U(t), where the Markov propagator is

U(t) := \mathcal{TO}^* \exp \int_0^t Q(s) \ ds

and \mathcal{TO}^* indicates the formal adjoint or reverse time-ordering operator. Thus, e.g., an initial distribution p(0) is propagated as p(t) = p(0)U(t). (NB. Kleinrock’s queueing theory book omits the time-ordering, which is a no-no.)
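For concreteness, here is a minimal numerical sketch of this propagator, assuming a hypothetical two-state generator; the time-ordered exponential is approximated by a product of short-interval matrix exponentials, with later times multiplying on the right so that p(t) = p(0)U(t) holds for row vectors. The rates and step count are illustrative only.

    import numpy as np
    from scipy.linalg import expm

    def propagator(Q, t, steps=1000):
        """Approximate U(t) = TO* exp(int_0^t Q(s) ds) by a time-ordered
        product of short-interval exponentials; rows of U(t) sum to 1."""
        U = np.eye(Q(0.0).shape[0])
        ds = t / steps
        for j in range(steps):
            s = (j + 0.5) * ds              # midpoint of the j-th subinterval
            U = U @ expm(Q(s) * ds)         # later times multiply on the right
        return U

    def Q(s):
        # hypothetical two-state generator with a time-varying rate
        a, b = 1.0 + np.sin(s) ** 2, 0.5
        return np.array([[-a, a], [b, -b]])

    p0 = np.array([1.0, 0.0])               # start in state 0
    print(p0 @ propagator(Q, 2.0))          # distribution at t = 2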

Let f_t(X_t) be bounded and such that the map t \mapsto f_t is C^1, and let 0 \equiv t_0 < t_1 < \cdots < t_m \equiv t be a partition of [0, t]. Now

f_t(X_t)-f_0(X_0) \equiv f_{t_m}(X_{t_m})-f_{t_0}(X_{t_0})

= \sum_{j=0}^{m-1} \left[f_{t_{j+1}}(X_{t_{j+1}}) - f_{t_j}(X_{t_j})\right],

and the Markov property gives that

\mathbb{E} \left(f_{t_{j+1}}(X_{t_{j+1}}) - f_{t_j}(X_{t_j}) \ \big| \ \mathcal{F}_{t_j}\right)

= \sum_{X_{t_{j+1}}} \left[f_{t_{j+1}}(X_{t_{j+1}}) - f_{t_j}(X_{t_j})\right] \cdot P_{X_{t_j},X_{t_{j+1}}}(t_j,t_{j+1}).

The notation \mathcal{F}_t just indicates the history of the process (i.e., its natural filtration) at time t. The transition kernel satisfies a generalization of the time-homogeneous formula P(t) = e^{tQ}:

P_{X_{t_j},X_{t_{j+1}}}(t_j,t_{j+1})

= \delta_{X_{t_j},X_{t_{j+1}}} + (t_{j+1} - t_j) \cdot Q_{X_{t_j},X_{t_{j+1}}}(t_j) + o(t_{j+1} - t_j)

so the RHS of the previous equation is t_{j+1} - t_j times

\frac{f_{t_{j+1}}(X_{t_j}) - f_{t_j}(X_{t_j})}{t_{j+1} - t_j} + \sum_{X_{t_{j+1}}} f_{t_{j+1}}(X_{t_{j+1}}) \cdot Q_{X_{t_j},X_{t_{j+1}}}(t_j)

plus a term that vanishes in the limit of vanishing mesh. The fact that the row sums of a generator are identically zero has been used to simplify the result.

Summing over j and taking the limit as the mesh of the partition goes to zero shows that

\boxed{\mathbb{E} \left(f_t(X_t)-f_0(X_0)\right) = \mathbb{E} \int_0^t \left(\partial_s + Q(s)\right)f_s \circ X_s \ ds.}

That is,

M_t^f := f_t(X_t)-f_0(X_0)- \int_0^t \left(\partial_s + Q(s)\right)f_s \circ X_s \ ds

is a local martingale, or if Q is well behaved, a martingale.
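As a sanity check rather than a proof, the following sketch simulates a toy two-state inhomogeneous chain on a fine time grid and verifies that the sample mean of M_t^f is statistically indistinguishable from zero. The generator, the test function f_s, and the Euler discretization are all assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    def Q(s):
        # hypothetical two-state generator (states 0 and 1)
        a, b = 1.0 + np.sin(s) ** 2, 0.5
        return np.array([[-a, a], [b, -b]])

    def f(s, x):
        return np.cos(s) * (1.0 + x)        # a smooth test function f_s(x)

    def df_ds(s, x):
        return -np.sin(s) * (1.0 + x)       # its time derivative

    def M(t=2.0, steps=1000):
        """One realization of
        M_t^f = f_t(X_t) - f_0(X_0) - int_0^t (d_s + Q(s)) f_s(X_s) ds,
        with the chain simulated on a fine Euler grid."""
        ds = t / steps
        x, integral = 0, 0.0
        for j in range(steps):
            s = j * ds
            Qs = Q(s)
            integral += (df_ds(s, x) + sum(Qs[x, y] * f(s, y) for y in (0, 1))) * ds
            if rng.random() < -Qs[x, x] * ds:   # jump with probability |Q_xx| ds
                x = 1 - x
        return f(t, x) - f(0.0, 0) - integral

    samples = np.array([M() for _ in range(2000)])
    print(samples.mean(), samples.std() / np.sqrt(len(samples)))  # mean ~ 0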

This can be generalized (see Rogers and Williams IV.21 and note that the extension to inhomogeneous processes is trivial): if X is an inhomogeneous Markov process on a finite state space \{1,\dots,n\} and g : \mathbb{R}_+ \times \{1,\dots,n\} \times \{1,\dots,n\} \times \Omega \longrightarrow \mathbb{R} is such that (t, \omega) \mapsto g(t,j,k,\omega) is locally bounded and previsible for all j,k and g(t,j,j,\omega) \equiv 0 for all j, then M_t^g(\omega) given by

\sum_{0 < s \le t} g(s,X_{s-},X_s,\omega) - \int_{(0,t]} \sum_k Q_{X_{s-},k}(s) \cdot g(s,X_{s-},k,\omega) \ ds

is a local martingale. Conversely, any local martingale null at 0 can be represented in this form for some g satisfying the conditions above (except possibly local boundedness).
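The simplest instance of this representation takes g(s,j,k) \equiv 1 for j \ne k, so that the sum counts the jumps of X and the integral is the familiar compensator of the jump-counting process. Here is a minimal simulation sketch of that special case, again with a hypothetical two-state generator chosen only for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    def Q(s):
        # hypothetical two-state generator (states 0 and 1)
        a, b = 1.0 + np.sin(s) ** 2, 0.5
        return np.array([[-a, a], [b, -b]])

    def M_g(t=2.0, steps=1000):
        """One realization of M_t^g with g(s, j, k) = 1 for j != k:
        the number of jumps up to t minus int_0^t (-Q_{X_s,X_s}(s)) ds."""
        ds = t / steps
        x, jumps, compensator = 0, 0, 0.0
        for j in range(steps):
            rate = -Q(j * ds)[x, x]          # total jump rate out of state x
            compensator += rate * ds
            if rng.random() < rate * ds:
                x = 1 - x
                jumps += 1
        return jumps - compensator

    samples = np.array([M_g() for _ in range(2000)])
    print(samples.mean(), samples.std() / np.sqrt(len(samples)))  # mean ~ 0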

To reiterate, this result will be used to help introduce the Girsanov theorem for finite Markov processes in a future post, and later on we’ll also show how Girsanov can be used to arrive at a genuinely simple, scalable likelihood ratio test for identifying changes in network traffic patterns.


Random bits

4 February 2010

Hacking for Fun and Profit in China’s Underworld

Google + NSA Information Assurance Directorate

“Every user in the world is convinced they need security features, not security procedures.”

Advanced Persistent Threat highlighted by DNI; Mandiant report gives details. Mandiant coined the APT term, probably because they deal with that sort of thing constantly: they’re very good at what they do. We hired them for internal test and evaluation work as well as usability input as our software began taking shape, and I came away impressed. It’s not surprising to see them tackling high-profile events.

Quantum energy teleportation


Common ecology quantifies human insurgency

21 December 2009

Researchers in Colombia, Miami, and the UK have published an article in this week’s Nature that claims to identify what amounts to universal power-law behavior in insurgencies (they don’t call it that, and the exponents differ slightly across insurgencies, but the putative universal exponent is apparently 5/2). The researchers analyzed over 54,000 violent events across nine insurgencies, including Iraq and Afghanistan. They find that the power-law behavior of casualties (see also here for the distribution of exponents over insurgencies) is explained by “ongoing group dynamics within the insurgent population” and that the timing of events is governed by “group decision-making about when to attack based on competition for media attention”.

Their model is not predictive in any practical sense: few things with power laws are. What it provides is a quantitative framework for understanding insurgency in general, and perhaps more importantly a path towards classifying insurgencies based on a set of quantitative characteristics. One of the nice things about universality (if this is really what is going on) is that it allows you to ignore dynamical details in a defensible way, so long as you understand the basic mechanisms at play. This insight actually derives from the renormalization group (the same one that informs Equilibrium’s architecture) and provides a way to categorize systems. So if there really is universal behavior, then the fact that the model these researchers use is just a caricature wouldn’t matter as much as it otherwise would, and it would allow for reasonably serious quantitative analysis.

The first question about this work ought to be whether similar results can be obtained with different model assumptions. The second ought to be whether running the same analysis on “successful” wars of national liberation reveals distinguishing characteristics. If there are, this framework could be a valuable input to policy and strategy. When pundits talk about Iraq or Afghanistan being another Vietnam, the distinction between terrorist insurgency and guerrilla warfare is blurred. But hard data may provide clarity in the future.


Birds on a wire and the Ising model

30 November 2009

Statistical physics is very good at describing lots of physical systems, but one of the basic tenets underlying our technology is that statistical physics is also a good framework for describing computer network traffic. A great deal of recent work has focused on applying statistical physics to nontraditional areas: behavioral economics, link analysis (what the physicists abusively call network theory), automobile traffic, etc.

In this post I’m going to talk about a way in which one of the simplest models from statistical physics might inform group dynamics in birds (and probably even people in similar situations). As far as I know, the experiment hasn’t been done–the closest work to it seems to be on flocking (though I’ll give $.50 and a Sprite to the first person to point out a direct reference to this sort of thing). I’ve been kicking it around for years and I think that at varying scopes and levels of complexity, it might constitute anything from a really good high school science fair project to a PhD dissertation. In fact I may decide to run with this idea myself some day, and I hope that anyone else out there who wants to do the same will let me know.

The basic idea is simple. But first let me show you a couple of pictures.

[Photo. Source: http://www.flickr.com/photos/blmurch/ / CC BY-SA 2.0]

Notice how the tree in the picture above looks? There doesn’t seem to be any wind. But I bet that either the birds flocked to the wire together or there was at least a breeze when the picture below was taken:

[Photo. Source: http://www.flickr.com/photos/paul_garland/ / CC BY-SA 2.0]

Because the birds are on wires, they can face in essentially one of two directions. In the first picture it looks very close to a 60%-40% split, with most of the roughly 60 birds facing left. In the second picture, 14 birds are facing right and only one is facing left.

Now let me show you an equation:

H = -J\sum_{\langle i j \rangle} s_i s_j - K\sum_i s_i.

If you are a physicist you already know that this is the Hamiltonian for the spin-1/2 Ising model with an applied field, but I will explain this briefly. The Hamiltonian H is really just a fancy word for energy. It is the energy of a model (notionally magnetic) system in which spins s_i, occupying sites that are (typically) on a lattice (e.g., a one-dimensional lattice of equally spaced points), take the values \pm 1 and can be viewed as caricatures of dipoles. The notation \langle i j \rangle indicates that the first sum is taken over nearest neighbors in the lattice: the spins interact, but only with their neighbors, and the strength of this interaction is reflected in the exchange energy J. The strength of the spins’ interaction with an applied (again notionally magnetic) field is governed by the field strength K. This is the archetype of spin models in statistical physics, and it won’t serve much for me to reproduce a discussion that can be found many other places (you may like to refer to Goldenfeld’s Lectures on Phase Transitions and the Renormalization Group, which also covers the renormalization group method that inspires the data reduction techniques used in our software). Suffice it to say that these sorts of models comprise a vast field of study and already have an enormous number of applications in lots of different areas.
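As a concrete (and purely illustrative) example, here is a few-line computation of H for a one-dimensional open chain of \pm 1 spins, read as birds facing right (+1) or left (-1); the coupling values and configurations are arbitrary assumptions for the sketch.

    import numpy as np

    def ising_energy(spins, J=1.0, K=0.0):
        """H = -J * (sum over nearest-neighbour pairs of s_i s_j) - K * (sum of s_i)
        for a 1D open chain of spins s_i = +/-1."""
        s = np.asarray(spins)
        return -J * np.sum(s[:-1] * s[1:]) - K * np.sum(s)

    # Toy "birds on a wire" configurations: +1 facing right, -1 facing left.
    mixed   = np.array([+1, -1, +1, +1, -1, -1, +1, -1])   # disordered
    aligned = np.ones(8, dtype=int)                        # fully ordered
    print(ising_energy(mixed), ising_energy(aligned))      # 3.0 -7.0 for J = 1, K = 0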

Now let me talk about what the pictures and the model have in common. The (local or global) average spin is called the magnetization. Ignoring an arbitrary sign, in the first picture the magnetization is roughly 0.2, and in the second it’s about 0.87. The 1D spin-1/2 Ising model is famous for exhibiting a simple phase transition in magnetization: indeed, the expected value of the magnetization in the thermodynamic limit is shown in every introductory statistical physics course worth the name to be

\langle s \rangle = \frac{\sinh \beta K}{\sqrt{\sinh^2 \beta K + e^{-4\beta J}}}

where \beta \equiv 1/T is the inverse temperature (in natural units). As ever, a picture is worth a thousand words:

[Figure: magnetization]

For K = 0 and T > 0, it’s easy to see that \langle s \rangle = 0. But if K \ne 0, J > 0 and T \downarrow 0, then taking the subsequent limit K \rightarrow 0^\pm yields a magnetization of \pm 1. At zero temperature the model becomes completely magnetized–i.e., totally ordered. (Finite-temperature phase transitions in magnetization in the real world are of paramount importance for superconductivity.)
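A quick numerical illustration of this order of limits, using the closed-form magnetization above (the parameter values are arbitrary):

    import numpy as np

    def magnetization(T, J=1.0, K=0.0):
        """Closed-form 1D spin-1/2 Ising magnetization:
        <s> = sinh(beta*K) / sqrt(sinh(beta*K)**2 + exp(-4*beta*J))."""
        beta = 1.0 / T
        return np.sinh(beta * K) / np.sqrt(np.sinh(beta * K) ** 2 + np.exp(-4.0 * beta * J))

    print(magnetization(T=1.0, K=0.0))     # 0.0: no field, no spontaneous order
    print(magnetization(T=1.0, K=0.1))     # ~0.6: partial alignment
    print(magnetization(T=0.05, K=1e-6))   # ~+1: T -> 0 first, then K -> 0 from above
    print(magnetization(T=0.05, K=-1e-6))  # ~-1: ...and K -> 0 from below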

And at long last, here’s the point. I am willing to bet ($.50 and a Sprite, as usual) that the arrangement of birds on wires can be well described by a simple spin model, and probably the spin-1/2 Ising model provided that the spacing between birds isn’t too wide. I expect that the same model with varying parameters works for many–or even most or all–species in some regime, which is a bet on a particularly strong kind of universality. Neglecting spacing between birds, I expect the effective exchange strength to depend on the species of bird, and the effective applied field to depend on the wind speed and angle, and possibly the sun’s relative location (and probably a transient to model the effects of arriving on the wire in a flock). I don’t have any firm suspicions on what might govern an effective temperature here, but I wouldn’t be surprised to see something that could be well described by Kawasaki or Glauber dynamics for spin flips: that is, I reckon that–as usual–it’s necessary to take timescales into account in order to unambiguously assign a formal or effective temperature (if the birds effectively stay still, then dynamics aren’t relevant and the temperature should be regarded as being already accounted for in the exchange and field parameters). I used to think about doing this kind of experiment using tagged photographs or their ilk near windsocks or something similar, but I can’t see how to get any decent results that way without more effort than a direct experiment. I think it probably ought to be done (at least initially) in a controlled environment.

Anyway, there it is. The experiment always wins, but I have a hunch about how it would turn out.

UPDATE 30 Jan 2010: Somebody had another interesting idea involving birds on wires.


Happy Thanksgiving

26 November 2009

I'm thankful for seeing truth presented with beauty.

This is a picture to help understand an Anosov flow obtained from the cat map. It’s part of research on a technique we’ve used to analyze network traffic.


Capability of the PRC to conduct cyber warfare and computer network exploitation

23 November 2009

I just finished reading a recent report [pdf] with this title produced for the US-China Economic and Security Review Commission. Though there’s a lot of filler material, it’s pretty good. I’ll spare you the trouble of reading all 88 pages and start with what I thought were the most salient themes covered in the executive summary:

  • Some evidence exists suggesting limited collaboration between individual elite hackers and the Chinese government; however,
  • the constant barrage of network penetrations from China (comprising most of what Mandiant calls “the advanced persistent threat”) “is difficult at best without some type of state-sponsorship”;
  • the modus operandi of the penetrations “suggests the existence of a collection management infrastructure”; and
  • PLA CNE aims during a military conflict would be “to delay US deployments and impact combat effectiveness of troops already in theater”.

The PLA’s “Integrated Network Electronic Warfare” doctrine is based on attacking a few carefully selected network nodes controlling C2 and logistics. The INEW doctrine was apparently validated in a 2004 OPFOR exercise when the red force (NB. the Chinese use red to denote themselves) C2 network got pwned within minutes, and it is likely that PRC leadership would authorize preemptive cyberattacks if it believed they would not cross any “red lines”. This preemptive strategy is apparently favored by some in the PLA who view cyber as a “strategic deterrent comparable to nuclear weapons but possessing greater precision, leaving far fewer casualties, and possessing longer range than any weapon in the PLA arsenal”. [emphasis original]

One aspect of this thinking that I think is underappreciated is that the PRC is already deterring the US by its apparent low-level attacks. These attacks demonstrate someone’s capability in no uncertain terms, and in fact may be a cornerstone of the PLA’s overall deterrence strategy. In short, if the PLA convinces US leadership that it can (at least) throw a monkey wrench into US deployments, the PRC suddenly has more leverage over Taiwan, where the PLA would need to mount a quick amphibious operation. And because the Chinese Communist Party’s claim to legitimacy can be viewed as deriving first of all from its vow to reunite China (i.e., retake the “renegade province” of Taiwan) one day, there is a clear path from the PLA cyber strategy to the foundations of Chinese politics.

The paper goes on to note that “much of China’s contemporary military history reflects a willingness to use force in situations where the PRC was clearly the weaker entity” and suggests that such uses of force were based on forestalling the consequences of an even greater disadvantage in the future. This putative mindset also bears on cyber, particularly through the Taiwan lens. The PLA has concluded that cyber attacks focusing on C2 and logistics would buy it time, and presumably enough time (in its calculations) to achieve its strategic aims during a conflict. This strategy requires laying a foundation, and thus the PRC is presumably penetrating networks: not just for government and industrial espionage, but also to make its central war plan credible.

In practice a lot of the exploitation would consist of throttling encrypted communications and corrupting unencrypted comms, and it is likely that the PLA is deliberately probing the boundaries of what can and cannot be detected by the US. But this generally shouldn’t be conflated with hacktivism or any civilian attacks originating from China, as there’s little reason to believe that the PLA needs or wants anything to do with this sort of thing. While it’s possible that there is some benefit to creating a noisy threat environment, executing precise cyberattacks in the INEW doctrine requires exploitation that can be undermined by hacktivism or civilian (especially amateur) attacks.

The end of the meaty part of the report talks about what’s being done and what should be done. It talks about the ineffectiveness of signature-based IDS/IPS and the promise of network behavior analysis, but also its higher overhead and false alarm rates. This is precisely the sort of thing our software is aimed at mitigating, by combining dynamical network traffic profiles and interactively configurable automated alerts with a framework for low-overhead monitoring and fast drill-down.


DIMACS workshop on designing networks for manageability

14 November 2009

The highlight of the DIMACS workshop on designing networks for manageability for me was Nick Duffield’s talk on characterizing IP flows at network scale. The basic idea is to use machine learning to identify the flow predicates that best reproduce packet-level classifications. By sampling flows according to a simple dynamical weighting, Duffield et al. demonstrate that this sort of flow classification is accurate (to a few percent, with the misclassifications largely due to overloading of HTTP, e.g., with media over web), scalable (i.e., faster than real-time), versatile (i.e., independent of the particular ML classifier), and stable (over space and time, with a deployment on a separate but similar network producing essentially equivalent results over several months). This work is more recent than related research we’ve cited in our whitepaper “Scalable visual traffic analysis” (on our downloads page) detailing the rationale behind our own traffic aggregation methods.

Much of the workshop (especially its first day) was more focused on current deployment and engineering issues than I would have expected given its overarching focus on “algorithmic foundations of the internet”. Another mathematician who came with me and I both expected to see some work on (or at least suggesting the use of) sparse linear algebra to deal with traffic matrices. I was also surprised not to see anyone talk about agent-based configuration methods for networks; this sort of approach has been used to great effect on hosts.

But there were a number of other talks I found interesting: Aditya Akella from Wisconsin talked about an entropy characterization of “reachability sets” describing packets that can be sent between pairs of routers based on their configurations, and used this to construct a routing complexity measure for networks. Dan Rubenstein from Columbia talked about a “canonical graph” method for efficiently detecting misconfigurations for routing protocols. Iraj Saniee talked about why networks are globally hyperbolic (using a result of Gromov’s well-known work on groups), a conclusion that seems intuitively obvious to me if the existence of a global curvature (bound) is assumed. (Basically a network spreads out if it’s drawn in any reasonable way, and hyperbolic geometry amounts to expansion.)

Mung Chiang from Princeton talked about the results in “Link-State Routing with Hop-by-Hop Forwarding Can Achieve Optimal Traffic Engineering” first presented at INFOCOM 2008. He and coworkers perturb assumptions behind routing protocols to obviate the need for hard optimization problems (i.e., computation of optimal link weights to input to OSPF is NP-hard, but changing OSPF can make the corresponding optimization problem easier). From what I could tell OSPF corresponds to a “zero-temperature” protocol, whereas the improved protocol corresponds to a “finite-temperature” one.

Michael Schapira from Yale and Berkeley talked about game-theoretic and economic perspectives on routing. It is a happy “accident” that the internet is BGP stable (usually, although a notable event some time ago in which a Pakistani ISP set all its hop counts to 1 created a routing “black hole”). Although ISPs are selfish, economic considerations tend to result in stability. But that’s not a guarantee. So Schapira and coworkers analyzed the situation and found that “interdomain routing with BGP is a game” in which the ASes are the players, the BGP stable states are pure Nash equilibria, and BGP is the “best response”. I mentioned to him that the “accidental” nature of this stability is likely due to reciprocity, in that an ISP that discovers one of its neighbors engaging in predatory routing is likely to retaliate in the future. I think the use of economic and game theory is generally a good idea. An emphasis on the economics of cybercrime has developed recently, and understanding the market forces at play here and elsewhere is likely to lead to improvements in the reliability and security of networks.

