28 August 2009

## Random bits

27 August 2009

Nanophotography

Estonian cybercrime hub

Offshore oil platforms vulnerable to network attacks: this article contains the ludicrous mischaracterization that “[most oil platforms] rely on the decades-old supervisory control and data acquisition (SCADA) software, written in an era when the ‘open source’ tag was more important than security”

How entanglement could be deterministic

## A minimal periodic coloring theorem, part 3

26 August 2009

Last time I gave nontrivial examples of periodic colorings of $A_3,$ and demonstrated that the cyclic coloring of $A_{n-1}$ exists for all $n.$ More generally for $n$ composite, let $\{ \sigma_{1k} \}_{k=1}^n \cong G$ for a group $G$ of order $n.$ (By Cayley’s theorem, every finite group can be realized in this way.) Then $\Gamma_\sigma \cong G.$ In other words, every finite group of order $n$ can be realized as the coherent color shift group of a minimal periodic coloring of $A_{n-1}.$
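Cayley’s construction is concrete enough to code up directly. Here is a minimal sketch in Python (the code later in this post is MATLAB) that builds $\{\sigma_{1k}\}$ from the regular representation of an example group; the element labeling is an arbitrary choice:

```python
def regular_rep(elements, op):
    """sigma_{1k} as 1-based image tuples via Cayley's theorem: the k-th
    permutation sends the index of g to the index of op(g, g_k)."""
    idx = {g: i for i, g in enumerate(elements)}
    return [tuple(idx[op(g, gk)] + 1 for g in elements) for gk in elements]

# example: Z_2 x Z_2 with componentwise addition mod 2
elems = [(0, 0), (0, 1), (1, 0), (1, 1)]
sigmas = regular_rep(elems, lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2))

# sigma_{11} is the identity; the rest are sigma_{12}, sigma_{13}, sigma_{14}
for sig in sigmas:
    print(sig)

# Gamma_sigma is closed under composition, hence a group isomorphic to G
comp = lambda a, b: tuple(b[v - 1] for v in a)
assert {comp(a, b) for a in sigmas for b in sigmas} == set(sigmas)
```

With this labeling the output reproduces the $\sigma_{12}, \sigma_{13}, \sigma_{14}$ of the first $\mathbb{Z}_2^2$ coloring shown last time.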

It’s not immediately clear to me that every minimal periodic coloring arises in this way, but I would bet on it. (The first person who can demonstrate this is welcome to claim $.50 and a Sprite as a reward, and an equal reward goes to the first person who can provide some kind of actual picture of what happens with a nonabelian example, say using $S_3$ to color $A_5.$) Making the connection practically is a bit more involved. I did the simple thing and used permutation matrices to look for faithful representations of finite groups. First up is the code to produce a product of two permutations:

```matlab
function y = permprod(s1,s2)
% Product of permutations.
% s1 and s2 represent permutations in S_n as arrays (akin to the two-row
% notation and NOT cycle decomposition).
% This function maps s1 and s2 to permutation matrices, computes their
% product, and returns the corresponding array.
if size(s1,1)*size(s2,1) - 1
    'needs row arrays'
    return;
elseif size(s1,2)-size(s2,2)
    'sizes must match'
    return;
else
    n = size(s1,2);
end
I = eye(n);
% permutation matrices
m1 = I(s1,:);
m2 = I(s2,:);
m = m1*m2;
% now deduce the array
for j = 1:n
    s(j) = find(m(j,:),1);
end
y = s;
```

and next the inverse of a permutation:

```matlab
function y = perminv(s1)
% Inverse permutation.
% s1 represents a permutation in S_n as an array (akin to the two-row
% notation and NOT cycle decomposition).
% This function maps s1 to a permutation matrix, computes its inverse,
% and returns the corresponding array.
if size(s1,1) - 1
    'needs row array'
    return;
else
    n = size(s1,2);
end
I = eye(n);
% permutation matrix
m1 = I(s1,:);
m = inv(m1);
% now deduce the array
for j = 1:n
    s(j) = find(m(j,:),1);
end
y = s;
```

and finally a very crude way to see which choices of permutations produce coherent color shift groups and which don’t:

```matlab
function y = colorshiftcheck(varargin)
% Consider elements sjk (or sigma_{jk}) of S_n for 1 <= j,k <= n,
% subject to the following defining relations:
%   1) sjk*skj = e
%   2) sjk*skl = sjl
% This code starts with n-1 permutations s12,...,s1n and proceeds to
% determine whether 1) and 2) hold.
% Inputs are s12,...,s1n.
n = length(varargin)+1;
% get the s1j and check their size, then compute the inverses sj1
for j = 2:n
    temp = varargin(j-1);
    s{1,j} = temp{1};
    if size(s{1,j},2)-n
        j-1
        'th entry length incorrect'
        break;
    elseif size(s{1,j},1)-1
        j-1
        'th entry has too many rows'
        break;
    end
    s{j,1} = perminv(s{1,j});
end
s{1,1} = 1:n;
% form the entries sjk = sj1*s1k
for j = 2:n
    for k = 2:n
        s{j,k} = permprod(s{j,1},s{1,k});
    end
end
% now check that the assignments s{j,k} are consistent:
for j = 2:n
    for k = 2:n
        if sum(abs( perminv(s{k,j}) - s{j,k} ))
            'inverse failure at'
            [j k]
        end
        for l = 2:n
            if sum(abs( permprod(s{j,l},s{l,k}) - s{j,k} ))
                'product failure at'
                [j k l]
            end
        end
    end
end
y = s;
```

There are probably much more sophisticated ways to do this sort of thing, but the overall idea should be clear: you can specify a symmetry group you want reflected in a minimal periodic coloring, and combine these if you want nonminimal periodic colorings (another $.50 and a Sprite go to the first person who either demonstrates that every nonminimal periodic coloring is obtained in this way or provides a counterexample). For example, combining the two colorings shown last time produces a periodic 8-coloring of $A_3.$
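For reference, here is a compact Python sketch in the same spirit, using image tuples rather than permutation matrices. Since the defining relations hold automatically for any table built as $\sigma_{jk} = \sigma_{j1}\sigma_{1k},$ this version instead reports the minimality condition from part 1, namely whether every $\sigma_{jk}$ already lies in $\{\sigma_{1\ell}\}$ (equivalently, whether $\Gamma_\sigma$ has order $n$):

```python
def permprod(s1, s2):
    # product of permutations as 1-based image tuples; applies s1, then s2,
    # matching the permutation-matrix convention of the MATLAB code above
    return tuple(s2[v - 1] for v in s1)

def perminv(s1):
    # inverse permutation
    inv = [0] * len(s1)
    for i, v in enumerate(s1):
        inv[v - 1] = i + 1
    return tuple(inv)

def colorshiftcheck(*s1k):
    # build the full table sigma_{jk} = sigma_{j1} sigma_{1k} and report
    # whether every sigma_{jk} already appears in the first row, i.e.,
    # whether Gamma_sigma has order n (the minimality condition)
    n = len(s1k) + 1
    s = {(1, 1): tuple(range(1, n + 1))}
    for j in range(2, n + 1):
        s[1, j] = s1k[j - 2]
        s[j, 1] = perminv(s[1, j])
    for j in range(2, n + 1):
        for k in range(2, n + 1):
            s[j, k] = permprod(s[j, 1], s[1, k])
    row1 = {s[1, l] for l in range(1, n + 1)}
    return all(v in row1 for v in s.values())

# the cyclic sigma_{1k} for A_3 pass; mixing in an unrelated involution fails
print(colorshiftcheck((2, 3, 4, 1), (3, 4, 1, 2), (4, 1, 2, 3)))  # True
print(colorshiftcheck((2, 1, 4, 3), (3, 4, 1, 2), (4, 1, 2, 3)))  # False
```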

The reason we looked at this sort of thing at Equilibrium is (briefly) that it is possible to make individual packets correspond to vectors of the form $-e_j + e_k$, where the standard basis is indicated and $j$ and $k$ respectively index the source and destination attributes (each one of $n$ possibilities) that characterize the packet. These sorts of vectors generate $A_{n-1}.$ (For reasons outlined in a whitepaper of ours [see our website], $n$ can generally be taken to be quite small.) The idea then was to take the induced trajectory (here, the sequence of colors) corresponding to a sequence of observed packets and use various tools from probability to characterize the network traffic (a real explanation of this will have to wait for future posts, as will an explanation of how we can do something along these lines without using the coloring theorem).

## Dynamical Bias in the Dice Roll

24 August 2009

An entry today in Bruce Schneier’s blog alerted me to the existence of a paper by Diaconis, Holmes, and Montgomery called “Dynamical Bias in the Coin Toss” and published in SIAM Review 49, 211 (2007) (a preprint is available from Schneier’s entry). It reminded me of a little blurb I wrote around 2005 or so about a similar sort of thing, and I figured I’d put it up here with minor edits and a few links added:

***

Consider the familiar six-sided die. It is common practice to simply state that the probability of any of the six outcomes of a toss is 1/6. Let’s look at this in some more detail. The probabilistic constructions are mostly straightforward, even in detail: the sample space is {1, 2, 3, 4, 5, 6} and the $\sigma$-algebra of events is generated by the singletons. Justifying the uniform measure, however, is a task for physics—not mathematics or probability. The die is a physical object and subject to physical laws, and we require a connection from the physics to obtain the common idealization.

A qualitative sketch of the justification is not too hard: a gambler’s tosses of a die (or dice) at a table are represented by subtly different initial conditions collectively generating the phase space average. The dynamics of the die-table system are well understood: there are gravitational and hard-core potentials and a dissipative frictional contribution. Over short time scales after the initial impact of the die on the table, dissipation can be ignored. One could argue by analogy with ergodic results for billiards in an attempt to justify the ergodic hypothesis, and by extension the uniform probability measure.

The lines of this sketch become less clear upon closer examination, though: it is plausible that there might be correlations in the tosses that result in a “non-metrically indecomposable” system, for instance. Such an objection might be overcome by formally stipulating that the initial conditions of the die are given by suitably nice distributions on initial position, velocity, and orientation. But there still might be some correlations hidden between these distributions and the symmetries of the table, or in the dissipative dynamics. Upon further reflection people generally admit that the problem is so difficult to really break down that it appears operationally intractable, [1] and this intractability is the actual justification for selecting the uniform 1/6 measure. [2]

Despite this apparent intractability, there might be deviations from uniformity arising from determinate causes, as the following argument suggests (but does not establish or even claim). The moment of inertia of a cube of mass M and edge length a about its center of mass is $I_0 = Ma^2/6.$ Now consider a typical die as indicated:

Schematic of a typical die. NB. Some dice differ in the orientations of the faces with two, three, and/or six "pips".

Each of the 21 indentations or “pips” affects the moment of inertia about one of the principal axes, which are taken to (approximately) coincide with the coordinate axes, which in turn are taken to have their origin at the center of the convex hull of the die (i.e., the cube without pips). Assume each pip corresponds to a negative volume $\delta V$. Now $\rho \equiv Ma^{-3}$ and the corresponding mass deficit for a pip is $\delta m = \rho \delta V.$

Assume for convenience that the pips are small cylindrical indentations of negligible depth. If, for instance, a pip’s small dimension is along the z-axis, then the parallel-axis theorem shows that its moment of inertia about that axis is of the form $\delta m (r^2/2 + d^2),$ where r is the pip’s radius and d the distance from the pip’s center to the z-axis. Its moments of inertia about the x- and y-axes are of the form $\delta m (r^2/4 + d^2).$

The geometry of the faces is essentially captured by the following figure:

Face geometry.

Note that $c^2 = a^2/4 + b^2/2.$ Regardless of the number of pips on a face, each pip is either in a position as indicated in the figure above, or in the center of the face.

The die’s moment of inertia about the z-axis is given by

$I_z = I_0 - \sum_{j=1}^6 \delta I_{j,z}$

where $\delta I_{1,z} = \delta m (r^2/2 + 0),$ $\delta I_{2,z} = 2\delta m (r^2/4 +c^2),$ $\delta I_{3,z} = \delta I_{2,z} + \delta m (r^2/4 +a^2/4),$ $\delta I_{4,z} = 4\delta m (r^2/4 +c^2),$ $\delta I_{5,z} = \delta I_{4,z} + \delta m (r^2/4 +a^2/4)$ and $\delta I_{6,z} = 4\delta m (r^2/2 + b^2) + 2\delta m (r^2/2 + b^2/2).$ This simplifies to

$I_z = I_0 - \delta m \left( 7r^2 + \frac{a^2}{2} + 5b^2 + 12c^2 \right).$

Similarly,

$I_y = I_0 - \delta m \left( 7r^2 + a^2 + 6b^2 + 10c^2 \right) = I_x.$

Moreover, substituting for $c^2$ shows that in fact $I_x = I_y = I_z.$ This is not an accident: it depends not only on the assignment of outcomes to particular faces, but also on the orientations of the faces relative to each other. It also explains why the particular configuration of pips and faces depicted above is almost always chosen for casino dice.
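The cancellation is mechanical enough to check by computer. Below is a sketch in Python using exact rational arithmetic; the per-face deficits are transcribed directly from the $\delta I_{j,z}$ terms above, and since both deficits are fixed linear combinations of $r^2, a^2, b^2,$ agreement at a few independent sample values establishes the identity:

```python
from fractions import Fraction as F

def deficits(r2, a2, b2):
    # per-face moment-of-inertia deficits (in units of the pip mass deficit
    # delta m), transcribed from the delta I_{j,z} terms in the text;
    # r2, a2, b2 stand for r^2, a^2, b^2
    c2 = F(a2, 4) + F(b2, 2)  # c^2 from the face-geometry figure
    dIz = (F(r2, 2)                                        # face 1 (normal || z)
           + 2*(F(r2, 4) + c2)                             # face 2
           + 2*(F(r2, 4) + c2) + F(r2, 4) + F(a2, 4)       # face 3
           + 4*(F(r2, 4) + c2)                             # face 4
           + 4*(F(r2, 4) + c2) + F(r2, 4) + F(a2, 4)       # face 5
           + 4*(F(r2, 2) + b2) + 2*(F(r2, 2) + F(b2, 2)))  # face 6 (normal || z)
    dIy = 7*r2 + a2 + 6*b2 + 10*c2  # the I_y (= I_x) deficit from the text
    return dIz, dIy

# the two deficits agree identically, so I_x = I_y = I_z
for r2, a2, b2 in [(1, 1, 1), (2, 5, 7), (3, 11, 13)]:
    z, y = deficits(r2, a2, b2)
    assert z == y
print("I_z deficit == I_y deficit at all samples")
```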

However, the center of mass of the die is not at the center of the corresponding cube, but rather is slightly displaced in the direction $(-3, -1, -5)$. Thus the moments of inertia about the principal axes most nearly parallel to the z, x, and y axes have, respectively, the greatest, intermediate, and least values. Consequently, rotation about the x-principal axis is unstable, and the other two principal rotations are stable. [3] The instability is of course weak because of the small contribution from the pips. (An essentially identical derivation shows that the same stability result holds for painted or even filled pips, so long as the densities of the die body and filling are not identical: however, the instability will be correspondingly weaker in these cases.)

Nevertheless, there are the beginnings of a possibly viable strategy for craps, based on the (questionable) supposition that a dedicated gambler might be able to subtly influence rolls of the dice by “setting” the initial conditions of the dice. [4] Though actual success sounds unlikely, it cannot be ruled out a priori, and surprisingly, the physical analysis suggests a detailed strategy for a particularly common bet. First, however, I will mention generic “odds bets”, in which a single outcome from 2 to 12 is bet on. These offer no advantage to the house or player for uniformly random outcomes, though a line bet must also be made, which is designed to confer a slight net advantage to the house. By the simple expedient of choosing stable axes, however, these net odds might conceivably be perturbed enough to give the player a slight advantage.

A more complicated but in principle nontrivial strategy for so-called “pass line” bets is not altogether implausible. [5] In such bets, the losing or “craps” outcomes are 2, 3, and 12; the winning outcomes are 7 and 11. Any other outcome—4, 5, 6, 8, 9 or 10—is a “point”, and throws are repeated until either the point is duplicated (the winning outcome) or a 7 is rolled (the losing outcome). By setting the dice about a single stable axis, but with opposing orientations, you might expect a slight increase in the number of 7 rolls, independently of the details of successive throws. The nominal determinant of such an effect would be the average extent to which the two dice follow similar trajectories in the course of a single throw. [6] The z-principal axis would be preferred over the y-principal axis for our pass line strategy, because craps would then be slightly less likely—pure rotations about z-axes parallel to the table cannot produce them—and the winning outcome of 7 slightly more likely (but not, at least to first order, the winning outcome of 11).

If a point occurred, the proper thing to do would be to attempt to duplicate the point in the succeeding throws by reproducing the detailed throw mechanics. Even given the usual requirement to hit the back wall of the table, a reasonable degree of skill might thereby suffice to improve point duplication to a statistically significant degree. Though it seems the strategy probably offers no practical utility, it nevertheless appears that the argument is at least reasonable in principle. That said, any truly successful series of results from our pass line strategy would probably also require giveaways in the player’s throwing mechanics that would result in her identification as a dice controller, and therefore almost certainly entail expulsion from the casino. Thus ends our Gedankenglückspiel.

One can imagine a more detailed die design where the moments of inertia about the center of mass are exactly the same, but such a die would be difficult to manufacture, and imperfections in that manufacturing process might even negate the supposed remedy. The bottom line is that the detailed dynamics are operationally intractable, but even so, regularities might arise if the ensemble of initial conditions has some special property.

A similar argument could be made for almost any sufficiently complex system. There is very little truly random (in the colloquial sense) about most complex phenomena, but almost all the details are operationally intractable. This intractability is at the heart of a definition of randomness that is due to Kolmogorov and elaborated by Chaitin. It also highlights a point of considerable importance: the fact that the detailed specification of the initial state of a complex system is an intractable task is the normal justification for invoking the traditional methodology of statistical physics. Arguments that lead from nonequilibrium initial conditions to paradoxes in the context of thermodynamics and statistical physics (e.g., arguments based on spin-echo experiments) usually fail to recognize that the nonequilibrium states involved in these arguments are relatively easy to describe, and that such states are usually outside the scope of the theory.

There might well be a gambler in the real world who generally wins at the craps table, having carefully refined her tossing motion and studied the effects of the table geometry and material. [7] This gambler would be making a living by exploiting our naiveté regarding the scope and applicability of ergodic theorems, or even our understanding of what randomness is. If any of us knew of such a person we would probably envy her. If you ran a casino or sat on a gaming commission, you would do well to see to it that such a person was barred from gambling. By the same token I would do well to ensure that perpetrating any such shenanigans with an implementation of [a security system such as Equilibrium's] should be prohibitively difficult. This “pit boss” frame of mind is helpful when selecting [configurations for such a system].

**Footnotes**

[1] “It is impossible for a Die, with such determin’d force and direction, not to fall on such a determin’d side, only I don’t know the force and direction which makes it fall on such a determin’d side, and therefore I call that Chance, which is nothing but want of Art…” From Arbuthnot’s preface to “Of the laws of chance” (1692), as reproduced in Grimmett and Stirzaker.

[2] Performing the experiment might produce a distribution that exhibits a “statistically significant” (say, as measured by some ad hoc protocol such as a chi-squared test, or even simple common sense) deviation from the uniform one. In fact just such an experiment was performed and used in an attempt to criticize maximum entropy methods: the uniform measure turned out to deviate statistically significantly from the measured distribution. Jaynes turned the criticism on its head and provided a convincing demonstration that the physics of this particular die-tossing experiment was improperly characterized. (See Jaynes, E. T. “Where do we stand on maximum entropy?” In The Maximum Entropy Formalism, Levine, R. D. and Tribus, M., eds. MIT (1979).) He detailed and justified Ansätze (equivalently, constraints) expressing data-skewing reflecting the possible effects of the die’s pips on the center of gravity and of imprecise milling: he then applied MAXENT to obtain a fitted distribution with no statistically significant deviation from the experimental one. It seems plausible that a careful characterization of the die used in the experiment would have supported Jaynes’ theoretical result. Certainly careless applications of MAXENT can encounter problems.

[3] Recall that the rotation of an asymmetrical top is stable about its principal axes with greatest and least moments of inertia, and unstable about the intermediate principal axis. See, e.g., Landau, L. D. and Lifshitz, E. M. Mechanics. 3rd ed. Butterworth-Heinemann (1976).

[4] Techniques often advertised for controlling dice throws in craps are dependent on the player’s supposed ability to dictate the detailed geometry of the throw by setting, establishing a consistent throwing motion through keeping a rhythm or other methods, attempting to have the dice impact the table at 45-degree angles, etc. (See, e.g. books by Frank Scoblete or http://www.goldentouchcraps.com/, where \$1295 buys the inclined punter two days of coaching.) Setting is of at best questionable utility, and the other techniques are still more dubious. It should also be mentioned that while some casinos disallow even setting, the rest all require setting to be done quickly so as not to disrupt play.

[5] The pass line is the most common example of the mandatory line bet: the house advantage for it is roughly 1.5%. Most other line bets are considerably worse choices.

[6] The motion of the dice appears not to be chaotic in any strong technical sense. The dice trajectories should diverge most strongly at their termini, where, for instance, a little extra kinetic energy could result in a die just barely tipping over one more time. The motion therefore does not even seem to be intractable in principle, but merely in practice; and our arguments do not seem to rely on anything so foolish as “beating Lyapunov exponents”. The dice are certainly correlated with each other, even for a typical throw, and it is not unreasonable to suppose that these correlations remain localized in a small enough region of phase space during a time evolution so that they might be made manifest along the lines described. Of course, whether they actually are, or even can be in practice, is another issue entirely.

[7] Gamblers that appear to fit this description were profiled in the television feature Breaking Vegas: Dice Dominator. The History Channel (2005). Thanks to Brian Hearing for alluding to this reference.

***

The truly ironic thing about this is that I happen to know a PhD physicist who is also a former craps dealer. He hasn’t read this, but I’ve spoken about it with him, and he assured me that the pip densities are sufficiently carefully controlled to defeat any sort of stability-oriented “attack” such as this. Not that I was actually going to try the experiment…

Update: typos fixed and formatting improved.

## A minimal periodic coloring theorem, part 2

21 August 2009

As mentioned previously, for $n=4$ the situation regarding minimal periodic colorings of $A_{n-1}$ is more complicated. The assignment

$\sigma_{12} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 1 \end{smallmatrix} \right), \quad \sigma_{13} \equiv \left( \begin{smallmatrix}1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{smallmatrix} \right), \quad \sigma_{14} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 2 & 3 \end{smallmatrix} \right)$

works and produces the essentially unique periodic cyclic coloring of $A_3.$ This name reflects the fact that the induced $\Gamma_\sigma$ is isomorphic to the cyclic group $\mathbb{Z}_4.$

The cyclic coloring of $A_3$. For comparison with a later figure, take 0 to be the vertex at the top of this figure.

There are also essentially three inequivalent periodic 4-colorings with $\Gamma_\sigma \cong \mathbb{Z}_2^2$ obtained in this way, corresponding to

$\sigma_{12} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \end{smallmatrix} \right), \quad \sigma_{13} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{smallmatrix} \right), \quad \sigma_{14} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 4 & 3 & 2 & 1 \end{smallmatrix} \right);$

or

$\sigma_{12} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \end{smallmatrix} \right), \quad \sigma_{13} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 2 & 1 \end{smallmatrix} \right), \quad \sigma_{14} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 4 & 3 & 1 & 2 \end{smallmatrix} \right);$

or

$\sigma_{12} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 1 & 3 \end{smallmatrix} \right), \quad \sigma_{13} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 3 & 1 & 4 & 2 \end{smallmatrix} \right), \quad \sigma_{14} \equiv \left( \begin{smallmatrix} 1 & 2 & 3 & 4 \\ 4 & 3 & 2 & 1 \end{smallmatrix} \right),$

respectively.

The $\mathbb{Z}_2^2$ coloring of $A_3$ corresponding to $\sigma_{12} \equiv \left(\begin{smallmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \end{smallmatrix} \right), \quad \sigma_{13} \equiv \left(\begin{smallmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{smallmatrix} \right),$ and $\sigma_{14} \equiv \left(\begin{smallmatrix} 1 & 2 & 3 & 4 \\ 4 & 3 & 2 & 1 \end{smallmatrix} \right).$

For larger values of $n$ it is impractical to catalog the coherent color shift groups in silico via brute force: either a reasonably sophisticated algorithm or some (presumably nontrivial representation-theoretic) calculation in papyro is wanted. Nevertheless, it is easy to see that a cyclic coloring of $A_{n-1}$ exists for all $n$ (and hence that for $n$ prime the cyclic coloring is the only periodic $n$-coloring): simply let

$s := \left(\begin{smallmatrix} 1 & 2 & \cdots & n-1 & n \\ 2 & 3 & \cdots & n & 1 \end{smallmatrix} \right).$

The cyclic coloring is obtained by setting $\sigma_{1k} := s^{k-1}.$
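The construction is one line of code per power of $s.$ Here is a Python sketch, with a closure check confirming that $\{\sigma_{1k}\}$ is itself a group, so that the sufficient condition from part 1 is met and $\Gamma_\sigma \cong \mathbb{Z}_n$:

```python
def cyclic_sigmas(n):
    # sigma_{1k} := s^{k-1}, with s the n-cycle i -> i+1 (mod n), written as
    # 1-based image tuples; sigma_{11} = e is included
    s = tuple(i % n + 1 for i in range(1, n + 1))
    sigmas = [tuple(range(1, n + 1))]
    for _ in range(n - 1):
        sigmas.append(tuple(s[v - 1] for v in sigmas[-1]))  # next power of s
    return sigmas

sig = cyclic_sigmas(5)
print(sig[1])  # the generator s: (2, 3, 4, 5, 1)

# {sigma_{1k}} is closed under composition, so Gamma_sigma = Z_n and by the
# minimal periodic coloring theorem the induced coloring is periodic
S = set(sig)
assert len(S) == 5
assert all(tuple(b[v - 1] for v in a) in S for a in S for b in S)
```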

**Color quotients and the cyclic color space**

For $k \ge n$ we may consider a $k$-coloring $\kappa$ of $A_{n-1}$ and the attendant color quotient $A_{n-1}/\sim$, where $x \sim y \iff \kappa(x) = \kappa(y)$. The color projection map $\Pi : A_{n-1} \rightarrow A_{n-1}/\sim$ is defined in the obvious way, and we may identify $\Pi$ and $\kappa$ with only a slight abuse of notation. In particular, if $\kappa$ is the cyclic coloring, the resulting color quotient is called the cyclic color space and denoted $\mathbb{A}_{n-1}.$

Consider a càdlàg random walk $\xi_t$ on $A_{n-1}$, for $t \ge t_0 \equiv 0$, and write $\kappa_t := \Pi \circ \xi_t \equiv \kappa \circ \xi_t$ for the projected walk on $\mathbb{A}_{n-1}.$ Let $t_\ell$ denote the $\ell$th jump time of $\xi_t$ (and hence also $\kappa_t$), and in a further abuse of notation write $\kappa_\ell := \kappa(\xi_{t_\ell}).$ Let $j_\ell,k_\ell$ be defined via $\xi_{t_\ell} - \xi_{t_{\ell-1}} = -e_{j_\ell} + e_{k_\ell}.$ Now the color shift from $\kappa_\ell$ to $\kappa_{\ell+1}$ is

$\sigma_{\kappa_\ell \kappa_{\ell+1}} = \sigma_{\kappa_\ell 1} \sigma_{1 \kappa_{\ell+1}} = s^{1-\kappa_\ell}s^{\kappa_{\ell+1}-1} = s^{\kappa_{\ell+1}-\kappa_\ell}.$

Without loss of generality, let $\kappa_0 = 1$ and $\kappa(\xi_0-e_1+e_k) = k$. Define

$\rho_n(m) := (m-1) \!\!\!\!\! \mod n + 1.$

Note that $s^\ell : j \mapsto s^\ell \cdot j \equiv \rho_n(j + \ell)$, where $\cdot$ indicates the cyclic color shift action. Now

$\kappa(\xi_0-e_j+e_k) = \kappa([\xi_0-e_1+e_k]-e_j+e_1) \equiv \sigma_{j1} \cdot k$

$= \sigma_{1j}^{-1} \cdot k = s^{-j+1} \cdot k = \rho_n(k-j+1).$

Therefore $\kappa_1 = \rho_n(1+ k_1 - j_1)$, and we have that the color shift from $\kappa_0$ to $\kappa_1$ is

$\sigma_{\kappa_0 \kappa_1} = s^{\kappa_1-\kappa_0} = s^{\rho_n(k_1 - j_1 + 1)-1} = s^{(k_1 - j_1) \!\!\!\!\! \mod n} = s^{k_1 - j_1}.$

We proceed along similar lines to compute $\kappa_{\ell+1}$ from $\kappa_\ell,$ $j_\ell,$ and $k_\ell:$

$\boxed{\kappa_{\ell+1} = \sigma_{j_\ell k_\ell} \cdot \kappa_\ell = s^{k_\ell - j_\ell} \cdot \kappa_\ell = \rho_n(k_\ell - j_\ell + \kappa_\ell).}$

This equation illustrates the ease and utility of projecting random walks from $A_{n-1}$ to $\mathbb{A}_{n-1}.$ It is also straightforward to show that for $a \in \mathbb{Z}^n$ the assignment

$\kappa \left( \sum_{m=1}^n a_m(-e_1 + e_m) \right) = \left( \sum_{m=1}^n a_m (m-1) \right) \!\!\!\!\! \mod n + 1$

produces the cyclic coloring.
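The projection is easy to exercise numerically. Here is a Python sketch that runs a random walk on $A_{n-1}$ (jump times suppressed, since only the embedded chain matters here) and checks the boxed update against the closed-form coloring at every step:

```python
import random

def rho(m, n):
    # rho_n(m) = (m - 1) mod n + 1
    return (m - 1) % n + 1

def kappa(x):
    # the closed-form cyclic coloring from the text: since sum(x) = 0 we may
    # take a_m = x_m, so kappa(x) = (sum_m x_m (m-1)) mod n + 1 (1-based m)
    n = len(x)
    return rho(sum(x[m] * m for m in range(n)) + 1, n)

n = 6
random.seed(0)
x = [0] * n      # start the walk at the origin, so kappa_0 = 1
col = kappa(x)
for _ in range(1000):
    j, k = random.sample(range(n), 2)  # a jump -e_j + e_k (0-based indices)
    x[j] -= 1
    x[k] += 1
    # the boxed update kappa_{l+1} = rho_n(k_l - j_l + kappa_l); only the
    # difference k - j enters, so 0-based indices are harmless
    col = rho(k - j + col, n)
    assert col == kappa(x)  # agrees with the closed form at every step
print("boxed update matches the closed-form coloring")
```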

In a followup post I’ll sketch how to identify the coherent color shift groups of low order and hint at some of the ways in which this construction might be applied.

## Random bits

20 August 2009

Interesting things quantum and gravitational…

Quantum solution to the arrow of time (since it’s in PRL I will take this on faith for the moment)

Observational bounds on gravitational waves

Recreating the Big Bang Inside Metamaterials

## Random bits

19 August 2009

24th Air Force stands up

On the network attacks during the Russia-Georgia war

Reliable replacement warhead tug-of-war (just because vacuum tubes are old technology doesn’t mean they don’t still work)

18 August 2009

## A minimal periodic coloring theorem, part 1

17 August 2009

In the course of researching various martingale-based techniques for profiling network traffic I developed a cute little graph coloring theorem that may well be known, or even trivial using more sophisticated methods. But it was not well known or trivial to me. If it is to you, please contribute references or techniques in a comment to this post.

Although it is not reflected in Equilibrium’s technology (we did monkey around with it for a while before deciding it wasn’t that useful with the sorts of traffic classification schemes and resulting traffic patterns that would typically be encountered in the real world), it was part of our R&D process. But it has applications for devising interesting or useful boundary conditions for various lattice models in physics, such as lattice gauge theory, and it may be useful or interesting to other folks, which is why we’re sharing it.

(The rest of this post is written in a typical mathematical style, so if you do not typically read mathematics it may be tougher sledding than otherwise. Sorry.)

We may associate a graph $(V,E)$ to the root lattice

$A_{n-1} := \{ x \in \mathbb{Z}^n : x_1 + \cdots + x_n = 0\}$

by taking $V \equiv A_{n-1}$ and $(x,y) \in E \iff \lVert x - y \rVert= \sqrt{2}.$ In an abuse of notation, we write $A_{n-1} \equiv (V,E).$ It is easy to see that $A_{n-1}$ is a perfect graph: i.e., for every induced subgraph $G \subset (V,E)$ there is an independent set of vertices $H$ s.t. $\omega(G - H) < \omega(G),$ where $\omega(\cdot)$ is the clique number. Since the chromatic and clique numbers coincide for perfect graphs, we have $\chi(A_{n-1}) = n:$ i.e., $A_{n-1}$ is $n$-colorable. Here a $k$-coloring $\kappa : V \rightarrow \{1,\dots,k\}$ is a map that satisfies $\kappa(x) \ne \kappa(y)$ for $(x,y) \in E.$ Equivalently, the inverse images $\kappa^{-1}(j)$ are independent.

Producing a periodic $n$-coloring is not completely trivial, however. The difficulty stems from the fact that on one hand $A_{n-1}$ cannot be colored with fewer than $n$ colors, while on the other hand periodicity requires that some next-nearest-neighbor vertices share colors. It is clear that there is (up to color permutation) a single $n$-coloring of $A_{n-1}$ for $n=2,3,$ but it can be shown (i.e., we will show later) that $A_3$ admits more than one periodic 4-coloring.

To begin the construction of a periodic $n$-coloring, let $\kappa(0) = 1.$ Pick a primitive lattice simplex $\Delta$ with $0$ as a vertex and extend $\kappa$ by coloring the remaining $n-1$ vertices of $\Delta.$ Different colorings of $\Delta$ correspond to overall color permutations. Now the key observation is that for $\kappa$ to be periodic while still using only $n$ colors, we must have for $(x,y) \in E$ that

$\kappa(x) = \kappa(x+z) \iff \kappa(y) = \kappa(y+z).$

Therefore it is only necessary to consider the coloring on a neighborhood of a vertex.

We do this as follows. First note that the neighbors of $x \in A_{n-1}$ are of the form $x - e_j + e_k$ for $j \ne k.$ Introduce an ordering $\prec$ on the neighbors, e.g.

$x - e_j + e_k \prec x - e_\ell + e_m \iff (j < \ell) \lor (j = \ell \ \land \ k < m).$

Write $\nu(x)$ for the ordered vertex neighborhood of $x.$ Now moving from color $\kappa(x)$ to color $\kappa(y)$ always amounts to acting on $\kappa(\nu(x))$ by a fixed permutation $\sigma_{\kappa(x)\kappa(y)} \in S_n,$ which we term a coherent color shift. It is not hard to see that in order to induce a coherent coloring the $\sigma_{jk}$ must satisfy

$\sigma_{kj} = \sigma_{jk}^{-1}; \quad \sigma_{jk}\sigma_{k\ell} = \sigma_{j\ell}.$

These conditions correspond to the coherence of color shifts on loops of 2 and 3 (or more) steps, respectively. They also ensure that $\{e\} \cup \{\sigma_{jk}\}_{j \ne k}$ forms a group $\Gamma_\sigma.$ In the event that $\Gamma_\sigma$ produces a periodic $n$-coloring it is called a coherent color shift group.

The equations above are necessary conditions for a coherent color shift. A sufficient condition is that

$\{\sigma_{jk}\}_{j \ne k} = \{\sigma_{1k}\}_{k=2}^n.$

For if we set $\sigma_{k1} := \sigma_{1k}^{-1}$ and $\sigma_{jk} := \sigma_{j1} \sigma_{1k}$, we have that

$\sigma_{jk} \sigma_{k\ell} = \sigma_{j1} \sigma_{1k} \sigma_{k1} \sigma_{1\ell} = \sigma_{j1} \sigma_{1\ell} = \sigma_{j\ell}$

for all $k$. In this event $\Gamma_\sigma$ is of order $n$ and acts freely via color shifts, and therefore has no fixed points, so that no two adjacent vertices are the same color. At the same time, the coloring is periodic because the color shifts depend only on the colors and not the lattice site. If $\Gamma_\sigma$ has order less than $n$, the color shift action is not free and there is at least one fixed point, which produces two adjacent vertices of the same color. If on the other hand $\Gamma_\sigma$ has order greater than $n$, then the coloring cannot be periodic.

We can put the preceding argument in slightly different language: there exists $j \ne k$ such that $\sigma_{1j} = \sigma_{1k}$ iff $\sigma_{jk} = e$, and in this case we cannot have an $n$-coloring. It follows that we must have $\lvert \Gamma_\sigma \rvert \ge n$. Moreover, if for $j \ne k$ there does not exist some $\ell$ such that $\sigma_{jk} = \sigma_{1\ell}$, then for $x$ generic,

$\sigma_{jk} \cdot \kappa(x) \notin \{\sigma_{1\ell} \cdot \kappa(x)\} = \{1,\dots,n\} \setminus \{\kappa(x)\},$

and in this case $\kappa$ cannot be periodic. (Here we have used $\cdot$ to indicate the action of a coherent color shift.) It follows that a coherent color shift group constructed from some $\{\sigma_{1k}\}_{k=2}^n$ must satisfy $\lvert \Gamma_\sigma \rvert = n$, or equivalently that any choice of $\{\sigma_{1k}\}_{k=2}^n$ producing $\Gamma_\sigma$ of order $n$ when augmented using equations from above yields a periodic $n$-coloring. We refer to this as the $A_{n-1}$ minimal periodic coloring theorem.

For the smallest values of $n$ we can construct all possible sets $\{\sigma_{1k}\}_{k=2}^n$ and determine whether or not they induce a coherent color shift group along the lines above. The case $n=2$ is trivial. For $n = 3,$ the only such choices that work are those with $\sigma_{12}$ and $\sigma_{13}$ both cyclic permutations, so there is essentially only one 3-coloring of $A_2.$ It is left as an exercise for the reader to draw a picture.
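The $n = 3$ case can be confirmed by brute force in a few lines; here is a Python sketch that enumerates all ordered pairs of distinct nonidentity permutations $(\sigma_{12}, \sigma_{13})$ in $S_3$ and keeps those generating a group of order 3:

```python
from itertools import permutations

def closure(gens, n):
    # subgroup of S_n generated by gens (permutations as 1-based image tuples)
    group = {tuple(range(1, n + 1))}
    frontier = set(gens)
    while frontier:
        group |= frontier
        frontier = {tuple(b[v - 1] for v in a)
                    for a in group for b in group} - group
    return group

e = (1, 2, 3)
good = [(s12, s13)
        for s12 in permutations((1, 2, 3)) if s12 != e
        for s13 in permutations((1, 2, 3)) if s13 not in (e, s12)
        if len(closure([s12, s13], 3)) == 3]
print(good)  # only the two 3-cycles, in either order
```

As claimed, only the choices with $\sigma_{12}$ and $\sigma_{13}$ both cyclic permutations survive; any pair involving a transposition generates a group of order 6 (or 2) rather than 3.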

For $n=4$ the situation is both more complicated and more interesting, and we’ll cover it (with pictures) in the sequel.

Update: typo corrected, $A_2$ is 3-colorable.