Olshanetsky-Perelomov Solution of the Calogero-Moser System

Calogero-Moser System

In joint work with Nam-Gyu Kang and Nikolai Makarov we discovered that the driving functions of multiple SLE(0) evolve according to the Calogero-Moser dynamical system. The latter describes the trajectories of $N$ particles on the real line $\mathbb{R}$ that interact via the force determined by an inverse-square pair potential, namely:

\begin{equation}\label{eq:SecondOrder} \ddot{x}_j = \sum_{k \neq j} \frac{2}{(x_j - x_k)^3}, \end{equation}

for $j=1,\ldots,N$. The $2$ on the right hand side is a somewhat arbitrary normalization, and in fact to generate multiple SLE(0) curves via the forward Loewner flow one needs a negative sign on the right hand side, but the focus of these notes is Calogero-Moser so we will stick with the repulsive sign above, which is the more common presentation. Since the left hand side is an acceleration, the right hand side is the force acting on each particle (by Newton’s Law). The force clearly blows up as particles approach each other, but because of the sign the interaction is repulsive. One might therefore expect that the particles never actually collide, and indeed this is the case. It is useful to note that this is an instance of a Hamiltonian system, which makes all the tools of that theory available for the analysis. The Hamiltonian is

\begin{equation}\label{eq:Hamiltonian} H(\mathbf{x}, \mathbf{p}) = \frac{1}{2} \sum_{i=1}^N p_i^2 + \frac{1}{2} \sum_{j \neq k} (x_j - x_k)^{-2} \end{equation}

where $\mathbf{x} = (x_1, \ldots, x_N)$ and $\mathbf{p} = (p_1, \ldots, p_N)$ are the position and momentum variables. The standard Hamiltonian equations of motion become

\begin{equation}\label{eq:HamiltonianDynamics} \dot{x}_i = \frac{\partial H}{\partial p_i} = p_i, \quad \quad \dot{p}_i = -\frac{\partial H}{\partial x_i} = \sum_{k \neq i} \frac{2}{(x_i - x_k)^3}, \end{equation}

so that differentiating $\dot{x}_i$ once more clearly leads to the second order dynamics \eqref{eq:SecondOrder}. Given the Hamiltonian \eqref{eq:Hamiltonian} it is a purely mechanical exercise to derive these equations of motion; in the reverse direction it takes only a little experience to guess a Hamiltonian that produces the desired second order dynamics.
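Before going further, here is a minimal numerical sketch of the dynamics \eqref{eq:SecondOrder} using scipy (the helper name `cm_rhs` and the initial data are my own illustrative choices, not from any reference; the later sketches reuse these objects):

```python
# Illustrative sketch (names are hypothetical): integrate the Calogero-Moser
# equations of motion as a first-order system in (x, p).
import numpy as np
from scipy.integrate import solve_ivp

def cm_rhs(t, state, N):
    """Right-hand side: (x, p) -> (p, force), with force_j = sum_{k != j} 2/(x_j - x_k)^3."""
    x, p = state[:N], state[N:]
    diff = x[:, None] - x[None, :]          # diff[j, k] = x_j - x_k
    np.fill_diagonal(diff, np.inf)          # avoids division by zero at j = k
    force = np.sum(2.0 / diff**3, axis=1)
    return np.concatenate([p, force])

N = 3
x0 = np.array([-1.0, 0.1, 1.3])   # initial positions (distinct!)
p0 = np.array([0.5, -0.2, 0.0])   # initial momenta
sol = solve_ivp(cm_rhs, (0.0, 2.0), np.concatenate([x0, p0]),
                args=(N,), rtol=1e-10, atol=1e-12, dense_output=True)
print(sol.y[:N, -1])              # particle positions at t = 2
```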

Since the late 1960s and early 1970s this system has received significant attention due to its integrability (see here for a useful history). Integrability simply means that one can prove the existence of $N$ integrals of motion (i.e. quantities that are conserved under the flow) and that these integrals are in involution (this condition is somewhat more technical so we won’t say anything more - but it is important). Calogero-Moser seems to have been one of the first dynamical systems whose integrability was proved by finding a Lax pair for the system. Since the pair is important for the rest of the story we mention it now. In this case it turns out to be two $N \times N$ matrices $L = L(\mathbf{x}, \mathbf{p})$ and $M = M(\mathbf{x}, \mathbf{p})$ whose entries are explicitly defined in terms of $\mathbf{x}$ and $\mathbf{p}$. We write them out entrywise:

\begin{equation}\label{eq:LaxPair1} L_{jj} = p_j, \quad L_{jk} = i (x_j - x_k)^{-1} \textrm{ for } j \neq k \end{equation}

and

\begin{equation}\label{eq:LaxPair2} M_{jj} = i \sum_{k \neq j} (x_k - x_j)^{-2}, \quad M_{jk} = - i (x_j - x_k)^{-2} \textrm{ for } j \neq k, \end{equation}

where $i$ here denotes the imaginary unit rather than an index.
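In code the pair is easy to build; a short numpy sketch (the names `lax_L` and `lax_M` are mine, and note that $M$ in fact depends only on $\mathbf{x}$):

```python
# Illustrative helpers (not from any reference): the Lax pair of
# \eqref{eq:LaxPair1}-\eqref{eq:LaxPair2} as numpy matrices.
import numpy as np

def lax_L(x, p):
    """Momenta on the diagonal, i/(x_j - x_k) off it; L is Hermitian."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)          # 1j/inf = 0 kills the diagonal
    return np.diag(p).astype(complex) + 1j / diff

def lax_M(x):
    """-i/(x_j - x_k)^2 off the diagonal, i * sum_{k != j} (x_j - x_k)^{-2} on it."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)
    return -1j / diff**2 + 1j * np.diag(np.sum(1.0 / diff**2, axis=1))
```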

The Hamiltonian equations of motion for $\dot{x}_i$ and $\dot{p}_i$ combined with simple but lengthy algebra show that, along the evolution of the Hamiltonian system in phase space, these matrices satisfy the Lax evolution equation

\begin{equation}\label{eq:LaxEvolution} \dot{L} = LM - ML = [L,M]. \end{equation}

All the computational work lies in the first equality (see the paper by Moser for the first proof of this, which also has a very easy to follow exposition), whereas the second equality is just the standard definition of the matrix commutator $[L,M]$. General theory then implies that the $N$ eigenvalues of $L$ are integrals of motion and in involution, so the difficult part is finding the matrices $L$ and $M$. The general theory also implies that once the integrability of the system is established there exists a change of coordinates on the $(\mathbf{x}, \mathbf{p})$ phase space in which the original dynamical system can be easily solved. Here ‘‘solved’’ means that the time evolution of the new spatial and momentum variables can be written down explicitly given their initial values. The coordinate system in which this is all possible is usually called action-angle coordinates, which are central to much of modern dynamics.
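As a quick sanity check on integrability, one can watch the spectrum of $L$ along a numerically integrated trajectory; a sketch reusing `cm_rhs`, `sol`, and `lax_L` from the snippets above:

```python
# The eigenvalues of L should be constant in time (integrals of motion).
# L is Hermitian here, so its eigenvalues are real and eigvalsh applies.
for t in [0.0, 0.5, 1.0, 2.0]:
    x_t, p_t = sol.sol(t)[:N], sol.sol(t)[N:]
    print(t, np.sort(np.linalg.eigvalsh(lax_L(x_t, p_t))))
# Up to integration error, the same spectrum prints at every time.
```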

Now all of these abstractions on integrability are quite nice, but they rely on general theory that doesn’t necessarily help much if one wants to explicitly solve the Calogero-Moser system \eqref{eq:SecondOrder} given the initial conditions $\mathbf{x}(0)$ and $\dot{\mathbf{x}}(0) = \mathbf{p}(0)$.¹ For Calogero-Moser, however, there is a simple and elegant description of the trajectories $\mathbf{x}(t)$ that was first discovered by Olshanetsky and Perelomov.

Olshanetsky-Perelomov Solution

Let me just jump right into the result: Olshanetsky and Perelomov proved that the solutions $\mathbf{x}(t) = (x_1(t), \ldots, x_N(t))$ to the system \eqref{eq:SecondOrder} are just the eigenvalues of the matrix

$$\operatorname{diag}(\mathbf{x}(0)) + L(\mathbf{x}(0), \mathbf{p}(0)) t.$$

To me, at least, this is a completely surprising result! At time $t=0$ the result is obviously true, since the matrix above is diagonal at that time, but as soon as $t$ grows the diagonal structure is lost and the behavior of the eigenvalues seems much more complicated. The evolution of the matrix-valued process is incredibly simple, indeed it is just linear in time, but why should linear evolution at the matrix level project onto the much more complicated Hamiltonian dynamics \eqref{eq:SecondOrder} at the level of the eigenvalues? In the original dynamical system \eqref{eq:SecondOrder} there doesn’t seem to be any hint of a matrix, but as we shall see, knowing the Lax pair is also the key to the Olshanetsky-Perelomov solution.
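If you want to see this in action before the proof, a quick numerical check (reusing `sol`, `lax_L`, `x0`, `p0` from the sketches above) compares the eigenvalues of the linearly evolving matrix with the integrated trajectory:

```python
# Eigenvalues of diag(x(0)) + L(0) t versus the ODE solution at time t.
L0 = lax_L(x0, p0)
X0 = np.diag(x0).astype(complex)
for t in [0.5, 1.0, 2.0]:
    op_eigs = np.linalg.eigvalsh(X0 + t * L0)   # Hermitian, ascending order
    ode_x = np.sort(sol.sol(t)[:N])
    print(t, np.max(np.abs(op_eigs - ode_x)))   # tiny, up to integration error
```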

Quickly Verifying Olshanetsky-Perelomov

This computation is essentially a summary of Section 2.1.3.2 of Calogero’s book.

Proving this result turns out to be relatively simple, once you have the idea that it might be true. Here is a quick way to verify it with nothing more than basic algebra. Let $X(t)$ be the time-evolving diagonal matrix with entries $\mathbf{x}(t)$ on its diagonal, i.e. $X(t) = \operatorname{diag}(\mathbf{x}(t))$, so that finding the evolution of $X(t)$ is equivalent to solving \eqref{eq:HamiltonianDynamics}. Of course this problem is no simpler than solving \eqref{eq:HamiltonianDynamics} itself, but the idea is to conjugate $X(t)$ by another evolving matrix such that the conjugated matrix has dynamics that we can solve. To this end introduce an as-yet-to-be-determined matrix-valued process $P(t)$, assumed to be invertible, and define

$$ X^*(t) = P(t) X(t) P(t)^{-1}. $$

Then $X^*$ and $X$ have the same eigenvalue evolution, and the eigenvalue evolution of $X$ is precisely what we are after. But if we are clever we can choose $P(t)$ such that the process $X^*$ follows an evolution equation that can be explicitly solved. To find it we just start computing. Differentiate both sides of the above with respect to $t$ (and drop the $t$ dependence for convenience) to obtain

$$ \dot{X^*} = \dot{P} X P^{-1} + P \dot{X} P^{-1} + P X \dot{(P^{-1})} = \dot{P} X P^{-1} + P \dot{X} P^{-1} - P X P^{-1} \dot{P} P^{-1} . $$

The time derivative of $P^{-1}$ comes from differentiating $P P^{-1} = I$ with respect to $t$ and solving:

$$ 0 = \dot{P} P^{-1} + P \dot{(P^{-1})} \implies \dot{(P^{-1})} = - P^{-1} \dot{P} P^{-1}. $$

Now factor a $P$ out of the left and a $P^{-1}$ out of the right to obtain

$$ \dot{X^*} = P \left( P^{-1} \dot{P} X + \dot{X} - XP^{-1} \dot{P} \right) P^{-1} = P \left( \dot{X} - [X, P^{-1} \dot{P}] \right) P^{-1}. $$

Tidy this up by writing $M(t) = P(t)^{-1} \dot{P}(t)$ and $L(t) = \dot{X}(t) - [X(t), M(t)]$, so that the above reads $\dot{X^*} = P(t) L(t) P(t)^{-1}$. Note that these $L$ and $M$ are not a priori the same $L$ and $M$ as in the Lax pair \eqref{eq:LaxPair1}-\eqref{eq:LaxPair2} for the Calogero-Moser system, but we will soon see that it is quite advantageous to choose them as such. Indeed, the evolution equation

$$ \dot{X^*} = P(t) L(t) P(t)^{-1} $$

does not seem so easy to solve, because we still don’t know $P(t)$ and on the right hand side the dependence on $X(t)$ is buried in $L(t)$ in a fairly complicated way. The trick is to note that the above computation of the time derivative of $X^*$ never used the fact that $X(t)$ is a diagonal matrix, so we can apply the same computation to the last equation, with $L$ in place of $X$, and obtain

$$ \ddot{X^*} = P \left( \dot{L} - [L,M] \right) P^{-1}. $$

Now the Lax evolution equation \eqref{eq:LaxEvolution} enters into the computation, and we see the power of the Lax pair. We have complete freedom to choose $P(t)$, so let’s do it in the following way:

  • choose $P(0) = I$,
  • use \eqref{eq:LaxPair2} to define $M(t) = M(\mathbf{x}(t))$ (note from \eqref{eq:LaxPair2} that $M$ depends only on $\mathbf{x}$) and then let $P$ solve $\dot{P}(t) = P(t) M(t)$,
  • observe, by the explicit computation just below this list, that there is no inconsistency between the two definitions $L(t) = \dot{X}(t) - [X(t), M(t)]$ and $L(t) = L(\mathbf{x}(t), \mathbf{p}(t))$, the latter using \eqref{eq:LaxPair1}.
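For the last bullet, the computation is short. Since $X$ is diagonal, $[X,M]_{jk} = (x_j - x_k) M_{jk}$, which vanishes on the diagonal. Off the diagonal $\dot{X}_{jk} = 0$, so

$$ \left( \dot{X} - [X,M] \right)_{jk} = -(x_j - x_k) \cdot \left( -i (x_j - x_k)^{-2} \right) = i (x_j - x_k)^{-1} = L_{jk}, $$

while on the diagonal $\left( \dot{X} - [X,M] \right)_{jj} = \dot{x}_j = p_j = L_{jj}$, in agreement with \eqref{eq:LaxPair1}.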

This uniquely defines the $P(t)$ process, and with these choices the Lax evolution equation \eqref{eq:LaxEvolution} for $L$ and $M$ implies that

$$ \ddot{X^*} = P \left( \dot{L} - [L,M] \right) P^{-1} = 0. $$

This equation holds for every entry of $\ddot{X^*}$ and of course we know how to solve it. Since we have the initial conditions

$$ X^*(0) = X(0), \quad \dot{X^*}(0) = L(0) $$

we obtain $X^*(t) = X(0) + L(0)t$. Since $X^*(t)$ and $X(t)$ have the same eigenvalues, this is exactly the Olshanetsky-Perelomov description of the trajectories.

Projections and Geodesics

The last section is a purely computational verification of the Olshanetsky-Perelomov solution, but it already contains the key ideas that point towards a more general picture. Of particular note are the following:

  • lifting the dynamics from $\mathbb{R}^{2N}$ to a space of $N \times N$ matrices leads to a simpler problem,
  • the dynamics in the matrix space project down onto the desired dynamics on $\mathbb{R}^{2N}$, and
  • the dynamics in the matrix space are actually a type of geodesic motion.

The last point is not immediately obvious but also not difficult to verify. To do so one first needs to specify the metric on the matrix space, and while doing that it is also worthwhile to be precise about what exactly the matrix space is. To identify the space that $X^*(t)$ takes values in, look at the matrix $P(t)$ that diagonalizes it. The evolution of $P$ is governed by $M$, since $\dot{P} = PM$, and from \eqref{eq:LaxPair2} it is easy to verify that $M$ is skew-Hermitian, i.e. equal to the negative of its conjugate transpose. The space of skew-Hermitian matrices is the Lie algebra of the unitary group, which implies that $P(t)$ is a unitary matrix (if you want to check it for yourself just observe that $P^*P = I$ at time zero and compute that its time derivative vanishes identically). Thus $X^*(t)$ has real eigenvalues and is diagonalized by a unitary matrix, which shows that the $X^*$ process takes values in the space of Hermitian matrices. With respect to the flat metric induced by the Hilbert-Schmidt inner product on this space, the solutions $X^*(t) = X(0) + L(0)t$ are straight lines, i.e. geodesics.
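Continuing with the objects from the earlier sketches, here is a quick numerical confirmation that $M$ is skew-Hermitian and that $\dot{P} = PM$ preserves unitarity (again an illustrative sketch, reusing `sol`, `N`, `x0`, `lax_M`, and `solve_ivp`):

```python
# M should be skew-Hermitian at any configuration:
M0 = lax_M(x0)
print(np.max(np.abs(M0 + M0.conj().T)))               # ~1e-16

# Integrate Pdot = P M(x(t)) from P(0) = I and test unitarity at t = 2:
def p_rhs(t, pflat):
    P = pflat.reshape(N, N)
    return (P @ lax_M(sol.sol(t)[:N])).ravel()

solP = solve_ivp(p_rhs, (0.0, 2.0), np.eye(N, dtype=complex).ravel(),
                 rtol=1e-10, atol=1e-12)
P2 = solP.y[:, -1].reshape(N, N)
print(np.max(np.abs(P2.conj().T @ P2 - np.eye(N))))   # ~0, so P stays unitary
```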

A Similarity to Random Matrix Theory

Often in random matrix theory one starts with a particular law on matrices and then wants to work out the induced law of the eigenvalues. If this can be done it is usually via explicit computation and a clever use of symmetries in the matrix law. The inverse problem is often of equal interest: starting with the law of a point process, find a matrix model for it, meaning find a law on matrices such that the induced law on eigenvalues is that of the point process. Of course there is a cheap answer to this problem: if $\mathbf{\lambda} = (\lambda_1, \ldots, \lambda_n)$ is the random point process, simply let

$$ \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n) $$

be your matrix model. While certainly true, this choice is practically useless and sheds no extra light on the problem. The law of $\mathbf{\lambda}$ might be more complicated than one knows what to do with, e.g. even the very pretty

$$ \frac{1}{Z} \prod_{i < j} (\lambda_i - \lambda_j)^{2} \prod_{i} e^{-\lambda_i^2/2} $$

is already too complicated for a typical undergraduate with a working knowledge of probability density functions. But of course if you conjugate the matrix $\Lambda$ by an independently sampled Haar unitary matrix then you just get a GUE matrix, whose law is well understood by anyone familiar with the normal distribution. In general the idea is that the matrix model should have a simpler law than that of the point process, even if it lives on a higher-dimensional space. This is also the fundamental idea behind Olshanetsky-Perelomov, just with the word “dynamics” replacing “law”.
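A sketch of this idea in code, under the normalization where $H$ has density proportional to $e^{-\operatorname{tr}(H^2)/2}$ (so the eigenvalue density is the one displayed above); `unitary_group` samples Haar measure, and the sample sizes are arbitrary choices of mine:

```python
# Illustrative sketch: eigenvalues of a GUE sample, re-dressed by an
# independent Haar unitary, give back a matrix with GUE entry statistics.
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(0)
n, samples = 4, 20000
entries = []
for _ in range(samples):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2                    # GUE: density ~ exp(-tr(H^2)/2)
    lam = np.linalg.eigvalsh(H)                 # a sample of the point process
    U = unitary_group.rvs(n, random_state=rng)  # independent Haar unitary
    entries.append((U @ np.diag(lam) @ U.conj().T)[0, 1])
entries = np.array(entries)
# For GUE with this normalization, Re and Im of an off-diagonal entry are
# independent N(0, 1/2); the empirical variances should both be ~0.5.
print(entries.real.var(), entries.imag.var())
```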

Generalizing


  1. More precisely, the transformation from the original coordinates to the action-angle coordinates is only guaranteed to exist but may not be explicit, or easy to work with.
