Saturday 29 June 2013

Advanced String Theory

This entry is quite long; skip to the end to look at a piece of string if you don't feel like doing any maths. The Lagrangian is the difference between kinetic and potential energy terms. \[ L = T - V \] All of classical mechanics can be done using the Lagrangian and the equations of motion it obeys; it is completely equivalent to Newton's equations. If you have never seen a Lagrangian before you will have to take my word that it satisfies the equation \[ \frac{d}{dt} \frac{\partial L}{\partial \dot{\theta}_i } = \frac{\partial L}{\partial \theta_i } \] where the dot denotes a time derivative and the $\theta_i$ are the co-ordinates relevant for the problem. We're going to use pendulums, which is why I chose the symbol $\theta$. Deriving this takes a while and involves concepts like D'Alembert's principle and variational calculus.
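If you want to check Lagrange's equations on the simplest case without doing the algebra by hand, here is a sketch using sympy. The simple-pendulum Lagrangian below is my own choice of example, not part of the derivation that follows.

```python
import sympy as sp

t, m, l, g = sp.symbols('t m l g', positive=True)
theta = sp.Function('theta')(t)

# Simple pendulum: T = (m/2) (l theta_dot)^2, V = -m g l cos(theta)
L = sp.Rational(1, 2) * m * l**2 * sp.diff(theta, t)**2 + m * g * l * sp.cos(theta)

# Lagrange's equation: d/dt (dL/d theta_dot) - dL/d theta = 0
eom = sp.diff(sp.diff(L, sp.diff(theta, t)), t) - sp.diff(L, theta)
print(sp.simplify(eom))  # the familiar pendulum equation of motion
```

Setting the result to zero recovers $ml^2\ddot{\theta} + mgl\sin\theta = 0$.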

Accepting Lagrange's equations we can expand out the $\frac{d}{dt}$ to get, \[ \sum_j \frac{\partial^2 L}{\partial \dot{\theta}_i \partial \theta_j }\dot{\theta}_j + \sum_j \frac{\partial^2 L}{\partial \dot{\theta}_i \partial \dot{\theta}_j }\ddot{\theta}_j = \frac{\partial L}{\partial \theta_i }. \] The second partial derivatives have two indices, $i$ and $j$, and act on the one-index object $\theta_j$ in the same way a matrix multiplies a vector. We define the two matrices \[ M_{ij} = \frac{\partial^2 L}{\partial \dot{\theta}_i \partial \dot{\theta}_j }, \] the "mass" matrix (for zero potential this would just be a diagonal matrix with the particle masses on the diagonal), and \[ F_{ij} = \frac{\partial^2 L}{\partial \dot{\theta}_i \partial \theta_j }. \] The equation is now \[ F \dot{\theta} + M \ddot{ \theta } = \frac{\partial L}{\partial \theta} \] where I have dropped the indices and you should think of $\theta$ as a vector.

As in the previous post, for numerical analysis we prefer first order equations, even if there are more of them. We define $\omega = \dot{\theta} $ and re-write the equation as \[ \frac{d}{dt} \left( \begin{array}{c} \theta \\ \omega \end{array} \right) = \left( \begin{array}{cc} 0 & I \\ 0 & -M^{-1}F \end{array} \right) \left( \begin{array}{c} \theta \\ \omega \end{array} \right) + \left( \begin{array}{c} 0 \\ M^{-1} \frac{\partial L}{\partial \theta} \end{array} \right) \] As bad as it looks, this is now in a form amenable to numerical analysis. We just have to calculate $M$, $F$ and $\frac{\partial L}{\partial \theta}$. Let's get to it!
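As a sketch of how this first-order form gets used numerically: the callables `mass`, `force` and `dL_dtheta` below are placeholders for whatever expressions a particular problem gives you, and the names are mine.

```python
import numpy as np

def first_order_rhs(theta, omega, mass, force, dL_dtheta):
    """Right-hand side of the first-order system above.

    mass(theta)             -> N x N matrix M
    force(theta, omega)     -> N x N matrix F
    dL_dtheta(theta, omega) -> length-N vector dL/dtheta
    """
    # omega_dot = M^{-1} (dL/dtheta - F omega); solve the linear system
    # rather than inverting M explicitly
    omega_dot = np.linalg.solve(mass(theta),
                                dL_dtheta(theta, omega) - force(theta, omega) @ omega)
    return omega, omega_dot
```

For a single simple pendulum ($M = ml^2$, $F = 0$, $\partial L / \partial \theta = -mgl\sin\theta$) this reduces to $\dot{\omega} = -(g/l)\sin\theta$ as it should.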

We'll make things a bit simpler for ourselves by making all the pendulums of equal length and all the pendulum bobs of equal mass. The following diagram summarises the situation for two pendulums.

From the diagram we can see (after staring at it for a while) that \[ \begin{array}{cc} x_1 = l \sin \theta_1 & y_1 = l \cos \theta_1 \\ x_2 = l (\sin \theta_1+\sin \theta_2) & y_2 = l (\cos \theta_1 + \cos \theta_2) \\ \vdots & \vdots \end{array} \] We will use the abbreviations \[ \begin{array}{c} c_i = \cos \theta_i \\ s_i = \sin \theta_i \end{array} \] The potential of any of the masses is $-mgh$ where $h$ is the height. This gives for all $N$ pendula a potential, \[ \begin{array}{l} V = -mg\left[ y_1 + y_2 + \ldots + y_N \right] \\ V = -mgl\left[ c_1 + (c_1 + c_2) + \ldots + (c_1 + \ldots + c_N) \right] \\ V = -mgl\left[ N c_1 + (N-1) c_2 + \ldots + c_N \right] \end{array} \] The kinetic energy is \[ T = \frac{m}{2} \left[ \dot{x}_1^2 + \dot{y}_1^2 + \dot{x}_2^2 + \dot{y}_2^2 + \ldots + \dot{x}_N^2 + \dot{y}_N^2 \right]. \] From the expressions for $x_i$ and $y_i$ we obtain, \[ \begin{array}{cc} \dot{x}_1 = l \cos \theta_1 \dot{\theta}_1 & \dot{y}_1 = l \sin \theta_1 \dot{\theta}_1\\ \dot{x}_2 = l (\cos \theta_1 \dot{\theta}_1 + \cos \theta_2 \dot{\theta}_2 ) & \dot{y}_2 = l (\sin \theta_1 \dot{\theta}_1 + \sin \theta_2 \dot{\theta}_2 ) \\ \vdots & \vdots \end{array} \] and the kinetic energy, \[ \begin{array}{l} T = \frac{m}{2} \left[ \dot{x}_1^2 + \dot{y}_1^2 + \dot{x}_2^2 + \dot{y}_2^2 + \ldots + \dot{x}_N^2 + \dot{y}_N^2 \right].\\ T = \frac{ml^2}{2} \left[ (c_1 \omega_1)^2 + (c_1 \omega_1 + c_2 \omega_2 )^2 + \ldots + (c_1 \omega_1 + c_2 \omega_2 + \ldots + c_N \omega_N )^2 + c \leftrightarrow s \right]. \end{array} \]

This is a bit complicated so we will use matrix notation to keep track of things. With \[ J_{ab} = \left\{ \begin{array}{ll} N+1 - b & \quad a < b \\ N+1 - a & \quad a \geq b \end{array} \right. \] a symmetric matrix (write it out) we have, \[ T = \frac{ml^2}{2} \sum_{a = 1}^{N} \sum_{b = 1}^{N} J_{ab} c_a c_b \omega_a \omega_b + c \leftrightarrow s = \frac{ml^2}{2} \sum_{a = 1}^{N} \sum_{b = 1}^{N} J_{ab} c_{ab} \omega_a \omega_b \] using $c_a c_b + s_a s_b = \cos(\theta_a - \theta_b)$, where \[ \begin{array}{c} c_{ab} = \cos (\theta_a - \theta_b) = c_{ba} \\ s_{ab} = \sin (\theta_a - \theta_b) = -s_{ba} \end{array} \]
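Writing $J$ out is easy to do by machine too. The two cases combine into $J_{ab} = N + 1 - \max(a, b)$; for $N = 3$:

```python
import numpy as np

N = 3
idx = np.arange(1, N + 1)                           # 1-based indices a, b
J = N + 1 - np.maximum(idx[:, None], idx[None, :])  # J_ab = N + 1 - max(a, b)
print(J)
# [[3 2 1]
#  [2 2 1]
#  [1 1 1]]
```

The matrix is symmetric, as claimed.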

Let's start by calculating $M$ and $F$. \[ \frac{ \partial T}{\partial \omega_j } = ml^2 \sum_{a = 1}^{N} \omega_a J_{aj} c_{aj} \] Then \[ M_{ij} = \frac{ \partial^2 T}{\partial \omega_i \partial \omega_j } = ml^2 J_{ij} c_{ij} \] and using the symmetry of mixed partial derivatives \[ F_{ij} = \frac{ \partial^2 T}{\partial \theta_j \partial \omega_i } = ml^2 \left[ J_{ij} \omega_j s_{ij} - \delta_{ij} \sum_{a = 1}^{N} \omega_a J_{ai} s_{ia} \right] \] Now we need $\frac{\partial L}{\partial \theta_i}$. We have \[ \frac{\partial V}{\partial \theta_i} = mgl J_{1i} s_i \] and \[ \frac{\partial T}{\partial \theta_i} = ml^2 \sum_{a=1}^N J_{ai} s_{ai} \omega_a \omega_i \] which together give $\frac{\partial L}{\partial \theta_i} = \frac{\partial T}{\partial \theta_i} - \frac{\partial V}{\partial \theta_i}$. Phew!
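These three formulas translate directly into numpy. A sketch (the function and argument names are mine, and I use 0-based indices, so the first row $J_{1i}$ becomes `J[0]`):

```python
import numpy as np

def pendulum_terms(theta, omega, m=1.0, l=1.0, g=9.81):
    """M, F and dL/dtheta for N equal-length, equal-mass coupled pendulums,
    transcribing the formulas derived above (0-based indices)."""
    N = len(theta)
    idx = np.arange(N)
    # J_ab = N + 1 - max(a, b) in 1-based indices, i.e. N - max(a, b) 0-based
    J = N - np.maximum(idx[:, None], idx[None, :])
    s = np.sin(theta[:, None] - theta[None, :])     # s_ab
    c = np.cos(theta[:, None] - theta[None, :])     # c_ab
    Js_omega = (J * s) @ omega                      # sum_a J_ia s_ia omega_a
    M = m * l**2 * J * c
    F = m * l**2 * (J * s * omega[None, :] - np.diag(Js_omega))
    # dL/dtheta = dT/dtheta - dV/dtheta
    dL = -m * l**2 * Js_omega * omega - m * g * l * J[0] * np.sin(theta)
    return M, F, dL
```

For $N = 1$ this collapses to $M = ml^2$, $F = 0$, $\partial L/\partial\theta = -mgl\sin\theta$, and for $N = 2$ at rest $M$ is the familiar double-pendulum mass matrix $ml^2\left(\begin{smallmatrix}2 & 1\\ 1 & 1\end{smallmatrix}\right)$.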

There actually is one more term. Adding damping in the Lagrangian framework is a little bit tricky; a full explanation is here. This gives us an extra parameter $\beta$ which we use to control the amount of drag. Below is a simulation of $N$ coupled pendula for $N = 1 \ldots 12$. Presumably you could let $N \rightarrow \infty$ and get a continuous string; I'll leave that to you.

Friday 21 June 2013

Pendulums

Now we're going to look at some "classical" numerical analysis. Newton's second law is a differential equation, \[ m \frac{d^2 x}{dt^2} = F, \] summarizing the fact that you need to push heavy things harder than light things; $x$ is a co-ordinate with as many components as necessary. For our purposes we are going to write this one simple ordinary differential equation as two even simpler ones. Defining the velocity $v = \frac{dx}{dt}$, Newton's equations become: \[ \frac{d}{dt} \left( \begin{array}{c} x \\ v \end{array} \right) = \left( \begin{array}{c} v \\ F/m \end{array} \right), \] Calling the vector on the left $\vec{y}(t)$ and the one on the right $\vec{f}(t, \vec{y}(t))$, this becomes: \[ \frac{d\vec{y}(t)}{dt} = \vec{f}(t, \vec{y}(t)). \]
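In code this rewriting is one small function; a sketch, where the force law is whatever callable you supply:

```python
import numpy as np

def newton_rhs(t, y, force, m=1.0):
    """First-order form of Newton's second law: y = (x, v), dy/dt = (v, F/m)."""
    x, v = y
    return np.array([v, force(t, x, v) / m])
```

For a unit spring, `force = lambda t, x, v: -x` recovers the harmonic oscillator.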

In this form we can build a very simple numerical solution: $\frac{d\vec{y}(t)}{dt}$ means the slope of $\vec{y}(t)$ at $t$. Let $h$ be a very small number; then \[ \frac{d\vec{y}(t)}{dt} \approx \frac{\vec{y}(t+h) - \vec{y}(t)}{h}. \] Substituting this into the second form of Newton's equation we get, \[ \vec{y}(t+h) = \vec{y}(t) + h \vec{f}(t, \vec{y}(t)) \] so given $\vec{y}$ at $t$ we can calculate it a short time later at $t + h$ from the "force" $\vec{f}$. Let's try it on the standard example in physics, the pendulum.
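The update above is the (forward) Euler method; as a sketch:

```python
import numpy as np

def euler_step(f, t, y, h):
    """One Euler step: y(t + h) ~ y(t) + h f(t, y(t))."""
    return y + h * f(t, y)
```

Iterating `t, y = t + h, euler_step(f, t, y, h)` traces out the whole trajectory.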

From the diagram above, the pendulum moves on a circle of radius $l$; the pendulum bob has traveled a distance $l \theta$ along the circle and the force on it is $-mg \sin(\theta)$, tangent to the circle. Newton's equations give, \[ \begin{array}{c} m l \frac{d^2 \theta}{dt^2} = -m g \sin(\theta)\\ \frac{d^2 \theta}{dt^2} + \frac{g}{l} \sin(\theta) = 0 \end{array} \] Because we're solving this numerically we don't have to make the small angle approximation $\sin(\theta) \approx \theta$, so the pendulum does indeed move on a circle. Click the animation below and drag the bob to somewhere in the box and let go. The red pendulum is the result of evolving the equation above using the Euler method.

The Euler method is a bad method; its advantage is how easy it is to program. The standard compromise between improvement and complexity is the 4${}^{th}$ order Runge-Kutta method (Runge of Runge-Lenz vector fame, Kutta of the Kutta-Joukowski airfoil theorem). When $f$ has no $y$ dependence this method reduces to Simpson's rule. Explicitly, \[ \begin{array}{c} y(t+h) = y(t) + \frac{h}{6} \left( k_1 + 2k_2 + 2k_3 + k_4\right)\\ k_1 = f(t, y(t))\\ k_2 = f(t + h/2, y(t) + k_1h/2)\\ k_3 = f(t + h/2, y(t) + k_2h/2)\\ k_4 = f(t + h, y(t) + k_3h)\\ \end{array} \] This approximates the slope using the values at the beginning, the end, and two in the middle, giving greater weight to the values in the middle. The blue bob in the previous animation (behind the red one) evolves according to the Runge-Kutta algorithm.
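A sketch of one Runge-Kutta step, transcribing the formulas directly:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One step of the classical 4th-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + k1 * h / 2)
    k3 = f(t + h / 2, y + k2 * h / 2)
    k4 = f(t + h, y + k3 * h)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For $y' = y$ a single step of size $h$ reproduces $e^h$ up to fourth order, while the Euler step only captures the first order term.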

Given a decent algorithm for solving differential equations we'll use it for something more interesting, the double pendulum. Click the mouse in the box, drag the red bob to its start position, click again and drag the blue bob, then release. I'll give a derivation of the double (and triple and $n$-tuple) pendulum equations of motion next week.

Friday 14 June 2013

An Ant Metropolis


The Metropolis algorithm has been used to simulate just about everything; let's try ants. We have four types of ant: red, black, green and blue. Red ants like green and blue ones but dislike black. Black ants like green and blue ones but dislike red. Green and blue like all three other types. All the ants are indifferent to ants of their own colour. "Dislike" means that if a red ant is close to a black one its "happiness" goes down. First we'll put our ants on a lattice. Then we'll give every ant a happiness that gets increased by one for every ant it likes on the same lattice point and decreased by one for every ant it doesn't like. We can encode this in an interaction matrix: \[ H = \left( \begin{array}{cccc} 0 & -1 & 1 & 1 \\ -1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{array} \right) \]
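A sketch of the happiness bookkeeping (the array layout and names are my own; colours are encoded 0 = red, 1 = black, 2 = green, 3 = blue, matching the rows of $H$):

```python
import numpy as np

# interaction matrix from above: rows/columns are red, black, green, blue
H = np.array([[ 0, -1,  1,  1],
              [-1,  0,  1,  1],
              [ 1,  1,  0,  1],
              [ 1,  1,  1,  0]])

def happiness(k, pos, colour, H):
    """Happiness of ant k: sum of H[colour_k, colour_j] over every other
    ant j sitting on the same lattice point."""
    same_site = np.all(pos == pos[k], axis=1)
    same_site[k] = False                  # an ant doesn't count itself
    return int(H[colour[k], colour[same_site]].sum())
```

With the matrix above, a red ant sharing a site with one black ant has happiness $-1$.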

Then the Metropolis algorithm proceeds as before: we propose a random direction for the ant to move, north, south, east or west. If the move increases happiness or keeps it the same we accept it; if it decreases happiness we accept it with probability $\exp( -\beta \Delta E)$, where $\Delta E$ is the decrease in happiness and $\beta = 1/T$, with $T$ something analogous to temperature. $\beta$ controls how much the ants care about their mutual attraction and repulsion. When it is low the ants don't notice each other as much; when it is high they care a lot about where they live. You could call it... blindness? sociability? sensitivity? social-sensitivity? I will stick with $\beta$.
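The accept/reject rule fits in a few lines. A sketch, taking the change in happiness as the argument (negative when happiness drops, so the probability is always at most one):

```python
import numpy as np

rng = np.random.default_rng()

def accept_move(happiness_change, beta, rng=rng):
    """Metropolis rule: always accept if happiness doesn't drop,
    otherwise accept with probability exp(-beta * drop)."""
    if happiness_change >= 0:
        return True
    return bool(rng.random() < np.exp(beta * happiness_change))
```

At $\beta = 0$ every move is accepted and the ants diffuse freely; as $\beta$ grows, unhappy moves become rare.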

Anyway, below is a little animation of the ants running around. When $\beta$ is low they don't care about each other and run around like idiots. When it's high they get into clusters with the ants they like. You can see the red ones avoiding the black ones. The sliders control $N$, the number of ants, and $\beta$. You can also type in your own interaction matrices. How about \[ H = \left( \begin{array}{cccc} 1 & -1 & -1 & -1 \\ -1 & 1 & -1 & -1 \\ -1 & -1 & 1 & -1 \\ -1 & -1 & -1 & 1 \end{array} \right) \] racist ants? \[ H = \left( \begin{array}{cccc} 0 & 1 & 1 & -2 \\ 1 & 0 & 1 & -2 \\ 1 & 1 & 0 & -2 \\ 1 & 1 & 1 & 0 \end{array} \right) \] or cluster bombers?




Thursday 6 June 2013

The Potts Model


The last post was about the Ising model and how to simulate it. The Ising model is the simplest type of spin model and has several generalizations. You can make different shaped lattices: triangular, honeycomb, Union Jack. You can make the couplings in the horizontal and vertical directions different, or you can make every coupling on every link different; this type of model is called a spin glass. You can increase the number of dimensions: one and two are exactly solved, three and four are interesting and unsolved analytically, five and higher are somewhat less interesting. This is because of the large number of neighbours each spin has in $N$ dimensions, $2N$: the more neighbours a spin has, the more accurate it is to describe it as interacting with a fixed background, and mean field theory becomes more and more accurate. You could even put several Ising models on the same lattice and make them interact - this is the Ashkin-Teller model. There are many more.

Here I will consider a simple generalization: more than two states. A spin can point in $N_c$ directions. If two neighbouring spins point in the same direction this decreases the energy; if they point in different directions the energy increases. The Ising model has $N_c = 2$.

Let's look at the possible states two neighbours can be in. For the Ising model we have \[ \left( \uparrow \uparrow \, , \, \downarrow \downarrow \right) \] decreasing the energy and \[ \left( \uparrow \downarrow \, , \, \downarrow \uparrow \right) \] increasing it, i.e. two decreasing and two increasing. For $N_c = 4$ it goes like, \[ \left( \uparrow \uparrow \, , \, \downarrow \downarrow \, , \, \leftarrow \leftarrow \, , \, \rightarrow \rightarrow \right) \] and \[ \left( \begin{array}{ccc} \uparrow \downarrow, & \uparrow \leftarrow, & \uparrow \rightarrow, \\ \downarrow \uparrow, & \downarrow \leftarrow, & \downarrow \rightarrow, \\ \leftarrow \downarrow, & \leftarrow \uparrow, & \leftarrow \rightarrow, \\ \rightarrow \downarrow, & \rightarrow \uparrow, & \rightarrow \leftarrow \end{array} \right) \] four decreasing the energy and twelve increasing it. This means if you pick a random configuration it is three times as likely that a given pair will increase the energy than decrease it. In the energy-entropy battle the Potts model has more ways to mix up the configurations and so it is easier for entropy to dominate. This means, among other things, that the critical temperature is lower. If $N_c$ is increased enough ($N_c \geq 5$ in two dimensions) even the nature of the transition changes, from an Ising-like second order to a water-steam type first order.
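A sketch of a Metropolis sweep for the $N_c$-state model on a periodic square lattice. The energy convention, $-1$ per aligned neighbour pair, is one common choice, and the function and variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(1)

def potts_sweep(spins, beta, Nc, rng=rng):
    """One Metropolis sweep of the Nc-state Potts model on an L x L
    periodic lattice; spins holds integers 0 .. Nc-1."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(Nc)
        old = spins[i, j]
        nbrs = (spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                spins[i, (j + 1) % L], spins[i, (j - 1) % L])
        # energy change: lose -1 per neighbour matching old,
        # gain -1 per neighbour matching new
        dE = sum(int(n == old) - int(n == new) for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = new
    return spins
```

At large $\beta$ an ordered lattice stays ordered; at small $\beta$ entropy wins and the colours scramble.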

Below you can tune both the inverse temperature, $\beta$, and the number of allowed spin orientations, $N_c$; the orientation of each spin is symbolised by the colour of a pixel.