
The Metropolis algorithm

We want to obtain an estimate for the mean value of an observable $A$:

\begin{displaymath}
\langle A \rangle = \sum_s A_s e^{-\beta E_s}/\sum_s e^{-\beta E_s},
\end{displaymath}

where $E_s$ and $A_s$ are the values of the energy and the quantity $A$ in the configuration $s$. The idea of the Monte Carlo approach is to sample a subset of configurations and to approximate the average by the mean over the sample:

\begin{displaymath}
\langle A \rangle \simeq \sum_s^{m} A_s e^{-\beta E_s}/\sum_s^{m}
e^{-\beta E_s},
\end{displaymath}

where the sampling is over $m$ configurations.

A crude Monte Carlo procedure is to generate a configuration at random, calculate $E_s$ and $A_s$, and add the contributions of this configuration to the sums. This is equivalent to the ``hit and miss'' Monte Carlo method for evaluating integrals. We have seen that this approach is very inefficient, because the configurations generated at random are likely to be very improbable and to contribute very little to the sums. Instead, we want to generate a sample of configurations that are important, i.e., that have large contributions to the sums. This is precisely the equivalent of ``importance sampling''. Hence, we need to generate the configurations according to a probability distribution, and the most convenient one is none other than the Boltzmann probability $P_s$ itself (276). Since we will average over the $m$ configurations generated with this probability, we must divide each term by $P_s$ to compensate for the biased sampling:

\begin{displaymath}
\langle A \rangle \simeq \sum_s^{m} \frac{A_s}{P_s} e^{-\beta E_s} \Big/ \sum_s^{m} \frac{1}{P_s}e^{-\beta E_s}
= \frac{1}{m}\sum_s^{m}A_s.
\end{displaymath}
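
The last equality is worth spelling out: substituting the Boltzmann probability $P_s = e^{-\beta E_s}/Z$ gives

\begin{displaymath}
\frac{A_s}{P_s}e^{-\beta E_s} = Z A_s, \qquad
\frac{1}{P_s}e^{-\beta E_s} = Z,
\end{displaymath}

so the unknown partition function $Z$ cancels between numerator and denominator, leaving the simple arithmetic mean of $A_s$ over the sample. This is what makes the method practical: we never need to evaluate $Z$.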

The Metropolis algorithm performs a random walk over the space of configurations. The walker ``hops'' from a configuration $i$ to another configuration $j$ with the ``transition probability''

\begin{displaymath}
W=\min{\left(1,\frac{P_j}{P_i}\right)}.
\end{displaymath}

Replacing the probabilities by their Boltzmann expressions, we obtain:

\begin{displaymath}
W=\min{\left(1,e^{-\beta(E_j-E_i)}\right)}.
\end{displaymath}
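
As a minimal sketch, this acceptance rule can be written in a few lines of Python (the function name and signature below are our own illustrative choices, not part of the notes):

    import math
    import random

    def metropolis_accept(delta_e, beta):
        # Metropolis rule: accept with probability min(1, exp(-beta*delta_e)).
        if delta_e <= 0.0:
            return True  # moves that do not raise the energy are always accepted
        return random.random() < math.exp(-beta * delta_e)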

Since we are only interested in the ratio $P_j/P_i$, it is not necessary to know the normalization constant $Z$. Although we have picked this expression for the transition probability $W$, it is not the only possible choice. It can be shown that the only requirement is that $W$ satisfies the ``detailed balance'' condition:

\begin{displaymath}
W(i \rightarrow j)e^{-\beta E_i} = W(j \rightarrow i)e^{-\beta E_j}.
\end{displaymath}
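
It is easy to verify that the Metropolis choice satisfies this condition. Suppose $E_j > E_i$; then $W(i\rightarrow j)=e^{-\beta(E_j-E_i)}$, while the reverse move lowers the energy and is always accepted, $W(j\rightarrow i)=1$, so

\begin{displaymath}
W(i \rightarrow j)e^{-\beta E_i} = e^{-\beta(E_j-E_i)}e^{-\beta E_i}
= e^{-\beta E_j} = W(j \rightarrow i)e^{-\beta E_j}.
\end{displaymath}

The case $E_j \leq E_i$ follows by exchanging the roles of $i$ and $j$.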

Another common choice in the literature is given by:

\begin{displaymath}
W(i\rightarrow j)=\frac{1}{e^{\beta (E_j-E_i)}+1}.
\end{displaymath}

Note that if $\Delta E=0$, then $W=1/2$, and the trial configuration has an equal probability of being accepted or rejected.
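
This choice also satisfies detailed balance: writing $\Delta E = E_j - E_i$,

\begin{displaymath}
\frac{W(i\rightarrow j)}{W(j\rightarrow i)}
= \frac{e^{-\beta \Delta E}+1}{e^{\beta \Delta E}+1}
= \frac{e^{-\beta \Delta E}\left(1+e^{\beta \Delta E}\right)}{e^{\beta \Delta E}+1}
= e^{-\beta \Delta E},
\end{displaymath}

which is exactly the ratio required by the condition above.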

The pseudocode for a Monte Carlo simulation can be outlined as follows; a concrete code sketch implementing these steps appears after the list:

  1. Establish an initial configuration.

  2. Make a random trial change in the configuration. For example, choose a spin at random and try to flip it. Or choose a particle at random and attempt to displace it a random distance.

  3. Compute the change in the energy of the system $\Delta E$ due to the trial change.

  4. If $\Delta E \leq 0$, accept the new configuration and go to step 8.

  5. If $\Delta E$ is positive, compute the ``transition probability'' $W=e^{-\beta \Delta E}$.

  6. Generate a random number $r$ in the interval $[0,1]$.

  7. If $r \leq W$, accept the new configuration; otherwise retain the previous configuration.

  8. Repeat steps (2) to (7) to obtain a sufficient number of configurations or ``trials''.

  9. Compute averages over configurations which are statistically independent of each other.
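
As a concrete illustration of these steps, here is a minimal sketch in Python for a one-dimensional Ising chain with periodic boundary conditions; the model, chain length, temperature, and run lengths are illustrative assumptions, not prescriptions from the notes:

    import math
    import random

    def metropolis_ising_1d(n_spins=100, beta=0.5, n_steps=100000, n_equil=10000):
        # Step 1: establish an initial configuration (random spins +/-1).
        spins = [random.choice((-1, 1)) for _ in range(n_spins)]
        energy = -sum(spins[i] * spins[(i + 1) % n_spins] for i in range(n_spins))

        energy_sum = 0.0
        n_samples = 0
        for step in range(n_steps):
            # Step 2: trial change -- pick a spin at random and try to flip it.
            i = random.randrange(n_spins)
            # Step 3: energy change of the trial flip (only the two bonds of spin i change).
            delta_e = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            # Steps 4-7: accept with probability min(1, exp(-beta*delta_e)).
            if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
                spins[i] = -spins[i]
                energy += delta_e
            # Step 9: average over (roughly) independent configurations,
            # discarding an initial equilibration phase.
            if step >= n_equil and step % n_spins == 0:
                energy_sum += energy
                n_samples += 1
        return energy_sum / (n_samples * n_spins)

    print(metropolis_ising_1d())  # mean energy per spin at temperature T = 1/beta

Note that flipping a single spin only changes the two bonds attached to it, which is why $\Delta E$ in step 3 can be computed locally instead of re-evaluating the total energy. For a long chain the result should approach the exact mean energy per spin of the 1D Ising model, $-\tanh\beta$.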



Adrian E. Feiguin 2009-11-04