Fluctuations

Giovanni Gallavotti (2008), Scholarpedia, 3(6):5893. doi:10.4249/scholarpedia.5893, revision #137269

Fluctuations: deviations of the value of an observable from its average or, more generally, deviations of the actual time evolution of an observable from its average evolution, in a system subject to random forces or simply undergoing chaotic motion.


Foundations: errors

The law of errors is the first example of a theory of fluctuations. It deals with sums of a large number \(N\) of values \(\sigma_1,\ldots,\sigma_N\) occurring randomly with probability \(p(\sigma)\) equal for each pair of opposite values (i.e. \(p(\sigma)=p(-\sigma)\)), hence with \(0\) average. If the possible values of each \(\sigma\) are finitely many (and at least two), their sum can be of an order of magnitude as large as the number \(N\ ;\) however, such a large value is very improbable for large \(N\ ,\) and deviations from the average of the order of the square root of \(N\) follow the error law, also called the normal law, of Gauss

\[\tag{1} \hbox{probability}\Big(\sum_{i=1}^N \sigma_i= x\sqrt N\ \hbox{ for }\ x\in [a,b]\Big)= \int_a^b e^{-x^2/2D}\frac{dx}{\sqrt{2\pi D}}\]


if \(D=\sum_\sigma \sigma^2 p(\sigma)\ ,\) up to corrections approaching \(0\) as \(N\to\infty\) (\([a,b]\) being any finite interval).

Gauss' application was to the control of errors in the determination of an asteroid orbit, when observations of its position in the sky in excess of the minimum (three) necessary were available (Gauss 1971).

The error law is universal in the sense that it holds no matter what the values of the variables \(\sigma\) are, as long as

(1) they have finitely many possible values,

(2) the probabilities \(p(\sigma)\) give them zero average, \(\sum_\sigma \sigma \,p(\sigma)=0\ ,\)

(3) the occurrence of any value takes place independently of the occurrences of the other values. The simplest application is to the sum of equally probable values \(\sigma=\pm 1\ ,\) as in the sketch below.
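
The error law is easy to probe numerically in this simplest case. The following sketch (Python; the shortcut \(\sum_i\sigma_i=2k-N\) with \(k\) binomial is purely an implementation device) compares the empirical frequency with the Gaussian integral in (1), where \(D=1\ :\)

    import numpy as np
    from math import erf, sqrt

    # Sketch of the error law (1) for sigma = +/-1 (so D = 1): each sum of
    # N spins is generated as 2k - N with k ~ Binomial(N, 1/2), and the
    # frequency of sums x*sqrt(N) with x in [a, b] is compared with the
    # Gaussian integral on the right-hand side of (1).
    rng = np.random.default_rng(0)
    N, samples = 10_000, 200_000
    x = (2 * rng.binomial(N, 0.5, size=samples) - N) / np.sqrt(N)

    a, b, D = -1.0, 1.0, 1.0
    empirical = ((x >= a) & (x <= b)).mean()
    gaussian = 0.5 * (erf(b / sqrt(2 * D)) - erf(a / sqrt(2 * D)))
    print(f"empirical {empirical:.4f} vs Gaussian {gaussian:.4f}")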

Another important kind of fluctuations is Poisson fluctuations, describing, for instance, the number of atoms in a region of volume \(v\) or the number of radioactive decays in a time interval \(\tau\ :\) these are independent events which occur with an average number \(\nu\) proportional to \(v\) or \(\tau\ ;\) the probability that \(m\) events are actually observed is \(P(m)=e^{-\nu}{\nu^m}/{m!}\ .\) A characteristic feature of such fluctuations, also called fluctuations of rare events, is that the mean square deviation is equal to the mean: \(\sum_{m=0}^\infty (m-\nu)^2P(m)=\nu\ .\)
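
The equality of mean and mean square deviation is likewise easy to check by sampling; a minimal sketch (the value of \(\nu\) is illustrative):

    import numpy as np

    # Sketch: for Poisson-distributed counts the empirical variance
    # should match the empirical mean (both close to nu).
    rng = np.random.default_rng(1)
    nu = 7.3
    counts = rng.poisson(nu, size=1_000_000)
    print(f"mean {counts.mean():.4f}, variance {counts.var():.4f}, nu = {nu}")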

Fluctuations: small and large

The study of the probabilities of values of \(\sum_{i=1}^N \sigma_i\) of size of order \(N\) is called the theory of large fluctuations, because the fluctuations \(\sum_{i=1}^N \sigma_i\) considered in the error law, often referred to as small fluctuations, are comparatively much smaller, being of order \(\sqrt{N}\ .\)

Large fluctuations also show universal properties, but to a lesser extent. The analysis is quite simple when the sum \(\sum_{i=1}^N \sigma_i\) involves two equally probable independent values \(\sigma=\pm 1\ :\) there is a function \(f(s)\) such that the probability that \(\sum_{i=1}^N \sigma_i=s N\) with \(s\in [a,b]\) and \(-1<a<b<1\) satisfies

\[\tag{2} \hbox{probability}\Big(\sum_{i=1}^N \sigma_i\in [aN,bN]\Big)\sim e^{-N \min_{s\in [a,b]}f(s)}\]


in the sense that the logarithm of the ratio of the two sides divided by \(N\) approaches \(0\) as \(N\to\infty\ .\) It is

\[\tag{3} f(s)=\frac{1-s}2\log \frac{1-s}2+\frac{1+s}2\log \frac{1+s}2+\log 2\ .\]


The existence of a function \(f(s)\ ,\) called the large deviations rate, controlling the probabilities of the events \(\sum_{i=1}^N \sigma_i=sN\) for \(N\) large and \(0<|s|<\max |\sigma|\ ,\) is a rather general property.
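
As an illustration, the rate (3) can be compared with exact binomial probabilities; a minimal sketch (Python, assuming SciPy is available) shows that \(-\frac1N\log\hbox{probability}\big(\sum_i \sigma_i=sN\big)\) approaches \(f(s)\ :\)

    import numpy as np
    from scipy.stats import binom

    # Sketch: exact probability that sum_i sigma_i = s*N for sigma = +/-1
    # (s*N = 2k - N with k "+1"s) against the rate f(s) of (2)-(3).
    # The two columns agree up to O(log(N)/N) corrections.
    def f(s):
        return ((1 - s) / 2 * np.log((1 - s) / 2)
                + (1 + s) / 2 * np.log((1 + s) / 2) + np.log(2))

    N = 10_000
    for s in (0.2, 0.4, 0.6):
        k = int((s * N + N) / 2)        # number of +1's giving sum s*N
        p = binom.pmf(k, N, 0.5)
        print(f"s = {s}: -log(P)/N = {-np.log(p)/N:.5f},  f(s) = {f(s):.5f}")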

Although the exponential dependence on \(N\) of the probability of large deviations is a universal feature, the large deviations rate function is not a universal function: only the property that \(f(s)\) has a minimum at \(s=0\ ,\) with a strictly positive second derivative there, is universal.

The error law is consistent with the large deviations law: this can be seen heuristically by using the large deviations law to study the probability of \(\sum_{i=1}^N \sigma_i=x \sqrt N\) with \(x\in [a,b]\ ,\) and noting that to leading order in \(N\) it is \(\sim e^{-\min_{[a,b]}x^2/2D}\) if \(D^{-1}=\) the second derivative of \(f(s)\) at \(s=0\ ,\) assumed strictly positive (Gnedenko and Kolmogorov 1968).

The universality properties of the fluctuations, around their average, of sums of independent variables can be summarized as follows: the small fluctuations, of size \(O(\sqrt N)\ ,\) of sums of independent variables which assume finitely many values have a universal (Gaussian) distribution controlled by a single parameter \(D\ .\) The latter is determined by the curvature at the minimum of a non-universal function \(f(s)\) (namely \(D^{-1}=f''(0)\)) which controls the probability of large fluctuations. Large fluctuations have a probability which tends to zero exponentially with the number \(N\ ,\) as long as \(s\in [a,b]\) with \( \min \sigma<a<b<\max \sigma\) and \(0\notin[a,b]\ ,\) while the small deviations probability is much larger, approaching \(0\) only exponentially in \(\sqrt N\ .\)

Extensions: non zero mean and infinite square mean

The laws of errors and of large fluctuations extend to the general case in which \(\overline\sigma\,\overset{\mathrm{def}}{=}\, \sum_\sigma \sigma\,p(\sigma)\ne0\ ,\) e.g. when opposite values occur with unequal probabilities: they simply retain the same form provided \(\sum_{i=1}^N \sigma_i\) is replaced by \(\sum_{i=1}^N (\sigma_i-\overline\sigma)\) and provided \(0<\sum_{\sigma} (\sigma-\overline\sigma)^2 p(\sigma)<+\infty\ .\)

Further extensions apply to cases in which the variables \(\sigma_i\) take infinitely many (denumerably many or more) values. In the previously considered cases the quantities \(\sum_{i=1}^N (\sigma_i-\overline\sigma)\) cannot exceed the interval \(N\,[\min (\sigma-\overline \sigma),\max (\sigma-\overline\sigma)]\ ;\) but in the cases in which \(\max |\sigma|=+\infty\) the large deviations concern quantities \(\sum_{i=1}^N (\sigma_i-\overline\sigma)\) which can be of size of order larger than \(N\ .\) This implies that some care is needed in extending the fluctuation laws, large or small, to such cases.

For instance suppose \(\sigma_i\) can take infinitely many values with probabilities \(p(\sigma)=p(-\sigma)\) that decay to \(0\) too slowly to have \(\sum_\sigma p(\sigma)\, \sigma^2<\infty\ ,\) and consider the special case in which \(\int_s^\infty p(\sigma)\, d\sigma\) is, asymptotically for \(s\to\infty\ ,\) proportional to \(s^{-\alpha}\ ,\) \(0<\alpha<2\ ;\) then the small deviations have size \(N^{\frac1\alpha}\) (rather than \(N^{\frac12}\)) in the sense that the variable \(\sum_{i=1}^N \sigma_i \) is \(s N^{\frac1\alpha}\) for \(s\in [a,b]\) with a probability of the form \(\int_a^b F_{\alpha,c}(s)\,ds\ ,\) with \(F_{\alpha,c}\) universal, i.e. whatever the distribution of the \(\sigma\) is, the law depends on it only through a parameter \(c\) which plays the role of \(D\) in Gauss' law (Ch. 7, Gnedenko and Kolmogorov 1968). If \(\alpha=1\) then

\[\tag{4} F_{\alpha=1,c}(s)=\frac1\pi\frac{c}{c^2+s^2}\ .\]
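
A sketch of this anomalous scaling (the symmetric Pareto-type variable \(\sigma=\pm 1/U\ ,\) with \(U\) uniform on \((0,1]\ ,\) is an illustrative choice; for its unit tail the limiting scale works out to \(c=\pi/2\ ,\) a standard stable-law computation):

    import numpy as np

    # Sketch: symmetric variables with tail probability P(|sigma|>s) = 1/s
    # (alpha = 1), generated as sigma = +/- 1/U; the sums scaled by
    # N^{1/alpha} = N should follow the Cauchy law (4) with c = pi/2.
    rng = np.random.default_rng(2)
    N, samples = 500, 10_000
    u = 1.0 - rng.uniform(size=(samples, N))          # uniform on (0, 1]
    signs = rng.integers(0, 2, size=(samples, N)) * 2 - 1
    s = (signs / u).sum(axis=1) / N

    c = np.pi / 2
    for a, b in ((-c, c), (-3 * c, 3 * c)):
        empirical = ((s >= a) & (s <= b)).mean()
        cauchy = (np.arctan(b / c) - np.arctan(a / c)) / np.pi
        print(f"[{a:6.2f},{b:6.2f}]: empirical {empirical:.3f} vs Cauchy {cauchy:.3f}")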


The cases in which the probabilities \(p(\sigma)\) are not symmetric in \(\sigma\) are more involved, in the sense that the limit laws of \(s\) depend on \(p(\sigma)\) through more than one parameter (Gnedenko and Kolmogorov 1968). For instance if \(\alpha=\frac12\) and \(\int_s^\infty p(\sigma)\, d\sigma\) is, asymptotically for \(s\to\infty\ ,\) proportional to \(s^{-\alpha}\ ,\) while \(p(\sigma)=0\) for \(\sigma<0\ ,\) then

\[\tag{5} f(s)=\frac1{\sqrt{2c\pi\,s^3}}e^{-\frac1{2c\,s} };\]


this is Smirnov's law.

If \(\int \sigma^2 p(\sigma)\,d\sigma=+\infty\ ,\) in general the sums \(\sum_{i=1}^N \sigma_i\) might not admit a limit law \(f(s)\ ,\) not even for the small fluctuations. This means that, even for suitable choices of \(\alpha\) and \(a_N\ ,\) there need not exist a function \(f(s)\) such that \(N^{-\frac1\alpha} \sum_{i=1}^N (\sigma_i-a_N)\) has probability of falling in \([a,b]\) asymptotically given by \(\int_a^b f(s)\,ds\ .\) A necessary and sufficient condition for the existence of a limit law is, if the tails of the distribution of the single events \(\sigma\) are denoted \(r_+(s)=\int_s^\infty p(\sigma)\,d\sigma\) and \(r_-(s)=\int_{-\infty}^{-s} p(\sigma)\,d\sigma\ ,\) that

\[\tag{6} \lim_{s\to+\infty} \frac{r_+(s)}{r_-(s)}\ \hbox{exists, and}\quad \lim_{s\to+\infty} \frac{r_+(s)+r_-(s)} {r_+(ks)+r_-(ks)}=k^{\alpha}\]


with \(0<\alpha<2\) (Gnedenko and Kolmogorov 1968).

Brownian motion

The theory of Brownian motion deals with pollen particles (colloids) suspended in a viscous medium (e.g. water) which, although of huge size compared to the fluid molecules, can be considered as large molecules to which, therefore, statistical mechanics applies.

Remarkably Einstein developed the theory without knowing the available experimental evidence, hence he could say

...It is possible that the movements to be discussed here are identical with the so-called "Brownian molecular motion", however, the information available to me regarding the latter is so lacking in precision, that I can form no judgment in the matter...

A little later, he attributed the evidence in the first instance to M. Gouy rather than to the 1867 series of experimental results published by G. Cantoni who had concluded:

...In fact, I think that the dancing movement of the extremely minute solid particles in a liquid, can be attributed to the different velocities that must be proper, at a given temperature, of both such solid particles and of the molecules of the liquid that hit them from every side. I do not know whether others did already attempt this way of explaining Brownian motions... (Cantoni 1867; Pais 1982; Duplantier 2005).

Non rectilinear motion of the suspended particles is attributed to fluctuations due to their random collisions with molecules. It is a random motion, at least when observed on time scales \(\tau\) large compared to the time necessary to dissipate the velocity \(v\) acquired in a single collision with a molecule. The dissipation takes place because of the friction, which in turn is also due to microscopic collisions between fluid molecules.

Viscosity of the medium slows down the particles (or acts as a thermostat on them) and is itself a manifestation of the atomic nature of the medium: there should therefore be a relation between the value of the friction coefficient and the fluctuations of the momentum exchanges due to the microscopic collisions. This led to the development, starting from Brownian motion as a paradigmatic particular case, of a class of results quantitatively relating, very near equilibrium, the dissipation occurring in transport phenomena and the equilibrium fluctuations of suitable observables: Einstein's theory can be regarded as a first example of the fluctuation-dissipation theorems.

Einstein's theory

In a (gedanken) gas of Brownian particles a density variation \(\frac{\partial\rho}{\partial x}\) generates a material flux \(\rho v=D \frac{\partial\rho}{\partial x}\) and therefore a diffusion, with diffusion coefficient \(D\!\ .\)

On the other hand, the density gradient implies an osmotic pressure gradient \(\frac{\partial p}{\partial x}=\beta^{-1} \frac{\partial\rho}{\partial x}\ ,\) with \(\beta \, \overset{ \mathrm{def}}{=} \, \frac1{k_B T}\ ,\) by the Raoult-van 't Hoff osmotic pressure law \(p=\beta^{-1}\rho\ ;\) hence it corresponds to a force per particle in the \(x\!\)-direction \(F=\frac{\beta^{-1}}{\rho} \frac{\partial\rho}{\partial x}\) which, in a stationary state, is balanced by the viscous resistance \(F=6\pi\eta R v\) (Stokes formula), where \(\eta\) is the viscosity coefficient, \(R\) the radius of the particles, and \(v\) their velocity along the \(x\!\)-axis. Hence

\[\tag{7} \rho v=\rho \frac{F}{6\pi\eta R}=\frac{\beta^{-1}}{6\pi\eta R} \frac{\partial\rho}{\partial x} =D \frac{\partial\rho}{\partial x}\]

and \( D=\frac{\beta^{-1}}{6\pi\eta R}.\)

This is the Sutherland-Einstein-Smoluchowski relation (Sutherland 1904,1905; Einstein 1905; von Smolan Smoluchowski 1906) characterizing the transport coefficient of diffusion and its relation with the dissipation \(\eta\ .\) The quantity \(\!\beta D\!\) should be regarded as the susceptibility or response in speed \(v\) to a force \(F\) driving the Brownian particle.
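
As a worked example (all values illustrative), the relation fixes the diffusion coefficient of a micron-sized particle in water at room temperature:

    from math import pi

    # Sketch: evaluate the Sutherland-Einstein-Smoluchowski relation
    # D = k_B T / (6 pi eta R) for an illustrative Brownian particle.
    k_B = 1.380649e-23   # J/K
    T   = 293.0          # K, room temperature
    eta = 1.0e-3         # Pa*s, viscosity of water (approximate)
    R   = 0.5e-6         # m, particle radius (illustrative)

    D = k_B * T / (6 * pi * eta * R)
    print(f"D = {D:.3e} m^2/s")   # ~ 4e-13 m^2/s, i.e. ~ 0.4 (micrometer)^2/s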

A particle starting at the origin and undergoing diffusion at time \(t\!>\!0\) will have a randomly distributed \(x\!\)-coordinate with Gaussian distribution \(f(x,t)dx=e^{-\frac{x^2}{4Dt}}\frac{dx}{\sqrt{4\pi D \,t}}\ ,\) as a consequence of the diffusion equation \(\frac{\partial \rho}{\partial t}=D\Delta\rho\) with initial value \(\rho(x)=\delta(x)\ .\)

Hence its average square distance from the origin in the \(x\!\)-direction is \(r^2=\lim_{t\to\infty} \frac1t \langle{x^2(t)}\rangle=2D\ :\) this characterizes the fluctuations of the dissipative motion with diffusion coefficient \(D\!\ .\)

On the other hand, \(r^2=\lim_{t\to\infty} \frac 1t \langle{x^2(t)}\rangle\) can also be computed by averaging \(x(t)^2=\left(\int_0^t u(t)\,dt \right)^2\ ,\) \( \ u(t) \overset{ \mathrm{def}}{=} \dot x(t)\ ,\) and a brief heuristic computation shows that the average is \(t\int_{-\infty}^{\infty} \langle{u(0)u(t')}\rangle dt',\) hence

\[\tag{8} D=\frac12\int_{-\infty}^{\infty} \langle{u(0)u(t)}\rangle dt\]


The above two equations establish a relation between velocity fluctuations and dissipation (the latter being expressed by the diffusion coefficient \(D\!\ ,\) or by the related viscosity \(\eta\)) (Einstein 1956).
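
Both statements are easy to probe in a toy simulation; the sketch below (Python, illustrative parameters) generates discrete random walks, compares \(\langle{x^2(t)}\rangle/t\) with \(2D\ ,\) and evaluates the discretized version of (8):

    import numpy as np

    # Sketch: discrete-time random walks x_{k+1} = x_k + delta_k with
    # independent Gaussian increments of variance 2*D*dt.  Checks that
    # <x(t)^2>/t ~ 2D and that (8), discretized with u_k = delta_k/dt,
    # reduces here to D = (1/2) <u^2> dt (only the k = 0 term survives).
    rng = np.random.default_rng(3)
    dt, steps, walkers = 1e-3, 2_000, 2_000
    D_true = 0.25                     # illustrative diffusion coefficient
    delta = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(walkers, steps))
    x = np.cumsum(delta, axis=1)

    t = steps * dt
    print(f"<x^2>/t = {(x[:, -1]**2).mean() / t:.4f}   (2D = {2 * D_true})")

    u = delta / dt                    # discretized velocity
    print(f"D from (8) = {0.5 * (u * u).mean() * dt:.4f}   (D = {D_true})")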

Fluctuations of patterns, stochastic processes

Unlike the theory of errors, the theory of Brownian motion devotes attention not only to the average of the square displacement but also to the joint fluctuations of many variables, namely to the probability that the actual path of a Brownian particle deviates from a predefined path.

More generally, given an observable \(X\ ,\) one looks at the probability of a string, path or pattern of results of observations of \(X\ :\) namely \(x_i\ ,\) \(i=1,2,\ldots,T\ ,\) performed at discrete times, or \(x(t)\ ,\) \(0\le t\le T\ ,\) performed continuously in a time interval of length \(T\ .\)

The discrete time case arose earlier, in a work by Bachelier on stock market prices (Bachelier 1900), while the continuous time case began to be studied with the theory of Einstein and Smoluchowski.

In general the probability distribution of the possible paths is called a stochastic process. When the successive values of the observable \(X\) are independent of each other the process is often called a Bernoulli process; if they are not independent but the successive variations \(x_{i+k}-x_i\) or \(x(t+\tau)-x(t)\) are independent random quantities, the process is called an independent increments process. If the process is a Bernoulli process and the variables have a Gaussian probability distribution the process is called a white noise.

It is also possible to consider processes in which the successive events are correlated in more general ways: all questions that can be asked for independent increments or for white noise processes can be extended to the latter more general framework and attract great interest, both theoretical and applicative, in particular when there is strong correlation over distant events in time.

The Wiener process: nondifferentiable paths

The position \(x(t)\) of a particle in Brownian motion is a process with independent random displacements with a Gaussian distribution and zero average: this means that if at time \(t+\tau\) the position is \(x(t+\tau)\) then the displacement \(\delta=x(t+\tau)-x(t)\) has a probability of being in the cube \(d^3\delta\) centered at \(\delta\) and with side \(d\delta\) given by

\[\tag{9} \textrm{probability}(\delta\in d^3\delta)=\frac{e^{-\frac{\delta^2}{4D \tau}}\,d^3\delta}{(4\pi D\tau)^{\frac32}}\]


A typical property of processes in continuous time and with independent increments is that the paths \(t\to x(t)\) are quite irregular as functions of \(t\ .\)

For instance the Brownian motion paths are not differentiable, with probability \(1\ :\) for small \(\tau\) each component of the variation \(x(t+\tau)-x(t)\) has a square with average size \(2D \tau\ ,\) so that the variations have a size of order \(O(\sqrt{D\tau})\ .\)

Of course this property, mathematically rigorous, can be only approximately true in the physical realizations of a Brownian motion, because by reducing the size of \(\tau\ ,\) say below some \(t_0\ ,\) the motion eventually becomes smooth: but on time scales long compared to \(1\,msec\ ,\) as is necessarily the case for observations made at our human scale, the velocity depends on the time interval over which it is measured; it would diverge in the limit \(\tau\to0\) or, better, it would become extremely large and wildly fluctuating as \(\tau\) approaches the time scale \(t_0\) below which the theory becomes inapplicable.

Therefore, Brownian motion provided an actual physical realization of objects that until then had been mere mathematical curiosities, like continuous but nowhere differentiable curves, discovered in the nineteenth century by mathematicians in their quest for a rigorous formulation of calculus; Perrin himself stressed this point very appropriately (Perrin 1970).

The analysis of the motion at very small time scales was later performed by Ornstein and Uhlenbeck who identified the time scale \(t_0\) with \(t_0=m/\lambda\) where \(\lambda\) is the friction experienced by a Brownian particle of mass \(m\ ,\) as can be seen from the full solution to the Langevin equation.
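
The divergence of the measured velocity as the observation interval shrinks is easy to exhibit on a simulated path; a sketch (illustrative parameters; below the sampling step the simulation, like the physical theory, becomes inapplicable):

    import numpy as np

    # Sketch: on a sampled Wiener path with <(x(t+tau)-x(t))^2> = 2*D*tau,
    # the finite-difference "velocity" grows like tau^{-1/2} as tau
    # shrinks, illustrating the nondifferentiability of the paths.
    rng = np.random.default_rng(4)
    D, dt, n = 1.0, 1e-6, 2_000_000
    x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))

    for k in (1_000, 100, 10, 1):            # tau = k*dt, decreasing
        tau = k * dt
        v = np.abs(x[k:] - x[:-k]) / tau
        print(f"tau = {tau:.0e} s: mean |velocity| = {v.mean():.3e}")
    # expected mean: sqrt(2*D/tau)*sqrt(2/pi), diverging as tau -> 0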

Schottky fluctuations

Coming back to the relations between fluctuations and dissipation, the theory of the Schottky effect is a prominent instance, following the theory of Brownian motion by only a few years (1919). The effect is a current fluctuation in a circuit with \(L,R,C\) elements in series (i.e. with inductance \(L\ ,\) resistance \(R\) and capacitance \(C\)), and with a diode attached in parallel to the two poles of the condenser \(C\ .\) The current flowing in the diode is \(i_0=n e\ ,\) where \(n\) is the average number of electrons of charge \(e\) leaving the cathode per unit time to migrate towards the anode. The current \(i_0\) is steadily generated by a source. See Figure 1.

Figure 1: the \(L,R,C\) series circuit, with the diode attached in parallel to the condenser \(C\) (original image: Fluctuations circuit.gif).

The circuit equation is then \(L \dot I+R I+C^{-1}Q=0\ ,\) \(\dot Q=I-i(t)\ ,\) or

\[\tag{10} L \ddot I+R\dot I+C^{-1}(I-i(t))=0\]


where \(i(t)\) is not equal to the average \(i_0\) because of the discrete nature of the electron emission. Then the stationary current is

\[\tag{11} I(t)=\int_{-\infty}^t \frac{\omega_0^2}{\omega} e^{-(t-t')R/2L}\sin\big(\omega\, (t-t')\big) \, i(t')\,dt'\]


where \(\omega={\big((LC)^{-1}-({R}/{2L})^2\big)}^{\frac12}\) and \(\omega_0=(LC)^{-\frac12}\ .\) Dividing time into intervals of size \(\tau\) small compared to the proper time of the circuit \(2\pi/\omega_0\ ,\) the current is regarded as piecewise constant and equal to \(i_k=\frac{m}\tau\,e\) with \(m\) distributed, independently over \(k\ ,\) according to a Poisson distribution with average \(n\,\tau\ :\) the probability of \(m\) is \(P(m)=e^{-n\tau}\frac{(n\tau)^m}{m!}\ .\) Denoting by \(\langle{F}\rangle \) the average (in time, in the present case) of an observable \(F\ ,\) it follows that \(\langle{i_k\,i_{k'}}\rangle =i_0^2+\delta_{k k'}\frac{e^2}{\tau^2}\big(\langle{m^2}\rangle-\langle{m}\rangle^2\big) =i_0^2+ \frac{n e^2}\tau \delta_{kk'}\ .\) This implies, discretizing the integral for \(I(t)\) into a sum over the intervals of time of size \(\tau\ ,\) that the average \(\langle{I(t)^2}\rangle=\frac{\omega_0^4\tau^2}{\omega^2} \sum_{k,k'=0}^\infty \langle{i_k i_{k'}}\rangle e^{-(k+k')\tau R/2L}\sin (\omega k\tau)\,\sin(\omega k'\tau)\) becomes

\[\tag{12} \langle{I(t)^2}\rangle =\frac{\omega_0^4}{\omega^2}\Big(\int_0^\infty e^{-t' R/2L}\sin(\omega t')\,dt'\Big)^2 i_0^2+\frac{\omega_0^4}{\omega^2}i_0 e\int_0^{\infty} e^{-t' R/L}\sin^2\omega t'\,dt'\]


which is evaluated as \(i_0^2+\frac{i_0 e\omega_0^2}{2R/L}=i_0^2+\frac{i_0 e}{2RC}\ .\) The heat generated in the resistor, per unit time, is \(R\langle{I^2}\rangle= Ri_0^2+\frac{i_0 e}{2C}\ .\) So imposing an average current \(i_0\) it is possible to measure (by means of a thermocouple) the difference \(R(\langle{I^2}\rangle -i_0^2)=\frac{i_0 e}{2C}\ ,\) obtaining in this way another example of a relation between fluctuations and dissipation and, also, a scheme of a method to measure the electron charge \(e\) (from Becker 1964).
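
As a numerical illustration of this measurement scheme (the circuit values are invented for the purpose), the excess dissipated power determines \(e\) once \(i_0\) and \(C\) are known:

    # Sketch: estimating the electron charge from the Schottky excess
    # power R(<I^2> - i_0^2) = i_0*e/(2C), with illustrative values.
    i0 = 1.0e-3           # A, imposed average current
    C  = 1.0e-9           # F, capacitance
    e  = 1.602176634e-19  # C, electron charge (used to fabricate the "data")

    excess_power = i0 * e / (2 * C)   # W, what the thermocouple measures
    print(f"excess power = {excess_power:.3e} W")

    e_estimate = 2 * C * excess_power / i0   # inverting the relation
    print(f"recovered e  = {e_estimate:.3e} C")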

Johnson-Nyquist noise

Nyquist's theorem provides a theoretical basis for the study of the voltage fluctuations occurring in an \((L,R,C)\)-circuit (i.e. a circuit with inductance \(L\ ,\) electrical resistance \(R\) and capacitance \(C\)), discovered by J.B. Johnson.

Let \(E_{ch}(t)\) denote the chaotic random electromotive voltage due to the discrete nature of the electricity carriers (electrons in metals, ions in electrolytes and gases). The circuit equation is

\[ L\dot I+ R I+C^{-1}Q= E_{ch}(t),\qquad \dot Q=I\]


Suppose that the noise has a frequency spectrum with frequencies equispaced by \(d\) (for simplicity) and time fluctuations which are Gaussian and decorrelated, i.e., denoting by \(\langle{F}\rangle \) the average of an observable \(F\) as above, \(E_{ch}(t)\) is a white noise:

\[\tag{13} E_{ch}(t)=d \sum_\nu E_\nu e^{i2\pi \nu t},\qquad \langle{\overline E_\nu E_{\nu'}}\rangle ={\mathcal E} \frac{\delta_{\nu\,\nu'}}{d}\]


so that \(\langle{{E}_{ch}(t){E}_{ch}(t')}\rangle ={\mathcal E}\,\delta(t-t')\ .\) Then the current \(I(t)\) is, if \(\omega_0^\pm=-\frac{R}{2L}\pm\sqrt{\frac14\big(\frac{R}L\big)^2-\frac1{LC}}\) and \(I(0)=0, Q(0)=0\ ,\)

\[\tag{14} I(t)=\int_0^t \frac{\omega_0^+e^{\omega_0^+(t-\tau)} -\omega_0^- e^{\omega_0^-(t-\tau) }}{\omega^+_0-\omega^-_0} \frac{E_{ch}(\tau)}{L}\,d\tau\,.\]


By equipartition at equilibrium, the average energy is \(\frac12L\, \langle{I(t)^2}\rangle =\frac12 k_B T\) (also equal to \(\frac12 C^{-1}\langle{Q^2}\rangle \)), so that, computing the integrals in \(I(t)^2\ ,\) expressed as above and as sums over the Fourier components \(E_\nu\ ,\) it follows that \(k_B T=L \,\lim_{t\to+\infty}\langle{I(t)^2}\rangle =C^{-1}\,\lim_{t\to+\infty}\langle{Q(t)^2}\rangle \) is given by \(k_B T=\frac{ {\mathcal E} }{2 R},\) or:

\[\tag{15} {\mathcal E}=2\beta^{-1} R\]


It is now possible to evaluate the contribution \(U_{\nu',\nu''}=d\sum_{\nu\in[\nu',\nu'']} E_{\nu}e^{i2\pi \nu t}\) to the voltage filtered on the frequency range \([\nu',\nu'']\) (via suitable filters); it has zero average and variance

\[\tag{16} \langle{U^2_{\nu',\nu''}}\rangle = d^2 \sum_{\nu\in[\nu',\nu'']} \langle{|E_\nu|^2}\rangle=(2 k_BT R) (\nu''-\nu')\]


If \(Y_\nu\) is the total transfer admittance of a circuit into which the considered \(L,R,C\) element is inserted, then the power generated in the element will be \(W=\langle{|d\sum_{\nu} E_\nu Y_\nu e^{2\pi i \nu t}|^2}\rangle \ ,\) hence given by

\[\tag{17} d \sum_\nu 2\beta^{-1} R|Y_\nu|^2\equiv\int_0^\infty 4 k_B T R|Y_\nu|^2d\nu\]


having used the symmetry between \(-\nu\) and \(\nu\) to have an integral over positive \(\nu\) only. The last two expressions give the fluctuation dissipation theorem of Nyquist: the \(\nu\)-independence of \(\mathcal E\) leads to an ultraviolet divergence which is removed if the quantum effects at large \(\nu\) are taken into account (analogously to the theory of the black body radiation divergence) (Nyquist 1928).
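
For a numerical sense of scale (illustrative values), the filtered voltage of (16) can be evaluated in the conventional one-sided form \(4k_BTR\,(\nu''-\nu')\) which combines the \(\pm\nu\) contributions as in (17):

    from math import sqrt

    # Sketch: rms Johnson-Nyquist voltage over a bandwidth, using
    # <U^2> = 4 k_B T R (nu'' - nu').  Values are illustrative.
    k_B = 1.380649e-23    # J/K
    T   = 300.0           # K
    R   = 1.0e3           # Ohm
    bandwidth = 1.0e4     # Hz, nu'' - nu'

    U_rms = sqrt(4 * k_B * T * R * bandwidth)
    print(f"U_rms = {U_rms:.3e} V")   # ~ 4e-7 V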

Langevin equation

Perhaps the best known instance of a fluctuation-dissipation relation is given by the theory of the Langevin equation for the motion of a particle of mass \(m\) moving in a viscous medium and subject to a chaotic random force \(\vec F_{ch}(t)\ ,\) uncorrelated in time:

\[\tag{18} m \ddot{\vec x}=-\lambda \dot{\vec x}+ \vec F_{ch}(t)\]


Proceeding as in the Nyquist theorem derivation and if \(\langle{ F_{ch,i}(t)F_{ch,j}(t')}\rangle \) \(={\mathcal F}^2\delta_{ij}\delta(t-t')\ ,\) \(i,j=x,y,z\ ,\)

\[\tag{19} m\langle{\dot{\vec x}^2}\rangle =3 k_B T=\frac{3}{2} {\mathcal F}^2\lambda^{-1}\]


establishing a connection between the chaotic background force and the dissipation coefficient represented by the viscosity. Considering the forced motion \(m \ddot{\vec x}=-\lambda \dot{\vec x}+ \vec F_{ext}(t)\) with \(\vec F_{ext}(t)=\vec F \,e^{i\omega t}\) (large compared to \(\vec F_{ch}(t)\) but still small), it follows that the average velocity induced by the periodic force is, for large \(t\ ,\) \(\langle{\dot {\vec x}}\rangle =\beta D(\omega)\,\vec F\,e^{i\omega t}\) with susceptibility

\[\tag{20} D(\omega)=\frac{ k_B T}{i\,m\, \omega+\lambda}\]


and \(D(\omega)\) can be checked to be expressible also in terms of the velocity fluctuations in the equilibrium state, \(C(\tau)\,\overset{\mathrm{def}}{=}\,\lim_{t\to\infty}\langle{\dot{\vec x}(t+\tau)\cdot\dot{\vec x}(t)}\rangle \ ,\) as

\[\tag{21} D(\omega)=\frac13\int_{0}^\infty e^{-i\omega \tau} C(\tau)\,d\tau\]


yielding a kind of fluctuation dissipation theorem (Langevin 1908;Kubo 1966).
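
A direct check by simulation is straightforward; the sketch below (an Euler-Maruyama discretization of (18) with illustrative units) verifies the equipartition relation (19), which fixes the noise strength as \({\mathcal F}^2=2\lambda k_B T\ :\)

    import numpy as np

    # Sketch: Euler-Maruyama integration of the Langevin equation (18),
    # m dv = -lambda*v*dt + F*dW per component, with F^2 = 2*lambda*k_B*T
    # as required by (19); checks that m<v^2> = 3 k_B T.
    rng = np.random.default_rng(5)
    m, lam, kT = 1.0, 2.0, 0.5
    F = np.sqrt(2 * lam * kT)              # noise strength from (19)
    dt, steps, particles = 1e-3, 10_000, 2_000

    v = np.zeros((particles, 3))
    for _ in range(steps):
        noise = rng.normal(0.0, np.sqrt(dt), size=v.shape)
        v += (-lam * v * dt + F * noise) / m

    print(f"m<v^2> = {m * (v**2).sum(axis=1).mean():.3f}   (3 k_B T = {3 * kT})")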

Fluctuation-Dissipation theorem

Finally consider a system in interaction with thermostats and subject to external non conservative forces \(\vec E\ .\) For \(\vec E=\vec 0\ ,\) and all thermostats at the same temperature, the system admits an invariant distribution \(\mu_0\ .\) For \(\vec E\ne\vec 0\) the phase space volume (of the system and the thermostats together), measured with \(\mu_0\ ,\) will contract, its rate of change being \(-\sigma(X,\dot X)\) where \(\sigma\) is defined, in terms of the microscopic equations of motion, as their divergence changed in sign; recall that if \( \dot \xi_i=\Gamma_i(\vec\xi)\ ,\) \( i=1,\ldots,n\ ,\) is a generic system of ordinary differential equations, its divergence is \(\sum_i {\partial \Gamma_i\over \partial \xi_i}(\vec\xi)\) and yields the variation per unit time of the volume of an infinitesimal volume element \(d\vec\xi\) around \(\vec\xi\ .\) Therefore \(\sigma\) depends on the \(6N\) phase space coordinates \((X,\dot X)\) (this holds if the metric on phase space is suitably chosen; if not, \(\sigma\) differs from the contraction rate by a term with zero average and zero fluctuations, which does not affect the analysis). The rate \(\sigma(X,\dot X;\vec E)\) vanishes for \(\vec E=\vec 0\ ,\) because \(\mu_0\) is invariant.

The distribution \(\mu_0\) is important because, if it is ergodic (as is often implicitly supposed), it allows one to express the time averages as phase space averages (also called ensemble averages), with probability \(1\) with respect to initial data randomly chosen with distribution \(\mu_0\ .\)

The physical interpretation of \(\sigma\) is as the entropy production rate in a motion starting from a microscopic configuration typical of the distribution \(\mu_0\) (i.e. selected with probability distribution \(\mu_0\)): it establishes a conjugation between forces \(\vec E\) and fluxes \(\vec J\) via

\[\tag{22} J_i(X,\dot X;\vec E)=\frac{\partial \sigma(X,\dot X;\vec E)}{\partial E_i}.\]


Let \( S^{\vec E}_t\) denote the solution flow of the equations of motion, associating with a generic initial datum \((X,\dot X)\) the datum \( S^{\vec E}_t(X,\dot X)\) into which it evolves in time \(t\ .\) Then for \(\vec E\ne\vec 0\) the system evolves in time reaching a stationary state in which any observable \(\Phi\) has a (phase space) average (hence, with probability \(1\ ,\) a time average) that can be defined by \(\langle{\Phi}\rangle =\lim_{t\to\infty} \mu_0(S^{\vec E}_t\Phi)\ ,\) where \(S^{\vec E}_t\Phi(X,\dot X)\) is defined by \(\Phi(S^{\vec E}_t(X,\dot X))\ .\) Then, setting \(\Phi= J_i(X,\dot X;\vec E)\) and \(J_i(\vec E)=\langle{J_i(X,\dot X;\vec E)}\rangle \ ,\) it follows that

\[ { J_i(\vec E)}= \int_0^\infty dt \,\frac{d}{dt}\int \mu_0(dX\, d\dot X)\, J_i(S^{\vec E}_t(X,\dot X);\vec E)\]


and, using the definition of phase space contraction, it is possible (Chernov et al. 1993) to check the exact relation:

\[\tag{23} { J_i(\vec E)}=\int_0^\infty \sigma(S^{\vec E}_{-t} (X,\dot X);\vec E)\,J_i(X,\dot X;\vec E)\, \mu_0(dX d\dot X)\]


Therefore, the susceptibilities \(L_{ij}=\frac{\partial J_i(\vec E)}{\partial E_j}\Big|_{\vec E=\vec 0}\) are (using that \(\sigma|_{\vec E=\vec 0}=0\) and ignoring the difficult discussion of the interchanges of limits, derivatives and integrals)

\[\tag{24} L_{ij}=\int_0^\infty dt \langle {J_j(X,\dot X;\vec 0)J_i(S_t^{\vec 0}(X,\dot X);\vec 0)}\rangle_{\vec E=\vec 0}\]


where the average is with respect to the equilibrium distribution \(\mu_0\ .\) This shows that the fluctuation-dissipation theorem can be formulated as: knowledge of the equilibrium correlations completely determines the susceptibilities.

The latter expression, known as the Green-Kubo formula, also shows Onsager reciprocity, \(L_{ij}=L_{ji}\ ,\) which holds under the assumption that the time evolution is reversible, i.e. that there exists a map \({\mathcal I}\) of phase space with \({\mathcal I}^2=\) identity, which anticommutes with the time evolution (\({\mathcal I} S_t^{\vec 0}=S_{-t}^{\vec 0}{\mathcal I}\)) and which leaves \(\mu_0\) invariant (\(\mu_0\circ{\mathcal I}=\mu_0\)) (Kubo 1966).
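
In applications the Green-Kubo formula is used by integrating an estimated equilibrium flux autocorrelation; a minimal sketch (the synthetic, exponentially correlated flux merely stands in for real data, so that the exact answer \(A\,t_c\) is known):

    import numpy as np

    # Sketch: Green-Kubo estimate L = integral_0^inf <J(0)J(t)> dt from a
    # time series.  The synthetic flux is an AR(1) process with
    # <J(0)J(t)> = A*exp(-t/tc), so the exact answer is L = A*tc.
    rng = np.random.default_rng(6)
    dt, n, tc, A = 0.01, 200_000, 0.5, 2.0

    phi = np.exp(-dt / tc)
    J = np.empty(n)
    J[0] = rng.normal(0.0, np.sqrt(A))
    eps = rng.normal(0.0, np.sqrt(A * (1 - phi**2)), n)
    for k in range(1, n):
        J[k] = phi * J[k - 1] + eps[k]

    kmax = int(10 * tc / dt)                 # integrate out to ~10 tc
    acf = [np.mean(J[: n - k] * J[k:]) for k in range(1, kmax)]
    L = dt * (0.5 * np.mean(J * J) + np.sum(acf))   # trapezoid rule
    print(f"estimated L = {L:.3f}   (exact A*tc = {A * tc})")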

If time reversibility also holds for \(\vec E\ne \vec 0\ ,\) as is often the case, the fluctuation-dissipation theorem can also be shown to be generalized by the fluctuation theorem (Gallavotti 1996).

The exchange of limits involved in the derivation of the fluctuation-dissipation theorem can be completely discussed under the extra chaotic hypothesis (Gallavotti and Ruelle 1997; Gallavotti 2000), or even under weaker and earlier versions of it (Chernov, Eyink, Lebowitz and Sinai 1993). However it has to be kept in mind that even in this case the proof only states a property of the susceptibilities at zero forcing: per se this does not even imply that the fluctuation-dissipation relation is observable, because checking it requires measuring currents at small but non zero forcing (no current flows at zero forcing). No good estimates, and in most cases no estimates at all, of the range of (approximate) validity of the proportionality between currents and forces are available in the few concrete cases in which a proof is possible. In the literature doubts have also been raised about the physical relevance of the above derivation of the fluctuation-dissipation theorem: of course no one doubts the validity of the reciprocity relations (and the corresponding linear response), only the correctness of their explanation. It might even be that the susceptibility at non zero (small) field is not a smooth function of the field, one which can only be interpolated better and better as the field tends to zero. See (Van Kampen, 1971).

Blue of the sky

The fluctuation-dissipation theorem has many more applications: it is worth mentioning the explanation of the color of the sky via Rayleigh scattering.

If light of frequency \(\nu=\omega/2\pi\) arrives in a gaseous medium it is scattered, and the scattered power is a fraction depending on the frequency, or on the wavelength, as \(W\propto\omega^4\propto\lambda^{-4}\) (more precisely \(W=\frac23\frac{e^2}{c^3}\omega^4\big(\frac{e}{m\,(\omega^2- \omega_0^2)}\big)^2\) if \(\omega_0/2\pi\) is the frequency of the external electronic orbit of the molecules). Hence \(\frac{W_{blue}}{W_{red}}=\Big(\frac{6.5\cdot10^3 A^o}{4.5 \cdot10^3 A^o}\Big)^4\sim 4.3\ .\) So much more blue light is scattered, or more properly absorbed and re-emitted in a spherically symmetric way.
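
A one-line numerical check of the ratio (with wavelengths \(4.5\cdot10^3\,A^o\) and \(6.5\cdot10^3\,A^o\) as above):

    # Sketch: Rayleigh lambda^{-4} law, ratio of scattered power blue/red.
    ratio = (6.5e3 / 4.5e3) ** 4   # (lambda_red / lambda_blue)^4
    print(f"W_blue / W_red = {ratio:.2f}")   # ~ 4.35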

A light wave hitting a region of the wavelength size in air will simultaneously excite many atoms, but the phases of the electric field across the region will be different. If \(n_1,n_2\) denote the (large, for air in normal conditions) numbers of atoms actually present in two such adjacent regions of half wavelength size, which radiate with opposite phases (\(E_2\simeq -E_1\)), the electric (or magnetic) field seen by an observer is \(\sim n_1 E_1+n_2 E_2=(n_1-n_2) E_1\ ,\) which has \(0\) average. However the scattered power is proportional to the average of the square \(\langle{(n_1-n_2)^2}\rangle =\langle{n_1}\rangle +\langle{n_2}\rangle \) because the numbers of atoms are Poisson distributed, so that the intensity of the scattered light is proportional to the single atom contribution \(I_1\) times \((\langle{n_1}\rangle +\langle{n_2}\rangle )\) and the destructive interference is not complete: hence blue light dominates in the color of the sky (Purcell 1968).

Correlated fluctuations

The fluctuations due to uncorrelated events discussed so far have quite different properties compared to the fluctuations of correlated events. The subject is very wide and arises in studying processes describing paths in time (continuous or discrete) as well as processes in which the events carry labels with a different physical meaning. For instance the events could be associated with the place in space where they develop: \(x_\xi\) could be events labeled by the sites \(\xi\) of a \(d\)-dimensional lattice \(Z^d\ ,\) or they could be labeled by \(\xi\in R^d\) when they happen at the positions of a continuous \(d\)-dimensional space. The latter examples are called stochastic fields and also cover the already considered processes in time, because time can be regarded as just one more coordinate.

A paradigmatic example is the field \(\sigma_\xi\) where the event \(\sigma_\xi\) is the value of a spin located at the site \(\xi\in Z^d\ .\) There are many probability distributions that can be considered on such a field, and it is convenient to restrict attention to translation invariant ones, i.e. distributions which attribute the same probability to the event in which the values \(\sigma_1,\ldots,\sigma_n\) of the field occur at the sites \(\xi_1,\ldots,\xi_n\) or at the translated sites \(\xi_1+x,\ldots,\xi_n+x\ ,\) \(x\in Z^d\ .\)

The translation invariant distributions are a natural generalization of the independent distributions or of the independent increments distributions. Similar questions can be raised about them: for instance, given a cube \(\Lambda\subset Z^d\) with \(|\Lambda|\) lattice points, it is interesting to study the probability that the magnetization \(\sum_{\xi\in\Lambda}\sigma_{\xi}\) falls in \([\sqrt{|\Lambda|}\, a, \sqrt{|\Lambda|}\, b]\ .\)
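
For independent spins the \(\sqrt{|\Lambda|}\) scaling is just the error law again; a sketch (illustrative sizes) shows that the variance of the scaled magnetization is independent of the box, the Gaussian behavior against which the critical case discussed next is to be contrasted:

    import numpy as np

    # Sketch: magnetization of a box Lambda of |Lambda| independent +/-1
    # spins, scaled by sqrt(|Lambda|): its variance stays ~1 for every
    # box size; critical systems instead require the anomalous scaling
    # |Lambda|^{1/alpha} with alpha != 2.
    rng = np.random.default_rng(7)
    samples = 50_000
    for side in (8, 16, 32, 64):
        n = side ** 2                        # |Lambda| in dimension d = 2
        m = (2 * rng.binomial(n, 0.5, samples) - n) / np.sqrt(n)
        print(f"|Lambda| = {n:5d}: var(scaled magnetization) = {m.var():.3f}")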

Such a scaled magnetization might be expected to be Gaussian, as in the case of independent variables. However, already in the simple case in which the probability of a value \(\sigma_\xi=\pm1\) at \(\xi\ ,\) given the other field values, depends just on the field values at the nearest neighbor sites (the Ising model), it is possible (although exceptional) that the quantity \(s \,\overset{\mathrm{def}}{=}\,{|\Lambda|^{-\frac{1}\alpha}}\sum_{\xi\in\Lambda}\sigma_{\xi}\) not only fails to have a Gaussian distribution in the limit \(|\Lambda|\to\infty\) but has a nontrivial limit only if \(\alpha\) is suitably chosen (\(\alpha=\frac87\) if the lattice dimension is \(d=2\)) and different from \(\alpha=2\ .\) This is the phenomenon of critical fluctuations, which is of great importance in statistical mechanics and in the theory of phase transitions but would lead us too far in the present context (Gallavotti 2000).

References

  • L. Bachelier. Théorie de la spéculation. Annales Scientifiques de l'École Normale Supérieure, 17:21-36, 1900.
  • R. Becker. Electromagnetic Fields and Interactions. Blaisdell, New York, 1964.
  • G. Cantoni. Su alcune condizioni fisiche dell'affinità e sul moto browniano. Il Nuovo Cimento, 27:156-167, 1867.
  • N. I. Chernov, G. L. Eyink, J. L. Lebowitz, and Ya. G. Sinai. Derivation of Ohm's law in a deterministic mechanical model. Physical Review Letters, 70:2209-2212, 1993.
  • B. Duplantier. Brownian motion, "diverse and undulating", in Einstein, 1905-2005: Poincaré Seminar 2005 (Th. Damour, O. Darrigol, B. Duplantier and V. Rivasseau, editors), pp. 201-293. Birkhäuser Verlag, Basel, 2006.
  • A. Einstein. On the motion of small particles suspended in liquids at rest, required by the molecular-kinetic theory of heat. Annalen der Physik, 17:549-560, 1905.
  • A. Einstein. Investigations on the Theory of the Brownian Movement. Dover (reprint), New York, 1956.
  • G. Gallavotti. Extension of Onsager's reciprocity to large fields and the chaotic hypothesis. Physical Review Letters, 77:4334-4337, 1996.
  • G. Gallavotti. Statistical Mechanics. A Short Treatise. Springer Verlag, Berlin, 2000.
  • G. Gallavotti and D. Ruelle. SRB states and nonequilibrium statistical mechanics close to equilibrium. Communications in Mathematical Physics, 190:279-285, 1997.
  • K. Gauss. Theory of the Motion of Heavenly Bodies Moving about the Sun in Conic Sections. Dover (translation), New York, 1971.
  • B. V. Gnedenko and A. N. Kolmogorov. Limit Distributions for Sums of Independent Random Variables. Addison-Wesley, Reading, 1968.
  • R. Kubo. The fluctuation-dissipation theorem. Reports on Progress in Physics, 29:255-284, 1966.
  • P. Langevin. Sur la théorie du mouvement brownien. Comptes Rendus de l'Académie des Sciences, Paris, 146:530-533, 1908.
  • H. Nyquist. Thermal agitation of electric charge in conductors. Physical Review, 32:110-113, 1928.
  • A. Pais. Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press, 1982.
  • J. Perrin. Les atomes (reprint). Gallimard, Paris, 1970.
  • E. M. Purcell. Berkeley Physics Course: Electricity and Magnetism (Vol. 2). McGraw-Hill, New York, 1968.
  • M. R. von Smolan Smoluchowski. Essay on the theory of Brownian motion and disordered media. Rozprawy Kraków, 46A:257-281, 1906; French translation: Essai d'une théorie du mouvement brownien et de milieux troubles, Bulletin International de l'Académie des Sciences de Cracovie, 577-602, 1906; German translation: Annalen der Physik, 21:755-780, 1906.
  • W. Sutherland. The measurement of large molecular masses. Report of the 10th Meeting of the Australasian Association for the Advancement of Science, Dunedin, 117-121, 1904.
  • W. Sutherland. A dynamical theory for non-electrolytes and the molecular mass of albumin. Philosophical Magazine, S. 6, 9:781-785, 1905.



Recommended reading

  • J. Perrin, see above reference
  • B. Duplantier, see above reference
  • A. Pais, see above reference
  • G. Gallavotti, Ch. 8 of Statistical Mechanics


See also

Anosov Diffeomorphism, Chaos, Chaotic Hypothesis, Fluctuations, Entropy, Ergodic Theory, Smooth Dynamics
