24

Recently I have had some exchanges with @Marek regarding the entropy of a single classical particle.

I always believed that to define entropy one must have some distribution. In quantum theory a single particle can have entropy, and I can easily understand that. But I never knew that the entropy of a single rigid classical particle is a well-defined concept, as Marek claimed, and I still fail to understand it. One can say that in the classical limit the entropy of a particle in QT can be defined and that it corresponds to the entropy of a single classical particle, but I have difficulty accepting that this gives the entropy of a single Newtonian particle. In my understanding, if a system has entropy then it should also have some temperature, and I don't understand how one would assign any temperature to a single classical particle. I came across a paper where there is a notion of "microscopic entropy"; in my limited understanding it did not correspond to the usual concept of entropy at all. I am curious to know what the right answer is.

So, my question is: is it possible to define the entropy of a single classical particle?

  • There are many aspects of entropy (and most of these are indeed related to each other). Furthermore, entropy is often misunderstood or misrepresented in expositions, and its range of application is narrowed. Leaving these aside for a minute: indeed a single particle (e.g. as a dynamical system) can have entropy (e.g. Kolmogorov-Sinai entropy), or a topological entropy, etc. This can go a long way; I'll just leave it here for this comment though – Nikos M. Jun 17 '14 at 17:04
  • Related, can a single photon have entropy? https://physics.stackexchange.com/q/749311/226902 "Has the entropy of a single photon ever been measured?" – Quillo Feb 11 '23 at 13:54

8 Answers

19

First of all we must distinguish between two things that are called entropies. There's a microscopic entropy, also called Shannon entropy, which is a functional over the possible probability distributions you can assign to a given system:

$\displaystyle H[p] = -\sum_{x \in \mathcal{X}}\; p(x) \log(p(x))$

where $\mathcal{X}$ is the set where your variable $x$ takes values. And there's a "macroscopic entropy", which is merely the value of the functional above evaluated on a specific family of distributions parametrized by some variable $\theta$:

$S(\theta)=-\sum_{x \in \mathcal{X}}\; p(x|\theta) \log(p(x|\theta))$

Now, what happens in thermodynamics and equilibrium statistical physics is that you have a specific family of distributions to substitute into the first expression: the Gibbs equilibrium distribution:

$p(x | V, T, N) = \frac{1}{Z}e^{-\frac{E(x)}{T}}$

where, as an example, we have as parameters the volume, temperature and number of particles, and $E(x)$ is the energy of the specific configuration $x$. If you substitute this specific family of distributions into $H[p]$, what you get is the thermodynamic equilibrium entropy, and this is what physicists usually call entropy: a state function depending on the parameters of the Gibbs distribution (as opposed to a functional that associates a real value with each possible choice of distribution). Now, to find the appropriate physical equilibrium for this system when those parameters are allowed to vary, you must maximize this entropy (1).
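
As a concrete (and entirely toy) illustration of the difference between the functional $H[p]$ and the state function $S(\theta)$, the sketch below evaluates the Shannon entropy of a discrete Gibbs distribution for a handful of made-up energy levels, in units where $k_B = 1$; the energies and temperatures are assumptions for illustration only.

```python
# A minimal sketch: the Shannon functional H[p] evaluated on a toy discrete
# Gibbs family p(x|T) ~ exp(-E(x)/T), giving the "macroscopic" entropy S(T).
# Energies and temperatures are made up; k_B = 1.
import numpy as np

def shannon_entropy(p):
    """H[p] = -sum_x p(x) log p(x), ignoring zero-probability states."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def gibbs_distribution(energies, T):
    """p(x|T) = exp(-E(x)/T) / Z."""
    w = np.exp(-np.asarray(energies, dtype=float) / T)
    return w / w.sum()

energies = np.array([0.0, 1.0, 2.0, 5.0])   # assumed toy spectrum
for T in (0.1, 1.0, 10.0):
    S = shannon_entropy(gibbs_distribution(energies, T))
    print(f"T = {T:5.1f}  ->  S(T) = {S:.4f}")
```

At low $T$ the distribution collapses onto the ground state and $S \to 0$; at high $T$ it becomes uniform and $S \to \log 4$.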

Now, here it's common to make the following distinction: $x$ is a microscopic variable that specifies the detailed configuration of the system, and V, T and N are macroscopic parameters. It doesn't need to be so. In the specific case of statistical physics, the origin of the distribution function is the fact that there are so many degrees of freedom that it's impossible (and even undesirable) to follow them all, so we are satisfied with a statistical description. Under these assumptions it's natural to expect that the distribution would be over microscopic variables with macroscopic parameters. But this is not the only reason why one would use a distribution function.

You could have other sources of ignorance. As an example, you could have the following problem: suppose we recently discovered a new planet in a solar system where there are 2 more planets. Its position $\vec{x}$ and velocity $\vec{v}$ at a given instant $t = 0$ have been measured within some precision $\sigma_x$ and $\sigma_v$. Let's assume that the sources of possible errors in the measurements are additive. Then it's reasonable to assume that we have a Gaussian probability distribution for the position of the planet:

$\displaystyle p(\vec{x}(0), \vec{v}(0) | \sigma_x, \sigma_v) =\frac{1}{Z} \exp\left(-\frac{|\vec{x}(0)|^2}{2\sigma_x^2} -\frac{|\vec{v}(0)|^2}{2\sigma_v^2} \right)$

where $Z$ is some normalization constant. Now suppose we want to predict this planet's position in the future given the current positions of the other planets and their uncertainties. We would have a distribution:

$\displaystyle p(\vec{x}(t), \vec{v}(t) | \vec{x}_i(0), \vec{v}_i(0), \sigma_{x,i},\sigma_{v,i}) = p(\vec{x}(0), \vec{v}(0) | \sigma_x, \sigma_v)\,\prod_{i=1}^{2} p(\vec{x}_i(0), \vec{v}_i(0) | \sigma_{x,i},\sigma_{v,i}) \times p(\vec{x}(t), \vec{v}(t) | \vec{x}(0), \vec{v}(0),\vec{x}_1(0), \vec{v}_1(0), \vec{x}_2(0), \vec{v}_2(0))$

where $p(\vec{x}(t), \vec{v}(t) | \vec{x}(0), \vec{v}(0),\vec{x}_1(0), \vec{v}_1(0), \vec{x}_2(0), \vec{v}_2(0))$ would take Newton's equations of motion into account. Note that there's a small number of particles here: just 3. And the only source of "randomness" is the fact that I don't know the positions and velocities precisely (for a technological reason, not a fundamental one: I have limited telescopes, for example).

I can substitute this distribution into the definition of entropy and calculate a "macroscopic entropy" that depends on time and on the positions, velocities and measurement precisions of the other planets:

$S(t, \vec{x}_i, \vec{v}_i,\sigma_{x,i},\sigma_{v,i}) = - \int d\vec{x}\, d\vec{v}\; p(\vec{x}, \vec{v} | t, \vec{x}_i, \vec{v}_i, \sigma_{x,i},\sigma_{v,i}) \log \left[p(\vec{x}, \vec{v} | t, \vec{x}_i, \vec{v}_i, \sigma_{x,i},\sigma_{v,i})\right]$

What does this entropy mean? Something quite close to what thermodynamic entropy means! It is the logarithm of the average configuration-space volume where I expect to find the given planet at instant $t$ (2). And it is the entropy of a "single particle".
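
To make the planet example concrete, here is a heavily simplified, hedged sketch: the motion is reduced to one dimension and the gravitational pull of the other planets is ignored, so $x(t) = x(0) + v(0)\,t$ with independent Gaussian uncertainties. The values of $\sigma_x$, $\sigma_v$ and the times are made up; the point is only that the entropy of the position marginal grows as the initial uncertainty spreads.

```python
# A hedged sketch of the planet example, simplified to 1D free motion
# (no gravity from the other planets): x(t) = x(0) + v(0)*t with Gaussian
# uncertainties. The differential entropy of the position marginal,
# S = 1/2 * log(2*pi*e*Var[x(t)]), grows as the initial uncertainty spreads.
import numpy as np

sigma_x = 1e6      # assumed position uncertainty at t = 0 (arbitrary units)
sigma_v = 1e2      # assumed velocity uncertainty at t = 0

def position_entropy(t, sigma_x, sigma_v):
    """Differential entropy of x(t) = x(0) + v(0)*t for independent Gaussians."""
    var_t = sigma_x**2 + (t * sigma_v)**2
    return 0.5 * np.log(2.0 * np.pi * np.e * var_t)

for t in (0.0, 1e4, 1e6):
    print(f"t = {t:8.0e}   S(t) = {position_entropy(t, sigma_x, sigma_v):.3f}")
```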

There's no problem with that. I can even have situations where I must maximize this entropy! Suppose I don't know the position of planet 2, but I do know that all three planets have coplanar orbits. There are well-defined procedures in information and inference theory which tell me that one way of dealing with this is to find the value of $\vec{x}_2$ that maximizes the entropy, subject to the constraint that all orbits lie in the same plane, and then substitute this value into the original distribution. This is often called the "principle of maximum ignorance".
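
Here is a minimal sketch of that maximization step, applied not to the planetary problem itself but to a standard toy version: among all distributions on a finite set with a prescribed mean, the entropy-maximizing one has the exponential (Gibbs) form, and its parameter can be found by bisection. The support, target mean and tolerances are all assumptions for illustration.

```python
# A sketch of the maximum-entropy step for a toy constraint: among all
# distributions on x in {0,...,9} with a prescribed mean, entropy is maximized
# by an exponential family p(x) ~ exp(-beta*x); beta is found by bisection so
# that the mean constraint is met. Numbers are illustrative only.
import numpy as np

x = np.arange(10.0)
target_mean = 2.0

def gibbs(beta):
    logw = -beta * x
    w = np.exp(logw - logw.max())          # stabilized exponential weights
    return w / w.sum()

lo, hi = -50.0, 50.0                        # mean(beta) decreases from ~9 to ~0
for _ in range(100):                        # bisection on beta
    mid = 0.5 * (lo + hi)
    if gibbs(mid) @ x > target_mean:
        lo = mid                            # mean too large -> need larger beta
    else:
        hi = mid

p = gibbs(0.5 * (lo + hi))
print("mean    :", p @ x)                              # ~2.0
print("entropy :", -np.sum(p * np.log(p)))             # max-entropy value under the constraint
```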

There are interpretations of thermodynamics and statistical physics as an instance of this type of inference problem (please refer to the works of E. T. Jaynes; I'll give a list of references below). In this interpretation there's nothing special about the fact that you have many degrees of freedom, besides the fact that this is what makes you ignorant about the details of the system. This ignorance is what brings probabilities, entropies and maximum entropy principles to the table.

Rephrasing it a bit: probabilities and entropies are part of your description when ignorance is built into your model. This ignorance could be a fundamental one - you can't know something about your system; it could be a technical one - you could know if you had better instruments; and even, as in the case of statistical physics, a deliberate one - you could know, at least in principle, but you choose to leave detail out because it isn't relevant on the scale you're interested in. But the details about how you use probabilities, entropies and maximum entropy principles are completely agnostic to what the sources of your ignorance are. They are a tool for dealing with ignorance, no matter the reasons why you are ignorant.

(1) For information-theoretic arguments for why we have to maximize entropy in thermodynamics, please refer to E. T. Jaynes' famous book "Probability Theory: The Logic of Science" (3) and this series of articles:

Jaynes, E. T., 1957, Information Theory and Statistical Mechanics, Phys. Rev., 106, 620

Jaynes, E. T., 1957, Information Theory and Statistical Mechanics II, Phys. Rev., 108, 171.

Another interesting source:

Caticha, A., 2008, Lectures on Probability, Entropy and Statistical Physics, arXiv:0808.0012

(2) This can be given a rigorous meaning within information theory. For any distribution $p(x)$, let the set $A_\epsilon$ be defined as the smallest set of points with probability greater than $1 - \epsilon$. Then the size of this set must be of order:

$\log |A_\epsilon| = S + O(\epsilon)$

For another form of this result see the book "Elements of Information Theory" by Cover and Thomas.
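
A hedged numerical illustration of the i.i.d. version of this statement (the form proved in Cover and Thomas): for $n$ independent Bernoulli($p$) symbols, the smallest set of sequences carrying probability $1-\epsilon$ has $\tfrac{1}{n}\log_2|A_\epsilon|$ approaching the entropy $H(p)$ as $n$ grows. The choices $p = 0.3$ and $\epsilon = 0.1$ below are arbitrary.

```python
# A sketch of the typical-set statement for n i.i.d. Bernoulli(p) symbols:
# fill the smallest high-probability set with the most probable sequences
# (fewest ones first, since p < 1/2) and compare log2(size)/n with H(p).
import math

def smallest_set_exponent(n, p=0.3, eps=0.1):
    """Roughly log2|A_eps| / n, where A_eps is the smallest set of length-n
    sequences with total probability >= 1 - eps."""
    q = 1.0 - p
    mass, count = 0.0, 0
    for k in range(n + 1):
        # all sequences with k ones share probability p^k q^(n-k)
        log_comb = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        mass += math.exp(log_comb + k * math.log(p) + (n - k) * math.log(q))
        count += math.comb(n, k)
        if mass >= 1.0 - eps:
            break
    return math.log2(count) / n

H = -0.3 * math.log2(0.3) - 0.7 * math.log2(0.7)   # ~0.881 bits per symbol
for n in (50, 200, 1000, 4000):
    print(n, round(smallest_set_exponent(n), 4), "vs H =", round(H, 4))
```

The exponent converges slowly from above, as expected for a finite-$n$ estimate.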

(3) Some of Jaynes's rants about quantum theory in this book may appear odd today, but let's excuse him; he committed some errors too. Just focus on the probability theory, information theory and statistical physics material, which is quite amazing. :)

(4) It seems that dealing with these kinds of problems from celestial mechanics was actually one of the first things that made Laplace interested in probabilities, and apparently he used probability theory in his celestial mechanics calculations. The other problem that drew his attention towards probability theory was... gambling! Hahaha...

  • +1 for a great answer and for mentioning Jaynes' work. The only beef I have with your argument is that while you can define an entropy for an individual planet, it appears you need a many body system for that definition to work. –  Feb 03 '11 at 19:43
  • 4
    thanks :). I don't understand why I would need many bodies, though. That entropy is the legitimate entropy of a single particle, and I can do with it anything I can do with any other entropy. – Rafael S. Calsaverini Feb 03 '11 at 19:45
  • Thanks for this elaborate answer :). However, for some vague (possibly illogical and ignorant) reasons I personally feel some unease about information theoretic generalizations of physical concepts. I don't know why! Perhaps because deep down, naively, I feel that physics is more fundamental than information theory! –  Feb 04 '11 at 17:23
  • 3
    @sb1 Thanks :). You know, I have the exact opposite feeling. I think information theory has a more fundamental status. In essence, information theory is about general rules for how we acquire quantitative knowledge about things. And... physics is a specific instance of this problem. :) – Rafael S. Calsaverini Feb 04 '11 at 19:30
5

The concept of entropy is very difficult because of the following day-to-day fact: when we have a macroscopic mechanical system, we can look at the system all the time, and know exactly where everything is. In such a situation, we know what each particle is doing at all times, the evolution is deterministic, and the concept of entropy is meaningless.

But the process of looking at particles to find out where they are always produces entropy. Acquiring information about the positions of molecules cannot be done in a way that decreases the entropy of the particles plus the measuring devices. This is an important point, and it can be proven easily from Liouville's theorem. If you start off ignorant of the position of a particle, it occupies some phase-space volume. The only way to shrink that volume is to couple trajectories so that you correlate the trajectory of the atoms in a measuring device with the trajectory of the particle. You can do this by adding an interaction Hamiltonian, and this will reduce the phase-space volume of the particle given the measuring-device trajectory, but the total phase-space volume is conserved in the process, so there is uncertainty in the measuring-device trajectories which more than compensates for the loss of uncertainty in the position of the particle.

The conservation of phase-space probability volume is counterintuitive, because we have the intuition that looking classically at particles doesn't disturb them. In fact, if you bounce very weak classical EM radiation off the particles, you can see them without disturbing them. But this is because classical fields do not have a thermal equilibrium: when they are near zero over all space, they are infinitely cold. So what you are doing is dumping the entropy of the particles into the infinite zero-temperature reservoir provided by the field, and extracting the position from the field in the process.

If you put the field on a lattice, to avoid the classical Rayleigh-Jeans divergence in the thermal equilibrium, then you can define a thermal state for the classical field. If the field is in this thermal state, it gives you no information about the particle positions. If you add a little bit of non-thermal field to measure the particles with, the interaction with the particles dumps the phase-space volume of the original uncertainty in the particles' positions into the field, with a finite entropy cost per bit acquired, just by Liouville's theorem.

The entropy is a well-defined classical quantity, even for a single particle. When you have no information about the particle's position but you know its energy, the entropy is given by the information-theoretic integral of $-\rho\log\rho$. You can extract as much information as you want about the particle by measuring its position more and more accurately, but this process will always dump an equal amount of entropy into the measuring device. All this follows from Liouville's theorem.
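
A minimal numerical sketch of the Liouville point above, under assumed toy numbers: for a Gaussian phase-space density, the differential entropy depends only on $\det\Sigma$, and a volume-preserving linear flow (here free streaming, with a unit-determinant matrix) leaves it exactly constant.

```python
# A sketch: for a Gaussian phase-space density, the differential entropy
# 1/2*log((2*pi*e)^d det(Sigma)) depends only on det(Sigma), and a
# volume-preserving (det = 1) linear flow such as free streaming leaves it
# exactly constant. All numbers are illustrative.
import numpy as np

def gaussian_phase_space_entropy(Sigma):
    """Differential entropy of a (x, v) Gaussian with covariance Sigma."""
    d = Sigma.shape[0]
    return 0.5 * np.log(((2.0 * np.pi * np.e) ** d) * np.linalg.det(Sigma))

Sigma0 = np.diag([1.0, 0.25])            # assumed initial (x, v) uncertainties
for t in (0.0, 1.0, 10.0, 100.0):
    M = np.array([[1.0, t],              # free streaming: x -> x + v t, v -> v
                  [0.0, 1.0]])           # det(M) = 1, so phase-space volume is preserved
    Sigma_t = M @ Sigma0 @ M.T
    print(f"t = {t:6.1f}   S = {gaussian_phase_space_entropy(Sigma_t):.6f}")
```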

This is the reason that entropy is often confusing. When discussing entropy, you need to take into account what you know about the system, much as in quantum mechanics.

4

Entropy is a concept in thermodynamics and statistical physics but its value only becomes indisputable if one can talk in terms of thermodynamics, too.

To do so in statistical physics, one needs to be in the thermodynamic limit i.e. the number of degrees of freedom must be much greater than one. In fact, we can say that the thermodynamic limit requires the entropy to be much greater than one (times $k_B$, if you insist on SI units).

In the thermodynamic limit, the concept of entropy becomes independent of the chosen ensemble - microcanonical vs canonical etc. - up to corrections that are negligible relative to the overall entropy (in either ensemble).

A single particle, much like any system, may be assigned an entropy of $\ln(N)$ where $N$ is the number of physically distinct but de facto indistinguishable states in which the particle may be. So if the particle is located in a box and its wave function may be written as a combination of $N$ small wave packets occupying appropriately large volumes, the entropy will be $\ln(N)$.
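
A toy illustration of this counting, with an assumed coarse graining: a particle equally likely to be in any of $N$ indistinguishable cells has entropy $\ln N$, and the value plainly depends on the chosen $N$, in line with the point below that this is not a high-precision notion.

```python
# A toy illustration of S = ln(N): a particle equally likely to be in any of N
# coarse-grained cells has entropy ln(N); the value depends on the (chosen)
# coarse graining.
import numpy as np

for N in (2, 10, 1000):
    p = np.full(N, 1.0 / N)                    # uniform over N indistinguishable cells
    S = -np.sum(p * np.log(p))                 # equals ln(N)
    print(f"N = {N:5d}   S = {S:.4f}   ln(N) = {np.log(N):.4f}")
```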

However, the concept of entropy is simply not a high-precision concept for systems away from the thermodynamic limit. Entropy is not a strict function of the "pure state" of the system; if you want to be precise about the value, it also depends on the exact ensemble of the other microstates that you consider indistinguishable.

If you consider larger systems with $N$ particles, the entropy usually scales like $N$, so each particle contributes something comparable to 1 bit to the entropy - if you equally divide the entropy. However, to calculate the actual coefficients, all the conceivable interactions between the particles etc. matter.

Luboš Motl
  • 179,018
  • "the concept of entropy is simply not a high-precision concept for systems away from the thermodynamic limit.." that was exactly my point. Thanks :). –  Feb 03 '11 at 17:08
  • @sb1: I guess we can agree on this formulation. But you said that entropy can't be defined at all, which is something completely different... – Marek Feb 03 '11 at 17:29
  • 6
    @Luboš: I disagree with your first paragraph. Concept of entropy as a universal measure of information (as embodied e.g. in Shannon's entropy) is far more general than the variable we know from thermodynamics. In particular, it is also used in non-equilibrium statistical physics and elsewhere ;) – Marek Feb 03 '11 at 17:33
  • 1
    @Marek you must be joking! Please re-read the relevant portions of my answer. –  Feb 03 '11 at 17:41
  • @sb1: okay, I will quote you because you apparently already forgot what you had said: "You can define energy of a single particle but not entropy." ;) – Marek Feb 03 '11 at 17:46
  • @Marek: You apparently missed the context. Can a single particle (implicitly assumed to be free as per the question) have any entropy? What is its information content (or lack there of) when it is freely moving with constant velocity? –  Feb 03 '11 at 17:54
  • @Marek: You didn't quote this comment of mine in the same answer! So let me quote it. " It is possible to define a quantity of a single particle in an ensemble as "microscopic entropy". Here, temperature is the surrounding temperature of the particle. see arxiv.org/PS_cache/arxiv/pdf/1005/1005.2751v3.pdf Obviously it has nothing to do with the above question" –  Feb 03 '11 at 18:21
  • 2
    @sb1: well, if you know definite position and velocity then the entropy is zero. If there is any uncertainty then the entropy will be non-zero and computable in terms of the usual law. But in any case it's possible to define and compute which is basically the only thing I disagreed with you about. – Marek Feb 03 '11 at 18:32
  • @sb1: so if it has nothing to do with the said question, why did you mention it and why are you talking about it again? :) – Marek Feb 03 '11 at 18:33
  • 1
    @Marek I am tired with this. I mentioned it to bring it to notice that I wanted to say the meaning of entropy is not well defined in case of a single classical particle. Bye and have a nice day. –  Feb 03 '11 at 18:38
  • @sb1: yeah, let's just agree to disagree. Have a nice day too. – Marek Feb 03 '11 at 18:45
  • 1
    Dear @Marek, I agree with you that you disagree with me - and now with sb1. ;-) If you think you can calculate the entropy, what is the entropy of 1 Hydrogen atom moving by speed 100 m/s in a ball of radius 1 meter? I am very curious about your answer haha. And please, don't ask me what is the temperature of the walls. They are perfectly reflective. – Luboš Motl Feb 03 '11 at 18:58
  • 2
    @Luboš: the problem is not well-posed but assuming no internal degrees of freedom (electron in a ground state, say) and full classical knowledge of the position, the entropy will be precisely zero. More precisely it will be given by uncertainty in the precision with which we can measure the position. If the problem is that we can't measure the particle's position inside the ball at all (as would be the case if it was a constituent of a gas) then the entropy would be proportional to $\log V_{ball}$. In any case, there is no problem with entropy, only with how the problem is posed precisely. – Marek Feb 03 '11 at 19:28
  • Dear @Marek, there is no problem with entropy? I don't claim that there is a "problem with entropy" in the sense of a paradox. I just say that the notion doesn't make any sense for a single particle. And your formula doesn't make any sense, either. $\log V_{ball}$ can't be calculated because $V_{ball}$ is not even dimensionless which is what arguments of a logarithm have to be. So you mean some $\ln(V_{ball}/V_0)$ but what is $V_0$? Be sure that my questions won't stop here. Any answer will depend on dozens of choices. For large objects, the answer is independent of them. – Luboš Motl Feb 04 '11 at 18:03
  • @Luboš: and I claim the notion makes sense, although it's (naturally) different from the usual thermodynamic notion. That's because the thermodynamic notion is only a special case. But that doesn't mean the notion is useless. It measures the usual information content of any probability distribution. Of course, what that distribution looks like depends on the precise problem. Because you haven't posed the problem more exactly, I too can't provide any better solution than I already did. – Marek Feb 04 '11 at 19:21
  • @Luboš: (cont.) As for the $\log$: it naturally misses some constants but I can't provide them to you, because you also haven't provided them to me in the assignment of the problem. So don't expect me to invent random constants for you ;) In any case, until now you haven't shown any problem in my reasoning. You only posed a half-baked problem and are probably thinking how clever you are because I can't answer this half-baked problem. Be sure that as soon as you pose something well-defined, I too will provide you with a good answer ;) – Marek Feb 04 '11 at 19:23
  • 2
    @Lubos: Marek is right, this answer is incorrect. The intuition that fails is that one can track the single particle's motion by sight. – Ron Maimon Sep 19 '11 at 00:46
  • Apologies, @Ron, but it isn't clear what you exactly think is wrong. sb1 asked whether a single particle - or an object with a similarly low entropy comparable to 1 bit - may be attributed an exact value of entropy. The answer is obviously No, it can't. The "statistical error" in the magnitude of the entropy is of order 100 percent when it's low. This is what the question by sb1 was about and this is what I answered. My answer is obviously right, as confirmed by Marek's inability to quantify the entropy even in the simplest situations. – Luboš Motl Sep 20 '11 at 08:38
  • This question has nothing to do with "tracking" and with "sight". Even if I define a particular state of a particle, Marek or you won't be able to assign it with a unique value of entropy. The most naive definition would assign $S=0$ to any pure state: but this is clearly incorrect for pure states of big multi-body objects. One has to decide what the ensemble of macro-indistinguishable states is; log of their number is the entropy. This number is hugely uncertain for a low $S$ but becomes well-defined and independent of the choices in the thermodynamical i.e. large $S$ limit. – Luboš Motl Sep 20 '11 at 08:40
  • 1
    @Lubos: the entropy of any pure state is 0, even a big one. The point is that you don't know which state is in there, so you have an entropy. Entropy is for a mixed state/probability density on phase space, and the answer is the same whether or not you have a single particle or a thousand---it's the plogp. – Ron Maimon Sep 20 '11 at 16:31
  • Dear @Ron, yours is just one way of defining the entropy, and not the usual one. The usual way to define the entropy calculates the number of macroscopically indistinguishable states with a given state, according to some convention, and the logarithm of this number is the entropy. In this way, one gets a nonzero entropy even for pure states (both in classical and quantum physics). Assigning just $S=0$ to single-particle states doesn't solve anything because it gives you no method to actually get a sensible, i.e. nonzero, entropy of larger objects. – Luboš Motl Sep 21 '11 at 16:18
  • But even if adopted your dogmatic $S=0$ definition of the entropy for pure states, you will not be able to give a positive answer to the original question because the original question asks about a single classical particle: read it carefully. In classical physics, the number of states isn't integer. Instead, one must measure the volume on the phase space. A strictly well-defined point on the phase space has $V=0$ which means $S=-\infty$. Yes, you may assign finite $S$ to distributions on the phase space but the additive shift in $S$ in classical physics remains undetermined, anyway. – Luboš Motl Sep 21 '11 at 16:21
  • 1
    @Lubos: Yes I know the approach you describe, and I don't think it is fully consistent (although it works fine heuristically or for the thermodynamic limit). Since Jaynes, I think the better definition of entropy reflects the uncertainty in the description, so that the entropy is plogp. This is sensible for large objects, because you don't know what state they are in. It is also sensible for small objects. This is the modern understanding of entropy, due to Jaynes, in light of information theory. This resolved some old paradoxes, like properly explaining Maxwell's Demon, in the 1960s-1970s. – Ron Maimon Sep 21 '11 at 16:24
  • The undetermined additive shift to the entropy, as seen in classical physics, is actually "echoed" in quantum mechanics of a free particle as well. If you have a free particle in infinite coordinate space, it may occupy an arbitrarily large volume in the phase space. The entropy you would get according to your method is spurious, however. $S$ only becomes meaningful if one talks about the states of a bound state, internal degrees of freedom of an object, not the center-of-mass ones. How the center-of-mass d.o.f. are treated depends on many things; the dependence disappears as $N\to\infty$. – Luboš Motl Sep 21 '11 at 16:25
  • Dear @Ron, could you please explain in what sense the definition is "inconsistent"? What is true is that it doesn't allow one to attribute sharp values to the entropy of objects away from the thermodynamic limit. Indeed, it doesn't allow that. And it's a good thing, too, because those things don't have any physical meaning. You may choose a particular formula or prescription or value of the entropy of 1 particle but others may choose different prescriptions and values and there's no general principle that would say that one answer is better than the other. – Luboš Motl Sep 21 '11 at 16:28
  • There is a difference between entropy and e.g. energy. Energy is a high-precision observable that may be measured and calculated with any accuracy, for arbitrarily small systems, and so on. But the entropy is simply not. Entropy is not an observable in the quantum mechanical sense: it is not an operator on the Hilbert space. That's why it doesn't make any sense to attribute sharp values of entropy to small systems. You may link the entropy to someone's being "ignorant" but then the whole quantity becomes subjective which is much worse than its being ill-defined for small objects. – Luboš Motl Sep 21 '11 at 16:30
  • The real purpose of entropy, regardless of its definitions in statistical physics, is the role it plays in thermodynamics: it never (macroscopically) decreases and $T,dS$ and similar things appear in the equations of thermodynamics. The letter $S$ from these contexts is what we're calculating microscopically because it has consequences for thermal engines, life, and processes in the Universe in general. A good definition of entropy has to agree with thermodynamics. On the other hand, subtleties about entropy that can't be measured by thermodynamics don't belong to natural science. – Luboš Motl Sep 21 '11 at 16:32
  • Concerning the subjectivity: you may adopt ignorance-based definition of entropy but if you're ignorant, others don't have to be ignorant about the properties of the system, and they will have a different value of (your) entropy. Still, just because they know the positions of 1,000 molecules in some gas tank doesn't change that they may verify that the equations of thermodynamics statistically hold with a nonzero $S$, so your definition disagrees with thermodynamics. Another, equivalent complaint: you can't really "measure" the amount of your ignorance about the system. – Luboš Motl Sep 21 '11 at 16:34
  • 1
    @Lubos: As far as the classical divergence in entropy for infinitely well defined states, this can be resolved by fixing a reference phase-space volume to define 0 entropy. This allows you to define the entropy of a classical particle as $\int \rho \log(\rho)$, which indeed becomes $-\infty$ when the particle is localized. I agree that this is a not-so-usual description of entropy, but I really believe it pays to learn it, because the Jaynes definition is superior in every respect to classical ones, in my opinion. – Ron Maimon Sep 21 '11 at 16:48
  • 1
    @Lubos: The difference in entropy for an observer which has more information regarding the 1000 particles is physical, the observer with more information can extract heat energy from the system without doing work using this information, by acting as a Maxwell demon, until he or she reduces his own information about the system to nothing beyond the thermodynamic quantities. To acquire this information, even classically, one must dump entropy into the environment. This is really a different point of view, and I hope you will appreciate it, because it is nice. – Ron Maimon Sep 21 '11 at 16:52
1

(The question has already been answered by Lubos but perhaps I can elaborate on the quantum aspect.)

With a single classical particle the answer, in general, is 'no', with some exceptions as discussed near the end. And when the number of particles $n \gt 1$, the concept of entropy becomes clear only in the limit $n \gg 1$, i.e. the thermodynamic limit (as Lubos mentioned in his answer). With a quantum 'particle', however, the situation is somewhat different.

Consider a qubit, a single spin $1/2$ object, whose state at any given moment is specified by a vector $\vert \psi \rangle$ in a Hilbert space $H$. This is the simplest quantum analog of a classical particle one can think of. In contrast to the classical situation a many-body quantum system can exist either in pure or mixed states. The above specification of the qubit as a state vector is only possible if the state is pure.

A many-body state will not, in general, be describable by a state vector in a Hilbert space. One must use the more general notion of a density matrix for the correct description. A general density matrix $\rho$ is written in terms of the state vectors $\vert \psi_s \rangle$ as:

$$ \rho = \sum_s p_s \vert \psi_s \rangle \langle \psi_s \vert $$

where $p_s$ is the "fraction of the ensemble" [reference] in each pure state $\vert \psi_s \rangle $. For instance an ensemble of spin-$1/2$ particles would be described by:

$$ \rho = p_{\uparrow} \vert \psi_{\uparrow} \rangle \langle \psi_{\uparrow} \vert + p_{\downarrow} \vert \psi_{\downarrow} \rangle \langle \psi_{\downarrow} \vert $$

where $p_{\uparrow}$ ($p_{\downarrow}$) is the fraction of particles with spin up (down). Now, on the face of it, it seems one should be able to use this definition for a single particle. It is important to keep in mind that, though one can a priori assign probabilities to the two states of a single particle, experimentally these probabilities can be confirmed only by repeating a measurement of the particle's spin many times over. The result of a single measurement is a single number signifying which eigenstate the particle collapsed into due to the measurement process. To obtain a probability one must take the average over many such numbers, which requires more than one measurement.

In this case the entropy loses meaning as the measure of the lack of information external observers have about a system. The "system" is no longer a qubit with a given state or density matrix. Instead it is a qubit which has undergone repeated cycles of the measurement process consisting of state preparation, time evolution and finally collapse to an eigenstate.

The question boils down to the fact that probabilities can only be obtained experimentally by generating an ensemble of observations. A single system cannot be assigned an entropy. A system consisting of an observational ensemble of qubits - either a single or few qubits with many measurements, or many qubits with few measurements - can be assigned an entropy.


Extra material: The classical notion of Gibbs-Shannon entropy can be extended to the quantum version, called the von Neumann entropy:

$$ S_{cl} = -\sum_k p_k \ln(p_k) \Rightarrow S_{qm} = - Tr\left[\rho \ln(\rho)\right] $$
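
As a hedged sketch of this formula for a single qubit (with an assumed mixture $p_\uparrow = 0.9$, $p_\downarrow = 0.1$): a pure state gives $S_{qm} = 0$ and the maximally mixed state gives $\ln 2$.

```python
# A sketch of the von Neumann entropy S = -Tr[rho ln(rho)] for a single qubit,
# using an assumed mixture p_up = 0.9, p_down = 0.1.
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr[rho ln rho], computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]               # drop zero eigenvalues (0*log 0 = 0)
    return -np.sum(evals * np.log(evals))

up = np.array([[1.0], [0.0]])
down = np.array([[0.0], [1.0]])

rho_mixed = 0.9 * (up @ up.T) + 0.1 * (down @ down.T)
rho_pure = up @ up.T

print("mixed state :", von_neumann_entropy(rho_mixed))      # -0.9 ln 0.9 - 0.1 ln 0.1
print("pure state  :", von_neumann_entropy(rho_pure))       # 0
print("max mixed   :", von_neumann_entropy(np.eye(2) / 2))  # ln 2
```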


First Edit @Marek makes some pertinent observations in the comments, which I feel are important enough to be included in this edit.

"In contrast to the classical situation a many-body quantum system can exist either in pure or mixed states." what contrast? In classical physics it is also either pure (a single point in the phase space, which is the analogue of the Hilbert space in the classical physics) or a mixture of a pure states which can be described by a measure on the phase space. Everything is completely the same.

The key difference lies in the different effect of measurement on a classical and a quantum process. In the absence of a unified description of measurement spanning all domains, we are left with two notions of measurement. In the quantum mechanical case an irreducible error is present in any measurement. This is not the case for a classical mechanical system, where measurements can be done to arbitrary accuracy. Of course, in a unified description of measurement, a classical system would ultimately be understood in terms of many-body quantum systems subject to the effects of decoherence.

"A single system cannot be assigned an entropy" -- of course it can be assigned an entropy. Just because you can't measure everything at once doesn't mean it doesn't make sense to define it :)

I'm not talking about "everything" but only of the very definite concept of "probability". Probability is a statistical notion. You cannot do statistics with one data point.

Because if you'd said so, you might as well throw whole quantum theory out of the window, which ultimately doesn't allow you to talk about anything else than probabilities and for those you need multiple measurement as well...

Well yes, of course, you need multiple measurements for quantum probabilities. That is the essence of my argument.

  • @space_cadet: Thanks a lot for your answer. It was a real pleasure :) –  Feb 03 '11 at 18:25
  • "In contrast to the classical situation a many-body quantum system can exist either in pure or mixed states." what contrast? In classical physics it is also either pure (a single point in the phase space, which is the analogue of the Hilbert space in the classical physics) or a mixture of a pure states which can be described by a measure on the phase space. Everything is completely the same. – Marek Feb 03 '11 at 18:35
  • 2
    "A single system cannot be assigned an entropy" -- of course it can be assigned an entropy. Just because you can't measure everything at once doesn't mean it doesn't make sense to define it :) Because if you'd said so, you might as well throw whole quantum theory out of the window, which ultimately doesn't allow you to talk about anything else than probabilities and for those you need multiple measurement as well... – Marek Feb 03 '11 at 18:42
  • @Marek the edited version of my answer attempts to answer your critiques. Let me know what you think. –  Feb 03 '11 at 19:45
  • @space_cadet: hm, you introduced further things with which I don't agree :) "irreducible error is present in any measurement" -> not really. Assume the system is in a state described by a vector $\left| \psi \right>$ that is also an eigenvector of a conserved observable $A$. After measuring $A$ of the system again, we will obtain the same state. Always. With 100% accuracy. – Marek Feb 03 '11 at 20:25
  • @space_cadet: there of course is a difference between quantum and classical theory but it lies elsewhere. Namely in the non-diagonal phases of the density matrix. But these arise essentially because of quantum superposition and ultimately because of what the "state" means in quantum theory. But the pure/mixed dichotomy works the same in both quantum and classical physics, namely one (pure) state vs. multiple states (described by either density matrix, probabilities, or measures, as one prefers). – Marek Feb 03 '11 at 20:29
  • @space_cadet: hopefully the last point :) "You cannot do statistics with one data point." -> but I can still say that when I throw a dice, it will come out as 3 with probability of 1/6. So I can talk about probabilistic notion without ever conducting an experiment. And that's what I am saying (and you seem to be disagreeing with): you can define these probabilistic objects (and in particular entropy) always. Interpretation (and corroboration) only comes after conducting experiments but that's completely unrelated to the theoretical side of the argument. – Marek Feb 03 '11 at 20:33
  • @Marek please read my answer carefully one more time. I describe how the measurement cycle consists of state preparation, evolution and collapse. You're missing the preparation stage of the process. But it seems you're picking apart individual sentences because you want to justify the statement that "a single system can be assigned an entropy". If you feel so strongly you should supply an answer so I can have a turn at critiquing your view. –  Feb 03 '11 at 20:34
  • @space_cadet: I don't understand. If you think my criticism is invalid then explain why. Just rereading your answer again surely won't change the wrong parts. If you think they are not wrong then argue why. ... As for an answer of my own, I don't think it would be useful as sb1 asked this question precisely because he disagrees with me :) I think my position is known (the entropy can be always be defined) and if you want an answer I fully support, it's the Rafael's answer (which is the only truly correct one and at the same time very detailed). – Marek Feb 03 '11 at 20:51
  • @user346: This answer is wrong. The classical statistical entropy of a single particle is well defined. – Ron Maimon Sep 19 '11 at 00:47
0

Given a probability distribution, $$S=-k\sum_i p_i \ln p_i.$$

Unless the probability distribution is of the form $e^{-\beta \left( H - \mu N \right)}/Z$, or reasonably close to it, we can't really define the temperature of the particle.

I should also add that if the probability distribution is a Dirac delta function, the entropy goes to zero (at least if we introduce an ultraviolet regulator...).
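
A small sketch of these two points, with made-up energies and $k_B = 1$: the entropy exists for any distribution, but a temperature can be read off only when $\log p$ is affine in the energy, i.e. when the distribution has the Boltzmann form.

```python
# A sketch: S = -k*sum p ln p exists for any distribution, but a temperature
# can be read off only when log(p) is affine in the energy, i.e. p ~ exp(-beta*E).
# Energies and probabilities are made up; k_B = 1.
import numpy as np

E = np.array([0.0, 1.0, 2.0, 3.0])

def entropy(p):
    return -np.sum(p * np.log(p))

def fitted_beta(p, E):
    """Slope of -log(p) vs E; meaningful only if the fit is (nearly) exact."""
    beta, offset = np.polyfit(E, -np.log(p), 1)
    residual = np.max(np.abs(-np.log(p) - (beta * E + offset)))
    return beta, residual

p_gibbs = np.exp(-2.0 * E); p_gibbs /= p_gibbs.sum()     # Boltzmann form with beta = 2
p_other = np.array([0.4, 0.1, 0.4, 0.1])                 # no Boltzmann form

for name, p in [("Gibbs", p_gibbs), ("non-Gibbs", p_other)]:
    beta, res = fitted_beta(p, E)
    print(f"{name:9s}  S = {entropy(p):.4f}  fitted beta = {beta:.3f}  fit residual = {res:.2e}")
```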

QGR
  • 2,327
  • How is this related to the question? The point is that a single classical particle has a deterministic equation of motion and at all times occupies a well defined point in the state space. – Tim van Beek Feb 03 '11 at 16:06
  • 1
    A probability distribution is compatible with a deterministic evolution. The Liouville equation has this property. – QGR Feb 03 '11 at 16:46
  • 2
    @Tim unless I have an uncertainty about the initial position of the particle, or an uncertainty about the position of some other particle that will interact with it... or any other uncertainty about anything that will influence the motion of the particle. Using probability distributions has nothing to do with non-deterministic equations of motion. After all, quantum equations of motion are all deterministic! – Rafael S. Calsaverini Feb 03 '11 at 20:00
0

If the space is unbounded, a single particle will have an infinite number of possible positions. Its velocity can be any value between zero and c, if you believe relativity, which is problematic in that situation. If you know the position relative to some frame of reference - a dubious notion physically, because a single particle's position must be defined against some other physically defined frame... well, let's pretend there is an abstract position based on a ghost consciousness, a dubious idea - then the entropy is 0. If you forget the position, or didn't originally know it, then the entropy is infinite, I believe. If it is in a finite space then its entropy is not infinite. An infinite amount of information is required to specify a position in an infinite space.

Don’t believe me. I am not a physicist

0

The entropy of a single classical particle is obtained by noting that the Boltzmann-Gibbs distribution maximizes the entropy over all distributions on phase space, i.e. it is the equilibrium distribution. The derivation works for $N$ particles, and defines temperature through $\beta=\frac{1}{k_B T}$. For $N=1$, this gives an entropy of

$$\frac{k_B}{2}\left(1+ \log\left(\frac{2\pi m L^2}{\beta h^2}\right)\right)$$

for a particle of mass $m$ in a 1d box of length $L$, where $h$ is Planck's constant.
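
As a sanity check, here is the formula evaluated numerically for an assumed hydrogen-mass particle in a 1 m box at 300 K (SI units, with $k_B$ kept explicit); the particular numbers are illustrative only.

```python
# Plugging assumed numbers into the formula above: a hydrogen-mass particle
# in a 1 m box at T = 300 K (SI units).
import numpy as np

k_B = 1.380649e-23      # J/K
h = 6.62607015e-34      # J s
m = 1.6735575e-27       # kg, roughly a hydrogen atom
L = 1.0                 # m
T = 300.0               # K
beta = 1.0 / (k_B * T)

S = (k_B / 2.0) * (1.0 + np.log(2.0 * np.pi * m * L**2 / (beta * h**2)))
print(f"S = {S:.3e} J/K   (= {S / k_B:.2f} k_B)")
```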

-2

To give a short answer, yes, it is perfectly possible, via $$ mc^2 = S T_P, $$ where $T_P$ is the Planck Temperature.

In my understanding, If a system has entropy then it also should have some temperature.

Indeed; and there is no contradiction: Temperature is a field sourced by entropy. Just like gravitational potential is a field sourced by mass. viz. $$ T(r)=\frac{\hbar c}{k_B^2}\frac{S}{r} $$ (in which I have ignored retarded potentials, etc. For an exact answer you must solve a wave equation). For a thorough discussion of this see here.