We need to think about what it means for one system to be "hotter" than another, or to have a temperature $T$. Thermodynamics defines temperature for systems in thermal equilibrium. If an energy level $i$ with energy $U_i$ has degeneracy $g_i$, then at thermal equilibrium its occupation is $\propto g_i \exp \left( -\beta U_i \right)$ for some constant $\beta$. We then define $T := \tfrac{1}{k_B \beta}$. When only the lowest level is occupied, $T = 0$. When $T \in \left( 0,\,\infty\right)$, higher-energy levels are occupied, but their occupation-to-degeneracy ratio is lower than for low-energy levels. When $T = \infty$, this low-more-occupied-than-high disparity is replaced by equality, i.e. occupation $\propto g_i$. As $T$ rises, population gradually moves up to higher energy levels, thereby increasing the mean energy. But if you reach a state of even higher mean energy, in which the disparity is reversed and high-energy levels are more occupied than low-energy ones (relative to degeneracy), the formula gives $T < 0$. These "negative-temperature" systems (which can be prepared experimentally but are short-lived) are "hotter" than infinitely hot ones (or indeed than systems at the Planck temperature), rather than being below absolute zero (which is impossible).
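To make the occupation rule concrete, here is a minimal numerical sketch (in Python, with made-up energies and degeneracies for a toy three-level system) that evaluates $g_i \exp \left( -\beta U_i \right)$ for several values of $\beta$, including $\beta = 0$ ($T = \infty$) and $\beta < 0$ ($T < 0$). It shows the occupation-to-degeneracy ratio flipping from favouring low-energy levels to favouring high-energy ones, while the mean energy keeps rising.

```python
import numpy as np

# Hypothetical toy system: three levels with energies U_i (arbitrary units)
# and degeneracies g_i, chosen only to illustrate the behaviour of beta.
U = np.array([0.0, 1.0, 2.0])
g = np.array([1, 2, 3])

def occupations(beta):
    """Normalized occupations, proportional to g_i * exp(-beta * U_i)."""
    w = g * np.exp(-beta * U)
    return w / w.sum()

# beta > 0: ordinary positive T; beta = 0: T = infinity; beta < 0: "negative temperature"
for beta in [2.0, 0.5, 0.0, -0.5]:
    p = occupations(beta)
    mean_energy = (p * U).sum()
    print(f"beta = {beta:+.1f}: occupation/degeneracy = {p / g}, mean energy = {mean_energy:.3f}")
```

Running this, the occupation-to-degeneracy ratios are equal at $\beta = 0$ and tilt toward the high-energy levels for $\beta < 0$, and the printed mean energy increases monotonically as $\beta$ decreases through zero, consistent with negative-temperature states being "hotter" than $T = \infty$.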