As I understand it, statistically this means that when heat exchange takes one body from a high temperature (say $300\text{ K}$) down to a middle temperature ($200\text{ K}$) and another body from a low temperature ($100\text{ K}$) up to that same middle temperature ($200\text{ K}$), the two entropy changes are not equal in magnitude: the hot body loses less entropy than the cold body gains. This seems to imply that the hot body loses fewer microstates than the cold body gains.
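To make that concrete, here is the bookkeeping I have in mind, assuming two bodies with equal, temperature-independent heat capacities $C$ (my own simplification):

$$\Delta S_{\text{hot}} = \int_{300}^{200}\frac{C\,dT}{T} = C\ln\frac{200}{300} \approx -0.41\,C, \qquad \Delta S_{\text{cold}} = \int_{100}^{200}\frac{C\,dT}{T} = C\ln\frac{200}{100} \approx +0.69\,C,$$

so although the same amount of heat ($100\,C$) leaves the hot body and enters the cold one, the total entropy change is about $+0.29\,C$.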
But how can this be so? Surely every increase of, say, $1\text{ K}$ multiplies the number of available microstates by the same factor, and therefore adds the same amount of entropy?
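For what it's worth, here is the quick check of that assumption which prompted the edit below, using a monatomic ideal gas at constant volume as a stand-in, so that $S/(N k_B) = \tfrac{3}{2}\ln T + \text{const}$ (the model is my own choice, not from any particular text):

```python
import math

def dS_per_particle(T, dT=1.0):
    """Entropy gained per particle, in units of k_B, when a monatomic ideal
    gas at temperature T is warmed by dT at constant volume,
    using S/(N k_B) = (3/2) ln T + const."""
    return 1.5 * math.log((T + dT) / T)

for T in (100.0, 200.0, 300.0):
    dS = dS_per_particle(T)
    factor = math.exp(dS)  # per-particle factor; Omega grows by factor**N overall
    print(f"T = {T:.0f} K: dS/k_B per particle = {dS:.4f}, multiplicative factor = {factor:.4f}")
```

The per-kelvin factor comes out around $1.015$ per particle at $100\text{ K}$ but only about $1.005$ at $300\text{ K}$, so it is clearly not constant; what I'm missing is the physical reason.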
Edit:
I've been reading a text which states quite clearly that $\frac{\partial\log{\Omega}}{\partial U} \approx \beta$, and since $\beta = 1/(k_B T)$ falls as the temperature rises, the rate at which additional microstates become accessible falls with increasing energy. I'm trying to get a handle on why this rate of increase should typically fall with higher values of $U$, and by implication higher $T$.
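The only intuition I've managed so far comes from a toy model (my own, not the text's): for a monatomic ideal gas the microstate count grows as a power of the energy, $\Omega(U) \propto U^{3N/2}$, so

$$\frac{\partial\log\Omega}{\partial U} = \frac{3N}{2U} = \frac{1}{k_B T} = \beta,$$

which is positive but shrinks as $U$ grows: an extra joule always opens up new microstates, but it is a smaller fractional increase of $U$ for an already energetic system, so it multiplies $\Omega$ by a smaller factor. I'm not sure how general that picture is.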
As you say, the log is a concave function, which goes a long way towards explaining this behaviour mathematically, but I am trying to understand it from a physical rather than a mathematical perspective. The use of the log is generally presented as the way to keep the statistical definition consistent with the state-variable definition of entropy, which is fine. But that leads to a corollary question: why is entropy defined so that it is a concave function of the number of states, rather than the number of states per se? In other words, why is the increase in the number of states available to the body that starts with few accessible states sufficient to outweigh the decrease in states for the body that starts with many, and so to drive spontaneous processes through a statistical process?
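To pin down what I mean by "drive spontaneous processes through a statistical process", here is a small numerical experiment with two Einstein solids sharing a fixed number of energy quanta, using the standard multiplicity $\Omega(N, q) = \binom{q + N - 1}{q}$ (the sizes and numbers are arbitrary choices of mine):

```python
from math import comb

def omega(N, q):
    """Multiplicity of an Einstein solid with N oscillators and q energy quanta."""
    return comb(q + N - 1, q)

# Two solids exchanging quanta; the total energy is fixed.
N_A, N_B = 300, 300
q_total = 400

# Which split of the quanta gives the largest joint multiplicity?
best_qA = max(range(q_total + 1),
              key=lambda q_A: omega(N_A, q_A) * omega(N_B, q_total - q_A))
print("Joint multiplicity peaks at q_A =", best_qA)

# Start from an uneven split (A 'hot', B 'cold') and move one quantum A -> B.
q_A = 300
before = omega(N_A, q_A) * omega(N_B, q_total - q_A)
after = omega(N_A, q_A - 1) * omega(N_B, q_total - q_A + 1)
print("A's count shrinks by a factor of", omega(N_A, q_A) / omega(N_A, q_A - 1))
print("B's count grows by a factor of ", omega(N_B, q_total - q_A + 1) / omega(N_B, q_total - q_A))
print("Joint multiplicity changes by   x", after / before)
```

For these numbers the hot solid's count shrinks by a factor of about $2$ while the cold solid's grows by a factor of about $4$, so each quantum that flows from hot to cold roughly doubles the joint multiplicity, and the peak sits at the even split $q_A = 200$. That asymmetry is exactly what I'd like a physical, rather than purely combinatorial, explanation for.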