It's worth noting that your definition of an infinitesimal change in the entropy of a system, namely:
$dS=\displaystyle\frac{\delta Q}{T}$
is only valid for an internally reversible change. This is not a technicality that can be omitted; I think part of your question comes from the gap between the notion of heat (a measurable amount of energy transferred) and statistical uncertainty (which, up to alternative and equivalent interpretations, is the intrinsic meaning of entropy).
In an internally reversible process that involves adding heat to or removing heat from a system, the T under the (inexact) heat differential must be uniform across the system's spatial extent, all the way to its boundaries, so that at every moment the temperature of the system's boundary equals its bulk temperature (and is unique). That means there are no temperature gradients inside the system of interest, and because of that, no heat can be exchanged within the system's boundaries: for a system to exchange heat with something else, there must be a temperature difference between them, and if that difference is zero (they are equal) then no heat is transferred. If you think about it, this is a sound argument: a cold glass of water gets increasingly warmer when you leave it in a room, but once it reaches the temperature of the surrounding air, there is no further change and it stays that way indefinitely.
Going back to the original equation, you can now interpret the RHS as saying that, in situations where the system's temperature is uniform at every moment, the ratio of the infinitesimally small amount of heat added to or removed from the system by its environment to the unique temperature at every point of the system (which is nothing more than a measure of the mean kinetic energy of the individual molecules that make it up) is equal to its change in entropy. And what is entropy? Well, macroscopically speaking, you can take what I've written above as a definition of entropy, and you can show thermodynamically that it is indeed a state function (it depends only on the point properties of the system, like its pressure and temperature) and does not depend on the chain of events by which that state was reached.
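As a simple illustration (a standard textbook case, not anything specific to your question): take $n$ moles of an ideal gas heated internally reversibly at constant volume from $T_1$ to $T_2$, with constant molar heat capacity $c_v$. Then $\delta Q = n\,c_v\,dT$, and

$$\Delta S=\int_{T_1}^{T_2}\frac{\delta Q}{T}=\int_{T_1}^{T_2}\frac{n\,c_v\,dT}{T}=n\,c_v\ln\frac{T_2}{T_1},$$

which depends only on the end states $(T_1,V)$ and $(T_2,V)$, exactly as a state function should.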
On the other hand, statistical mechanics (which is a more recent way of addressing what we see macroscopically as thermodynamic properties, like entropy, starting from a mechanical description at the molecular level) gives us more detail on the nature of entropy. I think it's better to think of it not as a measure of randomness but as the (macroscopic) uncertainty about the (microscopic) state of the system.
I'll give you a simple example: imagine a pool table whose top is completely covered by an opaque fabric, with just one opening left for introducing the cue stick. Assume now that you know (by some means) that the eight object balls are arranged on the table in a straight line with equal spacing between them, but you don't know exactly where that line sits within the table's rectangular area; and that, for the purposes of the experiment, the cue ball is right next to the opening (and of course you know that). Now you take the cue stick, introduce it through the opening in the fabric, and strike the cue ball. After a few seconds of (hearing) collisions, you can be sure that all movement under the fabric has stopped. What happened to your knowledge about the system?
Well, you don't know where each ball ended up (we've sealed the pockets, of course!), but you didn't know that before the strike either, did you? Before, though, you at least knew the balls formed a line, and that information is now gone. From your outside point of view, your prior information about the positions of the balls plus the energy and momentum you introduced into the system through the strike is not enough to rule out a huge number of possible actual arrangements of the balls. At the beginning of the experiment you could at least write down the number of possible positions of the line of balls (perhaps by drawing a grid over the table's area, with each cell's side equal to a ball's diameter, and counting the number of longitudinal lines of cells), but now the number of possible arrangements has multiplied. Before and after, you only have partial knowledge of the system's configuration (all you can do is count the configurations that remain possible, based on what you know about the system from the outside, which restricts the possibilities), but that knowledge has decreased after the experiment. This has nothing to do with the physics of the collisions between the balls: it has to do with the fact that you can't see the balls from your point of view, and all you can do is retrieve partial information through indirect measurements.
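If you want to make the counting concrete, here is a minimal sketch (the grid dimensions are made up for illustration; nothing in the example fixes them) that counts the configurations compatible with what you know before and after the strike, and takes the logarithm of each count, Boltzmann-style:

```python
from math import comb, log

# Hypothetical grid over the table: each square cell is one ball diameter
# on a side. The 50 x 25 dimensions are made-up illustration values.
W, H = 50, 25

# Before the strike: the 8 object balls form a straight, evenly spaced line,
# and only the position of that line is unknown. Counting just the
# longitudinal placements, as in the text: for each of the H rows, the
# 8-cell line can start in (W - 8 + 1) different columns.
before = H * (W - 8 + 1)

# After the strike: any placement of the 8 (indistinguishable) balls over
# the W*H cells is compatible with what we can know from outside.
after = comb(W * H, 8)

# A Boltzmann-style measure of our uncertainty is the log of the number of
# configurations that remain possible (the constant k_B is set to 1 here).
print(f"possible configurations: {before} -> {after}")
print(f"log-count 'entropy':     {log(before):.1f} -> {log(after):.1f}")
```

The point is only that the count of not-yet-ruled-out configurations, and hence its logarithm, jumps after the strike, even though nothing "random" was added to the table.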
The analogy between the example above and a statistical system is that, by measuring macroscopic observables (like temperature, pressure, density, etc.), we only measure mean microscopic properties. For example, temperature is a measure of the mean molecular kinetic energy, and pressure is a measure of the mean rate of momentum transferred per unit area by striking molecules. Measuring them gives us partial knowledge of the system's microscopic configuration (like the original information you held about the positions of the pool balls). Any change in the macroscopic observables is correlated with a change in the set of possible (i.e. not ruled out) microscopic configurations, and therefore with a change in our knowledge about the system. It turns out that this change can be quantified, and that is precisely entropy variation, in the sense that an entropy increase corresponds to an increase in uncertainty, or a decrease in knowledge. Showing that this relation holds, starting from a mechanical framework, is the whole point of statistical mechanics.
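For reference, the standard expressions that make this precise are Boltzmann's formula and, more generally, the Gibbs form for a probability distribution over microstates:

$$S = k_B \ln \Omega, \qquad S = -k_B \sum_i p_i \ln p_i,$$

where $\Omega$ is the number of microscopic configurations compatible with the macroscopic state, and $p_i$ is the probability of microstate $i$.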
Finally, I hope you can now see that $\displaystyle\frac{\delta Q}{T}$ is just analogous to the energy introduced by the strike in the experiment, taken in relation to the previous knowledge of the positions of the balls (lower temperatures imply less translational, rotational and vibrational molecular motion, and vice versa, so it is actually a "partial measure" of their positions). So:
It doesn't hold information about the randomness of the system; it is just a measure of the increase in uncertainty from a macroscopic perspective, and the expression only holds for internally reversible processes (in general, entropy can increase without any energy being added to the system).
As other answers have stated, entropy is needed to define some of the terms in any equation of state (like the ideal gas law); and by the way, equations of state are just approximations to the actual behavior of real substances (something made pretty clear by the "ideal" in the name of the law you cite), so it's natural for them to rest on more fundamental concepts (like entropy).
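One concrete way to see this (a standard result, not something specific to your question) is that the absolute temperature appearing in $pV=nRT$ is itself defined through entropy,

$$\frac{1}{T}=\left(\frac{\partial S}{\partial U}\right)_{V,N},$$

so the ideal gas law already presupposes the entropy function.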
EDIT: As Nathaniel rightly pointed out below, my original statement that the validity of the macroscopic definition of entropy in terms of heat and temperature depended on the (tacitly assumed) total reversibility of the process was flawed. The only requirement for it to be valid is that the heat exchange process be internally reversible, because this way we are only measuring the change in entropy inside the system (so external irreversibilities associated with the process are irrelevant).