
Phasors are used to represent sinusoids. So why do they add like vectors?

Why is it that when I represent two sinusoids as phasors and add them like vectors, I get the right phase and magnitude and everything?

Note: I wasn't sure whether this was a physics question or a mathematics question. Posted it here anyways.

    A proper answer to this question should explain why phasors work in the first place (i.e. because the equations in question are linear and time invariant). From there, it's trivial to show why phasors add like vectors. – DanielSank Feb 05 '18 at 22:30

3 Answers


Simply because if:

$$R\,\cos(-\omega\,t+\theta) = A \cos(\omega\,t) + B\,\sin(\omega\,t)\tag{1}$$

then:

$$R\,\cos\theta = A\tag{2}$$ $$R\, \sin\theta = B\tag{3}$$

So the entity $R\angle\theta$ decomposes into a superposition of the fixed phase-quadrature sinusoids $\cos(\omega\,t),\,\sin(\omega\,t)$ in exactly the same way as a vector decomposes into $x$ and $y$ components. Therefore, if we sum the entities $R\angle\theta \equiv R\,\cos(-\omega\,t+\theta)$ and $R^\prime\angle\theta^\prime \equiv R^\prime\,\cos(-\omega\,t+\theta^\prime)$, we do so by summing the corresponding weights of $\cos(\omega\,t)$ and $\sin(\omega\,t)$, which is exactly vector addition.
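A quick numerical sanity check of this claim (my own construction, not part of the original answer): add two phasors component-wise as 2-vectors, and compare the resulting sinusoid against the pointwise sum of the original sinusoids. The particular values of $\omega$, $R$, $\theta$ are arbitrary.

```python
import numpy as np

omega = 2 * np.pi * 50.0               # any fixed angular frequency
t = np.linspace(0, 2 * np.pi / omega, 1000)

R1, th1 = 3.0, 0.7                     # phasor R1∠θ1
R2, th2 = 1.5, -1.2                    # phasor R2∠θ2

# Vector (component-wise) addition of the two phasors:
# total weights of cos(ωt) and sin(ωt), per equations (2) and (3)
A = R1 * np.cos(th1) + R2 * np.cos(th2)
B = R1 * np.sin(th1) + R2 * np.sin(th2)
R = np.hypot(A, B)                     # resultant magnitude
theta = np.arctan2(B, A)               # resultant phase

# Direct sum of the two time-domain sinusoids
direct = R1 * np.cos(-omega * t + th1) + R2 * np.cos(-omega * t + th2)
via_phasors = R * np.cos(-omega * t + theta)

assert np.allclose(direct, via_phasors)
```

The two waveforms agree at every sample, as the decomposition in (1)–(3) predicts.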

One can summarize (1), (2) and (3) as showing that the set of functions

$$\{f:\mathbb{R}\to\mathbb{R};\; f(t)=R\,\cos(-\omega\,t+\theta)\mid \theta,\,R\in\mathbb{R}\}$$

for any fixed $\omega\in\mathbb{R}$ is a vector space over the reals of dimension 2 with a pair of possible basis vectors $f_x(t)=\cos(\omega\,t)$ and $f_y(t)=\sin(\omega\,t)$. If, further, we define the inner product

$$\langle f,\,g\rangle = \frac{\omega}{\pi}\,\int_0^\frac{2\,\pi}{\omega}\,f\,g\,\mathrm{d} t$$

then our vector space becomes a two-dimensional real inner product space and the basis $\{f_x,\,f_y\}$ is an orthonormal basis with respect to the inner product.
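The orthonormality is easy to verify numerically. The sketch below (my addition) uses the normalization $\omega/\pi$, which makes each basis function have unit norm over one period, and approximates the integral with a Riemann sum over a uniform grid spanning exactly one period:

```python
import numpy as np

omega = 3.0                   # arbitrary fixed frequency
N = 1000
T = 2 * np.pi / omega         # one full period
dt = T / N
tk = np.arange(N) * dt        # uniform grid over one period

def inner(f, g):
    # (ω/π) ∫₀^T f·g dt, approximated by a left Riemann sum; for
    # trigonometric integrands over a full period this is essentially exact
    return (omega / np.pi) * np.sum(f(tk) * g(tk)) * dt

fx = lambda t: np.cos(omega * t)
fy = lambda t: np.sin(omega * t)

print(round(inner(fx, fx), 6))   # 1.0
print(round(inner(fy, fy), 6))   # 1.0
print(round(inner(fx, fy), 6))   # 0.0
```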


Further Details:

The central objects and operations the phasor method allows us to discuss are:

  1. Arbitrarily phased sinusoids, all of the same carrier frequency, i.e. entities of the form $f(t) = R\,\cos(-\omega\,t+\theta) \equiv R\angle\theta$;
  2. Addition of these sinusoids, which is justified by the restriction of the phasor method to linear systems analysis, where we add solutions of linear differential equations to find other solutions of the same equations. E.g. the subtraction of two sinusoidally-varying-with-time electrical potentials across a circuit element to find the voltage across the element, or the summation of such voltages around a loop in writing down Kirchhoff's Voltage Law (steady-state energy conservation); the addition of sinusoidally-varying-with-time currents at a node to write down Kirchhoff's Current Law (steady-state charge continuity equation);
  3. The transformation of a sinusoidally-varying-with-time input by a linear time-shift-invariant system to a steady-state sinusoidally-varying-with-time output: e.g. the voltage across a linear lumped circuit element in response to the sinusoidally-varying-with-time current through it, or contrariwise. In the steady state, a linear time-shift-invariant system transforms its input by scaling its amplitude by a constant factor and adding a constant phase delay.

Note that in 3., the phasor method cannot cope with transients in linear systems; it can only describe a linear system with sinusoidal excitation in the steady state.

We have already shown, through (1), (2) and (3), that when we add entities of the form $R\,\cos(-\omega\,t+\theta) \equiv R\angle\theta$ (i.e. the LHS of (1)) we can do so by adding the $A$ and $B$ coefficients, which in turn is the same as adding the Cartesian components of a two-dimensional vector.

Now, another take on this is to consider that the $\mathbb{R}$-linear function

$$\mathrm{Re}:\mathbb{C}\to\mathbb{R}\tag{4}$$

and the $\mathbb{C}$-linear upconversion operator:

$$\mathscr{U}:(\mathbb{R}\to\mathbb{C})\to(\mathbb{R}\to\mathbb{C});\quad f(t)\mapsto e^{-i\,\omega\,t}\,f(t)\tag{5}$$

through the above linearities have the properties that

$$\mathrm{Re}\circ\mathscr{U}(R\,e^{i\,\theta}) = R\,\cos(-\omega\,t+\theta)\tag{6}$$ $$\mathrm{Re}\circ\mathscr{U}(R\,e^{i\,\theta}+ R^\prime\,e^{i\,\theta^\prime}) = R\,\cos(-\omega\,t+\theta)+R^\prime\,\cos(-\omega\,t+\theta^\prime)\tag{7}$$ $$\mathrm{Re}\circ\mathscr{U}(R\,e^{i\,\theta}\times a\,e^{i\,\alpha}) = a\,R\,\cos(-\omega\,t+\theta+\alpha)\tag{8}$$

and, moreover, $\mathrm{Re}\circ\mathscr{U}$ is bijective if we restrict it to complex constants and the inverse mapping to functions of the form $\tilde{f}(t) = R\,\cos(-\omega\,t+\theta)$; the latter restricted set is all we want to work with in the phasor method.

(6) and (7) say that the addition operation 2. is faithfully reproduced if we represent our arbitrarily phased sinusoids by complex constants and add the latter, and (8) says that we can replicate property 3. above by representing the action of any linear system by a complex scaling constant and applying this constant through complex multiplication on the number $z=R\,e^{i\,\theta}$ to find the amplitude and phase of the linear system's output.
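Property (8) can be checked numerically. This sketch (my own, with arbitrary gain $a$ and phase shift $\alpha$) multiplies the input phasor by the system's complex constant $a\,e^{i\alpha}$, then compares the waveform recovered via the convention $\mathrm{Re}(e^{-i\omega t} z) = R\cos(-\omega t+\theta)$ against the directly written output sinusoid:

```python
import numpy as np

omega = 2 * np.pi
t = np.linspace(0, 1, 1000)

R, theta = 2.0, 0.4          # input phasor z = R e^{iθ}
a, alpha = 0.5, np.pi / 3    # the system's gain and phase shift

z_in = R * np.exp(1j * theta)
z_out = z_in * a * np.exp(1j * alpha)    # phasor-domain multiplication

# Time-domain output predicted by the phasor product
predicted = np.real(np.exp(-1j * omega * t) * z_out)
# Output written directly from equation (8)
direct = a * R * np.cos(-omega * t + theta + alpha)

assert np.allclose(predicted, direct)
```

One complex multiplication has replaced the whole steady-state action of the linear system.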

Complex numbers of course add like vectors.

Something else we get from the phasor method is the inner product that represents the time-averaged product of two sinusoidally varying quantities. That is, if $z=R\,e^{i\,\theta}$ and $z^\prime=R^\prime\,e^{i\,\theta^\prime}$ are two complex numbers representing the time-varying sinusoids $R\,\cos(-\omega\,t+\theta)$, $R^\prime\,\cos(-\omega\,t+\theta^\prime)$, then:

$$\langle R\,\cos(-\omega\,t+\theta)\, R^\prime\,\cos(-\omega\,t+\theta^\prime)\rangle_t = \frac{1}{2} \langle z, z^\prime\rangle = \frac{1}{2}\mathrm{Re}(z^\ast\,z^\prime)\tag{9}$$

where $\langle \_ \rangle_t$ is the time average over a period, so (9) allows us to calculate, for example, the average power when a current flows through a potential difference by combining the two phasors as in (9). Indeed the cross product between the two phasors thought of as vectors:

$$z\wedge z^\prime = \mathrm{Im}(z^\ast\,z^\prime)\tag{10}$$

here a real number, is proportional to the amplitude of the zero-average, quadrature part of the instantaneous product between the two sinusoids. It is thus useful for calculating e.g. the energy that is shuttled to and fro over a period in a circuit.
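Identity (9) is also easy to confirm numerically. The check below (mine, with arbitrary amplitudes and phases) averages the instantaneous product over exactly one period on a uniform grid and compares against $\frac{1}{2}\mathrm{Re}(z^\ast z^\prime)$:

```python
import numpy as np

omega = 2 * np.pi * 60.0
T = 2 * np.pi / omega
N = 100000
t = np.arange(N) * (T / N)       # uniform grid covering one period

R, th = 3.0, 0.25
Rp, thp = 2.0, 1.1
z = R * np.exp(1j * th)
zp = Rp * np.exp(1j * thp)

# Instantaneous product of the two same-frequency sinusoids
product = R * np.cos(-omega * t + th) * Rp * np.cos(-omega * t + thp)
time_avg = product.mean()                    # ⟨·⟩_t over one period
phasor_form = 0.5 * np.real(np.conj(z) * zp)

assert np.isclose(time_avg, phasor_form)
```

This is exactly the calculation behind average (real) power from voltage and current phasors.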


And lastly...

A completely different way to motivate a technique that broadens the idea of a phasor in the case of the electromagnetic field and ends up being equivalent to it in the case of a single frequency time variation is the Riemann-Silberstein idea of diagonalizing Maxwell's equations. Not only does the resulting technique work like phasors (but is more general), it has a very neat and elegant interpretation in terms of polarization. I discuss this idea in my answer here.

  • But that is only if we are adding a sine and a cosine. What about all the phases in between?? – PhyEnthusiast Feb 05 '18 at 11:08
  • Can you expand on "summing weights of sine and cosine"?? – PhyEnthusiast Feb 05 '18 at 11:19
    @PhyEnthusiast Here $\theta$ represents either the arbitrary angle between any two vectors or the phase difference between two phasors. In other words this single construction proves that you can add any two phasors like vectors and because the result is also a phasor you can generalize to any number. – dmckee --- ex-moderator kitten Feb 05 '18 at 19:15
    @PhyEnthusiast No, we can represent all the "phases in between" as sums of a sine and cosine. That's the essence. The sine and the cosine functions are basis vectors and let us see that the set of arbitrarily phased sinusoids is in fact a dimension 2 real vector space. See all of my updates – Selene Routley Feb 06 '18 at 07:12

A phasor is actually a complex number, which is isomorphic to $\mathbb{R}^2$, and that's a well known vector space: the usual arrows.

Translation: a phasor is a complex number, and you add complex numbers in the same way you do vectors. Working with $a+bi$ is equivalent to working with $(a,b)$ pairs.

The only difference is that the angle (phase) now depends on time, but that doesn't change anything: the sum depends on $t$ too, and that's exactly what you observe.

FGSUZ

Phasors are rotating vectors. In a nutshell we make use of the simple theorem:$$\sum x (\text{or}\ y)\ \text {components of vectors} = x\ (\text{or}\ y)\ \text{component of}\sum \text{vectors}.$$ The sinusoids are the x components that you want to add. But it may be easier to add the vectors that have these x components (using a phasor diagram, which is just a special sort of vector diagram) and then to take the x component of the vectors' sum (if we want the instantaneous value).
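A tiny illustration of this theorem (my own, with hypothetical random lengths and angles): at any instant $t$, the $x$ component of the sum of several rotating vectors equals the sum of their $x$ components.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 5.0
t = 0.37                                  # any instant

# Five rotating vectors with random lengths R and initial angles φ
R = rng.uniform(0.5, 2.0, size=5)
phi = rng.uniform(0, 2 * np.pi, size=5)

# Each vector at time t, as (x, y) = R(cos(ωt+φ), sin(ωt+φ))
vecs = np.stack([R * np.cos(omega * t + phi),
                 R * np.sin(omega * t + phi)], axis=1)

sum_of_x = vecs[:, 0].sum()          # Σ of x components
x_of_sum = vecs.sum(axis=0)[0]       # x component of Σ of vectors

assert np.isclose(sum_of_x, x_of_sum)
```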

Adding phasors is useful only if the voltages or currents that they represent all have the same frequency. Then the phasors all rotate as a group and keep their relative positions.

Phasors may be represented by complex numbers (and it's a neat thing to do), but, as rotating vectors, they can be handled using only real numbers.

Edited to try and make clearer.

Philip Wood