Feedback and regulation
Imagine yourself writing your new poem in your comfortable mountain hut, far away from any crowded city. It is a beautiful evening, but after some time you start to feel a bit cold. A quick look at the room thermometer reveals that it really is cold – only 20 degrees Celsius! All right, in the mountains the temperature drops quickly after the sun goes down. So what do you do? You walk to your room temperature controls and adjust the knob slightly. In just 15 minutes, you know, it will be comfortable again.
You used feedback (negative feedback) to maintain your room temperature. You yourself acted as a part of the feedback system.
Consider a real-world amplifier. What is the difference between an ideal and a real-world amplifier? One difference is that the real-world amplifier doesn’t have constant gain. Its gain may depend on the signal level (non-linearity) or on the signal frequency, or on the outside temperature; it may change over the years (aging) or differ between two particular units of the same type. What can we do about it?
Figure 1: an amplifier with input 'x', output 'y' and gain 'A'
In the ideal case, a constant level ‘x’ at the amplifier input produces a constant level ‘y’ at its output, where y=x*gain. But our amplifier is not ideal. There can be many reasons (environmental temperature, for example) that change its output ‘y’ even if ‘x’ is held constant. Could there be a way to counteract this unwanted change? Can we take a portion of the output signal ‘y’ and somehow subtract it from the input signal ‘x’? We hope that this way any unwanted change of the amplifier output will be partially compensated by an opposite change at the amplifier input.
Figure 2: a negative feedback was added to an amplifier
Above you see a system where a proportional part of the output signal ‘y’ is subtracted from the input ‘x’. The result is then forwarded to the amplifier itself. The equation  mathematically describes this system. The equation  is the same, but better looking. By looking at the overall gain of the system, found in equation , we can conclude a few things:
As ‘beta’ is less than one, we can use a simple attenuator to bring the proportional part of the output back to the input. Making an attenuator is simple, and an attenuator can be calibrated precisely (only two calibrated resistors might be needed).
But is this system any more resistant to changes of the gain ‘A’? The answer is in the equation . This equation tells us how much the overall gain changes with a change of the gain ‘A’. The following can be concluded: a larger gain ‘A’ gives better resistance!
The conclusion is fantastic – by providing a very large gain ‘A’, we can make a system that is quite resistant to fluctuations of the gain ‘A’ and, at the same time, has an overall gain equal to 1/beta. As ‘beta’ can be adjusted precisely, the overall gain 1/beta is also accurate.
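The conclusion is easy to check numerically. Below is a minimal sketch of the closed-loop gain formula y/x = A/(1+A*beta); the numbers are illustrative, not from any particular amplifier:

```python
# Closed-loop gain of a negative-feedback loop; numbers are illustrative.
def closed_loop_gain(A, beta):
    """Overall gain y/x = A / (1 + A*beta)."""
    return A / (1 + A * beta)

beta = 0.01                                    # set by two precise resistors
nominal = closed_loop_gain(1_000_000, beta)    # huge, rough open-loop gain
drifted = closed_loop_gain(700_000, beta)      # 'A' drops 30% (heat, aging...)

print(nominal)   # very close to 1/beta = 100
print(drifted)   # a 30% drop in 'A' barely moves the overall gain
```

A 30% change in ‘A’ here changes the overall gain by only about 0.004%, which is the whole point of the trade.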
Figure 3: overall gain and its resistance to the amplifier's gain fluctuations when the amplifier gain goes extremely high
Even better, making a rough, high-gain amplifier ‘A’ is actually simpler than it might seem. We just cascade several transistor stages. In a way, the feedback we employed trades gain for refinement.
What we did is called a negative-feedback loop. It is called ‘negative’ because it opposes the change of the output by providing a compensating action at the input. Besides stabilizing the gain, negative feedback can also extend the amplifier's bandwidth:
(For example, an open-loop amplifier could have a gain of 10000 at 0 Hz but only 7000 (-3 dB) at 10 Hz. We would say that this amplifier has a bandwidth of 10 Hz. When negative feedback is employed, the overall gain could be 20 at 0 Hz, dropping to 14 (-3 dB) at 5 kHz, making the bandwidth 5 kHz wide. The negative feedback extends the bandwidth at the expense of gain.)
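To make the bandwidth example concrete, here is a small sketch assuming a one-pole open-loop amplifier A(f) = A0/(1 + j·f/f0), with A0 = 10000 and f0 = 10 Hz as in the parenthesis above, and beta chosen for a closed-loop DC gain of 20:

```python
# One-pole amplifier wrapped in negative feedback; numbers follow the text's example.
def closed_loop(f, A0=10_000.0, f0=10.0, beta=0.0499):
    A = A0 / (1 + 1j * f / f0)        # open-loop gain at frequency f (Hz)
    return abs(A / (1 + A * beta))    # magnitude of the closed-loop gain

g_dc = closed_loop(0.0)       # 20.0 at DC (A0*beta = 499, so 10000/500)
g_bw = closed_loop(5_000.0)   # about 14 (-3 dB): the bandwidth moved out to 5 kHz
print(g_dc, g_bw)
```

Notice that the gain-bandwidth product stays the same: 10000 × 10 Hz = 20 × 5 kHz = 100 kHz.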
How does the negative feedback handle noise in the system? We will analyze it using the system below, where noise ‘n’ is introduced somewhere in the middle of the amplifier, A=A2*A1.
Figure 4: noise simulation inside amplifier
We can see that the feedback reduces the noise if the noise appears close to the output side (that is, if most of the overall gain comes from A1). If the noise is introduced at the very output of the amplifier (A2 approximately equal to 1), it can be largely cleaned, provided that the gain of the amplifier is high enough. Therefore, in a strong-feedback amplifier, the last stage (the power stage) can introduce a lot of noise without doing much harm. (The input stage, on the other hand, must be as noiseless as if no feedback were used.)
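The same conclusion can be reached by solving the loop algebraically: with y = A2*(A1*(x − beta*y) + n), the output is y = (A1*A2*x + A2*n)/(1 + A1*A2*beta), so the injected noise is multiplied by A2/(1 + A*beta). A small sketch with illustrative numbers:

```python
# How strongly noise 'n' injected between stages A1 and A2 reaches the output.
def noise_gain(A1, A2, beta):
    return A2 / (1 + A1 * A2 * beta)

beta, A = 0.01, 100_000.0
near_output = noise_gain(A1=A, A2=1.0, beta=beta)  # noise enters at the last stage
near_input  = noise_gain(A1=1.0, A2=A, beta=beta)  # noise enters at the first stage

print(near_output)  # ~0.001: output-stage noise is strongly suppressed
print(near_input)   # ~100: input-stage noise is amplified just like the signal
```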
As shown, the negative feedback stabilizes the amplifier output in many cases, whether there are fluctuations inside the amplifier itself or disturbances from outside. The negative feedback is simply good at stabilizing outputs and keeping them strong.
There are amplifiers, called operational amplifiers (op-amps), designed to be used within negative-feedback loops. Without the feedback, op-amps are pretty useless (except as comparators). Op-amps are made with ridiculously high gain (one million is common) and are cheap because they are mass-produced.
I find the negative feedback charming. We can make rough amplifiers and refine them using negative feedback. The output of such a system will keep its value strong despite disturbances. But… there are many other systems in the world that are not as simple as amplifiers. Can we use the benefits of negative feedback with them too? For example, can we regulate our room temperature so that, once set, it doesn’t change with outside disturbances?
But before we continue, just one observation… look again at figure 2, where the simplest negative-feedback system is depicted. If, as we said, the gain ‘A’ is very high (millions), then obviously the input to the amplifier is normally very small. This in turn means that the difference between the input value ‘x’ and the feedback value ‘beta*y’ is normally very small. We could say that we are somehow comparing the input value to the feedback value, and then governing the system in a manner that makes the comparison error as small as possible.
I will draw an overly simplified room-heating diagram. The diagram is simplified because I am drawing only a first-order system (also, I am not sure whether the power of losses is proportional to the temperature difference). But suppose that the room is heated/cooled with a fanned heat pump. In the drawing below, ‘H’ represents the heat pump’s reference-to-power ratio, ‘L’ represents the insulation quality of the room, and ‘C’ is a physical constant. We are not allowed to change any of these.
Figure 5: very simplified heating system of one room
The equation  represents the system, as does the more revealing equation . The equation  is the Laplace transform of equation  (an all-zero initial state is assumed). Finally, the equation  follows directly from . From the equation  it can be seen that the outside temperature ‘n’ invariably influences the temperature of the room ‘y’. We cannot do anything about it (except maybe insulating the room better, that is, decreasing ‘L’).
If the outside temperature drops, we will have to go to the heat-pump buttons and adjust the reference to keep the room temperature the same. Okay, can we make a regulator that will do this for us? We will try to use negative feedback for this regulation, because we already know that negative feedback can produce surprisingly nice results.
First of all, we need some kind of sensor to measure the actual room temperature ‘y’. Then we must somehow ‘negatively’ feed the signal from this sensor back to the very input of the system… Here is the complete idea:
Figure 6: we added a simple proportional regulator 'P' to the room heat system
The equation  represents our room-and-heat-pump system (the same equation that we already calculated earlier). The equation  is how the ‘r’ signal gets calculated (in the Laplace domain). By combining  and  we get , the equation for the system as a whole (room-and-heat-pump-with-our-regulator). Equation  is the same as  but more beautiful.
Let’s see how the whole system reacts to some disturbance ‘n’. We hope that our simple regulator will do a good job, keeping the room temperature largely unchanged. To ease our math, we will keep the reference input ‘x’ at zero, while the input ‘n’ will be stepped to n0.
Figure 7: analysis of inside temperature stability to some outside temperature disturbance
The equation  (made by substituting X and N into ) describes how the system reacts to the step disturbance in the Laplace domain. The equation  is the same, but in an easier form. Finally, equation  is the same as  but transformed back to the time domain. We can see that by providing a higher gain ‘P’, the disturbance of the room temperature becomes smaller, exactly as we hoped. In fact, by making ‘P’ extremely large, the disturbance becomes virtually non-existent.
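This can be sketched numerically. Below I integrate a guessed first-order room model, C·dy/dt = H·r − L·(y − n) with r = P·(x − y); the model form and all constants are my assumptions, chosen only for illustration:

```python
# Proportionally regulated room: the settled error to a step disturbance n0
# should be L*n0 / (L + H*P) -- it shrinks as 'P' grows.
def settle(P, n0=10.0, H=1.0, L=0.5, C=2.0, dt=0.01, steps=20_000):
    y = 0.0
    for _ in range(steps):
        r = P * (0.0 - y)                     # regulator, reference x = 0
        y += dt * (H * r - L * (y - n0)) / C  # room dynamics (Euler step)
    return y

weak   = settle(P=1.0)    # settles near 0.5*10/(0.5+1)   = 3.33 degrees off
strong = settle(P=100.0)  # settles near 0.5*10/(0.5+100) = 0.05 degrees off
print(weak, strong)
```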
By playing with the math, we can see that the regulator will also compensate for changes in heat-pump efficiency or insulation efficiency. Even the gain ‘P’ of the regulator amplifier itself can change without disturbing the set temperature much (as long as ‘P’ is still large enough).
Not only this… It can be shown that (when the set point is changed, for example) the room temperature will settle by exponential approach, and with a higher ‘P’ the time needed for the temperature to settle becomes shorter. This, of course, is true only if the heat pump can provide the necessary power. As the heat-pump power is limited in the real world, we will actually need to wait even if we use an infinite ‘P’.
So, the higher the ‘P’ gain, the smaller the sensitivity to disturbance (of any kind). We can be happy because we found a regulator that will keep the room temperature exactly as set. But in the real world the ‘P’ gain may be limited for various reasons. For example, noise on the ‘error’ signal may drive the heat pump eternally on-off-on-off if we use an excessive ‘P’ gain.
On the other hand, we are not limited to simple amplifiers (‘P’ gain) as regulators. Instead, we can put much more elaborate regulators in their place. For example, instead of simply amplifying the error signal, we could integrate it and forward the integrated value to the regulated system.
This actually sounds like a nice idea… If a limited ‘P’ gain must be used, then there will necessarily be some small error even in the settled state. But we can then ‘slowly’ integrate this small error, bringing our system into the non-error condition. Look at the drawing below:
Figure 8: We added an integrative component to our regulator. This may be good if the 'P' gain is limited.
We hope that in the above regulator the proportional part ‘P’ will bring the system somewhere near the non-error state, while the integrative part ‘I’ will slowly drive it further to completely eliminate the error. As long as there is even the slightest amount of error, the integrator will drive ‘r’ until the error is fully compensated.
If we work through our simplified math on the whole system (this time ‘n’ will be held at zero to make things easier, while ‘x’ will be stepped to x0) we get… (sorry, it gets complex)…
Figure 9: Analysis of the system output when the reference point is step-changed
The equation  is the equation of our room-and-heat-pump system that we calculated earlier. The equation  describes how our regulator governs the room-and-heat-pump system. As a result of their combination, after a long calculation, we get equation . The equation  represents the response of the overall system to a step of the reference signal ‘x’. By a quick analysis of the Laplace-domain equation , we can conclude that oscillations are possible! Equation  tells the frequency of these oscillations.
Oscillations will happen if ‘P’ is small and ‘I’ is large. The smaller the ‘P’ or the larger the ‘I’, the faster the oscillations. These oscillations will be damped (a larger ‘P’ damps them faster), but they can still be unwanted. By keeping ‘I’ small enough, we may avoid oscillations altogether, but then the system will take a long time to settle into the non-error state. Therefore we can conclude that the above ‘P-and-I’ regulator has its potential if either we don’t mind possible damped oscillations, or we can wait a long time for the system to settle.
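These trade-offs show up in simulation. The sketch below assumes a first-order room model C·dy/dt = H·r − L·y (the constants H = 1, L = 0.5, C = 2 are made up) with a PI regulator, stepping the reference to 10:

```python
# PI-regulated room: small P + large I oscillates (overshoots); large P + small I doesn't.
def step_response(P, I, x0=10.0, H=1.0, L=0.5, C=2.0, dt=0.001, steps=60_000):
    y, integral, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = x0 - y
        integral += e * dt
        r = P * e + I * integral       # the PI regulator output
        y += dt * (H * r - L * y) / C  # room dynamics, outside 'n' held at 0
        peak = max(peak, y)
    return y, peak

y_osc, peak_osc = step_response(P=0.5, I=2.0)     # underdamped: big overshoot
y_slow, peak_slow = step_response(P=10.0, I=0.1)  # overdamped: no overshoot, slow
print(peak_osc, peak_slow)
```

The first pair of gains overshoots well past 10 before the damped oscillation dies out; the second never overshoots but creeps toward 10 slowly.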
Our room-and-heat-pump system is still quite simple (a first-order system). A higher-order system may produce undamped oscillations when an integrating regulator is used. Still higher-order systems may produce undamped oscillations even if a high-gain proportional-only regulator is used. Finally, there are systems that are inherently unstable and oscillate by themselves.
To stabilize such systems, we can feed them the derivative of the error signal. This brings us to the PID controller (PID regulator) depicted below.
Figure 10: The general way the PID controller is embedded into the system control
Equations  and  are the same – they show the response of a PID-regulated system.
The PID regulator handles the error signal by forwarding three components to the system: the Proportional, the Integral and the Derivative. By adjusting the gains P, I and D, one might find an optimal PID configuration to drive/stabilize the system ‘S’ so that the output of the system ‘y’ follows the input reference ‘x’ as closely as possible.
The proportional component does most of the work (especially on zero-order or first-order systems). The integrative component can help to achieve (slowly) the non-error state if we cannot use a high ‘P’ for any reason. Care must be taken with a high ‘P’ and especially with a high ‘I’, because oscillations may occur in higher-order systems. The derivative component is the charming one: it can stabilize wild systems.
Note that not all three components are always used. The ‘P’ is used almost every time. The ‘I’ is also often used; the ‘D’ less so. So we can have ‘P’, ‘PI’, ‘I’, ‘PD’ and ‘PID’ regulation.
How does the ‘D’ component work? It is used when it is known that the regulated system has some sort of ‘inertia’. Because of this inertia, a regulator governed only by ‘P’ and ‘I’ may easily overshoot its target level. (In the worst case, the overshoot error may be even larger than the starting error, and undamped oscillations will occur.) The derivative component is sensitive to the ‘movement speed’ of the system, that is, it is proportional to the rate of change of the system output. The ‘D’ component governs the system in a manner that reduces its ‘movement speed’ – it doesn’t allow the system to change too fast. This decreases the possible overshoot and may even suppress oscillations. (Sure, a too-large ‘D’ component may increase the settling time.)
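A minimal sketch of the effect, assuming (my choice, not from the text) a plant with pure inertia, y'' = r, so the regulator output directly accelerates the output:

```python
# P-only vs PD regulation of an 'inertial' plant y'' = r. Gains are illustrative.
def peak_output(P, D, x0=1.0, dt=0.001, steps=20_000):
    y, v, peak = 0.0, 0.0, 0.0   # position, velocity, highest y seen
    for _ in range(steps):
        e = x0 - y
        r = P * e + D * (-v)     # de/dt = -dy/dt for a step reference
        v += r * dt              # inertia: r sets the acceleration
        y += v * dt
        peak = max(peak, y)
    return peak

peak_p  = peak_output(P=4.0, D=0.0)  # overshoots to ~2*x0 and rings on and on
peak_pd = peak_output(P=4.0, D=4.0)  # damped by the D term: no overshoot at all
print(peak_p, peak_pd)
```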
One problem with the ‘D’ component is that it is sensitive to high-frequency noise. If high-frequency noise is present in the error signal, it becomes greatly amplified after derivation, producing erratic handling of the regulated system. Sometimes a low-pass filter is applied to the ‘D’ input.
The regulation doesn’t stop at PIDs. Not at all. The regulation merely starts here. More elaborate regulators can be employed. To develop an optimal regulator, one must know the system one wants to control.
We know how negative feedback can be used to linearize or stabilize stuff. Is there any useful purpose for positive feedback? Of course: it is used in bistables and oscillators. The positive and negative feedback are two opposite faces of the same thing. One easily converts into the other.
Figure 11: A positive feedback to amplifier 'A'
I feel that the equation  is misleading. It doesn’t reveal much, if anything. For example, if we substitute A=100, beta=0.2, the equation  becomes: y = -5.26 x. For any stable ‘x’ there will be a stable ‘y’, and it seems that the system is stable… but I tell you, it is not.
We need to think harder… The math used in equations  and  doesn’t represent the system entirely. These equations assume that the signal propagation is infinitely fast (or that the system has the size of a mathematical point) and that the system is absolutely free from any inertia. I would say that these equations neglect the system – they condense the system to a structureless mathematical point.
What equations  and  are telling us is that for any input ‘x’ there is a unique output ‘y’ for which the system is ‘at peace with itself’. That is, the system can reach balance. For example, if A=100, beta=0.2, x=1, and y=-5.26, the system is balanced. But the equations tell nothing about the stability of this balanced state. They tell nothing about what happens if there is even the smallest disturbance of the balanced state. (Note that if the system really were a structureless mathematical point, it would not be able to experience any disturbance. But our system is not a structureless mathematical point.)
If you are clever, you have noticed that equations  and  suffer from the same problem. True, but equations  and  work because we correctly assumed that the system depicted in figure 1 is stable and that it will (very quickly) converge to its stable state after every disturbance. This stable state is expressed by equations  and … However, our positive-feedback system depicted in the figure above is not stable. Equations  and  do not express its stable position; they express its balanced state instead.
The balanced state of a positive-feedback system is like a stick carefully balanced in its vertical position: even the slightest disturbance will tip it over. (The negative feedback, on the other hand, is like a stick hanging vertically. After a disturbance it will eventually return to its stable vertical position.)
We need better equations to describe our system than  and . We will try to add some physical reality to our positive-feedback system. In the system below we added a low-pass filter to the amplifier output. This low-pass filter emulates the limited slew rate of the amplifier.
Figure 12: A more realistic model of positive feedback
The equations are simple. The equation  tells about the overall system response in case of a disturbance (a Dirac impulse, in this case). As we can see, once disturbed, the system will change exponentially ‘forever’ (provided that A*beta>1). Therefore, the system is unstable.
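The exponential runaway is easy to reproduce numerically. A sketch, modeling the low-pass-filtered amplifier as a first-order stage with time constant tau (all values illustrative):

```python
# Positive feedback around a first-order stage: tau*dy/dt = A*(x + beta*y) - y.
# With A*beta > 1, any tiny disturbance grows exponentially.
def evolve(y0, A=100.0, beta=0.2, tau=1.0, dt=0.001, steps=500):
    y = y0
    for _ in range(steps):
        y += dt * (A * (0.0 + beta * y) - y) / tau  # input x held at 0
    return y

runaway = evolve(1e-6)               # A*beta = 20 > 1: the disturbance explodes
settles = evolve(1e-6, beta=0.005)   # A*beta = 0.5 < 1: the disturbance dies out
print(runaway, settles)
```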
Of course, a positive-feedback system has its limits. Its output cannot exceed some limit value. Therefore, in the real world, a non-oscillating positive-feedback system usually has two stable states: its positive and its negative limit. Any state in between is unstable. This brings us to the concept of a bistable.
A bistable is a system that has two stable states and cannot easily stand still in any of the ‘in-between’ states. A stable state of a system is the state to which the system will return after some small disturbance. The bistable has two such stable states, and it can jump into its other stable state if the disturbance is large enough… One real-world op-amp-based bistable example is depicted below.
Figure 13: A bistable made of a single op-amp and two resistors. Positive feedback is used.
We can see that the input-output curve of the bistable doesn’t have one single path – hysteresis is present. Whenever you see hysteresis in any input-output curve, you may suspect bistability (or multistability). Bistable systems can memorize their state (while the input is in the neutral range, a bistable's output stays in its last memorized state) and are used as digital memories in arrays of zillions.
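A sketch of the memory effect, modeling an inverting op-amp bistable with assumed component values (Vcc = 15 V, feedback divider r1:r2 = 1:9, giving ±1.5 V switching thresholds):

```python
# Inverting Schmitt trigger: the op-amp output saturates at +/-Vcc, and a
# fraction r1/(r1+r2) of it is fed back to the non-inverting (+) input.
def schmitt_step(vin, out, vcc=15.0, r1=1.0, r2=9.0):
    v_plus = out * r1 / (r1 + r2)      # positive feedback sets the threshold
    return vcc if v_plus > vin else -vcc

out, trace = 15.0, []
for vin in [0.0, 1.0, 1.4, 1.6, 1.0, 0.0, -1.0, -1.6, 0.0]:
    out = schmitt_step(vin, out)
    trace.append(out)
print(trace)  # vin = 0.0 appears three times, with two different outputs
```

The output at vin = 0 depends on history: +15 V before the input ever crossed +1.5 V, then −15 V until the input crosses −1.5 V. That history dependence is the hysteresis, and the memory.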
To make a bistable, you need positive feedback of some sort and some non-linearity to limit the positive-feedback effect. (In the above example, the non-linearity is present inside the operational amplifier A, because its output cannot exceed +Vcc and –Vcc, roughly.)
Positive feedback over lower-order systems may produce bistability. Positive feedback over higher-order systems may even produce oscillations. This way we can make an oscillator. We already discussed oscillations while talking about PID regulators – but we talked about negative feedback there! This is why I say that positive and negative feedback are two opposite faces of the same thing. In higher-order systems, the negative feedback may easily turn into positive feedback, producing unwanted oscillations (and tearing things apart). How is this possible?
The feedback affects stability. The primary purpose of the negative feedback is usually to make systems more stable. The positive feedback is used to provide controllable instability (bistables, oscillators and self-destructing gadgets like bombs).
I want to show how a negative feedback can turn positive.
Figure 14: A general feedback system with open feedback loop. It is easier to analyse a feedback system when we open its loop.
The picture above depicts a very simple system with negative feedback. The output is directly fed back to the input. But for analysis purposes, we will break the feedback loop. With the loop broken, we will feed a pure sine-wave signal to the input ‘x’ and monitor the output ‘y’. Of course, because the system is linear, the output ‘y’ will also be a pure sine-wave signal (but possibly of different amplitude and/or phase).
Figure 15: in some systems, the phase will shift with frequency (an example)
At the left, we provided a low-frequency signal at the input (the black curve). We see that the output signal’s (green curve) phase exactly follows the input phase. There is no significant phase difference.
In the middle, we provided a middle-frequency signal to the input. We can see some phase shift now.
At the right, we provided an even higher-frequency input. The phase shift is now exactly 180 degrees. What happens if we close the feedback loop now? Because there is a 180-degree phase shift between the two subtracted signals, the two sine-wave signals will in fact sum up. This creates positive feedback instead of negative.
As we can see, for the above system the feedback can be negative in the low-frequency range but positive in the high-frequency range. The analysis method we used – the open-loop method – is very common and useful when analyzing stability.
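As a sketch of the open-loop method, assume (my example, not from the text) a loop made of three identical first-order stages, L(jw) = A/(1 + jw/w0)^3. Each stage contributes up to 90 degrees of phase lag, so the total lag crosses 180 degrees at a finite frequency:

```python
import cmath, math

# Open-loop gain of three cascaded one-pole stages; A and w0 are illustrative.
def open_loop(w, A=20.0, w0=1.0):
    return A / (1 + 1j * w / w0) ** 3

w180 = math.sqrt(3.0) * 1.0    # each stage lags 60 degrees here: 180 in total
g = open_loop(w180)
print(math.degrees(cmath.phase(g)))  # +/-180 degrees: feedback turned positive
print(abs(g))                        # 20/8 = 2.5 > 1: closing the loop oscillates
```

Because the loop-gain magnitude at the 180-degree frequency is still above one, closing the loop gives oscillation; reducing A below 8 in this example would restore stability. This is exactly the kind of check the open-loop method is for.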
Much nastier things can happen in non-linear systems.