Uncertainty

As a software engineer working on graphics I’ve drifted a pretty long way from my original specialty, aerospace controls. I’m glad I got my PhD in controls though, because it taught me a unique way to approach problem solving, one which I tend to apply well beyond the traditional realm of controls.

Controls is a very math-heavy discipline, focused on proofs and guarantees of safety because in many situations lives depend on the reliability of the system: autopilots come to mind, but there are countless examples in many fields. At its core is uncertainty and the management thereof. The first thing we’re taught when modeling systems is that our models are inevitably wrong, and not by a little: nothing is linear, nothing is rigid, nothing is independent of the world around it. These assumptions make our math tractable, but they are not realistic. And yet, all is not lost: we take our terrible models and our unreliable sensors and we bound their uncertainty. No matter how poor the input is, it is still valuable as long as you know how poor it is. Then comes the magic of feedback control: the ability to stabilize systems even in the face of tremendous uncertainty, what we call robustness.
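
To make that robustness claim a little more concrete, here is a minimal sketch in Python (the plant, gain, and numbers are all invented for illustration, not taken from any real system): a proportional controller designed against an assumed model still stabilizes an unstable plant when the true input gain is half or double what we assumed.

```python
# Toy sketch of feedback robustness (illustrative only; all numbers are made up).
# Plant: dx/dt = x + b*u, which is unstable on its own. Controller: u = -k*x,
# with k chosen from an *assumed* value of b. Even when the real b is half or
# double the assumed value, the loop still drives x toward zero.

def settle(true_b, assumed_b, x0=1.0, dt=0.01, steps=500):
    k = 3.0 / assumed_b             # place the closed-loop pole at -2 for the assumed model
    x = x0
    for _ in range(steps):
        u = -k * x                  # feedback acts only on the measured state
        x += dt * (x + true_b * u)  # forward-Euler step of the *true* plant
    return x

print(settle(true_b=1.0, assumed_b=1.0))  # model correct: decays toward zero
print(settle(true_b=0.5, assumed_b=1.0))  # true gain half the assumed: still decays
print(settle(true_b=2.0, assumed_b=1.0))  # true gain double: still decays
```

A real robustness argument would put bounds on how far off the model can be before the loop fails, but even this toy shows why feedback is so forgiving of modeling error.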

This description reflects a largely classical controls view of the world, stemming from the field’s roots in electrical engineering, but modern control has increasingly been influenced by computer science techniques like optimization and, more recently, machine learning. These are powerful tools, enabling things like self-driving cars that seemed impossible a short time ago, but they also come from a different mentality. Take optimization: computer scientists tend to look at these problems mathematically, as having a well-defined correct answer. In applying it to controls, however, you have all the same modeling uncertainty, now compounded with an even more uncertain cost function. A common problem with overzealous optimization is that it tends to drive your solution to exactly where your assumptions are violated. This will end in tears or worse. It is only by keeping uncertainty at the forefront of the analysis that a robust system can be created, regardless of the techniques employed.
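
Here is a toy Python sketch of that failure mode (the cost function and numbers are invented, not from any real problem): we fit a quadratic model of the cost from samples taken near the operating point, then let an unconstrained optimizer minimize the model.

```python
# Toy sketch (all numbers invented): the optimizer drives the solution to exactly
# where the model's assumptions break down.

import numpy as np

def true_cost(x):
    # Looks like a clean parabola near x = 0, but past x = 2 an unmodeled effect
    # (think actuator saturation or a structural limit) dominates.
    return (x - 5.0) ** 2 + 100.0 * np.maximum(0.0, x - 2.0) ** 2

xs = np.linspace(-1.0, 1.0, 21)             # we only ever measured near the operating point
a, b, c = np.polyfit(xs, true_cost(xs), 2)  # local quadratic model of the cost

x_star = -b / (2.0 * a)   # unconstrained minimum of the *model*
print(x_star)             # ~5: far outside the region we ever sampled
print(true_cost(x_star))  # ~900: the true cost there is terrible
print(true_cost(2.0))     # 9: staying where the model is valid is far better
```

The model was perfectly accurate everywhere we measured it; the optimizer simply went where we never checked.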

Of course, at the core of understanding uncertainty is statistics, and this too is fertile ground for invalid assumptions, the most popular of which are Gaussian distributions and uncorrelated signals. Ask the finance industry how well those assumptions worked out for their risk analysis. A surprising number of distributions are Gaussian, but if you are rejecting any outliers, yours is not, and its standard deviation is meaningless. When working with risk and safety, all that matters is the tails of the distribution, and the tails are exactly the outliers.
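
A small Python experiment makes the point (the Student-t distribution here is just a stand-in for “real data with heavy tails,” and the numbers are arbitrary): clip the outliers, compute a standard deviation from what is left, and the “five sigma” events you declared nearly impossible happen thousands of times more often than the Gaussian model says.

```python
# Toy experiment (arbitrary numbers): heavy-tailed data plus outlier rejection
# makes "k-sigma" risk estimates wildly optimistic.

import math
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=1_000_000)   # heavy-tailed stand-in for real measurements

sigma = data.std()
trimmed = data[np.abs(data) < 3 * sigma]      # "reject the outliers"
sigma_trimmed = trimmed.std()

k = 5.0
gaussian_tail = math.erfc(k / math.sqrt(2))              # what a Gaussian model predicts
actual_tail = np.mean(np.abs(data) > k * sigma_trimmed)  # what the data actually do

print(f"Gaussian model: P(|x| > {k} sigma) = {gaussian_tail:.1e}")
print(f"Actual data:    P(|x| > {k} sigma) = {actual_tail:.1e}")
```

The trimmed standard deviation describes the comfortable middle of the distribution, which is exactly the part that never hurts anyone.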

Another classic example of unexpected correlation is the triple-redundant hydraulic system of the DC-10, all three lines of which were severed by a single engine failure, simply because they all ran past the tail engine. The engine failure may have been the cause, but it wasn’t the problem: the problem was how many supposedly independent systems were sensitive to it. We are so used to assuming things are uncorrelated and Gaussian that it’s easy to ignore how far-reaching these assumptions are, and how infrequently they are validated.

Of course, these errors pale in comparison to the errors we make personally. Statistics are counter-intuitive precisely because our brains are so bad at risk analysis. Take an all too common example: we’ve done something a hundred times without a failure, so it’s probably safe. If that failure would kill someone, you probably want it to happen less than one time in a million. Those hundred tries therefore give you no indication of safety whatsoever. This is so common we even have a phrase for it: being lulled into a false sense of security. And experts are not at all immune: just look at the Challenger disaster.
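
The arithmetic behind that claim is worth spelling out; here is a quick back-of-the-envelope check in Python (my numbers, chosen only to show the scale of the gap):

```python
# Back-of-the-envelope: what do 100 failure-free trials actually tell you?

n = 100   # trials observed, all successful

# Chance of seeing a clean run of n trials if the true failure probability were p:
for p in (1e-6, 1e-3, 1e-2, 3e-2):
    print(f"true p = {p:g}: chance of {n} clean trials = {(1 - p) ** n:.2f}")

# The "rule of three": after n clean trials, the 95% upper confidence bound on p
# is roughly 3/n, i.e. about 3% here, some 30,000 times worse than one in a million.
print(f"95% upper bound on failure probability: about {3 / n:.2f}")
```

A failure rate of one in a hundred would still give you better than a one-in-three chance of a spotless record, so a spotless record of that length says essentially nothing about one-in-a-million safety.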

Another common problem with human decision making is falling back on binary logic: if this, then do that. But this completely ignores uncertainty, and when faced with uncertainty it often results in decision paralysis, when in fact taking no action is frequently the worst action. In reality we are often forced to make decisions with incomplete information, and only by embracing this uncertainty can we succeed. Poker players and generals know this well.

What tools does control theory provide to help us keep things on track? Lots of specific things of course, like Laplace transforms and Linear Quadratic Regulators, but I prefer the general intuitions, which I find apply far more broadly than the narrow mathematical assumptions they are founded upon. For instance, you'll need to sense and act at a higher frequency than anything you want to resolve. Averaging smooths by removing high frequencies, but at the cost of increasing delay. Delay is the fundamental driver of instability. Feedback is great at stabilization and tracking, but there is a fundamental tradeoff between the two, and any inherent instability in the system makes that tradeoff worse. Feedforward, or making changes based on what you think will happen, can increase performance but makes you less robust to errors in your model and assumptions. Damping (anything like friction that fights motion) is critical to stability. You can add damping with feedback, but it will never be as good as the real, physical thing, because yours will always come with delay. Instability trumps stability.
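
As a sketch of the delay intuition in particular, here is a toy Python simulation (plant, gain, and time step are all arbitrary choices of mine): the same proportional loop on a perfectly stable plant, where the only thing that changes is how stale the measurement is by the time the controller acts on it.

```python
# Toy sketch: delay alone turns a well-behaved feedback loop unstable.
# The plant (dx/dt = -x + u) is stable on its own; only the measurement delay varies.

def run(delay_steps, gain=4.0, dt=0.05, steps=400):
    x, peak = 1.0, 0.0
    history = [x] * (delay_steps + 1)    # stale measurements waiting in the pipe
    for i in range(steps):
        measured = history[0]            # the controller only sees a delayed state
        u = gain * (0.0 - measured)      # proportional feedback toward zero
        x += dt * (-x + u)
        history = history[1:] + [x]
        if i >= steps - 100:
            peak = max(peak, abs(x))     # amplitude near the end of the run
    return peak

for d in (0, 5, 10, 20):
    print(f"measurement delay = {d:2d} steps -> late-run amplitude = {run(d):.3g}")
```

With no delay the error dies away; with enough delay the very same gain oscillates and diverges. Averaging your measurements buys smoothness at the price of exactly this kind of delay.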

My big takeaway is that any recurring decision process can make use of the tools of control theory, since such processes are fundamentally feedback loops at some time scale. I’d love to see these tools applied to the actions of central banks and to the progressivity of the tax system. But in the meantime I search out the grey between the black and white. As I’ve become more interested in computational geometry I’ve noticed a trend toward exact arithmetic, which appeals to mathematicians but which I find lacking. What if instead we embrace floating-point rounding error as a fundamental uncertainty? Will this enable more robust solutions? Stay tuned...
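
As one hedged sketch of what that could look like (a toy of my own, not anything from Manifold, and the 4*eps bound below is deliberately crude rather than a careful error analysis): a three-valued orientation test that refuses to pick a sign when the computed answer is within rounding error of zero.

```python
# Toy sketch (not Manifold's code): a 2D orientation test that treats rounding
# error as first-class uncertainty instead of demanding an exact sign.

import sys

EPS = sys.float_info.epsilon

def orient2d(ax, ay, bx, by, cx, cy):
    """+1 if c is left of a->b, -1 if right, 0 if too close to call."""
    t1 = (ax - cx) * (by - cy)
    t2 = (ay - cy) * (bx - cx)
    det = t1 - t2
    uncertainty = 4.0 * EPS * (abs(t1) + abs(t2))  # crude bound on the rounding error
    if det > uncertainty:
        return 1
    if det < -uncertainty:
        return -1
    return 0   # within rounding error of zero: don't pretend we know

print(orient2d(12.0, 12.0, 24.0, 24.0, 0.5, 0.5))          # exactly on the line: 0
print(orient2d(12.0, 12.0, 24.0, 24.0, 0.5, 0.5 + 1e-16))  # off by less than rounding can resolve: 0
print(orient2d(12.0, 12.0, 24.0, 24.0, 0.5, 1.5))          # clearly left of the line: +1
```

Downstream code then has to handle the “don’t know” case explicitly; whether that leads anywhere more robust is exactly the open question.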
