Observability is not always enough


Mental models can change what you see and how you perceive the world around you.

I spend a lot of time helping organizations understand what is going on so they can make better decisions. With so much focus these days on Big Data and Observability, that may not be much of a surprise. However, most of my time is not spent helping them instrument their ecosystem or set up tools to collect and analyze data. In fact, most already have a plethora of advanced tools and teams in place by the time I arrive.

Yet despite all the heavy investment in gathering and analyzing ecosystem data, I inevitably find gaping holes in the very situational awareness the organization needs for decision making. What is going on?

The problem is rarely a lack of data. Instead, the challenges usually begin with subtle yet important flaws in the mental models that people hold about their ecosystem. These flaws affect what data you look at and how you interpret it. Both heavily influence the decision-making process.

What is a Mental Model?

Mental models are our personal, internal representations of the world around us. They shape how we perceive and understand its connections and behaviors. We all possess a great many of them. Many overlap or build upon each other in ways we can chain together like a series of conditional statements. Sometimes we even pull apart and reuse elements of one mental model in another. We do all of these helpful tricks without thinking much about it.

A good example of all of this in action is a Stop sign.

Even before we learn that it is a traffic safety mechanism telling us to stop and look for cross traffic before proceeding, we have a pretty good idea of what a Stop sign is for. The most obvious clue is the word “Stop” written on it (mental model #1). We learn pretty quickly in life that official signage rarely contains meaningless or contradictory words (mental model #2).

What would you do at this sign?

The color red (mental model #3) is usually used to bring our attention to something important.

What about a blue sign?

More mental models build on these once we know what a Stop sign looks like. For instance, we start to recognize its octagon shape on its own as an indication to stop (mental model #4). Not only will we recognize a sign with “Stop” written in another language as still a Stop sign, but the octagon symbol is often used in documents (such as school-age tests) and other places to indicate where to stop.

The color red also picks up an additional meaning (mental model #5), symbolizing “Stop” (such as a red light) or something stopped or not working (like a service).

Does this also mean stop?
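To tie this back to the “series of conditional statements” analogy above, here is a minimal, purely illustrative sketch (in Python, with invented function and parameter names, not anything from a real system) of how these overlapping mental models might combine into a single quick judgment:

```python
def should_stop(sign_text: str, color: str, shape: str) -> bool:
    """Layer several overlapping mental models into one quick judgment."""
    # Mental models #1 and #2: official signage means what it says,
    # so the word "Stop" on the sign is enough.
    if sign_text.strip().lower() == "stop":
        return True
    # Mental model #4: the octagon shape alone signals "stop",
    # even when the word is written in another language.
    if shape == "octagon":
        return True
    # Mental models #3 and #5: red draws our attention and, over time,
    # comes to mean "stop" (or "stopped / not working") by itself.
    if color == "red":
        return True
    return False


# A French octagonal sign still reads as a Stop sign:
print(should_stop("ARRÊT", "red", "octagon"))            # True
# A blue rectangular sign does not:
print(should_stop("SCENIC ROUTE", "blue", "rectangle"))  # False
```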

 

The Problem of Mental Model Formation

Mental models are obviously pretty useful in providing quick clues for decision making. Where the problems start is with how we form them.

We form our mental models in a variety of ways. Some, like the Stop sign, are shaped through direct experience or formal education. We all carry a sizable number of formally taught models from school and work.

However, the vast majority of the mental models we have are formed far more informally. A large number come from ideas or beliefs arising from cultural norms, media, family or friends, as well as colleagues and others we encounter in our lives. We also create a sizable portion ourselves. These come from randomly assembled patterns we think are both real and meaningful. Sometimes they are. However, our brains regularly try to find meaningful patterns where there are none. Seeing familiar shapes in clouds is a great example. We know a unicorn-shaped cloud isn’t a unicorn, but we may still think it looks like one.

The Problem with how we gauge mental model accuracy

As one might expect, the variable quality of the ways we build mental models creates lots of opportunities for flaws to form. One would think that correcting these flaws would simply be a matter of regularly checking each model’s accuracy, whether against a reputable source or at least by fixing the model when it fails.

Instead, our brain gauges mental model accuracy in a completely different and more error-prone way. What matters most to it is how long we have held key parts of the model.

Gauging accuracy by how long a mental model has been held probably made a lot of sense when we were hunter-gatherers. Our ecosystem rarely changed dramatically, and staying alive when it did was probably a good sign our models weren’t too flawed. However, in today’s world it means you are far more likely to trust a deeply flawed mental model assembled from bits of information misheard from a drunk uncle in childhood over evidence presented now by a subject matter expert.

Gauging mental model accuracy this way not only undermines our ability to act appropriately, it also makes flawed models far more difficult to correct.

What this means for observability and decision making

Relying upon a flawed mental model obviously can affect your decision making. However, the problems go far beyond the decision itself. For one, even a minor flaw can interfere with what information you think is important to collect, let alone pay attention to. If you do happen to get past those hurdles, flaws can then alter how you perceive the information’s meaning.

When any of these problems happens, no amount of instrumentation and data analysis is going to help. Instead, you need to find and correct the flawed mental model that is getting in the way.
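To make this concrete in observability terms, here is a deliberately simplified and entirely hypothetical sketch (the metric names, values, and model structure are all invented for illustration) of how a flawed mental model can hide a problem even when richer telemetry exists:

```python
# All telemetry the ecosystem could provide (values invented for illustration).
AVAILABLE_TELEMETRY = {
    "cpu_utilization": 38.0,   # percent
    "error_rate": 4.2,         # percent of requests failing
    "queue_depth": 1250,       # messages backing up downstream
}

# Flawed mental model: "slowness is always a CPU problem", so only CPU
# seems worth collecting, and a low CPU number reads as "all is well".
FLAWED_MODEL = {"metrics_worth_collecting": ["cpu_utilization"]}


def assess(model: dict) -> str:
    # Step 1: the model decides what data gets collected at all.
    visible = {
        name: value
        for name, value in AVAILABLE_TELEMETRY.items()
        if name in model["metrics_worth_collecting"]
    }
    # Step 2: interpretation is limited to what the model made visible.
    if visible.get("cpu_utilization", 0) < 80:
        return f"Looks healthy: {visible}"
    return f"CPU problem: {visible}"


print(assess(FLAWED_MODEL))
# Prints "Looks healthy" while the error rate and queue backlog go unnoticed,
# because the model never flagged those signals as worth collecting.
```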

Finding the flaw is a lot harder than it sounds. For one, mental models often overlap or get entangled with one another. Often it takes only one flawed yet seemingly obscure model for problems to build.

Fortunately, the flaws tend to follow a number of common patterns. By carefully building in mechanisms to check for these patterns, you can spot and root them out. The next few articles will walk through some examples that will hopefully help you spot their appearance in your own ecosystem.