We are living in a digital era increasingly dominated by uncertainty, driven in part by the accelerating pace of exponential change. The problem is that we generally clog up the gears of progress and growth in our companies by treating that uncertainty as risk and trying to address it with traditional mitigation strategies. The economist Frank Knight first popularized the distinction between risk and uncertainty almost a century ago. Though it is a dramatic oversimplification, one critical difference is that risk is – by definition – measurable while uncertainty is not.
Proof and Confidence. One way to parse uncertainty from risk, and in turn to assess differing levels of risk, is to consider what it should take for your organization to make a certain strategic move. One dimension of this is the “level of evidence required” in order to make the move. In other words, what amount of data and supporting information is necessary to understand the contours of the unknown and to shift from inaction to action?
Risk in the Known or Knowable. Since anything that can be called risky is measurable (e.g. via scenario modeling, financial forecasting, sensitivity analysis, etc.), it is by definition close enough to the standard and “knowable” business of today. Uncertainty is the realm outside of that: it’s “unknowable” and not measurable.
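To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of scenario modeling that makes risk "measurable" (the scenario names, probabilities, and payoffs are invented for illustration, not drawn from any client example). With risk, we can attach defensible probabilities and outcomes to each scenario and compute an expected exposure; with true uncertainty, there are no defensible probabilities to plug in at all.

```python
# Hypothetical scenario model: risk is measurable because we can
# assign a probability and a payoff to each plausible scenario.
scenarios = {
    # name: (probability, payoff in $MM) -- illustrative numbers only
    "best case":  (0.25,  30.0),
    "base case":  (0.55,  10.0),
    "worst case": (0.20, -15.0),
}

def expected_value(scenarios):
    """Probability-weighted payoff across all scenarios."""
    return sum(p * payoff for p, payoff in scenarios.values())

def downside_exposure(scenarios):
    """Probability-weighted loss across the loss-making scenarios only."""
    return sum(p * payoff for p, payoff in scenarios.values() if payoff < 0)

print(f"expected value:    {expected_value(scenarios):.1f}")   # roughly 10.0
print(f"downside exposure: {downside_exposure(scenarios):.1f}")  # roughly -3.0
```

The arithmetic itself is trivial; the point is that every number in it had to be knowable. An uncertain move gives you no basis for filling in that table, which is why the prescription later in this piece is to prototype and test rather than to model.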
In risky areas, the level of analysis we do – and how much time we take to try to understand the risk and make a decision – should vary. The graphic above frames the three levels of risk described in more detail below, along with examples from some of our clients of the types of projects that we see falling into these categories:
1. Risky, Without Precursor – These are moves for which there is no “precursor” or analog that we have seen from elsewhere. We really want to do our homework when opportunities fall here, as exposure (e.g. financial or reputational) is high, and we have very little experience with the move and/or supporting data in the form of others’ success stories, analogs, etc.
Typical initiative: Collaborative and/or ecosystem-driven solution development – The City of Columbus was awarded the U.S. Department of Transportation’s $40MM Smart City Challenge in June of last year. The competition involved submissions from 78 cities “to develop ideas for an integrated, first-of-its-kind smart transportation system that would use data, applications, and technology to help people and goods move more quickly, cheaply, and efficiently.” The solutions that were envisioned as part of the challenge were generally known (or at least had an identifiable development path) but required a complex ecosystem to deliver them. Columbus was awarded the prize because they created a compelling vision and because they were able to bring the right “burden of proof” to the USDOT that they would be able to pull it off – i.e. that they had ways to manage down the execution risk.
2. Risky, With Precursor – Exposure may be high, but we are highly confident about making the move. The argument for why the move makes sense should be reasonably straightforward.
Typical initiative: Sensor-based business models and data monetization – A major aerospace sub-system provider had long been an industry leader in developing high-tech industrial parts and products. In recent years, new competitors had been coming online, and the company knew they needed to innovate to stay ahead of the game. In one initiative, they began adding sensors to their aircraft and aerospace products, initially for predictive maintenance needs. As they began rolling this out, they realized the data could be valuable in many other ways and could actually create a whole new source of revenue from a new customer: pilots. Using this data, they decided to build a mobile platform that would allow pilots to view operating information from the parts and understand better ways to fly from point A to point B. The level of evidence they had was high – it was clear from many other industries that data could be used in this way to produce business value – but their confidence that it was the right decision for the brand was low at the beginning. They had to test it to find out. In this case, it was enormously successful, opening up a new business model and customer set that the company had never served before.
3. Low Risk, No Brainer – This is the domain of “just go do it,” perhaps because lots of solutions exist already and the opportunity for immediate economic value is high. There isn’t much reason to go study this to death.
Typical initiative: Robotic Process Automation (RPA) – RPA technology is essentially a software robot that has been coded to perform repetitive, rule-based, highly structured tasks. It has been around for a while, and there are extensive examples and case studies across industries, especially in banking. So, when JP Morgan decided to look into using softbots to automate higher-order processes within investment banking, the right decision seemed obvious. With growing pressure on margins, and with the industry’s success in automating structured tasks, extending automation to more demanding work seemed like a logical next step. It was clear this was where the industry was going, and it was just a matter of time before all competitors would be doing it. Choosing not to innovate seemed like the bigger risk in this situation.
Dealing with the Uncertainty Quadrant. This is the domain of the “unknowable.” Operating in this space, many companies spend lots of time running around collecting data to reduce risk, in an attempt to make the move more knowable. But if the action is truly uncertain, extensive research to lower your risk is just a waste of time.
The only way to consider a highly uncertain action is to “just go do it” – usually through prototyping and market testing – but in a way that minimizes financial or reputational exposure. Consider an old story about Palm Computing, a favorite of my friend Larry Keeley’s. As I have heard Larry tell it, the genesis story of Palm is rooted in a condition we are all too familiar with today: a low-level hypothesis that digital would matter when it came to being organized and connected, but with a high degree of uncertainty about how that would (and should) play out. This was a time of “spontaneous simultaneity” as various players worked with designs and technological solutions. The one who got it right (for a time) was the one who just did it.
Jeff Hawkins, one of the founders of Palm, epitomized the activity of prototyping. The (perhaps apocryphal) story is that in the very early days, Jeff would work in his garage to cut multiple pieces of balsa wood into organizer-shaped rectangles. He would load a bunch of those into his shirt pocket and carry them to meetings, sketching on each one in the moment something that occurred to him as being particularly helpful at the time. Contact entry, instant contact sharing, notes, calendar access, etc.: all started to appear on pieces of wood, gradually crafting an overall vision for the most important functionality to be built into the Palm. And unlike computers of the era, he discovered the criticality of instant-on functionality. To steal a phrase from the design world, the device ended up being “well-behaved” from the beginning because it was founded upon how people actually interacted with it. The rise and fall of Palm is a much longer story. But in the early days, Hawkins demonstrated the handling of uncertainty while minimizing exposure exquisitely.
As we carry these principles back into our organizations, discussion of whether something is risky or simply uncertain is almost “certainly” going to drift quickly towards the semantic. We should start training ourselves and our organizations to talk more about the level of evidence required (not to mention whether proof is even attainable) and level of confidence, and less about how risky something seems. With this approach, we might actually be able to start thriving in a world that is increasingly uncertain.