Discovery-Led Decision Making
We have seen in the preceding chapters that we find ourselves in a world that seems to be getting more confusing and turbulent, as characterized by the term VUCA – volatile, uncertain, complex and ambiguous. However, we have also observed that organizations and individuals are not well placed to cope with – let alone capitalize upon – the resultant disruptions to their environment. We have compared two contrasting ways in which organizations mobilize themselves to adapt: top-down and middle-out. In this chapter, we’ll describe how we advocate that organizations and their leaders should respond to these challenges in terms of strategic decision making, using our discovery-led decision-making framework (Shenoy, 2017). The subsequent three chapters explore the different modes of this framework in greater detail.
The turbulent, confusing context we’ve described presents decision makers with a number of dilemmas to overcome. Some of these are not new; the psychological processes by which we sift and weigh information have evolved over millennia, and the inertial nature of established organizations is well documented. Today we have the benefit of a growing body of research into individual and organizational behaviour. An awareness and understanding of this knowledge can help to highlight and avoid the decision-making pitfalls into which we might otherwise stumble.
No process is ever perfect, especially if applied rigidly and without reflection. But are there ways in which the problems already discussed can be minimized or even rectified? The emphasis of contemporary management practice has been on helping businesses deal with a stable and easily quantifiable reality, whereas the challenges outlined earlier demand a more fluid, dynamic approach tailored to addressing the unknown. Here we examine what characteristics such an approach needs to be able to tackle VUCA environments.
What does a more contingent approach to decision making look like?
Tim Lasseter has pointed out that the father of scientific management, Frederick W Taylor, misused the term ‘scientific’ (Lasseter, 2014). Taylor conflated the concepts of science and arithmetic, desiring to quantify everything in the spirit of reductionism. This approach works well in predictable, well-understood contexts such as mature markets and stable, high-volume operations. However, few meaningful indicators exist for measuring the situations we’ve described in earlier chapters. A further problem with extreme reliance on analytical models is that there may not be sufficient relevant data available if your organization is breaking new ground with a pioneering product, service or business model. Devoting effort to measuring meaningless or immeasurable indicators offers an illusion of comfort but does little to grapple with the unknown or validate critical assumptions.
For many organizations, strategic planning occurs in annual or sometimes biennial cycles. Some organizations take an even longer perspective. The global charity UNICEF operates on a three-year strategic planning cycle. The multinational technology conglomerate Honeywell uses a strategic planning process with a five-year outlook, combined with annual operating planning that provides a road map for the forthcoming year.
One of the criticisms levelled at traditional planning models is that they are slow and cumbersome. Could it be that, in today’s dynamic environment, the rhythm of strategy planning needs to change to reflect the fluid and evolving climate? The music seems to have changed from a slow, ceremonial march to a whirling, spinning tarantella dance.
In the 1970s and 1980s, General Electric Company (GE) was seen as the master of corporate strategic planning. But three decades ago, in 1988, Larry Bossidy, then Vice-Chairman of GE, was heard to say – much to the astonishment of his audience – that the company had abandoned strategic planning. He went on to explain that General Electric had not set aside strategic thinking and strategic management. When Jack Welch succeeded Reginald Jones as CEO of GE, he inherited a hierarchical and overly bureaucratic, stilted and formalized process that hindered rather than helped the formulation of insightful thinking. Welch replaced this with a more streamlined and fluid system. Instead of an annual process, Welch created the Corporate Executive Council, which met quarterly and was designed to cut across the mountains of aggregated data and get to the heart of important issues. It provided a forum for an exchange of ideas and critical reflection upon business plans.
Centralized formal strategic planning processes can often act as a brake on business units. The need to seek and wait for approval from a higher authority means that opportunities may be lost. Indeed, US management academic Gary Hamel has called the strategic planning process in Fortune 500 companies ‘the last bastion of Soviet-style central planning’ (Hamel, 1999). Facing a rapidly mutating market and several quarters of declining revenues, IBM recognized this and devolved strategic planning control to the business units, in an attempt to allow the rhythm of decision making to speed up in time with the music.
Johnson & Johnson is a multinational corporation manufacturing medical devices, pharmaceuticals and consumer goods. It too has seen the benefits of decentralized management. Each operating company is responsible for its own strategic plans. The management team meet with board members throughout the year to discuss strategy. As its own website says, ‘this interactive, on-going dialogue provides our Directors with insight into the activities and direction of the Company’s business’.
In 2007, Nokia had more than 37 per cent of the global mobile phone market. It was by far the dominant player, with a market share over two and a half times that of its nearest rival, Motorola. The story of Nokia’s demise is well known and multifaceted. Today the Nokia organization does not have a stand-alone presence in the mobile phone market. A contributory factor was that the company failed to realize that the tempo had changed. When it needed the flexibility and manoeuvrability of a downhill skier, it had the agility of a supertanker.
As we have seen, part of the problem is that management techniques, developed for the more stable, regimented business environment of the 20th century, aren’t well suited to the emerging world. For example:
- As Clayton Christensen of Harvard Business School (Christensen, Kaufman and Shih, 2008) and Columbia University’s Rita McGrath (McGrath and MacMillan, 1995) argue, financial tools such as net present value calculations systematically undervalue potential innovations.
- Conventional marketing approaches such as survey-based market research are ill suited to predicting consumer responses to novel products and services. For example, the initial survey research for the Sony Walkman in the late 1970s indicated that customers did not wish to listen to music on the move.
- Classic project management tools don’t cope well with situations where objectives are initially poorly understood.
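The net-present-value critique in the first bullet above can be made concrete with a small worked example (all figures are hypothetical, not drawn from the studies cited). An innovation whose payoffs arrive only in later years can look attractive at a modest discount rate, yet worthless at the high hurdle rate typically demanded of risky projects:

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] occurs now (year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical innovation: 100 invested now, payoffs only in years 4-6.
flows = [-100, 0, 0, 0, 60, 80, 100]

print(round(npv(0.10, flows), 1))  # ≈ 47.1: attractive at a 10% rate
print(round(npv(0.30, flows), 1))  # ≈ -36.7: rejected at a 30% hurdle rate
```

The same project is accepted or rejected purely on the choice of discount rate, which is why deferred, uncertain payoffs – the signature of genuine innovation – tend to be systematically penalized.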
In Chapter 1, we explained why conventional strategy techniques aren’t adapted to this brave new world. Martin Reeves and his co-authors, of strategy consulting firm Boston Consulting Group, argue that different approaches to strategy fit well with different environments (Reeves, Haanæs and Sinha, 2015). Specifically, classic strategy evolved for – and is well suited to dealing with – relatively stable shifts in markets and consumer segments.
Instead of obsessing about numbers alone, a truly scientific approach to management would apply a hypothesis-driven approach designed to eliminate misguided mental models of the organization. Such an approach would trade an over-reliance on analysis for a bias towards evidence, through a process that echoes the scientific method by testing and either disproving or validating assumptions about a business model. This tallies with our comments in Chapter 3 about Jeanne Liedtka’s work: namely, that a vital quality of strategic thinking is the ability to be hypothesis-driven, meaning the ability to nurture challenge and creativity by considering multiple perspectives (Liedtka, 1998).
Jeanne Liedtka and Tim Ogilvie also argue that there are ‘Six things managers know… that are dead wrong’ (Liedtka and Ogilvie, 2011). Their maxims – such as ‘If the idea is good, then the money will follow’ and ‘Measure twice, cut once’ – are aimed at applying design thinking to product and service development, but are also applicable to strategic decision making. Such principles can give shape to a reformed approach to strategy formulation and execution.
A ‘discovery-led’ approach to decision making, grounded in the scientific method, needs to draw on principles inferred from design thinking, agile software engineering and the lean start-up movement. The core premise of a discovery-led approach, better suited to a VUCA environment, is that how you make decisions should be based on the balance between what you know, what you don’t know and what you think you know (ie your assumptions).
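As a caricature only, this core premise can be sketched in a few lines of Python. The mode names echo the framework’s experiment and scale modes, but treating knowns, unknowns and assumptions as simple counts is a hypothetical simplification of ours:

```python
def choose_mode(knowns: int, assumptions: int, unknowns: int) -> str:
    """Illustrative sketch: pick a decision-making mode from the balance
    between what we know and what we don't (or only think we) know."""
    if knowns > assumptions + unknowns:
        return "scale"       # knowledge dominates: spread what is proven
    return "experiment"      # assumptions and unknowns dominate: run tests
```

In practice this balance is a matter of judgement rather than arithmetic; the sketch simply makes the premise explicit.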
‘Discovery-led’ means reducing unknowns and assumptions with a bias to action
Strategies are the organizational analogues of scientific theories, and strategic decisions are equivalent to specific hypotheses. Managers, however, don’t have the luxury of performing tightly controlled experiments in an obsessive pursuit of the nature of truth. They need to guess what might work in the future to frame an organization’s strategic direction. This future can’t be created (or even discovered) simply by examining the past, and answers to these tough questions cannot be found in an analysis of numbers alone. At the same time, managers can’t guide an organization on the basis of a set of wild guesses. A revised approach to strategic decision making requires that ‘good enough’ analysis is confronted with messy reality through rapid, well-designed experiments.
Experiment: how do we learn what we need to know?
So when what we know is eclipsed by what we don’t know and what we think we know (but might not be true), we should experiment. The purpose of this mode of the discovery-led decision-making framework is to develop and refine a range of options that reduce the unknowns associated with the situation confronting us, through a number of ‘tests’ that (in)validate key assumptions using prototypes, demonstrations or field trials.
During the experiment mode the decision-making team, either with cross-functional internal partners (such as engineers, developers, programmers and marketing teams) or collaborating with external partners, develops one or more ‘tests’ that will address the problems or opportunities identified through analysis and experience. The techniques used at this stage involve the development of business experiments designed to validate or refute core assumptions in the most resource-efficient way, including consumer observation in the field, surveys and focus groups, role-play and co-creating products and services with partners and consumers.
In the spirit of avoiding paralysis through over-analysis, these ‘tests’ need to start very simply, with complexity and sophistication emerging through iteration. While each test may be individually simple, a series of tests build upon each other to weave together a rich tapestry of data about what actually happens. Furthermore, to be useful, tests need to be both ‘falsifiable’ (ie they are designed so that they go wrong if the core assumptions are incorrect) and time-boxed (ie they have clearly defined endpoints). Failed pilots are unusual in most organizational environments for a variety of reasons (eg political). However, breakthrough insights are frequently hidden within failed experiments, which means that reflecting on failure is an essential component of learning from experience.
The ‘tests’ should be discovery-led: designed to (in)validate decision makers’ key assumptions and flush out unknowns. Discovery manifests itself as validated or refuted assumptions, with unknowns converted into knowns. This process of discovery will gradually increase the ratio of knowns to unknowns and assumptions – in other words, learning.
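A minimal sketch of what such a falsifiable, time-boxed ‘test’ might record, in Python (the field names and the example content are assumptions of ours, not part of the framework):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DiscoveryTest:
    """One falsifiable, time-boxed test of a key assumption."""
    assumption: str            # the belief this test could refute
    failure_signal: str        # what we will observe if the assumption is wrong
    ends_on: date              # hard endpoint: the test is time-boxed
    validated: Optional[bool] = None  # unknown until the endpoint is reached

# Hypothetical example: a field trial for a new service feature
trial = DiscoveryTest(
    assumption="Commuters will pay for offline playlists",
    failure_signal="Fewer than 5 per cent of pilot users activate the feature",
    ends_on=date(2025, 6, 30),
)
```

Because the failure signal is stated up front, the test can go wrong if the assumption is wrong; because the endpoint is fixed, it cannot drift on indefinitely.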
Think big; start small; expand what’s proven
In established organizations, there’s often a hurdle of minimum value and attractiveness that an opportunity has to clear in order to be pursued. However, transformational strategies – especially novel ones – often start small and build momentum incrementally. The problem is that our strategy frameworks don’t acknowledge that we cannot know in advance whether new ideas are worth anything – truly radical ideas may lie beyond the current understanding of even loyal customers. Given the increasingly fragmentary and messy nature of the changes in the environment, it is becoming ever rarer that a generic strategy can be applied wholesale to an opportunity to generate the scale of returns demanded by incumbent firms. Each combination of product or service, customer base and societal context is unique.
At an INSEAD conference in 2014 investigating why Nokia lost the smartphone battle, the former Nokia CEO Olli-Pekka Kallasvuo recalled the competitive environment during his tenure from 2006 to 2010. He recounted that nowhere in business history has a competitive environment changed so much as it did with the convergence of several industries – to the point that no one knew what to call the industry any more. Mobile telephony blended with the mobile computer industry, the internet industry, the media industry and the applications industry – to mention a few – and today they’re all rolled into one, he reflected.
So while it is commendable to have a grand end vision, it is better to complement it with small initiatives to test whether our assumptions about an opportunity are more than just fantasy. As the discovery process unfolds, and the ratio of knowns to unknowns and assumptions increases, the decision-making process can shift emphasis from searching for solutions to scaling those that are proven to resolve the key challenges being confronted.
Scale: how do we do more of what works?
When we are content that what we know about a situation comfortably outweighs what we don’t know – and that we’ve validated our most critical assumptions – then we are ready to scale what we have proven to work. The purpose of the scale mode in the discovery-led decision-making framework is to spread proven solutions tested in the experiment mode, based on knowns, validated assumptions and tolerable unknowns. What ‘scaling’ specifically means depends on the context: it might involve increasing the volume of production for a new product that’s proven successful in market testing, or spreading a particular organizational practice or mindset from a couple of departments to the whole enterprise.