When companies develop new technologies, they can never be certain how the market will respond. That said, the future of a given technology is not as unforeseeable as it might seem. When I work with tech companies on crafting or refining their innovation strategy, I start with an exercise that helps them anticipate where the next big breakthroughs will—or should—be. Central to the exercise is an examination of the key dimensions on which a technology has evolved—say, processing speed in computing—and the degree to which users’ needs have been satisfied. This can give companies insight into where to focus their effort and money while helping them anticipate both the moves of competitors and threats from outsiders.
One of my favorite examples comes from the consumer electronics and recording industries, which competed on the basis of audio fidelity for decades. By the mid-1990s, both industries were eager to introduce a next-generation audio format. In 1996 Toshiba, Hitachi, Time Warner, and others formed a consortium to back a new technology, called DVD-Audio, that offered superior fidelity and surround sound. They hoped to do an end run around Sony and Philips, which owned the compact disc standard and extracted a licensing fee for every CD and player sold.
Sony and Philips, however, were not going to go down without a fight. They counterattacked with a new format they had jointly developed, Super Audio CD. Those in the music industry gave a collective groan; manufacturers, distributors, and consumers all stood to lose big if they bet on the wrong format. Nonetheless, Sony launched the first Super Audio players in late 1999; DVD-Audio players hit the market in mid-2000. A costly format war seemed inevitable.
You may be scratching your head at this point, wondering why you’ve never heard about this format war. What happened? MP3 happened. While the consumer electronics giants were pursuing new heights in audio fidelity, an algorithm that sacrificed a bit of fidelity in exchange for much smaller audio files was taking off. Soon after the file-sharing platform Napster launched in 1999, consumers were downloading free music files by the millions, and Napster-like services were sprouting up like weeds.
You might be inclined to think that Sony, Philips, and the DVD-Audio consortium were just unlucky. After all, who could have predicted the disruptive arrival of MP3? How could the consumer electronics giants have known that a format on a trajectory of ever-increasing fidelity would be overtaken by a technology with less fidelity? Actually, with the methodology outlined below, they could have foreseen that the next breakthrough would probably not be about better fidelity.
Understanding what’s driving technological developments isn’t just for high-tech firms. Technology—the way inputs are transformed into outputs, or the way products and services are delivered to customers—evolves in every market. I have used the three-step exercise described here with managers from a wide range of organizations, including companies developing blood-sugar monitors, grocery store chains, hospitals, a paint-thinner manufacturer, and financial services firms. It often yields an “Aha!” moment that helps managers refine or even redirect their innovation strategy.
Step One: Identify Key Dimensions
It’s common to talk about a “technology trajectory,” as if innovation advances along a single path. But technologies typically progress along several dimensions at once. For example, computers became faster and smaller in tandem; speed was one dimension, size another. Developments in any dimension come with specific costs and benefits and have measurable and changing utility for customers. Identifying the key dimensions of a technology’s progression is the first step in predicting its future.
To determine these dimensions, trace the technology’s evolution to date, starting as far back as possible. Consider what need the technology originally fulfilled, and then for each major change in its form and function, think about what fundamental elements were affected.
To illustrate, let’s return to music-recording technology. Tracing its history reveals six dimensions that have been central to its development: desynchronization, cost, fidelity, music selection, portability, and customizability. Before the invention of the phonograph, people could hear music or a speech only when and where it was performed. When Thomas Edison and Alexander Graham Bell began working on their phonographs in the late 1800s, their primary objective was to desynchronize the time and place of a performance so that it could be heard anytime, anywhere. Edison’s device—a rotating cylinder covered in foil—was a remarkable achievement, but it was cumbersome, and making copies was difficult. Bell’s wax-covered cardboard cylinders, followed by Emile Berliner’s flat, disc-shaped records and, later, the development of magnetic tape, made it significantly easier to mass-produce recordings, lowering their cost while increasing the fidelity and selection of music available.
For decades, however, players were bulky and not particularly portable. It was not until the 1960s that eight-track tape cartridges dramatically increased the portability of recorded music, as players became common in automobiles. Cassette tapes rose to dominance in the 1970s, further enhancing portability but also offering, for the first time, customizability—the ability to create personalized playlists. Then, in 1982, Sony and Philips introduced the compact disc standard, which offered greater fidelity than cassette tapes and rapidly became the dominant format.
When I guide executive teams through step one of the exercise, I emphasize the need to zero in on the high-level dimensions along which a technology has evolved—those that are broad enough to encompass other, narrower dimensions. This helps teams see the big picture and avoid getting sidetracked by narrow details. In audio technology, for example, recordability is a specific form of customizability; identifying customizability, rather than the narrower recordability, as a high-level dimension invites exploration of other ways people might want to customize their music experience. For example, they might value a technology that automatically generates a playlist of songs with common characteristics—and indeed, services like Pandora and Spotify emerged to do just that.
It’s important to identify dimensions at the optimal “altitude”—neither so low or narrow that they miss the big picture, nor so high or broad that they won’t offer adequately detailed insight about a specific technology. In the case of automobiles, for example, climate control may be a technology dimension, but it’s so narrow that it’s not the most useful one to study; examining the higher-level “comfort” dimension under which it falls will be more illuminating. By the same token, the sweeping “performance” dimension in automobiles is probably too broad a choice, because it includes speed, safety, fuel efficiency, and other dimensions where meaningful advances could be made. Even a product as simple as a mattress involves technology with multiple performance dimensions—such as comfort and durability—that are useful to consider separately.
Selecting dimensions to examine isn’t a strict science; it depends substantially on knowledge of your industry—and common sense. I usually ask teams to agree on three to six key dimensions for their technology. The exhibit “A Sampling of High-Level Technology Dimensions” lists those identified by workshop participants for their respective industries. Notably, some dimensions, such as ease of use and durability, come up frequently. Others are more specific to a particular technology, such as magnification in microscopes. And with rare exceptions, cost is an important dimension across all technologies.
A final step in this part of the exercise can add further insight about the identified dimensions and in some cases suggest future dimensions worth exploring. I ask team members to disregard cost and other constraints and imagine what customers would want if they could have anything. This sounds like it might unleash a flood of creative but impractical ideas. In fact, it can be highly revealing. Folklore has it that Henry Ford once said, “If I had asked people what they wanted, they would have said faster horses.” If any carmaker at the time had probed people about what their dream conveyance would provide, the answer probably would have been “instantaneous transportation.” Both consumer responses highlight that speed is a high-level dimension valued in transportation, but the latter helps us think more broadly about how it can be achieved. There are only limited ways to make horses go faster—but there are many ways to speed up transportation.
Most of the time this exercise indicates that people want further improvements in the key dimensions already identified. Sometimes, however, the exercise suggests dimensions that have not been considered. Would consumers want an audio device that could sense and respond to their affect? If so, perhaps “anticipation of needs” is another key dimension.
Step Two: Locate Your Position
For each dimension, you next want to determine the shape of its utility curve—the plot of the value consumers derive from a technology according to its performance—and establish where on the curve the technology currently sits. This will help reveal where the greatest opportunity for improvement lies.
For example, the history of audio formats suggests that the selection of music available has a concave parabolic utility curve: Utility increases as selection expands, but at a decreasing rate, and not indefinitely. When there’s little music to choose from, even a small increase in selection significantly enhances utility. Consider that when the first phonographs appeared, there were few recordings to play on them. As more became available, customers eagerly bought them, and the appeal of owning a player grew. Increasing selection even a little had a powerful impact on utility. Over the ensuing decades, selection grew exponentially, and the utility curve ultimately began to flatten; people still valued new releases, but each new recording added less additional value. Today digital music services like iTunes, Amazon Prime Music, and Spotify offer tens of millions of songs. With this virtually unlimited selection, most customers’ appetites are sated—and we are probably approaching the top of the curve.
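The diminishing-returns shape of such a curve is easy to sketch in code. Below is a minimal illustration assuming a saturating-exponential utility function; the functional form, the parameter values, and the function names are illustrative assumptions of mine, not data from the article.

```python
import math

def utility(selection, scale=1_000_000, max_utility=100.0):
    """Concave utility curve: value rises quickly when selection is small,
    then flattens as selection grows. The saturating-exponential form and
    all parameter values are illustrative assumptions, not measured data."""
    return max_utility * (1 - math.exp(-selection / scale))

def marginal_utility(selection, step=100_000, **kw):
    """Extra utility gained from adding `step` more recordings
    on top of an existing catalog of `selection` recordings."""
    return utility(selection + step, **kw) - utility(selection, **kw)

# Early on, each new batch of recordings adds substantial value...
early = marginal_utility(0)
# ...but at tens of millions of songs, the same batch adds almost nothing.
late = marginal_utility(50_000_000)
assert early > 100 * late
```

The exact curve matters less than its shape: any concave function with a flat top tells the same story, that investment near the top of the curve buys improvements customers barely notice.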
Now let’s consider the fidelity dimension, the primary focus of Super Audio CD and DVD-Audio. It’s likely that fidelity also has a concave parabolic utility curve. The first phonographs had awful fidelity: Music sounded thin and tinny, though it was still a remarkable benefit to be able to hear any recorded music at all. The early improvements in fidelity that records offered made a big difference in people’s enjoyment of music, and sales took off. Then along came compact discs. The higher fidelity they offered was not as widely appreciated—many people felt that vinyl records were good enough, and some even preferred their “warmth.” For most consumers, further improvements in fidelity provided little additional utility. The fidelity curve was already leveling out when Sony, Philips, and the DVD-Audio consortium introduced their new formats in the early 2000s.
Both formats offered higher fidelity, by certain technical measures, than the compact disc. For example, whereas CDs have a frequency range up to about 20,000 cycles per second, or 20 kHz, the new formats offered ranges that reached 50 kHz. That’s an impressive high end—but because human hearing tops out at about 20 kHz, only the family dog was likely to appreciate it. In 2007 the Audio Engineering Society released the results of a yearlong trial assessing how well subjects (including professional recording engineers) could detect the difference between Super Audio and regular CDs. Subjects correctly identified the Super Audio CD format only half the time—no better than if they’d been simply guessing.
Had the companies introducing the new formats created even a back-of-the-envelope utility curve for fidelity, they could have seen that there was little room for improvement that customers would appreciate. Meanwhile, even a cursory look at the portability curve would have suggested opportunity on that dimension. Sony, of all companies, should have recognized the importance of portability in the evolution of audio formats. Back in 1979, the company had introduced one of the most successful consumer electronics products ever created—the Sony Walkman. The device, a lightweight cassette player that could fit in one hand, was a runaway hit not because it cost less or offered greater fidelity or selection than other formats but because it was portable. Similarly, MP3 was successful because it made music much more portable; MP3 files were small enough to be easily stored on a computer and shared with friends.
Fast-forward to today. Although music lovers now take portability and selection for granted, there’s still lots of room for improvement on the customizability dimension. Pandora offers primitive customizability (you can create a channel where all the songs sound more or less like Taylor Swift), but artificial intelligence may get us much further up that utility curve in the future. It’s plausible (likely, in fact) that a program could identify elements of your preferred music style and then create music for you. Perhaps it would produce an endless stream of “Beatles songs,” nearly indistinguishable from the real thing but not written or played by the Beatles (or by any human performer). Machine-learning programs already compose music for advertisements and video games, and in 2016 Sony released two songs composed by an artificial intelligence system called Flow Machines. The first, “Daddy’s Car,” is reminiscent of the Beatles, and the second, “Mr Shadow,” emulates the styles of Duke Ellington, Irving Berlin, and Cole Porter. While neither quite hits the mark, both suggest what’s to come—and where music companies might sensibly invest.
Parabolic utility curves like those for audio fidelity and selection show that for some technology performance dimensions, small improvements can have a dramatic impact on utility from the start. Of course, not all technologies follow such utility curves.
The utility curve for speed reveals that the point at which improvements in a dimension are of little value can change with shifts in the environment or in enabling technologies. Forty miles per hour probably seemed more than fast enough, for example, when the Model T was introduced, since most roads at the time weren’t paved. As roads improved and highways appeared, the top speeds desired by customers shifted upward. The move to autonomous vehicles may make even higher speeds safe, comfortable, and desirable. If so, the flat top of the current utility curve for speed may slope upward once again.
Step Three: Determine Your Focus
Once you know the dimensions along which your firm’s technology has been (or can be) improved and where you are on the utility curves for those dimensions, it should be straightforward to identify where the most room for improvement exists. But it’s not enough to know that performance on a given dimension can be enhanced; you need to decide whether it should be. So first assess which of the dimensions you’ve identified are most important to customers. Then assess the cost and difficulty of addressing each dimension.
For example, of the four dimensions that have been central to automobile development—speed, cost, comfort, and safety—which do customers value most, and which are easiest or most cost-effective to address? On the speed dimension, cars are already at the top of the utility curve, and top speed is relatively difficult and expensive to increase: Higher speed requires more power, which requires a bigger engine, which reduces fuel efficiency and increases cost. Comfort is probably the easiest dimension to address, but is it as important to consumers as safety? And how much does it cost to improve performance on these dimensions?
Tata Motors’ experience with the Nano is instructive. The Nano was designed as an affordable car for drivers in India, so it needed to be cheap enough to compete with two-wheeled scooters. The manufacturer cut costs in several ways: The Nano had only a two-cylinder engine and few amenities—no radio, electric windows or locks, antilock brakes, power steering, or airbags. Its seats had a simple three-position recline, the windshield had a single wiper, and there was only one rearview mirror. In 2014, after the Nano received zero stars for safety in crash tests, analysts pointed out that adding airbags and making simple adjustments to the frame could significantly improve the car’s safety for less than $100 per vehicle. Tata took this under advisement—and placed its bets on comfort. All 2017 models include air-conditioning and power steering but not airbags.
To assess which technology investments are likely to yield the biggest bang for the buck, managers can use a matrix like the one in the exhibit “How to Improve Glucose Monitoring?” First, for the technology being examined, list the performance dimensions you’ve identified as most important. (For cars, for example, that might be cost, safety, and comfort.) Then score each dimension on a scale of 1 to 5 in three areas:
- Importance to customers (1 = “not important” and 5 = “very important”)
- Room for improvement (1 = “minor opportunity” and 5 = “major opportunity”)
- Ease of improvement (1 = “very difficult” and 5 = “very easy”)
The exhibit shows a manufacturer’s scores on four dimensions of blood-glucose monitors: reliability, comfort, cost, and ease of use. The team identified reliability as most important to customers; having accurate glucose measures can be a matter of life and death. However, existing devices (most of which require a finger prick) are already very reliable and thus scored low on the “room for improvement” measure. They are also fairly easy to use and reasonably low in cost—but they are uncomfortable. Comfort is highly valued yet has much room for improvement. Both comfort and ease of use are moderately difficult to improve (scoring 3s), but because comfort is more important to customers and has more room for improvement, this dimension received the higher total score. So comfort became the focus for innovation efforts; the company began to develop a patch worn on the skin that would detect glucose levels from sweat and would send readings via Bluetooth to the user’s smartphone.
Notably, with a simple manipulation, the weight of the matrix scores can be adjusted to reflect any organization’s particular situation. For example, if a company is cash-strapped or under other duress, it may want to prioritize easy-to-improve dimensions rather than pursue those that have the greatest potential but are harder to address. If the scale for ease of improvement is switched to 1–10 (while the other scales are kept at 1–5), ease-of-improvement scores can be expected to roughly double and thus have a greater influence on total scores. Alternatively, a company seeking breakthrough innovation might extend the scale for importance to buyers, the scale for room for improvement, or both.
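The matrix arithmetic itself is simple enough to sketch in code. In the illustration below, the dimension names come from the glucose-monitor example, but the scores are placeholder assumptions of mine (the article’s actual exhibit values are not reproduced here), and the optional weights mimic the re-scaling idea just described.

```python
# Placeholder scores -- illustrative assumptions, NOT the actual numbers
# from the "How to Improve Glucose Monitoring?" exhibit.
glucose_scores = {
    #  dimension      (importance, room for improvement, ease), each 1-5
    "reliability": (5, 1, 2),
    "comfort":     (4, 4, 3),
    "cost":        (2, 2, 3),
    "ease of use": (3, 2, 3),
}

def rank_dimensions(scores, weights=(1.0, 1.0, 1.0)):
    """Total each dimension's three scores, optionally weighting the
    criteria. Setting the ease weight to 2.0 mimics switching the
    ease-of-improvement scale from 1-5 to 1-10, as described above."""
    totals = {
        dim: sum(s * w for s, w in zip(triple, weights))
        for dim, triple in scores.items()
    }
    # Highest total first: the dimension to prioritize for innovation.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_dimensions(glucose_scores)
# With these placeholder scores, comfort ranks first (4 + 4 + 3 = 11).
cash_strapped = rank_dimensions(glucose_scores, weights=(1.0, 1.0, 2.0))
```

A spreadsheet does the same job, of course; the point is only that once the scores are agreed on, re-ranking under different strategic weightings is a one-line change.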
Similarly, a company’s competitive positioning may affect which technology dimensions it emphasizes. For example, safety may be a key differentiator for an automaker such as Volvo, while speed (or, more broadly, driving performance) may be the differentiator for BMW. So although the companies make the same technology (cars), they market to different customer segments and thus emphasize different dimensions.
Shifting the Focus
The three-part exercise I recommend can help managers broaden their perspective on their industry and shift their focus from “This is what we do” to “This is where our market is (or should be) heading.” It can also help overcome the bias and inertia that tend to keep an organization’s attention locked on technology dimensions that are less important to consumers than they once were. For example, at a large financial services firm I worked with, data-transfer speed had long been a key dimension where the leadership expected to see regular improvements. At its founding, the firm had developed technology to deliver financial data more rapidly than anyone else could. Being faster than competitors was, and remained, central to the company’s strategy and a matter of organizational pride. However, when I used this exercise with the firm’s managers, they realized that concentrating on data-transfer speed (which was now in the nanoseconds) was diverting their attention away from technology dimensions where there was greater opportunity to make improvements that customers would actually value.
For this firm, data-transfer speed had become what fidelity was to Super Audio CD: It could be improved upon year after year, but it offered diminishing utility to users. Furthermore, speed no longer provided a competitive advantage; technology to move data quickly had become ubiquitous and commoditized. The firm’s proprietary algorithms for transforming raw data into strategically useful information were far more defensible. The exercise revealed much greater opportunity for delivering this information on demand. Following the workshop, a group of managers made plans to shift resources into ensuring that their most highly used and differentiated analytics-based products could be effectively delivered on phones and tablets. The result was an award-winning mobile application that is now among the top three financial-services applications worldwide.
New product ideas are not the only—or even the most important—outcome of this exercise. Perhaps more valuable is the big-picture perspective it can give managers—shedding new light on market dynamics and the larger-scale or longer-term opportunities before them. Only then will they be able to lead innovation in their industries rather than scramble to respond to it.
Melissa Schilling is a professor of management and organizations at New York University Stern School of Business. She is the author of Strategic Management of Technological Innovation (McGraw-Hill Education, 2017), now in its fifth edition.