New Directions in Research on Dominant Design, Technological Innovations, and Industrial Change
Johann Peter Murmann Kellogg Graduate School of Management
Northwestern University Evanston, IL 60208
Phone: 847-467-3502 Fax: 847-491-8896 E-mail: [email protected]
Version 2.0
I. Introduction
Organization theorists, strategy scholars, economists, and historians of technology
have all highlighted the powerful role of technology in shaping industrial dynamics and
firm performance. Mastering the "black box of technology" represents a crucial
organizational capability for succeeding in competitive markets. It is by now a well
established proposition that technological change is one of the prime triggers of
organizational change (Nelson and Winter, 1982; Pavitt, 1984; Tushman and Anderson,
1986; Metcalfe and Gibbons, 1989; Henderson and Clark, 1990; Freeman and Soete,
1997). Schumpeter (1950) long ago highlighted that technological change is a double-edged sword. On the one hand, technological change creates vast opportunities for firms
to improve existing products or to design entirely new products. On the other hand,
technological change can destroy the usefulness of existing products and thereby threaten
the livelihood of individuals and firms tied to the old technology. Scholars following the
inspiration of Schumpeter have tried to understand in greater detail how this process of
"creative destruction" inherent in technical change shapes the fate of firms, populations
of firms, and entire nations (Rosenbloom and Christensen, 1994; Hannan and Freeman,
1989; Porter, 1990). Time and again studies tracing the evolution of technologies over
long periods of time have shown that technical change is a highly unpredictable process
(Rosenberg, 1982). Notwithstanding, scholars working in different academic disciplines
have documented characteristic patterns of innovations and formulated illuminating
models of technical evolution. All models essentially have in common that they attempt
to couple characteristic patterns of technical evolution with changing levels of
uncertainties, exploiting the fact that the degree of uncertainty inherent in the process is
not uniform over time.
In organization theory, the technology cycle model and its concept of a dominant
design has received a great deal of attention and has stimulated important empirical
research over the last decade. As the concept of a dominant design has taken the center
stage in much empirical work linking technological and organizational change (Anderson
and Tushman, 1990; Utterback and Suárez, 1993; Utterback, 1994; Suárez and Utterback,
1995), a number of conceptual puzzles and difficult empirical issues have surfaced in the
literature. While in some writings the term "dominant design" is applied to a total
technological system (Abernathy, 1978; Van de Ven and Garud, 1993; Suárez and
Utterback, 1995; Iansiti and Khanna, 1995), in other writings the term is applied to
components of systems (Rosenkopf and Tushman, 1994; Khazam and Mowery, 1994;
Miller et al., 1995). Some authors even apply the term "dominant design" in the same
paper to total systems and components of systems without explaining whether the
concept applies to a total system and its components in the same way (Anderson &
Tushman, 1990; Utterback and Suarez, 1993). A second difficulty in interpreting the
body of research findings on dominant designs arises as authors frequently appear to
investigate dominant designs at the level of a total system but then describe a dominant
design in terms of the characteristics located at a particular component (Anderson and
Tushman, 1990; Utterback and Suarez, 1993; Baum, Korn and Kotha, 1995). Trying to
compare research findings across studies is also made difficult because various writers
differ in regard to the level of detail at which they define dominant designs. Some studies
define dominant designs in terms of general technological principles (see for example,
Miller et al., 1995; Rosenkopf and Tushman, 1994). Other studies, in contrast, define
dominant designs in terms of specific product names (Abernathy and Utterback, 1978;
Teece, 1986; Anderson and Tushman, 1990; Utterback and Suarez, 1993).1 There also
seems to be disagreement about how often a dominant design can emerge during the
evolution of a product class. In their analysis of the evolution of six product classes,
Suárez and Utterback (1995) identify only one dominant design for the entire life-span of each
product class. Anderson and Tushman (1990), on the other hand, find a periodic
emergence of new dominant designs during the life-span of four other product classes.
Finally, writers on dominant design disagree about the range of products to which
the dominant design theory applies. Authors like Anderson and Tushman (1990) suggest
that the theory applies to all technologies that evolve without interference of patent rights.
Other writers take a much narrower view and see the theory as applicable only to assembled
products (Abernathy and Utterback, 1978; Teece, 1986; Nelson, 1995; Suarez and
Utterback, 1995). Many other authors are simply silent on the boundary conditions of the
theory (Van de Ven and Garud, 1993; Sanderson and Uzumeri, 1995; Baum, Korn and
Kotha, 1995).
Academic researchers, R&D managers, and public policy makers who want to use
the dominant design theory currently face a great variety of definitions and analytical
approaches that make it difficult to build new research efforts on the existing literature. In
the absence of uniform definitions of the theory's major concepts, empirical
1The Productivity Dilemma (1978) has only Abernathy on the cover page. However, the chapter that sets forth the theoretical model of dominant designs was coauthored by Abernathy and Utterback. Similarly, Utterback helped to clarify the model in the implications chapter. This is why we cite The Productivity Dilemma when we refer to the Abernathy-Utterback model.
studies are unlikely to lead to an integrated body of findings. The purpose of this paper
is to help remedy this situation. Our strategy is twofold: We first critically review the
literature on standardization not only in organization theory but also in neighboring
disciplines to show the importance of dominant designs in writings on technical evolution
across many fields. From this broad review we synthesize a conceptual framework that
can help integrate the existing literature on dominant designs in organization theory. The
goal of this essay is to make some progress in removing potential confusion and to solve
some of the analytical puzzles that make it difficult for academic researchers to design
more cumulative studies. By studying the literature around the dominant design concept,
we also want to provide R&D managers and public policy makers with an example of
how to evaluate the adequacy of their tool box of concepts for analyzing technological
dynamics.
The essay proposes that technological evolution proceeds in the form of a nested
hierarchy of technology cycles. We argue that a hierarchical model of technical evolution
gains tremendous analytical power when linkages between subsystems are viewed as
subsystems in their own right. The model clarifies why innovations leading to order-of-
magnitude changes in performance can originate at any level of the physical hierarchy,
and helps remove the confusion in the existing literature.
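The hierarchical view proposed here can be made concrete in code. The following is a minimal, purely illustrative sketch (the class and field names are our own, not part of the model's formal apparatus) of a design hierarchy in which linkages between subsystems are modeled as subsystems in their own right, so that every node, down to a riveting technique, can host its own technology cycle:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Subsystem:
    """A node in the nested design hierarchy. Linkages between
    subsystems (e.g. riveting) are modeled as Subsystem nodes
    themselves, so they can host their own technology cycles."""
    name: str
    dominant_design: Optional[str] = None  # None = still in an era of ferment
    children: list["Subsystem"] = field(default_factory=list)

    def ferment_sites(self) -> list[str]:
        """Collect, at every level of the hierarchy, the nodes that
        still lack a dominant design -- the candidate sites where
        radical innovation is being contested."""
        sites = [] if self.dominant_design else [self.name]
        for child in self.children:
            sites += child.ferment_sites()
        return sites

# A toy slice of the airplane hierarchy discussed in Section II.
airplane = Subsystem("airplane", "engine-forward, tail-aft monoplane", [
    Subsystem("propulsion", None, [          # piston vs. jet still contested
        Subsystem("turbine alloy", "Nimonic 80"),
    ]),
    Subsystem("wing-fuselage linkage", "100-degree flush rivet"),
])

print(airplane.ferment_sites())  # -> ['propulsion']
```

The point of the sketch is that "dominant design" is a property of a node at any level of resolution: the overall architecture can be settled while a subsystem (here, propulsion) remains in ferment, and vice versa.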
II. An Illustration of the Phenomenon of Technical Change
To appreciate the phenomenon of dominant designs, consider the evolutionary history of
passenger airplanes.
The dream of flying is as old as humanity itself, but the first successful flying machine was constructed only in 1903 by the Wright brothers. Human flight was finally made possible by two developments: Abandoning the idea of imitating birds in a flying machine, designers switched to a principle of operation in which a fixed wing would provide lift and a propeller would provide forward thrust (Vincenti, 1990). When engineers were also able to make the internal combustion engine, originally developed for automobiles, light enough for an airplane, the Wright brothers were at last in a position to keep their flying machine in the air for more than 10 seconds and travel 120 feet. Two years later the Wright brothers designed an improved airplane that carried one person (the pilot) for 24 miles at a speed of 35 miles per hour. Over the last ninety years, airplanes have advanced dramatically along three crucial performance dimensions: speed, operating cost per passenger, and flying safety. Today’s largest commercial airplane, the Boeing 747-400, can cruise at over 600 miles per hour, carry 455 passengers over more than 7000 miles, and has a failure rate that
is lower than that of any other transportation technology. Tracing new features from one design to the next reveals that aeronautical engineers continuously introduced design changes ranging from improved nuts and bolts to a very different shape and organization of the entire artifact. Sometimes these changes were radical (both in terms of the technological principles used and the additional performance levels achieved), but more often they were incremental in nature. For analytical purposes it is useful to view an airplane as the union of a number of essential functions: propulsion, lifting, landing, and control subsystems, a passenger compartment, and finally a mechanism for linking all these different functional domains to produce a well-designed overall flying system. During the infancy stage of the airplane, designers experimented with configuring the major components of an airplane in different ways. The purpose of changing the arrangement of fuselage, wings, elevating and steering rudders, engine, propeller, and landing gear was to bring about stable, controllable, and efficient flight. The first successful airplane of the Wright brothers had two wings and was powered by a twelve-horsepower, four-cylinder internal combustion engine driving two propellers. With the exception of the engine, the airplane was made out of wood and fabric. The evolution that transformed this early design into a contemporary jet airliner proceeded through cycles of variation, selection, and retention at all levels of the physical artifact. Here is a selection of innovation episodes that reveals the emergence of dominant designs at the system, subsystem, and basic component levels:
Early variation of systems architecture:
At the highest level of the physical artifact, the architecture of the overall body, aeronautical engineers experimented with a great many design alternatives before a particular configuration emerged as the dominant design for a period of time. Trying to make airplanes more controllable (so that pilots could fly curves and travel over longer distances), the Wright brothers and other designers experimented with changing the size of the various main components and placing them in different relations to one another. During this process, designers not only built monoplanes with either low- or high-mounted wings but also created double- and triple-wing airplanes. In some designs, the propeller was placed in front of the wings facing forward; in others it was placed behind the wings facing backwards. To achieve more stability and greater range, some designers built airplanes with two propellers powered by independent engines; others tried to perfect the airplane configuration with a single engine. After designers had experimented with and learned about the advantages and disadvantages of the various configurations, the engine-forward, tail-aft biplane had become the dominant design by World War I, and designers typically took it as the starting point in their efforts to create better airplane designs (Vincenti, 1990). As engines became more powerful, designers switched from the biplane to the single-wing configuration. Figure 1 shows how, after the emergence of the engine-forward, tail-aft monoplane as the dominant design for the overall configuration, engineers focused on experimenting with
small variations of the dominant design to find the most aerodynamic airplane shape as well as on improving the individual functional domains.
Evolution of propulsion function:
The propulsion subsystem of the Wright brothers’ airplane consisted of an internal combustion engine and two wooden propellers. To improve the bite of propellers, a large variety of shapes, shown in Figure 2, was tested. The historical record does not indicate whether a particular shape became the dominant design. However, metal had replaced wood as the dominant material for propeller blades by the 1930s. Trying to scale up propulsive power, engineers examined the effects of mounting up to eight engines on the airplane. Around 1926-27, the three-engine approach embodied in the Ford Tri-Motor became the dominant design for commercial airplanes, until the appearance of the DC-3 ushered in a period of two-engine designs. A crucial innovation in raising the performance of individual combustion engines occurred in a low-level component. After testing a number of different materials, engineers converged on sodium-cooled exhaust valves as the dominant design for this component of the motor cylinders in the early 1930s. A dramatic increase in engine performance was also achieved by adding lead to the engine fuel. The outcome of much experimentation with different lead levels was a 90-octane standard that remained in existence until 1945, when it was replaced by a 100-octane standard (Hanieski, 1973). When theoretical work in aeronautics in the 1920s predicted that it was possible to travel at least twice as fast as previously assumed, designers looked for a propulsion technology that would not have the speed limitations of propellers. The German design community experimented with a number of different propulsion concepts: rockets, controlled bomb explosions, pure jets, and turbine jets (Jewkes, Sawers and Stillerman, 1961; Constant, 1980). From these variations, the turbine jet (turbojet) engine emerged as the most viable option.
Initially, turbojets had many fewer parts than traditional piston engines, but over the course of fifty years jet designers have added so many parts that jet engines have again become very complex subsystems. Within turbojet technology, a large number of alternative architectures were tried out until the ducted-fan type axial-flow turbojet became the dominant design, largely because of its fuel efficiency (Constant, 1980). For the turbojet to become a viable technology, materials scientists had to mix hundreds of different alloys to find one that could withstand the enormous heat of a jet engine. Engineers found the Nimonic 80 alloy to be the most effective heat-resistant material for constructing a gas turbine, and it became the dominant material. A similar process of search, development, and experimentation with a large variety of alloys led to the selection of alloy G. 18B for the rotor and rim of the turbine (Hanieski, 1973). Without solving these "material" bottlenecks, the turbojet would not have replaced the piston engine. To successfully incorporate jet engines into airplane technology, it was necessary to make a number of complementary changes in other subsystems. For example, airframes had to be made much stronger in order to withstand the higher levels of stress created by jet engines. Furthermore, when jet engines finally became
commercially viable in the mid-1950s after a twenty-year development period, the runways of airports had to be extended because jets need to accelerate longer before taking off. The introduction of jet engines could not be accomplished in a modular way by simply mounting them in the space allocated for the traditional piston engines. Before jet engines could become the dominant design for commercial airplanes, systemic innovation both in the airplane as a whole and in its larger technological context (runways) had to be accomplished.
Evolution of lifting function:
The shape of the wings has important aerodynamic consequences for the performance of the airplane as a whole. Ideally, wings create large lifting forces without causing much drag that slows down the airplane. Figures 3 and 4 give evidence of the many different wing shapes that were tried. It is estimated that, in pursuing a formula for optimal wing shapes, designers had tested over 2000 different airfoils by 1936 (Vincenti, 1990). These 2000 different designs embody modifications of the shape of the cross section, the thickness, and the curvature of the top and bottom sides of the wing. Because the wing shape best suited for a particular airplane depends on its size and a number of other parameters, wing shapes are typically custom-made for every airplane model, and thus no dominant shape design emerged. However, a number of standard design choices emerged with regard to other parameters of wing design. The internal architecture of wings also underwent a great deal of variation. In the early 1930s, however, most wing designers adopted Wagner’s invention of a latticework frame, which was very efficient in shifting stress to the covering sheet. Partly because of this advantage, metal wings became the dominant design. (For alternative lattice designs, see Figure 5.) To increase the lifting, steering, and braking functions of wings, today’s commercial jet airliners include variable sweep-back wings with multiple slots, replacing the earlier fixed-wing dominant design. Figure 6 shows a number of alternative slot designs that were tried before the multiple-slot approach became the dominant design.
Evolution of fuselage:
Every component used to assemble the overall airplane is made from a particular material. Structural materials used for the construction of the fuselage have two chief performance requirements: to be as light as possible, yet very resistant to stresses. Partly due to cultural values associated at the time with metals, engineers started to use metal in more and more airplane parts in place of wood and fabric (Schatzberg, 1994). By 1919, the first all-metal airplane (the Junkers F 13) had been designed. This marked the beginning of a trajectory that would eventually displace wood entirely from fuselage (and wing) construction. At the level of basic materials, then, metal replaced wood as the dominant material for structural parts. Recently, engineers have challenged the dominance of pure metal alloys by starting to employ fiber-reinforced composite materials for airplane structures (Schatzberg, 1994).
Evolution of landing function:
The pioneer airplanes, for example the 1910 Nieuport model, typically were equipped with a four-wheel fixed gear, resembling a little cart. Efforts to make landing gears more robust led to the tripod design with two big wheels mounted below the fuselage and a very small wheel at the bottom of the tail, giving the entire fuselage a downward slope toward the rear. The tripod configuration remained the dominant design for commercial airplanes until the tricycle undercarriage was introduced in 1938 by Douglas’s model 4E (Miller and Sawers, 1968). With a third leg of equal length, an airplane was less inclined to flip onto its face (i.e., the front end of the fuselage) during landing. Landing gears that put the commercial airplane in a fully horizontal position have remained the dominant design to this day.
In the 1930s engineers started to explore a number of different design ideas for making the landing gear more aerodynamic (See Figure 7). These attempts can be classified into two broad design approaches, the enclosing of wheels and the construction of retractable landing gears (Vincenti, 1994). Trying out a number of different methods of enclosure, designers put airplanes into service that either had their wheels enclosed (a design called wheel pants or spats) or had the entire landing gear enclosed (a design called trouser pants). Similarly, a number of different retraction mechanisms were devised. Retractable landing gears promised to deliver the greatest aerodynamic gains, but these devices were much more complicated than wheel and trouser pants, leading designers initially to focus on enclosing the landing gear. In the end, however, it was the laterally retracting landing gear that became the dominant design for all commercial airplanes going faster than 250 mph, winning the design competition not only against wheel and trouser pants but also against mechanisms where wheels would retract backwards or into the sides of the fuselages.
Evolution of linking function:
All components at some level have to be joined to compose an integrated artifact. As the case of the rivets that join metal sheets to the lattice of the airplane demonstrates, dominant designs emerge even in the most trivial linking mechanisms. To make additional aerodynamic gains after streamlining the fuselage and the wings, engineers started to explore different methods for making rivets even with the skin of the fuselage and the wings. This process is called flush riveting. In the early phase of flush riveting, designers used a wide variety of head angles, ranging from 78 to 130 degrees. When in December 1941 the Aeronautical Board of the Army and Navy made the 100-degree angle head mandatory for all military aircraft, it quickly spread throughout the entire industry. Thus the 100-degree angle head became the dominant design, eliminating the other options from airplane construction practice (Vincenti, 1990).
Evolution of control subsystem:
While early airplanes were controlled only by the steering skills of the pilot, today’s large commercial planes can in principle be flown by a collection of instruments without a human pilot. To provide maximum safety, human pilots still sit in the cockpit. Automatic flight was made possible because engineers over the last ninety years developed a large number of control instruments, from the gyrostabilizer that relieved the pilot of constant steering action to automatic navigation instruments. The strings and pulleys that provided the mechanical connection between the pilot’s wheel and the rudders were gradually transformed into power-operated controls, first introduced in the Douglas DC-4E airplane. In a fly-by-wire control system, now guiding such airplanes as the Airbus A320 and the Boeing 777, the pilot no longer has a direct tactile connection to the pressures that impinge upon the various rudders and flaps of the plane. These fly-by-wire systems are expected to become the dominant design in the future. Many standards in the design of the control instruments emerged, partly because the airline industry is so heavily regulated. While analog displays were the dominant design in the early days of instrumentation, digital displays have in recent years replaced analog displays as the dominant design, as the cockpit becomes ever more computerized.
Innovation and industrial dynamics:
Besides the dramatic fluctuations in demand due to the two world wars and the Great Depression, it was the innovations in the design of commercial airplanes that had powerful effects on industry dynamics from the very beginning of the technology (Rae, 1968). During the airframe “revolution” between 1925 and 1935, the introduction of such important innovations as the all-metal, low-wing monoplane, the controllable-pitch propeller, the retractable landing gear, and wing flaps led to significant entry of new firms, exit of incumbents, mergers, and dramatic reconfigurations of relative market shares. The former leaders in the airframe industry, like the Curtiss-Wright corporation, were overtaken by firms like Boeing, Douglas, Lockheed, and Martin, which pioneered these important innovations. When in 1936 Douglas integrated this set of innovations into its DC-3, the firm achieved so economical an airplane that it very quickly became the largest manufacturer of commercial airplanes in the world and remained so until the jet era in the 1950s. As other firms tried to imitate Douglas’s design formula, the all-metal, low-wing monoplane, the controllable-pitch propeller, the retractable landing gear, and wing flaps became standard design features for the next 20 years, largely because the DC-3 dominated the commercial market. By 1941, almost 8 out of 10 airliners were DC-3s (Klein, 1977). When jet engines became commercially viable in the mid-1950s, however, Boeing was quicker to respond to the technological discontinuity in the engine subsystem of the airplane. Boeing tested a prototype, its 707, a full year before Douglas even began developing its DC-8 jet airliner. Boeing captured a leading position in the beginning of the jet era and has succeeded in remaining the largest
producer of commercial airplanes to the present day, while Douglas was reduced to a very small player in the market, only to be taken over by Boeing recently, and Lockheed abandoned the commercial jet market altogether. Radical innovations in individual subsystems have also led to a large amount of entry and exit among the populations of firms associated with the production of individual components and subsystems (Rae, 1968). For instance, the leading manufacturers of water-cooled aircraft engines in the early 1920s—the Curtiss Aeroplane and Motor Corporation, the Wright Aeronautical Corporation, and the Packard Motor Car Company—were challenged by dynamic competitors like Lawrance and Pratt & Whitney, who entered the industry to pioneer the development of air-cooled engines. While Wright was able to make the transition to air-cooled engines and remain a major producer in the 1930s, the other leading firms were overtaken by Pratt & Whitney, and many exited the industry (Klein, 1977).
Although this case study of innovations only covers a small number of the innovations
that propelled airplane technology to current performance levels, it gives a good
overview of why scholars of innovation have found it useful to conceptualize
technological evolution as a process of variation, selection, and retention. This short case
history also suggests that dominant designs can occur at all levels of the physical artifact,
from the small component to the total system, or to put it in other words, at different
levels of resolution. We will now review more systematically the evidence on dominant
designs. After conducting a critical review of the literature on dominant designs in
organization theory proper, we will examine the evidence on dominant designs uncovered
by scholars outside the field of organization theory and canvass these literatures for ideas
that will help us to formulate a refined model of dominant designs.
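The variation-selection-retention cycle that runs through the episodes above can be illustrated with a toy simulation. The sketch below is purely our own illustration (the function, the fitness values, and the noise model are hypothetical, not drawn from the historical record): firms introduce design variants, the market selects among them under noisy feedback about user needs, and losing variants exit until a single design dominates.

```python
import random

def technology_cycle(variants, fitness, seed=0):
    """Variation-selection-retention sketch: start from a pool of
    design variants; each round the market scores every surviving
    variant on its intrinsic fitness plus noise (standing in for
    uncertainty about user needs), and the weakest variant exits.
    Stop when one design -- the dominant design -- remains."""
    rng = random.Random(seed)
    pool = list(variants)
    while len(pool) > 1:
        scores = {d: fitness[d] + rng.gauss(0, 0.5) for d in pool}
        weakest = min(pool, key=scores.get)
        pool.remove(weakest)  # selection: the losing design exits
    return pool[0]            # retention: the surviving design

# Hypothetical fitness values for the landing-gear variants of the case.
gears = {"fixed four-wheel": 1.0, "wheel pants": 2.0,
         "trouser pants": 2.2, "lateral retraction": 3.0}
print(technology_cycle(gears, gears))
```

Because the noise term can outweigh small fitness differences, the winner is not guaranteed to be the intrinsically best variant, which mirrors the observation that technical change is a highly unpredictable process even when it converges.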
III. Literature Review
1. Survey of Writings on Dominant Designs in Organization Theory and Strategy

Since Abernathy and Utterback (1978) first developed the concept of a dominant
design from a study of the automobile industry, many writers in the field of organization
theory and strategy have found the concept to be an extremely useful tool for studying the
evolution of technological products. At the heart of dominant design thinking lies the
empirical observation that technology evolves by trial and error and thus entails
enormous risks for the population of firms engaged in its development. When a new
product class appears, it is very unclear what kind of inherent potential a technology
possesses and what kind of needs users have. The only way to reduce the uncertainty
about technological potential and user needs is to create different designs and wait for
feedback from users. Over time, only one or a few designs from the large number of
design trials eventually succeed. The firms that happen to be the producers of the winning
designs will flourish while the firms that invested in the failing designs will incur great
economic losses and more often than not go out of business. The dynamics that lead to
dominant designs are of central importance to firms that have a stake in the way
technology evolves because the emergence of a dominant design produces winners and
losers.
As organization theorists and strategy scholars have developed a greater concern
about the role of technology in shaping the fate of firms, dominant design thinking has
become a major intellectual focus of both theoretical and empirical work. Following the
lead of Abernathy and Utterback, scholars have applied dominant design ideas to a wide
variety of products. Dominant designs have been found in such diverse industries as
cement production machinery, flat and container glass production systems,
minicomputers (Anderson and Tushman, 1990), video recorders (Rosenbloom and
Cusumano, 1987), typewriters, TV sets, TV tubes, transistors, electronic calculators
(Utterback and Suarez, 1993), radio transmitters (Rosenkopf and Tushman, 1994),
hearing aids (Van de Ven and Garud, 1993), computer work stations (Khazam and
Mowery, 1994), disc drives (Rosenbloom and Christensen, 1994), facsimile transmission
devices (Baum, Korn and Kotha, 1995), mainframe computers (Iansiti and Khanna, 1995),
personal stereos (Sanderson and Uzumeri, 1995), flight simulators (Miller, Hobday,
Leroux-Demers, Olleros, 1995) and microprocessors (Wade, 1995). (See Table 1 for an
overview of the different studies, their units of analysis and key findings, etc.) However,
researchers have also found that not all technological discontinuities lead to dominant
designs, suggesting that the emergence of a dominant design may not be a universal
phenomenon. Anderson and Tushman (1990) revealed that dominant designs do emerge
after most but not after every single discontinuity in a given product class. Utterback and
Suarez (1993) failed to uncover evidence that a dominant design appeared in the case of
integrated circuits and supercomputers. Although Henderson and Clark (1990) devote
considerable attention to dominant design concepts in the theoretical section of their
paper on the failure of established firms in the photolithographic aligner industry, they
never state whether or not they believe that a dominant design emerged in this particular
technology. This makes the photolithographic aligner industry a case that can be
interpreted neither as evidence for nor as evidence against dominant design theory. The weight of the existing
evidence presented by the aforementioned authors suggests that the process of
standardization leading to a dominant design takes place in a great variety of industries.
But if the present evidence is correct, there are clearly instances where dominant designs
do not emerge.
To refine dominant design theory, it would undoubtedly be a great step forward to
uncover the factors that explain under what conditions dominant designs emerge. Our
review of the existing empirical studies, however, suggests that it is currently impossible
to isolate from the available evidence with any degree of certainty a few factors that can
predict under what circumstances dominant designs emerge. A close reading of the
existing theoretical and empirical literature reveals that researchers have worked with a
great variety of definitions and empirical methodologies to determine dominant designs.
The absence of a shared set of definitions makes it very difficult to directly compare the
results reported in the various industry studies.
Before it is possible to isolate a small set of factors that predict the emergence of
dominant designs, it will be necessary to achieve some convergence in the way
researchers use dominant design concepts and collect evidence. As a first step towards
facilitating such a convergence, we review how and why researchers differ in the way
they use dominant design concepts. Our goal is to bring into view the sources of
disagreement and thereby enable an informed debate on how these differences can be
overcome without forcing researchers into a straitjacket that makes it impossible to
emphasize certain aspects of dominant designs over others. We will organize our
discussion in terms of the most important dimensions along which researchers differ in
their work on dominant designs. They concern 1) the definition of a dominant design, 2)
the chosen unit and level of analysis, 3) the underlying causal mechanisms, and 4) the
boundary conditions of the theory. These dimensions are, of course, not entirely independent of one another, since a researcher’s general conception of dominant designs has direct implications for how he or she approaches the various analytical steps required in theorizing and conducting empirical research on dominant designs.
Before discussing the differences among authors, let us first identify the areas where researchers are in broad agreement. All researchers on dominant designs
share the view that technology is an important factor in shaping the evolution of
industries and the performance of firms that are affected by a particular technology.
There is no controversy about the proposition that a powerful process of standardization
(i.e., a reduction in design variety) accompanies the development of a technology. Broad
agreement also exists about the notion that the emergence of dominant designs often
represents a defining event that marks the transition from one competitive regime to
another one. All scholars hold that the birth of a new product class is marked by a great
degree of uncertainty as to what a technology can do and what kind of performance
characteristics would be most beneficial to users. Furthermore, scholars who have studied
the actual course of product evolution all have come to appreciate the central fact that the
evolutionary path is replete with designs that have failed in the marketplace. Although
scholars start from this common ground, they arrive at very different understandings of
what dominant designs are and how one goes about determining when they occur.
a. Disagreements about Definitions
Important differences already begin with the ways in which scholars have defined
the idea of a dominant design. Not all of them provide an explicit definition of
dominant designs in their writings. However, as one analyzes how scholars have used the
concept of a dominant design, it becomes apparent that even when scholars do not
commit themselves to an explicit definition, they often attach a different meaning to the
concept. It would take us too far afield to review all the different meanings that have been
associated with this concept. We will limit our discussion to those authors who have
engaged in efforts to develop dominant design theory itself, leaving out those who have
merely applied existing dominant design ideas to a particular research concern without
trying to make a contribution to the development of dominant design theory itself.
Since Abernathy and Utterback pioneered the concept of a dominant design, we
begin with their definition to provide a convenient reference point for evaluating later
definitions. They identify several dimensions that characterize a dominant design.
Writing a book on the dynamic relationship between product and process innovation,
these authors see a dominant design as the turning point that leads the industry to move
from a “made-to-order” to a standardized-product manufacturing system. According to
Abernathy and Utterback, this transition from flexible to specialized production processes
is marked by a series of steps. The first one is the development of a model that has
sufficiently broad appeal in contrast to the design of earlier product variants that focused
on performance dimensions valued only by a small number of users. This design that can
satisfy the needs of a broad class of users is not a radical innovation but rather a creative
synthesis of innovations that were introduced independently in earlier products. The
second and decisive step is the achievement of a dominant product design, one that
attracts significant market share and forces an imitative design reaction from competitors (p.147).
In the third step, competitors are forced to imitate this broadly appealing design, inducing
product standardization throughout the industry. The emergence of a dominant design
closes a period where firms compete by introducing radically different product designs
into the market. After the dominant design is in place, innovations focus on incrementally
changing the product from year to year, they become more cumulative, and competition
centers more on price than on product differences.
It is rather unfortunate that Abernathy and Utterback already apply the term “dominant design” to the second step, where one design gains significant market share without necessarily having come anywhere close to 50% of the market. Abernathy
clearly stipulates that a dominant design is one that diffuses almost completely through
the industry (p.61-2). Diffusion throughout the industry is precisely what makes it the
dominant design. Employing the term for the periods both before and after diffusion introduces considerable ambiguity. To summarize Abernathy’s and Utterback’s definition, a
dominant design meets the needs of most users, diffuses almost completely throughout
the industry, is a synthesis of previous independently introduced innovations, and ushers
in a period of incremental innovations exploiting latent performance potentials.
Abernathy and Utterback illustrate their concept of a dominant design by referring to the
Ford Model T and the DC-3 as powerful examples of dominant designs that shaped the
automobile and the airplane industries.
The manner in which Abernathy and Utterback describe the emergence of a
dominant design conveys the strong notion that the design that will eventually emerge as
the dominant one is pre-ordained to achieve this status. The only factor that can prevent this process from taking place is an overly segmented product market. These scholars
appear to suggest that it is simply a matter of time until designers will have tried enough
variants of a product to achieve a synthesis that is slated to dominate all others. While
this design may not be the best along all performance dimensions, Abernathy and
Utterback seem to regard it as the best compromise that will then force all other industry
participants to imitate the design and make it the standard for the entire industry.
In his recent writings with Suarez (Utterback and Suarez, 1993), Utterback
continues to underscore the notion that a dominant design is successful because it brings
together the most useful features that were previously scattered across different designs.
Utterback and Suarez’s (1995) insistence that “some authors have used measures which
are tautological to determine a dominant design, such as market share (Anderson and
Tushman, 1990)” (p.426), indicates that these authors use the term “dominant design” not
so much to refer to the design that has achieved at least 50% of the market, but rather to
identify the design that offers a collection of design features that will make it the
irresistible choice in the market as soon as it appears. Utterback and Suarez’s proposal
that “[a] better measure might be notable increase in licensing activity during several
years by a given firm or by a group of firms with products based on the same core
technology” (p.426) gives even stronger evidence that for these scholars the defining moment of a dominant design occurs well before the design captures over 50% of the market share.
The interpretation that Utterback and Suarez use the term “dominant design” in the sense
of the best one in the market—the one that can dominate all others once it has been found
as opposed to the most frequent one—receives further support from the way in which
Utterback and Suarez react to scholars who have identified economies of scale as an important force in bringing about dominant designs. Utterback and Suarez write that “we think that economies of scale are of primary importance after a dominant design is in place” (p.418). In their account, economies of scale are not a mechanism that helps bring
about a dominant design, but rather the emergence of the best compromise makes it
possible in the first place to sell a standardized product to many different users.
While most of Utterback and Suarez’s statements seem to suggest that a dominant
design is dominant precisely because it is the best compromise, these authors are
sometimes pulled away from such a position. This vacillation in perspective makes it
rather difficult to give their account a straightforward reading. Utterback and Suarez’s
assertion that a dominant design “may not be [the] ideal choice in a broader context of
optimality, but rather a design, such as the familiar QWERTY typewriter keyboard, that
becomes an effective standard” is difficult to reconcile with their notion of a dominant
design as the best technological compromise. When the authors continue that dominant
design “is not necessarily the one which embodies the most extreme technical
performance” (p.418), it becomes even more difficult to see how they can maintain a
position that identifies a dominant design as the best compromise. But if a dominant
design is not dominant because it captures over 50% of the market share (we read
Abernathy, unlike Utterback, as making this a defining requirement for a dominant
design) and if now Utterback and Suarez retreat from the idea that a dominant design is
the best technological compromise, we are left with the tantalizing question: What then is
a dominant design?
Before moving on to the definitions of other scholars, we want to record for our
later discussion of differences in unit of analysis that Utterback and Suarez elaborate their
position by explaining that for complex products with many parts a dominant design
“embodies a collection of related standards” (p.418). Utterback and Suarez try to clarify
here the relationship between the concept of a dominant design and the concept of
standards that was widely used in the economics literature in the 1980s. They affirm once
again that for products that are made from parts, a dominant design amounts to a standard
way of assembling the parts into a functional whole.
In their discussion of dominant designs, Henderson and Clark (1990) adopt a
more structural definition than the previous scholars. Henderson and Clark (1990) also
use the concept of a dominant design to refer to standardization at the level of the overall
product, but they are more explicit about the requirements that have to be fulfilled before
a dominant design can be said to have emerged.
A dominant design is characterized both by a set of core design concepts that correspond to the major functions performed by the product and that are embodied in components and by a product architecture that defines the ways in which these components are integrated. It is equivalent to the acceptance of a particular product architecture (p.14).
For Henderson and Clark (1990) a dominant design manifests itself when designers
converge on a common design approach for all major functions of the product and for the
linkages that integrate the components into a functional whole. This definition of a
dominant design is striking because of two features. First, Henderson and Clark require a standard design approach for components as well as linkages. Second, these authors
are silent on the question of the technological capabilities of the dominant design vis-à-
vis its competitors. The definition entirely lacks a sense that a dominant design is the
dominant design because it represents the best technological approach.
Tushman and Anderson (1986) start their work on dominant designs by adopting
the Abernathy and Utterback (1978) definition of a dominant design as a synthesis of
previously introduced design elements. In their later work, Anderson and Tushman
(1990) move away from the synthesis idea and adopt a more structural view, as do
Henderson and Clark (1990). By defining a dominant design as a single architecture that
establishes dominance in a product class (1990, p.613), Tushman and Anderson,
however, take a much less restrictive view than Henderson and Clark (1990). Anderson
and Tushman use the term “architecture” in a very broad sense that leaves open how
many of the components and linkages have to become standardized across designs in
order to constitute a dominant design. By adopting such an abstract definition of a dominant design, the authors do not commit themselves to a very concrete set of requirements that have to be fulfilled to find positive evidence that a dominant design has emerged in a product category. On this definition, any design feature that becomes standardized across different design approaches could in principle qualify as a dominant design. While
this broadening of the definition of a dominant design has the advantage that it can
accommodate researchers who study a particular component in a technological system, it
comes at the expense of introducing even further ambiguities into the concept of a
dominant design. In contrast to their imprecise qualitative account of dominant design,
Anderson and Tushman (1990) are very clear on the numerical threshold a design has to
overcome to qualify as a dominant design. For a dominant design to exist, they demand
that a single configuration or a narrow range of configurations must account for over 50
per cent of new product sales or new product installations. This requirement is not the only thing that differentiates Anderson and Tushman (1990) from Utterback and Suarez (1993, 1995).
The former authors, in stark contrast to the latter ones, contend that a dominant design
can only be known in retrospect and not in real time.
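Anderson and Tushman’s retrospective 50%-of-new-sales criterion can be made concrete with a short sketch. This is only one possible operationalization of their test; the function name, design labels, and sales figures below are hypothetical.

```python
# Illustrative sketch of Anderson and Tushman's (1990) retrospective test:
# a dominant design exists once a single configuration accounts for over
# 50% of new-unit sales. All data and names are hypothetical.

def dominant_design_year(sales_by_year):
    """Return (year, design) for the first year in which one design
    configuration captures more than half of all new-unit sales,
    or None if no configuration ever crosses the threshold."""
    for year in sorted(sales_by_year):
        shares = sales_by_year[year]
        total = sum(shares.values())
        if total == 0:
            continue
        design, units = max(shares.items(), key=lambda kv: kv[1])
        if units / total > 0.5:
            return year, design
    return None

# Hypothetical new-unit sales per design configuration
sales = {
    1984: {"design_A": 30, "design_B": 40, "design_C": 30},
    1985: {"design_A": 25, "design_B": 55, "design_C": 20},
    1986: {"design_A": 10, "design_B": 80, "design_C": 10},
}
print(dominant_design_year(sales))  # → (1985, 'design_B')
```

Note that the sketch can only be run once the sales series is complete, which is precisely Anderson and Tushman’s point: the test identifies a dominant design in retrospect, not in real time.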
Putting the definitions of a dominant design that have been offered by different
scholars side by side creates a canvas filled with ambiguities and questions. It is not
surprising that researchers trying to build on the existing literature find it difficult to
extract a consistent set of principles that can guide research on dominant designs. Some
ambiguities can be removed by taking a closer look at the evidence already available. But
others will require a great deal more empirical and theoretical research. For instance, the
notion that a design becomes a dominant design because it is the best technological
approach is not supported by the evidence that has been accumulated since The
Productivity Dilemma was published in 1978. Scholars in organization theory, strategy,
history of technology, and particularly economics have shown both theoretically and empirically that it is quite easy for a technologically inferior product to become the
dominant design (Anderson and Tushman, 1990; Cusumano, Mylonadis and
Rosenbloom, 1992; Farrell and Saloner, 1985). Utterback and Suarez (1993 and 1995)
began to recognize this evidence but were not able to incorporate it into their theory of
dominant designs.
Other ambiguities are much harder to remove. Abernathy and Utterback’s (1978)
insistence, for example, that dominant designs need to have sufficiently broad appeal is
empirically very difficult to operationalize. Levinthal (1998) has emphasized that
technologies frequently undergo changes in new user environments and later reinvade
their original user environment. This makes it difficult to pinpoint the precise moment
when a design has broad appeal. Furthermore, how should one go about identifying when
users have very different requirements that cannot be met by the same design? Which
users should be grouped together and which ones apart? Do small, medium, and large
cars all constitute different user segments and require the researcher to look for three
different dominant designs, or do they all fall into the same segment and thus require the
researcher to search for one dominant design? If the latter is the case, one could also
wonder whether small trucks should fall in the same segment, etc. These are questions
every researcher has to address but they are difficult to make tractable in a theoretical
way. In our view, the safest way to proceed currently is to consider a number of
alternative definitions of relevant user segments and determine how sensitive the
empirical results are to changes in classifications. Empirical researchers have actually handled this ambiguity quite well, mostly by picking user segments that have a great deal of face validity because they follow widely shared definitions of markets. While researchers have been able to circumvent this theoretical problem raised by Abernathy and Utterback’s definition with considerable skill, they have experienced much greater difficulties in forging an agreement on how to analyze a given technological product class and properly identify dominant designs.
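The sensitivity check recommended above can be sketched in a few lines: run the same majority-share test for a dominant design under alternative definitions of the relevant user segment and see whether the verdict changes. All segment labels and sales figures below are hypothetical.

```python
# Minimal sketch of a segment-sensitivity check: does the verdict on a
# dominant design change when markets are segmented differently?
# Segment names and unit sales are hypothetical.

def leading_share(sales):
    """Share of new-unit sales held by the single best-selling design."""
    total = sum(sales.values())
    return max(sales.values()) / total if total else 0.0

# Hypothetical unit sales by design, recorded per candidate segment
sales_by_segment = {
    "small_cars":  {"design_X": 60, "design_Y": 40},
    "medium_cars": {"design_X": 40, "design_Y": 60},
    "large_cars":  {"design_X": 50, "design_Y": 50},
}

# Alternative segmentations: treat each market separately, or pool them all
segmentations = {
    "three segments": [["small_cars"], ["medium_cars"], ["large_cars"]],
    "one segment":    [["small_cars", "medium_cars", "large_cars"]],
}

for name, groups in segmentations.items():
    for group in groups:
        pooled = {}
        for seg in group:
            for design, units in sales_by_segment[seg].items():
                pooled[design] = pooled.get(design, 0) + units
        dominant = leading_share(pooled) > 0.5
        print(name, group, "dominant design:", dominant)
# With these figures, small and medium cars each show a dominant design
# when treated separately, but the pooled market shows none.
```

The point of the exercise is exactly the one made in the text: the conclusion that a dominant design has emerged can depend entirely on how the researcher draws the boundaries of the user segment.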
b. Disagreements about Units and Levels of Analysis

The Abernathy and Utterback model as articulated in chapter four of The
Productivity Dilemma (1978) leaves the strong impression that dominant designs are a
phenomenon that occurs at the level of the entire product. Because the authors emphasize
that a dominant design is a synthesis of previous independently introduced innovations,
they appear to exclude the possibility that the concept of a dominant design could also
apply to single components that constitute the Ford Model T automobile or the DC-3
airplane. Although the Abernathy-Utterback concept of a dominant design was
formulated with regard to the entire product, researchers have often not followed the
model with regard to the unit of analysis. Instead of assembling evidence that
standardization has occurred in all functional domains and their linkages (to follow the
more precise Henderson and Clark 1990 definition), other researchers have frequently
focused on standardization in one or a few components. For instance, Anderson and
Tushman (1990) examine standardization in kiln length and the heating subsystem rather
than standardization in all functional domains of a cement production system. In their
study of minicomputers, they similarly pick out two functional components—the central
processing and the memory unit—to characterize dominant designs. Rosenbloom and
Cusumano (1987) apply the concept of a dominant design to the scanning head of a video
recorder, one out of many components that make up this technological system. In their
recent papers, three of the eight products that Utterback and Suarez (1993, 1995) study
are components of larger systems. Likewise when Baum, Korn, and Kotha (1995)
examine the emergence of a dominant design in the facsimile industry, they focus on
standardization in the interface component instead of studying standardization in the
design of the overall facsimile technology.
Focusing on one or a few components of a larger system by itself would not be a
problem if all authors proceeded in the manner of Rosenbloom and Cusumano (1987).
Given the evidence they possess, they draw the valid inference that a dominant design
emerged with regard to how firms went about designing the scanning component of video
recorders, but they resist making claims about dominant design at the level of the entire
video recorder. Proceeding in the way of Rosenbloom and Cusumano would simply
expand the Abernathy-Utterback model to components of larger technological systems.
In the other parts of the book where he details the sequence of innovations in the
automobile industry, Abernathy (1978) himself did not strictly follow the spirit of the
model formulated in the chapters co-authored with Utterback. When he set himself to
organize and describe the sequence of innovations in automobiles, he applied the concept
of a dominant design to such diverse components as the internal-combustion gasoline
engine, the Model T chassis, the V-8 engine design, the closed steel body, the electric
starter, hydraulic brakes, energy-absorbing steering assemblies, independent front
suspension, and 12-volt electrical systems. Abernathy’s list of components that he
regards as having become dominant designs at different points in time goes on.
Many authors, however, purport to study dominant designs at the level of the
entire product but then only marshal evidence that is restricted to one component, one
linkage, or a small number of components and linkages. Claiming that a dominant design
has emerged at the level of the overall product based on evidence from a few components
is, of course, highly problematic. While it may be true that designers converge on a dominant design for a particular component, they may at the same time choose new and different approaches for all other components. In this case the process of
standardization would run parallel to an even more extensive process of variation in
design. It is not difficult to see that in such a case making an inference from increased
standardization in one component to technological dynamics at the system level of
analysis is simply invalid. Counted together, the evidence would weigh more heavily against than for the notion that a dominant design has emerged at the overall product level, since most
components in this example would experience an era of technological ferment. There
seems to be no a priori reason why a researcher cannot conclude that a dominant design
has emerged in the one specific component which has become standardized across
different products. However, it is important not to infer from the component to the overall
product as is often done in the literature. Without specifying the unit of analysis, researchers are always in danger of reaching contradictory conclusions about whether a dominant design has emerged with regard to a particular product or not. This is hardly a
satisfactory state of affairs for achieving a cumulative science.
Although most definitions of dominant design, as we have seen earlier, are
formulated with an overall product in mind, many researchers have been attracted to
studying precisely the phenomenon of standardization at individual components or
linkages, without examining in any detail the development of the overall product. Moving
to a different unit of analysis without expanding the formal definition of dominant design
raises, of course, a number of important questions. Are dominant designs at the level of a
component or linkage the same phenomenon as dominant designs at the overall product
level? What is the relation of the dominant designs at a component and linkage level with
dominant designs at the overall product level. We will return to these questions later in
the essay.
The confusions surrounding the unit of analysis problem have a second important
dimension. Even when researchers focus on the same component, there remain wide
opportunities to reach opposite conclusions concerning the existence of a dominant
design, although researchers may be working with exactly the same evidence. Take the
case of the motor unit in automobiles. Abernathy (1978) describes in the early parts of
The Productivity Dilemma how the internal combustion engine beat the steam and
electric engine to become the dominant design for the motor unit in 1902. As Abernathy
follows the innovations that have characterized the development of automobile
technology, he also determines that the V-8 internal combustion engine became the
dominant design in the 1930s. Abernathy certainly employs different criteria when he
calls both the internal combustion engine and the V-8 engine a dominant design. While
the first judgment is based on a relatively general criterion that distinguishes between
fundamentally different technological principles for creating motive power—combustion,
steam, electricity— the second judgment is based on a much more specific criterion that
distinguishes between different designs within the combustion approach.
Unfortunately, researchers have seldom investigated different technologies with
their analytical lenses set at the same level of generality. Just like Abernathy, other
scholars have examined dominant designs at different levels of generality without making
this explicit and without providing the context that would show why the researchers have
chosen a particular level of generality from a set of many possible ones. Anderson and
Tushman (1990), for example, characterize the dominant design in very concrete terms
for container glass production systems but use a much more general description for
minicomputers. In the first product class, they present the United Machine as the
dominant design. This imposes very stringent requirements for identifying which other
products follow the dominant design and which ones do not. In the second product class,
Anderson and Tushman present the 16-bit, core-memory machine as the
dominant design. By only using two rather abstract product features that can be
implemented in many different ways, Anderson and Tushman make it much less difficult for a design to be counted as an example of the dominant design. Utterback and Suarez
(1993) are no different from Anderson and Tushman (1990) in this respect. In their study
of 8 different products, they also locate dominant designs at different levels without
giving any explanation of why they do so and without discussing how this practice affects
their results. Consider their examples of dominant designs (Again, see Table 2 for an
overview of the different studies, their units of analysis and key findings, etc.). For
transistors they identify the planar process as the dominant design; for electronic
calculators, the calculator on a chip; and for automobiles, the all-steel, closed body car—
all very general descriptions encompassing a large number of designs that can differ
substantially in their details. For typewriters they present the Underwood Model 5 and Hess’s subsequent innovations as the dominant design; for TV sets, the 21-inch screen
along with the RCA technical standards; and in the case of TV tubes, the all glass 21-inch
tube. In contrast to the earlier products, the requirements for a design to count as a
version of the dominant design are here much more specific, setting the empirical hurdle
considerably higher.
Given that scholars have operated at different units (component versus system) and different levels (specific versus general features) of analysis, it is hardly surprising that the empirical literature is filled with inconsistent findings, independent of the question whether dominant designs are a universal phenomenon or not. For results to become at least roughly comparable across studies, it would be necessary for scholars to make explicit at what unit and what level of analysis they are conducting their empirical tests.
At the present time, the literature does not say whether dominant designs are
equally important at the different possible units and levels of analysis. We also don’t
know whether dominant designs are shaped by the same dynamics at the different units
and levels of analysis. But even more importantly, we don’t know how processes of
standardization at the different units of analysis relate to one another. Do dominant
designs at one unit of analysis control whether dominant designs can emerge at another
unit of analysis? There are still other important dimensions along which scholars differ in
their treatment of dominant designs.
c. Disagreement about the Frequency of Dominant Designs in a Product Class

How often do dominant designs emerge? In the original formulation of dominant designs,
Abernathy and Utterback (1978) strongly emphasize that dominant designs emerge once
in the evolution of a particular product class. The pioneers of dominant design theory
view the transition from small scale, flexible production technology to large scale,
specialized production systems as an irreversible process. A dominant design emerges
before it is possible for producers to change to a large automated production facility
turning out a standardized product. If a dominant design is intimately connected with this
transition and the transition occurs only once, it is logical to conclude that a dominant
design emerges once in the lifespan of a product class. Suarez and Utterback clearly
follow this approach in their most recent empirical research on firm entry and exit rates
before and after the emergence of a dominant design. In all the seven product classes
Suarez and Utterback (1995) examine, dominant designs emerge once and then continue
to exist for as long as the product class continues to find customers in the market place.
Utterback and Suarez (1993) at one point acknowledge the possibility that new
dominant designs can come into existence when they write in the literature review of
their paper:
Eventually, we believe that the market reaches a point of stability in which there are only a few large firms having standardized or slightly differentiated products and relatively stable market shares, until a major technological discontinuity occurs and starts a new cycle again (our italics) (p.2-3).
But these authors never acknowledge in their direct theoretical statements the notion that an existing dominant design may in time be replaced by a new one, nor do they give any evidence of this possibility in their descriptions of how industries have evolved. The early
Abernathy as well as Utterback and colleagues are not alone in the “one dominant design
per industry” camp. Baum, Korn, and Kotha (1995) present the case of the facsimile
transmission industry also as one where a dominant design emerged once during the
observed lifetime of the technology. Similarly, Teece (1986; 1992) writes that one design
or narrow class of designs emerged at some point in time in the automobile, aircraft,
computer, and VCR industries.
In contrast to Utterback, Abernathy reverses his earlier theoretical position on the
frequency of dominant designs in his later joint work with Clark (1985). Examining in
greater detail the nature of the innovations that have shaped the automobile industry, he
and Clark come to the conclusion that even after a dominant design emerges, industries
can experience further rounds of technological de-maturity. The authors cite the “all-
purpose road cruiser“ as the design that became dominant in the early 1940s, replacing
the earlier dominant design.
Anderson and Tushman (1990) also reject Abernathy and Utterback’s (1978)
stage theory and offer an alternative technology cycle model. According to this model,
the evolution of product classes is marked by recurring technological discontinuities that
are followed by a new dominant design. In their study of the evolution of the cement,
container glass, flat glass and minicomputer industries, Anderson and Tushman (1990)
find that each industry experienced several discontinuities, which in most cases led to a
new dominant design. In writing that “[r]adical innovations establish a new dominant
design” (p.11), Henderson and Clark (1990) also affirm that a new dominant design can
emerge in a product class when it experiences a radical innovation. Sanderson and
Uzumeri (1995) are another set of authors who find it more useful to describe the
evolution of product classes (in their case the Walkman personal stereos) in terms of a
cyclical model as opposed to a stage model.
This conceptual disagreement on whether or not new dominant designs can
replace the original dominant design has dramatic ramifications for conducting empirical
research. If researchers from the two camps were asked to study the very same industry over the very same period, it is possible that the researcher from the “one-dominant-design” camp would find that a single dominant design has appeared while the researchers from the “multiple-dominant-design” camp determine that already a third dominant design has emerged, replacing the earlier dominant designs. This scenario is of course hardly conducive to building a coherent and cumulative body of findings. Is there
a way to resolve these theoretical differences? Which conceptual approach is supported
by stronger empirical findings? We believe that the cycle view is more accurate about the
actual nature of technical evolution. We will return to this point later in the essay.
d. Underlying Causal Mechanisms
Scholars of dominant designs have not all appealed to the same underlying causal
logic to explain why one particular design approach rather than another emerges as
the dominant design. The question of how dominant designs actually come into existence
has received a number of different answers—some of which are more convincing than
others.
i. The Logic of the Best Technology
We have already discussed in detail the Abernathy and Utterback (1978) account
and its elaboration in Utterback and Suarez (1993 and 1995) of how dominant designs
come about. The idea that a dominant design becomes dominant because it represents the
best technological compromise and then forces all other producers to imitate the design is
simply not confirmed by the evidence that scholars have found across many different
products. While in some cases the best technological approach may actually become the
dominant design, there are numerous well-documented examples where technologically
inferior products capture the dominant market position. The cases of QWERTY versus
the DVORAK keyboard (David, 1985), VHS versus Beta and Video 2000 (Cusumano,
Mylonadis et al., 1992), and DOS versus the Macintosh operating system—all
examples where a technologically inferior approach became the dominant design—
demonstrate that this can by no means be the universal explanation of how dominant
designs come into existence. Suarez and Utterback’s (1995) remark that “in the presence
of bandwagon effects, strategic maneuvering is a powerful force driving the emergence
of dominant designs” (p.418) indicates their awareness that other causal mechanisms
must be at work in determining dominant designs.
ii. Economies of Scale
One of the most straightforward explanations for the emergence of a dominant
design is the economies of scale that can be realized with standardized products. While
Suarez and Utterback (1995) argue that economies of scale are more important after a
dominant design is in existence, other researchers have stressed that early advantages in
market share can provide firms with the higher margins that will allow them to outspend
their rivals in R&D and create more innovative products that will eventually drive the
less innovative ones out of the market (Klepper, 1996). If one works with a definition that
requires a dominant design to capture over 50 percent of the market, it seems that
Abernathy's study of the automobile industry and Utterback and Suarez's (1993 and
1995) recent industry studies very much point to economies of scale as one driving
force in selecting a particular design as the dominant one. Hounshell's (1984) historical
research on the emergence of the "American System of Manufactures" has provided a
flavor of how strongly the forces of standardization and economies of scale interact.
While some degree of standardization is necessary in order to drive
down unit costs, the higher demand following a drop in unit cost gives producers an
even greater incentive to further standardize the product and realize yet higher reductions
in unit cost. On this economic logic, a design that initially has a lead in market share will
emerge as the dominant design. Arthur (1994) as well as Klepper (1996) have shown
mathematically that under conditions of increasing returns to scale one design can easily
come to dominate the market.
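The increasing-returns logic that Arthur and Klepper formalize can be illustrated with a simple simulation. The sketch below is a nonlinear Polya-urn process, not a reproduction of either author's actual model, and all parameter values are invented: when each new adopter is disproportionately attracted to the design with the larger installed base, small early leads compound until one design captures nearly the whole market.

```python
import random

def simulate_adoptions(steps=5000, gamma=2.0, seed=42):
    """Nonlinear Polya-urn sketch of increasing returns to scale.

    The probability that the next adopter picks a design is proportional
    to (installed base)**gamma. With gamma > 1 (increasing returns),
    an early lead feeds on itself; gamma = 1 is the neutral benchmark.
    """
    random.seed(seed)
    counts = [1, 1]  # designs A and B each start with one adopter
    for _ in range(steps):
        weight_a = counts[0] ** gamma
        weight_b = counts[1] ** gamma
        if random.random() < weight_a / (weight_a + weight_b):
            counts[0] += 1
        else:
            counts[1] += 1
    total = counts[0] + counts[1]
    return counts[0] / total, counts[1] / total

share_a, share_b = simulate_adoptions()
print(f"final market shares: A={share_a:.3f}, B={share_b:.3f}")
```

Under increasing returns (gamma > 1), almost every run ends with one design holding the bulk of the market, and which design wins depends on the random early draws—consistent with the point made below that the winner need not be the better technology.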
iii. Network Externalities
A logic rather similar to the notion of economies of scale is used by scholars who
view network externalities as a strong force behind the selection of a particular design
approach as the dominant one. In recent research on the emergence of dominant designs,
Wade (1995) and the team of Baum, Korn, and Kotha (1995) have borrowed arguments
from the economics literature on network externalities to explain the emergence of
dominant designs in the microprocessor and facsimile industries. The idea of network
externalities describes a situation where the value of adopting a particular technology
depends on the number of users who have purchased a compatible technology. Telephone
systems, fax machines, ATM networks, and computer platforms are all examples where
users have an incentive to adopt the technology that is already adopted by many other
users because the larger network will make the particular technology more valuable to the
individual user.
As the work of Arthur (1989; 1988) has shown, small random differences in the
beginning of the adoption process can give one design an advantage that will make it the
dominant design. In this process, there is no mechanism that would prevent a technically
inferior design from becoming the dominant design because of early advantages in
adoption rates. Wade (1995) has provided some empirical evidence that small differences
in early sales led other firms to adopt the design of the leading product, setting in motion
a bandwagon that would turn Intel-based processors into the dominant design for the
industry.
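Arthur's (1989) argument can be sketched as an agent-based toy model. The agent types, payoff numbers, and network bonus below are invented for illustration and are not Arthur's actual parameterization: agents with different intrinsic preferences arrive in random order, and once one technology's installed-base advantage outweighs any agent's intrinsic preference, every subsequent agent joins the bandwagon.

```python
import random

def arthur_market(n_agents=2000, network_bonus=0.05, seed=7):
    """Toy version of Arthur-style adoption under network externalities.

    Type-R agents intrinsically prefer technology A; type-S agents prefer B.
    Each arriving agent picks the technology with the higher total payoff:
    intrinsic value + network_bonus * current installed base.
    """
    random.seed(seed)
    installed = {"A": 0, "B": 0}
    intrinsic = {"R": {"A": 1.0, "B": 0.8},  # invented payoffs
                 "S": {"A": 0.8, "B": 1.0}}
    for _ in range(n_agents):
        agent = random.choice("RS")
        value_a = intrinsic[agent]["A"] + network_bonus * installed["A"]
        value_b = intrinsic[agent]["B"] + network_bonus * installed["B"]
        installed["A" if value_a >= value_b else "B"] += 1
    return installed

result = arthur_market()
print("installed base:", result)
```

Until one installed base leads by enough (here, roughly four adopters) to offset the 0.2 intrinsic-preference gap, agents follow their own tastes and the gap performs a random walk; once the threshold is crossed, the market locks in, and nothing guarantees that the locked-in technology is the technically superior one.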
iv. Strategic Actions on the Part of Firms
Abernathy and Utterback (1978) downplay the strategic role that firms and their
managers play in bringing about dominant designs. While on their account a dominant
design emerges when designers, after considerable trial and error, finally hit upon the
best synthesis of product features, some recent authors have emphasized the strategic
maneuvering on the part of firms to explain the emergence of a particular dominant
design. Cusumano, Mylonadis, and Rosenbloom (1992) cite JVC’s strategy to license
their VHS design to many other electronic companies as the main reason why the firm
was able to beat Sony’s Beta design, although Sony was first to market. In their
investigation of the strategies for dominant design in workstation computers, Khazam
and Mowery (1994) have also pointed to Sun's strategy of licensing its chip architecture
to many suppliers as the main reason why Sun's SPARC chip became the dominant
design for the industry. Lee, O'Neal, Pruett, and Thomas (1995) have tried to develop a
comprehensive framework for the emergence of a dominant design. The framework
emphasizes that firms can take concrete steps to bring about a dominant design. Building
on Teece’s (1986) idea of complementary assets, the authors argue that management
must systematically analyze what kind of R&D, manufacturing, and marketing
capabilities a firm must possess to turn its design into the dominant one.
Finally, McGrath, MacMillan, and Tushman (1992) have argued forcefully that
managers must formulate ex-ante strategies for creating dominant designs if a firm wants
to profit from the dynamics of dominant designs. In their view, managers can develop a
host of strategies that will enhance the probability that the firm’s design will become the
dominant one. Their argument centers on two key ideas: (1) lumpiness of customers and
(2) heterogeneity among firms. By lumpiness they mean that customers form
subpopulations because they are not uniformly distributed across the n-dimensional
design space defined by the relevant technological performance attributes of a given
technology. Because customers lump in this technology performance space, firms can
develop strategies to move their particular design in a direction that will give them the
largest number of new customers per unit of development cost invested in the particular
design. Here the dominant design—defined ex ante—is that point in the product
performance space that satisfies the largest number of users. Because firms are
theorized—following the resource-based view (Wernerfelt, 1984; Barney, 1986; Teece,
Pisano and Shuen, 1997)— to differ in their technological, manufacturing, marketing and
management capabilities, individual firms differ in their ability to achieve a design that
will improve the technology and appeal to the largest number of users. The crucial role of
management, according to McGrath, MacMillan and Tushman (1992), is to appraise the
present capabilities of the firm and move its product design to a position that (1) meets
the performance requirements of a larger number of users and (2) is difficult to replicate
by other firms in a short period of time.
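The lumpiness argument can be made concrete with a toy calculation. Everything here—the two customer clusters, the candidate design positions, and the development-cost figures—is invented for illustration; this is not McGrath, MacMillan, and Tushman's formal model. The idea: if customers cluster in a performance space, a firm can rank candidate design positions by the number of customers served per unit of development cost.

```python
import math
import random

def customers_served(design, customers, tolerance=1.0):
    """Count customers whose ideal point lies within `tolerance`
    of the design's position in the performance space."""
    return sum(1 for c in customers if math.dist(design, c) <= tolerance)

random.seed(1)
# Two "lumps" of customers in a 2-D performance space
# (e.g., speed and reliability); positions are invented.
lump1 = [(random.gauss(2, 0.3), random.gauss(2, 0.3)) for _ in range(60)]
lump2 = [(random.gauss(6, 0.3), random.gauss(3, 0.3)) for _ in range(40)]
customers = lump1 + lump2

# Candidate design positions and the (invented) cost of developing each.
candidates = {(2, 2): 1.0, (6, 3): 1.5, (4, 2.5): 1.2}

# Pick the design that serves the most customers per unit of cost.
best = max(candidates,
           key=lambda d: customers_served(d, customers) / candidates[d])
print("best design position:", best)
```

The candidate at (4, 2.5) sits between the two lumps and serves almost no one, which is the point of the lumpiness argument: averaging over a heterogeneous market can be worse than targeting a cluster the firm's capabilities can reach cheaply.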
While researchers who emphasize the strategic maneuvering of firms as a driving
force behind dominant designs do not deny that powerful uncontrollable forces are at
work, they all see an important role for management in formulating and executing
strategies that will allow a firm to establish its design as the dominant one for the
industry.
v. Complex Interaction of Sociological, Political, and Organizational Dynamics
Tushman and Rosenkopf (1992) have argued that the complexity of a product
determines to a significant extent what forces will play the key role in bringing about
dominant designs. In their view, the selection of dominant designs for simple products is
influenced more by considerations of technical merit than by sociological, political and
organizational forces. However, as products become more complex, simple technical
logic cannot adjudicate between the many dimensions of merit that are built into complex
products. In this situation sociological, political and organizational dynamics become
more important in determining which design approach emerges as the dominant one.
Tushman and Rosenkopf (1992) cite machine tools, electricity networks, and radio
transmitters, among other complex products, as examples where a multifaceted interplay
of sociological, political, and organizational factors shaped the emergence of a particular
dominant design.
A number of recent examples show that the higher the degree of complexity in the
design of a product, the greater is the need for actors to agree on a dominant design to
avoid costly failures. The cases of high-definition television and the new CD format are
recent examples where firms wanted to avoid investing in very expensive alternative
designs and prevent a risky battle over a dominant design. Rather, the firms negotiated the
dominant design ex-ante and then competed on marketing and manufacturing capabilities.
Other scholars have also highlighted this complex interplay of social actors in
explaining dominant designs. Lee, O’Neal, Pruett, and Thomas (1995), building on
Tushman and Rosenkopf (1992), observe that often not only firms and customers but also
such actors as local, state, and federal governments have substantial interest in and
preference for particular designs. The greater their clout from purchasing or
regulatory powers, the greater the role of such actors in determining which design will
become the dominant one. The Federal Communications Commission, which grants TV
station licenses throughout the U.S., had enormous influence both in determining the
rules for selecting a high-definition TV standard and finally in selecting the standard
itself.
Finally, Miller et al. (1995) conclude from their detailed case study of the flight
simulator that in the case of low-volume and high unit-cost products, dominant designs
do not emerge through a market mechanism but are rather negotiated by a diverse set of
actors who have a stake in the technology. To summarize, when products are complex,
and/or when governments have regulatory authority, and/or when particular users have
enormous purchasing clout, and/or when unit costs are very high, dominant designs tend
to be negotiated ex-ante.
e. Boundary Conditions of the Theory
The current literature on dominant designs is also marked by significant
disagreements about the range of phenomena dominant design theory is designed to
explain. Related to differences in the way dominant designs are defined and the unit of
analysis at which they are studied, authors display a great variety of conceptions of the
limits of dominant design theory. Anderson and Tushman (1990) take the broadest
perspective. For them dominant design theory applies to the evolution of all technologies
that are free from patent interference. As long as the normal competitive forces are
allowed to shape the development of technology, dominant designs can be expected to
emerge.
In their original formulation of dominant designs, Abernathy and Utterback (1978)
took a narrower view. They limited the range of phenomena the theory was designed to
explain to industries characterized by a highly complex production process in which
multiple inputs are combined into a highly valued product whose characteristics may
be varied. Abernathy and Utterback put particular emphasis on the requirement that it
must be possible to make the final product in a variety of ways, allowing firms to
differentiate the product along a number of dimensions.
In his recent writings with Suarez, Utterback has adopted a somewhat different
view from the one expressed in his early work with Abernathy (1978). Utterback and
Suarez confine dominant design theory to the manufacturing sector (1993), and in their
later paper (1995) take an even narrower view, restricting the theory to complex
assembled products. This is a substantial departure from Utterback's earlier work with
Abernathy (1978), where the authors speculate that the concept of a dominant design might
be useful in the communication industry and in certain health care services (p.84).
For Teece (1986) dominant design theory is limited to mass markets where
consumer tastes are relatively homogeneous. While Nelson (1995) also stresses
uniformity of consumer demand as a condition for the emergence of dominant designs, he
does not express the idea that dominant design theory applies only to mass markets.
However, in his view the empirical validity of the theory is limited to systemic technologies.
We believe that at the present time researchers do not have sufficient evidence to
conclude under what circumstances dominant designs appear. It is important, however, to
distinguish between two separate questions that are often conflated in this context: Under
what circumstances do dominant designs emerge, and under what circumstances does the
emergence of a dominant design have a strong impact on industry dynamics? The
second question goes to the heart of whether standardization will produce big winners
and losers, and thus matter in a profound sense for the industry participants. Dominant
designs will clearly have a greater impact on industrial dynamics when it is more costly
and difficult for a firm to switch to the dominant design after its own design was rejected
in the market. Similarly, behind Teece’s and Nelson’s emphasis on uniform consumer
demand appears to lie the intuition that the greater the uniformity of consumer demand,
the greater is the potential that a dominant design will have a significant negative impact
on the firms that invested in designs that did not become dominant.
2. Survey of Writings on the Phenomenon of Dominant Designs in Other Fields
To construct a model of dominant designs that brings more clarity to the existing
literature and that can guide future research efforts, we will now examine how scholars in
other academic disciplines have conceptualized technical change and phenomena of
variation and standardization in design. Scholars in other fields have developed ideas that
can help us formulate a model of dominant designs that is both conceptually sharper and
more descriptive of actual technological dynamics.
a. Economics
i. Review of Literature on Standards in Economics
Economists have also been confronted with the pervasive phenomenon of variety
giving way to standardization and its dramatic implications for the economy. Since the
early 1980s, the economics discipline has shown renewed interest in the process of
standard setting and its effects on industry structure and social welfare. In their review
article on the standards literature, David and Greenstein (1990) found that researchers
typically distinguish between three different kinds of standards, namely reference,
minimum quality, and interface or compatibility standards. The first two kinds provide
signals for consumers that a given product conforms in content and form to specific
characteristics. In this manner reference and minimum quality standards reduce
transaction costs for consumers because they need to engage in less product evaluation.
Interface and compatibility standards afford the consumer an assurance that intermediate
products or components can be successfully incorporated into a larger system.
In addition, writers in this tradition have distinguished between four different
mechanisms by which standards come into existence. The first process, called
"unsponsored" standard setting, describes a state of affairs where no party can be
identified as having had a clear proprietary interest in creating a particular standard, but
the standard nevertheless exists well-documented in the public domain. The second
mechanism is commonly referred to as “sponsored” standard setting. In this case, one or
more sponsoring entities which hold a direct or indirect proprietary interest create an
inducement for other firms to adopt a particular set of technical specifications. The third
mechanism for creating a standard involves negotiation, agreement, and publication
under the umbrella of voluntary standards-writing organizations. The wide variety of
standards that are written with the help of the American National Standards Institute
(ANSI) fall into this category. The fourth distinct mechanism comes under the title of
mandated standards, which are promulgated by governmental agencies endowed with
some regulatory authority. Because the first two standard-creating mechanisms involve
market-mediated processes, they are generally referred to as de facto standards. The
latter two typically originate from political “committee” deliberations or administrative
procedures. These procedures may be influenced by market forces, but not as directly
as the first two mechanisms. For that reason the latter two are often loosely referred to
as de jure standards, although only the last kind is backed by the power of
the law.
To show how this typology of standards is related to the literature on dominant
designs, we have devised a table that maps examples of dominant designs into the
typology of standards presented by David and Greenstein (1990). As the dominant design
literature has not been concerned with reference or minimum quality standards, we have
developed examples of these categories to illustrate how they differ from compatibility
standards.2
2 We would like to thank Shane Greenstein for helping us develop examples for the different kinds of standards delineated in the David and Greenstein (1990) typology.
Table: Examples mapped into the David and Greenstein (1990) typology of standards

De Facto Standards (market mediated)

Unsponsored Standards
• Reference: grades of sulfuric acid (Kreps, 1938)
• Minimum quality: wool, wood, and metal minimum quality standards
• Interface/compatibility: QWERTY keyboard layout (David, 1985); nuts and bolts standards; shape of the metal electrodes on electric plugs

Sponsored Standards
• Reference: AAA guidebook to hotels; Michelin star ratings of European restaurants and hotels (Greenstein, 1996, personal communication)
• Minimum quality: Underwriters Laboratories electrical appliance certification (Greenstein, 1996, personal communication)
• Interface/compatibility: VHS versus Beta (Cusumano et al., 1992); AC/DC power standard (Hughes, 1983)

De Jure Standards

Negotiated Standards through Voluntary Standards Organizations
• Reference: Universal Product Code for groceries created by the Grocers Association (Greenstein, 1996, personal communication)
• Minimum quality: CPA certification standards; AMA board certification (Greenstein, 1996, personal communication)
• Interface/compatibility: facsimile standards (Baum et al., 1995); the American Standard Code for Information Interchange (ASCII)

Mandated Standards
• Reference: army boot specifications (Greenstein, 1996, personal communication)
• Minimum quality: meat grade standards; EPA pollution standards
• Interface/compatibility: United States HDTV standard; interconnection standards for customer premise phone equipment mandated by the FCC
Most of the writings in the economics literature on standards have focused on
interface and compatibility standards, paying little attention to the economics of reference and
minimum quality standards. One of the central topics in this economics literature on
standards has been network externalities and increasing returns to adoption (Arthur,
1989; Arthur, 1988). As mentioned earlier, some writers on dominant designs have used
concept of network externalities to explain why a particular design becomes the dominant
one. As Wade (1995) noted, most of the research in the economics literature has been
focused on the mathematical modeling of network externalities and increasing returns.
Only a few studies have explored empirically why a particular technology becomes
dominant and drives all competitors from the market or relegates them to very small
niches. David’s (1985) case study of how QWERTY emerged as the standard for the
keyboard layout although it was technically inferior to the DVORAK keyboard design
illustrated how early advantages in adoption rates can make an inferior technology into
the industry standard. Saloner (1990) investigated the battle between different operating
systems for UNIX-based computers and documented in great detail how the rivaling
firms formed two large coalitions to make their preferred UNIX version the industry
standard. Recently, Saloner and Shepard (1995), while not directly investigating which
ATM technology was adopted, have provided econometric evidence that network
effects and economies of scale explain how quickly banks adopted ATMs.
Indirectly, this study provides some evidence that network
externalities and economies of scale are important drivers of why a particular design
becomes the dominant one.
Given that both are focused on the elimination of design variants, one may ask
what the dominant design literature can learn from the economics literature on
standards. The economics literature suggests that it is analytically useful to distinguish
between different kinds of standards. As compatibility standards involve a range
of issues (e.g., coordination across technologies) that are absent from reference and
minimum quality standards, economists have found it useful not to treat every standard
alike but to investigate their creation and their impacts separately. Until now researchers
on dominant designs have not attempted to differentiate systematically between different
kinds of dominant designs. Our review of the confusions in the existing dominant design
literature suggests that it may be extremely useful to distinguish between different kinds
of dominant designs, just as researchers on standards have found it expedient to
distinguish between different kinds of standards. Treating dominant designs as a collection
of “related animals” rather than “one kind of animal” appears to be a promising strategy
for advancing the current state of knowledge. Tushman and Rosenkopf’s (1992) typology
that distinguishes between technological products based on their complexity promises to
be a good starting point for getting a better handle on the diversity of phenomena
previous researchers treated under the same heading. Historians of technology have
already traveled a long way down the path of developing a conceptual framework that
incorporates differences in complexity between technologies. We will examine these
efforts after reviewing other work in economics that bears on the phenomenon of
dominant designs.
ii. Writings on Technological Change in Evolutionary Economics
Scholars in economics interested in technological change have developed a set of
concepts that to some extent overlap ideas lying behind dominant design thinking. Nelson
and Winter (1982) employ the phrase “natural trajectories” to describe the phenomenon
that technologies typically evolve by exploiting latent economies of scale and the
potential for increased mechanization of operations that were previously done by hand.
Nelson and Winter maintain that designers of a technology have, at every given point in
time, beliefs about what is technically feasible or at least worth trying out. Thus the
development of a technology is very much constrained and directed by the cognitive
framework that designers bring to the development situation. This idea of natural
trajectories parallels to a great extent what researchers on dominant design have written
about the era of incremental elaboration of dominant designs. Natural trajectories and
periods of incremental innovations of dominant design occur because it is economically
efficient to elaborate a design approach into which substantial resources have been
invested and which is already well understood. Only when further performance
improvements can no longer be achieved within the established design do engineers look
for fundamentally different design approaches.
Dosi (1984) elaborates the ideas of Nelson and Winter and describes in more
detail how natural trajectories are unseated by new ones. In his study of devices that
amplify, rectify and modulate electrical signals, Dosi examined the dynamics of how
thermionic valve technology (vacuum tubes or electronic tubes) was replaced by a new
trajectory based on semiconductor technology. Borrowing Kuhn’s ideas about the
evolution of scientific disciplines, he developed the ideas of technological paradigms and
technological trajectories.3 Dosi's definition of a technological paradigm is a
multidimensional construct, as he uses the concept to refer to a generic technological task,
the material technology selected to achieve the task, the physical/chemical properties
exploited, and the technological and economic dimensions and trade-offs focused on
(1982, p.153).
Dosi identifies two origins for new technological paradigms. Either designers
cannot improve a technology on the existing paradigm and therefore engage in
extraordinary problem solving to find a radically new solution for the generic
3 From a study of the evolution of farm tractors, locomotives, aircraft, tank ships, electric power generation systems, computers, and passenger ships, Sahal (1981; 1985) develops very similar concepts, which he calls "technological guideposts" and "innovation avenues." He also finds that certain design approaches serve as the starting point for incremental innovations over long periods until they are overthrown by other radical design approaches.
technological task (here he follows Nelson and Winter) or scientific breakthroughs may
open up new possibilities for achieving the technological task. In the second case,
innovative designers seize the opportunity and create a technological alternative to the
existing designs. Once designers adopt a new paradigm, they focus on incrementally
improving the technology along key dimensions identified by the paradigm. Dosi argues
that technological paradigms have a powerful exclusion effect: they focus the
technological imagination and the efforts of engineers as well as the organizations they
work for in rather precise directions while making them "blind" with respect to other
technological possibilities (1984, p.15). This exclusionary effect stabilizes the paradigm
even further and explains why technological evolution is so highly directional and only
under special circumstances shifts to a very different path.
Since there are always a number of different pathways that are technologically
possible on a given technological paradigm, what determines the selection of a particular
trajectory? Dosi maintains that economic forces together with institutional and social
factors operate as a selective device. For him the most important economic forces are the
pressures on firms to achieve adequate returns on their investments. Because of these
pressures, managers and designers pursue pathways that promise to bring about
marketable applications.
Nelson, Winter and Dosi, just like writers on dominant designs, conceptualize
technical evolution in terms of infrequent radical changes followed by long periods of
incremental changes. They identify economies of scale and the need of firms to develop
technology along the path that seems most profitable as important forces in shaping the
particular form a technology will take. There are, however, important differences in the
focus of these authors and the literature on dominant designs. Nelson, Winter and Dosi
place a much greater emphasis on the cognitive environment (codified and tacit
knowledge, heuristics, ideas, etc.) in which pieces of hardware are created rather than on
the actual shape and performance characteristics of the hardware itself. Thus Dosi’s idea
of a technological paradigm is very similar to the idea of a dominant design when
dominant designs are defined in terms of abstract technological principles (for example,
internal combustion versus the steam or electric engine). However, when dominant
designs are defined in terms of very specific characteristics of technological artifacts,
they are very different from what Dosi means by a technological paradigm, as
technological paradigms always refer to general technological principles, heuristics and
ideas that designers employ in creating a particular design rather than the design itself.
The difference between the two concepts has concrete empirical implications: a new
dominant design may replace an earlier one while remaining based on the same
technological paradigm.
This comparison illustrates how important it is to distinguish between dominant
designs that are defined at different levels of generality if one wants to avoid confusions
about when dominant designs appear and when they are toppled by
technological discontinuities. It is entirely possible for abstractly defined dominant
designs to remain unchanged while concretely defined dominant designs change.
b. History of Technology
Historians of technology have collected much evidence that variation and
selection processes have shaped the evolution of a wide variety of technologies. Gilfillan
(1935) pioneered the study of inventions with a systematic examination of the history of
ships. His study of the development of ships from the early canoe to the big motorships
of his day showed that screw propulsion emerged as the standard design for ships from a
number of possible propulsion technologies that apply power to water. It won against a
series of alternative designs: water jet (proposed in 1661), stern paddle-wheel (1737),
setting poles (1737), propeller (1753), artificial fins (around 1757), duck’s foot (1776),
side-wheels (1776), chain of floats (1782), oars (1783), reciprocating paddles
(1786), and central paddle wheel (1789) [1935, p.80]. Gilfillan also discovered that a
number of alternative sources of power (human, animal, wind, steam, internal
combustion) were put into use before the internal combustion engine emerged as the typical
choice for ships in the early decades of the twentieth century. When traditional wood was
challenged by metals as the material for building ship hulls, it was the lighter steel rather
than the heavier iron that became the dominant building material for this part of the ship
(1935, p.149).
In his recent work that develops a general theory of technological evolution based
on the concepts of diversity, continuity, novelty, and selection, Basalla (1988) found that
of the “nearly 350 [nuclear] reactors operating in the world, about 70 percent of them are
of the light-water type” (p.166). He also documented that in the early days of the railway,
engineers experimented with a number of propulsion systems. One approach was to build
atmospheric railways, which were propelled by pressure created in a tube located in the
middle of the track. This design, however, lost against the now-established standard of
locomotive powered railways (p.177).
The case study of David Noble (1984) on the emergence of automatically
controlled machine tools provided detailed evidence that a particular design approach can
become the dominant one because actors form powerful coalitions to create a standard
and not because one design is technically superior to all the rest. According to Noble’s
account, the technological discontinuity that made it possible to design machine tools that
were not entirely guided by a skilled operator created a number of alternative possibilities
for automating machine tools. For Noble the decisive factor in the victory of numerical
control (NC) against record-playback (R/P) was the powerful coalition of academics and
the Air Force (with its enormous budget and the resulting market power) that was able to
impose its technological preference for a standard on the market. The proposition that
technologies are not selected on purely technical grounds is also supported by Hughes’s
(1983) account of the victory of alternating current (AC) against direct current (DC)
electric power generating systems and by Aitken’s (1985) analysis of how vacuum tube
radio transmitters won the competition against other continuous wave technologies (arc
and the alternator) as well as against the earlier discontinuous spark-gap designs.
David Landes’s (1983) work on the evolution of watches described how watch
technology advanced incrementally at the system level over many centuries until quartz
technology introduced a discontinuity at the system level that relegated the old mechanical
and semi-electric watch technologies to niche markets. Quartz technology proposed
alternative technological approaches for every major functional subsystem of the watch,
eliminating the need for any moving part in the system. Landes found that such a system
level revolution created enormous organizational difficulties for existing firms. The
difficulties were quite different from those created by technological discontinuities that
had previously been limited to an individual subsystem. Although the R&D labs of leading
Swiss firms in the industry had pioneered the revolutionary quartz technology or at least
mastered its underlying principles, these Swiss firms were initially unable to incorporate
the new technology into their products and thus gave away the markets to firms located in
the Far East. For researchers of dominant design one of the most important insights to
take from Landes’s study of watch technology is the distinction between a system level
revolution and one that is confined to a particular subsystem. If the evolution of watches
is representative of other complex technologies, the potential effects of each kind of
revolution on various industrial outcomes appear to be very different.
We have already drawn on Vincenti’s writings (1990; 1991; 1994) on the history
and methodology of aeronautical engineering in our case study at the beginning of the
paper. Vincenti needs to be discussed in our present review of work by historians of
technology because he introduced into this literature the concept of an operational
principle. The concept is so useful for analyzing technologies that it merits inclusion in
the toolbox of every student of technological change. Because it has received
no attention in the dominant design and related literatures on technology management, it
deserves special attention here. The concept was originally developed by Polanyi (1962)
in the context of developing a theory of how human beings know things. Polanyi found it
useful to define an operational principle with reference to patents.
A patent formulates the operational principle of a machine by specifying how its characteristic parts—its organs—fulfill their special function in combining to an overall operation which achieves the purpose of the machine. It describes how each organ acts on another organ within this context. (1962, p.328)
Polanyi’s formulation of an operational principle brings together the ideas of parts,
linkages and technological goals to describe the essence of how technological artifacts
work. For Polanyi, an operational principle captures the kind of knowledge a human
designer must have in order to build a technological device that works on physical nature
in a desired way. To put it differently, an operational principle defines how the parts
interact with one another in order to implement the goal of the overall technology. Consider
the example of the principle underlying the first successful human flight. Instead of trying
to design a flying machine where flapping wings would provide both the counter force to
gravity and forward thrust, Cayley proposed in 1809 to separate lift from propulsion by
using a fixed wing and by propelling it forward with motor power. The central idea was
that moving a rigid surface through resisting air would provide the upward force
countering gravity. As Vincenti (1990) has noted, this was a radically different way to
conceptualize the design of an airplane because it freed designers from the impractical
idea of flapping wings. Subsequently, the fixed-wing and forward propulsion idea
became the operational principle underlying all airplane designs.
When human beings have grasped the operational principle of a technology they
know how an artifact can act on nature in a special beneficial way. Because an
operational principle essentially specifies the way components need to be arranged in
order to create a successful artifact, operational principles reveal the abstract logic of how
an artifact works. Since an operational principle represents the defining principle of an
artifact, it provides the ideal starting point for understanding what the essential aspects of
a particular technology are and how the technology works.
Another especially useful feature of the operational principle concept is that it
allows us to compare different technologies by probing whether they work according to
the same operational principle. For instance, planes and helicopters, both devices for
air travel, differ in terms of how they achieve the general task of transporting humans in
the air. While a plane accomplishes flight by separating the propelling function and the
lifting function into two separate components (the propeller or jet and the wings), the
helicopter realizes movement in the air by implementing the lifting and propelling
function in one and the same component, the large overhead rotor. Rockets, another class
of devices for traveling, make air travel possible by allowing expanding combustion gases to
escape only through the rear of the device, thus propelling the device forward. Rocket
propulsion requires neither wings nor propellers. The example of these three principles
underlying air travel demonstrates how operational principles allow the student of
technology to categorize a set of artifacts into general classes. This is useful for research
on dominant designs because it makes it possible to find fundamental variation in the
design of technologies that fulfill the same general purpose as shown by the example of
the different solutions for transporting humans through air. The idea of an operational
principle is very much related to Clark’s (1985) idea of a core design concept. We
believe, however, that the idea of an operational principle is more revealing and has
greater analytical power because it refers not only to concepts but also to actual
knowledge of how an artifact is made to work. The notion of an operational principle is
closer to functioning artifacts than the notion of a core design concept because
the latter also includes those ideas that never work out when they are tried in a real
artifact.
To summarize, the concept of an operational principle has three very beneficial
properties for research on technological evolution:
1. By encapsulating the essence of what makes a particular artifact work, it helps the student of technology gain a deep yet relatively easy-to-acquire understanding of the artifact.
2. By bringing components and linkages simultaneously into view, it ensures that the analyst will not miss the important role linkages can play.
3. By specifying in general terms the nature of an artifact, it provides the researcher with a useful guideline for classifying variants of a particular technology.
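The classificatory use of operational principles can be sketched in code. The sketch below is a minimal illustration of the plane/helicopter/rocket comparison above; the two-attribute encoding, the attribute names, and the string values are our own illustrative assumptions, not taken from Polanyi or Vincenti.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationalPrinciple:
    """Abstract logic of how components combine to achieve the artifact's goal."""
    lift: str        # how the force countering gravity is produced
    propulsion: str  # how forward motion is produced

# Illustrative encodings of the three air-travel principles discussed above.
PLANE = OperationalPrinciple(lift="fixed wing", propulsion="propeller or jet")
HELICOPTER = OperationalPrinciple(lift="rotor", propulsion="rotor")
ROCKET = OperationalPrinciple(lift="reaction thrust", propulsion="reaction thrust")

def same_principle(a: OperationalPrinciple, b: OperationalPrinciple) -> bool:
    """Two artifacts fall into the same class iff they share an operational principle."""
    return a == b
```

Note how the encoding makes the textual argument mechanical: the plane separates lift from propulsion into distinct attributes, while the helicopter implements both in the same component.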
Scholars working in the history of technology have faced many of the
same problems that have plagued researchers on dominant designs, namely, finding a
powerful conceptual framework for studying a diverse set of technologies. With the
contributions of a number of leading scholars such as Constant (1980), Hughes (1983), Aitken
(1985) and Vincenti (1990, 1994), this field has developed a general consensus that
technologies are best conceived as systems that are composed of multiple levels of
subsystems. Although historians of technology may differ somewhat in terms of how
they use the concept of a system (some restrict their systems analysis to physical artifacts,
others include bodies of knowledge as well as ideas on which these artifacts are based, and
there are also those who bring into the analysis the human actors who are involved in the
system), they have all come to view system concepts as an indispensable tool for
studying and understanding technologies that are composed of multiple components.
The system framework as developed by historians of technologies involves a
number of important ideas. Any technology that is made of more than one part can be
analyzed in terms of components and multiple levels of subsystems as the artifact gets
more complex. Hughes (1983) points out that “in any large system there are countless
opportunities for isolating subsystems and calling them system for the purpose of
comprehensibility and analysis” (p. 55). Based on his experience with many different
technologies, Hughes recommends that “analyzers of systems should make clear, or at
least be clear in their minds, that the system of interest may be a subsystem as well as one
encompassing its own subsystems” (p. 55). This nested hierarchy that incorporates basic
components through multiple levels into a complete system requires linkage mechanisms
that are often more important than components themselves, as Hughes (1983) illustrated
for the case of power networks. There are two fundamentally different kinds of links: vertical
links connect components with different functions to create an integrated functional
whole, while horizontal links connect components of the same kind or function to
provide the system with larger capacity or stability. Linkages give the collection of
components a clear structure or configuration.
Figure 8: Illustration of a Four-level Nested Hierarchy (system level, first-order subsystems, second-order subsystems, component level)
The hierarchical composition principle implies that smaller systems (subsystems)
yield control to the larger systems or, to put it in other words, smaller systems exist to
fulfill a function in the larger system. When systems are centrally controlled, the
subsystem that is given the central control is of special importance.
Drawing on the work of Simon (1981) and particularly on Campbell (1974),
Constant (1980) argues that the idea of a nested hierarchy of variation and selective-
retention mechanisms helps us come to terms with the nature of the innovations that shape the
evolution of complex systems. In systems technologies, innovations can occur at many
points: at individual components, linkages, and at multiple levels of subsystems.
Innovations at each of the locations in the systems hierarchy are shaped by the
evolutionary logic of variation, selection, and retention. Constant’s (1980) study of a
radical innovation in a major functional subsystem of airplanes, the engines, showed how
the success of the radical innovation that introduced jet technology into the engine
subsystem depended on complementary changes in other subsystems. Constant used this
example to illustrate the general idea that many components co-evolve as designers try to
upgrade the performance of the overall system. The important conceptual feature of this
nested hierarchy of variation and selective retention mechanisms is that technological
selection criteria at lower levels have to be consistent with higher level selection criteria
to maintain a functional overall system.
Vincenti’s work (1994; 1991) on the history and the method of aeronautical
engineering has clarified how design activity itself has a hierarchical structure in the
sense that some design parameters have to be specified first and others later (linear time)
while some can be specified concurrently (parallel time). This hierarchical structure is
necessary to break a complex task into manageable tasks. (For important contributions to the
literature on the hierarchical nature of design activity see also Clark (1985), Alexander
(1964), Booker (1962), and Marple (1961).) Vincenti describes the stages of this
hierarchical activity as follows: First designers define the project in general terms. Then
they translate the project definition into a plan that specifies the overall dimensions and
configuration of the artifact. Next, the designers specify in general terms the operational
principles of the major subsystems. The smaller components that make up the major
subsystem can then be designed in parallel by different design teams. The team that is
charged with designing the landing gear of an airplane, for example, receives general
specifications as to what dimensions the landing gear cannot exceed and what minimum
performance requirements the landing gear must meet. These broad performance
characteristics leave ample room for designers to create many different landing gears
from which they have to select one. In this sense an innovation episode in landing gears
is nested in the overall technical development of an airplane.
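The hierarchical design process Vincenti describes can be sketched as code: top-level decisions are made sequentially ("linear time"), and component teams then work concurrently ("parallel time") within the constraints handed down. All names, dimensions, and component lists below are illustrative assumptions, not taken from Vincenti's account.

```python
from concurrent.futures import ThreadPoolExecutor

def design_component(name: str, envelope: dict) -> str:
    # Stand-in for a design team producing one component within its constraints.
    return f"{name} designed within a {envelope['max_dim']} m envelope"

def design_airplane() -> list[str]:
    # Stages in "linear time": project definition, then overall plan/configuration.
    project = {"purpose": "passenger transport"}            # stage 1: general terms
    plan = {"max_dim": 3.0, "purpose": project["purpose"]}  # stage 2: dimensions
    # Later stages in "parallel time": component teams work concurrently
    # within the specifications handed down from the plan.
    with ThreadPoolExecutor() as pool:
        jobs = [pool.submit(design_component, name, plan)
                for name in ("landing gear", "wing", "engine mount")]
        return [job.result() for job in jobs]
```

The structure mirrors the text: the plan must exist before any component job can be submitted, but the landing gear, wing, and engine mount jobs are independent of one another.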
For the student of technology it is thus useful to envision a hierarchy of
operational principles parallel to the hierarchy of subsystems. The hierarchy of
operational principles has one crucial feature that needs to be made explicit: Higher level
operational principles set the technical agenda for lower level operational principles. In
the early history of the automobile, engineers had three general options at their disposal
for designing the motor subsystem of cars (Basalla, 1988). These were the steam engine,
the electric motor, and the internal combustion engine. While the general operational
principle of an automobile requires a motor that delivers traction power to the wheels, it
does not specify which one of the three alternatives should be used. However, the choice
of any one particular engine type has dramatic consequences for the technical agenda of
the engine’s components as the three operational principles underlying the engine types
imply very different components. To state it a bit more abstractly, for a general
technological task there are typically alternative operational principles available for
achieving the task. However, once a particular operational principle is chosen, it sets very
specific requirements for lower level operational principles. Thus higher level operational
principles provide the task environment which engineers of lower level components take
as a given when they search for alternative ways of making a given subsystem work.
However, it is important to recognize that operational principles at any level in the
hierarchy are typically general enough to allow variation in the way they are
implemented in a concrete design.
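The automobile example above can be sketched as a small lookup: the higher-level choice of an engine principle fixes the technical agenda for lower-level components. The component lists are loose illustrations of our own, not engineering specifications from Basalla.

```python
# Hypothetical sketch: each higher-level operational principle (engine type)
# implies a very different set of lower-level component-design tasks.
ENGINE_AGENDAS = {
    "steam engine": ["boiler", "firebox", "piston", "condenser"],
    "electric motor": ["battery", "armature", "commutator"],
    "internal combustion": ["carburetor", "spark plug", "crankshaft"],
}

def component_agenda(engine_principle: str) -> list[str]:
    """Choosing a higher-level principle fixes which lower-level tasks exist."""
    return ENGINE_AGENDAS[engine_principle]
```

The general automobile principle (a motor delivering traction power to the wheels) is silent on which key to use; only once a key is chosen does the lower-level agenda become definite.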
Constant (1980) and Vincenti (1990) conclude from their studies that radical
innovations can occur at all levels in the systems hierarchy. Unfortunately, historians of
technology, just as writers in organization theory, have used the notion of a radical
innovations in at least two very distinct ways. The only common feature of these two uses
is that in both cases something big and exceptional happens. Radical innovations have
been defined either in terms of their antecedents (the scope of new knowledge required)
or in terms of their consequences (the increased performance they make possible).4
Table 3:

                                    Performance Improvement
                                    Low                             High
Scope of New      Small             Incremental Innovation          Radical Innovation (Sense 1)
Knowledge         Large             Radical Innovation (Sense 2)    “Super” Radical Innovation
Given these two different dimensions of radicalness, it is possible that an innovation is
incremental in terms of the new knowledge required but radical in terms of the
additional performance achieved, and vice versa. When one distinguishes between these
two dimensions of radicalness, it becomes obvious that innovations that require large
amounts of new knowledge and create large performance improvements have a particular
potential to transform industrial dynamics. Hughes (1983) is one of the few scholars who
4 See Ehrnberg (1995) for a very good discussion of the confusion between the two meanings of radical innovation in the innovation studies literature. See also Levinthal (1998) who develops important
makes clear in his study of the development of electrical power systems that he defines a
radical innovation as one which replaces every major component of the system with
designs that are based on new technological principles. Most scholars, however, do not,
and thus it is difficult to interpret in what sense they see an innovation as being radical.
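The two-dimensional typology of Table 3 can be expressed as a small lookup function. The cell labels come from the table; the function name and the argument encoding ("small"/"large", "low"/"high") are illustrative assumptions.

```python
def classify_innovation(new_knowledge: str, performance_gain: str) -> str:
    """Classify an innovation on the two dimensions of radicalness (Table 3).

    new_knowledge: "small" or "large" (scope of new knowledge required)
    performance_gain: "low" or "high" (performance improvement achieved)
    """
    table = {
        ("small", "low"): "incremental innovation",
        ("small", "high"): "radical innovation (sense 1)",
        ("large", "low"): "radical innovation (sense 2)",
        ("large", "high"): "'super' radical innovation",
    }
    return table[(new_knowledge, performance_gain)]
```

Making the two dimensions explicit arguments forces an analyst to say in which sense an innovation is radical, which is exactly the discipline the text argues most scholars lack.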
How does this typology map onto the concept of a systems hierarchy? In terms of
the new-knowledge dimension of innovation, moving up the systems hierarchy (i.e.,
encompassing more and more components) by definition means that an innovation is
becoming more radical because more and more components are being designed based on
new principles. Yet this is not true for the performance dimension of innovation. Here
innovations that occur at lower levels (recall the impact of leaded fuel on the
performance of airplanes) can have more radical consequences than innovations that
involve the entire system.
This discussion makes clear that it is important for a model of dominant designs
to recognize that radical innovations (in terms of both dimensions) can occur at the
individual component, individual subsystem or at a higher level of aggregation.
Furthermore, a refined model of dominant design should be able to take into
consideration that technologies can be studied at higher and lower levels of abstraction.
In his study of radio technology, Aitken (1985) has pointed out that at a high level of
abstraction the arc, the alternator, and vacuum tube transmitters were functionally
equivalent because they all produced continuous waves. At a lower level of abstraction,
however, they were very different because they had “different technical genealogies, they
represented different configurations of information, and they were managed in different
ways and with different degrees of success” (1985, p. 549). Technological competition
between alternative designs can take place in terms of very general or very specific
design parameters. Because individual organizations always produce specific and not
abstract designs, scholars who are interested in understanding the impact of technological
events on the fate of organizations should work with a framework of dominant designs
that can distinguish between dominant designs based on abstract and specific design
parameters.
theoretical implications of the two meanings of radical innovation for evolutionary theories of industrial change.
IV. Towards a Refined Model of Dominant Designs: A Hierarchy of Technology Cycles

After having analyzed the confusions in the dominant design literature and
canvassed other literatures on technological evolution for useful analytical ideas, we are
now in a position to formulate a refined model of dominant designs that brings more
clarity and analytical power to the study of technological change. The review of the
literature has shown that while the Anderson and Tushman (1990) technology model is
a very good framework for studying simple technologies, it is rather crude for more
complex technologies that are composed of many parts. Since the vast majority of
technologies studied in the dominant design literature are not simple technologies, a
refined model must be able to incorporate different degrees of complexity.
Figure 9: Anderson and Tushman’s (1990) Model of Dominant Designs. The technology cycle over time: Technological Discontinuity 1 opens an era of ferment, the emergence of Dominant Design 1 opens an era of incremental change, and Technological Discontinuity 2 begins the next cycle.
a. General View of Technological Artifacts

Technological artifacts are systems composed of components, linkages, and
multiple levels of subsystems. The simplest technology is a special one-level
“system” consisting of only one component. Technological artifacts are structured in
terms of a hierarchy where components are nested in subsystems, smaller subsystems in
larger subsystems, and finally the highest-level subsystems in the overall system
(Christensen and Rosenbloom, 1995; Clark, 1985; Alexander, 1964). Together
components, subsystems, and linkages articulate the configuration or architecture of the
overall artifact (Henderson and Clark, 1990). Most components interact only indirectly
with one another, namely as members of subsystems (Simon, 1981). Components and
subsystems that have more linkages to other components and subsystems are more core,
while those that have fewer linkages are more peripheral. When linkages themselves are
made of multiple components, they form a subsystem in their own right. Linkages may be more
important than the components themselves (Hughes, 1983).
Figure 10: Patterns of Interaction in a Systems Hierarchy (interaction boundaries across the system level, first-order subsystems, second-order subsystems, and component level)
Technological artifacts are characterized by operational principles that reveal the
abstract logic of how components interact to fulfill the characteristic goal of the
technology (Polanyi, 1962; Vincenti, 1990). Parallel to the structural hierarchy of the
system, technological artifacts embody a nested hierarchy of operational principles.
Higher level operational principles set the technical agenda for lower level operational
principles.
b. Nature of Technical Change

Technological change occurs through repeated sequences of short periods of
radical change followed by long periods of incremental change (Constant, 1980; Nelson
and Winter, 1982; Dosi, 1982; Anderson and Tushman, 1990). Change can be radical in
terms of the new knowledge required or in terms of the increased performance achieved
or both. Technological systems provide many points at which technical innovations can
take place. Innovations can be localized to a particular component. They can take place at
several components at the same time. They may be confined to a particular subsystem or
involve a number of subsystems simultaneously. At all these points, technological change
can be radical or incremental (Constant, 1980, Vincenti, 1990).
Thus technological change can be conceptualized as a nested hierarchy of
technology cycles. Individual technology cycles are constituted by two recurrent stages,
an era of ferment and an era of incremental variation. The era of ferment begins with a
technical discontinuity and ends with the selection of a dominant design. After the
emergence of a dominant design (one design approach captures over 50% of the market),
technical evolution is characterized by a period of incremental variation until another
technical discontinuity triggers a new era of ferment. This restarts the cycle. Component
technology cycles are nested in subsystem technology cycles and subsystem technology
cycles are nested in the system technology cycle.
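The nested hierarchy of technology cycles described above can be sketched as a recursive data structure. This is a minimal sketch under simplifying assumptions (a two-stage cycle per node, no market-share bookkeeping); all names are our own illustrative choices.

```python
from dataclasses import dataclass, field
from enum import Enum

class CycleStage(Enum):
    FERMENT = "era of ferment"                 # variation, substitution, competition
    INCREMENTAL = "era of incremental change"  # retention, elaboration

@dataclass
class TechnologyCycle:
    """A technology cycle at one node of the nested systems hierarchy."""
    name: str
    stage: CycleStage = CycleStage.FERMENT
    subcycles: list["TechnologyCycle"] = field(default_factory=list)

    def select_dominant_design(self) -> None:
        # The selection of a dominant design ends the era of ferment at this node.
        self.stage = CycleStage.INCREMENTAL

    def discontinuity(self) -> None:
        # A technological discontinuity restarts the era of ferment at this node.
        self.stage = CycleStage.FERMENT

    def nodes_in_ferment(self) -> list[str]:
        """Names of all nodes in the hierarchy currently in an era of ferment."""
        found = [self.name] if self.stage is CycleStage.FERMENT else []
        for sub in self.subcycles:
            found.extend(sub.nodes_in_ferment())
        return found
```

Because each node carries its own stage, the structure captures the model's central point: a dominant design at the system level can coexist with ongoing ferment in a component subcycle, and a discontinuity at any node restarts the cycle there without resetting the rest of the hierarchy.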
Figure 11: A Refined Model of Dominant Design: Nested Hierarchy of Technology Cycles. At every level of the hierarchy (system, first-order subsystems, second-order subsystems, basic components) the technology cycle repeats: a technological discontinuity (variation) opens an era of ferment (substitution, competition), the selection of a dominant design opens an era of incremental change (retention, elaboration), and a new discontinuity restarts the cycle.
The more components and subsystems experience radical innovations based
on new operational principles concurrently, the greater the scope of the competency-
destroying innovation. The most radical change in a system occurs when all components,
linkages and subsystems are based on new technological principles. Such events are very
rare as technological evolution typically proceeds through a recombination of existing
components or subsystems with infrequent additions of entirely new components or
subsystems. Most of the time the novelty of an innovation is not based on new components
but on the way existing components are integrated into a functional whole (Simon, 1981).
Technological improvements that are accomplished by making changes only to an
individual component or an individual link are modular innovations, while innovations
that require changes in many components and linkages are systemic innovations (Teece,
1984; Langlois and Robertson, 1992; Garud and Kumaraswamy, 1993). An innovation in
a subsystem that leaves the links to the other high-level subsystems intact (Henderson and
Clark, 1990) is modular at the level where the subsystem is integrated into the overall
system. At the same time this innovation may require the redesign of components and
linkages within the subsystem that is the locus of innovation. Thus an innovation that is
modular at a higher level of aggregation may be systemic at a lower level of aggregation.
Innovations that involve the linkages between the highest level subsystems (first-order
subsystems5 or what Clark calls the main functional domains) often require sweeping
complementary changes at lower levels in the systems hierarchy. For this reason
designers often try to improve the performance of a technical system without changing
the linkages between the first-order subsystems (Iansiti and Khanna, 1995; Garud and
Kumaraswamy, 1995; Ulrich, 1995).
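The level-dependence of "modular" versus "systemic" can be made mechanical: the same innovation is modular where its changes stay inside one unit and systemic where they cut across several. This is a hedged sketch; the part names in the test case are invented for illustration.

```python
def classify_at_level(changed: set[str], units_at_level: dict[str, set[str]]) -> str:
    """Classify an innovation at one level of aggregation.

    changed: the set of basic parts the innovation alters.
    units_at_level: the units visible at this level, each a set of basic parts.
    Modular if all changes fall inside one unit, systemic if they cut across several.
    """
    touched = [unit for unit, parts in units_at_level.items() if changed & parts]
    return "modular" if len(touched) <= 1 else "systemic"
```

For example, a redesign touching only engine internals is modular when the units are first-order subsystems (engine vs. airframe) but systemic when the units are the engine's own components.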
c. Varieties of Dominant Designs

i. Levels in the Hierarchy

For every structural point in the system hierarchy dominant designs can emerge
after a period of variation. Dominant designs thus can occur at the level of a component6,
a linkage, a subsystem, or the system. System level dominant designs are very rare as
systems become more complex and offer many points for potential innovations.
Conceptually, a dominant design at the system level implies that no component, no
linkage, and no subsystem is experiencing a period of ferment. However, while there
5 As systems are composed of subsystems with different degrees of complexity, it is analytically useful to rank-order subsystems from the highest to the lowest level of aggregation. Thus the main functional domains are the first-order subsystems, a first-order subsystem is composed of second-order subsystems, and the nth-order subsystems are composed of the basic components.

6 Sometimes what is called a basic component in fact involves multiple design parameters. The hierarchy framework implies that in such a case a dominant design can emerge for these individual design parameters.
may not be a dominant design at the system level, dominant designs may concurrently
exist at the level of various components, linkages, and subsystems.
ii. Levels of Abstraction

Technological artifacts can be analyzed at different levels of abstraction (Aitken,
1985). Dominant designs can occur at both low and high levels of abstraction. When the
details of a design are ignored, designs can be viewed as identical although they are
dissimilar in their specifics. This implies that dominant designs may emerge at an abstract
level, while no dominant design may emerge at the most concrete level of design. Since
the idea of operational principles conceptualizes the artifact in an abstract way,
convergence of designers on a particular operational principle is an expression of
dominant designs at an abstract level of analysis. It is logically impossible for dominant
designs at a lower level of abstraction to emerge before a dominant design at a higher level
of abstraction. They can occur at the same time or later. This temporal relationship
between dominant designs at different levels of analysis helps to organize the
investigation of dominant designs.
d. Mechanisms Creating Dominant Designs

There is no one way in which dominant designs emerge. Superior performance
and early advantages in market share that give rise to cost advantages through
economies of scale and network externalities are important forces in determining which
design will become the dominant one (Arthur, 1989; Baum, Korn et al., 1995; Wade,
1995). As technological artifacts become more complex, political and sociological factors
will become more important in determining dominant designs (Tushman and Rosenkopf,
1992). Similarly, the higher the complexity, the higher the unit cost and the lower the
production volume of a technological artifact, the more important are non-market
mechanisms in determining which design dominates others (Miller, Hobday et al., 1995).
V. Linking Technological Dynamics to Organizational and Strategic Outcomes

Because of confusion in the literature on dominant design about the unit and
levels of analysis, the quality of the evidence linking technological and organizational
outcomes has been weaker than necessary and desirable. We have noted in the
introduction that the diversity of uses of dominant design concepts has prevented the
accumulation of a solid empirical foundation from which researchers could have
launched more sophisticated investigations. In this paper we have tried to formulate a
refined model of dominant designs that provides the groundwork for advancing the state
of the art of dominant design research. As researchers in organization theory and strategy
are not interested in technological dynamics for their own sake, the paper has done the
necessary preparatory work for the most important research task on
dominant designs: linking technological dynamics to organizational actions and
outcomes. We are convinced that the concept of a nested hierarchy of technology cycles
allows researchers to create stronger links between the technological changes taking
place in a technological artifact and the competitive implications these changes have
for industry participants.
We will now develop propositions7 about technological dynamics and about
important organizational outcomes that we hope will stimulate exciting new research.
a. Where are the important dominant designs located in the nested hierarchy of technology cycles?
Dominant designs at different locations in the system hierarchy are not equally
important in their power to influence dominant designs at other locations in the system
hierarchy and in their ability to affect industrial dynamics. Not enough
technological systems have been studied in sufficient detail and over long enough periods
to provide a detailed empirical picture of how the different technology cycles in
the nested hierarchy influence the emergence of dominant designs at the various units and
levels of analysis of the artifact. Nevertheless, the existing evidence allows us to formulate a few
general propositions that can be tested as more empirical evidence becomes available.
7 Given that any industrial phenomenon is susceptible to the influence of a wide range of variables, all our propositions are meant as “everything-else-being-equal” statements.
Hypothesis 1: Dominant designs at higher-level operational principles have a greater impact on the evolution of the system and on industrial dynamics than dominant designs at lower-level operational principles.
There are two important senses in which subsystems and components are either core or
peripheral in a system. In the first sense, components or subsystems are core because they
are structurally linked to many other components. Conversely, a peripheral component or
subsystem is linked to few other components or subsystems. In computer systems, for
example, the operating system and the microprocessor are core subsystems, while disk
drives, screens, and printers are peripheral subsystems.
Hypothesis 2: Dominant designs in structurally core components or subsystems have a greater impact on the evolution of the artifact and on industrial dynamics than dominant designs in peripheral components or subsystems.
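The notion of structural coreness can be sketched as a toy graph model (the subsystem names and link structure below are illustrative assumptions echoing the PC example above, not data from any study): representing the system as a graph and counting each subsystem's links gives a crude measure of how core it is.

```python
# Toy illustration: structural coreness as link count in a system graph.
# The subsystems and links are hypothetical, chosen to echo the computer
# example in the text (operating system and CPU as core subsystems).
from collections import defaultdict

links = [  # undirected structural links between subsystems
    ("operating_system", "cpu"),
    ("operating_system", "disk_drive"),
    ("operating_system", "screen"),
    ("operating_system", "printer"),
    ("cpu", "disk_drive"),
    ("cpu", "screen"),
]

degree = defaultdict(int)
for a, b in links:
    degree[a] += 1
    degree[b] += 1

# Rank subsystems from most to least structurally core.
ranking = sorted(degree, key=degree.get, reverse=True)
print(ranking[0])  # the most densely linked subsystem
```

In a richer analysis one would weight links by how tightly the design parameters constrain each other, but even raw link counts separate core from peripheral subsystems in this sketch.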
Components and subsystems can also be core in the second sense of constituting a
bottleneck (Rosenberg, 1969) or a reverse salient (Hughes, 1983) that holds back
the evolution of the system to higher levels of performance. Once a particular bottleneck
is solved, another point in the system will be the weak spot and become the central focus
of design activity.
Hypothesis 3: Core components or subsystems in the sense of constituting a bottleneck will shift over time.
Hypothesis 4: Designs that overcome bottlenecks are more likely to become associated with a dominant design arising somewhere in the system hierarchy than designs which do not.
When customers select between different designs of a technological artifact, their
decisions are influenced by only a small number of the design characteristics that make up
the artifact. To understand why some dominant designs are more important than others, it
is important to realize that in their buying decisions customers consider only the design
parameters they directly interact with, as opposed to those parameters that are located
inside the artifact. We call all those design parameters that a customer directly interacts
with—the shape, size, weight, the ease of handling the artifact, etc.—the product
interface. For analytical purposes it is useful to regard the interface as a proper subsystem that in
turn can be made up of lower level subsystems and components. Because the interface
constitutes the boundary between the customer and the internal organization of the
artifact, it plays a crucial role in the competition between alternative designs.8 We read
Cusumano, Mylonadis and Rosenbloom's (1992) account of the victory of VHS over
the Beta and Video 2000 video recorder designs as testimony to how an interface parameter
(in this case the format of the tape, which makes machines either compatible or
incompatible) can play a decisive role in determining the fate of the overall system
design. For all intents and purposes, the technology inside the different video
recorder systems did not matter for the outcome of this competition. We suspect that the
internal technology was not all that different, since Sony was later able to switch to the
production of designs with the VHS interface after having lost a lot of money in
trying to make the Beta system the dominant format. Thus there may have been a number
of dominant designs at design parameters inside the artifact that did not play a
significant role in shaping the success or failure of the different overall product designs in
the market.
Hypothesis 5: Interface dominant designs are more important than non-interface dominant designs in shaping the evolution of a product class and determining organizational outcomes.
The importance of the interface in the selection of designs is also brought into
focus by Sanderson and Uzumeri’s (1995) idea that the creation of product families can
bring great competitive advantages to firms. Sanderson and Uzumeri described how Sony
was able to create the large number of models in its Walkman product family by
designing mostly different casings while using largely identical components and
subsystems. The Sony Walkman case is an example of a product strategy in which firms
achieve economies of scale and scope by redesigning only the interface to satisfy more
8 The role of the interface in determining the selection of entire products and their underlying technology is so important a topic that it deserves more attention than we can give it in the context of the present paper. Here we can only sketch some of the ideas that need to be worked out in greater detail on another occasion.
closely the performance requirements of different user segments and by working with
standardized components and subsystems for the internal technology of the artifact.9
Hypothesis 6: Firms that are able to create different interfaces with the same internal design will be more successful than firms that create multiple interfaces based on different internal designs.
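The product-family strategy behind Hypothesis 6 can be sketched in code (the class and model names are hypothetical, loosely modeled on the Walkman example): many interface variants composed around one shared internal design.

```python
# Hypothetical sketch of Hypothesis 6: one standardized internal design
# reused across many products, each differing only in its interface.
class InternalDesign:
    """Shared internals: standardized components and subsystems."""
    def __init__(self):
        self.components = ("tape_transport", "amplifier", "motor")

class Product:
    def __init__(self, interface, internals):
        self.interface = interface  # casing, size, controls, etc.
        self.internals = internals  # shared across the whole family

shared = InternalDesign()           # designed and tooled only once
family = [Product(casing, shared)
          for casing in ("sports", "mini", "luxury", "budget")]

# Economies of scale and scope: every model reuses the same internals.
assert all(p.internals is shared for p in family)
print(len(family), "models built on", 1, "internal design")
```

The design choice the sketch captures is that the costly, standardized internals are produced once while the cheap-to-vary interface is tailored to each user segment.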
b. The Nested Hierarchy of Technology Cycles and Industrial Organization
The production of technological systems can be organized in multiple ways.
Every basic component and subsystem could in principle be made by a different firm
with one firm assembling the final product from first-order subsystems purchased in the
market. Alternatively, a single firm can fully integrate the production of a system from
the basic component to the full system within its boundaries. The hierarchical
organization of physical artifacts implies that there can be a parallel nested hierarchy of
producers and markets. Christensen and Rosenbloom (1995) have used the notion of a
value-network to highlight the important fact that firms can be located at many points of
the design hierarchy and be more or less vertically and horizontally integrated. In order to
understand the impacts of radical innovations and dominant design at various locations in
the systems hierarchy, it is important to distinguish between firms whose boundaries
circumscribe different components and subsystems. It is useful to distinguish between
component manufacturers, subsystem assemblers and final system integrators in trying to
develop general propositions about how radical innovations which can be more or less
systemic impact the fate of firms.
Hypothesis 7: Radical innovations in a particular subsystem will have greater negative effects on firms that assemble the particular subsystem, and on their network of suppliers, than on firms that are higher-level assemblers.
Hypothesis 8: Radical innovations in a particular subsystem will have greater negative effects on firms that assemble the particular subsystem and their network
9 Garud and Kumaraswamy (1995) provide a very careful exposition of the advantages of this strategy.
of suppliers than on firms that provide subsystems and components in other functional domains of the artifact.
Hypothesis 9: Systemic innovations favor firms that produce a wide scope of the components and subsystems that are affected by the innovation.
Hypothesis 10: Firms that produce core components and subsystems have greater influence on bringing about important dominant designs.
Hypothesis 11: Firms that produce core components and subsystems within their boundaries will be more successful than firms which do not.
These hypotheses are offered here to give researchers some ideas about how the literature on
dominant designs can be carried forward in a productive manner.
VI. Implications for R&D Managers and Public Policy Makers
Scholars of technical change have clearly established that there is an inherent
uncertainty in the innovation process (Rosenberg, 1996). The potential and the
implications of a new technology can never be fully predicted. However, as
McKelvey (1993) has pointed out, it is necessary to distinguish between true
uncertainty and self-imposed uncertainty. If firms do not organize themselves to
understand the inherent potential of existing technologies and the present needs of users,
they are living under a regime of self-imposed uncertainty. To create a dominant design
at any level in the design hierarchy and to appropriate rents, it is crucial that firms
establish R&D capabilities and marketing capabilities that can reduce the risk of
investing in a particular design.
To be effective, R&D managers must possess a sophisticated conceptual tool kit
for evaluating technological threats and opportunities. This paper has tried to pull together
useful concepts from a variety of literatures and develop a model of technological change
that emphasizes the hierarchical nature of technological evolution. By analyzing
technologies as systems composed of nested subsystems, components, and linkages,
R&D managers have a conceptual tool for understanding what aspect of a technology is
likely to be of key strategic importance. The goal for a sophisticated technology analysis
is to understand what parts of the technology are more core, exerting greater control over
the development of the rest of the system. In using this conceptual tool for analyzing PC
computer systems, for example, it becomes evident that the operating system is
structurally central. In controlling the interaction of many of the first-order subsystems,
the developer of the operating system (Microsoft) has the upper hand in forcing the
developers of other subsystems to follow a particular technical agenda. Its control of the
operating system also allowed Microsoft to capture a dominant position in business
application software because the firm was able to provide in-house application software
developers with advance and better information on how any application would have to
interact with the operating system in order to be functional. Lotus lost its formerly
leading position in spreadsheet applications for the PC to Microsoft because it was unable
to come out with a Windows version as quickly as Microsoft was able to do with its own
spreadsheet product. Similarly the developers of the Netscape internet browser
complained that Microsoft strategically put Netscape developers at a disadvantage by not
promptly disclosing the protocol that applications would need to follow to be fully
compatible with the operating system.
R&D managers not only have to analyze technological threats and opportunities
but they also need to communicate their strategic plans throughout the organization. The
concept of a hierarchy of technology cycles and such ideas as the operational principle of
technology offer a useful framework for explaining to the entire organization how the
firm intends to improve its technology. Once organization members conceptualize
technologies in terms of components, linkages, and hierarchies of subsystems, it is
possible to ask what components need to be made in-house and what components can be
bought from the market. When a component constitutes a key strategic asset for
protecting the value of the firm's technological competence, it is advantageous not to buy
the component in the market. Microsoft determined that the internet browser was a
critical new subsystem of a PC and decided that it needed to make a browser subsystem
in-house to protect the value of its existing technologies.
Dominant Design research has been chiefly concerned with the welfare of firms.
Undoubtedly it may be very profitable for a firm to establish a dominant design. But what
is good for an individual firm or an alliance of firms need not be good for society as a
whole. The dominant design literature to date has not engaged in serious reflection about
the implications for social welfare when firms establish dominant designs. There are two
sets of concerns that should be addressed. First, a firm may be able to establish a
dominant design that is substantially inferior to alternative technologies. When a
particular technology gets locked in prematurely such that alternative technologies are
given up before their full development potential is known, society may lose
technological options or not be able to switch to a more efficient technology without
enduring prohibitive costs. Second, although the prospect of establishing a proprietary
dominant design may provide firms with incentives to undertake risky research and
development work (and thus be beneficial from a social welfare point of view), social
welfare may be reduced once a firm controls a dominant design and no longer has an
incentive to develop the technology at the fastest pace possible. Society might be better off
if dominant designs are a public good rather than controlled by particular firms. The rapid
development of the internet, for example, can be traced in part to the fact that no single
firm to date has controlled the internet protocols that specify the language in which
information is sent across the worldwide web of computers. The hierarchical technology
cycle model makes it possible to begin a more nuanced debate about what aspects of a
technological system should be public.
Bibliography
Abernathy, W. J. (1978). The Productivity Dilemma: Roadblock to Innovation in the Automobile Industry. Baltimore, Johns Hopkins Press.
Abernathy, W. J. and K. B. Clark (1985). “Innovation: Mapping the Winds of Creative Destruction.” Research Policy 14: 3-22.
Aitken, H. G. J. (1985). The Continuous Wave: Technology and American Radio, 1900-1932. Princeton, New Jersey, Princeton University Press.
Alexander, C. (1964). Notes on the Synthesis of Form. Cambridge, MA, Harvard University Press.
Anderson, P. and M. L. Tushman (1990). “Technological Discontinuities and Dominant Design: A Cyclical Model of Technological Change.” Administrative Science Quarterly 35(December): 604-633.
Arthur, W. B. (1988). Competing Technologies: An Overview. Technical Change and Economic Theory. G. Dosi, C. Freeman, R. Nelson, G. Silverberg and L. Soete. London, Pinter.
Arthur, W. B. (1989). “Competing Technologies, Increasing Returns, and Lock-in by Historically Small Events.” The Economic Journal 99(394): 116-131.
Arthur, W. B. (1994). Increasing Returns and Path Dependence in the Economy. Ann Arbor, University of Michigan Press.
Barney, J. B. (1986). “Strategic Factor Markets: Expectations, Luck and Business Strategy.” Management Science 32(October): 1231-1241.
Basalla, G. (1988). The Evolution of Technology. New York, Cambridge University Press.
Baum, J. A. C., H. J. Korn and S. Kotha (1995). “Dominant Designs and Population Dynamics in Telecommunications Services: Founding and Failure of Facsimile Transmission Service Organizations, 1965-1992.” Social Science Research 24: 97-135.
Booker, P. J. (1962). “Principles and Precedents in Engineering Design.” The Engineering Designer: p. 30.
Campbell, D. T. (1974). Evolutionary Epistemology. The Philosophy of Karl Popper. P. A. Schilpp. La Salle, Ill., Open Court. 14: 413-463.
Christensen, C. M. and R. S. Rosenbloom (1995). “Explaining the Attacker's Advantage: Technological Paradigms, Organizational Dynamics, and the Value Network.” Research Policy 24: 233-257.
Clark, K. B. (1985). “The Interaction of Design Hierarchies and Market Concepts in Technological Evolution.” Research Policy 14: 235-251.
Constant, E. W., II (1980). The Origins of the Turbojet Revolution. Baltimore, The Johns Hopkins University Press.
Cusumano, M. A., Y. Mylonadis and R. S. Rosenbloom (1992). “Strategic Maneuvering and Mass Market Dynamics: The Triumph of VHS over Beta.” Business History Review 66(Spring): 51-94.
David, P. A. (1985). “Clio and the Economics of QWERTY.” American Economic Review 75(2): 332-337.
David, P. A. and S. Greenstein (1990). “The Economics of Compatibility Standards: An Introduction to Recent Research.” Economics of Innovation and New Technology 1: 3-41.
Dosi, G. (1982). “Technological Paradigms and Technological Trajectories.” Research Policy 11: 147-162.
Dosi, G. (1984). Technical Change and Industrial Transformation. New York, St. Martin's Press.
Ehrnberg, E. (1995). “On the Definition and Measurement of Technological Discontinuities.” Technovation 15(7): 437-452.
Farrell, J. and G. Saloner (1985). “Standardization, Compatibility, and Innovation.” Rand Journal of Economics 16: 71-83.
Freeman, C. and L. Soete (1997). The Economics of Industrial Innovation. Cambridge, MA, The MIT Press.
Garud, R. and A. Kumaraswamy (1993). “Changing Competitive Dynamics in Network Industries: An Exploration of Sun Microsystems' Open System Strategy.” Strategic Management Journal 14: 351-369.
Garud, R. and A. Kumaraswamy (1995). “Technological and Organizational Design for Realizing Economies of Substitution.” Strategic Management Journal 16(Special Issue Summer): 93-110.
Gilfillan, S. C. (1935). Inventing the Ship. Chicago, Follett Publishing Company.
Greenstein, S. (1996). “Empirical Examples of Different Kinds of Standards.” Personal Communication at Stanford University.
Hanieski, J. F. (1973). “The Airplane as an Economic Variable: Aspects of Technological Change in Aeronautics, 1903-1955.” Technology and Culture 14(4): 535-552.
Hannan, M. and J. Freeman (1989). Organizational Ecology. Cambridge, MA, Harvard University Press.
Henderson, R. M. and K. B. Clark (1990). “Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms.” Administrative Science Quarterly 35: 9-30.
Hounshell, D. A. (1984). From the American System to Mass Production. Baltimore, Johns Hopkins University Press.
Hughes, T. P. (1983). Networks of Power. Baltimore, Maryland, The Johns Hopkins University Press.
Iansiti, M. and T. Khanna (1995). “Technological Evolution, System Architecture and the Obsolescence of Firm Capabilities.” Industrial and Corporate Change 4(2): 333-361.
Jewkes, J., D. Sawers and R. Stillerman (1961). The Sources of Invention. New York, Norton.
Khazam, J. and D. Mowery (1994). “The Commercialization of RISC: Strategies for the Creation of Dominant Designs.” Research Policy 23: 89-102.
Klein, B. (1977). Dynamic Economics. Cambridge, MA, Harvard University Press.
Klepper, S. (1996). “Entry, Exit, Growth, and Innovation over the Product Cycle.” American Economic Review 86(3): 562-583.
Kreps, T. J. (1938). The Economics of Sulfuric Acid Production. Stanford, Stanford University Press.
Landes, D. S. (1983). Revolution in Time. Cambridge, The Belknap Press of Harvard University Press.
Langlois, R. N. and P. L. Robertson (1992). “Networks and Innovation in a Modular System: Lessons from the Microcomputer and Stereo Component Industries.” Research Policy 21: 297-313.
Lee, J.-R., D. E. O'Neal, M. W. Pruett and H. Thomas (1995). “Planning for Dominance: A Strategic Perspective on the Emergence of Dominant Design.” R&D Management 25(1): 3-15.
Levinthal, D. (1998). “The Slow Pace of Rapid Technological Change: Gradualism and Punctuation in Technological Change.” Industrial and Corporate Change 7(2): 217-248.
Marple, D. L. (1961). “The Decisions of Engineering Design.” IRE Transactions on Engineering Management EM-8: 55-71.
McGrath, R. G., I. C. MacMillan and M. L. Tushman (1992). “The Role of Executive Team Actions in Shaping Dominant Designs: Towards the Strategic Shaping of Technological Progress.” Strategic Management Journal 13: 137-161.
McKelvey, B. (1993). Evolution and Organizational Science. The Evolutionary Dynamics of Organizations. J. Singh and J. Baum. New York, Oxford University Press: 314-326.
Metcalfe, S. and M. Gibbons (1989). Technology, Variety, and Organization: A Systemic Perspective on the Competitive Process. Research on Technological Innovations, Management and Policy. R. S. Rosenbloom and R. A. Burgelman. Greenwich, JAI Press. 59: 153-193.
Miller, R., M. Hobday, T. Leroux-Demers and X. Olleros (1995). “Innovation in Complex Systems Industries: The Case of Flight Simulation.” Industrial and Corporate Change 4(2): 363-400.
Miller, R. and D. Sawers (1968). The Technical Development of Modern Aviation. London, Routledge & Kegan Paul.
Nelson, R. R. (1995). “Co-Evolution of Industry Structure, Technology and Supporting Institutions, and the Making of Comparative Advantage.” International Journal of the Economics of Business 2(2): 171-184.
Nelson, R. R. and S. G. Winter (1982). An Evolutionary Theory of Economic Change. Cambridge, The Belknap Press of Harvard University Press.
Noble, D. F. (1984). Forces of Production. New York, Alfred A. Knopf.
Pavitt, K. (1984). “Patterns of Technical Change: Towards a Taxonomy and a Theory.” Research Policy 13: 343-373.
Polanyi, M. (1962). Personal Knowledge: Towards a Post-Critical Philosophy. New York, Harper & Row.
Porter, M. E. (1990). The Competitive Advantage of Nations. New York, Free Press.
Rae, J. B. (1968). Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge, Mass., The MIT Press.
Rosenberg, N. (1969). “The Direction of Technological Change: Inducement Mechanisms and Focusing Devices.” Economic Development and Cultural Change 18: 1-24.
Rosenberg, N. (1982). Inside the Black Box. New York, Cambridge University Press.
Rosenberg, N. (1996). Uncertainty and Technological Change. The Mosaic of Economic Growth. R. Landau, T. Taylor and G. Wright. Stanford, Stanford University Press: 334-353.
Rosenbloom, R. S. and C. M. Christensen (1994). “Technological Discontinuities, Organizational Capabilities, and Strategic Commitments.” Industrial and Corporate Change 3(3).
Rosenbloom, R. S. and M. A. Cusumano (1987). “Technological Pioneering and Competitive Advantage: The Birth of the VCR Industry.” California Management Review 29(4): 51-76.
Rosenkopf, L. and M. L. Tushman (1994). On the Co-Evolution of Technology and Organization. The Evolutionary Dynamics of Organizations. J. Singh and J. Baum. New York, Oxford University Press.
Sahal, D. (1981). Patterns of Technological Innovation. Reading, Massachusetts, Addison-Wesley Publishing Company.
Sahal, D. (1985). “Technological Guideposts and Innovation Avenues.” Research Policy 14: 61-82.
Saloner, G. (1990). “Economic Issues in Computer Interface Standardization.” Economics of Innovation and New Technology 1: 135-156.
Saloner, G. and A. Shepard (1995). “Adoption of Technologies with Network Effects: An Empirical Examination of the Adoption of Automated Teller Machines.” Rand Journal of Economics 26(3): 479-501.
Sanderson, S. W. and M. Uzumeri (1995). “Managing Product Families: The Case of the Sony Walkman.” Research Policy 24: 761-782.
Schatzberg, E. (1994). “Ideology and Technical Choice: The Decline of the Wooden Airplane in the United States, 1920-1945.” Technology and Culture 35(1): 34-69.
Schumpeter, J. A. (1950). Capitalism, Socialism and Democracy. New York, Harper & Row.
Simon, H. A. (1981). The Sciences of the Artificial. Cambridge, Massachusetts, MIT Press.
Suárez, F. F. and J. M. Utterback (1995). “Dominant Designs and the Survival of Firms.” Strategic Management Journal 16: 415-430.
Teece, D. (1984). “Economic Analysis and Business Strategy.” California Management Review 26(3): 87-110.
Teece, D. J. (1986). “Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy.” Research Policy 15: 285-305.
Teece, D. J. (1992). Strategies for Capturing Financial Benefits from Technological Innovations. Technology and the Wealth of Nations. N. Rosenberg, R. Landau and D. C. Mowery. Stanford, Stanford University Press: 175-206.
Teece, D. J., G. Pisano and A. Shuen (1997). “Dynamic Capabilities and Strategic Management.” Strategic Management Journal 18(7): 509-533.
Tushman, M. L. and P. Anderson (1986). “Technological Discontinuities and Organizational Environments.” Administrative Science Quarterly 31: 439-465.
Tushman, M. L. and L. Rosenkopf (1992). On the Organizational Determinants of Technological Change: Towards a Sociology of Technological Evolution. Research in Organizational Behavior. B. Staw and L. Cummings. Greenwich, CT, JAI Press.
Ulrich, K. (1995). “The Role of Product Architecture in the Manufacturing Firm.” Research Policy 24: 419-440.
Utterback, J. M. (1994). Mastering the Dynamics of Innovation: How Companies Can Seize Opportunities in the Face of Technological Change. Boston, Harvard Business School Press.
Utterback, J. M. and F. F. Suárez (1993). “Innovation, Competition, and Industry Structure.” Research Policy 22(1): 1-21.
Van de Ven, A. H. and R. Garud (1993). The Co-Evolution of Technical and Institutional Events in the Development of an Innovation. The Evolutionary Dynamics of Organizations. J. Singh and J. Baum. New York, Oxford University Press.
Vincenti, W. (1990). What Do Engineers Know and How Do They Know It? Baltimore, Johns Hopkins Press.
Vincenti, W. (1994). “The Retractable Landing Gear and the Northrup 'Anomaly': Variation-Selection and the Shaping of Technology.” Technology and Culture 35(1): 1-33.
Vincenti, W. G. (1991). “The Scope for Social Impact in Engineering Outcomes: A Diagrammatic Aid to Analysis.” Social Studies of Science 21: 761-767.
Wade, J. (1995). “Dynamics of Organizational Communities and Technological Bandwagons: An Empirical Investigation of Community Evolution in the Microprocessor Market.” Strategic Management Journal 16: 113-133.
Wernerfelt, B. (1984). “A Resource-Based View of the Firm.” Strategic Management Journal 5: 171-180.