DIGITAL CONVERGENCE AND ITS CONSEQUENCES

Milton Mueller

Abstract

The concept of convergence has been bandied about for at least 25 years. Initially, concepts of convergence conflated the technological integration of print, telecommunications, and broadcasting systems with the firm-level integration of publishers, telephone companies, cable TV operators, and broadcasters. Ithiel de Sola Pool's (1983) concept of a single integrated common carrier that met all media needs exemplified the prevailing vision. This paper conducts a broad historical survey of the market structure of the media and telecommunications industries, from the analogue era of the 1940s to the late 1990s. Its chief premise is that convergence is driven by the declining cost of information processing power and by the development of open standards. The chief effect of this upon market structure is not to encourage consolidation and vertical integration but rather to break up the media market into more or less specialised horizontal components (content, conveyance, packaging of services, software, and terminal equipment). Cheap, mass-produced information processing radically undermines the economic and technological advantages of vertical integration across these component markets, and rewards specialisation and market share within individual horizontal markets.

The idea of convergence has been coming in and out of fashion for more than two decades. The process can be cast in religious terms. A band of early prophets sets out a vision. Afterwards, a succession of messiah-technologies appears, each promising to realise the great vision. But, as we shall see, several of the messiah-technologies were crucified and failed to rise from the dead. Even so, one cannot discount the possibility that TCP/IP does indeed represent the Coming.

In this article, I shall develop a long-term view of the convergence process. In the first part, I identify two of the prerequisites for digital convergence: (1) a technological revolution in processing power; and (2) a process of converging on common standards. In the second part, I explore the impact of convergence on market structure and business models.

Milton Mueller teaches at the Syracuse University School of Information Studies.

The proposition that all modes of communication and information will converge into a digital nexus has been circulating for about twenty-five years. One of the earliest expressions of the idea came from Nicholas Negroponte, a technologist and founder of MIT's Media Lab (Brand 1987, 10). In 1978, he used three overlapping circles to represent the technologies of computing, printing, and broadcasting. The most rapid growth and innovation, he argued, could be found in the area where the three intersected. Negroponte had overlooked the telephone system, but telecommunications analysts were simultaneously developing their own language of merging technologies (Farber and Baran 1977). Harvard's Anthony Oettinger coined the ugly neologism "compunications" to express the growing overlap of computing and telecommunications (Oettinger, Berman, and Read 1977). The French writers Nora and Minc independently came up with the more graceful "télématique" to express the same idea (Nora and Minc 1980). Neither term ever quite caught on, and to this day the world is still struggling with awkward combinations of terms such as "telecommunications," "information," and "computing" to label the basic technology of the information economy.
Does the Internet, then, constitute the ultimate realisation of the prophets' vision? To answer this question we need to delve more deeply into some of the technological and social drivers of the process.

Drivers of Convergence

Convergence as analysed here is a combination of two factors: technological improvements in processing power, and the adoption of common protocols and standards.

Technological Drivers

To some, the term convergence suggests a marriage or a coming together of different technologies or industries. That image is a misleading one. Convergence is really a take-over of all forms of media by one technology: digital computers, a technological system with solid-state integrated circuits (ICs) at its core, supplemented by photonic components (lasers and optical fibres) and applications of mathematical information theory. The ability of digital systems to handle multimedia content at lower and lower costs is a product of exponential progress in the processing power and memory of ICs. This, in turn, depends on the ability to increase the density of transistors on a single IC chip.

Moore's Law. The first integrated circuits were fabricated in 1960. In 1971, the Intel Corporation created the first microprocessor by placing an entire computer central processing unit on a single silicon chip the size of a fingernail. From 1960 until today, the transistor density of a single IC chip has doubled approximately every two years. This phenomenon was first identified by Gordon Moore in 1965, three years before he co-founded Intel, and became known as "Moore's law." A corollary of Moore's law states that the cost of an IC is approximately proportional to the square root of IC complexity, which means that the cost of carrying out any particular task with ICs will be cut in half about once every two years. The link between the progress of media convergence and advances in integrated circuitry is well established in the literature (Gilder 1994; Midwinter 1995; Yoffie 1997).

The spreading applications of ICs are not responses to a world of digital content and networks. On the contrary, content and networks have gone digital in order to avail themselves of the power of ICs. For example, most of the recent advances in digital video were not possible until a frame of digitised video could be stored on a single chip (Midwinter 1994, 29). The Internet's ability to deliver voice and video signals to PC users required upgrades in the processing speed and memory of a typical PC and increases in the bandwidth and processing speed of the network and its routers. Likewise, the addition of data screens to mobile telephones, and the adoption of CD-ROMs as a common storage medium for PC data, recorded music, and movies, both stem from a common root: lower-priced and more powerful computer and laser components. The pace of convergence has thus been largely determined by the operation of Moore's law.

Figure 1: Growth of Transistor Density on Chips

The Billion Transistor Chip. Moore's law has held true for thirty-five years. But how much longer will the semiconductor industry be able to sustain that rate of progress? The most conservative estimates project that the rate of improvement will begin to level off around 2005 (Hutcheson and Hutcheson 1996). Moore himself predicts that advances in circuit complexity will begin to bump up against physical limits around the year 2010 (Moore 1996).
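The arithmetic behind these projections is simple compounding. As an illustration (added here, not from the original figure; the only assumed datum is the roughly 2,300 transistors of Intel's first microprocessor of 1971), a minimal Python sketch of the two-year doubling and the corresponding halving of cost per function:

def transistors(year):
    """Projected transistors per chip, assuming a strict two-year doubling
    from the ~2,300 transistors of Intel's 1971 microprocessor."""
    return 2300 * 2 ** ((year - 1971) / 2)

def relative_cost(year):
    """Cost of performing a fixed task with ICs, relative to 1971,
    assuming the cost is cut in half every two years."""
    return 2 ** -((year - 1971) / 2)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"~{transistors(year):,.0f} transistors,",
          f"task cost ~{relative_cost(year):.1e} of the 1971 level")

Under these assumptions a chip crosses the billion-transistor mark around 2009, which is why the forecasts of a levelling-off cluster around 2005-2010.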
Some technologists, however, believe that current rates of change may continue even longer if single-electron transistors, which already exist in the laboratory, can be successfully commercialised. Whichever forecast turns out to be correct, the technological progress supporting digital convergence still has a long way to go. In a recent interview, Gordon Moore stated:

Even with the level of technology we can extrapolate fairly easily—a few more generations—we can imagine putting a billion transistors on a chip. A billion transistors is mind-boggling. Our most advanced chips in design today will have less than 10 million transistors. So, we're talking about a hundred times the complexity of today's chips. Exploiting that level of technology ... could keep us busy for a century (Moore 1997).

Semiconductor industry expert Michael Slater provides a more specific assessment of the capabilities of a billion-transistor chip:

A single such chip could have dozens of processors, each with several times the complexity of today's most advanced devices, plus several megabytes of cache for each. Running at several gigahertz, the chip could include a video and 3D graphics system, peripheral controllers, a network interface, modem, and so forth. A system could be built with everything in the fastest workstation today, including memory, in a single chip. A $10 microcontroller will be faster than the fastest microprocessor today and have a full set of peripherals (Slater 1997).

With that many transistors on a chip, a desktop computer will be able to store an entire copy of a high-definition movie in RAM and manipulate it in real time. In effect, video content will be moved about and manipulated as easily as e-mail is today.

Coordination and Standardisation

But raw technological power is only part of the convergence story. Often overlooked is the fact that digital convergence also implies a process of settling upon common protocols and technical standards for data interchange. This is a predominantly socio-economic process, not a technical one. It involves the coordinated adoption of compatible technology platforms by a critical mass of producers and consumers. That process is affected by network externalities and product life cycles. In many ways, then, the progress of digital convergence is a story of the rise and fall of specific standards designed to bring together various media forms. And as economic theory on standardisation has demonstrated, such processes are path-dependent and may be "tipped" into one of various possible equilibria by chance events.

ISDN. Many observers, especially the telephone companies that had developed it, thought that the ISDN standard was going to be the incarnation of convergence. ISDN was developed by the ITU starting in the late 1970s and released as a mature standard in the first half of the 1980s. In promoting ISDN, telephone companies promised the integration of voice and data, including hints of the eventual inclusion of video. But of course ISDN never took hold. The telephone companies priced it as a premium service and did not commit themselves to a wholesale upgrade of their networks. Implementation was complex, and in the US, where data communication was most developed in the 1980s, the AT&T divestiture's fragmentation of the operating companies raised the costs of cooperation and thus made the development of different "flavours" of ISDN inevitable.
One obvious limitation on the success of ISDN was that most consumers simply didn't know what it was supposed to do for them. In the 1980s, data communication applications were generally built around proprietary equipment and protocols, such as IBM's SNA standard. There was still a lack of integration at the corporate and product development levels between telephone companies and computer companies. ISDN was no match for open standards, such as the IEEE's Ethernet, that could be directly managed and implemented by the companies building LANs, rather than acquired from a third-party vendor.

Ethernet. Indeed, the tremendous success of Ethernet demonstrated that open, non-proprietary standards enjoyed key advantages in the marketplace. Although Ethernet was inferior to the proprietary token-ring standard in purely technical terms, it nevertheless gave buyers more security and lower prices. Its initial success was reinforced as network designers and implementers became more familiar and comfortable with its features, leading to a bigger market, lower prices, and more product development and diversity. A key factor is that a very large portion of intra-organisational networking has evolved as private networks, that is, networks put together on a decentralised basis by the users themselves rather than offered as a large-scale service by a public carrier. This meant that compatibility and convergence had to take shape as bottom-up processes, rather than being imposed from the top down.

SONET/SDH and Frame Relay/SMDS. The cost of bandwidth over long distances creates very powerful economic incentives for most private and public networks to "converge" all forms of traffic onto high-speed backbones. The Synchronous Optical Network (SONET) standard (known as SDH in Europe) is a time-division multiplexing technique developed by long distance carriers to combine many channels of voice traffic onto a single, high-bandwidth link. Because it is a digital standard, data traffic can also be mixed into the bitstream. The problem is that the data must first be fitted into the 64 kbps channel standard developed for voice traffic. In general, circuit switching and time-division multiplexing are less efficient ways of carrying data traffic, which is bursty rather than continuous and may require greater bandwidth than a single voice channel. Thus, throughout most of the 1990s, high-speed voice-oriented backbones often used different standards from the data backbones in corporate and telephone company networks, which were more likely to be based on data-oriented standards such as frame relay. Furthermore, these data-oriented standards were designed to have limited functionality. They were not designed to be broad-based convergence technologies.

TCP/IP. The Internet protocol suite (TCP/IP) was designed to support internetworking. This means that it permits the interconnection of multiple networks that use different hardware and communication conventions. TCP/IP is a form of packet-based data communication, which routes small chunks of data from one machine to another based on address information carried in the packet. By the early 1990s, TCP/IP had begun to emerge as a very powerful solution to the data communication problems posed by the world of heterogeneous standards and equipment used in private networks. Like Ethernet, it was an open, non-proprietary standard. The basic technology of TCP/IP has survived almost two decades of exponential growth.
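The packet principle that underlies TCP/IP can be demonstrated with nothing more than a standard library. The following minimal sketch (an illustration added here, not from the original text; it uses UDP datagrams over IP on the loopback interface, and the payloads are arbitrary) shows data travelling as small, individually addressed chunks:

import socket

# A receiving socket; the OS assigns a free port on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
destination = receiver.getsockname()  # the (host, port) each packet will carry

# Each datagram is addressed and routed independently of the others.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for chunk in (b"small", b"independently addressed", b"chunks of data"):
    sender.sendto(chunk, destination)

# UDP makes no delivery guarantees; over loopback all three normally arrive.
for _ in range(3):
    data, source = receiver.recvfrom(1500)  # 1500 bytes: a typical Ethernet frame
    print(source, data)

sender.close()
receiver.close()

Because every packet carries its own destination address, neither end needs to know anything about the physical networks in between. That indifference to the underlying hardware is precisely what allowed TCP/IP to interconnect heterogeneous networks.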
During the past three years, TCP/IP has become the "protocol of convergence" for many companies and services. Internet telephony and the streaming of video and audio on the Internet are now commonplace, although the quality of service offered rarely matches that of networks based on more traditional standards. One of the weaknesses of IP is in the area of mobile communications.

Asynchronous Transfer Mode (ATM) was the telephone companies' response to the rise of the Internet. It attempted to combine the benefits of circuit-switched telephone networks (dedicated connections, guaranteed quality of service) with the benefits of a packet-based communication standard (more efficient use of bandwidth). Unlike TCP/IP, ATM fits all data into a uniform packet size (known as a cell) and uses statistical multiplexing over virtual circuits. The uniform packet size makes it easier for ATM to provide isochronous services, such as voice or video, which do not tolerate delay. ATM can carry TCP/IP traffic, but the TCP/IP packets must first be chopped up and fitted into a series of ATM cells. Carriers in the United States have recently begun to offer ATM backbone services.

From the above one can begin to appreciate the complexity of converging real standards and equipment. In fact, a given user may employ many of these standards simultaneously. An Ethernet Local Area Network can be connected to the Internet via an ATM Wide Area Network, and once on the Internet the traffic may end up running over a SONET link. The most significant question is whether any one of these standards, most notably ATM or an improved TCP/IP, can eventually handle all the different service qualities and features that a given user might demand.

Digital Media Market Structure

The business implications of digital convergence are profound. The economic organisation of some of the world's largest, fastest-growing industries is being transformed. No one can predict precisely what shape this transformation will take. Nevertheless, some vital aspects of a significant change in market structure are already visible. Twenty years ago, most people thought that digitalisation would lead to a gigantic consolidation and merger of all media infrastructures into one vertically integrated monopoly. The "electronic nightmare" scenario projected that media would converge into a horrifying combination of the post office, Microsoft, broadcast networks, and the telephone company (Wicklein 1980; Pool 1983). In fact, something much closer to the opposite is happening. Cheap, abundant processing power is promoting disintegration and specialisation along the communications value chain. In computers, telecommunications, and broadcasting, successful firms are moving away from end-to-end vertical integration to focus on specialised, horizontal segments of the market. Devices, distribution channels, and applications are becoming more diverse and specialised as well as more interoperable. The result is not a "unification" of broadcasting, computing, and telecommunications, but a completely new media ecology. This section identifies some of the key features of this change.

The Vertical Structure of Analogue Media

Prior to digitalisation, different electronic communication services formed discrete chains of components that restricted distinct kinds of communication and content to specific distribution networks and terminals. In many cases, especially the telephone and telegraph systems, the supplying firm was vertically integrated over the entire chain.
Even when the supplying firm itself was not vertically integrated over the entire component chain, the vertical structure was maintained by technological barriers that prevented information from being easily transferred from one system to another. Figure 2 illustrates the situation around 1950, at the dawn of the age of semiconductors. Telephony, telegraphy, broadcasting, motion pictures, publishing, money, and documents were all vertically integrated chains linking a specific kind of content, distribution network, and terminal. There were some cross-linkages between these vertical chains, especially in the transmission segment. But for the most part they operated as separate systems. In telephone communication a single, vertically integrated monopoly supplied end-to-end service. Documents, data, money, financial transactions, and publications were largely restricted to the media of printed paper and physical distribution via a monopoly post office. The telegraph provided an important link between the worlds of telecommunication and print/paper, but telegraph transmissions relied on manual input, which severely limited their capacity. There were no credit cards and very limited forms of electronic funds transfer (McKenney, Copeland, and Mason 1995). Broadcast receivers and playback systems for recorded sound were also discrete technological systems.

Figure 2: Vertical Structure of Media in 1950 (separate vertical chains linking kinds of content, such as two-way voice, two-way documents and data, money, film, photographs, one-way sound, and one-way video, to carriers such as the telephone and telegraph network, physical distribution, radio, and TV, and to terminals such as the phone set, the telegraph, printed publications, phonographs, radio receivers, and TV sets)

The real source of the vertical structure was not the content-carrier segment of the chain. Television and radio broadcast signals, voice signals, photographs, and text could all be converted into analogue electronic signals and carried by trunk telecommunication networks. The segregation of services took place primarily at the input and output terminal. Final distribution to users involved application-specific devices that could neither communicate with devices from other content-carrier chains nor convert information into and out of other formats. Convergence was thus constrained by the limited processing power of end-user terminals. Compared to today, the technology needed to generate, process, convert, store, and retrieve signals automatically was delicate, primitive, and expensive. It was therefore concentrated in organisations remote from the user, so that economies could be made and technical standards could be tightly controlled. It was also not standardised across media.

Personal Computers and the Horizontal Shift

The early computer industry adopted this vertical structure. Until the late 1970s, it consisted of a few large, vertically integrated manufacturers. Each manufacturer designed its own system around a proprietary architecture. They often developed and produced their own semiconductor devices for memory and processing, and employed their own applications software. Manufacturers also directly controlled the sales and distribution of their machines. The vertical structure is represented in Figure 3. By the late 1970s, rapidly developing microprocessor technology put all the basic processing functions of a computer on a single chip. Computers began to be assembled around a microprocessor, supplemented by readily available components such as memory chips, I/O controllers, disc drives, and peripherals.
IBM's introduction of the PC in 1981 inadvertently reinforced this modular approach to computer manufacture, and ultimately led to the destruction of the vertical structure in computer manufacturing. Because of the competitive threat represented by Apple Computer and other microcomputer manufacturers, IBM needed to enter the market quickly. It therefore abandoned its normal procedures, which relied on the methodical, in-house development of a closed, proprietary architecture. Instead, IBM introduced an open architecture built from off-the-shelf components, and held very little intellectual property protection over the result. As a consequence, the product and its architecture were easily imitated (Grindley 1995).

Figure 3: Vertical Integration of the Computer Market, 1980 (each manufacturer, including IBM, Fujitsu, NEC, DEC, and HP, spanned all five layers: basic circuitry, computer platforms, operating systems, utilities, and distribution; source: Grindley 1995)

Figure 4: Computer Industry Market Structure by 1995 (competing suppliers within each horizontal layer: distribution through direct sales, computer dealers, VARs, superstores, and mail/online channels; applications from Microsoft, Lotus/IBM, Oracle, Informix, and Java; operating systems and networks including DOS, Windows, Windows NT, Unix, Apple, Linux, and Novell; computer platforms from IBM, Sun, Compaq, Dell, HP, Acer, Apple, and DEC; and microprocessors such as Intel's Pentium, Sun's Sparc, DEC's Alpha, and the IBM/Apple/Motorola PowerPC; source: Grindley 1995)

The result is now apparent to all. With the exception of Apple, the entire personal computer industry standardised around the IBM PC system architecture. Clone manufacturers took over 75-80 per cent of the PC market. Their competition and rapid innovations created constant pressure to lower prices and improve features. A new, more specialised industry structure emerged, characterised by competition among firms with strong positions in one of five horizontal segments of production. These five segments are: (1) microprocessors; (2) manufacture of computer platforms; (3) operating systems software (both client and server side); (4) applications software; and (5) distribution. Vertical links between one or two of these segments remain. Microsoft, for example, has leveraged its strength in operating systems to take over the lion's share of the applications software market. IBM still has significant positions in four of the segments, and its acquisition of Lotus in 1995 extended its position in applications software. Even so, market share is usually won or lost on the basis of competitiveness within horizontal segments. IBM PCs, for example, generally use Intel microprocessors. The strongest positions (e.g., those of Microsoft, Intel, and Compaq) have generally been achieved precisely because the supplier specialised in one horizontal segment and did not try to extend that control too far up or down the value chain. End-to-end vertical integration has been almost entirely banished from the marketplace. The decline of Apple Computer's market share, its alliance with IBM, and its licensing of independent manufacturers in the 1990s represent the final stages of this transition.

The Building Blocks of Digital Media

The pattern experienced by the computer industry in the 1980s is now spreading throughout the telecommunication and media industries. The vertical structures represented in Figure 2 are breaking down on a global scale.
The process is driven by the growing power of microprocessors and by a shift in the distribution of information processing and storage power toward the end user, which leads to more open standards and interfaces across horizontal segments. The vertical segmentation of media is being replaced by a converged digital media market composed of five distinct horizontal segments. Following a model suggested by Bane et al. (1995), these segments can be defined as (1) content creation and production; (2) service packaging; (3) carriage; (4) software; and (5) equipment. The new situation is represented schematically in Figures 5 and 6. Figure 5 shows the partial convergence that existed about 1993, and Figure 6 provides a simplified diagram of the horizontal segments of a fully converged market.

Content refers to the creation and production of symbolic material that has been encoded in a particular format. Motion pictures, television programming, newspaper articles, book manuscripts, recorded music, and the information on a Web site are all examples of content. So are human speech and money. In general, content refers to material that consumers value in and of itself, whether for its entertainment value or for its educational, news, or exchange value.

Packaging refers to the intermediary function wherein different types of content and/or software are assembled into a product or service bundle. Packagers reduce search costs for consumers and also provide a quality control and assurance function.

Carriage refers to the business of distributing or transporting information. Telephone transmission networks, cable TV systems, or, more generically, optical fibre, co-axial copper cable, communication via radio frequencies, and vehicular transportation are examples of different types of carriage.

Figure 5: Digital Convergence circa 1993 (four partially converged vertical chains: voice, spanning audiotext, network intelligence, long distance and local access, and voice terminals; data, spanning databases, network intelligence, backbones, LANs and WANs, and computers; money, spanning transaction processing, financial networks, and ATMs, cash, cards, and cheques; and audio-visual, spanning entertainment content, intelligence, cable and broadcast distribution, CD and tape distribution, and TV sets and radios)

Figure 6: Horizontal Segments of a Converged Media Environment (source: Bane, Bradley, and Collis 1995)
Content: information products (text, TV, radio, film, financial information, money, graphic art, Web pages, games, music, photography).
Packaging: services; the bundling and selection of content; the addition of integrative and presentational functionality.
Transmission: physical infrastructures for transport (the fixed telephone network, terrestrial and satellite wireless, cable TV systems, private LANs and WANs, etc.).
Software: intelligence, including processing and storage hardware and software for networks and individual terminals.
Terminals: local devices for the input and output of signals and information (phone handsets, TVs, PCs, organisers, PDAs, etc.).

Equipment manufacturing refers to hardware devices that enable telecommunication and information processing. This includes the consumer products that allow users to transmit, receive, and display signals, such as telephone handsets, television sets, fax machines, desktop PCs, pagers, and satellite dishes. It also includes intermediate goods that go into the construction of a network, such as switches and routers, multiplexers, modems, and so on. Software, the stored instructions that manipulate or process information in a particular way, is an essential element of the model.
Software markets are often bundled with equipment, but software nevertheless represents a distinct product. Desktop applications, switching, routing and network management protocols, browser software, information storage and retrieval protocols, multiplexing and signal compression, search engines, and transaction processing are all examples of software. Software is an input that is present throughout the communication chain, but it is also a discrete market.

Economic Aspects of Horizontalisation

According to traditional natural monopoly theory, monopoly and concentration are products of economies of scale and scope in supply. Digital technology, however, massively increases the economies of scale and scope that can be achieved in the switching, transmission, storage, and duplication of content. Why, then, has the rise of digital media radically undermined monopoly and vertical integration instead of reinforcing them? There are two reasons. One is that mass-produced digital intelligence reduces the social cost of multiple, heterogeneous networks and systems; to put it differently, it radically undermines the advantages of vertical integration. The other is that the declining price of intelligence has brought the capital investments needed to acquire it well within the budget constraints of ordinary firms and households. Reducing the capital intensity of intelligence also reduces the importance of building large-scale organisations that can share its costs among many users. Both of these points are elaborated below.

Vertical integration undermined. In the old market structure, the five building blocks of the communications value chain were mostly vertically integrated around specific media. A typical broadcaster, for example, produced most of its own content, assembled outsourced content into a service package, and owned and operated its signal transmitter. Although vertical integration did not extend all the way to the end user's receiving equipment, this gap was filled by rigid government regulations confining transmissions to specific frequency bands and locations and controlling the characteristics of broadcast terminals. Likewise, telecommunication companies manufactured the terminal; built, owned, and operated the carriage network; and centrally controlled and managed the network intelligence. Service packages and specialised applications of network capabilities were developed internally by the telecommunication companies.

To understand the new structure of media it is first necessary to understand what sustained the old one. The vertical, monopolistic form of communication media was basically a product of the high price of intelligence. In the era of electromechanical telephone switching, for example, increases in the scale of the network placed heavy demands on network intelligence. Additional information processing power could be purchased only with disproportionate inputs of capital and labour. Increases in the size and complexity of telephone switching offices beyond a certain point created major diseconomies of growth (Mueller 1989). Under these conditions, any attempt to interconnect multiple, competing networks, or to support heterogeneous forms of terminal equipment, added greatly to the expense of the network. More diversity and complexity meant disproportionate increases in the physical facilities and labour resources needed to run the system.
The viability of competition in telecommunication can be directly related to technological changes that reduced the price of processing power. With electromechanical technology there were only two ways to have competing telephone systems and, at the same time, allow all telephone users to call each other. One was to let some users rent two access lines and telephone sets (demand-side duplication). The other was to require the competing systems to interconnect. The latter option (supply-side duplication) was as expensive as the first, for it created a duplicate trunk network, greatly enlarged the size and complexity of the switchboards, and also required major increases in the size of central office staffs (Mueller 1997a, 136). In digital electronic networks, by contrast, the interconnection of additional networks requires more intelligence but only a little more hardware and very little additional labour. The complex exchanges of information required to interconnect independently managed networks can be achieved rapidly and automatically, through software protocols. Processing power acts as a direct substitute for the duplication of physical facilities and labour.

Reduced capital intensity. When intelligence is very expensive, it must be shared among multiple users. Its application must be conserved, restricted to the most important functions. The capital investment it represents can come only from a large organisation and can be recovered only by spreading its costs across a significant portion of the population. When intelligence is abundant, sharing economies become less important; control and convenience rise in significance. As high levels of processing power come within the budget constraint of households and businesses, there is greater economic tolerance of diversity, duplication, and "waste" for the sake of convenience, customisation, and control. It is the same in other industries. From the standpoint of simple sharing economies, for example, a public bus or train is always more efficient than a private automobile. But the wealthier a society becomes, the more its consumers purchase automobiles and avoid public transport.

The structural consequences of the declining price of intelligence can be summarised as follows.

1. There is greater fungibility among the different components of the communication chain. That is, an end user or service provider can more easily mix and match a product or service from one horizontal segment with the products and services of any other segment to configure a communication service (see the sketch following this list). Weaker vertical links among specific applications mean that competition is more focused on achieving market share in specific horizontal segments of the chain.

2. As the price of intelligence drops, it becomes more evenly distributed throughout the chain. Terminal equipment, once the "dumbest" part of the communication chain, has become vastly more intelligent. The concentration of intelligence in central switching offices and bureaucratic management hierarchies has gradually eroded. Instead, end users have asserted ownership and control over terminals and on-premises networks.

3. There is divergence, not convergence, within each horizontal segment. The horizontal shift is naturally accompanied by a growth of specialisation and diversity in the market as a whole. A standard feature of intense competition is that it forces competitors to differentiate their products and services. The market becomes more responsive to slight variations in demand.
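The fungibility described in point 1 can be pictured as a catalogue with one shelf per horizontal segment. The following sketch (a hypothetical illustration; the segment names follow the Bane, Bradley, and Collis model used above, but the suppliers and offerings are invented) assembles a service by choosing one item from each shelf:

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Service:
    content: str    # what the consumer values in itself
    packaging: str  # how it is bundled and sold
    carriage: str   # the physical transmission network
    software: str   # the processing intelligence
    terminal: str   # the end-user device

catalogue = {
    "content": ["video on demand", "news text", "recorded music"],
    "packaging": ["subscription bundle", "pay per view", "advertising-supported"],
    "carriage": ["telephone local loop", "cable plant", "satellite"],
    "software": ["browser", "set-top middleware"],
    "terminal": ["PC", "TV set", "handheld"],
}

# Any combination of one offering per segment is a feasible service;
# vertical integration is a choice, not a technical necessity.
combinations = list(product(*catalogue.values()))
print(len(combinations), "possible configurations")  # 3*3*3*2*3 = 162
print(Service(*combinations[0]))

With expensive, specialised analogue components, most of these combinations were technically or economically impossible; cheap digital intelligence turns them into mere configuration choices, which is why competition migrates into the individual horizontal segments.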
This trend towards divergence is evident in all five segments. Among data terminals there are still mainframes and PCs, but there are also smart cards, notebooks, palmtops, organisers, and PDAs. Telephones and pagers come in all shapes and sizes, representing different ways of handling technical and economic trade-offs among cost, bandwidth, portability, quality, mobility, and power utilisation. There is greater differentiation of audio-visual playback devices, ranging from the tiny, portable car TV to the gigantic home projection screen. In carriage, digital convergence has made different kinds of networks better substitutes for each other. But we do not see the carriage market collapsing into a single infrastructure; rather, competing infrastructures are proliferating, each targeted at a range of applications in which it holds a competitive advantage, and often working in a complementary fashion with other infrastructures. Thus, there are new fixed local networks; private LANs and WANs; many new public wireless local networks; multiple trunk networks for long distance; new, redundant cables for international communication; the simultaneous growth of satellite and cable alternatives to terrestrial broadcasting; and so on.

In content production the same growth of diversity is present. A standard result of economic analysis was that the mass-oriented, "lowest common denominator" quality of television and radio programming was a function of limitations on the number of channels and of the broadcast medium's reliance on advertising support (Owen and Wildman 1992). Digital, interactive media are overcoming both limitations. Video, online, and audio content can increasingly be ordered and paid for on a transactional basis, and need not be supported solely by advertising. And the number of channels is increasing. The overall market for content, therefore, is beginning to look as diverse and fragmented as the market for printed publications. The market for service packagers and software is also increasingly diverse and specialised.

The Progress of Disintegration

The vertical structure of the telecommunications industry first began to disintegrate thirty years ago. The first step was the detachment of terminal equipment markets from the market for network services. This process was driven by the desire of electronic equipment manufacturers and users to pry open markets that were foreclosed by the telephone companies' monopoly control of the access infrastructure. The creation of a standardised interface between the public network and the customer's equipment facilitated end-user ownership of telephone handsets and PBXs, and promoted freer competition in terminal equipment markets. The rise of competition in long distance markets in the USA eventually led to an attempt to create an analogous standardised interface between the local and long distance segments of the network. Without electronic switching intelligence, this would have been economically intractable. Another important development was the emergence of a distinction between the physical network and network intelligence in the form of "value-added services." This distinction had its roots in the emergence of computer networks that employed the telephone network for carriage but "added value" in the form of processing or storage (Brock 1994, 94).
Despite this trend away from vertical integration, the prospect of converging telecommunication and audio-visual media in the early 1990s was interpreted by many businesses and analysts as an opportunity for telephone and cable companies to reassert the old vertical structure (OECD 1992; Oftel 1995). Telephone companies, threatened with competition in their traditional markets, began to view broadband networks offering interactive entertainment as the key to their future growth. Thus, in the US, local exchange companies (LECs), frustrated with the line-of-business restrictions left over from the AT&T divestiture, began to lobby for authorisation to carry video signals to consumers. In 1992 the FCC authorised LEC entry into a limited form of video distribution. A series of alliances and proposed mergers between US telephone and cable TV companies quickly followed. Southwestern Bell acquired two cable systems in the District of Columbia; US West acquired 25 per cent of Time-Warner Entertainment; Bell Atlantic tried, but ultimately failed, to merge with cable giant TCI. Later, a variety of interactive TV consortia were formed: americast, a partnership of Walt Disney Co., Ameritech, BellSouth, GTE Corp., SBC Communications, and SNET; and Tele-TV, a consortium of Bell Atlantic, Nynex, and Pacific Telesis. Concerned about telephone company threats to their business, American cable companies developed their own interactive TV trials. In both cases, the approach to convergence was based on the idea of proprietary standards and set-top boxes, and on service packages under the end-to-end control of large-scale networks. The telephone company, it was thought, would become a cable TV broadcaster with better networking technology.

The trend became global. In Australia, Telecom announced in mid-1993 its intention to develop aggressively a fixed broadband network to deliver motion pictures, multimedia, and interactive services to the home (Lindsay 1993, 1-2). British Telecom (BT) also began to position itself as a "multimedia" company. In 1994 Hong Kong Telecom announced the creation of its new Interactive Multimedia Services (IMS). The company hoped that IMS would turn what was once just a telephone company into a movie rental store, a financial service provider, an electronic shopping mall, and an on-line school and library. Hong Kong Telecom's IMS initiative was, in this respect, typical of the response of incumbent telephone monopolies in liberalising markets throughout the world.

These initiatives approached convergence as a blending of the telecommunications and audio-visual industries. But the incursion of these two industries into each other's turf has been minimal and mostly unsuccessful. George Gilder was correct to deride these efforts as "a convergence of corpses" (Gilder 1994, 12). Beginning in late 1995, announcements of the closure, delay, or drastic scaling back of various interactive TV and video-on-demand plans became common. One reason was that the central office computers, software, and network upgrades required to support interactive TV proved too expensive (Collier 1996). The real nail in the coffin, however, was the rise of the Internet. Suddenly, without any warning to the slow-moving cable and telephone giants, the Internet was actually bringing to market many of the interactive multimedia capabilities the telephone and cable companies had been promising.
The Internet's rapid diffusion could be directly attributed to its decentralised innovation, its open, non-proprietary standards, and its absence of end-to-end integration. The modular, horizontally organised Internet market thoroughly undermined the fundamental assumptions of the telco-cable approach to interactive media development. In 1996 telephone companies, including Hong Kong Telecom IMS, stampeded into the Internet Service Provider (ISP) market, often achieving great success. Cable TV companies kept pace by developing cable modems that would allow cable customers to gain high-speed access to the Internet (Weinschenk 1996). Whether they knew it or not, these changes amounted to a strategic repositioning away from vertical integration and towards their horizontal strengths in carriage. AT&T's 1998 acquisition of the large cable television company TCI was primarily in that vein as well: an attempt to acquire the missing local distribution network that would allow it to bypass local telephone companies and reach the customer directly with carriage services.

Almost all of the merger activity that has taken place in the United States since the passage of the 1996 Telecommunications Act has been in horizontal segments of the market. Radio and TV broadcasting chains have acquired other radio and TV broadcasters; telephone companies have acquired other telephone companies (the Bell Atlantic-Nynex merger, the Pacific Telesis-Southwestern Bell merger, the BT-MCI merger); content giants have acquired other content originators (Time-Warner's acquisition of Turner Broadcasting). At the same time, there is dramatic evidence of the failure of vertically oriented approaches to convergence and consolidation. No major mergers between telephone company giants and cable multiple-system operators have succeeded. AT&T's self-divestiture of Lucent and NCR established clear separations between its business lines in computer manufacturing and services, telecommunication service, and equipment manufacturing. Attempts by the consumer electronics hardware manufacturers Sony and Matsushita to integrate backwards into content were expensive failures (Bane et al. 1995). IBM's acquisitions of the telephone equipment maker Rolm and of Satellite Business Systems were equally unsuccessful.

Internet as Digital Media Prototype

A prototype of the convergent media market structure already exists in the Internet. World-wide, the Internet industry is beginning to experiment with a fully converged environment in which television sets, telephones, and various digital devices besides PCs can be used to access and navigate the 'Net. This, of course, is what convergence is all about, and there is no doubt that the meeting point for this change will be the Internet rather than traditional cable TV or voice telephone systems. The Internet must thus be viewed as a bandwidth-constrained, administratively immature version of the fully digital media of the future. It represents the future of broadcasting and telecommunications as well as the future of networked computing. As such, its economic features offer important insights into the market structures and policy problems created by digital convergence. Key features of its market structure include the following:

Multimedia Capability. The Internet can carry and deliver all modes of content on an interactive basis. Old distinctions between publishing, broadcasting, and telecommunications have already lost their meaning on the Internet.
The segmentation of voice, video, and data traffic is also undermined, although not abolished. The Internet currently offers access to news content, mail and document distribution, financial services, photos and graphics, various forms of electronic commerce and digital money, games, real-time voice and music, and even some limited real-time video clips. In addition, it has created new forms of media such as chat rooms, MUDs, search engines, and browsers. The Internet's multimedia capabilities are still limited by congestion, low-bandwidth access to residences, and the presence of older chipsets in many home and office computers. Over time, however, new administrative arrangements, better pricing mechanisms, the expanding power of ICs, and equipment upgrades will reduce these barriers.

Disintegration. The Internet is largely disintegrated in structure. TCP/IP, the protocol suite on which it is based, is an open, non-proprietary standard. There are clear demarcations between the markets for terminal equipment, browser software, local carriage, backbone carriage, service packaging, and content production. Suppliers concentrate on maximising their competence and market share in one or two of these horizontal segments of the market. The environment of vertical disintegration has a powerful impact on the flexibility of service configuration and the possibilities for service innovation. Packagers and intermediaries can "mix and match" service components to create a product. Internet services may be advertising-supported, subscription-based, free, pay-per-view, or a combination of these options; their delivery architecture includes both "pull" and "push" interfaces. The old broadcast-telecommunication categories are totally irrelevant in this environment. An important corollary of disintegration is that end users in businesses and residences can assert ownership over terminal equipment, in-premises distribution, content, and software interfaces. Service providers must compete not only with other service providers but also with equipment manufacturers. The consumer can control when to lease and when to buy. This creates further pressure toward open, "plug and play" standards and a disintegrated value chain (Yoffie 1997).

A Borderless Market. The falling cost of bandwidth and processing power makes national boundaries increasingly irrelevant in determining the features of digital media. Unlike traditional telephony, there is no "distance premium" on the Internet and no regulatory regime, like the international settlements system, that makes data movements pay special taxes for crossing international borders. Multimedia content can be distributed globally and, via electronic commerce, services and products can be consumed from any point. It will become increasingly difficult, and counterproductive, for governments to monitor and control the movement of bits. A regime of increasingly free trade in information and telecommunication services and content seems inevitable. When entire motion pictures can be transmitted in encrypted form over international lines in a few seconds, and when Internet users can experience or download pictures, music, or videos hosted on computers far outside their home country's jurisdiction, broadcasting laws and regulations that restrict ownership to nationals or prescribe the kind of content that people can view within the country cannot survive for long.

In sum, the emerging environment is defined by four features: a multimedia capability; a horizontal, specialised industry structure; open entry;
and a transnational market. These four features represent the clear direction of digital media services. They are not unique to the Internet but are logical consequences of the declining cost of processing power, the victory of open over closed standards in computers and networking, and the growth in the size and scope of the market.

References

Bane, P. William, S. P. Bradley, and D. J. Collis. 1995. Winners and Losers: Industry Structure in the Converging World of Telecommunications, Computing, and Entertainment (March). Available at: www.hbs.edu/mis/multimedia/link/p_winners_losers.html.
Brand, Stewart. 1987. The Media Lab: Inventing the Future at MIT. New York: Viking Press.
Brock, Gerald. 1994. Telecommunications Policy for the Information Age. Cambridge, MA: Harvard University Press.
Burgelman, Jean-Claude. 1995. Convergence and Trans-European Networks: Some Policy Problems. Studies on Media Information and Telecommunication (SMIT). http://www2.echo.lu/legal/en/smit13.html.
Collier, Andrew. 1996. After the Gold Rush. tele.com (December), 76-79.
Farber, David and Paul Baran. 1977. The Convergence of Computing and Telecommunications Systems. Science, 18 March.
Gilder, George. 1994. Life After Television: The Coming Transformation of Media and American Life (revised edition). New York: W. W. Norton.
Grindley, Peter. 1995. Standards, Strategy, and Policy: Cases and Stories. Oxford: Oxford University Press.
Hutcheson, G. Dan and Jerry D. Hutcheson. 1996. Technology and Economics in the Semiconductor Industry. Scientific American (January), 54.
KPMG. 1996. Public Policy Issues Arising from Telecommunications and Audiovisual Convergence. Report for the European Commission (September). Available at: http://www.ispo.cec.be/infosoc/promo/pubs/execsum.html.
Lindsay, David. 1993. When Cultures Collide: Regulating the Convergence of Telecommunications and Broadcasting. CIRCIT Policy Research Paper #29. South Melbourne: Centre for International Research on Communication and Information Technologies.
McKenney, James L. with D. C. Copeland and R. O. Mason. 1995. Waves of Change: Business Evolution through Information Technology. Boston: Harvard Business School Press.
Midwinter, John. 1994. Chapter 2 in Institute for Information Studies, Crossroads on the Information Highway: Convergence and Diversity in Communications Technologies. Nashville: Northern Telecom Inc.
Midwinter, John. 1995. Annual Review of the Institute of Information Studies. Nashville: Northern Telecom Inc. and The Aspen Institute.
Moore, Gordon. 1996. Moore's Law Revisited. Keynote speech, IEEE International Electron Devices Meeting, San Francisco, 10 December. Report available at: http://www.isdmag.com/Events/IEDM.html.
Moore, Gordon. 1997. Moore's Law Repealed, Sort of (interview). Wired 5.05 (May), 166.
Mueller, Milton. 1989. The Switchboard Problem: Scale, Signaling, and Organization in the Era of Manual Telephone Switching. Technology and Culture 30, 3, 534-560.
Mueller, Milton. 1997a. Universal Service: Competition, Interconnection, and Monopoly in the Making of the American Telephone System. Cambridge, MA: MIT Press/AEI Series on Telecommunications Deregulation.
Mueller, Milton. 1997b. Telecom Policy and Digital Convergence. Hong Kong: City University of Hong Kong Press.
Nora, Simon and Alain Minc. 1980. The Computerization of Society: A Report to the President of France. Cambridge, MA: MIT Press.
OECD. 1992. Telecommunications and Broadcasting: Convergence or Collision? Information, Computer, and Communications Policy Series #29. Paris: OECD.
Oftel (UK). 1995. Beyond the Telephone, the Television, and the PC: A Consultative Document on the Regulation of Broadband Switched Mass-Market Services (and Their Substitutes) Delivered by Telecommunication Systems. London: Oftel.
Owen, Bruce M. and Steven S. Wildman. 1992. Video Economics. Cambridge, MA: Harvard University Press.
Pool, Ithiel de Sola. 1983. Technologies of Freedom. Cambridge, MA: Harvard University Press.
Slater, Michael. 1997. Living without Moore's Law. The Slater Perspective (newsletter), 31 March. Available at: http://www.chipanalyst.com/slater/perspective/1104sp.html.
Waters, Peter and N. Sekulich. 1996. Convergence or Collision: The Regulatory Treatment of Video-on-Demand in Hong Kong. Law Firm of Gilbert & Tobin (Australia), Updates. http://www.gtlaw.com.au/gt/news/updates/hkvod.html.
Weinschenk, Carl. 1996. The Great Wired Hope (Cable Modems). tele.com (December), 51-62.
Wicklein, John. 1980. Electronic Nightmare: The New Communications and Freedom. New York: Viking Press.
Yoffie, David, ed. 1997. Competing in the Age of Digital Convergence. Boston: Harvard Business School Press.