Unanticipated consequences & influences

XV.1 January + February 2008
Page: 74

TIMELINES: Unanticipated and contingent influences on the evolution of the Internet


Authors:
Glenn Kowack


Glenn Kowack, a pioneering networking entrepreneur, is writing a book about forces underlying unforeseen consequences in uses of digital technologies. This excerpt provides a fascinating perspective on the evolution of the Internet. Glenn and I have been good friends since we met in late 1978 in the Tampa airport, waiting for a flight to Havana to have a look at life on the other side of the Iron Curtain. We found music, color, socioeconomic equality, and daiquiris, but not much venture capital.—Jonathan Grudin

 


“History is lived forward but observed backward.”—Søren Kierkegaard

In the Beginning There Was Telephony

Some years ago I read an 1880s-era newspaper article about one of the first demonstrations of long-distance telephony. The reporter wrote, “What might this new device be used for? Well, people at a party in Manhattan might call people at a party in New Jersey. Or, a young man might use the telephone to ‘pop the question’ to his true love.” When the telephone was first deployed, many had difficulty seeing its value. Life and work were linked to the infrastructures of the time: Markets were local, and modern cities were densely constructed so that related businesses were near one another. How could the telephone compete with other technologies? The telegraph already provided transcontinental transmission. Pneumatic tube systems and couriers in London, New York, Paris, and other cities could deliver signed contracts across town in hours. How could a voice-only box possibly compete with hard-copy transmission? It took the telephone system a century to provide a rough approximation of quick document delivery, when fax machines became widely used in the 1980s.

From our vantage point, one could easily assume that the Internet followed a deliberate, highly rational, and even obvious path from its beginnings to today’s dominant networking technology. Many explicit goals and plans were in fact realized on the way to our ubiquitous commercial Internet. Yet, time after time, unanticipated factors, some inherent in the technology and some external, produced powerful and unexpected effects on its evolution. Social, economic, and industrial influences often had broad effects in the manner of shifting tectonic plates. Some were subtle, with intimate knowledge of technical issues required to understand how they had their effects.

From the beginning of the 20th century and well into the 1990s, networking was dominated by telephony, which consisted almost exclusively of national champions: highly regulated post, telephone, and telegraph monopolies known as PTTs. They not only operated networks; they also handled deployment, manufacturing, and research. Almost all were wholly owned and operated by their national governments. Many were actually government ministries. PTTs had enormous political clout.

A strong PTT was considered the only way to meet the demands of national priorities, business, and the public. Often the interests of users were last in line after economic stability, national security, prestige, and what could be very substantial government revenue. This control-and-stability-first model employed research, development, and deployment cycles that took many years. Global standards were developed and maintained in a similarly controlled and process-heavy manner by representatives of national technical committees that participated in the various historical stages of the CCITT—the International Telephone and Telegraph Consultative Committee (today the ITU-T, the standardization activity of the International Telecommunications Union, a United Nations agency).

The ARPANET Begins

In the 1960s, to write even simple programs required expensive equipment and rare skilled programmers. There was almost no independent software industry; software was strictly proprietary. A computer’s hardware architecture, its operating system, and even its applications were typically all made by the same manufacturer. Academics focused on basic research, carefully avoiding inappropriate influence from commercial interests. It was expected that new ideas would eventually be exploited by outside businesses, often large corporations, if at all. The infrastructure and process for starting new enterprises was far less developed and understood—venture capital was almost nonexistent.

In 1969 the United States Department of Defense ARPA (Advanced Research Projects Agency) chose packet switching, a technology then recently developed by Paul Baran and others, both as a subject of research and as a way to connect to expensive computers at research centers across the country. Packet switching had then-unknown attributes that would have a profound effect on the Internet’s evolution. ARPANET designers chose to base their work on the novel “connectionless” style of networking.

Planning for commercial deployment of these ideas was not on ARPA’s agenda. In fact, ARPA formally prohibited commercial exploitation of the ARPANET, which was created using public funds. This limitation would later become famously known as the “acceptable use policy” (AUP), which mandated that only academic institutions or companies with government contracts could use the network. By 1983 the ARPANET had grown to 113 nodes.

While internetworking research progressed, the PTTs were converting the existing analog telephone network to digital technologies, reconceiving hardwired circuits as virtual circuits, or connections. Connections were seen as the only way to guarantee commercially required levels of service and security. Each connection was to support real-time communication and be undisturbed by transient traffic crossing the Net. Control was centralized to meet technical and business requirements. Three critical business requirements were a viable business model, service level guarantees, and security.

The PTTs Ignored Connectionless Networking

The PTTs and data networking companies (notably IBM) had long product-cycle times and long-range research programs. They could have explored connectionless networking.

However, their technical practices, their worldview, and their three critical business requirements all revolved around networks employing centralized control. A network that no one actively ran as a single entity, as an operator does in connection-oriented networking, was beyond comprehension. Additionally, the PTTs sought a holy grail of “multimedia” support, a universal medium for voice, video, and data networking. Connectionless networks could not deliver the time-critical part of this goal. The PTTs were disinclined to pursue a technology that, from the start, appeared inadequate.

In contrast, the Internet was developed with research funding and was not constrained by these requirements. Major early uses, such as file sharing and email, did not require real-time response. Users of academic networks were comfortable with a research agenda and had relaxed expectations for network performance and stability. This was made more palatable by the network’s steadily improving capabilities, which engendered a sort of “rising tide” mentality. Funds earmarked for networking, provided by DARPA (Defense Advanced Research Projects Agency) or NSF, were in a sense free money, which made any purchase more acceptable. Within this specialized, somewhat isolated community, security was not a great concern, especially because contemporary delights such as identity theft, huge quantities of online personal information, and viruses had not yet emerged.

As a research project, the ARPANET was free to experiment with technologies of uncertain capability and consequences, including use of the connectionless networking model at a low layer of the network. Because connections were not set up before transmission, the software and hardware could be simpler and cheaper, and bandwidth could be used far more efficiently, since all capacity was unreserved and none sat idle in unused reservations. However, although one could set up connections at layers above IP, they would stand on a statistical service and thus could never be certain to behave in a deterministic manner. Without resource reservation, transients crossing the network could potentially disturb or block other traffic. In practice, the Internet was quite robust; after all, that’s what it was designed for. It had a tendency to stay up, in spite of shortfalls such as not guaranteeing timely delivery.

Connectionless networking was not well understood when it was first deployed, and it’s probably fair to say that it’s not yet fully understood today. The dominant attitude in the technical community well into the late 1980s, to the extent it thought about the question at all, was that the Internet was there for research, was not expected to be used for mission-critical applications, and would not be the final commercial version. So the Internet could focus on what worked at a basic level and not let potentially problematic unknowns get in the way of research directions.

Connection-oriented networking employs complex basic mechanisms to reduce the complexity of interactions of traffic crossing the network. Great effort is made to allocate resources, which are then guaranteed to be available, and from there, network operation is relatively straightforward. Connectionless networking is far simpler at the level of the basic components, but as the network grows, traffic patterns quickly become exceedingly complex; how traffic might combine or be impeded can be very difficult to ascertain. Fortunately, as IP use scales up, the law of large numbers appears to conspire in favor of its effective operation—but odd transients can and do pop up all over the network.
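For readers who know today’s networking APIs, the contrast survives almost unchanged in the standard socket interface, which offers both models side by side. The sketch below is purely illustrative: it is modern Python, not anything from the ARPANET era, and the host and port values are placeholder assumptions.

```python
import socket

# Connectionless (datagram, UDP-style): no setup, each packet is routed
# independently, and delivery is best-effort. The endpoint logic is trivial,
# but nothing guarantees arrival, ordering, or timing.
def send_datagram(message: bytes, host: str = "127.0.0.1", port: int = 9999) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message, (host, port))  # fire and forget

# Connection-oriented (TCP-style): a connection is negotiated before any data
# moves, and the endpoints then maintain state (sequence numbers,
# acknowledgments, retransmission) to present a reliable byte stream.
def send_over_connection(message: bytes, host: str = "127.0.0.1", port: int = 9999) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))  # explicit setup step before any data is sent
        s.sendall(message)       # reliable, ordered delivery
```

Note that even the connection-oriented case here is TCP running over IP: the “connection” is state kept at the two ends, standing on the same statistical, best-effort datagram service underneath, which is exactly the layering described above.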

Connections vs. Connectionless

By the 1980s the networking world had effectively split into two camps: the commercial (telephone and data) networks using connection-based networking, and the research-oriented, connectionless Internet. The former, if it paid attention to the Internet at all, knew that the Internet was only a prototype, and in some ways not even a serious one. After all, the Internet not only failed to support critical customer and operator requirements; in terms of longstanding theory and business practice, it fundamentally could not. And commercialization of the Internet was not a concern; it was prohibited by the AUP into the 1990s. However this technology might eventually be deployed commercially, the deployment was expected to be done by parties other than those doing the academic investigations, who would surely move beyond the technical approaches used in academic and research networks.

UNIX, Ethernet, and the PC

The same year ARPANET was first deployed (see above), UNIX, which was to play a major role in the evolution of the Internet, was created as an ad hoc software research project at AT&T Bell Laboratories. The UNIX creators took a novel approach to operating system design: Build a modifiable core system that does a few basic things simply and well, making it far easier to write applications. AT&T was restricted from entering the computer industry by regulation, and thus could not commercially exploit UNIX. Copies of the source code were widely distributed to universities, and by the 1980s UNIX was available on nearly every major computer architecture. Also in the 1980s, TCP/IP communication protocols were ported to UNIX and widely distributed. UNIX proved to be an effective server for network applications and in many environments a capable system on which to perform Internet routing. Before long, UNIX minicomputers and workstations were ubiquitous in universities and engineering organizations.

UNIX’s limitations relative to other operating systems resembled ARPANET’s limitations compared with connection-based networks. In particular, UNIX originally had only one type of scheduler, which allotted time to different applications in a manner that could not guarantee deterministic behavior. This parallel is also seen in another critical invention of the 1970s, Ethernet, which could not guarantee time-deterministic behavior, but made up for that in simplicity, flexibility, bandwidth, and generality. Despite some controversy about potential problems, Ethernet quickly became a market success and the primary medium for local area networks.
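To make the Ethernet point concrete, here is a toy model (an assumption-laden sketch, not a faithful simulator) of the truncated binary exponential backoff used by classic shared-medium Ethernet: after each collision a station waits a randomly chosen number of slot times before retrying, so the delay before a frame finally gets through is probabilistic rather than time-deterministic.

```python
import random

# Toy model of truncated binary exponential backoff (classic CSMA/CD Ethernet).
# After the n-th collision a station waits r slot times, with r drawn uniformly
# from 0 .. 2**min(n, 10) - 1. The 51.2-microsecond slot time is the value for
# 10 Mbit/s Ethernet (512 bit times).
SLOT_TIME_US = 51.2

def backoff_delay_us(collision_count: int) -> float:
    """Random wait, in microseconds, after the given number of collisions."""
    k = min(collision_count, 10)                 # exponent is truncated at 10
    return random.randint(0, 2**k - 1) * SLOT_TIME_US

if __name__ == "__main__":
    # The same frame can see very different delays on a busy segment.
    for n in range(1, 6):
        print(f"after collision {n}: wait {backoff_delay_us(n):.1f} microseconds")
```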

The PC, arriving in the early 1980s, created a large community of trained computer users familiar with keyboard, mouse, and email. It became an open platform for innovative new interoperable applications. This created a large, thriving, highly competitive shrink-wrapped software industry. Large profits and competitive pressures drove investment and innovation in manufacturing and development, resulting in much shorter development times.

Substantial academic networks that supplemented ARPANET and were also close to the IP community began to be deployed in the 1980s. These included ad hoc dial-up networks in the US (USENET) and Europe (EUnet), based originally on the UUCP (Unix-to-Unix Copy) protocol, and the academic CSNET, which began as an economical dial-up alternative to ARPANET.

The early performance of the Internet could not effectively support serious real-time processing requirements. Conveniently, neither could the early PCs, which did not typically run real-time or high-performance applications. The Internet supported applications such as email, FTP (File Transfer Protocol), and HTTP (HyperText Transfer Protocol), which were throughput oriented, not response-time sensitive. Furthermore, the unidirectional traffic flow of these applications didn’t require real-time synchronization of user data as in telephony. The unimportance of real-time response is seen in the success of the UUCP protocol, which connected thousands of UNIX machines over cheap, conventional dial-up telephone lines, allowing free email, network news, and file transfer, although delivery often took days.

In 1983, the ARPANET converted to TCP/IP and moved its military sites to a separate IP “MILNET.” CSNET also adopted TCP/IP around this time. In 1986, ARPANET was largely superseded by the National Science Foundation Network (NSFNET), which was also restricted from commercial activity. Experience creating and managing ARPANET, USENET, EUnet, CSNET, and other networks produced a self-aware community of network operators and a large community of network users. Importantly, none of the actual content, primarily email and network news, was ever specifically charged for, which removed a great deal of complexity and cost from the operations of the networks.

At this time, the thinking within the Internet community was that the Internet worked well, that it was incredibly useful and quite stable, and that someday there would be an Internet in broad popular use that would have great impact, but we didn’t know what it would look like when it was finally deployed. Nor did we think the final, commercial Internet would employ existing Internet technology. We expected the experience and discoveries of Internetworking to be absorbed by the big telecom providers, who would deploy and market them in a form quite different from that of the free academic experiment. All manner of technical decisions were up in the air. I recall hearing from a senior UNIX engineer around 1987 that no one knew which, if any, of the many popular protocol suites would be the eventual leader, or even if there would be a single leader. Would it be TCP/IP, or OSI (Open Systems Interconnection), or XNS (Xerox Network Systems), which was considered a significant technical advance over TCP/IP, or even perhaps Novell NetWare, which dominated the PC world? Experts could see many technical and business pros and cons associated with each.

Furthermore, the Internet was not considered a candidate for a universal medium comprising data networking and time-critical communications such as telephony, radio, and television broadcasting. There were already massive, well-understood, highly efficient networks designed for those purposes. The Internet’s well-known poor performance at time-critical services was a fundamental limitation, and the Internet was also not designed for efficient broadcasting. This created an interesting freedom: The developers of the Internet could concentrate on making it do what it did well, and not worry about making the Internet all things to everyone.

Denouement

When the noncommercial nature of the Internet eventually eroded, the PTTs took notice. Starting around 1990 the U.S. and European UUCP networks, UUNET and EUnet, transitioned to IP and became for-profit companies, joining PSINet, founded in 1989. These commercial networks soon connected to NSFNET, which walked a tightrope as a noncommercial network chartered to foster widespread interconnection. NSFNET adroitly worked with emerging commercial networks without violating the AUP. Eventually, nongovernmental exchange points were set up, NSF withdrew, and the era of the commercial Internet began.

The PTTs had been preparing. Along with IBM and other large computer companies, they had participated in the development of the Open Systems Interconnection (OSI) networking standards within the International Organization for Standardization (ISO) and the ITU since 1974. In 1988 the OSI reference model was formally completed. The U.S. Department of Defense and the European Commission immediately adopted OSI as their standard.

This seemed to be the long-awaited transition, expected and even favored by many in the IP community. Longstanding members wrote books about the transition. It seemed OSI would, inexorably, win the day. The stage was resoundingly set for a reasonably smooth transition from IP research networking to PTT-oriented commercial networking.

It never happened. The cumbersome OSI repeatedly failed to demonstrate that it was practical. In the meantime, there was a relatively huge installed base of reasonably well-performing IP software. Why trade a solid, running technology for an entirely new design and complex, untested OSI software? Legions of networking experts trained on IP continued to improve the Internet, exploiting ever more powerful technology to close in on the quality of service that previously required connections.

It would be pleasingly theatrical to describe some dramatic crescendo or key event in the growth and commercialization of the Internet, but there was no explicit confrontation, no point when the competition to Internetworking abruptly failed. Rather, IP just continued to accelerate faster, and more commercially, than anyone had ever anticipated.

Trying to Make Sense of History as It Is “Lived Forward”

Once commercial Internetworking took off, professionals found trying to navigate the big networking picture deeply confusing. So many fundamentals and particulars changed so quickly, and often unexpectedly, that it was exceedingly difficult to know what was going on and what direction to take. Literally thousands of ISPs were founded in the U.S., and similar numbers in Europe. Service prices were dropping rapidly, new and cheaper devices were introduced continually, client software was changing rapidly while entirely new applications such as the Web appeared abruptly, and bandwidth requirements (driven substantially by the Web) and provisioning were exploding. Business relations between ISPs were tumultuous, with each jockeying for advantage, some becoming leaders in the dial-up business, others in long-haul IP capacity, with battles over “settlement” and pricing—who was whose customer and who paid whom how much. Battles over market leadership were made all the more complex by the arrival of huge quantities of venture capital, which often propelled ISP growth far ahead of revenue and made it possible to assemble a large ISP by acquiring many smaller ISPs. Perhaps the most confusing change was from engineer to entrepreneur, which could be both heady and disorienting in what was, for a surprisingly long time, an MBA- and lawyer-free zone.

It was amazing to see the different perspectives held even by experienced professionals in the field. I recall in 1996 talking to an experienced Silicon Valley software engineer and entrepreneur who confidently proclaimed that there was no way to make money running an ISP, and, because a new ISP could show up and undercut the established ISPs at any time, there never could be—so one should never even think of going into that business. Another, an experienced European Internetworking expert, asserted that it was entirely reasonable for the PTTs to time their rates of change to match the typical technical career, so as to not upset longstanding employment practices and create unemployment, and that the Internet should modify its rate of change to fit such cycles.

The Return of the Telecoms

Within only a few years the telecoms learned enough to enter the Internet service business, either through acquisition or by starting their own ISPs. Since then they have been busily working to construct rational business models and to divert innovation cycles to the service of their business requirements rather than the unbridled interests of users. However, even after more than 15 years, they have not been able to return to a state of “business as usual.” The changes brought by the Internet continue to carom through the telecommunications business—free WiFi hotspots and VoIP being prominent recent examples. The Internet continues to drive disruptive change in everything from the media business (music, film, television, and radio), education, international outsourcing, retailing, and travel, to the press.

Conclusion: Serendipitous Obscurity and Contemporary R&D

The obscurity of the Internet and its inability to satisfy prevailing commercial requirements allowed it to develop and grow without appearing to be a potential competitor. In this way the Internet was not subjected to efforts to kill or co-opt it, and it gained the time and space to mature. The failure to take a “breakthrough” technology seriously, as with the telephone example with which this column began, this time played a critical role in creating our wonderful commercial Internet.

Acknowledgments

This article was prepared with helpful input from Dan Lynch, founder of InterOp, Dave Crocker of Brandenburg Consulting, Suzanne Woolf of the Internet Systems Consortium, and Internet pioneer Gary Grossman, presently of Riva sur Piedmont.

Author

Glenn Kowack
gkowack@illinoisalumni.org

About the Author

Among his many accomplishments, Glenn Kowack directed UNIX R&D at Gould Computer Systems in the 1980s, was the founding CEO of EUnet Limited (the first commercial and multinational ISP in Europe), a member of the board of directors of the Commercial Internet Exchange, and a contributor to the founding of ICANN, the Internet Corporation for Assigned Names and Numbers.

EDITOR

Jonathan Grudin
jgrudin@microsoft.com

©2008 ACM  1072-5220/08/0100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2008 ACM, Inc.

 
