8. What Are Technology’s Costs and Benefits?

It is usually difficult or impossible
to manage the effects of an innovation

so as to separate the desirable
from the undesirable consequences.

Everett M. Rogers

Technology has both costs and benefits. Every technology. The production and control of fire have protected us from predators and from the cold. But fire has also burned our fingers and our homes. Clearly, costs and benefits are measured in many ways other than money: time, jobs, fairness, sustainability, environment, advances in knowledge, culture, and more.

For instance, while motor vehicles cost money to acquire and operate, enabling us to do things that generate money, they also cost about 40,000 deaths per year in the U.S. Air pollution and dumps full of tires and crushed cars are environmental costs, but cars provide environmental benefits, too. Prior to the car, city streets were filled with working horses, each producing an average of 22 pounds of dung per day. In 1900, the 150,000 horses in New York City and the 15,000 in Rochester produced over half a million tons of manure—enough to form a block 200 feet on a side and 190 feet tall. At the time, the horseless carriage appeared quite an environmental improvement.

Often it seems that the more we get of one desirable quality, the more we lose of another. These tradeoffs transcend the details, and new technology simply brings us closer to the extremes. For example, consider how easy it is to become dependent on something very useful. And the more useful a technology is, the more we rely on it—just like a crutch. But unlike with a crutch, we don’t plan to “get better.” The Y2K threat reminded us how much we lean on computers to process our financial transactions and run our businesses, and we do so precisely because they are so useful. How much will we rely on future technology far more compelling and enabling than anything we know today?

An unpredictable crutch is more dangerous than a predictable one. Unfortunately, many of our enabling technologies are complex in ways that make them hard to predict. When it comes to generating electricity from radioactive material or to shooting down enemy missiles, predictable is very important. In this chapter we will see why these systems, in particular, have already surprised us with failures—and why we continue to make ever more complex technologies despite the risks.

Such failures can be sudden and dramatic or subtle and lingering. “Catastrophic” and “chronic” describe costs and benefits in general. We often trade one for the other. Fire is a catastrophic event. Inhaling the asbestos that prevents our homes from burning causes chronic problems. Heating our homes or generating electricity with coal pollutes the air and can also cause chronic problems. Doing so with nuclear power keeps the air clean, but can cause catastrophic problems if a meltdown of the reactor core occurs.

Control and freedom is another pair of linked costs and benefits. After the terrorist attack on September 11, 2001, many Americans were ready to trade freedom and privacy for control and security. Technology, of course, was ready to accommodate. More than 500 years earlier, Gutenberg’s printing press spread freedom of speech while undermining the control of the Catholic Church. Some technology can be used either way, but using it to increase either control or freedom often decreases the other.

Finally, it seems inevitable that the faster our technology progresses, the faster it renders itself obsolete. Cellular telephones shrink while gaining new features. Most don’t live to see their 2nd birthdays because newer models lure their owners away. The cost here is environmental, with the creation of thousands of tons of garbage each year, but other forms of technological progress can cost jobs and culture.

These patterns do not absolve us from diving into the particulars of issues that concern us. “The devil is in the details,” and there is no substitute for that analysis. But how do you put those details into context? How do you decide which of the uncounted mountains of details to explore? These general patterns describe the big picture. They provide that context and can serve as a map. Future technology may make our choices more complex and difficult, but these patterns will weave through their details, too.

So, you might say let’s just back off from science and technology.
Let’s admit that these tools are simply too hot to handle…
throttle back to a minimal, agriculturally intensive technology,
with stringent controls on new technology…
or you might imagine throttling back
much further to hunter-gatherer society.

Carl Sagan

Enabler vs. Crutch

Hunting and gathering gave way to agriculture in most parts of the world. Each generation of farmer remembered less of the foraging ways. Population grew so dense that there was no going back. Even if all the skills and knowledge of hunting and gathering could be remembered, they could not support ten to one hundred times as many people in the same area. Agriculture enabled us to multiply and specialize, but not go back.

Writing enabled us to remember far more outside our brains than inside. And, like a crutch, it allowed our memories to lose their edge. Computers have gone farther, making it ever easier to look up facts when we need them and no longer memorize them. By linking computers, the Internet has made it easy to search the world over for more facts than anyone could ever retain. The more compelling a technology is, the more likely we will grow dependent upon it.

Dependence is not necessarily bad. We can apply our brainpower to understanding relationships, connections, and processes—including the processes for looking up those aforementioned facts. Well, perhaps this does not exonerate dependence, but simply asserts the obvious: that the benefits can outweigh the costs. Less obvious is that, even when they do, the costs are still lurking, and can still be dangerous.

For instance, it seems an appropriate use of computers to free countless humans from simple calculations. After all, computers can add millions of numbers in a second without a mistake, while the human brain, so prone to simple errors, is capable of creativity beyond any computer (any computer of the present day, at least). A clear case of the benefits outweighing the costs. Yet, at the end of the 20th century, the Y2K scare forced us to recognize the cost of this otherwise reasonable choice.

In Switzerland, a 105-year-old man was directed to attend elementary school when a computer program miscalculated his age to be five. In Norway, 16 airport express trains and 13 high-speed, long-distance Signatur trains refused to start because the on-board computers did not recognize the date—and that happened December 31, 2000, a year after problems were expected. Y2K helped us realize how pervasive computers had become. We become reliant on them precisely because they are so effective at certain tasks—whether processing thousands of credit card transactions per second or monitoring our car’s wheels for skidding 45 times each second.

The Simple Reasons Behind the Y2K Bug

The Y2K problem arose from a memory-saving shortcut of storing only the last two digits of calendar years in our computer programs. Why repeat “19” a billion times in valuable computer storage if, when printing or displaying, automated programs could simply prefix it to the two digits that we do store? In the 1970s and 80s (that’s 1980s, of course), it seemed a safe assumption that software would be replaced well before 2000. Besides, profits depended on saving money “this year,” not decades in the future.

We discovered that not only did old “legacy” software survive until 2000, but new software often reused standard subroutines containing the two-digit shortcut. These basic building blocks of software perform tasks common to many programs, such as manipulating or comparing dates. Like pouring a bit of old milk into a fresh carton, reusing old subroutines can make the whole program go bad.

The problem arises when computers perform arithmetic on dates, figuring out if your car’s engine is overdue for service or your credit card or mortgage has been paid on time. January 2000 minus January 1999 is one year, but January 1900 (which is what a program that stores only the last two digits would assume) minus January 1999 is not. Is it –99 years? That depends on how the computer software is written, and many of the original programmers who could answer that question were long gone. In many cases, we simply did not know what would result.
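The arithmetic trap is easy to sketch. The toy subroutine below is a hypothetical stand-in for the legacy code described above—real Y2K-era programs varied widely—but it shows how the two-digit shortcut turns a one-year gap into a negative number:

```python
# Hypothetical sketch of a legacy two-digit-year subroutine.
# Real Y2K-era code varied; this only illustrates the trap.
def years_elapsed(start_yy, end_yy):
    # The shortcut: assume every stored year belongs to the 1900s.
    return (1900 + end_yy) - (1900 + start_yy)

print(years_elapsed(95, 99))  # 4, as expected
print(years_elapsed(99, 0))   # -99, not the 1 year we meant
```

Whether a program treated that –99 as an error, an overdue bill, or a century of negative interest depended entirely on the code around it, which is exactly why no one could say in advance what would happen.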

We escaped the worst predictions for the Y2K crisis, spending billions of dollars on preventive measures, such as consulting and system upgrades, and experiencing only scattered system failures. But, though we averted disaster, we were forced to recognize how dependent our society had become on technology. On November 9, 1965, when an electric power relay at Niagara Falls, New York, switched off, the reminder was not as gentle.

That switch triggered a series of events that, within minutes, plunged 30,000,000 people over 80,000 square miles into darkness. The switch operated properly, protecting a power line from overload, but its threshold was based on two-year-old power levels. In those years, consumption had risen to the point that a momentary surge in demand triggered the mechanism. Taking one power line out of service burdened the remaining lines with an extra load, triggering their power relays. Like dominos, the parts of the power system knocked each other offline.
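The domino mechanism can be captured in a toy model. This sketch is not grid engineering, just the arithmetic of shared load: identical lines split the total equally, and any line pushed past its relay threshold trips, dumping its share onto the survivors:

```python
# Toy cascade model (an illustration, not real power-grid dynamics).
def lines_left(total_load, n_lines, relay_threshold):
    live = n_lines
    # A relay trips when its line's share of the load exceeds the
    # threshold; the load then redistributes over the remaining lines.
    while live > 0 and total_load / live > relay_threshold:
        live -= 1
    return live  # 0 means every line tripped: blackout

print(lines_left(100, 5, 22))  # 5 -- all lines hold
print(lines_left(115, 5, 22))  # 0 -- one trip cascades to total failure
```

Notice that a modest surge does not cost the system a modest fraction of its capacity; once one relay opens, the arithmetic guarantees the rest follow.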

Deprived of electricity, subway trains stopped and sat in pitch-black tunnels. Traffic lights stood dark over impossible traffic snarls, the orchestrated movements of cars and trucks thrown into chaos. Half of the 150 hospitals were not equipped with emergency generators, so they went dark. Highly flammable gases used in surgery prohibited the use of candles or any open flames to finish what surgeons had started.

Airports, similarly unprepared, disappeared into the night. Approaching planes searched for the landing lights and listened for the control tower, unsuccessfully. Some basic communication was reestablished from planes parked on the tarmac, as their radios were powered by onboard batteries.

Growing more and more specialized over the millennia, we have moved from self-sufficient hunters and gatherers to cogs in a “great machine.” If the machine stops, we wait until it starts again because, with few exceptions, we are unprepared to operate outside of it. This disaster illustrates how dependent we are on a complex structure of technology so tied into our lives that we are hardly aware of it.

Where once we lived isolated and secure,
leading our own limited lives, whose forms
were shaped and controlled by elements
with which we were intimately acquainted,
we are now vulnerable to change which
is beyond our own experience and control.
Thanks to technology no man is an island.

James Burke

Back to the fateful day in 1965. 800,000 subway passengers waited for rescue. Those stuck in traffic waited. Those at home waited, hoping that the food in their now-dormant refrigerators would not spoil before everything started working again. They probably did not consider that the food in their cupboards could run out before local stores could be restocked by trucks that bring food from ships and trains—all part of that great machine. The 250 airplanes planning to land at John F. Kennedy Airport diverted to airports outside the affected area. They could not wait.

Power was back by early the next day, leaving a few dead and many shaken. The tens of millions who experienced the blackout had received a harsh wakeup call. Any illusion that we could get along without our technology was gone.

Was it a mistake to allow ourselves to become so reliant? If so, where would we draw the line? No computers? No electricity? No writing, agriculture, or any technology at all? Even our spear-wielding Cro-Magnon ancestors would balk at that.

Avoiding reliance on technology is rather hard—and unpleasant—to imagine. It allows us to accomplish so many things so much more easily. We make use of it and stop doing things the old, inefficient way. Those of us who “did it the old way” age and die. Institutions that supported the old way crumble and disappear. Our bridges are burned.

The solution is not to go all the way back to a time before technology, but to recognize what choices we are making. In the 21st century we have more technology choices than ever. Whether those choices are individual, organizational, national, or global, we can consciously weigh the costs and benefits. What are the dangers of becoming dependent on a given technology? Does it enable us in ways important enough to justify that dependence? If so, can we mitigate the risk of dependency? And does mitigation involve other technology with similar issues?

In retrospect, could we have prevented the failure of the power relay? Could we have avoided the shortsighted decision to store only the last two digits of each year in computer files? Knowing what we do now, of course we could have averted these specific problems, but without the benefit of hindsight can we avert future failures? The answer becomes more and more important as ever more enabling technology fosters ever greater dependence. Unfortunately, the systems of our great machine continue to grow ever more complex…and that can make them very difficult to predict.

Things should be as simple as possible,
but not any simpler.

Albert Einstein

Complexity vs. Predictability

There are two reasons that complex systems can be hard to predict. The first is a rather obvious one: complex systems are harder to understand than simple ones. And, if they also happen to have more points of potential failure, all the worse. Clearly, we cannot predict what we do not understand unless we can observe its full range of behavior. For instance, the sun rises and sets each day. Simple. We need no scientific understanding of the sun to predict that, but we cannot do the same with a nuclear plant. Building one just to see if it blows up is impractical.

The second reason also has to do with behavior and is based on a concept that came up in our discussion of how technology works in Chapter Four: emergent behavior. With the right ingredients, a system can behave quite differently from any of its components. If it is so complex that an expert can understand only one component, then bringing together experts, each versed in one component, to predict the behavior of the whole would not be very helpful.

We could avoid creating complex systems altogether. That would free us from this source of unpredictability. But the benefits of a complex system can be alluring and may free us of other costs. Take air travel, for instance, with its ability to transport us thousands of miles in just hours.

Alternately, we could drive a car, but at what cost? Death rates per mile traveled are 100 times higher for driving than flying. So, in spite of the complexity of airplanes and air traffic control, we continue to choose flying. For as we explored earlier, returning to simpler technology means giving up a lot.

With flying so much safer than driving, maybe this tradeoff between complexity and predictability is more theoretical than practical. The key question: do our complex systems really fail when we do not expect them to? If so, do these failures affect us?

In 1974, the U.S. government released the “Rasmussen Report” to document the safety of nuclear power. Under the direction of MIT nuclear engineering professor Norman Rasmussen, the report was based on careful, systematic analysis of two reactor sites that represented U.S. reactors. Rasmussen and his team were familiar with emergent behavior, and they considered how several minor problems could combine to cause a major one. Their scientific approach estimated probabilities for each type of occurrence and for a variety of their combinations.

The conclusion: a meltdown could occur once in every 17,000 years of operation for each reactor. Based on the 107 reactors the U.S. had by 1998, that works out to one predicted meltdown about every 159 years. Less than five years later, it happened.
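The report’s headline number is simple division. Taking the figures quoted above at face value, the expected interval between meltdowns falls out directly:

```python
# Expected meltdown interval from the Rasmussen figures quoted above.
meltdowns_per_reactor_year = 1 / 17_000
reactors = 107                      # U.S. reactors by 1998
interval = 1 / (meltdowns_per_reactor_year * reactors)
print(round(interval))              # ~159 years between meltdowns
```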

Middletown, Pennsylvania: Three Mile Island nuclear plant. At 4:00 AM on March 28, 1979 in Unit 2 of the plant, a combination of malfunctions and mistakes shut down the pumps circulating water through the steam generator (loop B in the diagram of a reactor). The heat in the water that flows through the uranium reaction (loop A) had no place to go, so the reactor core heated up. Heat sensors detected the increasing heat and water pressure, automatically shutting down the reaction by inserting control rods into the core. Also automatic was the opening of a relief valve, which removed the high-pressure steam (from loop A).

[graphic not available in web version of book]

Both automatic responses were correct, but the relief valve failed to close once the steam had been bled off. Now, instead of excess pressure, the water circulating through the core had inadequate pressure. Two high-pressure injection pumps came on automatically, which led to the next problem. Thinking that they were doing the right thing, plant operators turned off one pump and turned down the other. Put simply, the operators had misunderstood what some of the 600 warning lights were telling them. Sometimes too much information can cause as much trouble as too little.

In this case it caused more. At approximately 5:30 AM, operators turned off the pumps circulating water through the reactor core (loop A), still believing that they were following the proper procedures. Deprived of water, half the reactor core became exposed, melting and releasing radioactive material. Around 6:30 AM, they figured out that the valve had not closed (loop A), which had been the second problem. They restored water to the reactor and started the process of getting it back to normal temperature and pressure.

The Rasmussen Report had not anticipated that particular interaction of independent events or the short time it would take to encounter them—less than five years after the report was issued. While the operators could be blamed for much of the problem, the complexity of the system was also responsible.

With systems monitoring systems, there are many modes of failure and too much information for the operators to absorb (including the 600 warning lights). The safety systems that monitor other safety systems add complexity, offering new ways that failures can occur.

The real problems we encounter tend to be more complex than the theoretical problems we anticipate. Truth is stranger than fiction. In Beyond Engineering, Robert Pool shares the observations of Hyman Rickover, someone “who oversaw the construction of more than a hundred reactors.” Rickover contrasted nuclear reactors in theory (“on paper”) and in practice:

On paper > In practice

It is simple > It is complicated

It is small > It is large

It is lightweight > It is heavy

It is in the study phase > It is being built now and is behind schedule

Little development required (it uses off-the-shelf components) > It requires an immense amount of development (much of it on apparently trivial items)

It can be built very quickly > It takes a long time to build (due to engineering development problems)


The more we study the major problems of our time,
the more we come to realize that they
cannot be understood in isolation.
They are systemic problems, which means that
they are interconnected and interdependent.

Fritjof Capra

Another example of a system so complex that we failed to predict its behavior became infamous during the 1991 Persian Gulf War when a SCUD missile launched from Iraq crashed through barracks in Saudi Arabia, killing U.S. soldiers. The soldiers were protected by the Patriot missile, a sophisticated technology that had worked very well…right up until it missed that SCUD. While some tasks are so difficult that a number of failures are expected, missing the missile was a tragic surprise.

Why Did the Patriot Fail?

What caused the Patriot missile to miss the Iraqi-launched SCUD missile that killed 28 soldiers? A mathematical error that did not show up when the missile was tested. Why did it show up in combat? The radar tracking system that was supposed to guide the missile to the incoming SCUD had been running for 100 hours, but had been tested and certified for only 14 hours. If it had been shut down every 14 hours, the mathematical error would not have grown large enough to matter.

But why should it matter, whether the radar system ran for 14 hours or 100? To answer that, we need to poke inside the Patriot, which involves radar and voting for ice cream.

Radar works by transmitting an electromagnetic wave and then timing how long it takes to bounce off an object and return. That delay is translated into the direction and distance of the target, so it must be fairly accurate. The Patriot system counted this delay in tenths of seconds, but computers represent information in binary, and the binary equivalent of one tenth is not exact. The discrepancy is like the one you encounter when representing fractions as percentages.

Ask three people what their favorite ice cream flavor is. Chocolate, vanilla, and strawberry each gets one vote, or 33% of the total. Add all the votes to get 33 + 33 + 33 = 99%. One percent is missing. If each flavor gets 33.3% of the vote, this still adds up to only 99.9%, so one tenth of a percent is missing. You can keep adding precision to make the error arbitrarily small, but that precision costs you effort and costs the system memory and performance. At some point, the error is just too small to be significant.

The error in the Patriot’s count of tenths of seconds was very small, but it accumulated over time. Over 14 hours, it did not affect performance, and since 14 hours was as long as the system was specified to run between shutdowns, it was not significant. After 100 hours, it was 0.34 seconds. That was significant. Traveling at three times the speed of sound, a Patriot missile goes about 1000 feet in 0.34 seconds. And whether the Patriot was 1000 feet beyond it or short of it, the SCUD had plenty of room to get by.

The Patriot is a sophisticated and complicated technology, making it difficult to predict. Because it consists of many parts developed by many people over a long period of time, it can fail in ways that no single person is aware of. The original system was designed to track airplanes. The engineers who redesigned it to track missiles, which are much faster, may not have known all the assumptions that the original engineers made. In a simple technology, this would not have mattered. The redesign would have been straightforward because all the parts of the system—and their interactions—would have been apparent.

But there was no simple alternative. The best missile defense the U.S. had at that time was the Patriot. Given that the U.S. was at war with a country that had surface-to-surface missiles, the Patriot was the best option. Ironically, a software fix to the timing problem had already been developed and distributed, but it arrived after the SCUD struck.

If nothing else competes for your time and money, additional testing is always good. Of course, that is never the case. The complexity and dangers of a technology can suggest how much testing we should do. That testing effort becomes part of the technology’s cost…and part of our evaluation. Sometimes, we evaluate the complexity and dangers as so great that we do not pursue a technology, as in the following case of biotechnology.

Baboons do not contract AIDS. In 1995, Dr. Steven Deeks suggested that injecting elements of baboon bone marrow into humans with AIDS might resuscitate their impaired immune systems since those systems rely on cells produced in bone marrow. With many lives on the line in the U.S. and even more outside, this was a serious proposal. Delaying an effective treatment meant needless loss of life.

Evaluating this proposal, the medical community recognized a complex, unpredictable system. The rapidly mutating human immunodeficiency virus (HIV), the genetic diversity of patients, and the unknown viruses that might be lying dormant within the bone marrow of baboons formed an interacting biological system that everyone agreed was beyond our ability to predict. Since serious diseases have jumped from animal to human hosts, mutating as they go, this test ran the risk of introducing a new one.

To grasp even more clearly why the risk was so high, let’s explore this in a bit more detail. HIV jumped from other primates to humans. The influenza virus develops in birds, which are not affected, and then spreads to pigs (wallowing near and in bird feces), which allow it to mutate into a form that can jump to humans. That mechanism had evidently changed when several people died in Hong Kong from a flu that jumped directly from chickens to humans (no pig incubator necessary). What do you do when a complex system suddenly changes and you can no longer predict its behavior? Epidemiologists pushed for the annihilation of all chickens in Hong Kong before it could spread farther. After this drastic step, that particular flu strain disappeared.

But HIV and influenza are not the only diseases known to jump from animals to humans. Long ago, once humans domesticated cattle and started living in close proximity to them, the animals probably gave us measles, tuberculosis, and smallpox. With all this in mind, health professionals worried that an HIV-infected human could prove an easy training ground for a baboon-hosted virus. If it existed, it could venture from the baboon’s marrow out to the injected human’s body, trading pieces of genetic material with other viruses, mutating into forms unknown.

Nobody could predict if the proposal would save lives or if it would create a plague far worse than AIDS. Even though the plague scenario seemed extremely unlikely, the compromise agreed to was to test the baboon marrow on only one patient, so that it could be better controlled. The result was anticlimactic: the baboon’s bone marrow did not grow in the patient’s body, perhaps rejected by the remnants of the patient’s immune system.

So far we have reaped neither the cost nor the benefit of that medical technology. But this area—and biotechnology in general—presents us with far more complex and hard-to-predict systems than we have ever encountered. Living systems do not have emergent behaviors as a side effect, as some of our mechanical technologies do, but as a distinguishing feature.

This does not mean that systems as complicated as nuclear reactors, missile defense, or medical technology are inherently unsafe. It does mean that complex systems are very difficult to predict and, so, we have had to learn by experience. In human evolution, the first compound tools (i.e. created from multiple parts, such as lashing an axe-head to a handle) must have caused some surprises. With experience, we learned to predict those pretty well. With time and sometimes painful experience, we will learn to predict more complex systems pretty well. Currently, computer simulations show promise in this, but before that technology gets our current complex systems under control, we are already off creating technologies even more complex. This cost/benefit tradeoff of complexity vs. predictability is a pattern that will not go away.

In the majority of cases,
the evil is very insidious…
The worker falls into ill-health
and sinks away out of sight
in no sudden or sensational manner.

British asbestos textile factory inspector, 1898

Catastrophic vs. Chronic

Fire is a catastrophic problem. In buildings, it suddenly takes many lives each year, and has since humans first brought it into their early abodes. While caves, adobe, and igloos cannot catch fire, many structures are constructed of materials that can. Asbestos, a fibrous mineral, was used extensively as a fireproof insulation because it resists burning. Its resistance to heat also made it an ideal material for brake pads on cars, trucks, and trains. A vehicle that does not stop can also be catastrophic.

Fibers from asbestos can scar lung tissue and cause asbestosis. Asbestos exposure has also been linked to cancers of the lung, larynx, pharynx, oral cavity, pancreas, kidneys, ovaries, and gastrointestinal tract. In 1950 an internal report from the chief physician of Johns-Manville, a major asbestos manufacturer, said that “the fibrosis of this disease is irreversible and permanent so that eventually compensation will be paid to each of these men but as long as the man is not disabled it is felt that he should not be told of his condition so that he can live and work in peace and the company can benefit from his many years experience.”

These are chronic conditions that take many years of exposure to develop, so insurers of the asbestos industry argued over who should pay the victims: the insurer at the time of the exposure or the insurer at the time of the diagnosis and claim. The death of the asbestos industry in the U.S. was a messy business near the end of the 20th century, with former workers often dying before courts awarded compensation. What caused this chronic problem in the first place? Fixing the catastrophic problems of fires and failed brakes. Without fully understanding the costs and benefits, we traded catastrophic problems for chronic ones.

Risks that are under someone else’s control,
potentially catastrophic and unfamiliar
are perceived as greater than those with
the opposite features. That is why most of us
view riding our bicycle in a busy street as
a more acceptable risk than living near
a nuclear power station, although rational
analysis says that you should stay off the bike.

John Krebs

The roots of the word “catastrophic” come from Greek, with “cata” meaning down, against, back, and “strophe” meaning a twist, or a turning about. Although it is often used to describe costs, we can take it in its more general meaning of something great and sudden. “Chronic” comes from the Greek “of time.” It is also commonly used to describe only costs, but we use it for benefits as well.

Trading catastrophic for chronic, or vice versa, is a recurrent theme in the history of technology. Dying of cold in the winter is catastrophic. So burn coal. But the effects of coal pollution (lung cancer, acid rain, and global warming) are chronic. So you decide to split the atom. Chernobyl ran clean until operators disabled the automated safety systems to try out a new emergency procedure. In just five seconds, the reactor’s power skyrocketed to 500 times normal. It melted, caught fire, and exploded, spreading radioactive material into the air. That is catastrophic.

We tend to fear catastrophic costs we cannot control more than the chronic ones we can. So, while the effect of many diseases can be catastrophic—death—the result of misuse of antibiotics to fight those diseases can be chronic. Specifically, antibiotic-resistant bacterial strains are a chronic problem, which we create by either overuse of antibiotics or failing to complete the prescribed course (e.g. stopping halfway through a 60-day prescription because we started to feel OK or ran out of pills).

Catastrophic costs are often easier to predict than chronic ones. With 40,000 deaths from car accidents in the U.S. each year, we have plenty of statistics to project into the future as well as a “smoking gun” cause of those statistics. Asbestos is not as easy to identify as a car when it causes a death because, over the years it takes to kill, many other factors affect the victim. Medical science can now readily identify the fingerprints of asbestos-related disease, but other chronic diseases are still very hard to attribute and, consequently, to predict.

Modern life exposes us to pesticides and drugs in our foods, as well as pollutants in our air. Could these cause or aggravate health problems such as asthma? According to the CDC, “Asthma is a complex disease that is increasing in prevalence in the United States. Poor, inner-city minorities have disproportionately high rates of mortality from asthma. We still don’t know what causes this disease or how to cure it…” We are not suggesting that we know better than the CDC what causes asthma. We are suggesting that the chronic costs of a technology can be hard to connect to the offending technology and so will be very hard to predict.

With novel technologies, we will encounter both catastrophic and chronic costs and benefits. The more technology we employ, the harder it will be to untangle cause and effect, especially in the case of chronic costs—or even benefits. As this complexity increases, we will need to remember the simple pattern of tradeoffs that may lurk just beneath technology’s glitzy veneer.

We are in a new cycle
[following the 9/11/2001 terrorism]:
We’ll trade our privacy
to be more collectively secure.

Harold J. Krent

Control vs. Freedom

At the end of the 19th century, a 22-year-old Italian man arrived on British soil bearing a small black box he claimed could be used to send telegraphic messages without telegraph wires. Thinking more brilliantly than he was paid to, a British customs official recognized the threat to security—the very stability of the British Empire—if just anyone could move information unseen by the government. Taking the initiative, he smashed the box and sent Guglielmo Marconi back to Italy.

The technology in question—radio—ultimately increased freedom by allowing people to communicate more freely, and it undermined government control. It was not the first technology to do so: in the 15th century, Gutenberg’s printing press undermined the control of the Catholic Church by increasing freedom of communication. Nor was it the last: early in the 21st century, the Taliban in Afghanistan destroyed televisions and satellite dishes because they undermined the Taliban’s control of information.

Little more than a decade after Marconi’s prototype radio was smashed, Britain nationalized (and protected) much of the Marconi Company’s facilities located within its borders, claiming that wireless communication was a matter of national security. How else could the British navy communicate with its ships at sea? Today, radio coordinates security forces throughout the world, and in Britain television cameras monitor streets for crime, where the mere presence of cameras, it is hoped, will discourage criminals. So the same technologies can also be used to increase control at the expense of freedom.

Like most powerful technologies,
total surveillance
will almost certainly
bring both good and bad things into life.

James Wayman

The tradeoff of control (and security) for freedom (and privacy) is a recurring pattern in society. Since it so often involves technology, it is also a cost and benefit of technology. It is up to all of us to weigh these tradeoffs, because those developing the technologies may not. A manager of a system being designed to identify people by the spectrum of light reflected from their skin—much as satellites identify minerals or camouflaged vehicles—said, “We develop the technology. The policy and how you implement them is not my province.”

While Internet technology allows more freedom of communication, working its way around even China’s strict controls, it is privacy that is generating the most controversy. Cameras watched people entering the stadium for the 2001 Super Bowl in Tampa, Florida, but unlike conventional systems monitored by people, this one was monitored by computers. Each face was digitally characterized and compared to a database of known criminals. This was just a test run for implementations to come: although the system found 19 probable matches, nobody was arrested.

A U.S. government mandate to incorporate global positioning systems (GPS) into cellular phones is motivated by the importance of locating calls to 911 emergency centers. Calls from landlines—standard home, business, and pay phones—already provide location information on the computer screens of 911 operators. Even if you can’t tell them where you are, they know. This would be useful for cellular phones, too. But, though it has been called a matter of safety, it is also a matter of privacy. The same technology would allow anyone making a cellular call to be located and, over time, tracked. Already, the content of calls can be monitored in the name of national security.

Echelon, a mostly secret cooperative effort joining the U.S., Britain, Canada, Australia, and New Zealand, monitors phone calls. Cellular calls are the easiest to intercept because they broadcast into the air, but with cooperation from (or the tapping of) telephone companies, most calls can be monitored. As with the cameras at the 2001 Super Bowl, computers are used to analyze the huge volumes of data. Voice and pattern recognition help to identify those conversations that should be analyzed by humans.

Run by the U.S. National Security Agency (NSA), Echelon has company. The Federal Bureau of Investigation (FBI) developed Carnivore (also known as DCS1000) to monitor email, subject to a legal search warrant. The system taps into Internet servers through which almost all email traffic flows. Prior to the September 11 terrorist attacks, members of the U.S. Congress were critical of this invasion of privacy. After the attacks, there was little criticism.

How we evaluate these systems comes down to how we value control, security, freedom, and privacy. This issue comes up several times in the next chapter.

The hurrier we go,
the behinder we get.


Progress vs. Obsolescence

The more we progress in our technology, the more technology becomes obsolete. The costs of obsolescence are environmental, social, and cultural.

Cellular telephones last between 16 and 18 months in the U.S., not because they break or become unusable, but because people switch to a new service that requires a different phone or simply want something new. Styles and features change as sizes diminish, making old phones conspicuously dated. It is a testament to the power of both technology and marketing that a device that did not exist a century ago is now obsolete in less than two years.

Progress can be seen in the sleek, ever more capable phones that ever more people carry with them. Obsolescence can be seen in the millions of phones disposed of. Containing toxic substances such as lead, cadmium, and mercury, obsolete cellular phones may, by 2005, constitute 65,000 tons of trash per year.

Obsolescence can cost jobs, too. Even before the 1960s, there were threats to end the bracero program, which brought thousands of Mexican workers into California. In response, agricultural professors at the University of California at Davis developed a technology to compensate for the anticipated loss of labor in the harvesting of tomatoes. Their system combined mechanical engineering with genetic selection to create both a machine to harvest tomatoes and a tomato hard enough to come through it without squishing. The combined impact of this technology and the political change in the bracero program could be seen in California between 1962 and 1970:

  • The mechanical harvester and the hard tomato were born
  • 1,152 mechanical harvesters and 18,000 human sorters replaced 50,000 human pickers
  • 600 growers replaced 4,000
  • Acres cultivated fell by 17% but production increased by 5%
  • Mechanical harvesting rose from 0% to 99.9% of all production

The $65,000 mechanical harvesters were best suited for large farms, so those were the farms that benefited from the increased efficiency. When the bracero program did end in 1964, farms consolidated. The big got bigger and the small got out. Mechanical harvesting has helped California tomato production to pass the billion-dollar mark, but it also favored the large organizations. Technology progressed and the small farm and manual picker became largely obsolete.

In a story we explore in the next chapter, steel axes replaced the stone axes of an Australian aborigine tribe. This technological progress devastated the tribe’s traditions and culture, which had long provided effective rules for living. Their world had changed, and so had the rules for surviving; the traditions and culture that no longer fit that world had become obsolete.


The discovery of nuclear chain reactions
need not bring about the destruction of mankind
any more than the discovery of matches.

Albert Einstein

Understanding the tradeoffs between technology’s costs and benefits is becoming ever more important because, as technology’s potential for benefit increases, so does its capacity for harm:

  • Fire cooks and also burns.
  • The crossbow defended and also conquered.
  • Splitting the atom provided cleaner energy and also left near-immortal waste, as well as the power to annihilate the human race.
  • Genetic engineering may eliminate most health problems and food shortages and it may also create global plagues against which we have no defense.
  • Nanotechnology promises to provide fantastic material abundance and it could also wipe out all life on Earth.
  • Robotics may support an evolutionary leap in human capability and it may also bring to extinction the race that spawned it.

We need tools to make such serious choices. The American Association for the Advancement of Science (AAAS) suggests the following questions in weighing the costs and benefits of technology:

  1. What are alternative ways to accomplish the same ends? What advantages and disadvantages are there to the alternatives? What trade-offs would be necessary between positive and negative side effects of each?
  2. Who are the main beneficiaries? Who will receive few or no benefits? Who will suffer as a result of the proposed new technology? How long will the benefits last? Will the technology have other applications? Whom will they benefit?
  3. What will the proposed new technology cost to build and operate? How does that compare to the cost of alternatives? Will people other than the beneficiaries have to bear the costs? Who should underwrite the development costs of a proposed new technology? How will the costs change over time? What will the social costs be?
  4. What risks are associated with the proposed new technology? What risks are associated with not using it? Who will be in greatest danger? What risk will the technology present to other species of life and to the environment? In the worst possible case, what trouble could it cause? Who would be held responsible? How could the trouble be undone or limited?
  5. What people, materials, tools, knowledge, and know-how will be needed to build, install, and operate the proposed new technology? Are they available? If not, how will they be obtained, and from where? What energy sources will be needed for construction or manufacture, and also for operation? What resources will be needed to maintain, update, and repair the new technology?
  6. What will be done to dispose safely of the new technology’s waste materials? As it becomes obsolete or worn out, how will it be replaced? And finally, what will become of the material of which it was made and the people whose jobs depended on it?

If we apply this critical approach in great enough numbers, then governments (in search of votes) and corporations (in search of dollars) will follow suit. But who are we to make such choices? Benefit and harm are in the eye of the beholder: Gutenberg’s printing press spread knowledge and also the Protestant movement that undermined the Catholic Church. Since Gutenberg was a devout Catholic, from his standpoint this was a catastrophe. From Martin Luther’s perspective, this use of the printing press was a gift from God. This takes us from the technical issues of costs and benefits to the social and psychological issue of values.


This webpage is adapted from the book
Technology Challenged: Understanding Our Creations & Choosing Our Future
available at Amazon