9. How Do We Evaluate Technology?


The easiest way to control a technology
is to kill it; then it can’t possibly do any harm.
But if the public wants the benefits from the technology,
it must live with some risk and so must decide on
a tradeoff between the technology’s benefits and its dangers.
Each society does this in a different way, depending on
its attitudes toward technology and its political culture.

Robert Pool

One society adopts steel axes, but rejects canoes. Another uses Stinger missiles, AK-47 assault rifles, and pickup trucks, but bans televisions and satellite dishes. One country develops mechanical clocks and vast oceangoing fleets long before anyone else, but then destroys it all. When we evaluate technology we go beyond an objective weighing of costs and benefits to apply our own values. Technology’s merit is in the eye of the beholder, so people with different values may agree on the costs and benefits of a technology, but come to entirely different conclusions on whether it is “good” or “bad.”

There are at least two important reasons for us to understand how we evaluate technology. First, it improves our ability to communicate. After all, it is inevitable that we will interact with others whose values are different from our own, and we may have difficulty understanding their choices. We may even conclude that they are uninformed, stupid, or even malicious. The patterns in this chapter will make it easier to understand how rational, intelligent people can reach different conclusions when weighing the costs and benefits of technologies. Comprehending how we evaluate technology will also make it easier to explain our own conclusions.

The other reason to gain this understanding is that it helps us to reflect on our own values. Those who learn a second language say that it helps them understand their first language, and learning how other value systems come into play with technology is, like a second language, a mirror for us. Once we see them in contrast, we will better understand our own values and perhaps even question some. The ancient Greek philosopher Socrates said, “The unexamined life is not worth living.” We might paraphrase him: “Unexamined values are not worth applying.”

Just as few people think about technology on a conceptual level—as we are in this book—few think about their own values. With technology, we are not inclined to “reinvent the wheel,” but rather to use the technology that has already been developed or, perhaps, to improve it a little. With values, we follow the same approach. Instead of reinventing, we subscribe to the values of others: our parents, our social groups, and so on. We apply values that have proven themselves over time. This is practical because both technology and our values are part of countless activities: we could not ponder them hundreds or thousands of times each day. On the other hand, our values influence our present and future so profoundly that we cannot leave them completely unexamined. If technology is the lever that gives us the ability to move great things, then values are the fulcrum that determines where the lever will move.

We haven’t formulated and agreed
upon a way of making good decisions
about the powerful technologies
we’re so good at creating.

Howard Rheingold

So how do we evaluate technology? The same way we evaluate anything in our lives. Psychologist Abraham Maslow suggested that our choices are driven by our needs, and some (such as the need for food and water) are primary. We are motivated to satisfy these physiological needs before all others, but once we do, we are free to concentrate on our next level of needs: safety and security. Then, once we satisfy those, we can focus on further needs—including belonging, love, and creative fulfillment. With the satisfaction of each more basic need, we are free to pursue satisfaction of “higher” needs. Maslow called this our “hierarchy of needs” (also known as the humanistic theory of motivation):

  1. Physiological: air, food & water, sleep, sex
  2. Safety & Security: from current environment and future threat
  3. Social affiliation: in family, occupation, religion, country
  4. Esteem & Self-esteem: sense of usefulness and effect on environment
  5. Self-actualization: fulfillment of one’s potential (e.g. creative, physical)

No one suggests that we follow a strict step-by-step progression through the hierarchy. Rather, we strive to satisfy needs on multiple levels simultaneously. But if basic needs are threatened, then we tend to forget about higher needs until basic ones are once again satisfied. For instance, lacking physical security, it is difficult to work toward the intangibles of freedom. There are famous exceptions: Mahatma Gandhi fasted in protest against British injustice, sacrificing physiological needs in order to fulfill his potential as a leader toward a just society. Still, someone who declares, “Give me liberty or give me death,” probably already has food, shelter, and physical safety…or believes that these will soon return.

The hierarchy of needs concept tells us that we evaluate technology as good or bad depending on whether it satisfies our most pressing needs, at whatever level those might be. If you were focused on survival, a big stick might be good. If focused on creative expression, acrylic paints or a word processor might be good. Everything else being equal, we choose the technology that helps us to satisfy our dominant needs. Usually, our needs are so obvious to us—so unconscious—that we are aware only of selecting the easiest option.

Ease is what the Coca-Cola bottle offered the Bushmen of the Kalahari in the movie The Gods Must Be Crazy (as we described in Chapter 6: “How Does Technology Change Us?”). But when the bottle also brought discord, they evaluated it in terms of their mythology. The gods had always given them good things, never anything over which they had to compete. Receiving the bottle, therefore, must have been a mistake, and they evaluated it as bad.

A slightly different grouping of values is even better at explaining why groups and countries embrace or discard technology. Where Maslow’s hierarchy uses levels, Spiral Dynamics (developed by Clare W. Graves, Don Beck, and Christopher Cowan) uses “worldviews.” In fact, Spiral Dynamics works so well that we adapt (with some liberty) six of its worldviews as answers to how we evaluate technology:

  1. Survival: Does it feed and protect us? Robinson Crusoe gives us a fictional example of a shipwreck survivor on a desert island. When retrieving goods from his foundering ship, he selected pistol, saw, and hammer over gold because food and protection were most important.
  2. Ritual: Is it consistent with our myths and traditions? The Yir Yoront aborigines in Australia evaluated steel axes as good and canoes as bad based on mythology. The steel axes introduced by missionaries were very similar to stone axes, which they believed the gods had given them. But gods had given them nothing like the canoes that their neighbors used.
  3. Power: Does it give us more control? In Afghanistan, the Taliban evaluated Stinger missiles, AK-47 assault rifles, and pickup trucks as good because those maintained control, and television and satellite dishes as bad because those undermined it.
  4. Authority: Do our leaders or traditions say it is good? China, guided by the authority of Confucian principles, developed mechanical clocks and vast oceangoing fleets of ships long before Europeans did. But a new regime declared new priorities, scrapping much of the advanced technology and turning inward to build the Great Wall.
  5. Economic: Is it profitable? In the capitalist free market system, electric lights, nuclear power, and Internet technologies promised profit and, so, were considered good. Labor-intensive, small-scale agriculture is less efficient than automated, large-scale agriculture, so the market puts the former out of business.
  6. Ecologic: Are environmental and long-term costs outweighed by the benefits? The U.S. government evaluates hydroelectric dams based on how they affect water quality, fish, wildlife & botany, public recreation, historical & archeological sites, and aesthetics.

Of course there are countless ways to categorize human values and, therefore, countless ways we could explain how people evaluate technology. But, as in every chapter, our aim is to provide a context in which to understand the question, not declare which answer or set of answers is best. We are also not attempting to critique the Yir Yoront, Taliban, ancient China, the capitalist free market, or any other group or system.

These examples were selected because they illustrate diverse approaches to evaluating technology. By necessity, these brief descriptions of approach are generalizations. No group makes choices homogeneously; a host of details keep any evaluation from neatly fitting into any one category. But this book—and science and the pursuit of knowledge, in general—is about finding patterns. And these patterns are a useful reflection of what we see in the history of people evaluating technology.

Buddhist philosophy reminds us “the map is not reality.” Good point. A map always sacrifices detail of the landscape it represents, and yet we still find maps useful. And so it is with our map of answers to how we evaluate technology.

Our map is descriptive, not prescriptive. It tells us how people do evaluate technology, not how they should evaluate it (attempts to impose value systems on other people have historically been bloody and largely unsuccessful). While this book does not preach one set of values to guide the future of technology, it does suggest that we educate ourselves to be able to answer these important questions:

  • How do we, as a global population, make the complex and momentous decisions that are creating our future?
  • How do we decide which technologies to develop and which to abandon?
  • How do we decide which should receive more resources and which incorporate such danger that they should not be developed?

I found out the carpenter’s chest, which was indeed
a very useful prize to me, and much more valuable than
a ship-loading of gold
would have been at that time.

Robinson Crusoe


In some situations, survival is our only concern. As babies we are unaware of much beyond food and warmth. And in many parts of the world, famine and war assure that survival remains the dominant priority. Other values seem luxuries for those struggling to get enough food to live another day or trying to avoid being caught in crossfire.

In Daniel Defoe’s fictional story Robinson Crusoe, the namesake character is left on a remote island and must figure out how to stay alive. Crusoe evaluated technology based on whether it contributed to or detracted from his ability to feed himself, stay warm, and stay safe. Crusoe’s priorities were, “First, health and fresh water…Secondly, shelter from the heat of the sun. Thirdly security from ravenous creatures, whether men or beasts. Fourthly, a view to the sea, that if God sent any ship in sight I might not lose any advantage for my deliverance…” To him, more valuable than gold was technology that:

  • Fed him (muskets, pistols, kegs of gunpowder, powder horns, bags of shot, knives, forks, bread, sugar, flour, and rum)
  • Protected him from the elements (clothing, ropes, sail, canvas, hammock, bedding, razors, scissors)
  • Protected him from “ravenous creatures, whether men or beasts,” (saw, ax, hammer, bags of nails and spikes, screw-jack, hatchet, and many of the hunting weapons in the first category)

Some of the tools, including a grindstone, could be used to make new tools, if he were not soon rescued. As it turns out, his priorities of food and shelter were appropriate because he was a castaway for a long time.

Of the costs and benefits we examined in the previous chapter, complexity vs. predictability would probably be the most important to those focused on survival. Complex technology may have a greater benefit over time, but its unpredictability would be too high a price. Long-term benefits do not matter if any short-term setback means you will not be around to enjoy them.

If the situation changes, either through experience (babies grow up) or reduction of threat (assured supplies of food), the individual or organization can then consider other needs and values.

The stone axe in all its aspects,
uses, and associations was
integrated into the context of
Yir Yoront technology and conduct
because a myth, a set of ideas
had put it there.

Lauriston Sharp


The Yir Yoront, a nomadic tribe in Australia, had used stone axes for as long as any of them knew. Their mythology explained the entire world in terms of gods guiding their tribe since the beginning of time. Gods had given one ancestor the stone ax and all his descendants—his clan—shared the responsibility for honoring it. Another clan shared the responsibility for honoring dead ancestors. Each group, with their symbols or totems, knew its roles because of the mythological stories that were told again and again.

When Christian missionaries explored Australia in the mid-20th century, they brought new technology, including steel ax heads. From the missionaries’ viewpoint, steel axes were good because they were more efficient, and they thought that “progress” in any form was good. By that time steel had been around for more than two millennia, having first been smelted around 500 BCE in India.

Missionaries and tribe members alike agreed that steel axes were more effective than stone axes. But the Yir Yoront adopted steel axes as a good technology not because of that efficiency; they accepted them because the axes fit into the tribe’s mythology. The only issue was whether they belonged to the totem of the stone ax, because they looked and worked like stone axes, or to the totem of dead ancestors, because the missionaries who brought them were as white as the sun-bleached bones of dead ancestors.

It may be tempting to project values of “efficiency and progress” on the process, assuming that these are universally compelling motivators. And from there we would conclude that mythology played only a superficial role in the Yir Yoront evaluation of stone axes. To determine whether mythology or efficiency was more influential, we consider the canoe.

The Yir Yoront did not use canoes. Instead, they held onto floating logs when paddling across rivers and lagoons—tricky business, as these waters were often infested with crocodiles, sharks, Portuguese man-of-war jellyfish, and stingrays. As it turns out, the stingray was also important to the tribe since its barbed spine made a good tip for spears, catching in the wound and breaking off inside it. Those dried spines were the tribe’s primary export and necessary for obtaining stone ax heads, the nearest source of which was hundreds of miles and many tribes away. So why did the Yir Yoront not use canoes?

A good guess would be that they had never seen a canoe and, so, simply had not come up with the concept. That would be wrong, as the Yir Yoront saw their neighbors to the north use canoes. Another guess might be that they lacked the raw materials to manufacture canoes and, while they could trade for small things like stone ax heads, trading for big technology was impractical. Also wrong. Trees suitable for canoes (and for making the handles and strapping of axes) grew in Yir Yoront territory. Their reason? The gods had never given them the canoe.

They assumed—well, they could imagine no other explanation—that the tribe to the north had a god that long ago gave them the canoe. Adopting canoe technology would not simply be a matter of learning how to build a canoe, but of creating an entire myth about how one of their ancestors had received it…and spreading that belief throughout the community.

So, efficiency—and utility, in general—played a secondary role in the Yir Yoront evaluation of technology. Primary was their mythology, which explained for them how the world worked and, so, how they should evaluate options and make decisions. For the Yir Yoront, gods and mystical spirit beings were quite real, so living according to the rules set down by those entities in myth made sense. That may not be scientific, but then neither is skipping the “unlucky 13th floor” in a skyscraper, numbering them 11, 12, 14, 15, and so on, as is common in the U.S. Just as some find safety in not working on the 13th floor of a building, others find it in preserving rituals and sacred places.

Impact of the Steel Ax

Oblivious to Yir Yoront traditions, the well-intentioned missionaries gave steel axes in exchange for work and also as gifts. The male elders, who had held power in part through control of the stone axes, were less eager to switch to steel axes—which were not under their control—than were younger members. The tables were turned as the male elders had to borrow from women and young people, undermining the stone ax as a symbol of masculinity and of respect for the traditional leaders. Also, the annual festivals where the Yir Yoront traded spears for stone axes with neighboring tribes lost their main purpose once they started getting steel axes. The ritual around this exchange lost its core meaning, attendance declined, and cultural interaction withered.

Liberation for the formerly dependent and underprivileged? By Western values, these changes were progressive, but they shattered culture, tradition, and even religious practices. Even if replacing a culture could be justified, the missionaries were unprepared to shore up the collapse and form a new society, and certain practices emerged that would not have been permitted before. For example, wives and daughters were prostituted to obtain use of steel axes. The social structures that would have prohibited such activities were unable to keep pace with the rate of change.

Were there any benefits? The missionaries hoped that the new technology would make the Yir Yoront more productive and, so, raise their quality of life. Instead, the efficiency allowed them to work less and further extend their sleeping hours, which the missionaries thought were already more than ample. The meaning of the ax was far more important in this society than was its function. Mishaps like this may have inspired the “Prime Directive” of non-interference on the science fiction show Star Trek, which prohibited the technologically advanced “Federation” from disrupting more primitive cultures.

The Yir Yoront’s evaluation process worked for them, or they would have perished as a tribe long before the missionaries arrived. Ritual is often based on practices proven effective and sustainable over long periods of time. Indonesian agriculture gives us an example.

The “Green Revolution” brought new agricultural techniques to Indonesia; much like the steel ax, these represented progress. The Indonesian government, which believed that progress was good, replaced the Balinese method, in which Hindu priests at water temples told farmers when to irrigate their rice fields. The cascading of water from a high crater lake through a hierarchy of rice fields and dams (water temples, each with its own priest) appeared haphazard, inefficient, and unscientific. So the government imposed centralized, coordinated irrigation at more frequent intervals.

At first this had the desired effect of increasing crop yield, but then weaknesses became clear. The pattern of irrigation directed by the priests of the water temples had avoided the simultaneous ripening of all fields, which would have overtaxed water supplies. Progress provided rats and brown leaf-hoppers with contiguous fields of ripe crops across which they could multiply without limit.

The response, organochlorine pesticides, was an expensive import and killed everything: fish, eels, and even farmers. The result was disastrous, and the government had to allow the return of the ritual-based technique.

So what technique would have been an improvement? To figure that out, anthropologist Steve Lansing employed the latest technology: computer simulation of water flow, crop growth, and parasite spread. Working with the high priest of the water temples, Lansing repeatedly adjusted the simulated timing of planting and irrigation to maximize yield. The result was a pattern of irrigation very similar to what had been done, by ritual, for over a millennium.
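Lansing’s actual model tracked water flow, crop growth, and pest dynamics; the toy sketch below (our own invention, not Lansing’s code, with made-up numbers) illustrates just the core mechanism: pests spread across fields that are ripe at the same time, so what matters is the largest block of simultaneously ripe fields.

```python
# Toy illustration (not Lansing's model): staggered planting limits how
# many fields are ripe at once, and so limits how far pests can spread.

def max_simultaneously_ripe(planting_days, ripe_duration=30):
    """Return the largest number of fields whose ripening windows overlap."""
    events = []
    for day in planting_days:
        events.append((day, 1))                   # field becomes ripe
        events.append((day + ripe_duration, -1))  # field is harvested
    overlap = best = 0
    for _, delta in sorted(events):  # sweep through ripen/harvest events in order
        overlap += delta
        best = max(best, overlap)
    return best

# Centralized schedule: every field planted the same day, all ripe together.
synchronized = [0] * 10
# Temple-style staggered schedule: plantings offset by 15 days.
staggered = [i * 15 for i in range(10)]

print(max_simultaneously_ripe(synchronized))  # → 10
print(max_simultaneously_ripe(staggered))     # → 2
```

Under these invented numbers, the synchronized schedule exposes all ten fields to pests at once, while the staggered one never exposes more than two; the real tradeoff Lansing modeled also had to avoid over-staggering, which would overtax the shared water supply.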

So was the only problem in this case imperfect information? There is never perfect information—even 21st-century science cannot explain and predict everything, and scientists will be the first to admit this—so when dealing with complex technologies, such as agriculture, it may sometimes be safer to stick with the tried-and-true approach…at least until we understand how and why it works.

While few societies today rely as much on mythology and supernatural forces in evaluating technology as the Yir Yoront or the Indonesian water temple priests, every society retains elements of it. Such beliefs are particularly common when the scientific mechanisms are hard to understand.

In Africa, the invisible and hard-to-predict behavior of AIDS prompts many to evaluate condoms, sexual practices, illicit IV drug use, and HIV drugs with values based on ritual. Overwhelmed by the complexity of the science surrounding AIDS, they rely on tradition, legend, and folk tales. But values evolved for one environment may not serve in another. For instance, one folk tale maintains that having sex with a virgin will remove the disease from one’s body, and science can confidently say that this can only infect the virgin.

Tradition need not be based in mythology or ritual to exert strong influence. As we noted earlier, conscious reevaluation of every technology we encounter would be impractical—there is just not enough time in each day—so even those who practice science make many decisions based on tradition. For instance, modern business dress for men traditionally includes a necktie. Culture enforces this tradition subtly through peer pressure. As a clothing technology, the tie does little other than convey information (“I am conforming to our tradition” or “I am expressing my personality through the patterns on this tie”).

Globalization brings ever more value systems into contact, as more and more technology is sold, traded, given, or imposed from one society to another. When such technology is not aligned with native values, clashes result. Avoiding these requires that we understand the ways we evaluate technology. Since every society contains a blend of the value categories we outline in this chapter, this knowledge is universally useful.

After the Mujahidin parties came to power in 1992,
the Afghan people thought that peace would prevail
in the country. However, the leaders began to fight
over power in Kabul. Some local leaders,
particularly in Kandahar, formed armed gangs that
fought each other. There was widespread corruption and
theft, and there were roadblocks everywhere.
Women were being attacked, raped, and killed.

Taliban Spokesman


Shiny black tape was wrapped around a pole in the middle of the street. Like the medieval practice of putting decapitated heads on pikes outside the castle walls, it was a warning. The entrails of cassette music tapes warned not about trespassing, however, but about technology.

In Afghanistan, Taliban doctrine prohibited satellite dishes, television, videotapes, music tapes, and even traditional Afghan instruments and singing. Kites, which centuries earlier had led to a revolution in electrical technology, were also prohibited by the doctrine. All that flew in the breeze was the confiscated magnetic tape.

Pickup trucks, AK-47 assault rifles, and Stinger missiles (shoulder-mounted devices capable of shooting down helicopters and airplanes) were accepted technologies. Evaluating technologies of power and control as “good” made sense, since the Taliban came to power in the chaos of feuding warlords that followed a decade of Soviet occupation. In that environment, survival required power, either your own or that of a group that would protect you. Power was the only currency both in the anarchy the Taliban replaced as well as in the strict order they imposed.

Hungering for stability, some Afghans accepted the strict and repressive new regime. Others joined the Taliban for protection. Many more were simply conquered by force, burying their televisions and other forbidden technology in their backyards (to be unearthed after the fall of the Taliban in late 2001).

The trade of freedom for stability was reflected in how the Taliban evaluated technology. Television satellite dishes could bring in opposing views and were, therefore, subversive of the regime and considered bad technology.

Totalitarian regimes in other countries and eras have also evaluated technology based on “might makes right.” Consider North Korea. While the U.S. continues its long debate over registration of guns—and how that might threaten the right to bear arms guaranteed by the U.S. Constitution—North Korea registers radios. Any citizen who owns a radio must register it with the local police, and any foreign-made radio must also have its tuner soldered into place, stuck at the official frequency. To ensure that no one modifies a radio to pick up anything other than official government programs, police make surprise inspections of the radios.

The reason for North Korean government’s paranoia? Radio Free Asia broadcasts news on North Korea’s shortages of food and electricity, as well as its political isolation. North Koreans know that they are hungry but few are aware that others are better off, as government propaganda has been consistent and complete since before many citizens were born. Radio Free Asia also broadcasts instructions on how to escape the country, including how to dress and act, and whom to contact. Successful defectors have been surprised to find that South Korea is wealthier than North and that the U.S. is not subservient to North Korea, but donates rice to it for reasons other than tribute.

Evaluating technology rather differently, human rights activists in South Korea smuggle small, disposable radios across North Korea’s borders. The smaller the radios are, the easier they are to hide from police. The more disposable they are, the more likely that there will be nothing for the police to find when they do search.

North Korea experimented with mobile phones for almost two years before deciding, in May 2004, to ban them. Tens of thousands subscribed even though the cost to register a phone (about $750 U.S.) equaled 28 years of the average worker’s wages. Mobile phones, too, allowed foreign culture into the country, undermining government control.

The life experiences of those living in violent, oppressive environments confirm and reconfirm that you have to fight for whatever you get, that allegiance is based on power and force, and that anyone who does not believe this will suffer the consequences—and may not have the chance to change their mind. This “survival of the fittest” mentality is not restricted only to violent, ruthless environments. It is a worldview we can find, at least in pockets, anywhere on earth.

The North Korean government’s evaluation of technology is not that different from the Taliban’s. Technology that promotes freedom undermines control, and technology that promotes control undermines freedom. Given the examples of this cost-benefit tradeoff in the previous chapter, many would concur, but while analysis of tradeoffs can be fairly objective, evaluating the tradeoffs rarely is.

In the Islamic value system, there is not
the same 
emphasis on individual freedom.
The individual is seen as absorbed within,
and subject to, society…
Energy is more likely to be put into
group efforts to improve society than into
individual effort to pursue an individual course.

Peter Marsden


Although many Taliban decisions appear based on the value of power, they aspired—at least in word—to the higher authority of the Koran. They claimed to be creating a pure Islamic state. While power focuses on the individual (How does this serve me?), authority subordinates the individual to a greater good (How may I serve it?).

Taliban adherence to the Koran provides a different explanation for their ban on television. Islam prohibits visual representations of human and animal form. During Islamic rule of Spain, art and architecture contained brilliant geometric patterns, completely devoid of human or animal images. What could be worse than a painting with human or animal forms? An electronic box that presents a variety of them 24 hours a day.

Other religions also influence how their followers evaluate technology. For example, many Catholics oppose human cloning because the Vatican condemns it. They view the Pope as interpreting the authority of the Bible and conveying the authority of God. In contrast to the other worldviews we have discussed, they do not evaluate human cloning based on immediate survival (I need it now or I may perish), ritualistic tradition (we have always manipulated our world), or power (I can dominate others with this technology). They evaluate it based on whether it is God’s will for us (concluding that it is not right to meddle in this area).

Authority and power can blend since people in positions of authority may evaluate technology based on how it protects and enhances their power. They may even believe that a higher authority would be best served if they kept power. For them, the claim of authority may be a self-serving deceit, and they may even deceive themselves. Their followers may believe strongly that those in authority are acting purely in service of the ideology, whether religious, political, or something else.

The power of authority had far-reaching consequences in 15th-century China, where it led to England colonizing Hong Kong instead of what might have happened: China colonizing Ireland or other parts of Europe. China evaluated oceangoing ships as bad and a protective wall as good. This decision, based on the values of avoiding contact with foreigners and of celebrating agriculture over commerce, came at a time when China had by far the largest and most sophisticated fleet of ships in the world.

The roots of this decision trace back to the 5th and 6th centuries BCE, a violent and brutal period in China. Victors of battles sometimes slaughtered tens or hundreds of thousands of civilians. In this environment, a philosopher named Confucius praised the importance of authority and the stability of the state. Evaluating based on “what is righteous” rather than “what brings power” created a more structured society. For those who suffered through the many wars of ancient China, sacrificing individualism for order was probably quite attractive.

Confucianism values relationships over the individual. Or, more specifically, it defines the individual in terms of his or her relation to parents, children, siblings, friends, ruler, and subordinates. Carrying out one’s social roles properly is of utmost importance. This philosophy dominated China for nearly two millennia, but in the 15th century a combination of power and economic values guided the emperor. This resulted in China’s “Treasure Fleets,” so named because they exchanged Chinese treasure (e.g. porcelain and silk) for foreign treasure (e.g. spices and gold).

Early in the 15th century, well before Europe discovered the New World, China explored as far as India and Africa with fleets of up to 300 ships—some of them five times the size of the ships that Christopher Columbus sailed to discover the Americas. Not only larger, but also more technologically advanced, Chinese ships were divided into separate watertight sections. Perhaps patterned after the structure of bamboo, they were constructed so that no single breach of the hull would sink the ship. This design would not appear on European ships for another three centuries.

In addition to building the largest, most technologically advanced fleet of the era, China also introduced a string of inventions: paper, printing, gunpowder, the mechanical clock, and the magnetic compass. Europe, which was splintered into many small countries, provinces, and city-states, would have had little defense if confronted with China’s vast fleets and advanced technology. But Europe had nothing that China wanted.

The mind of the superior man
dwells on righteousness;
the mind of the little man
dwells on profit.

Confucius


And then Confucians retook power, evaluating technology according to their traditional values, which elevated the farmer above the merchant. Farmers must have been relieved because they had been taxed mercilessly by the eunuch regime to build and stock the fleets of ships. The new rulers dismantled the fleets, shut down shipbuilding factories, and eventually made it against the law to sail on the ocean. They also destroyed other technology of their predecessors, including mechanical clocks.

Confucian values shunned foreigners, an almost instinctive reaction in 15th century China, when stories of Mongol hordes invading overland were still fresh. Kublai Khan, grandson of Genghis Khan, had conquered China, and Mongol control continued from 1279 until 1368. With that history, a defensive wall on the frontier must have seemed a much better technology than fleets of ships. The wall had been started 1600 years earlier under the Qin and then the Han Dynasties, but was vastly expanded in the 15th century under the Ming Dynasty at the expense of all exploration.

Isolation and Shock

What made China’s decision to abandon naval technology so serious? Because no strong neighbor continued developing shipping when China dropped it, no rival quickly exposed the mistake. Instead, it took four centuries before China could compare its choice against the alternative.

China was ready to start an industrial revolution centuries before Europe got around to it, but political unity made possible its decision to stop development of technology. Europe’s political fragmentation made it impossible to focus resources on the massive scale China did, but also made it impossible to abandon development of new technology. If one country—say, Holland—developed oceangoing fleets, then other countries—say, Spain and England—would note Holland’s success and copy it. Countries that ignored successful examples would become marginalized and, sometimes, overrun.

Naval technology shrank the world, eventually making China and Europe neighbors. The consequences of China’s 15th century choice became clear in the middle of the 19th century, when European ships (armed with cannons using Chinese-invented gunpowder) demanded access to her ports for trading. The technological disparity was so stark that China had little defense and, so, was exploited economically.

In the second half of the 20th century China again evaluated technology according to authority. Communist authority stopped technological progress during the Cultural Revolution of the 1960s, but the disadvantages were stark and quickly obvious. The “freeze” was much shorter this second time around because China was no longer isolated.

In the 15th century it must have been difficult to predict that wall technology would be so much less valuable than ship technology. In the 21st century many independent organizations and countries are exploring many different technologies, and since transportation and communication technology has connected them all, any missed opportunities are quickly exposed.

Authority guides many of the people alive today, whether manifested in religion or politics. It defines what is right and wrong, good and bad. But many sources of authority date back centuries or millennia, and could not have anticipated specific technologies. It falls to the followers of authority to analyze how ships, clocks, radio, television, cloning, and countless more inventions fit within the teachings. This can be complicated and even contradictory (e.g. cloning could save precious lives even as it dares to create life). But there is another way to analyze technology, and because it is more quantitative than qualitative, its results are less subjective.

Anything that won’t sell,
I don’t want to invent.

Thomas Edison


Thomas Edison claimed that an invention that did not turn a profit was worthless. A century before Edison invented the light bulb, Adam Smith speculated that capitalism and the free market worked as if an “invisible hand” were making decisions, setting just the right prices to make demand equal supply. By encouraging competition the invisible hand would keep prices as low as possible. Taking this approach to evaluating technology, we need ask only if it would make money. More broadly, we might ask if it contributes to greater economic efficiency, progress, or growth.

Asking those questions is the starting point. Answering them involves weighing costs and benefits for the known alternatives (which include doing nothing at all) by assigning economic value to the effects of the alternative, assessing risk, and discounting the future (balancing future benefit against present benefit). The alternative with the greatest economic value wins.

Edison’s light bulb, for example, produced brighter light than candles and reduced the risk of fire, but it consumed electricity. We can place an economic value on electricity because it has a market price. Because candles do not always cause fires, assigning a value is harder. Insurance companies place economic value on such occurrences using historical statistics to determine the likelihood of various types of fires and the cost of repair.

But what is the value of brighter light? Is it the profit from manufacturing and commerce that could not have been conducted with just candlelight? Is it the improved productivity of workers who have not damaged their eyesight by straining in dim light? Or the saved medical expenses of treating them (assuming some treatment existed and that they would be treated)? Or is the value of brighter light simply whatever the market would pay for it?

Even if that is the case, our cost-benefit analysis is not complete. We have to consider acquisition costs for light bulbs and candles, and we have to make some assumptions about the relative value of something now versus something in the future. To switch from candles to light bulbs we would have to purchase bulbs, switches, and wiring, connect to an electric utility, and establish a contract for purchasing electricity at a predictable rate.

Because these costs are immediate but the benefits accrue over time, determining whether the investment is worthwhile depends on where else you could place your money. If interest rates were very high, you might earn more by keeping money in the bank than investing in light bulbs and their infrastructure. A $1000 investment might save $10 per year if spent on light bulbs and infrastructure, but earn $100 per year if placed in a bank.
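The bank-versus-bulbs comparison above is really a net-present-value calculation. The sketch below uses the text’s illustrative figures ($1000 up front, $10 saved per year) together with an assumed 10% discount rate and an assumed 20-year horizon; it shows how discounting future savings can make an investment with real benefits still lose to the bank.

```python
def npv(cashflows, rate):
    """Net present value of yearly cash flows, starting at year 0,
    discounted at the given annual interest rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

RATE = 0.10     # what the bank pays (illustrative assumption)
HORIZON = 20    # years over which we count the savings (assumption)

# Pay $1000 now for bulbs and infrastructure; save $10 in each later year.
bulbs = npv([-1000] + [10] * HORIZON, RATE)

print(f"NPV of the light-bulb investment: ${bulbs:,.2f}")
# Roughly -$915: at these numbers the future savings never repay the
# up-front cost, so the money earns more sitting in the bank.
```

A negative NPV at the bank’s interest rate is exactly the text’s conclusion: the immediate costs outweigh the discounted stream of future benefits, so the rational investor keeps the $1000 on deposit.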

Once we assess all these factors, we could conclude whether a light bulb creates more economic value than its alternatives. If so, it is a good technology. But, technically, our conclusion is not guaranteed to be right. The future is unpredictable: either wax or electricity might become much less (or more) expensive, or interest rates might change (they always do). And that would render our cost-benefit analysis invalid.

But it is an imperfect world and our cost-benefit analysis may be good enough. Plus, the market makes continual corrections, motivating people in new directions, so this economic approach to evaluating technology is adaptable and resilient. For instance, if electricity shot up in price, the market would provide a profit motive for development of high-efficiency light bulbs or the reintroduction of candles.

All of this balancing of options and present against the future is complicated when the technology in question does not even exist yet. When evaluating an investment in a prospective technology, venture capital firms assess three dimensions:

  1. The product itself (Are there barriers to entry of competitors, such as patents or specialized knowledge?)
  2. The people proposing to deliver it (Are they experienced? Do they have a good track record?)
  3. The market for the product (Does it exist? Has it been receptive to new technology?)

In some cases, the market values growth over profit. Market participants—entrepreneurs and businesses—recognize that securing the lion’s share of the market can lead to large long-term profits, and so sacrifice a smaller immediate profit. In the bull market of the 1990s, investment capital flowed to companies with plans to dominate Internet purchases of pet food, groceries, airline tickets, gift certificates, on-line greeting cards, toys, or furniture.

Focused on growth and promises that profits would eventually come in lavish amounts to those companies that locked-in their market segment, investors plowed their money in. But lock-in and profit are not as easy to achieve as raw growth.

Microsoft achieved lock-in with its operating systems because customers moving to a new operating system had to undertake the expensive and risky process of replacing applications and migrating data. Internet buyers, by contrast, had little loyalty to commerce web sites: they could switch from Pets.com to Petopia.com every time they bought a bag of dog food, patronizing whichever competitor offered the biggest loss-leader discount.

With time, few “dot com” companies found either lock-in or profits, so many closed down. Conventional, bricks-and-mortar pet supply stores acquired both Pets.com and Petopia.com. In the end, economic evaluation says that profitable technology is good and unprofitable technology is bad.

You could say that capitalism is just
a couple-hundred-year-old mechanism
for speeding up science, but
capitalism and the free market
are not very good at saying
“pause,” let alone “stop.”

Bill Joy


A young Chinese woman wearing a red smock and black pants, but no eye protection, swings a hammer at the base of a television picture tube. For years, the glass body braced against 14 pounds of air pressure on every square inch, protecting the vacuum inside. The hammer changes that in an explosive instant. If all goes well, only the tail (yoke) of the tube breaks free and air rushes into the void. If all does not go well, the glass can shatter and implode. Either way, once the prized coils of wire are stripped from the yoke to sell as bulk copper, the lead- and barium-laden glass is dumped.

The tubes—technically, cathode ray tubes or CRTs—are “good” technology in Guiyu, China, where they are evaluated by strict economics: cost of acquisition, transportation, and labor subtracted from the market price of copper. But these CRTs have become “bad” technology in the U.S., where they were used and discarded by people upgrading to larger televisions or to flat panel computer screens. To recycle a CRT costs between $15 and $40, and some states, including California, prohibit dumping them in landfills because of their toxic contents, which can leach into soil and ground water.

Ecological evaluation of technology considers environmental factors that economic evaluations traditionally ignore. These real but unaccounted costs include impacts on health and the environment long into the future. Traditional economic evaluation “externalizes” costs, such as lead contamination in groundwater. Ecologic evaluation considers factors like the lead-poisoned fish and the people who have few, if any, other sources of food.

In the case of the CRT, the lead once served an important function: protecting the viewer from high-energy electron radiation firing at the inside of the screen to illuminate it. Ecological evaluation would weigh this health benefit against the cost of safe disposal and any alternatives to the CRT. This was not done when and where CRTs were made or sold, so when they came to the end of their useful life, the alternatives were proper recycling, which nobody had committed to—or wanted to—pay for, or “offshore recycling,” which meant dumping in countries where labor is cheap and environmental protection is minimal.

Environmental laws level the playing field for competing corporations, so those that protect the environment are not underpriced—and eventually eliminated—by those that do not. While individuals may see ecologic evaluation as simply the right thing to do, corporations are called on to maximize stockholder value, and that can be served by externalizing as many costs as possible. If corporations’ directors do not drink the local water or eat the local fish, dumping may be attractive because it is cheap.

One justification for government is to protect that which is held in common. If the factory next door to you pollutes its own air, it also pollutes yours. If a hydroelectric plant upstream from you dams its river, it also dams yours. In such a position, you would likely want government to enforce ecologic evaluation of technology. We find an illustrative example in the Pacific Northwest.

The Snake River flows west across Idaho to Oregon, heading north into Washington before resuming its westward travel to the Pacific Ocean. Along the way, it passes through Hells Canyon, a river gorge averaging more than a mile deep. In 1955, the U.S. government issued the Idaho Power Company a license to generate electricity by damming the river. Idaho Power built Brownlee Dam in 1958, Oxbow in 1961, and Hells Canyon in 1967. Together known as the Hells Canyon Complex, these three hydroelectric dams can produce more than 1.17 million kilowatts of power, about 1.6% of the developed hydroelectric generating capacity in the U.S. (In 2001, hydroelectric power provided 6% of the nation’s electricity.)

The license to operate these three dams expires in 2005, 50 years after its 1955 issuance. To continue operating, Idaho Power needs a new license from the Federal Energy Regulatory Commission (FERC), which evaluates technology on factors beyond economics. So, in order to gain FERC’s approval, Idaho Power is conducting a decade-long, multimillion-dollar study, proposing to spend approximately $178,000,000 on:

  • Water quality
  • Fisheries
  • Wildlife and botanical protection and reintroduction
  • Public recreation access, facilities, and condition
  • Historical/archeological preservation
  • Aesthetics

In exploring “water quality,” Idaho Power is looking at more than just pollution. If the temperature of water coming off a dam is too high or low, fish can be hurt or killed. The amount of oxygen dissolved in water affects aquatic life in the top two meters, and that, in turn, affects the rest of the ecosystem. Depending on how they deflect or spill water, the dams can increase or decrease oxygen content. Keeping water still in reservoirs reduces it and spilling water through the air, much as a waterfall would, increases it.

Dams block anadromous fish (those that migrate from saltwater to freshwater for spawning, such as salmon and steelhead) from returning upstream to their breeding grounds. One solution is to provide a way around the dams (e.g. stepped pools), but at the Hells Canyon Complex this approach encountered problems, so hatcheries were built downstream of the dams instead. Since anadromous fish sense their way back to their birthplace, those born in hatcheries return there, rather than upstream of the dams where their ancestors may have bred. Conservation groups maintain that hatcheries address only one aspect of the disruption to the fish and the greater environment.

Constructing and operating a technology such as the Hells Canyon Complex affects wildlife habitats. Narrow channels of water are converted to reservoirs, and cold, fast water slows down and warms in the sun, affecting fish, aquatic plants, animals that feed on them, and the habitats that those animals reach. Even the hatcheries cause a chain reaction leading to less fertilization of soils, which affects trees (as we mentioned in the Overview section at the beginning of this book). So part of the relicensing process includes proposals by Idaho Power to improve habitats adjacent to the complex. One example: reintroduction of mountain quail. Another is the protection of sensitive plants and the control of invasive weeds.

And there are human considerations. Long before the dams were built, and before European settlement of the Pacific Northwest, tribes of American Indians roamed and lived in the area. Preserving archeological sites and enhancing them with interpretive centers is part of the overall evaluation. Impact on current tribes (Burns Paiute, Nez Perce, Shoshone-Paiute, Shoshone-Bannock, and the confederated tribes of the Warm Springs Reservation and Umatilla Reservation) is also a factor, so Idaho Power is proposing projects and programs to benefit them.

On top of that, the Hells Canyon Complex shares the Snake River with recreational boaters, fishers, hikers, and campers. Since it will be evaluated on its impact on these recreational users, Idaho Power proposes to improve litter and sanitation programs, improve road maintenance, enhance the four Hells Canyon parks, and improve or expand boat docks, ramps, and launches, among other projects. Aesthetic improvements to the Hells Canyon Complex also influence the likelihood of re-licensing, so Idaho Power proposes to blend facilities, structures, and signage into the landscape.

So, while Idaho Power evaluates the Hells Canyon Complex on economics, the U.S. Government evaluates it ecologically on these six areas: water quality, fish, wildlife & botany, public recreation, historical & archeological sites, and aesthetics. The government protects the Snake River and surrounding Hells Canyon Recreation Area on behalf of the nation’s citizens, allowing a private enterprise to profit from it, provided that enterprise performs on certain non-economic measures.

Other technologies exploit common resources with less ecologic evaluation. Combustion of coal to generate electricity creates airborne particles that fall onto bodies of water. One pollutant, mercury, converts to an even more toxic compound, methyl mercury, in the environment. Methyl mercury bio-accumulates up the food chain, becoming more concentrated at each step. Large fish such as tuna can concentrate enough that it may be safe for humans to eat no more than one 6.5 ounce can per week.
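The step-by-step concentration described above compounds multiplicatively. A toy calculation can show how quickly that adds up; the starting concentration and the tenfold factor per trophic level below are hypothetical, chosen only for illustration, not measured values.

```python
# Hypothetical bioaccumulation: each trophic level concentrates
# methyl mercury by a fixed factor (illustrative numbers only).
water_ppb = 0.001                      # mercury in water, parts per billion
chain = ["plankton", "small fish", "medium fish", "tuna"]
FACTOR = 10                            # 10x concentration per step (assumption)

concentration = water_ppb
for organism in chain:
    concentration *= FACTOR            # one trophic step up the food chain
    print(f"{organism:12s} {concentration:g} ppb")
# After four tenfold steps, the top predator carries 10,000 times
# the concentration found in the surrounding water.
```

The point of the sketch is structural, not numerical: even modest amplification per step, repeated along a food chain, explains why a large predator like tuna can carry dangerous doses while the water itself tests nearly clean.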

The U.S. government studies and evaluates the health risks of toxins such as mercury, passing laws that limit output of these pollutants. Instead of ecologic evaluation, these external costs may be ignored until regulation sets a certain “free” level, above which manufacturers or users of the technology must pay in the form of fines or lawsuits.

Pharmaceutical technologies are evaluated the other way around: risks and costs are assessed before a product is released. But it was not always this way. Prior to government regulation, untested drugs were sold from the backs of wagons. Society recognized the devastating and irreversible harm that can take place when not all repercussions of a drug are well understood, and insisted on government oversight. That recognition is spreading to technology beyond drugs. Weighing the possible costs, both direct and ecologic, against the alternatives prior to releasing a technology is called the Precautionary Principle. It places the burden of proof on those creating the new technology to document its impact in comparison to alternatives, which include sticking with existing technology. The Precautionary Principle is especially important for those who are not involved in selecting a technology because they often bear the brunt of the costs.

Benefits are often anticipated and directly affect those selecting the technology. Costs are often unanticipated and indirectly affect many uninvolved in selecting the technology. Competition and self-interest lead to this. Someone inventing, developing, and selling a technology will succeed, at least in the short term, by focusing on those who might purchase it. For them, benefits must outweigh costs. Whether the technology succeeds in the marketplace is much less dependent on those only peripherally affected by it, so for them benefits need not exceed costs. It is also simply easier to focus on and study a small group of direct beneficiaries than the diverse and possibly dispersed population of those indirectly affected.

One way we already protect our diverse and dispersed environment is with deposits on recyclable beverage containers. These bottles and cans are less likely to be dumped because they can be redeemed for cash. Although the pennies may be insufficient to change the behavior of many consumers, some people retrieve discarded containers because they rely on the deposits. The container deposit is represented as the value of the material for recycling, but it could also be considered the cost to our environment of dumping the material.

Economist Robert Costanza of the University of Maryland combines the beverage container deposit concept with another already in general use: construction performance bonds. In the U.S., construction companies commonly post bonds guaranteeing completion of a major project. Incomplete or late projects cause all or part of the bond to be forfeited. Costanza advocates the Precautionary Polluter Pays Principle (“4P”), which would require developers of new technology to post an interest-bearing bond to cover a worst-case scenario. An independent scientific study would determine just how costly the proposed technology could be. Over time, if the technology does not cause environmental damage or can be shown to be safer than initially predicted, portions or all of the bond with interest are returned to the developer.
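The mechanics of such a bond can be sketched as a simple ledger in which interest accrues on the held amount and portions are released back to the developer as evidence of safety accumulates. Everything in this sketch, from the function name to the dollar amounts and release schedule, is an invented illustration of the idea, not a description of any actual 4P proposal or regulation.

```python
def bond_balance(principal, rate, releases):
    """Track a hypothetical 4P-style performance bond.

    Interest accrues on the held bond each year; then a fraction of
    the balance may be released back to the developer if independent
    review finds the technology safe that year.

    releases: yearly fractions to release (0.0 = nothing that year).
    Returns (balance_still_held, total_returned_to_developer).
    """
    balance, returned = principal, 0.0
    for fraction in releases:
        balance *= 1 + rate            # interest accrues on the held bond
        release = balance * fraction   # review board approves a release
        returned += release
        balance -= release
    return balance, returned

# A $10M worst-case bond at 5% interest, with 25% of the balance
# released in each of years 3 through 5 (illustrative schedule).
held, back = bond_balance(10_000_000, 0.05, [0, 0, 0.25, 0.25, 0.25])
print(f"Still held: ${held:,.0f}; returned with interest: ${back:,.0f}")
```

The design mirrors the incentive Costanza describes: the developer fronts the worst-case cost, earns the money back with interest only as safety is demonstrated, and forfeits whatever remains if damage occurs.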

4P would make the developer responsible for covering ecologic costs of a technology, rather than forcing those affected by it—but not profiting from it—to prove it is harming them. We noted in the last chapter that some asbestos victims died before courts found in their favor because the legal process can take years or decades.

Although 4P is attractive in its simplicity, it leaves difficult questions. What bond would be appropriate for technology that could have irreversible impact? When would a bond be liquidated for nuclear technology, whose wastes remain dangerous for many thousands of years? What would a bond pay for if a genetic technology changed humans? Who would collect the bond if a technology were to make the human species extinct, as a global exchange of thermonuclear weapons threatened to do? In the extreme, the Precautionary Principle dictates that the risks outweigh any bond that could be posted and the technology should not be pursued.

Ecologic evaluation considers those who are not directly involved with the technology. It takes a long-term approach, asking whether a technology is sustainable and how it will affect future generations. Long-term thinking is becoming more important with increasingly powerful technology, and considering the full context may help us avoid extinction. Narrow economic evaluation can be myopic: Lenin once pointed out that a capitalist would sell rope to his own hangman.

If Bacteria Used the Precautionary Principle

Give a little DNA. Get a little DNA. Bacteria are pretty easygoing about evolution—none of that formal male-female sexual reproduction. As a result of their sharing snippets of genetic instructions, plus a bit of random mutation, bacteria have explored and adapted to almost every environment on this planet, from deep inside rocks to steam fissures on the bottom of the ocean to the human intestine. But what if bacteria were not so cavalier about trying out new forms? What if they had the intelligence and consciousness to apply the Precautionary Principle to their evolution?

If there were just one evolutionary creation that bacteria could have prevented, it would surely be cyanobacteria (more commonly known as blue-green algae, though it is not actually an alga). More than 2.2 billion years ago, a mutation allowed cyanobacteria to get the hydrogen atoms they consumed from water, while other bacteria continued to get their hydrogen from sugars, airborne hydrogen, or hydrogen sulfide. With water immensely abundant, cyanobacteria multiplied almost without bounds, and produced a lot of waste. Because they consume H2O and use the hydrogen, their waste is oxygen.

From less than one billionth of one percent of the atmosphere, oxygen skyrocketed to its current level of 20%. All life on earth had evolved to that point with virtually no free oxygen, so this change was a global disaster. Volcanoes and asteroid impacts were mild in comparison. Even humans, whose lineage evolved in an oxygen-rich environment, suffer from oxidation, and may take anti-oxidant supplements (e.g. vitamins C, E, and beta carotene) to counter oxygen’s tendency to react with and change a variety of compounds.

Bacteria were—and are—not in a position to employ the Precautionary Principle, but humans are. What might we learn from the cyanobacteria episode? One lesson is that something that can both replicate itself and consume an untapped food or energy source will change the world. We may eventually be able to create our own “cyano-technologies” that brilliantly exploit some previously untapped resource, much as the runaway nanotechnology does in Michael Crichton’s novel Prey.

With technology changing at an accelerating pace and with surprising behaviors emerging from new complexity, we may well create a variety of technologies with the potential to change the world. If we would like to survive—and not have to hide under a rock, side by side with the bacteria still waiting for oxygen to go away—then this is a fine time to become familiar with the Precautionary Principle…and start evaluating our technology ecologically.


Ours is a progressively technical
civilization…committed to the quest for
continually improved means
to carelessly examined ends.

Robert K. Merton

One way or another, consciously or not, we all evaluate technology. Usually we do it unconsciously. The values on which we weigh technology are so much a part of us that we are unaware of them. If we were aware of what was going on, we might hear the following:

  • A malnourished peasant: “Of course we evaluate based on whether it helps us get food.”
  • An aborigine in Australia: “Of course we evaluate based on what our spirit ancestors provided.”
  • A warlord in Afghanistan: “Of course we evaluate based on whether it maintains or expands our power.”
  • An imam in a mosque: “Of course we evaluate based on the teaching of the Koran.”
  • A partner in a venture capital firm: “Of course we evaluate based on profit.”
  • An environmentalist: “Of course we evaluate based on sustainability and compatibility with the surroundings.”

And they all might add, “Anyone doing differently is a fool.”

How do we reconcile these diverse and seemingly incompatible approaches to evaluating technology? Concerned that the average person is unable to understand the growing complexity of the technology on which he or she relies—and, therefore, unable to predict or choose its future impact—author James Burke offers four alternatives:

  1. Return to an intermediate technology
  2. Assess scientific and technological research according to worth for society
  3. Direct research and development towards more durability and less planned obsolescence
  4. Allow technology to continue to evolve as it’s always done

In the first case, we get a simpler environment, safe from the vastly powerful technologies recently developed. But what technologies are we willing to relinquish? Do we sacrifice life-saving technology? Few would be willing to surrender even the conveniences—which so quickly seem necessities—such as email and cellular phones. Our civilization is so interconnected, like a spider’s web, that removing any strand affects the others. And removing many strands could be disastrous.

Burke’s second alternative presents the immense problem of figuring out who will decide the social value of each technology. As we saw in this chapter, the various value systems make very different evaluations. Socialist societies have attempted to impose a centrally directed order with, often, unfortunate and self-destructive results. Those who direct scientific and technological research will have power, and power can corrupt.

But even if we could figure out some democratic system to distribute that power to everyone (and considering what is popular on television, creating a technologically-informed and participatory population would not be easy), there is a larger problem. Simply evaluating an area of research does not tell us the impacts of the eventual technologies. Intel’s research into a better way to build a handheld calculator was not intended to create the microprocessor found in microwave ovens, antilock brakes, automated teller machines, and home computers. What was the worth to society, in 1969, of reducing the number of integrated circuits in a calculator? No one could have guessed.

Burke’s third alternative is similar to our category of “ecologic.” This compromise would reduce the energy focused on relentless progress and novelty, redirecting it to the less fortunate people in the world. The rich would stop wanting so much more new technology and the poor could get enough for at least their basic needs, and perhaps even catch up with the rich. As Burke himself points out, this is a utopian vision, but perhaps utopia will be feasible as it has not been in the past. Advances in agriculture and manufacturing technology make it much easier to create enough of life’s necessities. Advances in communication and information technology make it much easier to share such a vision on a global basis.

Whether such a vision would work or not, we do know that the present system has created at least a trickle of advances down to the poor. And this is the last of Burke’s four alternatives: allowing technology’s evolution to continue as it always has. Although a fairer redistribution might be a noble goal, many have much more faith in “the rising tide lifting all boats.” They believe that the best way to bring the benefits of technology to the poor is to let the rich forge ahead, following their natural, even selfish, desires.

This notion does have a track record. For example, lifespan in poor countries today matches what it was in rich countries a few centuries ago. Luxuries common in the 21st century—antibiotics and cellular phones, for example—were unavailable even to emperors and kings one century ago. And consider the centralized Socialist/Communist economies of the 20th century. Despite their original vision, they did little to prove that meddling in the affairs of progress could be successful.

But what if just letting technology progress “naturally” leads to dire consequences? That is precisely what worries Bill Joy, someone who deeply understands technology. A child prodigy, Joy went on to study electrical engineering. He rewrote the Unix computer operating system while at UC Berkeley and was a founder of computer manufacturer Sun Microsystems, becoming its chief scientist.

Joy’s concerns stem from three emerging areas: genetic technology, nanotechnology, and robotic technology (GNR). Not only could each of these technologies be designed to self-replicate, but economic and efficiency factors could motivate us to make them self-replicating. For instance, a company manufacturing nanobots might cut costs by enlisting the nanobots it has already made to build more. In this way, making just a few would result in a never-ending supply.

And, unlike nuclear technology, which relies on hard-to-get materials like plutonium, these three technologies depend mostly on information. Since information is so hard to control (witness the illegal copying of music on the Internet), it could eventually become quite feasible for a bright, but discontented person to create and unleash a technological plague.

Combining elements of authority, economics, and ecologic, Joy offers a five-pronged proposal to prevent this from happening:

  1. Compel scientists, technologists, and corporate leaders to pledge they will not work on weapons of mass destruction (a variant of the Hippocratic Oath)
  2. Create an international organization similar to the U.S. Office of Technology Assessment (which was closed in 1995) to evaluate new technology
  3. Make corporations financially liable for the consequences of the technology they develop
  4. As was proposed for nuclear technology in 1946 (but not adopted), internationalize control of technologies evaluated as too dangerous to be developed commercially
  5. Renounce development of technology so dangerous that it could escape our control and threaten our entire species

Joy’s detractors claim that his worry is unjustified: technology has long increased in power, has often appeared too powerful, and yet has always worked out to improve our lot. They would say that Burke’s laissez-faire fourth alternative is not only the best choice, but the only workable one, simply because the pursuit of knowledge cannot be stopped. When it is stifled in one place (e.g., China destroying its fleets in the 15th century), it simply moves somewhere else (e.g., Europe, whose fleets came to dominate the world). So they claim there is nothing to worry about and, even if there were, there’s nothing we could do about it anyway.

The views of scientists should not
have special weight in deciding
questions that involve ethics or risks:
indeed, such judgments are best left to
broader and more dispassionate groups.

Martin Rees (scientist)

But stifling technology is not the only alternative to the public giving a blank check to the experts directing technology. Those experts wager not just their own lives, but ours, too, so even if we do not understand the technical details of their projects, we may want a say in what values they employ and what risks they term “acceptable.” Given the radically diverse ways in which people evaluate, we may wish to question the experts’ entire process of evaluation.

Recognizing the stakes, the U.S. Congress amended a 2003 bill funding nanotechnology with a provision requiring “regular and ongoing public discussions, through mechanisms such as citizens’ panels, consensus conferences, and educational events…” Average citizens may not be experts on nanotechnology, but they are experts on their own values. The process of evaluating technology can benefit from technical understanding, but it is absolutely dependent on values.

What would it look like to ask citizens to evaluate technology? Denmark has asked its citizens to evaluate such technological issues as electronic surveillance, noise and technology, genetically modified food, infertility, and the future of the private automobile. “Consensus conferences” start out like jury duty, with the Danish Board of Technology sending invitations to randomly selected citizens. From those interested, the Board selects a panel of 14, balanced for diversity of age, gender, education, profession, and geography. The panel receives a brief technical education over two weekends and then spends three days at the conference. For a day and a half, experts answer questions from the panel and from a general audience. The panel then develops a report, striving to achieve consensus before the report is publicly presented. After the presentation, the experts correct any factual misunderstandings and the report is submitted to the Danish Parliament.

Key to the success of citizens’ panels is unbiased technical information. Or, failing that, multi-biased technical information. Just as the U.S. Congress is subject to the sway of special-interest groups providing expert testimony skewed to serve their own political objectives, so, too, would be citizens’ panels. Two factors might reduce the influence of skewed information: (1) testimony is given in a public forum, and (2) citizens are not seeking campaign contributions from the experts’ employers.

Switzerland calls citizens’ panels “PubliForums” and France calls them “Citizens’ Conferences.” Every country involving citizens in this way faces the challenge of giving them a tool to understand and evaluate technology. For making conscious choices based on both a broad, contextual understanding and examined values, ICE-9 would be a good tool.


This webpage is adapted from the book
Technology Challenged: Understanding Our Creations & Choosing Our Future
available at Amazon