4. How Does Technology Work?

Hidden beneath the surface,
technology of all descriptions
works according to a few
simple principles.

 

Early stone tools took skill to create. Even with many hours of practice, modern anthropologists cannot match the skill Cro-Magnons and Neanderthals had for chipping away at rocks to create sharp cutting edges. Each strike of stone against stone chips away rock and changes the optimal angle of the next blow. Of course, tens of thousands of years ago, we spent years learning how to make knives from stone because it was a matter of survival.

But as difficult as the technique may have been, understanding how the technology worked was simple. It had no moving parts. Little by little that has all changed. Modern technology works in ways so complex that only teams of experts can understand them completely.

Take a car, for instance. Specifically, consider the brakes. Mechanical engineers understand the physical interaction of brake shoe and rotor or drum. For antilock brakes, we need a computer engineer because computers monitor each tire for impending skid. But few computer engineers understand all the layers within computers (software engineers may not understand the hardware and hardware engineers often divide their field into designing digital logic circuits, analog circuits, microchips, and more), so we would need a team of engineers to explain exactly how the brakes on a modern car work.

Even immortality would not allow an individual to grasp all of technology’s workings because specialists are inventing anew faster than anyone can keep up. Of course, we can console ourselves that we don’t need to know how every technology works. The beauty of specialization, made possible by agricultural surpluses, is that we can (and do) delegate tasks to specialists, such as engineers, technicians, and scientists.

But delegation is different from abdication, which is what we do when—knowing nothing of technology—we let the specialists make all the decisions. We may have to rely on experts to process the details, but we can equip ourselves to comprehend their analyses, opinions, and predictions. Threading through the sea of technical details are simple patterns that connect a wide variety of technologies, explaining aspects of how those technologies work in common-sense terms.

There is a pleasure in discovering simple patterns just behind the apparent complexity of technology—and seeing how they have remained true over time. In this chapter we examine seven that pop up in the functioning of a variety of technologies:

  1. All technologies rely on energy, from a horse pulling a plow to gasoline fueling a car. Many convert energy from one form to another.
  2. Technology can be distributed into many small parts (e.g. power generation at home with solar or wind) or centralized (e.g. nuclear power plants or hydroelectric dams), sometimes alternating between the two as new inventions make one better than the other.
  3. Bicycles, nuclear power plants, airplanes, and missiles rely on feedback and correction, two key elements of control systems, which keep much technology focused on the goals we set.
  4. Information is the difference between a compact disc that comes in junk mail and one containing the human genome. In the form of rules for solving a problem, it is an algorithm, which enables us to understand a technology’s behavior without having to understand its implementation.
  5. Repetition and layers are two ways that complex technology can be composed of simple building blocks (much as the repetition of 26 letters and the layering of words, sentences, paragraphs, and chapters compose this book).
  6. Emergent behavior is about the whole being more than the sum of the parts. Just as an ant colony behaves very differently from any individual ant, so, too, do many complex technological systems behave differently from any of their components.

Since these patterns have endured over time, we may find them threading through future technology, however strange and foreign it may appear. We still need experts to design, build, maintain, and explain. But, unless we choose to cede control of our future to those experts, we need a basis from which to understand their explanations and to form our own evaluations. That basis starts here.

 

Energy:  the muscle behind technology

Here’s a trick that won’t work. Wire a solar cell (which generates electricity when exposed to light) to a light bulb that shines on it. This system creates its own energy, running the light from its own light. A similar trick was proposed in the waterwheel invention of Robert Fludd, a 17th century London doctor. In theory, water flowing over the wheel powered a pump that sent all the water back upstream so it could again turn the wheel. If you could tap a bit of the energy from the solar cell or Fludd’s waterwheel to do work—perhaps run a stereo or grind wheat—then you’d perform work for free:  no external source of energy necessary.

These two contraptions do not—and cannot—work. Devices like them are commonly referred to as perpetual motion machines, and the U.S. Patent Office will not issue patents for them, though inventors still try. The flaw in the solar cell and bulb system—as in any perpetual motion machine—is that some energy is always lost at every stage. By the numbers:

   15%  of the light striking the solar cell is converted into electricity (85% reflects or becomes heat)
×  99%  of the electricity going through the wires is not dissipated as heat
×  50%  of the electricity flowing through the light bulb becomes light (50% becomes heat)
×  25%  of that light may actually strike the solar cell (75% shines elsewhere)
=   2%  of the light that strikes the solar cell would become light striking the solar cell again
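
The arithmetic is easy to check by multiplying the stages together. Here is a minimal sketch in Python, using the chapter’s illustrative percentages (round figures, not measurements):

    # Round-trip efficiency of the solar-cell-and-bulb contraption.
    # Each stage passes along only a fraction of the energy it receives.
    stages = [
        0.15,  # light striking the cell that becomes electricity
        0.99,  # electricity that survives the trip through the wires
        0.50,  # electricity the bulb turns back into light
        0.25,  # the part of the bulb's light that strikes the cell
    ]

    efficiency = 1.0
    for fraction in stages:
        efficiency *= fraction

    print(f"energy surviving one loop: {efficiency:.1%}")   # about 1.9%
    print(f"energy after five loops: {efficiency**5:.9f}")  # vanishingly small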

Any energy we might initially endow the system with would quickly disperse in the form of heat and of light shining somewhere other than onto the solar cell. And even if every part of the system could be 100% efficient, tapping any energy to do work would quickly exhaust the system’s internal supply. Without a continuous external source of energy, our contraption would just sit in the dark.

Technology—like everything else in this Universe—needs energy to do work. So, no matter how complicated and confusing a technology may be, here is something we do know about it:  somewhere, there is a source of energy.

Of course, energy sources can be inconspicuous. For example, most wristwatches can operate for years on a tiny battery hidden inside, but eventually the chemical energy stored in the battery is exhausted and the watch stops. Some watches even derive energy from the motion of the wearer’s arm or the heat differential between the arm and ambient air. No battery, but still an external source of energy fueled by the food eaten by the person wearing the watch…and not the magic of perpetual motion.

Food was the source of energy for the first technology humans developed, and is still the only source for the tools animals use. Energy in food is transformed to energy in muscles, which operate sticks, stones, hammers, scalpels, bicycles, and other manual technology. The energy in food comes from nuclear fusion in the sun, which creates light, which creates plants through photosynthesis, which feed animals, which feed other animals.

Though dogs may have started living with humans 14,000 years ago, they were not the first animals we tapped to power technology. Domesticated reindeer started pulling sledges in Northern Europe about 7000 years ago, 1000 years before we domesticated horses and 3000 years before we got them to pull vehicles. But even with domestication of animals, the only energy source for technology was muscle powered by food.

That all changed with the sails on ships, which, more than 5500 years ago, harnessed wind. Sails are limited to propulsion, but waterwheels, invented about 2100 years ago, used moving water to grind wheat, corn, and sugar cane, work bellows to make fires hot, and pound hammers onto rocks and metals. Windmills, first created around 1600 years ago, were applied to many of the same tasks. The sun, again, is the source of these energies, whether by differentially heating the atmosphere to create wind or by evaporating water to create rain and flowing water.

Fire, long useful for cooking food and staying warm, first became an energy source for technology around 100 AD, when Greek temples used steam turbines to open and close doors, as if willed by the gods. Whether limited by materials, imagination, or motivation (a huge slave population already provided manual labor), the Greeks did not develop the steam turbine into a practical energy source. Over time it was forgotten, but by the year 1700 steam power had been reinvented, and it went beyond novelty tricks to drive steamboats, steam trains, and—for a short time—steam cars, too. Burning wood releases the energy captured from sunlight through photosynthesis.

So does burning fossil fuel. Coal, oil, diesel, and gasoline were plants, dinosaurs, and other animals before millions of years of high-heat, high-pressure subterranean processing. In 1860 the internal combustion engine tapped fossil fuel’s explosive energy. Now the burning of diesel and gasoline propels most of our transportation, and the burning of coal provides more than half of U.S. electricity. While not renewable, fossil fuels originally got their energy from the same source many renewables do:  sunlight and photosynthesis.

By propelling arrows with gunpowder, 13th century China tapped chemical energy. While gunpowder has been wildly successful in creating explosions for guns and mining, several attempts at using it in internal combustion engines failed. Another form of chemical energy has proven excellent for powering technology:  in 1800, the invention of the electrical battery allowed us to convert the chemical potential in various metals into electric current. Some evidence suggests that chemical batteries were used in Iraq two millennia ago, but that remains speculation. Reversible chemical processes are used in rechargeable batteries, especially useful in laptop computers and cellular phones.

In 1942 we harnessed nuclear energy. Nuclear fission (splitting large radioactive atoms into smaller ones) is used as both an energy source (nuclear plants) and a weapon (the atomic bomb). The sun, too, uses nuclear energy, but in a fusion process (combining small atoms into larger ones). The only Earth-based fusion reactions producing significant energy have been uncontrolled, and so used only as weapons:  hydrogen bombs. Although we have yet to control a fusion reaction to produce energy, for billions of years the fusion reaction located a safe 93,000,000 miles from Earth has been the source of most of the forms of energy we discuss in this section.

Another source not leading right back to the sun is geothermal energy. Heat deep within the earth from both gravitational pressure and radioactivity creates a temperature differential, which can be harnessed to produce energy. The classic view of geothermal energy is of steam shooting from the ground. Like steam from other sources (e.g. wood or coal fires or nuclear fission reactions), this can drive turbines, which generate electricity.

Back to the sun. In 1954, 115 years after the principle of converting sunlight directly into electricity was discovered, photovoltaic cells, or “solar cells,” made it practical. The sun’s light causes electrons to move, and moving electrons are electricity. Solar cells are important sources of electricity on earth-orbiting satellites, the International Space Station, handheld calculators, and some buildings and homes. Centralized solar generating plants, however, are few and small compared to fossil fuel or nuclear plants.

 

How Solar Cells Work

Solar cells are semiconductors, similar to integrated circuits. These silicon products are descendants of the germanium rectifier that made possible the early 20th century “crystal” radios (which we touched on in the chapter What is Technology?). A rectifier allows electricity, the movement of electrons, to flow in only one direction. This is crucial in solar cells because, without a rectifier, an electron liberated by a photon of light can fall right back into place, releasing the energy it absorbed as another photon. Light goes into the material and light comes out.

But if a liberated electron is caught on the wrong side of a rectifier, it cannot return to the hole it left. If the easiest way around the rectifier and back to the hole it left is through wires, light bulbs, motors, or televisions, then that electron will go that way, driving our appliances. The more photons, the greater the imbalance of electrons on one side of the rectifier and holes on the other, the more force with which those electrons will flow through an electrical appliance to restore the balance.

Almost all the electricity we consume comes, not from these semiconductors, but from moving magnetic fields. It’s a physical law that electrons move in a conductor (e.g. wire) when a magnetic field moves nearby. The turbines spun by moving water in hydroelectric dams or by steam in coal, oil, or nuclear plants spin magnets near coils of wire, generating electricity. In the year 2001, the U.S. generated 494 million kilowatt hours from solar, representing just 0.013% of the electricity used that year. Even some of the solar power came, not from solar cells, but from moving magnetic fields, with sunlight heating water into steam to drive turbines.
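
Working backward from those figures gives a feel for the scale gap (a quick back-calculation in Python; the two numbers come from the paragraph above):

    # If 494 million kWh of solar was 0.013% of U.S. electricity in 2001,
    # how much electricity was generated in total?
    solar_kwh = 494e6
    solar_share = 0.013 / 100
    total_kwh = solar_kwh / solar_share
    print(f"implied total U.S. generation: {total_kwh:.1e} kWh")  # ~3.8e12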

 

In the use of fossil fuels we see some remarkable transformations of energy. In the sun’s fusion reaction, matter is converted to energy (according to Einstein’s famous e=mc2 equation) at the rate of 4,600,000 tons per second. Of the energy released from the sun, only one part in two billion reaches the earth. Millions of years ago, some of that was consumed by photosynthesis in plants to create sugar molecules. These molecules fueled growth of those plants, some of which were consumed by animals. On dying, some of these plants and animals did not decay. Instead, deprived of oxygen, they began a subterranean process involving pressure and heat that, over millions of years, produced fossil fuels. These carbon compounds, dense with potential energy, combust to produce heat to produce motion.

In combustion engines, fossil fuel drives the wheels of vehicles. A 2003 study calculated that 16 acres of wheat (and millions of years) would be required to make a gallon of gasoline, about enough to travel 20 to 40 miles in a typical car. In power plants, fossil fuels produce electricity. The process of generating electricity from sunlight by way of coal can be broken into 11 steps:

  1. Sun shines on ancient plants, fueling photosynthesis
  2. Photosynthesis creates sugar molecules, building blocks for biological growth
  3. Plants and animals die but, deprived of oxygen, are prevented from decaying
  4. Remains sink deep into the earth to be heated and crushed for millions of years
  5. Miners dig coal from the earth
  6. Trains transport coal to power plants
  7. Coal is crushed and refined
  8. Coal dust is combusted to heat water
  9. Steam expands and pushes turbine
  10. Turbine spins with magnets attached
  11. Moving magnetic field induces electric current in nearby coils of wire

From there, electricity drives technology, which continues to transform energy. Light bulbs turn electricity to light. Motors convert it into motion (which in a refrigerator drives a pump that creates a temperature differential). Heaters and electric ovens convert it to heat. Microwave ovens convert it to microwaves, which interact with foods and beverages to create heat. Televisions and stereos convert it into information-laden light and sound (another form of energy).

The history of technology includes many of these energy transformations. The first diesel train locomotives, like the steam locomotives that they replaced, had linkages to each of the drive wheels. Success of the diesel locomotive came with a counter-intuitive idea:  use the diesel engine to generate electricity, which is wired to an electric motor on each of the drive wheels. This sequence of fossil fuel to mechanical energy to electricity and back to mechanical energy eliminated heavy and inefficient mechanical linkages or transmissions. This back-and-forth conversion is so efficient and economical that the diesel-electric engine rapidly replaced the pure diesel, and is commonly referred to simply as the “diesel engine.”

Submarines benefited from the same diesel-electric combination. Until the snorkel, invented by a Dutch officer in 1933, submarines could not use their diesel engines while submerged because combustion requires large quantities of oxygen. The snorkel let them draw in oxygen while running just below the surface, hidden from radar. However, diving to safety deep below the surface still meant running on electricity. Bulky transmissions to connect either the diesel engine or electric motor to the propeller were eliminated by connecting the diesel engine to a generator, which both recharged the batteries and powered the electric motor now connected directly to the propeller.

This approach persists in nuclear submarines, whose generators derive heat from a fission reaction, which needs no oxygen (nor does it produce exhaust, which would reveal the boat’s submerged location to hunting ships). The heat turns water into steam to spin a turbine that turns a generator that produces electricity. As in the diesel-electric submarines, electric motors turn the propeller. Unlike the diesel-electric submarine, the nuclear submarine can run completely submerged until food runs out, since it produces both oxygen and drinking water from seawater. Land-based nuclear power plants take a similar approach. Heat from the fission reaction turns water to steam, which spins a turbine to generate electricity.

“Fuel cells” in hydrogen vehicles demonstrate energy transformation by combining hydrogen from storage tanks with oxygen from the air to create water and electricity, which runs electric motors attached to the wheels. These “zero-emission” buses and cars release pure water as exhaust. It stands to reason that if energy is released by combining hydrogen and oxygen into water, then it must take energy to separate them back out. Otherwise, we might just have the perpetual motion machine science maintains is impossible. Various technologies are under development to perform this separation, but the simplest is electrolysis:  running electricity through water. So it is possible that a “zero-emission” vehicle would run on electricity from hydrogen separated from water by electricity generated by burning coal. This system as a whole is, clearly, not zero-emission. But it does illustrate our main point:  every technology requires energy, and many technologies transform it.

When energy is as readily available as switching on a light or pressing a gas pedal, it is easy to ignore. Imagine being stripped of advanced technology for generating, converting, and consuming energy. Thousands of people begin foraging for sticks shortly after dawn. Before dusk, they shoulder bundles of wood for miles, selling them in towns to buy food for another day. Burning wood cooks food, heats homes, and drives primitive industry. To see this in the 21st century, visit Ethiopia, where 90% of consumed energy comes from biomass:  wood, charcoal, and cow dung. The capital, Addis Ababa, draws 15,000 “women fuelwood carriers,” who walk up to 10 miles with loads of 70 to 100 pounds that sell for as much as 70 cents. The fuel carriers are acutely aware of where energy comes from.

That awareness would be useful for those who can casually tap hundreds of horsepower in their cars, similar amounts in their homes (for heating, lighting, appliances, and entertainment), and more in elevators, airplanes, and climate-controlled businesses. Choices, amplified by and dependent upon technology, are informed by such awareness. The amount of energy we control has increased dramatically over the millennia and it appears that the trend will continue. Far in the future, when technology may appear quite unlike anything we have today, understanding and evaluating it will still depend on grasping the source of its energy.

 

Organization Part 1: Centralized vs. Distributed

Another characteristic that will persist into the future is the organization of technology systems as either centralized or distributed. The movie Back to the Future depicted a “Mr. Fusion” reactor mounted in a car, energy production distributed to the point of use. But there are good reasons that nuclear energy production is centralized today:  nuclear power plants are complex, expensive, dangerous, and require expert maintenance. Other technologies, including solar cells and windmills, can be either centralized or distributed, with advantages for each. History shows many technologies swinging from centralized organization to distributed and back, influenced by new capabilities, concerns, or requirements.

The “monster in the basement” was what Mrs. Vanderbilt called the steam engine that ran an electrical generator to power the new electric lights in her house. The fabulously wealthy Vanderbilt family had replaced their candles and gaslights shortly after Edison’s 1879 invention of the incandescent light, but before he developed a centralized power system. The monster in the basement was an example of a distributed power system and, with visions of high-pressure steam exploding through the floorboards, Mrs. Vanderbilt came close to throwing it out.

Until the 19th century, many factories were built next to rivers so that waterwheels could provide power to grind wheat, spin cloth, or cut wood. That distributed generation has become centralized, with most power today generated by large oil, coal, gas, nuclear, or hydroelectric plants.

 

Distributed 11th vs. Centralized 21st

In 11th century England, 5624 water mills provided most of the non-muscle power for between 1.25 and 2 million people. A thousand years later, in the 21st century United Kingdom, 177 power stations generate electricity to support almost 60,000,000 people. Another way to look at it:  about 300 people were supported by each water mill in the 11th century, while well over 300,000 people are supported by each power station now. So there has been more than a 1000 to 1 move toward centralization.

Some of the factors we did not consider tend to offset each other. Unlike 11th century inhabitants, 21st century people use more than electricity (e.g. fossil fuel for cars, trucks, airplanes, and trains). But we can safely say that 21st century people consume more electrical energy than 11th century people consumed of any kind of energy. A millennium ago, there were no electric lights, computers, refrigerators, microwave ovens, or televisions on which to spend energy.

So, even though each 11th century inhabitant consumed less energy, they had 1000 times as many power plants per person as we do now. How can this be?  Our few, centralized plants are far larger and more powerful than the many distributed plants they had.
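
The sidebar’s arithmetic takes only a few lines to verify (a sketch in Python using the round figures above):

    # People served per generating unit, 11th century vs. 21st century.
    people_11th, mills = 1.6e6, 5624      # midpoint of 1.25 to 2 million people
    people_21st, stations = 60e6, 177

    per_mill = people_11th / mills        # roughly 300 people per water mill
    per_station = people_21st / stations  # roughly 339,000 per power station

    print(f"{per_mill:.0f} people per water mill")
    print(f"{per_station:,.0f} people per power station")
    print(f"centralization factor: {per_station / per_mill:,.0f} to 1")  # >1000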

A friend of the author grew up on a small farm in Iowa, where a windmill provided energy for lights and a radio. In the 1940s, power lines marched past, erected by the new Rural Electrification Administration (REA). The price for connection to the reliable and centralized power?  Dismantling the windmill. The REA wanted no competition from that old farm. Centralized energy production was eradicating distributed, but, like a pendulum, the trend can also swing back.

In recent decades more people have started generating their own electricity with windmills in their yards or solar cells on their roofs. This distributed approach can be attractive for remote locations, which are often expensive to connect to the electrical distribution grid.

This pattern of centralized vs. distributed organization also shows up in technologies other than energy generation. Until the 1940s, residents of Key West, Florida, used cisterns to catch rainwater, a highly distributed water system. Then the U.S. Army built a central reverse-osmosis desalination plant to replace cisterns on the one-mile-by-four-mile island off the Florida coast. By order of the U.S. Army, all cisterns were filled with salt water, covered over, or somehow disabled to prevent any diseases harbored by standing water.

Surprisingly low water pressure from the new system spurred nocturnal work habits among residents, who often washed laundry at 2:00 AM when water pressure was a little higher. Even at low pressure, the desalination process was expensive, so it was replaced with a 16-inch pipe that brought water from the mainland. Vulnerable to storms, the new pipe inspired little confidence and cisterns returned, cleverly disguised (one as a fireplace) to avoid the U.S. Army’s unwanted assistance.

What factors cause technology to organize in a centralized or distributed way?  One is technological capability. Because mid-20th century computers were huge, expensive, and required expert maintenance (and security and air conditioning), they were centralized. Late-20th century microprocessors were tiny, cheap, and ran without maintenance, so they were distributed (to microwave ovens, VCRs, and the doorknobs of hotel rooms)…except when other factors came into play. Some computing problems were so large (tracking every purchase at Wal-Mart) that they needed many coordinated microprocessors, so these were centralized…until the Internet and new software allowed some of these tasks to be distributed (e.g. calculation of protein folding patterns or the search for prime numbers).

Environmental conservation and volatile energy supply are two other factors that affect the organization of technology.  Environmental conservation has encouraged some Californians to invest in solar power for their homes. In 2001, unpredictable energy supplies in California (prices jumped as high as fifty times previous levels and blackouts rolled around the state) prompted another movement towards decentralized solar energy production.

By contrast, convenience and simplicity of distribution have pushed energy technology toward centralization. Distributing electricity is much easier than distributing solid fuels such as coal. Centralized coal-fired electrical generation saves us from having coal dropped off at each of our homes, as was once done for heating. If we each wished to use coal to generate electricity, we would each have to operate and maintain our own little power plant, which would be much harder than having a dedicated, expert staff operate and maintain centralized power plants in shifts around the clock.

Efficiency is another advantage of centralization, given current energy technology. Not only are larger plants more efficient, but they can run continuously because someone is always using power. It would be a waste to run our own plant when we are not consuming electricity or to shut it down and start it up each time we do. When large plants have to shut down for maintenance, starting them back up is a lengthy and expensive process, so it happens as infrequently as possible.

But just when we are sure that the choice between centralized and distributed organization is obvious, technology can change and so can the best choice. Thomas Edison’s first electrical power plant was in downtown New York City. A high density of consumers was crucial because Edison used direct current (DC), which does not travel well. Over long distances, most of its energy is lost to heat. Clearly, distributed power production appeared to be the future of electricity. And then technology changed.

Nikola Tesla invented and developed technology for alternating current, which can travel well over long distances. Today, alternating current dominates (alternating 60 times per second, or 60 Hertz in the U.S.) and towers hundreds of feet tall march across rural areas, carrying extremely high voltage from centralized power plants to far away consumers. Since then, the long-term trend has been toward centralization, as technological advances allowed larger, more efficient fossil fuel and nuclear plants.

New technology may well follow trends from centralized to distributed and back again. Having seen this pattern in energy production, water distribution, and computers, we may find it easier to identify elsewhere. Centralized and distributed systems have been tested over thousands of years in technology and over millions of years in biological systems—plants and animals. Studying the tradeoffs made and the environments in which each approach has been most successful may save us from making costly mistakes.

 

Control: like riding a bicycle

A completely different pattern in how technology works concerns control. What does “control” mean in technology?  Bicycling explains.

Bicycles are not stable. Rushing along on just two wheels, we are constantly falling to the left or the right, but once we’ve mastered “balance” we subtly turn the handlebars in the direction of the fall and start the process going to the other side. Don’t believe it?  Try holding the handlebars dead straight or—easier to do—drop both wheels into a narrow channel just wide enough for the tires, like the rut between railroad tracks and a crossing street. Wear a helmet, gloves, and appropriate body armor—and do not try this where trains are running.

OK, for liability reasons, please do not try this at all.

To ride we need two things:  feedback of which way we are starting to lean over and correction with the handlebars. This section is about how control (feedback and correction) is a pattern common to many technologies.

For most of history, control has been provided by the human component in the system, such as the rider on the bicycle. Even in the striking of one stone against another to create a sharp edge there is an element of control. After each strike, the human observes how the stone has chipped (feedback) and adjusts the next strike accordingly (correction). One of the oldest technologies to appear to control itself without a human in the control loop is a curious novelty from ancient China.

Living in China around 300 AD, you might have been lucky enough to see the strangest carriage in the world. A small statue pivoted on top of the carriage to point south, no matter which way the carriage turned. How did it work?  The magic was performed by gearing very much like the differential in a car’s drivetrain, which allows the left and right wheels to turn at different speeds. This is critical because, as a car turns right, for instance, the left wheels have to travel farther and spin faster than the right wheels. In the case of the Chinese carriage, the difference in speed between the left and right wheels determined how far to rotate the statue.

Whatever direction the statue started out pointing (presumably south), the mechanism would keep it pointing that way…assuming that the wheels did not slip on the ground and all the machinery were perfectly precise. In practice, imperfection and slipping would be present. Small errors would accumulate until, eventually, the statue could be pointing in any direction, so the carriage never became anything more than a novelty.

The South Pointing Carriage fails to incorporate control because it lacks feedback when it wanders from pointing south and it lacks correction to reposition it to point south. What the ancient Chinese needed was a control system that incorporated something they had already invented:  the magnetic compass. And something invented over 1600 years later would have been useful for monitoring the compass and signaling for appropriate corrections:  the microprocessor. Its invention in 1971 has done more to remove humans from the control loop than anything else in history.

Microprocessors can monitor sensors, follow algorithms, and operate motors, lights, and other electric devices. With modern technology, the inventors of the South Pointing Carriage could have monitored a magnetic compass with a microprocessor, programming it with an algorithm that runs an electric motor to rotate the statue until it points south.
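
Such a control loop is only a few lines long. Here is a sketch in Python; the compass and motor functions are hypothetical stand-ins for real hardware:

    import time

    def read_compass_degrees():
        """Hypothetical sensor: return the statue's heading, 0 to 360."""
        return 175.0  # a stand-in value; a real system would query the compass

    def rotate_statue(degrees):
        """Hypothetical actuator: turn the statue by a signed amount."""
        print(f"rotating statue {degrees:+.1f} degrees")

    SOUTH = 180.0    # compass heading of due south
    TOLERANCE = 2.0  # drift we tolerate before correcting

    for _ in range(3):  # a real carriage would loop forever
        heading = read_compass_degrees()             # feedback
        error = (heading - SOUTH + 180) % 360 - 180  # signed error, -180..180
        if abs(error) > TOLERANCE:
            rotate_statue(-error)                    # correction
        time.sleep(0.1)                              # check ten times a second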

This ability to monitor a condition, follow an algorithm, and operate devices makes microprocessors good at the feedback and correction necessary for control.  So good, in fact, that they have invaded many technologies, such as thermostats, antilock brakes, microwave ovens, and video games. This is beginning to remove humans from the loop, though we are still involved in setting the parameters, such as the temperature that the thermostat should try to maintain. As technology becomes more sophisticated, requiring finer and quicker control, and computers become ever less expensive, we may find human control farther and farther removed.

In nuclear power plants, airplane cockpits, and intercontinental ballistic missile launch sites the stakes for control mistakes are high. So how should they be controlled?  Humans are more fallible than technology, which shows few tendencies to sleep, forget, drink, gamble, or get into compromising situations subject to blackmail. On the other hand, humans have a much greater contextual awareness than does technology, which does not yet read the morning paper or know about circumstances that might impinge on a decision.

For now we compromise by keeping both technology and humans in the control loop of critical systems. Technology does the repetitive (check the temperature 10 times per second, 24 hours a day) and humans make the really big decisions (flood the reactor core).

A twist in history gives us an example of human control being inadequate. In the mid-20th century, a new airplane design called the “flying wing” was introduced. The military aircraft was all wing and no fuselage, a very efficient shape. Unfortunately, it is also an unstable shape, and human pilots had so much difficulty controlling it that the design was abandoned.

Near the end of the 20th century, the designers of the stealth bomber independently came up with the same flying wing shape. At that time, computers could be used on-board to operate all of the aerodynamic control surfaces, keeping the plane stable and flying in the direction the pilot indicated. Like antilock brakes, the on-board computers were able to make many comparisons and corrections each second. There is an interesting coincidence between the stealth bomber and the old flying wing. The stealth designers used computer simulations to determine the optimal dimensions of the modern plane. They came up with a wingspan of 172 feet. Later, when they realized how similar their new plane and the flying wing appeared, they looked up the old specifications. The flying wing had a wingspan of 172 feet.

One way to look at this: we relinquished control of one technology (airplanes) to another (computers). Essential to the feedback and correction in a control system is information. Correction is based on feedback, which is information. Modern airplanes have replaced the mechanical linkages between the pilots’ controls and the aerodynamic control surfaces (e.g. flaps, ailerons, elevators) with wires. In these “fly by wire” systems, information in the form of electrical signals is clearly at the heart of the system. The stealth bomber took this further by letting an information-processing computer operate the controls. Information is at the heart of many technologies.

 

If there were something like
a guidebook for living creatures,
I think the first line would read like
a biblical commandment:
Make thy information larger.

— Werner Loewenstein

Information: algorithms

Information controls how technology works. It has since long before the stealth bomber. Silk looms in 18th century France stored information about the patterns to be woven as holes in strips of paper, pegs on a cylinder, and holes in cards (invented by Bouchon, Vaucanson, and Jacquard, respectively). That information controlled which colored thread of silk the loom wove through the fabric at each of thousands of steps. This automated the fabrication of the complex designs consumers wanted in their silk clothing, tablecloths, and wall hangings, designs that could otherwise be ruined by the careless mistake of a fatigued worker.

Looms inspired computers, which initially stored information with mechanical gears, paper tape, and punch cards before such modern developments as the optical compact disc (CD). The information on a compact disc can control a computer by telling it what sounds to play (if it is a music CD and the computer has an application that plays sound files) or what instructions to execute (if it is an application CD). This example shows two different kinds of information, which computer scientists term “data” and “program.”

The pegs or holes in silk looms represented data, as did the information (“feedback”) from sensors on the stealth bomber. Program information for the looms was in the techniques of the loom operators and was shared verbally (from master to apprentice). Program information for the stealth bombers was in computer applications and was shared magnetically (from development computer to on-board computer). While programs may be complicated—computer programs may have millions of lines of code—they are based on algorithms, which are defined as the rules for solving a problem. For instance, a basic thermostat controlling a home heater, whose computer program would be completely unintelligible to most of us, follows this simple algorithm:

  1. Measure current temperature (data)
  2. Compare it to the desired level set on the dial (data from the human operator)
  3. If current temperature is less than set temperature, then turn on furnace; otherwise, turn it off.
  4. Loop back to step 1
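
Translated into code, the loop might look like this (a minimal sketch in Python; the temperature and furnace functions are hypothetical stand-ins for real hardware):

    import time

    def read_temperature():
        """Hypothetical sensor: return the current room temperature (degrees F)."""
        return 66.0  # a stand-in value; real hardware would be queried here

    def set_furnace(on):
        """Hypothetical switch: turn the furnace on or off."""
        print("furnace on" if on else "furnace off")

    set_temperature = 68.0  # the desired level dialed in by the human operator

    while True:
        current = read_temperature()            # step 1: measure (data)
        set_furnace(current < set_temperature)  # steps 2 and 3: compare, act
        time.sleep(60)                          # step 4: loop back to step 1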

Something common in bathrooms gives us another example. The Sonicare™ electric toothbrush, which promises to clean and whiten your teeth with high-speed vibration of the bristles, follows this algorithm:

  1. If the button is pressed, run for two minutes before turning off.
  2. If the button is pressed while running, turn off.
  3. If off for less than 45 seconds (you just wanted to add more toothpaste, for instance) when the button is pressed, run for whatever was left of the original two minutes.
  4. If off for more than 45 seconds or placed back in the charger before the button is pressed, run for a full two minutes.
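
Those four rules describe a small state machine. Here is one way to sketch it in Python (the class and method names are ours, and a real brush would also switch itself off when the remaining time runs out):

    RUN_TIME = 120.0   # two minutes, in seconds
    GRACE = 45.0       # a pause shorter than this resumes the old session

    class Toothbrush:
        def __init__(self):
            self.running = False
            self.remaining = RUN_TIME   # seconds left in the current session
            self.started_at = None      # when the bristles last started
            self.stopped_at = None      # when they last stopped

        def button(self, now):
            """Handle a button press at time `now`, in seconds."""
            if self.running:                          # rule 2: press while running
                self.remaining -= now - self.started_at
                self.running, self.stopped_at = False, now
            else:
                if self.stopped_at is None or now - self.stopped_at >= GRACE:
                    self.remaining = RUN_TIME         # rules 1 and 4: fresh session
                # rule 3: within the grace period, keep the leftover time
                self.running, self.started_at = True, now

        def placed_in_charger(self):
            self.stopped_at = None                    # rule 4: charger resets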

We find a slightly more complicated, but also more familiar, algorithm in the cookbook Laurel’s Kitchen:  A Handbook for Vegetarian Cooking and Nutrition. Their recipe for cornbread:

 

Ingredients

2 cups cornmeal

½ cup wheat germ

1 teaspoon salt

½ teaspoon baking soda

1 teaspoon baking powder

1 tablespoon brown sugar

1 large egg, beaten

1 tablespoon oil

2 cups buttermilk

Instructions

Preheat oven to 425°F.

In a large bowl stir together dry ingredients.

In another bowl, mix the wet ingredients.

Combine the two just until they are well mixed.

Turn into an 8” x 8” baking pan, well greased.

Bake for 20 to 25 minutes.

 

Notice that the recipe does not specify who is following the rules. It could be a woman, a boy, a robot, or some brilliantly coordinated insects. The technical phrase for this flexibility is “substrate independence,” but we can also view it as separating function from form, procedure from implementation, or information from matter. Because of the algorithm’s substrate independence, understanding the behavior of a technology can often transcend the details of how that technology is implemented. Even when we comprehend little else about a technology, algorithms can allow us to predict its behavior.
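
In fact, nothing stops us from expressing the recipe for one more substrate:  a computer. A playful sketch in Python, where stir, combine, and bake are parameters precisely because the algorithm does not care who or what implements them:

    # The cornbread algorithm, re-expressed for another substrate.
    DRY = ["2 cups cornmeal", "1/2 cup wheat germ", "1 teaspoon salt",
           "1/2 teaspoon baking soda", "1 teaspoon baking powder",
           "1 tablespoon brown sugar"]
    WET = ["1 large egg, beaten", "1 tablespoon oil", "2 cups buttermilk"]

    def make_cornbread(stir, combine, bake):
        dry = stir(DRY)               # in a large bowl, stir dry ingredients
        wet = stir(WET)               # in another bowl, mix the wet ones
        batter = combine(dry, wet)    # just until well mixed
        return bake(batter, degrees_f=425, minutes=(20, 25), pan="8x8, greased")

    # A trivial "substrate" for demonstration:
    make_cornbread(
        stir=lambda items: f"bowl of {len(items)} ingredients",
        combine=lambda dry, wet: (dry, wet),
        bake=lambda batter, degrees_f, minutes, pan: print("baked:", batter),
    )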

Computers have been built on substrates of mechanical gears, magnetic relays, vacuum tubes, discrete transistors, integrated circuits of millions of transistors, test tubes of DNA, and even Tinker Toys™, the children’s toy of wooden blocks, pulleys, strings, and sticks. Daniel Hillis, a computer scientist who designed one of the most advanced computers of the 20th century, created a Tinker Toy computer to play tic-tac-toe. It had neither the power nor the speed of the commercial products he designed, but it operated on the same algorithms. He summarized this substrate independence for computers:  “One of the most remarkable things about computers is that their essential nature transcends technology.”  One of the most remarkable things about technology is that its essential nature transcends matter. Information in the form of algorithms is why.

We can apply this principle to understanding nanobots, microscopic robots built on the nanometer scale (each part of the robot might be just billionths of a meter or a few atoms in a row). Because nanobots have not been invented yet, we can only speculate about how they would work. Some call them science fiction; others predict they are a likely consequence of the nanotechnology we are already developing (which is still primitive, limited to such inventions as sunscreen with nanometer scale titanium dioxide particles to better block ultraviolet rays and Eddie Bauer pants that won’t stain even with red wine). The consequences of nanobots are interesting enough and serious enough for us to start thinking about them just in case they are possible.

 

Free Matter and Valuable Information

Fabricating from the atoms up with nanotechnology—perhaps even scavenging the carbon atoms with which we have polluted our atmosphere through smokestacks and vehicle tailpipes—suggests that information may become more valuable than materials. To explain, we start with a couple of technologies that already exist.

The value of a compact disc (CD) depends almost entirely on something invisible to the naked eye. Suppose you receive one in the mail. How much is it worth?  If it is a new computer application, it could be hundreds of dollars. If a music CD, perhaps $10 to $15. If a pitch for Internet service (and you are already satisfied in that way), it is worthless—at best it would make a shiny drink coaster on your coffee table. If it contains the human genome, then it cost hundreds of millions to create, but since it is freely downloadable, the CD is not worth much. If you could only take that CD back a few years in time, you could sell it to those about to spend all that money on the human genome project!  In short, the value of the CD you hold in your hand has much more to do with the information on it than in the materials that compose it.

New digitally controlled chemical molding systems may soon download information on the design of a table or bookshelf and then have the item pop out of an automated factory. Current designs of this “automated factory in a box” already produce hulls for boats, but could produce almost anything that can be molded out of plastic. Soon it may be possible to drop these from airplanes into remote areas suffering from natural or human disasters. They could produce sections of irrigation pipeline in the morning, download new plans, and produce containers to store food in the afternoon.

In the future, nanotechnology may remove the restriction that the product be molded out of plastic and make the automated factories small and affordable. A “nanotechnology factory” may use carbon as a raw material since it can be very hard (e.g. diamonds and fullerenes) and there’s a lot of it just floating around in the air from all the fossil fuels burned the last few centuries. It would be possible to download the plans or specifications for an object and have it fabricated in your home or office by a “matter compiler.”  If that technology develops, our attitudes about information and material in products will be very different. And information may be one of the few things left with value.

 

Nanobots could manipulate matter at the nanometer scale, which means they could arrange and rearrange atoms. Scavenging atoms and molecules from their surroundings, they could make copies of themselves. The Sorcerer’s Apprentice segment of the movie Fantasia, in which Mickey Mouse lets replicating brooms get out of control, suggests what could happen with nanobots. Once a nanobot makes a copy of itself, both it and the copy make copies. Then those four each make copies. Even if making a copy took a day, there would be one billion of them at the end of a month and 1153 quadrillion at the end of a second month. What’s to stop this?

Perhaps an algorithm. The self-replicating nanobot could include a counter, which would start at “10” in the original nanobot. Then, when a copy is made, the counter decrements to “9” and the copy has its counter set to “9”, too. The four nanobots in the next generation each have counter values of “8”. When the counter reaches “0”, the nanobots stop multiplying. This would result in 10 generations or 1024 nanobots. If this is not enough for our purpose, we could program the counter to start at a larger number. Even without understanding the substrate or the implementation of nanobots, we can understand this algorithm.
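
Indeed, we can run that algorithm today on a very different substrate. A sketch in Python, simulating each hypothetical nanobot as nothing more than its counter:

    def replicate(counter_start=10):
        """Simulate self-replicating nanobots limited by a generation counter."""
        population = [counter_start]       # one original nanobot
        while any(counter > 0 for counter in population):
            next_generation = []
            for counter in population:
                if counter > 0:            # parent and copy both decrement
                    next_generation.extend([counter - 1, counter - 1])
                else:
                    next_generation.append(counter)  # at 0: stop multiplying
            population = next_generation
        return len(population)

    print(replicate(10))  # 1024 nanobots, then the multiplying stops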

This is not a new technique; Nature came up with it long ago to prevent rampant replication. Our algorithm is similar to that in some cells, which divide until their counter—a steadily shortening tail of telomeres—runs down. Now we can apply something we know from one field—medicine—to technology, and anticipate what might happen if the counter fails. Cancer is a disease that defeats the telomere countdown, mutating the cell to allow unlimited reproduction. Could something like cancer afflict nanobots?  Could a mutation in a self-replicating nanobot change its algorithm and allow it to replicate forever?  What would stop it from converting all material on the surface of the earth into copies of itself?  Evaluating nanotechnology while nanobots are still science fiction would seem a good idea.

 

Are Computers Really Getting Cheaper?

Near the end of World War II, the first general purpose electronic computer was built for almost half a million dollars. In 2004 dollars, that would be more than $5,000,000, and a 2004 computer costing $1000 would run circles around the Eniac. This is no surprise; computers have plummeted in price and continue to become more powerful every year (doubling in power every 18 to 24 months). What might be a surprise is that if we built just one new computer, rather than enjoying the economy of scale from making millions, it would cost more than the Eniac.

The Eniac is like an underdog coming into a boxing match. It’s slow, able to execute just 5000 instructions per second. It’s weak, with just 19,000 vacuum tubes and 1,500 relays. It’s hungry, consuming almost 200 kilowatts of electrical power. And it’s heavy, weighing over 30 tons.

A modern microprocessor would be the heavy “odds-on” favorite in a fight. It’s fast, executing a billion instructions every second. It’s powerful, orchestrating tens of millions of transistors (functionally like vacuum tubes or relays). It’s not hungry, drawing less than 100 watts (double that for a complete PC). It’s light, weighing less than an ounce (though, to be usable, it needs the rest of a 20 pound PC around it).

At first, the microprocessor wins on cost, too, at about $150. But suppose we made just one microprocessor. Research and development for its design would cost between one-fifth and one-quarter of a billion dollars. Plant, property, and equipment for a “development” fabrication plant would cost about half a billion dollars. That single microprocessor would cost about three-quarters of a billion dollars!  The Eniac seems almost a bargain at $5,000,000…as long as you don’t need something more powerful than a handheld calculator.

Of course, nobody makes just one microprocessor. Instead, a company like Intel will invest almost two billion dollars more (beyond the three-quarters of a billion we already counted) on plant, property, and equipment for a “production” fabrication plant and then produce 10 million processors each month. This volume makes possible massively powerful computers composed of many microprocessors, including ASCI Q (12,000 processors), Blue Gene (one million processors), and the protein folding simulation project Folding@Home (more than 200,000 processors distributed all over the world).

 

Nanocubes are just a millionth of a millimeter
(a nanometre) across. Stacked like bricks,
they could make up a range of materials
with useful properties such as light emission
or electrical conduction. Many chemists are currently
trying to develop molecular-scale construction kits
in which the individual components are single molecules…

– Philip Ball

Organization Part 2: Repetition & Layers

Nanotechnology promises to be able to create nearly anything just by assembling atoms. That assembly would rely on two things: repetition and layers. Repetition is doing the same thing again and again, such as stacking brick upon brick. The layering that interests us is not the physical layering of bricks but the conceptual layering of bricks to make walls, walls to make buildings, and buildings to make towns.

The designer of a town need not understand how a building is made, the designer of a building need not understand how a wall is made, and the designer of a wall need not understand how a brick is made. Like Dr. Seuss’ Cat In The Hat (with smaller cats under each hat), an onion, or a Russian Egg (with smaller eggs inside each egg), towns, buildings, walls and bricks are layers containing layers.

Biological processes layer with organs, cells, proteins, and molecules, using mass repetition at each layer. Technological processes do the same with their own building blocks. For instance, nanotechnology is the creation of objects at the nanometer (one billionth of a meter, or roughly molecular) scale. If we could repeat and layer molecules, we could create just about anything for which we had a design. If we could do it economically, it would change our world. Although nanotechnology is very young, repetition and layers have already proven useful in a variety of complex technologies.

A billion microprocessor instructions execute in less than a second. In fewer than seven seconds, a microprocessor could execute one instruction for every human on earth—and that’s on a machine costing just $1000. Who could possibly think up 7,000,000,000 different instructions?  Nobody has to because most computers use about 100 instructions, which they repeat in many combinations. The power of the computer’s language is in repetition.

A similar example of repetition can be found in robots. In the movie Terminator 2, Arnold Schwarzenegger faces a futuristic robot made of liquid metal. It can reshape itself to appear as a person or an object. Blow a hole in it and it heals. Smash it into pieces and the pieces flow back together, like mercury, to reform the original. Complete magic?  Not if you imagine tiny robots—perhaps as small as nanobots—working together like a cheerleading team, but instead of forming a pyramid of a half-dozen people, millions of them would assemble into anything for which they had a plan. That plan or design would be encoded in each robot’s software or downloaded from some other system.

The U.S. military is already testing robots just one millimeter on a side that are strewn from robot planes. They wait and detect passing vehicles, transmitting what they notice up to the plane, which relays it to battlefield commanders. One-millimeter robots are small enough to hide in carpets. How long before they can go beyond listening and signaling to choreograph their own movements?

One mobile robot, the PARC Polybot, transforms itself from a looped tractor tread (for speed) to a caterpillar (for climbing or descending obstacles like stairs) to a four-legged spider (when the terrain is level but uneven). How does it do this?  The robot is composed of a dozen identical modules that can attach to each other in many ways all on their own, using infrared communication between the modules and computers in each module. More and smaller modules make the robot more versatile. A planned 200-module robot will come closer to, but still well short of, the technology portrayed in Terminator 2.

Repetition in robots means greater tolerance of individual module failures and more flexibility in assuming shapes. And since the modules are all the same, they can be mass-produced, making them cheaper than an equal number of unique modules would be.

Supercomputers made of many mass-produced microprocessors cost much less than comparably powerful supercomputers based on older designs. The most powerful computers now consist of thousands of microprocessors in tight networks:  ASCI Q will use 12,000 to simulate nuclear weapons at Los Alamos National Laboratory. The Blue Gene computer will simulate how proteins are created using more than a million processors.

Repetition is behind the power of printing with interchangeable type. In the 15th century, Johannes Gutenberg created a simple cast to form individual letters and a rack for holding them. Although the 11th century Chinese alchemist Pi Cheng developed movable type four centuries earlier, he had to deal with thousands of symbols. Writing in a language with just a few dozen symbols (without j, v, and w, the German alphabet then had just 23 letters), Gutenberg could better exploit the power of repetition. For this and a few other reasons, he was the first to make printing with movable type practical and successful.

Who would guess the power of repetition simply by looking at bins of metal blocks, each with a letter in relief?  Yet, Gutenberg’s press, distinctive because of these interchangeable symbols, enabled Martin Luther to give birth to the Protestant movement, scientific knowledge to spread, and the common person to read.

The power of repetition also enabled further specialization because it allowed writers to reach an audience far beyond their own town. They could research and write on subjects of interest to only a small percentage of the populace because a small percentage of Europe’s population was still large enough to justify a book.

Specialists develop complex technologies by working on different aspects of them. In the opening of this chapter, we listed a few of the specialists necessary to understand a modern car, and this team approach is necessary for many of our technologies. How do we team up, break the complex into specialty pieces, and then reassemble those pieces into something that works?  By layering, a technique that systematically hides information.

Imagine a modern computer as having layers like an onion, from the outer husk down to the core. We start with the application software (e.g. a web browser, email reader, accounting program, or word processor) seen on the computer screen:

  1. Application software runs on…
  2. Operating systems, which are programmed in…
  3. Machine languages, which are interpreted by…
  4. Microprocessors, which are assembled from…
  5. Logic gates, which are composed of…
  6. Transistors, which are built from…
  7. Silicon (with impurities).

Why layer?  Because you can specialize in understanding one layer, using pieces of the layer below as if they were Lego™ building blocks. You do not need to understand how to make them or how they work internally. All you need to know is what functions they perform. Someone working at the next layer up can use whatever you create, and that person need not understand the details of your design if you document its function.

This way of hiding information that’s not needed is called “functional abstraction” because, instead of understanding all the details of the layer below, you satisfy yourself with a summary or abstract of what function or service it performs for you. The motivation for layering is making complex systems simpler to understand, design, build, test, and modify.
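
The idea shrinks down nicely into code. In the Python sketch below, a toy model of layers 3 through 6 in the list above, each function uses only the documented behavior of the function beneath it, never its internals:

    def transistor_switch(a, b):          # bottom layer: silicon switching
        return a and b

    def and_gate(a, b):                   # logic gate, built from "transistors"
        return transistor_switch(a, b)

    def adder_bit(a, b):                  # a scrap of a microprocessor:
        return (a != b), and_gate(a, b)   # one-bit sum and carry

    def add(x, y):                        # "machine language" layer: addition
        result, carry, bit = 0, 0, 0
        while x or y or carry:
            s1, c1 = adder_bit(bool(x & 1), bool(y & 1))
            s2, c2 = adder_bit(s1, bool(carry))
            result |= int(s2) << bit
            carry = int(c1 or c2)
            x, y, bit = x >> 1, y >> 1, bit + 1
        return result

    print(add(19, 23))  # the "application" layer just calls add(): 42

Whoever writes add() needs to know only what adder_bit() does, not how; whoever writes adder_bit() needs to know only what and_gate() does, and so on down to the silicon.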

The concepts of repetition and layering offer good news in our quest to understand technology. If we decide we want involvement in some aspect of a complex technology, we may need to learn just one instance in one layer. Repetition allows us to replicate the one instance we understand with, often, predictable results. With layering, we need not swallow technology whole, but can bite off just a little by treating the layers above and below our bite as hidden. This means we need only understand how the layers neighboring ours behave externally. We do not need to understand the complexities of how they operate internally.

But incorporating these powerful concepts into our understanding of how technology works is like climbing a mountain from whose peak we can now see a higher peak. Well, two higher peaks.

The first peak has to do with the ability to communicate. If we are seeking expert knowledge in order to evaluate a complex technology, we may need to call on a different expert for each layer. An expert on one layer may have little to tell us about another layer, may even use different terminology, and so may have difficulty communicating with experts on other layers. It should not be a surprise, then, that layering falls short of eliminating all the challenges of understanding complex systems. Even great tools don’t solve everything.

The second peak of our conceptual mountain is a bit of a surprise, however. The premise of our brilliant divide-and-conquer approach is that we can break complex technology down into its components and, by understanding one or a few of them, extrapolate to the whole. The key sentence three paragraphs back was, “Repetition allows us to replicate the one instance we understand with, often, predictable results,” and the problem is in the word “often.”

When the components being replicated are organized into certain types of networks, new behaviors emerge quite unlike the behavior of any of the components. This “emergent behavior” makes such systems hard to predict. While it is appearing more frequently in new technologies, it is already as common in Nature as ants.

 

The amazing feats…come not from
complex actions of separate colony members
but from the concerted actions of many nestmates
working together…One ant alone is a disappointment;
it is really no ant at all.

— Bert Hölldobler

Emergent Behavior

Peter Cochrane played with ants in a gutter, watching how they work and interact. Running back and forth on six tiny legs, ants have been perfecting their roles since the time of dinosaurs 100 million years ago. How many mistakes have they made?  What is it about their behavior that is so effective that it has become dominant?  Cochrane had good reason to be curious. As head of research and development for British Telecom, he had nearly 1000 people working to improve a vast network of computers, satellites, and cables. He believed that his people could learn a lot from ants.

Ants teach us how complex behavior emerges from a system of simple parts. An ant has a few hundred thousand neurons of brainpower, far short of the human complement of 100 billion (vision and hearing require lots of neurons to decode, so the 125 million neurons in the human eye may suggest why ants rely more on smelling than seeing). If you were given an individual ant to study, it would be difficult to predict the behavior of a colony:

  • Hierarchy – The queen, larvae, workers (sometimes several castes and sizes), and soldier members identify themselves and each other with chemicals (pheromones).
  • Communication – Ants touch each other and use pheromones to convey about 10 to 20 equivalents of words or phrases. They communicate discoveries of a food supply, a new territory to explore, an invasion by enemies, or a good location for a new nest.
  • Career planning – Pheromones from the queen inhibit all but a few daughters from laying eggs, because the colony needs more workers than rival queens. Soldiers spread pheromones that, in sufficient quantity, cause larvae to develop into workers instead of soldiers, maintaining balance between the castes. After all, soldiers need to eat, and it is workers that bring food.
  • Domestication – Some species of ants tend aphids much as we tend cattle because aphids secrete “honeydew” (actually excrement) rich in sugar, B vitamins, and minerals. The ants protect the aphids fiercely and keep aphid eggs alongside their own. They know which parts (e.g. root or leaf) of which plants each type of aphid likes, carrying the aphids from inside the nest to the appropriate “pasture” or, when necessary, on to new pastures.
  • Slavery – While division of labor between workers and soldiers is common within a species, some ant species are equipped for nothing but attacking and kidnapping other species of ants. They rely entirely on their slaves for tending the eggs, gathering food, and repairing the nest. When not raiding another species’ colony for eggs to raise as future slaves, they sit idly or beg food from their slaves. Deprived of their slaves, they would die.

Much of an ant colony’s behavior emerges from the interaction of its parts. The simple rules that govern the individuals often lead to the appearance of coordination. Foraging ants leave a chemical (pheromone) trail. When they find food, they retrace their steps to return to the colony. In many colonies, ants follow the simple rule of following paths other ants have taken. As more ants find the food and return, the path gets a stronger chemical dose and becomes more attractive for other ants to follow. When the food is consumed or disappears, the ants go back to aimless wandering.

Observing from above, it might appear that the ants know how to find any nearby food and then coordinate legions to retrieve it. But without supervision or central coordination, what appears to be intelligent behavior actually emerges from nothing but the complex interaction of simple parts. This collective behavior is called swarm intelligence.
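A few lines of simulation can show this reinforcement at work. The Python sketch below is a toy model with invented numbers, not a biological claim: simulated ants choose between a short and a long path in proportion to pheromone strength, deposit more pheromone per unit time on the shorter path (its round trip is quicker), and all pheromone slowly evaporates.

    import random

    # Two paths from nest to food; the short one takes half as long to walk.
    length = {"short": 1, "long": 2}
    pheromone = {"short": 1.0, "long": 1.0}   # equal at first: no information yet

    for ant in range(1000):
        # Rule 1: choose a path in proportion to its pheromone strength.
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"

        # Rule 2: lay pheromone while walking; a shorter round trip means
        # more trips, and so more pheromone, per unit time.
        pheromone[path] += 1.0 / length[path]

        # Rule 3: pheromone evaporates, so unused paths fade away.
        for p in pheromone:
            pheromone[p] *= 0.99

    print(pheromone)   # the short path ends up holding far more pheromone

No ant ever compares the two paths, yet the colony’s preference for the shorter one emerges from deposit, attraction, and evaporation alone.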

With simple rules and without supervision, ants can be very efficient. And inspiring: after building an ant farm in the British Telecom R&D laboratory, Cochrane had a team rewrite network software from 1,600,000 lines of code down to just 1,000. Southwest Airlines copied ant colony behavior to gain more than $10 million in its freight delivery business: transferring freight among its airplanes uses rules similar to those ants use to forage. Finding the best path to food, it appears, follows rules similar to finding the best path to deliver a package. Swarm intelligence may be neither conventional nor intuitive, but it is effective.

Swarm intelligence is one example of what is generally called emergent behavior. If we build a system of parts that interact with each other, we may see it act in surprising ways, doing things that would be hard to predict from the behavior of each part. Is it the nature of all technology systems to exhibit this emergent behavior?  Why don’t we see it in toasters or in cars?  What about technologies whose rules are far more complex than the rules foraging ants use?  Will we have difficulty predicting how these “systems of parts” behave?

Not all complex systems exhibit emergent behavior. There are four ingredients to a system that does:

  1. No central control
  2. Parts work on their own
  3. Parts affect each other
  4. Effects can loop back to their source
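All four ingredients can be seen in a classic toy system, Conway’s Game of Life (our illustration here, not one from the ant colony): every cell independently follows one local rule, cells affect their neighbors, and those effects loop back. The Python sketch below starts with a five-cell “glider” that travels diagonally across the grid, though nothing in the rule mentions movement.

    # Conway's Game of Life: each cell lives or dies by one local rule
    # applied to its eight neighbors -- no cell is in charge.
    live = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a five-cell "glider"

    def step(live):
        # Count the live neighbors of every cell that touches a live cell.
        counts = {}
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) != (0, 0):
                        cell = (x + dx, y + dy)
                        counts[cell] = counts.get(cell, 0) + 1
        # The rule: a cell is alive next generation if it has exactly 3
        # live neighbors, or has 2 and is already alive.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    for generation in range(5):
        print(sorted(live))
        live = step(live)
    # After four steps the same shape reappears shifted one cell
    # diagonally: "movement" that no individual rule specifies.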

Traditional technology is designed for clear and largely unchanging specifications. It is efficient but brittle. But, taking a cue from biology, technology is starting to adapt. As we attempt to solve problems that defy clean definition or problems that, by their nature, continue to evolve, we are designing more flexible technology. And we are finding that it tends to exhibit emergent behavior.

 

We used to think that
if we knew one, we knew two,
because one and one are two.
We are finding that we must learn
a great deal more about “and.”

— Sir Arthur Eddington

 

Emergent behavior will appear in technology more frequently as we make it more complex and flexible. The Internet is a good example. Designed under U.S. government funding to withstand the damage that war could inflict, it adapts and evolves to environments around the world. Attempts to block pornography, the copying of music, or politically unacceptable information are all treated by the Internet as damage to the network. By its very design, it attempts to “heal” such damage by routing around it. Its behavior may be difficult to predict, but it is flexible and robust.
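A toy model suggests how such healing works. Real routing protocols are far more elaborate, but the Python sketch below (with an invented four-node network) captures the principle: a route is recomputed from whatever links survive, so cutting one link simply produces a different path.

    from collections import deque

    # A toy network: each node lists the nodes it is directly linked to.
    links = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}

    def route(start, goal):
        # Breadth-first search for one path of nodes from start to goal.
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in links[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None   # no route survives

    print(route("A", "D"))    # one path, e.g. ['A', 'B', 'D']
    links["A"].discard("B")   # "damage": the A-B link goes down
    links["B"].discard("A")
    print(route("A", "D"))    # routed around the damage: ['A', 'C', 'D']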

 

Some Systems Work Even When Broken

Complex systems exhibiting emergent behavior are often tolerant of small failures. Stepping on a few ants does not cause the colony to fail. By contrast, removing a few lines of software could easily cause a program to crash. Why are ant colonies so resilient?  If the answer lies in their organization, can we copy it to make our technology resilient, too?  The secrets behind resilient systems could be of value in many fields, as Albert-László Barabási states in his book Linked: The New Science of Networks:

“Robustness is a major concern for biologists, who want to understand how a cell survives and functions under extreme conditions and frequent internal errors. It concerns social scientists and economists addressing the stability of human organizations in the face of famine, war, and changes in social and economic policy. It is a serious issue for ecologists and environmental scientists, motivating ambitious worldwide projects to preserve the sustainability of an ecosystem threatened by the disruptive effects of industrial development. Achieving robustness is the ultimate goal for specialists in increasingly interdependent communications systems, which must maintain a high degree of readiness despite inevitable malfunctions of their components.”

Networks with many interconnections, allowing feedback in a variety of ways, can form robust systems that adapt. Coming from a different angle, this begins to describe the very same systems that exhibit emergent behavior, focusing on our third ingredient (parts affect each other) and our fourth (effects can loop back to their source).

Biology has evolved a pattern of organization that may become increasingly common in future technology, as we seek to replace our specialized, efficient, brittle systems with those that can adapt to damage and change. Some have called the 20th century that of the computer and the 21st that of biology. Computers are not going away in the 21st century, but they may start emulating the patterns we are discovering in biology.

_________________________

 

Already the following views are widespread:
thinking is a type of computation, DNA is software,
evolution is an algorithmic process. If we keep going
we will quietly arrive at the notion that all materials
and all processes are actually forms of computation.
Our final destination is a view that the atoms
of the universe are fundamentally intangible bits.

— Kevin Kelly

 

Does technology work like Nature (e.g. an ant colony)?  Or, vice versa, does Nature work like technology (e.g. the computer)?  Kevin Kelly, founding editor of Wired Magazine, suggests this second approach, putting thinking, DNA, and evolution in computer terms.

The models we use to understand our surroundings affect what we see and how we make decisions. But most people do not even have models for how technology works. For them it is simply a mystery, its complexity and detail the province of engineers, technicians, hobbyists, and those who delight in disassembling technology to figure out what makes it tick. We need those people to keep things running, but most of us do not need that level of detail in order to understand how technology works.

Recognizing that we do not allows us to evaluate the technology influencing many of our decisions. The truth is that simple and easily understood patterns are common to many technologies and will probably apply to future inventions as well. In this chapter we made a start, uncovering these patterns:

  • Energy showed us that any technology, no matter how advanced, will rely on some form of energy and may convert it into other forms. So, when trying to understand how technology works, we can look for a metaphorical power cord.
  • Technology can be distributed or centralized. Factors affecting which way it is organized include technological capability, cost, maintenance requirements, reliability of alternatives, and social concerns.
  • Control systems monitor and adjust temperature, antilock brakes, airplane wing surfaces, and other “real world” systems. Feedback and correction are critical to many technologies.
  • Information is the invisible component of technology. Software, crucial to every computer on earth, is no more than information. As material fabrication technologies advance, information in the form of algorithms and designs will become even more important.
  • Complex technologies are often built by repeating simple components many times and by concealing complexity within layers. This means that we do not need to understand an entire system, but can focus on just one layer. It also means that evaluating a complex technology could require relying on a different expert for each layer.
  • Emergent behavior warned us that understanding how the components of a system work does not necessarily tell us how the whole system works. As technological systems become more complex, we should be prepared for surprises and consider that in our evaluation.

Understanding how technology works illustrates a difference between competence and literacy. The engineers, technicians, and hobbyists possess, at least, competence. Those who grasp patterns spanning many technologies have an important component of literacy.

 
