DRAFT

10/17/2008

Emergence Explained: Entities and Functionality

Russ Abbott
Department of Computer Science, California State University, Los Angeles
and The Aerospace Corporation
[email protected]

Abstract. We apply the notions developed in the preceding paper ([1]) to discuss issues such as: the nature of entities, the fundamental importance of interactions between entities and their environment, the central and often ignored role (especially in computer science) of energy, and the aggregation of complexity.

1 Introduction

In [1] we characterized emergent phenomena as phenomena that may be described independently of their implementations (footnote 1). We distinguished between static emergence (emergence that is implemented by energy wells) and dynamic emergence (emergence that is implemented by energy flows). We argued that emergence (of both forms) produces objectively real phenomena (because they are distinguishable by their entropy and mass characteristics) but that interaction among emergent phenomena is epiphenomenal and can always be reduced to the fundamental forces of physics. Our focus in that paper was on the phenomenon of emergence itself. In this paper we explore the entities that arise as a consequence of the two types of emergence, focusing especially on dynamic emergence.

2 Material Entities

As human beings we seem naturally to think in terms of entities—things or objects. Yet the question of how one might characterize what should and should not be considered an entity remains philosophically unresolved. (See, for example, [Boyd], [Laylock], [Miller], [Rosen], [Varzi Fall ‘04].) We propose to define a material entity as any instance of emergence. What is fundamental to material entities is that one can identify the force or forces of nature that bind them together and that cause them to persist in a form that allows one to distinguish them from their environments—because of their distinguishable entropy and mass properties. Some material entities (such as an atom, a molecule, a pencil, a table, a solar system, or a galaxy) are instances of static emergence. These entities persist because they exist in energy wells. Biological entities (such as you and I) and social entities (such as a social club, a corporation, or a country) are instances of dynamic emergence. These entities persist as a result of energy flows.

Footnote 1: In [1] we credited Anderson with being one of the first prominent physicists to argue that new laws of nature, i.e., laws not derivable from physics, exist at various levels of complexity. While re-reading [Schrodinger] we found the following. “[L]iving matter, while not eluding the 'laws of physics' … is likely to involve 'other laws of physics,' hitherto unknown, which … will form just as integral a part of [the] science [of living matter] as the former.” As we pointed out in the earlier paper, there are indeed new laws (such as the theory of computability) which, while consistent with the laws of physics, are not reducible to them.


On the other hand, what might be considered conceptual (or Platonic) entities—such as numbers, mathematical sets (and other mathematical constructs), properties, relations, propositions, categories named by common nouns (such as the category of cats, but not individual cats), and ideas in general—are not (as far as we know) instances of emergence (footnote 2). Nor are intellectual products such as poems and novels, scientific papers, or computer programs (when considered as texts). Time instances (e.g., midnight December 31, 1999), durations (e.g., a minute), and segments (e.g., the 20th century) are also not instances of emergence. Neither are the comparable constructs with respect to space and distance. Since by definition every material entity is an instance of emergence, all material entities consist of matter and energy arranged to implement some independently describable abstraction. Since none of the preceding conceptual entities involve matter or energy, none of them satisfy our definition of a material entity.

Footnote 2: We simply do not understand how ideas as subjective experience come into being. When we learn how subjective experience is connected to the brain, we may find that ideas (or at least their physical realizations) are in fact instances of emergence and that one can identify the forces that hold ideas together. For now, though, we can’t say that an idea as such is an instance of emergence since we don’t know how ideas are implemented physically. Even if we did know how ideas are implemented in the mind, the physical instantiation of an idea would still not be the same thing as the referent of the idea. (My thinking of the number 2 is not the number 2—assuming there is such a thing as the number 2 as an abstraction.) So we maintain the position that concepts as such are not material entities, and we do not discuss them further.

2.1 Static material entities

Statically emergent material entities (static entities for short) are created when the fundamental forces of nature bind matter together. The nucleus of any atom (other than simple Hydrogen, whose nucleus consists of a single proton) is a static entity. It results from the application of the strong nuclear force, which binds the nucleons together in the nucleus. Similarly any atom (the nucleus along with the atom’s electrons) is also a static entity. An atom is a consequence of the electromagnetic force, which binds the atom’s electrons to its nucleus. Molecules are also bound together by the electromagnetic force. On a much larger scale, astronomical bodies, e.g., the earth, are bound together by gravity, as are solar systems and galaxies.

Static entities, like all instances of emergence, have properties which may be described independently of how they are constructed. As Weinberg [W] points out, “a diamond [may be described in terms of its hardness even though] it doesn't make sense to talk about the hardness … of individual ‘elementary’ particles.” The hardness of a diamond may be characterized and measured independently of how diamonds achieve that property—which, as Weinberg also points out, is a consequence of how diamonds are implemented, namely, their “carbon atoms … fit together neatly.”

A distinguishing feature of static entities (as with static emergence in general) is that the mass of any static entity is strictly smaller than the sum of the masses of its components. This may be seen most clearly in nuclear fission and fusion, in which one starts and ends with the same number of atomic components—electrons, protons, and neutrons—but which nevertheless converts mass into energy. This raises the obvious question: which mass was converted to energy? The answer has to do with the strong nuclear force, which implements what is called the “binding energy” of nucleons within a nucleus. For example, a helium nucleus (also known as an alpha particle, two protons and two neutrons bound together), which is one of the products of hydrogen fusion, has less mass than the sum of the masses of the protons and neutrons that make it up when considered separately (footnote 3).


The missing mass is released as energy. The same entity-mass relationship holds for all static entities. An atom or molecule has less mass (by a negligible but real amount) than the sum of the masses of its components taken separately. The solar system has less mass (by a negligible but real amount) than the mass of the sun and the planets taken separately. Thus the entropy of these entities is lower than the entropy of the components as an unorganized collection. In other words, a static entity is distinguishable by the fact that it has lower mass and lower entropy than its components taken separately. Static entities exist in what is often called an energy well; they require energy to pull their components apart. Static entities are also at an energy equilibrium.

Manufactured or constructed artifacts also exhibit static emergence. The binding force that holds manufactured static entities together is typically the electromagnetic force, which we exploit when we use nails, glue, screws, etc. to bind static entities together into new static entities. A house, for example, has the statically emergent property number-of-bedrooms, which is a property of (a way of describing) the house from a perspective that sees it as an entity. A house implements the property of having a certain number of bedrooms by the way in which it is constructed from its components.

A static entity consists of a fixed collection of components over which it supervenes. By specifying the states and conditions of its components, one fixes the properties of the entity. But static entities that undergo repair and maintenance, such as houses, no longer consist of a fixed collection of component elements, thereby raising the question of whether such entities really do supervene over their components. We resolve this issue when we discuss Theseus’ ship.

2.2 Dynamic entities

Dynamic entities are instances of dynamic emergence. Dynamic emergence occurs when energy flows through and modifies an open system in a persistent way. As is the case with all emergence, dynamic emergence results in the organization of matter in a way that differs from how it would be organized without the energy flowing through it. That is, dynamic entities have properties as entities that may be described independently of how those properties are implemented. Dynamic entities include biological and social entities—and, as we discuss below, hurricanes. For a dynamic entity, its very existence—or at least its persistence as an entity—depends on a flow of energy.

Most dynamic entities appear to be built upon a skeleton of one or more static entities. The bodies of most biological organisms, for example, continue to exist as static entities even after the organism ceases to exist as a dynamic entity, i.e., after the organism dies. When those bodies are part of a dynamic entity, however, the dynamic entity includes processes to repair them. We discuss this phenomenon also when we examine the puzzle of Theseus’ ship.

Footnote 3: It turns out that iron nuclei “lack” the most mass. Energy from fusion is possible for elements lighter than iron; energy from fission is possible for elements heavier than iron.
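To make the binding-energy bookkeeping of Section 2.1 concrete, here is the standard back-of-the-envelope mass-defect calculation for the helium-4 nucleus (the alpha particle). The figures are commonly quoted approximate values, added here as an illustrative sketch rather than taken from the paper:

\Delta m = 2m_p + 2m_n - m_\alpha \approx 2(1.00728\,\mathrm{u}) + 2(1.00866\,\mathrm{u}) - 4.00151\,\mathrm{u} \approx 0.0304\,\mathrm{u}

E = \Delta m\,c^2 \approx 0.0304 \times 931.5\,\mathrm{MeV} \approx 28.3\,\mathrm{MeV}

Roughly 0.75 percent of the constituents’ rest mass is given up as binding energy; that is the “missing mass” referred to above.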


2.3 Dissipative structures

Somewhat intermediate between static and dynamic entities are what Prigogine ([Prigogine] and elsewhere) calls dissipative structures. A dissipative structure is an organized pattern of activity that occurs when an external source of energy is introduced into a constrained environment. A dissipative structure is so named because in maintaining its pattern of activity it dissipates the energy supplied to it. Typically, a static entity becomes dissipative when a stream of energy is pumped into it in such a way that the energy (a) disturbs the internal structure of the entity but (b) dissipates before the static entity’s structure is destroyed. Musical instruments offer a nice range of examples. Some are very simple (direct a stream of air over the mouth of a soda bottle); others are more acoustically complex (a violin). All are static entities that emit sounds when energy is pumped into them. Another commonly cited example is the collection of Rayleigh-Bénard convection patterns that form in a confined liquid when one surface is heated and the opposite surface is kept cool. (See Figure 1.) For a much larger example, consider how water is distributed over the earth. Water is transported from place to place via processes that include evaporation, atmospheric weather system movements, precipitation, groundwater flows, ocean current flows, etc. Taken as a whole, these cycles may be understood as a dissipative structure which is shaped by gravity and the earth’s fixed geographic structure and driven primarily by solar energy, which is pumped into the earth’s atmosphere.

Our notion of a dissipative entity is broad enough to include virtually any energy-consuming device. Consider a digital clock. It converts an inflow of energy into an ongoing series of structured activities—resulting in the display of the time. Does a digital clock qualify as a dissipative entity? One may argue that since the design of a digital clock limits the ways in which it can respond to the energy inflow it receives, it should not be characterized as a dissipative entity. But any static entity has only a limited number of ways in which it can respond to an inflow of energy. We suggest that it would be virtually impossible to formalize a principled distinction between Rayleigh-Bénard convection cycles and the structured activities within a digital clock (footnotes 4 and 5).

Just as emergent phenomena are typically limited to feasibility ranges, dissipative entities also operate in distinct ways within various energy intensity ranges. Blow too gently into a recorder (the musical instrument) and nothing happens. Force too much air through it, and the recorder will break. Within the range in which sounds are produced, different intensities will produce either the intended sounds or unintended squeaks. Thus dissipative entities exhibit phases and phase transitions that depend on the intensity of the energy they encounter. The primary concern about global warming, for example, is not that the temperature will rise by a degree or two—although the melting of the ice caps is potentially destructive—but the possibility that a phase transition will occur and that the overall global climate structure, including atmospheric and oceanic currents, will change disastrously.

Footnote 4: Another common example of a dissipative structure is the Belousov-Zhabotinsky (BZ) reaction, which in some ways is a chemical clock. We design digital clocks to tell time. We didn’t design BZ reactions to tell time. Yet in some sense they both do. That one surprises us and the other doesn’t shouldn’t mislead us into putting them into different categories of phenomena.

Footnote 5: In all our examples, the form in which energy is delivered also matters. An electric current will produce different effects from a thermal energy source when introduced into a digital clock and a Rayleigh-Bénard device.


When energy is flowing through it, a dissipative entity is by definition far from equilibrium. So a dissipative entity is a static entity that is maintained in a far-from-equilibrium state. The sorts of dissipative structures we have been discussing are not fully qualified dynamic entities, however, because they do not include mechanisms to repair their static structures. Their static structures are maintained by forces other than those produced by the energy that flows through them. In particular, dissipative structures do not cycle material through themselves as all fully qualified dynamic entities do. As we will see below, one consequence of this fact is that dynamic entities do not easily supervene over their material components. Static entities and dissipative structures do.

2.4 Hurricanes as dynamic entities

Most dynamic entities are biological or social, but there are some naturally occurring dynamic entities that are neither. Probably the best known are hurricanes. A hurricane operates as a heat engine in which condensation—which replaces combustion as the source of heat—occurs in the upper atmosphere. A hurricane involves a greater than normal pressure differential between the ocean surface and the upper atmosphere. That pressure differential causes warm moist surface air to rise. When the moisture-laden air reaches the upper atmosphere, which is cooler, it condenses, releasing heat. The heat warms the air and reduces the pressure, thereby maintaining the pressure differential (footnote 6). (See Figure 3.)

Hurricanes are objectively recognizable as entities. They have reduced entropy—hurricanes are quite well organized—and because of the energy flowing through them, they have more mass than their physical components (the air and water molecules making them up) would have on their own. Hurricanes illustrate the case of a dynamic entity with no static structure. When a hurricane loses its external source of energy—typically by moving over land—the matter of which it’s composed is no longer bound together into an organized structure. The hurricane’s entropy rises and its excess mass dissipates until it no longer exists as an entity.

2.5 Petty reductionism fails for dynamic entities—for all practical purposes

Petty reductionism is another way of saying that an entity supervenes over the matter of which it is composed: fixing the properties of the matter of which an entity is composed fixes the properties of the entity (footnote 7). Hurricanes illustrate a difficulty with supervenience and petty reductionism for dynamic entities. The problem is that from moment to moment new matter is incorporated into a hurricane and matter currently in a hurricane leaves it. Define what we might call the hurricane’s supervenience base as the smallest collection of matter over which a hurricane supervenes. Since matter cycles continually through a hurricane, a hurricane’s supervenience base consists of the entire collection of matter that is part of a hurricane over its lifetime.

Footnote 6: A characterization of a hurricane as a vertical heat engine may be found in Wikipedia. (URL as of 9/1/2005: http://en.wikipedia.org/wiki/Hurricane.) The preceding hurricane description was paraphrased from NASA, “Hurricanes: The Greatest Storms on Earth,” (URL as of 3/2005 http://earthobservatory.nasa.gov/Library/Hurricanes/.)

Footnote 7: Recall that a set of higher level predicates is said to supervene over a set of lower level predicates if a configuration of truth values for the lower level predicates determines the truth values for the higher level predicates. We are using the term supervene loosely to say that an entity supervenes over its components.
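Footnote 7’s definition can be written compactly. The following formalization is our own sketch, not notation from the paper: a set H of higher-level predicates supervenes over a set L of lower-level predicates when

\forall s, s' \; \big[ (\forall p \in L : \; p(s) \leftrightarrow p(s')) \;\Rightarrow\; (\forall q \in H : \; q(s) \leftrightarrow q(s')) \big]

that is, any two situations that agree on all of the lower-level predicates must agree on all of the higher-level ones.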


It would seem that a hurricane’s supervenience base must be significantly larger than the amount of matter that constitutes a hurricane at any moment. Because a hurricane’s supervenience base is so much larger than the matter that makes it up at any moment, the fact that a hurricane supervenes over its supervenience base is not very useful. Other than tracking all the matter in a hurricane’s supervenience base, there is no easy reducibility equation that maps the properties of a hurricane’s supervenience base onto properties of the hurricane itself. Furthermore, the longer a hurricane persists, the larger its supervenience base—even if the hurricane itself maintains approximately the same size during its lifetime. Much of the matter in a hurricane’s supervenience base is likely also to be included in the supervenience bases of other hurricanes. Like Weinberg’s example of quarks being composed (at least momentarily) of protons, hurricanes are at least partially composed of each other. Thus just as Weinberg gave up on the usefulness of petty reductionism in particle physics, we must also give up on the usefulness of petty reductionism and supervenience for dynamic entities as well.

2.6 Minimal dynamic entities

In [Kauffman] Kauffman asks what the basic characteristics are of what he calls autonomous agents. He suggests that the ability to perform a thermodynamic (Carnot engine) work cycle is fundamental. In what may turn out to be the same answer we suggest looking for the minimal biological organism that perpetuates itself by consuming energy. Bacteria seem to be too complex. Viruses and prions don’t consume energy. Hurricanes aren’t biological. Is there anything in between? Such a minimal entity may help us understand the yet-to-be-discovered transition from the inanimate to the animate. Since self-perpetuation does not imply reproduction (as hurricanes illustrate), simple self-perpetuating organisms may not be able to reproduce. That means that if they are to exist, it must be relatively easy for them to come into being directly from inorganic materials. Self-perpetuating organisms may not include any record—like DNA—of their design (as hurricanes again illustrate). One wouldn’t expect to see evolution among such organisms—at least not evolution that depends on modifications of such design descriptions.

2.7 Biological and social dynamic entities

Biological and social entities also depend on external energy sources. Photosynthesizing plants depend on sunlight. Other biological entities depend on food whose energy resources were almost always derived originally from the sun. Social entities may be organized along a number of different lines. In modern economies, money is a proxy for energy. Economic entities persist only so long as the amount of money they take in exceeds the amount of money they expend. Political entities depend on the energy contributed—whether voluntarily, through taxes, or through conscription—by their subjects. Smaller scale social entities such as families, clubs, etc., depend on the contributions of their members. The contributions may be voluntary, or they may result from implicit (social norms) or explicit coercion. No matter the immediate source of the energy or the nature of the components, biological and social entities follow the same pattern we saw with hurricanes.




• They have lower entropy (greater order) than their components would have on their own.



• They depend on external sources of energy to stay in existence. Because of the energy flowing through them, they have more mass than their components would on their own.



• The material that makes them up changes with time. Their supervenience bases are generally much larger than the material of which they are composed at any moment. The longer a dynamic entity persists, the greater the difference. Petty reductionism either fails entirely, or it becomes a historical narrative. One can tell the story of a country, for example, as a history that depends in part on who its citizens are at various times. One would have a difficult time constructing an equation that maps a country’s supervenience base (which includes its citizens over all time) to its state at any moment unless that mapping were in effect a historical record.



• Most biological and social entities have other dynamic entities as components. These component entities have “divided loyalties” in some sense—to themselves and to other dynamic entities of which they are also components.

Even though dynamic entities persist in time, and even though the properties of dynamic entities are a function of the properties of their components at any moment, since the components of which a dynamic entity is composed change from time to time, there is no direct way to map the properties of the components a dynamic entity will have over its lifetime to the moment-to-moment properties of the entity itself except as a narrative, i.e., a story which describes which elements happen to become incorporated into the dynamic entity at various moments during its lifetime.

All entities are subject to the effect of interactions with elements they encounter in their environments. Dynamic entities are doubly vulnerable. They are also subject to having their components replaced by other components. To persist they must have defenses against infiltration by elements which, once incorporated into their internal mechanisms, may lead to their weakening or destruction. Social entities are more vulnerable still. Some of their components (people) are simultaneously components of other social entities—often resulting in divided loyalties.

2.8 Theseus’ ship

The notion of a social dynamic entity can help resolve the paradox of Theseus’ ship, a mythical ship that was maintained (repaired, repainted, etc.) in the harbor at Athens for so long that all of its original material was replaced. The puzzle arises when one asks whether the ship at some time t is “the same ship” as it was when first docked—or at any other time. This becomes a puzzle when one thinks of Theseus’ ship as identical to the material of which it is composed at any moment, i.e., that the ship supervenes over its components. Since any modification to the ship, e.g., new paint, will change the material of which the ship is composed, it would seem that the repainted ship is not “the same ship” as it was before it was repainted. The repainted ship consists of a different set of components. If supervenience holds, the repainted ship cannot be the same ship as it was before being repainted since, by supervenience, its properties are fixed by the properties of its components, and its components are different from what they were before.


This cycling of material through an entity wasn’t a problem when we were discussing hurricanes or social or biological entities—we had already given up on the usefulness of petty reductionism and supervenience for dynamic entities. In those cases we thought of the entity as including not only its momentary physical components but also the energy that was flowing through it, along with means to slough off old material and to incorporate new material into its structure. To apply the same perspective to Theseus’ ship, think of the physical ship along with the maintenance process as a social entity—call it the Theseus ship maintenance entity. That social entity, like all social entities, is powered by an external energy source. (Since the maintenance of Theseus’ ship is a governmental or societal function, the energy source is either voluntary contribution, conscription, or taxation.) The Theseus ship maintenance entity uses energy from its energy source to do the maintenance work on the ship. Just as the material that makes up a hurricane changes from time to time and the people who are employed by a business change from time to time, the physical ship also changes from time to time. But like a hurricane and a company, the ship maintenance entity persists over time.

2.9 Thermodynamic computing: nihil ex nihilo

In Computer Science we assume that one can specify a Turing Machine, a Finite State Automaton, a Cellular Automaton, or a piece of software, and it will do its thing—for free. Turing machines run for free. Cellular Automata run for free. The Game of Life runs for free. Software in general runs for free. Even agents in agent-based models run for free (footnote 8). Although that may be a useful abstraction, we should recognize that we are leaving out something important. In the real world dynamic material entities require energy. To run real software in the real world requires a real computer, which uses real energy. The problem is that the real energy that drives software is not visible to the software itself. Software does not have to pay its own energy bill. A theory of thermodynamic computation is needed to bring together the notions of energy and computing. Until we find a way to integrate the real energy cost of running software into the software itself, we are unlikely to build a successful model of artificial life.
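As a toy illustration (our own sketch, not a mechanism proposed in the paper), here is roughly what an agent-based step rule looks like when persistence itself carries an energy cost. The class names, the per-tick cost, and the harvest rule are all illustrative assumptions:

    import random

    class Environment:
        def harvest(self):
            # Stochastic energy supply; a richer model would let this deplete.
            return random.uniform(0.0, 2.0)

    class Agent:
        def __init__(self, energy=10.0):
            self.energy = energy   # internal energy store
            self.alive = True

        def step(self, environment):
            # One tick: pay a metabolic cost, then try to acquire energy.
            if not self.alive:
                return
            self.energy -= 1.0                     # persisting is not free
            self.energy += environment.harvest()   # energy drawn from outside
            if self.energy <= 0:                   # no energy flow, no dynamic entity
                self.alive = False

    env = Environment()
    agents = [Agent() for _ in range(100)]
    for tick in range(1000):
        for agent in agents:
            agent.step(env)
    print(sum(agent.alive for agent in agents), "agents still persist")

Even this sketch only charges agents an artificial price for persistence, which is the situation footnote 8 describes; the energy that runs the simulation itself remains invisible to it, which is exactly the gap this section points to.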

3 Entities and functionality

Once—but not before—one has established the notion of an entity, it becomes possible to talk about interaction. This is a critical step. Until we establish that entities can be differentiated from their environments—and hence also from each other—it makes no sense to talk about an interaction either between an entity and its environment or among entities. But once one has established the possibility of distinguishable entities, it then makes sense to talk about how they interact.

Footnote 8: Many agent-based and artificial life models acknowledge the importance of energy by imposing an artificial price for persistence, but we are not aware of any in which the cost of persistence is fully integrated into the functioning of the entity.


3.1 Interaction among/between entities

In the earlier paper, we argued that all interactions are epiphenomenal. Is that consistent with the position that entities can interact at all? If all interactions are simply combinations of primitive forces, what does it mean to say that an entity as such interacts at all? In our earlier example, we talked about the interaction between patterns in the Game of Life. Similarly, one can talk about the interaction between Newtonian objects, e.g., two (ubiquitous) Newtonian billiard balls. When two Newtonian objects interact, the entities that exist after the interaction are almost never identical to the entities that existed prior to the interaction. If one looks carefully enough, the post-interaction entities will almost certainly have more or fewer atoms or molecules than the pre-interaction entities. Yet the post-interaction entities will have enough in common with the pre-interaction entities that we are comfortable identifying them with each other. This is true of Newtonian entities, and it is true of almost all interacting entities, static or dynamic.

Like interactions among static entities, interactions among dynamic entities also depend on the entities involved. Two obvious examples are interactions among people. Sex between biological organisms (including people) is successful only when the participants are the appropriate organisms. This is the case at all levels—ranging from the physical (in which various physical parts must fit together) to the cell level, at which the gametes must combine properly. Sex would not occur if only the components—e.g., organs or molecules, etc.—of the sexual partners were left alone in a darkened room no matter how sweetly the romantic music was playing. Similarly, symbolic interaction works only between entities that are capable of operating at the symbolic level. Science is not in a position to explain how people operate at a conceptual/symbolic level. Yet it is clear that we do. It is also clear that as symbolic entities we interact with each other symbolically through language. We each are capable (to some reasonably good approximation) of communicating to each other what ideas are occurring in our consciousness.

3.2 The reductionist blind spot

As suggested by our definition of emergence, there are two ways to describe an entity.

1. By giving an “external” description: by describing its functional/phenomenological properties (footnote 9).

2. By giving an “internal” description: by describing its structure and internal operation, i.e., its implementation or how it works.

The traditional scientific agenda—i.e., the agenda of petty reductionism (footnote 10)—has been (a) to observe nature, (b) to identify likely categories of entities, i.e., aspects of nature that can be described independently of how they are implemented, and (c) to seek to explain the observed functionality/phenomenology of those entities by understanding their structure and internal operation. In some cases we find that task (c) leads us to conclude that what we had postulated as a type of entity was not one after all, perhaps because we found that different instances were implemented completely differently.

Footnote 9: As we pointed out in [1], a functional description developed in anticipation of creating an entity to match it is often known as a requirements specification, an abstract specification, or a functional specification.

Footnote 10: Recall Weinberg’s characterization of petty reductionism as the “doctrine that things behave the way they do because of the properties of their constituents.”


Once this explanatory task is accomplished, the reductionist tradition has been to put aside an entity’s functional/phenomenological description and replace it with (that is, to reduce it to) the explanation of how that functionality/phenomenology is brought about. After all, once one can explain in terms of lower-level mechanisms how some behavior or appearance comes about, one can presumably reproduce that behavior or appearance by means of those lower-level mechanisms. Of course one then has the task of explaining the lower-level mechanisms in terms of still lower-level mechanisms, etc. But that’s what science is about, peeling nature’s onion until her fundamental mechanisms are revealed.

In the first paper, we argued that contrary to the reductionist tradition, an understanding of how functionality is brought about does not eliminate the significance of that functionality. Our example was the implementation of a Turing Machine on a Game of Life platform. A reductive analysis of a Game-of-Life Turing Machine may help us understand how the Turing Machine is implemented, but it doesn’t help us understand the functionality that the Turing Machine provides.

To illustrate this in terms of natural science, suppose that what will eventually be determined to be Game of Life Turing Machines (somehow) occurred in nature (footnote 11). We will assume that when these creatures are first discovered in the wild, the scientific community doesn’t know how they function. After we observe them for a while, we find out that they are capable of transforming inputs into outputs in reliable ways. Specialties within the fields of biology, zoology, and ecology develop to study these creatures. Scientists study the functions they compute, how those functions relate to each other, and how those functions relate to the environment and the ecological niches within which the creatures are known to exist. In fairly short order, it is determined that these creatures compute exactly the Turing computable functions. They are thereafter known as bioTMs.

The bioTM biologists and microbiologists study how bioTMs function internally. They examine the bodies of dead bioTMs, and using the newest micro-surgical instruments, they study the internal processes of live bioTMs. What they find are strange patterns that pulse through the bodies of bioTMs. Let’s also suppose that there is a separate science that studies Game of Life patterns. Scientists in that field build catalogs of patterns and study how those patterns interact with each other. Some interactions create new patterns. Others destroy patterns. Etc. These interactions are called pattern reactions. This discipline is comparable to what we know in our world as chemistry.

It is soon discovered that bioTMs are built upon Game of Life patterns. A new field of bioTM biochemistry develops. Scientists in this field study those Game of Life patterns that are found to be important for bioTMs. In a major breakthrough, scientists discover that all bioTMs are universal and that each has an encoding of its operation as quintuples—which themselves are encoded as Game of Life patterns.

Footnote 11: As a conceptual construct a Turing Machine is not an energy-based entity. There are no forces—either static or dynamic—that hold Turing Machines together. Furthermore, Turing Machines have no means of reproduction; so it is even more unrealistic to suppose that they simply “occur” in nature. But as an example, this nicely illustrates our point about reductionism.


This is comparable to our discovery of DNA. Once the biologists understand that Game of Life Turing Machines are all universal, they, with the help of some of their mathematical colleagues, are able to work out computability theory, and they are able to explain many of the features of Turing Machines that they had previously only cataloged.

Let’s now suppose that there is yet another science which investigates the fundamental nature of the Game of Life. These are the Game of Life physicists. Of course, since the basic rules are not all that difficult, Game of Life physics soon reaches a dead end. Once the Game of Life physicists had discovered the Game of Life rules, the fundamental rules underlying how nature works, there was no more to find out. But once the Game of Life physicists had worked out the Game of Life theory of everything, the Game of Life chemists were able to work out how Game of Life patterns come about as a result of the Game of Life rules, and the Game of Life Turing Machine biochemists were able to figure out how Turing Machine functionality may be explained strictly in terms of Game of Life rules.

Now that we know everything there is to know about Game of Life Turing Machines, does it make sense to discard our understanding of Turing Machines as transducers that transform input to output according to computability theory? The strict reductionists claim that it does. After all, once one knows the fundamental facts about the Game of Life, everything else is just a matter of historical accidents. Given an initial configuration of what the chemists call patterns, it is only because of the Game of Life rules that the Game of Life pattern reactions occur. Similarly, it is only because of an extremely unlikely sequence of circumstances that Game of Life patterns happen to be configured in such a way that Game of Life Turing Machines come into existence. But given that unlikely historical event, the entire operation of the resulting Game of Life Turing Machines is completely explained by the Game of Life rules. Because of the work of the Game of Life physicists and chemists we now know that Game of Life Turing Machines are nothing but Game of Life cells going on and off according to the Game of Life rules based on the historical accident of an unusual initial condition.

What’s wrong with this point of view? This perspective throws away everything the Game of Life biologists, mathematicians, and ecologists learned about Game of Life Turing Machines. The functionality of Turing Machines as transducers is important on its own. We still want to know which functions are computable, how those functions interact with other functions, how they interact with the environment in which they were found, etc. These are independent facts about functions computed by Game of Life Turing Machines that cannot be deduced from either a study of Game of Life patterns or Game of Life rules. We use the term the reductionist blind spot to refer to the doctrine that once one understands how higher level functionality can be implemented by lower levels of functionality, the higher level is nothing more than a derivable consequence of the lower level.

Significantly, the reductionist tradition does not dismiss all descriptions given in terms of functionality. After all, what does reductionism do when it reaches “the bottom,” when nature’s onion is completely peeled?


One version of the current “bottom” is the standard model of particle physics, which consists of various classes of particles and the four fundamental forces. This bottom level is necessarily described functionally. It can’t be described in terms of implementing mechanisms—or it wouldn’t be the bottom level. The reductionist perspective reduces all higher level functionality to primitive forces plus mass and extension. This is not in dispute. As we said in [1], all higher level functionality is indeed epiphenomenal with respect to the primitive forces.

The difficulty arises because functionality must be described in terms of the interaction of an entity with its environment. The fundamental forces, for example, are described in terms of fields that extend beyond the entity. This is quite a different form of description from a structural and operational description, which is always given in terms of component elements. When higher levels of functionality are described, we tend to ignore the fact that those descriptions are also given in terms of a relationship to an environment. What the reductionist blind spot fails to see is that when we replace a description of how an entity interacts with its environment with a description of how an entity operates, we lose track of how the entity interacts with its environment.

The functionality of a Turing Machine is defined with respect to its tape, which is its environment. This is particularly easy to see with (traditional) Turing Machines when formulated in terms that distinguish the machine itself from its environment. The functionality of a Turing machine, the function which it computes, is defined as its transformation of an input, which it finds in its environment, into an output, which it leaves in its environment. What other formulation is possible? If there were no environment, how would the input be provided and the output retrieved? It is not relevant whether or not the computational tape is considered part of the Turing Machine or part of the environment. All that matters is that the input is initially found in the environment and the output is returned to the environment. A Turing Machine computes a function after all. The same story holds for energy-based entities. Higher levels of functionality, the interaction of the entity with its environment, are important on their own. An entity’s higher level functionality is more than just the internal mechanism that brings it about. As higher and more sophisticated levels of functionality are created—or found in nature—it is important to answer questions such as: how are these higher levels of functionality used and how do they interact with each other and with their environment? Answering these questions fills in the reductionist blind spot.

The importance of describing entities from both perspectives was captured nicely by Eric Jakobsson (footnote 12) when he characterized biology as being “concerned equally with mechanism and function.” Below we extend Jakobsson’s perspective beyond biology, but we agree with his insight that mechanism and function are both significant. The two most important questions to be answered about higher levels of functionality are (a) how do they come about and (b) how do they persist? Entities that are a result of static emergence come about as a result of clumps of matter finding their way into energy wells. They persist because energy is required to pull them out of their energy wells. Entities that are a result of dynamic emergence present a much more complex story—the short version of which is that dynamic entities persist as long as their functionality enables them to acquire from the environment the energy they need to maintain themselves.

Footnote 12: At the Understanding Complex Systems Symposium, University of Illinois, Champaign-Urbana, Ill., May 2006.


The theory of evolution answers the questions about how new functionality comes into existence and how it persists with respect to biological functionality. We explore these questions with respect to dynamic entities more generally in the rest of this paper. The whole plus its environment is more than the sum of the parts plus their environment.

3.3 Functionality and the environment

Are Turing Machines a special case? Is the functionality of other entities also of interest? To examine that question we first clarify what we mean by functionality. For us, functionality will always refer to how an entity interacts with its environment. A traditional notion of emergence, e.g., [Stanford], is that “emergent entities (properties or substances) ‘arise’ out of more fundamental entities and yet are ‘novel’ or ‘irreducible’ with respect to them.” Or, from [Dict of Philosophy of Mind Ontario, Mandik]: “Properties of a complex physical system are emergent just in case they are neither (i) properties had by any parts of the system taken in isolation nor (ii) resultant of a mere summation of properties of parts of the system.” (But he goes on to dismiss properties which are explainable as a result of the interaction of the components as not emergent. So nothing is emergent in this view.)

What does it mean for there to be a new property? A property is an external description of something. How can there be an external description that is not defined in terms of lower level constructs? The only primitive properties (external properties, which are not described by internal constructs) are forces, along with mass, size, and time. Entropy/order may also be primitive. So how can there be new properties? A new property makes sense only with respect to interaction with entities in the environment: catching a mouse, say, or reflecting a glider, or exposing an API. But in what terms is such an API expressed?

Functionality is an extension of the notion of force. A force is the functionality of primitive elements that exert forces. Functionality is that same notion, how something acts in the world, applied to higher level entities. Pheromones and ant foraging, mouse traps, termite nest building: all require interaction with other entities on the same level as the interacting entity. Functionalism too, as its name implies, has an environmental focus. As Fodor points out, “[R]eferences to can openers, mousetraps, camshafts, calculators and the like bestrew the pages of functionalist philosophy.” To make a better mousetrap is to devise a new kind of mechanism whose behavior is reliable with respect to the high-level regularity “live mouse in, dead mouse out.” For a better mousetrap to be better, the environment must be reasonably stable; mice must remain more or less the same size.


A traditional formulation of emergence, again, is that it refers to macro-level properties which arise from micro-level elements but are not reducible to them: the construct has a property that its component elements don’t have. Similarly, the functionality of any entity is defined with respect to its environment. As we will see later, the interaction of an entity with its environment is particularly important for dynamic entities because dynamic entities depend on their environment for the energy that enables them to persist.

More generally, consider the following from Weinberg. “Grand reductionism is … the view that all of nature is the way it is (with certain qualifications about initial conditions and historical accidents) because of simple universal laws, to which all other scientific laws may in some sense be reduced.” And this. “[A]part from historical accidents that by definition cannot be explained, the [human] nervous system [has] evolved to what [it is] entirely because of the principles of macroscopic physics and chemistry, which in turn are what they are entirely because of the principles of the standard model of elementary particles.”

Even though Weinberg gives historical accidents, i.e., the environment, as important a role in shaping the world as he does the principles of physics, he does so grudgingly, seemingly attempting to dismiss them in a throw-away subordinate clause. This is misleading, especially given Weinberg’s example—evolution. Contrary to his implication, the human nervous system (and the designs of biological organisms in general) evolved as they did not primarily because of the principles of physics and chemistry but primarily because of the environment in which that evolution took place and in which those organisms must function.

We would extend Jakobsson’s statement beyond biology to include any science that studies the functional relationship between entities and their environment—and most sciences study those relationships. The study of solids, for example, is such a science—even though solids are static entities. What does hard mean other than resistance to (external) pressure? Without an environment with respect to which a solid is understood as relating, the term hard—and other functional properties of solids—have no meaning. Without reference to an environment, a diamond’s carbon atoms would still fit together neatly, but the functional consequences of that fact would be beyond our power to describe. This really is not foreign even to elementary particle physics. The Pauli exclusion principle, which prevents two fermions from occupying the same quantum state, formalizes a constraint the environment imposes on elementary particles (footnote 13). Thus although neither Weinberg nor Fodor focuses on this issue explicitly—in fact, they both tend to downplay it—they both apparently agree that the environment within which something exists is important.

In summary, a functional description is really a description of how an entity interacts with its environment. This is attractive because it ties both sorts of descriptions to the material world.

Footnote 13: This was pointed out to me by Eshel Ben-Jacob [private communication].


Emergence occurs when an interaction with an environment may be understood in terms of an implementation.

3.4 Turing Machines and functionality

A Turing machine is explicitly a description of functionality. The best way to understand a Turing machine is as a finite state entity interacting with its environment. In that sense a Turing machine is a minimal model of what it means to interact with an environment. The entity must be able to (a) read the environment, (b) change the environment, and (c) traverse the environment. Computability results hold when the environment is unbounded. The environment need be no more complex than a sequence of cells whose contents can be modified. But the real environment is multi-scalar and hence much more complex than a sequence of read/write cells. Also, as formulated, a Turing machine interacts with a closed environment. It is only the Turing machine itself that is able to change the environment. But environments need not be closed. In an open environment the entity is far from equilibrium with respect to information flow. This is in addition to the fact that it is far from equilibrium with respect to energy. PD is an example of how this makes a difference, but what more is there to say about it? The entity comes to know more and more about its environment, which enables it to find out more about where energy is available. Hurricanes can’t read or write their environment; we can. The real mechanism is that the entity is shaped by its environment. But the key is that the entity must be shapeable, i.e., it must be able to change state and direction depending on the environment. Hurricanes can’t be shaped in that way. So one minimal requirement is that the entity be an FSA. A second requirement is that the entity be mobile and able to change direction. A third is that the entity be able to modify the environment.

3.5 Downward entailment of ideas

In the previous paper we discussed downward entailment. An area in which downward entailment has a significant effect is downward entailment of ideas. We as human beings are able to develop ideas and then see the world in terms of them. Because we do that, we interact with the world based to a great extent on our ideas. This means that the level of abstraction within which ideas are generated and used determines to a great extent how we as human beings affect the world. Considering how significant our role in modifying the world has become, downward entailment from ideas is one of the most important examples of downward entailment.

3.6 What dynamic entities do vs. how dynamic entities work

In his talk at the 2006 Understanding Complex Systems Symposium, Eric Jakobsson made the point that biology must be equally concerned with what organisms do in their worlds and the mechanisms that allow them to do it. In our definitions, we have insisted on grounding our notions in terms of material objects. An epiphenomenon is a phenomenon of something. Emergence must be an implemented abstraction. But the abstraction side has until now been left abstract. What does it mean to specify some behavior? What does it mean to describe an entity independently of its implementation? At the most basic level, a function is specified in terms of (input/output) pairs. More generally, functionality is specified in terms of behavior. All of these specifications are given in terms of an environment.


Even input/output pairs are defined in terms of the transformation of some input (in the environment) to some output. That’s how it works on a Turing Machine. The environment is the tape; the input is found on the tape at the start of the computation; the output is found on the tape at the end of the computation. Thus for us emergence is defined in terms of the contrast between the effect of an entity on its environment and the internal mechanism that allows the entity to have the effect.

3.7 The whole is more than the sum of its parts

The whole plus the environment is more than the sum of its parts plus the environment. The difference is the functionality the whole can bring to the environment that the parts even as an aggregate cannot.

3.8 Stigmergy

Once one has autonomous entities (or agents) that persist in their environment, the ways in which complexity can develop grow explosively. Prior to agents, to get something new, one had to build it as a layer on top of some existing substrate. As we have seen, nature has found a number of amazing abstractions along with some often surprising ways to implement them. Nonetheless, this construction mechanism is relatively ponderous. Layered hierarchies of abstractions are powerful, but they are not what one might characterize as lightweight or responsive to change. Agents change all that.

Half a century ago, Pierre-Paul Grasse invented the term stigmergy [Grasse] to help describe how social insect societies function. The basic insight is that when the behavior of an entity depends to at least some extent on the state of its environment, it is possible to modify that entity’s behavior by changing the state of the environment. Grasse used the term “stigmergy” for this sort of indirect communication and control. This sort of interplay between agents and their environment often produces epiphenomenal effects that are useful to the agents. Often those effects may be understood in terms of formal abstractions. Sometimes it is easier to understand them less formally.

Two of the most widely cited examples of stigmergic interaction are ant foraging and bird flocking. In ant foraging, ants that have found a food source leave pheromone markers that other ants use to make their way to that food source. In bird flocking, each bird determines how it will move at least in part by noting the positions and velocities of its neighboring birds. The resulting epiphenomena are that food is gathered and flocks form. Presumably these epiphenomena could be formalized in terms of abstract effects that obeyed a formal set of rules—in the same way that the rules for gliders and Turing Machines can be abstracted away from their implementation by Game of Life rules. But often the effort required to generate such abstract theories doesn’t seem worth it—as long as the results are what one wants. Here are some additional examples of stigmergy.

• When buyers and sellers interact in a market, one gets market epiphenomena. Economics attempts to formalize how those interactions may be abstracted into theories.



• We often find that laws, rules, and regulations have both intended and unintended consequences. In this case the laws, rules, and regulations serve as the environment within which agents act. As the environment changes, so does the behavior of the agents.



• Both sides of the evo-devo (evolution-development) synthesis [Carroll] exhibit stigmergic emergence. On the “evo” side, species create environmental effects for each other, as do sexes within species.



• The “devo” side is even more stigmergic. Genes, the switches that control gene expression, and the proteins that genes produce when expressed all have environmental effects on each other.



• Interestingly enough, the existence of gene switches was discovered in the investigation of another stigmergic phenomenon. Certain bacteria generate an enzyme to digest lactose, but they do it only when lactose is present. How do the bacteria “know” when to generate the enzyme? It turns out to be simple. The gene for the enzyme exists in the bacteria, but its expression is normally blocked by a protein that is attached to the DNA sequence just before the enzyme gene. This is called a gene expression switch. When lactose is in the environment, it infuses into the body of the bacteria and binds to the protein that blocks the expression of the gene. This causes the protein to detach from the DNA, thereby “turning on” the gene and allowing it to be expressed. The lactose enzyme switch is a lovely illustration of stigmergic design. As we described the mechanism above, it seems that lactose itself turns on the switch that causes the lactose-digesting enzyme to be produced. If one were thinking about the design of such a system, one might imagine that the lactose had been designed so that it would bind to that switch. But of course, lactose wasn’t “designed” to do that. It existed prior to the switch. The bacteria evolved a switch that lactose would bind to. So the lactose must be understood as being part of the environment to which the bacteria adapted by evolving a switch to which lactose would bind. How clever; how simple; how stigmergic!



• Cellular automata operate stigmergically. Each cell serves as an environment for its neighbors. As we have seen, epiphenomena may include gliders and Turing Machines.



• Even the operation of the Turing Machine as an abstraction may be understood stigmergically. The head of a Turing Machine (the equivalent of an autonomous agent) consults the tape, which serves as its environment, to determine how to act. By writing on the tape, it leaves markers in its environment to which it may return—not unlike the way foraging ants leave pheromone markers in their environment. When the head returns to a marker, that marker helps the head determine how to act at that later time.



• In fact, one may understand all computations as being stigmergic with respect to a computer's instruction execution cycle. Consider the following familiar code fragment.

    temp := x;
    x := y;
    y := temp;

The epiphenomenal result is that x and y are exchanged. But this result is not a consequence of any one statement. It is an epiphenomenon of the three statements being executed in sequence by a computer's instruction execution cycle. Just as there is nothing in the rules of the Game of Life about gliders, there is nothing in a computer's instruction execution cycle about exchanging the values of x and y—or about any other algorithm that software implements. Those effects are all epiphenomenal.

• The instruction execution cycle itself is epiphenomenal over the flow of electrons through gates—which knows no more about the instruction execution cycle than the instruction execution cycle knows about algorithms.

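As promised above, the gene-expression switch can be reduced to a toy model. The sketch below is illustrative only: the class name, the two boolean state variables, and the all-or-nothing enzyme production are simplifying assumptions of ours, not biology from the text. Its point is the stigmergic control flow: the environment (lactose) operates the switch, and the cell's own rules never mention "digest lactose when it is available."

    # Toy model of a gene-expression switch (names and states are illustrative assumptions).
    # Environment state: lactose present or not. Internal state: repressor bound or not.

    class Bacterium:
        def __init__(self):
            self.repressor_bound = True   # by default the switch blocks the enzyme gene

        def update(self, lactose_present: bool) -> bool:
            """One step: the environment acts on the switch; return True if the enzyme is made."""
            # Lactose (if present) binds the repressor, which then detaches from the DNA.
            self.repressor_bound = not lactose_present
            # The gene is expressed only while the repressor is off the DNA.
            return not self.repressor_bound

    if __name__ == "__main__":
        cell = Bacterium()
        environment = [False, False, True, True, False]   # lactose appears, then disappears
        for lactose in environment:
            print(f"lactose={lactose!s:5}  enzyme produced={cell.update(lactose)}")

Nothing in the Bacterium class refers to digestion or to "knowing" anything; the useful behavior (producing the enzyme exactly when it can be used) is an epiphenomenon of a switch that the environment happens to operate.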
In all of the preceding examples it is relatively easy to identify the agent(s), the environment, and the resulting epiphenomena.

3.9 Design and evolution

It is not surprising that designs appear in nature. It is almost tautologous to say that those things whose designs work in the environments in which they find themselves will persist in those environments. This is a simpler (and more accurate) way of saying that it is the fit—entities with designs that fit their environment—that survive. But being fit means being able to extract sufficient energy to persist.

3.10 The accretion of complexity

An entity that suits its environment persists in that environment. But anything that persists in an environment, by that very fact, changes that environment for everything else. This phenomenon is commonly referred to as an ever-changing fitness landscape. What has been less widely noted in the complexity literature is that when something is added to an environment, it may enable something else to be added later—something that could not have existed in that environment prior to the earlier addition.

This is an extension of notions from ecology, biology, and the social sciences. A term for this phenomenon from the ecology literature is succession. (See, for example, [Trani].) Historically, succession has been taken to refer to a fairly rigid sequence of communities of species, generally leading to what is called a climax or (less dramatically) a steady state. Our notion is closer to that of bricolage, a notion that originated with the structuralism movement of the early 20th century [Wiener] and which is now used in both biology and the social sciences. Bricolage means the act or result of tinkering, improvising, or building something out of what is at hand. In genetics, bricolage refers to the evolutionary process as one that tinkers with an existing genome to produce something new. [Church]


John Seely Brown, former chief scientist of the Xerox Corporation and former director of the Xerox Palo Alto Research Center, captured the sense of bricolage in a recent talk:

[W]ith bricolage you appropriate something. That means you bring it into your space, you tinker with it, and you repurpose it and reposition it. When you repurpose something, it is yours.14

Ciborra [Ciborra] uses bricolage to characterize the way that organizations tailor their information systems to their changing needs through continual tinkering.

This notion of building one thing upon another applies to our framework in that anything that persists in an environment changes that environment for everything else. The Internet provides many interesting illustrations.

• Because the Internet exists at all, access to a very large pool of people is available. This enabled the development of websites such as eBay.



• The establishment of eBay as a persistent feature of the Internet environment enabled the development of enterprises whose only sales outlet was eBay. These are enterprises with neither brick-and-mortar nor web storefronts. The only place they sell is on eBay. This is a nice example of ecological succession.



• At the same time—and again because the Internet provides access to a very large number of people—other organizations were able to establish what are known as massively multi-player online games. Each of these games is a simulated world in which participants interact with the game environment and with each other. In most of these games, participants seek to acquire virtual game resources, such as magic swords. Often it takes a fair amount of time, effort, and skill to acquire such resources.



• The existence of all of these factors resulted, through a creative leap, in an eBay market in which players sold virtual game assets for real money. This market has become so large that there are now websites dedicated exclusively to trading in virtual game assets. [Wallace]



• BBC News reported [BBC] that there are companies that hire low-wage Mexican and Chinese teenagers to earn virtual assets, which are then sold in these markets. How long will it be before a full-fledged economy develops around these assets? There may be brokers and retailers who buy and sell these assets for their own accounts even though they do not intend to play the game. (Perhaps they already exist.) Someone may develop a service that tracks the prices of these assets. Perhaps futures and options markets will develop, along with the inevitable investment advisors.

The point is that once something fits well enough into its environment to persist, it adds itself to the environment for everything else. This creates additional possibilities and a world of ever-increasing complexity.

14 In passing, Brown claims that this is how most new technology develops: "[T]hat is the way we build almost all technology today, even though my lawyers don't want to hear about it. We borrow things; we tinker with them; we modify them; we join them; we build stuff."


In each of the examples mentioned above, one can identify what we have been calling an autonomous entity. In most cases these entities are self-perpetuating in that the amount of money they extract from the environment (by selling products, services, or advertising) is more than enough to pay for the resources needed to keep them in existence. In other cases, some Internet entities run on time and effort contributed by volunteers. But the effect is the same. As long as an entity is self-perpetuating, it becomes part of the environment and can serve as the basis for the development of additional entities.

3.11 Increasing complexity, increasing efficiency, and historical contingency

The phenomenon whereby new entities are built on top of existing entities is now so widespread and commonplace that it may seem gratuitous even to comment on it. But it is an important phenomenon, and one that has not received the attention it deserves. Easy though this phenomenon is to understand once one sees it, it is not trivial. After all, the second law of thermodynamics tells us that overall entropy increases and complexity diminishes. Yet we see complexity, both natural and man-made, continually increasing.

For the most part, this increasing complexity consists of the development of new autonomous entities, entities that implement the abstract designs of dissipative structures. This does not contradict the Second Law. Each autonomous entity maintains its own internally reduced entropy by using energy imported from the environment to export entropy to the environment. Overall entropy increases. Such a process works only in an environment that itself receives energy from outside itself. Within such an environment, complexity increases.

Progress in science and technology and the bountifulness of the marketplace all exemplify this pattern of increasing complexity. One might refer to this kind of pattern as a meta-epiphenomenon, since it is an epiphenomenon of the process that creates epiphenomena. This creative process also tends to exhibit a second meta-epiphenomenon: overall energy utilization becomes continually more efficient. As new autonomous entities find ways to use previously unused or under-used energy flows (or forms of energy flows that had not existed until some newly created autonomous entity generated them, perhaps as a waste product), more of the energy available to the system as a whole is put to use.

The process whereby new autonomous entities come into existence and perpetuate themselves is non-reductive. It is creative, contingent, and almost entirely a sequence of historical accidents. As they say, history is just one damn thing after another—to which we add: and nature is a bricolage. We repeat the observation Anderson made more than three decades ago: "The ability to reduce everything to simple fundamental laws [does not imply] the ability to start from those laws and reconstruct the universe."
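The Second Law bookkeeping invoked a few paragraphs above can be stated compactly. The following inequality is a standard restatement of that bookkeeping, not an addition to the argument:

    \Delta S_{\text{entity}} < 0 \quad \text{is consistent with the Second Law provided that} \quad \Delta S_{\text{entity}} + \Delta S_{\text{environment}} \ge 0 .

That is, an autonomous entity may lower its own entropy only by exporting at least a compensating amount of entropy to its surroundings, which is why the process requires an environment through which externally supplied energy flows.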


3.12 Evolutionary environments

What kind of environment supports this ongoing creation of new autonomous entities? At least the following characteristics appear to be important.

• Access to a supply of externally provided energy and means for exchanging it. All such environments are what are commonly known as far-from-equilibrium systems in that externally supplied energy continually flows through them. The overall creative process can be summarized as consisting of finding increasingly innovative ways of using the available energy. To facilitate this process, mechanisms must be available to support the fungibility of energy—and its proxies such as money, power, and attention.

• Standards. New products, services, and other items are almost always created (composed) from existing products, services, and other items. Composition is greatly facilitated when the elements to be composed adhere to widely accepted standards.



• Communication and transportation infrastructures. These facilitate the flow of (a) information throughout the environment and, among trading partners, of (b) energy in one direction and (c) products and services in the other.



• A reasonable level of confidence in the stability and continuity of the products and services installed in the environment. Mechanisms must be available to allow agreements to be made and for installed products and services to be relied upon.



• Minimum overhead. Cultural or other mechanisms must exist to discourage corruption, along with enforcement mechanisms to make it harder to siphon off energy flows for non-productive uses. More generally, the environment must incorporate mechanisms that minimize the overhead of participating.



• Both (a) centralized but quasi-democratic and transparent governance of the overall system, its infrastructure, and the standards-making process and (b) decentralized overall control ("power to the edge") in which as much autonomy as possible is ceded to environment participants.



• Mechanisms that ensure that a certain amount of the available energy is devoted to the exploration of the space of new possibilities.



• Mechanisms that allow new products and services to be developed and installed in the environment and then made known to other participants in the environment.



• A (primarily, but perhaps not exclusively) bottom-up, i.e., market-like, means for allocating energy (or its proxies) according to usefulness: the more (less) useful a product or service is found to be, according to actual usage, the more (fewer) resources it will have at its disposal. All of the participants in the environment must be self-sustaining in terms of their overall energy transactions. This is possible because the environment is based on an available external source of "free" energy.



• An ability to form communities of interest (formal, informal, voluntary, and fee-based) to facilitate the sharing of information, experience, and expertise. The value of shared information is typically enhanced when it is shared in groups.



• Both (a) sufficient stability of the overall environment that participants can establish regularized modes of participation and (b) (generally collaborative) means to allow the environment to evolve as conditions change.


4 Entities, emergence, and science

4.1 Entities and the sciences

One reason that the sciences at levels higher than physics and chemistry seem somehow softer than physics and chemistry is that they work with autonomous entities, entities that for the most part do not supervene over any conveniently compact collection of matter. Entities in physics and chemistry are satisfyingly solid—or at least they seemed to be before quantum theory. In contrast, the entities of the higher level sciences are not defined in terms of material boundaries. These entities don't exist as stable clumps of matter; it's hard to hold them completely in one's hand—or in the grip of an instrument.

The entities of the special sciences are objectively real—there is some objective measure (their reduced entropy relative to their environment) by which they qualify as entities. But as we saw earlier, the processes through which these entities interact and by means of which they perpetuate themselves are epiphenomenal. Even though the activities of higher level entities may be described in terms that are independent of the forces that produce them (recall that this is our definition of epiphenomenal), the fundamental forces of physics are the only forces in nature. There is no strong emergence. All other force-like effects are epiphenomenal. Consequently we find ourselves in the position of claiming that the higher level sciences study epiphenomenal interactions among real if often somewhat ethereal entities.

"Of course," one might argue, "one can build some functionality that is not a logical consequence of its components." Fodor's simplest functionalist examples illustrate this phenomenon. The physics underlying the components of a mousetrap won't tell you that when you put the components together in a particular way the result will trap a mouse. The reason the rules of fundamental physics cannot tell you that is that mice simply are not part of the ontology of fundamental physics, just as Turing Machines are not part of the ontology of the Game of Life.

If an object is designed to have a function, then if its design works, of course it has that function—even if, as is likely, that function is logically independent of the laws that govern the components. We build objects with particular functions all the time. It's called ingenuity—or simply good software or engineering design. Even chimpanzees build and use tools. They use stems to extract termites from mounds, they use rocks to open nuts, and, perhaps even more interestingly, they "manufacture" sponges by chewing grass roots until they become an absorbent mass. [Smithsonian] But of course from the perspective of fundamental physics, stems are not probes; rocks are not hammers; and roots are not sponges.

To be clear about this point, when we say that the functionality of a designed element is logically independent of some lower level domain, we are not saying that the higher level functionality is completely unconstrained by the lower level framework. Of course a Turing Machine emulation is constrained by the rules of the Game of Life, and the functioning of a mousetrap is constrained by the laws of physics. But in both cases, other than those constraints, the functionality of the designed artifact is logically independent of the laws governing the underlying phenomena.


Typically, the functionality of the designed artifact is expressed in terms that are not even present in the ontological framework of the lower-level elements.

The question we pose in this subsection (and answer in the next) is whether such logically independent functionality occurs "in nature" at an intermediate level, at the level of individual things. Or does this sort of phenomenon occur only in human (or chimpanzee) artifacts? Given the current debate (at least in the United States) about evolution, one might take this as asking whether the existence of a design always implies the existence of a (presumably intelligent) designer.
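The Game of Life comparison used in this subsection can be made concrete with a minimal sketch. The grid size, the toroidal wrap-around, the particular glider coordinates, and the function names below are illustrative assumptions of ours, not anything from the text; the point is only that the rules mention cells and neighbor counts, while "glider" appears nowhere in them.

    # Minimal Game of Life sketch (assumes a toroidal wrap-around grid for brevity).
    # The rules know only about cells and neighbor counts; "glider" is not in their ontology.

    def step(live, width, height):
        """Apply Conway's rules once. `live` is a set of (x, y) cells that are alive."""
        counts = {}
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        cell = ((x + dx) % width, (y + dy) % height)
                        counts[cell] = counts.get(cell, 0) + 1
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    if __name__ == "__main__":
        glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a standard glider pattern
        cells = set(glider)
        for _ in range(12):                                  # every 4 steps the glider moves by (1, 1)
            cells = step(cells, 20, 20)
        print(sorted(cells))   # the same shape, displaced three cells diagonally

The glider is a pattern that we, as observers, recognize and name; the step function is the whole of the "physics," and nothing in it refers to gliders, just as nothing in fundamental physics refers to mousetraps or mice.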

5 Some practical considerations

5.1 Emergence and software

As noted earlier, the computation that results when software is executed is emergent. It is an epiphenomenon of the operation of the (actual or virtual) machine that executes the software. Earlier we defined emergence as synonymous with epiphenomenon. At that time we suggested that formalizable epiphenomena are often of significant interest. We also said that formalization may not always be in the cards. Software, which one would imagine to be a perfect candidate for formalization, now seems to be a good example of an epiphenomenon that is unlikely to be formalized.

It had once been hoped that software development could evolve to a point at which one need only write down a formal specification of what one wanted the software to do. Then some automatic process would produce software that satisfied that specification. That dream now seems quite remote. Besides the difficulty of developing (a) a satisfactory specification language and (b) a system that can translate specifications written in such a language into executable code, the real problem is that it has turned out to be at least as difficult and complex to write formal specifications as it is to write the code that produces the specified results. Even if one could write software by writing specifications, in many cases—especially cases that involve large and complex systems, the kinds of cases for which it really matters—doing so doesn't seem to result in much intellectual leverage, if indeed it produces any at all. This illustrates quite nicely that we often find ourselves wanting to produce epiphenomena (which may be very important to us) whose formalization as abstractions we find either infeasible or not particularly useful.

5.2 Bricolage as design

The process of building one capability on top of another not only drives the overall increase in complexity, it also provides guidance to designers about how to do good design work. Any good designer—a developer, an architect, a programmer, or an engineer—knows that it is often best if one can take advantage of forces and processes already in existence as part of one's design. But even before engineering, we as human beings made use of pre-existing capabilities.


Agriculture and animal husbandry use both plant reproduction and such animal capabilities as locomotion and material (i.e., skin) production for our own purposes. The exploitation of existing capabilities is not a new idea.

An interesting example of this approach to engineering involves recent developments in robotics. Collins reported [Collins] that a good way to make a robot walk is by exploiting gravity through what he called passive-dynamic motion—raise the robot's leg and let gravity pull it back down—rather than by directing the robot's limbs to follow a predefined trajectory. This illustrates in a very concrete way the use of an existing force in a design. Instead of building a robot whose every motion was explicitly programmed, Collins built a robot whose motions were controlled in part by gravity, a pre-existing force.

5.3 Infrastructure-centric development

Building new capabilities on top of existing ones is not only good design, it is highly leveraged design. But now that we are aware of this strategy, a further lesson can be drawn. New systems should be explicitly designed to serve as a possible basis for systems yet to come. Another way of putting this is that every time we build a new system, it should be built so that it becomes part of our environment, i.e., our infrastructure, and not just a piece of closed and isolated functionality.

By infrastructure we mean systems such as the Internet, the telephone system, the electric power distribution system, etc. Each of these systems can be characterized in isolation in terms of the particular functions it performs. But more important than the functional characterization of any of these individual systems is the fact that they exist in the environment in such a way that other systems can use them as services. We should apply this perspective to all new systems that we design: design them as infrastructure services and not just as bits of functionality.

Clearly Microsoft understands this. Not only does it position the systems it sells as infrastructure services, it also maintains tight ownership and control over them. When such systems become widely used elements of the economy, the company makes a lot of money. The tight control it maintains and the selfishness with which it controls these systems earn it a lot of resentment as well. Society can't prosper when any important element of its infrastructure is controlled primarily for selfish purposes.

The US Department of Defense (DoD) is currently reinventing itself [Dick] to be more infrastructure-centric. This requires it to transform what is now a huge collection of independent "stovepipe" information systems, each supporting only its original procurement specification, into a unified assembly of interoperating systems. The evocative term stovepipe is intended to distinguish the existing situation—in which the DoD finds that it has acquired and deployed a large number of functionally isolated systems (the "stovepipes")—from the more desirable situation in which all DoD systems are available to each other as an infrastructure of services.

5.4 Service refactoring and the age of services

The process whereby infrastructure services build on other infrastructure services leads not only to new services; it also leads to service refactoring.


The corporate trend toward outsourcing functions that are not considered part of the core competence of the corporation illustrates this. Payroll processing is a typical example. Because many organizations have employees who must be paid, these organizations must provide a payroll service for themselves. It has now become feasible to factor out that service and offer it as part of our economic infrastructure. This outsourcing of internal processes leads to economic efficiencies in that many such processes can be done more efficiently when performed by specialized organizations. Such specialized organizations can take advantage of economies of scale. They can also serve as focal points where expertise in their specialized service can be concentrated and the means of providing those services improved. As this process establishes itself ever more firmly, more and more organizations will focus on offering services rather than functions, and organizations will become less stovepiped.

We frequently speak of the "service industries." For the most part this term has been used to refer to low-level services—although even the fast-food industry can be seen as the "outsourcing" of the personal food preparation function. With our more general notion of service in mind, historians may look back on this period as the beginning of the age of services. Recall that a successful service is an autonomous entity. It persists as long as it is able to extract from its environment enough resources, typically money, to perpetuate itself.

5.5 A possible undesirable unintended consequence

The sort of service refactoring we just discussed tends to make the overall economic system more efficient. It also tends to improve reliability: the payroll service organizations are more reliable than the average corporate payroll department. On the other hand, by eliminating redundancy, efficiency makes the overall economic system more vulnerable to large-scale failure. If a payroll service organization has a failure, it is likely to have a larger impact than the failure of any one corporate payroll department. This phenomenon seems to be quite common—tending to transform failure statistics from a Gaussian to a scale-free distribution: the tails are longer and fatter. [Colbaugh] Failures may be less frequent, but when they occur they may be more global.

This may be yet another unintended and unexpected emergent phenomenon—a modern example of the tragedy of the commons. Increased economic efficiency leads to increased vulnerability to major disasters at the societal level. On the other hand, perhaps our growing realization that catastrophic failures may occur, along with our ability to factor out commonly needed services, will help us solve this problem as well. We now see an increasing number of disaster-planning services being offered.

6 Observations

Our fundamental existence depends on taking energy and other resources from the environment; we must all do so to stay in existence.


This raises fundamental ethical questions: how can taking be condemned when every living thing must do it? It also supports notions of stewardship, since we are all dependent on the environment.

Dynamic entities are composed of static and dynamic entities (bodies and societies, respectively). That is what makes them solid. But those static components are frequently replaced.

Competition for energy and other resources justifies the picture of evolution as survival of the meanest. It also justifies group selection, since groups can ensure access to resources better than individuals can.

7 Concluding remarks

For most of its history, science has pursued the goal of explaining existing phenomena in terms of simpler phenomena. That's the reductionist agenda. The approach we have taken is to ask how new phenomena may be constructed from and implemented in terms of existing phenomena. That's the creative impulse of artists, computer scientists, engineers—and of nature. It is these new phenomena that are often thought of as emergent.

When thinking in the constructive direction, a question arises that is often under-appreciated: what allows one to put existing things together to get something new—and something new that will persist in the world? What binding forces and binding strategies do we (and nature) have at our disposal? Our answer has been that there are two sorts of binding strategies: energy wells and energy-consuming processes. Energy wells are reasonably well understood—although it is astonishing how many different epiphenomena nature and technology have produced through the use of energy wells. We have not even begun to catalog the ways in which energy-consuming processes may be used to construct stable, self-perpetuating, autonomous entities.

Earlier we wrote that science does not consider it within its realm to ask constructivist questions. That is not completely true. Science asks how we got here from the big bang, and science asks about biological evolution. These are both constructivist questions. Since science is an attempt to understand nature, and since constructive processes occur in nature, it is quite consistent with the overall goals of science to ask how these constructive processes work. As far as we can determine, however, there is no sub-discipline of science that asks, in general, how the new arises from the existing. Science has produced some specialized answers to this question. The biological evolutionary explanation involves random mutation and crossover of design records. The cosmological explanation involves falling into energy wells of various sorts. Is there any more to say about how nature finds and then explores new possibilities? If, as Dennett argues in [Dennett '96], this process may be fully explicated as generalized Darwinian evolution, questions still remain. Is there any useful way to characterize the search space that nature is exploring? What search strategies does nature use to explore that space? Clearly one strategy is human inventiveness.


8 Acknowledgement

We are grateful for numerous enjoyable and insightful discussions with Debora Shuger during which many of the ideas in this paper were developed and refined.

References

Abbott, R., "Emergence, Entities, Entropy, and Binding Forces," The Agent 2004 Conference on: Social Dynamics: Interaction, Reflexivity, and Emergence, Argonne National Labs and University of Chicago, October 2004. URL as of 4/2005: http://abbott.calstatela.edu/PapersAndTalks/abbott_agent_2004.pdf.
American Heritage, The American Heritage Dictionary of the English Language, 2000. URL as of 9/7/2005: http://www.bartleby.com/61/67/S0146700.html.
Anderson, P.W., "More is Different," Science, 177, 393-396, 1972.
BBC News, "Gamer buys $26,500 virtual land," BBC News, Dec. 17, 2004. URL as of 2/2005: http://news.bbc.co.uk/1/hi/technology/4104731.stm.
Bedau, M.A., "Downward causation and the autonomy of weak emergence," Principia 6 (2002): 5-50. URL as of 11/2004: http://www.reed.edu/~mab/papers/principia.pdf.
Boyd, Richard, "Scientific Realism", The Stanford Encyclopedia of Philosophy (Summer 2002 Edition), Edward N. Zalta (ed.). URL as of 9/01/2005: http://plato.stanford.edu/archives/sum2002/entries/scientific-realism/.
Brown, J.S., Talk at San Diego State University, January 18, 2005. URL as of 6/2005: http://ctl.sdsu.edu/pict/jsb_lecture18jan05.pdf
Carroll, S.B., Endless Forms Most Beautiful: The New Science of Evo Devo and the Making of the Animal Kingdom, W. W. Norton, 2005.
Chaitin, G., Algorithmic Information Theory, reprinted 2003. URL as of Sept. 6, 2005: http://www.cs.auckland.ac.nz/CDMTCS/chaitin/cup.pdf.
CFCS, Committee on the Fundamentals of Computer Science: Challenges and Opportunities, National Research Council, Computer Science: Reflections on the Field, Reflections from the Field, 2004. URL as of 9/9/2005: http://www.nap.edu/books/0309093015/html/65.html.
Church, G. M., "From systems biology to synthetic biology," Molecular Systems Biology, March 29, 2005. URL as of 6/2005: http://www.nature.com/msb/journal/v1/n1/full/msb4100007.html.
Ciborra, C., "From Thinking to Tinkering: The Grassroots of Strategic Information Systems," The Information Society 8, 297-309, 1992.
Clarke, A. C., "Hazards of Prophecy: The Failure of Imagination," Profiles of The Future, Bantam Books, 1961.

Cockshott, P. and G. Michaelson, “Are There New Models of Computation: Reply to Wegner and Eberbach.” URL as of Oct. 10, 2005: http://www.dcs.gla.ac.uk/~wpc/reports/wegner25aug.pdf.


Colbaugh, R. and Kristen Glass, "Low Cognition Agents: A Complex Networks Perspective," 3rd Lake Arrowhead Conference on Human Complex Systems, 2005.
Collins, Steven, Martijn Wisse, and Andy Ruina, "A Three-Dimensional Passive-Dynamic Walking Robot with Two Legs and Knees," The International Journal of Robotics Research, Vol. 20, No. 7, July 2001, pp. 607-615. URL as of 2/2005: http://ruina.tam.cornell.edu/research/topics/locomotion_and_robotics/papers/3d_passive_dynamic/3d_passive_dynamic.pdf
Comm Tech Lab and the Center for Microbial Ecology, The Microbe Zoo. URL as of Oct 10, 2005: http://commtechlab.msu.edu/sites/dlc-me/zoo/microbes/riftiasym.html
Comte, A., "Positive Philosophy," translated by Harriet Martineau, NY: Calvin Blanchard, 1855. URL as of 7/2005: http://www.d.umn.edu/cla/faculty/jhamlin/2111/ComteSimon/Comtefpintro.html.
Cowan, R., "A spacecraft breaks open a comet's secrets," Science News Online, Vol. 168, No. 11, p. 168, Sept. 10, 2005. URL as of 9/9/2005: http://www.sciencenews.org/articles/20050910/bob9.asp.
Dennett, D. C., The Intentional Stance, MIT Press/Bradford Books, 1987.
Dennett, D. C., "Real Patterns," The Journal of Philosophy, (88, 1), 1991.
Dennett, D. C., Darwin's Dangerous Idea: Evolution and the Meanings of Life, 1996.
Dick, D., et al., "C2 Policy Evolution at the U.S. Department of Defense," 10th International Command and Control Research and Technology Symposium, Office of the Assistant Secretary of Defense, Networks and Information Integration (OASD-NII), June 2005. URL as of 6/2005: http://www.dodccrp.org/events/2005/10th/CD/papers/177.pdf.
Einstein, A., Sidelights on Relativity, an address delivered at the University of Leyden, May 5th, 1920. URL as of 6/2005: http://www.gutenberg.org/catalog/world/readfile?fk_files=27030.
Emmeche, C., S. Køppe and F. Stjernfelt, "Levels, Emergence, and Three Versions of Downward Causation," in Andersen, P.B., Emmeche, C., N. O. Finnemann and P. V. Christiansen, eds. (2000): Downward Causation. Minds, Bodies and Matter. Århus: Aarhus University Press. URL as of 11/2004: http://www.nbi.dk/~emmeche/coPubl/2000d.le3DC.v4b.html.
Fodor, J. A., "Special Sciences (or the disunity of science as a working hypothesis)," Synthese 28: 97-115, 1974.
Fodor, J. A., "Special Sciences; Still Autonomous after All These Years," Philosophical Perspectives, 11, Mind, Causation, and World, pp. 149-163, 1998.
Fredkin, E., "Digital Mechanics," Physica D (1990) 254-270, North-Holland. This and related papers are available as of 6/2005 at the Digital Philosophy website: http://www.digitalphilosophy.org/.


Gardner, M., Mathematical Games: "The fantastic combinations of John Conway's new solitaire game 'life'," Scientific American, October, November, December 1970, February 1971. URL as of 11/2004: http://www.ibiblio.org/lifepatterns/october1970.html.
Grasse, P.P., "La reconstruction du nid et les coordinations inter-individuelles chez Bellicosi-termes natalensis et Cubitermes sp. La theorie de la stigmergie: Essai d'interpretation des termites constructeurs," Ins. Soc., 6, 41-83, 1959.
Hardy, L., "Why is nature described by quantum theory?" in Barrow, J.D., P.C.W. Davies, and C.L. Harper, Jr., Science and Ultimate Reality, Cambridge University Press, 2004.
Holland, J., Emergence: From Chaos to Order, Addison-Wesley, 1997.
Hume, D., An Enquiry Concerning Human Understanding, Vol. XXXVII, Part 3, The Harvard Classics, New York: P.F. Collier & Son, 1909-14; Bartleby.com, 2001. URL as of 6/2005: www.bartleby.com/37/3/.
Kauffman, S., "Autonomous Agents," in Barrow, J.D., P.C.W. Davies, and C.L. Harper, Jr., Science and Ultimate Reality, Cambridge University Press, 2004.
Kim, J., "Multiple realization and the metaphysics of reduction," Philosophy and Phenomenological Research, v 52, 1992.
Kim, J., Supervenience and Mind, Cambridge University Press, Cambridge, 1993.
Langton, C., "Computation at the Edge of Chaos: Phase Transitions and Emergent Computation," in Emergent Computation, edited by Stephanie Forest, The MIT Press, 1991.
Laughlin, R.B., A Different Universe, Basic Books, 2005.
Laycock, Henry, "Object", The Stanford Encyclopedia of Philosophy (Winter 2002 Edition), Edward N. Zalta (ed.). URL as of 9/1/05: http://plato.stanford.edu/archives/win2002/entries/object/.
Leibniz, G.W., Monadology; for example, Leibniz's Monadology, ed. James Fieser (Internet Release, 1996). URL as of 9/16/2005: http://stripe.colorado.edu/~morristo/monadology.html
Lowe, E. J., "Things," The Oxford Companion to Philosophy (ed. T. Honderich), Oxford University Press, 1995.
Maturana, H. & F. Varela, Autopoiesis and Cognition: the Realization of the Living, Boston Studies in the Philosophy of Science, #42 (Robert S. Cohen and Marx W. Wartofsky, eds.), D. Reidel Publishing Co., 1980.
Miller, Barry, "Existence", The Stanford Encyclopedia of Philosophy (Summer 2002 Edition), Edward N. Zalta (ed.). URL as of 9/1/05: http://plato.stanford.edu/archives/sum2002/entries/existence/.
NASA (National Aeronautics and Space Administration), "Hurricanes: The Greatest Storms on Earth," Earth Observatory. URL as of 3/2005: http://earthobservatory.nasa.gov/Library/Hurricanes/.
Nave, C. R., "Nuclear Binding Energy," Hyperphysics, Department of Physics and Astronomy, Georgia State University. URL as of 6/2005: http://hyperphysics.phy-astr.gsu.edu/hbase/nucene/nucbin.html.


NOAA, Glossary of Terminology. URL as of 9/7/2005: http://www8.nos.noaa.gov/coris_glossary/index.aspx?letter=s.
O'Connor, Timothy and Wong, Hong Yu, "Emergent Properties", The Stanford Encyclopedia of Philosophy (Summer 2005 Edition), Edward N. Zalta (ed.), forthcoming. URL: http://plato.stanford.edu/archives/sum2005/entries/properties-emergent/.
Prigogine, Ilya and Dilip Kondepudi, Modern Thermodynamics: from Heat Engines to Dissipative Structures, John Wiley & Sons, N.Y., 1997.
Ray, T. S., "An approach to the synthesis of life," Artificial Life II, Santa Fe Institute Studies in the Sciences of Complexity, vol. XI, Eds. C. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen, Redwood City, CA: Addison-Wesley, 371-408, 1991. URL page for Tierra as of 4/2005: http://www.his.atr.jp/~ray/tierra/.
Rendell, Paul, "Turing Universality in the Game of Life," in Adamatzky, Andrew (ed.), Collision-Based Computing, Springer, 2002. URLs as of 4/2005: http://rendell.server.org.uk/gol/tmdetails.htm, http://www.cs.ualberta.ca/~bulitko/F02/papers/rendell.d3.pdf, and http://www.cs.ualberta.ca/~bulitko/F02/papers/tm_words.pdf
Rosen, Gideon, "Abstract Objects", The Stanford Encyclopedia of Philosophy (Fall 2001 Edition), Edward N. Zalta (ed.). URL as of 9/1/05: http://plato.stanford.edu/archives/fall2001/entries/abstract-objects/.
Sachdev, S., "Quantum Phase Transitions," in The New Physics (ed. G. Fraser), Cambridge University Press (to appear 2006). URL as of 9/11/2005: http://silver.physics.harvard.edu/newphysics_sachdev.pdf.
Shalizi, C., Causal Architecture, Complexity and Self-Organization in Time Series and Cellular Automata, PhD Dissertation, Physics Department, University of Wisconsin-Madison, 2001. URL as of 6/2005: http://cscs.umich.edu/~crshalizi/thesis/single-spacedthesis.pdf
Shalizi, C., "Review of Emergence from Chaos to Order," The Bactra Review. URL as of 6/2005: http://cscs.umich.edu/~crshalizi/reviews/holland-on-emergence/
Shalizi, C., "Emergent Properties," Notebooks. URL as of 6/2005: http://cscs.umich.edu/~crshalizi/notebooks/emergent-properties.html.
Smithsonian Museum, "Chimpanzee Tool Use." URL as of 6/2005: http://nationalzoo.si.edu/Animals/ThinkTank/ToolUse/ChimpToolUse/default.cfm.
Summers, J., "Jason's Life Page." URL as of 6/2005: http://entropymine.com/jason/life/.
Trani, M. et al., "Patterns and trends of early successional forest in the eastern United States," Wildlife Society Bulletin, 29(2), 413-424, 2001. URL as of 6/2005: http://www.srs.fs.usda.gov/pubs/rpc/2002-01/rpc_02january_31.pdf.
University of Delaware, Graduate College of Marine Studies, Chemosynthesis. URL as of Oct 10, 2005: http://www.ocean.udel.edu/deepsea/level-2/chemistry/chemo.html
Uvarov, E.B., and A. Isaacs, Dictionary of Science, September 1993. URL as of 9/7/2005: http://oaspub.epa.gov/trs/trs_proc_qry.navigate_term?p_term_id=29376&p_term_cd=TERMDIS.


Varzi, Achille, "Boundary", The Stanford Encyclopedia of Philosophy (Spring 2004 Edition), Edward N. Zalta (ed.). URL as of 9/1/2005: http://plato.stanford.edu/archives/spr2004/entries/boundary/.
Varzi, A., "Mereology", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), Edward N. Zalta (ed.). URL as of 9/1/2005: http://plato.stanford.edu/archives/fall2004/entries/mereology/.
Wallace, M., "The Game is Virtual. The Profit is Real." The New York Times, May 29, 2005. URL of abstract as of 6/2005: http://query.nytimes.com/gst/abstract.html?res=F20813FD3A5D0C7A8EDDAC0894DD404482.
Wegner, P. and E. Eberbach, "New Models of Computation," Computer Journal, Vol. 47, No. 1, 2004.
Wegner, P. and D. Goldin, "Computation beyond Turing Machines," Communications of the ACM, April 2003. URL as of 2/22/2005: http://www.cse.uconn.edu/~dqg/papers/cacm02.rtf.
Weinberg, S., "Reductionism Redux," The New York Review of Books, October 5, 1995. Reprinted in Weinberg, S., Facing Up, Harvard University Press, 2001. URL as of 5/2005, as part of a discussion of reductionism: http://pespmc1.vub.ac.be/AFOS/Debate.html
Wiener, P.P., Dictionary of the History of Ideas, Charles Scribner's Sons, 1973-74. URL as of 6/2005: http://etext.lib.virginia.edu/cgi-local/DHI/dhi.cgi?id=dv4-42.
WordNet 2.0. URL as of 6/2005: www.cogsci.princeton.edu/cgi-bin/webwn.
Wolfram, S., A New Kind of Science, Wolfram Media, 2002. URL as of 2/2005: http://www.wolframscience.com/nksonline/toc.html.
Woodward, James, "Scientific Explanation", The Stanford Encyclopedia of Philosophy (Summer 2003 Edition), Edward N. Zalta (ed.). URL as of 9/13/2005: http://plato.stanford.edu/archives/sum2003/entries/scientific-explanation/.
Zuse, K., "Rechnender Raum" (Vieweg, Braunschweig, 1969); translated as Calculating Space, MIT Technical Translation AZT-70-164-GEMIT, MIT (Project MAC), Cambridge, Mass. 02139, Feb. 1970. URL as of 6/2005: ftp://ftp.idsia.ch/pub/juergen/zuserechnenderraum.pdf.


Figures and Tables

Table 1. Dissipative structures vs. self-perpetuating entities

Dissipative structures                      | Self-perpetuating entities
Pure epiphenomena, e.g., 2-chamber example. | Has functional design, e.g., hurricane.
Artificial boundaries.                      | Self-defining boundaries.
Externally maintained energy gradient.      | Imports, stores, and internally distributes energy.

Figure 1. Four Rayleigh-Benard convection patterns


Figure 2. Anatomy of a hurricane. [Image from [NASA].]

