Top Down (Downward) Causation

This is a (very) stripped down version of the chapter written by Keith Farnsworth, George Ellis and Luc Jaeger for "From Matter to Life: Information and Causality", published by Cambridge University Press.

What causes what?

When something happens in the non-living (abiotic) universe, we can trace its cause through a series of direct causal relationships to explain it. For example, a stone in the desert suddenly splits in two because the heat of the sun caused differential expansion, creating internal stresses; a line of chemical bonds within the stone is the first to give way, weakened by the slow progress of entropy-maximising chemistry, rooted in the same laws of physics that caused the relative weakness of those bonds in the first place. This does not explain why the stone was there to begin with, but for that we can form another chain of physical causal steps (see 'Definitions of causation' for further explanation). Philosophers struggling with the question of "free will" call each step in such an explanation one of transient causation. Following these steps back, one always returns to the basic laws of physics and (currently) to the 'big bang', where our understanding runs out. Most philosophers are interested in causes that include the animate, especially the explanation of human actions, and for this many of them maintain (though do not necessarily agree with) a notion of agent causation: the point we arrive at when a living organism appears to create a cause spontaneously, breaking the chain that leads back to fundamental physics. This notion describes the appearance of an action by an agent, the cause of which is the agent itself: it seems to act without a prior cause. Indeed, this apparent behaviour is one of the mysteries common to life: it seems to have the ability to spontaneously generate events, as though it possessed free will. Here we are interested in what lies behind such weirdness. We start with a proposition (which may be tested): that the ultimate cause of anything is information, and that the source of this information gives the apparent character of the cause as either transient, randomly spontaneous, or agent-caused.

Definitions of causation

What exactly do we mean when we say A causes B?
Causal power is attributed to an agency which can influence a system to change outcomes, but which does not necessarily bring about a physical change by direct interaction with it. In an easily grasped analogy, the Mafia boss says his rival must be permanently dealt with (the boss has causal power), but his henchman does the dirty deed. The action of the henchman is physical and dynamic, and the henchman is logically described at the same ontological level as his victim (it is not the cartel that kills the rival, not the rotten society, not the atoms in the henchman's body, but the henchman himself). A dynamic, physical cause linking agents of the same ontological level is referred to as an effective cause (alternatively an efficient cause, following Aristotle; see Falcon, 2015). So, when a snooker ball strikes another it causes it to move, and that is an effective cause. But the laws of physics that dictate what will happen when one ball hits another are not effective causes, even though they do have causal power (as such they are called formal causes). It seems that effective cause is always accompanied by a transfer of energy: this is the only way in which a physical change can take place in the physical universe. However, for an effective cause to be realised, non-effective causes must also exercise their power. Auletta et al. (2008) introduce these distinctions as a preamble to explaining top-down causation, and they will be useful for what follows here. By definition, top-down causes are generated at a different (higher) ontological level than that at which they are realised through an effective cause. Without the effective cause, though, nothing could happen.

Actions without causes?

In the case of transient cause, the information at the end of the chain is to be found in the physical rules of the universe. The 'selection' of these particular laws and fundamental constant values, from among all possible ones, is what makes the universe what it is. The cosmologist and mathematician George Ellis (currently one of the leading thinkers on top-down causation) points out that the necessary simplicity of the early universe, relative to its present state, means that this cannot account for all causes. (Indeed, such an explanation would constitute a kind of preformationism - the term for the faulty belief, once held, that living things were already present in the zygote as fully formed, miniature versions of themselves.)

In trying to account for what appear to be spontaneous actions, philosophers often come up against 'actions without causes', where the chain of transient causes seems to halt. If there really is no cause whatsoever, then two things must be true: first, there must be more than one option (because if there were only one, then the action would be wholly determined and we would step directly on to the next link in a causal chain); second, there must be no way (mechanism or set of circumstances) of choosing among the options (because if there were, then that 'way' would be sufficient to account for the cause). In this situation, we are left only with a random choice of the path to follow. But, as we argue elsewhere, randomness is the phenomenon of spontaneously introducing new 'information' into a system. Thus the random 'action without a cause' is itself derived from information, in this case non-structured, i.e. 'random', information. This kind of information finds a use as the raw material for selection and therefore pattern formation, from which form, complexity and function may be developed.

However, philosophers who believe (albeit uncomfortably) in agent causation insist that it cannot be reduced to random action (for then it is clear that the source of causation is not the agent itself, but rather an independent random process). Having established the physical foundation of the universe as one source and randomness as another, we now need a third source of information to explain this most puzzling kind of causation. We might speculate that this one is peculiar to living systems and not found elsewhere (excluding human creations, since they are logically dependent on living systems).

Cybernetic control systems

The easiest way to appreciate this third source is, in fact, through a human creation, namely a thermostatically controlled heating or cooling system, which is a practical example of a cybernetic system. Its disarmingly simple, but highly significant, feature is that the thermostat must have a set-point, or reference level, to which it regulates the temperature. That set-point is a piece of information, and what is different about it is that it is not obviously derived from anything else: it is apparently novel information that is not random. Of course, in a heating system this information was introduced by human intervention, and in the case of an organism's homeostatic system regulating, for example, body temperature or salt concentration, it was presumably selected for by evolution, but both of these explanations conform to the notion of new information being introduced into, or by, living systems. The set-point is not seen in abiotic systems, though natural dynamic equilibria are abundant, for example the balance between opposing chemical reactions, or the balance between thermal expansion and gravity maintaining a star. For these systems, the equilibrium point is a dynamic 'attractor' and the information it represents derives from the laws of physics: these are not cybernetic systems. Only in life do we see a genuine set-point, which is independent, pure, novel and functional information.
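The thermostat can be sketched in a few lines of code. This is our illustration, not the chapter's: the set-point value, heating rate and heat-loss rate are all assumptions, chosen only to show that the equilibrium the system settles at is fixed by the set-point (pure information), not by the physics of the 'room' alone.

```python
# A minimal sketch of a thermostat as a cybernetic control system.
# The set-point is a piece of pure information: nothing in the physics
# of the room determines it; it is supplied from outside the system.

SET_POINT = 20.0  # degrees Celsius: the reference level (hypothetical value)

def thermostat_step(temperature):
    """Compare the current value with the set-point (a subtraction: the
    information-processing step) and decide whether the heater runs."""
    return (SET_POINT - temperature) > 0  # heater on while below set-point

def simulate(steps, start_temp=15.0):
    """Crude room model: the heater adds heat, the room leaks heat outdoors.
    The rates are assumed, purely for illustration."""
    temp, history = start_temp, []
    for _ in range(steps):
        heater_on = thermostat_step(temp)
        temp += 1.5 if heater_on else 0.0   # heating rate (assumed)
        temp -= 0.05 * (temp - 5.0)         # loss to a 5-degree exterior (assumed)
        history.append(round(temp, 2))
    return history

history = simulate(48)
# The temperature climbs to the set-point and then oscillates about it,
# whatever the starting temperature: the information fixes the equilibrium.
```

Changing `SET_POINT` moves the equilibrium; changing the starting temperature does not. That asymmetry is the point: the outcome is controlled by the reference information, not by the initial physical state.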

This fascinating observation has inspired a new, information-based way of thinking about what life actually is. Among the several suggestions for the necessary and sufficient properties of a living system, perhaps the most fundamental so far found is the appearance of top-down causation by information, of which the set-point is clearly an example. Sara Walker (Walker et al., 2012) calls this "the hard problem of life", in an echo of "the hard problem of consciousness". The conundrum is: how can information (which is taken to be insubstantial and non-physical) influence the physical world? This is an important question because it seems that all cases of identified life (including artificial and hypothetical life) have in common the feature of information appearing to control aspects of the physical world. In the language we use on this website, that translates to the 'transcendent phenomenon of information' apparently directing behaviours at lower levels of organisation in matter and energy. Now we ask: is this even possible?

What we have learned about transcendent phenomena suggests that it is, if and only if the level under (apparent) control is not the foundational substrate of existence. In other words, since each level Hn is a transcendent complex (TC) of a lower level Hn-1 in the modular organisation of nature, as long as n>2 there is no reason in principle why top-down causation (Hn on Hn-1) should not be possible, since given that restriction Hn-1 is also transcendent. It is also noteworthy that many well-informed commentators (Ellis included) say that we do not yet know what H1 is, so all known levels of natural organisation are potentially available for top-down causation. Balancing this, and just as a reminder: transcendent complexes are emergent properties that together appear to form a new and higher level of organisational structure, creating a picture of nature as modular. Of course, higher levels of organisation may be no more than convenient scales of aggregation for thinking about processes that are really part of a scale-continuum in the organisation of matter and energy. However, early tests of this, and efforts to discover natural levels of organisation, show such levels emerging from statistical descriptions of natural patterns and coinciding with the observed modularity [references for this]. The view of modular organisation is becoming more compelling, and if information really can be interpreted as a transcendent complex of the distribution of matter in space and time (as we contend), then there is no in-principle reason why it cannot influence matter and energy in the sense of causation.

In a thought experiment to illustrate the problem, Paul Davies puts it this way: consider the difference between flying a kite and flying a radio-controlled toy plane. The former has physical (effective) causation: it is obvious how a tug on the string directly causes physical changes in the behaviour of the kite, in the sense that all we have are physical forces and these (to a good approximation) obey Newton's laws of motion, or more generally Hamiltonian mechanics. The toy plane is controlled via a communications channel: the only way you can influence it is via pure information (communicated via modulation of a radio signal), and there is no place for this in the Hamiltonian description of the system. How, then, can this pure information determine the behaviour of the physical object that is the plane? To understand it better, let's simplify even further and think of a remote control that just switches a lamp on and off. The remote control may send a pulse of infra-red light to a tuned photo-detector which, by the mechanism of receiving it, generates a small electrical current: enough to move an electromagnetic relay switch and turn the lamp on.

It now seems that we have reduced the problem to a continuous chain of physical phenomena, but more careful thought reveals that this is only the case because the whole system was specifically designed to use the information from the remote control in one very particular way. Indeed, it does not matter that the (one bit of) information was sent by light or radio; it could have been sent by wire, and indeed switched signals on wires, turning other switches on and off, are the basic hardware ingredient of a digital computer. The essential ingredient of this control by pure information, hidden at first, is the design of the system in which it operates. The design gives the control information a context, without which it would have no 'meaning', in the sense that it would not be functional. Elsewhere we argue that functional information is that which causes a difference: we can now interpret that as meaning it has causal power. The pure information and the regulated system of which it is a part are inseparable. This idea can be generalised to any control system: the information causes physical effects only because it is embodied within the physical system which gives it the context necessary for it to become functional. We may even go so far as to say that design is the embodiment of functional information in the physical form of a system.
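The point that a signal has no meaning outside a designed context can be made concrete. In this sketch (our illustration, with hypothetical receiver functions), the very same one-bit signal produces entirely different functional outcomes depending on which designed system receives it; the bit alone determines nothing.

```python
# One bit of pure information is meaningless on its own; it becomes
# functional only inside a system built to interpret it. Two different
# 'designs' give the same bit two different meanings.

def lamp_receiver(bit):
    """Designed context 1: the bit drives a relay that switches a lamp."""
    return "lamp on" if bit else "lamp off"

def alarm_receiver(bit):
    """Designed context 2: the very same bit arms or disarms an alarm."""
    return "alarm armed" if bit else "alarm disarmed"

signal = 1  # one bit, however transmitted (light, radio, or wire)
print(lamp_receiver(signal))   # the design makes the bit mean "lamp on"
print(alarm_receiver(signal))  # a different design, a different function
```

The causal power lies in the bit-plus-context pair, never in the bit itself, which is the sense in which the pure information and the regulated system are inseparable.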

For information control to work in a cybernetic system, there must be a part of the system that performs a comparison between the current value and the set-point. This action is one of information processing (e.g. subtraction), which can be performed by a molecular switch just as well as by an electronic switch in a computer. The osmoregulation of a bacterial cell is one of the most basic tasks of homeostasis in biology, so it makes a good example (reviewed by Wood, 2011). We see immediately that it is far from simple in practice, with not a single molecule but several cascades of molecular switches in operation, working differently to up-regulate and down-regulate the osmotic potential of the cell. To quote from Janet Wood's review:

“cytoplasmic homeostasis may require adjustments to multiple, interwoven cytoplasmic properties. Osmosensory transporters with diverse structures and bioenergetic mechanisms activate in response to osmotic stress as other proteins inactivate.”

To take just one example, channels formed from proteins embedded in the cell membrane literally open and close in direct response to the internal osmotic potential, and these are crucial for relieving excess osmotic pressure. Crucially, the set-point is to be found in the shape of these molecules: they are physically designed so as to embody it. We do not know where this information came from (the usual, but in this case unconfirmed, answer is natural selection), but here the important point is that we now see how it influences the physical world. It is not a mystery at all: the set-point is embodied in a physical object (the shape of a protein) and this protein shape directly affects the flow of small molecules through the cell membrane. Pure information embodied in molecular structure has causal power within an osmoregulation system.
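A toy model makes the embodiment idea explicit. This is not taken from Wood (2011); the gating threshold and rates are invented for illustration. The only 'set-point' in the model is a single shape-determined threshold, yet it suffices to regulate the variable.

```python
# A toy model of a pressure-gated membrane channel: the set-point is
# embodied in the molecule's shape, here reduced to one gating threshold.

GATING_THRESHOLD = 0.30  # tension at which the channel opens (hypothetical units)

def channel_open(tension):
    """The protein's physical design: open whenever tension exceeds the
    threshold, letting efflux relieve the osmotic stress."""
    return tension > GATING_THRESHOLD

def osmoregulate(tension, steps=20):
    """Each step: an open channel lowers the tension by efflux; otherwise
    background metabolism slowly raises it again (rates are assumed)."""
    trace = []
    for _ in range(steps):
        if channel_open(tension):
            tension -= 0.05   # efflux through the open channel
        else:
            tension += 0.01   # background accumulation
        trace.append(round(tension, 3))
    return trace

trace = osmoregulate(0.6)
# The tension is driven down to the gating threshold and then held near it:
# the 'set-point' is nothing but the shape-determined threshold itself.
```

No separate comparator or controller appears anywhere: the comparison with the set-point is performed by the physical response of the 'molecule', which is precisely the sense in which the information is embodied.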

Generalising top-down causation

There is far more to top-down causation than cybernetic systems and their set-points. George Ellis counts such systems as one of five different mechanisms of top-down causation. More generally, he (and we) recognise causation from one level of aggregation to another in the modular organisation of nature: top-down, bottom-up and same-level. In Ellis (2011) he identifies modular hierarchical structuring as the basis of all complexity, leading to emergent levels of structure and function based on lower-level networks. Quoting Ellis from that paper:

“Both bottom-up and top-down causation occur in the hierarchy of structure and causation. Bottom-up causation is the basic way physicists think: lower level action underlies higher level behaviour, for example physics underlies chemistry, biochemistry underlies cell biology and so on. As the lower level dynamics proceeds, for example diffusion of molecules through a gas, the corresponding coarse-grained higher level variables will change as a consequence of the lower level change, for example a non-uniform temperature will change to a uniform temperature. However, while lower levels generally fulfil necessary conditions for what occurs on higher levels, they only sometimes (very rarely in complex systems) provide sufficient conditions. It is the combination of bottom-up and top-down causation that enables same-level behaviour to emerge at higher levels, because the entities at the higher level set the context for the lower level actions in such a way that consistent same-level behaviour emerges at the higher level.”

In order of sophistication, the five mechanisms of top-down causation identified by Ellis are:

1) deterministic, where boundary conditions or initial data in a structured system uniquely determine outcomes;
2) non-adaptive information control, where goals determine the outcomes;
3) adaptive selection, where selection criteria choose outcomes from random inputs, in a given higher level context;
4) adaptive information control, where goals are set adaptively; and
5) adaptive selection of selection criteria, probably only occurring when intelligence is involved.

Functional Equivalence Classes and modular hierarchy

Ellis explains that coarse graining produces higher-level variables from lower-level variables (for example, averages of particle behaviours). We would interpret this more specifically by saying that phenomena at a given level of organisation can produce transcendent phenomena, so creating a higher level of organisation, this being the mechanism generating the modular hierarchy of nature. If we start with a coarse-grained (termed 'effective' in some of the literature) view at some level, we are denied information about the details at the fine-grained level below. For this reason, many possible states at the lower level may be responsible for what we see at the higher (which is exactly the micro-state/macro-state relationship we use in statistical thermodynamics). There are, therefore, multiple realisations of any higher-level phenomenon. The multiple ways of realising a single higher-level phenomenon can be collected together as a class of functional equivalence: a set of states, configurations, or realisations at the lower level, which all produce an identical phenomenon at the higher. A functional equivalence class is, by definition, the ensemble of entities sharing in common that they perform some defined function. But a phenomenon can only be functional in a particular context, since function is always context dependent (Cummins, 1975). This context is provided by the TC, which organises one or more of the members of one or more functional equivalence classes into an integrated whole having 'emergent properties'. The TC is an information structure composed of the interactions among its components. In practice, these make up the material body, which embodies the information that collectively constitutes the TC. It must be described in terms of functional equivalence classes because it is multiply realisable.
Crucially, the TC does not integrate the lower-level components per se; it integrates their effects, so the TC emerges from functional equivalence classes, not from the particular structures or states that constitute their members. Specifically, a TC is the multiply realisable information structure that gives lower-level structures the context for their actions to become functional. It is an aggregate phenomenon of functions. For it to exist, a set of components must be interacting to perform these functions; the components must collectively be members of the necessary functional equivalence classes. This thought constitutes a subtle shift that generalises the notion of higher levels from mere coarse-grained aggregates of lower levels (e.g. averages) to functional equivalence classes, containing the lower-level phenomena as members. Given this concept, we conclude that the existence of multiple lower-level phenomena belonging to a functional equivalence class, for example those constituting the form of a bacterial cell, indicates that a higher-level function is dictating the function of the lower-level phenomena. One might say that the higher level provides a 'design brief' for component parts to perform a particular function, and it does not matter what these parts are, or how they work, as long as they do the job.
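The micro-state/macro-state relationship behind functional equivalence classes can be shown in miniature. In this sketch (our construction; the 'function' is an arbitrary coarse-graining chosen for simplicity), many distinct lower-level configurations realise one and the same higher-level property, so the higher level is naturally described by the class, not by any particular member.

```python
# Many distinct lower-level configurations realise the same higher-level
# 'function': grouping them yields functional equivalence classes.

from collections import defaultdict
from itertools import product

def macro_function(config):
    """Hypothetical coarse-graining: the higher level 'sees' only whether
    the three-component configuration has an even or odd total."""
    return "even" if sum(config) % 2 == 0 else "odd"

# Enumerate every micro-state of three binary components and group them
# by the macro-level function they realise.
equivalence_classes = defaultdict(list)
for config in product([0, 1], repeat=3):
    equivalence_classes[macro_function(config)].append(config)

for function, members in equivalence_classes.items():
    print(function, "->", len(members), "realisations:", members)
# Each macro 'function' has four distinct realisations: exactly the
# micro-state/macro-state relationship of statistical thermodynamics.
```

Knowing the macro value tells you the class but never the member, which is the sense in which the coarse-grained view is 'denied information' about the fine-grained level.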

Luc Jaeger and colleagues have described functional equivalence in the molecular machinery of cells (Auletta, Ellis & Jaeger, 2008; Jaeger & Calkins, 2012). They point to functional equivalence among structurally different molecules, among folding patterns of RNA-based functional molecules (e.g. RNase) and among molecular networks, including regulatory 'complementation', in which regulatory systems are interchangeable. Indeed, they catalogue a host of examples found in the genetic, metabolic and regulatory systems of bacterial cells. But functional equivalence is only one of three requirements for what they call top-down causation by information control. In this scheme, a higher level of organisation exercises its control not by setting boundary conditions, but by sending signals (information) to which lower-level components respond. The signals represent changes in the form or behaviour of lower-level elements that would benefit the functioning of the higher level. This is a kind of cybernetic control which spans scales of organisation (or ontological levels, as these authors refer to them: Auletta, Ellis & Jaeger, 2008; Jaeger & Calkins, 2012; Jaeger, 2015). Within this framework, it is the combination of top-down causation by information control with top-down causation by adaptive selection that enables the exploration of TCs and, according to Jaeger & Calkins (2012), might characterise life. In general, at any given level of biological organisation (other than top and bottom), there will be more than one TC sharing behaviours in common. These TCs may therefore belong to a functional equivalence class for the TC of the next level up in life's hierarchy. A nested hierarchy of functional information structures is formed this way.

Bees do it.

The democracy used by a bee colony in choosing a new home for the hive (extensively studied by Thomas Seeley - see Seeley, 2010) provides a good example. Several potential new homes, differing in quality, are explored by scout bees. Each presents a report to the hive, communicated by waggle dance. Through a network of such communications (pure information), support for the better options builds and refines until a quorum is achieved, and by this a decision is made. The decision is not that of a single bee, but rather an emergent phenomenon: a property of the scout bees and their communications collectively. Thus the cause of the hive choosing a particular new home exists at a higher organisational level than the effective cause, which is the physical action of relocation exercised by each individual bee. The hive as a collective controls the home-location of the individuals. So where precisely is the cause of the decision to be located? Causal power should be attributed to the behavioural repertoire of the individual bees, since without that such decision making would be impossible, but the effective cause is the actual movement of bees en masse. Where they go is determined by which potential new home achieved a quorum of support, which in turn is the outcome of information exchange building mutual information among scout bees. Crucially, this mutual information represents the level of match between the potential home and the hive's requirements. The hive can be thought of as having a goal instantiated as a (multi-factorial) set-point: pure information. Potential homes are compared to this by the scouts who, by a series of communications of pure information, reach a threshold of mutual information, from which, according to their behavioural algorithm, the specific effective cause (moving to the best-match home) results.
Again, crucially, the set-point is not a property of any one bee (it depends, for example, on the size of the colony), so its role in determining the outcome necessarily implies cybernetic control exercised by the higher level of organisation upon the lower. We do not yet know how the information describing the colony size is embodied in the bees, but we can be sure that information is the source of causation at every level of abstraction in this example.
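The quorum mechanism can be caricatured in code. This is a drastic simplification of Seeley's findings (our own construction; the site names, qualities, recruitment rule and quorum value are all invented): scouts accumulate support for sites in proportion to the advertised quality, and the 'decision' is simply whichever site first crosses the quorum.

```python
# A minimal quorum model of house-hunting bees: support for each site
# grows by positive feedback (dancing recruits more dancers), weighted
# by site quality, until one site reaches a quorum.

import random

def choose_home(site_qualities, quorum=30, seed=1):
    rng = random.Random(seed)
    support = {site: 1 for site in site_qualities}  # one scout per site to start
    while max(support.values()) < quorum:
        # A new scout joins a site with probability proportional to the
        # current dancing for it (support) weighted by the site's quality.
        weights = {s: support[s] * site_qualities[s] for s in support}
        r, acc = rng.uniform(0, sum(weights.values())), 0.0
        for site, w in weights.items():
            acc += w
            if r <= acc:
                support[site] += 1
                break
    # The quorum, not any single bee, selects the winner.
    return max(support, key=support.get)

winner = choose_home({"hollow oak": 0.9, "wall cavity": 0.5, "old hive box": 0.3})
print(winner)  # usually the best site, because feedback amplifies quality
```

No bee ever compares all the sites; the comparison is performed by the collective dynamics, which is where the decision, and the cause, is located.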

Even bacteria do it.

Returning to bacteria, Jaeger and colleagues take as axiomatic that unicellular organisms have a 'master function' giving them the high-level goal of reproduction (including replication). If we regard the whole organism as nothing more than a coordinated network of chemicals together with their mutual interactions (reactions, binding, recognition etc.), this teleological (goal-oriented) function can only be achieved through top-down causation, since it does not exist at the level of chemical interactions. In general, any network of biochemical interactions can have several potential functions, because it can be an effective cause of several outcomes. According to the downward causation argument, the particular functions of the network that are observed in a living cell have been selected, by the higher level of organisation, from among all the possibilities, and of course the selection criterion is that they match best with the higher-level function. For this to work there must obviously be a) a range of possible effective causes (the equivalence class); b) a higher-level function, from which to identify c) a goal, which can be expressed as a set-point in cybernetic terms; and d) a means of influencing the behaviour of elements in the equivalence class e) that correlates with the difference between the present state and the goal. This 'means of influencing' is identified with 'information control'. Jaeger and colleagues point to the substitutability of a biochemical process from one organism to another as an example. Certainly, this example illustrates an equivalence class, and it shows that, of all the potential functions of the substituted element, one is expressed and it is the one that is most beneficial to the whole organism, implying selection.
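The a)-e) requirements can be arranged into a schematic (our construction, with deliberately abstract 'behaviours' standing in for biochemical processes): an equivalence class of possible effective causes, a goal expressed as a set-point, and selection of whichever behaviour best closes the gap between current state and goal.

```python
# A schematic of top-down causation by adaptive selection: the higher
# level chooses, from a class of possible effective causes, the one whose
# outcome best matches its goal (set-point).

def select_behaviour(state, goal, equivalence_class):
    """d)/e): influence the lower level by selecting the behaviour whose
    outcome lies closest to the goal; the selection criterion correlates
    with the distance between present state and goal."""
    return min(equivalence_class, key=lambda b: abs(goal - b(state)))

# a) A class of possible effective causes (hypothetical 'moves' the
# lower-level network can make):
equivalence_class = [
    lambda x: x + 2,   # up-regulate
    lambda x: x - 2,   # down-regulate
    lambda x: x,       # do nothing
]

goal = 10    # b)/c): the higher-level function, expressed as a set-point
state = 7    # the current lower-level state

chosen = select_behaviour(state, goal, equivalence_class)
print(chosen(state))  # the selected behaviour moves the state towards the goal
```

Each element of the class is a perfectly good effective cause; only the higher-level goal determines which one is expressed, which is the shape of the argument in the text.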

How ecosystems develop top-down control

Perhaps an even clearer example of top-down causation can be found in ecology, where the niche of every species (more precisely, every phenotype) is determined by its ecological community, which is of course the sum of all other species exerting an influence. An ecological community consists of all the organisms in it, together with all the interactions among them. Whilst many of these interactions take a material form, transferring food resources from prey to predator, many do not; competition, for example, merely describes the influence of one organism's behaviour on the outcomes for another's. There seems to be some sort of causation by pure information on two levels here: first, the communication needed for one organism to influence another without a material transfer between them; and second, at a grander scale, the defining of a particular niche by the community, of which any species occupying it is a part.

Taking the first level briefly: it is obvious that, for example, a cat can prohibit another from sitting on the mat by staring at it in a display of dominance. This is of course communication using a sign, and the effect is to alter the 'mind' of the cat on the receiving end. Influence by communicated information like this is in a special category because it relies on sophisticated cognition to work (number 5 in Ellis's list). Bacteria communicate by releasing chemical signals into their environment, but this could be thought of as an extension of the chemical signalling network within the cell to include the extra-cellular biotic environment. This kind of signalling may well have been an essential step in the development of multi-cellular organisms. The characteristic way in which such signals are implemented is through the well known lock-and-key recognition of signalling molecules by their receptor molecules. We can certainly interpret this as a communication channel carrying information, which in turn influences biochemical processes in the cell. At every point in the communication, we understand that the information is embodied in the form of molecules, which in turn are a material part of the living system.

The second and higher level of causation is more relevant here because it constitutes a diktat from information instantiated at a higher level of organisation down to that at a lower level. The way it works is that species both create new niches and close them off by occupying them, so in effect an existing assembly of species sets boundary conditions for the ecological sustainability of any population of a new species. The constraint of low-level processes by boundary conditions set at a higher level is common among natural systems and inevitable among synthetic ones, a prime example being Conway's Game of Life. Whatever happens in the game is established by the initial conditions, which effectively set the boundary conditions for what follows. The living ecosystem is a far more interesting case, since here the boundary conditions are not 'given' at the beginning; rather, they emerge as the community builds, and they derive from its member populations, to which they also apply. It is like a club which changes its rules depending on who has already joined, and then applies these rules to new applicants. Is this really a case of top-down causation, or is it only a matter of perspective? We find it convenient to aggregate the populations into the level of a community, but perhaps there is no such thing in reality. In the present case, this is rather easy to answer, because what we call the community is not simply an assembly of organisms: it specifically includes all their interactions and the organisation among them. An ecological community is specified by a very particular network of interactions, unique to it. This network exists at the community scale of organisation, indeed it defines that scale, and it gives rise to measurable properties that are only observable at that scale.
As we know, a network of this sort is functional information instantiated in the probability distribution of interactions among its nodes; ultimately, among individual organisms in the case of an ecological community. Accordingly, we conclude that an ecological community is a clear case of information which emerged from complexity (the formation of a relatively stable network from component interactions) and therefore exists at a higher level of organisation than the component parts. Further, we conclude that this information is functional in, among other things, setting the 'rules for entry' for any prospective population of organisms, and is thereby an example of top-down causation… for real.
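The 'club that writes its own rules' can be sketched directly. This toy model (entirely our own; the species, niches and the one-niche-per-occupant rule are invented, and it captures only niche closure, not niche creation) shows boundary conditions that are not given at the outset but derive from the members already present, and then apply to every new applicant.

```python
# A toy 'club' model of emergent boundary conditions: the rule for
# admitting a new species is generated by the current membership itself.

def admissible(candidate_niche, community):
    """A species may join only if its niche is not already occupied:
    the boundary condition derives from the existing members."""
    occupied = {niche for _, niche in community}
    return candidate_niche not in occupied

def assemble(applicants):
    """Build the community in arrival order; later arrivals face rules
    created by earlier ones."""
    community = []  # list of (species, niche) pairs
    for species, niche in applicants:
        if admissible(niche, community):
            community.append((species, niche))
    return community

# Hypothetical applicants arriving in sequence.
applicants = [
    ("grass", "primary producer"),
    ("moss", "primary producer"),   # rejected: niche closed by grass
    ("rabbit", "grazer"),
    ("vole", "grazer"),             # rejected: niche closed by rabbit
    ("fox", "predator"),
]
print(assemble(applicants))
# -> [('grass', 'primary producer'), ('rabbit', 'grazer'), ('fox', 'predator')]
```

Unlike the Game of Life, where the constraints are fixed by the initial configuration, here the constraint applied to each applicant did not exist until the community itself created it, which is the distinction the text draws.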


Auletta, G.; Ellis, G.; Jaeger, L. (2008). Top-down causation by information control: From a philosophical problem to a scientific research programme. J. R. Soc. Interface, 5, 1159–1172.

Cummins, R. (1975). Functional analysis. Journal of Philosophy, 72:741–765.

Ellis, G.F.R. (2011) Top-down causation and emergence: Some comments on mechanisms. Interface Focus. 2, 126–140.

Jaeger, L. (2015). A (bio)chemical perspective on the origin of life and death. In “The role of Life in Death”, Edited by John Behr, Conor Cunningham and The John Templeton Foundation. Eugene, Oregon: Cascade Books, Wipf and Stock Publishers.

Jaeger, L.; Calkins, E. (2012). Downward causation by information control in micro-organisms. Interface Focus, 2, doi:10.1098/rsfs.2011.0045.

Seeley, T.D. (2010). Honeybee Democracy. Princeton: Princeton University Press.

Walker, S.I.; Cisneros, L.; Davies, P.C.W. (2012). Evolutionary transitions and top-down causation. In Proceedings of Artificial Life XIII, Michigan State University, East Lansing, MI, USA. 13, 283–290.

Wood, J.M. (2011). Bacterial Osmoregulation: A Paradigm for the Study of Cellular Homeostasis. Annu. Rev. Microbiol. 65:215–38.