Emergence from complexity


Keith Farnsworth

(note this page is best read in conjunction with the page on Transcendent Complexes here).

Introductory Background

Emergence is the appearance of phenomena at a scale of system organisation that is not present at the lower scales within it. The idea is that all systems are made from relationships amongst component parts. If we begin with the most elemental components, these are related to one another by constrained interactions to form what at least appear to be more complex units. The interactions are constrained in the sense that not just any interaction is permitted if the system is to be at all organised: typically, relationships between component parts have to be highly prescribed for the system to be even identifiable, let alone functional. This is a familiar truth to artists of all kinds - creativity is the product of freedom and imagination, constrained by the regulations of the art form (rules of music, discipline of craft technique etc.). The alternative to musical regulation in composition is noise; to the regulation of development in a human body, the alternative is a teratoma. Of course, the constraint that regulates is the action of embodied information. Emergent phenomena can be simple and, in principle, easy to understand: e.g. the flow of water emerging from inter-molecular forces acting on individual molecules. But they can also be extremely complicated and so puzzling as to be mysterious: life itself is the primary example.

We may take elementary particles (electrons etc.) as the elemental components and the laws of physics that govern their behaviour as the regulations for their interactions. We know that they interact to form atoms and that atoms interact in ways that seem different from those governing the elementary particles of which they are composed (chemistry seems to have some different rules to physics). It is important to realise here that the rules of physics are not replaced by those of chemistry, but added to. We know that organisms are all made from chemical components, obeying the laws of chemistry, but it seems there are some very different and important laws that spontaneously ‘emerge’ to make life very special: that is, when the right molecules interact in the right way to count as part of something living. Indeed, living can be defined as a network of chemical reactions that obeys some particular rules extending beyond inanimate chemistry: they appear to be special emergent rules of life. (IFB specialises in explaining all that, so do read on).

The essential message of scientific constructivism is that 'creation' is composed of a nested hierarchy of phenomena in which new laws, laws that could not be foreseen by studying lower levels of the hierarchy, govern the workings of each level in turn. Thus, physics has its laws and chemistry has additional ones that cannot be understood at a level below chemistry. The same with biology: the laws of life are new and unique to life; they cannot be derived from chemistry and physics. One reason this is an uncomfortable conclusion is that we know that physics, chemistry and biology are just categories of scientific inquiry, reflecting the historical divisions of research pursued by different people whose interests were so divided. We might suspect that nature is not really made from distinct ‘Russian dolls’, each with its own governing principles. Whether or not this is true rests on the question of whether the emergence of apparently new phenomena with each increasing level of complexity is genuine, or just an appearance. If it is genuine - each level really does introduce new rules as though from nothing - then it is called ‘hard emergence’ (sometimes 'strong emergence'). If it is only an illusion, then the apparently new rules are really just a reworking of those already known from the more fundamental levels, and we call that ‘soft emergence’ (sometimes 'weak emergence').

The conventional view in science is that there is no such thing as hard emergence. This is the opposition to scientific constructivism: it is scientific reductionism. It holds that every phenomenon of a system, no matter how complex, is reducible to the properties of a set of sub-systems, which jointly provide the necessary and sufficient conditions for the phenomenon. This principle, reductionism argues, should be applied all the way down a nested hierarchy of sub-systems within systems to reach the fundamental laws of physics operating on elementary particles. Accordingly, all the phenomena of life are merely an appearance in the subjective mind of the observer; what is real in the sense of objective - 'out there', public and independent of the observer - is only the physical laws acting on the elementary particles. There are therefore no laws of biology. As the influential molecular biologist Jacques Monod* wrote (1971): "It does not seem likely that the study of living things will ever uncover general laws applicable outside the biosphere." Essentially, reductionism is saying 'there is no such thing as a forest, it is all just a lot of trees'. People taking this view look forward to the time when we can fully explain even the mind and consciousness in terms of atoms.

* In Chance and Necessity (1971), quoted in Rosen, R. (2000): Essays on Life Itself, Columbia University Press.

Of course, reductionism has been a tremendously powerful tool for modern science, uncovering many of the details about how life's mechanisms and processes work. But many scientists with a theoretical, and especially those with a philosophical, interest in nature find these successes unsatisfactory. As one leading proponent of constructivism (Robert Rosen) said of his mentor Nicolas Rashevsky, who for decades sought to understand the organism in its entirety through models of its parts: "the more different properties were captured in such models, the more the organism itself seemed to retreat from view." (Rosen 2000, p260). Rashevsky was one of the first in the modern era to make an about-turn in frustration with this: he proceeded to develop the Relational Biology that inspired Rosen and established one of the major endeavours of constructivism. It is succinctly described by saying 'take the whole organised system and throw away all but the organisation' - exactly the opposite of the reductionist programme, in which we take the system and throw away the organisation, leaving only the unrelated parts (see the illustration of mechano-diversity here). Indeed, it is a seldom recognised assumption that any property of any system can be decomposed into a set of unrelated properties of sub-systems. There is no proof of this assumption - it may in fact be falsifiable* - but it underlies all of reductionist science. If, on the other hand, it is wrong, then there must be properties of 'complex' systems that cannot exist in any way other than by the organisation of the whole system: that is, to use the slogan, systems can be more than the sum of their parts. In my view, this is only a matter of information (organisational constraint) being embodied at a higher level of system organisation, but there is a long way to go before we can examine that conclusion.

* A major theme of Rosen's work, continued by others (e.g. A.H. Louie), is demonstrating that this assumption is incorrect, replacing it with a better one, and thence developing a proper understanding of how life works.

Kim’s philosophical challenge to (hard) emergence


Scientific constructivism is largely ignored: most scientists, and the builders of their philosophical foundations, continue to reject the idea of emergence. Perhaps the most prominent challenge from reductionism-supporting philosophy in recent times has come from Jaegwon Kim, who developed a series of related arguments against the very idea that emergent properties could be anything other than an illusion of perspective (our natural tendency to see the forest despite the trees!). In other words, Kim thought that (hard) emergence was just an artefact of the way we perceive things (subjective and therefore not science). It is worth saying at the outset that Kim’s primary interest was in understanding the nature of the human mind, so not necessarily as general as we wish to be here.

Kim begins by defining the notion of emergence as a confluence of two necessary properties: supervenience and irreducibility (see e.g. Kim 2006). The first defines the ‘raw material’ of a putative emergent phenomenon as follows -

 “Supervenience: If property M emerges from properties N1, . . . Nn, then M supervenes on N1, . . . Nn. That is to say, systems that are alike in respect of basal conditions, N1, . . . Nn must be alike in respect of their emergent properties” (Kim, 2006, p550).

Supervenience is one of those philosophical words that does not mean what it seems it ought to (intention is another example). Supervenience refers to properties of things and, specifically, a property that cannot be separate from other (underlying) properties. Using a popular example, the beauty of a flower is supervenient upon (supervenes on) its other properties, such as shape and colour, and cannot exist without them.

So, if M supervenes on the set of underlying properties N1, . . . Nn , then everything about M is determined by the properties of N1, . . . Nn , and nothing else. The philosophers Mossio, Bich and Moreno state this subtly differently in their (2013) introduction to the topic: “Supervenience is a relation by virtue of which the emergent property of a whole is determined by the properties of, and relations between, its realisers” (Mossio et al. 2013, from which the Kim quote was taken for this article). The subtle, but I think crucial, difference is the addition here of ‘relations between’ - we shall see why very soon.

The other ingredient of emergence is irreducibility and this is really just a statement confirming what we are interested in:

“Property M is emergent from a set of properties, N1, . . . Nn, only if M is not functionally reducible with the set of the Ns as its realizer” (Kim, 2006, p. 555).

In other words, for a property to be called emergent at some system level, it must at least not be something that is already a property of one of the subsystems (i.e. at a lower level).

Kim’s core argument has been called the ‘exclusion argument’ and can be summarised by the following question. If an emergent property M, of a system S, emerges from components C (having properties N1, . . . Nn), so that M exerts some causal effect X, then why can we not, more simply, ascribe X to C? In other words, if M supervenes on the properties of the components of S, then S has no properties that are not simply a result of the properties of C, so if M is said to cause X, it is merely an intermediary between C and X which can be dispensed with: it is really C that causes X. In this case, M is termed an epiphenomenon, which is tantamount to calling it an illusion.

If we return to the flower example, letting petals and stamens be the components (C), this seems to say that if the beauty (M) of the flower (S) causes you to smile (X), since beauty supervenes on the shape and colour of its components (these being N1, . . . Nn - the properties of C), then it is really the petals and stamens that are making you smile. Now I am sure you will object to this idea. What is missing is that the flower is a particular arrangement of the petals and stamens - indeed, one arrangement out of many possible - and it is that arrangement that is beautiful.

If we take the information perspective (as advocated on the IFB website), it is rather simple to see what is wrong with the exclusion argument, at least as it is expressed above. Recall the subtle difference between the properties N1, . . . Nn and those properties together with the ‘relations between’ them. Each component of S is a collection of information (embodied as form). The bare collection of these components (C, with properties N1, . . . Nn) is only an assembly of objects (A): an unspecified, indeterminate assembly, with appropriately large entropy (for A, imagine pulling a flower apart and leaving the petals etc. haphazardly on the table). However, M arises from perhaps only one very particular configuration of the assembly, S, so that only one particular set of relations between N1, . . . Nn will produce M. Specifying this particular S from the set of possible arrangements of A reduces entropy: particularising the configuration (from among the many possible) instantiates information that is more than that of the sum of the parts C. So in information terms M does not supervene on A, but on S, and the difference between them is the missing ingredient that is necessary for causing M. We cannot dispense with M when trying to explain the cause of X: X is caused by N1, . . . Nn being in a particular configuration, and that particularity is where M derives from (we might say X is caused by N1, . . . Nn + M1, . . . Mn, where the Ms are properties of the configuration S). This is the information-based meaning of emergence. It is the additional information S-A that is instantiated by a particular configuration of relations among the component parts (the information theory page may help to understand this if it is still unfamiliar).
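The entropy bookkeeping here can be made concrete with a toy calculation (a sketch only: it assumes the parts are distinguishable and that all arrangements of the disassembled collection A are equally probable). The information instantiated by specifying one particular configuration S out of all possible arrangements of A is then the entropy reduction, in bits:

```python
import math

def configurational_information(n_parts: int) -> float:
    """Bits instantiated by selecting ONE arrangement (S) out of the
    n! equiprobable orderings of n distinguishable parts (the assembly A).
    This is the entropy reduction written as S-A in the text."""
    n_arrangements = math.factorial(n_parts)  # size of the possibility set for A
    return math.log2(n_arrangements)          # information gained by particularising S

# Toy 'flower' of 6 distinguishable parts: the one arrangement that is
# the flower embodies log2(6!) = log2(720), roughly 9.5 bits, over and
# above the information carried by the parts themselves.
extra_bits = configurational_information(6)
```

On this toy model, the properties N1, . . . Nn are carried by the parts themselves, while the extra ~9.5 bits are carried only by the configuration; that surplus is what M supervenes on.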

Irreducibility is equally obvious, since it is the additional information, S-A=M1, . . . Mn that instantiates - and is embodied by - the particular configuration of parts C. Further, since all things that exist are as they are because of the information that they embody, there is no fundamental difference, in the information description, between a basal component (a member of C) and a particular assembly of components S. Both are what they are because of the particular information, embodied by them, that orders their matter in space and time. Thus emergence is only a matter of the ontological level at which some causal information is instantiated.

This is what Mossio et al. 2013 seem to end up arguing, though in more general mereological terms (mereology being the philosophical analysis of part-whole relations), rather than with specific reference to information. They make a great deal of the relations among the component parts being a contributor to the emergent properties of the whole; although they do not explicitly identify these relations with information, they do consider them in terms of constraints, which is effectively the same thing. That, recall, is because information, physically embodied, is the constraint of a system to be a particular way. The notion of constraint used by Mossio et al. 2013 is the same as that used by Montévil and Mossio (2015) to understand causation as constraint. We pick that idea up on our page about causation, where it is used to introduce the concept of causal closure that is fundamental to the definition of life and foundational to the theory of autopoiesis. So, finally, it is worth noting that Mossio et al. 2013 use their idea of constraint to examine causal closure and whether or not it implies or relies on the idea of downward causation.

It seems that when we use information structure as our model for thinking about emergence, emergence turns out to be a real and natural phenomenon, quite to be expected.

References

Kim, J. (2006). Emergence: Core Ideas and Issues. Synthese, 151: 547-559.

Mossio, M., Bich, L. and Moreno, A. (2013). Emergence, Closure and Inter-level Causation in Biological Systems. Erkenntnis, 78: 153-178. (obtained from https://hal.archives-ouvertes.fr/hal-01354366).

Montévil, M. and Mossio, M. (2015). Biological organisation as closure of constraints. J. Theor. Biol., 372: 179-191.


Soft Emergence in the Game of Life

A very good explanation which demonstrates soft emergence is provided by Russ Abbott in ‘Emergence Explained’ (2006). It uses the computer-based cellular automaton called Conway’s ‘Game of Life’ (GoL).
The GoL is a very simple set of rules which govern transitions from ‘on’ to ‘off’ and back among a set of interconnected elements, best thought of as grid squares, for example on a chess board. Each element (grid square) has four neighbours with which it shares a side and four with which it shares a corner; all these together are its eight neighbours. If an element is on now, then in the next move (or time-step) it will stay on if and only if two or three of its neighbours are on; otherwise it will go off. If it is off now, it will stay off unless exactly three of its neighbours are now on, in which case it switches on in the next move. That’s it.
Famously, there are certain patterns of on and off which you can make on the grid that persist and move over the grid, looking almost like flat little creatures - the glider being the best known of these. Some people have created large ‘zoos’ of patterns that do interesting things in the grid, interact in fascinating ways, reproduce and generally amaze. If you watch these patterns performing their behaviours on the grid, they certainly seem to have a life of their own, but never forget that they are in fact just made from, and strictly obey, the simple elemental rules. Nevertheless, to describe the behaviour of the patterns, we have to refer to what appear to be new rules that describe the ways different kinds of pattern interact.
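The elemental rules fit in a few lines of code. The sketch below (an illustrative implementation, not Abbott's) applies one time-step to the set of 'on' squares on an unbounded grid, and then runs the famous glider: after four steps the same five-square pattern reappears shifted one square diagonally, even though the rules know nothing about 'gliders'.

```python
from collections import Counter

def gol_step(live):
    """One time-step of Conway's Game of Life.
    `live` is the set of (x, y) grid squares that are currently 'on'."""
    # For every square adjacent to an 'on' square, count its 'on' neighbours.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Switch on with exactly 3 'on' neighbours; stay on with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider: five 'on' squares whose pattern reappears, shifted
# diagonally by one square, every 4 time-steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = gol_step(pattern)
# pattern is now the glider translated by (+1, +1)
```

Nothing in `gol_step` mentions gliders; the 'creature' and its diagonal motion exist only at the level of the pattern.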

There is, then, a higher level of abstraction, beyond and not including the elemental rules, where GoL ‘creatures’ can be described according to their behaviours and interactions. They can do a lot of things - for example, they can represent even very complicated calculations - but one thing they are never able to do is change the elemental rules from which they are made. For this reason we refer to them as 'transcendent' phenomena of the rules of the GoL. As scientists, we could be very reductionist and say there is nothing to see except the repeated application of a few rules, or we may be more synthesist and describe the diversity and behaviours of patterns we see within the GoL, without reference to the underlying causes. Both are valid, but incomplete.

One especially important and elaborate creation within the GoL is the ‘Universal Turing Machine’ (UTM): it really is astonishing that such a thing can be created within the GoL, but you can see one working in this YouTube video: here. As well as giving a good impression of the sort of dynamic patterns that can be created with the rules of the GoL, the UTM is of special interest for us. The reason is that the concept's originator, the British mathematician Alan Turing, proved that a UTM can, in principle, compute anything that is computable: by definition, a UTM can simulate any other Turing machine, which in turn can compute anything that a ‘real’ computer can compute. In fact, the Turing machine, and the UTM (a Turing machine able to simulate any other), were devised as representations of computation in general, and their study yielded fundamental insights for the development of all the computers used today. So this universal computer, which can compute anything computable, can be made using the GoL (its universality was established by John Conway himself - another British mathematician).
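To make the Turing machine idea concrete, here is a minimal machine sketched in code (deliberately not universal; the rule-table format and the binary-increment example are hypothetical illustrations, not taken from Turing or Abbott). A machine is just a head on a tape, driven by a finite table of (state, symbol) rules.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    """Minimal Turing machine simulator (a toy sketch, not a UTM).
    `rules` maps (state, symbol) -> (next_state, symbol_to_write, head_move),
    with head_move in {-1, 0, +1}; the machine stops in state 'halt'."""
    cells = dict(enumerate(tape))          # sparse tape: index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Hypothetical rule table: add 1 to a binary number. The head walks right
# to the end of the digits, then carries leftwards.
increment_rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt",  "1",  0),
    ("carry", "_"): ("halt",  "1",  0),
}
```

Run on the tape "1011", this machine halts with "1100" (11 + 1 = 12 in binary). A UTM is the same kind of object, but its rule table is rich enough to read another machine's rule table off the tape and simulate it - and that is what the GoL pattern in the video implements.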

Now, to continue Abbott’s explanation, the UTM generated in a GoL is pure information and is a transcendent complex (explained here) of the patterns that comprise it as interacting component parts, these patterns themselves being transcendent complexes of the GoL rules. So far, then, we have two hierarchical levels of soft emergence, and it is important to realise that both are pure information: the first because it comprises nothing but patterns on the grid, and the second because it is computation made from those patterns. This turns out to be a general feature of transcendent complexes: they are pure information. They are also (by definition) logically independent of the elemental components and rules from which they are built. This is because they can be built from more than one kind of elemental system. For example, a Turing machine can be made from a digital computer program, or as a physical model built from plastic components (e.g. Lego), as well as from the GoL. When we want to study and discuss the transcendent complexes (e.g. a UTM), it is not relevant to refer to the GoL, Lego, or computer with which it is formed.

Is it a large step to apply the same thinking to a living cell? Can all the workings of a cell (the process of living) be formed, not from biochemicals, but in a computer simulation, or on the GoL grid? This does not mean a computer model of the cell; it literally means the process of living, not ‘run on’ biochemistry, but rather in some other logical base-system. If the answer is yes, then we shall know that life is a transcendent phenomenon and that it is pure information (i.e. life is a cybernetic phenomenon).


A note of caution from Robert Rosen
In his Essays on Life Itself, Rosen exposes the sloppy thinking of assuming that, to transition from merely complicated non-life to living system, all a system has to do is get sufficiently complex. There is no known mechanism, nor any critical threshold of complexity, for non-life to become life. Indeed, Rosen shows that the idea of complexity is itself a subjective one, entirely depending on the scale, dimensionality and coordinate system used to describe the system. Rather than complexity (and elaborate talk concerning the 'edge of chaos' etc.), the reason life is different from non-life is its circular causality (what Rosen calls impredicativity, using a mathematical term). Emergence is very likely a property of life, but it is not restricted to living systems and it is certainly not definitive for life.