The Autonomy of Life

A short article by Keith Farnsworth, May 2019. Still under construction (you might find it interesting to see me working on it).

The defining feature of life might turn out to be its autonomy.

Life differs from everything else we know in the universe in that we cannot (yet) explain it without reference to ‘agency’, or purpose. At the heart of all explanations of life - self-maintaining, self-propagating, adapting and learning - we ascribe a reason and role to every part or system composing life. The most parsimonious understanding of this reason is life itself: we might say that all life exists to maintain and reproduce life (see autopoiesis). But we would not feel the need to say such a thing about any non-living system: no galaxy or rock or river exists to maintain itself, nor for any other purpose. Not even fires or whirlpools need such a statement, though they do maintain themselves, using resources from around them. But fires just happen: they follow the laws of chemistry with the laws of physics right behind: directing cause and effect as mere “happenings”, blindly, without need and without purpose. We do not need to refer to what fires “do” in order to understand them (yes, we know they burn, but that is not doing anything in the sense of having a role, it is just a description of the physical process we identify as fire). It is enough to understand the chemical reaction of exothermic oxidation, which in the right circumstances, just happens.

So why, when a bacterium swims towards a source of sugar, must we refer to what the bacterium is doing (with the implication of a purpose) and consider what functions the internal mechanisms of its little body are given to enable this 'doing'? The reason is that to properly account for them, we need to recognise living organisms as having a goal. That is unique to living systems - we would never (I hope) think of a stone or a whirlpool as having a goal, but there is no living organism simple and lowly enough not to warrant, indeed to require, the notion of a goal to explain its actions.

This goal-oriented behaviour divides living from non-living things. No dead animal or plant has a goal. Now notice that for anything to have a goal necessarily implies that it is an identifiably separate thing, meaning separate from the universal chain of cause and effect that makes up the whole of the non-living environment. The reason is that if a thing is not causally separated from this chain, then all of its actions are determined by prior causes from lower down the chain, and it makes no sense to talk of its goal: it could never use one, because it could never enact one; it would always be a slave to external prior causes. For this reason, 'goal' and 'causal independence' are inseparable.

This kind of 'separation' is, on first thought, quite a subtle distinction because we are all used to conceptually separating (identifying) a thing from the rest of the universe in order to examine it. For example, we have just thought of a fire and considered it a separate entity. But it is only our thinking that creates this separateness, which is in fact an illusion. Really a fire is a phenomenon that is continuous with the rest of the physical universe: its causes are all prior causes and its effects are all passed on to other components of the universe without any gap or boundary. The edges of a fire, where it stops, are just the places where the prior conditions for fire are no longer met. In the non-living universe, everything that we conceive as a separate thing is in fact just a region of space-time where matter and energy appear to us in a pattern that we can recognise and name: its identity is nothing more than our imagination.

This (admittedly strange to start with) idea was central to the thinking of the great psychologist and systems thinker Gregory Bateson, who, in a burst of inspired innovation, realised that what we call 'reality' is a fabrication, made by our minds working to interpret the continuously varying and seamlessly joined-up world around us - joined by the chains of cause and effect that link everything to everything else. That is, apart from living things: they alone, it seems, have their own true identity, and it is by that radical departure from the norm that they alone can have a goal. This making of identifiable things from continuous patterns is what Bateson was referring to when he said that "information is a difference that makes a difference". An organism identifies something based on a difference in the pattern that it notices, i.e. that makes a difference to it. Bateson, as far as I know, never went on to say that the exception is that a living thing really is a separate pattern: the information is not just in the 'eye of the beholder', it is real and embodied in the observed organism. Now, I should briefly explain why this strange exception exists.

Organisms, uniquely, are whole things, with a definite boundary and a causal relationship joining every part together into a meaningful whole, which, on the level of the organism, acts with unity to give a definite identity to the system that is that organism. So bacteria swim, birds listen, plants compete for water and nematodes attack their tiny prey, whilst galaxies and rivers just are; their actions and reactions just happen. It is the very fact of being an organised whole that makes an organism something which 'does' as well as is 'done to'. What lies behind this is the very curious phenomenon of primary cause apparently arising from within the organism, rather than it being merely a link on the chain of cause and effect that leads ultimately back to the 'big bang'. This curious phenomenon is autonomy (the most advanced form of which is free will). Autonomy is necessary to make a definition of 'purpose' meaningful. Without it, goal and purpose are just nonsense. It is only systems that incorporate this sort of wholeness that are identifiable separate things in and of themselves. Autonomy, goal and self-identification are all aspects of the same thing, and that thing is unique to life.

What really is Autonomy?

Let us start with Sharov's (2010) definition of an active agent: anything that can make decisions and act in the physical world. Strictly speaking, this means without exogenous control. The Chinese robot spacecraft that landed on the far side of the Moon (2019) had to be autonomous to some extent because no direct communication with Earth is possible from there. These days it is not hard to imagine an autonomous robot in the sense of making decisions for itself without exogenous control. It certainly does not have to be following a pre-determined algorithm of decision making in which anticipated situations and decisions have been programmed for it to follow (as human-made cybernetic systems always used to be). No, using 'machine learning', it could be quite an independent 'thinker'. But for the time being, as a computer, it has to depend on human beings to design, build and maintain it, so it is by no means independent of us: we who are organisms that are alive and act in the world, apparently of our own volition, to create AI machines, among other things. The cleverest AI robot today is no more than an information parasite, wholly dependent on humans for its existence.

As usual, the seemingly simple word - autonomy in this case - turns out to be hard to define precisely, because it has many different meanings. Autonomy literally, from the Greek, means 'living by one's own laws', from auto (self) and nomos (law). The 'autonomous robot' is then not autonomous at all (putting aside the question of whether it lives): it operates under the laws set by its designers, which are therefore not its own (at no point did it choose to adopt its designers' laws rather than some others). More deeply, we must question whether it can even be referred to as a 'one' which may or may not have its own laws: does it have an independent identity at all? We may argue about this, but to make progress, we would need a concrete definition of independence of identity. That means we need a way to decide if a thing has a logical boundary of self in the real world, rather than merely as an abstract object of our minds, for example as an engineer might conceptually isolate a part of a machine, or an ecologist may isolate a predator-prey interaction in order to better understand it. Independence of identity is a necessary condition for autonomy in this (strong) sense.

The way it can be established (or rejected) is by examining whether it is logically a part of something else; whether its parts wholly constitute it, or only partially (so that it remains incomplete); and whether its parts are only part of it, or are also part of something else (that does not wholly contain it). This topic is covered by a (little known) branch of formal philosophy / mathematics called mereology (from the Greek meros = part).

In Farnsworth (2017), I used mereology (see Effingham, 2013) to answer the question: what kind of system has an independent identity? The motivation was the same as here: to establish the necessary and sufficient conditions for autonomy, which I then defined as having at least some degree of 'internal control' (where control meant causation), for which it is first necessary to delineate internal from external, which logically cannot be done unless the system in question has an independent identity. This set the task of finding an organisational structure for which 'internal' is causally distinct from 'external', giving a clear definition to both.

The first step is to establish that two distinct things cannot be part of each other - that is in fact a standard result. From this comes a definition of an isolated object (y): no part of y can overlap with anything other than y, so for all other things (z), there cannot be any z that overlaps with y (which defines z as exterior to y). In this, overlap is defined as: x overlaps y if and only if there exists some z such that z is a part of x and z is a part of y, simultaneously of course. Next, a definition of composition: a set of 'x's compose y if and only if (i) each x is part of y; (ii) no two of the 'x's overlap and (iii) every part of y overlaps at least one of the 'x's. There are several ways to define 'a part', depending on the application. In our case, we define x as a part of y if and only if x and y are causally connected: i.e. the state of object y is strictly determined by the state of object x, or vice-versa, or both. We then take the definition of a transitive relation, specifying the relation as causation: a relation R on a set X is transitive if and only if for any a, b, c in X, whenever aRb and bRc, then aRc. With R as causation, this means that whenever a causes b and b causes c, then a must cause c. We can then take a special case - the closure of transitive relations - to specify a causal closure on a set of objects. This is a set of objects which relate to one another causally in such a way that every object is causally connected to at least one other in the set and all the connections are transitive. This can be formally stated as: 1) if a and b are members of a causally related set, then a and b are transitively related; 2) if a and b are members of a transitively related set AND b and c are causally related members of the same set, then a and c are members of a transitively related set; 3) nothing is in the transitively related set except by rules (1) and (2).
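These definitions lend themselves to a small computation. The sketch below (the function names and the three-object example are mine, purely for illustration) takes causation as a finite set of ordered pairs, forms its transitive closure, and tests whether a candidate set of objects is causally closed in the sense just stated:

```python
from itertools import product

def transitive_closure(pairs):
    """Add (a, c) whenever (a, b) and (b, c) are present, until stable."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), tuple(closure)):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def is_causally_closed(objects, causes):
    """A set is causally closed if no causal link crosses its boundary and
    every member is (transitively) causally connected to another member."""
    closure = transitive_closure(causes)
    inside = all(a in objects and b in objects for a, b in closure)
    connected = all(
        any((x, y) in closure or (y, x) in closure for y in objects if y != x)
        for x in objects
    )
    return inside and connected

# A loop a -> b -> c -> a is closed under causation:
print(is_causally_closed({"a", "b", "c"}, {("a", "b"), ("b", "c"), ("c", "a")}))  # True
# A chain driven from outside the set is not:
print(is_causally_closed({"a", "b"}, {("sun", "a"), ("a", "b")}))  # False
```

On the loop, the closure grows to contain every ordered pair over {a, b, c}, and nothing enters it except via the causal pairs and transitivity - rules (1) to (3) above.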

How can a thing be the ultimate cause of its actions?


An agent cannot be free unless it is free of exogenous control, and this requires that it be autonomous in the sense that its component parts form a system in relational closure (Farnsworth, 2017). Relational closure means that every part of the system has a causal relation with every other part, so that there is a causal path between any pair of randomly selected component parts. This is a formalisation of the concept of 'circular causality' described by Rosslenbroich (2016). Relational closure is also the informational structure of the autocatalytic sets conceived by Kauffman (1986) and Hordijk (2004), and these in turn embody autopoiesis, from which agent causation might be derived (Kauffman, 2006). Relational closure means that the system as a whole is organisationally closed (see box below), a concept neatly encapsulated in Kauffman's (2000) term: the Kantian whole. What is special about such systems is that, for any component within them, the state is at least in part a result of internal, as opposed to external, causes. This causal structure defines the system boundary in terms of relationships among component parts, as it 'envelopes' only those parts for which closure is true.



Identifying ‘it’ : The Kantian Whole

The idea that living organisms require an explanation of agency because they are recognised as 'wholes' (in which component parts logically seem to have a role) was first formalised by Immanuel Kant (1724-1804) in his “Critique of Teleological Judgment” (well described in Ginsborg, 2006). As Kant put it: for something to be judged as a natural end "it is required that its parts altogether reciprocally produce one another, as far as both their form and combination is concerned, and thus produce a whole out of their own causality". A system composed of parts which in turn owe their existence to that of the system is accordingly termed a 'Kantian whole' by Stuart Kauffman (Kauffman and Clayton 2006), and it seems a pity that his terminology has not caught on yet. The difference between living and non-living things in this context is that among the non-living, our identifying of separate objects is a subjective convenience: in fact they have no separate identity at all, they are just conceptually identifiable sub-assemblies of the whole of the universe.

That is because all the matter of the universe is no more than an assembly of sub-atomic particles that are arranged in space with a particular configuration that gives the appearance of separate entities because, currently, their distribution is heterogeneous, having regions of similarity and interfaces between them (e.g. the high-density surface of a planet and the low-density space beyond). But still, when a sub-atomic particle vibrates in one part of the universe, the effects of this propagate throughout the universe, for all time. These effects will have a (very tiny) influence on what happens at any time and place within a 'bubble' that expands at the speed of light (or slower). This is true of all sub-atomic particles everywhere. What is more, particles are vibrating because other particles (including photons) caused them to do so by their vibration. Assuming that nothing happens without a prior cause, we now see that the cause of everything is everywhere (subject to Einstein's special relativity): the outcomes are only a matter of scale - the influences get smaller with distance in both space and time, but they are still there.

Of course this also applies to all the atoms of a living system, but in contrast with the non-living, they are influenced by a special kind of local organisation - special in that it locally 'contorts' the universal chains of causation into auto-recursive loops (see explanation on the causation page). Living things are logically separate precisely because they are necessarily organised wholes, in the sense defined by Kant. The whole is an autopoietic system: it makes and maintains the parts. The parts constitute the whole and by their collective interaction give rise to emergent properties that belong to the whole and not to the parts. The whole is bounded by a tegument which has selective capabilities, filtering what is necessary from the environment, rejecting the rest and expelling waste. The components collectively form an autocatalytic set in which each part catalyses the production of the others. In this the organism is self-referencing, for to make each part, one needs every other part, but each of these needs the part we started with. This state of affairs constitutes task closure (equivalently, operational closure) in which every part is the cause of every other, but only in the context of the whole, and this is the cardinal sign of a Kantian whole. The self-referencing among the parts isolates them from their interconnection with the rest of the universe. Parts of the whole are related to one another in cause and effect, rather than to the whole universe. This creates the (apparent) autonomy of the whole, for what happens within the organism is determined by the organism, specifically the organised happenings of the interacting parts, without (direct) reference to the rest of the universe. The organised whole acts in accordance with its autopoietic nature - it modifies its immediate environment (especially internally) to maintain its integrity. This is not a decision, of course, it is an inevitable consequence of its organisational structure.
A bacterium is just an organised set of chemical reactions, but it is one that makes, maintains and reproduces itself. It does this consistently in a range of environmental circumstances, so it is to this extent independent of the nature of the universe surrounding it. That makes it crucially different from any non-living system, for which any change in circumstances results in a change of internal process (and perhaps composition). For the organism, there is an ‘it’ with a separate identity and some degree of autonomy of action.

That said, the Kantian whole concept is neither precise enough, nor accurate enough for scientific analysis - it is really just a metaphor. We would not go as far as Maturana did:

"I am not saying, as Kant and others have said, that the parts exist for the whole and the whole for the parts. I talk of the manner in which the molecular process interconnect with each other so that a living system exists as a totality that appears to an observer as if the parts existed for the whole and the whole for the parts -- which is not the case. The components of any system exist as local entities only in relations of contiguity with other components, and any relation of the parts to the whole established by the observer as a metaphor for his or her understanding has no operational presence" - Humberto Maturana Romesin.

The whole exists in the sense that it has characteristics that are not found in any of the parts: it is literally the way they are put together. For life, this way is very special because it gives the whole the property of autopoiesis, from which it can gain autonomy. To help understand that, we need to get to grips with concepts of 'closure'.


Organisational and Causal Closure

This box explains the (at first rather opaque) idea of closure and its different kinds.
This may seem a little arcane, but it turns out to be tremendously important for working precisely with causality and autonomy. A Kantian whole may be defined more exactly as a system with transitive causal closure. To understand this, we will start with the definition of closure in general; this leads directly to operational closure, with which we explain transitive closure; we then identify organisational closure, finishing with causal closure.


Closure: is a mathematical concept applying to sets of relations. In general, a set ‘has closure under an operation’ if performance of that operation on members of the set always produces a member of that same set: this is the general definition of operational closure. If the operation is relational (e.g. A>B), then the system has relational closure if it has operational closure under the relational operator. Mathematicians will say that the set of real numbers is closed under addition because the operation A + B, where A and B are both real numbers, always produces another real number. In this sense closure implies that one cannot get out of the set, no matter how devious and complicated a combination of operations one chooses. The set of real numbers is obviously not closed under the operation of taking the square root, since negative numbers are real and their square roots are not (they are imaginary: real multiples of i, the square root of -1), so the outcome takes one out of the set.

This idea can be applied to many relevant systems. For example: if X is a set of chemical species and R a set of reactions, then if for every possible reaction in R among members of X the products are always also members of X, the set X is closed under R (this is one of the prerequisites of autocatalytic sets).
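That closure test can be made concrete with a minimal sketch (the species names and the `closed_under` helper are invented for illustration): a reaction applies only when all its reactants are in the set, and closure fails the moment any product falls outside it:

```python
def closed_under(species, reactions):
    """True if every reaction whose reactants all lie in `species`
    yields products that also all lie in `species`."""
    return all(
        products <= species
        for reactants, products in reactions
        if reactants <= species
    )

# Hypothetical two-reaction system: A + B -> C and C -> A + B
reactions = [
    (frozenset({"A", "B"}), frozenset({"C"})),
    (frozenset({"C"}), frozenset({"A", "B"})),
]
print(closed_under({"A", "B", "C"}, reactions))  # True: all products stay inside
print(closed_under({"A", "B"}, reactions))       # False: A + B -> C escapes the set
```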

This mathematical idea was used (post-hoc) in the ‘Closure Thesis’ underpinning Maturana and Varela's theory of autopoiesis. They say that "every autonomous system is operationally closed". Quoting Varela (1974): “A domain K has closure if all the operations defined in it remain with the same domain. The operation of the system has therefore closure, if the results of its action remain within the system itself". Here Varela is referring to a system that constructs itself: every action (operation) of the system results in a part of the system. This, though, is a curious logic, since for self-construction it is necessary only that every part of the system is the result of operations on parts of the system, leaving open the possibility that some operations of the system may result in things that are not part of the system. That is, it does not matter if the system's operation produces by-products that are not part of the system (e.g. waste matter), as long as all the parts of the system are produced by what are already parts of the system, as opposed to anything beyond the system.


Transitive closure. In maths, if A->B and B->C, then the set {A, B, C} is transitive only if A->C. If we interpret the arrow -> as meaning cause (or, more generally, constrain), then transitive closure defines a causal chain. This is very useful when challenged to account for circular causality. In the more general and realistic language of constraints: A->B->C->A implies that A sets limits on the behavioural repertoire of B, which in turn limits the behaviour of C, which limits A. We might say that the constraint on A by C is limited by A itself, using B as an intermediary. This turns out to be a very common template of biological control systems.
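The circular constraint A->B->C->A can be caricatured in a toy iteration (entirely illustrative; the dynamics are mine, not from any cited model). Each variable is driven upward at every step but capped by its upstream neighbour, so the loop as a whole holds itself in check: A's own value, relayed through B and C, is what limits A.

```python
def constraint_loop(a, b, c, steps=10):
    """A constrains B, B constrains C, C constrains A: each variable
    would grow, but is capped by the one upstream of it."""
    for _ in range(steps):
        b = min(b + 1, a)  # B grows, but never beyond A
        c = min(c + 1, b)  # C never beyond B
        a = min(a + 1, c)  # A never beyond C - i.e. never beyond itself, via B and C
    return a, b, c

print(constraint_loop(3, 3, 3))  # (3, 3, 3): the loop holds itself fixed
print(constraint_loop(5, 0, 0))  # (1, 1, 1): a slack A is reined in by its own loop
```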


Organisational closure. According to Heylighen (1990), “In cybernetics, a system is organisationally closed if its internal processes produce its own organisation” (my emphasis). More loosely, Vernon et al. (2015) stated that “the term operational closure is appropriate when one wants to identify any system that is identified by an observer to be self-contained and parametrically coupled with its environment but not controlled by the environment. On the other hand, organizational closure characterizes an operationally-closed system that exhibits some form of self-production or self-construction”. In other words, organisational closure was the term chosen by the Santiago school to describe autopoiesis in general, rather than specifically in the case of life, for which autopoiesis was already the term. That left operational closure to refer to any system that is closed in the mathematical sense to control relations: hence not controlled by its environment.

Causal closure.
Rosen (1991) used Aristotelian language to describe the special case of causal closure, calling it “closure to efficient cause”. This is an operational closure in which the operation is specifically causal (meaning efficient cause).

This idea was formalised by a mereological argument (mereology deals with parts and wholes with formal logic) in Farnsworth (2017) to define systems with an inherent cybernetic boundary (where inside is definitively separate from outside) as only those systems having transitive closure for causation (xCy: read as x causes y), meaning that the state of an object y is strictly determined by the state of the object x. The transitive causal closure of a system means that its components form a set A of causally related objects under C, such that there is no object in A whose state is not caused by an object in A: every part of the system is causally connected to every other. It is from such systems that a transcendent complex may arise. Finally, taking causation as manifest in mutual information, Bertschinger et al. (2006) derived a quantitative metric of information closure to operationalise these concepts in systems theory, which has developed into an information-theoretic method for system identification (Bertschinger et al. 2008).

References.

Bertschinger, N.; Olbrich, E.; Ay, N.; Jost, J. Information and closure in systems theory. In: Explorations in the Complexity of Possible Life (Proc. 7th German Workshop on Artificial Life, Jena, 26-28 July 2006), pp. 9-19, 2006.

Bertschinger, N.; Olbrich, E.; Ay, N.; Jost, J. Autonomy: An information theoretic perspective. Bio Systems 2008, 91, 331–345.

Farnsworth, K.D. Can a robot have free will? Entropy 2017, 19, 237.

Heylighen, F. Relational Closure: a mathematical concept for distinction-making and complexity analysis. Cybernetics and Systems 1990, 90, 335–342.

Varela, F.; Maturana, H.; Uribe, R. Autopoiesis: the organization of living systems, its characterization and a model. Curr. Mod. Biol. 1974, 5, 187–96.

Vernon, D.; Lowe, R.; Thill, S.; Ziemke, T. Embodied cognition and circular causality: on the role of constitutive autonomy in the reciprocal coupling of perception and action. Frontiers in psychology 2015, 6.


see also (further reading):
Hordijk, W.; Steel, M. Detecting autocatalytic, self-sustaining sets in chemical reaction systems. J. Theor. Biol. 2004, 227, 451–461.

Kauffman, S.A. Autocatalytic sets of proteins. J. Theor. Biol. 1986, 119, 1–24.

Kauffman, S.A. Origins of Order: Self-Organization and Selection in Evolution; Oxford University Press: Oxford, UK, 1993.

Kauffman, S.A. Investigations; Oxford University Press, 2000.

Mossio, M.; Bich, L.; Moreno, A. Emergence, closure and inter-level causation in biological systems. Erkenntnis 2013, 78, 153–178.

Rosen, R. Life itself: A comprehensive inquiry into the nature, origin and fabrication of life; Columbia University Press: New York, USA., 1991.








Levels of Autonomy: from homeostasis to free-will

Two broad features are jointly necessary for autonomous agency (causal independence): organisational closure and the embodiment of an objective-function providing a 'goal': so far only organisms demonstrate both. Organisational closure has been studied (mostly in the abstract), especially as cell autopoiesis and the cybernetic principles of autonomy, but the role of an internalised 'goal', and how it is instantiated by cell signalling and the functioning of nervous systems, has received less attention in this field. Here, I add some biological 'flesh' to the cybernetic theory and trace the evolutionary development of step-changes in autonomy: 1) homeostasis of organisationally closed systems; 2) perception-action systems; 3) action-selection systems; 4) cognitive systems; 5) memory supporting a self-model able to anticipate and evaluate actions and consequences. Each stage is characterised by the number of nested goal-directed control-loops embodied by the organism, summarised as will-nestedness N. Organism tegument, receptor/transducer system, mechanisms of cellular and whole-organism re-programming and organisational integration all contribute to causal independence. Conclusion: organisms are cybernetic phenomena whose identity is created by the information structure of the highest level of causal closure (maximum N), which has increased through evolution, leading to increased causal independence, which may be quantified by 'Integrated Information Theory' measures.


In Farnsworth (2017), I provided the following definition of free will. Though it does not meet the approval of every modern philosopher, it at least has the virtue of being precise and independent of assumptions concerning the (real or imagined) attributes of healthy (and awake) adult humans. The definition applies to a general agent, which could be anything, natural or technological (hence the title of the work "Can a Robot Have Free Will?").

 An agent has 'free will' if all of the following are jointly true:
•    FW1: there exists a definite entity to which free-will may (or may not) be attributed;
•    FW2: there are viable alternative actions for the entity to select from;
•    FW3: it is not constrained in the exercising of two or more of the alternatives;
•    FW4: its “will” is generated by a non-random process internal to it;
•    FW5: in similar circumstances, it may act otherwise according to a different internally generated “will”.

In this definition the term “will” means an intentional plan which is jointly determined by a “goal” and information about the state (including its history) of the agent (internal) and (usually, but not necessarily) its environment. The term “goal” here means a definite objective that is set and maintained internally to the agent. The objective is a fixed point in at least one variable. The set point of a homeostatic system is the most obvious example. An intentional plan is an algorithm that is embodied as (stored) information within the agent. The meaning of 'action' here is the creation of an additional (to those already present) constraint on one or more physical forces in the system that includes the agent and may extend arbitrarily beyond it, the practical result of which is a change to the trajectory of the development of the system in time. Examples include movement, changes or stasis (counter to change that otherwise would occur) in concentration of materials such as solutes, and the resultant effects such as maintenance of the agent against decay, changes in colour (think cuttlefish) or perception (e.g. eye pupil dilation), production of antibodies, alteration of physiology (e.g. root / shoot ratio in plants) and so on.
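The homeostat mentioned above is easy to make concrete. In this sketch (the numbers and names are mine, for illustration only), the 'goal' is the internally held set point, the 'will' at each step is the corrective plan formed from the goal plus information about the state, and the 'action' is the constraint that bends the system's trajectory towards the set point:

```python
def homeostat(state, set_point=37.0, gain=0.5, steps=20):
    """The goal (set_point) is internal to the agent; each step it forms a
    corrective plan from goal + sensed state, then enacts it."""
    for _ in range(steps):
        error = set_point - state  # information about the state
        plan = gain * error        # the 'will': goal + state jointly determine it
        state += plan              # the 'action': a constraint on the trajectory
    return state

print(round(homeostat(25.0), 3))  # 37.0: held at the internally set goal
```

Whatever the starting temperature, the trajectory is pulled to the same fixed point, because the objective is a fixed point in one variable, set and maintained inside the agent.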

The list of criteria (FW1-FW5) was chosen to address the main features that most philosophers have thought important (though they do not necessarily agree with one another about what is important, and some philosophers would leave out some items of the list). FW2 and FW5 are intended to examine the effect of determinism and FW4 represents the “source arguments” for and against free will, whilst FW3 ensures freedom in the most obvious (superficial) sense. Only one of the list (FW1) is not usually included in any philosophical discussion of free will, perhaps because it is usually considered to be self-evident, but it plays an important role in this case because we are not assuming anything about the agent; in particular, we are not assuming that it is a self-coherent whole (more precisely, an organisationally closed system). This is important because, as the 2017 paper demonstrates, organisational closure (see box above) is a necessary condition for free will as it is defined here.

The implication of agency is that a system to which it is attributed acts in a way that is systematic (not random) and such that the state (and maybe history of states) of the system (agent) is one of the determinants of the system’s behaviour. More specifically, the next state of the system is not random, not wholly determined by exogenous control, nor intrinsic to its structure (as in clockwork), but is at least partly determined by its present and (optionally) one or more of its previous states. The proximate cause of an action taken by an agent with agency is identified as its ‘will’. This proximate cause is not merely mechanism, it is the result of information with causal power rather than just deterministic effective cause (see discussion of causes).


[Image: Stentor, a single-celled ciliate]

How Autonomy is built up (in organisms)

Starting from nothing, how can any system build itself up to a level of autonomy that creates at least the character of free will? The answer can only be by a process of bootstrapping.

In Farnsworth (2018) I concluded that organism identity is created by the information structure existing at the highest level of causal closure at which the highest level of will-nestedness is identified and that this coincides with the ‘maximally irreducible cause-effect structure’ defined in IIT.


Free Will Machine
The "free will machine" (from Farnsworth 2017) is a kind of cybernetic structure intended to illustrate the requirements for free will as some philosophers define it. The term 'machine' follows the convention in cybernetics of calling information processing devices machines. This one generates predictions of its own state in alternative futures Ft+n (calculated by Turing Machine TM2), having built an internal representation of itself interacting with its environment (this representation is created by TM1). It selects the optimal response from among possible responses at time t (Rt), using a goal-based criterion G, within the Finite State Automaton (FSA), in which the goal is internally determined. IPS is the implementation of the selection in the physical system - a translation of information (the optimal future f') into optimal action r', which changes the system's state from Qt to Qt+1.
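The information flow of the machine can be caricatured in a few lines of code. This is only a sketch under invented toy dynamics (none of the names or numbers below come from the paper): an internal model standing in for TM1, an n-step prediction of alternative futures standing in for TM2, goal-based selection standing in for the FSA with goal G, and enactment of the chosen response standing in for IPS:

```python
def free_will_machine(state, responses, goal, horizon=3):
    """One decision cycle: predict the future under each candidate response,
    score each predicted future against the internal goal, enact the best."""
    def model(q, r):            # stand-in for TM1: model of self + environment
        return q + r - 0.1 * q  # toy dynamics, purely illustrative

    def predict(q, r):          # stand-in for TM2: roll the model forward
        for _ in range(horizon):
            q = model(q, r)
        return q

    futures = {r: predict(state, r) for r in responses}          # F_{t+n}
    best = min(responses, key=lambda r: abs(goal - futures[r]))  # FSA, goal G
    return model(state, best)   # stand-in for IPS: Q_t becomes Q_{t+1}

# With goal 2.5, the predicted futures are about -2.71, 0.0 and 2.71, so the
# machine 'wills' the response 1.0 and enacts it:
print(free_will_machine(state=0.0, responses=[-1.0, 0.0, 1.0], goal=2.5))  # 1.0
```

The goal is a parameter held inside the function, not supplied by anything in the predicted environment, which is the point of FW4: the 'will' is generated by a non-random process internal to the agent.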

References

Bertschinger, N.; Olbrich, E.; Ay, N.; Jost, J. (2006). Information and closure in systems theory. In: Explorations in the Complexity of Possible Life (Proc. 7th German Workshop on Artificial Life, Jena, 26-28 July 2006), pp. 9-19.

Bertschinger, N.; Olbrich, E.; Ay, N.; Jost, J. (2008). Autonomy: An information theoretic perspective. Bio Systems, 91, 331–345.

Effingham, N. An Introduction to Ontology; Polity Press: Cambridge, UK, 2013.

Ginsborg, H. (2006). Kant’s biological teleology and its philosophical significance. – In: Bird, G. (ed.), A companion to Kant. Blackwell, pp. 455–470.

Farnsworth, K.D. (2017). Can a Robot Have Free Will? Entropy. 19, 237; doi:10.3390/e19050237

Farnsworth, K.D. (2018). How organisms gained causal independence and how to quantify it. Biology.

Kauffman, S. A. and Clayton, P. (2006). On emergence, agency and organisation. Biol. Philos. 21: 501–521.

Rosen, R. (1991). Life itself: A comprehensive inquiry into the nature, origin and fabrication of life; Columbia University Press: New York, USA.

Sharov, A.A. (2010). Functional Information: Towards Synthesis of Biosemiotics and Cybernetics. Entropy, 12, 1050–1070.


