
Here is a list of highlighted publications. The full set is available on Google Scholar.
Also on ResearchGate, of course.
For a bit of nostalgia, please take a look at my Research Team page - those were the days!
(You can get more detail on my publications here)
My expedition into biological information and function started with thoughts about biodiversity. Rather than taking it for granted that this had to do with the number of species in a community, I sought to define quantitatively what biodiversity really is. I realised that 'diversity' means degree of difference, and that this is mathematically equivalent to a quantity of information. Not information about the community, but information physically embodied in its structure and form. So began an effort to quantify biodiversity in units of information: the sum of differences (i.e. complexity) in and among the forms which physically embody the information [10,11]. Notably, biodiversity turned out to be a combination of genetic, functional and organisational diversity, and the common factor among these was information, embodied at every level of organisation from the molecular up to the ecological. Indeed, information turned out to be the essence of all living systems.
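As a toy illustration of measuring diversity in units of information, here is the Shannon index, the standard information-theoretic diversity measure. This is only a minimal sketch: the papers cited above develop richer, multi-dimensional measures, so nothing here should be read as their actual method.

```python
import math

def shannon_diversity(abundances):
    """Shannon entropy H = -sum(p_i * log2(p_i)), in bits.

    Treats a community's relative abundances as a probability
    distribution; more evenly spread difference = more information.
    """
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    return -sum(p * math.log2(p) for p in ps)

# An even four-species community embodies the maximum 2 bits:
print(shannon_diversity([25, 25, 25, 25]))  # 2.0
# A heavily skewed community embodies much less:
print(shannon_diversity([97, 1, 1, 1]))
```

The point of the sketch is only that 'degree of difference' can be expressed in bits, the same units used for any other physically embodied information.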
But life is not static information; it is constantly renewing, replicating and adapting: life itself, necessarily dynamic, must be information processing. Paralleling John von Neumann's insight into self-replication, it can be said that life is computation, and what it is computing is itself. It was obvious that not all differences, e.g. among leaf shapes or the markings on a shark, matter: a lot of this detail is just random. What was needed was some quantifiable measure of functional information - loosely, that which causes a difference "that makes a difference", to use Bateson's memorable phrase.
The idea of biological function, with all its awkward teleological implications, had not been resolved at that point, so with colleagues I set about establishing a generally applicable definition. The best starting point for one consistent with physical principles was the 'functional role' account attributed to Cummins (1975). Formalised and set in a more concretely physical context, this became a definition [5] resting on two important concepts: a) the hierarchy of organisation in living systems (from atoms, through cells and organisms, to ecosystems) and b) causation as the physical expression of physically embodied information.
To understand this concept, I needed to know what information did for, and meant to, living organisms. The result was the realisation that living is itself fundamentally a process of information processing, i.e. computing. Life is a cybernetic process: a set of logical rules acted out by molecular interactions, having in total the effect of self-replication.
But if living organisms are computing, what is it that they are working out? The answer came from Maturana and Varela's theory of autopoiesis and von Neumann's self-replicator theory: life is computing itself. That is a completely auto-reflexive cybernetic process, and attempts to understand it led me to the theories of Robert Rosen and Stuart Kauffman. They captured self-making in the ideas of closure to efficient causation, autocatalytic sets and thermodynamic work cycles, but these remained rather abstract. More recent advances, e.g. by Jannie Hofmeyr and Marcello Barbieri, have made the ideas more biologically grounded. But the role of information was still obscure, and questions remained about whether closure to efficient causation and downward causation could exist at all (many scientists still believe they cannot).
The breakthrough came from realising that information, in general, constrains randomness by specifying the particular from among a random set of possibilities (diagram above: A, random and unconstrained; B, simple strong constraint, as in a crystal; C, the more complicated constraint of interaction among molecules; and D, the functional effect of dynamic morphological constraint in enzyme action, as in the 'lock and key' mechanism).
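This idea of constraint as information can be made quantitative in one line: if a constraint narrows a set of equally likely possibilities, the information it embodies is the drop in entropy. A minimal sketch (the function name and numbers are illustrative only):

```python
import math

def information_from_constraint(n_possible, n_allowed):
    """Bits embodied by a constraint that narrows n_possible equally
    likely states down to n_allowed permitted ones:
    log2(n_possible) - log2(n_allowed)."""
    return math.log2(n_possible) - math.log2(n_allowed)

# A constraint selecting 4 permitted configurations out of 1024
# possibilities embodies 8 bits of information:
print(information_from_constraint(1024, 4))  # 8.0
```

The stronger the constraint (the smaller the permitted subset), the more information it specifies; no constraint at all specifies zero bits.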
Physically, life computing itself means that it constantly constrains the set of possible chemical reactions taking place within it to only the small subset which collectively and continuously re-make the organism. The computation is in and among the biochemical reactions (elements of analogue computing), most of which, just as in a digital computer, are controllable: the action of an enzyme can be switched on and off by reversible changes in its shape in response to a control signal, and networks of such signals amount to a computer. All talk of control implies constraint, and thereby physical information. The value of information is the constraint it provides, selecting what is functional from all that is not.
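The switchable-enzyme picture can be caricatured as logic gates. The sketch below is purely hypothetical (the functions and signal names are mine, not from any published model); it shows only how an enzyme whose activity depends on both substrate and a control signal behaves like an AND gate, and how chaining such gates gives a rudimentary computation:

```python
# Hypothetical sketch, not a published model: an enzyme as a
# controllable switch, and a two-enzyme pathway as chained gates.

def enzyme(substrate_present: bool, signal_on: bool) -> bool:
    """Product is made only if substrate is available AND the control
    signal holds the enzyme in its active shape - an AND gate."""
    return substrate_present and signal_on

def pathway(substrate: bool, signal_a: bool, signal_b: bool) -> bool:
    """Two enzymes in series: the first's product is the second's
    substrate, so the signals jointly gate the final product."""
    intermediate = enzyme(substrate, signal_a)
    return enzyme(intermediate, signal_b)

print(pathway(True, True, True))   # True: both signals permit the flux
print(pathway(True, True, False))  # False: the second signal blocks it
```

Networks of such conditional switches are what licenses the phrase "networks of such signals amount to a computer".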
To define function, then, we needed both of those concepts working together: the hierarchy of organisation, and physically embodied information acting as cause.
[10] Lyashevska, O., Farnsworth, K.D. (2012) How many dimensions of biodiversity do we need? Ecological Indicators, 18: 485-492.
[11] Farnsworth, K.D., Lyashevska, O., Fung, T.C. (2012). Functional Complexity: The source of value in biodiversity. Ecological Economics 11: 46–52.

Before fisheries, I developed a theoretical understanding of how large mammalian grazing animals manage to co-exist in the wild and how they effectively distribute themselves to optimise their food resources (leading to some generalisations of the 'ideal free distribution', a kind of spatial game-theory result). Along the way, I developed a modification of the foraging functional response for large mammalian herbivores, which was taken up in several aquatic ecology works through collaboration with Prof. Jaimie Dick and colleagues. Even before that, I worked out an optimal growth algorithm for the shape of botanical trees, did a bit of mathematical epidemiology, and worked on diagnostic imaging physics involving the UK's first clinical MRI scanner (for the Institute of Cancer Research).
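For readers unfamiliar with functional responses, the textbook baseline is the classic Holling type II 'disc equation'. The herbivore-specific modification mentioned above differs from this; the sketch below shows only the standard form, with illustrative parameter values:

```python
def holling_type_ii(resource_density, attack_rate, handling_time):
    """Classic Holling type II functional response (disc equation):
    intake = a*R / (1 + a*h*R). Intake rate rises with resource
    density R but saturates toward 1/h as handling time dominates."""
    a, h, R = attack_rate, handling_time, resource_density
    return a * R / (1.0 + a * h * R)

# With abundant resources, intake approaches 1/h (here 0.5):
print(holling_type_ii(1000.0, 0.5, 2.0))  # ~0.4995
# With no resources, intake is zero:
print(holling_type_ii(0.0, 0.5, 2.0))  # 0.0
```

The saturating shape is what makes the response realistic for foragers that must spend time handling each food item.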