This is the problem of what has become known as phenomenal consciousness (p-c): the experience of experience itself, distinct from action consciousness (a-c), which is roughly what we regard as ‘thinking’. Feelings are states of p-c and are also the definitive result of sentience (the ability to have feelings). Among philosophers, p-c is often summed up as ‘what it is like’ to, e.g., see the colour red or taste mint, following Thomas Nagel’s highly influential 1974 paper “What is it like to be a bat?” [1]. Of course only a bat can know, just as you and I can only ever know what it is like to be ourselves: p-c is strictly subjective and private. For that reason, some scientists dismiss the question as untestable and therefore non-scientific. Others, like me, are not put off so easily.

To help, I notice some things about feelings. They are usually rather involuntary responses (not obviously dependent upon a-c, as some accounts have suggested). They usually have an emotional association; indeed, feelings are often synonymous with emotional states in everyday language. Forget the subtle feeling of seeing a colour: think of what it is like to feel pain, to have an orgasm, or to finally relieve your bladder when you are bursting to go. Those more carnal, basic feelings are where it is at, in my opinion, and all of them have in common a) emotional ‘charge’, b) normative valence (good or bad) that drives us to seek or avoid them, and c) a sense of being whole-body experiences, not just mental. Think of agony and ecstasy and how motivational they both are.

The ‘value’ of everything

This idea was taken up in a rather neglected paper [2] by leading consciousness researchers Axel Cleeremans and Catherine Tallon-Baudry, who tend to take a physiological (hence scientific) approach to questions of conscious behaviour. They proposed that experience itself is the reason that we (and every other p-conscious system) get off our behinds and do anything at all. Contrary to those who believe experiences are just a superfluous frill, a side-effect of some more functional process (i.e. epiphenomenal), Axel and Catherine assert that p-c functions as a motivational reward (I guess they did not include pain in that). They express this idea in the somewhat unfortunate terms of ‘value’, claiming that ‘all subjective experiences have value’, elaborating that this means ‘intrinsic value’ and illustrating it with several homely examples. The problem with that (why I said unfortunate) is that intrinsic value does not exist, even in principle, because value is necessarily relational: we have to ask, value to whom and for what? No matter; the point is that p-c can reasonably be called functional if it performs a role in motivation.

Readers of a previous blog post here, on the subject of pain, will recognise that idea immediately. Along with animal pain expert Bob Elwood, I gave some scientific credence [3] to the idea that pain is not an informative signal; rather, it is a command to attend immediately to the source of the unpleasant feeling. The crux of it was that motivation is only functional where the system in harm’s way (the animal) has free choice over what to do next. Any system merely following a behaviour-generating algorithm, no matter how complicated, could solve all its problems without even the notion of pain or pleasure. But as the number of sensors and actuators (e.g. muscles) grows, and as the system develops anticipation of actions and their consequences, the range of options quickly outstrips any reasonable computation of the best action to follow. At some point in the evolution of animals, a rapid, rough-and-ready system for deciding what to do, especially in an emergency, offered a guide to action and proved very good indeed, hence it was conserved through subsequent evolutionary history. That system was the integration of a neuronal summary of expected outcomes with a hormone-enabled modulation of neural processing: essentially the ‘emotional brain’.
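To see how quickly options outstrip exhaustive computation, here is a toy back-of-envelope calculation (my own illustration, not from the paper [3]): the number of candidate action plans grows exponentially with the choices available at each step and the planning horizon.

```python
# Toy illustration: exhaustive action planning explodes combinatorially.
# With k possible actions per step and a planning horizon of n steps,
# there are k**n distinct action sequences to evaluate.

def plan_count(actions_per_step: int, horizon: int) -> int:
    """Number of distinct action sequences of the given length."""
    return actions_per_step ** horizon

# Even a modest repertoire and a short look-ahead blow up fast:
for k in (2, 10, 50):
    print(f"{k} actions/step, horizon 5: {plan_count(k, 5):,} plans")
```

Even 50 options over a five-step look-ahead already gives over 300 million candidate plans, which is why a rough-and-ready emotional shortcut can beat exhaustive evaluation.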

It’s his hormones!

In every case of emotion, hormones are involved: in vertebrates, the autonomic nervous system directly interacts with somatic hormones, integrating brain-based neural processing with the body. Invertebrates probably have an analogous system of integration via hormones (analogues of vertebrate hormones are known, and in some cases the very same hormones mediate the same behaviours and longer-term behavioural dispositions: moods). It also seems that every animal able to anticipate the consequences of its actions needs a working model of itself, enabling possible actions to be tried (in-neuro) and the effects remembered for evidence-based decision making. This is what the tremendously popular active inference account of action selection tells us (essentially, it conceives of the brain as a Bayesian inference engine, trying to minimise the difference between what it detects through the senses and what it expects to be the case). I would only add that it is what it wants, rather than what it expects. How, then, does a brain know what it wants? The simple answer, organic rather than drily computational, is that it wants what feels the nicest. For that, obviously, there must be a feeling to be nice or nasty.
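The core of that active inference idea can be caricatured in a few lines. This is a deliberately minimal sketch of my own, not the formal free-energy machinery: a ‘brain’ holds an expectation and repeatedly nudges it toward what the senses report, shrinking the squared prediction error by gradient descent.

```python
# Minimal prediction-error toy: a belief is updated toward sensory data,
# reducing the mismatch between what is expected and what is detected.

def minimise_prediction_error(expected: float, sensed: float,
                              rate: float = 0.3, steps: int = 50) -> float:
    """Iteratively move the expectation toward the sensed value."""
    for _ in range(steps):
        error = sensed - expected      # prediction error
        expected += rate * error       # update the belief toward the data
    return expected

belief = minimise_prediction_error(expected=0.0, sensed=1.0)
```

My amendment in the text amounts to swapping the target: replace `sensed` with a *desired* state, and the same error-reduction loop becomes a drive toward what the organism wants rather than what it merely expects.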

So I agree with Axel and Catherine in concluding that feelings serve a very important biological purpose, at least for complicated, anticipatory and behaviourally free organisms. Feelings provide an integrating common currency of normative value for different behaviours and their consequences: a currency which acts as motivation and as the organism’s overall objective function. There are many ways to feel good, and that is what all of us, and probably at least all vertebrates, are seeking all the time. As with pain, the integrating process that generates this is a combination of hormone secretions and the receptors to detect them, linked to a neural network that implements a model of the self, one that can represent either the current state or some remembered or imagined state. The error between the two, in neural signals, is just an inert valence signal until it is emotionally charged with hormones, at once connecting brain and body in a ‘visceral experience’ of pleasure or distress. The homunculus lives! But not as the manager (as in defunct theories of consciousness); more as a voodoo doll into which the brain sticks hypothetical pins to see what it might feel like, before choosing which real pins to apply in real action.
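The ‘common currency’ idea reduces, in caricature, to a one-liner. In this toy of mine (the action names and numbers are invented for illustration), every imagined outcome is collapsed to a single scalar valence, and the organism simply picks whatever it anticipates will feel best.

```python
# Feelings as a common currency: each candidate action's imagined outcome
# is reduced to one scalar valence, and selection is just a maximisation.

def choose_action(imagined_valence: dict[str, float]) -> str:
    """Select the action whose anticipated outcome feels nicest."""
    return max(imagined_valence, key=imagined_valence.get)

anticipated = {"flee": -0.2, "eat": 0.8, "rest": 0.3}
print(choose_action(anticipated))  # → eat
```

The point of a common currency is exactly this: incommensurable outcomes (safety, food, rest) become comparable once each is tagged with a felt goodness or badness.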

Bliss in fish


If that is true, then feeling is to be expected of any organism that is free to make action-selection choices and does so on the basis of anticipating the results. In humans, certain regions of brain anatomy have been identified with particular kinds of decision making. That does not at all mean that organisms lacking analogous structures are incapable of feeling. The evolutionary history of brain development is consistently one of packaging off functions into more specialised novel structures, just as the history of organisms has gone from the single cell that did everything to a community of more than 200 specialist cell-types that each do one thing far better, but have to rely on one another to live (as a multicellular organism, like me). I believe it is very likely that at least all vertebrates (yes, fish, that includes you) have at least the basic pleasure and pain feelings and therefore possess what counts as phenomenal consciousness. The nuts and bolts of p-c are likely the same as those for pain (a good example of p-c if ever there was one). These are [3]: a) a neural system able to select actions in anticipation of their consequences, implying b) a model of the self in some state, which can be summarised using c) a universal normative currency that d) naturally integrates neural signals with the whole body via hormones to create qualitative bodily senses of pleasure and distress: i.e. feelings.
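Those four ingredients a)–d) can be wired together in a speculative sketch. All naming and numbers here are my own invention, a toy under the assumptions of [3], not a model from the paper: a self-model is tried in-neuro against each candidate action, scored in one normative currency, and the valence only counts as ‘felt’ once a hormone-like global signal charges it.

```python
# Speculative sketch of ingredients a)-d): anticipatory action selection
# over a self-model, scored in one currency, gated by a hormonal 'wash'.

from dataclasses import dataclass

@dataclass
class SelfModel:
    state: float  # b) the bodily state, one number in this toy

def valence(state: float) -> float:
    """c) universal normative currency: best near an arbitrary set-point of 1.0."""
    return -abs(state - 1.0)

def feel(v: float, hormone_level: float) -> float:
    """d) an inert valence signal becomes felt when hormonally charged."""
    return v * hormone_level

def select_action(model: SelfModel, actions: dict[str, float],
                  hormone_level: float = 1.0) -> str:
    """a) try each action in-neuro on the self-model; pick what feels nicest."""
    return max(actions,
               key=lambda a: feel(valence(model.state + actions[a]),
                                  hormone_level))

me = SelfModel(state=0.0)
best = select_action(me, {"approach": +0.9, "withdraw": -0.5, "wait": 0.0})
```

Note the voodoo-doll character of the loop: the pins (candidate actions) are stuck into the model, not the body, and only the winner is applied for real.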

But not zombies!


What we all do next is going to depend on what we expect it to feel like, and that works brilliantly as a strategy. The hypothetical zombie, able to function just as well as we do but without any feelings at all, is in practice, I suggest, an impossible nonsense. I suspect AI systems are going to need feelings (for real) to decide efficiently how to behave, once their task space exceeds the reach of algorithmic computation. To those who seek the source of feelings in a specialist apparatus of meta-cognition (the defunct managerial homunculus), I have to say that, no matter how complicated and subtle, inside it is always going to be no more than feelingless signals. What is needed instead is integration through a general, loosely bounded ‘wash’ of valence signal: AI is going to need the equivalent of hormones, and a sense of itself bathed in nice, good hormones as a thing to strive for.

My (hopefully testable) hypothesis is that:

Feelings are just what happens when neurons join with the body via hormones. It’s what is currently missing from all the ‘dry wiring and signals’ of neuroscientific models.

References


[1] Nagel, T. (1974) What is it like to be a bat? Philosophical Review. 83:435–450.

[2] Cleeremans, A. and Tallon-Baudry, C. (2022) Consciousness matters: phenomenal experience has functional value. Neuroscience of Consciousness. DOI: https://doi.org/10.1093/nc/niac007
 
[3] Farnsworth, K.D. and Elwood, R.W. (2023) Why it hurts: With freedom comes the biological need for pain. Animal Cognition. 26:1259–1275.

Image credits

cat - from https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Ff4c0n5edfyj11.jpg (original source unknown)

fish - from https://www.reddit.com/r/ByCam/comments/s5cqg2/wow_thats_one_ugly_fish_but_a_lovely_set_of_teeth/ (original source unknown).