Ouch!

It is time to start applying some of the theoretical discoveries highlighted in www.whatlifeis.info. As usual, my inspiration comes from a brief and trivial experience: nipping my finger between two steel bars as I carried them from the car. It hurt, but I did not look, stop, or drop the offending items; I carried on and laid them down properly before muttering a word normally written as #**! on polite blogs like this. The thing is, I had a choice, and I prioritised doing it properly because the alternatives could have led to more significant accidents. It reminds me of the school rugby player I saw who, after a nasty fall near the end of a tightly contested match, carried on playing for six more minutes, then collapsed at the final whistle, rolling around in agony: his ankle was broken. You probably have your own experience of heroic stoicism like that.

Still, pain is commonly explained as a warning signal that causes us to withdraw and protect an injured body part. If that really is its function, then it is not a great success. Besides, a reflex withdrawal is all we need to achieve the required effect (and even nematodes can manage that). We don’t need pain to get us quickly out of trouble and we don’t need a signal (information) to be nasty, we just need it to be informative, but pain can be very nasty indeed!
Why?
[Image: Pain.jpg, Jun 2022]
All animals with a nervous system have nociceptors to detect actual or potential injury and these do send signals to other nerves to, at the most basic level, generate a reflex withdrawal reaction. As a rule, climbing through the evolutionary tree from nematodes towards primates, the more basic behaviour generating systems are preserved, but augmented, so we retain the emergency system of reflex withdrawal. But the nociceptive signal that triggers it is not pain.
To be clear, pain is an experience with a normative character (it is bad), a valence (it has a strength to it) and a characteristic feeling - all hallmarks of consciousness and of what philosophers call 'qualia': the ineffable subjective feelings that make the world for us - what it is like to touch frogspawn, to see the colour red, or to hear a baby cry. More formally, the International Association for the Study of Pain (everyone follows it) defines pain as

"An unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage." (Raja et al. 2020).

Motivation

So again, why do we need an “unpleasant experience”? Why can it not be neutral, or even nice (for pity’s sake)? The answer, to put it bluntly, is that it has to be bad to make us really want to stop it. That seems obvious, but it provides the best clue as to the biological value of pain and the reason it has persisted as an evolutionary novelty (even though there is still much debate over which animals can feel pain and which cannot, most scientists accept it for mammals, probably most for vertebrates in general, and evidence mounts for at least some invertebrates - more on that shortly). The biological value of pain is not as a message, but rather as an imperative - a command - to attend to what is causing the problem, immediately! The valence indicates just how urgent it is, based on how bad things could be. The nastiness of it makes us really want to stop it from happening. But (and here is the key point) these motivations are only relevant for an organism that has alternatives to attending to the pain and the freedom to pursue them. If, like me, an animal can choose to carry on regardless, then it needs pain to drive the point home: deal with this now!

So pain is a necessary part of action selection - the system that determines what behaviour is enacted next - but only in organisms with the ability to choose for themselves what to do. Choosing for oneself here means being the author of one’s own actions; it strictly rules out following an inbuilt algorithm, however sophisticated. So if the brain of an organism just runs a computation (information processing) that takes input signals from nociceptors and selects an action that will deal with the injury, suppressing alternatives unless a more pressing matter is at hand (escape, or winning the match), then it doesn’t need pain. If, on the other hand, the organism is free to choose what it does because its action-selection algorithm has indeterminate outcomes, then it does indeed need pain. Crucial to understanding this is what we mean by ‘the organism’. It is emphatically not a particular brain circuit, nor the whole brain, nor any other part, but the integrated whole, the top level of organisation, what I call elsewhere the “Kantian whole” (explained on the autonomy page) that gives identity to the organism. The motivation of pain is for the top level of the hierarchy of organisation, the only level to which notions of free will can be applied. Attributing the will to any lower level, such as an action-selection module or an emotion centre, is just another homunculus fallacy (see previous post on 'free will').
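To make the contrast concrete, here is a toy sketch (entirely hypothetical, with invented names and numbers) of the purely algorithmic kind of action selection just described. The nociceptive signal is treated as plain information with an urgency weight; nothing in the computation needs to be 'nasty', because the outcome is fully determined by the inputs:

```python
# Toy illustration only: deterministic action selection without pain.
# A nociceptive input is just one more weighted signal; the algorithm
# simply picks whichever demand is currently most urgent.

def select_action(signals):
    """Return the action with the highest urgency.

    signals: dict mapping action name -> urgency (0..1).
    The injury signal competes on equal terms with everything else.
    """
    return max(signals, key=signals.get)

# An injury registers, but a more pressing matter can still win out,
# just as with the rugby player finishing the match.
inputs = {"withdraw_limb": 0.6, "escape_predator": 0.9, "feed": 0.2}
print(select_action(inputs))  # prints: escape_predator
```

The point of the sketch is that such a system handles injury perfectly well with informative signals alone; it is only when the outcome is not fixed by the inputs that something like pain would be needed to tip the choice.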

Free choice and consciousness

Of course, pain, like every other feeling and thought, is really a pattern of neural excitations, but because it concerns organisation at the top level it must be accessible to all parts of the brain (and perhaps beyond, if embodiment is true). This global accessibility is one of the requirements for consciousness, as is the role of integrating information (see Integrated Information Theory); hence pain’s ability to recruit the attention of every part of one’s being is at least consistent with its being a feature of consciousness. Most philosophers would not have it any other way - because pain is ‘qualia’, it must be a part of conscious experience. However, my point is not the necessity for consciousness. It is that the ability to choose a future action by weighing up the alternatives before deciding provides the reason for pain, irrespective of how conscious the organism must be to make it real. If I’m right, then we can guess that organisms which have that sort of anticipatory action-selection also have painful experiences.

But how is this sort of action-selection created? We don’t really know yet, but for sure it must entail a way of predicting what might happen in future scenarios, and there must be a way to compare these (in a common currency) in order to choose the best (see Budaev et al. 2020 for fascinating detail). The means of prediction, in turn, must use a model of the self and one’s environment. This model could be a representation, like the one you get when you remember your way around a building you have not visited for a while. That requires an unknown degree of flexibility in computing, which is why I specified a universal Turing machine (the best you can get) when describing the role of a self-model in producing free will (Farnsworth 2017), though that is not especially difficult to achieve with a neural network. On the other hand, it might be just a statistical inference model, as followers of Karl Friston’s sophisticated perception-processing algorithms (using free energy minimisation) believe (e.g. Pezzulo et al. 2022).

Whatever it is, it must be accessible to the whole system that organises behaviour and it must produce predictions that can be appraised by the whole in terms of how good they seem. I speculate that this internal model generates scenarios that appear as (hypothetical) memories and that the integrated whole (the organism) has a feeling about each (probably mediated by emotion-evoking neurotransmitters such as serotonin and dopamine), which bathes the whole system in a mood of attraction or repulsion. What to do next is whatever feels nicest - even if that is escape from what currently feels so bad. For example, imagining (the model at work) getting a good meal may be motivation enough to overcome the pain of a sore tooth for a hungry lion gripping a zebra. For this to work, action-selection must be prospective and based on subjective feelings, which serve as both the integrator of information and the common currency. It seems to me that this does not have to be a conscious experience in the full sense of self-awareness. Feelings can integrate action selection over multiple time-scales and with relevant levels of motivation, without one having to think about it. That is a good thing: first because it is reasonably quick, second because it does not distract the organism with complicated pondering, third because it leaves open the choice of what to do next (feel the pain and do it anyway) and fourth because it is available to non-sapient beings… such as lobsters (though some strongly disagree with that bit - see e.g. Key and Brown 2018).

Getting real

[Image: BobElwood.jpg, Jun 2022]
I have teamed up with emeritus professor Bob Elwood to properly construct this argument, with good examples from the empirical literature. Bob is well suited to the task: he is the scientist who demonstrated that decapods (e.g. lobsters and crabs) show all the known signs of feeling pain (Elwood 2012, 2019, 2021). Not that we can prove they have that subjective experience, but their behaviour under carefully managed experimental circumstances is entirely consistent with the idea that they do indeed feel pain (and I note that those experiments did not involve really hurting them - just in case).

I expect my ideas will come in for a lot of flak this time, but you know what they say: no pain, no gain.

A preprint of the paper is now available here
A PDF copy of an accompanying lecture is here

References

Sergey Budaev, Tore S Kristiansen, Jarl Giske, and Sigrunn Eliassen. Computational animal welfare: towards cognitive architecture models of animal sentience, emotion and wellbeing. Royal Society open science, 7(12):201886, 2020.

Robert W. Elwood. Evidence for pain in decapod crustaceans. Animal Welfare, 21(2):23–27, 2012. doi: 10.7120/096272812X13353700593365.

Robert W. Elwood. Discrimination between nociceptive reflexes and more complex responses consistent with pain in crustaceans. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1785), 2019. doi: 10.1098/rstb.2019.0368.

Robert W. Elwood. Potential pain in fish and decapods: Similar experimental approaches and similar results. Frontiers in Veterinary Science, 8, 2021. doi: 10.3389/fvets.2021.631151.

Keith D. Farnsworth. Can a robot have free will? Entropy, 19(5):237, 2017. doi: 10.3390/e19050237.

Brian Key and Deborah Brown. Designing brains for pain: Human to mollusc. Frontiers in Physiology, 9, 2018. doi: 10.3389/fphys.2018.01027.

Giovanni Pezzulo, Thomas Parr, and Karl Friston. The evolution of brain architectures for predictive coding and active inference. Philosophical Transactions of the Royal Society B: Biological Sciences, 377(1844), 2022. doi: 10.1098/rstb.2020.0531.

S. N. Raja, D. B. Carr, M. Cohen, N. B. Finnerup, H. Flor, S. Gibson, F. J. Keefe, J. S. Mogil, M. Ringkamp, K. A. Sluka, X. J. Song, B. Stevens, M. D. Sullivan, P. R. Tutelman, T. Ushida, and K. Vader. The revised International Association for the Study of Pain definition of pain: concepts, challenges, and compromises. Pain, 161(9):1976–1982, 2020. doi: 10.1097/j.pain.0000000000001939.

Pain image from https://swindonsportstherapy.co.uk/pain-science/

Bob Elwood's press coverage:
https://www.cbc.ca/news/canada/prince-edward-island/pei-lobster-feelings-1.4489691