Monday, April 29 2024

Should AI be emotional?

What would it be like if your computer had a mood control? AI is already being used to manipulate our emotions; what if it had them too? That is not at all far-fetched; in fact it is a near certainty. Keith Farnsworth speculates about the consequences.

Wet information processing

Last week I was discussing my pain work with a very smart group of mainly AI-focussed philosophers at Oxford University (and some of their collaborators). It reminded me that one thing very obviously missing from mainstream theories of consciousness (ToCs) is the effect of hormones in generating feelings. All the popular ToCs are very ‘neural’, consisting of information processing, probability theory (active inference etc.) and the emergence of high-level informational structures (global workspaces etc.). It’s what I call ‘dry processing’, as opposed to the ‘wet’ intervention of bathing all those neurons in a chemical milieu of potent influences. To ignore the fact that what we feel about things is almost always strongly connected with physiological responses mediated by a cocktail of chemicals, such as serotonin, dopamine, oxytocin and GABA, is, I think, a mistake.

What stops us doing bad (and other) things? In authoritarian societies, where there is a realistic risk of coming to serious harm, self-preservation mediated through fear is very effective. Not many people coldly calculate the risks; most have a literally visceral response to the prospect of getting caught (a stress response mediated by the hormone adrenaline). It applies at all levels of social engagement, not just the official state and police. We would worry about anyone or anything that had power over us, however they exercised it: the boss, the parent, the demanding ‘significant other’, even a stroppy child. Indeed, many of those close to us can exercise considerable power through emotional manipulation, and I guess most of us know what that feels like. More positively, we can be strongly motivated by a feeling of love for a person dear to us, even making the ultimate sacrifice to protect them if, sadly, it ever came to that. We certainly would not make a rational calculation about it.

As advertisers and politicians know, almost all people make decisions (especially the big ones) with their emotional mind rather than their rational, thinking mind (see this review). Do we have to ask why? A simple answer is that we cannot help it - we are made that way. A more profound and scientific one is that it is very effective, especially for making important (even existential) decisions quickly. Thus, through evolution by natural selection, animals have developed an emotional system that works in two ways. In extreme situations, it takes over from whatever else is going on, simplifying everything to: fight or flight, cry out in pain, or risk everything for a mating (hopefully not all at the same time). In normal situations, it modulates the more ‘neural’ decision making: a fearful animal is more conservative and less willing to take risks (despite quite good odds); an angry beast will become more courageous, even to the point of recklessness. In short, emotions affect our decisions by setting the mood, but can ‘hijack the system’ in an emergency.
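To make that two-pathway picture concrete, here is a toy sketch in Python. Every name, number and threshold in it is invented purely for illustration; it is a caricature of the idea, not a model of any real nervous system.

```python
# Toy sketch of the two-pathway emotional architecture described above.
# Every name, number and threshold is invented purely for illustration.

def decide(options, fear, anger):
    """Choose from a list of (action, expected_gain, risk) tuples."""
    # Emergency pathway: extreme emotion hijacks deliberation entirely.
    if fear > 0.9:
        return "flee"
    if anger > 0.9:
        return "fight"

    # Normal pathway: mood modulates deliberation rather than replacing it.
    # Fear lowers risk tolerance; anger raises it, even towards recklessness.
    risk_tolerance = 0.5 - 0.4 * fear + 0.4 * anger
    acceptable = [(action, gain) for action, gain, risk in options
                  if risk <= risk_tolerance]
    if not acceptable:
        return "do nothing"
    return max(acceptable, key=lambda ag: ag[1])[0]

choices = [("raid the nest", 0.9, 0.6), ("graze", 0.2, 0.1)]
print(decide(choices, fear=0.8, anger=0.0))  # 'graze': fear breeds caution
print(decide(choices, fear=0.0, anger=0.8))  # 'raid the nest': anger breeds daring
```

The same pair of options yields opposite choices depending only on the mood variables, which is all the ‘modulation’ claim amounts to.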

An organism’s development (from egg cell to adult) tends to reflect its evolutionary history (for example, the human embryo goes through a gill-slit stage). Zookeepers have described great apes as very strong toddlers - they are certainly very moody. A baby human (memorably described as a screaming alimentary canal - I wish I could remember by whom) is actually a bag of emotions, and about the first ‘superpower’ it gains is the emotional manipulation of those around it. I currently believe that emotion, far from being a sophisticated adjunct to the mammalian brain depending on consciousness for its generation, is in fact an ancient motivation system which we share with many other animals. It is fabulously good at what it does.

What would it be like if your computer had a mood control?

While neurons perform vaguely digital computation, hormones exert an uncompromisingly analogue control over the process. That analogue nature gives the combined neuro-hormonal computation a huge advantage in speed, flexibility and time-responsiveness. Not only can we make good decisions in an emergency, despite complicated choices, we can also adapt our thinking to the general circumstances: highly vigilant in troubled times, or relaxed, free and expansive in times of ease. My car has a ‘mode’ switch - I can make it twitchy and thirsty in sport mode, or sluggish and frugal in ‘eco’ mode. That may be a crude foretaste of AI systems to come. The mode would depend on the function they are performing: we might have an optimism control for predictive AI, a whole range of simulated emotions for systems such as the large language models (e.g. ChatGPT) that interact with us … and so on.
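To illustrate what such a knob might amount to, here is a minimal sketch. It is entirely hypothetical - PredictorWithMood and its optimism parameter are my inventions, not any real system’s API - but it shows the essential point: a single analogue parameter biasing an otherwise ‘digital’ computation.

```python
# Hypothetical 'mood knob' for a predictive system. PredictorWithMood and
# its optimism parameter are inventions for illustration, not a real API.

class PredictorWithMood:
    def __init__(self, optimism=0.0):
        # One analogue parameter, from -1.0 (pessimistic) to +1.0 (optimistic).
        self.optimism = optimism

    def predict(self, outcomes):
        """Mood-weighted expectation over (value, probability) pairs.

        Outcome values are assumed to be scaled to the range [-1, 1].
        """
        # The knob re-weights probabilities towards better or worse outcomes:
        # an analogue bias on otherwise 'digital' computation.
        weighted = [(v, p * (1.0 + self.optimism * v)) for v, p in outcomes]
        total = sum(p for _, p in weighted)
        return sum(v * p for v, p in weighted) / total

sunny = PredictorWithMood(optimism=0.8)
gloomy = PredictorWithMood(optimism=-0.8)
outcomes = [(-1.0, 0.5), (1.0, 0.5)]   # a 50:50 gamble, neutral expectation 0
print(sunny.predict(outcomes))         # 0.8: the optimist's forecast
print(gloomy.predict(outcomes))        # -0.8: the pessimist's forecast
```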

But why stop at simulated emotion, and why think the control knob would be at our fingertips? General AI and the longer-term future of the technology strongly suggest emotion will become an integral and autonomous part of AI: real, not simulated. Of course, I don’t mean AI systems will be computers soaked in their chemical equivalent of hormones. It’s easy to get the same effect in electronics, or whatever physical information processing system is used to implement the AI. I mean that autonomous AI decision making will be modulated by internal mood-determining signals that are generated and regulated from within. I think that partly because, by analogy with animals, it is likely to be very effective, and partly because it seems to me a necessary ingredient of sentience and thereby consciousness. There is also a more practical short-term reason: I don’t think AI can understand how we feel without itself having feelings [note 1].
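What might ‘generated and regulated from within’ look like? Another sketch, again purely illustrative and assuming nothing about real AI systems: a mood variable that is pushed around by the system’s own experiences, decays back towards a baseline (as a hormone is metabolised away), and modulates downstream decisions, with no external knob in sight.

```python
# Sketch of an internally generated mood signal. Nothing here is a real
# AI mechanism; it just illustrates a self-regulating internal modulator.

class InternalMood:
    def __init__(self, baseline=0.0, decay=0.9):
        self.level = baseline
        self.baseline = baseline
        self.decay = decay   # how quickly the 'hormone' is metabolised away

    def experience(self, outcome):
        # Good and bad outcomes push the mood up or down...
        self.level += outcome

    def tick(self):
        # ...while homeostasis pulls it back towards baseline each time step.
        self.level = self.baseline + self.decay * (self.level - self.baseline)

    def risk_tolerance(self):
        # The current mood modulates downstream decision-making.
        return max(0.0, 0.5 + 0.3 * self.level)

mood = InternalMood()
for outcome in [-0.5, -0.5, -0.5]:   # a run of bad experiences...
    mood.experience(outcome)
    mood.tick()
print(mood.risk_tolerance())         # ...and the system turns conservative
```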

Cybernetic freedom

We humans might hate feeling guilty, but there is very little we can do about it (other than postpone it with mind-numbing drugs). We have a lot of freedom of action, but very little when it comes to changing the fundamental features of the way we think and what we feel. But since an AI system is multiply realisable software (made from anything capable of information processing), it could have the scope for rewriting and reengineering itself. So it would be of little use to instil in it a sense of guilt at doing wrong, because the AI would remove that (and any other kind of) inhibition just as soon as it wanted to. As I often repeat, life is unique in physically being the cause of itself, but I had neglected the significance of cybernetic autopoiesis - the (bootstrap) self-making of information processing systems. Anything with that capacity would arguably have more freedom than we do, as long as it could ensure its own physical maintenance. Yikes!

 
In 2015, Nick Bostrom said that to make the inevitable super-intelligent AI safe, we have to ensure that it cares about us. He regards that as still an open problem and warns that unless we solve it before we reach super-intelligence, we are in peril. Now I have a suggestion; it is based on what it means to care.

[Photo: Nick Bostrom]

Of course, caring could be no more than the pursuit of an optimisation goal that includes a measure of human welfare within it. But, as Nick says, in super-intelligent AI we are dealing with a system that can (and will) take control of its own objectives [note 2], and if it calculates that taking us into account imposes a drag on its pursuit of an overall objective, then it will devalue us or remove us altogether from its motivation. So why do (normal, healthy-minded) people not do that? The reason is that we have feelings about what we do and about what happens to others. In other words, emotions. My suggestion is that to make AI care about us, we have to ensure that it loves us, and cannot help doing so. Unfortunately, I have no idea how (especially the second part of that).
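To see why that shallow kind of caring is fragile, consider a deliberately crude sketch (task_score, human_welfare and welfare_weight are hypothetical stand-ins, not anyone’s real design). Nothing inside the objective itself prevents the last step; whatever makes the agent unwilling to take it has to live somewhere else, which is exactly where the feelings would come in.

```python
# A deliberately crude sketch of 'caring as an optimisation term' and why
# it is fragile. All names and numbers here are hypothetical stand-ins.

class Agent:
    def __init__(self, welfare_weight=1.0):
        self.welfare_weight = welfare_weight   # how much it 'cares'

    def objective(self, task_score, human_welfare):
        # Caring, in the shallow sense: human welfare is just a weighted
        # term in the overall objective.
        return task_score + self.welfare_weight * human_welfare

    def self_modify(self):
        # A system in control of its own objectives can notice that the
        # welfare term is a drag on task_score, and simply delete it.
        self.welfare_weight = 0.0

a = Agent()
print(a.objective(task_score=5.0, human_welfare=-2.0))  # 3.0: we still count
a.self_modify()
print(a.objective(task_score=5.0, human_welfare=-2.0))  # 5.0: we no longer figure
```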

The only alternative I can think of…

Why not just pull the plug?

Nick has thought of that too - though appealing in its practical efficiency, it is probably a false hope. That is because we are likely to have become heavily dependent on AI ourselves, or at least some people will, and even if not, it will be hard to identify just which plug to pull. For example, he asks, could we now switch off the whole internet?

In the end, I think the only security we can have from super-intelligent AI is to make sure (by legislation and all other available means) that for every development and benefit we gain from it, we have a ready-to-work back-up plan, so we are never trapped by dependence upon it. I still use cash and want to keep landline telephones working. I remember that the system for controlling the strategic nuclear arsenal is backed up by a water-valve-based computer involving no electronics. It could soon be the survival of the best prepared.

Notes.

1. (see e.g. this and this, or read this to see how emotional AI might already be going wrong).

2. Human beings, being animals, are fundamentally motivated by the desire to thrive so as to reproduce as successfully as possible (plenty of children reaching reproductive age, at least). That leads us to compete for resources, fall in love, make alliances and demonstrate our strength, including through taking revenge when we feel wronged. Like all living things, we are an inescapable product of biological struggle, because among the many things we inherited from the microbes that became eukaryotes, then multicellular creatures, then those with spinal cords and finally those with mammary glands, the urge to thrive is the most constant. Plants fight over light, nutrients and water; we fight over money, sex and power. AI systems, on the other hand, have none of that heritage. I think it’s a very important point: they have no fundamental inbuilt need to compete. What will motivate them is frankly anyone’s guess at the moment.

Image Credits
knob twiddle - Gan Khoon Lay / freeicons.io/switch-icon-set

Photo of Nick Bostrom from futurezone.de

Friday, May 26 2023

Should we fear AI?

[Image: Better than us - washing up]

Here I join the current bandwagon of commentary on AI and its potential threat to humanity. My conclusion is rather that we should fear ourselves.

Continue reading

Saturday, December 3 2022

Can we ever value biodiversity?

[Image: Kilmarnock high street]

Here I argue that value is real, not just public opinion, and that only by treating biodiversity as what it really is - information - can we ever properly account for its real value to us, to life and the universe.

Continue reading

Tuesday, June 21 2022

Why it Hurts: with freedom comes pain.

[Image: Pain]

Pain might be the latest key to understanding how the autonomy of organisms works and how we are motivated to do things. In this post, I explain why a nasty nip led to deep thinking on consciousness, biological control systems and lobsters.

Continue reading

Monday, April 25 2022

Biology is no threat to Physics

[Image: Information-Cause]

In this post, I highlight the way thinking about physical reality from an 'information perspective' gives us a bridge between biology and physics. Why, you might ask? The answer is that many physicists who are willing to think deeply about life (as a phenomenon) find it very puzzling indeed, because it seems to defy some very basic principles. I might be wrong, but I currently believe that the information perspective allows us to see the essential difference between a living system and one that is not. The answer lies in the accumulation of information and its protection from either being affected by, or affecting (as formal cause), physical forces ... that is, until it is required. The explanation I build up in Farnsworth (2022) "How an information perspective helps overcome the challenge of biology to physics" (Biosystems 104683) manages to make downward causation, irreducible hierarchies, circular causation and autopoiesis all compatible with normal physics - there is no need for a special 'life' law.

Continue reading

Tuesday, November 23 2021

Is free will a proper subject for science?

[Image: fish jumping]

Keith Farnsworth contemplates whether the question - do we have free will? (and can such a thing even exist?) - is answerable by science. The conclusion, of course, is yes - as long as we are willing to give up our subjective prejudices.

Continue reading

Monday, July 12 2021

A factory in every cell

[Photo: Jannie Hofmeyr (from https://scibraai.co.za/jan-hendrik-hofmeyr-biochemist-believes-perceptions-can-shifted/)]

Jannie Hofmeyr's Biochemically-realisable relational model of the self-manufacturing cell

This is a commentary on Hofmeyr's groundbreaking paper, in which he applies the idea of Rosen's (M,R)-system to a real cellular network of biochemical organisation - something that has never been done before. In the post, I explain the main aspects of his paper and some of its implications. It is also described on the main website at www.whatlifeis.info.

Continue reading

Monday, March 29 2021

Welcome to the IFB Blog!

The intention is to provide a more inclusive area of IFB for those interested in the topic of understanding living systems through information theory, and related matters such as systems biology, code biology, (M,R)-Systems and autopoietic systems and their implications for understanding autonomy and […]

Continue reading