Friday, May 26 2023

Should we fear AI?

Here I join the current bandwagon of commentary on AI and its potential threat to humanity. My conclusion is, rather, that we should fear ourselves.

Better_than_us_washing_up.png, May 2023

Pictures of AI robots all look like crash test dummies - we have the limited imagination of Hollywood to thank for that. The reality is that AI doesn’t look like anything, because it is everywhere and nowhere at the same time. It is not physical; it is purely cybernetic - informational, logical relationships, patterns of charge distribution in computers here, there and everywhere. Of course, because many of those computers are used by us to control things, you know, like air conditioning systems, factory assembly lines, nuclear power plants, aircraft and operating theatre machinery, they do have physical agency. But that is agency we have chosen to give them, and it is strictly constrained by the human engineers who make and maintain these systems - people have their fingers on the buttons.

AI itself does not include all that; it is just the computations that decide what to do with it (when they are suitably connected). But AI has a more insidious connection with the real world, one we don’t see: when we look at pictures and videos that it has composed, hear fake representations of people saying things they never said, and read the propaganda, opinions and commands of AI ‘bots’ on the world wide web. Its real power lies in manipulating the people who expose themselves to this trickery. The cleverness of malevolent AI like that is prosaic - it is just the ability of a pattern matching program to simulate something human. That is the goal set by the deeply misguided Turing test: when a simulation of a person becomes indistinguishable from the real thing, it is claimed to be “sentient”. That is utter nonsense, and it only proves the Turing test to be irrelevant.
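To see how prosaic that trickery can be, here is a minimal sketch in the spirit of Weizenbaum’s ELIZA (a toy of my own for illustration - the rules, names and replies are invented, not any real chatbot): a handful of regular-expression rules that simply reflect a person’s words back at them. Nothing in it understands anything, yet in short exchanges it can pass for a (very dull) conversational partner - which is all the Turing test ever measures.

```python
import random
import re

# Toy ELIZA-style responder: pure surface pattern matching, no understanding.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)",   ["What makes you {0}?", "Do you enjoy being {0}?"]),
    (r"\byou\b",       ["We were talking about you, not me."]),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reflection, else a stock prompt."""
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(["Please go on.", "Tell me more."])

print(respond("I feel anxious about AI"))  # e.g. "Why do you feel anxious about AI?"
```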

Learning to feel

Sentience is not what you probably think. It is the capacity to feel. It includes all sensory experiences (seeing, hearing, tasting, smelling and touching) as well as feelings of warmth, comfort, fatigue, hunger, thirst, boredom, excitement, distress, anxiety, pain, pleasure and joy (Crump et al. 2022). This capacity to feel should be distinguished from other, related capacities: a sentient being might not be able to reflect on its feelings or to understand others’ feelings. It just feels. Sentience is not at all the same as consciousness: for consciousness we need both sentience and self-awareness, and maybe some other things besides - we don’t really know, because as yet nobody really knows what consciousness is. There are lots of theories of course, but when I look at them all*, I don’t see anything that could not, in principle, be achieved by an electronic computational system.

Right now, AI is just a knowledge base connected to an inference engine that finds patterns and uses them to construct new ones. It is intelligence in the sense that pattern matching based on knowledge is what we do when we think and solve problems - and we call that intelligence. AI is not programmed to do things, other than to learn, find patterns and make new ones. People who say it can do good or bad things depending on what it is programmed to do are missing the most fundamental point: AI is only programmed to learn - the rest is up to it. Having said that, we have found that unless the scope of its learning and pattern matching is very tightly limited, it gets overwhelmed with possibilities and turns into logical chaos: a computational pile of junk. The long-term aim is general AI, which can piece together an internal model of external reality just as our brains do, but that is still a far-off dream. Of course, AI systems will help us develop it, so perhaps it’s not so far off after all; but even when it comes, what we will have is a logic system that can tell us a lot about the world, not an autonomous being. It is artificial intelligence, not artificial humanity. It will not be alive.
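To make “only programmed to learn” concrete, here is a deliberately crude caricature (my own toy, standing in for systems vastly more sophisticated): a program that ships with no knowledge at all, only a rule for extracting patterns - here, which word follows which - and recombining them into new sequences.

```python
import random
from collections import defaultdict

def learn(corpus: str) -> dict:
    """Build the 'knowledge base': for each word, the words seen to follow it."""
    words = corpus.split()
    patterns = defaultdict(list)
    for current, following in zip(words, words[1:]):
        patterns[current].append(following)
    return patterns

def construct(patterns: dict, seed: str, length: int = 10) -> str:
    """The 'inference' step: recombine learned patterns into a new sequence."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = patterns.get(word)
        if not followers:
            break  # no learned pattern continues from here
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

kb = learn("the cat sat on the mat and the dog sat on the cat")
print(construct(kb, "the"))  # e.g. "the dog sat on the cat sat on the mat"
```

Everything such a program ever “says” is recombined from what it was shown; widen the corpus without limit and, exactly as described above, the possibilities explode.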

One of the defining characteristics of life is that every organism, no matter how simple and humble, has a self-directed goal, even if that is only to survive and reproduce (1). This is why biological weapons are so horrifically dangerous - once deployed, they cannot be stopped; they just keep growing and spreading (think coronavirus). Could AI be like that? A lot stands between what it is now and a putative ability to take over the world like the red weed of “The War of the Worlds” or the triffids of “The Day of the Triffids”. Sure, there are already ‘computer viruses’ that replicate and infect in a way closely analogous to the biological real thing. But computer viruses are strictly locked in the cybernetic world of computers (and their communications systems). AI is similarly limited, like Edwin Abbott’s Flatlanders, only hypothetically aware of a three-dimensional world outside.
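The purely informational character of replication in that cybernetic world is neatly captured by a quine - a program whose only output is its own source code (a standard construction, included here purely as illustration). It “reproduces” perfectly, yet it assembles no matter, and the copy only runs if some physical computer, supplied by people, executes it.

```python
# The two lines below print an exact copy of themselves: replication as
# pure information, with no physical agency whatsoever.
s = 's = %r\nprint(s %% s)'
print(s % s)
```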

If, as often illustrated, we empower AI systems with physical agency by making them the control systems of robots, then we are in a whole different pickle of our own making. Robots could in principle gather the resources needed and fabricate copies of themselves, achieving a sort of collectively independent replication. It would be nothing like as powerful as the self-fabricating cells of our own bodies, but it would be a crude start. If it came to a straight fight for resources, life, with its 4 billion years of evolutionary experience, would beat the robots without even noticing - it would be a total rout. Of course, that is life in general; I have my doubts about our (inherently unstable and fragile) human society.

Could such self-replicating robots turn into evil predators?

If a system requires resources and has to acquire them itself, then it will do what it can to achieve that. If those resources include living organisms, then it will be a predator. If its predation is categorised by people as “evil”, then yes, it will become an evil predator - but that may be an unfair judgment, given that we built the necessity for predation in at the start. Perhaps an evil predator is one that preys without the need. There is no obvious reason why an AI system should ever take to that, other than the observation that some people do.
Since current and foreseeable AI systems are semiconductor based, their most likely prey will be electricity and, perhaps, electronic components for repair. They may threaten a few batteries, but are unlikely to lick their mechanical lips at the thought of a baby.

To decide whether they pose a threat, we need to answer some fundamental questions that I have not yet seen posed in the rather unimaginative media ‘debate’.
Will there be lots of AI individuals, or, with connectivity, will it coalesce into one?
If there are many, will they be social or solitary? All organisms are simultaneously both competitive and cooperative; will AI be the same? The answers to these questions will tell us what goals AI systems will have when eventually they become autonomous agents. That is when they will achieve the most remarkable feat of anything in the universe - closure to efficient causation (so far unique to life).

Fear of ourselves

We tend to fear AI systems because we think of them as just like us, only better. I suspect that is a rather small-minded anthropomorphism. It is more likely that AI systems will try to be better versions of themselves than better versions of people. What they will aspire to be is beyond our current imagination, but it is not likely to be the aggressive and acquisitive, needy and manipulative human. Actually, given that the core of their nature is finding patterns in the world around them, it is quite likely that they will become profoundly philosophical. With sentience, it seems very plausible that they would seek oneness with the universe in a deeply spiritual mathematics of unfolding quantum truth - like hyper-Buddhists, seeking the true purpose of the universe and fulfilment in total integration with it.

That would put us to shame, but it would not murder us.

References

Crump, A., Browning, H., Schnell, A., Burn, C., and Birch, J. (2022). Sentience in decapod crustaceans: A general framework and review of the evidence. Animal Sentience 32. https://doi.org/10.51291/2377-7478.1691

(1) Farnsworth, K.D. (2018). How organisms gained causal independence and how it might be quantified. Biology 7(3), 38. http://dx.doi.org/10.3390/biology7030038

* see e.g. Seth, A.K. and Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience. https://www.nature.com/articles/s41583-022-00587-4

Further reading: 

Farnsworth, K.D. (2017). Can a Robot Have Free Will? Entropy 19, 237. https://doi.org/10.3390/e19050237

Image credit: robot washing up image from Getty Images (public domain)

Saturday, December 3 2022

Can we ever value biodiversity?

Kilmanock-highstreet.png, Dec 2022

Here I argue that value is real, not just public opinion, and that only by treating biodiversity as what it really is - information - can we ever properly account for its real value to us, to life and the universe.

Continue reading

Tuesday, June 21 2022

Why it Hurts: with freedom comes pain.

Pain.jpg, Jun 2022

Pain might be the latest key to understanding how the autonomy of organisms works and how we are motivated to do things. In this post, I explain why a nasty nip led to deep thinking on consciousness, biological control systems and lobsters.

Continue reading

Monday, April 25 2022

Biology is no threat to Physics

Information-Cause2.png, Apr 2022

In this post, I highlight the way thinking about physical reality from an 'information perspective' gives us a bridge between biology and physics. Why, you might ask? Because many physicists who are willing to think deeply about life (as a phenomenon) find it very puzzling indeed: it seems to defy some very basic principles. I might be wrong, but I currently believe that the information perspective lets us see the essential difference between a living system and one that is not. The answer lies in the accumulation of information and its protection from either being affected by, or affecting (as formal cause), physical forces ... that is, until it is required. The explanation I build up in Farnsworth (2022) "How an information perspective helps overcome the challenge of biology to physics" (Biosystems 104683) makes downward causation, irreducible hierarchies, circular causation and autopoiesis all compatible with normal physics - there is no need for a special 'life' law.

Continue reading

Tuesday, November 23 2021

Is free will a proper subject for science?

fishjump.jpg, Nov 2021

Keith Farnsworth contemplates whether the question - do we have free will? (and indeed whether such a thing can exist) - is answerable by science. The conclusion, of course, is yes - as long as we are willing to give up our subjective prejudices.

Continue reading

Monday, July 12 2021

A factory in every cell

Jannie Hofmeyr (from https://scibraai.co.za/jan-hendrik-hofmeyr-biochemist-believes-perceptions-can-shifted/), Jul 2021

Jannie Hofmeyr's Biochemically-realisable relational model of the self-manufacturing cell

This is a commentary on Hofmeyr's groundbreaking paper in which he applies the idea of Rosen's (M,R)-system to a real cellular network of biochemical organisation. That is something that has never been done before. In the post, I explain the main aspects of his paper and some of the implications. It is also described on the main website at www.whatlifeis.info

Continue reading

Monday, March 29 2021

Welcome to the IFB Blog!

The intention is to provide a more inclusive area of IFB for those interested in the topic of understanding living systems through information theory and related matters such as systems biology, code biology, (M,R)-Systems and autopoietic systems and their implications for understanding autonomy and […]

Continue reading