Is a robot a person?

https://www.youtube.com/watch?v=gx12eBw_rfo&t=2s

What does it mean to be a person?

Is being a human and being a person the same thing? Or could you be one, but not the other?

 


Human vs. person - is there a difference?

The definition of a human is pretty straightforward - it just means being part of the human race (Homo sapiens if you want to get scientific about it). It’s a description of our species. But figuring out exactly what it means to be a person is a whole different ball game.

Philosophers like Rene Descartes and John Locke reckon that a person is “any human (or non-human) being who possesses continuous consciousness over time and is capable of framing representations about the world, formulating plans and acting on them.” Bit wordy, but basically a being that can understand the world and make decisions in it. Australian philosopher Peter Singer goes a step further and defines a person as a conscious, thinking being which knows that it’s a person - so he brings in the idea of self-awareness.

British theologian Thomas White adds the criteria of feelings into the mix - he has eight things he thinks make a person, one of which is ‘has emotions’. And American professor Harry G Frankfurt thinks it’s to do with having desires - wanting things. Not just practical things like food or sleep (called ‘first-order desires’) which even animals and plants have, but things like wanting to be loved, or wanting to change something about yourself (‘second-order desires’).

So even the big thinkers of the world, philosophers whose job it is to help us solve these puzzles, can’t quite agree on what it means to be a person.

 

Descartes, Locke and Singer

Rene Descartes, John Locke and Peter Singer

Does human always = person?

People also disagree about whether being a human automatically makes you a person. Certainly in history that hasn’t always been the case - for hundreds of years black slaves were considered property, not people. They were definitely human, but they had no rights and could be bought and sold like possessions.

And even though slavery is illegal now, the same sorts of questions still hang around with issues like abortion. An unborn foetus is certainly human, but is it a person? Or only a potential person? In many countries, abortion is legal because an unborn baby isn’t considered a person yet - it’s the property of the woman who carries it, so she can choose what to do with it.

Some people feel it can be dangerous to say that anyone human isn’t also a person - they feel it poses a threat to the well-being of some of the most vulnerable humans (like children, or those who are sick or elderly). Certainly it’s this line of argument that helps some bioethicists justify the killing of Alzheimer's patients or kids born with disabilities.

But others think it’s a helpful question when you’re tackling tough ethical dilemmas. Take someone who’s brain-dead and on life support - it would be easier to switch off their machines and use their organs to save someone else’s life (which some people would consider a good thing) if you don’t think they’re actually a person anymore.

What about the apes?

Whichever way you go, the bigger question (in light of our topic anyway) is can anything other than a human also be a person? If they aren’t the exact same thing, then it’s possible that you could be one and not the other. Like how all apples are fruit, but not all fruits are apples. Perhaps all humans are persons, but not all persons are human.

In fact, that’s just what Personhood Theory is all about. It’s a branch of Western philosophy that takes the definition of a person beyond just ‘a human’ and suggests that anything that’s sentient (that has consciousness) could be a person too.


For example, some researchers working with dolphins and apes have argued that these animals should be treated as people. Some apes can communicate using sign language, so are they any less of a person than a human who’s deaf? Or would a fully-developed, conscious and self-aware ape just have ‘ape-hood’ rather than personhood? Elephants are smart too and have intricate social systems - there’s even research to suggest they grieve when a member of their herd dies. So using White’s idea of a person, that would tick the ‘has emotions’ box.

And what about spiritual beings? In Christianity, God, Jesus and the Holy Spirit are called the three ‘persons’ of the Trinity - but they’re not considered human. And in New Zealand the Whanganui River is greatly respected by the local Māori people as Te Awa Tupua (translated ‘an integrated, living whole’). In 2017 the government gave the sacred river legal personhood in order to protect it. Or what about alien life - if it exists? If we came across other conscious, self-aware beings who could communicate with us, could they be persons too?

Can we rule robots out?

So if we open things up like this, does that mean something with Artificial Intelligence (AI) could tick the boxes for being a person? Although they can never be part of the same species as us - they’ll never be human - machines might still be able to share our personhood. Certainly AI is getting pretty smart and there are robots who can actually make decisions (that was Descartes’ criterion) beyond instructions that were specifically programmed into them. But does that mean they’re actually conscious? Or self-aware? And what about emotions - will robots ever be able to genuinely love another being, or feel sad, angry or confused?

https://www.youtube.com/watch?v=DHyUYg8X31c

Some would argue that the very name ‘Artificial Intelligence’ rules robots out - any intelligence robots have is fake, it’s been programmed into them. But doesn’t ‘programmed’ really just mean ‘taught’? And that’s the same way humans take in new information and skills.

So could a robot ever pass for a human?

There's a test that will help you answer this question, created by the famous computer scientist Alan Turing in 1950. But is it a good test? Or can you think of any better ways to spot the differences between a human and a robot? Take a look at the video below and make your call.

https://www.youtube.com/watch?v=3wLqsRLvV-c

Do you know what robots can and can’t do?

Do robots have brains?

We use the phrase 'Artificial Intelligence', but just how smart are most robots? And where does their intelligence live - do they have brains like we do? Well, they will do soon if the robotics team at the Massachusetts Institute of Technology (MIT) are successful...

https://www.youtube.com/watch?v=Nddfda_cN04

 

Can robots learn to speak a language?

This question is better asked if we flip it on its head. How do people learn to speak a language, and can we get a robot to learn like a person does? Because ultimately we’re not asking if it can remember words - we’re asking if a robot can ever learn what a word means.

 


How your brain works

Pointer on soma of neuron
Neural tissue 200x magnification - Pointer on soma (cell body) of neuron

Your brain is made up of billions of cells called neurons. Each neuron has a small and specific task - like identifying a pattern or recognising a colour or sound. All these different neurons are continuously working together, each doing a small thing, then connecting together to do a big thing - like remembering a face. The neurons doing the small things then connect to neurons that combine all those small things together, and this goes on layer after layer after layer.

Neural Network Model for a human

When a child first recognises a shape with four equal sides and hears the word ‘square’ associated with the image, it creates a memory. The more times they hear the word ‘square’ associated with the shape, the stronger their neurons connect the word with the image. If the child calls a square ‘a circle’, someone will tell them they’ve made a mistake - so they not only learn what a square is but also what it isn’t.

Soon, every time a child hears the word ‘square’ they think of the image of a square, and every time they see an image of a square they think of the word ‘square’. The child has now fully learned this word and understands its meaning.

How a robot brain can ‘think’ like a human brain

Neural network

'Deep learning' is a branch of AI which mimics the brain’s activity using artificial neural networks. Deep learning is particularly good at recognising patterns. Instead of neurons, a robot has processing units. Each processing unit is given a small and specific task. We can give the processing units the same small tasks as the neurons that they’re imitating. These processing units are connected together in layers just like the neurons in your brain, and these layers combine to create a function which we call an 'Artificial Neural Network'.
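To make the layering idea concrete, here’s a minimal sketch in plain Python. All the numbers and the ‘squashing’ function are made-up illustrations, not taken from any real system - the point is just that each processing unit does one small job (a weighted sum squashed to a value between 0 and 1), and units are stacked in layers with one layer’s outputs feeding the next:

```python
import math

def sigmoid(x):
    # Squashes any number into the range 0 to 1 - one unit's "small task".
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: every unit combines all the inputs from the layer below."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two stacked layers: small tasks in layer 1 feed the bigger task in layer 2.
inputs = [0.5, 0.8]                                    # e.g. simple pixel features
hidden = layer(inputs, [[0.9, -0.4], [0.3, 0.7]], [0.0, 0.1])
output = layer(hidden, [[1.2, -0.8]], [0.0])
print(output)  # a single number between 0 and 1
```

Real networks work the same way, just with many more units, many more layers, and weights that are learned rather than written in by hand.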

For a robot to learn what the word ‘face’ means it would have to know what an image of a face was. Here’s an example of what that Artificial Neural Network might look like:

Artificial Neural Network

To get a robot to associate the word ‘face’ with multiple images of different faces, we upload an image of a face and tell the robot that this image is called a ‘face’. Then we upload multiple unrelated images and tell it that all of these are called ‘not face’. We then show it lots of different images, asking it to identify them as ‘face’ or ‘not face’, and each time the robot makes a mistake it’s coded to tune its variables until it always predicts the right answer. Once it’s tuned itself to be correct every time, it’s made a connection - same as the child with the square, it’s learned both the word and its meaning.
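That ‘tune its variables after each mistake’ loop can be written in a few lines. This toy sketch uses a single processing unit (a perceptron) and two made-up numbers per ‘image’ - a stand-in for real pixel data, purely to show the mechanism:

```python
import random

random.seed(0)

# Toy "images": two invented features per example (purely illustrative -
# a real system would learn its features from actual pixels).
faces     = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.85)]   # labelled 'face' (1)
not_faces = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.3)]   # labelled 'not face' (0)
data = [(x, 1) for x in faces] + [(x, 0) for x in not_faces]

# Start with random variables (weights); each mistake nudges them.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # how big each nudge is

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(100):                  # keep tuning until it's always right
    mistakes = 0
    for x, label in data:
        error = label - predict(x)    # 0 when the prediction was correct
        if error:
            mistakes += 1
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    if mistakes == 0:
        break

print(all(predict(x) == label for x, label in data))  # True once it stops making mistakes
```

Deep learning systems do exactly this kind of mistake-driven tuning, just across millions of weights and many layers at once.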

Dual understanding

Research is often done with kids and robots alongside each other - the same word tasks are given to both children and computers, then their responses are compared. This allows people to better tune the robot to match and mimic the human learning process. It also helps the researchers to better understand how humans learn and take in new information - and that in turn helps them get even better at teaching their robots. It’s just one giant circle of learning!

In the podcast below, Professor Plunkett (Oxford University) explains how neuroscientists are using Artificial Intelligence to imitate processes in human brains, helping us to better understand how small children start to learn language.

https://www.youtube.com/watch?v=YWi95zSqzh0

The future

Programming a robot to be able to learn a language is just the first step. Nando de Freitas, Professor of Computer Science at the University of Oxford, believes that if we can teach a robot to truly understand the meaning of a word then that opens up a whole new kind of AI understanding. He thinks that eventually we’ll be able to programme robots to teach themselves new things - just as humans can learn on their own through trial and error.

But would creating a robot that knows how to learn, act and plan without any human interference be a good idea? That’s a whole other question.

Can robots be creative?

Whether it's painting or sculpting an amazing work of art, creating a beautiful song, or writing a thrilling or moving story, we humans are a pretty creative bunch. But could a robot ever produce any of those things?


https://www.youtube.com/watch?v=xxbo_nLK45o

 

"Man is a robot with defects." - Emil Cioran (Romanian Philosopher)


Criminal robots: are they responsible for their actions?

If robots are becoming more like people, should they have to live by the same rules? And if a robot commits a crime, who gets in trouble? The manufacturer who made the parts? The programmer who created the intelligence? Or perhaps the robot itself...

https://www.youtube.com/watch?v=1FnNMedGXCM

Is a robot a person?

  • Robots can learn

    'Deep learning' is a branch of Artificial Intelligence (AI) which mimics the brain’s activity using artificial neural networks. Instead of neurons, a robot has processing units. Each processing unit is given a small and specific task, for example identifying curved lines or eyes if it was learning what a face was. In this learning process, a robot is shown an image of a face and then told this is a ‘face’. Next, it is shown multiple unrelated images and told these are called ‘not face’. The robot then has to identify different images as ‘face’ or ‘not face’. Each time it makes a mistake it can be coded to tune its variables. Once it’s tuned itself to be correct every time, it’s made a connection - it’s learned both the word and its meaning.

  • Robots can’t love

    Australian philosopher Peter Singer defines a person as a conscious, thinking being which knows that it’s a person - bringing in the idea of self-awareness. British theologian Thomas White adds the criteria of feelings - he has eight things he thinks make a person, one of which is ‘has emotions’. Certainly, Artificial Intelligence is getting pretty smart and there are robots who can make decisions beyond instructions that were specifically programmed into them. But does that mean they’re actually conscious? Or self-aware? And what about emotions - will robots ever be able to genuinely love another being, or feel sad, angry or confused?

  • They can be creative

    According to Ada Lovelace, an English mathematician, a machine must be able to create original ideas to be considered intelligent. A machine can pass the Lovelace test if it can produce an outcome that its designers cannot explain based on their original code. Evolutionary, or genetic, algorithms might allow robots to pass this test. A machine could start with some musical parts and a basic algorithm, and develop an original and beautiful musical piece through the process of mutation and selection. This process would have so much randomness and complexity built in that the result might pass the Lovelace Test.

  • They struggle with small talk

    British computer scientist Alan Turing proposed a test where a computer would be considered intelligent if its conversation couldn't be easily distinguished from a human's. So far, the few programs that have passed the test have found clever ways to fool the judges rather than relying on pure computing power. For example, one misled people by acting as a psychologist and encouraging them to talk more, and another was given the persona of a 13-year-old Ukrainian boy, so judges interpreted its mistakes and awkward grammar as language and culture barriers. Who in Turing's day could have predicted that today's computers would be able to pilot spacecraft, perform delicate surgeries, and solve massive equations, but still struggle with the most basic small talk?
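The ‘mutation and selection’ idea from the creativity point above can be sketched with a tiny evolutionary algorithm. Everything here is a toy assumption: the ‘fitness’ just measures closeness to a fixed target scale, standing in for the far richer aesthetic judgements a real musical system would need:

```python
import random

random.seed(1)

# Evolve a short "melody" (a list of MIDI note numbers) by mutation
# and selection. The target below is a C major scale - a stand-in for
# a real measure of musical quality, used only to make fitness simple.
TARGET = [60, 62, 64, 65, 67, 69, 71, 72]

def fitness(melody):
    # Higher (closer to 0) means closer to the target.
    return -sum(abs(a - b) for a, b in zip(melody, TARGET))

def mutate(melody):
    # Randomly nudge one note up or down - the "mutation" step.
    m = melody[:]
    i = random.randrange(len(m))
    m[i] += random.choice([-2, -1, 1, 2])
    return m

# Start from random musical parts, then repeatedly keep the fitter variants.
population = [[random.randint(55, 77) for _ in range(8)] for _ in range(30)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 0:
        break
    survivors = population[:10]                        # the "selection" step
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]

best = max(population, key=fitness)
print(best)
```

A real creative system would replace the fixed target with something much more open-ended, which is exactly where the randomness and complexity that might pass the Lovelace Test comes from.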