
Robotic Sheepdogs: A Thought Experiment


While catching up on the news today, I came across this excellent little video about robots. One of the robots could recognise human faces, while the other relaxed whenever someone cuddled or stroked it. What would happen when the robots met? Well, you’ll have to watch the film clip for that one, but it put me in mind of a book we published a little while ago called Guilty Robots, Happy Dogs: The Question of Alien Minds by David McFarland. In it, Professor McFarland examines the philosophical positions behind ideas about whether robots can ever feel guilt, whether animals can ever really feel happy, and whether we can ever know what non-human minds might be like. In the extract I’ve chosen below, he asks us to try a thought experiment about a robotic sheepdog.


Another possible candidate for behaviour of an animal or machine that would make us suspect that it had mental abilities is cooperative behaviour. Cooperative behaviour takes many forms, ranging from cooperation among ants (usually termed collective behaviour, because there is no direct communication between the participants) to human behaviour that requires cooperation at the mental level. There are numerous studies of cooperation among robots designed to fulfil particular tasks, such as security surveillance, reconnaissance, bomb disposal, and even playing soccer. Some of these cooperate by sharing both knowledge and know-how. However, this does not mean that these robots have any mental abilities. For computer scientists, there is no problem in endowing a robot with explicit representations. It is the other aspects of mentality that are a problem for them.

Let us now do a thought experiment. Suppose that you are cooperating with a sheepdog robot (such robots have been made). The robot is perfectly capable of rounding up sheep without minute-to-minute guidance from you. In fact the only influence that you have over the robot is to urge it to go faster or slower.

The robot (type-I) is perfectly capable of rounding up sheep, or ducks, provided that they are of the domestic type that flock. In fact the robot’s overriding priority is to keep the flock together. It must adjust its speed of manoeuvre to the state of the flock. If it moves too quickly the flock will tend to break up, because not all the individuals can go at the same speed. If the robot moves too slowly, the flock’s momentum is lost, and the sheep may head off in a new direction. The type-I robot is an automaton and carries out all these manoeuvres automatically. It can do no other.
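The type-I rule can be pictured as a simple feedback loop. The sketch below is purely illustrative and is not drawn from the book or from any real sheepdog robot: the quantities (flock spread, the handler's urging) and the thresholds are hypothetical, chosen only to show how a fixed rule always subordinates the handler's requests to keeping the flock together.

```python
# Illustrative sketch (not the book's design): a fixed control rule for a
# hypothetical type-I sheepdog robot. Whatever the handler asks for, the
# rule clamps the robot's speed to what the slowest sheep can manage.

def type_one_speed(flock_spread, slowest_sheep_speed, handler_urging):
    """Return the robot's next speed (hypothetical units).

    flock_spread        -- how scattered the flock currently is
    slowest_sheep_speed -- pace of the slowest animal
    handler_urging      -- +1 to urge faster, -1 to urge slower, 0 for none
    """
    MAX_SPREAD = 5.0  # beyond this the flock is treated as breaking up

    # Start from the handler's request: a small nudge up or down.
    speed = slowest_sheep_speed + 0.2 * handler_urging

    # Overriding priority: if the flock is already too spread out, slow
    # down further to let it close up again.
    if flock_spread > MAX_SPREAD:
        speed = 0.8 * slowest_sheep_speed

    # Never outpace the slowest sheep, whatever the handler says.
    return min(speed, slowest_sheep_speed)


# Example: the handler keeps urging the robot on, but the rule always
# pulls the speed back to the lame sheep's pace.
print(type_one_speed(flock_spread=2.0, slowest_sheep_speed=1.0, handler_urging=+1))  # 1.0
```

A type-I robot built this way can do no other: the handler's urging is just one more input, and the flock-keeping rule always has the last word.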

One day you and the robot are rounding up a few sheep, and you notice that one of them is lame. The robot has adjusted to the slow speed of this animal and the flock is moving rather slowly. Even so, the lame sheep is finding it hard to keep up. You would prefer that the robot leave the lame sheep out of the flock—but how to do it? The thing is to get the robot to speed up so that the lame sheep is left behind, but the robot is programmed to keep the flock together at all costs. Give it a try anyway.

You order the robot to speed up. It speeds up very slightly then slows down again. It is keeping the flock together and must adjust its pace to the slowest sheep. You order it to speed up again—no response. As expected, the robot will not speed up and break up the flock. Then suddenly it does speed up dramatically, leaves the lame sheep behind, and rounds up the others. You are surprised.

Later you ask the robot designer why the robot broke the overriding rule of keeping the flock together. ‘Oh, I forgot to tell you that was a type-II robot. It is the same as type-I but a bit more intelligent.’ What does this mean? Surely a more intelligent sheepdog robot is one that is better at rounding up sheep. Now here’s a thought—did the type-II robot realise that you wanted to separate the lame sheep, and so it acted accordingly? In other words, by requesting the robot to speed up, even though it was against the normal ‘rules’, you were in effect asking for the robot’s cooperation.

My dog, Border, has a special yip, accompanied by a fixed stare at me, that indicates that she is requesting something (water, food, cuddle, or to be let out). If she is in the house and hears a commotion outside (someone arriving, or another dog vocalising), she wants to join in the fun, but she does not go to the door and attempt to get out; she comes to me and requests to be let out. She is, in effect, asking for my cooperation. I realise, from the context, what she wants. Similarly (somehow), the robot realises what you want (to separate the lame sheep). We are tempted to say that the robot believes you want it to speed up, and that it believes the consequence, the lame sheep being left out of the flock, is what you want. Similarly, it is tempting to endow Border with some cognitive ability in seeking my cooperation. There are many situations where the behaviour of an animal prompts us to think that the animal must have some mental ability. Unfortunately, in such cases we cannot ask the designer, as we can with robots, and we cannot rely on language, as we can with children.
