
How to avoid programming

By Robert St. Amant


What does a computer scientist do? You might expect that we spend a lot of our time programming, and this sometimes happens, for some of us. When I spend a few weeks or even months building a software system, the effort can be enormously fun and satisfying. But most of the time, what I actually do is a bit different. Here’s an example from my past work, related to the idea of computational thinking.

Imagine you have a new robot in your home. You haven’t yet figured out all of its capabilities, so you mainly use it for sentry duty; it rolls from room to room while you’re not at home, turning lights and appliances on and off, perhaps checking for fires or burglaries.

Your robot is a simple reflex agent, in the jargon of artificial intelligence. It acts on reflexes, or rules: “If you sense such-and-such in the environment, then take such-and-such an action.” Your robot has a variety of sensors for identifying furniture and doorways and such, and you can write rules for it to follow: “When you sense a doorway with an end table to the right, then go through the doorway.” You can also place small signs in a room for the robot: “If you sense a sign with a red octagon next to a doorway, don’t go through that doorway.”
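
To make that concrete, here is a minimal sketch of a simple reflex agent in Python. The percepts, rules, and actions are invented for illustration and are not taken from any particular robot; the point is only that the agent maps each sensed condition directly to an action, with no memory or planning.

```python
# A minimal sketch of a simple reflex agent (illustrative only; the
# percept names and rules here are invented, not from a real robot).

def sentry_action(percept):
    """Condition-action rules: map what the robot senses to what it does."""
    if percept == "doorway with an end table to the right":
        return "go through the doorway"
    if percept == "sign with a red octagon next to a doorway":
        return "do not go through that doorway"
    if percept == "appliance left on":
        return "turn the appliance off"
    return "keep patrolling"

# The agent simply looks up each percept as it arrives; it keeps no state.
for percept in ["appliance left on",
                "sign with a red octagon next to a doorway",
                "doorway with an end table to the right"]:
    print(percept, "->", sentry_action(percept))
```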

Here’s the question we became interested in: Given a specific behavior that you’d like the robot to carry out, and given the choice between making changes to either the robot’s program (writing new rules) or the environment (putting up new signs), what do you do?

We often choose between general versions of these two strategies (or apply both) in our everyday lives. For example, if my two-year-old niece is visiting, I can tell her not to play with various fragile knick-knacks, but because “programming” a two-year-old isn’t very effective, I can also make some changes to the environment as a form of childproofing. I do the same to myself; I can “program” myself to remember to take a gift for a colleague to work in the morning, or instead I can place the package in front of the door so that I won’t leave without either picking it up or stepping over it. Cognitive scientists have studied such strategies under the umbrella of embodied and situated cognition, but less is known about how they might apply to interactive computer systems.

I set up an experiment with the help of David Christian, a graduate student working with me. Participants were given the task of directing a simulated robot through a small maze to reach a goal. At each intersection of the maze was a symbol: a square, a circle, a diamond, or a triangle. The robot was pre-programmed with a behavior for each symbol: turn left, turn right, or go straight. Here’s the tricky part. The robot’s program had a bug: the robot couldn’t reach the goal by following its current program. To fix the problem, the participant could either change a rule or change a symbol at an intersection, making as many changes as needed, and watching the robot run through the maze after each change, until the goal was reached.

[Image: The robot is represented by an arrow, the goal by a yellow square.]
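
To make the setup concrete, here is a rough Python sketch of a task in this spirit. Everything specific in it, including the maze layout, the coordinates, and the particular rules, is invented for illustration; the actual maze, interface, and symbols in the experiment were different, and the sketch ignores walls entirely.

```python
# A rough sketch of a maze task in the spirit of the experiment
# (invented layout and rules; not the actual study materials).

# Hypothetical maze: intersection coordinates mapped to the symbol shown there.
maze = {(0, 0): "square", (1, 0): "circle", (2, 0): "diamond", (0, 1): "circle"}
goal = (2, 1)

# The robot's "program": one rule per symbol, applied wherever that symbol appears.
program = {"square": "straight", "circle": "left",
           "diamond": "left", "triangle": "right"}

HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # east, north, west, south

def run(maze, program, goal, pos=(0, 0), heading=0, max_steps=10):
    """Drive the robot until it reaches the goal or gives up."""
    for _ in range(max_steps):
        if pos == goal:
            return "reached the goal"
        symbol = maze.get(pos)
        if symbol is not None:            # at an intersection: apply the rule
            action = program[symbol]
            if action == "left":
                heading = (heading + 1) % 4
            elif action == "right":
                heading = (heading - 1) % 4
            # "straight" leaves the heading unchanged
        dx, dy = HEADINGS[heading]
        pos = (pos[0] + dx, pos[1] + dy)  # roll to the next cell
    return "never reached the goal"

print(run(maze, program, goal))  # the "bug": with these rules, the goal is missed
```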

Our main question was this: Do participants have a natural preference between programmatic control of the robot versus modification of its environment? Each maze was set up so that a single change to either the robot’s program or its environment would produce the correct behavior. The two problem-solving strategies aren’t quite the same, though. Changing the robot’s program produces a “global” change in its behavior; if we were to change the green square symbol in the robot’s program to “Go straight,” that would apply in every intersection showing a green square. (If it’s not possible to go straight, the robot simply stops.) Changing the environment, on the other hand, is a “local” change; the difference is in the robot’s behavior at just one specific intersection.
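
Continuing the sketch above (still with invented rules and coordinates), either kind of single change repairs the failed run; the difference is in how far each change reaches.

```python
# Fix 1, a programming change: rewrite the rule for "circle". This is a
# global change; it applies at every intersection showing a circle,
# including the one at (0, 1) that the robot never even visits here.
program["circle"] = "straight"

# Fix 2, an environmental change (commented out; either fix alone works):
# swap the sign at one intersection. This is a local change; only the
# robot's behavior at (1, 0) is affected.
# maze[(1, 0)] = "square"

print(run(maze, program, goal))  # now: "reached the goal"
```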

About a third of the time participants chose only programming changes for a given maze, and about a third of the time they chose only environmental changes. In the cases where participants chose a mixture of programming and environmental changes, we saw an interesting pattern. They tended to choose one or more programming changes first, and then a series of environmental changes until the problem was solved. In 78% of the mixed cases, the first change was to the program, and in 89% of the mixed cases, the last change was to the environment. About half of our participants were computer programmers, but we saw no difference between programmers and non-programmers in our main performance measures.

The patterns in the mixed strategies are reminiscent of how we solve problems in the real world. For example, in government we have broad rules established at the federal level, with specialized problems handled at the local level. Businesspeople sometimes talk about setting the ground rules for their cooperation and then dealing with specific situations as they arise. The same pattern seemed to hold, at least some of the time, in the way people chose to solve problems in our robot control experiment: we make initial, global decisions that move us quickly through a problem, and then we make small, local changes until we hit upon a solution. We see this same approach in many areas of computing, from computer architecture to software engineering to artificial intelligence.

Let’s come back to computational thinking. Jan Cuny, Larry Snyder, and Jeannette Wing describe computational thinking as “the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent.” My own interest is in the commonality between strategies for computational thinking and strategies we apply for solving problems in our everyday lives. There are usually differences, but sometimes we can find basic similarities. Then we can say, “The way you go about solving this familiar problem is one version of a general strategy in computational thinking. Here’s how it works…”

Robert St. Amant is an Associate Professor of Computer Science at North Carolina State University, and the author of Computing for Ordinary Mortals, from Oxford University Press. You can follow him on Twitter at @RobStAmant and read his Huffington Post column or his previous OUPblog posts.


Image credit: Maze image courtesy of Robert St. Amant. Used with permission.

Recent Comments

  1. Rob St. Amant

    I forgot to include a link to the paper:

    St. Amant, R., and Christian, D. B. (2003). Environment modification in a simulated human-robot interaction task: Experimentation and analysis. Proceedings of the International Conference on Intelligent User Interfaces, pp. 174-181. ACM Press.
