Should Computers Be Able To Make Mistakes?

Do you think that advanced computers should be able to make mistakes? It’s an interesting question in computer science, and one that was raised decades ago. One of the problems with modern computers, theorized Michael Rabin, is that we demand perfection from them. But isn’t perfection the whole point of a computer? Let’s take a closer look at what he meant:

For computer scientist Michael Rabin, the moment of truth came about a decade ago when he realized that perhaps one reason computers cannot do all we might want is because we demand perfection. Perhaps, he reasoned, if we allowed computers to make mistakes–as humans do–we might be able to get them to do more.

The idea was disarming, but, as Rabin showed, it works. He and many other investigators have devised extremely powerful probabilistic algorithms–ways of solving problems that almost always work but have a small but definite chance of being wrong (Science, 4 June 1976, p. 989). The next step for Rabin, who has a joint position at Harvard University and at Hebrew University in Jerusalem, was to go on to the workings of the computer itself. When computers were simple autonomous units, it was relatively easy to design ways to control which part of the computer was doing what. But with the advent of computer networks and of parallel processing, the problem of keeping order in the computer becomes increasingly complex. Some practical situations, in fact, are so complicated that they simply have no workable, deterministic solution.

Rabin says that the theme of his work is to “achieve order through disorder,” in much the same way as occurs in statistical mechanics. There, he notes, a large system, such as the molecules of a gas, behaves in an orderly fashion as a result of randomized behavior of the individual constituents. As examples of the power of this approach, he tells of probabilistic solutions to several well-known problems in computer science that had frustrated investigators who were looking for simple deterministic solutions.

The fanciful problem of the dining philosophers, invented by Edsger Dijkstra of Eindhoven in Holland, is one of these. Says Richard C. Holt of the University of Toronto, who put a picture of the dining philosophers on the cover of his book on computer science, the problem has an intrinsic fascination of its own. “It is easy to state and you can imagine solving it. But there is no easy solution.”

The story is that a group of philosophers is sitting around a table, talking and thinking. Between every pair of philosophers is a fork and in the center of the table is a plate of spaghetti. Each philosopher needs two forks to eat spaghetti. From time to time, the philosophers become hungry and want to eat some spaghetti. The system works well if no two philosophers sitting next to each other want to eat at the same time. But suppose, Rabin says, that they all become hungry at once. Each philosopher turns to his right and picks up a fork. Then each turns to his left. All the left forks, however, are now taken. “There is a deadlock. The philosophers never get to eat,” Rabin remarks. The difficulty is caused, he says, because the individual philosophers operate sequentially but the group operates concurrently. “If philosopher a were the only one who wanted to eat and then philosopher b were the only one who wanted to eat, the system would work,” he explains.

How can a protocol be devised so that deadlock is impossible? One way, Rabin notes, is to make one of the philosophers king. He tells each of the others when to eat. But if all of the philosophers are equal, Rabin says, “there are some schedules in which the philosophers will starve.”

Kolata, Gina. “Order out of chaos in computers; computer scientists are controlling decision-making with probabilistic methods, which work in cases where determinism does not.” Science 223 (1984): 917+.
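That “small but definite chance of being wrong” is easiest to see in Rabin’s best-known probabilistic algorithm, the Miller-Rabin primality test. The excerpt doesn’t walk through it, so here is a rough Python sketch of my own (not code from the article) of how that kind of algorithm trades certainty for speed: each random witness either proves a number composite or lets it pass, and a composite slips past k independent rounds with probability at most 4^-k.

```python
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin test: always right about primes; wrong about a
    composite with probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # pick a random witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue                     # this witness proves nothing
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # witness proves n is composite
    return True                          # "almost always" the right answer

print(is_probably_prime(2**61 - 1))      # True: a Mersenne prime
print(is_probably_prime(2**61 + 1))      # False: divisible by 3
```

With 20 rounds the chance of a wrong answer is below about one in a trillion: small and definite, but usually a fine trade for the speed you get in return.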
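The dining-philosophers story also shows why Rabin reaches for randomness. The fix he worked out with Daniel Lehmann has each hungry philosopher flip a coin to decide which fork to reach for first, and if the second fork turns out to be busy, put the first one back down and try again. Below is a small illustrative Python simulation of that idea; it is my own sketch, with made-up meal counts and timings, not code from the article.

```python
import random
import threading
import time

N = 5                                     # philosophers (and forks) around the table
forks = [threading.Lock() for _ in range(N)]

def philosopher(i: int, meals: int = 3) -> None:
    """Coin-flipping philosopher in the style of Lehmann and Rabin."""
    left, right = i, (i + 1) % N
    for _ in range(meals):
        while True:
            # Flip a coin to decide which fork to reach for first.
            first, second = (left, right) if random.random() < 0.5 else (right, left)
            with forks[first]:
                # Try the second fork without waiting; if it is taken,
                # drop the first fork and flip again, so the table cannot
                # lock up the way the all-grab-right scenario does.
                if forks[second].acquire(blocking=False):
                    try:
                        time.sleep(0.01)            # eat
                    finally:
                        forks[second].release()
                    break
            time.sleep(random.uniform(0, 0.01))     # think a moment, then retry

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("everyone ate; no deadlock")
```

The guarantee is probabilistic rather than absolute: with probability 1 the table keeps making progress, though this basic version doesn’t by itself stop one unlucky philosopher from going hungry, which is exactly the starvation worry Rabin raises at the end of the excerpt.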

Do you think that advanced – say, quantum – computing would be better off with an orderly but somewhat random way of doing things, like the molecules of a gas described in the excerpt above?

Bill Gordon, a blogger at We Hate Malware, is somewhat vocal on the issue: “I think that quantum computing is really the next step in breaking the next barrier of technology. With quantum states, however, comes a sort of unpredictable, organized-chaos way of looking at the computer and its systems.”

It’s interesting to think about, but quantum computing would certainly introduce the kind of randomness that article was describing back in 1984. I doubt anyone was thinking about quantum computers back then.
