Most practicing scientists scarcely harbor any doubts that science makes progress. What they see is that, despite the many false alleys into which science has strayed across the centuries, and despite the waxing and waning of theories and beliefs, the history of science, at least since the ‘early modern period’ (the 16th and 17th centuries), is one of steady accumulation of scientific knowledge. For most scientists this growth of knowledge is progress. Indeed, to deny either the possibility or the actuality of progress in science is to deny its raison d’être.
On the other hand, careful examination by historians and philosophers of science has shown that identifying progress in science is in many ways a formidable and elusive problem. At the very least, scholars such as Karl Popper, Thomas Kuhn, Larry Laudan and Paul Thagard, while not doubting that science makes progress, have debated how science is progressive, or what it is about science that makes it inherently progressive. Then there are the serious doubters and skeptics. In particular, those of a decidedly postmodernist cast of mind reject the very idea that science makes progress. They claim that science is just another ‘story’, ‘socially’ constructed; by implication, one cannot speak of science making objective progress.
A major source of the problem is the question of what we mean by the very idea of progress. The history of this idea is long and complicated, as the historian Robert Nisbet has shown. Even narrowing our concern to the realm of science, we find at least two different views. There is the view espoused by most practicing scientists mentioned earlier, and stated quite explicitly by the physicist-philosopher John Ziman: that the growth of knowledge is manifest evidence of progress in science. We may call this the ‘knowledge-centric’ view. Contrast this with what the philosopher of science Larry Laudan suggested: progress in a science occurs if successive theories in that science demonstrate a growth in ‘problem-solving effectiveness’. We may call this the ‘problem-centric’ view.
The dilemma is that the knowledge-centric view may well indicate progress in a given scientific field while the problem-centric perspective suggests quite the contrary. An episode from the history of computer science illustrates this dilemma.
Around 1974, the computer scientist Jack Dennis proposed a new style of computing he called data flow. This arose in response to a desire to exploit the ‘natural’ parallelism between computational operations, constrained only by the availability of the data required by each operation. The image is that of computation as a network of operations, each operation activated as and when its required input data becomes available to it as the output of other operations: data ‘flows’ between operations, and computation proceeds in a naturally parallel fashion.
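To make the image concrete, here is a minimal, purely illustrative sketch, written in Python rather than in the notation of any actual data flow machine; the names `Node`, `send` and `run` are inventions for this example, not part of Dennis’s proposal. Each operation fires as soon as all of its operands have arrived, and its result flows on to the operations that consume it.

```python
class Node:
    """An operation that fires once all of its operands have arrived."""
    def __init__(self, name, op, num_inputs):
        self.name = name
        self.op = op
        self.inputs = [None] * num_inputs
        self.received = 0
        self.consumers = []  # (downstream node, input slot) pairs fed by this node

    def send(self, slot, value, ready):
        """Deliver a data token to one input slot of this node."""
        self.inputs[slot] = value
        self.received += 1
        if self.received == len(self.inputs):
            ready.append(self)  # all operands present: the node may fire


def run(initial_tokens):
    """Fire nodes in whatever order their data becomes available."""
    ready, results = [], {}
    for value, node, slot in initial_tokens:
        node.send(slot, value, ready)
    while ready:
        node = ready.pop()             # any ready node could fire (in parallel)
        result = node.op(*node.inputs)
        results[node.name] = result
        for consumer, slot in node.consumers:
            consumer.send(slot, result, ready)  # the result 'flows' onward
    return results


# Graph for (a + b) * (c - d): 'add' and 'sub' depend only on their own
# inputs, so either may fire first; the parallelism lies in the graph itself.
add = Node("add", lambda x, y: x + y, 2)
sub = Node("sub", lambda x, y: x - y, 2)
mul = Node("mul", lambda x, y: x * y, 2)
add.consumers.append((mul, 0))
sub.consumers.append((mul, 1))

print(run([(3, add, 0), (4, add, 1), (10, sub, 0), (6, sub, 1)])["mul"])  # 28
```

Notice that nothing in the sketch refers to a central memory: values live only in the tokens travelling along the arcs of the graph.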
The prospect of data flow computing evoked enormous excitement in the computer science community, for it was perceived as a means of liberating computing from the shackles of sequential processing inherent in the style of computing prevalent since the mid-1940s, when a group of pioneers invented the so-called ‘von Neumann’ style (named after applied mathematician John von Neumann, who had authored the first report on this style). Dennis’s idea was seen as a revolutionary means of circumventing the ‘von Neumann bottleneck’, which limited the ability of conventional (‘von Neumann’) computers to exploit parallel processing. Almost immediately it prompted much research in all aspects of computing — computer design, programming techniques and programming languages — at universities, research centers and corporations in Europe, the UK, North America and Asia. Arguably the most publicized and ambitious project inspired by data flow was the Japanese Fifth Generation Computer Project of the 1980s, involving the co-operative participation of several leading Japanese companies and universities.
There is no doubt that from a knowledge-centric perspective the history of data flow computing from the mid-1970s to the late 1980s manifested progress — in the sense that both theoretical research and experimental machine building generated much new knowledge of, and insight into, the nature of data flow computing and, more generally, parallel computing. But from a problem-centric view it turned out to be unprogressive. The reasons are rather technical, but in essence they rested on the failure to realize what had seemed the most subversive idea in the proposed style: the elimination of the central memory that stores data in the von Neumann computer, the source of the ‘von Neumann bottleneck’. As research in practical data flow computing developed, it eventually became apparent that the goal of computing without a central memory could not be realized. Memory was needed, after all, to hold large data objects (‘data structures’). The effectiveness of the data flow style as originally conceived was seriously undermined. Computer scientists gained knowledge about the limits of data flow, thus becoming wiser (if sadder) in the process. But as far as effectively solving the problem of memory-less computing was concerned, the case for progress in this particular field of computer science was found to have no merit.
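A toy illustration of the difficulty, mine rather than one drawn from any of the original machine designs: if a whole array must travel through the graph as a single data token, then ‘changing’ one element means manufacturing a fresh copy of the entire array, which is ruinous for large data structures. Practical designs therefore ended up keeping such structures in a store, that is, a memory, and letting only references to them flow.

```python
# Illustrative only: why large data objects strained the pure data flow idea.
big_array = list(range(1_000_000))

# Token-style (copy-on-update) semantics: the whole structure is remade
# just to change one element -- prohibitive if repeated inside a loop.
updated_copy = big_array[:500] + [42] + big_array[501:]

# What practical designs fell back on: hold the structure in a shared
# store -- a memory, reintroduced -- and pass only a reference ('A') around.
structure_store = {"A": big_array}
structure_store["A"][500] = 42   # update in place through the reference
```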
In fact, this episode reveals that the idea of the growth of knowledge as a marker of progress in science is trivially true, since even failure — as in the case of the data flow movement — generates knowledge (of the path not to take). For this reason, as a theory of progress, knowledge-centrism can never be refuted: knowledge is always produced. In contrast, the problem-centric theory of progress — that a science makes progress if successive theories or models demonstrate greater problem-solving effectiveness — is at least falsifiable in any particular domain, as the data flow episode shows. A supporter of Karl Popper’s principle of falsifiability would no doubt espouse problem-centrism as a more promising empirical theory of progress than knowledge-centrism.
Featured image credit: ‘Colossus’ from The National Archives. Public Domain via Wikimedia Commons.
Nice piece and interesting example re. data flow computing with which I was unfamiliar (though I am aware of the perils of parallel computing, i.e. that it’s almost impossible in the absence of “trivial parallelizability” – that seems somehow related to the insights gleaned from the failure of data flow computing).
But it seems that the early empiricists (Locke, Hume, Berkeley – or even back to Aristotle) were concerned with knowledge in the sense of arguing that that’s what was important about sense data, i.e. that it was the only reliable way of acquiring knowledge. I.e. they, at least, seemed pretty “knowledge-centric”. So to make an argument that the empirical basis for progress should be “problem-centric” rather than “knowledge-centric” seems to create a rather artificial (and perhaps unwarranted) distinction between these “centrisms”. Further, Popper’s falsification criterion was meant as a demarcation criterion between what is properly scientific and what is not. So it sounds like the argument here might be dangerously close to circularity re. trying to use science to prove something about science.
Scientific knowledge isn’t the only kind of knowledge as far as I can see. Logic and deduction can generate new knowledge. And these do not necessarily follow the scientific method.
The falsifiability criterion applies to whether or not you can call a theory scientific (since you can test the theory and see whether it holds or falls apart).
However, the theories of scientific knowledge themselves are not scientific theories (or at least do not have to be). They are philosophical ones so to speak.