We live in the information age, the product of the modern computer. The modern computer is itself a product of modern logic, which underwent a revolution in the late nineteenth and early twentieth centuries. This revolution was driven by logicians and mathematicians: George Boole (whose Boolean data types are so important to computing) and Augustus de Morgan in Britain, Gottlob Frege and David Hilbert in Germany, and Kurt Gödel in Austria, among many others. As well as logicians and mathematicians, these thinkers were philosophers. They revolutionised logic based on their philosophies.

About a century ago, then, our world was transformed by a logical revolution, which may broadly be called philosophical. This transformation was the key to the technological advances of the past century. When, in 1936, Alan Turing wrote the seminal article in which he introduced the idea of a “computing machine” (now known as a Turing Machine), he was responding to a question posed by one logician (Hilbert) by reworking the tools of another (Gödel).

What about today’s logic? Could current advances in logic or its philosophy lead to the sort of computer-driven technological change we’ve seen in the past hundred years?

In some ways, the question is easy to answer. Logic is now a huge, sprawling subject, with logicians housed in mathematics, computer science, philosophy, and cognitive science departments. It is a major branch of philosophy and a crucial component of any philosophy degree. It advances every day, and these small daily advances accumulate over time to result in significant technological change. Neural networks and deep learning are products of these changes and have already revolutionised the field of Artificial Intelligence.

As philosophers, though, our interest is in the conceptual foundations of logic. And we think that twenty-first century logic is still shackled by an understandable but erroneous assumption. This erroneous assumption is that logic is finite. Yet far from being finite, logic is infinite, as we argue in our recent monograph *One True Logic*. Appreciating logic’s infinitude should lead to a conceptual revolution in logic, and potentially to a technological one. Rethinking the conceptual foundations of logic could revolutionise technology in the twenty-first century just as it did in the twentieth.

Let us explain. A logic is in many ways like an ordinary language such as English, Arabic, Spanish, or Mandarin. The feature of a logic most relevant here is that it contains some vocabulary and some rules for stringing together its vocabulary—a grammar. In English, an example of a grammatical rule might be: if *A* is a sentence and *B* is a sentence, then “*A* and *B*” is a sentence; so, for example, “Fido is a dog and Felix is a cat” is a sentence since “Fido is a dog” and “Felix is a cat” both are.

An exactly analogous rule applies in logic, which allows two shorter sentences to be joined by “and”—or its formal counterpart—to form a longer one. And logic provides many other ways of building longer sentences. But if logic is finite, then these sentences must be of finite length. We cannot, for example, have a sentence containing infinitely many occurrences of “and”.
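The recursive rule can be made concrete in a few lines of code. The sketch below is our own illustration, not anything from the monograph; the function name `conjoin` is invented for the example.

```python
# A minimal sketch of the standard recursive grammar rule for conjunction:
# if A and B are sentences, then "(A and B)" is a sentence.

def conjoin(a: str, b: str) -> str:
    """Join two sentences with 'and', mirroring the grammatical rule."""
    return f"({a} and {b})"

s1 = "Fido is a dog"
s2 = "Felix is a cat"
print(conjoin(s1, s2))  # (Fido is a dog and Felix is a cat)

# Each application of the rule adds only finitely much text, so every
# sentence built this way has finite length -- precisely the assumption
# questioned below.
```

Note the point the code makes vivid: finitely many applications of a finite rule can only ever yield finitely long sentences.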

Inspired by thinkers such as Alfred Tarski and Gila Sher, we think that restricting the correct logic to finitely long sentences is a mistake. There is no chasm between finite and infinite. In fact, the correct logic is not merely infinite; it is in a sense *maximally* infinite. It allows the conjunction of *any* infinite number of sentences. It is as infinite as can be. Or so we argue in *One True Logic*.

There’s an interesting historical resonance to our thesis that logic is infinite. Quite a few early twentieth-century logicians, including Zermelo, Löwenheim, Skolem, and even Hilbert, were at one point happy to use infinitely long sentences. Although this aspect of the modern pioneers’ work was important at the time, it was later ignored—almost airbrushed out of logic’s official history.

What are the technological consequences of taking logic to be infinite? We don’t know for sure. Clearly, we cannot expect anything like current computers to process an infinite amount of information. But there is no reason why they couldn’t mimic reasoning with infinitely long formulas of the sort we are all capable of. An infinitely long sentence must, for example, imply itself—and a finite reasoner can recognise as much without inspecting each of its infinitely many conjuncts.
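One way a finite machine can mimic such reasoning is to represent an infinite conjunction *lazily*, generating conjuncts only on demand. The sketch below is our own speculative illustration, assuming a lazy representation; the names `infinite_conjunction` and `implies_conjunct` are invented for the example.

```python
from itertools import islice

def infinite_conjunction():
    """Lazily yield the conjuncts '0 is a number', '1 is a number', ...
    The whole infinite conjunction is never stored; conjuncts appear
    only as they are requested."""
    n = 0
    while True:
        yield f"{n} is a number"
        n += 1

def implies_conjunct(conjunction, sentence, search_limit=1000):
    """A finite (and therefore partial) check that the infinite
    conjunction implies a given conjunct: search the first
    `search_limit` conjuncts for it."""
    return sentence in islice(conjunction(), search_limit)

print(implies_conjunct(infinite_conjunction, "42 is a number"))  # True
```

The design mirrors the article’s point: the program is finite, yet the object it reasons about has infinitely many conjuncts, and simple implications (from the whole to a part) can be verified by finite means.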

What is the upshot? We know from the history of our subject that philosophical advances in logic can and do lead to technological advances, which nobody could have foreseen. We believe logic is infinite, indeed maximally so. All that remains is to find a twenty-first century Turing to show where this leads.
