
Why primality is polynomial time, but factorisation is not

Differentiating between the signature of a number and its value

A brief review: The significance of evidence-based reasoning

In a paper: `The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis’, which appeared in the December 2016 issue of Cognitive Systems Research [An16], I briefly addressed the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for objectively evidencing such acceptance in the sense of Chetan Murthy and Martin Löb:

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …” … Chetan R. Murthy: [Mu91], \S 1 Introduction.

“Intuitively we require that for each event-describing sentence, \phi_{o^{\iota}}n_{\iota} say (i.e. the concrete object denoted by n_{\iota} exhibits the property expressed by \phi_{o^{\iota}} ), there shall be an algorithm (depending on I, i.e. M^{*} ) to decide the truth or falsity of that sentence.” … Martin H Löb: [Lob59], p.165.

Definition 1 (Evidence-based reasoning in Arithmetic): Evidence-based reasoning accepts arithmetical propositions as true under an interpretation if, and only if, there is some specified methodology for objectively evidencing such acceptance.

The significance of introducing evidence-based reasoning for assigning truth values to the formulas of a first-order Peano Arithmetic, such as PA, under a well-defined interpretation (see Section 3 in [An16]), is that it admits the distinction:

(1) algorithmically verifiable `truth’ (Definition 3); and

(2) algorithmically computable `truth’ (Definition 4).

Definition 2 (Deterministic algorithm): A deterministic algorithm computes a mathematical function which has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.

Note that a deterministic algorithm can be suitably defined as a `realizer’ in the sense of the Brouwer-Heyting-Kolmogorov rules (see [Ba16], p.5).

For instance, under evidence-based reasoning the formula [(\forall x)F(x)] of the first-order Peano Arithmetic PA must always be interpreted weakly under the classical, standard, interpretation of PA (see [An16], Theorem 5.6) in terms of algorithmic verifiability (see [An16], Definition 1); where, if the PA-formula [F(x)] interprets as an arithmetical relation F^{*}(x) over N :

Definition 3 (Algorithmic verifiability): The number-theoretical relation F^{*}(x) is algorithmically verifiable if, and only if, for any natural number n , there is a deterministic algorithm AL_{(F,\ n)} which can provide evidence for deciding the truth/falsity of each proposition in the finite sequence \{F^{*}(1), F^{*}(2), \ldots, F^{*}(n)\} .

Whereas [(\forall x)F(x)] must always be interpreted strongly under the finitary interpretation of PA (see [An16], Theorem 6.7) in terms of algorithmic computability ([An16], Definition 2), where:

Definition 4 (Algorithmic computability): The number-theoretical relation F^{*}(x) is algorithmically computable if, and only if, there is a deterministic algorithm AL_{F} that can provide evidence for deciding the truth/falsity of each proposition in the denumerable sequence \{F^{*}(1), F^{*}(2), \ldots\} .
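The contrast between Definitions 3 and 4 can be pictured informally as follows (a minimal sketch, not drawn from [An16]; the primality relation and the helper names are chosen purely for illustration): algorithmic computability asks for a single algorithm that decides F^{*}(n) uniformly for every n , whereas algorithmic verifiability only asks that, for each given n , some algorithm decides the finite initial segment F^{*}(1), \ldots, F^{*}(n) .

```python
# Illustrative sketch only: contrasts algorithmic verifiability (Definition 3)
# with algorithmic computability (Definition 4), using the primality relation
# F*(n) = "n is prime" purely as an example.

def is_prime(n):
    """A single deterministic algorithm AL_F deciding F*(n) for EVERY n:
    this is what algorithmic computability (Definition 4) requires."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def verifier_for(n):
    """For a GIVEN n, return a deterministic algorithm AL_(F, n) deciding the
    finite sequence F*(1), ..., F*(n): this is all that algorithmic
    verifiability (Definition 3) requires.  Here it happens to be built from
    is_prime, but Definition 3 does not require the verifiers for different n
    to be instances of one uniform algorithm."""
    table = {k: is_prime(k) for k in range(1, n + 1)}
    return lambda k: table[k]

# Computability trivially yields verifiability; the converse need not hold.
check_up_to_10 = verifier_for(10)
print([check_up_to_10(k) for k in range(1, 11)])
```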

The significance of the distinction between algorithmically computable reasoning based on algorithmically computable truth, and algorithmically verifiable reasoning based on algorithmically verifiable truth, is that it admits the following, hitherto unsuspected, consequences:

(i) PA has two well-defined interpretations over the domain N of the natural numbers (including 0 ):

(a) the weak non-finitary standard interpretation I_{PA(N, SV)} ([An16], Theorem 5.6),

and

(b) a strong finitary interpretation I_{PA(N, SC)} ([An16], Theorem 6.7);

(ii) PA is non-finitarily consistent under I_{PA(N, SV)} ([An16], Theorem 5.7);

(iii) PA is finitarily consistent under I_{PA(N, SC)} ([An16], Theorem 6.8).

The significance of evidence-based reasoning for Computational Complexity

In this investigation I now show that the relevance of evidence-based reasoning, and of distinguishing between algorithmically verifiable and algorithmically computable number-theoretic functions (as defined above), for Computational Complexity is that it provides a formal foundation for placing in perspective, and complementing, an uncomfortably counter-intuitive entailment in number theory—Theorem 2 below—which conventional wisdom has treated as sufficient for concluding that the prime divisors of an integer cannot be proven to be mutually independent.

However, I show here that such informally perceived barriers are, in this instance, illusory; and that admitting the above distinction illustrates:

(a) Why the prime divisors of an integer are mutually independent, despite Theorem 2;

(b) Why determining whether the signature (Definition 5 below) of a given integer n —coded as the key in a modified Bazeries-cylinder (see Definition 7 of this paper) based combination lock—is that of a prime, or not, can be done in polynomial time O(log_{_{e}}n) (Corollary 4 of this paper); as compared to the time \tilde{O}(log_{_{e}}^{15/2}n) given by Agrawal et al in [AKS04], and improved to \tilde{O}(log_{_{e}}^{6}n) by Lenstra and Pomerance in [LP11], for determining whether the value of a given integer n is that of a prime or not.

(c) Why it can be cogently argued that determining a factor of a given integer cannot be done in polynomial time.

Definition 5 (Signature of a number): The signature of a given integer n is the sequence a_{_{n,i}} , where 0 \leq a_{_{n,i}} < p_{_{i}} and n + a_{_{n,i}} \equiv 0\ (mod\ p_{_{i}}) for all primes p_{_{i}} such that 1 \leq i \leq \pi(\sqrt{n}) .

The signature is unique since, if p_{_{\pi(\sqrt{m})+1}}^{2} > m \geq p_{_{\pi(\sqrt{m})}}^{2} and p_{_{\pi(\sqrt{n})+1}}^{2} > n \geq p_{_{\pi(\sqrt{n})}}^{2} have the same signature, then |m - n| = c_{_{1}}\cdot\prod_{i=1}^{\pi(\sqrt{m})}p_{_{i}} = c_{_{2}}\cdot\prod_{i=1}^{\pi(\sqrt{n})}p_{_{i}} ; whence c_{_{1}} = c_{_{2}} = 0 since \prod_{i=1}^{k}p_{_{i}} > (\prod_{i=2}^{k-2}p_{_{i}})\cdot p_{_{k}}^{^{2}} > p_{_{k+1}}^{2} for k > 4 by appeal to Bertrand’s Postulate 2p_{_{k}} > p_{_{k+1}} ; and the uniqueness is easily verified for k \leq 4 .
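As a concrete illustration (a minimal sketch, not taken from the cited paper; it assumes, as in Definition 5 above, that each a_{_{n,i}} is the least non-negative residue), the signature of n can be computed directly from the residues of n modulo the primes not exceeding \sqrt{n} :

```python
import math

def primes_up_to(limit):
    """All primes <= limit, by the Sieve of Eratosthenes."""
    if limit < 2:
        return []
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            for multiple in range(p * p, limit + 1, p):
                flags[multiple] = False
    return [p for p, is_p in enumerate(flags) if is_p]

def signature(n):
    """The sequence a_{n,i} = (-n) mod p_i over all primes p_i <= sqrt(n),
    i.e. the least non-negative a with n + a = 0 (mod p_i)."""
    return [(-n) % p for p in primes_up_to(math.isqrt(n))]

print(signature(97))   # [1, 2, 3, 1]: no zero entry, so no prime <= sqrt(97) divides 97
print(signature(91))   # [1, 2, 4, 0]: the zero at p = 7 witnesses that 7 divides 91
```

In particular, for n \geq 2 , n is prime if, and only if, no entry of its signature is 0 , which is the sense in which the signature already carries the primality information referred to in (b) above.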

Definition 6 (Value of a number): The value of a given integer n is any well-defined interpretation—over the domain of the natural numbers—of the (unique) numeral [n] that represents n in the first-order Peano Arithmetic PA.

We note that Theorem 2 establishes a lower limit for [AKS04] and [LP11], because determining the signature of a given integer n does not require knowledge of the value of the integer as defined by the Fundamental Theorem of Arithmetic.

Theorem 1 (Fundamental Theorem of Arithmetic): Every positive integer n > 1 can be represented in exactly one way as a product of prime powers:

n=p_{1}^{n_{1}}p_{2}^{n_{2}}\cdots p_{k}^{n_{k}}=\prod _{i=1}^{k}p_{i}^{n_{i}}

where p_{1} < p_{2} < \ldots < p_{k} are primes and the n_{i} are positive integers.
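A worked instance of Theorem 1 (an illustrative sketch only; trial division is used here just to make the prime-power representation concrete, not as a claim about efficient factorisation):

```python
def prime_power_factorisation(n):
    """Return the representation of Theorem 1 as a list of (p_i, n_i) pairs
    with p_1 < p_2 < ... < p_k, found by trial division."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            exponent = 0
            while n % p == 0:
                n //= p
                exponent += 1
            factors.append((p, exponent))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

print(prime_power_factorisation(360))   # [(2, 3), (3, 2), (5, 1)], i.e. 360 = 2^3 * 3^2 * 5
```

Trial division up to \sqrt{n} takes time exponential in the bit-length of n ; claim (c) above goes further, arguing that no factorisation procedure at all should be expected to run in polynomial time.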

Are the prime divisors of an integer mutually independent?

In this paper I address the query:

Query 1: Are the prime divisors of an integer n mutually independent?

Definition 7 (Independent events): Two events are independent if the occurrence of one event does not influence (and is not influenced by) the occurrence of the other.

Intuitively, the prime divisors of an integer seem to be mutually independent by virtue of the Fundamental Theorem of Arithmetic.

Moreover, the prime divisors of n can also be seen to be mutually independent in the usual, linearly displayed, Sieve of Eratosthenes, where whether an integer n is crossed out as a multiple of a prime p is obviously independent of whether it is also crossed out as a multiple of a prime q \neq p :

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 …
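The independence just described can be made explicit by running the sieve one prime at a time (an illustrative sketch only; each prime’s pass is a separate, self-contained loop, so whether n is struck out by p makes no reference to any other prime q ):

```python
def sieve_crossings(limit):
    """For each n <= limit, record which primes p <= sqrt(limit) cross n out.
    Each prime's pass is a separate, self-contained loop."""
    crossed_by = {n: set() for n in range(2, limit + 1)}
    p = 2
    while p * p <= limit:
        if not crossed_by[p]:          # p survived every earlier pass, so p is prime
            for multiple in range(2 * p, limit + 1, p):
                crossed_by[multiple].add(p)
        p += 1
    return crossed_by

crossings = sieve_crossings(19)
print(crossings[12])   # {2, 3}: 12 is crossed out by 2 and, independently, by 3
print(crossings[19])   # set(): 19 is crossed out by no prime, so 19 is prime
```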

Despite such compelling evidence, conventional wisdom appears to accept as definitive the counter-intuitive conclusion that although we can see it as true, we cannot mathematically prove the following proposition as true:

Proposition 1: Whether or not a prime p divides an integer n is independent of whether or not a prime q \neq p divides the integer n .

We note that such an unprovable-but-intuitively-true conclusion makes a stronger assumption than that in Gödel’s similar claim for his arithmetical formula [(\forall x)R(x)] —whose Gödel-number is 17Gen\ r —in [Go31], p.26(2). Stronger, since Gödel does not assume his proposition to be intuitively true, but shows that though the arithmetical formula with Gödel-number 17Gen\ r is not provable in his Peano Arithmetic P yet, for any P -numeral [n] , the formula [R(n)] whose Gödel-number is Sb \left(r \begin{array}{c}17 \\ Z(n)\end{array}\right) is P -provable, and therefore meta-mathematically true under any well-defined Tarskian interpretation of P (cf., [An16], Section 3.).

Expressed in computational terms (see [An16], Corollary 8.3), under any well-defined interpretation of P , Gödel’s formula [R(x)] translates as an arithmetical relation, say R'(x) , such that R'(n) is algorithmically verifiable, but not algorithmically computable, as always true over N , since [\neg (\forall x)R(x)] is P -provable ([An16], Corollary 8.2).

We thus argue that a perspective which denies Proposition 1 is based on perceived barriers that reflect, and are peculiar to, only the argument that:

Theorem 2: There is no deterministic algorithm that, for any given n , and any given prime p \geq 2 , will evidence that the probability \mathbb{P}(p\ |\ n) that p divides n is \frac{1}{p} , and the probability \mathbb{P}(p\not|\ n) that p does not divide n is 1 - \frac{1}{p} .

Proof: By a standard result in the Theory of Numbers ([Ste02], Chapter 2, p.9, Theorem 2.1), we cannot define a probability function for the probability that a random n is prime over the probability space \{1, 2, 3, \ldots\} .

(Compare with the informal argument in [HL23], pp.36-37.)

In other words, treating Theorem 2 as an absolute barrier does not admit the possibility—which has consequences for the resolution of outstanding problems in both the theory of numbers and computational complexity—that Proposition 1 is algorithmically verifiable, but not algorithmically computable, as true, since:

Theorem 3: For any given n , there is a deterministic algorithm that, given any prime p \geq 2 , will evidence that the probability \mathbb{P}(p\ |\ n) that p divides n is \frac{1}{p} , and the probability \mathbb{P}(p\not|\ n) that p does not divide n is 1 - \frac{1}{p} .
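One elementary way to picture what the deterministic algorithm of Theorem 3 can evidence for a given n (an informal sketch only, not the argument of the cited paper): divisibility of the given n by any prime p is decided by a single remainder computation, and the ratio \frac{1}{p} is exhibited by the fact that exactly one of any p consecutive integers is divisible by p .

```python
def divides(p, n):
    """Deterministic check of whether the prime p divides the given integer n."""
    return n % p == 0

def density_in_block(p, start):
    """Fraction of the p consecutive integers start, ..., start + p - 1 that
    are divisible by p; this fraction is always exactly 1/p."""
    hits = sum(1 for k in range(start, start + p) if divides(p, k))
    return hits / p

print(divides(7, 91), divides(7, 97))                       # True False
print(density_in_block(7, 91), density_in_block(5, 123))    # 1/7 and 1/5
```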

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

Can Gödel be held responsible for not clearly distinguishing—in his seminal 1931 paper on formally undecidable propositions (pp.596-616, ‘From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931‘, Jean van Heijenoort, Harvard University Press, 1976 printing)—between the implicit circularity that is masked by the non-constructive nature of his proof of undecidability in PM, and the lack of any circularity in his finitary proof of undecidability in Peano Arithmetic?

“The analogy of this argument with the Richard antinomy leaps to the eye. It is closely related to the “Liar” too;[Fn.14] for the undecidable proposition [R (q); q] states that q belongs to K , that is, by (1), that [R (q); q] is not provable. We therefore have before us a proposition that says about itself that it is not provable [in PM].[Fn.15]

[Fn.14] “Any epistemological antinomy could be used for a similar proof of the existence of undecidable propositions.”

[Fn.15] “Contrary to appearances, such a proposition involves no faulty circularity, for initially it [only] asserts that a certain well-defined formula (namely, the one obtained from the q th formula in the lexicographic order by a certain substitution) is unprovable. Only subsequently (and so to speak by chance) does it turn out that this formula is precisely the one by which the proposition itself was expressed.”

It is a question worth asking, if we heed Abel-Luis Peralta, who is a Graduate in Scientific Calculus and Computer Science in the Faculty of Exact Sciences at the National University of La Plata in Buenos Aires, Argentina; and who has been contending in a number of posts on his Academia web-page that:

(i) Gödel’s semantic definition of ‘[R(n) : n] ‘, and therefore of ‘\neg Bew[R(n) : n] ‘, is not only:

(a) self-referential under interpretation—in the sense of the above quote (pp.597-598, van Heijenoort) from Gödel’s Introduction in his 1931 paper ‘On Formally Undecidable Propositions of Principia Mathematica and Related Systems I’ (pp.596-616, van Heijenoort);

but that:

(b) neither of the definitions can be verified by a deterministic Turing machine as yielding a valid formula of PM.

Peralta is, of course, absolutely right in his contentions.

However, such non-constructiveness is a characteristic of any set-theoretical system in which PM is interpretable; and in which, by Gödel’s self-confessed Platonism (apparent in his footnote #15 in the quote above), his definitions of ‘[R(n) : n] ‘ and ‘\neg Bew[R(n) : n] ‘ do not need to be verifiable by a deterministic Turing machine in order to be treated as valid formulas of PM.

Reason: By the usual axiom of separation of any formal set theory such as ZFC in which PM is interpreted, Gödel’s set-theoretical definition (p.598, Heijenoort):

n \in K \equiv \neg Bew[R(n) : n]

lends legitimacy to \neg Bew[R(n) : n] as a PM formula.

Thus Gödel can formally assume—without further proof, by appeal simply to the axiom of choice of ZFC—that the PM formulas with exactly one variable—of the type of natural numbers—can be well-ordered in a sequence in some way such as, for example (Fn.11, p.598, Heijenoort):

“… by increasing the sum of the finite sequences of integers that is the ‘class sign’, and lexicographically for equal sums.”

We cannot, though, conclude from this that:

(ii) Gödel’s formally undecidable P-formula, say [(\forall x)R(x)] —whose Gödel-number is defined as 17Gen\ r in Gödel’s proof of his Theorem VI (on pp.607-609 of van Heijenoort)—also cannot be verified by a deterministic Turing machine to be a valid formula of Gödel’s Peano Arithmetic P.

Reason: The axioms of set-theoretical systems such as PM, ZF, etc. would all admit—under a well-defined interpretation, if any—infinite elements, in the putative domain of any such interpretation, which are not Turing-definable.

Nevertheless, to be fair to two generations of scholars who—apart from those who are able to comfortably wear the logician’s hat—have laboured in attempts to place the philosophical underpinnings of Gödel’s reasoning (in his 1931 paper) in a coherent perspective (see this post; also this and this), I think Gödel must, to some extent, be held responsible—but in no way accountable—for the lack of a clear-cut distinction between the non-constructivity implicit in his semantic proof in (i), and the finitarity that he explicitly ensures for his syntactic proof in (ii).

Reason: Neither in his title, nor elsewhere in his paper, does Gödel categorically state that his goal was:

(iii) not only to demonstrate the existence of formally undecidable propositions in PM, a system which admits non-finitary elements under any putative interpretation;

(iv) but also to prevent the admittance of non-finitary elements—precisely those which would admit conclusions such as (ii)—when demonstrating the existence of formally undecidable propositions in ‘related’ systems such as his Peano Arithmetic P.

He merely hints at this by stating (see quote below from pp.597-599 of van Heijenoort) that his demonstration of (iii) is a ‘sketch’ that lacked the precision which he intended to achieve in (iv):

“Before going into details, we shall first sketch the main idea of the proof, of course without any claim to complete precision. The formulas of a formal system (we restrict ourselves here to the system PM) in outward appearance are finite sequences of primitive signs (variables, logical constants, and parentheses or punctuation dots), and it is easy to state with complete precision which sequences of primitive signs are meaningful formulas and which are not….

by:

(v) weakening the implicit assumption—of the decidability of the semantic truth of PM-propositions under any well-defined interpretation of PM—which underlies his proof of the existence of formally undecidable set-theoretical propositions in PM;

“The method of proof just explained can clearly be applied to any formal system that, first, when interpreted as representing a system of notions and propositions, has at its disposal sufficient means of expression to define the notions occurring in the argument above (in particular, the notion “provable formula”) and in which, second, every provable formula is true in the interpretation considered. The purpose of carrying out the above proof with full precision in what follows is, among other things, to replace the second of the assumptions just mentioned by a purely formal and much weaker one.”

and:

(vi) insisting—in his proof of the existence of formally undecidable arithmetical propositions in his Peano Arithmetic P—upon the introduction of a methodology for constructively assigning unique truth values to only those (primitive recursive) quantified number-theoretic assertions (#1 to #45 on pp.603-606 of van Heijenoort) that are bounded when interpreted over the domain N of the natural numbers (footnote #34 on p.603 of van Heijenoort):

“Wherever one of the signs (x) , (Ex) , or \varepsilon x occurs in the definitions below, it is followed by a bound on x . This bound serves merely to ensure that the notion defined is recursive (see Theorem IV). But in most cases the extension of the notion defined would not change if this bound were omitted.”

From today’s perspective, one could reasonably hold that—as Peralta implicitly contends—Gödel is misleadingly suggesting (in the initial quote above from pp.597-599 of van Heijenoort) that his definitions of ‘[R(n) : n] ‘ and ‘\neg Bew[R(n) : n] ‘ may be treated as yielding ‘meaningful’ formulas of PM which are well-definable constructively (in the sense of being definable by a deterministic Turing machine).

In my previous post I detailed precisely why such an assumption would be fragile, by showing how the introduction of the boundedness Gödel insisted upon in (vi) distinguishes:

(vii) Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (pp.598-599 of van Heijenoort), which admits Peralta’s contention (i);

from:

(viii) Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (pp.607-609 of van Heijenoort), which does not admit the corresponding contention (ii).

Moreover, we note that:

(1) Whereas Gödel can—albeit non-constructively—claim that his definition of ‘Bew[R(n) : n] ‘ yields a formula in PM, we cannot claim, correspondingly, that his number-theoretic relation Bew(x) is a formula in his Peano Arithmetic P.

(2) The latter is a number-theoretic relation defined by Gödel in terms of his primitive recursive relation #45, ‘xBy ‘, as:

#46. Bew(x) \equiv (\exists y)yBx .

(3) In Gödel’s terminology, ‘Bew(x) ‘ translates under interpretation over the domain N of the natural numbers as:

‘x is the Gödel-number of some provable formula [F] of Gödel’s Peano Arithmetic P’.

(4) However, unlike Gödel’s primitive recursive functions and relations #1 to #45, both ‘(\exists y)yBx ‘ and ‘\neg (\exists y)yBx ‘ are number-theoretic relations which are not primitive recursive—which means that they are not assured to be effectively decidable by a Turing machine under interpretation in N.

(5) Reason: Unlike in Gödel’s definitions #1 to #45 (see footnote #34 on p.603 of van Heijenoort, quoted above), there is no bound on the quantifier ‘(\exists y) ‘ in the definition of Bew(x) . (A bounded-versus-unbounded search sketch follows this list.)

Hence, by Turing’s Halting Theorem, we cannot claim—in the absence of specific proof to the contrary—that there must be some deterministic Turing machine which will determine whether or not, for any given natural number m , the assertion Bew(m) is true under interpretation in N.

This is the crucial difference between Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (which admits Peralta’s contention (i)), and Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (which does not admit his contention (i)).

(6) We cannot, therefore—in the absence of specific proof to the contrary—claim by Gödel’s Theorems V or VII that there must be some P-formula, say [Bew_{_{PA}}(x)] (corresponding to the PM-formula Bew[R(n) : n] ), such that, for any given natural number m :

(a) If Bew(m) is true under interpretation in N, then [Bew_{_{PA}}(m)] is provable in P;

(b) If \neg Bew(m) is true under interpretation in N, then \neg [Bew_{_{PA}}(m)] is provable in P.
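To make the role of the bound concrete (a minimal sketch; the predicate standing in for yBx below is an arbitrary decidable relation chosen for illustration, not Gödel’s arithmetisation): a bounded existential quantifier can be decided by a search that always terminates, whereas an unbounded existential quantifier only yields a search that halts when a witness exists.

```python
def bounded_exists(predicate, x, bound):
    """Decide (exists y <= bound) predicate(y, x): the search always terminates,
    as in Goedel's bounded definitions #1 to #45."""
    return any(predicate(y, x) for y in range(bound + 1))

def unbounded_search(predicate, x):
    """Search for a witness of (exists y) predicate(y, x): halts if and only if
    a witness exists, so it does not by itself decide the proposition --
    the situation of Bew(x) = (exists y) yBx."""
    y = 0
    while True:
        if predicate(y, x):
            return y
        y += 1

# A toy decidable predicate standing in for yBx:
is_square_root_of = lambda y, x: y * y == x
print(bounded_exists(is_square_root_of, 49, 49))   # True: witness y = 7 lies within the bound
# unbounded_search(is_square_root_of, 50) would never halt, since 50 is not a perfect square.
```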

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

A: Is Gödel’s reasoning really kosher?

Many scholars yet harbour a lingering suspicion that Gödel’s definition of his formally undecidable arithmetical proposition [(\forall x)R(x,p)] involves a latent contradiction—arising from a putative, implicit, circular self-reference—that is masked by unverifiable, even if not patently invalid, mathematical reasoning.

The following proof of Gödel’s Theorem VI of his 1931 paper is intended to:

\bullet strip away the usual mathematical jargon that shrouds proofs of Gödel’s argument which makes his—admittedly arcane—reasoning difficult for a non-logician to unravel;

and

\bullet show that, and why—unlike in the case of the paradoxical ‘Liar’ sentence: ‘This sentence is a lie’—Gödel’s proposition [(\forall x)R(x, p)] does not involve any circular self-reference that could yield a Liar-like contradiction, either in a formal mathematical language, or when interpreted in any language of common discourse.

B: Gödel’s 45 primitive recursive arithmetic functions and relations

We begin by noting that:

(1) In his 1931 paper on formally ‘undecidable’ arithmetical propositions, Gödel shows that, given a well-defined system of Gödel-numbering, every formula of a first-order Peano Arithmetic such as PA can be Gödel-numbered, and that Gödel’s primitive recursive relation #23, Form(x) , is true if, and only if, x is the Gödel-number (GN) of a formula of PA.

(2) So, given any natural number n , (1) allows us to decompose n and effectively determine whether, or not, n is the GN of some PA formula. (A toy Gödel-numbering sketch follows this list.)

(3) Gödel also defines a primitive recursive relation #44, Bw(x) , which is true if, and only if, x is the GN of a finite sequence of formulas in PA, each of which is either an axiom, or an immediate consequence of two preceding formulas in the sequence.

(4) So, given any natural number n , (3) allows us to effectively determine whether, or not, the natural number n is the GN of a proof sequence in PA.

(5) Further, Gödel defines a primitive recursive relation #45, xBy , which is true if, and only if, x is the GN of a proof sequence in PA whose last formula has the GN y .

(6) Gödel then defines a primitive recursive relation, say Q(x,y) \equiv xBSUBy , such that, for any m,n :

mBSUBn is true if, and only if, m happens to be a GN that can be decomposed into a proof sequence whose last member is some PA formula [F(n)] , and n happens to be a GN that decomposes into the PA-formula [F(u)] with only one variable [u] .

(7) The essence of Gödel’s Theorem VI lies in answering the question:

Query 1: Is there any natural number n for which mBSUBn is true?
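To make (1)–(5) concrete, the following toy Gödel-numbering sketch (purely illustrative: the symbol codes, the prime-exponent coding and the function names are assumptions of this sketch, not Gödel’s actual assignment in his 1931 paper) shows how a given natural number can be effectively decomposed to test whether it codes a string of the formal language at all—the analogue of evaluating Form(x) :

```python
# Toy sketch of Goedel-numbering by prime exponents: a string of symbols with
# codes c_1, ..., c_k is coded as 2^c_1 * 3^c_2 * 5^c_3 * ...; decoding is an
# effective decomposition of the given number.

SYMBOL_CODES = {'0': 1, 'S': 3, '+': 5, '*': 7, '=': 9, '(': 11, ')': 13, 'x': 15}
CODE_SYMBOLS = {v: k for k, v in SYMBOL_CODES.items()}

def primes():
    """Generate 2, 3, 5, 7, ... by trial division (enough for a toy example)."""
    candidate = 2
    found = []
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def godel_number(string):
    """Code a string as a product of prime powers."""
    n = 1
    for prime, symbol in zip(primes(), string):
        n *= prime ** SYMBOL_CODES[symbol]
    return n

def decode(n):
    """Effectively decompose n; return the coded string, or None if n is not
    a Goedel number of this toy language (the analogue of testing Form(x))."""
    symbols = []
    for prime in primes():
        if n == 1:
            return ''.join(symbols)
        exponent = 0
        while n % prime == 0:
            n //= prime
            exponent += 1
        if exponent not in CODE_SYMBOLS:
            return None
        symbols.append(CODE_SYMBOLS[exponent])

gn = godel_number('S0=S0')
print(gn, decode(gn))        # a large integer, then 'S0=S0'
print(decode(1000))          # None: 1000 = 2^3 * 5^3 skips the prime 3
```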

C: Gödel’s reasoning in Peano Arithmetic

(8) Now, by Gödel’s Theorem VII (a standard representation theorem of arithmetic), xBSUBy can be expressed in PA by some (formally well-defined) formula [\neg R(x,y)] such that, for any m,n :

(a) If mBSUBn is true, then [\neg R(m,n)] is PA-provable;

(b) If \neg mBSUBn is true, then [R(m,n)] is PA-provable.

(9) Further, by (6) and (8), for any m,n , if n is the GN of F(x) then:

(a) If mBSUBn is true, then [\neg R(m,n)] is PA-provable; and m is a PA-proof of [F(n)] ;

(b) If \neg mBSUBn is true, then [R(m,n)] is PA-provable; and m is not a PA-proof of [F(n)] .

(10) In his Theorem VI, Gödel then argues as follows:

(a) Let q be the GN of the formula [R(x,y)] defined in (8).

(b) Let p be the GN of [(\forall x)R(x,y)] .

(c) Let r be the GN of [R(x,p)] .

(d) Let 17Gen\ r be the GN of [(\forall x)R(x,p)] .

(11) We note that all the above primitive recursive relations are formally well-defined within the standard primitive recursive arithmetic PRA; and all the PA-formulas—as well as their corresponding Gödel-numbers—are well-defined in the first-order Peano Arithmetic PA.

In other words, as Gödel emphasised in his paper, the 46—i.e., 45 + xBSUBy —PRA functions and relations that he defines are all bounded, and therefore effectively decidable as true or false over the domain N of the natural numbers; whilst the PA-formulas that he defines do not involve any reference—or self-reference—to either the meaning or the truth/falsity of any PA-formulas under an interpretation in N , but only to their PA-provability which, he shows, is effectively decidable by his system of Gödel-numbering and his definition of the primitive recursive relation xBy .

(12) If we now substitute p for n , and [(\forall x)R(x,p)] for [F(n)] , in (9) we have (since p is the GN of [(\forall x)R(x,y)] ) that:

(i) If mBSUBp is true, then [\neg R(m,p)] is PA-provable; whence m is a PA-proof of [(\forall x)R(x,p)] ;

(ii) If \neg mBSUBp is true, then [R(m,p)] is PA-provable; whence m is not a PA-proof of [(\forall x)R(x,p)] .

Hence n = p answers Query 1 affirmatively.

D: Gödel’s conclusions

(13) Gödel then concludes that, if PA is consistent then:

By (12)(i), if mBSUBp is true for some m , then both [\neg R(m,p)] and [(\forall x)R(x,p)] are PA-provable—a contradiction since the latter implies, by instantiation in PA, that [R(m,p)] is also provable in PA.

Hence [(\forall x)R(x,p)] , whose GN is 17Gen\ r , is not provable in PA if PA is consistent.

(14) Moreover, if PA is assumed to also be \omega -consistent (which means that we cannot have a PA-provable formula [\neg (\forall x)F(x)] such that [F(m)] is also provable in PA for any given numeral [m] ) then:

By (13), m is not a PA-proof of [(\forall x)R(x,p)] for any given m ; whence [R(m,p)] is PA-provable for any given m by (12)(ii).

Hence, since [R(m,p)] is PA-provable for any given m , the formula [\neg (\forall x)R(x,p)] , whose GN is Neg(17Gen r) , is not provable in PA if PA is \omega -consistent.

E: Gödel’s [(\forall x)R(x,p)] does not refer to itself

We note that Gödel’s formula [(\forall x)R(x,p)] —whose GN is 17Gen\ r — does not refer to itself since it is defined in terms of the natural number p , and not in terms of the natural number 17Gen\ r .

F: Somewhere, far beyond Gödel

The consequences of Gödel’s path-breaking answer to Query 1 are far-reaching (as detailed in this thesis).

For instance, taken together with the proof that PA is categorical with respect to algorithmic computability (Corollary 7.2 of this paper), and that PA is not \omega -consistent (Corollary 8.4 of this paper), the above entails that:

\bullet There can be no interpretation of Gödel’s definition of his formally undecidable arithmetical proposition [(\forall x)R(x,p)] over the domain of the natural numbers—whether expressed mathematically or in any language of common discourse—that could lead to a contradiction;

\bullet Gödel’s [(\forall x)R(x,p)] is not a formally undecidable arithmetical proposition, since [\neg (\forall x)R(x,p)] is PA-provable (see Corollary 8.2 of this paper).

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

In this post I address two critical issues, as raised in private correspondence with researchers, which may illuminate some objections to Gödel’s reasoning and conclusions that have been raised elsewhere by Wittgenstein, Floyd, Putnam et al.:

(i) By Rosser’s reasoning, doesn’t simple consistency suffice for defining an undecidable arithmetical proposition?

(ii) Doesn’t Gödel’s undecidable formula assert its own unprovability?

NOTE: The following correspondence refers copiously to this paper that was presented in June 2015 at the workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, University of Montpellier, France.

Subsequently, most of the cited results were detailed formally in the following paper due to appear in the December 2016 issue of ‘Cognitive Systems Research‘:

The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Gödelian thesis‘.

A: Doesn’t simple consistency suffice for defining Rosser’s undecidable arithmetical proposition?

You claim that the PA system is \omega-inconsistent, and that Gödel’s first theorem holds vacuously. But by Rosser’s result, simple consistency suffices.

Well, it does seem surprising that Rosser’s claim—that his ‘undecidable’ proposition only assumes simple consistency—has not been addressed more extensively in the literature. Number-theoretic expositions of Rosser’s proof have generally remained either implicit or sketchy (see, for instance, this post).

Note that Rosser’s proposition and reasoning involve interpretation of an existential quantifier, whilst Gödel’s proposition and reasoning only involve interpretation of a universal quantifier.

The reason why Rosser’s claim is untenable is that—in order to interpret the existential quantifier as per Hilbert’s \epsilon-calculus—Rosser’s argument needs to assume his Rule C (see Elliott Mendelson, Introduction to Mathematical Logic, 1964 ed., p.73), which implicitly assumes that Gödel’s arithmetic P—in which Rosser’s argumentation is grounded—is \omega-consistent.

See, for instance, this analysis of (a) Wang’s outline of Rosser’s argument on p.5, (b) Beth’s outline of Rosser’s argument on p.6, and (c) Mendelson’s exposition of Rosser’s argument in Section 4.2 on p.8.

Moreover, the assumption is foundationally fragile, because Rule C invalidly assumes that we can introduce an ‘unspecified’ formula denoting an ‘unspecified’ numeral into PA even if the formula has not been demonstrated to be algorithmically definable in terms of the alphabet of PA.

See Theorem 8.5 and following remarks in Section 8, pp.7-8 of this paper that was presented in June 2015 at the workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, University of Montpellier, France.

B: As I see it, rule C is only a shortcut.

As I see it, rule C is only a shortcut; it is totally eliminable. Moreover, it is part of predicate logic, not of the Peano’s arithmetic.

Assuming that Rule C is a short cut which can always be eliminated is illusory, and is tantamount to invalidly (see Corollary 8.6, p.17 of the Epsilon 2015 paper) claiming that Hilbert’s \epsilon calculus is a conservative extension of the first-order predicate calculus.

Reason: Application of Rule C invalidly (see Theorem 8.5 and following remarks in Section 8, pp.7-8 of the Epsilon 2015 paper) involves introduction of a new individual constant, say [d], in a first-order theory K (see Mendelson 1964, p.74, I(iv)); ‘invalidly’ since Rule C does not qualify that [d] must be algorithmically computable from the alphabet of K—which is necessary if K is first-order.
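For readers who do not have Mendelson to hand, a Rule C step has the following schematic shape (a minimal sketch of the usual textbook presentation; only the side-conditions relevant to the objection above are shown, not the full statement of the rule):

k. (\exists x)F(x)

k+1. F(d) \ldots (by Rule C, where [d] is a new individual constant not occurring earlier in the derivation)

\vdots

m. G \ldots (where the final conclusion G must not contain [d] )

The point at issue above is that nothing in this schema requires the ‘unspecified’ constant [d] to be algorithmically definable from the alphabet of K.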

Notation: We use square brackets to indicate that the expression within the brackets denotes a well-formed formula of a formal system, say K, that is to be viewed syntactically merely as a first-order string of K—i.e., one which is finitarily constructed from the alphabet of the language of K—without any reference to its meaning under any interpretation of K.

Essentially, Rule C mirrors in K the intuitionistically objectionable postulation that the formula [(\exists x)F(x)] of K can always be interpreted as:

F'(a) holds for some element a

in the domain of the interpretation of K under which the formula [F(x)] interprets as the relation F'(x).

The Epsilon 2015 paper shows that this is not a valid interpretation of the formula [(\exists x)F(x)] under any finitary, evidence-based, interpretation of K.

That, incidentally, is a consequence of the proof that PA is not \omega-consistent; which itself is a consequence of (Theorem 7.1, p.15, of the Epsilon 2015 paper):

Provability Theorem for PA: A PA formula [F(x)] is provable if, and only if, [F(x)] interprets as an arithmetical relation F'(x) that is algorithmically computable as always true (see Definition 3, p.7, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers.

Compare with what Gödel has essentially shown in his famous 1931 paper on formally undecidable arithmetical propositions, which is that (Lemma 8.1, p.16, of the Epsilon 2015 paper):

Gödel: There is a PA formula [R(x, p)]—which Gödel refers to by its Gödel number r—which is not provable in PA, even though [R(x, p)] interprets as an arithmetical relation that is algorithmically verifiable as always true (see Definition 4, p.7, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers.

C: If you by-pass the intuitionist objections, would all logicist and post-formalist theories hold?

If I have understood correctly, you claim that the PA system is \omega-inconsistent from an intuitionistic point of view? If you by-pass the intuitionist objections, would all logicist and post-formalist theories hold?

There is nothing to bypass—the first-order Peano Arithmetic PA is a formal axiomatic system which is \omega-inconsistent as much for an intuitionist, as it is for a realist, a finitist, a formalist, a logicist or a nominalist.

Philosophers may differ about beliefs that are essentially unverifiable; but the \omega-inconsistency of PA is a verifiable logical meta-theorem that none of them would dispute.

D: Isn’t Gödel’s undecidable formula [(\forall x)R(x, p)]—which Gödel refers to by its Gödel number 17Gen\ r—self-referential?

Isn’t Gödel’s undecidable formula [(\forall x)R(x, p)]—which Gödel refers to by its Gödel number 17Gen\ r—self-referential and covertly paradoxical?

According to Wittgenstein it interprets in any model as a sentence that is devoid of sense, or even meaning. I think a good reason for this is that the formula is simply syntactically wrongly formed: the provability of provability is not defined and can not be consistently defined.

What you propose may be correct, but for automation systems of deduction wouldn’t \omega-inconsistency be much more problematic than undecidability?

How would you feel if a syntax rule is proposed, that formulas containing numerals are instantiations of open formulas that may not be part of the canonical language? Too daring, may be?

Let me briefly respond to the interesting points that you have raised.

1. The \omega-inconsistency of PA is a meta-theorem; it is a Corollary of the Provability Theorem of PA (Theorem 7.1, p.15, of the Epsilon 2015 paper).

2. Gödel’s PA-formula [(\forall x)R(x, p)] is not an undecidable formula of PA. It is merely unprovable in PA.

3. Moreover, Gödel’s PA-formula [\neg(\forall x)R(x, p)] is provable in PA, which is why the PA formula [(\forall x)R(x, p)] is not an undecidable formula of PA.

4. Gödel’s PA-formula [(\forall x)R(x, p)] is not self-referential.

5. Wittgenstein correctly believed—albeit purely on the basis of philosophical considerations unrelated to whether or not Gödel’s formal reasoning was correct—that Gödel was wrong in stating that the PA formula [(\forall x)R(x, p)] asserts its own unprovability in PA.

Reason: We have for Gödel’s primitive recursive relation Q(x, y) that:

Q(x, p) is true if, and only if, the PA formula [R(x, p)] is provable in PA.

However, in order to conclude that the PA formula [(\forall x)R(x, p)] asserts its own unprovability in PA, Gödel’s argument must further imply—which it does not—that:

(\forall x)Q(x, p) is true (and so, by Gödel’s definition of Q(x, y), the PA formula [(\forall x)R(x, p)] is not provable in PA) if, and only if, the PA formula [(\forall x)R(x, p)] is provable in PA.

In other words, for the PA formula [(\forall x)R(x, p)] to assert its own unprovability in PA, Gödel must show—which his own argument shows is impossible, since the PA formula [(\forall x)R(x, p)] is not provable in PA—that:

The primitive recursive relation Q(x, p) is algorithmically computable as always true if, and only if, the arithmetical relation R'(x, p) is algorithmically computable as always true (where R'(x, p) is the arithmetical interpretation of the PA formula [R(x, p)] over the structure \mathbb{N} of the natural numbers).

6. Hence, Gödel’s PA-formula [(\forall x)R(x, p)] is not covertly paradoxical.

7. IF Wittgenstein believed that the PA formula [(\forall x)R(x, p)] is empty of meaning and has no valid interpretation, then he was wrong, and—as Gödel justifiably believed—he could not have properly grasped Gödel’s formal reasoning that:

(i) ‘17Gen\ r is not \kappa-provable’ is a valid meta-theorem if PA is consistent, which means that:

‘If PA is consistent and we assume that the PA formula [(\forall x)R(x, p)] is provable in PA, then the PA formula [\neg(\forall x)R(x, p)] must also be provable in PA; from which we may conclude that the PA formula [(\forall x)R(x, p)] is not provable in PA’

(ii) ‘Neg(17Gen\ r) is not \kappa-provable’ is a valid meta-theorem ONLY if PA is \omega-consistent, which means that:

‘If PA is \omega-consistent and we assume that the PA formula [\neg(\forall x)R(x, p)] is provable in PA, then the PA formula [(\forall x)R(x, p)] must also be provable in PA; from which we may conclude that the PA formula [\neg(\forall x)R(x, p)] is not provable in PA’.

8. In fact the PA formula [(\forall x)R(x, p)] has the following TWO meaningful interpretations (the first of which is a true arithmetical meta-statement—since the PA formula [R(n, p)] is provable in PA for any PA-numeral [n]—but the second is not—since the PA formula [(\forall x)R(x, p)] is not provable in PA):

(i) For any given natural number n, there is an algorithm which will verify that each of the arithmetical meta-statements ‘R'(1, p) is true’, ‘R'(2, p) is true’, …, ‘R'(n, p) is true’ holds under the standard, algorithmically verifiable, interpretation \mathbb{M} of PA (see \S 5, p.11 of the Epsilon 2015 paper);

(ii) There is an algorithm which will verify that, for any given natural number n, the arithmetical statement ‘R'(n, p) is true’ holds under the finitary, algorithmically computable, interpretation \mathbb{B} of PA (see \S 6, p.13 of the Epsilon 2015 paper).

9. IF Wittgenstein believed that the PA formula [(\forall x)R(x, p)] is not a well-defined PA formula, then he was wrong.

Gödel’s definition of the PA formula [(\forall x)R(x, p)] yields a well-formed formula in PA, and cannot be treated as ‘syntactically wrongly formed’.

10. The Provability Theorem for PA shows that both ‘proving something in PA’ and ‘proving that something is provable in PA’ are finitarily well-defined meta-mathematical concepts.

11. The Provability Theorem for PA implies that PA is complete with respect to the concepts of satisfaction, truth and provability definable in automated deduction systems, which can only define algorithmically computable truth.

12. The Provability Theorem for PA implies that PA is categorical, so you can introduce your proposed syntax rule ONLY if it leads to a conservative extension of PA.

13. Whether ‘daring’ or not, why would you want to introduce such a rule?

E: Consider these two statements of yours …

Consider these two statements of yours:

“(iv): p is the Gödel-number of the formula [(\forall x)R(x, y)] of PA” and

“D(4): Gödel’s PA-formula [(\forall x)R(x, p)] is not self-referential.”

If ‘p‘ is the Gödel-number of the open formula in para (iv), and the second argument of the closed formula R in para D(4) is ‘p‘, then the second formula is obtained by instantiating the variable ‘y‘ in the first with its own Gödel-number.

So how would you call, in one word, the relation between the entire formula (in D(4)) and its second argument?

Para D(4) is an attempt to clarify precisely this point.

1. Apropos the first statement ‘(iv)’ cited by you:

From a pedantic perspective, the “relation between the entire formula (in D(4)) and its second argument” cannot be termed self-referential because the “second argument”, i.e., p, is the Gödel-number of the PA formula [(\forall x)R(x, y)], and not that of “the entire formula (in 4)”, i.e., of the formula [(\forall x)R(x, p)] itself (whose Gödel number is 17Gen\ r).

Putting it crudely, 17Gen\ r is neither self-referential—nor circularly defined—because it is not defined in terms of 17Gen\ r, but in terms of p.

2. Apropos the second statement ‘D(4)’ cited by you:

I would interpret:

Gödel’s PA-formula [(\forall x)R(x, p)] is self-referential

to mean, in this particular context, that—as Gödel wrongly claimed:

[(\forall x)R(x, p)] asserts its own unprovability in PA.

Now, if we were to accept the claim that [(\forall x)R(x, p)] is self-referential in the above sense, then (as various critics of Gödel’s reasoning have pointed out) we would have to conclude further that Gödel’s argument leads to the contradiction:

(\forall x)Q(x, p) is true—and so, by Gödel’s definition of Q(x, y)—the PA formula [(\forall x)R(x, p)] is not provable in PA—if, and only if, the PA formula [(\forall x)R(x, p)] is provable in PA.

However, in view of the Provability Theorem of PA (Theorem 7.1, p.15, of the Epsilon 2015 paper), this contradiction would only follow if Gödel’s argument were to establish (which it does not) that:

The primitive recursive relation Q(x, p) is algorithmically computable as always true if, and only if, the arithmetical interpretation R'(x, p) of the PA formula [R(x, p)] is algorithmically computable as always true over the structure \mathbb{N} of the natural numbers.

The reason Gödel cannot claim to have established the above is that his argument only proves the much weaker meta-statement:

The arithmetical interpretation R'(x, p) of the PA formula [R(x, p)] is algorithmically verifiable as always true over the structure \mathbb{N} of the natural numbers.

Ergo—contrary to Gödel’s claim—Gödel’s PA-formula [(\forall x)R(x, p)] is not self-referential (and so, even though Gödel’s claimed interpretation of what his own reasoning proves is wrong, there is no paradox in Gödel’s reasoning per se)!

F: Is the PA system \omega-inconsistent without remedy?

Is the PA system \omega-inconsistent without remedy? Is it possible to introduce a new axiom or new rule which by-passes the problematic unprovable statements of the Gödel-Rosser Theorems?

1. Please note that the first-order Peano Arithmetic PA is:

(i) consistent (Theorem 7.3, p.15, of the Epsilon 2015 paper); which means that for any PA-formula [A], we cannot have that both [A] and [\neg A] are Theorems of PA;

(ii) complete (Theorem 7.1, p.15, of the Epsilon 2015 paper); which means that we cannot add an axiom to PA which is not a Theorem of PA without inviting inconsistency;

(iii) categorical (Theorem 7.2, p.15, of the Epsilon 2015 paper); which means that if \mathbb{M} is an interpretation of PA over a structure \mathbb{S}, and \mathbb{B} is an interpretation of PA over a structure \mathbb{T}, then \mathbb{S} and \mathbb{T} are identical and denote the structure \mathbb{N} of the natural numbers defined by Dedekind’s axioms; and so PA has no model which contains an element that is not a natural number (see Footnote 54, p.16, of the Epsilon 2015 paper).

2. What this means with respect to Gödel’s reasoning is that:

(i) PA has no undecidable propositions, which is why it is not \omega-consistent (Corollary 8.4, p.16, of the Epsilon 2015 paper);

(ii) The Gödel formula [(\forall x)R(x, p)] is not provable in PA; but it is algorithmically verifiable as true (Corollary 8.3, p.16, of the Epsilon 2015 paper) under the algorithmically verifiable standard interpretation \mathbb{M} of PA (see Section 5, p.11, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers;

(iii) The Gödel formula [(\forall x)R(x, p)] is not provable in PA; and it is algorithmically computable as false (Corollary 8.3, p.16, of the Epsilon 2015 paper) under the algorithmically computable finitary interpretation \mathbb{B} of PA (see Section 6, p.13, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers;

(iv) The Gödel formula [\neg(\forall x)R(x, p)] is provable in PA; and it is therefore also algorithmically verifiable as true under the algorithmically verifiable standard interpretation \mathbb{M} of PA over the structure \mathbb{N} of the natural numbers—which means that the logic by which the standard interpretation of PA assigns values of ‘satisfaction’ and ‘truth’ to the formulas of PA (under Tarski’s definitions) may be paraconsistent (see http://plato.stanford.edu/entries/logic-paraconsistent) since PA is consistent;

(v) The Gödel formula [\neg(\forall x)R(x, p)] is provable in PA; and it is therefore algorithmically computable as true (Corollary 8.2, p.16, of the Epsilon 2015 paper) under the algorithmically computable finitary interpretation \mathbb{B} of PA over the structure \mathbb{N} of the natural numbers.

3. It also means that:

(a) The “Gödel-Rosser Theorem” is not a Theorem of PA;

(b) The “unprovable Gödel sentence” is not a “problematic statement”;

(c) The “PA system” does not require a “remedy” just because it is “\omega-inconsistent”;

(d) No “new axiom or new rule” can “by-pass the unprovable sentence”.

4. Which raises the question:

Why do you see the “unprovable Gödel sentence” as a “problematic statement” that requires a “remedy” which must “by-pass the unprovable sentence”?

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought

Christopher Mole is an associate professor of philosophy at the University of British Columbia, Vancouver. He is the author of Attention is Cognitive Unison: An Essay in Philosophical Psychology (OUP, 2011), and The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought (Routledge, 2016).

In his preface to The Unexplained Intellect, Mole emphasises that his book is an attempt to provide arguments for (amongst others) the three theses that:

(i) “Intelligence might become explicable if we treat intelligent thought as if it were some sort of computation”;

(ii) “The importance of the rapport between an organism and its environment must \ldots be understood from a broadly computational perspective”;

(iii) “\ldots our difficulties in accounting for our psychological orientation with respect to time are indications of the need to shift our philosophical focus away from mental states—which are altogether too static—and towards a theory of the mind in which it is dynamic mental entities that are taken to be metaphysically foundational”.

The Brains blog

Mole explains at length his main claims in The Unexplained Intellect—and the cause that those claims serve—in a lucid and penetrating, VI-part, series of invited posts in The Brains blog (a leading forum for work in the philosophy and science of mind that was founded in 2005 by Gualtiero Piccinini, and has been administered by John Schwenkler since late 2011).

In these posts, Mole seeks to make the following points.

I: The Unexplained Intellect: The mind is not a hoard of sentences

We do not currently have a satisfactory account of how minds could be had by material creatures. If such an account is to be given then every mental phenomenon will need to find a place within it. Many will be accounted for by relating them to other things that are mental, but there must come a point at which we break out of the mental domain, and account for some things that are mental by reference to some that are not. It is unclear where this break out point will be. In that sense it is unclear which mental entities are, metaphysically speaking, the most fundamental.

At some point in the twentieth century, philosophers fell into the habit of writing as if the most fundamental things in the mental domain are mental states (where these are thought of as states having objective features of the world as their truth-evaluable contents). This led to a picture in which the mind was regarded as something like a hoard of sentences. The philosophers and cognitive scientists who have operated with this picture have taken their job to be telling us what sort of content these mental sentences have, how that content is structured, how the sentences come to have it, how they get put into and taken out of storage, how they interact with one another, how they influence behaviour, and so on.

This emphasis on states has caused us to underestimate the importance of non-static mental entities, such as inferences, actions, and encounters with the world. If we take these dynamic entities to be among the most fundamental of the items in the mental domain, then — I argue — we can avoid a number of philosophical problems. Most importantly, we can avoid a picture in which intelligent thought would be beyond the capacities of any physically implementable system.

II: The Unexplained Intellect: Computation and the explanation of intelligence

A lot of philosophers think that consciousness is what makes the mind/body problem interesting, perhaps because they think that consciousness is the only part of that problem that remains wholly philosophical. Other aspects of the mind are taken to be explicable by scientific means, even if explanatorily adequate theories of them remain to be specified.

\ldots I’ll remind the reader of computability theory’s power, with a view to indicating how it is that the discoveries of theoretical computer scientists place constraints on our understanding of what intelligence is, and of how it is possible.

III: The Unexplained Intellect: The importance of computability

If we found that we had been conceiving of intelligence in such a way that intelligence could not be modelled by a Turing Machine, our response should not be to conclude that some alternative must be found to a ‘Classically Computational Theory of the Mind’. To think only that would be to underestimate the scope of the theory of computability. We should instead conclude that, on the conception in question, intelligence would (be) absolutely inexplicable. This need to avoid making intelligence inexplicable places constraints on our conception of what intelligence is.

IV: The Unexplained Intellect: Consequences of imperfection

The lesson to be drawn is that, if we think of intelligence as involving the maintenance of satisfiable beliefs, and if we think of our beliefs as corresponding to a set of representational states, then our intelligence would depend on a run of good luck the chances of which are unknown.

My suggestion is that we can reach a more explanatorily satisfactory conception of intelligence if we adopt a dynamic picture of the mind’s metaphysical foundations.

V: The Unexplained Intellect: The importance of rapport

I suggest that something roughly similar is true of us. We are not guaranteed to have satisfiable beliefs, and sometimes we are rather bad at avoiding unsatisfiability, but such intelligence as we have is to be explained by reference to the rapport between our minds and the world.

Rather than starting from a set of belief states, and then supposing that there is some internal process operating on these states that enables us to update our beliefs rationally, we should start out by accounting for the dynamic processes through which the world is epistemically encountered. Much as the three-colourable map generator reliably produces three-colourable maps because it is essential to his map-making procedure that borders appear only where they will allow for three colorability, so it is essential to what it is for a state to be a belief that beliefs will appear only if there is some rapport between the believer and the world. And this rapport — rather than any internal processing considered in isolation from it — can explain the tendency for our beliefs to respect the demands of intelligence.

VI: The Unexplained Intellect: The mind’s dynamic foundations

\ldots memory is essentially a form of epistemic retentiveness: One’s present knowledge counts as an instance of memory when and only when it was attained on the basis of an epistemic encounter that lies in one’s past. One can epistemically encounter a proposition as the conclusion of an argument, and so can encounter it before the occurrence of any event to which it pertains, but one cannot encounter an event in that way. In the resulting explanation of memory’s temporal asymmetry, it is the dynamic events of epistemic encountering to which we must make reference. These encounters, and not the knowledge states to which they lead, do the lion’s share of the explanatory work.

A: Simplifying Mole’s perspective

It may help simplify Mole’s thought-provoking perspective if we make an arbitrary distinction between:

(i) The mind of an applied scientist, whose primary concern is our sensory observations of a ‘common’ external world;

(ii) The mind of a philosopher, whose primary concern is abstracting a coherent perspective of the external world from our sensory observations; and

(iii) The mind of a mathematician, whose primary concern is adequately expressing such abstractions in a formal language of unambiguous communication.

My understanding of Mole’s thesis, then, is that:

(a) although a mathematician’s mind may be capable of defining the ‘truth’ value of some logical and mathematical propositions without reference to the external world,

(b) the ‘truth’ value of any logical or mathematical proposition that purports to represent any aspect of the real world must be capable of being evidenced objectively to the mind of an applied scientist; and that,

(c) of the latter ‘truths’, what should interest the mind of a philosopher is whether there are some that are ‘knowable’ completely independently of the passage of time, and some that are ‘knowable’ only partially, or incrementally, with the passage of time.

B. Support for Mole’s thesis

It also seems to me that Mole’s thesis implicitly subsumes, or at the very least echoes, the belief expressed by Chetan R. Murthy (‘An Evaluation Semantics for Classical Proofs‘, Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, 1991; also Cornell TR 91-1213):

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …”

If so, the thesis seems significantly supported by the following paper that is due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Goedelian Thesis

The CSR paper implicitly suggests that there are, indeed, (only?) two ways of assigning ‘true’ or ‘false’ values to any mathematical description of real-world events.

C. Algorithmic computability

First, a number theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL_{F} that can provide objective evidence (cf. ibid Murthy 91) for deciding the truth/falsity of each proposition in the denumerable sequence \{F(1), F(2), \ldots\}.

(We note that the concept of `algorithmic computability’ is essentially an expression of the more rigorously defined concept of `realizability’ on p.503 of Stephen Cole Kleene’s ‘Introduction to Metamathematics‘, North Holland Publishing Company, Amsterdam.)

D. Algorithmic verifiability

Second, a number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL_{(F,\ n)} which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence \{F(1), F(2), \ldots, F(n)\}.

We note that algorithmic computability implies the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of any such algorithm.
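
The difference between the two notions is essentially one of quantifier order, and can be sketched schematically as follows (a minimal illustrative sketch, not taken from the CSR paper; the relation used is a trivially decidable placeholder, chosen purely for demonstration, for which the two notions happen to coincide):

```python
from typing import Callable, List

Relation = Callable[[int], bool]   # stands in for the interpreted relation F(x)

def verifier(F: Relation, n: int) -> List[bool]:
    """An instance-specific decider AL_(F, n): evidences F(1), ..., F(n) for this n only.

    For an algorithmically verifiable relation, a different decider may be
    needed for each n; nothing requires that they fit into one algorithm."""
    return [F(i) for i in range(1, n + 1)]

def uniform_decider(F: Relation) -> Callable[[int], bool]:
    """A single decider AL_F: must settle F(n) for every n by one fixed algorithm."""
    return lambda m: F(m)

is_even: Relation = lambda x: x % 2 == 0          # placeholder relation
print(verifier(is_even, 4))                       # [False, True, False, True]
print(uniform_decider(is_even)(10))               # True
```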

The following theorem (Theorem 2.1, p.37 of the CSR paper) shows that although every algorithmically computable relation is algorithmically verifiable, the converse is not true:

Theorem: There are number theoretic functions that are algorithmically verifiable but not algorithmically computable.

E. The significance of algorithmic ‘truth’ assignments for Mole’s theses

The significance of such algorithmic ‘truth’ assignments for Mole’s theses is that:

Algorithmic computability—reflecting the ambit of classical Newtonian mechanics—characterises natural phenomena that are determinate and predictable.

Such phenomena are describable by mathematical propositions that can be termed as ‘knowable completely’, since at any point of time they are algorithmically computable as ‘true’ or ‘false’.

Hence both their past and future behaviour is completely computable, and their ‘truth’ values are therefore ‘knowable’ independent of the passage of time.

Algorithmic verifiability—reflecting the ambit of Quantum mechanics—characterises natural phenomena that are determinate but unpredictable.

Such phenomena are describable by mathematical propositions that can only be termed as ‘knowable incompletely’, since at any point of time they are only algorithmically verifiable, but not algorithmically computable, as ‘true’ or ‘false’.

Hence, although their past behaviour is completely computable, their future behaviour is not completely predictable, and their ‘truth’ values are not independent of the passage of time.

F. Where Mole’s implicit faith in the adequacy of set theoretical representations of natural phenomena may be misplaced

It also seems to me that, although Mole’s analysis justifiably holds that the:

“\ldots importance of the rapport between an organism and its environment”

has been underacknowledged, or even overlooked, by existing theories of the mind and intelligence, it does not seem to mistrust, and therefore does not ascribe such underacknowledgement to any lacuna in, the mathematical and epistemic foundations of the formal language in which almost all descriptions of real-world events are currently sought to be expressed: the language of the set theory ZF.

G. Any claim to a physically manifestable ‘truth’ must be objectively accountable

Now, so far as applied science is concerned, history teaches us that the ‘truth’ of any mathematical proposition that purports to represent any aspect of the external world must be capable of being evidenced objectively; and that such ‘truths’ must not be only of a subjective and/or revelationary nature which may require truth-certification by evolutionarily selected prophets.

(Not necessarily religious—see, for instance, Melvyn B. Nathanson’s remarks, “Desperately Seeking Mathematical Truth“, in the Opinion piece in the August 2008 Notices of the American Mathematical Society, Vol. 55, Issue 7.)

The broader significance of seeking objective accountability is that it admits the following (admittedly iconoclastic) distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA as the language of science; and

2. The first-order Set Theory ZF as the language of science fiction.

It is a distinction that is faintly reflected in Stephen G. Simpson’s more conservative perspective in his paper ‘Partial Realizations of Hilbert’s Program‘ (\S6.4, p.15):

“Finitistic reasoning (my read: ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (my read: ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.”

The distinction is supported by the formal argument (detailed in the above-cited CSR paper) that:

(i) PA has two, hitherto unsuspected, evidence-based interpretations, the first of which can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions; and the second can be treated as circumscribing the ambit of mechanistic reasoning about ‘true’ arithmetical propositions.

What this means is that the language of arithmetic—formally expressed as PA—can provide all the foundational needs for all practical applications of mathematics in the physical sciences. This was the point that I sought to make—in a limited way, with respect to quantum phenomena—in the following paper presented at Unilog 2015, Istanbul, last year:

Algorithmically Verifiable Logic vis `a vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?

(Presented on 26’th June at the workshop on ‘Emergent Computational Logics’ at UNILOG’2015, 5th World Congress and School on Universal Logic, 20th June 2015 – 30th June 2015, Istanbul, Turkey.)

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see Theorem 1 in \S4 of this post), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about ‘true’ set-theoretical propositions, or that of mechanistic reasoning about ‘true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that—although of possible interest to philosophers of science—are only mentally conceivable by mathematicians subjectively, and have no verifiable physical counterparts, or immediately practical applications of mathematics, that can materially impact on the study of physical phenomena.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

H. The importance of Mole’s ‘rapport’

Accordingly, I see it as axiomatic that the relationship between an evidence-based mathematical language and the physical phenomena that it purports to describe, must be in what Mole terms as ‘rapport’, if we view mathematics as a set of linguistic tools that have evolved:

(a) to adequately abstract and precisely express through human reasoning our observations of physical phenomena in the world in which we live and work; and

(b) to unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning in order to function to the maximum of our co-operative potential in achieving a better understanding of physical phenomena.

This is the perspective that I sought to present in the following paper presented at Epsilon 2015, Montpellier, last June, where I argue against the introduction of ‘unspecifiable’ elements (such as completed infinities) into either a formal language or any of its evidence-based interpretations (in support of the argument that since a completed infinity cannot be evidence-based, it must therefore be dispensable in any purported description of reality):

Why Hilbert’s and Brouwer’s interpretations of quantification are complementary and not contradictory.’

(Presented on 10th June at the Epsilon 2015 workshop on ‘Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics’, 10th June 2015 – 12th June 2015, University of Montpellier, France.)

I. Why mathematical reasoning must reflect an ‘agnostic’ perspective

Moreover, from a non-mathematician’s perspective, a Propertarian like Curt Doolittle would seem justified in his critique (comment of June 2, 2016 in this Quanta review) of the seemingly ‘mystical’ and ‘irrelevant’ direction in which conventional interpretations of Hilbert’s ‘theistic’ and Brouwer’s ‘atheistic’ reasoning appear to have pointed mainstream mathematics; for, as I argue informally in an earlier post, the ‘truths’ of any mathematical reasoning must reflect an ‘agnostic’ perspective.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

In a recent paper A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory, authors Adam Yedidia and Scott Aaronson argue upfront in their Introduction that:

Like any axiomatic system capable of encoding arithmetic, ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency. Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.

The question arises:

How reasonable is it to build modern mathematics on top of a Set Theory such as ZF?

Some immediate points to ponder upon (see also reservations expressed by Stephen G. Simpson in Logic and Mathematics and in Partial Realizations of Hilbert’s Program):

1. “Like any axiomatic system capable of encoding arithmetic, …”

The implicit assumption here that every ZF formula which is provable about the finite ZF ordinals must necessarily interpret as a true proposition about the natural numbers is fragile since, without such an assumption, we can only conclude from Goodstein’s argument (see Theorem 1.1 here) that a Goodstein sequence defined over the finite ZF ordinals must terminate even if the corresponding Goodstein sequence over the natural numbers does not terminate!
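
For concreteness, the following sketch (an illustration only, not part of Goodstein’s or the cited argument) computes the first few terms of a Goodstein sequence over the natural numbers: each term is rewritten in hereditary base notation, the base is bumped by one, and 1 is subtracted.

```python
def bump_base(n, b):
    """Rewrite n in hereditary base-b notation with every occurrence of b replaced by b+1."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n > 0:
        digit = n % b
        # exponents are themselves rewritten hereditarily
        total += digit * (b + 1) ** bump_base(power, b)
        n //= b
        power += 1
    return total

def goodstein_prefix(m, terms=6):
    """Return the first `terms` values of the Goodstein sequence starting at m."""
    seq, base = [m], 2
    while len(seq) < terms and seq[-1] > 0:
        seq.append(bump_base(seq[-1], base) - 1)
        base += 1
    return seq

print(goodstein_prefix(3))   # [3, 3, 3, 2, 1, 0] -- reaches 0 quickly
print(goodstein_prefix(4))   # [4, 26, 41, 60, 83, 109] -- grows for a very long time
```

The sequence starting at 4 grows for an extraordinarily long time, which is precisely why it matters whether a termination claim is certified over the finite ZF ordinals or over the natural numbers.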

2. “ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency.”

The implicit assumption here is that ZF is \omega-consistent, which implies that ZF is consistent and must therefore have an interpretation over some mathematically definable structure in which ZF theorems interpret as ‘true’.

The question arises: Must such ‘truth’ be capable of being evidenced objectively, or is it only of a subjective, revelationary, nature (which may require truth-certification by evolutionarily selected prophets—see Nathanson’s remarks as cited in this post)?

The significance of seeking objective accountability is that in a paper, “The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis“, which is due to appear in the December 2016 issue of Cognitive Systems Research, we show (see also this post) that the first-order Peano Arithmetic PA:

(i) is finitarily consistent; but

(ii) is not \omega-consistent; and

(iii) has no ‘undecidable’ arithmetical proposition (whence both of Gödel’s Incompleteness Theorems hold vacuously so far as the arithmetic of the natural numbers is concerned).

3. “Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.”

Now, one justification for such an assumption (without which it may be difficult to justify building modern mathematics on top of ZF) could be the belief that acquisition of set-theoretical knowledge by students of mathematics has some essential educational dimension.

If so, one should take into account not only the motivations of such a student for the learning of mathematics, but also those of a mathematician for teaching it.

This, in turn, means that both the content of the mathematics which is to be learnt (or taught), as well as the putative utility of such learning (or teaching) for a student (or teacher), merit consideration.

Considering content, I would iconoclastically submit that the least one may then need to accommodate is the following distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA, which is the language of science; and

2. The first-order Set Theory ZF, which is the language of science fiction.

A distinction that is reflected in Stephen G. Simpson’s more conservative perspective in Partial Realizations of Hilbert’s Program (\S6.4, p.15):

“Finitistic reasoning (read ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (read ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.”

Reason:

(i) PA has two, hitherto unsuspected, evidence-based interpretations (see this post), the first of which can be treated as circumscribing the ambit of human reasoning about `true’ arithmetical propositions; and the second can be treated as circumscribing the ambit of mechanistic reasoning about `true’ arithmetical propositions.

It is this language of arithmetic—formally expressed as PA—that provides the foundation for all practical applications of mathematics where the latter could be argued as having an essential educational dimension.

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see paragraph 4.2 of this post), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about `true’ set-theoretical propositions, or that of mechanistic reasoning about `true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that are only mentally conceivable by mathematicians (subjectively?), and have no physical counterparts, or immediately practical applications of mathematics, which could meaningfully be argued as having an essential educational dimension.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

The distinction is lost when—as seems to be the case currently—we treat the acquisition of mathematical knowledge as necessarily including the body of essentially set-theoretic theorems—to the detriment, I would argue, of the larger body of aspiring students of mathematics whose flagging interest in acquiring such a wider knowledge in universities around the world reflects the fact that, for most students, their interests seem to lie primarily in how a study of mathematics can enable them to:

(a) adequately abstract and precisely express through human reasoning their experiences of the world in which they live and work; and

(b) unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning in order to function to the maximum of their latent potential in achieving their personal real-world goals.

In other words, it is not obvious how any study of mathematics that has the limited goals (a) and (b) can have any essentially educational dimension that justifies the assumption that ZF is consistent.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

A foundational argument for defining Effective Computability formally, and weakening the Church and Turing Theses – II

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

\S 1 The Logical Issue

In the previous posts we addressed first the computational issue, and second the philosophical issue—concerning the informal concept of `effective computability’—that seemed implicit in Selmer Bringsjord’s narrational case against Church’s Thesis [1].

We now address the logical issue that leads to a formal definability of this concept which—arguably—captures our intuitive notion of the concept more fully.

We note that in this paper on undecidable arithmetical propositions we have shown how it follows from Theorem VII of Gödel’s seminal 1931 paper that every recursive function f(x_{1}, x_{2}) is representable in the first-order Peano Arithmetic PA by a formula [F(x_{1}, x_{2}, x_{3})] which is algorithmically verifiable, but not algorithmically computable, if we assume Aristotle’s particularisation, i.e., that the negation of a universally quantified formula of the first-order predicate calculus is always indicative of the existence of a counter-example under the standard interpretation of PA.

In this earlier post on the Birmingham paper, we have also shown that:

(i) The concept of algorithmic verifiability is well-defined under the standard interpretation \mathcal{I}_{PA(\mathbb{N},\ Standard)} of PA over the structure \mathbb{N} of the natural numbers; and

(ii) The concept of algorithmic computability too is well-defined under the algorithmic interpretation \mathcal{I}_{PA(\mathbb{N},\ Algorithmic)} of PA over the structure \mathbb{N} of the natural numbers.

We shall argue in this post that the standard postulation of the Church-Turing Thesis—which postulates that the intuitive concept of `effective computability’ is completely captured by the formal notion of `algorithmic computability’—does not hold if we formally define a number-theoretic formula as effectively computable if, and only if, it is algorithmically verifiable; and it therefore needs to be replaced by a weaker postulation of the Thesis as an instantiational equivalence.

\S 2 Weakening the Church and Turing Theses

We begin by noting that the following theses are classically equivalent [1]:

Standard Church’s Thesis: [2] A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is recursive [3].

Standard Turing’s Thesis: [4] A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is Turing-computable [5].

In this paper we shall argue that, from a foundational perspective, the principle of Occam’s razor suggests the Theses should be postulated minimally as the following equivalences:

Weak Church’s Thesis: A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is instantiationally equivalent to a recursive function (or relation, treated as a Boolean function).

Weak Turing’s Thesis: A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is instantiationally equivalent to a Turing-computable function (or relation, treated as a Boolean function).

\S 2.1 The need for explicitly distinguishing between `instantiational’ and `uniform’ methods

Why Church’s Thesis?

It is significant that both Kurt Gödel (initially) and Alonzo Church (subsequently—possibly under the influence of Gödel’s disquietude) enunciated Church’s formulation of `effective computability’ as a Thesis because Gödel was instinctively uncomfortable with accepting it as a definition that minimally captures the essence of `intuitive effective computability’ [6].

Kurt Gödel’s reservations

Gödel’s reservations seem vindicated if we accept that a number-theoretic function can be effectively computable instantiationally (in the sense of being algorithmically verifiable as defined in the Birmingham paper, reproduced in this post), but not by a uniform method (in the sense of being algorithmically computable as defined in the Birmingham paper, reproduced in this post).

The significance of the fact (considered in the Birmingham paper, reproduced in this post) that `truth’ too can be effectively decidable both instantiationally and by a uniform method under the standard interpretation of PA is reflected in Gödel’s famous 1951 Gibbs lecture[7], where he remarks:

“I wish to point out that one may conjecture the truth of a universal proposition (for example, that I shall be able to verify a certain property for any integer given to me) and at the same time conjecture that no general proof for this fact exists. It is easy to imagine situations in which both these conjectures would be very well founded. For the first half of it, this would, for example, be the case if the proposition in question were some equation F(n) = G(n) of two number-theoretical functions which could be verified up to very great numbers n.” [8]

Alan Turing’s perspective

Such a possibility is also implicit in Turing’s remarks [9]:

“The computable numbers do not include all (in the ordinary sense) definable numbers. Let P be a sequence whose n-th figure is 1 or 0 according as n is or is not satisfactory. It is an immediate consequence of the theorem of \S8 that P is not computable. It is (so far as we know at present) possible that any assigned number of figures of P can be calculated, but not by a uniform process. When sufficiently many figures of P have been calculated, an essentially new method is necessary in order to obtain more figures.”

Boolos, Burgess and Jeffrey’s query

The need for placing such a distinction on a formal basis has also been expressed explicitly on occasion [10].

Thus, Boolos, Burgess and Jeffrey [11] define a diagonal halting function, d, any value of which can be decided effectively, although there is no single algorithm that can effectively compute d.

Now, the straightforward way of expressing this phenomenon should be to say that there are well-defined number-theoretic functions that are effectively computable instantiationally but not uniformly. Yet, following Church and Turing, such functions are labeled as uncomputable [12]!

However, as Boolos, Burgess and Jeffrey note quizzically:

“According to Turing’s Thesis, since d is not Turing-computable, d cannot be effectively computable. Why not? After all, although no Turing machine computes the function d, we were able to compute at least its first few values. For since, as we have noted, f_{1} = f_{2} = f_{3} = the empty function we have d(1) = d(2) = d(3) = 1. And it may seem that we can actually compute d(n) for any positive integer n—if we don’t run out of time.” [13]

Why should Chaitin’s constant \Omega be labelled `uncomputable’?

The reluctance to treat a function such as d(n)—or the function \Omega(n) that computes the n^{th} digit in the decimal expression of a Chaitin constant \Omega [14]—as computable, on the grounds that the `time’ needed to compute it increases monotonically with n, is curious [15]; the same applies to any total Turing-computable function f(n)![16]

Moreover, such a reluctance to treat instantiationally computable functions such as d(n) as not `effectively computable’ is difficult to reconcile with a conventional wisdom that holds the standard interpretation of the first order Peano Arithmetic PA as defining an intuitively sound model of PA.

Reason: We have shown in the Birmingham paper (reproduced in this post) that ‘satisfaction’ and ‘truth’ under the standard interpretation of PA is definable constructively in terms of algorithmic verifiability (instantiational computability).

\S 2.2 Distinguishing between algorithmic verifiability and algorithmic computability

We now show in Theorem 1 that if Aristotle’s particularisation is presumed valid over the structure \mathbb{N} of the natural numbers—as is the case under the standard interpretation of the first-order Peano Arithmetic PA—then it follows from the instantiational nature of the (constructively defined [17]) Gödel \beta-function that a primitive recursive relation can be instantiationally equivalent to an arithmetical relation, where the former is algorithmically computable over \mathbb{N}, whilst the latter is algorithmically verifiable (i.e., instantiationally computable) but not algorithmically computable over \mathbb{N}.[18]

\S 2.2.1 Significance of Gödel’s \beta-function

We note first that in Theorem VII of his seminal 1931 paper on formally undecidable arithmetical propositions Gödel showed that, given a total number-theoretic function f(x) and any natural number n, we can construct a primitive recursive function \beta(z, y, x) and natural numbers b_{n}, c_{n} such that \beta(b_{n}, c_{n}, i) = f(i) for all 0 \leq i \leq n.

In this paper we shall essentially answer the following question affirmatively:

Query 3: Does Gödel’s Theorem VII admit construction of an arithmetical function A(x) such that:

(a) for any given natural number n, there is an algorithm that can verify A(i) = f(i) for all 0 \leq i \leq n (hence A(x) may be said to be algorithmically verifiable if f(x) is recursive);

(b) there is no algorithm that can verify A(i) = f(i) for all 0 \leq i (so A(x) may be said to be algorithmically uncomputable)?

\S 2.2.2 Defining effective computability

Now, in the Birmingham paper (reproduced in this post), we have formally defined what it means for a formula of an arithmetical language to be:

(i) Algorithmically verifiable;

(ii) Algorithmically computable.

under an interpretation.

We shall thus propose the definition:

Effective computability: A number-theoretic formula is effectively computable if, and only if, it is algorithmically verifiable.

Intuitionistically unobjectionable: We note first that since every finite set of integers is recursive, every well-defined number-theoretical formula is algorithmically verifiable, and so the above definition is intuitionistically unobjectionable; and second that the existence of an arithmetic formula that is algorithmically verifiable but not algorithmically computable (Theorem 1) supports Gödel’s reservations on Alonzo Church’s original intention to label his Thesis as a definition [19].

The concept is well-defined, since we have shown in the Birmingham paper (reproduced in this post) that the algorithmically verifiable and the algorithmically computable PA formulas are well-defined under the standard interpretation of PA and that:

(a) The PA-formulas are decidable as satisfied / unsatisfied or true / false under the standard interpretation of PA if, and only if, they are algorithmically verifiable;

(b) The algorithmically computable PA-formulas are a proper subset of the algorithmically verifiable PA-formulas;

(c) The PA-axioms are algorithmically computable as satisfied / true under the standard interpretation of PA;

(d) Generalisation and Modus Ponens preserve algorithmically computable truth under the standard interpretation of PA;

(e) The provable PA-formulas are precisely the ones that are algorithmically computable as satisfied / true under the standard interpretation of PA.

\S 3 Gödel’s Theorem VII and algorithmically verifiable, but not algorithmically computable, arithmetical propositions

In his seminal 1931 paper on formally undecidable arithmetical propositions, Gödel defined a curious primitive recursive function—Gödel’s \beta-function—as [20]:

Definition 1: \beta (x_{1}, x_{2}, x_{3}) = rm(1+(x_{3}+ 1) \star x_{2}, x_{1})

where rm(x_{1}, x_{2}) denotes the remainder obtained on dividing x_{2} by x_{1}.

Gödel showed that the above function has the remarkable property that:

Lemma 1: For any given denumerable sequence of natural numbers, say f(k, 0),\ f(k, 1),\ \ldots, and any given natural number n, we can construct natural numbers b, c, j such that:

(i) j = max(n, f(k, 0), f(k, 1), \ldots, f(k, n));

(ii) c = j!;

(iii) \beta(b, c, i) = f(k, i) for 0 \leq i \leq n.

Proof: This is a standard result [21]. \Box
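
Lemma 1 can be checked mechanically for any small sequence. The following sketch (an illustration only, not taken from Go31 or Me64) constructs such a pair b, c for a given finite sequence, using the Chinese Remainder Theorem that underlies the standard proof:

```python
from math import factorial

def beta(x1, x2, x3):
    """Goedel's beta-function: the remainder of x1 on division by 1 + (x3 + 1) * x2."""
    return x1 % (1 + (x3 + 1) * x2)

def encode(seq):
    """Construct (b, c) with beta(b, c, i) == seq[i] for every index i of the finite sequence."""
    n = len(seq) - 1
    j = max([n] + list(seq))
    c = factorial(j)                                      # c = j!
    moduli = [1 + (i + 1) * c for i in range(len(seq))]   # pairwise coprime, each > seq[i]
    # Chinese Remainder Theorem, built up incrementally: find b with b % moduli[i] == seq[i]
    b, m = 0, 1
    for r, mod in zip(seq, moduli):
        t = ((r - b) * pow(m, -1, mod)) % mod             # solve (b + m*t) % mod == r
        b, m = b + m * t, m * mod
    return b, c

f = [3, 1, 4, 1, 5]                                       # any finite number-theoretic sequence
b, c = encode(f)
assert all(beta(b, c, i) == f[i] for i in range(len(f)))
print(b, c)
```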

Now we have the standard definition [22]:

Definition 2: A number-theoretic function f(x_{1}, \ldots, x_{n}) is said to be representable in PA if, and only if, there is a PA formula [F(x_{1}, \dots, x_{n+1})] with the free variables [x_{1}, \ldots, x_{n+1}], such that, for any given natural numbers k_{1}, \ldots, k_{n+1}:

(i) if f(k_{1}, \ldots, k_{n}) = k_{n+1} then PA proves: [F(k_{1}, \ldots, k_{n}, k_{n+1})];

(ii) PA proves: [(\exists_{1} x_{n+1})F(k_{1}, \ldots, k_{n}, x_{n+1})].

The function f(x_{1}, \ldots, x_{n}) is said to be strongly representable in PA if we further have that:

(iii) PA proves: [(\exists_{1} x_{n+1})F(x_{1}, \ldots, x_{n}, x_{n+1})]

Interpretation of `[\exists_{1}]‘: The symbol `[\exists_{1}]‘ denotes `uniqueness’ under an interpretation which assumes that Aristotle’s particularisation holds in the domain of the interpretation.

Formally, however, the PA formula:

[(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})]

is merely a short-hand notation for the PA formula:

[\neg(\forall x_{3})\neg F(x_{1}, x_{2}, x_{3}) \wedge (\forall y)(\forall z)(F(x_{1}, x_{2}, y) \wedge F(x_{1}, x_{2}, z) \rightarrow y=z)].

We then have:

Lemma 2 \beta(x_{1}, x_{2}, x_{3}) is strongly represented in PA by [Bt(x_{1}, x_{2}, x_{3}, x_{4})], which is defined as follows:

[(\exists w)(x_{1} = ((1 + (x_{3} + 1)\star x_{2}) \star w + x_{4}) \wedge (x_{4} < 1 + (x_{3} + 1) \star x_{2}))].

Proof: This is a standard result [23]. \Box

Gödel further showed (also under the tacit, but critical, presumption of Aristotle’s particularisation [24]) that:

Lemma 3: If f(x_{1}, x_{2}) is a recursive function defined by:

(i) f(x_{1}, 0) = g(x_{1})

(ii) f(x_{1}, (x_{2}+1)) = h(x_{1}, x_{2}, f(x_{1}, x_{2}))

where g(x_{1}) and h(x_{1}, x_{2}, x_{3}) are recursive functions of lower rank [25] that are represented in PA by well-formed formulas [G(x_{1}, x_{2})] and [H(x_{1}, x_{2}, x_{3}, x_{4})],

then f(x_{1}, x_{2}) is represented in PA by the following well-formed formula, denoted by [F(x_{1}, x_{2}, x_{3})]:

[(\exists u)(\exists v)(((\exists w)(Bt(u, v, 0, w) \wedge G(x_{1}, w))) \wedge Bt(u, v, x_{2}, x_{3}) \wedge (\forall w)(w < x_{2} \rightarrow (\exists y)(\exists z)(Bt(u, v, w, y) \wedge Bt(u, v, (w+1), z) \wedge H(x_{1}, w, y, z)))].

Proof: This is a standard result [26]. \Box

\S 4.1 What does “[(\exists_{1} x_{3})F(k, m, x_{3})] is provable” assert under the standard interpretation of PA?

Now, if the PA formula [F(x_{1}, x_{2}, x_{3})] represents in PA the recursive function denoted by f(x_{1}, x_{2}) then by definition, for any given numerals [k], [m], the formula [(\exists_{1} x_{3})F(k, m, x_{3})] is provable in PA; and true under the standard interpretation of PA.

We thus have that:

Lemma 4: “[(\exists_{1} x_{3})F(k, m, x_{3})] is true under the standard interpretation of PA” is the assertion that:

Given any natural numbers k, m, we can construct natural numbers t_{(k, m)}, u_{(k, m)}, v_{(k, m)}—all functions of k, m—such that:

(a) \beta(u_{(k, m)}, v_{(k, m)}, 0) = g(k);

(b) for all i<m, \beta(u_{(k, m)}, v_{(k, m)}, i) = h(k, i, f(k, i));

(c) \beta(u_{(k, m)}, v_{(k, m)}, m) = t_{(k, m)};

where f(x_{1}, x_{2}), g(x_{1}) and h(x_{1}, x_{2}, x_{3}) are any recursive functions that are formally represented in PA by F(x_{1}, x_{2}, x_{3}), G(x_{1}, x_{2}) and H(x_{1}, x_{2}, x_{3}, x_{4}) respectively such that:

(i) f(k, 0) = g(k)

(ii) f(k, (y+1)) = h(k, y, f(k, y)) for all y<m

(iii) g(x_{1}) and h(x_{1}, x_{2}, x_{3}) are recursive functions that are assumed to be of lower rank than f(x_{1}, x_{2}).

Proof: For any given natural numbers k and m, if [F(x_{1}, x_{2}, x_{3})] interprets as a well-defined arithmetical relation under the standard interpretation of PA, then we can define a deterministic Turing machine TM that can `construct’ the sequences:

f(k, 0), f(k, 1), \ldots, f(k, m)

and:

\beta(u_{(k, m)}, v_{(k, m)}, 0), \beta(u_{(k, m)}, v_{(k, m)}, 1), \ldots, \beta(u_{(k, m)}, v_{(k, m)}, m)

and give evidence to verify the assertion. \Box[27]

We now see that:

Theorem 1: Under the standard interpretation of PA [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] is algorithmically verifiable, but not algorithmically computable, as always true over \mathbb{N}.

Proof: It follows from Lemma 4 that:

(1) [(\exists_{1} x_{3})F(k, m, x_{3})] is PA-provable for any given numerals [k, m]. Hence [(\exists_{1} x_{3})F(k, m, x_{3})] is true under the standard interpretation of PA. It then follows from the definition of [F(x_{1}, x_{2}, x_{3})] in Lemma 3 that, for any given natural numbers k, m, we can construct some pair of natural numbers u_{(k, m)}, v_{(k, m)}—where u_{(k, m)}, v_{(k, m)} are functions of the given natural numbers k and m—such that:

(a) \beta(u_{(k, m)}, v_{(k, m)}, i) = f(k, i) for 0 \leq i \leq m;

(b) F^{*}(k, m, f(k, m)) holds in \mathbb{N}.

Since \beta(x_{1}, x_{2}, x_{3}) is primitive recursive, \beta(u_{(k, m)}, v_{(k, m)}, i) defines a deterministic Turing machine TM that can `construct’ the denumerable sequence f'(k, 0), f'(k, 1), \ldots for any given natural numbers k and m such that:

(c) f(k, i) = f'(k, i) for 0 \leq i \leq m.

We can thus define a deterministic Turing machine TM that will give evidence that the PA formula [(\exists_{1} x_{3})F(k, m, x_{3})] is true under the standard interpretation of PA.

Hence [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] is algorithmically verifiable over \mathbb{N} under the standard interpretation of PA.

(2) Now, the pair of natural numbers u_{(x_{1}, x_{2})}, v_{(x_{1}, x_{2})} are defined such that:

(a) \beta(u_{(x_{1}, x_{2})}, v_{(x_{1}, x_{2})}, i) = f(x_{1}, i) for 0 \leq i \leq x_{2};

(b) F^{*}(x_{1}, x_{2}, f(x_{1}, x_{2})) holds in \mathbb{N};

where v_{(x_{1}, x_{2})} is defined in Lemma 3 as j!, and:

(c) j = max(n, f(x_{1}, 0), f(x_{1}, 1), \ldots, f(x_{1}, x_{2}));

(d) n is the `number’ of terms in the sequence f(x_{1}, 0), f(x_{1}, 1), \ldots, f(x_{1}, x_{2}).

Since j is not definable for a denumerable sequence \beta(u_{(x_{1}, x_{2})}, v_{(x_{1}, x_{2})}, i) we cannot define a denumerable sequence f'(x_{1}, 0), f'(x_{1}, 1), \ldots such that:

(e) f(k, i) = f'(k, i) for all i \geq 0.

We cannot thus define a deterministic Turing machine TM that will give evidence that the PA formula [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] interprets as true under the standard interpretation of PA for any given sequence of numerals [(a_{1}, a_{2})].

Hence [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] is not algorithmically computable over \mathbb{N} under the standard interpretation of PA.

The theorem follows. \Box

Corollary 1: If the standard interpretation of PA is sound, then the classical Church and Turing theses are false.

The above theorem now suggests the following definition:

Definition 3 (Effective computability): A number-theoretic function is effectively computable if, and only if, it is algorithmically verifiable.

Such a definition of effective computability now allows the classical Church and Turing theses to be expressed as the weak equivalences in \S 2—rather than as identities—without any apparent loss of generality.

References

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic (4th ed). Cambridge University Press, Cambridge.

Bri93 Selmer Bringsjord. 1993. The Narrational Case Against Church’s Thesis. Easter APA meetings, Atlanta.

Ch36 Alonzo Church. 1936. An unsolvable problem of elementary number theory. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Am. J. Math., Vol. 58, pp.345-363.

Ct75 Gregory J. Chaitin. 1975. A Theory of Program Size Formally Identical to Information Theory. J. Assoc. Comput. Mach. 22 (1975), pp. 329-340.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York.

Go51 Kurt Gödel. 1951. Some basic theorems on the foundations of mathematics and their implications. Gibbs lecture. In Kurt Gödel, Collected Works III: Unpublished Essays and Lectures (1995), pp.304-323. Solomon Feferman et al (ed.). Oxford University Press, New York.

Ka59 Laszlo Kalmár. 1959. An Argument Against the Plausibility of Church’s Thesis. In Heyting, A. (ed.) Constructivity in Mathematics. North-Holland, Amsterdam.

Kl36 Stephen Cole Kleene. 1936. General Recursive Functions of Natural Numbers. Math. Annalen vol. 112 (1936) pp.727-766.

Me64 Elliott Mendelson. 1964. Introduction to Mathematical Logic. Van Nostrand, Princeton.

Me90 Elliott Mendelson. 1990. Second Thoughts About Church’s Thesis and Mathematical Proofs. Journal of Philosophy 87.5.

Pa71 Rohit Parikh. 1971. Existence and Feasibility in Arithmetic. The Journal of Symbolic Logic, Vol.36, No. 3 (Sep., 1971), pp. 494-508.

Si97 Wilfried Sieg. 1997. Step by recursive step: Church’s analysis of effective calculability. Bulletin of Symbolic Logic, Volume 3, Number 2.

Sm07 Peter Smith. 2007. Church’s Thesis after 70 Years. A commentary and critical review of Church’s Thesis After 70 Years. In Meinong Studies Vol 1 (Ontos Mathematical Logic 1), 2006 (2013), Eds. Adam Olszewski, Jan Wolenski, Robert Janusz. Ontos Verlag (Walter de Gruyter), Frankfurt, Germany.

Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp. 544-546.

An07 Bhupinder Singh Anand. 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II. In The Reasoner, Vol(1)7 p2-3.

An12 … 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

Notes

Return to 1: cf. Me64, p.237.

Return to 2: Church’s (original) Thesis: The effectively computable number-theoretic functions are the algorithmically computable number-theoretic functions Ch36.

Return to 3: cf. Me64, p.227.

Return to 4: After describing what he meant by “computable” numbers in the opening sentence of his 1936 paper on Computable Numbers Tu36, Turing immediately expressed this thesis—albeit informally—as: “… the computable numbers include all numbers which could naturally be regarded as computable”.

Return to 5: cf. BBJ03, p.33.

Return to 6: See Si97.

Return to 7: Go51.

Return to 8: Parikh’s paper Pa71 can also be viewed as an attempt to investigate the consequences of expressing the essence of Gödel’s remarks formally.

Return to 9: Tu36, \S9(II), p.139.

Return to 10: Parikh’s distinction between `decidability’ and `feasibility’ in Pa71 also appears to echo the need for such a distinction.

Return to 11: BBJ03, p. 37.

Return to 12: The issue here seems to be that, when using language to express the abstract objects of our individual, and common, mental `concept spaces’, we use the word `exists’ loosely in three senses, without making explicit distinctions between them (see An07).

Return to 13: BBJ03, p.37.

Return to 14: Chaitin’s Halting Probability is given by 0 < \Omega = \sum 2^{-|p|} < 1, where the summation is over all self-delimiting programs p that halt, and |p| is the size in bits of the halting program p; see Ct75.

Return to 15: The incongruity of this is addressed by Parikh in Pa71.

Return to 16: The only difference being that, in the latter case, we know there is a common `program’ of constant length that will compute f(n) for any given natural number n; in the former, we know we may need distinctly different programs for computing f(n) for different values of n, where the length of the program will, sometime, reference n.

Return to 17: By Kurt Gödel; see Go31, Theorem VII.

Return to 18: Analogous distinctions in analysis: The distinction between algorithmically computable, and algorithmically verifiable but not algorithmically computable, number-theoretic functions seeks to reflect in arithmetic the essence of uniform methods (formally detailed in the Birmingham paper (reproduced in this post) and in its main consequence—the Provability Theorem for PA—as detailed in this post), classically characterised by the distinctions in analysis between: (a) uniformly continuous, and point-wise continuous but not uniformly continuous, functions over an interval; (b) uniformly convergent, and point-wise convergent but not uniformly convergent, series.

A limitation of set theory and a possible barrier to computation: We note, further, that the above distinction cannot be reflected within a language—such as the set theory ZF—which identifies `equality’ with `equivalence’. Since functions are defined extensionally as mappings, such a language cannot recognise that a set which represents a primitive recursive function may be equivalent to, but computationally different from, a set that represents an arithmetical function; where the former function is algorithmically computable over \mathbb{N}, whilst the latter is algorithmically verifiable but not algorithmically computable over \mathbb{N}.

Return to 19: See the Provability Theorem for PA in this post.

Return to 20: cf. Go31, p.31, Lemma 1; Me64, p.131, Proposition 3.21.

Return to 21: cf. Go31, p.31, Lemma 1; Me64, p.131, Proposition 3.22.

Return to 22: Me64, p.118.

Return to 23: cf. Me64, p.131, proposition 3.21.

Return to 24: The implicit assumption being that the negation of a universally quantified formula of the first-order predicate calculus is indicative of “the existence of a counter-example”—Go31, p.32.

Return to 25: cf. Me64, p.132; Go31, p.30(2).

Return to 26: cf. Go31, p.31(2); Me64, p.132.

Return to 27: A critical philosophical issue that we do not address here is whether the PA formula [F(x_{1}, x_{2}, x_{3})] can be considered to interpret under a sound interpretation of PA as a well-defined predicate, since the denumerable sequences \{f(k, 0), f(k, 1), \ldots, f(k, m), m_{p}\}, where p > 0 and m_{p} \neq m_{q} whenever p \neq q, are represented by denumerably many, distinct, functions \beta(u_{p_{1}}, v_{p_{2}}, i) respectively. There are thus denumerably many pairs (u_{p_{1}}, v_{p_{2}}) for which \beta(u_{p_{1}}, v_{p_{2}}, i) yields the sequence f(k, 0), f(k, 1), \ldots, f(k, m).

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

A foundational argument for defining Effective Computability formally, and weakening the Church and Turing Theses – I

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

\S 1.1 The Philosophical Issue

In a previous post we have argued that standard interpretations of classical theory may inadvertently be weakening a desirable perception—of mathematics as the lingua franca of scientific expression—by ignoring the possibility that since mathematics is, indeed, indisputably accepted as the language that most effectively expresses and communicates intuitive truth, the chasm between formal truth and provability must, of necessity, be bridgeable.

We further queried whether the roots of such interpretations may lie in removable ambiguities that currently persist in the classical definitions of foundational elements; ambiguities that allow the introduction of non-constructive—hence non-verifiable, non-computational, ambiguous and essentially Platonic—elements into the standard interpretations of classical mathematics.

Query 1: Are formal classical theories essentially unable to adequately express the extent and range of human cognition, or does the problem lie in the way formal theories are classically interpreted at the moment?

We noted that the former addressed the question of whether there are absolute limits on our capacity to express human cognition unambiguously; the latter, whether there are only temporal limits—not necessarily absolute—to the capacity of classical interpretations to communicate unambiguously that which we intended to capture within our formal expression.

We argued that, prima facie, applied science continues, perforce, to interpret mathematical concepts Platonically, whilst waiting for mathematics to provide suitable, and hopefully reliable, answers as to how best it may faithfully express its observations verifiably.

\S 1.2 Are axiomatic computational concepts really unambiguous?

This now raises the corresponding philosophical question that is implicit in Selmer Bringsjord’s narrational case against Church’s Thesis [1]:

Query 2 Is there a duality in the classical acceptance of non-constructive, foundational, concepts as axiomatic?

We now argue that, beyond the question raised in an earlier post of whether—as computer scientist Lance Fortnow believes—Turing machines can `capture everything we can compute’, or whether—as computer scientists Peter Wegner and Dina Goldin suggest—they are `inappropriate as a universal foundation for computational problem solving’, we also need to address the philosophical question—implicit in Bringsjord’s paper—of whether or not the concept of `effective computability’ admits a constructive, and intuitionistically unobjectionable, definition; and of how such a definition relates to formal provability and to the standard perceptions of the Church and Turing Theses, as reviewed here by Peter Smith, a Trinity mathmo at heart and a philosopher by profession.

We therefore consider the case for introduction of such a definition from a philosophical point of view, and consider some consequences.

\S 1.2.1 Mendelson’s thesis

We note that Elliott Mendelson [2] is quoted by Bringsjord in his paper as saying (italicised parenthetical qualifications added):

(i) “Here is the main conclusion I wish to draw:

it is completely unwarranted to say that CT is unprovable just because it states an equivalence between a vague, imprecise notion (effectively computable function) and a precise mathematical notion (partial-recursive function)”.

(ii) “The concepts and assumptions that support the notion of partial-recursive function are, in an essential way, no less vague and imprecise (non-constructive, and intuitionistically objectionable) than the notion of effectively computable function; the former are just more familiar and are part of a respectable theory with connections to other parts of logic and mathematics.

(The notion of effectively computable function could have been incorporated into an axiomatic presentation of classical mathematics, but the acceptance of CT made this unnecessary.) …

Functions are defined in terms of sets, but the concept of set is no clearer (not more non-constructive, and intuitionistically objectionable) than that of function and a foundation of mathematics can be based on a theory using function as primitive notion instead of set.

Tarski’s definition of truth is formulated in set-theoretic terms, but the notion of set is no clearer (not more non-constructive, and intuitionistically objectionable), than that of truth.

The model-theoretic definition of logical validity is based ultimately on set theory, the foundations of which are no clearer (not more non-constructive, and intuitionistically objectionable) than our intuitive (non-constructive, and intuitionistically objectionable) understanding of logical validity”.

(iii) “The notion of Turing-computable function is no clearer (not more non-constructive, and intuitionistically objectionable) than, nor more mathematically useful (foundationally speaking) than, the notion of an effectively computable function …”

where:

(a) The Church-Turing Thesis, CT, is formulated as:

“A function is effectively computable if and only if it is Turing-computable”.

(b) An effectively computable function is defined as one whose values can be computed by an algorithm.

(c) The classical notion of an algorithm is expressed by Mendelson as:

“… an effective and completely specified procedure for solving a whole class of problems. …

An algorithm does not require ingenuity; its application is prescribed in advance and does not depend upon any empirical or random factors”.

and, where Bringsjord paraphrases (iii) as:

(iv) “The notion of a formally defined program for guiding the operation of a TM is no clearer than, nor more mathematically useful (foundationally speaking) than, the notion of an algorithm”.

adding that:

(v) “This proposition, it would then seem, is the very heart of the matter.

If (iv) is true then Mendelson has made his case; if this proposition is false, then his case is doomed, since we can chain back by modus tollens and negate (iii)”.

\S 1.2.2 The concept of `constructive, and intuitionistically unobjectionable’

Now, prima facie, any formalisation of a `vague and imprecise’, `intuitive’ concept—say C—would normally be intended to capture the concept C both faithfully and completely within a constructive, and intuitionistically unobjectionable [3], language L.

Clearly, we could disprove the thesis—that C and its formalisation L are interchangeable, hence equivalent—by showing that there is a constructive aspect of C that is formalisable in a constructive language L’, but that such formalisation cannot be assumed expressible in L without introducing inconsistency.

However, equally clearly, there can be no way of proving the equivalence as this would contradict the premise that the concept is `vague and imprecise’, hence essentially open-ended in a non-definable way, and so non-formalisable.

Obviously, Mendelson’s assertion that there is no justification for claiming Church’s Thesis as unprovable must, therefore, rely on an interpretation that differs significantly from the above; for instance, his concept of provability may appeal to the axiomatic acceptability of `vague and imprecise’ concepts—as suggested by his remarks.

Now, we note that all the examples cited by Mendelson involve the decidability (computability) of an infinitude of meta-mathematical instances, where the distinction between the constructive meta-assertion that any given instance is individually decidable (instantiationally computable), and the non-constructive meta-assertion that all the instances are jointly decidable (uniformly computable), is not addressed explicitly.

However, \S 1.2.1(a), \S 1.2.1(b) and \S 1.2.1(c) appear to suggest that Mendelson’s remarks relate implicitly to non-constructive meta-assertions.

Perhaps the real issue, then, is the one that emerges if we replace Mendelson’s use of implicitly open-ended concepts such as `vague and imprecise’ and `intuitive’ by the more meta-mathematically meaningful concept of `non-constructive, and intuitionistically objectionable’, as italicised and indicated parenthetically.

The essence of Mendelson’s meta-assertion \S 1.2.1(iii) then appears to be that, if the classically accepted definitions of foundational concepts such as `partial recursive function’, `function’, `Tarskian truth’ etc. are also non-constructive, and intuitionistically objectionable, then replacing one non-constructive concept by another may be psychologically unappealing, but it should be meta-mathematically valid and acceptable.

\S 1.2.3 The duality

Clearly, meta-assertion \S 1.2.1(iii) would stand refuted by a `non-algorithmic’ effective method that is constructive.

However, if it is explicitly—and, as suggested by the nature of the arguments in Bringsjord’s paper, widely—accepted at the outset that any effective method is necessarily algorithmic (i.e. uniform as stated in \S 1.2.1(c) above), then any counter-argument to CT can, prima facie, only offer non-algorithmic methods that may, paradoxically, be `effective’ intuitively but in a non-constructive, and intuitionistically objectionable, way only!

Recognition of this dilemma is implicit in the admission that the various arguments, as presented by Bringsjord in the case against Church’s Thesis—including his narrational case—are open to reasonable, but inconclusive, refutations.

Nevertheless, if we accept Mendelson’s thesis that the inter-changeability of non-constructive concepts is valid in the foundations of mathematics, then Bringsjord’s case against Church’s Thesis, since it is based similarly on non-constructive concepts, should also be considered conclusive classically (even though it cannot, prima facie, be considered constructively conclusive in an intuitionistically unobjectionable way).

There is, thus, an apparent duality in the—seemingly extra-logical—decision as to whether an argument based on non-constructive concepts may be accepted as classically conclusive or not.

That this duality may originate in the very issues raised in Mendelson’s remarks—concerning the non-constructive roots of foundational concepts that are classically accepted as mathematically sound—is seen if we note that these issues may be more significant than is, prima facie, apparent.

\S 1.2.4 Definition of a formal mathematical object, and consequences

Thus, if we define a formal mathematical object as any symbol for an individual letter, function letter or a predicate letter that can be introduced through definition into a formal theory without inviting inconsistency, then it can be argued that unrestricted, non-constructive, definitions of non-constructive, foundational, set-theoretic concepts—such as `mapping’, `function’, `recursively enumerable set’, etc.—in terms of constructive number-theoretic concepts—such as recursive number-theoretic functions and relations—may not always correspond to formal mathematical objects.

In other words, the assumption that every definition corresponds to a formal mathematical object may introduce a formal inconsistency into standard Peano Arithmetic and, ipso facto, into any Axiomatic Set Theory that models standard PA (an inconsistency which, loosely speaking, may be viewed as a constructive arithmetical parallel to Russell’s non-constructive impredicative set).

Since it can also be argued that the non-constructive element in Tarski’s definitions of `satisfiability’ and `truth’, and in Church’s Thesis, originate in a common, but removable, ambiguity in the interpretation of an effective method, perhaps it is worth considering whether Bringsjord’s acceptance of the assumption—that every constructive effective method is necessarily algorithmic, in the sense of being a uniform procedure as in \S 1.2.1(c) above—is mathematically necessary, or even whether it is at all intuitively tenable.

(Uniform procedure: uniformity is a property usually taken to be a necessary condition for a procedure to qualify as effective.)

Thus, we may argue that we can explicitly and constructively define a `non-algorithmic’ effective method as one that, in any given instance, is instantiationally computable if, and only if, it terminates finitely with a conclusive result; and an `algorithmic’ effective method as one that is uniformly computable if, and only if, it terminates finitely, with a conclusive result, in any given instance. [4]

\S 1.2.5 Bringsjord’s case against CT

Apropos the specific arguments against CT it would seem, prima facie, that a non-algorithmic effective—even if not obviously constructive—method could be implicit in the following argument considered by Bringsjord:

“Assume for the sake of argument that all human cognition consists in the execution of effective processes (in brains, perhaps). It would then follow by CT that such processes are Turing-computable, i.e., that computationalism is true. However, if computationalism is false, while there remains incontrovertible evidence that human cognition consists in the execution of effective processes, CT is overthrown”.

Assuming computationalism is false, the issue in this argument would, then, be whether there is a constructive, and adequate, expression of human cognition in terms of individually effective methods.

An appeal to such a non-algorithmic effective method may, in fact, be implicit in Bringsjord’s consideration of the predicate H, defined by:

H(P, i) iff (\exists n)S(P, i, n)

where the predicate S(P, u, n) holds if, and only if, TM M, running program P on input u, halts in exactly n steps (= M_{P} : u \Rightarrow^{n} halt).

Bringsjord’s Total Computability

Bringsjord defines S as totally (and, implicitly, uniformly) computable in the sense that, given some triple (P, u, n), there is some (uniform) program P* which, running on some TM M*, can infallibly give us a verdict, Y (`yes’) or N (`no’), for whether or not S is true of this triple.

He then notes that, since the ability to (uniformly) determine, for a pair (P, i), whether or not H is true of it, is equivalent to solving the full halting problem, H is not totally computable.

Bringsjord’s Partial Computability

However, he also notes that there is a program (implicitly non-uniform, and so, possibly, effective individually) which, when asked whether or not some TM M run by P on u halts, will produce Y iff M_{P} : u \Rightarrow^{n} halt. For this reason H is declared partially (implicitly, individually) computable.
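To make Bringsjord’s total/partial contrast concrete, here is a minimal Python sketch. It assumes, purely for illustration, that a `program’ is represented as a Python generator function that yields once per computation step; the names S_decider, H_semi and count_down are hypothetical and are not Bringsjord’s.

def S_decider(P, u, n):
    # Total decision procedure for S(P, u, n): simulate P on input u for at
    # most n + 1 steps and check whether it halts in exactly n steps.
    steps = 0
    for _ in P(u):
        steps += 1
        if steps > n:
            return False          # still running after n steps
    return steps == n

def H_semi(P, u):
    # Partial procedure for H(P, u), i.e. (exists n) S(P, u, n): simply run P
    # on u.  If P halts we answer 'Y'; if P never halts, this call itself
    # never returns, which is why H is only partially, not totally, computable.
    for _ in P(u):
        pass
    return 'Y'

# Example: a toy program that halts after u steps.
def count_down(u):
    for _ in range(u):
        yield

print(S_decider(count_down, 3, 3))   # True
print(S_decider(count_down, 3, 5))   # False: it halts in 3 steps, not 5
print(H_semi(count_down, 3))         # 'Y'

The point of the sketch is that S_decider always terminates because its simulation is bounded by n, whereas H_semi is subject to no such bound.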

More explicitly, Bringsjord remarks that Laszlo Kalmár’s refutation of CT [5] is classically inconclusive mainly because it does not admit any uniform effective method, but appeals to the existence of an infinitude of individually effective methods.

Kalmár’s Perspective of CT

Considering that Kalmár’s finitary argument ([6], as reproduced below from Bringsjord’s paper) always seeks to calculate g(n) (defined below) constructively even in the absence of a uniform procedure—not necessarily within a fixed postulate system—we shall show that it makes a pertinent observation:

“First, he draws our attention to a function g that isn’t Turing-computable, given that f is [7]:

g(x) = \mu y(f(x, y) = 0) = {the least y such that f(x, y) = 0 if y exists; and 0 if there is no such y}

Kalmár proceeds to point out that for any n in N for which a natural number y with f(n, y) = 0 exists,

`an obvious method for the calculation of the least such y … can be given,’

namely, calculate in succession the values f(n, 0), f(n, 1), f(n, 2), \ldots (which, by hypothesis, is something a computist or TM can do) until we hit a natural number m such that f(n, m) = 0, and set y = m.

On the other hand, for any natural number n for which we can prove, not in the frame of some fixed postulate system but by means of arbitrary—of course, correct—arguments that no natural number y with f(n, y) = 0 exists, we have also a method to calculate the value g(n) in a finite number of steps.

Kalmár goes on to argue as follows. The definition of g itself implies the tertium non datur, and from it and CT we can infer the existence of a natural number p which is such that

(*) there is no natural number y such that f(p, y)= 0; and

(**) this cannot be proved by any correct means.

Kalmár claims that (*) and (**) are very strange, and that therefore CT is at the very least implausible.”
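The `obvious method’ to which Kalmár appeals is simply unbounded successive search. A minimal Python sketch, assuming f is given as a total (Turing-computable) Python function of two arguments; the name mu_search is mine:

def mu_search(f, n):
    # Calculate g(n) as the least y with f(n, y) == 0 by trying
    # y = 0, 1, 2, ... in succession.  The search terminates precisely for
    # those n for which such a y exists; for any other n it runs forever,
    # so it does not, by itself, amount to a uniform algorithm for g.
    y = 0
    while f(n, y) != 0:
        y += 1
    return y

# Example with a toy f for which a zero always exists:
print(mu_search(lambda n, y: abs(y - n), 5))   # 5

The other half of Kalmár’s method, returning 0 for an n for which we can prove, by some correct argument, that no such y exists, is precisely the individually effective, non-uniform ingredient that no single program such as mu_search captures.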

Distinguishing between individually effective computability and uniformly effective computability

Now, the significant point that emerges from Bringsjord’s and Kalmár’s philosophical arguments is the need to distinguish formally between non-algorithmic (i.e., instantiational) computability (or, more precisely, algorithmic verifiability as defined here) and algorithmic (i.e., uniform) computability (or, more precisely, algorithmic computability as defined here) as highlighted in the Birmingham paper.

In the next post we note that this point has also been raised from a more formal, logical, perspective; and consider what is arguably an intuitively-more-adequate formal definability of `effective computability’ in terms of ‘algorithmic verifiability’ under which the classical Church and Turing Theses do not hold, but weakened Church and Turing Theses do!

References

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic (4th ed). Cambridge University Press, Cambridge.

Bri93 Selmer Bringsjord. 1993. The Narrational Case Against Church’s Thesis. Easter APA meetings, Atlanta.

Ch36 Alonzo Church. 1936. An unsolvable problem of elementary number theory. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Am. J. Math., Vol. 58, pp.345-363.

Ct75 Gregory J. Chaitin. 1975. A Theory of Program Size Formally Identical to Information Theory. J. Assoc. Comput. Mach. 22 (1975), pp. 329-340.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York.

Go51 Kurt Gödel. 1951. Some basic theorems on the foundations of mathematics and their implications. Gibbs lecture. In Kurt Gödel, Collected Works III, pp.304-323. 1995. Unpublished Essays and Lectures. Solomon Feferman et al (ed.). Oxford University Press, New York.

Ka59 Laszlo Kalmár. 1959. An Argument Against the Plausibility of Church’s Thesis. In Heyting, A. (ed.) Constructivity in Mathematics. North-Holland, Amsterdam.

Kl36 Stephen Cole Kleene. 1936. General Recursive Functions of Natural Numbers. Math. Annalen vol. 112 (1936) pp.727-766.

Me64 Elliott Mendelson. 1964. Introduction to Mathematical Logic. Van Nostrand, Princeton.

Me90 Elliott Mendelson. 1990. Second Thoughts About Church’s Thesis and Mathematical Proofs. Journal of Philosophy 87.5.

Pa71 Rohit Parikh. 1971. Existence and Feasibility in Arithmetic. The Journal of Symbolic Logic, Vol.36, No. 3 (Sep., 1971), pp. 494-508.

Si97 Wilfried Sieg. 1997. Step by recursive step: Church’s analysis of effective calculability. Bulletin of Symbolic Logic, Volume 3, Number 2.

Sm07 Peter Smith. 2007. Church’s Thesis after 70 Years. A commentary and critical review of Church’s Thesis After 70 Years. In Meinong Studies Vol 1 (Ontos Mathematical Logic 1), 2006 (2013), Eds. Adam Olszewski, Jan Wolenski, Robert Janusz. Ontos Verlag (Walter de Gruyter), Frankfurt, Germany.

Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp. 544-546.

An07 Bhupinder Singh Anand. 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II. In The Reasoner, Vol(1)7 p2-3.

An12 … 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

Notes

Return to 1: Bri93.

Return to 2: Me90.

Return to 3: The terms `constructive’ and `constructive, and intuitionistically unobjectionable’ are used synonymously both in their familiar linguistic sense, and in a mathematically precise sense. Mathematically, we term a concept as `constructive, and intuitionistically unobjectionable’ if, and only if, it can be defined in terms of pre-existing concepts without inviting inconsistency. Otherwise, we understand it to mean unambiguously verifiable, by some `effective method’, within some finite, well-defined, language or meta-language. It may also be taken to correspond, broadly, to the concept of `constructive, and intuitionistically unobjectionable’ in the sense apparently intended by Gödel in his seminal 1931 paper Go31, p.26.

Return to 4: We note that the possibility of a distinction between the interpreted number-theoretic meta-assertions, `For any given natural number x, F(x) is true’ and `F(x) is true for all natural numbers x‘, is not evident unless these are expressed symbolically as, `(\forall x)(F(x) is true)’ and `(\forall x)F(x) is true’, respectively. The issue, then, is whether the distinction can be given any mathematical significance. For instance, under a constructive formulation of Tarski’s definitions, we may qualify the latter by saying that it can be meaningfully asserted as a totality only if `F‘ is a well-defined mathematical object.

Return to 5: Ka59.

Return to 6: Ka59.

Return to 7: Bringsjord notes that the original proof can be found on page 741 of Kleene Kl36.

Return to 8: We detail a formal proof of this Thesis in this post.


So where exactly does the buck stop?

Another reason why Lucas and Penrose should not be faulted for continuing to believe in their well-known Gödelian arguments against computationalism lies in the lack of an adequate consensus on the concept of `effective computability’.

For instance, Boolos, Burgess and Jeffrey (2003: Computability and Logic, 4th ed., CUP, p.37) define a diagonal halting function, d, any value of which can be computed effectively, although there is no single algorithm that can effectively compute d.

“According to Turing’s Thesis, since d is not Turing-computable, d cannot be effectively computable. Why not? After all, although no Turing machine computes the function d, we were able to compute at least its first few values. For since, as we have noted, f_{1} = f_{2} = f_{3} = the empty function, we have d(1) = d(2) = d(3) = 1. And it may seem that we can actually compute d(n) for any positive integer n—if we don’t run out of time.”
… ibid. 2003. p37.
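The instance-by-instance flavour of such computation can be sketched in Python; the lookup table KNOWN_EMPTY and the function d below are hypothetical stand-ins for individually established facts about the enumeration f_{1}, f_{2}, f_{3}, \ldots, and are not part of BBJ03’s text.

# Indices n for which it has been individually established (by whatever
# correct argument) that f_n is the empty function, so that d(n) = 1.
KNOWN_EMPTY = {1, 2, 3}

def d(n):
    # Instantiational computation of d: each value returned rests on a
    # separately established fact; no single algorithm covers every n.
    if n in KNOWN_EMPTY:
        return 1
    raise NotImplementedError("no uniform algorithm decides this case")

print([d(n) for n in (1, 2, 3)])   # [1, 1, 1]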

Now, the straightforward way of expressing this phenomenon should be to say that there are well-defined real numbers that are instantiationally computable, but not algorithmically computable.

Yet, following Church and Turing, such functions are labeled as effectively uncomputable!

The issue here seems to be that, when using language to express the abstract objects of our individual, and common, mental `concept spaces’, we use the word `exists’ loosely in three senses, without making explicit distinctions between them.

First, we may mean that an individually conceivable object exists, within a language L, if it lies within the range of the variables of L. The existence of such objects is necessarily derived from the grammar, and rules of construction, of the appropriate constant terms of the language—generally finitary in recursively defined languages—and can be termed as constructive in L by definition.

Second, we may mean that an individually conceivable object exists, under a formal interpretation of L in another formal language, say L', if it lies within the range of a variable of L under the interpretation.

Again, the existence of such an object in L' is necessarily derivable from the grammar, and rules of construction, of the appropriate constant terms of L', and can be termed as constructive in L' by definition.

Third, we may mean that an individually conceivable object exists, in an interpretation M of L, if it lies within the range of an interpreted variable of L, where M is a Platonic interpretation of L in an individual’s subjective mental conception (in Brouwer’s sense).

Clearly, the debatable issue is the third case.

So the question is whether we can—and, if so, how we may—correspond the Platonically conceivable objects of various individual interpretations of L, say M_{1}, M_{2}, M_{3}, \ldots, unambiguously to the mathematical objects that are definable as the constant terms of L.

If we can achieve this, we can then attempt to relate L to a common external world and try to communicate effectively about our individual mental concepts of the world that we accept as lying, by consensus, in a common, Platonic, `concept-space’.

For mathematical languages, such a common `concept-space’ is implicitly accepted as the collection of individual intuitive, Platonically conceivable, perceptions—M_{1}, M_{2}, M_{3}, \ldots—of the standard intuitive interpretation, say M, of Dedekind’s axiomatic formulation of the Peano Postulates.

Reasonably, if we intend a language or a set of languages to be adequate, first, for the expression of the abstract concepts of collective individual consciousnesses, and, second, for the unambiguous and effective communication of those of such concepts that we can accept as lying within our common concept-space, then we need to give effective guidelines for determining the Platonically conceivable mathematical objects of an individual perception of M that we can agree upon, by common consensus, as corresponding to the constants (mathematical objects) definable within the language.

Now, in the case of mathematical languages in standard expositions of classical theory, this role is sought to be filled by the Church-Turing Thesis (CT). Its standard formulation postulates that every number-theoretic function (or relation, treated as a Boolean function) of M, which can intuitively be termed as effectively computable, is partial recursive / Turing-computable.

However, CT does not succeed in its objective completely.

Thus, even if we accept CT, we still cannot conclude that we have specified explicitly that the domain of M consists of only constructive mathematical objects that can be represented in the most basic of our formal mathematical languages, namely, first-order Peano Arithmetic (PA) and Recursive Arithmetic (RA).

The reason seems to be that CT is postulated as a strong identity, which, prima facie, goes beyond the minimum requirements for the correspondence between the Platonically conceivable mathematical objects of M and those of PA and RA.

“We now define the notion, already discussed, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers.”
… Church 1936: An unsolvable problem of elementary number theory, Am. J. Math., Vol. 58, pp. 345–363.

“The theorem that all effectively calculable sequences are computable and its converse are proved below in outline.”
… Turing 1936: On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936–7), pp. 230–265.

This violation of the principle of Occam’s Razor is highlighted if we note (e.g., Gödel 1931: On formally undecidable propositions of Principia Mathematica and related systems I, Theorem VII) that, pedantically, a recursive function (or relation) is not shown to be identical to a unique arithmetical function (or relation), but (see the comment following Lemma 9 of this paper) only to be instantiationally equivalent to an infinity of arithmetical functions (or relations).

Now, the standard form of CT only postulates algorithmically computable number-theoretic functions of M as effectively computable.

It overlooks the possibility that there may be number-theoretic functions and relations which are effectively computable / decidable instantiationally in a Tarskian sense, but not algorithmically.

References

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic. (4th ed). Cambridge University Press, Cambridge.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. pp.5-38.

Lu61 John Randolph Lucas. 1961. Minds, Machines and Gödel. In Philosophy. Vol. 36, No. 137 (Apr. – Jul., 1961), pp. 112-127, Cambridge University Press.

Lu03 John Randolph Lucas. 2003. The Gödelian Argument: Turn Over the Page. In Etica & Politica / Ethics & Politics, 2003, 1.

Lu06 John Randolph Lucas. 2006. Reason and Reality. Edited by Charles Tandy. Ria University Press, Palo Alto, California.

Me64 Elliott Mendelson. 1964. Introduction to Mathematical Logic. Van Nostrand. pp.145-146.

Pe90 Roger Penrose. 1990. The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics. 1990, Vintage edition. Oxford University Press.

Pe94 Roger Penrose. 1994. Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford University Press.

Sc67 Joseph R. Shoenfield. 1967. Mathematical Logic. Reprinted 2001. A. K. Peters Ltd., Massachusetts.

Ta33 Alfred Tarski. 1933. The concept of truth in the languages of the deductive sciences. In Logic, Semantics, Metamathematics, papers from 1923 to 1938. (p152-278). ed. John Corcoran. 1983. Hackett Publishing Company, Indianapolis.

Wa63 Hao Wang. 1963. A survey of Mathematical Logic. North Holland Publishing Company, Amsterdam.

An07a Bhupinder Singh Anand. 2007. The Mechanist’s challenge. In The Reasoner, Vol(1)5 p5-6.

An07b … 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – I. In The Reasoner, Vol(1)6 p3-4.

An07c … 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II. In The Reasoner, Vol(1)7 p2-3.

An08 … 2008. Can we really falsify truth by dictat?. In The Reasoner, Vol(2)1 p7-8.

An12 … 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.


A finitary arithmetical perspective on the forcing of non-standard models onto PA

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

In the previous post we formally argued that the first order Peano Arithmetic PA is categorical from a finitary perspective (\S 5, Corollary 1).

We now argue that conventional wisdom which holds PA as essentially incomplete—and thus precludes categoricity—may appeal to finitarily fragile arguments (as mentioned in this earlier post and in this preprint, now reproduced below) for the existence of non-standard models of the first-order Peano Arithmetic PA.

Such wisdom ought, therefore, to be treated foundationally as equally fragile from a post-computationalist arithmetical perspective within classical logic, rather than accepted as foundationally sound relative to an ante-computationalist perspective of set theory.

Post-computationalist doctrine

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …” (cf. Mu91).

\S 1 Introduction

Once we accept as logically sound the set-theoretically based meta-argument [1] that the first-order Peano Arithmetic PA [2] can be forced—by an ante-computationalist interpretation of the Compactness Theorem—into admitting non-standard models which contain an `infinite’ integer, then the set-theoretical properties [3] of the algebraic and arithmetical structures of such putative models should perhaps follow without serious foundational reservation.

Compactness Theorem: If every finite subset of a set of sentences has a model, then the whole set has a model (BBJ03. p.147).

However we shall argue that, from an arithmetical perspective, we can only conclude by the Compactness Theorem that if Th(\mathbb{N}) is the \mathcal{L}_{A}-theory of the standard model (interpretation) (Ka91, p.10-11), then we may consistently add to it the following as an additional—not necessarily independent—axiom:

(\exists y)(y > x).

Moreover, we shall argue that even though (\exists y)(y > x) is algorithmically computable (Definition 2) as always true in the standard model (whence all of its instances are in Th(\mathbb{N})) we cannot conclude by the Compactness Theorem that:

\cup_{k \in \mathbb{N}}\{Th(\mathbb{N}) \cup \{c > \underline{n}\ |\ n < k\}\}

is consistent and has a model M_{c} which contains an `infinite’ integer [4].

Reason: We shall argue that the condition k \in \mathbb{N} in the above definition of \cup_{k \in \mathbb{N}}T_{k}, where T_{k} = Th(\mathbb{N}) \cup \{c > \underline{n}\ |\ n < k\}, requires, first of all, that we be able to extend Th(\mathbb{N}) by the addition of a `relativised’ axiom (cf. Fe92; Me64, p.192) such as:

(\exists y)((x \in \mathbb{N}) \rightarrow (y > x)),

from which we may conclude the existence of some c such that:

M_{c} \models c>\underline{n} for all n \in \mathbb{N}.

However, we shall further show that even this would not yield a model for Th(\mathbb{N}) since, by Theorem 1, we cannot introduce a `completed’ infinity such as \mathbb{N} into either Th(\mathbb{N}) or any model of Th(\mathbb{N})!

\S 1.1 A post-computationalist doctrine

More generally we shall argue that—if our interest is in the arithmetical properties of models of PA—then we first need to make explicit any appeal to non-constructive considerations such as Aristotle’s particularisation (Definition 3).

We shall then argue that, even from a classical perspective, there are serious foundational, post-computationalist, reservations to accepting that a consistent PA can be forced by the Compactness Theorem into admitting non-standard models which contain elements other than the natural numbers.

Reason: Any arithmetical application of the Compactness Theorem to PA can neither ignore currently accepted post-computationalist doctrines of objectivity, nor contradict the constructive assignments of satisfaction and truth to the atomic formulas of PA (and therefore to the compound formulas under Tarski’s inductive definitions) in terms of either algorithmic verifiability or algorithmic computability (An12, \S 3).

The significance of this doctrine [5] is that it helps highlight how the algorithmically verifiable (Definition 1) formulas of PA define the classical non-finitary standard interpretation of PA [6] (to which standard arguments for the existence of non-standard models of PA critically appeal).

Accordingly, we shall show that standard arguments which appeal to the ante-computationalist interpretation of the Compactness Theorem—for forcing non-standard models of PA [7] which contain an `infinite’ integer—cannot admit constructive assignments of satisfaction and truth [8] (in terms of algorithmic verifiability) to the atomic formulas of their putative extension of PA.

We shall conclude that such arguments therefore questionably postulate by axiomatic fiat that which they seek to `prove’!

\S 1.2 Standard arguments for non-standard models of PA

In this limited investigation we shall consider only the following three standard arguments for the existence of non-standard models of the first-order Peano Arithmetic PA:

(i) If PA is consistent, then we obtain a non-standard model for PA which contains an `infinite’ integer by applying the Compactness Theorem to the union of the set of formulas that are satisfied or true in the classical `standard’ model of PA [9] and the countable set of all PA-formulas of the form [c_{n} = S(c_{n+1})].

(ii) If PA is consistent, then we obtain a non-standard model for PA which contains an `infinite’ integer by adding a constant c to the language of PA and applying the Compactness Theorem to the theory P\cup\{c > \underline{n}: \underline{n}\ =\ \underline{0},\ \underline{1},\ \underline{2},\ \ldots\}.

(iii) If PA is consistent, then we obtain a non-standard model for PA which contains an `infinite’ integer by adding the PA formula [\neg (\forall x)R(x)] as an axiom to PA, where [(\forall x)R(x)] is a Gödelian formula [10] that is unprovable in PA, even though [R(n)] is provable in PA for any given PA numeral [n] (Go31, p.25(1)).

We shall first argue that (i) and (ii)—which appeal to Thoralf Skolem’s ante-computationalist reasoning (in Sk34) for the existence of a non-standard model of PA—should be treated as foundationally fragile from a finitary, post-computationalist perspective within classical logic [11].

We shall then argue that although (iii)—which appeals to Kurt Gödel’s (also ante-computationalist) reasoning (Go31) for the existence of a non-standard model of PA—does yield a model other than the classical `standard’ model of PA, we cannot conclude by even classical (albeit post-computationalist) reasoning that the domain is other than the domain N of the natural numbers unless we make the non-constructive—and logically fragile—extraneous assumption that a consistent PA is necessarily \omega-consistent.

(\omega-consistency): A formal system S is \omega-consistent if, and only if, there is no S-formula [F(x)] for which, first, [\neg(\forall x)F(x)] is S-provable and, second, [F(a)] is S-provable for any given S-term [a].

\S 2 Algorithmically verifiable formulas and algorithmically computable formulas

We begin by distinguishing between:

Definition 1: An atomic formula [F(x)] [12] of PA is algorithmically verifiable under an interpretation if, and only if, for any given numeral [n], there is an algorithm AL_{(F,\ n)} which can provide objective evidence (Mu91) for deciding the truth value of each formula in the finite sequence of PA formulas \{[F(1)], [F(2)], \ldots, [F(n)]\} under the interpretation.

The concept is well-defined in the sense that the `algorithmic verifiability’ of the formulas of a formal language which contain logical constants can be inductively defined under an interpretation in terms of the `algorithmic verifiability’ of the interpretations of the atomic formulas of the language (An12).

We note further that the formulas of the first order Peano Arithmetic PA are decidable under the standard interpretation of PA over the domain \mathbb{N} of the natural numbers if, and only if, they are algorithmically verifiable under the interpretation [13].

Definition 2: An atomic formula [F(x)] of PA is algorithmically computable under an interpretation if, and only if, there is an algorithm AL_{F} that can provide objective evidence for deciding the truth value of each formula in the denumerable sequence of PA formulas \{[F(1)], [F(2)], \ldots\} under the interpretation.

This concept too is well-defined in the sense that the `algorithmic computability’ of the formulas of a formal language which contain logical constants can also be inductively defined under an interpretation in terms of the `algorithmic computability’ of the interpretations of the atomic formulas of the language (An12).

We note further that the PA-formulas are decidable under an algorithmic interpretation of PA over \mathbb{N} if, and only if, they are algorithmically computable under the interpretation [14].
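The structural difference between Definition 1 and Definition 2 can be sketched in Python. Here F_star is only a toy stand-in for an interpreted relation, chosen so that the code runs; for a genuinely algorithmically verifiable but not algorithmically computable relation the two notions come apart (see Lemma 1 below). The function names are mine.

from typing import Callable, Dict

# Toy stand-in for the interpreted relation F*(k); for this toy the two
# notions below coincide, which is exactly what need not happen in general.
def F_star(k: int) -> bool:
    return k % 2 == 0

# Definition 2 (algorithmic computability): a single algorithm AL_F decides
# the truth value of every formula in the denumerable sequence F(1), F(2), ...
def AL_F(k: int) -> bool:
    return F_star(k)

# Definition 1 (algorithmic verifiability): for each given numeral n there is
# *some* algorithm AL_{(F, n)} deciding the finite sequence F(1), ..., F(n);
# nothing requires these finite verifiers to arise from one uniform procedure,
# and the verifier for a given n may be, e.g., a hard-coded table.
def make_AL_F_n(n: int) -> Callable[[int], bool]:
    table: Dict[int, bool] = {k: F_star(k) for k in range(1, n + 1)}
    return lambda k: table[k]

verify_up_to_10 = make_AL_F_n(10)
print(AL_F(4), verify_up_to_10(4))   # True True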

Although we shall not appeal to the following in this paper, we note in passing that the foundational significance [15] of the distinction lies in the argument that:

Lemma 1: There are algorithmically verifiable number theoretical functions which are not algorithmically computable. [16]

Proof: Let r(n) denote the n^{th} digit in the decimal expansion \sum_{n=1}^{\infty}r(n) \cdot 10^{-n} of a putatively given real number R in the interval 0 < R \leq 1. By the definition of a real number as the limit of a Cauchy sequence of rationals, it follows that r(n) is an algorithmically verifiable number-theoretic function. Since the algorithmically computable reals are countable (Tu36), Cantor's diagonal argument (Kl52, pp.6-8) shows that there are real numbers that are not algorithmically computable. The Lemma follows. \Box
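A minimal Python sketch of the diagonal step invoked in the proof above, assuming an enumeration of the algorithmically computable reals in the interval 0 < R \leq 1 is given as a function from indices to digit functions; the names are mine.

def diagonal_digit(enumeration, n):
    # n-th digit of Cantor's diagonal real: choose a digit different from the
    # n-th digit of the n-th listed real, so that the diagonal real differs
    # from every real in the enumeration and hence cannot itself occur in it.
    d = enumeration(n)(n)
    return 5 if d != 5 else 6

# Toy enumeration: the m-th 'real' has constant digit (m % 10) in every place.
toy = lambda m: (lambda n: m % 10)
print([diagonal_digit(toy, n) for n in range(1, 6)])   # [5, 5, 5, 5, 6]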

\S 3 Making non-finitary assumptions explicit

We next make explicit—and briefly review—a tacitly held fundamental tenet of classical logic which is unrestrictedly adopted as intuitively obvious by standard literature [17] that seeks to build upon the formal first-order predicate calculus FOL:

Definition 3: (Aristotle’s particularisation) This holds that from an assertion such as:

`It is not the case that: For any given x, P^{*}(x) does not hold’,

usually denoted symbolically by `\neg(\forall x)\neg P^{*}(x)‘, we may always validly infer in the classical, Aristotelian, logic of predicates (HA28, pp.58-59) that:

`There exists an unspecified x such that P^{*}(x) holds’,

usually denoted symbolically by `(\exists x)P^{*}(x)‘.

\S 3.1 The significance of Aristotle’s particularisation for the first-order predicate calculus

Now we note that in a formal language the formula `[(\exists x)P(x)]‘ is an abbreviation for the formula `[\neg(\forall x)\neg P(x)]‘; and that the commonly accepted interpretation of this formula tacitly appeals to Aristotle’s particularisation.

However, as L. E. J. Brouwer had noted in his seminal 1908 paper on the unreliability of logical principles (Br08), the commonly accepted interpretation of this formula is ambiguous if interpretation is intended over an infinite domain.

Brouwer essentially argued that, even supposing the formula `[P(x)]‘ of a formal Arithmetical language interprets as an arithmetical relation denoted by `P^{*}(x)‘, and the formula `[\neg(\forall x)\neg P(x)]‘ as the arithmetical proposition denoted by `\neg(\forall x)\neg P^{*}(x)‘, the formula `[(\exists x)P(x)]‘ need not interpret as the arithmetical proposition denoted by the usual abbreviation `(\exists x)P^{*}(x)‘; and that such postulation is invalid as a general logical principle in the absence of a means for constructing some putative object a for which the proposition P^{*}(a) holds in the domain of the interpretation.

Hence we shall follow the convention that the assumption that `(\exists x)P^{*}(x)‘ is the intended interpretation of the formula `[(\exists x)P(x)]‘—which is essentially the assumption that Aristotle’s particularisation holds over the domain of the interpretation—must always be explicit.

\S 3.2 The significance of Aristotle’s particularisation for PA

In order to avoid intuitionistic objections to his reasoning, Kurt Gödel introduced the syntactic property of \omega-consistency [18] as an explicit assumption in his formal reasoning in his seminal 1931 paper on formally undecidable arithmetical propositions (Go31, p.23 and p.28).

Gödel explained at some length [19] that his reason for introducing \omega-consistency explicitly was to avoid appealing to the semantic concept of classical arithmetical truth in Aristotle’s logic of predicates (which presumes Aristotle’s particularisation).

The two concepts are meta-mathematically equivalent in the sense that, if PA is consistent, then PA is \omega-consistent if, and only if, Aristotle’s particularisation holds under the standard interpretation of PA [20].

\S 4 The ambiguity in admitting an `infinite’ constant

We begin our consideration of standard arguments for the existence of non-standard models of PA which contain an `infinite’ integer by first highlighting and eliminating an ambiguity in the argument as it is usually found in standard texts [21]:

“Corollary. There is a non-standard model of P with domain the natural numbers in which the denotation of every nonlogical symbol is an arithmetical relation or function.

Proof. As in the proof of the existence of nonstandard models of arithmetic, add a constant \infty to the language of arithmetic and apply the Compactness Theorem to the theory

P\cup\{\infty \neq n: n\ =\ 0,\ 1,\ 2,\ \ldots\}

to conclude that it has a model (necessarily infinite, since all models of P are). The denotations of \infty in any such model will be a non-standard element, guaranteeing that the model is non-standard. Then apply the arithmetical Löwenheim-Skolem theorem to conclude that the model may be taken to have domain the natural numbers, and the denotations of all nonlogical symbols arithmetical.”

… BBJ03, p.306, Corollary 25.3.

\S 4.1 We cannot force PA to admit a transfinite ordinal

The ambiguity lies in a possible interpretation of the symbol \infty as a `completed’ infinity (such as Cantor’s first transfinite ordinal \omega) in the context of non-standard models of PA. To eliminate this possibility we establish trivially that, and briefly examine why:

Theorem 1: No model of PA can admit a transfinite ordinal under the standard interpretation of the classical logic FOL[22].

Proof: Let [G(x)] denote the PA-formula:

[x=0 \vee \neg(\forall y)\neg(x=Sy)]

Since Aristotle’s particularisation is tacitly assumed under the standard interpretation of FOL, this translates in every model of PA, as:

If x denotes an element in the domain of a model of PA, then either x is 0, or x is a `successor’.

Further, in every model of PA, if G(x) denotes the interpretation of [G(x)]:

(a) G(0) is true;

(b) If G(x) is true, then G(Sx) is true.

Hence, by Gödel’s completeness theorem:

(c) PA proves [G(0)];

(d) PA proves [G(x) \rightarrow G(Sx)].

Gödel’s Completeness Theorem: In any first-order predicate calculus, the theorems are precisely the logically valid well-formed formulas (i. e. those that are true in every model of the calculus).

Further, by Generalisation:

(e) PA proves [(\forall x)(G(x) \rightarrow G(Sx))];

Generalisation in PA: [(\forall x)A] follows from [A].

Hence, by Induction:

(f) [(\forall x)G(x)] is provable in PA.

Induction Axiom Schema of PA: For any formula [F(x)] of PA:

[F(0) \rightarrow ((\forall x)(F(x) \rightarrow F(Sx)) \rightarrow (\forall x)F(x))]

In other words, except 0, every element in the domain of any model of PA is a `successor’. Further, the standard PA axioms ensure that x can only be a `successor’ of a unique element in any model of PA.

Since Cantor’s first limit ordinal \omega is not the `successor’ of any ordinal in the sense required by the PA axioms, and since there are no infinitely descending sequences of ordinals (cf. Me64, p.261) in a model—if any—of a first order set theory such as ZF, the theorem follows. \Box

\S 4.2 Why we cannot force PA to admit a transfinite ordinal

Theorem 1 reflects the fact that we can define the usual order relation `<' in PA so that every instance of the PA Axiom Schema of Finite Induction, such as, say:

(i) [F(0) \rightarrow ((\forall x)(F(x) \rightarrow F(Sx)) \rightarrow (\forall x)F(x))]

yields the weaker PA theorem:

(ii) [F(0) \rightarrow ((\forall x) ((\forall y)(y < x \rightarrow F(y)) \rightarrow F(x)) \rightarrow (\forall x)F(x))]

Now, if we interpret PA without relativisation in ZF [23]—i.e., numerals as finite ordinals, [Sx] as [x \cup \left \{ x \right \}], etc.—then (ii) always translates in ZF as a theorem:

(iii) [F(0) \rightarrow ((\forall x)((\forall y)(y \in x \rightarrow F(y)) \rightarrow F(x)) \rightarrow (\forall x)F(x))]

However, (i) does not always translate similarly as a ZF-theorem, since the following is not necessarily provable in ZF:

(iv) [F(0) \rightarrow ((\forall x)(F(x) \rightarrow F(x \cup \left \{x\right \})) \rightarrow (\forall x)F(x))]

Example: Define [F(x)] as `[x \in \omega]’.

We conclude that, whereas the language of ZF admits as a constant the first limit ordinal \omega which would interpret in any putative model of ZF as the (`completed’ infinite) set \omega of all finite ordinals:

Corollary 1: The language of PA admits of no constant that interprets in any model of PA as the set N of all natural numbers.

We note that it is the non-logical Axiom Schema of Finite Induction of PA which does not allow us to introduce—contrary to what is suggested by standard texts [24]—an `actual’ (or `completed’) infinity disguised as an arbitrary constant (usually denoted by c or \infty) into either the language, or a putative model, of PA [25].

\S 5 Forcing PA to admit denumerable descending dense sequences

The significance of Theorem 1 is seen in the next two arguments, which attempt to implicitly bypass the Theorem’s constraint by appeal to the Compactness Theorem for forcing a non-standard model onto PA [26].

However, we argue in both cases that applying the Compactness Theorem constructively—even from a classical perspective—does not logically yield a non-standard model for PA with an `infinite’ integer as claimed [27].

\S 5.1 An argument for a non-standard model of PA

The first is the argument (Ln08, p.7) that we can define a non-standard model of PA with an infinite descending chain of successors, where the only non-successor is the null element 0:

1. Let <N (the set of natural numbers); = (equality); S (the successor function); + (the addition function); \ast (the product function); 0 (the null element)> be the structure that serves to define a model of PA, say N.

2. Let T[N] be the set of PA-formulas that are satisfied or true in N.

3. The PA-provable formulas form a subset of T[N].

4. Let \Gamma be the countable set of all PA-formulas of the form [c_{n} = Sc_{n+1}], where the index n is a natural number.

5. Let T be the union of \Gamma and T[N].

6. T[N] plus any finite set of members of \Gamma has a model, e.g., N itself, since N is a model of any finite descending chain of successors (see the sketch following this list).

7. Consequently, by Compactness, T has a model; call it M.

8. M has an infinite descending sequence with respect to S because it is a model of \Gamma.

9. Since PA is a subset of T, M is a non-standard model of PA.
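The finite-satisfiability claim in \S 5.1(6) can be made concrete with a minimal Python sketch; the helper name is mine, and each sentence [c_{n} = Sc_{n+1}] is represented simply by its index n.

def satisfy_finite_fragment(indices):
    # Given the indices n of finitely many sentences [c_n = S c_{n+1}],
    # interpret each constant c_k as a natural number so that every such
    # sentence holds in N: send c_k to (m + 1) - k, where m is the largest
    # index mentioned, giving a finite descending chain that ends at 0.
    m = max(indices)
    values = {k: (m + 1) - k for k in range(0, m + 2)}
    assert all(values[n] == values[n + 1] + 1 for n in indices)
    return values

print(satisfy_finite_fragment({0, 3, 7}))
# {0: 8, 1: 7, ..., 8: 0}: c_0 = S c_1, c_3 = S c_4 and c_7 = S c_8 all hold in N

This is precisely why no finite fragment of \Gamma can force an infinite descending chain of successors onto N.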

\S 5.2 Why the argument in \S 5.1 is logically fragile

However if—as claimed in \S 5.1(6) above—N is a model of T[N] plus any finite set of members of \Gamma, and the PA term [c_{n}] is well-defined for any given natural number n, then:

\bullet All PA-formulas of the form [c_{n} = Sc_{n+1}] are PA-provable,

\bullet \Gamma is a proper sub-set of the PA-provable formulas, and

\bullet T is identically T[N].

Reason: The argument cannot be that some PA-formula of the form [c_{n} = Sc_{n+1}] is true in N, but not PA-provable, as this would imply that if PA is consistent then PA+[\neg (c_{n} = Sc_{n+1})] has a model other than N; in other words, it would presume that which is sought to be proved, namely that PA has a non-standard model [28]!

Consequently, the postulated model M of T in \S 5.1(7) by `Compactness’ is the model N that defines T[N]. However, N has no infinite descending sequence with respect to S, even though it is a model of \Gamma.

Hence the argument does not establish the existence of a non-standard model of PA with an infinite descending sequence with respect to the successor function S.

\S 5.3 A formal argument for a non-standard model of PA

The second is the more formal argument [29]:

“Let Th(\mathbb{N}) denote the complete \mathcal{L}_{A}-theory of the standard model, i.e. Th(\mathbb{N}) is the collection of all true \mathcal{L}_{A}-sentences. For each n \in \mathbb{N} we let \underline{n} be the closed term (\ldots((1+1)+1)+ \ldots +1) (n\ 1s) of \mathcal{L}_{A}; \underline{0} is just the constant symbol 0. We now expand our language \mathcal{L}_{A} by adding to it a new constant symbol c, obtaining the new language \mathcal{L}_{c}, and consider the following \mathcal{L}_{c}-theory with axioms

\rho (for each \rho \in Th(\mathbb{N}))

and

c>\underline{n} (for each n \in \mathbb{N})

This theory is consistent, for each finite fragment of it is contained in

T_{k} = Th(\mathbb{N}) \cup \{c > \underline{n}\ |\ n < k\}

for some k \in \mathbb{N}, and clearly the \mathcal{L}_{c}-structure (\mathbb{N},\ k) with domain \mathbb{N},\ 0,\ 1,\ +,\ \cdot and < interpreted naturally, and c interpreted by the integer k, satisfies T_{k}. Thus by the compactness theorem \cup_{k \in \mathbb{N}}T_{k} is consistent and has a model M_{c}. The first thing to note about M_{c} is that

M_{c} \models c>\underline{n}

for all n \in \mathbb{N}, and hence it contains an `infinite’ integer.”
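The finite-fragment step in the quoted argument can be checked mechanically. A minimal Python sketch, with the hypothetical name satisfies_T_k, verifies only the finitely many new axioms c > \underline{n} with n < k, taking the truth of Th(\mathbb{N}) in the structure (\mathbb{N},\ k) for granted:

def satisfies_T_k(k):
    # The L_c-structure (N, k): interpret the new constant c by the natural
    # number k; then every new axiom c > n with n < k holds.
    return all(k > n for n in range(k))

print(all(satisfies_T_k(k) for k in range(10)))   # True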

\S 5.4 Why the argument in \S 5.3 too is logically fragile

We note again that, from an arithmetical perspective, any application of the Compactness Theorem to PA can neither ignore currently accepted computationalist doctrines of objectivity (cf. Mu91), nor contradict the constructive assignment of satisfaction and truth to the atomic formulas of PA (and therefore to the compound formulas under Tarski’s inductive definitions) in terms of either algorithmic verifiability or algorithmic computability (An12, \S 3).

Accordingly, from an arithmetical perspective we can only conclude by the Compactness Theorem that if Th(\mathbb{N}) is the \mathcal{L}_{A}-theory of the standard model (interpretation), then we may consistently add to it the following as an additional—not necessarily independent—axiom:

(\exists y)(y > x).

Moreover, even though (\exists y)(y > x) is algorithmically computable as always true in the standard model—whence all instances of it are also therefore in Th(\mathbb{N})—we cannot conclude by the Compactness Theorem that \cup_{k \in \mathbb{N}}T_{k} is consistent and has a model M_{c} which contains an `infinite’ integer.

Reason: The condition `k \in \mathbb{N}‘ in \cup_{k \in \mathbb{N}}T_{k} requires, first of all, that we be able to extend Th(\mathbb{N}) by the addition of a `relativised’ axiom (cf. Fe92; Me64, p.192) such as:

(\exists y)((x \in \mathbb{N}) \rightarrow (y > x)),

from which we may conclude the existence of some c such that:

M_{c} \models c>\underline{n}

for all n \in \mathbb{N}.

However, even this would not yield a model for Th(\mathbb{N}) since, by Theorem 1, we cannot introduce a `completed’ infinity such as \mathbb{N} into any model of Th(\mathbb{N})!

As the argument stands, it seeks to violate finitarity by adding a new constant c to the language \mathcal{L}_{A} of PA that is not definable in \mathcal{L}_{A} and, ipso facto, adding an atomic formula [c=x] to PA whose satisfaction under any interpretation of PA is not algorithmically verifiable!

Since the atomic formulas of PA are algorithmically verifiable under the standard interpretation (An12, Corollary 2), the above conclusion too postulates that which it seeks to prove!

Moreover, the postulation would be false if Th(\mathbb{N}) were categorical.

Since Th(\mathbb{N}) must have a non-standard model if it is not categorical, we consider next whether we may conclude from Gödel’s incompleteness argument (in Go31) that any such model can have an `infinite’ integer.

\S 6 Gödel’s argument for a non-standard model of PA

We begin by considering the Gödelian formula [(\forall x)R(x)] [30] which is unprovable in PA if PA is consistent, even though the formula [R(n)] is provable in a consistent PA for any given PA numeral [n].

Now, it follows from Gödel’s reasoning [31] that:

Theorem 2: If PA is consistent, then we may add the PA formula [\neg (\forall x)R(x)] as an axiom to PA without inviting inconsistency.

Theorem 3: If PA is \omega-consistent, then we may add the PA formula [(\forall x)R(x)] as an axiom to PA without inviting inconsistency.

Gödel concluded from this that:

Corollary 2: If PA is \omega-consistent, then there are at least two distinctly different models of PA. \Box

If we assume that a consistent PA is necessarily \omega-consistent, then it follows that one of the two putative models postulated by Corollary 2 must contain elements other than the natural numbers.

We conclude that Gödel’s justification for the assumption that non-standard models of PA containing elements other than the natural numbers are logically feasible lies in his non-constructive—and logically fragile—assumption that a consistent PA is necessarily \omega-consistent.

\S 6.1 Why Gödel’s assumption is logically fragile

Now, whereas Gödel’s proof of Corollary 2 appeals to the non-constructive Aristotle’s particularisation, a constructive proof of the Corollary follows trivially from evidence-based interpretations of PA (An12).

Reason: Tarski’s inductive definitions allow us to provide finitary satisfaction and truth certificates to all atomic (and ipso facto to all compound) formulas of PA over the domain N of the natural numbers in two essentially different ways:

(1) In terms of algorithmic verifiability (An12, \S 4.2); and

(2) In terms of algorithmic computability (An12, \S 4.3).

That there can be even one, let alone two, logically sound and finitary assignments of satisfaction and truth certificates to both the atomic and compound formulas of PA was hitherto unsuspected!

Moreover, neither the putative `algorithmically verifiable’ model, nor the `algorithmically computable’ model, of PA defined by these finitary satisfaction and truth assignments contains elements other than the natural numbers.

(a) Any algorithmically verifiable model of PA is necessarily over \mathbb{N}

For instance if, in the first case, we assume that the algorithmically verifiable atomic formulas of PA determine an algorithmically verifiable model of PA over the domain \mathbb{N} of the PA numerals, then such a putative model would be isomorphic to the standard model of PA over the domain N of the natural numbers (An12, \S 4.2 & \S 5, Corollary 2).

However, such a putative model of PA over \mathbb{N} would not be finitary since, if the formula [(\forall x) F(x)] were to interpret as true in it, then we could only conclude that, for any numeral [n], there is an algorithm which will finitarily certify the formula [F(n)] as true under an algorithmically verifiable interpretation in \mathbb{N}.

We could not conclude that there is a single algorithm which, for any numeral [n], will finitarily certify the formula [F(n)] as true under the algorithmically verifiable interpretation in \mathbb{N}.

Consequently, the PA Axiom Schema of Finite Induction would not interpret as true finitarily under the algorithmically verifiable interpretation of PA over the domain \mathbb{N} of the PA numerals.

Thus the algorithmically verifiable interpretation of PA would not define a finitary model of PA.

However, if we were to assume that the algorithmically verifiable interpretation of PA defines a non-finitary model of PA, then it would follow that:

\bullet PA is necessarily \omega-consistent;

\bullet Aristotle’s particularisation holds over N; and

\bullet The `standard’ interpretation of PA also defines a non-finitary model of PA over N.

(b) The algorithmically computable interpretation of PA is over \mathbb{N}

The second case is where the algorithmically computable atomic formulas of PA determine an algorithmically computable model of PA over the domain N of the natural numbers (An12, \S 4.3 & \S 5.2).

The algorithmically computable model of PA is finitary since we can show that, if the formula [(\forall x) F(x)] interprets as true under it, then we may always conclude that there is a single algorithm which, for any numeral [n], will finitarily certify the formula [F(n)] as true in N under the algorithmically computable interpretation.

Consequently we can show that all the PA axioms—including the Axiom Schema of Finite Induction—interpret finitarily as true in N under the algorithmically computable interpretation of PA, and the PA Rules of Inference preserve such truth finitarily (An12, \S 5.2 Theorem 4).

Thus the algorithmically computable interpretation of PA defines a finitary model of PA from which we may conclude that:

\bullet PA is consistent (An12, \S 5.3, Theorem 6).

\S 6.2 Why we cannot conclude that PA is necessarily \omega-consistent

By the way the above finitary interpretation (b) is defined under Tarski’s inductive definitions (An12, \S 4.3), if a PA-formula [F] interprets as true in the corresponding finitary model of PA, then there is an algorithm that provides a certificate for such truth for [F] in N; whilst if [F] interprets as false in the above finitary model of PA, then there is no algorithm that can provide such a truth certificate for [F] in N (An12, \S 2).

Now, if there is no algorithm that can provide such a truth certificate for the Gödelian formula [R(x)] in N, then we would have by definition first that the PA formula [\neg(\forall x)R(x)] is true in the model, and second by Gödel’s reasoning that the formula [R(n)] is true in the model for any given numeral [n]. Hence Aristotle’s particularisation would not hold in the model.

However, by definition, if PA were \omega-consistent then Aristotle’s particularisation would necessarily hold in every model of PA.

It follows that unless we can establish that there is some algorithm which can provide such a truth certificate for the Gödelian formula [R(x)] in N, we cannot make the unqualified assumption—as Gödel appears to do—that a consistent PA is necessarily \omega-consistent.

Conclusion

We have argued that standard arguments for the existence of non-standard models of the first-order Peano Arithmetic PA with domains other than the domain N of the natural numbers should be treated as logically fragile even from within classical logic. In particular we have argued that although Gödel’s argument for the existence of a non-standard model of PA does yield a model of PA other than the classical non-finitary `standard’ model, we cannot conclude from it that the domain is other than the domain N of the natural numbers unless we make the non-constructive—and logically fragile—assumption that a consistent PA is necessarily \omega-consistent.

References

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic (4th ed). Cambridge University Press, Cambridge.

Be59 Evert W. Beth. 1959. The Foundations of Mathematics. In Studies in Logic and the Foundations of Mathematics. Edited by L. E. J. Brouwer, E. W. Beth, A. Heyting. 1959. North Holland Publishing Company, Amsterdam.

BF58 Paul Bernays and Abraham A. Fraenkel. 1958. Axiomatic Set Theory. In Studies in Logic and the Foundations of Mathematics. Edited by L. E. J. Brouwer, E. W. Beth, A. Heyting. 1959. North Holland Publishing Company, Amsterdam.

Bo00 Andrey I. Bovykin. 2000. On order-types of models of arithmetic. Thesis submitted to the University of Birmingham for the degree of Ph.D. in the Faculty of Science, School of Mathematics and Statistics, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom, 13 April 2000.

Br08 L. E. J. Brouwer. 1908. The Unreliability of the Logical Principles. English translation in A. Heyting, Ed. L. E. J. Brouwer: Collected Works 1: Philosophy and Foundations of Mathematics. Amsterdam: North Holland / New York: American Elsevier (1975): pp.107-111.

Co66 Paul J. Cohen. 1966. Set Theory and the Continuum Hypothesis. (Lecture notes given at Harvard University, Spring 1965) W. A. Benjamin, Inc., New York.

Da82 Martin Davis. 1958. Computability and Unsolvability. 1982 ed. Dover Publications, Inc., New York.

EC89 Richard L. Epstein, Walter A. Carnielli. 1989. Computability: Computable Functions, Logic, and the Foundations of Mathematics. Wadsworth & Brooks, California.

Fe92 Solomon Feferman. 1992. What rests on what: The proof-theoretic analysis of mathematics. Invited lecture, 15th International Wittgenstein Symposium: Philosophy of Mathematics, held in Kirchberg/Wechsel, Austria, 16-23 August 1992.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York.

HA28 David Hilbert & Wilhelm Ackermann. 1928. Principles of Mathematical Logic. Translation of the second edition of the Grundzüge Der Theoretischen Logik. 1928. Springer, Berlin. 1950. Chelsea Publishing Company, New York.

Hi25 David Hilbert. 1925. On the Infinite. Text of an address delivered in Münster on 4th June 1925 at a meeting of the Westphalian Mathematical Society. In Jean van Heijenoort. 1967. Ed. From Frege to Gödel: A source book in Mathematical Logic, 1878 – 1931. Harvard University Press, Cambridge, Massachusetts.

HP98 Petr Hájek and Pavel Pudlák. 1998. Metamathematics of First-Order Arithmetic. Volume 3 of Perspectives in Mathematical Logic Series, ISSN 0172-6641. Springer-Verlag, Berlin, 1998.

Ka91 Richard Kaye. 1991. Models of Peano Arithmetic. Oxford Logic Guides, 15. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1991.

Ka11 Richard Kaye. 2011. Tennenbaum’s theorem for models of arithmetic. In Set Theory, Arithmetic, and Foundations of Mathematics. Eds. Juliette Kennedy & Roman Kossak. Lecture Notes in Logic (No. 36). pp.66-79. Cambridge University Press, 2011. Cambridge Books Online. http://dx.doi.org/10.1017/CBO9780511910616.005.

Kl52 Stephen Cole Kleene. 1952. Introduction to Metamathematics. North Holland Publishing Company, Amsterdam.

Kn63 G. T. Kneebone. 1963. Mathematical Logic and the Foundations of Mathematics: An Introductory Survey. D. Van Nostrand Company Limited, London.

Ko06 Roman Kossak and James H. Schmerl. 2006. The structure of models of Peano arithmetic. Oxford Logic Guides, 50. Oxford Science Publications. The Clarendon Press, Oxford University Press, Oxford, 2006.

Li64 A. H. Lightstone. 1964. The Axiomatic Method. Prentice Hall, NJ.

Ln08 Laureano Luna. 2008. On non-standard models of Peano Arithmetic. In The Reasoner, Vol(2)2 p7.

Me64 Elliott Mendelson. 1964. Introduction to Mathematical Logic. Van Nostrand, Princeton.

Mu91. Chetan R. Murthy. 1991. An Evaluation Semantics for Classical Proofs. Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, (also Cornell TR 91-1213), 1991.

Nv64 P. S. Novikov. 1964. Elements of Mathematical Logic. Oliver & Boyd, Edinburgh and London.

Qu63 Willard Van Orman Quine. 1963. Set Theory and its Logic. Harvard University Press, Cambridge, Massachusetts.

Rg87 Hartley Rogers Jr. 1987. Theory of Recursive Functions and Effective Computability. MIT Press, Cambridge, Massachusetts.

Ro53 J. Barkley Rosser. 1953. Logic for Mathematicians. McGraw Hill, New York.

Sh67 Joseph R. Shoenfield. 1967. Mathematical Logic. Reprinted 2001. A. K. Peters Ltd., Massachusetts.

Sk28 Thoralf A. Skolem. 1928. On Mathematical Logic. Text of a lecture delivered on 22nd October 1928 before the Norwegian Mathematical Association. In Jean van Heijenoort. 1967. Ed. From Frege to Gödel: A source book in Mathematical Logic, 1878 – 1931. Harvard University Press, Cambridge, Massachusetts.

Sk34 Thoralf A. Skolem. 1934. Über die Nicht-charakterisierbarkeit der Zahlenreihe mittels endlich oder abzählbar unendlich vieler Aussagen mit ausschliesslich Zahlenvariablen. Fundamenta Mathematicae, 23, 150-161. English version: Peano’s axioms and models of arithmetic. In Mathematical interpretations of formal systems. North Holland, Amsterdam, 1955, pp.1-14.

Sm92 Raymond M. Smullyan. 1992. Gödel’s Incompleteness Theorems. Oxford University Press, Inc., New York.

Su60 Patrick Suppes. 1960. Axiomatic Set Theory. Van Nostrand, Princeton.

Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp. 544-546.

Wa63 Hao Wang. 1963. A survey of Mathematical Logic. North Holland Publishing Company, Amsterdam.

An12 Bhupinder Singh Anand. 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

An13 … 2013. A suggested mathematical perspective for the EPR argument. Presented on 7th April at the workshop on `Logical Quantum Structures‘ at UNILOG’2013, 4^{th} World Congress and School on Universal Logic, 29^{th} March 2013 – 7^{th} April 2013, Rio de Janeiro, Brazil.

Notes

Return to 1: By which we mean arguments such as in Ka91 (see pg.1), where the meta-theory is taken to be a set-theory such as ZF or ZFC, and the logical consistency of the meta-theory is not considered relevant to the argumentation.

Return to 2: For purposes of this investigation we may take this to be a first order theory such as the theory S defined in Me64, pp.102-103.

Return to 3: eg. Ka91; Bo00; BBJ03, ch.25, p.302; Ko06; Ka11.

Return to 4: As argued in Ka91, p.10-11.

Return to 5: Some of the—hitherto unsuspected—consequences of this doctrine are detailed in An12.

Return to 6: An12, Corollary 2; `non-finitary’ because the Axiom Schema of Finite Induction cannot be finitarily verified as true under the standard interpretation of PA with respect to `truth’ as defined by the algorithmically verifiable formulas of PA.

Return to 7: eg., BBJ03, p.155, Lemma 13.3 (Model existence lemma).

Return to 8: cf. The standard non-constructive set-theoretical assignment-by-postulation (S5) of the satisfaction properties (S1) to (S8) in BBJ03, p.153, Lemma 13.1 (Satisfaction properties lemma), which appeals critically to Aristotle’s particularisation.

Return to 9: For purposes of this investigation we may take this to be an interpretation of PA as defined in Me64, p.107.

Return to 10: In his seminal 1931 paper Go31, Kurt Gödel defines, and refers to, the formula corresponding to [R(x)] only by its `Gödel’ number r (op. cit., p.25, Eqn.(12)), and to the formula corresponding to [(\forall x)R(x)] only by its `Gödel’ number 17 Gen r.

Return to 11: By `classical logic’ we mean the standard first-order predicate calculus FOL where we neither deny the Law of the Excluded Middle, nor assume that FOL is \omega-consistent (i.e., we do not assume that Aristotle’s particularisation must hold under any interpretation of the logic).

Return to 12: Notation: For the sake of convenience, we shall use square brackets to indicate that the expression enclosed by them is to be treated as denoting a formula of a formal theory, and not as denoting an interpretation.

Return to 13: However, as noted earlier, the Axiom Schema of Finite Induction cannot be finitarily verified as true under the standard interpretation of PA with respect to `truth’ as defined by the algorithmically verifiable formulas of PA.

Return to 14: In this case however, the Axiom Schema of Finite Induction can be finitarily verified as true under the standard interpretation of PA with respect to `truth’ as defined by the algorithmically computable formulas of PA (An12, Theorem 4).

Return to 15: The far reaching—hitherto unsuspected—consequences of this distinction for PA are detailed in An12.

Return to 16: We note that algorithmic computability implies the existence of a single algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability guarantees only that, for each natural number n, some algorithm can decide the truth/falsity of the propositions in the corresponding finite initial segment of the sequence. From the point of view of a finitary mathematical philosophy, the significant difference between the two concepts could be expressed (An13) by saying that we may treat the decimal representation of a real number as corresponding to a physically measurable limit—and not only to a mathematically definable limit—if, and only if, such a representation is definable by an algorithmically computable function.
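
The contrast can be made concrete with a minimal Python sketch (added here purely for illustration; the helper names make_prefix_decider and make_uniform_decider, and the sample relation, are hypothetical and not drawn from An12 or An13). Algorithmic computability demands one decider AL_{F} that settles the whole denumerable sequence, whereas algorithmic verifiability demands only, for each n, some decider AL_{(F, n)} for the finite initial segment up to n, which can always be realised by a finite lookup table once those n truth values are given:

# Illustrative sketch only: the helper names below are hypothetical.
from typing import Callable, List

def make_prefix_decider(truth_prefix: List[bool]) -> Callable[[int], bool]:
    """AL_{(F, n)}: decides F*(k) only for k = 1, ..., n, via a finite lookup table."""
    def decide(k: int) -> bool:
        if not 1 <= k <= len(truth_prefix):
            raise ValueError("this decider only answers for the finite prefix")
        return truth_prefix[k - 1]
    return decide

def make_uniform_decider(f_star: Callable[[int], bool]) -> Callable[[int], bool]:
    """AL_{F}: a single decider for the whole denumerable sequence F*(1), F*(2), ...
    Such a decider exists only when F*(x) is algorithmically computable."""
    return f_star

# Example: F*(x) := 'x is even' is algorithmically computable, so the single
# decider exists and every prefix decider can be read off from it.
al_f = make_uniform_decider(lambda x: x % 2 == 0)
al_f_5 = make_prefix_decider([al_f(k) for k in range(1, 6)])
assert all(al_f(k) == al_f_5(k) for k in range(1, 6))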

Return to 17: See Hi25, p.382; HA28, p.48; Sk28, p.515; Go31, p.32; Kl52, p.169; Ro53, p.90; BF58, p.46; Be59, pp.178 & 218; Su60, p.3; Wa63, pp.314-315; Qu63, pp.12-13; Kn63, p.60; Co66, p.4; Me64, pp.45, 47, 52(ii), 214(fn); Nv64, p.92; Li64, p.33; Sh67, p.13; Da82, p.xxv; Rg87, p.xvii; EC89, p.174; Mu91; Sm92, p.18, Ex.3; BBJ03, p.102.

Return to 18: The significance of \omega-consistency for the formal system PA is highlighted in An12.

Return to 19: In his introduction on p.9 of Go31.

Return to 20: For details see An12.

Return to 21: cf. HP98, p.13, \S 0.29; Me64, p.112, Ex. 2.

Return to 22: For purposes of this investigation we may take this to be the first order predicate calculus K as defined in Me64, p.57.

Return to 23: In the sense indicated by Feferman in Fe92.

Return to 24: eg. HP98, p.13, \S 0.29; Ka91, p.11 & p.12, fig.1; BBJ03, p.306, Corollary 25.3; Me64, p.112, Ex. 2.

Return to 25: A possible reason why the Axiom Schema of Finite Induction does not admit non-finitary reasoning into either PA, or into any model of PA, is suggested in \S 6.1 below.

Return to 26: eg. Ln08, p.7; Ka91, pp.10-11, p.74 & p.75, Theorem 6.4.

Return to 27: And as suggested also by standard texts in such cases; eg. BBJ03, p.306, Corollary 25.3; Me64, p.112, Ex. 2.

Return to 28: To place this distinction in perspective, Adrien-Marie Legendre and Carl Friedrich Gauss independently conjectured in 1796 that, if \pi (x) denotes the number of primes less than x, then \pi (x) is asymptotically equivalent to x/ln(x). Between 1848 and 1850, Pafnuty Lvovich Chebyshev confirmed that if \pi (x)/(x/ln(x)) has a limit, then it must be 1. However, the crucial question of whether \pi (x)/(x/ln(x)) has a limit at all was answered in the affirmative using analytic methods independently by Jacques Hadamard and Charles Jean de la Vallée Poussin only in 1896, and using only elementary methods by Atle Selberg and Paul Erdős in 1949.
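
As a quick numerical check of the asymptotic \pi (x) \sim x/ln(x) referred to above (a sketch added here for illustration only; the function primes_up_to is a hypothetical helper, not part of the original note), one can tabulate the ratio \pi (x)/(x/ln(x)) for a few values of x :

# Illustrative sketch only: tabulate pi(x) against x/ln(x).
from math import log

def primes_up_to(x: int) -> list:
    """Sieve of Eratosthenes: all primes <= x."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    pi_x = len(primes_up_to(x))
    print(x, pi_x, round(pi_x / (x / log(x)), 4))
# The ratio pi(x)/(x/ln(x)) falls slowly towards 1 as x grows, in line with
# the limit established by Hadamard and de la Vallée Poussin.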

Return to 29: Ka91, pp.10-11; attributed as essentially Skolem’s argument in Sk34.

Return to 30: In his seminal 1931 paper Go31, Kurt Gödel defines, and refers to, the formula corresponding to [R(x)] only by its `Gödel’ number r (op. cit., p.25, Eqn.(12)), and to the formula corresponding to [(\forall x)R(x)] only by its `Gödel’ number 17\ Gen\ r.

Return to 31: Go31, p.25(1) & p.25(2).
