
Why primality is polynomial time, but factorisation is not

Differentiating between the signature of a number and its value

A brief review: The significance of evidence-based reasoning

In a paper: `The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis’, which appeared in the December 2016 issue of Cognitive Systems Research [An16], I briefly addressed the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for objectively evidencing such acceptance in the sense of Chetan Murthy and Martin Löb:

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …” … Chetan R. Murthy: [Mu91], \S 1 Introduction.

“Intuitively we require that for each event-describing sentence, \phi_{o^{\iota}}n_{\iota} say (i.e. the concrete object denoted by n_{\iota} exhibits the property expressed by \phi_{o^{\iota}} ), there shall be an algorithm (depending on I, i.e. M^{*} ) to decide the truth or falsity of that sentence.” … Martin H Löb: [Lob59], p.165.

Definition 1 (Evidence-based reasoning in Arithmetic): Evidence-based reasoning accepts arithmetical propositions as true under an interpretation if, and only if, there is some specified methodology for objectively evidencing such acceptance.

The significance of introducing evidence-based reasoning for assigning truth values to the formulas of a first-order Peano Arithmetic, such as PA, under a well-defined interpretation (see Section 3 in [An16]), is that it admits the distinction:

(1) algorithmically verifiable `truth’ (Definition 3); and

(2) algorithmically computable `truth’ (Definition 4).

Definition 2 (Deterministic algorithm): A deterministic algorithm computes a mathematical function which has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.

Note that a deterministic algorithm can be suitably defined as a `realizer’ in the sense of the Brouwer-Heyting-Kolmogorov rules (see [Ba16], p.5).

For instance, under evidence-based reasoning the formula [(\forall x)F(x)] of the first-order Peano Arithmetic PA must always be interpreted weakly under the classical, standard, interpretation of PA (see [An16], Theorem 5.6) in terms of algorithmic verifiability (see [An16], Definition 1); where, if the PA-formula [F(x)] interprets as an arithmetical relation F^{*}(x) over N :

Definition 3 (Algorithmic verifiability): The number-theoretical relation F^{*}(x) is algorithmically verifiable if, and only if, for any natural number n, there is a deterministic algorithm AL_{(F,\ n)} which can provide evidence for deciding the truth/falsity of each proposition in the finite sequence \{F^{*}(1), F^{*}(2), \ldots, F^{*}(n)\}.

Whereas [(\forall x)F(x)] must always be interpreted strongly under the finitary interpretation of PA (see [An16], Theorem 6.7) in terms of algorithmic computability ([An16], Definition 2), where:

Definition 4 (Algorithmic computability): The number-theoretical relation F^{*}(x) is algorithmically computable if, and only if, there is a deterministic algorithm AL_{F} that can provide evidence for deciding the truth/falsity of each proposition in the denumerable sequence \{F^{*}(1), F^{*}(2), \ldots\}.
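The distinction is easiest to see against a deliberately simple example. The Python sketch below is my illustration only, with the relation `x is prime’ standing in for F^{*}(x): for a decidable relation the two notions coincide, since one algorithm settles every instance; the interesting separation arises for relations such as the arithmetical interpretation of Gödel’s undecidable formula, which is algorithmically verifiable, but not algorithmically computable, as always true over N (see [An16], Corollary 8.3, discussed below).

```python
# A minimal sketch (illustrative only, not the formal machinery of [An16]):
# the relation F*(x): 'x is prime' is decidable, so a single deterministic
# algorithm AL_F settles every instance of the denumerable sequence
# F*(1), F*(2), ... (Definition 4); a fortiori, for any given n it supplies
# the evidence for the finite sequence F*(1), ..., F*(n) that Definition 3
# asks for.

def AL_F(x: int) -> bool:
    """One deterministic algorithm deciding F*(x): 'x is prime'."""
    if x < 2:
        return False
    d = 2
    while d * d <= x:
        if x % d == 0:
            return False
        d += 1
    return True

def evidence_up_to(n: int) -> list[bool]:
    """Evidence in the sense of Definition 3: decide F*(1), ..., F*(n)."""
    return [AL_F(x) for x in range(1, n + 1)]

if __name__ == "__main__":
    print(evidence_up_to(10))
    # [False, True, True, False, True, False, True, False, False, False]
```

The point of the two definitions is the order of the quantifiers: the algorithm AL_{(F,\ n)} of Definition 3 is allowed to depend on the given n, whereas the single algorithm AL_{F} of Definition 4 is not.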

The significance of the distinction between algorithmically computable reasoning based on algorithmically computable truth, and algorithmically verifiable reasoning based on algorithmically verifiable truth, is that it admits the following, hitherto unsuspected, consequences:

(i) PA has two well-defined interpretations over the domain N of the natural numbers (including 0 ):

(a) the weak non-finitary standard interpretation I_{PA(N, SV)} ([An16], Theorem 5.6),

and

(b) a strong finitary interpretation I_{PA(N, SC)} ([An16], Theorem 6.7);

(ii) PA is non-finitarily consistent under I_{PA(N, SV)} ([An16], Theorem 5.7);

(iii) PA is finitarily consistent under I_{PA(N, SC)} ([An16], Theorem 6.8).

The significance of evidence-based reasoning for Computational Complexity

In this investigation I now show that the relevance of evidence-based reasoning, and of distinguishing between algorithmically verifiable and algorithmically computable number-theoretic functions (as defined above), for Computational Complexity is that it provides a formal foundation for placing in perspective, and complementing, an uncomfortably counter-intuitive entailment in number theory—Theorem 2 below—which conventional wisdom has treated as sufficient for concluding that the prime divisors of an integer cannot be proven to be mutually independent.

However, I show there that such informally perceived barriers are, in this instance, illusory; and that admitting the above distinction illustrates:

(a) Why the prime divisors of an integer are mutually independent (Theorem 2 of this paper);

(b) Why determining whether the signature (Definition 5 below) of a given integer n —coded as the key in a modified Bazeries-cylinder based combination lock (see Definition 7 of this paper)—is that of a prime, or not, can be done in polynomial time O(log_{_{e}}n) (Corollary 4 of this paper); as compared to the time \tilde{O}(log_{_{e}}^{15/2}n) given by Agrawal et al. in [AKS04], and improved to \tilde{O}(log_{_{e}}^{6}n) by Lenstra and Pomerance in [LP11], for determining whether the value of a given integer n is that of a prime or not.

(c) Why it can be cogently argued that determining a factor of a given integer cannot be done in polynomial time.

Definition 5 (Signature of a number): The signature of a given integer n is the sequence \{a_{_{n,i}}\}, where n + a_{_{n,i}} \equiv 0\ (mod\ p_{_{i}}) for all primes p_{_{i}} such that 1 \leq i \leq \pi(\sqrt{n}).

The signature is unique since, if m and n with p_{_{\pi(\sqrt{m})+1}}^{2} > m \geq p_{_{\pi(\sqrt{m})}}^{2} and p_{_{\pi(\sqrt{n})+1}}^{2} > n \geq p_{_{\pi(\sqrt{n})}}^{2} have the same signature, then |m - n| = c_{_{1}}.\prod_{i=1}^{\pi(\sqrt{m})}p_{_{i}} = c_{_{2}}.\prod_{i=1}^{\pi(\sqrt{n})}p_{_{i}}; whence c_{_{1}} = c_{_{2}} = 0, since \prod_{i=1}^{k}p_{_{i}} > (\prod_{i=2}^{k-2}p_{_{i}}).p_{_{k}}^{2} > p_{_{k+1}}^{2} for k > 4 by appeal to Bertrand’s Postulate 2.p_{_{k}} > p_{_{k+1}}; and the uniqueness is easily verified for k \leq 4.
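A small Python sketch (again only illustrative, and assuming nothing beyond Definition 5) makes the definition concrete: it computes the signature of a given n over the primes p_{_{i}} \leq \sqrt{n}, and reads off from the signature whether n is a prime, since an n > 1 is composite exactly when some a_{_{n,i}} equals 0.

```python
# A minimal sketch of Definition 5: the signature of n is the sequence of
# residues a_{n,i} with n + a_{n,i} = 0 (mod p_i), taken over the primes
# p_i <= sqrt(n); n > 1 is composite exactly when some a_{n,i} is 0.

from math import isqrt

def primes_up_to(m: int) -> list[int]:
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (m + 1)
    sieve[:2] = [False] * min(2, m + 1)
    for p in range(2, isqrt(m) + 1):
        if sieve[p]:
            for q in range(p * p, m + 1, p):
                sieve[q] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

def signature(n: int) -> list[int]:
    """The sequence a_{n,i} = (-n) mod p_i over the primes p_i <= sqrt(n)."""
    return [(-n) % p for p in primes_up_to(isqrt(n))]

def is_prime_from_signature(n: int) -> bool:
    return n > 1 and all(a != 0 for a in signature(n))

if __name__ == "__main__":
    print(signature(103))                # residues of -103 modulo 2, 3, 5, 7
    print(is_prime_from_signature(103))  # True
    print(is_prime_from_signature(143))  # False (11 divides 143)
```

The sketch says nothing, of course, about the Bazeries-cylinder coding of the signature referred to in (b) above; it is only meant to make the definition concrete.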

Definition 6 (Value of a number): The value of a given integer n is any well-defined interpretation—over the domain of the natural numbers—of the (unique) numeral [n] that represents n in the first-order Peano Arithmetic PA.

We note that Theorem 2 establishes a lower limit for [AKS04] and [LP11], because determining the signature of a given integer n does not require knowledge of the value of the integer as defined by the Fundamental Theorem of Arithmetic.

Theorem 1 (Fundamental Theorem of Arithmetic): Every positive integer n > 1 can be represented in exactly one way as a product of prime powers:

n=p_{1}^{n_{1}}p_{2}^{n_{2}}\cdots p_{k}^{n_{k}}=\prod _{i=1}^{k}p_{i}^{n_{i}}

where p_{1} < p_{2} < \ldots < p_{k} are primes and the n_{i} are positive integers.

Are the prime divisors of an integer mutually independent?

In this paper I address the query:

Query 1: Are the prime divisors of an integer n mutually independent?

Definition 7 (Independent events): Two events are independent if the occurrence of one event does not influence (and is not influenced by) the occurrence of the other.

Intuitively, the prime divisors of an integer seem to be mutually independent by virtue of the Fundamental Theorem of Arithmetic (Theorem 1).

Moreover, the prime divisors of n can also be seen to be mutually independent in the usual, linearly displayed, Sieve of Eratosthenes, where whether an integer n is crossed out as a multiple of a prime p is obviously independent of whether it is also crossed out as a multiple of a prime q \neq p :

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 …
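The following fragment gives a rough numerical check of this intuition (it is not, of course, a proof): over 1, \ldots, N the proportion of integers struck out as multiples of a prime p is close to \frac{1}{p}, and the proportion struck out as multiples of both p and q is close to \frac{1}{pq}, which is precisely what Definition 7 would lead us to expect of independent events.

```python
# A rough numerical illustration (not a proof): divisibility by p and
# divisibility by q != p behave, over an initial segment 1..N, like
# independent events with probabilities 1/p and 1/q.

def divisibility_frequencies(p: int, q: int, N: int) -> tuple[float, float, float]:
    div_p = sum(1 for n in range(1, N + 1) if n % p == 0)
    div_q = sum(1 for n in range(1, N + 1) if n % q == 0)
    div_pq = sum(1 for n in range(1, N + 1) if n % p == 0 and n % q == 0)
    return div_p / N, div_q / N, div_pq / N

if __name__ == "__main__":
    f_p, f_q, f_pq = divisibility_frequencies(3, 7, 10**6)
    print(round(f_p, 4), round(1 / 3, 4))    # ~0.3333
    print(round(f_q, 4), round(1 / 7, 4))    # ~0.1429
    print(round(f_pq, 4), round(1 / 21, 4))  # ~0.0476
```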

Despite such compelling evidence, conventional wisdom appears to accept as definitive the counter-intuitive conclusion that, although we can see the following proposition to be true, we cannot mathematically prove it to be true:

Proposition 1: Whether or not a prime p divides an integer n is independent of whether or not a prime q \neq p divides the integer n .

We note that such an unprovable-but-intuitively-true conclusion makes a stronger assumption than that in Gödel’s similar claim for his arithmetical formula [(\forall x)R(x)] —whose Gödel-number is 17Gen\ r —in [Go31], p.26(2). Stronger, since Gödel does not assume his proposition to be intuitively true, but shows that though the arithmetical formula with Gödel-number 17Gen\ r is not provable in his Peano Arithmetic P yet, for any P -numeral [n] , the formula [R(n)] whose Gödel-number is Sb \left(r \begin{array}{c}17 \\ Z(n)\end{array}\right) is P -provable, and therefore meta-mathematically true under any well-defined Tarskian interpretation of P (cf., [An16], Section 3.).

Expressed in computational terms (see [An16], Corollary 8.3), under any well-defined interpretation of P , Gödel’s formula [R(x)] translates as an arithmetical relation, say R'(x) , such that R'(n) is algorithmically verifiable, but not algorithmically computable, as always true over N , since [\neg (\forall x)R(x)] is P -provable ([An16], Corollary 8.2).

We thus argue that a perspective which denies Proposition 1 is based on perceived barriers that reflect, and are peculiar to, only the argument that:

Theorem 2: There is no deterministic algorithm that, for any given n , and any given prime p \geq 2 , will evidence that the probability \mathbb{P}(p\ |\ n) that p divides n is \frac{1}{p} , and the probability \mathbb{P}(p\not|\ n) that p does not divide n is 1 - \frac{1}{p} .

Proof: By a standard result in the Theory of Numbers ([Ste02], Chapter 2, p.9, Theorem 2.1), we cannot define a probability function for the probability that a random n is prime over the probability space (1, 2, 3, \ldots).

(Compare with the informal argument in [HL23], pp.36-37.)

In other words, treating Theorem 2 as an absolute barrier does not admit the possibility—which has consequences for the resolution of outstanding problems in both the theory of numbers and computational complexity—that Proposition 1 is algorithmically verifiable, but not algorithmically computable, as true, since:

Theorem 3: For any given n , there is a deterministic algorithm that, given any prime p \geq 2 , will evidence that the probability \mathbb{P}(p\ |\ n) that p divides n is \frac{1}{p} , and the probability \mathbb{P}(p\not|\ n) that p does not divide n is 1 - \frac{1}{p} .
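The elementary fact behind the \frac{1}{p} in Theorems 2 and 3 can itself be evidenced deterministically, as the sketch below illustrates (it is an illustration only, and not part of the formal argument): for any prime p, exactly one integer in every window of p consecutive integers is divisible by p.

```python
# A deterministic illustration of where the 1/p comes from: in any window of
# p consecutive integers exactly one is divisible by p.

def multiples_in_window(p: int, start: int) -> int:
    """Count the multiples of p among start, start+1, ..., start+p-1."""
    return sum(1 for n in range(start, start + p) if n % p == 0)

if __name__ == "__main__":
    for p in (2, 3, 5, 7, 11):
        counts = {multiples_in_window(p, s) for s in range(1, 1000)}
        print(p, counts)   # always {1}
```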

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

Can Gödel be held responsible for not clearly distinguishing—in his seminal 1931 paper on formally undecidable propositions (pp.596-616, ‘From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931‘, Jean van Heijenoort, Harvard University Press, 1976 printing)—between the implicit circularity that is masked by the non-constructive nature of his proof of undecidability in PM, and the lack of any circularity in his finitary proof of undecidability in Peano Arithmetic?

“The analogy of this argument with the Richard antinomy leaps to the eye. It is closely related to the “Liar” too;[Fn.14] for the undecidable proposition [R (q); q] states that q belongs to K , that is, by (1), that [R (q); q] is not provable. We therefore have before us a proposition that says about itself that it is not provable [in PM].[Fn.15]

[Fn.14] Any epistemological antinomy could be used for a similar proof of the existence of undecidable propositions.”

[Fn.15] Contrary to appearances, such a proposition involves no faulty circularity, for initially it [only] asserts that a certain well-defined formula (namely, the one obtained from the q th formula in the lexicographic order by a certain substitution) is unprovable. Only subsequently (and so to speak by chance) does it turn out that this formula is precisely the one by which the proposition itself was expressed.”

It is a question worth asking, if we heed Abel-Luis Peralta, who is a Graduate in Scientific Calculus and Computer Science in the Faculty of Exact Sciences at the National University of La Plata in Buenos Aires, Argentina; and who has been contending in a number of posts on his Academia web-page that:

(i) Gödel’s semantic definition of ‘[R(n) : n] ‘, and therefore of ‘\neg Bew[R(n) : n] ‘, is not only:

(a) self-referential under interpretation—in the sense of the above quote (pp.597-598, van Heijenoort) from Gödel’s Introduction in his 1931 paper ‘On Formally Undecidable Propositions of Principia Mathematica and Related Systems I’ (pp.596-616, van Heijenoort);

but that:

(b) neither of the definitions can be verified by a deterministic Turing machine as yielding a valid formula of PM.

Peralta is, of course, absolutely right in his contentions.

However, such non-constructiveness is a characteristic of any set-theoretical system in which PM is interpretable; and in which, by Gödel’s self-confessed Platonism (apparent in his footnote #15 in the quote above), we do not need to establish that his definitions of ‘[R(n) : n]’ and ‘\neg Bew[R(n) : n]’ are verifiable by a deterministic Turing machine in order for them to be treated as valid formulas of PM.

Reason: By the usual axiom of separation of any formal set theory such as ZFC in which PM is interpreted, Gödel’s set-theoretical definition (p.598, Heijenoort):

n \in K \equiv \neg Bew[R(n) : n]

lends legitimacy to \neg Bew[R(n) : n] as a PM formula.

Thus Gödel can formally assume—without further proof, by appeal simply to the axiom of choice of ZFC—that the PM formulas with exactly one variable—of the type of natural numbers—can be well-ordered in a sequence in some way such as, for example (Fn.11, p.598, Heijenoort):

“… by increasing the sum of the finite sequences of integers that is the ‘class sign’, and lexicographically for equal sums.”

We cannot, though, conclude from this that:

(ii) Gödel’s formally undecidable P-formula, say [(\forall x)R(x)] —whose Gödel-number is defined as 17Gen\ r in Gödel’s proof of his Theorem VI (on pp.607-609 of van Heijenoort)—also cannot be verified by a deterministic Turing machine to be a valid formula of Gödel’s Peano Arithmetic P.

Reason: The axioms of set-theoretical systems such as PM, ZF, etc. would all admit—under a well-defined interpretation, if any—infinite elements, in the putative domain of any such interpretation, which are not Turing-definable.

Nevertheless, to be fair to two generations of scholars who—apart from those who are able to comfortably wear the logician’s hat—have laboured in attempts to place the philosophical underpinnings of Gödel’s reasoning (in his 1931 paper) in a coherent perspective (see this post; also this and this), I think Gödel must, to some extent, be held responsible—but in no way accountable—for the lack of a clear-cut distinction between the non-constructivity implicit in his semantic proof in (i), and the finitarity that he explicitly ensures for his syntactic proof in (ii).

Reason: Neither in his title, nor elsewhere in his paper, does Gödel categorically state that his goal was:

(iii) not only to demonstrate the existence of formally undecidable propositions in PM, a system which admits non-finitary elements under any putative interpretation;

(iv) but also to prevent the admittance of non-finitary elements—precisely those which would admit conclusions such as (ii)—when demonstrating the existence of formally undecidable propositions in ‘related’ systems such as his Peano Arithmetic P.

He merely hints at this by stating (see quote below from pp.587-589 of van Heijenoort) that his demonstration of (iii) is a ‘sketch’ that lacked the precision which he intended to achieve in (iv):

“Before going into details, we shall first sketch the main idea of the proof, of course without any claim to complete precision. The formulas of a formal system (we restrict ourselves here to the system PM) in outward appearance are finite sequences of primitive signs (variables, logical constants, and parentheses or punctuation dots), and it is easy to state with complete precision which sequences of primitive signs are meaningful formulas and which are not….

by:

(v) weakening the implicit assumption—of the decidability of the semantic truth of PM-propositions under any well-defined interpretation of PM—which underlies his proof of the existence of formally undecidable set-theoretical propositions in PM;

“The method of proof just explained can clearly be applied to any formal system that, first, when interpreted as representing a system of notions and propositions, has at its disposal sufficient means of expression to define the notions occurring in the argument above (in particular, the notion “provable formula”) and in which, second, every provable formula is true in the interpretation considered. The purpose of carrying out the above proof with full precision in what follows is, among other things, to replace the second of the assumptions just mentioned by a purely formal and much weaker one.”

and:

(vi) insisting—in his proof of the existence of formally undecidable arithmetical propositions in his Peano Arithmetic P—upon the introduction of a methodology for constructively assigning unique truth values to only those (primitive recursive) quantified number-theoretic assertions (#1 to #45 on pp.603-606 of van Heijenoort) that are bounded when interpreted over the domain N of the natural numbers (footnote #34 on p.603 of van Heijenoort):

“Wherever one of the signs (x) , (Ex) , or \varepsilon x occurs in the definitions below, it is followed by a bound on x . This bound serves merely to ensure that the notion defined is recursive (see Theorem IV). But in most cases the extension of the notion defined would not change if this bound were omitted.”

From today’s perspective, one could reasonably hold that—as Peralta implicitly contends—Gödel is misleadingly suggesting (in the initial quote above from pp.587-589 of van Heijenoort) that his definitions of ‘[R(n) : n]’ and ‘\neg Bew[R(n) : n]’ may be treated as yielding ‘meaningful’ formulas of PM which are well-definable constructively (in the sense of being definable by a deterministic Turing machine).

In my previous post I detailed precisely why such an assumption would be fragile, by showing how the introduction of the boundedness Gödel insisted upon in (vi) distinguishes:

(vii) Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (pp.598-599 of van Heijenoort), which admits Peralta’s contention (i);

from:

(viii) Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (pp.607-609 of van Heijenoort), which does not admit the corresponding contention (ii).

Moreover, we note that:

(1) Whereas Gödel can—albeit non-constructively—claim that his definition of ‘Bew[R(n) : n]’ yields a formula in PM, we cannot claim, correspondingly, that his number-theoretic relation Bew(x) is a formula of his Peano Arithmetic P.

(2) The latter is a number-theoretic relation defined by Gödel in terms of his primitive recursive relation #45, ‘xBy ‘, as:

#46. Bew(x) \equiv (\exists y)yBx .

(3) In Gödel’s terminology, ‘Bew(x) ‘ translates under interpretation over the domain N of the natural numbers as:

‘x is the Gödel-number of some provable formula [F] of Gödel’s Peano Arithmetic P’.

(4) However, unlike Gödel’s primitive recursive functions and relations #1 to #45, both ‘(\exists y)yBx’ and ‘\neg (\exists y)yBx’ are number-theoretic relations which are not primitive recursive—which means that we cannot assume them to be effectively decidable by a Turing machine under interpretation in N (the sketch following (6) below illustrates why the bound matters).

(5) Reason: Unlike in Gödel’s definitions #1 to #45 (see footnote #34 on p.603 of van Heijenoort, quoted above), there is no bound on the quantifier ‘(\exists y) ‘ in the definition of Bew(x) .

Hence, by Turing’s Halting Theorem, we cannot claim—in the absence of specific proof to the contrary—that there must be some deterministic Turing machine which will determine whether or not, for any given natural number m , the assertion Bew(m) is true under interpretation in N.

This is the crucial difference between Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (which admits Peralta’s contention (i)), and Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (which does not admit his contention (i)).

(6) We cannot, therefore—in the absence of specific proof to the contrary—claim by Gödel’s Theorems V or VII that there must be some P-formula, say [Bew_{_{PA}}(x)] (corresponding to the PM-formula Bew[R(n) : n] ), such that, for any given natural number m :

(a) If Bew(m) is true under interpretation in N, then [Bew_{_{PA}}(m)] is provable in P;

(b) If \neg Bew(m) is true under interpretation in N, then [\neg Bew_{_{PA}}(m)] is provable in P.
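The role of the bound in (4) and (5) above can be made concrete with a small Python sketch (a toy illustration only, not Gödel’s arithmetisation): a bounded existential check always terminates, and so defines a decidable relation, whereas the corresponding unbounded search halts only if a witness happens to exist.

```python
# Why the bound matters: exists_bounded always halts after at most bound+1
# checks, so the relation it defines is effectively decidable; exists_unbounded
# is an open-ended search and, in general, we cannot claim in advance that it
# halts on every input.

def exists_bounded(predicate, bound: int) -> bool:
    """Decides (exists y <= bound) predicate(y)."""
    return any(predicate(y) for y in range(bound + 1))

def exists_unbounded(predicate) -> bool:
    """Searches for (exists y) predicate(y); halts only if a witness exists."""
    y = 0
    while True:
        if predicate(y):
            return True
        y += 1

if __name__ == "__main__":
    print(exists_bounded(lambda y: y * y == 49, 100))   # True
    print(exists_bounded(lambda y: y * y == 50, 100))   # False, and it halts
    # exists_unbounded(lambda y: y * y == 50)           # would never halt
```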

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

A: Is Gödel’s reasoning really kosher?

Many scholars yet harbour a lingering suspicion that Gödel’s definition of his formally undecidable arithmetical proposition [(\forall x)R(x,p)] involves a latent contradiction—arising from a putative, implicit, circular self-reference—that is masked by unverifiable, even if not patently invalid, mathematical reasoning.

The following proof of Gödel’s Theorem VI of his 1931 paper is intended to:

\bullet strip away the usual mathematical jargon that shrouds proofs of Gödel’s argument, and which makes his—admittedly arcane—reasoning difficult for a non-logician to unravel;

and

\bullet show that, and why—unlike in the case of the paradoxical ‘Liar’ sentence: ‘This sentence is a lie’—Gödel’s proposition [(\forall x)R(x, p)] does not involve any circular self-reference that could yield a Liar-like contradiction, either in a formal mathematical language, or when interpreted in any language of common discourse.

B: Gödel’s 45 primitive recursive arithmetic functions and relations

We begin by noting that:

(1) In his 1931 paper on formally ‘undecidable’ arithmetical propositions, Gödel shows that, given a well-defined system of Gödel-numbering, every formula of a first-order Peano Arithmetic such as PA can be assigned a unique Gödel-number; and he defines a primitive recursive relation #23, Form(x), which is true if, and only if, x is the Gödel-number (GN) of a formula of PA.

(2) So, given any natural number n, (1) allows us to decompose n and effectively determine whether, or not, n is the GN of some PA formula (a toy illustration of such an encoding and decoding is sketched at the end of this section).

(3) Gödel also defines a primitive recursive relation #44, Bw(x) , which is true if, and only if, x is the GN of a finite sequence of formulas in PA, each of which is either an axiom, or an immediate consequence of two preceding formulas in the sequence.

(4) So, given any natural number n , (3) allows us to effectively determine whether, or not, the natural number n is the GN of a proof sequence in PA.

(5) Further, Gödel defines a primitive recursive relation #45, xBy , which is true if, and only if, x is the GN of a proof sequence in PA whose last formula has the GN y .

(6) Gödel then defines a primitive recursive relation, say Q(x,y) \equiv xBSUBy , such that, for any m,n :

mBSUBn is true if, and only if, m happens to be a GN that can be decomposed into a proof sequence whose last member is some PA formula [F(n)] , and n happens to be a GN that decomposes into the PA-formula [F(u)] with only one variable [u] .

(7) The essence of Gödel’s Theorem VI lies in answering the question:

Query 1: Is there any natural number n for which mBSUBn is true?
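Before turning to Gödel’s reasoning itself, here is the toy encoding promised in (2) above. It is a deliberately simplified scheme of my own (not Gödel’s actual numbering, and with an arbitrary symbol table), but it shows why a given natural number can be effectively decomposed and tested for being the code of a symbol string.

```python
# A toy Gödel numbering: encode a string of symbols s_1 s_2 ... s_k as
# 2^c_1 * 3^c_2 * 5^c_3 * ..., where c_i is the code of s_i.  Decoding a
# *given* number is a finite, effective procedure, which is all that the
# decomposition in (2) requires.

from math import isqrt

SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5,
           '(': 6, ')': 7, 'x': 8, 'forall': 9, 'not': 10}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes():
    """Unbounded prime generator (trial division)."""
    p = 2
    while True:
        if all(p % d for d in range(2, isqrt(p) + 1)):
            yield p
        p += 1

def godel_number(formula: list[str]) -> int:
    n, gen = 1, primes()
    for sym in formula:
        n *= next(gen) ** SYMBOLS[sym]
    return n

def decode(n: int):
    """Decompose n; return the symbol string, or None if n is not a code."""
    out, gen = [], primes()
    while n > 1:
        p, e = next(gen), 0
        while n % p == 0:
            n //= p
            e += 1
        if e not in DECODE:        # a skipped prime or an unknown exponent
            return None
        out.append(DECODE[e])
    return out

if __name__ == "__main__":
    gn = godel_number(['S', '0', '=', 'S', '0'])
    print(gn, decode(gn))          # 20212500 ['S', '0', '=', 'S', '0']
    print(decode(100))             # None: 100 = 2^2 * 5^2 skips the prime 3
```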

C: Gödel’s reasoning in Peano Arithmetic

(8) Now, by Gödel’s Theorem VII (a standard representation theorem of arithmetic), xBSUBy can be expressed in PA by some (formally well-defined) formula [\neg R(x,y)] such that, for any m,n :

(a) If mBSUBn is true, then [\neg R(m,n)] is PA-provable;

(b) If \neg mBSUBn is true, then [R(m,n)] is PA-provable.

(9) Further, by (6) and (8), for any m,n , if n is the GN of F(x) then:

(a) If mBSUBn is true, then [\neg R(m,n)] is PA-provable; and m is a PA-proof of [F(n)];

(b) If \neg mBSUBn is true, then [R(m,n)] is PA-provable; and m is not a PA-proof of [F(n)].

(10) In his Theorem VI, Gödel then argues as follows:

(a) Let q be the GN of the formula [R(x,y)] defined in (8).

(b) Let p be the GN of [(\forall x)R(x,y)] .

(c) Let r be the GN of [R(x,p)] .

(d) Let 17Gen\ r be the GN of [(\forall x)R(x,p)] .

(11) We note that all the above primitive recursive relations are formally well-defined within the standard primitive recursive arithmetic PRA; and all the PA-formulas—as well as their corresponding Gödel-numbers—are well-defined in the first-order Peano Arithmetic PA.

In other words, as Gödel emphasised in his paper, the 46—i.e., 45 + xBSUBy —PRA functions and relations that he defines are all bounded, and therefore effectively decidable as true or false over the domain N of the natural numbers; whilst the PA-formulas that he defines do not involve any reference—or self-reference—to either the meaning or the truth/falsity of any PA-formulas under an interpretation in N , but only to their PA-provability which, he shows, is effectively decidable by his system of Gödel-numbering and his definition of the primitive recursive relation xBy .

(12) If we now substitute p for n , and [(\forall x)R(x,p)] for [F(n)] , in (9) we have (since p is the GN of [(\forall x)R(x,y)] ) that:

(i) If mBSUBp is true, then [\neg R(m,p)] is PA-provable; whence m is a PA-proof of [(\forall x)R(x,p)];

(ii) If \neg mBSUBp is true, then [R(m,p)] is PA-provable; whence m is not a PA-proof of [(\forall x)R(x,p)] .

Hence n = p answers Query 1 affirmatively.

D: Gödel’s conclusions

(13) Gödel then concludes that, if PA is consistent then:

By (12)(i), if mBSUBp is true for some m, then both [\neg R(m,p)] and [(\forall x)R(x,p)] are PA-provable—a contradiction, since the latter implies, by instantiation in PA, that [R(m,p)] is provable in PA.

Hence [(\forall x)R(x,p)] , whose GN is 17Gen\ r , is not provable in PA if PA is consistent.

(14) Moreover, if PA is assumed to also be \omega-consistent (which means that we cannot have a PA-provable formula [\neg (\forall x)F(x)] such that [F(m)] is also provable in PA for every numeral [m]), then:

By (13), m is not a PA-proof of [(\forall x)R(x,p)] for any given m ; whence [R(m,p)] is PA-provable for any given m by (12)(ii).

Hence [\neg (\forall x)R(x,p)] , whose GN is Neg(17Gen r) , is not provable in PA.

E: Gödel’s [(\forall x)R(x,p)] does not refer to itself

We note that Gödel’s formula [(\forall x)R(x,p)] —whose GN is 17Gen\ r — does not refer to itself since it is defined in terms of the natural number p , and not in terms of the natural number 17Gen\ r .
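A toy computation may make this concrete. The sketch below uses an arbitrary string coding of my own rather than Gödel numbers, but it shows the shape of the construction: the diagonal sentence is built from the number p of the open formula, and its own number plays no part in that construction.

```python
# A toy illustration (not Gödel's construction): the 'diagonal' sentence is
# defined from the code p of the *open* formula, never from its own code g.

def code(s: str) -> int:
    """Any injective coding of strings as integers will do for this sketch."""
    return int.from_bytes(s.encode(), 'big')

open_formula = "for all x: R(x, y)"             # one free variable, y
p = code(open_formula)                          # the code of the open formula
diagonal = open_formula.replace("y", str(p))    # substitute p for y
g = code(diagonal)                              # the code of the diagonal sentence

print(p != g)   # True: g plays no role in how 'diagonal' was constructed
```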

F: Somewhere, far beyond Gödel

The consequences of Gödel’s path-breaking answer to Query 1 are far-reaching (as detailed in this thesis).

For instance, taken together with the proof that PA is categorical with respect to algorithmic computability (Corollary 7.2 of this paper), and that PA is not \omega -consistent (Corollary 8.4 of this paper), the above entails that:

\bullet There can be no interpretation of Gödel’s definition of his formally undecidable arithmetical proposition [(\forall x)R(x,p)] over the domain of the natural numbers—whether expressed mathematically or in any language of common discourse—that could lead to a contradiction;

\bullet Gödel’s [(\forall x)R(x,p)] is not a formally undecidable arithmetical proposition, since [\neg (\forall x)R(x,p)] is PA-provable (see Corollary 8.2 of this paper).

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

In this post I address two critical issues, as raised in private correspondence with researchers, which may illuminate some objections to Gödel’s reasoning and conclusions that have been raised elsewhere by Wittgenstein, Floyd, Putnam et al.:

(i) By Rosser’s reasoning, doesn’t simple consistency suffice for defining an undecidable arithmetical proposition?

(ii) Doesn’t Gödel’s undecidable formula assert its own unprovability?

NOTE: The following correspondence refers copiously to this paper that was presented in June 2015 at the workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, University of Montpellier, France.

Subsequently, most of the cited results were detailed formally in the following paper due to appear in the December 2016 issue of ‘Cognitive Systems Research‘:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Gödelian thesis’.

A: Doesn’t simple consistency suffice for defining Rosser’s undecidable arithmetical proposition?

You claim that the PA system is \omega-inconsistent, and that Gödel’s first theorem holds vacuously. But by Rosser’s result, simple consistency suffices.

Well, it does seem surprising that Rosser’s claim—that his ‘undecidable’ proposition only assumes simple consistency—has not been addressed more extensively in the literature. Number-theoretic expositions of Rosser’s proof have generally remained either implicit or sketchy (see, for instance, this post).

Note that Rosser’s proposition and reasoning involve interpretation of an existential quantifier, whilst Gödel’s proposition and reasoning only involve interpretation of a universal quantifier.

The reason why Rosser’s claim is untenable is that—in order to interpret the existential quantifier as per Hilbert’s \epsilon-calculus—Rosser’s argument needs to assume his Rule C (see Elliott Mendelson, Introduction to Mathematical Logic, 1964 ed., p.73), which implicitly implies that Gödel’s arithmetic P—in which Rosser’s argumentation is grounded—is \omega-consistent .

See, for instance, this analysis of (a) Wang’s outline of Rosser’s argument on p.5, (b) Beth’s outline of Rosser’s argument on p.6, and (c) Mendelson’s exposition of Rosser’s argument in Section 4.2 on p.8.

Moreover, the assumption is foundationally fragile, because Rule C invalidly assumes that we can introduce an ‘unspecified’ formula denoting an ‘unspecified’ numeral into PA even if the formula has not been demonstrated to be algorithmically definable in terms of the alphabet of PA.

See Theorem 8.5 and following remarks in Section 8, pp.7-8 of this paper that was presented in June 2015 at the workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, University of Montpellier, France.

B: As I see it, rule C is only a shortcut.

As I see it, rule C is only a shortcut; it is totally eliminable. Moreover, it is part of predicate logic, not of the Peano’s arithmetic.

Assuming that Rule C is a short cut which can always be eliminated is illusory, and is tantamount to invalidly (see Corollary 8.6, p.17 of the Epsilon 2015 paper) claiming that Hilbert’s \epsilon calculus is a conservative extension of the first-order predicate calculus.

Reason: Application of Rule C invalidly (see Theorem 8.5 and following remarks in Section 8, pp.7-8 of the Epsilon 2015 paper) involves introduction of a new individual constant, say [d], in a first-order theory K (see Mendelson 1964, p.74, I(iv)); ‘invalidly’ since Rule C does not qualify that [d] must be algorithmically computable from the alphabet of K—which is necessary if K is first-order.

Notation: We use square brackets to indicate that the expression within the brackets denotes a well-formed formula of a formal system, say K, that is to be viewed syntactically merely as a first-order string of K—i.e., one which is finitarily constructed from the alphabet of the language of K—without any reference to its meaning under any interpretation of K.

Essentially, Rule C mirrors in K the intuitionistically objectionable postulation that the formula [(\exists x)F(x)] of K can always be interpreted as:

F'(a) holds for some element a

in the domain of the interpretation of K under which the formula [F(x)] interprets as the relation F'(x).

The Epsilon 2015 paper shows that this is not a valid interpretation of the formula [(\exists x)F(x)] under any finitary, evidence-based, interpretation of K.

That, incidentally, is a consequence of the proof that PA is not \omega-consistent; which itself is a consequence of (Theorem 7.1, p.15, of the Epsilon 2015 paper):

Provability Theorem for PA: A PA formula [F(x)] is provable if, and only if, [F(x)] interprets as an arithmetical relation F'(x) that is algorithmically computable as always true (see Definition 3, p.7, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers.

Compare with what Gödel has essentially shown in his famous 1931 paper on formally undecidable arithmetical propositions, which is that (Lemma 8.1, p.16, of the Epsilon 2015 paper):

Gödel: There is a PA formula [R(x, p)]—which Gödel refers to by its Gödel number r—which is not provable in PA, even though [R(x, p)] interprets as an arithmetical relation that is algorithmically verifiable as always true (see Definition 4, p.7, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers.

C: If you by-pass the intuitionist objections, would all logicist and post-formalist theories hold?

If I have understood correctly, you claim that the PA system is \omega-inconsistent from an intuitionistic point of view? If you by-pass the intuitionist objections, would all logicist and post-formalist theories hold?

There is nothing to bypass—the first-order Peano Arithmetic PA is a formal axiomatic system which is \omega-inconsistent as much for an intuitionist, as it is for a realist, a finitist, a formalist, a logicist or a nominalist.

Philosophers may differ about beliefs that are essentially unverifiable; but the \omega-inconsistency of PA is a verifiable logical meta-theorem that none of them would dispute.

D: Isn’t Gödel’s undecidable formula [(\forall x)R(x, p)]—which Gödel refers to by its Gödel number 17Gen\ r—self-referential?

Isn’t Gödel’s undecidable formula [(\forall x)R(x, p)]—which Gödel refers to by its Gödel number 17Gen\ r—self-referential and covertly paradoxical?

According to Wittgenstein it interprets in any model as a sentence that is devoid of sense, or even meaning. I think a good reason for this is that the formula is simply syntactically wrongly formed: the provability of provability is not defined and can not be consistently defined.

What you propose may be correct, but for automation systems of deduction wouldn’t \omega-inconsistency be much more problematic than undecidability?

How would you feel if a syntax rule is proposed, that formulas containing numerals are instantiations of open formulas that may not be part of the canonical language? Too daring, may be?

Let me briefly respond to the interesting points that you have raised.

1. The \omega-inconsistency of PA is a meta-theorem; it is a Corollary of the Provability Theorem of PA (Theorem 7.1, p.15, of the Epsilon 2015 paper).

2. Gödel’s PA-formula [(\forall x)R(x, p)] is not an undecidable formula of PA. It is merely unprovable in PA.

3. Moreover, Gödel’s PA-formula [\neg(\forall x)R(x, p)] is provable in PA, which is why the PA formula [(\forall x)R(x, p)] is not an undecidable formula of PA.

4. Gödel’s PA-formula [(\forall x)R(x, p)] is not self-referential.

5. Wittgenstein correctly believed—albeit purely on the basis of philosophical considerations unrelated to whether or not Gödel’s formal reasoning was correct—that Gödel was wrong in stating that the PA formula [(\forall x)R(x, p)] asserts its own unprovability in PA.

Reason: We have for Gödel’s primitive recursive relation Q(x, y) that:

Q(x, p) is true if, and only if, the PA formula [R(x, p)] is provable in PA.

However, in order to conclude that the PA formula [(\forall x)R(x, p)] asserts its own unprovability in PA, Gödel’s argument must further imply—which it does not—that:

(\forall x)Q(x, p) is true (and so, by Gödel’s definition of Q(x, y), the PA formula [(\forall x)R(x, p)] is not provable in PA) if, and only if, the PA formula [(\forall x)R(x, p)] is provable in PA.

In other words, for the PA formula [(\forall x)R(x, p)] to assert its own unprovability in PA, Gödel must show—which his own argument shows is impossible, since the PA formula [(\forall x)R(x, p)] is not provable in PA—that:

The primitive recursive relation Q(x, p) is algorithmically computable as always true if, and only if, the arithmetical relation R'(x, p) is algorithmically computable as always true (where R'(x, p) is the arithmetical interpretation of the PA formula [R(x, p)] over the structure \mathbb{N} of the natural numbers).

6. Hence, Gödel’s PA-formula [(\forall x)R(x, p)] is not covertly paradoxical.

7. IF Wittgenstein believed that the PA formula [(\forall x)R(x, p)] is empty of meaning and has no valid interpretation, then he was wrong, and—as Gödel justifiably believed—he could not have properly grasped Gödel’s formal reasoning that:

(i) ‘17Gen\ r is not \kappa-provable’ is a valid meta-theorem if PA is consistent, which means that:

‘If PA is consistent and we assume that the PA formula [(\forall x)R(x, p)] is provable in PA, then the PA formula [\neg(\forall x)R(x, p)] must also be provable in PA; from which we may conclude that the PA formula [(\forall x)R(x, p)] is not provable in PA’

(ii) ‘Neg(17Gen\ r) is not \kappa-provable’ is a valid meta-theorem ONLY if PA is \omega-consistent, which means that:

‘If PA is \omega-consistent and we assume that the PA formula [\neg(\forall x)R(x, p)] is provable in PA, then the PA formula [(\forall x)R(x, p)] must also be provable in PA; from which we may conclude that the PA formula [\neg(\forall x)R(x, p)] is not provable in PA’.

8. In fact the PA formula [(\forall x)R(x, p)] has the following TWO meaningful interpretations (the first of which is a true arithmetical meta-statement—since the PA formula [R(n, p)] is provable in PA for any PA-numeral [n]—but the second is not—since the PA formula [(\forall x)R(x, p)] is not provable in PA):

(i) For any given natural number n, there is an algorithm which will verify that each of the arithmetical meta-statements ‘R'(1, p) is true’, ‘R'(2, p) is true’, …, ‘R'(n, p) is true’ holds under the standard, algorithmically verifiable, interpretation \mathbb{M} of PA (see \S 5, p.11 of the Epsilon 2015 paper);

(ii) There is an algorithm which will verify that, for any given natural number n, the arithmetical statement ‘R'(n, p) is true’ holds under the finitary, algorithmically computable, interpretation \mathbb{B} of PA (see \S 6, p.13 of the Epsilon 2015 paper).
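Schematically, the difference between (i) and (ii) is one of quantifier order: in (i) the verifying algorithm is allowed to depend on the given n, whereas in (ii) one fixed algorithm must serve for every n. The Python fragment below is purely schematic, with a trivially decidable stand-in for ‘R'(k, p) is true’; it is only meant to display that difference in shape.

```python
# Schematic only: R_stand_in replaces the arithmetical statement "R'(k, p) is
# true", which is not itself something this sketch can decide.

from typing import Callable

def R_stand_in(k: int) -> bool:
    return k >= 1

# Reading (i): for any *given* n we may produce an algorithm, possibly a
# different one for each n, that verifies R'(1, p), ..., R'(n, p).
def verifier_for(n: int) -> Callable[[], bool]:
    return lambda: all(R_stand_in(k) for k in range(1, n + 1))

# Reading (ii): a *single* algorithm that, for any given n, verifies R'(n, p).
def single_verifier(n: int) -> bool:
    return R_stand_in(n)

if __name__ == "__main__":
    print(verifier_for(100)())                            # evidence for the first 100 instances
    print(all(single_verifier(n) for n in range(1, 101)))
```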

9. IF Wittgenstein believed that the PA formula [(\forall x)R(x, p)] is not a well-defined PA formula, then he was wrong.

Gödel’s definition of the PA formula [(\forall x)R(x, p)] yields a well-formed formula in PA, and cannot be treated as ‘syntactically wrongly formed’.

10. The Provability Theorem for PA shows that both ‘proving something in PA’ and ‘proving that something is provable in PA’ are finitarily well-defined meta-mathematical concepts.

11. The Provability Theorem for PA implies that PA is complete with respect to the concepts of satisfaction, truth and provability definable in automated deduction systems, which can only define algorithmically computable truth.

12. The Provability Theorem for PA implies that PA is categorical, so you can introduce your proposed syntax rule ONLY if it leads to a conservative extension of PA.

13. Whether ‘daring’ or not, why would you want to introduce such a rule?

E: Consider these two statements of yours …

Consider these two statements of yours:

“(iv): p is the Gödel-number of the formula [(\forall x)R(x, y)] of PA” and

“D(4): Gödel’s PA-formula [(\forall x)R(x, p)] is not self-referential.”

If ‘p‘ is the Gödel-number of the open formula in para (iv), and the second argument of the closed formula R in para D(4) is ‘p‘, then the second formula is obtained by instantiating the variable ‘y‘ in the first with its own Gödel-number.

So how would you call, in one word, the relation between the entire formula (in D(4)) and its second argument?

Para D(4) is an attempt to clarify precisely this point.

1. Apropos the first statement ‘(iv)’ cited by you:

From a pedantic perspective, the “relation between the entire formula (in D(4)) and its second argument” cannot be termed self-referential because the “second argument”, i.e., p, is the Gödel-number of the PA formula [(\forall x)R(x, y)], and not that of “the entire formula (in 4)”, i.e., of the formula [(\forall x)R(x, p)] itself (whose Gödel number is 17Gen\ r).

Putting it crudely, 17Gen\ r is neither self-referential—nor circularly defined—because it is not defined in terms of 17Gen\ r, but in terms of p.

2. Apropos the second statement ‘D(4)’ cited by you:

I would interpret:

Gödel’s PA-formula [(\forall x)R(x, p)] is self-referential

to mean, in this particular context, that—as Gödel wrongly claimed:

[(\forall x)R(x, p)] asserts its own unprovability in PA.

Now, if we were to accept the claim that [(\forall x)R(x, p)] is self-referential in the above sense, then (as various critics of Gödel’s reasoning have pointed out) we would have to conclude further that Gödel’s argument leads to the contradiction:

(\forall x)Q(x, p) is true—and so, by Gödel’s definition of Q(x, y)—the PA formula [(\forall x)R(x, p)] is not provable in PA—if, and only if, the PA formula [(\forall x)R(x, p)] is provable in PA.

However, in view of the Provability Theorem of PA (Theorem 7.1, p.15, of the Epsilon 2015 paper), this contradiction would only follow if Gödel’s argument were to establish (which it does not) that:

The primitive recursive relation Q(x, p) is algorithmically computable as always true if, and only if, the arithmetical interpretation R'(x, p) of the PA formula [R(x, p)] is algorithmically computable as always true over the structure \mathbb{N} of the natural numbers.

The reason Gödel cannot claim to have established the above is that his argument only proves the much weaker meta-statement:

The arithmetical interpretation R'(x, p) of the PA formula [R(x, p)] is algorithmically verifiable as always true over the structure \mathbb{N} of the natural numbers.

Ergo—contrary to Gödel’s claim— Gödel’s PA-formula [(\forall x)R(x, p)] is not self-referential (and so, even though Gödel’s claimed interpretation of what his own reasoning proves is wrong, there is no paradox in Gödel’s reasoning per se)!

F: Is the PA system \omega-inconsistent without remedy?

Is the PA system \omega-inconsistent without remedy? Is it possible to introduce a new axiom or new rule which by-passes the problematic unprovable statements of the Gödel-Rosser Theorems?

1. Please note that the first-order Peano Arithmetic PA is:

(i) consistent (Theorem 7.3, p.15, of the Epsilon 2015 paper); which means that for any PA-formula [A], we cannot have that both [A] and [\neg A] are Theorems of PA;

(ii) complete (Theorem 7.1, p.15, of the Epsilon 2015 paper); which means that we cannot add an axiom to PA which is not a Theorem of PA without inviting inconsistency;

(iii) categorical (Theorem 7.2, p.15, of the Epsilon 2015 paper); which means that if \mathbb{M} is an interpretation of PA over a structure \mathbb{S}, and \mathbb{B} is an interpretation of PA over a structure \mathbb{T}, then \mathbb{S} and \mathbb{T} are identical and denote the structure \mathbb{N} of the natural numbers defined by Dedekind’s axioms; and so PA has no model which contains an element that is not a natural number (see Footnote 54, p.16, of the Epsilon 2015 paper).

2. What this means with respect to Gödel’s reasoning is that:

(i) PA has no undecidable propositions, which is why it is not \omega-consistent (Corollary 8.4, p.16, of the Epsilon 2015 paper);

(ii) The Gödel formula [(\forall x)R(x, p)] is not provable in PA; but it is algorithmically verifiable as true (Corollary 8.3, p.16, of the Epsilon 2015 paper) under the algorithmically verifiable standard interpretation \mathbb{M} of PA (see Section 5, p.11, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers;

(iii) The Gödel formula [(\forall x)R(x, p)] is not provable in PA; and it is algorithmically computable as false (Corollary 8.3, p.16, of the Epsilon 2015 paper) under the algorithmically computable finitary interpretation \mathbb{B} of PA (see Section 6, p.13, of the Epsilon 2015 paper) over the structure \mathbb{N} of the natural numbers;

(iv) The Gödel formula [\neg(\forall x)R(x, p)] is provable in PA; and it is therefore also algorithmically verifiable as true under the algorithmically verifiable standard interpretation \mathbb{M} of PA over the structure \mathbb{N} of the natural numbers—which means that the logic by which the standard interpretation of PA assigns values of ‘satisfaction’ and ‘truth’ to the formulas of PA (under Tarski’s definitions) may be paraconsistent (see http://plato.stanford.edu/entries/logic-paraconsistent) since PA is consistent;

(v) The Gödel formula [\neg(\forall x)R(x, p)] is provable in PA; and it is therefore algorithmically computable as true (Corollary 8.2, p.16, of the Epsilon 2015 paper) under the algorithmically computable finitary interpretation \mathbb{B} of PA over the structure \mathbb{N} of the natural numbers.

3. It also means that:

(a) The “Gödel-Rosser Theorem” is not a Theorem of PA;

(b) The “unprovable Gödel sentence” is not a “problematic statement”;

(c) The “PA system” does not require a “remedy” just because it is “\omega-inconsistent”;

(d) No “new axiom or new rule” can “by-pass the unprovable sentence”.

4. Which raises the question:

Why do you see the “unprovable Gödel sentence” as a “problematic statement” that requires a “remedy” which must “by-pass the unprovable sentence”?

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

Ferguson’s and Priest’s thesis

In a brief, but provocative, review of what they term as “the enduring evolution of logic” over the ages, the authors of Oxford University Press’ recently released ‘A Dictionary of Logic‘, philosophers Thomas Macaulay Ferguson and Graham Priest, take to task what they view as a Kant-influenced manner in which logic is taught as a first course in most places in the world:

“… as usually ahistorical and somewhat dogmatic. This is what logic is; just learn the rules. It is as if Frege had brought down the tablets from Mount Sinai: the result is God-given, fixed, and unquestionable.”

Ferguson and Priest conclude their review by remarking that:

“Logic provides a theory, or set of theories, about what follows from what, and why. And like any theoretical inquiry, it has evolved, and will continue to do so. It will surely produce theories of greater depth, scope, subtlety, refinement—and maybe even truth.”

However, it is not obvious whether that is prescient optimism, or a tongue-in-cheek exit line!

A nineteenth century parody of the struggle to define ‘truth’ objectively

For, if anything, the developments in logic since around 1931 have—seemingly in gross violation of the hallowed principle of Ockham’s razor, and its crude, but highly effective, modern avatar KISS—indeed produced a plethora of theories of great depth, scope, subtlety, and refinement.

These, however, seem to have more in common with the cynical twentieth-century emphasis on subjective, unverifiable ‘truth’ than with the concept of an objective, evidence-based ‘truth’ that centuries of philosophers and mathematicians strenuously struggled to differentiate and express.

A struggle reflected so eloquently in this nineteenth century quote:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

… Lewis Carroll (Charles L. Dodgson), ‘Through the Looking-Glass’, chapter 6, p. 205 (1934 ed.). First published in 1872.

Making sense of mathematical propositions about infinite processes

It was, indeed, an epic struggle which culminated in the nineteenth century standards of rigour successfully imposed—in no small measure by the works of Augustin-Louis Cauchy and Karl Weierstrass—on verifiable interpretations of mathematical propositions about infinite processes involving real numbers.

A struggle, moreover, which should have culminated equally successfully in similar twentieth century standards—on verifiable interpretations of mathematical propositions containing references to infinite computations involving integers—sought to be imposed in 1936 by Alan Turing upon philosophical and mathematical discourse.

The Liar paradox

For it follows from Turing’s 1936 reasoning that where quantification is not, or cannot be, explicitly defined in formal logical terms—e.g., the classical expression of the Liar paradox as ‘This sentence is a lie’—a paradox cannot per se be considered as posing serious linguistic or philosophical concerns (see, for instance, the series of four posts beginning here).

Of course—as reflected implicitly in Kurt Gödel’s seminal 1931 paper on undecidable arithmetical propositions—it would be a matter of serious concern if the word ‘This’ in the English language sentence, ‘This sentence is a lie’, could be validly viewed as implicitly implying that:

(i) there is a constructive infinite enumeration of English language sentences;

(ii) to each of which a truth-value can be constructively assigned by the rules of a two-valued logic; and,

(iii) in which ‘This’ refers uniquely to a particular sentence in the enumeration.

Gödel’s influence on Turing’s reasoning

However, Turing’s constructive perspective had the misfortune of being subverted by a knee-jerk, anti-establishment, culture that was—and apparently remains to this day—overwhelmed by Gödel’s powerful Platonic—and essentially unverifiable—mathematical and philosophical 1931 interpretation of his own construction of an arithmetical proposition that is formally unprovable, but undeniably true under any definition of ‘truth’ in any interpretation of arithmetic over the natural numbers.

Otherwise, I believe that Turing could easily have provided the necessary constructive interpretations of arithmetical truth—sought by David Hilbert for establishing the consistency of number theory finitarily—which is addressed by the following paper due to appear in the December 2016 issue of ‘Cognitive Systems Research‘:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Gödelian thesis’.

What is logic: using Ockham’s razor

Moreover, the paper endorses the implicit orthodoxy of an Ockham’s razor influenced perspective—which Ferguson and Priest seemingly find wanting—that logic is simply a deterministic set of rules that must constructively assign the truth values of ‘truth/falsity’ to the sentences of a language.

It is a view that I expressed earlier as the key to a possible resolution of the EPR paradox in the following paper that I presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, Istanbul, Turkey:

Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?

where I introduced the definition:

A finite set \lambda of rules is a Logic of a formal mathematical language \mathcal{L} if, and only if, \lambda constructively assigns unique truth-values:

(a) Of provability/unprovability to the formulas of \mathcal{L}; and

(b) Of truth/falsity to the sentences of the Theory T(\mathcal{U}) which is defined semantically by the \lambda-interpretation of \mathcal{L} over a structure \mathcal{U}.
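For a propositional fragment, such a finite set of rules can be written down in a few lines. The sketch below is illustrative only, and falls far short of the quantified language \mathcal{L} of the definition, but it shows what a constructive, rule-based assignment of unique truth values looks like relative to a structure that fixes the values of the atoms.

```python
# A toy rule set for a propositional fragment: truth values are assigned
# constructively, by a fixed finite set of clauses, relative to a structure
# (here just an assignment of values to the atoms).

def truth_value(sentence, structure: dict) -> bool:
    if isinstance(sentence, str):                    # atomic sentence
        return structure[sentence]
    op = sentence[0]
    if op == 'not':
        return not truth_value(sentence[1], structure)
    if op == 'and':
        return truth_value(sentence[1], structure) and truth_value(sentence[2], structure)
    if op == 'or':
        return truth_value(sentence[1], structure) or truth_value(sentence[2], structure)
    raise ValueError("no rule for " + repr(op))      # the rule set is finite and fixed

if __name__ == "__main__":
    U = {'p': True, 'q': False}                      # the structure
    print(truth_value(('and', 'p', ('not', 'q')), U))   # True
    print(truth_value(('or', 'q', ('not', 'p')), U))    # False
```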

I showed there that such a definitional rule-based approach to ‘logic’ and ‘truth’ allows us to:

\bullet Equate the provable formulas of the first order Peano Arithmetic PA with the PA formulas that can be evidenced as `true’ under an algorithmically computable interpretation of PA over the structure \mathbb{N} of the natural numbers;

\bullet Adequately represent some of the philosophically troubling abstractions of the physical sciences mathematically;

\bullet Interpret such representations unambiguously; and

\bullet Conclude further:

\bullet First that the concept of infinity is an emergent feature of any mechanical intelligence whose true arithmetical propositions are provable in the first-order Peano Arithmetic; and

\bullet Second that discovery and formulation of the laws of quantum physics lies within the algorithmically computable logic and reasoning of a mechanical intelligence whose logic is circumscribed by the first-order Peano Arithmetic.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought

Christopher Mole is an associate professor of philosophy at the University of British Columbia, Vancouver. He is the author of Attention is Cognitive Unison: An Essay in Philosophical Psychology (OUP, 2011), and The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought (Routledge, 2016).

In his preface to The Unexplained Intellect, Mole emphasises that his book is an attempt to provide arguments for (amongst others) the three theses that:

(i) “Intelligence might become explicable if we treat intelligent thought as if it were some sort of computation”;

(ii) “The importance of the rapport between an organism and its environment must \ldots be understood from a broadly computational perspective”;

(iii) “\ldots our difficulties in accounting for our psychological orientation with respect to time are indications of the need to shift our philosophical focus away from mental states—which are altogether too static—and towards a theory of the mind in which it is dynamic mental entities that are taken to be metaphysically foundational”.

The Brains blog

Mole explains at length his main claims in The Unexplained Intellect—and the cause that those claims serve—in a lucid and penetrating, VI-part, series of invited posts in The Brains blog (a leading forum for work in the philosophy and science of mind that was founded in 2005 by Gualtiero Piccinini, and has been administered by John Schwenkler since late 2011).

In these posts, Mole seeks to make the following points.

I: The Unexplained Intellect: The mind is not a hoard of sentences

We do not currently have a satisfactory account of how minds could be had by material creatures. If such an account is to be given then every mental phenomenon will need to find a place within it. Many will be accounted for by relating them to other things that are mental, but there must come a point at which we break out of the mental domain, and account for some things that are mental by reference to some that are not. It is unclear where this break out point will be. In that sense it is unclear which mental entities are, metaphysically speaking, the most fundamental.

At some point in the twentieth century, philosophers fell into the habit of writing as if the most fundamental things in the mental domain are mental states (where these are thought of as states having objective features of the world as their truth-evaluable contents). This led to a picture in which the mind was regarded as something like a hoard of sentences. The philosophers and cognitive scientists who have operated with this picture have taken their job to be telling us what sort of content these mental sentences have, how that content is structured, how the sentences come to have it, how they get put into and taken out of storage, how they interact with one another, how they influence behaviour, and so on.

This emphasis on states has caused us to underestimate the importance of non-static mental entities, such as inferences, actions, and encounters with the world. If we take these dynamic entities to be among the most fundamental of the items in the mental domain, then — I argue — we can avoid a number of philosophical problems. Most importantly, we can avoid a picture in which intelligent thought would be beyond the capacities of any physically implementable system.

II: The Unexplained Intellect: Computation and the explanation of intelligence

A lot of philosophers think that consciousness is what makes the mind/body problem interesting, perhaps because they think that consciousness is the only part of that problem that remains wholly philosophical. Other aspects of the mind are taken to be explicable by scientific means, even if explanatorily adequate theories of them remain to be specified.

\ldots I’ll remind the reader of computability theory’s power, with a view to indicating how it is that the discoveries of theoretical computer scientists place constraints on our understanding of what intelligence is, and of how it is possible.

III: The Unexplained Intellect: The importance of computability

If we found that we had been conceiving of intelligence in such a way that intelligence could not be modelled by a Turing Machine, our response should not be to conclude that some alternative must be found to a ‘Classically Computational Theory of the Mind’. To think only that would be to underestimate the scope of the theory of computability. We should instead conclude that, on the conception in question, intelligence would (be) absolutely inexplicable. This need to avoid making intelligence inexplicable places constraints on our conception of what intelligence is.

IV: The Unexplained Intellect: Consequences of imperfection

The lesson to be drawn is that, if we think of intelligence as involving the maintenance of satisfiable beliefs, and if we think of our beliefs as corresponding to a set of representational states, then our intelligence would depend on a run of good luck the chances of which are unknown.

My suggestion is that we can reach a more explanatorily satisfactory conception of intelligence if we adopt a dynamic picture of the mind’s metaphysical foundations.

V: The Unexplained Intellect: The importance of rapport

I suggest that something roughly similar is true of us. We are not guaranteed to have satisfiable beliefs, and sometimes we are rather bad at avoiding unsatisfiability, but such intelligence as we have is to be explained by reference to the rapport between our minds and the world.

Rather than starting from a set of belief states, and then supposing that there is some internal process operating on these states that enables us to update our beliefs rationally, we should start out by accounting for the dynamic processes through which the world is epistemically encountered. Much as the three-colourable map generator reliably produces three-colourable maps because it is essential to his map-making procedure that borders appear only where they will allow for three colorability, so it is essential to what it is for a state to be a belief that beliefs will appear only if there is some rapport between the believer and the world. And this rapport — rather than any internal processing considered in isolation from it — can explain the tendency for our beliefs to respect the demands of intelligence.

VI: The Unexplained Intellect: The mind’s dynamic foundations

\ldots memory is essentially a form of epistemic retentiveness: One’s present knowledge counts as an instance of memory when and only when it was attained on the basis of an epistemic encounter that lies in one’s past. One can epistemically encounter a proposition as the conclusion of an argument, and so can encounter it before the occurrence of any event to which it pertains, but one cannot encounter an event in that way. In the resulting explanation of memory’s temporal asymmetry, it is the dynamic events of epistemic encountering to which we must make reference. These encounters, and not the knowledge states to which they lead, do the lion’s share of the explanatory work.

A. Simplifying Mole’s perspective

It may help simplify Mole’s thought-provoking perspective if we make an arbitrary distinction between:

(i) The mind of an applied scientist, whose primary concern is our sensory observations of a ‘common’ external world;

(ii) The mind of a philosopher, whose primary concern is abstracting a coherent perspective of the external world from our sensory observations; and

(iii) The mind of a mathematician, whose primary concern is adequately expressing such abstractions in a formal language of unambiguous communication.

My understanding of Mole’s thesis, then, is that:

(a) although a mathematician’s mind may be capable of defining the ‘truth’ value of some logical and mathematical propositions without reference to the external world,

(b) the ‘truth’ value of any logical or mathematical proposition that purports to represent any aspect of the real world must be capable of being evidenced objectively to the mind of an applied scientist; and that,

(c) of the latter ‘truths’, what should interest the mind of a philosopher is whether there are some that are ‘knowable’ completely independently of the passage of time, and some that are ‘knowable’ only partially, or incrementally, with the passage of time.

B. Support for Mole’s thesis

It also seems to me that Mole’s thesis implicitly subsumes, or at the very least echoes, the belief expressed by Chetan R. Murthy (‘An Evaluation Semantics for Classical Proofs‘, Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, 1991; also Cornell TR 91-1213):

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …”

If so, the thesis seems significantly supported by the following paper that is due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis

The CSR paper implicitly suggests that there are, indeed, (only?) two ways of assigning ‘true’ or ‘false’ values to any mathematical description of real-world events.

C. Algorithmic computability

First, a number theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL_{F} that can provide objective evidence (cf. ibid Murthy 91) for deciding the truth/falsity of each proposition in the denumerable sequence \{F(1), F(2), \ldots\}.

(We note that the concept of `algorithmic computability’ is essentially an expression of the more rigorously defined concept of `realizability’ on p.503 of Stephen Cole Kleene’s ‘Introduction to Metamathematics‘, North Holland Publishing Company, Amsterdam.)

D. Algorithmic verifiability

Second, a number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL_{(F,\ n)} which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence \{F(1), F(2), \ldots, F(n)\}.

We note that algorithmic computability implies the existence of a single algorithm that can finitarily decide the truth/falsity of every proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability implies no such algorithm.

The following theorem (Theorem 2.1, p.37 of the CSR paper) shows that although every algorithmically computable relation is algorithmically verifiable, the converse is not true:

Theorem: There are number theoretic functions that are algorithmically verifiable but not algorithmically computable.
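The contrast can be illustrated with a small sketch in Python (not drawn from the CSR paper; the helper name and sample values are mine): for each n some decider of the first n instances exists, if only as a lookup table over those finitely many well-defined truth values, whereas algorithmic computability demands one decider that works for every n at once, which is impossible when the full sequence is not computable (for example, a relation encoding the halting problem).

```python
# Minimal illustrative sketch (not the CSR paper's construction).
# Algorithmic verifiability: for each n, SOME decider of F(1), ..., F(n) exists;
# a finite lookup table suffices, since every finite set of values is recursive.
# Algorithmic computability would require ONE decider working for every n,
# which fails when the full sequence F(1), F(2), ... is not computable.

def make_prefix_decider(prefix):
    """Return a decider AL_{(F, n)} for the first n = len(prefix) instances."""
    table = {i: value for i, value in enumerate(prefix, start=1)}
    def AL_F_n(i):
        return table[i]          # evidence for the truth/falsity of F(i), i <= n
    return AL_F_n

# Hypothetical truth values F(1), F(2), F(3) of some well-defined relation:
AL_F_3 = make_prefix_decider([True, False, True])
assert AL_F_3(2) is False        # each instance up to n = 3 is decided
```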

E. The significance of algorithmic ‘truth’ assignments for Mole’s theses

The significance of such algorithmic ‘truth’ assignments for Mole’s theses is that:

Algorithmic computability—reflecting the ambit of classical Newtonian mechanics—characterises natural phenomena that are determinate and predictable.

Such phenomena are describable by mathematical propositions that can be termed as ‘knowable completely’, since at any point of time they are algorithmically computable as ‘true’ or ‘false’.

Hence both their past and future behaviour is completely computable, and their ‘truth’ values are therefore ‘knowable’ independent of the passage of time.

Algorithmic verifiability—reflecting the ambit of Quantum mechanics—characterises natural phenomena that are determinate but unpredictable.

Such phenomena are describable by mathematical propositions that can only be termed as ‘knowable incompletely’, since at any point of time they are only algorithmically verifiable, but not algorithmically computable, as ‘true’ or ‘false’.

Hence, although their past behaviour is completely computable, their future behaviour is not completely predictable, and their ‘truth’ values are not independent of the passage of time.

F. Where Mole’s implicit faith in the adequacy of set theoretical representations of natural phenomena may be misplaced

It also seems to me that, although Mole’s analysis justifiably holds that the:

“\ldots importance of the rapport between an organism and its environment”

has been underacknowledged, or even overlooked, by existing theories of the mind and intelligence, it does not appear to question, and so does not ascribe such underacknowledgement to any lacuna in, the mathematical and epistemic foundations of the formal language in which almost all descriptions of real-world events are currently sought to be expressed: the language of the set theory ZF.

G. Any claim to a physically manifestable ‘truth’ must be objectively accountable

Now, so far as applied science is concerned, history teaches us that the ‘truth’ of any mathematical proposition that purports to represent any aspect of the external world must be capable of being evidenced objectively; and that such ‘truths’ must not be only of a subjective and/or revelationary nature which may require truth-certification by evolutionarily selected prophets.

(Not necessarily religious—see, for instance, Melvyn B. Nathanson’s remarks, “Desperately Seeking Mathematical Truth“, in the Opinion piece in the August 2008 Notices of the American Mathematical Society, Vol. 55, Issue 7.)

The broader significance of seeking objective accountability is that it admits the following (admittedly iconoclastic) distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA as the language of science; and

2. The first-order Set Theory ZF as the language of science fiction.

It is a distinction that is faintly reflected in Stephen G. Simpson’s more conservative perspective in his paper ‘Partial Realizations of Hilbert’s Program‘ (#6.4, p.15):

“Finitistic reasoning (my read: ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (my read: ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.”

The distinction is supported by the formal argument (detailed in the above-cited CSR paper) that:

(i) PA has two, hitherto unsuspected, evidence-based interpretations, the first of which can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions; and the second can be treated as circumscribing the ambit of mechanistic reasoning about ‘true’ arithmetical propositions.

What this means is that the language of arithmetic—formally expressed as PA—can provide all the foundational needs for all practical applications of mathematics in the physical sciences. This was the point that I sought to make—in a limited way, with respect to quantum phenomena—in the following paper presented at Unilog 2015, Istanbul, last year:

Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?

(Presented on 26th June at the workshop on ‘Emergent Computational Logics’ at UNILOG’2015, 5th World Congress and School on Universal Logic, 20th June 2015 – 30th June 2015, Istanbul, Turkey.)

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see Theorem 1 in \S4 of this post), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about ‘true’ set-theoretical propositions, or that of mechanistic reasoning about ‘true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that—although of possible interest to philosophers of science—are only mentally conceivable by mathematicians subjectively, and have no verifiable physical counterparts or immediately practical applications that can materially impact the study of physical phenomena.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

H. The importance of Mole’s ‘rapport’

Accordingly, I see it as axiomatic that the relationship between an evidence-based mathematical language and the physical phenomena that it purports to describe, must be in what Mole terms as ‘rapport’, if we view mathematics as a set of linguistic tools that have evolved:

(a) to adequately abstract and precisely express through human reasoning our observations of physical phenomena in the world in which we live and work; and

(b) to unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning, in order to function to the maximum of our co-operative potential in achieving a better understanding of physical phenomena.

This is the perspective that I sought to present in the following paper at Epsilon 2015, Montpellier, last June, where I argue against the introduction of ‘unspecifiable’ elements (such as completed infinities) into either a formal language or any of its evidence-based interpretations (in support of the argument that, since a completed infinity cannot be evidence-based, it must be dispensable in any purported description of reality):

‘Why Hilbert’s and Brouwer’s interpretations of quantification are complementary and not contradictory.’

(Presented on 10th June at the Epsilon 2015 workshop on ‘Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics’, 10th June 2015 – 12th June 2015, University of Montpellier, France.)

I. Why mathematical reasoning must reflect an ‘agnostic’ perspective

Moreover, from a non-mathematician’s perspective, a Propertarian like Curt Doolittle would seem justified in his critique (comment of June 2, 2016 in this Quanta review) of the seemingly ‘mystical’ and ‘irrelevant’ direction in which conventional interpretations of Hilbert’s ‘theistic’ and Brouwer’s ‘atheistic’ reasoning appear to have pointed mainstream mathematics. For, as I argue informally in an earlier post, the ‘truths’ of any mathematical reasoning must reflect an ‘agnostic’ perspective.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

In a recent paper A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory, authors Adam Yedidia and Scott Aaronson argue upfront in their Introduction that:

Like any axiomatic system capable of encoding arithmetic, ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency. Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.

The question arises:

How reasonable is it to build modern mathematics on top of a Set Theory such as ZF?

Some immediate points to ponder upon (see also reservations expressed by Stephen G. Simpson in Logic and Mathematics and in Partial Realizations of Hilbert’s Program):

1. “Like any axiomatic system capable of encoding arithmetic, …”

The implicit assumption here, that every ZF formula which is provable about the finite ZF ordinals must necessarily interpret as a true proposition about the natural numbers, is fragile: without such an assumption we can only conclude from Goodstein’s argument (see Theorem 1.1 here) that a Goodstein sequence defined over the finite ZF ordinals must terminate even if the corresponding Goodstein sequence over the natural numbers does not terminate!
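For readers unfamiliar with Goodstein sequences, the following Python sketch (with names of my own choosing) merely generates the sequence: write the current term in hereditary base-b notation, replace every occurrence of b by b+1, and subtract 1. It illustrates how the sequence is generated, not the termination question at issue above.

```python
# Minimal sketch of Goodstein-sequence generation (function names are mine).

def bump(n, b):
    """Rewrite n from hereditary base-b notation with every b replaced by b+1."""
    if n == 0:
        return 0
    e = 0
    while b ** (e + 1) <= n:     # largest exponent e with b**e <= n
        e += 1
    c, r = divmod(n, b ** e)     # n = c * b**e + r, with c < b
    return c * (b + 1) ** bump(e, b) + bump(r, b)

def goodstein(n, steps=10):
    """First few terms of the Goodstein sequence starting at n."""
    b, terms = 2, [n]
    for _ in range(steps):
        if n == 0:
            break
        n = bump(n, b) - 1       # bump the base, then subtract 1
        b += 1
        terms.append(n)
    return terms

print(goodstein(3))              # [3, 3, 3, 2, 1, 0] -- terminates quickly
```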

2. “ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency.”

The implicit assumption here is that ZF is \omega-consistent, which implies that ZF is consistent and must therefore have an interpretation over some mathematically definable structure in which ZF theorems interpret as ‘true’.

The question arises: Must such ‘truth’ be capable of being evidenced objectively, or is it only of a subjective, revelationary, nature (which may require truth-certification by evolutionarily selected prophets—see Nathanson’s remarks as cited in this post)?

The significance of seeking objective accountability is that in a paper, “The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis“, which is due to appear in the December 2016 issue of Cognitive Systems Research, we show (see also this post) that the first-order Peano Arithmetic PA:

(i) is finitarily consistent; but

(ii) is not \omega-consistent; and

(iii) has no ‘undecidable’ arithmetical proposition (whence both of Gödel’s Incompleteness Theorems hold vacuously so far as the arithmetic of the natural numbers is concerned).

3. “Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.”

Now, one justification for such an assumption (without which it may be difficult to justify building modern mathematics on top of ZF) could be the belief that acquisition of set-theoretical knowledge by students of mathematics has some essential educational dimension.

If so, one should take into account not only the motivations of such a student for the learning of mathematics, but also those of a mathematician for teaching it.

This, in turn, means that both the content of the mathematics which is to be learnt (or taught), as well as the putative utility of such learning (or teaching) for a student (or teacher), merit consideration.

Considering content, I would iconoclastically submit that the least one may then need to accommodate is the following distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA, which is the language of science; and

2. The first-order Set Theory ZF, which is the language of science fiction.

A distinction that is reflected in Stephen G. Simpson’s more conservative perspective in Partial Realizations of Hilbert’s Program (\S6.4, p.15):

Finitistic reasoning (read ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (read ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.

Reason:

(i) PA has two, hitherto unsuspected, evidence-based interpretations (see this post), the first of which can be treated as circumscribing the ambit of human reasoning about `true’ arithmetical propositions; and the second can be treated as circumscribing the ambit of mechanistic reasoning about `true’ arithmetical propositions.

It is this language of arithmetic—formally expressed as PA—that provides the foundation for all practical applications of mathematics where the latter could be argued as having an essential educational dimension.

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see paragraph 4.2 of this post), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about `true’ set-theoretical propositions, or that of mechanistic reasoning about `true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that are only mentally conceivable by mathematicians (subjectively?), and have no physical counterparts, or immediately practical applications of mathematics, which could meaningfully be argued as having an essential educational dimension.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

The distinction is lost when—as seems to be the case currently—we treat the acquisition of mathematical knowledge as necessarily including the body of essentially set-theoretic theorems. This is to the detriment, I would argue, of the larger body of aspiring students of mathematics, whose flagging interest in acquiring such wider knowledge in universities around the world reflects the fact that, for most students, their interest lies primarily in how a study of mathematics can enable them to:

(a) adequately abstract and precisely express through human reasoning their experiences of the world in which they live and work; and

(b) unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning in order to function to the maximum of their latent potential in achieving their personal real-world goals.

In other words, it is not obvious how any study of mathematics that has the limited goals (a) and (b) can have any essentially educational dimension that justifies the assumption that ZF is consistent.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

Three Dogmas of FOL: Hilbertian Theism, Brouwerian Atheism, and Finitary Agnosticism (work in progress as on 10/07/2017)

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

What essentially differentiates our approach of finitary agnosticism to mathematical reasoning in this investigation from both the classical Hilbertian and the intuitionistic Brouwerian perspectives is how we address two significant dogmas of the classical first-order logic FOL.

Hilbertian Theism

In a 1925 address[1], Hilbert showed that the axiomatisation \mathcal{L}_{\varepsilon} of classical Aristotelian predicate logic which he proposed as a formal first-order \varepsilon-predicate calculus[2], using a primitive choice-function[3] symbol, `\varepsilon‘, for defining the quantifiers `\forall‘ and `\exists‘, would adequately express—and yield, under a suitable interpretation—Aristotle’s logic of predicates if the \varepsilon-function was interpreted so as to yield Aristotle’s particularisation[4].

Classical approaches to mathematics—essentially following Hilbert—can be labelled `theistic’ in that they implicitly assume—without providing adequate objective criteria[5]—both that:

\bullet The standard first order logic FOL is consistent;

and that:

\bullet The standard interpretation of FOL is finitarily sound (which implies Aristotle’s particularisation).

The significance of the label `theistic’ is that conventional wisdom tacitly believes that Aristotle’s particularisation remains valid—without qualification—even over infinite domains; a belief that is not unequivocally self-evident, but must be appealed to as an article of faith.

Although intended to highlight an entirely different distinction, that the choice of such a label may not be totally inappropriate is suggested by Tarski’s point of view[6] to the effect:

“… that Hilbert’s alleged hope that meta-mathematics would usher in a `feeling of absolute security’ was a `kind of theology’ that `lay far beyond the reach of any normal human science’ …”.

Brouwerian Atheism

We note second that, in sharp contrast, constructive approaches to mathematics—such as Intuitionism[6a]—can be labelled `atheistic’ since they deny both that:

\bullet FOL is consistent (since they deny the Law of the Excluded Middle[7]);

and that:

\bullet The standard interpretation of FOL is sound (since they deny Aristotle’s particularisation).

The significance of the label `atheistic’ is that whereas constructive approaches to mathematics deny the faith-based belief in the unqualified validity of Aristotle’s particularisation over infinite domains, their denial of the Law of the Excluded Middle is itself a belief—in the inconsistency of FOL—that is also not unequivocally self-evident, and must likewise be appealed to as an article of faith, since it does not take into consideration the objective criteria for the consistency of FOL that follow from the Birmingham paper.

Although Brouwer’s explicitly stated objection appeared to be to the Law of the Excluded Middle as expressed and interpreted at the time[8], some of Kleene’s remarks[9], some of Hilbert’s remarks[10] and, more particularly, Kolmogorov’s remarks[11] suggest that the intent of Brouwer’s fundamental objection (see this post) can also be viewed today as being limited only to the yet prevailing belief—as an article of faith—that the validity of Aristotle’s particularisation can be extended without qualification to infinite domains.

Finitary Agnosticism

In our investigations, however, we adopt what may be labelled a finitarily `agnostic’ perspective[12] by noting that, although if Aristotle’s particularisation holds in an interpretation then the Law of the Excluded Middle must also hold in that interpretation, the converse is not true.

We thus follow a middle path by explicitly assuming on the basis of the Birmingham paper (reproduced in this post) that:

\bullet FOL is finitarily consistent;

and by explicitly stating when an argument appeals to the postulation that:

\bullet The standard interpretation of FOL is sound.

The significance of the label `agnostic’ is that we neither hold FOL to be inconsistent, nor hold that Aristotle’s particularisation can be applied—without qualification—over infinite domains.

The two seemingly contradictory but perhaps complementary perspectives

It follows from the above that the Brouwerian Atheistic perspective is merely a restricted perspective within the Finitary Agnostic perspective, whilst the Hilbertian Theistic perspective contradicts the Finitary Agnostic perspective.

Curiously, a case can be made (as is attempted in this post, and in this paper that I presented on 10th June 2015 at the Epsilon 2015 workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, University of Montpellier, France, June 10-11-12) that the Hilbertian perspective formalises the concept of `algorithmically verifiable truth’ in the non-finitary—hence essentially subjective—reasoning that circumscribes human intelligences (which, by the Anthropic principle, can be taken to reflect the ‘truths’ of natural laws), whilst the Finitary perspective formalises the concept of ‘algorithmically computable truth’ in the finitary—hence essentially objective—reasoning that circumscribes mechanical intelligences; and that the two are complementary, not contradictory, perspectives on the nature and scope of quantification.

References

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic (4th ed). Cambridge University Press, Cambridge.

Be59 Evert W. Beth. 1959. The Foundations of Mathematics. Studies in Logic and the Foundations of Mathematics. Edited by L. E. J. Brouwer, E. W. Beth, A. Heyting. 1959. North Holland Publishing Company, Amsterdam.

BF58 Paul Bernays and Abraham A. Fraenkel. 1958. Axiomatic Set Theory. Studies in Logic and the Foundations of Mathematics. Edited by L. E. J. Brouwer, E. W. Beth, A. Heyting. 1959. North Holland Publishing Company, Amsterdam.

Br08 L. E. J. Brouwer. 1908. The Unreliability of the Logical Principles. English translation in A. Heyting, Ed. L. E. J. Brouwer: Collected Works 1: Philosophy and Foundations of Mathematics. Amsterdam: North Holland / New York: American Elsevier (1975): pp.107-111.

Br13 L. E. J. Brouwer. 1913. Intuitionism and Formalism. Inaugural address at the University of Amsterdam, October 14, 1912. Translated by Professor Arnold Dresden for the Bulletin of the American Mathematical Society, Volume 20 (1913), pp.81-96. 1999. Electronically published in Bulletin (New Series) of the American Mathematical Society, Volume 37, Number 1, pp.55-64.

Br23 L. E. J. Brouwer. 1923. On the significance of the principle of the excluded middle in mathematics, especially in function theory. Address delivered on 21 September 1923 at the annual convention of the Deutsche Mathematiker-Vereinigung in Marburg an der Lahn. In Jean van Heijenoort. 1967. Ed. From Frege to Gödel: A source book in Mathematical Logic, 1878 – 1931. Harvard University Press, Cambridge, Massachusetts.

Co66 Paul J. Cohen. 1966. Set Theory and the Continuum Hypothesis. (Lecture notes given at Harvard University, Spring 1965) W. A. Benjamin, Inc., New York.

Cr05 John N. Crossley. 2005. What is Mathematical Logic? A Survey. Address at the First Indian Conference on Logic and its Relationship with Other Disciplines held at the Indian Institute of Technology, Powai, Mumbai from January 8 to 12. Reprinted in Logic at the Crossroads: An Interdisciplinary View – Volume I (pp.3-18). ed. Amitabha Gupta, Rohit Parikh and Johan van Bentham. 2007. Allied Publishers Private Limited, Mumbai.

Da82 Martin Davis. 1958. Computability and Unsolvability. 1982 ed. Dover Publications, Inc., New York.

EC89 Richard L. Epstein, Walter A. Carnielli. 1989. Computability: Computable Functions, Logic, and the Foundations of Mathematics. Wadsworth & Brooks, California.

Fr09 Curtis Franks. 2009. The Autonomy of Mathematical Knowledge: Hilbert’s Program Revisited. Cambridge University Press, New York.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York.

HA28 David Hilbert & Wilhelm Ackermann. 1928. Principles of Mathematical Logic. Translation of the second edition of the Grundzüge Der Theoretischen Logik. 1928. Springer, Berlin. 1950. Chelsea Publishing Company, New York.

Hi25 David Hilbert. 1925. On the Infinite. Text of an address delivered in Münster on 4th June 1925 at a meeting of the Westphalian Mathematical Society. In Jean van Heijenoort. 1967. Ed. From Frege to Gödel: A source book in Mathematical Logic, 1878 – 1931. Harvard University Press, Cambridge, Massachusetts.

Hi27 David Hilbert. 1927. The Foundations of Mathematics. Text of an address delivered in July 1927 at the Hamburg Mathematical Seminar. In Jean van Heijenoort. 1967. Ed. From Frege to Gödel: A source book in Mathematical Logic, 1878 – 1931. Harvard University Press, Cambridge, Massachusetts.

Kl52 Stephen Cole Kleene. 1952. Introduction to Metamathematics. North Holland Publishing Company, Amsterdam.

Kn63 G. T. Kneebone. 1963. Mathematical Logic and the Foundations of Mathematics: An Introductory Survey. D. Van Norstrand Company Limited, London.

Ko25 Andrei Nikolaevich Kolmogorov. 1925. On the principle of excluded middle. In Jean van Heijenoort. 1967. Ed. From Frege to Gödel: A source book in Mathematical Logic, 1878 – 1931. Harvard University Press, Cambridge, Massachusetts.

Li64 A. H. Lightstone. 1964. The Axiomatic Method. Prentice Hall, NJ.

Me64 Elliott Mendelson. 1964. Introduction to Mathematical Logic. Van Norstrand, Princeton.

Ma09 Maria Emilia Maietti. 2009. A minimalist two-level foundation for constructive mathematics. Dipartimento di Matematica Pura ed Applicata, University of Padova.

MS05 Maria Emilia Maietti and Giovanni Sambin. 2005. Toward a minimalist foundation for constructive mathematics. Clarendon Press, Oxford.

Mu91 Chetan R. Murthy. 1991. An Evaluation Semantics for Classical Proofs. Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, (also Cornell TR 91-1213), 1991.

Nv64 P. S. Novikov. 1964. Elements of Mathematical Logic. Oliver & Boyd, Edinburgh and London.

Qu63 Willard Van Orman Quine. 1963. Set Theory and its Logic. Harvard University Press, Cambridge, Massachusetts.

Rg87 Hartley Rogers Jr. 1987. Theory of Recursive Functions and Effective Computability. MIT Press, Cambridge, Massachusetts.

Ro53 J. Barkley Rosser. 1953. Logic for Mathematicians. McGraw Hill, New York.

Sh67 Joseph R. Shoenfield. 1967. Mathematical Logic. Reprinted 2001. A. K. Peters Ltd., Massachusetts.

Sk28 Thoralf Skolem. 1928. On Mathematical Logic. Text of a lecture delivered on 22nd October 1928 before the Norwegian Mathematical Association. In Jean van Heijenoort. 1967. Ed. From Frege to Gödel: A source book in Mathematical Logic, 1878 – 1931. Harvard University Press, Cambridge, Massachusetts.

Sm92 Raymond M. Smullyan. 1992. Gödel’s Incompleteness Theorems. Oxford University Press, Inc., New York.

Su60 Patrick Suppes. 1960. Axiomatic Set Theory. Van Norstrand, Princeton.

Ta33 Alfred Tarski. 1933. The concept of truth in the languages of the deductive sciences. In Logic, Semantics, Metamathematics, papers from 1923 to 1938 (p152-278). ed. John Corcoran. 1983. Hackett Publishing Company, Indianapolis.

Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp. 544-546.

Wa63 Hao Wang. 1963. A survey of Mathematical Logic. North Holland Publishing Company, Amsterdam.

An12 Bhupinder Singh Anand. 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

Notes

Return to 1: Hi25.

Return to 2: Hi27, pp.465-466; see also this post.

Return to 3: Hi25, p.382.

Return to 4: Hi25, pp.382-383; Hi27, p.466(1).

Return to 5: See for instance: Hi25, p.382; HA28, p.48; Sk28, p.515; Go31, p.32.; Kl52, p.169; Ro53, p.90; BF58, p.46; Be59, pp.178 & 218; Su60, p.3; Wa63, p.314-315; Qu63, pp.12-13; Kn63, p.60; Co66, p.4; Me64, p.52(ii); Nv64, p.92; Li64, p.33; Sh67, p.13; Da82, p.xxv; Rg87, p.xvii; EC89, p.174; Mu91; Sm92, p.18, Ex.3; BBJ03, p.102; Cr05, p.6.

Return to 6: As commented upon in Fr09, p.3.

Return to 6a: But see also Ma09 and MS05.

Return to 7: “The formula \forall x(A(x)\vee \neg A(x)) is classically provable, and hence under classical interpretation true. But it is unrealizable. So if realizability is accepted as a necessary condition for intuitionistic truth, it is untrue intuitionistically, and therefore unprovable not only in the present intuitionistic formal system, but by any intuitionistic methods whatsoever”. Kl52, p.513.

Return to 8: Br23, p.335-336; Kl52, p.47; Hi27, p.475.

Return to 9: Kl52, p.49.

Return to 10: For instance in Hi27, p.474.

Return to 11: In Ko25, fn. p.419; p.432.

Return to 12: See, for instance An12 (reproduced in this post).

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

A foundational argument for defining Effective Computability formally, and weakening the Church and Turing Theses – II

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

\S 1 The Logical Issue

In the previous posts we addressed first the computational issue, and second the philosophical issue—concerning the informal concept of `effective computability’—that seemed implicit in Selmer Bringsjord’s narrational case against Church’s Thesis [1].

We now address the logical issue that leads to a formal definability of this concept which—arguably—captures our intuitive notion of the concept more fully.

We note that in this paper on undecidable arithmetical propositions we have shown how it follows from Theorem VII of Gödel’s seminal 1931 paper that every recursive function f(x_{1}, x_{2}) is representable in the first-order Peano Arithmetic PA by a formula [F(x_{1}, x_{2}, x_{3})] which is algorithmically verifiable, but not algorithmically computable, if we assume (Aristotle’s particularisation) that the negation of a universally quantified formula of the first-order predicate calculus is always indicative of the existence of a counter-example under the standard interpretation of PA.

In this earlier post on the Birmingham paper, we have also shown that:

(i) The concept of algorithmic verifiability is well-defined under the standard interpretation \mathcal{I}_{PA(\mathbb{N},\ Standard)} of PA over the structure \mathbb{N} of the natural numbers; and

(ii) The concept of algorithmic computability too is well-defined under the algorithmic interpretation \mathcal{I}_{PA(\mathbb{N},\ Algorithmic)} of PA over the structure \mathbb{N} of the natural numbers.

We shall argue in this post that the standard postulation of the Church-Turing Thesis—which holds that the intuitive concept of `effective computability’ is completely captured by the formal notion of `algorithmic computability’—does not hold if we formally define a number-theoretic formula as effectively computable if, and only if, it is algorithmically verifiable; the Thesis therefore needs to be replaced by a weaker postulation as an instantiational equivalence.

\S 2 Weakening the Church and Turing Theses

We begin by noting that the following theses are classically equivalent [1]:

Standard Church’s Thesis: [2] A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is recursive [3].

Standard Turing’s Thesis: [4] A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is Turing-computable [5].

In this paper we shall argue that, from a foundational perspective, the principle of Occam’s razor suggests the Theses should be postulated minimally as the following equivalences:

Weak Church’s Thesis: A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is instantiationally equivalent to a recursive function (or relation, treated as a Boolean function).

Weak Turing’s Thesis: A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is instantiationally equivalent to a Turing-computable function (or relation, treated as a Boolean function).

\S 2.1 The need for explicitly distinguishing between `instantiational’ and `uniform’ methods

Why Church’s Thesis?

It is significant that both Kurt Gödel (initially) and Alonzo Church (subsequently—possibly under the influence of Gödel’s disquietude) enunciated Church’s formulation of `effective computability’ as a Thesis because Gödel was instinctively uncomfortable with accepting it as a definition that minimally captures the essence of `intuitive effective computability’ [6].

Kurt Gödel’s reservations

Gödel’s reservations seem vindicated if we accept that a number-theoretic function can be effectively computable instantiationally (in the sense of being algorithmically verifiable as defined in the Birmingham paper, reproduced in this post), but not by a uniform method (in the sense of being algorithmically computable as defined in the Birmingham paper, reproduced in this post).

The significance of the fact (considered in the Birmingham paper, reproduced in this post) that `truth’ too can be effectively decidable both instantiationally and by a uniform method under the standard interpretation of PA is reflected in Gödel’s famous 1951 Gibbs lecture[7], where he remarks:

“I wish to point out that one may conjecture the truth of a universal proposition (for example, that I shall be able to verify a certain property for any integer given to me) and at the same time conjecture that no general proof for this fact exists. It is easy to imagine situations in which both these conjectures would be very well founded. For the first half of it, this would, for example, be the case if the proposition in question were some equation F(n) = G(n) of two number-theoretical functions which could be verified up to very great numbers n.” [8]

Alan Turing’s perspective

Such a possibility is also implicit in Turing’s remarks [9]:

“The computable numbers do not include all (in the ordinary sense) definable numbers. Let P be a sequence whose n-th figure is 1 or 0 according as n is or is not satisfactory. It is an immediate consequence of the theorem of \S8 that P is not computable. It is (so far as we know at present) possible that any assigned number of figures of P can be calculated, but not by a uniform process. When sufficiently many figures of P have been calculated, an essentially new method is necessary in order to obtain more figures.”

Boolos, Burgess and Jeffrey’s query

The need for placing such a distinction on a formal basis has also been expressed explicitly on occasion [10].

Thus, Boolos, Burgess and Jeffrey [11] define a diagonal halting function, d, any value of which can be decided effectively, although there is no single algorithm that can effectively compute d.

Now, the straightforward way of expressing this phenomenon should be to say that there are well-defined number-theoretic functions that are effectively computable instantiationally but not uniformly. Yet, following Church and Turing, such functions are labeled as uncomputable [12]!

However, as Boolos, Burgess and Jeffrey note quizzically:

“According to Turing’s Thesis, since d is not Turing-computable, d cannot be effectively computable. Why not? After all, although no Turing machine computes the function d, we were able to compute at least its first few values. For since, as we have noted, f_{1} = f_{2} = f_{3} = the empty function, we have d(1) = d(2) = d(3) = 1. And it may seem that we can actually compute d(n) for any positive integer n—if we don’t run out of time.” [13]

Why should Chaitin’s constant \Omega be labelled `uncomputable’?

The reluctance to treat a function such as d(n)—or the function \Omega(n) whose value is the n^{th} digit in the decimal expansion of a Chaitin constant \Omega [14]—as computable, on the grounds that the `time’ needed to compute it increases monotonically with n, is curious [15]; the same applies to any total Turing-computable function f(n)! [16]

Moreover, such a reluctance to treat instantiationally computable functions such as d(n) as `effectively computable’ is difficult to reconcile with a conventional wisdom that holds the standard interpretation of the first-order Peano Arithmetic PA as defining an intuitively sound model of PA.

Reason: We have shown in the Birmingham paper (reproduced in this post) that ‘satisfaction’ and ‘truth’ under the standard interpretation of PA are definable constructively in terms of algorithmic verifiability (instantiational computability).

\S 2.2 Distinguishing between algorithmic verifiability and algorithmic computability

We now show in Theorem 1 that if Aristotle’s particularisation is presumed valid over the structure \mathbb{N} of the natural numbers—as is the case under the standard interpretation of the first-order Peano Arithmetic PA—then it follows from the instantiational nature of the (constructively defined [17]) Gödel \beta-function that a primitive recursive relation can be instantiationally equivalent to an arithmetical relation, where the former is algorithmically computable over \mathbb{N}, whilst the latter is algorithmically verifiable (i.e., instantiationally computable) but not algorithmically computable over \mathbb{N}.[18]

\S 2.2.1 Significance of Gödel’s \beta-function

We note first that in Theorem VII of his seminal 1931 paper on formally undecidable arithmetical propositions Gödel showed that, given a total number-theoretic function f(x) and any natural number n, we can construct a primitive recursive function \beta(z, y, x) and natural numbers b_{n}, c_{n} such that \beta(b_{n}, c_{n}, i) = f(i) for all 0 \leq i \leq n.

In this paper we shall essentially answer the following question affirmatively:

Query 3: Does Gödel’s Theorem VII admit construction of an arithmetical function A(x) such that:

(a) for any given natural number n, there is an algorithm that can verify A(i) = f(i) for all 0 \leq i \leq n (hence A(x) may be said to be algorithmically verifiable if f(x) is recursive);

(b) there is no algorithm that can verify A(i) = f(i) for all 0 \leq i (so A(x) may be said to be algorithmically uncomputable)?

\S 2.2.2 Defining effective computability

Now, in the Birmingham paper (reproduced in this post), we have formally defined what it means for a formula of an arithmetical language to be:

(i) Algorithmically verifiable;

(ii) Algorithmically computable;

under an interpretation.

We shall thus propose the definition:

Effective computability: A number-theoretic formula is effectively computable if, and only if, it is algorithmically verifiable.

Intuitionistically unobjectionable: We note first that since every finite set of integers is recursive, every well-defined number-theoretical formula is algorithmically verifiable, and so the above definition is intuitionistically unobjectionable; and second that the existence of an arithmetic formula that is algorithmically verifiable but not algorithmically computable (Theorem 1) supports Gödel’s reservations on Alonzo Church’s original intention to label his Thesis as a definition [19].

The concept is well-defined, since we have shown in the Birmingham paper (reproduced in this post) that the algorithmically verifiable and the algorithmically computable PA formulas are well-defined under the standard interpretation of PA and that:

(a) The PA-formulas are decidable as satisfied / unsatisfied or true / false under the standard interpretation of PA if, and only if, they are algorithmically verifiable;

(b) The algorithmically computable PA-formulas are a proper subset of the algorithmically verifiable PA-formulas;

(c) The PA-axioms are algorithmically computable as satisfied / true under the standard interpretation of PA;

(d) Generalisation and Modus Ponens preserve algorithmically computable truth under the standard interpretation of PA;

(e) The provable PA-formulas are precisely the ones that are algorithmically computable as satisfied / true under the standard interpretation of PA.

\S 3 Gödel’s Theorem VII and algorithmically verifiable, but not algorithmically computable, arithmetical propositions

In his seminal 1931 paper on formally undecidable arithmetical propositions, Gödel defined a curious primitive recursive function—Gödel’s \beta-function—as [20]:

Definition 1: \beta (x_{1}, x_{2}, x_{3}) = rm(1+(x_{3}+ 1) \star x_{2}, x_{1})

where rm(x_{1}, x_{2}) denotes the remainder obtained on dividing x_{2} by x_{1}.

Gödel showed that the above function has the remarkable property that:

Lemma 1: For any given denumerable sequence of natural numbers, say f(k, 0),\ f(k, 1),\ \ldots, and any given natural number n, we can construct natural numbers b, c, j such that:

(i) j = max(n, f(k, 0), f(k, 1), \ldots, f(k, n));

(ii) c = j!;

(iii) \beta(b, c, i) = f(k, i) for 0 \leq i \leq n.

Proof: This is a standard result [21]. \Box
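Lemma 1’s construction is concrete, and a short sketch in Python may help (the helper names are mine; the Chinese-Remainder step is one standard way of obtaining the b whose existence the lemma asserts):

```python
from math import factorial

def beta(b, c, i):
    """Gödel's beta-function: rm(1 + (i+1)*c, b), i.e. b mod (1 + (i+1)*c)."""
    return b % (1 + (i + 1) * c)

def _inverse(a, m):
    """Modular inverse of a mod m via extended Euclid (assumes gcd(a, m) == 1)."""
    old_r, r, old_s, s = a % m, m, 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % m

def beta_encode(seq):
    """Lemma 1: given f(0), ..., f(n), construct b, c with beta(b, c, i) = f(i)."""
    n = len(seq) - 1
    j = max([n] + list(seq))
    c = factorial(j)                                  # c = j!
    moduli = [1 + (i + 1) * c for i in range(n + 1)]  # pairwise coprime, each > f(i)
    b, M = 0, 1
    for a, m in zip(seq, moduli):                     # Chinese Remainder construction
        t = ((a - b) * _inverse(M, m)) % m
        b, M = b + M * t, M * m
    return b, c

seq = [3, 1, 4, 1, 5]
b, c = beta_encode(seq)
assert [beta(b, c, i) for i in range(len(seq))] == seq
```

Here beta(b, c, i) recovers seq[i] because each seq[i] is smaller than the corresponding modulus 1 + (i+1)c, and the moduli are pairwise coprime: any common prime factor would have to divide both c = j! and a difference of indices at most j.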

Now we have the standard definition [22]:

Definition 2: A number-theoretic function f(x_{1}, \ldots, x_{n}) is said to be representable in PA if, and only if, there is a PA formula [F(x_{1}, \dots, x_{n+1})] with the free variables [x_{1}, \ldots, x_{n+1}], such that, for any given natural numbers k_{1}, \ldots, k_{n+1}:

(i) if f(k_{1}, \ldots, k_{n}) = k_{n+1} then PA proves: [F(k_{1}, \ldots, k_{n}, k_{n+1})];

(ii) PA proves: [(\exists_{1} x_{n+1})F(k_{1}, \ldots, k_{n}, x_{n+1})].

The function f(x_{1}, \ldots, x_{n}) is said to be strongly representable in PA if we further have that:

(iii) PA proves: [(\exists_{1} x_{n+1})F(x_{1}, \ldots, x_{n}, x_{n+1})]

Interpretation of `[\exists_{1}]‘: The symbol `[\exists_{1}]‘ denotes `uniqueness’ under an interpretation which assumes that Aristotle’s particularisation holds in the domain of the interpretation.

Formally, however, the PA formula:

[(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})]

is merely a short-hand notation for the PA formula:

[\neg(\forall x_{3})\neg F(x_{1}, x_{2}, x_{3}) \wedge (\forall y)(\forall z)(F(x_{1}, x_{2}, y) \wedge F(x_{1}, x_{2}, z) \rightarrow y=z)].

We then have:

Lemma 2: \beta(x_{1}, x_{2}, x_{3}) is strongly represented in PA by [Bt(x_{1}, x_{2}, x_{3}, x_{4})], which is defined as follows:

[(\exists w)(x_{1} = ((1 + (x_{3} + 1)\star x_{2}) \star w + x_{4}) \wedge (x_{4} < 1 + (x_{3} + 1) \star x_{2}))].

Proof: This is a standard result [23]. \Box
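Interpreted over the natural numbers, the defining condition of [Bt] simply says that x_{4} is the remainder of x_{1} on division by 1 + (x_{3} + 1) \star x_{2}. A small self-contained check in Python (Bt_holds is my name for the interpreted condition) makes the correspondence with \beta explicit:

```python
def beta(b, c, i):
    """Gödel's beta-function: b mod (1 + (i+1)*c)."""
    return b % (1 + (i + 1) * c)

def Bt_holds(x1, x2, x3, x4):
    """Interpreted condition of [Bt]: some natural w with x1 = m*w + x4 and x4 < m."""
    m = 1 + (x3 + 1) * x2
    return x4 < m and x1 >= x4 and (x1 - x4) % m == 0

# For natural-number arguments, Bt_holds(x1, x2, x3, x4) iff x4 == beta(x1, x2, x3):
assert Bt_holds(50, 2, 0, beta(50, 2, 0)) and not Bt_holds(50, 2, 0, 0)
```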

Gödel further showed (also under the tacit, but critical, presumption of Aristotle’s particularisation [24]) that:

Lemma 3: If f(x_{1}, x_{2}) is a recursive function defined by:

(i) f(x_{1}, 0) = g(x_{1})

(ii) f(x_{1}, (x_{2}+1)) = h(x_{1}, x_{2}, f(x_{1}, x_{2}))

where g(x_{1}) and h(x_{1}, x_{2}, x_{3}) are recursive functions of lower rank [25] that are represented in PA by well-formed formulas [G(x_{1}, x_{2})] and [H(x_{1}, x_{2}, x_{3}, x_{4})],

then f(x_{1}, x_{2}) is represented in PA by the following well-formed formula, denoted by [F(x_{1}, x_{2}, x_{3})]:

[(\exists u)(\exists v)(((\exists w)(Bt(u, v, 0, w) \wedge G(x_{1}, w))) \wedge Bt(u, v, x_{2}, x_{3}) \wedge (\forall w)(w < x_{2} \rightarrow (\exists y)(\exists z)(Bt(u, v, w, y) \wedge Bt(u, v, (w+1), z) \wedge H(x_{1}, w, y, z)))].

Proof: This is a standard result [26]. \Box

\S 4.1 What does “[(\exists_{1} x_{3})F(k, m, x_{3})] is provable” assert under the standard interpretation of PA?

Now, if the PA formula [F(x_{1}, x_{2}, x_{3})] represents in PA the recursive function denoted by f(x_{1}, x_{2}), then by definition, for any given numerals [k], [m], the formula [(\exists_{1} x_{3})F(k, m, x_{3})] is provable in PA; and it is true under the standard interpretation of PA.

We thus have that:

Lemma 4: “[(\exists_{1} x_{3})F(k, m, x_{3})] is true under the standard interpretation of PA” is the assertion that:

Given any natural numbers k, m, we can construct natural numbers t_{(k, m)}, u_{(k, m)}, v_{(k, m)}—all functions of k, m—such that:

(a) \beta(u_{(k, m)}, v_{(k, m)}, 0) = g(k);

(b) for all i<m, \beta(u_{(k, m)}, v_{(k, m)}, i+1) = h(k, i, f(k, i));

(c) \beta(u_{(k, m)}, v_{(k, m)}, m) = t_{(k, m)};

where f(x_{1}, x_{2}), g(x_{1}) and h(x_{1}, x_{2}, x_{3}) are any recursive functions that are formally represented in PA by F(x_{1}, x_{2}, x_{3}), G(x_{1}, x_{2}) and H(x_{1}, x_{2}, x_{3}, x_{4}) respectively such that:

(i) f(k, 0) = g(k)

(ii) f(k, (y+1)) = h(k, y, f(k, y)) for all y<m

(iii) g(x_{1}) and h(x_{1}, x_{2}, x_{3}) are recursive functions that are assumed to be of lower rank than f(x_{1}, x_{2}).

Proof: For any given natural numbers k and m, if [F(x_{1}, x_{2}, x_{3})] interprets as a well-defined arithmetical relation under the standard interpretation of PA, then we can define a deterministic Turing machine TM that can `construct’ the sequences:

f(k, 0), f(k, 1), \ldots, f(k, m)

and:

\beta(u_{(k, m)}, v_{(k, m)}, 0), \beta(u_{(k, m)}, v_{(k, m)}, 1), \ldots, \beta(u_{(k, m)}, v_{(k, m)}, m)

and give evidence to verify the assertion. \Box[27]
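The instance-wise character of this verification can be sketched in Python, continuing the \beta-function sketch given after Lemma 1 (beta and beta_encode are assumed in scope from there; the particular g and h below are arbitrary toy choices, not taken from the paper):

```python
# Toy primitive recursive g and h (arbitrary illustrative choices):
def g(x1):
    return x1 + 1

def h(x1, x2, x3):
    return x3 + x1

def f(k, m):
    """The recursion of Lemma 3: f(k, 0) = g(k); f(k, y+1) = h(k, y, f(k, y))."""
    value = g(k)
    for y in range(m):
        value = h(k, y, value)
    return value

def verify_instance(k, m):
    """For the GIVEN k, m: construct f(k, 0..m), beta-encode it, and check it.

    This is an algorithm AL_{(F, (k, m))} for this instance only; nothing here
    yields a single algorithm covering all k, m at once."""
    seq = [f(k, i) for i in range(m + 1)]
    b, c = beta_encode(seq)        # plays the role of u_(k,m), v_(k,m) in Lemma 4
    return all(beta(b, c, i) == seq[i] for i in range(m + 1))

assert verify_instance(2, 4)
```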

We now see that:

Theorem 1: Under the standard interpretation of PA [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] is algorithmically verifiable, but not algorithmically computable, as always true over \mathbb{N}.

Proof: It follows from Lemma 4 that:

(1) [(\exists_{1} x_{3})F(k, m, x_{3})] is PA-provable for any given numerals [k, m]. Hence [(\exists_{1} x_{3})F(k, m, x_{3})] is true under the standard interpretation of PA. It then follows from the definition of [F(x_{1}, x_{2}, x_{3})] in Lemma 3 that, for any given natural numbers k, m, we can construct some pair of natural numbers u_{(k, m)}, v_{(k, m)}—where u_{(k, m)}, v_{(k, m)} are functions of the given natural numbers k and m—such that:

(a) \beta(u_{(k, m)}, v_{(k, m)}, i) = f(k, i) for 0 \leq i \leq m;

(b) F^{*}(k, m, f(k, m)) holds in \mathbb{N}.

Since \beta(x_{1}, x_{2}, x_{3}) is primitive recursive, \beta(u_{(k, m)}, v_{(k, m)}, i) defines a deterministic Turing machine TM that can `construct’ the denumerable sequence f'(k, 0), f'(k, 1), \ldots for any given natural numbers k and m such that:

(c) f(k, i) = f'(k, i) for 0 \leq i \leq m.

We can thus define a deterministic Turing machine TM that will give evidence that the PA formula [(\exists_{1} x_{3})F(k, m, x_{3})] is true under the standard interpretation of PA.

Hence [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] is algorithmically verifiable over \mathbb{N} under the standard interpretation of PA.

(2) Now, the pair of natural numbers u_{(x_{1}, x_{2})}, v_{(x_{1}, x_{2})} are defined such that:

(a) \beta(u_{(x_{1}, x_{2})}, v_{(x_{1}, x_{2})}, i) = f(x_{1}, i) for 0 \leq i \leq x_{2};

(b) F^{*}(x_{1}, x_{2}, f(x_{1}, x_{2})) holds in \mathbb{N};

where v_{(x_{1}, x_{2})} is defined (via Lemma 1) as j!, and:

(c) j = max(n, f(x_{1}, 0), f(x_{1}, 1), \ldots, f(x_{1}, x_{2}));

(d) n is the `number’ of terms in the sequence f(x_{1}, 0), f(x_{1}, 1), \ldots, f(x_{1}, x_{2}).

Since j is not definable for a denumerable sequence \beta(u_{(x_{1}, x_{2})}, v_{(x_{1}, x_{2})}, i) we cannot define a denumerable sequence f'(x_{1}, 0), f'(x_{1}, 1), \ldots such that:

(e) f(k, i) = f'(k, i) for all i \geq 0.

We cannot thus define a deterministic Turing machine TM that will give evidence that the PA formula [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] interprets as true under the standard interpretation of PA for any given sequence of numerals [(a_{1}, a_{2})].

Hence [(\exists_{1} x_{3})F(x_{1}, x_{2}, x_{3})] is not algorithmically computable over \mathbb{N} under the standard interpretation of PA.

The theorem follows. \Box

Corollary 1: If the standard interpretation of PA is sound, then the classical Church and Turing theses are false.

The above theorem now suggests the following definition:

Definition 3: (Effective computability) A number-theoretic function is effectively computable if, and only if, it is algorithmically verifiable.

Such a definition of effective computability now allows the classical Church and Turing theses to be expressed as the weak equivalences in \S 2—rather than as identities—without any apparent loss of generality.

References

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic (4th ed). Cambridge University Press, Cambridge.

Bri93 Selmer Bringsjord. 1993. The Narrational Case Against Church’s Thesis. Easter APA meetings, Atlanta.

Ch36 Alonzo Church. 1936. An unsolvable problem of elementary number theory. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Am. J. Math., Vol. 58, pp.345-363.

Ct75 Gregory J. Chaitin. 1975. A Theory of Program Size Formally Identical to Information Theory. J. Assoc. Comput. Mach. 22 (1975), pp. 329-340.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York.

Go51 Kurt Gödel. 1951. Some basic theorems on the foundations of mathematics and their implications. Gibbs lecture. In Kurt Gödel, Collected Works III, pp.304-323. 1995. Unpublished Essays and Lectures. Solomon Feferman et al (ed.). Oxford University Press, New York.

Ka59 Laszlo Kalmár. 1959. An Argument Against the Plausibility of Church’s Thesis. In Heyting, A. (ed.) Constructivity in Mathematics. North-Holland, Amsterdam.

Kl36 Stephen Cole Kleene. 1936. General Recursive Functions of Natural Numbers. Math. Annalen vol. 112 (1936) pp.727-766.

Me64 Elliott Mendelson. 1964. Introduction to Mathematical Logic. Van Nostrand, Princeton.

Me90 Elliott Mendelson. 1990. Second Thoughts About Church’s Thesis and Mathematical Proofs. Journal of Philosophy 87.5.

Pa71 Rohit Parikh. 1971. Existence and Feasibility in Arithmetic. The Journal of Symbolic Logic, Vol.36, No. 3 (Sep., 1971), pp. 494-508.

Si97 Wilfried Sieg. 1997. Step by recursive step: Church’s analysis of effective calculability. Bulletin of Symbolic Logic, Volume 3, Number 2.

Sm07 Peter Smith. 2007. Church’s Thesis after 70 Years. A commentary and critical review of Church’s Thesis After 70 Years. In Meinong Studies Vol 1 (Ontos Mathematical Logic 1), 2006 (2013), Eds. Adam Olszewski, Jan Wolenski, Robert Janusz. Ontos Verlag (Walter de Gruyter), Frankfurt, Germany.

Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp. 544-546.

An07 Bhupinder Singh Anand. 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II. In The Reasoner, Vol(1)7 p2-3.

An12 … 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

Notes

Return to 1: cf. Me64, p.237.

Return to 2: Church’s (original) Thesis: The effectively computable number-theoretic functions are the algorithmically computable number-theoretic functions Ch36.

Return to 3: cf. Me64, p.227.

Return to 4: After describing what he meant by “computable” numbers in the opening sentence of his 1936 paper on Computable Numbers Tu36, Turing immediately expressed this thesis—albeit informally—as: “… the computable numbers include all numbers which could naturally be regarded as computable”.

Return to 5: cf. BBJ03, p.33.

Return to 6: See Si97.

Return to 7: Go51.

Return to 8: Parikh’s paper Pa71 can also be viewed as an attempt to investigate the consequences of expressing the essence of Gödel’s remarks formally.

Return to 9: Tu36, \S9(II), p.139.

Return to 10: Parikh’s distinction between `decidability’ and `feasibility’ in Pa71 also appears to echo the need for such a distinction.

Return to 11: BBJ03, p. 37.

Return to 12: The issue here seems to be that, when using language to express the abstract objects of our individual, and common, mental `concept spaces’, we use the word `exists’ loosely in three senses, without making explicit distinctions between them (see An07).

Return to 13: BBJ03, p.37.

Return to 14: Chaitin’s Halting Probability is given by 0 < \Omega = \sum 2^{-|p|} < 1, where the summation is over all self-delimiting programs p that halt, and |p| is the size in bits of the halting program p; see Ct75.

Return to 15: The incongruity of this is addressed by Parikh in Pa71.

Return to 16: The only difference being that, in the latter case, we know there is a common `program’ of constant length that will compute f(n) for any given natural number n; in the former, we know we may need distinctly different programs for computing f(n) for different values of n, where the length of the program will, sometimes, reference n.

Return to 17: By Kurt Gödel; see Go31, Theorem VII.

Return to 18: Analogous distinctions in analysis: The distinction between algorithmically computable, and algorithmically verifiable but not algorithmically computable, number-theoretic functions seeks to reflect in arithmetic the essence of uniform methods (formally detailed in the Birmingham paper (reproduced in this post) and in its main consequence—the Provability Theorem for PA—as detailed in this post), classically characterised by the distinctions in analysis between: (a) uniformly continuous, and point-wise continuous but not uniformly continuous, functions over an interval; (b) uniformly convergent, and point-wise convergent but not uniformly convergent, series.

A limitation of set theory and a possible barrier to computation: We note, further, that the above distinction cannot be reflected within a language—such as the set theory ZF—which identifies `equality’ with `equivalence’. Since functions are defined extensionally as mappings, such a language cannot recognise that a set which represents a primitive recursive function may be equivalent to, but computationally different from, a set that represents an arithmetical function; where the former function is algorithmically computable over \mathbb{N}, whilst the latter is algorithmically verifiable but not algorithmically computable over \mathbb{N}.
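For reference, the classical definitions behind the analogy in (a) can be displayed side by side (these are standard textbook formulations, supplied here only to make the analogy explicit):

f is point-wise continuous on I: (\forall x \in I)(\forall \epsilon > 0)(\exists \delta > 0)(\forall y \in I)(|x - y| < \delta \Rightarrow |f(x) - f(y)| < \epsilon)

f is uniformly continuous on I: (\forall \epsilon > 0)(\exists \delta > 0)(\forall x \in I)(\forall y \in I)(|x - y| < \delta \Rightarrow |f(x) - f(y)| < \epsilon)

Moving (\exists \delta > 0) inside or outside the quantifier over x is the analytic counterpart of the arithmetical shift between an algorithm chosen afresh for each instance (verifiability) and a single algorithm covering all instances (computability).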

Return to 19: See the Provability Theorem for PA in this post.

Return to 20: cf. Go31, p.31, Lemma 1; Me64, p.131, Proposition 3.21.

Return to 21: cf. Go31, p.31, Lemma 1; Me64, p.131, Proposition 3.22.

Return to 22: Me64, p.118.

Return to 23: cf. Me64, p.131, proposition 3.21.

Return to 24: The implicit assumption being that the negation of a universally quantified formula of the first-order predicate calculus is indicative of “the existence of a counter-example”—Go31, p.32.

Return to 25: cf. Me64, p.132; Go31, p.30(2).

Return to 26: cf. Go31, p.31(2); Me64, p.132.

Return to 27: A critical philosophical issue that we do not address here is whether the PA formula [F(x_{1}, x_{2}, x_{3})] can be considered to interpret, under a sound interpretation of PA, as a well-defined predicate, since the denumerably many finite sequences \{f(k, 0), f(k, 1), \ldots, f(k, m_{p})\}—where p > 0 and m_{p} \neq m_{q} if p \neq q—are represented by denumerably many, distinctly different, functions \beta(u_{p_{1}}, v_{p_{2}}, i) respectively. There are thus denumerably many pairs (u_{p_{1}}, v_{p_{2}}) for which \beta(u_{p_{1}}, v_{p_{2}}, i) yields the sequence f(k, 0), f(k, 1), \ldots, f(k, m).

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

A foundational argument for defining Effective Computability formally, and weakening the Church and Turing Theses – I

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

\S 1.1 The Philosophical Issue

In a previous post we have argued that standard interpretations of classical theory may inadvertently be weakening a desirable perception—of mathematics as the lingua franca of scientific expression—by ignoring the possibility that since mathematics is, indeed, indisputably accepted as the language that most effectively expresses and communicates intuitive truth, the chasm between formal truth and provability must, of necessity, be bridgeable.

We further queried whether the roots of such interpretations may lie in removable ambiguities that currently persist in the classical definitions of foundational elements; ambiguities that allow the introduction of non-constructive—hence non-verifiable, non-computational, ambiguous and essentially Platonic—elements into the standard interpretations of classical mathematics.

Query 1: Are formal classical theories essentially unable to adequately express the extent and range of human cognition, or does the problem lie in the way formal theories are classically interpreted at the moment?

We noted that the former addressed the question of whether there are absolute limits on our capacity to express human cognition unambiguously; the latter, whether there are only temporal limits—not necessarily absolute—to the capacity of classical interpretations to communicate unambiguously that which we intended to capture within our formal expression.

We argued that, prima facie, applied science continues, perforce, to interpret mathematical concepts Platonically, whilst waiting for mathematics to provide suitable, and hopefully reliable, answers as to how best it may faithfully express its observations verifiably.

\S 1.2 Are axiomatic computational concepts really unambiguous?

This now raises the corresponding philosophical question that is implicit in Selmer Bringsjord’s narrational case against Church’s Thesis [1]:

Query 2: Is there a duality in the classical acceptance of non-constructive, foundational, concepts as axiomatic?

We now argue that, beyond the question raised in an earlier post of whether—as computer scientist Lance Fortnow believes—Turing machines can `capture everything we can compute’, or whether—as computer scientists Peter Wegner and Dina Goldin suggest—they are `inappropriate as a universal foundation for computational problem solving’, we also need to address the philosophical question, implicit in Bringsjord’s paper, of whether or not the concept of `effective computability’ is capable of a constructive, and intuitionistically unobjectionable, definition; and of how such a definition relates to formal provability and to the standard perceptions of the Church and Turing Theses, as reviewed here by Peter Smith, a Trinity mathmo at heart and a philosopher by profession.

We therefore consider the case for introducing such a definition from a philosophical point of view, and examine some of its consequences.

\S 1.2.1 Mendelson’s thesis

We note that Elliott Mendelson [2] is quoted by Bringsjord in his paper as saying (italicised parenthetical qualifications added):

(i) “Here is the main conclusion I wish to draw:

it is completely unwarranted to say that CT is unprovable just because it states an equivalence between a vague, imprecise notion (effectively computable function) and a precise mathematical notion (partial-recursive function)”.

(ii) “The concepts and assumptions that support the notion of partial-recursive function are, in an essential way, no less vague and imprecise (non-constructive, and intuitionistically objectionable) than the notion of effectively computable function; the former are just more familiar and are part of a respectable theory with connections to other parts of logic and mathematics.

(The notion of effectively computable function could have been incorporated into an axiomatic presentation of classical mathematics, but the acceptance of CT made this unnecessary.) …

Functions are defined in terms of sets, but the concept of set is no clearer (not more non-constructive, and intuitionistically objectionable) than that of function and a foundation of mathematics can be based on a theory using function as primitive notion instead of set.

Tarski’s definition of truth is formulated in set-theoretic terms, but the notion of set is no clearer (not more non-constructive, and intuitionistically objectionable), than that of truth.

The model-theoretic definition of logical validity is based ultimately on set theory, the foundations of which are no clearer (not more non-constructive, and intuitionistically objectionable) than our intuitive (non-constructive, and intuitionistically objectionable) understanding of logical validity”.

(iii) “The notion of Turing-computable function is no clearer (not more non-constructive, and intuitionistically objectionable) than, nor more mathematically useful (foundationally speaking) than, the notion of an effectively computable function …”

where:

(a) The Church-Turing Thesis, CT, is formulated as:

“A function is effectively computable if and only if it is Turing-computable”.

(b) An effectively computable function is defined as one whose values can be computed by an algorithm.

(c) The classical notion of an algorithm is expressed by Mendelson as:

“… an effective and completely specified procedure for solving a whole class of problems. …

An algorithm does not require ingenuity; its application is prescribed in advance and does not depend upon any empirical or random factors”.

and, where Bringsjord paraphrases (iii) as:

(iv) “The notion of a formally defined program for guiding the operation of a TM is no clearer than, nor more mathematically useful (foundationally speaking) than, the notion of an algorithm”.

adding that:

(v) “This proposition, it would then seem, is the very heart of the matter.

If (iv) is true then Mendelson has made his case; if this proposition is false, then his case is doomed, since we can chain back by modus tollens and negate (iii)”.

\S 1.2.2 The concept of `constructive, and intuitionistically unobjectionable’

Now, prima facie, any formalisation of a `vague and imprecise’, `intuitive’ concept—say C—would normally be intended to capture the concept C both faithfully and completely within a constructive, and intuitionistically unobjectionable [3], language L.

Clearly, we could disprove the thesis—that C and its formalisation L are interchangeable, hence equivalent—by showing that there is a constructive aspect of C that is formalisable in a constructive language L’, but that such formalisation cannot be assumed expressible in L without introducing inconsistency.

However, equally clearly, there can be no way of proving the equivalence as this would contradict the premise that the concept is `vague and imprecise’, hence essentially open-ended in a non-definable way, and so non-formalisable.

Obviously, Mendelson’s assertion that there is no justification for claiming Church’s Thesis as unprovable must, therefore, rely on an interpretation that differs significantly from the above; for instance, his concept of provability may appeal to the axiomatic acceptability of `vague and imprecise’ concepts—as suggested by his remarks.

Now, we note that all the examples cited by Mendelson involve the decidability (computability) of an infinitude of meta-mathematical instances, where the distinction between the constructive meta-assertion that any given instance is individually decidable (instantiationally computable), and the non-constructive meta-assertion that all the instances are jointly decidable (uniformly computable), is not addressed explicitly.

However, \S 1.2.1(a), \S 1.2.1(b) and \S 1.2.1(c) appear to suggest that Mendelson’s remarks relate implicitly to non-constructive meta-assertions.

Perhaps the real issue, then, is the one that emerges if we replace Mendelson’s use of implicitly open-ended concepts such as `vague and imprecise’ and `intuitive’ by the more meta-mathematically meaningful concept of `non-constructive, and intuitionistically objectionable’, as italicised and indicated parenthetically.

The essence of Mendelson’s meta-assertion \S 1.2.1(iii) then appears to be that, if the classically accepted definitions of foundational concepts such as `partial recursive function’, `function’, `Tarskian truth’ etc. are also non-constructive, and intuitionistically objectionable, then replacing one non-constructive concept by another may be psychologically unappealing, but it should be meta-mathematically valid and acceptable.

\S 1.2.3 The duality

Clearly, meta-assertion \S 1.2.1(iii) would stand refuted by a `non-algorithmic’ effective method that is constructive.

However, if it is explicitly—and, as suggested by the nature of the arguments in Bringsjord’s paper, widely—accepted at the outset that any effective method is necessarily algorithmic (i.e. uniform as stated in \S 1.2.1(c) above), then any counter-argument to CT can, prima facie, offer only non-algorithmic methods that may, paradoxically, be `effective’ intuitively, but solely in a non-constructive, and intuitionistically objectionable, way!

Recognition of this dilemma is implicit in the admission that the various arguments, as presented by Bringsjord in the case against Church’s Thesis—including his narrational case—are open to reasonable, but inconclusive, refutations.

Nevertheless, if we accept Mendelson’s thesis that the inter-changeability of non-constructive concepts is valid in the foundations of mathematics, then Bringsjord’s case against Church’s Thesis, since it is based similarly on non-constructive concepts, should also be considered conclusive classically (even though it cannot, prima facie, be considered constructively conclusive in an intuitionistically unobjectionable way).

There is, thus, an apparent duality in the—seemingly extra-logical—decision as to whether an argument based on non-constructive concepts may be accepted as classically conclusive or not.

That this duality may originate in the very issues raised in Mendelson’s remarks—concerning the non-constructive roots of foundational concepts that are classically accepted as mathematically sound—is seen if we note that these issues may be more significant than is, prima facie, apparent.

\S 1.2.4 Definition of a formal mathematical object, and consequences

Thus, if we define a formal mathematical object as any symbol for an individual letter, function letter or a predicate letter that can be introduced through definition into a formal theory without inviting inconsistency, then it can be argued that unrestricted, non-constructive, definitions of non-constructive, foundational, set-theoretic concepts—such as `mapping’, `function’, `recursively enumerable set’, etc.—in terms of constructive number-theoretic concepts—such as recursive number-theoretic functions and relations—may not always correspond to formal mathematical objects.

In other words, the assumption that every definition corresponds to a formal mathematical object may introduce a formal inconsistency into standard Peano Arithmetic and, ipso facto, into any Axiomatic Set Theory that models standard PA (an inconsistency which, loosely speaking, may be viewed as a constructive arithmetical parallel to Russell’s non-constructive impredicative set).

Since it can also be argued that the non-constructive element in Tarski’s definitions of `satisfiability’ and `truth’, and in Church’s Thesis, originate in a common, but removable, ambiguity in the interpretation of an effective method, perhaps it is worth considering whether Bringsjord’s acceptance of the assumption—that every constructive effective method is necessarily algorithmic, in the sense of being a uniform procedure as in \S 1.2.1(c) above—is mathematically necessary, or even whether it is at all intuitively tenable.

(Uniform procedure: A property usually taken to be a necessary condition for a procedure to qualify as effective.)

Thus, we may argue that we can explicitly and constructively define a `non-algorithmic’ effective method as one that, in any given instance, is instantiationally computable if, and only if, it terminates finitely with a conclusive result; and an `algorithmic’ effective method as one that is uniformly computable if, and only if, it terminates finitely, with a conclusive result, in any given instance. [4]

\S 1.2.5 Bringsjord’s case against CT

Apropos the specific arguments against CT, it would seem, prima facie, that a non-algorithmic effective—even if not obviously constructive—method could be implicit in the following argument considered by Bringsjord:

“Assume for the sake of argument that all human cognition consists in the execution of effective processes (in brains, perhaps). It would then follow by CT that such processes are Turing-computable, i.e., that computationalism is true. However, if computationalism is false, while there remains incontrovertible evidence that human cognition consists in the execution of effective processes, CT is overthrown”.

Assuming computationalism is false, the issue in this argument would, then, be whether there is a constructive, and adequate, expression of human cognition in terms of individually effective methods.

An appeal to such a non-algorithmic effective method may, in fact, be implicit in Bringsjord’s consideration of the predicate H, defined by:

H(P, i) iff (\exists n)S(P, i, n)

where the predicate S(P, u, n) holds if, and only if, TM M, running program P on input u, halts in exactly n steps (in symbols, M_{P}: u \Rightarrow_{n} halt).

Bringsjord’s Total Computability

Bringsjord defines S as totally (and, implicitly, uniformly) computable in the sense that, given some triple (P, u, n), there is some (uniform) program P* which, running on some TM M*, can infallibly give us a verdict, Y (`yes’) or N (`no’), for whether or not S is true of this triple.

He then notes that, since the ability to (uniformly) determine, for a pair (P, i), whether or not H is true of it, is equivalent to solving the full halting problem, H is not totally computable.

Bringsjord’s Partial Computability

However, he also notes that there is a program (implicitly non-uniform, and so, possibly, effective individually) which, when asked whether or not some TM M run by P on u halts, will produce Y iff M_{P}: u \Rightarrow_{n} halt. For this reason H is declared partially (implicitly, individually) computable.
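A minimal Python sketch may help fix this total/partial contrast. It models a `program’ as a Python generator that yields once per simulated step; this representation, and the names S, H_semi and countdown, are illustrative assumptions rather than Bringsjord’s formalism.

def S(P, u, n):
    """Total predicate: does 'machine' P, run on input u, halt in exactly
    n steps?  Decidable, because at most n + 1 simulation steps are ever
    performed before a verdict is returned."""
    run = P(u)                  # P is modelled as a generator: one yield per step
    steps = 0
    while steps <= n:
        try:
            next(run)
            steps += 1
        except StopIteration:   # the run has halted after 'steps' steps
            return steps == n
    return False                # still running after n steps

def H_semi(P, u):
    """Semi-decider for H(P, u) = (exists n) S(P, u, n): returns True ('Y')
    if P halts on u, and diverges otherwise.  This is the sense in which H
    is only 'partially' computable."""
    n = 0
    while True:
        if S(P, u, n):
            return True
        n += 1

def countdown(u):
    """Toy 'program' that halts after exactly u steps."""
    while u > 0:
        u -= 1
        yield

assert S(countdown, 3, 3) and not S(countdown, 3, 2)
assert H_semi(countdown, 5)     # would diverge if countdown never halted on its input

The point of the sketch is that S always returns a verdict because its simulation is cut off, whereas H_semi returns Y whenever a witness n exists but diverges otherwise; no appeal to the unsolvability of the full halting problem is needed to see the asymmetry.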

More explicitly, Bringsjord remarks that Laszlo Kalmár’s refutation of CT [5] is classically inconclusive mainly because it does not admit any uniform effective method, but appeals to the existence of an infinitude of individually effective methods.

Kalmár’s Perspective of CT

Considering that it always seeks to calculate g(n) (defined below) constructively even in the absence of a uniform procedure—not necessarily within a fixed postulate system—we shall show that Kalmár’s finitary argument ([6], as reproduced below from Bringsjord’s paper) makes a pertinent observation:

“First, he draws our attention to a function g that isn’t Turing-computable, given that f is [7]:

g(x) = \mu y(f(x, y) = 0) = {the least y such that f(x, y) = 0 if y exists; and 0 if there is no such y}

Kalmár proceeds to point out that for any n in N for which a natural number y with f(n, y) = 0 exists,

`an obvious method for the calculation of the least such y … can be given,’

namely, calculate in succession the values f(n, 0), f(n, 1), f(n, 2), \ldots (which, by hypothesis, is something a computist or TM can do) until we hit a natural number m such that f(n, m) = 0, and set y = m.

On the other hand, for any natural number n for which we can prove, not in the frame of some fixed postulate system but by means of arbitrary—of course, correct—arguments that no natural number y with f(n, y) = 0 exists, we have also a method to calculate the value g(n) in a finite number of steps.

Kalmár goes on to argue as follows. The definition of g itself implies the tertium non datur, and from it and CT we can infer the existence of a natural number p which is such that

(*) there is no natural number y such that f(p, y)= 0; and

(**) this cannot be proved by any correct means.

Kalmár claims that (*) and (**) are very strange, and that therefore CT is at the very least implausible.”
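The `obvious method’ Kalmár appeals to is simply unbounded search, with the non-existence case settled, if at all, by per-instance argument rather than by a uniform algorithm. A hedged Python sketch follows (the name g_kalmar and the sample f below are illustrative assumptions, not Kalmár’s or Bringsjord’s notation):

def g_kalmar(f, n, nonexistence_proved=False):
    """Kalmar's 'obvious method' for g(n) = mu y (f(n, y) = 0), sketched.

    If some y with f(n, y) = 0 exists, the unbounded search finds the least
    such y.  If, for this particular n, we possess a proof (by whatever
    correct means, not necessarily within a fixed postulate system) that no
    such y exists, we may return 0 without searching.  What is missing is a
    single uniform procedure deciding, for every n, which case applies."""
    if nonexistence_proved:     # per-instance knowledge, not an algorithm
        return 0
    y = 0
    while True:                 # diverges if no witness exists and no proof is supplied
        if f(n, y) == 0:
            return y
        y += 1

# Example with a computable f for which a witness always exists:
f = lambda x, y: (x + y) % 5    # the least y with f(x, y) = 0 is (-x) % 5
assert g_kalmar(f, 3) == 2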

Distinguishing between individually effective computability and uniformly effective computability

Now, the significant point that emerges from Bringsjord’s and Kalmár’s philosophical arguments is the need to distinguish formally between non-algorithmic (i.e., instantiational) computability (or, more precisely, algorithmic verifiability as defined here) and algorithmic (i.e., uniform) computability (or, more precisely, algorithmic computability as defined here) as highlighted in the Birmingham paper.

In the next post we note that this point has also been raised from a more formal, logical, perspective; and consider what is arguably an intuitively-more-adequate formal definability of `effective computability’ in terms of ‘algorithmic verifiability’ under which the classical Church and Turing Theses do not hold, but weakened Church and Turing Theses do!

References

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic (4th ed). Cambridge University Press, Cambridge.

Bri93 Selmer Bringsjord. 1993. The Narrational Case Against Church’s Thesis. Easter APA meetings, Atlanta.

Ch36 Alonzo Church. 1936. An unsolvable problem of elementary number theory. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Am. J. Math., Vol. 58, pp.345-363.

Ct75 Gregory J. Chaitin. 1975. A Theory of Program Size Formally Identical to Information Theory. J. Assoc. Comput. Mach. 22 (1975), pp. 329-340.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York.

Go51 Kurt Gödel. 1951. Some basic theorems on the foundations of mathematics and their implications. Gibbs lecture. In Kurt Gödel, Collected Works III, pp.304-323. 1995. Unpublished Essays and Lectures. Solomon Feferman et al (ed.). Oxford University Press, New York.

Ka59 Laszlo Kalmár. 1959. An Argument Against the Plausibility of Church’s Thesis. In Heyting, A. (ed.) Constructivity in Mathematics. North-Holland, Amsterdam.

Kl36 Stephen Cole Kleene. 1936. General Recursive Functions of Natural Numbers. Math. Annalen vol. 112 (1936) pp.727-766.

Me64 Elliott Mendelson. 1964. Introduction to Mathematical Logic. Van Nostrand, Princeton.

Me90 Elliott Mendelson. 1990. Second Thoughts About Church’s Thesis and Mathematical Proofs. Journal of Philosophy 87.5.

Pa71 Rohit Parikh. 1971. Existence and Feasibility in Arithmetic. The Journal of Symbolic Logic, Vol.36, No. 3 (Sep., 1971), pp. 494-508.

Si97 Wilfried Sieg. 1997. Step by recursive step: Church’s analysis of effective calculability. Bulletin of Symbolic Logic, Volume 3, Number 2.

Sm07 Peter Smith. 2007. Church’s Thesis after 70 Years. A commentary and critical review of Church’s Thesis After 70 Years. In Meinong Studies Vol 1 (Ontos Mathematical Logic 1), 2006 (2013), Eds. Adam Olszewski, Jan Wolenski, Robert Janusz. Ontos Verlag (Walter de Gruyter), Frankfurt, Germany.

Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp. 544-546.

An07 Bhupinder Singh Anand. 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II. In The Reasoner, Vol(1)7 p2-3.

An12 … 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

Notes

Return to 1: Bri93.

Return to 2: Me90.

Return to 3: The terms `constructive’ and `constructive, and intuitionistically unobjectionable’ are used synonymously both in their familiar linguistic sense, and in a mathematically precise sense. Mathematically, we term a concept as `constructive, and intuitionistically unobjectionable’ if, and only if, it can be defined in terms of pre-existing concepts without inviting inconsistency. Otherwise, we understand it to mean unambiguously verifiable, by some `effective method’, within some finite, well-defined, language or meta-language. It may also be taken to correspond, broadly, to the concept of `constructive, and intuitionistically unobjectionable’ in the sense apparently intended by Gödel in his seminal 1931 paper Go31, p.26.

Return to 4: We note that the possibility of a distinction between the interpreted number-theoretic meta-assertions, `For any given natural number x, F(x) is true’ and `F(x) is true for all natural numbers x‘, is not evident unless these are expressed symbolically as, `(\forall x)(F(x) is true)’ and `(\forall x)F(x) is true’, respectively. The issue, then, is whether the distinction can be given any mathematical significance. For instance, under a constructive formulation of Tarski’s definitions, we may qualify the latter by saying that it can be meaningfully asserted as a totality only if `F‘ is a well-defined mathematical object.

Return to 5: Ka59.

Return to 6: Ka59.

Return to 7: Bringsjord notes that the original proof can be found on page 741 of Kleene Kl36.

Return to 8: We detail a formal proof of this Thesis in this post.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand
