
**Why primality is polynomial time, but factorisation is not**

**Differentiating between the signature of a number and its value**

**A brief review: The significance of evidence-based reasoning**

In a paper: `The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis’, which appeared in the December 2016 issue of *Cognitive Systems Research* [An16], I briefly addressed the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective *self-evidence*—without any specified methodology for objectively *evidencing* such acceptance in the sense of Chetan Murthy and Martin Löb:

“It is by now folklore … that one can view the *values* of a simple functional language as specifying *evidence* for propositions in a constructive logic …” … Chetan. R. Murthy: [Mu91], \S 1 Introduction.

“Intuitively we require that for each event-describing sentence, $\phi(a)$ say (i.e. the concrete object denoted by $a$ exhibits the property expressed by $\phi$), there shall be an algorithm (depending on **I**, i.e. $M^{*}$) to decide the truth or falsity of that sentence.” … Martin H. Löb: [Lob59], p.165.

**Definition 1** (*Evidence-based reasoning in Arithmetic*): Evidence-based reasoning accepts arithmetical propositions as true under an interpretation if, and only if, there is some specified methodology for objectively *evidencing* such acceptance.

The significance of introducing *evidence-based* reasoning for assigning truth values to the formulas of a first-order Peano Arithmetic, such as PA, under a well-defined interpretation (see Section 3 in [An16]), is that it admits the distinction:

(1) algorithmically *verifiable* `truth’ (Definition 2); and

(2) algorithmically *computable* `truth’ (Definition 3).

**Definition** (*Deterministic algorithm*): A deterministic algorithm computes a mathematical function which has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.

Note that a deterministic algorithm can be suitably defined as a ‘*realizer*’ in the sense of the *Brouwer-Heyting-Kolmogorov* rules (see [Ba16], p.5).

For instance, under *evidence-based* reasoning the formula $[(\forall x)F(x)]$ of the first-order Peano Arithmetic PA must always be interpreted *weakly* under the classical, standard, interpretation of PA (see [An16], Theorem 5.6) in terms of algorithmic *verifiability* (see [An16], Definition 1); where, if the PA-formula $[F(x)]$ interprets as an arithmetical relation $F^{*}(x)$ over the domain N of the natural numbers:

**Definition 2** (*Algorithmic verifiability*): The number-theoretical relation $F^{*}(x)$ is algorithmically *verifiable* if, and only if, for any natural number $n$, there is a deterministic algorithm $AL_{(F,\,n)}$ which can provide evidence for deciding the truth/falsity of each proposition in the finite sequence $\{F^{*}(1), F^{*}(2), \ldots, F^{*}(n)\}$.

Whereas $[(\forall x)F(x)]$ must always be interpreted *strongly* under the finitary interpretation of PA (see [An16], Theorem 6.7) in terms of algorithmic *computability* ([An16], Definition 2), where:

**Definition 3** (*Algorithmic computability*): The number-theoretical relation $F^{*}(x)$ is algorithmically *computable* if, and only if, there is a deterministic algorithm $AL_{F}$ that can provide evidence for deciding the truth/falsity of each proposition in the denumerable sequence $\{F^{*}(1), F^{*}(2), \ldots\}$.
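The force of the two definitions lies in the order of the quantifiers: computability demands one algorithm for the whole denumerable sequence, while verifiability only demands, for each $n$, *some* algorithm for the finite sequence up to $n$. A minimal Python sketch of the distinction (the predicate and all names are illustrative assumptions, not notation from [An16]):

```python
# Illustrative relation: F*(k) = "k is even" (any decidable relation
# would do for the sketch).
def F_star(k):
    return k % 2 == 0

# Algorithmic computability: a SINGLE algorithm AL_F decides F*(k)
# for every k in the denumerable sequence F*(1), F*(2), ...
def AL_F(k):
    return k % 2 == 0

# Algorithmic verifiability: for ANY GIVEN n there is SOME algorithm
# AL_{F, n} deciding the finite sequence F*(1), ..., F*(n); here a
# lookup table built afresh for each n, with no requirement that one
# algorithm serve every n.
def make_AL_F_n(n):
    table = {k: F_star(k) for k in range(1, n + 1)}
    return lambda k: table[k]

AL_F_5 = make_AL_F_n(5)
assert all(AL_F_5(k) == AL_F(k) for k in range(1, 6))
```

The lookup table is the point of the sketch: each $AL_{(F,\,n)}$ may be constructed ad hoc for its own $n$, whereas $AL_{F}$ must be one uniform procedure.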

The significance of the distinction between algorithmically *computable* reasoning based on algorithmically *computable* truth, and algorithmically *verifiable* reasoning based on algorithmically *verifiable* truth, is that it admits the following, hitherto unsuspected, consequences:

(i) PA has two well-defined interpretations over the domain N of the natural numbers (including $0$):

(a) the *weak* non-finitary standard interpretation ([An16], Theorem 5.6),

and

(b) a *strong* finitary interpretation ([An16], Theorem 6.7);

(ii) PA is *non-finitarily* consistent under the standard interpretation (a) ([An16], Theorem 5.7);

(iii) PA is *finitarily* consistent under the finitary interpretation (b) ([An16], Theorem 6.8).

**The significance of evidence-based reasoning for Computational Complexity**

In this paper submitted to ICLA 2019, I now show that the relevance of *evidence-based* reasoning, and of distinguishing between algorithmically *verifiable* and algorithmically *computable* number-theoretic functions (as defined above), for Computational Complexity is that it assures us of a formal foundation for placing in perspective, and complementing, an uncomfortably counter-intuitive entailment in number theory—Theorem 2 below—which conventional wisdom has treated as sufficient for concluding that the prime divisors of an integer *cannot* be proven to be mutually independent.

However, I show there that such informally perceived barriers are, in this instance, illusory; and that admitting the above distinction illustrates:

(a) *Why* the prime divisors of an integer are mutually independent (Theorem 3);

(b) *Why* determining whether the signature (Definition 4 below) of a given integer $n$—coded as the key in a modified Bazeries-cylinder (see Definition 7 of this paper) based combination lock—is that of a prime, or not, can be done in polynomial time (Corollary 4 of this paper); as compared to the $\tilde{O}(\log^{10.5} n)$ time given by Agrawal et al in [AKS04], and improved to $\tilde{O}(\log^{6} n)$ by Lenstra and Pomerance in [LP11], for determining whether the *value* of a given integer $n$ is that of a prime or not.

(c) *Why* it can be cogently argued that determining a factor of a given integer $n$ cannot be done in polynomial time.

**Definition 4** (*Signature of a number*): The *signature* of a given integer $n$ is the sequence $\{r_{i}\}$ where $n \equiv r_{i} \pmod{p_{i}}$ for all primes $p_{i} \leq \sqrt{n}$.

The signature is unique since, if $m$ and $n$ have the same signature, then $p_{i} \mid (m - n)$ for each such prime $p_{i}$; whence $m = n$, since $\prod_{i} p_{i} > n$ for all sufficiently large $n$ by appeal to Bertrand’s Postulate; and the uniqueness is easily verified for the remaining small values of $n$.
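On the reading of Definition 4 as the sequence of residues of $n$ modulo the primes $p_{i} \leq \sqrt{n}$ (a reconstruction, since the original symbols were lost), primality can be read off the signature directly: $n > 1$ is prime exactly when no residue in its signature is $0$. A hedged Python sketch of that reading:

```python
import math

def primes_upto(limit):
    """Simple sieve of Eratosthenes returning the primes <= limit."""
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = [False] * len(flags[p * p :: p])
    return [p for p, is_p in enumerate(flags) if is_p]

def signature(n):
    """Residues of n modulo the primes p <= sqrt(n): the assumed
    reading of Definition 4."""
    return [(p, n % p) for p in primes_upto(math.isqrt(n))]

def prime_by_signature(n):
    """n > 1 is prime iff no residue in its signature is 0, i.e. no
    prime p <= sqrt(n) divides n."""
    return n > 1 and all(r != 0 for _, r in signature(n))

assert [n for n in range(2, 30) if prime_by_signature(n)] == \
       [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note that the test inspects only residues, never the prime-power representation of $n$: this is the sense in which the *signature* of $n$ is consulted without the *value* of $n$.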

**Definition 5** (*Value of a number*): The *value* of a given integer $n$ is any well-defined interpretation—over the domain of the natural numbers—of the (unique) numeral $[n]$ that represents $n$ in the first-order Peano Arithmetic PA.

We note that Theorem 2 establishes a lower limit for [AKS04] and [LP11], because determining the *signature* of a given integer $n$ does not require knowledge of the *value* of $n$ as defined by the Fundamental Theorem of Arithmetic.

**Theorem 1** (*Fundamental Theorem of Arithmetic*): Every positive integer $n$ can be represented in exactly one way as a product of prime powers:

$n = p_{1}^{a_{1}} p_{2}^{a_{2}} \cdots p_{k}^{a_{k}}$

where $p_{1} < p_{2} < \cdots < p_{k}$ are primes and the $a_{i}$ are positive integers (including $n = 1$ as the empty product).

**Are the prime divisors of an integer mutually independent?**

In this paper I address the query:

**Query 1**: Are the prime divisors of an integer mutually independent?

**Definition 6** (*Independent events*): Two events are independent if the occurrence of one event does not influence (and is not influenced by) the occurrence of the other.

Intuitively, the prime divisors of an integer *seem* to be mutually independent by virtue of the Fundamental Theorem of Arithmetic

Moreover, the prime divisors of $n$ can also be *seen* to be mutually independent in the usual, linearly displayed, Sieve of Eratosthenes, where whether an integer is crossed out as a multiple of a prime $p$ is *obviously* independent of whether it is also crossed out as a multiple of another prime $q$:

~~1~~, 2, 3, ~~4~~, 5, ~~6~~, 7, ~~8~~, ~~9~~, ~~10~~, 11, ~~12~~, 13, ~~14~~, ~~15~~, ~~16~~, 17, ~~18~~, 19 …
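The display above can be generated mechanically; a small Python sketch (the `~~ ~~` strike-out notation simply mimics the display):

```python
def sieve_display(limit):
    """Strike out 1 and every proper multiple of each prime p: whether a
    number is crossed out by p does not depend on any other prime q."""
    crossed = {1}
    for p in range(2, limit + 1):
        if p not in crossed:                      # p is prime
            crossed.update(range(2 * p, limit + 1, p))
    return ", ".join(f"~~{i}~~" if i in crossed else str(i)
                     for i in range(1, limit + 1))

# Reproduces the linear display above for 1..19.
print(sieve_display(19))
```

Each prime's crossing-out pass touches exactly the multiples of that prime, regardless of which numbers any other prime has already struck.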

Despite such compelling evidence, conventional wisdom appears to accept as definitive the counter-intuitive conclusion that although we can *see* it as true, we cannot mathematically prove the following proposition as *true*:

**Proposition 1**: Whether or not a prime $p$ divides an integer $n$ is independent of whether or not a prime $q \neq p$ divides the integer $n$.

We note that such an *unprovable-but-intuitively-true* conclusion makes a *stronger* assumption than that in Gödel’s similar claim for his arithmetical formula $[(\forall x)R(x)]$—whose Gödel-number is $17\,Gen\,r$—in [Go31], p.26(2). *Stronger*, since Gödel does not assume his proposition to be *intuitively true*, but shows that though the arithmetical formula with Gödel-number $17\,Gen\,r$ is not provable in his Peano Arithmetic P yet, for any P-numeral $[n]$, the formula whose Gödel-number is $Sb(r\,17|Z(n))$ *is* P-provable, and therefore *meta-mathematically true* under any well-defined Tarskian interpretation of P (cf., [An16], Section 3.).

Expressed in computational terms (see [An16], Corollary 8.3), under any well-defined interpretation of P, Gödel’s formula $[R(x)]$ translates as an arithmetical relation, say $R^{*}(x)$, such that $R^{*}(x)$ is algorithmically *verifiable*, but not algorithmically *computable*, as always true over N, since $[R(n)]$ is P-provable for any P-numeral $[n]$ ([An16], Corollary 8.2).

We thus argue that a perspective which denies Proposition 1 is based on perceived barriers that reflect, and are peculiar to, *only* the argument that:

**Theorem 2**: There is no deterministic algorithm that, for any given $n$, and any given prime $p$, will evidence that the probability that $p$ divides $n$ is $\frac{1}{p}$, and the probability that $p$ does not divide $n$ is $1 - \frac{1}{p}$.

**Proof**: By a standard result in the Theory of Numbers ([Ste02], Chapter 2, p.9, Theorem 2.1), we cannot define a probability function for the probability that a randomly selected natural number is prime over the probability space of all the natural numbers.

(Compare with the informal argument in [HL23], pp.36-37.)

In other words, treating Theorem 2 as an absolute barrier does not admit the possibility—which has consequences for the resolution of outstanding problems in both the theory of numbers and computational complexity—that Proposition 1 is algorithmically *verifiable*, but not algorithmically *computable*, as *true*, since:

**Theorem 3**: For any given $n$, there is a deterministic algorithm that, given any prime $p$, will evidence that the probability that $p$ divides $n$ is $\frac{1}{p}$, and the probability that $p$ does not divide $n$ is $1 - \frac{1}{p}$.
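The probabilities asserted by Theorem 3, and the mutual independence they express, can be checked empirically over a full period; a short Python sketch (the primes 3 and 5, and the sample size, are arbitrary illustrative choices):

```python
from fractions import Fraction

p, q = 3, 5            # illustrative primes
N = 15_000             # a multiple of p*q, so the counts are exact

sample = range(1, N + 1)
div_p  = sum(1 for n in sample if n % p == 0)
div_q  = sum(1 for n in sample if n % q == 0)
div_pq = sum(1 for n in sample if n % p == 0 and n % q == 0)

# The probability that p divides n is 1/p (and similarly for q) ...
assert Fraction(div_p, N) == Fraction(1, p)
assert Fraction(div_q, N) == Fraction(1, q)
# ... and the joint probability factorises into 1/(p*q): the two
# divisibility events are mutually independent.
assert Fraction(div_pq, N) == Fraction(1, p) * Fraction(1, q)
```

The deterministic algorithm here is simply the residue computation `n % p`; the frequencies are exact because the sample length is a multiple of $pq$.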

**Author’s working archives & abstracts of investigations**

**Can Gödel be held responsible for not clearly distinguishing—in his seminal 1931 paper on formally undecidable propositions (pp.596-616, ‘From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931’, Jean van Heijenoort, Harvard University Press, 1976 printing)—between the implicit circularity that is masked by the non-constructive nature of his proof of undecidability in PM, and the lack of any circularity in his finitary proof of undecidability in Peano Arithmetic?**

“The analogy of this argument with the Richard antinomy leaps to the eye. It is closely related to the “Liar” too;[*Fn.14*] for the undecidable proposition $[R(q); q]$ states that $q$ belongs to $K$, that is, by (1), that $[R(q); q]$ is not provable. We therefore have before us a proposition that says about itself that it is not provable [in PM].[*Fn.15*] …

[*Fn.14*] “Any epistemological antinomy could be used for a similar proof of the existence of undecidable propositions.”

[*Fn.15*] “Contrary to appearances, such a proposition involves no faulty circularity, for initially it [only] asserts that a certain well-defined formula (namely, the one obtained from the $q$th formula in the lexicographic order by a certain substitution) is unprovable. Only subsequently (and so to speak by chance) does it turn out that this formula is precisely the one by which the proposition itself was expressed.”

It is a question worth asking, if we heed Abel-Luis Peralta, who is a Graduate in Scientific Calculus and Computer Science in the Faculty of Exact Sciences at the National University of La Plata in Buenos Aires, Argentina; and who has been contending in a number of posts on his Academia web-page that:

(i) Gödel’s semantic definition of ‘$K$’, and therefore of ‘$[R(q); q]$’, is not only:

(a) self-referential under interpretation—in the sense of the above quote (pp.597-598, van Heijenoort) from Gödel’s Introduction in his 1931 paper ‘On Formally Undecidable Propositions of Principia Mathematica and Related Systems I’ (pp.596-616, van Heijenoort);

but that:

(b) neither of the definitions can be verified by a deterministic Turing machine as yielding a valid formula of PM.

Peralta is, of course, absolutely right in his contentions.

However, such non-constructiveness is a characteristic of any set-theoretical system in which PM is interpretable; and in which, by Gödel’s self-confessed Platonism (apparent in his footnote #15 in the quote above), his definitions of ‘$K$’ and ‘$[R(q); q]$’ do not need to be verifiable by a deterministic Turing machine in order to be treated as valid formulas of PM.

*Reason*: By the usual axiom of separation of any formal set theory such as ZFC in which PM is interpreted, Gödel’s set-theoretical definition (p.598, Heijenoort):

$n \in K \equiv \overline{Bew}[R(n); n]$

lends legitimacy to ‘$[R(q); q]$’ as a PM formula.

Thus Gödel can formally assume—without further proof, by appeal simply to the axiom of choice of ZFC—that the PM formulas with exactly one variable—of the type of natural numbers—can be well-ordered in a sequence in some way such as, for example (Fn.11, p.598, Heijenoort):

“… by increasing the sum of the finite sequence of integers that is the ‘class sign’, and lexicographically for equal sums.”

We cannot, though, conclude from this that:

(ii) Gödel’s formally undecidable P-formula, say $[(\forall x)R(x)]$—whose Gödel-number is defined as $17\,Gen\,r$ in Gödel’s proof of his Theorem VI (on pp.607-609 of van Heijenoort)—also cannot be verified by a deterministic Turing machine to be a valid formula of Gödel’s Peano Arithmetic P.

*Reason*: The axioms of set-theoretical systems such as PM, ZF, etc. would all admit—under a well-defined interpretation, if any—infinite elements, in the putative domain of any such interpretation, which are not Turing-definable.

Nevertheless, to be fair to two generations of scholars who—apart from those who are able to comfortably wear the logician’s hat—have laboured in attempts to place the philosophical underpinnings of Gödel’s reasoning (in his 1931 paper) in a coherent perspective (see this post; also this and this), I think Gödel must, to some extent, be held responsible—but in no way accountable—for the lack of a clear-cut distinction between the non-constructivity implicit in his semantic proof in (i), and the finitarity that he explicitly ensures for his syntactic proof in (ii).

*Reason*: Neither in his title, nor elsewhere in his paper, does Gödel categorically state that his goal was:

(iii) not only to demonstrate the existence of formally undecidable propositions in PM, a system which admits non-finitary elements under any putative interpretation;

(iv) but also to prevent the admittance of non-finitary elements—precisely those which would admit conclusions such as (ii)—when demonstrating the existence of formally undecidable propositions in ‘related’ systems such as his Peano Arithmetic P.

He merely hints at this by stating (see quote below from pp.597-599 of van Heijenoort) that his demonstration of (iii) is a ‘sketch’ that lacked the precision which he intended to achieve in (iv):

“Before going into details, we shall first sketch the main idea of the proof, of course without any claim to complete precision. The formulas of a formal system (we restrict ourselves here to the system PM) in outward appearance are finite sequences of primitive signs (variables, logical constants, and parentheses or punctuation dots), and it is easy to state with complete precision which sequences of primitive signs are meaningful formulas and which are not….

by:

(v) weakening the implicit assumption—of the decidability of the semantic truth of PM-propositions under any well-defined interpretation of PM—which underlies his proof of the existence of formally undecidable set-theoretical propositions in PM;

“The method of proof just explained can clearly be applied to any formal system that, first, when interpreted as representing a system of notions and propositions, has at its disposal sufficient means of expression to define the notions occurring in the argument above (in particular, the notion “provable formula”) and in which, second, every provable formula is true in the interpretation considered. The purpose of carrying out the above proof with full precision in what follows is, among other things, to replace the second of the assumptions just mentioned by a purely formal and much weaker one.”

and:

(vi) insisting—in his proof of the existence of formally undecidable arithmetical propositions in his Peano Arithmetic P—upon the introduction of a methodology for constructively assigning unique truth values to only those (primitive recursive) quantified number-theoretic assertions (#1 to #45 on pp.603-606 of van Heijenoort) that are bounded when interpreted over the domain N of the natural numbers (footnote #34 on p.603 of van Heijenoort):

“Wherever one of the signs $(x)$, $(Ex)$, or $\varepsilon x$ occurs in the definitions below, it is followed by a bound on $x$. This bound serves merely to ensure that the notion defined is recursive (see Theorem IV). But in most cases the extension of the notion defined would not change if this bound were omitted.”
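The computational force of such a bound is easily illustrated: a bounded quantifier compiles to a finite loop that always halts, whereas an unbounded existential search halts only if a witness exists. A Python sketch (the predicates are illustrative):

```python
def exists_bounded(phi, bound):
    """(Ey)(y <= bound)phi(y): a finite loop that always halts,
    mirroring the bound Goedel attaches in definitions #1 to #45."""
    return any(phi(y) for y in range(bound + 1))

def exists_unbounded(phi):
    """(Ey)phi(y): halts only if a witness exists; with no bound the
    search may run forever, and the notion is no longer recursive."""
    y = 0
    while True:
        if phi(y):
            return True
        y += 1

# Bounded: decidable either way.
assert exists_bounded(lambda y: y * y == 49, 10) is True
assert exists_bounded(lambda y: y * y == 50, 10) is False
# Unbounded: this call returns only because a witness happens to exist.
assert exists_unbounded(lambda y: y * y == 49) is True
```

With a false predicate, `exists_unbounded` would loop forever: omitting the bound is exactly what forfeits the guarantee of a decision.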

From today’s perspective, one could reasonably hold that—as Peralta implicitly contends—Gödel is misleadingly suggesting (in the initial quote above from pp.597-599 of van Heijenoort) that his definitions of ‘$K$’ and ‘$[R(q); q]$’ may be treated as yielding ‘meaningful’ formulas of PM which are well-definable constructively (in the sense of being definable by a deterministic Turing machine).

In my previous post I detailed precisely why such an assumption would be fragile, by showing how the introduction of the boundedness Gödel insisted upon in (vi) distinguishes:

(vii) Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (pp.598-599 of van Heijenoort), which admits Peralta’s contention (i);

from:

(viii) Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (pp.607-609 of van Heijenoort), which does not admit the corresponding contention (ii).

Moreover, we note that:

(1) Whereas Gödel can—albeit non-constructively—claim that his definition of ‘$K$’ yields a formula in PM, we cannot claim, correspondingly, that his number-theoretic relation ‘$Bew(x)$’ is a formula in his Peano Arithmetic P.

(2) The latter is a number-theoretic relation defined by Gödel in terms of his primitive recursive relation #45, ‘$x\,B\,y$’, as:

#46. $Bew(x) \equiv (\exists y)\, y\,B\,x$.

(3) In Gödel’s terminology, ‘$Bew(x)$’ translates under interpretation over the domain N of the natural numbers as:

‘$x$ is the Gödel-number of some provable formula of Gödel’s Peano Arithmetic P’.

(4) However, unlike Gödel’s primitive recursive functions and relations #1 to #45, both ‘$Bew(x)$’ and ‘$\neg Bew(x)$’ are number-theoretic relations which are not primitive recursive—which means that we cannot assume that they are effectively decidable by a Turing machine under interpretation in N.

(5) *Reason*: Unlike in Gödel’s definitions #1 to #45 (see footnote #34 on p.603 of van Heijenoort, quoted above), there is no bound on the quantifier ‘$(\exists y)$’ in the definition of ‘$Bew(x)$’.

Hence, by Turing’s Halting Theorem, we cannot claim—in the absence of specific proof to the contrary—that there must be some deterministic Turing machine which will determine whether or not, for any given natural number $n$, the assertion ‘$Bew(n)$’ is true under interpretation in N.

This is the crucial difference between Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (which admits Peralta’s contention (i)), and Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (which does not admit the corresponding contention (ii)).

(6) We cannot, therefore—in the absence of specific proof to the contrary—claim by Gödel’s Theorems V or VII that there must be some P-formula, say $[Bew(x)]$ (corresponding to the PM-formula ‘$Bew$’), such that, for any given natural number $n$:

(a) If $Bew(n)$ is true under interpretation in N, then $[Bew(n)]$ is provable in P;

(b) If $\neg Bew(n)$ is true under interpretation in N, then $[\neg Bew(n)]$ is provable in P.

**A: Is Gödel’s reasoning really kosher?**

Many scholars yet harbour a lingering suspicion that Gödel’s definition of his formally undecidable arithmetical proposition involves a latent contradiction—arising from a putative, implicit, circular self-reference—that is masked by unverifiable, even if not patently invalid, mathematical reasoning.

The following proof of Gödel’s Theorem VI of his 1931 paper is intended to:

strip away the usual mathematical jargon that shrouds proofs of Gödel’s argument, and that makes his—admittedly arcane—reasoning difficult for a non-logician to unravel;

and

show that, and why—unlike in the case of the paradoxical ‘Liar’ sentence: ‘This sentence is a lie’—Gödel’s proposition does not involve any circular self-reference that could yield a Liar-like contradiction, either in a formal mathematical language, or when interpreted in any language of common discourse.

**B: Gödel’s 45 primitive recursive arithmetic functions and relations**

We begin by noting that:

(1) In his 1931 paper on formally ‘undecidable’ arithmetical propositions, Gödel shows that, given a well-defined system of Gödel-numbering, every formula of a first-order Peano Arithmetic such as PA can be Gödel-numbered, and recognised by Gödel’s primitive recursive relation #23, $Form(x)$, which is true if, and only if, $x$ is the Gödel-number (GN) of a formula of PA.

(2) So, given any natural number $n$, (1) allows us to decompose $n$ and effectively determine whether, or not, $n$ is the GN of some PA formula.

(3) Gödel also defines a primitive recursive relation #44, $Bw(x)$, which is true if, and only if, $x$ is the GN of a finite sequence of formulas in PA, each of which is either an axiom, or an immediate consequence of two preceding formulas in the sequence.

(4) So, given any natural number $n$, (3) allows us to effectively determine whether, or not, $n$ is the GN of a proof sequence in PA.

(5) Further, Gödel defines a primitive recursive relation #45, $x\,B\,y$, which is true if, and only if, $x$ is the GN of a proof sequence in PA whose last formula has the GN $y$.

(6) Gödel then defines a primitive recursive relation, say $Q(x, y)$, such that, for any $x, y$:

$Q(x, y)$ is true if, and only if, $x$ happens to be a GN that can be decomposed into a proof sequence whose last member is the PA formula $[G(y)]$, and $y$ happens to be a GN that decomposes into a PA-formula $[G(w)]$ with only one variable $[w]$.
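The effective decomposition appealed to in (2) and (4) can be illustrated with a toy prime-power Gödel numbering in Python (the symbol table and coding scheme here are simplified stand-ins, not Gödel’s own assignments):

```python
# Toy symbol table (a simplified stand-in for Goedel's own coding):
# 'A' stands for the universal quantifier, '~' for negation, 'v' for
# disjunction.
SYMBOLS = ["0", "S", "=", "(", ")", "x", "A", "~", "v"]

def nth_prime(i):
    """The i-th prime (1-indexed), by trial division: fine for a toy."""
    count, candidate = 0, 1
    while count < i:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def godel_number(formula):
    """Encode a symbol string as the product of p_i ** (code of the
    i-th symbol)."""
    n = 1
    for i, sym in enumerate(formula, start=1):
        n *= nth_prime(i) ** (SYMBOLS.index(sym) + 1)
    return n

def decode(n):
    """Effectively decompose a GN back into its symbol string, or fail:
    the decidable decomposition appealed to in (2) and (4)."""
    out, i = [], 1
    while n > 1:
        p, e = nth_prime(i), 0
        while n % p == 0:
            n //= p
            e += 1
        if not 1 <= e <= len(SYMBOLS):
            raise ValueError("not the GN of a formula")
        out.append(SYMBOLS[e - 1])
        i += 1
    return "".join(out)

assert decode(godel_number("S0=S0")) == "S0=S0"
```

Because factorisation into prime powers is unique (Theorem 1 above), every natural number either decomposes into a unique symbol string or is rejected, and either outcome is reached in finitely many steps.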

(7) The essence of Gödel’s Theorem VI lies in answering the question:

**Query 1:** Is there any natural number $n$ for which the arithmetical proposition ‘$Q(m, n)$ is true for some $m$’ is formally undecidable in PA?

**C: Gödel’s reasoning in Peano Arithmetic**

(8) Now, by Gödel’s Theorem VII (a standard representation theorem of arithmetic), $Q(x, y)$ can be expressed in PA by some (formally well-defined) formula $[F(x, y)]$ such that, for any $m, n$:

(a) If $Q(m, n)$ is true, then $[F(m, n)]$ is PA-provable;

(b) If $\neg Q(m, n)$ is true, then $[\neg F(m, n)]$ is PA-provable.

(9) Further, by (6) and (8), for any $m, n$, if $n$ is the GN of a PA-formula $[G(w)]$ with only one variable $[w]$, then:

(a) If $Q(m, n)$ is true, then $[F(m, n)]$ is PA-provable; and $m$ is a PA-proof of $[G(n)]$;

(b) If $\neg Q(m, n)$ is true, then $[\neg F(m, n)]$ is PA-provable; and $m$ is not a PA-proof of $[G(n)]$.

(10) In his Theorem VI, Gödel then argues as follows:

(a) Let $q$ be the GN of the formula $[(\forall x)\neg F(x, y)]$, where $[F(x, y)]$ is the formula defined in (8).

(b) Let $[R(x)]$ denote the formula $[\neg F(x, q)]$.

(c) Let $r$ be the GN of $[R(x)]$.

(d) Let $17\,Gen\,r$ be the GN of $[(\forall x)R(x)]$, i.e., of $[(\forall x)\neg F(x, q)]$.

(11) We note that all the above primitive recursive relations are formally well-defined within the standard primitive recursive arithmetic PRA; and all the PA-formulas—as well as their corresponding Gödel-numbers—are well-defined in the first-order Peano Arithmetic PA.

In other words, as Gödel emphasised in his paper, the 46—i.e., #1 to #45, together with $Q(x, y)$—PRA functions and relations that he defines are all bounded, and therefore effectively decidable as true or false over the domain N of the natural numbers; whilst the PA-formulas that he defines do not involve any reference—or self-reference—to either the meaning or the truth/falsity of any PA-formulas under an interpretation in N, but only to their PA-provability which, he shows, is effectively decidable by his system of Gödel-numbering and his definition of the primitive recursive relation $x\,B\,y$.

(12) If we now substitute $q$ for $n$, and $[(\forall x)\neg F(x, y)]$ for $[G(w)]$, in (9) we have (since $q$ is the GN of $[(\forall x)\neg F(x, y)]$, and $[G(q)]$ is then $[(\forall x)R(x)]$) that:

(i) If $Q(m, q)$ is true, then $[F(m, q)]$ is PA-provable; whence $m$ is a PA-proof of $[(\forall x)R(x)]$;

(ii) If $\neg Q(m, q)$ is true, then $[\neg F(m, q)]$ is PA-provable; whence $m$ is not a PA-proof of $[(\forall x)R(x)]$.

**Hence $n = q$ answers Query 1 affirmatively.**

**D: Gödel’s conclusions**

(13) Gödel then concludes that, if PA is consistent, then:

By (12)(i), if $Q(m, q)$ is true for some $m$, then both $[F(m, q)]$ and $[(\forall x)R(x)]$ are PA-provable—a contradiction since, by instantiation in PA, the latter implies that $[\neg F(m, q)]$ is provable in PA.

Hence $[(\forall x)R(x)]$, whose GN is $17\,Gen\,r$, is not provable in PA if PA is consistent.

(14) Moreover, if PA is assumed to also be $\omega$-consistent (which means that we cannot have a PA-provable formula $[\neg(\forall x)G(x)]$ such that $[G(n)]$ is also provable in PA for any given numeral $[n]$) then:

By (13), $m$ is not a PA-proof of $[(\forall x)R(x)]$ for any given $m$; whence $[\neg F(m, q)]$ (i.e. $[R(m)]$) is PA-provable for any given $m$ by (12)(ii).

Hence $[\neg(\forall x)R(x)]$, whose GN is $Neg(17\,Gen\,r)$, is not provable in PA.

**E: Gödel’s $[(\forall x)R(x)]$ does not refer to itself**

We note that Gödel’s formula $[(\forall x)R(x)]$—whose GN is $17\,Gen\,r$—does not refer to itself since it is defined in terms of the natural number $q$, and not in terms of the natural number $17\,Gen\,r$.

**F: Somewhere, far beyond Gödel**

The consequences of Gödel’s path-breaking answer to Query 1 are far-reaching (as detailed in this thesis).

For instance, taken together with the proof that PA is categorical with respect to algorithmic computability (Corollary 7.2 of this paper), and that PA is not $\omega$-consistent (Corollary 8.4 of this paper), the above entails that:

There can be no interpretation of Gödel’s definition of his formally undecidable arithmetical proposition over the domain of the natural numbers—whether expressed mathematically or in any language of common discourse—that could lead to a contradiction;

Gödel’s $[(\forall x)R(x)]$ is not a formally undecidable arithmetical proposition, since $[\neg(\forall x)R(x)]$ is PA-provable (see Corollary 8.2 of this paper).

(*Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.*)

**Ferguson’s and Priest’s thesis**

In a brief, but provocative, *review* of what they term as “the enduring evolution of logic” over the ages, the authors of Oxford University Press’ recently released ‘*A Dictionary of Logic*‘, philosophers *Thomas Macaulay Ferguson* and *Graham Priest*, take to task what they view as a Kant-influenced manner in which logic is taught as a first course in most places in the world:

“… as usually ahistorical and somewhat dogmatic. This is what logic is; just learn the rules. It is as if Frege had brought down the tablets from Mount Sinai: the result is God-given, fixed, and unquestionable.”

Ferguson and Priest conclude their review by remarking that:

“Logic provides a theory, or set of theories, about what follows from what, and why. And like any theoretical inquiry, it has evolved, and will continue to do so. It will surely produce theories of greater depth, scope, subtlety, refinement—and maybe even truth.”

However, it is not obvious whether that is prescient optimism, or a tongue-in-cheek exit line!

**A nineteenth century parody of the struggle to define ‘truth’ objectively**

For, if anything, the developments in logic since around 1931 have—seemingly in gross violation of the hallowed principle of Ockham’s razor, and its crude, but highly effective, modern avatar KISS—indeed produced a plethora of theories of great depth, scope, subtlety, and refinement.

These, however, seem to have more in common with the cynical twentieth century emphasis on subjective, unverifiable, ‘truth’, than with the concept of an objective, evidence-based, ‘truth’ that centuries of philosophers and mathematicians strenuously struggled to differentiate and express.

A struggle reflected so eloquently in *this nineteenth century quote*:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

… Lewis Carroll (Charles L. Dodgson), ‘Through the Looking-Glass’, chapter 6, p. 205 (1934 ed.). First published in 1872.

**Making sense of mathematical propositions about infinite processes**

It was, indeed, an epic struggle which culminated in the nineteenth century standards of rigour successfully imposed—in no small measure by the works of Augustin-Louis Cauchy and Karl Weierstrass—on verifiable interpretations of mathematical propositions about infinite processes involving real numbers.

A struggle, moreover, which should have culminated equally successfully in similar twentieth century standards—on verifiable interpretations of mathematical propositions containing references to infinite computations involving integers—sought to be imposed in 1936 by Alan Turing upon philosophical and mathematical discourse.

**The Liar paradox**

For it follows from Turing’s 1936 reasoning that where quantification is not, or cannot be, explicitly defined in formal logical terms—e.g. the classical expression of the Liar paradox as ‘This sentence is a lie’—a paradox cannot per se be considered as posing serious linguistic or philosophical concerns (see, for instance, the series of four posts beginning *here*).

Of course—as reflected implicitly in Kurt Gödel’s seminal 1931 paper on undecidable arithmetical propositions—it would be a matter of serious concern if the word ‘This’ in the English language sentence, ‘This sentence is a lie’, could be validly viewed as implicitly implying that:

(i) there is a constructive infinite enumeration of English language sentences;

(ii) to each of which a truth-value can be constructively assigned by the rules of a two-valued logic; and,

(iii) in which ‘This’ refers uniquely to a particular sentence in the enumeration.

**Gödel’s influence on Turing’s reasoning**

However, Turing’s constructive perspective had the misfortune of being subverted by a knee-jerk, anti-establishment, culture that was—and apparently remains to this day—overwhelmed by Gödel’s powerful Platonic—and essentially unverifiable—mathematical and philosophical 1931 interpretation of his own construction of an arithmetical proposition that is formally unprovable, but undeniably true under any definition of ‘truth’ in any interpretation of arithmetic over the natural numbers.

Otherwise, I believe that Turing could easily have provided the necessary constructive interpretations of arithmetical truth—sought by David Hilbert for establishing the consistency of number theory finitarily—which are addressed in *the following paper* due to appear in the December 2016 issue of ‘*Cognitive Systems Research*’.

**What is logic: using Ockham’s razor**

Moreover, the paper endorses the implicit orthodoxy of an Ockham’s razor influenced perspective—which Ferguson and Priest seemingly find wanting—that logic is simply a deterministic set of rules that must constructively assign the truth values of ‘truth/falsity’ to the sentences of a language.

It is a view that I expressed earlier as the key to a possible resolution of the *EPR paradox* in the *following paper* that I *presented* on 26th June at the workshop on *Emergent Computational Logics* at UNILOG’2015, Istanbul, Turkey:

where I introduced the definition:

A finite set of rules $\mathcal{R}$ is a *Logic* of a formal mathematical language $\mathcal{L}$ if, and only if, $\mathcal{R}$ constructively assigns unique truth-values:

(a) Of provability/unprovability to the formulas of $\mathcal{L}$; and

(b) Of truth/falsity to the sentences of the Theory which is defined semantically by the $\mathcal{R}$-interpretation of $\mathcal{L}$ over a structure $\mathcal{S}$.

I showed there that such a definitional rule-based approach to ‘logic’ and ‘truth’ allows us to:

Equate the provable formulas of the first order Peano Arithmetic PA with the PA formulas that can be evidenced as `true’ under an algorithmically computable interpretation of PA over the structure of the natural numbers;

Adequately represent some of the philosophically troubling abstractions of the physical sciences mathematically;

Interpret such representations unambiguously; and

Conclude further:

First that the concept of infinity is an emergent feature of any mechanical intelligence whose true arithmetical propositions are provable in the first-order Peano Arithmetic; and

Second that discovery and formulation of the laws of quantum physics lies within the algorithmically computable logic and reasoning of a mechanical intelligence whose logic is circumscribed by the first-order Peano Arithmetic.

(*Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.*)

**A The Economist: The return of the machinery question**

In a Special Report on Artificial Intelligence in its issue of 25th June 2016, ‘*The return of the machinery question*‘, *The Economist* suggests that both cosmologist *Stephen Hawking* and entrepreneur *Elon Musk* share to some degree the:

“… fear that AI poses an existential threat to humanity, because superintelligent computers might not share mankind’s goals and could turn on their creators”.

**B Our irrational propensity to fear that which we are drawn to embrace**

This is surprising, since I suspect both would readily agree that, if anything should scare us, it is our irrational propensity to fear that which we are drawn to embrace!

And therein should lie not only our comfort, but perhaps also our salvation.

For Artificial Intelligence is constrained by rationality; Human Intelligence is not.

An Artificial Intelligence must, whether individually or collectively, create and/or destroy only rationally. Humankind can and does, both individually and collectively, create and destroy irrationally.

**C Justifying irrationality**

For instance, as the legatees of logicians Kurt Gödel and Alfred Tarski have amply demonstrated, a Human Intelligence can easily be led to believe that some statements of even the simplest of mathematical languages—Arithmetic—must be both ‘formally undecidable’ and ‘true’, even in the absence of any objective yardstick for determining what is ‘true’!

**D Differentiating between Human reasoning and Mechanistic reasoning**

An Artificial Intelligence, however, can only treat as true that which can be proven—by its rules—to be true by an objective assignment of ‘truth’ and ‘provability’ values to the propositions of the language that formally expresses its mechanical operations—Arithmetic.

The implications of the difference are not obvious; but that the difference could be significant is the thesis of *this paper* which is due to appear in the December 2016 issue of Cognitive Systems Research:

‘*The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning*‘.

**E Respect for evidence-based ‘truth’ could be Darwinian**

More importantly, the paper demonstrates that both Human Intelligence—whose evolution is accepted as Darwinian—and Artificial Intelligence—whose evolution it is ‘feared’ may be Darwinian—share a common (Darwinian?) respect for an accountable concept of ‘truth’.

A respect that should make both Intelligences fitter to survive by recognising what philosopher *Christopher Mole* describes in *this invitational blogpost* as the:

“… importance of the rapport between an organism and its environment”

—an environment that can obviously accommodate the birth, and nurture the evolution, of both intelligences.

So it may not be too far-fetched to conjecture that the evolution of both intelligences must also share a Darwinian respect for the kind of human values—towards protecting intelligent life forms—that is visibly emerging, in however limited or flawed a guise, as an inherent characteristic of human evolution; an evolution which, whatever the cost, could optimistically be viewed as struggling to incrementally strengthen, and simultaneously integrate, individualism (fundamental particles) into nationalism (atoms), into multi-nationalism (molecules) and, possibly, into universalism (elements).

**F The larger question: Should we fear an extra-terrestrial Intelligence?**

From a yet broader perspective, our apprehensions about the evolution of a rampant Artificial Intelligence created by a Frankensteinian Human Intelligence should, perhaps, more rightly be addressed—as some have urged—within the larger uncertainty posed by SETI:

*Is there a rational danger to humankind in actively seeking an extra-terrestrial intelligence?*

I would argue that any answer would depend on how we articulate the question and that, in order to engage in a constructive and productive debate, we need to question—and reduce to a minimum—some of our most cherished mathematical and scientific beliefs and fears which cannot be communicated objectively.

(*Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.*)

**We investigate whether the probabilistic distribution of prime numbers can be treated as a heuristic model of quantum behaviour, since it too can be treated as a quantum phenomenon, with a well-defined binomial probability function that is algorithmically computable, where the conjectured values of $\pi(n)$ differ from the actual values with a binomial standard deviation, and where we define a phenomenon as a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable, but not algorithmically computable.**

**1. Thesis: The concept of ‘mathematical truth’ must be accountable**

The thesis of this investigation is that a major philosophical challenge—which has so far inhibited a deeper understanding of the quantum behaviour reflected in the mathematical representation of some laws of nature (see, for instance, *this paper* by Eamonn Healey)—lies in holding to account the uncritical acceptance of propositions of a mathematical language as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology of accountability for objectively evidencing such acceptance.

**2. The concept of ‘set-theoretical truth’ is not accountable**

Since current folklore is that all scientific truths can be expressed adequately, and communicated unambiguously, in the first order Set Theory ZF, and since the Axiom of Infinity of ZF cannot—even in principle—be objectively evidenced as true under any putative interpretation of ZF (as we argue in *this post*), an undesirable consequence of such an uncritical acceptance is that the distinction between the truths of mathematical propositions under interpretation which can be objectively evidenced, and those which cannot, is not evident.

**3. The significance of such accountability for mathematics**

The significance of such a distinction for mathematics is highlighted in this paper due to appear in the December 2016 issue of *Cognitive Systems Research*, where we address this challenge by considering the two finitarily accountable concepts of algorithmic verifiability and algorithmic computability (first introduced in *this paper* at the *Symposium on Computational Philosophy* at the AISB/IACAP World Congress 2012-Alan Turing 2012, Birmingham, UK).

**(i) Algorithmic verifiability**

A number-theoretical relation $F(x)$ is algorithmically verifiable if, and only if, for any given natural number $n$, there is an algorithm $AL_{(F, n)}$ which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence $\{F(1), F(2), \ldots, F(n)\}$.

**(ii) Algorithmic computability**

A number theoretical relation $F(x)$ is algorithmically computable if, and only if, there is an algorithm $AL_{F}$ that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence $\{F(1), F(2), \ldots\}$.

**(iii) Algorithmic verifiability vis à vis algorithmic computability**

We note that algorithmic computability implies the existence of an algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of an algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions.

From the point of view of a finitary mathematical philosophy—which is the constraint within which an applied science ought to ideally operate—the significant difference between the two concepts could be expressed by saying that we may treat the decimal representation of a real number as corresponding to a physically measurable limit—and not only to a mathematically definable limit—if and only if such representation is definable by an algorithmically computable function (Thesis 1 on p.9 of *this paper* that was presented on 26th June at the workshop on *Emergent Computational Logics* at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey).

We note that although every algorithmically computable relation is algorithmically verifiable, the converse is not true.
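The asymmetry can be sketched in code (my illustration, not from the cited papers): an algorithmically computable relation comes with a single algorithm deciding $F(n)$ for every $n$, whereas an algorithmically verifiable relation need only admit, for each bound $n$, some algorithm (a finite lookup table already qualifies) deciding the finite sequence $F(1), \ldots, F(n)$:

```python
# Algorithmically computable: ONE algorithm decides F(n) for every n.
def is_even(n: int) -> bool:
    return n % 2 == 0

# Algorithmically verifiable: for EACH bound n there is SOME algorithm
# deciding F(1), ..., F(n); a finite lookup table is such an algorithm,
# even when no single decider covers the whole denumerable sequence.
def make_verifier(prefix):
    """Return an algorithm deciding F(1)..F(len(prefix)) from a table."""
    table = dict(enumerate(prefix, start=1))
    def decide(k: int) -> bool:
        return table[k]   # defined only up to the given bound
    return decide

decide_upto_3 = make_verifier([True, False, True])
assert is_even(10) and decide_upto_3(2) is False
```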

We show in the CSR paper how such accountability helps define finitary truth assignments that differentiate human reasoning from mechanistic reasoning in arithmetic by identifying two, hitherto unsuspected, Tarskian interpretations of the first order Peano Arithmetic PA, under both of which the PA axioms interpret as finitarily true over the domain $\mathbb{N}$ of the natural numbers, and the PA rules of inference preserve such truth finitarily over $\mathbb{N}$.

**4. The ambit of human reasoning vis à vis the ambit of mechanistic reasoning**

One corresponds to the classical, non-finitary, putative standard interpretation of PA over $\mathbb{N}$, and can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions.

The other corresponds to a finitary interpretation of PA over $\mathbb{N}$ that circumscribes the ambit of mechanistic reasoning about ‘true’ arithmetical propositions, and establishes the long-sought consistency of PA (see *this post*); this establishes PA as a mathematical language of unambiguous communication for the mathematical representation of physical phenomena.

**5. The significance of such accountability for the mathematical representation of physical phenomena**

The significance of such a distinction for the mathematical representation of physical phenomena is highlighted in *this paper* that was presented on 26th June at the workshop on *Emergent Computational Logics* at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey, where we showed how some of the seemingly paradoxical elements of quantum mechanics may resolve if we define:

**Quantum phenomena**: *A phenomenon is a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable but not algorithmically computable.*

**6. The mathematical representation of quantum phenomena that is determinate but not predictable**

By considering the properties of Gödel’s $\beta$-function (see 4.1 on p.8 of *this preprint*)—which allows us to strongly represent any non-terminating sequence of natural numbers by an arithmetical function—it would follow that any projection of the future values of a quantum-phenomenon-associated, algorithmically verifiable, function is consistent with an infinity of algorithmically computable functions, all of whose past values are identical to the algorithmically verifiable past values of the given function. The phenomenon itself would therefore be essentially unpredictable if it cannot be represented by an algorithmically computable function.
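Assuming the function referred to above is Gödel’s standard $\beta$-function, the following sketch shows how any finite initial segment of a sequence of natural numbers (for instance, a finite record of past observations) can be captured by just two numbers; the brute-force search below is my simplification of the usual Chinese Remainder Theorem argument:

```python
# Goedel's beta function: beta(b, c, i) = b mod (1 + (i+1)*c).  For any
# finite sequence there exist b, c with beta(b, c, i) equal to the i'th
# term, which is what lets arithmetic speak about finite sequences.
from math import lcm

def beta(b: int, c: int, i: int) -> int:
    return b % (1 + (i + 1) * c)

def encode(seq):
    """Find (b, c) with beta(b, c, i) == seq[i], by brute-force search."""
    n = len(seq)
    # c divisible by lcm(1..n) and larger than max(seq) makes the moduli
    # 1 + (i+1)*c pairwise coprime and large enough to hold each term.
    c = lcm(*range(1, n + 1)) * (max(seq) + 1)
    moduli = [1 + (i + 1) * c for i in range(n)]
    b = 0
    while any(b % m != s for m, s in zip(moduli, seq)):
        b += 1
    return b, c

b, c = encode([3, 1, 2])
assert [beta(b, c, i) for i in range(3)] == [3, 1, 2]
```

Many distinct pairs $(b, c)$ extend the same finite past differently, which mirrors the point above: a finite record of past values never singles out one algorithmically computable continuation.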

However, since the algorithmic verifiability of any quantum phenomenon shows that it is mathematically determinate, it follows that the physical phenomenon itself must observe determinate laws.

**7. Such representation does not need to admit multiverses**

Hence (contrary to any interpretation that admits unverifiable multiverses) only one algorithmically computable extension of the function is consistent with the law determining the behaviour of the phenomena, and each possible extension must therefore be associated with a probability that the next observation of the phenomena is described by that particular extension.

**8. Is the probability of the future behaviour of quantum phenomena definable by an algorithmically computable function?**

The question arises: although we cannot represent quantum phenomena explicitly by an algorithmically computable function, does such a phenomenon lend itself to an algorithmically computable probability of its future behaviour in the above sense?

**9. Can primes yield a heuristic model of quantum behaviour?**

We now show that the distribution of prime numbers, denoted by the arithmetical prime counting function $\pi(n)$, is a quantum phenomenon in the above sense, with a well-defined probability function that is algorithmically computable.

**10. Two prime probabilities**

We consider the two probabilities:

(i) The probability of selecting a number that has the property of being prime from a given set $S$ of numbers;

*Example 1*: I have a bag containing numbers in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? This is the basis for setting odds in games such as roulette.

(ii) The probability of determining a proper factor of a given number $n$.

*Example 2*: I give you a $d$-digit combination lock along with a $d$-digit number $n$. The lock only opens if you set the combination to a proper factor of $n$ which is greater than $1$. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.
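Both probabilities can be made concrete with a small computation (the numbers below are my own illustrative choices, and the brute-force enumeration is only a toy; real RSA moduli are far too large to enumerate this way):

```python
from fractions import Fraction

# Example 1: a bag with twice as many composites as primes; any 1:2 split
# gives the same answer.
primes_in_bag, composites_in_bag = 5, 10
p_pick = Fraction(primes_in_bag, primes_in_bag + composites_in_bag)
assert p_pick == Fraction(1, 3)

# Example 2: the chance that one guess, drawn uniformly from 1..n as a toy
# stand-in for the combination space, is a proper factor of n exceeding 1.
def factor_guess_probability(n: int) -> Fraction:
    proper_factors = [k for k in range(2, n) if n % k == 0]
    return Fraction(len(proper_factors), n)

assert factor_guess_probability(15) == Fraction(2, 15)   # factors: 3, 5
```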

**11. The probability that a randomly chosen natural number is prime is not definable**

Clearly the probability of selecting a number that has the property of being prime from a given set $S$ of numbers is definable if the precise proportion of primes to non-primes in $S$ is definable.

However, if $S$ is the set of all integers, we cannot define a precise ratio of primes to composites in $S$, but only an order of magnitude such as $1/\log_{e} n$; so, equally obviously, such a probability cannot be defined over $S$ (see Chapter 2, p.9, Theorem 2.1, *here*).
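A quick computation (my own illustration) shows this numerically: the proportion of primes below $n$ falls steadily towards zero, so no single ratio can serve as the required probability over all of $S$:

```python
# The proportion of primes among 1..n keeps falling as n grows, so no
# fixed prime-to-composite ratio exists over the whole set of naturals.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_density(n: int) -> float:
    """Proportion of primes among 1..n."""
    return sum(is_prime(k) for k in range(1, n + 1)) / n

densities = [prime_density(10 ** k) for k in range(1, 5)]
assert all(a > b for a, b in zip(densities, densities[1:]))
```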

**12. The prime divisors of a natural number are independent**

Now, the following paper proves such independence, since it shows that whether or not a prime $p$ divides a given integer $n$ is independent of whether or not a prime $q \neq p$ divides $n$:

*Why Integer Factorising cannot be polynomial time*

We thus have that the probability that $n$ is prime is $\prod_{p \leq \sqrt{n}}(1 - 1/p)$, with a binomial standard deviation.

Hence, even though we cannot define the probability of selecting a number that has the property of being prime from the set of all natural numbers, $\prod_{p \leq \sqrt{n}}(1 - 1/p)$ can be treated as the putative non-heuristic probability that a given $n$ is a prime.
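As a numerical sanity check on the claim that such a probability is algorithmically computable, the actual prime count $\pi(n)$ does track the standard computable estimate $n/\log_e n$, which I use here purely as an illustrative computable stand-in:

```python
import math

def prime_count(n: int) -> int:
    """pi(n) via a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(sieve)

for n in (10**3, 10**4, 10**5):
    ratio = prime_count(n) / (n / math.log(n))
    assert 1.0 < ratio < 1.2   # the ratio approaches 1 as n grows
```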

**13. The distribution of primes is a quantum phenomenon**

The distribution of primes is thus determinate but unpredictable, since it is representable by the algorithmically verifiable, but not algorithmically computable, arithmetical number-theoretic function whose value for each $n$ is the $n$’th prime $p_n$.

The Prime Number Generating Theorem and the Trim and Compact algorithms detailed in *this 1964 investigation* illustrate why this arithmetical number-theoretic function is algorithmically verifiable but not algorithmically computable (see also *this Wikipedia proof* that no non-constant polynomial function with integer coefficients exists that evaluates to a prime number for all integers $n$).
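The Wikipedia result cited above can be illustrated numerically with Euler’s well-known polynomial $n^2 + n + 41$, which is prime for $n = 0, \ldots, 39$ but, as every non-constant integer polynomial eventually must, fails thereafter:

```python
# Euler's polynomial n^2 + n + 41 yields primes for n = 0..39, but at
# n = 40 it equals 41*41: a concrete instance of the cited fact that no
# non-constant integer polynomial is prime for all n.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

assert all(is_prime(n * n + n + 41) for n in range(40))
assert not is_prime(40 * 40 + 40 + 41)   # 1681 = 41 * 41
```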

Moreover, although the distribution of primes is a quantum phenomenon with a well-defined probability in the above sense, it is easily seen (see Figs. 7-11 on pp.23-26 of *this preprint*) that the generation of the primes is algorithmically computable.

**14. Why the universe may be algorithmically computable**

By analogy, this suggests that although the measurable values of some individual properties of particles in the universe over time may represent a quantum phenomenon, the universe itself may be algorithmically computable if the laws governing the generation of all the particles in the universe over time are algorithmically computable.

**A. A mathematical physicist’s conception of thinking about infinity in consistent ways**

John Baez is a mathematical physicist, currently working at the math department at U. C. Riverside in California, and also at the Centre for Quantum Technologies in Singapore.

Baez is not only academically active in the areas of network theory and information theory, but also socially active in promoting and supporting the Azimuth Project, which is a platform for scientists, engineers and mathematicians to collaboratively do something about the global ecological crisis.

In a recent post—*Large Countable Ordinals (Part 1)*—on the Azimuth Blog, Baez confesses to a passionate urge to write a series of blogs—that might even eventually yield a book—about the infinite, reflecting both his fascination with, and frustration at, the challenges involved in formally denoting and talking meaningfully about *different sizes of infinity*:

“I love the infinite. … It may not exist in the physical world, but we can set up rules to think about it in consistent ways, and then it’s a helpful concept. … Cantor’s realization that there are different sizes of infinity is … part of the everyday bread and butter of mathematics.”

**B. Why thinking about infinity in a consistent way must be constrained by an objective, evidence-based, perspective**

I would cautiously submit however that (as I briefly argue in this blogpost), before committing to any such venture, whether we *can* think about the “*different sizes of infinity*” in “*consistent ways*“, and to what extent such a concept is “*helpful*“, are issues that may need to be addressed from an objective, evidence-based, computational perspective in addition to the conventional self-evident, intuition-based, classical perspective towards formal axiomatic theories.

**C. Why we cannot conflate the behaviour of Goodstein’s sequence in Arithmetic with its behaviour in Set Theory**

Let me suggest why by briefly reviewing—albeit unusually—the usual argument for Goodstein’s Theorem (see here), namely that every Goodstein sequence over the natural numbers must terminate finitely.

**1. The Goodstein sequence over the natural numbers**

First, let $g_1(m), g_2(m), g_3(m), \ldots$ be the terms of the Goodstein sequence $G(m)$ for $m$ over the domain $\mathbb{N}$ of the natural numbers, where $n+1$ is the base in which the hereditary representation of the $n$’th term of the sequence is expressed.

**Some properties of Goodstein’s sequence over the natural numbers**

We note that, for any natural number $m$, R. L. Goodstein uses the properties of the hereditary representation of $m$ to construct a sequence $G(m)$ of natural numbers by an unusual, but valid, algorithm.

**Hereditary representation**: The representation of a number as a sum of powers of a base $b$, followed by expression of each of the exponents as a sum of powers of $b$, etc., until the process stops. For example, we may express the hereditary representations of $35$ in base $2$ and base $3$ as follows: $35 = 2^{2^{2}+1} + 2 + 1$; and $35 = 3^{3} + 2 \cdot 3 + 2$.

We shall ignore the peculiar manner of constructing the individual members of the Goodstein sequence, since these are not germane to understanding the essence of Goodstein’s argument. We need simply accept for now that $G(m)$ is well-defined over the structure $\mathbb{N}$ of the natural numbers, and has, for instance, the following properties:

If we replace the base $n+1$ in each term $g_n(m)$ of the sequence by a variable $x$, we arrive at a corresponding sequence of, say, Goodstein’s functions $g_n(m, x)$ for $m$ over the domain $\mathbb{N}$ of the natural numbers.


It is fairly straightforward (see here) to show that, for all $n$:

Either $g_{n+1}(m, x) < g_n(m, x)$, or $g_n(m, x) = 0$.

Clearly $G(m)$ terminates in $\mathbb{N}$ if, and only if, there is a natural number $k$ such that, for any $n > k$, we have either that $g_{n+1}(m) < g_n(m)$ or that $g_n(m) = 0$.

However, since we cannot, equally clearly, *immediately* conclude from the axioms of the first-order Peano Arithmetic PA that such a $k$ must exist merely from the definition of the sequence $G(m)$ in $\mathbb{N}$, we cannot *immediately* conclude from the above argument that $G(m)$ must terminate finitely in $\mathbb{N}$.
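For readers who wish to experiment, the construction can be sketched directly (this is the standard recipe for Goodstein sequences, coded by me rather than taken from the post): write the current term in hereditary base-$b$ representation, replace every occurrence of $b$ by $b+1$, and subtract $1$:

```python
def hereditary_bump(n: int, b: int) -> int:
    """Rewrite n in hereditary base-b form and replace each b by b+1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n:
        digit = n % b
        if digit:
            # exponents are themselves rewritten hereditarily
            result += digit * (b + 1) ** hereditary_bump(power, b)
        n //= b
        power += 1
    return result

def goodstein(m: int, steps: int):
    """First terms of the Goodstein sequence for m (stops at 0)."""
    terms, b = [m], 2
    for _ in range(steps):
        if terms[-1] == 0:
            break
        terms.append(hereditary_bump(terms[-1], b) - 1)
        b += 1
    return terms

assert goodstein(3, 6) == [3, 3, 3, 2, 1, 0]
```

Even for $m = 4$ the terms grow for an enormously long stretch before the eventual descent, which illustrates why termination cannot simply be read off the first few terms.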

**2. The Goodstein sequence over the finite ordinal numbers**

Second, let $o_1(m), o_2(m), o_3(m), \ldots$ be the terms of the Goodstein sequence $G_o(m)$ over the domain of the finite ordinal numbers below $\omega$, where $\omega$ is Cantor’s least transfinite ordinal.

If we replace the base $n+1$ in each term of the sequence by an ordinal $\alpha$, where $\alpha$ ranges over all the ordinals up to $\omega$, it is again fairly straightforward to show that:

Either $o_{n+1}(m) < o_n(m)$, or $o_n(m) = 0$.

Clearly, in this case too, $G_o(m)$ terminates if, and only if, there is a finite ordinal $k$ such that, for all finite $n > k$, we have either that $o_{n+1}(m) < o_n(m)$, or that $o_n(m) = 0$.

**3. Goodstein’s argument over the transfinite ordinal numbers**

If we, however, let $\alpha = \omega$, then—since the ZF axioms do not admit an infinite descending sequence of ordinals—it now **immediately follows** that we cannot have:

$o_{n+1}(m) < o_n(m)$ for all $n$.

Hence $G_o(m)$ must terminate finitely, since we must have that $o_n(m) = 0$ for some finite $n$.

**4. The intuitive justification for Goodstein’s Theorem**

The *intuitive* justification—which must *implicitly* underlie any formal argument—for Goodstein’s Theorem then is that, since the finite ordinals can be meta-mathematically *seen* to be in a correspondence with the natural numbers, we can conclude from (2) above that every Goodstein sequence $G(m)$ over the natural numbers *must also* terminate finitely.

**5. The fallacy in Goodstein’s argument**

The fallacy in this conclusion is exposed if we note that, by (2), $G_o(m)$ must terminate finitely **even if** $G(m)$ did *not* terminate in $\mathbb{N}$!

**6. Why we need to heed Skolem’s cautionary remarks**

Clearly, if we heed Skolem’s cautionary remarks (reproduced here) against unrestrictedly corresponding conclusions concerning elements of different formal systems, then we can validly conclude only that the relationship of ‘terminating finitely’ with respect to the ordinal inequality ‘$<$’ over an infinite set of finite ordinals in any *putative* interpretation of a first order Ordinal Arithmetic cannot be obviously corresponded to the relationship of ‘terminating finitely’ with respect to the natural number inequality ‘$<$’ over an infinite set of natural numbers in any interpretation of PA.

**7. The significance of Skolem’s qualification**

The significance of Skolem’s qualification is highlighted if we note that we cannot introduce a constant denoting a ‘completed infinity’, such as Cantor’s least transfinite ordinal $\omega$, into either PA or any interpretation of PA without inviting inconsistency.

(The proof is detailed in Theorem 4.1 on p.7 of this preprint. See also this blogpage).

**8. PA is finitarily consistent**

Moreover, the following paper, due to appear in the December 2016 issue of *Cognitive Systems Research*, gives a finitary proof of consistency for the first-order Peano Arithmetic PA:

**9. Why ZF cannot have an evidence-based interpretation**

It also follows from the above-cited CSR paper that ZF axiomatically postulates the existence of an infinite set which cannot be evidenced as true even under any *putative* interpretation of ZF.

**10. The appropriate conclusion of Goodstein’s argument**

So, if a ‘completed infinity’ cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency, it would follow in Russell’s colourful phraseology that the appropriate conclusion to be drawn from Goodstein’s argument is that:

(i) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(ii) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of notional interest.

This raises the issue not only of whether we *can* think about the *different sizes of infinity* in a consistent way, but also of the extent to which we may need to justify that such a concept is *helpful* to an emerging student of mathematics.

**The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought**

*Christopher Mole* is an associate professor of philosophy at the University of British Columbia, Vancouver. He is the author of *Attention is Cognitive Unison: An Essay in Philosophical Psychology* (OUP, 2011), and *The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought* (Routledge, 2016).

In his preface to *The Unexplained Intellect*, Mole emphasises that his book is an attempt to provide arguments for (amongst others) the three theses that:

(i) “Intelligence might become explicable if we treat intelligent thought as if it were some sort of computation”;

(ii) “The importance of the rapport between an organism and its environment must be understood from a broadly computational perspective”;

(iii) “… our difficulties in accounting for our psychological orientation with respect to time are indications of the need to shift our philosophical focus away from mental *states*—which are altogether too static—and towards a theory of the mind in which it is *dynamic* mental entities that are taken to be metaphysically foundational”.

Mole explains at length his main claims in *The Unexplained Intellect*—and the cause that those claims serve—in a lucid and penetrating, VI-part, series of invited posts in *The Brains blog* (a leading forum for work in the philosophy and science of mind that was founded in 2005 by *Gualtiero Piccinini*, and has been administered by *John Schwenkler* since late 2011).

In these posts, Mole seeks to make the following points.

**I: The Unexplained Intellect: The mind is not a hoard of sentences**

We do not currently have a satisfactory account of how minds could be had by material creatures. If such an account is to be given then every mental phenomenon will need to find a place within it. Many will be accounted for by relating them to other things that are mental, but there must come a point at which we break out of the mental domain, and account for some things that are mental by reference to some that are not. It is unclear where this break out point will be. In that sense it is unclear which mental entities are, metaphysically speaking, the most fundamental.

At some point in the twentieth century, philosophers fell into the habit of writing as if the most fundamental things in the mental domain are mental states (where these are thought of as states having objective features of the world as their truth-evaluable contents). This led to a picture in which the mind was regarded as something like a hoard of sentences. The philosophers and cognitive scientists who have operated with this picture have taken their job to be telling us what sort of content these mental sentences have, how that content is structured, how the sentences come to have it, how they get put into and taken out of storage, how they interact with one another, how they influence behaviour, and so on.

This emphasis on states has caused us to underestimate the importance of non-static mental entities, such as inferences, actions, and encounters with the world. If we take these dynamic entities to be among the most fundamental of the items in the mental domain, then — I argue — we can avoid a number of philosophical problems. Most importantly, we can avoid a picture in which intelligent thought would be beyond the capacities of any physically implementable system.

**II: The Unexplained Intellect: Computation and the explanation of intelligence**

A lot of philosophers think that consciousness is what makes the mind/body problem interesting, perhaps because they think that consciousness is the only part of that problem that remains wholly philosophical. Other aspects of the mind are taken to be explicable by scientific means, even if explanatorily adequate theories of them remain to be specified.

I’ll remind the reader of computability theory’s power, with a view to indicating how it is that the discoveries of theoretical computer scientists place constraints on our understanding of what intelligence is, and of how it is possible.

**III: The Unexplained Intellect: The importance of computability**

If we found that we had been conceiving of intelligence in such a way that intelligence could not be modelled by a Turing Machine, our response should not be to conclude that some alternative must be found to a ‘Classically Computational Theory of the Mind’. To think only that would be to underestimate the scope of the theory of computability. We should instead conclude that, on the conception in question, intelligence would (be) *absolutely* inexplicable. This need to avoid making intelligence inexplicable places constraints on our conception of what intelligence is.

**IV: The Unexplained Intellect: Consequences of imperfection**

The lesson to be drawn is that, if we think of intelligence as involving the maintenance of satisfiable beliefs, and if we think of our beliefs as corresponding to a set of representational states, then our intelligence would depend on a run of good luck the chances of which are unknown.

My suggestion is that we can reach a more explanatorily satisfactory conception of intelligence if we adopt a dynamic picture of the mind’s metaphysical foundations.

**V: The Unexplained Intellect: The importance of rapport**

I suggest that something roughly similar is true of us. We are not guaranteed to have satisfiable beliefs, and sometimes we are rather bad at avoiding unsatisfiability, but such intelligence as we have is to be explained by reference to the rapport between our minds and the world.

Rather than starting from a set of belief states, and then supposing that there is some internal process operating on these states that enables us to update our beliefs rationally, we should start out by accounting for the dynamic processes through which the world is epistemically encountered. Much as the three-colourable map generator reliably produces three-colourable maps because it is essential to his map-making procedure that borders appear only where they will allow for three colorability, so it is essential to what it is for a state to be a belief that beliefs will appear only if there is some rapport between the believer and the world. And this rapport — rather than any internal processing considered in isolation from it — can explain the tendency for our beliefs to respect the demands of intelligence.

**VI: The Unexplained Intellect: The mind’s dynamic foundations**

memory is essentially a form of epistemic retentiveness: One’s present knowledge counts as an instance of memory when and only when it was attained on the basis of an epistemic encounter that lies in one’s past. One can epistemically encounter a *proposition* as the conclusion of an argument, and so can encounter it before the occurrence of any event to which it pertains, but one cannot encounter an *event* in that way. In the resulting explanation of memory’s temporal asymmetry, it is the dynamic events of epistemic encountering to which we must make reference. These encounters, and not the knowledge states to which they lead, do the lion’s share of the explanatory work.

**A. Simplifying Mole’s perspective**

It may help simplify Mole’s thought-provoking perspective if we make an arbitrary distinction between:

(i) The mind of an applied scientist, whose primary concern is our sensory observations of a ‘common’ external world;

(ii) The mind of a philosopher, whose primary concern is abstracting a coherent perspective of the external world from our sensory observations; and

(iii) The mind of a mathematician, whose primary concern is adequately expressing such abstractions in a formal language of unambiguous communication.

My understanding of Mole’s thesis, then, is that:

(a) although a mathematician’s mind may be capable of defining the ‘truth’ value of some logical and mathematical propositions without reference to the external world,

(b) the ‘truth’ value of any logical or mathematical proposition that purports to represent any aspect of the real world must be capable of being evidenced objectively to the mind of an applied scientist; and that,

(c) of the latter ‘truths’, what should interest the mind of a philosopher is whether there are some that are ‘knowable’ completely independently of the passage of time, and some that are ‘knowable’ only partially, or incrementally, with the passage of time.

**B. Support for Mole’s thesis**

It also seems to me that Mole’s thesis implicitly subsumes, or at the very least echoes, the belief expressed by Chetan R. Murthy (‘An Evaluation Semantics for Classical Proofs‘, Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, 1991; also Cornell TR 91-1213):

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …”

If so, the thesis seems significantly supported by the following paper that is due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

The CSR paper implicitly suggests that there are, indeed, (only?) two ways of assigning ‘true’ or ‘false’ values to any mathematical description of real-world events.

**C. Algorithmic computability**

First, a number-theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm that can provide objective evidence (cf. ibid Murthy 91) for deciding the truth/falsity of each proposition in the denumerable sequence {F(1), F(2), …}.

(We note that the concept of `algorithmic computability’ is essentially an expression of the more rigorously defined concept of `realizability’ on p.503 of Stephen Cole Kleene’s ‘*Introduction to Metamathematics*‘, North Holland Publishing Company, Amsterdam.)
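By way of a concrete contrast (an illustration of mine, not drawn from the CSR paper), a relation such as ‘x is prime’ is algorithmically computable in this sense: a single uniform algorithm settles every instance of the denumerable sequence of propositions ‘1 is prime’, ‘2 is prime’, …

```python
def is_prime(n):
    """Decide the relation 'n is prime' for ANY natural number n.

    One uniform trial-division algorithm settles every instance, so the
    relation is algorithmically computable (not merely verifiable)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
```

No case-by-case ingenuity is needed: the same routine provides the evidence for every proposition in the sequence.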

**D. Algorithmic verifiability**

Second, a number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence {F(1), F(2), …, F(n)}.

We note that algorithmic computability implies the existence of a single algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of any such algorithm.
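The asymmetry can be sketched in a few lines of code (my own illustration, not from the CSR paper): any finite initial segment of truth values, however obtained, is decidable by a trivial lookup-table algorithm, which is all that algorithmic verifiability demands; algorithmic computability would demand one algorithm covering the entire denumerable sequence at once.

```python
def prefix_decider(truth_values):
    """Build an algorithm (a lookup table) that decides the first n
    propositions of some number-theoretic relation, given their
    truth-values from ANY source.

    For each n such a finite decider exists, since every finite set of
    integers is recursive -- which is all that algorithmic verifiability
    asks for.  It does NOT follow that one algorithm decides every n."""
    table = {i: v for i, v in enumerate(truth_values, start=1)}

    def decide(k):
        return table[k]  # defined only for 1 <= k <= n, by construction

    return decide

# Decide the first three propositions of a hypothetical relation F:
decide = prefix_decider([True, False, True])
```

The lookup table is a perfectly good algorithm for its finite segment; what it cannot do is extend itself uniformly to the whole sequence.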

The following theorem (Theorem 2.1, p.37 of the *CSR paper*) shows that although every algorithmically computable relation is algorithmically verifiable, the converse is not true:

**Theorem**: There are number theoretic functions that are algorithmically verifiable but not algorithmically computable.

**E. The significance of algorithmic ‘truth’ assignments for Mole’s theses**

The significance of such algorithmic ‘truth’ assignments for Mole’s theses is that:

*Algorithmic computability*—reflecting the ambit of classical Newtonian mechanics—characterises natural phenomena that are determinate and predictable.

Such phenomena are describable by mathematical propositions that can be termed as ‘knowable completely’, since at any point of time they are algorithmically computable as ‘true’ or ‘false’.

Hence both their past and future behaviour is completely computable, and their ‘truth’ values are therefore ‘knowable’ independent of the passage of time.

*Algorithmic verifiability*—reflecting the ambit of Quantum mechanics—characterises natural phenomena that are determinate but unpredictable.

Such phenomena are describable by mathematical propositions that can only be termed as ‘knowable incompletely’, since at any point of time they are only algorithmically verifiable, but not algorithmically computable, as ‘true’ or ‘false’.

Hence, although their past behaviour is completely computable, their future behaviour is not completely predictable, and their ‘truth’ values are not independent of the passage of time.

**F. Where Mole’s implicit faith in the adequacy of set theoretical representations of natural phenomena may be misplaced**

It also seems to me that, although Mole’s analysis justifiably holds that the:

“importance of the rapport between an organism and its environment”

has been underacknowledged, or even overlooked, by existing theories of the mind and intelligence, it does not seem to mistrust, and therefore does not ascribe such underacknowledgement to any lacuna in, the mathematical and epistemic foundations of the formal language in which almost all descriptions of real-world events are currently sought to be expressed: the language of the set theory ZF.

**G. Any claim to a physically manifestable ‘truth’ must be objectively accountable**

Now, so far as applied science is concerned, history teaches us that the ‘truth’ of any mathematical proposition that purports to represent any aspect of the external world must be capable of being evidenced objectively; and that such ‘truths’ must not be only of a subjective and/or revelationary nature which may require truth-certification by evolutionarily selected prophets.

(Not necessarily religious—see, for instance, Melvyn B. Nathanson’s remarks, “*Desperately Seeking Mathematical Truth*“, in the Opinion piece in the August 2008 Notices of the American Mathematical Society, Vol. 55, Issue 7.)

The broader significance of seeking objective accountability is that it admits the following (admittedly iconoclastic) distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA as the language of science; and

2. The first-order Set Theory ZF as the language of science fiction.

It is a distinction that is faintly reflected in Stephen G. Simpson’s more conservative perspective in his paper ‘*Partial Realizations of Hilbert’s Program*‘ (§6.4, p.15):

“Finitistic reasoning (my read: ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (my read: ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.”

The distinction is supported by the formal argument (detailed in the above-cited CSR paper) that:

(i) PA has two, hitherto unsuspected, evidence-based interpretations, the first of which can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions, and the second of which can be treated as circumscribing the ambit of mechanistic reasoning about ‘true’ arithmetical propositions.

What this means is that the language of arithmetic—formally expressed as PA—can provide all the foundational needs for all practical applications of mathematics in the physical sciences. This was the point that I sought to make—in a limited way, with respect to quantum phenomena—in the following paper presented at Unilog 2015, Istanbul last year:

(Presented on 26th June at the workshop on ‘*Emergent Computational Logics*’ at *UNILOG’2015, 5th World Congress and School on Universal Logic*, 20th June 2015 – 30th June 2015, Istanbul, Turkey.)

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see Theorem 1 in §4 of *this post*), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about ‘true’ set-theoretical propositions, or that of mechanistic reasoning about ‘true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that—although of possible interest to philosophers of science—are only mentally conceivable by mathematicians subjectively, and have no verifiable physical counterparts, or immediately practical applications of mathematics, that can materially impact the study of physical phenomena.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

**H. The importance of Mole’s ‘rapport’**

Accordingly, I see it as axiomatic that the relationship between an evidence-based mathematical language and the physical phenomena that it purports to describe, must be in what Mole terms as ‘rapport’, if we view mathematics as a set of linguistic tools that have evolved:

(a) to adequately abstract and precisely express through human reasoning our observations of physical phenomena in the world in which we live and work; and

(b) unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning in order to function to the maximum of our co-operative potential in achieving a better understanding of physical phenomena.

This is the perspective that I sought to present in the following paper at Epsilon 2015, Montpellier, last June, where I argue against the introduction of ‘unspecifiable’ elements (such as completed infinities) into either a formal language or any of its evidence-based interpretations (in support of the argument that since a completed infinity cannot be evidence-based, it must therefore be dispensable in any purported description of reality):

(Presented on 10th June at the Epsilon 2015 workshop on ‘*Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics*’, 10th June 2015 – 12th June 2015, University of Montpellier, France.)

**I. Why mathematical reasoning must reflect an ‘agnostic’ perspective**

Moreover, from a non-mathematician’s perspective, a Propertarian like *Curt Doolittle* would seem justified in his critique (comment of June 2, 2016 in *this Quanta review*) of the seemingly ‘mystical’ and ‘irrelevant’ direction in which conventional interpretations of Hilbert’s ‘theistic’ and Brouwer’s ‘atheistic’ reasoning appear to have pointed mainstream mathematics; for, as I argue informally in an *earlier post*, the ‘truths’ of any mathematical reasoning must reflect an ‘agnostic’ perspective.

In a recent paper *A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory*, authors Adam Yedidia and Scott Aaronson argue upfront in their Introduction that:

“*Like any axiomatic system capable of encoding arithmetic, ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency. Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.*“

The question arises:

*How reasonable is it to build modern mathematics on top of a Set Theory such as ZF?*

Some immediate points to ponder upon (see also reservations expressed by Stephen G. Simpson in *Logic and Mathematics* and in *Partial Realizations of Hilbert’s Program*):

**1. “Like any axiomatic system capable of encoding arithmetic, …”**

The implicit assumption here, that every ZF formula which is provable about the finite ZF ordinals must necessarily interpret as a true proposition about the natural numbers, is fragile since, without such an assumption, we can only conclude from Goodstein’s argument (see Theorem 1.1 here) that a Goodstein sequence defined over the finite ZF ordinals must terminate even if the corresponding Goodstein sequence over the natural numbers does not terminate!
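The Goodstein process itself is mechanically computable for any given starting value, whatever the status of the termination claim. The following sketch (my own illustration; the function names are mine) implements the hereditary base-bumping step and shows, for instance, that the sequence starting at 3 terminates after a handful of steps:

```python
def bump(n, base):
    """Write n in hereditary base-`base` notation, replace every
    occurrence of `base` by `base + 1`, and evaluate the result."""
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n > 0:
        digit = n % base
        # exponents are themselves rewritten hereditarily
        total += digit * (base + 1) ** bump(exponent, base)
        n //= base
        exponent += 1
    return total

def goodstein(m, steps):
    """Terms of the Goodstein sequence starting at m (at most steps+1 of them)."""
    seq, base = [m], 2
    for _ in range(steps):
        if m == 0:
            break
        m = bump(m, base) - 1  # bump the base, then subtract 1
        base += 1
        seq.append(m)
    return seq

# goodstein(3, 10) -> [3, 3, 3, 2, 1, 0]: the sequence for 3 terminates.
```

For starting value 4 the sequence begins 4, 26, 41, 60, … and, famously, takes an astronomically long time to reach 0.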

**2. “ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency.”**

The implicit assumption here is that ZF is ω-consistent, which implies that ZF is consistent and must therefore have an interpretation over some mathematically definable structure in which ZF theorems interpret as ‘true’.

The question arises: Must such ‘truth’ be capable of being evidenced objectively, or is it only of a subjective, revelationary, nature (which may require truth-certification by evolutionarily selected prophets—see Nathanson’s remarks as cited in *this post*)?

The significance of seeking objective accountability is that in a paper, “*The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis*“, which is due to appear in the December 2016 issue of *Cognitive Systems Research*, we show (see also *this post*) that the first-order Peano Arithmetic PA:

(i) is finitarily consistent; but

(ii) is *not* ω-consistent; and

(iii) has no ‘undecidable’ arithmetical proposition (whence both of Gödel’s Incompleteness Theorems hold vacuously so far as the arithmetic of the natural numbers is concerned).

**3. “Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.”**

Now, one justification for such an assumption (without which it may be difficult to justify building modern mathematics on top of ZF) could be the belief that acquisition of set-theoretical knowledge by students of mathematics has some essential educational dimension.

If so, one should take into account not only the motivations of such a student for the learning of mathematics, but also those of a mathematician for teaching it.

This, in turn, means that both the content of the mathematics which is to be learnt (or taught), as well as the putative utility of such learning (or teaching) for a student (or teacher), merit consideration.

Considering content, I would iconoclastically submit that the least one may then need to accommodate is the following distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA, which is the language of science; and

2. The first-order Set Theory ZF, which is the language of science fiction.

A distinction that is reflected in Stephen G. Simpson’s more conservative perspective in *Partial Realizations of Hilbert’s Program* (§6.4, p.15):

Finitistic reasoning (*read ‘First-order Peano Arithmetic PA’*) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (*read ‘First-order Set Theory ZF’*) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.

Reason:

(i) PA has two, hitherto unsuspected, evidence-based interpretations (see *this post*), the first of which can be treated as circumscribing the ambit of human reasoning about `true’ arithmetical propositions; and the second can be treated as circumscribing the ambit of mechanistic reasoning about `true’ arithmetical propositions.

It is this language of arithmetic—formally expressed as PA—that provides the foundation for all practical applications of mathematics where the latter could be argued as having an essential educational dimension.

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see paragraph 4.2 of *this post*), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about `true’ set-theoretical propositions, or that of mechanistic reasoning about `true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that are only mentally conceivable by mathematicians (subjectively?), and have no physical counterparts, or immediately practical applications of mathematics, which could meaningfully be argued as having an essential educational dimension.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

The distinction is lost when—as seems to be the case currently—we treat the acquisition of mathematical knowledge as necessarily including the body of essentially set-theoretic theorems—to the detriment, I would argue, of the larger body of aspiring students of mathematics whose flagging interest in acquiring such a wider knowledge in universities around the world reflects the fact that, for most students, their interests seem to lie primarily in how a study of mathematics can enable them to:

(a) adequately abstract and precisely express through human reasoning their experiences of the world in which they live and work; and

(b) unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning in order to function to the maximum of their latent potential in achieving their personal real-world goals.

In other words, it is not obvious how any study of mathematics that has the limited goals (a) and (b) can have any essential educational dimension that justifies the assumption that ZF is consistent.

**A foundational argument for defining Effective Computability formally, and weakening the Church and Turing Theses – II**

**The Logical Issue**

In the previous posts we addressed first the computational issue, and second the philosophical issue—concerning the informal concept of `effective computability’—that seemed implicit in Selmer Bringsjord’s narrational case against Church’s Thesis ^{[1]}.

We now address the logical issue that leads to a formal definability of this concept which—arguably—captures our intuitive notion of the concept more fully.

We note that in this paper on undecidable arithmetical propositions we have shown how it follows from Theorem VII of Gödel’s seminal 1931 paper that every recursive function is representable in the first-order Peano Arithmetic PA by a formula which is algorithmically verifiable, but not algorithmically computable, *if* we assume (*Aristotle’s particularisation*) that the negation of a universally quantified formula of the first-order predicate calculus is always indicative of the existence of a counter-example under the standard interpretation of PA.

In this earlier post on the Birmingham paper, we have also shown that:

(i) The concept of algorithmic verifiability is well-defined under the standard interpretation of PA over the structure of the natural numbers; and

(ii) The concept of algorithmic computability too is well-defined under the algorithmic interpretation of PA over the structure of the natural numbers.

We shall argue in this post that the standard postulation of the Church-Turing Thesis—which postulates that the intuitive concept of `effective computability’ is completely captured by the formal notion of `algorithmic computability’—does not hold if we formally define a number-theoretic formula as effectively computable if, and only if, it is algorithmically verifiable; and it therefore needs to be replaced by a weaker postulation of the Thesis as an instantiational equivalence.

**Weakening the Church and Turing Theses**

We begin by noting that the following theses are classically equivalent ^{[1]}:

**Standard Church’s Thesis:** ^{[2]} A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is recursive ^{[3]}.

**Standard Turing’s Thesis:** ^{[4]} A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is Turing-computable ^{[5]}.

In this paper we shall argue that, from a foundational perspective, the principle of Occam’s razor suggests that the Theses should be postulated minimally as the following equivalences:

**Weak Church’s Thesis:** A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is instantiationally equivalent to a recursive function (or relation, treated as a Boolean function).

**Weak Turing’s Thesis:** A number-theoretic function (or relation, treated as a Boolean function) is effectively computable if, and only if, it is instantiationally equivalent to a Turing-computable function (or relation, treated as a Boolean function).

**The need for explicitly distinguishing between `instantiational’ and `uniform’ methods**

**Why Church’s Thesis?**

It is significant that both Kurt Gödel (initially) and Alonzo Church (subsequently—possibly under the influence of Gödel’s disquietude) enunciated Church’s formulation of `effective computability’ as a Thesis because Gödel was instinctively uncomfortable with accepting it as a definition that *minimally* captures the essence of `*intuitive* effective computability’ ^{[6]}.

**Kurt Gödel’s reservations**

Gödel’s reservations seem vindicated if we accept that a number-theoretic function can be effectively computable instantiationally (in the sense of being algorithmically *verifiable* as defined in the Birmingham paper, reproduced in this post), but not by a uniform method (in the sense of being algorithmically *computable* as defined in the Birmingham paper, reproduced in this post).

The significance of the fact (considered in the Birmingham paper, reproduced in this post) that `truth’ too can be effectively decidable *both* instantiationally *and* by a uniform method under the standard interpretation of PA is reflected in Gödel’s famous 1951 Gibbs lecture^{[7]}, where he remarks:

“I wish to point out that one may conjecture the truth of a universal proposition (for example, that I shall be able to verify a certain property for any integer given to me) and at the same time conjecture that no general proof for this fact exists. It is easy to imagine situations in which both these conjectures would be very well founded. For the first half of it, this would, for example, be the case if the proposition in question were some equation F(n) = G(n) of two number-theoretical functions which could be verified up to very great numbers n.” ^{[8]}

**Alan Turing’s perspective**

Such a possibility is also implicit in Turing’s remarks ^{[9]}:

“The computable numbers do not include all (in the ordinary sense) definable numbers. Let P be a sequence whose *n*-th figure is 1 or 0 according as *n* is or is not satisfactory. It is an immediate consequence of the theorem of §8 that P is not computable. It is (so far as we know at present) possible that any assigned number of figures of P can be calculated, but not by a uniform process. When sufficiently many figures of P have been calculated, an essentially new method is necessary in order to obtain more figures.”

**Boolos, Burgess and Jeffrey’s query**

The need for placing such a distinction on a formal basis has also been expressed explicitly on occasion ^{[10]}.

Thus, Boolos, Burgess and Jeffrey ^{[11]} define a diagonal *halting function*, d, any value of which can be decided effectively, although there is no single algorithm that can effectively compute d.

Now, the straightforward way of expressing this phenomenon should be to say that there are well-defined number-theoretic functions that are effectively computable instantiationally but not uniformly. Yet, following Church and Turing, such functions are labeled as uncomputable ^{[12]}!

However, as Boolos, Burgess and Jeffrey note quizzically:

“According to Turing’s Thesis, since d is not Turing-computable, d cannot be effectively computable. Why not? After all, although no Turing machine computes the function d, we were able to compute at least its first few values. For since, as we have noted, f1 = f2 = f3 = the empty function, we have d(1) = d(2) = d(3) = 1. And it may seem that we can actually compute d(n) for any positive integer n—if we don’t run out of time.” ^{[13]}

**Why should Chaitin’s constant be labelled `uncomputable’?**

The reluctance to treat a function such as d—or the function that computes the n-th digit in the decimal expression of a Chaitin constant Ω—as computable, on the grounds that the `time’ needed to compute it increases monotonically with n, is curious ^{[15]}; the same applies to any total Turing-computable function! ^{[16]}

Moreover, such a reluctance to treat instantiationally computable functions such as d as `effectively computable’ is difficult to reconcile with a conventional wisdom that holds the standard interpretation of the first-order Peano Arithmetic PA as defining an intuitively sound model of PA.

*Reason:* We have shown in the Birmingham paper (reproduced in this post) that ‘satisfaction’ and ‘truth’ under the standard interpretation of PA are definable constructively in terms of algorithmic verifiability (*instantiational computability*).

**Distinguishing between algorithmic verifiability and algorithmic computability**

We now show in Theorem 1 that if Aristotle’s particularisation is presumed valid over the structure of the natural numbers—as is the case under the standard interpretation of the first-order Peano Arithmetic PA—then it follows from the instantiational nature of the (constructively defined ^{[17]}) Gödel β-function that a primitive recursive relation can be instantiationally equivalent to an arithmetical relation, where the former is algorithmically computable over ℕ, whilst the latter is algorithmically verifiable (i.e., instantiationally computable) but not algorithmically computable over ℕ. ^{[18]}

**Significance of Gödel’s β-function**

We note first that in Theorem VII of his seminal 1931 paper on formally undecidable arithmetical propositions, Gödel showed that, given a total number-theoretic function f and any natural number n, we can construct a primitive recursive function β and natural numbers b, c such that β(b, c, i) = f(i) for all i ≤ n.

In this paper we shall essentially answer the following question affirmatively:

**Query 3:** Does Gödel’s Theorem VII admit construction of an arithmetical function F such that:

(a) for any given natural number n, there is an algorithm that can verify F(i) = f(i) for all i ≤ n (hence F may be said to be algorithmically verifiable if f is recursive);

(b) there is no algorithm that can verify F(i) = f(i) for all i ≥ 0 (so F may be said to be algorithmically uncomputable)?

**Defining effective computability**

Now, in the Birmingham paper (reproduced in this post), we have formally defined what it means for a formula of an arithmetical language to be:

(i) Algorithmically verifiable;

(ii) Algorithmically computable;

under an interpretation.

We shall thus propose the definition:

**Effective computability:** A number-theoretic formula is effectively computable if, and only if, it is algorithmically verifiable.

**Intuitionistically unobjectionable:** We note first that since every finite set of integers is recursive, every well-defined number-theoretical formula is algorithmically verifiable, and so the above definition is intuitionistically unobjectionable; and second that the existence of an arithmetic formula that is algorithmically verifiable but not algorithmically computable (Theorem 1) supports Gödel’s reservations on Alonzo Church’s original intention to label his Thesis as a definition ^{[19]}.

The concept is well-defined, since we have shown in the Birmingham paper (reproduced in this post) that the algorithmically verifiable and the algorithmically computable PA formulas are well-defined under the standard interpretation of PA and that:

(a) The PA-formulas are decidable as satisfied / unsatisfied or true / false under the standard interpretation of PA if, and only if, they are algorithmically verifiable;

(b) The algorithmically computable PA-formulas are a proper subset of the algorithmically verifiable PA-formulas;

(c) The PA-axioms are algorithmically computable as satisfied / true under the standard interpretation of PA;

(d) Generalisation and Modus Ponens preserve algorithmically computable truth under the standard interpretation of PA;

(e) The provable PA-formulas are precisely the ones that are algorithmically computable as satisfied / true under the standard interpretation of PA.

**Gödel’s Theorem VII and algorithmically verifiable, but not algorithmically computable, arithmetical propositions**

In his seminal 1931 paper on formally undecidable arithmetical propositions, Gödel defined a curious primitive recursive function—Gödel’s β-function—as ^{[20]}:

**Definition 1:** β(x₁, x₂, x₃) = rm(1 + (x₃ + 1)⋅x₂, x₁)

where rm(x₁, x₂) denotes the remainder obtained on dividing x₂ by x₁.

Gödel showed that the above function has the remarkable property that:

**Lemma 1:** For any given denumerable sequence of natural numbers, say f(0), f(1), f(2), …, and any given natural number n, we can construct natural numbers b, c, j such that:

(i) j = max(n, f(0), f(1), …, f(n));

(ii) c = j!;

(iii) β(b, c, i) = f(i) for 0 ≤ i ≤ n.

**Proof:** This is a standard result ^{[21]}.
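Lemma 1’s construction is concrete enough to execute. The sketch below (my own illustration) recovers b by the Chinese Remainder Theorem, as in the standard proof: with c = j! the moduli 1 + (i+1)·c are pairwise coprime, so a suitable b always exists. It encodes an arbitrary finite sequence of natural numbers into a pair b, c from which β(b, c, i) returns the i-th term:

```python
from math import factorial

def beta(b, c, i):
    """Gödel's beta-function: the remainder of b on division by 1 + (i+1)*c."""
    return b % (1 + (i + 1) * c)

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli:
    returns b with b % moduli[i] == residues[i] for every i."""
    b, m = 0, 1
    for r, mod in zip(residues, moduli):
        # pow(m, -1, mod) is the modular inverse (moduli are coprime)
        t = ((r - b) * pow(m, -1, mod)) % mod
        b += m * t
        m *= mod
    return b

def encode(seq):
    """Construct b, c with beta(b, c, i) == seq[i] for every index i,
    following Lemma 1: take c = j! with j = max(n, seq values)."""
    n = len(seq) - 1
    j = max([n] + list(seq))
    c = factorial(j)
    moduli = [1 + (i + 1) * c for i in range(n + 1)]
    return crt(seq, moduli), c
```

For example, `encode([2, 7, 1, 8])` yields a pair b, c from which `beta(b, c, i)` reproduces the sequence 2, 7, 1, 8 term by term.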

Now we have the standard definition ^{[22]}:

**Definition 2:** A number-theoretic function f(x₁, …, xₙ) is said to be representable in PA if, and only if, there is a PA formula [F(x₁, …, xₙ, y)] with the free variables [x₁, …, xₙ, y], such that, for any given natural numbers k₁, …, kₙ, m:

(i) if f(k₁, …, kₙ) = m then PA proves: [F(k₁, …, kₙ, m)];

(ii) PA proves: [(∃₁y)F(k₁, …, kₙ, y)].

The function f(x₁, …, xₙ) is said to be strongly representable in PA if we further have that:

(iii) PA proves: [(∃₁y)F(x₁, …, xₙ, y)]

**Interpretation of `∃₁’:** The symbol `∃₁’ denotes `uniqueness’ under an interpretation which assumes that Aristotle’s particularisation holds in the domain of the interpretation.

Formally, however, the PA formula:

[(∃₁y)F(y)]

is merely a short-hand notation for the PA formula:

[¬(∀y)¬F(y) ∧ (∀y)(∀z)((F(y) ∧ F(z)) → y = z)].

We then have:

**Lemma 2:** β(x₁, x₂, x₃) is strongly represented in PA by [Bt(x₁, x₂, x₃, y)], which is defined as follows:

[(∃w)(x₁ = ((1 + (x₃ + 1)⋅x₂)⋅w + y) ∧ (y < 1 + (x₃ + 1)⋅x₂))].

**Proof:** This is a standard result ^{[23]}.

Gödel further showed (also under the tacit, but critical, presumption of Aristotle’s particularisation ^{[24]}) that:

**Lemma 3:** If f(x₁, y) is a recursive function defined by:

(i) f(x₁, 0) = g(x₁)

(ii) f(x₁, y+1) = h(x₁, y, f(x₁, y))

where g and h are recursive functions of lower rank ^{[25]} that are represented in PA by well-formed formulas [G(x₁, y)] and [H(x₁, y, z, w)],

then f is represented in PA by the following well-formed formula, denoted by [F(x₁, y, z)]:

[(∃u)(∃v)(((∃w)(Bt(u, v, 0, w) ∧ G(x₁, w))) ∧ Bt(u, v, y, z) ∧ (∀w)(w < y → (∃r)(∃s)(Bt(u, v, w, r) ∧ Bt(u, v, w+1, s) ∧ H(x₁, w, r, s))))]

**Proof:** This is a standard result ^{[26]}.

**What does “[F(k, m, n)] is provable” assert under the standard interpretation of PA?**

Now, if the PA formula [F(x₁, y, z)] represents in PA the recursive function denoted by f(x₁, y), then by definition, for any given numerals [k, m, n] such that f(k, m) = n, the formula [F(k, m, n)] is provable in PA; and true under the standard interpretation of PA.

We thus have that:

**Lemma 4:** “[F(k, m, n)] is true under the standard interpretation of PA” is the assertion that:

Given any natural numbers k, m, n such that f(k, m) = n, we can construct natural numbers b, c, j—all functions of k and m—such that:

(a) β(b, c, 0) = g(k);

(b) for all i < m, β(b, c, i+1) = h(k, i, β(b, c, i));

(c) β(b, c, m) = n;

where f, g and h are any recursive functions that are formally represented in PA by [F], [G] and [H] respectively such that:

(i) f(k, 0) = g(k)

(ii) f(k, i+1) = h(k, i, f(k, i)) for all i < m

(iii) g and h are recursive functions that are assumed to be of lower rank than f.

**Proof:** For any given natural numbers k, m and n, if [F(k, m, n)] interprets as a well-defined arithmetical relation under the standard interpretation of PA, then we can define a deterministic Turing machine that can `construct’ the sequences:

f(k, 0), f(k, 1), …, f(k, m)

and:

β(b, c, 0), β(b, c, 1), …, β(b, c, m)

and give evidence to verify the assertion. ^{[27]}

We now see that:

**Theorem 1:** Under the standard interpretation of PA, [F(x₁, y, z)] is algorithmically verifiable, but not algorithmically computable, as always true over ℕ.

**Proof:** It follows from Lemma 4 that:

(1) [F(k, m, n)] is PA-provable for any given numerals [k, m, n] such that f(k, m) = n. Hence [F(k, m, n)] is true under the standard interpretation of PA. It then follows from the definition of [F(x₁, y, z)] in Lemma 3 that, for any given natural numbers k and m, we can construct some pair of natural numbers b, c—where b and c are functions of the given natural numbers k and m—such that:

(a) β(b, c, i) = f(k, i) for 0 ≤ i ≤ m;

(b) F(k, m, f(k, m)) holds in ℕ.

Since β(b, c, i) is primitive recursive, it defines a deterministic Turing machine that can `construct’ the sequence β(b, c, 0), β(b, c, 1), …, β(b, c, m) for any given natural numbers b and c such that:

(c) β(b, c, i) = f(k, i) for 0 ≤ i ≤ m.

We can thus define a deterministic Turing machine that will give evidence that the PA formula [F(k, m, n)] is true under the standard interpretation of PA.

Hence [F(x₁, y, z)] is algorithmically verifiable over ℕ under the standard interpretation of PA.

(2) Now, the pair of natural numbers b, c are defined such that:

(a) β(b, c, i) = f(k, i) for 0 ≤ i ≤ m;

(b) F(k, m, f(k, m)) holds in ℕ;

where c is defined as j! (as in Lemma 1), and:

(c) j = max(m, f(k, 0), f(k, 1), …, f(k, m));

(d) m+1 is the `number’ of terms in the sequence f(k, 0), f(k, 1), …, f(k, m).

Since j is not definable for a denumerable sequence f(k, 0), f(k, 1), …, we cannot define a pair of natural numbers b, c such that:

(e) β(b, c, i) = f(k, i) for all i ≥ 0.

We cannot thus define a deterministic Turing machine that will give evidence that the PA formula [F(x₁, y, z)] interprets as true under the standard interpretation of PA for any given sequence of numerals [k, m, n].

Hence [F(x₁, y, z)] is not algorithmically computable over ℕ under the standard interpretation of PA.

The theorem follows.

**Corollary 1:** If the standard interpretation of PA is sound, then the classical Church and Turing theses are false.

The above theorem now suggests the following definition:

**Definition 2:** (*Effective computability*) A number-theoretic function is effectively computable if, and only if, it is algorithmically verifiable.

Such a definition of effective computability now allows the classical Church and Turing theses to be expressed as the weak equivalences in —rather than as identities—without any apparent loss of generality.
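The distinction driving Definition 2 can be glossed informally as follows (this is an illustrative sketch of the idea, not the paper’s formal definitions): a function is algorithmically verifiable if, for each bound, *some* program decides it correctly up to that bound; it is algorithmically computable only if a *single* program works for all bounds. The factory below builds such a per-bound verifier from a finite prefix of values:

```python
def make_verifier(prefix):
    """Given the first len(prefix) values of some number-theoretic
    function, return a program (a closure over a finite table) that
    decides the function correctly up to that bound, and no further."""
    table = dict(enumerate(prefix))
    def verify(k):
        if k not in table:
            raise ValueError("k lies outside this verifier's range")
        return table[k]
    return verify

# For every bound n some such verifier exists (verifiability); the point
# is that the family of verifiers need not arise from one uniform
# program that works for all n (computability).
```

The design choice is deliberate: each verifier is a distinct, self-contained program whose length grows with the prefix, so no single member of the family certifies the function everywhere.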

**References**

**An07** Bhupinder Singh Anand. 2007. *Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II.* In *The Reasoner*, Vol(1)7, pp.2-3.

**An12** Bhupinder Singh Anand. 2012. *Evidence-Based Interpretations of PA.* In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012 - Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

**BBJ03** George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. *Computability and Logic* (4th ed.). Cambridge University Press, Cambridge.

**Bri93** Selmer Bringsjord. 1993. *The Narrational Case Against Church’s Thesis.* Easter APA meetings, Atlanta.

**Ch36** Alonzo Church. 1936. *An unsolvable problem of elementary number theory.* In M. Davis (ed.). 1965. *The Undecidable.* Raven Press, New York. Reprinted from the American Journal of Mathematics, Vol. 58, pp.345-363.

**Ct75** Gregory J. Chaitin. 1975. *A Theory of Program Size Formally Identical to Information Theory.* Journal of the Association for Computing Machinery, Vol. 22 (1975), pp.329-340.

**Go31** Kurt Gödel. 1931. *On formally undecidable propositions of Principia Mathematica and related systems I.* Translated by Elliott Mendelson. In M. Davis (ed.). 1965. *The Undecidable.* Raven Press, New York.

**Go51** Kurt Gödel. 1951. *Some basic theorems on the foundations of mathematics and their implications.* Gibbs lecture. In Solomon Feferman et al (eds.). 1995. *Kurt Gödel, Collected Works III: Unpublished Essays and Lectures.* Oxford University Press, New York. pp.304-323.

**Ka59** László Kalmár. 1959. *An Argument Against the Plausibility of Church’s Thesis.* In A. Heyting (ed.). *Constructivity in Mathematics.* North-Holland, Amsterdam.

**Kl36** Stephen Cole Kleene. 1936. *General Recursive Functions of Natural Numbers.* Mathematische Annalen, Vol. 112 (1936), pp.727-766.

**Me64** Elliott Mendelson. 1964. *Introduction to Mathematical Logic.* Van Nostrand, Princeton.

**Me90** Elliott Mendelson. 1990. *Second Thoughts About Church’s Thesis and Mathematical Proofs.* The Journal of Philosophy, Vol. 87, No. 5.

**Pa71** Rohit Parikh. 1971. *Existence and Feasibility in Arithmetic.* The Journal of Symbolic Logic, Vol. 36, No. 3 (Sep., 1971), pp.494-508.

**Si97** Wilfried Sieg. 1997. *Step by recursive step: Church’s analysis of effective calculability.* The Bulletin of Symbolic Logic, Volume 3, Number 2.

**Sm07** Peter Smith. 2007. *Church’s Thesis after 70 Years.* A commentary and critical review of *Church’s Thesis After 70 Years*, Meinong Studies Vol. 1 (Ontos Mathematical Logic 1), 2006 (2013), Eds. Adam Olszewski, Jan Woleński, Robert Janusz. Ontos Verlag (Walter de Gruyter), Frankfurt, Germany.

**Tu36** Alan Turing. 1936. *On computable numbers, with an application to the Entscheidungsproblem.* In M. Davis (ed.). 1965. *The Undecidable.* Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, Ser. 2, Vol. 42 (1936-7), pp.230-265; corrections, Ibid, Vol. 43 (1937), pp.544-546.

**Notes**

Return to 1: cf. Me64, p.237.

Return to 2: *Church’s (original) Thesis:* The effectively computable number-theoretic functions are the algorithmically computable number-theoretic functions Ch36.

Return to 3: cf. Me64, p.227.

Return to 4: After describing what he meant by “computable” numbers in the opening sentence of his 1936 paper on Computable Numbers Tu36, Turing immediately expressed this thesis—albeit informally—as: “… the computable numbers include all numbers which could naturally be regarded as computable”.

Return to 5: cf. BBJ03, p.33.

Return to 6: See Si97.

Return to 7: Go51.

Return to 8: Parikh’s paper Pa71 can also be viewed as an attempt to investigate the consequences of expressing the essence of Gödel’s remarks formally.

Return to 9: Tu36, p.139.

Return to 10: Parikh’s distinction between `decidability’ and `feasibility’ in Pa71 also appears to echo the need for such a distinction.

Return to 11: BBJ03, p. 37.

Return to 12: The issue here seems to be that, when using language to express the abstract objects of our individual, and common, mental `concept spaces’, we use the word `exists’ loosely in three senses, without making explicit distinctions between them (see An07).

Return to 13: BBJ03, p.37.

Return to 14: Chaitin’s Halting Probability is given by Ω = ∑ 2^{-|p|}, where the summation is over all self-delimiting programs p that halt, and |p| is the size in bits of the halting program p; see Ct75.
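The fact that Ω is approximable from below can be illustrated with a toy sketch (the prefix-free codes here are hypothetical, and this is not a real universal machine): given any finite set of self-delimiting programs already known to halt, summing 2^{-|p|} over them yields a rational lower bound on Ω.

```python
from fractions import Fraction

# Toy illustration: a hypothetical prefix-free set of program codes,
# each already known to halt. Summing 2^-|p| over them gives a lower
# bound on Omega; enumerating more halting programs only raises it.
halting_programs = ["0", "10", "110"]
omega_lower = sum(Fraction(1, 2 ** len(p)) for p in halting_programs)
```

By Kraft’s inequality such a sum over any prefix-free set never exceeds 1, which is what keeps Ω a probability.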

Return to 15: The incongruity of this is addressed by Parikh in Pa71.

Return to 16: The only difference being that, in the latter case, we know there is a common `program’ of constant length that will compute for any given natural number ; in the former, we know we may need distinctly different programs for computing for different values of , where the length of the program will, sometimes, reference .

Return to 17: By Kurt Gödel; see Go31, Theorem VII.

Return to 18: **Analogous distinctions in analysis:** The distinction between algorithmically computable, and algorithmically verifiable but not algorithmically computable, number-theoretic functions seeks to reflect in arithmetic the essence of *uniform* methods (formally detailed in the Birmingham paper (reproduced in this post) and in its main consequence—the Provability Theorem for PA—as detailed in this post), classically characterised by the distinctions in analysis between: (a) uniformly continuous, and point-wise continuous but not uniformly continuous, functions over an interval; (b) uniformly convergent, and point-wise convergent but not uniformly convergent, series.

**A limitation of set theory and a possible barrier to computation:** We note, further, that the above distinction cannot be reflected within a language—such as the set theory ZF—which identifies `equality’ with `equivalence’. Since functions are defined extensionally as mappings, such a language cannot recognise that a set which represents a primitive recursive function may be equivalent to, but computationally different from, a set that represents an arithmetical function; where the former function is algorithmically computable over , whilst the latter is algorithmically verifiable but not algorithmically computable over .

Return to 19: See the Provability Theorem for PA in this post.

Return to 20: cf. Go31, p.31, Lemma 1; Me64, p.131, Proposition 3.21.

Return to 21: cf. Go31, p.31, Lemma 1; Me64, p.131, Proposition 3.22.

Return to 22: Me64, p.118.

Return to 23: cf. Me64, p.131, proposition 3.21.

Return to 24: The implicit assumption being that the negation of a universally quantified formula of the first-order predicate calculus is indicative of “the existence of a counter-example”—Go31, p.32.

Return to 25: cf. Me64, p.132; Go31, p.30(2).

Return to 26: cf. Go31, p.31(2); Me64, p.132.

Return to 27: A critical philosophical issue that we do not address here is whether the PA formula can be considered to interpret under a sound interpretation of PA as a well-defined predicate, since the denumerable sequences and is not equal to if is not equal to —are represented by denumerable, distinctly different, functions respectively. There are thus denumerable pairs for which yields the sequence .
