Why primality is polynomial time, but factorisation is not

Differentiating between the signature of a number and its value

A brief review: The significance of evidence-based reasoning

In a paper: `The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis’, which appeared in the December 2016 issue of Cognitive Systems Research [An16], I briefly addressed the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for objectively evidencing such acceptance in the sense of Chetan Murthy and Martin Löb:

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …” … Chetan. R. Murthy: [Mu91], \S 1 Introduction.

“Intuitively we require that for each event-describing sentence, \phi_{o^{\iota}}n_{\iota} say (i.e. the concrete object denoted by n_{\iota} exhibits the property expressed by \phi_{o^{\iota}} ), there shall be an algorithm (depending on I, i.e. M^{*} ) to decide the truth or falsity of that sentence.” … Martin H Löb: [Lob59], p.165.

Definition 1 (Evidence-based reasoning in Arithmetic): Evidence-based reasoning accepts arithmetical propositions as true under an interpretation if, and only if, there is some specified methodology for objectively evidencing such acceptance.

The significance of introducing evidence-based reasoning for assigning truth values to the formulas of a first-order Peano Arithmetic, such as PA, under a well-defined interpretation (see Section 3 in [An16]), is that it admits the distinction:

(1) algorithmically verifiable `truth’ (Definition 2); and

(2) algorithmically computable `truth’ (Definition 3).

Definition 2 (Deterministic algorithm): A deterministic algorithm computes a mathematical function which has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.

Note that a deterministic algorithm can be suitably defined as a ‘realizer’ in the sense of the Brouwer-Heyting-Kolmogorov rules (see [Ba16], p.5).

For instance, under evidence-based reasoning the formula [(\forall x)F(x)] of the first-order Peano Arithmetic PA must always be interpreted weakly under the classical, standard, interpretation of PA (see [An16], Theorem 5.6) in terms of algorithmic verifiability (see [An16], Definition 1); where, if the PA-formula [F(x)] interprets as an arithmetical relation F^{*}(x) over N :

Definition 2 (Algorithmic verifiability): The number-theoretical relation F^{*}(x) is algorithmically verifiable if, and only if, for any natural number n , there is a deterministic algorithm AL_{(F,\ n)} which can provide evidence for deciding the truth/falsity of each proposition in the finite sequence \{F^{*}(1), F^{*}(2), \ldots, F^{*}(n)\} .

Whereas [(\forall x)F(x)] must always be interpreted strongly under the finitary interpretation of PA (see [An16], Theorem 6.7) in terms of algorithmic computability ([An16], Definition 2), where:

Definition 3 (Algorithmic computability): The number theoretical relation F^{*}(x) is algorithmically computable if, and only if, there is a deterministic algorithm AL_{F} that can provide evidence for deciding the truth/falsity of each proposition in the denumerable sequence \{F^{*}(1), F^{*}(2), \ldots\} .
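The distinction can be sketched in Python (an illustration of mine, not drawn from [An16]); the predicate `k is even’ is an arbitrary stand-in for F^{*}(x) , a single uniform procedure models algorithmic computability, and a family of per-bound tabulated checkers models algorithmic verifiability:

```python
from typing import Callable

# Algorithmic computability: ONE deterministic algorithm AL_F decides
# F*(k) for every k in the denumerable sequence F*(1), F*(2), ...
def al_f(k: int) -> bool:
    """A single uniform algorithm deciding the stand-in relation F*(k): 'k is even'."""
    return k % 2 == 0

# Algorithmic verifiability: for EACH n there need only exist SOME
# algorithm AL_{F, n} deciding the finite sequence F*(1), ..., F*(n);
# the verifiers for different n need not arise from one uniform procedure.
def make_al_f_n(n: int) -> Callable[[int], bool]:
    """Return a verifier for the finite initial segment up to n;
    tabulating the answers is always possible for a finite sequence."""
    table = {k: k % 2 == 0 for k in range(1, n + 1)}
    return lambda k: table[k]

verifier = make_al_f_n(10)   # decides F*(1), ..., F*(10) only
```

Every algorithmically computable relation is trivially algorithmically verifiable (restrict AL_{F} to each finite segment); it is the converse direction that the two interpretations of PA keep apart.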

The significance of the distinction between algorithmically computable reasoning based on algorithmically computable truth, and algorithmically verifiable reasoning based on algorithmically verifiable truth, is that it admits the following, hitherto unsuspected, consequences:

(i) PA has two well-defined interpretations over the domain N of the natural numbers (including 0 ):

(a) the weak non-finitary standard interpretation I_{PA(N, SV)} ([An16], Theorem 5.6),

and

(b) a strong finitary interpretation I_{PA(N, SC)} ([An16], Theorem 6.7);

(ii) PA is non-finitarily consistent under I_{PA(N, SV)} ([An16], Theorem 5.7);

(iii) PA is finitarily consistent under I_{PA(N, SC)} ([An16], Theorem 6.8).

The significance of evidence-based reasoning for Computational Complexity

In this investigation I now show that the relevance of evidence-based reasoning, and of the distinction between algorithmically verifiable and algorithmically computable number-theoretic functions (as defined above), for Computational Complexity is that it provides a formal foundation for placing in perspective, and complementing, an uncomfortably counter-intuitive entailment in number theory—Theorem 2 below—which conventional wisdom has treated as sufficient for concluding that the prime divisors of an integer cannot be proven to be mutually independent.

However, I show that such informally perceived barriers are, in this instance, illusory; and that admitting the above distinction illustrates:

(a) Why the prime divisors of an integer are mutually independent (Theorem 2);

(b) Why determining whether the signature (Definition 4 below) of a given integer n —coded as the key in a modified Bazeries-cylinder (see Definition 7 of this paper) based combination lock—is that of a prime, or not, can be done in polynomial time O(log_{_{e}}n) (Corollary 4 of this paper); as compared to the time \tilde{O}(log_{_{e}}^{15/2}n) given by Agrawal et al in [AKS04], and improved to \tilde{O}(log_{_{e}}^{6}n) by Lenstra and Pomerance in [LP11], for determining whether the value of a given integer n is that of a prime or not.

(c) Why it can be cogently argued that determining a factor of a given integer cannot be polynomial time.

Definition 4 (Signature of a number): The signature of a given integer n is the sequence a_{_{n,i}} where n + a_{_{n,i}} \equiv 0\ mod\ (p_{_{i}}) for all primes p_{_{i}}\ such\ that\ 1\leq i \leq \pi(\sqrt{n}) .

The signature is unique since, if p_{_{\pi(\sqrt{m})+1}}^{2} > m \geq p_{_{\pi(\sqrt{m})}}^{2} and p_{_{\pi(\sqrt{n})+1}}^{2} > n \geq p_{_{\pi(\sqrt{n})}}^{2} have the same signature, then |m - n| = c_{_{1}}.\prod_{i=1}^{\pi(\sqrt{m})}p_{_{i}} = c_{_{2}}.\prod_{i=1}^{\pi(\sqrt{n})}p_{_{i}} ; whence c_{_{1}} = c_{_{2}} = 0 , since \prod_{i=1}^{k}p_{_{i}} > (\prod_{i=2}^{k-2}p_{_{i}}).p_{_{k}}^{^{2}} > p_{_{k+1}}^{2} for k > 4 by appeal to Bertrand’s Postulate 2.p_{_{k}} > p_{_{k+1}} ; and the uniqueness is easily verified for k \leq 4 .
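Since n + a_{_{n,i}} \equiv 0\ mod\ (p_{_{i}}) just says that a_{_{n,i}} is the least non-negative residue of -n modulo p_{_{i}} , the signature is directly computable. A minimal Python sketch (mine, with a helper primes_up_to that is not part of the text):

```python
import math

def primes_up_to(limit: int) -> list[int]:
    """Primes <= limit, by the Sieve of Eratosthenes."""
    if limit < 2:
        return []
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def signature(n: int) -> list[int]:
    """The sequence a_{n,i} = (-n) mod p_i over all primes p_i <= sqrt(n)."""
    return [(-n) % p for p in primes_up_to(math.isqrt(n))]
```

A zero entry means some p_{_{i}} \leq \sqrt{n} divides n , so for n \geq 4 the signature contains a 0 exactly when n is composite; e.g. signature(49) ends in 0, while signature(53) has no zero entry.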

Definition 5 (Value of a number): The value of a given integer n is any well-defined interpretation—over the domain of the natural numbers—of the (unique) numeral [n] that represents n in the first-order Peano Arithmetic PA.

We note that Theorem 2 establishes a lower limit for [AKS04] and [LP11], because determining the signature of a given integer n does not require knowledge of the value of the integer as defined by the Fundamental Theorem of Arithmetic.

Theorem 1: (Fundamental Theorem of Arithmetic): Every positive integer n > 1 can be represented in exactly one way as a product of prime powers:

n=p_{1}^{n_{1}}p_{2}^{n_{2}}\cdots p_{k}^{n_{k}}=\prod _{i=1}^{k}p_{i}^{n_{i}}

where p_{1} < p_{2} < \ldots < p_{k} are primes and the n_{i} are positive integers.
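Theorem 1 can be exercised mechanically for small n ; the following trial-division sketch (illustrative only, not part of the source text) recovers the unique prime-power representation:

```python
def prime_power_factorisation(n: int) -> dict[int, int]:
    """Return the unique representation of n > 1 as {p_i: n_i},
    with n equal to the product of p_i ** n_i (Fundamental Theorem of Arithmetic)."""
    factors: dict[int, int] = {}
    d = 2
    while d * d <= n:
        while n % d == 0:          # d is necessarily prime when first reached
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                      # leftover prime factor exceeding sqrt of the original n
        factors[n] = factors.get(n, 0) + 1
    return factors
```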

Are the prime divisors of an integer mutually independent?

In this paper I address the query:

Query 1: Are the prime divisors of an integer n mutually independent?

Definition 6 (Independent events): Two events are independent if the occurrence of one event does not influence (and is not influenced by) the occurrence of the other.

Intuitively, the prime divisors of an integer seem to be mutually independent by virtue of the Fundamental Theorem of Arithmetic.

Moreover, the prime divisors of n can also be seen to be mutually independent in the usual, linearly displayed, Sieve of Eratosthenes, where whether an integer n is crossed out as a multiple of a prime p is obviously independent of whether it is also crossed out as a multiple of a prime q \neq p :

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 …
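The crossings can be recorded explicitly. In the Python sketch below (an illustration of mine), whether a prime p crosses out n depends only on whether n is a multiple of p , never on the behaviour of any other prime q :

```python
def sieve_crossings(limit: int):
    """Sieve of Eratosthenes, recording for each n the set of primes
    that cross it out; each crossing depends only on n mod p."""
    crossed_by = {n: set() for n in range(2, limit + 1)}
    for p in range(2, limit + 1):
        if not crossed_by[p]:        # p survived every smaller prime, so p is prime
            for multiple in range(2 * p, limit + 1, p):
                crossed_by[multiple].add(p)
    primes = [n for n in range(2, limit + 1) if not crossed_by[n]]
    return primes, crossed_by

primes, crossed_by = sieve_crossings(19)
# 15, for instance, is crossed out by 3 and by 5, each independently of the other
```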

Despite such compelling evidence, conventional wisdom appears to accept as definitive the counter-intuitive conclusion that although we can see it as true, we cannot mathematically prove the following proposition as true:

Proposition 1: Whether or not a prime p divides an integer n is independent of whether or not a prime q \neq p divides the integer n .

We note that such an unprovable-but-intuitively-true conclusion makes a stronger assumption than that in Gödel’s similar claim for his arithmetical formula [(\forall x)R(x)] —whose Gödel-number is 17Gen\ r —in [Go31], p.26(2). Stronger, since Gödel does not assume his proposition to be intuitively true, but shows that though the arithmetical formula with Gödel-number 17Gen\ r is not provable in his Peano Arithmetic P yet, for any P -numeral [n] , the formula [R(n)] whose Gödel-number is Sb \left(r \begin{array}{c}17 \\ Z(n)\end{array}\right) is P -provable, and therefore meta-mathematically true under any well-defined Tarskian interpretation of P (cf., [An16], Section 3.).

Expressed in computational terms (see [An16], Corollary 8.3), under any well-defined interpretation of P , Gödel’s formula [R(x)] translates as an arithmetical relation, say R'(x) , such that R'(n) is algorithmically verifiable, but not algorithmically computable, as always true over N , since [\neg (\forall x)R(x)] is P -provable ([An16], Corollary 8.2).

We thus argue that a perspective which denies Proposition 1 is based on perceived barriers that reflect, and are peculiar to, only the argument that:

Theorem 2: There is no deterministic algorithm that, for any given n , and any given prime p \geq 2 , will evidence that the probability \mathbb{P}(p\ |\ n) that p divides n is \frac{1}{p} , and the probability \mathbb{P}(p\not|\ n) that p does not divide n is 1 - \frac{1}{p} .

Proof: By a standard result in the Theory of Numbers ([Ste02], Chapter 2, p.9, Theorem 2.1), we cannot define a probability function for the probability that a random n is prime over the probability space (1, 2, 3, \ldots) .

(Compare with the informal argument in [HL23], pp.36-37.)

In other words, treating Theorem 2 as an absolute barrier does not admit the possibility—which has consequences for the resolution of outstanding problems in both the theory of numbers and computational complexity—that Proposition 1 is algorithmically verifiable, but not algorithmically computable, as true, since:

Theorem 3: For any given n , there is a deterministic algorithm that, given any prime p \geq 2 , will evidence that the probability \mathbb{P}(p\ |\ n) that p divides n is \frac{1}{p} , and the probability \mathbb{P}(p\not|\ n) that p does not divide n is 1 - \frac{1}{p} .
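For any fixed prime p , the probability \frac{1}{p} can be evidenced, bound by bound, as a natural density: exactly \lfloor N/p \rfloor of the integers 1, \ldots, N are divisible by p . A small deterministic counting sketch (mine, not the paper’s):

```python
def divisibility_frequency(p: int, N: int) -> float:
    """Fraction of n in 1..N with p | n, obtained by deterministic
    counting rather than by any probabilistic assumption."""
    return sum(1 for n in range(1, N + 1) if n % p == 0) / N

# floor(N/p)/N tends to 1/p as N grows
for p in (2, 3, 5, 7):
    assert abs(divisibility_frequency(p, 10_000) - 1 / p) < 1e-3
```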

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

Can Gödel be held responsible for not clearly distinguishing—in his seminal 1931 paper on formally undecidable propositions (pp.596-616, ‘From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931‘, Jean van Heijenoort, Harvard University Press, 1976 printing)—between the implicit circularity that is masked by the non-constructive nature of his proof of undecidability in PM, and the lack of any circularity in his finitary proof of undecidability in Peano Arithmetic?

“The analogy of this argument with the Richard antinomy leaps to the eye. It is closely related to the “Liar” too;[Fn.14] for the undecidable proposition [R (q); q] states that q belongs to K , that is, by (1), that [R (q); q] is not provable. We therefore have before us a proposition that says about itself that it is not provable [in PM].[Fn.15]

[Fn.14] Any epistemological antinomy could be used for a similar proof of the existence of undecidable propositions.”

[Fn.15] Contrary to appearances, such a proposition involves no faulty circularity, for initially it [only] asserts that a certain well-defined formula (namely, the one obtained from the q th formula in the lexicographic order by a certain substitution) is unprovable. Only subsequently (and so to speak by chance) does it turn out that this formula is precisely the one by which the proposition itself was expressed.”

It is a question worth asking, if we heed Abel-Luis Peralta, who is a Graduate in Scientific Calculus and Computer Science in the Faculty of Exact Sciences at the National University of La Plata in Buenos Aires, Argentina; and who has been contending in a number of posts on his Academia web-page that:

(i) Gödel’s semantic definition of ‘[R(n) : n] ‘, and therefore of ‘\neg Bew[R(n) : n] ‘, is not only:

(a) self-referential under interpretation—in the sense of the above quote (pp.597-598, van Heijenoort) from Gödel’s Introduction in his 1931 paper ‘On Formally Undecidable Propositions of Principia Mathematica and Related Systems I’ (pp.596-616, van Heijenoort);

but that:

(b) neither of the definitions can be verified by a deterministic Turing machine as yielding a valid formula of PM.

Peralta is, of course, absolutely right in his contentions.

However, such non-constructiveness is a characteristic of any set-theoretical system in which PM is interpretable; and in which, by Gödel’s self-confessed Platonism (apparent in his footnote #15 in the quote above), his definitions of ‘[R(n) : n] ‘ and ‘\neg Bew[R(n) : n] ‘ need not be verifiable by a deterministic Turing machine in order to be treated as valid formulas of PM.

Reason: By the usual axiom of separation of any formal set theory such as ZFC in which PM is interpreted, Gödel’s set-theoretical definition (p.598, Heijenoort):

n \in K \equiv \neg Bew[R(n) : n]

lends legitimacy to \neg Bew[R(n) : n] as a PM formula.

Thus Gödel can formally assume—without further proof, by appeal simply to the axiom of choice of ZFC—that the PM formulas with exactly one variable—of the type of natural numbers—can be well-ordered in a sequence in some way such as, for example (Fn.11, p.598, Heijenoort):

“… by increasing the sum of the finite sequences of integers that is the ‘class sign’, and lexicographically for equal sums.”

We cannot, though, conclude from this that:

(ii) Gödel’s formally undecidable P-formula, say [(\forall x)R(x)] —whose Gödel-number is defined as 17Gen\ r in Gödel’s proof of his Theorem VI (on pp.607-609 of van Heijenoort)—also cannot be verified by a deterministic Turing machine to be a valid formula of Gödel’s Peano Arithmetic P.

Reason: The axioms of set-theoretical systems such as PM, ZF, etc. would all admit—under a well-defined interpretation, if any—infinite elements, in the putative domain of any such interpretation, which are not Turing-definable.

Nevertheless, to be fair to two generations of scholars who—apart from those who are able to comfortably wear the logician’s hat—have laboured in attempts to place the philosophical underpinnings of Gödel’s reasoning (in his 1931 paper) in a coherent perspective (see this post; also this and this), I think Gödel must, to some extent, be held responsible—but in no way accountable—for the lack of a clear-cut distinction between the non-constructivity implicit in his semantic proof in (i), and the finitarity that he explicitly ensures for his syntactic proof in (ii).

Reason: Neither in his title, nor elsewhere in his paper, does Gödel categorically state that his goal was:

(iii) not only to demonstrate the existence of formally undecidable propositions in PM, a system which admits non-finitary elements under any putative interpretation;

(iv) but also to prevent the admittance of non-finitary elements—precisely those which would admit conclusions such as (ii)—when demonstrating the existence of formally undecidable propositions in ‘related’ systems such as his Peano Arithmetic P.

He merely hints at this by stating (see quote below from pp.587-589 of van Heijenoort) that his demonstration of (iii) is a ‘sketch’ that lacked the precision which he intended to achieve in (iv):

“Before going into details, we shall first sketch the main idea of the proof, of course without any claim to complete precision. The formulas of a formal system (we restrict ourselves here to the system PM) in outward appearance are finite sequences of primitive signs (variables, logical constants, and parentheses or punctuation dots), and it is easy to state with complete precision which sequences of primitive signs are meaningful formulas and which are not….”

by:

(v) weakening the implicit assumption—of the decidability of the semantic truth of PM-propositions under any well-defined interpretation of PM—which underlies his proof of the existence of formally undecidable set-theoretical propositions in PM;

“The method of proof just explained can clearly be applied to any formal system that, first, when interpreted as representing a system of notions and propositions, has at its disposal sufficient means of expression to define the notions occurring in the argument above (in particular, the notion “provable formula”) and in which, second, every provable formula is true in the interpretation considered. The purpose of carrying out the above proof with full precision in what follows is, among other things, to replace the second of the assumptions just mentioned by a purely formal and much weaker one.”

and:

(vi) insisting—in his proof of the existence of formally undecidable arithmetical propositions in his Peano Arithmetic P—upon the introduction of a methodology for constructively assigning unique truth values to only those (primitive recursive) quantified number-theoretic assertions (#1 to #45 on pp.603-606 of van Heijenoort) that are bounded when interpreted over the domain N of the natural numbers (footnote #34 on p.603 of van Heijenoort):

“Wherever one of the signs (x) , (Ex) , or \varepsilon x occurs in the definitions below, it is followed by a bound on x . This bound serves merely to ensure that the notion defined is recursive (see Theorem IV). But in most cases the extension of the notion defined would not change if this bound were omitted.”

From today’s perspective, one could reasonably hold that—as Peralta implicitly contends—Gödel is misleadingly suggesting (in the initial quote above from pp.587-589 of van Heijenoort) that his definitions of ‘[R(n) : n] ‘ and ‘\neg Bew[R(n) : n] ‘ may be treated as yielding ‘meaningful’ formulas of PM which are well-definable constructively (in the sense of being definable by a deterministic Turing machine).

In my previous post I detailed precisely why such an assumption would be fragile, by showing how the introduction of the boundedness Gödel insisted upon in (vi) distinguishes:

(vii) Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (pp.598-599 of van Heijenoort), which admits Peralta’s contention (1);

from:

(viii) Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (pp.607-609 of van Heijenoort), which does not admit the corresponding contention (ii).

Moreover, we note that:

(1) Whereas Gödel can—albeit non-constructively—claim that his definition of ‘Bew[R(n) : n] ‘ yields a formula in PM, we cannot claim, correspondingly, that his primitive recursive formula Bew(x) is a formula in his Peano Arithmetic P.

(2) The latter is a number-theoretic relation defined by Gödel in terms of his primitive recursive relation #45, ‘xBy ‘, as:

#46. Bew(x) \equiv (\exists y)yBx .

(3) In Gödel’s terminology, ‘Bew(x) ‘ translates under interpretation over the domain N of the natural numbers as:

‘x is the Gödel-number of some provable formula [F] of Gödel’s Peano Arithmetic P’.

(4) However, unlike Gödel’s primitive recursive functions and relations #1 to #45, both ‘(\exists y)yBx ‘ and ‘\neg (\exists y)yBx ‘ are number-theoretic relations which are not primitive recursive—which means that they are not effectively decidable by a Turing machine under interpretation in N.

(5) Reason: Unlike in Gödel’s definitions #1 to #45 (see footnote #34 on p.603 of van Heijenoort, quoted above), there is no bound on the quantifier ‘(\exists y) ‘ in the definition of Bew(x) .

Hence, by Turing’s Halting Theorem, we cannot claim—in the absence of specific proof to the contrary—that there must be some deterministic Turing machine which will determine whether or not, for any given natural number m , the assertion Bew(m) is true under interpretation in N.
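The role the bound plays can be made concrete in a short Python sketch (an illustration of mine): a bounded existential quantifier is decidable by exhaustive search, whereas an unbounded one yields only a search that halts just when a witness exists:

```python
def bounded_exists(predicate, bound: int) -> bool:
    """(exists y <= bound) predicate(y): the search space is finite,
    so this always terminates; it is a decision procedure."""
    return any(predicate(y) for y in range(bound + 1))

def unbounded_search(predicate) -> int:
    """(exists y) predicate(y): halts, returning the least witness, only
    if a witness exists; it verifies true instances but decides nothing."""
    y = 0
    while not predicate(y):
        y += 1
    return y
```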

This is the crucial difference between Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (which admits Peralta’s contention (i)), and Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (which does not admit the corresponding contention (ii)).

(6) We cannot, therefore—in the absence of specific proof to the contrary—claim by Gödel’s Theorems V or VII that there must be some P-formula, say [Bew_{_{PA}}(x)] (corresponding to the PM-formula Bew[R(n) : n] ), such that, for any given natural number m :

(a) If Bew(m) is true under interpretation in N, then [Bew_{_{PA}}(m)] is provable in P;

(b) If \neg Bew(m) is true under interpretation in N, then [\neg Bew_{_{PA}}(m)] is provable in P.


A: Is Gödel’s reasoning really kosher?

Many scholars yet harbour a lingering suspicion that Gödel’s definition of his formally undecidable arithmetical proposition [(\forall x)R(x,p)] involves a latent contradiction—arising from a putative, implicit, circular self-reference—that is masked by unverifiable, even if not patently invalid, mathematical reasoning.

The following proof of Gödel’s Theorem VI of his 1931 paper is intended to:

\bullet strip away the usual mathematical jargon that shrouds proofs of Gödel’s argument and makes his—admittedly arcane—reasoning difficult for a non-logician to unravel;

and

\bullet show that, and why—unlike in the case of the paradoxical ‘Liar’ sentence: ‘This sentence is a lie’—Gödel’s proposition [(\forall x)R(x, p)] does not involve any circular self-reference that could yield a Liar-like contradiction, either in a formal mathematical language, or when interpreted in any language of common discourse.

B: Gödel’s 45 primitive recursive arithmetic functions and relations

We begin by noting that:

(1) In his 1931 paper on formally ‘undecidable’ arithmetical propositions, Gödel shows that, given a well-defined system of Gödel-numbering, every formula of a first-order Peano Arithmetic such as PA can be Gödel-numbered by Gödel’s primitive recursive relation #23, Form(x) , which is true if, and only if, x is the Gödel-number (GN) of a formula of PA.

(2) So, given any natural number n , (1) allows us to decompose n and effectively determine whether, or not, n is the GN of some PA formula.
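A toy version of the coding behind (1) and (2) (my sketch, assuming every symbol code is at least 1) maps a sequence of symbol codes c_{1}, \ldots, c_{k} to 2^{c_{1}}.3^{c_{2}} \cdots p_{k}^{c_{k}} and recovers it by prime-power decomposition:

```python
def next_prime(p: int) -> int:
    """Smallest prime greater than p (naive trial division)."""
    candidate = p + 1
    while any(candidate % d == 0 for d in range(2, candidate)):
        candidate += 1
    return candidate

def godel_number(codes: list[int]) -> int:
    """Encode symbol codes c_1, ..., c_k as 2**c_1 * 3**c_2 * ... * p_k**c_k."""
    n, p = 1, 2
    for c in codes:
        n *= p ** c
        p = next_prime(p)
    return n

def decode(n: int) -> list[int]:
    """Recover the code sequence by prime-power decomposition
    (faithful provided every code is >= 1)."""
    codes, p = [], 2
    while n > 1:
        e = 0
        while n % p == 0:
            e += 1
            n //= p
        codes.append(e)
        p = next_prime(p)
    return codes
```

Gödel’s actual scheme in #1 to #45 is more elaborate, but the effective decomposability that Form(x) exploits is of exactly this kind.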

(3) Gödel also defines a primitive recursive relation #44, Bw(x) , which is true if, and only if, x is the GN of a finite sequence of formulas in PA, each of which is either an axiom, or an immediate consequence of two preceding formulas in the sequence.

(4) So, given any natural number n , (3) allows us to effectively determine whether, or not, the natural number n is the GN of a proof sequence in PA.

(5) Further, Gödel defines a primitive recursive relation #45, xBy , which is true if, and only if, x is the GN of a proof sequence in PA whose last formula has the GN y .

(6) Gödel then defines a primitive recursive relation, say Q(x,y) \equiv xBSUBy , such that, for any m,n :

mBSUBn is true if, and only if, m happens to be a GN that can be decomposed into a proof sequence whose last member is some PA formula [F(n)] , and n happens to be a GN that decomposes into the PA-formula [F(u)] with only one variable [u] .

(7) The essence of Gödel’s Theorem VI lies in answering the question:

Query 1: Is there any natural number n for which mBSUBn is true?

C: Gödel’s reasoning in Peano Arithmetic

(8) Now, by Gödel’s Theorem VII (a standard representation theorem of arithmetic), xBSUBy can be expressed in PA by some (formally well-defined) formula [\neg R(x,y)] such that, for any m,n :

(a) If mBSUBn is true, then [\neg R(m,n)] is PA-provable;

(b) If \neg mBSUBn is true, then [R(m,n)] is PA-provable.

(9) Further, by (6) and (8), for any m,n , if n is the GN of F(x) then:

(a) If mBSUBn is true, then [\neg R(m,n)] is PA-provable; and m is a PA-proof of [F(n)] ;

(b) If \neg mBSUBn is true, then [R(m,n)] is PA-provable; and m is not a PA-proof of [F(n)] .

(10) In his Theorem VI, Gödel then argues as follows:

(a) Let q be the GN of the formula [R(x,y)] defined in (8).

(b) Let p be the GN of [(\forall x)R(x,y)] .

(c) Let r be the GN of [R(x,p)] .

(d) Let 17Gen\ r be the GN of [(\forall x)R(x,p)] .

(11) We note that all the above primitive recursive relations are formally well-defined within the standard primitive recursive arithmetic PRA; and all the PA-formulas—as well as their corresponding Gödel-numbers—are well-defined in the first-order Peano Arithmetic PA.

In other words, as Gödel emphasised in his paper, the 46—i.e., 45 + xBSUBy —PRA functions and relations that he defines are all bounded, and therefore effectively decidable as true or false over the domain N of the natural numbers; whilst the PA-formulas that he defines do not involve any reference—or self-reference—to either the meaning or the truth/falsity of any PA-formulas under an interpretation in N , but only to their PA-provability which, he shows, is effectively decidable by his system of Gödel-numbering and his definition of the primitive recursive relation xBy .

(12) If we now substitute p for n , and [(\forall x)R(x,p)] for [F(n)] , in (9) we have (since p is the GN of [(\forall x)R(x,y)] ) that:

(i) If mBSUBp is true, then [\neg R(m,p)] is PA-provable; whence m is a PA-proof of [(\forall x)R(x,p)] ;

(ii) If \neg mBSUBp is true, then [R(m,p)] is PA-provable; whence m is not a PA-proof of [(\forall x)R(x,p)] .

Hence n = p answers Query 1 affirmatively.

D: Gödel’s conclusions

(13) Gödel then concludes that, if PA is consistent then:

By (12)(i), if mBSUBp is true for some m , then both [\neg R(m,p)] and [(\forall x)R(x,p)] are PA-provable—a contradiction since, by instantiation in PA, the latter implies that [R(m,p)] is provable in PA.

Hence [(\forall x)R(x,p)] , whose GN is 17Gen\ r , is not provable in PA if PA is consistent.

(14) Moreover, if PA is assumed to also be \omega -consistent (which means that we cannot have a PA-provable formula [\neg (\forall x)F(x)] such that [F(m)] is also provable in PA for any given numeral [m] ) then:

By (13), m is not a PA-proof of [(\forall x)R(x,p)] for any given m ; whence [R(m,p)] is PA-provable for any given m by (12)(ii).

Hence [\neg (\forall x)R(x,p)] , whose GN is Neg(17Gen r) , is not provable in PA.

E: Gödel’s [(\forall x)R(x,p)] does not refer to itself

We note that Gödel’s formula [(\forall x)R(x,p)] —whose GN is 17Gen\ r — does not refer to itself since it is defined in terms of the natural number p , and not in terms of the natural number 17Gen\ r .

F: Somewhere, far beyond Gödel

The consequences of Gödel’s path-breaking answer to Query 1 are far-reaching (as detailed in this thesis).

For instance, taken together with the proof that PA is categorical with respect to algorithmic computability (Corollary 7.2 of this paper), and that PA is not \omega -consistent (Corollary 8.4 of this paper), the above entails that:

\bullet There can be no interpretation of Gödel’s definition of his formally undecidable arithmetical proposition [(\forall x)R(x,p)] over the domain of the natural numbers—whether expressed mathematically or in any language of common discourse—that could lead to a contradiction;

\bullet Gödel’s [(\forall x)R(x,p)] is not a formally undecidable arithmetical proposition, since [\neg (\forall x)R(x,p)] is PA-provable (see Corollary 8.2 of this paper).


Author’s version of a short note that appeared in the Communications of the ACM, October 2006, Vol. 49 No. 10, Pages 11-13, doi: 10.1145/1164394.1164406. The significance of this note is seen in this blogpost.

Peter Wegner’s and Dina Goldin’s “Viewpoint” (“Principles of Problem Solving,” July 2006) provoked strong criticism from Lance Fortnow [a professor of computer science at the University of Chicago] in his “Computational Complexity” blog (weblog.fortnow.com/, dated July 14, 2006), where he notes that:

“… Wegner and Goldin misinterpret the Church-Turing thesis …”.

What Fortnow apparently overlooked was that the classical Church-Turing Thesis—expressed as a strong identity—is committed to the following strong definition of computability:

(i) Definition: A number-theoretic function/relation (treated as a Boolean Function), say F(x_{_{1}}, x_{_{2}}, \ldots, x_{_{n}}) , is effectively computable if, and only if, there is an effective method T such that, given any sequence of natural numbers (a_{_{1}}, a_{_{2}}, \ldots, a_{_{n}}) , T will always compute F(a_{_{1}}, a_{_{2}}, \ldots, a_{_{n}}) .

The classical form of CT follows from (i):

(ii) Classical CT: A number-theoretic function/relation is effectively computable if, and only if, it is partial recursive/Turing-computable.

The question arises: Is such a strong commitment necessary?

The issue of whether (ii) is a definition, a theorem, or a thesis was the subject of a substantive exchange of views between Church and Gödel, neither of whom was apparently comfortable accepting (ii) as a thesis.

That their discomfort was well-founded is apparent if we replace (i) by the weaker and broader:

(iii) Definition: A number-theoretic function/relation, say F(x_{_{1}}, x_{_{2}}, \ldots, x_{_{n}}) , is effectively computable if, and only if, given any sequence of natural numbers (a_{_{1}}, a_{_{2}}, \ldots, a_{_{n}}) , there is an effective method T, which may depend on (a_{_{1}}, a_{_{2}}, \ldots, a_{_{n}}) , such that T will always compute F(a_{_{1}}, a_{_{2}}, \ldots, a_{_{n}}) .

Clearly, (i) implies (iii). Hence, by the dictum of Occam’s razor, (iii) is to be preferred as the definition of effective computability.

Moreover, the significance of (iii) over (i) is that it admits the broader possibility of number-theoretic functions/relations that are effectively computable instantiationally but not algorithmically.

The significance, and necessity, of a thesis linking effective computability with recursiveness/Turing-computability then emerges if we express CT as a weakened equivalence rather than as a strong identity:

(iv) Instantiational CT: A number-theoretic function/relation is effectively computable if, and only if, it is instantiationally equivalent to a partial recursive/Turing-computable function/relation.

We can then admit number-theoretic functions/relations that are effectively computable instantiationally but not algorithmically.

Are there any functions/relations that would justify (iv) intuitively?

Clearly, Turing’s Halting function and Chaitin’s Omegas are well-defined, total, number-theoretic functions that are effectively computable instantiationally but not algorithmically.

However, in the absence of (iv), we cannot link them instantiationally to recursive/Turing-computable functions.

On the other hand, if [(\forall x)R(x)] is Gödel’s undecidable arithmetical proposition, then, under the reasonable assumption that an algorithm which computes a total arithmetical relation as Turing-computably true can be converted into a proof sequence in Peano Arithmetic (an assumption vindicated by Theorem 7.1 of this 2016 paper), it can be shown that the arithmetical relation R(x) is effectively decidable instantiationally but not algorithmically.

This case is significant since Gödel showed (in Theorem VII of his seminal 1931 paper on undecidable arithmetical propositions) that R(x) is, indeed, instantiationally equivalent to a primitive recursive relation (which is decidable algorithmically) without appeal to any form of CT.

So Wegner’s and Goldin’s reservations about the unrestricted adoption of classical CT as definitive, and their underlying belief—that there can be effectively computable functions/relations that are not Turing-computable—ought not to be dismissed so cursorily, even though the relevance, and validity, of the particular arguments they offer in support of their belief are not obvious.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

1. Two product of remainders theorems

Theorem: The function f(n) = \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] is zero if, and only if, n is composite, where 0 \leq [\frac{n}{m}] < m is the remainder when n is divided by m , and p_{_{k}} is the k'th prime.

Proof: If n is composite, then the remainder [\frac{n}{p_{_{k}}}] is necessarily 0 for some p_{_{k}} \leq p_{_{\pi(\sqrt{n})}} which is a prime factor of n . Conversely, if [\frac{n}{p_{_{k}}}] is 0 for some p_{_{k}} \leq p_{_{\pi(\sqrt{n})}} , then p_{_{k}} is a proper divisor of n , and so n is necessarily composite. The theorem follows.

Corollary: The function f(n) = \prod_{_{k=2}}^{^{n-1}}[\frac{n}{k}] is zero if, and only if, n is composite, where 0 \leq [\frac{n}{m}] < m is the remainder when n is divided by m .
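Both criteria are mechanically checkable for small n . The following sketch (a naive Python implementation, assuming the usual convention that an empty product equals 1, and reading the first product as ranging over all primes p \leq \sqrt{n} ) verifies the theorem and its corollary against trial division for 4 \leq n < 200 :

```python
# Naive checks of the two product-of-remainders criteria.

def is_composite_by_trial_product(n):
    """Corollary form: the product of (n mod k) for k = 2 .. n-1
    is zero if, and only if, n is composite."""
    prod = 1
    for k in range(2, n):
        prod *= n % k
        if prod == 0:              # a zero remainder exposes a divisor
            return True
    return False

def is_composite_by_prime_product(n):
    """Theorem form: only the remainders modulo the primes
    p <= sqrt(n) are needed."""
    p = 2
    while p * p <= n:
        if all(p % q for q in range(2, p)):   # p is prime
            if n % p == 0:
                return True                   # the product vanishes
        p += 1
    return False                              # the product is non-zero

for n in range(4, 200):
    composite = any(n % d == 0 for d in range(2, n))
    assert is_composite_by_trial_product(n) == composite
    assert is_composite_by_prime_product(n) == composite
```

The prime-product form already exhibits why primality testing is cheap relative to factorisation: only \pi(\sqrt{n}) remainders need be inspected, without ever having to exhibit a factor explicitly.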

2. The function log_{_{e}}\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]

[Figure: Prime_Function_1_100]

Fig.2.1: The above 3D graph mirrors on y=0 and y=1 the values of the function log_{_{e}}{\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]} when \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] is non-zero in the range 4 \leq n \leq 100 .

[Figure: Prime_Function_1_1000]

Fig.2.2: The above 3D graph mirrors on y=0 and y=1 the values of the function log_{_{e}}{\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]} when \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] is non-zero in the range 4 \leq n \leq 1000 .

[Figure: Prime_Function_1_10000000]

Fig.2.3: The above 3D graph mirrors on y=0 and y=1 the values of the function log_{_{e}}{\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]} when \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] is non-zero in the range 4 \leq n \leq 10000000 .

3. The function log_{_{e}}\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]/\sqrt{n}

[Figure: Prime_Function_2_100]

Fig.3.1: The above 3D graph mirrors on y=0 and y=1 the values of the function log_{_{e}}{\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]/\sqrt{n}} when \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] is non-zero in the range 4 \leq n \leq 100 .

[Figure: Prime_Function_2_1000]

Fig.3.2: The above 3D graph mirrors on y=0 and y=1 the values of the function log_{_{e}}{\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]/\sqrt{n}} when \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] is non-zero in the range 4 \leq n \leq 1000 .

[Figure: Prime_Function_2_10000000]

Fig.3.3: The above 3D graph mirrors on y=0 and y=1 the values of the function log_{_{e}}{\prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}]/\sqrt{n}} when \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] is non-zero in the range 4 \leq n \leq 10000000 .

4. A product of remainders hypothesis

Hypothesis: \prod_{_{k=2}}^{^{\pi(\sqrt{n})}}[\frac{n}{p_{_{{k}}}}] = O(e^{^{\sqrt{n}}}) when [\frac{n}{p_{_{{k}}}}] is non-zero.
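A crude numerical probe, not evidence of the asymptotics, is easy to run: the trivial bound log_{e}\prod_{_{k}}[\frac{n}{p_{_{k}}}] \leq \pi(\sqrt{n})\ log_{e}\sqrt{n} already suggests that the logarithm of the non-zero product should stay within a small multiple of \sqrt{n} . The sketch below (in Python, with the range 5 \leq n < 20000 chosen ad hoc) confirms a ratio below 2 in that range:

```python
from math import isqrt, log, prod

def primes_upto(m):
    """Primes <= m by a simple sieve of Eratosthenes."""
    sieve = [True] * (m + 1)
    sieve[0:2] = [False, False]
    for i in range(2, isqrt(m) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, flag in enumerate(sieve) if flag]

def log_remainder_product(n):
    """log of the product of (n mod p) over primes p <= sqrt(n);
    None when the product vanishes (i.e. when n is composite)."""
    rems = [n % p for p in primes_upto(isqrt(n))]
    if not rems or 0 in rems:
        return None
    return log(prod(rems))

ratios = []
for n in range(5, 20000):
    lp = log_remainder_product(n)
    if lp is not None:             # n is prime: the product is non-zero
        ratios.append(lp / n ** 0.5)

assert max(ratios) < 2             # consistent with O(e^sqrt(n))
```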

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

An Evidence-Based Proof of Lucas’ Gödelian Thesis and a Definitive Turing Test

Abstract: In this note I detail an evidence-based proof of Lucas’ Gödelian Thesis, and define a definitive Turing Test that differentiates between a human intelligence and a mechanical intelligence.

Introduction

In a paper: `The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis’, which appeared in the December 2016 issue of Cognitive Systems Research [An16][1], I briefly addressed the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for objectively evidencing such acceptance in the sense of Chetan Murthy and Martin Löb:

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …” … [Mu91][2], Section 1, Introduction.

“Intuitively we require that for each event-describing sentence, \phi_{o^{\iota}}n_{\iota} say (i.e. the concrete object denoted by n_{\iota} exhibits the property expressed by \phi_{o^{\iota}} ), there shall be an algorithm (depending on I, i.e. M^{*} ) to decide the truth or falsity of that sentence.” … [Lob59][3], p.165.

Evidence-based reasoning in Arithmetic: Evidence-based reasoning accepts arithmetical propositions as true under an interpretation if, and only if, there is some specified methodology for objectively evidencing such acceptance.

The philosophical ramifications of formally validating John Lucas’ original Gödelian argument in [Lu61][4]—against a reductionist account of the mind—now raise issues (as in [An15b][5]) that lie, and deserve consideration, beyond the formal argumentation of [An16][1].

For instance, the roots of this note lie in the Mechanist’s counter-argument against Lucas’ Gödelian Thesis, as articulated in [An07a][6] (see also [An07b][7] and [An07c][8]; as also this post and this post), which can now be viewed, and refuted, from a broader perspective that admits an evidence-based, Wittgensteinian, distinction between:

(1) mathematically proven, i.e., algorithmically computable (see Appendix B, Definition 2, here), `truth’; and

(2) meta-mathematically proven, i.e., algorithmically verifiable (see Appendix B, Definition 1, here), `truth’;

to which Theorem 1 (below) appeals.

The distinction formalises the reported (see [Lam17][9]) intent underlying Wittgenstein’s objection—in his `notorious’ paragraph in [Wi78][10]—to conflating mathematical and meta-mathematical entailments, by showing that:

(a) whereas the Mechanist correctly argues (in [An07a][6]) that, for any given numeral [n] , a Universal Turing machine can always mathematically prove that Gödel’s arithmetic formula [R(n)] (as defined in Theorem 1 below) is a theorem in PA;

(b) no Turing machine can mathematically prove that the formula [(\forall x)R(x)] with Gödel number 17Gen\ r is a theorem in PA; and, ipso facto, meta-mathematically conclude—as a human can—that it could, for any given numeral [n] , prove that Gödel’s arithmetic formula [R(n)] is a theorem in PA.

Accordingly, I restrict myself here to merely drawing attention to Lucas’ concluding remarks in an informal defence of his Gödelian Thesis:

“Thus, though the Gödelian formula is not a very interesting formula to enunciate, the Gödelian argument argues strongly for creativity, first in ruling out any reductionist account of the mind that would show us to be, au fond, necessarily unoriginal automata, and secondly by proving that the conceptual space exists in which it (is) intelligible to speak of someone’s being creative, without having to hold that he must be either acting at random or else in accordance with an antecedently specifiable rule”. [Luxx][11].

and to briefly detailing only the import of the evidence-based reasoning introduced in [An12][12] for:

(i) Formally validating Lucas’ Gödelian Thesis as articulated in [An16][1]; and

(ii) Answering affirmatively Alan Turing’s earlier, but related, query in [Tu50][13] as to whether it is possible to definitively differentiate between a human intelligence and a mechanical intelligence.

Validating Lucas’ Gödelian Argument

We begin by noting that the introduction of evidence-based reasoning into the conflicting classical (Hilbert’s) and intuitionistic (Brouwer’s) interpretations of quantification (see [An15a][14], Sections 1.1 and 2.1) yields two hitherto unsuspected, and essentially different, well-defined interpretations of the first-order Peano Arithmetic PA (see Appendix A here), over the structure N of the natural numbers, which are complementary, and not contradictory (see [An16][1]).

The former yields the classical, weak, standard interpretation I_{PA(N, SV)} of PA over N (see Appendix B here), which is non-finitarily well-defined relative to the assignment of weak algorithmically verifiable Tarskian truth values (see Appendix C here) to the compound formulas of PA under I_{PA(N, SV)} (see [An16][1], Theorem 5.6). The well-definedness follows from the non-finitary proof of consistency for PA detailed in [An16][1], Theorem 5.7.

The latter yields an unsuspected, strong, finitary interpretation I_{PA(N, SC)} of PA over N , which is finitarily well-defined relative to the assignment of strong algorithmically computable Tarskian truth values to the compound formulas of PA under I_{PA(N, SC)} (see [An16][1], Theorem 6.7). The well-definedness follows from the finitary proof of consistency for PA detailed in [An16][1], Theorem 6.8.

We shall now see that the complementarity can also be viewed as validating Lucas’ Gödelian argument, if we treat it as the claim that (see [An16][1], Thesis 1):

Theorem 1 There can be no mechanist model of human reasoning if the standard interpretation I_{PA(N, SV)} of the first-order Peano Arithmetic PA can be treated as circumscribing the ambit of human reasoning about `true’ arithmetical propositions, and the finitary interpretation I_{PA(N, SC)} of PA can be treated as circumscribing the ambit of mechanistic reasoning about `true’ arithmetical propositions.

Notation: We use square brackets to differentiate between a symbolic expression—such as [(\exists x)P(x)] —which denotes a formula of a formal language L , and the symbolic expression—such as (\exists x)P^{*}(x) —that denotes its intended meaning under a well-defined interpretation of L , where the L -formula [P(x)] interprets as the relation P^{*}(x) .

Proof: We note that Kurt Gödel has shown meta-mathematically how to construct an arithmetical formula with a single variable, say [R(x)] —Gödel refers to this formula only by its Gödel number r in [Go31][15], p.25(12)—such that:

\bullet [R(x)] is not PA-provable; but

\bullet [R(n)] is PA-provable for any PA numeral [n] .

Hence, for any numeral [n] , Gödel’s primitive recursive relation mB \lceil [R(n)] \rceil must hold for some natural number m :

\bullet where xBy denotes Gödel’s primitive recursive relation (see [Go31][15], p. 22(45)):

`x is the Gödel-number of a proof sequence in PA whose last term is the PA formula with Gödel-number y ‘;

\bullet and \lceil [R(n)] \rceil denotes the Gödel-number of [R(n)] .

We also note (see [An16][1], Theorem 2.1), that we cannot conclude finitarily from Tarski’s definitions (see Appendix C, Definitions 3 to 10 here) whether, or not, a quantified PA formula [(\forall x)F(x)] is algorithmically verifiable as always true, under I_{PA(N, SV)} , if [F(x)] is algorithmically verifiable (see Appendix B, Definition 1 here) under I_{PA(N, SV)} , but not algorithmically computable (see Appendix B, Definition 2 here) under I_{PA(N, SC)} .

Now:

(i) Since Gödel has shown meta-mathematically that the PA-formula [R(n)] is PA-provable for any PA-numeral [n] , it follows that:

(a) For any natural number n , there is always a deterministic algorithm which will provide evidence that the interpretation R^{*}(n) of [R(n)] under I_{PA(N, SV)} is an algorithmically verifiable true arithmetical proposition.

Moreover:

(ii) By [An16][1], Corollary 8.2, the formula [\neg(\forall x)R(x)] is provable in PA. Hence, since PA is finitarily consistent (see [An16], Theorem 6.8), we can mathematically conclude, under I_{PA(N, SC)} , that:

(a) There is no deterministic algorithm which, for any numeral [n] , will provide evidence that the interpretation R^{*}(n) of [R(n)] under I_{PA(N, SC)} is an algorithmically computable true arithmetical proposition.

However:

(iii) Since PA is also non-finitarily consistent (see [An16][1], Theorem 5.7), we cannot contradict (i)(a) by non-finitarily interpreting quantification under I_{PA(N, SV)} and mathematically concluding from the PA-provability of the formula [\neg(\forall x)R(x)] either that:

(a) For some unspecified natural number n , there is a deterministic algorithm which provides evidence that the interpretation R^{*}(n) of [R(n)] under I_{PA(N, SV)} is not an algorithmically verifiable true arithmetical proposition.

or that:

(b) For some unspecified natural number n , there is no deterministic algorithm which will provide evidence that the interpretation R^{*}(n) of [R(n)] under I_{PA(N, SV)} is an algorithmically verifiable true arithmetical proposition.

Instead:

(iv) By [An16][1], Corollary 8.3, we can only conclude, under I_{PA(N, SV)} , that the PA-provability of the formula [\neg(\forall x)R(x)] implies that we cannot mathematically conclude from the axioms and rules of inference of PA that:

(a) For any natural number n , there is always a deterministic algorithm which will provide evidence that the interpretation R^{*}(n) of [R(n)] under I_{PA(N, SV)} is an algorithmically verifiable true arithmetical proposition.

If we now assume that the finitary interpretation I_{PA(N, SC)} of PA can be treated as circumscribing the ambit of mechanistic reasoning about `true’ arithmetical propositions, whence any mechanical witness can only reason mathematically—i.e., finitarily from the PA axioms and rules of inference, as in (iv) (see [An16][1], Theorem 7.1)—then although, for any numeral [n] , a mechanical witness can give evidence under the finitary interpretation I_{PA(N, SC)} that the PA formula [R(n)] holds in N , no mechanical witness can conclude finitarily under the finitary interpretation I_{PA(N, SC)} of PA that, for any numeral [n] , the PA formula [R(n)] holds in N (since, by Corollary 8.2 in [An16][1], the formula [\neg (\forall x)R(x)] is provable in PA).

Whereas, if we assume that the standard interpretation I_{PA(N, SV)} of PA can be treated as circumscribing the ambit of human reasoning about `true’ arithmetical propositions—whence a human witness can also reason meta-mathematically, i.e., non-finitarily, from the PA axioms and rules of inference, as in (i)—then a human witness can conclude under the non-finitary standard interpretation I_{PA(N, SV)} of PA that, for any numeral [n] , the PA formula [R(n)] holds in N . The theorem follows. … \Box

Significance of differentiating between strong algorithmically computable truth, and weak algorithmically verifiable truth

We note that the importance of differentiating between:

(i) the strong, algorithmically computable, `truth’—of the provable formulas of a formal mathematical language L —which follows by finitary mathematical reasoning from the axioms and rules of inference of L under a well-defined interpretation; and

(ii) the weak, algorithmically verifiable, `truth’—of the provable formulas of L —which follows by non-finitary meta-mathematical reasoning from the axioms and rules of inference of L under a well-defined interpretation;

is:

(a) reflected in Thoralf Skolem’s ([Sk22][16], p.295) cautionary remarks against inviting paradox by conflating entailments of different formal systems over different domains;

(b) implicit in, and an essential component of, Lampert’s interpretation of Ludwig Wittgenstein’s objection—in the latter’s `notorious’ paragraph in [Wi78][10]—to the conclusions that Gödel drew from his undifferentiated mathematical/meta-mathematical reasoning in [Go31][15]:

“The most crucial aspect of any comparison of two different types of unprovability proofs is the question of what serves as the “criterion of unprovability” (I, Section 15). According to Wittgenstein, such a criterion should be a purely syntactic criteria independent of any meta-mathematical interpretation of formulas. It is algorithmic proofs relying on nothing but syntactic criteria that serve as a measure for assessing meta-mathematical interpretations, not vice-versa.” … [Lam17][9]

A definitive Turing Test

“Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man? \ldots In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.” … [Tu50][13], Section 5 and Objection (3).

Theorem 1 can also be viewed as a definitive Turing Test between a logician and any Turing machine TM.

In other words, we can demonstrate that the algorithmically computable architecture of any conceivable Turing machine (digital computer) has inherent limitations which constrain it from answering the following Query in the affirmative; whereas the human brain is not constrained similarly.

Query 1: Can you prove that, for any numeral [n] , Gödel’s arithmetic formula [R(n)] is a theorem in the first-order Peano Arithmetic PA, where [R(x)] is defined by its Gödel number r in eqn.12, and [(\forall x)R(x)] is defined by its Gödel number 17Gen\ r in eqn.13, on p.25 of [Go31][15]? Answer only either `Yes’ or `No’.

Logician: Yes.

By Gödel’s meta-mathematical reasoning on p.26(2) of [Go31][15], a logician can conclude, for any numeral [n] , that the formula [R(n)] is a theorem in PA; even though the formula [(\forall x)R(x)] is not a theorem in PA.

TM: No.

By Corollary 8.2 in [An16][1], the formula [\neg (\forall x)R(x)] is provable in PA and so, by Theorem 7.1 in [An16], no Turing machine can prove that the formula [(\forall x)R(x)] with Gödel number 17Gen\ r is a theorem in PA and, ipso facto, conclude that, for any numeral [n] , Gödel’s arithmetic formula [R(n)] is a theorem in PA.

Appendix A: The Peano Arithmetic PA

Appendix B: The Two Interpretations of PA

Appendix C: Tarski’s Inductive Definitions

References

1. [An16] Anand, B. S. 2016. The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis. In Cognitive Systems Research, 40. Elsevier.

2. [Mu91] Murthy, C. R. 1991. An evaluation semantics for classical proofs. In Proceedings of Sixth IEEE Symposium on Logic in Computer Science, 96–109. Also Cornell TR 91-1213, 1991.

3. [Lob59] Löb, M. H. 1959. Constructive Truth. In A. Heyting (ed.), Constructivity in Mathematics: Proceedings of the Colloquium Held at Amsterdam, 1957 (Studies in Logic and the Foundations of Mathematics, series eds. L. E. J. Brouwer, E. W. Beth, A. Heyting), pp.159-168, North-Holland Publishing Company, Amsterdam.

4. [Lu61] Lucas, J. R. 1961. Minds, Machines and Gödel. Philosophy XXXVI:112–127. Reprinted in The Modeling of Mind, Kenneth M. Sayre and Frederick J. Crosson (eds.), Notre Dame Press, 1963, pp.269-270; and in Minds and Machines, Alan Ross Anderson (ed.), Prentice-Hall, 1964, pp.43-59.

5. [An15b] Anand, B. S. 2015. Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics? Paper presented on 26th June at the Workshop on Emergent Computational Logics at UNILOG’2015, 5th World Congress and School on Universal Logic, 20th June 2015 – 30th June 2015, University of Istanbul, Istanbul, Turkey.

6. [An07a] Anand, B. S. 2007. The Mechanist’s Challenge. The Reasoner, Vol(1)5, pp.5-6.

7. [An07b] Anand, B. S. 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – I. The Reasoner, Vol(1)6, pp.3-4.

8. [An07c] Anand, B. S. 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II. The Reasoner, Vol(1)7, pp.2-3.

9. [Lam17] Lampert, T. 2017. Wittgenstein and Gödel: An Attempt to Make “Wittgenstein’s Objection” Reasonable. In Philosophia Mathematica, nkx017.

10. [Wi78] Wittgenstein, L. 1937. Remarks on the Foundations of Mathematics. Cambridge: MIT Press, 1978 edition.

11. [Luxx] Lucas, J. R. 19xx. The Gödelian Argument: Turn Over the Page. Web essay.

12. [An12] Anand, B. S. 2012. Evidence-based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012 – Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

13. [Tu50] Turing, A. M. 1950. Computing Machinery and Intelligence. In Mind, Volume LIX, Issue 236, 1 October 1950:433–460.

14. [An15a] Anand, B. S. 2015. Why Hilbert’s and Brouwer’s interpretations of quantification are complementary and not contradictory. In Epsilon 2015 Workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, 10th June 2015 – 12th June 2015. Paper presented on 10th June at the University of Montpellier, Montpellier, France.

15. [Go31] Gödel, K. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated from the German original by Elliott Mendelson. In Martin Davis (ed.), The Undecidable, 1965. Dover edition 2004, pp.5-38, Dover Publications Inc., Mineola, NY.

16. [Sk22] Skolem, T. 1922. Some remarks on axiomatized set theory. Text of an address delivered in Helsinki before the Fifth Congress of Scandinavian Mathematicians, 4-7 August 1922. In Jean van Heijenoort (ed.), From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, 1967 edition, pp.290-301, Harvard University Press, Cambridge, Massachusetts.


On the distribution of the prime numbers

Note by J. E. Littlewood, presented by J. Hadamard. In Mathematical Analysis, Comptes Rendus: Session of 22 June 1914; p.1869.

(Attempted translation from the French original by Bhupinder Singh Anand on October 14, 2018. Corrections and/or clarifications—especially where marked (?)—are invited, and welcome.)

1. E. Schmidt has established the existence of a positive number K , such that

(1) \Pi(x) - Li\ x + \frac{1}{2}. Li \sqrt{x} < -K. \frac{\sqrt{x}}{log\ x}

and \Pi(x) - Li\ x + \frac{1}{2}. Li \sqrt{x} > K. \frac{\sqrt{x}}{log\ x}

for indefinitely increasing values of x (i.e., each inequality holds for arbitrarily large values of x ).

Also, by Riemann’s Hypothesis on the zeros (roots?) of \zeta(s), we have

(2) \Pi(x) - Li\ x = O(\sqrt{x}. log\ x) .

Between (1) and (2) there is a gap which I now propose to narrow (?) by replacing the inequalities (1) by

(3) \Pi(x) - Li\ x < -K. \frac{\sqrt{x}. log log log\ x}{log\ x}

and \Pi(x) - Li\ x > K. \frac{\sqrt{x}. log log log\ x}{log\ x}

Clearly the inequality \Pi(x) < Li\ x , generally presumed on empirical grounds, would not hold for all values of x.

The inequalities (3) are equivalent to the following:

(4) \psi (x) - x < -K. \sqrt{x}. log log log\ x ,

and \psi (x) - x > K. \sqrt{x}. log log log\ x ,

where \psi (x) is Tchebichef’s well-known function. I shall prove (4) on the assumption that the Riemann Hypothesis is true. In the contrary case, more is already known.

I define \eta = log\ x , and denote by

\frac{1}{2} \pm i\gamma_{1},\ \frac{1}{2} \pm i\gamma_{2},\ \ldots , where 0 < \gamma_{1} \leq \gamma_{2} \leq \ldots

the complex zeros of \zeta(s) . We know [footnote 1: We can find formulas equivalent to (4), (5), and (6) in Landau’s Handbuch, pp.365, 387, 388] that

(5) \frac{\psi(x) - x}{\sqrt{x}} = -2.\sum_{_{1}}^{^{\infty}}\frac{sin \gamma_{n} \eta}{\gamma_{n}} + O(1) ,

(6) \frac{\psi(x) - x}{\sqrt{x}} = -2.\sum_{_{\gamma_{n} \leq T}}\frac{sin \gamma_{n} \eta}{\gamma_{n}} + O(1) ,

uniformly for T < x^{^{2}} ,

(7) \sum_{_{1}}^{^{\infty}}\frac{sin \gamma_{n} \eta}{\gamma_{n}} = O(\eta^{^{2}}) .

(8) \gamma_{n} = g(n) + O(1)

where t = g(T) \sim \frac{2 \pi T}{log\ T} is the inverse function of

T = \frac{t\ log\ t}{2\pi} - \frac{(1+ log\ 2\pi)t}{2\pi} .

2. Lemma. — Let \eta_{_{0}} be any positive number, and

f(z) = f(\xi + i\eta) = \sum_{_{1}}^{^{\infty}}\frac{e^{^{-\gamma_{n}(\xi + i\eta)}}}{\gamma_{n}}  .

There are values of z such that 0 < \xi \leq 1,\ \eta \geq \eta_{_{0}} , for which

-\texttt{I}f(z)= \sum_{_{1}}^{^{\infty}} \frac{sin \gamma_{n} \eta}{\gamma_{n}}\ e^{^{-\gamma_{n}\xi}} < -K. log log\ \eta ,

and values of z in the same region for which

-\texttt{I}f(z)= \sum_{_{1}}^{^{\infty}} \frac{sin \gamma_{n} \eta}{\gamma_{n}}\ e^{^{-\gamma_{n}\xi}} > K. log log\ \eta .

I consider, for example, the second inequality. We can show, first, that

(9) -\texttt{I}f(\xi + i \xi) = \sum_{_{1}}^{^{\infty}}\frac{sin \gamma_{_{n}} \xi}{\gamma_{_{n}}}.e^{^{-\gamma_{_{n}}\xi}} \sim A\ log\frac{1}{\xi} , where A > 0 ,

when \xi tends to zero. This follows, essentially, by elementary reasoning, from the formula:

f(\xi + i \xi) = \int_{_{1}}^{^{\infty}}\frac{e^{^{-(\xi + i \xi).g(n)}}}{g(n)}.dn + O  \int_{_{1}}^{^{\infty}}\frac{e^{^{-\xi.g(n)}}}{g(n)}.(\xi + \frac{1}{n}).dn ,

as a consequence of (8). We can also infer from (8) the existence of an a > 0 , such that:

(10) \sum_{_{\gamma_{_{n}}\xi > a}}\frac{e^{^{-\gamma_{_{n}}\xi}}}{\gamma_{_{n}}} > \frac{1}{4}A.log\frac{1}{\xi} .

I shall now apply a theorem due to Kronecker, which H. Bohr recognised as of primary importance to the theory of Dirichlet series. Let (x) = |x - \{x\}| , where \{x\} denotes the integer nearest to x . There is, by Kronecker’s theorem, a T such that:

\eta_{_{0}} < T < \eta_{_{0}}(\frac{1}{\xi} + 1)^{^{N}} and (\frac{\gamma_{_{n}}.T}{2\pi})<\xi

for n \leq N . I choose N = \frac{a}{\xi},\ \eta = T + \xi ; whence, by (10),

|\texttt{I}f(\xi + i \eta) - \texttt{I}f(\xi + i \xi)| \leq \sum_{_{1}}^{^{N}}(\frac{|sin \gamma_{_{n}} \eta - sin \gamma_{_{n}} \xi|}{\gamma_{_{n}}}) + 2.\sum_{_{N+1}}^{^{\infty}}(\frac{e^{^{-\gamma_{_{n}}\xi}}}{\gamma_{_{n}}})

< 2\pi N \xi + (\frac{1}{2}A + \varepsilon).log\frac{1}{\xi} < 2\pi a + (\frac{1}{2}A + \varepsilon).log\frac{1}{\xi} .

Thus

(11) -\texttt{I}f(\xi + i \eta) > K.log\frac{1}{\xi} .

However

\eta < \xi + \eta_{_{0}}(\frac{1}{\xi} + 1)^{^{\frac{a}{\xi}}} ,

and (11) holds for arbitrarily small values of \xi , which can easily be seen to yield the inequality of the lemma.

3. It suffices therefore, in order to establish the relations in (3), to show that an assumption, such that

(12) \psi(x) - x < \delta. \sqrt{x}.log log log\ x

for all \delta > 0 and x > x_{_{0}}(\delta) , contradicts the lemma.

From (6) and (7), we can show the existence of a curve C : \xi = \xi(\eta) , where \xi(\eta) is positive, continuous, and increasing, such that

(13?) |\sum_{_{1}}^{^{?}}\frac{sin \gamma_{_{n}}\eta}{\gamma_{_{n}}} - \sum_{_{1}}^{^{\infty}}\frac{sin \gamma_{_{n}}\eta}{\gamma_{_{n}}}.e^{^{-\gamma_{_{n}}\xi}}| < K

for 0 < \xi \leq \xi_{_{n}} . From (5), (12) and (13), we conclude

-\texttt{I} f(z) < \delta. log log\ \eta

[\eta > \eta_{_{0}}(\delta)]

uniformly in the region 0 < \xi \leq \xi_{_{n}} . Now set

g(z) = e^{^{i f(z)}}(log\ z)^{^{-2.\delta}}

We then have g(z) = O(1) on C and to the right of \xi = 1 . We also have, by (5), (6), (7) and (8),

|\sum_{_{1}}^{^{n}}\frac{sin \gamma_{_{\nu}}\eta}{\gamma_{_{\nu}}}| = O(\eta^{^{2}})

uniformly in n ; whence we conclude

|\texttt{I}f(z)| = O(\eta^{^{2}}),\ g(z) = e^{^{O(\eta^{^{2}})}}

uniformly in the region \xi(\eta) \leq \xi \leq 1 . Finally, by an extension of a theorem due to Lindelöf, we have

g(z) = O(1),\ -\texttt{I} f(z) < 3.\delta.log log\ \eta

for 0 < \xi \leq 1 and \eta_{_{0}}(\delta) < \eta , which contradicts the lemma of paragraph 2.

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

Michael Atiyah’s reasoning on the possible formal undecidability of the Riemann Hypothesis RH (even though presented as a non-finitary proof), and his main paper on the fine structure constant, are intriguing not so much for their argumentation per se, but because they raise four foundational issues that need to be addressed beyond the context of his reasoning.

1. Relation of \zeta (\sigma + it) to the distribution of primes is valid only for \sigma > 1

It is significant that Atiyah does not relate RH to the distribution of primes, but only to the zeros of the Riemann Zeta-function \zeta(\sigma + it) ; even though the conventionally perceived significance of RH is essentially that the behavior of \zeta (\sigma + it) , using analytic continuation for defining the function for \sigma \leq 1 , may be treated as yielding the most reliable estimates for the number \pi (n) of primes less than n for all natural numbers n > 2 .

Perhaps he too is uncomfortable (as I am) with unrestrictedly deducing such conclusions about the behaviour of the primes from the behaviour of \zeta (\sigma + it) , using analytic continuation, since the relationship of \zeta (\sigma + it) to the primes is well-defined only for \sigma >1 .

Although continuing a function analytically may satisfy the mathematician’s obsession to extend to its maximum the domain over which a function can be theoretically defined, it tends to obscure, under an interpretation, that which the function was originally defined to represent.

In other words, if an event (say, for instance the emergence of primes in an Eratosthenes sieve operation) is described by some mathematical function which is defined only for points along the x -axis for x>0 , then how can we deduce any putative property or behaviour of the event from the behaviour of the function when x<0 ; i.e., in areas where the function might, but the event does not, exist?

To see this note that, classically (E. C. Titchmarsh: The Theory of the Riemann Zeta-Function), the Zeta-function \zeta(\sigma + it) relates the distribution of the primes through the Euler product to a fundamental Dirichlet series, where both are convergent for \sigma > 1 , by the argument that:

[1]\ \ldots\ \zeta(\sigma + it) = \prod_{_{p}}(1-\frac{1}{p^{\sigma + it}})^{-1} = \sum_{n=1}^{\infty}(\frac{1}{n^{\sigma + it}})

where the Euler product ranges over all primes p .
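In the half-plane \sigma > 1 , where both sides of [1] converge absolutely, the identity is easy to check numerically. A minimal sketch at s = 2 (i.e. \sigma = 2,\ t = 0 , where \zeta(2) = \pi^{2}/6 ; the truncation points 200000 and 10000 are chosen ad hoc):

```python
from math import pi, isqrt

def primes_upto(m):
    """Primes <= m by trial division (adequate for this check)."""
    return [p for p in range(2, m + 1)
            if all(p % q for q in range(2, isqrt(p) + 1))]

s = 2.0

# Dirichlet series, truncated at n = 200000.
dirichlet = sum(1.0 / n ** s for n in range(1, 200001))

# Euler product over the primes p <= 10000.
euler = 1.0
for p in primes_upto(10000):
    euler *= 1.0 / (1.0 - p ** (-s))

# Both truncations approximate zeta(2) = pi^2/6 = 1.6449...
assert abs(dirichlet - pi ** 2 / 6) < 1e-4
assert abs(euler - pi ** 2 / 6) < 1e-4
```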

Now, if \sigma > 1 , we can re-arrange the terms of the Dirichlet series to yield (see p.145 in Derbyshire, J. 2002. Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics. Published by Plume. 2004. Penguin Group (USA) Inc., NY):

[2]\ \ldots\ \{\sum_{n=1}^{\infty}(\frac{1}{n^{\sigma + it}})\}(1-\frac{1}{2^{\sigma -1 + it}})= \sum_{n=1}^{\infty}((-1)^{n+1}\frac{1}{n^{\sigma + it}})

using the fact that, if the infinite sum a+b+c+d+e+f+ ... is absolutely convergent, then:

a-b+c-d+e-f+ ... = (a+b+c+d+e+f+ ...) - 2(b+d+f+ ...)
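For \sigma > 1 , where the Dirichlet series converges absolutely and this re-arrangement is uncontroversial, the relation in [2] is easily checked numerically; a minimal sketch at s = 2 , where 1-\frac{1}{2^{\sigma -1 + it}} = 1 - 2^{1-s} = \frac{1}{2} and \zeta(2) = \pi^{2}/6 :

```python
from math import pi

s = 2.0
N = 100000

# Alternating (eta) series: sum of (-1)^(n+1) / n^s.
eta = sum((-1) ** (n + 1) / n ** s for n in range(1, N + 1))

# Dividing by (1 - 2^(1-s)) recovers the Dirichlet series value.
zeta_via_eta = eta / (1.0 - 2.0 ** (1 - s))

assert abs(eta - pi ** 2 / 12) < 1e-6       # eta(2) = pi^2/12
assert abs(zeta_via_eta - pi ** 2 / 6) < 1e-6
```

The check, of course, says nothing about the region 0 < \sigma \leq 1 ; that is precisely where the reservation voiced below applies.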

However, we can now view \zeta'(\sigma + it) as an analytic continuation of \zeta(\sigma + it) , defined implicitly for \sigma > 0,\ \sigma \neq 1 by:

[3]\ \ldots\ \zeta'(\sigma + it)(1-\frac{1}{2^{\sigma -1 + it}})= \sum_{n=1}^{\infty}((-1)^{n+1}\frac{1}{n^{\sigma + it}})

In such case, though, since the Euler product diverges everywhere for \sigma \leq 1 , whilst the alternating series \sum_{n=1}^{\infty}((-1)^{n+1}\frac{1}{n^{\sigma + it}}) converges for \sigma > 0 , it is not obvious whether the re-arrangement of terms in the implicit definition of \zeta'(\sigma + it) admits the claim that:

[4]\ \ldots\ \prod_{p}(1-\frac{1}{p^{\sigma + it}})^{-1} = \zeta'(\sigma + it) = \{\sum_{n=1}^{\infty}((-1)^{n+1}\frac{1}{n^{\sigma + it}})\}(1-\frac{1}{2^{\sigma -1 + it}})^{-1}

for \sigma > 1 , and that the Euler product can be analytically continued through this identity for 0 < \sigma < 1 .
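Whatever one makes of the interpretational question raised here, the value that definition [3] assigns inside the critical strip is readily computable. A sketch, restricted (as my own simplifying assumption) to the real line t = 0 , where the factor reduces to (1 - 2^{1-\sigma}) :

```python
def zeta_via_eta(sigma, terms):
    """Value assigned by definition [3] on the real line (t = 0): the
    alternating (eta) series divided by the factor (1 - 2^(1 - sigma))."""
    partial, prev = 0.0, 0.0
    for n in range(1, terms + 1):
        prev = partial
        partial += (-1) ** (n + 1) / n ** sigma
    # averaging consecutive partial sums damps the alternating-series error
    eta = 0.5 * (partial + prev)
    return eta / (1.0 - 2.0 ** (1.0 - sigma))
```

At \sigma = 2 this reproduces \zeta(2) = \pi^{2}/6 , and at \sigma = 1/2 it yields the classically cited value \zeta(1/2) \approx -1.4603545 ; the question raised above is not whether these numbers are computable, but what they represent.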

Reason: Since the Euler product is defined only for \sigma > 1 , it is not obvious how the behaviour of \zeta'(\sigma + it) in any region \sigma \leq 1 can be related to the distribution of the primes merely because of some incidental identification of the Euler product with \zeta'(\sigma + it) outside this region.

Incidental, since the distribution of primes is related to \zeta'(\sigma + it) by the identity, for \sigma > 1 :

[5]\ \ldots\ log_{e} \{\prod_{p}(1-\frac{1}{p^{\sigma + it}})^{-1}\} = (\sigma + it) \int_{2}^{\infty}(\frac{\pi(x)}{x(x^{\sigma + it}-1)})dx

where the emergence of \pi(x) also involves the re-arrangement of an infinite series which is contingent upon the condition that \sigma > 1 .

(Just as some behaviour towards me of my brother-in-law is conditional upon, and antecedent to, my marriage on a specific date.)
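Identity [5] can itself be checked numerically for \sigma = 2,\ t = 0 (an illustrative restriction of mine), by integrating the prime-counting step function \pi(x) exactly, piece by piece, against the antiderivative F(x) = \frac{1}{s}log_{e}(1 - x^{-s}) of \frac{1}{x(x^{s}-1)} :

```python
# Check of identity [5] at s = 2: the right-hand integral, evaluated exactly
# for the step function pi(x) truncated at x = limit, should match
# log zeta(2) = log(pi^2/6).
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def F(x, s):
    """Antiderivative of 1/(x (x^s - 1)): F(x) = log(1 - x^(-s)) / s."""
    return math.log(1.0 - x ** (-s)) / s

def rhs_of_5(s, limit):
    """s * integral from 2 to infinity of pi(x)/(x (x^s - 1)) dx, with pi(x)
    held constant beyond the last prime found (F -> 0 as x -> infinity)."""
    ps = primes_up_to(limit)
    total = 0.0
    for k in range(len(ps) - 1):
        total += (k + 1) * (F(ps[k + 1], s) - F(ps[k], s))
    total += len(ps) * (0.0 - F(ps[-1], s))
    return s * total

lhs = math.log(math.pi ** 2 / 6)   # log of the Euler product at s = 2
rhs = rhs_of_5(2.0, 10 ** 5)
```

The piecewise integration telescopes into precisely the logarithm of the truncated Euler product, which is how \pi(x) emerges in [5]; and, as the text notes, that telescoping is contingent on \sigma > 1 .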

The above suggests that it is perhaps worth reflecting, for a moment, upon the implications of conventional reasoning for mathematically modelling the behaviour of real-world events. Consider, for instance, a light signal S which is emitted from some origin along an x -axis, so that its distance from the origin at time t>0 is x=ct , where c is the speed of light.

Obviously, the function x=ct is well-defined for values of t \leq 0 .

However, no such value of the function can describe any property of the signal S , since S did not exist for t \leq 0 .

In other words, the behaviour of a mathematical expression at limiting points may not be unique—as is necessary and, prima facie, implicitly assumed, in any analytic continuation—but may be contingent on that which such an expression is intended to represent under an evidence-based interpretation.

The following paragraph in Section 5, p.223, of this intriguing paper by Yuriy Zayko seems to make a similar point:

“… the \zeta -function calculations for real arguments are connected by certain contradictions that are resolved if we assume that the geometry of the numerical continuum is different from the Euclidean one. Note that the distinctive feature of the results obtained is the presentation of calculation as the process (as it really is) and the introduction of the concept of time, what is not typical for traditional mathematics, that claims its statements are timeless.”

Namely that, in order to analytically continue \zeta(\sigma + it) uniquely outside \sigma > 1 , we may need to define additional parameters—such as, say, the geometry over which the function is defined—which would allow us to describe the behaviour of \zeta(\sigma + it) at \sigma = 1 uniquely.

(Prima facie, conventional wisdom does not feel the need to establish such uniqueness for the analytic continuation of a function.)

In other words, divergence of a function at a point ought not to be accepted as necessarily implying that the function cannot be defined at the point, but only that the function may not have been well-defined at that point due to some implicit assumptions (as was shown in the case of parallel lines by the recognition of non-Euclidean geometry).

This, of course, reflects the thesis of my investigations into evidence-based reasoning, which is that the properties of any mathematical expression are determined not only by what can be formally proven within the formal language in which the expression is well-defined, but also by the evidence-based truth assignments to the expression under a well-defined, evidence-based, interpretation.

2. Limit of infinite iterations needs justification; which is why physicists resort to renormalisation

In case this seems far-fetched, Sections 4 to 6 of this abstract geometrically represent and analyse a two-dimensional Zeno-type gedanken where—in striking contrast to the classical, one-dimensional, Zeno argument—even the theoretical limiting behaviour of a putative virus cluster, or the putative behaviour of an elastic string, under a specified iterative transformation, does not correspond to the putative Cauchy limit of the function that seeks to describe a property of the iteration!

In other words, admitting such a limit in these gedanken can only be described as admitting a misleading mathematical myth into our putative representation of the corresponding natural phenomena.

The significance of this is that Hadamard and de la Vallée Poussin’s proof of the Prime Number Theorem deduces the behaviour of the prime counting function \pi (n) , as n \rightarrow \infty , from the limiting behaviour of \zeta' (\sigma + it) along the line \sigma = 1 .

However, as these graphs (see also this post) of evidence-based, non-heuristic, estimations of \pi (n) illustrate, we should not unrestrictedly assume that the existence of a Cauchy limit for a Cauchy sequence representing a real-world event implies that the event itself must tend to that limit; we may need to consider seriously the possibility that \frac{\pi (x)}{x/log_{e}(x)} is discontinuous in the limit x \rightarrow \infty .

(Or, in the jargon of physicists, \pi (x) too may need renormalization as x \rightarrow \infty .)
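As a purely illustrative, sieve-based (and so non-heuristic) computation, the ratio \frac{\pi (x)}{x/log_{e}(x)} can be evaluated exactly at increasing checkpoints; its slow drift towards 1 from above is what the caution above concerns:

```python
# Exact values of pi(x) / (x / log x) at increasing checkpoints, computed
# by a sieve (no heuristic estimates of pi(x) are used).
import math

def pi_ratios(checkpoints):
    """Map each checkpoint x to the exact ratio pi(x) / (x / log x)."""
    limit = max(checkpoints)
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    targets = sorted(checkpoints)
    ratios, count, idx = {}, 0, 0
    for n in range(limit + 1):
        count += sieve[n]
        if n == targets[idx]:
            ratios[n] = count / (n / math.log(n))
            idx += 1
            if idx == len(targets):
                break
    return ratios

ratios = pi_ratios([10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6])
```

The ratio decreases monotonically through these checkpoints (about 1.16 at 10^{3} down to about 1.08 at 10^{6} ) yet is still well above 1, which is why claims about its behaviour in the limit rest on the analytic argument rather than on the computations themselves.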

3. Fine structure constant: Algorithmically verifiable but not algorithmically computable

Curiously, Professor Atiyah’s evaluation of the fine structure constant \alpha in this paper also seems to involve deductions from the limiting behaviour of the function defined by his eqn.7.1 on p.8 of the paper.

Whether such a mathematical limit, which involves constructing `an infinite sequence of iterated exponentials, followed by taking the weak closure in Hilbert space’ can be proven to correspond to the value of the fine structure constant \alpha that it seeks to evaluate is not obvious.

Reason: It can be reasonably argued (see p.273 of Section 29.6 in Chapter 29 of this thesis) that the fine structure constant \alpha can only be represented mathematically by a function that is algorithmically verifiable, but not algorithmically computable (see Definitions 5.2 and 5.3 on p.38, Section 5.1 of this thesis).

That Atiyah too—albeit implicitly—views the digits in the decimal representation of \alpha as defined by a function that is algorithmically verifiable, but not algorithmically computable, in the above sense is evidenced by his comments on p.10, in his discussion of the computation of the initial values of the fine structure constant:

“7.7. The two formulae (1.1) and (7.1) now follow by mimicry from the corresponding formulae for \pi and \lambda . More precisely, each truncated version, for fixed n is analytic. The mimicry is then applied before we pass to the limit.

7.8. The next section on numerics will carry out the computations and check the answers at every level n . This provides a proof analogous to those of Archimedes and Euler and is sufficient for physicists. But, as mentioned in 6.6, mathematical logicians since Gödel have raised the bar for the notion of proof. It is probable that such computations will never fail, i.e. a counterexample will never be found, but that it is not possible to prove this without additional axioms.

7.9. The logical issues raised in 7.8 appear in a different form in the size of the steps needed for the effective computation of further decimals in the expansion of Ж. I asserted, for example, that 16 should be sufficient to produce 9 figure accuracy. There is no algorithm that will confirm this fact, though general theory will guarantee that some (unknown) large number would be sufficient. But someone doing the calculation is likely to stumble on a sufficient number, probably finding that 16 is enough. But 12 figure accuracy might well require 32. So, the logical qualms that hard-nosed physicists ignore at the conceptual level, come back to haunt them at the computational level. Gödel can only be ignored at your peril.”

Should it turn out that Atiyah has, indeed, given a valid mathematical definition of the fine structure constant \alpha which—like Chaitin’s family of \Omega constants—is algorithmically verifiable, even if not algorithmically computable, that in itself could be viewed as a major achievement (which the title and introductory paragraphs of his main paper implicitly indicate as his goal), far overshadowing in significance the question of whether his perceived perspective of its implications for RH is eventually vindicated.

4. Mapping infinite real dimensions as finite integer dimensions violates Skolem’s dictum

Mapping infinite real dimensions as finite integer dimensions, as Atiyah apparently does in Sections 2.1 to 2.6 of his paper, seems to implicitly violate Skolem’s dictum cautioning about the paradoxes inherent in corresponding entities and entailments across the domains of different formal systems.

For instance, Section 22.4 on p.189 of this thesis analyses how ignoring Skolem’s dictum leads to the seriously misleading conclusions usually drawn from Goodstein’s theorem, if we do not distinguish between the axiomatic constraints within which the properties of arithmetically defined numerals of a first-order Peano Arithmetic are defined, and the axiomatic constraints within which the properties of the corresponding, intuitively isomorphic, set-theoretically defined finite ordinals of a first-order set theory such as ZF are defined.

The significance of this for Atiyah’s following reasoning is not obvious:

“In this section I will indicate the main strategy and formulate the precise statements which will be proved. The key question is how to extend the Archimedes-Euler approach to \pi , from the commutative world of \mathbb{C} to the non-commutative world of Hamilton’s quaternions \mathbb{H} , constructed by an infinite iteration of exponentials, discrete or continuous. The key idea goes back to von Neumann as I now explain.

2.1 The von Neumann hyperfinite factor A of type II. The hyperfinite type II-1 factor A is unique up to isomorphism. It is constructed by an infinite sequence of iterated exponentials, followed by taking the weak closure in Hilbert space. This process converts type I, with integer dimensions, to type II with real dimensions. A dimension which is formally infinite in type I becomes finite in type II, so I call the process renormalization. All factors have an invariant trace mapping the factor to its centre. Any inner automorphism gives an isomorphic but different trace. The comparison between two such automorphisms is what leads to the Todd map T (see 3.4).

2.2 The Hirzebruch Formalism. Hirzebruch [1], following in the footsteps of Euler and Riemann, introduced a formal algebraic process of multiplicative sequences. In such processes he defined exponentials over \mathbb{Q} . He showed that any such exponential has a generating function, and he focused on the Todd exponential, whose generating function is the Bernoulli function \frac{x}{1-e^{-x}} . The fact that this function is analytic implies that the Hirzebruch process extends from \mathbb{Q} to \mathbb{R} , implementing the weak closure of 2.1.”

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)


A: A conventional perspective on why the prime divisors of an integer are not mutually independent

In a post, Getting to the Roots of Factoring, on the blog Gödel’s Lost Letter and P = NP (hosted jointly by Richard Lipton, Professor of Computer Science at the Georgia Tech College of Computing, and Kenneth W. Regan, Professor of Computer Science and Engineering at the University at Buffalo), Regan `revisits’ a 1994 paper on factoring by his co-host.

What I find intriguing in the above post is that the following extract subsumes, almost incidentally, a possibly implicit belief which may be inhibiting not only a resolution of the computational complexity of Integer Factoring (see this investigation) by conventional methods but also, for similar reasons, the resolution of a large class of apparently unrelated problems (see, for instance, this investigation and this previous post):

“Here is the code of the algorithm. … the input x is a product of two prime numbers, \phi is a polynomial in just one variable, and \gcd refers to the greatest-common-divisor algorithm expounded by Euclid around 300 B.C. …

\star Repeat until exit:

\star\ a := a random number in 1,\dots, x-1;

\star\ b := \phi(a)\ mod(x);

\star if (\gcd(b,x) > 1) then exit.

Exiting enables carving out the two prime factors of x .

How many iterations must one expect to make through this maze before exit? How and when can the choice of the polynomial \phi speed up the exploration? …

Note that we cannot consider the events b \equiv 0\ mod(p) and b \equiv 0\ mod(q) to be independent, even though p and q are prime, because b = \phi(a) and \phi may introduce bias.”

The following reasoning shows why it is not obvious whether the assertion that:

“… we cannot consider the events b \equiv 0\ mod(p) and b \equiv 0\ mod(q) to be independent”

is intended to narrowly reflect a putative proof in the particular context of the assertion, or to echo a more general belief.
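The quoted loop can be made concrete as a runnable sketch. The choice of \phi as the identity polynomial and the modulus x = 101 \times 103 below are illustrative assumptions of mine, not Regan’s; any polynomial \phi can be substituted:

```python
# Sketch of the quoted procedure: draw a random a in 1..x-1, set
# b = phi(a) mod x, and exit once gcd(b, x) > 1, revealing a factor of x.
import math
import random

def factor_by_random_gcd(x, phi=lambda a: a, max_iters=10 ** 6, seed=0):
    """Repeat-until-exit loop from the quoted extract, with a fixed seed
    for reproducibility and an iteration budget as a safeguard."""
    rng = random.Random(seed)
    for _ in range(max_iters):
        a = rng.randrange(1, x)
        b = phi(a) % x
        g = math.gcd(b, x)
        if g > 1:
            return g, x // g        # g and x/g are the two prime factors
    return None  # no factor found within the iteration budget

# Illustrative run: x a product of two primes, phi the identity polynomial.
factors = factor_by_random_gcd(101 * 103)
```

With the identity polynomial the expected number of iterations is roughly \frac{x}{p+q} , which is exactly why the questions quoted above about the choice of \phi , and about the (in)dependence of the events b \equiv 0\ mod(p) and b \equiv 0\ mod(q) , matter.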

B: Defining the probability that a given integer is divisible by a given smaller integer

(1) We consider the questions:

(a) What is the probability for any given n>i>1, where i>u \geq 0, that:

n+u \equiv\ 0\ (mod\ i)?

(b) What is the compound probability for any given n>i,\ j>1, where i \neq j, i>u \geq 0, and j>v \geq 0, that:

n+u \equiv\ 0\ (mod\ i), and n+v \equiv\ 0\ (mod\ j)?

(c) What is the compound probability for any given n>i,\ j>1, where i \neq j, i>u \geq 0, and j>v \geq 0, that:

i divides n+u, and j divides n+v?

C: Why the prime divisors of an integer are mutually independent

(2) We note that:

(a) The answer to query (1a) above is that the probability that the roll of an i-sided cylindrical die will yield the value u is \frac{1}{i} by the probability model for such an event as definable over the probability space \{0, 1, 2, \ldots, i-1\};

(b) The answer to query (1b) above is that the probability that the simultaneous roll of one i-sided cylindrical die and one j-sided cylindrical die will yield the values u and v, respectively, is \frac{1}{i.j} by the probability model for such a simultaneous event as defined over the probability space \{(u, v): i > u \geq 0, j > v \geq 0\}.

(3) We trivially conclude that:

The compound probability of determining u and v correctly from the simultaneous roll of one i-sided cylindrical die and one j-sided cylindrical die, is the product of the probability of determining u correctly from the roll of an i-sided cylindrical die, and the probability of determining v correctly from the roll of a j-sided cylindrical die.

(4) We further conclude non-trivially that the answer to query (1c) above is given by:

(a) If i and j are co-prime, the compound probability of correctly determining that i divides n and j divides n from the simultaneous roll of one i-sided cylindrical die and one j-sided cylindrical die, is the product of the probability of correctly determining that i divides n from the roll of an i-sided cylindrical die, and the probability of correctly determining that j divides n from the roll of a j-sided cylindrical die.

(b) The assumption that i and j be co-prime is also necessary, since 4(a) would not always be the case if i and j were not co-prime.

For instance, let j=2i. The probability that an i-sided cylindrical die will then yield 0—and allow us to conclude that i divides n—is \frac{1}{i}, and the probability that a j-sided cylindrical die will then yield 0—and allow us to conclude that j divides n—is \frac{1}{j}; but the probability of determining both that i divides n, and that j divides n, from a simultaneous roll of the two cylindrical dice is \frac{1}{j}, and not \frac{1}{i.j}.

(5) We thus conclude non-trivially that, if p and q are two unequal primes, the probability of determining whether p divides n is independent of the probability of determining whether q divides n.
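The conclusions in (4) and (5) can be checked by exact counting over one full period lcm(i, j) of the joint event (the moduli 3, 5 and 3, 6 below are illustrative choices of mine):

```python
# Exact check: over one full period lcm(i, j), the fraction of n with both
# i | n and j | n equals 1/(i.j) exactly when i and j are co-prime.
import math

def joint_divisibility_fraction(i, j):
    """Exact fraction of residues n in one full period lcm(i, j)
    with both i | n and j | n."""
    period = math.lcm(i, j)
    hits = sum(1 for n in range(period) if n % i == 0 and n % j == 0)
    return hits / period

# Co-prime moduli (the primes 3 and 5): the joint fraction equals the
# product of the separate fractions 1/3 and 1/5, i.e. the events behave
# independently, as in conclusion (4a).
coprime = joint_divisibility_fraction(3, 5)
# Non-co-prime moduli with j = 2i (here 3 and 6), as in counterexample (4b):
# the joint fraction is 1/j = 1/6, not 1/(i.j) = 1/18.
doubled = joint_divisibility_fraction(3, 6)
```

The co-prime case factors exactly as a product, and the j=2i case does not, matching (4a), (4b), and hence (5) for unequal primes.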

