
Why primality is polynomial time, but factorisation is not

Differentiating between the signature of a number and its value

A brief review: The significance of evidence-based reasoning

In a paper, ‘The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis’, which appeared in the December 2016 issue of Cognitive Systems Research [An16], I briefly addressed the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for objectively evidencing such acceptance in the sense of Chetan Murthy and Martin Löb:

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …” … Chetan R. Murthy: [Mu91], \S 1 Introduction.

“Intuitively we require that for each event-describing sentence, $\phi_{o^{\iota}}n_{\iota}$ say (i.e. the concrete object denoted by $n_{\iota}$ exhibits the property expressed by $\phi_{o^{\iota}}$), there shall be an algorithm (depending on I, i.e. $M^{*}$) to decide the truth or falsity of that sentence.” … Martin H. Löb: [Lob59], p.165.

Definition 1 (Evidence-based reasoning in Arithmetic): Evidence-based reasoning accepts arithmetical propositions as true under an interpretation if, and only if, there is some specified methodology for objectively evidencing such acceptance.

The significance of introducing evidence-based reasoning for assigning truth values to the formulas of a first-order Peano Arithmetic, such as PA, under a well-defined interpretation (see Section 3 in [An16]), is that it admits the distinction:

(1) ‘algorithmically verifiable truth’ (Definition 2); and

(2) ‘algorithmically computable truth’ (Definition 3).

By a ‘deterministic algorithm’ we mean an algorithm that computes a mathematical function which has a unique value for any input in its domain; the algorithm is a process that produces this particular value as output.

Note that a deterministic algorithm can be suitably defined as a ‘realizer’ in the sense of the Brouwer-Heyting-Kolmogorov rules (see [Ba16], p.5).

For instance, under evidence-based reasoning the formula $[(\forall x)F(x)]$ of the first-order Peano Arithmetic PA must always be interpreted weakly under the classical, standard, interpretation of PA (see [An16], Theorem 5.6) in terms of algorithmic verifiability (see [An16], Definition 1); where, if the PA-formula $[F(x)]$ interprets as an arithmetical relation $F^{*}(x)$ over $N$:

Definition 2 (Algorithmic verifiability): The number-theoretical relation $F^{*}(x)$ is algorithmically verifiable if, and only if, for any natural number $n$, there is a deterministic algorithm $AL_{(F,\ n)}$ which can provide evidence for deciding the truth/falsity of each proposition in the finite sequence $\{F^{*}(1), F^{*}(2), \ldots, F^{*}(n)\}$.

Whereas $[(\forall x)F(x)]$ must always be interpreted strongly under the finitary interpretation of PA (see [An16], Theorem 6.7) in terms of algorithmic computability ([An16], Definition 2), where:

Definition 3 (Algorithmic computability): The number theoretical relation $F^{*}(x)$ is algorithmically computable if, and only if, there is a deterministic algorithm $AL_{F}$ that can provide evidence for deciding the truth/falsity of each proposition in the denumerable sequence $\{F^{*}(1), F^{*}(2), \ldots\}$.
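The distinction can be illustrated with a short sketch. Everything here is illustrative and not from [An16]: `F_star` is a hypothetical sample relation, and in this (trivially decidable) case the two notions coincide; the point of the distinction is that, in general, the single algorithm of Definition 3 need not exist even when the prefix-deciding algorithms of Definition 2 do.

```python
# Illustrative sketch of Definitions 2 and 3; 'F_star' is a hypothetical
# sample relation chosen only so the code is runnable.

def F_star(k: int) -> bool:
    """Sample number-theoretic relation: 'k is not a perfect square'."""
    r = int(k ** 0.5)
    return r * r != k

def verify_prefix(n: int) -> list:
    """Algorithmic verifiability: for ANY GIVEN n there is an algorithm
    deciding the truth/falsity of F*(1), ..., F*(n) -- a finite task."""
    return [F_star(k) for k in range(1, n + 1)]

def compute_all():
    """Algorithmic computability: a SINGLE algorithm that decides F*(k)
    for every k in the denumerable sequence F*(1), F*(2), ...."""
    k = 1
    while True:
        yield F_star(k)
        k += 1
```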

The significance of the distinction between algorithmically computable reasoning based on algorithmically computable truth, and algorithmically verifiable reasoning based on algorithmically verifiable truth, is that it admits the following, hitherto unsuspected, consequences:

(i) PA has two well-defined interpretations over the domain $N$ of the natural numbers (including $0$):

(a) the weak non-finitary standard interpretation $I_{PA(N, SV)}$ ([An16], Theorem 5.6),

and

(b) a strong finitary interpretation $I_{PA(N, SC)}$ ([An16], Theorem 6.7);

(ii) PA is non-finitarily consistent under $I_{PA(N, SV)}$ ([An16], Theorem 5.7);

(iii) PA is finitarily consistent under $I_{PA(N, SC)}$ ([An16], Theorem 6.8).

The significance of evidence-based reasoning for Computational Complexity

In this investigation I now show the relevance of evidence-based reasoning, and of distinguishing between algorithmically verifiable and algorithmically computable number-theoretic functions (as defined above), for Computational Complexity: the distinction provides a formal foundation for placing in perspective, and complementing, an uncomfortably counter-intuitive entailment in number theory—Theorem 2 below—which conventional wisdom has treated as sufficient for concluding that the prime divisors of an integer cannot be proven to be mutually independent.

However, I show here that such informally perceived barriers are, in this instance, illusory; and that admitting the above distinction illustrates:

(a) Why the prime divisors of an integer are mutually independent (Theorem 2);

(b) Why determining whether the signature (Definition 4 below) of a given integer $n$—coded as the key in a modified Bazeries-cylinder (see Definition 7 of this paper) based combination lock—is that of a prime, or not, can be done in polynomial time $O(\log_{e} n)$ (Corollary 4 of this paper); as compared to the time $\tilde{O}(\log^{15/2} n)$ given by Agrawal et al in [AKS04], and improved to $\tilde{O}(\log^{6} n)$ by Lenstra and Pomerance in [LP11], for determining whether the value of a given integer $n$ is that of a prime or not.

(c) Why it can be cogently argued that determining a factor of a given integer cannot be done in polynomial time.

Definition 4 (Signature of a number): The signature of a given integer $n$ is the sequence $\{a_{n,i}\}$ where $n + a_{n,i} \equiv 0 \pmod{p_{i}}$ for all primes $p_{i}$ such that $1 \leq i \leq \pi(\sqrt{n})$.

The signature is unique since, if $p_{\pi(\sqrt{m})+1}^{2} > m \geq p_{\pi(\sqrt{m})}^{2}$ and $p_{\pi(\sqrt{n})+1}^{2} > n \geq p_{\pi(\sqrt{n})}^{2}$ have the same signature, then $|m - n| = c_{1} \cdot \prod_{i=1}^{\pi(\sqrt{m})}p_{i} = c_{2} \cdot \prod_{i=1}^{\pi(\sqrt{n})}p_{i}$; whence $c_{1} = c_{2} = 0$, since $\prod_{i=1}^{k}p_{i} > (\prod_{i=2}^{k-2}p_{i}) \cdot p_{k}^{2} > p_{k+1}^{2}$ for $k > 4$ by appeal to Bertrand’s Postulate $2p_{k} > p_{k+1}$; and the uniqueness is easily verified for $k \leq 4$.
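Computing a signature in the sense of Definition 4 is straightforward: each entry is the least non-negative $a_{n,i}$ with $n + a_{n,i} \equiv 0 \pmod{p_i}$, i.e. $a_{n,i} = (-n) \bmod p_i$. A minimal sketch, with `primes_up_to` a helper introduced here purely for illustration:

```python
# Sketch of Definition 4: the signature of n over the primes p_i <= sqrt(n).

def primes_up_to(m: int) -> list:
    """Primes <= m, by the Sieve of Eratosthenes."""
    if m < 2:
        return []
    sieve = [True] * (m + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(m ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def signature(n: int) -> list:
    """a_{n,i} = (-n) mod p_i: the least non-negative residue making
    n + a_{n,i} divisible by p_i, for each prime p_i <= sqrt(n)."""
    return [(-n) % p for p in primes_up_to(int(n ** 0.5))]
```

For example, the signature of $23$ over the primes $2, 3 \leq \sqrt{23}$ is $(1, 1)$; an entry equals $0$ exactly when the corresponding prime divides $n$.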

Definition 5 (Value of a number): The value of a given integer $n$ is any well-defined interpretation—over the domain of the natural numbers—of the (unique) numeral $[n]$ that represents $n$ in the first-order Peano Arithmetic PA.

We note that Theorem 2 establishes a lower limit for [AKS04] and [LP11], because determining the signature of a given integer $n$ does not require knowledge of the value of the integer as defined by the Fundamental Theorem of Arithmetic.

Theorem 1 (Fundamental Theorem of Arithmetic): Every positive integer $n > 1$ can be represented in exactly one way as a product of prime powers:

$n=p_{1}^{n_{1}}p_{2}^{n_{2}}\cdots p_{k}^{n_{k}}=\prod _{i=1}^{k}p_{i}^{n_{i}}$

where $p_{1} < p_{2} < \ldots < p_{k}$ are primes and the $n_{i}$ are positive integers.
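Theorem 1 can be illustrated by a short trial-division sketch (an illustration only, not an efficient algorithm):

```python
# Unique factorisation per Theorem 1, by trial division.

def prime_factorisation(n: int) -> dict:
    """Return {p_i: n_i} with n = prod of p_i ** n_i."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                      # any remaining n is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors
```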

Are the prime divisors of an integer mutually independent?

In this paper I address the query:

Query 1: Are the prime divisors of an integer $n$ mutually independent?

Definition 6 (Independent events): Two events are independent if the occurrence of one event does not influence (and is not influenced by) the occurrence of the other.

Intuitively, the prime divisors of an integer seem to be mutually independent by virtue of the Fundamental Theorem of Arithmetic.

Moreover, the prime divisors of $n$ can also be seen to be mutually independent in the usual, linearly displayed, Sieve of Eratosthenes, where whether an integer $n$ is crossed out as a multiple of a prime $p$ is obviously independent of whether it is also crossed out as a multiple of a prime $q \neq p$:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 …
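The independence of the crossings can be made concrete in a sketch of the sieve; `crossings` is a hypothetical helper, introduced here only to record, for each integer, exactly which primes cross it out:

```python
# Sketch of the linear Sieve of Eratosthenes: whether n is crossed out
# as a multiple of p is recorded independently of whether it is crossed
# out as a multiple of any other prime q != p.

def crossings(limit: int) -> dict:
    """Map each n in 2..limit to the set of primes that cross it out."""
    crossed = {n: set() for n in range(2, limit + 1)}
    for p in range(2, limit // 2 + 1):
        if not crossed[p]:          # p survived every smaller prime: p is prime
            for multiple in range(2 * p, limit + 1, p):
                crossed[multiple].add(p)
    return crossed
```

For instance, $12$ is crossed out by both $2$ and $3$, $15$ by both $3$ and $5$, and $13$ by no prime at all.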

Despite such compelling evidence, conventional wisdom appears to accept as definitive the counter-intuitive conclusion that although we can see it as true, we cannot mathematically prove the following proposition as true:

Proposition 1: Whether or not a prime $p$ divides an integer $n$ is independent of whether or not a prime $q \neq p$ divides the integer $n$.

We note that such an unprovable-but-intuitively-true conclusion makes a stronger assumption than that in Gödel’s similar claim for his arithmetical formula $[(\forall x)R(x)]$—whose Gödel-number is $17Gen\ r$—in [Go31], p.26(2). Stronger, since Gödel does not assume his proposition to be intuitively true, but shows that though the arithmetical formula with Gödel-number $17Gen\ r$ is not provable in his Peano Arithmetic $P$ yet, for any $P$-numeral $[n]$, the formula $[R(n)]$ whose Gödel-number is $Sb \left(r \begin{array}{c}17 \\ Z(n)\end{array}\right)$ is $P$-provable, and therefore meta-mathematically true under any well-defined Tarskian interpretation of $P$ (cf., [An16], Section 3.).

Expressed in computational terms (see [An16], Corollary 8.3), under any well-defined interpretation of $P$, Gödel’s formula $[R(x)]$ translates as an arithmetical relation, say $R'(x)$, such that $R'(n)$ is algorithmically verifiable, but not algorithmically computable, as always true over $N$, since $[\neg (\forall x)R(x)]$ is $P$-provable ([An16], Corollary 8.2).

We thus argue that a perspective which denies Proposition 1 is based on perceived barriers that reflect, and are peculiar to, only the argument that:

Theorem 2: There is no deterministic algorithm that, for any given $n$, and any given prime $p \geq 2$, will evidence that the probability $\mathbb{P}(p\ |\ n)$ that $p$ divides $n$ is $\frac{1}{p}$, and the probability $\mathbb{P}(p \nmid n)$ that $p$ does not divide $n$ is $1 - \frac{1}{p}$.

Proof: By a standard result in the Theory of Numbers ([Ste02], Chapter 2, p.9, Theorem 2.1), we cannot define a probability function for the probability that a random $n$ is prime over the probability space $(1, 2, 3, \ldots)$.

(Compare with the informal argument in [HL23], pp.36-37.)

In other words, treating Theorem 2 as an absolute barrier does not admit the possibility—which has consequences for the resolution of outstanding problems in both the theory of numbers and computational complexity—that Proposition 1 is algorithmically verifiable, but not algorithmically computable, as true, since:

Theorem 3: For any given $n$, there is a deterministic algorithm that, given any prime $p \geq 2$, will evidence that the probability $\mathbb{P}(p\ |\ n)$ that $p$ divides $n$ is $\frac{1}{p}$, and the probability $\mathbb{P}(p \nmid n)$ that $p$ does not divide $n$ is $1 - \frac{1}{p}$.
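The contrast between Theorems 2 and 3 can be illustrated numerically: over any finite initial segment $1, \ldots, N$ with $p\ |\ N$, the density of multiples of $p$ is exactly $\frac{1}{p}$, even though no uniform probability measure exists over all of $(1, 2, 3, \ldots)$. A sketch for illustration only, not part of the formal argument:

```python
# Exact density of multiples of p in a finite initial segment 1..limit.

from fractions import Fraction

def divisor_density(p: int, limit: int) -> Fraction:
    """Proportion of n in 1..limit that are divisible by p."""
    hits = sum(1 for n in range(1, limit + 1) if n % p == 0)
    return Fraction(hits, limit)
```

When $p$ divides the segment length the density is exactly $\frac{1}{p}$; otherwise it only approximates $\frac{1}{p}$.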

Author’s working archives & abstracts of investigations

Can Gödel be held responsible for not clearly distinguishing—in his seminal 1931 paper on formally undecidable propositions (pp.596-616, ‘From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931‘, Jean van Heijenoort, Harvard University Press, 1976 printing)—between the implicit circularity that is masked by the non-constructive nature of his proof of undecidability in PM, and the lack of any circularity in his finitary proof of undecidability in Peano Arithmetic?

“The analogy of this argument with the Richard antinomy leaps to the eye. It is closely related to the “Liar” too;[Fn.14] for the undecidable proposition $[R (q); q]$ states that $q$ belongs to $K$, that is, by (1), that $[R (q); q]$ is not provable. We therefore have before us a proposition that says about itself that it is not provable [in PM].[Fn.15]

[Fn.14] Any epistemological antinomy could be used for a similar proof of the existence of undecidable propositions.”

[Fn.15] Contrary to appearances, such a proposition involves no faulty circularity, for initially it [only] asserts that a certain well-defined formula (namely, the one obtained from the $q$th formula in the lexicographic order by a certain substitution) is unprovable. Only subsequently (and so to speak by chance) does it turn out that this formula is precisely the one by which the proposition itself was expressed.”

It is a question worth asking, if we heed Abel-Luis Peralta, who is a Graduate in Scientific Calculus and Computer Science in the Faculty of Exact Sciences at the National University of La Plata in Buenos Aires, Argentina; and who has been contending in a number of posts on his Academia web-page that:

(i) Gödel’s semantic definition of ‘$[R(n) : n]$‘, and therefore of ‘$\neg Bew[R(n) : n]$‘, is not only:

(a) self-referential under interpretation—in the sense of the above quote (pp.597-598, van Heijenoort) from Gödel’s Introduction in his 1931 paper ‘On Formally Undecidable Propositions of Principia Mathematica and Related Systems I’ (pp.596-616, van Heijenoort);

but also that:

(b) neither of the definitions can be verified by a deterministic Turing machine as yielding a valid formula of PM.

Peralta is, of course, absolutely right in his contentions.

However, such non-constructiveness is a characteristic of any set-theoretical system in which PM is interpretable; and in which, by Gödel’s self-confessed Platonism (apparent in his footnote #15 in the quote above), we do not need to establish that his definitions of ‘$[R(n) : n]$‘ and ‘$\neg Bew[R(n) : n]$‘ are verifiable by a deterministic Turing machine in order to treat them as valid formulas of PM.

Reason: By the usual axiom of separation of any formal set theory such as ZFC in which PM is interpreted, Gödel’s set-theoretical definition (p.598, Heijenoort):

$n \in K \equiv \neg Bew[R(n) : n]$

lends legitimacy to $\neg Bew[R(n) : n]$ as a PM formula.

Thus Gödel can formally assume—without further proof, by appeal simply to the axiom of choice of ZFC—that the PM formulas with exactly one variable—of the type of natural numbers—can be well-ordered in a sequence in some way such as, for example (Fn.11, p.598, Heijenoort):

“… by increasing the sum of the finite sequences of integers that is the ‘class sign’, and lexicographically for equal sums.”

We cannot, though, conclude from this that:

(ii) Gödel’s formally undecidable P-formula, say $[(\forall x)R(x)]$—whose Gödel-number is defined as $17Gen\ r$ in Gödel’s proof of his Theorem VI (on pp.607-609 of van Heijenoort)—also cannot be verified by a deterministic Turing machine to be a valid formula of Gödel’s Peano Arithmetic P.

Reason: The axioms of set-theoretical systems such as PM, ZF, etc. would all admit—under a well-defined interpretation, if any—infinite elements, in the putative domain of any such interpretation, which are not Turing-definable.

Nevertheless, to be fair to two generations of scholars who—apart from those who are able to comfortably wear the logician’s hat—have laboured in attempts to place the philosophical underpinnings of Gödel’s reasoning (in his 1931 paper) in a coherent perspective (see this post; also this and this), I think Gödel must, to some extent, be held responsible—but in no way accountable—for the lack of a clear-cut distinction between the non-constructivity implicit in his semantic proof in (i), and the finitarity that he explicitly ensures for his syntactic proof in (ii).

Reason: Neither in his title, nor elsewhere in his paper, does Gödel categorically state that his goal was:

(iii) not only to demonstrate the existence of formally undecidable propositions in PM, a system which admits non-finitary elements under any putative interpretation;

(iv) but also to prevent the admittance of non-finitary elements—precisely those which would admit conclusions such as (ii)—when demonstrating the existence of formally undecidable propositions in ‘related’ systems such as his Peano Arithmetic P.

He merely hints at this by stating (see quote below from pp.597-598 of van Heijenoort) that his demonstration of (iii) is a ‘sketch’ that lacked the precision which he intended to achieve in (iv):

“Before going into details, we shall first sketch the main idea of the proof, of course without any claim to complete precision. The formulas of a formal system (we restrict ourselves here to the system PM) in outward appearance are finite sequences of primitive signs (variables, logical constants, and parentheses or punctuation dots), and it is easy to state with complete precision which sequences of primitive signs are meaningful formulas and which are not….

by:

(v) weakening the implicit assumption—of the decidability of the semantic truth of PM-propositions under any well-defined interpretation of PM—which underlies his proof of the existence of formally undecidable set-theoretical propositions in PM;

The method of proof just explained can clearly be applied to any formal system that, first, when interpreted as representing a system of notions and propositions, has at its disposal sufficient means of expression to define the notions occurring in the argument above (in particular, the notion “provable formula”) and in which, second, every provable formula is true in the interpretation considered. The purpose of carrying out the above proof with full precision in what follows is, among other things, to replace the second of the assumptions just mentioned by a purely formal and much weaker one.”

and:

(vi) insisting—in his proof of the existence of formally undecidable arithmetical propositions in his Peano Arithmetic P—upon the introduction of a methodology for constructively assigning unique truth values to only those (primitive recursive) quantified number-theoretic assertions (#1 to #45 on pp.603-606 of van Heijenoort) that are bounded when interpreted over the domain N of the natural numbers (footnote #34 on p.603 of van Heijenoort):

“Wherever one of the signs $(x)$, $(Ex)$, or $\varepsilon x$ occurs in the definitions below, it is followed by a bound on $x$. This bound serves merely to ensure that the notion defined is recursive (see Theorem IV). But in most cases the extension of the notion defined would not change if this bound were omitted.”

From today’s perspective, one could reasonably hold that—as Peralta implicitly contends—Gödel is misleadingly suggesting (in the initial quote above from pp.597-598 of van Heijenoort) that his definitions of ‘$[R(n) : n]$‘ and ‘$\neg Bew[R(n) : n]$‘ may be treated as yielding ‘meaningful’ formulas of PM which are well-definable constructively (in the sense of being definable by a deterministic Turing machine).

In my previous post I detailed precisely why such an assumption would be fragile, by showing how the introduction of the boundedness Gödel insisted upon in (vi) distinguishes:

(vii) Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (pp.598-599 of van Heijenoort), which admits Peralta’s contention (i);

from:

(viii) Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (pp.607-609 of van Heijenoort), which does not admit the corresponding contention (ii).

Moreover, we note that:

(1) Whereas Gödel can—albeit non-constructively—claim that his definition of ‘$Bew[R(n) : n]$‘ yields a formula in PM, we cannot claim, correspondingly, that his primitive recursive formula $Bew(x)$ is a formula in his Peano Arithmetic P.

(2) The latter is a number-theoretic relation defined by Gödel in terms of his primitive recursive relation #45, ‘$xBy$‘, as:

#46. $Bew(x) \equiv (\exists y)yBx$.

(3) In Gödel’s terminology, ‘$Bew(x)$‘ translates under interpretation over the domain N of the natural numbers as:

‘$x$ is the Gödel-number of some provable formula $[F]$ of Gödel’s Peano Arithmetic P’.

(4) However, unlike Gödel’s primitive recursive functions and relations #1 to #45, both ‘$(\exists y)yBx$‘ and ‘$\neg (\exists y)yBx$‘ are number-theoretic relations which are not primitive recursive—which means that they are not effectively decidable by a Turing machine under interpretation in N.

(5) Reason: Unlike in Gödel’s definitions #1 to #45 (see footnote #34 on p.603 of van Heijenoort, quoted above), there is no bound on the quantifier ‘$(\exists y)$‘ in the definition of $Bew(x)$.

Hence, by Turing’s Halting Theorem, we cannot claim—in the absence of specific proof to the contrary—that there must be some deterministic Turing machine which will determine whether or not, for any given natural number $m$, the assertion $Bew(m)$ is true under interpretation in N.
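The bounded/unbounded contrast in (5) can be sketched as follows; here `B` stands in for Gödel's proof relation $yBx$, but is a hypothetical placeholder, not an implementation of it:

```python
# A bounded existential quantifier is decided by a finite search, whereas
# the unbounded quantifier in Bew(x) yields only a semi-decision procedure.

def bounded_exists(B, x: int, bound: int) -> bool:
    """Decides (exists y <= bound) B(y, x): always terminates."""
    return any(B(y, x) for y in range(bound + 1))

def unbounded_exists(B, x: int) -> bool:
    """Searches for (exists y) B(y, x): terminates only if a witness
    exists, so it semi-decides, but does not decide, the relation."""
    y = 0
    while True:
        if B(y, x):
            return True
        y += 1
```

Calling `unbounded_exists` when no witness exists never returns, which is the sense in which the unbounded quantifier in $Bew(x)$ escapes effective decidability.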

This is the crucial difference between Gödel’s semantic proof of the existence of formally undecidable set-theoretical propositions in PM (which admits Peralta’s contention (i)), and Gödel’s syntactic proof of the existence of formally undecidable arithmetical propositions in the language of his Peano Arithmetic P (which does not admit his contention (i)).

(6) We cannot, therefore—in the absence of specific proof to the contrary—claim by Gödel’s Theorems V or VII that there must be some P-formula, say [Bew$_{_{PA}}(x)]$ (corresponding to the PM-formula $Bew[R(n) : n]$), such that, for any given natural number $m$:

(a) If $Bew(m)$ is true under interpretation in N, then [Bew$_{_{PA}}(m)]$ is provable in P;

(b) If $\neg Bew(m)$ is true under interpretation in N, then $\neg$[Bew$_{_{PA}}(m)]$ is provable in P.


A: Is Gödel’s reasoning really kosher?

Many scholars yet harbour a lingering suspicion that Gödel’s definition of his formally undecidable arithmetical proposition $[(\forall x)R(x,p)]$ involves a latent contradiction—arising from a putative, implicit, circular self-reference—that is masked by unverifiable, even if not patently invalid, mathematical reasoning.

The following proof of Gödel’s Theorem VI of his 1931 paper is intended to:

$\bullet$ strip away the usual mathematical jargon that shrouds proofs of Gödel’s argument, and which makes his—admittedly arcane—reasoning difficult for a non-logician to unravel;

and

$\bullet$ show that, and why—unlike in the case of the paradoxical ‘Liar’ sentence: ‘This sentence is a lie’—Gödel’s proposition $[(\forall x)R(x, p)]$ does not involve any circular self-reference that could yield a Liar-like contradiction, either in a formal mathematical language, or when interpreted in any language of common discourse.

B: Gödel’s 45 primitive recursive arithmetic functions and relations

We begin by noting that:

(1) In his 1931 paper on formally ‘undecidable’ arithmetical propositions, Gödel shows that, given a well-defined system of Gödel-numbering, every formula of a first-order Peano Arithmetic such as PA can be Gödel-numbered by Gödel’s primitive recursive relation #23, $Form(x)$, which is true if, and only if, $x$ is the Gödel-number (GN) of a formula of PA.

(2) So, given any natural number $n$, (1) allows us to decompose $n$ and effectively determine whether, or not, $n$ is the GN of some PA formula.

(3) Gödel also defines a primitive recursive relation #44, $Bw(x)$, which is true if, and only if, $x$ is the GN of a finite sequence of formulas in PA, each of which is either an axiom, or an immediate consequence of two preceding formulas in the sequence.

(4) So, given any natural number $n$, (3) allows us to effectively determine whether, or not, the natural number $n$ is the GN of a proof sequence in PA.

(5) Further, Gödel defines a primitive recursive relation #45, $xBy$, which is true if, and only if, $x$ is the GN of a proof sequence in PA whose last formula has the GN $y$.

(6) Gödel then defines a primitive recursive relation, say $Q(x,y) \equiv xBSUBy$, such that, for any $m,n$:

$mBSUBn$ is true if, and only if, $m$ happens to be a GN that can be decomposed into a proof sequence whose last member is some PA formula $[F(n)]$, and $n$ happens to be a GN that decomposes into the PA-formula $[F(u)]$ with only one variable $[u]$.

(7) The essence of Gödel’s Theorem VI lies in answering the question:

Query 1: Is there any natural number $n$ for which $mBSUBn$ is true for some natural number $m$?

C: Gödel’s reasoning in Peano Arithmetic

(8) Now, by Gödel’s Theorem VII (a standard representation theorem of arithmetic), $xBSUBy$ can be expressed in PA by some (formally well-defined) formula $[\neg R(x,y)]$ such that, for any $m,n$:

(a) If $mBSUBn$ is true, then $[\neg R(m,n)]$ is PA-provable;

(b) If $\neg mBSUBn$ is true, then $[R(m,n)]$ is PA-provable.

(9) Further, by (6) and (8), for any $m,n$, if $n$ is the GN of $[F(u)]$ then:

(a) If $mBSUBn$ is true, then $[\neg R(m,n)]$ is PA-provable; and $m$ is a PA-proof of $[F(n)]$;

(b) If $\neg mBSUBn$ is true, then $[R(m,n)]$ is PA-provable; and $m$ is not a PA-proof of $[F(n)]$.

(10) In his Theorem VI, Gödel then argues as follows:

(a) Let $q$ be the GN of the formula $[R(x,y)]$ defined in (8).

(b) Let $p$ be the GN of $[(\forall x)R(x,y)]$.

(c) Let $r$ be the GN of $[R(x,p)]$.

(d) Let $17Gen\ r$ be the GN of $[(\forall x)R(x,p)]$.

(11) We note that all the above primitive recursive relations are formally well-defined within the standard primitive recursive arithmetic PRA; and all the PA-formulas—as well as their corresponding Gödel-numbers—are well-defined in the first-order Peano Arithmetic PA.

In other words, as Gödel emphasised in his paper, the 46—i.e., $45 + xBSUBy$—PRA functions and relations that he defines are all bounded, and therefore effectively decidable as true or false over the domain $N$ of the natural numbers; whilst the PA-formulas that he defines do not involve any reference—or self-reference—to either the meaning or the truth/falsity of any PA-formulas under an interpretation in $N$, but only to their PA-provability which, he shows, is effectively decidable by his system of Gödel-numbering and his definition of the primitive recursive relation $xBy$.

(12) If we now substitute $p$ for $n$, and $[(\forall x)R(x,p)]$ for $[F(n)]$, in (9) we have (since $p$ is the GN of $[(\forall x)R(x,y)]$) that:

(i) If $mBSUBp$ is true, then $[\neg R(m,p)]$ is PA-provable; whence $m$ is a PA-proof of $[(\forall x)R(x,p)]$;

(ii) If $\neg mBSUBp$ is true, then $[R(m,p)]$ is PA-provable; whence $m$ is not a PA-proof of $[(\forall x)R(x,p)]$.

Hence $n = p$ answers Query 1 affirmatively.

D: Gödel’s conclusions

(13) Gödel then concludes that, if PA is consistent then:

By (12)(i), if $mBSUBp$ is true for some $m$, then both $[\neg R(m,p)]$ and $[(\forall x)R(x,p)]$ are PA-provable—a contradiction since, by instantiation in PA, the latter implies that $[R(m,p)]$ is provable in PA.

Hence $[(\forall x)R(x,p)]$, whose GN is $17Gen\ r$, is not provable in PA if PA is consistent.

(14) Moreover, if PA is assumed to also be $\omega$-consistent (which means that we cannot have a PA-provable formula $[\neg (\forall x)F(x)]$ such that $[F(m)]$ is also provable in PA for any given numeral $[m]$) then:

By (13), $m$ is not a PA-proof of $[(\forall x)R(x,p)]$ for any given $m$; whence $[R(m,p)]$ is PA-provable for any given $m$ by (12)(ii).

Hence $[\neg (\forall x)R(x,p)]$, whose GN is $Neg(17Gen r)$, is not provable in PA.

E: Gödel’s $[(\forall x)R(x,p)]$ does not refer to itself

We note that Gödel’s formula $[(\forall x)R(x,p)]$—whose GN is $17Gen\ r$— does not refer to itself since it is defined in terms of the natural number $p$, and not in terms of the natural number $17Gen\ r$.

F: Somewhere, far beyond Gödel

The consequences of Gödel’s path-breaking answer to Query 1 are far-reaching (as detailed in this thesis).

For instance, taken together with the proof that PA is categorical with respect to algorithmic computability (Corollary 7.2 of this paper), and that PA is not $\omega$-consistent (Corollary 8.4 of this paper), the above entails that:

$\bullet$ There can be no interpretation of Gödel’s definition of his formally undecidable arithmetical proposition $[(\forall x)R(x,p)]$ over the domain of the natural numbers—whether expressed mathematically or in any language of common discourse—that could lead to a contradiction;

$\bullet$ Gödel’s $[(\forall x)R(x,p)]$ is not a formally undecidable arithmetical proposition, since $[\neg (\forall x)R(x,p)]$ is PA-provable (see Corollary 8.2 of this paper).


(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

A: A conventional perspective on why the prime divisors of an integer are not mutually independent

In a post, Getting to the Roots of Factoring, on the blog Gödel’s Lost Letter and P = NP (hosted jointly by Richard Lipton, Professor of Computer Science at Georgia Tech College of Computing, and Kenneth W. Regan, Professor of Computer Science and Engineering at the University at Buffalo), Regan ‘revisits’ a 1994 paper on factoring by his co-host.

What I find intriguing in the above post is that the following extract subsumes, almost incidentally, a possibly implicit belief which may be inhibiting not only a resolution of the computational complexity of Integer Factoring (see this investigation) by conventional methods but, for similar reasons, also resolution of a large class of apparently unrelated problems (see, for instance, this investigation and this previous post):

“Here is the code of the algorithm. … the input $x$ is a product of two prime numbers, $\phi$ is a polynomial in just one variable, and $\gcd$ refers to the greatest-common-divisor algorithm expounded by Euclid around 300 B.C. …

$\star$ Repeat until exit:

$\star\ a :=$ a random number in $1,\dots, x-1$;

$\star\ b := \phi(a)\ mod(x)$;

$\star$ if $(\gcd(b,x) > 1)$ then exit.

Exiting enables carrying out the factorisation of $x$ into its two prime factors.

How many iterations must one expect to make through this maze before exit? How and when can the choice of the polynomial $\phi$ speed up the exploration? …

Note that we cannot consider the events $b \equiv 0\ mod(p)$ and $b \equiv 0\ mod(q)$ to be independent, even though $p$ and $q$ are prime, because $b = \phi(a)$ and $\phi$ may introduce bias.”
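For concreteness, the quoted loop can be sketched as follows, with $\phi(a) = a^{2} + 1$ chosen here purely for illustration (the post leaves $\phi$ unspecified):

```python
# Sketch of the quoted randomized factoring loop.  The choice of phi is
# an assumption for illustration, not the post's.

import random
from math import gcd

def find_factor(x, phi=lambda a: a * a + 1, seed=None):
    """Repeat: a := random in 1..x-1; b := phi(a) mod x;
    exit when gcd(b, x) > 1, returning that gcd."""
    rng = random.Random(seed)
    while True:
        a = rng.randrange(1, x)
        b = phi(a) % x
        g = gcd(b, x)
        if g > 1:
            return g
```

The returned $g$ divides $x$; when $1 < g < x$, the loop has found a non-trivial factor. How many iterations this takes is exactly the question the post raises, and the bias introduced by $\phi$ is why the two divisibility events need not be treated as independent there.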

The following reasoning shows why it is not obvious whether the assertion that:

“… we cannot consider the events $b \equiv 0\ mod(p)$ and $b \equiv 0\ mod(q)$ to be independent”

is intended to narrowly reflect a putative proof in the particular context of the assertion, or to echo a more general belief.

B: Defining the probability that a given integer is divisible by a given smaller integer

(1) We consider the questions:

(a) What is the probability for any given $n > i > 1$, where $i > u \geq 0$, that:

$n+u \equiv 0 \pmod{i}$?

(b) What is the compound probability for any given $n > i,\ j > 1$, where $i \neq j$, $i > u \geq 0$, and $j > v \geq 0$, that:

$n+u \equiv 0 \pmod{i}$, and $n+v \equiv 0 \pmod{j}$?

(c) What is the compound probability for any given $n > i,\ j > 1$, where $i \neq j$, $i > u \geq 0$, and $j > v \geq 0$, that:

$i$ divides $n+u$, and $j$ divides $n+v$?

C: Why the prime divisors of an integer are mutually independent

(2) We note that:

(a) The answer to query (1a) above is that the probability that the roll of an $i$-sided cylindrical die will yield the value $u$ is $\frac{1}{i}$, by the probability model for such an event as definable over the probability space $\{0, 1, 2, \ldots, i-1\}$;

(b) The answer to query (1b) above is that the probability that the simultaneous roll of one $i$-sided cylindrical die and one $j$-sided cylindrical die will yield the values $u$ and $v$, respectively, is $\frac{1}{i \cdot j}$, by the probability model for such a simultaneous event as defined over the probability space $\{(u, v): i > u \geq 0,\ j > v \geq 0\}$.

(3) We trivially conclude that:

The compound probability of determining $u$ and $v$ correctly from the simultaneous roll of one $i$-sided cylindrical die and one $j$-sided cylindrical die, is the product of the probability of determining $u$ correctly from the roll of an $i$-sided cylindrical die, and the probability of determining $v$ correctly from the roll of a $j$-sided cylindrical die.

(4) We further conclude non-trivially that the answer to query (1c) above is given by:

(a) If $i$ and $j$ are co-prime, the compound probability of correctly determining that $i$ divides $n$ and $j$ divides $n$ from the simultaneous roll of one $i$-sided cylindrical die and one $j$-sided cylindrical die, is the product of the probability of correctly determining that $i$ divides $n$ from the roll of an $i$-sided cylindrical die, and the probability of correctly determining that $j$ divides $n$ from the roll of a $j$-sided cylindrical die.

(b) The assumption that $i$ and $j$ be co-prime is also necessary, since 4(a) would not always be the case if $i$ and $j$ were not co-prime.

For instance, let $j=2i$. The probability that an $i$-sided cylindrical die will then yield $0$—and allow us to conclude that $i$ divides $n$—is $\frac{1}{i}$, and the probability that a $j$-sided cylindrical die will then yield $0$—and allow us to conclude that $j$ divides $n$—is $\frac{1}{j}$; but the probability of determining both that $i$ divides $n$, and that $j$ divides $n$, from a simultaneous roll of the two cylindrical dice is $\frac{1}{j}$, and not $\frac{1}{ij}$.
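The contrast between (4a) and (4b) can be verified by exact counting over one full period (a sketch; the function name is illustrative):

```python
def joint_divisibility_probability(i, j):
    """Exact probability, over one full period of length i*j, that a
    uniformly chosen n is divisible by both i and j."""
    period = i * j
    hits = sum(1 for n in range(period) if n % i == 0 and n % j == 0)
    return hits / period

# Co-prime case: the joint probability is the product (1/5)*(1/7) = 1/35.
assert joint_divisibility_probability(5, 7) == 1 / 35

# j = 2i: divisibility by j = 10 already implies divisibility by i = 5,
# so the joint probability is 1/j = 1/10, not 1/(i*j) = 1/50.
assert joint_divisibility_probability(5, 10) == 1 / 10
```

In general the count yields $1/\mathrm{lcm}(i, j)$, which factors as $\frac{1}{i} \cdot \frac{1}{j}$ exactly when $i$ and $j$ are co-prime.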

(5) We thus conclude non-trivially that, if $p$ and $q$ are two unequal primes, the probability of determining whether $p$ divides $n$ is independent of the probability of determining whether $q$ divides $n$.

Author’s working archives & abstracts of investigations

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

A: The twin prime conjecture

In the post Notes on the Bombieri asymptotic sieve on his blog What’s new, 2006 Fields Medal awardee and Professor of Mathematics at UCLA, Terence Tao, writes and comments that:

“The twin prime conjecture, still unsolved, asserts that there are infinitely many primes $p$ such that $p+2$ is also prime. A more precise form of this conjecture is (a special case) of the Hardy-Littlewood prime tuples conjecture, which asserts that:

$\sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (2\Pi_2+o(1)) x$ … (1)

as $x \rightarrow \infty$, where $\Lambda$ is the von Mangoldt function and $\Pi_2 = 0.6606\dots$ is the twin prime constant:

$\prod_{p>2} (1 - \frac{1}{(p-1)^2})$.

Because $\Lambda$ is almost entirely supported on the primes, it is not difficult to see that (1) implies the twin prime conjecture.

One can give a heuristic justification of the asymptotic (1) (and hence the twin prime conjecture) via sieve theoretic methods. …

It may also be possible that the twin prime conjecture could be proven by non-analytic means, in a way that does not lead to significantly new estimates on the sum $\sum_{n \leq x} \Lambda(n) \Lambda(n+2)$ (though this sum will of course have to go to infinity as $x \to \infty$ if the twin prime conjecture holds).”
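For concreteness, the sum in (1) can be tabulated directly for small $x$. A rough sketch (the naive `von_mangoldt` below is an illustrative implementation, adequate only for small arguments):

```python
from math import log

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k for a prime p and k >= 1, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:          # p is the least, hence prime, divisor of n
            m = n
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
    return 0.0

def twin_sum(x):
    """sum over n <= x of Lambda(n) * Lambda(n+2), as in (1)."""
    return sum(von_mangoldt(n) * von_mangoldt(n + 2) for n in range(1, x + 1))

# The Hardy-Littlewood prediction is that twin_sum(x)/x tends to
# 2*Pi_2 = 1.3203... as x grows; for small x the ratio merely fluctuates.
print(twin_sum(3000) / 3000)
```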

B: Why heuristic approaches to the twin prime conjecture may not suffice

1. What seems to prevent a non-heuristic determination of the limiting behaviour of prime counting functions is that the usual approximations of $\pi(n)$ for finite values of $n$ are apparently derived from real-valued functions which are asymptotic to $\pi(x)$, such as $\frac{x}{\log_{e}x}$, $Li(x)$, and Riemann’s function $R(x) = \sum_{n=1}^{\infty}\frac{\mu(n)}{n}\,li(x^{1/n})$.

2. The degree of approximation for finite values of $n$ is thus determined only heuristically, by conjecturing upon an error term in the asymptotic relation that can be seen to yield a closer approximation than others to the actual values of $\pi(n)$.

3. Moreover, conventional approaches to evaluating prime counting functions for finite $n$ may currently also subscribe to the belief:

(i) either—explicitly (see here)—that whether or not a prime $p$ divides an integer $n$ is not independent of whether or not a prime $q \neq p$ divides the integer $n$;

(ii) or—implicitly (since the twin-prime problem is yet open)—that a proof to the contrary must imply that if $P(n\ is\ a\ prime)$ is the probability that $n$ is a prime, then $\sum_{_{i = 1}}^{^{\infty}} P(i\ is\ a\ prime) = 1$.

4. If so, then conventional approaches seem to conflate the two probabilities:

(i) The probability $P(a)$ of selecting a number that has the property of being prime from a given set $S$ of numbers;

Example 1: I have a bag containing $100$ numbers in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? This is the basis for setting odds in games such as roulette.

(ii) The probability $P(b)$ of determining that a given integer $n$ is prime.

Example 2: I give you a $5$-digit combination lock along with a $10$-digit number $n$. The lock only opens if you set the combination to a proper factor of $n$ which is greater than $1$. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.

5. In case 4(i), if the precise proportion of primes to non-primes in $S$ is definable, then clearly $P(a)$ too is definable.

However, if $S$ is the set $N$ of all integers, and we cannot define a precise ratio of primes to composites in $N$, but only an order of magnitude such as $O(\frac{1}{\log_{e}n})$, then equally obviously $P(a)$ cannot be defined in $N$ (see Chapter 2, p.9, Theorem 2.1, here).

6. In case 4(ii) it follows that $P(b) = \frac{1}{\pi(\sqrt{n})}$, since we can show (see Corollary 2.9 on p.14 of this investigation) that whether or not a prime $p$ divides a given integer $n$ is independent of whether or not a prime $q \neq p$ divides $n$.

7. We thus have that $\pi(n) \approx n \cdot \prod_{_{i = 1}}^{^{\pi(\sqrt{n})}}(1-\frac{1}{p_{_{i}}})$. Hence, even though we cannot define the probability $P(n\ is\ a\ prime)$ of selecting a number from the set $N$ of all natural numbers that has the property of being prime, $\prod_{_{i = 1}}^{^{\pi(\sqrt{n})}}(1-\frac{1}{p_{_{i}}})$ can be treated as the de facto probability that a given $n$ is prime.
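The product in point 7 is easy to tabulate against the actual value of $\pi(n)$. A sketch (function names are illustrative; consistent with point 9 below, Mertens' theorem makes the product behave like $2e^{-\gamma}/\log_{e}n$ rather than $1/\log_{e}n$, so no claim is made here about the closeness of the approximation):

```python
from math import isqrt

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    if n >= 1:
        sieve[0] = sieve[1] = False
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def de_facto_prime_probability(n):
    """Product over the first pi(sqrt(n)) primes p of (1 - 1/p)."""
    prob = 1.0
    for p in primes_upto(isqrt(n)):
        prob *= 1 - 1 / p
    return prob

n = 100000
# Compare the exact count pi(n) with the estimate n * prod(1 - 1/p).
print(len(primes_upto(n)), n * de_facto_prime_probability(n))
```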

8. Moreover, by considering the asymptotic density of the set of all integers that are not divisible by the first $\pi(\sqrt {n})$ primes $p_{_{1}}, p_{_{2}}, \ldots, p_{_{\pi(\sqrt {n})}}$ we can show that, for any $n$, the expected number of such integers in any interval of length $(p_{_{\pi(\sqrt{ n})+1}}^{2} - p_{_{\pi(\sqrt n)}}^{2})$ is:

$(p_{_{\pi(\sqrt{ n})+1}}^{2} - p_{_{\pi(\sqrt n)}}^{2})\prod_{i = 1}^{\pi(\sqrt{n})}(1 - \frac{1}{p_{_{i}}})$.

9. We can then show that a non-heuristic approximation for the number of primes less than or equal to $n$ is given for all $n$ by:

$\pi(n) \approx \sum_{j = 1}^{n}\prod_{i = 1}^{\pi(\sqrt{j})}(1 - \frac{1}{p_{_{i}}}) \sim a \cdot \frac{n}{\log_{e}n} \rightarrow \infty$

for some constant $a > 2e^{-\gamma} \approx 1.12292\ldots$.
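The cumulative approximation in point 9 is cheap to compute. A sketch (function names are illustrative; the printed ratio to $n/\log_{e}n$ for moderate $n$ carries no claim that it has settled near its limiting value):

```python
from math import isqrt, log

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    if n >= 1:
        sieve[0] = sieve[1] = False
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def cumulative_prime_estimate(n):
    """sum_{j=1}^{n} prod_{i=1}^{pi(sqrt(j))} (1 - 1/p_i), updating the
    running product each time sqrt(j) crosses a new prime."""
    primes = primes_upto(isqrt(n))
    total, prod, idx = 0.0, 1.0, 0
    for j in range(1, n + 1):
        r = isqrt(j)
        while idx < len(primes) and primes[idx] <= r:
            prod *= 1 - 1 / primes[idx]
            idx += 1
        total += prod
    return total

n = 100000
# Empirical value of the constant a in: estimate ~ a * n / log(n).
print(cumulative_prime_estimate(n) / (n / log(n)))
```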

10. We can show, similarly, that the expected number of Dirichlet and twin primes in the interval $(p_{_{\pi(\sqrt{n})}}^{2},\ p_{_{\pi(\sqrt{n})+1}}^{2})$ can be estimated; and conclude that the number of such primes $\leq n$ is, in each case, cumulatively approximated non-heuristically by a function that $\rightarrow \infty$.

11. The method can, moreover, be generalised to apply to a large class of prime counting functions (see Section 5.A on p.22 of this investigation).

C: A non-heuristic proof that there are infinitely many twin primes

1. In particular, instead of estimating:

$\sum_{_{n \leq x}}\Lambda(n)\Lambda(n+2)$

heuristically by analytic considerations, one could also estimate the number $\pi_{_{2}}(x)$ of twin primes $\leq x$ non-heuristically as:

$\sum_{_{n \leq x}}P(n).P(n+2)$

where $P(n)$ is the de facto probability that a given $n$ is prime; and where we need to allow for the possibility that the two probabilities may not be independent.

2. One way of approaching this would be to define an integer $n$ as a $\mathbb{TW}(k)$ integer if, and only if, $r_{p_{_{i}}}(n) \neq 0$ and $r_{p_{_{i}}}(n) \neq 2$ for all $1 \leq i \leq k$, where $0 \leq r_{_{i}}(n) \leq i-1$ is defined for all $i \geq 1$ by:

$n + r_{_{i}}(n) \equiv 0\ (mod\ i)$

3. Note that if $n$ is a $\mathbb{TW}(k)$ integer, then both $n$ and $n+2$ are not divisible by any of the first $k$ primes $\{p_{_{1}}, p_{_{2}}, \ldots, p_{_{k}}\}$.
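The definition in points 2 and 3 is straightforward to implement, using the fact that $r_{i}(n) = (-n) \bmod i$. A sketch (`is_tw` and `first_k_primes` are illustrative names):

```python
def first_k_primes(k):
    """The first k primes, by trial division."""
    primes = []
    candidate = 2
    while len(primes) < k:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def is_tw(n, k):
    """n is a TW(k) integer iff r_p(n) is neither 0 nor 2 for each of the
    first k primes p, where n + r_p(n) = 0 (mod p), i.e. r_p(n) = (-n) % p."""
    return all((-n) % p not in (0, 2) for p in first_k_primes(k))

# 29 is a TW(2) integer: neither 29 nor 31 is divisible by 2 or 3.
# 25 is not: 25 + 2 = 27 is divisible by 3.
print(is_tw(29, 2), is_tw(25, 2))  # True False
```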

4. The asymptotic density of $\mathbb{TW}(k)$ integers over the set of natural numbers is then:

$\mathbb{D}(\mathbb{TW}(k)) = \prod_{i=2}^{k}(1 - \frac{2}{p_{_{i}}})$.

5. Further, if $p_{_{k}}^{2} \leq n \leq p_{_{k+1}}^{2}$ is a $\mathbb{TW}(k)$ integer, then $n$ is a prime and either $n+2$ is also a prime, or $n+2 = p_{_{k +1}}^{2}$.

6. If we define $\pi_{_{\mathbb{TW}(k)}}(n)$ as the number of $\mathbb{TW}(k)$ integers $\leq n$, the expected number of $\mathbb{TW}(k)$ integers in any interval $(a, b)$ is given by:

$\pi_{_{\mathbb{TW}(k)}}(b) - \pi_{_{\mathbb{TW}(k)}}(a) \approx (b-a)\prod_{i=2}^{k}(1 - \frac{2}{p_{_{i}}})$

7. Since $\pi_{_{\mathbb{TW}(k)}}(p_{_{k+1}}^{2}) - \pi_{_{\mathbb{TW}(k)}}(p_{_{k}}^{2})$ is at most one less than the number of twin-primes in the interval $(p_{_{k}}^{2},\ p_{_{k+1}}^{2})$, it follows that:

$\pi_{_{\mathbb{TW}(k)}}(p_{_{k+1}}^{2}) - \pi_{_{\mathbb{TW}(k)}}(p_{_{k}}^{2}) + 1 \geq \pi_{_{2}}(p_{_{k+1}}^{2}) - \pi_{_{2}}(p_{_{k}}^{2}) \geq \pi_{_{\mathbb{TW}(k)}}(p_{_{k+1}}^{2}) - \pi_{_{\mathbb{TW}(k)}}(p_{_{k}}^{2})$

8. Now, the expected number of $\mathbb{TW}(k)$ integers in the interval $(p_{_{k}}^{2},\ p_{_{k+1}}^{2})$ is given by:

$\pi_{_{\mathbb{TW}(k)}}(p_{_{k+1}}^{2}) - \pi_{_{\mathbb{TW}(k)}}(p_{_{k}}^{2}) \approx (p_{_{k+1}}^{2} - p_{_{k}}^{2})\prod_{i=2}^{k}(1 - \frac{2}{p_{_{i}}})$.

9. We conclude that the number $\pi_{_{2}}(p_{_{k+1}}^{2})$ of twin primes $\leq p_{_{k+1}}^{2}$ is given by the cumulative non-heuristic approximation:

$\sum_{j=1}^{k} (\pi_{_{2}}(p_{_{j+1}}^{2}) - \pi_{_{2}}(p_{_{j}}^{2})) = \pi_{_{2}}(p_{_{k+1}}^{2}) \approx \sum_{j=1}^{k} (p_{_{j+1}}^{2} - p_{_{j}}^{2})\prod_{i=2}^{j}(1 - \frac{2}{p_{_{i}}}) \rightarrow \infty$.
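The two sides of this approximation can be tabulated side by side. A sketch (function names are illustrative; `twin_prime_count` computes the exact $\pi_{_{2}}$ by sieving, counting primes $p \leq x$ with $p+2$ also prime, and no claim is made here about the closeness of the approximation for small $k$):

```python
from math import isqrt

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    if n >= 1:
        sieve[0] = sieve[1] = False
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def twin_prime_count(x):
    """pi_2(x): the number of primes p <= x such that p + 2 is also prime."""
    is_prime = set(primes_upto(x + 2))
    return sum(1 for p in is_prime if p <= x and p + 2 in is_prime)

def approx_twin_count(k):
    """sum_{j=1}^{k} (p_{j+1}^2 - p_j^2) * prod_{i=2}^{j} (1 - 2/p_i)."""
    primes = primes_upto(1000)  # ample for small k
    total, prod = 0.0, 1.0
    for j in range(1, k + 1):
        if j >= 2:
            prod *= 1 - 2 / primes[j - 1]   # p_j = primes[j - 1], 0-indexed
        total += (primes[j] ** 2 - primes[j - 1] ** 2) * prod
    return total

k = 10
p_next = primes_upto(1000)[k]  # p_{k+1}
print(twin_prime_count(p_next ** 2), approx_twin_count(k))
```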


In this post I address two critical issues, as raised in private correspondence with researchers, which may illuminate some objections to Gödel’s reasoning and conclusions that have been raised elsewhere by Wittgenstein, Floyd, Putnam et al.:

(i) By Rosser’s reasoning, doesn’t simple consistency suffice for defining an undecidable arithmetical proposition?

(ii) Doesn’t Gödel’s undecidable formula assert its own unprovability?

NOTE: The following correspondence refers copiously to this paper that was presented in June 2015 at the workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, University of Montpellier, France.

Subsequently, most of the cited results were detailed formally in the paper ‘The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis’, due to appear in the December 2016 issue of ‘Cognitive Systems Research’.

A: Doesn’t simple consistency suffice for defining Rosser’s undecidable arithmetical proposition?

You claim that the PA system is $\omega$-inconsistent, and that Gödel’s first theorem holds vacuously. But by Rosser’s result, simple consistency suffices.

Well, it does seem surprising that Rosser’s claim—that his ‘undecidable’ proposition only assumes simple consistency—has not been addressed more extensively in the literature. Number-theoretic expositions of Rosser’s proof have generally remained either implicit or sketchy (see, for instance, this post).

Note that Rosser’s proposition and reasoning involve interpretation of an existential quantifier, whilst Gödel’s proposition and reasoning only involve interpretation of a universal quantifier.

The reason why Rosser’s claim is untenable is that—in order to interpret the existential quantifier as per Hilbert’s $\epsilon$-calculus—Rosser’s argument needs to assume his Rule C (see Elliott Mendelson, Introduction to Mathematical Logic, 1964 ed., p.73), which implicitly implies that Gödel’s arithmetic P—in which Rosser’s argumentation is grounded—is $\omega$-consistent.

See, for instance, this analysis of (a) Wang’s outline of Rosser’s argument on p.5, (b) Beth’s outline of Rosser’s argument on p.6, and (c) Mendelson’s exposition of Rosser’s argument in Section 4.2 on p.8.

Moreover, the assumption is foundationally fragile, because Rule C invalidly assumes that we can introduce an ‘unspecified’ formula denoting an ‘unspecified’ numeral into PA even if the formula has not been demonstrated to be algorithmically definable in terms of the alphabet of PA.

See Theorem 8.5 and following remarks in Section 8, pp.7-8 of this paper that was presented in June 2015 at the workshop on Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, University of Montpellier, France.

B: As I see it, rule C is only a shortcut.

As I see it, rule C is only a shortcut; it is totally eliminable. Moreover, it is part of predicate logic, not of Peano Arithmetic.

Assuming that Rule C is a shortcut which can always be eliminated is illusory, and is tantamount to invalidly (see Corollary 8.6, p.17 of the Epsilon 2015 paper) claiming that Hilbert’s $\epsilon$-calculus is a conservative extension of the first-order predicate calculus.

Reason: Application of Rule C invalidly (see Theorem 8.5 and following remarks in Section 8, pp.7-8 of the Epsilon 2015 paper) involves introduction of a new individual constant, say $[d]$, in a first-order theory $K$ (see Mendelson 1964, p.74, I(iv)); ‘invalidly’ since Rule C does not qualify that $[d]$ must be algorithmically computable from the alphabet of $K$—which is necessary if $K$ is first-order.

Notation: We use square brackets to indicate that the expression within the brackets denotes a well-formed formula of a formal system, say $K$, that is to be viewed syntactically merely as a first-order string of $K$—i.e., one which is finitarily constructed from the alphabet of the language of $K$—without any reference to its meaning under any interpretation of $K$.

Essentially, Rule C mirrors in $K$ the intuitionistically objectionable postulation that the formula $[(\exists x)F(x)]$ of $K$ can always be interpreted as:

$F'(a)$ holds for some element $a$

in the domain of the interpretation of $K$ under which the formula $[F(x)]$ interprets as the relation $F'(x)$.

The Epsilon 2015 paper shows that this is not a valid interpretation of the formula $[(\exists x)F(x)]$ under any finitary, evidence-based, interpretation of $K$.

That, incidentally, is a consequence of the proof that PA is not $\omega$-consistent; which itself is a consequence of (Theorem 7.1, p.15, of the Epsilon 2015 paper):

Provability Theorem for PA: A PA formula $[F(x)]$ is provable if, and only if, $[F(x)]$ interprets as an arithmetical relation $F'(x)$ that is algorithmically computable as always true (see Definition 3, p.7, of the Epsilon 2015 paper) over the structure $\mathbb{N}$ of the natural numbers.

Compare with what Gödel has essentially shown in his famous 1931 paper on formally undecidable arithmetical propositions, which is that (Lemma 8.1, p.16, of the Epsilon 2015 paper):

Gödel: There is a PA formula $[R(x, p)]$—which Gödel refers to by its Gödel number $r$—which is not provable in PA, even though $[R(x, p)]$ interprets as an arithmetical relation that is algorithmically verifiable as always true (see Definition 4, p.7, of the Epsilon 2015 paper) over the structure $\mathbb{N}$ of the natural numbers.

C: If you by-pass the intuitionist objections, would all logicist and post-formalist theories hold?

If I have understood correctly, you claim that the PA system is $\omega$-inconsistent from an intuitionistic point of view? If you by-pass the intuitionist objections, would all logicist and post-formalist theories hold?

There is nothing to bypass—the first-order Peano Arithmetic PA is a formal axiomatic system which is $\omega$-inconsistent as much for an intuitionist, as it is for a realist, a finitist, a formalist, a logicist or a nominalist.

Philosophers may differ about beliefs that are essentially unverifiable; but the $\omega$-inconsistency of PA is a verifiable logical meta-theorem that none of them would dispute.

D: Isn’t Gödel’s undecidable formula $[(\forall x)R(x, p)]$—which Gödel refers to by its Gödel number $17Gen\ r$—self-referential?

Isn’t Gödel’s undecidable formula $[(\forall x)R(x, p)]$—which Gödel refers to by its Gödel number $17Gen\ r$—self-referential and covertly paradoxical?

According to Wittgenstein it interprets in any model as a sentence that is devoid of sense, or even meaning. I think a good reason for this is that the formula is simply syntactically wrongly formed: the provability of provability is not defined and cannot be consistently defined.

What you propose may be correct, but for automation systems of deduction wouldn’t $\omega$-inconsistency be much more problematic than undecidability?

How would you feel if a syntax rule is proposed, that formulas containing numerals are instantiations of open formulas that may not be part of the canonical language? Too daring, maybe?

Let me briefly respond to the interesting points that you have raised.

1. The $\omega$-inconsistency of PA is a meta-theorem; it is a Corollary of the Provability Theorem of PA (Theorem 7.1, p.15, of the Epsilon 2015 paper).

2. Gödel’s PA-formula $[(\forall x)R(x, p)]$ is not an undecidable formula of PA. It is merely unprovable in PA.

3. Moreover, Gödel’s PA-formula $[\neg(\forall x)R(x, p)]$ is provable in PA, which is why the PA formula $[(\forall x)R(x, p)]$ is not an undecidable formula of PA.

4. Gödel’s PA-formula $[(\forall x)R(x, p)]$ is not self-referential.

5. Wittgenstein correctly believed—albeit purely on the basis of philosophical considerations unrelated to whether or not Gödel’s formal reasoning was correct—that Gödel was wrong in stating that the PA formula $[(\forall x)R(x, p)]$ asserts its own unprovability in PA.

Reason: We have for Gödel’s primitive recursive relation $Q(x, y)$ that:

$Q(x, p)$ is true if, and only if, the PA formula $[R(x, p)]$ is provable in PA.

However, in order to conclude that the PA formula $[(\forall x)R(x, p)]$ asserts its own unprovability in PA, Gödel’s argument must further imply—which it does not—that:

$(\forall x)Q(x, p)$ is true (and so, by Gödel’s definition of $Q(x, y)$, the PA formula $[(\forall x)R(x, p)]$ is not provable in PA) if, and only if, the PA formula $[(\forall x)R(x, p)]$ is provable in PA.

In other words, for the PA formula $[(\forall x)R(x, p)]$ to assert its own unprovability in PA, Gödel must show—which his own argument shows is impossible, since the PA formula $[(\forall x)R(x, p)]$ is not provable in PA—that:

The primitive recursive relation $Q(x, p)$ is algorithmically computable as always true if, and only if, the arithmetical relation $R'(x, p)$ is algorithmically computable as always true (where $R'(x, p)$ is the arithmetical interpretation of the PA formula $[R(x, p)]$ over the structure $\mathbb{N}$ of the natural numbers).

6. Hence, Gödel’s PA-formula $[(\forall x)R(x, p)]$ is not covertly paradoxical.

7. IF Wittgenstein believed that the PA formula $[(\forall x)R(x, p)]$ is empty of meaning and has no valid interpretation, then he was wrong, and—as Gödel justifiably believed—he could not have properly grasped Gödel’s formal reasoning that:

(i) ‘$17Gen\ r$ is not $\kappa$-provable’ is a valid meta-theorem if PA is consistent, which means that:

‘If PA is consistent and we assume that the PA formula $[(\forall x)R(x, p)]$ is provable in PA, then the PA formula $[\neg(\forall x)R(x, p)]$ must also be provable in PA; from which we may conclude that the PA formula $[(\forall x)R(x, p)]$ is not provable in PA’

(ii) ‘$Neg(17Gen\ r)$ is not $\kappa$-provable’ is a valid meta-theorem ONLY if PA is $\omega$-consistent, which means that:

‘If PA is $\omega$-consistent and we assume that the PA formula $[\neg(\forall x)R(x, p)]$ is provable in PA, then the PA formula $[(\forall x)R(x, p)]$ must also be provable in PA; from which we may conclude that the PA formula $[\neg(\forall x)R(x, p)]$ is not provable in PA’.

8. In fact the PA formula $[(\forall x)R(x, p)]$ has the following TWO meaningful interpretations (the first of which is a true arithmetical meta-statement—since the PA formula $[R(n, p)]$ is provable in PA for any PA-numeral $[n]$—but the second is not—since the PA formula $[(\forall x)R(x, p)]$ is not provable in PA):

(i) For any given natural number $n$, there is an algorithm which will verify that each of the arithmetical meta-statements ‘$R'(1, p)$ is true’, ‘$R'(2, p)$ is true’, …, ‘$R'(n, p)$ is true’ holds under the standard, algorithmically verifiable, interpretation $\mathbb{M}$ of PA (see \S 5, p.11 of the Epsilon 2015 paper);

(ii) There is an algorithm which will verify that, for any given natural number $n$, the arithmetical statement ‘$R'(n, p)$ is true’ holds under the finitary, algorithmically computable, interpretation $\mathbb{B}$ of PA (see \S 6, p.13 of the Epsilon 2015 paper).

9. IF Wittgenstein believed that the PA formula $[(\forall x)R(x, p)]$ is not a well-defined PA formula, then he was wrong.

Gödel’s definition of the PA formula $[(\forall x)R(x, p)]$ yields a well-formed formula in PA, and cannot be treated as ‘syntactically wrongly formed’.

10. The Provability Theorem for PA shows that both ‘proving something in PA’ and ‘proving that something is provable in PA’ are finitarily well-defined meta-mathematical concepts.

11. The Provability Theorem for PA implies that PA is complete with respect to the concepts of satisfaction, truth and provability definable in automated deduction systems, which can only define algorithmically computable truth.

12. The Provability Theorem for PA implies that PA is categorical, so you can introduce your proposed syntax rule ONLY if it leads to a conservative extension of PA.

13. Whether ‘daring’ or not, why would you want to introduce such a rule?

E: Consider these two statements of yours …

Consider these two statements of yours:

“(iv): $p$ is the Gödel-number of the formula $[(\forall x)R(x, y)]$ of PA” and

“D(4): Gödel’s PA-formula $[(\forall x)R(x, p)]$ is not self-referential.”

If ‘$p$‘ is the Gödel-number of the open formula in para (iv), and the second argument of the closed formula $R$ in para D(4) is ‘$p$‘, then the second formula is obtained by instantiating the variable ‘$y$‘ in the first with its own Gödel-number.

So how would you call, in one word, the relation between the entire formula (in D(4)) and its second argument?

Para D(4) is an attempt to clarify precisely this point.

1. Apropos the first statement ‘(iv)’ cited by you:

From a pedantic perspective, the “relation between the entire formula (in D(4)) and its second argument” cannot be termed self-referential because the “second argument”, i.e., $p$, is the Gödel-number of the PA formula $[(\forall x)R(x, y)]$, and not that of “the entire formula (in D(4))”, i.e., of the formula $[(\forall x)R(x, p)]$ itself (whose Gödel number is $17Gen\ r$).

Putting it crudely, $17Gen\ r$ is neither self-referential—nor circularly defined—because it is not defined in terms of $17Gen\ r$, but in terms of $p$.

2. Apropos the second statement ‘D(4)’ cited by you:

I would interpret:

Gödel’s PA-formula $[(\forall x)R(x, p)]$ is self-referential

to mean, in this particular context, that—as Gödel wrongly claimed:

$[(\forall x)R(x, p)]$ asserts its own unprovability in PA.

Now, if we were to accept the claim that $[(\forall x)R(x, p)]$ is self-referential in the above sense, then (as various critics of Gödel’s reasoning have pointed out) we would have to conclude further that Gödel’s argument leads to the contradiction:

$(\forall x)Q(x, p)$ is true—and so, by Gödel’s definition of $Q(x, y)$—the PA formula $[(\forall x)R(x, p)]$ is not provable in PA—if, and only if, the PA formula $[(\forall x)R(x, p)]$ is provable in PA.

However, in view of the Provability Theorem of PA (Theorem 7.1, p.15, of the Epsilon 2015 paper), this contradiction would only follow if Gödel’s argument were to establish (which it does not) that:

The primitive recursive relation $Q(x, p)$ is algorithmically computable as always true if, and only if, the arithmetical interpretation $R'(x, p)$ of the PA formula $[R(x, p)]$ is algorithmically computable as always true over the structure $\mathbb{N}$ of the natural numbers.

The reason Gödel cannot claim to have established the above is that his argument only proves the much weaker meta-statement:

The arithmetical interpretation $R'(x, p)$ of the PA formula $[R(x, p)]$ is algorithmically verifiable as always true over the structure $\mathbb{N}$ of the natural numbers.

Ergo—contrary to Gödel’s claim—Gödel’s PA-formula $[(\forall x)R(x, p)]$ is not self-referential (and so, even though Gödel’s claimed interpretation of what his own reasoning proves is wrong, there is no paradox in Gödel’s reasoning per se)!

F: Is the PA system $\omega$-inconsistent without remedy?

Is the PA system $\omega$-inconsistent without remedy? Is it possible to introduce a new axiom or new rule which by-passes the problematic unprovable statements of the Gödel-Rosser Theorems?

1. Please note that the first-order Peano Arithmetic PA is:

(i) consistent (Theorem 7.3, p.15, of the Epsilon 2015 paper); which means that for any PA-formula $[A]$, we cannot have that both $[A]$ and $[\neg A]$ are Theorems of PA;

(ii) complete (Theorem 7.1, p.15, of the Epsilon 2015 paper); which means that we cannot add an axiom to PA which is not a Theorem of PA without inviting inconsistency;

(iii) categorical (Theorem 7.2, p.15, of the Epsilon 2015 paper); which means that if $\mathbb{M}$ is an interpretation of PA over a structure $\mathbb{S}$, and $\mathbb{B}$ is an interpretation of PA over a structure $\mathbb{T}$, then $\mathbb{S}$ and $\mathbb{T}$ are identical and denote the structure $\mathbb{N}$ of the natural numbers defined by Dedekind’s axioms; and so PA has no model which contains an element that is not a natural number (see Footnote 54, p.16, of the Epsilon 2015 paper).

2. What this means with respect to Gödel’s reasoning is that:

(i) PA has no undecidable propositions, which is why it is not $\omega$-consistent (Corollary 8.4, p.16, of the Epsilon 2015 paper);

(ii) The Gödel formula $[(\forall x)R(x, p)]$ is not provable in PA; but it is algorithmically verifiable as true (Corollary 8.3, p.16, of the Epsilon 2015 paper) under the algorithmically verifiable standard interpretation $\mathbb{M}$ of PA (see Section 5, p.11, of the Epsilon 2015 paper) over the structure $\mathbb{N}$ of the natural numbers;

(iii) The Gödel formula $[(\forall x)R(x, p)]$ is not provable in PA; and it is algorithmically computable as false (Corollary 8.3, p.16, of the Epsilon 2015 paper) under the algorithmically computable finitary interpretation $\mathbb{B}$ of PA (see Section 6, p.13, of the Epsilon 2015 paper) over the structure $\mathbb{N}$ of the natural numbers;

(iv) The Gödel formula $[\neg(\forall x)R(x, p)]$ is provable in PA; and it is therefore also algorithmically verifiable as true under the algorithmically verifiable standard interpretation $\mathbb{M}$ of PA over the structure $\mathbb{N}$ of the natural numbers—which means that the logic by which the standard interpretation of PA assigns values of ‘satisfaction’ and ‘truth’ to the formulas of PA (under Tarski’s definitions) may be paraconsistent (see http://plato.stanford.edu/entries/logic-paraconsistent) since PA is consistent;

(v) The Gödel formula $[\neg(\forall x)R(x, p)]$ is provable in PA; and it is therefore algorithmically computable as true (Corollary 8.2, p.16, of the Epsilon 2015 paper) under the algorithmically computable finitary interpretation $\mathbb{B}$ of PA over the structure $\mathbb{N}$ of the natural numbers.

3. It also means that:

(a) The “Gödel-Rosser Theorem” is not a Theorem of PA;

(b) The “unprovable Gödel sentence” is not a “problematic statement”;

(c) The “PA system” does not require a “remedy” just because it is “$\omega$-inconsistent”;

(d) No “new axiom or new rule” can “by-pass the unprovable sentence”.

4. Which raises the question:

Why do you see the “unprovable Gödel sentence” as a “problematic statement” that requires a “remedy” which must “by-pass the unprovable sentence”?


Ferguson’s and Priest’s thesis

In a brief, but provocative, review of what they term as “the enduring evolution of logic” over the ages, the authors of Oxford University Press’ recently released ‘A Dictionary of Logic‘, philosophers Thomas Macaulay Ferguson and Graham Priest, take to task what they view as a Kant-influenced manner in which logic is taught as a first course in most places in the world:

“… as usually ahistorical and somewhat dogmatic. This is what logic is; just learn the rules. It is as if Frege had brought down the tablets from Mount Sinai: the result is God-given, fixed, and unquestionable.”

Ferguson and Priest conclude their review by remarking that:

“Logic provides a theory, or set of theories, about what follows from what, and why. And like any theoretical inquiry, it has evolved, and will continue to do so. It will surely produce theories of greater depth, scope, subtlety, refinement—and maybe even truth.”

However, it is not obvious whether that is prescient optimism, or a tongue-in-cheek exit line!

A nineteenth century parody of the struggle to define ‘truth’ objectively

For, if anything, the developments in logic since around 1931 have—seemingly in gross violation of the hallowed principle of Ockham’s razor, and its crude, but highly effective, modern avatar KISS—indeed produced a plethora of theories of great depth, scope, subtlety, and refinement.

These, however, seem to have more in common with the cynical twentieth-century emphasis on subjective, unverifiable ‘truth’ than with the concept of an objective, evidence-based ‘truth’ that centuries of philosophers and mathematicians strenuously struggled to differentiate and express.

A struggle reflected so eloquently in this nineteenth century quote:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

… Lewis Carroll (Charles L. Dodgson), ‘Through the Looking-Glass’, chapter 6, p. 205 (1934 ed.). First published in 1872.

Making sense of mathematical propositions about infinite processes

It was, indeed, an epic struggle which culminated in the nineteenth century standards of rigour successfully imposed—in no small measure by the works of Augustin-Louis Cauchy and Karl Weierstrass—on verifiable interpretations of mathematical propositions about infinite processes involving real numbers.

A struggle, moreover, which should have culminated equally successfully in similar twentieth century standards—on verifiable interpretations of mathematical propositions containing references to infinite computations involving integers—sought to be imposed in 1936 by Alan Turing upon philosophical and mathematical discourse.

For it follows from Turing’s 1936 reasoning that where quantification is not, or cannot be, explicitly defined in formal logical terms—eg. the classical expression of the Liar paradox as ‘This sentence is a lie’—a paradox cannot per se be considered as posing serious linguistic or philosophical concerns (see, for instance, the series of four posts beginning here).

Of course—as reflected implicitly in Kurt Gödel’s seminal 1931 paper on undecidable arithmetical propositions—it would be a matter of serious concern if the word ‘This’ in the English language sentence, ‘This sentence is a lie’, could be validly viewed as implicitly implying that:

(i) there is a constructive infinite enumeration of English language sentences;

(ii) to each of which a truth-value can be constructively assigned by the rules of a two-valued logic; and,

(iii) in which ‘This’ refers uniquely to a particular sentence in the enumeration.

Gödel’s influence on Turing’s reasoning

However, Turing’s constructive perspective had the misfortune of being subverted by a knee-jerk, anti-establishment culture that was—and apparently remains to this day—overwhelmed by Gödel’s powerful Platonic—and essentially unverifiable—mathematical and philosophical 1931 interpretation of his own construction of an arithmetical proposition that is formally unprovable, but undeniably true under any definition of ‘truth’ in any interpretation of arithmetic over the natural numbers.

Otherwise, I believe that Turing could easily have provided the necessary constructive interpretations of arithmetical truth—sought by David Hilbert for establishing the consistency of number theory finitarily—which are addressed by the following paper due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

What is logic: using Ockham’s razor

Moreover, the paper endorses the implicit orthodoxy of an Ockham’s-razor-influenced perspective—which Ferguson and Priest seemingly find wanting—that logic is simply a deterministic set of rules that must constructively assign the truth values ‘truth/falsity’ to the sentences of a language.

It is a view that I expressed earlier as the key to a possible resolution of the EPR paradox in the following paper, which I presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, Istanbul, Turkey:

where I introduced the definition:

A finite set $\lambda$ of rules is a Logic of a formal mathematical language $\mathcal{L}$ if, and only if, $\lambda$ constructively assigns unique truth-values:

(a) Of provability/unprovability to the formulas of $\mathcal{L}$; and

(b) Of truth/falsity to the sentences of the Theory $T(\mathcal{U})$ which is defined semantically by the $\lambda$-interpretation of $\mathcal{L}$ over a structure $\mathcal{U}$.

I showed there that such a definitional rule-based approach to ‘logic’ and ‘truth’ allows us to:

$\bullet$ Equate the provable formulas of the first order Peano Arithmetic PA with the PA formulas that can be evidenced as ‘true’ under an algorithmically computable interpretation of PA over the structure $\mathbb{N}$ of the natural numbers;

$\bullet$ Adequately represent some of the philosophically troubling abstractions of the physical sciences mathematically;

$\bullet$ Interpret such representations unambiguously; and

$\bullet$ Conclude further:

$\bullet$ First that the concept of infinity is an emergent feature of any mechanical intelligence whose true arithmetical propositions are provable in the first-order Peano Arithmetic; and

$\bullet$ Second that discovery and formulation of the laws of quantum physics lies within the algorithmically computable logic and reasoning of a mechanical intelligence whose logic is circumscribed by the first-order Peano Arithmetic.

Author’s working archives & abstracts of investigations

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

A The Economist: The return of the machinery question

In a Special Report on Artificial Intelligence in its issue of 25th June 2016, ‘The return of the machinery question‘, The Economist suggests that both cosmologist Stephen Hawking and entrepreneur Elon Musk share to some degree the:

“… fear that AI poses an existential threat to humanity, because superintelligent computers might not share mankind’s goals and could turn on their creators”.

B Our irrational propensity to fear that which we are drawn to embrace

Surprising, since I suspect both would readily agree that, if anything should scare us, it is our irrational propensity to fear that which we are drawn to embrace!

And therein should lie not only our comfort, but perhaps also our salvation.

For Artificial Intelligence is constrained by rationality; Human Intelligence is not.

An Artificial Intelligence must, whether individually or collectively, create and/or destroy only rationally. Humankind can and does, both individually and collectively, create and destroy irrationally.

C Justifying irrationality

For instance, as the legatees of logicians Kurt Gödel and Alfred Tarski have amply demonstrated, a Human Intelligence can easily be led to believe that some statements of even the simplest of mathematical languages—Arithmetic—must be both ‘formally undecidable’ and ‘true’, even in the absence of any objective yardstick for determining what is ‘true’!

D Differentiating between Human reasoning and Mechanistic reasoning

An Artificial Intelligence, however, can only treat as true that which can be proven—by its rules—to be true by an objective assignment of ‘truth’ and ‘provability’ values to the propositions of the language that formally expresses its mechanical operations—Arithmetic.

The implications of the difference are not obvious; but that the difference could be significant is the thesis of this paper which is due to appear in the December 2016 issue of Cognitive Systems Research:

E Respect for evidence-based ‘truth’ could be Darwinian

More importantly, the paper demonstrates that both Human Intelligence—whose evolution is accepted as Darwinian—and Artificial Intelligence—whose evolution it is ‘feared’ may be Darwinian—share a common (Darwinian?) respect for an accountable concept of ‘truth’.

A respect that should make both Intelligences fitter to survive by recognising what philosopher Christopher Mole describes in this invitational blogpost as the:

“… importance of the rapport between an organism and its environment”

—an environment that can obviously accommodate the birth, and nurture the evolution, of both intelligences.

So, it may not be too far-fetched to conjecture that the evolution of both intelligences must also, then, share a Darwinian respect for the kind of human values—towards protecting intelligent life forms—that, in however limited or flawed a guise, is visibly emerging as an inherent characteristic of human evolution; an evolution which, whatever the cost, could optimistically be viewed as struggling to incrementally strengthen, and simultaneously integrate, individualism (fundamental particles) into nationalism (atoms), into multi-nationalism (molecules), and, possibly, into universalism (elements).

F The larger question: Should we fear an extra-terrestrial Intelligence?

From a broader perspective yet, our apprehensions about the evolution of a rampant Artificial Intelligence created by a Frankensteinian Human Intelligence should, perhaps, more rightly be addressed—as some have urged—within the larger uncertainty posed by SETI:

Is there a rational danger to humankind in actively seeking an extra-terrestrial intelligence?

I would argue that any answer would depend on how we articulate the question and that, in order to engage in a constructive and productive debate, we need to question—and reduce to a minimum—some of our most cherished mathematical and scientific beliefs and fears which cannot be communicated objectively.


We investigate whether the probabilistic distribution of prime numbers can be treated as a heuristic model of quantum behaviour, since it too can be treated as a quantum phenomenon: it has a well-defined binomial probability function that is algorithmically computable, the conjectured values of $\pi(n)$ differ from the actual values with a binomial standard deviation, and we define a phenomenon as a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable, but not algorithmically computable.

1. Thesis: The concept of ‘mathematical truth’ must be accountable

The thesis of this investigation is that a major philosophical challenge—which has so far inhibited a deeper understanding of the quantum behaviour reflected in the mathematical representation of some laws of nature (see, for instance, this paper by Eamonn Healey)—lies in holding to account the uncritical acceptance of propositions of a mathematical language as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology of accountability for objectively evidencing such acceptance.

2. The concept of ‘set-theoretical truth’ is not accountable

Since current folklore is that all scientific truths can be expressed adequately, and communicated unambiguously, in the first order Set Theory ZF, and since the Axiom of Infinity of ZF cannot—even in principle—be objectively evidenced as true under any putative interpretation of ZF (as we argue in this post), an undesirable consequence of such an uncritical acceptance is that the distinction between the truths of mathematical propositions under interpretation which can be objectively evidenced, and those which cannot, is not evident.

3. The significance of such accountability for mathematics

The significance of such a distinction for mathematics is highlighted in this paper due to appear in the December 2016 issue of Cognitive Systems Research, where we address this challenge by considering the two finitarily accountable concepts of algorithmic verifiability and algorithmic computability (first introduced in this paper at the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, Birmingham, UK).

(i) Algorithmic verifiability

A number-theoretical relation $F(x)$ is algorithmically verifiable if, and only if, for any given natural number $n$, there is an algorithm $AL_{(F,\ n)}$ which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence $\{F(1), F(2), \ldots, F(n)\}$.

(ii) Algorithmic computability

A number-theoretical relation $F(x)$ is algorithmically computable if, and only if, there is an algorithm $AL_{F}$ that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence $\{F(1), F(2), \ldots\}$.

(iii) Algorithmic verifiability vis à vis algorithmic computability

We note that algorithmic computability implies the existence of a single algorithm that can decide the truth/falsity of every proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of any such algorithm for the sequence as a whole.
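The difference in quantifier order can be made concrete in code. In the following sketch of my own (an illustration, not drawn from the paper; `is_prime` and `verifier_for` are hypothetical names), `is_prime` plays the role of a single algorithm deciding every instance uniformly, while `verifier_for` builds, for each bound $n$, a separate finite certificate for $\{F(1), \ldots, F(n)\}$:

```python
# Illustrative sketch: the quantifier difference between
# 'algorithmically computable' and 'algorithmically verifiable'.

def is_prime(k):
    """A single, uniform algorithm deciding primality for every k:
    this is what 'algorithmically computable' demands."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def verifier_for(n):
    """'Algorithmically verifiable' only demands that, for EACH bound n,
    SOME algorithm decides F(1), ..., F(n) -- for instance a finite
    lookup table constructed separately for that n."""
    table = {k: is_prime(k) for k in range(1, n + 1)}
    return lambda k: table[k]
```

Every algorithmically computable relation is verifiable in this sense (build each table from the uniform algorithm), but a verifiable-only relation would admit evidence tables for each $n$ with no single program generating them all, which is the asymmetry noted above.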

From the point of view of a finitary mathematical philosophy—which is the constraint within which an applied science ought to ideally operate—the significant difference between the two concepts could be expressed by saying that we may treat the decimal representation of a real number as corresponding to a physically measurable limit—and not only to a mathematically definable limit—if and only if such representation is definable by an algorithmically computable function (Thesis 1 on p.9 of this paper that was presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey).

We note that although every algorithmically computable relation is algorithmically verifiable, the converse is not true.

We show in the CSR paper how such accountability helps define finitary truth assignments that differentiate human reasoning from mechanistic reasoning in arithmetic by identifying two, hitherto unsuspected, Tarskian interpretations of the first order Peano Arithmetic PA, under both of which the PA axioms interpret as finitarily true over the domain $N$ of the natural numbers, and the PA rules of inference preserve such truth finitarily over $N$.

4. The ambit of human reasoning vis à vis the ambit of mechanistic reasoning

One corresponds to the classical, non-finitary, putative standard interpretation of PA over $N$, and can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions.

The other corresponds to a finitary interpretation of PA over $N$ that circumscribes the ambit of mechanistic reasoning about ‘true’ arithmetical propositions, and establishes the long-sought consistency of PA (see this post); this establishes PA as a mathematical language of unambiguous communication for the mathematical representation of physical phenomena.

5. The significance of such accountability for the mathematical representation of physical phenomena

The significance of such a distinction for the mathematical representation of physical phenomena is highlighted in this paper that was presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey, where we showed how some of the seemingly paradoxical elements of quantum mechanics may resolve if we define:

Quantum phenomenon: A phenomenon is a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable but not algorithmically computable.

6. The mathematical representation of a quantum phenomenon that is determinate but not predictable

By considering the properties of Gödel’s $\beta$ function (see $\S$4.1 on p.8 of this preprint)—which allows us to strongly represent any non-terminating sequence of natural numbers by an arithmetical function—it would follow that any projection of the future values of a quantum-phenomenon-associated, algorithmically verifiable, function is consistent with an infinity of algorithmically computable functions, all of whose past values are identical to the algorithmically verifiable past values of the function. The phenomenon itself would therefore be essentially unpredictable if it cannot be represented by an algorithmically computable function.
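The point that a finite record of past values is consistent with infinitely many algorithmically computable extensions can be illustrated numerically. In this sketch of my own (the observed values are hypothetical), two polynomials, both computable, agree on every observed value yet diverge at the very next argument; adding $c$ times the vanishing polynomial for any integer $c$ gives infinitely many such extensions.

```python
from fractions import Fraction

past = [2, 3, 5, 7]  # hypothetical observed values at x = 0, 1, 2, 3

def lagrange(values, x):
    """Evaluate the least-degree polynomial through (i, values[i]) at x."""
    total = Fraction(0)
    for i, yi in enumerate(values):
        term = Fraction(yi)
        for j in range(len(values)):
            if j != i:
                term *= Fraction(x - j, i - j)
        total += term
    return total

def bump(x):
    """A polynomial that vanishes at every observed point x = 0, 1, 2, 3."""
    r = 1
    for j in range(len(past)):
        r *= (x - j)
    return r

f = lambda x: lagrange(past, x)            # one computable extension
g = lambda x: lagrange(past, x) + bump(x)  # another, with the same past

assert all(f(x) == g(x) == past[x] for x in range(len(past)))
assert f(4) != g(4)  # yet they predict different 'next' observations
```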

However, since the algorithmic verifiability of any quantum phenomenon shows that it is mathematically determinate, it follows that the physical phenomenon itself must observe determinate laws.

7. Such representation does not need to admit multiverses

Hence (contrary to any interpretation that admits unverifiable multiverses) only one algorithmically computable extension of the function is consistent with the law determining the behaviour of the phenomenon, and each possible extension must therefore be associated with a probability that the next observation of the phenomenon is described by that particular extension.

8. Is the probability of the future behaviour of quantum phenomena definable by an algorithmically computable function?

The question arises: Although we cannot represent a quantum phenomenon explicitly by an algorithmically computable function, does the phenomenon lend itself to an algorithmically computable probability of its future behaviour in the above sense?

9. Can primes yield a heuristic model of quantum behaviour?

We now show that the distribution of prime numbers, denoted by the arithmetical prime counting function $\pi(n)$, is a quantum phenomenon in the above sense, with a well-defined probability function that is algorithmically computable.

10. Two prime probabilities

We consider the two probabilities:

(i) The probability $P(a)$ of selecting a number that has the property of being prime from a given set $S$ of numbers;

Example 1: I have a bag containing $100$ numbers in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? This is the basis for setting odds in games such as roulette.

(ii) The probability $P(b)$ of determining a proper factor of a given number $n$.

Example 2: I give you a $5$-digit combination lock along with a $10$-digit number $n$. The lock only opens if you set the combination to a proper factor of $n$ which is greater than $1$. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.
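Under the (assumed) reading that the combinations range over the $10^{5}$ five-digit strings and the lock opens on any proper factor of $n$ greater than $1$, the probability in Example 2 can be sketched as follows; `lock_probability` is a hypothetical helper of my own, not from the source.

```python
def lock_probability(n, digits=5):
    """Chance that one blind try -- a uniformly random 'digits'-digit
    combination -- is a proper factor of n greater than 1."""
    top = 10 ** digits
    hits = sum(1 for d in range(2, top) if d < n and n % d == 0)
    return hits / top

# Small sanity check: 15 has proper factors 3 and 5 in range,
# so two favourable combinations out of 10^5.
assert lock_probability(15) == 2 / 10 ** 5

# For a 10-digit n (here the product of the primes up to 29),
# the loop performs 10^5 divisibility tests:
n = 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 * 29
p = lock_probability(n)
```

The point of the example is that no essentially cheaper strategy than testing candidate divisors is known, which is what makes such locks (and RSA-style moduli) hard to open blindly.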

11. The probability that a number randomly chosen from the set of natural numbers is prime is not definable

Clearly the probability $P(a)$ of selecting a number that has the property of being prime from a given set $S$ of numbers is definable if the precise proportion of primes to non-primes in $S$ is definable.

However, if $S$ is the set $N$ of all integers, and we cannot define a precise ratio of primes to composites in $N$, but only an order of magnitude such as $O(\frac{1}{\log_{e} n})$, then equally obviously $P(a) = P(n\ is\ a\ prime)$ cannot be defined in $N$ (see Chapter 2, p.9, Theorem 2.1, here).
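The order-of-magnitude claim is easy to check numerically. In this sketch of my own, the proportion of primes up to $n$ tracks $1/\log_{e} n$ only loosely, and the ratio of the two drifts with $n$ rather than settling at a fixed constant, which is why no precise ratio of primes to composites exists in $N$.

```python
import math

def pi_count(n):
    """The prime-counting function pi(n), by Eratosthenes' sieve."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    density = pi_count(n) / n      # actual proportion of primes up to n
    estimate = 1 / math.log(n)     # the O(1/log n) order of magnitude
    print(n, density, estimate, density / estimate)
```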

12. The prime divisors of a natural number are independent

Now, the following paper proves $P(b) = \frac{1}{\pi(\sqrt{n})}$, since it shows that whether or not a prime $p$ divides a given integer $n$ is independent of whether or not a prime $q \neq p$ divides $n$:

Why Integer Factorising cannot be polynomial time

We thus have that $\pi(n) \approx n \cdot \prod_{_{i = 1}}^{^{\pi(\sqrt{n})}}(1-\frac{1}{p_{_{i}}})$, with a binomial standard deviation.

Hence, even though we cannot define the probability $P(n\ is\ a\ prime)$ of selecting a number from the set $N$ of all natural numbers that has the property of being prime, $\prod_{_{i = 1}}^{^{\pi(\sqrt{n})}}(1-\frac{1}{p_{_{i}}})$ can be treated as the putative non-heuristic probability that a given $n$ is a prime.
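The approximation $\pi(n) \approx n \cdot \prod_{i=1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}})$ can be tested numerically; the following is a sketch of my own using a simple sieve. One caveat worth noting: by Mertens' third theorem the product $\prod_{p \leq \sqrt{n}}(1-\frac{1}{p})$ behaves asymptotically like $2e^{-\gamma}/\log_{e} n$, roughly $12\%$ above the $1/\log_{e} n$ density, so for large $n$ the estimate should be read as an order-of-magnitude one.

```python
def primes_upto(m):
    """All primes <= m, by Eratosthenes' sieve."""
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(m ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, m + 1, p)))
    return [k for k in range(2, m + 1) if sieve[k]]

def prime_estimate(n):
    """n times the product over primes p <= sqrt(n) of (1 - 1/p)."""
    est = float(n)
    for p in primes_upto(int(n ** 0.5)):
        est *= 1 - 1 / p
    return est

n = 10 ** 4
actual = len(primes_upto(n))   # the exact value of pi(n)
approx = prime_estimate(n)     # the product-formula estimate
```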

13. The distribution of primes is a quantum phenomenon

The distribution of primes is thus determinate but unpredictable, since it is representable by the algorithmically verifiable but not algorithmically computable arithmetical number-theoretic function $Pr(n) = p_{_{n}}$, where $p_{_{n}}$ is the $n$th prime.

The Prime Number Generating Theorem and the Trim and Compact algorithms detailed in this 1964 investigation illustrate why the arithmetical number-theoretic function $Pr(n)$ is algorithmically verifiable but not algorithmically computable (see also this Wikipedia proof that no non-constant polynomial function $Pr(n)$ with integer coefficients exists that evaluates to a prime number for all integers $n$).

Moreover, although the distribution of primes is a quantum phenomenon with probability $\prod_{_{i = 1}}^{^{\pi(\sqrt{n})}}(1-\frac{1}{p_{_{i}}})$, it is easily seen (see Figs. 7-11 on pp.23-26 of this preprint) that the generation of the primes is algorithmically computable.
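A minimal illustration of that last remark, in a sketch of my own: although no non-constant polynomial with integer coefficients yields a prime for every $n$, the sequence $Pr(n) = p_{n}$ is certainly generable by an algorithm (here, unoptimised trial division).

```python
def nth_prime(n):
    """Pr(n): the n-th prime, generated mechanically by trial division."""
    count, k = 0, 1
    while count < n:
        k += 1
        # k is prime iff no d in [2, sqrt(k)] divides it
        if all(k % d for d in range(2, int(k ** 0.5) + 1)):
            count += 1
    return k

# The sequence is mechanically generable even though no closed
# polynomial formula produces it.
```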

14. Why the universe may be algorithmically computable

By analogy, this suggests that although the measurable values of some individual properties of particles in the universe over time may represent a quantum phenomenon, the universe itself may be algorithmically computable if the laws governing the generation of all the particles in the universe over time are algorithmically computable.


1. Since, by the Prime Number Theorem, the number of primes $\leq \sqrt n$ is $O(\frac{\sqrt n}{\log_{e}\sqrt n})$, it would follow that determining a factor of $n$ requires at least one logical operation for each prime $\leq \sqrt n$, and therefore cannot be done in polynomial time—whence $P \neq NP$—if whether or not a prime $p$ divides an integer $n$ is independent of whether or not a prime $q \neq p$ divides the integer $n$.
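The operation count in point 1 can be made explicit; the following is a sketch of my own (unoptimised trial division, counting one divisibility test per prime $\leq \sqrt n$, with `factor_with_count` a hypothetical name).

```python
def factor_with_count(n):
    """Return (a prime factor of n, or None if n is prime, and the number
    of divisibility tests), testing one prime p <= sqrt(n) per operation."""
    ops = 0
    for p in range(2, int(n ** 0.5) + 1):
        # count only the tests by primes, as in the argument above
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):
            ops += 1
            if n % p == 0:
                return p, ops
    return None, ops

# For n = 91 the primes tried are 2, 3, 5, 7: four tests, factor 7.
assert factor_with_count(91) == (7, 4)
```

For a prime input the count is exactly $\pi(\sqrt n)$, which is polynomial in $n$ itself but exponential in the bit-length $\log_2 n$ of the input, which is the sense in which the argument bears on polynomial-time complexity.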

2. Currently, conventional approaches to determining the computational complexity of Integer Factorising apparently appeal critically to the belief:

(i) either—explicitly (see here)—that whether or not a prime $p$ divides an integer $n$ is not independent of whether or not a prime $q \neq p$ divides the integer $n$;

(ii) or—implicitly (since the problem is yet open)—that a proof to the contrary must imply that if $P(n\ is\ a\ prime)$ is the probability that $n$ is a prime, then $\sum_{_{i = 1}}^{^{\infty}} P(i\ is\ a\ prime) = 1$.

3. If so, then conventional approaches seem to conflate the two probabilities:

(i) The probability $P(a)$ of selecting a number that has the property of being prime from a given set $S$ of numbers;

Example 1: I have a bag containing $100$ numbers in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? This is the basis for setting odds in games such as roulette.

(ii) The probability $P(b)$ of determining a proper factor of a given number $n$.

Example 2: I give you a $5$-digit combination lock along with a $10$-digit number $n$. The lock only opens if you set the combination to a proper factor of $n$ which is greater than $1$. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.

4. In case 3(i), if the precise proportion of primes to non-primes in $S$ is definable, then clearly $P(a)$ too is definable.

However, if $S$ is the set $N$ of all integers, and we cannot define a precise ratio of primes to composites in $N$, but only an order of magnitude such as $O(\frac{1}{\log_{e} n})$, then equally obviously $P(a)$ cannot be defined in $N$ (see Chapter 2, p.9, Theorem 2.1, here).

5. In case 3(ii) the following paper proves $P(b) = \frac{1}{\pi(\sqrt{n})}$, since it shows that whether or not a prime $p$ divides a given integer $n$ is independent of whether or not a prime $q \neq p$ divides $n$:

Why Integer Factorising cannot be polynomial time

Not only does it immediately follow that $P \neq NP$ (see here), but we further have that $\pi(n) \approx n \cdot \prod_{_{i = 1}}^{^{\pi(\sqrt{n})}}(1-\frac{1}{p_{_{i}}})$, with a binomial standard deviation. Hence, even though we cannot define the probability $P(n\ is\ a\ prime)$ of selecting a number from the set $N$ of all natural numbers that has the property of being prime, $\prod_{_{i = 1}}^{^{\pi(\sqrt{n})}}(1-\frac{1}{p_{_{i}}})$ can be treated as the de facto probability that a given $n$ is prime, with all its attendant consequences for various prime-counting functions and the Riemann Zeta function (see here).
