(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

We investigate whether the probabilistic distribution of prime numbers can be treated as a heuristic model of quantum behaviour, since it too can be treated as a quantum phenomenon: it admits a well-defined binomial probability function that is algorithmically computable, the conjectured values of \pi(n) differ from the actual values with a binomial standard deviation, and we define a phenomenon as a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable, but not algorithmically computable.

1. Thesis: The concept of ‘mathematical truth’ must be accountable

The thesis of this investigation is that a major philosophical challenge, one which has so far inhibited a deeper understanding of the quantum behaviour reflected in the mathematical representation of some laws of nature (see, for instance, this paper by Eamonn Healey), lies in the uncritical acceptance of propositions of a mathematical language as true under an interpretation, whether axiomatically or on the basis of subjective self-evidence, without any specified methodology of accountability for objectively evidencing such acceptance.

2. The concept of ‘set-theoretical truth’ is not accountable

Since current folklore holds that all scientific truths can be expressed adequately, and communicated unambiguously, in the first order Set Theory ZF, and since the Axiom of Infinity of ZF cannot, even in principle, be objectively evidenced as true under any putative interpretation of ZF (as we argue in this post), an undesirable consequence of such uncritical acceptance is that the distinction between those truths of mathematical propositions under interpretation which can be objectively evidenced and those which cannot is not evident.

3. The significance of such accountability for mathematics

The significance of such a distinction for mathematics is highlighted in this paper due to appear in the December 2016 issue of Cognitive Systems Research, where we address this challenge by considering the two finitarily accountable concepts of algorithmic verifiability and algorithmic computability (first introduced in this paper at the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, Birmingham, UK).

(i) Algorithmic verifiability

A number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL_{(F,\ n)} which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence \{F(1), F(2), \ldots, F(n)\}.

(ii) Algorithmic computability

A number-theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL_{F} that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence \{F(1), F(2), \ldots\}.
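As a minimal sketch in Python of the distinction (our illustration, not drawn from the cited papers): a single primality test is an algorithm AL_{F} in the sense of (ii), while a finite lookup table over any observed truth values F(1), \ldots, F(n) is an algorithm AL_{(F,\ n)} in the sense of (i), deciding exactly those n instances and asserting nothing beyond them.

```python
def is_prime(m: int) -> bool:
    """One algorithm AL_F deciding 'm is prime' for every natural number m:
    the relation is algorithmically computable."""
    if m < 2:
        return False
    return all(m % d != 0 for d in range(2, int(m ** 0.5) + 1))

def make_al_f_n(observed: list):
    """Given the observed truth values F(1), ..., F(n) of an arbitrary
    number-theoretical relation F, return an algorithm AL_(F, n) that
    decides exactly those n instances: algorithmic verifiability via a
    finite lookup table."""
    def al_f_n(k: int) -> bool:
        return observed[k - 1]  # defined only for 1 <= k <= n
    return al_f_n
```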

(iii) Algorithmic verifiability vis à vis algorithmic computability

We note that algorithmic computability implies the existence of a single algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability implies no such algorithm: it guarantees only that, for each n, some algorithm decides the first n propositions of the sequence.

From the point of view of a finitary mathematical philosophy (the constraint within which an applied science ought ideally to operate), the significant difference between the two concepts can be expressed by saying that we may treat the decimal representation of a real number as corresponding to a physically measurable limit, and not only to a mathematically definable limit, if and only if such a representation is definable by an algorithmically computable function (Thesis 1 on p.9 of this paper, which was presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey).
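For instance (a sketch under our own choice of example; \sqrt{2} is not discussed in the paper), the decimal representation of \sqrt{2} is definable by an algorithmically computable function: a single terminating procedure returns its k-th digit for any k, so the representation may be treated as corresponding to a physically measurable limit in the above sense.

```python
from math import isqrt

def sqrt2_digit(k: int) -> int:
    """Return the k-th decimal digit of sqrt(2), with k = 0 giving the
    integer digit 1; exact integer arithmetic, terminating for every k."""
    # floor(sqrt(2) * 10**k) computed exactly as isqrt(2 * 10**(2*k))
    return isqrt(2 * 10 ** (2 * k)) % 10

print([sqrt2_digit(k) for k in range(6)])  # [1, 4, 1, 4, 2, 1]
```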

We note that although every algorithmically computable relation is algorithmically verifiable, the converse is not true.

We show in the CSR paper how such accountability helps define finitary truth assignments that differentiate human reasoning from mechanistic reasoning in arithmetic by identifying two, hitherto unsuspected, Tarskian interpretations of the first order Peano Arithmetic PA, under both of which the PA axioms interpret as finitarily true over the domain N of the natural numbers, and the PA rules of inference preserve such truth finitarily over N.

4. The ambit of human reasoning vis à vis the ambit of mechanistic reasoning

One corresponds to the classical, non-finitary, putative standard interpretation of PA over N, and can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions.

The other corresponds to a finitary interpretation of PA over N that circumscribes the ambit of mechanistic reasoning about ‘true’ arithmetical propositions, and establishes the long-sought consistency of PA (see this post); this establishes PA as a mathematical language of unambiguous communication for the mathematical representation of physical phenomena.

5. The significance of such accountability for the mathematical representation of physical phenomena

The significance of such a distinction for the mathematical representation of physical phenomena is highlighted in this paper that was presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey, where we showed how some of the seemingly paradoxical elements of quantum mechanics may resolve if we define:

Quantum phenomenon: A phenomenon is a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable but not algorithmically computable.

6. The mathematical representation of quantum phenomena that is determinate but not predictable

By considering the properties of Gödel’s \beta function (see \S4.1 on p.8 of this preprint), which allows us to strongly represent any non-terminating sequence of natural numbers by an arithmetical function, it would follow that the phenomenon itself is essentially unpredictable if it cannot be represented by an algorithmically computable function: any projection of the future values of an algorithmically verifiable function associated with a quantum phenomenon is consistent with an infinity of algorithmically computable functions, all of whose past values are identical to the algorithmically verifiable past values of that function.
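Gödel’s \beta function is concrete enough to sketch in code. The snippet below (our rendering of the standard textbook construction, not of the preprint’s \S4.1) encodes an arbitrary finite sequence of natural numbers as a pair (b, c) via the Chinese Remainder Theorem, so that \beta(b, c, i) recovers the i-th term; any observed finite prefix is thus arithmetically representable, while infinitely many distinct extensions remain consistent with it.

```python
from math import lcm

def beta(b: int, c: int, i: int) -> int:
    """Goedel's beta function: beta(b, c, i) = b mod (1 + (i + 1) * c)."""
    return b % (1 + (i + 1) * c)

def encode(seq: list) -> tuple:
    """Find (b, c) with beta(b, c, i) = seq[i] for each i, by the standard
    Chinese-Remainder construction: taking c as a multiple of lcm(1..n)
    exceeding max(seq) makes the moduli 1 + (i+1)*c pairwise coprime."""
    n = len(seq)
    c = lcm(*range(1, n + 1)) * (max(seq) + 1)
    b, m = 0, 1
    for i, r in enumerate(seq):          # incremental CRT
        mi = 1 + (i + 1) * c
        t = ((r - b) * pow(m, -1, mi)) % mi
        b, m = b + m * t, m * mi
    return b, c

b, c = encode([3, 1, 4, 1, 5])
print([beta(b, c, i) for i in range(5)])  # [3, 1, 4, 1, 5]
```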

However, since the algorithmic verifiability of any quantum phenomenon shows that it is mathematically determinate, it follows that the physical phenomenon itself must observe determinate laws.

7. Such representation does not need to admit multiverses

Hence (contrary to any interpretation that admits unverifiable multiverses) only one algorithmically computable extension of the function is consistent with the law determining the behaviour of the phenomenon, and each possible extension must therefore be associated with a probability that the next observation of the phenomenon is described by that particular extension.

8. Is the probability of the future behaviour of quantum phenomena definable by an algorithmically computable function?

The question arises: although we cannot represent a quantum phenomenon explicitly by an algorithmically computable function, does the phenomenon lend itself to an algorithmically computable probability of its future behaviour in the above sense?

9. Can primes yield a heuristic model of quantum behaviour?

We now show that the distribution of prime numbers, as given by the arithmetical prime counting function \pi(n), is a quantum phenomenon in the above sense, with a well-defined probability function that is algorithmically computable.

10. Two prime probabilities

We consider the two probabilities:

(i) The probability P(a) of selecting a number that has the property of being prime from a given set S of numbers;

Example 1: I have a bag containing 99 numbers, in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? (Here, clearly 1/3.) This is the basis for setting odds in games such as roulette.

(ii) The probability P(b) of determining a proper factor of a given number n.

Example 2: I give you a 5-digit combination lock along with a 10-digit number n. The lock only opens if you set the combination to a proper factor of n which is greater than 1. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.
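A brute-force check of Example 2 is straightforward (our sketch; the 10-digit number is our own arbitrary choice, and we model the guess as a uniformly random 5-digit setting):

```python
from math import isqrt

def divisors(n: int) -> set:
    """All divisors of n, found by trial division up to sqrt(n)."""
    ds = set()
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            ds |= {d, n // d}
    return ds

n = 1234567890                       # a 10-digit number (our example)
settings = 10 ** 5                   # possible 5-digit combinations
opening = [d for d in divisors(n) if 2 <= d < settings]
print(len(opening) / settings)       # chance the first blind try opens the lock
```

For this particular n, only a few dozen of the 100,000 possible settings open the lock.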

11. The probability that a number chosen at random from the set of all natural numbers is prime is not definable

Clearly the probability P(a) of selecting a number that has the property of being prime from a given set S of numbers is definable if the precise proportion of primes to non-primes in S is definable.

However, if S is the set N of all natural numbers, then we cannot define a precise ratio of primes to composites in N, but only an order of magnitude such as O(\frac{1}{\log_e n}); so, equally obviously, P(a) = P(n\ is\ a\ prime) cannot be defined over N (see Chapter 2, p.9, Theorem 2.1, here).
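The vanishing density is easy to exhibit numerically (our sketch): the proportion \pi(n)/n of primes up to n tracks \frac{1}{\log_e n} and tends to 0, so no uniform, non-zero probability of primality can be assigned over all of N.

```python
from math import log

def prime_pi(n: int) -> int:
    """pi(n): the number of primes <= n, via a Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for n in (10 ** 3, 10 ** 5, 10 ** 7):
    print(n, prime_pi(n) / n, 1 / log(n))  # both columns shrink towards 0
```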

12. The prime divisors of a natural number are independent

Now, the following paper proves that P(b) = \frac{1}{\pi(\sqrt{n})}, by showing that whether or not a prime p divides a given integer n is independent of whether or not a prime q \neq p divides n:

Why Integer Factorising cannot be polynomial time

We thus have that \pi(n) \approx n \cdot \prod_{i=1}^{\pi(\sqrt{n})}(1 - \frac{1}{p_{i}}), with a binomial standard deviation.

Hence, even though we cannot define the probability P(n\ is\ a\ prime) of selecting a number with the property of being prime from the set N of all natural numbers, \prod_{i=1}^{\pi(\sqrt{n})}(1 - \frac{1}{p_{i}}) can be treated as the putative non-heuristic probability that a given n is prime.
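The approximation, and the sense in which the product can be treated as a probability of primality, can be checked numerically (our sketch, implementing the displayed product directly):

```python
from math import isqrt, prod

def primes_upto(n: int) -> list:
    """The primes <= n, by a Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

n = 10 ** 6
small = primes_upto(isqrt(n))                  # p_1, ..., p_pi(sqrt(n))
estimate = n * prod(1 - 1 / p for p in small)  # n * prod (1 - 1/p_i)
actual = len(primes_upto(n))                   # the true pi(n)
print(actual, round(estimate))                 # agreement to within a few per cent
```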

13. The distribution of primes is a quantum phenomenon

The distribution of primes is thus determinate but unpredictable, since it is representable by the algorithmically verifiable, but not algorithmically computable, arithmetical number-theoretic function Pr(n) = p_{n}, where p_{n} is the n-th prime.

The Prime Number Generating Theorem and the Trim and Compact algorithms detailed in this 1964 investigation illustrate why the arithmetical number-theoretic function Pr(n) is algorithmically verifiable but not algorithmically computable (see also this Wikipedia proof that no non-constant polynomial function with integer coefficients exists that evaluates to a prime number for all integers n).
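The polynomial half of this claim is easy to check concretely (our sketch of the classical counterexample behind the cited Wikipedia proof): Euler’s polynomial n^2 + n + 41 is prime for n = 0, \ldots, 39, yet fails at n = 40.

```python
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    return all(m % d != 0 for d in range(2, int(m ** 0.5) + 1))

euler = lambda n: n * n + n + 41                   # Euler's prime-rich polynomial
print(all(is_prime(euler(n)) for n in range(40)))  # True: prime for n = 0..39
print(euler(40), is_prime(euler(40)))              # 1681 = 41 * 41: not prime
```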

Moreover, although the distribution of primes is a quantum phenomenon with probability \prod_{i=1}^{\pi(\sqrt{n})}(1 - \frac{1}{p_{i}}), it is easily seen (see Figs. 7-11 on pp.23-26 of this preprint) that the generation of the primes is algorithmically computable.
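A minimal sketch of such computable generation (ours; it does not reproduce the Trim and Compact algorithms of the 1964 investigation) is a generator that yields the primes in order by trial-dividing each candidate by the primes already produced:

```python
from itertools import count, islice

def prime_generator():
    """Yield the primes in order; each candidate is trial-divided only by
    the primes already generated whose square does not exceed it."""
    found = []
    for m in count(2):
        if all(m % p != 0 for p in found if p * p <= m):
            found.append(m)
            yield m

print(list(islice(prime_generator(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```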

14. Why the universe may be algorithmically computable

By analogy, this suggests that although the measurable values of some individual properties of particles in the universe over time may represent quantum phenomena, the universe itself may be algorithmically computable if the laws governing the generation of all the particles in the universe over time are algorithmically computable.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand