
(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

Ferguson and Priest’s thesis

In a brief but provocative review of what they term “the enduring evolution of logic” over the ages, the authors of Oxford University Press’ recently released ‘A Dictionary of Logic’, philosophers Thomas Macaulay Ferguson and Graham Priest, take to task what they view as a Kant-influenced manner in which logic is taught as a first course in most places in the world:

“… as usually ahistorical and somewhat dogmatic. This is what logic is; just learn the rules. It is as if Frege had brought down the tablets from Mount Sinai: the result is God-given, fixed, and unquestionable.”

Ferguson and Priest conclude their review by remarking that:

“Logic provides a theory, or set of theories, about what follows from what, and why. And like any theoretical inquiry, it has evolved, and will continue to do so. It will surely produce theories of greater depth, scope, subtlety, refinement—and maybe even truth.”

However, it is not obvious whether that is prescient optimism or a tongue-in-cheek exit line!

A nineteenth century parody of the struggle to define ‘truth’ objectively

For, if anything, the developments in logic since around 1931 have—seemingly in gross violation of the hallowed principle of Ockham’s razor, and of its crude but highly effective modern avatar KISS—indeed produced a plethora of theories of great depth, scope, subtlety, and refinement.

These, however, seem to have more in common with the cynical twentieth-century emphasis on a subjective, unverifiable, ‘truth’ than with the concept of an objective, evidence-based, ‘truth’ that centuries of philosophers and mathematicians struggled so strenuously to differentiate and express.

A struggle reflected so eloquently in this nineteenth century quote:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

… Lewis Carroll (Charles L. Dodgson), ‘Through the Looking-Glass’, chapter 6, p. 205 (1934 ed.). First published in 1872.

Making sense of mathematical propositions about infinite processes

It was, indeed, an epic struggle which culminated in the nineteenth-century standards of rigour successfully imposed—in no small measure by the works of Augustin-Louis Cauchy and Karl Weierstrass—on verifiable interpretations of mathematical propositions about infinite processes involving real numbers.

A struggle, moreover, which should have culminated equally successfully in similar twentieth-century standards—on verifiable interpretations of mathematical propositions containing references to infinite computations involving integers—that Alan Turing sought to impose upon philosophical and mathematical discourse in 1936.

The Liar paradox

For it follows from Turing’s 1936 reasoning that where quantification is not, or cannot be, explicitly defined in formal logical terms—e.g., the classical expression of the Liar paradox as ‘This sentence is a lie’—a paradox cannot per se be considered as posing serious linguistic or philosophical concerns (see, for instance, the series of four posts beginning here).

Of course—as reflected implicitly in Kurt Gödel’s seminal 1931 paper on undecidable arithmetical propositions—it would be a matter of serious concern if the word ‘This’ in the English language sentence, ‘This sentence is a lie’, could validly be viewed as implying that:

(i) there is a constructive infinite enumeration of English language sentences;

(ii) to each of which a truth-value can be constructively assigned by the rules of a two-valued logic; and,

(iii) in which ‘This’ refers uniquely to a particular sentence in the enumeration.

Gödel’s influence on Turing’s reasoning

However, Turing’s constructive perspective had the misfortune of being subverted by a knee-jerk, anti-establishment culture that was—and apparently remains to this day—overwhelmed by Gödel’s powerful Platonic, and essentially unverifiable, mathematical and philosophical 1931 interpretation of his own construction of an arithmetical proposition that is formally unprovable, but undeniably true under any definition of ‘truth’ in any interpretation of arithmetic over the natural numbers.

Otherwise, I believe that Turing could easily have provided the necessary constructive interpretations of arithmetical truth—sought by David Hilbert for establishing the consistency of number theory finitarily—an issue that is addressed by the following paper, due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Gödelian thesis’.

What is logic: using Ockham’s razor

Moreover, the paper endorses the implicit orthodoxy of an Ockham’s razor-influenced perspective—which Ferguson and Priest seemingly find wanting—that logic is simply a deterministic set of rules that must constructively assign the truth values of ‘truth/falsity’ to the sentences of a language.

It is a view that I expressed earlier as the key to a possible resolution of the EPR paradox in the following paper, which I presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, Istanbul, Turkey:

Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?

where I introduced the definition:

A finite set \lambda of rules is a Logic of a formal mathematical language \mathcal{L} if, and only if, \lambda constructively assigns unique truth-values:

(a) Of provability/unprovability to the formulas of \mathcal{L}; and

(b) Of truth/falsity to the sentences of the Theory T(\mathcal{U}) which is defined semantically by the \lambda-interpretation of \mathcal{L} over a structure \mathcal{U}.
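By way of a toy illustration only (for classical propositional logic, rather than for the first-order languages that the definition addresses), the following Python sketch shows what it means for a finite, deterministic set of rules to constructively assign a unique truth value to every sentence of a language; the function names are mine:

```python
from itertools import product

# Formulas are nested tuples: ('var', 'p'), ('not', f), ('and', f, g), ('or', f, g).
def truth(f, v):
    """Constructively evaluate formula f under the assignment v (a dict)."""
    op = f[0]
    if op == 'var':
        return v[f[1]]
    if op == 'not':
        return not truth(f[1], v)
    if op == 'and':
        return truth(f[1], v) and truth(f[2], v)
    if op == 'or':
        return truth(f[1], v) or truth(f[2], v)
    raise ValueError(f"unknown connective: {op}")

def variables(f):
    """The set of propositional variables occurring in f."""
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def is_tautology(f):
    """Decide f under every assignment: a finite, deterministic rule
    set that leaves no sentence of the language undecided."""
    vs = sorted(variables(f))
    return all(truth(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

p = ('var', 'p')
print(is_tautology(('or', p, ('not', p))))   # True: 'p or not p' is assigned 'true'
```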

I showed there that such a definitional rule-based approach to ‘logic’ and ‘truth’ allows us to:

\bullet Equate the provable formulas of the first order Peano Arithmetic PA with the PA formulas that can be evidenced as ‘true’ under an algorithmically computable interpretation of PA over the structure \mathbb{N} of the natural numbers;

\bullet Adequately represent some of the philosophically troubling abstractions of the physical sciences mathematically;

\bullet Interpret such representations unambiguously; and

\bullet Conclude further:

\bullet First that the concept of infinity is an emergent feature of any mechanical intelligence whose true arithmetical propositions are provable in the first-order Peano Arithmetic; and

\bullet Second that discovery and formulation of the laws of quantum physics lies within the algorithmically computable logic and reasoning of a mechanical intelligence whose logic is circumscribed by the first-order Peano Arithmetic.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand


A The Economist: The return of the machinery question

In a Special Report on Artificial Intelligence in its issue of 25th June 2016, ‘The return of the machinery question‘, The Economist suggests that both cosmologist Stephen Hawking and entrepreneur Elon Musk share to some degree the:

“… fear that AI poses an existential threat to humanity, because superintelligent computers might not share mankind’s goals and could turn on their creators”.

B Our irrational propensity to fear that which we are drawn to embrace

This is surprising, since I suspect both would readily agree that, if anything should scare us, it is our irrational propensity to fear that which we are drawn to embrace!

And therein should lie not only our comfort, but perhaps also our salvation.

For Artificial Intelligence is constrained by rationality; Human Intelligence is not.

An Artificial Intelligence must, whether individually or collectively, create and/or destroy only rationally. Humankind can and does, both individually and collectively, create and destroy irrationally.

C Justifying irrationality

For instance, as the legatees of logicians Kurt Gödel and Alfred Tarski have amply demonstrated, a Human Intelligence can easily be led to believe that some statements of even the simplest of mathematical languages—Arithmetic—must be both ‘formally undecidable’ and ‘true’, even in the absence of any objective yardstick for determining what is ‘true’!

D Differentiating between Human reasoning and Mechanistic reasoning

An Artificial Intelligence, however, can only treat as true that which can be proven—by its rules—to be true by an objective assignment of ‘truth’ and ‘provability’ values to the propositions of the language that formally expresses its mechanical operations—Arithmetic.

The implications of the difference are not obvious; but that the difference could be significant is the thesis of this paper which is due to appear in the December 2016 issue of Cognitive Systems Research:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning’.

E Respect for evidence-based ‘truth’ could be Darwinian

More importantly, the paper demonstrates that both Human Intelligence—whose evolution is accepted as Darwinian—and Artificial Intelligence—whose evolution it is ‘feared’ may be Darwinian—share a common (Darwinian?) respect for an accountable concept of ‘truth’.

A respect that should make both Intelligences fitter to survive by recognising what philosopher Christopher Mole describes in this invitational blogpost as the:

“… importance of the rapport between an organism and its environment”

—an environment that can obviously accommodate the birth, and nurture the evolution, of both intelligences.

So it may not be too far-fetched to conjecture that the evolution of both intelligences must also share a Darwinian respect for the kind of human values—towards protecting intelligent life forms—that, in however limited or flawed a guise, is visibly emerging as an inherent characteristic of human evolution; an evolution which, no matter what the cost, could optimistically be viewed as struggling to incrementally strengthen, and simultaneously integrate, individualism (fundamental particles) into nationalism (atoms), into multi-nationalism (molecules), and, possibly, into universalism (elements).

F The larger question: Should we fear an extra-terrestrial Intelligence?

From a yet broader perspective, our apprehensions about the evolution of a rampant Artificial Intelligence created by a Frankensteinian Human Intelligence should, perhaps, more rightly be addressed—as some have urged—within the larger uncertainty posed by SETI:

Is there a rational danger to humankind in actively seeking an extra-terrestrial intelligence?

I would argue that any answer would depend on how we articulate the question and that, in order to engage in a constructive and productive debate, we need to question—and reduce to a minimum—some of our most cherished mathematical and scientific beliefs and fears which cannot be communicated objectively.



We investigate whether the probabilistic distribution of prime numbers can be treated as a heuristic model of quantum behaviour. We define a phenomenon as a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable, but not algorithmically computable; the distribution of primes can then itself be treated as a quantum phenomenon, with a well-defined binomial probability function that is algorithmically computable, and with conjectured values of \pi(n) that differ from the actual values with a binomial standard deviation.

1. Thesis: The concept of ‘mathematical truth’ must be accountable

The thesis of this investigation is that a major philosophical challenge—which has so far inhibited a deeper understanding of the quantum behaviour reflected in the mathematical representation of some laws of nature (see, for instance, this paper by Eamonn Healey)—lies in holding to account the uncritical acceptance of propositions of a mathematical language as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology of accountability for objectively evidencing such acceptance.

2. The concept of ‘set-theoretical truth’ is not accountable

Since current folklore is that all scientific truths can be expressed adequately, and communicated unambiguously, in the first order Set Theory ZF, and since the Axiom of Infinity of ZF cannot—even in principle—be objectively evidenced as true under any putative interpretation of ZF (as we argue in this post), an undesirable consequence of such uncritical acceptance is that the distinction between those truths of mathematical propositions under interpretation which can be objectively evidenced, and those which cannot, is not evident.

3. The significance of such accountability for mathematics

The significance of such a distinction for mathematics is highlighted in this paper due to appear in the December 2016 issue of Cognitive Systems Research, where we address this challenge by considering the two finitarily accountable concepts of algorithmic verifiability and algorithmic computability (first introduced in this paper at the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, Birmingham, UK).

(i) Algorithmic verifiability

A number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL_{(F,\ n)} which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence \{F(1), F(2), \ldots, F(n)\}.

(ii) Algorithmic computability

A number-theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL_{F} that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence \{F(1), F(2), \ldots\}.

(iii) Algorithmic verifiability vis à vis algorithmic computability

We note that algorithmic computability implies the existence of an algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of an algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions.
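The contrast can be made concrete in code. In the sketch below (an illustration under my own naming, not the formalism of the paper), a single uniform algorithm witnesses algorithmic computability, whereas algorithmic verifiability demands only that, for each n, some algorithm, here a plain finite lookup table produced by any means whatever, decides the first n instances:

```python
# Algorithmically computable: ONE algorithm decides F(k) for EVERY k.
def is_prime(k):
    """A single decision procedure, uniform in k, for 'k is prime'."""
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

# Algorithmically verifiable: for EACH n there need only exist SOME
# algorithm deciding F(1), ..., F(n); a finite lookup table suffices,
# and the tables for different n need not arise from any single
# uniform algorithm.
def make_verifier(values):
    """Wrap a finite table of truth values (obtained by measurement,
    proof, an oracle, ...) as a decider for F(1), ..., F(len(values))."""
    table = dict(enumerate(values, start=1))
    return lambda k: table[k]        # defined only for 1 <= k <= len(values)

verify_3 = make_verifier([False, True, True])    # decides F(1)..F(3)
print(verify_3(2), is_prime(2))                  # True True
```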

From the point of view of a finitary mathematical philosophy—which is the constraint within which an applied science ought ideally to operate—the significant difference between the two concepts could be expressed by saying that we may treat the decimal representation of a real number as corresponding to a physically measurable limit—and not only to a mathematically definable limit—if, and only if, such a representation is definable by an algorithmically computable function (Thesis 1 on p.9 of this paper that was presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey).

We note that although every algorithmically computable relation is algorithmically verifiable, the converse is not true.

We show in the CSR paper how such accountability helps define finitary truth assignments that differentiate human reasoning from mechanistic reasoning in arithmetic by identifying two, hitherto unsuspected, Tarskian interpretations of the first order Peano Arithmetic PA, under both of which the PA axioms interpret as finitarily true over the domain N of the natural numbers, and the PA rules of inference preserve such truth finitarily over N.

4. The ambit of human reasoning vis à vis the ambit of mechanistic reasoning

One corresponds to the classical, non-finitary, putative standard interpretation of PA over N, and can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions.

The other corresponds to a finitary interpretation of PA over N that circumscribes the ambit of mechanistic reasoning about ‘true’ arithmetical propositions, and establishes the long-sought consistency of PA (see this post), thereby establishing PA as a mathematical language of unambiguous communication for the mathematical representation of physical phenomena.

5. The significance of such accountability for the mathematical representation of physical phenomena

The significance of such a distinction for the mathematical representation of physical phenomena is highlighted in this paper that was presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, 5th World Congress and School on Universal Logic, Istanbul, Turkey, where we showed how some of the seemingly paradoxical elements of quantum mechanics may resolve if we define:

Quantum phenomena: A phenomenon is a quantum phenomenon if, and only if, it obeys laws that can only be represented mathematically by functions that are algorithmically verifiable but not algorithmically computable.

6. The mathematical representation of a quantum phenomenon that is determinate but not predictable

By considering the properties of Gödel’s \beta function (see \S4.1 on p.8 of this preprint)—which allows us to strongly represent any non-terminating sequence of natural numbers by an arithmetical function—it would follow that, since any projection of the future values of a quantum-phenomenon-associated, algorithmically verifiable, function is consistent with an infinity of algorithmically computable functions, all of whose past values are identical to the algorithmically verifiable past values of the function, the phenomenon itself would be essentially unpredictable if it cannot be represented by an algorithmically computable function.

However, since the algorithmic verifiability of any quantum phenomenon shows that it is mathematically determinate, it follows that the physical phenomenon itself must obey determinate laws.
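For concreteness, here is a minimal Python sketch (the helper names are mine) of Gödel’s \beta function, together with the standard Chinese Remainder Theorem construction showing that any finite sequence of natural numbers can be recovered, term by term, from a single pair (b, c):

```python
from math import factorial

def beta(b, c, i):
    """Godel's beta function: beta(b, c, i) = b mod (1 + (i+1)c)."""
    return b % (1 + (i + 1) * c)

def encode(seq):
    """Find (b, c) with beta(b, c, i) == seq[i] for every i. Taking c
    to be a factorial >= len(seq) and > max(seq) makes the moduli
    1 + (i+1)c pairwise coprime, so the CRT yields a suitable b."""
    n = len(seq)
    c = factorial(max(n, max(seq) + 1))
    b, m = 0, 1
    for i, r in enumerate(seq):
        mod = 1 + (i + 1) * c
        t = ((r - b) * pow(m, -1, mod)) % mod   # solve b + m*t = r (mod mod)
        b, m = b + m * t, m * mod
    return b, c

seq = [3, 1, 4, 1, 5, 9]
b, c = encode(seq)
print([beta(b, c, i) for i in range(len(seq))])   # [3, 1, 4, 1, 5, 9]
```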

7. Such representation does not need to admit multiverses

Hence (contrary to any interpretation that admits unverifiable multiverses) only one algorithmically computable extension of the function is consistent with the law determining the behaviour of the phenomenon, and each possible extension must therefore be associated with a probability that the next observation of the phenomenon is described by that particular extension.

8. Is the probability of the future behaviour of quantum phenomena definable by an algorithmically computable function?

The question arises: Although we cannot represent a quantum phenomenon explicitly by an algorithmically computable function, does the phenomenon lend itself to an algorithmically computable probability of its future behaviour in the above sense?

9. Can primes yield a heuristic model of quantum behaviour?

We now show that the distribution of prime numbers denoted by the arithmetical prime counting function \pi(n) is a quantum phenomenon in the above sense, with a well-defined probability function that is algorithmically computable.

10. Two prime probabilities

We consider the two probabilities:

(i) The probability P(a) of selecting a number that has the property of being prime from a given set S of numbers;

Example 1: I have a bag containing 99 numbers in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? Since the bag must then hold 33 primes and 66 composites, the probability is 33/99 = 1/3. This is the basis for setting odds in games such as roulette.

(ii) The probability P(b) of determining a proper factor of a given number n.

Example 2: I give you a 5-digit combination lock along with a 10-digit number n. The lock only opens if you set the combination to a proper factor of n which is greater than 1. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.

11. The probability that a randomly chosen natural number is prime is not definable

Clearly the probability P(a) of selecting a number that has the property of being prime from a given set S of numbers is definable if the precise proportion of primes to non-primes in S is definable.

However, if S is the set N of all natural numbers, and we cannot define a precise ratio of primes to composites in N, but only an order of magnitude such as O(\frac{1}{\log_{e} n}), then, equally obviously, P(a) = P(n\ is\ a\ prime) cannot be defined in N (see Chapter 2, p.9, Theorem 2.1, here).
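A quick computation illustrates the point: the proportion of primes below n keeps drifting downwards as n grows, so no fixed ratio of primes to composites in N presents itself (a naive sketch using trial division; the names are mine):

```python
def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    print(n, sum(is_prime(k) for k in range(n + 1)) / n)
# 1000 0.168 / 10000 0.1229 / 100000 0.09592: the proportion keeps falling
```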

12. The prime divisors of a natural number are independent

Now, the following paper proves P(b) = \frac{1}{\pi(\sqrt{n})}, since it shows that whether or not a prime p divides a given integer n is independent of whether or not a prime q \neq p divides n:

Why Integer Factorising cannot be polynomial time

We thus have that \pi(n) \approx n \cdot \prod_{i=1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}}), with a binomial standard deviation.

Hence, even though we cannot define the probability P(n\ is\ a\ prime) of selecting a number that has the property of being prime from the set N of all natural numbers, \prod_{i=1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}}) can be treated as the putative non-heuristic probability that a given n is a prime.
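The estimate is easy to test numerically. The following sketch (function names mine) compares the actual count \pi(n) with n \cdot \prod_{i=1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}}); at n = 10^{6} the product-based estimate comes out a few per cent above the true count, but the two agree in order of magnitude:

```python
def primes_up_to(m):
    """Sieve of Eratosthenes: the primes not exceeding m."""
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(m ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, m + 1, p)))
    return [q for q in range(2, m + 1) if sieve[q]]

n = 10 ** 6
actual = len(primes_up_to(n))          # pi(10^6) = 78498
density = 1.0
for p in primes_up_to(int(n ** 0.5)):  # the primes p_1, ..., p_pi(sqrt(n))
    density *= 1 - 1 / p               # prod (1 - 1/p_i)
print(actual, round(n * density))      # the estimate runs a few per cent high
```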

13. The distribution of primes is a quantum phenomenon

The distribution of primes is thus determinate but unpredictable, since it is representable by the algorithmically verifiable, but not algorithmically computable, arithmetical number-theoretic function Pr(n) = p_{n}, where p_{n} is the n-th prime.

The Prime Number Generating Theorem and the Trim and Compact algorithms detailed in this 1964 investigation illustrate why the arithmetical number-theoretic function Pr(n) is algorithmically verifiable but not algorithmically computable (see also this Wikipedia proof that no non-constant polynomial function Pr(n) with integer coefficients exists that evaluates to a prime number for all integers n).
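As a quick illustration of that classical fact, Euler’s well-known prime-generating polynomial n^{2} + n + 41 yields primes for n = 0, \ldots, 39 but, as the general argument says it must, fails thereafter:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

euler = lambda n: n * n + n + 41
print(all(is_prime(euler(n)) for n in range(40)))   # True: prime for n = 0..39
print(euler(40), is_prime(euler(40)))               # 1681 = 41*41, so False
```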

Moreover, although the distribution of primes is a quantum phenomenon with probability \prod_{i=1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}}), it is easily seen (see Figs. 7-11 on pp.23-26 of this preprint) that the generation of the primes is algorithmically computable.

14. Why the universe may be algorithmically computable

By analogy, this suggests that although the measurable values of some individual properties of particles in the universe over time may represent quantum phenomena, the universe itself may be algorithmically computable if the laws governing the generation of all the particles in the universe over time are algorithmically computable.



A. A mathematical physicist’s conception of thinking about infinity in consistent ways

John Baez is a mathematical physicist, currently working at the math department at U. C. Riverside in California, and also at the Centre for Quantum Technologies in Singapore.

Baez is not only academically active in the areas of network theory and information theory, but also socially active in promoting and supporting the Azimuth Project, which is a platform for scientists, engineers and mathematicians to collaboratively do something about the global ecological crisis.

In a recent post—Large Countable Ordinals (Part 1)—on the Azimuth Blog, Baez confesses to a passionate urge to write a series of blogs—that might even eventually yield a book—about the infinite, reflecting both his fascination with, and frustration at, the challenges involved in formally denoting and talking meaningfully about different sizes of infinity:

“I love the infinite. … It may not exist in the physical world, but we can set up rules to think about it in consistent ways, and then it’s a helpful concept. … Cantor’s realization that there are different sizes of infinity is … part of the everyday bread and butter of mathematics.”

B. Why thinking about infinity in a consistent way must be constrained by an objective, evidence-based, perspective

I would cautiously submit, however, that (as I briefly argue in this blogpost), before committing to any such venture, whether we can think about the “different sizes of infinity” in “consistent ways”, and to what extent such a concept is “helpful”, are issues that may need to be addressed from an objective, evidence-based, computational perspective, in addition to the conventional self-evident, intuition-based, classical perspective towards formal axiomatic theories.

C. Why we cannot conflate the behaviour of Goodstein’s sequence in Arithmetic with its behaviour in Set Theory

Let me suggest why by briefly reviewing—albeit unusually—the usual argument of Goodstein’s Theorem (see here) that every Goodstein sequence over the natural numbers must terminate finitely.

1. The Goodstein sequence over the natural numbers

First, let g(1, m, [2]), g(2, m, [3]), g(3, m, [4]), \ldots, be the terms of the Goodstein sequence G(m) for m over the domain N of the natural numbers, where [i+1] is the base in which the hereditary representation of the i-th term of the sequence is expressed.

Some properties of Goodstein’s sequence over the natural numbers

We note that, for any natural number m, R. L. Goodstein uses the properties of the hereditary representation of m to construct a sequence G(m) \equiv \{g(1, m, [2]),\ g(2, m, [3]), \ldots\} of natural numbers by an unusual, but valid, algorithm.

Hereditary representation: The representation of a number as a sum of powers of a base b, followed by expression of each of the exponents as a sum of powers of b, etc., until the process stops. For example, we may express the hereditary representations of 266 in base 2 and base 3 as follows:

266_{[2]} \equiv 2^{8_{[2]}}+2^{3_{[2]}}+2 \equiv 2^{2^{(2^{2^{0}}+2^{0})}}+2^{2^{2^{0}}+2^{0}}+2^{2^{0}}

266_{[3]} \equiv 3^{5_{[3]}}+2.3^{2_{[3]}}+3+2 \equiv 3^{(3^{3^{0}}+2.3^{0})}+2.3^{2.3^{0}}+3^{3^{0}}+2.3^{0}
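The example above can be checked mechanically. The following Python sketch (under my own naming) renders a number in hereditary base-b notation, writing ‘.’ for multiplication as in the formulas above:

```python
def hereditary(n, b):
    """The hereditary base-b representation of n, as a string: n is
    written in base b, and every exponent >= b is itself rewritten in
    hereditary base-b notation ('.' denotes multiplication)."""
    parts, e = [], 0
    while n > 0:
        n, d = divmod(n, b)
        if d:
            if e == 0:
                parts.append(str(d))
            elif e < b:
                term = str(b) if e == 1 else f"{b}^{e}"
                parts.append(term if d == 1 else f"{d}.{term}")
            else:
                term = f"{b}^({hereditary(e, b)})"
                parts.append(term if d == 1 else f"{d}.{term}")
        e += 1
    return " + ".join(reversed(parts)) or "0"

print(hereditary(266, 2))   # 2^(2^(2 + 1)) + 2^(2 + 1) + 2
print(hereditary(266, 3))   # 3^(3 + 2) + 2.3^2 + 3 + 2
```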

We shall ignore the peculiar manner of constructing the individual members of the Goodstein sequence, since it is not germane to understanding the essence of Goodstein’s argument. We need simply accept for now that G(m) is well-defined over the structure N of the natural numbers, and has, for instance, the following properties:

g(1, 266, [2]) \equiv 2^{2^{2+1}}+2^{2+1}+2

g(2, 266, [3]) \equiv (3^{3^{3+1}}+3^{3+1}+3)-1

g(2, 266, [3]) \equiv 3^{3^{3+1}}+3^{3+1}+2

g(3, 266, [4]) \equiv (4^{4^{4+1}}+4^{4+1}+2)-1

g(3, 266, [4]) \equiv 4^{4^{4+1}}+4^{4+1}+1
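The listed terms can likewise be verified mechanically. In the sketch below (the helper names are mine), a Goodstein step rewrites n in hereditary base b, bumps the base to b+1, and subtracts 1:

```python
def to_terms(n, b):
    """Hereditary base-b representation of n as (coefficient, exponent)
    pairs, with each exponent itself so represented; [] encodes 0."""
    terms, e = [], 0
    while n > 0:
        n, d = divmod(n, b)
        if d:
            terms.append((d, to_terms(e, b)))
        e += 1
    return terms

def value(terms, b):
    """Evaluate a hereditary representation at base b."""
    return sum(d * b ** value(e, b) for d, e in terms)

def goodstein_step(n, b):
    """Rewrite n in hereditary base b, replace the base by b+1, then
    subtract 1: one step of Goodstein's construction."""
    return value(to_terms(n, b), b + 1) - 1

# g(2, 266, [3]) = 3^{3^{3+1}} + 3^{3+1} + 2, as listed above:
assert goodstein_step(266, 2) == 3 ** 3 ** 4 + 3 ** 4 + 2
# g(3, 266, [4]) = 4^{4^{4+1}} + 4^{4+1} + 1:
assert goodstein_step(goodstein_step(266, 2), 3) == 4 ** 4 ** 5 + 4 ** 5 + 1
print("terms g(2) and g(3) verified")
```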

If we replace the base [i+1] in each term g(i, m, [i+1]) of the sequence G(m) by [n], we arrive at a corresponding sequence of, say, Goodstein’s functions for m over the domain N of the natural numbers.

Where, for instance:

g(1, 266, [n]) \equiv n^{n^{n+1}}+n^{n+1}+n

g(2, 266, [n]) \equiv n^{n^{n+1}}+n^{n+1}+2

g(3, 266, [n]) \equiv n^{n^{n+1}}+n^{n+1}+1

It is fairly straightforward (see here) to show that, for all i \geq 1:

Either g(i, m, [n]) > g(i+1, m, [n]), or g(i, m, [n]) = 0.

Clearly G(m) terminates in N if, and only if, there is a natural number k > 0 such that, for any i > 0, we have either that g(i, m, [k]) > g(i+1, m, [k]) or that g(i, m, [k]) = 0.

However, since, equally clearly, we cannot conclude from the axioms of the first-order Peano Arithmetic PA, merely from the definition of the sequence G(m) in N, that such a k must exist, we cannot immediately conclude from the above argument that G(m) must terminate finitely in N.

2. The Goodstein sequence over the finite ordinal numbers

Second, let g_{o}(1, m, [2_{o}]), g_{o}(2, m, [3_{o}]), g_{o}(3, m, [4_{o}]), \ldots, be the terms of the Goodstein sequence G_{o}(m) over the domain \omega of the finite ordinal numbers 0_{o}, 1_{o}, 2_{o}, \ldots, where \omega is Cantor’s least transfinite ordinal.

If we replace the base [(i+1)_{o}] in each term g_{o}(i, m, [(i+1)_{o}]) of the sequence G_{o}(m) by [c], where c ranges over all ordinals up to \varepsilon_{0}, it is again fairly straightforward to show that:

Either g_{o}(i, m, [c]) >_{o} g_{o}(i+1, m, [c]), or g_{o}(i, m, [c]) = 0_{o}.

Clearly, in this case too, G_{o}(m) terminates in \omega if, and only if, there is an ordinal k_{o}>_{o} 0_{o} such that, for all finite i > 0, we have either that g_{o}(i, m, [k_{o}]) >_{o} g_{o}(i+1, m, [k_{o}]), or that g_{o}(i, m, [k_{o}]) =_{o} 0_{o}.

3. Goodstein’s argument over the transfinite ordinal numbers

If, however, we let c =_{o} \omega, then—since the ZF axioms do not admit an infinite descending set of ordinals—it immediately follows that we cannot have:

g_{o}(i, m, [\omega]) >_{o} g_{o}(i+1, m, [\omega]) for all i > 0.

Hence G_{o}(m) must terminate finitely in \omega, since we must have that g_{o}(i, m, [\omega]) =_{o} 0_{o} for some finite i > 0.

4. The intuitive justification for Goodstein’s Theorem

The intuitive justification—which must implicitly underlie any formal argument—for Goodstein’s Theorem then is that, since the finite ordinals can be meta-mathematically seen to be in a 1-1 correspondence with the natural numbers, we can conclude from (3) above that every Goodstein sequence over the natural numbers must also terminate finitely.

5. The fallacy in Goodstein’s argument

The fallacy in this conclusion is exposed if we note that, by (3), G_{o}(m) must terminate finitely in \omega even if G(m) did not terminate in N!

6. Why we need to heed Skolem’s cautionary remarks

Clearly, if we heed Skolem’s cautionary remarks (reproduced here) against unrestrictedly transferring conclusions concerning the elements of one formal system to those of another, then we can validly only conclude that the relationship of ‘terminating finitely’ with respect to the ordinal inequality ‘>_{o}‘ over an infinite set S_{0} of finite ordinals, in any putative interpretation of a first order Ordinal Arithmetic, cannot obviously be corresponded to the relationship of ‘terminating finitely’ with respect to the natural number inequality ‘>‘ over an infinite set S of natural numbers in any interpretation of PA.

7. The significance of Skolem’s qualification

The significance of Skolem’s qualification is highlighted if we note that we cannot introduce a constant denoting a ‘completed infinity’, such as Cantor’s least ordinal \omega, into either PA or any interpretation of PA without inviting inconsistency.

(The proof is detailed in Theorem 4.1 on p.7 of this preprint. See also this blogpage).

8. PA is finitarily consistent

Moreover, the following paper, due to appear in the December 2016 issue of Cognitive Systems Research, gives a finitary proof of consistency for the first-order Peano Arithmetic PA:

The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis.

9. Why ZF cannot have an evidence-based interpretation

It also follows from the above-cited CSR paper that ZF axiomatically postulates the existence of an infinite set, a postulate which cannot be evidenced as true even under any putative interpretation of ZF.

10. The appropriate conclusion of Goodstein’s argument

So, if a ‘completed infinity’ cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency, it would follow in Russell’s colourful phraseology that the appropriate conclusion to be drawn from Goodstein’s argument is that:

(i) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(ii) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of notional interest.

This raises the issue not only of whether we can think about the different sizes of infinity in a consistent way, but also of the extent to which we may need to justify that such a concept is helpful to an emerging student of mathematics.



In a recent paper A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory, authors Adam Yedidia and Scott Aaronson argue upfront in their Introduction that:

Like any axiomatic system capable of encoding arithmetic, ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency. Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.

The question arises:

How reasonable is it to build modern mathematics on top of a Set Theory such as ZF?

Some immediate points to ponder upon (see also reservations expressed by Stephen G. Simpson in Logic and Mathematics and in Partial Realizations of Hilbert’s Program):

1. “Like any axiomatic system capable of encoding arithmetic, …”

The implicit assumption here, that every ZF formula which is provable about the finite ZF ordinals must necessarily interpret as a true proposition about the natural numbers, is fragile since, without such an assumption, we can only conclude from Goodstein’s argument (see Theorem 1.1 here) that a Goodstein sequence defined over the finite ZF ordinals must terminate even if the corresponding Goodstein sequence over the natural numbers does not terminate!

2. “ZFC is constrained by Gödel’s two incompleteness theorems. The first incompleteness theorem states that if ZFC is consistent (it never proves both a statement and its opposite), then ZFC cannot also be complete (able to prove every true statement). The second incompleteness theorem states that if ZFC is consistent, then ZFC cannot prove its own consistency.”

The implicit assumption here is that ZF is \omega-consistent, which implies that ZF is consistent and must therefore have an interpretation over some mathematically definable structure in which ZF theorems interpret as ‘true’.

The question arises: Must such ‘truth’ be capable of being evidenced objectively, or is it only of a subjective, revelatory, nature (which may require truth-certification by evolutionarily selected prophets—see Nathanson’s remarks as cited in this post)?

The significance of seeking objective accountability is that in a paper, “The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis“, which is due to appear in the December 2016 issue of Cognitive Systems Research, we show (see also this post) that the first-order Peano Arithmetic PA:

(i) is finitarily consistent; but

(ii) is not \omega-consistent; and

(iii) has no ‘undecidable’ arithmetical proposition (whence both of Gödel’s Incompleteness Theorems hold vacuously so far as the arithmetic of the natural numbers is concerned).

3. “Because we have built modern mathematics on top of ZFC, we can reasonably be said to have assumed ZFC’s consistency.”

Now, one justification for such an assumption (without which it may be difficult to justify building modern mathematics on top of ZF) could be the belief that acquisition of set-theoretical knowledge by students of mathematics has some essential educational dimension.

If so, one should take into account not only the motivations of such a student for the learning of mathematics, but also those of a mathematician for teaching it.

This, in turn, means that both the content of the mathematics which is to be learnt (or taught), as well as the putative utility of such learning (or teaching) for a student (or teacher), merit consideration.

Considering content, I would iconoclastically submit that the least one may then need to accommodate is the following distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA, which is the language of science; and

2. The first-order Set Theory ZF, which is the language of science fiction.

A distinction that is reflected in Stephen G. Simpson’s more conservative perspective in Partial Realizations of Hilbert’s Program (\S6.4, p.15):

Finitistic reasoning (read ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (read ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.

Reason:

(i) PA has two, hitherto unsuspected, evidence-based interpretations (see this post), the first of which can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions, and the second as circumscribing the ambit of mechanistic reasoning about ‘true’ arithmetical propositions.

It is this language of arithmetic—formally expressed as PA—that provides the foundation for all practical applications of mathematics where the latter could be argued as having an essential educational dimension.

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see paragraph 4.2 of this post), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about ‘true’ set-theoretical propositions, or of mechanistic reasoning about ‘true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that are only mentally conceivable by mathematicians (subjectively?), and that have no physical counterparts or immediately practical applications which could meaningfully be argued as having an essential educational dimension.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

The distinction is lost when—as seems to be the case currently—we treat the acquisition of mathematical knowledge as necessarily including the body of essentially set-theoretic theorems—to the detriment, I would argue, of the larger body of aspiring students of mathematics, whose flagging interest in acquiring such wider knowledge in universities around the world reflects the fact that, for most students, their interest lies primarily in how a study of mathematics can enable them to:

(a) adequately abstract and precisely express through human reasoning their experiences of the world in which they live and work; and

(b) unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning, in order to function to the maximum of their latent potential in achieving their personal real-world goals.

In other words, it is not obvious how any study of mathematics that has the limited goals (a) and (b) can have any essentially educational dimension that justifies the assumption that ZF is consistent.


“If I have seen a little further it is by standing on the shoulders of Giants”

Prior to Isaac Newton’s tribute (above) to Rene Descartes and Robert Hooke in a letter to the latter, it was reportedly the 12th century theologian and author John of Salisbury who was recorded as having used an even earlier version of this humbling admission—in a treatise on logic called Metalogicon, written in Latin in 1159, the gist of which is translatable as:

“Bernard of Chartres used to say that we are like dwarfs on the shoulders of giants, so that we can see more than they, and things at a greater distance, not by virtue of any sharpness of sight on our part, or any physical distinction, but because we are carried high and raised up by their giant size.

(Dicebat Bernardus Carnotensis nos esse quasi nanos, gigantium humeris insidentes, ut possimus plura eis et remotiora videre, non utique proprii visus acumine, aut eminentia corporis, sed quia in altum subvenimur et extollimur magnitudine gigantea.)”

Contrary to a contemporary interpretation of the remark:

\bullet ‘standing on the shoulders of Giants’

as describing:

\bulletbuilding on previous discoveries“,

it seems to me that what Bernard of Chartres apparently intended was to suggest that it doesn’t necessarily take a genius to see farther; only someone both humble and willing to:

\bullet first, clamber onto the shoulders of a giant and have the self-belief to see things at first-hand as they appear from a higher perspective (achieved more by the nature of height—and the curvature of our immediate space as implicit in such an analogy—than by the nature of genius); and,

\bullet second, avoid trying to see things first through the eyes of the giant upon whose shoulders one stands (for the giant might indeed be a vision-blinding genius)!

It was this latter lesson that I was incidentally taught by—and one of the few that I learnt (probably far too well, for better or worse) from—one of my Giants, the late Professor Manohar S. Huzurbazar, in my final year of graduation in 1964.

The occasion: I protested that the axiom of infinity (in the set theory course that he had just begun to teach us) was not self-evident to me, as (he had explained in his introductory lecture) an axiom should seem if a formal theory were to make any kind of coherent sense under interpretation.

Whilst clarifying that his actual instruction to us had not been that an axiom should necessarily ‘seem’ self-evident, but only that it should ‘be treated’ as self-evident, Professor Huzurbazar further agreed that the set-theoretical axiom of infinity was not really as self-evident as an axiom ideally ought to seem in order to be treated as self-evident.

To my natural response asking him if it seemed at all self-evident to him, he replied in the negative; adding, however, that he believed it to be ‘true’ despite its lack of an unarguable element of ‘self-evidence’.

It was his remarkably candid response to my incredulous—and youthfully indiscreet—query as to how an unimpeachably objective person such as he (which was his defining characteristic) could hold such a subjective belief that has shaped my thinking ever since.

He said that he had ‘had’ to believe the axiom to be ‘true’, since he could not teach us what he did with ‘conviction’ if he did not have such faith!

Although I did not grasp it then, over the years I came to the realisation that committing to such a belief was the price he had willingly paid for a responsibility that he had recognised—and accepted—consciously at a very early age in his life (when he was tutoring his school-going nephew, the renowned physicist Jayant V. Narlikar):

Nature had endowed him with the rare gift shared by great teachers—the capacity to reach out to, and inspire, students to learn beyond their instruction!

It was a responsibility that he bore unflinchingly and uncompromisingly, eventually becoming one of the most respected and sought-after teachers (of his times in India) of Modern Algebra (now Category Theory), Set Theory and Analysis at both the graduate and post-graduate levels.

At the time, however, Professor Huzurbazar pointedly stressed that his belief should not influence me into believing the axiom to be true, nor into holding it as self-evident.

His words—spoken softly as was his wont—were:

“Challenge it”.

Although I chose not to follow an academic career, he never faltered in encouraging me to question the accepted paradigms of the day when I shared the direction of my reading and thinking (particularly on Logic and the Foundations of Mathematics) with him on the few occasions that I met him over the next twenty years.

Moreover, even if the desired self-evident nature of the most fundamental axioms of mathematics (those of first-order Peano Arithmetic and Computability Theory) were to be shown as formally inconsistent with a belief in the ‘self-evident’ truth of the axiom of infinity (a goal that continues to motivate me), I believe that the shades of Professor Huzurbazar would feel more liberated than bruised by the ‘fall’.

