
(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

Ferguson and Priest’s thesis

In a brief but provocative review of what they term ‘the enduring evolution of logic’ over the ages, the authors of Oxford University Press’ recently released ‘A Dictionary of Logic‘, philosophers Thomas Macaulay Ferguson and Graham Priest, take to task what they view as the Kant-influenced manner in which logic is taught as a first course in most places in the world:

“… as usually ahistorical and somewhat dogmatic. This is what logic is; just learn the rules. It is as if Frege had brought down the tablets from Mount Sinai: the result is God-given, fixed, and unquestionable.”

Ferguson and Priest conclude their review by remarking that:

“Logic provides a theory, or set of theories, about what follows from what, and why. And like any theoretical inquiry, it has evolved, and will continue to do so. It will surely produce theories of greater depth, scope, subtlety, refinement—and maybe even truth.”

However, it is not obvious whether that is prescient optimism, or a tongue-in-cheek exit line!

A nineteenth century parody of the struggle to define ‘truth’ objectively

For, if anything, the developments in logic since around 1931 have—seemingly in gross violation of the hallowed principle of Ockham’s razor, and of its crude but highly effective modern avatar KISS—indeed produced a plethora of theories of great depth, scope, subtlety, and refinement.

These, however, seem to have more in common with the cynical twentieth-century emphasis on a subjective, unverifiable ‘truth’ than with the concept of an objective, evidence-based ‘truth’ that centuries of philosophers and mathematicians strenuously struggled to differentiate and express.

A struggle reflected so eloquently in this nineteenth century quote:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

… Lewis Carroll (Charles L. Dodgson), ‘Through the Looking-Glass’, chapter 6, p. 205 (1934 ed.). First published in 1872.

Making sense of mathematical propositions about infinite processes

It was, indeed, an epic struggle which culminated in the nineteenth-century standards of rigour successfully imposed—in no small measure by the works of Augustin-Louis Cauchy and Karl Weierstrass—on verifiable interpretations of mathematical propositions about infinite processes involving real numbers.

A struggle, moreover, which should have culminated equally successfully in the similar twentieth-century standards—on verifiable interpretations of mathematical propositions containing references to infinite computations involving integers—that Alan Turing sought to impose upon philosophical and mathematical discourse in 1936.

The Liar paradox

For it follows from Turing’s 1936 reasoning that where quantification is not, or cannot be, explicitly defined in formal logical terms—e.g., the classical expression of the Liar paradox as ‘This sentence is a lie’—a paradox cannot per se be considered as posing serious linguistic or philosophical concerns (see, for instance, the series of four posts beginning here).

Of course—as reflected implicitly in Kurt Gödel’s seminal 1931 paper on undecidable arithmetical propositions—it would be a matter of serious concern if the word ‘This’ in the English language sentence, ‘This sentence is a lie’, could be validly viewed as implicitly implying that:

(i) there is a constructive infinite enumeration of English language sentences;

(ii) to each of which a truth-value can be constructively assigned by the rules of a two-valued logic; and,

(iii) in which ‘This’ refers uniquely to a particular sentence in the enumeration.

Gödel’s influence on Turing’s reasoning

However, Turing’s constructive perspective had the misfortune of being subverted by a knee-jerk, anti-establishment culture that was—and apparently remains to this day—overwhelmed by Gödel’s powerful Platonic—and essentially unverifiable—mathematical and philosophical 1931 interpretation of his own construction of an arithmetical proposition that is formally unprovable, but undeniably true under any definition of ‘truth’ in any interpretation of arithmetic over the natural numbers.

Otherwise, I believe that Turing could easily have provided the necessary constructive interpretations of arithmetical truth—sought by David Hilbert for establishing the consistency of number theory finitarily—a need that is addressed by the following paper due to appear in the December 2016 issue of ‘Cognitive Systems Research‘:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Gödelian thesis’.

What is logic: using Ockham’s razor

Moreover, the paper endorses the implicit orthodoxy of an Ockham’s razor influenced perspective—which Ferguson and Priest seemingly find wanting—that logic is simply a deterministic set of rules that must constructively assign the truth values of ‘truth/falsity’ to the sentences of a language.

It is a view that I expressed earlier as the key to a possible resolution of the EPR paradox in the following paper that I presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, Istanbul, Turkey:

Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?

where I introduced the definition:

A finite set \lambda of rules is a Logic of a formal mathematical language \mathcal{L} if, and only if, \lambda constructively assigns unique truth-values:

(a) Of provability/unprovability to the formulas of \mathcal{L}; and

(b) Of truth/falsity to the sentences of the Theory T(\mathcal{U}) which is defined semantically by the \lambda-interpretation of \mathcal{L} over a structure \mathcal{U}.
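
By way of illustration only—this toy example is mine, not the paper’s—the two assignments (a) and (b) can be mimicked for a two-atom propositional language, where ‘provability’ is decided constructively by exhaustive tautology checking and ‘truth’ by evaluation over a structure (here, a single assignment):

```python
from itertools import product

# Toy sketch (my illustration, not the paper's machinery): a finite rule set
# that constructively assigns (a) provability values to formulas and
# (b) truth values to sentences under an interpretation over a structure.

ATOMS = ["p", "q"]

def evaluate(formula, assignment):
    # Truth of a formula under one assignment; formulas are written with
    # Python's 'not', 'and', 'or', so evaluation itself is rule-based.
    return eval(formula, dict(assignment))

def provable(formula):
    # (a) Provability/unprovability: for a propositional language this is
    # constructively decidable by exhausting assignments (a tautology test).
    return all(evaluate(formula, dict(zip(ATOMS, values)))
               for values in product([True, False], repeat=len(ATOMS)))

def true_in(formula, structure):
    # (b) Truth/falsity under the interpretation over a fixed structure.
    return evaluate(formula, structure)

print(provable("p or not p"))                       # True: provable
print(provable("p and q"))                          # False: unprovable
print(true_in("p and q", {"p": True, "q": True}))   # True in this structure
```

The toy’s only point is that both assignments are delivered by deterministic rules; for the arithmetical case the tautology test would be replaced by a proof calculus for \mathcal{L}, and the structure by \mathbb{N}.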

I showed there that such a definitional rule-based approach to ‘logic’ and ‘truth’ allows us to:

\bullet Equate the provable formulas of the first order Peano Arithmetic PA with the PA formulas that can be evidenced as `true’ under an algorithmically computable interpretation of PA over the structure \mathbb{N} of the natural numbers;

\bullet Adequately represent some of the philosophically troubling abstractions of the physical sciences mathematically;

\bullet Interpret such representations unambiguously; and

\bullet Conclude further:

\bullet First that the concept of infinity is an emergent feature of any mechanical intelligence whose true arithmetical propositions are provable in the first-order Peano Arithmetic; and

\bullet Second that discovery and formulation of the laws of quantum physics lies within the algorithmically computable logic and reasoning of a mechanical intelligence whose logic is circumscribed by the first-order Peano Arithmetic.

Author’s working archives & abstracts of investigations

Bhupinder Singh Anand



A. The Economist: The return of the machinery question

In a Special Report on Artificial Intelligence in its issue of 25th June 2016, ‘The return of the machinery question‘, The Economist suggests that both cosmologist Stephen Hawking and entrepreneur Elon Musk share to some degree the:

“… fear that AI poses an existential threat to humanity, because superintelligent computers might not share mankind’s goals and could turn on their creators”.

B. Our irrational propensity to fear that which we are drawn to embrace

This is surprising, since I suspect both would readily agree that, if anything should scare us, it is our irrational propensity to fear that which we are drawn to embrace!

And therein should lie not only our comfort, but perhaps also our salvation.

For Artificial Intelligence is constrained by rationality; Human Intelligence is not.

An Artificial Intelligence must, whether individually or collectively, create and/or destroy only rationally. Humankind can and does, both individually and collectively, create and destroy irrationally.

C. Justifying irrationality

For instance, as the legatees of logicians Kurt Gödel and Alfred Tarski have amply demonstrated, a Human Intelligence can easily be led to believe that some statements of even the simplest of mathematical languages—Arithmetic—must be both ‘formally undecidable’ and ‘true’, even in the absence of any objective yardstick for determining what is ‘true’!

D. Differentiating between Human reasoning and Mechanistic reasoning

An Artificial Intelligence, however, can only treat as true that which can be proven—by its rules—to be true by an objective assignment of ‘truth’ and ‘provability’ values to the propositions of the language that formally expresses its mechanical operations—Arithmetic.

The implications of the difference are not obvious; but that the difference could be significant is the thesis of this paper, which is due to appear in the December 2016 issue of Cognitive Systems Research:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning’.

E. Respect for evidence-based ‘truth’ could be Darwinian

More importantly, the paper demonstrates that both Human Intelligence—whose evolution is accepted as Darwinian—and Artificial Intelligence—whose evolution it is ‘feared’ may be Darwinian—share a common (Darwinian?) respect for an accountable concept of ‘truth’.

A respect that should make both Intelligences fitter to survive by recognising what philosopher Christopher Mole describes in this invitational blogpost as the:

“… importance of the rapport between an organism and its environment”

—an environment that can obviously accommodate the birth, and nurture the evolution, of both intelligences.

So it may not be too far-fetched to conjecture that the evolution of both intelligences must also share a Darwinian respect for the kind of human values—towards protecting intelligent life forms—that, in however limited or flawed a guise, is visibly emerging as an inherent characteristic of human evolution. Such an evolution could, whatever the cost, optimistically be viewed as struggling to incrementally strengthen, and simultaneously integrate, individualism (fundamental particles) into nationalism (atoms), into multi-nationalism (molecules), and, possibly, into universalism (elements).

F. The larger question: Should we fear an extra-terrestrial Intelligence?

From a broader perspective yet, our apprehensions about the evolution of a rampant Artificial Intelligence created by a Frankensteinian Human Intelligence should, perhaps, more rightly be addressed—as some have urged—within the larger uncertainty posed by SETI:

Is there a rational danger to humankind in actively seeking an extra-terrestrial intelligence?

I would argue that any answer would depend on how we articulate the question and that, in order to engage in a constructive and productive debate, we need to question—and reduce to a minimum—some of our most cherished mathematical and scientific beliefs and fears which cannot be communicated objectively.



The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought

Christopher Mole is an associate professor of philosophy at the University of British Columbia, Vancouver. He is the author of Attention is Cognitive Unison: An Essay in Philosophical Psychology (OUP, 2011), and The Unexplained Intellect: Complexity, Time, and the Metaphysics of Embodied Thought (Routledge, 2016).

In his preface to The Unexplained Intellect, Mole emphasises that his book is an attempt to provide arguments for (amongst others) the three theses that:

(i) “Intelligence might become explicable if we treat intelligent thought as if it were some sort of computation”;

(ii) “The importance of the rapport between an organism and its environment must … be understood from a broadly computational perspective”;

(iii) “… our difficulties in accounting for our psychological orientation with respect to time are indications of the need to shift our philosophical focus away from mental states—which are altogether too static—and towards a theory of the mind in which it is dynamic mental entities that are taken to be metaphysically foundational”.

The Brains blog

Mole explains at length his main claims in The Unexplained Intellect—and the cause that those claims serve—in a lucid and penetrating six-part series of invited posts in The Brains blog (a leading forum for work in the philosophy and science of mind that was founded in 2005 by Gualtiero Piccinini, and has been administered by John Schwenkler since late 2011).

In these posts, Mole seeks to make the following points.

I: The Unexplained Intellect: The mind is not a hoard of sentences

We do not currently have a satisfactory account of how minds could be had by material creatures. If such an account is to be given then every mental phenomenon will need to find a place within it. Many will be accounted for by relating them to other things that are mental, but there must come a point at which we break out of the mental domain, and account for some things that are mental by reference to some that are not. It is unclear where this break out point will be. In that sense it is unclear which mental entities are, metaphysically speaking, the most fundamental.

At some point in the twentieth century, philosophers fell into the habit of writing as if the most fundamental things in the mental domain are mental states (where these are thought of as states having objective features of the world as their truth-evaluable contents). This led to a picture in which the mind was regarded as something like a hoard of sentences. The philosophers and cognitive scientists who have operated with this picture have taken their job to be telling us what sort of content these mental sentences have, how that content is structured, how the sentences come to have it, how they get put into and taken out of storage, how they interact with one another, how they influence behaviour, and so on.

This emphasis on states has caused us to underestimate the importance of non-static mental entities, such as inferences, actions, and encounters with the world. If we take these dynamic entities to be among the most fundamental of the items in the mental domain, then — I argue — we can avoid a number of philosophical problems. Most importantly, we can avoid a picture in which intelligent thought would be beyond the capacities of any physically implementable system.

II: The Unexplained Intellect: Computation and the explanation of intelligence

A lot of philosophers think that consciousness is what makes the mind/body problem interesting, perhaps because they think that consciousness is the only part of that problem that remains wholly philosophical. Other aspects of the mind are taken to be explicable by scientific means, even if explanatorily adequate theories of them remain to be specified.

… I’ll remind the reader of computability theory’s power, with a view to indicating how it is that the discoveries of theoretical computer scientists place constraints on our understanding of what intelligence is, and of how it is possible.

III: The Unexplained Intellect: The importance of computability

If we found that we had been conceiving of intelligence in such a way that intelligence could not be modelled by a Turing Machine, our response should not be to conclude that some alternative must be found to a ‘Classically Computational Theory of the Mind’. To think only that would be to underestimate the scope of the theory of computability. We should instead conclude that, on the conception in question, intelligence would [be] absolutely inexplicable. This need to avoid making intelligence inexplicable places constraints on our conception of what intelligence is.

IV: The Unexplained Intellect: Consequences of imperfection

The lesson to be drawn is that, if we think of intelligence as involving the maintenance of satisfiable beliefs, and if we think of our beliefs as corresponding to a set of representational states, then our intelligence would depend on a run of good luck the chances of which are unknown.

My suggestion is that we can reach a more explanatorily satisfactory conception of intelligence if we adopt a dynamic picture of the mind’s metaphysical foundations.

V: The Unexplained Intellect: The importance of rapport

I suggest that something roughly similar is true of us. We are not guaranteed to have satisfiable beliefs, and sometimes we are rather bad at avoiding unsatisfiability, but such intelligence as we have is to be explained by reference to the rapport between our minds and the world.

Rather than starting from a set of belief states, and then supposing that there is some internal process operating on these states that enables us to update our beliefs rationally, we should start out by accounting for the dynamic processes through which the world is epistemically encountered. Much as the three-colourable map generator reliably produces three-colourable maps because it is essential to his map-making procedure that borders appear only where they will allow for three colorability, so it is essential to what it is for a state to be a belief that beliefs will appear only if there is some rapport between the believer and the world. And this rapport — rather than any internal processing considered in isolation from it — can explain the tendency for our beliefs to respect the demands of intelligence.

VI: The Unexplained Intellect: The mind’s dynamic foundations

… memory is essentially a form of epistemic retentiveness: One’s present knowledge counts as an instance of memory when and only when it was attained on the basis of an epistemic encounter that lies in one’s past. One can epistemically encounter a proposition as the conclusion of an argument, and so can encounter it before the occurrence of any event to which it pertains, but one cannot encounter an event in that way. In the resulting explanation of memory’s temporal asymmetry, it is the dynamic events of epistemic encountering to which we must make reference. These encounters, and not the knowledge states to which they lead, do the lion’s share of the explanatory work.

A. Simplifying Mole’s perspective

It may help simplify Mole’s thought-provoking perspective if we make an arbitrary distinction between:

(i) The mind of an applied scientist, whose primary concern is our sensory observations of a ‘common’ external world;

(ii) The mind of a philosopher, whose primary concern is abstracting a coherent perspective of the external world from our sensory observations; and

(iii) The mind of a mathematician, whose primary concern is adequately expressing such abstractions in a formal language of unambiguous communication.

My understanding of Mole’s thesis, then, is that:

(a) although a mathematician’s mind may be capable of defining the ‘truth’ value of some logical and mathematical propositions without reference to the external world,

(b) the ‘truth’ value of any logical or mathematical proposition that purports to represent any aspect of the real world must be capable of being evidenced objectively to the mind of an applied scientist; and that,

(c) of the latter ‘truths’, what should interest the mind of a philosopher is whether there are some that are ‘knowable’ completely independently of the passage of time, and some that are ‘knowable’ only partially, or incrementally, with the passage of time.

B. Support for Mole’s thesis

It also seems to me that Mole’s thesis implicitly subsumes, or at the very least echoes, the belief expressed by Chetan R. Murthy (‘An Evaluation Semantics for Classical Proofs‘, Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, 1991; also Cornell TR 91-1213):

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …”

If so, the thesis seems significantly supported by the following paper that is due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis’

The CSR paper implicitly suggests that there are, indeed, (only?) two ways of assigning ‘true’ or ‘false’ values to any mathematical description of real-world events.

C. Algorithmic computability

First, a number-theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL_{F} that can provide objective evidence (cf. Murthy 1991) for deciding the truth/falsity of each proposition in the denumerable sequence \{F(1), F(2), \ldots\}.

(We note that the concept of `algorithmic computability’ is essentially an expression of the more rigorously defined concept of `realizability’ on p.503 of Stephen Cole Kleene’s ‘Introduction to Metamathematics‘, North Holland Publishing Company, Amsterdam.)

D. Algorithmic verifiability

Second, a number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL_{(F,\ n)} which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence \{F(1), F(2), \ldots, F(n)\}.

We note that algorithmic computability implies the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions.

The following theorem (Theorem 2.1, p.37 of the CSR paper) shows that although every algorithmically computable relation is algorithmically verifiable, the converse is not true:

Theorem: There are number theoretic functions that are algorithmically verifiable but not algorithmically computable.
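
To make the contrast concrete, here is a minimal Python sketch (the helper names and data are illustrative, not from the CSR paper): verifiability asks only that, for each n, some algorithm decide the first n propositions—a finite lookup table suffices—whereas computability asks for one algorithm that covers the whole denumerable sequence.

```python
def make_AL_F_n(prefix_values):
    # Algorithmic verifiability: for EACH n there is SOME algorithm AL_{(F, n)}
    # deciding F(1), ..., F(n).  Any finite prefix of truth values, however
    # obtained (say, by measurement), yields such a decider by table lookup.
    table = dict(enumerate(prefix_values, start=1))
    return lambda k: table[k]               # defined only for 1 <= k <= n

def AL_F(k):
    # Algorithmic computability: ONE algorithm decides F(k) for EVERY k.
    # Here F(k) says 'the k-th binary digit of 1/3 is 0'; since 1/3 is
    # 0.010101... in binary, this is decidable uniformly in k.
    return k % 2 == 1

oracle_prefix = [True, False, False, True]  # hypothetical measured values
AL_F_4 = make_AL_F_n(oracle_prefix)
assert [AL_F_4(k) for k in range(1, 5)] == oracle_prefix

assert [AL_F(k) for k in range(1, 7)] == [True, False, True, False, True, False]
```

The deciders AL_F_4, AL_F_5, … differ for each n; nothing in the verifiability definition requires them to arise from a single uniform rule, which is precisely the gap the theorem exploits.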

E. The significance of algorithmic ‘truth’ assignments for Mole’s theses

The significance of such algorithmic ‘truth’ assignments for Mole’s theses is that:

Algorithmic computability—reflecting the ambit of classical Newtonian mechanics—characterises natural phenomena that are determinate and predictable.

Such phenomena are describable by mathematical propositions that can be termed as ‘knowable completely’, since at any point of time they are algorithmically computable as ‘true’ or ‘false’.

Hence both their past and future behaviour is completely computable, and their ‘truth’ values are therefore ‘knowable’ independent of the passage of time.

Algorithmic verifiability—reflecting the ambit of Quantum mechanics—characterises natural phenomena that are determinate but unpredictable.

Such phenomena are describable by mathematical propositions that can only be termed as ‘knowable incompletely’, since at any point of time they are only algorithmically verifiable, but not algorithmically computable, as ‘true’ or ‘false’.

Hence, although their past behaviour is completely computable, their future behaviour is not completely predictable, and their ‘truth’ values are not independent of the passage of time.

F. Where Mole’s implicit faith in the adequacy of set theoretical representations of natural phenomena may be misplaced

It also seems to me that, although Mole’s analysis justifiably holds that the:

“… importance of the rapport between an organism and its environment”

has been underacknowledged, or even overlooked, by existing theories of the mind and intelligence, it does not seem to mistrust—and therefore does not ascribe such underacknowledgement to any lacuna in—the mathematical and epistemic foundations of the formal language in which almost all descriptions of real-world events are currently sought to be expressed: the language of the set theory ZF.

G. Any claim to a physically manifestable ‘truth’ must be objectively accountable

Now, so far as applied science is concerned, history teaches us that the ‘truth’ of any mathematical proposition that purports to represent any aspect of the external world must be capable of being evidenced objectively; and that such ‘truths’ must not be only of a subjective and/or revelatory nature which may require truth-certification by evolutionarily selected prophets.

(Not necessarily religious—see, for instance, Melvyn B. Nathanson’s remarks, “Desperately Seeking Mathematical Truth“, in the Opinion piece in the August 2008 Notices of the American Mathematical Society, Vol. 55, Issue 7.)

The broader significance of seeking objective accountability is that it admits the following (admittedly iconoclastic) distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA as the language of science; and

2. The first-order Set Theory ZF as the language of science fiction.

It is a distinction that is faintly reflected in Stephen G. Simpson’s more conservative perspective in his paper ‘Partial Realizations of Hilbert’s Program‘ (#6.4, p.15):

“Finitistic reasoning (my read: ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (my read: ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.”

The distinction is supported by the formal argument (detailed in the above-cited CSR paper) that:

(i) PA has two, hitherto unsuspected, evidence-based interpretations, the first of which can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions, and the second of which can be treated as circumscribing the ambit of mechanistic reasoning about ‘true’ arithmetical propositions.

What this means is that the language of arithmetic—formally expressed as PA—can provide all the foundational needs for all practical applications of mathematics in the physical sciences. This was the point that I sought to make—in a limited way, with respect to quantum phenomena—in the following paper presented at Unilog 2015, Istanbul last year:

Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?

(Presented on 26th June at the workshop on ‘Emergent Computational Logics’ at UNILOG’2015, 5th World Congress and School on Universal Logic, 20th June 2015 – 30th June 2015, Istanbul, Turkey.)

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see Theorem 1 in §4 of this post), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about ‘true’ set-theoretical propositions, or that of mechanistic reasoning about ‘true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that—although of possible interest to philosophers of science—are only mentally conceivable by mathematicians subjectively, and have no verifiable physical counterparts, or immediately practical applications, that can materially impact the study of physical phenomena.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

H. The importance of Mole’s ‘rapport’

Accordingly, I see it as axiomatic that the relationship between an evidence-based mathematical language and the physical phenomena that it purports to describe must be in what Mole terms ‘rapport’, if we view mathematics as a set of linguistic tools that have evolved:

(a) to adequately abstract and precisely express through human reasoning our observations of physical phenomena in the world in which we live and work; and

(b) to unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning, in order to function to the maximum of our co-operative potential in achieving a better understanding of physical phenomena.

This is the perspective that I sought to advance in the following paper, presented at Epsilon 2015, Montpellier, last June, where I argue against the introduction of ‘unspecifiable’ elements (such as completed infinities) into either a formal language or any of its evidence-based interpretations (in support of the argument that, since a completed infinity cannot be evidence-based, it must be dispensable in any purported description of reality):

Why Hilbert’s and Brouwer’s interpretations of quantification are complementary and not contradictory.’

(Presented on 10th June at the Epsilon 2015 workshop on ‘Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics’, 10th June 2015 – 12th June 2015, University of Montpellier, France.)

I. Why mathematical reasoning must reflect an ‘agnostic’ perspective

Moreover, from a non-mathematician’s perspective, a Propertarian like Curt Doolittle would seem justified in his critique (comment of June 2, 2016 in this Quanta review) of the seemingly ‘mystical’ and ‘irrelevant’ direction in which conventional interpretations of Hilbert’s ‘theistic’ and Brouwer’s ‘atheistic’ reasoning appear to have pointed mainstream mathematics. For, as I argue informally in an earlier post, the ‘truths’ of any mathematical reasoning must reflect an ‘agnostic’ perspective.


Hawking’s provocative pronouncement

On 15th May 2011, Stephen Hawking made the extraordinarily provocative pronouncement—almost en passant—in the early moments of his talk at Google’s annual Zeitgeist:

“Philosophy is dead”!

Amongst those he managed to successfully provoke were:

\bullet Members of Professor Anthony Beaver’s LinkedIn group ‘Computing and Philosophy’;

and:

\bullet Readers of this Aljazeera Opinion piece by the philosophers Professor Santiago Zabala and Professor Creston Davis.

A deliberately restrained—but forceful and well-stated—comment in response to Hawking’s provocation in the Aljazeera Opinion piece by Analytic Philosopher Justin Clarke-Doane set me thinking.

What about Hawking’s Thesis?

After we are done with dismissing or rationalising what Hawking said, shouldn’t we also give at least as serious a consideration to the thesis that seemed to have been the backdrop to his pronouncement on the demise of philosophy as a potent tool of enquiry into the truth of scientific propositions?

This is his seemingly prophetic opinion that what we would term the assignment of objective truth to mathematically expressed scientific propositions will, some day, be based entirely on the evidence provided by simple functional languages[1] (read Turing machines) that are the `fundamental particles’ of Technology.

A seeming contradiction

Seemingly, there is straightaway a contradiction here.

The belief that all of nature’s laws and natural phenomena can be expressed in the algorithmically determinable truths of mathematical languages can be meta-mathematically expressed by saying that the Church-Turing Thesis must limit human intelligence as it does (by definition) mechanical intelligence.

In other words, even if the Thesis is not formally provable (hence not seen to be true) by a mechanical intelligence, it must be true for a human intelligence!

Apart from this, such a view may also then bind us to the conclusion that at least our universe is characterised by both non-locality and non-determinism (hence possibly the need for multiverses).

Whatever the argument for tolerating multiverses, I have suggested elsewhere that it should not, and perhaps need not, be because we are bound by scientific truths evidenced only by Hawking’s Technology.

What do you think?

\bullet Is Hawking’s thesis plausible or contradictory?

If the latter, then:

\bullet Does that validate Lucas’ Gödelian argument?

Return to 1: For instance see Chetan R. Murthy. 1991. An Evaluation Semantics for Classical Proofs. Proceedings of the Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109 (also Cornell TR 91-1213).

(Paper presented on 7th April at the workshop on `Logical Quantum Structures‘ at UNILOG’2013, 4th World Congress and School on Universal Logic, 29th March 2013 – 7th April 2013, Rio de Janeiro, Brazil. See also the paper Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics? and presentation on 26th June at the workshop on `Emergent Computational Logics‘ at UNILOG’2015, 5th World Congress and School on Universal Logic, 20th June 2015 – 30th June 2015, Istanbul, Turkey.)

Some disturbing features of the standard Copenhagen interpretation of Quantum Theory

Amongst the philosophically disturbing features of the standard Copenhagen interpretation of Quantum Theory[1] are its:

\bullet Essential indeterminateness [2];

\bullet Essential separation of the world into `system’ and `observer’ [3].

\bullet In 1935 Albert Einstein, Boris Podolsky and Nathan Rosen noted [4] that accepting Quantum Theory, but denying these features of the Copenhagen interpretation, logically entails accepting either that the world is non-local [5] (thus contradicting Special Relativity), or that there are hidden variables [6] that would eliminate the need for accepting these features as necessary to any sound interpretation of Quantum Theory.

\bullet In 1952 David Bohm proposed [7] an alternative mathematical development of the existing Quantum Theory (essentially equivalent to it, but based on Louis de Broglie’s pilot wave theory) whose interpretation appealed to hidden variables [8] (presumably hidden natural laws that were implicitly assumed by Bohm to be representable by well-defined classically computable mathematical functions [9]) that eliminated the need for indeterminism and the separation of the world into `system’ and `observer’.

\bullet In 1964 John Stewart Bell showed [10] that any interpretation of Quantum Theory which appeals to (presumably classically computable) hidden variables in the above sense is necessarily non-local.

However, our foundational investigations into an apparently unrelated area [11] suggest that if the above presumption—concerning an appeal by Bohm and Bell to functions that are implicitly assumed to be classically computable—is correct, then the hidden variables in the Bohm-de Broglie interpretation of Quantum Theory could as well be presumed to involve natural laws that are mathematically representable only by functions that are algorithmically verifiable but not algorithmically computable [12]—in which case the interpretation might avoid being held as appealing to `non-locality’.

The underlying perspective of this thesis

The underlying perspective of this thesis is that:

(1) Classical physics assumes that all the observable laws of nature can be mathematically represented in terms of well-defined functions that are algorithmically computable (as defined in the next section).

Since the functions are well-defined, their values are pre-existing and pre-determined as mappings that are capable of being known in their infinite totalities to an omniscient intelligence, such as Laplace’s intellect [13].

(2) However, the overwhelming experimental verification of the mathematical predictions of Quantum Theory suggests that the actual behaviour of the real world cannot be assumed as pre-existing and pre-determined in this sense.

In other words, the consequences of some experimental interactions are theoretically incapable of being completely known in advance even to an omniscient intelligence, such as Laplace’s intellect.

So all the observable laws of nature cannot be represented mathematically in terms of functions that are algorithmically computable (as defined in the next section).

(3) Hence, either there is no way of representing all the observable laws of nature mathematically in a deterministic model, or all the observable laws of nature can be represented mathematically in a deterministic model—but in terms of functions that are `computable’ in a non-predictable sense (we define these in the next section as algorithmically verifiable functions).

(4) The Copenhagen interpretation appears to opt for the first option in (3) above, and hold that there is no way of representing all the observable laws of nature mathematically in a deterministic model.

Hence the interpretation is not overly concerned with the seemingly essential non-locality of Quantum Theory, and its conflict with the deterministic mathematical representation of the laws of Special Relativity.

(5) The Bohm-de Broglie interpretation appears to reject the first option in (3) above, and to propose a way of representing all the observable laws of nature mathematically in terms of functions that are presumably taken implicitly to be algorithmically computable.

However, the Bohm-de Broglie interpretation has not so far been viewed as being capable of mathematically representing the seemingly essential non-local feature of Quantum Theory.

Let us therefore, for the moment, consider the second option in (3) above from the perspective suggested by the Birmingham paper; i.e., that the apparently non-local feature of Quantum Theory may actually be indicative of non-constructive and `counter-intuitive-to-human-intelligence’ phenomena in nature that can, however, be mathematically represented by functions that are algorithmically verifiable, but not algorithmically computable.

Algorithmic verifiability and algorithmic computability

We define the two concepts [14]:

Algorithmic verifiability: A number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL_{(F,\ n)} which can provide objective evidence [15] for deciding the truth/falsity of each proposition in the finite sequence \{F(1), F(2), \ldots, F(n)\}.

Algorithmic computability: A number theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL_{F} that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence \{F(1), F(2), \ldots\}.

We note that algorithmic computability implies the existence of an algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions [16], whereas algorithmic verifiability does not imply the existence of an algorithm that can decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions.

From the point of view of a finitary mathematical philosophy—which is the constraint within which an applied science ought to ideally operate—the significant difference between the two concepts could be expressed by saying that we may treat the decimal representation of a real number as corresponding to a physically measurable limit [17]—and not only to a mathematically definable limit—if and only if such representation is definable by an algorithmically computable function.

We note that although every algorithmically computable relation is algorithmically verifiable, the converse is not true.

Theorem 1: There are mathematical functions that are algorithmically verifiable but not algorithmically computable.

Proof: (a) Since any real number is mathematically definable as the limit of a Cauchy sequence of rational numbers:

Let R(n) denote the n^{th} digit in the expression of the real number R in binary notation.

Then, for any given natural number n, there is an algorithm AL_{(R,\ n)} that can decide the truth/falsity of each proposition in the finite sequence:

\{R(1)=0, R(2)=0, \ldots, R(n)=0\}.

Hence, for any real number R, the relation R(x)=0 is algorithmically verifiable trivially.

(b) Since it follows from Alan Turing’s Halting argument [18] that there are algorithmically uncomputable real numbers:

Let [R(n)] denote the n^{th} digit in the expression of an algorithmically uncomputable real number R in binary notation.

By (a), the relation [R(x)=0] is algorithmically verifiable trivially.

However, by definition there is no algorithm AL_{R} that can decide the truth/falsity of each proposition in the denumerable sequence:

\{[R(1)=0], [R(2)=0], \ldots\}.

Hence the relation [R(x)=0] is algorithmically verifiable but not algorithmically computable. \Box

Some mathematical constants are definable only by algorithmically verifiable but not algorithmically computable functions

We note that:

(i) All the mathematically defined functions known to, and used by, science are algorithmically computable, including those that define transcendental numbers such as \pi, e, etc.

They can be computed algorithmically as they are all definable as the limit of some well-defined infinite series of rationals.

(ii) The existence of mathematical constants that are defined by functions which are algorithmically verifiable but not algorithmically computable—suggested most famously by Georg Cantor’s diagonal argument—has been a philosophically debatable deduction.

Such existential deductions have been viewed with both suspicion and scepticism by scientists such as Henri Poincaré, L. E. J. Brouwer, etc., and disputed most vociferously on philosophical grounds by Ludwig Wittgenstein [19].

(iii) A constructive definition of an arithmetical Boolean function [R(x)] that is true (hence algorithmically verifiable [20]) but not provable in Peano Arithmetic (hence not algorithmically computable [21]) was given by Kurt Gödel in his 1931 paper on formally undecidable arithmetical propositions [22].

It was also disputed vociferously by Wittgenstein [23].

(iv) The definition of a number-theoretic function that is algorithmically verifiable but not algorithmically computable was also given by Alan Turing in his 1936 paper on computable numbers [24].

He defined a halting function, say H(n), that is 0 if, and only if, the Turing machine with code number n halts on input n. Such a function is mathematically well-defined, but assuming that it defines an algorithmically computable real number leads to a contradiction.

Turing thereupon concluded the mathematical existence of algorithmically uncomputable real numbers (the diagonal step is sketched in code after this list).

(v) A definition of a number-theoretic function that is algorithmically verifiable but not algorithmically computable was given by Gregory Chaitin [25].

He defined a class of constants—denoted by \Omega—which is such that if C(n) is the n^{th} digit in the decimal expression of an \Omega constant, then the function C(x) is algorithmically verifiable but not algorithmically computable.
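
The diagonal step behind (iv) can be sketched in a few lines (Python functions standing in for Turing machines; an illustrative reconstruction, not Turing’s own formalism):

```python
def make_diagonal(halts):
    # Given a purported total decider halts(code, input) for H, construct the
    # machine that diagonalises against it.
    def diagonal(n):
        if halts(n, n):        # if the decider claims machine n halts on n...
            while True:        # ...loop forever;
                pass
        return 0               # ...otherwise halt.
    return diagonal

# If halts() were algorithmically computable, diagonal() would carry some code
# number d, and halts(d, d) would be True exactly when diagonal(d) does not
# halt -- that is, exactly when halts(d, d) is False.  The contradiction shows
# that H(n), though well-defined for each n, is computable by no single
# algorithm: the verifiable-but-not-computable situation of Theorem 1.
```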

Some physical constants may be representable by real numbers that are definable only by algorithmically verifiable but not algorithmically computable functions

Now, we note that:

“… the numerical values of dimensionless physical constants are independent of the units used. These constants cannot be eliminated by any choice of a system of units. Such constants include:

\bullet \alpha, the fine structure constant, the coupling constant for the electromagnetic interaction (\approx 1/137.036). Also the square of the electron charge, expressed in Planck units. This defines the scale of charge of elementary particles with charge.

\bullet \mu or \beta, the proton-to-electron mass ratio, the rest mass of the proton divided by that of the electron (\approx 1836.15). More generally, the rest masses of all elementary particles relative to that of the electron.

\bullet \alpha_{s}, the coupling constant for the strong force (\approx 1).

\bullet \alpha_{G}, the gravitational coupling constant (\approx 10^{-38}), which is the square of the electron mass, expressed in Planck units. This defines the scale of the mass of elementary particles.

At the present time, the values of the dimensionless physical constants cannot be calculated; they are determined only by physical measurement. This is one of the unsolved problems of physics. …

The list of fundamental dimensionless constants decreases when advances in physics show how some previously known constant can be computed in terms of others. A long-sought goal of theoretical physics is to find first principles from which all of the fundamental dimensionless constants can be calculated and compared to the measured values. A successful `Theory of Everything’ would allow such a calculation, but so far, this goal has remained elusive.”

Dimensionless physical constant – Wikipedia.

This suggests that:

Thesis 1: Some of the dimensionless physical constants are only representable in a mathematical language as real numbers that are defined by functions which are algorithmically verifiable, but not algorithmically computable.

In other words, we cannot treat such constants as denoting—even in principle—a measurable limit, as we could a constant that is representable mathematically by a real number that is definable by algorithmically computable functions.

From the point of view of mathematical philosophy, this distinction would be expressed by saying that the sequence of digits in the decimal representation of an `unmeasurable’ physical constant cannot be treated in a mathematical language as a `completed’ infinite sequence, whilst the corresponding sequence in the decimal representation of a `measurable’ physical constant can be treated as a `completed’ infinite sequence.

Zeno’s argument: The dichotomy between a continuous physical reality and its discretely representable mathematical theory

Zeno’s paradoxical arguments highlight the philosophical and theological dichotomy between our essentially `continuous’ perception of the physical reality that we seek to capture with our measurements, and the essential `discreteness’ of any mathematical language of Arithmetic in which we seek to express such measurements.

The distinction between algorithmic verifiability and algorithmic computability of Arithmetical functions can be seen as reflecting the dichotomy mathematically.

Classical laws of nature

For instance, the distinction suggests that classical mechanics can be held as complete with respect to the algorithmically computable representation of the physical world, in the sense that:

Thesis 2: Classical laws of nature determine the nature and behaviour of all those properties of the physical world which are mathematically describable completely at any moment of time t(n) by algorithmically computable functions from a given initial state at time t(0).

Neo-classical laws of nature

On the other hand, the distinction also suggests that:

Thesis 3: Neo-classical laws of nature determine the nature and behaviour of those properties of the physical world which are describable completely at any moment of time t(n) by algorithmically verifiable functions; however such properties are not completely describable by algorithmically computable functions from any given initial state at time t(0).
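
A toy simulation may make the intended contrast between Theses 2 and 3 vivid (entirely illustrative: the update rule, and the pseudo-random choice standing in for a determinate-but-unpredictable measurement outcome, are my own devices, not part of the theses):

```python
import random

def classical_state(x0, n):
    # Thesis 2: the state at t(n) is computed from the state at t(0) by one
    # fixed, algorithmically computable rule (here an arbitrary update map).
    x = x0
    for _ in range(n):
        x = (3 * x + 1) % 17
    return x

class NeoClassicalProperty:
    # Thesis 3: each observation yields a definite value that can afterwards
    # be verified against the finite record, but no rule computes the value
    # at t(n) from the state at t(0).
    def __init__(self):
        self.record = []                       # finite evidence so far
    def observe(self):
        self.record.append(random.choice([0, 1]))
        return self.record[-1]
    def verify(self, n):
        return self.record[n]                  # decidable once observed

print(classical_state(5, 10))                  # reproducible from t(0)
prop = NeoClassicalProperty()
outcomes = [prop.observe() for _ in range(5)]
assert all(prop.verify(k) == outcomes[k] for k in range(5))
```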

Since such behaviour [26] follows fixed laws and is determinate (even if not algorithmically predictable by classical laws), Albert Einstein could have been justified in his belief that:

“… God doesn’t play dice with the world”

and in holding that:

“I like to think that the moon is there even if I am not looking at it”.

Incompleteness: Arithmetical analogy

The distinction also suggests that neither classical mechanics nor neo-classical quantum mechanics can be described as `mathematically complete’ with respect to the algorithmically verifiable behaviour of the physical world.

The analogy here is that Gödel showed in 1931 [27] that any formal arithmetic is not mathematically complete with respect to the algorithmically verifiable nature and behaviour of the natural numbers [28].

However, it can be argued that the first-order Peano Arithmetic PA is complete [29] with respect to the algorithmically computable nature and behaviour of the natural numbers.

In this sense, the EPR paper may not be entirely wrong in holding that:

“We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete.”

Conjugate properties

The above also suggests that:

Thesis 4: The nature and behaviour of two conjugate properties F_{1} and F_{2} of a particle P that are determined by neo-classical laws are described mathematically at any time t(n) by two algorithmically verifiable, but not algorithmically computable, functions f_{1} and f_{2}.

In other words, it is the very essence of the neo-classical laws determining the nature and behaviour of the particle that—at any time t(n)—we can only determine either f_{1}(n) or f_{2}(n), but not both.

Hence measuring either one makes the other indeterminate as we cannot go back in time.

However, this does not contradict the assumption that any property of an object must obey some deterministic natural law for any possible measurement that is made at any time.

Entangled particles

The above similarly suggests that:

Thesis 5: The nature and behaviour of an entangled property of two particles P and Q are determined by neo-classical laws, and are describable mathematically at any time t(n) by two algorithmically verifiable—but not algorithmically computable—functions f_{1} and g_{1}.

In other words, it is the very essence of the neo-classical laws determining the nature and behaviour of the entangled properties of two particles that—at any time t(n)—determining one immediately gives the state of the other without measurement since the properties are entangled.

Again, this does not contradict the assumption that any property of an object must obey some deterministic natural law for any possible measurement that is made at any time.

Nor does it require any information to travel from one particle to another consequent to a measurement.[29a]

Schrödinger’s cat

If [F(x)] is an algorithmically verifiable but not algorithmically computable Boolean function, we can take the query:

Is F(n)=0 for all natural numbers?

as corresponding mathematically to the Schrödinger question:

Is the cat dead or alive at any given time t?

We can then argue that there is no mathematical paradox involved in Schrödinger‘s assertion that the cat is both dead and alive, if we take this to mean that:

I may either assume the cat to be alive until a given time t (in the future), or assume the cat to be dead until the time t, without arriving at any logical contradiction in my existing Quantum description of nature.

In other words:

Once we accept Quantum Theory as a valid description of nature, then there is no paradox in stating that the theory essentially cannot predict the state of the cat at any moment of future time (so the inability to predict does not arise out of a lack of sufficient information about the laws of the system that Quantum theory is describing, but stems from the very nature of these laws).

The mathematical analogy for the above would be:

Once we accept that Peano Arithmetic is consistent [30] and categorical [31], then we cannot deduce from the axioms of PA whether F(n)=0 for all natural numbers, or whether F(n)=1 for some natural number.
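
The analogy can also be phrased computationally (an illustrative sketch, not part of the formal argument): after any finite number of verified values, both global hypotheses remain consistent with the accumulated evidence unless a witness has appeared.

```python
def surviving_hypotheses(observed_prefix):
    # Given the finitely many values of F verified so far, report which of
    # the two global hypotheses the evidence has settled.
    if any(v == 1 for v in observed_prefix):
        return ["F(n)=1 for some n"]           # settled by a witness
    # All values so far are 0: the unverified tail leaves both hypotheses
    # open -- the arithmetical counterpart of a cat 'both dead and alive'.
    return ["F(n)=0 for all n", "F(n)=1 for some n"]

print(surviving_hypotheses([0, 0, 0, 0]))      # both survive
print(surviving_hypotheses([0, 0, 1]))         # a witness decides it
```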

Conclusion

To sum up, we suggest that the paradoxical element in the EPR argument may disappear if we could argue that:

(i) All properties of physical reality can be deterministic—in the sense that any physical property can have one, and only one, value at any time t(n)—where the value is completely determined by some natural law which need not, however, be algorithmic.

(ii) There are elements of such a physical reality whose properties at any time t(n) are determined completely in terms of their putative properties at some earlier time t(0).

Such properties are predictable mathematically since they are representable mathematically by algorithmically computable functions.

The Laws of Classical Mechanics describe the nature and behaviour of such physical reality only.

(iii) There can be elements of such a physical reality whose properties at any time t(n) cannot be theoretically determined completely from their putative properties at some earlier time t(0).

Such properties are unpredictable mathematically since they are only representable mathematically by algorithmically verifiable, but not algorithmically computable, functions.

The Laws of Quantum Mechanics describe the nature and behaviour of such physical reality.

The need for constructive mathematical foundations

The following perspective [32] emphasises the need for a universally common, constructive foundation for the mathematical representation of elements of reality such as those considered above:

“Our investigations lead us to consider the possibilities for `reuniting the antipodes’. The antipodes being classical mathematics (CLASS) and intuitionism (INT).

… It therefore seems worthwhile to explore the `formal’ common ground of classical and intuitionistic mathematics. If systematically developed, many intuitionistic results would be seen to hold classically as well, and thus offer a way to develop a strong constructive theory which is still consistent with the rest of classical mathematics.

Such a constructive theory can form a conceptual framework for applied mathematics and information technology.

These sciences now use an ad-hoc approach to reality since the classical framework is inadequate. … and can easily use the richness of ideas already present in classical mathematics, if classical mathematics were to be systematically developed along the common grounds before the unconstructive elements are brought in.”

Frank Waaldijk [33]

“… we propose that Laplacian determinism be seen in the light of constructive mathematics and Church’s Thesis.

This means amongst other things that infinite sequences (of natural numbers; a real number is then given by such an infinite sequence) are never `finished’, instead we see them developing in the course of time.

Now a very consequent, therefore elegant interpretation of Laplacian determinism runs as follows.

Suppose that there is in the real world a developing-infinite sequence of natural numbers, say \alpha. Then how to interpret the statement that this sequence is `uniquely determined’ by the state of the world at time zero?

At time zero we can have at most finite information since, according to our constructive viewpoint, infinity is never attained. So this finite information about \alpha supposedly enables us to `uniquely determine’ \alpha in its course of time.

It is now hard to see another interpretation of this last statement, than the one given by Church’s Thesis, namely that this finite information must be a (Turing-)algorithm that we can use to compute \alpha (n) for any n \in \mathbb{N}.

With classical logic and omniscience, the previous can be stated thus:

`for every (potentially infinite) sequence of numbers (a_{n})_{n \in \mathbb{N}} taken from reality there is a recursive algorithm \alpha such that \alpha (n) = a_{n} for each n \in \mathbb{N}’. This statement is sometimes denoted as `CT_{phys}‘,

… this classical omniscient interpretation is easily seen to fail in real life.

Therefore we adopt the constructive viewpoint.

The statement `the real world is deterministic’ can then best be interpreted as:

`a (potentially infinite) sequence of numbers (a_{n})_{n \in \mathbb{N}} taken from reality cannot be apart from every recursive algorithm \alpha (in symbols: \neg \forall \alpha \in \sigma_{\omega REC} \exists n \in \mathbb{N}\ [\alpha(n) \neq a_{n}])’.”

Frank Waaldijk [34]

References

Be64 John Stewart Bell. 1964. On the Einstein Podolsky Rosen Paradox. Physics, Vol. 1, No. 3, pp. 195-200. Reprinted in J. S. Bell, Speakable and Unspeakable in Quantum Mechanics, Cambridge, 2004, pp. 14-21.

Bo52 David Bohm. 1952. A Suggested Interpretation of the Quantum Theory in Terms of `Hidden Variables’, I. Physical Review 85 n.2, pp.166-179 (1952) and A Suggested Interpretation of the Quantum Theory in Terms of `Hidden Variables’, II. Physical Review 85 n.2, pp.180-193 (1952).

BBJ03 George S. Boolos, John P. Burgess, Richard C. Jeffrey. 2003. Computability and Logic. (4th ed). Cambridge University Press, Cambridge.

Ch82 Gregory J. Chaitin. 1982. Gödel’s Theorem and Information. International Journal of Theoretical Physics 22 (1982), pp. 941-954.

EPR35 Albert Einstein, Boris Yakovlevich Podolsky, Nathan Rosen. 1935. Can Quantum-Mechanical Description of Physical Reality be Considered Complete? In Physical Review, vol. 47, no. 10, pp.777-780. Bibcode: 1935PhRv...47..777E. doi:10.1103/PhysRev.47.777.

Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. pp.5-38.

Kl52 Stephen Cole Kleene. 1952. Introduction to Metamathematics. North Holland Publishing Company, Amsterdam.

Mu91 Chetan R. Murthy. 1991. An Evaluation Semantics for Classical Proofs. Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, (also Cornell TR 91-1213), 1991.

Sc35 Erwin Schrödinger. 1935. Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften, 23: pp.807-812; 823-828; 844-849. English translation: John D. Trimmer, The present situation in Quantum Mechanics, Proceedings of the American Philosophical Society 124, pp.323-38 (1980); reprinted in Quantum Theory and Measurement, p.152 (1983).

Sh11 Sheldon Goldstein et al. 2011. Bell’s Theorem, Scholarpedia, 6(10):8378, doi:10.4249/scholarpedia.8378.

Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp. 544-546.

Wa03 Frank Waaldijk. 2003. On the foundations of constructive mathematics. Web paper.

Wi78 Ludwig Wittgenstein. 1937. Remarks on the Foundations of Mathematics. 1978 ed., MIT Press, Cambridge.

An12 Bhupinder Singh Anand. 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.

An12a Bhupinder Singh Anand. 2012. Some consequences of interpreting the associated logic of the first-order Peano Arithmetic PA finitarily. Draft.

An13 Bhupinder Singh Anand. 2013. A suggested mathematical perspective for the EPR argument. Paper presented on 7th April at the workshop on `Logical Quantum Structures’ at UNILOG’2013, 4th World Congress and School on Universal Logic, 29th March 2013 – 7th April 2013, Rio de Janeiro, Brazil.

Notes

Return to 1: “Because it consists of the views developed by a number of scientists and philosophers during the second quarter of the 20th Century, there is no definitive statement of the Copenhagen interpretation”, Wikipedia; cf., Copenhagen Interpretation of Quantum Mechanics, Stanford Encyclopedia of Philosophy; also The Copenhagen Interpretation of Quantum Mechanics by Ben Best.

Return to 2: “It is a general principle of orthodox formulations of quantum theory that measurements of physical quantities do not simply reveal pre-existing or pre-determined values, the way they do in classical theories. Instead, the particular outcome of the measurement somehow “emerges” from the dynamical interaction of the system being measured with the measuring device, so that even someone who was omniscient about the states of the system and device prior to the interaction couldn’t have predicted in advance which outcome would be realized.” … Sh11.

Return to 2: Highlighted famously by Erwin Schrödinger’s caustic observation on the dubious condition of his Platonic pet:

“One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The \psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.” … Sc35, §5.

Return to 3: cf. Sh11.

Return to 4: EPR35.

Return to 5: “`Non-local’ … means that there exist interactions between events that are too far apart in space and too close together in time for the events to be connected even by signals moving at the speed of light.” … Sh11.

Return to 6: “Traditionally, the phrase `hidden variables’ is used to characterize any elements supplementing the wave function of orthodox quantum theory.” … Sh11.

Return to 7: Bo52.

Return to 8: “This terminology is, however, particularly unfortunate in the case of the de Broglie-Bohm theory, where it is in the supplementary variables—definite particle positions—that one finds an image of the manifest world of ordinary experience.” … Sh11.

Return to 9: Which could be considered as having pre-existing or pre-determined mathematical values over the domain on which the functions are well-defined.

Return to 10: Be64.

Return to 11: Concerning evidence-based and finitary interpretations of the first order Peano Arithmetic PA, in An12.

Return to 12: As defined below—hence mathematically determinate but unpredictable.

Return to 13: “We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all the forces that animate Nature and the mutual positions of the beings that comprise it, if this intellect were vast enough to submit its data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom: for such an intellect nothing could be uncertain; and the future just like the past would be present before its eyes.” … Pierre Simon Laplace, A Philosophical Essay on Probabilities.

Return to 14: See An12.

Return to 15: “It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …” … Mu91.

Return to 16: We note that the concept of `algorithmic computability’ is essentially an expression of the more rigorously defined concept of `realizability’ in Kl52, p.503.

Return to 17: In the sense of a physically `completable’ infinite sequence (as needed to resolve Zeno’s paradox).

Return to 18: Tu36, p.132, §8.

Return to 19: Wi78.

Return to 20: An12a, Corollary 8, p.27.

Return to 21: An12a, Corollary 8, p.27.

Return to 22: Go31.

Return to 23: Wi78.

Return to 24: Tu36.

Return to 25: Ch82.

Return to 26: A putative model for such behaviour is given in Wa03, §1.5, p.5:

“The second way to model our real world is to assume that it is deterministic. …

It would be worthwhile to explore the consequences of a deterministic world with incomplete information (since under the assumption of determinacy in the author’s eyes this comes closest to real life).

That is a world in which each infinite sequence is given by an algorithm, which in most cases is completely unknown.

We can model such a world by introducing two players, where player I picks algorithms and hands out the computed values of these algorithms to player II, one at a time.

Sometimes player I discloses (partial) information about the algorithms themselves.

Player II can of course construct her or his own algorithms, but still is confronted with recursive elements of player I about which she/he has incomplete information.”
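A toy rendering of this two-player model (a sketch in Python; the secret algorithm and the round count are hypothetical choices of mine, not Waaldijk’s):

import itertools

def player_one():
    # Player I secretly fixes an algorithm -- hypothetically, n -> n*n mod 7 --
    # and discloses its computed values one at a time.
    secret = lambda n: (n * n) % 7
    for n in itertools.count():
        yield secret(n)

def player_two(stream, rounds=10):
    # Player II sees only the disclosed values, and must reason about the
    # generating algorithm from this incomplete information.
    return list(itertools.islice(stream, rounds))

print(player_two(player_one()))   # [0, 1, 4, 2, 2, 4, 1, 0, 1, 4]

Any finite record of play is consistent with infinitely many algorithms that Player I might have picked, which is the sense in which Player II’s information remains incomplete.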

Return to 27: Go31.

Return to 28: Which—as shown in An12—is the behaviour sought to be captured by the Standard interpretation of the first order Peano Arithmetic PA.

Return to 29: And categorical, as argued in An12a.

Return to 29a: For a model of widely separated outputs that are correlated but do not allow communication (demonstrating that non-locality does not imply communication), see the paper by Wim van Dam, Implausible Consequences of Superstrong Nonlocality, and the explanatory blog post at A Neighborhood of Infinity, Distributed computing with alien technology.
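Van Dam’s setting can be made concrete with the standard Popescu-Rohrlich (`PR’) box: a device whose two halves produce outputs satisfying a \oplus b = x \wedge y, yet whose individual outputs are fair coin flips, so neither party can signal to the other. A minimal simulation (my sketch, not code from the cited paper):

import random
from collections import Counter

def pr_box(x, y):
    # One use of a PR box on inputs x, y in {0, 1}: the outputs satisfy
    # a XOR b = x AND y, while each output on its own is a fair coin flip.
    a = random.randint(0, 1)
    b = a ^ (x & y)
    return a, b

# No-signalling check: Alice's marginal distribution is (roughly) uniform
# whichever input Bob chooses, so the correlation carries no message.
for y in (0, 1):
    counts = Counter(pr_box(0, y)[0] for _ in range(10_000))
    print("Bob's input:", y, "Alice's outputs:", counts)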

Return to 30: An12 establishes the `consistency’.

Return to 31: An12a establishes the `categoricity’—which means that any two models of the Arithmetic are isomorphic.

Return to 32: Wa03.

Return to 33: ibid., §1.6, p.5.

Return to 34: ibid., §7.2, p.24.

Bhupinder Singh Anand
