
(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

Ferguson and Priest’s thesis

In a brief but provocative review of what they term “the enduring evolution of logic” over the ages, the authors of Oxford University Press’ recently released ‘A Dictionary of Logic’, philosophers Thomas Macaulay Ferguson and Graham Priest, take to task what they view as the Kant-influenced manner in which logic is taught as a first course in most places in the world:

“… as usually ahistorical and somewhat dogmatic. This is what logic is; just learn the rules. It is as if Frege had brought down the tablets from Mount Sinai: the result is God-given, fixed, and unquestionable.”

Ferguson and Priest conclude their review by remarking that:

“Logic provides a theory, or set of theories, about what follows from what, and why. And like any theoretical inquiry, it has evolved, and will continue to do so. It will surely produce theories of greater depth, scope, subtlety, refinement—and maybe even truth.”

However, it is not obvious whether that is prescient optimism or a tongue-in-cheek exit line!

A nineteenth century parody of the struggle to define ‘truth’ objectively

For, if anything, the developments in logic since around 1931 have—seemingly in gross violation of the hallowed principle of Ockham’s razor and its crude, but highly effective, modern avatar KISS—indeed produced a plethora of theories of great depth, scope, subtlety, and refinement.

These, however, seem to have more in common with the cynical twentieth-century emphasis on subjective, unverifiable ‘truth’ than with the concept of an objective, evidence-based ‘truth’ that centuries of philosophers and mathematicians strenuously struggled to differentiate and express.

A struggle reflected so eloquently in this nineteenth century quote:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

… Lewis Carroll (Charles L. Dodgson), ‘Through the Looking-Glass’, chapter 6, p. 205 (1934 ed.). First published in 1872.

Making sense of mathematical propositions about infinite processes

It was, indeed, an epic struggle which culminated in the nineteenth century standards of rigour successfully imposed—in no small measure by the works of Augustin-Louis Cauchy and Karl Weierstrass—on verifiable interpretations of mathematical propositions about infinite processes involving real numbers.

A struggle, moreover, which should have culminated equally successfully in similar twentieth century standards—on verifiable interpretations of mathematical propositions containing references to infinite computations involving integers—sought to be imposed in 1936 by Alan Turing upon philosophical and mathematical discourse.

The Liar paradox

For it follows from Turing’s 1936 reasoning that where quantification is not, or cannot be, explicitly defined in formal logical terms—e.g., the classical expression of the Liar paradox as ‘This sentence is a lie’—a paradox cannot per se be considered as posing serious linguistic or philosophical concerns (see, for instance, the series of four posts beginning here).

Of course—as reflected implicitly in Kurt Gödel’s seminal 1931 paper on undecidable arithmetical propositions—it would be a matter of serious concern if the word ‘This’ in the English language sentence, ‘This sentence is a lie’, could be validly viewed as implying that:

(i) there is a constructive infinite enumeration of English language sentences;

(ii) to each of which a truth-value can be constructively assigned by the rules of a two-valued logic; and,

(iii) in which ‘This’ refers uniquely to a particular sentence in the enumeration.

Gödel’s influence on Turing’s reasoning

However, Turing’s constructive perspective had the misfortune of being subverted by a knee-jerk, anti-establishment, culture that was—and apparently remains to this day—overwhelmed by Gödel’s powerful Platonic—and essentially unverifiable—mathematical and philosophical 1931 interpretation of his own construction of an arithmetical proposition that is formally unprovable, but undeniably true under any definition of ‘truth’ in any interpretation of arithmetic over the natural numbers.

Otherwise, I believe that Turing could easily have provided the necessary constructive interpretations of arithmetical truth—sought by David Hilbert for establishing the consistency of number theory finitarily—which is addressed by the following paper due to appear in the December 2016 issue of ‘Cognitive Systems Research‘:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Gödelian thesis’.

What is logic: using Ockham’s razor

Moreover, the paper endorses the implicit orthodoxy of an Ockham’s razor influenced perspective—which Ferguson and Priest seemingly find wanting—that logic is simply a deterministic set of rules that must constructively assign the truth values of ‘truth/falsity’ to the sentences of a language.

It is a view that I expressed earlier as the key to a possible resolution of the EPR paradox in the following paper that I presented on 26th June at the workshop on Emergent Computational Logics at UNILOG’2015, Istanbul, Turkey:

Algorithmically Verifiable Logic vis à vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?

where I introduced the definition:

A finite set \lambda of rules is a Logic of a formal mathematical language \mathcal{L} if, and only if, \lambda constructively assigns unique truth-values:

(a) Of provability/unprovability to the formulas of \mathcal{L}; and

(b) Of truth/falsity to the sentences of the Theory T(\mathcal{U}) which is defined semantically by the \lambda-interpretation of \mathcal{L} over a structure \mathcal{U}.

I showed there that such a definitional rule-based approach to ‘logic’ and ‘truth’ allows us to:

\bullet Equate the provable formulas of the first order Peano Arithmetic PA with the PA formulas that can be evidenced as ‘true’ under an algorithmically computable interpretation of PA over the structure \mathbb{N} of the natural numbers;

\bullet Adequately represent some of the philosophically troubling abstractions of the physical sciences mathematically;

\bullet Interpret such representations unambiguously; and

\bullet Conclude further:

\bullet First that the concept of infinity is an emergent feature of any mechanical intelligence whose true arithmetical propositions are provable in the first-order Peano Arithmetic; and

\bullet Second that discovery and formulation of the laws of quantum physics lies within the algorithmically computable logic and reasoning of a mechanical intelligence whose logic is circumscribed by the first-order Peano Arithmetic.


Bhupinder Singh Anand

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

1. Since, by the Prime Number Theorem, the number of primes \leq \sqrt{n} is O(\frac{\sqrt{n}}{\log_{e}\sqrt{n}}), it would follow that determining a factor of n requires at least one logical operation for each prime \leq \sqrt{n}, and therefore cannot be done in polynomial time—whence P \neq NP—if whether or not a prime p divides an integer n were independent of whether or not a prime q \neq p divides the integer n.
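To make the operation count concrete, the following minimal Python sketch (an illustration of mine, not part of the argument above) factorises by trial division, testing each prime p \leq \sqrt{n} in turn and counting the tests:

    from math import isqrt

    def is_prime(m: int) -> bool:
        # naive primality test, adequate for illustration
        return m >= 2 and all(m % d for d in range(2, isqrt(m) + 1))

    def least_prime_factor(n: int):
        # one divisibility test for each prime p <= sqrt(n); if no such
        # prime divides n, then n is itself prime
        tests = 0
        for p in range(2, isqrt(n) + 1):
            if is_prime(p):
                tests += 1
                if n % p == 0:
                    return p, tests
        return n, tests

    # For the prime n = 10007, all pi(100) = 25 primes <= 100 get tested:
    print(least_prime_factor(10007))  # (10007, 25)

If, as hypothesised, the outcomes of these divisibility tests were mutually independent, then none of the \pi(\sqrt{n}) tests could be eliminated; that hypothesis is what the argument above turns on.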

2. Currently, conventional approaches to determining the computational complexity of Integer Factorising apparently appeal critically to the belief:

(i) either—explicitly (see here)—that whether or not a prime p divides an integer n is not independent of whether or not a prime q \neq p divides the integer n;

(ii) or—implicitly (since the problem is yet open)—that a proof to the contrary must imply that if P(n\ is\ a\ prime) is the probability that n is a prime, then \sum_{i = 1}^{\infty} P(i\ is\ a\ prime) = 1.

3. If so, then conventional approaches seem to conflate the two probabilities:

(i) The probability P(a) of selecting a number that has the property of being prime from a given set S of numbers;

Example 1: I have a bag containing 100 numbers in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? This is the basis for setting odds in games such as roulette.

(ii) The probability P(b) of determining that a given integer n is prime.

Example 2: I give you a 5-digit combination lock along with a 10-digit number n. The lock only opens if you set the combination to a proper factor of n which is greater than 1. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.

4. In case 3(i), if the precise proportion of primes to non-primes in S is definable, then clearly P(a) too is definable.

However, if S is the set N of all integers, and we cannot define a precise ratio of primes to composites in N but only an order of magnitude such as O(\frac{1}{\log_{e}n}), then equally obviously P(a) cannot be defined in N (see Chapter 2, p.9, Theorem 2.1, here).

5. In case 3(ii) the following paper proves P(b) = \frac{1}{\pi(\sqrt{n})}, since it shows that whether or not a prime p divides a given integer n is independent of whether or not a prime q \neq p divides n:

Why Integer Factorising cannot be polynomial time

Not only does it immediately follow that P \neq NP (see here), but we further have that \pi(n) \approx n.\prod_{i = 1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}}), with a binomial standard deviation. Hence, even though we cannot define the probability P(n\ is\ a\ prime) of selecting a number from the set N of all natural numbers that has the property of being prime, \prod_{i = 1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}}) can be treated as the de facto probability that a given n is prime, with all its attendant consequences for various prime-counting functions and the Riemann Zeta function (see here).
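The two sides of this approximation can be compared numerically. The following Python sketch (an illustrative check of mine, using a simple sieve) computes \pi(n) and n.\prod_{i = 1}^{\pi(\sqrt{n})}(1-\frac{1}{p_{i}}) for n = 10^{6}:

    from math import isqrt

    def primes_upto(limit: int):
        # sieve of Eratosthenes
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, isqrt(limit) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return [i for i, is_p in enumerate(sieve) if is_p]

    def defacto_density(n: int) -> float:
        # the product over primes p <= sqrt(n) of (1 - 1/p)
        prod = 1.0
        for p in primes_upto(isqrt(n)):
            prod *= 1.0 - 1.0 / p
        return prod

    n = 10 ** 6
    print(len(primes_upto(n)))            # pi(n) = 78498
    print(round(n * defacto_density(n)))  # n times the de facto density

The printed values let the reader judge the quality of the approximation at this scale.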


Bhupinder Singh Anand

This argument laid the foundation for this later post and this investigation.

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

Abstract: We show that the joint probability \mathbb{P}(p_{i} | n\ \cap\ p_{j} | n) that two unequal primes p_{i},\ p_{j} divide any integer n is the product \mathbb{P}(p_{i} | n).\mathbb{P}(p_{j} | n). We conclude that the prime divisors of any integer n are mutually independent; and that Integer Factorising necessarily requires a number of operations of the order O(\frac{\sqrt{n}}{\log_{e}\sqrt{n}}).

\S 1 The residues r_{i}(n)

We define the residues r_{i}(n) for all n \geq 2 and all i \geq 2 as below:

Definition 1: n + r_{i}(n) \equiv 0\ (mod\ i) where i > r_{i}(n) \geq 0.

Since each residue r_{i}(n) cycles over the i values (i-1, i-2, \ldots, 0), these values are all incongruent and form a complete system of residues [1] mod\ i.

We note that:

Lemma 1: r_{i}(n) = 0 if, and only if, i is a divisor of n.
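Definition 1 and Lemma 1 can be checked mechanically. In Python (an illustrative sketch of mine; r_{i}(n) is simply the additive complement of the residue of n mod i):

    def r(i: int, n: int) -> int:
        # Definition 1: the unique 0 <= r_i(n) < i with n + r_i(n) = 0 (mod i)
        return (-n) % i

    # Lemma 1: r_i(n) = 0 exactly when i divides n
    assert r(5, 20) == 0   # 5 | 20
    assert r(7, 20) == 1   # 20 + 1 = 21 = 0 (mod 7)
    assert all((r(i, n) == 0) == (n % i == 0)
               for n in range(2, 500) for i in range(2, 60))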

\S 2 The probability \mathbb{P}(e)

By the standard definition of the probability [2] \mathbb{P}(e) of an event e, we then have that:

Lemma 2: For any n \geq 2,\ i \geq 2, and any given integer u with i > u \geq 0, the probability \mathbb{P}(r_{i}(n) = u) that r_{i}(n) = u is 1/i, and the probability \mathbb{P}(r_{i}(n) \neq u) that r_{i}(n) \neq u is 1 - 1/i.
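Lemma 2 can be checked empirically: over any block of i consecutive integers n, each value u in \{0, \ldots, i-1\} occurs exactly once. A sketch (illustrative, with i = 7):

    from collections import Counter

    i, runs = 7, 7000  # 7000 integers = 1000 full residue cycles
    counts = Counter((-n) % i for n in range(2, 2 + runs))
    print({u: c / runs for u, c in sorted(counts.items())})
    # each u in 0..6 occurs with relative frequency exactly 1/7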

We note the standard definition [3]:

Definition 2: Two events e_{i} and e_{j} are mutually independent for i \neq j if, and only if, \mathbb{P}(e_{i}\ \cap\ e_{j}) = \mathbb{P}(e_{i}).\mathbb{P}(e_{j}).

\S 3 The prime divisors of any integer n are mutually independent

We then have that:

Lemma 3: If p_{i} and p_{j} are two primes where i \neq j then, for any n \geq 2, we have:

\mathbb{P}((r_{p_{_{i}}}(n) = u) \cap (r_{p_{_{j}}}(n) = v)) = \mathbb{P}(r_{p_{_{i}}}(n) = u).\mathbb{P}(r_{p_{_{j}}}(n) = v)

where p_{i} > u \geq 0 and p_{j} > v \geq 0.

Proof: The p_{i}.p_{j} numbers v.p_{i} + u.p_{j}, where p_{i} > u \geq 0 and p_{j} > v \geq 0, are all incongruent and form a complete system of residues [4] mod\ (p_{i}.p_{j}). Hence:

\mathbb{P}((r_{p_{_{i}}}(n) = u) \cap (r_{p_{_{j}}}(n) = v)) = 1/(p_{i}.p_{j}).

By Lemma 2:

\mathbb{P}(r_{p_{_{i}}}(n) = u).\mathbb{P}(r_{p_{_{j}}}(n) = v) = (1/p_{i})(1/p_{j}).

The lemma follows. \Box
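The complete residue system underlying the proof can also be exhibited numerically. The sketch below (illustrative, taking p_{i} = 3 and p_{j} = 5) confirms that, over consecutive integers n, each pair (r_{3}(n), r_{5}(n)) occurs with frequency exactly 1/15 = (1/3)(1/5):

    from collections import Counter

    p_i, p_j, cycles = 3, 5, 1000
    pairs = Counter(((-n) % p_i, (-n) % p_j)
                    for n in range(2, 2 + p_i * p_j * cycles))
    assert len(pairs) == p_i * p_j                # all 15 pairs (u, v) occur
    assert all(c == cycles for c in pairs.values())
    print(1 / (p_i * p_j))                        # joint frequency 1/15 = (1/3)(1/5)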

If u = 0 and v = 0 in Lemma 3, so that both p_{i} and p_{j} are prime divisors of n, we conclude by Definition 2 that:

Corollary 1: \mathbb{P}((r_{p_{_{i}}}(n) = 0) \cap (r_{p_{_{j}}}(n) = 0)) = \mathbb{P}(r_{p_{_{i}}}(n) = 0).\mathbb{P}(r_{p_{_{j}}}(n) = 0).

Corollary 2: \mathbb{P}(p_{i} | n\ \cap\ p_{j} | n) = \mathbb{P}(p_{i} | n).\mathbb{P}(p_{j} | n).

Theorem 1: The prime divisors of any integer n are mutually independent.

Since n is a prime if, and only if, it is not divisible by any prime p \leq \sqrt{n}, we may, without any loss of generality, take integer factorising to mean determining at least one prime factor p \leq \sqrt{n} of any given n \geq 2.

\S 4 Integer Factorising is not in P

It then immediately follows from Theorem 1 that:

Corollary 3: Integer Factorising is not in P.

Proof: We note that any computational process to identify a prime divisor of n \geq 2 must necessarily appeal to a logical operation for identifying such a factor.

Since n may be the square of a prime, it follows from Theorem 1 that we necessarily require at least one logical operation for each prime p \leq \sqrt{n} in order to logically identify a prime divisor of n.

Moreover, since the number of such primes is of the order O(\frac{\sqrt{n}}{\log_{e}\sqrt{n}}), any deterministic algorithm that always computes a prime factor of n cannot be polynomial-time—i.e., of order O((\log_{e} n)^{c}) for any c—in the length of the input n.

The corollary follows, since P is the class of problems solvable by such polynomial-time algorithms. \Box
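To see numerically why \pi(\sqrt{n}) operations cannot be bounded by any polynomial in the input length \log_{e} n, one can tabulate the two growth rates (an illustrative sketch; the exponent c = 3 is an arbitrary sample):

    from math import isqrt, log

    for bits in (16, 32, 64, 128):
        n = 2 ** bits
        ops = isqrt(n) / log(isqrt(n))  # ~ pi(sqrt(n)), the proof's operation count
        poly = log(n) ** 3              # a sample polynomial bound in the input length
        print(bits, f"{ops:.3e}", f"{poly:.3e}")
    # ops overtakes poly and the gap widens without bound as n grows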

Acknowledgements

I am indebted to my erstwhile classmate, Professor Chetan Mehta, for his unqualified encouragement and support for my scholarly pursuits over the past fifty years; most pertinently for his patiently critical insight into the required rigour without which the argument of this 1964 investigation would have remained in the informal universe of seemingly self-evident truths.

References

GS97 Charles M. Grinstead and J. Laurie Snell. 1997. Introduction to Probability. Second Revised Edition, 1997, American Mathematical Society, Rhode Island, USA.

HW60 G. H. Hardy and E. M. Wright. 1960. An Introduction to the Theory of Numbers. 4th edition. Clarendon Press, Oxford.

Ko56 A. N. Kolmogorov. 1933. Foundations of the Theory of Probability. Second English Edition. Translation edited by Nathan Morrison. 1956. Chelsea Publishing Company, New York.

An05 Bhupinder Singh Anand. 2005. Three Theorems on Modular Sieves that suggest the Prime Difference is O(\pi(p(n)^{1/2})). Private investigation.

Notes

1: HW60, p.49.

2: Ko56, Chapter I, Section 1, Axiom III, p.2; see also GS97, Chapter 1, Section 1.2, Definition 1.2, p.19.

3: Ko56, Chapter VI, Section 1, Definition 1, p.57, and Section 2, p.58; see also GS97, Chapter 4, Section 4.1, Theorem 4.1, p.140.

4: HW60, p.52, Theorem 59.


Bhupinder Singh Anand

In this post we shall beg indulgence for wilfully ignoring Wittgenstein’s dictum: “Whereof one cannot speak, thereof one must be silent.”

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

\S 1 The Background

In his comments on this earlier post (on the need for a better consensus on the definition of ‘effective computability’), socio-proctologist Vivek Iyer—who apparently prefers to be known here only through this blogpage, and whose sustained interest has almost compelled a more considered response in the form of this post—raised an interesting query (albeit obliquely): whether commonly accepted Social Welfare Functions—such as those based on Muth rationality—could be algorithmically verifiable, but not algorithmically computable:

“We know the mathematical properties of market solutions but assume we can know the Social Welfare Function which brought it about. We have the instantiation and know it is computable but we don’t know the underlying function.”

He later situated the query against the perspective of:

“… the work of Robert Axtell http://scholar.google.com/citations?user=K822uYQAAAAJ&hl=en- who has derived results for the computational complexity class of market (Walrasian) eqbm. This is deterministically computable as a solution for fixed points in exponential time, though easily or instantaneously verifiable (we just check that the market clears by seeing if there is anyone who still wants to sell or buy at the given price). Axtell shows that bilateral trades are complexity class P and this is true of the Subjective probabilities.”

Now, the key paragraph of Robert Axtell’s paper seemed to be:

“The second welfare theorem states that any Pareto optimal allocation is a Walrasian equilibrium from some endowments, and is usually taken to mean that a social planner/society can select the allocation it wishes to achieve and then use tax and related regulatory policy to alter endowments such that subsequent market processes achieve the allocation in question. We have demonstrated above that the job of such a social planner would be very hard indeed, and here we ask whether there might exist a computationally more credible version of the second welfare theorem”…Axtell p.9

Axtell’s aim here seemed to be to maximise in polynomial time the Lyapunov function V(x(t)) (given on Axtell p.7), which is a well-defined number-theoretic formula over the real numbers, to within a specific range of its limiting value over a given set of possible endowment allocations.

Distribution of Resources

Prima facie, Axtell’s problem seemed to lie within a general class of problems concerning the distribution of finite resources amongst a finite population.

For instance, the total number of ways, say P(n, k), in which any budgeted measure of n discrete endowments can be allocated over k agents is given by the partition function P(n, k) (which I wrongly commented upon as being the factorisation function F(n, k) generated by \eta_{k}(s)):

\prod_{i=1}^{\infty}\frac{1}{(1-ax^{i})} = \sum_{n=1}^{\infty}\sum_{k=1}^{\infty}P(n, k)a^{k}x^{n}
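For small n and k, P(n, k) can be computed directly from the standard recurrence P(n, k) = P(n-1, k-1) + P(n-k, k), rather than by expanding the generating function. A Python sketch (mine, for illustration):

    def P(n: int, k: int) -> int:
        # partitions of n into exactly k parts, each part >= 1:
        # P(n, k) = P(n-1, k-1)  (partitions containing a part equal to 1)
        #         + P(n-k, k)    (partitions with every part >= 2)
        table = [[0] * (k + 1) for _ in range(n + 1)]
        table[0][0] = 1
        for m in range(1, n + 1):
            for j in range(1, min(m, k) + 1):
                table[m][j] = table[m - 1][j - 1] + table[m - j][j]
        return table[n][k]

    # e.g. 6 endowments over 3 agents: 4+1+1, 3+2+1, 2+2+2
    print(P(6, 3))  # 3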

We noted that if we take one of the allocations E as the equilibrium (ideal/desired destination) distribution, then a general problem of arriving at an ideal Distribution of Resources from a given starting distribution could ask whether there is always a path, polynomial in P(n, k), such that, starting from any arbitrary distribution, there is a minimum cost for passing (in the worst case) through all the possible distributions irreversibly (where we assume that it is not feasible, under any series of individual rational exchanges, for a particular resource distribution over the population to repeat itself with time).

We assumed there that, for any given set of distributions S_{i}=\{A_{1},\ A_{2},\ \ldots,\ A_{i}\} and any A_{j}, there is a minimum measure m_{S_{i}S_{j}} which determines the cost of progressing from the set of distributions S_{i}=\{A_{1},\ A_{2},\ \ldots,\ A_{i}\} to the set of distributions S_{j}=\{A_{1},\ A_{2},\ \ldots,\ A_{j}\}, where A_{j} is not in S_{i}.

We noted that mathematically the above can be viewed as a variation of the Travelling Salesman problem, where the goal is to minimise the total cost of progressing from a starting distribution A_{1} to the ideal distribution E under the influence of free market forces based on individual rational exchanges, which are regulated only to ensure—through appropriate taxation and/or other economic tools—that repeating a distribution is not feasible.

We further noted that, since the Travelling Salesman Problem is in the complexity class NP, it would be interesting to identify where exactly Axtell’s problem reduces to the computational complexity class P (and where we ignore, for the moment, the P vs NP separation problem raised in this earlier post).
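For intuition, the worst case can be brute-forced for a handful of distributions. The sketch below (an illustration of mine, with the minimum measures m_{S_{i}S_{j}} replaced by an arbitrary cost matrix) finds the cheapest irreversible path that visits every distribution exactly once from a fixed start:

    from itertools import permutations

    def min_cost_path(cost, start):
        # cheapest order in which to visit every distribution exactly once,
        # starting from `start` -- a Hamiltonian-path variant of the TSP
        rest = [i for i in range(len(cost)) if i != start]
        best = float("inf")
        for order in permutations(rest):
            total, here = 0, start
            for nxt in order:
                total += cost[here][nxt]
                here = nxt
            best = min(best, total)
        return best

    # four distributions with arbitrary (asymmetric) transition costs
    cost = [[0, 2, 9, 10],
            [1, 0, 6, 4],
            [15, 7, 0, 8],
            [6, 3, 12, 0]]
    print(min_cost_path(cost, start=0))  # 16, via 0 -> 1 -> 2 -> 3

Brute force is, of course, factorial-time in the number of distributions; the interest of the reduction lies precisely in whether a polynomial-time shortcut exists.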

\S 2 Defining Market Equilibrium in a Simplified Stock Exchange

In order to get a better perspective on the issue of an equilibrium, we shall now speculate on whether we can reasonably define a market equilibrium in a simplified terminating market situation (i.e., a situation which always ends when there is no possible further profitable activity for any participant), where our definition of an equilibrium is any terminal state of minimum total cost and maximum total gain.

For instance, we could treat the population in question as a simplified on-line Stock Exchange with k speculators B_{1},\ B_{2},\ \ldots, B_{k} and n scrips, where both k and n are fixed.

Now we can represent a distribution A_{(t)} at time t by one of the P(n,\ k) ways in which n can be partitioned into k parts as n = a_{1} + a_{2} + \ldots + a_{k}, where a_{i} is the number of scrips (mandatorily \geq 1) held by speculator B_{i} at time t (which can fluctuate freely based on market forces of supply and demand).

We note that the generating function for P(n,\ k) is given by the partition function:

\prod_{i=1}^{\infty}\frac{1}{(1-ax^{i})} = \sum_{n=1}^{\infty}\sum_{k=1}^{\infty}P(n, k)a^{k}x^{n}

One could then conceive of a Transaction Corpus Tax (TCT) as a cess on each transaction levied by the Regulator (such as India’s SEBI) on the Exchange:

\bullet Where TCT is directly recovered by the Exchange from individual speculators along with an appropriate Transaction Processing Fee (TPF) and a periodic Exchange Maintenance Cost (EMC); and

\bullet Where, unless the market is in a terminating situation, the maintenance charge EMC is uniformly leviable periodically, even if no transaction has taken place, to ensure that trading activity never halts voluntarily.

Now, given an initial distribution, say A_{0}, of the scrips amongst the population k, we can assume that any transaction by, say, speculator B_{i} at time t_{j} would incur a TCT cost f_{t_{j}}(A_{j},\ A_{j+1}), plus a TPF and a (possibly discounted) EMC.

Obviously the gain to B_{i} must exceed TCT+TPF+EMC for the trade to be a rational one.

However, what is important to note here (which obviously need not apply in dissimilar cases such as Axtell’s) is that apparently none of the following factors:

(a) the price of the scrip being traded at any transaction;

(b) the quantum of the gain (which need not even be quantifiable in any particular transaction);

(c) the quantum of TPF or EMC;

seems to play any role in reaching the Equilibrium Distribution E_{(n,\ k)} for the particular set of k speculators \{B_{1},\ B_{2},\ \ldots,\ B_{k}\} and n scrips.

Moreover, the only restriction is on TCT, to the effect that:

f_{t_{i}}(A_{i},\ A_{j}) = \infty for any i if j < i

In other words, no transaction can take place that requires a distribution to be repeated.

One way of justifying such a Regulatory restriction would be that if a distribution were allowed to repeat itself, a cabal could prevent market equilibrium by artificially fuelling speculation aimed at merely inflating the valuations of the n scrips over a selected distribution.

By definition, starting with any distribution A_{0}, it would seem that the above activity must come to a mandatory halt after necessarily passing through all the possible P(n,\ k) distributions.

If so, this would reduce the above to a Travelling Salesman Problem TSP if we define the Equilibrium Distribution E_{(n,\ k)} as the (not necessarily unique) distribution which minimises the Total Transaction Corpus Tax to speculators in reaching the Equilibrium through all possible distributions.

In other words, the Equilibrium Distribution is—by definition—the one for which \sum_{i=0}^{P(n,\ k)} f_{t_{i}}(A_{i},\ A_{i+1}) is a minimum; and is one that can be shown to always exist.

\S 3 What do you think (an afterthought)?

Assuming all the speculators B_{1},\ B_{2},\ \ldots, B_{k} agree that their individual profitability over a terminating distribution cycle can be assured if, and only if, the Total Transaction Corpus Tax \sum_{i=0}^{P(n,\ k)} f_{t_{i}}(A_{i},\ A_{i+1}) is minimised (not an unreasonable agreement to seek in a country like India that once had punitive 98% rates of Income Tax), can E_{(n,\ k)} be interpreted as a Nash equilibrium?

In other words, in the worst case where the quantum of Transaction Corpus Tax can be arbitrarily revised and levied with retrospective effect at any transaction, is there a Nash strategy that can guide the set B_{1},\ B_{2},\ \ldots, B_{k} of speculators into ensuring that they eventually arrive at an Equilibrium Distribution E_{(n,\ k)}?

References

Ax03 Robert Axtell. 2003. The Complexity of Exchange.

Mu61 John F. Muth. 1961. Rational Expectations and the Theory of Price Movements. In Econometrica, Vol. 29, No. 3 (July 1961).

Bhupinder Singh Anand
