**So where exactly does the buck stop?**

Another reason why Lucas and Penrose should not be faulted for continuing to believe in their well-known Gödelian arguments against computationalism lies in the lack of an adequate consensus on the concept of `effective computability’.

For instance, Boolos, Burgess and Jeffrey (2003: *Computability and Logic*, 4th ed., CUP, p. 37) define a diagonal *halting* function, *d*, any value of which can be computed effectively, although there is no single algorithm that can effectively compute *d*.

“According to Turing’s Thesis, since *d* is not Turing-computable, *d* cannot be effectively computable. Why not? After all, although no Turing machine computes the function *d*, we were able to compute at least its first few values. For since, as we have noted, f₁ = f₂ = f₃ = the empty function, we have d(1) = d(2) = d(3) = 1. And it may seem that we can actually compute d(n) for any positive integer n—if we don’t run out of time.”

… ibid., p. 37.

Now, the straightforward way of expressing this phenomenon should be to say that there are well-defined real numbers that are instantiationally computable, but not algorithmically computable.

Yet, following Church and Turing, such functions are labeled as effectively uncomputable!
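The BBJ phenomenon can be pictured with a toy enumeration of programs (my own encoding, not BBJ’s construction): each value of the diagonal function can be worked out once we settle, by whatever case-by-case reasoning, whether the n-th program halts on input n, yet the step budget used below is exactly the uniform ingredient that no single algorithm can supply.

```python
# Toy illustration (my own encoding, not BBJ's construction): programs for
# a one-register machine, enumerated so that f_n is the function computed
# by PROGRAMS[n].

# Instruction set: ("inc",), ("dec",), ("jnz", target), ("halt",)
PROGRAMS = [
    [("halt",)],                      # f_0: identity (halts at once)
    [("inc",), ("inc",), ("halt",)],  # f_1: x + 2
    [("jnz", 0)],                     # f_2: loops forever when x > 0
]

def run(prog, x, fuel=1000):
    """Run `prog` on register value x for at most `fuel` steps.
    Returns (True, value) if it halts, (False, None) otherwise."""
    pc = 0
    for _ in range(fuel):
        op = prog[pc]
        if op[0] == "halt":
            return True, x
        if op[0] == "inc":
            x += 1; pc += 1
        elif op[0] == "dec":
            x = max(0, x - 1); pc += 1
        elif op[0] == "jnz":
            pc = op[1] if x != 0 else pc + 1
    return False, None

def d(n, fuel=1000):
    """Diagonal function: d(n) = f_n(n) + 1 if f_n(n) halts (here: halts
    within `fuel` steps), and d(n) = 1 otherwise.  The fuel bound is the
    cheat: no single bound works for every possible program, which is why
    no one algorithm computes d, although each individual value can be
    computed once we settle, case by case, whether f_n(n) halts."""
    halted, v = run(PROGRAMS[n], n, fuel)
    return v + 1 if halted else 1
```

Here d(2) comes out as 1 only because we can see, by inspecting program 2, that it never halts on input 2; the interpreter’s fuel bound merely mimics that ad hoc insight.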

The issue here seems to be that, when using language to express the abstract objects of our individual, and common, mental `concept spaces’, we use the word `exists’ loosely in three senses, without making explicit distinctions between them.

First, we may mean that an individually conceivable object exists, within a language L, if it lies within the range of the variables of L. The existence of such objects is necessarily derived from the grammar, and rules of construction, of the appropriate constant terms of the language—generally finitary in recursively defined languages—and can be termed as constructive in L by definition.

Second, we may mean that an individually conceivable object exists, under a formal interpretation of L in another formal language, say L′, if it lies within the range of a variable of L under the interpretation.

Again, the existence of such an object in L′ is necessarily derivable from the grammar, and rules of construction, of the appropriate constant terms of L′, and can be termed as constructive in L′ by definition.

Third, we may mean that an individually conceivable object exists, in an interpretation M of L, if it lies within the range of an interpreted variable of L, where M is a Platonic interpretation of L in an individual’s subjective mental conception (in Brouwer’s sense).

Clearly, the debatable issue is the third case.

So the question is whether we can—and, if so, how we may—correspond the Platonically conceivable objects of various individual interpretations of L, say M, M′, M′′, …, unambiguously to the mathematical objects that are definable as the constant terms of L.

If we can achieve this, we can then attempt to relate these objects to a common external world, and try to communicate effectively about those of our individual mental concepts of the world that we accept as lying, by consensus, in a common, Platonic, `concept-space’.

For mathematical languages, such a common `concept-space’ is implicitly accepted as the collection of individual intuitive, Platonically conceivable, perceptions—M, M′, M′′, …—of the standard intuitive interpretation of Dedekind’s axiomatic formulation of the Peano Postulates.

Reasonably, if we intend a language or a set of languages to be adequate, first, for the expression of the abstract concepts of collective individual consciousnesses, and, second, for the unambiguous and effective communication of those of such concepts that we can accept as lying within our common concept-space, then we need to give effective guidelines for determining the Platonically conceivable mathematical objects of an individual perception of M that we can agree upon, by common consensus, as corresponding to the constants (mathematical objects) definable within the language.

Now, in the case of mathematical languages in standard expositions of classical theory, this role is sought to be filled by the Church–Turing Thesis (CT). Its standard formulation postulates that every number-theoretic function (or relation, treated as a Boolean function) over the natural numbers which can intuitively be termed effectively computable is partial recursive / Turing-computable.

However, CT does not succeed in its objective completely.

Thus, even if we accept CT, we still cannot conclude that we have specified explicitly that the domain of such functions consists only of constructive mathematical objects that can be represented in the most basic of our formal mathematical languages, namely, first-order Peano Arithmetic (PA) and Recursive Arithmetic (RA).

The reason seems to be that CT is postulated as a strong identity which, prima facie, goes beyond the minimum requirements for the correspondence between the Platonically conceivable mathematical objects of M and those of PA and RA.

“We now define the notion, already discussed, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers.”

… Church 1936: An unsolvable problem of elementary number theory, Am. J. Math., Vol. 58, pp. 345–363.

“The theorem that all effectively calculable sequences are computable and its converse are proved below in outline.”

… Turing 1936: On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, ser. 2, vol. 42 (1936–7), pp. 230–265.

This violation of the principle of Occam’s Razor is highlighted if we note (e.g., Gödel 1931: On formally undecidable propositions of Principia Mathematica and related systems I, Theorem VII) that, pedantically, a recursive function (or relation) is not shown to be identical to a unique arithmetical function (or relation), but (*see the comment following Lemma 9 of this paper*) only to be instantiationally equivalent to an infinity of arithmetical functions (or relations).

Now, the standard form of CT only postulates algorithmically computable number-theoretic functions as effectively computable.

It overlooks the possibility that there may be number-theoretic functions and relations which are effectively computable / decidable instantiationally in a Tarskian sense, but not algorithmically.
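One way to picture the distinction (a sketch of my own, not an example from the text): the *bounded* form of a question can be decidable case by case even where no uniform algorithm for the unbounded question is known, so each instance becomes verifiable once a certificate is supplied.

```python
# Sketch (my own illustration): the bounded relation "the Collatz
# iteration from n reaches 1 within t steps" is algorithmically decidable
# for every given pair (n, t), so each instance of "the iteration from n
# terminates" is verifiable in a case-by-case, Tarskian sense once a
# certificate t is supplied; whether any single algorithm settles the
# unbounded question for every n is, for this iteration, a famous open
# problem.

def reaches_one_within(n, t):
    """Decide the bounded relation: does the iteration from n reach 1
    in at most t steps?  Always terminates, for any n >= 1 and t >= 0."""
    for _ in range(t + 1):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False
```

For example, the instance n = 6 is verified by the certificate t = 8 (the iteration 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1 takes eight steps), while no smaller certificate succeeds.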

**References**

**BBJ03** George S. Boolos, John P. Burgess, and Richard C. Jeffrey. 2003. *Computability and Logic.* 4th ed. Cambridge University Press, Cambridge.

**Go31** Kurt Gödel. 1931. *On formally undecidable propositions of Principia Mathematica and related systems I.* Translated by Elliott Mendelson. In M. Davis (ed.). 1965. *The Undecidable.* Raven Press, New York. pp. 5–38.

**Lu61** John Randolph Lucas. 1961. *Minds, Machines and Gödel.* In *Philosophy*, Vol. 36, No. 137 (Apr.–Jul. 1961), pp. 112–127. Cambridge University Press.

**Lu03** John Randolph Lucas. 2003. *The Gödelian Argument: Turn Over the Page.* In *Etica & Politica / Ethics & Politics*, 2003, 1.

**Lu06** John Randolph Lucas. 2006. *Reason and Reality.* Edited by Charles Tandy. Ria University Press, Palo Alto, California.

**Me64** Elliott Mendelson. 1964. *Introduction to Mathematical Logic.* Van Nostrand. pp. 145–146.

**Pe90** Roger Penrose. 1990. *The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics.* Vintage edition. Oxford University Press.

**Pe94** Roger Penrose. 1994. *Shadows of the Mind: A Search for the Missing Science of Consciousness.* Oxford University Press.

**Sc67** Joseph R. Shoenfield. 1967. *Mathematical Logic.* Reprinted 2001. A. K. Peters Ltd., Massachusetts.

**Ta33** Alfred Tarski. 1933. *The concept of truth in the languages of the deductive sciences.* In *Logic, Semantics, Metamathematics: papers from 1923 to 1938* (pp. 152–278). Ed. John Corcoran. 1983. Hackett Publishing Company, Indianapolis.

**Wa63** Hao Wang. 1963. *A Survey of Mathematical Logic.* North-Holland Publishing Company, Amsterdam.

**An07a** Bhupinder Singh Anand. 2007. *The Mechanist’s challenge.* In *The Reasoner*, Vol. 1, No. 5, pp. 5–6.

**An07b** Bhupinder Singh Anand. 2007. *Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – I.* In *The Reasoner*, Vol. 1, No. 6, pp. 3–4.

**An07c** Bhupinder Singh Anand. 2007. *Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – II.* In *The Reasoner*, Vol. 1, No. 7, pp. 2–3.

**An08** Bhupinder Singh Anand. 2008. *Can we really falsify truth by dictat?* In *The Reasoner*, Vol. 2, No. 1, pp. 7–8.

**An12** Bhupinder Singh Anand. 2012. *Evidence-Based Interpretations of PA.* In *Proceedings of the Symposium on Computational Philosophy*, AISB/IACAP World Congress 2012 – Alan Turing 2012, 2–6 July 2012, University of Birmingham, Birmingham, UK.

## 11 comments


November 15, 2013 at 7:54 pm

**indglish**: ‘there are well-defined real numbers that are instantiationally computable, but not algorithmically computable. Yet, following Church and Turing, such functions are labeled as effectively uncomputable!’

Very well written and thought provoking post. I don’t know the ‘Pure’ side but from the Applied side- in Econ, eg. Muth Rationality- it is ‘on the money’! We know the mathematical properties of market solutions but assume we can know the Social Welfare Function which brought it about. We have the instantiation and know it is computable but we don’t know the underlying function.

November 16, 2013 at 6:10 am

**Bhupinder Singh Anand**: Thanks; that is an intriguing observation! Are you suggesting that, for instance:

(a) Muth’s subjective probability distribution of outcomes as reflected in the rational expectation of a human intelligence may reflect the values of functions that are algorithmically verifiable, but not algorithmically computable, arrived at on the basis of non-finitary reasoning;

whereas:

(b) his objective probability distribution of outcomes as reflected in the logical expectation of a mechanical intelligence can reflect only the values of recursive functions that are algorithmically computable, arrived at on the basis of necessarily finitary reasoning?

I ask the question in this loaded manner because I have argued in this post on the EPR paradox that:

(c) quantum probability distribution of outcomes as measured by a human intelligence may reflect the values of functions that are algorithmically verifiable, but not algorithmically computable, arrived at on the basis of non-finitary reasoning;

whereas:

(d) classical probability distribution of outcomes as predicted by a mechanical intelligence can reflect only the values of recursive functions that are algorithmically computable, arrived at on the basis of necessarily finitary reasoning.

Kind regards.

November 17, 2013 at 2:34 am

**Vivek Iyer**: I’m afraid I’m not trained in this field so I can only speak loosely. I’d refer you to the work of Robert Axtell (http://scholar.google.com/citations?user=K822uYQAAAAJ&hl=en), who has derived results for the computational complexity class of market (Walrasian) equilibrium. This is deterministically computable as a solution for fixed points in exponential time, though easily or instantaneously verifiable (we just check that the market clears by seeing if there is anyone who still wants to sell or buy at the given price). Axtell shows that bilateral trades are complexity class P and this is true of the Subjective probabilities. Obviously, a noisy market means there is some error term attaching to the Objective solution but it is random provided markets are ‘efficient’ in the sense of not throwing away any information. Also the assumption is that the market is quite large so each agent is a price-taker so that eliminates ‘strange attractors’ arising from strategic behavior.

There is a problem, here, however. Essentially, we are looking at co-evolved complexity. This means the Kolmogorov baseline language continually shifts. So we don’t know whether what we are ‘computing’ or ‘verifying’ is actually the kind of thing Physicists or Pure Mathematicians are talking about. In particular, we don’t know if what is happening is finitary or deterministic. At any rate, that is my naive take on this as expressed here http://socioproctology.blogspot.co.uk/2013/09/barzakh-bibliothetic-order-borgess.html

I’m assuming that Quantum Mechanics allows the type of co-evolved complexity we see in Econ or Biology, such that computation gets easier because information gets ‘dammed up’ as ‘capacitance diversity’ while there is epigenetic ‘canalisation’. Another way to say the same thing is, just as the Anthropic principle lets us say things about the Universe which we can verify to be true (because we exist here) but not effectively compute, so too with any other co-evolved process.

My apologies for my imprecise language!

November 23, 2013 at 6:01 am

**Bhupinder Singh Anand**:

“Axtell shows that bilateral trades are complexity class P”

Thanks for highlighting this interesting reference to the idealised representation of some real-world economic situations in a mathematical language.

The key para of Robert Axtell’s paper seems to be:

“The second welfare theorem states that any Pareto optimal allocation is a Walrasian equilibrium from some endowments, and is usually taken to mean that a social planner/society can select the allocation it wishes to achieve and then use tax and related regulatory policy to alter endowments such that subsequent market processes achieve the allocation in question. We have demonstrated above that the job of such a social planner would be very hard indeed, and here we ask whether there might exist a computationally more credible version of the second welfare theorem”…

… Axtell, p. 9.

Axtell’s aim seems to be to maximise in polynomial time the Lyapunov function (given on Axtell, p. 7), which is a well-defined number-theoretic formula over the real numbers, to within a specific range of its limiting value over a given set of possible endowment allocations.

**Distribution of Resources**

Prima facie, Axtell’s problem seems to lie within a general class of problems concerning the distribution of finite resources amongst a finite population.

For instance, the total number of ways, say E(m, n), in which any budgeted measure of m discrete endowments can be allocated over n agents (one of which can be taken as the social planner, treated as the repository of the undistributed endowments at any time) is given by the function:

E(m, n) = C(m + n − 1, n − 1).
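Assuming a stars-and-bars reading of the count above (my reconstruction, since the original formula did not survive), it can be checked directly:

```python
# Stars-and-bars count (an assumption of my reconstruction): the number
# of ways to allocate m indistinguishable discrete endowments over n
# distinguishable agents.

from math import comb

def allocations(m, n):
    return comb(m + n - 1, n - 1)
```

For example, 3 endowments over 2 agents admit allocations(3, 2) = 4 distributions: (0, 3), (1, 2), (2, 1) and (3, 0).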

If we take one of these allocations as the equilibrium (ideal/desired destination) distribution, say D_E, then a general problem of arriving at an ideal Distribution of Resources from a given starting distribution could ask whether there is always a path that is polynomial in m and n such that, starting from any arbitrary distribution, there is a minimum social cost for passing (in the worst case) through all the possible distributions irreversibly (where we assume that it is not feasible under any series of individual rational exchanges for a particular resource distribution over the population to repeat itself with time).

We assume here that, for any given sequence of distributions D_1, …, D_k and any distribution D, there is a minimum measure c(D_k, D) which determines the social cost of progressing from the distribution D_k to the distribution D, where D is not among D_1, …, D_k.

Mathematically, the above is merely a variation of the Travelling Salesman problem, where the goal is to minimise the total social cost of progressing from a starting distribution D_1 to the ideal distribution D_E, under the influence of free market forces based on individual rational exchanges that are regulated only by a social planner who ensures—through appropriate taxation and/or other economic tools—that repeating a distribution is never feasible.

Since the Travelling Salesman Problem is in NP, it would be interesting to identify where exactly Axtell’s problem reduces to the computational complexity class P.
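The suggested reduction can be sketched with a brute-force search over hypothetical social costs (toy numbers of my own, not Axtell’s model): finding the cheapest irreversible path through all distributions is exactly a Travelling-Salesman-style optimisation, while checking any single proposed path is easy.

```python
# Brute-force sketch of the reduction (hypothetical social costs, not
# Axtell's model): find the cheapest path that starts at a given
# distribution, visits every distribution exactly once, and terminates
# at the ideal distribution D_E.

from itertools import permutations

def min_cost_path(costs, start, end):
    """costs[i][j]: social cost of moving from distribution i to j.
    Returns (total_cost, path) for the cheapest full path start -> end."""
    n = len(costs)
    middle = [v for v in range(n) if v not in (start, end)]
    best = None
    for order in permutations(middle):
        path = (start, *order, end)
        c = sum(costs[a][b] for a, b in zip(path, path[1:]))
        if best is None or c < best[0]:
            best = (c, path)
    return best

# Four distributions; node 3 plays the role of the equilibrium D_E.
costs = [
    [0, 1, 5, 9],
    [1, 0, 2, 8],
    [5, 2, 0, 3],
    [9, 8, 3, 0],
]
```

Here min_cost_path(costs, start=0, end=3) returns (6, (0, 1, 2, 3)); the exhaustive search over permutations is the exponential part, whereas verifying any one candidate path takes linear time.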

Of course, if Axtell’s Lyapunov function were to represent a natural phenomenon that can only be defined as a quantum probability, then a case could be made out that it is a function that is algorithmically verifiable, but not algorithmically computable.

However, that does not seem to be the case—nor does it seem to be Axtell’s thesis—in this instance.

November 23, 2013 at 5:40 pm

**indglish**: ‘it would be interesting to identify where exactly Axtell’s problem reduces to the computational complexity class P’.

I think this is done through arbitrary assumptions.

Axtell defines ‘individual rationality’ in a way that rules out speculative behavior- i.e. buying or selling on the basis of what you think the Global Market will do- as well as ‘strategic behavior’- i.e. buying or selling on the basis of what you think Global Markets ought to do. Thus he is able to work with Lyapunov functions.

He also arbitrarily introduces other assumptions- equivalent to a hysteresis free middle-man (i.e. someone everybody trades with, neutrally, regardless of their past history with him or of strategic considerations) and the existence of a stable numeraire (money). Essentially, he is adding back into his decentralized model precisely the sort of information which it is very costly to acquire. This is why, for Development to occur, you first have to have a coercive State acting like the Walrasian auctioneer and making horrible mistakes most of the time. The thing won’t just spontaneously arise because bilateral exchanges aren’t ‘open-market’ and are fraught with suspicion and strategic behavior.

Essentially, the ‘bilateral trick’ works by acting like a Maxwell demon partitioning things neatly to get its desired result. But speculative and strategic behavior generally arises because the partition is either fuzzy or contested or both.

Still, Axtell’s paper has value- though k-laterality is no great improvement on the standard model because it assumes what the other provides (viz. a middleman and a numeraire) – in highlighting the dynamics of co-evolved complexity, which may have nothing to do with Lyapunov functions except in degenerate cases- in driving Economic or Biological processes.

It may be- though I am entirely ignorant of any currently promising way forward- that the real problem for General Equilibrium/Welfare Economics is how to re-cast its subject matter in terms of presentations of computably enumerable reals. We believe we can rank possible states of the World (if we can’t, this type of Econ or Political Philosophy is empty) which are not effectively computable and thus whose internal properties we can’t be sure about.

As you say in the body of this post- ‘Reasonably, if we intend a language or a set of languages to be adequate, first, for the expression of the abstract concepts of collective individual consciousnesses, and, second, for the unambiguous and effective communication of those of such concepts that we can accept as lying within our common concept-space, then we need to give effective guidelines for determining the Platonically conceivable mathematical objects of an individual perception of M that we can agree upon, by common consensus, as corresponding to the constants (mathematical objects) definable within the language.’

It may be that the huge volume of literature in this area, some quite mathematically sophisticated or otherwise thought provoking (e.g Chichilnisky) is simply misguided polemics arising out of a semantic confusion of the sort you are pointing out here.

Many thanks for this.

November 25, 2013 at 8:27 am

**Bhupinder Singh Anand**: I am not familiar with the economic concepts that Axtell et al. are attempting to formalise mathematically, so cannot assess whether or not their mathematical representation has a sound model (in the sense of having a consistent physical interpretation).

However, if we assume for the moment that such an interpretation is logically feasible, what I am unable to see is how the various mathematical assumptions (Propositions) that are explicit in Axtell’s paper can lead him to the mathematical conclusion that:

“Proposition 10: The number of interactions is bounded from above by […], therefore the computational complexity of k-lateral exchange is […].”

Now, my understanding of what you have noted regarding the use of a Maxwell demon is that the various assumptions in Axtell’s bilateral trick seem designed only to ensure irreversibility of the changes in the total endowment distribution, so that a particular distribution is never repeated, possibly as a result of the (apparently non-repeatable):

“… continuous, strictly convex preferences, represented for agent i by utility function u_i.” … Axtell, p. 6.

If so, what I am unable to identify is any restriction that would preclude the possibility that Axtell’s trading process must go through all possible distributions before it finally lands by default on the desired (final) equilibrium distribution, from where no other distribution is feasible (I assume that the existence of some equilibrium distribution is assured by other considerations such as `Nash’, etc.).

It is in this sense that I see Axtell’s problem as being (in the worst case) an instance of the Travelling Salesman Problem (TSP).

Reason: It is not difficult to see that, given any node (distribution) D_E as the desired (final) equilibrium node, we can always create a non-intersecting path that starts at any given node (not D_E) and passes through all the other nodes once, and only once, before terminating perforce at the desired (final) equilibrium node D_E.

Now, whilst the problem of creating any such path (even a `minimal’ one) is in P, the TSP problem asserts that creating *the* minimal such path is in NP (where the `minimal’ being measured refers to the costs incurred for incorporating a particular node into a particular path).

What is not clear to me is how Axtell arrives at the conclusion that his restrictions do *not* amount to asking for *the* minimal trading process (path) from a given distribution to the desired (ideal) final distribution.

Incidentally, what did you have in mind whilst remarking:

“… the real problem for General Equilibrium/ Welfare Economics, is how to re-cast its subject matter in terms of presentations of computably enumerable reals”?

As to the issue of “… misguided polemics arising out of a semantic confusion”, isn’t that both an essential and valuable part of the human condition that is driven by the engine of evolution, more particularly in the social sciences?

Kind regards.

November 25, 2013 at 6:54 pm

**indglish**: Sorry, I might have accidentally sent an incomplete reply.

I found this a very thought provoking comment. What is particularly provoking is that it is difficult to clarify one’s intuitions in this matter without ‘throwing out the baby with the bathwater’!

Taking your last point first, I agree that a driver for co-evolved complexity is the fact that Interpretations involve ‘essentially contested concepts’.

My feeling is that Welfare Econ assumes what it needs to prove- viz that computably enumerable reals are undominated. Muth Rationality makes no assumptions on this point. It may be that there is a non-deterministic, or even non-finitary or dialethic or other equally strange solution which dominates anything amenable to algorithmic methods. Vide http://socioproctology.blogspot.co.uk/2013/09/whither-walras.html

Axtell is able to get Prop. 10 because he is assuming that there will always be ‘money’ and a ‘middleman’ at every stage. This is like putting rubber-bands on the trade history vector confining it to Lyapunov stability, quick convergence etc. As you point out, this turns it into TSP or rather Multi-TSP. The joke is that, on Axtell’s assumptions, any agent (provided he is unaware he is the ‘Dictator’) can solve the Social Calculation problem almost instantaneously. There is no need for either bilateral trades or Walrasian tâtonnement!

You may have formed a very poor impression of Economists on the basis of this paper. Yet Axtell is one of the good guys.

Perhaps if more theoretical Economists commit to the sort of program you outline here, they can concentrate on looking at representations- opening up the black box, so to speak- so as to identify concurrency or race hazard type problems which have an analogue in real time Socio-economic processes and relate to open problems in O.R- rather than rehash arguments of an ideological nature which no longer have salience.

Your way of looking at the Second Theorem of Welfare Econ stresses ergodics, or requires a ‘knight’s tour’, so that all possible states can be ranked and intensional behavioural rules can be specified in a manner that yields a Lyapunov energy function. Axtell’s approach highlights wealth-based hysteresis effects which, absent the sort of restrictive assumptions which would make Socialist Calculation trivially easy for a Dictator, suggest- at least to my mind- that the intensional behavioural rules for agents, for Economies which don’t crash or degenerate, are going to have some interesting aesthetic properties- like Lissajous curves- but with the unexpected result that complex harmonics get internalized- in the manner in which Muth rationality solves the problem of the ‘cobweb’- i.e. harmonically oscillating market prices.

I suspect that future generations will find that Muth, rather than the people who got Nobel Prizes using his seminal idea, had anticipated much that will prove engrossing and perhaps rescue Economics from its current debauched state.

Many thanks for your insights in this respect

Best wishes

November 27, 2013 at 2:40 am

**Bhupinder Singh Anand**:

“Axtell is able to get Prop. 10 because he is assuming that there will always be ‘money’ and a ‘middleman’ at every stage. This is like putting rubber-bands on the trade history vector confining it to Lyapunov stability, quick convergence etc. As you point out, this turns it into TSP or rather Multi-TSP.”

I still can’t quite see how Axtell justifies the computational complexity in his Proposition 10, because the problem is always translatable to a TSP under some very basic assumptions that, prima facie, should be common to all economic theories that admit a unique equilibrium in such a case.

Reason: Given a fixed population of n agents with a fixed bank of m discrete endowments, any such theory must have some definition of a feasibility function, say F(D_i, D_j, t), which quantifies the likelihood of the redistribution of endowments from any distribution D_i to any distribution D_j at the redistribution stage t, where it is not necessary that i ≠ j, and where we assume that F(D_i, D_i, t) = 0 for all D_i ≠ D_E (i.e. any distribution other than the equilibrium is unstable).

Clearly, if D_E is the equilibrium distribution (assumed as unique), then by definition F(D_E, D_j, t) = 0 for any D_j ≠ D_E (i.e. the equilibrium distribution is the only stable one, since there is no feasible redistribution from D_E).

The challenge, then, is to define F at any redistribution stage t in such a way that F(D_i, D_j, t) > F(D_i, D_E, t) for any D_j ≠ D_E that has not yet occurred (i.e. any redistribution of D_i to such a D_j always has a positive feasibility compared to the redistribution to D_E).

Under the above assumptions it follows that—no matter what the starting distribution—before landing at the equilibrium distribution, the process of redistribution under such a feasibility function must pass through all the possible distributions defined by:

E(m, n) = C(m + n − 1, n − 1).

So, if we translate the feasibility F(D_i, D_j, t) as the cost involved in travelling from point D_i to point D_j at the corresponding stage of a TSP, then the only difference between the various economic theories is to see which definition of the feasibility function interprets closest to the real-life situation experimentally.
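A toy simulation of the ‘must pass through every distribution’ argument (a construction of my own, with the feasibility preference hard-coded as a rule rather than derived from any economic model):

```python
# Toy simulation (my own construction): if the feasibility rule always
# prefers an unvisited non-equilibrium distribution, and repeating a
# distribution is infeasible, then every run that reaches the absorbing
# equilibrium D_E has first exhausted all the other distributions.

import random

def redistribute(num_dists, equilibrium, seed=0):
    """Walk over distributions 0..num_dists-1, starting from 0 (assumed
    distinct from `equilibrium`), never repeating a distribution."""
    rng = random.Random(seed)
    current, visited, path = 0, {0}, [0]
    while current != equilibrium:
        unvisited = [d for d in range(num_dists) if d not in visited]
        non_eq = [d for d in unvisited if d != equilibrium]
        # feasibility rule: any unvisited non-equilibrium distribution is
        # preferred; only when none remain does the equilibrium become
        # the sole feasible move
        nxt = rng.choice(non_eq) if non_eq else equilibrium
        visited.add(nxt); path.append(nxt); current = nxt
    return path
```

Whatever the random choices, the returned path visits every distribution exactly once and ends at the equilibrium, mirroring the worst-case traversal described above.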

In other words, whether the particular definition of feasibility involves:

“… putting rubber-bands on the trade history vector confining it to Lyapunov stability, quick convergence etc.”,

or whether, as you note:

“… the intensional behavioural rules for agents, for Economies which don’t crash or degenerate, are going to have some interesting aesthetic properties—like Lissajous curves—but with the unexpected result that complex harmonics gets internalized—in the manner in which Muth rationality solves the problem of the ‘cobweb’—i.e. harmonically oscillating market prices”,

the issue appears to be one that—as Axtell’s paper suggests—is capable of resolution experimentally (though perhaps not—as the above perspective suggests—with associated computational complexity as suggested by Axtell in his Proposition 10).

Kind regards.

November 27, 2013 at 8:23 am

**indglish**: I’m afraid I didn’t get your notion of a feasibility function, in particular the equation given, though it seems a promising approach.

Rereading what I myself wrote, I see it wasn’t clear- I should have said something like: ‘to show that k-bilaterality fulfills the Second fundamental theorem (which is a MTSP, as you point out, because it has to visit every possible state otherwise achieved by a Central planner levying lumpsum taxes/subsidies), Axtell constrains trade histories to geometric convergence by allowing agents to belong to only one trading group at any given time. But this means that there is always a group (though its composition might change) which acts like the universal middleman and which establishes a numeraire (money). But this means there is a sort of ‘rubber band’ connecting trajectories to the Core’.

I’m a bit confused about this because I am not used to thinking of Gen Eqbm as analogous to TSP, though clearly there must be something like your feasibility function in some branch of the literature.

Would it be possible for you to either clarify your equation a little or point to a link so I can get an idea of its significance?

Incidentally, re. your mention of Quantum processes, did you come across this fascinating paper? http://www.technologyreview.com/view/522016/quantum-light-harvesting-hints-at-entirely-new-form-of-computing/

As you point out- if Axtell is to prove his contention that k-bilaterality yields an analogue of the Second fundamental theorem then it is MTSP- so how can it be P? I think the answer is because something illicit is going on- viz. the landscape has been fixed in advance and ‘rubber bands’ have been attached to everything. No doubt, co-evolution does achieve this and, as you point out, experimental work may show that the sort of processes going on in ‘light harvesting’ are actually a ubiquitous property of co-evolved systems.

Re. your feasibility function which passes through all possible distributions- I’ve a nagging feeling there is some relevant literature which I’ve looked at many years ago. Perhaps Vela Velupillai has something in this line. I suspect that my ‘applied’ background is preventing me from grasping the central notion here.

Many thanks and best wishes

November 29, 2013 at 6:53 am

**Bhupinder Singh Anand**: Soon after posting my earlier response to your last comment, I realised that your sustained interest compels a more structured blogpage!

Thanks and regards.

November 28, 2013 at 9:34 pm

**Bhupinder Singh Anand**:

“I’m afraid I didn’t get your notion of a feasibility function, in particular the equation given, though it seems a promising approach. … Would it be possible for you to either clarify your equation a little or point to a link so I can get an idea of its significance?”

1. The concept of `feasibility’ as a facilitator is the one used by Axtell, but perhaps a better quantifier for the scenario that I described could be the inverse concept of `cost’ as an inhibitor.

2. What I have in mind is that, for instance, we could treat the population in question as a simplified on-line Stock Exchange with n speculators and s scrips, where both n and s are fixed.

One could then conceive of a Transaction Corpus Tax (TCT) as a cess levied by the Regulator (such as India’s SEBI) on the Exchange.

This is directly recovered by the Exchange from individual speculators along with an appropriate Transaction Processing Fee (TPF) and a periodic Exchange Maintenance Cost (EMC).

The maintenance charge is uniformly leviable periodically, even if no transaction has taken place, which ensures that activity never halts voluntarily.

Now, given an initial distribution, say D_1, of the scrips amongst the population of n speculators, we can assume that any transaction by, say, speculator p at time t would incur a cost TCT, plus a TPF and a (possibly discounted) EMC.

Obviously, the gain to p must exceed TCT + TPF + EMC for the trade to be a rational one.

However, what is important to note here (which obviously need not apply in dissimilar cases such as Axtell’s) is that none of:

(a) the quantity or price of the scrip being traded;

(b) the quantum of the gain (which need not even be quantifiable in any particular transaction);

(c) the quantum of TPF or EMC;

play any role in reaching the Equilibrium Distribution for the particular set of n speculators and s scrips.

Moreover, the only requirement on TCT is that:

TCT is prohibitive at stage t for any transaction whose resulting distribution has already occurred at an earlier stage

(i.e. no transaction can take place that requires a distribution to be repeated).

By definition, starting with any distribution D_1, the above activity only comes to a mandatory halt after having passed through all the possible distributions.

This, now, reduces to a Travelling Salesman Problem (TSP) if we *define* the Equilibrium Distribution as the (not necessarily unique) distribution which minimises the Total Transaction Corpus Tax to speculators in reaching the Equilibrium through all possible distributions.

In other words, the Equilibrium Distribution is—by definition—the one for which the Total Transaction Corpus Tax is a minimum; and is one that can be shown to always exist.
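The definition can be sketched as a brute-force minimisation over repeat-free trading histories (toy Transaction Corpus Taxes of my own, not derived from any market model):

```python
# Brute-force sketch of the definition (toy Transaction Corpus Taxes,
# my own numbers): with repeats forbidden, every admissible trading
# history visits all distributions, and the Equilibrium Distribution is
# *defined* as the terminal distribution of a history minimising the
# Total Transaction Corpus Tax.

from itertools import permutations

def equilibrium_by_min_tct(tct, start):
    """tct[i][j]: Transaction Corpus Tax for moving from distribution i
    to distribution j.  Returns (total_tax, history), minimised over all
    full, repeat-free histories beginning at `start`."""
    n = len(tct)
    best = None
    for order in permutations(v for v in range(n) if v != start):
        hist = (start, *order)
        tax = sum(tct[a][b] for a, b in zip(hist, hist[1:]))
        if best is None or tax < best[0]:
            best = (tax, hist)
    return best

tct = [
    [0, 4, 1, 3],
    [4, 0, 2, 7],
    [3, 0, 2, 7][0:0] or [1, 2, 0, 5],  # row for distribution 2
    [3, 7, 5, 0],
]
```

Here equilibrium_by_min_tct(tct, 0) returns (10, (0, 2, 1, 3)), so distribution 3 is the Equilibrium Distribution under this definition; note the minimum need not be unique, since the history (0, 3, 2, 1) also costs 10.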

Kind regards.