*In this post we shall beg indulgence for wilfully ignoring Wittgenstein’s dictum: “Whereof one cannot speak, thereof one must be silent.”*

(*Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.*)

**The Background**

In his comments on this earlier post (on the need for a better consensus on the definition of 'effective computability'), *socio-proctologist* Vivek Iyer (who apparently prefers to be known here only through this blogpage, and whose sustained interest has almost compelled a more responsible response in the form of this blogpage) raised an interesting query, albeit obliquely: whether commonly accepted Social Welfare Functions, such as those based on Muth rationality, could possibly be algorithmically verifiable, but not algorithmically computable:

“We know the mathematical properties of market solutions but assume we can know the Social Welfare Function which brought it about. We have the instantiation and know it is computable but we don’t know the underlying function.”

He later situated the query against the perspective of:

“… the work of Robert Axtell http://scholar.google.com/citations?user=K822uYQAAAAJ&hl=en- who has derived results for the computational complexity class of market (Walrasian) eqbm. This is deterministically computable as a solution for fixed points in exponential time, though easily or instantaneously verifiable (we just check that the market clears by seeing if there is anyone who still wants to sell or buy at the given price). Axtell shows that bilateral trades are complexity class P and this is true of the Subjective probabilities.”

Now, the key para of Robert Axtell’s paper seemed to be:

“The second welfare theorem states that any Pareto optimal allocation is a Walrasian equilibrium from some endowments, and is usually taken to mean that a social planner/society can select the allocation it wishes to achieve and then use tax and related regulatory policy to alter endowments such that subsequent market processes achieve the allocation in question. We have demonstrated above that the job of such a social planner would be very hard indeed, and here we ask whether there might exist a computationally more credible version of the second welfare theorem”…*Axtell p.9*

Axtell’s aim here seemed to be to maximise in polynomial time the Lyapunov function (given on *Axtell p.7*; which is a well-defined number-theoretic formula over the real numbers) to within a specific range of its limiting value over a given set of possible endowment allocations.

Prima facie Axtell’s problem seemed to lie within a general class of problems concerning the distribution of finite resources amongst a finite population.

For instance, the total number of ways, say $p_k(n)$, in which any budgeted measure of $n$ discrete endowments can be allocated (as unordered shares) over $k$ agents is given by the partition function (which I had wrongly commented upon as being a factorisation function):

$$p_k(n) = \#\{\text{partitions of } n \text{ into at most } k \text{ parts}\}$$
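As a quick numerical sketch (the symbols $n$, $k$ and the function name are mine, introduced purely for illustration), this count satisfies the standard partition recurrence, which a short dynamic program computes directly:

```python
def partitions_into_at_most_k_parts(n, k):
    """Count partitions of n identical units into at most k (unordered) parts.

    Recurrence: p(n, k) = p(n, k-1) + p(n-k, k) --
    partitions using fewer than k parts, plus partitions using exactly
    k parts (obtained by first giving each of the k parts one unit).
    """
    # table[j][m] = number of partitions of m into at most j parts
    table = [[0] * (n + 1) for _ in range(k + 1)]
    for j in range(k + 1):
        table[j][0] = 1  # the empty partition of 0
    for j in range(1, k + 1):
        for m in range(1, n + 1):
            table[j][m] = table[j - 1][m]
            if m >= j:
                table[j][m] += table[j][m - j]
    return table[k][n]
```

For example, $n = 5$ units over $k = 3$ agents admits 5 distinct allocations ($5$, $4{+}1$, $3{+}2$, $3{+}1{+}1$, $2{+}2{+}1$).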
We noted that if we take one of the allocations as the equilibrium (ideal/desired destination) distribution, say $D_E$, then a general problem of arriving at an ideal Distribution of Resources from a given starting distribution could ask whether there is always a path, polynomial in $n$ and $k$, such that, starting from any arbitrary distribution, there is a minimum cost for passing (in the worst case) through all the possible distributions irreversibly (where we assume that it is not feasible, under any series of individual rational exchanges, for a particular resource distribution over the population to repeat itself with time).

We assumed there that, for any given set of distributions $S$ and any distribution $d$ not in $S$, there is a minimum measure $c(S, d)$ which determines the cost of progressing from the set of distributions $S$ to the set of distributions $S \cup \{d\}$.

We noted that mathematically the above can be viewed as a variation of the Travelling Salesman problem, where the goal is to minimise the total cost of progressing from a starting distribution $D_0$ to the ideal distribution $D_E$ under the influence of free market forces based on individual rational exchanges, which are regulated only to ensure (through appropriate taxation and/or other economic tools) that repeating a distribution is not feasible.

We further noted that, since the Travelling Salesman Problem is NP-hard, it would be interesting to identify where exactly Axtell's problem reduces to the complexity class P (where we ignore, for the moment, the P versus NP separation problem raised in this earlier post).

**Defining Market Equilibrium in a Simplified Stock Exchange**

In order to get a better perspective on the issue of an equilibrium, we shall now speculate on whether we can reasonably define a market equilibrium in a simplified terminating market situation (i.e., a situation which always ends when there is no possible further profitable activity for any participant), where our definition of an equilibrium is any terminal state of minimum total cost and maximum total gain.

For instance, we could treat the population in question as a simplified on-line Stock Exchange with $k$ speculators and $n$ scrips, where both $k$ and $n$ are fixed.

Now we can represent a distribution at time $t$ by one of the ways, say $D_t$, in which $n$ can be partitioned into $k$ parts as $n = c_1 + c_2 + \ldots + c_k$, where $c_i$ is the number of scrips (mandatorily $c_i \geq 0$) held by speculator $i$ at time $t$ (and which can fluctuate freely based on market forces of supply and demand).

We note that the generating function for $p_k(n)$, the number of partitions of $n$ into at most $k$ parts, is given by:

$$\sum_{n=0}^{\infty} p_k(n)\,x^n = \prod_{j=1}^{k} \frac{1}{1-x^j}$$
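The product form can be checked numerically (a sketch with my own symbols): multiplying out the truncated series for each factor $1/(1-x^j)$ accumulates the counts of partitions into parts of size at most $k$, which by conjugation equal the counts of partitions into at most $k$ parts:

```python
def series_coeffs(k, N):
    """Coefficients up to x^N of prod_{j=1}^{k} 1/(1 - x^j).

    coeffs[n] = partitions of n into parts of size <= k
              = partitions of n into at most k parts (by conjugation).
    """
    coeffs = [1] + [0] * N
    for j in range(1, k + 1):
        # multiplying by 1/(1 - x^j) is a running sum with stride j
        for m in range(j, N + 1):
            coeffs[m] += coeffs[m - j]
    return coeffs
```

For $k = 3$ the coefficients begin $1, 1, 2, 3, 4, 5, \ldots$, matching a direct count of the allocations.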
One could then conceive of a Transaction Corpus Tax ($TCT$) as a cess on each transaction levied by the Regulator (such as India's SEBI) on the Exchange:

Where the $TCT$ is directly recovered by the Exchange from individual speculators along with an appropriate Transaction Processing Fee ($TPF$) and a periodic Exchange Maintenance Cost ($EMC$); and

Where, unless the market is in a terminating situation, the $EMC$ is uniformly leviable periodically, even if no transaction has taken place, to ensure that trading activity never halts voluntarily.

Now, given an initial distribution, say $D_0$, of the $n$ scrips amongst the population of $k$ speculators, we can assume that any transaction by, say, speculator $i$ at time $t$ would incur a cost $TCT$, plus a $TPF$ and a (possibly discounted) $EMC$.

Obviously the gain to speculator $i$ must exceed $TCT + TPF + EMC$ for the trade to be a rational one.

However, what is important to note here (which obviously need not apply in dissimilar cases such as Axtell’s) is that apparently none of the following factors:

(a) the price of the scrip being traded at any transaction;

(b) the quantum of the gain (which need not even be quantifiable in any particular transaction);

(c) the quantum of $TPF$ or $EMC$;

seems to play any role in reaching the Equilibrium Distribution $D_E$ for the particular set of $k$ speculators and $n$ scrips.

Moreover, the only restriction is on the $TCT$, to the effect that:

$TCT = \infty$ for any transaction at time $t$ if the resulting distribution $D_{t+1} = D_s$ for some $s \leq t$.

In other words, no transaction can take place that requires a distribution to be repeated.

One way of justifying such a Regulatory restriction would be that if a distribution were allowed to repeat itself, a cabal could prevent market equilibrium by artificially fuelling speculation aimed at merely inflating the valuations of the scrips over a selected distribution.

By definition, starting with any distribution $D_0$, it would seem that the above activity must come to a mandatory halt after having passed necessarily through all the possible distributions.
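The halting claim can be illustrated with a toy simulation (entirely my own construction, not the post's model): each trade moves one scrip between two speculators, the Regulator vetoes any trade that would recreate a previously seen distribution, and since the state space is finite the activity must halt:

```python
import random

def trade_until_halt(holdings, seed=0):
    """Simulate one-scrip bilateral trades under the no-repeat rule.

    `holdings` is a tuple: holdings[i] = scrips held by speculator i.
    Returns the sequence of distributions visited before the halt."""
    rng = random.Random(seed)
    seen = {holdings}
    history = [holdings]
    while True:
        k = len(holdings)
        # every one-scrip transfer that does not repeat a distribution
        moves = []
        for i in range(k):
            if holdings[i] == 0:
                continue
            for j in range(k):
                if i == j:
                    continue
                nxt = list(holdings)
                nxt[i] -= 1
                nxt[j] += 1
                nxt = tuple(nxt)
                if nxt not in seen:
                    moves.append(nxt)
        if not moves:  # no feasible trade left: mandatory halt
            return history
        holdings = rng.choice(moves)
        seen.add(holdings)
        history.append(holdings)
```

Note that the no-repeat rule by itself only guarantees termination; whether the halted run has passed through *every* possible distribution depends on the sequence of trades.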

If so, this would reduce the above to a Travelling Salesman Problem (TSP) if we **define** the Equilibrium Distribution, say $D_E$, as the (not necessarily unique) distribution which minimises the Total Transaction Corpus Tax to the speculators in reaching the Equilibrium through all possible distributions.

In other words, the Equilibrium Distribution $D_E$ is, by definition, the one for which the Total Transaction Corpus Tax is a minimum; and it is one that can be shown to always exist.
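For a toy instance, this definition can be made concrete by brute force (all names and the cost function are illustrative): enumerate every ordering of the distributions that starts at $D_0$ and visits each exactly once, sum the transition taxes, and take the terminal distribution of the cheapest ordering as $D_E$:

```python
from itertools import permutations

def equilibrium_distribution(dists, d0, cost):
    """Brute-force the definition: among all orderings of `dists` that
    start at d0 and visit every distribution exactly once, find the one
    of minimum total cost; its terminal distribution is D_E.

    `cost(a, b)` gives the Transaction Corpus Tax for moving a -> b."""
    others = [d for d in dists if d != d0]
    best_total, best_end = float("inf"), None
    for order in permutations(others):
        path = [d0] + list(order)
        total = sum(cost(path[i], path[i + 1]) for i in range(len(path) - 1))
        if total < best_total:
            best_total, best_end = total, path[-1]
    return best_end, best_total
```

Since the set of orderings over a finite set of distributions is finite and non-empty, a minimiser (and hence a $D_E$) always exists, though the enumeration is of course factorial in the number of distributions.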

**What do you think (an afterthought)?**

Assuming all the speculators agree that their individual profitability can only be assured over a terminating distribution cycle if, and only if, the Total Transaction Corpus Tax is minimised (not an unreasonable agreement to seek in a country like India, which once had punitive 98% rates of Income Tax), can $D_E$ be interpreted as a *Nash* equilibrium?

In other words, in the worst case, where the quantum of Transaction Corpus Tax can be arbitrarily revised and levied with retrospective effect at any transaction, is there a Nash strategy that can guide the set of speculators into ensuring that they eventually arrive at an Equilibrium Distribution $D_E$?

**References**

**Ax03** Robert Axtell. 2003. *The Complexity of Exchange.*

**Mu61** John F. Muth. 1961. *Rational Expectations and the Theory of Price Movements.* In *Econometrica,* Vol. 29, No. 3 (July 1961).

## 1 comment


November 29, 2013 at 8:16 pm

**indglish**: This is very illuminating and clearly illustrates that the 'ergodic axiom' (Samuelson '69 argued that Economics is only a science if ergodicity holds, and Muth is interpreted by Lucas and other efficient-market proponents in that way) implies a TSP, which is NP-hard.

Can Axtell’s, or some other non-ergodic (i.e. hysteresis laden) approach, really do the same job in polynomial time? Without an EMC type condition there is no reason why the system shouldn’t halt outside the core so let us put that in. But is this enough to ensure that there is a purely k bilateral way of generating everything required by the second fundamental theorem? One reason why this may be so is that ‘tabu search’ for bilateral agents is of polynomial form but not so for the Walrasian auctioneer.

Still, there's a lot more here to chew over; in particular, your post on the P vs NP separation is very thought-provoking in this context.

Many thanks for this