In this post we shall beg indulgence for wilfully ignoring Wittgenstein’s dictum: “Whereof one cannot speak, thereof one must be silent.”

(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)

\S 1 The Background

In his comments on this earlier post (on the need for a better consensus on the definition of ‘effective computability’), socio-proctologist Vivek Iyer—who apparently prefers to be known here only through this blogpage, and whose sustained interest has almost compelled a more responsible response in the form of this blogpage—raised an interesting query (albeit obliquely) on whether commonly accepted Social Welfare Functions—such as those based on Muth rationality—could possibly be algorithmically verifiable, but not algorithmically computable:

“We know the mathematical properties of market solutions but assume we can know the Social Welfare Function which brought it about. We have the instantiation and know it is computable but we don’t know the underlying function.”

He later situated the query against the perspective of:

“… the work of Robert Axtell http://scholar.google.com/citations?user=K822uYQAAAAJ&hl=en- who has derived results for the computational complexity class of market (Walrasian) eqbm. This is deterministically computable as a solution for fixed points in exponential time, though easily or instantaneously verifiable (we just check that the market clears by seeing if there is anyone who still wants to sell or buy at the given price). Axtell shows that bilateral trades are complexity class P and this is true of the Subjective probabilities.”

Now, the key para of Robert Axtell’s paper seemed to be:

“The second welfare theorem states that any Pareto optimal allocation is a Walrasian equilibrium from some endowments, and is usually taken to mean that a social planner/society can select the allocation it wishes to achieve and then use tax and related regulatory policy to alter endowments such that subsequent market processes achieve the allocation in question. We have demonstrated above that the job of such a social planner would be very hard indeed, and here we ask whether there might exist a computationally more credible version of the second welfare theorem”…Axtell p.9

Axtell’s aim here seemed to be to maximise in polynomial time the Lyapunov function V(x(t)) (given on Axtell p.7; which is a well-defined number-theoretic formula over the real numbers) to within a specific range of its limiting value over a given set of possible endowment allocations.

Distribution of Resources

Prima facie Axtell’s problem seemed to lie within a general class of problems concerning the distribution of finite resources amongst a finite population.

For instance, the total number of ways in which any budgeted measure of n discrete endowments can be allocated over k agents is given by the partition function P(n, k) (which I wrongly commented upon as being the factorisation function F(n, k) generated by \eta_{k}(s)):

\prod_{i=1}^{\infty}\frac{1}{(1-ax^{i})} = \sum_{n=1}^{\infty}\sum_{k=1}^{\infty}P(n, k)a^{k}x^{n}
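As an aside, here is a minimal sketch (in Python, and not part of the original argument) of how P(n, k) can be computed directly, using the standard recurrence P(n, k) = P(n-1, k-1) + P(n-k, k) rather than by expanding the generating function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n: int, k: int) -> int:
    """Number of partitions of n into exactly k positive parts.

    Standard recurrence P(n, k) = P(n-1, k-1) + P(n-k, k):
    either the smallest part equals 1 (remove it), or every part
    exceeds 1 (subtract 1 from each of the k parts).
    """
    if k <= 0 or n < k:
        return 0
    if n == k:          # only the all-ones partition remains
        return 1
    return P(n - 1, k - 1) + P(n - k, k)

# e.g. 7 endowments over 3 agents: 5+1+1, 4+2+1, 3+3+1, 3+2+2
assert P(7, 3) == 4
```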

We noted that if we take one of the allocations, say E, as the equilibrium (ideal/desired destination) distribution, then a general problem of arriving at an ideal Distribution of Resources from a given starting distribution could ask whether there is always a path, polynomial in P(n, k), such that, starting from any arbitrary distribution, there is a minimum cost for passing (in the worst case) through all the possible distributions irreversibly (where we assume that it is not feasible, under any series of individual rational exchanges, for a particular resource distribution over the population to repeat itself with time).

We assumed there that, for any given set of distributions S_{i}=\{A_{1},\ A_{2},\ \ldots,\ A_{i}\} and any A_{j} not in S_{i}, there is a minimum measure m_{S_{i}S_{j}} which determines the cost of progressing from the set of distributions S_{i} to the set of distributions S_{j}=\{A_{1},\ A_{2},\ \ldots,\ A_{j}\}.

We noted that mathematically the above can be viewed as a variation of the Travelling Salesman problem where the goal is to minimise the total cost of progressing from a starting distribution A_{1} to the ideal distribution E, under the influence of free market forces based on individual rational exchanges that are only regulated to ensure—through appropriate taxation and/or other economic tools—that the cost of repeating a distribution is not feasible.
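By way of illustration only (the measures m_{S_{i}S_{j}} are here replaced by a hypothetical cost matrix of our own devising), the minimum-cost path from a starting distribution to the ideal distribution E that passes through every possible distribution exactly once can, for small instances, be found with the Held-Karp dynamic programme familiar from the Travelling Salesman problem:

```python
from itertools import combinations

def min_cost_path(cost, start, end):
    """Held-Karp sketch: cheapest path visiting every index exactly once,
    beginning at `start` and terminating at `end`.
    cost[i][j] is the (hypothetical) cost of moving from distribution i to j."""
    n = len(cost)
    # best[(mask, last)]: cheapest path over the distributions in `mask`,
    # starting at `start` and currently ending at `last`
    best = {(1 << start, start): 0}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            if start not in subset:
                continue
            mask = sum(1 << i for i in subset)
            for last in subset:
                if last == start:
                    continue
                prev = mask ^ (1 << last)
                candidates = [best[(prev, j)] + cost[j][last]
                              for j in subset
                              if j != last and (prev, j) in best]
                if candidates:
                    best[(mask, last)] = min(candidates)
    return best.get(((1 << n) - 1, end), float("inf"))

# Toy instance with four distributions; index 0 is the start, index 3 is E.
m = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
print(min_cost_path(m, start=0, end=3))   # -> 16, via 0 -> 1 -> 2 -> 3
```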

We further noted that, since the Travelling Salesman Problem is in the complexity class NP, it would be interesting to identify where exactly Axtell’s problem reduces to the computational complexity class P (and where we ignore, for the moment, the PvNP separation problem raised in this earlier post).

\S 2 Defining Market Equilibrium in a Simplified Stock Exchange

In order to get a better perspective on the issue of an equilibrium, we shall now speculate on whether we can reasonably define a market equilibrium in a simplified terminating market situation (i.e., a situation which always ends when there is no possible further profitable activity for any participant), where our definition of an equilibrium is any terminal state of minimum total cost and maximum total gain.

For instance, we could treat the population in question as a simplified on-line Stock Exchange with k speculators B_{1},\ B_{2},\ \ldots, B_{k} and n scrips, where both k and n are fixed.

Now we can represent a distribution A_{(t)} at time t by one of the P(n,\ k) ways in which n can be partitioned into k parts as n = a_{1} + a_{2} + \ldots + a_{k}, where a_{i} is the number of scrips (mandatorily \geq 1) held by speculator B_{i} at time t (a holding which can fluctuate freely based on market forces of supply and demand).

We note that the generating function for P(n,\ k) is given by the partition function:

\prod_{i=1}^{\infty}\frac{1}{(1-ax^{i})} = \sum_{n=1}^{\infty}\sum_{k=1}^{\infty}P(n, k)a^{k}x^{n}
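For concreteness (again purely an illustrative sketch, not drawn from the post itself), the P(n,\ k) possible distributions can be enumerated by generating the partitions of n into exactly k parts, each part being some speculator’s holding a_{i} \geq 1:

```python
def partitions(n, k, max_part=None):
    """Yield the partitions of n into exactly k parts (each part >= 1),
    in non-increasing order; each tuple is one possible distribution
    of the n scrips over the k speculators."""
    if max_part is None:
        max_part = n
    if k == 1:
        if 1 <= n <= max_part:
            yield (n,)
        return
    # the largest part must leave at least k - 1 scrips for the rest
    for first in range(min(max_part, n - (k - 1)), 0, -1):
        for rest in partitions(n - first, k - 1, first):
            yield (first,) + rest

print(list(partitions(7, 3)))
# [(5, 1, 1), (4, 2, 1), (3, 3, 1), (3, 2, 2)] -- the P(7, 3) = 4 distributions
```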

One could then conceive of a Transaction Corpus Tax (TCT) as a cess on each transaction levied by the Regulator (such as India’s SEBI) on the Exchange:

\bullet Where TCT is directly recovered by the Exchange from individual speculators along with an appropriate Transaction Processing Fee (TPF) and a periodic Exchange Maintenance Cost (EMC); and

\bullet Where, unless the market is in a terminating situation, the maintenance charge EMC is uniformly leviable periodically, even if no transaction has taken place, to ensure that trading activity never halts voluntarily.

Now, given an initial distribution, say A_{0}, of the scrips amongst the k speculators, we can assume that any transaction by, say, speculator B_{i} at time t_{j} would incur a TCT cost f_{t_{j}}(A_{j},\ A_{j+1}), plus a TPF and a (possibly discounted) EMC.

Obviously the gain to B_{i} must exceed TCT+TPF+EMC for the trade to be a rational one.
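A trivial sketch of that rationality condition, with purely hypothetical figures for the expected gain and the three charges (none of which are specified above):

```python
def is_rational_trade(expected_gain, tct, tpf, emc):
    """A trade is rational for speculator B_i only if the expected gain
    strictly exceeds the total charges levied on the transaction."""
    return expected_gain > tct + tpf + emc

# Hypothetical figures: a gain of 120 against charges of 40 + 15 + 10
print(is_rational_trade(expected_gain=120, tct=40, tpf=15, emc=10))  # True
print(is_rational_trade(expected_gain=60,  tct=40, tpf=15, emc=10))  # False
```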

However, what is important to note here (which obviously need not apply in dissimilar cases such as Axtell’s) is that apparently none of the following factors:

(a) the price of the scrip being traded at any transaction;

(b) the quantum of the gain (which need not even be quantifiable in any particular transaction);

(c) the quantum of TPF or EMC;

seem to play any role in reaching the Equilibrium Distribution E_{(n,\ k)} for the particular set of k speculators \{B_{1},\ B_{2},\ \ldots,\ B_{k}\} and n scrips.

Moreover, the only restriction is on TCT, to the effect that:

f_{t_{i}}(A_{i},\ A_{j}) = \infty for any i if j < i

In other words, no transaction can take place that requires a distribution to be repeated.

One way of justifying such a Regulatory restriction would be that if a distribution were allowed to repeat itself, a cabal could prevent market equilibrium by artificially fuelling speculation aimed at merely inflating the valuations of the n scrips over a selected distribution.

By definition, starting with any distribution A_{0}, it would seem that the above activity must come to a mandatory halt after having necessarily passed through all the possible P(n,\ k) distributions.

If so, this would reduce the above to a Travelling Salesman Problem (TSP) if we define the Equilibrium Distribution E_{(n,\ k)} as the (not necessarily unique) distribution which minimises the Total Transaction Corpus Tax to speculators in reaching the Equilibrium through all possible distributions.

In other words, the Equilibrium Distribution is—by definition—the one for which \sum_{i=0}^{P(n,\ k)} f_{t_{i}}(A_{i},\ A_{i+1}) is a minimum; and is one that can be shown to always exist.
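A brute-force sketch of this definition (illustrative only; the toy Transaction Corpus Tax below, which simply charges the number of scrips changing hands, is entirely hypothetical): the Equilibrium Distribution is the terminal distribution of whichever ordering of all P(n,\ k) distributions, starting from A_{0}, minimises the total TCT, with repetition of a distribution ruled out by construction:

```python
from itertools import permutations

def equilibrium_distribution(distributions, start, tct):
    """Brute-force search over all orderings of the remaining distributions.

    `distributions` lists all P(n, k) possible distributions, `start` is the
    initial distribution A_0, and `tct(a, b)` is the (hypothetical) Transaction
    Corpus Tax for moving from a to b. Returns the terminal distribution of
    the cheapest ordering together with its total TCT."""
    others = [d for d in distributions if d != start]
    best_total, best_terminal = float("inf"), None
    for order in permutations(others):
        # each ordering visits every distribution exactly once, so no
        # distribution can ever repeat along the path
        path = (start,) + order
        total = sum(tct(a, b) for a, b in zip(path, path[1:]))
        if total < best_total:
            best_total, best_terminal = total, path[-1]
    return best_terminal, best_total

# Toy run over the P(7, 3) = 4 distributions of 7 scrips among 3 speculators
dists = [(5, 1, 1), (4, 2, 1), (3, 3, 1), (3, 2, 2)]
toy_tct = lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) // 2
print(equilibrium_distribution(dists, start=(5, 1, 1), tct=toy_tct))
# -> ((3, 2, 2), 3) under this toy tax
```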

\S 3 What do you think (an afterthought)?

Assuming all the speculators B_{1},\ B_{2},\ \ldots, B_{k} agree that their individual profitability over a terminating distribution cycle can be assured if, and only if, the Total Transaction Corpus Tax \sum_{i=0}^{P(n,\ k)} f_{t_{i}}(A_{i},\ A_{i+1}) is minimised (not an unreasonable agreement to seek in a country like India, which once had punitive 98% rates of Income Tax), can E_{(n,\ k)} be interpreted as a Nash equilibrium?

In other words, in the worst case where the quantum of Transaction Corpus Tax can be arbitrarily revised and levied with retrospective effect at any transaction, is there a Nash strategy that can guide the set B_{1},\ B_{2},\ \ldots, B_{k} of speculators into ensuring that they eventually arrive at an Equilibrium Distribution E_{(n,\ k)}?

References

Ax03 Robert Axtell. 2003. The Complexity of Exchange.

Mu61 John F. Muth. 1961. Rational Expectations and the Theory of Price Movements. In Econometrica, Vol. 29, No. 3 (July 1961).

Bhupinder Singh Anand
