8 Accumulation of what?

Una fides, pondus, mensura sit idem Et status illaesus totius orbis erit — Let there be one single faith, weight and measure, and the world shall be free from harm.

—Guillaume Budé (1468–1540)

His labor took him about one minute to learn.

—Upton Sinclair, The Jungle

Physicists commonly speak of five fundamental quantities: distance, time, mass, electrical charge and heat. These quantities are fundamental in the sense that every other physical quantity can be derived from them. Velocity is distance divided by time; acceleration is the time derivative of velocity; force is mass multiplied by acceleration; etc. The elementary particles of the physical world — be they electrons, quarks, taus, positrons or strings — as well as the relationships between them and the larger structures to which they give rise — are all characterized by these fundamental quantities.92

Political economy pretends to mimic this structure. Here, too, we have elementary particles: the neoclassical util and the Marxist abstract labour. Presumably, these are the basic particles that all higher entities of production, consumption and wealth are made of and can be reduced to. But there is a difference. As we shall see in this chapter, unlike the elementary particles of physics, utils and abstract labour are not counted in terms of fundamental quantities. On the contrary, their quantitative dimensions are derived from production, consumption and wealth; that is, they are deduced from the very phenomena they are supposed to explain. And as if to make a bad situation worse, even this reverse derivation is problematic, not to say impossible, since the assumptions it is based on are patently false.

These considerations serve to further deepen the enigma of capital. We have already seen that political economists find it difficult to theorize why capital accumulates. The measurement riddle shows that they don’t even know what gets accumulated.

## What gets accumulated?

To a lay person, the question may seem simple to answer: money. Capitalists accumulate when they grow richer; they decumulate when they become poorer. And that is certainly true, but not entirely. To see what is missing, suppose that the actual holdings of a capitalist haven’t changed but that their prices have all risen by 10 per cent, thus making him 10 per cent richer. Now assume further that the overall price level — measured by the GDP price deflator — has also grown at the same rate of 10 per cent, so that the ‘amount’ of commodities the capitalist can buy with his assets remains the same. The capitalist has certainly accumulated in nominal terms, but this increase was merely a price phenomenon. Since the process has affected neither the ‘productive capacity’ of his assets nor their ‘purchasing power’, from a material perspective he has ended up right where he started. For this reason, political economists — conservative and critical alike — insist that when measuring accumulation we ignore the price of capital and concentrate only on its material, or ‘real’, quantity.

There is, of course, nothing very unusual about this insistence. After all, political economy is concerned primarily with material processes, so it seems only sensible that the same emphasis should apply to capital. The only problem is that in order to focus on ‘real’ quantities, we first have to separate them from prices; and surprising as it may sound, in general the two cannot be separated.

## Separating quantity from price

To understand the difficulty, let’s put aside the theory for a moment and look at what the statisticians do. Their procedure is straightforward: they assume that the dollar market value of any basket of commodities (MV) is equal to its ‘real’ quantity (Q) times its unit price (P), and then they rearrange the equation. Symbolically, they start from:

$$MV = Q \times P \tag{1}$$

Which is equivalent to:

$$Q = \frac{MV}{P} \tag{2}$$

These formulae are taken to be completely general. They apply to any basket of commodities at any point in time — from the contents of a supermarket cart pushed by a London shopper in 2008, to the annual output of the Chinese economy in 2000, to the global stock of ‘capital goods’ in 1820. Given data on the market value and price of any set of commodities, calculating its ‘real’ quantity and growth rate is a simple matter of plugging in the numbers and computing the results.

To illustrate, suppose the U.S. Bureau of Economic Analysis wishes to calculate the ‘real’ rate of accumulation in the automobile industry from 1990 to 2000. The statisticians know that, over the decade, the market value (at replacement cost) of the industry’s capital stock (MV) grew by 93 per cent, while its unit price (P) rose by 17 per cent. Based on these data, the statisticians can easily tell us that the ‘real’ rate of accumulation — measured by the rate of growth of Q — was 65 per cent (1.93 / 1.17 − 1 ≈ 0.65).93
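The arithmetic behind this deflation is a one-liner; a minimal sketch using the figures just cited:

```python
# 'Real' accumulation via the deflation identity Q = MV / P.
# Figures from the text: the market value (MV) of the auto industry's
# capital stock grew by 93 per cent over 1990-2000, while its unit
# price (P) rose by 17 per cent.
mv_index = 1.93    # MV in 2000 relative to 1990
p_index = 1.17     # P in 2000 relative to 1990

q_growth = mv_index / p_index - 1   # growth of the 'real' quantity Q
print(f"'real' rate of accumulation: {q_growth:.0%}")  # prints: 'real' rate of accumulation: 65%
```

The whole exercise, of course, presupposes that P can be measured independently of Q.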

A clean, simple computation, no doubt, only that it never works.

The failure is as general as the formulae. The calculation fails with ‘capital goods’, just as it fails with GDP, private consumption, gross investment or any other collection of heterogeneous commodities. And the reason is embarrassingly simple. Equation (2) above tells us that in order to compute the quantity we first need to know the price. What it doesn’t say is that in order to know the price we first need to know the quantity. . . .

To see the circularity, consider the following facts. An automotive factory is made of many different tools, machines and structures. Over time, the nature of these items tends to change. They may take less time and effort to produce; they may become more or less ‘productive’ due to technical improvement and wear and tear; their composition may change with new machines replacing older ones; they may be used to produce different and even entirely new output; etc. The result of these many changes is that today’s automobile factories are not the same as yesterday’s, or as last year’s. The price index of automobile factories, however, is supposed to track, over time, the price of the very same factories. The obvious question, then, is: ‘How can such an index be computed when the underlying factories — the “things” whose price the index is supposed to measure — keep changing from one year to the next?’

Clearly, in order to measure the price of capital — or of any other collection of different commodities — we must first denominate its underlying ‘substance’ in some homogeneous units. And here we come to the Cartesian ‘crux of the matter’. Like Heraclitus, we face the permanent flux of an ever-changing capital stock. Yet, in the spirit of Parmenides, we cannot accept this flux since we need an eternally stable entity to price. And so we fall back on the compromise position of Democritus. According to this compromise, capitalist phenomena indeed are ever-changing — but this permanent shift is merely the rearrangement of the capitalist atoma. What the naked eye sees as a maelstrom of different commodities, the political economist interprets as the reshuffling of identical, irreducible particles.

For the neoclassicists, the common substance of commodities is the ‘utils’ they generate.94 Like any other set of commodities, a collection of machines, factories and structures may be heterogeneous and constantly changing. But no matter how varied and variable its components, say the neoclassicists, they can all be reduced to ‘standard efficiency units’, elementary particles of productive capacity counted in terms of the utility they create. In this way, an automobile factory capable of producing 1,000 utils is equivalent to two factories each producing 500 utils. As factories change over time, we can simply measure their changing ‘magnitude’ in terms of their greater or lesser util-generating capacity.

In contrast to the neoclassicists, Marx approached the problem from the input side, arguing that capital, like any other commodity, could be quantified in terms of the socially necessary abstract labour required to produce it. So if we begin with an automotive factory that takes, on average, 10 million hours of abstract labour to construct, and add to it another factory that takes, on average, only 5 million hours to build, we end up with an aggregate capital whose ‘magnitude’ is equivalent to 15 million hours of abstract labour.
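Both reductions, were their units actually measurable, would turn aggregation into simple addition in a common denominator. A minimal sketch using the hypothetical figures above:

```python
# Neoclassical reduction: capital counted in util-generating capacity.
# One factory producing 1,000 utils is equivalent to two factories
# producing 500 utils each.
one_big_factory = 1000            # utils
two_small_factories = 2 * 500     # utils

# Marxist reduction: the same kind of addition, but in socially necessary
# abstract labour hours: a 10-million-hour factory plus a 5-million-hour
# factory yield an aggregate capital of 15 million hours.
aggregate_hours = 10_000_000 + 5_000_000
```

Either way, the arithmetic is trivial; the entire difficulty lies in measuring the units themselves.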

These brave reductions, though, do not help us in the least. In fact, they are totally redundant. After all, had we known the ‘util productivity’ or ‘abstract labour contents’ of capital, this knowledge would already tell us its ‘real’ magnitude, making the whole statistical exercise unnecessary. . . .

## Quantifying utility

### Let the price tell all

The neoclassical notion of universal utility got off to a bad start.95 The initial proclamations were decidedly optimistic. In the eighteenth century, mathematician Daniel Bernoulli stated that ‘people with common sense evaluate money in proportion to the utility they can obtain from it’ and went ahead to build on this conviction the quantitative law of diminishing marginal utility (Bernoulli 1738: 33). In the nineteenth century, Philosophical Radical Jeremy Bentham promised his adherents that, in the utilitarian calculus of pleasure and pain, ‘Prejudice apart, the game of push-pin is of equal value with the arts and sciences of music and poetry’ (1825: Book 3, Ch. 1). He was also convinced that utility could be compared between individuals, though he never bothered to indicate how.

The early neoclassicists, however, weren’t nearly as sanguine. Cooled down by the less egalitarian John Stuart Mill, they didn’t like the idea of interpersonal comparisons (which implied that society could benefit from greater income equality). But their doubts went deeper. They didn’t think the ‘quantity of pleasure’ from a game of pushpin could be compared to that of music and poetry. In fact, they admitted quite openly that universal utility is impossible to measure and, indeed, difficult to even fathom. The interesting thing, though, is that this recognition did not deter them in the least. ‘If you cannot measure, measure anyhow,’ complained their in-house critic Frank Knight (quoted in Bernstein 1996: 219). Although the neoclassicists conceded that utility is unquantifiable, they went ahead to build their entire science of ‘economics’ on the util, an unquantifiable unit. In hindsight, it is hard to think of a more fitting beginning for a successful religion.

In his discussion of utility, Francis Edgeworth, one of the early mathematical economists, admitted that these ‘atoms of pleasure’ do present ‘peculiar difficulties’. Like the invisible Ether, ‘they are not easy to distinguish and discern; more continuous than sand, more discrete than liquid; as it were nuclei of the just-perceivable, embedded in circumambient semi-consciousness’. But not to worry:

We cannot count the golden sands of life; we cannot number the ‘innumerable smile’ of seas of love; but we seem to be capable of observing that there is here a greater, there a less, multitude of pleasure units, mass of happiness; and that is enough.

(Edgeworth 1881: 8–9)

In fact, Edgeworth was so enthralled with the prospect of quantifying the semi-conscious that he offered to build a ‘hedonimeter’ that would continuously register ‘the height of pleasure experienced by an individual’ (ibid.: 101).96 And Edgeworth wasn’t alone in disregarding the odds. His contemporary, Stanley Jevons, was equally ardent:

A unit of pleasure or pain is difficult even to conceive; but it is the amount of these feelings which is continually prompting us to buying and selling, borrowing and lending, labouring and resting, producing and consuming; and it is from the quantitative effects of the feelings that we must estimate their comparative amounts.

Jevons’ last sentence here should strike a chord with neoclassicists. It justifies a logical U-turn that Paul Samuelson would later call ‘revealed preferences’ (Samuelson 1938). The reason for the U-turn is simple. We cannot measure the utility that drives behaviour, but there is always the option of going in reverse. All it takes is to examine actual behaviour (‘the quantitative effects of the feelings’) and then assume that, in a perfectly competitive equilibrium, this behaviour ‘reveals’ the underlying relative utilities (‘comparative amounts’).

And so arose the infamous neoclassical circularity: ‘Utility is the quality in commodities that makes individuals want to buy them, and the fact that individuals want to buy commodities shows that they have utility’ (Robinson 1962: 48, original emphases). This circularity, though, operates only at the theoretical level. At the practical level, neoclassicists go in one direction only: from the phenomena to utility. And the phenomenon they find most revealing is price:

Utility is taken to be correlative to Desire or Want. It has been already argued that desires cannot be measured directly, but only indirectly, by the outward phenomena to which they give rise: and that in those cases with which economics is chiefly concerned the measure is found in the price which a person is willing to pay for the fulfilment or satisfaction of his desire.

And so the path was laid out, and the laity followed — only to find in the end what it assumed in the beginning.

### Finding equilibrium

Let’s trace this path. We’ll use the example of Energy User-Producer, Inc., a hypothetical corporation that owns two types of assets: automotive factories and oil rigs. The company engages in building, acquiring and selling these assets, so over time their numbers change (for simplicity, we assume that the factories and rigs themselves are of the same type and do not change over time, an assumption that we relax in our discussion of hedonic regression below).

The bottom part of Figure 8.1 shows the historical evolution of the firm’s holdings. We can see that in 1970 it owned 33 automotive factories and 20 oil rigs, and that in subsequent years the former number declined, reaching 15 in 2007, while the latter rose to 47 (since this is a hypothetical corporation, the number of factories and rigs is concocted out of thin air, but the same logic would apply had we used actual numbers).

*Note to Figure 8.1:* The number of automotive factories and oil rigs is hypothetical. The ‘quantity’ of capital with a 1970 equilibrium assumes that the ‘util-generating capacities’ of an automotive factory and an oil rig have a ratio of 2:1, while the ‘quantity’ of capital with a 1974 equilibrium assumes that the ratio is 1:1.

It is easy to calculate the company’s dollar market value for any year t. We simply take the number and price of the automotive factories it owns in that year (denoted by N1 and P1, respectively) together with the number and price of its oil rigs (N2 and P2) and plug them into the following expression:

$$MV_t = N1_t \times P1_t + N2_t \times P2_t \tag{3}$$

For instance, if in 1970 an automotive factory cost $40 million and an oil rig $20 million, the company’s 33 factories and 20 oil rigs would be worth $1.72 billion:

$$\$1.72~\text{bn} = 33 \times \$40~\text{mn} + 20 \times \$20~\text{mn} \tag{4}$$

Now, assume that 1970 was a year of perfectly competitive equilibrium. According to the neoclassical scriptures, this happy situation means that the 2:1 ratio between the price of automotive factories and oil rigs ‘reveals’ to us their relative efficiencies. It tells us that an automotive factory generates twice as many universal utils as an oil rig. With this assumption, we can then use Equation (5) to compute the overall ‘quantity’ of capital (Q) owned by Energy User-Producer, Inc. for any year t:97

$$Q_t = N1_t \times \$40~\text{mn} + N2_t \times \$20~\text{mn} \tag{5}$$

The results, normalized to Q1970 = 100, are depicted by the thin line in the upper part of Figure 8.1.98 We can see that the increase in the number of the presumably less productive oil rigs was more than offset by the decline in the number of the presumably more productive automotive factories. As a result, the overall ‘quantity’ of capital fell by 11 per cent over the period.

Of course, there is no particular reason to assume that 1970 was a year of perfectly competitive equilibrium. Any other year could do equally well. So, for argument’s sake, let us pick 1974 as our equilibrium year and see what it means for the computations. Many things happened between 1970 and 1974. The most important for our purpose were probably the threefold rise in the price of crude oil and the accompanying increase in the price of energy-producing equipment, including oil rigs. So, for the sake of illustration, let us assume that, by 1974, the price of oil rigs had doubled to $40 million, while the price of automotive factories remained unchanged at $40 million.
Now, since these are equilibrium prices, the ‘revealed’ efficiency ratio between automotive factories and oil rigs must be 1:1 (as opposed to 2:1 in a 1970 equilibrium). To compute the ‘quantity’ of capital owned by Energy User-Producer, Inc., we would now use Equation (6):

$$Q_t = N1_t \times \$40~\text{mn} + N2_t \times \$40~\text{mn} \tag{6}$$

The results of this computation, again normalized to Q1970 = 100, are plotted by the thick line in the upper portion of Figure 8.1. The difference from the previous calculations is striking: whereas equilibrium in 1970 implies an 11 per cent fall in the ‘quantity’ of capital, equilibrium in 1974 shows a 17 per cent increase. The reason for this divergence is that, compared with the first assumption, the ‘relative efficiency’ of oil rigs is now taken to be twice as large, so each additional oil rig adds to the aggregate util-generating capacity double what it did before.

So now we have a dilemma. Since we are dealing with the same collection of capital goods, obviously only one of these ‘quantity’ series can be correct — but which one? To answer this simple question all we need to know is in which of the two years — 1970 or 1974 — the world was in perfectly competitive equilibrium. The embarrassing truth, though, is that no one can tell.

The consequence of this inability is dire. Depending on the year in which prices were in equilibrium, the divergence between the two measures in Figure 8.1 could be much bigger or much smaller, only we would never know it. Moreover, the predicament goes beyond capital goods. It applies to any heterogeneous basket of commodities, and it grows more intractable the more encompassing the aggregation.

Neoclassical theorists love to assume the problem away by stipulating a world that simply leaps from one equilibrium to the next.
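A short sketch makes the dependence on the chosen equilibrium year explicit. The holdings are the hypothetical endpoint numbers from Figure 8.1; using the endpoints alone, the first computation gives a fall of roughly 10 rather than 11 per cent, since the figure’s full yearly series is not reproduced here:

```python
# Hypothetical holdings of Energy User-Producer, Inc. at the endpoints.
n1 = {1970: 33, 2007: 15}   # automotive factories
n2 = {1970: 20, 2007: 47}   # oil rigs

def quantity(year, p1, p2):
    """'Quantity' of capital under fixed 'equilibrium' prices p1, p2 ($mn)."""
    return n1[year] * p1 + n2[year] * p2

# 1970 equilibrium: factory/rig price ratio of 2:1 ($40mn vs $20mn).
change_1970_eq = quantity(2007, 40, 20) / quantity(1970, 40, 20) - 1

# 1974 equilibrium: ratio of 1:1 ($40mn vs $40mn).
change_1974_eq = quantity(2007, 40, 40) / quantity(1970, 40, 40) - 1

# Same capital goods, opposite verdicts: a fall under 1970 weights,
# a rise of about 17 per cent under 1974 weights.
print(f"{change_1970_eq:+.1%} vs {change_1974_eq:+.1%}")  # prints: -10.5% vs +17.0%
```

Nothing in the data can tell us which set of weights is the ‘true’ one.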
In our illustration, this stipulation of perpetual equilibrium would make both 1970 and 1974 equilibrium years, eliminating the difficulty before it even arose. Unfortunately, though, the assumption doesn’t serve the statisticians. The reason is that, with ongoing equilibrium, all ‘pure’ price changes — i.e. changes that do not correspond to alterations of the commodity — must be attributed to variations in technology or tastes; yet, if tastes keep changing, we lose the basis for temporal comparisons. With consumer preferences having shifted, the same automotive factory may represent a different ‘util-generating capacity’ in 1974 than it did in 1970; similarly, an increase in the number of rigs may denote a decrease as well as an increase in their ‘util-generating capacity’, all depending on the jerky desires of consumers. So once the util dimensions of commodities are no longer fixed, we lose our benchmark and can no longer talk about their ‘real’ quantities.

Therefore, here the statisticians part ways with the theoreticians. They assume, usually implicitly, that equilibrium occurs only infrequently. And then they convince themselves that they can somehow identify these special points of equilibrium, even though there is nothing in the neoclassical manuals to tell them how to do so.

### Quantity without equilibrium

Elusive equilibrium is devastating for measurement. Consider again our example of oil rigs and automotive factories. The first produce energy and the second use it, so their relative prices depend crucially on the global political economy of the Middle East, home to two thirds of the world’s known oil reserves and one third of its daily output.
The difficulty for the statisticians is that this region, with its complex power conflicts between large corporate alliances, regional governments, religious movements and superpowers, never settles into a perfectly competitive equilibrium; but if so, how can we use the relative prices of oil rigs and automotive factories as a measure of their relative ‘quantities’?

And the problem is hardly unique to oil rigs and automotive factories. Indeed, given that the entire world is criss-crossed by huge corporate alliances, complex government formations, contending social groups, mass persuasion, extensive coercion and the frequent use of violence, how can any market ever be in equilibrium? And if that is the case, what then is left of our attempt to quantify the ‘capital stock’ or any ‘real’ magnitude?

The statisticians try to assume the problem away by mentioning it as little as possible. The most recent UN guidelines on the system of national accounts, published in 1993, contain only ten instances of the word ‘equilibrium’, none of which appear in Section XVI on Price and Volume Measures, while the word ‘disequilibrium’ is not mentioned even once in the entire manual (United Nations. Department of Economic and Social Affairs 1993). The more recent OECD document on Measuring Capital (2001) no longer refers to either concept.

Fortunately, an earlier version of the UN national accounts guidelines, published in 1977, before the victory of neoliberalism, was not as tight-lipped. This older version provides occasional advice on how to deal with unfortunate ‘imperfections’, so we can at least get a glimpse of how the statisticians conceive the problem. One ‘special case of difficulty’ identified here is internal, or non-arm’s-length, transactions between related enterprises or branches of the same company.
Since transfer prices set under these conditions may be ‘quite arbitrary’, to use the UN’s wording, the advice is to ‘abandon value [meaning price] as one of the primary measures’ and replace it with a ‘measure of physical quantity’ combined with an estimate of ‘what the equivalent market price would have been’ (United Nations. Department of Economic and Social Affairs 1977: 12). Needless to say, the authors do not explain why we would need prices if we already knew the physical measures. They are also silent on where we could find ‘equivalent’ market prices in a world where all markets have already been contaminated by power.

Another challenge is the repeated need to ‘splice’ different price series when a new product (say an MP3 player) replaces an older one (a CD player). Since the splicing involves a price comparison of two different objects, the guidelines recommend that the replacement ‘should take place at a time when the assumption that the price differences between the two products are proportional to the quality differences is most likely to be true’ (p. 10). In short, splice only in equilibrium. But since nobody knows when that happens (if ever), the guidelines concede that, in practice, the decision ‘must be essentially pragmatic’ — that is, arbitrary.

The impossibility of equilibrium blurs the very meaning of ‘well-being’ — and therefore of the underlying substance that utils supposedly measure. The neoclassical conception of well-being is anchored in the ‘sovereign consumer’, but how can we fathom this sovereign individual in a world characterized by conflict, power, compulsion and brainwashing? How many consumers can remain ‘autonomous’ under such conditions? And if they are not ‘autonomous’, who is in the driver’s seat? Henry Ford is reputed to have said that, ‘If I’d asked people what they wanted, they would have asked for a faster horse’.
But even if we take a less condescending view and assume that consumers do have some autonomy, how do we separate their authentic needs and wants from falsely induced ones?

These are not nitpicking questions of high theory. The ambiguities here have very practical implications. For example, the national accounts routinely add the value of chemical weapons to that of medicine — this under the assumption that, in perfectly competitive equilibrium, one dollar’s worth of the former improves our lives just as much as one dollar’s worth of the latter. Consumers, however, do not live under such conditions, and rarely are they asked whether their country should produce chemical weapons, in what quantities or for what purpose. So how do we know that producing such weapons (Bentham’s pushpins) indeed makes consumers better off in the first place, let alone better off to an extent equal to how well off they would have been had the same dollars been spent on medicine (Bentham’s poetry)? Perhaps the production of chemical weapons undermines their well-being? And if so, should we not subtract chemical weapons from GDP rather than add them to it?99

And it’s not as if medicine and weapons are in any way special. What should we do, for instance, with items such as private security services, cancerous cigarettes, space stations, poisonous drugs, polluting cars, imbecile television programmes, repetitive advertisements and cosmetic surgery? In the absence of perfectly competitive equilibrium, how do we decide which of these items respond to autonomous wants, which ones help create false needs, and which are simply imposed from above? Neoclassicists cannot answer these questions.
Yet, unless they do, all ‘real’ measurements of material aggregates — from the flows of the national accounts to the stocks of capital equipment and personal wealth — remain meaningless.100

### Hedonic regression

The final nail in the neoclassical measurement coffin is the inability to separate changes in price from changes in quality. The gist of the problem is simple. Even if we could do the impossible and somehow measure relative quantities at any given point in time, as time passes the ‘things’ we measure are themselves changing. In our example of Energy User-Producer, Inc., we assumed that the automotive factories and oil rigs remained the same over time and that only their number changed. But in practice this is rarely if ever the case, and the implications for measurement are devastating.

To illustrate the consequences, suppose an oil rig made in 1984 cost 40 per cent more than one made in 1974. Assume further that the new rig is different from the old one, having been altered in order to make it more productive. From a neoclassical perspective, the higher price is at least partly due to the new rig’s greater ‘quantity’ of capital. The question is how big the respective effects are. What part of the increase represents the augmented quantity and what part is a ‘pure’ price change? As the reader must realize by now, the simple answer is that we do not know and, indeed, cannot know.

The statisticians, though, again are forced to pretend. Since the measured items are constantly being transformed, they need to be reduced, again and again, back to their universal units, however fictitious. Luckily (or maybe not so luckily), neoclassical theorists have come up with a standard procedure for this very purpose, fittingly nicknamed ‘hedonic regression’. The idea was first floated in the late 1930s and had a long gestation period.
But since the 1990s, with the advance of cheap computing, it has spread rapidly and is now commonly used by most national statistical services.101

How would a statistician use this procedure to estimate the changing quantity of the oil rig in the above example? Typically, she would begin by specifying the rig’s underlying ‘characteristics’, such as size, capacity, weight, durability, safety, and so on. Next, she would regress variations in the price of the rig against observed changes in these different characteristics and against time (or a constant in cross-section studies); the estimated coefficient associated with each characteristic would then be taken as the weight of that characteristic in the ‘true’ quantity change of the rig, while the time coefficient (or constant) would correspond to the ‘pure’ price change. Finally, she would decompose the observed price change into two parts, one reflecting the increased quantity of capital and the other reflecting the pure change in price.102

Hedonic regressions are particularly convenient since they cannot be ‘tested’, at least not in the conventional econometric sense. Standard econometric models are customarily judged by their overall ‘explanatory power’, measured by such statistics as R² or F-tests, by the ‘significance’ of their coefficients indicated by t-statistics, and by other bureaucratic criteria. Hedonic regressions are different. These models do not simply try to explain variations in price by variations in the commodity’s characteristics. Rather, their purpose is to decompose the overall variation in price into two parts — one attributed to changes in characteristics and a residual interpreted as a ‘pure’ price change. However, since there is no a priori reason to expect one part to be bigger than the other, we have to accept whatever the empirical estimates tell us. If changes in characteristics happen to account for 95 per cent of the price variations, then the ‘pure’ price change must be 5 per cent.
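The mechanics of such a decomposition can be sketched in a few lines. The illustration below uses invented data, a single hypothetical ‘characteristic’ and ordinary least squares via numpy — it is not any statistical agency’s actual model. It regresses log price on the characteristic and a time dummy, then reads the time coefficient as the ‘pure’ price change:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Invented data: one 'characteristic' (say, rig capacity) and a dummy
# marking observations from the later year.
capacity = rng.normal(10.0, 2.0, n)
later_year = rng.integers(0, 2, n).astype(float)

# 'True' data-generating process (known only because we invented it):
# 5% more price per unit of capacity, 10% 'pure' price change over time.
log_price = 0.05 * capacity + 0.10 * later_year + rng.normal(0.0, 0.01, n)

# Hedonic regression: OLS of log price on a constant, the characteristic
# and the time dummy.
X = np.column_stack([np.ones(n), capacity, later_year])
coefs, *_ = np.linalg.lstsq(X, log_price, rcond=None)

quality_coef = coefs[1]     # read as the weight of 'quality' change
pure_price_coef = coefs[2]  # read as the 'pure' price change
```

On invented data the regression recovers the coefficients we planted; on real data there is no known ‘true’ model to recover, which is precisely the difficulty at issue here.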
And if the proportions are reversed, with characteristics accounting for only 5 per cent, then the ‘pure’ price change must be 95 per cent.

This peculiar situation makes hedonic computations meaningful only under very stringent conditions. First, the characteristics that the statistician enumerates in her model must be the ones, and the only ones, that define the items’ ‘quality’ (so that we can neither omit from nor add to the list of specified characteristics). Second, the statistician must use the ‘correct’ regression model (for instance, she must know with certainty that the ‘true’ model is linear rather than log-linear or exponential). And last but not least, the world to which she applies this regression must be in a state of perfectly competitive equilibrium, and the ‘economic agents’ of this world must hold their tastes unchanged for the duration of the estimates. Now, since the model cannot be tested, these three conditions must simply be assumed to hold. Otherwise we end up with meaningless results without ever suspecting as much. Sadly, though, the latter outcome is precisely what we get in practice: the first two conditions can be true only by miraculous fluke, while the third is a social oxymoron.

And since none of the necessary conditions hold, we can safely state that, strictly speaking, all ‘real’ estimates of a qualitatively changing capital stock are arbitrary and therefore meaningless. Indeed, for that matter, every quantitative measure of a qualitatively changing aggregate — including consumption, investment, government expenditure and the GDP itself — is equally bogus. Statements such as ‘real GDP has grown by 2.3 per cent’ or ‘the capital stock has contracted by 0.2 per cent’ have no clear meaning. We may see that these stocks and flows are changing, but we cannot say by how much or even in which direction.

## Quantifying labour values

Do the Marxists avoid these measurement impossibilities?103 On the surface, it looks as though they should.
In contrast to the neoclassicists, whose capital is measured in subjective utility, Marxists claim to be counting it in objective labour units; and so, regardless of how and why capital accumulates, at least its magnitude should be problem free. But it isn’t.

### Concrete versus abstract labour

According to Marx, the act of labour has two distinct aspects, concrete and abstract. Concrete labour creates use value. Building makes a house, tailoring creates a coat, driving creates transportation, surgery improves health, etc. By contrast, abstract labour creates value. Unlike concrete labour, which is unique in its characteristics, abstract labour is universal, ‘a productive activity of human brains, nerves, and muscles … the expenditure of human labour in general … the labour-power which, on average, apart from any special development, exists in the organism of every individual’ (Marx 1909, Vol. 1: 51).

The key question is how to quantify this abstract labour. In the above quotation Marx uses a ‘biological’ yardstick. Abstract labour, he says, is the ‘activity of human brains, nerves and muscles’. But as it stands, this yardstick is highly problematic, for at least two reasons. First, there is no way to know how much of this ‘biological activity’ is embodied in the concrete labour of a cotton picker as compared to that of a carpenter or a CEO. Second, and perhaps more importantly, relying on physiology and biology here goes against Marx’s own insistence that abstract labour is a social category.

Marx resolves this problem, almost in passing, by resorting to another distinction — one that he makes between skilled labour and unskilled, or simple, labour. The solution involves two steps.
The first step establishes a quantitative equivalence between the two types of labour: ‘Skilled labour’, Marx says, ‘counts only as simple labour intensified, or rather as multiplied simple labour, a given quantity of skilled being considered equal to a greater quantity of simple labour’ (ibid.). The second step, a few lines later, ties this equivalence to abstract labour: ‘A commodity’, he asserts, ‘may be the product of the most skilled labour, but its value, by equating it to the product of simple unskilled labour, represents a definite quantity of the latter labour alone’. In other words, abstract labour is equated with unskilled labour, and since skilled labour is merely unskilled labour intensified, we can now express any type of labour in terms of its abstract labour equivalence.

This solution is difficult to accept. The very parity between abstract and unskilled labour seems to contradict Marx’s most basic assumption. For Marx, skilled and unskilled labour are two types of concrete labour whose characteristics belong to the qualitative realm of use value. And yet, here Marx says that labour skills are also related to each other quantitatively and that this relationship is the very basis of value.

This claim can have two interpretations. One is that objective labour value depends on subjective use value — a possibility that Marx categorically rules out in the first pages of Das Kapital. The other interpretation is that there are in fact two types of use value: subjective use value in consumption and objective use value in production. The first type concerns the relationship between people and commodities; as such, it cannot be quantified and is justly ignored. The second type involves the technical relationship of production and hence is fully measurable. Unfortunately, this latter interpretation isn’t convincing either.
As we have seen in Chapter 7, it is impossible to objectively delineate the productive process; but, then, how can we hope to objectively measure something that cannot even be delineated?

Value theorists, though, seem to insist that use value in production can be measured. So, for argument’s sake, let’s accept this claim and assume that unskilled labour indeed is a measurable quantum. What would it take to compute how much of this quantum is embedded in commodities? For this calculation to be feasible, one or both of two conditions must be met.104 The first condition is satisfied when a commodity is produced entirely by unskilled labour. In this case, all we need to do is count the number of socially necessary hours required to make the commodity. A second condition becomes required if production involves different levels and types of skills. In this case, simple counting is no longer possible, and the theory works only if there exists an objective process by which skilled labour can be converted or reduced to unskilled (abstract) labour. Let us consider each condition in turn.

### A world of unskilled automatons?

Begin with the first situation, whereby all labour is unskilled. According to Marx, this condition is constantly generated by capitalism, which relentlessly strives to de-humanize, de-skill and simplify labour, to turn live labour into a universal abstraction, a ‘purely mechanical activity, hence indifferent to its particular form’ (Marx 1857: 297). The abstraction process does not mean that all labour has to be the same. It is rather that capitalism tends to generate and enforce skills that are particularly flexible, easy to acquire and readily transferable.
In this sense, abstract labour is not only an analytical category, but a real thing, created and recreated by the very process of capitalist development.105

This claim, central to the classical Marxist framework and later bolstered by Harry Braverman’s work on Labor and Monopoly Capital (1975), elicited considerable controversy, particularly on historical grounds. Workers, pointed out the critics, were forever resisting their subjugation by capital and in reality prevented labour from being simplified to the extremes depicted by Marx (cf. Thompson 1964; for a review, see Elger 1979).106

According to Cornelius Castoriadis, Marx’s conception of abstract labour was not only historically incorrect, but logically contradictory (1988, Vol. 1: General Introduction). First, if workers indeed were systematically deskilled, degraded and debilitated, asked Castoriadis, how could they possibly become, as Marx repeatedly insisted, the historical bearers of socialist revolution and the architects of a new society? On this description, it seems more likely that they’d end up as raw material for a fascist revolution.107 Second, and crucially for our purpose here, according to Castoriadis capitalism itself cannot possibly survive the abstraction of labour as argued by Marx. The complexity, dynamism and incessant restructuring of capitalist production require not morons, but thinking agents. If workers were ever reduced to automatons, doing everything by the book like the protagonist of Hasek’s The Good Soldier Schweik (1937), capitalist production would come to a halt in no time.

The foregoing argument does not mean that capitalists do not try to automate their world, only that they cannot afford to succeed completely in doing so. In other words, over and above the conflict between capitalists who seek to objectify labour and workers who resist it, there is the inner contradiction of social mechanization itself: the tendency of mechanization to make power inflexible and therefore vulnerable.
As we shall see later in the book, social mechanization in capitalism occurs mostly at the level of ownership, through the capitalization of power, a process which in turn leaves capitalist labour considerable autonomy and qualitative diversity.

Resistance by workers and the richness of their skills are crucial barriers to the measurement of value. Even if a portion of the labour force indeed is ‘unskilled’ in Marx’s sense, few if any commodities are produced solely by such labour. And since most, if not all, labour processes involve some skilled labour, it is obvious that the first condition does not apply. Value cannot be derived simply by counting unskilled hours.108

### Reducing skilled to unskilled labour

The diversity of labour skills leaves us with the second requirement, namely, that there must exist a mechanism to ‘reduce’ the myriad types of skilled labour to units of unskilled (abstract) labour. According to Marx, this mechanism is both omnipresent and obvious: ‘Experience shows that this reduction is constantly being made … established by a social process that goes on behind the backs of the producers. . . .’ (1909, Vol. 1: 51–52). But in fact the process is anything but trivial.

The starting point of the reduction is the assumption that labour skills are created through spending on education and training, on and off the job. In this sense, ‘skilled labour power’ is a commodity like any other — i.e. a product of labour. It is produced by the study and training time of the worker himself, as well as by the labour of those who educate and train him. The value that the skill assumes in this process is equivalent to the socially necessary abstract labour time required for the subsistence of the worker, his teachers and trainers throughout the skill-creation process. Now, skilled labour supposedly creates more value than unskilled labour, and the question is how much more?
Marxists have devoted far less attention to this question than it deserves, and, as a result, their take on the subject hasn’t changed much since the issue was first examined by Marx and Hilferding.109

Marx answered the question from the output side, by pointing to the greater ‘physical productivity’ of skilled labour. His solution, though, is both circular and incomplete. It is circular insofar as physical productivity can be compared across different commodities only by resorting to prices and wages. This reverse derivation would have us conclude that, given their wage differential, a Pfizer chemist earning \$150,000 a year must create 15 times the value of an Intel assembly-line worker whose wage is only \$10,000 — a logic that is reminiscent of Gary Becker’s (1964) ‘human capital’ and that Marx rightly would (or at least should) have rejected. The solution is also incomplete because Marx nowhere explains why the additional value-creating capacity of skilled labour should bear any particular relationship to the labour cost of acquiring the skill. The fact that an engineer trains 10 per cent longer does not mean she will create 10 per cent more value; it could also be 1 per cent, 20 per cent or any other number.

An alternative reduction, drawn from the input side, was offered by Hilferding (1904). Skilled labour, he argued, simply transfers its added production time to the commodity it produces. For instance, if it takes the equivalent of 40,000 hours of unskilled labour to teach and train a brain surgeon who performs 20,000 hours of surgery over her working life, then one hour of her skilled labour is equivalent to 3 hours of unskilled labour ((40,000 + 20,000) / 20,000 = 3).
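Hilferding’s input-side reduction can be expressed as a short computational sketch. The function name is ours, not Hilferding’s, and the figures are the illustrative ones from the example above, not empirical data:

```python
# A minimal sketch of Hilferding's input-side reduction. The idea is to
# spread the skill-creation time over the hours of skilled work actually
# performed. The function name is ours; the numbers are illustrative.

def unskilled_equivalent(training_hours: float, working_hours: float) -> float:
    """Unskilled-labour hours represented by one hour of skilled labour."""
    return (training_hours + working_hours) / working_hours

# A brain surgeon whose training embodies 40,000 unskilled-equivalent
# hours and who performs 20,000 hours of surgery over her working life:
ratio = unskilled_equivalent(40_000, 20_000)
print(ratio)  # 3.0 -- each hour of surgery counts as 3 hours of unskilled labour
```

The sketch also makes the open-endedness of the scheme easy to see: the result depends entirely on which hours one decides to count as ‘training’, which is precisely the problem discussed below.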

However, this solution too turns out to be open-ended. For our purpose, the most important problem is that Hilferding counts as skill-creating value not the total number of hours necessary to create the skill, but only those hours that the worker or his employer ends up paying for. And it is here that the bifurcations of political economy and its equilibrium assumptions again come back to haunt us.

Since in reality the ‘economy’ is neither fully commodified nor separate from ‘politics’, much of the education and training is free — provided by the household, community and government. Moreover, since the ‘economy’ — however defined — is rarely if ever in a fully competitive equilibrium, there is no guarantee that the procured education and training are transacted at value.110

To complicate things further, note that so far we have taken it as a given that one can actually specify the process of ‘producing’ the skill. But can we? Although it seems evident that skills are developed through education, on-the-job learning and broader cultural influences, this is a qualitative joint process, and therefore one that suffers from the intractable indeterminacies examined in Chapters 6 and 7. Last but not least, there can be no easy agreement on what constitutes unskilled labour at any point in time (should we use as our benchmark a UK high-school graduate, an Indian peasant, or an African bushman?). And if that basic unit cannot be specified, where do we start from?111

As a consequence of these difficulties, the Marxists, like the neoclassicists, have ended up without an elementary particle. No one, from Marx onward, has been able to measure the unit of abstract labour, and, with time, fewer and fewer Marxists think it is worth trying. A tiny but dedicated minority continues to work around this handicap, putting much effort and ingenuity into mapping the value structure of capitalist societies without a basic unit of value. But most Marxists, including many who otherwise adhere to the theory’s principles, have abandoned this quest. They endorse the liberal methods and data. They use, often without a second thought, the national accounts of growth and inflation, along with their quantitative estimates of the ‘capital stock’. They see no problem in deflating nominal values by hedonic price indices to obtain ‘real’ quantities, or in using equilibrium-based econometrics to draw dialectical conclusions. Their voice is still Marx’s, but their hands have long been those of the hedonists.112

The retreat from labour in favour of hedonic accounting serves to dilute and blur the basic concepts of Marxism. Without ‘abstract labour’, there can be no ‘labour values’ and therefore no way to define, let alone measure, ‘surplus value’ and ‘exploitation’. And with these basic concepts thrown into question, their negations too become suspect. Neo-Marxists like to highlight the deviations of ‘monopoly capitalism’ (Sweezy 1942; Baran and Sweezy 1966), ‘unequal exchange’ (Emmanuel 1972) and the ‘smoke and mirrors’ of ‘accumulation by dispossession’ (Harvey 2004) from Marx’s ‘expanded reproduction’ under the ‘law of value’. But since this law has no units in which to be denominated (leaving aside its other problems), it isn’t clear what exactly these new social formations deviate from.

Needless to say, no amount of neoclassical data on ‘real’ growth and accumulation can undo this gridlock. Marxists cannot hope to save their method by banking on the very logic they try to undo. They will simply sink deeper into the quicksands of utilitarian impossibilities.

## A clean slate

Is there a way out of these circularities and contradictions? In our view, the answer is yes — but not within the existing framework of political economy. This framework identifies the quantitative ‘essence’ of capital with the material sphere of production and consumption, and that assumption makes its problems insoluble. Utils and abstract labour — even if they did exist in some sense — do not have fundamental quantities that can be measured. They therefore have to be derived in reverse, from the very phenomena they try to explain. And even this inverted derivation falls apart, because it is built on assumptions that are patently false if not logically contradictory.

What we need, then, is not a revision, but a radical change. We need to develop a new political economy based on new methods, new categories and new units. Our own notion of capital as power offers such a beginning.

92. There is a debate on whether all five quantities are in fact ‘fundamental’, and indeed on the very existence of ‘fundamental’ quantities to begin with (see for instance Laughlin 2005). But these deeper questions need not concern us here.

93. The data in this example pertain to the ‘net stock of private fixed assets’ in the sector defined as ‘motor vehicles, bodies and trailers, and parts’; they are taken from the Fixed Asset Tables of the U.S. Bureau of Economic Analysis.

94. The term ‘util’ was coined by Irving Fisher in his doctoral dissertation (1892: 18).

95. For a sympathetic history of utility theory, see Stigler (1950).

96. On the early quest for measurable utility, see Colander (2007).

97. This computation contains an important caveat that must be noted. According to neoclassical theory, the equilibrium price ratio is proportionate to the relative efficiencies of the marginal automotive factory and oil rig, but that isn’t enough for our purpose here. In order to compute the overall ‘quantity’ of capital, we need to know the relative efficiencies of the average factory and rig, and that latter information is not revealed by prices. Neoclassical manuals tell us that the average and marginal efficiencies are generally different, and that save for special assumptions there is no way to impute one from the other. This discrepancy is assumed away by the national account statisticians. The standard practice — which we follow here for the sake of simplicity — is to take equilibrium prices as revealing average efficiencies and utilities. This bypass enables the measurement of ‘real’ quantities, at a minor cost of making the result more or less meaningless.

98. A ‘normalized’ series is calibrated to make its value equal to a given number in a particular ‘base year’ (in this case, 100 in the base year 1970). To achieve this calibration, we simply divide every observation in the series by the value of the series in the base year and multiply the result by 100 (so the normalized series $= Q_t / Q_{1970} \times 100$). The absolute magnitudes of the observations change, but since all the observations are divided and multiplied by the same numbers, their magnitudes relative to each other remain unaltered.
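The calibration described in this note can be sketched in a few lines; the sample quantities are hypothetical:

```python
# Minimal sketch of base-year normalization: rescale a series so that
# it equals 100 at the base observation. The sample figures are
# hypothetical, not the actual BEA data discussed in the text.

def normalize(series, base_index):
    """Rescale a series to equal 100 at the base observation."""
    base = series[base_index]
    return [value * 100 / base for value in series]

quantities = [50.0, 55.0, 60.0]   # say, the years 1970, 1971 and 1972
print(normalize(quantities, 0))   # [100.0, 110.0, 120.0]
```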

99. In practice, the ‘real’ quantity of weaponry is measured in exactly the same way as the ‘real’ quantity of food or clothing — i.e. by deflating its money value by its price (U.S. Department of Commerce. Bureau of Economic Analysis 2005: 33–35). Presumably, this ‘real’ quantity represents the nation’s happiness from annihilating one enemy multiplied by the number of enemies the weapons can kill (plus the collateral damage). The statisticians are understandably secretive on how they actually determine this utilitarian quantum.

100. Alternative ‘green’ measures, such as the Index of Sustainable Economic Welfare (ISEW) and the Genuine Progress Indicator (GPI), do not really solve the quantification problem. These measures, pioneered by Herman Daly and John Cobb (1989), try to recalibrate the conventional national accounts for ‘externalities’ and other considerations. They do so by (1) adding the supposed welfare impact of non-market activities, (2) subtracting the assumed effect of harmful and unsustainable market activities and (3) correcting the resulting measure for income inequalities. Although the aim of these recalibrations is commendable, they remain hostage to the very principles they try to transcend. Not only do they begin with conventional national accounting as raw data, but their subsequent additions and subtractions accept the equilibrium assumptions, logic and imputations of utilitarian accounting (for a recent overview of ‘green accounting’, see Talberth, Cobb, and Slattery 2007).

101. For early studies on the subject, see Court (1939), Ulmer (1949), Stone (1956) and Lancaster (1971). Later collections and reviews include Griliches (1971), Berndt and Triplett (1990), Boskin et al. (1996), Banzhaf (2001) and Moulton (2001). For a critical assessment, see Nitzan (1989).

102. For instance, suppose oil rigs have two characteristics, ‘extracting capacity’ and ‘durability’, and that x1 and x2 represent the temporal rates of change of these two characteristics, respectively. A simple-minded, cross-section hedonic regression could then look like Equation (1):

$p = b_0 + b_1 x_1 + b_2 x_2 + u, ~~~~~~ (1)$

where p is the overall rate of change of the price of oil rigs, b0 is the ‘pure’ price change, b1 and b2 are the respective contributions of changes in ‘extracting capacity’ and in ‘durability’ to changes in the ‘quantity’ of oil rigs, and u is a statistical error term. Now suppose that, based on our statistical estimates, b1 = 0.4 and b2 = 0.6, so that:

$p = b_0 + 0.4 x_1 + 0.6 x_2 ~~~~~~ (2)$

Next, consider a situation in which ‘new and improved’ rigs have twice the ‘extracting capacity’ of older rigs (x1 = 2), the same durability (x2 = 0), and a price tag 40 per cent higher (p = 0.4). Plugging these numbers back into Equation (2), we would then conclude that new rigs have 80 per cent more ‘quantity of capital’ (0.4 × 2 + 0.6 × 0 = 0.8) and that the pure price change must have been a negative 40 per cent (b0 = –0.4).
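The arithmetic of this decomposition can be checked with a short sketch. The coefficients are the hypothetical estimates of Equation (2), and the function name is ours:

```python
# Sketch of the hedonic decomposition in Equations (1) and (2). The
# coefficients are the note's hypothetical estimates (b1 = 0.4,
# b2 = 0.6); the function name is ours, for illustration only.

def hedonic_split(p, x1, x2, b1=0.4, b2=0.6):
    """Split an observed price change p into an imputed 'quantity'
    change (b1*x1 + b2*x2) and a residual 'pure' price change b0."""
    quantity_change = b1 * x1 + b2 * x2
    b0 = p - quantity_change
    return quantity_change, b0

# New rigs: double the extracting capacity (x1 = 2), unchanged
# durability (x2 = 0), and a 40 per cent higher price tag (p = 0.4):
quantity, pure_price = hedonic_split(0.4, 2, 0)
print(quantity, pure_price)  # 0.8 -0.4
```

Note how the ‘pure’ price change is whatever is left over after the estimated coefficients have spoken — which is why everything hinges on those coefficients being ‘true’, the condition criticized in the text.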

103. As noted, most neo-Marxists have abandoned the labour theory of value. As we shall see at the end of this section, however, the question remains relevant to their inquiry as well.

104. Note that in and of themselves, these conditions, although required, may be insufficient.

105. ‘The indifference to the particular kind of labor corresponds to a form of society in which individuals pass with ease from one kind of work to another, which makes it immaterial to them what particular kind of work may fall to their share. . . . This state of affairs has found its highest development in the most modern of bourgeois societies, the United States. It is only here that the abstraction of the category “labor”, “labor in general”, labor sans phrase, the starting point of modern political economy, becomes realized in practice’ (Marx 1911: 299). The slaughter-house worker in Upton Sinclair’s novel The Jungle (1906) needs no more than a minute to learn his job.

106. It is of course true that, over time, the distribution of skills undergoes major shifts, cyclical as well as secular. But a significant portion of these shifts occurs through the training of new workers rather than the quick re-training of existing ones. The rigidity of existing specialization seems obvious for highly skilled workers. Few engineers can readily do the work of accountants, few pilots can just start working as doctors, and few managers can easily replace their computer programmers. Moreover, the rigidity is also true for many so-called blue-collar workers, such as auto mechanics, carpenters, truck drivers and farmers, who could rarely do each other’s work.

107. This was a potential that Mussolini, a self-declared student of Lenin and editor of a socialist tabloid, was quick to grasp. George Mosse (2000), whose father owned a tabloid chain in the Weimar Republic, comments in his memoirs that, unlike the socialists, the fascists didn’t try to impose on workers their intellectual fantasies of freedom. They understood what workers wanted and gave them exactly what they deserved: entertainment, soccer and a strong hand. The dilemma for socialists was expressed, somewhat tongue in cheek, by George Orwell:

If there was hope, it must lie in the proles, because only there, in those swarming disregarded masses, eighty-five percent of the population of Oceania, could the force to destroy the Party ever be generated. . . . [The] proles, if only they could somehow become conscious of their own strength, would have no need to conspire. They needed only to rise up and shake themselves like a horse shaking off flies. If they chose they could blow the Party to pieces tomorrow morning. Surely, sooner or later it must occur to them to do that. And yet –!. . . . Until they become conscious they will never rebel, and until after they have rebelled they cannot become conscious.

(1948: 69–70, original emphasis)

108. One can sidestep the problem by following Braverman (1975), who distinguished ‘manual’ labour from ‘scientific management’ and ‘mental work’. The difficulty here is not only in how to draw the line between the two, but also in what to do with the second group (class?). Unless we continue to consider them as workers, the very emergence and growth of this group imply a built-in counter-tendency in capitalism, one in which labour can be simplified and abstracted only by having some of it transformed into ‘management’ and ‘mental’ activity — a sort of ‘elimination of labour by labour’. Moreover, since this latter type of ‘non-manual’ activity, whether labour or not, cannot be counted in abstract terms, how do we account for its productive contribution to value and surplus value?

109. For a critical assessment of the problematic reduction of skilled to unskilled labour, see Harvey (1985).

110. In the hypothetical situation that the ‘true’ value of acquiring the skill happens to be paid for in full, we end up with multiple rates of exploitation whose magnitudes are inversely related to the cost of acquiring the respective skills. This result, which goes counter to Marx’s assumption of equalizing tendencies, arises because, for Hilferding, the cost of acquiring the skill acts like constant capital: it merely transfers its own value to the commodity.

111. To avoid some of these difficulties, as well as to account for the important role of labour market segmentation, some writers proposed to develop a labour theory of value based on heterogeneous rather than abstract labour (see, for instance, Bowles and Gintis 1977; Steedman 1980). This approach, though, merely substitutes one reduction for another, since we now need to ‘translate’ the value vector into a price scalar. Others, such as Itoh (1987; 1988), suggested moving in the opposite direction by treating an hour of labour uniformly, regardless of its concrete nature. This latter measurement may be justified from the point of view of the worker, though the analytical implication is that we can no longer use labour values to explain capitalist prices and accumulation.

112. Even Shaikh and Tonak (1994), who have tried to devise specifically Marxist national accounts, ended up employing a neoclassical price index in order to convert ‘nominal’ measures of output to their ‘real’ counterparts (Appendix J).