Thursday, November 02, 2006

The worst mistake ever?

Of course, I went ahead and downloaded the new book by Gregory Clark. I have NOT read it- it is 453 pages long- but the opening passages are striking.

The basic outline of world economic history is surprisingly simple. Indeed it can be summarized in one diagram: figure 1.1. Before 1800 income per person – the food, clothing, heat, light, housing, and furnishings available per head – varied across societies and epochs. But there was no upward trend. A simple but powerful mechanism explained in this book, the Malthusian Trap, kept incomes within a range narrow by modern standards.

Thus the average inhabitant in the world of 1800 was no better off than the average person of 100,000 BC. Indeed, most likely, consumption per person declined as we approached 1800. The lucky denizens of wealthy societies such as eighteenth century England or the Netherlands managed a material life style equivalent to the Neolithic. But the vast swath of humanity in East and South Asia, particularly in Japan and in China, eked out a living in conditions that seem to have been significantly poorer than those of cavemen.

The quality of life also failed to improve on any other observable dimension. Life expectancy was the same in 1800 as for the original foragers of the African savannah, 30-35 years at birth. Stature, a measure both of the quality of the diet, and of children’s exposure to disease, was higher in the Neolithic than in 1800. And while foragers likely satisfied their material wants with small amounts of work, the modest comforts of the English in 1800 were purchased only through a life of unrelenting drudgery. Nor did the variety of their material consumption improve. The average forager had a diet, and a work life, much more varied than the typical English worker of 1800 even though the English table by then included such exotics as tea, pepper, and sugar.

Finally hunter-gatherer societies are egalitarian. Material consumption varies little across the members. In contrast great inequality was a pervasive feature of the agrarian economies that dominated the world of 1800. The riches of a few dwarfed the pinched allocation of the masses. Considering even the broadest definition of material life, the trend, if anything, was downward from the Stone Age to 1800. And for the poor of 1800, those who lived on unskilled wages alone, the hunter-gatherer life would have been a clear improvement. Some will object that material living conditions, even including life expectancy and work efforts, give little impression of the other dimensions by which life changed between the Neolithic and 1800: dimensions such as security, stability, and personal safety. But we shall see below that however broadly we picture living conditions, things do not improve before 1800.

This reminded me of the famous article in Discover, by Jared Diamond, arguing that we should never have invented Agriculture.

Scattered throughout the world, several dozen groups of so-called primitive people, like the Kalahari bushmen, continue to support themselves that way. It turns out that these people have plenty of leisure time, sleep a good deal, and work less hard than their farming neighbors. For instance, the average time devoted each week to obtaining food is only 12 to 19 hours for one group of Bushmen, 14 hours or less for the Hadza nomads of Tanzania. One Bushman, when asked why he hadn’t emulated neighboring tribes by adopting agriculture, replied, "Why should we, when there are so many mongongo nuts in the world?"

While farmers concentrate on high-carbohydrate crops like rice and potatoes, the mix of wild plants and animals in the diets of surviving hunter-gatherers provides more protein and a better balance of other nutrients. In one study, the Bushmen’s average daily food intake (during a month when food was plentiful) was 2,140 calories and 93 grams of protein, considerably greater than the recommended daily allowance for people of their size. It’s almost inconceivable that Bushmen, who eat 75 or so wild plants, could die of starvation the way hundreds of thousands of Irish farmers and their families did during the potato famine of the 1840s.

There is evidence that adopting agriculture ruined the health of most people for generations

Skeletons from Greece and Turkey show that the average height of hunter-gatherers toward the end of the ice ages was a generous 5’ 9" for men, 5’ 5" for women. With the adoption of agriculture, height crashed, and by 3000 B. C. had reached a low of only 5’ 3" for men, 5’ for women. By classical times heights were very slowly on the rise again, but modern Greeks and Turks have still not regained the average height of their distant ancestors.


At Dickson Mounds, located near the confluence of the Spoon and Illinois rivers, archaeologists have excavated some 800 skeletons that paint a picture of the health changes that occurred when a hunter-gatherer culture gave way to intensive maize farming around A. D. 1150. Studies by George Armelagos and his colleagues then at the University of Massachusetts show these early farmers paid a price for their new-found livelihood. Compared to the hunter-gatherers who preceded them, the farmers had a nearly 50 per cent increase in enamel defects indicative of malnutrition, a fourfold increase in iron-deficiency anemia (evidenced by a bone condition called porotic hyperostosis), a threefold rise in bone lesions reflecting infectious disease in general, and an increase in degenerative conditions of the spine, probably reflecting a lot of hard physical labor. "Life expectancy at birth in the pre-agricultural community was about twenty-six years," says Armelagos, "but in the post-agricultural community it was nineteen years. So these episodes of nutritional stress and infectious disease were seriously affecting their ability to survive."

Why was agriculture so bad for human health?

There are at least three sets of reasons to explain the findings that agriculture was bad for health.

First, hunter-gatherers enjoyed a varied diet, while early farmers obtained most of their food from one or a few starchy crops. The farmers gained cheap calories at the cost of poor nutrition. (Today just three high-carbohydrate plants–wheat, rice, and corn–provide the bulk of the calories consumed by the human species, yet each one is deficient in certain vitamins or amino acids essential to life.) Second, because of dependence on a limited number of crops, farmers ran the risk of starvation if one crop failed. Finally, the mere fact that agriculture encouraged people to clump together in crowded societies, many of which then carried on trade with other crowded societies, led to the spread of parasites and infectious disease. (Some archaeologists think it was the crowding, rather than agriculture, that promoted disease, but this is a chicken-and-egg argument, because crowding encourages agriculture and vice versa.)

Epidemics couldn’t take hold when populations were scattered in small bands that constantly shifted camp. Tuberculosis and diarrheal disease had to await the rise of farming, measles and bubonic plague the appearance of large cities.

and things get worse

Besides malnutrition, starvation, and epidemic diseases, farming helped bring another curse upon humanity: deep class divisions. Hunter-gatherers have little or no stored food, and no concentrated food sources, like an orchard or a herd of cows: they live off the wild plants and animals they obtain each day. Therefore, there can be no kings, no class of social parasites who grow fat on food seized from others. Only in a farming population could a healthy, non-producing élite set itself above the disease-ridden masses. Skeletons from Greek tombs at Mycenae c. 1500 B. C. suggest that royals enjoyed a better diet than commoners, since the royal skeletons were two or three inches taller and had better teeth (on the average, one instead of six cavities or missing teeth). Among Chilean mummies from c. A. D. 1000, the élite were distinguished not only by ornaments and gold hair clips but also by a fourfold lower rate of bone lesions caused by disease.

Farming may have been even worse for women

Farming may have encouraged inequality between the sexes, as well. Freed from the need to transport their babies during a nomadic existence, and under pressure to produce more hands to till the fields, farming women tended to have more frequent pregnancies than their hunter-gatherer counterparts–with consequent drains on their health. Among the Chilean mummies for example, more women than men had bone lesions from infectious disease.

Women in agricultural societies were sometimes made beasts of burden. In New Guinea farming communities today I often see women staggering under loads of vegetables and firewood while the men walk empty-handed. Once while on a field trip there studying birds, I offered to pay some villagers to carry supplies from an airstrip to my mountain camp. The heaviest item was a 110-pound bag of rice, which I lashed to a pole and assigned to a team of four men to shoulder together. When I eventually caught up with the villagers, the men were carrying light loads, while one small woman weighing less than the bag of rice was bent under it, supporting its weight by a cord across her temples.

So why did our silly ancestors take up agriculture at all? The proposed explanation seems to be based on a kind of group selection.

One answer boils down to the adage "Might makes right." Farming could support many more people than hunting, albeit with a poorer quality of life. (Population densities of hunter-gatherers are rarely over one person per ten square miles, while farmers average 100 times that.) Partly, this is because a field planted entirely in edible crops lets one feed far more mouths than a forest with scattered edible plants. Partly, too, it’s because nomadic hunter-gatherers have to keep their children spaced at four-year intervals by infanticide and other means, since a mother must carry her toddler until it’s old enough to keep up with the adults. Because farm women don’t have that burden, they can and often do bear a child every two years.

As population densities of hunter-gatherers slowly rose at the end of the ice ages, bands had to choose between feeding more mouths by taking the first steps toward agriculture, or else finding ways to limit growth. Some bands chose the former solution, unable to anticipate the evils of farming, and seduced by the transient abundance they enjoyed until population growth caught up with increased food production. Such bands outbred and then drove off or killed the bands that chose to remain hunter-gatherers, because a hundred malnourished farmers can still outfight one healthy hunter. It’s not that hunter-gatherers abandoned their life style, but that those sensible enough not to abandon it were forced out of all areas except the ones farmers didn’t want.

I am much more optimistic about the future than Diamond is in this article. He seems to expect the return of what Gregory Clark calls the Malthusian Trap. I suspect that now that we know the trick of getting out of that trap, we are unlikely to go back there.

The Very Origins

Tyler Cowen's latest "Economics Scene" article in the New York Times is about the oldest question in Economics: the Nature and Causes of the Wealth of Nations. He describes the work of Gregory Clark at the University of California, Davis.

In “A Farewell to Alms: A Brief Economic History of the World” (forthcoming, Princeton University Press), Gregory Clark, an economics professor at the University of California, Davis, identifies the quality of labor as the fundamental factor behind economic growth. Poor labor quality discourages capital from flowing into a country, which means that poverty persists. Good institutions never have a chance to develop.

Tyler believes that

Professor Clark’s analysis counters Jared M. Diamond, who in his “Guns, Germs and Steel” (W. W. Norton & Company, 1999) located the ultimate sources of European advantage in geography, like safety from tropical diseases, and a greater number of available animals that could be domesticated.

I cannot agree- Diamond's book was about the very long-term development of various continents: about why Eurasia developed so differently from Africa, Australia and the Americas over the past 10,000 years. Diamond's thesis is that Eurasia was caught in a race between technology and population, and standards of living consequently stagnated, while other continents did not manage even this. Their technology and populations were stagnant, and in Australasia, technology actually regressed. He tries to explain this difference ("dynamic" stagnation in Eurasia, and "static" stagnation in the other continents).

However, he quotes some very interesting examples from Gregory Clark's book (emphasis added)

As early as the 19th century, textile factories in the West and in India had essentially the same machinery, and it was not hard to transport the final product. Yet the difference in cultures could be seen on the factory floor. Although Indian labor costs were many times lower, Indian labor was far less efficient at many basic tasks.

For instance, when it came to “doffing” (periodically removing spindles of yarn from machines), American workers were often six or more times as productive as their Indian counterparts, according to measures from the early to mid-20th century. Importing Western managers did not in general narrow these gaps. As a result, India failed to attract comparable capital investment.


An independent estimate by two economics professors at the University of Wisconsin, Madison, Rodolfo E. Manuelli and Ananth Seshadri, (“Human Capital and the Wealth of Nations”) suggests that if variations in the quality of labor across nations are taken into account, other productivity factors need differ by only 27 percent to explain differences in per capita income.

Professor Clark questions whether the poorest parts of the world will ever develop. Japan has climbed out of poverty, and now China is improving rapidly, but Dr. Clark views these successes as built upon hundreds of years of earlier cultural foundations. Formal education is no panacea, since well-functioning institutions are needed for it to be effective.

I am sure that this is not as simple as Tyler and Clark say it is, and I was reminded of this post from Stumbling and Mumbling, on the problems which confronted early European Industrialists.

Evidence from pre-capitalist society suggests that people stop working once they have achieved subsistence. Andre Gorz describes how this blighted early factories:

The worker did not ask: ‘how much can I earn in a day if I do as much work as possible?’ but: ‘how much must I work in order to earn the wage which I earned before and which takes care of my traditional needs?’

The unwillingness of the workers to do a full day’s labour, day after day, was the principal reason why the first factories went bankrupt. (Critique of Economic Reason, p21.)

It was also, he says, the reason why factory owners wanted child labour; only children were sufficiently pliable.

I suspect we are as far as ever from answering the old question.

Investing II

The New York Times article which I cited in my previous post mentioned index funds that employ non-conventional weighting methodologies, for fear of over-investing in bubbles. Over at Stumbling and Mumbling, Chris Dillow adds much wisdom.

An equal-weighted basket of FTSE 350 stocks has hugely out-performed the capitalization-weighted index - 36% against 21%. This is because the cap-weighted index has been dragged down by poor performance by mega-cap stocks, Glaxo and Vodafone, and because mid-cap stocks have out-performed larger ones. Our letter baskets benefit from this equal-weighting. But fund managers, being closet trackers, have lost by being roughly cap-weighted.
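The effect Dillow describes is just arithmetic: mega-cap losers dominate a cap-weighted average but count for only one vote in an equal-weighted one. A toy sketch (the names, market caps, and returns below are made up for illustration, not FTSE data):

```python
# Toy illustration: the same four stocks, averaged two ways.
# All figures are hypothetical.
stocks = {
    # name: (market_cap_in_billions, period_return)
    "MegaCapA": (120.0, -0.05),  # mega-caps with poor returns...
    "MegaCapB": (100.0, -0.02),
    "MidCapA":  (10.0,   0.25),  # ...while mid-caps out-perform
    "MidCapB":  (8.0,    0.30),
}

total_cap = sum(cap for cap, _ in stocks.values())

# Cap-weighted: each stock's return weighted by its share of total market cap.
cap_weighted = sum((cap / total_cap) * r for cap, r in stocks.values())

# Equal-weighted: a simple average, one vote per stock.
equal_weighted = sum(r for _, r in stocks.values()) / len(stocks)

print(f"cap-weighted return:   {cap_weighted:+.1%}")    # dragged down by the mega-caps
print(f"equal-weighted return: {equal_weighted:+.1%}")  # lifted by the mid-caps
```

With these numbers the cap-weighted basket loses about 1.3% while the equal-weighted one gains 12%, which is the mechanism behind Dillow's 36% versus 21%.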

But beware

As Isaac Tabner says, over the long-run, cap-weighted indices beat equal-weighted ones.

And he signs off with a very smart point indeed.

There's a big difference between being rational and being right.
As my new chum Daniel Finkelstein says, you cannot judge the quality of a decision by its outcome.

Read the whole thing- it's the kind of writing you won't often encounter in the mainstream press.

Wednesday, November 01, 2006


This morning, I spoke to an "Investment advisor" at a large mutual fund company which already has some of my money invested in Index Funds. He strongly urged me to buy one of their actively-managed funds, claiming that they offer wonderful returns. I demurred, and said I was looking at adding to my stake in the Index Fund. He immediately turned around and said that I should diversify my investments- had I considered looking at other Mutual Funds?
He was actually asking me to go to another company! I suspect they are making no money from my investment, even though my assets have more than doubled in the past 18 months.
This article in the New York Times may be relevant.

This year through September, only 28.5 percent of actively managed large-capitalization funds — which try to beat the market through stock selection — were able to outpace the S.& P. 500 index of large-cap stocks, according to a new study by S.& P. In the third quarter alone, it was even worse, with only one in five actively managed large-capitalization funds beating the index.

That isn’t terribly surprising, said Rosanne Pane, mutual fund strategist at S.& P., because active managers tend to have difficulty beating indexes when market leadership changes. And in the third quarter, many stocks that had paced the market for much of this decade began to fall behind. Small-company stocks were finally beaten by shares of big, blue-chip companies; sectors like energy also started to lose ground.

Still, such transitional periods aren’t the only good times for indexing. S.& P. research shows that while active management fared poorly in the third quarter, it has actually been lagging behind the indexes for a considerable period.

Over the five years through the end of the third quarter — a span that included both bull and bear markets — only 29.1 percent of large-cap funds managed to beat the S.& P. 500. What’s more, only 16.4 percent of mid-cap funds beat the S.& P. 400 index of mid-cap stocks, and 19.5 percent of small-cap funds outpaced the S.& P. 600 index of small-company shares. “The long term does seem to favor the indexes,” Ms. Pane said.

However, that's not the only reason Index Funds shine (emphasis added)

For John C. Bogle, founder of the Vanguard Group, which started the first retail stock index fund 30 years ago, the recent success of indexing is self-evident.

“The reality is, fads come and go and styles of investing come and go,” he said. “The only things that go on forever are costs and taxes.” And by simply buying all the stocks in an equity benchmark and holding them for the long run, traditional index funds minimize the transaction costs and capital gains taxes associated with investing, he said.

Mr. Bogle argued that while indexing grew in popularity in the late 1990s — when the Vanguard 500 Index fund, which tracks the S.& P. 500, was consistently returning more than 20 percent a year — the strategy is even more valuable in a period of modest returns. If equities gain only 6 or 7 percent annually in the coming years, the higher investment management fees, transaction costs and taxes associated with actively managed portfolios will take a disproportionate bite out of a fund’s gross returns, he said.

Yet it’s precisely during these stretches of modest returns when individual investors tend to take indexing for granted. After all, there’s nothing sexy about earning mid-single-digit returns through an index fund.
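Bogle's cost argument is easy to illustrate with made-up numbers: a fixed annual fee claims a far larger slice of a 6.5% year than of a 20% year, and the shortfall compounds. The fee gap below is an assumption for illustration, not any particular fund's expense ratio:

```python
# Hypothetical numbers: how a fixed annual fee bites harder in low-return years.
fee = 0.013   # assumed extra annual cost of an active fund over an index fund
years = 20

for gross in (0.20, 0.065):  # a boom-era year vs a "modest returns" year
    # Slice of a single year's return consumed by the extra fee.
    share = fee / gross
    # Fraction of final wealth lost to the fee after compounding for 20 years.
    drag = 1 - ((1 + gross - fee) / (1 + gross)) ** years
    print(f"gross {gross:.1%}: fee eats {share:.1%} of the year's return, "
          f"{drag:.1%} of final wealth over {years} years")
```

At 20% gross the fee skims about a sixteenth of the return; at 6.5% it skims a fifth, which is Bogle's "disproportionate bite".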

There are alternatives to traditional Index Funds, which invest in shares in proportion to their market capitalization.

Research Affiliates, an asset management firm in Pasadena, Calif., has built its own set of indexes that get around the problem of market-cap weightings.

The Research Affiliates Fundamental Indexes, or RAFI, instead weigh stocks on other factors such as sales, book value, free cash flow and dividends.

Jason Hsu, director of research and investment management at Research Affiliates, says that this prevents an index from becoming too oriented toward the fastest-growing and largest-capitalization stocks, for example, just because of market momentum.

Essentially, this could save the portfolio from over-investing in inflated stocks. Should be a good diversification tool. Would also like to get hold of some REITs. Where are they?

Growing World

Robert Shiller has a nice article in Project Syndicate, describing what he has learnt from the new Penn World Table, version 6.2.

Among the 82 countries for which 2004 data are now available, there has been really good news: real per capita GDP has risen by an average of 18.9% between 2000 and 2004, or 4.4% per year. People generally are a lot better off than they were just a few years ago. At this rate, real per capita GDP will double every 16 years.
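The compounding in that paragraph checks out; a quick sketch, using only the figures from the quote:

```python
import math

# Figures from the quote: real per capita GDP rose 18.9% over 2000-2004.
total_growth = 0.189
years = 4

# Annualized growth rate implied by the four-year total.
annual = (1 + total_growth) ** (1 / years) - 1

# Years to double at that constant rate.
doubling_time = math.log(2) / math.log(1 + annual)

print(f"annual growth: {annual:.1%}")            # ~4.4% per year, as quoted
print(f"doubling time: {doubling_time:.0f} years")  # ~16 years, as quoted
```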

However, there has been little change in the relative ranking of countries.

Despite all the talk about the Chinese economic miracle, China’s ranking has risen only slightly, from 61st out of 82 countries in 2000 to 60th in 2004 – even though per capita real GDP grew by 44% between 2000 and 2004, or 9.6% a year, the highest of the major countries.

The reason China has not risen higher is that other countries have been growing too, and because the gaps between countries are enormous. The range between the poorest and the richest countries in the world is a factor of more than 100. The average real per capita GDP of the top 25% of countries is 15 times that of the bottom 25%.


If such growth rates continue, we will see relatively poor countries like India, Indonesia, the Philippines, or Nicaragua reach the average levels currently enjoyed by advanced countries in 50 years. But, of course, they will not have caught up with these countries, for those countries will have moved ahead too.

I myself cannot be too concerned by this. I think it's more important to keep the poor countries growing, and get their citizens to feel optimistic about the future. This would not only save lives which would otherwise be lost to cholera, dysentery, and malaria, but would also help temper ethnic conflict as people escape the straitjacket of "zero-sum" thinking. This is the same reason I am relatively sanguine about inequality in wealthy countries.

Shiller then hearkens back to the work of Galbraith, and asks what goods account for all this growth.

But real per capita GDP in the US is now three times higher than it was in 1958. What have people been spending all that extra money on? Is it all dictated by advertisers and salesmen who are inventing needs?

According to my calculations comparing 1958 and 2005 data from the US Department of Commerce, Americans spent 27% of the huge increase in income between 1958 and 2005 on medical care, 23% on their homes, 12% on transportation, 10% on recreation, and 9% on personal business activities. The kinds of things that advertisers and salesmen typically promote were relatively unimportant. Food got only 8% of the extra money, clothing only 3%, and personal care 1%. Unfortunately, idealistic activities also received little of the extra money: 3% for welfare and religious activities, and a similar share for education.

Thus, most of the extra money was spent on staying healthy, having a nice home, traveling and relaxing, and doing a little business.

Sounds like they have the right priorities.

Tuesday, October 31, 2006

Tainted reality

It's been known for some time that what we see depends on what we expect to see. That is the basis of illusions such as the one to the right, which I took from Wikipedia. Square A is actually just the same shade of gray as square B.

BPS research digest reports on a related effect.

Karl Gegenfurtner and colleagues presented 14 participants with strangely coloured fruits – for example a pink banana – against a grey background. The participants’ task was to adjust the colour of the banana until it blended exactly with the grey background. It sounds easy, but the participants couldn’t do it because as they adjusted the colour, they compensated not just for the banana’s actual pink pigmentation, but also for a yellowness that only existed in their mind, thus leaving the banana with a slight bluish hue. That is, their memory for the typical colour of a banana was interfering with their performance.

By contrast, the participants didn’t have any trouble adjusting the colour of anonymous spots of light to make them blend in with the grey background – thus suggesting it wasn’t some quirk of the experimental set-up that was causing the participants difficulties with the fruit and veg.

Moreover, when presented with a banana that had been correctly adjusted to perfectly blend in with the grey background, the participants reported that it looked slightly yellow – a percept generated by their own mind, not by the actual colour of the banana.

It came from the East

ALDaily pointed me to this New Yorker review of the book “The Ghost Map” by Steven Johnson. I keep forgetting how new cholera is.

Hippocrates mentioned cholera as a common post-childhood disease, but given that he thought it might be brought on by eating goat’s meat he was probably referring to a less malign form of diarrhea. It was almost certainly not the life-threatening epidemic disease that emerged from India in 1817 and which then began its spread around the world, travelling, as Snow said, “along the great tracks of human intercourse”—colonialism and global trade. The first pandemic of what the British and the Americans called Asiatic cholera (or cholera morbus) reached Southeast Asia, East Africa, the Middle East, and the Caucasus, but petered out in 1823. A second pandemic, between 1826 and 1837, also originated in India, but this time it took a devastating toll on both Europe and America, arriving in Britain in the autumn of 1831 and in America the following year. By 1833, twenty thousand people had died of cholera in England and Wales, with London especially hard hit. A third pandemic swept England and Wales in 1848-49 (more than fifty thousand dead) and again in 1854, when thirty thousand died in London alone.

The description of the disease will loosen your bowels

Cholera is a horrific illness. The onset of the disease is typically quick and spectacular; you can be healthy one moment and dead within hours. The disease, left untreated, has a fatality rate that can reach fifty per cent. The first sign that you have it is a sudden and explosive watery diarrhea, classically described as “rice-water stool,” resembling the water in which rice has been rinsed and sometimes having a fishy smell. White specks floating in the stool are bits of lining from the small intestine. As a result of water loss—vomiting often accompanies diarrhea, and as much as a litre of water may be lost per hour—your eyes become sunken; your body is racked with agonizing cramps; the skin becomes leathery; lips and face turn blue; blood pressure drops; heartbeat becomes irregular; the amount of oxygen reaching your cells diminishes. Once you enter hypovolemic shock, death can follow within minutes. A mid-nineteenth-century English newspaper report described cholera victims who were “one minute warm, palpitating, human organisms—the next a sort of galvanized corpse, with icy breath, stopped pulse, and blood congealed—blue, shrivelled up, convulsed.” Through it all, and until the very last stages, is the added horror of full consciousness. You are aware of what’s happening: “the mind within remains untouched and clear,—shining strangely through the glazed eyes . . . a spirit, looking out in terror from a corpse.”

The received wisdom was that diseases reflect an imbalance of the four humours familiar to Indians from Ayurveda (blood, phlegm, yellow bile, and black bile), and that epidemic diseases were caused by atmospheric miasmas.

The fact that the poor suffered most in many epidemics was readily accommodated by the miasmal theory: certain people—those who lived in areas where the atmosphere was manifestly contaminated and who led a filthy and unwholesome way of life—were “predisposed” to be afflicted. The key indicator of miasma was stench. An aphorism of the nineteenth-century English sanitary reformer Edwin Chadwick was “All smell is disease.” Sydenham’s belief in a subterranean origin of miasmas gradually gave way to the view that they were caused by the accumulation of putrefying organic materials—a matter of human responsibility. As Charles E. Rosenberg’s hugely influential work “The Cholera Years” (1962) noted, when Asiatic cholera first made its appearance in the United States, in 1832, “Medical opinion was unanimous in agreeing that the intemperate, the imprudent, the filthy were particularly vulnerable.” During an early outbreak in the notorious Five Points neighborhood of Manhattan, a local newspaper maintained that this was an area inhabited by the most wretched specimens of humanity: “Be the air pure from Heaven, their breath would contaminate it, and infect it with disease.” The map of cholera seemed so intimately molded to the moral order that, as Rosenberg put it, “to die of cholera was to die in suspicious circumstances.” Rather like syphilis, it was taken as a sign that you had lived in a way you ought not to have lived. “The great mass of people . . . don’t know that the miasma of an unscavenged street or impure alley is productive of cholera and disease,” the English liberal economic activist Richard Cobden observed in 1853. “If they did know these things, people would take care that they inhabited better houses.”

Élite presumptions to the contrary, the London poor did not enjoy living in squalor. In 1849, a group of them wrote a joint letter to the London Times:

"We live in muck and filthe. We aint got no priviz, no dust bins, no drains, no water-splies . . . . The Stenche of a Gully-hole is disgustin. We all of us suffer, and numbers are ill, and if the Colera comes Lord help us. . . . We are livin like piggs, and it aint faire we shoulde be so ill treted. "

But some sanitary reformers, Florence Nightingale among them, opposed contagionism precisely because they believed that the poor were personally responsible for their filth: contagionism undermined your ability to hold people to account for their unwholesome way of life. Whereas, in a miasmal view of the world, the distribution of disease followed the contours of morality—your nose just knew it—infection by an external agent smacked of moral randomness.

The hero of the tale is John Snow- an anesthetist and a founding member of the London Epidemiological Society- who wielded data and common sense to great effect. He asked some good questions.

Why was it, he wondered, that people most exposed to these supposedly noxious miasmas—sewer workers, for example—were no more likely to be afflicted with cholera than anyone else? Snow also knew that the concentration of gases declined rapidly over distance, so how could a miasma arising from one source pollute the atmosphere of a whole neighborhood, or even a city? Why, if many of those closest to the stench were unaffected, did some of those far removed from it become ill? And there were some notable outbreaks of cholera that didn’t appear to fit with the moral and evidential underpinnings of miasmal theory. Sometimes the occupants of one building fell ill while those in an adjacent building, at least as squalid, escaped. Moreover, cholera attacked the alimentary, not the respiratory, tract. Why should that be, if the vehicle of contagion was in the air as opposed to something ingested?

Then he came to the water supply.

From medieval times, water had been drawn both from urban wells and from the Thames and its tributaries. In the early seventeenth century, the so-called New River was constructed; it carried Hertfordshire spring water, by gravity alone, to Clerkenwell, a distance of almost forty miles. During the eighteenth century and the early nineteenth, a number of private water companies were established, taking water from the Thames and using newly invented steam pumps to deliver it by iron pipe. By the middle of the nineteenth century, there were about ten companies supplying London’s water. Many of these companies drew their water from within the Thames’s tidal section, where the city’s sewage was also dumped, thus providing customers with excrement-contaminated drinking water. In the early eighteen-fifties, Parliament had ordered the water companies to shift their intake pipes above the tideway by August of 1855: some complied quickly; others dragged their feet.

When cholera returned, in 1854, Snow was able to identify a number of small districts served by two water companies, one still supplying a fecal cocktail and one that had moved its intake pipes to Thames Ditton, above the tidal section. Snow compiled tables showing a strong connection in these districts between cholera mortality and water source. Snow’s “grand experiment” was supposed to be decisive: there were no pertinent variables distinguishing the two populations other than the origins of their drinking water. As it turned out, the critical evidence came not from this study of commercially piped river water but from a fine-grained map showing the roles of different wells. Snow lived on Sackville Street, just around the corner from the Royal Academy of Arts, and in late August cholera erupted practically next door, in an area of Soho. It was, Snow later wrote, “the most terrible outbreak of cholera which ever occurred in this kingdom”—more than five hundred deaths in ten days.

He produced one of the most famous maps ever - one that Edward Tufte praises in his book "The Visual Display of Quantitative Information".

Using the Weekly Return of Births and Deaths, which was published by William Farr, a statistician in the Office of the Registrar-General, and a staunch anti-contagionist, Snow homed in on the microstructure of the epidemic. He began to suspect contaminated water in a well on Broad Street whose pump served households in about a two-block radius. The well had nothing to do with commercially piped water—which in this neighborhood happened to be relatively pure—but it was suspicious nonetheless. Scientists at the time knew no more about the invisible constituents of the water supply than they did about the attributes of specific miasmas—Snow wrote that the “morbid poison” of cholera “must necessarily have some sort of structure, most likely that of a cell,” but he could not see anything that looked relevant under the microscope—so even Snow still used smell as an important diagnostic sign. He recorded a local impression that, at the height of the outbreak, the Broad Street well water had an atypically “offensive smell,” and that those who were deterred by it from drinking the water did not fall ill. What Snow needed was not the biological or chemical identity of the “morbid poison,” or formal proof of causation, but a powerful rhetoric of persuasion. The map Snow produced, in 1854, plotted cholera mortality house by house in the affected area, with bars at each address that showed the number of dead. The closer you lived to the Broad Street pump, the higher the pile of bars. A few streets away, around the pump at the top of Carnaby Street, there were scarcely any bars, and slightly farther, near the Warwick Street pump, there were none at all.

This is a post by Tufte, describing a visit to John Snow's cholera-infected water pump, and the image above is part of the map itself. Snow backed this up with additional data.

Snow’s study of the neighborhood enabled him to add persuasive anecdotal evidence to the anonymity of statistics. Just across from the Broad Street pump was the Poland Street workhouse, whose wretched inmates, living closely packed in miserable conditions, should have been ideal cholera victims. Yet the disease scarcely touched them. The workhouse, it emerged, had its own well and a piped supply from a company with uncontaminated Thames water. Similarly, there were no cholera deaths among the seventy workers in the Lion Brewery, on Broad Street. They drank mainly malt liquor, and the brewery had its own well. What Snow called the “most conclusive” evidence concerned a widow living far away, in salubrious Hampstead, and her niece, who lived in “a high and healthy part of Islington”: neither had gone anywhere near Broad Street, and both succumbed to cholera within days of its Soho outbreak. It turned out that the widow used to live in the affected area, and had developed a taste for the Broad Street well water. She had secured a supply on August 31st, and, when her niece visited, both drank from the same deadly bottle.

Next, Snow had to show how the Broad Street well had got infected, and for this he made use of the detailed knowledge of a local minister, Henry Whitehead. The minister had at first been skeptical of Snow’s waterborne theories, but became convinced by the evidence the doctor was gathering. Whitehead discovered that the first, or “index,” case of the Soho cholera was a child living on Broad Street: her diapers had been rinsed in water that was then tipped into a cesspool in front of a house just a few feet away from the well. The cesspool leaked and so, apparently, did the well. Snow persuaded the parish Board of Guardians to remove the handle from the Broad Street pump, pretty much ending the Soho cholera outbreak. There’s now a replica of the handleless pump outside a nearby pub named in John Snow’s honor.

For all his efforts, Snow did not secure an immediate victory, but the unbearable stench of the sewage-laden Thames finally forced the Government to act.

In the oppressively hot summer of 1858, London was overwhelmed by what the papers called “the Great Stink.” The already sewage-loaded Thames had begun to carry the additional burden of thousands of newly invented flush water closets, and improved domestic sanitation was producing the paradoxical result of worsened public sanitation. The Thames had often reeked before, but this time politicians fled the Houses of Parliament, on the river’s embankment, or attended with handkerchiefs pressed to their noses. “Whoso once inhales the stink can never forget it,” a newspaper reported, “and can count himself lucky if he live to remember it.” Measures to clean up the Thames had been on the agenda for some years, but an urgent fear of miasmas broke a political logjam, and gave immediate impetus to one of the great monuments of Victorian civil engineering: Sir Joseph Bazalgette’s system of municipal sewers, designed to deposit London’s waste below the city and far from the intakes of its water supply. (The system became fully operational in the mid-eighteen-seventies, and its pipes and pumps continue to serve London today.)

In the event, the Great Stink’s effects on municipal health were negligible: the Weekly Return showed no increase in deaths from epidemic disease, confounding miasmatists’ expectations. When cholera returned to London in 1866, its toll was much smaller, and the main outbreak was traced to a section of Bazalgette’s system which had yet to be completed. In many people’s opinion, Snow, who had died in 1858, now stood vindicated. And yet the improved municipal water system that rid the city of cholera had been promoted by sanitary reformers who held to the miasmal theory of disease—people who believed that sewage-laden drinking water was only a minor source of miasmas, but disgusting all the same. The right things were done, but not necessarily for the right scientific reasons.

The best we can hope for.

Sunday, October 29, 2006

What good is happiness? It can't buy you money.

Robert H. Frank is a distinguished left-leaning economist. In "Passions Within Reason", he advanced an explanation for why emotions exist. In this "Economic Scene" article, he argues that
1. Economic growth does not make a society happier over time

Many critics of economic growth interpret this finding to imply that continued economic growth should no longer be a policy goal in developed countries. They argue that if money buys happiness, it is relative, not absolute, income that matters. As incomes grow, people quickly adapt to their new circumstances, showing no enduring gains in measured happiness. Growth makes the poor happier in low-income countries, critics concede, but not in developed countries, where those at the bottom continue to experience relative deprivation.

2. Economic growth is still important, because happiness is not the point of growth.

Subjective well-being is typically measured from responses to survey questions like, “All things considered, how satisfied are you with your life these days?” People’s responses are informative. They tend to be consistent over time and are highly correlated with assessments of them made by their friends. Positive self-assessments are strongly linked with behaviors indicating psychological health. Thus, people who report high levels of subjective well-being are more likely to initiate social contacts with friends and more likely to respond to requests for assistance from strangers. They are less likely than others to suffer from psychosomatic illnesses, seek psychological counseling or attempt suicide.
In short, self-assessments of subjective well-being tell us something important about human welfare. Yet the mere fact that they do not ratchet up over time provides little reason to question the desirability of economic growth.

Why is this? Because our emotions (motivational system) evolved to ensure that we can never be permanently happy- we are survival machines for our genes.

The purpose of the human motivational system, according to psychologists, is not to make people feel happy, but rather to motivate actions that promote successful life outcomes. To be effective, this system should be flexible and adaptive, which it is. For example, people who become disabled typically experience deep depression after their accidents, but often adapt surprisingly quickly, soon reporting a mix of moods similar to what they had experienced before. Lottery winners invariably experience joy on receiving their windfalls, but often describe such feelings as fleeting.

Since life is a continuing competitive struggle, this is as it should be. Accident victims who can recover their psychological footing quickly will function more effectively in their new circumstances than those who dwell unhappily on their misfortune. Windfall recipients who quickly recover their hunger for more will compete more effectively than those who linger in complacent euphoria.

A Holocaust survivor once told me that his existence in the camps took place in two separate psychological spaces. In one, he was acutely aware of the unspeakable horror of his situation. But in the other, life seemed eerily normal. In this second space, each day presented challenges, and days in which he coped relatively successfully with them felt much like the good days of the past. To survive, he explained, it was critical to spend as much time as possible in the second space and as little as possible in the first.

These observations highlight the weakness of subjective well-being as a metric of welfare. The fact that people adapt quickly to new circumstances, good or bad, is just a design feature of the brain’s motivational system. The fact that a paraplegic may continue to be happy does not imply that his condition has not reduced his welfare.

Indeed, many well-adjusted paraplegics report that they would undergo surgery entailing substantial risk of death if doing so promised to restore their mobility. Similarly, the fact that people may adapt quickly to higher incomes says nothing about whether economic growth makes them better off.

This is a profound observation. Tyler Cowen quotes Dan Ariely making a similar point about his life after suffering severe burns over his entire body:

Personal reflections are only in partial agreement with the literature on well-being (see also Levav 2002). In terms of agreement with adaptation, I find myself to be relatively happy in day-to-day life – beyond the level predicted (by others as well as by myself) for someone with this type of injury. Mostly, this relative happiness can be attributed to the human flexibility of finding activities and outlets that can be experienced and finding in these fulfillment, interest, and satisfaction. For example, I found a profession that provides me with wide-ranging flexibility in my daily life, reducing the adverse effects of my limitations on my ability. Being able to find happiness in new ways and to adjust one’s dreams and aspirations to a new direction is clearly an important human ability that muffles the hardship of wrong turns in life circumstances. It is possible that individuals who are injured at later stages of their lives, when they are more set in terms of their goals, have a more difficult time adjusting to such life-changing events.

However, these reflections also point to substantial disagreements with the current literature on well-being. For example, there is no way that I can convince myself that I am as happy as I would have been without the injury. There is not a day in which I do not feel pain, or realize the disadvantages in my situation. Despite this daily awareness, if I had participated in a study on well-being and had been asked to rate my daily happiness on a scale from 0 (not at all happy) to 100 (extremely happy), I would have probably provided a high number, probably as high as I would have given if I had not had this injury. Yet, such high ratings of daily happiness would have been high only relative to the top of my privately defined scale, which has been adjusted downward to accommodate the new circumstances and possibilities (Grice 1975). Thus, while it is possible to show that ratings of happiness are not influenced much based on large life events, it is not clear that this measure reflects similar affective states.

As a mental experiment, imagine yourself in the following situation. How would you rate your overall life satisfaction a few years after you had sustained a serious injury? How would your ratings reflect the impact of these new circumstances? Now imagine that you had a choice to make about whether you would want this injury. Imagine further that you were asked how much you would have paid not to have this injury. I propose that in such cases the ratings of overall satisfaction would not be substantially influenced by the injury, while the choice and willingness to pay would be - and to a very large degree. Thus, while I believe that there is some adaptation and adjustment to new life circumstances, I also believe that the extent to which such adjustments can be seen as reflecting true adaptation (such as in the physiological sense of adaptation to light, for example) is overstated. Happiness can be found in many places, and individuals cannot always predict their ability to do so. Yet, this should not undermine our understanding of horrific life events, or reduce our effort to eliminate them.

Economic growth is not about making people happier, but about increasing their freedom of action. People (individually and collectively) get to make choices that would otherwise have been infeasible. Without the economic growth of the past 50 years, Dan Ariely would not have survived his burns, and society would not have been able to spare the effort and skill that went into rehabilitating him.

Thursday, October 26, 2006

Pangur Ban

I met this cat in "The Rattle Bag" - a book of poems compiled by Ted Hughes and Seamus Heaney.

This poem was written in the 8th century by an Irish student at the monastery of Carinthia, on a copy of St. Paul's Epistles. (The translation is by Robin Flower.)
I and Pangur Ban my cat,
'Tis a like task we are at:
Hunting mice is his delight,
Hunting words I sit all night.

Better far than praise of men
'Tis to sit with book and pen;
Pangur bears me no ill-will,
He too plies his simple skill.

'Tis a merry task to see
At our tasks how glad are we,
When at home we sit and find
Entertainment to our mind.

Oftentimes a mouse will stray
In the hero Pangur's way;
Oftentimes my keen thought set
Takes a meaning in its net.

'Gainst the wall he sets his eye
Full and fierce and sharp and sly;
'Gainst the wall of knowledge I
All my little wisdom try.

When a mouse darts from its den,
O how glad is Pangur then!
O what gladness do I prove
When I solve the doubts I love!

So in peace our task we ply,
Pangur Ban, my cat, and I;
In our arts we find our bliss,
I have mine and he has his.

Practice every day has made
Pangur perfect in his trade;
I get wisdom day and night
Turning darkness into light.

Rhythmic grumbling

Back in Bombay, 11:20 p.m.

I thought Deepavali would be past
but I returned while ropes of colored lights
still stammered in the windows,
and flights of bullying rockets
roused me from my bed
to sit listening
to little boys playing
with toy pistols in the street below.
They sound like gardeners clipping hedges in the park.

Jayan insists that I include his version:

Deepavali past
Ropes of colored lights on windows
Flights of bullying rockets
Tearing the smoked up sky

Almost unbearable

Tyler Cowen says it is about the behavioral economics of pain. The only way I could read this article by Dan Ariely was by taking frequent breaks.

However, it is wonderfully written.

Wednesday, October 25, 2006

The Roman Way

Nick Szabo recently blogged about an article on the Lex Gabinia, published in the New York Times by Robert Harris.

I recently devoured two novels by Robert Harris: "Imperium" and "Pompeii". "Imperium" is by far the better novel, but "Pompeii" has magnificent passages describing the eruption of Vesuvius, and wonderful descriptions of the achievements of Roman engineering.
The book begins with this quote:

How can we withhold our respect from a water system that, in the first century AD, supplied the city of Rome with substantially more water than was supplied in 1985 to New York City?

A. Trevor Hodge
(author of Roman Aqueducts & Water Supply)

And this passage does resonate with me:

Men mistook measurement for understanding. And they always had to put themselves at the centre of everything. That was their greatest conceit. The earth is becoming warmer - it must be our fault! The mountain is destroying us - we have not propitiated the gods! It rains too much, it rains too little - a comfort to think that these things are somehow connected to our behaviour, that if we lived only a little better, a little more frugally, our virtue would be rewarded.

Tuesday, October 24, 2006


What are we supposed to make of this "news"?

There are 20 reported cases of HIV positive patients in the Patna Police hospital. Concerned by the rising numbers of HIV cases in the state police force, doctors have forwarded some suggestions to headquarters.

How many would be an acceptable number? Is this number out of proportion, considering how many people outside the Police force are HIV+?

They feel that all new recruits in the Bihar Police force should carry HIV negative certificates and that men presently serving in the force should undergo HIV tests. They also say that such tests should be carried out periodically.

What is the intention here? If this is good for the Police, why should this not be done for the general population as well?

Says Bihar Home Secretary Afzal Amanullah, "Doctors have reported that there has been a considerable rise in HIV positice (sic) cases among officers, not only among those ranked lower and constables, but senior officials as well."

Oh my goodness, not only petty constables, but officers as well? Intolerable. But why?

The real numbers of those suffering from the dreaded virus could be mind-boggling and the government's intervention is expected before the situation compeletly (sic) gets out of hand.

What should the Government do? Surely it can't be so difficult to ask a few questions when you are handed a story.

Monday, October 23, 2006

Parting of ways

Via ALdaily: The New Statesman has published a fascinating article by William Dalrymple on a 19th century clash of civilizations.

At 4pm on a hazy, warm, sticky winter's day in Rangoon in November 1862, soon after the end of the monsoon, a shrouded corpse was escorted by a small group of British soldiers to an anonymous grave at the back of a walled prison enclosure. The enclosure lay overlooking the muddy brown waters of the Rangoon River, a little downhill from the great gilt spire of the Shwedagon Pagoda. Around it lay the newly built cantonment area of the port - a pilgrimage town that had been seized, burned and occupied by the British only ten years earlier.

The bier of the State Prisoner - as the deceased was referred to - was accompanied by his two sons and an elderly mullah. The ceremony was brief. The British authorities had made sure not only that the grave was already dug, but that quantities of lime were on hand to guarantee the rapid decay of both bier and body. When the shortened funeral prayers had been recited, the earth was thrown over the lime, and the turf carefully replaced to disguise the place of burial. A week later the British Commissioner, Captain H N Davis, wrote to London to report what had passed, adding:

Have since visited the remaining State Prisoners - the very scum of the reduced Asiatic harem; found all correct . . . The death of the ex-King may be said to have had no effect on the Mahomedan part of the populace of Rangoon, except perhaps for a few fanatics who watch and pray for the final triumph of Islam. A bamboo fence surrounds the grave, and by the time the fence is worn out, the grass will again have properly covered the spot, and no vestige will remain to distinguish where the last of the Great Moghuls rests.

His point in the article appears to be that, as the British achieved ascendancy in India, and evangelical Christians became more prominent among them, they gradually changed character from being just another trading community in a vast subcontinent to a foreign presence that aggressively rejected any hint of being influenced by the country they inhabited.

The wills written by dying East India Company servants show that the practice of cohabiting with Indian bibis quickly declined: they turn up in one in three wills between 1780 and 1785, but are present in only one in four between 1805 and 1810. By the middle of the century, they have all but disappeared. In half a century, a vibrantly multicultural world refracted back into its component parts; children of mixed race were corralled into what became in effect a new Indian caste - the Anglo-Indians - who were left to run the railways, posts and mines.

He draws a parallel with our times.

Just like it is today, this process of pulling apart - of failing to talk, listen or trust each other - took place against the background of an increasingly aggressive and self-righteous west, facing ever stiffer Islamic resistance to western interference. For, as anyone who has ever studied the story of the rise of the British in India will know well, there is nothing new about the neo-cons. The old game of regime change - of installing puppet regimes, propped up by the west for its own political and economic ends - is one that the British had well mastered by the late 18th century.

By the 1850s, the British had progressed from aggressively removing independent-minded Muslim rulers, such as Tipu Sultan, who refused to bow before the will of the hyperpower, to destabilising and then annexing even the most pliant Muslim states. In February 1856, the British unilaterally annexed the prosperous kingdom of Avadh (or Oudh), using the excuse that the nawab, Wajid Ali Shah, a far-from-belligerent dancer and epicure, was "debauched".

The war that followed was essentially religious

The eventual result of this clash of rival fundamentalisms came in 1857 with the cataclysm of the Great Mutiny. Of the 139,000 sepoys of the Bengal army, all but 7,796 turned against their British masters, and the great majority headed straight to Zafar's court in Delhi, the centre of the storm. Although it had many causes and reflected many deeply held political and economic grievances - particularly the feeling that the heathen foreigners were interfering in the most intimate way with a part of the world to which they were entirely alien - the uprising was articulated as a war of religion, and especially as a defensive action against the rapid inroads that missionaries, Christian schools and Christian ideas were making in India, combined with a more generalised fight for freedom from occupation and western interference.

Although the great majority of the sepoys were Hindus, in Delhi a flag of jihad was raised in the principal mosque, and many of the insurgents described themselves as mujahedin or jihadis. Indeed, by the end of the siege, after a significant proportion of the sepoys had melted away, hungry and dispirited, the proportion of jihadis in Delhi grew to be about half of the total rebel force, and included a regiment of "suicide ghazis" from Gwalior who had vowed never to eat again and to fight until they met death at the hands of the kafirs, "for those who have come to die have no need for food".

One of the causes of unrest, according to a Delhi source, was that "the British had closed the madrasas". These words had no resonance to the Marxist historians of the 1960s who looked for secular and economic grievances to explain the uprising. Now, in the aftermath of the attacks of 11 September 2001 and 7 July 2005, they are phrases we understand all too well. Words such as jihad scream out of the dusty pages of the Urdu manuscripts, demanding attention.

There is a direct link between the jihadis of 1857 and those we face today. The reaction of the educated Delhi Muslims after 1857 was to reject both the west and the gentle Sufi traditions of the late Mughal emperors, whom they tended to regard as semi-apostate puppets of the British; instead, they attempted to return to what they regarded as pure Islamic roots.

With this in mind, disillusioned refugees from Delhi founded a madrasa in the Wahhabi style at Deoband, in Delhi, that went back to Koranic basics and rigorously stripped out anything European from the curriculum. One hundred and forty years later, it was out of Deobandi madrasas in Pakistan that the Taliban emerged to create the most retrograde Islamic regime in modern history, a regime that in turn provided the crucible from which emerged al-Qaeda, and the most radical Islamist counter-attack the modern west has yet had to face.

A fascinating tale, but there is nothing uniquely sub-continental about this story: every country in Asia, Africa, and South America has a similar tale to tell. China and Korea suffered unbelievable horrors in the same period. General Gordon became a martyr to Victorian England when the Sudanese mahdi killed him at Khartoum.
It may be possible to trace the origins of today's Islamic jihads to events that are over a century old, but I am not sure that it helps. They were a negligible presence until a bare 20 years ago, and the origins of today's terror are equally close to hand: the U.S. armed and trained the mujahideen to battle the Soviet infidel, Pakistan gave them succor in an effort to win American kudos (and arms with which to confront India), and the Saudis funded them to appease their own people.
The fuel that feeds this constant skirmishing is the evident illegitimacy of the governments of every Islamic state from Islamabad to Casablanca- it is not only the British who parted ways with the people they ruled. Their unfortunate people, growing more impoverished by the year, and repressed by their political masters, are seeking to escape into an imagined past of Islamic purity and potency.
I am sure Dalrymple's book will illuminate the 19th century, but will cast only an indirect light on the early 21st century.

Now for something quite different

Nick Szabo has posted some really good entries recently.

One post is about the contrasting fates of medieval China and Portugal. I loved the image on the right, which contrasts the size of Columbus' Santa Maria with that of one of the Zheng He "treasure ships" which the Chinese Emperor sent off on a voyage to "show the flag".

Another nice blog entry is about the Pigeonhole principle.
The pigeonhole principle readily proves that there are people in Ohio with the same number of hairs on their head, that you can't eliminate the possibility of hash collisions when the set of possible input data is larger than the possible outputs, that if there are at least two people in a room then there must be at least two people in that room with the same number of cousins in the room, and that a lossless data compression algorithm must always make some files longer. This is just the tip of the iceberg of what the pigeonhole principle can help prove.
And he claims his math is rusty!
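The lossless-compression claim in particular can be checked with a quick count. A minimal sketch in Python (the variable names and the choice of n are mine, not Szabo's):

```python
from itertools import product

# Pigeonhole count behind the compression claim: there are 2**n
# bit-strings of length n, but only 2**n - 1 bit-strings of length
# strictly less than n (1 + 2 + ... + 2**(n-1)). More pigeons than
# holes, so any "compressor" that shortens every input must map two
# different inputs to the same output -- and then it cannot be lossless.
n = 4
inputs = [''.join(bits) for bits in product('01', repeat=n)]
shorter_outputs = sum(2**k for k in range(n))  # lengths 0 .. n-1

print(len(inputs), "pigeons,", shorter_outputs, "holes")
```

The same counting argument, with different pigeons and holes, covers the hash-collision and hairs-on-heads examples.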

Sunday, October 22, 2006

With stupidity the gods themselves contend..

More than two centuries after the great works of David Hume and Adam Smith, 20 years after the "Japan scare", the head of Der Spiegel's Berlin office publishes this.

The world war for wealth calls for a different, but every bit as contradictory, solution


The two camps are divided between Europe and America on the one side and Asia on the other. But so far there has been no shouting, no bluster and no shooting. Nor have there been any threats, demands or accusations. On the contrary, there is an atmosphere of complete amiability wherever our politicians and business executives might travel in Asia. At airports in Beijing, Jakarta, Singapore and New Delhi red carpets lie ready, Western national anthems can be played flawlessly on cue -- and they even parry Western complaints about intellectual property theft, environmental damage and human rights abuses with a polite patience that can only be admired. The Asians are the friendliest conquerors the world has ever seen.


Their secret is stoic perseverance, the weapon they use to pursue their own interests while at the same time disregarding ours. What looks like a market economy in Asia, actually follows the rules of a type of society which former German chancellor Ludwig Erhard liked to call a "termite state." In a termite state, it is the collective rather than the individual which sets the agenda. Tasks that serve the aims of society's leaders are assigned to the individual in a clandestine manner that is barely perceptible to outsiders. It is a state that encourages as much collective behavior as possible but only as much freedom as necessary. We don't know what they feel, we don't know what they think and we have no way of guessing what they are planning. Indeed, this is what makes China a dark superpower.

Beyond parody.

The battle of the loonies

I adore Richard Dawkins' writings on Evolution but, while I generally agree with him on religion, I find his zeal disturbing. Terry Eagleton's incoherent, incomprehensible review in the London Review of Books, however, is bound to confirm Dawkins' opinions of his opponents.
What could Terry mean by

For Judeo-Christianity, God is not a person in the sense that Al Gore arguably is. Nor is he a principle, an entity, or ‘existent’: in one sense of that word it would be perfectly coherent for religious types to claim that God does not in fact exist. He is, rather, the condition of possibility of any entity whatsoever, including ourselves. He is the answer to why there is something rather than nothing. God and the universe do not add up to two, any more than my envy and my left foot constitute a pair of objects.

How does he know that there is such a condition? And did God himself whisper in Terry Eagleton's ear so that he knows this:

This, not some super-manufacturing, is what is traditionally meant by the claim that God is Creator. He is what sustains all things in being by his love; and this would still be the case even if the universe had no beginning. To say that he brought it into being ex nihilo is not a measure of how very clever he is, but to suggest that he did it out of love rather than need. The world was not the consequence of an inexorable chain of cause and effect. Like a Modernist work of art, there is no necessity about it at all, and God might well have come to regret his handiwork some aeons ago. The Creation is the original acte gratuit. God is an artist who did it for the sheer love or hell of it, not a scientist at work on a magnificently rational design that will impress his research grant body no end.

This is the point at which I gave up and went to sleep.

Hey, You Got Something To Eat?

asks A Goat, in The Onion.

Saturday, October 21, 2006

Dilbert Rules

Read and learn

Where all are above average

The Business Standard has published a grotesquely shoddy study on the performance of Fund Managers in India. Really, you expect better from this paper.

Indian equity fund managers have managed to beat the benchmark indices hands down, despite the stock markets going through tumultuous times over the past decade and a half.

The first-ever ranking of fund managers, based on performance throughout their career, reveals that 90 per cent of the fund managers had a 50 per cent rate of outperformance versus the benchmark Nifty index. In other words, of the 32 equity fund managers ranked, 29 bettered the Nifty at least half the time.

This may be meaningful because

“Apart from their skills, a key reason for the large-scale outperformance is that mutual funds are a tiny fraction of the whole market in our country, while in the US, mutual funds are the market itself, limiting the scope of fund managers bettering the market,” added Kumar.

Translation: these dudes have been thriving on the mistakes of the retail investors who swarm our markets while, in the US, one Fund Manager can outperform only at the expense of another.

While every one of the leading fund managers seemed to have a unique style of investing, a common thread was a bias towards growth. “Since most companies in India have still not attained their full potential, focusing on growth can bring in rich rewards,” said Subramanian of Franklin Templeton.

Meaning: we take more risks, which pay off in a rising market, but can sink us if the market turns against us.
However, the number one reason for this performance has not been explicitly presented: Sample selection bias.

The study ranked only equity managers with a minimum five-year track record and debt managers with at least two years of experience. The detailed methodology and results have been covered in the magazine, along with the profile of the five leading managers in the two categories.

Simply put, what do the authors want me to take away from such a study? That it is possible to beat the market over the long term, that it's particularly easy to do this in India, and so I should hand my money over to these Fund Managers. Now, they may be right, but this study inspires no confidence.

What they should have done is to look at all the Fund Managers who were in business 5 years ago, and then see how they performed over these years. That would have given us a feel for how easy it is for these particular professionals to outperform, given the large number of amateurs mucking about in the marketplace.
The way they have conducted this study, all the Fund Managers who went out of business in these past 5 years have not been counted- hence the conclusion that 90% of the Fund Managers beat the benchmark Nifty at least half the time.
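The selection effect is easy to demonstrate with a toy simulation (the numbers here- 60 managers, 5 years, a drop-out threshold- are invented for illustration, not taken from the Business Standard study): give every manager a pure coin-flip chance of beating the index each year, delete the worst records as if those funds had gone out of business, and the survivors' outperformance rate rises even though nobody has any skill.

```python
import random

random.seed(42)

N_START = 60   # hypothetical number of managers in business 5 years ago
N_YEARS = 5
P_BEAT = 0.5   # zero-skill assumption: coin-flip chance of beating the index each year

# Each manager's record: how many of the 5 years they beat the benchmark
records = [sum(random.random() < P_BEAT for _ in range(N_YEARS))
           for _ in range(N_START)]

# Survivorship: assume funds with the worst records shut down and
# vanish from the database before the study is run
survivors = [r for r in records if r >= 2]

def share_beating_half(recs):
    """Fraction of managers who beat the index at least half the time."""
    return sum(r >= N_YEARS / 2 for r in recs) / len(recs)

print(f"All {N_START} managers: "
      f"{share_beating_half(records):.0%} beat the index at least half the time")
print(f"{len(survivors)} survivors: "
      f"{share_beating_half(survivors):.0%} beat the index at least half the time")
```

With no skill at all, roughly half the full cohort beats the index at least half the time, but once the bottom tail has quietly disappeared, the surviving group looks distinctly better than a coin flip- which is the flattering number a survivors-only study reports.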
This article has a nice section on how the Fund Management industry manages to make itself look good.

There are a number of ways that active fund managers have been able to promote the illusion that as a group they are adding investment value. The investment management industry has many tricks to make it appear as if everyone is doing better than average.

For a start they can load the dice by opening to the public a lot of historically profitable funds. Fund managers introduce creation bias into the equation by starting a lot of aggressive new funds. The way this trick works is they give seed capital to a number of promising young portfolio managers every year. These managers then invest and trade aggressively for the next year or two, establishing a track record. The fund managers that didn't do well get the flick, their money gets plowed into the more successful fund and then the investment company opens the new fund to the public and puts enormous marketing hype behind it. In this way managers can ensure that all new funds (that the public hear about) have excellent track records.

There are studies that have found that brand new funds often underperform their own track records, and as a group seem to do worse than more established funds; this is probably why. Note that these new funds don't do any better than average following their launch, but they do get to brag about impressive past performance.

The second trick is to bury the evidence if one of their public funds ever falls behind. Survivorship bias is introduced when fund managers close or merge their less successful funds with more successful ones. By continually weeding out the weaker funds, a fund manager can present a prospectus showing the entire range of investment options being market beaters. Similarly, when a fund is culled it is usually deleted from most databases, so history is rewritten by the winners. Professor Burton Malkiel, author of the excellent book A Random Walk Down Wall Street, studied this phenomenon and estimates survivorship bias could add as much as 1.5% p.a. to the performance of the median fund manager. Investors do not benefit from survivorship bias; real-world investors do lose money on funds that are deleted. All that improves is the historical average performance of fund databases.

The third trick is to throw a lot of hype behind high performing funds. There is little evidence that winning funds are able to sustain their high performance over the long term, so this can only be seen as a cynical marketing exercise, cashing in on last year's luck. Naturally funds that don't perform well don't advertise much, so all you see are ads showing high performance. (Rajeev: rather as your colleagues at work are more likely to tell you about the winning stocks that they purchased, not the losers they have in their portfolio)

The fourth trick is as scurrilous as the rest: even a very poor fund that has underperformed over the longer term can have a good year or two, so as long as these funds only talk about recent past performance they can avoid the prickly question of longer term past results.

Similarly, funds can always brag about past glories, proudly displaying the fund manager of the year ribbon they won 3 years ago on every advertisement, and brag about a high long term performance even if the last few years have been dreadful.

Thursday, October 19, 2006

The Poet-Scholar

The Harvard Business Review has a surprisingly enjoyable interview with James G. March, "professor emeritus in management, sociology, political science, and education at Stanford University". From the introduction

In these pages, three years ago, consultants Laurence Prusak and Thomas H. Davenport reported the findings of a survey of prominent management writers who identified their own gurus. Although his is an unfamiliar name to most readers of this periodical, James G. March appeared on more lists than any other person except Peter Drucker.


March is perhaps best known for his pioneering contributions to organization and management theory. He has coauthored two classic books: Organizations (with Herbert A. Simon) and A Behavioral Theory of the Firm (with Richard M. Cyert). Together with Cyert and Simon, March developed a theory of the firm that incorporates aspects of sociology, psychology, and economics to provide an alternative to neoclassical theories. The underlying idea is that although managers make decisions that are intendedly rational, the rationality is “bounded” by human and organizational limitations. As a result, human behavior is not always what might be predicted when rationality is assumed.

One of the themes of his work appears to be the importance of the irrational- of behavior that does not seek to justify itself. It is unusual for the subject of an interview to insist on his irrelevance

If there is relevance to my ideas, then it is for the people who contemplate the ideas to see, not for the person who produces them. For me, a feature of scholarship that is generally more significant than relevance is the beauty of the ideas. I care that ideas have some form of elegance or grace or surprise—all the things that beauty gives you.


No organization works if the toilets don’t work, but I don’t believe that finding solutions to business problems is my job. If a manager asks an academic consultant what to do and that consultant answers, then the consultant should be fired. No academic has the experience to know the context of a managerial problem well enough to give specific advice about a specific situation. What an academic consultant can do is say some things that, in combination with the manager’s knowledge of the context, may lead to a better solution.


The scholar tries to figure out, What’s going on here? What are the underlying processes making the system go where it’s going? What is happening, or what might happen? Scholars talk about ideas that describe the basic mechanisms shaping managerial history—bounded rationality, diffusion of legitimate forms, loose coupling, liability of newness, competency traps, absorptive capacity, and the like. In contrast, experiential knowledge focuses on a particular context at a particular time and on the events of personal experience. It may or may not generalize to broader things and longer time periods; it may or may not flow into a powerful theory; but it provides a lot of understanding of a particular situation. A scholar’s knowledge cannot address a concrete, highly specific context, except crudely. Fundamental academic knowledge becomes more useful in new or changing environments, when managers are faced with the unexpected or the unknown. It provides alternative frames for looking at problems rather than solutions to them.

His ideas on friendship and love reflect the influence of Kierkegaard's idea of a "leap of faith"

We justify actions by their consequences. But providing consequential justification is only a part of being human. It is an old issue, one with which Kant and Kierkegaard, among many others, struggled. I once taught a course on friendship that reinforced this idea for me. By the end of the course, a conspicuous difference had emerged between some of the students and me.

They saw friendship as an exchange relationship: My friend is my friend because he or she is useful to me in one way or another. By contrast, I saw friendship as an arbitrary relationship: If you’re my friend, then there are various obligations that I have toward you, which have nothing to do with your behavior. We also talked about trust in that class. The students would say, “Well, how can you trust people unless they are trustworthy?” So I asked them why they called that trust. It sounded to me like a calculated exchange. For trust to be anything truly meaningful, you have to trust somebody who isn’t trustworthy. Otherwise, it’s just a standard rational transaction.

He argues for the value of being foolish

That paper sometimes gets cited—by people who haven’t read it closely—as generic enthusiasm for silliness. Well, maybe it is, but the paper actually focused on a much narrower argument. It had to do with how you make interesting value systems. It seemed to me that one of the important things for any person interested in understanding or improving behavior was to know where preferences come from rather than simply to take them as given.

So, for example, I used to ask students to explain the factual anomaly that there are more interesting women than interesting men in the world. They were not allowed to question the fact. The key notion was a developmental one: When a woman is born, she’s usually a girl, and girls are told that because they are girls they can do things for no good reason. They can be unpredictable, inconsistent, illogical. But then a girl goes to school, and she’s told she is an educated person. Because she’s an educated person, a woman must do things consistently, analytically, and so on. So she goes through life doing things for no good reason and then figuring out the reasons, and in the process, she develops a very complicated value system—one that adapts very much to context. It’s such a value system that permitted a woman who was once sitting in a meeting I was chairing to look at the men and say, “As nearly as I can tell, your assumptions are correct. And as nearly as I can tell, your conclusions follow from the assumptions. But your conclusions are wrong.” And she was right. Men, though, are usually boys at birth. They are taught that, as boys, they are straightforward, consistent, and analytic. Then they go to school and are told that they’re straightforward, consistent, and analytic. So men go through life being straightforward, consistent, and analytic—with the goals of a two-year-old. And that’s why men are both less interesting and more predictable than women. They do not combine their analysis with foolishness.


Well, there are some obvious ways. Part of foolishness, or what looks like foolishness, is stealing ideas from a different domain. Someone in economics, for example, may borrow ideas from evolutionary biology, imagining that the ideas might be relevant to evolutionary economics. A scholar who does so will often get the ideas wrong; he may twist and strain them in applying them to his own discipline. But this kind of cross-disciplinary stealing can be very rich and productive.

It’s a tricky thing, because foolishness is usually that—foolishness. It can push you to be very creative, but uselessly creative. The chance that someone who knows no physics will be usefully creative in physics must be so close to zero as to be indistinguishable from it. Yet big jumps are likely to come in the form of foolishness that, against long odds, turns out to be valuable. So there’s a nice tension between how much foolishness is good for knowledge and how much knowledge is good for foolishness.

Another source of foolishness is coercion. That’s what parents often do. They say, “You’re going to take dance lessons.” And their kid says, “I don’t want to be a dancer.” And the parents say, “I don’t care whether you want to be a dancer. You’re going to take these lessons.” The use of authority is one of the more powerful ways to encourage foolishness. Play is another. Play is disinhibiting. When you play, you are allowed to do things you would not be allowed to do otherwise. However, if you’re not playing and you want to do those same things, you have to justify your behavior. Temporary foolishness gives you experience with a possible new you—but before you can make the change permanent, you have to provide reasons.

Of course, all these can be questioned and someone like Richard Dawkins would probably make mincemeat of ideas like the "leap of faith", and March recognizes the potentially tragic consequences of such foolishness

It’s all a question of balance. Soon after I wrote my paper on the technology of foolishness, I presented it at a conference in Holland. This was around 1971. One of my colleagues from Yugoslavia, now Croatia, came up and said, “That was a great talk, but please, when you come to Yugoslavia, don’t give that talk. We have enough foolishness.” And I think he may have been right.

I suspect Hume saw the relationship between the passions and reason better than anyone else. Whether we like it or not, passion governs us- nobody ever chose to refrain from suicide because of a cost-benefit analysis. We live on because we like living, and can ask for no other justification.

The poem "As I walked out one evening" by W.H. Auden seems apt.

Tuesday, October 17, 2006

The Imperial Style Part Deux

The New York Times has an article on Shing-Tung Yau- the "Emperor of Math", who was recently involved in controversy with Grigory Perelman.

In 1979, Shing-Tung Yau, then a mathematician at the Institute for Advanced Study in Princeton, was visiting China and asked the authorities for permission to visit his birthplace, Shantou, a mountain town in Guangdong Province.

At first they refused, saying the town was not on the map. Finally, after more delays and excuses, Dr. Yau found himself being driven on a fresh dirt road through farm fields to his hometown, where the citizens slaughtered a cow to celebrate his homecoming. Only long after he left did Dr. Yau learn that the road had been built for his visit.

A fascinating tale.

The Imperial Style

The New Yorker has a fine review of "Academic charisma and the origins of the research university" by William Clark.

At a Berlin banquet in 1892, Mark Twain, himself a worldwide celebrity, stared in amazement as a crowd of a thousand young students “rose and shouted and stamped and clapped, and banged the beer-mugs” when the historian Theodor Mommsen entered the room

One of the more interesting topics is how Germany's universities took the lead in the 19th century. The driver was competition among a swarm of small states- somewhat like Singapore today.

The heart of Clark’s story, however, takes place not during the Middle Ages but from the Renaissance through the Enlightenment, and not in France but in the German lands of the Holy Roman Empire. This complex assembly of tiny territorial states and half-timbered towns had no capital to rival Paris, but the little clockwork polities transformed the university through the simple mechanism of competition. German officials understood that a university could make a profit by attaining international stature. Every well-off native who stayed home to study and every foreign noble who came from abroad with his tutor—as Shakespeare’s Hamlet left Denmark to study in Saxon Wittenberg—meant more income. And the way to attract customers was to modernize and rationalize what professors and students did.

This competition led to constant innovation.

Bureaucrats pressured universities to print catalogues of the courses they offered—the early modern ancestor of the bright brochures that spill from the crammed mailboxes of families with teen-age children. Gradually, the bureaucrats devised ways to insure that the academics were fulfilling their obligations. In Vienna, Clark notes, “a 1556 decree provided for paying two individuals to keep daily notes on lecturers and professors”; in Marburg, from 1564 on, the university beadle kept a list of skipped lectures and gave it, quarterly, to the rector, who imposed fines. Others demanded that professors fill in Professorenzetteln, slips of paper that gave a record of their teaching activities. Professorial responses to such bureaucratic intrusions seem to have varied as much then as they do now. Clark reproduces two Professorenzetteln from 1607 side by side. Michael Mästlin, an astronomer and mathematician who taught Kepler and was an early adopter of the Copernican view of the universe, gives an energetic full-page outline of his teaching. Meanwhile, Andreas Osiander, a theologian whose grandfather had been an important ally of Luther, writes one scornful sentence: “In explicating Luke I have reached chapter nine.”

And then

In an even more radical break with the past, professors began to be appointed on the basis of merit. In many universities, it had been routine for sons to succeed their fathers in chairs, and bright male students might hope to gain access to the privileged university caste by marrying a professor’s daughter. By the middle of the eighteenth century, however, reformers in Hanover and elsewhere tried to select and promote professors according to the quality of their published work, and an accepted hierarchy of positions emerged. The bureaucrats were upset when a gifted scholar like Immanuel Kant ignored this hierarchy and refused to leave the city of his choice to accept a desirable chair elsewhere. Around the turn of the nineteenth century, the pace of transformation reached a climax.

In these years, intellectuals inside and outside the university developed a new myth, one that Clark classes as Romantic. They argued that Wissenschaft—systematic, original research unencumbered by superstition or the authority of mere tradition—was the key to all academic achievement. If a university wanted to attract foreign students, it must appoint professors who could engage in such scholarship. At a great university like Göttingen or Berlin, students, too, would do original research, writing their own dissertations instead of paying the professors to do so, as their fathers probably had. Governments sought out famous professors and offered them high salaries and research funds, and stipends for their students. The fixation on Wissenschaft placed the long-standing competition among universities on an idealistic footing.

Between 1750 and 1825, the research enterprise established itself, along with institutions that now seem eternal and indispensable: the university library, with its acquisitions budget, large building, and elaborate catalogues; the laboratory; the academic department, with its fellowships and specialized training. So did a new form of teaching: the seminar, in which students learned by doing, presenting reports on their original research for the criticism of their teachers and colleagues. The new pedagogy prized novelty and discovery; it was stimulating, optimistic, and attractive to students around the world. Some ten thousand young Americans managed to study in Germany during the nineteenth century. There, they learned that research defined the university enterprise. And that is why we still make our graduate students write dissertations and our assistant professors write books. The multicultural, global faculty of the American university still inhabits the all-male, and virtually all-Christian, research universities of Mommsen’s day.

On the other hand, in England

He also uses the ancient universities of Oxford and Cambridge as a traditionalist foil to the innovations of Germany. Well into the nineteenth century, these were the only two universities in England, and dons—who were not allowed to marry—lived side by side with undergraduates, in an environment that had about it more of the monastery than of modernity. The tutorial method, too, had changed little, and colleges were concerned less with producing great scholars than with cultivating a serviceable crop of civil servants, barristers, and clergymen.

The review ends with a note of concern.

If Clark helps us to understand why the contemporary university seems such an odd, unstable compound of novelty and conservatism, he also leaves us with some cause for unease. Mommsen may have liked to see himself as a buccaneering capitalist, but his money came from the state. Today, by contrast, dwindling public support has forced university administrators to look for other sources of funding, and to assess professors and programs through the paradigm of the efficient market. Outside backers tend to direct their support toward disciplines that offer practical, salable results—the biological sciences, for instance, and the quantitative social sciences—and universities themselves have an incentive to channel money into work that will generate patents for them. The new regime may be a good way to get results, but it’s hard to imagine that this style of management would have found much room for a pair of eccentrics like James Watson and Francis Crick, or for the kind of long-range research that they did. As for the humanities, once the core of the enterprise—well, humanists these days bring in less grant money than Mommsen, and their salaries and working conditions reflect that all too clearly. The inefficient and paradoxical ways of doing things that, for all their peculiarity, have made American universities the envy of the world are changing rapidly. What ironic story will William Clark have to tell a generation from now?

He is probably right, but the example of people like Fred Kavli gives us reason for hope. The problem is not the loss of State funding, but the ballooning costs of science- especially in areas like Particle Physics. The benefits of competition are not to be sneezed at, and when faced with a resource crunch, smart people can innovate, as is seen with recent work on Particle Accelerators.