Tuesday, October 31, 2006

Tainted reality

It's been known for some time that what we see depends on what we expect to see. That is the basis of illusions such as the one to the right, which I took from Wikipedia. Square A is actually the same shade of gray as square B.

The BPS Research Digest reports on a related effect.

Karl Gegenfurtner and colleagues presented 14 participants with strangely coloured fruits – for example a pink banana – against a grey background. The participants’ task was to adjust the colour of the banana until it blended exactly with the grey background. It sounds easy, but the participants couldn’t do it because as they adjusted the colour, they compensated not just for the banana’s actual pink pigmentation, but also for a yellowness that only existed in their mind, thus leaving the banana with a slight bluish hue. That is, their memory for the typical colour of a banana was interfering with their performance.

By contrast, the participants didn’t have any trouble adjusting the colour of anonymous spots of light to make them blend in with the grey background – thus suggesting it wasn’t some quirk of the experimental set-up that was causing the participants difficulties with the fruit and veg.

Moreover, when presented with a banana that had been correctly adjusted to perfectly blend in with the grey background, the participants reported that it looked slightly yellow – a percept generated by their own mind, not by the actual colour of the banana.

It came from the East

ALDaily pointed me to this New Yorker review of the book "The Ghost Map" by Steven Johnson. I keep forgetting how new cholera is.

Hippocrates mentioned cholera as a common post-childhood disease, but given that he thought it might be brought on by eating goat’s meat he was probably referring to a less malign form of diarrhea. It was almost certainly not the life-threatening epidemic disease that emerged from India in 1817 and which then began its spread around the world, travelling, as Snow said, “along the great tracks of human intercourse”—colonialism and global trade. The first pandemic of what the British and the Americans called Asiatic cholera (or cholera morbus) reached Southeast Asia, East Africa, the Middle East, and the Caucasus, but petered out in 1823. A second pandemic, between 1826 and 1837, also originated in India, but this time it took a devastating toll on both Europe and America, arriving in Britain in the autumn of 1831 and in America the following year. By 1833, twenty thousand people had died of cholera in England and Wales, with London especially hard hit. A third pandemic swept England and Wales in 1848-49 (more than fifty thousand dead) and again in 1854, when thirty thousand died in London alone.

The description of the disease will loosen your bowels

Cholera is a horrific illness. The onset of the disease is typically quick and spectacular; you can be healthy one moment and dead within hours. The disease, left untreated, has a fatality rate that can reach fifty per cent. The first sign that you have it is a sudden and explosive watery diarrhea, classically described as “rice-water stool,” resembling the water in which rice has been rinsed and sometimes having a fishy smell. White specks floating in the stool are bits of lining from the small intestine. As a result of water loss—vomiting often accompanies diarrhea, and as much as a litre of water may be lost per hour—your eyes become sunken; your body is racked with agonizing cramps; the skin becomes leathery; lips and face turn blue; blood pressure drops; heartbeat becomes irregular; the amount of oxygen reaching your cells diminishes. Once you enter hypovolemic shock, death can follow within minutes. A mid-nineteenth-century English newspaper report described cholera victims who were “one minute warm, palpitating, human organisms—the next a sort of galvanized corpse, with icy breath, stopped pulse, and blood congealed—blue, shrivelled up, convulsed.” Through it all, and until the very last stages, is the added horror of full consciousness. You are aware of what’s happening: “the mind within remains untouched and clear,—shining strangely through the glazed eyes . . . a spirit, looking out in terror from a corpse.”

The received wisdom was that diseases reflect an imbalance of the four humours familiar to Indians from Ayurveda (blood, phlegm, yellow bile, and black bile), and that epidemic diseases were caused by atmospheric miasmas.

The fact that the poor suffered most in many epidemics was readily accommodated by the miasmal theory: certain people—those who lived in areas where the atmosphere was manifestly contaminated and who led a filthy and unwholesome way of life—were “predisposed” to be afflicted. The key indicator of miasma was stench. An aphorism of the nineteenth-century English sanitary reformer Edwin Chadwick was “All smell is disease.” Sydenham’s belief in a subterranean origin of miasmas gradually gave way to the view that they were caused by the accumulation of putrefying organic materials—a matter of human responsibility. As Charles E. Rosenberg’s hugely influential work “The Cholera Years” (1962) noted, when Asiatic cholera first made its appearance in the United States, in 1832, “Medical opinion was unanimous in agreeing that the intemperate, the imprudent, the filthy were particularly vulnerable.” During an early outbreak in the notorious Five Points neighborhood of Manhattan, a local newspaper maintained that this was an area inhabited by the most wretched specimens of humanity: “Be the air pure from Heaven, their breath would contaminate it, and infect it with disease.” The map of cholera seemed so intimately molded to the moral order that, as Rosenberg put it, “to die of cholera was to die in suspicious circumstances.” Rather like syphilis, it was taken as a sign that you had lived in a way you ought not to have lived. “The great mass of people . . . don’t know that the miasma of an unscavenged street or impure alley is productive of cholera and disease,” the English liberal economic activist Richard Cobden observed in 1853. “If they did know these things, people would take care that they inhabited better houses.”

Élite presumptions to the contrary, the London poor did not enjoy living in squalor. In 1849, a group of them wrote a joint letter to the London Times:

"We live in muck and filthe. We aint got no priviz, no dust bins, no drains, no water-splies . . . . The Stenche of a Gully-hole is disgustin. We all of us suffer, and numbers are ill, and if the Colera comes Lord help us. . . . We are livin like piggs, and it aint faire we shoulde be so ill treted. "

But some sanitary reformers, Florence Nightingale among them, opposed contagionism precisely because they believed that the poor were personally responsible for their filth: contagionism undermined your ability to hold people to account for their unwholesome way of life. Whereas, in a miasmal view of the world, the distribution of disease followed the contours of morality—your nose just knew it—infection by an external agent smacked of moral randomness.

The hero of the tale is John Snow, an anesthetist and a founding member of the London Epidemiological Society, who wielded data and common sense to great effect. He asked some good questions.

Why was it, he wondered, that people most exposed to these supposedly noxious miasmas—sewer workers, for example—were no more likely to be afflicted with cholera than anyone else? Snow also knew that the concentration of gases declined rapidly over distance, so how could a miasma arising from one source pollute the atmosphere of a whole neighborhood, or even a city? Why, if many of those closest to the stench were unaffected, did some of those far removed from it become ill? And there were some notable outbreaks of cholera that didn’t appear to fit with the moral and evidential underpinnings of miasmal theory. Sometimes the occupants of one building fell ill while those in an adjacent building, at least as squalid, escaped. Moreover, cholera attacked the alimentary, not the respiratory, tract. Why should that be, if the vehicle of contagion was in the air as opposed to something ingested?

He came to the water supply

From medieval times, water had been drawn both from urban wells and from the Thames and its tributaries. In the early seventeenth century, the so-called New River was constructed; it carried Hertfordshire spring water, by gravity alone, to Clerkenwell, a distance of almost forty miles. During the eighteenth century and the early nineteenth, a number of private water companies were established, taking water from the Thames and using newly invented steam pumps to deliver it by iron pipe. By the middle of the nineteenth century, there were about ten companies supplying London’s water. Many of these companies drew their water from within the Thames’s tidal section, where the city’s sewage was also dumped, thus providing customers with excrement-contaminated drinking water. In the early eighteen-fifties, Parliament had ordered the water companies to shift their intake pipes above the tideway by August of 1855: some complied quickly; others dragged their feet.

When cholera returned, in 1854, Snow was able to identify a number of small districts served by two water companies, one still supplying a fecal cocktail and one that had moved its intake pipes to Thames Ditton, above the tidal section. Snow compiled tables showing a strong connection in these districts between cholera mortality and water source. Snow’s “grand experiment” was supposed to be decisive: there were no pertinent variables distinguishing the two populations other than the origins of their drinking water. As it turned out, the critical evidence came not from this study of commercially piped river water but from a fine-grained map showing the roles of different wells. Snow lived on Sackville Street, just around the corner from the Royal Academy of Arts, and in late August cholera erupted practically next door, in an area of Soho. It was, Snow later wrote, “the most terrible outbreak of cholera which ever occurred in this kingdom”—more than five hundred deaths in ten days.

He produced one of the most famous maps ever, one that Edward Tufte praises in his book "The Visual Display of Quantitative Information".

Using the Weekly Return of Births and Deaths, which was published by William Farr, a statistician in the Office of the Registrar-General, and a staunch anti-contagionist, Snow homed in on the microstructure of the epidemic. He began to suspect contaminated water in a well on Broad Street whose pump served households in about a two-block radius. The well had nothing to do with commercially piped water—which in this neighborhood happened to be relatively pure—but it was suspicious nonetheless. Scientists at the time knew no more about the invisible constituents of the water supply than they did about the attributes of specific miasmas—Snow wrote that the “morbid poison” of cholera “must necessarily have some sort of structure, most likely that of a cell,” but he could not see anything that looked relevant under the microscope—so even Snow still used smell as an important diagnostic sign. He recorded a local impression that, at the height of the outbreak, the Broad Street well water had an atypically “offensive smell,” and that those who were deterred by it from drinking the water did not fall ill. What Snow needed was not the biological or chemical identity of the “morbid poison,” or formal proof of causation, but a powerful rhetoric of persuasion. The map Snow produced, in 1854, plotted cholera mortality house by house in the affected area, with bars at each address that showed the number of dead. The closer you lived to the Broad Street pump, the higher the pile of bars. A few streets away, around the pump at the top of Carnaby Street, there were scarcely any bars, and slightly farther, near the Warwick Street pump, there were none at all.

This is a post by Tufte describing a visit to John Snow's cholera water pump, and the image above is part of the map itself. Snow backed this up with additional data.

Snow’s study of the neighborhood enabled him to add persuasive anecdotal evidence to the anonymity of statistics. Just across from the Broad Street pump was the Poland Street workhouse, whose wretched inmates, living closely packed in miserable conditions, should have been ideal cholera victims. Yet the disease scarcely touched them. The workhouse, it emerged, had its own well and a piped supply from a company with uncontaminated Thames water. Similarly, there were no cholera deaths among the seventy workers in the Lion Brewery, on Broad Street. They drank mainly malt liquor, and the brewery had its own well. What Snow called the “most conclusive” evidence concerned a widow living far away, in salubrious Hampstead, and her niece, who lived in “a high and healthy part of Islington”: neither had gone anywhere near Broad Street, and both succumbed to cholera within days of its Soho outbreak. It turned out that the widow used to live in the affected area, and had developed a taste for the Broad Street well water. She had secured a supply on August 31st, and, when her niece visited, both drank from the same deadly bottle.

Next, Snow had to show how the Broad Street well had got infected, and for this he made use of the detailed knowledge of a local minister, Henry Whitehead. The minister had at first been skeptical of Snow’s waterborne theories, but became convinced by the evidence the doctor was gathering. Whitehead discovered that the first, or “index,” case of the Soho cholera was a child living on Broad Street: her diapers had been rinsed in water that was then tipped into a cesspool in front of a house just a few feet away from the well. The cesspool leaked and so, apparently, did the well. Snow persuaded the parish Board of Guardians to remove the handle from the Broad Street pump, pretty much ending the Soho cholera outbreak. There’s now a replica of the handleless pump outside a nearby pub named in John Snow’s honor.

All his efforts did not secure immediate victory, but the unbearable stench of the sewage-laden Thames finally forced the Government to act.

In the oppressively hot summer of 1858, London was overwhelmed by what the papers called “the Great Stink.” The already sewage-loaded Thames had begun to carry the additional burden of thousands of newly invented flush water closets, and improved domestic sanitation was producing the paradoxical result of worsened public sanitation. The Thames had often reeked before, but this time politicians fled the Houses of Parliament, on the river’s embankment, or attended with handkerchiefs pressed to their noses. “Whoso once inhales the stink can never forget it,” a newspaper reported, “and can count himself lucky if he live to remember it.” Measures to clean up the Thames had been on the agenda for some years, but an urgent fear of miasmas broke a political logjam, and gave immediate impetus to one of the great monuments of Victorian civil engineering: Sir Joseph Bazalgette’s system of municipal sewers, designed to deposit London’s waste below the city and far from the intakes of its water supply. (The system became fully operational in the mid-eighteen-seventies, and its pipes and pumps continue to serve London today.)

In the event, the Great Stink’s effects on municipal health were negligible: the Weekly Return showed no increase in deaths from epidemic disease, confounding miasmatists’ expectations. When cholera returned to London in 1866, its toll was much smaller, and the main outbreak was traced to a section of Bazalgette’s system which had yet to be completed. In many people’s opinion, Snow, who had died in 1858, now stood vindicated. And yet the improved municipal water system that rid the city of cholera had been promoted by sanitary reformers who held to the miasmal theory of disease—people who believed that sewage-laden drinking water was only a minor source of miasmas, but disgusting all the same. The right things were done, but not necessarily for the right scientific reasons.

The best we can hope for.

Sunday, October 29, 2006

What good is happiness? It can't buy you money.

Robert H. Frank is a distinguished left-leaning economist. In "Passions Within Reason", he proposed an explanation for why emotions exist. In this "Economic Scene" article, he argues that
1. Economic Growth does not make a society happier over time

Many critics of economic growth interpret this finding to imply that continued economic growth should no longer be a policy goal in developed countries. They argue that if money buys happiness, it is relative, not absolute, income that matters. As incomes grow, people quickly adapt to their new circumstances, showing no enduring gains in measured happiness. Growth makes the poor happier in low-income countries, critics concede, but not in developed countries, where those at the bottom continue to experience relative deprivation.

but
2. Economic Growth is still important because happiness is not the point of growth.

Subjective well-being is typically measured from responses to survey questions like, “All things considered, how satisfied are you with your life these days?” People’s responses are informative. They tend to be consistent over time and are highly correlated with assessments of them made by their friends. Positive self-assessments are strongly linked with behaviors indicating psychological health. Thus, people who report high levels of subjective well-being are more likely to initiate social contacts with friends and more likely to respond to requests for assistance from strangers. They are less likely than others to suffer from psychosomatic illnesses, seek psychological counseling or attempt suicide.
In short, self-assessments of subjective well-being tell us something important about human welfare. Yet the mere fact that they do not ratchet up over time provides little reason to question the desirability of economic growth.

Why is this? Because our emotions (our motivational system) evolved to ensure that we can never be permanently happy: we are survival machines for our genes.

The purpose of the human motivational system, according to psychologists, is not to make people feel happy, but rather to motivate actions that promote successful life outcomes. To be effective, this system should be flexible and adaptive, which it is. For example, people who become disabled typically experience deep depression after their accidents, but often adapt surprisingly quickly, soon reporting a mix of moods similar to what they had experienced before. Lottery winners invariably experience joy on receiving their windfalls, but often describe such feelings as fleeting.

Since life is a continuing competitive struggle, this is as it should be. Accident victims who can recover their psychological footing quickly will function more effectively in their new circumstances than those who dwell unhappily on their misfortune. Windfall recipients who quickly recover their hunger for more will compete more effectively than those who linger in complacent euphoria.

A Holocaust survivor once told me that his existence in the camps took place in two separate psychological spaces. In one, he was acutely aware of the unspeakable horror of his situation. But in the other, life seemed eerily normal. In this second space, each day presented challenges, and days in which he coped relatively successfully with them felt much like the good days of the past. To survive, he explained, it was critical to spend as much time as possible in the second space and as little as possible in the first.

These observations highlight the weakness of subjective well-being as a metric of welfare. The fact that people adapt quickly to new circumstances, good or bad, is just a design feature of the brain’s motivational system. The fact that a paraplegic may continue to be happy does not imply that his condition has not reduced his welfare.

Indeed, many well-adjusted paraplegics report that they would undergo surgery entailing substantial risk of death if doing so promised to restore their mobility. Similarly, the fact that people may adapt quickly to higher incomes says nothing about whether economic growth makes them better off.

This is a profound observation. Tyler Cowen quotes Dan Ariely making a similar point about his life after suffering severe burns over his entire body

...my personal reflections are only in partial agreement with the literature on well being (see also Levav 2002). In terms of agreement with adaptation, I find myself to be relatively happy in day-to-day life – beyond the level predicted (by others as well as by myself) for someone with this type of injury. Mostly, this relative happiness can be attributed to the human flexibility of finding activities and outlets that can be experienced and finding in these, fulfillment, interest, and satisfaction. For example, I found a profession that provides me with a wide-ranging flexibility in my daily life, reducing the adverse effects of my limitations on my ability. Being able to find happiness in new ways and to adjust one’s dreams and aspirations to a new direction is clearly an important human ability that muffles the hardship of wrong turns in life circumstances. It is possible that individuals who are injured at later stages of their lives, when they are more set in terms of their goals, have a more difficult time adjusting to such life-changing events.

However, these reflections also point to substantial disagreements with the current literature on well-being. For example, there is no way that I can convince myself that I am as happy as I would have been without the injury. There is not a day in which I do not feel pain, or realize the disadvantages in my situation. Despite this daily awareness, if I had participated in a study on well-being and had been asked to rate my daily happiness on a scale from 0 (not at all happy) to 100 (extremely happy), I would have probably provided a high number, probably as high as I would have given if I had not had this injury. Yet, such high ratings of daily happiness would have been high only relative to the top of my privately defined scale, which has been adjusted downward to accommodate the new circumstances and possibilities (Grice 1975). Thus, while it is possible to show that ratings of happiness are not influenced much based on large life events, it is not clear that this measure reflects similar affective states.

As a mental experiment, imagine yourself in the following situation. How you would rate your overall life satisfaction a few years after you had sustained a serious injury. How would your ratings reflect the impact of these new circumstances? Now imagine that you had a choice to make whether you would want this injury. Imagine further that you were asked how much you would have paid not to have this injury. I propose that in such cases, the ratings of overall satisfaction would not be substantially influenced by the injury, while the choice and willingness to pay would be - and to a very large degree. Thus, while I believe that there is some adaptation and adjustment to new life circumstances, I also believe that the extent to which such adjustments can be seen as reflecting true adaptation (such as in the physiological sense of adaptation to light for example) is overstated. Happiness can be found in many places, and individuals cannot always predict their ability to do so. Yet, this should not undermine our understanding of horrific life events, or reduce our effort to eliminate them.

Economic growth is not about making people happier, but about increasing their freedom of action. People (individually and collectively) get to make choices that would otherwise have been infeasible. Without the economic growth of the past 50 years, Dan Ariely would not have survived his burns, and society would not have been able to spare the effort and skill that went into rehabilitating him.

Thursday, October 26, 2006

Pangur Ban

I met this cat in "The Rattle Bag", a book of poems compiled by Ted Hughes and Seamus Heaney.

This poem was written by an 8th-century Irish student at the monastery of Carinthia, on a copy of St. Paul's Epistles. (The translation is by Robin Flower.)
I and Pangur Ban my cat,
'Tis a like task we are at:
Hunting mice is his delight,
Hunting words I sit all night.

Better far than praise of men
'Tis to sit with book and pen;
Pangur bears me no ill-will,
He too plies his simple skill.

'Tis a merry task to see
At our tasks how glad are we,
When at home we sit and find
Entertainment to our mind.

Oftentimes a mouse will stray
In the hero Pangur's way;
Oftentimes my keen thought set
Takes a meaning in its net.

'Gainst the wall he sets his eye
Full and fierce and sharp and sly;
'Gainst the wall of knowledge I
All my little wisdom try.

When a mouse darts from its den,
O how glad is Pangur then!
O what gladness do I prove
When I solve the doubts I love!

So in peace our task we ply,
Pangur Ban, my cat, and I;
In our arts we find our bliss,
I have mine and he has his.

Practice every day has made
Pangur perfect in his trade;
I get wisdom day and night
Turning darkness into light.

Rhythmic grumbling

Back in Bombay, 11:20 p.m.

I thought Deepavali would be past
but I returned while ropes of colored lights
still stammered in the windows,
and flights of bullying rockets
roused me from my bed
to sit listening
to little boys playing
with toy pistols in the street below.
They sound like gardeners clipping hedges in the park.


Postscript:
Jayan insists that I include his version:

Deepavali past
Ropes of colored lights on windows
Flights of bullying rockets
Tearing the smoked up sky

Almost unbearable

Tyler Cowen says this article by Dan Ariely is about the behavioral economics of pain. The only way I could read it was by taking frequent breaks.

However, it is wonderfully written.

Wednesday, October 25, 2006

The Roman Way

Nick Szabo recently blogged about an article on the Lex Gabinia, published in the New York Times by Robert Harris.

I recently devoured two novels by Robert Harris: "Imperium" and "Pompeii". "Imperium" is by far the better novel, but "Pompeii" has magnificent passages describing the eruption of Vesuvius, and wonderful descriptions of the achievements of Roman engineering.
"Pompeii" begins with this quote:

How can we withhold our respect from a water system that, in the first century AD, supplied the city of Rome with substantially more water than was supplied in 1985 to New York City?

A. Trevor Hodge,
(author of "Roman Aqueducts & Water Supply")

And this passage does resonate with me

Men mistook measurement for understanding. And they always had to put themselves at the centre of everything. That was their greatest conceit. The earth is becoming warmer- it must be our fault! The mountain is destroying us- we have not propitiated the gods! It rains too much, it rains too little- a comfort to think that these things are somehow connected to our behaviour, that if we lived only a little better, a little more frugally, our virtue would be rewarded.

Tuesday, October 24, 2006

Imbeciles

What are we supposed to make of this "news"?

There are 20 reported cases of HIV positive patients in the Patna Police hospital. Concerned by the rising numbers of HIV cases in the state police force, doctors have forwarded some suggestions to headquarters.

How many would be an acceptable number? Is this number out of proportion, considering how many people outside the Police force are HIV+?

They feel that all new recruits in the Bihar Police force should carry HIV negative certificates and that men presently serving in the force should undergo HIV tests. They also say that such tests should be carried out periodically.

What is the intention here? If this is good for the Police, why should this not be done for the general population as well?

Says Bihar Home Secretary Afzal Amanullah, "Doctors have reported that there has been a considerable rise in HIV positice (sic) cases among officers, not only among those ranked lower and constables, but senior officials as well."

Oh my goodness, not only petty constables, but officers as well? Intolerable. But why?

The real numbers of those suffering from the dreaded virus could be mind-boggling and the government's intervention is expected before the situation compeletly (sic) gets out of hand.

What should the Government do? Surely it can't be so difficult to ask a few questions when you are handed a story.

Monday, October 23, 2006

Parting of ways

Via ALDaily: The New Statesman has published a fascinating article by William Dalrymple on a 19th-century clash of civilizations.

At 4pm on a hazy, warm, sticky winter's day in Rangoon in November 1862, soon after the end of the monsoon, a shrouded corpse was escorted by a small group of British soldiers to an anonymous grave at the back of a walled prison enclosure. The enclosure lay overlooking the muddy brown waters of the Rangoon River, a little downhill from the great gilt spire of the Shwedagon Pagoda. Around it lay the newly built cantonment area of the port - a pilgrimage town that had been seized, burned and occupied by the British only ten years earlier.

The bier of the State Prisoner - as the deceased was referred to - was accompanied by his two sons and an elderly mullah. The ceremony was brief. The British authorities had made sure not only that the grave was already dug, but that quantities of lime were on hand to guarantee the rapid decay of both bier and body. When the shortened funeral prayers had been recited, the earth was thrown over the lime, and the turf carefully replaced to disguise the place of burial. A week later the British Commissioner, Captain H N Davis, wrote to London to report what had passed, adding:

Have since visited the remaining State Prisoners - the very scum of the reduced Asiatic harem; found all correct . . . The death of the ex-King may be said to have had no effect on the Mahomedan part of the populace of Rangoon, except perhaps for a few fanatics who watch and pray for the final triumph of Islam. A bamboo fence surrounds the grave, and by the time the fence is worn out, the grass will again have properly covered the spot, and no vestige will remain to distinguish where the last of the Great Moghuls rests.

His point in the article appears to be that, as the British achieved ascendancy in India and evangelical Christians became more prominent among them, they gradually changed character from being just another trading community in a vast subcontinent to a foreign presence that aggressively rejected any hint of being influenced by the country it inhabited.

The wills written by dying East India Company servants show that the practice of cohabiting with Indian bibis quickly declined: they turn up in one in three wills between 1780 and 1785, but are present in only one in four between 1805 and 1810. By the middle of the century, they have all but disappeared. In half a century, a vibrantly multicultural world refracted back into its component parts; children of mixed race were corralled into what became in effect a new Indian caste - the Anglo-Indians - who were left to run the railways, posts and mines.

He draws a parallel with our times.

Just like it is today, this process of pulling apart - of failing to talk, listen or trust each other - took place against the background of an increasingly aggressive and self-righteous west, facing ever stiffer Islamic resistance to western interference. For, as anyone who has ever studied the story of the rise of the British in India will know well, there is nothing new about the neo-cons. The old game of regime change - of installing puppet regimes, propped up by the west for its own political and economic ends - is one that the British had well mastered by the late 18th century.

By the 1850s, the British had progressed from aggressively removing independent-minded Muslim rulers, such as Tipu Sultan, who refused to bow before the will of the hyperpower, to destabilising and then annexing even the most pliant Muslim states. In February 1856, the British unilaterally annexed the prosperous kingdom of Avadh (or Oudh), using the excuse that the nawab, Wajid Ali Shah, a far-from-belligerent dancer and epicure, was "debauched".

The war that followed was essentially religious

The eventual result of this clash of rival fundamentalisms came in 1857 with the cataclysm of the Great Mutiny. Of the 139,000 sepoys of the Bengal army, all but 7,796 turned against their British masters, and the great majority headed straight to Zafar's court in Delhi, the centre of the storm. Although it had many causes and reflected many deeply held political and economic grievances - particularly the feeling that the heathen foreigners were interfering in the most intimate way with a part of the world to which they were entirely alien - the uprising was articulated as a war of religion, and especially as a defensive action against the rapid inroads that missionaries, Christian schools and Christian ideas were making in India, combined with a more generalised fight for freedom from occupation and western interference.

Although the great majority of the sepoys were Hindus, in Delhi a flag of jihad was raised in the principal mosque, and many of the insurgents described themselves as mujahedin or jihadis. Indeed, by the end of the siege, after a significant proportion of the sepoys had melted away, hungry and dispirited, the proportion of jihadis in Delhi grew to be about half of the total rebel force, and included a regiment of "suicide ghazis" from Gwalior who had vowed never to eat again and to fight until they met death at the hands of the kafirs, "for those who have come to die have no need for food".

One of the causes of unrest, according to a Delhi source, was that "the British had closed the madrasas". These words had no resonance to the Marxist historians of the 1960s who looked for secular and economic grievances to explain the uprising. Now, in the aftermath of the attacks of 11 September 2001 and 7 July 2005, they are phrases we understand all too well. Words such as jihad scream out of the dusty pages of the Urdu manuscripts, demanding attention.

There is a direct link between the jihadis of 1857 and those we face today. The reaction of the educated Delhi Muslims after 1857 was to reject both the west and the gentle Sufi traditions of the late Mughal emperors, whom they tended to regard as semi-apostate puppets of the British; instead, they attempted to return to what they regarded as pure Islamic roots.

With this in mind, disillusioned refugees from Delhi founded a madrasa in the Wahhabi style at Deoband, north of Delhi, that went back to Koranic basics and rigorously stripped out anything European from the curriculum. One hundred and forty years later, it was out of Deobandi madrasas in Pakistan that the Taliban emerged to create the most retrograde Islamic regime in modern history, a regime that in turn provided the crucible from which emerged al-Qaeda, and the most radical Islamist counter-attack the modern west has yet had to face.

A fascinating tale, but there is nothing uniquely subcontinental about this story: every country in Asia, Africa, and South America has a similar tale to tell. China and Korea suffered unbelievable horrors in the same period. General Gordon became a martyr to Victorian England when the Sudanese Mahdi killed him at Khartoum.
It may be possible to trace the origins of today's Islamic jihads to events that are over a century old, but I am not sure that it helps. The jihadis were a negligible presence until a bare 20 years ago, and the origins of today's terror are equally close to hand: the U.S. armed and trained the mujahideen to battle the Soviet infidel, Pakistan gave them succor in an effort to win American kudos (and arms with which to confront India), and the Saudis funded them to appease their own people.
The fuel that feeds this constant skirmishing is the evident illegitimacy of the governments of every Islamic state from Islamabad to Casablanca; it is not only the British who parted ways with the people they ruled. Their unfortunate peoples, growing more impoverished by the year and repressed by their political masters, are seeking to escape into an imagined past of Islamic purity and potency.
I am sure Dalrymple's book will illuminate the 19th century, but it will cast only an indirect light on the early 21st century.

Now for something quite different

Nick Szabo has posted some really good entries recently.

One post is about the contrasting fates of medieval China and Portugal. I loved the image on the right, which contrasts the size of Columbus' Santa Maria with that of one of the Zheng He "treasure ships" which the Chinese Emperor sent off on a voyage to "show the flag".

Another nice blog entry is about the pigeonhole principle.
The pigeonhole principle readily proves that there are people in Ohio with the same number of hairs on their head, that you can't eliminate the possibility of hash collisions when the set of possible input data is larger than the possible outputs, that if there are at least two people in a room then there must be at least two people in that room with the same number of cousins in the room, and that a lossless data compression algorithm must always make some files longer. This is just the tip of the iceberg of what the pigeonhole principle can help prove.
And he claims his math is rusty!
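The hash-collision example is easy to make concrete. Here is a minimal Python sketch; the toy 8-bit checksum standing in for a hash function is my own invention for illustration, not any real hash. Feed it more distinct inputs than it has possible outputs, and the pigeonhole principle guarantees at least one collision.

# A toy illustration of the pigeonhole principle applied to hashing.
# The "hash" is a made-up 8-bit checksum with only 256 possible outputs
# (the pigeonholes); 257 distinct inputs must therefore collide somewhere.

def toy_hash(s: str) -> int:
    """Sum the bytes of s modulo 256: only 256 possible values."""
    return sum(s.encode()) % 256

seen = {}
for i in range(257):  # 257 pigeons into 256 pigeonholes
    key = f"input-{i}"
    h = toy_hash(key)
    if h in seen:
        print(f"collision: {seen[h]!r} and {key!r} both hash to {h}")
        break
    seen[h] = key

A real hash function has vastly more pigeonholes, but the logic is identical: whenever the possible inputs outnumber the possible outputs, collisions cannot be eliminated, only made rare.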

Sunday, October 22, 2006

With stupidity the gods themselves contend...

More than two centuries after the great works of David Hume and Adam Smith, 20 years after the "Japan scare", the head of Der Spiegel's Berlin office publishes this.

The world war for wealth calls for a different, but every bit as contradictory, solution

and

The two camps are divided between Europe and America on the one side and Asia on the other. But so far there has been no shouting, no bluster and no shooting. Nor have there been any threats, demands or accusations. On the contrary, there is an atmosphere of complete amiability wherever our politicians and business executives might travel in Asia. At airports in Beijing, Jakarta, Singapore and New Delhi red carpets lie ready, Western national anthems can be played flawlessly on cue -- and they even parry Western complaints about intellectual property theft, environmental damage and human rights abuses with a polite patience that can only be admired. The Asians are the friendliest conquerors the world has ever seen.

and

Their secret is stoic perseverance, the weapon they use to pursue their own interests while at the same time disregarding ours. What looks like a market economy in Asia, actually follows the rules of a type of society which former German chancellor Ludwig Erhard liked to call a "termite state." In a termite state, it is the collective rather than the individual which sets the agenda. Tasks that serve the aims of society's leaders are assigned to the individual in a clandestine manner that is barely perceptible to outsiders. It is a state that encourages as much collective behavior as possible but only as much freedom as necessary. We don't know what they feel, we don't know what they think and we have no way of guessing what they are planning. Indeed, this is what makes China a dark superpower.

Beyond parody.

The battle of the loonies

I adore Richard Dawkins' writings on evolution but, while I generally agree with him on religion, I find his zeal disturbing. Terry Eagleton's incoherent, incomprehensible review in the London Review of Books, however, is bound to confirm Dawkins' opinions of his opponents.
What could Terry mean by

For Judeo-Christianity, God is not a person in the sense that Al Gore arguably is. Nor is he a principle, an entity, or ‘existent’: in one sense of that word it would be perfectly coherent for religious types to claim that God does not in fact exist. He is, rather, the condition of possibility of any entity whatsoever, including ourselves. He is the answer to why there is something rather than nothing. God and the universe do not add up to two, any more than my envy and my left foot constitute a pair of objects.

How does he know that there is such a condition? And did God himself whisper in Terry Eagleton's ear so that he knows this:

This, not some super-manufacturing, is what is traditionally meant by the claim that God is Creator. He is what sustains all things in being by his love; and this would still be the case even if the universe had no beginning. To say that he brought it into being ex nihilo is not a measure of how very clever he is, but to suggest that he did it out of love rather than need. The world was not the consequence of an inexorable chain of cause and effect. Like a Modernist work of art, there is no necessity about it at all, and God might well have come to regret his handiwork some aeons ago. The Creation is the original acte gratuit. God is an artist who did it for the sheer love or hell of it, not a scientist at work on a magnificently rational design that will impress his research grant body no end.

This is the point at which I gave up and went to sleep.

Hey, You Got Something To Eat?

asks A Goat, in The Onion.

Saturday, October 21, 2006

Dilbert Rules


Read and learn

Where all are above average

The Business Standard has published a grotesquely shoddy study on the performance of Fund Managers in India. Really, you expect better from this paper.

Indian equity fund managers have managed to beat the benchmark indices hands down, despite the stock markets going through tumultuous times over the past decade and a half.

The first-ever ranking of fund managers, based on performance throughout their career, reveals that 90 per cent of the fund managers had a 50 per cent rate of outperformance versus the benchmark Nifty index. In other words, of the 32 equity fund managers ranked, 29 bettered the Nifty at least half the time.

This may be meaningful because

“Apart from their skills, a key reason for the large-scale outperformance is that mutual funds are a tiny fraction of the whole market in our country, while in the US, mutual funds are the market itself, limiting the scope of fund managers bettering the market,” added Kumar.

Translation: these dudes have been thriving on the mistakes of the retail investors who swarm our markets while, in the US, one Fund Manager can outperform only at the expense of another.
And

While every one of the leading fund managers seemed to have a unique style of investing, a common thread was a bias towards growth. “Since most companies in India have still not attained their full potential, focusing on growth can bring in rich rewards,” said Subramanian of Franklin Templeton.

Meaning: we take more risks, which pay off in a rising market, but can sink us if the market turns against us.
However, the number one reason for this performance has not been explicitly presented: sample selection bias.

The study ranked only equity managers with a minimum five-year track record and debt managers with at least two years of experience. The detailed methodology and results have been covered in the magazine, along with the profile of the five leading managers in the two categories.

Simply put, what do the authors want me to take away from such a study? That it is possible to beat the market over the long term, that it's particularly easy to do this in India, and so I should hand my money over to these fund managers. Now, they may be right, but this study inspires no confidence.

What they should have done is to look at all the fund managers who were in business five years ago, and then see how they performed over these years. That would have given us a feel for how easy it is for these particular professionals to outperform, given the large number of amateurs mucking about in the marketplace.
The way they have conducted this study, all the fund managers who went out of business in these past five years have not been counted, hence the conclusion that 90% of fund managers beat the benchmark Nifty at least half the time.
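How much difference can leaving out the dead funds make? Here is a minimal Monte Carlo sketch in Python; all the parameters (fund count, volatility, cull rate) are made up for illustration. Every fund below has zero true skill by construction, yet averaging only the funds that survive five years of culling produces apparent outperformance.

# A toy simulation of survivorship bias. Each fund's annual excess return
# over the benchmark is pure noise with mean zero, i.e. no skill at all.
import random

random.seed(1)
N_FUNDS, N_YEARS = 200, 5

returns = [[random.gauss(0.0, 0.05) for _ in range(N_YEARS)]
           for _ in range(N_FUNDS)]

# Each year the worst 10% of the funds still alive are closed down.
alive = set(range(N_FUNDS))
for year in range(N_YEARS):
    worst_first = sorted(alive, key=lambda f: returns[f][year])
    for f in worst_first[:len(alive) // 10]:
        alive.discard(f)

def mean_excess(funds):
    vals = [r for f in funds for r in returns[f]]
    return sum(vals) / len(vals)

print(f"all {N_FUNDS} funds: {mean_excess(range(N_FUNDS)):+.2%} mean excess return")
print(f"{len(alive)} survivors: {mean_excess(alive):+.2%} mean excess return")
# The full cross-section averages roughly zero (as designed), while the
# survivors look like market-beaters purely because the losers were deleted.

That is the whole trick: condition a performance table on survival, and a population with no skill at all will still appear to beat the benchmark.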
This article has a nice section on how the fund management industry manages to make itself look good.

There are a number of ways that active fund managers have been able to promote the illusion that as a group they are adding investment value. The investment management industry has many tricks to make it appear as if everyone is doing better than average.

For a start they can load the dice by opening to the public a lot of historically profitable funds. Fund managers introduce creation bias into the equation by starting a lot of aggressive new funds. The way this trick works is they give seed capital to a number of promising young portfolio managers every year. These managers then invest and trade aggressively for the next year or two, establishing a track record. The fund managers that didn't do well get the flick, their money gets plowed into the more successful fund and then the investment company opens the new fund to the public and puts enormous marketing hype behind it. In this way managers can ensure that all new funds (that the public hear about) have excellent track records.

There are studies that have found that brand new funds often underperform their own track records, and as a group seem to do worse than more established funds, this is probably why. Note that these new funds don't do any better than average following their launch, but they do get to brag about impressive past performance.

The second trick is to bury the evidence if one of their public funds ever falls behind. Survivorship bias is introduced when fund managers close or merge their less successful funds with more successful ones. By continually weeding out the weaker funds, a fund manager can present a prospectus showing the entire range of investment options being market beaters. Similarly, when a fund is culled it is usually deleted from most databases, so history is rewritten by the winners. Professor Burton Malkiel, author of the excellent book A Random Walk Down Wall Street studied this phenomenon and estimates survivorship bias could add as much as 1.5%pa to the performance of the median fund manager. Investors do not benefit from survivorship bias, real world investors do lose money on funds that are deleted. All that improves is the historical average performance of fund databases.

Third trick is to throw a lot of hype behind high performing funds. There is little evidence that winning funds are able to sustain their high performance over the long term, so this can only be seen as a cynical marketing exercise, cashing in on last year's luck. Naturally funds that don't perform well don't advertise much, so all you see are ads showing high performance. (Rajeev: rather as your colleagues at work are more likely to tell you about the winning stocks that they purchased, not the losers they have in their portfolio)

The fourth trick is as scurrilous as the rest, even a very poor fund that has underperformed over the longer term can have a good year or two, so as long as these funds only talk about recent past performance they can avoid the prickly question of longer term past results.

Similarly, funds can always brag about past glories, proudly displaying the fund manager of the year ribbon they won 3 years ago on every advertisement, and brag about a high long term performance even if the last few years have been dreadful.

Thursday, October 19, 2006

The Poet-Scholar

The Harvard Business Review has a surprisingly enjoyable interview with James G. March, "professor emeritus in management, sociology, political science, and education at Stanford University". From the introduction

In these pages, three years ago, consultants Laurence Prusak and Thomas H. Davenport reported the findings of a survey of prominent management writers who identified their own gurus. Although his is an unfamiliar name to most readers of this periodical, James G. March appeared on more lists than any other person except Peter Drucker.

and

March is perhaps best known for his pioneering contributions to organization and management theory. He has coauthored two classic books: Organizations (with Herbert A. Simon) and A Behavioral Theory of the Firm (with Richard M. Cyert). Together with Cyert and Simon, March developed a theory of the firm that incorporates aspects of sociology, psychology, and economics to provide an alternative to neoclassical theories. The underlying idea is that although managers make decisions that are intendedly rational, the rationality is “bounded” by human and organizational limitations. As a result, human behavior is not always what might be predicted when rationality is assumed.

One of the themes of his work appears to be the importance of the irrational, of behavior that does not seek to justify itself. It is unusual for the subject of an interview to insist on his own irrelevance

If there is relevance to my ideas, then it is for the people who contemplate the ideas to see, not for the person who produces them. For me, a feature of scholarship that is generally more significant than relevance is the beauty of the ideas. I care that ideas have some form of elegance or grace or surprise—all the things that beauty gives you.

and

No organization works if the toilets don’t work, but I don’t believe that finding solutions to business problems is my job. If a manager asks an academic consultant what to do and that consultant answers, then the consultant should be fired. No academic has the experience to know the context of a managerial problem well enough to give specific advice about a specific situation. What an academic consultant can do is say some things that, in combination with the manager’s knowledge of the context, may lead to a better solution.

and

The scholar tries to figure out, What’s going on here? What are the underlying processes making the system go where it’s going? What is happening, or what might happen? Scholars talk about ideas that describe the basic mechanisms shaping managerial history—bounded rationality, diffusion of legitimate forms, loose coupling, liability of newness, competency traps, absorptive capacity, and the like. In contrast, experiential knowledge focuses on a particular context at a particular time and on the events of personal experience. It may or may not generalize to broader things and longer time periods; it may or may not flow into a powerful theory; but it provides a lot of understanding of a particular situation. A scholar’s knowledge cannot address a concrete, highly specific context, except crudely. Fundamental academic knowledge becomes more useful in new or changing environments, when managers are faced with the unexpected or the unknown. It provides alternative frames for looking at problems rather than solutions to them.

His ideas on friendship and love reflect the influence of Kierkegaard's idea of a "leap of faith"

We justify actions by their consequences. But providing consequential justification is only a part of being human. It is an old issue, one with which Kant and Kierkegaard, among many others, struggled. I once taught a course on friendship that reinforced this idea for me. By the end of the course, a conspicuous difference had emerged between some of the students and me.

They saw friendship as an exchange relationship: My friend is my friend because he or she is useful to me in one way or another. By contrast, I saw friendship as an arbitrary relationship: If you’re my friend, then there are various obligations that I have toward you, which have nothing to do with your behavior. We also talked about trust in that class. The students would say, “Well, how can you trust people unless they are trustworthy?” So I asked them why they called that trust. It sounded to me like a calculated exchange. For trust to be anything truly meaningful, you have to trust somebody who isn’t trustworthy. Otherwise, it’s just a standard rational transaction.

He argues for the value of being foolish

That paper sometimes gets cited—by people who haven’t read it closely—as generic enthusiasm for silliness. Well, maybe it is, but the paper actually focused on a much narrower argument. It had to do with how you make interesting value systems. It seemed to me that one of the important things for any person interested in understanding or improving behavior was to know where preferences come from rather than simply to take them as given.

So, for example, I used to ask students to explain the factual anomaly that there are more interesting women than interesting men in the world. They were not allowed to question the fact. The key notion was a developmental one: When a woman is born, she’s usually a girl, and girls are told that because they are girls they can do things for no good reason. They can be unpredictable, inconsistent, illogical. But then a girl goes to school, and she’s told she is an educated person. Because she’s an educated person, a woman must do things consistently, analytically, and so on. So she goes through life doing things for no good reason and then figuring out the reasons, and in the process, she develops a very complicated value system—one that adapts very much to context. It’s such a value system that permitted a woman who was once sitting in a meeting I was chairing to look at the men and say, “As nearly as I can tell, your assumptions are correct. And as nearly as I can tell, your conclusions follow from the assumptions. But your conclusions are wrong.” And she was right. Men, though, are usually boys at birth. They are taught that, as boys, they are straightforward, consistent, and analytic. Then they go to school and are told that they’re straightforward, consistent, and analytic. So men go through life being straightforward, consistent, and analytic—with the goals of a two-year-old. And that’s why men are both less interesting and more predictable than women. They do not combine their analysis with foolishness.

and

Well, there are some obvious ways. Part of foolishness, or what looks like foolishness, is stealing ideas from a different domain. Someone in economics, for example, may borrow ideas from evolutionary biology, imagining that the ideas might be relevant to evolutionary economics. A scholar who does so will often get the ideas wrong; he may twist and strain them in applying them to his own discipline. But this kind of cross-disciplinary stealing can be very rich and productive.

It’s a tricky thing, because foolishness is usually that—foolishness. It can push you to be very creative, but uselessly creative. The chance that someone who knows no physics will be usefully creative in physics must be so close to zero as to be indistinguishable from it. Yet big jumps are likely to come in the form of foolishness that, against long odds, turns out to be valuable. So there’s a nice tension between how much foolishness is good for knowledge and how much knowledge is good for foolishness.

Another source of foolishness is coercion. That’s what parents often do. They say, “You’re going to take dance lessons.” And their kid says, “I don’t want to be a dancer.” And the parents say, “I don’t care whether you want to be a dancer. You’re going to take these lessons.” The use of authority is one of the more powerful ways to encourage foolishness. Play is another. Play is disinhibiting. When you play, you are allowed to do things you would not be allowed to do otherwise. However, if you’re not playing and you want to do those same things, you have to justify your behavior. Temporary foolishness gives you experience with a possible new you—but before you can make the change permanent, you have to provide reasons.

Of course, all of this can be questioned, and someone like Richard Dawkins would probably make mincemeat of ideas like the "leap of faith". March himself recognizes the potentially tragic consequences of such foolishness

It’s all a question of balance. Soon after I wrote my paper on the technology of foolishness, I presented it at a conference in Holland. This was around 1971. One of my colleagues from Yugoslavia, now Croatia, came up and said, “That was a great talk, but please, when you come to Yugoslavia, don’t give that talk. We have enough foolishness.” And I think he may have been right.

I suspect Hume saw the relationship between the passions and reason better than anyone else. Whether we like it or not, passion governs us: nobody ever chose to refrain from suicide because of a cost-benefit analysis. We live on because we like living, and can ask for no other justification.

The poem "As I walked out one evening" by W.H. Auden seems apt.

Tuesday, October 17, 2006

The Imperial Style Part Deux

The New York Times has an article on Shing-Tung Yau- the "Emperor of Math", who was recently involved in controversy with Grigory Perelman.

In 1979, Shing-Tung Yau, then a mathematician at the Institute for Advanced Study in Princeton, was visiting China and asked the authorities for permission to visit his birthplace, Shantou, a mountain town in Guangdong Province.

At first they refused, saying the town was not on the map. Finally, after more delays and excuses, Dr. Yau found himself being driven on a fresh dirt road through farm fields to his hometown, where the citizens slaughtered a cow to celebrate his homecoming. Only long after he left did Dr. Yau learn that the road had been built for his visit.

A fascinating tale.

The Imperial Style

The New Yorker has a fine review of "Academic charisma and the origins of the research university" by William Clark.

At a Berlin banquet in 1892, Mark Twain, himself a worldwide celebrity, stared in amazement as a crowd of a thousand young students “rose and shouted and stamped and clapped, and banged the beer-mugs” when the historian Theodor Mommsen entered the room.

One of the more interesting topics is how Germany's universities took the lead in the 19th century. The driver was competition among a swarm of small states- somewhat like Singapore today.

The heart of Clark’s story, however, takes place not during the Middle Ages but from the Renaissance through the Enlightenment, and not in France but in the German lands of the Holy Roman Empire. This complex assembly of tiny territorial states and half-timbered towns had no capital to rival Paris, but the little clockwork polities transformed the university through the simple mechanism of competition. German officials understood that a university could make a profit by attaining international stature. Every well-off native who stayed home to study and every foreign noble who came from abroad with his tutor—as Shakespeare’s Hamlet left Denmark to study in Saxon Wittenberg—meant more income. And the way to attract customers was to modernize and rationalize what professors and students did.

This competition led to constant innovation.

Bureaucrats pressured universities to print catalogues of the courses they offered—the early modern ancestor of the bright brochures that spill from the crammed mailboxes of families with teen-age children. Gradually, the bureaucrats devised ways to insure that the academics were fulfilling their obligations. In Vienna, Clark notes, “a 1556 decree provided for paying two individuals to keep daily notes on lecturers and professors”; in Marburg, from 1564 on, the university beadle kept a list of skipped lectures and gave it, quarterly, to the rector, who imposed fines. Others demanded that professors fill in Professorenzetteln, slips of paper that gave a record of their teaching activities. Professorial responses to such bureaucratic intrusions seem to have varied as much then as they do now. Clark reproduces two Professorenzetteln from 1607 side by side. Michael Mästlin, an astronomer and mathematician who taught Kepler and was an early adopter of the Copernican view of the universe, gives an energetic full-page outline of his teaching. Meanwhile, Andreas Osiander, a theologian whose grandfather had been an important ally of Luther, writes one scornful sentence: “In explicating Luke I have reached chapter nine.”

And then

In an even more radical break with the past, professors began to be appointed on the basis of merit. In many universities, it had been routine for sons to succeed their fathers in chairs, and bright male students might hope to gain access to the privileged university caste by marrying a professor’s daughter. By the middle of the eighteenth century, however, reformers in Hanover and elsewhere tried to select and promote professors according to the quality of their published work, and an accepted hierarchy of positions emerged. The bureaucrats were upset when a gifted scholar like Immanuel Kant ignored this hierarchy and refused to leave the city of his choice to accept a desirable chair elsewhere. Around the turn of the nineteenth century, the pace of transformation reached a climax.

In these years, intellectuals inside and outside the university developed a new myth, one that Clark classes as Romantic. They argued that Wissenschaft—systematic, original research unencumbered by superstition or the authority of mere tradition—was the key to all academic achievement. If a university wanted to attract foreign students, it must appoint professors who could engage in such scholarship. At a great university like Göttingen or Berlin, students, too, would do original research, writing their own dissertations instead of paying the professors to do so, as their fathers probably had. Governments sought out famous professors and offered them high salaries and research funds, and stipends for their students. The fixation on Wissenschaft placed the long-standing competition among universities on an idealistic footing.

Between 1750 and 1825, the research enterprise established itself, along with institutions that now seem eternal and indispensable: the university library, with its acquisitions budget, large building, and elaborate catalogues; the laboratory; the academic department, with its fellowships and specialized training. So did a new form of teaching: the seminar, in which students learned by doing, presenting reports on their original research for the criticism of their teachers and colleagues. The new pedagogy prized novelty and discovery; it was stimulating, optimistic, and attractive to students around the world. Some ten thousand young Americans managed to study in Germany during the nineteenth century. There, they learned that research defined the university enterprise. And that is why we still make our graduate students write dissertations and our assistant professors write books. The multicultural, global faculty of the American university still inhabits the all-male, and virtually all-Christian, research universities of Mommsen’s day.

On the other hand, in England

He also uses the ancient universities of Oxford and Cambridge as a traditionalist foil to the innovations of Germany. Well into the nineteenth century, these were the only two universities in England, and dons—who were not allowed to marry—lived side by side with undergraduates, in an environment that had about it more of the monastery than of modernity. The tutorial method, too, had changed little, and colleges were concerned less with producing great scholars than with cultivating a serviceable crop of civil servants, barristers, and clergymen.

The review ends with a note of concern.

If Clark helps us to understand why the contemporary university seems such an odd, unstable compound of novelty and conservatism, he also leaves us with some cause for unease. Mommsen may have liked to see himself as a buccaneering capitalist, but his money came from the state. Today, by contrast, dwindling public support has forced university administrators to look for other sources of funding, and to assess professors and programs through the paradigm of the efficient market. Outside backers tend to direct their support toward disciplines that offer practical, salable results—the biological sciences, for instance, and the quantitative social sciences—and universities themselves have an incentive to channel money into work that will generate patents for them. The new regime may be a good way to get results, but it’s hard to imagine that this style of management would have found much room for a pair of eccentrics like James Watson and Francis Crick, or for the kind of long-range research that they did. As for the humanities, once the core of the enterprise—well, humanists these days bring in less grant money than Mommsen, and their salaries and working conditions reflect that all too clearly. The inefficient and paradoxical ways of doing things that, for all their peculiarity, have made American universities the envy of the world are changing rapidly. What ironic story will William Clark have to tell a generation from now?

He is probably right, but the example of people like Fred Kavli gives us reason for hope. The problem is not the loss of State funding, but the ballooning costs of science- especially in areas like Particle Physics. The benefits of competition are not to be sneezed at, and when faced with a resource crunch, smart people can innovate, as is seen with recent work on Particle Accelerators.

Monday, October 16, 2006

Cell phones at sea

I first heard of this development in the Economist, but Greg Mankiw blogs that fishermen in my home state are taking their mobile phones to sea with them, and posts the abstract of a paper by Robert Jensen of the Kennedy School.

Thursday, October 12, 2006

The Charms of the Bourgeoisie

The new Nobel Laureate has an article out in the Wall Street Journal, on his current interest- the "dynamism" of economies.

There are two economic systems in the West. Several nations--including the U.S., Canada and the U.K.--have a private-ownership system marked by great openness to the implementation of new commercial ideas coming from entrepreneurs, and by a pluralism of views among the financiers who select the ideas to nurture by providing the capital and incentives necessary for their development. Although much innovation comes from established companies, as in pharmaceuticals, much comes from start-ups, particularly the most novel innovations. This is free enterprise, a k a capitalism.

The other system--in Western Continental Europe--though also based on private ownership, has been modified by the introduction of institutions aimed at protecting the interests of "stakeholders" and "social partners." The system's institutions include big employer confederations, big unions and monopolistic banks. Since World War II, a great deal of liberalization has taken place. But new corporatist institutions have sprung up: Co-determination (cogestion, or Mitbestimmung) has brought "worker councils" (Betriebsrat); and in Germany, a union representative sits on the investment committee of corporations. The system operates to discourage changes such as relocations and the entry of new firms, and its performance depends on established companies in cooperation with local and national banks. What it lacks in flexibility it tries to compensate for with technological sophistication. So different is this system that it has its own name: the "social market economy" in Germany, "social democracy" in France and "concertazione" in Italy.

He explains that those who created the European systems had wonderful faith in the intelligence and benevolence of centralized systems.

When building the massive structures of corporatism in interwar Italy, theoreticians explained that their new system would be more dynamic than capitalism--maybe not more fertile in little ideas, such as might come to petit-bourgeois entrepreneurs, but certainly in big ideas. Not having to fear fluid market conditions, an entrenched company could afford to develop radical innovation. And with industrial confederations and state mediation available, such companies could arrange to avoid costly duplication of their investments. The state and its instruments, the big banks, could intervene to settle conflicts about the economy's direction. Thus the corporatist economy was expected to usher in a new futurismo that was famously symbolized by Severini's paintings of fast trains. (What was important was that the train was rushing forward, not that it ran on time.)

Friedrich Hayek and others showed that a pure capitalist system, by enabling experimentation and encouraging the participation of everyone, would maximize dynamism.

First, virtually everyone right down to the humblest employees has "know-how," some of what Michael Polanyi called "personal knowledge" and some merely private knowledge, and out of that an idea may come that few others would have. In its openness to the ideas of all or most participants, the capitalist economy tends to generate a plethora of new ideas.

Second, the pluralism of experience that the financiers bring to bear in their decisions gives a wide range of entrepreneurial ideas a chance of insightful evaluation. And, importantly, the financier and the entrepreneur do not need the approval of the state or of social partners. Nor are they accountable later on to such social bodies if the project goes badly, not even to the financier's investors. So projects can be undertaken that would be too opaque and uncertain for the state or social partners to endorse. Lastly, the pluralism of knowledge and experience that managers and consumers bring to bear in deciding which innovations to try, and which to adopt, is crucial in giving a good chance to the most promising innovations launched. Where the Continental system convenes experts to set a product standard before any version is launched, capitalism gives market access to all versions.

He then argues that the great benefit of such dynamism accrues to the employees.

The concept that people need problem-solving and intellectual development originates in Europe: There is the classical Aristotle, who writes of the "development of talents"; later the Renaissance figure Cellini, who jubilates in achievement; and Cervantes, who evokes vitality and challenge. In the 20th century, Alfred Marshall observed that the job is in the worker's thoughts for most of the day. And Gunnar Myrdal wrote in 1933 that the time will soon come when more satisfaction derives from the job than from consuming. The American application of this Aristotelian perspective is the thesis that most, if not all, of such self-realization in modern societies can come only from a career. Today we cannot go tilting at windmills, but we can take on the challenges of a career. If a challenging career is not the main hope for self-realization, what else could be? Even to be a good mother, it helps to have the experience of work outside the home.

And a comment that probably applies to India:

Why, then, if the "downside" is so exaggerated, is capitalism so reviled in Western Continental Europe? It may be that elements of capitalism are seen by some in Europe as morally wrong in the same way that birth control or nuclear power or sweatshops are seen by some as simply wrong in spite of the consequences of barring them. And it appears that the recent street protesters associate business with established wealth; in their minds, giving greater latitude to businesses would increase the privileges of old wealth. By an "entrepreneur" they appear to mean a rich owner of a bank or factory, while for Schumpeter and Knight it meant a newcomer, a parvenu who is an outsider. A tremendous confusion is created by associating "capitalism" with entrenched wealth and power. The textbook capitalism of Schumpeter and Hayek means opening up the economy to new industries, opening industries to start-up companies, and opening existing companies to new owners and new managers. It is inseparable from an adequate degree of competition. Monopolies like Microsoft are a deviation from the model.

He argues that even Rawls' conception of Justice appears risk-averse only because we have always insisted that work is only about money.

Yet the tone here is wrong. As Kant also said, persons are not to be made instruments for the gain of others. Suppose the wage of the lowest- paid workers was foreseen to be reduced over the entire future by innovations conceived by entrepreneurs. Are those whose dream is to find personal development through a career as an entrepreneur not to be permitted to pursue their dream?

To respond, we have to go outside Rawls's classical model, in which work is all about money. In an economy in which entrepreneurs are forbidden to pursue their self-realization, they have the bottom scores in self-realization--no matter if they take paying jobs instead--and that counts whether or not they were born the "least advantaged." So even if their activities did come at the expense of the lowest-paid workers, Rawlsian justice in this extended sense requires that entrepreneurs be accorded enough opportunity to raise their self-realization score up to the level of the lowest-paid workers--and higher, of course, if workers are not damaged by support for entrepreneurship. In this case, too, then, the introduction of entrepreneurial dynamism serves to raise Rawls's bottom scores.

Over at Asymmetrical Information, Jane Galt has a fine post on the relative importance of economic versus civil liberty. Her point would resonate with anyone who has shut his trap simply to avoid annoying his manager.

Working in technology in the 1990's, I had a fair number of friends and colleagues from the former Soviet Union. One of the things that surprised me was the way they described living under totalitarianism in the 1970's and 1980's. To them, the risk you took in joining the wrong group or saying the wrong thing was not, as it had been under Stalin, the risk of the KGB showing up one misty night to make you "disappear". It wasn't even going to the (horrible and often deadly) Soviet jails. The risk was that you would lose your job, or your apartment, or both. This was a very, very effective deterrent to any sort of dissidence.

When the State is the sole employer, where do you go if you anger it?

Obituary

Paul Halmos is dead.

Are women underpaid?

The New Economist blogs on "Were women's wages customary"- a paper by Joyce Burnett of Wabash College on the gap between the wages paid to men and women ("Throughout history and all over the world women have earned lower wages than men"). She asks whether this reflected custom or market forces.

The best way to resolve the question of who was right is to look for evidence that is not the expression of someone’s opinion, but is direct evidence from output. Evidence from production functions gives us such direct evidence. The available evidence from production functions uniformly indicates that women had lower marginal productivity than men.

Using census data to estimate the marginal products of men and women in the US in 1860, Craig and Field-Hendrey find that women were about 60 percent as productive as men in agriculture, and 40 to 50 percent as productive in manufacturing. Cox and Nye use data on nineteenth-century French manufacturing firms to estimate the marginal product of male and female workers and find productivity ratios ranging from 0.37 to 0.63. When they test for wage discrimination, they find no evidence of wage discrimination.

Benjamin and Brandt use a 1936 household survey in China to estimate the contribution of men and women to family income in general and crop income specifically; they find that women contributed 62 percent as much as men to farm production. Women are also less productive than men in agriculture in developing countries today; Jacoby finds that women were 46 percent as productive as men in Peruvian agriculture in the 1980s. While the estimates of the productivity ratio vary depending on the industry and location, all of the estimates suggest that women were substantially less productive than men in manual labor.

Thus I conclude that women’s wages, at least in competitive sectors such as agriculture and textile manufacturing, were not customary in the sense that they were lower than women’s productivity.
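
To see how a production function yields such a ratio, here is a toy Python sketch (my own illustration on simulated data, not the estimators the cited studies actually used). Assume a Cobb-Douglas technology in which female labor substitutes for male labor up to a factor theta; fitting the function to firm-level data recovers theta, the female/male marginal-productivity ratio.

    # Toy illustration, not the cited papers' method: simulate firms with
    # Y = A * K^alpha * (L_m + theta * L_f)^beta, where theta is the
    # female/male marginal-productivity ratio, then recover theta by
    # nonlinear least squares on log output.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    n = 500
    K = rng.lognormal(3, 0.5, n)    # capital (simulated)
    Lm = rng.lognormal(2, 0.4, n)   # male labor
    Lf = rng.lognormal(2, 0.4, n)   # female labor

    true_theta = 0.6                # roughly the agricultural ratio quoted above
    Y = 2.0 * K**0.3 * (Lm + true_theta * Lf)**0.6 * rng.lognormal(0, 0.1, n)

    def log_output(X, logA, alpha, beta, theta):
        K, Lm, Lf = X
        return logA + alpha * np.log(K) + beta * np.log(Lm + theta * Lf)

    params, _ = curve_fit(log_output, (K, Lm, Lf), np.log(Y), p0=[0, 0.5, 0.5, 1.0])
    print(f"estimated female/male productivity ratio: {params[3]:.2f}")  # ~0.60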

In the comments, knzn asks whether this confuses cause and effect: maybe because custom set men's wages higher, they were employed in high-productivity jobs while women were relegated to low-productivity jobs. I don't think that would survive in a competitive market where arbitrage would result in low-wage women being employed in high-productivity jobs, so that their wages get bid up. Lafayette wonders whether the difference may have been because the women could not do heavy manual labor. If that is the case, we would see the gap close as mechanization reduces the need for muscle vs brain.
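
The bidding-up logic is easy to make concrete. A toy tatonnement loop (entirely my own illustration, with made-up numbers): peg women's wage below their marginal product, and employers' competition for the under-priced workers pushes the wage back toward productivity.

    # Toy illustration of the arbitrage argument: if custom pegs women's
    # wages below their marginal product, employers compete for the
    # under-priced workers and bid the wage back up. All numbers invented.
    marginal_product = 0.6   # women's output per unit of labor
    labor_supply = 100       # women seeking work
    wage = 0.3               # customary wage, well below productivity

    while True:
        # each of 200 identical jobs hires a woman iff she is profitable
        labor_demand = 200 if marginal_product > wage else 0
        if labor_demand > labor_supply:
            wage *= 1.01     # excess demand: employers raise their offers
        else:
            break
    print(f"the wage converges to roughly the marginal product: {wage:.2f}")

A customary-wage story has to explain what stops this loop from running; in competitive sectors like agriculture and textiles, the paper finds no sign that anything did.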

Here is a previous post on this topic.

Two comments:
1. I am pretty sure there is discrimination against women, as there has been against almost any group you can think of. The case of Joan Robinson is a good example. She helped found the theory of monopolistic competition, and won the Cambridge controversy ('As historian Mark Blaug put it, Samuelson made a "declaration of unconditional surrender"'), and yet never won the Nobel and did not become a full professor at Cambridge University until 1965, when she was 62 years old. ('Perhaps not coincidentally, this was the year her husband retired from Cambridge.')
2. The wages reflect productivity in the market, which is fair. However, I expect this understates the total output of women, which includes unpaid labor at home.

Very Big

The world's third largest refinery complex is coming up at Jamnagar, India. The Guardian reports.

But Reliance says it gambled on a "paradigm shift" in the economics of the refinery business. The company, which began as a textile trader but moved into producing polyester, had noticed that India was importing millions of tonnes of refined hydrocarbons a year. Its managers projected prices creeping upwards largely due to three global oil trends.

First the oil being produced from the world's hydrocarbon reservoirs was increasingly "sour", or heavy, full of sulphur and other impurities that older refineries could not cope with.

Second was that no new capacity was being built around the world. Environmental concerns and the rising costs of infrastructure projects discouraged the oil majors from putting up refineries in Europe and America. No new oil refinery has been built in the US since the 1980s as environmental legislation has tightened.

Third was Reliance's belief that Asian economies would become dynamos of world growth - inevitably increasing demand for petro-products. It also saw that many European countries wanted cleaner petroleum, which required complex refining techniques. According to its strategists, commercial logic dictated that new, hi-tech refineries would be needed - and soon. Reliance, Mr Meswani says, decided to build big.

Thanks to the New Economist.

Tuesday, October 10, 2006

Necessary but not sufficient

Brad de Long has an article out at Project Syndicate. Our politicians would do well to read it. The article is about how the Mexican economy has developed in the years since NAFTA was signed.

Since NAFTA, Mexican real GDP has grown at 3.6% per year, and exports have boomed, going from 10% of GDP in 1990 and 17% of GDP in 1999 to 28% of GDP today. Next year, Mexico’s real exports will be five times what they were in 1990.

This is as expected- Mexico is right next to the world's largest economy, and has tariff-free and quota-free access to the US market. The macroeconomic portents are excellent:

We see great strengths in the Mexican economy – a stable macroeconomic environment, fiscal prudence, low inflation, little country risk, a flexible labor force, a strengthened and solvent banking system, successfully reformed poverty-reduction programs, high earnings from oil, and so on.

The averages, however, mask reality.

But the 3.6% rate of growth of GDP, coupled with a 2.5% per year rate of population increase, means that Mexicans’ mean income is barely 15% above that of the pre-NAFTA days, and that the gap between their mean income and that of the US has widened. Because of rising inequality, the overwhelming majority of Mexicans live no better off than they did 15 years ago. (Indeed, the only part of Mexican development that has been a great success has been the rise in incomes and living standards that comes from increased migration to the US, and increased remittances sent back to Mexico.)
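
De Long's 15% figure is simple compound-growth arithmetic; a quick back-of-the-envelope check in Python (the 13-year span is my assumption):

    # Back-of-the-envelope check: per-capita income grows at roughly the
    # GDP growth rate net of population growth, compounded over the period.
    gdp_growth = 0.036   # 3.6% real GDP growth per year (from the article)
    pop_growth = 0.025   # 2.5% population growth per year (from the article)
    years = 13           # approximate span since NAFTA (my assumption)

    factor = ((1 + gdp_growth) / (1 + pop_growth)) ** years
    print(f"mean income relative to pre-NAFTA: {factor:.2f}")  # prints ~1.15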

What could be holding the country back?

To be sure, economic deficiencies still abound in Mexico. According to the OECD, these include a very low average number of years of schooling, with young workers having almost no more formal education than their older counterparts; little on-the-job training; heavy bureaucratic burdens on firms; corrupt judges and police; high crime rates; and a large, low-productivity informal sector that narrows the tax base and raises tax rates on the rest of the economy. But these deficiencies should not be enough to neutralize Mexico’s powerful geographic advantages and the potent benefits of neo-liberal policies, should they?

Apparently they are. The demographic burden of a rapidly growing labor force appears to be greatly increased when that labor force is not very literate, especially when inadequate infrastructure, crime, and official corruption also take their toll.

Sounds familiar? This is not to say that we should roll back the reforms we have undertaken. Just that we need to be more cautious, more humble.

Paul Krugman once wrote a famous article arguing that there was no "Asian Miracle"- the tremendous growth that South East Asia saw in the 80s was the result of the backwardness of those economies. Starting from a small base, they could easily achieve tremendous rates of growth, but diminishing returns would inevitably set in, as the sketch below illustrates.
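
Krugman's diminishing-returns point falls out of even the simplest Solow-style growth model; here is a minimal Python sketch with invented parameters:

    # Minimal Solow-style sketch (all parameters invented): an economy far
    # below its steady state grows spectacularly at first, but capital
    # accumulation runs into diminishing returns and growth fades.
    alpha, s, delta = 0.3, 0.3, 0.05   # capital share, savings rate, depreciation
    k = 0.5                            # capital per worker, far below steady state
    y_prev = k**alpha
    for year in range(1, 41):
        k += s * k**alpha - delta * k  # accumulation: investment minus depreciation
        y = k**alpha                   # output per worker
        if year in (1, 10, 20, 40):
            print(f"year {year:2d}: output growth {100 * (y / y_prev - 1):5.1f}%")
        y_prev = y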
India is starting from an even smaller base, and our human capital is even poorer. As we grow, we may eventually find that the educated middle class is employed at wages comparable to the rest of the world, and so no longer enjoys any sort of cost advantage, while the teeming multitudes of illiterate, undernourished peasants find themselves wandering outside the nimbus of the Economic Miracle.

Expectations

Edmund Phelps wins the Nobel prize for Economics. Tyler Cowen has a great roundup of his work- which I can only vaguely understand. Edmund Phelps was a prospect in 2004, when the award went to Kydland and Prescott.

Arnold Kling links to an article by Phelps where he discusses the impact of demographics on the dollar. Relevant to my earlier post.

Monday, October 09, 2006

Keep it simple

This year's Ig Nobel prize "for literature" went to Princeton University psychologist Daniel M. Oppenheimer for his paper "Consequences of Erudite Vernacular Utilized Irrespective of Necessity: Problems With Using Long Words Needlessly," which appeared in the March issue of Applied Cognitive Psychology.

"It turns out that somewhere between two-thirds and three-quarters of people admit to replacing short words with longer words in their writing in an attempt to sound smarter," Oppenheimer said in an e-mail. "The problem is that this strategy backfires -- such writing is reliably judged to come from less intelligent authors.

I sure hope this result gets about- it may save me some painful reading.

Tyler Cowen once blogged about a paper, "False modesty: When Disclosing Good News Looks Bad" by Rick Harbaugh and Theodore To. The point is that it's marginal performers who have the greatest incentive to reveal good news.

Starting in 1998, Los Angeles health officials began requiring restaurants to post large hygiene grades at their entrances, with a high proportion of grades being an A (see Jin and Leslie, 2003). Why was it necessary to require even A restaurants to disclose their grade? Suppose diners have their own opinions based on experience or reputation, so good restaurants tend to do well even without disclosure. In this case it is the worst restaurants within the A category who have the strongest incentive to prove that they meet basic hygiene standards. Given this incentive, disclosure of even an A grade can be interpreted by diners as a bad sign.

Or consider whether a person with a PhD should use the title “Dr.” In many environments PhDs are relatively rare so using a title is a strongly favorable signal of the person’s professional credentials and we would expect titles to be used frequently. But in other environments, such as research universities, PhDs are quite common. In some fields faculty interact frequently with non-academics so a PhD might still be worth boasting about, but in other fields most interactions are between academics who expect each other to have PhDs. In these fields using a title might then be interpreted not just as redundant, but as a signal of insecurity that the person fears being thought of as unqualified without the title.
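
The asymmetry behind this is easy to see in a toy simulation (my own illustration, not Harbaugh and To's model): diners already observe a noisy reputation, and a posted grade only certifies that quality clears the A threshold, so the belief gain is large for a marginal-looking A restaurant and nearly nil for an obviously excellent one.

    # Toy counter-signalling simulation, not the paper's model. Quality is
    # uniform on [0, 100]; an "A" grade certifies quality >= 70; diners
    # also see reputation = quality + noise. Compare diners' inferred
    # quality with and without the posted grade, at two reputation levels.
    import numpy as np

    rng = np.random.default_rng(1)
    quality = rng.uniform(0, 100, 200_000)
    reputation = quality + rng.normal(0, 15, quality.size)

    def inferred_quality(rep, grade_posted, band=2.0):
        mask = np.abs(reputation - rep) < band
        if grade_posted:
            mask &= quality >= 70   # the grade certifies only the threshold
        return quality[mask].mean()

    for rep in (72, 95):  # marginal-looking vs clearly excellent restaurant
        gain = inferred_quality(rep, True) - inferred_quality(rep, False)
        print(f"reputation {rep}: belief gain from posting the A = {gain:4.1f}")

Because the marginal types gain most from disclosing, the act of disclosure itself becomes informative, which is why posting an A can end up reading as a bad sign.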

Addendum: Another post on counter-signalling.

Demography is destiny

The Economist has a survey on "Talent"- the "world's most sought-after commodity".

Two facts struck me:

the picture in much of the developed world is haunted by demography. By 2025 the number of people aged 15-64 is projected to fall by 7% in Germany, 9% in Italy and 14% in Japan.

and

RHR International, a consultancy, claims that America's 500 biggest companies will lose half their senior managers in the next five years or so, when the next generation of potential leaders has already been decimated by the re-engineering and downsizing of the past few decades. At the top of the civil service the attrition rate will be even higher.

Wealthy westerners wish to retire, while continuing to live the good life. Countries like India, which have relatively young populations, need to increase output and employment. One natural response would be for the former to invest in the latter, and then use the returns from their investments to purchase goods for consumption in their dotage. Will this happen?

Probably not. As the same article puts it, "How can India talk about its IT economy lifting the country out of poverty when 40% of its population cannot read?" India and other pretender economies will be hobbled by their failure to invest in human capital, which can take far longer to accumulate than the physical infrastructure we are known to lack. In this post, Jane Galt discusses the role of Human Capital in the post-war recoveries of Germany and Japan. In the article mentioned above, the Economist reports

Both India and China are suffering from acute skills shortages at the more sophisticated end of their economies. Wage inflation in Bangalore is close to 20%, and job turnover is double that (“Trespassers will be recruited” reads a sign in one office). The few elite institutions, such as India's Institutes of Technology, cannot meet demand.

So what now? The recent period of high profits is likely to pass as wage bills grow. As profits shrink, marginal businesses will go bust. Older Europeans and Americans will be encouraged, by higher salaries and changed labor laws, to stay on at work for longer than planned. Companies will adapt in many ways:
1. Some will simply try to outcompete their rivals on wages. Not a good idea, since it will remain difficult to distinguish the employees who are truly worth the new wages from the marginal ones.
2. Others will use innovative salary structures to try to ensure that their wages get results.
3. In some lines of business, changed organizational structures may become common.

Overall, though, I am sure we will see more of the same- at least as far as Economics is concerned. How the decline and aging of the west will affect global politics is a different matter.