Wednesday, February 29, 2012

On Merit

In this very wise essay, Michael Young reminds us that the 1958 book in which he coined the word "Meritocracy" was meant to be a warning.
It is good sense to appoint individual people to jobs on their merit. It is the opposite when those who are judged to have merit of a particular kind harden into a new social class without room in it for others.
There are any number of arguments against Meritocracy:
  1. It reassures the powerful and it demoralizes the powerless.
  2. It removes all checks on rent-seeking. As Michael Young notes in his essay
    So assured have the elite become that there is almost no block on the rewards they arrogate to themselves. The old restraints of the business world have been lifted and, as the book also predicted, all manner of new ways for people to feather their own nests have been invented and exploited.
  3. It confuses merit and marketability.
As always, I've learned a lot from Chris Dillow.

There is one other argument which I encountered on Andrew Gelman's blog, but have not seen elsewhere. In a post about an article by James Flynn (of the "Flynn effect"), Gelman says
Flynn also points out that the promotion and celebration of the concept of “meritocracy” is also, by the way, a promotion and celebration of wealth and status–these are the goodies that the people with more merit get.
Thus, because we believe that we reward virtue with wealth and power, not only do the wealthy and powerful gain legitimacy, but wealth and power themselves gain a moral sheen by association with virtue. At the limit, we no longer believe that virtue can be its own reward: it has to be accompanied by the insignia of social acclaim.

Tuesday, February 28, 2012


Christina and David Romer's new paper, “The Incentive Effects of Marginal Tax Rates: Evidence from the Interwar Era,” says that the incentive effects of marginal tax rates are
consistently positive, small, and precisely estimated
This really should not be a surprise. Most people would rather be rich than poor, but most people get up and go to work in the morning not because they want to be rich, but because they must or because they love their work. People who love their work will work harder as they get richer.

James Kwak notes that these results are based on the richest of the rich, the top 0.05% of the population, and hence the ones who would find it easiest to slack off if they decided that work isn't worth it; the rest of us would have even less choice
To put this in perspective, an elasticity of 0.19 implies that tax revenues would be maximized with a tax rate of 84 percent.
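Kwak's 84 percent is consistent with the textbook formula for the revenue-maximizing rate under a constant elasticity of taxable income e, namely t* = 1/(1 + e). That formula choice is my inference, not something Kwak spells out, but it reproduces his number exactly (a fuller treatment would also multiply e by the Pareto parameter of the income tail):

```python
# Revenue-maximizing top rate under a constant elasticity of taxable
# income e: revenue peaks at t* = 1 / (1 + e).  The refinement
# t* = 1 / (1 + a*e), with a the Pareto parameter of the income tail,
# is omitted here; Kwak's 84% matches the simple version.

def revenue_maximizing_rate(elasticity: float) -> float:
    """Rate above which a further increase shrinks the base by more than it gains."""
    return 1.0 / (1.0 + elasticity)

e = 0.19  # the Romers' estimated elasticity
print(f"t* = {revenue_maximizing_rate(e):.0%}")  # -> t* = 84%
```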
Should we raise taxes on the wealthy?

The normative aspect of taxation is genuinely difficult. I think most people who oppose higher taxes on the wealthy do so for ethical reasons. They feel it would be unjust to people who have worked hard for their wealth, and have benefited others. I often feel similar scruples, and certainly we shouldn't be taxing people merely because we can.

However, I also agree with Hayek on desert. The world is full of intelligent, hard-working, decent people. For every successful man who we admire for his intelligence and energy there are a dozen who we haven't heard of but who are equally worthy in any meaningful sense. The market rewards you not for what you have done, but for what "it" wants you and others to do more of. Certainly I don't see any good reason why the extremely wealthy should pay tax at a lower rate than those poorer than them.

The doubts which I do have are about the government's ability to serve our best interests even if it wishes to, though the solution to that would be to avoid overcomplicating government. Whatever your opinion of the "right" rate of tax to apply to the extremely wealthy, these estimates should make you revise it upwards.

Monday, February 27, 2012

A little light Sociology

I follow several blogs about Sociology and Anthropology. Their authors see things differently enough from me so that I have to make an effort to understand what they are saying. I frequently disagree with them, but I also change my mind often. It is an interesting exercise.

I took this image from a post by Lisa Wade over at Sociological Images.

It is difficult for us to see ourselves as others see us, and it is difficult for others to see us as we see ourselves. Lisa writes
Many believe that the U.S. is at the pinnacle of social and political evolution. One of the consequences of this belief is the tendency to define whatever holds in the U.S. as ideal and, insofar as other countries deviate from that, define them as problematic.
I suspect she wrote this only because she assumed that she would be read mostly by Americans. The situation in the cartoon is perfectly symmetrical. All people assume their way of life is "natural". Born into our cultures like fish to water, we cannot usually see ourselves from the outside. We assume that people from other communities are all alike. Because we are not aware of how we have internalized our way of life, we believe that we act freely (and why not?).

Because we cannot imagine ourselves into the situation of others, who have internalized their cultures as completely as we have ours, we assume that they act out of fear. As this post by the Last Psychiatrist shows, the media has a lot to answer for. This should be useful when we read about Iran in the newspapers.

Note: None of this is to say that I am a thorough-going cultural relativist. Yet.

Sunday, February 26, 2012

Archaeology is hard

Even though rice rarely survives in archaeological deposits, all plants absorb tiny amounts of silica from groundwater. The silica fills some of the plant's cells and when the plant decays it leaves microscopic cell-shaped stones, called phytoliths, in the soil. Careful study of phytoliths can reveal not just whether rice was being eaten but also whether it was domesticated.

Yan and MacNeish dug a sixteen-foot trench in Diaotonghuan cave near the Yangzi valley...
From Why the West Rules - For Now

Thursday, February 23, 2012

Links for 23 February 2012

First, a charming article by Carl Zimmer about biologist Thomas Seeley and what he has learned about bees. When the article turned to what we can learn about group decision making from bee-hives, this reader (who has read about eusociality and inclusive fitness) began to mutter to himself "but hives are essentially one big organism! How can decision-making in hives have any lessons for genetically diverse groups of humans?! What about free-riders?"

Whereupon I had to tell myself that Seeley knows that, just shut up and try to understand.
Groups work well, he argues, if the power of leaders is minimized. A group of people can propose many different ideas—the more the better, in fact. But those ideas will only lead to a good decision if listeners take the time to judge their merits for themselves, just as scouts go to check out potential homes for themselves.

Groups also do well if they’re flexible, ensuring that good ideas don’t lose out simply because they come late in the discussion. And rather than try to debate an issue until everyone in a group agrees, Seeley advises using a honeybee-style quorum. Otherwise the debate will drag on.
There are any number of ways for groups of people to take decisions collectively. Seeley is describing a way to use norms to arrive at better decisions. It won't work in all conditions. There are reasons why it works well in New England. However, I think our love affair with leadership is such that we may not be giving it a chance where it could be useful, and, given our present norms, it may not make sense for any one leader to try and change the way we take decisions.

Second, Felix Salmon rips into Bob Shiller for recommending that countries issue GDP bonds. Excellent points all. One point he makes severely weakens JW Mason's argument for rent control. JW Mason argues that
if an asset lasts forever, the share of its present value -- which is what matters for the decision to buy/build it -- that comes from the later years of its life goes arbitrarily close to zero.
When I read that, I should've remembered that rents typically go up, and so what Salmon says here about bonds more or less applies
If the coupons are steadily increasing, however, the math becomes very dangerous. The coupons will rise at the rate of nominal GDP growth, which in the US will probably be somewhere in the 4% to 5% range over the long term. As a result, if you’re a risk-averse person who wants a perpetual US government security and your discount rate is, say, 3%, then the expected value of a single Trill is actually infinite. Of course, no security trades at a price of infinity. But the fact that valuations can get so high in a low-interest-rate environment is all you need to know about just how volatile Trill prices could get.
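The danger Salmon describes is the standard growing-perpetuity (Gordon growth) formula at work: a claim paying a coupon that grows at rate g, discounted at rate r, is worth c/(r - g), which blows up as g approaches r and diverges beyond it. A toy sketch, using Salmon's own numbers for the divergent case:

```python
# Gordon growth model: a perpetuity whose coupon c grows at rate g,
# discounted at rate r, has present value c / (r - g) -- finite only
# when r > g.  Salmon's risk-averse holder (r = 3%) of a Trill whose
# coupons grow with nominal GDP (4-5%) is in the divergent case.

def growing_perpetuity_pv(c: float, r: float, g: float) -> float:
    if r <= g:
        return float("inf")  # the discounted coupons grow without bound
    return c / (r - g)

print(growing_perpetuity_pv(1.0, 0.06, 0.045))  # ~66.67: finite, since r > g
print(growing_perpetuity_pv(1.0, 0.03, 0.045))  # inf: Salmon's scenario
```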
I like Shiller though, and I hope Salmon is wrong in his guess of why Shiller backs GDP bonds.

Third, another excellent post by Felix Salmon, this time about the market for art. I agree with everything he says here
I do hold out some small hope that the Chinese art market will provide a correction to this syndrome — there, I’m told, the value of an art work is (at least sometimes) much less a function of its recognizability as the work of a certain artist, and much more a function of the way that it can fit itself into a long artistic tradition.
I recently exchanged emails with a close friend, in which I expressed a similar hope: that the (expected) rise of China, India, South America, and hopefully Africa will someday change art forever, that the European tradition which we all now consider to be the mainstream of art worldwide will prove to be a tributary of something much greater. It will mean that many artists we today consider established masters will be relegated by our descendants to the second rank, but it is our best bet for a fresh start in art.

Fourth, via a tweet by @EpicureanDealmaker, an excellent review by Gabriel Rossman of David Graeber's new book Debt: The First 5,000 Years.
Other interesting points he makes on debt are various ways that it becomes a moral obligation such that debtors are seen as sinners and religious salvation is seen as a spiritual analog to redemption. This helps explain something I never completely understood when watching The Sopranos, which is why gangsters first go to the trouble of getting someone to incur an illegal debt before shaking them down? It turns out that the point of loan-sharking instead of mere naked extortion is the victim feels a certain moral obligation to repay the debt and so loan sharks exploiting gambling addicts has the same logic as how many grifts (e.g., 419 advanced-fee fraud, the fiddle game, etc.) first involve the victim as co-conspirator in a crime against a real or imagined third party.
I've not read the book yet, but the many excellent reviews have made me really think and shaken some of my assumptions. However, like Rossman, I am very worried about Graeber's biases. We'll see.

Fifth, Richard Wiseman pays tribute to psychologist Ulrich Neisser. The monograph which he links to is great fun to read, and I had no idea that Neisser was the grandfather of the Invisible Gorilla experiment.

Sixth, via @MathUpdate on Twitter, an article: Ten misconceptions from the history of analysis and their debunking. I've only just begun to read it, but seriously, we need more such works. Too many laypeople assume that Mathematics is complete and have no idea what living Mathematicians do, but why blame them, when so many mathematicians assume the same: that Mathematics is complete, and that all living Mathematicians are doing is uncovering what remains hidden of its Platonic reality. Works like these help to complicate that picture and bring to the forefront the actual human process of making Mathematics.

Wednesday, February 22, 2012

Links for 22nd February 2012

First, I am so glad to see a neuroscience article which says that there is something we do for which we do not have a specialized brain region. Tom Stafford writes about why we find it so easy to remember faces, and so difficult to remember names.
Or to put it another way, we might use the same word – ‘remember’ – to describe our ability to place faces and names, but in fact we are describing two different psychological processes: recognition and recall.
Second, Alex Tabarrok at Marginal Revolution describes the secret agreement that created modern China.
Word of the secret agreement leaked out and local bureaucrats cut off Xiaogang from fertilizer, seeds and pesticides. But amazingly, before Xiaogang could be stopped, farmers in other villages also began to abandon collective property. In Beijing, Mao Zedong was dead and a new set of rulers, seeing the productivity improvements, decided to let the experiment proceed.
Incentives aren't everything, but they aren't nothing either.

Third, pretty much every introductory Economics textbook I have looked at discusses the evils of Rent Control. JW Mason has two posts at The Slack Wire, asking whether rent control really is such a bad thing. I am not entirely convinced by his arguments, but I am impressed.
On why Rent Control is desirable
The most compelling argument for rent control is neighborhood stabilization, the idea that social capital in an urban environment requires stable residence patterns. If prices are volatile, and this leads to a lot of residential turnover, the result can be a less desirable neighborhood for everyone.
And on why it won't deter new investment: in the United States, existing rent-control laws apply only to units built before a certain date. In NYC, it is 1974.
The significance of this is that, even if an asset lasts forever, the share of its present value -- which is what matters for the decision to buy/build it -- that comes from the later years of its life goes arbitrarily close to zero. Say the discount rate is 6 percent. Then 95 percent of the value of a perpetuity comes from the returns in the first 50 years. 99.7 percent comes from returns in the first 100 years
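Mason's percentages are easy to verify: at discount rate r, the fraction of a level perpetuity's present value delivered in its first N years is 1 - (1 + r)^(-N). A quick check of the two figures he quotes:

```python
# Share of a level perpetuity's present value that arrives in its
# first n years, at discount rate r: 1 - (1 + r)**(-n).

def pv_share_first_years(r: float, n: int) -> float:
    return 1.0 - (1.0 + r) ** -n

r = 0.06  # Mason's discount rate
print(f"{pv_share_first_years(r, 50):.1%}")   # -> 94.6%, his "95 percent"
print(f"{pv_share_first_years(r, 100):.2%}")  # -> 99.71%, his "99.7 percent"
```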
Fourth, I've just started to try GitHub and enjoyed reading this article in Wired. Nice insights into the culture of Silicon Valley. Maybe Jack Welch should read it.
And three months after that night in the sports bar, Wanstrath got a message from Geoffrey Grosenbach, the founder of PeepCode, an online learning site that had started using GitHub. “I’m hosting my company’s code here,” Grosenbach said. “I don’t feel comfortable not-paying you guys. Can I just send a check?”
It was the first of many. In July 2008, Microsoft acquired Powerset, the startup that was providing Preston-Werner with a day job. The software giant offered Preston-Werner a $300,000 bonus and stock options to stay on board for another three years. But he quit, betting everything on GitHub.

The Superman in the corner office

I was a remarkably simple-minded young man. To see this, you only need to know that I actually used to believe what I read in the business press. Since Businessweek told me that Jack Welch is a great manager, I believed them. All I can say in my defense is that all young people I've met have been stupid. That, though, is a subject for another post.

Now Felix Salmon shows us that Jack and Suzy Welch believe that they have a hammer, and that all employees are nails. This is their advice for Zuckerberg on how to manage post-IPO Facebook.
With all this exultant “barking,” there also needs to be bite — in the form of frequent, rigorous performance reviews. The facts are, if Facebook wants urgency, speed and intensity around its mission, those behaviors must be explicit values that, when demonstrated, result in bonus money and upward mobility — or not.
This is how Welch managed GE. Why does he believe that this is a good way to run Facebook? Has he given it any real thought? Does he think all organizations are the same? As Salmon writes
What’s more, it’s far from clear that the best way to motivate a Silicon Valley engineer is to dangle an annual bonus in front of his face and tell him that if he works hard he could get an extra couple of months’ salary at the end of the year. Rather, the best way to get the most out of engineers is to surround them with other great engineers, in a collegial atmosphere where everybody works hard and everybody does really well building great products that everybody is proud of.
We are prone to self-serving bias. Successful people tend to assume that what worked for them will work for anyone, at any time. "Do as I did, and you will be alright" they say. To believe anything else would be to accept the power of contingency, to admit that their success may have no timeless lessons.

This, though, only explains why Welch is willing to supply a steady stream of unoriginal commentary. How about the demand side? Why is Welch paid good money to prattle on about how to manage a company he knows nothing about?

Economics is built on the principle that people respond to incentives. This is a useful guide when thinking about economic problems, but it becomes an ideology when you no longer think about the specifics of the situation. For example, it is elementary economics that a minimum wage increases unemployment, but does that mean that abolishing the minimum wage is the best way to reduce unemployment? In this post about the minimum wage, Dillow shows us that we don't live in an Econ 101 world
Let’s instead do some simple maths. There are 4.06 million 18-24 year-olds not in full-time education. 2.77 million of these are in work, and 1.29m - 31.8% - are unemployed or inactive. This means that to get the employment rate up to 80% for this group would require an increase in employment of 17.3%.
How could such a rise be achieved by abolishing the NMW? There are only two possibilities: either the abolition allows wages to fall a very long way; or the price elasticity of demand for young workers is very high.
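Dillow's 17.3 percent follows directly from the numbers he cites: an 80 percent employment rate for a cohort of 4.06 million means 3.25 million in work, against 2.77 million today. A sketch of the arithmetic:

```python
# Dillow's back-of-the-envelope: the rise in employment needed to hit
# an 80% employment rate among 18-24 year-olds not in full-time education.

cohort = 4.06e6        # size of the group
employed = 2.77e6      # currently in work
target_rate = 0.80

target_employed = target_rate * cohort           # 3.248 million
required_rise = target_employed / employed - 1.0  # proportional increase

print(f"{required_rise:.1%}")  # -> 17.3%
```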
He tackles the same question in this magnificent post where he explicitly calls the reliance of incentives an "ideology"
Nick Clegg says we need a cap on benefits “not least to increase the incentives to work.” This runs into an obvious objection - that the big problem right now is not so much a lack of incentives to work but a lack of opportunities to do so. There are 2.69 million unemployed chasing 459,000 vacancies - that’s 5.85 unemployed per opening.
He goes on to discuss the cognitive biases which cause people to over-estimate the importance of incentives to work. Read it all. Read it again.

Managerialism is the ideology of my generation. I see it all around, and I have not been immune to it myself. It is not the same as Management Theory but is the idolatry of Management. It is the belief that the problems of the world can be solved, that the solution is "management" (or "leadership"), that there are great leaders but (thank goodness!) you can learn to be one too. It is why journals on management have abandoned mundane matters of operations, trade, and policy to become glossy lifestyle magazines about the rich and powerful. Chris Dillow has described the problems with this ideology in yet another splendid post.

In this tweet, John Cook quotes Stanislav Datskovskiy
Employers much prefer that workers be fungible, rather than maximally productive.
This makes excellent sense for employers, since highly productive employees who are difficult to replace can claim most of the value they create. In today's large corporations, power resides with the managers, not with the shareholders who are ostensibly the owners of the business. Managerialist ideology makes firms themselves fungible: the good manager can be good in any context, and so the firm needs him far more than he needs it. The balance of power moves further towards the managers and away from the owners.

Tuesday, February 21, 2012

Links for 21 February 2012

First, a wonderful post by John Cook, on how to deal with singularities when integrating functions.

Second, a post in which Chris Dillow asks whether Management matters. He describes a study of medium-sized manufacturers which seems to say it does. He doesn't agree, because his idea of what "management" means is different.
The practices it identifies are largely about whether there are good feedback mechanisms in place. Is the production process sufficiently well monitored that errors can be eliminated and efficiencies identified? Is there good performance appraisal of employees? Are goals clear and sensible? And so on.

What we have here, then, is not a story about CEO’s “strategic vision”, or about the power of great individuals, but about day-to-day administrative structures. Common sense says these must matter, and must be basic good practice for any firm.
I think he is right, but there could be even more to this. Few things are so disruptive of good administration as management which wants to be doing things all the time, trampling their way through the organization like a herd of elephants cavorting in a field, shaking things up in reorganization after reorganization. Thus, the fact that these organizations have good practices may mean simply that they are led by sensible people who know to put good systems in place and generally stay out of the way. In any case, I am entirely in favor of calling the things which we do "administration" rather than "management", as Joel Spolsky puts it in this recent essay.
The “management team” isn’t the “decision making” team. It’s a support function. You may want to call them administration instead of management, which will keep them from getting too big for their britches.
Of course, maybe it is simply selection bias. There are times when management do have to shake things up if the firm is to have a chance of returning to good health, and maybe these firms have been doing well for other reasons altogether, and their systems have had a chance to mature. We see that firms which have good systems seem to be doing well, and we assume that the one caused the other.

Um, but there is this awkward point
However, Bloom and colleagues find that less than one fifth of the variance in total factor productivity can be explained by these management practices. I’m surprised by how low this is.

Third, another old post by Dillow, in which he asks whether politics should be based on morality.

My answer would be that it will be whether we want it to be or not, since morality seems to be a basic drive in human beings. We cannot help it. This is not always a bad thing but it can be, especially in a democracy where the median voter will not be unduly discomfited if he gives in to his taste for revenge, moral outrage, and spasms of maudlin sentimentality. I find Jonathan Haidt's work quite persuasive.
How much of moral thinking is innate? Haidt sees morality as a "social construction" that varies by time and place. We all live in a "web of shared meanings and values" that become our moral matrix, he writes, and these matrices form what Haidt, quoting the science-fiction writer William Gibson, likens to "a consensual hallucination." But all humans graft their moralities on psychological systems that evolved to serve various needs, like caring for families and punishing cheaters. Building on ideas from the anthropologist Richard Shweder, Haidt and his colleagues synthesize anthropology, evolutionary theory, and psychology to propose six innate moral foundations: care/harm, fairness/cheating, liberty/oppression, loyalty/betrayal, authority/subversion, and sanctity/degradation.
Chris Dillow feels that we should try to keep morality out of politics, and offers excellent reasons to do so. I believe that what he wants is desirable, but impossible. As he points out in his post, we should be looking at structures and asking why things turn out the way they do. I fear that his post is thought provoking but fundamentally similar to the moralistic thinking he deplores.

Cordelia Fine at The Cellular Scale

The Cellular Scale reviews Cordelia Fine's Delusions of Gender. This is a book I must read. As the review summarizes it
  • You are bound to find differences when you are looking for them.
  • Differences are more likely to be reported and publicized than similarities
  • There are glaring flaws in many neuroscience studies showing brain differences between men and women
  • Even if all the studies showing brain differences between men and women were taken as true, that still wouldn't mean that the differences are 'hard wired' or 'inherent' or 'because of evolution'
  • Even if all the brain differences are real, and even if they are 'hard wired', that still doesn't mean that women and men actually think differently
For me the key passage was this
Since everyone knows men and women are different re: genitalia, let's test whether they are different in brain or behavior. This may seem totally reasonable, but a counter example is finger-print pattern. People can be grouped by their fingerprint pattern into 'loop-shape' or 'swirl-shape' people. This fingerprint pattern is determined genetically, but since it is not an obvious difference (you probably don't even know which group you belong to), no one has ever tested whether 'loop-shape' people have bigger hippocampi than 'swirl-shape' people.
Again, because "no difference" is not interesting, every study which talks about the observed differences between men and women is shadowed by all the studies which did not find any differences and so were not even published. This is a common issue in Science, but it's good to be reminded of it.

The Platonic essence of a book

The Epicurean Dealmaker demolishes "E-books Can't Burn," Tim Parks' recent essay in the NYRB.

Where Tim Parks says
The e-book, by eliminating all variations in the appearance and weight of the material object we hold in our hand and by discouraging anything but our focus on where we are in the sequence of words (the page once read disappears, the page to come has yet to appear) would seem to bring us closer than the paper book to the essence of the literary experience.
The Dealmaker responds
A book, properly considered, is a recorded performance of a piece of literature, just like a CD is a recorded performance of a particular piece of music. While musicians have more artistic discretion in interpreting a piece than a book designer and publisher do, the latter are not aesthetically invisible. They subtly influence a book’s format and packaging: font, margins, page breaks, cover art, etc. The sequence, timing, pace, and even completion of the work—its interpretation—lie in the hands of a reader, but the packaging and presentation of the physical object is not. And because reading is a performance, the time and place where you read is important, too. Reading Lord Jim on a plane is not the same as reading it on a tropical beach. The former is forgettable; the latter is not, as I can personally attest.

Beethoven’s Ninth Symphony is the same music, whether it is interpreted by the Berlin Philharmonic or the Boise Symphony. But nobody ever hears Beethoven’s Ninth Symphony: they hear a performance of it. By the same token, nobody ever reads Ulysses, they read a version of it, as presented to them through the medium of some sort of delivery device at a particular time and place, and interpreted according to their own engagement, interest, aptitude, and sensitivity.
He is absolutely right, of course. The whole thing is wonderfully written, and reads well even on a laptop screen.

Monday, February 20, 2012

An axe, not a scalpel

I came across this Neuroconscience article through a tweet by Uta Frith.

Reading it, I wonder how many of the fMRI results we read about will survive even 20 years. I am grateful to them for this wonderfully clear description of some of the challenges involved in using this tool. Some of the good bits are below.
The essential problem of fMRI is that, while it provides decent spatial resolution, the data is acquired slowly and indirectly via the blood-oxygenation level dependent (BOLD) signal. The BOLD signal is messy, slow, and extremely complex in its origins. Although we typically assume increasing BOLD signal equals greater neural activity, the details of just what kind of activity (e.g. excitatory vs inhibitory, post-synaptic vs local field) are murky at best.

Setting aside the worry about what neural activity IS measured by BOLD signal, there is still the very real threat of non-neural sources like respiration and cardiovascular function confounding the final result.

If cognitive load differs between conditions, or your groups (for example, a PTSD and a control group) react differently to the stimuli, respiration and pulse rates might easily begin to overlap your sampling frequency, confounding the BOLD signal.

You’ve probably guessed where I’m going with this: hold your breath in the fMRI and you get massive alterations in the BOLD signal. Your participants don’t even need to match the sampling frequency of the paradigm to confound the BOLD; they simply need to breath at slightly different rates in each group or condition and suddenly your results are full of CO2 driven false positives! This is a serious problem for any kind of unconstrained experimental design, especially those involving poorly conceptualized social tasks or long periods of free activity. Imagine now that certain regions of the brain might respond differently to levels of CO2.
The whole piece is well worth reading.

Olde England

I thought I had already blogged this passage from AJP Taylor's English History. Apparently not.
Until August 1914 a sensible, law-abiding Englishman could pass through life and hardly notice the existence of the state, beyond the post office and the policeman. He could live where he liked and as he liked. He had no official number or identity card.

He could travel abroad or leave his country for ever without a passport or any sort of official permission. He could exchange his money for any other currency without restriction or limit. He could buy goods from any country in the world on the same terms as he bought goods at home. For that matter, a foreigner could spend his life in this country without permit and without informing the police.

Unlike the countries of the European continent, the state did not require its citizens to perform military service. An Englishman could enlist, if he chose, in the regular army, the navy, or the territorials. He could also ignore, if he chose, the demands of national defence. Substantial householders were occasionally called on for jury service.


Interview about embodied cognition with Andrew Wilson and Sabrina Golonka in Psychology Today. Good discussion of what it is not, and the example they use (of how people catch balls in flight) gets the essential point across: there is no Newtonian model in our minds, we don't use a "computational, representational system that mentally transforms the input into motor commands" to predict where the ball will fall and then go there. As in so many other areas, we use heuristics.

Jeff at Cheap Talk has a lovely little post about why moving to California may disappoint. The ones who move are the ones who are over-optimistic, and so are more likely to be disillusioned.
It also explains why people who are forced to leave California, say for job-related reasons, are pleasantly surprised at how happy they can be in the Midwest. Since they hadn’t moved voluntarily already, its likely that they underestimated how happy they would be.
Surely this also works in the other direction: you expect something to be really bad, and you are pleasantly surprised. In both cases, it is better to be a pessimist than an optimist. Lesson learnt.

Karl Smith at Modeled Behavior makes a point which wise men have made before, but which we never seem to learn: being sincere is just not good enough.
It is this blog’s overarching conceit that none do more harm than those who seek to do a principled good. The selfish will always accept a Coasian bargain. True believers will stop at nothing.

Sunday, February 19, 2012

A quick note

To say that this tweet by Emanuel Derman goes to the heart of the matter.
RT "@ritholtz: How your brain tells you where you are $$" …why not say "the molecules in the neuron know"? That's the puzzle.

Friday, February 17, 2012

Settling down

Nowhere outside the Hilly Flanks did people have so much to defend. Even in 7000 BCE, almost everyone outside this region was a forager, shifting seasonally, and even when they had begun to settle down in villages, such as Mehrgarh in modern Pakistan or Shangshan in the Yangzi Delta, these were simple places by the standards of Jericho. If hunter-gatherers from any other place on earth had been airlifted to Cayonu or Catalhoyuk they would not, I suspect, have known what hit them. Gone would be their caves or little clusters of huts, replaced by bustling towns with sturdy houses, great stores of food, powerful art, and religious monuments. They would find themselves working hard, dying young, and hosting an unpleasant array of microbes. They would rub shoulders with rich and poor, and chafe under or rejoice in men's authority over women and parents' over children. They might even discover that some people had the right to murder them in rituals. And they might well wonder why people had inflicted all this on themselves.
From Ian Morris' Why The West Rules- For Now.


Note: Edited for clarity.

John D Cook quotes Hayek in this post.
Man in a complex society can have no choice but between adjusting himself to what to him must seem the blind forces of the social process and obeying the orders of a superior.
I guess this is Hayek's answer to the question of just how many ways there are in which we can organize production. Coase assumes the same options in his writings about the nature of the firm, when he contrasts hierarchies and markets.

"Complex" is the key word. "Simpler" societies did have other ways of organizing these matters. For much of human history, we lived in a world without markets, where economic life was organized by means of convention and tradition. It is markets which are the real innovation, and as the work of Elinor Ostrom has shown, convention and tradition play a role even in wealthy societies today.

I certainly do not romanticize traditional societies, and I do think markets are preferable to bosses. Even if you are working for a boss, you are infinitely better off when there is a market for your work. However, reading that passage from Hayek could leave you with the impression that there are only two ways to organize economic life. There used to be more, and we may yet come up with others.

After all, it was Hayek who said that "the mind cannot foresee its own advance."

Thursday, February 16, 2012

Simply being

Note: Edited to correct some egregious mistakes in grammar.

Here is Steven Landsburg arguing that if we believe rocks exist, then we should also believe that mathematical objects exist. After all, rocks are mostly empty space.
Now here’s what genuinely baffles me: Apparently there are people in this world (and even, occasionally, in the comments section of this blog) who haven’t the slightest doubt about the existence of rocks, galaxies, squirrels, and the rest of the physical universe, but who suddenly turn into hardcore skeptics re the existence of mathematical objects like the natural numbers.
and also
Why on earth would, say, a scientist, commit to the belief that there’s a physical universe out there but not a mathematical one, when we know that our perceptions of the physical universe demand constant revision, whereas our perceptions of the mathematical universe are largely eternal. My conception of the natural numbers is very close to Euclid’s; my conception of an atom bears almost no resemblance to Democritus’s.
He then goes on to discuss what this implies for the existence of God. Now, Steven Landsburg is far more intelligent than I am, and also incomparably more knowledgeable about Mathematics. In this matter, though, I think he is wrong.

I really doubt it is true that his "conception of the natural numbers is very close to Euclid's." We now think of the natural numbers as being embedded in the real number continuum, a concept which Euclid would have found incoherent or even abhorrent. Why stop at the real numbers? We've gone and created Non-standard Analysis, with its infinitesimal and infinite numbers. Whether this changes our conception of the natural numbers depends on what you mean by "conception." We can now subtract any natural number from any other, which Euclid would have said is nonsense. I think that is a big change in our conception of the natural numbers. And we can divide any natural number by any non-zero natural number, and we believe the result is meaningful. Again, a big change in our conception of the natural numbers.
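The point about subtraction can be made concrete in a modern formal system, where the old and new conceptions sit side by side. A small sketch in Lean 4 (the particular numbers are arbitrary, chosen only to illustrate):

```lean
-- In Lean 4, subtraction on the natural numbers is truncated, much as
-- Euclid would have it: taking the larger from the smaller yields nothing.
#eval (2 - 5 : Nat)  -- 0

-- Embedded in the integers, the same subtraction is always meaningful.
#eval (2 - 5 : Int)  -- -3
```

Whether you call this a change in the natural numbers themselves or in the system surrounding them is exactly the question of what "conception" means.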

None of this, however, is my point. I just think that the very word "exists" is too baggy and commodious to be useful by itself. When we say something exists, we should say (or it should be commonly agreed) in what sense we mean to use the word. Rocks exist in the sense that when I kick one with my foot, it hurts. Numbers exist in the sense that I can use them to count and measure, and they form a coherent system. Words are conventional things, and we are being lazy when we simply assert that something "Does exist!" "Doesn't!"

When I was in college, I went to visit a friend of a friend, and he was reading a book or article titled "Do holes exist?" I don't remember anything about it except the title, and the impression it made on me: "is this what philosophers spend their time on?" I guess it must have been one of these. These days, I wouldn't be so dismissive: to remind us that meaning is often a matter of convention is a valuable service.

I think Reuben Hersh may have thought much more deeply about these matters than Landsburg. I would love to see a debate between them.

Wednesday, February 15, 2012

Norvig contra Chomsky

I came across this post at LanguageLog through this tweet by Ms Moximer (@Melody).

It is all good, but for me the best parts were these remarks by Martin Kay (emphasis added)
Now I come to the fourth point, which is ambiguity. This, I take it, is where statistics really come into their own. Symbolic language processing is highly nondeterministic and often delivers large numbers of alternative results because it has no means of resolving the ambiguities that characterize ordinary language. This is for the clear and obvious reason that the resolution of ambiguities is not a linguistic matter. After a responsible job has been done of linguistic analysis, what remain are questions about the world. They are questions of what would be a reasonable thing to say under the given circumstances, what it would be reasonable to believe, suspect, fear or desire in the given situation. If these questions are in the purview of any academic discipline, it is presumably artificial intelligence.
This is precisely the point.

Chomsky and Norvig are talking about different problems.

Chomsky looks at the work which has been done, for example in machine translation, and finds it inadequate because machines make some very obvious blunders. These blunders evidently arise because the machine has no "understanding" of that which it is translating. But how could a machine ever understand language? That would require it to understand our way of life, to understand what we mean by "love", and "fear" and "hope" and "home" and "laughter", and indeed "understanding". It would have to have beliefs, comparable to the beliefs we humans hold. How many other animals would understand what we mean by "laughter"? Would an intelligent alien species understand?

This is not to say that Chomsky is wrong. Maybe this is what Wittgenstein would say too; that people use language, and that to understand language, you need to be a person, and you need to understand how we live. I just liked the distinction between questions about language and questions about the world.

Thursday, February 09, 2012

Plain speaking

Andy Walsh writes about his running partner
The idea is as follows. Whenever Dave has a meeting with a representative of any of the abovementioned agencies, he takes with him his Penalty Box, into which the relevant factotum must pay a forfeit if she uses any of the following expressions:

acceptable (or unacceptable); appropriate (or inappropriate); empower(ing); person centred (or person oriented); developmental; non-judgemental; rights-based; forward-looking; in partnership. If a project or service is ever said to be rolled out then Dave claims a double forfeit. And if any mention of the date is made in such a way as to imply it has a particular moral relevance then that is triple. Hence if a social worker were to say of his opinion that it is “judgemental and not an appropriate comment to make in this, the 21st Century” then he’d hit paydirt.
Dave may be an angry man, but I have to respect his spirit and intelligence.
Dave’s strategy has a pleasing consequence, one that is more than merely financial. He has discovered that in being denuded of the above expressions the social worker, probation officer and counsellor suffers a pleasing paralysis of expression and of thought. Meetings that used to take several hours are now over in minutes. It has become obvious to him that the Wittgensteinians have a point: that there is no pre-linguistic “given”, that thought and experience are mediated by and logically consequent upon language. Strip these statutory representatives of their language game and they become like putty in his hands. He used to spend his time running from these people, now he knows that, with the help of his Penalty Box, he can philosophise them away.
It is easy to mock governments, bureaucrats, and politicians for their crimes against language (the stock phrases, the empty gestures), but equally grave atrocities are committed in any large organization, including corporations which are supposedly disciplined by the need to earn a profit. I would love to see such a scheme introduced into all workplaces, except that we may be left dumbfounded, unable to speak without our favorite cliches to hand. "Globalization", "localization", "customer centric", "new paradigm", "cloud computing", "restructuring", "agility", "rigorous". Corporate language is anything but rigorous. I cannot remember the last time I heard "wisdom", "patience", "kindness", "humility", "moderation".

Perhaps large corporations are refuges for mediocre minds, but surely small firms are little furnaces of original thinking, since they have to be oh-so-responsive to the darwinian pressures of the marketplace? I don't know, but I suspect not, since the main requirement for survival is to be able to anticipate the needs of your customers, and the best way to do that is to be something like them. I don't think original thinking is necessary to operate any business, or indeed in academia or anywhere else in life. Note that I am not saying that original thinking cannot be observed in large organizations, or small ones, or in academia, or government, nor that the market doesn't discipline corporations more than governments, or start-ups more than large firms. All I am saying is that the market only requires you to be somewhat better than your competition in some way, as judged by your customers. It rewards you for being useful to someone, rather than for original thinking, and that is not a bad thing.

We use language to communicate, but we communicate more than facts and opinions, we also signal loyalty, telegraph subservience, assert authority, warn of rebellion. Organizations are little societies above all, and societies are all about hierarchy. Again, I am not saying that organizations are only about sociology! However, the similarities between North Korea and your employer or customer are not entirely coincidental. Xavier Marquez had a fascinating blog post about cults of personality.
Here is where cults of personality come in handy. The dictator wants a credible signal of your support; merely staying silent and not saying anything negative won’t cut it. In order to be credible, the signal has to be costly: you have to be willing to say that the dictator is not merely ok, but a superhuman being, and you have to be willing to take some concrete actions showing your undying love for the leader. (You may have had this experience: you are served some food, and you must provide a credible signal that you like it so that the host will not be offended; merely saying that you like it will not cut it. So you will need to go for seconds and layer on the praise). Here the concrete action required of you is typically a willingness to denounce others when they fail to say the same thing, but it may also involve bizarre pilgrimages, ostentatious displays of the dictator’s image, etc.
Read the whole thing! Perhaps this is why we are presented with the sight of apparently intelligent people who seem to have replaced their brains with jargon generators: they (we?) are merely signalling their commitment to the supposed values of the organization, and the direction it is moving in. Fortunately, employees, unlike North Koreans, have the option to exit, which prevents organizations from collapsing completely into self-contained little worlds like North Korea. When companies are in crisis, the exigencies of survival take precedence over the need to signal loyalty, and a temporary period of plain speaking follows. Something like this happened in Cuba when the Soviet Union collapsed, and cheap imports were no longer available. Plain talking is more likely when senior management is replaced, and the new leaders can blame the old ones for all the woes of the unit, and take whatever steps are necessary to save it, and employees can demonstrate their new loyalties by denouncing the old dispensation and recommending changes. This obviously hasn't yet happened in Cuba, though Raúl has taken over from Fidel. Again, motives are more complex (and admirable!) than I am indicating here, and the benefits of this newfound freedom are real, whatever the motivation may be: a period of Hayekian discovery within the centrally planned Universe of the corporation.

Why do (should?) we care for plain language? I do think there is an aesthetic argument for good writing, though that need not always be clear writing. Imaginative literature relies on artful language for its effect, and this may require it to be obscure. For most other writing, however, where we claim to be describing how something is, it may be good manners to be clear, so your readers can more easily examine your argument. It may also be a sign that you have taken some trouble to think matters through, and to express yourself clearly, though some people seem to be congenitally incapable of expressing themselves clearly. Besides these, reading Orwell had convinced me that we should prize plain language because it is difficult to lie in it, while jargon can be used to cloak many crimes.
If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself.
However, this passage from Kevin Macleod as quoted by Brad DeLong, makes a good case that Orwell is simply wrong on this point.
There's no necessary connection between political truth and verbal clarity. Let's take some writers whose politics Orwell would reject. Nothing could be stronger than Orwell's detestation of Fabianism and Stalinism. George Bernard Shaw stood for the first and more or less endorsed the second, yet The Intelligent Woman's Guide to Socialism is a delight to read. Not all the Communists and fellow-travellers were hacks. (Orwell's citation of a rant from that quarter is tellingly under-referenced: 'Communist pamphlet' - I ask you!) T. A. Jackson and A. L. Morton wrote their best-known books in clear and vivid English. John Strachey was at his most lucid when he was at his most wrong. Professor J. B. S. Haldane's science essays are still read for pleasure. The Trotskyist C. L. R. James wrote one literary masterpiece; Trotsky himself was constitutionally incapable of writing a dull page, and in Max Eastman he found a translator worthy of his style.
This now seems correct to me. "Truth" is about how well your model of reality fits the world; clarity is about how easily your readers can understand what you are saying. However, the world is complicated, and human society is the worst of it. For example, an economy is a whirlpool of feedback loops and circular flows, and we will always find it difficult to trace the consequences of any policy or event. Think how hard it is to see how even simple programs work. To believe that we will be able to judge the truth of a theory if we are given a clear, elegant exposition, we need to assume that we already know the truth of the matter and can then compare it to the theory. This, however, is an empirical question. By definition, we can understand a clear writer more easily than an obscure one, but to judge whether what he has written is the truth is another matter altogether. At best, we will be able to more easily detect some of the more obvious logical errors he has made.


Wednesday, February 08, 2012

Twitterstream Waltzingmonkey 08 Feb 2012

Freedom! :
Bryan Caplan : being single is a luxury.
MT @lucaswiman: just had my "I had the same startup idea but it sounded illegal" moment. I finally feel like Silicon Valley is my true home
RT @RogerHighfield: I fixed my iMac's desire to be a hairdryer by unplugging #macsareasannoyingaspcsreally
taxes: large effect on timing, moderate effect on financing, little-to-no effect on behaviour (how hard people work):
RT @lucaswiman: Tom Blomfield: Automate Everything (via Instapaper)
RT @rosieseed: irony in one perfect picture.


The creator of Ruby on Rails speaking about why he uses Ruby. It may seem like a topic only a programming geek could enjoy, but it is actually an interesting example of the use of a speech to rally the troops.

"Why Ruby?" - RubyConf X Keynote from David Heinemeier Hansson on Vimeo.

Hansson speaks of the human experience of writing programs in a language, the importance of trusting the users of a language, and the value of freedom. That seems to be his main point, that Ruby trusts its developers, and doesn't babysit them. He asserts that the creators of other languages (he names Java) just don't trust their developers, and so deny them freedom.

It is a bravura performance. I loved this quote from Larry Wall
The very fact that it's possible to write messy programs in Perl is also what makes it possible to write programs that are cleaner in Perl than they could ever be in a language that attempts to enforce cleanliness. The potential for greater good goes right along with the potential for greater evil.
There is so much I agree with in the talk: the point that when people are denied the opportunity to do what they really want to do, they go off and find some way to do it anyway, the idea that not everything should be functional, that you should do some things just because you want to, that it is lovely that Ruby is such a developer-centric language. Above all, though, I was intrigued by the talk as an example of rhetoric in action. This is something I increasingly notice about how people around me use language. It was interesting to see what he left out, and what he left in, what he emphasized, and what he downplayed.

He treats "freedom" as a simple matter of being left alone to do what you want, and of course he is right. However, while he finds it offensive that Java is statically typed, there are good reasons why the people who designed it took those decisions, and it is not only about not trusting your users. I wonder how useful Ruby would have been back in 1995, when Java was created and hardware wasn't what we are used to today. He talks about the culture of unit testing in Ruby, and this is something I applaud. However, it is also especially necessary when your developers can change almost anything, including the language itself. It is much easier to trust your developers today, with modern hardware and testing tools.

For an interesting contrast, John D Cook writing about Java and trust.
This is uncomfortable to talk about, and so the decision is usually left implicit. Nobody wants to say out loud that they’re designing software for an army of mediocre programmers to implement, but that is the default assumption. And rightfully so. Most developers have middling ability, by definition.
Maybe the future does belong to languages which trust developers and give them more freedom, but that doesn't mean that the past should've belonged to them too.

Tuesday, February 07, 2012

Twitterstream 07 Feb 2012

My tweets and re-tweets today; I hope to use some of this in my blogging someday.

RT @elidourado: More rapes committed in America against men than women.
RT @daniel_lende: Debt: The First 5000 Years – Extended Interview Great interview - 50 plus minutes with @DavidGraeber
RT @DKThomp: The Death (and Life) of Marriage in America cc @justinwolfers
RT @WiringTheBrain: I've got your missing heritability right here - or why geneticists are asking the wrong question:
RT @ModeledBehavior: The Deserving Poor
RT @sciammind: RT @sebastianseung: "the mind is the music that neural networks play" Terry Sejnowski on CONNECTOME
Tsk tsk. RT @PennyRed: This is absolutely brilliant - Stock Photos of Women Looking Remorseful After Sexual Encounters
RT @harpers: “We are not revolutionaries in mink coats!” shouted one speaker. “I am!” replied a woman in a mink coat.—Weekly Review:
RT @TwopTwips: MEN. Make it a Valentine's Day she'll always remember by simply forgetting it. (via @a_stoth)

Saturday, February 04, 2012

Here I am again, making it up as I go along

Jeff Atwood at Coding Horror doesn't like it that Apple refuses to place more than one button on the front of the iPhone.
Apple's done a great job of embodying simplicity and clean design, but I often think they go too far, particularly at the beginning. For example, the first Mac didn't even have cursor keys. Everything's a design call, and somewhat subjective, but like Goldilocks, I'm going to maintain that the secret sauce here is not to get the porridge too cold (no buttons) or too hot (3 or more buttons), but just right. I'd certainly be a happier iPhone user if I didn't have to think so much about what's going to happen when I press my home button for the hundredth time in a day.
He is right, of course, and he also admits that what is "just right" is subjective.

However, I can think of at least one good reason why Apple might have decided to follow this rule.

Imagine you are leading a team of designers, and you tell them to design a phone. All existing phones are cluttered. Start with a clean slate, you tell them; think different. Your designers know what phones look like, and cannot unlearn that knowledge. You are likely to get a design which is very similar to what exists, with some minor tweaks. Alternatively, you get a radical, impractical design.

However, tell them to design a phone with exactly one physical button, and you give them something to start with. To make it work, they will have to rethink everything about the phone, but the people on your team are less likely to fight each other on behalf of their individual "visions" of what a radical new design should be like. The rule is one of those constraints which serve to goad creativity, and channel it.

As Eugene Wallingford writes in that post I've linked to above
Just as too much freedom can paralyze a novice with an overabundance of possibility, too much freedom can inhibit the competent programmer from creating a truly wonderful solution. Sticking with a form requires the programmer to think about resources and relationships in a way that unconstrained design spaces do not. This certainly seems to be the case in the arts, where writers, visual artists, and musicians use form as a vehicle for channeling their creative impulses.
I don't think "two buttons" would have worked as well. It wouldn't have distinguished their product quite so radically, and it might not have energized the designers as effectively.

Obviously, not all constraints serve to release potential creativity. Mediocre companies are great at creating constraints which serve only to irritate and frustrate. You need a master designer: we know that Apple had that.

Thursday, February 02, 2012

Respice te, hominem te memento

Over at the Google Blog, Joshua Bloch informs us that practically every implementation of one of the most famous algorithms in the world is defective.
Fast forward to 2006. I was shocked to learn that the binary search program that Bentley proved correct and subsequently tested in Chapter 5 of Programming Pearls contains a bug. Once I tell you what it is, you will understand why it escaped detection for two decades. Lest you think I'm picking on Bentley, let me tell you how I discovered the bug: The version of binary search that I wrote for the JDK contained the same bug. It was reported to Sun recently when it broke someone's program, after lying in wait for nine years or so.
Of course, the bug only manifests itself under extreme conditions (arrays containing a billion or more elements), but this is
  1. an elementary algorithm
  2. which we have known about since the 1940s
  3. and which has been studied in detail by some of the most intelligent people in the world
  4. and taught to the brightest students at some of the finest schools in the world
  5. and implemented in every computer language around
  6. and used to build any number of applications
and we find a bug now?
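The bug, as Bloch explains in the linked post, is an arithmetic overflow in the midpoint computation `mid = (low + high) / 2`: when both indices are large, their sum exceeds `Integer.MAX_VALUE` and wraps negative. A minimal Java sketch (the class name and the particular bounds are mine, chosen only to be large enough to overflow a 32-bit int):

```java
// With 32-bit ints, low + high wraps to a negative number when the sum
// exceeds Integer.MAX_VALUE, so (low + high) / 2 produces a garbage index.
public class MidpointBug {
    static int buggyMid(int low, int high) { return (low + high) / 2; }       // overflows
    static int fixedMid(int low, int high) { return low + (high - low) / 2; } // safe
    // Bloch's JDK fix: the unsigned shift reinterprets the wrapped sum
    // as an unsigned value, so halving it recovers the true midpoint.
    static int jdkMid(int low, int high)   { return (low + high) >>> 1; }

    public static void main(String[] args) {
        // Plausible indices while searching an array of ~1.6 billion elements.
        int low = 1_500_000_000, high = 1_600_000_000;
        System.out.println(buggyMid(low, high)); // -597483648: the sum wrapped
        System.out.println(fixedMid(low, high)); // 1550000000
        System.out.println(jdkMid(low, high));   // 1550000000
    }
}
```

Either correction works for any indices that fit in an int; the bug simply never fires until the array is big enough, which is why it lay dormant for decades.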

Bloch continues
And now we know the binary search is bug-free, right? Well, we strongly suspect so, but we don't know. It is not sufficient merely to prove a program correct; you have to test it too. Moreover, to be really certain that a program is correct, you have to test it for all possible input values, but this is seldom feasible. With concurrent programs, it's even worse: You have to test for all internal states, which is, for all practical purposes, impossible.

The binary-search bug applies equally to mergesort, and to other divide-and-conquer algorithms. If you have any code that implements one of these algorithms, fix it now before it blows up. The general lesson that I take away from this bug is humility: It is hard to write even the smallest piece of code correctly, and our whole world runs on big, complex pieces of code.
What can we be sure of? This is an object lesson in the need for scepticism. I don't want to overstate the case, but we can't catch a simple bug in an elementary algorithm; almost half a millennium after Vesalius, we discover a "new" muscle in our own bodies; and we still imagine that we can be sure we know why an economy goes into recession, discern how to "manage" large and complex corporations, or foresee how the world climate will fare in a hundred years.

Postscript: None of this is to say that I think AGW is not a reality, or that we shouldn't do anything about it if it is, or that the Economy does or doesn't need stimulus; only that we should be much more modest in our claims of knowledge, and positively applaud people who acknowledge their mistakes, while mocking and reviling those who thrust their little truths on us. Post-Postscript: Since his post dates back to 2006, maybe I should've written that Bloch "informed us", but what he reported is news, at least to me.

Like Salmon

Mo Costandi at the Guardian

The journey undertaken by newborn neurons in the adult mouse brain is like the cellular equivalent of the arduous upstream migration of salmon returning to their hatching river. Soon after being born in the subventricular zone near the back of the brain, these cells embark on a long-distance migration to the front-most tip of the brain. Their final destination – the olfactory bulb – is the furthest point from their birth place, and they travel two-thirds of the length of the brain to get there

Wednesday, February 01, 2012

Coasian Copyright

Simon Wardley has a suggestion.
If Congress wants to stop online piracy, there's another way. Ban all content which is not creative commons or equivalently licensed material (e.g. GPL) from the internet. Any infraction should be treated as other security violations and made the responsibility of the copyright holder for not taking enough security measures to ensure that their content never reached the internet.