Tuesday, February 20, 2007

Mirror, mirror on the wall...

Over at Chatty Apes, Jason Weeden describes a very interesting finding.

Measurable symmetry accounts for less than 1% of the variance in the attractiveness of women's faces and less than 3% of the variance in the attractiveness of men's faces. Before we went and checked, we had both believed the widely circulated story that symmetry was a big deal in attractiveness.

Of course, I have come across this story myself, and believed it. One reason people believed the theory is that there is a plausible "explanation" for why we should find symmetry attractive. This is exactly why scientists are supposed to replicate the findings of others. Feynman said that science is what we have learned about how to keep from fooling ourselves. In this case, however, the problem was not that others did not try to replicate the results.

So how did it happen that so many of us believed that symmetry was a big deal in attractiveness judgments? There were studies that said so, obviously. But when we looked at the details, it turned out that the initial studies showing big effects typically involved samples of fewer than 20 faces each, which is irresponsibly small for correlational studies with open-ended variables. Once the bigger samples started showing up, the effect basically disappeared for women and was shown to be pretty weak for men. But no one believed the later, bigger studies, not even their own authors -- pretty much everyone in my business still thinks that symmetry is a big deal in attractiveness.
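To see why fewer than 20 faces is irresponsibly small, here is a quick simulation (my own illustrative sketch in Python, not data from any of the studies Weeden describes): even if the true symmetry-attractiveness correlation is near zero, tiny samples routinely produce sample correlations large enough to look like a big effect.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_R = 0.1  # assumed true correlation; illustrative, not from the studies

def sample_r(n):
    # Draw n (symmetry, attractiveness) pairs from a bivariate normal
    # with correlation TRUE_R and return the sample correlation.
    cov = [[1.0, TRUE_R], [TRUE_R, 1.0]]
    pairs = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

for n in (15, 200):
    rs = np.array([sample_r(n) for _ in range(5_000)])
    lo, hi = np.percentile(rs, [2.5, 97.5])
    print(f"n = {n:3d}: 95% of sample correlations fall in [{lo:+.2f}, {hi:+.2f}]")
```

With n = 15, the sample correlations scatter roughly between -0.4 and +0.6 around a true value of 0.1, so an unlucky early study can easily report a "strong" symmetry effect; with n = 200, the estimates stay close to the truth.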

So, the first lesson I learned: small samples are dangerous. They're so dangerous that we need to force larger samples. How? My solution has been to ditch the old p<.05 significance standard. Right now, most social scientists allow themselves to call something a real finding if there is a less than 1 in 20 chance that it arose from random noise. It's a standard that arose before computers, in a day when scientists ran their numbers by hand and so just didn't run very many numbers. These days, all you have to do to virtually ensure at least one significant finding is to measure 7 variables -- the resulting correlation table has 21 correlations, and, just at random, you'd expect about one of them to come in at p<.05.

Look at my dissertation or any of my published work (two new ones are coming out soon), and you'll find that I'm using p<.005 -- a 1 in 200 standard. I don't think anyone has noticed -- I haven't had any reviewers comment on it at all. But what the .005 standard does is force larger samples, leading to more stable estimates, leading to more replicable results. So that was my first lesson from the health-attractiveness paper: false positives are easy to get and powerfully resilient to correction, so we need better studies in the first place, which means bigger samples, and the only practical way to police that is a stricter significance criterion.
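Weeden's arithmetic is easy to check by simulation. A rough Python sketch (the 50-subject study size is my own made-up number): generate 7 pure-noise variables, test all 21 pairwise correlations, and count how often at least one clears each threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N_SUBJECTS, N_VARS, N_SIMS = 50, 7, 2_000

# For each simulated "study", record the smallest p-value among the
# 21 pairwise correlations of 7 variables that are pure noise.
min_ps = []
for _ in range(N_SIMS):
    data = rng.standard_normal((N_SUBJECTS, N_VARS))
    ps = []
    for i in range(N_VARS):
        for j in range(i + 1, N_VARS):
            r, p = stats.pearsonr(data[:, i], data[:, j])
            ps.append(p)
    min_ps.append(min(ps))

min_ps = np.array(min_ps)
for alpha in (0.05, 0.005):
    print(f"studies with >=1 'finding' at p<{alpha}: {(min_ps < alpha).mean():.0%}")
```

Roughly two thirds of these all-noise studies produce at least one "significant" correlation at p<.05 (about 1 - 0.95^21), versus about one in ten at p<.005 -- which is the point of the stricter standard.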

It appears that the site has been down for quite some time, so I am relying on the Google cache of the post, and I am quoting extensively in case the cache becomes unavailable. This is one reason I blog: to keep a copy of all the stuff I find interesting, just in case the source disappears. Redundancy.

HT: Robin Hanson at Overcoming Bias

4 comments:

gaddeswarup said...

"This is one reason I blog, to keep a copy of all the stuff I find interesting- just in case the source disappears. Redundancy."
Same with me, but sometimes I have been relying on just the URLs. Must follow your example.
Do you have any good references about sampling, percentage of errors, etc.? Thanks.
Swarup

Anonymous said...

Not yet a rule, but nowadays researchers are being asked to report the effect size along with the significance level. The effect size tells you whether the manipulation produced a small, medium, or large treatment effect, and it is independent of sample size. This guards against over-reading a significant effect when the sample size is really huge.
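A quick illustration of the commenter's point (a sketch with made-up numbers): with a huge sample, even a negligible true effect comes out "significant", while the effect size -- here Cohen's d, the standardized mean difference -- stays honestly tiny.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def cohens_d(a, b):
    # Difference in means divided by the pooled standard deviation;
    # unlike the p-value, this does not shrink as n grows.
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

n = 200_000                          # a deliberately huge sample per group
treated = rng.normal(0.02, 1.0, n)   # true effect: 0.02 SD, i.e. negligible
control = rng.normal(0.00, 1.0, n)

t, p = stats.ttest_ind(treated, control)
print(f"p = {p:.2g} (highly 'significant'), d = {cohens_d(treated, control):.3f} (trivial)")
```

By Cohen's conventions, d around 0.2 already counts as "small", so a d of 0.02 is noise-level in practical terms even though the p-value is microscopic.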

Rajeev Ramachandran said...

Hi Gaddeswarup, I am reading a book my sister got me: Principles of Statistics, by M. G. Bulmer: http://www.amazon.com/Principles-Statistics-M-G-Bulmer/dp/0486637603. Looks good.
Cheers
Rajeev

gaddeswarup said...

Rajeev,
Thanks. Off and on, I have been looking at "Weighing the Odds" by David Williams, Cambridge University Press. I wondered if there are some articles that give a quick survey. I will ask somebody in our department, which I still visit once a week. Thanks again.
