Saturday, January 6, 2007

Statistical Power in PlainSpeak


One of the things that my statistics person wants me to include in the current revision of my dissertation proposal is a power analysis. I think I mentioned my confusion about all things mathematical in rather gory detail a few days ago. Now that I've stopped hyperventilating about it, I can get down to business and try to decipher the convoluted MathSpeak into something intelligible for a math-phobe like myself. While I'm at it, to facilitate my own learning in the spirit of "see one/do one/teach one," I thought I'd share what I've learned and how I got there.

Okay. I've got my research questions. I've got my database with 300 subjects (half urban, half suburban). Now what the hell is a power analysis and why is it so important?

If you're not familiar with research, "significance" is a Big. Damn. Deal. In plain language, "significant" means that we'd be surprised if there weren't a real difference between the groups we're studying. For whatever reason, significance is reported as a "p" value (as in p < .05). The cutoff that p gets compared to is called alpha and is written with a small Greek letter that looks like a sideways support ribbon. Don't ask me why. I think the way math people obfuscate things is silly. Here's the kicker, though. When used in statistics, "significance" does not have the same meaning as when we use it in everyday conversation. It just means that if there were really no difference between the groups, there'd be less than a 5% chance of seeing a result this big by dumb luck, so the researcher is reasonably sure the finding isn't a fluke. Just because something is statistically significant, that doesn't mean it's important, clinically useful, or even the least bit interesting. Ironically, sometimes a non-significant finding (meaning the result could plausibly be explained by chance alone) is more important, clinically useful, and/or interesting than the most significant findings.
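If you want to see where a p value actually comes from, here's a tiny Python sketch comparing two groups with a plain old t-test. The scores are completely made up for illustration - they are not from my data, and the "urban"/"suburban" labels are just borrowed from my setup above:

```python
# Toy example: where a p value comes from.
# (Made-up scores for two groups - NOT my dissertation data.)
from scipy import stats

urban_scores = [12, 15, 14, 10, 13, 16, 11, 14]
suburban_scores = [14, 17, 15, 13, 16, 18, 15, 17]

# Independent-samples t-test comparing the two group means
t_stat, p_value = stats.ttest_ind(urban_scores, suburban_scores)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If p < .05, the convention calls it "statistically significant" -
# a difference this big would be surprising if the groups really
# didn't differ. It says nothing about whether anyone should give
# a shit about the difference.
```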

Granted, if you have enough subjects and look long, hard and deep enough, you'll find whateveritis that you're looking for. The question is, is anybody going to give a shit one way or the other? I have a wonderful professor who measures significance in giveashits. I'll ask him if I can share his musings on the subject, but I digress. What I'm trying to get at is that the whole point of a power analysis is to figure out, before going through all the hassle of conducting the research project, how many subjects you're going to need to detect a real effect. So actually it's a good thing, because it can save the researcher a lot of grief in trying to scrape up subjects for the study, which can be extremely difficult, especially for psychological research.

A study with too little power (too few subjects to give you enough information) won't turn up much of anything, even if the hypothesis is true. A study with far more power than it needs will turn up trivial shit that nobody cares about anyway (for example, that there is more visible light at noon than at midnight in most places - things that make you go "duh").
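To make those two failure modes concrete, here's a quick Python sketch using the free statsmodels library. The effect sizes and sample sizes are invented purely for illustration:

```python
# Illustration of too little vs. too much power (invented numbers).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Too little power: a real but modest effect (d = 0.3) and only
# 20 subjects per group. Power comes out well under 50%, so the
# study will probably miss the effect even though it's really there.
low = analysis.power(effect_size=0.3, nobs1=20, alpha=0.05)

# Too much power: a trivial effect (d = 0.05) with 20,000 subjects
# per group. Power is essentially 100%, so the study will flag a
# difference nobody could possibly care about as "significant."
high = analysis.power(effect_size=0.05, nobs1=20000, alpha=0.05)

print(f"Power with n=20 per group:     {low:.2f}")
print(f"Power with n=20,000 per group: {high:.2f}")
```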

Great - so how do you do a power analysis? Hire a statistics consultant who specializes in dissertations or get a software program to do it for you. There are even free programs available for download all over the internet that will do it for you, but you still have to understand what values you are entering for each field and why. In the end, the money you spend is worth not tearing your hair out - unless of course you like that look.
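For what it's worth, here's roughly what those programs are doing: you hand them an expected effect size, your alpha, and the power you want, and they hand back a sample size. A minimal sketch in Python with the free statsmodels library - the effect size here is just Cohen's conventional "medium" value, not anything from my actual study:

```python
# Sketch of a power analysis: how many subjects do I need?
# (Effect size d = 0.5 is the conventional "medium" value, used
# purely for illustration - plug in whatever your literature supports.)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_per_group = analysis.solve_power(
    effect_size=0.5,   # expected group difference, in standard-deviation units
    alpha=0.05,        # the usual 5% significance cutoff
    power=0.80,        # 80% chance of detecting the effect if it's real
    alternative="two-sided",
)

print(f"Subjects needed per group: {n_per_group:.0f}")
# Works out to roughly 64 per group (about 128 total) for a
# two-group t-test under these assumptions.
```

Change any one of those three inputs and the required sample size changes, which is exactly why you have to understand what you're typing into each field.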

Some great websites that address this issue in (mostly) plain English are:
Statistically Significant Consulting

University of Oregon

University of Texas

Pitfalls of Data Analysis

Some excellent reference books can be found at Amazon (but then again what can't you get at Amazon?)
