Die P value, Die.

This comment discusses William “Matt” Briggs’ new book, which is on order.

One comment before this: such work is needed.

A book of this type appears every so often, and its criticisms apply to every field that does not do randomized controlled trials, but much less so to randomized data (and often not at all). So, what fields are those? Sociology, economics, psychology, epidemiology, and so on. You get the idea. The most public offenders are epidemiology, economics, and climate modelling.

Epidemiology has corrective randomized trials and is frequently able to confirm or refute the observational findings. The biggest crisis of confidence probably came with the WHI (the Women’s Health Initiative), which refuted prior evidence of a benefit from women taking hormone replacement therapy. That was such a big deal I still remember where I was when I first got the results. It led me down a path that resonates with the book’s author.

As models increase in complexity, they replace data with assumptions. If you have enough data you can usually use simple models effectively. However, when you want to ask increasingly complicated questions you make increasingly complicated assumptions, and often the people now using the model as a black box have no respect for those assumptions. When the assumptions are violated, as they frequently are, the results are rendered gibberish.
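To make that concrete, here is a minimal sketch of my own (not from the book): with only a handful of noisy points, a straight line stays sensible, while a high-degree polynomial fits the sample perfectly and then rests entirely on its assumed form the moment you step outside the data.

```python
# Sketch: complexity substitutes assumptions for data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# A small, noisy sample from a roughly linear process.
x = np.linspace(0, 5, 8)
y = 2.0 * x + rng.normal(scale=1.0, size=x.size)

simple = np.polyfit(x, y, deg=1)    # few assumptions, leans on the data
complex_ = np.polyfit(x, y, deg=7)  # interpolates the sample exactly

x_new = 7.0  # just outside the observed range
print("simple model at x=7: ", np.polyval(simple, x_new))
print("complex model at x=7:", np.polyval(complex_, x_new))
# The simple fit usually lands near the true value (about 14); the
# degree-7 fit typically lands far away, because its answer comes from
# the assumed polynomial form rather than from evidence in the data.
```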

I have now reached a conclusion similar to the one VD’s summary of the book suggests: I only regard as real science randomized studies, models that can accurately predict, or science that is hierarchical in nature in the real world (the next experiment or application implicitly reveals the flaws in the previous one). Most epidemiology and economics is preliminary, though not without its uses.

The attacks on the p-value have prompted many proposed solutions, which I am sure the author addresses. Fisher enshrined it as a good tool for a reason, but as with all tools there remain many caveats that need to be constantly shouted from the rooftops.
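One of those caveats is easy to demonstrate. The sketch below (my own illustration, not an example from the book) runs many t-tests on pure noise; roughly five percent come out “significant” at the usual threshold, which is exactly what the p-value promises and exactly what gets misread when it is used as a black box.

```python
# Sketch: p-values on pure noise behave exactly as advertised.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_tests = 1000

false_positives = 0
for _ in range(n_tests):
    # Two groups drawn from the same distribution: the null is true.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons had p < {alpha}")
# Expect roughly 50. The arithmetic is fine; the trouble starts when each
# of those 50 is reported as a discovery.
```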

Matt Briggs has an advantage. He has a faith: he knows his Aquinas. He knows basic philosophy. And he understands that statistics is merely a mathematical way of modelling what is on the ground. To this knuckle-dragging clinician, randomization is a tool used to deal with factors we are too lazy to account for, or do not know about.
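A rough simulation of that point, with made-up variable names: a hidden “frailty” factor drives both who gets treated in observational practice and the outcome itself, while the treatment does nothing. The observational comparison shows a large spurious effect; flipping a coin for assignment makes it vanish.

```python
# Sketch: randomization deals with the confounder nobody measured.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

frailty = rng.normal(size=n)                      # unmeasured confounder
outcome = -1.0 * frailty + rng.normal(size=n)     # treatment has no effect

# Observational practice: frail patients are more likely to be treated.
treated_obs = (frailty + rng.normal(size=n)) > 0
obs_effect = outcome[treated_obs].mean() - outcome[~treated_obs].mean()

# Randomized trial: a coin flip ignores frailty entirely.
treated_rct = rng.random(n) < 0.5
rct_effect = outcome[treated_rct].mean() - outcome[~treated_rct].mean()

print(f"apparent effect, observational: {obs_effect:+.2f}")
print(f"apparent effect, randomized:    {rct_effect:+.2f}")
# The observational contrast shows a sizeable spurious "harm" from
# treatment; the randomized contrast sits near zero, the true effect.
```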

A good review is linked at his blog. The book will be on my bookshelf.

UPDATE

Nic Steves corrected my use of names, and the book has arrived. I hope to review it, along with R for Data Science, once I have read them.

4 thoughts on “Die P value, Die.”

  1. When I first started getting deeper into Medicine, I realized the massive problem with Medical Research is Medical Researchers. The second thing I realized is that all of the extremely useful information is stuck in the minds of a few topic experts and actually getting to it is a pain because they haven’t written it down.

    I’d say something like this could have an effect on Economics as well, but that’s a bit harder. Sometime in the 1970s, the major countries realized it was much better to just lie about their statistics. Oh, they’re “collected & analyzed”, but some careful control of the inputs shades all of the data toward the desires of its masters.

  2. Pingback: Statistical Breakdown. | Dark Brightness
