Thursday, August 10, 2017

Information that confirms an idea isn’t evidence.


Okay, this title seems counter-intuitive – how could information that supports an idea not be evidence for the idea? Sure, it may be “weak”, but it has to count at least a little, doesn’t it? Yet it turns out that, in most cases, data that supports an idea is more likely to be produced when the idea is false than when the idea is true.

It starts with the famous Ioannidis paper, “Why Most Published Research Findings Are False”. Table 4 of that paper outlines the positive predictive value (PPV) for a variety of types of investigation, where the PPV is the proportion of positive findings that are true positives – true positives divided by all positives (true plus false). Note that most of our discourse – people who claim to have had a weirdly accurate reading from a medium, conspiracy theories making the rounds of social media, experiences of horrific side effects from vaccines/statins/<insert substance of choice here>, etc. – doesn’t even remotely reach the level of an “exploratory study” with respect to rigor. Yet even with that element of rigor, a positive result from an exploratory study is still tens to hundreds of times more likely to be a false positive than a true positive. This means that the ability to “confirm” an idea (i.e. find information which supports it) says much, much more about how easy it is to find confirming information even when the idea is false than it says about whether the idea is true.
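To make the arithmetic concrete, here’s a minimal sketch of the simplified version of Ioannidis’s formula (ignoring his bias term). The numbers – 1-in-1000 pre-study odds, 20% power, a 0.05 significance threshold – are illustrative assumptions for a long-shot exploratory idea, not values quoted from Table 4:

```python
def ppv(prior_odds, power=0.20, alpha=0.05):
    """P(idea is true | a positive finding), ignoring bias.

    prior_odds: pre-study odds R that the idea is true
    power:      chance of a positive finding when the idea is true
    alpha:      chance of a positive finding when the idea is false
    """
    true_positives = power * prior_odds   # rate of true positives
    false_positives = alpha               # rate of false positives
    return true_positives / (true_positives + false_positives)

# A long-shot idea (pre-study odds 1:1000) tested at exploratory rigor:
print(round(ppv(prior_odds=1 / 1000), 4))  # 0.004 -- a positive result is
                                           # ~250x more likely false than true
```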

We saw this in my prior post, where hummingbird statements, which are often used as proof that a particular medium’s reading depends on anomalous information, are also easily produced when the idea that mediums are tapping into anomalous information is false. This prevents us from being able to distinguish which of the great variety of contradictory and fantastical statements made about the afterlife may actually be true.

Unfortunately, confirmation bias tends to ensure that we spend our time looking for this confirming information, instead of looking for information that would help us distinguish between true and false ideas.

Another famous experiment, the Wason selection task, tests this bias directly: it asks you to turn over a card or two in order to check whether a rule about those cards is true. Fewer than 10% of people taking the test (even intelligent university students) pick cards that adequately test the rule. Almost everyone picks the card that would confirm the rule, but few also pick the card that could show the rule is false. This leads us to think that we are building evidence for an idea, by finding more and more examples that confirm the idea, even when the idea is false.
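In the classic version of the task, the four cards show A, K, 2 and 7, and the rule is “if a card has a vowel on one side, it has an even number on the other”. A card is worth turning over only if its hidden side could break the rule; this little sketch (an illustration of the logic, not code from the original study) makes that explicit:

```python
# Classic Wason setup: rule "if a vowel is on one side, an even number
# is on the other". A card is diagnostic only if its hidden side could
# reveal a vowel paired with an odd number.
cards = {"A": "vowel", "K": "consonant", "2": "even", "7": "odd"}

def can_falsify(face):
    # A visible vowel might hide an odd number; a visible odd number
    # might hide a vowel. A consonant or an even number can't break
    # the rule no matter what is on its back.
    return cards[face] in ("vowel", "odd")

print([card for card in cards if can_falsify(card)])  # ['A', '7']
# Most people pick 'A' and '2' -- but '2' can only confirm, never refute.
```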

When faced with evaluating whether something might be true, don’t look at the ‘evidence’ for the idea. Ask yourself, “what might I expect to see if this isn’t true?” Perhaps what you’d expect to see if the idea isn’t true is pretty much the same as what you’d expect to see if it is. If you spend even five minutes on Snopes looking at the plethora of false and unproven conspiracy theories out there, it becomes pretty obvious that no matter how sketchy the idea (Pizzagate, anyone?), it’s pretty easy to build a case for it even when it’s false. The existence of ‘evidence’ may just tell you that ‘evidence’ is easy to produce, not that the conspiracy is valid.
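The underlying point is Bayesian: evidence shifts belief only in proportion to how much more likely it is when the idea is true than when it is false. A minimal sketch of that likelihood-ratio arithmetic, with made-up probabilities for illustration:

```python
def posterior_odds(prior_odds, p_e_if_true, p_e_if_false):
    # Bayes' rule in odds form:
    # posterior odds = prior odds * P(evidence | true) / P(evidence | false)
    return prior_odds * (p_e_if_true / p_e_if_false)

# "Confirming" information that is almost as easy to produce when the
# idea is false barely moves the odds, however much of it piles up:
print(posterior_odds(prior_odds=1 / 100, p_e_if_true=0.90, p_e_if_false=0.85))
# ~0.0106 -- essentially unchanged from the 0.01 prior
```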
