Alex Edmans is a Professor of Finance at London Business School.
The 2016 Oxford Dictionaries Word of the Year was "post-truth". Few would doubt that it was a deserved winner. In the Brexit referendum, the Leave campaign plastered buses with the message that the UK sends £350 million a week to the EU, ignoring Britain's rebate of £100 million plus the substantial sums that the EU spends on the UK. The US election campaign was similarly filled with falsehoods and candidates contradicting their own previous statements.
But, for every cloud, there's a potential silver lining. Psychologists refer to the importance of "hitting bottom" before recovery from an addiction. While 2016 might seem a low point, with falsehoods potentially swinging critical votes, the bright side is that the public now realises that it shouldn't accept everything at face value. Facebook and Twitter are taking seriously the concern that they may be a platform for "fake news", and policymakers, journalists and bloggers are increasingly trying to back up their assertions with facts. Surely this is the perfect response to post-truth 2016?
Not necessarily. In a recent TEDx talk, entitled "From Post-Truth to Pro-Truth", I explain why truth is not enough. Even if a fact is true, it may still be meaningless. This is because a fact, even if true, may describe only a single isolated case that is not representative of what normally happens: an anecdote is not data. You can almost always find an anecdote to back a view you want to support. An anti-immigration newspaper could find one example of an immigrant family who committed a crime, and turn that into a headline. A pro-immigration newspaper could find an example of an immigrant family who set up a business and created jobs, and turn that into a headline. Both headlines may be true, but they are equally meaningless, as they say nothing about immigrants in general.
Digesting data
Stories are powerful; they're vivid; they bring ideas to life. It's stories, not statistics, that win headlines, that make TED talks and self-help books memorable. But a single story is meaningless and misleading if it is not backed up by large-scale data. While "post-truth" is the reigning Oxford Dictionaries word of the year, the biggest problem is not that we live in a post-truth world. It's that we live in a post-data world. We prefer a single story to tons of data.
Why is data so derided? In addition to being boring, it is also seen as the realm of experts. Just as 2016 saw the rise of "post-truth" politics, it saw the collapse of the expert. The low point was perhaps leading Brexit campaigner Michael Gove famously saying "people in this country have had enough of experts". Experts are seen as out-of-touch elites. A multi-millionaire CEO warning about how Brexit would hinder free trade couldn't possibly speak up for the person on the street, who cares not about free trade but about paying the mortgage. But data challenges elitism. It provides the evidence against which an assertion can be tested. Without data, a CEO can simply say "in my experience" (which can't be shown to be false) and claim that his or her experiences are more valid than those of the individual on the street, because these words of wisdom come from the lips of a CEO.
But data is democratic. It considers a CEO as no more or less than anyone else. Data stands up for the person on the street and against the elites.
In Roman times, censuses looked only at who was considered politically important (property-owning men); data looks at the entire population. As Guardian journalist William Davies has written, "What characterised … knowledge as statistical was its holistic nature: it aimed to produce a picture of the nation as a whole. Statistics would do for populations what cartography did for territory."
Feel the quality
So, if a single true story isn't enough, surely it's enough to quote studies that use large-scale data? If these studies are by academics, all the better: isn't that the best antidote to the denigration of the expert? Indeed, it's currently in vogue to follow a statement with something like "see Wilyman (2015)" to claim the mantle of academic rigour. But the use of academic evidence is also dangerous. There is an enormous range in the quality of evidence: the rigour of its methodology, its ability to distinguish causation from correlation, its power to control for extraneous factors. As a result, you can find evidence to support nearly any view you would like. Rather than evidence driving one's opinion, a pre-conceived opinion drives the search for evidence. This is the classic problem of confirmation bias.
The public should not be impressed by a proposal being based on "evidence" without considering its quality. The Wilyman paper could be anything from a flawed, unpublished draft to a Nobel Prize-winning article. Indeed, Wilyman (2015) is in fact an anti-vaccination study which has since been widely discredited and has never been published. The quality of evidence is not just an "academic" debate, where one academic snobbishly tries to claim that his or her paper is higher-quality than another's, but one with immense real-world implications.
Testing, testing
Three critical indicators of the quality of evidence, though often overlooked, are whether the paper is published, where it is published, and who the authors are. A published paper has to go through rigorous peer review to check its scientific accuracy. The very top journals have the highest standards, using leading scholars at the world's best business schools to scrutinise a manuscript. As Managing Editor of the Review of Finance, Europe's leading finance journal, I reject 97% of manuscripts. The 3% not rejected are not immediately accepted either. Instead, their status is "revise-and-resubmit": the reviewers communicate concerns that the authors need to address, and the paper can still be rejected at the next round. It is not unusual for a paper to take five years to be published after its first draft. A hard slog for the authors, but it helps ensure that the published results are correct.
As an example of how rigorous checks can overturn a paper's conclusions, consider a witness's submission to the Parliamentary Inquiry on Corporate Governance. This witness argued for disclosure of pay ratios, citing a finding "that firm productivity is negatively correlated with pay disparity between top executive and lower level employees." The assertion was based on an unpublished 2010 draft. Having gone through peer review and improved its methodology, the 2013 published version reverses the preliminary findings and concludes "that firm value and operating performance both increase with relative pay". In addition to highlighting the importance of peer review, this example also shows how a partisan observer can find a paper to support a pre-existing viewpoint, even citing an unpublished draft when a published version is available.
The second indicator is the quality of the journal in which a paper has been published. That a journal calls itself "peer-reviewed" is far from sufficient to guarantee its rigour, since there is a vast range in the quality of reviewing standards. Journal quality can easily be checked by looking at one of the freely available lists, for example the Financial Times Top 50. A reader does not need to be an academic "insider" to do this.
Of course, every paper starts out unpublished. How do we gauge the quality of a new paper? The third indicator is the credentials of the authors, such as the quality of their institution and their track record of past top-tier publications, the same credentials we would check for an expert witness in a trial. Again, lists of the top universities are freely available. Last November, a study on executive pay hit national headlines, even though not one person I asked had heard of the business school that released it. This is not elitism, but simply a desire to use the best evidence. We would listen to a medical opinion from the Royal Marsden Hospital more readily than to one from a hospital we have never heard of. An equally serious issue is that none of the newspapers that covered the study had even read it: it had not yet been released. But they were happy to take the authors' word for it, potentially because its findings played to the national mood on executive pay.
Peer review is not perfect. Mistakes are made. But it is better to go with something checked than something unchecked. When considering treatment options for a medical condition, a patient would want to consider the world's best evidence on the treatments' success, conducted by the top scientists and thoroughly checked. We should apply the same rigour when considering the health of the economy.
This article was previously published by London Business School.