“If you torture the data long enough, it will confess.”

Opening his keynote address at the Northern Finance Association Conference in September with this quote from Nobel Prize-winning economist Ronald Coase, Campbell Harvey, professor of finance at Duke University, spoke about the challenges of working with data in empirical research.

He cited the need for care around data selection, delegation, the multiple testing problem and outright data manipulation.

Referring to a research paper by a major asset manager that looked at the persistence of active performance, he said it only considered managers in the top three quartiles of performance instead of using the entire universe of managers.

“The idea in the end of the paper is once you censor 25 per cent of the bad performance, then 84 per cent of U.S. equity active managers have beat the S&P.”

This is an example of “blatant strategic data selection,” he added.
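A small simulation makes the effect of this kind of censoring concrete. The figures below are hypothetical, not from the paper Harvey cited: managers' excess returns over a benchmark are drawn as pure noise, so roughly half should beat the benchmark; dropping the worst quartile before measuring mechanically inflates that fraction.

```python
import random

random.seed(0)

# Hypothetical illustration: 1,000 managers whose annual excess returns
# over a benchmark are pure noise (mean zero), so by construction roughly
# half should beat the benchmark.
excess = [random.gauss(0, 0.05) for _ in range(1000)]

full_universe = sum(r > 0 for r in excess) / len(excess)

# Now "censor" the worst 25 per cent of performers before measuring,
# and recompute the fraction that beat the benchmark.
survivors = sorted(excess)[len(excess) // 4:]
censored = sum(r > 0 for r in survivors) / len(survivors)

print(f"full universe beating benchmark: {full_universe:.0%}")  # ~50%
print(f"after censoring bottom 25%:      {censored:.0%}")       # ~67%
```

Nothing about the managers changed between the two measurements; only the sample did.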

Another issue to be mindful of is delegation. Through another anecdote, Harvey highlighted how even if the researcher isn’t mining data, it may happen if they delegate to other employees.

The garden of forking paths, an important concept in statistics related to the multiple testing problem, is another common issue. “If you try enough things, something will turn out to be significant, so you need to essentially increase the threshold for significance,” said Harvey.

Sharing an example of testing 20 variables, he noted the first researcher might find the first variable significant and stop testing. Another researcher might start with the last variable and work backwards, finding nothing significant until reaching the first variable. The second researcher might dismiss that variable's significance because one hit out of 20 might just be luck.

“The idea of the garden of forking paths is that these situations need to be treated equally.”
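The arithmetic behind Harvey's point about raising the significance threshold can be sketched as follows. Assuming 20 independent tests and a conventional 5 per cent false-positive rate per test, the chance of at least one lucky "discovery" is far above 5 per cent; a Bonferroni-style correction, one standard way to adjust the threshold, restores the intended error rate.

```python
# Sketch of the multiple testing problem: with 20 independent tests, each
# carrying a 5% false-positive rate, at least one spurious "significant"
# result is more likely than not.
alpha = 0.05
n_tests = 20

p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"P(>=1 false positive across {n_tests} tests): {p_at_least_one:.0%}")  # ~64%

# A Bonferroni-style correction divides the per-test threshold by the
# number of tests, holding the family-wise error rate near 5%.
bonferroni_alpha = alpha / n_tests
p_corrected = 1 - (1 - bonferroni_alpha) ** n_tests
print(f"with per-test threshold {bonferroni_alpha:.4f}: {p_corrected:.0%}")  # ~5%
```

This is why a variable that clears the conventional 5 per cent bar after many tries, along either path through the garden, deserves the stricter threshold.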

There are other examples of more severe data manipulation, where researchers will try multiple methods until they get a significant result they’re looking for, said Harvey.

He noted people might have heard the phrase ‘let the data speak.’

“But the message today is that data do not speak. It is really the interpreter of the data that speaks. And often there is an agenda that is associated with that researcher’s work. And it might be an academic researcher, it might be a practitioner, it might be a politician.”

The tools that researchers choose can shape the narrative, he added. “This message, I think, is increasingly important today in the era of big data, in the era of machine learning. There are so many predictors today, so many different techniques that are available. . . . It is just so tempting to let the machine collect the data, come up with a result and then, ex-post, you spin a story about your result. That’s not the way that we should do research in finance.”

In finance, there are economic models that don’t exist in other fields that can provide a foundation to guide empirical work, he noted.

“The problem is not the egregious things that people do that are rare, like falsification, fabrication of data or plagiarism. Those are big things. It’s the next level. I call the falsification, the fabrication and the plagiarism . . . ‘hard misconduct.’ It’s the soft misconduct, and some of that we’ve seen today, that we really need to be careful about.”