Not all research is equal - science vs propaganda

Nico Neumann.

Adland is having a heated discussion about the results of the recent ThinkTV Research Series, with the latest piece titled ‘Not all Reach is Equal’. As a passionate scientist, I think it is crucial to have such debates, particularly in times of BS artists and declining consumer trust. If there is one thing we have learnt from the Trump presidency, it is that popularity and status do not necessarily mean someone is always right. Presented ‘facts’ need to be checked for correctness.

With regard to the research under discussion, one of the biggest concerns is that the [original] ad-impact study was funded by ThinkTV and finds that TV is the best-performing medium in its test.

Surprising? Or just a coincidence?

Well, you should know that a study sponsored by a radio station finds that radio provides the best ROI. And while a Facebook-commissioned study finds that Facebook deserves much more credit than we all thought, a study commissioned by Google finds that YouTube provides better ROI than TV.

Recognise the pattern? A lot of coincidences.   

So, how can we help readers distinguish between proper science and propaganda?  

As a rule of thumb, everyone should consider three criteria that indicate red flags for any work calling itself ‘research’:

1)     The financial sponsor of the study is an industry lobby body or group that benefits from the results shown.

2)     There is only a ‘pseudo’ white paper or just a slide deck, lacking important technical details.

3)     There has been no peer review by fellow scientists.

The examples summarised earlier show why point 1) is a concern. Let me ask you: would ThinkTV pay large sums (sometimes up to one million) only to end up with no results? And would they even publish a study that might show TV does not work well?

Everyone can answer this question for themselves.

Second, anyone can generate ‘some numbers and results’ and call it ‘research’. You could ask two neighbours for their opinion, count seagulls or flip a coin, and then perform any mathematical operation on the result. That is why a proper empirical study must spell out its exact methodology and research design, rather than offering vague mentions of algorithm names or top-line numbers.
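
To make this point concrete, here is a minimal sketch in Python. It uses entirely made-up data (coin flips standing in for ten hypothetical ‘channels’) and has nothing to do with any real study; it simply shows how easily noise can be dressed up as a ‘winner’ with an impressive-sounding lift.

```python
# A minimal sketch of how "results" can be produced from meaningless inputs:
# flip coins for ten made-up channels, compare their averages, and report
# whichever gap looks biggest. All data here is random by construction.
import random

random.seed(1)

def coin_flips(n):
    """Simulate n coin flips (1 = heads), i.e. data with no signal at all."""
    return [random.randint(0, 1) for _ in range(n)]

# Pretend these are "ad recall" scores for ten different media channels.
channels = {f"channel_{i}": coin_flips(50) for i in range(10)}
averages = {name: sum(vals) / len(vals) for name, vals in channels.items()}

best = max(averages, key=averages.get)
worst = min(averages, key=averages.get)
lift = (averages[best] - averages[worst]) / averages[worst] * 100

# A press release could now claim the "winner" beats the "loser" by X per cent,
# even though every channel was literally a coin flip.
print(f"'Winner': {best} beats {worst} by {lift:.0f}% (pure chance)")
```

The output looks like a finding, yet there is no signal at all by construction, which is exactly why the methodology, not the headline number, deserves the scrutiny.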

Third, unfortunately, research studies can become quite technical, and judging whether a chosen method is appropriate may require deep statistical or other expert knowledge. This is why academic journals have a peer-review process, in which fellow scientists review studies (typically in a blind process, without knowing the author names, to avoid bias).

Reviewers are usually very strict: they ask many questions about the analyses and often demand additional evidence to ensure someone is not just making wild claims for headlines. The more prestigious a journal, the more scrutiny is involved in this process. There is a clear hierarchy among research outlets, and top research journals publish only a small fraction (less than 10%) of what is submitted to them.

Now let’s take a look at the latest ThinkTV study. As a vigilant researcher and possible reviewer of the study, one would naturally have many questions:

What was the exact experimental design? How many participants were in each ad-exposure group after 1, 14 and 28 days? How many brands were included in the brand-choice design, and were people allowed not to choose any brand? Was the shopping environment itself tested and free of bias? Were participants isolated from other influences? Were the results statistically significant?
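
To illustrate just one of these questions, here is a hedged sketch of the kind of significance check a reviewer would want to be able to reproduce: a standard two-proportion z-test. All counts below are invented placeholders; none of them come from the ThinkTV deck.

```python
# A sketch of one reviewer question: is the gap between two ad-recall rates
# bigger than sampling noise alone would produce? The group sizes and recall
# counts are hypothetical, not figures from the study under discussion.
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test statistic."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical example: 60 of 200 recall the ad in group A vs 48 of 200 in group B.
z = two_proportion_z(60, 200, 48, 200)
print(f"z = {z:.2f}; |z| > 1.96 would be significant at the 5% level")
```

With these made-up counts, z comes out around 1.35, so the apparent six-point gap would not clear the conventional 5% threshold. Without the real group sizes and counts, readers cannot run any such check.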

In addition, any novel technology warrants extra care and testing. Specifically, how reliable is “AI”-based eye-tracking (gaze) technology in the first place, in particular on mobile phones? Has the technology – on which all the findings rest – been independently verified to work properly before the ad tests?

Unfortunately, there are few technical details available currently, apart from some basic infographics and slides. It is, therefore, hard to judge the robustness of the findings.

Moreover, there is one statement in the research deck that is simply not correct: the claim that a discrete choice model based on virtual shopping processes is “academically validated as the most realistic way to reveal consumers’ actual choice of brand”.

The most realistic way to reveal consumers’ actual purchase decisions is – well – consumers’ actual purchase decisions, which can be tested in the real world using real sales data, ideally from a field experiment.

While no research study is perfect – there will always be limitations – this is a potentially serious shortcoming for validity, as virtual tasks tend to suffer from response bias. There is often a big difference between saying you will buy something (claims) and actually buying it (actions), in particular when you are asked about product preferences only minutes after being shown ads for those products.
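
As a purely illustrative sketch of that gap, suppose an ad truly lifts purchases by two percentage points, but people who were just shown the ad over-report their intention by ten points when asked minutes later. Every rate below is an assumption chosen for the example, not a measurement from any study.

```python
# An illustrative simulation of response bias under assumed rates: the
# "stated choice" measurement inflates the apparent ad effect far beyond
# the real purchase lift. All probabilities are assumptions for this sketch.
import random

random.seed(7)
N = 20_000

TRUE_BUY_CONTROL = 0.20   # assumed real purchase rate without ad exposure
TRUE_BUY_EXPOSED = 0.22   # assumed real purchase rate after ad exposure
STATED_INFLATION = 0.10   # assumed over-reporting right after seeing the ad

def rate(p, n):
    """Share of n simulated people who 'buy' with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

real_lift = rate(TRUE_BUY_EXPOSED, N) - rate(TRUE_BUY_CONTROL, N)
stated_lift = rate(TRUE_BUY_EXPOSED + STATED_INFLATION, N) - rate(TRUE_BUY_CONTROL, N)

print(f"Lift measured from real purchases: {real_lift:.1%}")
print(f"Lift measured from stated choices: {stated_lift:.1%}")
```

Under these assumptions the stated-choice lift comes out several times larger than the real one, which is why validation against actual sales data matters.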

In sum, everyone can decide for themselves how much they trust the findings of individual studies and what constitutes propaganda versus empirically sound work. However, there is a good reason why academic outlets experienced in research assessment require full technical details and peer review.

Why? Because ‘not all research is equal’.

Nico Neumann is an assistant professor and fellow at the Centre for Business Analytics at Melbourne Business School.
