Science. It Works. #4

Opinion: Buried in bullshit
Tom Farsides and Paul Sparks smell trouble.

There is a worrying amount of outright fraud in psychology, even if it may be no more common than in other disciplines. Consider the roll call of those who have in recent years had high-status peer-reviewed papers retracted because of confirmed or suspected fraud: Marc Hauser, Jens Förster, Dirk Smeesters, Karen Ruggiero, Lawrence Sanna, Michael LaCour and, a long way in front with 58 retractions, Diederik Stapel. It seems reasonable to expect that there will be further revelations and retractions.

That’s a depressing list, but out-and-out lies in psychology may be the least of our worries. Could most of what we hold to be true in psychology be wrong (Ioannidis, 2005)? We now turn to several pieces of evidence that compellingly demonstrate that contemporary psychology is liberally sprayed with bullshit (along with some suggestions for a clean-up).

Lies, damned lies and statistics
Almost all published studies report statistically significant effects even though very many of them have sample sizes that are too small to reliably detect the effects they report (Bakker et al., 2012; Cohen, 1962). Similarly, multi-study papers often report statistically significant effects at frequencies that are literally implausible given their statistical power (Schimmack, 2012).
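
To see why, consider the arithmetic. The sketch below is ours rather than the article’s, and the power figure of .35 is an illustrative assumption broadly in line with the estimates Bakker and colleagues discuss:

    # Illustrative sketch (ours, not the article's); the .35 power figure is assumed.
    power = 0.35        # assumed probability that one underpowered study detects a true effect
    n_studies = 5       # a typical multi-study paper

    # If the five studies independently test a real effect, the chance that
    # every one of them reaches statistical significance is power ** n_studies.
    p_all_significant = power ** n_studies
    print(f"P(all {n_studies} studies significant) = {p_all_significant:.4f}")
    # ~0.0053: about one such paper in 190 should look this 'clean', yet
    # uniformly significant multi-study papers are routine (Schimmack, 2012).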

In addition, many of the analyses and procedures psychologists use do not justify the conclusions drawn from them. . . .

So-called ‘p hacking’ also remains rife in psychology. . . .
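
For readers unfamiliar with the mechanics, the following is a minimal simulation, ours and purely illustrative, of one common p-hacking tactic: peeking at the data and topping up the sample until the test comes out significant. Both groups are drawn from the same distribution, so every ‘significant’ result it produces is a false positive:

    # Minimal illustration (ours, not the article's) of optional stopping.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, alpha = 5000, 0.05
    false_positives = 0

    for _ in range(n_sims):
        a = list(rng.normal(size=20))   # 'control' scores; no true effect exists
        b = list(rng.normal(size=20))   # 'treatment' scores from the same distribution
        for _ in range(5):              # up to five tests as the sample is topped up
            if stats.ttest_ind(a, b).pvalue < alpha:
                false_positives += 1    # counted as a finding, but necessarily false
                break
            a += list(rng.normal(size=10))
            b += list(rng.normal(size=10))

    print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")

Runs of this sketch typically produce a false-positive rate in the region of 12 to 14 per cent, well above the nominal 5 per cent that p < .05 is supposed to guarantee.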

Many researchers and reviewers simply do not have the methodological or statistical expertise necessary to engage effectively in science as it is currently practised in mainstream psychology (Colquhoun, 2014; Lindsay, 2015). Scientists and reviewers also increasingly admit that they simply cannot keep up with the sheer volume and complexity of the material in which they are supposed to have expertise (Siebert et al., 2015). . . .

Few successful attempts have been made to rigorously replicate findings in psychology. Recent attempts suggest that even studies almost identical to the originals rarely produce reassuring confirmation of their reported results (e.g. the Open Science Collaboration: see https://osf.io/vmrgu).

The task of replication is made tougher because researchers control what information reviewers get exposed to, and journal editors then shape what information readers have access to. If readers want further information, they usually have to request it from the researchers and they, their institution or the publishing journal may place limits on what is shared. One consequence of this is that other researchers are considerably hampered in their ability to attempt replication or extension of the original findings. . . .

Traditionally, researchers are much less likely to submit manuscripts reporting experiments that did not find an effect, and journals are far less likely to accept them when they do (Cohen, 1962; Peplow, 2014). Most prestigious journals also have a strong preference for novel and dramatic findings over the replications and incremental discoveries that are typical of an established science. If researchers want to be published in high-ranking peer-reviewed journals, therefore, they are strongly incentivised to present highly selective and therefore misleading accounts of their research (Giner-Sorolla, 2012).

The current mechanisms of science production, then, place individual researchers in a social dilemma (Carter, 2015). Whatever others do and whatever the collective consequences, it is in the individual researcher’s best economic interest to downgrade the importance of truth in order to maximise publications, grants, promotion, media exposure, indicators of impact, and all the other glittering prizes valued in contemporary scientific and academic communities (Engel, 2015). This is especially the case when organisations and processes that might otherwise ameliorate such pressures instead exacerbate them because they too allow concerns for truth to be downgraded or swamped by other ambitions (journal sales, student recruitment, political influence, and so on) (Garfield, 1986). . . .

As it happens, we do think that our discipline has a lot to offer. But we also think that norms of assessing and representing it need to change considerably if we are to minimise our at least complicit contribution to the collective production and concealment of yet more bullshit. Here are some provisional and tentative recommendations.

1. Don’t give up. . . .

2. Prioritise scholarship. Psychologists and their institutions should do everything within their power to champion truth and to confront all barriers to it. If we have to choose between maintaining our professional integrity and obtaining further personal or institutional benefits, may we have the will (and support) to pursue the former.

3. Be honest. Championing truth requires honesty about ignorance, inadequacies, and mistakes (Salmon, 2003). . . .

4. Use all available evidence as effectively as possible. Important as they are, experiments are neither necessary nor sufficient for empiricism, scholarship or ‘science’ (see Robinson, 2000). To study important phenomena well, we need first to identify what they are and what central characteristics they have (Rozin, 2001). To study things thoroughly, we need to identify processes and outcomes other than those derived from our pet ‘theories’. Evaluating the research literature may well require skills different from those that have been dominant during much of its production (Koch, 1981). In particular, we have found it especially effective to describe others’ procedures and outcomes accurately in ordinary language and then to examine how well these justify the usually jargonistic ‘theoretical’ claims they supposedly support (cf. Billig, 2013).

5. Nurture nuance. Experiments within psychology are usually (at best) little more than demonstrations that something can occur. This is usually in service of rejecting a null hypothesis but it is almost as often misreported as suggesting (or showing or, worst of all, ‘proving’) something much more substantial – that something does or must occur. Perhaps the single most important thing psychology can do to quickly and substantially improve itself is to be much more careful about specifying and determining the boundary conditions for whatever phenomena it claims to identify (Ferguson, this issue; Lakens, 2014; Schaller, 2015).

6. Triage. Given that at least some areas of psychology seem awash with bullshit, we would be wise to prioritise topics for evaluation on the basis of their centrality and importance, rather than because some reported findings are, for example, recent or amenable to testing in online experiments (Bevan, 1991). . . .
