Source Material Part 3: Evaluating Source Material and How It's Used
To revisit where we're at, imagine you have been reading a target popular press article (or another online source, such as a blog or social media post) and have dug a little deeper to check out the scholarly sources on which the author has based their writing.
Once you've determined that the author's main source or sources are published in a scholarly journal, it may still not be clear whether the source is any good. For example, is the journal where the work is published a reputable publication that competently uses peer review?
Identifying Problematic Publishers and Journals
First, we want to keep our eyes open to whether the author's source is published in a low-quality venue, such as a predatory journal or vanity publisher. What are these?
"In scientific publishing, Predatory publishing, also write-only publishing or deceptive publishing, is an exploitative academic publishing business model that involves charging publication fees to authors without checking articles for quality and legitimacy, and without providing editorial and publishing services that legitimate academic journals provide."
Have a look at the links on the key terms above to learn a little more. There are lots of these fraudulent or low-quality journals out there; check out this predatory journals link to see a list of predatory journals and publishers in scientific fields of study.
If you aren't a scientist, scholar, or other expert in a field of study, almost all journals in any given area are ones you haven't heard of. So it's hard to know whether a particular journal is one you should trust. It's your job to verify the source: at minimum, a quick Google search can reveal whether other scholars have noted problems with the journal. Don't trust work that is published in a journal with a poor reputation.
Identifying Poor Work in Good Journals
But publication in an upper-tier journal isn't perfectly indicative of good science. A famous example that set alarm bells ringing in my own field is Daryl Bem's paper published in the Journal of Personality and Social Psychology (a very well-regarded psychological journal) purportedly finding evidence that ESP (extrasensory perception) is real. Much has been said about this paper in the years since its publication. Ultimately, the outlandishness of this paper's apparent findings was one piece of a much larger puzzle indicating that something was seriously amiss in much of how psychological science had been conducted to that point. Many exciting and necessary changes are now underway to improve the field.
So if poor research can sometimes be published in an apparently good journal, how do we know we can trust other works? That's a big question that scholars are struggling to figure out today. For a look at problems in science in general, see the recent book Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth, by Stuart Ritchie. For some interesting and thought-provoking conversations about the evolving state of psychological science in particular, check out the Two Psychologists Four Beers and Nullius in Verba podcasts.
There are some things to look out for to boost your confidence in the science literature. One key thing to look for is open science, in which scientists aim to be transparent about everything they and their teams do as researchers (see Center for Open Science and this linked article for an overview of recent advances). Some good signs are pre-registration of hypotheses and research methods and a willingness on the part of researchers to openly share the materials and data they used. If you look carefully at peer-reviewed articles, you may be able to see mention of open science and pre-registration approaches. A quick shortcut is to search the document for these terms.
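If you have the article's text saved to a file, that keyword search can even be automated. Here's a minimal Python sketch of the idea; the file name and the keyword list are illustrative assumptions on my part, and a hit (or a miss) on these strings is only a rough first pass, not proof that good or bad practices were used.

```python
from pathlib import Path

# Hypothetical file containing the article's full text (an assumption;
# point this at whatever copy of the paper you have).
text = Path("article.txt").read_text(encoding="utf-8").lower()

# Terms loosely associated with open-science practices (my own illustrative list).
keywords = [
    "preregist",          # matches "preregistered" and "preregistration"
    "pre-regist",         # hyphenated variants
    "open science",
    "open data",
    "osf.io",             # links to the Open Science Framework
    "registered report",
]

for term in keywords:
    hits = text.count(term)
    if hits:
        print(f"'{term}' appears {hits} time(s)")
```

Finding none of these terms doesn't mean the research is bad (more on that below); it just means the search didn't surface explicit open-science language.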
The transparency of open science allows others to see precisely what the researchers did to investigate their findings. It also allows other researchers to replicate the work, which is a critical part of the scientific process.
As a result of concerns about questionable research practices (QRPs), there has been a recent trend toward more openness. Accordingly, more attention is being paid to the value of pre-registration, whereby researchers register their hypotheses and methods before the work is conducted. This transparency places constraints on something all good scientists know is wrong: devising or revising hypotheses after the results are known. This is often referred to as HARKing (hypothesizing after results are known). HARKing is bad science, but without open science it is invisible to consumers of research, and in the past not much was in place to discourage it. With the movement toward pre-registration and sharing of materials/data, such practices should slowly be weeded out.
It's important to note that a basic knowledge of these problems, and of the recent trends toward addressing them, is good for you to have as a consumer of science. However, the absence of open science and pre-registration in a published research study is not diagnostic of bad research. That is, their absence doesn't at all mean the research is bad. The absence of such practices simply makes it harder to know whether good practices were used. By contrast, scholars' attempts at being open and honest about their process help readers to better grasp whether what they're reading is good work.
This makes everything super thorny and uncertain for the non-expert. Do consider, however, that it's better to acknowledge this complexity and uncertainty than to ignore it. If you are seeing a lot of complexity and ambiguity where you didn't before, and now have more questions about whether the science you consume is any good, then you're becoming more informed and better equipped to know when you should and should not update your prior beliefs based on your reading.
It's okay to hang out in a grey area, not knowing whether you can believe in a particular research finding—it's much better than assuming that every finding you come across is representative of reality (which is certainly not the case!).
Identifying Effective and Ineffective Use of Scholarly Sources
Okay, imagine that you've determined the author cites some good research from a good journal. How do you decide if the author's scientific source has been used correctly, or if there is something purposefully or accidentally misleading about the way the author has used their sources?
Sometimes authors cite great research, but draw inferences they have no business drawing. For instance, popular press articles will sometimes cite correlational research to make causal claims, which can be incredibly misleading (researchers are, by the way, not immune from this mistake). Sometimes a causal inference can seem legit to the uncritical reader, but ends up being a huge leap from what the research actually says.
Another common problem is the cherry-picking of supportive research. If the author is writing to make an argument, it's often easy to go find a single research study in support of the view (see also confirmation bias, discussed in the Metacognition module). This is what anti-vaccination folks do when they cite the discredited work of Andrew Wakefield that (falsely!) links childhood MMR vaccines and autism. In this case, there is one exceedingly poor source that might mistakenly be viewed as supportive of the link, and much good research that contradicts any causal connection between MMR vaccines and autism.
Your aim is to determine whether you can be justifiably confident that the author has done their due diligence to research the topic thoroughly and without bias (see also the People & Context module). Ask whether the author is accurately representing the current state of understanding in relevant fields of study. Do independent researchers concur with the findings of the cited research? Is the information corroborated by multiple reliable sources or just a single source (i.e., has the finding been replicated)? Did the author bother looking at the full literature, or are they just pulling from what supports what they want to believe and spread?
It's also possible that popular press articles embellish aspects of the research. Authors might, for example:
- Exaggerate the certainty of the findings.
- Overextend applications of the findings to real life.
- Leave out boundary conditions and limitations noted by the researchers.
Look to the scientific source material and compare it with how the work is communicated in the popular press article. Is the piece you initially read being transparent about the usefulness and quality of the work it cites, including limitations (e.g., can causality be inferred or not?), flaws (e.g., was the sample of participants large enough?), and alternate explanations (e.g., can the findings just as easily be explained in ways that differ from the explanation presented in the piece)?
In any case, it should be clear that understanding what makes for good evidence is often difficult. If you don't want to do the hard work needed to think critically about what underlies an author's claims, you may not know whether you ought to be convinced by what's being said.
Learning Check
