

Photo by bruce mars on Unsplash

Source Material Part 3: Evaluating Source Material and How It's Used

Once you've determined that a primary source was published in an academic journal, it may still not be clear whether the work was competently peer-reviewed or whether the journal that published it is reputable.


First, we want to keep our eyes open to whether the source is published in a low-quality venue, such as a predatory journal or vanity publisher. What are these? Have a look at the links to learn more. There are lots of these low-quality journals out there, and academics are constantly exposed to emails soliciting submissions. Coupled with this, there is a huge incentive for academics to publish their work. Unfortunately, that sometimes leads to low-quality work being published at venues that don't care about quality. Hopefully, most academics are skeptical of journals in their field that they haven't heard of before; however, if you aren't in academia, almost all journals in a given area are ones you haven't heard of, and it's hard to know which ones you shouldn't trust. It's your job to verify. At minimum, do a quick Google search: have other scholars noted problems with the journal? Don't trust anything that comes from a journal with a poor reputation.

But publication in an upper-tier journal isn't perfectly indicative of good ideas or good science. A famous example that set alarm bells ringing in my own field is Daryl Bem's paper published in the Journal of Personality and Social Psychology (a well-regarded psychology journal) purportedly finding evidence that ESP (extrasensory perception) is real. Much has been said about this paper in the years since its publication. Ultimately, the outlandishness of its apparent findings was one piece of a much larger puzzle indicating that something was seriously amiss in how much of psychological science had been conducted to that point. Many exciting and necessary changes are underway to improve the field.

So if poor research can sometimes be published in an apparently good journal, how do we know which works we can trust? That's the big question scholars are struggling with today. For a look at problems in science in general, see the recent book Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth, by Stuart Ritchie. For some interesting and thought-provoking conversations about the evolving state of psychological science in particular, check out the Two Psychologists Four Beers and The Black Goat podcasts.

There are some things to look for that can boost your confidence in the scientific literature. The key one is open science, which is about researchers being transparent about everything they and their team did (see the Center for Open Science and this article for an overview of recent advances). Good signs include pre-registration of hypotheses and research methods, and a willingness on the part of researchers to share the materials and data they used.


This transparency allows others to see precisely what the researchers did to arrive at their findings. It also allows other researchers to replicate the work, which is a key part of the scientific process. As a result of concerns about questionable research practices (QRPs), there has been a recent trend toward more openness. Accordingly, more attention is being paid to the value of pre-registration, whereby researchers register their hypotheses and methods before the work is conducted. This transparency places constraints on something all good scientists know is wrong: devising or revising hypotheses after the results are known, often referred to as HARKing (hypothesizing after results are known). HARKing is bad science, but it happens in practice, and in the past not much was in place to discourage it. With the movement toward pre-registration and the sharing of materials and data, such practices should slowly be weeded out.


It's important to note that these recent practices are good signs and useful clues for the reader; however, their absence, particularly in the past when they weren't common, is not diagnostic of bad research. It just makes it harder to know whether good practices were used.


All of this makes things thorny and uncertain for the non-expert. Consider, however, that it's better to acknowledge this complexity and uncertainty than to ignore it. If you're seeing a lot of complexity where you didn't before, and you now have more questions than answers, you're becoming more informed and better equipped to know when you should and shouldn't update your prior beliefs based on your reading. It's okay to hang out in a grey area, not knowing whether you can believe a particular research finding. That's much better than assuming that every finding you come across is representative of reality (which is certainly not the case!).

Effective Use of Sources

Okay, you've determined the author cites good research from a good journal. How do you decide whether the source has been used correctly, or whether there is something purposefully or accidentally misleading about the way the author has used their sources? Sometimes authors cite great research but draw inferences they have no business drawing. For instance, popular press articles will sometimes cite correlational research to make causal claims, which can be extremely misleading (researchers, by the way, are not immune to this mistake). Sometimes a causal inference can seem legitimate to the uncritical reader but ends up being a huge leap from what the research actually says.

Another common problem is the cherry-picking of supportive research. If an author is writing to make an argument, it's often easy to find research in support of their view (see also confirmation bias, discussed in the Metacognition module). This is what anti-vaccination folks do when they cite the discredited work of Andrew Wakefield that (falsely!) links childhood vaccines and autism. In that case, there is one exceedingly poor source that could be viewed as supporting the link, and much work that contradicts it. You want to determine whether you can be justifiably confident that the author has done their due diligence and researched the topic without bias (see also the People & Context module). Ask whether the author is accurately representing the current state of understanding in the relevant fields of study. Do independent researchers concur with the findings of the cited research? Is the information corroborated by multiple sources or just a single source (i.e., has the finding been replicated)? Did the author bother looking at the full literature, or are they pulling only from what supports what they want to believe and spread?

It's also possible that popular press articles embellish aspects of the research, for instance by exaggerating the certainty of the findings, overextending their applications to real life, or leaving out boundary conditions and limitations noted by the researchers. Look to the original source and check whether the piece you initially read is being transparent about the usefulness and quality of the work, including its limitations (e.g., can causality be inferred or not?), flaws (e.g., was the sample large enough?), and alternative explanations (e.g., can the findings just as easily be explained in ways that differ from the explanation presented in the piece?).

Learning Check

© Darcy Dupuis 2025

Contact

To provide feedback or to learn about using Fallible Fox content for personal, educational or organizational purposes, contact Darcy at dupuisdarcy@gmail.com
