While it is tempting to let data stand in for objective assessment, metrics can never be fully objective. Problems abound with impact metrics due to both implicit and explicit bias throughout the scholarly communications lifecycle, from research and writing through publication and promotion.
Challenges with Journal Impact Factors Within a Discipline:
- Impact factor analysis is limited to the scope of the tool you are using. For example, Journal Citation Reports draws on the Web of Science database, which indexes around 8,000 journals. If the journal you want to learn more about is not indexed in Journal Citation Reports, the tool will not provide an impact factor for it.
- A high impact factor does not guarantee quality or validity. For example, retracted articles often attract large numbers of citations as experts dissect the reasons for the retraction and the surrounding issues.
- Journals that publish many literature reviews tend to have high impact factors. Although review articles do not add new findings to the field, they are cited frequently.
- New journals are often underrepresented because they have not yet accumulated the citation history needed to earn an impact factor.
- Some journal editors may encourage authors to cite work previously published in that journal, effectively "gaming" the system.
Challenges with Impact Factors Across Disciplines:
- The standard two-year citation window can disadvantage disciplines that draw on much older literature, making accurate measurement difficult. A five-year window is often used to mitigate this within a specific discipline, but it makes cross-discipline comparisons impossible (see the worked example after this list).
- Disciplines where many authors collaborate on a single paper will generate higher impact factors than disciplines where authors publish alone or with one or two co-authors.
- Disciplines have different norms for how many citations a typical empirical research article includes, which affects citation rates across fields.
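To make the citation-window issue concrete, here is a minimal sketch using invented numbers for a hypothetical journal in a slow-citing field, where most citations arrive three or more years after publication. The journal names, counts, and function are illustrative assumptions, not data from Journal Citation Reports, but the calculation mirrors how a two-year versus five-year impact factor is computed: citations received in the report year to articles published in the preceding window, divided by the citable items published in that window.

```python
# Hypothetical example: how the citation window changes a journal's impact factor.
# All numbers are invented for illustration; real values come from tools such as
# Journal Citation Reports.

def impact_factor(citations_by_pub_year, items_by_pub_year, report_year, window):
    """Citations received in report_year to articles published in the preceding
    `window` years, divided by the citable items published in those years."""
    years = range(report_year - window, report_year)
    citations = sum(citations_by_pub_year.get(y, 0) for y in years)
    items = sum(items_by_pub_year.get(y, 0) for y in years)
    return citations / items if items else 0.0

# Citations received in 2024, keyed by the cited article's publication year.
# In this slow-citing field, older articles draw most of the citations.
citations_in_2024 = {2023: 20, 2022: 40, 2021: 90, 2020: 110, 2019: 95}
items_published = {year: 100 for year in range(2019, 2024)}  # citable items per year

print(impact_factor(citations_in_2024, items_published, 2024, window=2))  # (20+40)/200 = 0.30
print(impact_factor(citations_in_2024, items_published, 2024, window=5))  # 355/500    = 0.71
```

In this invented case the same journal looks more than twice as influential under a five-year window, which is why a two-year figure can undersell slow-citing fields, and why figures computed with different windows cannot be compared across disciplines.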
Challenges with Quantitative Metrics (e.g., citation counts, altmetrics):
- A growing body of research has raised issues of citational justice: implicit and explicit biases in scholarly communications result in disparities in who gets cited. See the research shared in a webinar from Sheila Craft-Morgan for more on this topic.
- Researchers with technical skills and knowledge can more easily drum up altmetrics. Whether by using automated bots to inflate page hits and social media mentions or simply by encouraging others to boost visibility in altmetric-friendly channels, those who understand altmetrics will have better altmetrics.
- Google Scholar is a go-to aggregator of citation counts and other quantitative measures, but it is an automated system with limited quality control and vetting. Citations are often double-counted or include mentions in sources that may not qualify as "scholarship" in the traditional sense, such as student papers or gray literature. While these mentions may well demonstrate impact, that impact needs to be defined and explained.