Anomalous Citation Patterns in the World of Citation Metrics
September 09, 2015
If you're familiar with the concept of citation metrics, it's highly likely that you are also familiar with the concept of anomalous citation patterns. These take many forms, but the most commonly discussed relate to self-citation and citation stacking. With every new release of Thomson Reuters' Journal Citation Reports, the academic community is flooded with blog posts, articles and editorials discussing the relative merits and flaws of citation metrics – most notably the Impact Factor – and the relative ease with which they can be manipulated. Examples include posts in both scholarly and non-scholarly sources, such as Scientific American, Times Higher Education and the Scholarly Kitchen.
Of course, some metrics (such as the Eigenfactor and the SJR) restrict or omit self-citations from their calculations, while Thomson Reuters reserves the right to exclude titles from the Journal Citation Reports if it finds that 'anomalous citation patterns' are having an undue influence on their rankings. This exclusion is commonly perceived by the academic community as punishment for misbehaviour, and can cause severe damage to the reputation of both editors and journals. Articles discussing these matters frequently associate the terms 'citation stacking' (a pattern of citation behaviour) and 'citation cartels' (one possible cause of citation stacking), further strengthening the perceived link between anomalous citation patterns and culpable behaviour. However, let's not forget that exclusion does not necessarily indicate culpability, and it is this distinction that reveals a disconnect between the perceptions of the academic community and the policies implemented by Thomson Reuters – who insist that exclusion from the Journal Citation Reports is not a 'punishment', but rather an attempt to maintain the integrity of their dataset. As such, before assigning culpability, we should pay attention to the various factors (other than citation manipulation) that can lead to 'anomalous citation patterns'.
Self-Citation
Self-citation can refer either to authors citing their own papers, or to papers in a journal citing other papers published in that same journal. Both forms can distort citation metrics, but journal-level self-citation is far easier to detect.
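To see why, here is a minimal sketch in Python. The journal names and the simple (citing_journal, cited_journal) record format are invented for illustration – this is not a real citation-index schema – but it shows that the journal-level self-citation rate is just the share of a title's incoming citations that originate from the title itself.

```python
from collections import Counter

def self_citation_rate(citations, journal):
    """Fraction of `journal`'s incoming citations that come from itself.

    `citations` is an iterable of (citing_journal, cited_journal) pairs --
    a deliberately simplified stand-in for real citation-index records.
    """
    sources = Counter(src for src, dst in citations if dst == journal)
    total = sum(sources.values())
    return sources[journal] / total if total else 0.0

# Hypothetical records, for illustration only.
records = [
    ("J Niche Stud", "J Niche Stud"),
    ("J Niche Stud", "J Niche Stud"),
    ("Other Rev", "J Niche Stud"),
    ("J Niche Stud", "Other Rev"),
]
print(self_citation_rate(records, "J Niche Stud"))  # 2 of 3 incoming -> ~0.67
```

Author-level self-citation, by contrast, requires disambiguating author identities across many papers and journals, which is a far messier task.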
Deliberate attempts to boost citation metrics through self-citation can cause considerable damage to a journal’s reputation, regardless of the benefit to its citation metrics. Practices such as loading editorials with self-citations, or ‘suggesting’ that authors include more citations to the receiving journal, stand to create bad feeling within the academic community. Moreover, they create a false impression of the usefulness of research, and (without appropriate checks) can lead to poor-quality titles dominating the journal rankings.
However, there are two key reasons (other than deliberate manipulation) why journals might have a higher-than-average self-citation rate:
- Niche or archive publications. If your journal founded a discipline (and therefore holds its research archive), or if it is one of just a handful of titles covering your subject area, it is natural that many of your papers will cite other papers in your journal.
- Poor discipline coverage. If your competitor journals are not indexed in a given citation database, citations made by those journals will not be counted. This means not only that your journal metrics will be calculated from a fraction of your total citations (one of several reasons for the low metrics of SSH – social sciences and humanities – titles), but that the majority of visible citations to your journal will be from your journal – leading to a high rate of self-citation.
Citation Stacking
The waters become even murkier when we come to the issue of citation stacking. Citation stacking refers to anomalous citation activity in which a disproportionate number of citations are exchanged between two or more journals. There is typically a 'donor' journal (the title giving the citations) and a 'recipient' journal (the title receiving them). Such patterns can be evidence of a 'citation cartel', whereby groups of journals attempt to inflate their metrics by donating and receiving citations without inflating their self-citation rates.
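As a rough illustration only – and assuming the same simplified (citing_journal, cited_journal) records sketched above – the snippet below flags pairs in which a 'donor' supplies an outsized share of a 'recipient's' external citations. The thresholds and the helper name flag_stacking are invented for this post; this is not how Thomson Reuters actually screens titles.

```python
from collections import Counter

def flag_stacking(citations, min_share=0.25, min_citations=5):
    """Flag (donor, recipient) pairs where the donor supplies an outsized
    share of the recipient's external (non-self) citations.

    Both thresholds are arbitrary, illustrative values.
    """
    incoming = Counter()   # external citations received, per journal
    pairs = Counter()      # citations exchanged, per (donor, recipient) pair
    for src, dst in citations:
        if src != dst:     # self-citations are a separate question
            incoming[dst] += 1
            pairs[(src, dst)] += 1
    return [
        (src, dst, n / incoming[dst])
        for (src, dst), n in pairs.items()
        if n >= min_citations and n / incoming[dst] >= min_share
    ]
```

A real screen would also have to account for field, journal size and time window; the point is simply that stacking shows up in the pattern of between-journal flows rather than in the self-citation rate.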
Notably, in the case of citation stacking, both the donor and the recipient journals are suppressed if Thomson Reuters believes that citation stacking has had an undue influence on the rankings of either title. However, there are many reasons why journals may appear to be guilty of citation stacking without any wrongdoing on their part.
- Poor discipline coverage. Once again, if only one or two of a journal’s main competitors are indexed, it stands to reason that the vast majority of its citations will come from those few publications.
- Author self-citation. I was recently unhappy to witness a journal being excluded from the Journal Citation Reports because a single author group, writing in a different publication, had extensively cited their own papers. Despite the fact that those papers were swiftly retracted by the other publication, and that the authors in question had no ties to the editorial teams of either journal, the exclusion (for both journals) remains in place.
- Third-party influence. It is important to remember that some journals may be unwitting members of a citation cartel. I have seen occasions where author groups, linked to another publication, have acted as citation cartels – publishing papers in a range of journals, yet loading those papers with citations to their own title, thereby creating a donor/recipient scenario. While peer review should catch the worst of such offenders, these trends can be difficult to spot when multiple reviewers are working on a collection of affected papers.
- Journal size. While all journals are vulnerable to exclusion for 'anomalous citation patterns', small journals are the most vulnerable. It is well understood that small journals tend to have the most volatile metrics, simply because the addition of a small number of citations can have a big influence on the average. The same is true for citation stacking – and, indeed, self-citation. It is difficult to imagine a journal such as Nature (with over 70,000 citations in the 2014 IF calculation) receiving enough citations (whether self-citations or citations from another publication) to be subject to the exclusion policy. In contrast, a small SSH title (with maybe 20 citations in the IF calculation) is vulnerable to excessive attention from a single paper, as the sketch after this list illustrates.
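To put rough numbers on that last point, here is an illustrative back-of-the-envelope calculation. Only the citation totals (roughly 70,000 and 20) echo the figures above; the citable-item counts and the 'extra citations' value are assumptions chosen purely to show the scale of the effect.

```python
def impact_factor(citations, citable_items):
    """Simplified Impact Factor: citations received in year Y to items from
    years Y-1 and Y-2, divided by the citable items published in those years."""
    return citations / citable_items

# Only the citation totals echo the figures above; item counts are invented.
journals = {
    "Nature-sized title": {"citations": 70_000, "items": 1_800},
    "small SSH title": {"citations": 20, "items": 40},
}

extra = 15  # e.g. one heavily self-referencing paper's worth of citations
for name, j in journals.items():
    before = impact_factor(j["citations"], j["items"])
    after = impact_factor(j["citations"] + extra, j["items"])
    print(f"{name}: {before:.2f} -> {after:.2f}")
# Nature-sized title: 38.89 -> 38.90  (barely moves)
# small SSH title: 0.50 -> 0.88  (up by roughly 75%)
```

The same handful of citations that is statistical noise for a large title can nearly double the metric of a small one – and, by the same token, can trigger an 'anomalous citation pattern' flag.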
Why is this important?
The perception of JCR exclusion as a 'punishment' has led to a pervasive association between anomalous citation patterns and culpability – an association that can severely damage the reputations of editors and journals alike. However, it is an association that cannot always be upheld and which, particularly in the case of citation stacking, can result in real injustice. Thomson Reuters insists that exclusion from the Journal Citation Reports is not a punishment for wrongdoing, knowing, perhaps, that it is impossible to judge culpability without a detailed investigation. As such, it is important that we increase our awareness of the complexities of the issue – not only to avoid unjustly allocating blame, but also to remind ourselves of the factors that could leave our own journals vulnerable.