The Impact Factor Defined

Daniela Ilieva-Koleva

VUZF – email:

Raya Tsvetkova

VUZF – email:

Raya Genova

VUZF – email:

Abstract: The study aims to provide an explanation and in-depth analysis of the so-called impact factor. It has become one of the most influential tools in modern academic research, although its use provokes many mixed reactions among those who study it and among academic researchers. In addition, the study aims to help young scientists, PhD students and academic researchers understand and apply the impact factor in their academic work, in order to ensure greater credibility when publishing articles and books with an academic and scientific focus.

Key words: impact factor, meaning of the impact factor, academic publications, scientific publications, scientific research.

Abstract: The following research aims to provide an explanation and detailed analysis of the impact factor. It has evolved into one of the most influential tools in modern research and academia, although its use has provoked strongly mixed reactions among investigators and academic researchers. Additionally, the research aims to help young scientists, PhD students and professors understand and implement the impact factor in their academic work when generating credible academic and scientific papers and books.

Key words: impact factor, meaning of impact factor, academic publications, scientific publications, scientific research.


Publishing in the academic world is a challenging task, given the numerous possible outlets and the consequences of choosing among them. When trying to find the best-suited outlet for an article, research paper, etc., an academic should consider many factors, and one of the most important is the impact factor.

The impact factor (IF) was first defined in 1955 by Dr. Eugene Garfield in Science [1]. In the 1960s the idea of an IF evolved into the Science Citation Index (SCI). This, consequently, prompted Thomson Reuters [2] to start generating the Journal Citation Reports (JCR) annually. The IF has since evolved into one of the most influential tools in modern research and academia. However, its utilization has provoked contradicting reactions among investigators.

This paper focuses on what the IF is, where and how it is used, as well as delves into the criticisms and solutions related to the IF. It aims not only to give a general overview of the topic, its specifics and developments, but also to provide guidance for its use by academics and other interested parties.


The Impact Factor explained

The Journal Impact Factor (JIF) or Impact Factor (IF) is used for measuring “the relative importance of a journal within its field” [3]. This relative importance is most often evaluated through the average number of citations the journal receives from articles published in all other approved academic journals, books, newspapers, conference or seminar proceedings and other documents published online or offline. Journals with a higher IF are likely to appear more often and in more “significant” end publications than those with a lower IF.

The IF can be calculated yearly, half-yearly, quarterly or monthly. The topic has become so popular that citation analysis has flourished over the past four decades. In addition, the field has its own International Society for Scientometrics and Informetrics.

The determination of the IF is considered innovative due to the uniqueness of its formula. However, as Hoeffel stresses, “the Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation.” [4] He explains that the best journals to publish in are usually the ones with the sternest acceptance procedures – and they do indeed have the biggest IF; however, their procedures and prestige existed long before the IF started playing a role in academic life. More than bringing innovation to the field, Hoeffel argues, the IF confirms the status quo. [4] The IF has therefore brought its own set of benefits and drawbacks since its establishment. Before delving further into the content and impact of the IF, its history will be reviewed to give a complete picture of how it came to be such an important part of academia.

Evolution of the Journal Impact Factor (JIF)

Envisioned in 1955 and implemented in the 1960s, the IF led to the first publication of the Journal Citation Reports (JCR) in 1972. Since then, the credibility of journals has commonly been ranked according to the data provided in those reports.

Initially, Irving H. Sher and Garfield started work on creating the IF for the purpose of choosing additional source journals for their work [5]. To achieve this, they re-sorted the previously used author citation index and turned it into a journal citation index, published in the JCR. This work taught them the importance of the “core” group of journals, which were the most highly credited and had a large number of citations; with that knowledge, they covered them in the newly formed Science Citation Index (SCI). The SCI is the property of the Institute for Scientific Information, located in Philadelphia, Pennsylvania.

However, confusing cases started to appear once the SCI was implemented. For example, in 2004 there were 6,500 articles published in the Journal of Biological Chemistry, and articles published in the Proceedings of the National Academy of Sciences received more than 300,000 citations for the year [5]. Because of the high number of citations in more popular journals, the selection of smaller journals would become nearly impossible with only this factor in place. This prompted the creation of a more sophisticated, more detailed and, more importantly, less size-discriminatory way to categorize journals – the IF. The JIF takes into account the data from the SCI; however, it also enriches and widens it in several ways, as explained below.

The popularity of the SCI and the JIF resulted in a pronounced need for the Journal Citation Reports (JCR): a need both for the placement and evaluation of journals and for researchers in search of proper outlets. Nonetheless, Garfield stresses: “even before the Journal Citation Reports (JCR) appeared, we sampled the 1969 SCI to create the first published ranking by impact factor. Today, the JCR includes every journal citation in more than 5000 journals – about 15 million citations from 1 million source items per year”. [5] The JCR and the IF continue to grow and evolve even today.

Measures of the Impact Factor

The following table serves as an example of how the IF is measured for selected biomedical journals. It provides a list of journals ranked by IF for 2004. Garfield [5] clarifies that sorting in this way helps include journals that are relatively small in terms of the number of articles published annually but manage to be influential and relevant despite their size. This eliminates the need for the more common sorting, either by number of citations (when journals are trying to show respectability) or by number of published articles (when journals show capacity and size).

It also highlights the two components of the IF: the number of articles produced and published, and the number of citations later generated by those articles [5]. The IF is computed as a ratio of:

  • The numerator: the number of citations for the chosen previous period
  • The denominator: the number of articles published in the chosen previous period

Ranked Journals by Impact Factor [78]

So for example, the impact factor for the first journal in the category can be calculated as follows:

(Number of citations) divided by (Number of articles)

or 185,513 / 2,258 = 82.158
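The calculation above can be sketched as a small function (a hypothetical helper for illustration, not part of any JCR tooling):

```python
def impact_factor(citations: int, articles: int) -> float:
    """Impact factor: citations received in the chosen period,
    divided by articles published in that same period."""
    if articles == 0:
        raise ValueError("a journal with no articles has no impact factor")
    return citations / articles

# The worked example for the first journal in the table:
print(round(impact_factor(185513, 2258), 3))  # 82.158
```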

The period considered for the calculation of the IF can vary. For example, in more dynamic fields in which rapid changes can be observed, it may be just the last year before publication. The opposite is also true – if the researcher needs information for the long-term IF of a journal, they can turn to a longer time frame. [5] Moreover, the period needs to be adjusted for other variables, which may change the need for a time-shift in perspective.

One example is the half-life, or the “number of retrospective years required to find 50% of the cited references” [5], which varies based on how long findings remain relevant in a field. Slowly changing sciences like psychology have a longer half-life than fast-evolving ones like the natural sciences, which have benefitted greatly from the development of technology. Further, citation density, or “the average number of references cited per source article” [5], can also be adjusted based on the field in which the journal operates, since density is usually very low in fields like mathematics compared with fields such as biology. Citation studies can further be adjusted for the specialty of the journal, since a very narrow specialty limits the possible volume of citations. Therefore, many factors and variables should be considered before choosing an appropriate period.
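The half-life just quoted can be illustrated with a short sketch. Given citation counts grouped by how many years back the cited articles were published (the numbers here are hypothetical), it returns the number of retrospective years needed to cover 50% of all cited references:

```python
def cited_half_life(citations_by_year: list[int]) -> int:
    """citations_by_year[0] counts citations to one-year-old articles,
    citations_by_year[1] to two-year-old articles, and so on.
    Returns the retrospective years needed to reach 50% of citations."""
    total = sum(citations_by_year)
    running = 0
    for years_back, count in enumerate(citations_by_year, start=1):
        running += count
        if running * 2 >= total:  # reached half of all citations
            return years_back
    return len(citations_by_year)

# Hypothetical fast-moving field: most citations go to recent work,
# so half of all cited references fall within the last two years.
print(cited_half_life([400, 300, 150, 100, 50]))  # 2
```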

The formula used to measure the IF, moreover, eliminates the preconception that a bigger journal always has a bigger IF. Garfield also argues that the 80/20 rule applies to journals: 80% of citations are produced by 20% of published articles. [5] This means that even with a significant increase in size, there is no guarantee of higher impact if the relative number of citations does not increase.

Analysis of the JIF

There are several benefits and drawbacks to the IF. Some of the major insufficiencies and problems related to the use of impact factors, outlined by Seglen [6], can be found in the appendix. Beyond them, however, the debate on the importance, significance and usefulness of the IF can be discussed in seven aspects, as Vanclay [7] points out. The author uses TRIF to denote the specific publication of the Thomson Reuters Impact Factor. The following points were made by him:

Aspect 1: Indication of journal quality

Many authors have researched, from differing perspectives, the question of whether the TRIF is a good indication of journal quality. On one hand, some authors [8][9][10][11][12] claim that it is consistent with overall citation counts and with the use of articles [13]. This, as supported by other authors [14][15][16], suggests that the TRIF is a rational measure of journal quality when used correctly.

However, some authors [17] outline weaknesses of this aspect. According to them, there may be an unreliable correlation between the TRIF and independent measures of evidence and quality [6][18][19][20][21]. This might be because many factors influence citation rates [22][23][24], which would render the TRIF irrelevant for determining journal quality, or because of the weak correlation between the TRIF and article rejection rates [25]. Many authors argue that the TRIF cannot indicate journal quality, since many components of quality are not measured this way. [26][27] Another criticism is that it lacks robustness against single outliers [28][29] and appears to be an incomplete and inadequate measure of quality [30].

Aspect 2: Rigour of TRIF

Secondly, weaknesses in the rigour of the TRIF are that many see it as open to manipulation [31][32][33][34][35][36] and that different editorial policies can influence the TRIF [37][38][39]. Moreover, around 18% of journals have self-citation rates exceeding 20% [40], and self-citation is significantly correlated with the TRIF [41][42][43] and contributes to fluctuations in it [44][45][46]. Another weakness is the pervasive claim that the “TRIF lacks transparency” [47].

In terms of rigour, however, despite the possibility of manipulation, there is no proof that manipulation is widespread [48]; further, editorial citations contribute little and sway the results of the TRIF only slightly [49].

Aspect 3: Normalization

A significant problem of the TRIF is that it does not account for differences among disciplines in their citation norms. This is why it warrants normalization [50].

The IF can be adjusted for these differences, in other words normalized, by refining the formula that defines the IF. For that purpose, two more variables are added to the equation: the total number of citations in all journals in the field, and the total number of articles published in the journals of the same discipline. [51] The new equation for a journal X in a given year takes this form:

(Total number of citations for Journal X / Total number of articles in Journal X)

divided by

(Total number of citations to the journals of the particular discipline / Total number of articles in the journals of the particular discipline)

Therefore, with this adjustment, an important negative aspect of the IF is addressed, paving the way for the IF to be more accurate and reliable.
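A minimal sketch of the normalization described in [51], with hypothetical figures: the journal's plain IF is divided by the discipline-wide IF, so a value above 1 means the journal outperforms its field's average.

```python
def normalized_impact_factor(journal_citations: int, journal_articles: int,
                             field_citations: int, field_articles: int) -> float:
    """Journal IF divided by the discipline-wide IF,
    per the adjusted formula above."""
    journal_if = journal_citations / journal_articles
    field_if = field_citations / field_articles
    return journal_if / field_if

# Hypothetical journal with IF 4.0 in a field whose average IF is 2.0:
print(normalized_impact_factor(400, 100, 50000, 25000))  # 2.0
```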

Aspect 4: Timeframe

Many authors point to the standard two-year period of the TRIF as a weakness, because it is considered insufficient to capture the developments and real trends in academia and research publishing [52][53]. Moreover, Vanclay [54] stresses that differences among disciplines and fields of study must be reflected in the lengths of the periods used.

However, other authors argue that a 5-year impact factor may actually follow a similar [55] or complementary [56] pattern to the 2-year TRIF, undermining this argument.

Aspect 5: Distribution and statistical assumption

From a statistical viewpoint, because citation counts are not normally distributed [57], journals cannot be ranked with great precision [58] in the TRIF. Even for articles written by the same author, citedness may vary significantly, creating a skewness that is difficult to ignore and nearly impossible to predict. [59]

On the other hand, it is possible to estimate standard errors [60], which would make statistical assumptions and predictions more plausible.

Moreover, there are no statistics to inform the significance of the findings and the ranking of journals [61].

Aspect 6: Database issues

Database problems may be a major source of bias in classification and ranking based on the TRIF [62]. Biases come in different forms, all stemming from a common cause: the difficulty of cataloguing correctly and maintaining a sophisticated and comprehensive database. Some of these biases may arise from the limited number of journals scanned – a language bias [63][64][65] – from insular citing patterns [66], or from errors arising from surname conventions [67][68].

Another weakness is that journals with similar titles may be incorrectly merged into a single TRIF [69]. Studies show that somewhere between 25% and 35% of all citations contain errors – a vast and difficult problem to overcome [70]. According to Neuhaus et al. [71], not all available information on the volume of citations can be seen as a reliable indication of the true data in the academic world.

Aspect 7: Unintended consequences

Finally, there are some unintended consequences of the TRIF. It sometimes threatens the viability of specialist journals [72] and disciplines [73]: journals with a higher IF are primarily selected, while those with a lower IF are rarely selected or ignored completely, even though the lower-ranked journals may still contain valuable information. The fact that lower-ranking journals are rarely chosen by prominent scholars may lead to a downward spiral, in which fewer high-quality articles are published in them, leading to a lower citation rate, and so on.

Another unintended consequence is that the TRIF may “distort publication patterns away from prime audience” [74] and may shift the focus and objectives of the editorial board towards increasing the journal’s TRIF instead of more diverse indicators of quality [75].

Improving the Impact Factor

Many of the issues listed and described above have been resolved in another Thomson Scientific database – the Journal Performance Indicators (JPI). Garfield [5] explains that the difference between the databases is that the JPI links every source item to its unique set of citations, which makes the calculations more accurate.

Still, most authors and libraries to this day choose journals based on the JIF and associate a high JIF with prestige. [5] This means that, instead of trying to implement a wholly new system, it is worthwhile to upgrade the current one.

It is worth mentioning that the IF has found a role in which it improves the ranking of journals as part of a larger analysis. The Thomson Reuters [2] Journal Selection Process lists the following indexes as highest in quality of evaluation and selection, based on consistency and the rigor of the selection process [76]:

  • Science Citation Index Expanded (SCIE);
  • Social Sciences Citation Index (SSCI);
  • Arts & Humanities Citation Index (AHCI).

Their evaluation process is so highly regarded because it is rigorous and includes both qualitative and quantitative factors. The process begins with basic publishing standards (such as peer review, ethical practices, format and language), moves on to factors such as editorial content (the convergence of the journal’s goal and its content) and international focus and diversity of authors and staff, and ends with citation analysis, which uses the IF. [76] In this way the impact factor is part of a larger and more complex analysis, making it a tool that can sharpen the view of a journal without limiting the analysis to citations alone.

Further, in the search for improvements to the impact factor, one suggested development is the “audience factor”, which focuses on the citing entities and computes a weighted mean of the citations in a journal. Even with this completely new outlook, the audience factor correlates highly with the IF. [77]

As the industry develops, new ideas and improvements are sure to follow and find proper use in an established and needed system for evaluating journals.



This paper explored the essence, application and development of the IF, along with its various advantages and drawbacks. This method of evaluating journals has shown promise and has been adopted widely for various purposes – from librarians deciding which journals to stock, to authors in search of the best place to publish, to readers and investors.

As discussed above, the imperfections of this tool, unique in its method and approach, have been found to undermine its validity and value. However, tweaks to the formula, as well as its incorporation into larger mechanisms for evaluating journal quality, keep it relevant and useful.

Therefore, awareness of the purpose and use of the IF and its meaning for all interested parties is crucial for everyone involved in publishing and academia.



[1] Garfield, E. (1955). Citation indexes to science: a new dimension in documentation through association of ideas. Science, [online] 122(3159), pp. 108-111.

[2] Thomson Reuters (2011). Journal Citation Reports. Thomson Reuters Products and Services. < >, Retrieved on 10.10.2016.

[3] (2016). Journal Impact Factor. <>

[4] Hoeffel MD, C. (1998). Journal Impact Factors. Allergy, 53(12), p.1225.

[5] Garfield, E. (2006). The History and Meaning of the Journal Impact Factor. JAMA: The Journal of the American Medical Association, 295(1), pp.90–93.

[6] Seglen, P. (1997). Why the Impact Factor of Journals Should Not Be Used for Evaluating Research. BMJ: British Medical Journal, [online] 314(7079), pp. 498-502.

[7] Vanclay, J. K. (2011). Impact factor: outdated artefact or stepping-stone to journal certification?. Scientometrics, 92(2), pp. 211-238.

[8] Hansen, H., Henriksen, J. (1997). How well does journal ‘impact’ work in the assessment of papers on clinical physiology and nuclear medicine? Clinical Physiology, vol. 17(4), pp. 409-418.

[9] Callaham, M., Wears, R., Weber, E. (2002). Journal Prestige, Publication Bias, and Other Characteristics Associated with Citation of Published Studies in Peer-Reviewed Journals. JAMA, vol. 287(21), pp. 2847-2850.

[10] Chapman, S., Ragg, M., McGeechan, K. (2009). Citation bias in reported smoking prevalence in people with schizophrenia. Australian and New Zealand Journal of Psychiatry, vol. 43(3), pp. 277-282.

[11] Haslam, N., Koval, P. (2010). Predicting long-term citation impact of articles in social and personality psychology. Psychological Reports, vol. 106(3), pp. 891-900.

[12] Hunt, G., Cleary, M., Walter, G. (2010). Psychiatry and the Hirsch h-index: The Relationship between Journal Impact Factors and Accrued Citations. Harvard Review of Psychiatry, vol. 18(4), pp. 207-219.

[13] Wulff, J., Nixon, N. (2004). Quality markers and use of electronic journals in an academic health sciences library. Journal of the Medical Library Association, vol. 92(3), pp. 315-322.

[14] Schoonbaert, D., Roelants, G. (1996). Citation analysis for measuring the value of scientific publications: Quality assessment tool or comedy of errors?. Tropical Medicine and International Health, vol. 1, pp. 739-752.

[15] Cartwright, V., McGhee, C. (2005). Ophthalmology and vision science research. Part 1: Understanding and using journal impact factors and citation indices. Journal of Cataract and Refractive Surgery, vol. 31(10): 1999-2007.

[16] Abramo, G., D’Angelo, C., Di Costa, F. (2010). Citations versus journal impact factor as proxy of quality: Could the latter ever be preferable?. Scientometrics, vol.84(3), pp. 821-833.

[17] Bain, C., Myles, P. (2005). Relationship between journal impact factor and levels of evidence in anaesthesia. Anaesthesia and Intensive Care, vol. 33(5), pp. 567-570.

[18] Seglen, P. (1989). Use of citation analysis and other bibliometric methods in evaluation of the quality of research [Bruk av siteringsanalyse og andre bibliometriske metoder i evaluering av forskningskvalitet]. Tidsskrift for den Norske laegeforening, vol. 109(31), pp. 3229-3224.

[19] Woolgar, S. (1991). Beyond the citation debate: towards a sociology of measurement technologies and their use in science policy. Science and Public Policy, vol.18: 319-26.

[20] Bath, F., Owen, V., Bath, P. (1998). Quality of full and final publications reporting acute stroke trials: A systematic review. Stroke, vol. 29(10), pp. 2203-2210.

[21] Schumm, W. (2010). A comparison of citations across multidisciplinary psychology journals: A case study of two independent journals. Psychological Reports, vol. 106, pp. 314-322.

[22] Knothe, G. (2006). Comparative citation analysis of duplicate or highly related publications. Journal of the American Society for Information Science and Technology, vol. 57(13), pp. 1830-1839.

[23] Calver, M., Bradley, J. (2010). Patterns of citations of open access and non-open access conservation biology journal papers and book chapters. Conservation Biology, vol. 24(3), pp. 872-880.

[24] Perneger, T. (2009). Citation analysis of identical consensus statements revealed journal-related bias. Journal of clinical epidemiology, vol. 63(6), pp. 660-664.

[25] Kurmis, A., Kurmis, T. (2006). Exploring the relationship between impact factor and manuscript rejection rates in radiologic journals. Academic Radiology, vol. 13(1), pp. 77-83.

[26] Bollen, J., Van de Sompel, H., Hagberg, A., Chute, R. (2009). A principal component analysis of 39 scientific impact measures. PLoS ONE, vol. 4(6): e6022

[28] Rousseau, R. (2002). Journal evaluation: technical and practical issues. Library Trends, vol. 50: 418.

[29] Metze, K. (2010). Bureaucrats, researchers, editors and the impact factor – a vicious circle that is detrimental to science. Clinics, vol. 65(10), pp. 937-940.

[30] Coleman, A. (2007). Assessing the value of a journal beyond the impact factor. Journal of the American Society for Information Science and Technology, vol. 58(8), pp. 1148-1161.

[31] Rousseau, R., Van Hooydonk, G. (1996). Journal production and journal impact factors. Journal of the American Society for Information Science, vol. 47, pp. 775-780.

[32] Garfield, E. (1999). Journal impact factor: A brief review. Canadian Medical Association Journal, vol. 161, pp. 979-980.

[33] Kurmis, A. (2003). Understanding the limitations of the journal impact factor. The journal of Bone and Joint surgery, 85(12), pp.2449-2454.

[34] Yu, G., Wang, L. (2007). The self-cited rate of scientific journals and the manipulation of their impact factors. Scientometrics, vol.73, pp. 321-330.

[35] Falagas, M., Alexiou, V. (2008). The top-ten in journal impact factor manipulation. Archivum Immunologiae et Therapiae Experimentalis, vol. 56(4): 223-226.

[36] Archambault, E., Lariviere, V. (2009). History of the journal impact factor: Contingencies and consequences. Scientometrics, vol. 79(3), pp. 635-649.

[37] Moed, H., Van Leeuwen, ThN., Reedijk, J. (1996). A critical analysis of the journal impact factors of Angewandte chemie and the Journal of the American Chemical Society: Inaccuracies in published impact factors based on overall citations only. Scientometrics, vol.37, pp. 105-116.

[38] Scully, C., Lodge, H. (2005). Impact factors and their significance; overrated or misused?. British Dental Journal, vol. 198, pp. 391-393.

[39] Foo, J. (2009) The retrospective analysis of bibliographical trends for nine biomedical engineering journals from 1999 to 2007. Annals of Biomedical Engineering, vol. 37(7): 1474-1481.

[40] Ha, T., Tan, S., Soo, K. (2006). The journal impact factor: too much of an impact? Ann Acad Med Singapore, vol. 35: 911–916.

[41] Straub, D., Anderson, C. (2009). Journal self-citation VI: Forced journal self-citation – Common, appropriate, ethical? Communications of the Association for Information Systems, 25: 57-66.

[42] Kurmis, T., Kurmis A. (2010). Self-citation rates among medical imaging journals and a possible association with impact factor. Radiography, vol.16, pp. 21-25.

[43] Mehrad, J., Goltaji, M. (2010). Correlation between journal self-citation with impact factor for the scientific publications in humanities published between 2001 and 2007 based on Persian journal citation report generated by Islamic science citation database. Information Sciences and Technology, vol. 25, pp.189-206.

[44] Leutner, D., Wirth, J. (2007). As mirrored by the journal: Themes and trends of educational psychology in the years 2005 to 2007. Zeitschrift fur Padagogische Psychologie 21: 195-202.

[45] Moller, J., Retelsdorf, J., Sudkamp, A. (2010). Editorial: As mirrored by the journal: Themes and trends of educational psychology in the years 2008 to 2010 [Editorial: Im Spiegel der Zeitschrift: Themen und Trends der Pädagogischen Psychologie in den Jahren 2008 bis 2010]. Zeitschrift fur Padagogische Psychologie, vol. 24, pp. 163-169.

[46] Campanario, J. (2011b). Large increases and decreases in journal impact factors in only one year: The effect of journal self-citations. Journal of the American Society for Information Science and Technology, vol. 62(2), pp. 230-235.

[47] Van Driel, M., De Maeseneer, J., De Sutter, A., De Bacquer, D., De Backer, G., Christiaens, T. (2008). How scientific is the assessment of the quality of scientific output using the journal impact factor? [Hoe wetenschappelijk is het beoordelen van wetenschappelijk werk aan de hand van impactfactoren van tijdschriften?]. Tijdschrift voor Geneeskunde, vol. 64: 471-476.

[48] Andrade, A., González-Jonte, R., Campanario, J. (2009). Journals that increase their impact factor at least fourfold in a few years: The role of journal self-citations. Scientometrics, 80(2), pp. 515-528.

[49] Campanario, J., Carretero, J., Marangon, V., Molina, A., Ros, G. (2011). Effect on the journal impact factor of the number and document type of citing records: A wide-scale study. Scientometrics, vol. 87(1), pp. 75-84.

[50] Ugolini, D., Bogliolo, A., Parodi, S., Casilli, C., Santi, L. (1997). Assessing research productivity in an oncology research institute: The role of the documentation center. Bulletin of the Medical Library Association, vol. 85: 33-38.

[51] Owlia, P., Vasei, M., Goliaei, B., Nassiri, I. (2011). Normalized impact factor (NIF): An adjusted method for calculating the citation rate of biomedical journals. Journal of Biomedical Informatics, vol. 44, pp. 216-220.

[52] Van Leeuwen, T., Moed, H., Reedijk, J. (1999). Critical comments on Institute for Scientific Information impact factors: a sample of inorganic molecular chemistry journals. Journal of Information Science, vol. 25(6), pp. 189-198.

[53] McGarty, C. (2000). The citation impact factor in social psychology: a bad statistic that encourages bad science. Current Research in Social Psychology, vol. 5, pp. 1-16.

[54] Vanclay, J. (2009). Bias in the journal impact factor. Scientometrics, vol.78, pp. 3-12.

[55] Campanario, J. (2011a). Empirical study of journal impact factors obtained using the classical two-year citation window versus a five-year citation window. Scientometrics, vol. 87, pp. 189-204.

[56] Jacso, P. (2009). Five-year impact factor data in the Journal Citation Reports. Online Information Review, vol. 33, pp. 603-614.

[57] Weale, A., Bailey, M., Lear, P. (2004). The level of non-citation of articles within a journal as a measure of quality: A comparison to the impact factor. BMC Medical Research Methodology, vol.4, pp. 14.

[58] Greenwood, D. (2007). Reliability of journal impact factor rankings. BMC Medical Research Methodology, vol. 7:48.

[59] Seglen, P. (1992) The Skewness of Science. Journal of the American Society for Information Science, Oct1992, Vol. 43 Issue 9, pp.628-638.

[60] Schubert, A., Glanzel, W. (1983). Statistical reliability of comparisons based on the citation impact of scientific publications. Scientometrics, vol. 5, pp. 59-73.

[61] Leydesdorff, L., Opthof, T. (2010). Scopus’s source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology, vol. 61, pp. 2365-2369.

[62] Tam, C., Tan, S., Soo, K. (2006) The journal impact factor: Too much of an impact? Annals of the Academy of Medicine Singapore, vol.35: 911-916.

[63] Kotiaho, J., Tomkins, J., Simmons, L. (1999). Unfamiliar citations breed mistakes. Nature, vol. 400: 307.

[64] Schopfel, J., Prost, H. (2009). Comparison of SCImago Journal Rank Indicator (SJR) with JCR journal impact factor (IF) for French journals [Le JCR facteur d’impact (IF) et le SCImago Journal Rank Indicator (SJR) des revues françaises : une étude comparative]. Psychologie Francaise, vol. 54, pp. 287-305.

[65] Poomkottayil, D., Bornstein, M., Sendi, P. (2011). Lost in translation: The impact of publication language on citation frequency in the scientific dental literature. Swiss Medical Weekly, vol. 141: w13148.

[66] Jacobs, G., Ip, B. (2005). Ring fenced research: The case of computer-assisted learning in health sciences. British Journal of Educational Technology, vol. 36(3): 361-377.

[67] Meneghini, R., Packer, A. L., Nassi-Calo, L. (2008). Articles by Latin American authors in prestigious journals have fewer citations. PLoS ONE, vol. 3(11): e3804.

[68] Kumar, V., Upadhyay, S., Medhi, B. (2009). Impact of the impact factor in biomedical research: Its use and misuse. Singapore Medical Journal, vol. 50(8), pp. 752-755.

[69] Lange, L. (2002). The impact factor as a phantom: Is there a self-fulfilling prophecy effect of impact? Journal of Documentation, vol. 58(2), pp. 175-184.

[70] Todd, P., Guest, J., Lu, J., Chou, L. (2010). One in four citations in marine biology papers is inappropriate. Marine Ecology Progress Series, vol.408: 299-303.

[71] Neuhaus, C., Marx, W., Daniel, H-D. (2009). The publication and citation impact profiles of Angewandte Chemie and the journal of the American Chemical Society based on the sections of chemical abstracts: A case study on the limitations of the journal impact factor. Journal of the American Society for Information Science and Technology, vol.60, pp. 176-183.

[72] Zetterstrom, R. (1999). Impact factor and the future of Acta Paediatrica and other European medical journals. Acta Paediatrica, International Journal of Paediatrics, vol.88, pp. 793-796.

[73] Brown, H. (2007). How impact factors changed medical publishing – and science. BMJ, vol. 334, pp. 561–64.

[74] Postma, E. (2007). Inflated impact factors? The true impact of evolutionary papers in non-evolutionary journals. PLoS ONE, vol. 2(10): e999.

[75] Ketcham, C. (2008). The proper use of citation data in journal management. Archivum Immunologiae et Therapiae Experimentalis, vol. 56, pp. 357-362.

[76] Testa, J. (2016). The Thomson Reuters Journal Selection Process – IP & Science – Thomson Reuters. [online]

[77] Zitt, M. and Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the Association for Information Science and Technology, [online] 59(11), pp.1856-1860.

[78] Anon (2016) IDEAS/RePEc Simple Impact Factors for Journals [online] Available from:


Appendix 1: Problems associated with the use of journal impact factors [6]

Appendix 2: Useful links

Electronic Journal Submission form:

Websites for impact evaluation:

  4. Submissions to the European Journal of Marketing are made using ScholarOne Manuscripts, the online submission and peer review system. Registration and access is available at Guidelines:

A ranking of marketing journals:


Dr. Daniela Ilieva-Koleva, VUZF University, Associate Professor,

Raya Tsvetkova, VUZF University, Master Student,

Raya Genova, VUZF University, Student at the University of Sheffield Program,

Rhetoric and Communications e-Journal, Issue 25, November 2016
