
Editorial Article

Bandwagon of impact factor for journal scientometrics

Salman Yousuf Guraya, FRCS

Department of Surgery, College of Medicine Taibah University, Almadinah Almunawwarah, Kingdom of Saudi Arabia

Received 1 April 2013; revised 5 April 2013; accepted 10 April 2013

Introduction

Bibliographic literature has developed a diverse range of tools for evaluating research, mainly based on various ways of counting citations.1 In order to prioritize the choice of quality information sources, researchers and scientists are in need of reliable decision aids. The 'impact factor' (IF) is the most commonly used assessment aid for deciding which journals should receive a scholarly submission or attention from the research readership. It is a journal-focused indicator that quantifies the readership a journal attracts. It does not necessarily indicate quality, but a high impact factor indicates a probability of high quality. As an arithmetic mean of data originating from all authors of a journal, with a high variance, it is inapplicable for evaluating individual scientists.2

Even recent methods, such as the h-index3 and the g-index,4 that attempt to measure both the scientific productivity and the apparent impact of a researcher depend on the researcher's citation record over time. The IF has extensively penetrated academia and academic publishing, which has provoked significant modifications in publishing strategies by academic publishers and editors5 and in authors' publishing behavior.6

Creating a higher number of mutually referenced papers from the same body of evidence, timing publications to have maximum exposure for accruing citations, and increasing the number of citation-attracting review papers have become common practices to manipulate IFs. Looking into these crucial issues, I have described the merits and demerits of IF, with details of misapplication and manipulation techniques.

What is the journal impact factor and how is it calculated?

The field of scholarly publications has witnessed enormous growth in the recent past.7

To quantify the readership, a journal's IF is used as a universally agreed metric. First introduced by Garfield, IF is defined as the number of citations within a given year to items published by a journal in the preceding two years, divided by the number of citable items published by the journal in those two years.8 It is the average number of citations a paper in the journal attracts in the two years following its publication.
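For concreteness, the two-year definition above can be written as a simple ratio; the year labels and the numbers in the worked example are illustrative only:

$$\mathrm{IF}_{2012} = \frac{C_{2012}(2010) + C_{2012}(2011)}{N_{2010} + N_{2011}}$$

where C_{2012}(y) is the number of citations received in 2012 by items the journal published in year y, and N_y is the number of citable items published in year y. For instance, a journal that published 100 citable items over 2010-2011 and whose items attracted 250 citations in 2012 would have a 2012 IF of 250/100 = 2.5.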

Thomson Reuters' Web of Science (WoS) database has been the standard automated tool to identify article citation counts and conduct citation analysis. Google Scholar and subscription-based Scopus are other citation databases available since 2004.9 Thomson Reuters' Journal Citation Reports (JCR) is an annual subscription-based online citation data report for any journal included in its Web of Science database, which includes the Science Citation Index (SCI) and the Social Sciences Citation Index, covering almost 12,000 active journals and over 3,000 proceedings volumes.10 This is only about a third of the scientific serials listed in Ulrichsweb, which is incomplete itself. The JCR draws on citation reports of more than 8,000 journals from more than 3,300 publishers in over 66 countries.11 Citations are compiled annually, and each unique article-to-article link is counted as a citation.

IF has been shown to have the following pitfalls:

(a) Time window of impact factor: A relatively short time window of two years is used to calculate IF, although there are many disciplines in which citations to their papers do not reach a peak during this short time. The IF of a journal is determined by its highly cited papers because the distribution of citations across papers is always skewed. Olson, using a five-year window, concluded that the majority of articles in his study were cited for the first time three years after publication.12

Another study reported citations of 31% and 69% for two-year and five-year windows, respectively.13 Thus the mean IFs of journals based on five-year windows are logically substantially greater than the corresponding means for two-year IFs.

(b) Self citations and active manipulations of impact factor: Citations are viewed as the 'currency' of modern science, and their analysis has become increasingly important for journal editors, authors and readers.14

Authors are often tempted to liberally and inappropriately cite their own previous publications in an attempt to raise their scientific ranking among the researcher community. On the same note, in 1997 Leukemia was accused of forcing authors to cite more articles from Leukemia.15

(c) Coverage and English-language preference by the SCI data: The SCI covers less than one fourth of peer-reviewed journals worldwide and exhibits a preference for English-language journals.16 Most of the publications included in WoS databases until recently had been from English-language journals. This resulted in severe gaps in WoS databases, in comparison with other databases, for citations of papers in non-English-language journals.17 The accuracy of how citations are collected at ISI significantly influences the final IF rankings and statistics. A fact-finding inquiry by Nature suggested a significant undercount of 'citable' items in Nature Genetics in 1996 and an erroneous inclusion of 'citable' items other than those defined by ISI itself for Nature in 2000.18

(d) Impact factor is an arithmetic measure of the journal and cannot predict the quality of articles: The majority of Nature articles published in 2002 and 2003 received under 20 citations in 2004; 2.7% of the papers received over 100 citations, with a record holder of 522 citations.19 In 2009, a single paper attracting 5,624 citations pushed the IF of Acta Crystallographica A up from under 3 to 49.93, with all other papers of the journal having attracted three or fewer citations.20 Such variation renders attempts to use IF for the evaluation of single papers or authors absurd; a numerical sketch of this skew appears after this list.

(e) Impact factor is an incomplete journal-focused metric: Currently, researchers and publishers are using the growing databases WoS, SciVerse Scopus and Google Scholar, which are not comprehensive in contents and computed data. The extent of incompleteness can vary mainly depending on, among other factors, the discipline, location and language of the scientist.21 This reflects a big flaw in calculating the citations for IF, necessitating more comprehensive scientometrics for evaluating a journal's scientific strength.

(f) Subject areas and categories of articles: Articles in rapidly growing disciplines and subjects are cited more often than those in more traditional research fields, in particular theoretical and mathematical areas.22 This diversity leads to the wide variance of IFs across subject categories and adversely affects the underrepresented fields of research. A given research field is often additionally cited by related fields. Hence, clinical medicine draws heavily on basic science, resulting in three to five times more citations of basic medicine than its clinical counterpart. Consequently, basic science journals have a higher IF than clinical science journals, which does not reflect the real essence of the scientific citedness of the research. Reviews are more likely to be cited than original research papers. Journals publishing a substantial number of review articles consequently attract more citations and thus are likely to achieve a higher IF.23

(g) Retracted articles: Invalid articles such as retracted ones may still continue to be cited by other researchers as valid work. Steen24 examined all retractions from MEDLINE from 1966 through 1997 and found that many papers still cited retracted papers as valid research long after the retraction notice. Consequently, retracted and invalid articles pose significant bias in the calculation of the IF of journals.
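To make pitfall (d) concrete, the minimal Python sketch below uses hypothetical numbers, not the actual Acta Crystallographica A data, to show how a single extraordinarily cited paper inflates a mean-based figure such as the IF while the typical paper remains almost uncited:

```python
from statistics import median

# Hypothetical journal: 100 citable items, one runaway paper and 99 papers
# with three citations each (illustrative numbers, not the real data).
citations_per_paper = [5624] + [3] * 99

# The IF is essentially an arithmetic mean, so the outlier dominates it.
mean_based_figure = sum(citations_per_paper) / len(citations_per_paper)
print(f"mean-based figure: {mean_based_figure:.2f}")         # 59.21
print(f"median citations:  {median(citations_per_paper)}")   # 3
```

The mean lands near 59 even though 99 of the 100 papers received only three citations each, which is exactly the skew described above.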

Alternative journal scientometrics

Based on the outlined issues in calculating the IF of journals, a growing body of researchers has suggested various remedies. Asai found that more accurate statistics could be calculated if the period count is based on months rather than a year.25 Accordingly, he proposed an Adjusted Impact Factor to count a weighted sum of citations per month over a time period of four years. Glänzel and Schoepflin reported that a three-year citation window was a good compromise between fast-growing disciplines and slowly aging theories.26 Hirst introduced the Disciplinary Impact Factor (DIF) to overcome the subject bias.27 It is based on the average number of times a journal was cited in a sub-field rather than in the entire SCI database. For the assessment of individual authors, author-focused metrics may be employed, which are calculated on the basis of citations of only the author to be evaluated. For almost every letter of the alphabet, a citation-based index has been proposed. Of those a-, b-, c-, y- and z-indices, some of them admittedly very new, only the h-index has gained widespread use.28
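As an illustration of the idea behind Hirst's DIF, the sketch below counts only citations that originate from journals in a chosen sub-field before normalizing by the number of citable items; the function name, data layout and journal names are hypothetical and are not taken from Hirst's original specification:

```python
def disciplinary_impact_factor(citations, subfield_journals, citable_items):
    """citations: (citing_journal, cited_item) pairs received by the journal;
    subfield_journals: journal names defining the sub-field;
    citable_items: citable items the journal published in the citation window."""
    # Keep only citations coming from within the sub-field, then average them.
    in_field = sum(1 for citing_journal, _ in citations
                   if citing_journal in subfield_journals)
    return in_field / citable_items

surgery_subfield = {"Br J Surg", "Ann Surg", "World J Surg"}
received = [("Br J Surg", "paper-1"), ("Nature", "paper-1"), ("Ann Surg", "paper-2")]
print(disciplinary_impact_factor(received, surgery_subfield, citable_items=2))  # 1.0
```

Restricting the numerator to within-field citations is what counteracts the subject bias described in pitfall (f).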

H index and its applications

In 2005 Hirsch proposed that a scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each. It is probably the simplest author-focused index, defined as the number of papers of an author with citation number ≥ h. To get a higher h index, an individual needs at least 2h + 1 extra citations.29 For example, to increase the index from 4 to 5, at least 9 citations are needed. The higher the h index, the more citations are needed to increase it. This means that the difference between higher h index values (25 and 26, for example) is much greater than between lower values (4 and 5, for example). The h index can be employed to measure the research output of scientific institutions30 and countries.31 Its only disadvantage is for younger scientists with lower publication numbers, but it is at least based on the author's own publications. Since it can easily be manipulated by unethical self-citations, Schreiber has rightly suggested excluding self-citations from its calculation and using 'the honest h index (hh)'.32
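A minimal Python sketch of the h index as defined above; the worst-case example in the comments mirrors the 2h + 1 arithmetic from the text, with hypothetical citation counts:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank        # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Worst case for moving from h = 4 to h = 5: four papers sitting at exactly
# four citations and a fifth paper with none. Reaching h = 5 then takes
# 4 * 1 + 5 = 9 extra citations, i.e. 2h + 1.
print(h_index([4, 4, 4, 4, 0]))   # 4
print(h_index([5, 5, 5, 5, 5]))   # 5
```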


Conclusion

The IF cannot assess the quality of individual articles, due to the qualitative variety of citations derived and computed from a journal. For the evaluation of individual researchers, journal-focused metrics are inapplicable; author-focused metrics, such as the h index, are to be used. Owing to the major flaws and inaccuracies of IF, researchers and publishers have to seek a more reliable and accurate measure of journal scientometrics. Currently, despite significant criticism, IF remains the most popular quantitative metric of journals.

References

1. Garfield E. The history and meaning of the journal impact factor. JAMA 2006; 295(1): 90-93.
2. Krell F-T. The journal impact factor as a performance indicator. Eur Sci Editing 2012; 38: 3-5.
3. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005; 102(46): 16569.
4. Egghe L. Theory and practise of the g-index. Scientometrics 2006; 69(1): 131-152.
5. Brown H. How impact factors changed medical publishing--and science. BMJ 2007; 334(7593): 561.
6. Lawrence PA. The mismeasurement of science. Curr Biol 2007; 17(15): R583-R585.
7. Guraya SY. Journal of Taibah University Medical Sciences; Journey to Success. J Taibah Univ Med Sci 2011; 6(2): 57-60.
8. Garfield E. Citation indexes for science. A new dimension in documentation through association of ideas. Int J Epidemiol 2006; 35(5): 1123-1127.
9. MENTIONED I. The history and meaning of the journal impact factor 2006.
10. Garfield E. The meaning of the impact factor. Revista internacional de psicología clínica y de la salud. Int J Clin Health Psychol 2003; 3(2): 363-369.
11. Vanclay JK. Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics 2012; 92(2): 211-238.
12. Olson JE. Top-25-business-school professors rate journals in operations management and related fields. Interfaces 2005; 35(4): 323-338.
13. Stonebraker JS, Gil E, Kirkwood CW, Handfield RB. Impact factor as a metric to assess journals where OM research is published. J Operations Manag 2012; 30(1): 24-43.
14. Smith DR. Impact factors, scientometrics and the history of citation-based research. Scientometrics 2012; 92(2): 419-427.
15. Smith R. Journal accused of manipulating impact factor. BMJ 1997; 314(7079): 461.
16. Moed HF, Burger W, Frankfort J, Van Raan AF. The use of bibliometric data for the measurement of university research performance. Res Policy 1985; 14(3): 131-149.
17. Sangwal K. Citation and impact factor distributions of scientific journals published in individual countries. J Informetrics 2013; 7(2): 487-504.
18. Adam D. Citation analysis: the counting house. Nature 2002; 415(6873): 726-729.
19. Campbell P. Escape from the impact factor. Ethics Sci Environ Politics 2008; 8(1): 5-7.
20. Dimitrov JD, Kaveri SV, Bayry J. Metrics: journal's impact factor skewed by a single paper. Nature 2010; 466(7303): 179.