Citation: criterion and its measurements
Identifying high-quality science is necessary for science to progress...
How does one measure the quality of science? The question is not rhetorical; it is extremely relevant to promotion committees, funding agencies, national academies and politicians, all of whom need a means by which to recognize and reward good research and good researchers. Identifying high-quality science is necessary for science to progress, but measuring quality becomes even more important in a time when individual scientists and entire research fields increasingly compete for limited amounts of money. The most obvious measure available is the bibliographic record of a scientist or research institute—that is, the number and impact of their publications.
Purpose and importance of Citation
In all types of scholarly and research writing it is necessary to document the source works that underpin particular concepts, positions, propositions and arguments with citations. These citations serve a number of purposes:
Help readers identify and relocate the source work
Readers often want to relocate a work you have cited, either to verify the information or to learn more about the issues and topics it addresses. It is important that readers be able to relocate your source works easily and efficiently from the information included in your citations (see the 'Citation Structure' topic for details), using the sources available to them, which may or may not be the same as the sources available to you.
Provide evidence that the position is well-researched
Scholarly writing is grounded in prior research. Citations allow you to demonstrate that your position or argument is thoroughly researched and that you have referenced, or addressed, the critical authorities relevant to the issues.
Give credit to the author of an original concept or theory presented
Giving proper attribution to those whose thoughts, words, and ideas you use is an important concept in scholarly writing. For these reasons, it is important to adopt habits of collecting the bibliographic information on source works necessary for correct citations in an organized and thorough manner.
Misuse of Impact Factors
- The impact factor is often misused to predict the importance of an individual publication based on where it was published. This does not work well, since a small number of publications are cited much more than the majority: for example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, so the significance of any one publication is typically lower than the journal's overall figure suggests. Because the impact factor averages over all articles, it underestimates the citations of the most-cited papers while exaggerating the citation counts of the average publication.
- Academic reviewers involved in programmatic evaluations, particularly those for doctoral degree granting institutions, often turn to ISI's proprietary IF listing of journals in determining scholarly output. This builds in a bias which automatically undervalues some types of research and distorts the total contribution each faculty member makes.
- The absolute value of an impact factor is meaningless. A journal with an IF of 2 would not be very impressive in Microbiology, while it would be in Oceanography. Such values are nonetheless sometimes advertised by scientific publishers.
- The comparison of impact factors between different fields is invalid. Yet such comparisons have been widely used for the evaluation of not merely journals, but of scientists and of university departments. It is not possible to say, for example, that a department whose publications have an average IF below 2 is low-level. This would not make sense for Mechanical Engineering, where only two review journals attain such a value.
- Outside the sciences, impact factors are relevant for fields that have a similar publication pattern to the sciences (such as economics), where research publications are almost always journal articles that cite other journal articles. They are not relevant for literature, where the most important publications are books citing other books. Therefore, Thomson Scientific does not publish a JCR for the humanities. Nor are they relevant for many areas of computer science, where the majority of the important publications appear in refereed conference proceedings and cite other conference proceedings.
- Even though they are applied this way in practice, impact factors cannot correctly be the only criterion libraries consider in selecting journals. The local usefulness of the journal is at least equally important, as is whether or not an institution's faculty member is editor of the journal or on its editorial review board.
- Though the impact factor was originally intended as an objective measure of the reputability of a journal (Garfield), it is now being increasingly applied to measure the productivity of scientists. The way it is customarily used is to examine the impact factors of the journals in which the scientist's articles have been published. This has obvious appeal for an academic administrator who knows neither the subject nor the journals.
- The absolute number of researchers, the average number of authors on each paper, and the nature of results in different research areas, as well as variations in citation habits between different disciplines, particularly the number of citations in each paper, all combine to make impact factors between different groups of scientists incommensurable. Generally, for example, medical journals have higher impact factors than mathematical journals and engineering journals. This limitation is accepted by the publishers; it has never been claimed that they are useful between fields, and such a use is an indication of misunderstanding.
- HEFCE was urged by the Parliament of the United Kingdom Committee on Science and Technology to remind Research Assessment Exercise (RAE) panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published.
Citations in supplementary information are invisible

Frank Seeber
Fachbereich Biologie/Parasitologie, Philipps-Universität Marburg, Karl-von-Frisch-Strasse 8, 35032 Marburg, Germany
I would like to draw attention to a substantial drawback in publishing supporting scientific data online, in supplements to the printed research paper, usually because of space limitations. Unfortunately, the additional citations in this supplementary information are invisible to those services that rely on citations as a measure of the 'quality' of journals or of individual scientists, using them to determine impact factor, h-index or Scimago journal ranking, for example.
This becomes obvious when looking under the article heading for any citation that is referenced only in the supplement, using search engines such as PubMed, Scopus, Web of Science or Google Scholar. None will indicate that the particular reference is cited in the paper's supplement. This omission will affect ranking calculations, particularly for journals that post details of experimental methods in their supplements.
Like it or not, ranking of scientific achievement by citation-based methods is an important part of the scientific system, and journals should make all their citations accessible to those who need accurate numbers. The solution to this problem seems quite simple: the citations in the supplement have to be incorporated into the reference section of the main text by the authors.
Google Scholar

Google Scholar: The New Generation of Citation Indexes (http://www.librijournal.org/pdf/2005-4pp170-180.pdf)
G-index
The g-index is an index for quantifying the scientific productivity of physicists and other scientists based on their publication record. It was suggested in 2006 by Leo Egghe.
The index is calculated based on the distribution of citations received by a given researcher's publications.
Given a set of articles ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations.
An alternative definition is
Given a set of articles ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received on average at least g citations.
This index is very similar to the h-index, and attempts to address its shortcomings. Like the h-index, the g-index is a natural number and thus lacks discriminatory power. Therefore, Richard Tol proposed a rational generalisation.
Tol also proposed a successive g-index.
Given a set of researchers ranked in decreasing order of their g-index, the g1-index is the (unique) largest number such that the top g1 researchers have on average at least a g-index of g1.
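To make the calculation concrete, the following is a minimal Python sketch of the basic g-index definition given above; the function name and the example citation counts are illustrative, not taken from the sources cited here.

    def g_index(citations):
        """Largest g such that the g most-cited papers together received
        at least g*g citations (basic definition, capped at the number of papers)."""
        counts = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(counts, start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    # Hypothetical record: five papers cited 8, 5, 3, 2 and 1 times.
    # Cumulative citations are 8, 13, 16, 18, 19 against thresholds 1, 4, 9, 16, 25,
    # so the g-index is 4.
    print(g_index([8, 5, 3, 2, 1]))  # -> 4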
H-index

The h-index is an index that attempts to measure both the scientific productivity and the apparent scientific impact of a scientist. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other people's publications. The index can also be applied to the productivity and impact of a group of scientists, such as a department, university or country. The index was suggested by Jorge E. Hirsch, a physicist at UCSD, as a tool for determining theoretical physicists' relative quality and is sometimes called the Hirsch index or Hirsch number.

Hirsch suggested that, for physicists, a value for h of about 10–12 might be a useful guideline for tenure decisions at major research universities. A value of about 18 could mean a full professorship, 15–20 could mean a fellowship in the American Physical Society, and 45 or higher could mean membership in the United States National Academy of Sciences.
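As a quick illustration of this definition, here is a minimal Python sketch; the function name and the sample citation counts are hypothetical, chosen to match the g-index example above.

    def h_index(citations):
        """Largest h such that h of the papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break  # counts are sorted, so no later rank can qualify
        return h

    # Hypothetical record: papers cited 8, 5, 3, 2 and 1 times.
    # Three papers have at least 3 citations each, but not four with at least 4.
    print(h_index([8, 5, 3, 2, 1]))  # -> 3

On this record the h-index is 3 while the g-index computed earlier is 4, reflecting the extra weight the g-index gives to a researcher's most highly cited papers.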
Advantages of H-index
The h-index was intended to address the main disadvantages of other bibliometric indicators, such as total number of papers or total number of citations. Total number of papers does not account for the quality of scientific publications, while total number of citations can be disproportionately affected by participation in a single publication of major influence. The h-index is intended to measure simultaneously the quality and sustainability of scientific output, as well as, to some extent, the diversity of scientific research. The h-index is much less affected by methodological papers proposing successful new techniques, methods or approximations, which can be extremely highly cited. For example, one of the most cited condensed matter theorists, John P. Perdew, has been very successful in devising new approximations within the widely used density functional theory. He has published 3 papers cited more than 5000 times and 2 cited more than 4000 times. Several thousand papers utilizing the density functional theory are published every year, most of them citing at least one paper of J.P. Perdew. His total citation count is close to 39 000, while his h-index is large, 51, but not unique. In contrast, the condensed-matter theorist with the highest h-index (94), Marvin L. Cohen, has a lower citation count of 35 000. One can argue that in this case the h-index reflects the broader impact of Cohen's papers in solid-state physics due to his larger number of highly-cited papers.
Criticism of H-index
- Michael Nielsen points out that "...the h-index contains little information beyond the total number of citations, and is not properly regarded as a new measure of impact at all". According to Nielsen, to a good approximation, h ~ sqrt(T)/2, where T is the total number of citations.
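As a rough, purely illustrative check of this approximation, one can compare h with sqrt(T)/2 on a randomly generated, heavy-tailed citation record; the data below are synthetic, and how closely the two numbers agree depends on the shape of the citation distribution.

    import math
    import random

    random.seed(1)
    # Synthetic record: 200 papers with heavy-tailed (Pareto-distributed) citation counts.
    citations = [int(random.paretovariate(1.5)) for _ in range(200)]

    T = sum(citations)
    # h-index: number of ranks at which the rank-th most-cited paper has at least rank citations.
    h = sum(1 for rank, cites in enumerate(sorted(citations, reverse=True), start=1) if cites >= rank)

    print(f"T = {T}, h = {h}, sqrt(T)/2 = {math.sqrt(T) / 2:.1f}")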
There are a number of situations in which h may provide misleading information about a scientist's output:
- The h-index is bounded by the total number of publications. This means that scientists with a short career are at an inherent disadvantage, regardless of the importance of their discoveries. For example, Évariste Galois' h-index is 2, and will remain so forever. Had Albert Einstein died in early 1906, his h-index would be stuck at 4 or 5, despite his being widely acknowledged as one of the most important physicists, even considering only his publications to that date.
- The h-index does not consider the context of citations. For example, citations in a paper are often made simply to flesh out an introduction, having no other significance to the work. h also does not resolve other contextual instances: citations made in a negative context and citations made to fraudulent or retracted work. (This is true for other metrics using citations, not just for the h-index.)
- The h-index does not account for confounding factors. These include the practice of "gratuitous authorship", which is still common in some research cultures, the so-called Matthew effect, and the favorable citation bias associated with review articles.
- The h-index has been found to have slightly less predictive accuracy and precision than the simpler measure of mean citations per paper. However, this finding was contradicted by another study.
- The h-index is a natural number and thus lacks discriminatory power. Ruane and Tol therefore propose a rational h-index that interpolates between h and h+1.
- While the h-index de-emphasizes singular successful publications in favor of sustained productivity, it may do so too strongly. Two scientists may have the same h-index, say, h = 30, but one has 20 papers that have been cited more than 1000 times and the other has none. Clearly the scientific output of the former is more valuable. Several recipes to correct for that have been proposed, such as the g-index, but none has gained universal support.
- The h-index is affected by limitations in citation databases. Some automated searching processes find citations to papers going back many years, while others find only recent papers or citations. This issue is less important for those whose publication record started after automated indexing began around 1990. Citation databases contain some citations that are not quite correct and therefore will not properly match to the correct paper or author.
- The h-index does not account for the number of authors of a paper. If the impact of a paper is the number of citations it receives, it might be logical to divide that impact by the number of authors involved. (Some authors will have contributed more than others, but in the absence of information on contributions, the simplest assumption is to divide credit equally.) Not taking into account the number of authors could allow gaming the h-index and other similar indices: for example, two equally capable researchers could agree to share authorship on all their papers, thus increasing each of their h-indices. Even in the absence of such explicit gaming, the h-index and similar indices tend to favor fields with larger groups, e.g. experimental over theoretical. An individual h-index normalized by the average number of co-authors in the h-core has been introduced by Batista et al. They also found that the distribution of the h-index, although it depends on the field, can be normalized by a simple rescaling factor. For example, taking the h values for Biology as the standard, the distribution of h for Mathematics collapses onto it if h is multiplied by three; that is, a mathematician with h = 3 is equivalent to a biologist with h = 9.
H-index: age and sex make it unreliable

The h-index seems to be breaking away from the bibliometric pack in the race to become a favoured measure of scientific performance ('Achievement index climbs the ranks' Nature 448, 737; 2007; http://www.nature.com/nature/journal/v448/n7155/full/448737a.html). However, if the h-index is to become an assessment tool commonly used by university administrators and government bureaucrats, those using it should be aware of its pitfalls.
As noted in your News story, tallying how many papers a researcher publishes (their productivity) gives undue merit to those who publish many inconsequential papers. But at least for ecologists and evolutionary biologists, the h-index is highly correlated with productivity (r = 0.77; see C. D. Kelly and M. D. Jennions Trends Ecol. Evol. 21, 167–170; 2006).
This is worrisome, because the h-index is easily misconstrued as an equitable measure of research quality. We offer two examples.
First, female ecologists and evolutionary biologists publish fewer papers than their male counterparts, and they have significantly lower h-indices. Should administrators therefore conclude that men are better researchers? No. The gender difference vanishes if we control for productivity. It seems unlikely that this phenomenon is restricted to ecology and evolution.
Second, the h-index increases with age, and using the ratio of the two can also be problematic. Reliably comparing the performance of younger researchers with that of older ones is therefore difficult.
H-index: however ranked, citations need context

Michael C. Wendl
Washington University Medical School, 4444 Forest Park Boulevard, Box 8501, St Louis, Missouri 63108, USA
The h-index (the number n of a researcher's papers that have received at least n citations) may paint a more objective picture of productivity than some metrics, as your News story 'Achievement index climbs the ranks' (Nature 448, 737; 2007) points out. But for all such metrics, context is critical.
Many citations are used simply to flesh out a paper's introduction, having no real significance to the work. Citations are also sometimes made in a negative context, or to fraudulent or retracted publications. Other confounding factors include the practice of 'gratuitous authorship' and the so-called 'Matthew effect', whereby well-established researchers and projects are cited disproportionately more often than those that are less widely known. Finally, bibliometrics do not compensate for the well-known citation bias that favours review articles.
Ratings games
Researchers have two rare opportunities to influence the ways in which they may be assessed in future.
How to judge the performance of researchers? Whether one is assessing individuals or their institutions, everyone knows that most citation measures, while alluring, are overly simplistic. Unsurprisingly, most researchers prefer an explicit peer assessment of their work. Yet those same researchers know how time-consuming peer assessment can be.
Against that background, two new efforts to tackle the challenge deserve readers' attention and feedback. One, a citations metric, has the virtue of focusing explicitly on a researcher's cumulative citation achievements. The other, the next UK Research Assessment Exercise, is rooted in a deeper, more qualitative assessment, but feeds into a numerical rating of university departments, the results of which hang around the necks of the less successful for years.
Can there be a fair numerical measure of a researcher's achievements? Jorge Hirsch, a physicist at the University of California, San Diego, believes there can. He has thought about the weaknesses of current attempts to use citations — total counts of citations, averaged or peak citations, or counts of papers above certain citation thresholds — and has come up with the 'h-index'. This is the highest number of papers that a scientist has written that have each received at least that number of citations; an h-index of 50, for example, means someone has written 50 papers that have each had at least 50 citations. The citations are counted using the tables of citations-to-date provided by Thomson ISI of Philadelphia. Within a discipline, the approach generates a scale of comparison that does seem to reflect an individual's achievement thus far, and has already attracted favourable comment (see Index aims for fair ranking of scientists). The top ten physicists on this scale have h values exceeding 70, and the top ten biologists have h values of 120 or more, the difference reflecting the citation characteristics of the two fields.
The author placed his proposal on a preprint server last week (http://www.arxiv.org/abs/physics/0508025), thereby inviting comment before publication. Given the potential for indicators to be seized upon by administrators, readers should examine the suggestion and provide the author with peer assessment.
Whatever its virtues, any citation analysis raises as many questions as it answers and also tracks just one dimension of scientific outputs. Nature has consistently advocated caution in the deployment of the impact factor in particular as a criterion of achievement (an index that Hirsch's h indicator happily ignores). Wisely, the UK Research Assessment Exercise (RAE) has long committed itself to a broader view and the organizers of the next RAE, to take place in 2008, have prohibited assessment panels from judging papers by the impact factors of the journals in which they appeared. What the costs of that will be in panel members' time remains to be seen.
The common approach of the RAE's disciplinary panels is to assess up to four submitted outputs (typically research papers or patents) per researcher, of which a proportion will be assessed in some detail (25% for the biologists, 50% for the physicists). There will no doubt be something of a challenge in taking into account the fact that a typical publication has several co-authors.
These outputs will sit alongside indicators of the research environment such as funds and infrastructure, and of esteem, such as personal awards and prestige lectures. The specific indicators to be considered and the weightings applied are now open for public consultation (see http://www.rae.ac.uk/pubs/2005/04/docs/consult.doc). Given that the RAE is so influential both nationally and, as a technique, internationally, there is a lot at stake. Stakeholders should express any concerns they may have by the deadline of 19 September.
A possible way out of the impact-factor game

Herman Tse
Department of Microbiology, The University of Hong Kong, Pokfulam, Hong Kong. Email: [email protected]
Your Editorial 'Unbalanced portfolio' (Nature 453, 1144; 2008) defends the scientific autonomy of researchers against pressure from bureaucrats seeking maximum economic returns. Although this position is admirable and likely to be popular among researchers, it might also be worth reflecting on our current situation.
Few scientists nowadays can afford to pursue research for science's sake, as suggested in the Editorial. Rather, most of us are trapped in a game of numbers, in which all our research output can be reduced to one or more of the following metrics: impact factors, average citations per article, total number of articles published, and the h-index.
This reductionist attitude towards scientific research has fostered an unhealthy research environment, evident in the copious examples of 'salami slicing' that litter scientific journals. Furthermore, the rules and significance of the game are all but opaque to the lay public (and to some members of our own profession), which alienates their interest in our investigations.
But our research is more relevant for them if it can be measured by its economic return. It would be hard to argue that the pressure to publish is somehow better or more meaningful than the pressure to recoup economic returns. Done properly, research assessment based on a balance between publications and economic output may be a way out of the impact-factor game.
Citations: rankings weigh against developing nations

D. C. Mishra
National Geophysical Research Institute, Uppal Road, Hyderabad 500 007, Andhra Pradesh, India
Scientists and whole institutes are frequently judged by the number of citations of their papers in scientific journals, and project funding depends on it. But, as Clint Kelly and Michael Jennions note in Correspondence ('H-index: age and sex make it unreliable' Nature 449, 403; doi:10.1038/449403c 2007), the context and relevance of citations are crucial in reaching this judgement.
Researchers from developing nations often face another problem. In the name of local issues and the national interest, they are required to publish in national journals that rarely find a place among cited journals and have a very limited circulation abroad.
For example, a study of the Thomson Scientific Essential Science Indicators (ESI) during the past five years has found that the National Geophysical Research Institute (NGRI) in Hyderabad, India, scores among the top 1% of institutions publishing in the geosciences. During this period, the NGRI had 2,338 citations of 657 papers (http://www.in-cites.com/institutions/2007menu.html). But if it had not published more than half its publications in national journals — not all of which figure in the ESI database — the NGRI could have been ranked even nearer the top.
In formulating their criteria, ranking services should also take into account publications by institutes and individuals in local and national journals; this could be done by assigning them some weighted average. The total number of publications in national journals not counted by the ESI would then be considered and weighted in order to arrive at a more appropriate index.
References

1. An index to quantify an individual's scientific research output, by J. E. Hirsch (PNAS).
2. Does the h index have predictive power?, by J. E. Hirsch (PNAS). http://www.pnas.org/content/104/49/19193.full
3. Reflections on the h-index, by Prof. Anne-Wil Harzing. http://www.harzing.com/pop_hindex.htm
4. Wikipedia
5. Nature Publishing Group (NPG)