Roberto Casati (CNRS-EHESS-ENS, Paris, France)
What follows is a series of back-of-postcard remarks about the thorny issue of research quality and how to assess it. Some of it is hypothetical. Some is based on published research, reports, editorials, or interviews. More empirical research on the various aspects described here is definitely needed. I was involved in a three-year project of the European Commission (LiquidPublications), and some of the ideas here are part of an in-progress Green Paper we jointly wrote for the Commission. Some are going to be used in a report I am writing for the French Ministry of Research. I will be glad to receive comments and references to existing research. Opinions expressed here are, of course, quite personal; they are mine and should not be ascribed to my employer.
Rationale: A lot of research is funded and produced nowadays. Evaluation is of the essence: output is measured in many different ways, projects are assessed before being funded, researchers build reputations, and nations improve their wealth through research. What concerns me here is the question of potentially risky feedback loops created by our necessary but ever more pervasive evaluation practices. I take it for granted that we need to evaluate research, and warn about possible difficulties.
Loop 1 – Biomedical sciences bias research evaluation
Fact: Biomedical issues are perceived as fundamental in our society.
A chain of consequences:
1. Biomedical research receives the lion’s share of research funding.
2. A large community of researchers is involved in biomedical research.
3. A large number of articles is published in the biomedical sciences.
4. The large number of articles supports citation-based evaluation.
5. Citation-induced strategic behavior increases the number of articles and reinforces the prestige of some winner-take-all journals (ref. Marder, Kettenmann, Grillner).
Finally, comparatively high values of scientometric indicators for the biomedical sciences lead to further requests for funds, and the loop is closed. More funds, therefore more articles, therefore more funds.
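To see the dynamics at a glance, here is a minimal simulation sketch of Loop 1 (all parameter values are illustrative assumptions, not empirical estimates): funding produces articles, articles attract citations, and citation indicators feed back into the next funding round. When the loop gain exceeds 1, a field’s funding grows without bound relative to fields below that threshold.

```python
# Minimal sketch of Loop 1 as a discrete feedback system.
# All parameter values are illustrative assumptions, not empirical estimates.

def simulate(rounds, base_funds, articles_per_fund, cites_per_article, funds_per_cite):
    """Funding -> articles -> citations -> next round's funding."""
    funds = base_funds
    history = []
    for _ in range(rounds):
        articles = articles_per_fund * funds             # articles produced this round
        citations = cites_per_article * articles         # citations they attract
        funds = base_funds + funds_per_cite * citations  # indicators drive new funding
        history.append(round(funds, 1))
    return history

# Weak feedback (loop gain 0.5 * 2.0 * 0.3 = 0.3): funding stabilizes.
print(simulate(8, 100, 0.5, 2.0, 0.3))
# Strong feedback (loop gain 0.5 * 2.0 * 1.2 = 1.2): funding runs away.
print(simulate(8, 100, 0.5, 2.0, 1.2))
```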
Loop 2 – Citation-induced strategic behavior affects the quality of research.
In the above context, it has been observed that
1. Writing time increasingly dominates reading time.5
1.1. Thus only abstracts tend to be read.
2. Time is spent aiming for publication in top journals; less prestigious outlets are neglected, and PhD time and energy are in fact wasted if publication does not follow.6
3. Negative results are not published. This leads to unnecessary duplication of effort. Ethical issues have been raised about unnecessary animal suffering (see EU recommendation).7
More technical sub-loops directly affect research output.
Fact: Referees evaluate papers by potential competitors.
1. Referees try to insert citations to their own work into papers by competitors.
2. An artificial and biased inflation of citations follows.
3. Uncontroversial papers are preferred over controversial ones, and innovation is stifled.
Loop 3 – The hidden costs of shifting research funds to project-based research
Fact: Research funding is increasingly project-based. This means that more and more scientific jobs are on short-term contracts.
1. Short-term researchers are labor-intensive but have lower commitment (they need to spend a sizable amount of their time looking for the next job).
2. Senior researchers waste time training an intrinsically volatile workforce.
3. Lab research memory is lost.
4. Senior researchers use an ever-increasing share of their time for administrative work: project writing, project administration, and reporting.
5. Senior researchers delegate the administrative part to contracted junior researchers, and thus
6. Junior researchers have less time for doing research in a critical period of their careers.
Loop 4 – Costs of political choices for innovation
Fact: Research funding is project-based and research lines are centrally (politically) steered.
1. Strategic behavior is generated: groups with the ability to write projects according to specifications are privileged over less strategically minded groups.
2. Strategically minded groups obtain more funding, which they can invest in even more project writing.
3. Corruption of research: projects become risk-averse and novelty-averse, and their outcome is probably already available before the work begins.
4. Furthermore, blue-sky research is inhibited, and potential for innovation is lost.
Loop 5 – Humanities do not fit scientometrics well
Compliance with scientometrics derived from the biomedical model, combined with the small, fragmented communities of the humanities, the prevalence of books and book chapters over research articles, and the importance of reading (which reduces writing time, hence output in general), induces lower bibliometric indices for the humanities.
1. This puts the humanities at a double disadvantage in the competition for funds: they receive less funding because they are perceived as less important than, say, cancer research; in consequence fewer articles are produced, impact factors are lower, and a further competitive disadvantage follows (see Loop 1).
2. This ecology is favorable to the corruption of the humanities. (More generally, the humanities are intrinsically study-intensive, not result-oriented, and thereby suffer from specific types of measurement.)
3. Research published in “minor” journals is neglected.8
One can imagine some lines of remediation here.
One is a vigorous reassessment of the value of research in the humanities. To give but one example, many issues that are perceived as mainly biomedical – aging, climate change, drugs, dyslexia, to mention a few – are in fact much wider societal issues, not least the medicalization of society itself: total spending on health care was 16% of GDP in the US in 2007, and the figure could rise to almost 49% by 2082.9
Another possible remediation consists in rethinking evaluation in the humanities, so as to avoid its collapse onto biomedically inspired metrics. Peer evaluation appears to be the safer bet here. However, there is a set of less traveled paths to measuring the general impact of research in the humanities: the number of visitors at curated exhibits10; the factoring in of time devoted to reading11; the number of foreign editions of a book.
Loop 5 – Humanities, continued; US example; costs of outsourcing evaluation
Fact: Publication metrics are based on rules of thumb (“two books required for tenure”).
1. Tenure decisions are outsourced to university presses’ editorial choices.12
2. Corruption of the quality of publication: uninspiring books.
A possible remediation: require actual reading of works by peer committees in universities; forbid reliance on general CV assessment, and even more so on indexes.
Loop 6 – Humanities, ctd.: costs of absence of agreed upon metrics
Fact: The humanities are in general not evaluated.
Consequences: An unnecessary and counterproductive absence of characterization of the humanities.13
Remediation: independent, evaluation-free tools for characterization (RIBAC).
Loop 7 – Rhetorical insistence on “excellence”
Fact: Excellence in research is assumed to be a target in many policies.
1. Research that is around the median is neglected.
2. It becomes difficult for median research to improve.
However, median research has to exist14 and should be supported, as no one can prejudge from the outset which research will turn out to be excellent.
Loop 8 – Costs of Incentives (e.g. monetary incentives)
Fact: A number of incentives only work for those who are close to the top ten percent and are likely to receive them.15
1. Frustration in the median researcher.
2. A feeling that a certain type of effort is not worthwhile.
3. The median researcher moves towards the lower end of the distribution.
Loop 9 – Rhetorical insistence on indicators and their respective advantages
If we focus all of our attention on indicators, we may neglect some of the main issues; in particular
- deliberation about means is taken to be deliberation about the ends of the research process;
- the understanding of strategic behavior, and of its risks, is neglected.
Remediation: to a limited extent, we may experiment with alternative models (on which more empirical research is needed). Here are three examples:
1. Lotteries for seed grants (a minimal sketch follows this list):
- No project writing, no reporting
- No evaluation process (big savings)
- No personal bad feelings, and sometimes genuine gratification
2. Randomly select researchers to pursue their own projects for a given time (one semester, one year).
3. One day per week of sandbox research: protected time for any researcher to devote to his/her own project.
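As a concrete illustration of the first option, here is a minimal sketch of a seed-grant lottery (the applicant structure, eligibility flag, and grant count are placeholder assumptions): after a basic eligibility filter, winners are drawn uniformly at random, so there is no project writing, no ranking, and no evaluation process to pay for.

```python
import random

# Minimal sketch of a seed-grant lottery (Loop 9, option 1).
# Eligibility criteria and grant count are placeholder assumptions.

def seed_grant_lottery(applicants, n_grants, seed=None):
    """Draw n_grants winners uniformly at random from eligible applicants.

    There is no ranking, no panel, no project text: eligibility is the
    only filter, so no evaluation step is required.
    """
    eligible = [a for a in applicants if a.get("eligible", True)]
    rng = random.Random(seed)  # a fixed seed makes the draw auditable
    return rng.sample(eligible, min(n_grants, len(eligible)))

applicants = [{"name": f"researcher-{i}", "eligible": True} for i in range(50)]
for winner in seed_grant_lottery(applicants, n_grants=5, seed=2011):
    print(winner["name"])
```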
Loop 10 – Costs of reliance on automated agents: Google Scholar, ISI
Fact: The use of instruments such as Google Scholar and ISI to compute various indexes is more and more widespread.
Risk: Opacity of the underlying databases.
Possible remediation: exploration of larger databases, including blogs; exclusion of commentaries by researchers close to the author. Development of “homophily-aware” citation search bots (such as the one developed in LiquidPublications).
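To make the idea concrete, here is a minimal sketch of a homophily-aware citation count (the shared-co-authorship test is an illustrative stand-in for a real proximity measure; this is not the LiquidPublications implementation): citations coming from the researcher’s own co-authors, or from the researcher, are excluded from the tally.

```python
# Minimal sketch of a "homophily-aware" citation count.
# The proximity test (shared co-authorship) is an illustrative stand-in,
# not the actual LiquidPublications bot.

def homophily_aware_count(cited_author, citing_authors, coauthors):
    """Count citations, excluding those from the author or their co-authors.

    citing_authors: list of names, one per citation received.
    coauthors: set of names who have co-authored with cited_author.
    """
    independent = [c for c in citing_authors
                   if c not in coauthors and c != cited_author]
    return len(independent)

coauthors_of_smith = {"jones", "lee"}
citing_authors = ["jones", "lee", "garcia", "smith", "garcia", "chen"]
print(homophily_aware_count("smith", citing_authors, coauthors_of_smith))  # -> 3
```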
Loop 11 – Costs of reliance on automated agents, continued: Google Books scenario
Fact: Erosion of effective reading time.
1. “Snippet reading”: short quotes are read as snippets in Google Books.16
2. The main point of the book, its unity, is lost (L. Waters).
3. Books are going to be written to be read through Google Books or automated agents, not by judging humans.
Loop 12 – The “Shanghai effect” – costs for society of the massive presence of the private sector in higher education
Fact: An ever-increasing focus on (opaque) indicators puts a premium on private universities, which have vast sums at their disposal.
“Rich parents would relish the opportunity to drive fees even higher, beyond the reach of less wealthy parents of more able children.”17
Loop 13 – Authors sign articles that they did not even read
“I am officially the author of over 296 peer-reviewed journal articles, as of May 26, 2006. Yes, that’s correct: 296 articles. Many of these I have not even read”: a very candid statement18 by physicist David C. Williams. Notice that it is not uncommon to publish papers with hundreds of authors. See for instance “Measurement of the ZZ production cross section in pp̄ collisions at √s = 1.96 TeV”19, a paper with 421 authors, or “Initial Sequencing and Analysis of the Human Genome”, published by Nature and listing approximately 2,900 authors20. It is simply incredible that unstructured lists of authors are accepted and, what is worse, counted in measurements without any significant weighting.
Remediation: Gloria Origgi, Judith Simon, and I have proposed adding a “production box” to papers resulting from collaborations, in which micro-credits are attributed to functional portions of the paper (such as figures, datasets, and even summaries). The “production box”, Hollywood-style, has a line for the lab director who did not write (or even read) the article.21
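A minimal sketch of what a production box could look like as a data structure (the field names, roles, and credit values here are invented for illustration; the actual proposal is detailed in the paper cited in note 21): each functional contribution gets its own line and micro-credit, from which per-author weights can be derived instead of treating the author list as an unstructured block.

```python
from collections import defaultdict

# Minimal sketch of a "production box": per-contribution micro-credits.
# Roles, names, and credit values are invented for illustration only.

production_box = [
    {"role": "experiment design", "author": "A", "credit": 0.30},
    {"role": "dataset",           "author": "B", "credit": 0.25},
    {"role": "figures",           "author": "B", "credit": 0.10},
    {"role": "writing",           "author": "C", "credit": 0.25},
    {"role": "lab direction",     "author": "D", "credit": 0.10},  # the Hollywood-style line
]

def author_weights(box):
    """Aggregate micro-credits into per-author weights usable in metrics."""
    totals = defaultdict(float)
    for line in box:
        totals[line["author"]] += line["credit"]
    return {author: round(total, 2) for author, total in totals.items()}

print(author_weights(production_box))
# -> {'A': 0.3, 'B': 0.35, 'C': 0.25, 'D': 0.1}
```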
We need to think harder about what “productivity” really means: the quality of research needs to be improved, as well as the quantity.22
1 Lundh, Andreas; Barbateskovic, Marija; Hróbjartsson, Asbjørn; Gøtzsche, Peter C. (2010), Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study, PLoS Medicine 7 (10): e1000354.
4 Malle, Bertram F. (2006), The actor-observer asymmetry in causal attribution: A (surprising) meta-analysis. Psychological Bulletin, 132, 895-919. http://www.psychwiki.com/wiki/The_Actor-Observer_Asymmetry_in_Attribution:_A_%28Surprising%29_Meta_Analysis
5 King, Donald W.; Tenopir, Carol; Choemprayong, Songphan; Wu, Lei (2009), Scholarly Journal Information Seeking and Reading Patterns of Faculty at Five U.S. Universities, Learned Publishing, 22 (2), April 2009: 126-144. DOI: 10.1087/2009208.
6 Marder, Kettenmann, Grillner, in PNAS, http://fens.mdc-berlin.de/media/pdf/PNAS-Article-Marder-Kettenmann-Grillner.pdf
7 The Economist, “Catheter and mouse: Sharing information on failed animal experiments would help both scientists and rats”, May 7th 2009, print edition.
8 Farber, Paul (2009), Journal of the History of Biology 42 (1): 185-187.
9 CBO, “The Long-Term Outlook for Health Care Spending”, November 2007. http://www.cbo.gov/ftpdocs/87xx/doc8758/HealthTOC.1.1.htm
11 King, Donald W.; Tenopir, Carol; Choemprayong, Songphan; Wu, Lei (2009), Scholarly Journal Information Seeking and Reading Patterns of Faculty at Five U.S. Universities, Learned Publishing, 22 (2), April 2009: 126-144. DOI: 10.1087/2009208. See also: Tenopir, Carol; King, Donald W.; Edwards, Sheri; Wu, Lei (2009), Electronic Journals and Changes in Scholarly Article Seeking and Reading Patterns, Aslib Proceedings: New Information Perspectives, 61 (1), February 2009: 5-32. DOI: 10.1108/00012530910932267; and Tenopir, Carol; King, Donald W.; Spencer, Jesse; Wu, Lei (2009), Variations in Article Seeking and Reading Patterns of Academics: What Makes a Difference?, Library & Information Science Research, 31 (3), September 2009: 139-148. DOI: 10.1016/j.lisr.2009.02.002.
12 Waters, Lindsay, Enemies of Promise.
13 In France, CNRS is moving to a relatively sophisticated form of data collection that should do justice to the diversity of activities in the humanities. See http://archivesic.ccsd.cnrs.fr/docs/00/34/41/02/PDF/Classement_des_publications-v13.pdf. Work by Isabelle Sidéra and Michèle Dassa has produced the data-collection instrument RIBAC (a French database for the social sciences and humanities): https://www.ribac-shs.cnrs.fr/. RIBAC does most of the work for the researcher: it extracts data from HAL, automatically updates one’s bibliography, and allows modifications to be entered incrementally. It is very useful, as researchers have to submit yearly activity reports; the incentive is therefore strong to use RIBAC and, indirectly, to publish on HAL. A very good example of successful process design.
15 Martin, B. (2009), Research Productivity: Some Paths Less Travelled, Australian Universities’ Review, 51 (1), February 2009: 14-20. http://www.bmartin.cc/pubs/09aur.pdf
16 Darnton, Robert, The Case for Books.
17 Hotson, Howard, Don’t Look to the Ivy League, London Review of Books, 33 (10), 19 May 2011. http://www.lrb.co.uk/v33/n10/howard-hotson/dont-look-to-the-ivy-league
21 Casati, Roberto; Origgi, Gloria; Simon, Judith (2011), Micro-Credits in Scientific Publishing, Journal of Documentation (in print).