Scholarship is built on reputation, credibility, and demonstrable social benefit. ‘Openness’ disrupts the traditional stand-ins for these qualities: name recognition, institutional regard, and selection through peer review at high-impact (commercial) publishers. For Open Science practices to be worthwhile for scholarly reputation, new methods of evaluating achievement are needed that reward those practices. There are not yet any standards for integrating Open Science into the evaluation of research work, but initiatives and other components of this complex topic are developing that move considerations of open evaluation along.

Components of evaluation

  • Open peer review
  • Open research metrics
  • FAIR principles
  • Evaluation discussions

Open peer review

Peer review is a process in which proposed work (such as research or a publication) is evaluated by experts (peers) in the relevant field. Peer review is not one process but a composite of processes or models. Traditionally it has been closed (anonymous): the reviewers are chosen by a mediator (the journal editor) and are not disclosed to the author, and their feedback is aggregated and anonymized by the mediator before being delivered back to the author.

Open Peer Review (OPR) can be summarized by the following transparency methods:

  • Open identities (disclosure of names involved)
  • Open reports (review comments published with article)
  • Open participation (community can contribute)

These methods translate into the following models, outlined by PLOS, which can be used alone or in combination:

  • Publishing peer review content
  • Open commenting from the wider community
  • Open discussion between authors, editors and reviewers
  • Open review before publication through preprints
  • Post-publication commenting
  • Sharing author or reviewer identities
  • Decoupling the peer review process from the publication process

Metrics and impact

Research metrics are measures used to quantify the influence or impact of scholarly work. Examples include bibliometrics (methods to analyze and track scholarly literature), citation analysis, and altmetrics (a more recent set of alternative methods that attempt to track and analyze scholarship across various digital media).
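
As a concrete illustration of citation analysis, the sketch below computes an h-index (the largest h such that h outputs each have at least h citations) from a list of citation counts. The citation figures are invented and the function is only a minimal example written for this guide, not an implementation taken from any particular bibliometric tool.

    def h_index(citation_counts):
        """Return the largest h such that at least h outputs
        each have at least h citations."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, citations in enumerate(counts, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    # Invented citation counts for eight outputs; the h-index here is 3.
    print(h_index([25, 8, 5, 3, 3, 2, 1, 0]))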

Altmetrics track the engagement with, and activity around, scholarly output outside commercial publishers; for example, open access journal articles published with a ‘diamond’ publisher, or research data, slides, figures, code, and other materials shared on platforms like the Open Science Framework or a repository. These ‘open’ activities, and other hidden work like peer review, are not captured by traditional metrics. When persistent identifiers, such as digital object identifiers (DOIs), or URLs are associated with an output or activity, that activity or product can be traced through the open ecosystem. Beyond demonstrating the types, sources, and counts of engagement, there is not yet a common understanding of the value of these metrics.
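
As a hedged sketch of how a persistent identifier makes an output traceable, the snippet below asks the public Crossref REST API for a work's record by its DOI and reads the citation count Crossref reports. The endpoint and field name follow Crossref's works route as commonly documented, but treat them (and the placeholder DOI) as assumptions to verify against the current API; other services expose similar lookups for altmetric events.

    import json
    import urllib.request

    def crossref_citation_count(doi):
        """Fetch a work's Crossref record by DOI and return the citation
        count Crossref reports (field name assumed from the works schema)."""
        url = f"https://api.crossref.org/works/{doi}"
        with urllib.request.urlopen(url) as response:
            record = json.load(response)
        return record["message"].get("is-referenced-by-count", 0)

    # Hypothetical usage with a placeholder DOI:
    # print(crossref_citation_count("10.1234/example-doi"))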

Altmetrics draw on many different categories of quantitative data:

  • Social media posts, mentions, comments
  • Views, downloads, and captures (bookmarks, favourites) from websites and repositories
  • Citation counts from citing behaviour in publications and in grey literature such as policy reports
  • Response indicators on platforms like YouTube (likes, dislikes, comments)

For assistance with metrics, including altmetrics, see Scholarly Communication: Bibliometrics & research impact services.


FAIR principles

In 2016, the ‘FAIR Guiding Principles for scientific data management and stewardship’ were published. The authors intended to provide guidelines to improve the findability, accessibility, interoperability and reuse of digital assets.

While the principles emphasize machine-actionability (the capacity of computational systems to find, access, interoperate with, and reuse data with little or no human intervention), they also support research planning and management by removing barriers to sharing and improving the efficiency with which research outputs are propagated.
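
To make machine-actionability concrete, the sketch below shows a minimal machine-readable metadata record for a hypothetical dataset. The field names are illustrative, loosely echoing common metadata elements such as identifier, license, and format, and are not a required FAIR schema.

    import json

    # Illustrative metadata for a hypothetical dataset; field names are
    # assumptions modelled on common metadata elements, not a mandated schema.
    dataset_metadata = {
        "identifier": "https://doi.org/10.1234/example-dataset",    # persistent ID -> findable
        "title": "Example survey responses, 2023",
        "creators": ["Example Researcher"],
        "access_url": "https://repository.example.org/datasets/42", # standard protocol -> accessible
        "format": "text/csv",                                        # open, documented format -> interoperable
        "license": "https://creativecommons.org/licenses/by/4.0/",   # clear reuse terms -> reusable
    }

    # Serialized as JSON, the record can be indexed and exchanged by
    # repositories and harvesters without human intervention.
    print(json.dumps(dataset_metadata, indent=2))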

Major funding bodies, including the European Commission, promote FAIR data to maximise the integrity and impact of their research investment.

Graphic demonstrating FAIR: findable, accessible, interoperable, reusable (image from Wikimedia Commons).

Evaluation discussions

The movement to improve methods of evaluation began before Plan S or other widespread open adoption. In 2012, a set of recommendations known as the San Francisco Declaration on Research Assessment was created; these recommendations are now commonly referred to as DORA.

DORA has moved beyond a focus on advocating for the appropriate interpretation and application of bibliometrics and scientometrics to research assessment more broadly. Project TARA was initiated in 2021 to identify, understand, and make visible the criteria and standards universities use to make hiring, promotion, and tenure decisions.

Other initiatives working on values-informed evaluation frameworks include HuMetricsHSS, whose aim is to establish humane indicators of excellence in academia, focused particularly on the humanities and social sciences (HSS).

Resources