The peer review system has long been criticized for its lack of transparency, susceptibility to bias, and often slow pace. In recent years, there has been a noticeable shift toward greater openness in peer review to address these challenges. However, this shift also raises concerns about maintaining the credibility of the process.

In this blog post, we’ll provide a broad overview of the evolving landscape of peer review and the emerging practices that are reshaping how scientific articles are vetted and validated.

The Rise of Open Peer Review

One of the most significant trends in peer review is the move toward open peer review. Traditionally, reviews have been conducted anonymously: reviewers' identities are hidden from authors and, under double-blind review, authors' identities are hidden from reviewers as well.

This anonymity was intended to encourage honest feedback, but also raised concerns about accountability and the potential for bias.

Open peer review removes this anonymity. Reviewers’ identities might be revealed, and their comments are often published alongside the article. This transparency aims to foster a sense of responsibility and encourage more constructive criticism.

Moreover, it allows readers to see the dialogue between authors and reviewers, offering deeper insights into the research process.

Transparent Review Histories

Complementing this shift is the growing practice of publishing the full review history of an article. Journals and platforms are increasingly providing access to initial submissions, reviewer reports, author responses, and revised versions of manuscripts.

This approach not only adds further transparency to the peer-review process but also helps demystify the iterative nature of scientific research.

Transparent review histories serve as valuable educational tools, particularly for early-career researchers, by showcasing the evolution of research ideas and the impact of peer feedback.

These records give early-career researchers a unique opportunity to learn about the expectations and standards of their field.

However, there are concerns that open review might lead to less candid feedback, with reviewers potentially softening their critiques out of politeness or fear of backlash.

Examples of Journals Promoting Open Peer Review

Several prominent journals have adopted transparent peer review systems to enhance openness and accountability in the scientific publishing process.

For instance, a number of Nature journals, including Nature Human Behaviour, Communications Biology, and Nature Ecology & Evolution, follow a transparent peer review model. In these journals, details about the peer review process are published alongside the article, including reviewer comments, author rebuttal letters, and in some cases, editorial decision letters.

Notably, while journals like Nature Communications, Communications Earth & Environment, and Communications Psychology apply transparent peer review to all published articles, other journals allow authors to opt into this transparency at the end of the peer review process, prior to acceptance.

Moreover, the peer review information is typically published online as a supplementary file, providing readers with a detailed view into the peer review process. You can view an example file here.

In addition to Nature journals, the Royal Society has also embraced open peer review. Journals like Royal Society Open Science, Open Biology, and Proceedings B mandate publication of peer review information, with reviewer reports anonymized by default unless the reviewer chooses to sign their report.

Routledge Open Research exemplifies a novel approach to open peer review in scholarly publishing. The platform allows for rapid publication after comprehensive prepublication checks, ensuring research is promptly viewable and citable.

In this model, expert reviewers’ names and comments are published alongside the article to foster transparency. Authors are encouraged to revise their work, with all versions linked and independently citable. Articles that pass peer review are subsequently sent to major indexing databases and repositories.

An example of this open review process can be seen in the article What is wrong with conspiracy beliefs? by Alper and Yılmaz (2023).

Collaborative and Decentralized Peer Review

Collaborative peer review is gaining attention as a way to enhance the rigor and efficiency of the review process. In this approach, multiple reviewers collaborate openly with each other and the authors, often in real time or through structured platforms.

This collective approach can lead to more comprehensive evaluations and faster decision-making while providing valuable training opportunities for early-career researchers.

For example, British Ecological Society journals encourage senior academics to review manuscripts alongside junior members of their labs. This practice provides valuable training for early-career researchers, whether through working together on a manuscript or having the junior researcher draft the report with senior input and edits.

You can read more about co-reviewers’ experiences with this process here.

Decentralized Peer Review

Decentralized peer review is another innovative concept being explored. The paper titled Decentralized Peer Review in Open Science: A Mechanism Proposal by Finke and Hensel (2024) presents a proposal for a decentralized peer review system to improve transparency and efficiency. Key elements of the proposed system include:

  • Community Involvement: Tasks such as paper pre-selection and reviewer allocation are managed by the community, rather than traditional editors.
  • Smart Contracts: Acceptance decisions and conflict resolutions are determined by smart contracts based on reviewer scores.
  • Public Review Discussions: The review process includes public (anonymized) discussions with authors to ensure openness.
  • Reputation and Compensation: Reviewers are incentivized through a combination of payments and a reputation system, where high-quality reviews enhance their reputation.

These features aim to address various limitations of current peer review practices. For further details, you can read the full paper here.
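To make the mechanism more concrete, here is a minimal sketch of the score-based acceptance and reputation logic described above, written in plain Python rather than as an actual on-chain smart contract. The threshold, weighting scheme, and reputation updates are illustrative assumptions, not the authors' specification.

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    score: float       # rating assigned by the reviewer, e.g. on a 0-10 scale
    reputation: float  # reviewer's current reputation weight

def decide_acceptance(reviews, threshold=6.0):
    """Accept the paper if the reputation-weighted mean score meets the
    threshold. Threshold and weighting are illustrative choices."""
    total_weight = sum(r.reputation for r in reviews)
    consensus = sum(r.score * r.reputation for r in reviews) / total_weight
    return consensus >= threshold, consensus

def update_reputations(reviews, consensus, tolerance=2.0, bonus=0.1, penalty=0.05):
    """Reward reviewers whose scores sit close to the consensus and slightly
    penalize outliers -- one simple way to model a reputation incentive."""
    updated = {}
    for r in reviews:
        if abs(r.score - consensus) <= tolerance:
            updated[r.reviewer] = r.reputation + bonus
        else:
            updated[r.reviewer] = max(0.0, r.reputation - penalty)
    return updated

reviews = [
    Review("reviewer_a", score=8.0, reputation=1.0),
    Review("reviewer_b", score=7.0, reputation=1.5),
    Review("reviewer_c", score=3.0, reputation=0.8),
]
accepted, consensus = decide_acceptance(reviews)
print(accepted, round(consensus, 2))
print(update_reputations(reviews, consensus))
```

In the actual proposal, logic like this would live in a smart contract, so that acceptance decisions and reviewer payouts are executed automatically and recorded transparently.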

For further exploration of similar approaches, you may also find the paper “Decentralizing Science: Towards an Interoperable Open Peer Review Ecosystem Using Blockchain” informative. This paper discusses the use of blockchain technology to create an interoperable open peer review ecosystem.

Preprints and Post-Publication Peer Review

Preprints offer researchers a way to share their work with the community before it goes through formal peer review. Essentially, a preprint is a draft version of a research paper that is made publicly available online.

This early release allows researchers to present their findings, methodologies, and conclusions to other scientists and the general public, often long before the study is published in a peer-reviewed journal.

The preprint process can speed up the dissemination of new ideas and results, enabling other researchers to see and comment on the work while it is still in its preliminary stages.

It also fosters a faster exchange of knowledge and encourages collaboration, as others can engage with the research early on. While they are not subject to the same rigorous review as journal articles, preprints play a significant role in the ongoing conversation within the scientific community.

To illustrate, arXiv is a prominent preprint repository that offers open access to scholarly papers across a wide range of scientific disciplines. Established in 1991, arXiv is operated by Cornell Tech.

The repository covers fields such as physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering, and economics.

All papers on arXiv are freely accessible to the public. While submissions are screened for relevance and basic formatting, they are not peer-reviewed.

Authors can also update their submissions with new versions, so the most recent version of their work remains available.
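For readers who want to explore arXiv programmatically, it also exposes a public Atom-based API. The sketch below, using only Python's standard library, fetches a few entries for an illustrative query; the search string shown is just one example of the API's syntax.

```python
# Minimal query against arXiv's public Atom API (export.arxiv.org/api/query).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

query = urllib.parse.urlencode({
    "search_query": "all:peer review",  # illustrative search term
    "start": 0,
    "max_results": 3,
})
url = f"http://export.arxiv.org/api/query?{query}"

with urllib.request.urlopen(url) as response:
    feed = ET.fromstring(response.read())

# Responses are Atom feeds; pull out the title and submission date of each entry.
ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in feed.findall("atom:entry", ns):
    title = entry.find("atom:title", ns).text.strip()
    published = entry.find("atom:published", ns).text
    print(f"{published[:10]}  {title}")
```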

Managed by the Center for Open Science (COS), OSF Preprints is a platform that aggregates search results from various preprint providers, including general repositories like arXiv, as well as specialized platforms like bioRxiv for biology, CogPrints for cognitive sciences, SocArXiv for social sciences, and PsyArXiv for psychology.

Post-publication Peer Review

In this peer review model, appraisal and revision of a paper can continue after publication, often through comments or discussion forums alongside the paper. Importantly, post-publication peer review complements, rather than replaces, traditional pre-publication review.

To illustrate, PubPeer is an online platform that makes it easy for scientists and researchers to discuss and review research papers after they’ve been published.

It’s a space where you can comment on specific papers, ask questions, highlight potential issues like errors or inconsistencies, and engage in meaningful conversations with your peers.

One of the standout features of PubPeer is the option to comment anonymously. This encourages more open and honest discussions, as users can share their thoughts without worrying about professional repercussions.

AI and Automated Peer Review

The integration of Artificial Intelligence (AI) into academic publishing is beginning to influence the peer review process, and it is being explored in two key areas.

First, AI tools are being used to assist in the initial screening of manuscripts. These tools can check for plagiarism, assess the statistical validity of data, and even suggest potential reviewers based on their expertise. This approach is gaining popularity because it improves efficiency and reduces human error, allowing editors to concentrate on more complex tasks.

To illustrate, Elsevier has been integrating AI tools to assist in the peer-review process. Their AI-driven systems help with initial manuscript screening, detecting plagiarism, and assessing the quality of submissions before they are sent out for peer review. This initiative aims to improve the efficiency and consistency of the review process.
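As a simplified illustration of one screening sub-task, the sketch below ranks hypothetical reviewers by how closely their stated expertise matches a manuscript abstract, using off-the-shelf text similarity from scikit-learn. This is a toy example, not a description of Elsevier's or any publisher's actual system; the names and expertise strings are invented.

```python
# Toy reviewer-suggestion step: rank candidate reviewers by the textual
# similarity between a submitted abstract and each reviewer's expertise profile.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidates = {
    "Dr. A": "open peer review transparency scholarly publishing incentives",
    "Dr. B": "machine learning statistical methods reproducibility",
    "Dr. C": "blockchain smart contracts decentralized governance",
}

abstract = (
    "We propose a decentralized peer review mechanism in which acceptance "
    "decisions are computed from reviewer scores recorded on a blockchain."
)

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([abstract] + list(candidates.values()))

# Similarity of the abstract (row 0) to each candidate's expertise profile.
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)

for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

In practice, screening tools of this kind typically combine text similarity with publication databases and conflict-of-interest checks rather than relying on a single signal.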

The second area, using AI to conduct peer review itself, is more contentious. While some see potential in AI for bringing consistency and speed to the review process, others question whether AI can fully grasp the nuance and context required for a thorough evaluation. There are also concerns about the transparency of AI systems and the ethical implications of their use.

For example, the editorial policy of Royal Society journals prohibits referees from entering manuscript details into generative AI tools, on the grounds that such tools provide no guarantee of where data are being sent, saved, viewed, or used in the future.

Similarly, the National Institutes of Health (NIH) "prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals".

As AI continues to develop, its role in peer review may evolve, but the discussion about how it should be applied is likely to continue. Finding a balance between the capabilities of AI and the expertise of human reviewers will be key to ensuring the quality and integrity of academic publishing.

Balancing Innovation with Credibility

In conclusion, as the field of peer review continues to evolve, these new practices offer exciting possibilities for enhancing efficiency, transparency, and collaboration. However, it’s essential to balance these innovations with the need to maintain credibility in the review process. While striving for a swifter and more transparent peer review system, we must not compromise on the rigor and integrity that underpin academic publishing. The ongoing dialogue about the role of AI, open peer review, and other emerging practices will shape the future of scientific publishing.