AI in the peer review process: opportunity or problem?

8 Apr 2026 Monika Řičicová


Imagine you submit a scientific article for review—and on the other side, a reviewer opens an artificial intelligence tool. According to a 2025 survey by the publisher Frontiers (1,600 researchers, 111 countries), more than half of reviewers today use AI when evaluating articles.

What do reviewers do with AI?

Mostly, they use it to help with writing reviews, summarizing results, or detecting plagiarism. Few, however, apply AI where it could be most helpful, and experts disagree on where that actually is. Some see an opportunity for AI in verifying statistics or methodology, while others argue that AI should not intervene in those areas. As a review article in the Journal of Korean Medical Science (Doskaliuk et al., 2025) points out, assessing the novelty of research, interpreting complex data, and evaluating ethical standards remain the domain of human experts. Simply put, AI cannot “think” like an experienced reviewer.

AI in peer review as a problem

Many researchers upload manuscripts to third-party chatbots, which most publishers prohibit due to data and intellectual property protection. The limits of the technology itself were demonstrated in an experiment by engineer Mima Rahimi of the University of Houston, who had GPT-5 review his article published in Nature Communications: while the AI mimicked the structure of a review, it made factual errors and failed to provide truly useful feedback.

AI in peer review as an opportunity

Automated checks for grammar, formatting, terminology consistency, and structured feedback—these are areas where AI saves reviewers time and frees them up for actual scientific work. AI can also help formulate constructive and clear comments, which contributes to better communication between the reviewer and the author.

Why scientists don’t fully trust AI

Although most researchers acknowledge that AI improves the quality of manuscripts, they also fear its misuse. In a survey by publisher Wiley, 87% of respondents expressed concerns about AI errors, data security issues, and a lack of transparency. Potential biases encoded in training data also play an important role—for example, AI models may systematically favor research from certain regions or institutions.

What is the solution?

Frontiers has launched its own closed AI platform for reviewers, where manuscript confidentiality is protected. Leading medical journals are also beginning to implement clear rules for the use of AI in the peer review process—for example, JAMA requires reviewers to specify which AI tool they used and how. At the same time, a Frontiers survey shows that 35% of researchers are learning to work with AI entirely on their own, and 20% consider unclear rules to be the main barrier. The way forward lies in better support, the development of AI literacy, clear rules and open communication among all stakeholders.

AI in the peer review process is neither a savior nor a threat—it is a tool. And as with any tool, it all depends on how we use it.
