Feedback Loop: How AI Is Changing the Peer Review Game
Imagine spending months crafting a research paper, only to receive vague feedback that leaves you more confused than enlightened. Human peer review, while essential to academic publishing, is often slow and inconsistent. At the same time, discussion about the role artificial intelligence (AI) can play in peer review is growing. AI-reviewed manuscripts are controversial, and for good reason: ethical concerns around transparency and accountability make human involvement in peer review a necessity. But what if AI could help in other ways, not by replacing reviewers but by supporting them?
At the International Conference on Learning Representations (ICLR) in Singapore, researchers tested a novel tool called the Review Feedback Agent. This AI system, powered by large language models (LLMs), acted as a behind-the-scenes coach, flagging reviews that were too vague, overly harsh, or lacking in detail. Of the more than 20,000 peer reviews analyzed, only 27% of reviewers revised their feedback, but those who did added an average of 80 words. In blind assessments, nearly 90% of these AI-enhanced reviews were judged superior to the originals.
The ripple effects were clear. Author rebuttals became longer and more focused, and reviewers who received AI coaching provided richer follow-ups. The result? A more constructive dialogue between authors and reviewers, one that benefits the entire scientific community.
Mentorship, Mirrors, and the Human Touch
AI’s role in peer review shouldn’t be to make decisions or replace human judgment. Instead, it can serve as a mentor for early-career researchers, guiding them on how to frame critiques. For seasoned reviewers, it acts as a mirror, helping them refine tone and elaborate on key points.
But there are challenges. How do we prevent AI-generated feedback from becoming formulaic? How do editors and authors build trust in a system where algorithms quietly shape critiques? These questions are at the heart of the debate.
Peer review is inherently human, rooted in expertise, nuance, and intellectual rigor. Replacing that with algorithms would miss the point. However, positioning AI as a supportive partner could raise the floor for review quality across the board.
As AI continues to evolve, journal policies must keep pace. Some experts, like William Carson, PhD, advocate zero tolerance for AI use in peer review. Others, like Adrian Stanley, envision a “Swiss cheese” model, in which human and AI reviewers each fill in the gaps the other leaves. The key lies in thoughtful integration, clear guidelines, and ongoing dialogue.
There are many more ways AI can support the peer review process, from assisting researchers in drafting manuscripts to accelerating production and safeguarding research integrity. For a more comprehensive overview of the challenges and opportunities of AI in peer review, read our latest contribution in Research Information.
Peer review will never be perfect. But with tools like the Review Feedback Agent, we’re seeing glimpses of a future where reviewers feel supported, authors feel heard, and science thrives on richer, more constructive exchanges.
AI isn’t here to take over, as long as we don’t let it. It’s here to help us do better.
For more insights into peer review at Karger, check out this page.