A Practical Protocol for Reviewing Climate Science Papers

Because “adequate quality” isn’t helpful — and most of us don’t have time to read a paper twice.

If you’ve ever been asked to peer review a paper and started reading without a plan — you’re not alone.

As I was working on another review, I began to wonder whether I was following some personal method — a kind of internal protocol — or if this was what others were doing too. I started looking into journal instructions and existing peer review guidelines. What struck me immediately was how incredibly vague the language often is: “adequate,” “appropriate,” “proper.” These words don’t help when you’re trying to sharpen your own review process.

Most of us learn how to peer review through our supervisors, as I did during my PhD. That kind of practical, informal training is a great start. But sometimes, it would be nice to have a succinct set of instructions — just to be sure you’re covering the right ground.

Later in your career, reviewing becomes second nature. It takes far less time and becomes part of the daily rhythm of scientific work. But for junior and mid-career researchers, it can still be a challenge to find your own way of doing it — to review thoroughly but efficiently, and to know what’s actually expected of you.

What I’ve put together here is a merged protocol: part Wiley’s peer review guidance, and part my own method. It’s meant to be succinct and inviting, while helping you avoid unnecessary rereading. Because let’s be honest — we rarely have time for that.

Let’s first take a moment to assess what the role of a reviewer actually is. If you search the instructions from major journals, you’ll mostly find another heap of relative terms without much substance. Here’s a clearer summary of what the task really involves:

Validate scientific methods, results, and interpretations
Assess originality, relevance, and ethical compliance
Provide constructive feedback for improvement
Advise the editor — though final decisions rest with them

That’s it. You’re not expected to fix the paper, teach the authors how to write, or guarantee the results are correct — your job is to judge whether the science makes sense and holds up.

And once an editor does manage to find you, chances are more review requests will follow. So the first thing you need to do is be selective. Ask yourself:

  • Do I actually have time to do this?
  • Is the paper so interesting that I’ll make time?
  • Am I confident I can review this without any conflicts of interest?

If the answer still leads you to “yes,” here’s how I would go about it.

Want to Go Deeper?

If you’re looking for a more in-depth (yet still accessible) guide to peer review, the Royal Meteorological Society has published an excellent resource. It goes beyond basic checklists to explain why each part of the process matters — and even includes examples of good vs. bad reviewer comments, which are incredibly helpful.

It’s not a direct download: you’ll need to enter and confirm your email to receive the link. But if you want to strengthen your reviewing skills or better understand the editorial perspective, it’s well worth the two clicks.
RMetS New Peer Review Guide

Downloadables

Peer Review Protocol (PDF)
A full, printable version of the step-by-step guide.
Download protocol PDF

Peer Review Checklist – Printable (PDF)
With open boxes for printing and manual ticking.
Download checklist PDF

Peer Review Protocol

1. Quick Skim for Scope and Structure

Skim the introduction, figures, and conclusion. Ask:

  • Does the paper have a clear flow?
  • Are claims grounded early?
  • Are there visible red flags, such as overreached claims or gaps in the argument?

2. Abstract Expectations & Questions

Read the abstract with focus. Write down:

  • What you expect in terms of methods, data, and claims.
  • Immediate questions (e.g., “Do they account for internal variability?”).

These questions become your guiding lens. Now, begin reading the full paper. At the very end, return to your expectations and questions and ask:

Were they addressed? Did the paper deliver what it set out to do?

This allows you to focus while reading, then assess coherence and completeness afterward—without needing a second pass.

3. Identify the Core Scientific Claim

While reading the main sections (methods, results, discussion), synthesize:

  • What is the paper’s main hypothesis, question, or contribution?
  • How do the methods and results support this claim?

Trace the logical thread of the paper as you go and note the central message.

4. Review Methods and Reproducibility

Are methods clearly described and replicable? Check for:

  • Data sources and preprocessing
  • Temporal/spatial resolution
  • Model parameters
  • Uncertainty treatment or validation runs

5. Spot Vague Language

Here’s what journals and reviews often say—without saying much:

  Vague term → Ask instead
  • “Adequate method” → Can someone reproduce it from the description?
  • “Appropriate dataset” → Is it valid for the time/space scale being studied?
  • “Robust results” → Have sensitivity tests or alternative scenarios been run?
  • “High-quality data” → What are the biases, gaps, and preprocessing steps?

6. Evaluate Results, Figures, and Captions

Are results clear and valid?

  • Do figures/tables support the paper’s claims?
  • Are results compared to literature, baselines, or observations?
  • Are visual conventions followed?
    • Axes labeled (with units)?
    • Intuitive color scales (e.g. blue=cool)?
    • Uncertainty shown (e.g. shading, error bars)?
    • Colorblind-friendly?
      → Use: Coblis Color Blindness Simulator
  • Are captions self-contained?
    Each figure/table should:
    • Be numbered and referenced in the text
    • Include a short title or summary
    • Explain key variables, units, and data sources
    • Provide enough info to understand the figure without reading the main text

Good figures are more than decoration — they are standalone evidence.

7. Context and Limitations

  • Is the research well grounded in existing literature?
    • Does it cite relevant prior work (not just cherry-picked studies)?
    • Are competing perspectives or findings acknowledged?
  • Are limitations discussed honestly?
    • Do the authors explain what the study doesn’t cover?
    • Are assumptions, uncertainties, or boundary conditions clearly stated?

Honest science doesn’t pretend to be perfect — it places itself responsibly in context.

8. Spot Missing Elements

  • Is something important left out?
    • Key mechanisms (e.g., land use, aerosols)?
    • External validation datasets?
    • Contradictory studies?
  • Are uncertainties buried or omitted?

9. Final Review Summary (And Why It Matters)

Start with a neutral summary of the paper’s content (fictional example):

The authors compare four regional climate models (RCMs) over southern Europe to evaluate their ability to simulate extreme heat events during 1980–2020. Using standardized heatwave indices and ERA5 reanalysis as a reference, they show that two models consistently overestimate intensity but capture spatial patterns well. The study emphasizes the need for bias correction in applied impacts research.

💡 Why this matters:

  • Shows editors you understand the work
  • Frames your comments for the authors
  • Helps spot misalignments between reviewers

Write your review in three parts:

  1. Main strengths
    What does the paper do well? Highlight novelty, clarity, solid methodology, or valuable insights.
  2. Major comments
    These are issues that affect the validity, clarity, or strength of the paper’s conclusions.
    For example:
    • Unsupported claims
    • Missing data or comparisons
    • Ambiguous or incomplete methods
  3. Minor comments (structured by section)
    These are smaller points aimed at improving clarity and polish.
    Organize them by paper section (e.g., Introduction, Methods, Results…) to make them easy to follow.
    You can include:
    • Typos or unclear wording
    • Caption or figure suggestions
    • Requests for clarification or references

Grouping feedback like this keeps your review focused, constructive, and easy for both editors and authors to navigate.

Avoid empty phrases like:

  • ❌ “Needs more work” → ✅ “Missing validation against observational dataset X”
  • ❌ “Not convincing” → ✅ “Conclusion assumes linearity without testing nonlinearity”
  • ❌ “Insufficient detail” → ✅ “The methods lack description of spin-up procedure for model Y”

🧠 What if you genuinely feel something is “not convincing”?

That’s valid. Experienced reviewers often sense when something doesn’t hold — even before they can explain why. That’s not emotion; it’s trained scientific intuition.

Use it as a prompt:

  • Ask: What exactly feels unsupported or incomplete?
  • Re-examine the logic, evidence, framing, or what might be missing.
  • Then explain your hesitation in specific, actionable terms.

New to reviewing?
If you’re unsure whether something is flawed or just confusing, say so. It’s perfectly fine to write:

“I had difficulty following the logic in Section 3. It may be valid, but the argument isn’t clear to me as written.”

This helps the authors and editors know whether the issue is with the science itself, or the way it’s communicated.