
Is the peer-review process slowing science down?
Having spent more than six years publishing in academic journals, I always assumed that long review cycles were simply “how science works.” Months of revisions, rounds of reviewer comments, delays without explanation - that was normal.
It wasn’t until I co-founded a startup that I realised how differently things could work. In a startup, you iterate weekly. You test ideas, gather data, adjust course, and ship improvements continuously. Speed isn’t a luxury - it’s a necessity.
And suddenly it became obvious just how slow scientific publishing really is.
To be fair, expert reviewers add real value.
Some of the best feedback I’ve ever received came from reviewers who deeply understood the field and pushed the work to be more rigorous. But there’s also a trade-off: every layer of review, every admin step, every delay slows the dissemination of knowledge.
At the same time, I’ve seen a clear shift over the past few years. More researchers are sharing work instantly through preprints - ChemRxiv, bioRxiv, arXiv - where ideas reach the community in days rather than months. The broader audience evaluates results in real time, and iteration happens far faster than in traditional journals.
This contrast reveals a deeper tension: traditional peer review emphasises rigour, expert guidance, and controlled validation, but it is often slow and opaque - while open preprint culture offers speed, transparency, and the collective intelligence of the entire community, albeit with more uneven quality control.
Both systems have strengths. Both have weaknesses. But the gap between the pace of discovery and the pace of publication is widening.
If we want science to move at the speed of modern R&D, our publishing models need to evolve. Not to remove rigour, but to remove unnecessary friction.
Because today, what slows science down the most isn’t a lack of ideas. It’s the system meant to validate them.