Reviewing Papers by Reading Group

I love reading groups, and I just had an idea for how they might be able to save scientific peer review.

In case you're outside academia, let me describe what a reading group is, at least from a Computer Science research point of view.

A typical reading group has a theme like Reinforcement Learning, Numerical Optimization, or Computational Sustainability. These may sound incredibly focused to those in the outside world, yet to people in that research area each represents a vast wave of new conference and journal papers every year that they struggle to keep up with. The reading group helps a bit with that: at a weekly meeting, the group discusses a paper that someone has decided "should be read". What's more, each week one of the regular attendees leads the discussion. This is great because it means that at least one person in the room will have taken the few hours most people need to properly digest a scientific paper. Everyone else will probably speed-read it the hour before the meeting.

But an amazing thing happens when a group of 6-12 people, experts in their field, sit around a table and go through in detail what a recently published paper is claiming. Even if not everyone has prepared as thoroughly as they might have, everyone who engages usually gets something out of it. People usually leave the room knowing at least what the authors were trying to do. This can happen when you read a paper on your own, but it's also possible to convince yourself you know what a paper is doing and move on. In a reading group, someone usually asks the hard question, and then everyone notices: the problem with the proof, or the plot, or the badly stated problem at the beginning.

One thing I find when comparing papers I've read in a reading group with ones I've read on my own is that I usually have a much more critical view of the former. Given the combined scrutiny of everyone in the room, you're bound to find the holes that exist in almost any paper. This is the essence of science and of peer review: calling out a wildly optimistic claim, questioning the experimental protocol or the proof method, and demanding more thoroughness.

This got me thinking: it's really a shame this only happens after a paper is accepted to a conference or journal. Surely the best way to review papers for acceptance would be to get some people to commit their reading group to read a submission, critique it, and send the feedback to the editors or program committee to guide their decision.

In Computer Science, a lot of our core research results are published initially in peer-reviewed conferences. These conferences are as hard to get into as journals in other disciplines. Members of the program committee get lists of papers they need to review, and they are allowed to appoint others to submit additional reviews to ensure enough eyes have checked each paper.

The problem is that this amounts to a bunch of people reading the paper individually, writing up their reviews, and sending them off into the ether. Often the next stage involves reviewers discussing their reviews online. But when that discussion is anonymous, between people who don't know each other and never meet, and between people at very different stages of their academic careers, the result is noisy to say the least.

It seems to me a much better approach would be to apportion papers out to those same program committee members, who then promise to hold a reading group session on each paper. The "leader" of each session could rotate among the graduate students and postdocs in the group, and that leader would collate the group's collective opinion after the meeting. The program committee member would then have the same editorial and vetting responsibility over those reviews that they have now, before sending them off to the main paper selection committee. But their review would be much better informed, and arguably would take less effort; at the very least, it should take no more effort than the current process, where all the reviewers read in isolation. A side effect is that a whole bunch of other people would have read some good, and some bad, papers in a critical way. This would teach graduate students a great deal about how to read and critique a paper, and give them a head start on the research that is eventually accepted.

Of course, one problem with this idea is that it spreads unpublished work around to many more people: if a paper is rejected, the authors would have less confidence that their work is still private. In my field that isn't a huge issue, but I know that in some areas of science it's a very big deal.
Most likely someone has had this idea before. If you know of this kind of approach being used anywhere, or have any other thoughts, let me know in the discussion.