Question
How viable is the academic peer review system today?
Answer
The main problem is that it is just too damn slow and archaic. The arXiv model in physics is kinda the jury-rigged version of what it should be.
Of the 5 journal papers I published before giving up, one took 2 years to get through the system. I think I made things worse by doing the kind of unclassifiable stuff that doesn't easily fit anywhere, and triggers extra review cycles everywhere purely because reviewers go "huh? why the hell is this being submitted here?"
That's too long a feedback loop to keep people like me motivated, so the only way to create a decent motivation scheme for yourself is to increase your submission frequency so that at any given time you have feedback coming in on at least one of your papers, and at least one of your papers is getting out the door in any given month. But this is quantity over quality, and there are people (like me) who just don't like being a paper assembly line, cranking out stuff on a publish-or-perish schedule.
arXiv appears to short-circuit the bs to some extent by allowing some peer discussion to happen at a friendly pre-processing stage. I really like that. Another common short-circuit is edited books. I've done a few book chapters, and it's generally a pleasanter, quicker experience, but they are considered a notch below the journal world in prestige, so you pay a cost if you use that channel to speed up your pipeline and make your work more fun and sociable.
The second problem is with blind/double-blind review. I've gone back and forth about this several times, and ultimately concluded that it doesn't really help. All three (open, blind, double-blind) have their unique sociological and bias problems, but I used to think that, on balance, blind was a good choice.
But now, having experienced how in situations like blogging you can get fantastic peer review processes, with very smart (and appropriate) people, going at breath-taking speed, with flexibility on the level of anonymity, I can't go back to academic publishing. It's just not fun anymore, and I don't believe it is really any better at vetting "truth" than something like blogging+open comments.
Blogging, of course, hasn't yet risen to the level where it can handle heavy-lift scholarly work (early on, I installed a LaTeX plugin on my blog, intending to do mathematical posts of an almost academic type, but that turned out not to suit the medium well). There are also obvious problems with equating a commenting community or a blogroll-mutual-linking community with peer review, because the system can slide too easily into a mutual admiration society. But with a few checks and balances, it can get there.
I believe academic publishing can, and eventually will, acquire a blogging-like infrastructure. The current structure of academic publishing is just obsolete, and wastes vast amounts of human time and talent producing over-processed crap that mostly just vanishes into the black holes of the uncited.
Until that happens, I'll only participate if I absolutely must.
There is a third problem with peer review that I can't see how to fix at all: the extreme over-specialization all around, and the heavy use of computing. Requiring a reviewer to actually understand a paper in detail, at a level where s/he can assess its originality, correctness and non-triviality, is rapidly becoming an impossible demand. Nobody has that kind of time. I could make no sense of a third of the papers I was asked to review, and/or judged that I'd have to study something for 2-3 months before I was competent enough to review it. And that's just to review. If you want reviewers to understand at a level where they can reproduce the results... hell, you'll have to pay them to devote the months of full-time work it would take. Especially in my area, where modeling and simulation are central, "reproducing" means reprogramming the models from scratch or auditing the code. Who the hell can do that anymore? Most times, I just had to accept on faith that the code was correct, based on the superficial plausibility of cherry-picked results graphs/tables.