we're looking at rfrank's "roundless" experiment at fadedpage.com...
as i said yesterday, this test is a very, very good thing,
because distributed proofreaders has been bogged down in a morass
of "rounds" for many years now. their standard workflow now calls for
_three_ rounds of proofing, followed by _two_ rounds of formatting...
throw in a "preprocessing" round, and their "postprocessing", which
is following by "postprocessing verification", and you've got 8 rounds.
i don't know about you, but to me, that seems like a lot...
but that's not the worst of it. the worst is the resultant backlogs...
the problem arises because d.p. has thousands of proofers doing p1
(the first round of proofing), but only hundreds who do p2
(the second round), and mere _dozens_ doing p3 ("final" proofing)...
needless to say, the large number of proofers in p1 can proof far
more pages than the smaller number in p2, or the tiny number in p3.
the backlog created is (understandably) frustrating and demoralizing
for the proofers trying to keep up in p2, and is killing the p3 proofers.
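to make the mismatch concrete, here's a toy model in python. the
daily capacities are made-up numbers, purely for illustration (they
are _not_ d.p.'s actual throughput figures), but they show how the
queues in front of p2 and p3 grow when each round clears fewer pages
per day than the round feeding it:

    # a toy model of the fixed-round pipeline. the capacities are
    # invented for illustration, *not* d.p.'s actual throughput.

    def simulate(days, incoming=1000, caps=(1000, 300, 50)):
        """track the backlog waiting in front of p1, p2, and p3."""
        queues = [0, 0, 0]                  # pages waiting for each round
        for _ in range(days):
            queues[0] += incoming           # fresh pages enter p1's queue
            for i, cap in enumerate(caps):
                moved = min(queues[i], cap) # each round clears what it can
                queues[i] -= moved
                if i + 1 < len(queues):
                    queues[i + 1] += moved  # hand off to the next round
        return queues

    print(simulate(30))   # -> [0, 21000, 7500]: p2 and p3 are swamped

the exact numbers are fiction, but the shape isn't: any stage that
clears fewer pages per day than the stage feeding it accumulates a
backlog that grows without bound.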
there is also the gnawing feeling that not all pages _need_ 3 rounds.
indeed, _most_ pages in _most_ books are simple enough that they
can be finished in one round, two at the most. so the _inefficiency_
of the 3-round proofing is rather striking as well. the thought is that
each page should be proofed only as many times as that page needs;
this has been labeled as a "roundless" system.
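rfrank's site doesn't spell out the exit rule his system uses, so the
sketch below is only one plausible version (my assumption, not his
design): a page keeps cycling through the proofer pool until a pass
comes back with zero changes, at which point it exits:

    # a minimal sketch of one possible "roundless" exit rule.
    # this is an assumed rule for illustration, *not* rfrank's
    # documented mechanism: a page is re-queued until a pass
    # makes no changes (or a safety cap on passes is reached).

    from collections import deque

    def roundless_proof(pages, proof_pass, max_passes=10):
        """proof each page only as many times as it needs.

        pages      -- iterable of page texts
        proof_pass -- callable simulating one proofer's pass;
                      returns the (possibly corrected) text
        """
        queue = deque((page, 0) for page in pages)
        finished = []
        while queue:
            text, done = queue.popleft()
            corrected = proof_pass(text)
            done += 1
            if corrected == text or done >= max_passes:
                finished.append(corrected)      # clean pass: page exits
            else:
                queue.append((corrected, done)) # changed: needs another look
        return finished

a real system would presumably want smarter rules (say, requiring the
clean pass to come from a different proofer than the one who made the
last change), but the core idea is visible: easy pages exit after one
or two passes, and only the hard pages stay in the pool.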
aside from the backlogs of partially-done material, the other sign of
a problem with the d.p. workflow is that production has flattened...
even though d.p. enjoys a constant stream of incoming volunteers,
thanks to all of the good-will that project gutenberg's free e-books
have generated over the years, d.p. output has leveled out at under
250 books per month, which works out to less than 3,000 per year.
against the backdrop of the _millions_ of books google has scanned,
this is a mere drop in the bucket. a small drop in a very large bucket.
rfrank doesn't go into all of this on his site. perhaps he didn't need to,
since the d.p. people he's recruited are well-acquainted with the issues.
but rfrank is also unclear on many of the details of his little experiment,
which is a more worrying matter.
specifically, i don't see a lot of experimental rigor here. it seems to me
that roger is unfamiliar with the mechanics of the scientific method and
its applicability to human social experiments. i see no evidence of
any stated hypotheses, nor of any way such hypotheses could be
disconfirmed...
people developed the scientific method because we found that when
we just fooled around "to see how things turn out", we often ended
up fooling ourselves about what we had seen, and what it meant.
we learned that we had to actually specify our hypotheses, and devise
tests (experiments) specifically designed to disconfirm them.
otherwise, our brains are only too willing to construe whatever we
find as "supportive" of our initial impressions. ("experimenter bias"
is the name by which this insidious phenomenon is best known.)
if i'm correct, this problem will surface in rfrank's future results, and
surface repeatedly, so there's no need for me to labor the point now.
but i wanted to frame this particular issue, here and now, in advance.
that's enough for today. see you tomorrow...
-bowerbird