we're looking at rfrank's "roundless" experiment at fadedpage.com...

***

i'm going over the data for one of the books rfrank used in his test,
and once again the results i observe are striking and unequivocal...

the proofers made hundreds of changes in this 240-page book, but
most of 'em could've been detected and fixed during preprocessing,
which would've made the workflow both smoother and more efficient.
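
to show how mechanical that detection really is, here's a minimal
sketch in python of the kind of checks a preprocessing pass might run
over the page-files.  the patterns, the scanno list, and the file-names
are hypothetical examples of mine -- this is not rfrank's tool, and not
the actual d.p. check-suite -- just an illustration of the idea...

import re
import sys

# each check is (description, compiled pattern); the list below is a
# hypothetical sample, not an exhaustive or official set of checks.
CHECKS = [
    ("space before punctuation",  re.compile(r"\s+[,.;:!?]")),
    ("missing space after comma", re.compile(r",(?=[A-Za-z])")),
    ("doubled word",              re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)),
    ("common scanno 'tbe'",       re.compile(r"\btbe\b")),
    ("digit stuck inside a word", re.compile(r"\b[A-Za-z]+[0-9]+[A-Za-z]*\b")),
]

def check_line(line):
    """return (description, matched-text) pairs for one line of a page."""
    return [(desc, m.group(0))
            for desc, pattern in CHECKS
            for m in pattern.finditer(line)]

if __name__ == "__main__":
    # usage:  python preprocess_check.py page001.txt page002.txt ...
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                for desc, text in check_line(line):
                    print(f"{path}:{lineno}: {desc}: {text!r}")

run something like that over the text of every page and you get a
punch-list of mechanical fixes to clear before any proofer sees page one...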

sure, there is the occasional stealth scanno -- "array" for "army", and
"riot" for "not" -- which (one could argue) would seem to require the
word-by-word proofing that is expected at distributed proofreaders.
but such cases are few and far between, and almost always innocuous.

and certainly one round of such close proofing would be all that is
needed if the obvious-and-easy-to-detect errors were found and
fixed automatically in preprocessing.  once these obvious glitches
have been fixed, the proofer is essentially doing _smooth-reading_...

this is of the utmost importance if you really want (as rfrank claims)
to have each page be "one and out" (i.e., finished by a single proofer).
otherwise once is simply not enough, not for a good many pages...

it's also the case that -- with the right tool -- doing preprocessing
is fun and exhilarating.  it's really a kick in the pants to be able to
improve a book so quickly and efficiently, and move it to "the finish".
next to the tedium of proofing page by page, there's simply no contest...

i have demonstrated this same finding on book after book after book,
with no exceptions, so i am quite confident that it is extremely robust.
all you have to do is look for it, and i assure you that you will find it...

i wonder why so many of you are so resistant to learning the truth...

-bowerbird