
ho ho ho! happy christmas eve! :+) ok, busy day for me, and i'm sure for all of you too, so i'd guess you'll appreciate it if i cut to the chase and leave the extended discussion for next week.

***

for now, i'll conditionally agree i missed 20 errors. but that number is kind of meaningless without also stating the number of errors that i _found_. if i found 2 while missing 20, that's extremely bad. if i found 2000, missing only 20, it's extraordinary. i would estimate that i found 80-200, meaning my accuracy was in the 80%-90% range, which is in the ballpark for a typical p1 proofer, _except_ i only worked for an hour, far less than a p1 round. moreover, i didn't perform the full range of checks. most significantly, i did not check any quotemarks. that's because i thought roger's preprocessing had probably _found_and_fixed_ all of those problems. however, since _7_ of them showed up as "missed", i decided that i should go back and do that check... more on the results of that check coming up later...

***

the other info roger needs to tell us is how many errors _he_ found and missed, and how many "uniques" we each found... when we have those numbers for both proofings, we can then compute how many errors _remain_, which is the thing we _really_ want to know, so we can decide if another proofing is "worth it". this is what don was talking about, except that the situation isn't nearly as vague as he believes. indeed, the specificity you can obtain from a mere _handful_ of data can be quite startling, if you ensure that you select your data carefully. (this often means creating a test to produce it.)

***

one of the things you should absolutely remember is that if you have found an error of a certain type, you _must_ check for more that might be lurking. i did that with the errors of mine that roger listed. he found that i'd missed some "lie for he" scannos. and sure enough, a search turned up another one:
chance that lie may get work the first of the
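(backing up a bit: that "compute how many errors _remain_" step i mentioned is just the classic mark-recapture estimate -- two independent passes, count what each found and the overlap. here's a sketch in python. every number in it is _hypothetical_; plug in the real counts once roger posts his.)

```python
# mark-recapture ("lincoln-petersen") estimate of the errors
# remaining after two independent proofings. all counts below
# are hypothetical examples, not the real numbers for this book.

def estimate_remaining(found_a, found_b, found_both):
    """found_a: errors proofer a found; found_b: errors proofer b
    found; found_both: errors found by _both_ (the overlap)."""
    if found_both == 0:
        raise ValueError("no overlap -- the estimate is unbounded")
    total = found_a * found_b / found_both       # estimated errors in the book
    found_either = found_a + found_b - found_both
    return total, total - found_either           # (total, remaining)

# hypothetical: i found 150, roger found 120, with an overlap of 100
total, remaining = estimate_remaining(150, 120, 100)
print(round(total), round(remaining))   # -> 180 10
```

which is exactly the "startling specificity from a handful of data" point -- three counts give you a defensible estimate of what's still hiding.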
roger listed a paragraph-termination error i missed. and a search revealed another two or three:
The World Syndicate Publishing Co,
be in it, Finny, I mean,
but were giving the Christmas hymn beginning,
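(and that paragraph-terminator search is a check a machine can do for you. a sketch, assuming plain-text with blank-line paragraph breaks; the set of "legal enders" is my assumption -- tune it to the book at hand:)

```python
# flag paragraphs that don't end the way a paragraph should --
# no terminal punctuation (optionally followed by a closing
# quote/paren) at the very end. the "legal enders" set here is
# an assumption; adjust it for the conventions of the book.
import re

LEGAL_END = re.compile(r'[.!?:](["\')\]]*)$')

def unterminated_paragraphs(text):
    """yield (paragraph_number, tail) for each paragraph whose
    last characters don't match a legal paragraph ending."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    for i, p in enumerate(paragraphs, 1):
        if not LEGAL_END.search(p):
            yield i, p[-40:]   # show the suspicious tail for review

sample = 'He left at dawn.\n\nbeginning,\n\n"Fine," she said.'
print(list(unterminated_paragraphs(sample)))   # -> [(2, 'beginning,')]
```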
and, most notably for this book, there were also the double-quote errors roger discovered i'd missed... the systematic search for those was quite interesting.

***

the typographer for this p-book was badly hungover. the rate of errors in this p-book is surprisingly high, including an atypical number of outright misspellings. but the number of doublequote glitches is astounding. i still have to verify them and write 'em up, but i'd guess that i found at least a dozen more, and maybe two dozen... so -- at least in one sense -- you could say i missed even more errors than roger computed. but of course, if i had done this doublequote check in the first place, and done it right, i wouldn't have missed any of these, as doublequotes _are_ a type of error that's detectable. so that check must definitely be a part of our routine... likewise with paragraph-terminators, also detectable...

stealth scannos are a mixed situation. there are tests which _do_ help find stealth scannos. but these tests often pull up "false alarms" as well, so the cost-benefit ratio of doing them is unclear. that's why my general orientation is _not_ to do them, but rather to leave them for the smoothreaders to find, since detecting stealth scannos is often _easy_ for them. and that's what it boils down to -- making this _easy_.

***

more later, maybe today or tomorrow, and next week... watch out for flying reindeer! :+) -bowerbird
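p.s. for anyone who wants to run the doublequote check on their own text, here's the basic idea as a sketch: doublequotes should pair up within a paragraph, so an _odd_ count flags a candidate for a missing or extra mark. (caveat: a paragraph may legitimately end "open" when the next one continues the same speech, so treat hits as candidates for a human look, not automatic errors.)

```python
# flag paragraphs with an odd number of doublequote characters --
# candidates for a missing or extra quotemark. note the known
# exception: continued speech across paragraphs legitimately
# leaves a paragraph unclosed, so these are leads, not verdicts.

def odd_quote_paragraphs(text):
    """yield (paragraph_number, quote_count) for paragraphs
    containing an odd number of '"' characters."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, p in enumerate(paragraphs, 1):
        n = p.count('"')
        if n % 2 == 1:
            yield i, n

sample = '"hello," he said.\n\nshe said "yes.\n\nno quotes here.'
print(list(odd_quote_paragraphs(sample)))   # -> [(2, 1)]
```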