Re: Many solo projects out there in gutvol-d land?

walter said:
In that vein, how flexible is the DP software?
it depends. in general, it's not all that flexible... but sometimes a little creativity goes a long way. even less flexible than the software, however, are its coders, who are reluctant to change anything. and even when there is someone like dakretz who is willing to roll up his sleeves and do some work, the administrators won't let him. so there you go.
I've been wondering to what extent parallel P1 rounds might be helpful. ... Aforementioned suggestions may be silly, feel free to point out their silliness.
it's _good_ to "wonder" about things, walter. it means your mind is working on a solution. so that part isn't silly at all. and the part about parallel p1 rounds is not silly either. to the contrary, it's a good idea; it might not _work_, but it's still a good _idea_. so that's not silly at all.

what _is_ silly, however, is that -- in spite of the fact that people have had this good idea for a very long time now -- d.p. has _never_ actually _tested_ it directly to see if it works. oh, they've run some research, and tried out some things, but they've never actually done a full-on _experiment_ to test the hypothesis. so, for years and years, a parallel-proofing idea has been around, but we're still "wondering" whether it might work or not. _that_ is silly...

for the record, once again, i've reassembled some data from various d.p. "experiments" (i'm using the term extremely loosely here), and i've even written software that helps you reconcile two iterations of parallel proofs, so i can give you some conclusions on all that: namely, that it doesn't give you better accuracy, and thus it certainly doesn't outweigh the cost of doing the reconciliation (which is rather high, even with a good tool), so i don't recommend it. a focused experiment on this matter would still be good, though, to validate my findings...

having said all that, there's a "variant" on parallel proofing that you might find interesting... taking o.c.r. results from 2 different sources, comparing them to find their differences, and then resolving those differences and calling it "finished" _does_ happen to be an extremely effective strategy, since it avoids all the word-by-word proofing rounds. i documented all this in a thread on the d.p. forums, entitled "a revolutionary methodology for proofing", or something to that effect... you could look it up...
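to make the idea concrete, here's a tiny sketch of that cross-comparison strategy -- the sample strings are made up, and this is _not_ the actual tool i wrote, just an illustration of the principle using python's difflib:

```python
# sketch of the "compare two o.c.r. sources" strategy.
# the sample page text is invented; any two text versions
# of the same page would work the same way.
import difflib

ocr_a = "The quick brown fox jumps over the lazy dog.".split()
ocr_b = "The quick brovvn fox jumps over the 1azy dog.".split()

# difflib finds the spans where the two o.c.r. readings disagree;
# only those spans need a human decision, not every word on the page.
matcher = difflib.SequenceMatcher(a=ocr_a, b=ocr_b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(f"{tag}: {ocr_a[i1:i2]} vs {ocr_b[j1:j2]}")
```

the point is that wherever the two engines _agree_, the text is very probably right, so a human only has to adjudicate the handful of disagreements.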
I find P2 proofing exceedingly boring because of the small number of errors that are left to be fixed in texts that are well-scanned and well-proofed in P1.
well, there's a lot that could be said about this, walter. perhaps first and foremost is that proofing _is_ boring. especially word-by-word proofing of an already-accurate text.
I can't imagine how mind-numbing P3 will be if I ever become eligible for that 'status'.
since most of the o.c.r. errors are gone by the time of p3, most p3 proofers have resorted to trying to find errors in the book itself -- errors that the publisher or typesetter made. this lets them leave a comment, so they can feel they've done something. for instance, in the book i'm now examining, which rfrank used in his "roundless" experiment, there were 50 comments left in a 240-page book, or about 20% of the pages. of course, addressing all those comments falls to the postprocessor, which is one of many reasons that job has become more taxing in the current era...
I can imagine that only having to look at the differences between redundant P1 proofed texts might be helpful since it would take two independent P1 proofers to overlook the same error to have it slip through.
well, yes, and that's the main argument for parallel proofing. but it turns out that, yes, "two independent p1 proofers" often _do_ "overlook the same error", and it then slips through. in the same manner, sometimes independent p1 and p2 proofers overlook the same error, and it slips through to p3.

now, it would be great if we had some solid _data_ on the numbers, so we could decide how much energy we want to spend catching these errors that slip through. we've found that _some_ errors can go as many as 7 or 8 or 9 rounds without being caught, but no one is suggesting we spend that many proofing rounds on every page... so we have to decide how many rounds of energy we will expend, in order to catch what percentage of the errors. it's really that simple. and to make that decision, it would be great if we had some data. and it's silly -- ridiculous! -- that we have not collected that data.
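just to illustrate the arithmetic of that tradeoff -- the miss rate and error count below are made-up placeholders, _not_ measured d.p. numbers -- if each round independently misses some fraction of the remaining errors, the leftovers shrink geometrically:

```python
# back-of-envelope model of errors "slipping through" n independent rounds.
# both numbers below are invented placeholders, NOT measured d.p. data.
miss_rate = 0.10        # assume a proofer misses 10% of the remaining errors
errors_per_page = 5.0   # assumed starting o.c.r. error count per page

for rounds in range(1, 6):
    slipped = errors_per_page * miss_rate ** rounds
    print(f"after {rounds} round(s): {slipped:.5f} errors/page slip through")
```

under _those_ assumptions each extra round buys a 10x reduction, so the real question -- which only real data can answer -- is what the actual per-round miss rate is, and at what point another round stops being worth the labor.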
Another potential improvement might be to make texts available to the next round on a per page basis instead of having to wait for all pages to be finished in the previous round.
well, now you're suggesting a "roundless" system, walter. which is also not a silly suggestion. unfortunately, it's not a _new_ suggestion either, so you're not advancing the art. what you _are_ doing is showing that we have no data on _this_ particular wrinkle either, even though it's a very old idea... and again, this failure to collect data and test hypotheses is extremely silly, especially since we debate these matters endlessly.

like clara peller bellowing "where's the beef?", we should make it a community slogan to demand "where's your data?"

meanwhile, i keep myself busy collecting what data i can, and writing the software tools we need to do these jobs. and i talk and talk, but most people here are too busy being silly to listen to me. which i find endlessly amusing. :+)

-bowerbird
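p.s. just to make the "roundless" idea concrete, here's a toy sketch -- obviously not d.p.'s actual software, just an illustration -- where each page advances to its next round on its own, instead of waiting for every page of the book to clear the current round:

```python
# toy sketch of a "roundless" flow: each page re-queues individually
# after a proofer finishes it, rather than the whole book moving
# between rounds in lockstep. page count and round count are invented.
from collections import deque

pages = deque((page, 0) for page in range(1, 6))  # (page number, rounds done)
ROUNDS_NEEDED = 2

while pages:
    page, done = pages.popleft()
    done += 1                        # a proofer finishes this page once
    if done < ROUNDS_NEEDED:
        pages.append((page, done))   # this page re-queues by itself
    else:
        print(f"page {page} finished after {done} rounds")
```

the payoff of such a scheme is that no page ever sits idle waiting on the slowest page of its book -- which is exactly the bottleneck walter is pointing at.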