i'm seeing some encouraging developments over at d.p.,
the most notable being an admission that the system has
not always used the time and energy of volunteers wisely.
i also see that the false veneer of "we're all one happy family"
has a few cracks in it now, which i take as a very healthy sign,
because it always struck me as scarily close to cult-like.
but there's enough truth in it that volunteers will emerge from
the currently choppy seas with commitment intact, i believe...
even though there are several suggestions i could make
vis-a-vis volunteer energy, the big issue these days seems to be
the number of rounds, and how those rounds are construed...
rather than go over and post on the forums there, because
hey, you guys don't really want _me_ over there right now,
do you?, i'll just post a couple quick thoughts here instead...
the p1.5 experiment and garweyne's diff tool are interesting,
but um, you're really complicating the process unnecessarily.
and there is a very simple solution. i've suggested it before,
and another person has suggested a variant of it just recently...
(sorry, can't remember exactly who, i think it was jhellingman.)
garweyne's research on the diff tool in part seeks to determine
whether a change that is made was a "significant" one. he has
made some remarkable progress on capturing that information,
really remarkable, but at rock bottom, it's still a difficult judgment.
more to the point, it's an unnecessary one.
as long as _changes_ are still being made to a page, any changes,
even "insignificant" ones, the page should be considered "in flux".
it is only when a number of people (that number unspecified here)
have looked at a page and determined "there's nothing here that
needs to be changed" that we can really consider that page "done".
(even then, it might still have errors. but that'll always be the case.)
so, the question does _not_ boil down to "what changes were made,
and are they significant?", a question that will be difficult to answer,
it boils down to "were any changes made to the page by this person?".
and _that_ question is dirt-simple to answer. if the text is the same,
then no changes were made. no complex analysis is required at all.
as long as changes are being made, not done. no changes, done.
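(for the coding-inclined, here's a tiny python sketch of that rule. every name in it -- Page, record_pass, CLEAN_PASSES_NEEDED -- is mine, purely illustrative, not d.p.'s actual code:)

```python
CLEAN_PASSES_NEEDED = 2  # how many people must view it without changing it

class Page:
    def __init__(self, text):
        self.text = text
        self.clean_passes = 0  # consecutive views with no edits at all

    def record_pass(self, new_text):
        """a proofer just finished the page; compare texts, nothing fancier."""
        if new_text == self.text:
            self.clean_passes += 1   # untouched: one more vote for "done"
        else:
            self.text = new_text
            self.clean_passes = 0    # _any_ change at all: back in flux
        return self.is_done()

    def is_done(self):
        return self.clean_passes >= CLEAN_PASSES_NEEDED

page = Page("teh quick brown fox")
page.record_pass("the quick brown fox")         # a fix: still in flux
page.record_pass("the quick brown fox")         # untouched once
done = page.record_pass("the quick brown fox")  # untouched twice: done
```

notice there's no judgment call anywhere in there -- just a string comparison.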
sure, sometimes you might temporarily cycle through a loop where
one person does something, the next undoes it, the next re-does it,
the next undoes it, but sooner or later, that pattern will be broken.
(and you could jump out of it sooner, too, just by checking to see if
the current version of the page matches the one that is _two_ back.)
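(that early exit is a single comparison against the version two saves back. a hypothetical sketch, again with made-up names:)

```python
def is_ping_pong(history, new_text):
    """history is the list of saved versions of a page, oldest first.
    if this save merely restores what the page said two versions ago,
    two proofers are just undoing each other's change."""
    return len(history) >= 2 and new_text == history[-2]

history = ["colour", "color"]           # proofer A's text, then proofer B's edit
print(is_ping_pong(history, "colour"))  # proofer C undoes B: True
```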
the beauty of this system is its simplicity. no complex analyses and
no "rating" of the proofers (since that seems to be dampening morale).
just a simple method, with a foundation that is extremely intuitive...
this has direct relevance to the "backlog" problems d.p. is facing too.
that's because, if you keep sending the page back through p1 until
two (let us say) proofers both view it without making any changes,
and only _then_ is it sent to p2 (which is considered the "thin line",
because responsibility is charged for any errors that survive _it_),
then the _quantity_ of output from p1 will be curtailed somewhat,
but the _quality_ of that p1 output will be considerably improved,
to a point where you actually have great confidence in its accuracy.
with less quantity, and better quality, pages will breeze through p2.
for some pages -- easy ones, where the first proofer caught everything --
that would mean only 3 p1 proofers would have to view the page.
for difficult pages, it might take 6 or 10 proofers before it's solid.
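(to make that counting concrete, here's a toy python sketch. the function and its inputs are made-up, just to show the arithmetic:)

```python
def rounds_until_done(edits):
    """edits: one flag per p1 proofer in order, True = that proofer
    changed something. returns how many p1 passes the page takes
    before two consecutive untouched views send it on to p2."""
    clean = 0
    for i, changed in enumerate(edits, start=1):
        clean = 0 if changed else clean + 1
        if clean == 2:
            return i
    return None  # still in flux after everyone in the list

# easy page: one fix, then two clean views -- 3 proofers total
print(rounds_until_done([True, False, False]))
# harder page: fixes keep resetting the count -- 6 proofers total
print(rounds_until_done([True, True, False, True, False, False]))
```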
but so what? if that's what it takes, that's what it takes, and you'll
know, when it finally goes out, that you did exactly as much work as
it took to get it _right_, no more, no less. and that's what you really want.
all pages are not equally difficult; you need a variable methodology.
any fixed-number-of-rounds system will result in too few rounds
for the difficult pages, and too many rounds for the simple pages...
and as far as i can tell from comments made, this type of system
would _not_ be hard to implement in the actual code of the site...
there are all kinds of things you could add to this that would make it
even more useful -- like automated diff feedback as learning guide
for proofers whenever their page was subjected to a further change,
or ranking of proofers based on the percentage of pages they did that
were touched/untouched by later proofers -- but since the important
point here is _simplicity_, i'll not bother to discuss those in any detail.
try it! you will quickly see that it works, and works well!
anyway, as always, that's my advice, take it or leave it, as you see fit.
but i get the impression that y'all really want to fix things now, and
that you are getting bogged down in the difficulty of a certain approach,
when -- from my removed perspective -- i can see that a much simpler
way will actually solve your various problems _better_...
best of luck. thanks for proofing!
-bowerbird