blah blah blah times 10

don? are you there? if so, please listen up and consider this... first of all, most of these jokers are now in my kill-filter. jim, passey, starner, marcello, you know, the whole crew. all because they failed to add value to the conversation. (or actively subtracted value.) so i really don't want to put you in that kill-filter too, don. but you're seriously going to have to step up your game... because you're not adding much value these days. just sayin'... *** don said:
There's a legitimate argument that, if we hope many people are going to be involved proofing a text, the instructions need to be clear, simple, and few;
that's pretty hard to argue with... but if someone wants to try and defend a position for instructions that are vague, difficult, and numerous, well, gee, i dunno, but that might be... entertaining...
and they should be matching the image,
that needs to be heavily caveated. i know that's the common mantra, but there are tons of "exceptions" and breaches that y'all are evidently just no longer capable of seeing... there are many reasons not to match a scan.
plus as little markup as possible to capture the essential semantics
i _agree_ with that. but i suspect that our "agreement" might be only temporary, due to yet-unspoken criteria.
to capture the essential semantics (to be defined.)
well, _some_ of you might still need to "define" the "essential semantics". but i gave you a list a rather long time ago, so my status is just fine.
Few would expect DP proofers to be applying complex formal markup like TEI, reST, or LaTeX.
you are extremely unclear about whether you're talking about d.p. as it currently exists, or as you think it should change, or whether you mean "p.g. times 10", or something new. so let me be perfectly clear when i say that i am _not_ interested in d.p. as it currently exists, or might change. i am interested in a "best practice" theoretical approach only. having said that... we can't expect an ordinary person to know t.e.i. or latex, or to learn it. on the other hand, r.s.t. isn't all that complicated. and neither is markdown, or z.m.l., or other light systems...
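to make that concrete, here is how little machinery one of those light systems needs -- a minimal sketch, assuming the third-party python "markdown" package (pip install markdown); the sample page is illustrative markdown, _not_ actual z.m.l. syntax:

# render a light-markup page to html in one call.
import markdown

page = """\
# CHAPTER I.

it was a dark and stormy night; the rain fell in *torrents*.
"""

print(markdown.markdown(page))
# prints: <h1>CHAPTER I.</h1>
# <p>it was a dark and stormy night; the rain fell in <em>torrents</em>.</p>

an ordinary person can learn that much markup in an afternoon.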
ZML is the only markup that anyone has suggested might have a chance, but it has other issues.
_you_ might have "other issues" with z.m.l. but z.m.l. itself has no "other issues". speak clearly, and stop trying to spin, because you're no good at it. besides, the point i have always been making is that a light-markup system will work. you don't have to use z.m.l. -- there are other light-markup systems. and you can even invent your own, if you prefer that.

but there's another, deeper, point still lurking here, the "unspoken" assumption that i mentioned above. the way you have put things just now tells us that you have already chopped the digitization process into a "proofing" stage, where it is expected that people are rather witless creatures only capable of basic decisions, and a later stage where brighter people come along and apply their brains to make the text ever-smarter.

i reject that model. wholeheartedly. i say normal people can take pages from start to finish. maybe not every person, maybe not every page, but almost every person can _finish_ almost every page. and no, i'm not the only person who believes that. roger frank built a system on that very premise, and his experience was that it worked very well.

if you treat people like they are dumb proofers, all you'll get from them will be dumb proofing... but if you encourage them to _finish_ the page, what you'll get from 'em is lotsa finished pages. the reason you don't see that kind of performance is that you haven't empowered your people to give it to you, because the tools at hand don't let them.

the first tool you have to give them is light-markup. they need a simple system that does the job _well_. the second tool you must give them is a _viewer_... once they have applied the light-markup, they need to be shown their html-rendered page, so they can compare it to the scan to make sure it "looks right". since your light-markup is designed in such a way that it is impossible to do "presentational" markup with it, if the display "looks right", it's because it is coded correctly from a _semantic_ perspective... if it doesn't "look right", it means it's coded wrong, and must be repaired. this is how you get accuracy.

so you give your "proofers" a system (with tools) that allows them to _finish_ the page right away.
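and before anyone claims that such a viewer is some far-off fantasy, here's a hedged sketch of the proof-render-compare loop. the markup conventions (a leading "# " marks a header, blank lines separate paragraphs) are hypothetical stand-ins, _not_ z.m.l. itself, and "page001.html" is a made-up filename:

import html

def render(source):
    """map a light-markup page to semantic html, block by block."""
    out = []
    for block in source.split("\n\n"):
        block = block.strip()
        if not block:
            continue
        if block.startswith("# "):
            # a header renders as a header -- or it visibly doesn't,
            # and the proofer knows the markup needs repair.
            out.append("<h1>%s</h1>" % html.escape(block[2:]))
        else:
            # rejoin wrapped lines into one semantic paragraph.
            out.append("<p>%s</p>" % html.escape(" ".join(block.split())))
    return "\n".join(out)

page = "# CHAPTER I.\n\nit was a dark and stormy night;\nthe rain fell in torrents."

# write the rendered page so the proofer can compare it to the scan;
# if it "looks right", it is coded right, _semantically_.
with open("page001.html", "w") as f:
    f.write("<!DOCTYPE html>\n<body>\n%s\n</body>" % render(page))

notice there is no way to say "make this bold, 18-point, centered" -- the markup can only express _what_ a thing is, so a wrong-looking page can only mean wrong markup. ***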
On the other hand, once the handwork is done, a version of the text that is exhaustively constructed with all the essential semantics (to be defined) clearly identified and located would, it is supposed, make it possible to write mapping modules to generate formatted texts that convey the structure and meaning of texts clearly and attractively.
that's a lot of words that don't really add up to much, at least nothing that couldn't be phrased more simply. the irony is that all of this confusion is introduced simply because you're _talking_ instead of _doing_.

once you toss off all of the superficial aspects of applying a certain "type" of markup, you'll see that there is little disagreement on the essential base... all you guys are gonna apply _very_ similar .html... further -- according to your own rhetoric -- the .css is user-swappable, so nothing crucial can reside there. so once you get down to the doing of actual examples, you'll find there's nothing left to argue about in words.

of course, then you'll probably fail to realize that you were wasting your time all along. indeed, you'll likely think you "accomplished" something, and proceed to slap yourselves on the back and give high-fives. but seriously, you're just wasting your time.
We have few examples of such a "master text" that has proven capable of providing the source for clear and attractive ebooks in various formats.
bullshit. bullshit bullshit bullshit. i have given numerous examples. and can provide more on demand. i'm not sure what it is that you think you gain by telling such a bald-faced lie, right in public.
I think no one is advocating any form of text that serves both purposes simultaneously, so that apparently leaves another step in the process - converting proofed text into some master text.
i'm not even going to go back and try to sort out what those "purposes" were, but i assure you that it is not necessary to "convert" proofed .zml into "some master text", because it already _is_ that...
Not many have offered to discuss how this would be done.
i've been discussing it for 10 years here.
Which leaves the suspicion that some might expect the DP "post-proofer" volunteers to fill this role.
i don't know what you're talking about. and i suspect that you don't know either.
That seems an unlikely scenario to me.
whatever...
But however it is accomplished, it would probably (it seems to me) depend heavily on the path provided by the essential semantics (to be defined.)
there you go again.
I mentioned the McGuffey text as a possible test case for a discussion with real examples of alternatives.
you still haven't provided accurate text. so, do you want "real" or not? seriously.
I thank Lee Passey for picking up the cue.
an e-book with inaccurate text is worse than none at all.
I find it tedious to endure advocates who only discuss their own ideas, so this will have at least two of us for a while, anyway.
i hope the two of you have a nice long happy relationship.
I'm currently using the McGuffey text, and learning from how Lee has applied XHTML as markup
oh, how sweet is that. you're _learning_ from lee. darling. what a good relationship.
I'm building a list of candidates for "essential semantics".
i can hardly wait. for something i did 8 years ago.
It will be interesting to me to see how Lee views his construction from the same perspective.
isn't it wonderful to fall in love! you want to hear everything the other person has to say, about _everything_. it's so tender.
In the meantime, I'm using the resources of the WordPress installation on readingroo.ms to see how much of the semantic content of McGuffey is available (as bowerbird would like it to be) inherently in the plain text,
don't be an idiot, don. it's _not_ that i would "like it to be" the case that the "semantic content" of a book can be gleaned from the type that is set on the page. it _is_ true. and it always has been. that's how human beings soak up the knowledge that's printed on the page. there's no one sitting next to 'em pointing out that "this is a chapter-header, and this here is a list..." the _words_and_typography_ do all of that implicitly.

you technocrats are so caught up in your "separation" of _content_ versus _presentation_ that you _ignore_ the fact that it is presentation which signals structure. so those two things aren't independent in the slightest. they are woven together inextricably into a potent mix.

so first you toss out the presentation as unimportant, and thus you need to reintroduce the structural info... you're like a food-processor who boils out all of the natural sugars, and then puts in artificial sweeteners. so, in the midst of you thinking you are oh so smart, you're really being just about as stupid as you can be.
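and if you doubt that the type on the page carries the structure all by itself, here's a hedged sketch -- the cues and thresholds (a run of blank lines before an all-caps line means a chapter-header, a short indented line is probably a centered heading) are illustrative guesses, not a spec:

def classify(lines):
    """tag each non-blank line by the typographic cues around it."""
    tags = []
    blanks = 0
    for line in lines:
        if not line.strip():
            blanks += 1
            continue
        text = line.strip()
        if blanks >= 3 and text.isupper():
            tags.append(("chapter-header", text))    # big gap + all caps
        elif line.startswith("    ") and len(text) < 50:
            tags.append(("heading?", text))          # short indented line
        else:
            tags.append(("body", text))
        blanks = 0
    return tags

sample = ["", "", "", "CHAPTER I.", "",
          "it was a dark and stormy night;",
          "the rain fell in torrents."]
for tag, text in classify(sample):
    print(tag, "::", text)

no human told that little function what a chapter-header is; the presentation did.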
and how much needs to be added, by constructing views that allow different versions of text to be viewed in parallel, and edited.
this is a nice vague chunk of text that sounds like it is a _teaser_ for some kind of demonstration. let's see it.
The host has the essential capabilities of a Linux server, including utilities for (I think) all the formats commonly mentioned (except ZML),
screw you, don. i've had sites up that handle z.m.l. for ages now. something doesn't cease to exist just because you won't look at it. so anyone who wants to try out z.m.l. can do it in my sandbox. they don't need to go to yours. *** seriously, step up your game, dude. -bowerbird