
Bowerbird wrote:
further, the x.s.l.t. methodology that has always been the crucial linchpin in the "strategy" of x.m.l. advocates here is one of the ones that simon relegates to the past-tense. i find that interesting.
Do you think Simon would agree with your assessment of what he is saying? I think you are putting words into his mouth that he did not say or mean. XSLT is being *massively* used in quite a few XML applications, and successfully so. No doubt XSLT has its share of problems, as all human-made systems do, but such problems have not stopped it from being used in real-world systems. XSLT is not a theoretical spec -- it is definitely not "vapour."

DocBook is one notable success story. O'Reilly uses DocBook in much of its publishing workflow (it was interesting to hear Tim O'Reilly speak at Reading 2.0 -- he's a super-pragmatic person -- they use DocBook and XSLT/XSL-FO because it *makes sense to*.) Rosetta Solutions and other document conversion houses are moving fast to mastering in XML (Rosetta Solutions is using DocBook) and using XSLT (and the related XSL-FO) to output to various formats. It's been eye-opening to talk with several of the conversion houses (as we have been doing for both OpenReader and LibraryCity.) I also recall seeing a couple of online book projects in academia which master in TEI and use XSLT to generate XHTML and other formats. Do a check on Google for "TEI XSLT" -- 168,000 pages came up. Have fun.

If PG/DP has failed so far to move to TEI-based (or other XML vocabulary) mastering, it has little to do with XML, XSLT, etc. It's simply the limited time of the volunteers. I notice that things in PG-Land tend to move slowly anyway in most areas, particularly when it comes to change. Look at the problems you've had in getting text errors corrected! (Although maybe that's due to not submitting error reports to the right place.) DP is where most of the action is taking place these days, but even there, DP's long-planned move to a next-gen system (which includes uniform XML-based mastering) appears to have been put on hold as well (or they're doing it in smaller increments). They're too busy producing texts. It is the tyranny found in every volunteer organization with limited funding (and even in well-funded orgs): change tends to take a long time unless some bright light steps forward to make something happen.

You will no doubt argue, and there is merit in your argument, that your ZML system (which is essentially regularized plain text) is the answer to all PG's and DP's woes, but then you have to *show* that ZML has sufficient structural resolution for all the things they'd like to do with their texts in the long-term future. But your approach so far to convincing others reminds me a lot of the famous advertising slogan of "Ralph's Pretty Good Grocery" (in Garrison Keillor's mythical small town of Lake Wobegon): "If you can't find it at Ralph's, you can probably get along without it."

What's needed on both sides of the debate is a clear-cut requirements list of exactly what the "master" format is to accomplish/fulfill. That will then determine whether the simpler regularized-text approach is sufficient, or whether an XML-based approach is called for. From my study of related systems over the last few years, the XML-based approach is worth the extra work to get there, provided the XML vocabulary is properly chosen and consistently applied. So your saying "trust me, ZML is sufficient" is itself an insufficient statement. It's like George Bush saying "trust me, the invasion of Iraq is justified." You even come across like George Bush, who "knows" what's good for us but doesn't bother to explain why. Just "trust me, I know what's good for you."
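To make concrete what "use XSLT to generate XHTML" looks like in practice, here is a deliberately minimal sketch. The two element mappings, the assumption of an un-namespaced TEI Lite source document, and the file names further down are mine for illustration only -- the real TEI stylesheets run to thousands of lines of templates:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative sketch only: map two TEI elements to XHTML.
         Assumes an un-namespaced TEI (e.g. TEI Lite) source file. -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <xsl:output method="html" indent="yes"/>

      <!-- Emit a bare XHTML shell and process only the text body,
           skipping the teiHeader -->
      <xsl:template match="/">
        <html>
          <body>
            <xsl:apply-templates select="//text/body"/>
          </body>
        </html>
      </xsl:template>

      <!-- TEI paragraph becomes XHTML paragraph -->
      <xsl:template match="p">
        <p><xsl:apply-templates/></p>
      </xsl:template>

      <!-- TEI emphasis becomes XHTML emphasis -->
      <xsl:template match="emph">
        <em><xsl:apply-templates/></em>
      </xsl:template>

    </xsl:stylesheet>

An XSLT processor such as xsltproc would apply it with something like "xsltproc tei2xhtml.xsl book.xml > book.html" (both file names hypothetical); other processors work much the same way. The point is that the transform, not the master file, decides what each output format looks like.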
Of course, as I noted above, there's no agreed-to requirements list on which to base any important decision, so this debate is sort of being conducted in the dark. Nevertheless, several of the main players in DP and PG have a pretty good intuitive feeling that regularized text is not sufficient. Since XML, properly done, will always surpass ZML in document structure resolution, the conservative position is XML (better to have more machine-readable document structure than less -- one can always scale back on the markup later if it's found unnecessary. But if there's a million texts with insufficient structural resolution, then that's a BIG problem.)

Also, developing a killer "viewer-app" system for ZML is not sufficient to prove the merit of ZML, either, since visual presentation is only one use of digital texts. There are other uses such as non-visual presentation, inter-publication linking, annotation, searching/data-mining, machine translation, etc. There are no doubt uses not yet recognized which may require more, not less, document structural identification. Each use adds its own set of requirements.

Jon Noring
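P.S. To make the "structural resolution" point concrete, here is a purely illustrative comparison -- the passage (Blake) and the particular TEI elements are my own choices, not anything PG or DP has settled on. Regularized plain text can only signal a quoted bit of verse by how it looks, say by indentation:

        Tyger Tyger, burning bright,
        In the forests of the night

The same passage mastered in TEI records what the structure *is*, not just how it happens to look:

    <quote>
      <lg type="stanza">
        <l>Tyger Tyger, burning bright,</l>
        <l>In the forests of the night</l>
      </lg>
    </quote>

From the TEI version a stylesheet can regenerate the indented plain-text rendering (or braille formatting, audio cues, a search index keyed to verse lines) whenever wanted, and the markup can always be scaled back later. Recovering that structure from the plain text alone means guessing.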