re: [gutvol-d] XML: Interpublication and deep linking

jon said:
One argument brought up here yesterday is that the interlinking of publications, especially deep linking to spots within documents, is so brittle as to be useless (e.g., move the resource, the URL changes, voilà, the link is broken.)
that wasn't my "argument" at all. my argument was that people who make promises about their ability to pull off deep linking cannot back them up _unless_ they posit a means of dealing with impermanence. but if such a means is accepted, _anyone_ can do deep linking.
It does, however, require a unique identifier for each resource.
and here it is in jon's post: weaseling via a "unique identifier". heck, if everything has a "unique identifier", _i_ can give you a deep-linking system based on zen markup language. easily. and -- as always -- _without_ the cost of any heavy-markup. you're just shifting the permanence requirement from the u.r.l. to the unique identifier. it's sleight-of-hand, and nothing more.
the W3C suite of related technologies of XLink/XPointer/XPath/xml:id/etc. --
one trick that a hand-waver counts on to distract you is to point to a bunch of acronyms as if _they_ solve the problem. but if you look at them closely, they're mired in the same muck.
It is not *my* system at all (as the poster alluded to), and it makes no difference whether I myself build some sort of linking system or not.
it makes a difference whether _anyone_ builds such a system or not. considering how widespread and well-developed you claim x.m.l. to be, why isn't such a system _already_ developed and out proving its worth? as it is, i see the ability to link to paragraphs, but _only_provided_that_ every paragraph that you want to link to has heavy markup attached. i suppose we could deep-link to every darn _character_ if we wanted to, provided we were willing to bloat up every file to an unbelievable level. (in this regard, see
http://diveintomark.org/archives/2004/05/30/pink-numbers#w200405302127-53 for a silly word-based version.)
To reiterate, XML provides standardized and convenient hooks for linking and deep linking into publications.
you always _state_ that x.m.l. solves the problems. but you never tell us _how_ it solves the problems. you drop acronyms, wave your hands, and that's it. as usual, you're asking people to buy a pig in a poke. and as usual, i'm asking to see the pig, not the poke. -bowerbird

Bowerbird wrote:
my argument was that people who make promises about their ability to pull off deep linking cannot back them up _unless_ they posit a means of dealing with impermanence. but if such a means is accepted, _anyone_ can do deep linking.
But deep linking is now done in the XML context. It is as near as your web browser. Here's a deep link to paragraph 395 in the XHTML version of "My Antonia": http://www.openreader.org/myantonia/basic-design-nopagenum/myantonia.html#p0... A fragment identifier (here #p0395, pointing to the element with id "p0395" in the document) is one of the constructs supported by the XPointer specification. The "My Antonia" document is XML.
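To make the mechanics concrete, here is a minimal sketch of what a browser does with that fragment identifier: it finds the element whose id matches the part after the "#". The XHTML snippet below is invented for illustration (only the id "p0395" echoes the example above), and the lookup uses Python's standard library.

    import xml.etree.ElementTree as ET

    # Invented, minimal XHTML; only the id "p0395" echoes the deep link above.
    xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
      <body>
        <p id="p0394">...</p>
        <p id="p0395">Paragraph 395 of the novel would appear here.</p>
      </body>
    </html>"""

    fragment = "p0395"  # the part of the deep link after "#"
    root = ET.fromstring(xhtml)

    # A browser, in effect, scans for the element whose id equals the fragment.
    target = next(el for el in root.iter() if el.get("id") == fragment)
    print(target.text)  # -> Paragraph 395 of the novel would appear here.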
heck, if everything has a "unique identifier", _i_ can give you a deep-linking system based on zen markup language. easily. and -- as always -- _without_ the cost of any heavy-markup.
Of course, but ZML is inferior to XML-based markup for deeplinking purposes. I won't bother to go into the several reasons.
you're just shifting the permanence requirement from the u.r.l. to the unique identifier. it's sleight-of-hand, and nothing more.
I'm not the inventor of identifiers. They are used everywhere. Isn't it more than a coincidence that PG has its own identifier system? For example, here's PG text #10396: http://www.gutenberg.org/etext/10396 Are you saying identifiers are not needed? And that booksellers can get by without ISBN or UPC codes?
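And the permanence question is a lookup-table problem, not a markup problem: a stable identifier only has to map to whatever URL currently serves the resource. A toy sketch follows; the "pg:10396" naming and the resolve() helper are invented for illustration, and only the PG etext URL comes from the example above.

    # Toy identifier resolver. Only the PG etext URL comes from the post above;
    # the "pg:10396" naming and the resolve() helper are invented for illustration.
    CATALOG = {
        "pg:10396": "http://www.gutenberg.org/etext/10396",
        # If the file moves, only this entry changes; every published reference
        # to the identifier "pg:10396" keeps working.
    }

    def resolve(identifier):
        """Translate a stable identifier into whatever URL currently serves it."""
        return CATALOG[identifier]

    print(resolve("pg:10396"))  # -> http://www.gutenberg.org/etext/10396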
It is not *my* system at all (as the poster alluded to), and it makes no difference whether I myself build some sort of linking system or not.
it makes a difference whether _anyone_ builds such a system or not.
You mean web browsers don't count? What I'm saying is that the XML suite of technologies provides an excellent framework by which a deeplinking system can be built. Web browsers *prove* this is easy to do. I've not heard anybody say "damn, it is so tough to implement fragment identifiers in web browsers". Notice my use of the word "hooks" to describe the benefits of XML in this particular application.
considering how widespread and well-developed you claim x.m.l. to be, why isn't such a system _already_ developed and out proving its worth?
You mean web browsers don't count? I'll let the XML developer types here (such as Marcello) elaborate further, but developing a deep-linking system into XML documents is almost trivial -- at least the portion of the system that locates the content; what to do with the found content is another matter, but that's an issue to be dealt with no matter how the content was located. It's just a matter of need at the time. With web browsers there is clearly a need, so it has been done ever since Mosaic.

As for XML, it is now just about everywhere. For example, the RSS feed sent by Bowerbird's blog is XML. More and more web pages are well-formed XML. In fact, I'd put the number of well-formed XML documents on the Internet in the several hundreds of millions (aren't there now over one billion web pages?) Heck, I think one finds XML in some automobile electronics. XML is everywhere, doing its job, usually under the hood. Just visit and read the archive of the "XML Cover Pages": the amount of usage of XML and its related W3C technologies (for both content and data uses) is amazing, and the "Cover Pages" probably only skims the surface of XML implementation in the world.
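To give a feel for how small the locate-the-content portion is, here is a rough sketch using only the Python standard library; the sample document and its ids are assumptions made up for illustration, not taken from any actual PG file.

    import xml.etree.ElementTree as ET

    # Made-up two-paragraph document, purely for illustration.
    doc = ET.fromstring(
        "<book>"
        "<p id='p0001'>First paragraph.</p>"
        "<p id='p0002'>Second paragraph.</p>"
        "</book>"
    )

    # Locate by explicit identifier (what a fragment like #p0002 asks for)...
    by_id = doc.find(".//p[@id='p0002']")

    # ...or by structural position alone, with no per-paragraph id at all.
    by_position = doc.find("p[2]")

    print(by_id.text)        # -> Second paragraph.
    print(by_position.text)  # -> Second paragraph.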
as it is, i see the ability to link to paragraphs, but _only_provided_that_ every paragraph that you want to link to has heavy markup attached. i suppose we could deep-link to every darn _character_ if we wanted to, provided we were willing to bloat up every file to an unbelievable level.
One XPointer scheme (I'm not sure if it's a Recommendation yet) allows addressing right down to the word and character level.
http://diveintomark.org/archives/2004/05/30/pink-numbers#w200405302127-53
Cool!
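For what character-level addressing amounts to in practice, a resolver only has to count characters inside a located element. Here is a toy sketch in that spirit; the paragraph text, the offsets, and the char_range() helper are all invented, and this is not the actual string-range() syntax from the XPointer draft.

    import xml.etree.ElementTree as ET

    # Invented one-paragraph document, purely for illustration.
    doc = ET.fromstring(
        "<p id='p0395'>Sample paragraph text used only for this illustration.</p>"
    )

    def char_range(element, start, length):
        # Return `length` characters of the element's text, counting from a
        # 1-based `start` offset -- in the spirit of XPointer's draft
        # string-range(), not its actual syntax.
        return element.text[start - 1:start - 1 + length]

    target = doc  # the element a deep link such as #p0395 would select
    print(char_range(target, 8, 9))  # -> paragraph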
To reiterate, XML provides standardized and convenient hooks for linking and deep linking into publications.
you always _state_ that x.m.l. solves the problems. but you never tell us _how_ it solves the problems. you drop acronyms, wave your hands, and that's it.
This is not a correct statement of my position. XML does not in and of itself solve problems, but it provides a uniform, standardized framework within which many problems can be elegantly solved. Will XML and its associated standards solve all problems of Mankind? No. Will it toast bread? No. But with regard to the processing of textual content, it offers a lot of benefits. (It is also proving of great benefit in data exchange and processing applications, which is also of interest to PG/DP with regard to the database aspects of storing the texts and associated metadata.) Looking specifically at the question at hand, XML and its associated standards (XLink, XPointer, XPath, xml:id, and related specifications) provide a standardized base upon which applications to deep-link within content are rapidly buildable, as they are needed.
as usual, you're asking people to buy a pig in a poke. and as usual, i'm asking to see the pig, not the poke.
Great! It is always important to get various sides of an issue. Jon