
carlo said:
The individual files are more valuable if one wants to check a possible error, since one does not need to download the full zip file just to look at one page.
and, more to the point, mounting them individually means that every page has its own unique u.r.l. pointing to it... you _could_ unzip one page on-the-fly and give each page an address that way; indeed, that's what the internet archive does. but i think it's better to mount each one individually, and _zip_ 'em on-the-fly when somebody wants to download the whole book as a package.
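here's a minimal sketch of the zip-on-the-fly idea, just to show it's only a few lines of work; the directory layout, file names, and function here are my own assumptions, not anything p.g. or d.p. actually runs:

    # zip-on-the-fly: each page image stays mounted at its own u.r.l.,
    # and the package only gets built when somebody asks to download it.
    # the book_dir layout ("scans/book-12345/*.png") is a made-up example.
    import io
    import zipfile
    from pathlib import Path

    def zip_book_pages(book_dir: str) -> bytes:
        """bundle every page image under book_dir into an in-memory zip."""
        buf = io.BytesIO()
        # ZIP_STORED skips re-compression: the scans are already
        # compressed images, so deflate would just burn cpu for nothing.
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
            for page in sorted(Path(book_dir).glob("*.png")):
                zf.write(page, arcname=page.name)  # keep the archive flat
        return buf.getvalue()

    # usage: build the package only when a download is requested
    # open("book-12345.zip", "wb").write(zip_book_pages("scans/book-12345"))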
carlo said:
Your images are 20 MB; assuming that this is representative of the collection, all 40,000 books could fit on a single 1 TB disk.
there aren't 40,000 books. more like 30,000. but now that michael's gone, who's counting?
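the back-of-the-envelope works out either way, taking carlo's ~20 MB per book at face value:

    # quick check of carlo's estimate, at both book counts
    mb_per_book = 20
    for books in (40_000, 30_000):
        total_gb = books * mb_per_book / 1_000
        print(f"{books:,} books -> {total_gb:,.0f} GB")
    # 40,000 books -> 800 GB; 30,000 -> 600 GB. either fits on 1 TB.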
carlo said:
all 40,000 books could fit on a single 1 TB disk.
and again, more to the point, my demo showing p.g. could grab the d.p. images via the backbone (and not force volunteers to do 2 slow re-routes) was a solution from a complete outsider like me... but p.g. and d.p. have an ongoing relationship, no? i mean, greg's on the p.g. board of directors, isn't he? so what they should _really_ do is buy a hard-disk and duplicate the entire contents of the d.p. site -- images, text, forums, all of it -- and then send that hard-disk to p.g., where the scans could be mounted the next day. plus d.p. would have another back-up. it's surprising that they haven't already done it, and mind-boggling that it hasn't even occurred to them, at least as far as anyone can tell from appearances.
carlo said:
But a few years ago, when they started keeping page images, it cost much more.
several policies seem not to have been revisited in many years now. p.g. is increasingly out-of-touch. and, as this very issue indicates so well, when it does try to update stuff, not much thought goes into things.

-bowerbird

p.s. so carlo, got your rewrap of p&p yet?