kevin said:
>  
On the Open Library System, I note that
>   high resolution gray-scale scans
>   (at least for the one project I checked)
>   are not archived,
>   though the black and white scans are

it's my understanding that d.p. has kept all scans, but
it's reasonable they wouldn't mount the high-res ones;
no sense letting the general public burn your bandwidth.

this, of course, is the problem with high-res files in general.

they're nice to have, for purposes of "preservation", but
you can't really make them "accessible" in a practical way
until computer resources become free across-the-board,
so -- for the time being -- they don't do much good.

it's not just bandwidth, either.  storage problems quickly
ensue when each page of a book eats multiple megabytes.
and computers need lotsa power to crunch through them.
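
(to put rough numbers on it -- and these are just ballpark
figures of my own -- a 400-page book scanned at 10 megabytes
a page runs about 4 gigabytes, and that's a _single_ book.)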

and sure, we can all see the day coming when all of these
resources _will_ be available to us.  but how soon is that?
are you willing to bet on it?  and don't forget that you are
a lucky first-worlder.  how soon until _everyone_ on the
whole planet has unlimited computing resources?  really?
are you willing to bet on it?  and if the third-worlders can't
have what you lucky people have, how long do you think
they will sit on the sidelines without an all-out revolution?

we need to think in real-world terms, and be _practical_...


>   I also note that there is no 'bulk' download function
>   to get a zip of all the files associated with a text.

yeah, that would be nice.  will d.p. offer that?  who knows?

in the meantime, you can learn the address of an image by
right-clicking it and choosing the appropriate menu-item.

for instance, here's the u.r.l. i recovered for one page:
>   http://pgdp01.us.archive.org/1/pgdp02-archive/texts/documents/43e52c83dd501/web_ready/001.png

subsequent scans have the same u.r.l., except "002.png",
"003.png", etc., so it's very easy to scrape them en masse.
(if anyone needs a scraper-program, just backchannel me.)
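
if you'd rather roll your own, here's a rough sketch in python --
just my own guess at it, assuming the pages really are numbered
001.png, 002.png, and so on, and that the run stops at the first
number that comes back missing:

    # rough scraper sketch -- assumes pages run 001.png, 002.png, ...
    # and that the first missing number marks the end of the book.
    import urllib.request
    import urllib.error

    # base address of the web_ready folder quoted above;
    # swap in the folder for whatever project you're after.
    BASE = ("http://pgdp01.us.archive.org/1/pgdp02-archive"
            "/texts/documents/43e52c83dd501/web_ready/")

    page = 1
    while True:
        name = "%03d.png" % page          # 001.png, 002.png, 003.png, ...
        try:
            with urllib.request.urlopen(BASE + name) as reply:
                data = reply.read()
        except urllib.error.HTTPError:    # no such page -- we're done
            break
        with open(name, "wb") as out:     # save it under the same name
            out.write(data)
        print("saved", name)
        page += 1

point it at the right folder and run it, and it will pull down
the pages one by one until it hits a number that isn't there.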

-bowerbird