
On the other side, let's just shut everything down, since most consumer computers have some way of displaying scans.
So we are just wasting everybody's time?
Yes and no. The Google "photocopies" of books available at books.google.com -- their PDF downloads, which are just page images -- ARE useful; I can even read many of them successfully on my Kindle DX. There is some charm in reading books in their original layout, and some charm in seeing the occasional scanner's thumb. Reading pages and pages that have been scribbled on by 200 years of students is not very charming, IMHO, and the Google page images have the blotchy, blurry, heavy-font look of bad photocopies. Even some of Google's EPUB files, which are just OCRs of these same books with all the scannos intact, can make for an interesting read.

The question, in my mind, is whether Google is preserving these books, and doing so for the public good. I suspect that when Google digitizes a book, the original is then trashed by the college library -- the whole point being that they do not want to pay to maintain physical library books in various states of decay. Google then becomes the sole repository for this information, except for a smallish number of copies at TIA. Further, is Google dedicated to keeping this work public, or is it hoping for changes in copyright law that would let it fully privatize these digitizations?

Compare that to what happens when volunteers at DP or PG correct a text and publish it in electronic form. Publicly available? Yes. Available from a huge variety of redundant sources? Yes. Suitable for easy republication on paper by either NFPs or for-profit publishers? Yes. Reflowable, so that it can be read comfortably on a wide variety of devices, by people with differently aged eyes, and by people with little or no vision? Yes. Yes. Yes. Etc.

However, the DP/PG approach is extremely expensive compared to what Google is doing. Consider: Google Books == about 10 million books photo-scanned; DP/PG == about 30,000 books "fully restored." So Google's approach is roughly 300X faster than the DP/PG approach.

My conclusion: in the best of all worlds there would be some measure of VALUE in choosing which books DP/PG puts the effort into fully restoring -- the idea that DP/PG is somehow going to fully restore all the world's books is surely false. When someone at DP chooses to introduce a book that is expensive to do and whose end result has relatively little value to society, other more important books will not be restored. It is not simply a question of "First Come, First Served," because on DP a worthy book can easily get stuck in the queues behind a less worthy one, so that nobody is allowed to work on the more worthy book.

How does one measure "worthy vs. non-worthy"? Not a trivial matter, I admit. But to my mind one measure is obvious: we should not bother to restore books that real people do not, in practice, want to read! I don't care if it's a book on ancient Sanskrit -- if 1000 people want to read it, it's worth doing; if only 6 people want to read it, it's not. As a simple measure, the total amount of time people spend reading the book has to at least exceed the amount of time volunteers spend preparing it, or it's a loss to society. Again, the most popular books on PG are read 100,000 times more often than the least popular ones. It's hard today to find one of those most popular books left to tackle, but it is trivial to find a book to work on that will be 50X more popular than the average book DP finishes.
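To put rough numbers on that argument, here is a quick back-of-the-envelope sketch in Python. Only the 10 million and 30,000 totals come from the paragraph above; the volunteer-hours and reader figures are made-up assumptions, just to illustrate the break-even test.

    # Throughput comparison, using the round figures quoted above.
    google_books = 10_000_000   # books photo-scanned by Google (approx.)
    dp_pg_books = 30_000        # books "fully restored" by DP/PG (approx.)
    print(google_books / dp_pg_books)   # ~333, i.e. roughly 300X

    # The "worth doing" test: total reader-hours should exceed volunteer-hours.
    def worth_restoring(readers, hours_per_reader, volunteer_hours):
        """True if expected reading time exceeds preparation time."""
        return readers * hours_per_reader > volunteer_hours

    # Hypothetical figures: 400 volunteer-hours to restore one book,
    # each reader spending 5 hours with it.
    print(worth_restoring(1000, 5, 400))   # True  -> worth doing
    print(worth_restoring(6, 5, 400))      # False -> not worth doing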
Let Google deal with the unpopular books, and let DP/PG work on books that people actually *want* to read.