1. Choose a book to digitize

First of a series of notes I'm assembling in the blog to build from for a discussion of project 10X. Currently the identification of books to digitize and convert to ebooks is entirely up to the public. And it's not apparent that people are beating down the doors with titles they demand be published. The two publicly accessible DP sites often plead for new material for the beginning rounds. We see that even the production we do get is frequently a new version of an existing ebook. Yet we know there are major, well-known titles that are not available. My own projects, the volumes of Encyclopedia Britannica edition (the most renowned and most recently public-domain-available,) is not online in any quality digital form. There is no digital copy of Newton's Principia Mathematica in the English language. No copy of Ptolemy's Almagest. One wonders how many historically significant books are being irretrievably lost in the destruction and violence in Syria and Egypt and Libya, literarily among the most historically active areas in the world. So it may not be the most critical step in the process, but it will need to change and grow to support 10X, if the other steps progress. Even at the current rate, it would help to have some easily accessible set of lists of culturally valuable works, in all languages and traditions for people to use to search for sources, and a place to just post the images and/or urls if they find them. We now have a number of productive harvest sources for both images and crappy text digitization, but I've not seen a list of potential PG candidates, or a checklist of ebooks already in the existing catalog to guide anyone wanting to cull. (In fact, is there a checklist of existing titles in PG that's exhaustive and accessible? The rdf files don't qualify for casual use. Nearly every page of Encyclopedia Britannica is thick with both bibliographies, and with articles about authors and texts that are largely undigitized. I imagine many of them are already lost. How many other ebooks in the catalog would be similarly rich in authors and titles. As DP-Europe brought to our attention, there are entire cultures and languages whose written records are fast disappearing. We often take the identification and selection of books to be something we can assume takes care of itself; bu there's good reason to think curation in this area is important and in fact requires more attention if PG is to grow. Some will argue that we shouldn't be concerned about new texts while the existing catalog is in questionable condition. But that assumes that one needs to choose between the two; and that we will continue to suffer for the lack of a decent process for incremental improvement - something we'll need to discuss in further installments. But the most compelling counter-argument is PG's original mission statement - more free books for more people - combined with the known fact that books are disappearing from our reach daily. Some will argue that we shouldn't be concerned about new texts while the existing catalog is in questionable state. But that assumes that one needs to choose between the two; and that we will continue to suffer for the lack of a decent process for incremental improvement - something we'll need to discuss in further installments. But the most compelling counter-argument is PG's original mission statement - more free books for more people - combined with the known fact that books are disappearing from our reach daily.
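Regarding the question of an exhaustive, casually usable checklist of existing PG titles: here is a minimal sketch of how one could be pulled out of the offline RDF catalog dump. It assumes the current layout of the dump (one pgNNNNN.rdf file per book, with the title in a dcterms:title element) and a hypothetical extraction directory; treat it as an illustration rather than a supported tool.

    import os
    import xml.etree.ElementTree as ET

    CATALOG_DIR = "cache/epub"   # hypothetical path to the extracted RDF dump
    DCTERMS = "{http://purl.org/dc/terms/}"

    def iter_titles(catalog_dir):
        """Yield (ebook number, title) for every .rdf record found in the dump."""
        for root_dir, _dirs, files in os.walk(catalog_dir):
            for name in files:
                if not name.endswith(".rdf"):
                    continue
                try:
                    tree = ET.parse(os.path.join(root_dir, name))
                except ET.ParseError:
                    continue  # skip malformed records rather than aborting the run
                number = name[2:-4]  # "pg12345.rdf" -> "12345" (assumed naming)
                for title in tree.iter(DCTERMS + "title"):
                    yield number, " ".join((title.text or "").split())

    if __name__ == "__main__":
        # Print a plain-text checklist sorted by title.
        for number, title in sorted(iter_titles(CATALOG_DIR), key=lambda t: t[1].lower()):
            print(title + "\t#" + number)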

On Fri, Oct 12, 2012 at 3:47 PM, don kretz <dakretz@gmail.com> wrote:
Yet we know there are major, well-known titles that are not available. My own project, the volumes of the Encyclopedia Britannica edition (the most renowned, and the most recently to enter the public domain), is not online in any quality digital form. There is no digital copy of Newton's Principia Mathematica in the English language. No copy of Ptolemy's Almagest.
And these all clash with the demands that we don't work on scholarly or hard works.
Nearly every page of Encyclopedia Britannica is thick with both bibliographies, and with articles about authors and texts that are largely undigitized. I imagine many of them are already lost.
I would bet not 1 in a 1,000. I think you're vastly underestimating the quality of libraries in the US and UK; if nothing else, they have warehoused every book of the type that EB has referenced. If an author or text was important enough to get an EB article, if it existed at the time, it still exists.
the known fact that books are disappearing from our reach daily.
The known fact? In the history of PG, I've only seen books become more accessible. The volumes I did for PG that I would be most concerned about losing completely, Oklahoma Sunshine, is held by 18 libraries, and 5 outside Oklahoma. The poetry pamphlets I did might be more in danger, but I don't remember their names, and I feel a little guilty about wasting people's time on them. -- Kie ekzistas vivo, ekzistas espero.

On Fri, Oct 12, 2012 at 4:21 PM, David Starner <prosfilaes@gmail.com> wrote:
On Fri, Oct 12, 2012 at 3:47 PM, don kretz <dakretz@gmail.com> wrote:
Yet we know there are major, well-known titles that are not available. My own project, the volumes of the Encyclopedia Britannica edition (the most renowned, and the most recently to enter the public domain), is not online in any quality digital form. There is no digital copy of Newton's Principia Mathematica in the English language. No copy of Ptolemy's Almagest.
And these all clash with the demands that we don't work on scholarly or hard works.
That's a stupid demand. Whatever we do is going to clash with a demand from someone.
Nearly every page of Encyclopedia Britannica is thick with both bibliographies, and with articles about authors and texts that are largely undigitized. I imagine many of them are already lost.
I would bet not 1 in a 1,000. I think you're vastly underestimating the quality of libraries in the US and UK; if nothing else, they have warehoused every book of the type that EB has referenced. If an author or text was important enough to get an EB article, if it existed at the time, it still exists.
Unresolved. I know that, much more often than I would expect, I can Google for books or authors or place names or events, and the only matches I get are from various Encyclopaedia Britannica texts. Often enough it's a Wikipedia article taken verbatim from EB 1911.
the known fact that books are disappearing from our reach daily.
The known fact? In the history of PG, I've only seen books become more accessible. The volumes I did for PG that I would be most concerned about losing completely, Oklahoma Sunshine, is held by 18 libraries, and 5 outside Oklahoma. The poetry pamphlets I did might be more in danger, but I don't remember their names, and I feel a little guilty about wasting people's time on them.
I'll see what others say; but anyway, don't you think that English-language societies are outliers in this regard?
-- Kie ekzistas vivo, ekzistas espero.

On Fri, Oct 12, 2012 at 4:33 PM, don kretz <dakretz@gmail.com> wrote:
I'll see what others say; but anyway, don't you think that English-language societies are outliers in this regard?
Not really. I'd trust that Norway's documents are better archived than the US's. The Middle East and North Africa are volatile enough that they may lose some documents, but for the most part, I think once something has hit the stage of a published book, it's not going to disappear overnight, no matter where it was printed. Stuff that's disappearing has never been published in the first place. -- Kie ekzistas vivo, ekzistas espero.

Going back how far? On Fri, Oct 12, 2012 at 4:40 PM, David Starner <prosfilaes@gmail.com> wrote:
On Fri, Oct 12, 2012 at 4:33 PM, don kretz <dakretz@gmail.com> wrote:
I'll see what others say; but anyway, don't you think that English-language societies are outliers in this regard?
Not really. I'd trust that Norway's documents are better archived than the US's. The Middle East and North Africa are volatile enough that they may lose some documents, but for the most part, I think once something has hit the stage of a published book, it's not going to disappear overnight, no matter where it was printed. Stuff that's disappearing has never been published in the first place.
-- Kie ekzistas vivo, ekzistas espero.

David Price's In-Progress list covers all books in PG, organized by author/title: http://www.dprice48.freeserve.co.uk/GutIP.html. It's full of "Suggested book to transcribe" entries. There are also PG's Offline Catalogs at http://www.gutenberg.org/wiki/Gutenberg:Offline_Catalogs. They're organized by etext number. If you're looking for authors in the public domain outside the U.S., look at http://publicdomainday.org/.

Al

One wonders how many historically significant books are being irretrievably lost in the destruction and violence in Syria and Egypt and Libya, literarily among the most historically active areas in the world.
The answer to "lost works" is to page digitize them in high resolution and make the images available - hopefully somewhat better than what archive.org is doing. The only reason to go through the painful process that DP and PG entails is to create books that people actually want to read. If people don't want to read them, then don't bother. The scholars can always go back and do their research from the page scans. If your idea of "saving books" is to try to run them all through the DP and PG process, then you are right - they are lost forever.

Then you're in luck. My idea of "saving books" isn't to try to run them through the DP and PG process. On Fri, Oct 12, 2012 at 7:46 PM, James Adcock <jimad@msn.com> wrote:
One wonders how many historically significant books are being irretrievably lost in the destruction and violence in Syria and Egypt and Libya, literarily among the most historically active areas in the world.
The answer to "lost works" is to page digitize them in high resolution and make the images available - hopefully somewhat better than what archive.org is doing.
The only reason to go through the painful process that DP and PG entails is to create books that people actually want to read. If people don't want to read them, then don't bother. The scholars can always go back and do their research from the page scans.
If your idea of "saving books" is to try to run them all through the DP and PG process, then you are right - they are lost forever.

On Fri, October 12, 2012 4:47 pm, don kretz wrote:
One wonders how many historically significant books are being irretrievably lost in the destruction and violence in Syria and Egypt and Libya, literarily [sic] among the most historically active areas in the world.
I do not believe that it is now, or ever has been, the mission of Project Gutenberg to preserve world literature. According to the web site, the mission of Project Gutenberg (written in Michael Hart's inimitable style) is:

"To encourage the creation and distribution of eBooks."

There is nothing in the explanatory text surrounding this mission statement that suggests that preservation plays any part in PG's mission, although to be fair there is much in Mr. Hart's statement that is at odds with the current practices of Project Gutenberg.

It seems to me that the mission of Project Gutenberg has nothing to do with the /preservation/ of literary works and everything to do with the /popularization/ and /accessibility/ of those works. And while Mr. Hart never said, "we encourage our volunteers to furnish us with as many rare texts as they can," he did say, "[W]e are happy to bring eBooks to our readers in as many formats as our volunteers wish to make.... [P]eople are still encouraged to send us eBooks in any format and at any accuracy level and we will ask for volunteers to convert them to other formats, and to incrementally correct errors as times goes on."

I have come to believe that when Mr. Hart started Project Gutenberg on the donated mainframe time he understood the potential of storing mass amounts of text on computers, but he did not understand the transformative power of computing. He understood the power of the hard drive, but not the power of the CPU. Thus, when he first started placing text into storage, instead of using a rich format that could be transformed into the Format Of Any Day, he chose to carefully, manually transform each text directly into the Format Of His Day, which in that day was 80-character lines of ASCII-only text, suitable for use on the VT52 terminal.

Over time, the Format Of The Day has changed, but given the difficulty of up-converting VT52 format to more modern formats, and the fact that most modern operating systems can still display VT52 text files, however badly, most PG texts have remained in their original, sorry state. This state of affairs has persisted for so long that most of the PG old-timers see it as being not only normal, but desirable.

Most of the complaints now leveled at PG are not that the archive is too incomplete, but that the contents of the archive are so visually unappealing as to be unusable. Thus, the true mission of Project Gutenberg, "[t]o encourage the creation and distribution of eBooks," is now no longer being satisfied.

So, the first advice /I/ would give to someone wanting to volunteer at Project Gutenberg is to start by learning how to create an electronic book from an existing file (a tutorial to this effect should be created). Then s/he should practice what s/he has learned by taking an existing PG file that s/he is interested in, and which sucks, and make it suck less. The text can then be returned to PG for the kind of incremental update that Mr. Hart envisioned.

This kind of approach provides a gentle introduction to the creation of e-texts. You can take an existing text, see how someone else has done it, see where the mistakes are, fix a few simple mistakes, check it in to PG, see what kind of feedback you get, fix more mistakes, take on a more challenging text, and so on until you're comfortable with markup. Then, go get some OCR'ed text from IA or Google, fix that up and check that in. When you finally get around to doing OCR yourself, you have all the underlying knowledge to fix up your own OCR.
Stay in the shallow end until you learn how to tread water, and do not use the high dive until you are an expert swimmer.
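For what it's worth, here is a minimal sketch of the very first step of that kind of cleanup: unwrapping the hard line breaks of a plain-vanilla PG text so the paragraphs can reflow, or be wrapped in HTML paragraphs later. It assumes the usual PG convention of a blank line between paragraphs, and the file name is hypothetical; the PG header and footer, poetry, and tables would still need hand attention afterwards.

    def unwrap_paragraphs(text):
        """Join the hard-wrapped lines of each paragraph into a single string."""
        paragraphs, current = [], []
        for line in text.splitlines():
            if line.strip():
                current.append(line.strip())
            elif current:
                paragraphs.append(" ".join(current))
                current = []
        if current:
            paragraphs.append(" ".join(current))
        return paragraphs

    if __name__ == "__main__":
        # "pg_etext.txt" is a hypothetical file name for a downloaded PG plain text.
        with open("pg_etext.txt", encoding="utf-8") as f:
            for para in unwrap_paragraphs(f.read()):
                print(para)
                print()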

The mission of PG is accessibility. Getting books to people, so they can actually read them. This is not exactly the same as preservation, but it touches on it in some ways. Yes, you can preserve a book in a sense by scanning it to very high standards, keeping those scans on a few servers for patrons to use, and then locking away the original book in an ideally conditioned environment, taking wear-and-tear out of the picture for all but the most essential physical manipulations. But I think true preservation is more than keeping the artifacts for eternity; it is about keeping the expressions available and accessible, and part of our culture, for as long as they deserve it ("deserve" being decided by the members of the public, not some artificial gatekeepers). For this, works need to be accessible.... Curiously, not our (lack of) standards, not even our limited capacity, but the laws of copyright, cynically advertised as "promoting science and arts," are the biggest barrier: they make works inaccessible by needlessly preventing even the slightest steps required to keep works accessible until they have become completely culturally irrelevant. Of course changing that will take some time, and we do have a huge backlog to work on until that change has materialized.

Turning then to the question of a master format. It didn't take me long, back in 1996 on discovering TEI, to be converted. This is exactly what we needed to get going with a master format. Yes, it takes time to learn, and yes, we do not yet have a fool-proof toolchain for it, but it will survive in the long run because it is reasonably well defined, and still flexible enough to deal with the large number of details we come across in real books. Still, TEI is not a stand-still target, and since its start it has gone through several incarnations, and the whole customization machinery included with it makes it hard as well (but compare that with the Unicode standard: few people will ever require the use of Cherokee or Deseret letters, but still they are in there, just in case). And yes, most tool-chains will require conventions to work well, but that can be solved...

I don't see the point in defining yet another master format to get to something simpler than TEI. TEI has done the groundwork and heavy lifting, and works well, and while I agree that you need to make things simple, you can't really make them simpler than needed. Converting a simple work (a run-of-the-mill novel) into TEI starting from reasonably clean HTML takes an hour or two; starting from Word doubles that, and plain text requires a walk-through of all pages (or scans) to add missing layout information. When I work on a text for PG for which an online source already exists, I often normalize both the OCR results and the alternative source towards TEI, and then run a compare to find issues in both. (As I have done with the "Old Frisian" text in the Oera Linda book recently posted.)

What I could see is a more WYSIWYG interface on Distributed Proofreaders, that is, have people use a Wikipedia-like markup, and then render that page as HTML internally, which will help to find more issues in the content.

Jeroen.
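To make the normalize-and-compare step above concrete, here is a rough sketch in Python. Real normalization would be toward TEI; this version simply reduces both texts to a common word stream (a simplifying assumption, not the actual workflow described above) and lets difflib point at the places where the OCR and the alternative source disagree. The file names are hypothetical.

    import difflib
    import re

    def normalize(text):
        """Reduce a text to a lowercase word stream (a stand-in for real TEI normalization)."""
        text = text.replace("\u00ad", "")   # drop soft hyphens
        text = re.sub(r"-\n", "", text)     # rejoin words hyphenated at line ends
        return re.findall(r"[\w']+", text.lower())

    def report_differences(ocr_text, alt_text):
        """Print the spans where the two normalized word streams disagree."""
        a, b = normalize(ocr_text), normalize(alt_text)
        matcher = difflib.SequenceMatcher(None, a, b, autojunk=False)
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op != "equal":
                print(op, "| OCR:", " ".join(a[i1:i2]), "| ALT:", " ".join(b[j1:j2]))

    if __name__ == "__main__":
        # "ocr.txt" and "alt.txt" are hypothetical file names for the two sources.
        with open("ocr.txt", encoding="utf-8") as f1, open("alt.txt", encoding="utf-8") as f2:
            report_differences(f1.read(), f2.read())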

Hi All,

Let's get some facts straight here. Mr. Hart has always been interested in preserving TEXTS, from day one! Go back to the newsgroup archives for the proof. Yes, accessibility was a very important theme for him. He chose his Plain Vanilla Text format as a "master format". WHY? It was the greatest common denominator of the time. Furthermore, it fulfilled his requirement that the texts can be read by humans and computers alike! PVT did have its drawbacks. Formatting was lost, but that was considered an acceptable caveat.

The discussion of a master format goes back to the early days. I tried to get Mr. Hart to accept something else. The only flaw in Mr. Hart's vision was that he did not see, at the time, that users would want more pleasing-looking texts. At the time this was not possible without restricting accessibility across platforms and devices. (I read my first PG text on a Newton.) Furthermore, not everyone had the tools or knowledge to handle anything more than simple text.

Well, the formats being used today are easily human readable. The situation is even worse! The epub standard cannot be called a standard. It says devices may or may not implement a feature. E-readers are poorly designed; from a software perspective, it would not be hard for them to render HTML properly. The argument that they would not be fast enough or would use too much memory is simply ridiculous! As far as HTML5 is concerned, we do not need to be on the cutting edge or use everything in it.

Anybody old enough to remember the browser wars? There were tons of web sites that looked good in one browser, yet refused to work in another! Yet, most refused to use the greatest common denominator when designing their sites. It was not easy, but it could be done. Computing has come a long way, but the ignorance is still there!

If you want to preserve, then you need a good idea of what you want to preserve! To preserve a book in digital form, all you need is a scan of the book! To make it accessible you have to convert that to a format that preserves the layout. THAT is all that is needed! Really. All the information will be there! Using textual markup has no added value for the purpose of preserving the text. The so-called semantic markup (actually contextual) only overcomplicates the matter. The problem of layout is just shifted to another layer. Furthermore, it has absolutely no academic value that I can see. It cannot be used for text analysis or linguistic analysis. So what is left for the purists out there?

Whatever master format you use, it should do two things: adequately preserve the original layout, and be able to be converted to the formats acceptable to the devices available today and in the future.

regards
Keith.

Whatever master format you use, it should do two things: adequately preserve the original layout, and be able to be converted to the formats acceptable to the devices available today and in the future.
Those who have tried it find these are mutually exclusive goals. How to format something as simple as a "Dear John" letter found within a book becomes dependent on the width of the target class of devices. Wide devices need ways to use up that width or they look stupid. Narrow devices need ways to fit it all in or they look stupid. Formatting and display of information within tables is another, and more severe example of the difficulties we run into. And some very simple things simply "look" different on paper and on electronic devices, leading to different design choices, such as choice of font, and choice of paragraph formatting -- indent looks better on paper, whereas line-between (as Mr. Hart chose) looks better on electronic devices. And the format choices we have to work with today do not understand, nor support, the need to deal with these issues "automatically."

Hi James, All,

You bring up an interesting point. You also prove my point about the poor design of the devices and their software. As far as filling the page goes, there are guidelines which work quite well, that is, using percentages and multiples of the em size of the font used. Tables are a problem in their own right, and depend highly on the output media they were designed for. The other problem is graphics, which, like tables, do not scale well. But here the problem is display real estate. Though, any device under 6" should not be considered a true reader. In other words, smart phones are not readers! Though they are fine for reading text, if one can handle reading small font sizes. On the other side, anything above 10" causes problems too, because there is too much real estate. But then we are forgetting large-format books.

So, the status quo is that there is no way to preserve the original layout on the e-reader devices of today. Furthermore, any attempt to create a master format geared to the output devices will cause problems down the road. I believe the majority of books can be rendered decently in the 6" to 10" range, as the books come in this format anyway. So where does this leave us now? Practically all proposed master formats are not aimed at preserving the original layout, but at supporting, more or less, a particular output format.

This all reminds me of the early days of computing, where text processing and output was a pain. There was an answer: TeX. Today, I believe the answer lies in Lua(La)TeX or ConTeXt. It has the potential to preserve the original format and output that. Furthermore, it can be extended to contain any information one wants to add. The best part is that one can have it output the format one wants, with whatever parameters one cares to have. If something new comes along or changes, just pop in a new Lua module or change an existing one. The only drawback is that there is no tool chain.

regards
Keith.

P.S. Yes, I need to provide some proof, when I get around to it.
participants (7)
- Al Haines
- David Starner
- don kretz
- James Adcock
- jeroen@bohol.ph
- Keith J. Schultz
- Lee Passey