
Proofing is inherently linear, involves relatively few differences, and is aided by humans, and the goal is to create a new version, not a merge. The process is simple: compare texts A and B as long as they are equal, gather information as long as they differ, present the difference, offer possible changes, and continue. Without much analysis one can see that this process is linear.
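In Python that loop looks something like the following -- a minimal sketch, assuming the two texts stay line-aligned (no inserted or deleted lines) and using a hypothetical present_difference() as a stand-in for the human review step:

    def present_difference(a_run, b_run):
        # Hypothetical placeholder: show the differing runs side by side
        # so a human can choose or type the correction.
        for a, b in zip(a_run, b_run):
            print("A:", a)
            print("B:", b)

    def proof_walk(a_lines, b_lines):
        i, n = 0, min(len(a_lines), len(b_lines))
        while i < n:
            if a_lines[i] == b_lines[i]:
                i += 1                       # texts agree: keep walking
                continue
            start = i
            while i < n and a_lines[i] != b_lines[i]:
                i += 1                       # texts differ: gather the run
            present_difference(a_lines[start:i], b_lines[start:i])

One pass over each text, so the work is linear in the length of the texts.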
Agreed -- although again you run into problems when your assumptions break down. Pgdiff wasn't intended for these simple "change a couple letters within a line of text" problems. It was intended for problems of the nature of "I have two different editions of the text from two different continents, one using English spellings and one using American spellings, with different linebreaks, different pagebreaks, different intros, censorship, and different indexes, and I want to use one to help find scannos in the other." Yes, it can be used for simpler tasks, but if you have a simpler task you might be better off figuring out exactly what that task is and writing a tool to match it.

Human edits within a line tend to be char-by-char, so you might be better off using a Levenshtein measure with the "token" set to be a char and the "string" set to be a line of text -- to give an obvious example -- since it's not obvious to me how someone uses a mouse and a keyboard to make changes other than "insert a char", "delete a char", or "substitute a char" -- unless one uses cut and paste, in which case all assumptions are off again....
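For the within-a-line case, that char-token Levenshtein measure is only a few lines of dynamic programming -- a sketch with unit costs for the three edit operations:

    def levenshtein(a, b):
        # prev[j] holds the edit distance between a[:i-1] and b[:j],
        # with the "token" being a single char and the "string" a line.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # delete a char
                               cur[j - 1] + 1,              # insert a char
                               prev[j - 1] + (ca != cb)))   # substitute a char
            prev = cur
        return prev[-1]

    print(levenshtein("kitten", "sitting"))   # 3

Cut and paste would show up as one large block insert/delete, which is exactly where the unit-cost assumption stops being useful.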