
Even granting the caveats above, the scores would provide a great starting point to be refined by manual cataloging and literacy labeling.
I don't think so. It's downright useless for manual cataloging, as it only handles that one dimension.
Isn't "useless" a bit strong? Sure, it's only one dimension; that's true of any single piece of information. Right now, a manual cataloger looking for children's books would probably look for known titles and authors, search for some likely keywords ... and then what? How will they surface children's books that they don't already know about? A list of the "most readable" (no matter how flawed the metric) is a MUCH better starting point than the complete list of books at PG.
I don't think it will help much with literacy labeling, either, which is best done manually.
Actually, readability scores are widely used in education. I'm sure they have their detractors, but that's true of almost anything. And even with manual labeling (which hasn't been done to date, so I don't see how it's an argument against an automated solution), scores are still useful.
It also seems a little weird to have some proprietary reading level numbers on the system, instead of the Fog index or the Flesch-Kincaid Readability tests. It feels like an advertisement.
I'm in favor of any and all readability scores. If these existing scores were already in place, I probably wouldn't have bothered to comment. Or, if the choice were Fog + F-K vs. some other score, I would choose the most common one. But I haven't seen anyone offer to add Fog or F-K, so I welcome useful info from any source. Just so it's clear: I have no connection with Rocket Reader; I'm not even sure I had ever heard of them before Greg's note. I've thought for a long time that it would be useful to include readability scores.

Scott
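
For reference, the two standard scores mentioned above are just simple formulas over word, sentence, and syllable counts, so computing them over a PG text would be straightforward. Here's a rough Python sketch; note that the syllable counter is a naive vowel-group heuristic (real implementations use pronunciation dictionaries or better phonetic rules), and the Fog index's definition of a "complex" word is simplified (the full definition excludes proper nouns, familiar compounds, and some suffixed forms).

    import re

    def count_syllables(word):
        # Naive heuristic: count runs of consecutive vowels,
        # dropping a typical silent final "e".
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def readability_scores(text):
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(len(words), 1)
        syllables = sum(count_syllables(w) for w in words)
        # Fog's "complex" words: three or more syllables (simplified).
        complex_words = sum(1 for w in words if count_syllables(w) >= 3)

        # Gunning Fog index
        fog = 0.4 * (n_words / sentences + 100 * complex_words / n_words)
        # Flesch-Kincaid grade level
        fk = 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
        return fog, fk

    fog, fk = readability_scores("The cat sat on the mat. It was a sunny day.")
    print(f"Gunning Fog: {fog:.1f}, Flesch-Kincaid grade: {fk:.1f}")

Even this crude version would be enough to rank PG texts roughly by reading level, which is all the cataloging use case above really needs.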