> I searched the archives and the web and found references to the Gunning Fog
> Index and the Flesch Readability Index, but I couldn't determine if these
> enforce a grade level standard
I haven't actually used either in quite some time, but I know that at
least one of them does. They are very "mechanical" processes, though,
and do not measure very accurately, IMO. They're based primarily on
the length of words and sentences, and I don't believe they
differentiate between Latin- and Germanic-based words, simple and
complex sentences, or any of the other factors that require human
judgment to assess.
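For what it's worth, the formulas themselves are simple enough to sketch. Here's a rough Python version of both indices; note the syllable counter is a naive vowel-group heuristic of my own, not what any real checker uses, so treat the numbers as approximate:

```python
import re

def rough_syllables(word):
    # Naive heuristic: count vowel groups, subtract a trailing
    # silent "e". Real tools use dictionaries or better rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def gunning_fog(text):
    # Fog = 0.4 * (words/sentence + 100 * complex_words/words),
    # where "complex" means three or more syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if rough_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

def flesch_reading_ease(text):
    # Flesch = 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word);
    # higher scores mean easier reading.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(rough_syllables(w) for w in words)
    return (206.835 - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

As you can see, word and sentence length are all they look at, which is exactly why they miss the distinctions I mentioned above.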
Once upon a time, I had a grammar checker called Grammatik or
something like that that could apply both of these indices to your
documents. I suppose you could also do it by hand, if it were
acceptable to select random excerpts or something. (You probably
don't want to hand-count the letters and sentences in all of your
documents.)
Keep in mind, though, that these indices are simple mechanical
devices for measuring a very complex and human thing. While they may
point out certain odd habits or patterns in your documentation, they
cannot begin to approach the kind of meaningful analysis a human
being could provide.
> (plus, if they're dated, and with the decline
> of literacy...)
I don't think they've modified the standards in some time, have they?
lisarea -at- druak -dot- dr -dot- lucent -dot- com