Subject: Grade-level writing - tool suggestions? (take II)
From: Geoff Hart <ghart -at- videotron -dot- ca>
To: techwr-l -at- lists -dot- raycomm -dot- com, Linda Hughes <Linda -dot- Hughes -at- respironics -dot- com>
Date: Thu, 19 Feb 2004 10:44:12 -0500

Linda Hughes responded to my suggestions:

<<Regulatory requirement. If we write in our product spec that the user manual for homecare ventilatory-assist equipment will be written at a 7th grade level, the FDA then requires us to live up to that (a bit like an ISO requirement). >>

I don't know anything about FDA regulations, but with that caveat: The simple solution, then, is not to mention the grade level at all in your spec--that's patronizing to those who will be reading this. ("Oh great. They dumbed it down so _I_ can understand.") Instead, aim for simplicity, and hire a human to confirm that the text really is as simple as you think. Even better: state in your specs that you will ask a few members of your target audience to review your documentation (a reality check) rather than relying on a synthetic metric to do the work for you. Then, rather than having to tell the FDA "we used a software tool", you can say "we tested it with real customers and revised it until we were sure it worked."

Far, far more effective. And if you're in the U.S., you'll face a much lower risk of a liability lawsuit because you used real people to test your documents. Isn't that why the FDA insists on three phases of clinical trials before releasing a drug on us?

<<I'm not so sure about that. The basics of the Flesch-Kincaid scale are syllables per word and words per sentence. Those criteria are certainly valid, even if one tries to beat the system by feeding it nonsense. In other words: The scale can't read, but that doesn't mean it's not a useful mathematical equation!>>

You're not sure because you didn't try my test, and because you haven't read the research. <g> Tests such as Flesch are used because there's a neurotic compulsion in certain parts of the educational community to create metrics. Metrics are great because they don't require any thought, and because you don't have to demonstrate that they're meaningful. They're numbers, so they _must_ be meaningful. Sadly, that's not always true.
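To make the point concrete, here's roughly what the arithmetic behind the grade-level score looks like. This is a minimal Python sketch, not any vendor's actual implementation; the syllable counter is a crude vowel-group heuristic I've assumed for illustration (real tools use fancier ones), but the principle is identical: the inputs are counts, nothing more.

    import re

    def count_syllables(word):
        # Crude heuristic: each run of vowels counts as one syllable.
        # (An assumption for illustration -- it ignores silent e, etc.)
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text):
        # Flesch-Kincaid grade level:
        # 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * len(words) / sentences
                + 11.8 * syllables / len(words)
                - 15.59)

Nothing in there knows what any of the words mean.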

Think of it this way: There are two components to judging whether text is simple to understand and communicates effectively. First, there are purely mechanical measures: all else being equal, it's true that shorter, less convoluted sentences with shorter and more familiar words are easier to read.

Unfortunately, "all else" is never equal, which is where the second part comes in. No current software can judge the quality (simplicity, correctness, consistency, clarity, and ability to meet the audience's needs) of the semantic content. A sentence that easily passes the Flesch test can fail on each and every one of these, and that's hardly a desirable outcome. The semantic content is far more important than the purely mechanical aspects of the text, which is why I proposed that you try randomizing the words: It's always easier to read long, complex sentences that are well written than short, simple stretches of gibberish.
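If you want to try the randomization test yourself, it's a few lines on top of the sketch above (the sample sentence is just something I made up, not from any real manual):

    import random

    sentence = ("Connect the patient circuit to the "
                "ventilator outlet port before starting therapy.")
    words = sentence.split()
    random.shuffle(words)
    gibberish = " ".join(words)

    # Same words, same syllables, same single period -- so the
    # score is identical, even though one version is word salad.
    print(fk_grade(sentence))    # fk_grade() from the sketch above
    print(fk_grade(gibberish))

The scale assigns both versions the same grade level; a human assigns one of them a failing grade.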

--Geoff Hart ghart -at- videotron -dot- ca
(try geoffhart -at- mac -dot- com if you don't get a reply)




