Subject: Re: Quality
From: Peter Neilson <neilson -at- alltel -dot- net>
To: "TECHWR-L" <techwr-l -at- lists -dot- techwr-l -dot- com>
Date: Tue, 23 Aug 2005 18:51:02 -0400

Terry Sole wrote in part:

Can anyone suggest a book or paper that deals with quality and/or performance metrics associated with technical writing? I realize that this is a very contentious issue, but I'm looking for a place to start.

Not as such, but my wife is a quality engineer, so I have a
few ideas that are grounded in the appropriate principles.

First of all, the technical writing must be intended to fulfill
some particular purpose. If you intend to measure the quality, then
you'll need a standard by which you can say the writing does or does
not meet its purpose, and perhaps by what degree. The task of
finding these standards is in itself a tall order, and could even
amount to an entirely separate project for every document!

Simply measuring "errors" (of whatever kind) may give no
indication of quality at all. For instance, if a book is needed in
November, and the editorial staff correct it until December, then
perhaps all its value is lost when the deadline is missed. High
grammar or readability scores are trumped by the book's being
absent when needed.

If the manual is a set of instructions that are to be followed, then
the best test, in most circumstances, is to hand the manual to an
appropriate NEW user of the equipment or software, and to see what
happens. The test is called "fitness for use." It is very important
that the test subjects not be overly familiar with the material, or
the test will be totally invalid. This is a very difficult test to
accomplish, because the appropriate test subjects and their direct
management will usually mount substantial barriers. They will come
up with all sorts of reasons why more experienced test subjects
will be better. After the test, similar objections will be raised
as to why the (poor) results should be thrown out because the test
subjects were not good enough to do the testing. (In the quality
business this is sometimes known as "retest until good.")

As you see, I have a rather jaundiced view of any testing except
that done by actual users or their close and sufficiently naive
surrogates. I would push hard to avoid having my writing rated
by some automatic process that yielded a "quality" number.

On the other hand, if you are writing to a particular audience,
it isn't hard to develop certain ideas that are appropriate to
your actual job. Like maybe your readers need extensive indexes,
or lots of illustrations. Or maybe they ought to have drawings
instead of those horrid photos in the previous edition, of which
the boss was so proud because his son was the photographer.

Hope all this helps, and that I'm not coming across as a total



