TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
Subject: Re: What is quality?
From: Edward Bedinger <qwa -at- U -dot- WASHINGTON -dot- EDU>
Date: Tue, 4 Jan 1994 23:58:29 GMT
In article <9401041744 -dot- AA25903 -at- us1rmc -dot- bb -dot- dec -dot- com>,
Typo? What tpyo? 04-Jan-1994 1238 <jong -at- tnpubs -dot- enet -dot- dec -dot- com> wrote:
>Jonathan Leer commented on documentation quality and asked if anyone has had
>much success tracking document performance. I would like to draw a distinction
>(which I thank Saul Carliner for pointing out) between evaluating and
>predicting the performance of a document. Evaluation is after the fact (of
>producing and distributing the document), while prediction is before or during.
>Examples of document evaluation are reader comment cards and usability testing.
>Examples of predictors are documentation metrics such as the number
>of graphics, headings, and index hits per page. These metrics ought to
>correspond to your customers' critical success factors; in other words, you
>should measure things you know your customers want to see. (Your mileage may
>vary.)
>Both prediction and evaluation have a place and have value. Evaluations are
>inarguable -- when they are statistically valid -- but they're never
>statistically valid, and usability testing is quite expensive and difficult.
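The per-page predictors described above (graphics, headings, and index hits per page) amount to simple density ratios. As an illustration only, a minimal sketch in Python -- the function name, the choice of metrics, and the idea of feeding it pre-counted totals are all assumptions, not anything from the original posts:

```python
def doc_metrics(num_graphics, num_headings, num_index_hits, num_pages):
    """Per-page documentation metrics of the kind discussed in this thread.

    Takes pre-counted totals for a document and returns density ratios.
    The specific metrics chosen here are hypothetical examples; in practice
    they should correspond to your customers' critical success factors.
    """
    if num_pages <= 0:
        raise ValueError("num_pages must be positive")
    return {
        "graphics_per_page": num_graphics / num_pages,
        "headings_per_page": num_headings / num_pages,
        "index_hits_per_page": num_index_hits / num_pages,
    }

# Example: a 60-page manual with 30 figures, 120 headings, 240 index entries
metrics = doc_metrics(30, 120, 240, 60)
```

Such a script is "fast, cheap, and easy" in exactly the sense argued below: it predicts nothing by itself, but tracked across revisions it gives a crude, repeatable signal long before any reader feedback arrives.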
So, suppose I know some useful facts about the people who will
use the documentation. E.g., they are trained professionals who
have been using one or another company's version of this
specialized product since they finished the six-month-long certification
program. My company's manuals have always been really poor, but
there is little or no feedback from the field. Now it is
manual revision time. Obviously, my company places no weight
on usability testing, and there are no reports from professional users
to guide us in the rewrite.
Do I already know enough about my readership to design the
manual and/or write for them?
Can I anticipate (predict) their critical success factors?
I think I am basically asking what sort of indices there are for
matching critical success factors to the users.
>Doc metrics are indirect and seemingly crude measurements (and collecting
>customer CSFs is subject to the same problems of statistical validity), but
>they are fast, cheap, and easy.
>I have done quite a bit of research on document quality and metrics,
>if anyone is interested in continuing this thread.