Subject: Documentation metrics?
From: "Hart, Geoff" <Geoff-H -at- MTL -dot- FERIC -dot- CA>
To: "TECHWR-L" <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Tue, 4 Jun 2002 09:14:08 -0400


Asha Mokashi wonders whether <<... any of you are involved with collecting
metrics for assessing user documentation quality in your
teams/organizations>>

We don't formally collect any metrics. The reason is simple: we don't
believe that you can place a numeric value on how good a document is. You
can certainly measure the number of errors picked up during review, and
other such quantifiable items, but the only real test of quality is whether
people can read the documentation and reach the conclusions you want them to
reach--or obtain all the facts they need to reach their own conclusion, as
the case may be. We publish scientific research reports, but the principles
apply equally well elsewhere.

<<What criteria do you use to verify if a document is good, in the absence
of user feedback?>>

We start with a planning meeting in which our authors discuss their results
with their supervisor, the research director, and a member of the
Communications team to decide whether they have something to say that people
want to read. If so, the meeting defines exactly what they're going to write
about it. We then use several sorts of peer review: after the author has
produced a report, it's reviewed by a colleague in the same research group
for technical merit, and by up to three external reviewers (a
manufacturer's rep, if appropriate; the person or company who collaborated
in the study; someone else knowledgeable in the field). The research
director then reviews the revision, as do I.

<<What processes do you follow to collect metrics and what are the benefits
and drawbacks?>>

We collect no formal metrics, but the review process does identify all the
major mistakes. These mistakes are then discussed with the author so that
(in theory) authors can learn from them and (in practice) the report
improves.

<<Is it possible to have an absolutely objective and complete measurement of
documentation quality in the absence of user feedback?>>

There are no absolutes. But a good review and revision process can achieve
much of the same goal, provided that your reviewers are comparable to the
eventual readers. When the reviewers differ significantly from the readers,
problems arise.

--Geoff Hart, geoff-h -at- mtl -dot- feric -dot- ca
Forest Engineering Research Institute of Canada
580 boul. St-Jean
Pointe-Claire, Que., H9R 3J9 Canada
"User's advocate" online monthly at
www.raycomm.com/techwhirl/usersadvocate.html
"Writing, in a way, is listening to the others' language and reading with
the others' eyes."--Trinh T. Minh-Ha, "Woman native other"



