TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
If metrics won't help you do your job, I suppose it depends on what you see
as your job.
If your job is, as you say, to have something approaching communication
ready when the boxes ship, then you have obviously done your job. The
client's emphasis isn't on quality standards, apparently, but on
desperation. Many such companies exist, especially in high tech. If you
didn't produce for him, somebody else would be glad to take his dollars to do it.
We see things a bit differently here, and that's why we have trouble working
for such people. In our experience, the "bang it out by the due date"
standard is only one of many problems on such a job. Such a client doesn't
regulate his projects well, either, so he likely changes his product
abruptly, but doesn't forgive lateness of the manual. This is a classic
level one or two operation. Most such companies disappear into the mud, and
inattention to detail is a common reason. My 12-year-old thinks the
same way when he dashes out of the bathroom after using all the toilet
paper, making a hasty promise to himself to change it later. In a company
that has NO quality standards, the culture won't support usability testing.
As to the hoary quality standard of "I talked to a couple of users and they
liked the manual", you find during testing that even users who profess to
like something often won't or can't use it. I say again, with emphasis...*If
you ain't tested it, you don't know.* I realize the father of the manual
finds it hard to imagine, but just because there's an answer in
there...somewhere...it doesn't mean the damned thing is usable.
And speaking of testing, you may think it counterintuitive that a small
sample is good enough, but remarkably, it is. You're not amassing
statistics; you're pinpointing problems. It doesn't take a statistically
valid sample of joggers to find a piece of ragged pavement. Run eight of
them over the patch and if six of them stumble, you've got the problem
located. This is troubleshooting, not reproducible research. The story goes
that in the early days of testing Windows, Bill Gates was horrified to see a
user in the test lab run a mouse down a table leg trying to reach the bottom
of the screen. It didn't take fifty or a hundred users doing that to horrify
him. And with good reason.
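The arithmetic behind the small-sample claim is easy to sketch. If each test user independently trips over a given problem with some probability p, the chance that at least one of n users exposes it is 1 - (1 - p)^n. The 31% per-user rate below is an assumed figure for illustration only, not a measured one:

```python
# Back-of-the-envelope: chance a usability problem surfaces at least
# once when n users test, if each user independently hits it with
# probability p.  p = 0.31 is an assumed rate for illustration.

def detection_probability(p: float, n: int) -> float:
    """Probability that at least one of n users exposes a problem
    each user trips over with independent probability p."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 8):
    print(f"{n} users: {detection_probability(0.31, n):.0%}")
```

With those assumptions, five users catch a given problem roughly five times out of six, and eight users better than nine times out of ten -- which is why you troubleshoot with a handful of joggers rather than a statistically valid sample.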
A "heuristic method", by the way, is basically saying "my initial best
professional guess". It does not mean "hunches". It means projections based
on experience, education, and judgment. It is NOT meant as a stopping place,
but as a basis for further cycles of refinement. If you make your best
guess, write it, then ship it, you're not working heuristically.
Simply Written, Inc.
Featuring FrameMaker and the Clustar Method(TM)
"Better communication is a service to mankind."
Check our Web site for the upcoming Clustar class info http://www.simplywritten.com
> Well, having specific metrical targets is a nicety that I've never had the
> luxury of and from what I can fathom of it, they don't help me do my job,
> either. In the world I've lived in, the metrical target has been the ship
> date. The development team keeps coding and I keep on trying to put
> together a document that provides the best possible documentation for the
> user before they rip it from my hands and ship it. I've never been
> completely satisfied with what I've had to ship because there are always
> things that I've wanted to include but was prevented from doing so by the
> ship date.