RE: Compare and Contrast Doc Group Performance -- What's a World-Class Doc Group Look Like?

Subject: RE: Compare and Contrast Doc Group Performance -- What's a World-Class Doc Group Look Like?
From: "McLauchlan, Kevin" <Kevin -dot- McLauchlan -at- safenet-inc -dot- com>
To: <techwr-l -at- lists -dot- techwr-l -dot- com>
Date: Thu, 13 Mar 2008 09:57:04 -0400

Fred Ridder said:
> In manufacturing and in software development, defects are clear-cut.
> A manufactured product meets its specs or it doesn't. A code module
> performs according to the design spec or it doesn't. This makes the
> defects easy to identify as defects, easy to categorize as to severity
> and type, and easy to count, all of which makes it relatively easy to
> collect and analyze metrics. There's still a challenge in comparing
> the metrics from one organization to those of another because of
> differences in overall operational methodology (e.g. how comprehensive
> an organization's specs are), but at least the approach to metrics
> is objective and comparable.

> In the case of information products, quality is a largely subjective
> assessment. Yes, there are clear-cut defects that can be identified
> and tracked--things like wrong information, missing/omitted
> information, typos, etc. But many of the real quality differentiators
> are things like ease of use (e.g. organization of information,
> navigation affordances, index), understandable and unambiguous
> expression of the information (e.g. clear writing, standardized
> terminology), and suitability for the reader's need (e.g. audience
> analysis, task analysis, information design). Each of these factors
> is a spectrum, requiring a judgement call as to the quality level.
> These important quality factors cannot be reduced to the kind of
> go/no-go (acceptable/defective) decision that makes metrics easy.

Not only that, but there's also the problem of companies and products
that are leading an expanding market... i.e., not a mature one.

For example, if your goal is to make a very flexible piece of equipment
that can be used in several [related] niches and can also be adapted by
people you haven't even met, for purposes that neither you nor anybody
else has even formally postulated, then the design spec is a moving
target.

That is, you write the spec to accomplish certain specific things for
certain specific customers or immediate potential customers, because you
know what those folks need. But you also include modularity and
expandability in the basic design to allow for adaptability and
expansion of the feature set. You have (say) a hard-coded feature set
_and_ an API.

So, the first release is a big success and you have some small number
of outstanding issues that are noted as defects but weren't considered
show-stoppers. That's your performance. Well, it's your performance
measured and considered at that single snapshot moment. Consider your
product and your performance over a period of time (and what product
doesn't exist over a period of time?), and the picture changes. But
it's the same product, and arguably you're doing as well as ever at
designing, building, fixing. Which is the true picture? Some customers
(or potential customers) would say that you are doing fine - especially
if they use just a few robust elements of the feature set. Others would
say that you are doing badly, because one feature that they need
doesn't work the way your competitor's similar feature works, while
other features (including some that your competitor doesn't even offer)
are fabulously useful... or would be if that one crucial feature
worked, but you aren't going to fix it (make it work the way they need
it to work) for another six months.

Come the next release, you'll fix those defects and everything will be
just peachy... whoops! Certain customers have been buying and using
your product between Release 1 and Release 2, and they've found
problems, due to their implementations, that you didn't even test for
at Release 1. Your original spec wasn't sufficiently comprehensive.
Well, it was, but the window moved. That can become really obvious when
you've been selling 100 units to each of a few 500-pound gorillas, and
suddenly an 800-pound gorilla says: "We like the _idea_ of your product,
and we don't see anybody else doing that right now, but for us, it
breaks right here. If we can have it fixed in time for our planned
roll-out of our big system in X months, we'll buy 500 immediately - if
not, we'll have to go with our old way and our old supplier, meaning
we'll have that much more invested in the old way and won't be looking
for your product again until 2012..."

Now you have to decide whether every release of the product represents a
new design, a new starting point for the purposes of your metrics.

Combine those issues with what Fred said, and you've got the reason why
this stuff is a rather arcane "science" at best.

Metrics share an attribute with their larger family, statistics - you
can choose and manipulate statistics or metrics to say almost whatever
you want (the little sketch after this list shows how):
Change the timeframe.
Add a parameter.
Remove a parameter.
Weight this.
Un-weight that.
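
To make that concrete, here's a toy sketch in Python. Everything in it
is invented for illustration - the teams, the defect data, the severity
weights - but it shows how the very same defect record can condemn
either group, depending on which timeframe and which weighting you
happen to pick:

    # Toy sketch (all numbers invented): the same defect data can be
    # made to favor either team, just by moving the timeframe or the
    # severity weights.

    DEFECTS = {
        # (team, release): list of (severity, month_found)
        ("docs", 1): [(1, 0), (1, 1), (3, 7)],
        ("code", 1): [(2, 0), (2, 1), (1, 2), (1, 8)],
    }

    def score(team, release, months=12, weights=None):
        """Weighted defect count in a chosen timeframe (lower looks 'better')."""
        weights = weights or {1: 1, 2: 3, 3: 9}  # severity -> weight
        return sum(weights[sev]
                   for sev, month in DEFECTS[(team, release)]
                   if month < months)

    # Snapshot at release time (first 3 months), severity-weighted:
    # docs = 2, code = 7 -- the code group looks bad.
    print(score("docs", 1, months=3), score("code", 1, months=3))

    # Same data, whole first year, same weights:
    # docs = 11, code = 8 -- now the doc group looks bad.
    print(score("docs", 1), score("code", 1))

    # Whole year, un-weighted (every defect counts as 1):
    # docs = 3, code = 4 -- and the code group looks bad again.
    flat = {1: 1, 2: 1, 3: 1}
    print(score("docs", 1, weights=flat), score("code", 1, weights=flat))

Same data, three perfectly "honest" calculations, three different
conclusions.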

The more a certain kind or flavor of metrics gets established, the more
pressure there is to apply the "standard" to what you do - whether it
makes objective sense or not. That's where you get the obfuscation and
erosion of value that other people have mentioned. "Software
development is people planning out a bunch of written stuff, writing the
stuff, testing the stuff, fixing the stuff, and shipping the stuff out
the door before something else breaks. Document creation is people
planning out a bunch of written stuff, writing the stuff, testing the
stuff, fixing the stuff, and shipping the stuff out the door before
something else breaks. Therefore, the same measurements should be
applicable for either. We already have the tools for applying those
measurements to software, so there's no reason they won't work for
customer documentation - which we've just said is an identical process.
Make it so."

OK, so if that effort now breaks, is it because the documentalists were
not performing as well as the code-monkeys? Or is it because, maybe,
there are important differences between the two processes that the glib
equation didn't cover? Other? What if it goes the other way? What if
the documentalists seem to shine on all metrics, compared to the
code-monkeys? Are the code-monkeys a bunch of lazy, incompetent simians?
Or is it possible that the metrics aren't illuminating the right things?
Apples and iguanas?

Kevin





References:
RE: Compare and Contrast Doc Group Performance -- What's a World-Class Doc Group Look Like?: From: John Rosberg
Re: Compare and Contrast Doc Group Performance -- What's a World-Class Doc Group Look Like?: From: Gene Kim-Eng
RE: Compare and Contrast Doc Group Performance -- What's a World-Class Doc Group Look Like?: From: Fred Ridder
Re: Compare and Contrast Doc Group Performance -- What's a World-Class Doc Group Look Like?: From: jlshaeffer
RE: Compare and Contrast Doc Group Performance -- What's a World-Class Doc Group Look Like?: From: Fred Ridder
