Subject: Re: Can I Have Your Opinions and Ideas on Audience?
From: "John M. Gear" <catalyst -at- PACIFIER -dot- COM>
Date: Fri, 28 Oct 1994 13:30:00 PDT

A relatively long reply follows to this query, excerpted from Laurel's post:
At 07:51 AM 10/28/94 CDT, Laurel Meier wrote:
><snip>
>I would like to explore the possibility of using this approach with our
>documentation, so instead of trying to satisfy the entire audience, we focus
>on creating documentation for an intended reader and clearly define in the
>documentation who this intended reader is.

>I have been able to define the problem and find theory to support this new
>approach, but I'm having difficulties finding source information on how to
>practically go about defining the intended reader in our books. The only
>idea I have so far is to put something in the About This Book section that
>states our assumptions about the reader's prerequisite knowledge and
>experience.

This is a *great* question. I've had the idea of a standardized computer
literacy/comfort test perking away for some time (on disk of course, packaged
with online help programs). My idea is that when you install the software
it asks you to self-identify your level of competency, from complete
babe-in-the-woods to
green-skinned-propeller-head-barely-able-to-converse-in-real-time [perhaps
not those exact terms ;-)].

Or, and this is more likely to be useful, you choose "self-test" and the
program coughs up fifty (or twenty? I bet the number of questions goes down
fast once you get the bugs worked out) questions that result in a "score"
predicting the assistance you're going to need using the software.
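
To make that concrete, here is a rough sketch in Python of what such a
self-test scoring routine might look like. The questions, weights, cutoffs,
and level names are all invented for illustration; the real items and
thresholds would have to come out of the validation work described further
down.

# Hypothetical self-test: each "yes" answer adds a weight to a
# competency score that predicts how much help the user will need.
QUESTIONS = [
    ("Have you used a mouse before?", 1),
    ("Can you copy a file from one directory to another?", 2),
    ("Have you ever edited a configuration file by hand?", 3),
    ("Have you written a macro, script, or batch file?", 4),
]

def run_self_test():
    score = 0
    for text, weight in QUESTIONS:
        answer = input(text + " (y/n) ").strip().lower()
        if answer.startswith("y"):
            score += weight
    return score

def assistance_level(score):
    # Invented cutoffs -- real ones would come from trying the
    # instrument on users already ranked by proficiency.
    if score <= 3:
        return "novice"
    if score <= 7:
        return "intermediate"
    return "expert"

if __name__ == "__main__":
    print("Your level:", assistance_level(run_self_test()))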

The self-test scores are used to configure the help files in the program and
perhaps to set up some prompting routines based on the number of tries it
takes to do something, etc. In other words, if I scored in the "novice" range
the program would ask me to confirm any irrevocable choices after a brief
explanation of the consequences. But if I scored in the Jason Fox category
(see "Foxtrot" on your nearest comics page) I don't get the prompts, because
I'd probably just find them annoying anyway.
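
Here's a minimal sketch, again in Python, of how a program might use that
level to decide whether to prompt. The function name and wording are mine,
not anything from an existing help system.

# Hypothetical prompting routine driven by the self-test level:
# novices get a warning and a confirmation, experts skip the prompt.
def confirm_irrevocable(action_description, level):
    if level == "expert":
        return True  # no prompt; experts would just find it annoying
    print("Warning:", action_description)
    if level == "novice":
        print("This cannot be undone.")
    answer = input("Proceed anyway? (y/n) ").strip().lower()
    return answer.startswith("y")

# e.g.  if confirm_irrevocable("Erase all records", user_level): erase()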

Of course, as you become familiar with the software you can dial up the
score, either by simply resetting it or retaking the test.

My intuition is that there is a basic set of skills/actions (analogous to
the therbligs in industrial engineering) that separates users into
categories, and that these can be identified and used to make meaningful
distinctions between the levels of assistance required.

OK, that's the automated kind; what about the question actually asked?
Well, I think it's the same idea. In fact, you'd want to make sure it had
some validity on paper before you sank a lot of $$ into figuring it out for
programs.

For your purposes you need a reasonably sized test group of people and a set
of basic skills and knowledge needed to successfully work the machine. You
do a job-task analysis where the job is operating the program to perform a
variety of tasks. Once you have identified the key Ks and Ss (I'm not sure
that attitudes, the other third of KSAs, help here), you devise questions
that you can use to separate those who have them from those who don't. (Did
I forget to say you have to rank the users by proficiency, or at least break
them into broad classes?)
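
One plausible way to sort the useful questions from the useless ones is a
simple discrimination check: keep the items that the high-proficiency group
answers correctly much more often than the low-proficiency group. The sketch
below assumes made-up data structures (a dict of right/wrong results per user
and a dict of proficiency rankings); it isn't a prescribed method, just one
way to do the winnowing.

# Rough sketch: for each candidate question, compare how often the
# high-proficiency group gets it right versus the low-proficiency group.
def discrimination_index(item_results, proficiency):
    """item_results: {user: True/False}; proficiency: {user: "high"/"low"}"""
    high = [u for u in item_results if proficiency[u] == "high"]
    low = [u for u in item_results if proficiency[u] == "low"]
    if not high or not low:
        return 0.0
    p_high = sum(item_results[u] for u in high) / len(high)
    p_low = sum(item_results[u] for u in low) / len(low)
    return p_high - p_low  # near 1.0 means the item separates the groups well

# Keep the items with the highest index; drop anything everyone (or no one)
# answers correctly, since it tells you nothing about skill level.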

Once you figure out what it is that the expert users know that the novices
don't, and how to help the novices do it, then your task is simply to help
the people picking up your manual place themselves in the right group.

Give me a pencil-and-paper test (or perhaps you still build it into the
program, but at first you don't otherwise modify the program based on the
test results) and write your manual in a three-column format (or five rows,
or whatever number of gradations works best), with the column/row headings
tied to the skill classes that the test puts out. Once I know that my skills
pretty much put me in the average class, I'll read the instructions in the
middle column. If crafted properly, these should be targeted at my general
level of comfort.
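
The mapping from test result to manual column is trivial, but spelling it
out may help; the class names and column descriptions here are invented
stand-ins.

# Hypothetical lookup from the test's skill class to the manual column
# (or row) the reader should follow.
COLUMN_FOR_CLASS = {
    "novice": "left column (step-by-step, every keystroke spelled out)",
    "intermediate": "middle column (standard instructions)",
    "expert": "right column (terse command summary)",
}

def which_column(skill_class):
    return COLUMN_FOR_CLASS.get(skill_class, "middle column (standard instructions)")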

IDEALLY, we devise a platform-independent test for the basic categories of
things (DOS, Unix, Mac, or whatever) and give away LOTS of these tests so
that everybody who wants to can know their skill level at any moment and buy
the manual for the appropriate level. You could even start selling software
preconfigured for the right level --- same end functionality, just more
overhead in some versions (more disk space required, probably slower, etc.,
but same end results attainable).

I think that some form of test is the missing link in efforts to target
documentation to users' needs; current steps in this direction (quick-start
guides and expanded guides that tell you how to do the same things, quick
reference cards, etc.) rely on users' estimates of their own capabilities,
which are obviously not always reliable or accurate.

I apologize if this is much too windy a reply or misses the point of your
query. It just hit directly on something I've been pondering for a while.

John Gear, Catalyst Consulting Services
Vancouver, WA

