TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
> I suspect various other measurements can be worthwhile if used correctly.
> Of course they cannot substitute for testing, but they might tell you
> something interesting.
They certainly can provide insights into how to improve a document.
They might, in some cases, be able to verify that a document is free
of certain kinds of defects, or to verify that the document is "readable"
by some measure. But can they "verify that a document is good"?
> The Writer's Workbench project at Bell Labs in the 70s had several ways
> to collect statistics on text samples.
I think most of the pros and cons of tools like this have been well
addressed in the recent discussion about the Fog Index.
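For anyone curious what the Fog Index actually computes, here is a rough sketch in Python. This is my own illustration, not the Writer's Workbench code; the syllable counter is a crude vowel-group heuristic, so treat the scores as approximate:

```python
import re

def fog_index(text):
    """Approximate the Gunning Fog Index:
    0.4 * (average sentence length + percentage of 'complex' words,
    where 'complex' means three or more syllables)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Rough heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

print(round(fog_index("The cat sat on the mat. It was happy."), 2))
```

Note how easily the number is gamed: chop long sentences in half and swap polysyllabic words for short ones, and the score improves whether or not the prose does.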
My graduate school project in the mid-80s was to maintain and
extend the Writer's Workbench, and evaluate the changes with a
few thousand users. I've seen the Writer's Workbench and other
such tools (ab)used in several ways:
- To compare one corpus of documents with another. The tools work
very well for this.
- To compare one document to a corpus of other documents. The tools
can often provide a few helpful insights when used this way.
- To evaluate the "goodness" of a particular document. I've seen
perfectly good writing hacked up to meet the requirements of the
tool. People tend to optimize whatever is measured, and ignore
everything else. Most people also tend to trust the judgment
of the computer over their own good sense, and accept whatever
changes the computer suggests. In the hands of a thoughtful
writer, even this use can be OK. In the hands of a manager
of a writing department, it would probably spell disaster.
- To evaluate the skill of a writer. This is the worst case.
Writers can learn to make the tool happy, while continuing
to disappoint their readers.
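The corpus-comparison use, where these tools work best, amounts to asking how far a document's statistics fall from a reference distribution. A minimal sketch of the idea (my own illustration, not a Workbench feature) using a z-score:

```python
from statistics import mean, stdev

def corpus_zscore(doc_value, corpus_values):
    """How many standard deviations a document's metric (e.g. average
    sentence length) sits from the mean of a reference corpus.
    A large magnitude flags the document as unusual, nothing more."""
    mu = mean(corpus_values)
    sigma = stdev(corpus_values)
    return (doc_value - mu) / sigma

# Average sentence lengths from a (hypothetical) corpus of manuals:
corpus = [14.2, 15.1, 13.8, 16.0, 14.9]
print(round(corpus_zscore(20.0, corpus), 2))  # well outside the corpus norm
```

The point is that the output is a flag for a human to investigate, not a verdict: a document three deviations out may be unusually bad, unusually good, or simply about a different subject.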
Using artificial intelligence to actually parse the writing,
taking into account both semantic and structural elements, has
a lot of potential for evaluating documents more meaningfully.
Unfortunately, nearly all development of grammar checkers and
readability tools came to a screeching halt around 1990.