RE: Queries on Single Sourcing

Subject: RE: Queries on Single Sourcing
From: Mailing List <mlist -at- ca -dot- rainbow -dot- com>
To: "'lyndsey -dot- amott -at- docsymmetry -dot- com'" <lyndsey -dot- amott -at- docsymmetry -dot- com>, TECHWR-L <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Fri, 13 Feb 2004 14:09:22 -0500

lyndsey -dot- amott -at- docsymmetry -dot- com enthused:

> Dick Margulis writes:
>
> > Now that we've written all those chunks, we're in a
> > position to assemble our various outputs from them
> > (with a good deal of automation one would hope),
> > picking the category of information needed for the
> > particular output document and organizing the chunks
> > according to the plan for that document.
>
> Wow! I've seen the light (and the possibilities!). Having
> said that, it seems to me that you'd have to be working
> for a company that has quite sophisticated views about
> documentation in order to be given the time to set
> all this up. The benefits are (now) obvious, but, as you say,
> the set-up time has to be weighed against time saved by avoiding
> duplicative work.

> This brings to mind the recent thread on where documentation
> belongs. If it is in a centralized department, then designing
> and developing a single-sourcing system could be cost-effective,
> but if writers belong to different development teams in the R&D
> department, and if the training development group is in a
> different country, then the costs would appear to be prohibitive.

Consider also that there are two ways to deal with a database
of "chunks" of information and with extracting selected chunks
in order to generate output documents:

1) your database contains the "smarts" to catalog, organize,
and select the info, so that your chunk-selecting, compiling,
and output tools can be relatively dumb; or

2) your database contains just the chunks, with not much
meta information, requiring that your chunk-selecting,
compiling, and output tools be extremely intelligent,
because they must act directly on the content of the chunks
themselves for decision-making.
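Those two extremes can be sketched in a few lines of Python. All the chunk data and field names below are invented for illustration; the point is only where the intelligence lives:

```python
# Extreme 1: the repository carries the metadata, so selection is a dumb filter.
# Extreme 2: the repository is flat text, so the tool must inspect content.

smart_repo = [
    {"id": 1, "audience": "admin", "topic": "install", "text": "Run the installer..."},
    {"id": 2, "audience": "user",  "topic": "install", "text": "Double-click setup..."},
    {"id": 3, "audience": "admin", "topic": "backup",  "text": "Schedule nightly..."},
]

def select_by_metadata(repo, **criteria):
    """Extreme 1: a trivial query -- the smarts live in the metadata."""
    return [c for c in repo if all(c.get(k) == v for k, v in criteria.items())]

flat_repo = [c["text"] for c in smart_repo]  # same content, metadata discarded

def select_by_content(repo, keyword):
    """Extreme 2: the tool must act on chunk content to make its decisions."""
    return [text for text in repo if keyword.lower() in text.lower()]

admin_chunks = select_by_metadata(smart_repo, audience="admin")
installer_chunks = select_by_content(flat_repo, "installer")
```

In the first case the query stays this simple no matter how the repository grows; in the second, every new selection criterion means teaching the tool to read the prose.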

There can be combinations, but those are the two extremes.
In the first case, you hire a master database person, who
discovers what your criteria and meta-data requirements are,
and who then creates the database using -- and reflecting --
that knowledge.

Thereafter, you hire ordinary writers and font-fondlers and
script writers who can run an output engine (like FrameMaker
or certain web applications, etc.), and they learn some
basic database user skills, in order to generate queries
that generate books and WebHelp, etc. A lot of their
interaction with the database might be mediated by the original
database master, or her/his successor, leaving them free to
do nothing but:

a) input new info, adhering to templates that force them to
include the relevant meta data with each new info chunk

b) fondle fonts and deal with output formatting (page layout
and that kind of stuff) whenever new documents are required.
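A toy sketch of those "queries that generate books": chunks carry meta-data, a document plan names topics in order, and assembly becomes a loop. The chunk data, topic names, and plan are all invented examples, not any particular tool's format:

```python
# Meta-data-rich chunks plus an ordered plan = one output document.
repo = [
    {"topic": "intro",   "audience": "user",  "text": "Welcome to the product."},
    {"topic": "install", "audience": "user",  "text": "Double-click setup.exe."},
    {"topic": "install", "audience": "admin", "text": "Run the silent installer."},
    {"topic": "usage",   "audience": "user",  "text": "Open a file to begin."},
]

user_guide_plan = ["intro", "install", "usage"]  # the plan for this output

def assemble(repo, plan, audience):
    """Pull the chunks for one audience, in the order the plan dictates."""
    doc = []
    for topic in plan:
        doc += [c["text"] for c in repo
                if c["topic"] == topic and c["audience"] == audience]
    return "\n\n".join(doc)

user_guide = assemble(repo, user_guide_plan, audience="user")
```

The writers never touch the selection logic; they feed in chunks (with meta-data) and plans, and the font-fondling happens downstream on the assembled result.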

In the other case, your database is flat and dumb, so the
people you hire to make documents from it must be extremely
proficient at creating and using and maintaining tools that
do all the parsing, compiling and output.

That is, in the absence of well-conceived and implemented
meta-data, about all that differentiates your information
"chunks" from one enormous blob of text is that there are
separators between chunks. There's nothing to differentiate
one chunk from another, other than the content itself. That
means you can have a writer search "by hand" through tens
of thousands of undifferentiated chunks, in order to glean
the ones that belong in the next book. The writer also gets
to decide the order in which that subset of chunks will be
assembled. We can assume that increasing experience would
allow the writer to become more proficient at locating
relevant stuff to extract from the amorphous cloud of chunks.
If that writer leaves or gets hit by a bus, the learning
curve starts over with somebody else.

Instead, you would attempt to formalize what the writer does,
in hopes of turning it into algorithms which could in turn
be used to create scripts or macros that could mine the
amorphous chunk-base and create new documents. Tools would
proliferate. The more you adhered to some sort of industry
standard, the better would be your chances of replacing any
scripter/programmer/macro-writer who left or died.
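Here's a rough Python sketch of what that formalization ends up looking like: with no meta-data, the tool can only score raw content against the topics the next book needs. The chunks and topic words are made up, and a real tool would need far smarter matching than this:

```python
# Mining an untagged chunk-base: rank chunks by topic-word hits in the content.
flat_chunks = [
    "To install the server, run setup with admin rights.",
    "The release notes list known issues in this version.",
    "After installing, verify the service is running.",
]

def mine_chunks(chunks, topic_words):
    """Score each chunk by how often the topic words appear in its text."""
    scored = []
    for text in chunks:
        lower = text.lower()
        score = sum(lower.count(w) for w in topic_words)
        if score:
            scored.append((score, text))
    scored.sort(reverse=True)
    return [text for _, text in scored]

install_book = mine_chunks(flat_chunks, ["install", "setup", "service"])
```

Every quirk of the writer's judgment ("installing counts as install, but release doesn't") has to be encoded here, which is exactly why such tools proliferate.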

Because your chunk-blob had no metadata, it would be very
difficult to decide that a given chunk had outlived its
usefulness, so you might tend to keep everything -- including
duplicates and near-duplicates -- causing endless bloat.
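A short Python sketch of why the bloat is so hard to fight: hashing chunk content catches only byte-identical duplicates, so reworded near-duplicates survive indefinitely. The example chunks are invented:

```python
import hashlib

chunks = [
    "Click Save to keep your changes.",
    "Click Save to keep your changes.",   # exact duplicate: catchable by hash
    "To keep your changes, click Save.",  # near-duplicate: slips through
]

def drop_exact_duplicates(chunks):
    """Keep the first occurrence of each byte-identical chunk."""
    seen, kept = set(), []
    for text in chunks:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(text)
    return kept

deduped = drop_exact_duplicates(chunks)  # the reworded copy survives
```

With meta-data (say, a topic key plus a supersedes field) retiring the older chunk would be a query; without it, it's a judgment call nobody ever makes.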

At first glance, the more sensible approach, especially over
the long term, is to create a chunk database/repository that
makes use of LOTS of well-conceived meta-data. At second
glance, you realize that while it's necessary work, it's also
a LOT of work, and nothing comes out of it until the bulk of
the work is complete.

By contrast, a low/no meta data chunk-repository might
lead to a nightmare of tool complexity and maintenance
woes, but it's relatively quick to start up and you can
start getting SOMETHING out of it in the short term.

So, you will probably end up with some sort of
pragmatic hybrid. Good luck with that. :-)

Either way, creating a database/repository will be useful
only if most of the users/writers in the company are able
to use it, and if much of its contents will be used by
everybody... as opposed to just giving everybody their own
ghettoes within a larger centralized repository.

If your writers/users are widely separated, a repository
that controls its inputs would help your company maintain
standards and an accessible corporate
memory. That is, if the database enforces (at least) the
meta-data that must attach to every input chunk, then
the tools that any person or group uses to extract what
they need from the repository will work on any relevant
chunk, regardless of who input it (and one bit of meta-data
might be the name of the chunk-creator, so that you could
track down who was making lousy meta-data). If the database
further enforces a review/approval requirement for incoming
chunks, before they are unquarantined, then your corporate
editor or some other person/body can vet the chunks for style,
spelling, and other writing standards.
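A Python sketch of that gatekeeping: required meta-data (including the chunk-creator's name) is enforced on input, and chunks stay quarantined until someone approves them. All field and status names here are invented for illustration:

```python
# Input gate: reject chunks missing meta-data; quarantine the rest until vetted.
REQUIRED = {"creator", "audience", "topic"}

class Repository:
    def __init__(self):
        self.chunks = []

    def submit(self, text, **meta):
        """Accept a chunk only if all required meta-data fields are present."""
        missing = REQUIRED - meta.keys()
        if missing:
            raise ValueError(f"chunk rejected: missing {sorted(missing)}")
        chunk = {"text": text, "status": "quarantined", **meta}
        self.chunks.append(chunk)
        return chunk

    def approve(self, chunk):
        """The editor has vetted style, spelling, and standards."""
        chunk["status"] = "approved"

    def published(self):
        return [c for c in self.chunks if c["status"] == "approved"]

repo = Repository()
c = repo.submit("Back up nightly.", creator="jdoe", audience="admin", topic="backup")
repo.approve(c)
```

Because every accepted chunk carries a creator field, tracking down who keeps submitting lousy meta-data is itself just a query.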

Now, all you need to do is to find affordable tools that
can make all that painless and productive . . . and get
the request past the budget people. Good luck with THAT!



