Experiences with re-useable content objects (RCOs), data mining, and database publishing

Subject: Experiences with re-useable content objects (RCOs), data mining, and database publishing
From: Gwen Thomas <thomasgp -at- CBS -dot- FISERV -dot- COM>
Date: Mon, 8 Feb 1999 19:28:44 -0500

Donn DeBoard wrote:

Has anyone had any experiences that they would be willing to share concerning re-usable content objects (RCOs), data mining, or database publishing? My company is exploring avenues to re-use information across many media for a diverse audience. While I read a lot about database publishing and RCOs, I haven't found anyone who actually went forward and converted information originally on paper documents to a database storage format.


Recently I architected just such a project for the world's market leader in mainframe credit card software. Currently I'm providing consulting to the international side of a multi-national banking software company that is considering a similar approach for some of its documentation.

For the credit card company, we dissected the contents of Screens and Reports reference manuals for multiple applications to create a series of format-neutral, media-neutral relational databases. The level of granularity for the databases was the online or report field, with each such record represented as multiple attributes, such as the screen label, field definition, format, underlying copybook data element, etc.

These databases were strictly structured, following a logical organization that is both hierarchical (online fields belong to field groupings, which belong to screens, which belong to functions) and relational (since each online field is associated with a copybook data element, we know the format and valid value set for every other field associated with that element). The design is modular and scalable and, since the data is stored as ASCII text, it lends itself to multiple formats and multiple media.
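If it helps to picture the structure, here is a toy sketch in modern Python/SQLite terms. The table and column names are my own invention for illustration, not the actual project schema:

```python
import sqlite3

# Hypothetical sketch of the hierarchical/relational structure described
# above. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data_element (          -- copybook data element
    element_id   TEXT PRIMARY KEY,
    format       TEXT,               -- e.g. 'X(30)', '9(7)V99'
    valid_values TEXT
);
CREATE TABLE screen (
    screen_id TEXT PRIMARY KEY,
    function  TEXT                   -- screens belong to functions
);
CREATE TABLE field (                 -- one record per online/report field
    field_id    INTEGER PRIMARY KEY,
    screen_id   TEXT REFERENCES screen(screen_id),
    field_group TEXT,                -- fields belong to field groupings
    label       TEXT,
    definition  TEXT,
    element_id  TEXT REFERENCES data_element(element_id)
);
""")

conn.execute("INSERT INTO data_element VALUES ('CUST-NAME', 'X(30)', NULL)")
conn.execute("INSERT INTO screen VALUES ('AR010', 'Account Inquiry')")
conn.execute("INSERT INTO field VALUES (1, 'AR010', 'Customer', 'Name', "
             "'Customer name as it appears on the account.', 'CUST-NAME')")

# Because every field points to its copybook element, the format (and valid
# value set) of any field can be looked up relationally:
fmt = conn.execute("""
    SELECT d.format FROM field f
    JOIN data_element d ON d.element_id = f.element_id
    WHERE f.label = 'Name'
""").fetchone()[0]
print(fmt)  # X(30)
```

The point of the sketch is the join: the copybook element is stored once, and every field that references it inherits its format automatically.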

The data contains occasional embedded formatting tags (primarily to set off field labels when they're referred to within a field description). Generally, though, the structure of the database itself is enough to allow logical formatting in generated documents.

The premise is to separate content from presentation. The content is structured in such a way that you can pick and choose what you want included in end products. These products can end up looking just like the word processing files you probably have now. The difference is that you maintain changes to the content in the database, not in the word processing files. And if you need to change the format or organization of the end product, you simply adjust the underlying template and regenerate the document.
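To make the content/presentation split concrete, here is a minimal illustration (the rows and template are invented, not from the actual project):

```python
from string import Template

# Content as it might come out of the database: pure data, no formatting.
rows = [
    {"label": "Account Number", "format": "9(16)",
     "definition": "The cardholder account identifier."},
    {"label": "Credit Limit", "format": "9(7)V99",
     "definition": "Maximum approved credit for the account."},
]

# Presentation lives in a template maintained separately from the content.
# To change the look of the end product, adjust the template and regenerate.
tmpl = Template("$label ($format)\n    $definition\n")
document = "".join(tmpl.substitute(r) for r in rows)
print(document)
```

Switching the end product from, say, a reference list to a two-column table means swapping the template; the rows in the database never change.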

Indeed, there are many advantages to moving away from the traditional way of creating documentation, where valuable information is embedded in heavily formatted, single-use, word-processing-based files. If, instead, you adopt a database approach to storing product information, you can easily generate multiple kinds of documentation straight out of the database. Your department can turn on a dime. You can produce doc in less time, and you can also generate - with very little effort - other kinds of end-products.

For the credit card company, we were able to produce traditional paper/PDF doc, online help, reports, and many, many special documents used throughout the enterprise. For example:
* the Y2K project: "Give me a listing, with screen labels, formats, and locations, of every date field in the application."
* development teams: "We're redesigning the screens. Show me every place in the 260+ Accounts Receivable application screens where customer address information appears."
* customer service: "Data element XXXYYY needs to be modified. Give me a report of every online field that will be affected."
* training: "Produce an appendix cross-referencing fields by the features they support."
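Requests like these reduce to simple queries once the content is in a database. A toy version of the Y2K request above (schema and data invented for illustration):

```python
import sqlite3

# Minimal stand-in for the documentation database: one row per field.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE field (screen TEXT, label TEXT, format TEXT)")
conn.executemany("INSERT INTO field VALUES (?, ?, ?)", [
    ("AR010", "Open Date",   "YYMMDD"),
    ("AR010", "Card Number", "9(16)"),
    ("AR020", "Expiry Date", "YYMM"),
])

# "Every date field in the application, with screen and format":
date_fields = conn.execute(
    "SELECT screen, label, format FROM field WHERE format LIKE 'YY%'"
).fetchall()
for screen, label, fmt in date_fields:
    print(screen, label, fmt)
```

The same one-liner pattern answers the customer service and development requests; only the WHERE clause changes.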

An additional advantage:
You may find yourself with something much more than a documentation database. Such a knowledge store can serve as the basis for a Knowledge Management initiative. And, it can put to rest any questions about value added by your department.

We ended up building a front-end so that individuals across the enterprise could conduct just-in-time research. We allowed end users to view, print, and output this custom documentation, while allowing only the documentation team to update database contents. I used Access to build both the databases and the front-end, but there are many tools/applications that could be used.

Database design. A documentation database has a set of unique requirements that might not be obvious to someone used to designing smaller or number-intensive databases. It is crucial to start with a design that can grow with the project. It must be detailed enough to produce the level of information the user requires, but structured to make porting existing documentation a manageable job.

Populating the databases. It's safe to say that most documentation was not written to be dissected, so automating that process can be challenging. We built a series of macros to dissect and chunk up Word documentation and to add structure to data pulled from the mainframe.
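A simplified stand-in for those chunking macros, to show the idea. The input format here is invented; real reference manuals are far messier:

```python
import re

# Flat reference-manual extract (invented format for illustration).
raw = """\
FIELD: Account Number
FORMAT: 9(16)
The cardholder's account identifier.
FIELD: Credit Limit
FORMAT: 9(7)V99
Maximum approved credit for the account.
"""

# Split the flat text into one structured record per field.
records = []
for chunk in re.split(r"(?m)^FIELD: ", raw)[1:]:
    label, rest = chunk.split("\n", 1)
    fmt = re.search(r"(?m)^FORMAT: (.+)$", rest).group(1)
    definition = rest.split("\n", 2)[1].strip()
    records.append({"label": label, "format": fmt, "definition": definition})

print(len(records))  # 2
```

In practice most of the effort goes into handling the inconsistencies in hand-written source docs, not the happy path shown here.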

Managing project creep. Since the knowledge store serves a diverse user base, you might find that many different user groups each request the addition of data not currently in the documentation set, but easily attainable. The challenge is to have a build plan that allows rapid and frequent rollouts.

Words of caution:
The credit card software project was a big success. However, in the period before the right resources and team members were assembled, the project experienced several hesitant steps. My advice would be to NOT reinvent the wheel yourselves. Invest in a few days of consulting from someone who has been there and done that. This isn't necessarily a plug for my own services, although I'm available. I'm just saying that it's possible to make some VERY expensive mistakes, and you ought to do what you can up front to avoid them.

Also, think carefully about the ramifications of buying a proprietary system. I'm not saying there aren't systems that could serve your needs. Just examine what you really need and what is being offered from an "out of the frying pan, into the fire" point of view.

Donn, I hope this has been helpful. I'd be happy to answer questions off-line. I hope I haven't made such an exciting project seem daunting - it takes a bit to get into the swing of things, but I have to brag that my team ported their 4th application in a fraction of the time it took for the 1st one. The biggest leap is giving up the linear approach.

From a cost-justification approach, if you can avoid costly missteps, such a project can pay for itself from the doc budget alone in just 1-3 releases. If you factor in value added to the company and reduced development and customer service costs, the numbers become even more attractive.

BTW, it was interesting that your query included information about data mining. I modeled the original application after the classical data warehouse schema (missing some dimensions, of course). And since we were mining existing documentation and mainframe sources for metadata, much as a warehouse performs data mining on transactional data, we named the project The DataMine. Eventually the company got into the data warehouse biz, so we renamed our project to avoid confusion. But the data warehouse folks were some of our strongest customers - we were able to provide data that they needed as a foundation for their work.

It ain't just doc anymore...

Like I said, I'd be happy to answer any questions I can.

Gwen Thomas
email Gwen_Thomas -at- yahoo -dot- com

747 Garden Plaza
Orlando, FL 32803
