Re: Telecom Planning Guide Outline

Subject: Re: Telecom Planning Guide Outline
From: Thom Randolph <thom -at- HALCYON -dot- COM>
Date: Sun, 2 May 1999 00:45:23 -0700

Benzi:

I am involved with similar documents: provisioning guides for
Internet Service Providers. We use two different rough subject
groupings, depending on whether it is possible to simulate load
on the system. Use whichever of these is reasonable for your
systems; they are just suggestions.

If a trial system can be set up and tested under simulated
load, then we use a set of topics like this:

1) Introduction to the methodology, scope of the document, etc.
2) Architectural overview, including event diagrams showing how
the pieces work together.
3) How to determine proper load levels for testing, how to
measure system resource consumption during testing, and how
to simulate the user load. This includes describing the
nature of the load, or the "usage model": does it come in
peaks? Is the traffic essentially random? Also, if there
are such things in the system, describe WHAT the user is
doing and how to simulate that.
4) Running the tests, load-points to test, gathering the data.
If the system doesn't come with a testing tool to simulate
the user load, consider giving the customer guidelines for
creating their own. Never mention the internal tool, since
it will find its way into a customer's hands faster than you
can say "Sales did WHAT?".
5) How to calculate system capacity based on the gathered data.
6) How to determine resource requirements for the planned user
load levels.
7) Building-out the expected deployment. This often includes
many tips and techniques, arranged by the level of recommendation:
required, recommended, suggested. But probably most important:
what to avoid, and what the deployment team should NEVER do.
If the system has many parts, it makes sense to think in terms
of detecting and eliminating system bottlenecks. We tend to
arrange them in the order they would be hit under increasing
user load.
8) Monitoring production-deployed systems for how they handle
full-load operation.
9) Monitoring production-deployed systems for signs of impending
reliability problems.
10) How to add capacity to meet needs, and how to remove any
excess capacity. How to move the systems to upgraded hardware.
11) At least two tested configurations, including detailed
configuration data, test conditions and gathered data. Be
sure all the test data was truly gathered as they say, and
make sure all the test data for a configuration comes from
the same test run. If it does not, tell them the numbers are
essentially worthless, and watch them squirm! They may dig
up good numbers for you; they may not. Also, be sure to ask
whether there were other tests going on at the same time, or
other conditions worthy of note in your document. There
should be sufficient tables and graphs here to show how the
test configuration would be analyzed using the methods you've
recommended in the document.
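
To make item 3's "usage model" concrete, here's a minimal sketch in
Python of simulating peaky user load. Everything here is illustrative
and hypothetical (the base rate, the peak shape, the evening peak hour);
a real usage model would come from measured traffic, not from a formula
like this one:

```python
import math
import random

def arrival_rate(hour, base_rate=100.0, peak_factor=2.0):
    """Expected user arrivals in one hour, with a daily peak.

    base_rate and peak_factor are placeholder numbers; a real model
    would be fitted to observed traffic.
    """
    # Sinusoidal daily cycle: traffic ramps up after noon and peaks
    # in the evening, dropping back to base_rate overnight.
    return base_rate * (1.0 + peak_factor *
                        max(0.0, math.sin(math.pi * (hour - 12) / 12)))

def simulate_day(seed=42):
    """Draw a random (Poisson) arrival count for each hour of one day."""
    rng = random.Random(seed)
    counts = []
    for hour in range(24):
        lam = arrival_rate(hour)
        # Poisson draw via Knuth's multiplication method; adequate
        # for the modest rates used here.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while p > threshold:
            k += 1
            p *= rng.random()
        counts.append(k - 1)
    return counts
```

The point of a sketch like this is only to force the "nature of the
load" questions out into the open: is the traffic peaky, how sharp is
the peak, and is the arrival process essentially random?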

If there is no reasonable way to simulate user load, then
the document becomes a set of recommendations rather than
equations.
The main difference is that you don't spend much time talking
about the testing tool, how to use it, etc. You leave all that
up to the reader, who has to make their own testing tool.
This does indeed make the document significantly shorter,
but it makes deployment harder for the customer. Another
important difference is that the gathered-data sections can
serve as examples of systems, and you can point out how the
performance is being limited by the bottlenecks you've listed
previously. If possible, try to translate the gathered data
back into numbers useful to the customer: number of simultaneous
users the test system can handle, etc.
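
That translation step can be sketched, too. The following is a
hypothetical example (the resource names, the per-user numbers, and
the 80% headroom figure are all made up for illustration): divide each
resource's availability by its measured per-user consumption, and the
smallest quotient identifies both the capacity and the bottleneck.

```python
def capacity_estimate(per_user_usage, available):
    """Estimate simultaneous-user capacity from load-test data.

    per_user_usage: resource consumed per simulated user, derived
    from a test run (measured usage / simulated user count).
    available: total of each resource on the deployed system.
    Returns (users, bottleneck_resource).
    """
    HEADROOM = 0.8  # plan to run at no more than 80% of any resource
    best = None
    for resource, usage in per_user_usage.items():
        users = int(HEADROOM * available[resource] / usage)
        if best is None or users < best[0]:
            best = (users, resource)
    return best

# Hypothetical per-user numbers from one 500-user test run:
per_user = {"cpu_pct": 0.05, "mem_mb": 1.2, "bandwidth_kbps": 24.0}
total = {"cpu_pct": 100.0, "mem_mb": 4096.0, "bandwidth_kbps": 45000.0}
users, bottleneck = capacity_estimate(per_user, total)
```

A number like "this configuration supports about 1,500 simultaneous
users, limited by bandwidth" is far more useful to the customer than
raw resource graphs.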

These types of documents, for the systems we're working on,
tend to run between 30 and 80 pages, depending on how detailed
the gathered information is, how much detail you put into
the deployment information, and on how complex the system is.
I have seen them run as high as 140 pages, though.

The biggest recommendation I have for this? Remember that the
people you're writing for are experts. You can speak in
jargon without defining standard terms, and you can leave it to
them to understand the "why" of your methodology. If your
testing and capacity analysis methods are not readily understandable
to someone in their position, it might be good to have backup
material explaining the capacity analysis methodology. We usually
use a white-paper for that, and reference it in the introduction.

Another big one: your bosses will always want this document
finished when the product is finished. Well, that is almost
never possible, in reality. The product will have bugs, most
likely ones that seriously affect your ability to obtain
accurate and reliable performance numbers. Do your best to
write the document's meat early, and just add the numbers,
charts and graphs later. Just remember that drastic changes
in performance will lead to different recommendations. For
example, over the course of two weeks of under-load testing, one
system's performance improved more than tenfold over the original
testing. That had only a limited effect on the document, however,
since we wrote the document so the numbers and conclusions were
isolated in the gathered-data sections at the back.

I hope this helps. I know my first document of this type was
a confused jumble, but we've got things running smoothly now.

Oh, and by the way, "provisioning" is indeed a word. It's what
armies do when they give supplies to soldiers.

Regards,

Thom Randolph
thom -at- halcyon -dot- com
