TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
Robert Heath wrote:
> The engineer who wrote the feature specs for an application I'm documenting
> used the word "granularity" in a sense I'm not familiar with. ...
> Is this term accepted in computer science or elsewhere in the technology
> industry, or is it just jargon that is being used in place of a more precise
I've seen it used fairly often in a sense analogous to the original meaning,
relating to the "grain size" of things. For example, memory allocation might be
done with a granularity of 4K (the OS page size), or scheduling might be done
at 100 ms intervals.
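To make the allocation example concrete, here's a minimal sketch (not from any particular OS; the 4K page size and the function name are just illustrative) of how a request gets rounded up to the allocator's granularity:

```python
# Sketch of allocation granularity: every request is rounded up to a
# whole number of pages, so you can never reserve less than one page.
PAGE_SIZE = 4096  # assumed OS page size (4K)

def bytes_reserved(requested: int) -> int:
    """Bytes actually set aside for a request, at page granularity."""
    pages = -(-requested // PAGE_SIZE)  # ceiling division
    return pages * PAGE_SIZE

print(bytes_reserved(1))     # 4096 -- one byte still costs a page
print(bytes_reserved(4097))  # 8192 -- one byte over spills into a second page
```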
As in photography, you cannot resolve anything smaller than the granularity,
or grain size. Want to schedule something for 250 ms from now? The best you
can do is schedule it after 2 ticks of 100 ms (averages 250, but may land
anywhere near 200 or 300) or after 3 ticks (guaranteed > 300, so > 250).
A system might have multiple timers with different resolutions. Using a timer
with 10 ms resolution, 25 ticks gives you 250 < x < 260, but this might
involve more overhead than the 100 ms timer.
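The tick arithmetic above can be sketched in a few lines. This assumes the usual model where a request lands at a random point inside the current tick and then waits for n complete ticks, so the real delay falls between n and n+1 tick intervals:

```python
# Delay bounds for scheduling n full ticks on a timer with a given
# resolution: the request's random phase within the current tick adds
# anywhere from just over 0 up to one extra tick of delay.
def delay_bounds(ticks: int, resolution_ms: int):
    """(minimum, maximum) possible delay in ms; average is the midpoint."""
    return ticks * resolution_ms, (ticks + 1) * resolution_ms

# 100 ms timer, 2 ticks: between 200 and 300 ms, averaging 250
print(delay_bounds(2, 100))   # (200, 300)
# 10 ms timer, 25 ticks: between 250 and 260 ms -- tighter, more overhead
print(delay_bounds(25, 10))   # (250, 260)
```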
The term granularity is also used in discussing parallel computing, along with
terms like "fine-grained" or "coarse-grained" parallelism. Coarse-grained
parallelism breaks the work up into large chunks which a bunch of CPUs can
work on separately, and requires little communication except to report
results. e.g. cracking ciphers, sieving for factors of large numbers
(http://www.gimps.org), ...
Fine-grained has only tiny chunks of computation between communication
events, and is /much/ harder to do...
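A toy sketch of the coarse-grained case (the chunk boundaries and worker count here are arbitrary): the work is split into a few big independent pieces, and the only communication is each worker handing back its partial result:

```python
# Coarse-grained parallelism in miniature: sum a large range by giving
# each worker one big, independent chunk. Workers never talk to each
# other; they only report a partial sum back to the parent.
from multiprocessing import Pool

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(4) as pool:
        partials = pool.map(chunk_sum, chunks)  # the only communication
    print(sum(partials))  # same answer as sum(range(1_000_000))
```

A fine-grained version, by contrast, would have the workers exchanging intermediate values constantly, which is where the difficulty comes in.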
I didn't understand your engineer's table, possibly because the formatting
was awful as I received it.