Subject: RE: Ideas in Motion
From: "Mike Starr" <writstar -at- wi -dot- net>
To: "Tim Altom" <taltom -at- simplywritten -dot- com>, "Mike Starr" <writstar -at- wi -dot- net>, "TechDoc List" <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Mon, 13 Mar 2000 09:46:20 -0600

But my original question remains: what did Microsoft actually redesign as a
result of the user dragging the mouse down the table leg (the example you
cited as proof of the value of usability testing)? I'd be willing to bet
they did nothing.

I'm not opposed to usability testing. I think it can provide valuable
insight into the way an application or document should be put together.
However, your basic position seems to be that one never learns and retains
lessons from prior testing or feedback or other products and that usability
testing must occur in order for the product or document to be good. I say
I've learned a lot of lessons about usability over the years and I
incorporate those lessons into everything I do.

I'm a real pain in the keister to development teams who create interfaces
that aren't intuitive or are confusing. I nag, cajole, whine, beg, threaten
and generally make myself intolerable until the development team sees that
they've done something stupid and corrects it.

When a company is developing a product that's extremely powerful, that
product becomes, of necessity, extremely complex. It takes a lot of work to
get an extremely powerful industrial automation product working right and
the users are primarily concerned with functionality. If it don't work, it
ain't gonna sell.

I worked with a product that was written by a single programmer, one of
those prima donnas we all hear about. The difficulty was that he was worth
it. He was able to make that product do things that were incredible. Nobody
out there in the marketplace is able to create a product to compete with
it. He's worth his weight in diamonds. Unfortunately, he couldn't design a
good GUI to save his soul. Did he care? Nope. Did he want to focus on
making it more usable? Nope. Six levels of nested dialog boxes?? Yep. The
same dialog box repurposed inappropriately multiple times? Yep. The product
was (and still is) the best in its class. I made some small progress on
getting the usability improved but it was like pulling teeth.

Now, could a good usability engineer have made the interface better? Maybe,
but it would have taken that usability engineer at least a year of work just
to understand the power and functionality of the product. Then it might have
been possible to redesign the product with a more rational and intuitive
interface. I would gladly have supported the concept, but if I were the
manager and the developer threatened to walk rather than implement a new
interface, I would have kept the developer. The product is a big seller and
incredibly profitable. It's broke, but not that broke.

Again, the more powerful and useful the product, the more willing the
marketplace is to accept less than optimal usability.

Mike Starr - WriteStarr Information Services
Technical Writer - Online Help Developer - Technical Illustrator
Graphic Designer - Desktop Publisher - MS Office Expert
Telephone (262) 694-0932 - Pager (414) 318-9509 - mailto:writstar -at- wi -dot- net

-----------------------Original Message-----------------------
From: Tim Altom
To: Mike Starr; TechDoc List
Date: 3/13/2000 8:53 AM
Subject: Re: Ideas in Motion

Actually, Microsoft virtually returned to the drafting board to redesign the
Windows interface. It cost them millions, but they did it anyway. Much of
what you see in Windows today is the result of large user tests. Windows is
bloatware, but it's bloatware to satisfy marketing. The interface is
usability-driven. Check out the Outlook interface for 2000. I think it's an
excellent interface considering the crazy quilt of functions that they named
"Outlook". We did a book on Office 2000, and it was absolutely the worst
writing job when we reached Outlook, because it doesn't have specific
windows or dialog boxes for specific functions. Rather, a constellation of
functions exerts gravitational pull on one another. It's like the software
equivalent of the three-body problem. Still, the interface for it works. I
have a client who mimicked it, not to shortcut anything but because their
software worked well also in that paradigm.

My point about usability is not that a particular manual is or is not usable,
but that until you test it YOU DON'T KNOW FOR SURE. Only testing will reveal
usability. Not discussion, not focus groups, not discretionary feedback, not
heuristics, not guesswork, not confidence, not call records. Nothing else,
because everything else is discretionary on the part of the end user. That's
a factor you can control in testing, but not otherwise. If the user doesn't
want to talk to you, or hasn't read the manual, or is trying to be nice, or
just doesn't have the time to call, or assumes that if you wrote a crappy
manual your tech support can't be much better, then you have no loop
closure. And in my experience, most people are astonished at how their
informed guesswork comes apart like a snowball in the oven when users
actually put it to the test. Even users are often astonished when you show
them the results. They'll swear that they just love that neat clickable
image, but when you test them you discover that they never use it, because
they prefer the bland but obvious menu on the left side.

Of course, usability testing must be done correctly. The same can be said
for every aspect of life. Products that crash networks should have been done
correctly, too. Documents that don't enlighten should have been done
correctly, too. Saying "Well, it has to be done right, and that's a reason
not to do it" doesn't seem to me to be a strong argument against testing.
Nor is it a strong argument to contend that if the users *don't* find
something, then they've failed and the whole thing was a waste. You can't
prove a negative. Generally, experience shows that you don't have to worry
about that, because testing ALWAYS reveals wrinkles. The point is to
minimize the numbers of them, not to eliminate them entirely. After a
certain point, you run into the chaos of individual actions, a kind of
Brownian motion that you'll never be able to eliminate. If the majority of
your small testing sample finds the same problem, it's a wrinkle that needs
ironing. A problem found by only one tester should still be considered, just
not weighted as strongly.

As a final note, have you noticed that product introductions and maturation
follow a pattern? In the early days, small companies rush out products that
may be buggy and unreliable, but they get customers because nobody else has
tackled the problem. Then, as sales go up, the original vendor either grows
to the point of having to institute objective quality measures (as opposed
to subjective judgments), or some larger company that uses quality standards
takes the business away. Early adopters will tolerate buggy products; later
adopters won't. But unless you can quantify your expectations, you can't
institute
quality. You can talk about it, you can offer opinions about it, but you
can't institute it. Only when you can put numbers to performance can you
institute quality.

If a company can't or won't put numbers to their standards, it's not a
quality assurance environment. It's art. If the hordes of harried buyers out
there are content with art, then your company is in good shape. If those
hordes are more cautious and insist on more measurable performance, then
your company either learns the lesson or is doomed. Or it must come out with
something else artistic and keep running the introduction cycle so it
doesn't have to face up to the quality dilemma.

Tim Altom
Simply Written, Inc.
Featuring FrameMaker and the Clustar Method(TM)
"Better communication is a service to mankind."
Check our Web site for the upcoming Clustar class info

