TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
Subject: Summary - HTML and graphics in one download
From: Alexandra Sutherland <Alexandra -dot- Sutherland -at- aspect -dot- com -dot- au>
To: "'techwr-l -at- lists -dot- raycomm -dot- com'" <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Wed, 22 Dec 1999 12:54:36 +1100
Thanks to all those who responded to my query about how to download a web
report to the local PC as one file. Your help is much appreciated. A summary
of responses is provided below.
(alexandra -dot- sutherland -at- aspect -dot- com -dot- au)
Joy Kocar http://www.isolns.com
You can capture a site using Acrobat 4 and create a PDF, or you can try
mirroring software, which copies the HTML files and all linked files
(including those on other sites) to a folder. You can find some at
www.tucows.com. I've tried WinHTTrack; it's worked quite well for my needs.
Hope this answers your question.
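The core of the mirroring approach is discovering every file a page links to so they can be fetched alongside the page itself. The sketch below illustrates that first step with Python's standard library; it is not how WinHTTrack works internally, and the sample HTML and class name are illustrative only.

```python
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collects the URLs a mirroring tool would need to download
    alongside the page itself: images, stylesheets, and linked pages."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.assets.append(attrs["href"])
        elif tag == "a" and "href" in attrs:
            self.assets.append(attrs["href"])

# Hypothetical page fragment standing in for a downloaded report page.
sample = '<html><img src="chart.gif"><a href="page2.html">next</a></html>'
collector = AssetCollector()
collector.feed(sample)
print(collector.assets)
```

A real mirroring tool would repeat this for each discovered page, save every file to a local folder, and rewrite the links so they point at the local copies.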
Pete Kleczka (knp -at- execpc -dot- com)
Look on the web for Trellix software. They have
a software package that downloads everything
from a webpage seamlessly. There are probably
other similar packages out there.
Geoff Lane, Cornwall, UK geoff -at- gjctech -dot- co -dot- uk
The last time I faced a similar situation, I zipped up the pages, graphics,
and everything else, and then converted that to a self-extractor. The user
clicked a link to download the self-extractor and then ran it.
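The zip-then-download approach is easy to script. The sketch below bundles a report directory (HTML plus graphics) into a single zip file with Python's standard library, preserving the directory structure so links keep working after extraction. Paths and names are hypothetical; a self-extractor would be built from the resulting zip with a separate tool.

```python
import os
import zipfile

def bundle_report(src_dir, zip_path):
    """Pack every file under src_dir into one compressed zip archive."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to src_dir so the report's
                # folder structure is preserved when it is extracted.
                zf.write(full, os.path.relpath(full, src_dir))
```

Users then download the one archive (or a self-extractor wrapping it) instead of the page plus each graphic individually.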
jeanne a. e. devoto ~ jaed -at- jaedworks -dot- com http://www.jaedworks.com
A browser, due to its security model, is unable to alter information on the
disk, create folders, and so on.
One solution is to create an archive file (StuffIt, for instance, or tar)
containing the desired files. This would let users download the
information locally while retaining the directory structure after the
archive is extracted. This is probably the simplest solution, if your users' browsers
don't have the built-in capability to download a website archive. (It does
require that you update the file every time something on the live site
changes. This can probably be automated on the server.)
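The server-side automation mentioned above can be a very short script: rebuild the tar archive from the live site's files whenever something changes, or on a schedule. This is a minimal sketch using Python's standard library; the directory names are hypothetical, and a cron job (or equivalent scheduler) would be the usual way to run it.

```python
import os
import tarfile

def archive_site(site_dir, archive_path):
    """Re-create a gzipped tar of the live site so the downloadable
    archive always matches the current pages and graphics.
    Intended to run from a scheduled job, e.g. cron."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(site_dir, arcname=os.path.basename(site_dir))
```

When extracted, the archive reproduces the site directory with its structure intact, so relative links between pages and graphics keep working.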