1 \input texinfo @c -*-texinfo-*-
7 @settitle GNU Wget @value{VERSION} Manual
8 @c Disable the monstrous rectangles beside overfull hbox-es.
10 @c Use `odd' to print double-sided.
15 @c Remove this if you don't use A4 paper.
19 @c Title for man page. The weird way texi2pod.pl is written requires
20 @c the preceding @set.
22 @c man title Wget The non-interactive network downloader.
24 @dircategory Network Applications
26 * Wget: (wget). The non-interactive network downloader.
This file documents the GNU Wget utility for downloading network
33 @c man begin COPYRIGHT
34 Copyright @copyright{} 1996, 1997, 1998, 2000, 2001, 2002, 2003, 2004, 2005
35 Free Software Foundation, Inc.
37 Permission is granted to make and distribute verbatim copies of
38 this manual provided the copyright notice and this permission notice
39 are preserved on all copies.
42 Permission is granted to process this file through TeX and print the
43 results, provided the printed document carries a copying permission
44 notice identical to this one except for the removal of this paragraph
45 (this paragraph not being relevant to the printed manual).
47 Permission is granted to copy, distribute and/or modify this document
48 under the terms of the GNU Free Documentation License, Version 1.1 or
49 any later version published by the Free Software Foundation; with the
50 Invariant Sections being ``GNU General Public License'' and ``GNU Free
51 Documentation License'', with no Front-Cover Texts, and with no
52 Back-Cover Texts. A copy of the license is included in the section
53 entitled ``GNU Free Documentation License''.
58 @title GNU Wget @value{VERSION}
59 @subtitle The non-interactive download utility
60 @subtitle Updated for Wget @value{VERSION}, @value{UPDATED}
61 @author by Hrvoje Nik@v{s}i@'{c} and the developers
Originally written by Hrvoje Niksic <hniksic@@xemacs.org>.
68 GNU Info entry for @file{wget}.
73 @vskip 0pt plus 1filll
Copyright @copyright{} 1996, 1997, 1998, 2000, 2001, 2003, 2004, 2005
75 Free Software Foundation, Inc.
77 Permission is granted to copy, distribute and/or modify this document
78 under the terms of the GNU Free Documentation License, Version 1.2 or
79 any later version published by the Free Software Foundation; with the
80 Invariant Sections being ``GNU General Public License'' and ``GNU Free
81 Documentation License'', with no Front-Cover Texts, and with no
82 Back-Cover Texts. A copy of the license is included in the section
83 entitled ``GNU Free Documentation License''.
88 @top Wget @value{VERSION}
90 This manual documents version @value{VERSION} of GNU Wget, the freely
91 available utility for network downloads.
93 Copyright @copyright{} 1996, 1997, 1998, 2000, 2001, 2003, 2004, 2005
94 Free Software Foundation, Inc.
97 * Overview:: Features of Wget.
98 * Invoking:: Wget command-line arguments.
99 * Recursive Download:: Downloading interlinked pages.
100 * Following Links:: The available methods of chasing links.
101 * Time-Stamping:: Mirroring according to time-stamps.
102 * Startup File:: Wget's initialization file.
103 * Examples:: Examples of usage.
104 * Various:: The stuff that doesn't fit anywhere else.
105 * Appendices:: Some useful references.
106 * Copying:: You may give out copies of Wget and of this manual.
107 * Concept Index:: Topics covered by this manual.
116 @c man begin DESCRIPTION
117 GNU Wget is a free utility for non-interactive download of files from
118 the Web. It supports @sc{http}, @sc{https}, and @sc{ftp} protocols, as
119 well as retrieval through @sc{http} proxies.
122 This chapter is a partial overview of Wget's features.
126 @c man begin DESCRIPTION
127 Wget is non-interactive, meaning that it can work in the background,
128 while the user is not logged on. This allows you to start a retrieval
129 and disconnect from the system, letting Wget finish the work. By
contrast, most Web browsers require the user's constant presence,
131 which can be a great hindrance when transferring a lot of data.
137 @c man begin DESCRIPTION
141 @c man begin DESCRIPTION
142 Wget can follow links in @sc{html} and @sc{xhtml} pages and create local
143 versions of remote web sites, fully recreating the directory structure of
144 the original site. This is sometimes referred to as ``recursive
145 downloading.'' While doing that, Wget respects the Robot Exclusion
146 Standard (@file{/robots.txt}). Wget can be instructed to convert the
links in downloaded @sc{html} files to the local files for offline
viewing.
153 File name wildcard matching and recursive mirroring of directories are
154 available when retrieving via @sc{ftp}. Wget can read the time-stamp
155 information given by both @sc{http} and @sc{ftp} servers, and store it
156 locally. Thus Wget can see if the remote file has changed since last
157 retrieval, and automatically retrieve the new version if it has. This
makes Wget suitable for mirroring of @sc{ftp} sites, as well as home
pages.
164 @c man begin DESCRIPTION
168 @c man begin DESCRIPTION
169 Wget has been designed for robustness over slow or unstable network
170 connections; if a download fails due to a network problem, it will
171 keep retrying until the whole file has been retrieved. If the server
172 supports regetting, it will instruct the server to continue the
173 download from where it left off.
178 Wget supports proxy servers, which can lighten the network load, speed
179 up retrieval and provide access behind firewalls. However, if you are
180 behind a firewall that requires that you use a socks style gateway,
181 you can get the socks library and build Wget with support for socks.
Wget uses passive @sc{ftp} downloading by default, active @sc{ftp}
being an option.
187 Wget supports IP version 6, the next generation of IP. IPv6 is
188 autodetected at compile-time, and can be disabled at either build or
189 run time. Binaries built with IPv6 support work well in both
190 IPv4-only and dual family environments.
194 Built-in features offer mechanisms to tune which links you wish to follow
195 (@pxref{Following Links}).
199 The progress of individual downloads is traced using a progress gauge.
200 Interactive downloads are tracked using a ``thermometer''-style gauge,
201 whereas non-interactive ones are traced with dots, each dot
202 representing a fixed amount of data received (1KB by default). Either
203 gauge can be customized to your preferences.
207 Most of the features are fully configurable, either through command line
208 options, or via the initialization file @file{.wgetrc} (@pxref{Startup
209 File}). Wget allows you to define @dfn{global} startup files
210 (@file{/usr/local/etc/wgetrc} by default) for site settings.
215 @item /usr/local/etc/wgetrc
216 Default location of the @dfn{global} startup file.
226 Finally, GNU Wget is free software. This means that everyone may use
227 it, redistribute it and/or modify it under the terms of the GNU General
228 Public License, as published by the Free Software Foundation
239 By default, Wget is very simple to invoke. The basic syntax is:
242 @c man begin SYNOPSIS
243 wget [@var{option}]@dots{} [@var{URL}]@dots{}
247 Wget will simply download all the @sc{url}s specified on the command
248 line. @var{URL} is a @dfn{Uniform Resource Locator}, as defined below.
250 However, you may wish to change some of the default parameters of
Wget. You can do it in two ways: permanently, adding the appropriate
252 command to @file{.wgetrc} (@pxref{Startup File}), or specifying it on
258 * Basic Startup Options::
259 * Logging and Input File Options::
261 * Directory Options::
264 * Recursive Retrieval Options::
265 * Recursive Accept/Reject Options::
273 @dfn{URL} is an acronym for Uniform Resource Locator. A uniform
274 resource locator is a compact string representation for a resource
275 available via the Internet. Wget recognizes the @sc{url} syntax as per
@sc{rfc1738}. This is the most widely used form (square brackets denote
the optional parts):
280 http://host[:port]/directory/file
281 ftp://host[:port]/directory/file
284 You can also encode your username and password within a @sc{url}:
287 ftp://user:password@@host/path
288 http://user:password@@host/path
291 Either @var{user} or @var{password}, or both, may be left out. If you
292 leave out either the @sc{http} username or password, no authentication
293 will be sent. If you leave out the @sc{ftp} username, @samp{anonymous}
294 will be used. If you leave out the @sc{ftp} password, your email
295 address will be supplied as a default password.@footnote{If you have a
@file{.netrc} file in your home directory, the password will also be
searched for there.}
299 @strong{Important Note}: if you specify a password-containing @sc{url}
300 on the command line, the username and password will be plainly visible
301 to all users on the system, by way of @code{ps}. On multi-user systems,
302 this is a big security risk. To work around it, use @code{wget -i -}
303 and feed the @sc{url}s to Wget's standard input, each on a separate
304 line, terminated by @kbd{C-d}.
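
For example, in a shell script you might feed such a @sc{url} on
standard input with a here-document; this is only a sketch, with
placeholder credentials and host:

@example
wget -i - <<EOF
ftp://@var{user}:@var{password}@@@var{host}/@var{path}
EOF
@end example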
306 You can encode unsafe characters in a @sc{url} as @samp{%xy}, @code{xy}
307 being the hexadecimal representation of the character's @sc{ascii}
308 value. Some common unsafe characters include @samp{%} (quoted as
309 @samp{%25}), @samp{:} (quoted as @samp{%3A}), and @samp{@@} (quoted as
@samp{%40}). Refer to @sc{rfc1738} for a comprehensive list of unsafe
characters.
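
For instance, a password containing @samp{@@} must be encoded before
being placed in a @sc{url}; a sketch, with hypothetical credentials:

@example
wget 'ftp://@var{user}:pass%40word@@@var{host}/@var{file}'
@end example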
313 Wget also supports the @code{type} feature for @sc{ftp} @sc{url}s. By
314 default, @sc{ftp} documents are retrieved in the binary mode (type
315 @samp{i}), which means that they are downloaded unchanged. Another
316 useful mode is the @samp{a} (@dfn{ASCII}) mode, which converts the line
317 delimiters between the different operating systems, and is thus useful
318 for text files. Here is an example:
321 ftp://host/directory/file;type=a
324 Two alternative variants of @sc{url} specification are also supported,
for historical (hysterical?) reasons and because of their widespread use.
327 @sc{ftp}-only syntax (supported by @code{NcFTP}):
332 @sc{http}-only syntax (introduced by @code{Netscape}):
337 These two alternative forms are deprecated, and may cease being
338 supported in the future.
340 If you do not understand the difference between these notations, or do
341 not know which one to use, just use the plain ordinary format you use
342 with your favorite browser, like @code{Lynx} or @code{Netscape}.
345 @section Option Syntax
346 @cindex option syntax
347 @cindex syntax of options
Since Wget uses GNU getopt to process its arguments, every option has a
short form and a long form. Long options are more convenient to
remember, but take time to type. You may freely mix different option
styles, or specify options after the command-line arguments. Thus you
may write:
356 wget -r --tries=10 http://fly.srk.fer.hr/ -o log
359 The space between the option accepting an argument and the argument may
be omitted. Instead of @samp{-o log} you can write @samp{-olog}.
362 You may put several options that do not require arguments together,
This is completely equivalent to:
372 wget -d -r -c @var{URL}
375 Since the options can be specified after the arguments, you may
376 terminate them with @samp{--}. So the following will try to download
377 @sc{url} @samp{-x}, reporting failure to @file{log}:
383 The options that accept comma-separated lists all respect the convention
384 that specifying an empty list clears its value. This can be useful to
385 clear the @file{.wgetrc} settings. For instance, if your @file{.wgetrc}
386 sets @code{exclude_directories} to @file{/cgi-bin}, the following
387 example will first reset it, and then set it to exclude @file{/~nobody}
388 and @file{/~somebody}. You can also clear the lists in @file{.wgetrc}
389 (@pxref{Wgetrc Syntax}).
392 wget -X '' -X /~nobody,/~somebody
397 @node Basic Startup Options
398 @section Basic Startup Options
403 Display the version of Wget.
407 Print a help message describing all of Wget's command-line options.
411 Go to background immediately after startup. If no output file is
specified via @samp{-o}, output is redirected to @file{wget-log}.
414 @cindex execute wgetrc command
415 @item -e @var{command}
416 @itemx --execute @var{command}
417 Execute @var{command} as if it were a part of @file{.wgetrc}
418 (@pxref{Startup File}). A command thus invoked will be executed
419 @emph{after} the commands in @file{.wgetrc}, thus taking precedence over
420 them. If you need to specify more than one wgetrc command, use multiple
421 instances of @samp{-e}.
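
For instance, this sketch sets the @code{robots} wgetrc command to
@samp{off} for a single run (the @sc{url} is a placeholder):

@example
wget -e robots=off -r http://@var{site}/
@end example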
425 @node Logging and Input File Options
426 @section Logging and Input File Options
431 @item -o @var{logfile}
432 @itemx --output-file=@var{logfile}
Log all messages to @var{logfile}. The messages are normally reported
to standard error.
436 @cindex append to log
437 @item -a @var{logfile}
438 @itemx --append-output=@var{logfile}
439 Append to @var{logfile}. This is the same as @samp{-o}, only it appends
440 to @var{logfile} instead of overwriting the old log file. If
441 @var{logfile} does not exist, a new file is created.
446 Turn on debug output, meaning various information important to the
447 developers of Wget if it does not work properly. Your system
448 administrator may have chosen to compile Wget without debug support, in
449 which case @samp{-d} will not work. Please note that compiling with
450 debug support is always safe---Wget compiled with the debug support will
451 @emph{not} print any debug info unless requested with @samp{-d}.
@xref{Reporting Bugs}, for more information on how to use @samp{-d} for
sending bug reports.
458 Turn off Wget's output.
Turn on verbose output, with all the available data. The default output
is verbose.
468 Non-verbose output---turn off verbose without being completely quiet
469 (use @samp{-q} for that), which means that error messages and basic
470 information still get printed.
474 @itemx --input-file=@var{file}
475 Read @sc{url}s from @var{file}, in which case no @sc{url}s need to be on
476 the command line. If there are @sc{url}s both on the command line and
in an input file, those on the command line will be the first ones to
478 be retrieved. The @var{file} need not be an @sc{html} document (but no
harm if it is)---it is enough if the @sc{url}s are just listed
sequentially.
482 However, if you specify @samp{--force-html}, the document will be
483 regarded as @samp{html}. In that case you may have problems with
484 relative links, which you can solve either by adding @code{<base
485 href="@var{url}">} to the documents or by specifying
486 @samp{--base=@var{url}} on the command line.
491 When input is read from a file, force it to be treated as an @sc{html}
492 file. This enables you to retrieve relative links from existing
493 @sc{html} files on your local disk, by adding @code{<base
href="@var{url}">} to @sc{html}, or using the @samp{--base} command-line
option.
497 @cindex base for relative links in input file
499 @itemx --base=@var{URL}
500 When used in conjunction with @samp{-F}, prepends @var{URL} to relative
501 links in the file specified by @samp{-i}.
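
For example, a sketch that reads relative links from a local @sc{html}
file and resolves them against a base (file and host names are
placeholders):

@example
wget -F -B http://@var{site}/subdir/ -i local-links.html
@end example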
504 @node Download Options
505 @section Download Options
508 @cindex bind() address
509 @cindex client IP address
510 @cindex IP address, client
511 @item --bind-address=@var{ADDRESS}
512 When making client TCP/IP connections, @code{bind()} to @var{ADDRESS} on
513 the local machine. @var{ADDRESS} may be specified as a hostname or IP
address. This option can be useful if your machine is bound to multiple
IPs.
519 @cindex number of retries
520 @item -t @var{number}
521 @itemx --tries=@var{number}
522 Set number of retries to @var{number}. Specify 0 or @samp{inf} for
523 infinite retrying. The default is to retry 20 times, with the exception
524 of fatal errors like ``connection refused'' or ``not found'' (404),
525 which are not retried.
528 @itemx --output-document=@var{file}
529 The documents will not be written to the appropriate files, but all will
530 be concatenated together and written to @var{file}. If @var{file}
531 already exists, it will be overwritten. If the @var{file} is @samp{-},
532 the documents will be written to standard output (disabling @samp{-k}).
Note that a combination with @samp{-k} is only well-defined for downloading
a single document.
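
For example, here is a sketch that sends a single document to standard
output for further processing (the @sc{url} is a placeholder):

@example
wget -q -O - http://@var{site}/page.html | grep -i '<title>'
@end example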
537 @cindex clobbering, file
538 @cindex downloading multiple times
542 If a file is downloaded more than once in the same directory, Wget's
543 behavior depends on a few options, including @samp{-nc}. In certain
544 cases, the local file will be @dfn{clobbered}, or overwritten, upon
545 repeated download. In other cases it will be preserved.
547 When running Wget without @samp{-N}, @samp{-nc}, or @samp{-r},
548 downloading the same file in the same directory will result in the
549 original copy of @var{file} being preserved and the second copy being
550 named @samp{@var{file}.1}. If that file is downloaded yet again, the
551 third copy will be named @samp{@var{file}.2}, and so on. When
552 @samp{-nc} is specified, this behavior is suppressed, and Wget will
553 refuse to download newer copies of @samp{@var{file}}. Therefore,
554 ``@code{no-clobber}'' is actually a misnomer in this mode---it's not
555 clobbering that's prevented (as the numeric suffixes were already
preventing clobbering), but rather the multiple version saving that's
prevented.
559 When running Wget with @samp{-r}, but without @samp{-N} or @samp{-nc},
560 re-downloading a file will result in the new copy simply overwriting the
561 old. Adding @samp{-nc} will prevent this behavior, instead causing the
original version to be preserved and any newer copies on the server to
be ignored.
565 When running Wget with @samp{-N}, with or without @samp{-r}, the
566 decision as to whether or not to download a newer copy of a file depends
567 on the local and remote timestamp and size of the file
(@pxref{Time-Stamping}). @samp{-nc} may not be specified at the same
time as @samp{-N}.
571 Note that when @samp{-nc} is specified, files with the suffixes
572 @samp{.html} or @samp{.htm} will be loaded from the local disk and
573 parsed as if they had been retrieved from the Web.
575 @cindex continue retrieval
576 @cindex incomplete downloads
577 @cindex resume download
580 Continue getting a partially-downloaded file. This is useful when you
581 want to finish up a download started by a previous instance of Wget, or
582 by another program. For instance:
585 wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
588 If there is a file named @file{ls-lR.Z} in the current directory, Wget
589 will assume that it is the first portion of the remote file, and will
590 ask the server to continue the retrieval from an offset equal to the
591 length of the local file.
593 Note that you don't need to specify this option if you just want the
594 current invocation of Wget to retry downloading a file should the
595 connection be lost midway through. This is the default behavior.
596 @samp{-c} only affects resumption of downloads started @emph{prior} to
597 this invocation of Wget, and whose local files are still sitting around.
599 Without @samp{-c}, the previous example would just download the remote
file to @file{ls-lR.Z.1}, leaving the truncated @file{ls-lR.Z} file
alone.
603 Beginning with Wget 1.7, if you use @samp{-c} on a non-empty file, and
604 it turns out that the server does not support continued downloading,
605 Wget will refuse to start the download from scratch, which would
606 effectively ruin existing contents. If you really want the download to
607 start from scratch, remove the file.
609 Also beginning with Wget 1.7, if you use @samp{-c} on a file which is of
610 equal size as the one on the server, Wget will refuse to download the
611 file and print an explanatory message. The same happens when the file
612 is smaller on the server than locally (presumably because it was changed
613 on the server since your last download attempt)---because ``continuing''
614 is not meaningful, no download occurs.
616 On the other side of the coin, while using @samp{-c}, any file that's
617 bigger on the server than locally will be considered an incomplete
618 download and only @code{(length(remote) - length(local))} bytes will be
619 downloaded and tacked onto the end of the local file. This behavior can
620 be desirable in certain cases---for instance, you can use @samp{wget -c}
621 to download just the new portion that's been appended to a data
622 collection or log file.
624 However, if the file is bigger on the server because it's been
625 @emph{changed}, as opposed to just @emph{appended} to, you'll end up
626 with a garbled file. Wget has no way of verifying that the local file
627 is really a valid prefix of the remote file. You need to be especially
628 careful of this when using @samp{-c} in conjunction with @samp{-r},
since every file will be considered as an ``incomplete download'' candidate.
631 Another instance where you'll get a garbled file if you try to use
632 @samp{-c} is if you have a lame @sc{http} proxy that inserts a
633 ``transfer interrupted'' string into the local file. In the future a
634 ``rollback'' option may be added to deal with this case.
636 Note that @samp{-c} only works with @sc{ftp} servers and with @sc{http}
637 servers that support the @code{Range} header.
639 @cindex progress indicator
641 @item --progress=@var{type}
642 Select the type of the progress indicator you wish to use. Legal
643 indicators are ``dot'' and ``bar''.
The ``bar'' indicator is used by default. It draws @sc{ascii} progress
bar graphics (a.k.a.@: ``thermometer'' display) indicating the status of
retrieval. If the output is not a TTY, the ``dot'' indicator will be
used by default.
650 Use @samp{--progress=dot} to switch to the ``dot'' display. It traces
651 the retrieval by printing dots on the screen, each dot representing a
652 fixed amount of downloaded data.
654 When using the dotted retrieval, you may also set the @dfn{style} by
655 specifying the type as @samp{dot:@var{style}}. Different styles assign
656 different meaning to one dot. With the @code{default} style each dot
657 represents 1K, there are ten dots in a cluster and 50 dots in a line.
The @code{binary} style has a more ``computer''-like orientation---8K
dots, 16-dot clusters and 48 dots per line (which makes for 384K per
line). The @code{mega} style is suitable for downloading very large
661 files---each dot represents 64K retrieved, there are eight dots in a
662 cluster, and 48 dots on each line (so each line contains 3M).
664 Note that you can set the default style using the @code{progress}
665 command in @file{.wgetrc}. That setting may be overridden from the
666 command line. The exception is that, when the output is not a TTY, the
667 ``dot'' progress will be favored over ``bar''. To force the bar output,
668 use @samp{--progress=bar:force}.
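
For instance, a sketch for a large download (the @sc{url} is a
placeholder):

@example
wget --progress=dot:mega http://@var{site}/@var{big-file}.iso
@end example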
671 @itemx --timestamping
672 Turn on time-stamping. @xref{Time-Stamping}, for details.
674 @cindex server response, print
676 @itemx --server-response
Print the headers sent by @sc{http} servers and responses sent by
@sc{ftp} servers.
680 @cindex Wget as spider
683 When invoked with this option, Wget will behave as a Web @dfn{spider},
684 which means that it will not download the pages, just check that they
685 are there. For example, you can use Wget to check your bookmarks:
688 wget --spider --force-html -i bookmarks.html
691 This feature needs much more work for Wget to get close to the
692 functionality of real web spiders.
696 @itemx --timeout=@var{seconds}
697 Set the network timeout to @var{seconds} seconds. This is equivalent
698 to specifying @samp{--dns-timeout}, @samp{--connect-timeout}, and
699 @samp{--read-timeout}, all at the same time.
701 Whenever Wget connects to or reads from a remote host, it checks for a
702 timeout and aborts the operation if the time expires. This prevents
703 anomalous occurrences such as hanging reads or infinite connects. The
704 only timeout enabled by default is a 900-second timeout for reading.
705 Setting timeout to 0 disables checking for timeouts.
707 Unless you know what you are doing, it is best not to set any of the
708 timeout-related options.
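
Should you need one anyway, here is a minimal sketch that caps all
three timeouts at 30 seconds (the @sc{url} is a placeholder):

@example
wget --timeout=30 http://@var{site}/
@end example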
712 @item --dns-timeout=@var{seconds}
713 Set the DNS lookup timeout to @var{seconds} seconds. DNS lookups that
714 don't complete within the specified time will fail. By default, there
is no timeout on DNS lookups, other than that implemented by system
libraries.
718 @cindex connect timeout
719 @cindex timeout, connect
720 @item --connect-timeout=@var{seconds}
721 Set the connect timeout to @var{seconds} seconds. TCP connections that
722 take longer to establish will be aborted. By default, there is no
723 connect timeout, other than that implemented by system libraries.
726 @cindex timeout, read
727 @item --read-timeout=@var{seconds}
728 Set the read (and write) timeout to @var{seconds} seconds. Reads that
take longer will fail. The default value for read timeout is 900
seconds.
732 @cindex bandwidth, limit
734 @cindex limit bandwidth
735 @item --limit-rate=@var{amount}
736 Limit the download speed to @var{amount} bytes per second. Amount may
737 be expressed in bytes, kilobytes with the @samp{k} suffix, or megabytes
738 with the @samp{m} suffix. For example, @samp{--limit-rate=20k} will
limit the retrieval rate to 20KB/s. This is useful when, for whatever
reason, you don't want Wget to consume the entire available bandwidth.
743 Note that Wget implements the limiting by sleeping the appropriate
744 amount of time after a network read that took less time than specified
745 by the rate. Eventually this strategy causes the TCP transfer to slow
746 down to approximately the specified rate. However, it may take some
747 time for this balance to be achieved, so don't be surprised if limiting
748 the rate doesn't work well with very small files.
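
For example, a sketch that keeps a large download at roughly 20KB/s
(the @sc{url} is a placeholder):

@example
wget --limit-rate=20k http://@var{site}/@var{big-file}.iso
@end example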
752 @item -w @var{seconds}
753 @itemx --wait=@var{seconds}
Wait the specified number of seconds between retrievals. Use of
this option is recommended, as it lightens the server load by making the
requests less frequent. Instead of in seconds, the time can be
specified in minutes using the @code{m} suffix, in hours using the
@code{h} suffix, or in days using the @code{d} suffix.
760 Specifying a large value for this option is useful if the network or the
761 destination host is down, so that Wget can wait long enough to
762 reasonably expect the network error to be fixed before the retry.
764 @cindex retries, waiting between
765 @cindex waiting between retries
766 @item --waitretry=@var{seconds}
767 If you don't want Wget to wait between @emph{every} retrieval, but only
768 between retries of failed downloads, you can use this option. Wget will
769 use @dfn{linear backoff}, waiting 1 second after the first failure on a
770 given file, then waiting 2 seconds after the second failure on that
771 file, up to the maximum number of @var{seconds} you specify. Therefore,
a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) = 55
seconds per file.
Note that this option is turned on by default in the global
@file{wgetrc} file.
781 Some web sites may perform log analysis to identify retrieval programs
782 such as Wget by looking for statistically significant similarities in
783 the time between requests. This option causes the time between requests
784 to vary between 0 and 2 * @var{wait} seconds, where @var{wait} was
785 specified using the @samp{--wait} option, in order to mask Wget's
786 presence from such analysis.
788 A recent article in a publication devoted to development on a popular
789 consumer platform provided code to perform this analysis on the fly.
790 Its author suggested blocking at the class C address level to ensure
automated retrieval programs were blocked despite changing DHCP-supplied
addresses.
794 The @samp{--random-wait} option was inspired by this ill-advised
recommendation to block many unrelated users from a web site due to the
actions of one.
800 @itemx --proxy=on/off
801 Turn proxy support on or off. The proxy is on by default if the
802 appropriate environment variable is defined.
For more information about the use of proxies with Wget, see @ref{Proxies}.
808 @itemx --quota=@var{quota}
809 Specify download quota for automatic retrievals. The value can be
810 specified in bytes (default), kilobytes (with @samp{k} suffix), or
811 megabytes (with @samp{m} suffix).
813 Note that quota will never affect downloading a single file. So if you
814 specify @samp{wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz}, all of the
815 @file{ls-lR.gz} will be downloaded. The same goes even when several
816 @sc{url}s are specified on the command-line. However, quota is
817 respected when retrieving either recursively, or from an input file.
818 Thus you may safely type @samp{wget -Q2m -i sites}---download will be
819 aborted when the quota is exceeded.
821 Setting quota to 0 or to @samp{inf} unlimits the download quota.
824 @cindex caching of DNS lookups
826 Turn off caching of DNS lookups. Normally, Wget remembers the IP
827 addresses it looked up from DNS so it doesn't have to repeatedly
828 contact the DNS server for the same (typically small) set of hosts it
retrieves from. This cache exists in memory only; a new Wget run will
contact DNS again.
832 However, it has been reported that in some situations it is not
833 desirable to cache host names, even for the duration of a
834 short-running application like Wget. With this option Wget issues a
835 new DNS lookup (more precisely, a new call to @code{gethostbyname} or
836 @code{getaddrinfo}) each time it makes a new connection. Please note
837 that this option will @emph{not} affect caching that might be
performed by the resolving library or by an external caching layer,
such as NSCD.
If you don't understand exactly what this option does, you probably
won't need it.
844 @cindex file names, restrict
845 @cindex Windows file names
846 @item --restrict-file-names=@var{mode}
847 Change which characters found in remote URLs may show up in local file
848 names generated from those URLs. Characters that are @dfn{restricted}
849 by this option are escaped, i.e. replaced with @samp{%HH}, where
@samp{HH} is the hexadecimal number that corresponds to the restricted
character.
853 By default, Wget escapes the characters that are not valid as part of
854 file names on your operating system, as well as control characters that
855 are typically unprintable. This option is useful for changing these
856 defaults, either because you are downloading to a non-native partition,
857 or because you want to disable escaping of the control characters.
859 When mode is set to ``unix'', Wget escapes the character @samp{/} and
860 the control characters in the ranges 0--31 and 128--159. This is the
default on Unix-like operating systems.
863 When mode is set to ``windows'', Wget escapes the characters @samp{\},
864 @samp{|}, @samp{/}, @samp{:}, @samp{?}, @samp{"}, @samp{*}, @samp{<},
865 @samp{>}, and the control characters in the ranges 0--31 and 128--159.
866 In addition to this, Wget in Windows mode uses @samp{+} instead of
867 @samp{:} to separate host and port in local file names, and uses
868 @samp{@@} instead of @samp{?} to separate the query portion of the file
869 name from the rest. Therefore, a URL that would be saved as
870 @samp{www.xemacs.org:4300/search.pl?input=blah} in Unix mode would be
871 saved as @samp{www.xemacs.org+4300/search.pl@@input=blah} in Windows
872 mode. This mode is the default on Windows.
874 If you append @samp{,nocontrol} to the mode, as in
875 @samp{unix,nocontrol}, escaping of the control characters is also
876 switched off. You can use @samp{--restrict-file-names=nocontrol} to
877 turn off escaping of control characters without affecting the choice of
878 the OS to use as file name restriction mode.
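
For example, a sketch that applies the Windows restrictions while
leaving control characters unescaped (the @sc{url} is a placeholder):

@example
wget --restrict-file-names=windows,nocontrol http://@var{site}/
@end example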
885 Force connecting to IPv4 or IPv6 addresses. With @samp{--inet4-only}
886 or @samp{-4}, Wget will only connect to IPv4 hosts, ignoring AAAA
887 records in DNS, and refusing to connect to IPv6 addresses specified in
888 URLs. Conversely, with @samp{--inet6-only} or @samp{-6}, Wget will
889 only connect to IPv6 hosts and ignore A records and IPv4 addresses.
Neither option should normally be needed. By default, an IPv6-aware
892 Wget will use the address family specified by the host's DNS record.
893 If the DNS specifies both an A record and an AAAA record, Wget will
894 try them in sequence until it finds one it can connect to.
896 These options can be used to deliberately force the use of IPv4 or
897 IPv6 address families on dual family systems, usually to aid debugging
898 or to deal with broken network configuration. Only one of
899 @samp{--inet6-only} and @samp{--inet4-only} may be specified in the
same command. Neither option is available in Wget compiled without
IPv6 support.
903 Note: the current implementation of the @samp{-6} switch allows IPv4
904 addresses mapped into IPv6 addresses to be connected to. This usage
is not intended to be condoned, and it might be removed in a later
version.
909 @node Directory Options
910 @section Directory Options
914 @itemx --no-directories
915 Do not create a hierarchy of directories when retrieving recursively.
916 With this option turned on, all files will get saved to the current
917 directory, without clobbering (if a name shows up more than once, the
918 filenames will get extensions @samp{.n}).
921 @itemx --force-directories
922 The opposite of @samp{-nd}---create a hierarchy of directories, even if
923 one would not have been created otherwise. E.g. @samp{wget -x
924 http://fly.srk.fer.hr/robots.txt} will save the downloaded file to
925 @file{fly.srk.fer.hr/robots.txt}.
928 @itemx --no-host-directories
929 Disable generation of host-prefixed directories. By default, invoking
930 Wget with @samp{-r http://fly.srk.fer.hr/} will create a structure of
directories beginning with @file{fly.srk.fer.hr/}. This option disables
such behavior.
934 @item --protocol-directories
935 Use the protocol name as a directory component of local file names. For
936 example, with this option, @samp{wget -r http://@var{host}} will save to
937 @samp{http/@var{host}/...} rather than just to @samp{@var{host}/...}.
944 @cindex cut directories
945 @item --cut-dirs=@var{number}
946 Ignore @var{number} directory components. This is useful for getting a
fine-grained control over the directory where recursive retrieval will
be saved.
950 Take, for example, the directory at
951 @samp{ftp://ftp.xemacs.org/pub/xemacs/}. If you retrieve it with
952 @samp{-r}, it will be saved locally under
953 @file{ftp.xemacs.org/pub/xemacs/}. While the @samp{-nH} option can
954 remove the @file{ftp.xemacs.org/} part, you are still stuck with
955 @file{pub/xemacs}. This is where @samp{--cut-dirs} comes in handy; it
956 makes Wget not ``see'' @var{number} remote directory components. Here
are several examples of how the @samp{--cut-dirs} option works.
961 No options -> ftp.xemacs.org/pub/xemacs/
963 -nH --cut-dirs=1 -> xemacs/
964 -nH --cut-dirs=2 -> .
966 --cut-dirs=1 -> ftp.xemacs.org/xemacs/
971 If you just want to get rid of the directory structure, this option is
972 similar to a combination of @samp{-nd} and @samp{-P}. However, unlike
@samp{-nd}, @samp{--cut-dirs} does not lose subdirectories---for
instance, with @samp{-nH --cut-dirs=1}, a @file{beta/} subdirectory will
be placed in @file{xemacs/beta}, as one would expect.
977 @cindex directory prefix
978 @item -P @var{prefix}
979 @itemx --directory-prefix=@var{prefix}
980 Set directory prefix to @var{prefix}. The @dfn{directory prefix} is the
981 directory where all other files and subdirectories will be saved to,
i.e. the top of the retrieval tree. The default is @samp{.} (the
current directory).
987 @section HTTP Options
990 @cindex .html extension
992 @itemx --html-extension
993 If a file of type @samp{application/xhtml+xml} or @samp{text/html} is
994 downloaded and the URL does not end with the regexp
995 @samp{\.[Hh][Tt][Mm][Ll]?}, this option will cause the suffix @samp{.html}
996 to be appended to the local filename. This is useful, for instance, when
997 you're mirroring a remote site that uses @samp{.asp} pages, but you want
998 the mirrored pages to be viewable on your stock Apache server. Another
999 good use for this is when you're downloading CGI-generated materials. A URL
1000 like @samp{http://site.com/article.cgi?25} will be saved as
1001 @file{article.cgi?25.html}.
1003 Note that filenames changed in this way will be re-downloaded every time
1004 you re-mirror a site, because Wget can't tell that the local
1005 @file{@var{X}.html} file corresponds to remote URL @samp{@var{X}} (since
1006 it doesn't yet know that the URL produces output of type
@samp{text/html} or @samp{application/xhtml+xml}). To prevent this
1008 re-downloading, you must use @samp{-k} and @samp{-K} so that the original
1009 version of the file will be saved as @file{@var{X}.orig} (@pxref{Recursive
1010 Retrieval Options}).
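
For example, a sketch combining these options on the @sc{cgi}-generated
page mentioned above:

@example
wget -E -k -K 'http://site.com/article.cgi?25'
@end example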
1013 @cindex http password
1014 @cindex authentication
1015 @item --http-user=@var{user}
1016 @itemx --http-passwd=@var{password}
1017 Specify the username @var{user} and password @var{password} on an
1018 @sc{http} server. According to the type of the challenge, Wget will
1019 encode them using either the @code{basic} (insecure) or the
1020 @code{digest} authentication scheme.
1022 Another way to specify username and password is in the @sc{url} itself
1023 (@pxref{URL Format}). Either method reveals your password to anyone who
1024 bothers to run @code{ps}. To prevent the passwords from being seen,
1025 store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
1026 those files from other users with @code{chmod}. If the passwords are
1027 really important, do not leave them lying in those files either---edit
1028 the files and delete them after Wget has started the download.
For more information about security issues with Wget, see @ref{Security
Considerations}.
1036 Disable server-side cache. In this case, Wget will send the remote
1037 server an appropriate directive (@samp{Pragma: no-cache}) to get the
1038 file from the remote service, rather than returning the cached version.
1039 This is especially useful for retrieving and flushing out-of-date
1040 documents on proxy servers.
1042 Caching is allowed by default.
1046 Disable the use of cookies. Cookies are a mechanism for maintaining
1047 server-side state. The server sends the client a cookie using the
1048 @code{Set-Cookie} header, and the client responds with the same cookie
1049 upon further requests. Since cookies allow the server owners to keep
1050 track of visitors and for sites to exchange this information, some
1051 consider them a breach of privacy. The default is to use cookies;
1052 however, @emph{storing} cookies is not on by default.
1054 @cindex loading cookies
1055 @cindex cookies, loading
1056 @item --load-cookies @var{file}
1057 Load cookies from @var{file} before the first HTTP retrieval.
1058 @var{file} is a textual file in the format originally used by Netscape's
1059 @file{cookies.txt} file.
1061 You will typically use this option when mirroring sites that require
1062 that you be logged in to access some or all of their content. The login
1063 process typically works by the web server issuing an @sc{http} cookie
1064 upon receiving and verifying your credentials. The cookie is then
1065 resent by the browser when accessing that part of the site, and so
1066 proves your identity.
1068 Mirroring such a site requires Wget to send the same cookies your
1069 browser sends when communicating with the site. This is achieved by
1070 @samp{--load-cookies}---simply point Wget to the location of the
1071 @file{cookies.txt} file, and it will send the same cookies your browser
1072 would send in the same situation. Different browsers keep textual
1073 cookie files in different locations:
1077 The cookies are in @file{~/.netscape/cookies.txt}.
1079 @item Mozilla and Netscape 6.x.
1080 Mozilla's cookie file is also named @file{cookies.txt}, located
1081 somewhere under @file{~/.mozilla}, in the directory of your profile.
1082 The full path usually ends up looking somewhat like
1083 @file{~/.mozilla/default/@var{some-weird-string}/cookies.txt}.
1085 @item Internet Explorer.
1086 You can produce a cookie file Wget can use by using the File menu,
1087 Import and Export, Export Cookies. This has been tested with Internet
1088 Explorer 5; it is not guaranteed to work with earlier versions.
1090 @item Other browsers.
1091 If you are using a different browser to create your cookies,
1092 @samp{--load-cookies} will only work if you can locate or produce a
1093 cookie file in the Netscape format that Wget expects.
1096 If you cannot use @samp{--load-cookies}, there might still be an
1097 alternative. If your browser supports a ``cookie manager'', you can use
1098 it to view the cookies used when accessing the site you're mirroring.
1099 Write down the name and value of the cookie, and manually instruct Wget
1100 to send those cookies, bypassing the ``official'' cookie support:
1103 wget --no-cookies --header "Cookie: @var{name}=@var{value}"
1106 @cindex saving cookies
1107 @cindex cookies, saving
1108 @item --save-cookies @var{file}
1109 Save cookies to @var{file} before exiting. This will not save cookies
1110 that have expired or that have no expiry time (so-called ``session
1111 cookies''), but also see @samp{--keep-session-cookies}.
1113 @cindex cookies, session
1114 @cindex session cookies
1115 @item --keep-session-cookies
1117 When specified, causes @samp{--save-cookies} to also save session
cookies. Session cookies are normally not saved because they are
1119 supposed to be forgotten when you exit the browser. Saving them is
1120 useful on sites that require you to log in or to visit the home page
1121 before you can access some pages. With this option, multiple Wget runs
1122 are considered a single browser session as far as the site is concerned.
1124 Since the cookie file format does not normally carry session cookies,
1125 Wget marks them with an expiry timestamp of 0. Wget's
1126 @samp{--load-cookies} recognizes those as session cookies, but it might
1127 confuse other browsers. Also note that cookies so loaded will be
1128 treated as other session cookies, which means that if you want
1129 @samp{--save-cookies} to preserve them again, you must use
1130 @samp{--keep-session-cookies} again.
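
For example, a sketch of a login run that also records the session
cookie for later runs (reusing the hypothetical server and form fields
from the POST example further below):

@example
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data 'user=foo&password=bar' \
     http://server.com/auth.php
@end example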
1132 @cindex Content-Length, ignore
1133 @cindex ignore length
1134 @item --ignore-length
1135 Unfortunately, some @sc{http} servers (@sc{cgi} programs, to be more
1136 precise) send out bogus @code{Content-Length} headers, which makes Wget
1137 go wild, as it thinks not all the document was retrieved. You can spot
1138 this syndrome if Wget retries getting the same document again and again,
each time claiming that the (otherwise normal) connection has closed on
the very same byte.
1142 With this option, Wget will ignore the @code{Content-Length} header---as
1143 if it never existed.
1146 @item --header=@var{additional-header}
1147 Define an @var{additional-header} to be passed to the @sc{http} servers.
1148 Headers must contain a @samp{:} preceded by one or more non-blank
1149 characters, and must not contain newlines.
1151 You may define more than one additional header by specifying
1152 @samp{--header} more than once.
1156 wget --header='Accept-Charset: iso-8859-2' \
1157 --header='Accept-Language: hr' \
1158 http://fly.srk.fer.hr/
1162 Specification of an empty string as the header value will clear all
1163 previous user-defined headers.
1166 @cindex proxy password
1167 @cindex proxy authentication
1168 @item --proxy-user=@var{user}
1169 @itemx --proxy-passwd=@var{password}
1170 Specify the username @var{user} and password @var{password} for
1171 authentication on a proxy server. Wget will encode them using the
1172 @code{basic} authentication scheme.
1174 Security considerations similar to those with @samp{--http-passwd}
1175 pertain here as well.
1177 @cindex http referer
1178 @cindex referer, http
1179 @item --referer=@var{url}
Include the `Referer: @var{url}' header in the HTTP request. Useful for
retrieving documents with server-side processing that assumes they are
always being retrieved by interactive web browsers and only come out
properly when the Referer is set to one of the pages that point to them.
1185 @cindex server response, save
1186 @item --save-headers
1187 Save the headers sent by the @sc{http} server to the file, preceding the
1188 actual contents, with an empty line as the separator.
1191 @item -U @var{agent-string}
1192 @itemx --user-agent=@var{agent-string}
1193 Identify as @var{agent-string} to the @sc{http} server.
1195 The @sc{http} protocol allows the clients to identify themselves using a
1196 @code{User-Agent} header field. This enables distinguishing the
1197 @sc{www} software, usually for statistical purposes or for tracing of
1198 protocol violations. Wget normally identifies as
@samp{Wget/@var{version}}, @var{version} being the current version
number of Wget.
1202 However, some sites have been known to impose the policy of tailoring
1203 the output according to the @code{User-Agent}-supplied information.
1204 While conceptually this is not such a bad idea, it has been abused by
1205 servers denying information to clients other than @code{Mozilla} or
1206 Microsoft @code{Internet Explorer}. This option allows you to change
1207 the @code{User-Agent} line issued by Wget. Use of this option is
1208 discouraged, unless you really know what you are doing.
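
If you do need to change it, here is a sketch; the agent string and
@sc{url} shown are only stand-ins:

@example
wget --user-agent='Mozilla/5.0 (compatible)' http://@var{site}/
@end example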
1211 @item --post-data=@var{string}
1212 @itemx --post-file=@var{file}
1213 Use POST as the method for all HTTP requests and send the specified data
1214 in the request body. @code{--post-data} sends @var{string} as data,
1215 whereas @code{--post-file} sends the contents of @var{file}. Other than
1216 that, they work in exactly the same way.
1218 Please be aware that Wget needs to know the size of the POST data in
1219 advance. Therefore the argument to @code{--post-file} must be a regular
1220 file; specifying a FIFO or something like @file{/dev/stdin} won't work.
1221 It's not quite clear how to work around this limitation inherent in
1222 HTTP/1.0. Although HTTP/1.1 introduces @dfn{chunked} transfer that
1223 doesn't require knowing the request length in advance, a client can't
1224 use chunked unless it knows it's talking to an HTTP/1.1 server. And it
1225 can't know that until it receives a response, which in turn requires the
request to have been completed---a chicken-and-egg problem.
1228 Note: if Wget is redirected after the POST request is completed, it will
1229 not send the POST data to the redirected URL. This is because URLs that
1230 process POST often respond with a redirection to a regular page
1231 (although that's technically disallowed), which does not desire or
1232 accept POST. It is not yet clear that this behavior is optimal; if it
1233 doesn't work out, it will be changed.
This example shows how to log in to a server using POST and then
proceed to download the desired pages, presumably only accessible to
authorized users:
1241 # @r{Log in to the server. This can be done only once.}
1242 wget --save-cookies cookies.txt \
1243 --post-data 'user=foo&password=bar' \
1244 http://server.com/auth.php
1246 # @r{Now grab the page or pages we care about.}
1247 wget --load-cookies cookies.txt \
1248 -p http://server.com/interesting/article.php
1254 @section FTP Options
1257 @cindex password, FTP
1258 @item --ftp-passwd=@var{string}
1259 Set the default FTP password to @var{string}. Without this, or the
1260 corresponding startup option, the password defaults to @samp{-wget@@},
1261 normally used for anonymous FTP.
1263 @cindex .listing files, removing
1264 @item --no-remove-listing
1265 Don't remove the temporary @file{.listing} files generated by @sc{ftp}
1266 retrievals. Normally, these files contain the raw directory listings
1267 received from @sc{ftp} servers. Not removing them can be useful for
1268 debugging purposes, or when you want to be able to easily check on the
1269 contents of remote server directories (e.g. to verify that a mirror
1270 you're running is complete).
1272 Note that even though Wget writes to a known filename for this file,
1273 this is not a security hole in the scenario of a user making
1274 @file{.listing} a symbolic link to @file{/etc/passwd} or something and
1275 asking @code{root} to run Wget in his or her directory. Depending on
1276 the options used, either Wget will refuse to write to @file{.listing},
1277 making the globbing/recursion/time-stamping operation fail, or the
1278 symbolic link will be deleted and replaced with the actual
1279 @file{.listing} file, or the listing will be written to a
1280 @file{.listing.@var{number}} file.
Even though this situation isn't a problem, @code{root} should
1283 never run Wget in a non-trusted user's directory. A user could do
1284 something as simple as linking @file{index.html} to @file{/etc/passwd}
1285 and asking @code{root} to run Wget with @samp{-N} or @samp{-r} so the file
1286 will be overwritten.
1288 @cindex globbing, toggle
1290 Turn off @sc{ftp} globbing. Globbing refers to the use of shell-like
1291 special characters (@dfn{wildcards}), like @samp{*}, @samp{?}, @samp{[}
and @samp{]} to retrieve more than one file from the same directory at
once, like:
1296 wget ftp://gnjilux.srk.fer.hr/*.msg
1299 By default, globbing will be turned on if the @sc{url} contains a
globbing character. This option may be used to turn globbing on or off
permanently.
1303 You may have to quote the @sc{url} to protect it from being expanded by
1304 your shell. Globbing makes Wget look for a directory listing, which is
1305 system-specific. This is why it currently works only with Unix @sc{ftp}
1306 servers (and the ones emulating Unix @code{ls} output).
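
For example, quoting prevents the shell from expanding the wildcard
before Wget sees it:

@example
wget 'ftp://gnjilux.srk.fer.hr/*.msg'
@end example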
1309 @item --no-passive-ftp
1310 Disable the use of the @dfn{passive} FTP transfer mode. Passive FTP
1311 mandates that the client connect to the server to establish the data
1312 connection rather than the other way around.
1314 If the machine is connected to the Internet directly, both passive and
1315 active FTP should work equally well. Behind most firewall and NAT
1316 configurations passive FTP has a better chance of working. However,
1317 in some rare firewall configurations, active FTP actually works when
1318 passive FTP doesn't. If you suspect this to be the case, use this
1319 option, or set @code{passive_ftp=off} in your init file.
1321 @cindex symbolic links, retrieving
1322 @item --retr-symlinks
1323 Usually, when retrieving @sc{ftp} directories recursively and a symbolic
1324 link is encountered, the linked-to file is not downloaded. Instead, a
1325 matching symbolic link is created on the local filesystem. The
1326 pointed-to file will not be downloaded unless this recursive retrieval
1327 would have encountered it separately and downloaded it anyway.
1329 When @samp{--retr-symlinks} is specified, however, symbolic links are
1330 traversed and the pointed-to files are retrieved. At this time, this
1331 option does not cause Wget to traverse symlinks to directories and
recurse through them, but in the future it should be enhanced to do
this.
1335 Note that when retrieving a file (not a directory) because it was
1336 specified on the command-line, rather than because it was recursed to,
this option has no effect. Symbolic links are always traversed in this
case.
1340 @cindex Keep-Alive, turning off
1341 @cindex Persistent Connections, disabling
1342 @item --no-http-keep-alive
1343 Turn off the ``keep-alive'' feature for HTTP downloads. Normally, Wget
1344 asks the server to keep the connection open so that, when you download
1345 more than one document from the same server, they get transferred over
1346 the same TCP connection. This saves time and at the same time reduces
1347 the load on the server.
1349 This option is useful when, for some reason, persistent (keep-alive)
1350 connections don't work for you, for example due to a server bug or due
1351 to the inability of server-side scripts to cope with the connections.
1354 @node Recursive Retrieval Options
1355 @section Recursive Retrieval Options
Turn on recursive retrieving. @xref{Recursive Download}, for more
details.
1363 @item -l @var{depth}
1364 @itemx --level=@var{depth}
1365 Specify recursion maximum depth level @var{depth} (@pxref{Recursive
1366 Download}). The default maximum depth is 5.
1368 @cindex proxy filling
1369 @cindex delete after retrieval
1370 @cindex filling proxy cache
1371 @item --delete-after
1372 This option tells Wget to delete every single file it downloads,
1373 @emph{after} having done so. It is useful for pre-fetching popular
1374 pages through a proxy, e.g.:
1377 wget -r -nd --delete-after http://whatever.com/~popular/page/
The @samp{-r} option is to retrieve recursively, and @samp{-nd} to not
create directories.
1383 Note that @samp{--delete-after} deletes files on the local machine. It
1384 does not issue the @samp{DELE} command to remote FTP sites, for
1385 instance. Also note that when @samp{--delete-after} is specified,
1386 @samp{--convert-links} is ignored, so @samp{.orig} files are simply not
1387 created in the first place.
1389 @cindex conversion of links
1390 @cindex link conversion
1392 @itemx --convert-links
1393 After the download is complete, convert the links in the document to
1394 make them suitable for local viewing. This affects not only the visible
1395 hyperlinks, but any part of the document that links to external content,
such as embedded images, links to style sheets, hyperlinks to non-@sc{html}
content, and so on.
Each link will be changed in one of two ways:
1403 The links to files that have been downloaded by Wget will be changed to
1404 refer to the file they point to as a relative link.
1406 Example: if the downloaded file @file{/foo/doc.html} links to
1407 @file{/bar/img.gif}, also downloaded, then the link in @file{doc.html}
1408 will be modified to point to @samp{../bar/img.gif}. This kind of
1409 transformation works reliably for arbitrary combinations of directories.
1412 The links to files that have not been downloaded by Wget will be changed
1413 to include host name and absolute path of the location they point to.
1415 Example: if the downloaded file @file{/foo/doc.html} links to
1416 @file{/bar/img.gif} (or to @file{../bar/img.gif}), then the link in
1417 @file{doc.html} will be modified to point to
1418 @file{http://@var{hostname}/bar/img.gif}.
1421 Because of this, local browsing works reliably: if a linked file was
1422 downloaded, the link will refer to its local name; if it was not
1423 downloaded, the link will refer to its full Internet address rather than
1424 presenting a broken link. The fact that the former links are converted
to relative links ensures that you can move the downloaded hierarchy to
another directory.
1428 Note that only at the end of the download can Wget know which links have
1429 been downloaded. Because of that, the work done by @samp{-k} will be
1430 performed at the end of all the downloads.
1432 @cindex backing up converted files
1434 @itemx --backup-converted
1435 When converting a file, back up the original version with a @samp{.orig}
suffix. Affects the behavior of @samp{-N} (@pxref{HTTP Time-Stamping
Internals}).
1441 Turn on options suitable for mirroring. This option turns on recursion
1442 and time-stamping, sets infinite recursion depth and keeps @sc{ftp}
1443 directory listings. It is currently equivalent to
1444 @samp{-r -N -l inf --no-remove-listing}.
1446 @cindex page requisites
1447 @cindex required images, downloading
1449 @itemx --page-requisites
1450 This option causes Wget to download all the files that are necessary to
1451 properly display a given @sc{html} page. This includes such things as
1452 inlined images, sounds, and referenced stylesheets.
1454 Ordinarily, when downloading a single @sc{html} page, any requisite documents
1455 that may be needed to display it properly are not downloaded. Using
1456 @samp{-r} together with @samp{-l} can help, but since Wget does not
1457 ordinarily distinguish between external and inlined documents, one is
generally left with ``leaf documents'' that are missing their
requisites.
1461 For instance, say document @file{1.html} contains an @code{<IMG>} tag
1462 referencing @file{1.gif} and an @code{<A>} tag pointing to external
1463 document @file{2.html}. Say that @file{2.html} is similar but that its
1464 image is @file{2.gif} and it links to @file{3.html}. Say this
1465 continues up to some arbitrarily high number.
1467 If one executes the command:
1470 wget -r -l 2 http://@var{site}/1.html
1473 then @file{1.html}, @file{1.gif}, @file{2.html}, @file{2.gif}, and
1474 @file{3.html} will be downloaded. As you can see, @file{3.html} is
1475 without its requisite @file{3.gif} because Wget is simply counting the
1476 number of hops (up to 2) away from @file{1.html} in order to determine
1477 where to stop the recursion. However, with this command:
1480 wget -r -l 2 -p http://@var{site}/1.html
1483 all the above files @emph{and} @file{3.html}'s requisite @file{3.gif}
1484 will be downloaded. Similarly,
1487 wget -r -l 1 -p http://@var{site}/1.html
1490 will cause @file{1.html}, @file{1.gif}, @file{2.html}, and @file{2.gif}
1491 to be downloaded. One might think that:
1494 wget -r -l 0 -p http://@var{site}/1.html
1497 would download just @file{1.html} and @file{1.gif}, but unfortunately
1498 this is not the case, because @samp{-l 0} is equivalent to
1499 @samp{-l inf}---that is, infinite recursion. To download a single @sc{html}
1500 page (or a handful of them, all specified on the command-line or in a
1501 @samp{-i} @sc{url} input file) and its (or their) requisites, simply leave off
1502 @samp{-r} and @samp{-l}:
1505 wget -p http://@var{site}/1.html
1508 Note that Wget will behave as if @samp{-r} had been specified, but only
1509 that single page and its requisites will be downloaded. Links from that
1510 page to external documents will not be followed. Actually, to download
1511 a single page and all its requisites (even if they exist on separate
1512 websites), and make sure the lot displays properly locally, this author
1513 likes to use a few options in addition to @samp{-p}:
1516 wget -E -H -k -K -p http://@var{site}/@var{document}
1519 To finish off this topic, it's worth knowing that Wget's idea of an
1520 external document link is any URL specified in an @code{<A>} tag, an
1521 @code{<AREA>} tag, or a @code{<LINK>} tag other than @code{<LINK
1524 @cindex @sc{html} comments
1525 @cindex comments, @sc{html}
1526 @item --strict-comments
1527 Turn on strict parsing of @sc{html} comments. The default is to terminate
1528 comments at the first occurrence of @samp{-->}.
1530 According to specifications, @sc{html} comments are expressed as @sc{sgml}
@dfn{declarations}. A declaration is special markup that begins with
1532 @samp{<!} and ends with @samp{>}, such as @samp{<!DOCTYPE ...>}, that
1533 may contain comments between a pair of @samp{--} delimiters. @sc{html}
1534 comments are ``empty declarations'', @sc{sgml} declarations without any
1535 non-comment text. Therefore, @samp{<!--foo-->} is a valid comment, and
1536 so is @samp{<!--one-- --two-->}, but @samp{<!--1--2-->} is not.
1538 On the other hand, most @sc{html} writers don't perceive comments as anything
1539 other than text delimited with @samp{<!--} and @samp{-->}, which is not
1540 quite the same. For example, something like @samp{<!------------>}
1541 works as a valid comment as long as the number of dashes is a multiple
1542 of four (!). If not, the comment technically lasts until the next
1543 @samp{--}, which may be at the other end of the document. Because of
1544 this, many popular browsers completely ignore the specification and
1545 implement what users have come to expect: comments delimited with
1546 @samp{<!--} and @samp{-->}.
1548 Until version 1.9, Wget interpreted comments strictly, which resulted in
1549 missing links in many web pages that displayed fine in browsers, but had
1550 the misfortune of containing non-compliant comments. Beginning with
version 1.9, Wget has joined the ranks of clients that implement
1552 ``naive'' comments, terminating each comment at the first occurrence of
1555 If, for whatever reason, you want strict comment parsing, use this
1556 option to turn it on.
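For instance, to recurse through a site whose comments you know to be
@sc{sgml}-compliant (the @sc{url} is only illustrative):

@example
wget -r --strict-comments http://@var{site}/
@end example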
1559 @node Recursive Accept/Reject Options
1560 @section Recursive Accept/Reject Options
1563 @item -A @var{acclist} --accept @var{acclist}
1564 @itemx -R @var{rejlist} --reject @var{rejlist}
1565 Specify comma-separated lists of file name suffixes or patterns to
1566 accept or reject (@pxref{Types of Files} for more details).
1568 @item -D @var{domain-list}
1569 @itemx --domains=@var{domain-list}
1570 Set domains to be followed. @var{domain-list} is a comma-separated list
1571 of domains. Note that it does @emph{not} turn on @samp{-H}.
1573 @item --exclude-domains @var{domain-list}
1574 Specify the domains that are @emph{not} to be followed.
1575 (@pxref{Spanning Hosts}).
1577 @cindex follow FTP links
1579 Follow @sc{ftp} links from @sc{html} documents. Without this option,
1580 Wget will ignore all the @sc{ftp} links.
1582 @cindex tag-based recursive pruning
1583 @item --follow-tags=@var{list}
1584 Wget has an internal table of @sc{html} tag / attribute pairs that it
1585 considers when looking for linked documents during a recursive
1586 retrieval. If a user wants only a subset of those tags to be
considered, however, such tags should be specified in a
comma-separated @var{list} with this option.
1590 @item --ignore-tags=@var{list}
1591 This is the opposite of the @samp{--follow-tags} option. To skip
1592 certain @sc{html} tags when recursively looking for documents to download,
1593 specify them in a comma-separated @var{list}.
1595 In the past, this option was the best bet for downloading a single page
1596 and its requisites, using a command-line like:
1599 wget --ignore-tags=a,area -H -k -K -r http://@var{site}/@var{document}
1602 However, the author of this option came across a page with tags like
1603 @code{<LINK REL="home" HREF="/">} and came to the realization that
1604 specifying tags to ignore was not enough. One can't just tell Wget to
1605 ignore @code{<LINK>}, because then stylesheets will not be downloaded.
1606 Now the best bet for downloading a single page and its requisites is the
1607 dedicated @samp{--page-requisites} option.
1611 Enable spanning across hosts when doing recursive retrieving
1612 (@pxref{Spanning Hosts}).
1616 Follow relative links only. Useful for retrieving a specific home page
1617 without any distractions, not even those from the same hosts
1618 (@pxref{Relative Links}).
1621 @itemx --include-directories=@var{list}
1622 Specify a comma-separated list of directories you wish to follow when
downloading (@pxref{Directory-Based Limits} for more details). Elements
1624 of @var{list} may contain wildcards.
1627 @itemx --exclude-directories=@var{list}
1628 Specify a comma-separated list of directories you wish to exclude from
download (@pxref{Directory-Based Limits} for more details). Elements of
1630 @var{list} may contain wildcards.
1634 Do not ever ascend to the parent directory when retrieving recursively.
1635 This is a useful option, since it guarantees that only the files
1636 @emph{below} a certain hierarchy will be downloaded.
1637 @xref{Directory-Based Limits}, for more details.
1642 @node Recursive Download
1643 @chapter Recursive Download
1646 @cindex recursive download
1648 GNU Wget is capable of traversing parts of the Web (or a single
1649 @sc{http} or @sc{ftp} server), following links and directory structure.
We refer to this as @dfn{recursive retrieval}, or @dfn{recursion}.
With @sc{http} @sc{url}s, Wget retrieves and parses the @sc{html} from
the given @sc{url}, retrieving the files the @sc{html} document refers
to, through markup like @code{href} or @code{src}. If the freshly
downloaded file is also of type
1656 @code{text/html} or @code{application/xhtml+xml}, it will be parsed and
1659 Recursive retrieval of @sc{http} and @sc{html} content is
1660 @dfn{breadth-first}. This means that Wget first downloads the requested
1661 @sc{html} document, then the documents linked from that document, then the
1662 documents linked by them, and so on. In other words, Wget first
1663 downloads the documents at depth 1, then those at depth 2, and so on
1664 until the specified maximum depth.
1666 The maximum @dfn{depth} to which the retrieval may descend is specified
1667 with the @samp{-l} option. The default maximum depth is five layers.
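For example, to limit a retrieval to three levels instead of the
default five, something like this should work (the @sc{url} is only
illustrative):

@example
wget -r -l 3 http://@var{site}/
@end example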
1669 When retrieving an @sc{ftp} @sc{url} recursively, Wget will retrieve all
1670 the data from the given directory tree (including the subdirectories up
1671 to the specified depth) on the remote server, creating its mirror image
1672 locally. @sc{ftp} retrieval is also limited by the @code{depth}
1673 parameter. Unlike @sc{http} recursion, @sc{ftp} recursion is performed
1676 By default, Wget will create a local directory tree, corresponding to
1677 the one found on the remote server.
Recursive retrieval has a number of applications, the most
important of which is mirroring. It is also useful for @sc{www}
presentations, and for any other situation where a slow network
connection can be bypassed by storing the files locally.
1684 You should be warned that recursive downloads can overload the remote
1685 servers. Because of that, many administrators frown upon them and may
ban access from your site if they detect very fast downloads of large
1687 amounts of content. When downloading from Internet servers, consider
1688 using the @samp{-w} option to introduce a delay between accesses to the
1689 server. The download will take a while longer, but the server
1690 administrator will not be alarmed by your rudeness.
1692 Of course, recursive download may cause problems on your machine. If
1693 left to run unchecked, it can easily fill up the disk. If downloading
from a local network, it can also take up bandwidth on the system, as well as
1695 consume memory and CPU.
1697 Try to specify the criteria that match the kind of download you are
1698 trying to achieve. If you want to download only one page, use
1699 @samp{--page-requisites} without any additional recursion. If you want
1700 to download things under one directory, use @samp{-np} to avoid
1701 downloading things from other directories. If you want to download all
1702 the files from one directory, use @samp{-l 1} to make sure the recursion
1703 depth never exceeds one. @xref{Following Links}, for more information
1706 Recursive retrieval should be used with care. Don't say you were not
1709 @node Following Links
1710 @chapter Following Links
1712 @cindex following links
1714 When retrieving recursively, one does not wish to retrieve loads of
1715 unnecessary data. Most of the time the users bear in mind exactly what
1716 they want to download, and want Wget to follow only specific links.
1718 For example, if you wish to download the music archive from
1719 @samp{fly.srk.fer.hr}, you will not want to download all the home pages
1720 that happen to be referenced by an obscure part of the archive.
Wget possesses several mechanisms that allow you to fine-tune which
1723 links it will follow.
1726 * Spanning Hosts:: (Un)limiting retrieval based on host name.
1727 * Types of Files:: Getting only certain files.
1728 * Directory-Based Limits:: Getting only certain directories.
1729 * Relative Links:: Follow relative links only.
1730 * FTP Links:: Following FTP links.
1733 @node Spanning Hosts
1734 @section Spanning Hosts
1735 @cindex spanning hosts
1736 @cindex hosts, spanning
1738 Wget's recursive retrieval normally refuses to visit hosts different
1739 than the one you specified on the command line. This is a reasonable
1740 default; without it, every retrieval would have the potential to turn
your Wget into a small version of Google.
1743 However, visiting different hosts, or @dfn{host spanning,} is sometimes
1744 a useful option. Maybe the images are served from a different server.
1745 Maybe you're mirroring a site that consists of pages interlinked between
1746 three servers. Maybe the server has two equivalent names, and the @sc{html}
1747 pages refer to both interchangeably.
1750 @item Span to any host---@samp{-H}
1752 The @samp{-H} option turns on host spanning, thus allowing Wget's
1753 recursive run to visit any host referenced by a link. Unless sufficient
recursion-limiting criteria are applied, these foreign hosts will
1755 typically link to yet more hosts, and so on until Wget ends up sucking
1756 up much more data than you have intended.
1758 @item Limit spanning to certain domains---@samp{-D}
1760 The @samp{-D} option allows you to specify the domains that will be
1761 followed, thus limiting the recursion only to the hosts that belong to
1762 these domains. Obviously, this makes sense only in conjunction with
1763 @samp{-H}. A typical example would be downloading the contents of
1764 @samp{www.server.com}, but allowing downloads from
1765 @samp{images.server.com}, etc.:
1768 wget -rH -Dserver.com http://www.server.com/
You can specify more than one domain by separating them with a comma,
1772 e.g. @samp{-Ddomain1.com,domain2.com}.
1774 @item Keep download off certain domains---@samp{--exclude-domains}
1776 If there are domains you want to exclude specifically, you can do it
1777 with @samp{--exclude-domains}, which accepts the same type of arguments
as @samp{-D}, but will @emph{exclude} all the listed domains. For
example, if you want to download all the hosts from the @samp{foo.edu}
domain, with the exception of @samp{sunsite.foo.edu}, you can do it like
1784 wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
1790 @node Types of Files
1791 @section Types of Files
1792 @cindex types of files
1794 When downloading material from the web, you will often want to restrict
1795 the retrieval to only certain file types. For example, if you are
1796 interested in downloading @sc{gif}s, you will not be overjoyed to get
1797 loads of PostScript documents, and vice versa.
1799 Wget offers two options to deal with this problem. Each option
1800 description lists a short name, a long name, and the equivalent command
1803 @cindex accept wildcards
1804 @cindex accept suffixes
1805 @cindex wildcards, accept
1806 @cindex suffixes, accept
1808 @item -A @var{acclist}
1809 @itemx --accept @var{acclist}
1810 @itemx accept = @var{acclist}
The argument to the @samp{--accept} option is a list of file suffixes or
1812 patterns that Wget will download during recursive retrieval. A suffix
is the ending part of a file name, and consists of ``normal'' letters,
1814 e.g. @samp{gif} or @samp{.jpg}. A matching pattern contains shell-like
1815 wildcards, e.g. @samp{books*} or @samp{zelazny*196[0-9]*}.
1817 So, specifying @samp{wget -A gif,jpg} will make Wget download only the
1818 files ending with @samp{gif} or @samp{jpg}, i.e. @sc{gif}s and
1819 @sc{jpeg}s. On the other hand, @samp{wget -A "zelazny*196[0-9]*"} will
1820 download only files beginning with @samp{zelazny} and containing numbers
1821 from 1960 to 1969 anywhere within. Look up the manual of your shell for
1822 a description of how pattern matching works.
1824 Of course, any number of suffixes and patterns can be combined into a
1825 comma-separated list, and given as an argument to @samp{-A}.
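For instance, a command along these lines should retrieve both kinds of
files discussed above in a single run (the @sc{url} is only
illustrative; the quotes keep the shell from expanding the wildcards):

@example
wget -r -A "gif,jpg,zelazny*196[0-9]*" http://@var{site}/
@end example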
1827 @cindex reject wildcards
1828 @cindex reject suffixes
1829 @cindex wildcards, reject
1830 @cindex suffixes, reject
1831 @item -R @var{rejlist}
1832 @itemx --reject @var{rejlist}
1833 @itemx reject = @var{rejlist}
1834 The @samp{--reject} option works the same way as @samp{--accept}, only
1835 its logic is the reverse; Wget will download all files @emph{except} the
1836 ones matching the suffixes (or patterns) in the list.
1838 So, if you want to download a whole page except for the cumbersome
1839 @sc{mpeg}s and @sc{.au} files, you can use @samp{wget -R mpg,mpeg,au}.
1840 Analogously, to download all files except the ones beginning with
1841 @samp{bjork}, use @samp{wget -R "bjork*"}. The quotes are to prevent
1842 expansion by the shell.
1845 The @samp{-A} and @samp{-R} options may be combined to achieve even
1846 better fine-tuning of which files to retrieve. E.g. @samp{wget -A
1847 "*zelazny*" -R .ps} will download all the files having @samp{zelazny} as
1848 a part of their name, but @emph{not} the PostScript files.
1850 Note that these two options do not affect the downloading of @sc{html}
1851 files; Wget must load all the @sc{html}s to know where to go at
1852 all---recursive retrieval would make no sense otherwise.
1854 @node Directory-Based Limits
1855 @section Directory-Based Limits
1857 @cindex directory limits
Regardless of other link-following facilities, it is often useful to
restrict which files to retrieve based on the directories
1861 those files are placed in. There can be many reasons for this---the
1862 home pages may be organized in a reasonable directory structure; or some
1863 directories may contain useless information, e.g. @file{/cgi-bin} or
1864 @file{/dev} directories.
1866 Wget offers three different options to deal with this requirement. Each
1867 option description lists a short name, a long name, and the equivalent
1868 command in @file{.wgetrc}.
1870 @cindex directories, include
1871 @cindex include directories
1872 @cindex accept directories
1875 @itemx --include @var{list}
1876 @itemx include_directories = @var{list}
The @samp{-I} option accepts a comma-separated list of directories included
1878 in the retrieval. Any other directories will simply be ignored. The
1879 directories are absolute paths.
1881 So, if you wish to download from @samp{http://host/people/bozo/}
1882 following only links to bozo's colleagues in the @file{/people}
1883 directory and the bogus scripts in @file{/cgi-bin}, you can specify:
1886 wget -I /people,/cgi-bin http://host/people/bozo/
1889 @cindex directories, exclude
1890 @cindex exclude directories
1891 @cindex reject directories
1893 @itemx --exclude @var{list}
1894 @itemx exclude_directories = @var{list}
The @samp{-X} option is exactly the reverse of @samp{-I}---this is a list of
1896 directories @emph{excluded} from the download. E.g. if you do not want
Wget to download things from the @file{/cgi-bin} directory, specify @samp{-X
1898 /cgi-bin} on the command line.
1900 The same as with @samp{-A}/@samp{-R}, these two options can be combined
1901 to get a better fine-tuning of downloading subdirectories. E.g. if you
1902 want to load all the files from @file{/pub} hierarchy except for
1903 @file{/pub/worthless}, specify @samp{-I/pub -X/pub/worthless}.
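On a full command line, that might look like this (the host name is
only illustrative):

@example
wget -r -I/pub -X/pub/worthless ftp://@var{host}/pub/
@end example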
1908 @itemx no_parent = on
The simplest, and often very useful, way of limiting directories is
disallowing retrieval of the links that refer to the hierarchy
@dfn{above} the beginning directory, i.e. disallowing ascent to the
parent directory or directories.
1914 The @samp{--no-parent} option (short @samp{-np}) is useful in this case.
1915 Using it guarantees that you will never leave the existing hierarchy.
1916 Supposing you issue Wget with:
1919 wget -r --no-parent http://somehost/~luzer/my-archive/
1922 You may rest assured that none of the references to
1923 @file{/~his-girls-homepage/} or @file{/~luzer/all-my-mpegs/} will be
1924 followed. Only the archive you are interested in will be downloaded.
1925 Essentially, @samp{--no-parent} is similar to
1926 @samp{-I/~luzer/my-archive}, only it handles redirections in a more
1927 intelligent fashion.
1930 @node Relative Links
1931 @section Relative Links
1932 @cindex relative links
1934 When @samp{-L} is turned on, only the relative links are ever followed.
Relative links are here defined as those that do not refer to the web
1936 server root. For example, these links are relative:
1940 <a href="foo/bar.gif">
1941 <a href="../foo/bar.gif">
1944 These links are not relative:
1948 <a href="/foo/bar.gif">
1949 <a href="http://www.server.com/foo/bar.gif">
1952 Using this option guarantees that recursive retrieval will not span
1953 hosts, even without @samp{-H}. In simple cases it also allows downloads
1954 to ``just work'' without having to convert links.
1956 This option is probably not very useful and might be removed in a future
1960 @section Following FTP Links
1961 @cindex following ftp links
1963 The rules for @sc{ftp} are somewhat specific, as it is necessary for
1964 them to be. @sc{ftp} links in @sc{html} documents are often included
1965 for purposes of reference, and it is often inconvenient to download them
1968 To have @sc{ftp} links followed from @sc{html} documents, you need to
1969 specify the @samp{--follow-ftp} option. Having done that, @sc{ftp}
1970 links will span hosts regardless of @samp{-H} setting. This is logical,
1971 as @sc{ftp} links rarely point to the same host where the @sc{http}
server resides. For similar reasons, the @samp{-L} option has no
1973 effect on such downloads. On the other hand, domain acceptance
1974 (@samp{-D}) and suffix rules (@samp{-A} and @samp{-R}) apply normally.
1976 Also note that followed links to @sc{ftp} directories will not be
1977 retrieved recursively further.
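For example, to recurse through an @sc{html} tree and also retrieve the
@sc{ftp} links it references, something like this should work (the
@sc{url} is only illustrative):

@example
wget -r --follow-ftp http://@var{site}/
@end example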
1980 @chapter Time-Stamping
1981 @cindex time-stamping
1982 @cindex timestamping
1983 @cindex updating the archives
1984 @cindex incremental updating
1986 One of the most important aspects of mirroring information from the
1987 Internet is updating your archives.
1989 Downloading the whole archive again and again, just to replace a few
1990 changed files is expensive, both in terms of wasted bandwidth and money,
1991 and the time to do the update. This is why all the mirroring tools
1992 offer the option of incremental updating.
1994 Such an updating mechanism means that the remote server is scanned in
1995 search of @dfn{new} files. Only those new files will be downloaded in
1996 the place of the old ones.
A file is considered new if one of these two conditions is met:
2002 A file of that name does not already exist locally.
2005 A file of that name does exist, but the remote file was modified more
2006 recently than the local file.
2009 To implement this, the program needs to be aware of the time of last
2010 modification of both local and remote files. We call this information the
2011 @dfn{time-stamp} of a file.
Time-stamping in GNU Wget is turned on using the @samp{--timestamping}
(@samp{-N}) option, or through the @code{timestamping = on} directive in
2015 @file{.wgetrc}. With this option, for each file it intends to download,
2016 Wget will check whether a local file of the same name exists. If it
2017 does, and the remote file is older, Wget will not download it.
2019 If the local file does not exist, or the sizes of the files do not
2020 match, Wget will download the remote file no matter what the time-stamps
2024 * Time-Stamping Usage::
2025 * HTTP Time-Stamping Internals::
2026 * FTP Time-Stamping Internals::
2029 @node Time-Stamping Usage
2030 @section Time-Stamping Usage
2031 @cindex time-stamping usage
2032 @cindex usage, time-stamping
2034 The usage of time-stamping is simple. Say you would like to download a
2035 file so that it keeps its date of modification.
2038 wget -S http://www.gnu.ai.mit.edu/
A simple @code{ls -l} shows that the time stamp on the local file matches
the @code{Last-Modified} header, as returned by the server.
2043 As you can see, the time-stamping info is preserved locally, even
2044 without @samp{-N} (at least for @sc{http}).
2046 Several days later, you would like Wget to check if the remote file has
2047 changed, and download it if it has.
2050 wget -N http://www.gnu.ai.mit.edu/
2053 Wget will ask the server for the last-modified date. If the local file
2054 has the same timestamp as the server, or a newer one, the remote file
2055 will not be re-fetched. However, if the remote file is more recent,
2056 Wget will proceed to fetch it.
2058 The same goes for @sc{ftp}. For example:
2061 wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
2064 (The quotes around that URL are to prevent the shell from trying to
2065 interpret the @samp{*}.)
2067 After download, a local directory listing will show that the timestamps
2068 match those on the remote server. Reissuing the command with @samp{-N}
2069 will make Wget re-fetch @emph{only} the files that have been modified
2070 since the last download.
If you wished to mirror the GNU archive every week, you would use a
command like the following:
2076 wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
2079 Note that time-stamping will only work for files for which the server
2080 gives a timestamp. For @sc{http}, this depends on getting a
2081 @code{Last-Modified} header. For @sc{ftp}, this depends on getting a
2082 directory listing with dates in a format that Wget can parse
2083 (@pxref{FTP Time-Stamping Internals}).
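If you are unsure whether a particular server provides usable
time-stamps, a quick check is to request a file with @samp{-S} and look
for a @code{Last-Modified} line in the printed headers (the @sc{url} is
only illustrative):

@example
wget -S http://@var{site}/@var{file}
@end example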
2085 @node HTTP Time-Stamping Internals
2086 @section HTTP Time-Stamping Internals
2087 @cindex http time-stamping
Time-stamping in @sc{http} is implemented by checking the
2090 @code{Last-Modified} header. If you wish to retrieve the file
2091 @file{foo.html} through @sc{http}, Wget will check whether
2092 @file{foo.html} exists locally. If it doesn't, @file{foo.html} will be
2093 retrieved unconditionally.
2095 If the file does exist locally, Wget will first check its local
2096 time-stamp (similar to the way @code{ls -l} checks it), and then send a
2097 @code{HEAD} request to the remote server, demanding the information on
2100 The @code{Last-Modified} header is examined to find which file was
2101 modified more recently (which makes it ``newer''). If the remote file
2102 is newer, it will be downloaded; if it is older, Wget will give
2103 up.@footnote{As an additional check, Wget will look at the
2104 @code{Content-Length} header, and compare the sizes; if they are not the
2105 same, the remote file will be downloaded no matter what the time-stamp
2108 When @samp{--backup-converted} (@samp{-K}) is specified in conjunction
2109 with @samp{-N}, server file @samp{@var{X}} is compared to local file
2110 @samp{@var{X}.orig}, if extant, rather than being compared to local file
2111 @samp{@var{X}}, which will always differ if it's been converted by
2112 @samp{--convert-links} (@samp{-k}).
2114 Arguably, @sc{http} time-stamping should be implemented using the
2115 @code{If-Modified-Since} request.
2117 @node FTP Time-Stamping Internals
2118 @section FTP Time-Stamping Internals
2119 @cindex ftp time-stamping
2121 In theory, @sc{ftp} time-stamping works much the same as @sc{http}, only
2122 @sc{ftp} has no headers---time-stamps must be ferreted out of directory
2125 If an @sc{ftp} download is recursive or uses globbing, Wget will use the
2126 @sc{ftp} @code{LIST} command to get a file listing for the directory
2127 containing the desired file(s). It will try to analyze the listing,
2128 treating it like Unix @code{ls -l} output, extracting the time-stamps.
2129 The rest is exactly the same as for @sc{http}. Note that when
2130 retrieving individual files from an @sc{ftp} server without using
2131 globbing or recursion, listing files will not be downloaded (and thus
2132 files will not be time-stamped) unless @samp{-N} is specified.
The assumption that every directory listing is a Unix-style listing may
2135 sound extremely constraining, but in practice it is not, as many
2136 non-Unix @sc{ftp} servers use the Unixoid listing format because most
2137 (all?) of the clients understand it. Bear in mind that @sc{rfc959}
2138 defines no standard way to get a file list, let alone the time-stamps.
2139 We can only hope that a future standard will define this.
Another non-standard solution is the @code{MDTM} command, supported
by some @sc{ftp} servers (including the popular @code{wu-ftpd}), which
returns the exact modification time of the specified file.
2144 Wget may support this command in the future.
2147 @chapter Startup File
2148 @cindex startup file
2154 Once you know how to change default settings of Wget through command
2155 line arguments, you may wish to make some of those settings permanent.
2156 You can do that in a convenient way by creating the Wget startup
2157 file---@file{.wgetrc}.
Besides @file{.wgetrc} being the ``main'' initialization file, it is
2160 convenient to have a special facility for storing passwords. Thus Wget
2161 reads and interprets the contents of @file{$HOME/.netrc}, if it finds
it. You can find the @file{.netrc} format in your system manuals.
2164 Wget reads @file{.wgetrc} upon startup, recognizing a limited set of
2168 * Wgetrc Location:: Location of various wgetrc files.
2169 * Wgetrc Syntax:: Syntax of wgetrc.
2170 * Wgetrc Commands:: List of available commands.
2171 * Sample Wgetrc:: A wgetrc example.
2174 @node Wgetrc Location
2175 @section Wgetrc Location
2176 @cindex wgetrc location
2177 @cindex location of wgetrc
2179 When initializing, Wget will look for a @dfn{global} startup file,
2180 @file{/usr/local/etc/wgetrc} by default (or some prefix other than
2181 @file{/usr/local}, if Wget was not installed there) and read commands
2182 from there, if it exists.
Then it will look for the user's file. If the environment variable
2185 @code{WGETRC} is set, Wget will try to load that file. Failing that, no
2186 further attempts will be made.
2188 If @code{WGETRC} is not set, Wget will try to load @file{$HOME/.wgetrc}.
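For instance, to make a single invocation use an alternate user file,
you might run something like this (the path is only illustrative):

@example
WGETRC=/path/to/alternate.wgetrc wget http://@var{site}/
@end example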
The fact that the user's settings are loaded after the system-wide ones
means that, in case of collision, the user's wgetrc @emph{overrides} the
2192 system-wide wgetrc (in @file{/usr/local/etc/wgetrc} by default).
2193 Fascist admins, away!
2196 @section Wgetrc Syntax
2197 @cindex wgetrc syntax
2198 @cindex syntax of wgetrc
2200 The syntax of a wgetrc command is simple:
The @dfn{variable} will also be called a @dfn{command}. Valid
2207 @dfn{values} are different for different commands.
2209 The commands are case-insensitive and underscore-insensitive. Thus
2210 @samp{DIr__PrefiX} is the same as @samp{dirprefix}. Empty lines, lines
2211 beginning with @samp{#} and lines containing white-space only are
2214 Commands that expect a comma-separated list will clear the list on an
2215 empty command. So, if you wish to reset the rejection list specified in
2216 global @file{wgetrc}, you can do it with:
2222 @node Wgetrc Commands
2223 @section Wgetrc Commands
2224 @cindex wgetrc commands
2226 The complete set of commands is listed below. Legal values are listed
2227 after the @samp{=}. Simple Boolean values can be set or unset using
2228 @samp{on} and @samp{off} or @samp{1} and @samp{0}. A fancier kind of
2229 Boolean allowed in some cases is the @dfn{lockable Boolean}, which may
2230 be set to @samp{on}, @samp{off}, @samp{always}, or @samp{never}. If an
2231 option is set to @samp{always} or @samp{never}, that value will be
2232 locked in for the duration of the Wget invocation---command-line options
2235 Some commands take pseudo-arbitrary values. @var{address} values can be
2236 hostnames or dotted-quad IP addresses. @var{n} can be any positive
2237 integer, or @samp{inf} for infinity, where appropriate. @var{string}
2238 values can be any non-empty string.
2240 Most of these commands have direct command-line equivalents. Also, any
2241 wgetrc command can be specified on the command line using the
@samp{--execute} switch (@pxref{Basic Startup Options}).
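For instance, the following two invocations should be equivalent (the
@sc{url} is only illustrative):

@example
wget --tries=45 http://@var{site}/
wget --execute="tries=45" http://@var{site}/
@end example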
2245 @item accept/reject = @var{string}
2246 Same as @samp{-A}/@samp{-R} (@pxref{Types of Files}).
2248 @item add_hostdir = on/off
2249 Enable/disable host-prefixed file names. @samp{-nH} disables it.
2251 @item continue = on/off
2252 If set to on, force continuation of preexistent partially retrieved
2253 files. See @samp{-c} before setting it.
2255 @item background = on/off
2256 Enable/disable going to background---the same as @samp{-b} (which
2259 @item backup_converted = on/off
2260 Enable/disable saving pre-converted files with the suffix
2261 @samp{.orig}---the same as @samp{-K} (which enables it).
2263 @c @item backups = @var{number}
2264 @c #### Document me!
2266 @item base = @var{string}
2267 Consider relative @sc{url}s in @sc{url} input files forced to be
2268 interpreted as @sc{html} as being relative to @var{string}---the same as
2271 @item bind_address = @var{address}
2272 Bind to @var{address}, like the @samp{--bind-address} option.
2274 @item cache = on/off
2275 When set to off, disallow server-caching. See the @samp{--no-cache}
2278 @item convert_links = on/off
2279 Convert non-relative links locally. The same as @samp{-k}.
2281 @item cookies = on/off
When set to off, disallow cookies. See the @samp{--no-cookies} option.
2284 @item load_cookies = @var{file}
2285 Load cookies from @var{file}. See @samp{--load-cookies}.
2287 @item save_cookies = @var{file}
2288 Save cookies to @var{file}. See @samp{--save-cookies}.
2290 @item connect_timeout = @var{n}
2291 Set the connect timeout---the same as @samp{--connect-timeout}.
2293 @item cut_dirs = @var{n}
Ignore @var{n} remote directory components---the same as @samp{--cut-dirs}.
2296 @item debug = on/off
2297 Debug mode, same as @samp{-d}.
2299 @item delete_after = on/off
2300 Delete after download---the same as @samp{--delete-after}.
2302 @item dir_prefix = @var{string}
2303 Top of directory tree---the same as @samp{-P}.
2305 @item dirstruct = on/off
2306 Turning dirstruct on or off---the same as @samp{-x} or @samp{-nd},
2309 @item dns_cache = on/off
2310 Turn DNS caching on/off. Since DNS caching is on by default, this
option is normally used to turn it off. Same as @samp{--no-dns-cache}.
2313 @item dns_timeout = @var{n}
2314 Set the DNS timeout---the same as @samp{--dns-timeout}.
2316 @item domains = @var{string}
2317 Same as @samp{-D} (@pxref{Spanning Hosts}).
2319 @item dot_bytes = @var{n}
2320 Specify the number of bytes ``contained'' in a dot, as seen throughout
2321 the retrieval (1024 by default). You can postfix the value with
2322 @samp{k} or @samp{m}, representing kilobytes and megabytes,
2323 respectively. With dot settings you can tailor the dot retrieval to
2324 suit your needs, or you can use the predefined @dfn{styles}
2325 (@pxref{Download Options}).
2327 @item dots_in_line = @var{n}
2328 Specify the number of dots that will be printed in each line throughout
2329 the retrieval (50 by default).
2331 @item dot_spacing = @var{n}
2332 Specify the number of dots in a single cluster (10 by default).
2334 @item exclude_directories = @var{string}
2335 Specify a comma-separated list of directories you wish to exclude from
2336 download---the same as @samp{-X} (@pxref{Directory-Based Limits}).
2338 @item exclude_domains = @var{string}
2339 Same as @samp{--exclude-domains} (@pxref{Spanning Hosts}).
2341 @item follow_ftp = on/off
2342 Follow @sc{ftp} links from @sc{html} documents---the same as
2343 @samp{--follow-ftp}.
2345 @item follow_tags = @var{string}
2346 Only follow certain @sc{html} tags when doing a recursive retrieval, just like
2347 @samp{--follow-tags}.
2349 @item force_html = on/off
2350 If set to on, force the input filename to be regarded as an @sc{html}
2351 document---the same as @samp{-F}.
2353 @item ftp_passwd = @var{string}
2354 Set your @sc{ftp} password to @var{string}. Without this setting, the
2355 password defaults to @samp{-wget@@}, which is a useful default for
2356 anonymous @sc{ftp} access.
2358 This command used to be named @code{passwd} prior to Wget 1.10.
2360 @item ftp_proxy = @var{string}
2361 Use @var{string} as @sc{ftp} proxy, instead of the one specified in
2365 Turn globbing on/off---the same as @samp{--glob} and @samp{--no-glob}.
2367 @item header = @var{string}
2368 Define an additional header, like @samp{--header}.
2370 @item html_extension = on/off
2371 Add a @samp{.html} extension to @samp{text/html} or
2372 @samp{application/xhtml+xml} files without it, like
2375 @item http_keep_alive = on/off
Turn the keep-alive feature on or off (defaults to on). Turning it
off is the same as @samp{--no-http-keep-alive}.
2379 @item http_passwd = @var{string}
2380 Set @sc{http} password.
2382 @item http_proxy = @var{string}
2383 Use @var{string} as @sc{http} proxy, instead of the one specified in
2386 @item http_user = @var{string}
2387 Set @sc{http} user to @var{string}.
2389 @item ignore_length = on/off
2390 When set to on, ignore @code{Content-Length} header; the same as
2391 @samp{--ignore-length}.
2393 @item ignore_tags = @var{string}
2394 Ignore certain @sc{html} tags when doing a recursive retrieval, just like
2395 @samp{--ignore-tags}.
2397 @item include_directories = @var{string}
2398 Specify a comma-separated list of directories you wish to follow when
2399 downloading---the same as @samp{-I}.
2401 @item inet4_only = on/off
2402 Force connecting to IPv4 addresses, off by default. You can put this
2403 in the global init file to disable Wget's attempts to resolve and
2404 connect to IPv6 hosts. Available only if Wget was compiled with IPv6
2405 support. The same as @samp{--inet4-only} or @samp{-4}.
2407 @item inet6_only = on/off
2408 Force connecting to IPv6 addresses, off by default. Available only if
2409 Wget was compiled with IPv6 support. The same as @samp{--inet6-only}
2412 @item input = @var{string}
2413 Read the @sc{url}s from @var{string}, like @samp{-i}.
2415 @item kill_longer = on/off
Consider data longer than specified in the @code{Content-Length} header as invalid
2417 (and retry getting it). The default behavior is to save as much data
2418 as there is, provided there is more than or equal to the value in
2419 @code{Content-Length}.
2421 @item limit_rate = @var{rate}
2422 Limit the download speed to no more than @var{rate} bytes per second.
2423 The same as @samp{--limit-rate}.
2425 @item logfile = @var{string}
2426 Set logfile---the same as @samp{-o}.
2428 @item login = @var{string}
2429 Your user name on the remote machine, for @sc{ftp}. Defaults to
2432 @item mirror = on/off
2433 Turn mirroring on/off. The same as @samp{-m}.
2435 @item netrc = on/off
2436 Turn reading netrc on or off.
2438 @item noclobber = on/off
2441 @item no_parent = on/off
2442 Disallow retrieving outside the directory hierarchy, like
2443 @samp{--no-parent} (@pxref{Directory-Based Limits}).
2445 @item no_proxy = @var{string}
2446 Use @var{string} as the comma-separated list of domains to avoid in
proxy loading, instead of the one specified in the environment.
2449 @item output_document = @var{string}
2450 Set the output filename---the same as @samp{-O}.
2452 @item page_requisites = on/off
2453 Download all ancillary documents necessary for a single @sc{html} page to
2454 display properly---the same as @samp{-p}.
2456 @item passive_ftp = on/off/always/never
2457 Change setting of passive @sc{ftp}, equivalent to the
2458 @samp{--passive-ftp} option. Some scripts and @samp{.pm} (Perl
2459 module) files download files using @samp{wget --passive-ftp}. If your
2460 firewall does not allow this, you can set @samp{passive_ftp = never}
2461 to override the command-line.
2463 @item post_data = @var{string}
2464 Use POST as the method for all HTTP requests and send @var{string} in
2465 the request body. The same as @samp{--post-data}.
2467 @item post_file = @var{file}
2468 Use POST as the method for all HTTP requests and send the contents of
2469 @var{file} in the request body. The same as @samp{--post-file}.
2471 @item progress = @var{string}
2472 Set the type of the progress indicator. Legal types are ``dot'' and
2475 @item protocol_directories = on/off
2476 When set, use the protocol name as a directory component of local file
2477 names. The same as @samp{--protocol-directories}.
2479 @item proxy_user = @var{string}
2480 Set proxy authentication user name to @var{string}, like @samp{--proxy-user}.
2482 @item proxy_passwd = @var{string}
2483 Set proxy authentication password to @var{string}, like @samp{--proxy-passwd}.
2485 @item referer = @var{string}
2486 Set HTTP @samp{Referer:} header just like @samp{--referer}. (Note it
2487 was the folks who wrote the @sc{http} spec who got the spelling of
2488 ``referrer'' wrong.)
2490 @item quiet = on/off
2491 Quiet mode---the same as @samp{-q}.
2493 @item quota = @var{quota}
2494 Specify the download quota, which is useful to put in the global
2495 @file{wgetrc}. When download quota is specified, Wget will stop
retrieving after the download sum has become greater than the quota. The
quota can be specified in bytes (default), kbytes (@samp{k} appended) or
2498 mbytes (@samp{m} appended). Thus @samp{quota = 5m} will set the quota
2499 to 5 mbytes. Note that the user's startup file overrides system
2502 @item read_timeout = @var{n}
2503 Set the read (and write) timeout---the same as @samp{--read-timeout}.
2505 @item reclevel = @var{n}
2506 Recursion level---the same as @samp{-l}.
2508 @item recursive = on/off
2509 Recursive on/off---the same as @samp{-r}.
2511 @item relative_only = on/off
2512 Follow only relative links---the same as @samp{-L} (@pxref{Relative
2515 @item remove_listing = on/off
2516 If set to on, remove @sc{ftp} listings downloaded by Wget. Setting it
2517 to off is the same as @samp{--no-remove-listing}.
2519 @item restrict_file_names = unix/windows
2520 Restrict the file names generated by Wget from URLs. See
2521 @samp{--restrict-file-names} for a more detailed description.
2523 @item retr_symlinks = on/off
2524 When set to on, retrieve symbolic links as if they were plain files; the
2525 same as @samp{--retr-symlinks}.
2527 @item robots = on/off
2528 Specify whether the norobots convention is respected by Wget, ``on'' by
2529 default. This switch controls both the @file{/robots.txt} and the
2530 @samp{nofollow} aspect of the spec. @xref{Robot Exclusion}, for more
2531 details about this. Be sure you know what you are doing before turning
2534 @item server_response = on/off
2535 Choose whether or not to print the @sc{http} and @sc{ftp} server
2536 responses---the same as @samp{-S}.
2538 @item span_hosts = on/off
2541 @item strict_comments = on/off
2542 Same as @samp{--strict-comments}.
2544 @item timeout = @var{n}
2545 Set timeout value---the same as @samp{-T}.
2547 @item timestamping = on/off
2548 Turn timestamping on/off. The same as @samp{-N} (@pxref{Time-Stamping}).
2550 @item tries = @var{n}
2551 Set number of retries per @sc{url}---the same as @samp{-t}.
2553 @item use_proxy = on/off
2554 Turn proxy support on/off. The same as @samp{-Y}.
2556 @item verbose = on/off
2557 Turn verbose on/off---the same as @samp{-v}/@samp{-nv}.
2559 @item wait = @var{n}
2560 Wait @var{n} seconds between retrievals---the same as @samp{-w}.
2562 @item waitretry = @var{n}
2563 Wait up to @var{n} seconds between retries of failed retrievals
2564 only---the same as @samp{--waitretry}. Note that this is turned on by
2565 default in the global @file{wgetrc}.
2567 @item randomwait = on/off
2568 Turn random between-request wait times on or off. The same as
2569 @samp{--random-wait}.
2573 @section Sample Wgetrc
2574 @cindex sample wgetrc
2576 This is the sample initialization file, as given in the distribution.
It is divided in two sections---one for global usage (suitable for the
global startup file), and one for local usage (suitable for
2579 @file{$HOME/.wgetrc}). Be careful about the things you change.
2581 Note that almost all the lines are commented out. For a command to have
2582 any effect, you must remove the @samp{#} character at the beginning of
2586 @include sample.wgetrc.munged_for_texi_inclusion
2593 @c man begin EXAMPLES
2594 The examples are divided into three sections loosely based on their
2598 * Simple Usage:: Simple, basic usage of the program.
2599 * Advanced Usage:: Advanced tips.
2600 * Very Advanced Usage:: The hairy stuff.
2604 @section Simple Usage
2608 Say you want to download a @sc{url}. Just type:
2611 wget http://fly.srk.fer.hr/
2615 But what will happen if the connection is slow, and the file is lengthy?
2616 The connection will probably fail before the whole file is retrieved,
2617 more than once. In this case, Wget will try getting the file until it
2618 either gets the whole of it, or exceeds the default number of retries
2619 (this being 20). It is easy to change the number of tries to 45, to
ensure that the whole file will arrive safely:
2623 wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
2627 Now let's leave Wget to work in the background, and write its progress
2628 to log file @file{log}. It is tiring to type @samp{--tries}, so we
2629 shall use @samp{-t}.
2632 wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
2635 The ampersand at the end of the line makes sure that Wget works in the
background. To remove the limit on retries, use @samp{-t inf}.
The usage of @sc{ftp} is just as simple. Wget will take care of login and
2643 wget ftp://gnjilux.srk.fer.hr/welcome.msg
2647 If you specify a directory, Wget will retrieve the directory listing,
2648 parse it and convert it to @sc{html}. Try:
2651 wget ftp://ftp.gnu.org/pub/gnu/
2656 @node Advanced Usage
2657 @section Advanced Usage
2661 You have a file that contains the URLs you want to download? Use the
2668 If you specify @samp{-} as file name, the @sc{url}s will be read from
2672 Create a five levels deep mirror image of the GNU web site, with the
2673 same directory structure the original has, with only one try per
2674 document, saving the log of the activities to @file{gnulog}:
2677 wget -r http://www.gnu.org/ -o gnulog
2681 The same as the above, but convert the links in the @sc{html} files to
2682 point to local files, so you can view the documents off-line:
2685 wget --convert-links -r http://www.gnu.org/ -o gnulog
2689 Retrieve only one @sc{html} page, but make sure that all the elements needed
2690 for the page to be displayed, such as inline images and external style
2691 sheets, are also downloaded. Also make sure the downloaded page
2692 references the downloaded links.
2695 wget -p --convert-links http://www.server.com/dir/page.html
2698 The @sc{html} page will be saved to @file{www.server.com/dir/page.html}, and
2699 the images, stylesheets, etc., somewhere under @file{www.server.com/},
2700 depending on where they were on the remote server.
2703 The same as the above, but without the @file{www.server.com/} directory.
2704 In fact, I don't want to have all those random server directories
2705 anyway---just save @emph{all} those files under a @file{download/}
2706 subdirectory of the current directory.
2709 wget -p --convert-links -nH -nd -Pdownload \
2710 http://www.server.com/dir/page.html
2714 Retrieve the index.html of @samp{www.lycos.com}, showing the original
2718 wget -S http://www.lycos.com/
2722 Save the server headers with the file, perhaps for post-processing.
2725 wget -s http://www.lycos.com/
2730 Retrieve the first two levels of @samp{wuarchive.wustl.edu}, saving them
2734 wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
2738 You want to download all the @sc{gif}s from a directory on an @sc{http}
2739 server. You tried @samp{wget http://www.server.com/dir/*.gif}, but that
2740 didn't work because @sc{http} retrieval does not support globbing. In
2744 wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
2747 More verbose, but the effect is the same. @samp{-r -l1} means to
2748 retrieve recursively (@pxref{Recursive Download}), with maximum depth
2749 of 1. @samp{--no-parent} means that references to the parent directory
2750 are ignored (@pxref{Directory-Based Limits}), and @samp{-A.gif} means to
2751 download only the @sc{gif} files. @samp{-A "*.gif"} would have worked
2755 Suppose you were in the middle of downloading, when Wget was
2756 interrupted. Now you do not want to clobber the files already present.
2760 wget -nc -r http://www.gnu.org/
2764 If you want to encode your own username and password to @sc{http} or
2765 @sc{ftp}, use the appropriate @sc{url} syntax (@pxref{URL Format}).
2768 wget ftp://hniksic:mypassword@@unix.server.com/.emacs
2771 Note, however, that this usage is not advisable on multi-user systems
2772 because it reveals your password to anyone who looks at the output of
2775 @cindex redirecting output
2777 You would like the output documents to go to standard output instead of
2781 wget -O - http://jagor.srce.hr/ http://www.srce.hr/
2784 You can also combine the two options and make pipelines to retrieve the
2785 documents from remote hotlists:
2788 wget -O - http://cool.list.com/ | wget --force-html -i -
2792 @node Very Advanced Usage
2793 @section Very Advanced Usage
2798 If you wish Wget to keep a mirror of a page (or @sc{ftp}
2799 subdirectories), use @samp{--mirror} (@samp{-m}), which is the shorthand
for @samp{-r -N -l inf --no-remove-listing}. You can put Wget in the crontab file asking it
2801 to recheck a site each Sunday:
2805 0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
2809 In addition to the above, you want the links to be converted for local
2810 viewing. But, after having read this manual, you know that link
2811 conversion doesn't play well with timestamping, so you also want Wget to
2812 back up the original @sc{html} files before the conversion. Wget invocation
2813 would look like this:
2816 wget --mirror --convert-links --backup-converted \
2817 http://www.gnu.org/ -o /home/me/weeklog
2821 But you've also noticed that local viewing doesn't work all that well
2822 when @sc{html} files are saved under extensions other than @samp{.html},
2823 perhaps because they were served as @file{index.cgi}. So you'd like
2824 Wget to rename all the files served with content-type @samp{text/html}
2825 or @samp{application/xhtml+xml} to @file{@var{name}.html}.
2828 wget --mirror --convert-links --backup-converted \
2829 --html-extension -o /home/me/weeklog \
2833 Or, with less typing:
2836 wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
2845 This chapter contains all the stuff that could not fit anywhere else.
2848 * Proxies:: Support for proxy servers
2849 * Distribution:: Getting the latest version.
2850 * Mailing List:: Wget mailing list for announcements and discussion.
2851 * Reporting Bugs:: How and where to report bugs.
2852 * Portability:: The systems Wget works on.
2853 * Signals:: Signal-handling performed by Wget.
2860 @dfn{Proxies} are special-purpose @sc{http} servers designed to transfer
2861 data from remote servers to local clients. One typical use of proxies
2862 is lightening network load for users behind a slow connection. This is
2863 achieved by channeling all @sc{http} and @sc{ftp} requests through the
2864 proxy which caches the transferred data. When a cached resource is
requested again, the proxy will return the data from its cache. Another use for
proxies is for companies that separate (for security reasons) their
internal networks from the rest of the Internet. In order to obtain
2868 information from the Web, their users connect and retrieve remote data
2869 using an authorized proxy.
2871 Wget supports proxies for both @sc{http} and @sc{ftp} retrievals. The
2872 standard way to specify proxy location, which Wget recognizes, is using
2873 the following environment variables:
2877 This variable should contain the @sc{url} of the proxy for @sc{http}
2881 This variable should contain the @sc{url} of the proxy for @sc{ftp}
2882 connections. It is quite common that @sc{http_proxy} and @sc{ftp_proxy}
2883 are set to the same @sc{url}.
2886 This variable should contain a comma-separated list of domain extensions
the proxy should @emph{not} be used for. For instance, if the value of
@code{no_proxy} is @samp{.mit.edu}, the proxy will not be used to retrieve
2892 In addition to the environment variables, proxy location and settings
2893 may be specified from within Wget itself.
2897 @itemx --proxy=on/off
2898 @itemx proxy = on/off
2899 This option may be used to turn the proxy support on or off. Proxy
2900 support is on by default, provided that the appropriate environment
2903 @item http_proxy = @var{URL}
2904 @itemx ftp_proxy = @var{URL}
2905 @itemx no_proxy = @var{string}
2906 These startup file variables allow you to override the proxy settings
2907 specified by the environment.
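For example, a @file{.wgetrc} fragment along these lines should route
requests through a proxy while bypassing it for one domain (the host
names are only illustrative):

@example
http_proxy = http://proxy.example.com:8080/
no_proxy = .example.com
@end example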
2910 Some proxy servers require authorization to enable you to use them. The
2911 authorization consists of @dfn{username} and @dfn{password}, which must
2912 be sent by Wget. As with @sc{http} authorization, several
2913 authentication schemes exist. For proxy authorization only the
2914 @code{Basic} authentication scheme is currently implemented.
2916 You may specify your username and password either through the proxy
2917 @sc{url} or through the command-line options. Assuming that the
2918 company's proxy is located at @samp{proxy.company.com} at port 8001, a
2919 proxy @sc{url} location containing authorization data might look like
2923 http://hniksic:mypassword@@proxy.company.com:8001/
2926 Alternatively, you may use the @samp{proxy-user} and
2927 @samp{proxy-password} options, and the equivalent @file{.wgetrc}
2928 settings @code{proxy_user} and @code{proxy_passwd} to set the proxy
2929 username and password.
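For instance, a command along these lines should pass the credentials
explicitly (the user name, password, and @sc{url} are only
illustrative):

@example
wget --proxy-user=hniksic --proxy-password=mypassword http://@var{site}/
@end example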
2932 @section Distribution
2933 @cindex latest version
2935 Like all GNU utilities, the latest version of Wget can be found at the
2936 master GNU archive site ftp.gnu.org, and its mirrors. For example,
2937 Wget @value{VERSION} can be found at
2938 @url{ftp://ftp.gnu.org/pub/gnu/wget/wget-@value{VERSION}.tar.gz}
2941 @section Mailing List
2942 @cindex mailing list
2945 There are several Wget-related mailing lists, all hosted by
2946 SunSITE.dk. The general discussion list is at
2947 @email{wget@@sunsite.dk}. It is the preferred place for bug reports
2948 and suggestions, as well as for discussion of development. You are
2949 invited to subscribe.
2951 To subscribe, simply send mail to @email{wget-subscribe@@sunsite.dk}
2952 and follow the instructions. Unsubscribe by mailing to
2953 @email{wget-unsubscribe@@sunsite.dk}. The mailing list is archived at
2954 @url{http://www.mail-archive.com/wget%40sunsite.dk/} and at
2955 @url{http://news.gmane.org/gmane.comp.web.wget.general}.
2957 The second mailing list is at @email{wget-patches@@sunsite.dk}, and is
2958 used to submit patches for review by Wget developers. A ``patch'' is
a textual representation of a change to source code, readable by both
2960 humans and programs. The file @file{PATCHES} that comes with Wget
2961 covers the creation and submitting of patches in detail. Please don't
2962 send general suggestions or bug reports to @samp{wget-patches}; use it
2963 only for patch submissions.
2965 To subscribe, simply send mail to @email{wget-subscribe@@sunsite.dk}
2966 and follow the instructions. Unsubscribe by mailing to
2967 @email{wget-unsubscribe@@sunsite.dk}. The mailing list is archived at
2968 @url{http://news.gmane.org/gmane.comp.web.wget.patches}.
2970 Finally, there is a read-only list at @email{wget-cvs@@sunsite.dk}
2971 that tracks commits to the Wget CVS repository. To subscribe to that
2972 list, send mail to @email{wget-cvs-subscribe@@sunsite.dk}. The list
2975 @node Reporting Bugs
2976 @section Reporting Bugs
2978 @cindex reporting bugs
2982 You are welcome to send bug reports about GNU Wget to
2983 @email{bug-wget@@gnu.org}.
2985 Before actually submitting a bug report, please try to follow a few
2990 Please try to ascertain that the behavior you see really is a bug. If
2991 Wget crashes, it's a bug. If Wget does not behave as documented,
it's a bug. If things work strangely, but you are not sure about the way
2993 they are supposed to work, it might well be a bug.
2996 Try to repeat the bug in as simple circumstances as possible. E.g. if
2997 Wget crashes while downloading @samp{wget -rl0 -kKE -t5 -Y0
2998 http://yoyodyne.com -o /tmp/log}, you should try to see if the crash is
repeatable, and if it will occur with a simpler set of options. You might
3000 even try to start the download at the page where the crash occurred to
3001 see if that page somehow triggered the crash.
3003 Also, while I will probably be interested to know the contents of your
3004 @file{.wgetrc} file, just dumping it into the debug message is probably
3005 a bad idea. Instead, you should first try to see if the bug repeats
3006 with @file{.wgetrc} moved out of the way. Only if it turns out that
3007 @file{.wgetrc} settings affect the bug, mail me the relevant parts of
Please start Wget with the @samp{-d} option and send us the resulting
3012 output (or relevant parts thereof). If Wget was compiled without
3013 debug support, recompile it---it is @emph{much} easier to trace bugs
3014 with debug support on.
3016 Note: please make sure to remove any potentially sensitive information
3017 from the debug log before sending it to the bug address. The
@code{-d} option won't go out of its way to collect sensitive information,
3019 but the log @emph{will} contain a fairly complete transcript of Wget's
3020 communication with the server, which may include passwords and pieces
of downloaded data. Since the bug address is publicly archived, you
3022 may assume that all bug reports are visible to the public.
3025 If Wget has crashed, try to run it in a debugger, e.g. @code{gdb `which
3026 wget` core} and type @code{where} to get the backtrace. This may not
3027 work if the system administrator has disabled core files, but it is
3033 @section Portability
3035 @cindex operating systems
3037 Like all GNU software, Wget works on the GNU system. However, since it
3038 uses GNU Autoconf for building and configuring, and mostly avoids using
3039 ``special'' features of any particular Unix, it should compile (and
3040 work) on all common Unix flavors.
3042 Various Wget versions have been compiled and tested under many kinds
3043 of Unix systems, including GNU/Linux, Solaris, SunOS 4.x, OSF (aka
3044 Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some of
3045 those systems are no longer in widespread use and may not be able to
3046 support recent versions of Wget. If Wget fails to compile on your
3047 system, we would like to know about it.
3049 Thanks to kind contributors, this version of Wget compiles and works
3050 on 32-bit Microsoft Windows platforms. It has been compiled
3051 successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC
compilers. Naturally, it lacks some features available on
Unix, but it should work as a substitute for people stuck with
3054 Windows. Note that Windows-specific portions of Wget are not
3055 guaranteed to be supported in the future, although this has been the
3056 case in practice for many years now. All questions and problems in
Windows usage should be reported to the Wget mailing list at
3058 @email{wget@@sunsite.dk} where the volunteers who maintain the
3059 Windows-related features might look at them.
3063 @cindex signal handling
3066 Since the purpose of Wget is background work, it catches the hangup
3067 signal (@code{SIGHUP}) and ignores it. If the output was on standard
3068 output, it will be redirected to a file named @file{wget-log}.
3069 Otherwise, @code{SIGHUP} is ignored. This is convenient when you wish
3070 to redirect the output of Wget after having started it.
$ wget http://www.gnus.org/dist/gnus.tar.gz &
...
$ kill -HUP %%
SIGHUP received, redirecting output to `wget-log'.
3079 Other than that, Wget will not try to interfere with signals in any way.
3080 @kbd{C-c}, @code{kill -TERM} and @code{kill -KILL} should kill it alike.
3085 This chapter contains some references I consider useful.
3088 * Robot Exclusion:: Wget's support for RES.
3089 * Security Considerations:: Security with Wget.
3090 * Contributors:: People who helped.
3093 @node Robot Exclusion
3094 @section Robot Exclusion
3095 @cindex robot exclusion
3097 @cindex server maintenance
3099 It is extremely easy to make Wget wander aimlessly around a web site,
sucking all the available data in the process. @samp{wget -r @var{site}},
3101 and you're set. Great? Not for the server admin.
3103 As long as Wget is only retrieving static pages, and doing it at a
3104 reasonable rate (see the @samp{--wait} option), there's not much of a
3105 problem. The trouble is that Wget can't tell the difference between the
3106 smallest static page and the most demanding CGI. A site I know has a
3107 section handled by a CGI Perl script that converts Info files to @sc{html} on
3108 the fly. The script is slow, but works well enough for human users
3109 viewing an occasional Info file. However, when someone's recursive Wget
3110 download stumbles upon the index page that links to all the Info files
3111 through the script, the system is brought to its knees without providing
anything useful to the user. (This task of converting Info files could
be done locally; access to Info documentation for all installed GNU
software on a system is available from the @code{info} command.)
3116 To avoid this kind of accident, as well as to preserve privacy for
3117 documents that need to be protected from well-behaved robots, the
3118 concept of @dfn{robot exclusion} was invented. The idea is that
3119 the server administrators and document authors can specify which
portions of the site they wish to protect from robots and those
to which they will permit access.
3123 The most popular mechanism, and the @i{de facto} standard supported by
3124 all the major robots, is the ``Robots Exclusion Standard'' (RES) written
3125 by Martijn Koster et al. in 1994. It specifies the format of a text
3126 file containing directives that instruct the robots which URL paths to
3127 avoid. To be found by the robots, the specifications must be placed in
@file{/robots.txt} in the server root, which the robots are expected to
download and parse.
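For illustration, a minimal @file{/robots.txt} that admits robots
everywhere except a (hypothetical) CGI directory could look like this:

@example
User-agent: *
Disallow: /cgi-bin/
@end example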
3131 Although Wget is not a web robot in the strictest sense of the word, it
can download large parts of the site without the user's intervention to
3133 download an individual page. Because of that, Wget honors RES when
3134 downloading recursively. For instance, when you issue:
3137 wget -r http://www.server.com/
3140 First the index of @samp{www.server.com} will be downloaded. If Wget
3141 finds that it wants to download more documents from that server, it will
3142 request @samp{http://www.server.com/robots.txt} and, if found, use it
for further downloads. @file{robots.txt} is loaded only once per each
server.
3146 Until version 1.8, Wget supported the first version of the standard,
3147 written by Martijn Koster in 1994 and available at
3148 @url{http://www.robotstxt.org/wc/norobots.html}. As of version 1.8,
3149 Wget has supported the additional directives specified in the internet
3150 draft @samp{<draft-koster-robots-00.txt>} titled ``A Method for Web
Robots Control''. The draft, which has, as far as I know, never made
it to an @sc{rfc}, is available at
3153 @url{http://www.robotstxt.org/wc/norobots-rfc.txt}.
3155 This manual no longer includes the text of the Robot Exclusion Standard.
The second, less-known mechanism enables the author of an individual
document to specify whether they want the links from the file to be
followed by a robot. This is achieved using the @code{META} tag, like
this:
3163 <meta name="robots" content="nofollow">
3166 This is explained in some detail at
3167 @url{http://www.robotstxt.org/wc/meta-user.html}. Wget supports this
method of robot exclusion in addition to the usual @file{/robots.txt}
exclusion.
3171 If you know what you are doing and really really wish to turn off the
3172 robot exclusion, set the @code{robots} variable to @samp{off} in your
3173 @file{.wgetrc}. You can achieve the same effect from the command line
3174 using the @code{-e} switch, e.g. @samp{wget -e robots=off @var{url}...}.
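For instance, you could put this line in @file{.wgetrc}:

@example
robots = off
@end example

@noindent
or, equivalently, pass it on the command line (the @sc{url} is
hypothetical):

@example
wget -e robots=off -r http://www.example.com/
@end example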
3176 @node Security Considerations
3177 @section Security Considerations
3180 When using Wget, you must be aware that it sends unencrypted passwords
3181 through the network, which may present a security problem. Here are the
3182 main issues, and some solutions.
3186 The passwords on the command line are visible using @code{ps}. The best
3187 way around it is to use @code{wget -i -} and feed the @sc{url}s to
3188 Wget's standard input, each on a separate line, terminated by @kbd{C-d}.
3189 Another workaround is to use @file{.netrc} to store passwords; however,
3190 storing unencrypted passwords is also considered a security risk.
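For instance (the host and credentials are hypothetical), the
@sc{url}s can be typed interactively:

@example
$ wget -i -
ftp://myname:mypassword@@ftp.example.com/pub/file.tar.gz
@kbd{C-d}
@end example

@noindent
A @file{.netrc} entry for the same hypothetical host would look like
this:

@example
machine ftp.example.com
login myname
password mypassword
@end example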
With the insecure @dfn{basic} authentication scheme, unencrypted
passwords are transmitted through network routers and gateways.
The @sc{ftp} passwords are likewise not encrypted. There is no good
3198 solution for this at the moment.
3201 Although the ``normal'' output of Wget tries to hide the passwords,
3202 debugging logs show them, in all forms. This problem is avoided by
being careful when you send debug logs (yes, even when you send them to
me).
3208 @section Contributors
3209 @cindex contributors
3212 GNU Wget was written by Hrvoje Nik@v{s}i@'{c} @email{hniksic@@xemacs.org}.
3217 However, its development could never have gone as far as it has, were it
3218 not for the help of many people, either with bug reports, feature
3219 proposals, patches, or letters saying ``Thanks!''.
Special thanks go to the following people (in no particular order):
3225 Karsten Thygesen---donated system resources such as the mailing list,
web space, and @sc{ftp} space, along with a lot of time to make these
actually work.
3230 Shawn McHorse---bug reports and patches.
Kaveh R. Ghazi---on-the-fly @code{ansi2knr}-ization. Lots of
portability fixes.
3237 Gordon Matzigkeit---@file{.netrc} support.
3241 Zlatko @v{C}alu@v{s}i@'{c}, Tomislav Vujec and Dra@v{z}en
3242 Ka@v{c}ar---feature suggestions and ``philosophical'' discussions.
3250 Darko Budor---initial port to Windows.
3253 Antonio Rosella---help and suggestions, plus the Italian translation.
Tomislav Petrovi@'{c}, Mario Miko@v{c}evi@'{c}---many bug reports and
suggestions.
3266 Fran@,{c}ois Pinard---many thorough bug reports and discussions.
Karl Eichwalder---lots of help with internationalization and other
things.
Junio Hamano---donated support for Opie and @sc{http} @code{Digest}
authentication.
The people who provided donations for development, including Brian
Gough.
3285 The following people have provided patches, bug/build reports, useful
3286 suggestions, beta testing services, fan mail and all the other things
3287 that make maintenance so much fun:
3307 Kristijan @v{C}onka@v{s},
3327 Aleksandar Erkalovi@'{c},
3350 Erik Magnus Hulthen,
3369 Goran Kezunovi@'{c},
Σίμος Ξενιτέλλης (Simos KSenitellis),
3390 Nicol@'{a}s Lichtmeier,
3396 Alexander V. Lukyanov,
Juan Jos@'{e} Rodr@'{@dotless{i}}gues,
3446 Szakacsits Szabolcs,
3454 Douglas E. Wegscheid,
Apologies to all whom I accidentally left out, and many thanks to all the
3466 subscribers of the Wget mailing list.
3473 @cindex free software
3475 GNU Wget is licensed under the GNU General Public License (GNU GPL),
3476 which makes it @dfn{free software}. Please note that ``free'' in ``free
3477 software'' refers to liberty, not price. As some people like to point
out, it's the ``free'' of ``free speech'', not the ``free'' of ``free
beer''.
3481 The exact and legally binding distribution terms are spelled out below.
3482 The GPL guarantees that you have the right (freedom) to run and change
3483 GNU Wget and distribute it to others, and even---if you want---charge
3484 money for doing any of those things. With these rights comes the
3485 obligation to distribute the source code along with the software and to
3486 grant your recipients the same rights and impose the same restrictions.
3488 This licensing model is also known as @dfn{open source} because it,
3489 among other things, makes sure that all recipients will receive the
3490 source code along with the program, and be able to improve it. The GNU
3491 project prefers the term ``free software'' for reasons outlined at
3492 @url{http://www.gnu.org/philosophy/free-software-for-freedom.html}.
3494 The exact license terms are defined by this paragraph and the GNU
3495 General Public License it refers to:
3498 GNU Wget is free software; you can redistribute it and/or modify it
3499 under the terms of the GNU General Public License as published by the
3500 Free Software Foundation; either version 2 of the License, or (at your
3501 option) any later version.
3503 GNU Wget is distributed in the hope that it will be useful, but WITHOUT
3504 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
3505 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
3508 A copy of the GNU General Public License is included as part of this
3509 manual; if you did not receive it, write to the Free Software
3510 Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
3513 In addition to this, this manual is free in the same sense:
3516 Permission is granted to copy, distribute and/or modify this document
3517 under the terms of the GNU Free Documentation License, Version 1.2 or
3518 any later version published by the Free Software Foundation; with the
3519 Invariant Sections being ``GNU General Public License'' and ``GNU Free
3520 Documentation License'', with no Front-Cover Texts, and with no
3521 Back-Cover Texts. A copy of the license is included in the section
3522 entitled ``GNU Free Documentation License''.
3525 @c #### Maybe we should wrap these licenses in ifinfo? Stallman says
3526 @c that the GFDL needs to be present in the manual, and to me it would
@c suck to include the license for the manual and not the license for
@c the program.
3530 The full texts of the GNU General Public License and of the GNU Free
3531 Documentation License are available below.
3534 * GNU General Public License::
3535 * GNU Free Documentation License::
3538 @node GNU General Public License
3539 @section GNU General Public License
3540 @center Version 2, June 1991
3543 Copyright @copyright{} 1989, 1991 Free Software Foundation, Inc.
3544 675 Mass Ave, Cambridge, MA 02139, USA
3546 Everyone is permitted to copy and distribute verbatim copies
3547 of this license document, but changing it is not allowed.
3550 @unnumberedsec Preamble
3552 The licenses for most software are designed to take away your
3553 freedom to share and change it. By contrast, the GNU General Public
3554 License is intended to guarantee your freedom to share and change free
3555 software---to make sure the software is free for all its users. This
3556 General Public License applies to most of the Free Software
3557 Foundation's software and to any other program whose authors commit to
3558 using it. (Some other Free Software Foundation software is covered by
3559 the GNU Library General Public License instead.) You can apply it to
3562 When we speak of free software, we are referring to freedom, not
3563 price. Our General Public Licenses are designed to make sure that you
3564 have the freedom to distribute copies of free software (and charge for
3565 this service if you wish), that you receive source code or can get it
3566 if you want it, that you can change the software or use pieces of it
3567 in new free programs; and that you know you can do these things.
3569 To protect your rights, we need to make restrictions that forbid
3570 anyone to deny you these rights or to ask you to surrender the rights.
3571 These restrictions translate to certain responsibilities for you if you
3572 distribute copies of the software, or if you modify it.
3574 For example, if you distribute copies of such a program, whether
3575 gratis or for a fee, you must give the recipients all the rights that
3576 you have. You must make sure that they, too, receive or can get the
3577 source code. And you must show them these terms so they know their
3580 We protect your rights with two steps: (1) copyright the software, and
3581 (2) offer you this license which gives you legal permission to copy,
3582 distribute and/or modify the software.
3584 Also, for each author's protection and ours, we want to make certain
3585 that everyone understands that there is no warranty for this free
3586 software. If the software is modified by someone else and passed on, we
3587 want its recipients to know that what they have is not the original, so
3588 that any problems introduced by others will not reflect on the original
3589 authors' reputations.
3591 Finally, any free program is threatened constantly by software
3592 patents. We wish to avoid the danger that redistributors of a free
3593 program will individually obtain patent licenses, in effect making the
3594 program proprietary. To prevent this, we have made it clear that any
3595 patent must be licensed for everyone's free use or not licensed at all.
3597 The precise terms and conditions for copying, distribution and
3598 modification follow.
3601 @unnumberedsec TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
3609 This License applies to any program or other work which contains
3610 a notice placed by the copyright holder saying it may be distributed
3611 under the terms of this General Public License. The ``Program'', below,
3612 refers to any such program or work, and a ``work based on the Program''
3613 means either the Program or any derivative work under copyright law:
3614 that is to say, a work containing the Program or a portion of it,
3615 either verbatim or with modifications and/or translated into another
3616 language. (Hereinafter, translation is included without limitation in
3617 the term ``modification''.) Each licensee is addressed as ``you''.
3619 Activities other than copying, distribution and modification are not
3620 covered by this License; they are outside its scope. The act of
3621 running the Program is not restricted, and the output from the Program
3622 is covered only if its contents constitute a work based on the
3623 Program (independent of having been made by running the Program).
3624 Whether that is true depends on what the Program does.
3627 You may copy and distribute verbatim copies of the Program's
3628 source code as you receive it, in any medium, provided that you
3629 conspicuously and appropriately publish on each copy an appropriate
3630 copyright notice and disclaimer of warranty; keep intact all the
3631 notices that refer to this License and to the absence of any warranty;
3632 and give any other recipients of the Program a copy of this License
3633 along with the Program.
3635 You may charge a fee for the physical act of transferring a copy, and
3636 you may at your option offer warranty protection in exchange for a fee.
3639 You may modify your copy or copies of the Program or any portion
3640 of it, thus forming a work based on the Program, and copy and
3641 distribute such modifications or work under the terms of Section 1
3642 above, provided that you also meet all of these conditions:
3646 You must cause the modified files to carry prominent notices
3647 stating that you changed the files and the date of any change.
3650 You must cause any work that you distribute or publish, that in
3651 whole or in part contains or is derived from the Program or any
3652 part thereof, to be licensed as a whole at no charge to all third
3653 parties under the terms of this License.
3656 If the modified program normally reads commands interactively
3657 when run, you must cause it, when started running for such
3658 interactive use in the most ordinary way, to print or display an
3659 announcement including an appropriate copyright notice and a
3660 notice that there is no warranty (or else, saying that you provide
3661 a warranty) and that users may redistribute the program under
3662 these conditions, and telling the user how to view a copy of this
3663 License. (Exception: if the Program itself is interactive but
3664 does not normally print such an announcement, your work based on
3665 the Program is not required to print an announcement.)
3668 These requirements apply to the modified work as a whole. If
3669 identifiable sections of that work are not derived from the Program,
3670 and can be reasonably considered independent and separate works in
3671 themselves, then this License, and its terms, do not apply to those
3672 sections when you distribute them as separate works. But when you
3673 distribute the same sections as part of a whole which is a work based
3674 on the Program, the distribution of the whole must be on the terms of
3675 this License, whose permissions for other licensees extend to the
3676 entire whole, and thus to each and every part regardless of who wrote it.
3678 Thus, it is not the intent of this section to claim rights or contest
3679 your rights to work written entirely by you; rather, the intent is to
3680 exercise the right to control the distribution of derivative or
3681 collective works based on the Program.
3683 In addition, mere aggregation of another work not based on the Program
3684 with the Program (or with a work based on the Program) on a volume of
3685 a storage or distribution medium does not bring the other work under
3686 the scope of this License.
3689 You may copy and distribute the Program (or a work based on it,
3690 under Section 2) in object code or executable form under the terms of
3691 Sections 1 and 2 above provided that you also do one of the following:
3695 Accompany it with the complete corresponding machine-readable
3696 source code, which must be distributed under the terms of Sections
3697 1 and 2 above on a medium customarily used for software interchange; or,
3700 Accompany it with a written offer, valid for at least three
3701 years, to give any third party, for a charge no more than your
3702 cost of physically performing source distribution, a complete
3703 machine-readable copy of the corresponding source code, to be
3704 distributed under the terms of Sections 1 and 2 above on a medium
3705 customarily used for software interchange; or,
3708 Accompany it with the information you received as to the offer
3709 to distribute corresponding source code. (This alternative is
3710 allowed only for noncommercial distribution and only if you
3711 received the program in object code or executable form with such
3712 an offer, in accord with Subsection b above.)
3715 The source code for a work means the preferred form of the work for
3716 making modifications to it. For an executable work, complete source
3717 code means all the source code for all modules it contains, plus any
3718 associated interface definition files, plus the scripts used to
3719 control compilation and installation of the executable. However, as a
3720 special exception, the source code distributed need not include
3721 anything that is normally distributed (in either source or binary
3722 form) with the major components (compiler, kernel, and so on) of the
3723 operating system on which the executable runs, unless that component
3724 itself accompanies the executable.
3726 If distribution of executable or object code is made by offering
3727 access to copy from a designated place, then offering equivalent
3728 access to copy the source code from the same place counts as
3729 distribution of the source code, even though third parties are not
3730 compelled to copy the source along with the object code.
3733 You may not copy, modify, sublicense, or distribute the Program
3734 except as expressly provided under this License. Any attempt
3735 otherwise to copy, modify, sublicense or distribute the Program is
3736 void, and will automatically terminate your rights under this License.
3737 However, parties who have received copies, or rights, from you under
3738 this License will not have their licenses terminated so long as such
3739 parties remain in full compliance.
3742 You are not required to accept this License, since you have not
3743 signed it. However, nothing else grants you permission to modify or
3744 distribute the Program or its derivative works. These actions are
3745 prohibited by law if you do not accept this License. Therefore, by
3746 modifying or distributing the Program (or any work based on the
3747 Program), you indicate your acceptance of this License to do so, and
3748 all its terms and conditions for copying, distributing or modifying
3749 the Program or works based on it.
3752 Each time you redistribute the Program (or any work based on the
3753 Program), the recipient automatically receives a license from the
3754 original licensor to copy, distribute or modify the Program subject to
3755 these terms and conditions. You may not impose any further
3756 restrictions on the recipients' exercise of the rights granted herein.
3757 You are not responsible for enforcing compliance by third parties to
3761 If, as a consequence of a court judgment or allegation of patent
3762 infringement or for any other reason (not limited to patent issues),
3763 conditions are imposed on you (whether by court order, agreement or
3764 otherwise) that contradict the conditions of this License, they do not
3765 excuse you from the conditions of this License. If you cannot
3766 distribute so as to satisfy simultaneously your obligations under this
3767 License and any other pertinent obligations, then as a consequence you
3768 may not distribute the Program at all. For example, if a patent
3769 license would not permit royalty-free redistribution of the Program by
3770 all those who receive copies directly or indirectly through you, then
3771 the only way you could satisfy both it and this License would be to
3772 refrain entirely from distribution of the Program.
3774 If any portion of this section is held invalid or unenforceable under
3775 any particular circumstance, the balance of the section is intended to
3776 apply and the section as a whole is intended to apply in other
3779 It is not the purpose of this section to induce you to infringe any
3780 patents or other property right claims or to contest validity of any
3781 such claims; this section has the sole purpose of protecting the
3782 integrity of the free software distribution system, which is
3783 implemented by public license practices. Many people have made
3784 generous contributions to the wide range of software distributed
3785 through that system in reliance on consistent application of that
3786 system; it is up to the author/donor to decide if he or she is willing
3787 to distribute software through any other system and a licensee cannot
3790 This section is intended to make thoroughly clear what is believed to
3791 be a consequence of the rest of this License.
3794 If the distribution and/or use of the Program is restricted in
3795 certain countries either by patents or by copyrighted interfaces, the
3796 original copyright holder who places the Program under this License
3797 may add an explicit geographical distribution limitation excluding
3798 those countries, so that distribution is permitted only in or among
3799 countries not thus excluded. In such case, this License incorporates
3800 the limitation as if written in the body of this License.
3803 The Free Software Foundation may publish revised and/or new versions
3804 of the General Public License from time to time. Such new versions will
3805 be similar in spirit to the present version, but may differ in detail to
3806 address new problems or concerns.
3808 Each version is given a distinguishing version number. If the Program
3809 specifies a version number of this License which applies to it and ``any
3810 later version'', you have the option of following the terms and conditions
3811 either of that version or of any later version published by the Free
3812 Software Foundation. If the Program does not specify a version number of
3813 this License, you may choose any version ever published by the Free Software
3817 If you wish to incorporate parts of the Program into other free
3818 programs whose distribution conditions are different, write to the author
3819 to ask for permission. For software which is copyrighted by the Free
3820 Software Foundation, write to the Free Software Foundation; we sometimes
3821 make exceptions for this. Our decision will be guided by the two goals
3822 of preserving the free status of all derivatives of our free software and
3823 of promoting the sharing and reuse of software generally.
3826 @heading NO WARRANTY
3834 BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
3835 FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
3836 OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
3837 PROVIDE THE PROGRAM ``AS IS'' WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
3838 OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
3839 MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
3840 TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
3841 PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
3842 REPAIR OR CORRECTION.
3845 IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
3846 WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
3847 REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
3848 INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
3849 OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
3850 TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
3851 YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
3852 PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
3853 POSSIBILITY OF SUCH DAMAGES.
3857 @heading END OF TERMS AND CONDITIONS
3864 @unnumberedsec How to Apply These Terms to Your New Programs
3866 If you develop a new program, and you want it to be of the greatest
3867 possible use to the public, the best way to achieve this is to make it
3868 free software which everyone can redistribute and change under these terms.
3870 To do so, attach the following notices to the program. It is safest
3871 to attach them to the start of each source file to most effectively
3872 convey the exclusion of warranty; and each file should have at least
3873 the ``copyright'' line and a pointer to where the full notice is found.
3876 @var{one line to give the program's name and an idea of what it does.}
3877 Copyright (C) 20@var{yy} @var{name of author}
3879 This program is free software; you can redistribute it and/or
3880 modify it under the terms of the GNU General Public License
3881 as published by the Free Software Foundation; either version 2
3882 of the License, or (at your option) any later version.
3884 This program is distributed in the hope that it will be useful,
3885 but WITHOUT ANY WARRANTY; without even the implied warranty of
3886 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3887 GNU General Public License for more details.
3889 You should have received a copy of the GNU General Public License
3890 along with this program; if not, write to the Free Software
3891 Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
3894 Also add information on how to contact you by electronic and paper mail.
3896 If the program is interactive, make it output a short notice like this
3897 when it starts in an interactive mode:
3900 Gnomovision version 69, Copyright (C) 20@var{yy} @var{name of author}
3901 Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
3902 type `show w'. This is free software, and you are welcome
3903 to redistribute it under certain conditions; type `show c'
3907 The hypothetical commands @samp{show w} and @samp{show c} should show
3908 the appropriate parts of the General Public License. Of course, the
3909 commands you use may be called something other than @samp{show w} and
3910 @samp{show c}; they could even be mouse-clicks or menu items---whatever
3913 You should also get your employer (if you work as a programmer) or your
3914 school, if any, to sign a ``copyright disclaimer'' for the program, if
3915 necessary. Here is a sample; alter the names:
3919 Yoyodyne, Inc., hereby disclaims all copyright
3920 interest in the program `Gnomovision'
3921 (which makes passes at compilers) written
3924 @var{signature of Ty Coon}, 1 April 1989
3925 Ty Coon, President of Vice
3929 This General Public License does not permit incorporating your program into
3930 proprietary programs. If your program is a subroutine library, you may
3931 consider it more useful to permit linking proprietary applications with the
3932 library. If this is what you want to do, use the GNU Library General
3933 Public License instead of this License.
3938 @unnumbered Concept Index