1 \input texinfo @c -*-texinfo-*-
7 @settitle GNU Wget @value{VERSION} Manual
8 @c Disable the monstrous rectangles beside overfull hbox-es.
10 @c Use `odd' to print double-sided.
15 @c Remove this if you don't use A4 paper.
19 @c Title for man page. The weird way texi2pod.pl is written requires
20 @c the preceding @set.
22 @c man title Wget The non-interactive network downloader.
24 @dircategory Network Applications
26 * Wget: (wget). The non-interactive network downloader.
This file documents the GNU Wget utility for downloading network
data.
33 @c man begin COPYRIGHT
34 Copyright @copyright{} 1996, 1997, 1998, 2000, 2001, 2002, 2003, 2004, 2005
35 Free Software Foundation, Inc.
37 Permission is granted to make and distribute verbatim copies of
38 this manual provided the copyright notice and this permission notice
39 are preserved on all copies.
42 Permission is granted to process this file through TeX and print the
43 results, provided the printed document carries a copying permission
44 notice identical to this one except for the removal of this paragraph
45 (this paragraph not being relevant to the printed manual).
47 Permission is granted to copy, distribute and/or modify this document
48 under the terms of the GNU Free Documentation License, Version 1.1 or
49 any later version published by the Free Software Foundation; with the
50 Invariant Sections being ``GNU General Public License'' and ``GNU Free
51 Documentation License'', with no Front-Cover Texts, and with no
52 Back-Cover Texts. A copy of the license is included in the section
53 entitled ``GNU Free Documentation License''.
58 @title GNU Wget @value{VERSION}
59 @subtitle The non-interactive download utility
60 @subtitle Updated for Wget @value{VERSION}, @value{UPDATED}
@author by Hrvoje Nik@v{s}i@'{c} and the developers of Wget
Originally written by Hrvoje Niksic <hniksic@@xemacs.org>.
68 GNU Info entry for @file{wget}.
73 @vskip 0pt plus 1filll
Copyright @copyright{} 1996, 1997, 1998, 2000, 2001, 2003, 2004, 2005
75 Free Software Foundation, Inc.
77 Permission is granted to copy, distribute and/or modify this document
78 under the terms of the GNU Free Documentation License, Version 1.2 or
79 any later version published by the Free Software Foundation; with the
80 Invariant Sections being ``GNU General Public License'' and ``GNU Free
81 Documentation License'', with no Front-Cover Texts, and with no
82 Back-Cover Texts. A copy of the license is included in the section
83 entitled ``GNU Free Documentation License''.
88 @top Wget @value{VERSION}
90 This manual documents version @value{VERSION} of GNU Wget, the freely
91 available utility for network downloads.
93 Copyright @copyright{} 1996, 1997, 1998, 2000, 2001, 2003, 2004, 2005
94 Free Software Foundation, Inc.
97 * Overview:: Features of Wget.
98 * Invoking:: Wget command-line arguments.
99 * Recursive Download:: Downloading interlinked pages.
100 * Following Links:: The available methods of chasing links.
101 * Time-Stamping:: Mirroring according to time-stamps.
102 * Startup File:: Wget's initialization file.
103 * Examples:: Examples of usage.
104 * Various:: The stuff that doesn't fit anywhere else.
105 * Appendices:: Some useful references.
106 * Copying:: You may give out copies of Wget and of this manual.
107 * Concept Index:: Topics covered by this manual.
116 @c man begin DESCRIPTION
117 GNU Wget is a free utility for non-interactive download of files from
118 the Web. It supports @sc{http}, @sc{https}, and @sc{ftp} protocols, as
119 well as retrieval through @sc{http} proxies.
122 This chapter is a partial overview of Wget's features.
126 @c man begin DESCRIPTION
127 Wget is non-interactive, meaning that it can work in the background,
128 while the user is not logged on. This allows you to start a retrieval
129 and disconnect from the system, letting Wget finish the work. By
contrast, most Web browsers require the user's constant presence, which
can be a great hindrance when transferring a lot of data.
137 @c man begin DESCRIPTION
141 @c man begin DESCRIPTION
142 Wget can follow links in @sc{html} and @sc{xhtml} pages and create local
143 versions of remote web sites, fully recreating the directory structure of
144 the original site. This is sometimes referred to as ``recursive
145 downloading.'' While doing that, Wget respects the Robot Exclusion
146 Standard (@file{/robots.txt}). Wget can be instructed to convert the
links in downloaded @sc{html} files to the local files for offline
viewing.
153 File name wildcard matching and recursive mirroring of directories are
154 available when retrieving via @sc{ftp}. Wget can read the time-stamp
155 information given by both @sc{http} and @sc{ftp} servers, and store it
156 locally. Thus Wget can see if the remote file has changed since last
157 retrieval, and automatically retrieve the new version if it has. This
makes Wget suitable for mirroring of @sc{ftp} sites, as well as home
pages.
164 @c man begin DESCRIPTION
168 @c man begin DESCRIPTION
169 Wget has been designed for robustness over slow or unstable network
170 connections; if a download fails due to a network problem, it will
171 keep retrying until the whole file has been retrieved. If the server
172 supports regetting, it will instruct the server to continue the
173 download from where it left off.
178 Wget supports proxy servers, which can lighten the network load, speed
179 up retrieval and provide access behind firewalls. However, if you are
180 behind a firewall that requires that you use a socks style gateway,
181 you can get the socks library and build Wget with support for socks.
Wget uses the passive @sc{ftp} downloading by default, active @sc{ftp}
being an option.
187 Wget supports IP version 6, the next generation of IP. IPv6 is
188 autodetected at compile-time, and can be disabled at either build or
189 run time. Binaries built with IPv6 support work well in both
190 IPv4-only and dual family environments.
194 Built-in features offer mechanisms to tune which links you wish to follow
195 (@pxref{Following Links}).
199 The progress of individual downloads is traced using a progress gauge.
200 Interactive downloads are tracked using a ``thermometer''-style gauge,
201 whereas non-interactive ones are traced with dots, each dot
202 representing a fixed amount of data received (1KB by default). Either
203 gauge can be customized to your preferences.
207 Most of the features are fully configurable, either through command line
208 options, or via the initialization file @file{.wgetrc} (@pxref{Startup
209 File}). Wget allows you to define @dfn{global} startup files
210 (@file{/usr/local/etc/wgetrc} by default) for site settings.
215 @item /usr/local/etc/wgetrc
216 Default location of the @dfn{global} startup file.
226 Finally, GNU Wget is free software. This means that everyone may use
227 it, redistribute it and/or modify it under the terms of the GNU General
Public License, as published by the Free Software Foundation (see the
file @file{COPYING} that came with GNU Wget, for details).
239 By default, Wget is very simple to invoke. The basic syntax is:
242 @c man begin SYNOPSIS
243 wget [@var{option}]@dots{} [@var{URL}]@dots{}
247 Wget will simply download all the @sc{url}s specified on the command
248 line. @var{URL} is a @dfn{Uniform Resource Locator}, as defined below.
250 However, you may wish to change some of the default parameters of
Wget.  You can do it in two ways: permanently, adding the appropriate
command to @file{.wgetrc} (@pxref{Startup File}), or specifying it on
the command line.
258 * Basic Startup Options::
259 * Logging and Input File Options::
261 * Directory Options::
263 * HTTPS (SSL/TLS) Options::
265 * Recursive Retrieval Options::
266 * Recursive Accept/Reject Options::
274 @dfn{URL} is an acronym for Uniform Resource Locator. A uniform
275 resource locator is a compact string representation for a resource
276 available via the Internet. Wget recognizes the @sc{url} syntax as per
@sc{rfc1738}.  This is the most widely used form (square brackets
denote optional parts):
281 http://host[:port]/directory/file
282 ftp://host[:port]/directory/file
285 You can also encode your username and password within a @sc{url}:
288 ftp://user:password@@host/path
289 http://user:password@@host/path
292 Either @var{user} or @var{password}, or both, may be left out. If you
293 leave out either the @sc{http} username or password, no authentication
294 will be sent. If you leave out the @sc{ftp} username, @samp{anonymous}
295 will be used. If you leave out the @sc{ftp} password, your email
296 address will be supplied as a default password.@footnote{If you have a
@file{.netrc} file in your home directory, the password will also be
searched for there.}
300 @strong{Important Note}: if you specify a password-containing @sc{url}
301 on the command line, the username and password will be plainly visible
302 to all users on the system, by way of @code{ps}. On multi-user systems,
303 this is a big security risk. To work around it, use @code{wget -i -}
304 and feed the @sc{url}s to Wget's standard input, each on a separate
305 line, terminated by @kbd{C-d}.
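For instance, a minimal sketch of that workaround (host and credentials
hypothetical):

@example
wget -i -
http://user:password@@host/path
@end example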
307 You can encode unsafe characters in a @sc{url} as @samp{%xy}, @code{xy}
308 being the hexadecimal representation of the character's @sc{ascii}
309 value. Some common unsafe characters include @samp{%} (quoted as
310 @samp{%25}), @samp{:} (quoted as @samp{%3A}), and @samp{@@} (quoted as
@samp{%40}).  Refer to @sc{rfc1738} for a comprehensive list of unsafe
characters.
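For instance, an @sc{ftp} username that itself contains @samp{@@} would
be quoted as @samp{%40} (everything here hypothetical):

@example
wget 'ftp://user%40example.com:password@@host/path'
@end example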
314 Wget also supports the @code{type} feature for @sc{ftp} @sc{url}s. By
315 default, @sc{ftp} documents are retrieved in the binary mode (type
316 @samp{i}), which means that they are downloaded unchanged. Another
317 useful mode is the @samp{a} (@dfn{ASCII}) mode, which converts the line
318 delimiters between the different operating systems, and is thus useful
319 for text files. Here is an example:
322 ftp://host/directory/file;type=a
325 Two alternative variants of @sc{url} specification are also supported,
because of historical (hysterical?) reasons and their widespread use.
328 @sc{ftp}-only syntax (supported by @code{NcFTP}):
333 @sc{http}-only syntax (introduced by @code{Netscape}):
338 These two alternative forms are deprecated, and may cease being
339 supported in the future.
341 If you do not understand the difference between these notations, or do
342 not know which one to use, just use the plain ordinary format you use
343 with your favorite browser, like @code{Lynx} or @code{Netscape}.
346 @section Option Syntax
347 @cindex option syntax
348 @cindex syntax of options
Since Wget uses GNU getopt to process its arguments, every option has a
351 short form and a long form. Long options are more convenient to
352 remember, but take time to type. You may freely mix different option
styles, or specify options after the command-line arguments.  Thus you
may write:
357 wget -r --tries=10 http://fly.srk.fer.hr/ -o log
360 The space between the option accepting an argument and the argument may
be omitted.  Instead of @samp{-o log} you can write @samp{-olog}.
You may put several options that do not require arguments together,
like @samp{wget -drc @var{URL}}.  This is completely equivalent to:
373 wget -d -r -c @var{URL}
376 Since the options can be specified after the arguments, you may
terminate them with @samp{--}.  So @samp{wget -o log -- -x} will try to
download the @sc{url} @samp{-x}, reporting the failure to @file{log}.
384 The options that accept comma-separated lists all respect the convention
385 that specifying an empty list clears its value. This can be useful to
386 clear the @file{.wgetrc} settings. For instance, if your @file{.wgetrc}
387 sets @code{exclude_directories} to @file{/cgi-bin}, the following
388 example will first reset it, and then set it to exclude @file{/~nobody}
389 and @file{/~somebody}. You can also clear the lists in @file{.wgetrc}
390 (@pxref{Wgetrc Syntax}).
393 wget -X '' -X /~nobody,/~somebody
398 @node Basic Startup Options
399 @section Basic Startup Options
404 Display the version of Wget.
408 Print a help message describing all of Wget's command-line options.
412 Go to background immediately after startup. If no output file is
specified via @samp{-o}, output is redirected to @file{wget-log}.
415 @cindex execute wgetrc command
416 @item -e @var{command}
417 @itemx --execute @var{command}
418 Execute @var{command} as if it were a part of @file{.wgetrc}
419 (@pxref{Startup File}). A command thus invoked will be executed
420 @emph{after} the commands in @file{.wgetrc}, thus taking precedence over
421 them. If you need to specify more than one wgetrc command, use multiple
422 instances of @samp{-e}.
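For example, assuming the @code{robots} wgetrc command, a single run
could override it like this (a sketch):

@example
wget -e robots=off -r http://fly.srk.fer.hr/
@end example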
426 @node Logging and Input File Options
427 @section Logging and Input File Options
432 @item -o @var{logfile}
433 @itemx --output-file=@var{logfile}
Log all messages to @var{logfile}.  The messages are normally reported
to standard error.
437 @cindex append to log
438 @item -a @var{logfile}
439 @itemx --append-output=@var{logfile}
440 Append to @var{logfile}. This is the same as @samp{-o}, only it appends
441 to @var{logfile} instead of overwriting the old log file. If
442 @var{logfile} does not exist, a new file is created.
447 Turn on debug output, meaning various information important to the
448 developers of Wget if it does not work properly. Your system
449 administrator may have chosen to compile Wget without debug support, in
450 which case @samp{-d} will not work. Please note that compiling with
451 debug support is always safe---Wget compiled with the debug support will
452 @emph{not} print any debug info unless requested with @samp{-d}.
@xref{Reporting Bugs}, for more information on how to use @samp{-d} for
sending bug reports.
459 Turn off Wget's output.
Turn on verbose output, with all the available data.  The default
output is verbose.
469 Non-verbose output---turn off verbose without being completely quiet
470 (use @samp{-q} for that), which means that error messages and basic
471 information still get printed.
475 @itemx --input-file=@var{file}
476 Read @sc{url}s from @var{file}, in which case no @sc{url}s need to be on
477 the command line. If there are @sc{url}s both on the command line and
in an input file, those on the command line will be the first ones to
479 be retrieved. The @var{file} need not be an @sc{html} document (but no
harm if it is)---it is enough if the @sc{url}s are just listed
sequentially.
483 However, if you specify @samp{--force-html}, the document will be
484 regarded as @samp{html}. In that case you may have problems with
485 relative links, which you can solve either by adding @code{<base
486 href="@var{url}">} to the documents or by specifying
487 @samp{--base=@var{url}} on the command line.
492 When input is read from a file, force it to be treated as an @sc{html}
493 file. This enables you to retrieve relative links from existing
494 @sc{html} files on your local disk, by adding @code{<base
href="@var{url}">} to @sc{html}, or using the @samp{--base} command-line
option.
498 @cindex base for relative links in input file
500 @itemx --base=@var{URL}
501 When used in conjunction with @samp{-F}, prepends @var{URL} to relative
502 links in the file specified by @samp{-i}.
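A sketch that combines the three options to resolve relative links from
a local file (file name hypothetical):

@example
wget --force-html --base=http://fly.srk.fer.hr/ \
     --input-file=bookmarks.html
@end example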
505 @node Download Options
506 @section Download Options
509 @cindex bind() address
510 @cindex client IP address
511 @cindex IP address, client
512 @item --bind-address=@var{ADDRESS}
513 When making client TCP/IP connections, @code{bind()} to @var{ADDRESS} on
514 the local machine. @var{ADDRESS} may be specified as a hostname or IP
address.  This option can be useful if your machine is bound to
multiple IPs.
520 @cindex number of retries
521 @item -t @var{number}
522 @itemx --tries=@var{number}
523 Set number of retries to @var{number}. Specify 0 or @samp{inf} for
524 infinite retrying. The default is to retry 20 times, with the exception
525 of fatal errors like ``connection refused'' or ``not found'' (404),
526 which are not retried.
529 @itemx --output-document=@var{file}
530 The documents will not be written to the appropriate files, but all will
531 be concatenated together and written to @var{file}. If @var{file}
532 already exists, it will be overwritten. If the @var{file} is @samp{-},
533 the documents will be written to standard output (disabling @samp{-k}).
Note that a combination with @samp{-k} is only well-defined for
downloading a single document.
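For example, a sketch that prints a page to standard output and pipes
it to another program:

@example
wget -q -O - http://fly.srk.fer.hr/ | grep -i wget
@end example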
538 @cindex clobbering, file
539 @cindex downloading multiple times
543 If a file is downloaded more than once in the same directory, Wget's
544 behavior depends on a few options, including @samp{-nc}. In certain
545 cases, the local file will be @dfn{clobbered}, or overwritten, upon
546 repeated download. In other cases it will be preserved.
548 When running Wget without @samp{-N}, @samp{-nc}, or @samp{-r},
549 downloading the same file in the same directory will result in the
550 original copy of @var{file} being preserved and the second copy being
551 named @samp{@var{file}.1}. If that file is downloaded yet again, the
552 third copy will be named @samp{@var{file}.2}, and so on. When
553 @samp{-nc} is specified, this behavior is suppressed, and Wget will
554 refuse to download newer copies of @samp{@var{file}}. Therefore,
555 ``@code{no-clobber}'' is actually a misnomer in this mode---it's not
556 clobbering that's prevented (as the numeric suffixes were already
preventing clobbering), but rather the multiple version saving that's
prevented.
560 When running Wget with @samp{-r}, but without @samp{-N} or @samp{-nc},
561 re-downloading a file will result in the new copy simply overwriting the
562 old. Adding @samp{-nc} will prevent this behavior, instead causing the
original version to be preserved and any newer copies on the server to
be ignored.
566 When running Wget with @samp{-N}, with or without @samp{-r}, the
567 decision as to whether or not to download a newer copy of a file depends
568 on the local and remote timestamp and size of the file
(@pxref{Time-Stamping}).  @samp{-nc} may not be specified at the same
time as @samp{-N}.
572 Note that when @samp{-nc} is specified, files with the suffixes
573 @samp{.html} or @samp{.htm} will be loaded from the local disk and
574 parsed as if they had been retrieved from the Web.
576 @cindex continue retrieval
577 @cindex incomplete downloads
578 @cindex resume download
581 Continue getting a partially-downloaded file. This is useful when you
582 want to finish up a download started by a previous instance of Wget, or
583 by another program. For instance:
586 wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
589 If there is a file named @file{ls-lR.Z} in the current directory, Wget
590 will assume that it is the first portion of the remote file, and will
591 ask the server to continue the retrieval from an offset equal to the
592 length of the local file.
594 Note that you don't need to specify this option if you just want the
595 current invocation of Wget to retry downloading a file should the
596 connection be lost midway through. This is the default behavior.
597 @samp{-c} only affects resumption of downloads started @emph{prior} to
598 this invocation of Wget, and whose local files are still sitting around.
600 Without @samp{-c}, the previous example would just download the remote
file to @file{ls-lR.Z.1}, leaving the truncated @file{ls-lR.Z} file
alone.
604 Beginning with Wget 1.7, if you use @samp{-c} on a non-empty file, and
605 it turns out that the server does not support continued downloading,
606 Wget will refuse to start the download from scratch, which would
607 effectively ruin existing contents. If you really want the download to
608 start from scratch, remove the file.
610 Also beginning with Wget 1.7, if you use @samp{-c} on a file which is of
611 equal size as the one on the server, Wget will refuse to download the
612 file and print an explanatory message. The same happens when the file
613 is smaller on the server than locally (presumably because it was changed
614 on the server since your last download attempt)---because ``continuing''
615 is not meaningful, no download occurs.
617 On the other side of the coin, while using @samp{-c}, any file that's
618 bigger on the server than locally will be considered an incomplete
619 download and only @code{(length(remote) - length(local))} bytes will be
620 downloaded and tacked onto the end of the local file. This behavior can
621 be desirable in certain cases---for instance, you can use @samp{wget -c}
622 to download just the new portion that's been appended to a data
623 collection or log file.
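A sketch of that use, with a hypothetical log file:

@example
wget -c http://host/access.log
@end example

@noindent
Each run then fetches only the bytes appended on the server since the
previous run.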
625 However, if the file is bigger on the server because it's been
626 @emph{changed}, as opposed to just @emph{appended} to, you'll end up
627 with a garbled file. Wget has no way of verifying that the local file
628 is really a valid prefix of the remote file. You need to be especially
629 careful of this when using @samp{-c} in conjunction with @samp{-r},
since every file will be considered as an ``incomplete download''
candidate.
632 Another instance where you'll get a garbled file if you try to use
633 @samp{-c} is if you have a lame @sc{http} proxy that inserts a
634 ``transfer interrupted'' string into the local file. In the future a
635 ``rollback'' option may be added to deal with this case.
637 Note that @samp{-c} only works with @sc{ftp} servers and with @sc{http}
638 servers that support the @code{Range} header.
640 @cindex progress indicator
642 @item --progress=@var{type}
643 Select the type of the progress indicator you wish to use. Legal
644 indicators are ``dot'' and ``bar''.
The ``bar'' indicator is used by default.  It draws an @sc{ascii}
progress bar (a.k.a.@: ``thermometer'' display) indicating the status
of retrieval.  If the output is not a TTY, the ``dot'' indicator will
be used by default.
651 Use @samp{--progress=dot} to switch to the ``dot'' display. It traces
652 the retrieval by printing dots on the screen, each dot representing a
653 fixed amount of downloaded data.
655 When using the dotted retrieval, you may also set the @dfn{style} by
656 specifying the type as @samp{dot:@var{style}}. Different styles assign
657 different meaning to one dot. With the @code{default} style each dot
658 represents 1K, there are ten dots in a cluster and 50 dots in a line.
The @code{binary} style has a more ``computer''-like orientation---8K
dots, 16-dot clusters and 48 dots per line (so each line represents
384K).  The @code{mega} style is suitable for downloading very large
662 files---each dot represents 64K retrieved, there are eight dots in a
663 cluster, and 48 dots on each line (so each line contains 3M).
665 Note that you can set the default style using the @code{progress}
666 command in @file{.wgetrc}. That setting may be overridden from the
667 command line. The exception is that, when the output is not a TTY, the
668 ``dot'' progress will be favored over ``bar''. To force the bar output,
669 use @samp{--progress=bar:force}.
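For example, a sketch that picks the @code{mega} dot style for a large
(hypothetical) download:

@example
wget --progress=dot:mega http://host/big-file.iso
@end example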
672 @itemx --timestamping
673 Turn on time-stamping. @xref{Time-Stamping}, for details.
675 @cindex server response, print
677 @itemx --server-response
Print the headers sent by @sc{http} servers and responses sent by
@sc{ftp} servers.
681 @cindex Wget as spider
684 When invoked with this option, Wget will behave as a Web @dfn{spider},
685 which means that it will not download the pages, just check that they
686 are there. For example, you can use Wget to check your bookmarks:
689 wget --spider --force-html -i bookmarks.html
692 This feature needs much more work for Wget to get close to the
693 functionality of real web spiders.
697 @itemx --timeout=@var{seconds}
698 Set the network timeout to @var{seconds} seconds. This is equivalent
699 to specifying @samp{--dns-timeout}, @samp{--connect-timeout}, and
700 @samp{--read-timeout}, all at the same time.
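In other words, this sketch:

@example
wget --timeout=30 http://fly.srk.fer.hr/
@end example

@noindent
behaves like:

@example
wget --dns-timeout=30 --connect-timeout=30 \
     --read-timeout=30 http://fly.srk.fer.hr/
@end example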
702 Whenever Wget connects to or reads from a remote host, it checks for a
703 timeout and aborts the operation if the time expires. This prevents
704 anomalous occurrences such as hanging reads or infinite connects. The
705 only timeout enabled by default is a 900-second timeout for reading.
706 Setting timeout to 0 disables checking for timeouts.
708 Unless you know what you are doing, it is best not to set any of the
709 timeout-related options.
713 @item --dns-timeout=@var{seconds}
714 Set the DNS lookup timeout to @var{seconds} seconds. DNS lookups that
715 don't complete within the specified time will fail. By default, there
is no timeout on DNS lookups, other than that implemented by system
libraries.
719 @cindex connect timeout
720 @cindex timeout, connect
721 @item --connect-timeout=@var{seconds}
722 Set the connect timeout to @var{seconds} seconds. TCP connections that
723 take longer to establish will be aborted. By default, there is no
724 connect timeout, other than that implemented by system libraries.
727 @cindex timeout, read
728 @item --read-timeout=@var{seconds}
729 Set the read (and write) timeout to @var{seconds} seconds. Reads that
take longer will fail.  The default value for read timeout is 900
seconds.
733 @cindex bandwidth, limit
735 @cindex limit bandwidth
736 @item --limit-rate=@var{amount}
737 Limit the download speed to @var{amount} bytes per second. Amount may
738 be expressed in bytes, kilobytes with the @samp{k} suffix, or megabytes
739 with the @samp{m} suffix. For example, @samp{--limit-rate=20k} will
740 limit the retrieval rate to 20KB/s. This kind of thing is useful when,
for whatever reason, you don't want Wget to consume the entire
available bandwidth.
744 Note that Wget implements the limiting by sleeping the appropriate
745 amount of time after a network read that took less time than specified
746 by the rate. Eventually this strategy causes the TCP transfer to slow
747 down to approximately the specified rate. However, it may take some
748 time for this balance to be achieved, so don't be surprised if limiting
749 the rate doesn't work well with very small files.
753 @item -w @var{seconds}
754 @itemx --wait=@var{seconds}
755 Wait the specified number of seconds between the retrievals. Use of
756 this option is recommended, as it lightens the server load by making the
757 requests less frequent. Instead of in seconds, the time can be
specified in minutes using the @code{m} suffix, in hours using the
@code{h} suffix, or in days using the @code{d} suffix.
761 Specifying a large value for this option is useful if the network or the
762 destination host is down, so that Wget can wait long enough to
763 reasonably expect the network error to be fixed before the retry.
765 @cindex retries, waiting between
766 @cindex waiting between retries
767 @item --waitretry=@var{seconds}
768 If you don't want Wget to wait between @emph{every} retrieval, but only
769 between retries of failed downloads, you can use this option. Wget will
770 use @dfn{linear backoff}, waiting 1 second after the first failure on a
771 given file, then waiting 2 seconds after the second failure on that
772 file, up to the maximum number of @var{seconds} you specify. Therefore,
a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) =
55 seconds per file.
Note that this option is turned on by default in the global
@file{wgetrc} file.
782 Some web sites may perform log analysis to identify retrieval programs
783 such as Wget by looking for statistically significant similarities in
784 the time between requests. This option causes the time between requests
785 to vary between 0 and 2 * @var{wait} seconds, where @var{wait} was
786 specified using the @samp{--wait} option, in order to mask Wget's
787 presence from such analysis.
789 A recent article in a publication devoted to development on a popular
790 consumer platform provided code to perform this analysis on the fly.
791 Its author suggested blocking at the class C address level to ensure
automated retrieval programs were blocked despite changing
DHCP-supplied addresses.
795 The @samp{--random-wait} option was inspired by this ill-advised
recommendation to block many unrelated users from a web site due to the
actions of one.
801 @itemx --proxy=on/off
802 Turn proxy support on or off. The proxy is on by default if the
803 appropriate environment variable is defined.
For more information about the use of proxies with Wget, see
@ref{Proxies}.
809 @itemx --quota=@var{quota}
810 Specify download quota for automatic retrievals. The value can be
811 specified in bytes (default), kilobytes (with @samp{k} suffix), or
812 megabytes (with @samp{m} suffix).
814 Note that quota will never affect downloading a single file. So if you
815 specify @samp{wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz}, all of the
816 @file{ls-lR.gz} will be downloaded. The same goes even when several
817 @sc{url}s are specified on the command-line. However, quota is
818 respected when retrieving either recursively, or from an input file.
819 Thus you may safely type @samp{wget -Q2m -i sites}---download will be
820 aborted when the quota is exceeded.
822 Setting quota to 0 or to @samp{inf} unlimits the download quota.
825 @cindex caching of DNS lookups
827 Turn off caching of DNS lookups. Normally, Wget remembers the IP
828 addresses it looked up from DNS so it doesn't have to repeatedly
829 contact the DNS server for the same (typically small) set of hosts it
retrieves from.  This cache exists in memory only; a new Wget run will
contact DNS again.
833 However, it has been reported that in some situations it is not
834 desirable to cache host names, even for the duration of a
835 short-running application like Wget. With this option Wget issues a
836 new DNS lookup (more precisely, a new call to @code{gethostbyname} or
837 @code{getaddrinfo}) each time it makes a new connection. Please note
838 that this option will @emph{not} affect caching that might be
performed by the resolving library or by an external caching layer,
such as NSCD.
If you don't understand exactly what this option does, you probably
won't need it.
845 @cindex file names, restrict
846 @cindex Windows file names
847 @item --restrict-file-names=@var{mode}
848 Change which characters found in remote URLs may show up in local file
849 names generated from those URLs. Characters that are @dfn{restricted}
850 by this option are escaped, i.e. replaced with @samp{%HH}, where
@samp{HH} is the hexadecimal number that corresponds to the restricted
character.
854 By default, Wget escapes the characters that are not valid as part of
855 file names on your operating system, as well as control characters that
856 are typically unprintable. This option is useful for changing these
857 defaults, either because you are downloading to a non-native partition,
858 or because you want to disable escaping of the control characters.
860 When mode is set to ``unix'', Wget escapes the character @samp{/} and
861 the control characters in the ranges 0--31 and 128--159. This is the
default on Unix-like operating systems.
864 When mode is set to ``windows'', Wget escapes the characters @samp{\},
865 @samp{|}, @samp{/}, @samp{:}, @samp{?}, @samp{"}, @samp{*}, @samp{<},
866 @samp{>}, and the control characters in the ranges 0--31 and 128--159.
867 In addition to this, Wget in Windows mode uses @samp{+} instead of
868 @samp{:} to separate host and port in local file names, and uses
869 @samp{@@} instead of @samp{?} to separate the query portion of the file
870 name from the rest. Therefore, a URL that would be saved as
871 @samp{www.xemacs.org:4300/search.pl?input=blah} in Unix mode would be
872 saved as @samp{www.xemacs.org+4300/search.pl@@input=blah} in Windows
873 mode. This mode is the default on Windows.
875 If you append @samp{,nocontrol} to the mode, as in
876 @samp{unix,nocontrol}, escaping of the control characters is also
877 switched off. You can use @samp{--restrict-file-names=nocontrol} to
878 turn off escaping of control characters without affecting the choice of
879 the OS to use as file name restriction mode.
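For example, using the @sc{url} from above (a sketch):

@example
wget --restrict-file-names=windows \
     'http://www.xemacs.org:4300/search.pl?input=blah'
@end example

@noindent
This saves the file as @file{www.xemacs.org+4300/search.pl@@input=blah}
even on a Unix-like system.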
886 Force connecting to IPv4 or IPv6 addresses. With @samp{--inet4-only}
887 or @samp{-4}, Wget will only connect to IPv4 hosts, ignoring AAAA
888 records in DNS, and refusing to connect to IPv6 addresses specified in
889 URLs. Conversely, with @samp{--inet6-only} or @samp{-6}, Wget will
890 only connect to IPv6 hosts and ignore A records and IPv4 addresses.
Neither option should be needed normally.  By default, an IPv6-aware
893 Wget will use the address family specified by the host's DNS record.
894 If the DNS specifies both an A record and an AAAA record, Wget will
895 try them in sequence until it finds one it can connect to.
897 These options can be used to deliberately force the use of IPv4 or
898 IPv6 address families on dual family systems, usually to aid debugging
899 or to deal with broken network configuration. Only one of
900 @samp{--inet6-only} and @samp{--inet4-only} may be specified in the
same command.  Neither option is available in Wget compiled without
IPv6 support.
905 @node Directory Options
906 @section Directory Options
910 @itemx --no-directories
911 Do not create a hierarchy of directories when retrieving recursively.
912 With this option turned on, all files will get saved to the current
913 directory, without clobbering (if a name shows up more than once, the
914 filenames will get extensions @samp{.n}).
917 @itemx --force-directories
918 The opposite of @samp{-nd}---create a hierarchy of directories, even if
919 one would not have been created otherwise. E.g. @samp{wget -x
920 http://fly.srk.fer.hr/robots.txt} will save the downloaded file to
921 @file{fly.srk.fer.hr/robots.txt}.
924 @itemx --no-host-directories
925 Disable generation of host-prefixed directories. By default, invoking
926 Wget with @samp{-r http://fly.srk.fer.hr/} will create a structure of
directories beginning with @file{fly.srk.fer.hr/}.  This option
disables such behavior.
930 @item --protocol-directories
931 Use the protocol name as a directory component of local file names. For
932 example, with this option, @samp{wget -r http://@var{host}} will save to
933 @samp{http/@var{host}/...} rather than just to @samp{@var{host}/...}.
940 @cindex cut directories
941 @item --cut-dirs=@var{number}
942 Ignore @var{number} directory components. This is useful for getting a
fine-grained control over the directory where recursive retrieval will
be saved.
946 Take, for example, the directory at
947 @samp{ftp://ftp.xemacs.org/pub/xemacs/}. If you retrieve it with
948 @samp{-r}, it will be saved locally under
949 @file{ftp.xemacs.org/pub/xemacs/}. While the @samp{-nH} option can
950 remove the @file{ftp.xemacs.org/} part, you are still stuck with
951 @file{pub/xemacs}. This is where @samp{--cut-dirs} comes in handy; it
952 makes Wget not ``see'' @var{number} remote directory components. Here
are several examples of how the @samp{--cut-dirs} option works.
No options        -> ftp.xemacs.org/pub/xemacs/
-nH               -> pub/xemacs/
-nH --cut-dirs=1  -> xemacs/
-nH --cut-dirs=2  -> .

--cut-dirs=1      -> ftp.xemacs.org/xemacs/
967 If you just want to get rid of the directory structure, this option is
968 similar to a combination of @samp{-nd} and @samp{-P}. However, unlike
@samp{-nd}, @samp{--cut-dirs} does not lose subdirectories---for
instance, with @samp{-nH --cut-dirs=1}, a @file{beta/} subdirectory
will be saved to @file{xemacs/beta}, as one would expect.
973 @cindex directory prefix
974 @item -P @var{prefix}
975 @itemx --directory-prefix=@var{prefix}
976 Set directory prefix to @var{prefix}. The @dfn{directory prefix} is the
977 directory where all other files and subdirectories will be saved to,
i.e.@: the top of the retrieval tree.  The default is @samp{.} (the
current directory).
983 @section HTTP Options
986 @cindex .html extension
988 @itemx --html-extension
989 If a file of type @samp{application/xhtml+xml} or @samp{text/html} is
990 downloaded and the URL does not end with the regexp
991 @samp{\.[Hh][Tt][Mm][Ll]?}, this option will cause the suffix @samp{.html}
992 to be appended to the local filename. This is useful, for instance, when
993 you're mirroring a remote site that uses @samp{.asp} pages, but you want
994 the mirrored pages to be viewable on your stock Apache server. Another
995 good use for this is when you're downloading CGI-generated materials. A URL
996 like @samp{http://site.com/article.cgi?25} will be saved as
997 @file{article.cgi?25.html}.
999 Note that filenames changed in this way will be re-downloaded every time
1000 you re-mirror a site, because Wget can't tell that the local
1001 @file{@var{X}.html} file corresponds to remote URL @samp{@var{X}} (since
1002 it doesn't yet know that the URL produces output of type
@samp{text/html} or @samp{application/xhtml+xml}).  To prevent this
1004 re-downloading, you must use @samp{-k} and @samp{-K} so that the original
1005 version of the file will be saved as @file{@var{X}.orig} (@pxref{Recursive
1006 Retrieval Options}).
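A sketch that combines these options when mirroring such a site (host
hypothetical):

@example
wget -r --html-extension -k -K http://site.com/
@end example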
1009 @cindex http password
1010 @cindex authentication
1011 @item --http-user=@var{user}
1012 @itemx --http-passwd=@var{password}
1013 Specify the username @var{user} and password @var{password} on an
1014 @sc{http} server. According to the type of the challenge, Wget will
1015 encode them using either the @code{basic} (insecure) or the
1016 @code{digest} authentication scheme.
1018 Another way to specify username and password is in the @sc{url} itself
1019 (@pxref{URL Format}). Either method reveals your password to anyone who
1020 bothers to run @code{ps}. To prevent the passwords from being seen,
1021 store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
1022 those files from other users with @code{chmod}. If the passwords are
1023 really important, do not leave them lying in those files either---edit
1024 the files and delete them after Wget has started the download.
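For instance, a @file{.wgetrc} protected with @samp{chmod 600} might
carry the credentials instead (the wgetrc command names mirror the
option names; values hypothetical):

@example
http_user = foo
http_passwd = bar
@end example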
For more information about security issues with Wget, see
@ref{Security Considerations}.
1032 Disable server-side cache. In this case, Wget will send the remote
1033 server an appropriate directive (@samp{Pragma: no-cache}) to get the
1034 file from the remote service, rather than returning the cached version.
1035 This is especially useful for retrieving and flushing out-of-date
1036 documents on proxy servers.
1038 Caching is allowed by default.
1042 Disable the use of cookies. Cookies are a mechanism for maintaining
1043 server-side state. The server sends the client a cookie using the
1044 @code{Set-Cookie} header, and the client responds with the same cookie
upon further requests.  Since cookies allow server owners to keep track
of visitors and sites to exchange this information, some
1047 consider them a breach of privacy. The default is to use cookies;
1048 however, @emph{storing} cookies is not on by default.
1050 @cindex loading cookies
1051 @cindex cookies, loading
1052 @item --load-cookies @var{file}
1053 Load cookies from @var{file} before the first HTTP retrieval.
1054 @var{file} is a textual file in the format originally used by Netscape's
1055 @file{cookies.txt} file.
1057 You will typically use this option when mirroring sites that require
1058 that you be logged in to access some or all of their content. The login
1059 process typically works by the web server issuing an @sc{http} cookie
1060 upon receiving and verifying your credentials. The cookie is then
1061 resent by the browser when accessing that part of the site, and so
1062 proves your identity.
1064 Mirroring such a site requires Wget to send the same cookies your
1065 browser sends when communicating with the site. This is achieved by
1066 @samp{--load-cookies}---simply point Wget to the location of the
1067 @file{cookies.txt} file, and it will send the same cookies your browser
1068 would send in the same situation. Different browsers keep textual
1069 cookie files in different locations:
1073 The cookies are in @file{~/.netscape/cookies.txt}.
1075 @item Mozilla and Netscape 6.x.
1076 Mozilla's cookie file is also named @file{cookies.txt}, located
1077 somewhere under @file{~/.mozilla}, in the directory of your profile.
1078 The full path usually ends up looking somewhat like
1079 @file{~/.mozilla/default/@var{some-weird-string}/cookies.txt}.
1081 @item Internet Explorer.
1082 You can produce a cookie file Wget can use by using the File menu,
1083 Import and Export, Export Cookies. This has been tested with Internet
1084 Explorer 5; it is not guaranteed to work with earlier versions.
1086 @item Other browsers.
1087 If you are using a different browser to create your cookies,
1088 @samp{--load-cookies} will only work if you can locate or produce a
1089 cookie file in the Netscape format that Wget expects.
1092 If you cannot use @samp{--load-cookies}, there might still be an
1093 alternative. If your browser supports a ``cookie manager'', you can use
1094 it to view the cookies used when accessing the site you're mirroring.
1095 Write down the name and value of the cookie, and manually instruct Wget
1096 to send those cookies, bypassing the ``official'' cookie support:
1099 wget --no-cookies --header "Cookie: @var{name}=@var{value}"
1102 @cindex saving cookies
1103 @cindex cookies, saving
1104 @item --save-cookies @var{file}
1105 Save cookies to @var{file} before exiting. This will not save cookies
1106 that have expired or that have no expiry time (so-called ``session
1107 cookies''), but also see @samp{--keep-session-cookies}.
1109 @cindex cookies, session
1110 @cindex session cookies
1111 @item --keep-session-cookies
1113 When specified, causes @samp{--save-cookies} to also save session
cookies.  Session cookies are normally not saved because they are
1115 supposed to be forgotten when you exit the browser. Saving them is
1116 useful on sites that require you to log in or to visit the home page
1117 before you can access some pages. With this option, multiple Wget runs
1118 are considered a single browser session as far as the site is concerned.
1120 Since the cookie file format does not normally carry session cookies,
1121 Wget marks them with an expiry timestamp of 0. Wget's
1122 @samp{--load-cookies} recognizes those as session cookies, but it might
1123 confuse other browsers. Also note that cookies so loaded will be
1124 treated as other session cookies, which means that if you want
1125 @samp{--save-cookies} to preserve them again, you must use
1126 @samp{--keep-session-cookies} again.
1128 @cindex Content-Length, ignore
1129 @cindex ignore length
1130 @item --ignore-length
1131 Unfortunately, some @sc{http} servers (@sc{cgi} programs, to be more
1132 precise) send out bogus @code{Content-Length} headers, which makes Wget
1133 go wild, as it thinks not all the document was retrieved. You can spot
1134 this syndrome if Wget retries getting the same document again and again,
each time claiming that the (otherwise normal) connection has closed on
the very same byte.
1138 With this option, Wget will ignore the @code{Content-Length} header---as
1139 if it never existed.
1142 @item --header=@var{additional-header}
1143 Define an @var{additional-header} to be passed to the @sc{http} servers.
1144 Headers must contain a @samp{:} preceded by one or more non-blank
1145 characters, and must not contain newlines.
1147 You may define more than one additional header by specifying
1148 @samp{--header} more than once.
1152 wget --header='Accept-Charset: iso-8859-2' \
1153 --header='Accept-Language: hr' \
1154 http://fly.srk.fer.hr/
1158 Specification of an empty string as the header value will clear all
1159 previous user-defined headers.
1162 @cindex proxy password
1163 @cindex proxy authentication
1164 @item --proxy-user=@var{user}
1165 @itemx --proxy-passwd=@var{password}
1166 Specify the username @var{user} and password @var{password} for
1167 authentication on a proxy server. Wget will encode them using the
1168 @code{basic} authentication scheme.
1170 Security considerations similar to those with @samp{--http-passwd}
1171 pertain here as well.
1173 @cindex http referer
1174 @cindex referer, http
1175 @item --referer=@var{url}
Include the `Referer: @var{url}' header in the HTTP request.  Useful for
1177 retrieving documents with server-side processing that assume they are
1178 always being retrieved by interactive web browsers and only come out
1179 properly when Referer is set to one of the pages that point to them.
1181 @cindex server response, save
1182 @item --save-headers
1183 Save the headers sent by the @sc{http} server to the file, preceding the
1184 actual contents, with an empty line as the separator.
1187 @item -U @var{agent-string}
1188 @itemx --user-agent=@var{agent-string}
1189 Identify as @var{agent-string} to the @sc{http} server.
1191 The @sc{http} protocol allows the clients to identify themselves using a
1192 @code{User-Agent} header field. This enables distinguishing the
1193 @sc{www} software, usually for statistical purposes or for tracing of
1194 protocol violations. Wget normally identifies as
@samp{Wget/@var{version}}, @var{version} being the current version
number of Wget.
1198 However, some sites have been known to impose the policy of tailoring
1199 the output according to the @code{User-Agent}-supplied information.
1200 While conceptually this is not such a bad idea, it has been abused by
1201 servers denying information to clients other than @code{Mozilla} or
1202 Microsoft @code{Internet Explorer}. This option allows you to change
1203 the @code{User-Agent} line issued by Wget. Use of this option is
1204 discouraged, unless you really know what you are doing.
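If you do have such a need, a sketch (agent string hypothetical):

@example
wget --user-agent='Mozilla/5.0 (compatible)' http://fly.srk.fer.hr/
@end example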
1207 @item --post-data=@var{string}
1208 @itemx --post-file=@var{file}
1209 Use POST as the method for all HTTP requests and send the specified data
1210 in the request body. @code{--post-data} sends @var{string} as data,
1211 whereas @code{--post-file} sends the contents of @var{file}. Other than
1212 that, they work in exactly the same way.
1214 Please be aware that Wget needs to know the size of the POST data in
1215 advance. Therefore the argument to @code{--post-file} must be a regular
1216 file; specifying a FIFO or something like @file{/dev/stdin} won't work.
1217 It's not quite clear how to work around this limitation inherent in
1218 HTTP/1.0. Although HTTP/1.1 introduces @dfn{chunked} transfer that
1219 doesn't require knowing the request length in advance, a client can't
1220 use chunked unless it knows it's talking to an HTTP/1.1 server. And it
1221 can't know that until it receives a response, which in turn requires the
request to have been completed---a chicken-and-egg problem.
1224 Note: if Wget is redirected after the POST request is completed, it will
1225 not send the POST data to the redirected URL. This is because URLs that
1226 process POST often respond with a redirection to a regular page
1227 (although that's technically disallowed), which does not desire or
1228 accept POST. It is not yet clear that this behavior is optimal; if it
1229 doesn't work out, it will be changed.
This example shows how to log in to a server using POST and then
proceed to download the desired pages, presumably only accessible to
authorized users:
1237 # @r{Log in to the server. This can be done only once.}
1238 wget --save-cookies cookies.txt \
1239 --post-data 'user=foo&password=bar' \
1240 http://server.com/auth.php
1242 # @r{Now grab the page or pages we care about.}
1243 wget --load-cookies cookies.txt \
1244 -p http://server.com/interesting/article.php
1249 @node HTTPS (SSL/TLS) Options
1250 @section HTTPS (SSL/TLS) Options
1253 To support SSL-based HTTP (HTTPS) downloads, Wget must be compiled
1254 with an external SSL library, currently OpenSSL. If Wget is compiled
1255 without SSL support, none of these options are available.
1258 @item --sslcertfile=@var{file}
1259 Use the client certificate stored in @var{file}. This is needed for
1260 servers that are configured to require certificates from the clients
that connect to them.  Normally a certificate is not required and this
switch is optional.
1264 @cindex SSL certificate
1265 @item --sslcertkey=@var{keyfile}
1266 Read the certificate key from @var{keyfile}.
1268 @cindex SSL certificate authority
1269 @item --sslcadir=@var{directory}
Specifies the directory used for certificate authorities (``CA'').
1272 @item --sslcafile=@var{file}
1273 Use @var{file} as the file with the bundle of certificate authorities.
1275 @cindex SSL certificate type, specify
1276 @item --sslcerttype=0/1
1277 Specify the type of the client certificate: 0 means @code{PEM}
1278 (default), 1 means @code{ASN1} (@code{DER}).
1280 @cindex SSL certificate, check
1281 @item --sslcheckcert=0/1
If set to 1, check the server certificate against the specified
certificate authorities, breaking the SSL handshake if the certificate
is not valid.  If set to 0 (the default), the certificate is not
checked.
1286 @cindex SSL protocol, choose
1287 @item --sslprotocol=0-3
1288 Choose the SSL protocol to be used. If 0 is specified (the default),
1289 the OpenSSL library chooses the appropriate protocol automatically.
1290 Specifying 1 forces the use of SSLv2, specifying 2 forces SSLv3, and
1291 specifying 3 forces TLSv1.
1293 In most cases the OpenSSL library is capable of making an intelligent
1294 choice of the protocol, but there have been reports of sites that use
1295 old (and presumably buggy) server libraries with which a protocol has
1296 to be specified manually.
1299 @item --egd-file=@var{file}
1300 Use @var{file} as the EGD socket. EGD stands for @dfn{Entropy
1301 Gathering Daemon}, a user-space program that collects data from
1302 various unpredictable system sources and makes it available to other
1303 programs that might need it. Encryption software, such as the SSL
1304 library, needs sources of non-repeating randomness to seed the random
1305 number generator used to produce cryptographically strong keys.
1307 OpenSSL allows the user to specify his own source of entropy using the
1308 @code{RAND_FILE} environment variable. If this variable is unset, or
1309 if the specified file does not produce enough randomness, OpenSSL will
read random data from the EGD socket specified using this option.
1312 If this option is not specified (and the equivalent startup command is
1313 not used), EGD is never contacted. EGD is not needed on modern Unix
1314 systems that support @file{/dev/random}.
1318 @section FTP Options
1321 @cindex password, FTP
1322 @item --ftp-passwd=@var{string}
1323 Set the default FTP password to @var{string}. Without this, or the
1324 corresponding startup option, the password defaults to @samp{-wget@@},
1325 normally used for anonymous FTP.
1327 @cindex .listing files, removing
1328 @item --no-remove-listing
1329 Don't remove the temporary @file{.listing} files generated by @sc{ftp}
1330 retrievals. Normally, these files contain the raw directory listings
1331 received from @sc{ftp} servers. Not removing them can be useful for
1332 debugging purposes, or when you want to be able to easily check on the
1333 contents of remote server directories (e.g. to verify that a mirror
1334 you're running is complete).
1336 Note that even though Wget writes to a known filename for this file,
1337 this is not a security hole in the scenario of a user making
1338 @file{.listing} a symbolic link to @file{/etc/passwd} or something and
1339 asking @code{root} to run Wget in his or her directory. Depending on
1340 the options used, either Wget will refuse to write to @file{.listing},
1341 making the globbing/recursion/time-stamping operation fail, or the
1342 symbolic link will be deleted and replaced with the actual
1343 @file{.listing} file, or the listing will be written to a
1344 @file{.listing.@var{number}} file.
Even though this situation isn't a problem, @code{root} should
1347 never run Wget in a non-trusted user's directory. A user could do
1348 something as simple as linking @file{index.html} to @file{/etc/passwd}
1349 and asking @code{root} to run Wget with @samp{-N} or @samp{-r} so the file
1350 will be overwritten.
1352 @cindex globbing, toggle
1354 Turn off @sc{ftp} globbing. Globbing refers to the use of shell-like
1355 special characters (@dfn{wildcards}), like @samp{*}, @samp{?}, @samp{[}
and @samp{]} to retrieve more than one file from the same directory at
once, like:
1360 wget ftp://gnjilux.srk.fer.hr/*.msg
1363 By default, globbing will be turned on if the @sc{url} contains a
globbing character.  This option may be used to turn globbing on or off
permanently.
1367 You may have to quote the @sc{url} to protect it from being expanded by
1368 your shell. Globbing makes Wget look for a directory listing, which is
1369 system-specific. This is why it currently works only with Unix @sc{ftp}
1370 servers (and the ones emulating Unix @code{ls} output).
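For example, quoting the @sc{url} from above keeps the shell from
expanding the wildcard itself:

@example
wget 'ftp://gnjilux.srk.fer.hr/*.msg'
@end example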
1373 @item --no-passive-ftp
1374 Disable the use of the @dfn{passive} FTP transfer mode. Passive FTP
1375 mandates that the client connect to the server to establish the data
1376 connection rather than the other way around.
1378 If the machine is connected to the Internet directly, both passive and
1379 active FTP should work equally well. Behind most firewall and NAT
1380 configurations passive FTP has a better chance of working. However,
1381 in some rare firewall configurations, active FTP actually works when
1382 passive FTP doesn't. If you suspect this to be the case, use this
1383 option, or set @code{passive_ftp=off} in your init file.
1385 @cindex symbolic links, retrieving
1386 @item --retr-symlinks
1387 Usually, when retrieving @sc{ftp} directories recursively and a symbolic
1388 link is encountered, the linked-to file is not downloaded. Instead, a
1389 matching symbolic link is created on the local filesystem. The
1390 pointed-to file will not be downloaded unless this recursive retrieval
1391 would have encountered it separately and downloaded it anyway.
1393 When @samp{--retr-symlinks} is specified, however, symbolic links are
1394 traversed and the pointed-to files are retrieved. At this time, this
1395 option does not cause Wget to traverse symlinks to directories and
recurse through them, but in the future it should be enhanced to do
this.
1399 Note that when retrieving a file (not a directory) because it was
1400 specified on the command-line, rather than because it was recursed to,
this option has no effect.  Symbolic links are always traversed in this
case.
1404 @cindex Keep-Alive, turning off
1405 @cindex Persistent Connections, disabling
1406 @item --no-http-keep-alive
1407 Turn off the ``keep-alive'' feature for HTTP downloads. Normally, Wget
1408 asks the server to keep the connection open so that, when you download
1409 more than one document from the same server, they get transferred over
1410 the same TCP connection. This saves time and at the same time reduces
1411 the load on the server.
1413 This option is useful when, for some reason, persistent (keep-alive)
1414 connections don't work for you, for example due to a server bug or due
1415 to the inability of server-side scripts to cope with the connections.
1418 @node Recursive Retrieval Options
1419 @section Recursive Retrieval Options
Turn on recursive retrieving.  @xref{Recursive Download}, for more
details.
1427 @item -l @var{depth}
1428 @itemx --level=@var{depth}
1429 Specify recursion maximum depth level @var{depth} (@pxref{Recursive
1430 Download}). The default maximum depth is 5.
1432 @cindex proxy filling
1433 @cindex delete after retrieval
1434 @cindex filling proxy cache
1435 @item --delete-after
1436 This option tells Wget to delete every single file it downloads,
1437 @emph{after} having done so. It is useful for pre-fetching popular
1438 pages through a proxy, e.g.:
1441 wget -r -nd --delete-after http://whatever.com/~popular/page/
The @samp{-r} option is to retrieve recursively, and @samp{-nd} to not
create directories.
1447 Note that @samp{--delete-after} deletes files on the local machine. It
1448 does not issue the @samp{DELE} command to remote FTP sites, for
1449 instance. Also note that when @samp{--delete-after} is specified,
1450 @samp{--convert-links} is ignored, so @samp{.orig} files are simply not
1451 created in the first place.
1453 @cindex conversion of links
1454 @cindex link conversion
1456 @itemx --convert-links
1457 After the download is complete, convert the links in the document to
1458 make them suitable for local viewing. This affects not only the visible
1459 hyperlinks, but any part of the document that links to external content,
such as embedded images, links to style sheets, hyperlinks to
non-@sc{html} content, etc.
Each link will be changed in one of two ways:
1467 The links to files that have been downloaded by Wget will be changed to
1468 refer to the file they point to as a relative link.
1470 Example: if the downloaded file @file{/foo/doc.html} links to
1471 @file{/bar/img.gif}, also downloaded, then the link in @file{doc.html}
1472 will be modified to point to @samp{../bar/img.gif}. This kind of
1473 transformation works reliably for arbitrary combinations of directories.
1476 The links to files that have not been downloaded by Wget will be changed
to include the host name and absolute path of the location they point to.
1479 Example: if the downloaded file @file{/foo/doc.html} links to
1480 @file{/bar/img.gif} (or to @file{../bar/img.gif}), then the link in
1481 @file{doc.html} will be modified to point to
1482 @file{http://@var{hostname}/bar/img.gif}.
1485 Because of this, local browsing works reliably: if a linked file was
1486 downloaded, the link will refer to its local name; if it was not
1487 downloaded, the link will refer to its full Internet address rather than
1488 presenting a broken link. The fact that the former links are converted
1489 to relative links ensures that you can move the downloaded hierarchy to
1492 Note that only at the end of the download can Wget know which links have
1493 been downloaded. Because of that, the work done by @samp{-k} will be
1494 performed at the end of all the downloads.
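
As a minimal sketch, recursion combined with link conversion looks like
this (the @sc{url} is illustrative):

@example
wget -r -k http://@var{site}/
@end example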
1496 @cindex backing up converted files
1498 @itemx --backup-converted
1499 When converting a file, back up the original version with a @samp{.orig}
1500 suffix. Affects the behavior of @samp{-N} (@pxref{HTTP Time-Stamping
1505 Turn on options suitable for mirroring. This option turns on recursion
1506 and time-stamping, sets infinite recursion depth and keeps @sc{ftp}
1507 directory listings. It is currently equivalent to
1508 @samp{-r -N -l inf --no-remove-listing}.
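
That is, these two invocations should behave identically (the @sc{url}
is illustrative):

@example
wget -m http://@var{site}/
wget -r -N -l inf --no-remove-listing http://@var{site}/
@end example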
1510 @cindex page requisites
1511 @cindex required images, downloading
1513 @itemx --page-requisites
1514 This option causes Wget to download all the files that are necessary to
1515 properly display a given @sc{html} page. This includes such things as
1516 inlined images, sounds, and referenced stylesheets.
1518 Ordinarily, when downloading a single @sc{html} page, any requisite documents
1519 that may be needed to display it properly are not downloaded. Using
1520 @samp{-r} together with @samp{-l} can help, but since Wget does not
1521 ordinarily distinguish between external and inlined documents, one is
1522 generally left with ``leaf documents'' that are missing their
1525 For instance, say document @file{1.html} contains an @code{<IMG>} tag
1526 referencing @file{1.gif} and an @code{<A>} tag pointing to external
1527 document @file{2.html}. Say that @file{2.html} is similar but that its
1528 image is @file{2.gif} and it links to @file{3.html}. Say this
1529 continues up to some arbitrarily high number.
1531 If one executes the command:
1534 wget -r -l 2 http://@var{site}/1.html
1537 then @file{1.html}, @file{1.gif}, @file{2.html}, @file{2.gif}, and
1538 @file{3.html} will be downloaded. As you can see, @file{3.html} is
1539 without its requisite @file{3.gif} because Wget is simply counting the
1540 number of hops (up to 2) away from @file{1.html} in order to determine
1541 where to stop the recursion. However, with this command:
1544 wget -r -l 2 -p http://@var{site}/1.html
1547 all the above files @emph{and} @file{3.html}'s requisite @file{3.gif}
1548 will be downloaded. Similarly,
1551 wget -r -l 1 -p http://@var{site}/1.html
1554 will cause @file{1.html}, @file{1.gif}, @file{2.html}, and @file{2.gif}
1555 to be downloaded. One might think that:
1558 wget -r -l 0 -p http://@var{site}/1.html
1561 would download just @file{1.html} and @file{1.gif}, but unfortunately
1562 this is not the case, because @samp{-l 0} is equivalent to
1563 @samp{-l inf}---that is, infinite recursion. To download a single @sc{html}
1564 page (or a handful of them, all specified on the command-line or in a
1565 @samp{-i} @sc{url} input file) and its (or their) requisites, simply leave off
1566 @samp{-r} and @samp{-l}:
1569 wget -p http://@var{site}/1.html
1572 Note that Wget will behave as if @samp{-r} had been specified, but only
1573 that single page and its requisites will be downloaded. Links from that
1574 page to external documents will not be followed. Actually, to download
1575 a single page and all its requisites (even if they exist on separate
1576 websites), and make sure the lot displays properly locally, this author
1577 likes to use a few options in addition to @samp{-p}:
1580 wget -E -H -k -K -p http://@var{site}/@var{document}
1583 To finish off this topic, it's worth knowing that Wget's idea of an
1584 external document link is any URL specified in an @code{<A>} tag, an
1585 @code{<AREA>} tag, or a @code{<LINK>} tag other than @code{<LINK
1588 @cindex @sc{html} comments
1589 @cindex comments, @sc{html}
1590 @item --strict-comments
1591 Turn on strict parsing of @sc{html} comments. The default is to terminate
1592 comments at the first occurrence of @samp{-->}.
1594 According to specifications, @sc{html} comments are expressed as @sc{sgml}
@dfn{declarations}. A declaration is special markup that begins with
@samp{<!} and ends with @samp{>}, such as @samp{<!DOCTYPE ...>}, and
1597 may contain comments between a pair of @samp{--} delimiters. @sc{html}
1598 comments are ``empty declarations'', @sc{sgml} declarations without any
1599 non-comment text. Therefore, @samp{<!--foo-->} is a valid comment, and
1600 so is @samp{<!--one-- --two-->}, but @samp{<!--1--2-->} is not.
1602 On the other hand, most @sc{html} writers don't perceive comments as anything
1603 other than text delimited with @samp{<!--} and @samp{-->}, which is not
1604 quite the same. For example, something like @samp{<!------------>}
1605 works as a valid comment as long as the number of dashes is a multiple
1606 of four (!). If not, the comment technically lasts until the next
1607 @samp{--}, which may be at the other end of the document. Because of
1608 this, many popular browsers completely ignore the specification and
1609 implement what users have come to expect: comments delimited with
1610 @samp{<!--} and @samp{-->}.
1612 Until version 1.9, Wget interpreted comments strictly, which resulted in
1613 missing links in many web pages that displayed fine in browsers, but had
1614 the misfortune of containing non-compliant comments. Beginning with
version 1.9, Wget has joined the ranks of clients that implement
1616 ``naive'' comments, terminating each comment at the first occurrence of
1619 If, for whatever reason, you want strict comment parsing, use this
1620 option to turn it on.
1623 @node Recursive Accept/Reject Options
1624 @section Recursive Accept/Reject Options
1627 @item -A @var{acclist} --accept @var{acclist}
1628 @itemx -R @var{rejlist} --reject @var{rejlist}
1629 Specify comma-separated lists of file name suffixes or patterns to
1630 accept or reject (@pxref{Types of Files} for more details).
1632 @item -D @var{domain-list}
1633 @itemx --domains=@var{domain-list}
1634 Set domains to be followed. @var{domain-list} is a comma-separated list
1635 of domains. Note that it does @emph{not} turn on @samp{-H}.
1637 @item --exclude-domains @var{domain-list}
1638 Specify the domains that are @emph{not} to be followed.
1639 (@pxref{Spanning Hosts}).
1641 @cindex follow FTP links
1643 Follow @sc{ftp} links from @sc{html} documents. Without this option,
1644 Wget will ignore all the @sc{ftp} links.
1646 @cindex tag-based recursive pruning
1647 @item --follow-tags=@var{list}
1648 Wget has an internal table of @sc{html} tag / attribute pairs that it
1649 considers when looking for linked documents during a recursive
1650 retrieval. If a user wants only a subset of those tags to be
considered, however, he or she should specify such tags in a
1652 comma-separated @var{list} with this option.
1654 @item --ignore-tags=@var{list}
1655 This is the opposite of the @samp{--follow-tags} option. To skip
1656 certain @sc{html} tags when recursively looking for documents to download,
1657 specify them in a comma-separated @var{list}.
1659 In the past, this option was the best bet for downloading a single page
1660 and its requisites, using a command-line like:
1663 wget --ignore-tags=a,area -H -k -K -r http://@var{site}/@var{document}
1666 However, the author of this option came across a page with tags like
1667 @code{<LINK REL="home" HREF="/">} and came to the realization that
1668 specifying tags to ignore was not enough. One can't just tell Wget to
1669 ignore @code{<LINK>}, because then stylesheets will not be downloaded.
1670 Now the best bet for downloading a single page and its requisites is the
1671 dedicated @samp{--page-requisites} option.
1675 Enable spanning across hosts when doing recursive retrieving
1676 (@pxref{Spanning Hosts}).
1680 Follow relative links only. Useful for retrieving a specific home page
1681 without any distractions, not even those from the same hosts
1682 (@pxref{Relative Links}).
1685 @itemx --include-directories=@var{list}
1686 Specify a comma-separated list of directories you wish to follow when
downloading (@pxref{Directory-Based Limits} for more details). Elements
1688 of @var{list} may contain wildcards.
1691 @itemx --exclude-directories=@var{list}
1692 Specify a comma-separated list of directories you wish to exclude from
download (@pxref{Directory-Based Limits} for more details). Elements of
1694 @var{list} may contain wildcards.
1698 Do not ever ascend to the parent directory when retrieving recursively.
1699 This is a useful option, since it guarantees that only the files
1700 @emph{below} a certain hierarchy will be downloaded.
1701 @xref{Directory-Based Limits}, for more details.
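
For example (the @sc{url} is illustrative):

@example
wget -r --no-parent http://@var{site}/@var{dir}/
@end example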
1706 @node Recursive Download
1707 @chapter Recursive Download
1710 @cindex recursive download
1712 GNU Wget is capable of traversing parts of the Web (or a single
1713 @sc{http} or @sc{ftp} server), following links and directory structure.
We refer to this as @dfn{recursive retrieval}, or @dfn{recursion}.
1716 With @sc{http} @sc{url}s, Wget retrieves and parses the @sc{html} from
the given @sc{url}, retrieving the files the @sc{html} document refers
to, through markup like @code{href} or
1719 @code{src}. If the freshly downloaded file is also of type
1720 @code{text/html} or @code{application/xhtml+xml}, it will be parsed and
1723 Recursive retrieval of @sc{http} and @sc{html} content is
1724 @dfn{breadth-first}. This means that Wget first downloads the requested
1725 @sc{html} document, then the documents linked from that document, then the
1726 documents linked by them, and so on. In other words, Wget first
1727 downloads the documents at depth 1, then those at depth 2, and so on
1728 until the specified maximum depth.
1730 The maximum @dfn{depth} to which the retrieval may descend is specified
with the @samp{-l} option. The default maximum depth is five levels.
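
For instance, to allow the retrieval to descend three levels instead of
the default five (the @sc{url} is illustrative):

@example
wget -r -l 3 http://@var{site}/
@end example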
1733 When retrieving an @sc{ftp} @sc{url} recursively, Wget will retrieve all
1734 the data from the given directory tree (including the subdirectories up
1735 to the specified depth) on the remote server, creating its mirror image
1736 locally. @sc{ftp} retrieval is also limited by the @code{depth}
1737 parameter. Unlike @sc{http} recursion, @sc{ftp} recursion is performed
1740 By default, Wget will create a local directory tree, corresponding to
1741 the one found on the remote server.
Recursive retrieval has a number of applications, the most
important of which is mirroring. It is also useful for @sc{www}
presentations, and other situations where slow network
connections can be bypassed by storing the files locally.
1748 You should be warned that recursive downloads can overload the remote
1749 servers. Because of that, many administrators frown upon them and may
ban access from your site if they detect very fast downloads of large
1751 amounts of content. When downloading from Internet servers, consider
1752 using the @samp{-w} option to introduce a delay between accesses to the
1753 server. The download will take a while longer, but the server
1754 administrator will not be alarmed by your rudeness.
1756 Of course, recursive download may cause problems on your machine. If
1757 left to run unchecked, it can easily fill up the disk. If downloading
from a local network, it can also consume considerable bandwidth, as
well as memory and CPU.
1761 Try to specify the criteria that match the kind of download you are
1762 trying to achieve. If you want to download only one page, use
1763 @samp{--page-requisites} without any additional recursion. If you want
1764 to download things under one directory, use @samp{-np} to avoid
1765 downloading things from other directories. If you want to download all
1766 the files from one directory, use @samp{-l 1} to make sure the recursion
1767 depth never exceeds one. @xref{Following Links}, for more information
1770 Recursive retrieval should be used with care. Don't say you were not
1773 @node Following Links
1774 @chapter Following Links
1776 @cindex following links
1778 When retrieving recursively, one does not wish to retrieve loads of
unnecessary data. Most of the time users know exactly what they want
to download, and want Wget to follow only specific links.
1782 For example, if you wish to download the music archive from
1783 @samp{fly.srk.fer.hr}, you will not want to download all the home pages
1784 that happen to be referenced by an obscure part of the archive.
Wget possesses several mechanisms that allow you to fine-tune which
1787 links it will follow.
1790 * Spanning Hosts:: (Un)limiting retrieval based on host name.
1791 * Types of Files:: Getting only certain files.
1792 * Directory-Based Limits:: Getting only certain directories.
1793 * Relative Links:: Follow relative links only.
1794 * FTP Links:: Following FTP links.
1797 @node Spanning Hosts
1798 @section Spanning Hosts
1799 @cindex spanning hosts
1800 @cindex hosts, spanning
1802 Wget's recursive retrieval normally refuses to visit hosts different
1803 than the one you specified on the command line. This is a reasonable
1804 default; without it, every retrieval would have the potential to turn
your Wget into a small version of Google.
1807 However, visiting different hosts, or @dfn{host spanning,} is sometimes
1808 a useful option. Maybe the images are served from a different server.
1809 Maybe you're mirroring a site that consists of pages interlinked between
1810 three servers. Maybe the server has two equivalent names, and the @sc{html}
1811 pages refer to both interchangeably.
1814 @item Span to any host---@samp{-H}
1816 The @samp{-H} option turns on host spanning, thus allowing Wget's
1817 recursive run to visit any host referenced by a link. Unless sufficient
recursion-limiting criteria are applied, these foreign hosts will
1819 typically link to yet more hosts, and so on until Wget ends up sucking
1820 up much more data than you have intended.
1822 @item Limit spanning to certain domains---@samp{-D}
1824 The @samp{-D} option allows you to specify the domains that will be
1825 followed, thus limiting the recursion only to the hosts that belong to
1826 these domains. Obviously, this makes sense only in conjunction with
1827 @samp{-H}. A typical example would be downloading the contents of
1828 @samp{www.server.com}, but allowing downloads from
1829 @samp{images.server.com}, etc.:
1832 wget -rH -Dserver.com http://www.server.com/
You can specify more than one domain by separating them with a comma,
1836 e.g. @samp{-Ddomain1.com,domain2.com}.
1838 @item Keep download off certain domains---@samp{--exclude-domains}
1840 If there are domains you want to exclude specifically, you can do it
1841 with @samp{--exclude-domains}, which accepts the same type of arguments
as @samp{-D}, but will @emph{exclude} all the listed domains. For
example, if you want to download all the hosts from the @samp{foo.edu}
domain, with the exception of @samp{sunsite.foo.edu}, you can do it like
1848 wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
1854 @node Types of Files
1855 @section Types of Files
1856 @cindex types of files
1858 When downloading material from the web, you will often want to restrict
1859 the retrieval to only certain file types. For example, if you are
1860 interested in downloading @sc{gif}s, you will not be overjoyed to get
1861 loads of PostScript documents, and vice versa.
1863 Wget offers two options to deal with this problem. Each option
1864 description lists a short name, a long name, and the equivalent command
1867 @cindex accept wildcards
1868 @cindex accept suffixes
1869 @cindex wildcards, accept
1870 @cindex suffixes, accept
1872 @item -A @var{acclist}
1873 @itemx --accept @var{acclist}
1874 @itemx accept = @var{acclist}
1875 The argument to @samp{--accept} option is a list of file suffixes or
1876 patterns that Wget will download during recursive retrieval. A suffix
is the ending part of a file name, and consists of ``normal'' letters,
1878 e.g. @samp{gif} or @samp{.jpg}. A matching pattern contains shell-like
1879 wildcards, e.g. @samp{books*} or @samp{zelazny*196[0-9]*}.
1881 So, specifying @samp{wget -A gif,jpg} will make Wget download only the
1882 files ending with @samp{gif} or @samp{jpg}, i.e. @sc{gif}s and
1883 @sc{jpeg}s. On the other hand, @samp{wget -A "zelazny*196[0-9]*"} will
1884 download only files beginning with @samp{zelazny} and containing numbers
1885 from 1960 to 1969 anywhere within. Look up the manual of your shell for
1886 a description of how pattern matching works.
1888 Of course, any number of suffixes and patterns can be combined into a
1889 comma-separated list, and given as an argument to @samp{-A}.
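
For instance, a list combining suffixes and a pattern might look like
this (the @sc{url} is illustrative; the quotes protect the wildcards
from the shell):

@example
wget -r -A ".jpg,.gif,zelazny*196[0-9]*" http://@var{site}/
@end example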
1891 @cindex reject wildcards
1892 @cindex reject suffixes
1893 @cindex wildcards, reject
1894 @cindex suffixes, reject
1895 @item -R @var{rejlist}
1896 @itemx --reject @var{rejlist}
1897 @itemx reject = @var{rejlist}
1898 The @samp{--reject} option works the same way as @samp{--accept}, only
1899 its logic is the reverse; Wget will download all files @emph{except} the
1900 ones matching the suffixes (or patterns) in the list.
1902 So, if you want to download a whole page except for the cumbersome
1903 @sc{mpeg}s and @sc{.au} files, you can use @samp{wget -R mpg,mpeg,au}.
1904 Analogously, to download all files except the ones beginning with
1905 @samp{bjork}, use @samp{wget -R "bjork*"}. The quotes are to prevent
1906 expansion by the shell.
1909 The @samp{-A} and @samp{-R} options may be combined to achieve even
1910 better fine-tuning of which files to retrieve. E.g. @samp{wget -A
1911 "*zelazny*" -R .ps} will download all the files having @samp{zelazny} as
1912 a part of their name, but @emph{not} the PostScript files.
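
In command form (the @sc{url} is illustrative):

@example
wget -r -A "*zelazny*" -R .ps http://@var{site}/
@end example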
1914 Note that these two options do not affect the downloading of @sc{html}
1915 files; Wget must load all the @sc{html}s to know where to go at
1916 all---recursive retrieval would make no sense otherwise.
1918 @node Directory-Based Limits
1919 @section Directory-Based Limits
1921 @cindex directory limits
1923 Regardless of other link-following facilities, it is often useful to
restrict the files retrieved based on the directories those files are
placed in. There can be many reasons for this---the
1926 home pages may be organized in a reasonable directory structure; or some
1927 directories may contain useless information, e.g. @file{/cgi-bin} or
1928 @file{/dev} directories.
1930 Wget offers three different options to deal with this requirement. Each
1931 option description lists a short name, a long name, and the equivalent
1932 command in @file{.wgetrc}.
1934 @cindex directories, include
1935 @cindex include directories
1936 @cindex accept directories
1939 @itemx --include @var{list}
1940 @itemx include_directories = @var{list}
The @samp{-I} option accepts a comma-separated list of directories included
1942 in the retrieval. Any other directories will simply be ignored. The
1943 directories are absolute paths.
1945 So, if you wish to download from @samp{http://host/people/bozo/}
1946 following only links to bozo's colleagues in the @file{/people}
1947 directory and the bogus scripts in @file{/cgi-bin}, you can specify:
1950 wget -I /people,/cgi-bin http://host/people/bozo/
1953 @cindex directories, exclude
1954 @cindex exclude directories
1955 @cindex reject directories
1957 @itemx --exclude @var{list}
1958 @itemx exclude_directories = @var{list}
The @samp{-X} option is exactly the reverse of @samp{-I}---this is a list of
1960 directories @emph{excluded} from the download. E.g. if you do not want
Wget to download things from the @file{/cgi-bin} directory, specify @samp{-X
1962 /cgi-bin} on the command line.
As with @samp{-A}/@samp{-R}, these two options can be combined for
finer control over downloading subdirectories. E.g. if you
1966 want to load all the files from @file{/pub} hierarchy except for
1967 @file{/pub/worthless}, specify @samp{-I/pub -X/pub/worthless}.
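
On the command line this might look like (the @sc{url} is
illustrative):

@example
wget -r -I/pub -X/pub/worthless ftp://@var{site}/pub/
@end example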
1972 @itemx no_parent = on
The simplest, and often very useful, way of limiting directories is
disallowing retrieval of the links that refer to the hierarchy
@dfn{above} the beginning directory, i.e. disallowing ascent to the
parent directory/directories.
1978 The @samp{--no-parent} option (short @samp{-np}) is useful in this case.
1979 Using it guarantees that you will never leave the existing hierarchy.
1980 Supposing you issue Wget with:
1983 wget -r --no-parent http://somehost/~luzer/my-archive/
1986 You may rest assured that none of the references to
1987 @file{/~his-girls-homepage/} or @file{/~luzer/all-my-mpegs/} will be
1988 followed. Only the archive you are interested in will be downloaded.
1989 Essentially, @samp{--no-parent} is similar to
1990 @samp{-I/~luzer/my-archive}, only it handles redirections in a more
1991 intelligent fashion.
1994 @node Relative Links
1995 @section Relative Links
1996 @cindex relative links
1998 When @samp{-L} is turned on, only the relative links are ever followed.
Relative links are here defined as those that do not refer to the web
2000 server root. For example, these links are relative:
2004 <a href="foo/bar.gif">
2005 <a href="../foo/bar.gif">
2008 These links are not relative:
2012 <a href="/foo/bar.gif">
2013 <a href="http://www.server.com/foo/bar.gif">
2016 Using this option guarantees that recursive retrieval will not span
2017 hosts, even without @samp{-H}. In simple cases it also allows downloads
2018 to ``just work'' without having to convert links.
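
A sketch of such a retrieval (the @sc{url} is illustrative):

@example
wget -r -L http://@var{site}/
@end example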
2020 This option is probably not very useful and might be removed in a future
2024 @section Following FTP Links
2025 @cindex following ftp links
2027 The rules for @sc{ftp} are somewhat specific, as it is necessary for
2028 them to be. @sc{ftp} links in @sc{html} documents are often included
2029 for purposes of reference, and it is often inconvenient to download them
2032 To have @sc{ftp} links followed from @sc{html} documents, you need to
2033 specify the @samp{--follow-ftp} option. Having done that, @sc{ftp}
2034 links will span hosts regardless of @samp{-H} setting. This is logical,
2035 as @sc{ftp} links rarely point to the same host where the @sc{http}
server resides. For similar reasons, the @samp{-L} option has no
2037 effect on such downloads. On the other hand, domain acceptance
2038 (@samp{-D}) and suffix rules (@samp{-A} and @samp{-R}) apply normally.
Also note that followed links to @sc{ftp} directories will not be
retrieved recursively any further.
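
For example, to follow @sc{ftp} links encountered during a recursive
@sc{http} retrieval (the @sc{url} is illustrative):

@example
wget -r --follow-ftp http://@var{site}/
@end example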
2044 @chapter Time-Stamping
2045 @cindex time-stamping
2046 @cindex timestamping
2047 @cindex updating the archives
2048 @cindex incremental updating
2050 One of the most important aspects of mirroring information from the
2051 Internet is updating your archives.
2053 Downloading the whole archive again and again, just to replace a few
changed files, is expensive in terms of wasted bandwidth, money, and
the time needed to do the update. This is why all the mirroring tools
2056 offer the option of incremental updating.
2058 Such an updating mechanism means that the remote server is scanned in
2059 search of @dfn{new} files. Only those new files will be downloaded in
2060 the place of the old ones.
A file is considered new if one of these two conditions is met:
2066 A file of that name does not already exist locally.
2069 A file of that name does exist, but the remote file was modified more
2070 recently than the local file.
2073 To implement this, the program needs to be aware of the time of last
2074 modification of both local and remote files. We call this information the
2075 @dfn{time-stamp} of a file.
Time-stamping in GNU Wget is turned on using the @samp{--timestamping}
(@samp{-N}) option, or through the @code{timestamping = on} directive in
2079 @file{.wgetrc}. With this option, for each file it intends to download,
2080 Wget will check whether a local file of the same name exists. If it
2081 does, and the remote file is older, Wget will not download it.
2083 If the local file does not exist, or the sizes of the files do not
2084 match, Wget will download the remote file no matter what the time-stamps
2088 * Time-Stamping Usage::
2089 * HTTP Time-Stamping Internals::
2090 * FTP Time-Stamping Internals::
2093 @node Time-Stamping Usage
2094 @section Time-Stamping Usage
2095 @cindex time-stamping usage
2096 @cindex usage, time-stamping
2098 The usage of time-stamping is simple. Say you would like to download a
2099 file so that it keeps its date of modification.
2102 wget -S http://www.gnu.ai.mit.edu/
A simple @code{ls -l} shows that the time stamp on the local file
matches the @code{Last-Modified} header returned by the server.
2107 As you can see, the time-stamping info is preserved locally, even
2108 without @samp{-N} (at least for @sc{http}).
2110 Several days later, you would like Wget to check if the remote file has
2111 changed, and download it if it has.
2114 wget -N http://www.gnu.ai.mit.edu/
2117 Wget will ask the server for the last-modified date. If the local file
2118 has the same timestamp as the server, or a newer one, the remote file
2119 will not be re-fetched. However, if the remote file is more recent,
2120 Wget will proceed to fetch it.
2122 The same goes for @sc{ftp}. For example:
2125 wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
2128 (The quotes around that URL are to prevent the shell from trying to
2129 interpret the @samp{*}.)
2131 After download, a local directory listing will show that the timestamps
2132 match those on the remote server. Reissuing the command with @samp{-N}
2133 will make Wget re-fetch @emph{only} the files that have been modified
2134 since the last download.
If you wished to mirror the GNU archive every week, you would use a
command like the following:
2140 wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
2143 Note that time-stamping will only work for files for which the server
2144 gives a timestamp. For @sc{http}, this depends on getting a
2145 @code{Last-Modified} header. For @sc{ftp}, this depends on getting a
2146 directory listing with dates in a format that Wget can parse
2147 (@pxref{FTP Time-Stamping Internals}).
2149 @node HTTP Time-Stamping Internals
2150 @section HTTP Time-Stamping Internals
2151 @cindex http time-stamping
Time-stamping in @sc{http} is implemented by checking the
2154 @code{Last-Modified} header. If you wish to retrieve the file
2155 @file{foo.html} through @sc{http}, Wget will check whether
2156 @file{foo.html} exists locally. If it doesn't, @file{foo.html} will be
2157 retrieved unconditionally.
2159 If the file does exist locally, Wget will first check its local
2160 time-stamp (similar to the way @code{ls -l} checks it), and then send a
2161 @code{HEAD} request to the remote server, demanding the information on
2164 The @code{Last-Modified} header is examined to find which file was
2165 modified more recently (which makes it ``newer''). If the remote file
2166 is newer, it will be downloaded; if it is older, Wget will give
2167 up.@footnote{As an additional check, Wget will look at the
2168 @code{Content-Length} header, and compare the sizes; if they are not the
2169 same, the remote file will be downloaded no matter what the time-stamp
2172 When @samp{--backup-converted} (@samp{-K}) is specified in conjunction
2173 with @samp{-N}, server file @samp{@var{X}} is compared to local file
2174 @samp{@var{X}.orig}, if extant, rather than being compared to local file
2175 @samp{@var{X}}, which will always differ if it's been converted by
2176 @samp{--convert-links} (@samp{-k}).
2178 Arguably, @sc{http} time-stamping should be implemented using the
2179 @code{If-Modified-Since} request.
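
For illustration, here is a sketch of the difference; the date and the
response are hypothetical. Wget sends a separate @code{HEAD} request
and compares the time-stamps itself, whereas a conditional request
would let the server decide:

@example
GET /foo.html HTTP/1.0
If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT

HTTP/1.0 304 Not Modified
@end example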
2181 @node FTP Time-Stamping Internals
2182 @section FTP Time-Stamping Internals
2183 @cindex ftp time-stamping
2185 In theory, @sc{ftp} time-stamping works much the same as @sc{http}, only
2186 @sc{ftp} has no headers---time-stamps must be ferreted out of directory
2189 If an @sc{ftp} download is recursive or uses globbing, Wget will use the
2190 @sc{ftp} @code{LIST} command to get a file listing for the directory
2191 containing the desired file(s). It will try to analyze the listing,
2192 treating it like Unix @code{ls -l} output, extracting the time-stamps.
2193 The rest is exactly the same as for @sc{http}. Note that when
2194 retrieving individual files from an @sc{ftp} server without using
2195 globbing or recursion, listing files will not be downloaded (and thus
2196 files will not be time-stamped) unless @samp{-N} is specified.
The assumption that every directory listing is a Unix-style listing may
2199 sound extremely constraining, but in practice it is not, as many
2200 non-Unix @sc{ftp} servers use the Unixoid listing format because most
2201 (all?) of the clients understand it. Bear in mind that @sc{rfc959}
2202 defines no standard way to get a file list, let alone the time-stamps.
2203 We can only hope that a future standard will define this.
Another non-standard solution is the use of the @code{MDTM} command,
supported by some @sc{ftp} servers (including the popular
2207 @code{wu-ftpd}), which returns the exact time of the specified file.
2208 Wget may support this command in the future.
2211 @chapter Startup File
2212 @cindex startup file
2218 Once you know how to change default settings of Wget through command
2219 line arguments, you may wish to make some of those settings permanent.
2220 You can do that in a convenient way by creating the Wget startup
2221 file---@file{.wgetrc}.
Although @file{.wgetrc} is the ``main'' initialization file, it is
convenient to have a special facility for storing passwords. Thus Wget
also reads and interprets the contents of @file{$HOME/.netrc}, if it
finds it. The @file{.netrc} format is described in your system manuals.
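
A @file{.netrc} entry typically looks like this (the host name and
credentials are, of course, illustrative):

@example
machine @var{host}
login hniksic
password mypassword
@end example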
2228 Wget reads @file{.wgetrc} upon startup, recognizing a limited set of
2232 * Wgetrc Location:: Location of various wgetrc files.
2233 * Wgetrc Syntax:: Syntax of wgetrc.
2234 * Wgetrc Commands:: List of available commands.
2235 * Sample Wgetrc:: A wgetrc example.
2238 @node Wgetrc Location
2239 @section Wgetrc Location
2240 @cindex wgetrc location
2241 @cindex location of wgetrc
2243 When initializing, Wget will look for a @dfn{global} startup file,
2244 @file{/usr/local/etc/wgetrc} by default (or some prefix other than
2245 @file{/usr/local}, if Wget was not installed there) and read commands
2246 from there, if it exists.
Then it will look for the user's file. If the environment variable
2249 @code{WGETRC} is set, Wget will try to load that file. Failing that, no
2250 further attempts will be made.
2252 If @code{WGETRC} is not set, Wget will try to load @file{$HOME/.wgetrc}.
The fact that the user's settings are loaded after the system-wide ones
means that in case of collision the user's wgetrc @emph{overrides} the
2256 system-wide wgetrc (in @file{/usr/local/etc/wgetrc} by default).
2257 Fascist admins, away!
2260 @section Wgetrc Syntax
2261 @cindex wgetrc syntax
2262 @cindex syntax of wgetrc
2264 The syntax of a wgetrc command is simple:
The @dfn{variable} is also called a @dfn{command}. Valid
2271 @dfn{values} are different for different commands.
2273 The commands are case-insensitive and underscore-insensitive. Thus
2274 @samp{DIr__PrefiX} is the same as @samp{dirprefix}. Empty lines, lines
2275 beginning with @samp{#} and lines containing white-space only are
2278 Commands that expect a comma-separated list will clear the list on an
2279 empty command. So, if you wish to reset the rejection list specified in
2280 global @file{wgetrc}, you can do it with:
2286 @node Wgetrc Commands
2287 @section Wgetrc Commands
2288 @cindex wgetrc commands
2290 The complete set of commands is listed below. Legal values are listed
2291 after the @samp{=}. Simple Boolean values can be set or unset using
2292 @samp{on} and @samp{off} or @samp{1} and @samp{0}. A fancier kind of
2293 Boolean allowed in some cases is the @dfn{lockable Boolean}, which may
2294 be set to @samp{on}, @samp{off}, @samp{always}, or @samp{never}. If an
2295 option is set to @samp{always} or @samp{never}, that value will be
2296 locked in for the duration of the Wget invocation---command-line options
2299 Some commands take pseudo-arbitrary values. @var{address} values can be
2300 hostnames or dotted-quad IP addresses. @var{n} can be any positive
2301 integer, or @samp{inf} for infinity, where appropriate. @var{string}
2302 values can be any non-empty string.
2304 Most of these commands have direct command-line equivalents. Also, any
2305 wgetrc command can be specified on the command line using the
@samp{--execute} switch (@pxref{Basic Startup Options}).
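
For example, any wgetrc command below can be given at invocation time
like this (the values and @sc{url} are illustrative):

@example
wget -e tries=5 -e timestamping=on http://@var{site}/
@end example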
2309 @item accept/reject = @var{string}
2310 Same as @samp{-A}/@samp{-R} (@pxref{Types of Files}).
2312 @item add_hostdir = on/off
2313 Enable/disable host-prefixed file names. @samp{-nH} disables it.
2315 @item continue = on/off
2316 If set to on, force continuation of preexistent partially retrieved
2317 files. See @samp{-c} before setting it.
2319 @item background = on/off
2320 Enable/disable going to background---the same as @samp{-b} (which
2323 @item backup_converted = on/off
2324 Enable/disable saving pre-converted files with the suffix
2325 @samp{.orig}---the same as @samp{-K} (which enables it).
2327 @c @item backups = @var{number}
2328 @c #### Document me!
2330 @item base = @var{string}
2331 Consider relative @sc{url}s in @sc{url} input files forced to be
2332 interpreted as @sc{html} as being relative to @var{string}---the same as
2335 @item bind_address = @var{address}
2336 Bind to @var{address}, like the @samp{--bind-address} option.
2338 @item cache = on/off
2339 When set to off, disallow server-caching. See the @samp{--no-cache}
2342 @item convert_links = on/off
2343 Convert non-relative links locally. The same as @samp{-k}.
2345 @item cookies = on/off
2346 When set to off, disallow cookies. See the @samp{--cookies} option.
2348 @item load_cookies = @var{file}
2349 Load cookies from @var{file}. See @samp{--load-cookies}.
2351 @item save_cookies = @var{file}
2352 Save cookies to @var{file}. See @samp{--save-cookies}.
2354 @item connect_timeout = @var{n}
2355 Set the connect timeout---the same as @samp{--connect-timeout}.
2357 @item cut_dirs = @var{n}
2358 Ignore @var{n} remote directory components.
2360 @item debug = on/off
2361 Debug mode, same as @samp{-d}.
2363 @item delete_after = on/off
2364 Delete after download---the same as @samp{--delete-after}.
2366 @item dir_prefix = @var{string}
2367 Top of directory tree---the same as @samp{-P}.
2369 @item dirstruct = on/off
2370 Turning dirstruct on or off---the same as @samp{-x} or @samp{-nd},
2373 @item dns_cache = on/off
2374 Turn DNS caching on/off. Since DNS caching is on by default, this
2375 option is normally used to turn it off. Same as @samp{--dns-cache}.
2377 @item dns_timeout = @var{n}
2378 Set the DNS timeout---the same as @samp{--dns-timeout}.
2380 @item domains = @var{string}
2381 Same as @samp{-D} (@pxref{Spanning Hosts}).
2383 @item dot_bytes = @var{n}
2384 Specify the number of bytes ``contained'' in a dot, as seen throughout
2385 the retrieval (1024 by default). You can postfix the value with
2386 @samp{k} or @samp{m}, representing kilobytes and megabytes,
2387 respectively. With dot settings you can tailor the dot retrieval to
2388 suit your needs, or you can use the predefined @dfn{styles}
2389 (@pxref{Download Options}).
2391 @item dots_in_line = @var{n}
2392 Specify the number of dots that will be printed in each line throughout
2393 the retrieval (50 by default).
2395 @item dot_spacing = @var{n}
2396 Specify the number of dots in a single cluster (10 by default).
2398 @item egd_file = @var{string}
2399 Use @var{string} as the EGD socket file name. The same as
2402 @item exclude_directories = @var{string}
2403 Specify a comma-separated list of directories you wish to exclude from
2404 download---the same as @samp{-X} (@pxref{Directory-Based Limits}).
2406 @item exclude_domains = @var{string}
2407 Same as @samp{--exclude-domains} (@pxref{Spanning Hosts}).
2409 @item follow_ftp = on/off
2410 Follow @sc{ftp} links from @sc{html} documents---the same as
2411 @samp{--follow-ftp}.
2413 @item follow_tags = @var{string}
2414 Only follow certain @sc{html} tags when doing a recursive retrieval, just like
2415 @samp{--follow-tags}.
2417 @item force_html = on/off
2418 If set to on, force the input filename to be regarded as an @sc{html}
2419 document---the same as @samp{-F}.
2421 @item ftp_passwd = @var{string}
2422 Set your @sc{ftp} password to @var{string}. Without this setting, the
2423 password defaults to @samp{-wget@@}, which is a useful default for
2424 anonymous @sc{ftp} access.
2426 This command used to be named @code{passwd} prior to Wget 1.10.
2428 @item ftp_proxy = @var{string}
2429 Use @var{string} as @sc{ftp} proxy, instead of the one specified in
2433 Turn globbing on/off---the same as @samp{--glob} and @samp{--no-glob}.
2435 @item header = @var{string}
2436 Define an additional header, like @samp{--header}.
2438 @item html_extension = on/off
2439 Add a @samp{.html} extension to @samp{text/html} or
2440 @samp{application/xhtml+xml} files without it, like
2443 @item http_keep_alive = on/off
2444 Turn the keep-alive feature on or off (defaults to on). The same as
@samp{--http-keep-alive}.
2447 @item http_passwd = @var{string}
2448 Set @sc{http} password.
2450 @item http_proxy = @var{string}
2451 Use @var{string} as @sc{http} proxy, instead of the one specified in
2454 @item http_user = @var{string}
2455 Set @sc{http} user to @var{string}.
2457 @item ignore_length = on/off
2458 When set to on, ignore @code{Content-Length} header; the same as
2459 @samp{--ignore-length}.
2461 @item ignore_tags = @var{string}
2462 Ignore certain @sc{html} tags when doing a recursive retrieval, just like
2463 @samp{--ignore-tags}.
2465 @item include_directories = @var{string}
2466 Specify a comma-separated list of directories you wish to follow when
2467 downloading---the same as @samp{-I}.
2469 @item inet4_only = on/off
2470 Force connecting to IPv4 addresses, off by default. You can put this
2471 in the global init file to disable Wget's attempts to resolve and
2472 connect to IPv6 hosts. Available only if Wget was compiled with IPv6
2473 support. The same as @samp{--inet4-only} or @samp{-4}.
2475 @item inet6_only = on/off
2476 Force connecting to IPv6 addresses, off by default. Available only if
2477 Wget was compiled with IPv6 support. The same as @samp{--inet6-only}
2480 @item input = @var{string}
2481 Read the @sc{url}s from @var{string}, like @samp{-i}.
2483 @item kill_longer = on/off
Consider data longer than specified in the @code{Content-Length} header as invalid
2485 (and retry getting it). The default behavior is to save as much data
2486 as there is, provided there is more than or equal to the value in
2487 @code{Content-Length}.
2489 @item limit_rate = @var{rate}
2490 Limit the download speed to no more than @var{rate} bytes per second.
2491 The same as @samp{--limit-rate}.
2493 @item logfile = @var{string}
2494 Set logfile---the same as @samp{-o}.
2496 @item login = @var{string}
2497 Your user name on the remote machine, for @sc{ftp}. Defaults to
2500 @item mirror = on/off
2501 Turn mirroring on/off. The same as @samp{-m}.
2503 @item netrc = on/off
2504 Turn reading netrc on or off.
2506 @item noclobber = on/off
2509 @item no_parent = on/off
2510 Disallow retrieving outside the directory hierarchy, like
2511 @samp{--no-parent} (@pxref{Directory-Based Limits}).
2513 @item no_proxy = @var{string}
2514 Use @var{string} as the comma-separated list of domains to avoid in
2515 proxy loading, instead of the one specified in environment.
2517 @item output_document = @var{string}
2518 Set the output filename---the same as @samp{-O}.
2520 @item page_requisites = on/off
2521 Download all ancillary documents necessary for a single @sc{html} page to
2522 display properly---the same as @samp{-p}.
2524 @item passive_ftp = on/off/always/never
2525 Change setting of passive @sc{ftp}, equivalent to the
2526 @samp{--passive-ftp} option. Some scripts and @samp{.pm} (Perl
2527 module) files download files using @samp{wget --passive-ftp}. If your
2528 firewall does not allow this, you can set @samp{passive_ftp = never}
2529 to override the command-line.
2531 @item post_data = @var{string}
2532 Use POST as the method for all HTTP requests and send @var{string} in
2533 the request body. The same as @samp{--post-data}.
2535 @item post_file = @var{file}
2536 Use POST as the method for all HTTP requests and send the contents of
2537 @var{file} in the request body. The same as @samp{--post-file}.
2539 @item progress = @var{string}
2540 Set the type of the progress indicator. Legal types are ``dot'' and
2543 @item protocol_directories = on/off
2544 When set, use the protocol name as a directory component of local file
2545 names. The same as @samp{--protocol-directories}.
2547 @item proxy_user = @var{string}
2548 Set proxy authentication user name to @var{string}, like @samp{--proxy-user}.
2550 @item proxy_passwd = @var{string}
2551 Set proxy authentication password to @var{string}, like @samp{--proxy-passwd}.
2553 @item quiet = on/off
2554 Quiet mode---the same as @samp{-q}.
2556 @item quota = @var{quota}
2557 Specify the download quota, which is useful to put in the global
2558 @file{wgetrc}. When download quota is specified, Wget will stop
2559 retrieving after the download sum has become greater than quota. The
quota can be specified in bytes (default), kbytes (@samp{k} appended) or
2561 mbytes (@samp{m} appended). Thus @samp{quota = 5m} will set the quota
2562 to 5 mbytes. Note that the user's startup file overrides system
2565 @item read_timeout = @var{n}
2566 Set the read (and write) timeout---the same as @samp{--read-timeout}.
2568 @item reclevel = @var{n}
2569 Recursion level---the same as @samp{-l}.
2571 @item recursive = on/off
2572 Recursive on/off---the same as @samp{-r}.
2574 @item referer = @var{string}
2575 Set HTTP @samp{Referer:} header just like @samp{--referer}. (Note it
2576 was the folks who wrote the @sc{http} spec who got the spelling of
2577 ``referrer'' wrong.)
2579 @item relative_only = on/off
2580 Follow only relative links---the same as @samp{-L} (@pxref{Relative
2583 @item remove_listing = on/off
2584 If set to on, remove @sc{ftp} listings downloaded by Wget. Setting it
2585 to off is the same as @samp{--no-remove-listing}.
2587 @item restrict_file_names = unix/windows
2588 Restrict the file names generated by Wget from URLs. See
2589 @samp{--restrict-file-names} for a more detailed description.
2591 @item retr_symlinks = on/off
2592 When set to on, retrieve symbolic links as if they were plain files; the
2593 same as @samp{--retr-symlinks}.
2595 @item robots = on/off
2596 Specify whether the norobots convention is respected by Wget, ``on'' by
2597 default. This switch controls both the @file{/robots.txt} and the
2598 @samp{nofollow} aspect of the spec. @xref{Robot Exclusion}, for more
2599 details about this. Be sure you know what you are doing before turning
2602 @item server_response = on/off
2603 Choose whether or not to print the @sc{http} and @sc{ftp} server
2604 responses---the same as @samp{-S}.
2606 @item span_hosts = on/off
2609 @item ssl_cert_file = @var{string}
2610 Set the client certificate file name to @var{string}. The same as
2611 @samp{--sslcertfile}.
2613 @item ssl_cert_key = @var{string}
2614 Set the certificate key file to @var{string}. The same as
2615 @samp{--sslcertkey}.
2617 @item ssl_ca_dir = @var{string}
2618 Set the directory used for certificate authorities. The same as
2621 @item ssl_ca_file = @var{string}
2622 Set the certificate authority bundle file to @var{string}. The same
2623 as @samp{--sslcafile}.
2625 @item ssl_cert_type = 0/1
2626 Specify the type of the client certificate: 0 means @code{PEM}
2627 (default), 1 means @code{ASN1} (@code{DER}). The same as
2628 @samp{--sslcerttype}.
2630 @item ssl_check_cert = 0/1
2631 If this is set to 1, the server certificate is checked against the
2632 specified client authorities. The same as @samp{--sslcheckcert}.
2634 @item ssl_protocol = 0-3
2635 Choose the SSL protocol to be used. 0 means choose automatically, 1
2636 means force SSLv2, 2 means force SSLv3, and 3 means force TLSv1. The
2637 same as @samp{--sslprotocol}.
2639 @item strict_comments = on/off
2640 Same as @samp{--strict-comments}.
2642 @item timeout = @var{n}
2643 Set timeout value---the same as @samp{-T}.
2645 @item timestamping = on/off
2646 Turn timestamping on/off. The same as @samp{-N} (@pxref{Time-Stamping}).
2648 @item tries = @var{n}
2649 Set number of retries per @sc{url}---the same as @samp{-t}.
2651 @item use_proxy = on/off
2652 Turn proxy support on/off. The same as @samp{-Y}.
2654 @item verbose = on/off
2655 Turn verbose on/off---the same as @samp{-v}/@samp{-nv}.
2657 @item wait = @var{n}
2658 Wait @var{n} seconds between retrievals---the same as @samp{-w}.
2660 @item waitretry = @var{n}
2661 Wait up to @var{n} seconds between retries of failed retrievals
2662 only---the same as @samp{--waitretry}. Note that this is turned on by
2663 default in the global @file{wgetrc}.
2665 @item randomwait = on/off
2666 Turn random between-request wait times on or off. The same as
2667 @samp{--random-wait}.
2671 @section Sample Wgetrc
2672 @cindex sample wgetrc
2674 This is the sample initialization file, as given in the distribution.
It is divided in two sections---one for global usage (suitable for the
global startup file), and one for local usage (suitable for
2677 @file{$HOME/.wgetrc}). Be careful about the things you change.
2679 Note that almost all the lines are commented out. For a command to have
2680 any effect, you must remove the @samp{#} character at the beginning of
2684 @include sample.wgetrc.munged_for_texi_inclusion
2691 @c man begin EXAMPLES
2692 The examples are divided into three sections loosely based on their
2696 * Simple Usage:: Simple, basic usage of the program.
2697 * Advanced Usage:: Advanced tips.
2698 * Very Advanced Usage:: The hairy stuff.
2702 @section Simple Usage
2706 Say you want to download a @sc{url}. Just type:
2709 wget http://fly.srk.fer.hr/
2713 But what will happen if the connection is slow, and the file is lengthy?
2714 The connection will probably fail before the whole file is retrieved,
2715 more than once. In this case, Wget will try getting the file until it
2716 either gets the whole of it, or exceeds the default number of retries
2717 (this being 20). It is easy to change the number of tries to 45, to
ensure that the whole file will arrive safely:
2721 wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
2725 Now let's leave Wget to work in the background, and write its progress
2726 to log file @file{log}. It is tiring to type @samp{--tries}, so we
2727 shall use @samp{-t}.
2730 wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
2733 The ampersand at the end of the line makes sure that Wget works in the
background. To remove the limit on the number of retries, use @samp{-t inf}.
Using @sc{ftp} is just as simple. Wget will take care of login and
2741 wget ftp://gnjilux.srk.fer.hr/welcome.msg
2745 If you specify a directory, Wget will retrieve the directory listing,
2746 parse it and convert it to @sc{html}. Try:
2749 wget ftp://ftp.gnu.org/pub/gnu/
2754 @node Advanced Usage
2755 @section Advanced Usage
2759 You have a file that contains the URLs you want to download? Use the
2766 If you specify @samp{-} as file name, the @sc{url}s will be read from
Create a mirror image of the GNU web site, five levels deep, with the
2771 same directory structure the original has, with only one try per
2772 document, saving the log of the activities to @file{gnulog}:
2775 wget -r http://www.gnu.org/ -o gnulog
2779 The same as the above, but convert the links in the @sc{html} files to
2780 point to local files, so you can view the documents off-line:
2783 wget --convert-links -r http://www.gnu.org/ -o gnulog
2787 Retrieve only one @sc{html} page, but make sure that all the elements needed
2788 for the page to be displayed, such as inline images and external style
2789 sheets, are also downloaded. Also make sure the downloaded page
2790 references the downloaded links.
2793 wget -p --convert-links http://www.server.com/dir/page.html
2796 The @sc{html} page will be saved to @file{www.server.com/dir/page.html}, and
2797 the images, stylesheets, etc., somewhere under @file{www.server.com/},
2798 depending on where they were on the remote server.
2801 The same as the above, but without the @file{www.server.com/} directory.
2802 In fact, I don't want to have all those random server directories
2803 anyway---just save @emph{all} those files under a @file{download/}
2804 subdirectory of the current directory.
2807 wget -p --convert-links -nH -nd -Pdownload \
2808 http://www.server.com/dir/page.html
2812 Retrieve the index.html of @samp{www.lycos.com}, showing the original
2816 wget -S http://www.lycos.com/
2820 Save the server headers with the file, perhaps for post-processing.
2823 wget -s http://www.lycos.com/
2828 Retrieve the first two levels of @samp{wuarchive.wustl.edu}, saving them
2832 wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
2836 You want to download all the @sc{gif}s from a directory on an @sc{http}
2837 server. You tried @samp{wget http://www.server.com/dir/*.gif}, but that
2838 didn't work because @sc{http} retrieval does not support globbing. In
2842 wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
2845 More verbose, but the effect is the same. @samp{-r -l1} means to
2846 retrieve recursively (@pxref{Recursive Download}), with maximum depth
2847 of 1. @samp{--no-parent} means that references to the parent directory
2848 are ignored (@pxref{Directory-Based Limits}), and @samp{-A.gif} means to
2849 download only the @sc{gif} files. @samp{-A "*.gif"} would have worked
Suppose you were in the middle of downloading when Wget was
2854 interrupted. Now you do not want to clobber the files already present.
2858 wget -nc -r http://www.gnu.org/
2862 If you want to encode your own username and password to @sc{http} or
2863 @sc{ftp}, use the appropriate @sc{url} syntax (@pxref{URL Format}).
2866 wget ftp://hniksic:mypassword@@unix.server.com/.emacs
2869 Note, however, that this usage is not advisable on multi-user systems
2870 because it reveals your password to anyone who looks at the output of
2873 @cindex redirecting output
2875 You would like the output documents to go to standard output instead of
2879 wget -O - http://jagor.srce.hr/ http://www.srce.hr/
2882 You can also combine the two options and make pipelines to retrieve the
2883 documents from remote hotlists:
2886 wget -O - http://cool.list.com/ | wget --force-html -i -
2890 @node Very Advanced Usage
2891 @section Very Advanced Usage
2896 If you wish Wget to keep a mirror of a page (or @sc{ftp}
2897 subdirectories), use @samp{--mirror} (@samp{-m}), which is the shorthand
2898 for @samp{-r -l inf -N}. You can put Wget in the crontab file asking it
2899 to recheck a site each Sunday:
2903 0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
2907 In addition to the above, you want the links to be converted for local
2908 viewing. But, after having read this manual, you know that link
2909 conversion doesn't play well with timestamping, so you also want Wget to
2910 back up the original @sc{html} files before the conversion. Wget invocation
2911 would look like this:
2914 wget --mirror --convert-links --backup-converted \
2915 http://www.gnu.org/ -o /home/me/weeklog
2919 But you've also noticed that local viewing doesn't work all that well
2920 when @sc{html} files are saved under extensions other than @samp{.html},
2921 perhaps because they were served as @file{index.cgi}. So you'd like
2922 Wget to rename all the files served with content-type @samp{text/html}
2923 or @samp{application/xhtml+xml} to @file{@var{name}.html}.
2926 wget --mirror --convert-links --backup-converted \
2927 --html-extension -o /home/me/weeklog \
2931 Or, with less typing:
2934 wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
2943 This chapter contains all the stuff that could not fit anywhere else.
2946 * Proxies:: Support for proxy servers
2947 * Distribution:: Getting the latest version.
2948 * Mailing List:: Wget mailing list for announcements and discussion.
2949 * Reporting Bugs:: How and where to report bugs.
2950 * Portability:: The systems Wget works on.
2951 * Signals:: Signal-handling performed by Wget.
2958 @dfn{Proxies} are special-purpose @sc{http} servers designed to transfer
2959 data from remote servers to local clients. One typical use of proxies
is to lighten the network load for users behind a slow connection. This is
2961 achieved by channeling all @sc{http} and @sc{ftp} requests through the
proxy, which caches the transferred data. When a cached resource is
requested again, the proxy will return the data from its cache. Another
use for
2964 proxies is for companies that separate (for security reasons) their
internal networks from the rest of the Internet. In order to obtain
2966 information from the Web, their users connect and retrieve remote data
2967 using an authorized proxy.
2969 Wget supports proxies for both @sc{http} and @sc{ftp} retrievals. The
2970 standard way to specify proxy location, which Wget recognizes, is using
2971 the following environment variables:
2975 This variable should contain the @sc{url} of the proxy for @sc{http}
2979 This variable should contain the @sc{url} of the proxy for @sc{ftp}
2980 connections. It is quite common that @sc{http_proxy} and @sc{ftp_proxy}
2981 are set to the same @sc{url}.
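
For example, with a Bourne-style shell both variables might be set like
this (the host and port are illustrative):

@example
http_proxy=http://proxy.company.com:8001/
ftp_proxy=$http_proxy
export http_proxy ftp_proxy
@end example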
2984 This variable should contain a comma-separated list of domain extensions
the proxy should @emph{not} be used for. For instance, if the value of
@code{no_proxy} is @samp{.mit.edu}, the proxy will not be used to retrieve
2990 In addition to the environment variables, proxy location and settings
2991 may be specified from within Wget itself.
2995 @itemx --proxy=on/off
2996 @itemx proxy = on/off
2997 This option may be used to turn the proxy support on or off. Proxy
2998 support is on by default, provided that the appropriate environment
3001 @item http_proxy = @var{URL}
3002 @itemx ftp_proxy = @var{URL}
3003 @itemx no_proxy = @var{string}
3004 These startup file variables allow you to override the proxy settings
3005 specified by the environment.
3008 Some proxy servers require authorization to enable you to use them. The
3009 authorization consists of @dfn{username} and @dfn{password}, which must
3010 be sent by Wget. As with @sc{http} authorization, several
3011 authentication schemes exist. For proxy authorization only the
3012 @code{Basic} authentication scheme is currently implemented.
3014 You may specify your username and password either through the proxy
3015 @sc{url} or through the command-line options. Assuming that the
3016 company's proxy is located at @samp{proxy.company.com} at port 8001, a
3017 proxy @sc{url} location containing authorization data might look like
3021 http://hniksic:mypassword@@proxy.company.com:8001/
3024 Alternatively, you may use the @samp{proxy-user} and
3025 @samp{proxy-password} options, and the equivalent @file{.wgetrc}
3026 settings @code{proxy_user} and @code{proxy_passwd} to set the proxy
3027 username and password.
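
A sketch of the command-line form, with the same hypothetical
credentials as above:

@example
wget --proxy-user=hniksic --proxy-password=mypassword http://@var{site}/
@end example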
3030 @section Distribution
3031 @cindex latest version
3033 Like all GNU utilities, the latest version of Wget can be found at the
3034 master GNU archive site ftp.gnu.org, and its mirrors. For example,
3035 Wget @value{VERSION} can be found at
3036 @url{ftp://ftp.gnu.org/pub/gnu/wget/wget-@value{VERSION}.tar.gz}
3039 @section Mailing List
3040 @cindex mailing list
3043 There are several Wget-related mailing lists, all hosted by
3044 SunSITE.dk. The general discussion list is at
3045 @email{wget@@sunsite.dk}. It is the preferred place for bug reports
3046 and suggestions, as well as for discussion of development. You are
3047 invited to subscribe.
3049 To subscribe, simply send mail to @email{wget-subscribe@@sunsite.dk}
3050 and follow the instructions. Unsubscribe by mailing to
3051 @email{wget-unsubscribe@@sunsite.dk}. The mailing list is archived at
3052 @url{http://www.mail-archive.com/wget%40sunsite.dk/} and at
3053 @url{http://news.gmane.org/gmane.comp.web.wget.general}.
3055 The second mailing list is at @email{wget-patches@@sunsite.dk}, and is
3056 used to submit patches for review by Wget developers. A ``patch'' is
a textual representation of a change to source code, readable by both
3058 humans and programs. The file @file{PATCHES} that comes with Wget
3059 covers the creation and submitting of patches in detail. Please don't
3060 send general suggestions or bug reports to @samp{wget-patches}; use it
3061 only for patch submissions.
3063 To subscribe, simply send mail to @email{wget-subscribe@@sunsite.dk}
3064 and follow the instructions. Unsubscribe by mailing to
3065 @email{wget-unsubscribe@@sunsite.dk}. The mailing list is archived at
3066 @url{http://news.gmane.org/gmane.comp.web.wget.patches}.
3068 Finally, there is a read-only list at @email{wget-cvs@@sunsite.dk}
3069 that tracks commits to the Wget CVS repository. To subscribe to that
list, send mail to @email{wget-cvs-subscribe@@sunsite.dk}.  The list
is not archived.
3073 @node Reporting Bugs
3074 @section Reporting Bugs
3076 @cindex reporting bugs
3080 You are welcome to send bug reports about GNU Wget to
3081 @email{bug-wget@@gnu.org}.
Before actually submitting a bug report, please try to follow a few
simple guidelines.

@enumerate
@item
3088 Please try to ascertain that the behavior you see really is a bug. If
3089 Wget crashes, it's a bug. If Wget does not behave as documented,
it's a bug.  If things work strangely, but you are not sure about the
way they are supposed to work, it might well be a bug.
@item
Try to repeat the bug in as simple circumstances as possible.  E.g. if
Wget crashes while downloading @samp{wget -rl0 -kKE -t5 -Y0
http://yoyodyne.com -o /tmp/log}, you should try to see if the crash is
repeatable, and if it will occur with a simpler set of options.  You might
3098 even try to start the download at the page where the crash occurred to
3099 see if that page somehow triggered the crash.
3101 Also, while I will probably be interested to know the contents of your
3102 @file{.wgetrc} file, just dumping it into the debug message is probably
3103 a bad idea. Instead, you should first try to see if the bug repeats
3104 with @file{.wgetrc} moved out of the way. Only if it turns out that
@file{.wgetrc} settings affect the bug, mail me the relevant parts of
the file.
@item
Please start Wget with the @samp{-d} option and send us the resulting
3110 output (or relevant parts thereof). If Wget was compiled without
3111 debug support, recompile it---it is @emph{much} easier to trace bugs
3112 with debug support on.
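For example, the following invocation (with an illustrative @sc{url})
captures the complete debug transcript to a file you can then inspect
and attach:

@example
wget -d http://www.example.com/ -o wget-debug.log
@end example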
3114 Note: please make sure to remove any potentially sensitive information
3115 from the debug log before sending it to the bug address. The
@code{-d} option won't go out of its way to collect sensitive information,
3117 but the log @emph{will} contain a fairly complete transcript of Wget's
3118 communication with the server, which may include passwords and pieces
of downloaded data.  Since the bug address is publicly archived, you
3120 may assume that all bug reports are visible to the public.
@item
If Wget has crashed, try to run it in a debugger, e.g. @code{gdb `which
wget` core} and type @code{where} to get the backtrace.  This may not
work if the system administrator has disabled core files, but it is
safe to try.
@end enumerate
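To illustrate the last point, such a debugger session might look like
this, assuming your system produced a @file{core} file:

@example
$ gdb `which wget` core
(gdb) where
@end example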
@node Portability
@section Portability
3133 @cindex operating systems
3135 Like all GNU software, Wget works on the GNU system. However, since it
3136 uses GNU Autoconf for building and configuring, and mostly avoids using
3137 ``special'' features of any particular Unix, it should compile (and
3138 work) on all common Unix flavors.
3140 Various Wget versions have been compiled and tested under many kinds
3141 of Unix systems, including GNU/Linux, Solaris, SunOS 4.x, OSF (aka
3142 Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some of
3143 those systems are no longer in widespread use and may not be able to
3144 support recent versions of Wget. If Wget fails to compile on your
3145 system, we would like to know about it.
3147 Thanks to kind contributors, this version of Wget compiles and works
3148 on 32-bit Microsoft Windows platforms. It has been compiled
3149 successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC
compilers.  Naturally, it lacks some of the features available on
Unix, but it should work as a substitute for people stuck with
Windows.  Note that Windows-specific portions of Wget are not
guaranteed to be supported in the future, although this has been the
case in practice for many years now.  All questions and problems in
Windows usage should be reported to the Wget mailing list at
@email{wget@@sunsite.dk} where the volunteers who maintain the
3157 Windows-related features might look at them.
@node Signals
@section Signals
@cindex signal handling
3164 Since the purpose of Wget is background work, it catches the hangup
3165 signal (@code{SIGHUP}) and ignores it. If the output was on standard
3166 output, it will be redirected to a file named @file{wget-log}.
3167 Otherwise, @code{SIGHUP} is ignored. This is convenient when you wish
3168 to redirect the output of Wget after having started it.
@example
$ wget http://www.gnus.org/dist/gnus.tar.gz &
...
$ kill -HUP %%
SIGHUP received, redirecting output to `wget-log'.
@end example
3177 Other than that, Wget will not try to interfere with signals in any way.
3178 @kbd{C-c}, @code{kill -TERM} and @code{kill -KILL} should kill it alike.
@node Appendices
@chapter Appendices

This chapter contains some references I consider useful.

@menu
3186 * Robot Exclusion:: Wget's support for RES.
3187 * Security Considerations:: Security with Wget.
* Contributors::             People who helped.
@end menu
3191 @node Robot Exclusion
3192 @section Robot Exclusion
3193 @cindex robot exclusion
3195 @cindex server maintenance
It is extremely easy to make Wget wander aimlessly around a web site,
sucking all the available data in the process.  @samp{wget -r
@var{site}}, and you're set.  Great?  Not for the server admin.
3201 As long as Wget is only retrieving static pages, and doing it at a
3202 reasonable rate (see the @samp{--wait} option), there's not much of a
3203 problem. The trouble is that Wget can't tell the difference between the
3204 smallest static page and the most demanding CGI. A site I know has a
3205 section handled by a CGI Perl script that converts Info files to @sc{html} on
3206 the fly. The script is slow, but works well enough for human users
3207 viewing an occasional Info file. However, when someone's recursive Wget
3208 download stumbles upon the index page that links to all the Info files
3209 through the script, the system is brought to its knees without providing
anything useful to the user.  (This task of converting Info files could
be done locally, and access to Info documentation for all installed GNU
software on a system is available from the @code{info} command.)
3214 To avoid this kind of accident, as well as to preserve privacy for
3215 documents that need to be protected from well-behaved robots, the
3216 concept of @dfn{robot exclusion} was invented. The idea is that
3217 the server administrators and document authors can specify which
3218 portions of the site they wish to protect from robots and those
to which they will permit access.
3221 The most popular mechanism, and the @i{de facto} standard supported by
3222 all the major robots, is the ``Robots Exclusion Standard'' (RES) written
3223 by Martijn Koster et al. in 1994. It specifies the format of a text
3224 file containing directives that instruct the robots which URL paths to
3225 avoid. To be found by the robots, the specifications must be placed in
@file{/robots.txt} in the server root, which the robots are expected to
download and parse.
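For illustration, a @file{/robots.txt} that asks all robots to stay
away from two hypothetical dynamic directories would read:

@example
User-agent: *
Disallow: /cgi-bin/
Disallow: /infocgi/
@end example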
Although Wget is not a web robot in the strictest sense of the word, it
can download large parts of the site without the user's intervention to
3231 download an individual page. Because of that, Wget honors RES when
3232 downloading recursively. For instance, when you issue:
@example
wget -r http://www.server.com/
@end example
3238 First the index of @samp{www.server.com} will be downloaded. If Wget
3239 finds that it wants to download more documents from that server, it will
3240 request @samp{http://www.server.com/robots.txt} and, if found, use it
for further downloads.  @file{robots.txt} is loaded only once per each
server.
3244 Until version 1.8, Wget supported the first version of the standard,
3245 written by Martijn Koster in 1994 and available at
3246 @url{http://www.robotstxt.org/wc/norobots.html}. As of version 1.8,
3247 Wget has supported the additional directives specified in the internet
3248 draft @samp{<draft-koster-robots-00.txt>} titled ``A Method for Web
Robots Control''.  The draft, which has as far as I know never made it
to an @sc{rfc}, is available at
3251 @url{http://www.robotstxt.org/wc/norobots-rfc.txt}.
3253 This manual no longer includes the text of the Robot Exclusion Standard.
The second, lesser-known mechanism enables the author of an individual
document to specify whether they want the links from the file to be
followed by a robot.  This is achieved using the @code{META} tag, like
this:

@example
<meta name="robots" content="nofollow">
@end example
3264 This is explained in some detail at
3265 @url{http://www.robotstxt.org/wc/meta-user.html}. Wget supports this
method of robot exclusion in addition to the usual @file{/robots.txt}
exclusion.
3269 If you know what you are doing and really really wish to turn off the
3270 robot exclusion, set the @code{robots} variable to @samp{off} in your
3271 @file{.wgetrc}. You can achieve the same effect from the command line
3272 using the @code{-e} switch, e.g. @samp{wget -e robots=off @var{url}...}.
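In other words, this single @file{.wgetrc} line:

@example
robots = off
@end example

@noindent
has the same effect as adding @samp{-e robots=off} to every command line.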
3274 @node Security Considerations
3275 @section Security Considerations
3278 When using Wget, you must be aware that it sends unencrypted passwords
3279 through the network, which may present a security problem. Here are the
main issues, and some solutions.

@enumerate
@item
3284 The passwords on the command line are visible using @code{ps}. The best
3285 way around it is to use @code{wget -i -} and feed the @sc{url}s to
3286 Wget's standard input, each on a separate line, terminated by @kbd{C-d}.
3287 Another workaround is to use @file{.netrc} to store passwords; however,
3288 storing unencrypted passwords is also considered a security risk.
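For example, since @code{echo} is normally a shell built-in, the
following keeps the password out of the process list (the @sc{url} and
credentials are illustrative):

@example
echo 'ftp://luser:secret@@ftp.example.com/file' | wget -i -
@end example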
@item
Using the insecure @dfn{basic} authentication scheme, unencrypted
3292 passwords are transmitted through the network routers and gateways.
@item
The @sc{ftp} passwords are also in no way encrypted.  There is no good
3296 solution for this at the moment.
@item
Although the ``normal'' output of Wget tries to hide the passwords,
debugging logs show them, in all forms.  This problem is avoided by
being careful when you send debug logs (yes, even when you send them to
me).
@end enumerate
@node Contributors
@section Contributors
3307 @cindex contributors
3310 GNU Wget was written by Hrvoje Nik@v{s}i@'{c} @email{hniksic@@xemacs.org}.
3315 However, its development could never have gone as far as it has, were it
3316 not for the help of many people, either with bug reports, feature
3317 proposals, patches, or letters saying ``Thanks!''.
Special thanks go to the following people (no particular order):

@itemize @bullet
@item
Karsten Thygesen---donated system resources such as the mailing list,
web space, and @sc{ftp} space, along with a lot of time to make these
actually work.
@item
Shawn McHorse---bug reports and patches.
@item
Kaveh R. Ghazi---on-the-fly @code{ansi2knr}-ization.  Lots of
portability fixes.
@item
Gordon Matzigkeit---@file{.netrc} support.
@item
Zlatko @v{C}alu@v{s}i@'{c}, Tomislav Vujec and Dra@v{z}en
3340 Ka@v{c}ar---feature suggestions and ``philosophical'' discussions.
@item
Darko Budor---initial port to Windows.
@item
Antonio Rosella---help and suggestions, plus the Italian translation.
@item
Tomislav Petrovi@'{c}, Mario Miko@v{c}evi@'{c}---many bug reports and
suggestions.
@item
Fran@,{c}ois Pinard---many thorough bug reports and discussions.
@item
Karl Eichwalder---lots of help with internationalization and other
things.
@item
Junio Hamano---donated support for Opie and @sc{http} @code{Digest}
authentication.
@item
The people who provided donations for development, including Brian
Gough.
@item
The following people have provided patches, bug/build reports, useful
3384 suggestions, beta testing services, fan mail and all the other things
3385 that make maintenance so much fun:
3405 Kristijan @v{C}onka@v{s},
3425 Aleksandar Erkalovi@'{c},
3428 Aleksandar Erkalovic,
3448 Erik Magnus Hulthen,
3467 Goran Kezunovi@'{c},
Simos KSenitellis,
3488 Nicol@'{a}s Lichtmeier,
3494 Alexander V. Lukyanov,
Juan Jose Rodrigues,
3544 Szakacsits Szabolcs,
3552 Douglas E. Wegscheid,
Apologies to all whom I accidentally left out, and many thanks to all the
subscribers of the Wget mailing list.
@end itemize
@node Copying
@chapter Copying
@cindex copying
@cindex free software
3573 GNU Wget is licensed under the GNU General Public License (GNU GPL),
3574 which makes it @dfn{free software}. Please note that ``free'' in ``free
3575 software'' refers to liberty, not price. As some people like to point
out, it's the ``free'' of ``free speech'', not the ``free'' of ``free
beer''.
3579 The exact and legally binding distribution terms are spelled out below.
3580 The GPL guarantees that you have the right (freedom) to run and change
3581 GNU Wget and distribute it to others, and even---if you want---charge
3582 money for doing any of those things. With these rights comes the
3583 obligation to distribute the source code along with the software and to
3584 grant your recipients the same rights and impose the same restrictions.
3586 This licensing model is also known as @dfn{open source} because it,
3587 among other things, makes sure that all recipients will receive the
3588 source code along with the program, and be able to improve it. The GNU
3589 project prefers the term ``free software'' for reasons outlined at
3590 @url{http://www.gnu.org/philosophy/free-software-for-freedom.html}.
3592 The exact license terms are defined by this paragraph and the GNU
3593 General Public License it refers to:
@quotation
GNU Wget is free software; you can redistribute it and/or modify it
3597 under the terms of the GNU General Public License as published by the
3598 Free Software Foundation; either version 2 of the License, or (at your
3599 option) any later version.
3601 GNU Wget is distributed in the hope that it will be useful, but WITHOUT
3602 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.
3606 A copy of the GNU General Public License is included as part of this
3607 manual; if you did not receive it, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
@end quotation
3611 In addition to this, this manual is free in the same sense:
@quotation
Permission is granted to copy, distribute and/or modify this document
3615 under the terms of the GNU Free Documentation License, Version 1.2 or
3616 any later version published by the Free Software Foundation; with the
3617 Invariant Sections being ``GNU General Public License'' and ``GNU Free
3618 Documentation License'', with no Front-Cover Texts, and with no
3619 Back-Cover Texts. A copy of the license is included in the section
entitled ``GNU Free Documentation License''.
@end quotation
3623 @c #### Maybe we should wrap these licenses in ifinfo? Stallman says
3624 @c that the GFDL needs to be present in the manual, and to me it would
@c suck to include the license for the manual and not the license for
@c the program itself.
3628 The full texts of the GNU General Public License and of the GNU Free
3629 Documentation License are available below.
@menu
* GNU General Public License::
* GNU Free Documentation License::
@end menu
3636 @node GNU General Public License
3637 @section GNU General Public License
3638 @center Version 2, June 1991
@display
Copyright @copyright{} 1989, 1991 Free Software Foundation, Inc.
675 Mass Ave, Cambridge, MA 02139, USA

Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@end display
3648 @unnumberedsec Preamble
3650 The licenses for most software are designed to take away your
3651 freedom to share and change it. By contrast, the GNU General Public
3652 License is intended to guarantee your freedom to share and change free
3653 software---to make sure the software is free for all its users. This
3654 General Public License applies to most of the Free Software
3655 Foundation's software and to any other program whose authors commit to
3656 using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.)  You can apply it to
your programs, too.
3660 When we speak of free software, we are referring to freedom, not
3661 price. Our General Public Licenses are designed to make sure that you
3662 have the freedom to distribute copies of free software (and charge for
3663 this service if you wish), that you receive source code or can get it
3664 if you want it, that you can change the software or use pieces of it
3665 in new free programs; and that you know you can do these things.
3667 To protect your rights, we need to make restrictions that forbid
3668 anyone to deny you these rights or to ask you to surrender the rights.
3669 These restrictions translate to certain responsibilities for you if you
3670 distribute copies of the software, or if you modify it.
3672 For example, if you distribute copies of such a program, whether
3673 gratis or for a fee, you must give the recipients all the rights that
3674 you have. You must make sure that they, too, receive or can get the
source code.  And you must show them these terms so they know their
rights.
3678 We protect your rights with two steps: (1) copyright the software, and
3679 (2) offer you this license which gives you legal permission to copy,
3680 distribute and/or modify the software.
3682 Also, for each author's protection and ours, we want to make certain
3683 that everyone understands that there is no warranty for this free
3684 software. If the software is modified by someone else and passed on, we
3685 want its recipients to know that what they have is not the original, so
3686 that any problems introduced by others will not reflect on the original
3687 authors' reputations.
3689 Finally, any free program is threatened constantly by software
3690 patents. We wish to avoid the danger that redistributors of a free
3691 program will individually obtain patent licenses, in effect making the
3692 program proprietary. To prevent this, we have made it clear that any
3693 patent must be licensed for everyone's free use or not licensed at all.
3695 The precise terms and conditions for copying, distribution and
3696 modification follow.
3699 @unnumberedsec TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
@enumerate 0
@item
This License applies to any program or other work which contains
3708 a notice placed by the copyright holder saying it may be distributed
3709 under the terms of this General Public License. The ``Program'', below,
3710 refers to any such program or work, and a ``work based on the Program''
3711 means either the Program or any derivative work under copyright law:
3712 that is to say, a work containing the Program or a portion of it,
3713 either verbatim or with modifications and/or translated into another
3714 language. (Hereinafter, translation is included without limitation in
3715 the term ``modification''.) Each licensee is addressed as ``you''.
3717 Activities other than copying, distribution and modification are not
3718 covered by this License; they are outside its scope. The act of
3719 running the Program is not restricted, and the output from the Program
3720 is covered only if its contents constitute a work based on the
3721 Program (independent of having been made by running the Program).
3722 Whether that is true depends on what the Program does.
@item
You may copy and distribute verbatim copies of the Program's
3726 source code as you receive it, in any medium, provided that you
3727 conspicuously and appropriately publish on each copy an appropriate
3728 copyright notice and disclaimer of warranty; keep intact all the
3729 notices that refer to this License and to the absence of any warranty;
3730 and give any other recipients of the Program a copy of this License
3731 along with the Program.
3733 You may charge a fee for the physical act of transferring a copy, and
3734 you may at your option offer warranty protection in exchange for a fee.
@item
You may modify your copy or copies of the Program or any portion
3738 of it, thus forming a work based on the Program, and copy and
3739 distribute such modifications or work under the terms of Section 1
3740 above, provided that you also meet all of these conditions:
@enumerate a
@item
You must cause the modified files to carry prominent notices
3745 stating that you changed the files and the date of any change.
@item
You must cause any work that you distribute or publish, that in
3749 whole or in part contains or is derived from the Program or any
3750 part thereof, to be licensed as a whole at no charge to all third
3751 parties under the terms of this License.
@item
If the modified program normally reads commands interactively
3755 when run, you must cause it, when started running for such
3756 interactive use in the most ordinary way, to print or display an
3757 announcement including an appropriate copyright notice and a
3758 notice that there is no warranty (or else, saying that you provide
3759 a warranty) and that users may redistribute the program under
3760 these conditions, and telling the user how to view a copy of this
3761 License. (Exception: if the Program itself is interactive but
3762 does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
@end enumerate
3766 These requirements apply to the modified work as a whole. If
3767 identifiable sections of that work are not derived from the Program,
3768 and can be reasonably considered independent and separate works in
3769 themselves, then this License, and its terms, do not apply to those
3770 sections when you distribute them as separate works. But when you
3771 distribute the same sections as part of a whole which is a work based
3772 on the Program, the distribution of the whole must be on the terms of
3773 this License, whose permissions for other licensees extend to the
3774 entire whole, and thus to each and every part regardless of who wrote it.
3776 Thus, it is not the intent of this section to claim rights or contest
3777 your rights to work written entirely by you; rather, the intent is to
3778 exercise the right to control the distribution of derivative or
3779 collective works based on the Program.
3781 In addition, mere aggregation of another work not based on the Program
3782 with the Program (or with a work based on the Program) on a volume of
3783 a storage or distribution medium does not bring the other work under
3784 the scope of this License.
@item
You may copy and distribute the Program (or a work based on it,
3788 under Section 2) in object code or executable form under the terms of
3789 Sections 1 and 2 above provided that you also do one of the following:
@enumerate a
@item
Accompany it with the complete corresponding machine-readable
3794 source code, which must be distributed under the terms of Sections
3795 1 and 2 above on a medium customarily used for software interchange; or,
@item
Accompany it with a written offer, valid for at least three
3799 years, to give any third party, for a charge no more than your
3800 cost of physically performing source distribution, a complete
3801 machine-readable copy of the corresponding source code, to be
3802 distributed under the terms of Sections 1 and 2 above on a medium
3803 customarily used for software interchange; or,
@item
Accompany it with the information you received as to the offer
3807 to distribute corresponding source code. (This alternative is
3808 allowed only for noncommercial distribution and only if you
3809 received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
@end enumerate
3813 The source code for a work means the preferred form of the work for
3814 making modifications to it. For an executable work, complete source
3815 code means all the source code for all modules it contains, plus any
3816 associated interface definition files, plus the scripts used to
3817 control compilation and installation of the executable. However, as a
3818 special exception, the source code distributed need not include
3819 anything that is normally distributed (in either source or binary
3820 form) with the major components (compiler, kernel, and so on) of the
3821 operating system on which the executable runs, unless that component
3822 itself accompanies the executable.
3824 If distribution of executable or object code is made by offering
3825 access to copy from a designated place, then offering equivalent
3826 access to copy the source code from the same place counts as
3827 distribution of the source code, even though third parties are not
3828 compelled to copy the source along with the object code.
@item
You may not copy, modify, sublicense, or distribute the Program
3832 except as expressly provided under this License. Any attempt
3833 otherwise to copy, modify, sublicense or distribute the Program is
3834 void, and will automatically terminate your rights under this License.
3835 However, parties who have received copies, or rights, from you under
3836 this License will not have their licenses terminated so long as such
3837 parties remain in full compliance.
@item
You are not required to accept this License, since you have not
3841 signed it. However, nothing else grants you permission to modify or
3842 distribute the Program or its derivative works. These actions are
3843 prohibited by law if you do not accept this License. Therefore, by
3844 modifying or distributing the Program (or any work based on the
3845 Program), you indicate your acceptance of this License to do so, and
3846 all its terms and conditions for copying, distributing or modifying
3847 the Program or works based on it.
@item
Each time you redistribute the Program (or any work based on the
3851 Program), the recipient automatically receives a license from the
3852 original licensor to copy, distribute or modify the Program subject to
3853 these terms and conditions. You may not impose any further
3854 restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
@item
If, as a consequence of a court judgment or allegation of patent
3860 infringement or for any other reason (not limited to patent issues),
3861 conditions are imposed on you (whether by court order, agreement or
3862 otherwise) that contradict the conditions of this License, they do not
3863 excuse you from the conditions of this License. If you cannot
3864 distribute so as to satisfy simultaneously your obligations under this
3865 License and any other pertinent obligations, then as a consequence you
3866 may not distribute the Program at all. For example, if a patent
3867 license would not permit royalty-free redistribution of the Program by
3868 all those who receive copies directly or indirectly through you, then
3869 the only way you could satisfy both it and this License would be to
3870 refrain entirely from distribution of the Program.
3872 If any portion of this section is held invalid or unenforceable under
3873 any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
3877 It is not the purpose of this section to induce you to infringe any
3878 patents or other property right claims or to contest validity of any
3879 such claims; this section has the sole purpose of protecting the
3880 integrity of the free software distribution system, which is
3881 implemented by public license practices. Many people have made
3882 generous contributions to the wide range of software distributed
3883 through that system in reliance on consistent application of that
3884 system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
3888 This section is intended to make thoroughly clear what is believed to
3889 be a consequence of the rest of this License.
@item
If the distribution and/or use of the Program is restricted in
3893 certain countries either by patents or by copyrighted interfaces, the
3894 original copyright holder who places the Program under this License
3895 may add an explicit geographical distribution limitation excluding
3896 those countries, so that distribution is permitted only in or among
3897 countries not thus excluded. In such case, this License incorporates
3898 the limitation as if written in the body of this License.
@item
The Free Software Foundation may publish revised and/or new versions
3902 of the General Public License from time to time. Such new versions will
3903 be similar in spirit to the present version, but may differ in detail to
3904 address new problems or concerns.
3906 Each version is given a distinguishing version number. If the Program
3907 specifies a version number of this License which applies to it and ``any
3908 later version'', you have the option of following the terms and conditions
3909 either of that version or of any later version published by the Free
3910 Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
@item
If you wish to incorporate parts of the Program into other free
3916 programs whose distribution conditions are different, write to the author
3917 to ask for permission. For software which is copyrighted by the Free
3918 Software Foundation, write to the Free Software Foundation; we sometimes
3919 make exceptions for this. Our decision will be guided by the two goals
3920 of preserving the free status of all derivatives of our free software and
3921 of promoting the sharing and reuse of software generally.
3924 @heading NO WARRANTY
@item
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
3933 FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
3934 OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
3935 PROVIDE THE PROGRAM ``AS IS'' WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
3936 OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
3937 MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
3938 TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
3939 PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
3940 REPAIR OR CORRECTION.
@item
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
3944 WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
3945 REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
3946 INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
3947 OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
3948 TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
3949 YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
3950 PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
@end enumerate
3955 @heading END OF TERMS AND CONDITIONS
3962 @unnumberedsec How to Apply These Terms to Your New Programs
3964 If you develop a new program, and you want it to be of the greatest
3965 possible use to the public, the best way to achieve this is to make it
3966 free software which everyone can redistribute and change under these terms.
3968 To do so, attach the following notices to the program. It is safest
3969 to attach them to the start of each source file to most effectively
3970 convey the exclusion of warranty; and each file should have at least
3971 the ``copyright'' line and a pointer to where the full notice is found.
@smallexample
@var{one line to give the program's name and an idea of what it does.}
3975 Copyright (C) 20@var{yy} @var{name of author}
3977 This program is free software; you can redistribute it and/or
3978 modify it under the terms of the GNU General Public License
3979 as published by the Free Software Foundation; either version 2
3980 of the License, or (at your option) any later version.
3982 This program is distributed in the hope that it will be useful,
3983 but WITHOUT ANY WARRANTY; without even the implied warranty of
3984 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3985 GNU General Public License for more details.
3987 You should have received a copy of the GNU General Public License
3988 along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
@end smallexample
3992 Also add information on how to contact you by electronic and paper mail.
3994 If the program is interactive, make it output a short notice like this
3995 when it starts in an interactive mode:
@smallexample
Gnomovision version 69, Copyright (C) 20@var{yy} @var{name of author}
3999 Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
4000 type `show w'. This is free software, and you are welcome
to redistribute it under certain conditions; type `show c'
for details.
@end smallexample
4005 The hypothetical commands @samp{show w} and @samp{show c} should show
4006 the appropriate parts of the General Public License. Of course, the
4007 commands you use may be called something other than @samp{show w} and
@samp{show c}; they could even be mouse-clicks or menu items---whatever
suits your program.
4011 You should also get your employer (if you work as a programmer) or your
4012 school, if any, to sign a ``copyright disclaimer'' for the program, if
4013 necessary. Here is a sample; alter the names:
@smallexample
Yoyodyne, Inc., hereby disclaims all copyright
4018 interest in the program `Gnomovision'
(which makes passes at compilers) written
by James Hacker.
4022 @var{signature of Ty Coon}, 1 April 1989
Ty Coon, President of Vice
@end smallexample
4027 This General Public License does not permit incorporating your program into
4028 proprietary programs. If your program is a subroutine library, you may
4029 consider it more useful to permit linking proprietary applications with the
4030 library. If this is what you want to do, use the GNU Library General
4031 Public License instead of this License.
@node Concept Index
@unnumbered Concept Index