1 \input texinfo @c -*-texinfo-*-
7 @settitle GNU Wget @value{VERSION} Manual
8 @c Disable the monstrous rectangles beside overfull hbox-es.
10 @c Use `odd' to print double-sided.
15 @c Remove this if you don't use A4 paper.
19 @c Title for man page. The weird way texi2pod.pl is written requires
20 @c the preceding @set.
22 @c man title Wget The non-interactive network downloader.
24 @dircategory Network Applications
26 * Wget: (wget). The non-interactive network downloader.
This file documents the GNU Wget utility for downloading network
33 @c man begin COPYRIGHT
34 Copyright @copyright{} 1996--2005 Free Software Foundation, Inc.
36 Permission is granted to make and distribute verbatim copies of
37 this manual provided the copyright notice and this permission notice
38 are preserved on all copies.
41 Permission is granted to process this file through TeX and print the
42 results, provided the printed document carries a copying permission
43 notice identical to this one except for the removal of this paragraph
44 (this paragraph not being relevant to the printed manual).
46 Permission is granted to copy, distribute and/or modify this document
47 under the terms of the GNU Free Documentation License, Version 1.2 or
48 any later version published by the Free Software Foundation; with the
49 Invariant Sections being ``GNU General Public License'' and ``GNU Free
50 Documentation License'', with no Front-Cover Texts, and with no
51 Back-Cover Texts. A copy of the license is included in the section
52 entitled ``GNU Free Documentation License''.
57 @title GNU Wget @value{VERSION}
58 @subtitle The non-interactive download utility
59 @subtitle Updated for Wget @value{VERSION}, @value{UPDATED}
60 @author by Hrvoje Nik@v{s}i@'{c} and others
64 Originally written by Hrvoje Niksic <hniksic@xemacs.org>.
67 GNU Info entry for @file{wget}.
72 @vskip 0pt plus 1filll
73 Copyright @copyright{} 1996--2005, Free Software Foundation, Inc.
75 Permission is granted to copy, distribute and/or modify this document
76 under the terms of the GNU Free Documentation License, Version 1.2 or
77 any later version published by the Free Software Foundation; with the
78 Invariant Sections being ``GNU General Public License'' and ``GNU Free
79 Documentation License'', with no Front-Cover Texts, and with no
80 Back-Cover Texts. A copy of the license is included in the section
81 entitled ``GNU Free Documentation License''.
86 @top Wget @value{VERSION}
88 This manual documents version @value{VERSION} of GNU Wget, the freely
89 available utility for network downloads.
91 Copyright @copyright{} 1996--2005 Free Software Foundation, Inc.
94 * Overview:: Features of Wget.
95 * Invoking:: Wget command-line arguments.
96 * Recursive Download:: Downloading interlinked pages.
97 * Following Links:: The available methods of chasing links.
98 * Time-Stamping:: Mirroring according to time-stamps.
99 * Startup File:: Wget's initialization file.
100 * Examples:: Examples of usage.
101 * Various:: The stuff that doesn't fit anywhere else.
102 * Appendices:: Some useful references.
103 * Copying:: You may give out copies of Wget and of this manual.
104 * Concept Index:: Topics covered by this manual.
113 @c man begin DESCRIPTION
114 GNU Wget is a free utility for non-interactive download of files from
115 the Web. It supports @sc{http}, @sc{https}, and @sc{ftp} protocols, as
116 well as retrieval through @sc{http} proxies.
119 This chapter is a partial overview of Wget's features.
123 @c man begin DESCRIPTION
124 Wget is non-interactive, meaning that it can work in the background,
125 while the user is not logged on. This allows you to start a retrieval
126 and disconnect from the system, letting Wget finish the work. By
contrast, most Web browsers require the user's constant presence,
128 which can be a great hindrance when transferring a lot of data.
133 @c man begin DESCRIPTION
137 @c man begin DESCRIPTION
138 Wget can follow links in @sc{html} and @sc{xhtml} pages and create local
139 versions of remote web sites, fully recreating the directory structure of
140 the original site. This is sometimes referred to as ``recursive
141 downloading.'' While doing that, Wget respects the Robot Exclusion
142 Standard (@file{/robots.txt}). Wget can be instructed to convert the
links in downloaded @sc{html} files to the local files for offline
viewing.
148 File name wildcard matching and recursive mirroring of directories are
149 available when retrieving via @sc{ftp}. Wget can read the time-stamp
150 information given by both @sc{http} and @sc{ftp} servers, and store it
151 locally. Thus Wget can see if the remote file has changed since last
152 retrieval, and automatically retrieve the new version if it has. This
makes Wget suitable for mirroring of @sc{ftp} sites, as well as home
pages.
158 @c man begin DESCRIPTION
162 @c man begin DESCRIPTION
163 Wget has been designed for robustness over slow or unstable network
164 connections; if a download fails due to a network problem, it will
165 keep retrying until the whole file has been retrieved. If the server
166 supports regetting, it will instruct the server to continue the
167 download from where it left off.
171 Wget supports proxy servers, which can lighten the network load, speed
172 up retrieval and provide access behind firewalls. However, if you are
173 behind a firewall that requires that you use a socks style gateway,
174 you can get the socks library and build Wget with support for socks.
Wget uses passive @sc{ftp} downloading by default, active @sc{ftp}
being an option.
179 Wget supports IP version 6, the next generation of IP. IPv6 is
180 autodetected at compile-time, and can be disabled at either build or
181 run time. Binaries built with IPv6 support work well in both
182 IPv4-only and dual family environments.
185 Built-in features offer mechanisms to tune which links you wish to follow
186 (@pxref{Following Links}).
189 The progress of individual downloads is traced using a progress gauge.
190 Interactive downloads are tracked using a ``thermometer''-style gauge,
191 whereas non-interactive ones are traced with dots, each dot
192 representing a fixed amount of data received (1KB by default). Either
193 gauge can be customized to your preferences.
196 Most of the features are fully configurable, either through command line
197 options, or via the initialization file @file{.wgetrc} (@pxref{Startup
198 File}). Wget allows you to define @dfn{global} startup files
199 (@file{/usr/local/etc/wgetrc} by default) for site settings.
204 @item /usr/local/etc/wgetrc
205 Default location of the @dfn{global} startup file.
214 Finally, GNU Wget is free software. This means that everyone may use
215 it, redistribute it and/or modify it under the terms of the GNU General
216 Public License, as published by the Free Software Foundation
227 By default, Wget is very simple to invoke. The basic syntax is:
230 @c man begin SYNOPSIS
231 wget [@var{option}]@dots{} [@var{URL}]@dots{}
235 Wget will simply download all the @sc{url}s specified on the command
236 line. @var{URL} is a @dfn{Uniform Resource Locator}, as defined below.
238 However, you may wish to change some of the default parameters of
Wget.  You can do it in two ways: permanently, adding the appropriate
command to @file{.wgetrc} (@pxref{Startup File}), or specifying it on
the command line.
246 * Basic Startup Options::
247 * Logging and Input File Options::
249 * Directory Options::
251 * HTTPS (SSL/TLS) Options::
253 * Recursive Retrieval Options::
254 * Recursive Accept/Reject Options::
262 @dfn{URL} is an acronym for Uniform Resource Locator. A uniform
263 resource locator is a compact string representation for a resource
264 available via the Internet. Wget recognizes the @sc{url} syntax as per
@sc{rfc1738}.  This is the most widely used form (square brackets denote
optional parts):
269 http://host[:port]/directory/file
270 ftp://host[:port]/directory/file
273 You can also encode your username and password within a @sc{url}:
276 ftp://user:password@@host/path
277 http://user:password@@host/path
280 Either @var{user} or @var{password}, or both, may be left out. If you
281 leave out either the @sc{http} username or password, no authentication
282 will be sent. If you leave out the @sc{ftp} username, @samp{anonymous}
283 will be used. If you leave out the @sc{ftp} password, your email
284 address will be supplied as a default password.@footnote{If you have a
@file{.netrc} file in your home directory, the password will also be
searched for there.}
288 @strong{Important Note}: if you specify a password-containing @sc{url}
289 on the command line, the username and password will be plainly visible
290 to all users on the system, by way of @code{ps}. On multi-user systems,
291 this is a big security risk. To work around it, use @code{wget -i -}
292 and feed the @sc{url}s to Wget's standard input, each on a separate
293 line, terminated by @kbd{C-d}.
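For example, a @sc{url} of the general form shown above can be supplied
through a pipe (the host and path here are placeholders):

@example
echo 'http://user:password@@host/path' | wget -i -
@end example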
295 You can encode unsafe characters in a @sc{url} as @samp{%xy}, @code{xy}
296 being the hexadecimal representation of the character's @sc{ascii}
297 value. Some common unsafe characters include @samp{%} (quoted as
298 @samp{%25}), @samp{:} (quoted as @samp{%3A}), and @samp{@@} (quoted as
@samp{%40}).  Refer to @sc{rfc1738} for a comprehensive list of unsafe
characters.
302 Wget also supports the @code{type} feature for @sc{ftp} @sc{url}s. By
303 default, @sc{ftp} documents are retrieved in the binary mode (type
304 @samp{i}), which means that they are downloaded unchanged. Another
305 useful mode is the @samp{a} (@dfn{ASCII}) mode, which converts the line
306 delimiters between the different operating systems, and is thus useful
307 for text files. Here is an example:
310 ftp://host/directory/file;type=a
313 Two alternative variants of @sc{url} specification are also supported,
because of historical (hysterical?) reasons and their widespread use.
316 @sc{ftp}-only syntax (supported by @code{NcFTP}):
321 @sc{http}-only syntax (introduced by @code{Netscape}):
326 These two alternative forms are deprecated, and may cease being
327 supported in the future.
329 If you do not understand the difference between these notations, or do
330 not know which one to use, just use the plain ordinary format you use
331 with your favorite browser, like @code{Lynx} or @code{Netscape}.
336 @section Option Syntax
337 @cindex option syntax
338 @cindex syntax of options
340 Since Wget uses GNU getopt to process command-line arguments, every
341 option has a long form along with the short one. Long options are
342 more convenient to remember, but take time to type. You may freely
343 mix different option styles, or specify options after the command-line
344 arguments. Thus you may write:
347 wget -r --tries=10 http://fly.srk.fer.hr/ -o log
350 The space between the option accepting an argument and the argument may
be omitted.  Instead of @samp{-o log} you can write @samp{-olog}.
You may put several options that do not require arguments together,
like @samp{wget -drc @var{URL}}.

This is completely equivalent to:
363 wget -d -r -c @var{URL}
366 Since the options can be specified after the arguments, you may
367 terminate them with @samp{--}. So the following will try to download
368 @sc{url} @samp{-x}, reporting failure to @file{log}:
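@example
wget -o log -- -x
@end example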
374 The options that accept comma-separated lists all respect the convention
375 that specifying an empty list clears its value. This can be useful to
376 clear the @file{.wgetrc} settings. For instance, if your @file{.wgetrc}
377 sets @code{exclude_directories} to @file{/cgi-bin}, the following
378 example will first reset it, and then set it to exclude @file{/~nobody}
379 and @file{/~somebody}. You can also clear the lists in @file{.wgetrc}
380 (@pxref{Wgetrc Syntax}).
383 wget -X '' -X /~nobody,/~somebody
386 Most options that do not accept arguments are @dfn{boolean} options,
387 so named because their state can be captured with a yes-or-no
388 (``boolean'') variable. For example, @samp{--follow-ftp} tells Wget
389 to follow FTP links from HTML files and, on the other hand,
390 @samp{--no-glob} tells it not to perform file globbing on FTP URLs. A
391 boolean option is either @dfn{affirmative} or @dfn{negative}
(beginning with @samp{--no}).  All such options share several
properties.
395 Unless stated otherwise, it is assumed that the default behavior is
396 the opposite of what the option accomplishes. For example, the
397 documented existence of @samp{--follow-ftp} assumes that the default
398 is to @emph{not} follow FTP links from HTML pages.
Affirmative options can be negated by prepending @samp{--no-} to
401 the option name; negative options can be negated by omitting the
402 @samp{--no-} prefix. This might seem superfluous---if the default for
403 an affirmative option is to not do something, then why provide a way
404 to explicitly turn it off? But the startup file may in fact change
the default.  For instance, using @code{follow_ftp = on} in
@file{.wgetrc} makes Wget @emph{follow} FTP links by default, and
using @samp{--no-follow-ftp} is the only way to restore the factory
408 default from the command line.
410 @node Basic Startup Options
411 @section Basic Startup Options
416 Display the version of Wget.
420 Print a help message describing all of Wget's command-line options.
424 Go to background immediately after startup. If no output file is
specified via @samp{-o}, output is redirected to @file{wget-log}.
427 @cindex execute wgetrc command
428 @item -e @var{command}
429 @itemx --execute @var{command}
430 Execute @var{command} as if it were a part of @file{.wgetrc}
431 (@pxref{Startup File}). A command thus invoked will be executed
432 @emph{after} the commands in @file{.wgetrc}, thus taking precedence over
433 them. If you need to specify more than one wgetrc command, use multiple
434 instances of @samp{-e}.
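For instance, a command available only through the startup file, such as
@code{robots}, can still be changed for a single invocation:

@example
wget -e robots=off -r http://@var{host}/
@end example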
438 @node Logging and Input File Options
439 @section Logging and Input File Options
444 @item -o @var{logfile}
445 @itemx --output-file=@var{logfile}
Log all messages to @var{logfile}.  The messages are normally reported
to standard error.
449 @cindex append to log
450 @item -a @var{logfile}
451 @itemx --append-output=@var{logfile}
452 Append to @var{logfile}. This is the same as @samp{-o}, only it appends
453 to @var{logfile} instead of overwriting the old log file. If
454 @var{logfile} does not exist, a new file is created.
459 Turn on debug output, meaning various information important to the
460 developers of Wget if it does not work properly. Your system
461 administrator may have chosen to compile Wget without debug support, in
462 which case @samp{-d} will not work. Please note that compiling with
debug support is always safe---Wget compiled with debug support will
464 @emph{not} print any debug info unless requested with @samp{-d}.
@xref{Reporting Bugs}, for more information on how to use @samp{-d} for
sending bug reports.
471 Turn off Wget's output.
Turn on verbose output, with all the available data.  The default output
is verbose.
481 Turn off verbose without being completely quiet (use @samp{-q} for
that), which means that error messages and basic information still get
printed.
487 @itemx --input-file=@var{file}
488 Read @sc{url}s from @var{file}. If @samp{-} is specified as
489 @var{file}, @sc{url}s are read from the standard input. (Use
490 @samp{./-} to read from a file literally named @samp{-}.)
492 If this function is used, no @sc{url}s need be present on the command
493 line. If there are @sc{url}s both on the command line and in an input
file, those on the command line will be the first ones to be
495 retrieved. The @var{file} need not be an @sc{html} document (but no
harm if it is)---it is enough if the @sc{url}s are just listed
sequentially.
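For example, assuming @file{url-list.txt} contains one @sc{url} per line:

@example
wget -i url-list.txt
@end example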
499 However, if you specify @samp{--force-html}, the document will be
500 regarded as @samp{html}. In that case you may have problems with
501 relative links, which you can solve either by adding @code{<base
502 href="@var{url}">} to the documents or by specifying
503 @samp{--base=@var{url}} on the command line.
508 When input is read from a file, force it to be treated as an @sc{html}
509 file. This enables you to retrieve relative links from existing
510 @sc{html} files on your local disk, by adding @code{<base
href="@var{url}">} to @sc{html}, or using the @samp{--base} command-line
option.
514 @cindex base for relative links in input file
516 @itemx --base=@var{URL}
517 Prepends @var{URL} to relative links read from the file specified with
518 the @samp{-i} option.
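For instance, relative links in a local @sc{html} file can be resolved
against a base @sc{url} like this (the file name is illustrative):

@example
wget --force-html --base=http://@var{host}/@var{dir}/ -i links.html
@end example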
521 @node Download Options
522 @section Download Options
526 @cindex client IP address
527 @cindex IP address, client
528 @item --bind-address=@var{ADDRESS}
529 When making client TCP/IP connections, bind to @var{ADDRESS} on
530 the local machine. @var{ADDRESS} may be specified as a hostname or IP
address.  This option can be useful if your machine is bound to multiple
IP addresses.
536 @cindex number of retries
537 @item -t @var{number}
538 @itemx --tries=@var{number}
539 Set number of retries to @var{number}. Specify 0 or @samp{inf} for
540 infinite retrying. The default is to retry 20 times, with the exception
541 of fatal errors like ``connection refused'' or ``not found'' (404),
542 which are not retried.
545 @itemx --output-document=@var{file}
546 The documents will not be written to the appropriate files, but all
547 will be concatenated together and written to @var{file}. If @samp{-}
548 is used as @var{file}, documents will be printed to standard output,
549 disabling link conversion. (Use @samp{./-} to print to a file
550 literally named @samp{-}.)
552 Note that a combination with @samp{-k} is only well-defined for
553 downloading a single document.
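For instance, the retrieved document can be piped straight into another
program:

@example
wget -O - http://@var{host}/@var{file} | wc -l
@end example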
555 @cindex clobbering, file
556 @cindex downloading multiple times
560 If a file is downloaded more than once in the same directory, Wget's
561 behavior depends on a few options, including @samp{-nc}. In certain
562 cases, the local file will be @dfn{clobbered}, or overwritten, upon
563 repeated download. In other cases it will be preserved.
565 When running Wget without @samp{-N}, @samp{-nc}, or @samp{-r},
566 downloading the same file in the same directory will result in the
567 original copy of @var{file} being preserved and the second copy being
568 named @samp{@var{file}.1}. If that file is downloaded yet again, the
569 third copy will be named @samp{@var{file}.2}, and so on. When
570 @samp{-nc} is specified, this behavior is suppressed, and Wget will
571 refuse to download newer copies of @samp{@var{file}}. Therefore,
572 ``@code{no-clobber}'' is actually a misnomer in this mode---it's not
573 clobbering that's prevented (as the numeric suffixes were already
preventing clobbering), but rather the multiple version saving that's
prevented.
577 When running Wget with @samp{-r}, but without @samp{-N} or @samp{-nc},
578 re-downloading a file will result in the new copy simply overwriting the
579 old. Adding @samp{-nc} will prevent this behavior, instead causing the
original version to be preserved and any newer copies on the server to
be ignored.
583 When running Wget with @samp{-N}, with or without @samp{-r}, the
584 decision as to whether or not to download a newer copy of a file depends
585 on the local and remote timestamp and size of the file
(@pxref{Time-Stamping}).  @samp{-nc} may not be specified at the same
time as @samp{-N}.
589 Note that when @samp{-nc} is specified, files with the suffixes
590 @samp{.html} or @samp{.htm} will be loaded from the local disk and
591 parsed as if they had been retrieved from the Web.
593 @cindex continue retrieval
594 @cindex incomplete downloads
595 @cindex resume download
598 Continue getting a partially-downloaded file. This is useful when you
599 want to finish up a download started by a previous instance of Wget, or
600 by another program. For instance:
603 wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
606 If there is a file named @file{ls-lR.Z} in the current directory, Wget
607 will assume that it is the first portion of the remote file, and will
608 ask the server to continue the retrieval from an offset equal to the
609 length of the local file.
611 Note that you don't need to specify this option if you just want the
612 current invocation of Wget to retry downloading a file should the
613 connection be lost midway through. This is the default behavior.
614 @samp{-c} only affects resumption of downloads started @emph{prior} to
615 this invocation of Wget, and whose local files are still sitting around.
617 Without @samp{-c}, the previous example would just download the remote
file to @file{ls-lR.Z.1}, leaving the truncated @file{ls-lR.Z} file
alone.
621 Beginning with Wget 1.7, if you use @samp{-c} on a non-empty file, and
622 it turns out that the server does not support continued downloading,
623 Wget will refuse to start the download from scratch, which would
624 effectively ruin existing contents. If you really want the download to
625 start from scratch, remove the file.
627 Also beginning with Wget 1.7, if you use @samp{-c} on a file which is of
628 equal size as the one on the server, Wget will refuse to download the
629 file and print an explanatory message. The same happens when the file
630 is smaller on the server than locally (presumably because it was changed
631 on the server since your last download attempt)---because ``continuing''
632 is not meaningful, no download occurs.
634 On the other side of the coin, while using @samp{-c}, any file that's
635 bigger on the server than locally will be considered an incomplete
636 download and only @code{(length(remote) - length(local))} bytes will be
637 downloaded and tacked onto the end of the local file. This behavior can
638 be desirable in certain cases---for instance, you can use @samp{wget -c}
639 to download just the new portion that's been appended to a data
640 collection or log file.
642 However, if the file is bigger on the server because it's been
643 @emph{changed}, as opposed to just @emph{appended} to, you'll end up
644 with a garbled file. Wget has no way of verifying that the local file
645 is really a valid prefix of the remote file. You need to be especially
646 careful of this when using @samp{-c} in conjunction with @samp{-r},
since every file will be considered an ``incomplete download'' candidate.
649 Another instance where you'll get a garbled file if you try to use
650 @samp{-c} is if you have a lame @sc{http} proxy that inserts a
651 ``transfer interrupted'' string into the local file. In the future a
652 ``rollback'' option may be added to deal with this case.
654 Note that @samp{-c} only works with @sc{ftp} servers and with @sc{http}
655 servers that support the @code{Range} header.
657 @cindex progress indicator
659 @item --progress=@var{type}
660 Select the type of the progress indicator you wish to use. Legal
661 indicators are ``dot'' and ``bar''.
663 The ``bar'' indicator is used by default. It draws an @sc{ascii} progress
bar graphic (a.k.a ``thermometer'' display) indicating the status of
retrieval.  If the output is not a TTY, the ``dot'' indicator will be
used by default.
668 Use @samp{--progress=dot} to switch to the ``dot'' display. It traces
669 the retrieval by printing dots on the screen, each dot representing a
670 fixed amount of downloaded data.
672 When using the dotted retrieval, you may also set the @dfn{style} by
673 specifying the type as @samp{dot:@var{style}}. Different styles assign
674 different meaning to one dot. With the @code{default} style each dot
675 represents 1K, there are ten dots in a cluster and 50 dots in a line.
676 The @code{binary} style has a more ``computer''-like orientation---8K
dots, 16-dot clusters and 48 dots per line (so each line represents
384K).  The @code{mega} style is suitable for downloading very large
679 files---each dot represents 64K retrieved, there are eight dots in a
680 cluster, and 48 dots on each line (so each line contains 3M).
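For example, to use the @code{mega} style while fetching a large file:

@example
wget --progress=dot:mega http://@var{host}/@var{big-file}
@end example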
682 Note that you can set the default style using the @code{progress}
683 command in @file{.wgetrc}. That setting may be overridden from the
684 command line. The exception is that, when the output is not a TTY, the
685 ``dot'' progress will be favored over ``bar''. To force the bar output,
686 use @samp{--progress=bar:force}.
689 @itemx --timestamping
690 Turn on time-stamping. @xref{Time-Stamping}, for details.
692 @cindex server response, print
694 @itemx --server-response
Print the headers sent by @sc{http} servers and responses sent by
@sc{ftp} servers.
698 @cindex Wget as spider
701 When invoked with this option, Wget will behave as a Web @dfn{spider},
702 which means that it will not download the pages, just check that they
703 are there. For example, you can use Wget to check your bookmarks:
706 wget --spider --force-html -i bookmarks.html
709 This feature needs much more work for Wget to get close to the
710 functionality of real web spiders.
714 @itemx --timeout=@var{seconds}
715 Set the network timeout to @var{seconds} seconds. This is equivalent
716 to specifying @samp{--dns-timeout}, @samp{--connect-timeout}, and
717 @samp{--read-timeout}, all at the same time.
719 When interacting with the network, Wget can check for timeout and
720 abort the operation if it takes too long. This prevents anomalies
721 like hanging reads and infinite connects. The only timeout enabled by
722 default is a 900-second read timeout. Setting a timeout to 0 disables
723 it altogether. Unless you know what you are doing, it is best not to
724 change the default timeout settings.
726 All timeout-related options accept decimal values, as well as
727 subsecond values. For example, @samp{0.1} seconds is a legal (though
728 unwise) choice of timeout. Subsecond timeouts are useful for checking
729 server response times or for testing network latency.
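For example, to give up on unresponsive servers sooner than the default
allows (the value is illustrative):

@example
wget --timeout=15 http://@var{host}/@var{file}
@end example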
733 @item --dns-timeout=@var{seconds}
734 Set the DNS lookup timeout to @var{seconds} seconds. DNS lookups that
735 don't complete within the specified time will fail. By default, there
is no timeout on DNS lookups, other than that implemented by system
libraries.
739 @cindex connect timeout
740 @cindex timeout, connect
741 @item --connect-timeout=@var{seconds}
742 Set the connect timeout to @var{seconds} seconds. TCP connections that
743 take longer to establish will be aborted. By default, there is no
744 connect timeout, other than that implemented by system libraries.
747 @cindex timeout, read
748 @item --read-timeout=@var{seconds}
749 Set the read (and write) timeout to @var{seconds} seconds. The
``time'' of this timeout refers to @dfn{idle time}: if, at any point in
751 the download, no data is received for more than the specified number
752 of seconds, reading fails and the download is restarted. This option
753 does not directly affect the duration of the entire download.
755 Of course, the remote server may choose to terminate the connection
sooner than this option requires.  The default read timeout is 900
seconds.
759 @cindex bandwidth, limit
761 @cindex limit bandwidth
762 @item --limit-rate=@var{amount}
763 Limit the download speed to @var{amount} bytes per second. Amount may
764 be expressed in bytes, kilobytes with the @samp{k} suffix, or megabytes
765 with the @samp{m} suffix. For example, @samp{--limit-rate=20k} will
766 limit the retrieval rate to 20KB/s. This is useful when, for whatever
767 reason, you don't want Wget to consume the entire available bandwidth.
769 This option allows the use of decimal numbers, usually in conjunction
with power suffixes; for example, @samp{--limit-rate=2.5k} is a legal
value.
773 Note that Wget implements the limiting by sleeping the appropriate
774 amount of time after a network read that took less time than specified
775 by the rate. Eventually this strategy causes the TCP transfer to slow
776 down to approximately the specified rate. However, it may take some
777 time for this balance to be achieved, so don't be surprised if limiting
778 the rate doesn't work well with very small files.
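For example, to cap the transfer rate at roughly 300KB/s (an arbitrary
figure):

@example
wget --limit-rate=300k http://@var{host}/@var{large-file}
@end example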
782 @item -w @var{seconds}
783 @itemx --wait=@var{seconds}
784 Wait the specified number of seconds between the retrievals. Use of
785 this option is recommended, as it lightens the server load by making the
786 requests less frequent. Instead of in seconds, the time can be
specified in minutes using the @code{m} suffix, in hours using the
@code{h} suffix, or in days using the @code{d} suffix.
790 Specifying a large value for this option is useful if the network or the
791 destination host is down, so that Wget can wait long enough to
792 reasonably expect the network error to be fixed before the retry. The
waiting interval specified by this option is influenced by
794 @code{--random-wait}, which see.
796 @cindex retries, waiting between
797 @cindex waiting between retries
798 @item --waitretry=@var{seconds}
799 If you don't want Wget to wait between @emph{every} retrieval, but only
800 between retries of failed downloads, you can use this option. Wget will
801 use @dfn{linear backoff}, waiting 1 second after the first failure on a
802 given file, then waiting 2 seconds after the second failure on that
803 file, up to the maximum number of @var{seconds} you specify. Therefore,
a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) = 55
seconds per file.
Note that this option is turned on by default in the global
@file{wgetrc} file.
813 Some web sites may perform log analysis to identify retrieval programs
814 such as Wget by looking for statistically significant similarities in
815 the time between requests. This option causes the time between requests
816 to vary between 0.5 and 1.5 * @var{wait} seconds, where @var{wait} was
817 specified using the @samp{--wait} option, in order to mask Wget's
818 presence from such analysis.
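For example, a polite recursive retrieval might combine the two options
(the values are illustrative):

@example
wget --wait=2 --random-wait -r http://@var{host}/
@end example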
820 A 2001 article in a publication devoted to development on a popular
821 consumer platform provided code to perform this analysis on the fly.
822 Its author suggested blocking at the class C address level to ensure
automated retrieval programs were blocked despite changing DHCP-supplied
addresses.
826 The @samp{--random-wait} option was inspired by this ill-advised
recommendation to block many unrelated users from a web site due to the
actions of one.
Don't use proxies, even if the appropriate @code{*_proxy} environment
variable is defined.
For more information about the use of proxies with Wget, see @ref{Proxies}.
839 @itemx --quota=@var{quota}
840 Specify download quota for automatic retrievals. The value can be
841 specified in bytes (default), kilobytes (with @samp{k} suffix), or
842 megabytes (with @samp{m} suffix).
844 Note that quota will never affect downloading a single file. So if you
845 specify @samp{wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz}, all of the
846 @file{ls-lR.gz} will be downloaded. The same goes even when several
847 @sc{url}s are specified on the command-line. However, quota is
848 respected when retrieving either recursively, or from an input file.
849 Thus you may safely type @samp{wget -Q2m -i sites}---download will be
850 aborted when the quota is exceeded.
852 Setting quota to 0 or to @samp{inf} unlimits the download quota.
855 @cindex caching of DNS lookups
857 Turn off caching of DNS lookups. Normally, Wget remembers the IP
858 addresses it looked up from DNS so it doesn't have to repeatedly
859 contact the DNS server for the same (typically small) set of hosts it
retrieves from.  This cache exists in memory only; a new Wget run will
contact DNS again.
863 However, it has been reported that in some situations it is not
864 desirable to cache host names, even for the duration of a
865 short-running application like Wget. With this option Wget issues a
866 new DNS lookup (more precisely, a new call to @code{gethostbyname} or
867 @code{getaddrinfo}) each time it makes a new connection. Please note
868 that this option will @emph{not} affect caching that might be
performed by the resolving library or by an external caching layer,
such as NSCD.
If you don't understand exactly what this option does, you probably
don't need it.
875 @cindex file names, restrict
876 @cindex Windows file names
877 @item --restrict-file-names=@var{mode}
878 Change which characters found in remote URLs may show up in local file
879 names generated from those URLs. Characters that are @dfn{restricted}
880 by this option are escaped, i.e. replaced with @samp{%HH}, where
@samp{HH} is the hexadecimal number that corresponds to the restricted
character.
884 By default, Wget escapes the characters that are not valid as part of
885 file names on your operating system, as well as control characters that
886 are typically unprintable. This option is useful for changing these
887 defaults, either because you are downloading to a non-native partition,
888 or because you want to disable escaping of the control characters.
890 When mode is set to ``unix'', Wget escapes the character @samp{/} and
891 the control characters in the ranges 0--31 and 128--159. This is the
default on Unix-like operating systems.
894 When mode is set to ``windows'', Wget escapes the characters @samp{\},
895 @samp{|}, @samp{/}, @samp{:}, @samp{?}, @samp{"}, @samp{*}, @samp{<},
896 @samp{>}, and the control characters in the ranges 0--31 and 128--159.
897 In addition to this, Wget in Windows mode uses @samp{+} instead of
898 @samp{:} to separate host and port in local file names, and uses
899 @samp{@@} instead of @samp{?} to separate the query portion of the file
900 name from the rest. Therefore, a URL that would be saved as
901 @samp{www.xemacs.org:4300/search.pl?input=blah} in Unix mode would be
902 saved as @samp{www.xemacs.org+4300/search.pl@@input=blah} in Windows
903 mode. This mode is the default on Windows.
905 If you append @samp{,nocontrol} to the mode, as in
906 @samp{unix,nocontrol}, escaping of the control characters is also
907 switched off. You can use @samp{--restrict-file-names=nocontrol} to
908 turn off escaping of control characters without affecting the choice of
909 the OS to use as file name restriction mode.
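For example, to apply the Windows restrictions while leaving the control
characters unescaped:

@example
wget --restrict-file-names=windows,nocontrol http://@var{host}/
@end example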
916 Force connecting to IPv4 or IPv6 addresses. With @samp{--inet4-only}
917 or @samp{-4}, Wget will only connect to IPv4 hosts, ignoring AAAA
918 records in DNS, and refusing to connect to IPv6 addresses specified in
919 URLs. Conversely, with @samp{--inet6-only} or @samp{-6}, Wget will
920 only connect to IPv6 hosts and ignore A records and IPv4 addresses.
Neither option should be needed normally.  By default, an IPv6-aware
923 Wget will use the address family specified by the host's DNS record.
924 If the DNS responds with both IPv4 and IPv6 addresses, Wget will try
925 them in sequence until it finds one it can connect to. (Also see
926 @code{--prefer-family} option described below.)
928 These options can be used to deliberately force the use of IPv4 or
929 IPv6 address families on dual family systems, usually to aid debugging
930 or to deal with broken network configuration. Only one of
931 @samp{--inet6-only} and @samp{--inet4-only} may be specified at the
same time.  Neither option is available in Wget compiled without IPv6
support.
935 @item --prefer-family=IPv4/IPv6/none
When given a choice of several addresses, connect to the addresses
with the specified address family first.  IPv4 addresses are preferred
by default.
940 This avoids spurious errors and connect attempts when accessing hosts
941 that resolve to both IPv6 and IPv4 addresses from IPv4 networks. For
942 example, @samp{www.kame.net} resolves to
943 @samp{2001:200:0:8002:203:47ff:fea5:3085} and to
944 @samp{203.178.141.194}. When the preferred family is @code{IPv4}, the
945 IPv4 address is used first; when the preferred family is @code{IPv6},
946 the IPv6 address is used first; if the specified value is @code{none},
947 the address order returned by DNS is used without change.
949 Unlike @samp{-4} and @samp{-6}, this option doesn't inhibit access to
950 any address family, it only changes the @emph{order} in which the
951 addresses are accessed. Also note that the reordering performed by
952 this option is @dfn{stable}---it doesn't affect order of addresses of
953 the same family. That is, the relative order of all IPv4 addresses
954 and of all IPv6 addresses remains intact in all cases.
956 @item --retry-connrefused
957 Consider ``connection refused'' a transient error and try again.
958 Normally Wget gives up on a URL when it is unable to connect to the
959 site because failure to connect is taken as a sign that the server is
960 not running at all and that retries would not help. This option is
961 for mirroring unreliable sites whose servers tend to disappear for
962 short periods of time.
966 @cindex authentication
967 @item --user=@var{user}
968 @itemx --password=@var{password}
969 Specify the username @var{user} and password @var{password} for both
970 @sc{ftp} and @sc{http} file retrieval. These parameters can be overridden
971 using the @samp{--ftp-user} and @samp{--ftp-password} options for
972 @sc{ftp} connections and the @samp{--http-user} and @samp{--http-password}
973 options for @sc{http} connections.
976 @node Directory Options
977 @section Directory Options
981 @itemx --no-directories
982 Do not create a hierarchy of directories when retrieving recursively.
983 With this option turned on, all files will get saved to the current
984 directory, without clobbering (if a name shows up more than once, the
985 filenames will get extensions @samp{.n}).
988 @itemx --force-directories
989 The opposite of @samp{-nd}---create a hierarchy of directories, even if
990 one would not have been created otherwise. E.g. @samp{wget -x
991 http://fly.srk.fer.hr/robots.txt} will save the downloaded file to
992 @file{fly.srk.fer.hr/robots.txt}.
995 @itemx --no-host-directories
996 Disable generation of host-prefixed directories. By default, invoking
997 Wget with @samp{-r http://fly.srk.fer.hr/} will create a structure of
directories beginning with @file{fly.srk.fer.hr/}.  This option disables
such behavior.
1001 @item --protocol-directories
1002 Use the protocol name as a directory component of local file names. For
1003 example, with this option, @samp{wget -r http://@var{host}} will save to
1004 @samp{http/@var{host}/...} rather than just to @samp{@var{host}/...}.
1006 @cindex cut directories
1007 @item --cut-dirs=@var{number}
1008 Ignore @var{number} directory components. This is useful for getting a
fine-grained control over the directory where recursive retrieval will
be saved.
1012 Take, for example, the directory at
1013 @samp{ftp://ftp.xemacs.org/pub/xemacs/}. If you retrieve it with
1014 @samp{-r}, it will be saved locally under
1015 @file{ftp.xemacs.org/pub/xemacs/}. While the @samp{-nH} option can
1016 remove the @file{ftp.xemacs.org/} part, you are still stuck with
1017 @file{pub/xemacs}. This is where @samp{--cut-dirs} comes in handy; it
1018 makes Wget not ``see'' @var{number} remote directory components. Here
are several examples of how the @samp{--cut-dirs} option works.
1023 No options -> ftp.xemacs.org/pub/xemacs/
1025 -nH --cut-dirs=1 -> xemacs/
1026 -nH --cut-dirs=2 -> .
1028 --cut-dirs=1 -> ftp.xemacs.org/xemacs/
1033 If you just want to get rid of the directory structure, this option is
1034 similar to a combination of @samp{-nd} and @samp{-P}. However, unlike
@samp{-nd}, @samp{--cut-dirs} does not lose subdirectories---for
1036 instance, with @samp{-nH --cut-dirs=1}, a @file{beta/} subdirectory will
be placed in @file{xemacs/beta}, as one would expect.
1039 @cindex directory prefix
1040 @item -P @var{prefix}
1041 @itemx --directory-prefix=@var{prefix}
1042 Set directory prefix to @var{prefix}. The @dfn{directory prefix} is the
1043 directory where all other files and subdirectories will be saved to,
i.e. the top of the retrieval tree.  The default is @samp{.} (the
current directory).
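For example, to place the whole retrieval tree under @file{/tmp/wget}
(a directory chosen for illustration):

@example
wget -P /tmp/wget -r http://@var{host}/
@end example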
1049 @section HTTP Options
1052 @cindex .html extension
1054 @itemx --html-extension
1055 If a file of type @samp{application/xhtml+xml} or @samp{text/html} is
1056 downloaded and the URL does not end with the regexp
1057 @samp{\.[Hh][Tt][Mm][Ll]?}, this option will cause the suffix @samp{.html}
1058 to be appended to the local filename. This is useful, for instance, when
1059 you're mirroring a remote site that uses @samp{.asp} pages, but you want
1060 the mirrored pages to be viewable on your stock Apache server. Another
1061 good use for this is when you're downloading CGI-generated materials. A URL
1062 like @samp{http://site.com/article.cgi?25} will be saved as
1063 @file{article.cgi?25.html}.
1065 Note that filenames changed in this way will be re-downloaded every time
1066 you re-mirror a site, because Wget can't tell that the local
1067 @file{@var{X}.html} file corresponds to remote URL @samp{@var{X}} (since
1068 it doesn't yet know that the URL produces output of type
@samp{text/html} or @samp{application/xhtml+xml}).  To prevent this
1070 re-downloading, you must use @samp{-k} and @samp{-K} so that the original
1071 version of the file will be saved as @file{@var{X}.orig} (@pxref{Recursive
1072 Retrieval Options}).
1075 @cindex http password
1076 @cindex authentication
1077 @item --http-user=@var{user}
1078 @itemx --http-password=@var{password}
1079 Specify the username @var{user} and password @var{password} on an
1080 @sc{http} server. According to the type of the challenge, Wget will
1081 encode them using either the @code{basic} (insecure) or the
1082 @code{digest} authentication scheme.
1084 Another way to specify username and password is in the @sc{url} itself
1085 (@pxref{URL Format}). Either method reveals your password to anyone who
1086 bothers to run @code{ps}. To prevent the passwords from being seen,
1087 store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
1088 those files from other users with @code{chmod}. If the passwords are
1089 really important, do not leave them lying in those files either---edit
1090 the files and delete them after Wget has started the download.
For more information about security issues with Wget, see @ref{Security
Considerations}.
1100 Disable server-side cache. In this case, Wget will send the remote
1101 server an appropriate directive (@samp{Pragma: no-cache}) to get the
1102 file from the remote service, rather than returning the cached version.
1103 This is especially useful for retrieving and flushing out-of-date
1104 documents on proxy servers.
1106 Caching is allowed by default.
1110 Disable the use of cookies. Cookies are a mechanism for maintaining
1111 server-side state. The server sends the client a cookie using the
1112 @code{Set-Cookie} header, and the client responds with the same cookie
1113 upon further requests. Since cookies allow the server owners to keep
1114 track of visitors and for sites to exchange this information, some
1115 consider them a breach of privacy. The default is to use cookies;
1116 however, @emph{storing} cookies is not on by default.
1118 @cindex loading cookies
1119 @cindex cookies, loading
1120 @item --load-cookies @var{file}
1121 Load cookies from @var{file} before the first HTTP retrieval.
1122 @var{file} is a textual file in the format originally used by Netscape's
1123 @file{cookies.txt} file.
1125 You will typically use this option when mirroring sites that require
1126 that you be logged in to access some or all of their content. The login
1127 process typically works by the web server issuing an @sc{http} cookie
1128 upon receiving and verifying your credentials. The cookie is then
1129 resent by the browser when accessing that part of the site, and so
1130 proves your identity.
1132 Mirroring such a site requires Wget to send the same cookies your
1133 browser sends when communicating with the site. This is achieved by
1134 @samp{--load-cookies}---simply point Wget to the location of the
1135 @file{cookies.txt} file, and it will send the same cookies your browser
1136 would send in the same situation. Different browsers keep textual
1137 cookie files in different locations:
1141 The cookies are in @file{~/.netscape/cookies.txt}.
1143 @item Mozilla and Netscape 6.x.
1144 Mozilla's cookie file is also named @file{cookies.txt}, located
1145 somewhere under @file{~/.mozilla}, in the directory of your profile.
1146 The full path usually ends up looking somewhat like
1147 @file{~/.mozilla/default/@var{some-weird-string}/cookies.txt}.
1149 @item Internet Explorer.
1150 You can produce a cookie file Wget can use by using the File menu,
1151 Import and Export, Export Cookies. This has been tested with Internet
1152 Explorer 5; it is not guaranteed to work with earlier versions.
1154 @item Other browsers.
1155 If you are using a different browser to create your cookies,
1156 @samp{--load-cookies} will only work if you can locate or produce a
1157 cookie file in the Netscape format that Wget expects.
1160 If you cannot use @samp{--load-cookies}, there might still be an
1161 alternative. If your browser supports a ``cookie manager'', you can use
1162 it to view the cookies used when accessing the site you're mirroring.
1163 Write down the name and value of the cookie, and manually instruct Wget
1164 to send those cookies, bypassing the ``official'' cookie support:
1167 wget --no-cookies --header "Cookie: @var{name}=@var{value}"
1170 @cindex saving cookies
1171 @cindex cookies, saving
1172 @item --save-cookies @var{file}
1173 Save cookies to @var{file} before exiting. This will not save cookies
1174 that have expired or that have no expiry time (so-called ``session
1175 cookies''), but also see @samp{--keep-session-cookies}.
1177 @cindex cookies, session
1178 @cindex session cookies
1179 @item --keep-session-cookies
1180 When specified, causes @samp{--save-cookies} to also save session
1181 cookies. Session cookies are normally not saved because they are
1182 meant to be kept in memory and forgotten when you exit the browser.
1183 Saving them is useful on sites that require you to log in or to visit
1184 the home page before you can access some pages. With this option,
1185 multiple Wget runs are considered a single browser session as far as
1186 the site is concerned.
1188 Since the cookie file format does not normally carry session cookies,
1189 Wget marks them with an expiry timestamp of 0. Wget's
1190 @samp{--load-cookies} recognizes those as session cookies, but it might
1191 confuse other browsers. Also note that cookies so loaded will be
1192 treated as other session cookies, which means that if you want
1193 @samp{--save-cookies} to preserve them again, you must use
1194 @samp{--keep-session-cookies} again.
1196 @cindex Content-Length, ignore
1197 @cindex ignore length
1198 @item --ignore-length
1199 Unfortunately, some @sc{http} servers (@sc{cgi} programs, to be more
1200 precise) send out bogus @code{Content-Length} headers, which makes Wget
1201 go wild, as it thinks not all the document was retrieved. You can spot
1202 this syndrome if Wget retries getting the same document again and again,
each time claiming that the (otherwise normal) connection has closed on
the very same byte.
1206 With this option, Wget will ignore the @code{Content-Length} header---as
1207 if it never existed.
1210 @item --header=@var{header-line}
1211 Send @var{header-line} along with the rest of the headers in each
1212 @sc{http} request. The supplied header is sent as-is, which means it
must contain name and value separated by colon, and must not contain
newlines.
1216 You may define more than one additional header by specifying
1217 @samp{--header} more than once.
1221 wget --header='Accept-Charset: iso-8859-2' \
1222 --header='Accept-Language: hr' \
1223 http://fly.srk.fer.hr/
1227 Specification of an empty string as the header value will clear all
1228 previous user-defined headers.
1230 As of Wget 1.10, this option can be used to override headers otherwise
1231 generated automatically. This example instructs Wget to connect to
1232 localhost, but to specify @samp{foo.bar} in the @code{Host} header:
1235 wget --header="Host: foo.bar" http://localhost/
1238 In versions of Wget prior to 1.10 such use of @samp{--header} caused
1239 sending of duplicate headers.
1242 @cindex proxy password
1243 @cindex proxy authentication
1244 @item --proxy-user=@var{user}
1245 @itemx --proxy-password=@var{password}
1246 Specify the username @var{user} and password @var{password} for
1247 authentication on a proxy server. Wget will encode them using the
1248 @code{basic} authentication scheme.
1250 Security considerations similar to those with @samp{--http-password}
1251 pertain here as well.
1253 @cindex http referer
1254 @cindex referer, http
1255 @item --referer=@var{url}
1256 Include `Referer: @var{url}' header in HTTP request. Useful for
1257 retrieving documents with server-side processing that assume they are
1258 always being retrieved by interactive web browsers and only come out
1259 properly when Referer is set to one of the pages that point to them.
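For example (both @sc{url}s are placeholders):

@example
wget --referer=http://@var{host}/index.html http://@var{host}/@var{file}
@end example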
1261 @cindex server response, save
1262 @item --save-headers
1263 Save the headers sent by the @sc{http} server to the file, preceding the
1264 actual contents, with an empty line as the separator.
1267 @item -U @var{agent-string}
1268 @itemx --user-agent=@var{agent-string}
1269 Identify as @var{agent-string} to the @sc{http} server.
1271 The @sc{http} protocol allows the clients to identify themselves using a
1272 @code{User-Agent} header field. This enables distinguishing the
1273 @sc{www} software, usually for statistical purposes or for tracing of
1274 protocol violations. Wget normally identifies as
@samp{Wget/@var{version}}, @var{version} being the current version
number of Wget.
1278 However, some sites have been known to impose the policy of tailoring
1279 the output according to the @code{User-Agent}-supplied information.
1280 While this is not such a bad idea in theory, it has been abused by
1281 servers denying information to clients other than (historically)
1282 Netscape or, more frequently, Microsoft Internet Explorer. This
1283 option allows you to change the @code{User-Agent} line issued by Wget.
Use of this option is discouraged, unless you really know what you are
doing.

Specifying an empty user agent with @samp{--user-agent=""} instructs Wget
1288 not to send the @code{User-Agent} header in @sc{http} requests.
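For example, to identify as a different client (the agent string here is
a made-up placeholder):

@example
wget --user-agent='MyAgent/1.0' http://@var{host}/
@end example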
1291 @item --post-data=@var{string}
1292 @itemx --post-file=@var{file}
1293 Use POST as the method for all HTTP requests and send the specified data
1294 in the request body. @code{--post-data} sends @var{string} as data,
1295 whereas @code{--post-file} sends the contents of @var{file}. Other than
1296 that, they work in exactly the same way.
1298 Please be aware that Wget needs to know the size of the POST data in
1299 advance. Therefore the argument to @code{--post-file} must be a regular
1300 file; specifying a FIFO or something like @file{/dev/stdin} won't work.
1301 It's not quite clear how to work around this limitation inherent in
1302 HTTP/1.0. Although HTTP/1.1 introduces @dfn{chunked} transfer that
1303 doesn't require knowing the request length in advance, a client can't
1304 use chunked unless it knows it's talking to an HTTP/1.1 server. And it
1305 can't know that until it receives a response, which in turn requires the
1306 request to have been completed -- a chicken-and-egg problem.
1308 Note: if Wget is redirected after the POST request is completed, it
1309 will not send the POST data to the redirected URL. This is because
1310 URLs that process POST often respond with a redirection to a regular
1311 page, which does not desire or accept POST. It is not completely
1312 clear that this behavior is optimal; if it doesn't work out, it might
1313 be changed in the future.
This example shows how to log in to a server using POST and then proceed
to download the desired pages, presumably only accessible to authorized
users:
1321 # @r{Log in to the server. This can be done only once.}
1322 wget --save-cookies cookies.txt \
1323 --post-data 'user=foo&password=bar' \
1324 http://server.com/auth.php
1326 # @r{Now grab the page or pages we care about.}
1327 wget --load-cookies cookies.txt \
1328 -p http://server.com/interesting/article.php
1332 If the server is using session cookies to track user authentication,
1333 the above will not work because @samp{--save-cookies} will not save
1334 them (and neither will browsers) and the @file{cookies.txt} file will
1335 be empty. In that case use @samp{--keep-session-cookies} along with
1336 @samp{--save-cookies} to force saving of session cookies.
1339 @node HTTPS (SSL/TLS) Options
1340 @section HTTPS (SSL/TLS) Options
1343 To support encrypted HTTP (HTTPS) downloads, Wget must be compiled
1344 with an external SSL library, currently OpenSSL. If Wget is compiled
1345 without SSL support, none of these options are available.
1348 @cindex SSL protocol, choose
1349 @item --secure-protocol=@var{protocol}
1350 Choose the secure protocol to be used. Legal values are @samp{auto},
1351 @samp{SSLv2}, @samp{SSLv3}, and @samp{TLSv1}. If @samp{auto} is used,
1352 the SSL library is given the liberty of choosing the appropriate
1353 protocol automatically, which is achieved by sending an SSLv2 greeting
1354 and announcing support for SSLv3 and TLSv1. This is the default.
1356 Specifying @samp{SSLv2}, @samp{SSLv3}, or @samp{TLSv1} forces the use
1357 of the corresponding protocol. This is useful when talking to old and
1358 buggy SSL server implementations that make it hard for OpenSSL to
choose the correct protocol version.  Fortunately, such servers are
quite rare.
1362 @cindex SSL certificate, check
1363 @item --no-check-certificate
1364 Don't check the server certificate against the available certificate
1365 authorities. Also don't require the URL host name to match the common
1366 name presented by the certificate.
1368 As of Wget 1.10, the default is to verify the server's certificate
1369 against the recognized certificate authorities, breaking the SSL
1370 handshake and aborting the download if the verification fails.
1371 Although this provides more secure downloads, it does break
1372 interoperability with some sites that worked with previous Wget
1373 versions, particularly those using self-signed, expired, or otherwise
1374 invalid certificates. This option forces an ``insecure'' mode of
1375 operation that turns the certificate verification errors into warnings
1376 and allows you to proceed.
1378 If you encounter ``certificate verification'' errors or ones saying
1379 that ``common name doesn't match requested host name'', you can use
1380 this option to bypass the verification and proceed with the download.
1381 @emph{Only use this option if you are otherwise convinced of the
1382 site's authenticity, or if you really don't care about the validity of
1383 its certificate.} It is almost always a bad idea not to check the
1384 certificates when transmitting confidential or important data.
1386 @cindex SSL certificate
1387 @item --certificate=@var{file}
1388 Use the client certificate stored in @var{file}. This is needed for
1389 servers that are configured to require certificates from the clients
that connect to them.  Normally a certificate is not required and this
switch is optional.
1393 @cindex SSL certificate type, specify
1394 @item --certificate-type=@var{type}
1395 Specify the type of the client certificate. Legal values are
@samp{PEM} (assumed by default) and @samp{DER}, also known as
@samp{ASN1}.
1399 @item --private-key=@var{file}
1400 Read the private key from @var{file}. This allows you to provide the
1401 private key in a file separate from the certificate.
1403 @item --private-key-type=@var{type}
1404 Specify the type of the private key. Accepted values are @samp{PEM}
1405 (the default) and @samp{DER}.
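For example, to present a client certificate whose key is kept in a
separate file (both file names are placeholders):

@example
wget --certificate=client.pem --private-key=client.key https://@var{host}/
@end example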
1407 @item --ca-certificate=@var{file}
1408 Use @var{file} as the file with the bundle of certificate authorities
1409 (``CA'') to verify the peers. The certificates must be in PEM format.
1411 Without this option Wget looks for CA certificates at the
1412 system-specified locations, chosen at OpenSSL installation time.
1414 @cindex SSL certificate authority
1415 @item --ca-directory=@var{directory}
1416 Specifies directory containing CA certificates in PEM format. Each
1417 file contains one CA certificate, and the file name is based on a hash
1418 value derived from the certificate. This is achieved by processing a
1419 certificate directory with the @code{c_rehash} utility supplied with
1420 OpenSSL. Using @samp{--ca-directory} is more efficient than
1421 @samp{--ca-certificate} when many certificates are installed because
1422 it allows Wget to fetch certificates on demand.
1424 Without this option Wget looks for CA certificates at the
1425 system-specified locations, chosen at OpenSSL installation time.
1427 @cindex entropy, specifying source of
1428 @cindex randomness, specifying source of
1429 @item --random-file=@var{file}
1430 Use @var{file} as the source of random data for seeding the
1431 pseudo-random number generator on systems without @file{/dev/random}.
1433 On such systems the SSL library needs an external source of randomness
1434 to initialize. Randomness may be provided by EGD (see
1435 @samp{--egd-file} below) or read from an external source specified by
1436 the user. If this option is not specified, Wget looks for random data
1437 in @code{$RANDFILE} or, if that is unset, in @file{$HOME/.rnd}. If
1438 none of those are available, it is likely that SSL encryption will not
1441 If you're getting the ``Could not seed OpenSSL PRNG; disabling SSL.''
1442 error, you should provide random data using some of the methods
1446 @item --egd-file=@var{file}
1447 Use @var{file} as the EGD socket. EGD stands for @dfn{Entropy
1448 Gathering Daemon}, a user-space program that collects data from
1449 various unpredictable system sources and makes it available to other
1450 programs that might need it. Encryption software, such as the SSL
1451 library, needs sources of non-repeating randomness to seed the random
1452 number generator used to produce cryptographically strong keys.
1454 OpenSSL allows the user to specify his own source of entropy using the
1455 @code{RAND_FILE} environment variable. If this variable is unset, or
1456 if the specified file does not produce enough randomness, OpenSSL will
read random data from the EGD socket specified using this option.
1459 If this option is not specified (and the equivalent startup command is
1460 not used), EGD is never contacted. EGD is not needed on modern Unix
1461 systems that support @file{/dev/random}.
1465 @section FTP Options
1469 @cindex ftp password
1470 @cindex ftp authentication
1471 @item --ftp-user=@var{user}
1472 @itemx --ftp-password=@var{password}
1473 Specify the username @var{user} and password @var{password} on an
1474 @sc{ftp} server. Without this, or the corresponding startup option,
1475 the password defaults to @samp{-wget@@}, normally used for anonymous
1478 Another way to specify username and password is in the @sc{url} itself
1479 (@pxref{URL Format}). Either method reveals your password to anyone who
1480 bothers to run @code{ps}. To prevent the passwords from being seen,
1481 store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
1482 those files from other users with @code{chmod}. If the passwords are
1483 really important, do not leave them lying in those files either---edit
1484 the files and delete them after Wget has started the download.
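For example (host and credentials hypothetical), these two invocations
are equivalent, and both expose the password to anyone running
@code{ps}:

@example
wget --ftp-user=bozo --ftp-password=secret ftp://ftp.example.com/pub/file
wget ftp://bozo:secret@@ftp.example.com/pub/file
@end example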
1487 For more information about security issues with Wget, @xref{Security
1491 @cindex .listing files, removing
1492 @item --no-remove-listing
1493 Don't remove the temporary @file{.listing} files generated by @sc{ftp}
1494 retrievals. Normally, these files contain the raw directory listings
1495 received from @sc{ftp} servers. Not removing them can be useful for
1496 debugging purposes, or when you want to be able to easily check on the
1497 contents of remote server directories (e.g. to verify that a mirror
1498 you're running is complete).
1500 Note that even though Wget writes to a known filename for this file,
1501 this is not a security hole in the scenario of a user making
1502 @file{.listing} a symbolic link to @file{/etc/passwd} or something and
1503 asking @code{root} to run Wget in his or her directory. Depending on
1504 the options used, either Wget will refuse to write to @file{.listing},
1505 making the globbing/recursion/time-stamping operation fail, or the
1506 symbolic link will be deleted and replaced with the actual
1507 @file{.listing} file, or the listing will be written to a
1508 @file{.listing.@var{number}} file.
Even though this situation isn't a problem, @code{root} should
1511 never run Wget in a non-trusted user's directory. A user could do
1512 something as simple as linking @file{index.html} to @file{/etc/passwd}
1513 and asking @code{root} to run Wget with @samp{-N} or @samp{-r} so the file
1514 will be overwritten.
1516 @cindex globbing, toggle
1518 Turn off @sc{ftp} globbing. Globbing refers to the use of shell-like
1519 special characters (@dfn{wildcards}), like @samp{*}, @samp{?}, @samp{[}
1520 and @samp{]} to retrieve more than one file from the same directory at
1524 wget ftp://gnjilux.srk.fer.hr/*.msg
1527 By default, globbing will be turned on if the @sc{url} contains a
1528 globbing character. This option may be used to turn globbing on or off
1531 You may have to quote the @sc{url} to protect it from being expanded by
1532 your shell. Globbing makes Wget look for a directory listing, which is
1533 system-specific. This is why it currently works only with Unix @sc{ftp}
1534 servers (and the ones emulating Unix @code{ls} output).
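For instance, quoting keeps the shell from expanding the wildcard
before Wget sees it:

@example
wget "ftp://gnjilux.srk.fer.hr/*.msg"
@end example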
1537 @item --no-passive-ftp
1538 Disable the use of the @dfn{passive} FTP transfer mode. Passive FTP
1539 mandates that the client connect to the server to establish the data
1540 connection rather than the other way around.
1542 If the machine is connected to the Internet directly, both passive and
1543 active FTP should work equally well. Behind most firewall and NAT
1544 configurations passive FTP has a better chance of working. However,
1545 in some rare firewall configurations, active FTP actually works when
1546 passive FTP doesn't. If you suspect this to be the case, use this
1547 option, or set @code{passive_ftp=off} in your init file.
1549 @cindex symbolic links, retrieving
1550 @item --retr-symlinks
1551 Usually, when retrieving @sc{ftp} directories recursively and a symbolic
1552 link is encountered, the linked-to file is not downloaded. Instead, a
1553 matching symbolic link is created on the local filesystem. The
1554 pointed-to file will not be downloaded unless this recursive retrieval
1555 would have encountered it separately and downloaded it anyway.
1557 When @samp{--retr-symlinks} is specified, however, symbolic links are
1558 traversed and the pointed-to files are retrieved. At this time, this
1559 option does not cause Wget to traverse symlinks to directories and
1560 recurse through them, but in the future it should be enhanced to do
1563 Note that when retrieving a file (not a directory) because it was
1564 specified on the command-line, rather than because it was recursed to,
1565 this option has no effect. Symbolic links are always traversed in this
1568 @cindex Keep-Alive, turning off
1569 @cindex Persistent Connections, disabling
1570 @item --no-http-keep-alive
1571 Turn off the ``keep-alive'' feature for HTTP downloads. Normally, Wget
1572 asks the server to keep the connection open so that, when you download
1573 more than one document from the same server, they get transferred over
1574 the same TCP connection. This saves time and at the same time reduces
1575 the load on the server.
1577 This option is useful when, for some reason, persistent (keep-alive)
1578 connections don't work for you, for example due to a server bug or due
1579 to the inability of server-side scripts to cope with the connections.
1582 @node Recursive Retrieval Options
1583 @section Recursive Retrieval Options
1588 Turn on recursive retrieving. @xref{Recursive Download}, for more
1591 @item -l @var{depth}
1592 @itemx --level=@var{depth}
1593 Specify recursion maximum depth level @var{depth} (@pxref{Recursive
1594 Download}). The default maximum depth is 5.
1596 @cindex proxy filling
1597 @cindex delete after retrieval
1598 @cindex filling proxy cache
1599 @item --delete-after
1600 This option tells Wget to delete every single file it downloads,
1601 @emph{after} having done so. It is useful for pre-fetching popular
1602 pages through a proxy, e.g.:
1605 wget -r -nd --delete-after http://whatever.com/~popular/page/
1608 The @samp{-r} option is to retrieve recursively, and @samp{-nd} to not
1611 Note that @samp{--delete-after} deletes files on the local machine. It
1612 does not issue the @samp{DELE} command to remote FTP sites, for
1613 instance. Also note that when @samp{--delete-after} is specified,
1614 @samp{--convert-links} is ignored, so @samp{.orig} files are simply not
1615 created in the first place.
1617 @cindex conversion of links
1618 @cindex link conversion
1620 @itemx --convert-links
1621 After the download is complete, convert the links in the document to
1622 make them suitable for local viewing. This affects not only the visible
1623 hyperlinks, but any part of the document that links to external content,
1624 such as embedded images, links to style sheets, hyperlinks to non-@sc{html}
Each link will be changed in one of two ways:
1631 The links to files that have been downloaded by Wget will be changed to
1632 refer to the file they point to as a relative link.
1634 Example: if the downloaded file @file{/foo/doc.html} links to
1635 @file{/bar/img.gif}, also downloaded, then the link in @file{doc.html}
1636 will be modified to point to @samp{../bar/img.gif}. This kind of
1637 transformation works reliably for arbitrary combinations of directories.
1640 The links to files that have not been downloaded by Wget will be changed
1641 to include host name and absolute path of the location they point to.
1643 Example: if the downloaded file @file{/foo/doc.html} links to
1644 @file{/bar/img.gif} (or to @file{../bar/img.gif}), then the link in
1645 @file{doc.html} will be modified to point to
1646 @file{http://@var{hostname}/bar/img.gif}.
1649 Because of this, local browsing works reliably: if a linked file was
1650 downloaded, the link will refer to its local name; if it was not
1651 downloaded, the link will refer to its full Internet address rather than
1652 presenting a broken link. The fact that the former links are converted
1653 to relative links ensures that you can move the downloaded hierarchy to
1656 Note that only at the end of the download can Wget know which links have
1657 been downloaded. Because of that, the work done by @samp{-k} will be
1658 performed at the end of all the downloads.
1660 @cindex backing up converted files
1662 @itemx --backup-converted
1663 When converting a file, back up the original version with a @samp{.orig}
1664 suffix. Affects the behavior of @samp{-N} (@pxref{HTTP Time-Stamping
1669 Turn on options suitable for mirroring. This option turns on recursion
1670 and time-stamping, sets infinite recursion depth and keeps @sc{ftp}
1671 directory listings. It is currently equivalent to
1672 @samp{-r -N -l inf --no-remove-listing}.
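For example, a mirror of a site (URL hypothetical) with the activity
logged to a file might be started with:

@example
wget --mirror -o mirror.log http://www.example.com/
@end example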
1674 @cindex page requisites
1675 @cindex required images, downloading
1677 @itemx --page-requisites
1678 This option causes Wget to download all the files that are necessary to
1679 properly display a given @sc{html} page. This includes such things as
1680 inlined images, sounds, and referenced stylesheets.
1682 Ordinarily, when downloading a single @sc{html} page, any requisite documents
1683 that may be needed to display it properly are not downloaded. Using
1684 @samp{-r} together with @samp{-l} can help, but since Wget does not
1685 ordinarily distinguish between external and inlined documents, one is
1686 generally left with ``leaf documents'' that are missing their
1689 For instance, say document @file{1.html} contains an @code{<IMG>} tag
1690 referencing @file{1.gif} and an @code{<A>} tag pointing to external
1691 document @file{2.html}. Say that @file{2.html} is similar but that its
1692 image is @file{2.gif} and it links to @file{3.html}. Say this
1693 continues up to some arbitrarily high number.
1695 If one executes the command:
1698 wget -r -l 2 http://@var{site}/1.html
1701 then @file{1.html}, @file{1.gif}, @file{2.html}, @file{2.gif}, and
1702 @file{3.html} will be downloaded. As you can see, @file{3.html} is
1703 without its requisite @file{3.gif} because Wget is simply counting the
1704 number of hops (up to 2) away from @file{1.html} in order to determine
1705 where to stop the recursion. However, with this command:
1708 wget -r -l 2 -p http://@var{site}/1.html
1711 all the above files @emph{and} @file{3.html}'s requisite @file{3.gif}
1712 will be downloaded. Similarly,
1715 wget -r -l 1 -p http://@var{site}/1.html
1718 will cause @file{1.html}, @file{1.gif}, @file{2.html}, and @file{2.gif}
1719 to be downloaded. One might think that:
1722 wget -r -l 0 -p http://@var{site}/1.html
1725 would download just @file{1.html} and @file{1.gif}, but unfortunately
1726 this is not the case, because @samp{-l 0} is equivalent to
1727 @samp{-l inf}---that is, infinite recursion. To download a single @sc{html}
1728 page (or a handful of them, all specified on the command-line or in a
1729 @samp{-i} @sc{url} input file) and its (or their) requisites, simply leave off
1730 @samp{-r} and @samp{-l}:
1733 wget -p http://@var{site}/1.html
1736 Note that Wget will behave as if @samp{-r} had been specified, but only
1737 that single page and its requisites will be downloaded. Links from that
1738 page to external documents will not be followed. Actually, to download
1739 a single page and all its requisites (even if they exist on separate
1740 websites), and make sure the lot displays properly locally, this author
1741 likes to use a few options in addition to @samp{-p}:
1744 wget -E -H -k -K -p http://@var{site}/@var{document}
1747 To finish off this topic, it's worth knowing that Wget's idea of an
1748 external document link is any URL specified in an @code{<A>} tag, an
1749 @code{<AREA>} tag, or a @code{<LINK>} tag other than @code{<LINK
1752 @cindex @sc{html} comments
1753 @cindex comments, @sc{html}
1754 @item --strict-comments
1755 Turn on strict parsing of @sc{html} comments. The default is to terminate
1756 comments at the first occurrence of @samp{-->}.
1758 According to specifications, @sc{html} comments are expressed as @sc{sgml}
1759 @dfn{declarations}. Declaration is special markup that begins with
1760 @samp{<!} and ends with @samp{>}, such as @samp{<!DOCTYPE ...>}, that
1761 may contain comments between a pair of @samp{--} delimiters. @sc{html}
1762 comments are ``empty declarations'', @sc{sgml} declarations without any
1763 non-comment text. Therefore, @samp{<!--foo-->} is a valid comment, and
1764 so is @samp{<!--one-- --two-->}, but @samp{<!--1--2-->} is not.
1766 On the other hand, most @sc{html} writers don't perceive comments as anything
1767 other than text delimited with @samp{<!--} and @samp{-->}, which is not
1768 quite the same. For example, something like @samp{<!------------>}
1769 works as a valid comment as long as the number of dashes is a multiple
1770 of four (!). If not, the comment technically lasts until the next
1771 @samp{--}, which may be at the other end of the document. Because of
1772 this, many popular browsers completely ignore the specification and
1773 implement what users have come to expect: comments delimited with
1774 @samp{<!--} and @samp{-->}.
1776 Until version 1.9, Wget interpreted comments strictly, which resulted in
1777 missing links in many web pages that displayed fine in browsers, but had
1778 the misfortune of containing non-compliant comments. Beginning with
version 1.9, Wget has joined the ranks of clients that implement
1780 ``naive'' comments, terminating each comment at the first occurrence of
1783 If, for whatever reason, you want strict comment parsing, use this
1784 option to turn it on.
1787 @node Recursive Accept/Reject Options
1788 @section Recursive Accept/Reject Options
1791 @item -A @var{acclist} --accept @var{acclist}
1792 @itemx -R @var{rejlist} --reject @var{rejlist}
1793 Specify comma-separated lists of file name suffixes or patterns to
1794 accept or reject (@pxref{Types of Files} for more details).
1796 @item -D @var{domain-list}
1797 @itemx --domains=@var{domain-list}
1798 Set domains to be followed. @var{domain-list} is a comma-separated list
1799 of domains. Note that it does @emph{not} turn on @samp{-H}.
1801 @item --exclude-domains @var{domain-list}
1802 Specify the domains that are @emph{not} to be followed.
1803 (@pxref{Spanning Hosts}).
1805 @cindex follow FTP links
1807 Follow @sc{ftp} links from @sc{html} documents. Without this option,
1808 Wget will ignore all the @sc{ftp} links.
1810 @cindex tag-based recursive pruning
1811 @item --follow-tags=@var{list}
1812 Wget has an internal table of @sc{html} tag / attribute pairs that it
1813 considers when looking for linked documents during a recursive
1814 retrieval. If a user wants only a subset of those tags to be
considered, however, he or she should specify such tags in a
1816 comma-separated @var{list} with this option.
1818 @item --ignore-tags=@var{list}
1819 This is the opposite of the @samp{--follow-tags} option. To skip
1820 certain @sc{html} tags when recursively looking for documents to download,
1821 specify them in a comma-separated @var{list}.
1823 In the past, this option was the best bet for downloading a single page
1824 and its requisites, using a command-line like:
1827 wget --ignore-tags=a,area -H -k -K -r http://@var{site}/@var{document}
1830 However, the author of this option came across a page with tags like
1831 @code{<LINK REL="home" HREF="/">} and came to the realization that
1832 specifying tags to ignore was not enough. One can't just tell Wget to
1833 ignore @code{<LINK>}, because then stylesheets will not be downloaded.
1834 Now the best bet for downloading a single page and its requisites is the
1835 dedicated @samp{--page-requisites} option.
1840 Ignore case when matching files and directories. This influences the
behavior of the @samp{-R}, @samp{-A}, @samp{-I}, and @samp{-X} options, as well as the globbing
1842 implemented when downloading from FTP sites. For example, with this
1843 option, @samp{-A *.txt} will match @samp{file1.txt}, but also
1844 @samp{file2.TXT}, @samp{file3.TxT}, and so on.
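A full command using this (host hypothetical) could look like:

@example
wget -r --ignore-case -A "*.txt" ftp://ftp.example.com/pub/docs/
@end example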
1848 Enable spanning across hosts when doing recursive retrieving
1849 (@pxref{Spanning Hosts}).
1853 Follow relative links only. Useful for retrieving a specific home page
1854 without any distractions, not even those from the same hosts
1855 (@pxref{Relative Links}).
1858 @itemx --include-directories=@var{list}
1859 Specify a comma-separated list of directories you wish to follow when
1860 downloading (@pxref{Directory-Based Limits} for more details.) Elements
1861 of @var{list} may contain wildcards.
1864 @itemx --exclude-directories=@var{list}
1865 Specify a comma-separated list of directories you wish to exclude from
1866 download (@pxref{Directory-Based Limits} for more details.) Elements of
1867 @var{list} may contain wildcards.
1871 Do not ever ascend to the parent directory when retrieving recursively.
1872 This is a useful option, since it guarantees that only the files
1873 @emph{below} a certain hierarchy will be downloaded.
1874 @xref{Directory-Based Limits}, for more details.
1879 @node Recursive Download
1880 @chapter Recursive Download
1883 @cindex recursive download
1885 GNU Wget is capable of traversing parts of the Web (or a single
1886 @sc{http} or @sc{ftp} server), following links and directory structure.
1887 We refer to this as to @dfn{recursive retrieval}, or @dfn{recursion}.
With @sc{http} @sc{url}s, Wget retrieves and parses the @sc{html} from
the given @sc{url}, retrieving the files the @sc{html} document refers
to, through markup like @code{href} or @code{src}. If the freshly
downloaded file is also of type
1893 @code{text/html} or @code{application/xhtml+xml}, it will be parsed and
1896 Recursive retrieval of @sc{http} and @sc{html} content is
1897 @dfn{breadth-first}. This means that Wget first downloads the requested
1898 @sc{html} document, then the documents linked from that document, then the
1899 documents linked by them, and so on. In other words, Wget first
1900 downloads the documents at depth 1, then those at depth 2, and so on
1901 until the specified maximum depth.
1903 The maximum @dfn{depth} to which the retrieval may descend is specified
1904 with the @samp{-l} option. The default maximum depth is five layers.
1906 When retrieving an @sc{ftp} @sc{url} recursively, Wget will retrieve all
1907 the data from the given directory tree (including the subdirectories up
1908 to the specified depth) on the remote server, creating its mirror image
1909 locally. @sc{ftp} retrieval is also limited by the @code{depth}
1910 parameter. Unlike @sc{http} recursion, @sc{ftp} recursion is performed
1913 By default, Wget will create a local directory tree, corresponding to
1914 the one found on the remote server.
Recursive retrieving has a number of applications, the most
important of which is mirroring. It is also useful for @sc{www}
presentations, and any other situation where a slow network
connection should be bypassed by storing the files locally.
1921 You should be warned that recursive downloads can overload the remote
1922 servers. Because of that, many administrators frown upon them and may
1923 ban access from your site if they detect very fast downloads of big
1924 amounts of content. When downloading from Internet servers, consider
1925 using the @samp{-w} option to introduce a delay between accesses to the
1926 server. The download will take a while longer, but the server
1927 administrator will not be alarmed by your rudeness.
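For instance, a polite recursive download (URL hypothetical) with a
two-second pause between requests could be started with:

@example
wget -r -w 2 http://www.example.com/
@end example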
1929 Of course, recursive download may cause problems on your machine. If
1930 left to run unchecked, it can easily fill up the disk. If downloading
from the local network, it can also take up bandwidth on the system, as well as
1932 consume memory and CPU.
1934 Try to specify the criteria that match the kind of download you are
1935 trying to achieve. If you want to download only one page, use
1936 @samp{--page-requisites} without any additional recursion. If you want
1937 to download things under one directory, use @samp{-np} to avoid
1938 downloading things from other directories. If you want to download all
1939 the files from one directory, use @samp{-l 1} to make sure the recursion
1940 depth never exceeds one. @xref{Following Links}, for more information
1943 Recursive retrieval should be used with care. Don't say you were not
1946 @node Following Links
1947 @chapter Following Links
1949 @cindex following links
1951 When retrieving recursively, one does not wish to retrieve loads of
unnecessary data. Most of the time users know exactly what
1953 they want to download, and want Wget to follow only specific links.
1955 For example, if you wish to download the music archive from
1956 @samp{fly.srk.fer.hr}, you will not want to download all the home pages
1957 that happen to be referenced by an obscure part of the archive.
Wget possesses several mechanisms that allow you to fine-tune which
1960 links it will follow.
1963 * Spanning Hosts:: (Un)limiting retrieval based on host name.
1964 * Types of Files:: Getting only certain files.
1965 * Directory-Based Limits:: Getting only certain directories.
1966 * Relative Links:: Follow relative links only.
1967 * FTP Links:: Following FTP links.
1970 @node Spanning Hosts
1971 @section Spanning Hosts
1972 @cindex spanning hosts
1973 @cindex hosts, spanning
1975 Wget's recursive retrieval normally refuses to visit hosts different
1976 than the one you specified on the command line. This is a reasonable
1977 default; without it, every retrieval would have the potential to turn
your Wget into a small version of Google.
1980 However, visiting different hosts, or @dfn{host spanning,} is sometimes
1981 a useful option. Maybe the images are served from a different server.
1982 Maybe you're mirroring a site that consists of pages interlinked between
1983 three servers. Maybe the server has two equivalent names, and the @sc{html}
1984 pages refer to both interchangeably.
1987 @item Span to any host---@samp{-H}
1989 The @samp{-H} option turns on host spanning, thus allowing Wget's
1990 recursive run to visit any host referenced by a link. Unless sufficient
recursion-limiting criteria are applied, these foreign hosts will
typically link to yet more hosts, and so on until Wget ends up sucking
up much more data than you intended.
1995 @item Limit spanning to certain domains---@samp{-D}
1997 The @samp{-D} option allows you to specify the domains that will be
1998 followed, thus limiting the recursion only to the hosts that belong to
1999 these domains. Obviously, this makes sense only in conjunction with
2000 @samp{-H}. A typical example would be downloading the contents of
2001 @samp{www.server.com}, but allowing downloads from
2002 @samp{images.server.com}, etc.:
2005 wget -rH -Dserver.com http://www.server.com/
2008 You can specify more than one address by separating them with a comma,
2009 e.g. @samp{-Ddomain1.com,domain2.com}.
2011 @item Keep download off certain domains---@samp{--exclude-domains}
2013 If there are domains you want to exclude specifically, you can do it
with @samp{--exclude-domains}, which accepts the same type of arguments
as @samp{-D}, but will @emph{exclude} all the listed domains. For
2016 example, if you want to download all the hosts from @samp{foo.edu}
2017 domain, with the exception of @samp{sunsite.foo.edu}, you can do it like
2021 wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
2027 @node Types of Files
2028 @section Types of Files
2029 @cindex types of files
2031 When downloading material from the web, you will often want to restrict
2032 the retrieval to only certain file types. For example, if you are
2033 interested in downloading @sc{gif}s, you will not be overjoyed to get
2034 loads of PostScript documents, and vice versa.
2036 Wget offers two options to deal with this problem. Each option
2037 description lists a short name, a long name, and the equivalent command
2040 @cindex accept wildcards
2041 @cindex accept suffixes
2042 @cindex wildcards, accept
2043 @cindex suffixes, accept
2045 @item -A @var{acclist}
2046 @itemx --accept @var{acclist}
2047 @itemx accept = @var{acclist}
2048 The argument to @samp{--accept} option is a list of file suffixes or
2049 patterns that Wget will download during recursive retrieval. A suffix
2050 is the ending part of a file, and consists of ``normal'' letters,
2051 e.g. @samp{gif} or @samp{.jpg}. A matching pattern contains shell-like
2052 wildcards, e.g. @samp{books*} or @samp{zelazny*196[0-9]*}.
2054 So, specifying @samp{wget -A gif,jpg} will make Wget download only the
2055 files ending with @samp{gif} or @samp{jpg}, i.e. @sc{gif}s and
2056 @sc{jpeg}s. On the other hand, @samp{wget -A "zelazny*196[0-9]*"} will
2057 download only files beginning with @samp{zelazny} and containing numbers
2058 from 1960 to 1969 anywhere within. Look up the manual of your shell for
2059 a description of how pattern matching works.
2061 Of course, any number of suffixes and patterns can be combined into a
2062 comma-separated list, and given as an argument to @samp{-A}.
2064 @cindex reject wildcards
2065 @cindex reject suffixes
2066 @cindex wildcards, reject
2067 @cindex suffixes, reject
2068 @item -R @var{rejlist}
2069 @itemx --reject @var{rejlist}
2070 @itemx reject = @var{rejlist}
2071 The @samp{--reject} option works the same way as @samp{--accept}, only
2072 its logic is the reverse; Wget will download all files @emph{except} the
2073 ones matching the suffixes (or patterns) in the list.
2075 So, if you want to download a whole page except for the cumbersome
2076 @sc{mpeg}s and @sc{.au} files, you can use @samp{wget -R mpg,mpeg,au}.
2077 Analogously, to download all files except the ones beginning with
2078 @samp{bjork}, use @samp{wget -R "bjork*"}. The quotes are to prevent
2079 expansion by the shell.
2082 The @samp{-A} and @samp{-R} options may be combined to achieve even
2083 better fine-tuning of which files to retrieve. E.g. @samp{wget -A
2084 "*zelazny*" -R .ps} will download all the files having @samp{zelazny} as
2085 a part of their name, but @emph{not} the PostScript files.
2087 Note that these two options do not affect the downloading of @sc{html}
2088 files; Wget must load all the @sc{html}s to know where to go at
2089 all---recursive retrieval would make no sense otherwise.
2091 @node Directory-Based Limits
2092 @section Directory-Based Limits
2094 @cindex directory limits
2096 Regardless of other link-following facilities, it is often useful to
2097 place the restriction of what files to retrieve based on the directories
2098 those files are placed in. There can be many reasons for this---the
2099 home pages may be organized in a reasonable directory structure; or some
2100 directories may contain useless information, e.g. @file{/cgi-bin} or
2101 @file{/dev} directories.
2103 Wget offers three different options to deal with this requirement. Each
2104 option description lists a short name, a long name, and the equivalent
2105 command in @file{.wgetrc}.
2107 @cindex directories, include
2108 @cindex include directories
2109 @cindex accept directories
2112 @itemx --include @var{list}
2113 @itemx include_directories = @var{list}
The @samp{-I} option accepts a comma-separated list of directories included
2115 in the retrieval. Any other directories will simply be ignored. The
2116 directories are absolute paths.
2118 So, if you wish to download from @samp{http://host/people/bozo/}
2119 following only links to bozo's colleagues in the @file{/people}
2120 directory and the bogus scripts in @file{/cgi-bin}, you can specify:
2123 wget -I /people,/cgi-bin http://host/people/bozo/
2126 @cindex directories, exclude
2127 @cindex exclude directories
2128 @cindex reject directories
2130 @itemx --exclude @var{list}
2131 @itemx exclude_directories = @var{list}
The @samp{-X} option is exactly the reverse of @samp{-I}---this is a list of
2133 directories @emph{excluded} from the download. E.g. if you do not want
2134 Wget to download things from @file{/cgi-bin} directory, specify @samp{-X
2135 /cgi-bin} on the command line.
2137 The same as with @samp{-A}/@samp{-R}, these two options can be combined
2138 to get a better fine-tuning of downloading subdirectories. E.g. if you
2139 want to load all the files from @file{/pub} hierarchy except for
2140 @file{/pub/worthless}, specify @samp{-I/pub -X/pub/worthless}.
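A complete command for that last case (host hypothetical) might be:

@example
wget -r -I/pub -X/pub/worthless ftp://ftp.example.com/
@end example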
2145 @itemx no_parent = on
The simplest, and often very useful, way of limiting directories is
disallowing retrieval of the links that refer to the hierarchy
@dfn{above} the beginning directory, i.e. disallowing ascent to the
2149 parent directory/directories.
2151 The @samp{--no-parent} option (short @samp{-np}) is useful in this case.
2152 Using it guarantees that you will never leave the existing hierarchy.
2153 Supposing you issue Wget with:
2156 wget -r --no-parent http://somehost/~luzer/my-archive/
2159 You may rest assured that none of the references to
2160 @file{/~his-girls-homepage/} or @file{/~luzer/all-my-mpegs/} will be
2161 followed. Only the archive you are interested in will be downloaded.
2162 Essentially, @samp{--no-parent} is similar to
2163 @samp{-I/~luzer/my-archive}, only it handles redirections in a more
2164 intelligent fashion.
2167 @node Relative Links
2168 @section Relative Links
2169 @cindex relative links
2171 When @samp{-L} is turned on, only the relative links are ever followed.
Relative links are here defined as those that do not refer to the web
2173 server root. For example, these links are relative:
2177 <a href="foo/bar.gif">
2178 <a href="../foo/bar.gif">
2181 These links are not relative:
2185 <a href="/foo/bar.gif">
2186 <a href="http://www.server.com/foo/bar.gif">
2189 Using this option guarantees that recursive retrieval will not span
2190 hosts, even without @samp{-H}. In simple cases it also allows downloads
2191 to ``just work'' without having to convert links.
2193 This option is probably not very useful and might be removed in a future
2197 @section Following FTP Links
2198 @cindex following ftp links
2200 The rules for @sc{ftp} are somewhat specific, as it is necessary for
2201 them to be. @sc{ftp} links in @sc{html} documents are often included
2202 for purposes of reference, and it is often inconvenient to download them
2205 To have @sc{ftp} links followed from @sc{html} documents, you need to
2206 specify the @samp{--follow-ftp} option. Having done that, @sc{ftp}
2207 links will span hosts regardless of @samp{-H} setting. This is logical,
2208 as @sc{ftp} links rarely point to the same host where the @sc{http}
server resides. For similar reasons, the @samp{-L} option has no
2210 effect on such downloads. On the other hand, domain acceptance
2211 (@samp{-D}) and suffix rules (@samp{-A} and @samp{-R}) apply normally.
Also note that followed links to @sc{ftp} directories will not be
recursed into any further.
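For example, to recursively download a site (URL hypothetical) and also
fetch the @sc{ftp} links found in its pages, you could use:

@example
wget -r --follow-ftp http://www.example.com/
@end example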
2217 @chapter Time-Stamping
2218 @cindex time-stamping
2219 @cindex timestamping
2220 @cindex updating the archives
2221 @cindex incremental updating
2223 One of the most important aspects of mirroring information from the
2224 Internet is updating your archives.
2226 Downloading the whole archive again and again, just to replace a few
2227 changed files is expensive, both in terms of wasted bandwidth and money,
2228 and the time to do the update. This is why all the mirroring tools
2229 offer the option of incremental updating.
2231 Such an updating mechanism means that the remote server is scanned in
2232 search of @dfn{new} files. Only those new files will be downloaded in
2233 the place of the old ones.
A file is considered new if one of these two conditions is met:
2239 A file of that name does not already exist locally.
2242 A file of that name does exist, but the remote file was modified more
2243 recently than the local file.
2246 To implement this, the program needs to be aware of the time of last
2247 modification of both local and remote files. We call this information the
2248 @dfn{time-stamp} of a file.
Time-stamping in GNU Wget is turned on using the @samp{--timestamping}
(@samp{-N}) option, or through the @code{timestamping = on} directive in
2252 @file{.wgetrc}. With this option, for each file it intends to download,
2253 Wget will check whether a local file of the same name exists. If it
2254 does, and the remote file is older, Wget will not download it.
2256 If the local file does not exist, or the sizes of the files do not
2257 match, Wget will download the remote file no matter what the time-stamps
2261 * Time-Stamping Usage::
2262 * HTTP Time-Stamping Internals::
2263 * FTP Time-Stamping Internals::
2266 @node Time-Stamping Usage
2267 @section Time-Stamping Usage
2268 @cindex time-stamping usage
2269 @cindex usage, time-stamping
2271 The usage of time-stamping is simple. Say you would like to download a
2272 file so that it keeps its date of modification.
2275 wget -S http://www.gnu.ai.mit.edu/
2278 A simple @code{ls -l} shows that the time stamp on the local file equals
2279 the state of the @code{Last-Modified} header, as returned by the server.
2280 As you can see, the time-stamping info is preserved locally, even
2281 without @samp{-N} (at least for @sc{http}).
2283 Several days later, you would like Wget to check if the remote file has
2284 changed, and download it if it has.
2287 wget -N http://www.gnu.ai.mit.edu/
2290 Wget will ask the server for the last-modified date. If the local file
2291 has the same timestamp as the server, or a newer one, the remote file
2292 will not be re-fetched. However, if the remote file is more recent,
2293 Wget will proceed to fetch it.
2295 The same goes for @sc{ftp}. For example:
2298 wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
2301 (The quotes around that URL are to prevent the shell from trying to
2302 interpret the @samp{*}.)
2304 After download, a local directory listing will show that the timestamps
2305 match those on the remote server. Reissuing the command with @samp{-N}
2306 will make Wget re-fetch @emph{only} the files that have been modified
2307 since the last download.
2309 If you wished to mirror the GNU archive every week, you would use a
2310 command like the following, weekly:
2313 wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
2316 Note that time-stamping will only work for files for which the server
2317 gives a timestamp. For @sc{http}, this depends on getting a
2318 @code{Last-Modified} header. For @sc{ftp}, this depends on getting a
2319 directory listing with dates in a format that Wget can parse
2320 (@pxref{FTP Time-Stamping Internals}).
2322 @node HTTP Time-Stamping Internals
2323 @section HTTP Time-Stamping Internals
2324 @cindex http time-stamping
Time-stamping in @sc{http} is implemented by checking the
2327 @code{Last-Modified} header. If you wish to retrieve the file
2328 @file{foo.html} through @sc{http}, Wget will check whether
2329 @file{foo.html} exists locally. If it doesn't, @file{foo.html} will be
2330 retrieved unconditionally.
2332 If the file does exist locally, Wget will first check its local
2333 time-stamp (similar to the way @code{ls -l} checks it), and then send a
2334 @code{HEAD} request to the remote server, demanding the information on
2337 The @code{Last-Modified} header is examined to find which file was
2338 modified more recently (which makes it ``newer''). If the remote file
2339 is newer, it will be downloaded; if it is older, Wget will give
2340 up.@footnote{As an additional check, Wget will look at the
2341 @code{Content-Length} header, and compare the sizes; if they are not the
2342 same, the remote file will be downloaded no matter what the time-stamp
2345 When @samp{--backup-converted} (@samp{-K}) is specified in conjunction
2346 with @samp{-N}, server file @samp{@var{X}} is compared to local file
2347 @samp{@var{X}.orig}, if extant, rather than being compared to local file
2348 @samp{@var{X}}, which will always differ if it's been converted by
2349 @samp{--convert-links} (@samp{-k}).
2351 Arguably, @sc{http} time-stamping should be implemented using the
2352 @code{If-Modified-Since} request.
2354 @node FTP Time-Stamping Internals
2355 @section FTP Time-Stamping Internals
2356 @cindex ftp time-stamping
2358 In theory, @sc{ftp} time-stamping works much the same as @sc{http}, only
2359 @sc{ftp} has no headers---time-stamps must be ferreted out of directory
2362 If an @sc{ftp} download is recursive or uses globbing, Wget will use the
2363 @sc{ftp} @code{LIST} command to get a file listing for the directory
2364 containing the desired file(s). It will try to analyze the listing,
2365 treating it like Unix @code{ls -l} output, extracting the time-stamps.
2366 The rest is exactly the same as for @sc{http}. Note that when
2367 retrieving individual files from an @sc{ftp} server without using
2368 globbing or recursion, listing files will not be downloaded (and thus
2369 files will not be time-stamped) unless @samp{-N} is specified.
The assumption that every directory listing is a Unix-style listing may
2372 sound extremely constraining, but in practice it is not, as many
2373 non-Unix @sc{ftp} servers use the Unixoid listing format because most
2374 (all?) of the clients understand it. Bear in mind that @sc{rfc959}
2375 defines no standard way to get a file list, let alone the time-stamps.
2376 We can only hope that a future standard will define this.
Another non-standard solution is the use of the @code{MDTM} command
2379 that is supported by some @sc{ftp} servers (including the popular
2380 @code{wu-ftpd}), which returns the exact time of the specified file.
2381 Wget may support this command in the future.
2384 @chapter Startup File
2385 @cindex startup file
2391 Once you know how to change default settings of Wget through command
2392 line arguments, you may wish to make some of those settings permanent.
2393 You can do that in a convenient way by creating the Wget startup
2394 file---@file{.wgetrc}.
Although @file{.wgetrc} is the ``main'' initialization file, it is
convenient to have a special facility for storing passwords. Thus Wget
reads and interprets the contents of @file{$HOME/.netrc}, if it finds
it. You can find the @file{.netrc} format in your system manuals.
2401 Wget reads @file{.wgetrc} upon startup, recognizing a limited set of
2405 * Wgetrc Location:: Location of various wgetrc files.
2406 * Wgetrc Syntax:: Syntax of wgetrc.
2407 * Wgetrc Commands:: List of available commands.
2408 * Sample Wgetrc:: A wgetrc example.
2411 @node Wgetrc Location
2412 @section Wgetrc Location
2413 @cindex wgetrc location
2414 @cindex location of wgetrc
2416 When initializing, Wget will look for a @dfn{global} startup file,
2417 @file{/usr/local/etc/wgetrc} by default (or some prefix other than
2418 @file{/usr/local}, if Wget was not installed there) and read commands
2419 from there, if it exists.
Then it will look for the user's file. If the environment variable
@code{WGETRC} is set, Wget will try to load that file. Failing that, no
2423 further attempts will be made.
2425 If @code{WGETRC} is not set, Wget will try to load @file{$HOME/.wgetrc}.
2427 The fact that user's settings are loaded after the system-wide ones
2428 means that in case of collision user's wgetrc @emph{overrides} the
2429 system-wide wgetrc (in @file{/usr/local/etc/wgetrc} by default).
2430 Fascist admins, away!
2433 @section Wgetrc Syntax
2434 @cindex wgetrc syntax
2435 @cindex syntax of wgetrc
2437 The syntax of a wgetrc command is simple:
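@example
variable = value
@end example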
2443 The @dfn{variable} will also be called @dfn{command}. Valid
2444 @dfn{values} are different for different commands.
2446 The commands are case-insensitive and underscore-insensitive. Thus
2447 @samp{DIr__PrefiX} is the same as @samp{dirprefix}. Empty lines, lines
2448 beginning with @samp{#} and lines containing white-space only are
2451 Commands that expect a comma-separated list will clear the list on an
2452 empty command. So, if you wish to reset the rejection list specified in
2453 global @file{wgetrc}, you can do it with:
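@example
reject =
@end example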
2459 @node Wgetrc Commands
2460 @section Wgetrc Commands
2461 @cindex wgetrc commands
2463 The complete set of commands is listed below. Legal values are listed
2464 after the @samp{=}. Simple Boolean values can be set or unset using
2465 @samp{on} and @samp{off} or @samp{1} and @samp{0}.
2467 Some commands take pseudo-arbitrary values. @var{address} values can be
2468 hostnames or dotted-quad IP addresses. @var{n} can be any positive
2469 integer, or @samp{inf} for infinity, where appropriate. @var{string}
2470 values can be any non-empty string.
2472 Most of these commands have direct command-line equivalents. Also, any
2473 wgetrc command can be specified on the command line using the
2474 @samp{--execute} switch (@pxref{Basic Startup Options}.)
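For example (URL hypothetical), a wgetrc command can be supplied for a
single run like this:

@example
wget -e quota=50m -r http://www.example.com/
@end example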
2477 @item accept/reject = @var{string}
2478 Same as @samp{-A}/@samp{-R} (@pxref{Types of Files}).
2480 @item add_hostdir = on/off
2481 Enable/disable host-prefixed file names. @samp{-nH} disables it.
2483 @item continue = on/off
2484 If set to on, force continuation of preexistent partially retrieved
2485 files. See @samp{-c} before setting it.
2487 @item background = on/off
2488 Enable/disable going to background---the same as @samp{-b} (which
2491 @item backup_converted = on/off
2492 Enable/disable saving pre-converted files with the suffix
2493 @samp{.orig}---the same as @samp{-K} (which enables it).
2495 @c @item backups = @var{number}
2496 @c #### Document me!
2498 @item base = @var{string}
2499 Consider relative @sc{url}s in @sc{url} input files forced to be
2500 interpreted as @sc{html} as being relative to @var{string}---the same as
2501 @samp{--base=@var{string}}.
2503 @item bind_address = @var{address}
Bind to @var{address}, like @samp{--bind-address=@var{address}}.
2506 @item ca_certificate = @var{file}
2507 Set the certificate authority bundle file to @var{file}. The same
2508 as @samp{--ca-certificate=@var{file}}.
2510 @item ca_directory = @var{directory}
2511 Set the directory used for certificate authorities. The same as
2512 @samp{--ca-directory=@var{directory}}.
2514 @item cache = on/off
2515 When set to off, disallow server-caching. See the @samp{--no-cache}
2518 @item certificate = @var{file}
2519 Set the client certificate file name to @var{file}. The same as
2520 @samp{--certificate=@var{file}}.
2522 @item certificate_type = @var{string}
2523 Specify the type of the client certificate, legal values being
2524 @samp{PEM} (the default) and @samp{DER} (aka ASN1). The same as
2525 @samp{--certificate-type=@var{string}}.
2527 @item check_certificate = on/off
2528 If this is set to off, the server certificate is not checked against
the specified certificate authorities. The default is ``on''. The same as
2530 @samp{--check-certificate}.
2532 @item convert_links = on/off
2533 Convert non-relative links locally. The same as @samp{-k}.
2535 @item cookies = on/off
When set to off, disallow cookies. See the @samp{--no-cookies} option.
2538 @item connect_timeout = @var{n}
2539 Set the connect timeout---the same as @samp{--connect-timeout}.
2541 @item cut_dirs = @var{n}
2542 Ignore @var{n} remote directory components. Equivalent to
2543 @samp{--cut-dirs=@var{n}}.
2545 @item debug = on/off
2546 Debug mode, same as @samp{-d}.
2548 @item delete_after = on/off
2549 Delete after download---the same as @samp{--delete-after}.
2551 @item dir_prefix = @var{string}
2552 Top of directory tree---the same as @samp{-P @var{string}}.
2554 @item dirstruct = on/off
2555 Turning dirstruct on or off---the same as @samp{-x} or @samp{-nd},
2558 @item dns_cache = on/off
2559 Turn DNS caching on/off. Since DNS caching is on by default, this
2560 option is normally used to turn it off and is equivalent to
2561 @samp{--no-dns-cache}.
2563 @item dns_timeout = @var{n}
2564 Set the DNS timeout---the same as @samp{--dns-timeout}.
2566 @item domains = @var{string}
2567 Same as @samp{-D} (@pxref{Spanning Hosts}).
2569 @item dot_bytes = @var{n}
2570 Specify the number of bytes ``contained'' in a dot, as seen throughout
2571 the retrieval (1024 by default). You can postfix the value with
2572 @samp{k} or @samp{m}, representing kilobytes and megabytes,
2573 respectively. With dot settings you can tailor the dot retrieval to
2574 suit your needs, or you can use the predefined @dfn{styles}
2575 (@pxref{Download Options}).
2577 @item dots_in_line = @var{n}
2578 Specify the number of dots that will be printed in each line throughout
2579 the retrieval (50 by default).
2581 @item dot_spacing = @var{n}
2582 Specify the number of dots in a single cluster (10 by default).
2584 @item egd_file = @var{file}
Use @var{file} as the EGD socket file name. The same as
2586 @samp{--egd-file=@var{file}}.
2588 @item exclude_directories = @var{string}
2589 Specify a comma-separated list of directories you wish to exclude from
2590 download---the same as @samp{-X @var{string}} (@pxref{Directory-Based
2593 @item exclude_domains = @var{string}
2594 Same as @samp{--exclude-domains=@var{string}} (@pxref{Spanning
2597 @item follow_ftp = on/off
2598 Follow @sc{ftp} links from @sc{html} documents---the same as
2599 @samp{--follow-ftp}.
2601 @item follow_tags = @var{string}
2602 Only follow certain @sc{html} tags when doing a recursive retrieval,
2603 just like @samp{--follow-tags=@var{string}}.
2605 @item force_html = on/off
2606 If set to on, force the input filename to be regarded as an @sc{html}
2607 document---the same as @samp{-F}.
2609 @item ftp_password = @var{string}
2610 Set your @sc{ftp} password to @var{string}. Without this setting, the
2611 password defaults to @samp{-wget@@}, which is a useful default for
2612 anonymous @sc{ftp} access.
2614 This command used to be named @code{passwd} prior to Wget 1.10.
2616 @item ftp_proxy = @var{string}
2617 Use @var{string} as @sc{ftp} proxy, instead of the one specified in
2620 @item ftp_user = @var{string}
2621 Set @sc{ftp} user to @var{string}.
2623 This command used to be named @code{login} prior to Wget 1.10.
2626 Turn globbing on/off---the same as @samp{--glob} and @samp{--no-glob}.
2628 @item header = @var{string}
Define a header for HTTP downloads, like using
2630 @samp{--header=@var{string}}.
2632 @item html_extension = on/off
2633 Add a @samp{.html} extension to @samp{text/html} or
2634 @samp{application/xhtml+xml} files without it, like @samp{-E}.
2636 @item http_keep_alive = on/off
2637 Turn the keep-alive feature on or off (defaults to on). Turning it
2638 off is equivalent to @samp{--no-http-keep-alive}.
2640 @item http_password = @var{string}
2641 Set @sc{http} password, equivalent to
2642 @samp{--http-password=@var{string}}.
2644 @item http_proxy = @var{string}
2645 Use @var{string} as @sc{http} proxy, instead of the one specified in
2648 @item http_user = @var{string}
2649 Set @sc{http} user to @var{string}, equivalent to
2650 @samp{--http-user=@var{string}}.
2652 @item https_proxy = @var{string}
2653 Use @var{string} as @sc{https} proxy, instead of the one specified in
2656 @item ignore_case = on/off
2657 When set to on, match files and directories case insensitively; the
2658 same as @samp{--ignore-case}.
2660 @item ignore_length = on/off
2661 When set to on, ignore @code{Content-Length} header; the same as
2662 @samp{--ignore-length}.
2664 @item ignore_tags = @var{string}
2665 Ignore certain @sc{html} tags when doing a recursive retrieval, like
2666 @samp{--ignore-tags=@var{string}}.
2668 @item include_directories = @var{string}
2669 Specify a comma-separated list of directories you wish to follow when
2670 downloading---the same as @samp{-I @var{string}}.
2672 @item inet4_only = on/off
2673 Force connecting to IPv4 addresses, off by default. You can put this
2674 in the global init file to disable Wget's attempts to resolve and
2675 connect to IPv6 hosts. Available only if Wget was compiled with IPv6
2676 support. The same as @samp{--inet4-only} or @samp{-4}.
2678 @item inet6_only = on/off
2679 Force connecting to IPv6 addresses, off by default. Available only if
2680 Wget was compiled with IPv6 support. The same as @samp{--inet6-only}
2683 @item input = @var{file}
Read the @sc{url}s from @var{file}, like @samp{-i @var{file}}.
2686 @item limit_rate = @var{rate}
2687 Limit the download speed to no more than @var{rate} bytes per second.
2688 The same as @samp{--limit-rate=@var{rate}}.
2690 @item load_cookies = @var{file}
2691 Load cookies from @var{file}. See @samp{--load-cookies @var{file}}.
2693 @item logfile = @var{file}
2694 Set logfile to @var{file}, the same as @samp{-o @var{file}}.
2696 @item mirror = on/off
2697 Turn mirroring on/off. The same as @samp{-m}.
2699 @item netrc = on/off
2700 Turn reading netrc on or off.
2702 @item noclobber = on/off
2705 @item no_parent = on/off
2706 Disallow retrieving outside the directory hierarchy, like
2707 @samp{--no-parent} (@pxref{Directory-Based Limits}).
2709 @item no_proxy = @var{string}
2710 Use @var{string} as the comma-separated list of domains to avoid in
proxy loading, instead of the one specified in the environment.
2713 @item output_document = @var{file}
2714 Set the output filename---the same as @samp{-O @var{file}}.
2716 @item page_requisites = on/off
2717 Download all ancillary documents necessary for a single @sc{html} page to
2718 display properly---the same as @samp{-p}.
2720 @item passive_ftp = on/off
2721 Change setting of passive @sc{ftp}, equivalent to the
2722 @samp{--passive-ftp} option.
@item password = @var{string}
2725 Specify password @var{string} for both @sc{ftp} and @sc{http} file retrieval.
2726 This command can be overridden using the @samp{ftp_password} and
2727 @samp{http_password} command for @sc{ftp} and @sc{http} respectively.
2729 @item post_data = @var{string}
2730 Use POST as the method for all HTTP requests and send @var{string} in
2731 the request body. The same as @samp{--post-data=@var{string}}.
2733 @item post_file = @var{file}
2734 Use POST as the method for all HTTP requests and send the contents of
2735 @var{file} in the request body. The same as
2736 @samp{--post-file=@var{file}}.
2738 @item prefer_family = IPv4/IPv6/none
2739 When given a choice of several addresses, connect to the addresses
2740 with specified address family first. IPv4 addresses are preferred by
2741 default. The same as @samp{--prefer-family}, which see for a detailed
2742 discussion of why this is useful.
2744 @item private_key = @var{file}
2745 Set the private key file to @var{file}. The same as
2746 @samp{--private-key=@var{file}}.
2748 @item private_key_type = @var{string}
2749 Specify the type of the private key, legal values being @samp{PEM}
2750 (the default) and @samp{DER} (aka ASN1). The same as
@samp{--private-key-type=@var{string}}.
2753 @item progress = @var{string}
2754 Set the type of the progress indicator. Legal types are @samp{dot}
2755 and @samp{bar}. Equivalent to @samp{--progress=@var{string}}.
2757 @item protocol_directories = on/off
2758 When set, use the protocol name as a directory component of local file
2759 names. The same as @samp{--protocol-directories}.
2761 @item proxy_user = @var{string}
2762 Set proxy authentication user name to @var{string}, like
2763 @samp{--proxy-user=@var{string}}.
2765 @item proxy_password = @var{string}
2766 Set proxy authentication password to @var{string}, like
2767 @samp{--proxy-password=@var{string}}.
2769 @item quiet = on/off
2770 Quiet mode---the same as @samp{-q}.
2772 @item quota = @var{quota}
2773 Specify the download quota, which is useful to put in the global
2774 @file{wgetrc}. When download quota is specified, Wget will stop
retrieving after the download sum has become greater than the quota. The
quota can be specified in bytes (default), kbytes (@samp{k} appended) or
2777 mbytes (@samp{m} appended). Thus @samp{quota = 5m} will set the quota
2778 to 5 megabytes. Note that the user's startup file overrides system
2781 @item random_file = @var{file}
2782 Use @var{file} as a source of randomness on systems lacking
2785 @item random_wait = on/off
2786 Turn random between-request wait times on or off. The same as
2787 @samp{--random-wait}.
2789 @item read_timeout = @var{n}
2790 Set the read (and write) timeout---the same as
2791 @samp{--read-timeout=@var{n}}.
2793 @item reclevel = @var{n}
2794 Recursion level (depth)---the same as @samp{-l @var{n}}.
2796 @item recursive = on/off
2797 Recursive on/off---the same as @samp{-r}.
2799 @item referer = @var{string}
2800 Set HTTP @samp{Referer:} header just like
2801 @samp{--referer=@var{string}}. (Note it was the folks who wrote the
2802 @sc{http} spec who got the spelling of ``referrer'' wrong.)
2804 @item relative_only = on/off
2805 Follow only relative links---the same as @samp{-L} (@pxref{Relative
2808 @item remove_listing = on/off
2809 If set to on, remove @sc{ftp} listings downloaded by Wget. Setting it
2810 to off is the same as @samp{--no-remove-listing}.
2812 @item restrict_file_names = unix/windows
2813 Restrict the file names generated by Wget from URLs. See
2814 @samp{--restrict-file-names} for a more detailed description.
2816 @item retr_symlinks = on/off
2817 When set to on, retrieve symbolic links as if they were plain files; the
2818 same as @samp{--retr-symlinks}.
2820 @item retry_connrefused = on/off
2821 When set to on, consider ``connection refused'' a transient
2822 error---the same as @samp{--retry-connrefused}.
2824 @item robots = on/off
2825 Specify whether the norobots convention is respected by Wget, ``on'' by
2826 default. This switch controls both the @file{/robots.txt} and the
2827 @samp{nofollow} aspect of the spec. @xref{Robot Exclusion}, for more
2828 details about this. Be sure you know what you are doing before turning
2831 @item save_cookies = @var{file}
2832 Save cookies to @var{file}. The same as @samp{--save-cookies
2835 @item secure_protocol = @var{string}
2836 Choose the secure protocol to be used. Legal values are @samp{auto}
2837 (the default), @samp{SSLv2}, @samp{SSLv3}, and @samp{TLSv1}. The same
2838 as @samp{--secure-protocol=@var{string}}.
2840 @item server_response = on/off
2841 Choose whether or not to print the @sc{http} and @sc{ftp} server
2842 responses---the same as @samp{-S}.
2844 @item span_hosts = on/off
2847 @item strict_comments = on/off
2848 Same as @samp{--strict-comments}.
2850 @item timeout = @var{n}
2851 Set all applicable timeout values to @var{n}, the same as @samp{-T
2854 @item timestamping = on/off
2855 Turn timestamping on/off. The same as @samp{-N} (@pxref{Time-Stamping}).
2857 @item tries = @var{n}
2858 Set number of retries per @sc{url}---the same as @samp{-t @var{n}}.
2860 @item use_proxy = on/off
2861 When set to off, don't use proxy even when proxy-related environment
2862 variables are set. In that case it is the same as using
2865 @item user = @var{string}
2866 Specify username @var{string} for both @sc{ftp} and @sc{http} file retrieval.
2867 This command can be overridden using the @samp{ftp_user} and
2868 @samp{http_user} command for @sc{ftp} and @sc{http} respectively.
2870 @item verbose = on/off
2871 Turn verbose on/off---the same as @samp{-v}/@samp{-nv}.
2873 @item wait = @var{n}
2874 Wait @var{n} seconds between retrievals---the same as @samp{-w
2877 @item waitretry = @var{n}
2878 Wait up to @var{n} seconds between retries of failed retrievals
2879 only---the same as @samp{--waitretry=@var{n}}. Note that this is
2880 turned on by default in the global @file{wgetrc}.
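To illustrate the syntax, here is what a local @file{.wgetrc} combining a few of
the commands described above might look like (the values are examples only, not
recommendations):

@example
# Illustrative ~/.wgetrc -- uncommented lines take effect
tries = 45
wait = 2
waitretry = 10
timestamping = on
@end example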
2884 @section Sample Wgetrc
2885 @cindex sample wgetrc
2887 This is the sample initialization file, as given in the distribution.
2888 It is divided into two sections---one for global usage (suitable for the global
2889 startup file), and one for local usage (suitable for
2890 @file{$HOME/.wgetrc}). Be careful about the things you change.
2892 Note that almost all the lines are commented out. For a command to have
2893 any effect, you must remove the @samp{#} character at the beginning of
2897 @include sample.wgetrc.munged_for_texi_inclusion
2904 @c man begin EXAMPLES
2905 The examples are divided into three sections loosely based on their complexity.
2909 * Simple Usage:: Simple, basic usage of the program.
2910 * Advanced Usage:: Advanced tips.
2911 * Very Advanced Usage:: The hairy stuff.
2915 @section Simple Usage
2919 Say you want to download a @sc{url}. Just type:
2922 wget http://fly.srk.fer.hr/
2926 But what will happen if the connection is slow, and the file is lengthy?
2927 The connection will probably fail more than once before the whole file is
2928 retrieved. In this case, Wget will try getting the file until it
2929 either gets the whole of it, or exceeds the default number of retries
2930 (this being 20). It is easy to change the number of tries to 45, to
2931 ensure that the whole file will arrive safely:
2934 wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
2938 Now let's leave Wget to work in the background, and write its progress
2939 to log file @file{log}. It is tiring to type @samp{--tries}, so we
2940 shall use @samp{-t}.
2943 wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
2946 The ampersand at the end of the line makes sure that Wget works in the
2947 background. To unlimit the number of retries, use @samp{-t inf}.
2950 Using @sc{ftp} is just as simple. Wget will take care of login and password.
2954 wget ftp://gnjilux.srk.fer.hr/welcome.msg
2958 If you specify a directory, Wget will retrieve the directory listing,
2959 parse it and convert it to @sc{html}. Try:
2962 wget ftp://ftp.gnu.org/pub/gnu/
2967 @node Advanced Usage
2968 @section Advanced Usage
2972 You have a file that contains the URLs you want to download? Use the @samp{-i} switch.
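For instance (the file name @file{url-list.txt} is only an illustration), assuming
the file contains one @sc{url} per line:

@example
wget -i url-list.txt
@end example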
2979 If you specify @samp{-} as the file name, the @sc{url}s will be read from standard input.
2983 Create a five levels deep mirror image of the GNU web site, with the
2984 same directory structure the original has, with only one try per
2985 document, saving the log of the activities to @file{gnulog}:
2988 wget -r http://www.gnu.org/ -o gnulog
2992 The same as the above, but convert the links in the @sc{html} files to
2993 point to local files, so you can view the documents off-line:
2996 wget --convert-links -r http://www.gnu.org/ -o gnulog
3000 Retrieve only one @sc{html} page, but make sure that all the elements needed
3001 for the page to be displayed, such as inline images and external style
3002 sheets, are also downloaded. Also make sure the downloaded page
3003 references the downloaded links.
3006 wget -p --convert-links http://www.server.com/dir/page.html
3009 The @sc{html} page will be saved to @file{www.server.com/dir/page.html}, and
3010 the images, stylesheets, etc., somewhere under @file{www.server.com/},
3011 depending on where they were on the remote server.
3014 The same as the above, but without the @file{www.server.com/} directory.
3015 In fact, I don't want to have all those random server directories
3016 anyway---just save @emph{all} those files under a @file{download/}
3017 subdirectory of the current directory.
3020 wget -p --convert-links -nH -nd -Pdownload \
3021 http://www.server.com/dir/page.html
3025 Retrieve the index.html of @samp{www.lycos.com}, showing the original server headers:
3029 wget -S http://www.lycos.com/
3033 Save the server headers with the file, perhaps for post-processing.
3036 wget --save-headers http://www.lycos.com/
3041 Retrieve the first two levels of @samp{wuarchive.wustl.edu}, saving them to @file{/tmp}:
3045 wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
3049 You want to download all the @sc{gif}s from a directory on an @sc{http}
3050 server. You tried @samp{wget http://www.server.com/dir/*.gif}, but that
3051 didn't work because @sc{http} retrieval does not support globbing. In that case, use:
3055 wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
3058 More verbose, but the effect is the same. @samp{-r -l1} means to
3059 retrieve recursively (@pxref{Recursive Download}), with maximum depth
3060 of 1. @samp{--no-parent} means that references to the parent directory
3061 are ignored (@pxref{Directory-Based Limits}), and @samp{-A.gif} means to
3062 download only the @sc{gif} files. @samp{-A "*.gif"} would have worked too.
3066 Suppose you were in the middle of downloading, when Wget was
3067 interrupted. Now you do not want to clobber the files already present.
3071 wget -nc -r http://www.gnu.org/
3075 If you want to encode your own username and password to @sc{http} or
3076 @sc{ftp}, use the appropriate @sc{url} syntax (@pxref{URL Format}).
3079 wget ftp://hniksic:mypassword@@unix.server.com/.emacs
3082 Note, however, that this usage is not advisable on multi-user systems
3083 because it reveals your password to anyone who looks at the output of @code{ps}.
3086 @cindex redirecting output
3088 You would like the output documents to go to standard output instead of to files?
3092 wget -O - http://jagor.srce.hr/ http://www.srce.hr/
3095 You can also combine the two options and make pipelines to retrieve the
3096 documents from remote hotlists:
3099 wget -O - http://cool.list.com/ | wget --force-html -i -
3103 @node Very Advanced Usage
3104 @section Very Advanced Usage
3109 If you wish Wget to keep a mirror of a page (or @sc{ftp}
3110 subdirectories), use @samp{--mirror} (@samp{-m}), which is the shorthand
3111 for @samp{-r -l inf -N}. You can put Wget in the crontab file asking it
3112 to recheck a site each Sunday:
3116 0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
3120 In addition to the above, you want the links to be converted for local
3121 viewing. But, after having read this manual, you know that link
3122 conversion doesn't play well with timestamping, so you also want Wget to
3123 back up the original @sc{html} files before the conversion. Wget invocation
3124 would look like this:
3127 wget --mirror --convert-links --backup-converted \
3128 http://www.gnu.org/ -o /home/me/weeklog
3132 But you've also noticed that local viewing doesn't work all that well
3133 when @sc{html} files are saved under extensions other than @samp{.html},
3134 perhaps because they were served as @file{index.cgi}. So you'd like
3135 Wget to rename all the files served with content-type @samp{text/html}
3136 or @samp{application/xhtml+xml} to @file{@var{name}.html}.
3139 wget --mirror --convert-links --backup-converted \
3140 --html-extension -o /home/me/weeklog \
     http://www.gnu.org/
3144 Or, with less typing:
3147 wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
3156 This chapter contains all the stuff that could not fit anywhere else.
3159 * Proxies:: Support for proxy servers
3160 * Distribution:: Getting the latest version.
3161 * Mailing List:: Wget mailing list for announcements and discussion.
3162 * Reporting Bugs:: How and where to report bugs.
3163 * Portability:: The systems Wget works on.
3164 * Signals:: Signal-handling performed by Wget.
3171 @dfn{Proxies} are special-purpose @sc{http} servers designed to transfer
3172 data from remote servers to local clients. One typical use of proxies
3173 is lightening network load for users behind a slow connection. This is
3174 achieved by channeling all @sc{http} and @sc{ftp} requests through the
3175 proxy, which caches the transferred data. When a cached resource is
3176 requested again, the proxy will return the data from its cache. Another use for
3177 proxies is for companies that separate (for security reasons) their
3178 internal networks from the rest of the Internet. In order to obtain
3179 information from the Web, their users connect and retrieve remote data
3180 using an authorized proxy.
3182 Wget supports proxies for both @sc{http} and @sc{ftp} retrievals. The
3183 standard way to specify proxy location, which Wget recognizes, is using
3184 the following environment variables:
3189 If set, the @code{http_proxy} and @code{https_proxy} variables should
3190 contain the @sc{url}s of the proxies for @sc{http} and @sc{https}
3191 connections respectively.
3194 This variable should contain the @sc{url} of the proxy for @sc{ftp}
3195 connections. It is quite common that @code{http_proxy} and
3196 @code{ftp_proxy} are set to the same @sc{url}.
3199 This variable should contain a comma-separated list of domain extensions
3200 the proxy should @emph{not} be used for. For instance, if the value of
3201 @code{no_proxy} is @samp{.mit.edu}, the proxy will not be used to retrieve documents from MIT.
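For example, in a Bourne-style shell the variables might be set like this
(@samp{proxy.example.com} and the port are placeholders, not a real proxy):

@example
export http_proxy=http://proxy.example.com:8080/
export ftp_proxy=http://proxy.example.com:8080/
export no_proxy=.mit.edu
@end example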
3205 In addition to the environment variables, proxy location and settings
3206 may be specified from within Wget itself.
3210 @itemx proxy = on/off
3211 This option and the corresponding command may be used to suppress the
3212 use of a proxy, even if the appropriate environment variables are set.
3214 @item http_proxy = @var{URL}
3215 @itemx https_proxy = @var{URL}
3216 @itemx ftp_proxy = @var{URL}
3217 @itemx no_proxy = @var{string}
3218 These startup file variables allow you to override the proxy settings
3219 specified by the environment.
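For instance, the same placeholder proxy used in the shell example above could
be configured in @file{.wgetrc} instead of the environment:

@example
http_proxy = http://proxy.example.com:8080/
ftp_proxy = http://proxy.example.com:8080/
no_proxy = .mit.edu
@end example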
3222 Some proxy servers require authorization to enable you to use them. The
3223 authorization consists of @dfn{username} and @dfn{password}, which must
3224 be sent by Wget. As with @sc{http} authorization, several
3225 authentication schemes exist. For proxy authorization only the
3226 @code{Basic} authentication scheme is currently implemented.
3228 You may specify your username and password either through the proxy
3229 @sc{url} or through the command-line options. Assuming that the
3230 company's proxy is located at @samp{proxy.company.com} at port 8001, a
3231 proxy @sc{url} location containing authorization data might look like this:
3235 http://hniksic:mypassword@@proxy.company.com:8001/
3238 Alternatively, you may use the @samp{proxy-user} and
3239 @samp{proxy-password} options, and the equivalent @file{.wgetrc}
3240 settings @code{proxy_user} and @code{proxy_password} to set the proxy
3241 username and password.
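For instance, reusing the illustrative credentials from the @sc{ftp} example
earlier in this manual, the command-line form might be:

@example
wget --proxy-user=hniksic --proxy-password=mypassword \
     http://www.server.com/dir/page.html
@end example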
3244 @section Distribution
3245 @cindex latest version
3247 Like all GNU utilities, the latest version of Wget can be found at the
3248 master GNU archive site ftp.gnu.org, and its mirrors. For example,
3249 Wget @value{VERSION} can be found at
3250 @url{ftp://ftp.gnu.org/pub/gnu/wget/wget-@value{VERSION}.tar.gz}
3253 @section Mailing List
3254 @cindex mailing list
3257 There are several Wget-related mailing lists, all hosted by
3258 SunSITE.dk. The general discussion list is at
3259 @email{wget@@sunsite.dk}. It is the preferred place for bug reports
3260 and suggestions, as well as for discussion of development. You are
3261 invited to subscribe.
3263 To subscribe, simply send mail to @email{wget-subscribe@@sunsite.dk}
3264 and follow the instructions. Unsubscribe by mailing to
3265 @email{wget-unsubscribe@@sunsite.dk}. The mailing list is archived at
3266 @url{http://www.mail-archive.com/wget%40sunsite.dk/} and at
3267 @url{http://news.gmane.org/gmane.comp.web.wget.general}.
3269 The second mailing list is at @email{wget-patches@@sunsite.dk}, and is
3270 used to submit patches for review by Wget developers. A ``patch'' is
3271 a textual representation of a change to source code, readable by both
3272 humans and programs. The file @file{PATCHES} that comes with Wget
3273 covers the creation and submission of patches in detail. Please don't
3274 send general suggestions or bug reports to @samp{wget-patches}; use it
3275 only for patch submissions.
3277 To subscribe, simply send mail to @email{wget-subscribe@@sunsite.dk}
3278 and follow the instructions. Unsubscribe by mailing to
3279 @email{wget-unsubscribe@@sunsite.dk}. The mailing list is archived at
3280 @url{http://news.gmane.org/gmane.comp.web.wget.patches}.
3282 @node Reporting Bugs
3283 @section Reporting Bugs
3285 @cindex reporting bugs
3289 You are welcome to send bug reports about GNU Wget to
3290 @email{bug-wget@@gnu.org}.
3292 Before actually submitting a bug report, please try to follow a few simple guidelines.
3297 Please try to ascertain that the behavior you see really is a bug. If
3298 Wget crashes, it's a bug. If Wget does not behave as documented,
3299 it's a bug. If things work strangely, but you are not sure about the way
3300 they are supposed to work, it might well be a bug.
3303 Try to repeat the bug in circumstances as simple as possible. E.g. if
3304 Wget crashes while downloading @samp{wget -rl0 -kKE -t5 -Y0
3305 http://yoyodyne.com -o /tmp/log}, you should try to see if the crash is
3306 repeatable, and if it will occur with a simpler set of options. You might
3307 even try to start the download at the page where the crash occurred to
3308 see if that page somehow triggered the crash.
3310 Also, while I will probably be interested to know the contents of your
3311 @file{.wgetrc} file, just dumping it into the debug message is probably
3312 a bad idea. Instead, you should first try to see if the bug repeats
3313 with @file{.wgetrc} moved out of the way. Only if it turns out that
3314 @file{.wgetrc} settings affect the bug, mail me the relevant parts of the file.
3318 Please start Wget with the @samp{-d} option and send us the resulting
3319 output (or relevant parts thereof). If Wget was compiled without
3320 debug support, recompile it---it is @emph{much} easier to trace bugs
3321 with debug support on.
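For example, one way to capture the debug output in a log file (the @sc{url}
here is just a placeholder) is:

@example
wget -d -o wget-debug.log http://www.server.com/dir/page.html
@end example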
3323 Note: please make sure to remove any potentially sensitive information
3324 from the debug log before sending it to the bug address. The
3325 @code{-d} option won't go out of its way to collect sensitive information,
3326 but the log @emph{will} contain a fairly complete transcript of Wget's
3327 communication with the server, which may include passwords and pieces
3328 of downloaded data. Since the bug address is publicly archived, you
3329 may assume that all bug reports are visible to the public.
3332 If Wget has crashed, try to run it in a debugger, e.g. @code{gdb `which
3333 wget` core} and type @code{where} to get the backtrace. This may not
3334 work if the system administrator has disabled core files, but it is safe to try.
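A typical session might look roughly like this (assuming the crash left a file
named @file{core} in the current directory):

@example
$ gdb `which wget` core
(gdb) where
@end example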
3340 @section Portability
3342 @cindex operating systems
3344 Like all GNU software, Wget works on the GNU system. However, since it
3345 uses GNU Autoconf for building and configuring, and mostly avoids using
3346 ``special'' features of any particular Unix, it should compile (and
3347 work) on all common Unix flavors.
3349 Various Wget versions have been compiled and tested under many kinds
3350 of Unix systems, including GNU/Linux, Solaris, SunOS 4.x, OSF (aka
3351 Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some of
3352 those systems are no longer in widespread use and may not be able to
3353 support recent versions of Wget. If Wget fails to compile on your
3354 system, we would like to know about it.
3356 Thanks to kind contributors, this version of Wget compiles and works
3357 on 32-bit Microsoft Windows platforms. It has been compiled
3358 successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC
3359 compilers. Naturally, it lacks some of the features available on
3360 Unix, but it should work as a substitute for people stuck with
3361 Windows. Note that Windows-specific portions of Wget are not
3362 guaranteed to be supported in the future, although this has been the
3363 case in practice for many years now. All questions and problems in
3364 Windows usage should be reported to the Wget mailing list at
3365 @email{wget@@sunsite.dk} where the volunteers who maintain the
3366 Windows-related features might look at them.
3370 @cindex signal handling
3373 Since the purpose of Wget is background work, it catches the hangup
3374 signal (@code{SIGHUP}) and ignores it. If the output was on standard
3375 output, it will be redirected to a file named @file{wget-log}.
3376 Otherwise, @code{SIGHUP} is ignored. This is convenient when you wish
3377 to redirect the output of Wget after having started it.
3380 $ wget http://www.gnus.org/dist/gnus.tar.gz &
$ kill -HUP %%
3383 SIGHUP received, redirecting output to `wget-log'.
3386 Other than that, Wget will not try to interfere with signals in any way.
3387 @kbd{C-c}, @code{kill -TERM} and @code{kill -KILL} should kill it alike.
3392 This chapter contains some references I consider useful.
3395 * Robot Exclusion:: Wget's support for RES.
3396 * Security Considerations:: Security with Wget.
3397 * Contributors:: People who helped.
3400 @node Robot Exclusion
3401 @section Robot Exclusion
3402 @cindex robot exclusion
3404 @cindex server maintenance
3406 It is extremely easy to make Wget wander aimlessly around a web site,
3407 sucking all the available data in the process. @samp{wget -r @var{site}},
3408 and you're set. Great? Not for the server admin.
3410 As long as Wget is only retrieving static pages, and doing it at a
3411 reasonable rate (see the @samp{--wait} option), there's not much of a
3412 problem. The trouble is that Wget can't tell the difference between the
3413 smallest static page and the most demanding CGI. A site I know has a
3414 section handled by a CGI Perl script that converts Info files to @sc{html} on
3415 the fly. The script is slow, but works well enough for human users
3416 viewing an occasional Info file. However, when someone's recursive Wget
3417 download stumbles upon the index page that links to all the Info files
3418 through the script, the system is brought to its knees without providing
3419 anything useful to the user. (This task of converting Info files could be
3420 done locally; access to Info documentation for all installed GNU
3421 software on a system is available from the @code{info} command.)
3423 To avoid this kind of accident, as well as to preserve privacy for
3424 documents that need to be protected from well-behaved robots, the
3425 concept of @dfn{robot exclusion} was invented. The idea is that
3426 the server administrators and document authors can specify which
3427 portions of the site they wish to protect from robots and those to which
3428 they will permit access.
3430 The most popular mechanism, and the @i{de facto} standard supported by
3431 all the major robots, is the ``Robots Exclusion Standard'' (RES) written
3432 by Martijn Koster et al. in 1994. It specifies the format of a text
3433 file containing directives that instruct the robots which URL paths to
3434 avoid. To be found by the robots, the specifications must be placed in
3435 @file{/robots.txt} in the server root, which the robots are expected to download and parse.
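For illustration, a minimal @file{/robots.txt} that asks all robots to stay away
from a @file{/cgi-bin/} directory (the path is only an example) would look like
this:

@example
User-agent: *
Disallow: /cgi-bin/
@end example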
3438 Although Wget is not a web robot in the strictest sense of the word, it
3439 can download large parts of the site without the user's intervention to
3440 download an individual page. Because of that, Wget honors RES when
3441 downloading recursively. For instance, when you issue:
3444 wget -r http://www.server.com/
3447 First the index of @samp{www.server.com} will be downloaded. If Wget
3448 finds that it wants to download more documents from that server, it will
3449 request @samp{http://www.server.com/robots.txt} and, if found, use it
3450 for further downloads. @file{robots.txt} is loaded only once per server.
3453 Until version 1.8, Wget supported the first version of the standard,
3454 written by Martijn Koster in 1994 and available at
3455 @url{http://www.robotstxt.org/wc/norobots.html}. As of version 1.8,
3456 Wget has supported the additional directives specified in the internet
3457 draft @samp{<draft-koster-robots-00.txt>} titled ``A Method for Web
3458 Robots Control''. The draft, which has as far as I know never made it to
3459 an @sc{rfc}, is available at
3460 @url{http://www.robotstxt.org/wc/norobots-rfc.txt}.
3462 This manual no longer includes the text of the Robot Exclusion Standard.
3464 The second, lesser-known mechanism enables the author of an individual
3465 document to specify whether they want the links from the file to be
3466 followed by a robot. This is achieved using the @code{META} tag, like
3470 <meta name="robots" content="nofollow">
3473 This is explained in some detail at
3474 @url{http://www.robotstxt.org/wc/meta-user.html}. Wget supports this
3475 method of robot exclusion in addition to the usual @file{/robots.txt} exclusion.
3478 If you know what you are doing and really really wish to turn off the
3479 robot exclusion, set the @code{robots} variable to @samp{off} in your
3480 @file{.wgetrc}. You can achieve the same effect from the command line
3481 using the @code{-e} switch, e.g. @samp{wget -e robots=off @var{url}...}.
3483 @node Security Considerations
3484 @section Security Considerations
3487 When using Wget, you must be aware that it sends unencrypted passwords
3488 through the network, which may present a security problem. Here are the
3489 main issues, and some solutions.
3493 The passwords on the command line are visible using @code{ps}. The best
3494 way around it is to use @code{wget -i -} and feed the @sc{url}s to
3495 Wget's standard input, each on a separate line, terminated by @kbd{C-d}.
3496 Another workaround is to use @file{.netrc} to store passwords; however,
3497 storing unencrypted passwords is also considered a security risk.
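For instance, with the @code{wget -i -} approach the @sc{url} (and the password
embedded in it) never shows up in the process list; reusing the illustrative
@sc{ftp} example from earlier, you would type:

@example
wget -i -
ftp://hniksic:mypassword@@unix.server.com/.emacs
@end example

@noindent
and then terminate the input with @kbd{C-d}.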
3500 With the insecure @dfn{basic} authentication scheme, unencrypted
3501 passwords are transmitted through network routers and gateways.
3504 The @sc{ftp} passwords are also in no way encrypted. There is no good
3505 solution for this at the moment.
3508 Although the ``normal'' output of Wget tries to hide the passwords,
3509 debugging logs show them, in all forms. This problem is avoided by
3510 being careful when you send debug logs (yes, even when you send them to me).
3515 @section Contributors
3516 @cindex contributors
3519 GNU Wget was written by Hrvoje Nik@v{s}i@'{c} @email{hniksic@@xemacs.org}.
3524 However, its development could never have gone as far as it has, were it
3525 not for the help of many people, either with bug reports, feature
3526 proposals, patches, or letters saying ``Thanks!''.
3528 Special thanks goes to the following people (no particular order):
3531 @item Mauro Tortonesi---contributed high-quality IPv6 code and many
3534 @item Dan Harkless---contributed a lot of code and documentation of
3535 extremely high quality, as well as the @code{--page-requisites} and
3536 related options. He was the principal maintainer for some time and
3539 @item Ian Abbott---contributed bug fixes, Windows-related fixes, and
3540 provided a prototype implementation of the breadth-first recursive
3541 download. Co-maintained Wget during the 1.8 release cycle.
3544 The dotsrc.org crew, in particular Karsten Thygesen---donated system
3545 resources such as the mailing list, web space, @sc{ftp} space, and
3546 version control repositories, along with a lot of time to make these
3547 actually work. Christian Reiniger was of invaluable help with setting
3551 Heiko Herold---provided high-quality Windows builds and contributed
3552 bug and build reports for many years.
3555 Shawn McHorse---bug reports and patches.
3558 Kaveh R. Ghazi---on-the-fly @code{ansi2knr}-ization. Lots of portability fixes.
3562 Gordon Matzigkeit---@file{.netrc} support.
3566 Zlatko @v{C}alu@v{s}i@'{c}, Tomislav Vujec and Dra@v{z}en
3567 Ka@v{c}ar---feature suggestions and ``philosophical'' discussions.
3575 Darko Budor---initial port to Windows.
3578 Antonio Rosella---help and suggestions, plus the initial Italian translation.
3583 Tomislav Petrovi@'{c}, Mario Miko@v{c}evi@'{c}---many bug reports and suggestions.
3592 Fran@,{c}ois Pinard---many thorough bug reports and discussions.
3599 Karl Eichwalder---lots of help with internationalization, Makefile
3600 layout and many other things.
3603 Junio Hamano---donated support for Opie and @sc{http} @code{Digest} authentication.
3607 People who provided donations for development---including Brian Gough.
3610 The following people have provided patches, bug/build reports, useful
3611 suggestions, beta testing services, fan mail and all the other things
3612 that make maintenance so much fun:
3631 Kristijan @v{C}onka@v{s},
3640 Bertrand Demiddelaer,
3653 Aleksandar Erkalovi@'{c},
3675 Erik Magnus Hulthen,
3694 Goran Kezunovi@'{c},
3707 $\Sigma\acute{\iota}\mu o\varsigma\;
3708 \Xi\varepsilon\nu\iota\tau\acute{\epsilon}\lambda\lambda\eta\varsigma$
3709 (Simos KSenitellis),
3718 Nicol@'{a}s Lichtmeier,
3724 Alexander V. Lukyanov,
3764 @c Texinfo doesn't grok @'{@i}, so we have to use TeX itself.
3766 Juan Jos\'{e} Rodr\'{\i}guez,
3787 Szakacsits Szabolcs,
3801 Douglas E. Wegscheid,
3812 Apologies to all whom I accidentally left out, and many thanks to all the
3813 subscribers of the Wget mailing list.
3820 @cindex free software
3822 GNU Wget is licensed under the GNU General Public License (GNU GPL),
3823 which makes it @dfn{free software}. Please note that ``free'' in ``free
3824 software'' refers to liberty, not price. As some people like to point
3825 out, it's the ``free'' of ``free speech'', not the ``free'' of ``free beer''.
3828 The exact and legally binding distribution terms are spelled out below.
3829 The GPL guarantees that you have the right (freedom) to run and change
3830 GNU Wget and distribute it to others, and even---if you want---charge
3831 money for doing any of those things. With these rights comes the
3832 obligation to distribute the source code along with the software and to
3833 grant your recipients the same rights and impose the same restrictions.
3835 This licensing model is also known as @dfn{open source} because it,
3836 among other things, makes sure that all recipients will receive the
3837 source code along with the program, and be able to improve it. The GNU
3838 project prefers the term ``free software'' for reasons outlined at
3839 @url{http://www.gnu.org/philosophy/free-software-for-freedom.html}.
3841 The exact license terms are defined by this paragraph and the GNU
3842 General Public License it refers to:
3845 GNU Wget is free software; you can redistribute it and/or modify it
3846 under the terms of the GNU General Public License as published by the
3847 Free Software Foundation; either version 2 of the License, or (at your
3848 option) any later version.
3850 GNU Wget is distributed in the hope that it will be useful, but WITHOUT
3851 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
3852 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
3855 A copy of the GNU General Public License is included as part of this
3856 manual; if you did not receive it, write to the Free Software
3857 Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
3860 In addition to this, this manual is free in the same sense:
3863 Permission is granted to copy, distribute and/or modify this document
3864 under the terms of the GNU Free Documentation License, Version 1.2 or
3865 any later version published by the Free Software Foundation; with the
3866 Invariant Sections being ``GNU General Public License'' and ``GNU Free
3867 Documentation License'', with no Front-Cover Texts, and with no
3868 Back-Cover Texts. A copy of the license is included in the section
3869 entitled ``GNU Free Documentation License''.
3872 @c #### Maybe we should wrap these licenses in ifinfo? Stallman says
3873 @c that the GFDL needs to be present in the manual, and to me it would
3874 @c suck to include the license for the manual and not the license for
3877 The full texts of the GNU General Public License and of the GNU Free
3878 Documentation License are available below.
3881 * GNU General Public License::
3882 * GNU Free Documentation License::
3890 @unnumbered Concept Index