1 \input texinfo @c -*-texinfo-*-
7 @settitle GNU Wget @value{VERSION} Manual
8 @c Disable the monstrous rectangles beside overfull hbox-es.
10 @c Use `odd' to print double-sided.
15 @c Remove this if you don't use A4 paper.
19 @c Title for man page. The weird way texi2pod.pl is written requires
20 @c the preceding @set.
22 @c man title Wget The non-interactive network downloader.
24 @dircategory Network Applications
26 * Wget: (wget). The non-interactive network downloader.
30 This file documents the GNU Wget utility for downloading network
33 @c man begin COPYRIGHT
34 Copyright @copyright{} 1996, 1997, 1998, 1999, 2000, 2001, 2002,
35 2003, 2004, 2005, 2006, 2007, 2008 Free Software Foundation, Inc.
38 Permission is granted to make and distribute verbatim copies of
39 this manual provided the copyright notice and this permission notice
40 are preserved on all copies.
44 Permission is granted to process this file through TeX and print the
45 results, provided the printed document carries a copying permission
46 notice identical to this one except for the removal of this paragraph
47 (this paragraph not being relevant to the printed manual).
49 Permission is granted to copy, distribute and/or modify this document
50 under the terms of the GNU Free Documentation License, Version 1.2 or
51 any later version published by the Free Software Foundation; with no
52 Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
53 copy of the license is included in the section entitled ``GNU Free
54 Documentation License''.
59 @title GNU Wget @value{VERSION}
60 @subtitle The non-interactive download utility
61 @subtitle Updated for Wget @value{VERSION}, @value{UPDATED}
62 @author by Hrvoje Nik@v{s}i@'{c} and others
66 Originally written by Hrvoje Niksic <hniksic@xemacs.org>.
67 Currently maintained by Micah Cowan <micah@cowan.name>.
70 This is @strong{not} the complete manual for GNU Wget.
71 For more complete information, including more detailed explanations of
72 some of the options, and a number of commands available
73 for use with @file{.wgetrc} files and the @samp{-e} option, see the GNU
74 Info entry for @file{wget}.
79 @vskip 0pt plus 1filll
87 @top Wget @value{VERSION}
93 * Overview:: Features of Wget.
94 * Invoking:: Wget command-line arguments.
95 * Recursive Download:: Downloading interlinked pages.
96 * Following Links:: The available methods of chasing links.
97 * Time-Stamping:: Mirroring according to time-stamps.
98 * Startup File:: Wget's initialization file.
99 * Examples:: Examples of usage.
100 * Various:: The stuff that doesn't fit anywhere else.
101 * Appendices:: Some useful references.
102 * Copying this manual:: You may give out copies of Wget and of this manual.
103 * Concept Index:: Topics covered by this manual.
111 @c man begin DESCRIPTION
112 GNU Wget is a free utility for non-interactive download of files from
113 the Web. It supports @sc{http}, @sc{https}, and @sc{ftp} protocols, as
114 well as retrieval through @sc{http} proxies.
117 This chapter is a partial overview of Wget's features.
121 @c man begin DESCRIPTION
122 Wget is non-interactive, meaning that it can work in the background,
123 while the user is not logged on. This allows you to start a retrieval
124 and disconnect from the system, letting Wget finish the work. By
contrast, most Web browsers require the user's constant presence,
126 which can be a great hindrance when transferring a lot of data.
131 @c man begin DESCRIPTION
135 @c man begin DESCRIPTION
136 Wget can follow links in @sc{html} and @sc{xhtml} pages and create local
137 versions of remote web sites, fully recreating the directory structure of
138 the original site. This is sometimes referred to as ``recursive
139 downloading.'' While doing that, Wget respects the Robot Exclusion
140 Standard (@file{/robots.txt}). Wget can be instructed to convert the
links in downloaded @sc{html} files to point to the local files, for
offline viewing.
146 File name wildcard matching and recursive mirroring of directories are
147 available when retrieving via @sc{ftp}. Wget can read the time-stamp
148 information given by both @sc{http} and @sc{ftp} servers, and store it
149 locally. Thus Wget can see if the remote file has changed since last
150 retrieval, and automatically retrieve the new version if it has. This
makes Wget suitable for mirroring of @sc{ftp} sites, as well as home
pages.
156 @c man begin DESCRIPTION
160 @c man begin DESCRIPTION
161 Wget has been designed for robustness over slow or unstable network
162 connections; if a download fails due to a network problem, it will
163 keep retrying until the whole file has been retrieved. If the server
164 supports regetting, it will instruct the server to continue the
165 download from where it left off.
169 Wget supports proxy servers, which can lighten the network load, speed
up retrieval and provide access behind firewalls.  Wget uses passive
@sc{ftp} downloading by default, with active @sc{ftp} as an option.
174 Wget supports IP version 6, the next generation of IP. IPv6 is
175 autodetected at compile-time, and can be disabled at either build or
176 run time. Binaries built with IPv6 support work well in both
177 IPv4-only and dual family environments.
180 Built-in features offer mechanisms to tune which links you wish to follow
181 (@pxref{Following Links}).
184 The progress of individual downloads is traced using a progress gauge.
185 Interactive downloads are tracked using a ``thermometer''-style gauge,
186 whereas non-interactive ones are traced with dots, each dot
187 representing a fixed amount of data received (1KB by default). Either
188 gauge can be customized to your preferences.
191 Most of the features are fully configurable, either through command line
192 options, or via the initialization file @file{.wgetrc} (@pxref{Startup
193 File}). Wget allows you to define @dfn{global} startup files
194 (@file{/usr/local/etc/wgetrc} by default) for site settings.
199 @item /usr/local/etc/wgetrc
200 Default location of the @dfn{global} startup file.
209 Finally, GNU Wget is free software. This means that everyone may use
210 it, redistribute it and/or modify it under the terms of the GNU General
211 Public License, as published by the Free Software Foundation (see the
212 file @file{COPYING} that came with GNU Wget, for details).
222 By default, Wget is very simple to invoke. The basic syntax is:
@example
@c man begin SYNOPSIS
wget [@var{option}]@dots{} [@var{URL}]@dots{}
@c man end
@end example
230 Wget will simply download all the @sc{url}s specified on the command
231 line. @var{URL} is a @dfn{Uniform Resource Locator}, as defined below.
233 However, you may wish to change some of the default parameters of
Wget.  You can do it in two ways: permanently, adding the appropriate
command to @file{.wgetrc} (@pxref{Startup File}), or specifying it on
the command line.
241 * Basic Startup Options::
242 * Logging and Input File Options::
244 * Directory Options::
246 * HTTPS (SSL/TLS) Options::
248 * Recursive Retrieval Options::
249 * Recursive Accept/Reject Options::
257 @dfn{URL} is an acronym for Uniform Resource Locator. A uniform
258 resource locator is a compact string representation for a resource
259 available via the Internet. Wget recognizes the @sc{url} syntax as per
@sc{rfc1738}.  This is the most widely used form (square brackets
denote optional parts):

@example
http://host[:port]/directory/file
ftp://host[:port]/directory/file
@end example
268 You can also encode your username and password within a @sc{url}:
@example
ftp://user:password@@host/path
http://user:password@@host/path
@end example
275 Either @var{user} or @var{password}, or both, may be left out. If you
276 leave out either the @sc{http} username or password, no authentication
277 will be sent. If you leave out the @sc{ftp} username, @samp{anonymous}
278 will be used. If you leave out the @sc{ftp} password, your email
279 address will be supplied as a default password.@footnote{If you have a
@file{.netrc} file in your home directory, password will also be
searched for there.}
283 @strong{Important Note}: if you specify a password-containing @sc{url}
284 on the command line, the username and password will be plainly visible
285 to all users on the system, by way of @code{ps}. On multi-user systems,
286 this is a big security risk. To work around it, use @code{wget -i -}
287 and feed the @sc{url}s to Wget's standard input, each on a separate
288 line, terminated by @kbd{C-d}.
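For example, the following shell sketch (the @sc{url} and credentials
are placeholders) keeps the password off the Wget command line:

@example
wget -i - <<EOF
ftp://user:password@@host/path
EOF
@end example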
290 You can encode unsafe characters in a @sc{url} as @samp{%xy}, @code{xy}
291 being the hexadecimal representation of the character's @sc{ascii}
292 value. Some common unsafe characters include @samp{%} (quoted as
293 @samp{%25}), @samp{:} (quoted as @samp{%3A}), and @samp{@@} (quoted as
@samp{%40}).  Refer to @sc{rfc1738} for a comprehensive list of
unsafe characters.
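For instance, a password containing @samp{@@} could be quoted as
@samp{%40}, so that Wget does not mistake it for the user/host
separator (hypothetical host and credentials):

@example
wget ftp://user:p%40ss@@host/path
@end example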
297 Wget also supports the @code{type} feature for @sc{ftp} @sc{url}s. By
298 default, @sc{ftp} documents are retrieved in the binary mode (type
299 @samp{i}), which means that they are downloaded unchanged. Another
300 useful mode is the @samp{a} (@dfn{ASCII}) mode, which converts the line
301 delimiters between the different operating systems, and is thus useful
302 for text files. Here is an example:
@example
ftp://host/directory/file;type=a
@end example
308 Two alternative variants of @sc{url} specification are also supported,
because of historical (hysterical?) reasons and their widespread use.
@sc{ftp}-only syntax (supported by @code{NcFTP}):

@example
host:/dir/file
@end example

@sc{http}-only syntax (introduced by @code{Netscape}):

@example
host[:port]/dir/file
@end example
321 These two alternative forms are deprecated, and may cease being
322 supported in the future.
324 If you do not understand the difference between these notations, or do
325 not know which one to use, just use the plain ordinary format you use
326 with your favorite browser, like @code{Lynx} or @code{Netscape}.
331 @section Option Syntax
332 @cindex option syntax
333 @cindex syntax of options
335 Since Wget uses GNU getopt to process command-line arguments, every
336 option has a long form along with the short one. Long options are
337 more convenient to remember, but take time to type. You may freely
338 mix different option styles, or specify options after the command-line
339 arguments. Thus you may write:
@example
wget -r --tries=10 http://fly.srk.fer.hr/ -o log
@end example
345 The space between the option accepting an argument and the argument may
346 be omitted. Instead of @samp{-o log} you can write @samp{-olog}.
You may put several options that do not require arguments together,
like:

@example
wget -drc @var{URL}
@end example

This is completely equivalent to:

@example
wget -d -r -c @var{URL}
@end example
361 Since the options can be specified after the arguments, you may
362 terminate them with @samp{--}. So the following will try to download
@sc{url} @samp{-x}, reporting failure to @file{log}:

@example
wget -o log -- -x
@end example
369 The options that accept comma-separated lists all respect the convention
370 that specifying an empty list clears its value. This can be useful to
371 clear the @file{.wgetrc} settings. For instance, if your @file{.wgetrc}
372 sets @code{exclude_directories} to @file{/cgi-bin}, the following
373 example will first reset it, and then set it to exclude @file{/~nobody}
374 and @file{/~somebody}. You can also clear the lists in @file{.wgetrc}
375 (@pxref{Wgetrc Syntax}).
@example
wget -X '' -X /~nobody,/~somebody
@end example
381 Most options that do not accept arguments are @dfn{boolean} options,
382 so named because their state can be captured with a yes-or-no
383 (``boolean'') variable. For example, @samp{--follow-ftp} tells Wget
384 to follow FTP links from HTML files and, on the other hand,
385 @samp{--no-glob} tells it not to perform file globbing on FTP URLs. A
386 boolean option is either @dfn{affirmative} or @dfn{negative}
(beginning with @samp{--no}).  All such options share several
properties.
390 Unless stated otherwise, it is assumed that the default behavior is
391 the opposite of what the option accomplishes. For example, the
392 documented existence of @samp{--follow-ftp} assumes that the default
393 is to @emph{not} follow FTP links from HTML pages.
Affirmative options can be negated by prepending @samp{--no-} to
396 the option name; negative options can be negated by omitting the
397 @samp{--no-} prefix. This might seem superfluous---if the default for
398 an affirmative option is to not do something, then why provide a way
399 to explicitly turn it off? But the startup file may in fact change
the default.  For instance, using @code{follow_ftp = on} in
@file{.wgetrc} makes Wget @emph{follow} FTP links by default, and
402 using @samp{--no-follow-ftp} is the only way to restore the factory
403 default from the command line.
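For instance, assuming @code{follow_ftp = on} in @file{.wgetrc}, a
one-off override might look like this (hypothetical @sc{url}):

@example
wget --no-follow-ftp -r http://fly.srk.fer.hr/
@end example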
405 @node Basic Startup Options
406 @section Basic Startup Options
411 Display the version of Wget.
415 Print a help message describing all of Wget's command-line options.
419 Go to background immediately after startup. If no output file is
specified via @samp{-o}, output is redirected to @file{wget-log}.
422 @cindex execute wgetrc command
423 @item -e @var{command}
424 @itemx --execute @var{command}
425 Execute @var{command} as if it were a part of @file{.wgetrc}
426 (@pxref{Startup File}). A command thus invoked will be executed
427 @emph{after} the commands in @file{.wgetrc}, thus taking precedence over
428 them. If you need to specify more than one wgetrc command, use multiple
429 instances of @samp{-e}.
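For instance, a sketch that disables the @code{robots} wgetrc setting
for a single run (hypothetical @sc{url}):

@example
wget -e robots=off -r http://fly.srk.fer.hr/
@end example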
433 @node Logging and Input File Options
434 @section Logging and Input File Options
439 @item -o @var{logfile}
440 @itemx --output-file=@var{logfile}
Log all messages to @var{logfile}.  The messages are normally reported
to standard error.
444 @cindex append to log
445 @item -a @var{logfile}
446 @itemx --append-output=@var{logfile}
447 Append to @var{logfile}. This is the same as @samp{-o}, only it appends
448 to @var{logfile} instead of overwriting the old log file. If
449 @var{logfile} does not exist, a new file is created.
454 Turn on debug output, meaning various information important to the
455 developers of Wget if it does not work properly. Your system
456 administrator may have chosen to compile Wget without debug support, in
457 which case @samp{-d} will not work. Please note that compiling with
458 debug support is always safe---Wget compiled with the debug support will
459 @emph{not} print any debug info unless requested with @samp{-d}.
@xref{Reporting Bugs}, for more information on how to use @samp{-d}
for sending bug reports.
466 Turn off Wget's output.
Turn on verbose output, with all the available data.  The default
output is verbose.
476 Turn off verbose without being completely quiet (use @samp{-q} for
that), which means that error messages and basic information still
get printed.
482 @itemx --input-file=@var{file}
483 Read @sc{url}s from @var{file}. If @samp{-} is specified as
484 @var{file}, @sc{url}s are read from the standard input. (Use
485 @samp{./-} to read from a file literally named @samp{-}.)
487 If this function is used, no @sc{url}s need be present on the command
488 line. If there are @sc{url}s both on the command line and in an input
file, those on the command line will be the first ones to be
490 retrieved. The @var{file} need not be an @sc{html} document (but no
harm if it is)---it is enough if the @sc{url}s are just listed
sequentially.
494 However, if you specify @samp{--force-html}, the document will be
495 regarded as @samp{html}. In that case you may have problems with
496 relative links, which you can solve either by adding @code{<base
497 href="@var{url}">} to the documents or by specifying
498 @samp{--base=@var{url}} on the command line.
503 When input is read from a file, force it to be treated as an @sc{html}
504 file. This enables you to retrieve relative links from existing
505 @sc{html} files on your local disk, by adding @code{<base
href="@var{url}">} to @sc{html}, or using the @samp{--base}
command-line option.
509 @cindex base for relative links in input file
511 @itemx --base=@var{URL}
512 Prepends @var{URL} to relative links read from the file specified with
513 the @samp{-i} option.
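For instance, a sketch combining @samp{-i}, @samp{--force-html}, and
@samp{--base} (hypothetical file and @sc{url}):

@example
wget --force-html -i links.html --base=http://fly.srk.fer.hr/
@end example

A relative link such as @file{image.png} in @file{links.html} would
then be retrieved as @samp{http://fly.srk.fer.hr/image.png}.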
516 @node Download Options
517 @section Download Options
521 @cindex client IP address
522 @cindex IP address, client
523 @item --bind-address=@var{ADDRESS}
524 When making client TCP/IP connections, bind to @var{ADDRESS} on
525 the local machine. @var{ADDRESS} may be specified as a hostname or IP
address.  This option can be useful if your machine is bound to
multiple IPs.
531 @cindex number of retries
532 @item -t @var{number}
533 @itemx --tries=@var{number}
534 Set number of retries to @var{number}. Specify 0 or @samp{inf} for
535 infinite retrying. The default is to retry 20 times, with the exception
536 of fatal errors like ``connection refused'' or ``not found'' (404),
537 which are not retried.
540 @itemx --output-document=@var{file}
541 The documents will not be written to the appropriate files, but all
542 will be concatenated together and written to @var{file}. If @samp{-}
543 is used as @var{file}, documents will be printed to standard output,
544 disabling link conversion. (Use @samp{./-} to print to a file
545 literally named @samp{-}.)
547 Use of @samp{-O} is @emph{not} intended to mean simply ``use the name
548 @var{file} instead of the one in the URL;'' rather, it is
549 analogous to shell redirection:
550 @samp{wget -O file http://foo} is intended to work like
551 @samp{wget -O - http://foo > file}; @file{file} will be truncated
552 immediately, and @emph{all} downloaded content will be written there.
554 For this reason, @samp{-N} (for timestamp-checking) is not supported
555 in combination with @samp{-O}: since @var{file} is always newly
556 created, it will always have a very new timestamp. A warning will be
557 issued if this combination is used.
559 Similarly, using @samp{-r} or @samp{-p} with @samp{-O} may not work as
560 you expect: Wget won't just download the first file to @var{file} and
561 then download the rest to their normal names: @emph{all} downloaded
562 content will be placed in @var{file}. This was disabled in version
563 1.11, but has been reinstated (with a warning) in 1.11.2, as there are
564 some cases where this behavior can actually have some use.
566 Note that a combination with @samp{-k} is only permitted when
567 downloading a single document, as in that case it will just convert
568 all relative URIs to external ones; @samp{-k} makes no sense for
569 multiple URIs when they're all being downloaded to a single file.
571 @cindex clobbering, file
572 @cindex downloading multiple times
576 If a file is downloaded more than once in the same directory, Wget's
577 behavior depends on a few options, including @samp{-nc}. In certain
578 cases, the local file will be @dfn{clobbered}, or overwritten, upon
579 repeated download. In other cases it will be preserved.
When running Wget without @samp{-N}, @samp{-nc}, @samp{-r}, or
@samp{-p},
582 downloading the same file in the same directory will result in the
583 original copy of @var{file} being preserved and the second copy being
584 named @samp{@var{file}.1}. If that file is downloaded yet again, the
585 third copy will be named @samp{@var{file}.2}, and so on. When
586 @samp{-nc} is specified, this behavior is suppressed, and Wget will
587 refuse to download newer copies of @samp{@var{file}}. Therefore,
588 ``@code{no-clobber}'' is actually a misnomer in this mode---it's not
589 clobbering that's prevented (as the numeric suffixes were already
preventing clobbering), but rather the multiple version saving that's
prevented.
593 When running Wget with @samp{-r} or @samp{-p}, but without @samp{-N}
594 or @samp{-nc}, re-downloading a file will result in the new copy
595 simply overwriting the old. Adding @samp{-nc} will prevent this
596 behavior, instead causing the original version to be preserved and any
597 newer copies on the server to be ignored.
599 When running Wget with @samp{-N}, with or without @samp{-r} or
600 @samp{-p}, the decision as to whether or not to download a newer copy
601 of a file depends on the local and remote timestamp and size of the
602 file (@pxref{Time-Stamping}). @samp{-nc} may not be specified at the
603 same time as @samp{-N}.
605 Note that when @samp{-nc} is specified, files with the suffixes
606 @samp{.html} or @samp{.htm} will be loaded from the local disk and
607 parsed as if they had been retrieved from the Web.
609 @cindex continue retrieval
610 @cindex incomplete downloads
611 @cindex resume download
614 Continue getting a partially-downloaded file. This is useful when you
615 want to finish up a download started by a previous instance of Wget, or
616 by another program. For instance:
@example
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
@end example
622 If there is a file named @file{ls-lR.Z} in the current directory, Wget
623 will assume that it is the first portion of the remote file, and will
624 ask the server to continue the retrieval from an offset equal to the
625 length of the local file.
627 Note that you don't need to specify this option if you just want the
628 current invocation of Wget to retry downloading a file should the
629 connection be lost midway through. This is the default behavior.
630 @samp{-c} only affects resumption of downloads started @emph{prior} to
631 this invocation of Wget, and whose local files are still sitting around.
633 Without @samp{-c}, the previous example would just download the remote
file to @file{ls-lR.Z.1}, leaving the truncated @file{ls-lR.Z} file
alone.
637 Beginning with Wget 1.7, if you use @samp{-c} on a non-empty file, and
638 it turns out that the server does not support continued downloading,
639 Wget will refuse to start the download from scratch, which would
640 effectively ruin existing contents. If you really want the download to
641 start from scratch, remove the file.
643 Also beginning with Wget 1.7, if you use @samp{-c} on a file which is of
644 equal size as the one on the server, Wget will refuse to download the
645 file and print an explanatory message. The same happens when the file
646 is smaller on the server than locally (presumably because it was changed
647 on the server since your last download attempt)---because ``continuing''
648 is not meaningful, no download occurs.
650 On the other side of the coin, while using @samp{-c}, any file that's
651 bigger on the server than locally will be considered an incomplete
652 download and only @code{(length(remote) - length(local))} bytes will be
653 downloaded and tacked onto the end of the local file. This behavior can
654 be desirable in certain cases---for instance, you can use @samp{wget -c}
655 to download just the new portion that's been appended to a data
656 collection or log file.
658 However, if the file is bigger on the server because it's been
659 @emph{changed}, as opposed to just @emph{appended} to, you'll end up
660 with a garbled file. Wget has no way of verifying that the local file
661 is really a valid prefix of the remote file. You need to be especially
careful of this when using @samp{-c} in conjunction with @samp{-r},
since every file will be considered an ``incomplete download''
candidate.
665 Another instance where you'll get a garbled file if you try to use
666 @samp{-c} is if you have a lame @sc{http} proxy that inserts a
667 ``transfer interrupted'' string into the local file. In the future a
668 ``rollback'' option may be added to deal with this case.
670 Note that @samp{-c} only works with @sc{ftp} servers and with @sc{http}
671 servers that support the @code{Range} header.
673 @cindex progress indicator
675 @item --progress=@var{type}
676 Select the type of the progress indicator you wish to use. Legal
677 indicators are ``dot'' and ``bar''.
The ``bar'' indicator is used by default.  It draws an @sc{ascii}
progress bar graphic (a.k.a.@: ``thermometer'' display) indicating
the status of retrieval.  If the output is not a TTY, the ``dot''
indicator will be used by default.
684 Use @samp{--progress=dot} to switch to the ``dot'' display. It traces
685 the retrieval by printing dots on the screen, each dot representing a
686 fixed amount of downloaded data.
688 When using the dotted retrieval, you may also set the @dfn{style} by
689 specifying the type as @samp{dot:@var{style}}. Different styles assign
690 different meaning to one dot. With the @code{default} style each dot
691 represents 1K, there are ten dots in a cluster and 50 dots in a line.
692 The @code{binary} style has a more ``computer''-like orientation---8K
dots, 16-dot clusters and 48 dots per line (which makes for 384K per
line).  The @code{mega} style is suitable for downloading very large
695 files---each dot represents 64K retrieved, there are eight dots in a
696 cluster, and 48 dots on each line (so each line contains 3M).
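For example, to select the @code{mega} style for a large download
(hypothetical @sc{url}):

@example
wget --progress=dot:mega http://fly.srk.fer.hr/big-file.iso
@end example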
698 Note that you can set the default style using the @code{progress}
699 command in @file{.wgetrc}. That setting may be overridden from the
700 command line. The exception is that, when the output is not a TTY, the
701 ``dot'' progress will be favored over ``bar''. To force the bar output,
702 use @samp{--progress=bar:force}.
705 @itemx --timestamping
706 Turn on time-stamping. @xref{Time-Stamping}, for details.
708 @cindex server response, print
710 @itemx --server-response
Print the headers sent by @sc{http} servers and responses sent by
@sc{ftp} servers.
714 @cindex Wget as spider
717 When invoked with this option, Wget will behave as a Web @dfn{spider},
718 which means that it will not download the pages, just check that they
719 are there. For example, you can use Wget to check your bookmarks:
@example
wget --spider --force-html -i bookmarks.html
@end example
725 This feature needs much more work for Wget to get close to the
726 functionality of real web spiders.
730 @itemx --timeout=@var{seconds}
731 Set the network timeout to @var{seconds} seconds. This is equivalent
732 to specifying @samp{--dns-timeout}, @samp{--connect-timeout}, and
733 @samp{--read-timeout}, all at the same time.
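A sketch of the equivalence, with a hypothetical value of 60 seconds:

@example
wget --timeout=60 http://fly.srk.fer.hr/
# @r{behaves the same as:}
wget --dns-timeout=60 --connect-timeout=60 --read-timeout=60 \
     http://fly.srk.fer.hr/
@end example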
735 When interacting with the network, Wget can check for timeout and
736 abort the operation if it takes too long. This prevents anomalies
737 like hanging reads and infinite connects. The only timeout enabled by
738 default is a 900-second read timeout. Setting a timeout to 0 disables
739 it altogether. Unless you know what you are doing, it is best not to
740 change the default timeout settings.
742 All timeout-related options accept decimal values, as well as
743 subsecond values. For example, @samp{0.1} seconds is a legal (though
744 unwise) choice of timeout. Subsecond timeouts are useful for checking
745 server response times or for testing network latency.
749 @item --dns-timeout=@var{seconds}
750 Set the DNS lookup timeout to @var{seconds} seconds. DNS lookups that
751 don't complete within the specified time will fail. By default, there
is no timeout on DNS lookups, other than that implemented by system
libraries.
755 @cindex connect timeout
756 @cindex timeout, connect
757 @item --connect-timeout=@var{seconds}
758 Set the connect timeout to @var{seconds} seconds. TCP connections that
759 take longer to establish will be aborted. By default, there is no
760 connect timeout, other than that implemented by system libraries.
763 @cindex timeout, read
764 @item --read-timeout=@var{seconds}
765 Set the read (and write) timeout to @var{seconds} seconds. The
766 ``time'' of this timeout refers to @dfn{idle time}: if, at any point in
767 the download, no data is received for more than the specified number
768 of seconds, reading fails and the download is restarted. This option
769 does not directly affect the duration of the entire download.
771 Of course, the remote server may choose to terminate the connection
sooner than this option requires.  The default read timeout is 900
seconds.
775 @cindex bandwidth, limit
777 @cindex limit bandwidth
778 @item --limit-rate=@var{amount}
779 Limit the download speed to @var{amount} bytes per second. Amount may
780 be expressed in bytes, kilobytes with the @samp{k} suffix, or megabytes
781 with the @samp{m} suffix. For example, @samp{--limit-rate=20k} will
782 limit the retrieval rate to 20KB/s. This is useful when, for whatever
783 reason, you don't want Wget to consume the entire available bandwidth.
785 This option allows the use of decimal numbers, usually in conjunction
with power suffixes; for example, @samp{--limit-rate=2.5k} is a legal
value.
789 Note that Wget implements the limiting by sleeping the appropriate
790 amount of time after a network read that took less time than specified
791 by the rate. Eventually this strategy causes the TCP transfer to slow
792 down to approximately the specified rate. However, it may take some
793 time for this balance to be achieved, so don't be surprised if limiting
794 the rate doesn't work well with very small files.
798 @item -w @var{seconds}
799 @itemx --wait=@var{seconds}
800 Wait the specified number of seconds between the retrievals. Use of
801 this option is recommended, as it lightens the server load by making the
802 requests less frequent. Instead of in seconds, the time can be
803 specified in minutes using the @code{m} suffix, in hours using @code{h}
804 suffix, or in days using @code{d} suffix.
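For example, either of these sketches waits two minutes between
retrievals (hypothetical input file):

@example
wget -w 120 -i sites.txt
wget --wait=2m -i sites.txt
@end example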
806 Specifying a large value for this option is useful if the network or the
807 destination host is down, so that Wget can wait long enough to
808 reasonably expect the network error to be fixed before the retry. The
waiting interval specified by this option is influenced by
@code{--random-wait}, which see.
812 @cindex retries, waiting between
813 @cindex waiting between retries
814 @item --waitretry=@var{seconds}
815 If you don't want Wget to wait between @emph{every} retrieval, but only
816 between retries of failed downloads, you can use this option. Wget will
817 use @dfn{linear backoff}, waiting 1 second after the first failure on a
818 given file, then waiting 2 seconds after the second failure on that
819 file, up to the maximum number of @var{seconds} you specify. Therefore,
a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) =
55 seconds per file.
Note that this option is turned on by default in the global
@file{wgetrc} file.
829 Some web sites may perform log analysis to identify retrieval programs
830 such as Wget by looking for statistically significant similarities in
831 the time between requests. This option causes the time between requests
832 to vary between 0.5 and 1.5 * @var{wait} seconds, where @var{wait} was
833 specified using the @samp{--wait} option, in order to mask Wget's
834 presence from such analysis.
836 A 2001 article in a publication devoted to development on a popular
837 consumer platform provided code to perform this analysis on the fly.
838 Its author suggested blocking at the class C address level to ensure
automated retrieval programs were blocked despite changing
DHCP-supplied addresses.
842 The @samp{--random-wait} option was inspired by this ill-advised
recommendation to block many unrelated users from a web site due to
the actions of one.
Don't use proxies, even if the appropriate @code{*_proxy} environment
variable is defined.

@xref{Proxies}, for more information about the use of proxies with
Wget.
857 @itemx --quota=@var{quota}
858 Specify download quota for automatic retrievals. The value can be
859 specified in bytes (default), kilobytes (with @samp{k} suffix), or
860 megabytes (with @samp{m} suffix).
862 Note that quota will never affect downloading a single file. So if you
863 specify @samp{wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz}, all of the
864 @file{ls-lR.gz} will be downloaded. The same goes even when several
865 @sc{url}s are specified on the command-line. However, quota is
866 respected when retrieving either recursively, or from an input file.
867 Thus you may safely type @samp{wget -Q2m -i sites}---download will be
868 aborted when the quota is exceeded.
870 Setting quota to 0 or to @samp{inf} unlimits the download quota.
873 @cindex caching of DNS lookups
875 Turn off caching of DNS lookups. Normally, Wget remembers the IP
876 addresses it looked up from DNS so it doesn't have to repeatedly
877 contact the DNS server for the same (typically small) set of hosts it
retrieves from.  This cache exists in memory only; a new Wget run
will contact DNS again.
881 However, it has been reported that in some situations it is not
882 desirable to cache host names, even for the duration of a
883 short-running application like Wget. With this option Wget issues a
884 new DNS lookup (more precisely, a new call to @code{gethostbyname} or
885 @code{getaddrinfo}) each time it makes a new connection. Please note
886 that this option will @emph{not} affect caching that might be
performed by the resolving library or by an external caching layer,
such as NSCD.
If you don't understand exactly what this option does, you probably
won't need it.
893 @cindex file names, restrict
894 @cindex Windows file names
895 @item --restrict-file-names=@var{mode}
896 Change which characters found in remote URLs may show up in local file
897 names generated from those URLs. Characters that are @dfn{restricted}
898 by this option are escaped, i.e. replaced with @samp{%HH}, where
@samp{HH} is the hexadecimal number that corresponds to the
restricted character.
902 By default, Wget escapes the characters that are not valid as part of
903 file names on your operating system, as well as control characters that
904 are typically unprintable. This option is useful for changing these
905 defaults, either because you are downloading to a non-native partition,
906 or because you want to disable escaping of the control characters.
908 When mode is set to ``unix'', Wget escapes the character @samp{/} and
909 the control characters in the ranges 0--31 and 128--159. This is the
default on Unix-like operating systems.
912 When mode is set to ``windows'', Wget escapes the characters @samp{\},
913 @samp{|}, @samp{/}, @samp{:}, @samp{?}, @samp{"}, @samp{*}, @samp{<},
914 @samp{>}, and the control characters in the ranges 0--31 and 128--159.
915 In addition to this, Wget in Windows mode uses @samp{+} instead of
916 @samp{:} to separate host and port in local file names, and uses
917 @samp{@@} instead of @samp{?} to separate the query portion of the file
918 name from the rest. Therefore, a URL that would be saved as
919 @samp{www.xemacs.org:4300/search.pl?input=blah} in Unix mode would be
920 saved as @samp{www.xemacs.org+4300/search.pl@@input=blah} in Windows
921 mode. This mode is the default on Windows.
923 If you append @samp{,nocontrol} to the mode, as in
924 @samp{unix,nocontrol}, escaping of the control characters is also
925 switched off. You can use @samp{--restrict-file-names=nocontrol} to
926 turn off escaping of control characters without affecting the choice of
927 the OS to use as file name restriction mode.
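For example, a sketch forcing Windows-style escaping on a Unix-like
system, using the @sc{url} mentioned above:

@example
wget --restrict-file-names=windows \
     'http://www.xemacs.org:4300/search.pl?input=blah'
@end example

The result would be saved as
@file{www.xemacs.org+4300/search.pl@@input=blah}.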
934 Force connecting to IPv4 or IPv6 addresses. With @samp{--inet4-only}
935 or @samp{-4}, Wget will only connect to IPv4 hosts, ignoring AAAA
936 records in DNS, and refusing to connect to IPv6 addresses specified in
937 URLs. Conversely, with @samp{--inet6-only} or @samp{-6}, Wget will
938 only connect to IPv6 hosts and ignore A records and IPv4 addresses.
Neither option should normally be needed.  By default, an IPv6-aware
941 Wget will use the address family specified by the host's DNS record.
942 If the DNS responds with both IPv4 and IPv6 addresses, Wget will try
943 them in sequence until it finds one it can connect to. (Also see
944 @code{--prefer-family} option described below.)
946 These options can be used to deliberately force the use of IPv4 or
947 IPv6 address families on dual family systems, usually to aid debugging
948 or to deal with broken network configuration. Only one of
949 @samp{--inet6-only} and @samp{--inet4-only} may be specified at the
same time.  Neither option is available in Wget compiled without
IPv6 support.
953 @item --prefer-family=IPv4/IPv6/none
954 When given a choice of several addresses, connect to the addresses
with the specified address family first.  IPv4 addresses are
preferred by default.
958 This avoids spurious errors and connect attempts when accessing hosts
959 that resolve to both IPv6 and IPv4 addresses from IPv4 networks. For
960 example, @samp{www.kame.net} resolves to
961 @samp{2001:200:0:8002:203:47ff:fea5:3085} and to
962 @samp{203.178.141.194}. When the preferred family is @code{IPv4}, the
963 IPv4 address is used first; when the preferred family is @code{IPv6},
964 the IPv6 address is used first; if the specified value is @code{none},
965 the address order returned by DNS is used without change.
967 Unlike @samp{-4} and @samp{-6}, this option doesn't inhibit access to
968 any address family, it only changes the @emph{order} in which the
969 addresses are accessed. Also note that the reordering performed by
970 this option is @dfn{stable}---it doesn't affect order of addresses of
971 the same family. That is, the relative order of all IPv4 addresses
972 and of all IPv6 addresses remains intact in all cases.
974 @item --retry-connrefused
975 Consider ``connection refused'' a transient error and try again.
976 Normally Wget gives up on a URL when it is unable to connect to the
977 site because failure to connect is taken as a sign that the server is
978 not running at all and that retries would not help. This option is
979 for mirroring unreliable sites whose servers tend to disappear for
980 short periods of time.
984 @cindex authentication
985 @item --user=@var{user}
986 @itemx --password=@var{password}
987 Specify the username @var{user} and password @var{password} for both
988 @sc{ftp} and @sc{http} file retrieval. These parameters can be overridden
989 using the @samp{--ftp-user} and @samp{--ftp-password} options for
990 @sc{ftp} connections and the @samp{--http-user} and @samp{--http-password}
991 options for @sc{http} connections.
994 @node Directory Options
995 @section Directory Options
999 @itemx --no-directories
1000 Do not create a hierarchy of directories when retrieving recursively.
1001 With this option turned on, all files will get saved to the current
1002 directory, without clobbering (if a name shows up more than once, the
1003 filenames will get extensions @samp{.n}).
1006 @itemx --force-directories
1007 The opposite of @samp{-nd}---create a hierarchy of directories, even if
1008 one would not have been created otherwise. E.g. @samp{wget -x
1009 http://fly.srk.fer.hr/robots.txt} will save the downloaded file to
1010 @file{fly.srk.fer.hr/robots.txt}.
1013 @itemx --no-host-directories
1014 Disable generation of host-prefixed directories. By default, invoking
1015 Wget with @samp{-r http://fly.srk.fer.hr/} will create a structure of
directories beginning with @file{fly.srk.fer.hr/}.  This option
disables such behavior.
1019 @item --protocol-directories
1020 Use the protocol name as a directory component of local file names. For
1021 example, with this option, @samp{wget -r http://@var{host}} will save to
1022 @samp{http/@var{host}/...} rather than just to @samp{@var{host}/...}.
1024 @cindex cut directories
1025 @item --cut-dirs=@var{number}
1026 Ignore @var{number} directory components. This is useful for getting a
fine-grained control over the directory where recursive retrieval
will be saved.
1030 Take, for example, the directory at
1031 @samp{ftp://ftp.xemacs.org/pub/xemacs/}. If you retrieve it with
1032 @samp{-r}, it will be saved locally under
1033 @file{ftp.xemacs.org/pub/xemacs/}. While the @samp{-nH} option can
1034 remove the @file{ftp.xemacs.org/} part, you are still stuck with
1035 @file{pub/xemacs}. This is where @samp{--cut-dirs} comes in handy; it
1036 makes Wget not ``see'' @var{number} remote directory components. Here
are several examples of how the @samp{--cut-dirs} option works.

@example
@group
No options        -> ftp.xemacs.org/pub/xemacs/
-nH               -> pub/xemacs/
-nH --cut-dirs=1  -> xemacs/
-nH --cut-dirs=2  -> .

--cut-dirs=1      -> ftp.xemacs.org/xemacs/
@end group
@end example
1051 If you just want to get rid of the directory structure, this option is
1052 similar to a combination of @samp{-nd} and @samp{-P}. However, unlike
@samp{-nd}, @samp{--cut-dirs} does not lose subdirectories---for
instance, with @samp{-nH --cut-dirs=1}, a @file{beta/} subdirectory
will be placed in @file{xemacs/beta}, as one would expect.
1057 @cindex directory prefix
1058 @item -P @var{prefix}
1059 @itemx --directory-prefix=@var{prefix}
1060 Set directory prefix to @var{prefix}. The @dfn{directory prefix} is the
1061 directory where all other files and subdirectories will be saved to,
i.e.@: the top of the retrieval tree.  The default is @samp{.} (the
current directory).
1067 @section HTTP Options
1070 @cindex .html extension
1072 @itemx --html-extension
1073 If a file of type @samp{application/xhtml+xml} or @samp{text/html} is
1074 downloaded and the URL does not end with the regexp
1075 @samp{\.[Hh][Tt][Mm][Ll]?}, this option will cause the suffix @samp{.html}
1076 to be appended to the local filename. This is useful, for instance, when
1077 you're mirroring a remote site that uses @samp{.asp} pages, but you want
1078 the mirrored pages to be viewable on your stock Apache server. Another
1079 good use for this is when you're downloading CGI-generated materials. A URL
1080 like @samp{http://site.com/article.cgi?25} will be saved as
1081 @file{article.cgi?25.html}.
1083 Note that filenames changed in this way will be re-downloaded every time
1084 you re-mirror a site, because Wget can't tell that the local
1085 @file{@var{X}.html} file corresponds to remote URL @samp{@var{X}} (since
1086 it doesn't yet know that the URL produces output of type
@samp{text/html} or @samp{application/xhtml+xml}).  To prevent this
1088 re-downloading, you must use @samp{-k} and @samp{-K} so that the original
1089 version of the file will be saved as @file{@var{X}.orig} (@pxref{Recursive
1090 Retrieval Options}).
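A sketch of the combination suggested above (hypothetical site):

@example
wget -E -k -K -r http://site.com/
@end example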
1093 @cindex http password
1094 @cindex authentication
1095 @item --http-user=@var{user}
1096 @itemx --http-password=@var{password}
1097 Specify the username @var{user} and password @var{password} on an
1098 @sc{http} server. According to the type of the challenge, Wget will
1099 encode them using either the @code{basic} (insecure),
1100 the @code{digest}, or the Windows @code{NTLM} authentication scheme.
1102 Another way to specify username and password is in the @sc{url} itself
1103 (@pxref{URL Format}). Either method reveals your password to anyone who
1104 bothers to run @code{ps}. To prevent the passwords from being seen,
1105 store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
1106 those files from other users with @code{chmod}. If the passwords are
1107 really important, do not leave them lying in those files either---edit
1108 the files and delete them after Wget has started the download.
@xref{Security Considerations}, for more information about security
issues with Wget.
1118 Disable server-side cache. In this case, Wget will send the remote
1119 server an appropriate directive (@samp{Pragma: no-cache}) to get the
1120 file from the remote service, rather than returning the cached version.
1121 This is especially useful for retrieving and flushing out-of-date
1122 documents on proxy servers.
1124 Caching is allowed by default.
1128 Disable the use of cookies. Cookies are a mechanism for maintaining
1129 server-side state. The server sends the client a cookie using the
1130 @code{Set-Cookie} header, and the client responds with the same cookie
upon further requests.  Since cookies allow server owners to keep
track of visitors, and allow sites to exchange this information, some
1133 consider them a breach of privacy. The default is to use cookies;
1134 however, @emph{storing} cookies is not on by default.
1136 @cindex loading cookies
1137 @cindex cookies, loading
1138 @item --load-cookies @var{file}
1139 Load cookies from @var{file} before the first HTTP retrieval.
1140 @var{file} is a textual file in the format originally used by Netscape's
1141 @file{cookies.txt} file.
1143 You will typically use this option when mirroring sites that require
1144 that you be logged in to access some or all of their content. The login
1145 process typically works by the web server issuing an @sc{http} cookie
1146 upon receiving and verifying your credentials. The cookie is then
1147 resent by the browser when accessing that part of the site, and so
1148 proves your identity.
1150 Mirroring such a site requires Wget to send the same cookies your
1151 browser sends when communicating with the site. This is achieved by
1152 @samp{--load-cookies}---simply point Wget to the location of the
1153 @file{cookies.txt} file, and it will send the same cookies your browser
1154 would send in the same situation. Different browsers keep textual
1155 cookie files in different locations:
@table @asis
@item Netscape 4.x.
The cookies are in @file{~/.netscape/cookies.txt}.
1161 @item Mozilla and Netscape 6.x.
1162 Mozilla's cookie file is also named @file{cookies.txt}, located
1163 somewhere under @file{~/.mozilla}, in the directory of your profile.
1164 The full path usually ends up looking somewhat like
1165 @file{~/.mozilla/default/@var{some-weird-string}/cookies.txt}.
1167 @item Internet Explorer.
1168 You can produce a cookie file Wget can use by using the File menu,
1169 Import and Export, Export Cookies. This has been tested with Internet
1170 Explorer 5; it is not guaranteed to work with earlier versions.
1172 @item Other browsers.
1173 If you are using a different browser to create your cookies,
1174 @samp{--load-cookies} will only work if you can locate or produce a
cookie file in the Netscape format that Wget expects.
@end table
1178 If you cannot use @samp{--load-cookies}, there might still be an
1179 alternative. If your browser supports a ``cookie manager'', you can use
1180 it to view the cookies used when accessing the site you're mirroring.
1181 Write down the name and value of the cookie, and manually instruct Wget
1182 to send those cookies, bypassing the ``official'' cookie support:
@example
wget --no-cookies --header "Cookie: @var{name}=@var{value}"
@end example
1188 @cindex saving cookies
1189 @cindex cookies, saving
1190 @item --save-cookies @var{file}
1191 Save cookies to @var{file} before exiting. This will not save cookies
1192 that have expired or that have no expiry time (so-called ``session
1193 cookies''), but also see @samp{--keep-session-cookies}.
1195 @cindex cookies, session
1196 @cindex session cookies
1197 @item --keep-session-cookies
1198 When specified, causes @samp{--save-cookies} to also save session
1199 cookies. Session cookies are normally not saved because they are
1200 meant to be kept in memory and forgotten when you exit the browser.
1201 Saving them is useful on sites that require you to log in or to visit
1202 the home page before you can access some pages. With this option,
1203 multiple Wget runs are considered a single browser session as far as
1204 the site is concerned.
1206 Since the cookie file format does not normally carry session cookies,
1207 Wget marks them with an expiry timestamp of 0. Wget's
1208 @samp{--load-cookies} recognizes those as session cookies, but it might
1209 confuse other browsers. Also note that cookies so loaded will be
1210 treated as other session cookies, which means that if you want
1211 @samp{--save-cookies} to preserve them again, you must use
1212 @samp{--keep-session-cookies} again.
1214 @cindex Content-Length, ignore
1215 @cindex ignore length
1216 @item --ignore-length
1217 Unfortunately, some @sc{http} servers (@sc{cgi} programs, to be more
1218 precise) send out bogus @code{Content-Length} headers, which makes Wget
1219 go wild, as it thinks not all the document was retrieved. You can spot
1220 this syndrome if Wget retries getting the same document again and again,
each time claiming that the (otherwise normal) connection has closed
on the very same byte.
1224 With this option, Wget will ignore the @code{Content-Length} header---as
1225 if it never existed.
1228 @item --header=@var{header-line}
1229 Send @var{header-line} along with the rest of the headers in each
1230 @sc{http} request. The supplied header is sent as-is, which means it
must contain name and value separated by colon, and must not contain
newlines.
1234 You may define more than one additional header by specifying
1235 @samp{--header} more than once.
@example
wget --header='Accept-Charset: iso-8859-2' \
     --header='Accept-Language: hr' \
       http://fly.srk.fer.hr/
@end example
1245 Specification of an empty string as the header value will clear all
1246 previous user-defined headers.
1248 As of Wget 1.10, this option can be used to override headers otherwise
1249 generated automatically. This example instructs Wget to connect to
1250 localhost, but to specify @samp{foo.bar} in the @code{Host} header:
@example
wget --header="Host: foo.bar" http://localhost/
@end example
1256 In versions of Wget prior to 1.10 such use of @samp{--header} caused
1257 sending of duplicate headers.
1260 @item --max-redirect=@var{number}
1261 Specifies the maximum number of redirections to follow for a resource.
1262 The default is 20, which is usually far more than necessary. However, on
those occasions where you want to allow more (or fewer), this is the
option to use.
1267 @cindex proxy password
1268 @cindex proxy authentication
1269 @item --proxy-user=@var{user}
1270 @itemx --proxy-password=@var{password}
1271 Specify the username @var{user} and password @var{password} for
1272 authentication on a proxy server. Wget will encode them using the
1273 @code{basic} authentication scheme.
1275 Security considerations similar to those with @samp{--http-password}
1276 pertain here as well.
1278 @cindex http referer
1279 @cindex referer, http
1280 @item --referer=@var{url}
Include the `Referer: @var{url}' header in the HTTP request.  Useful
for
1282 retrieving documents with server-side processing that assume they are
1283 always being retrieved by interactive web browsers and only come out
1284 properly when Referer is set to one of the pages that point to them.
1286 @cindex server response, save
1287 @item --save-headers
1288 Save the headers sent by the @sc{http} server to the file, preceding the
1289 actual contents, with an empty line as the separator.
1292 @item -U @var{agent-string}
1293 @itemx --user-agent=@var{agent-string}
1294 Identify as @var{agent-string} to the @sc{http} server.
1296 The @sc{http} protocol allows the clients to identify themselves using a
1297 @code{User-Agent} header field. This enables distinguishing the
1298 @sc{www} software, usually for statistical purposes or for tracing of
1299 protocol violations. Wget normally identifies as
@samp{Wget/@var{version}}, @var{version} being the current version
number of Wget.
1303 However, some sites have been known to impose the policy of tailoring
1304 the output according to the @code{User-Agent}-supplied information.
1305 While this is not such a bad idea in theory, it has been abused by
1306 servers denying information to clients other than (historically)
1307 Netscape or, more frequently, Microsoft Internet Explorer. This
1308 option allows you to change the @code{User-Agent} line issued by Wget.
1309 Use of this option is discouraged, unless you really know what you are
1312 Specifying empty user agent with @samp{--user-agent=""} instructs Wget
1313 not to send the @code{User-Agent} header in @sc{http} requests.
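For example (the agent string shown is purely illustrative):

@example
wget --user-agent='Mozilla/5.0 (compatible)' http://fly.srk.fer.hr/
@end example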
1316 @item --post-data=@var{string}
1317 @itemx --post-file=@var{file}
1318 Use POST as the method for all HTTP requests and send the specified data
1319 in the request body. @code{--post-data} sends @var{string} as data,
1320 whereas @code{--post-file} sends the contents of @var{file}. Other than
1321 that, they work in exactly the same way.
1323 Please be aware that Wget needs to know the size of the POST data in
1324 advance. Therefore the argument to @code{--post-file} must be a regular
1325 file; specifying a FIFO or something like @file{/dev/stdin} won't work.
1326 It's not quite clear how to work around this limitation inherent in
1327 HTTP/1.0. Although HTTP/1.1 introduces @dfn{chunked} transfer that
1328 doesn't require knowing the request length in advance, a client can't
1329 use chunked unless it knows it's talking to an HTTP/1.1 server. And it
1330 can't know that until it receives a response, which in turn requires the
request to have been completed---a chicken-and-egg problem.
1333 Note: if Wget is redirected after the POST request is completed, it
1334 will not send the POST data to the redirected URL. This is because
1335 URLs that process POST often respond with a redirection to a regular
1336 page, which does not desire or accept POST. It is not completely
1337 clear that this behavior is optimal; if it doesn't work out, it might
1338 be changed in the future.
This example shows how to log in to a server using POST and then
proceed to download the desired pages, presumably only accessible to
authorized users:
@example
@group
# @r{Log in to the server.  This can be done only once.}
wget --save-cookies cookies.txt \
     --post-data 'user=foo&password=bar' \
     http://server.com/auth.php

# @r{Now grab the page or pages we care about.}
wget --load-cookies cookies.txt \
     -p http://server.com/interesting/article.php
@end group
@end example
1357 If the server is using session cookies to track user authentication,
1358 the above will not work because @samp{--save-cookies} will not save
1359 them (and neither will browsers) and the @file{cookies.txt} file will
1360 be empty. In that case use @samp{--keep-session-cookies} along with
1361 @samp{--save-cookies} to force saving of session cookies.
1363 @cindex Content-Disposition
1364 @item --content-disposition
1366 If this is set to on, experimental (not fully-functional) support for
1367 @code{Content-Disposition} headers is enabled. This can currently result in
1368 extra round-trips to the server for a @code{HEAD} request, and is known
1369 to suffer from a few bugs, which is why it is not currently enabled by default.
1371 This option is useful for some file-downloading CGI programs that use
1372 @code{Content-Disposition} headers to describe what the name of a
1373 downloaded file should be.
1375 @cindex authentication
1376 @item --auth-no-challenge
1378 If this option is given, Wget will send Basic HTTP authentication
1379 information (plaintext username and password) for all requests, just
1380 like Wget 1.10.2 and prior did by default.
1382 Use of this option is not recommended, and is intended only to support
a few obscure servers, which never send HTTP authentication
challenges but accept unsolicited auth info, say, in addition to
1385 form-based authentication.
1389 @node HTTPS (SSL/TLS) Options
1390 @section HTTPS (SSL/TLS) Options
1393 To support encrypted HTTP (HTTPS) downloads, Wget must be compiled
1394 with an external SSL library, currently OpenSSL. If Wget is compiled
1395 without SSL support, none of these options are available.
1398 @cindex SSL protocol, choose
1399 @item --secure-protocol=@var{protocol}
1400 Choose the secure protocol to be used. Legal values are @samp{auto},
1401 @samp{SSLv2}, @samp{SSLv3}, and @samp{TLSv1}. If @samp{auto} is used,
1402 the SSL library is given the liberty of choosing the appropriate
1403 protocol automatically, which is achieved by sending an SSLv2 greeting
1404 and announcing support for SSLv3 and TLSv1. This is the default.
1406 Specifying @samp{SSLv2}, @samp{SSLv3}, or @samp{TLSv1} forces the use
1407 of the corresponding protocol. This is useful when talking to old and
1408 buggy SSL server implementations that make it hard for OpenSSL to
choose the correct protocol version.  Fortunately, such servers are
quite rare.
1412 @cindex SSL certificate, check
1413 @item --no-check-certificate
1414 Don't check the server certificate against the available certificate
1415 authorities. Also don't require the URL host name to match the common
1416 name presented by the certificate.
1418 As of Wget 1.10, the default is to verify the server's certificate
1419 against the recognized certificate authorities, breaking the SSL
1420 handshake and aborting the download if the verification fails.
1421 Although this provides more secure downloads, it does break
1422 interoperability with some sites that worked with previous Wget
1423 versions, particularly those using self-signed, expired, or otherwise
1424 invalid certificates. This option forces an ``insecure'' mode of
1425 operation that turns the certificate verification errors into warnings
1426 and allows you to proceed.
1428 If you encounter ``certificate verification'' errors or ones saying
1429 that ``common name doesn't match requested host name'', you can use
1430 this option to bypass the verification and proceed with the download.
1431 @emph{Only use this option if you are otherwise convinced of the
1432 site's authenticity, or if you really don't care about the validity of
1433 its certificate.} It is almost always a bad idea not to check the
1434 certificates when transmitting confidential or important data.
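For example, a sketch for a host presenting a self-signed certificate
(hypothetical @sc{url}):

@example
wget --no-check-certificate https://self-signed.example.com/file
@end example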
1436 @cindex SSL certificate
1437 @item --certificate=@var{file}
1438 Use the client certificate stored in @var{file}. This is needed for
1439 servers that are configured to require certificates from the clients
that connect to them. Normally a certificate is not required and this
switch is optional.
1443 @cindex SSL certificate type, specify
1444 @item --certificate-type=@var{type}
1445 Specify the type of the client certificate. Legal values are
@samp{PEM} (assumed by default) and @samp{DER}, also known as
@samp{ASN1}.
1449 @item --private-key=@var{file}
1450 Read the private key from @var{file}. This allows you to provide the
1451 private key in a file separate from the certificate.
1453 @item --private-key-type=@var{type}
1454 Specify the type of the private key. Accepted values are @samp{PEM}
1455 (the default) and @samp{DER}.
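
As an illustration of the certificate-related options above, a client
certificate and its separate private key might be supplied like this
(the file names and @sc{url} are placeholders):

@example
wget --certificate=client.pem --private-key=client.key \
    https://secure.example.com/protected/data.xml
@end example
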
1457 @item --ca-certificate=@var{file}
1458 Use @var{file} as the file with the bundle of certificate authorities
1459 (``CA'') to verify the peers. The certificates must be in PEM format.
1461 Without this option Wget looks for CA certificates at the
1462 system-specified locations, chosen at OpenSSL installation time.
1464 @cindex SSL certificate authority
1465 @item --ca-directory=@var{directory}
Specifies the directory containing CA certificates in PEM format. Each
1467 file contains one CA certificate, and the file name is based on a hash
1468 value derived from the certificate. This is achieved by processing a
1469 certificate directory with the @code{c_rehash} utility supplied with
1470 OpenSSL. Using @samp{--ca-directory} is more efficient than
1471 @samp{--ca-certificate} when many certificates are installed because
1472 it allows Wget to fetch certificates on demand.
1474 Without this option Wget looks for CA certificates at the
1475 system-specified locations, chosen at OpenSSL installation time.
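
For example, to verify the server against a locally maintained CA
bundle, or against a hashed directory of certificates (the paths and
@sc{url} are placeholders):

@example
wget --ca-certificate=/etc/ssl/certs/ca-bundle.crt https://www.example.com/
wget --ca-directory=/etc/ssl/certs https://www.example.com/
@end example
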
1477 @cindex entropy, specifying source of
1478 @cindex randomness, specifying source of
1479 @item --random-file=@var{file}
1480 Use @var{file} as the source of random data for seeding the
1481 pseudo-random number generator on systems without @file{/dev/random}.
1483 On such systems the SSL library needs an external source of randomness
1484 to initialize. Randomness may be provided by EGD (see
1485 @samp{--egd-file} below) or read from an external source specified by
1486 the user. If this option is not specified, Wget looks for random data
1487 in @code{$RANDFILE} or, if that is unset, in @file{$HOME/.rnd}. If
none of those are available, it is likely that SSL encryption will not
be usable.
1491 If you're getting the ``Could not seed OpenSSL PRNG; disabling SSL.''
error, you should provide random data using some of the methods
described above.
1496 @item --egd-file=@var{file}
1497 Use @var{file} as the EGD socket. EGD stands for @dfn{Entropy
1498 Gathering Daemon}, a user-space program that collects data from
1499 various unpredictable system sources and makes it available to other
1500 programs that might need it. Encryption software, such as the SSL
1501 library, needs sources of non-repeating randomness to seed the random
1502 number generator used to produce cryptographically strong keys.
1504 OpenSSL allows the user to specify his own source of entropy using the
1505 @code{RAND_FILE} environment variable. If this variable is unset, or
1506 if the specified file does not produce enough randomness, OpenSSL will
read random data from the EGD socket specified using this option.
1509 If this option is not specified (and the equivalent startup command is
1510 not used), EGD is never contacted. EGD is not needed on modern Unix
1511 systems that support @file{/dev/random}.
1515 @section FTP Options
1519 @cindex ftp password
1520 @cindex ftp authentication
1521 @item --ftp-user=@var{user}
1522 @itemx --ftp-password=@var{password}
1523 Specify the username @var{user} and password @var{password} on an
1524 @sc{ftp} server. Without this, or the corresponding startup option,
the password defaults to @samp{-wget@@}, normally used for anonymous
@sc{ftp}.
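
For example (the server name and credentials are placeholders):

@example
wget --ftp-user=alice --ftp-password=secret \
    ftp://ftp.example.com/pub/file.tar.gz
@end example
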
1528 Another way to specify username and password is in the @sc{url} itself
1529 (@pxref{URL Format}). Either method reveals your password to anyone who
1530 bothers to run @code{ps}. To prevent the passwords from being seen,
1531 store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
1532 those files from other users with @code{chmod}. If the passwords are
1533 really important, do not leave them lying in those files either---edit
1534 the files and delete them after Wget has started the download.
For more information about security issues with Wget, @xref{Security
Considerations}.
1541 @cindex .listing files, removing
1542 @item --no-remove-listing
1543 Don't remove the temporary @file{.listing} files generated by @sc{ftp}
1544 retrievals. Normally, these files contain the raw directory listings
1545 received from @sc{ftp} servers. Not removing them can be useful for
1546 debugging purposes, or when you want to be able to easily check on the
1547 contents of remote server directories (e.g. to verify that a mirror
1548 you're running is complete).
1550 Note that even though Wget writes to a known filename for this file,
1551 this is not a security hole in the scenario of a user making
1552 @file{.listing} a symbolic link to @file{/etc/passwd} or something and
1553 asking @code{root} to run Wget in his or her directory. Depending on
1554 the options used, either Wget will refuse to write to @file{.listing},
1555 making the globbing/recursion/time-stamping operation fail, or the
1556 symbolic link will be deleted and replaced with the actual
1557 @file{.listing} file, or the listing will be written to a
1558 @file{.listing.@var{number}} file.
Even though this situation isn't a problem, @code{root} should
1561 never run Wget in a non-trusted user's directory. A user could do
1562 something as simple as linking @file{index.html} to @file{/etc/passwd}
1563 and asking @code{root} to run Wget with @samp{-N} or @samp{-r} so the file
1564 will be overwritten.
1566 @cindex globbing, toggle
1568 Turn off @sc{ftp} globbing. Globbing refers to the use of shell-like
1569 special characters (@dfn{wildcards}), like @samp{*}, @samp{?}, @samp{[}
and @samp{]} to retrieve more than one file from the same directory at
once, like:
1574 wget ftp://gnjilux.srk.fer.hr/*.msg
1577 By default, globbing will be turned on if the @sc{url} contains a
globbing character. This option may be used to turn globbing on or off
permanently.
1581 You may have to quote the @sc{url} to protect it from being expanded by
1582 your shell. Globbing makes Wget look for a directory listing, which is
1583 system-specific. This is why it currently works only with Unix @sc{ftp}
1584 servers (and the ones emulating Unix @code{ls} output).
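
For instance, quoting the wildcard keeps the shell from expanding it
locally (the server name is a placeholder):

@example
wget 'ftp://ftp.example.com/pub/reports/*.txt'
@end example
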
1587 @item --no-passive-ftp
1588 Disable the use of the @dfn{passive} FTP transfer mode. Passive FTP
1589 mandates that the client connect to the server to establish the data
1590 connection rather than the other way around.
1592 If the machine is connected to the Internet directly, both passive and
1593 active FTP should work equally well. Behind most firewall and NAT
1594 configurations passive FTP has a better chance of working. However,
1595 in some rare firewall configurations, active FTP actually works when
1596 passive FTP doesn't. If you suspect this to be the case, use this
1597 option, or set @code{passive_ftp=off} in your init file.
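
For example (the server name is a placeholder):

@example
wget --no-passive-ftp ftp://ftp.example.com/pub/file.iso
@end example
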
1599 @cindex symbolic links, retrieving
1600 @item --retr-symlinks
1601 Usually, when retrieving @sc{ftp} directories recursively and a symbolic
1602 link is encountered, the linked-to file is not downloaded. Instead, a
1603 matching symbolic link is created on the local filesystem. The
1604 pointed-to file will not be downloaded unless this recursive retrieval
1605 would have encountered it separately and downloaded it anyway.
1607 When @samp{--retr-symlinks} is specified, however, symbolic links are
1608 traversed and the pointed-to files are retrieved. At this time, this
1609 option does not cause Wget to traverse symlinks to directories and
recurse through them, but in the future it should be enhanced to do
this.
1613 Note that when retrieving a file (not a directory) because it was
1614 specified on the command-line, rather than because it was recursed to,
this option has no effect. Symbolic links are always traversed in this
case.
1618 @cindex Keep-Alive, turning off
1619 @cindex Persistent Connections, disabling
1620 @item --no-http-keep-alive
1621 Turn off the ``keep-alive'' feature for HTTP downloads. Normally, Wget
1622 asks the server to keep the connection open so that, when you download
1623 more than one document from the same server, they get transferred over
1624 the same TCP connection. This saves time and at the same time reduces
1625 the load on the server.
1627 This option is useful when, for some reason, persistent (keep-alive)
1628 connections don't work for you, for example due to a server bug or due
1629 to the inability of server-side scripts to cope with the connections.
1632 @node Recursive Retrieval Options
1633 @section Recursive Retrieval Options
Turn on recursive retrieving. @xref{Recursive Download}, for more
details.
1641 @item -l @var{depth}
1642 @itemx --level=@var{depth}
1643 Specify recursion maximum depth level @var{depth} (@pxref{Recursive
1644 Download}). The default maximum depth is 5.
1646 @cindex proxy filling
1647 @cindex delete after retrieval
1648 @cindex filling proxy cache
1649 @item --delete-after
1650 This option tells Wget to delete every single file it downloads,
1651 @emph{after} having done so. It is useful for pre-fetching popular
1652 pages through a proxy, e.g.:
1655 wget -r -nd --delete-after http://whatever.com/~popular/page/
The @samp{-r} option is to retrieve recursively, and @samp{-nd} to not
create directories.
1661 Note that @samp{--delete-after} deletes files on the local machine. It
1662 does not issue the @samp{DELE} command to remote FTP sites, for
1663 instance. Also note that when @samp{--delete-after} is specified,
1664 @samp{--convert-links} is ignored, so @samp{.orig} files are simply not
1665 created in the first place.
1667 @cindex conversion of links
1668 @cindex link conversion
1670 @itemx --convert-links
1671 After the download is complete, convert the links in the document to
1672 make them suitable for local viewing. This affects not only the visible
1673 hyperlinks, but any part of the document that links to external content,
such as embedded images, links to style sheets, hyperlinks to non-@sc{html}
content, etc.
Each link will be changed in one of two ways:
1681 The links to files that have been downloaded by Wget will be changed to
1682 refer to the file they point to as a relative link.
1684 Example: if the downloaded file @file{/foo/doc.html} links to
1685 @file{/bar/img.gif}, also downloaded, then the link in @file{doc.html}
1686 will be modified to point to @samp{../bar/img.gif}. This kind of
1687 transformation works reliably for arbitrary combinations of directories.
1690 The links to files that have not been downloaded by Wget will be changed
1691 to include host name and absolute path of the location they point to.
1693 Example: if the downloaded file @file{/foo/doc.html} links to
1694 @file{/bar/img.gif} (or to @file{../bar/img.gif}), then the link in
1695 @file{doc.html} will be modified to point to
1696 @file{http://@var{hostname}/bar/img.gif}.
1699 Because of this, local browsing works reliably: if a linked file was
1700 downloaded, the link will refer to its local name; if it was not
1701 downloaded, the link will refer to its full Internet address rather than
1702 presenting a broken link. The fact that the former links are converted
to relative links ensures that you can move the downloaded hierarchy to
another directory.
1706 Note that only at the end of the download can Wget know which links have
1707 been downloaded. Because of that, the work done by @samp{-k} will be
1708 performed at the end of all the downloads.
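
For instance, a recursive download whose links are rewritten for local
viewing afterwards might look like this (the @sc{url} is a
placeholder):

@example
wget -r -k http://www.example.com/docs/
@end example
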
1710 @cindex backing up converted files
1712 @itemx --backup-converted
1713 When converting a file, back up the original version with a @samp{.orig}
suffix. Affects the behavior of @samp{-N} (@pxref{HTTP Time-Stamping
Internals}).
1719 Turn on options suitable for mirroring. This option turns on recursion
1720 and time-stamping, sets infinite recursion depth and keeps @sc{ftp}
1721 directory listings. It is currently equivalent to
1722 @samp{-r -N -l inf --no-remove-listing}.
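
For example, to mirror a site into the current directory (the @sc{url}
is a placeholder):

@example
wget --mirror http://www.example.com/
@end example
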
1724 @cindex page requisites
1725 @cindex required images, downloading
1727 @itemx --page-requisites
1728 This option causes Wget to download all the files that are necessary to
1729 properly display a given @sc{html} page. This includes such things as
1730 inlined images, sounds, and referenced stylesheets.
1732 Ordinarily, when downloading a single @sc{html} page, any requisite documents
1733 that may be needed to display it properly are not downloaded. Using
1734 @samp{-r} together with @samp{-l} can help, but since Wget does not
1735 ordinarily distinguish between external and inlined documents, one is
generally left with ``leaf documents'' that are missing their
requisites.
1739 For instance, say document @file{1.html} contains an @code{<IMG>} tag
1740 referencing @file{1.gif} and an @code{<A>} tag pointing to external
1741 document @file{2.html}. Say that @file{2.html} is similar but that its
1742 image is @file{2.gif} and it links to @file{3.html}. Say this
1743 continues up to some arbitrarily high number.
1745 If one executes the command:
1748 wget -r -l 2 http://@var{site}/1.html
1751 then @file{1.html}, @file{1.gif}, @file{2.html}, @file{2.gif}, and
1752 @file{3.html} will be downloaded. As you can see, @file{3.html} is
1753 without its requisite @file{3.gif} because Wget is simply counting the
1754 number of hops (up to 2) away from @file{1.html} in order to determine
1755 where to stop the recursion. However, with this command:
1758 wget -r -l 2 -p http://@var{site}/1.html
1761 all the above files @emph{and} @file{3.html}'s requisite @file{3.gif}
1762 will be downloaded. Similarly,
1765 wget -r -l 1 -p http://@var{site}/1.html
1768 will cause @file{1.html}, @file{1.gif}, @file{2.html}, and @file{2.gif}
1769 to be downloaded. One might think that:
1772 wget -r -l 0 -p http://@var{site}/1.html
1775 would download just @file{1.html} and @file{1.gif}, but unfortunately
1776 this is not the case, because @samp{-l 0} is equivalent to
1777 @samp{-l inf}---that is, infinite recursion. To download a single @sc{html}
1778 page (or a handful of them, all specified on the command-line or in a
1779 @samp{-i} @sc{url} input file) and its (or their) requisites, simply leave off
1780 @samp{-r} and @samp{-l}:
1783 wget -p http://@var{site}/1.html
1786 Note that Wget will behave as if @samp{-r} had been specified, but only
1787 that single page and its requisites will be downloaded. Links from that
1788 page to external documents will not be followed. Actually, to download
1789 a single page and all its requisites (even if they exist on separate
1790 websites), and make sure the lot displays properly locally, this author
1791 likes to use a few options in addition to @samp{-p}:
1794 wget -E -H -k -K -p http://@var{site}/@var{document}
1797 To finish off this topic, it's worth knowing that Wget's idea of an
1798 external document link is any URL specified in an @code{<A>} tag, an
@code{<AREA>} tag, or a @code{<LINK>} tag other than @code{<LINK
REL="stylesheet">}.
1802 @cindex @sc{html} comments
1803 @cindex comments, @sc{html}
1804 @item --strict-comments
1805 Turn on strict parsing of @sc{html} comments. The default is to terminate
1806 comments at the first occurrence of @samp{-->}.
1808 According to specifications, @sc{html} comments are expressed as @sc{sgml}
@dfn{declarations}. A declaration is special markup that begins with
1810 @samp{<!} and ends with @samp{>}, such as @samp{<!DOCTYPE ...>}, that
1811 may contain comments between a pair of @samp{--} delimiters. @sc{html}
1812 comments are ``empty declarations'', @sc{sgml} declarations without any
1813 non-comment text. Therefore, @samp{<!--foo-->} is a valid comment, and
1814 so is @samp{<!--one-- --two-->}, but @samp{<!--1--2-->} is not.
1816 On the other hand, most @sc{html} writers don't perceive comments as anything
1817 other than text delimited with @samp{<!--} and @samp{-->}, which is not
1818 quite the same. For example, something like @samp{<!------------>}
1819 works as a valid comment as long as the number of dashes is a multiple
1820 of four (!). If not, the comment technically lasts until the next
1821 @samp{--}, which may be at the other end of the document. Because of
1822 this, many popular browsers completely ignore the specification and
1823 implement what users have come to expect: comments delimited with
1824 @samp{<!--} and @samp{-->}.
1826 Until version 1.9, Wget interpreted comments strictly, which resulted in
1827 missing links in many web pages that displayed fine in browsers, but had
1828 the misfortune of containing non-compliant comments. Beginning with
version 1.9, Wget has joined the ranks of clients that implement
``naive'' comments, terminating each comment at the first occurrence of
@samp{-->}.
1833 If, for whatever reason, you want strict comment parsing, use this
1834 option to turn it on.
1837 @node Recursive Accept/Reject Options
1838 @section Recursive Accept/Reject Options
1841 @item -A @var{acclist} --accept @var{acclist}
1842 @itemx -R @var{rejlist} --reject @var{rejlist}
1843 Specify comma-separated lists of file name suffixes or patterns to
1844 accept or reject (@pxref{Types of Files}). Note that if
1845 any of the wildcard characters, @samp{*}, @samp{?}, @samp{[} or
1846 @samp{]}, appear in an element of @var{acclist} or @var{rejlist},
1847 it will be treated as a pattern, rather than a suffix.
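
For instance, to recursively fetch only PDF and PostScript files (the
@sc{url} is a placeholder):

@example
wget -r -A pdf,ps http://www.example.com/papers/
@end example
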
1849 @item -D @var{domain-list}
1850 @itemx --domains=@var{domain-list}
1851 Set domains to be followed. @var{domain-list} is a comma-separated list
1852 of domains. Note that it does @emph{not} turn on @samp{-H}.
1854 @item --exclude-domains @var{domain-list}
1855 Specify the domains that are @emph{not} to be followed.
1856 (@pxref{Spanning Hosts}).
1858 @cindex follow FTP links
1860 Follow @sc{ftp} links from @sc{html} documents. Without this option,
1861 Wget will ignore all the @sc{ftp} links.
1863 @cindex tag-based recursive pruning
1864 @item --follow-tags=@var{list}
1865 Wget has an internal table of @sc{html} tag / attribute pairs that it
1866 considers when looking for linked documents during a recursive
1867 retrieval. If a user wants only a subset of those tags to be
considered, however, he or she should specify such tags in a
1869 comma-separated @var{list} with this option.
1871 @item --ignore-tags=@var{list}
1872 This is the opposite of the @samp{--follow-tags} option. To skip
1873 certain @sc{html} tags when recursively looking for documents to download,
1874 specify them in a comma-separated @var{list}.
1876 In the past, this option was the best bet for downloading a single page
1877 and its requisites, using a command-line like:
1880 wget --ignore-tags=a,area -H -k -K -r http://@var{site}/@var{document}
1883 However, the author of this option came across a page with tags like
1884 @code{<LINK REL="home" HREF="/">} and came to the realization that
1885 specifying tags to ignore was not enough. One can't just tell Wget to
1886 ignore @code{<LINK>}, because then stylesheets will not be downloaded.
1887 Now the best bet for downloading a single page and its requisites is the
1888 dedicated @samp{--page-requisites} option.
1893 Ignore case when matching files and directories. This influences the
behavior of the @samp{-R}, @samp{-A}, @samp{-I}, and @samp{-X} options, as
well as the globbing implemented when downloading from @sc{ftp} sites. For
example, with this
1896 option, @samp{-A *.txt} will match @samp{file1.txt}, but also
1897 @samp{file2.TXT}, @samp{file3.TxT}, and so on.
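
For example (the @sc{url} is a placeholder; the pattern is quoted to
protect it from the shell):

@example
wget -r --ignore-case -A '*.txt' http://www.example.com/docs/
@end example
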
1901 Enable spanning across hosts when doing recursive retrieving
1902 (@pxref{Spanning Hosts}).
1906 Follow relative links only. Useful for retrieving a specific home page
1907 without any distractions, not even those from the same hosts
1908 (@pxref{Relative Links}).
1911 @itemx --include-directories=@var{list}
1912 Specify a comma-separated list of directories you wish to follow when
1913 downloading (@pxref{Directory-Based Limits}). Elements
1914 of @var{list} may contain wildcards.
1917 @itemx --exclude-directories=@var{list}
1918 Specify a comma-separated list of directories you wish to exclude from
1919 download (@pxref{Directory-Based Limits}). Elements of
1920 @var{list} may contain wildcards.
1924 Do not ever ascend to the parent directory when retrieving recursively.
1925 This is a useful option, since it guarantees that only the files
1926 @emph{below} a certain hierarchy will be downloaded.
1927 @xref{Directory-Based Limits}, for more details.
1932 @node Recursive Download
1933 @chapter Recursive Download
1936 @cindex recursive download
1938 GNU Wget is capable of traversing parts of the Web (or a single
1939 @sc{http} or @sc{ftp} server), following links and directory structure.
We refer to this as @dfn{recursive retrieval}, or @dfn{recursion}.
With @sc{http} @sc{url}s, Wget retrieves and parses the @sc{html}
document from the given @sc{url}, retrieving the files the document
refers to, through markup like @code{href} or @code{src}. If the
freshly downloaded file is also of type @code{text/html} or
@code{application/xhtml+xml}, it will be parsed and followed further.
1949 Recursive retrieval of @sc{http} and @sc{html} content is
1950 @dfn{breadth-first}. This means that Wget first downloads the requested
1951 @sc{html} document, then the documents linked from that document, then the
1952 documents linked by them, and so on. In other words, Wget first
1953 downloads the documents at depth 1, then those at depth 2, and so on
1954 until the specified maximum depth.
1956 The maximum @dfn{depth} to which the retrieval may descend is specified
1957 with the @samp{-l} option. The default maximum depth is five layers.
1959 When retrieving an @sc{ftp} @sc{url} recursively, Wget will retrieve all
1960 the data from the given directory tree (including the subdirectories up
1961 to the specified depth) on the remote server, creating its mirror image
1962 locally. @sc{ftp} retrieval is also limited by the @code{depth}
parameter. Unlike @sc{http} recursion, @sc{ftp} recursion is performed
depth-first.
1966 By default, Wget will create a local directory tree, corresponding to
1967 the one found on the remote server.
Recursive retrieval has a number of applications, the most
important of which is mirroring. It is also useful for @sc{www}
presentations, and any other situations where slow network
connections should be bypassed by storing the files locally.
1974 You should be warned that recursive downloads can overload the remote
1975 servers. Because of that, many administrators frown upon them and may
1976 ban access from your site if they detect very fast downloads of big
1977 amounts of content. When downloading from Internet servers, consider
1978 using the @samp{-w} option to introduce a delay between accesses to the
1979 server. The download will take a while longer, but the server
1980 administrator will not be alarmed by your rudeness.
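
For example, a polite recursive download with a two-second pause
between requests might look like this (the @sc{url} is a placeholder):

@example
wget -r -w 2 http://www.example.com/
@end example
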
1982 Of course, recursive download may cause problems on your machine. If
1983 left to run unchecked, it can easily fill up the disk. If downloading
from a local network, it can also take up bandwidth on the system, as well as
1985 consume memory and CPU.
1987 Try to specify the criteria that match the kind of download you are
1988 trying to achieve. If you want to download only one page, use
1989 @samp{--page-requisites} without any additional recursion. If you want
1990 to download things under one directory, use @samp{-np} to avoid
1991 downloading things from other directories. If you want to download all
1992 the files from one directory, use @samp{-l 1} to make sure the recursion
depth never exceeds one. @xref{Following Links}, for more information
about this.

Recursive retrieval should be used with care. Don't say you were not
warned.
1999 @node Following Links
2000 @chapter Following Links
2002 @cindex following links
2004 When retrieving recursively, one does not wish to retrieve loads of
unnecessary data. Most of the time users know exactly what
2006 they want to download, and want Wget to follow only specific links.
2008 For example, if you wish to download the music archive from
2009 @samp{fly.srk.fer.hr}, you will not want to download all the home pages
2010 that happen to be referenced by an obscure part of the archive.
Wget possesses several mechanisms that allow you to fine-tune which
2013 links it will follow.
2016 * Spanning Hosts:: (Un)limiting retrieval based on host name.
2017 * Types of Files:: Getting only certain files.
2018 * Directory-Based Limits:: Getting only certain directories.
2019 * Relative Links:: Follow relative links only.
2020 * FTP Links:: Following FTP links.
2023 @node Spanning Hosts
2024 @section Spanning Hosts
2025 @cindex spanning hosts
2026 @cindex hosts, spanning
2028 Wget's recursive retrieval normally refuses to visit hosts different
from the one you specified on the command line. This is a reasonable
2030 default; without it, every retrieval would have the potential to turn
your Wget into a small version of Google.
2033 However, visiting different hosts, or @dfn{host spanning,} is sometimes
2034 a useful option. Maybe the images are served from a different server.
2035 Maybe you're mirroring a site that consists of pages interlinked between
2036 three servers. Maybe the server has two equivalent names, and the @sc{html}
2037 pages refer to both interchangeably.
2040 @item Span to any host---@samp{-H}
2042 The @samp{-H} option turns on host spanning, thus allowing Wget's
2043 recursive run to visit any host referenced by a link. Unless sufficient
recursion-limiting criteria are applied, these foreign hosts will
2045 typically link to yet more hosts, and so on until Wget ends up sucking
2046 up much more data than you have intended.
2048 @item Limit spanning to certain domains---@samp{-D}
2050 The @samp{-D} option allows you to specify the domains that will be
2051 followed, thus limiting the recursion only to the hosts that belong to
2052 these domains. Obviously, this makes sense only in conjunction with
2053 @samp{-H}. A typical example would be downloading the contents of
2054 @samp{www.server.com}, but allowing downloads from
2055 @samp{images.server.com}, etc.:
2058 wget -rH -Dserver.com http://www.server.com/
2061 You can specify more than one address by separating them with a comma,
2062 e.g. @samp{-Ddomain1.com,domain2.com}.
2064 @item Keep download off certain domains---@samp{--exclude-domains}
2066 If there are domains you want to exclude specifically, you can do it
2067 with @samp{--exclude-domains}, which accepts the same type of arguments
as @samp{-D}, but will @emph{exclude} all the listed domains. For
2069 example, if you want to download all the hosts from @samp{foo.edu}
domain, with the exception of @samp{sunsite.foo.edu}, you can do it like
this:

wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
    http://www.foo.edu/
2080 @node Types of Files
2081 @section Types of Files
2082 @cindex types of files
2084 When downloading material from the web, you will often want to restrict
2085 the retrieval to only certain file types. For example, if you are
2086 interested in downloading @sc{gif}s, you will not be overjoyed to get
2087 loads of PostScript documents, and vice versa.
2089 Wget offers two options to deal with this problem. Each option
description lists a short name, a long name, and the equivalent command
in @file{.wgetrc}.
2093 @cindex accept wildcards
2094 @cindex accept suffixes
2095 @cindex wildcards, accept
2096 @cindex suffixes, accept
2098 @item -A @var{acclist}
2099 @itemx --accept @var{acclist}
2100 @itemx accept = @var{acclist}
The argument to the @samp{--accept} option is a list of file suffixes or
2102 patterns that Wget will download during recursive retrieval. A suffix
2103 is the ending part of a file, and consists of ``normal'' letters,
2104 e.g. @samp{gif} or @samp{.jpg}. A matching pattern contains shell-like
2105 wildcards, e.g. @samp{books*} or @samp{zelazny*196[0-9]*}.
2107 So, specifying @samp{wget -A gif,jpg} will make Wget download only the
2108 files ending with @samp{gif} or @samp{jpg}, i.e. @sc{gif}s and
2109 @sc{jpeg}s. On the other hand, @samp{wget -A "zelazny*196[0-9]*"} will
2110 download only files beginning with @samp{zelazny} and containing numbers
2111 from 1960 to 1969 anywhere within. Look up the manual of your shell for
2112 a description of how pattern matching works.
2114 Of course, any number of suffixes and patterns can be combined into a
2115 comma-separated list, and given as an argument to @samp{-A}.
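
For instance, a combined list of suffixes and patterns might look like
this (the @sc{url} is a placeholder; note the quoting that protects
the pattern from the shell):

@example
wget -r -A 'gif,jpg,zelazny*196[0-9]*' http://www.example.com/
@end example
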
2117 @cindex reject wildcards
2118 @cindex reject suffixes
2119 @cindex wildcards, reject
2120 @cindex suffixes, reject
2121 @item -R @var{rejlist}
2122 @itemx --reject @var{rejlist}
2123 @itemx reject = @var{rejlist}
2124 The @samp{--reject} option works the same way as @samp{--accept}, only
2125 its logic is the reverse; Wget will download all files @emph{except} the
2126 ones matching the suffixes (or patterns) in the list.
2128 So, if you want to download a whole page except for the cumbersome
2129 @sc{mpeg}s and @sc{.au} files, you can use @samp{wget -R mpg,mpeg,au}.
2130 Analogously, to download all files except the ones beginning with
2131 @samp{bjork}, use @samp{wget -R "bjork*"}. The quotes are to prevent
2132 expansion by the shell.
2136 The @samp{-A} and @samp{-R} options may be combined to achieve even
2137 better fine-tuning of which files to retrieve. E.g. @samp{wget -A
2138 "*zelazny*" -R .ps} will download all the files having @samp{zelazny} as
2139 a part of their name, but @emph{not} the PostScript files.
2141 Note that these two options do not affect the downloading of @sc{html}
2142 files (as determined by a @samp{.htm} or @samp{.html} filename
suffix). This behavior may not be desirable for all users, and may be
2144 changed for future versions of Wget.
2146 Note, too, that query strings (strings at the end of a URL beginning
with a question mark, @samp{?}) are not included as part of the
2148 filename for accept/reject rules, even though these will actually
2149 contribute to the name chosen for the local file. It is expected that
2150 a future version of Wget will provide an option to allow matching
2151 against query strings.
2153 Finally, it's worth noting that the accept/reject lists are matched
2154 @emph{twice} against downloaded files: once against the URL's filename
2155 portion, to determine if the file should be downloaded in the first
2156 place; then, after it has been accepted and successfully downloaded,
2157 the local file's name is also checked against the accept/reject lists
2158 to see if it should be removed. The rationale was that, since
2159 @samp{.htm} and @samp{.html} files are always downloaded regardless of
2160 accept/reject rules, they should be removed @emph{after} being
2161 downloaded and scanned for links, if they did match the accept/reject
2162 lists. However, this can lead to unexpected results, since the local
2163 filenames can differ from the original URL filenames in the following
2164 ways, all of which can change whether an accept/reject rule matches:
2168 If the local file already exists and @samp{--no-directories} was
2169 specified, a numeric suffix will be appended to the original name.
2171 If @samp{--html-extension} was specified, the local filename will have
2172 @samp{.html} appended to it. If Wget is invoked with @samp{-E -A.php},
a filename such as @samp{index.php} will match and be accepted, but upon
2174 download will be named @samp{index.php.html}, which no longer matches,
2175 and so the file will be deleted.
2177 Query strings do not contribute to URL matching, but are included in
2178 local filenames, and so @emph{do} contribute to filename matching.
2182 This behavior, too, is considered less-than-desirable, and may change
2183 in a future version of Wget.
2185 @node Directory-Based Limits
2186 @section Directory-Based Limits
2188 @cindex directory limits
2190 Regardless of other link-following facilities, it is often useful to
restrict which files are retrieved based on the directories
2192 those files are placed in. There can be many reasons for this---the
2193 home pages may be organized in a reasonable directory structure; or some
2194 directories may contain useless information, e.g. @file{/cgi-bin} or
2195 @file{/dev} directories.
2197 Wget offers three different options to deal with this requirement. Each
2198 option description lists a short name, a long name, and the equivalent
2199 command in @file{.wgetrc}.
2201 @cindex directories, include
2202 @cindex include directories
2203 @cindex accept directories
2206 @itemx --include @var{list}
2207 @itemx include_directories = @var{list}
The @samp{-I} option accepts a comma-separated list of directories included
2209 in the retrieval. Any other directories will simply be ignored. The
2210 directories are absolute paths.
2212 So, if you wish to download from @samp{http://host/people/bozo/}
2213 following only links to bozo's colleagues in the @file{/people}
2214 directory and the bogus scripts in @file{/cgi-bin}, you can specify:
2217 wget -I /people,/cgi-bin http://host/people/bozo/
2220 @cindex directories, exclude
2221 @cindex exclude directories
2222 @cindex reject directories
2224 @itemx --exclude @var{list}
2225 @itemx exclude_directories = @var{list}
The @samp{-X} option is exactly the reverse of @samp{-I}---this is a list of
2227 directories @emph{excluded} from the download. E.g. if you do not want
2228 Wget to download things from @file{/cgi-bin} directory, specify @samp{-X
2229 /cgi-bin} on the command line.
2231 The same as with @samp{-A}/@samp{-R}, these two options can be combined
2232 to get a better fine-tuning of downloading subdirectories. E.g. if you
2233 want to load all the files from @file{/pub} hierarchy except for
2234 @file{/pub/worthless}, specify @samp{-I/pub -X/pub/worthless}.
2239 @itemx no_parent = on
2240 The simplest, and often very useful way of limiting directories is
2241 disallowing retrieval of the links that refer to the hierarchy
@dfn{above} the beginning directory, i.e. disallowing ascent to the
2243 parent directory/directories.
2245 The @samp{--no-parent} option (short @samp{-np}) is useful in this case.
2246 Using it guarantees that you will never leave the existing hierarchy.
2247 Supposing you issue Wget with:
2250 wget -r --no-parent http://somehost/~luzer/my-archive/
2253 You may rest assured that none of the references to
2254 @file{/~his-girls-homepage/} or @file{/~luzer/all-my-mpegs/} will be
2255 followed. Only the archive you are interested in will be downloaded.
2256 Essentially, @samp{--no-parent} is similar to
2257 @samp{-I/~luzer/my-archive}, only it handles redirections in a more
2258 intelligent fashion.
2260 @strong{Note} that, for HTTP (and HTTPS), the trailing slash is very
2261 important to @samp{--no-parent}. HTTP has no concept of a ``directory''---Wget
2262 relies on you to indicate what's a directory and what isn't. In
2263 @samp{http://foo/bar/}, Wget will consider @samp{bar} to be a
2264 directory, while in @samp{http://foo/bar} (no trailing slash),
2265 @samp{bar} will be considered a filename (so @samp{--no-parent} would be
2266 meaningless, as its parent is @samp{/}).
2269 @node Relative Links
2270 @section Relative Links
2271 @cindex relative links
2273 When @samp{-L} is turned on, only the relative links are ever followed.
Relative links are here defined as those that do not refer to the web
2275 server root. For example, these links are relative:
2279 <a href="foo/bar.gif">
2280 <a href="../foo/bar.gif">
2283 These links are not relative:
2287 <a href="/foo/bar.gif">
2288 <a href="http://www.server.com/foo/bar.gif">
2291 Using this option guarantees that recursive retrieval will not span
2292 hosts, even without @samp{-H}. In simple cases it also allows downloads
2293 to ``just work'' without having to convert links.
This option is probably not very useful and might be removed in a future
version.
2299 @section Following FTP Links
2300 @cindex following ftp links
2302 The rules for @sc{ftp} are somewhat specific, as it is necessary for
2303 them to be. @sc{ftp} links in @sc{html} documents are often included
for purposes of reference, and it is often inconvenient to download them
by default.
2307 To have @sc{ftp} links followed from @sc{html} documents, you need to
2308 specify the @samp{--follow-ftp} option. Having done that, @sc{ftp}
2309 links will span hosts regardless of @samp{-H} setting. This is logical,
2310 as @sc{ftp} links rarely point to the same host where the @sc{http}
server resides. For similar reasons, the @samp{-L} option has no
2312 effect on such downloads. On the other hand, domain acceptance
2313 (@samp{-D}) and suffix rules (@samp{-A} and @samp{-R}) apply normally.
2315 Also note that followed links to @sc{ftp} directories will not be
2316 retrieved recursively further.
2319 @chapter Time-Stamping
2320 @cindex time-stamping
2321 @cindex timestamping
2322 @cindex updating the archives
2323 @cindex incremental updating
2325 One of the most important aspects of mirroring information from the
2326 Internet is updating your archives.
2328 Downloading the whole archive again and again, just to replace a few
2329 changed files is expensive, both in terms of wasted bandwidth and money,
2330 and the time to do the update. This is why all the mirroring tools
2331 offer the option of incremental updating.
2333 Such an updating mechanism means that the remote server is scanned in
2334 search of @dfn{new} files. Only those new files will be downloaded in
2335 the place of the old ones.
A file is considered new if one of these two conditions is met:
2341 A file of that name does not already exist locally.
2344 A file of that name does exist, but the remote file was modified more
2345 recently than the local file.
2348 To implement this, the program needs to be aware of the time of last
2349 modification of both local and remote files. We call this information the
2350 @dfn{time-stamp} of a file.
2352 The time-stamping in GNU Wget is turned on using @samp{--timestamping}
2353 (@samp{-N}) option, or through @code{timestamping = on} directive in
2354 @file{.wgetrc}. With this option, for each file it intends to download,
2355 Wget will check whether a local file of the same name exists. If it
2356 does, and the remote file is older, Wget will not download it.
2358 If the local file does not exist, or the sizes of the files do not
match, Wget will download the remote file no matter what the time-stamps
say.
2363 * Time-Stamping Usage::
2364 * HTTP Time-Stamping Internals::
2365 * FTP Time-Stamping Internals::
2368 @node Time-Stamping Usage
2369 @section Time-Stamping Usage
2370 @cindex time-stamping usage
2371 @cindex usage, time-stamping
2373 The usage of time-stamping is simple. Say you would like to download a
2374 file so that it keeps its date of modification.
2377 wget -S http://www.gnu.ai.mit.edu/
2380 A simple @code{ls -l} shows that the time stamp on the local file equals
2381 the state of the @code{Last-Modified} header, as returned by the server.
2382 As you can see, the time-stamping info is preserved locally, even
2383 without @samp{-N} (at least for @sc{http}).
2385 Several days later, you would like Wget to check if the remote file has
2386 changed, and download it if it has.
2389 wget -N http://www.gnu.ai.mit.edu/
2392 Wget will ask the server for the last-modified date. If the local file
2393 has the same timestamp as the server, or a newer one, the remote file
2394 will not be re-fetched. However, if the remote file is more recent,
2395 Wget will proceed to fetch it.
2397 The same goes for @sc{ftp}. For example:
2400 wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
2403 (The quotes around that URL are to prevent the shell from trying to
2404 interpret the @samp{*}.)
2406 After download, a local directory listing will show that the timestamps
2407 match those on the remote server. Reissuing the command with @samp{-N}
2408 will make Wget re-fetch @emph{only} the files that have been modified
2409 since the last download.
2411 If you wished to mirror the GNU archive every week, you would use a
2412 command like the following, weekly:
2415 wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
2418 Note that time-stamping will only work for files for which the server
2419 gives a timestamp. For @sc{http}, this depends on getting a
2420 @code{Last-Modified} header. For @sc{ftp}, this depends on getting a
2421 directory listing with dates in a format that Wget can parse
2422 (@pxref{FTP Time-Stamping Internals}).
2424 @node HTTP Time-Stamping Internals
2425 @section HTTP Time-Stamping Internals
2426 @cindex http time-stamping
Time-stamping in @sc{http} is implemented by checking the
2429 @code{Last-Modified} header. If you wish to retrieve the file
2430 @file{foo.html} through @sc{http}, Wget will check whether
2431 @file{foo.html} exists locally. If it doesn't, @file{foo.html} will be
2432 retrieved unconditionally.
2434 If the file does exist locally, Wget will first check its local
2435 time-stamp (similar to the way @code{ls -l} checks it), and then send a
@code{HEAD} request to the remote server, demanding the information on
the remote file.
2439 The @code{Last-Modified} header is examined to find which file was
2440 modified more recently (which makes it ``newer''). If the remote file
2441 is newer, it will be downloaded; if it is older, Wget will give
2442 up.@footnote{As an additional check, Wget will look at the
2443 @code{Content-Length} header, and compare the sizes; if they are not the
same, the remote file will be downloaded no matter what the time-stamp
says.}
2447 When @samp{--backup-converted} (@samp{-K}) is specified in conjunction
2448 with @samp{-N}, server file @samp{@var{X}} is compared to local file
2449 @samp{@var{X}.orig}, if extant, rather than being compared to local file
2450 @samp{@var{X}}, which will always differ if it's been converted by
2451 @samp{--convert-links} (@samp{-k}).
2453 Arguably, @sc{http} time-stamping should be implemented using the
2454 @code{If-Modified-Since} request.
2456 @node FTP Time-Stamping Internals
2457 @section FTP Time-Stamping Internals
2458 @cindex ftp time-stamping
2460 In theory, @sc{ftp} time-stamping works much the same as @sc{http}, only
@sc{ftp} has no headers---time-stamps must be ferreted out of directory
listings.
2464 If an @sc{ftp} download is recursive or uses globbing, Wget will use the
2465 @sc{ftp} @code{LIST} command to get a file listing for the directory
2466 containing the desired file(s). It will try to analyze the listing,
2467 treating it like Unix @code{ls -l} output, extracting the time-stamps.
2468 The rest is exactly the same as for @sc{http}. Note that when
2469 retrieving individual files from an @sc{ftp} server without using
2470 globbing or recursion, listing files will not be downloaded (and thus
2471 files will not be time-stamped) unless @samp{-N} is specified.
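
For example, to re-fetch a single file from an @sc{ftp} server only if
the remote copy is newer (the server name is a placeholder):

@example
wget -N ftp://ftp.example.com/pub/README
@end example
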
The assumption that every directory listing is a Unix-style listing may
2474 sound extremely constraining, but in practice it is not, as many
2475 non-Unix @sc{ftp} servers use the Unixoid listing format because most
2476 (all?) of the clients understand it. Bear in mind that @sc{rfc959}
2477 defines no standard way to get a file list, let alone the time-stamps.
2478 We can only hope that a future standard will define this.
Another non-standard solution includes the use of the @code{MDTM} command
2481 that is supported by some @sc{ftp} servers (including the popular
2482 @code{wu-ftpd}), which returns the exact time of the specified file.
2483 Wget may support this command in the future.
2486 @chapter Startup File
2487 @cindex startup file
2493 Once you know how to change default settings of Wget through command
2494 line arguments, you may wish to make some of those settings permanent.
2495 You can do that in a convenient way by creating the Wget startup
2496 file---@file{.wgetrc}.
In addition to @file{.wgetrc}, the ``main'' initialization file, it is
convenient to have a special facility for storing passwords. Thus Wget
reads and interprets the contents of @file{$HOME/.netrc}, if it finds
it. You can find the @file{.netrc} format described in your system manuals.
Wget reads @file{.wgetrc} upon startup, recognizing a limited set of
commands.
2507 * Wgetrc Location:: Location of various wgetrc files.
2508 * Wgetrc Syntax:: Syntax of wgetrc.
2509 * Wgetrc Commands:: List of available commands.
2510 * Sample Wgetrc:: A wgetrc example.
2513 @node Wgetrc Location
2514 @section Wgetrc Location
2515 @cindex wgetrc location
2516 @cindex location of wgetrc
2518 When initializing, Wget will look for a @dfn{global} startup file,
2519 @file{/usr/local/etc/wgetrc} by default (or some prefix other than
2520 @file{/usr/local}, if Wget was not installed there) and read commands
2521 from there, if it exists.
Then it will look for the user's file. If the environment variable
2524 @code{WGETRC} is set, Wget will try to load that file. Failing that, no
2525 further attempts will be made.
2527 If @code{WGETRC} is not set, Wget will try to load @file{$HOME/.wgetrc}.
2529 The fact that user's settings are loaded after the system-wide ones
2530 means that in case of collision user's wgetrc @emph{overrides} the
2531 system-wide wgetrc (in @file{/usr/local/etc/wgetrc} by default).
2532 Fascist admins, away!
2535 @section Wgetrc Syntax
2536 @cindex wgetrc syntax
2537 @cindex syntax of wgetrc
The syntax of a wgetrc command is simple:

@example
variable = value
@end example
2545 The @dfn{variable} will also be called @dfn{command}. Valid
2546 @dfn{values} are different for different commands.
2548 The commands are case-insensitive and underscore-insensitive. Thus
2549 @samp{DIr__PrefiX} is the same as @samp{dirprefix}. Empty lines, lines
2550 beginning with @samp{#} and lines containing white-space only are
2553 Commands that expect a comma-separated list will clear the list on an
2554 empty command. So, if you wish to reset the rejection list specified in
global @file{wgetrc}, you can do it with:

@example
reject =
@end example
2561 @node Wgetrc Commands
2562 @section Wgetrc Commands
2563 @cindex wgetrc commands
2565 The complete set of commands is listed below. Legal values are listed
2566 after the @samp{=}. Simple Boolean values can be set or unset using
2567 @samp{on} and @samp{off} or @samp{1} and @samp{0}.
2569 Some commands take pseudo-arbitrary values. @var{address} values can be
2570 hostnames or dotted-quad IP addresses. @var{n} can be any positive
2571 integer, or @samp{inf} for infinity, where appropriate. @var{string}
2572 values can be any non-empty string.
2574 Most of these commands have direct command-line equivalents. Also, any
2575 wgetrc command can be specified on the command line using the
@samp{--execute} switch (@pxref{Basic Startup Options}).
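
For instance, a wgetrc command can be supplied on the command line
like this (the server name is a placeholder):

@example
wget -e 'passive_ftp=off' ftp://ftp.example.com/pub/file.iso
@end example
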
2579 @item accept/reject = @var{string}
2580 Same as @samp{-A}/@samp{-R} (@pxref{Types of Files}).
2582 @item add_hostdir = on/off
2583 Enable/disable host-prefixed file names. @samp{-nH} disables it.
2585 @item background = on/off
Enable/disable going to background---the same as @samp{-b} (which
enables it).
2589 @item backup_converted = on/off
2590 Enable/disable saving pre-converted files with the suffix
2591 @samp{.orig}---the same as @samp{-K} (which enables it).
2593 @c @item backups = @var{number}
2594 @c #### Document me!
2596 @item base = @var{string}
2597 Consider relative @sc{url}s in @sc{url} input files forced to be
2598 interpreted as @sc{html} as being relative to @var{string}---the same as
2599 @samp{--base=@var{string}}.
2601 @item bind_address = @var{address}
Bind to @var{address}, like the @samp{--bind-address=@var{address}} option.
2604 @item ca_certificate = @var{file}
2605 Set the certificate authority bundle file to @var{file}. The same
2606 as @samp{--ca-certificate=@var{file}}.
2608 @item ca_directory = @var{directory}
2609 Set the directory used for certificate authorities. The same as
2610 @samp{--ca-directory=@var{directory}}.
2612 @item cache = on/off
When set to off, disallow server-caching. See the @samp{--no-cache}
option.
2616 @item certificate = @var{file}
2617 Set the client certificate file name to @var{file}. The same as
2618 @samp{--certificate=@var{file}}.
2620 @item certificate_type = @var{string}
2621 Specify the type of the client certificate, legal values being
2622 @samp{PEM} (the default) and @samp{DER} (aka ASN1). The same as
2623 @samp{--certificate-type=@var{string}}.
2625 @item check_certificate = on/off
2626 If this is set to off, the server certificate is not checked against
the specified certificate authorities. The default is ``on''. The same as
2628 @samp{--check-certificate}.
2630 @item connect_timeout = @var{n}
2631 Set the connect timeout---the same as @samp{--connect-timeout}.
2633 @item content_disposition = on/off
2634 Turn on recognition of the (non-standard) @samp{Content-Disposition}
2635 HTTP header---if set to @samp{on}, the same as @samp{--content-disposition}.
2637 @item continue = on/off
2638 If set to on, force continuation of preexistent partially retrieved
2639 files. See @samp{-c} before setting it.
2641 @item convert_links = on/off
2642 Convert non-relative links locally. The same as @samp{-k}.
2644 @item cookies = on/off
When set to off, disallow cookies. See the @samp{--no-cookies} option.
2647 @item cut_dirs = @var{n}
2648 Ignore @var{n} remote directory components. Equivalent to
2649 @samp{--cut-dirs=@var{n}}.
2651 @item debug = on/off
2652 Debug mode, same as @samp{-d}.
2654 @item delete_after = on/off
2655 Delete after download---the same as @samp{--delete-after}.
2657 @item dir_prefix = @var{string}
2658 Top of directory tree---the same as @samp{-P @var{string}}.
2660 @item dirstruct = on/off
Turning dirstruct on or off---the same as @samp{-x} or @samp{-nd},
respectively.
2664 @item dns_cache = on/off
2665 Turn DNS caching on/off. Since DNS caching is on by default, this
2666 option is normally used to turn it off and is equivalent to
2667 @samp{--no-dns-cache}.
2669 @item dns_timeout = @var{n}
2670 Set the DNS timeout---the same as @samp{--dns-timeout}.
2672 @item domains = @var{string}
2673 Same as @samp{-D} (@pxref{Spanning Hosts}).
2675 @item dot_bytes = @var{n}
2676 Specify the number of bytes ``contained'' in a dot, as seen throughout
2677 the retrieval (1024 by default). You can postfix the value with
2678 @samp{k} or @samp{m}, representing kilobytes and megabytes,
2679 respectively. With dot settings you can tailor the dot retrieval to
2680 suit your needs, or you can use the predefined @dfn{styles}
2681 (@pxref{Download Options}).
2683 @item dot_spacing = @var{n}
2684 Specify the number of dots in a single cluster (10 by default).
2686 @item dots_in_line = @var{n}
2687 Specify the number of dots that will be printed in each line throughout
2688 the retrieval (50 by default).
2690 @item egd_file = @var{file}
Use @var{file} as the EGD socket file name. The same as
2692 @samp{--egd-file=@var{file}}.
2694 @item exclude_directories = @var{string}
2695 Specify a comma-separated list of directories you wish to exclude from
download---the same as @samp{-X @var{string}} (@pxref{Directory-Based
Limits}).
2699 @item exclude_domains = @var{string}
Same as @samp{--exclude-domains=@var{string}} (@pxref{Spanning
Hosts}).
2703 @item follow_ftp = on/off
2704 Follow @sc{ftp} links from @sc{html} documents---the same as
2705 @samp{--follow-ftp}.
2707 @item follow_tags = @var{string}
2708 Only follow certain @sc{html} tags when doing a recursive retrieval,
2709 just like @samp{--follow-tags=@var{string}}.
2711 @item force_html = on/off
2712 If set to on, force the input filename to be regarded as an @sc{html}
2713 document---the same as @samp{-F}.
2715 @item ftp_password = @var{string}
2716 Set your @sc{ftp} password to @var{string}. Without this setting, the
2717 password defaults to @samp{-wget@@}, which is a useful default for
2718 anonymous @sc{ftp} access.
2720 This command used to be named @code{passwd} prior to Wget 1.10.
2722 @item ftp_proxy = @var{string}
Use @var{string} as @sc{ftp} proxy, instead of the one specified in
environment.
2726 @item ftp_user = @var{string}
2727 Set @sc{ftp} user to @var{string}.
2729 This command used to be named @code{login} prior to Wget 1.10.
2732 Turn globbing on/off---the same as @samp{--glob} and @samp{--no-glob}.
2734 @item header = @var{string}
2735 Define a header for HTTP downloads, like using
2736 @samp{--header=@var{string}}.
2738 @item html_extension = on/off
2739 Add a @samp{.html} extension to @samp{text/html} or
2740 @samp{application/xhtml+xml} files without it, like @samp{-E}.
2742 @item http_keep_alive = on/off
2743 Turn the keep-alive feature on or off (defaults to on). Turning it
2744 off is equivalent to @samp{--no-http-keep-alive}.
2746 @item http_password = @var{string}
2747 Set @sc{http} password, equivalent to
2748 @samp{--http-password=@var{string}}.
2750 @item http_proxy = @var{string}
Use @var{string} as @sc{http} proxy, instead of the one specified in
environment.
2754 @item http_user = @var{string}
2755 Set @sc{http} user to @var{string}, equivalent to
2756 @samp{--http-user=@var{string}}.
2758 @item https_proxy = @var{string}
Use @var{string} as @sc{https} proxy, instead of the one specified in
environment.
2762 @item ignore_case = on/off
2763 When set to on, match files and directories case insensitively; the
2764 same as @samp{--ignore-case}.
2766 @item ignore_length = on/off
2767 When set to on, ignore @code{Content-Length} header; the same as
2768 @samp{--ignore-length}.
2770 @item ignore_tags = @var{string}
2771 Ignore certain @sc{html} tags when doing a recursive retrieval, like
2772 @samp{--ignore-tags=@var{string}}.
2774 @item include_directories = @var{string}
2775 Specify a comma-separated list of directories you wish to follow when
2776 downloading---the same as @samp{-I @var{string}}.
2778 @item inet4_only = on/off
2779 Force connecting to IPv4 addresses, off by default. You can put this
2780 in the global init file to disable Wget's attempts to resolve and
2781 connect to IPv6 hosts. Available only if Wget was compiled with IPv6
2782 support. The same as @samp{--inet4-only} or @samp{-4}.
2784 @item inet6_only = on/off
2785 Force connecting to IPv6 addresses, off by default. Available only if
2786 Wget was compiled with IPv6 support. The same as @samp{--inet6-only}
2789 @item input = @var{file}
Read the @sc{url}s from @var{file}, like @samp{-i @var{file}}.
2792 @item limit_rate = @var{rate}
2793 Limit the download speed to no more than @var{rate} bytes per second.
2794 The same as @samp{--limit-rate=@var{rate}}.
2796 @item load_cookies = @var{file}
2797 Load cookies from @var{file}. See @samp{--load-cookies @var{file}}.
2799 @item logfile = @var{file}
2800 Set logfile to @var{file}, the same as @samp{-o @var{file}}.
2802 @item max_redirect = @var{number}
2803 Specifies the maximum number of redirections to follow for a resource.
2804 See @samp{--max-redirect=@var{number}}.
2806 @item mirror = on/off
2807 Turn mirroring on/off. The same as @samp{-m}.
2809 @item netrc = on/off
2810 Turn reading netrc on or off.
@item no_clobber = on/off
Same as @samp{-nc}.
2815 @item no_parent = on/off
2816 Disallow retrieving outside the directory hierarchy, like
2817 @samp{--no-parent} (@pxref{Directory-Based Limits}).
2819 @item no_proxy = @var{string}
2820 Use @var{string} as the comma-separated list of domains to avoid in
2821 proxy loading, instead of the one specified in environment.
2823 @item output_document = @var{file}
2824 Set the output filename---the same as @samp{-O @var{file}}.
2826 @item page_requisites = on/off
2827 Download all ancillary documents necessary for a single @sc{html} page to
2828 display properly---the same as @samp{-p}.
2830 @item passive_ftp = on/off
2831 Change setting of passive @sc{ftp}, equivalent to the
2832 @samp{--passive-ftp} option.
@item password = @var{string}
Specify password @var{string} for both @sc{ftp} and @sc{http} file retrieval.
This command can be overridden using the @samp{ftp_password} and
@samp{http_password} commands for @sc{ftp} and @sc{http} respectively.
2839 @item post_data = @var{string}
2840 Use POST as the method for all HTTP requests and send @var{string} in
2841 the request body. The same as @samp{--post-data=@var{string}}.
2843 @item post_file = @var{file}
2844 Use POST as the method for all HTTP requests and send the contents of
2845 @var{file} in the request body. The same as
2846 @samp{--post-file=@var{file}}.
2848 @item prefer_family = IPv4/IPv6/none
When given a choice of several addresses, connect to the addresses
with the specified address family first. IPv4 addresses are preferred by
2851 default. The same as @samp{--prefer-family}, which see for a detailed
2852 discussion of why this is useful.
2854 @item private_key = @var{file}
2855 Set the private key file to @var{file}. The same as
2856 @samp{--private-key=@var{file}}.
2858 @item private_key_type = @var{string}
2859 Specify the type of the private key, legal values being @samp{PEM}
(the default) and @samp{DER} (aka ASN1). The same as
@samp{--private-key-type=@var{string}}.
2863 @item progress = @var{string}
2864 Set the type of the progress indicator. Legal types are @samp{dot}
2865 and @samp{bar}. Equivalent to @samp{--progress=@var{string}}.
2867 @item protocol_directories = on/off
2868 When set, use the protocol name as a directory component of local file
2869 names. The same as @samp{--protocol-directories}.
2871 @item proxy_password = @var{string}
2872 Set proxy authentication password to @var{string}, like
2873 @samp{--proxy-password=@var{string}}.
2875 @item proxy_user = @var{string}
2876 Set proxy authentication user name to @var{string}, like
2877 @samp{--proxy-user=@var{string}}.
2879 @item quiet = on/off
2880 Quiet mode---the same as @samp{-q}.
2882 @item quota = @var{quota}
2883 Specify the download quota, which is useful to put in the global
@file{wgetrc}. When download quota is specified, Wget will stop
retrieving after the download sum has become greater than the quota. The
quota can be specified in bytes (default), kbytes (@samp{k} appended) or
mbytes (@samp{m} appended). Thus @samp{quota = 5m} will set the quota
2888 to 5 megabytes. Note that the user's startup file overrides system
2891 @item random_file = @var{file}
2892 Use @var{file} as a source of randomness on systems lacking
2895 @item random_wait = on/off
2896 Turn random between-request wait times on or off. The same as
2897 @samp{--random-wait}.
2899 @item read_timeout = @var{n}
2900 Set the read (and write) timeout---the same as
2901 @samp{--read-timeout=@var{n}}.
2903 @item reclevel = @var{n}
2904 Recursion level (depth)---the same as @samp{-l @var{n}}.
2906 @item recursive = on/off
2907 Recursive on/off---the same as @samp{-r}.
2909 @item referer = @var{string}
2910 Set HTTP @samp{Referer:} header just like
2911 @samp{--referer=@var{string}}. (Note that it was the folks who wrote
2912 the @sc{http} spec who got the spelling of ``referrer'' wrong.)
2914 @item relative_only = on/off
2915 Follow only relative links---the same as @samp{-L} (@pxref{Relative
2918 @item remove_listing = on/off
2919 If set to on, remove @sc{ftp} listings downloaded by Wget. Setting it
2920 to off is the same as @samp{--no-remove-listing}.
2922 @item restrict_file_names = unix/windows
2923 Restrict the file names generated by Wget from URLs. See
2924 @samp{--restrict-file-names} for a more detailed description.
2926 @item retr_symlinks = on/off
2927 When set to on, retrieve symbolic links as if they were plain files; the
2928 same as @samp{--retr-symlinks}.
2930 @item retry_connrefused = on/off
2931 When set to on, consider ``connection refused'' a transient
2932 error---the same as @samp{--retry-connrefused}.
2934 @item robots = on/off
2935 Specify whether the norobots convention is respected by Wget, ``on'' by
2936 default. This switch controls both the @file{/robots.txt} and the
2937 @samp{nofollow} aspect of the spec. @xref{Robot Exclusion}, for more
2938 details about this. Be sure you know what you are doing before turning
2941 @item save_cookies = @var{file}
2942 Save cookies to @var{file}. The same as @samp{--save-cookies
2945 @item secure_protocol = @var{string}
2946 Choose the secure protocol to be used. Legal values are @samp{auto}
2947 (the default), @samp{SSLv2}, @samp{SSLv3}, and @samp{TLSv1}. The same
2948 as @samp{--secure-protocol=@var{string}}.
2950 @item server_response = on/off
2951 Choose whether or not to print the @sc{http} and @sc{ftp} server
2952 responses---the same as @samp{-S}.
2954 @item span_hosts = on/off
2957 @item strict_comments = on/off
2958 Same as @samp{--strict-comments}.
2960 @item timeout = @var{n}
2961 Set all applicable timeout values to @var{n}, the same as @samp{-T
2964 @item timestamping = on/off
2965 Turn timestamping on/off. The same as @samp{-N} (@pxref{Time-Stamping}).
2967 @item tries = @var{n}
2968 Set number of retries per @sc{url}---the same as @samp{-t @var{n}}.
2970 @item use_proxy = on/off
2971 When set to off, don't use proxy even when proxy-related environment
2972 variables are set. In that case it is the same as using
2975 @item user = @var{string}
2976 Specify username @var{string} for both @sc{ftp} and @sc{http} file retrieval.
This command can be overridden using the @samp{ftp_user} and
@samp{http_user} commands for @sc{ftp} and @sc{http} respectively.
2980 @item verbose = on/off
2981 Turn verbose on/off---the same as @samp{-v}/@samp{-nv}.
2983 @item wait = @var{n}
2984 Wait @var{n} seconds between retrievals---the same as @samp{-w
2987 @item wait_retry = @var{n}
2988 Wait up to @var{n} seconds between retries of failed retrievals
2989 only---the same as @samp{--waitretry=@var{n}}. Note that this is
2990 turned on by default in the global @file{wgetrc}.
2994 @section Sample Wgetrc
2995 @cindex sample wgetrc
2997 This is the sample initialization file, as given in the distribution.
It is divided into two sections---one for global usage (suitable for the
global startup file), and one for local usage (suitable for
3000 @file{$HOME/.wgetrc}). Be careful about the things you change.
3002 Note that almost all the lines are commented out. For a command to have
3003 any effect, you must remove the @samp{#} character at the beginning of
3007 @include sample.wgetrc.munged_for_texi_inclusion
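As a minimal hand-written alternative (a sketch only---adjust the
values to your needs), a local @file{.wgetrc} might contain just a few
of the commands described in the previous section:

@example
# Retry each URL up to 45 times.
tries = 45
# Pause two seconds between retrievals.
wait = 2
# Never download faster than 50 KB/s.
limit_rate = 50k
@end example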
3014 @c man begin EXAMPLES
3015 The examples are divided into three sections loosely based on their
3019 * Simple Usage:: Simple, basic usage of the program.
3020 * Advanced Usage:: Advanced tips.
3021 * Very Advanced Usage:: The hairy stuff.
3025 @section Simple Usage
3029 Say you want to download a @sc{url}. Just type:
3032 wget http://fly.srk.fer.hr/
3036 But what will happen if the connection is slow, and the file is lengthy?
The connection will probably fail before the whole file is retrieved,
and may do so more than once. In this case, Wget will try getting the
file until it either gets the whole of it, or exceeds the default
number of retries (this being 20). It is easy to change the number of
tries to 45, to ensure that the whole file will arrive safely:
3044 wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
3048 Now let's leave Wget to work in the background, and write its progress
3049 to log file @file{log}. It is tiring to type @samp{--tries}, so we
3050 shall use @samp{-t}.
3053 wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
The ampersand at the end of the line makes sure that Wget works in the
background. To remove the limit on the number of retries, use @samp{-t inf}.
Using @sc{ftp} is just as simple. Wget will take care of the login and
3064 wget ftp://gnjilux.srk.fer.hr/welcome.msg
3068 If you specify a directory, Wget will retrieve the directory listing,
3069 parse it and convert it to @sc{html}. Try:
3072 wget ftp://ftp.gnu.org/pub/gnu/
3077 @node Advanced Usage
3078 @section Advanced Usage
3082 You have a file that contains the URLs you want to download? Use the
3089 If you specify @samp{-} as file name, the @sc{url}s will be read from
Create a five levels deep mirror image of the GNU web site, with the
same directory structure the original has, saving the log of the
activities to @file{gnulog}:
3098 wget -r http://www.gnu.org/ -o gnulog
3102 The same as the above, but convert the links in the @sc{html} files to
3103 point to local files, so you can view the documents off-line:
3106 wget --convert-links -r http://www.gnu.org/ -o gnulog
3110 Retrieve only one @sc{html} page, but make sure that all the elements needed
3111 for the page to be displayed, such as inline images and external style
3112 sheets, are also downloaded. Also make sure the downloaded page
3113 references the downloaded links.
3116 wget -p --convert-links http://www.server.com/dir/page.html
3119 The @sc{html} page will be saved to @file{www.server.com/dir/page.html}, and
3120 the images, stylesheets, etc., somewhere under @file{www.server.com/},
3121 depending on where they were on the remote server.
3124 The same as the above, but without the @file{www.server.com/} directory.
3125 In fact, I don't want to have all those random server directories
3126 anyway---just save @emph{all} those files under a @file{download/}
3127 subdirectory of the current directory.
3130 wget -p --convert-links -nH -nd -Pdownload \
3131 http://www.server.com/dir/page.html
3135 Retrieve the index.html of @samp{www.lycos.com}, showing the original
3139 wget -S http://www.lycos.com/
3143 Save the server headers with the file, perhaps for post-processing.
3146 wget --save-headers http://www.lycos.com/
3151 Retrieve the first two levels of @samp{wuarchive.wustl.edu}, saving them
3155 wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
3159 You want to download all the @sc{gif}s from a directory on an @sc{http}
3160 server. You tried @samp{wget http://www.server.com/dir/*.gif}, but that
3161 didn't work because @sc{http} retrieval does not support globbing. In
3165 wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
3168 More verbose, but the effect is the same. @samp{-r -l1} means to
3169 retrieve recursively (@pxref{Recursive Download}), with maximum depth
3170 of 1. @samp{--no-parent} means that references to the parent directory
3171 are ignored (@pxref{Directory-Based Limits}), and @samp{-A.gif} means to
3172 download only the @sc{gif} files. @samp{-A "*.gif"} would have worked
Suppose you were in the middle of downloading when Wget was
interrupted. Now you do not want to clobber the files already present.
3181 wget -nc -r http://www.gnu.org/
3185 If you want to encode your own username and password to @sc{http} or
3186 @sc{ftp}, use the appropriate @sc{url} syntax (@pxref{URL Format}).
3189 wget ftp://hniksic:mypassword@@unix.server.com/.emacs
3192 Note, however, that this usage is not advisable on multi-user systems
3193 because it reveals your password to anyone who looks at the output of
3196 @cindex redirecting output
3198 You would like the output documents to go to standard output instead of
3202 wget -O - http://jagor.srce.hr/ http://www.srce.hr/
3205 You can also combine the two options and make pipelines to retrieve the
3206 documents from remote hotlists:
3209 wget -O - http://cool.list.com/ | wget --force-html -i -
3213 @node Very Advanced Usage
3214 @section Very Advanced Usage
3219 If you wish Wget to keep a mirror of a page (or @sc{ftp}
3220 subdirectories), use @samp{--mirror} (@samp{-m}), which is the shorthand
3221 for @samp{-r -l inf -N}. You can put Wget in the crontab file asking it
3222 to recheck a site each Sunday:
3226 0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
3230 In addition to the above, you want the links to be converted for local
3231 viewing. But, after having read this manual, you know that link
3232 conversion doesn't play well with timestamping, so you also want Wget to
3233 back up the original @sc{html} files before the conversion. Wget invocation
3234 would look like this:
3237 wget --mirror --convert-links --backup-converted \
3238 http://www.gnu.org/ -o /home/me/weeklog
3242 But you've also noticed that local viewing doesn't work all that well
3243 when @sc{html} files are saved under extensions other than @samp{.html},
3244 perhaps because they were served as @file{index.cgi}. So you'd like
3245 Wget to rename all the files served with content-type @samp{text/html}
3246 or @samp{application/xhtml+xml} to @file{@var{name}.html}.
3249 wget --mirror --convert-links --backup-converted \
3250 --html-extension -o /home/me/weeklog \
3254 Or, with less typing:
3257 wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
3266 This chapter contains all the stuff that could not fit anywhere else.
3269 * Proxies:: Support for proxy servers.
3270 * Distribution:: Getting the latest version.
3271 * Web Site:: GNU Wget's presence on the World Wide Web.
3272 * Mailing List:: Wget mailing list for announcements and discussion.
3273 * Internet Relay Chat:: Wget's presence on IRC.
3274 * Reporting Bugs:: How and where to report bugs.
3275 * Portability:: The systems Wget works on.
3276 * Signals:: Signal-handling performed by Wget.
3283 @dfn{Proxies} are special-purpose @sc{http} servers designed to transfer
3284 data from remote servers to local clients. One typical use of proxies
3285 is lightening network load for users behind a slow connection. This is
3286 achieved by channeling all @sc{http} and @sc{ftp} requests through the
proxy, which caches the transferred data. When a cached resource is
requested again, the proxy will return the data from its cache. Another
use for proxies is for companies that separate (for security reasons)
their internal networks from the rest of the Internet. In order to obtain
3291 information from the Web, their users connect and retrieve remote data
3292 using an authorized proxy.
3294 Wget supports proxies for both @sc{http} and @sc{ftp} retrievals. The
3295 standard way to specify proxy location, which Wget recognizes, is using
3296 the following environment variables:
3301 If set, the @code{http_proxy} and @code{https_proxy} variables should
3302 contain the @sc{url}s of the proxies for @sc{http} and @sc{https}
3303 connections respectively.
3306 This variable should contain the @sc{url} of the proxy for @sc{ftp}
3307 connections. It is quite common that @code{http_proxy} and
3308 @code{ftp_proxy} are set to the same @sc{url}.
This variable should contain a comma-separated list of domain extensions
the proxy should @emph{not} be used for. For instance, if the value of
@code{no_proxy} is @samp{.mit.edu}, the proxy will not be used to retrieve
3317 In addition to the environment variables, proxy location and settings
3318 may be specified from within Wget itself.
3322 @itemx proxy = on/off
3323 This option and the corresponding command may be used to suppress the
3324 use of proxy, even if the appropriate environment variables are set.
3326 @item http_proxy = @var{URL}
3327 @itemx https_proxy = @var{URL}
3328 @itemx ftp_proxy = @var{URL}
3329 @itemx no_proxy = @var{string}
3330 These startup file variables allow you to override the proxy settings
3331 specified by the environment.
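For example, a @file{.wgetrc} could contain the following (the host,
port, and domain here are placeholders):

@example
http_proxy = http://proxy.example.com:8080/
ftp_proxy = http://proxy.example.com:8080/
no_proxy = .example.com
@end example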
3334 Some proxy servers require authorization to enable you to use them. The
3335 authorization consists of @dfn{username} and @dfn{password}, which must
3336 be sent by Wget. As with @sc{http} authorization, several
3337 authentication schemes exist. For proxy authorization only the
3338 @code{Basic} authentication scheme is currently implemented.
3340 You may specify your username and password either through the proxy
3341 @sc{url} or through the command-line options. Assuming that the
3342 company's proxy is located at @samp{proxy.company.com} at port 8001, a
3343 proxy @sc{url} location containing authorization data might look like
3347 http://hniksic:mypassword@@proxy.company.com:8001/
3350 Alternatively, you may use the @samp{proxy-user} and
3351 @samp{proxy-password} options, and the equivalent @file{.wgetrc}
3352 settings @code{proxy_user} and @code{proxy_password} to set the proxy
3353 username and password.
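For instance, assuming the proxy location is already set through the
@code{http_proxy} environment variable, the credentials (placeholders
here) could be supplied like this:

@example
wget --proxy-user=hniksic --proxy-password=mypassword \
     http://www.example.com/
@end example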
3356 @section Distribution
3357 @cindex latest version
3359 Like all GNU utilities, the latest version of Wget can be found at the
3360 master GNU archive site ftp.gnu.org, and its mirrors. For example,
3361 Wget @value{VERSION} can be found at
3362 @url{ftp://ftp.gnu.org/pub/gnu/wget/wget-@value{VERSION}.tar.gz}
3368 The official web site for GNU Wget is at
3369 @url{http://www.gnu.org/software/wget/}. However, most useful
3370 information resides at ``The Wget Wgiki'',
3371 @url{http://wget.addictivecode.org/}.
3374 @section Mailing List
3375 @cindex mailing list
3378 There are several Wget-related mailing lists. The general discussion
3379 list is at @email{wget@@sunsite.dk}. It is the preferred place for
3380 support requests and suggestions, as well as for discussion of
3381 development. You are invited to subscribe.
3383 To subscribe, simply send mail to @email{wget-subscribe@@sunsite.dk}
3384 and follow the instructions. Unsubscribe by mailing to
3385 @email{wget-unsubscribe@@sunsite.dk}. The mailing list is archived at
3386 @url{http://www.mail-archive.com/wget%40sunsite.dk/} and at
3387 @url{http://news.gmane.org/gmane.comp.web.wget.general}.
3389 Another mailing list is at @email{wget-patches@@sunsite.dk}, and is
3390 used to submit patches for review by Wget developers. A ``patch'' is
a textual representation of a change to source code, readable by both
3392 humans and programs. The
3393 @url{http://wget.addictivecode.org/PatchGuidelines} page
covers the creation and submission of patches in detail. Please don't
3395 send general suggestions or bug reports to @samp{wget-patches}; use it
3396 only for patch submissions.
3398 Subscription is the same as above for @email{wget@@sunsite.dk}, except
3399 that you send to @email{wget-patches-subscribe@@sunsite.dk}, instead.
3400 The mailing list is archived at
3401 @url{http://news.gmane.org/gmane.comp.web.wget.patches}.
3403 Finally, there is the @email{wget-notify@@addictivecode.org} mailing
3404 list. This is a non-discussion list that receives commit notifications
from the source repository, as well as notifications of changes to bug
reports.
3406 This is the highest-traffic list for Wget, and is recommended only for
3407 people who are seriously interested in ongoing Wget development.
3408 Subscription is through the @code{mailman} interface at
3409 @url{http://addictivecode.org/mailman/listinfo/wget-notify}.
3411 @node Internet Relay Chat
3412 @section Internet Relay Chat
3413 @cindex Internet Relay Chat
3417 While, at the time of this writing, there is very low activity, we do
3418 have a support channel set up via IRC at @code{irc.freenode.org},
3419 @code{#wget}. Come check it out!
3421 @node Reporting Bugs
3422 @section Reporting Bugs
3424 @cindex reporting bugs
3428 You are welcome to submit bug reports via the GNU Wget bug tracker (see
3429 @url{http://wget.addictivecode.org/BugTracker}).
3431 Before actually submitting a bug report, please try to follow a few
3436 Please try to ascertain that the behavior you see really is a bug. If
3437 Wget crashes, it's a bug. If Wget does not behave as documented,
3438 it's a bug. If things work strange, but you are not sure about the way
3439 they are supposed to work, it might well be a bug, but you might want to
3440 double-check the documentation and the mailing lists (@pxref{Mailing
3444 Try to repeat the bug in as simple circumstances as possible. E.g. if
3445 Wget crashes while downloading @samp{wget -rl0 -kKE -t5 --no-proxy
3446 http://yoyodyne.com -o /tmp/log}, you should try to see if the crash is
repeatable, and if it will occur with a simpler set of options. You might
3448 even try to start the download at the page where the crash occurred to
3449 see if that page somehow triggered the crash.
3451 Also, while I will probably be interested to know the contents of your
3452 @file{.wgetrc} file, just dumping it into the debug message is probably
3453 a bad idea. Instead, you should first try to see if the bug repeats
3454 with @file{.wgetrc} moved out of the way. Only if it turns out that
3455 @file{.wgetrc} settings affect the bug, mail me the relevant parts of
Please start Wget with the @samp{-d} option and send us the resulting
3460 output (or relevant parts thereof). If Wget was compiled without
3461 debug support, recompile it---it is @emph{much} easier to trace bugs
3462 with debug support on.
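For instance, a minimal invocation that captures the debug transcript
to a file might look like this (the @sc{url} is a placeholder):

@example
wget -d -o wget-debug.log http://www.example.com/
@end example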
3464 Note: please make sure to remove any potentially sensitive information
from the debug log before sending it to the bug address. The
@code{-d} option won't go out of its way to collect sensitive information,
3467 but the log @emph{will} contain a fairly complete transcript of Wget's
3468 communication with the server, which may include passwords and pieces
of downloaded data. Since the bug address is publicly archived, you
3470 may assume that all bug reports are visible to the public.
3473 If Wget has crashed, try to run it in a debugger, e.g. @code{gdb `which
3474 wget` core} and type @code{where} to get the backtrace. This may not
3475 work if the system administrator has disabled core files, but it is
3481 @section Portability
3483 @cindex operating systems
3485 Like all GNU software, Wget works on the GNU system. However, since it
3486 uses GNU Autoconf for building and configuring, and mostly avoids using
3487 ``special'' features of any particular Unix, it should compile (and
3488 work) on all common Unix flavors.
3490 Various Wget versions have been compiled and tested under many kinds of
3491 Unix systems, including GNU/Linux, Solaris, SunOS 4.x, Mac OS X, OSF
3492 (aka Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some
3493 of those systems are no longer in widespread use and may not be able to
3494 support recent versions of Wget. If Wget fails to compile on your
3495 system, we would like to know about it.
3497 Thanks to kind contributors, this version of Wget compiles and works
3498 on 32-bit Microsoft Windows platforms. It has been compiled
3499 successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC
compilers. Naturally, it lacks some features available on
Unix, but it should work as a substitute for people stuck with
3502 Windows. Note that Windows-specific portions of Wget are not
3503 guaranteed to be supported in the future, although this has been the
3504 case in practice for many years now. All questions and problems in
Windows usage should be reported to the Wget mailing list at
3506 @email{wget@@sunsite.dk} where the volunteers who maintain the
3507 Windows-related features might look at them.
3509 Support for building on MS-DOS via DJGPP has been contributed by Gisle
3510 Vanem; a port to VMS is maintained by Steven Schweda, and is available
3511 at @url{http://antinode.org/}.
3515 @cindex signal handling
3518 Since the purpose of Wget is background work, it catches the hangup
3519 signal (@code{SIGHUP}) and ignores it. If the output was on standard
3520 output, it will be redirected to a file named @file{wget-log}.
3521 Otherwise, @code{SIGHUP} is ignored. This is convenient when you wish
3522 to redirect the output of Wget after having started it.
3525 $ wget http://www.gnus.org/dist/gnus.tar.gz &
3528 SIGHUP received, redirecting output to `wget-log'.
3531 Other than that, Wget will not try to interfere with signals in any way.
3532 @kbd{C-c}, @code{kill -TERM} and @code{kill -KILL} should kill it alike.
3537 This chapter contains some references I consider useful.
3540 * Robot Exclusion:: Wget's support for RES.
3541 * Security Considerations:: Security with Wget.
3542 * Contributors:: People who helped.
3545 @node Robot Exclusion
3546 @section Robot Exclusion
3547 @cindex robot exclusion
3549 @cindex server maintenance
3551 It is extremely easy to make Wget wander aimlessly around a web site,
sucking all the available data in the process. @samp{wget -r @var{site}},
3553 and you're set. Great? Not for the server admin.
3555 As long as Wget is only retrieving static pages, and doing it at a
3556 reasonable rate (see the @samp{--wait} option), there's not much of a
3557 problem. The trouble is that Wget can't tell the difference between the
3558 smallest static page and the most demanding CGI. A site I know has a
3559 section handled by a CGI Perl script that converts Info files to @sc{html} on
3560 the fly. The script is slow, but works well enough for human users
3561 viewing an occasional Info file. However, when someone's recursive Wget
3562 download stumbles upon the index page that links to all the Info files
3563 through the script, the system is brought to its knees without providing
anything useful to the user. (This task of converting Info files could
be done locally; access to Info documentation for all installed GNU
software on a system is available from the @code{info} command.)
3568 To avoid this kind of accident, as well as to preserve privacy for
3569 documents that need to be protected from well-behaved robots, the
3570 concept of @dfn{robot exclusion} was invented. The idea is that
3571 the server administrators and document authors can specify which
portions of the site they wish to protect from robots and those
they will permit access to.
3575 The most popular mechanism, and the @i{de facto} standard supported by
3576 all the major robots, is the ``Robots Exclusion Standard'' (RES) written
3577 by Martijn Koster et al. in 1994. It specifies the format of a text
3578 file containing directives that instruct the robots which URL paths to
3579 avoid. To be found by the robots, the specifications must be placed in
3580 @file{/robots.txt} in the server root, which the robots are expected to
3583 Although Wget is not a web robot in the strictest sense of the word, it
can download large parts of the site without the user's intervention to
3585 download an individual page. Because of that, Wget honors RES when
3586 downloading recursively. For instance, when you issue:
3589 wget -r http://www.server.com/
3592 First the index of @samp{www.server.com} will be downloaded. If Wget
3593 finds that it wants to download more documents from that server, it will
3594 request @samp{http://www.server.com/robots.txt} and, if found, use it
3595 for further downloads. @file{robots.txt} is loaded only once per each
3598 Until version 1.8, Wget supported the first version of the standard,
3599 written by Martijn Koster in 1994 and available at
3600 @url{http://www.robotstxt.org/wc/norobots.html}. As of version 1.8,
3601 Wget has supported the additional directives specified in the internet
3602 draft @samp{<draft-koster-robots-00.txt>} titled ``A Method for Web
Robots Control''. The draft, which has, as far as I know, never made
it to an @sc{rfc}, is available at
3605 @url{http://www.robotstxt.org/wc/norobots-rfc.txt}.
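As a brief illustration, a @file{/robots.txt} file might look like this
(a sketch; see the documents cited above for the complete syntax):

@example
User-agent: *
Disallow: /cgi-bin/
@end example

Robots that honor the standard, Wget included, will then avoid
everything under @file{/cgi-bin/} on that server.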
3607 This manual no longer includes the text of the Robot Exclusion Standard.
The second, lesser-known mechanism enables the author of an individual
3610 document to specify whether they want the links from the file to be
3611 followed by a robot. This is achieved using the @code{META} tag, like
3615 <meta name="robots" content="nofollow">
3618 This is explained in some detail at
3619 @url{http://www.robotstxt.org/wc/meta-user.html}. Wget supports this
3620 method of robot exclusion in addition to the usual @file{/robots.txt}
3623 If you know what you are doing and really really wish to turn off the
3624 robot exclusion, set the @code{robots} variable to @samp{off} in your
3625 @file{.wgetrc}. You can achieve the same effect from the command line
3626 using the @code{-e} switch, e.g. @samp{wget -e robots=off @var{url}...}.
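The corresponding @file{.wgetrc} line is simply:

@example
robots = off
@end example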
3628 @node Security Considerations
3629 @section Security Considerations
3632 When using Wget, you must be aware that it sends unencrypted passwords
3633 through the network, which may present a security problem. Here are the
3634 main issues, and some solutions.
3638 The passwords on the command line are visible using @code{ps}. The best
3639 way around it is to use @code{wget -i -} and feed the @sc{url}s to
3640 Wget's standard input, each on a separate line, terminated by @kbd{C-d}.
3641 Another workaround is to use @file{.netrc} to store passwords; however,
3642 storing unencrypted passwords is also considered a security risk.
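For example, a shell ``here-document'' keeps the password out of the
process list, since @code{ps} then only shows @samp{wget -i -} (the
credentials below are placeholders):

@example
wget -i - <<EOF
ftp://hniksic:mypassword@@unix.server.com/.emacs
EOF
@end example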
With the insecure @dfn{basic} authentication scheme, unencrypted
passwords are transmitted through network routers and gateways.
3649 The @sc{ftp} passwords are also in no way encrypted. There is no good
3650 solution for this at the moment.
3653 Although the ``normal'' output of Wget tries to hide the passwords,
3654 debugging logs show them, in all forms. This problem is avoided by
3655 being careful when you send debug logs (yes, even when you send them to
3660 @section Contributors
3661 @cindex contributors
3664 GNU Wget was written by Hrvoje Nik@v{s}i@'{c} @email{hniksic@@xemacs.org},
3667 GNU Wget was written by Hrvoje Niksic @email{hniksic@@xemacs.org},
3669 and it is currently maintained by Micah Cowan @email{micah@@cowan.name}.
3671 However, the development of Wget could never have gone as far as it has, were
3672 it not for the help of many people, either with bug reports, feature proposals,
3673 patches, or letters saying ``Thanks!''.
Special thanks go to the following people (in no particular order):
3678 @item Dan Harkless---contributed a lot of code and documentation of
3679 extremely high quality, as well as the @code{--page-requisites} and
3680 related options. He was the principal maintainer for some time and
3683 @item Ian Abbott---contributed bug fixes, Windows-related fixes, and
3684 provided a prototype implementation of the breadth-first recursive
3685 download. Co-maintained Wget during the 1.8 release cycle.
3688 The dotsrc.org crew, in particular Karsten Thygesen---donated system
3689 resources such as the mailing list, web space, @sc{ftp} space, and
3690 version control repositories, along with a lot of time to make these
3691 actually work. Christian Reiniger was of invaluable help with setting
3695 Heiko Herold---provided high-quality Windows builds and contributed
3696 bug and build reports for many years.
3699 Shawn McHorse---bug reports and patches.
3702 Kaveh R. Ghazi---on-the-fly @code{ansi2knr}-ization. Lots of
3706 Gordon Matzigkeit---@file{.netrc} support.
3710 Zlatko @v{C}alu@v{s}i@'{c}, Tomislav Vujec and Dra@v{z}en
3711 Ka@v{c}ar---feature suggestions and ``philosophical'' discussions.
3714 Zlatko Calusic, Tomislav Vujec and Drazen Kacar---feature suggestions
3715 and ``philosophical'' discussions.
3719 Darko Budor---initial port to Windows.
3722 Antonio Rosella---help and suggestions, plus the initial Italian
3727 Tomislav Petrovi@'{c}, Mario Miko@v{c}evi@'{c}---many bug reports and
3731 Tomislav Petrovic, Mario Mikocevic---many bug reports and suggestions.
3736 Fran@,{c}ois Pinard---many thorough bug reports and discussions.
3739 Francois Pinard---many thorough bug reports and discussions.
3743 Karl Eichwalder---lots of help with internationalization, Makefile
3744 layout and many other things.
3747 Junio Hamano---donated support for Opie and @sc{http} @code{Digest}
3751 Mauro Tortonesi---Improved IPv6 support, adding support for dual
3752 family systems. Refactored and enhanced FTP IPv6 code. Maintained GNU
Wget from 2004 to 2007.
Christopher G.@: Lewis---Maintenance of the Windows version of GNU Wget.
3759 Gisle Vanem---Many helpful patches and improvements, especially for
3760 Windows and MS-DOS support.
3763 Ralf Wildenhues---Contributed patches to convert Wget to use Automake as
3764 part of its build process, and various bugfixes.
3767 People who provided donations for development---including Brian Gough.
3770 The following people have provided patches, bug/build reports, useful
3771 suggestions, beta testing services, fan mail and all the other things
3772 that make maintenance so much fun:
3792 Kristijan @v{C}onka@v{s},
3801 Bertrand Demiddelaer,
3802 Alexander Dergachev,
3815 Aleksandar Erkalovi@'{c},
3818 Aleksandar Erkalovic,
3837 Erik Magnus Hulthen,
3856 Goran Kezunovi@'{c},
3869 $\Sigma\acute{\iota}\mu o\varsigma\;
3870 \Xi\varepsilon\nu\iota\tau\acute{\epsilon}\lambda\lambda\eta\varsigma$
3871 (Simos KSenitellis),
3880 Nicol@'{a}s Lichtmeier,
3886 Alexander V.@: Lukyanov,
3895 Matthew J.@: Mellon,
3927 @c Texinfo doesn't grok @'{@i}, so we have to use TeX itself.
3929 Juan Jos\'{e} Rodr\'{\i}guez,
3932 Juan Jose Rodriguez,
3934 Maciej W.@: Rozycki,
3940 Steven M.@: Schweda,
3950 Szakacsits Szabolcs,
3965 Douglas E.@: Wegscheid,
3967 Joshua David Williams,
Apologies to all whom I accidentally left out, and many thanks to all the
3979 subscribers of the Wget mailing list.
3981 @node Copying this manual
3982 @appendix Copying this manual
* GNU Free Documentation License:: License for copying this manual.
3992 @unnumbered Concept Index