1 This is Info file wget.info, produced by Makeinfo version 1.68 from the
2 input file ./wget.texi.
4 INFO-DIR-SECTION Net Utilities
5 INFO-DIR-SECTION World Wide Web
START-INFO-DIR-ENTRY
* Wget: (wget). The non-interactive network downloader.
END-INFO-DIR-ENTRY
This file documents the GNU Wget utility for downloading network
data.
13 Copyright (C) 1996, 1997, 1998, 2000 Free Software Foundation, Inc.
15 Permission is granted to make and distribute verbatim copies of this
16 manual provided the copyright notice and this permission notice are
17 preserved on all copies.
19 Permission is granted to copy and distribute modified versions of
20 this manual under the conditions for verbatim copying, provided also
21 that the sections entitled "Copying" and "GNU General Public License"
22 are included exactly as in the original, and provided that the entire
23 resulting derived work is distributed under the terms of a permission
24 notice identical to this one.
27 File: wget.info, Node: Directory-Based Limits, Next: FTP Links, Prev: Types of Files, Up: Following Links
29 Directory-Based Limits
30 ======================
Regardless of other link-following facilities, it is often useful to
restrict which files to retrieve based on the directories those files
are placed in. There can be many reasons for this--the home pages may
be organized in a reasonable directory structure; or some directories
may contain useless information, e.g. `/cgi-bin' or `/dev'
directories.
39 Wget offers three different options to deal with this requirement.
40 Each option description lists a short name, a long name, and the
41 equivalent command in `.wgetrc'.
45 `include_directories = LIST'
The `-I' option accepts a comma-separated list of directories included
47 in the retrieval. Any other directories will simply be ignored.
48 The directories are absolute paths.
50 So, if you wish to download from `http://host/people/bozo/'
51 following only links to bozo's colleagues in the `/people'
52 directory and the bogus scripts in `/cgi-bin', you can specify:
54 wget -I /people,/cgi-bin http://host/people/bozo/
58 `exclude_directories = LIST'
The `-X' option is exactly the reverse of `-I'--this is a list of
60 directories *excluded* from the download. E.g. if you do not want
Wget to download things from the `/cgi-bin' directory, specify `-X
62 /cgi-bin' on the command line.
As with `-A'/`-R', these two options can be combined for finer
control over downloading subdirectories. E.g. if you want to load all
the files from the `/pub' hierarchy except for `/pub/worthless',
specify `-I/pub -X/pub/worthless'; a complete command line doing this
is shown below.
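For instance, with the generic `host' used in the examples above:

wget -r -I/pub -X/pub/worthless ftp://host/pub/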
no_parent = on/off
The simplest, and often very useful way of limiting directories is
disallowing retrieval of the links that refer to the hierarchy
"above" the beginning directory, i.e. disallowing ascent to the
parent directory/directories.
77 The `--no-parent' option (short `-np') is useful in this case.
78 Using it guarantees that you will never leave the existing
79 hierarchy. Supposing you issue Wget with:
81 wget -r --no-parent http://somehost/~luzer/my-archive/
83 You may rest assured that none of the references to
84 `/~his-girls-homepage/' or `/~luzer/all-my-mpegs/' will be
85 followed. Only the archive you are interested in will be
86 downloaded. Essentially, `--no-parent' is similar to
`-I/~luzer/my-archive', only it handles redirections in a more
intelligent fashion.
File: wget.info, Node: FTP Links, Prev: Directory-Based Limits, Up: Following Links

Following FTP Links
===================
96 The rules for FTP are somewhat specific, as it is necessary for them
97 to be. FTP links in HTML documents are often included for purposes of
98 reference, and it is often inconvenient to download them by default.
100 To have FTP links followed from HTML documents, you need to specify
101 the `--follow-ftp' option. Having done that, FTP links will span hosts
102 regardless of `-H' setting. This is logical, as FTP links rarely point
103 to the same host where the HTTP server resides. For similar reasons,
the `-L' option has no effect on such downloads. On the other hand,
domain acceptance (`-D') and suffix rules (`-A' and `-R') apply
normally.
108 Also note that followed links to FTP directories will not be
109 retrieved recursively further.
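For example, to make a recursive retrieval also follow the FTP links
found in HTML documents (the host name here is generic):

wget -r --follow-ftp http://host/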
File: wget.info, Node: Time-Stamping, Next: Startup File, Prev: Following Links, Up: Top

Time-Stamping
*************
117 One of the most important aspects of mirroring information from the
118 Internet is updating your archives.
120 Downloading the whole archive again and again, just to replace a few
121 changed files is expensive, both in terms of wasted bandwidth and money,
122 and the time to do the update. This is why all the mirroring tools
123 offer the option of incremental updating.
125 Such an updating mechanism means that the remote server is scanned in
126 search of "new" files. Only those new files will be downloaded in the
127 place of the old ones.
A file is considered new if one of these two conditions is met:
131 1. A file of that name does not already exist locally.
133 2. A file of that name does exist, but the remote file was modified
134 more recently than the local file.
136 To implement this, the program needs to be aware of the time of last
modification of both remote and local files. Such information is
called the "time-stamps".
The time-stamping in GNU Wget is turned on using the `--timestamping'
(`-N') option, or through the `timestamping = on' directive in
`.wgetrc'.
142 With this option, for each file it intends to download, Wget will check
143 whether a local file of the same name exists. If it does, and the
144 remote file is older, Wget will not download it.
146 If the local file does not exist, or the sizes of the files do not
match, Wget will download the remote file no matter what the
time-stamps say.
* Menu:

* Time-Stamping Usage::
153 * HTTP Time-Stamping Internals::
154 * FTP Time-Stamping Internals::
File: wget.info, Node: Time-Stamping Usage, Next: HTTP Time-Stamping Internals, Prev: Time-Stamping, Up: Time-Stamping

Time-Stamping Usage
===================
162 The usage of time-stamping is simple. Say you would like to
163 download a file so that it keeps its date of modification.
165 wget -S http://www.gnu.ai.mit.edu/
167 A simple `ls -l' shows that the time stamp on the local file equals
168 the state of the `Last-Modified' header, as returned by the server. As
you can see, the time-stamping info is preserved locally, even without
`-N'.
172 Several days later, you would like Wget to check if the remote file
173 has changed, and download it if it has.
175 wget -N http://www.gnu.ai.mit.edu/
177 Wget will ask the server for the last-modified date. If the local
178 file is newer, the remote file will not be re-fetched. However, if the
179 remote file is more recent, Wget will proceed fetching it normally.
181 The same goes for FTP. For example:
183 wget ftp://ftp.ifi.uio.no/pub/emacs/gnus/*
185 `ls' will show that the timestamps are set according to the state on
186 the remote server. Reissuing the command with `-N' will make Wget
187 re-fetch *only* the files that have been modified.
189 In both HTTP and FTP retrieval Wget will time-stamp the local file
190 correctly (with or without `-N') if it gets the stamps, i.e. gets the
191 directory listing for FTP or the `Last-Modified' header for HTTP.
193 If you wished to mirror the GNU archive every week, you would use the
194 following command every week:
196 wget --timestamping -r ftp://prep.ai.mit.edu/pub/gnu/
199 File: wget.info, Node: HTTP Time-Stamping Internals, Next: FTP Time-Stamping Internals, Prev: Time-Stamping Usage, Up: Time-Stamping
201 HTTP Time-Stamping Internals
202 ============================
Time-stamping in HTTP is implemented by checking the
205 `Last-Modified' header. If you wish to retrieve the file `foo.html'
206 through HTTP, Wget will check whether `foo.html' exists locally. If it
207 doesn't, `foo.html' will be retrieved unconditionally.
209 If the file does exist locally, Wget will first check its local
210 time-stamp (similar to the way `ls -l' checks it), and then send a
`HEAD' request to the remote server, demanding the information on the
remote file.
214 The `Last-Modified' header is examined to find which file was
215 modified more recently (which makes it "newer"). If the remote file is
216 newer, it will be downloaded; if it is older, Wget will give up.(1)
218 When `--backup-converted' (`-K') is specified in conjunction with
219 `-N', server file `X' is compared to local file `X.orig', if extant,
220 rather than being compared to local file `X', which will always differ
221 if it's been converted by `--convert-links' (`-k').
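For instance, a mirroring run that combines these options--recursion,
time-stamping, link conversion, and backups of the pre-converted
files--might look like this:

wget -r -N -k -K http://www.gnu.ai.mit.edu/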
223 Arguably, HTTP time-stamping should be implemented using the
224 `If-Modified-Since' request.
226 ---------- Footnotes ----------
228 (1) As an additional check, Wget will look at the `Content-Length'
229 header, and compare the sizes; if they are not the same, the remote
230 file will be downloaded no matter what the time-stamp says.
233 File: wget.info, Node: FTP Time-Stamping Internals, Prev: HTTP Time-Stamping Internals, Up: Time-Stamping
235 FTP Time-Stamping Internals
236 ===========================
238 In theory, FTP time-stamping works much the same as HTTP, only FTP
has no headers--time-stamps must be received from the directory
listings.
For each directory from which files must be retrieved, Wget will use the
243 `LIST' command to get the listing. It will try to analyze the listing,
244 assuming that it is a Unix `ls -l' listing, and extract the
245 time-stamps. The rest is exactly the same as for HTTP.
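For illustration, a typical line of such a Unix-style listing--the
file name, size and date here are of course only examples--looks like
this:

-rw-r--r--   1 ftp      ftp          1340 May 16  1998 welcome.msg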
The assumption that every directory listing is a Unix-style listing may
248 sound extremely constraining, but in practice it is not, as many
249 non-Unix FTP servers use the Unixoid listing format because most (all?)
250 of the clients understand it. Bear in mind that RFC959 defines no
251 standard way to get a file list, let alone the time-stamps. We can
252 only hope that a future standard will define this.
Another non-standard solution includes the use of the `MDTM' command
255 that is supported by some FTP servers (including the popular
256 `wu-ftpd'), which returns the exact time of the specified file. Wget
257 may support this command in the future.
File: wget.info, Node: Startup File, Next: Examples, Prev: Time-Stamping, Up: Top

Startup File
************
265 Once you know how to change default settings of Wget through command
266 line arguments, you may wish to make some of those settings permanent.
You can do that in a convenient way by creating the Wget startup
file--`.wgetrc'.
While `.wgetrc' is the "main" initialization file, it is
271 convenient to have a special facility for storing passwords. Thus Wget
272 reads and interprets the contents of `$HOME/.netrc', if it finds it.
You can find the `.netrc' format described in your system manuals.
Wget reads `.wgetrc' upon startup, recognizing a limited set of
commands.
* Menu:

* Wgetrc Location:: Location of various wgetrc files.
281 * Wgetrc Syntax:: Syntax of wgetrc.
282 * Wgetrc Commands:: List of available commands.
283 * Sample Wgetrc:: A wgetrc example.
File: wget.info, Node: Wgetrc Location, Next: Wgetrc Syntax, Prev: Startup File, Up: Startup File

Wgetrc Location
===============
291 When initializing, Wget will look for a "global" startup file,
292 `/usr/local/etc/wgetrc' by default (or some prefix other than
`/usr/local', if Wget was not installed there) and read commands from
there if it exists.
Then it will look for the user's file. If the environment variable
297 `WGETRC' is set, Wget will try to load that file. Failing that, no
298 further attempts will be made.
300 If `WGETRC' is not set, Wget will try to load `$HOME/.wgetrc'.
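For example, to make a particular run use an alternate startup file
(the file name here is hypothetical):

WGETRC=/usr/local/etc/alt-wgetrc wget http://fly.cc.fer.hr/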
The fact that the user's settings are loaded after the system-wide
ones means that in case of collision the user's wgetrc *overrides* the
system-wide wgetrc (in `/usr/local/etc/wgetrc' by default). Fascist
admins, away!
File: wget.info, Node: Wgetrc Syntax, Next: Wgetrc Commands, Prev: Wgetrc Location, Up: Startup File

Wgetrc Syntax
=============
The syntax of a wgetrc command is simple:

variable = value
317 The "variable" will also be called "command". Valid "values" are
318 different for different commands.
320 The commands are case-insensitive and underscore-insensitive. Thus
321 `DIr__PrefiX' is the same as `dirprefix'. Empty lines, lines beginning
322 with `#' and lines containing white-space only are discarded.
324 Commands that expect a comma-separated list will clear the list on an
325 empty command. So, if you wish to reset the rejection list specified in
global `wgetrc', you can do it with:

reject =
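You could then specify a fresh rejection list of your own; the
suffixes below are purely illustrative:

reject = .gif,.mpg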
File: wget.info, Node: Wgetrc Commands, Next: Sample Wgetrc, Prev: Wgetrc Syntax, Up: Startup File

Wgetrc Commands
===============
336 The complete set of commands is listed below, the letter after `='
337 denoting the value the command takes. It is `on/off' for `on' or `off'
338 (which can also be `1' or `0'), STRING for any non-empty string or N
339 for a positive integer. For example, you may specify `use_proxy = off'
340 to disable use of proxy servers by default. You may use `inf' for
341 infinite values, where appropriate.
343 Most of the commands have their equivalent command-line option
344 (*Note Invoking::), except some more obscure or rarely used ones.
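For instance, a short `.wgetrc' fragment exercising each type of
value might read as follows (the logfile name is just an example):

use_proxy = off
tries = 45
logfile = /tmp/wgetlog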
346 accept/reject = STRING
347 Same as `-A'/`-R' (*Note Types of Files::).
add_hostdir = on/off
Enable/disable host-prefixed file names. `-nH' disables it.
continue = on/off
Enable/disable continuation of the retrieval - the same as `-c'
(which enables it).
background = on/off
Enable/disable going to background - the same as `-b' (which
enables it).
360 backup_converted = on/off
361 Enable/disable saving pre-converted files with the suffix `.orig'
362 - the same as `-K' (which enables it).
base = STRING
Consider relative URLs in URL input files forced to be interpreted
366 as HTML as being relative to STRING - the same as `-B'.
cache = on/off
When set to off, disallow server-caching. See the `-C' option.
convert_links = on/off
372 Convert non-relative links locally. The same as `-k'.
cut_dirs = N
Ignore N remote directory components.
debug = on/off
Debug mode, same as `-d'.
380 delete_after = on/off
381 Delete after download - the same as `--delete-after'.
dir_prefix = STRING
Top of directory tree - the same as `-P'.
dirstruct = on/off
Turning dirstruct on or off - the same as `-x' or `-nd',
respectively.
domains = STRING
Same as `-D' (*Note Domain Acceptance::).
dot_bytes = N
Specify the number of bytes "contained" in a dot, as seen
throughout the retrieval (1024 by default). You can postfix the
value with `k' or `m', representing kilobytes and megabytes,
respectively. With dot settings you can tailor the dot retrieval
to suit your needs, or you can use the predefined "styles" (*Note
Download Options::).
dots_in_line = N
Specify the number of dots that will be printed in each line
403 throughout the retrieval (50 by default).
dot_spacing = N
Specify the number of dots in a single cluster (10 by default).
dot_style = STRING
Specify the dot retrieval "style", as with `--dot-style'.
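For instance, a custom progress display with 2 kilobytes per dot,
16 dots per cluster and 64 dots per line--the values are merely
illustrative--would be configured as:

dot_bytes = 2k
dot_spacing = 16
dots_in_line = 64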
411 exclude_directories = STRING
412 Specify a comma-separated list of directories you wish to exclude
413 from download - the same as `-X' (*Note Directory-Based Limits::).
415 exclude_domains = STRING
416 Same as `--exclude-domains' (*Note Domain Acceptance::).
follow_ftp = on/off
Follow FTP links from HTML documents - the same as `-f'.
follow_tags = STRING
Only follow certain HTML tags when doing a recursive retrieval,
423 just like `--follow-tags'.
force_html = on/off
If set to on, force the input filename to be regarded as an HTML
427 document - the same as `-F'.
ftp_proxy = STRING
Use STRING as FTP proxy, instead of the one specified in
environment.
glob = on/off
Turn globbing on/off - the same as `-g'.
header = STRING
Define an additional header, like `--header'.
439 html_extension = on/off
440 Add a `.html' extension to `text/html' files without it, like `-E'.
http_proxy = STRING
Use STRING as HTTP proxy, instead of the one specified in
environment.
http_user = STRING
Set HTTP user to STRING.
452 ignore_length = on/off
When set to on, ignore `Content-Length' header; the same as
`--ignore-length'.
ignore_tags = STRING
Ignore certain HTML tags when doing a recursive retrieval, just
458 like `-G' / `--ignore-tags'.
460 include_directories = STRING
461 Specify a comma-separated list of directories you wish to follow
462 when downloading - the same as `-I'.
input = STRING
Read the URLs from STRING, like `-i'.
kill_longer = on/off
Consider data longer than specified in content-length header as
469 invalid (and retry getting it). The default behaviour is to save
470 as much data as there is, provided there is more than or equal to
471 the value in `Content-Length'.
logfile = STRING
Set logfile - the same as `-o'.
login = STRING
Your user name on the remote machine, for FTP. Defaults to
`anonymous'.
mirror = on/off
Turn mirroring on/off. The same as `-m'.
netrc = on/off
Turn reading netrc on or off.
no_parent = on/off
Disallow retrieving outside the directory hierarchy, like
491 `--no-parent' (*Note Directory-Based Limits::).
no_proxy = STRING
Use STRING as the comma-separated list of domains to avoid in
495 proxy loading, instead of the one specified in environment.
497 output_document = STRING
498 Set the output filename - the same as `-O'.
500 page_requisites = on/off
501 Download all ancillary documents necessary for a single HTML page
502 to display properly - the same as `-p'.
passive_ftp = on/off
Set passive FTP - the same as `--passive-ftp'.
passwd = STRING
Set your FTP password to STRING. Without this setting, the
509 password defaults to `username@hostname.domainname'.
proxy_user = STRING
Set proxy authentication user name to STRING, like `--proxy-user'.
514 proxy_passwd = STRING
515 Set proxy authentication password to STRING, like `--proxy-passwd'.
quiet = on/off
Quiet mode - the same as `-q'.
quota = QUOTA
Specify the download quota, which is useful to put in the global
`wgetrc'. When download quota is specified, Wget will stop
retrieving after the download sum has become greater than quota.
The quota can be specified in bytes (default), kbytes (`k'
appended) or mbytes (`m' appended). Thus `quota = 5m' will set
the quota to 5 mbytes. Note that the user's startup file overrides
the system wgetrc.
reclevel = N
Recursion level - the same as `-l'.
recursive = on/off
Recursive on/off - the same as `-r'.
535 relative_only = on/off
Follow only relative links - the same as `-L' (*Note Relative
Links::).
539 remove_listing = on/off
540 If set to on, remove FTP listings downloaded by Wget. Setting it
541 to off is the same as `-nr'.
543 retr_symlinks = on/off
544 When set to on, retrieve symbolic links as if they were plain
545 files; the same as `--retr-symlinks'.
robots = on/off
Use (or not) `/robots.txt' file (*Note Robots::). Be sure to know
549 what you are doing before changing the default (which is `on').
551 server_response = on/off
Choose whether or not to print the HTTP and FTP server responses -
the same as `-S'.
555 simple_host_check = on/off
556 Same as `-nh' (*Note Host Checking::).
timeout = N
Set timeout value - the same as `-T'.
564 timestamping = on/off
565 Turn timestamping on/off. The same as `-N' (*Note Time-Stamping::).
tries = N
Set number of retries per URL - the same as `-t'.
use_proxy = on/off
Turn proxy support on/off. The same as `-Y'.
verbose = on/off
Turn verbose on/off - the same as `-v'/`-nv'.
wait = N
Wait N seconds between retrievals - the same as `-w'.
waitretry = N
Wait up to N seconds between retries of failed retrievals only -
581 the same as `--waitretry'. Note that this is turned on by default
582 in the global `wgetrc'.
File: wget.info, Node: Sample Wgetrc, Prev: Wgetrc Commands, Up: Startup File

Sample Wgetrc
=============
590 This is the sample initialization file, as given in the distribution.
It is divided into two sections--one for global usage (suitable for
the global startup file), and one for local usage (suitable for
`$HOME/.wgetrc').
593 Be careful about the things you change.
595 Note that almost all the lines are commented out. For a command to
have any effect, you must remove the `#' character at the beginning of
its line.
600 ### Sample Wget initialization file .wgetrc
603 ## You can use this file to change the default behaviour of wget or to
604 ## avoid having to type many many command-line options. This file does
605 ## not contain a comprehensive list of commands -- look at the manual
606 ## to find out what you can put into this file.
608 ## Wget initialization file can reside in /usr/local/etc/wgetrc
609 ## (global, for all users) or $HOME/.wgetrc (for a single user).
611 ## To use the settings in this file, you will have to uncomment them,
612 ## as well as change them, in most cases, as the values on the
613 ## commented-out lines are the default values (e.g. "off").
617 ## Global settings (useful for setting up in /usr/local/etc/wgetrc).
618 ## Think well before you change them, since they may reduce wget's
619 ## functionality, and make it behave contrary to the documentation:
622 # You can set retrieve quota for beginners by specifying a value
623 # optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
624 # default quota is unlimited.
627 # You can lower (or raise) the default number of retries when
628 # downloading a file (default is 20).
631 # Lowering the maximum depth of the recursive retrieval is handy to
632 # prevent newbies from going too "deep" when they unwittingly start
633 # the recursive retrieval. The default is 5.
636 # Many sites are behind firewalls that do not allow initiation of
637 # connections from the outside. On these sites you have to use the
638 # `passive' feature of FTP. If you are behind such a firewall, you
639 # can turn this on to make Wget use passive FTP by default.
642 # The "wait" command below makes Wget wait between every connection.
643 # If, instead, you want Wget to wait only between retries of failed
644 # downloads, set waitretry to maximum number of seconds to wait (Wget
645 # will use "linear backoff", waiting 1 second after the first failure
646 # on a file, 2 seconds after the second failure, etc. up to this max).
651 ## Local settings (for a user to set in his $HOME/.wgetrc). It is
652 ## *highly* undesirable to put these settings in the global file, since
653 ## they are potentially dangerous to "normal" users.
655 ## Even when setting up your own ~/.wgetrc, you should know what you
656 ## are doing before doing so.
659 # Set this to on to use timestamping by default:
662 # It is a good idea to make Wget send your email address in a `From:'
663 # header with your request (so that server administrators can contact
664 # you in case of errors). Wget does *not* send `From:' by default.
665 #header = From: Your Name <username@site.domain>
667 # You can set up other headers, like Accept-Language. Accept-Language
668 # is *not* sent by default.
669 #header = Accept-Language: en
671 # You can set the default proxy for Wget to use. It will override the
672 # value in the environment.
673 #http_proxy = http://proxy.yoyodyne.com:18023/
675 # If you do not want to use proxy at all, set this to off.
678 # You can customize the retrieval outlook. Valid options are default,
679 # binary, mega and micro.
682 # Setting this to off makes Wget not download /robots.txt. Be sure to
# know *exactly* what /robots.txt is and how it is used before changing
# the default!
687 # It can be useful to make Wget wait between connections. Set this to
688 # the number of seconds you want Wget to wait.
# You can force creating directory structure, even if a single file is being
692 # retrieved, by setting this to on.
695 # You can turn on recursive retrieving by default (don't do this if
696 # you are not sure you know what it means) by setting this to on.
699 # To always back up file X as X.orig before converting its links (due
700 # to -k / --convert-links / convert_links = on having been specified),
701 # set this variable to on:
702 #backup_converted = off
# To have Wget follow FTP links from HTML files by default, set this
# to on:
File: wget.info, Node: Examples, Next: Various, Prev: Startup File, Up: Top

Examples
********
The examples are classified into three sections, for the sake of clarity.
715 The first section is a tutorial for beginners. The second section
716 explains some of the more complex program features. The third section
717 contains advice for mirror administrators, as well as even more complex
718 features (that some would call perverted).
* Menu:

* Simple Usage:: Simple, basic usage of the program.
723 * Advanced Usage:: Advanced techniques of usage.
724 * Guru Usage:: Mirroring and the hairy stuff.
File: wget.info, Node: Simple Usage, Next: Advanced Usage, Prev: Examples, Up: Examples

Simple Usage
============
732 * Say you want to download a URL. Just type:
734 wget http://fly.cc.fer.hr/
736 The response will be something like:
738 --13:30:45-- http://fly.cc.fer.hr:80/en/
740 Connecting to fly.cc.fer.hr:80... connected!
741 HTTP request sent, awaiting response... 200 OK
742 Length: 4,694 [text/html]
746 13:30:46 (23.75 KB/s) - `index.html' saved [4694/4694]
748 * But what will happen if the connection is slow, and the file is
749 lengthy? The connection will probably fail before the whole file
750 is retrieved, more than once. In this case, Wget will try getting
751 the file until it either gets the whole of it, or exceeds the
752 default number of retries (this being 20). It is easy to change
the number of tries to 45, to ensure that the whole file will arrive
safely:
756 wget --tries=45 http://fly.cc.fer.hr/jpg/flyweb.jpg
758 * Now let's leave Wget to work in the background, and write its
progress to log file `log'. It is tiring to type `--tries', so we
shall use `-t'.
762 wget -t 45 -o log http://fly.cc.fer.hr/jpg/flyweb.jpg &
764 The ampersand at the end of the line makes sure that Wget works in
765 the background. To unlimit the number of retries, use `-t inf'.
* The usage of FTP is as simple. Wget will take care of login and
password.
770 $ wget ftp://gnjilux.cc.fer.hr/welcome.msg
771 --10:08:47-- ftp://gnjilux.cc.fer.hr:21/welcome.msg
773 Connecting to gnjilux.cc.fer.hr:21... connected!
774 Logging in as anonymous ... Logged in!
775 ==> TYPE I ... done. ==> CWD not needed.
776 ==> PORT ... done. ==> RETR welcome.msg ... done.
777 Length: 1,340 (unauthoritative)
781 10:08:48 (1.28 MB/s) - `welcome.msg' saved [1340]
783 * If you specify a directory, Wget will retrieve the directory
784 listing, parse it and convert it to HTML. Try:
786 wget ftp://prep.ai.mit.edu/pub/gnu/
File: wget.info, Node: Advanced Usage, Next: Guru Usage, Prev: Simple Usage, Up: Examples

Advanced Usage
==============
* You would like to read the list of URLs from a file? Not a problem
with that:

wget -i file

If you specify `-' as file name, the URLs will be read from
standard input.
803 * Create a mirror image of GNU WWW site (with the same directory
804 structure the original has) with only one try per document, saving
805 the log of the activities to `gnulog':
807 wget -r -t1 http://www.gnu.ai.mit.edu/ -o gnulog
809 * Retrieve the first layer of yahoo links:
811 wget -r -l1 http://www.yahoo.com/
* Retrieve the index.html of `www.lycos.com', showing the original
server headers:
816 wget -S http://www.lycos.com/
818 * Save the server headers with the file:
819 wget -s http://www.lycos.com/
* Retrieve the first two levels of `wuarchive.wustl.edu', saving them
to `/tmp':
825 wget -P/tmp -l2 ftp://wuarchive.wustl.edu/
827 * You want to download all the GIFs from an HTTP directory. `wget
828 http://host/dir/*.gif' doesn't work, since HTTP retrieval does not
829 support globbing. In that case, use:
831 wget -r -l1 --no-parent -A.gif http://host/dir/
833 It is a bit of a kludge, but it works. `-r -l1' means to retrieve
834 recursively (*Note Recursive Retrieval::), with maximum depth of 1.
835 `--no-parent' means that references to the parent directory are
836 ignored (*Note Directory-Based Limits::), and `-A.gif' means to
837 download only the GIF files. `-A "*.gif"' would have worked too.
839 * Suppose you were in the middle of downloading, when Wget was
840 interrupted. Now you do not want to clobber the files already
841 present. It would be:
843 wget -nc -r http://www.gnu.ai.mit.edu/
845 * If you want to encode your own username and password to HTTP or
846 FTP, use the appropriate URL syntax (*Note URL Format::).
848 wget ftp://hniksic:mypassword@jagor.srce.hr/.emacs
850 * If you do not like the default retrieval visualization (1K dots
851 with 10 dots per cluster and 50 dots per line), you can customize
852 it through dot settings (*Note Wgetrc Commands::). For example,
many people like the "binary" style of retrieval, with 8K dots and
16-dot clusters and 48 dots per line:
856 wget --dot-style=binary ftp://prep.ai.mit.edu/pub/gnu/README
858 You can experiment with other styles, like:
860 wget --dot-style=mega ftp://ftp.xemacs.org/pub/xemacs/xemacs-20.4/xemacs-20.4.tar.gz
861 wget --dot-style=micro http://fly.cc.fer.hr/
863 To make these settings permanent, put them in your `.wgetrc', as
864 described before (*Note Sample Wgetrc::).
File: wget.info, Node: Guru Usage, Prev: Advanced Usage, Up: Examples

Guru Usage
==========
872 * If you wish Wget to keep a mirror of a page (or FTP
873 subdirectories), use `--mirror' (`-m'), which is the shorthand for
874 `-r -N'. You can put Wget in the crontab file asking it to
875 recheck a site each Sunday:
878 0 0 * * 0 wget --mirror ftp://ftp.xemacs.org/pub/xemacs/ -o /home/me/weeklog
880 * You may wish to do the same with someone's home page. But you do
not want to download all those images--you're only interested in
HTML.
884 wget --mirror -A.html http://www.w3.org/
886 * But what about mirroring the hosts networkologically close to you?
887 It seems so awfully slow because of all that DNS resolving. Just
888 use `-D' (*Note Domain Acceptance::).
890 wget -rN -Dsrce.hr http://www.srce.hr/
892 Now Wget will correctly find out that `regoc.srce.hr' is the same
893 as `www.srce.hr', but will not even take into consideration the
894 link to `www.mit.edu'.
896 * You have a presentation and would like the dumb absolute links to
be converted to relative? Use `-k':

wget -k -r URL
901 * You would like the output documents to go to standard output
902 instead of to files? OK, but Wget will automatically shut up
(turn on `--quiet') to prevent mixing of Wget output and the
retrieved documents.
906 wget -O - http://jagor.srce.hr/ http://www.srce.hr/
908 You can also combine the two options and make weird pipelines to
909 retrieve the documents from remote hotlists:
911 wget -O - http://cool.list.com/ | wget --force-html -i -
File: wget.info, Node: Various, Next: Appendices, Prev: Examples, Up: Top

Various
*******
919 This chapter contains all the stuff that could not fit anywhere else.
* Menu:

* Proxies:: Support for proxy servers.
924 * Distribution:: Getting the latest version.
925 * Mailing List:: Wget mailing list for announcements and discussion.
926 * Reporting Bugs:: How and where to report bugs.
927 * Portability:: The systems Wget works on.
928 * Signals:: Signal-handling performed by Wget.
File: wget.info, Node: Proxies, Next: Distribution, Prev: Various, Up: Various

Proxies
=======
936 "Proxies" are special-purpose HTTP servers designed to transfer data
937 from remote servers to local clients. One typical use of proxies is
938 lightening network load for users behind a slow connection. This is
939 achieved by channeling all HTTP and FTP requests through the proxy
which caches the transferred data. When a cached resource is requested
again, the proxy will return the data from its cache. Another use for
proxies is for companies that separate (for security reasons) their
internal networks from the rest of the Internet. In order to obtain
information from the Web, their users connect and retrieve remote data
using an authorized proxy.
947 Wget supports proxies for both HTTP and FTP retrievals. The
948 standard way to specify proxy location, which Wget recognizes, is using
949 the following environment variables:
`http_proxy'
This variable should contain the URL of the proxy for HTTP
connections.
`ftp_proxy'
This variable should contain the URL of the proxy for FTP
connections. It is quite common that HTTP_PROXY and FTP_PROXY are
set to the same URL.
`no_proxy'
This variable should contain a comma-separated list of domain
962 extensions proxy should *not* be used for. For instance, if the
963 value of `no_proxy' is `.mit.edu', proxy will not be used to
964 retrieve documents from MIT.
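For example, in a Bourne-compatible shell you might set--the proxy URL
here is the same illustrative one used in the sample `.wgetrc' shown
earlier:

http_proxy=http://proxy.yoyodyne.com:18023/
no_proxy=.mit.edu
export http_proxy no_proxy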
966 In addition to the environment variables, proxy location and settings
967 may be specified from within Wget itself.
`-Y on/off'
`--proxy=on/off'
This option may be used to turn the proxy support on or off. Proxy
support is on by default, provided that the appropriate environment
variables are set.
`http_proxy = STRING'
`ftp_proxy = STRING'
`no_proxy = STRING'
These startup file variables allow you to override the proxy
980 settings specified by the environment.
982 Some proxy servers require authorization to enable you to use them.
983 The authorization consists of "username" and "password", which must be
984 sent by Wget. As with HTTP authorization, several authentication
985 schemes exist. For proxy authorization only the `Basic' authentication
986 scheme is currently implemented.
988 You may specify your username and password either through the proxy
URL or through the command-line options. Assuming that the company's
proxy is located at `proxy.company.com' at port 8001, a proxy URL
containing authorization data might look like this:
993 http://hniksic:mypassword@proxy.company.com:8001/
995 Alternatively, you may use the `proxy-user' and `proxy-password'
996 options, and the equivalent `.wgetrc' settings `proxy_user' and
997 `proxy_passwd' to set the proxy username and password.
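For example, with the user name, password and host from the earlier
examples:

wget --proxy-user=hniksic --proxy-passwd=mypassword http://jagor.srce.hr/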
File: wget.info, Node: Distribution, Next: Mailing List, Prev: Proxies, Up: Various

Distribution
============
1005 Like all GNU utilities, the latest version of Wget can be found at
1006 the master GNU archive site prep.ai.mit.edu, and its mirrors. For
1007 example, Wget 1.5.3+dev can be found at
1008 `ftp://prep.ai.mit.edu/gnu/wget/wget-1.5.3+dev.tar.gz'
File: wget.info, Node: Mailing List, Next: Reporting Bugs, Prev: Distribution, Up: Various

Mailing List
============
1016 Wget has its own mailing list at <wget@sunsite.auc.dk>, thanks to
Karsten Thygesen. The mailing list is for discussion of Wget features
and the web, for reporting Wget bugs (those that you think may be of
interest to the public), and for announcements. You are welcome to
subscribe. The more people on the list, the better!
To subscribe, send mail to <wget-subscribe@sunsite.auc.dk> with the
magic word `subscribe' in the subject line. Unsubscribe by mailing to
1024 <wget-unsubscribe@sunsite.auc.dk>.
1026 The mailing list is archived at `http://fly.cc.fer.hr/archive/wget'.
File: wget.info, Node: Reporting Bugs, Next: Portability, Prev: Mailing List, Up: Various

Reporting Bugs
==============
1034 You are welcome to send bug reports about GNU Wget to
<bug-wget@gnu.org>. The bugs that you think are of interest to the
1036 public (i.e. more people should be informed about them) can be Cc-ed to
1037 the mailing list at <wget@sunsite.auc.dk>.
Before actually submitting a bug report, please try to follow a few
simple guidelines.
1042 1. Please try to ascertain that the behaviour you see really is a
1043 bug. If Wget crashes, it's a bug. If Wget does not behave as
documented, it's a bug. If things work strangely, but you are not
sure about the way they are supposed to work, it might well be a
bug.
1048 2. Try to repeat the bug in as simple circumstances as possible.
1049 E.g. if Wget crashes on `wget -rLl0 -t5 -Y0 http://yoyodyne.com -o
/tmp/log', you should try to see if it will crash with a simpler
set of options.
1053 Also, while I will probably be interested to know the contents of
1054 your `.wgetrc' file, just dumping it into the debug message is
1055 probably a bad idea. Instead, you should first try to see if the
1056 bug repeats with `.wgetrc' moved out of the way. Only if it turns
1057 out that `.wgetrc' settings affect the bug, should you mail me the
1058 relevant parts of the file.
1060 3. Please start Wget with `-d' option and send the log (or the
1061 relevant parts of it). If Wget was compiled without debug support,
recompile it. It is *much* easier to trace bugs with debug support
on.
1065 4. If Wget has crashed, try to run it in a debugger, e.g. `gdb `which
1066 wget` core' and type `where' to get the backtrace.
1068 5. Find where the bug is, fix it and send me the patches. :-)
File: wget.info, Node: Portability, Next: Signals, Prev: Reporting Bugs, Up: Various

Portability
===========
1076 Since Wget uses GNU Autoconf for building and configuring, and avoids
1077 using "special" ultra-mega-cool features of any particular Unix, it
1078 should compile (and work) on all common Unix flavors.
1080 Various Wget versions have been compiled and tested under many kinds
1081 of Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital
1082 Unix), Ultrix, *BSD, IRIX, and others; refer to the file `MACHINES' in
1083 the distribution directory for a comprehensive list. If you compile it
on an architecture not listed there, please let me know so I can
update it.
1087 Wget should also compile on the other Unix systems, not listed in
1088 `MACHINES'. If it doesn't, please let me know.
1090 Thanks to kind contributors, this version of Wget compiles and works
1091 on Microsoft Windows 95 and Windows NT platforms. It has been compiled
1092 successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
with Winsock as networking software. Naturally, it lacks some of the
features available on Unix, but it should work as a substitute for
1095 people stuck with Windows. Note that the Windows port is *neither
1096 tested nor maintained* by me--all questions and problems should be
reported to the Wget mailing list at <wget@sunsite.auc.dk> where the
1098 maintainers will look at them.
File: wget.info, Node: Signals, Prev: Portability, Up: Various

Signals
=======
1106 Since the purpose of Wget is background work, it catches the hangup
1107 signal (`SIGHUP') and ignores it. If the output was on standard
1108 output, it will be redirected to a file named `wget-log'. Otherwise,
1109 `SIGHUP' is ignored. This is convenient when you wish to redirect the
1110 output of Wget after having started it.
1112 $ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &
1113 $ kill -HUP %% # Redirect the output to wget-log
1115 Other than that, Wget will not try to interfere with signals in any
1116 way. `C-c', `kill -TERM' and `kill -KILL' should kill it alike.
File: wget.info, Node: Appendices, Next: Copying, Prev: Various, Up: Top

Appendices
**********
1124 This chapter contains some references I consider useful, like the
1125 Robots Exclusion Standard specification, as well as a list of
1126 contributors to GNU Wget.
* Menu:

* Robots:: Wget as a WWW robot.
1131 * Security Considerations:: Security with Wget.
1132 * Contributors:: People who helped.
File: wget.info, Node: Robots, Next: Security Considerations, Prev: Appendices, Up: Appendices

Robots
======
1140 Since Wget is able to traverse the web, it counts as one of the Web
1141 "robots". Thus Wget understands "Robots Exclusion Standard"
1142 (RES)--contents of `/robots.txt', used by server administrators to
1143 shield parts of their systems from wanderings of Wget.
1145 Norobots support is turned on only when retrieving recursively, and
1146 *never* for the first page. Thus, you may issue:
1148 wget -r http://fly.cc.fer.hr/
1150 First the index of fly.cc.fer.hr will be downloaded. If Wget finds
1151 anything worth downloading on the same host, only *then* will it load
1152 the robots, and decide whether or not to load the links after all.
`/robots.txt' is loaded only once per host. Wget does not support the
robots `META' tag.
1156 The description of the norobots standard was written, and is
1157 maintained by Martijn Koster <m.koster@webcrawler.com>. With his
permission, I contribute a (slightly modified) TeXified version of the
RES.
* Menu:

* Introduction to RES::
* RES Format::
* User-Agent Field::
* Disallow Field::
* Norobots Examples::
File: wget.info, Node: Introduction to RES, Next: RES Format, Prev: Robots, Up: Robots

Introduction to RES
===================
1175 "WWW Robots" (also called "wanderers" or "spiders") are programs
1176 that traverse many pages in the World Wide Web by recursively
1177 retrieving linked pages. For more information see the robots page.
1179 In 1993 and 1994 there have been occasions where robots have visited
1180 WWW servers where they weren't welcome for various reasons. Sometimes
1181 these reasons were robot specific, e.g. certain robots swamped servers
1182 with rapid-fire requests, or retrieved the same files repeatedly. In
1183 other situations robots traversed parts of WWW servers that weren't
1184 suitable, e.g. very deep virtual trees, duplicated information,
temporary information, or cgi-scripts with side-effects (such as
voting).
1188 These incidents indicated the need for established mechanisms for
1189 WWW servers to indicate to robots which parts of their server should
not be accessed. This standard addresses this need with an operational
solution.
1193 This document represents a consensus on 30 June 1994 on the robots
1194 mailing list (`robots@webcrawler.com'), between the majority of robot
1195 authors and other people with an interest in robots. It has also been
1196 open for discussion on the Technical World Wide Web mailing list
1197 (`www-talk@info.cern.ch'). This document is based on a previous working
1198 draft under the same title.
1200 It is not an official standard backed by a standards body, or owned
by any commercial organization. It is not enforced by anybody, and
there is no guarantee that all current and future robots will use it.
Consider it a common facility the majority of robot authors offer the
WWW community to protect WWW servers against unwanted accesses by
their robots.
1207 The latest version of this document can be found at
1208 `http://info.webcrawler.com/mak/projects/robots/norobots.html'.
File: wget.info, Node: RES Format, Next: User-Agent Field, Prev: Introduction to RES, Up: Robots

RES Format
==========
1216 The format and semantics of the `/robots.txt' file are as follows:
1218 The file consists of one or more records separated by one or more
1219 blank lines (terminated by `CR', `CR/NL', or `NL'). Each record
1220 contains lines of the form:
1222 <field>:<optionalspace><value><optionalspace>
1224 The field name is case insensitive.
Comments can be included in the file using UNIX Bourne shell
conventions:
1227 the `#' character is used to indicate that preceding space (if any) and
1228 the remainder of the line up to the line termination is discarded.
1229 Lines containing only a comment are discarded completely, and therefore
1230 do not indicate a record boundary.
1232 The record starts with one or more User-agent lines, followed by one
or more Disallow lines, as detailed below. Unrecognized headers are
ignored.
1236 The presence of an empty `/robots.txt' file has no explicit
1237 associated semantics, it will be treated as if it was not present, i.e.
1238 all robots will consider themselves welcome.
File: wget.info, Node: User-Agent Field, Next: Disallow Field, Prev: RES Format, Up: Robots

User-Agent Field
================
1246 The value of this field is the name of the robot the record is
1247 describing access policy for.
1249 If more than one User-agent field is present the record describes an
1250 identical access policy for more than one robot. At least one field
1251 needs to be present per record.
1253 The robot should be liberal in interpreting this field. A case
insensitive substring match of the name without version information is
recommended.
1257 If the value is `*', the record describes the default access policy
1258 for any robot that has not matched any of the other records. It is not
1259 allowed to have multiple such records in the `/robots.txt' file.
File: wget.info, Node: Disallow Field, Next: Norobots Examples, Prev: User-Agent Field, Up: Robots

Disallow Field
==============
1267 The value of this field specifies a partial URL that is not to be
1268 visited. This can be a full path, or a partial path; any URL that
1269 starts with this value will not be retrieved. For example,
1270 `Disallow: /help' disallows both `/help.html' and `/help/index.html',
whereas `Disallow: /help/' would disallow `/help/index.html' but allow
`/help.html'.
An empty value indicates that all URLs can be retrieved. At least
1275 one Disallow field needs to be present in a record.
File: wget.info, Node: Norobots Examples, Prev: Disallow Field, Up: Robots

Norobots Examples
=================
1283 The following example `/robots.txt' file specifies that no robots
1284 should visit any URL starting with `/cyberworld/map/' or `/tmp/':
# robots.txt for http://www.site.com/

User-agent: *
1289 Disallow: /cyberworld/map/ # This is an infinite virtual URL space
1290 Disallow: /tmp/ # these will soon disappear
1292 This example `/robots.txt' file specifies that no robots should
visit any URL starting with `/cyberworld/map/', except the robot called
`cybermapper':
# robots.txt for http://www.site.com/

User-agent: *
1299 Disallow: /cyberworld/map/ # This is an infinite virtual URL space
1301 # Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:

# go away
User-agent: *
Disallow: /
1312 File: wget.info, Node: Security Considerations, Next: Contributors, Prev: Robots, Up: Appendices
1314 Security Considerations
1315 =======================
1317 When using Wget, you must be aware that it sends unencrypted
1318 passwords through the network, which may present a security problem.
1319 Here are the main issues, and some solutions.
1321 1. The passwords on the command line are visible using `ps'. If this
is a problem, avoid putting passwords on the command line--e.g. you
can use `.netrc' for this (see the example at the end of this list).
1325 2. Using the insecure "basic" authentication scheme, unencrypted
1326 passwords are transmitted through the network routers and gateways.
1328 3. The FTP passwords are also in no way encrypted. There is no good
1329 solution for this at the moment.
1331 4. Although the "normal" output of Wget tries to hide the passwords,
1332 debugging logs show them, in all forms. This problem is avoided by
being careful when you send debug logs (yes, even when you send them
to me).
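As an illustration of point 1, a `.netrc' entry might look like this;
the host, user name and password are the same illustrative ones used
in the earlier examples:

machine jagor.srce.hr
login hniksic
password mypassword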