This is the to-do list for GNU Wget.  There is no timetable of when we
plan to implement these features -- this is just a list of features
we'd like to see in Wget, as well as a list of problems that need
fixing.  Patches to implement these items are likely to be accepted,
especially if they follow the coding convention outlined in PATCHES
and if they patch the documentation as well.

The items are not listed in any particular order (except that
recently-added items may tend towards the top).  Not all of these
represent user-visible changes.

* Change the file name generation logic so that redirects can't dictate
  file names (but redirects should still be followed).  By default, file
  names should be generated only from the URL the user provided.  However,
  with an appropriate flag, Wget will allow the remote server to specify
  the file name, either through redirection (as is always the case now)
  or via the increasingly popular header `Content-Disposition: XXX;
  filename=FILE'.
  The file name should be generated and displayed *after* processing
  the server's response, not before, as is done now.  This will allow
  a trivial implementation of -nc and of O_EXCL when opening the file;
  --html-extension will stop being a horrible hack; and so on.

* -O should be respected, with no exceptions.  It should work in
  conjunction with -N and -k.  (This is hard to achieve in the current
  code base.)  Ancillary files, such as directory listings, should be
  downloaded either directly to memory, or to /tmp.

* Implement digest and NTLM authorization for proxies.  This is harder
  than it seems because it requires some rethinking of the HTTP code.

* Rethink the interaction between recur.c (the recursive download code)
  and HTTP/FTP code.  Ideally, the downloading code should have a way
  to retrieve a file and, optionally, to specify a list of URLs for
  continuing the "recursive" download.  FTP code will surely benefit
  from such a restructuring because its current incarnation is way too
  smart for its own good.

* Both HTTP and FTP connections should be first-class objects that can
  be reused after a download is done.  Currently information about both
  is kept implicitly on the stack, and forgotten after each download.

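To make the idea concrete, a reusable connection object might look something like the sketch below; all names are illustrative, not taken from the current sources:

```c
#include <stdbool.h>

enum conn_proto { CONN_HTTP, CONN_FTP };

/* Hypothetical first-class connection object.  Instead of living
   implicitly on the stack, one of these would persist between
   downloads so that the open socket can be reused. */
struct connection
{
  int fd;                   /* open socket, or -1 if not connected */
  char *host;               /* remote host name */
  int port;                 /* remote port */
  enum conn_proto proto;    /* which protocol the connection speaks */
  bool persistent;          /* may the connection be reused? */
};
```

Both the HTTP keep-alive logic and the FTP control connection would then hang off the same kind of object, instead of being forgotten after each retrieval.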
* Restructure the FTP code to remove massive amounts of code duplication
  and repetition.  Remove all the "intelligence" and make it work as
  outlined in the previous bullet.

* Add support for SFTP.  Teach Wget about newer features of more
  recent FTP servers in general, such as receiving reliable checksums
  and timestamps.  This can be used to implement really robust
  mirroring.

* Wget shouldn't delete rejected files that were not downloaded, but
  just found on disk because of `-nc'.  For example, `wget -r -nc
  -A.gif URL' should allow the user to get all the GIFs without
  removing any of the existing HTML files.

* Be careful not to lose username/password information given for the
  URL on the command line.  For example,
  `wget -r http://username:password@server/path/' should send that
  username and password to all content under /path/ (this is apparently
  what browsers do).

* Don't send credentials using "Basic" authorization before the server
  has a chance to tell us that it supports Digest or NTLM!

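The decision this item asks for could be sketched as follows, assuming a helper that inspects the server's WWW-Authenticate challenge (the function name is hypothetical, and a real implementation would parse the challenge rather than substring-match it):

```c
#include <string.h>
#include <stdbool.h>

/* Sketch: decide whether it is safe to send "Basic" credentials.
   Only do so after the server's WWW-Authenticate challenge names
   Basic; if it offers Digest or NTLM, prefer those instead. */
static bool
basic_auth_ok (const char *www_authenticate)
{
  if (!www_authenticate)
    return false;               /* no challenge yet: don't volunteer */
  if (strstr (www_authenticate, "Digest")
      || strstr (www_authenticate, "NTLM"))
    return false;               /* a stronger scheme is available */
  return strstr (www_authenticate, "Basic") != NULL;
}
```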
* Add a --range parameter allowing you to explicitly specify a range
  of bytes to get from a file over HTTP (FTP only supports ranges
  ending at the end of the file, though forcibly disconnecting from
  the server at the desired endpoint would work).  For example,
  --range=n-m would specify an inclusive range (a la the Range header),
  and --range=n:m would specify an exclusive range (a la Python's
  slices).  -c should work with --range by assuming the range is
  partially downloaded on disk, and continuing from there (effectively
  requesting a smaller range).

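The proposed syntax could be parsed along these lines; this is a sketch under the assumptions above (inclusive `n-m`, exclusive `n:m`), normalizing both forms to a half-open [start, end) byte range:

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical parser for the proposed --range option.  On success,
   stores the range as [start, end) byte offsets and returns true. */
static bool
parse_range (const char *spec, long *start, long *end)
{
  char sep;
  if (sscanf (spec, "%ld%c%ld", start, &sep, end) != 3)
    return false;
  if (sep == '-')
    *end += 1;                  /* inclusive "n-m" -> half-open */
  else if (sep != ':')
    return false;               /* only '-' and ':' are accepted */
  return *start >= 0 && *end > *start;
}
```

With the range normalized this way, -c only has to bump *start by the number of bytes already on disk before issuing the Range request.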
* If multiple FTP URLs are specified that are on the same host, Wget should
  re-use the connection rather than opening a new one for each file.
  This should be easy once the FTP code has been restructured as outlined
  above, with the FTP connection becoming a first-class object.

* Try to devise a scheme so that, when a password is unknown, Wget asks
  the user for one.  This is harder than it seems because the password
  may be requested by some page encountered long after the user has
  left Wget to run unattended.

* If -c is used with -N, check to make sure a file hasn't changed on the
  server before "continuing" to download it (preventing a bogus hybrid
  file).

* Generalize --html-extension to something like --mime-extensions and
  have it consult mime.types for the preferred extension.  Non-HTML files
  with filenames changed this way would be re-downloaded each time
  despite -N unless .orig files were saved for them.  (#### Why?  The
  HEAD request we use to implement -N would still be able to construct
  the correct file name based on the declared Content-Type.)
  Since .orig would contain the same data as non-.orig, the latter
  could be just a link to the former.  Another possibility would be to
  implement a per-directory database called something like
  .wget_url_mapping containing URLs and their corresponding filenames.

* When spanning hosts, there's no way to say that you are only
  interested in files in a certain directory on _one_ of the hosts (-I
  and -X apply to all).  Perhaps -I and -X should take an optional
  "hostname:" before the directory?

* --retr-symlinks should cause wget to traverse links to directories too.

* Make wget return non-zero status in more situations, such as incorrect
  HTTP auth.  Create and document different exit statuses for different
  errors.

* Make -K compare X.orig to X and move the former on top of the latter if
  they're the same, rather than leaving identical .orig files lying around.

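The comparison half of this item is simple to sketch; assuming a helper like the one below, -K could then call rename("X.orig", "X") whenever it returns true:

```c
#include <stdio.h>
#include <stdbool.h>

/* Sketch: return true if the two files have identical contents.
   Reads both byte by byte; stdio buffering keeps this reasonable. */
static bool
files_identical (const char *a, const char *b)
{
  FILE *fa = fopen (a, "rb"), *fb = fopen (b, "rb");
  bool same = false;
  if (fa && fb)
    {
      int ca, cb;
      do
        {
          ca = getc (fa);
          cb = getc (fb);
        }
      while (ca == cb && ca != EOF);
      /* Equal only if both streams ended at the same time. */
      same = (ca == cb);
    }
  if (fa) fclose (fa);
  if (fb) fclose (fb);
  return same;
}
```

Comparing sizes first (via stat) would let most non-identical pairs be rejected without reading either file.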
* Make `-k' check for files that were downloaded in the past and convert
  links to them in newly-downloaded documents.

* Devise a way for options to have effect on a per-URL basis.  This is very
  natural for some options, such as --post-data.  It could be implemented
  simply by having more than one struct options.

* Add option to clobber existing file names (no `.N' suffixes).

* Add option to only list wildcard matches without doing the download.
  The same could be generalized to support something like apt's
  --print-uri.

* Handle MIME types correctly.  There should be an option to (not)
  retrieve files based on MIME types, e.g. `--accept-types=image/*'.
  This would work for FTP by translating file extensions to MIME types.

* Allow time-stamping by arbitrary date.  For example,
  `wget --if-modified-after DATE URL'.

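Whatever the option ends up being called, DATE would presumably be sent as an HTTP-date in a conditional request header; formatting a Unix timestamp that way is a one-liner:

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Sketch: format a Unix timestamp as an RFC 1123 HTTP-date, the
   form used by the If-Modified-Since request header. */
static void
http_date (time_t t, char *buf, size_t size)
{
  struct tm *gmt = gmtime (&t);
  strftime (buf, size, "%a, %d %b %Y %H:%M:%S GMT", gmt);
}
```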
* Make quota apply to single files, preferably so that the download of an
  oversized file is not attempted at all.

* When updating an existing mirror, download to temporary files (such as
  .in*) and rename the file after the download is done.

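The usual pattern for this is mkstemp() plus rename(); the sketch below stands in for the real download loop with a placeholder write, and the ".in-XXXXXX" naming is illustrative:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch: download into a temporary file in the working directory,
   then atomically rename it over the final name once the download
   has completed successfully. */
static int
finish_download (const char *final_name)
{
  char tmpl[] = ".in-XXXXXX";
  int fd = mkstemp (tmpl);      /* creates and opens a unique file */
  if (fd < 0)
    return -1;
  /* ... the downloaded data would be written to fd here ... */
  if (write (fd, "data", 4) != 4        /* placeholder payload */
      || close (fd) != 0
      || rename (tmpl, final_name) != 0)
    return -1;
  return 0;
}
```

Because rename() is atomic on the same filesystem, a reader of the mirror never sees a half-written file.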
* Add an option to delete or move no-longer-existent files when mirroring.

* Implement uploading (--upload=FILE URL?) in FTP and HTTP.  A beginning of
  this is available in the form of --post-file, but it should be expanded
  to support FTP uploads and HTTP PUT as well.

* Make HTTP timestamping use the If-Modified-Since facility.

* Add more protocols (such as news or possibly some of the streaming
  protocols), implementing them in a modular fashion.

* Add a "rollback" option to have continued retrieval throw away a
  configurable number of bytes at the end of a file before resuming
  download.  Apparently, some stupid proxies insert a "transfer
  interrupted" string we need to get rid of.