may tend towards the top). Not all of these represent user-visible
changes.
-* It would be nice to have a simple man page for wget that refers you to the
- .info files for more information. It could be as simple as the output of wget
- --help plus some boilerplate. This should stop wget re-packagers like RedHat
- who include the out-of-date 1.4.5 man page in order to have one. Perhaps we
- can automatically generate a man page from the .texi file like gcc does?
+* -p should probably go "_two_ more hops" on <FRAMESET> pages.
+
+* Only normal link-following recursion should respect -np. Page-requisite
+ recursion should not. When -np -p is specified, Wget should still retrieve
+ requisite images and such on the server, even if they aren't in that directory
+ or a subdirectory of it. Likewise, -H -np -p should retrieve requisite files
+ from other hosts.
+
+* Add a --range parameter allowing you to explicitly specify a range of bytes to
+ get from a file over HTTP (FTP only supports ranges ending at the end of the
+ file, though forcibly disconnecting from the server at the desired endpoint
+ might be workable).
+
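  The HTTP side of this is a single Range request header.  A minimal
  sketch of what a hypothetical --range=FIRST-LAST option might send (the
  option name and helper are illustrative, not existing Wget code):

    /* Illustrative only: format the Range header a hypothetical
       --range=FIRST-LAST option would add to the HTTP request.  */
    #include <stdio.h>

    static void
    print_range_header (long first, long last)
    {
      printf ("Range: bytes=%ld-%ld\r\n", first, last);
    }

    int
    main (void)
    {
      print_range_header (500, 999);   /* ask for bytes 500 through 999 */
      return 0;
    }
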
+* RFC 1738 says that if logging on to an FTP server puts you in a directory
+ other than '/', the way to specify a file relative to '/' in a URL (let's use
+ "/bin/ls" in this example) is "ftp://host/%2Fbin/ls". Wget needs to support
+ this (and ideally not consider "ftp://host//bin/ls" to be equivalent, as that
+ would equate to the command "CWD " rather than "CWD /"). To accommodate people
+ used to broken FTP clients like Internet Explorer and Netscape, if
+ "ftp://host/bin/ls" doesn't exist, Wget should try again (perhaps under
+ control of an option), acting as if the user had typed "ftp://host/%2Fbin/ls".
+
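  As a rough sketch of the decoding involved (a standalone illustration,
  not Wget's actual URL code), the "%2F" in a path segment has to be
  turned back into "/" before the segment is handed to CWD:

    /* Sketch only: percent-decode one FTP path segment, so that the
       segment "%2Fbin" is sent as "CWD /bin" rather than "CWD bin".
       Real code would validate the escapes more carefully.  */
    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void
    decode_segment (const char *in, char *out)
    {
      while (*in)
        {
          if (in[0] == '%' && isxdigit ((unsigned char) in[1])
              && isxdigit ((unsigned char) in[2]))
            {
              char hex[3] = { in[1], in[2], '\0' };
              *out++ = (char) strtol (hex, NULL, 16);
              in += 3;
            }
          else
            *out++ = *in++;
        }
      *out = '\0';
    }

    int
    main (void)
    {
      char cwd_arg[64];
      decode_segment ("%2Fbin", cwd_arg);
      printf ("CWD %s\n", cwd_arg);   /* prints "CWD /bin" */
      return 0;
    }
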
+* If multiple FTP URLs are specified that are on the same host, Wget should
+ re-use the connection rather than opening a new one for each file.
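
  A sketch of the bookkeeping this needs (the structure and names are
  hypothetical, and a real implementation would also have to notice when
  the cached connection has died):

    /* Sketch: remember the last control connection and reuse it when the
       next URL points at the same host/port/user.  */
    #include <string.h>

    struct ftp_session
    {
      char host[256];
      int port;
      char user[64];
      int ctrl_fd;                  /* open control socket, or -1 */
    };

    static struct ftp_session cached = { "", 0, "", -1 };

    /* Return the cached control socket if it matches, else -1 so the
       caller knows to open (and then cache) a fresh connection.  */
    static int
    reusable_connection (const char *host, int port, const char *user)
    {
      return (cached.ctrl_fd != -1
              && cached.port == port
              && strcmp (cached.host, host) == 0
              && strcmp (cached.user, user) == 0) ? cached.ctrl_fd : -1;
    }
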
* Try to devise a scheme so that, when password is unknown, Wget asks
the user for one.
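  On POSIX systems the simplest form of this is a prompt that does not
  echo; a sketch using the obsolescent but widely available getpass()
  (a real implementation would probably use termios directly):

    /* Sketch only: read a password without echoing it.  */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main (void)
    {
      char *pass = getpass ("Password: ");
      if (pass)
        printf ("read %lu characters\n", (unsigned long) strlen (pass));
      return 0;
    }
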
* If -c is used on a file that's already completely downloaded, don't re-download
it (unless normal --timestamping processing would cause you to do so).
+* If -c is used with -N, check to make sure the file hasn't changed on the server
+ before "continuing" to download it (preventing a bogus hybrid file).
+
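  One workable check, sketched here under the assumption that the server
  supplies a usable timestamp (Last-Modified over HTTP, MDTM over FTP) and
  with a hypothetical helper name: only append to the partial file if the
  remote copy is not newer than it.

    /* Sketch: decide whether it is safe to "continue" (-c) a partial
       download when -N is also in effect.  remote_mtime would come from
       the server; appending to a file the server has since changed would
       produce the bogus hybrid described above.  */
    #include <sys/stat.h>
    #include <time.h>

    static int
    safe_to_continue (const char *local_file, time_t remote_mtime)
    {
      struct stat st;
      if (stat (local_file, &st) != 0)
        return 0;                       /* nothing to continue from */
      return remote_mtime <= st.st_mtime;
    }
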
* Take a look at
<http://info.webcrawler.com/mak/projects/robots/norobots-rfc.html>
and support the new directives.
* Make wget return non-zero status in more situations, like incorrect HTTP auth.
-* Timestamps are sometimes not copied over on files retrieved by FTP.
-
* Make -K compare X.orig to X and move the former on top of the latter if
  they're the same, rather than leaving identical .orig files lying around.
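  A byte-for-byte comparison is enough for that; a rough standalone
  sketch (not Wget's actual backup code):

    /* Sketch: return 1 if the two files have identical contents, so the
       caller can rename X.orig over X instead of keeping both copies.  */
    #include <stdio.h>

    static int
    files_identical (const char *a, const char *b)
    {
      FILE *fa = fopen (a, "rb"), *fb = fopen (b, "rb");
      int ca = EOF, cb = EOF, same = 0;

      if (fa && fb)
        {
          do
            {
              ca = getc (fa);
              cb = getc (fb);
            }
          while (ca == cb && ca != EOF);
          same = (ca == cb);            /* both hit EOF together */
        }
      if (fa) fclose (fa);
      if (fb) fclose (fb);
      return same;
    }
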
* Rewrite FTP code to allow for easy addition of new commands. It
should probably be coded as a simple DFA engine.
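  One way to picture the DFA approach (the states, reply classes, and
  transitions below are invented for illustration): drive the session from
  a table keyed on the current state and the first digit of the server
  reply.

    /* Illustration only: a table-driven FTP session skeleton.  A real
       engine would cover the full command set and error classes.  */
    #include <stddef.h>

    enum state { S_LOGIN, S_CWD, S_RETR, S_DONE, S_FAIL };

    struct transition
    {
      enum state from;          /* current state                   */
      int reply_class;          /* first digit of the server reply */
      enum state to;            /* next state                      */
    };

    static const struct transition table[] = {
      { S_LOGIN, 3, S_LOGIN },  /* 331: need PASS, stay and send it */
      { S_LOGIN, 2, S_CWD   },  /* 230: logged in                   */
      { S_CWD,   2, S_RETR  },  /* 250: directory changed           */
      { S_RETR,  2, S_DONE  },  /* 226: transfer complete           */
    };

    static enum state
    next_state (enum state from, int reply_class)
    {
      size_t i;
      for (i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].from == from && table[i].reply_class == reply_class)
          return table[i].to;
      return S_FAIL;            /* anything unexpected aborts */
    }
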
-* Recognize more FTP servers (VMS).
-
* Make HTTP timestamping use If-Modified-Since facility.
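  The request side of that is one conditional header built from the local
  file's mtime; a sketch of just the formatting (a 304 reply would then
  mean the local copy is current):

    /* Sketch: format the local mtime as an HTTP-date for an
       If-Modified-Since request header.  */
    #include <stdio.h>
    #include <time.h>

    static void
    print_ims_header (time_t local_mtime)
    {
      char date[64];
      strftime (date, sizeof date, "%a, %d %b %Y %H:%M:%S GMT",
                gmtime (&local_mtime));
      printf ("If-Modified-Since: %s\r\n", date);
    }

    int
    main (void)
    {
      print_ims_header (time (NULL));
      return 0;
    }
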
* Implement better spider options.
* Implement correct RFC 1808 URL parsing.
-* Implement HTTP cookies.
-
* Implement more HTTP/1.1 bells and whistles (ETag, Content-MD5 etc.)
* Add a "rollback" option to have --continue throw away a configurable number of
bytes at the end of a file before resuming download. Apparently, some stupid
proxies insert a "transfer interrupted" string we need to get rid of.
+
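  Mechanically the rollback is just truncating the local file before the
  resumed request goes out; a sketch using POSIX truncate(), with the byte
  count standing in for whatever the new option would be called:

    /* Sketch: throw away the last `rollback' bytes of a partially
       downloaded file, so any proxy-injected junk at the tail is
       re-fetched cleanly when the download resumes.  */
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int
    rollback_tail (const char *file, off_t rollback)
    {
      struct stat st;
      if (stat (file, &st) != 0)
        return -1;
      if (st.st_size <= rollback)
        return truncate (file, 0);      /* nothing worth keeping */
      return truncate (file, st.st_size - rollback);
    }
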
+* When using --accept and --reject, you can end up with empty directories. Have
+ Wget remove any such directories at the end.
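
  Since rmdir() refuses to remove a non-empty directory, the cleanup can
  simply attempt removal (bottom-up for nested directories) and ignore the
  failure when something was kept; a sketch for a single directory
  (hypothetical helper, not Wget code):

    /* Sketch: try to remove a directory that may be empty after
       --accept/--reject filtering.  rmdir() fails harmlessly when the
       directory still has contents.  */
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    static void
    remove_if_empty (const char *dir)
    {
      if (rmdir (dir) == 0)
        printf ("removed empty directory %s\n", dir);
      else if (errno != ENOTEMPTY && errno != EEXIST)
        perror (dir);
    }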