--- /dev/null
+Authors of GNU Wget.
+
+[ Note that this file does not attempt to list all the contributors to
+ Wget; look at the ChangeLog for that. This is a list of people who
+ contributed sizeable amounts of code and assigned the copyright to the
+ FSF. ]
+
+Hrvoje Niksic. Designed and implemented Wget.
+
+Gordon Matzigkeit. Wrote netrc.c and netrc.h.
+
+Darko Budor. Added Windows support, wrote wsstartup.c, wsstartup.h
+and windecl.h.
+
+Junio Hamano. Added support for FTP Opie and HTTP digest
+authentication.
--- /dev/null
+ GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+ 675 Mass Ave, Cambridge, MA 02139, USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+\f
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License. The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language. (Hereinafter, translation is included without limitation in
+the term "modification".) Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any
+ part thereof, to be licensed as a whole at no charge to all third
+ parties under the terms of this License.
+
+ c) If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display an
+ announcement including an appropriate copyright notice and a
+ notice that there is no warranty (or else, saying that you provide
+ a warranty) and that users may redistribute the program under
+ these conditions, and telling the user how to view a copy of this
+ License. (Exception: if the Program itself is interactive but
+ does not normally print such an announcement, your work based on
+ the Program is not required to print an announcement.)
+\f
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+ a) Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of Sections
+ 1 and 2 above on a medium customarily used for software interchange; or,
+
+ b) Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a medium
+ customarily used for software interchange; or,
+
+ c) Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with such
+ an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it. For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable. However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+\f
+ 4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Program or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all. For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+\f
+ 8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+ 9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+\f
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) 19yy <name of author>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) 19yy name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+ <signature of Ty Coon>, 1 April 1989
+ Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Library General
+Public License instead of this License.
--- /dev/null
+1998-06-23 Dave Love <d.love@dl.ac.uk>
+
+ * configure.in (exext): Define.
+
+1998-06-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * configure.in: Check for access().
+
+1998-05-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * po/hr.po: Some fixes, as per suggestions by Francois Pinard.
+
+1998-05-19 Dominique Delamarre <dominique.delamarre@hol.fr>
+
+ * po/fr.po: New file.
+
+1998-05-19 Toomas Soome <tsoome@ut.ee>
+
+ * po/et.po: Updated.
+
+1998-05-11 Simos KSenitellis <simos@teiath.gr>
+
+ * po/el.po: New file.
+
+1998-05-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * aclocal.m4 (WGET_WITH_NLS): Print available catalogs.
+
+1998-05-09 Toomas Soome <tsoome@ut.ee>
+
+ * po/et.po: New file.
+
+1998-05-06 Douglas E. Wegscheid <wegscd@whirlpool.com>
+
+	* configure.bat: Set up for either Borland or Visual C.
+
+	* windows/wget.dep: New file.
+
+	* windows/Makefile.*: Use wget.dep.
+
+	* Renamed windows/Makefile.bor to Makefile.src.bor.
+
+1998-05-06 Douglas E. Wegscheid <wegscd@whirlpool.com>
+
+ * windows/makefile.bor: Updated.
+
+ * windows/Makefile.src: Ditto.
+
+1998-04-30 Douglas E. Wegscheid <wegscd@whirlpool.com>
+
+ * windows/config.h.bor: New file.
+
+ * windows/makefile.bor: New file.
+
+1998-04-27 John Burden <john@futuresguide.com>
+
+ * windows/Makefile.*: Cleanup.
+
+1998-04-27 Gregor Hoffleit <flight@mathi.uni-heidelberg.de>
+
+ * configure.in: Check for PID_T.
+
+1998-04-19 Giovanni Bortolozzo <borto@dei.unipd.it>
+
+ * po/it.po: Updated.
+
+1998-04-19 Jan Prikryl <prikryl@cg.tuwien.ac.at>
+
+ * po/cs.po: Updated.
+
+1998-04-19 Wanderlei Cavassin <cavassin@conectiva.com.br>
+
+ * po/pt_BR.po: Updated.
+
+1998-04-08 Stefan Hornburg <racke@gundel.han.de>
+
+ * Makefile (dist): New target.
+
+1998-04-08 Wanderlei Cavassin <cavassin@conectiva.com.br>
+
+ * po/pt_BR.po: Updated.
+
+1998-04-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * aclocal.m4 (WGET_WITH_NLS): Renamed USE_NLS to HAVE_NLS.
+
+ * ABOUT-NLS: Removed.
+
+ * Makefile.in (stamp-h): Clean up stamp-h-related dependencies.
+ Don't attempt to write to stamp-h.in.
+
+ * aclocal.m4 (WGET_PROCESS_PO): Reset srcdir to ac_given_srcdir.
+
+1998-04-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (distclean-top): Remove stamp-h.
+
+1998-04-02 Robert Schmidt <rsc@vingmed.no>
+
+ * po/no.po: New file.
+
+1998-04-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * configure.in: New option `--disable-debug'.
+
+1998-03-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * configure.in: Check for endianness.
+
+1998-03-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * aclocal.m4 (WGET_PROCESS_PO): Use echo instead of AC_MSG_RESULT.
+
+1998-03-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * aclocal.m4 (WGET_WITH_NLS): Disable USE_NLS if gettext is
+ unavailable.
+
+ * aclocal.m4: Renamed AM_STRUCT_UTIMBUF to WGET_STRUCT_UTIMBUF;
+ renamed AM_WITH_NLS to WGET_WITH_NLS.
+
+ * aclocal.m4: Eliminate POSUBS.
+
+1998-03-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in: config.h* -> src/config.h*
+
+ * configure.in: Check for vsnprintf().
+
+ * po/POTFILES.in: Updated.
+
+1998-03-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * po/POTFILES.in: Removed extraneous newline at end of line, which
+ caused an error in `Makefile' which Sun make choked on.
+
+1998-03-16 Jan Prikryl <prikryl@cg.tuwien.ac.at>
+
+ * po/cs.po: New file.
+
+1998-03-12 Wanderlei Cavassin <cavassin@conectiva.com.br>
+
+ * po/pt_BR.po: New file.
+
+1998-03-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * PROBLEMS: New file.
+
+1998-02-22 Karl Eichwalder <ke@suse.de>
+
+ * po/Makefile.in.in (install-data-yes): Fix creation of
+ directories for LC_MESSAGE files.
+
+1998-02-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * configure.in: Removed `-Wno-switch' for gcc.
+
+ * po/Makefile.in.in (install-data-yes): Use mkinstalldirs to
+ create the directory first.
+
+1998-02-21 Karl Eichwalder <karl@suse.de>
+
+ * po/de.po: Updated.
+
+1998-02-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (check): New empty target.
+
+1998-02-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * po/it.po: New file, by Antonio Rosella.
+
+1998-02-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * aclocal.m4: Cleaned up.
+
+ * po/hr.po: Updated.
+
+ * configure.in: Removed check for POSIXized ISC.
+
+1998-02-08 Karl Eichwalder <karl@suse.de>
+
+ * po/de.po: Updated.
+
+1998-02-07 Karl Eichwalder <ke@suse.de>
+
+ * Makefile.in (install.info uninstall.info install.man
+ uninstall.man install.wgetrc): Use it.
+
+ * Makefile.in (install.mo): New target.
+
+1998-02-03 Karl Eichwalder <ke@suse.de>
+
+ * po/POTFILES.in: Touch it (needed for NLS); add src/ftp.c,
+ src/getopt.c, src/host.c, src/html.c, src/http.c, src/init.c,
+ src/main.c, src/mswindows.c, src/netrc.c, src/recur.c, src/retr.c,
+ src/url.c, and src/utils.c.
+
+ * intl/po2tbl.sed.in: Add from gettext-0.10.32 (needed for NLS).
+
+ * po/Makefile.in.in: Add from gettext-0.10.32.
+
+ * Makefile.in (SUBDIRS): Add po/.
+
+ * configure.in (ALL_LINGUAS): New variable. Add "de" and "hr".
+ (AM_GNU_GETTEXT): Add.
+ (AC_OUTPUT): Add po/Makefile.in; run the sed command.
+
+ * aclocal.m4 (AM_WITH_NLS, AM_GNU_GETTEXT, AM_LC_MESSAGES,
+ AM_PATH_PROG_WITH_TEST): from gettext-0.10.32.
+
--- /dev/null
+ -*- text -*-
+ Installation Procedure
+
+0) Preparation
+
+To build and install GNU Wget, you need to unpack the archive (which
+you have presumably done, since you are reading this), and read on.
+Like most GNU utilities, Wget uses the GNU Autoconf mechanism for
+build and installation; those of you familiar with compiling GNU
+software will feel at home.
+
+1) Configuration
+
+To configure Wget, run the configure script provided with the
+distribution. You may use all the standard arguments configure
+scripts take. The most important ones are:
+
+ --help print help message
+
+ --prefix=PREFIX install architecture-independent files in PREFIX
+ (/usr/local by default)
+ --bindir=DIR user executables in DIR (PREFIX/bin)
+ --infodir=DIR info documentation in DIR [PREFIX/info]
+ --mandir=DIR man documentation in DIR [PREFIX/man]
+
+ --build=BUILD configure for building on BUILD [BUILD=HOST]
+ --host=HOST configure for HOST [guessed]
+ --target=TARGET configure for TARGET [TARGET=HOST]
+
+--enable and --with options recognized (mostly Wget-specific):
+ --with-socks use the socks library
+ --disable-opie disable support for opie or s/key FTP login
+ --disable-digest disable support for HTTP digest authorization
+ --disable-debug disable support for debugging output
+ --disable-nls do not use Native Language Support
+
+So, if you want to configure Wget for installation in your home
+directory, you can type:
+
+    ./configure --prefix=$HOME
+
+You can customize many default settings by editing Makefile and
+config.h. The program will work very well without your touching these
+files, but it is useful to have a look at things you can change there.
+
+If you use socks, it is useful to add -L/usr/local/lib (or wherever
+the socks library is installed) to LDFLAGS in Makefile.
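
A sketch of that Makefile tweak (the path and library name below are
illustrative assumptions -- adjust them to wherever your socks
installation actually lives):

```make
# Makefile, after running `configure --with-socks'.
# -L must point at the directory containing the socks library;
# /usr/local/lib and -lsocks are placeholders, not guaranteed names.
LDFLAGS = -L/usr/local/lib
LIBS    = -lsocks
```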
+
+To configure Wget on Windows, run configure.bat and follow the
+instructions in the windows/ directory. If this doesn't work for any
+reason, talk to the Windows developers listed in `windows/README'; I
+do not maintain the port.
+
+2) Compilation
+
+To compile the program, type make and cross your fingers. If you do
+not have an ANSI compiler, Wget will try to K&R-ize its sources "on
+the fly". This should make GNU Wget compilable virtually anywhere.
+
+After the compilation, a ready-to-use `wget' executable should reside
+in the src directory. I do not have any kind of test suite at the
+moment, but it should be easy enough to test whether the basic stuff
+works.
+
+3) Installation
+
+Use `make install' to install GNU Wget to directories specified to
+configure (/usr/local/* by default).
+
+The standard installation process will copy the wget binary to
+/usr/local/bin and install the info pages (wget.info*) to
+/usr/local/info. You can customize the directories either through the
+configuration process or by making the necessary changes in the
+Makefile.
+
+To delete the files created by the Wget installation, use `make
+uninstall'.
--- /dev/null
+This file lists the architectures on which this version of GNU Wget
+has been tried. If you compile Wget on a new architecture, please
+drop me a note, or send a patch to this file.
+
+\f
+Sun SunOS, Solaris (sparc-sun-solaris*, sparc-sun-sunos*)
+
+GNU/Linux (i[3456]86-*-linux*)
+
+DEC Ultrix, Digital Unix (mips-dec-ultrix*, alpha-dec-osf*)
+
+HP BSD (m68k-hp-bsd)
+
+HP HPUX (hppa1.0-hp-hpux7.00, hppa1.1-hp-hpux9.01 and others)
+
+IBM AIX (powerpc-ibm-aix4.1.4.0)
+
+Amiga NetBSD (m68k-cbm-netbsd1.2)
+
+SGI IRIX (mips-sgi-irix4.0.5, mips-sgi-irix5.3)
+
+SCO Unix (i586-pc-sco3.2v5.0.4)
+
+NeXTStep 3.3 Intel (i386-next-nextstep3)
+
+FreeBSD (i386-unknown-freebsd2.2.6)
+
+Windows 95/NT (i[3456]86)
--- /dev/null
+ -*- text -*-
+
+ Mailing List Info
+
+
+Thanks to Karsten Thygesen, Wget has its own mailing list for
+discussion and announcements. The list address is hosted at Sunsite
+Denmark, <wget@sunsite.auc.dk>. To subscribe, send mail to
+<wget-subscribe@sunsite.auc.dk>.
+
+The list is fairly low-volume -- one or two messages a day, with
+sporadic periods of intensive activity. If you are interested in
+using or hacking Wget, or wish to read the important announcements,
+you are very welcome to subscribe.
+
+The list is archived at <URL:http://fly.cc.fer.hr/archive/wget>.
--- /dev/null
+# Makefile for `Wget' utility
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+#
+# Version: @VERSION@
+#
+
+SHELL = /bin/sh
+@SET_MAKE@
+
+srcdir = @srcdir@
+VPATH = @srcdir@
+
+#
+# User configuration section
+#
+
+#
+# Install variables
+#
+prefix = @prefix@
+exec_prefix = @exec_prefix@
+bindir = @bindir@
+infodir = @infodir@
+sysconfdir = @sysconfdir@
+mandir = @mandir@
+manext = 1
+localedir = $(prefix)/share/locale
+
+CC = @CC@
+CFLAGS = @CFLAGS@
+CPPFLAGS = @CPPFLAGS@
+DEFS = @DEFS@ -DSYSTEM_WGETRC=\"$(sysconfdir)/wgetrc\" -DLOCALEDIR=\"$(localedir)\"
+LIBS = @LIBS@
+LDFLAGS = @LDFLAGS@
+
+#
+# End of user configuration section. There should be no need to change
+# anything below this line.
+#
+
+DISTNAME = wget-@VERSION@
+RM = rm -f
+
+# These are used for maintenance only, so they are safe without
+# special autoconf cruft.
+FIND = find
+GZIP = gzip
+TAR = tar
+
+# flags passed to recursive makes in subdirectories
+MAKEDEFS = CC='$(CC)' CPPFLAGS='$(CPPFLAGS)' DEFS='$(DEFS)' \
+CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS)' LIBS='$(LIBS)' \
+prefix='$(prefix)' exec_prefix='$(exec_prefix)' bindir='$(bindir)' \
+infodir='$(infodir)' mandir='$(mandir)' manext='$(manext)'
+
+# subdirectories in the distribution
+SUBDIRS = src doc po util
+
+# default target
+all: src/config.h Makefile $(SUBDIRS)
+
+check: all
+
+$(SUBDIRS): FORCE
+ cd $@ && $(MAKE) $(MAKEDEFS)
+
+# install everything
+install: install.bin install.info install.wgetrc install.mo # install.man
+
+# install/uninstall the binary
+install.bin uninstall.bin:
+ cd src && $(MAKE) $(MAKEDEFS) $@
+
+# install/uninstall the info/man pages
+install.info uninstall.info install.man uninstall.man install.wgetrc:
+ cd doc && $(MAKE) $(MAKEDEFS) $@
+
+# Install `.mo' files
+install.mo:
+ cd po && $(MAKE) $(MAKEDEFS) $@
+
+# create tag files for Emacs
+TAGS:
+ cd src && $(MAKE) $@
+
+dist: $(srcdir)/configure DISTFILES
+ mkdir $(DISTNAME)
+ for d in `$(FIND) . -type d ! -name RCS -print`; do \
+ if [ "$$d" != "." -a "$$d" != "./$(DISTNAME)" ]; then \
+ mkdir $(DISTNAME)/$$d; \
+ fi; \
+ done
+ for f in `cat DISTFILES`; do \
+ ln $(srcdir)/$$f $(DISTNAME)/$$f || \
+ { echo copying $$f; cp -p $(srcdir)/$$f $(DISTNAME)/$$f ; } \
+ done
+ (cd $(DISTNAME); $(MAKE) distclean)
+ $(TAR) chvf - $(DISTNAME) | $(GZIP) -c --best >$(DISTNAME).tar.gz
+ $(RM) -r $(DISTNAME)
+ $(RM) DISTFILES
+
+DISTFILES: FORCE
+ rm -rf $(DISTNAME)
+ (cd $(srcdir); find . ! -type d -print) \
+ | sed '/\/RCS\//d; \
+ /$@/d; \
+ /\.tar.*/d; \
+ s/^.\///; /^\.$$/d;' \
+ | sort | uniq > $@
+
+#
+# Cleanup dependencies
+#
+
+clean: clean-recursive clean-top
+distclean: distclean-recursive distclean-top
+realclean: realclean-recursive realclean-top
+
+clean-top:
+ $(RM) *~ *.bak $(DISTNAME).tar.gz
+
+distclean-top: clean-top
+ $(RM) Makefile config.status config.log config.cache stamp-h
+
+realclean-top: distclean-top
+
+clean-recursive distclean-recursive realclean-recursive:
+ for subdir in $(SUBDIRS); do \
+ target=`echo $@ | sed s/-recursive//`; \
+ (cd $$subdir && $(MAKE) $(MAKEDEFS) $$target) || exit 1; \
+ done
+
+#
+# Dependencies for maintenance
+#
+
+Makefile: Makefile.in config.status
+ CONFIG_HEADERS= ./config.status
+
+config.status: configure
+ ./config.status --recheck
+
+configure: configure.in aclocal.m4
+ cd $(srcdir) && autoconf
+
+src/config.h: stamp-h
+stamp-h: src/config.h.in config.status
+ CONFIG_FILES= CONFIG_HEADERS=src/config.h ./config.status
+
+src/config.h.in: stamp-h.in
+stamp-h.in: configure.in aclocal.m4
+ echo timestamp > $@
+
+FORCE:
+
--- /dev/null
+GNU Wget NEWS -- history of user-visible changes.
+
+Copyright (C) 1997, 1998 Free Software Foundation, Inc.
+See the end for copying conditions.
+
+Please send GNU Wget bug reports to <bug-wget@gnu.org>.
+\f
+* Wget 1.5.3 is a bugfix release with no user-visible changes.
+\f
+* Wget 1.5.2 is a bugfix release with no user-visible changes.
+\f
+* Wget 1.5.1 is a bugfix release with no user-visible changes.
+\f
+* Changes in Wget 1.5.0
+
+** Wget speaks many languages!
+
+On systems with gettext(), Wget will output messages in the language
+set by the current locale, if available. At this time we support
+Czech, German, Croatian, Italian, Norwegian and Portuguese.
+
+** Opie (Skey) is now supported with FTP.
+
+** HTTP Digest Access Authentication (RFC2069) is now supported.
+
+** The new `-b' option makes Wget go to background automatically.
+
+** The `-I' and `-X' options now accept wildcard arguments.
+
+** The `-w' option now accepts suffixes `s' for seconds, `m' for
+minutes, `h' for hours, `d' for days and `w' for weeks.
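The conversion behind those suffixes is plain multiplication; a hypothetical sketch follows (this is not Wget's actual code, and `wait_seconds' is a made-up name):

```shell
# Hypothetical sketch of the `-w' suffix arithmetic: scale the number
# by the factor its suffix implies; a bare number means seconds.
wait_seconds () {
    case "$1" in
        *s) echo $(( ${1%s} )) ;;
        *m) echo $(( ${1%m} * 60 )) ;;
        *h) echo $(( ${1%h} * 3600 )) ;;
        *d) echo $(( ${1%d} * 86400 )) ;;
        *w) echo $(( ${1%w} * 604800 )) ;;
        *)  echo $(( $1 )) ;;
    esac
}
wait_seconds 2m    # 120
wait_seconds 1h    # 3600
```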
+
+** Upon getting SIGHUP, the whole previous log is now copied to
+`wget-log'.
+
+** Wget now understands proxy settings with explicit usernames and
+passwords, e.g. `http://user:password@proxy.foo.com/'.
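A sketch of how such a URL decomposes -- illustrative sed expressions, not Wget's actual parser:

```shell
# Split user, password and host out of a proxy URL of the form
# http://user:password@host/ (illustrative only).
url='http://user:password@proxy.foo.com/'
user=$(echo "$url" | sed 's,^[^/]*//\([^:@]*\):.*,\1,')
pass=$(echo "$url" | sed 's,^[^/]*//[^:@]*:\([^@]*\)@.*,\1,')
host=$(echo "$url" | sed 's,^[^/]*//[^@]*@\([^/]*\)/.*,\1,')
echo "$user $pass $host"    # user password proxy.foo.com
```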
+
+** You can use the new `--cut-dirs' option to make Wget create fewer
+directories.
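The effect of cutting directories is to drop leading components of the remote path when building local names; for a remote directory such as pub/unix/util, cutting two components leaves only the last one (a sketch of the idea, not Wget's code):

```shell
# What cutting 2 leading directory components does to a remote path
# like pub/unix/util.
remote_dir='pub/unix/util'
echo "$remote_dir" | cut -d/ -f3-    # util
```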
+
+** The `;type=a' suffix to FTP URLs is now recognized. For
+instance, the following command will retrieve the welcome message
+using an ASCII-type transfer:
+
+ wget "ftp://ftp.somewhere.com/welcome.msg;type=a"
+
+** The `--help' and `--version' options have been redone to conform
+to the standards set by other GNU utilities.
+
+** Wget should now be compilable in the MS Windows environment. MS
+Visual C++ and Watcom C have been used successfully.
+
+** If the file length is known, percentages are displayed during
+download.
+
+** The manual page, now hopelessly out of date, is no longer
+distributed with Wget.
+\f
+* Wget 1.4.5 is a bugfix release with no user-visible changes.
+\f
+* Wget 1.4.4 is a bugfix release with no user-visible changes.
+\f
+* Changes in Wget 1.4.3
+
+** Wget is now a GNU utility.
+
+** Can do passive FTP.
+
+** Reads .netrc.
+
+** Info documentation expanded.
+
+** Compiles on pre-ANSI compilers.
+
+** Global wgetrc now goes to /usr/local/etc (i.e. $sysconfdir).
+
+** Lots of bugfixes.
+\f
+* Changes in Wget 1.4.2
+
+** New mirror site at ftp://sunsite.auc.dk/pub/infosystems/wget/,
+thanks to Karsten Thygesen.
+
+** Mailing list! Mail to wget-request@sunsite.auc.dk to subscribe.
+
+** New option --delete-after for proxy prefetching.
+
+** New option --retr-symlinks to retrieve symbolic links like plain
+files.
+
+** rmold.pl -- script to remove files deleted on the remote server
+
+** --convert-links should work now.
+
+** Minor bugfixes.
+\f
+* Changes in Wget 1.4.1
+
+** Minor bugfixes.
+
+** Added -I (the opposite of -X).
+
+** Dot tracing is now customizable; try wget --dot-style=binary
+\f
+* Changes in Wget 1.4.0
+
+** Wget 1.4.0 [formerly known as Geturl] is an extensive rewrite of
+Geturl. Although many things look suspiciously similar, most of the
+code was rewritten, including recursive retrieval, HTTP, FTP and
+almost everything else. Wget should now be easier to debug, maintain
+and, most importantly, use.
+
+** Recursive HTTP should now work without glitches, even with Location
+changes, server-generated directory listings and other naughty stuff.
+
+** HTTP regetting is supported on servers that support Range
+specification. WWW authorization is supported -- try
+wget http://user:password@hostname/
+
+** FTP support was rewritten and widely enhanced. Globbing should now
+work flawlessly. Symbolic links are created locally. All the
+information the Unix-style ls listing can give is now recognized.
+
+** Recursive FTP is supported, e.g.
+ wget -r ftp://gnjilux.cc.fer.hr/pub/unix/util/
+
+** You can specify "rejected" directories that you do not want Wget
+to enter, e.g. with wget -X /pub
+
+** Time-stamping is supported, with both HTTP and FTP. Try wget -N URL.
+
+** A new texinfo reference manual is provided. It can be read with
+Emacs, standalone info, or converted to HTML, dvi or postscript.
+
+** Fixed a long-standing bug, so that Wget now works over SLIP
+connections.
+
+** You can have a system-wide wgetrc (/usr/local/lib/wgetrc by
+default). Settings in $HOME/.wgetrc override the global ones, of
+course :-)
+
+** You can set up quota in .wgetrc to prevent sucking too much
+data. Try `quota = 5M' in .wgetrc (or quota = 100K if you want your
+sysadmin to like you).
+
+** Download rate is printed after retrieval.
+
+** Wget now sends the `Referer' header when retrieving
+recursively.
+
+** With the new --no-parent option Wget can retrieve FTP recursively
+through a proxy server.
+
+** The HTML parser, as well as the whole of Wget, was rewritten to be
+much faster and less memory-consuming (yes, both).
+
+** Absolute links can be converted to relative links locally. Check
+wget -k.
+
+** Wget catches the hangup signal, redirecting its output to a log
+file and resuming work. Try kill -HUP %?wget.
+
+** User-defined headers can be sent. Try
+
+ wget http://fly.cc.her.hr/ --header='Accept-Charset: iso-8859-2'
+
+** Acceptance/Rejection lists may contain wildcards.
+
+** Wget can display HTTP headers and/or FTP server response with the
+new `-S' option. It can save the original HTTP headers with `-s'.
+
+** socks library is now supported (thanks to Antonio Rosella
+<Antonio.Rosella@agip.it>). Configure with --with-socks.
+
+** There is a nicer display of REST-ed output.
+
+** Many new options (like -x to force directory hierarchy, or -m to
+turn on mirroring options).
+
+** Wget is now distributed under the GNU General Public License (GPL).
+
+** Lots of small features I can't remember. :-)
+
+** A host of bugfixes.
+\f
+* Changes in Geturl 1.3
+
+** Added FTP globbing support (ftp://fly.cc.fer.hr/*)
+
+** Added support for no_proxy
+
+** Added support for ftp://user:password@host/
+
+** Added support for %xx in URL syntax
+
+** More natural command-line options
+
+** Added -e switch to execute .geturlrc commands from the command-line
+
+** Added support for robots.txt
+
+** Fixed some minor bugs
+\f
+* Geturl 1.2 is a bugfix release with no user-visible changes.
+\f
+* Changes in Geturl 1.1
+
+** REST supported in FTP
+
+** Proxy servers supported
+
+** GNU getopt used, which enables command-line arguments to be ordered
+as you wish, e.g. geturl http://fly.cc.fer.hr/ -vo log is the same as
+geturl -vo log http://fly.cc.fer.hr/
+
+** Netscape-compatible URL syntax for HTTP supported: host[:port]/dir/file
+
+** NcFTP-compatible colon URL syntax for FTP supported: host:/dir/file
+
+** <base href="xxx"> supported
+
+** autoconf supported
+\f
+----------------------------------------------------------------------
+Copyright information:
+
+Copyright (C) 1997, 1998 Free Software Foundation, Inc.
+
+ Permission is granted to anyone to make or distribute verbatim
+ copies of this document as received, in any medium, provided that
+ the copyright notice and this permission notice are preserved, thus
+ giving the recipient permission to redistribute in turn.
+
+ Permission is granted to distribute modified versions of this
+ document, or of portions of it, under the above conditions,
+ provided also that they carry prominent notices stating who last
+ changed them.
--- /dev/null
+ -*- text -*-
+ GNU Wget README
+
+GNU Wget is a free network utility to retrieve files from the World
+Wide Web using HTTP and FTP, the two most widely used Internet
+protocols. It works non-interactively, thus enabling work in the
+background, after having logged off.
+
+The recursive retrieval of HTML pages, as well as FTP sites is
+supported -- you can use Wget to make mirrors of archives and home
+pages, or traverse the web like a WWW robot (Wget understands
+/robots.txt).
+
+Wget works exceedingly well on slow or unstable connections,
+retrying until the document is fully retrieved. Re-getting files
+from where it left off works on servers (both HTTP and FTP) that
+support it. Matching of wildcards and recursive mirroring of
+directories are available when retrieving via FTP. Both HTTP and FTP
+retrievals can be time-stamped, thus Wget can see if the remote file
+has changed since last retrieval and automatically retrieve the new
+version if it has.
+
+Wget supports proxy servers, which can lighten the network load, speed
+up retrieval and provide access behind firewalls. If you are behind a
+firewall that requires the use of a socks style gateway, you can get
+the socks library and compile wget with support for socks.
+
+Most of the features are configurable, either through command-line
+options, or via initialization file .wgetrc. Wget allows you to
+install a global startup file (/usr/local/etc/wgetrc by default) for
+site settings.
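Startup files use a simple `name = value' syntax; for example (the `quota' command is shown in the NEWS entries in this distribution):

```
# Sample wgetrc fragment.
quota = 5M
```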
+
+Wget works under almost all modern Unix variants and, unlike many
+other similar utilities, is written entirely in C, thus requiring no
+additional software (such as Perl). As Wget uses GNU Autoconf, it is
+easily built on and ported to other Unix systems. The installation
+procedure is
+described in the INSTALL file.
+
+Like all GNU utilities, the latest version of Wget can be found at the
+master GNU archive site prep.ai.mit.edu, and its mirrors. For
+example, Wget 1.5.2 is at:
+<URL:ftp://prep.ai.mit.edu/pub/gnu/wget-1.5.2.tar.gz>.
+
+The latest version is also available via FTP from the maintainer's
+machine, at:
+<URL:ftp://gnjilux.cc.fer.hr/pub/unix/util/wget/wget.tar.gz>.
+
+This location is mirrored at:
+<URL:ftp://sunsite.auc.dk/pub/infosystems/wget/> and
+<URL:http://sunsite.auc.dk/ftp/pub/infosystems/wget/>.
+
+Please report bugs in Wget to <bug-wget@prep.ai.mit.edu>.
+
+Wget has its own mailing list at <wget@sunsite.auc.dk>. To subscribe,
+mail to <wget-subscribe@sunsite.auc.dk>.
+
+Wget is free in all senses -- it is freely redistributable, and no
+payment is required. If you still wish to donate money to the author,
+or wish to sponsor implementation of specific features, please email
+me at <hniksic@srce.hr>.
+
+
+AUTHOR: Hrvoje Niksic <URL:mailto:hniksic@srce.hr>
+
+
+Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
--- /dev/null
+ Hey Emacs, this is -*- outline -*- mode
+
+This is the todo list for Wget. I don't have any time-table of when I
+plan to implement these features; this is just a list of things I'd
+like to see in Wget. I'll work on some of them myself, and I will
+accept patches in their direction. The items are not listed in any
+particular order. Not all of them are user-visible changes.
+
+* Make `-k' convert <base href=...> too.
+
+* Add option to clobber existing file names (no `.N' suffixes).
+
+* Introduce a concept of "boolean" options. For instance, every
+ boolean option `--foo' would have a `--no-foo' equivalent for
+ turning it off. Get rid of `--foo=no' stuff. Short options would
+ be handled as `-x' vs. `-nx'.
+
+* Implement "thermometer" display (not all that hard; use an
+ alternative show_progress() if the output goes to a terminal.)
+
+* Add option to only list wildcard matches without doing the download.
+
+* Add case-insensitivity as an option.
+
+* Add option to download all files needed to display a web page
+ (images, etc.)
+
+* Handle MIME types correctly. There should be an option to (not)
+ retrieve files based on MIME types, e.g. `--accept-types=image/*'.
+
+* Implement "persistent" retrieving. In "persistent" mode Wget should
+ treat most of the errors as transient.
+
+* Allow time-stamping by arbitrary date.
+
+* Fix Unix directory parser to allow for spaces in file names.
+
+* Allow size limit to files.
+
+* -k should convert relative references to absolute if not
+  downloaded.
+
+* Recognize HTML comments correctly. Add more options for handling
+ bogus HTML found all over the 'net.
+
+* Implement breadth-first retrieval.
+
+* Download to .in* when mirroring.
+
+* Add an option to delete or move no-longer-existent files when
+ mirroring.
+
+* Implement a switch to avoid downloading multiple files (e.g. x and
+ x.gz).
+
+* Implement uploading (--upload URL?) in FTP and HTTP.
+
+* Rewrite FTP code to allow for easy addition of new commands. It
+ should probably be coded as a simple DFA engine.
+
+* Recognize more FTP servers (VMS).
+
+* Make HTTP timestamping use If-Modified-Since facility.
+
+* Implement better spider options.
+
+* Add more protocols (e.g. gopher and news), implementing them in a
+ modular fashion.
+
+* Implement a concept of "packages" a la mirror.
+
+* Implement correct RFC1808 URL parsing.
+
+* Implement HTTP cookies.
+
+* Implement more HTTP/1.1 bells and whistles (ETag, Content-MD5 etc.)
+
+* Support SSL encryption through SSLeay.
--- /dev/null
+AC_DEFUN(AM_C_PROTOTYPES,
+[AC_REQUIRE([AM_PROG_CC_STDC])
+AC_BEFORE([$0], [AC_C_INLINE])
+AC_MSG_CHECKING([for function prototypes])
+if test "$am_cv_prog_cc_stdc" != no; then
+ AC_MSG_RESULT(yes)
+ AC_DEFINE(PROTOTYPES)
+ U= ANSI2KNR=
+else
+ AC_MSG_RESULT(no)
+ U=_ ANSI2KNR=./ansi2knr
+ # Ensure some checks needed by ansi2knr itself.
+ AC_HEADER_STDC
+ AC_CHECK_HEADERS(string.h)
+fi
+AC_SUBST(U)dnl
+AC_SUBST(ANSI2KNR)dnl
+])
+
+
+# serial 1
+
+# @defmac AC_PROG_CC_STDC
+# @maindex PROG_CC_STDC
+# @ovindex CC
+# If the C compiler is not in ANSI C mode by default, try to add an option
+# to output variable @code{CC} to make it so. This macro tries various
+# options that select ANSI C on some system or another. It considers the
+# compiler to be in ANSI C mode if it defines @code{__STDC__} to 1 and
+# handles function prototypes correctly.
+#
+# If you use this macro, you should check after calling it whether the C
+# compiler has been set to accept ANSI C; if not, the shell variable
+# @code{am_cv_prog_cc_stdc} is set to @samp{no}. If you wrote your source
+# code in ANSI C, you can make an un-ANSIfied copy of it by using the
+# program @code{ansi2knr}, which comes with Ghostscript.
+# @end defmac
+
+AC_DEFUN(AM_PROG_CC_STDC,
+[AC_REQUIRE([AC_PROG_CC])
+AC_MSG_CHECKING(for ${CC-cc} option to accept ANSI C)
+AC_CACHE_VAL(am_cv_prog_cc_stdc,
+[am_cv_prog_cc_stdc=no
+ac_save_CC="$CC"
+# Don't try gcc -ansi; that turns off useful extensions and
+# breaks some systems' header files.
+# AIX -qlanglvl=ansi
+# Ultrix and OSF/1 -std1
+# HP-UX -Aa -D_HPUX_SOURCE
+# SVR4 -Xc -D__EXTENSIONS__
+for ac_arg in "" -qlanglvl=ansi -std1 "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__"
+do
+ CC="$ac_save_CC $ac_arg"
+ AC_TRY_COMPILE(
+[#if !defined(__STDC__) || __STDC__ != 1
+choke me
+#endif
+/* DYNIX/ptx V4.1.3 can't compile sys/stat.h with -Xc -D__EXTENSIONS__. */
+#ifdef _SEQUENT_
+# include <sys/types.h>
+# include <sys/stat.h>
+#endif
+], [
+int test (int i, double x);
+struct s1 {int (*f) (int a);};
+struct s2 {int (*f) (double a);};],
+[am_cv_prog_cc_stdc="$ac_arg"; break])
+done
+CC="$ac_save_CC"
+])
+AC_MSG_RESULT($am_cv_prog_cc_stdc)
+case "x$am_cv_prog_cc_stdc" in
+ x|xno) ;;
+ *) CC="$CC $am_cv_prog_cc_stdc" ;;
+esac
+])
+
+AC_DEFUN(WGET_STRUCT_UTIMBUF,
+[AC_MSG_CHECKING(for struct utimbuf)
+if test x"$ac_cv_header_utime_h" = xyes; then
+ AC_EGREP_CPP([struct[ ]+utimbuf],
+ [#include <utime.h>],
+ [AC_DEFINE(HAVE_STRUCT_UTIMBUF)
+ AC_MSG_RESULT(yes)],
+ AC_MSG_RESULT(no))
+else
+ AC_MSG_RESULT(no)
+fi])
+
+\f
+# This code originates from Ulrich Drepper's AM_WITH_NLS.
+
+AC_DEFUN(WGET_WITH_NLS,
+ [AC_MSG_CHECKING([whether NLS is requested])
+ dnl Default is enabled NLS
+ AC_ARG_ENABLE(nls,
+ [ --disable-nls do not use Native Language Support],
+ HAVE_NLS=$enableval, HAVE_NLS=yes)
+ AC_MSG_RESULT($HAVE_NLS)
+
+ dnl If something goes wrong, we may still decide not to use NLS.
+ dnl For this reason, defer AC_SUBST'ing HAVE_NLS until the very
+ dnl last moment.
+
+ if test x"$HAVE_NLS" = xyes; then
+ AC_MSG_RESULT("language catalogs: $ALL_LINGUAS")
+ AM_PATH_PROG_WITH_TEST(MSGFMT, msgfmt,
+ [test -z "`$ac_dir/$ac_word -h 2>&1 | grep 'dv '`"], msgfmt)
+ AM_PATH_PROG_WITH_TEST(XGETTEXT, xgettext,
+ [test -z "`$ac_dir/$ac_word -h 2>&1 | grep '(HELP)'`"], :)
+ AC_SUBST(MSGFMT)
+ AC_PATH_PROG(GMSGFMT, gmsgfmt, $MSGFMT)
+ CATOBJEXT=.gmo
+ INSTOBJEXT=.mo
+ DATADIRNAME=share
+
+ dnl Test whether we really found GNU xgettext.
+ if test "$XGETTEXT" != ":"; then
+      dnl If it is not GNU xgettext, we define it as `:' so that the
+      dnl Makefiles can still work.
+ if $XGETTEXT --omit-header /dev/null 2> /dev/null; then
+ : ;
+ else
+          AC_MSG_RESULT(
+	    [found xgettext program is not GNU xgettext; ignoring it])
+ XGETTEXT=":"
+ fi
+ fi
+
+ AC_CHECK_HEADERS(locale.h libintl.h)
+
+ AC_CHECK_FUNCS(gettext, [], [
+ AC_CHECK_LIB(intl, gettext, [
+ dnl gettext is in libintl; announce the fact manually.
+ LIBS="-lintl $LIBS"
+ AC_DEFINE(HAVE_GETTEXT)
+ ], [
+ AC_MSG_RESULT(
+ [gettext not found; disabling NLS])
+ HAVE_NLS=no
+ ])
+ ])
+
+ dnl These rules are solely for the distribution goal. While doing this
+ dnl we only have to keep exactly one list of the available catalogs
+ dnl in configure.in.
+ for lang in $ALL_LINGUAS; do
+ GMOFILES="$GMOFILES $lang.gmo"
+ POFILES="$POFILES $lang.po"
+ done
+ dnl Construct list of names of catalog files to be constructed.
+ for lang in $ALL_LINGUAS; do
+ CATALOGS="$CATALOGS ${lang}${CATOBJEXT}"
+ done
+
+ dnl Make all variables we use known to autoconf.
+ AC_SUBST(CATALOGS)
+ AC_SUBST(CATOBJEXT)
+ AC_SUBST(DATADIRNAME)
+ AC_SUBST(GMOFILES)
+ AC_SUBST(INSTOBJEXT)
+ AC_SUBST(INTLLIBS)
+ AC_SUBST(POFILES)
+ fi
+ AC_SUBST(HAVE_NLS)
+ dnl Some independently maintained files, such as po/Makefile.in,
+ dnl use `USE_NLS', so support it.
+ USE_NLS=$HAVE_NLS
+ AC_SUBST(USE_NLS)
+ if test "x$HAVE_NLS" = xyes; then
+ AC_DEFINE(HAVE_NLS)
+ fi
+ ])
+
+dnl Generate list of files to be processed by xgettext which will
+dnl be included in po/Makefile.
+dnl
+dnl This is not strictly an Autoconf macro, because it is run from
+dnl within `config.status' rather than from within configure. This
+dnl is why special rules must be applied for it.
+AC_DEFUN(WGET_PROCESS_PO,
+ [srcdir=$ac_given_srcdir # Advanced autoconf hackery
+   dnl Compute posrcprefix, the path prefix leading from the po
+   dnl directory back to the source files.
+ if test "x$srcdir" != "x."; then
+ if test "x`echo $srcdir | sed 's@/.*@@'`" = "x"; then
+ posrcprefix="$srcdir/"
+ else
+ posrcprefix="../$srcdir/"
+ fi
+ else
+ posrcprefix="../"
+ fi
+ rm -f po/POTFILES
+ dnl Use `echo' rather than AC_MSG_RESULT, because this is run from
+ dnl `config.status'.
+ echo "generating po/POTFILES from $srcdir/po/POTFILES.in"
+ sed -e "/^#/d" -e "/^\$/d" -e "s,.*, $posrcprefix& \\\\," \
+ -e "\$s/\(.*\) \\\\/\1/" \
+ < $srcdir/po/POTFILES.in > po/POTFILES
+ echo "creating po/Makefile"
+ sed -e "/POTFILES =/r po/POTFILES" po/Makefile.in > po/Makefile
+ ])
+
+# Search path for a program which passes the given test.
+# Ulrich Drepper <drepper@cygnus.com>, 1996.
+#
+# This file may be copied and used freely without restrictions. It
+# can be used in projects which are not available under the GNU Public
+# License but which still want to provide support for the GNU gettext
+# functionality. Please note that the actual code is *not* freely
+# available.
+
+# serial 1
+
+dnl AM_PATH_PROG_WITH_TEST(VARIABLE, PROG-TO-CHECK-FOR,
+dnl TEST-PERFORMED-ON-FOUND_PROGRAM [, VALUE-IF-NOT-FOUND [, PATH]])
+AC_DEFUN(AM_PATH_PROG_WITH_TEST,
+[# Extract the first word of "$2", so it can be a program name with args.
+set dummy $2; ac_word=[$]2
+AC_MSG_CHECKING([for $ac_word])
+AC_CACHE_VAL(ac_cv_path_$1,
+[case "[$]$1" in
+ /*)
+ ac_cv_path_$1="[$]$1" # Let the user override the test with a path.
+ ;;
+ *)
+ IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:"
+ for ac_dir in ifelse([$5], , $PATH, [$5]); do
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$ac_word; then
+ if [$3]; then
+ ac_cv_path_$1="$ac_dir/$ac_word"
+ break
+ fi
+ fi
+ done
+ IFS="$ac_save_ifs"
+dnl If no 4th arg is given, leave the cache variable unset,
+dnl so AC_PATH_PROGS will keep looking.
+ifelse([$4], , , [ test -z "[$]ac_cv_path_$1" && ac_cv_path_$1="$4"
+])dnl
+ ;;
+esac])dnl
+$1="$ac_cv_path_$1"
+if test -n "[$]$1"; then
+ AC_MSG_RESULT([$]$1)
+else
+ AC_MSG_RESULT(no)
+fi
+AC_SUBST($1)dnl
+])
--- /dev/null
+#! /bin/sh
+# Attempt to guess a canonical system name.
+# Copyright (C) 1992, 93, 94, 95, 1996 Free Software Foundation, Inc.
+#
+# This file is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# Written by Per Bothner <bothner@cygnus.com>.
+# The master version of this file is at the FSF in /home/gd/gnu/lib.
+#
+# This script attempts to guess a canonical system name similar to
+# config.sub. If it succeeds, it prints the system name on stdout, and
+# exits with 0. Otherwise, it exits with 1.
+#
+# The plan is that this can be called by configure scripts if you
+# don't specify an explicit system type (host/target name).
+#
+# Only a few systems have been added to this list; please add others
+# (but try to keep the structure clean).
+#
+
+# This is needed to find uname on a Pyramid OSx when run in the BSD universe.
+# (ghazi@noc.rutgers.edu 8/24/94.)
+if (test -f /.attbin/uname) >/dev/null 2>&1 ; then
+ PATH=$PATH:/.attbin ; export PATH
+fi
+
+UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown
+UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown
+UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown
+UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown
+
+trap 'rm -f dummy.c dummy.o dummy; exit 1' 1 2 15
+
+# Note: order is significant - the case branches are not exclusive.
+
+case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
+ alpha:OSF1:*:*)
+ # A Vn.n version is a released version.
+ # A Tn.n version is a released field test version.
+ # A Xn.n version is an unreleased experimental baselevel.
+ # 1.2 uses "1.2" for uname -r.
+ echo alpha-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[VTX]//'`
+ exit 0 ;;
+ 21064:Windows_NT:50:3)
+ echo alpha-dec-winnt3.5
+ exit 0 ;;
+ Amiga*:UNIX_System_V:4.0:*)
+ echo m68k-cbm-sysv4
+ exit 0;;
+ amiga:NetBSD:*:*)
+ echo m68k-cbm-netbsd${UNAME_RELEASE}
+ exit 0 ;;
+ amiga:OpenBSD:*:*)
+ echo m68k-cbm-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*)
+ echo arm-acorn-riscix${UNAME_RELEASE}
+ exit 0;;
+ Pyramid*:OSx*:*:*|MIS*:OSx*:*:*)
+ # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE.
+ if test "`(/bin/universe) 2>/dev/null`" = att ; then
+ echo pyramid-pyramid-sysv3
+ else
+ echo pyramid-pyramid-bsd
+ fi
+ exit 0 ;;
+ NILE:*:*:dcosx)
+ echo pyramid-pyramid-svr4
+ exit 0 ;;
+ sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)
+ echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ i86pc:SunOS:5.*:*)
+ echo i386-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ sun4*:SunOS:6*:*)
+ # According to config.sub, this is the proper way to canonicalize
+ # SunOS6. Hard to guess exactly what SunOS6 will be like, but
+ # it's likely to be more like Solaris than SunOS4.
+ echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ sun4*:SunOS:*:*)
+ case "`/usr/bin/arch -k`" in
+ Series*|S4*)
+ UNAME_RELEASE=`uname -v`
+ ;;
+ esac
+ # Japanese Language versions have a version number like `4.1.3-JL'.
+ echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'`
+ exit 0 ;;
+ sun3*:SunOS:*:*)
+ echo m68k-sun-sunos${UNAME_RELEASE}
+ exit 0 ;;
+ aushp:SunOS:*:*)
+ echo sparc-auspex-sunos${UNAME_RELEASE}
+ exit 0 ;;
+ atari*:NetBSD:*:*)
+ echo m68k-atari-netbsd${UNAME_RELEASE}
+ exit 0 ;;
+ atari*:OpenBSD:*:*)
+ echo m68k-atari-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ sun3*:NetBSD:*:*)
+ echo m68k-sun-netbsd${UNAME_RELEASE}
+ exit 0 ;;
+ sun3*:OpenBSD:*:*)
+ echo m68k-sun-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ mac68k:NetBSD:*:*)
+ echo m68k-apple-netbsd${UNAME_RELEASE}
+ exit 0 ;;
+ mac68k:OpenBSD:*:*)
+ echo m68k-apple-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ powerpc:machten:*:*)
+ echo powerpc-apple-machten${UNAME_RELEASE}
+ exit 0 ;;
+ RISC*:Mach:*:*)
+ echo mips-dec-mach_bsd4.3
+ exit 0 ;;
+ RISC*:ULTRIX:*:*)
+ echo mips-dec-ultrix${UNAME_RELEASE}
+ exit 0 ;;
+ VAX*:ULTRIX*:*:*)
+ echo vax-dec-ultrix${UNAME_RELEASE}
+ exit 0 ;;
+ mips:*:*:UMIPS | mips:*:*:RISCos)
+ sed 's/^ //' << EOF >dummy.c
+ int main (argc, argv) int argc; char **argv; {
+ #if defined (host_mips) && defined (MIPSEB)
+ #if defined (SYSTYPE_SYSV)
+ printf ("mips-mips-riscos%ssysv\n", argv[1]); exit (0);
+ #endif
+ #if defined (SYSTYPE_SVR4)
+ printf ("mips-mips-riscos%ssvr4\n", argv[1]); exit (0);
+ #endif
+ #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD)
+ printf ("mips-mips-riscos%sbsd\n", argv[1]); exit (0);
+ #endif
+ #endif
+ exit (-1);
+ }
+EOF
+ ${CC-cc} dummy.c -o dummy \
+ && ./dummy `echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` \
+ && rm dummy.c dummy && exit 0
+ rm -f dummy.c dummy
+ echo mips-mips-riscos${UNAME_RELEASE}
+ exit 0 ;;
+ Night_Hawk:Power_UNIX:*:*)
+ echo powerpc-harris-powerunix
+ exit 0 ;;
+ m88k:CX/UX:7*:*)
+ echo m88k-harris-cxux7
+ exit 0 ;;
+ m88k:*:4*:R4*)
+ echo m88k-motorola-sysv4
+ exit 0 ;;
+ m88k:*:3*:R3*)
+ echo m88k-motorola-sysv3
+ exit 0 ;;
+ AViiON:dgux:*:*)
+ # DG/UX returns AViiON for all architectures
+ UNAME_PROCESSOR=`/usr/bin/uname -p`
+ if [ $UNAME_PROCESSOR = mc88100 -o $UNAME_PROCESSOR = mc88110 ] ; then
+ if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx \
+ -o ${TARGET_BINARY_INTERFACE}x = x ] ; then
+ echo m88k-dg-dgux${UNAME_RELEASE}
+ else
+ echo m88k-dg-dguxbcs${UNAME_RELEASE}
+ fi
+ else echo i586-dg-dgux${UNAME_RELEASE}
+ fi
+ exit 0 ;;
+ M88*:DolphinOS:*:*) # DolphinOS (SVR3)
+ echo m88k-dolphin-sysv3
+ exit 0 ;;
+ M88*:*:R3*:*)
+ # Delta 88k system running SVR3
+ echo m88k-motorola-sysv3
+ exit 0 ;;
+ XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3)
+ echo m88k-tektronix-sysv3
+ exit 0 ;;
+ Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD)
+ echo m68k-tektronix-bsd
+ exit 0 ;;
+ *:IRIX*:*:*)
+ echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'`
+ exit 0 ;;
+ ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX.
+ echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id
+ exit 0 ;; # Note that: echo "'`uname -s`'" gives 'AIX '
+ i?86:AIX:*:*)
+ echo i386-ibm-aix
+ exit 0 ;;
+ *:AIX:2:3)
+ if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then
+ sed 's/^ //' << EOF >dummy.c
+ #include <sys/systemcfg.h>
+
+ main()
+ {
+ if (!__power_pc())
+ exit(1);
+ puts("powerpc-ibm-aix3.2.5");
+ exit(0);
+ }
+EOF
+ ${CC-cc} dummy.c -o dummy && ./dummy && rm dummy.c dummy && exit 0
+ rm -f dummy.c dummy
+ echo rs6000-ibm-aix3.2.5
+ elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then
+ echo rs6000-ibm-aix3.2.4
+ else
+ echo rs6000-ibm-aix3.2
+ fi
+ exit 0 ;;
+ *:AIX:*:4)
+ if /usr/sbin/lsattr -EHl proc0 | grep POWER >/dev/null 2>&1; then
+ IBM_ARCH=rs6000
+ else
+ IBM_ARCH=powerpc
+ fi
+ if [ -x /usr/bin/oslevel ] ; then
+ IBM_REV=`/usr/bin/oslevel`
+ else
+ IBM_REV=4.${UNAME_RELEASE}
+ fi
+ echo ${IBM_ARCH}-ibm-aix${IBM_REV}
+ exit 0 ;;
+ *:AIX:*:*)
+ echo rs6000-ibm-aix
+ exit 0 ;;
+ ibmrt:4.4BSD:*|romp-ibm:BSD:*)
+ echo romp-ibm-bsd4.4
+ exit 0 ;;
+ ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC NetBSD and
+ echo romp-ibm-bsd${UNAME_RELEASE} # 4.3 with uname added to
+ exit 0 ;; # report: romp-ibm BSD 4.3
+ *:BOSX:*:*)
+ echo rs6000-bull-bosx
+ exit 0 ;;
+ DPX/2?00:B.O.S.:*:*)
+ echo m68k-bull-sysv3
+ exit 0 ;;
+ 9000/[34]??:4.3bsd:1.*:*)
+ echo m68k-hp-bsd
+ exit 0 ;;
+ hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)
+ echo m68k-hp-bsd4.4
+ exit 0 ;;
+ 9000/[3478]??:HP-UX:*:*)
+ case "${UNAME_MACHINE}" in
+ 9000/31? ) HP_ARCH=m68000 ;;
+ 9000/[34]?? ) HP_ARCH=m68k ;;
+ 9000/7?? | 9000/8?[1679] ) HP_ARCH=hppa1.1 ;;
+ 9000/8?? ) HP_ARCH=hppa1.0 ;;
+ esac
+ HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
+ echo ${HP_ARCH}-hp-hpux${HPUX_REV}
+ exit 0 ;;
+ 3050*:HI-UX:*:*)
+ sed 's/^ //' << EOF >dummy.c
+ #include <unistd.h>
+ int
+ main ()
+ {
+ long cpu = sysconf (_SC_CPU_VERSION);
+ /* The order matters, because CPU_IS_HP_MC68K erroneously returns
+ true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct
+ results, however. */
+ if (CPU_IS_PA_RISC (cpu))
+ {
+ switch (cpu)
+ {
+ case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break;
+ case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break;
+ case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break;
+ default: puts ("hppa-hitachi-hiuxwe2"); break;
+ }
+ }
+ else if (CPU_IS_HP_MC68K (cpu))
+ puts ("m68k-hitachi-hiuxwe2");
+ else puts ("unknown-hitachi-hiuxwe2");
+ exit (0);
+ }
+EOF
+ ${CC-cc} dummy.c -o dummy && ./dummy && rm dummy.c dummy && exit 0
+ rm -f dummy.c dummy
+ echo unknown-hitachi-hiuxwe2
+ exit 0 ;;
+ 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* )
+ echo hppa1.1-hp-bsd
+ exit 0 ;;
+ 9000/8??:4.3bsd:*:*)
+ echo hppa1.0-hp-bsd
+ exit 0 ;;
+ hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* )
+ echo hppa1.1-hp-osf
+ exit 0 ;;
+ hp8??:OSF1:*:*)
+ echo hppa1.0-hp-osf
+ exit 0 ;;
+ i?86:OSF1:*:*)
+ if [ -x /usr/sbin/sysversion ] ; then
+ echo ${UNAME_MACHINE}-unknown-osf1mk
+ else
+ echo ${UNAME_MACHINE}-unknown-osf1
+ fi
+ exit 0 ;;
+ parisc*:Lites*:*:*)
+ echo hppa1.1-hp-lites
+ exit 0 ;;
+ C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*)
+ echo c1-convex-bsd
+ exit 0 ;;
+ C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*)
+ if getsysinfo -f scalar_acc
+ then echo c32-convex-bsd
+ else echo c2-convex-bsd
+ fi
+ exit 0 ;;
+ C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*)
+ echo c34-convex-bsd
+ exit 0 ;;
+ C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*)
+ echo c38-convex-bsd
+ exit 0 ;;
+ C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*)
+ echo c4-convex-bsd
+ exit 0 ;;
+ CRAY*X-MP:*:*:*)
+ echo xmp-cray-unicos
+ exit 0 ;;
+ CRAY*Y-MP:*:*:*)
+ echo ymp-cray-unicos${UNAME_RELEASE}
+ exit 0 ;;
+ CRAY*[A-Z]90:*:*:*)
+ echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \
+ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \
+ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/
+ exit 0 ;;
+ CRAY*TS:*:*:*)
+ echo t90-cray-unicos${UNAME_RELEASE}
+ exit 0 ;;
+ CRAY-2:*:*:*)
+ echo cray2-cray-unicos
+ exit 0 ;;
+ F300:UNIX_System_V:*:*)
+ FUJITSU_SYS=`uname -p | tr [A-Z] [a-z] | sed -e 's/\///'`
+ FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'`
+ echo "f300-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
+ exit 0 ;;
+ F301:UNIX_System_V:*:*)
+ echo f301-fujitsu-uxpv`echo $UNAME_RELEASE | sed 's/ .*//'`
+ exit 0 ;;
+ hp3[0-9][05]:NetBSD:*:*)
+ echo m68k-hp-netbsd${UNAME_RELEASE}
+ exit 0 ;;
+ hp3[0-9][05]:OpenBSD:*:*)
+ echo m68k-hp-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ i?86:BSD/386:*:* | *:BSD/OS:*:*)
+ echo ${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE}
+ exit 0 ;;
+ *:FreeBSD:*:*)
+ echo ${UNAME_MACHINE}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`
+ exit 0 ;;
+ *:NetBSD:*:*)
+ echo ${UNAME_MACHINE}-unknown-netbsd`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
+ exit 0 ;;
+ *:OpenBSD:*:*)
+ echo ${UNAME_MACHINE}-unknown-openbsd`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
+ exit 0 ;;
+ i*:CYGWIN*:*)
+ echo i386-pc-cygwin32
+ exit 0 ;;
+ p*:CYGWIN*:*)
+ echo powerpcle-unknown-cygwin32
+ exit 0 ;;
+ prep*:SunOS:5.*:*)
+ echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ *:GNU:*:*)
+ echo `echo ${UNAME_MACHINE}|sed -e 's,/.*$,,'`-unknown-gnu`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'`
+ exit 0 ;;
+ *:Linux:*:*)
+ # The BFD linker knows what the default object file format is, so
+ # first see if it will tell us.
+ ld_help_string=`ld --help 2>&1`
+ if echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: elf_i.86"; then
+ echo "${UNAME_MACHINE}-pc-linux-gnu" ; exit 0
+ elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: i.86linux"; then
+ echo "${UNAME_MACHINE}-pc-linux-gnuaout" ; exit 0
+ elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: i.86coff"; then
+ echo "${UNAME_MACHINE}-pc-linux-gnucoff" ; exit 0
+ elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: m68kelf"; then
+ echo "${UNAME_MACHINE}-unknown-linux-gnu" ; exit 0
+ elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: m68klinux"; then
+ echo "${UNAME_MACHINE}-unknown-linux-gnuaout" ; exit 0
+ elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: elf32ppc"; then
+ echo "powerpc-unknown-linux-gnu" ; exit 0
+ elif test "${UNAME_MACHINE}" = "alpha" ; then
+ echo alpha-unknown-linux-gnu ; exit 0
+ elif test "${UNAME_MACHINE}" = "sparc" ; then
+ echo sparc-unknown-linux-gnu ; exit 0
+ else
+ # Either a pre-BFD a.out linker (linux-gnuoldld) or one that does not give us
+ # useful --help. Gcc wants to distinguish between linux-gnuoldld and linux-gnuaout.
+ test ! -d /usr/lib/ldscripts/. \
+ && echo "${UNAME_MACHINE}-pc-linux-gnuoldld" && exit 0
+ # Determine whether the default compiler is a.out or elf
+ cat >dummy.c <<EOF
+main(argc, argv)
+int argc;
+char *argv[];
+{
+#ifdef __ELF__
+ printf ("%s-pc-linux-gnu\n", argv[1]);
+#else
+ printf ("%s-pc-linux-gnuaout\n", argv[1]);
+#endif
+ return 0;
+}
+EOF
+ ${CC-cc} dummy.c -o dummy 2>/dev/null && ./dummy "${UNAME_MACHINE}" && rm dummy.c dummy && exit 0
+ rm -f dummy.c dummy
+ fi ;;
+# ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. Earlier versions
+# are messed up and put the nodename in both sysname and nodename.
+ i?86:DYNIX/ptx:4*:*)
+ echo i386-sequent-sysv4
+ exit 0 ;;
+ i?86:*:4.*:* | i?86:SYSTEM_V:4.*:*)
+ if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then
+ echo ${UNAME_MACHINE}-univel-sysv${UNAME_RELEASE}
+ else
+ echo ${UNAME_MACHINE}-pc-sysv${UNAME_RELEASE}
+ fi
+ exit 0 ;;
+ i?86:*:3.2:*)
+ if test -f /usr/options/cb.name; then
+ UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name`
+ echo ${UNAME_MACHINE}-pc-isc$UNAME_REL
+ elif /bin/uname -X 2>/dev/null >/dev/null ; then
+ UNAME_REL=`(/bin/uname -X|egrep Release|sed -e 's/.*= //')`
+ (/bin/uname -X|egrep i80486 >/dev/null) && UNAME_MACHINE=i486
+ (/bin/uname -X|egrep '^Machine.*Pentium' >/dev/null) \
+ && UNAME_MACHINE=i586
+ echo ${UNAME_MACHINE}-pc-sco$UNAME_REL
+ else
+ echo ${UNAME_MACHINE}-pc-sysv32
+ fi
+ exit 0 ;;
+ Intel:Mach:3*:*)
+ echo i386-pc-mach3
+ exit 0 ;;
+ paragon:*:*:*)
+ echo i860-intel-osf1
+ exit 0 ;;
+ i860:*:4.*:*) # i860-SVR4
+ if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then
+ echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4
+ else # Add other i860-SVR4 vendors below as they are discovered.
+ echo i860-unknown-sysv${UNAME_RELEASE} # Unknown i860-SVR4
+ fi
+ exit 0 ;;
+ mini*:CTIX:SYS*5:*)
+ # "miniframe"
+ echo m68010-convergent-sysv
+ exit 0 ;;
+ M68*:*:R3V[567]*:*)
+ test -r /sysV68 && echo 'm68k-motorola-sysv' && exit 0 ;;
+ 3[34]??:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 4850:*:4.0:3.0)
+ OS_REL=''
+ test -r /etc/.relid \
+ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
+ /bin/uname -p 2>/dev/null | grep 86 >/dev/null \
+ && echo i486-ncr-sysv4.3${OS_REL} && exit 0
+ /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
+ && echo i586-ncr-sysv4.3${OS_REL} && exit 0 ;;
+ 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)
+ /bin/uname -p 2>/dev/null | grep 86 >/dev/null \
+ && echo i486-ncr-sysv4 && exit 0 ;;
+ m68*:LynxOS:2.*:*)
+ echo m68k-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ mc68030:UNIX_System_V:4.*:*)
+ echo m68k-atari-sysv4
+ exit 0 ;;
+ i?86:LynxOS:2.*:*)
+ echo i386-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ TSUNAMI:LynxOS:2.*:*)
+ echo sparc-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ rs6000:LynxOS:2.*:* | PowerPC:LynxOS:2.*:*)
+ echo rs6000-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ SM[BE]S:UNIX_SV:*:*)
+ echo mips-dde-sysv${UNAME_RELEASE}
+ exit 0 ;;
+ RM*:SINIX-*:*:*)
+ echo mips-sni-sysv4
+ exit 0 ;;
+ *:SINIX-*:*:*)
+ if uname -p 2>/dev/null >/dev/null ; then
+ UNAME_MACHINE=`(uname -p) 2>/dev/null`
+ echo ${UNAME_MACHINE}-sni-sysv4
+ else
+ echo ns32k-sni-sysv
+ fi
+ exit 0 ;;
+ *:UNIX_System_V:4*:FTX*)
+ # From Gerald Hewes <hewes@openmarket.com>.
+ # How about differentiating between stratus architectures? -djm
+ echo hppa1.1-stratus-sysv4
+ exit 0 ;;
+ *:*:*:FTX*)
+ # From seanf@swdc.stratus.com.
+ echo i860-stratus-sysv4
+ exit 0 ;;
+ mc68*:A/UX:*:*)
+ echo m68k-apple-aux${UNAME_RELEASE}
+ exit 0 ;;
+ R3000:*System_V*:*:* | R4000:UNIX_SYSV:*:*)
+ if [ -d /usr/nec ]; then
+ echo mips-nec-sysv${UNAME_RELEASE}
+ else
+ echo mips-unknown-sysv${UNAME_RELEASE}
+ fi
+ exit 0 ;;
+ PENTIUM:CPunix:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort
+ # says <Richard.M.Bartel@ccMail.Census.GOV>
+ echo i586-unisys-sysv4
+ exit 0 ;;
+esac
+
+#echo '(No uname command or uname output not recognized.)' 1>&2
+#echo "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" 1>&2
+
+cat >dummy.c <<EOF
+#ifdef _SEQUENT_
+# include <sys/types.h>
+# include <sys/utsname.h>
+#endif
+main ()
+{
+#if defined (sony)
+#if defined (MIPSEB)
+ /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed,
+ I don't know.... */
+ printf ("mips-sony-bsd\n"); exit (0);
+#else
+#include <sys/param.h>
+ printf ("m68k-sony-newsos%s\n",
+#ifdef NEWSOS4
+ "4"
+#else
+ ""
+#endif
+ ); exit (0);
+#endif
+#endif
+
+#if defined (__arm) && defined (__acorn) && defined (__unix)
+ printf ("arm-acorn-riscix"); exit (0);
+#endif
+
+#if defined (hp300) && !defined (hpux)
+ printf ("m68k-hp-bsd\n"); exit (0);
+#endif
+
+#if defined (NeXT)
+#if !defined (__ARCHITECTURE__)
+#define __ARCHITECTURE__ "m68k"
+#endif
+ int version;
+ version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`;
+ printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version);
+ exit (0);
+#endif
+
+#if defined (MULTIMAX) || defined (n16)
+#if defined (UMAXV)
+ printf ("ns32k-encore-sysv\n"); exit (0);
+#else
+#if defined (CMU)
+ printf ("ns32k-encore-mach\n"); exit (0);
+#else
+ printf ("ns32k-encore-bsd\n"); exit (0);
+#endif
+#endif
+#endif
+
+#if defined (__386BSD__)
+ printf ("i386-pc-bsd\n"); exit (0);
+#endif
+
+#if defined (sequent)
+#if defined (i386)
+ printf ("i386-sequent-dynix\n"); exit (0);
+#endif
+#if defined (ns32000)
+ printf ("ns32k-sequent-dynix\n"); exit (0);
+#endif
+#endif
+
+#if defined (_SEQUENT_)
+ struct utsname un;
+
+ uname(&un);
+
+ if (strncmp(un.version, "V2", 2) == 0) {
+ printf ("i386-sequent-ptx2\n"); exit (0);
+ }
+ if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? */
+ printf ("i386-sequent-ptx1\n"); exit (0);
+ }
+ printf ("i386-sequent-ptx\n"); exit (0);
+
+#endif
+
+#if defined (vax)
+#if !defined (ultrix)
+ printf ("vax-dec-bsd\n"); exit (0);
+#else
+ printf ("vax-dec-ultrix\n"); exit (0);
+#endif
+#endif
+
+#if defined (alliant) && defined (i860)
+ printf ("i860-alliant-bsd\n"); exit (0);
+#endif
+
+ exit (1);
+}
+EOF
+
+${CC-cc} dummy.c -o dummy 2>/dev/null && ./dummy && rm dummy.c dummy && exit 0
+rm -f dummy.c dummy
+
+# Apollos put the system type in the environment.
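+# (On a hypothetical Apollo, ISP might be `m68k' and SYSTYPE `bsd4.3',
+# which would yield m68k-apollo-bsd4.3.)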
+
+test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit 0; }
+
+# Convex versions that predate uname can use getsysinfo(1)
+
+if [ -x /usr/convex/getsysinfo ]
+then
+ case `getsysinfo -f cpu_type` in
+ c1*)
+ echo c1-convex-bsd
+ exit 0 ;;
+ c2*)
+ if getsysinfo -f scalar_acc
+ then echo c32-convex-bsd
+ else echo c2-convex-bsd
+ fi
+ exit 0 ;;
+ c34*)
+ echo c34-convex-bsd
+ exit 0 ;;
+ c38*)
+ echo c38-convex-bsd
+ exit 0 ;;
+ c4*)
+ echo c4-convex-bsd
+ exit 0 ;;
+ esac
+fi
+
+#echo '(Unable to guess system type)' 1>&2
+
+exit 1
--- /dev/null
+#! /bin/sh
+# Configuration validation subroutine script, version 1.1.
+# Copyright (C) 1991, 92, 93, 94, 95, 1996 Free Software Foundation, Inc.
+# This file is (in principle) common to ALL GNU software.
+# The presence of a machine in this file suggests that SOME GNU software
+# can handle that machine. It does not imply ALL GNU software can.
+#
+# This file is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330,
+# Boston, MA 02111-1307, USA.
+
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# Configuration subroutine to validate and canonicalize a configuration type.
+# Supply the specified configuration type as an argument.
+# If it is invalid, we print an error message on stderr and exit with code 1.
+# Otherwise, we print the canonical config type on stdout and succeed.
+
+# This file is supposed to be the same for all GNU packages
+# and recognize all the CPU types, system types and aliases
+# that are meaningful with *any* GNU software.
+# Each package is responsible for reporting which valid configurations
+# it does not support. The user should be able to distinguish
+# a failure to support a valid configuration from a meaningless
+# configuration.
+
+# The goal of this file is to map all the various variations of a given
+# machine specification into a single specification in the form:
+# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM
+# or in some cases, the newer four-part form:
+# CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM
+# It is wrong to echo any other type of specification.
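+#
+# For example, a hypothetical invocation `config.sub sun4' canonicalizes
+# to `sparc-sun-sunos4.1.1' (the default Sun OS is filled in below), and
+# `config.sub i486-linux-gnu' comes out as `i486-pc-linux-gnu'.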
+
+if [ x$1 = x ]
+then
+ echo Configuration name missing. 1>&2
+ echo "Usage: $0 CPU-MFR-OPSYS" 1>&2
+ echo "or $0 ALIAS" 1>&2
+ echo where ALIAS is a recognized configuration type. 1>&2
+ exit 1
+fi
+
+# First pass through any local machine types.
+case $1 in
+ *local*)
+ echo $1
+ exit 0
+ ;;
+ *)
+ ;;
+esac
+
+# Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any).
+# Here we must recognize all the valid KERNEL-OS combinations.
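+# For instance, the sed expression below captures the last two
+# dash-separated fields: for a hypothetical `i586-pc-linux-gnu' input,
+# maybe_os becomes `linux-gnu' and basic_machine `i586-pc'; an input like
+# `sparc-sun-sunos4' yields maybe_os=`sun-sunos4', which is not a known
+# KERNEL-OS pair and so falls through to the default branch.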
+maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'`
+case $maybe_os in
+ linux-gnu*)
+ os=-$maybe_os
+ basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`
+ ;;
+ *)
+ basic_machine=`echo $1 | sed 's/-[^-]*$//'`
+ if [ $basic_machine != $1 ]
+ then os=`echo $1 | sed 's/.*-/-/'`
+ else os=; fi
+ ;;
+esac
+
+### Let's recognize common machines as not being operating systems so
+### that things like config.sub decstation-3100 work. We also
+### recognize some manufacturers as not being operating systems, so we
+### can provide default operating systems below.
+case $os in
+ -sun*os*)
+ # Prevent following clause from handling this invalid input.
+ ;;
+ -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \
+ -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \
+ -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \
+ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\
+ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \
+ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \
+ -apple)
+ os=
+ basic_machine=$1
+ ;;
+ -hiux*)
+ os=-hiuxwe2
+ ;;
+ -sco5)
+		os=-sco3.2v5
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco4)
+ os=-sco3.2v4
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco3.2.[4-9]*)
+ os=`echo $os | sed -e 's/sco3.2./sco3.2v/'`
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco3.2v[4-9]*)
+ # Don't forget version if it is 3.2v4 or newer.
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco*)
+ os=-sco3.2v2
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -isc)
+ os=-isc2.2
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -clix*)
+ basic_machine=clipper-intergraph
+ ;;
+ -isc*)
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -lynx*)
+ os=-lynxos
+ ;;
+ -ptx*)
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'`
+ ;;
+ -windowsnt*)
+ os=`echo $os | sed -e 's/windowsnt/winnt/'`
+ ;;
+ -psos*)
+ os=-psos
+ ;;
+esac
+
+# Decode aliases for certain CPU-COMPANY combinations.
+case $basic_machine in
+ # Recognize the basic CPU types without company name.
+ # Some are omitted here because they have special meanings below.
+ tahoe | i860 | m68k | m68000 | m88k | ns32k | arm \
+ | arme[lb] | pyramid \
+ | tron | a29k | 580 | i960 | h8300 | hppa | hppa1.0 | hppa1.1 \
+ | alpha | we32k | ns16k | clipper | i370 | sh \
+ | powerpc | powerpcle | 1750a | dsp16xx | mips64 | mipsel \
+ | pdp11 | mips64el | mips64orion | mips64orionel \
+ | sparc | sparclet | sparclite | sparc64)
+ basic_machine=$basic_machine-unknown
+ ;;
+ # We use `pc' rather than `unknown'
+ # because (1) that's what they normally are, and
+ # (2) the word "unknown" tends to confuse beginning users.
+ i[3456]86)
+ basic_machine=$basic_machine-pc
+ ;;
+ # Object if more than one company name word.
+ *-*-*)
+ echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2
+ exit 1
+ ;;
+ # Recognize the basic CPU types with company name.
+ vax-* | tahoe-* | i[3456]86-* | i860-* | m68k-* | m68000-* | m88k-* \
+ | sparc-* | ns32k-* | fx80-* | arm-* | c[123]* \
+ | mips-* | pyramid-* | tron-* | a29k-* | romp-* | rs6000-* | power-* \
+ | none-* | 580-* | cray2-* | h8300-* | i960-* | xmp-* | ymp-* \
+ | hppa-* | hppa1.0-* | hppa1.1-* | alpha-* | we32k-* | cydra-* | ns16k-* \
+ | pn-* | np1-* | xps100-* | clipper-* | orion-* | sparclite-* \
+ | pdp11-* | sh-* | powerpc-* | powerpcle-* | sparc64-* | mips64-* | mipsel-* \
+ | mips64el-* | mips64orion-* | mips64orionel-* | f301-*)
+ ;;
+ # Recognize the various machine names and aliases which stand
+ # for a CPU type and a company and sometimes even an OS.
+ 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc)
+ basic_machine=m68000-att
+ ;;
+ 3b*)
+ basic_machine=we32k-att
+ ;;
+ alliant | fx80)
+ basic_machine=fx80-alliant
+ ;;
+ altos | altos3068)
+ basic_machine=m68k-altos
+ ;;
+ am29k)
+ basic_machine=a29k-none
+ os=-bsd
+ ;;
+ amdahl)
+ basic_machine=580-amdahl
+ os=-sysv
+ ;;
+ amiga | amiga-*)
+ basic_machine=m68k-cbm
+ ;;
+ amigados)
+ basic_machine=m68k-cbm
+ os=-amigados
+ ;;
+ amigaunix | amix)
+ basic_machine=m68k-cbm
+ os=-sysv4
+ ;;
+ apollo68)
+ basic_machine=m68k-apollo
+ os=-sysv
+ ;;
+ aux)
+ basic_machine=m68k-apple
+ os=-aux
+ ;;
+ balance)
+ basic_machine=ns32k-sequent
+ os=-dynix
+ ;;
+ convex-c1)
+ basic_machine=c1-convex
+ os=-bsd
+ ;;
+ convex-c2)
+ basic_machine=c2-convex
+ os=-bsd
+ ;;
+ convex-c32)
+ basic_machine=c32-convex
+ os=-bsd
+ ;;
+ convex-c34)
+ basic_machine=c34-convex
+ os=-bsd
+ ;;
+ convex-c38)
+ basic_machine=c38-convex
+ os=-bsd
+ ;;
+ cray | ymp)
+ basic_machine=ymp-cray
+ os=-unicos
+ ;;
+ cray2)
+ basic_machine=cray2-cray
+ os=-unicos
+ ;;
+ [ctj]90-cray)
+ basic_machine=c90-cray
+ os=-unicos
+ ;;
+ crds | unos)
+ basic_machine=m68k-crds
+ ;;
+ da30 | da30-*)
+ basic_machine=m68k-da30
+ ;;
+ decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn)
+ basic_machine=mips-dec
+ ;;
+ delta | 3300 | motorola-3300 | motorola-delta \
+ | 3300-motorola | delta-motorola)
+ basic_machine=m68k-motorola
+ ;;
+ delta88)
+ basic_machine=m88k-motorola
+ os=-sysv3
+ ;;
+ dpx20 | dpx20-*)
+ basic_machine=rs6000-bull
+ os=-bosx
+ ;;
+ dpx2* | dpx2*-bull)
+ basic_machine=m68k-bull
+ os=-sysv3
+ ;;
+ ebmon29k)
+ basic_machine=a29k-amd
+ os=-ebmon
+ ;;
+ elxsi)
+ basic_machine=elxsi-elxsi
+ os=-bsd
+ ;;
+ encore | umax | mmax)
+ basic_machine=ns32k-encore
+ ;;
+ fx2800)
+ basic_machine=i860-alliant
+ ;;
+ genix)
+ basic_machine=ns32k-ns
+ ;;
+ gmicro)
+ basic_machine=tron-gmicro
+ os=-sysv
+ ;;
+ h3050r* | hiux*)
+ basic_machine=hppa1.1-hitachi
+ os=-hiuxwe2
+ ;;
+ h8300hms)
+ basic_machine=h8300-hitachi
+ os=-hms
+ ;;
+ harris)
+ basic_machine=m88k-harris
+ os=-sysv3
+ ;;
+ hp300-*)
+ basic_machine=m68k-hp
+ ;;
+ hp300bsd)
+ basic_machine=m68k-hp
+ os=-bsd
+ ;;
+ hp300hpux)
+ basic_machine=m68k-hp
+ os=-hpux
+ ;;
+ hp9k2[0-9][0-9] | hp9k31[0-9])
+ basic_machine=m68000-hp
+ ;;
+ hp9k3[2-9][0-9])
+ basic_machine=m68k-hp
+ ;;
+ hp9k7[0-9][0-9] | hp7[0-9][0-9] | hp9k8[0-9]7 | hp8[0-9]7)
+ basic_machine=hppa1.1-hp
+ ;;
+ hp9k8[0-9][0-9] | hp8[0-9][0-9])
+ basic_machine=hppa1.0-hp
+ ;;
+ hppa-next)
+ os=-nextstep3
+ ;;
+ i370-ibm* | ibm*)
+ basic_machine=i370-ibm
+ os=-mvs
+ ;;
+# I'm not sure what "Sysv32" means. Should this be sysv3.2?
+ i[3456]86v32)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-sysv32
+ ;;
+ i[3456]86v4*)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-sysv4
+ ;;
+ i[3456]86v)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-sysv
+ ;;
+ i[3456]86sol2)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-solaris2
+ ;;
+ iris | iris4d)
+ basic_machine=mips-sgi
+ case $os in
+ -irix*)
+ ;;
+ *)
+ os=-irix4
+ ;;
+ esac
+ ;;
+ isi68 | isi)
+ basic_machine=m68k-isi
+ os=-sysv
+ ;;
+ m88k-omron*)
+ basic_machine=m88k-omron
+ ;;
+ magnum | m3230)
+ basic_machine=mips-mips
+ os=-sysv
+ ;;
+ merlin)
+ basic_machine=ns32k-utek
+ os=-sysv
+ ;;
+ miniframe)
+ basic_machine=m68000-convergent
+ ;;
+ mips3*-*)
+ basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`
+ ;;
+ mips3*)
+ basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown
+ ;;
+ ncr3000)
+ basic_machine=i486-ncr
+ os=-sysv4
+ ;;
+ news | news700 | news800 | news900)
+ basic_machine=m68k-sony
+ os=-newsos
+ ;;
+ news1000)
+ basic_machine=m68030-sony
+ os=-newsos
+ ;;
+ news-3600 | risc-news)
+ basic_machine=mips-sony
+ os=-newsos
+ ;;
+ next | m*-next )
+ basic_machine=m68k-next
+ case $os in
+ -nextstep* )
+ ;;
+ -ns2*)
+ os=-nextstep2
+ ;;
+ *)
+ os=-nextstep3
+ ;;
+ esac
+ ;;
+ nh3000)
+ basic_machine=m68k-harris
+ os=-cxux
+ ;;
+ nh[45]000)
+ basic_machine=m88k-harris
+ os=-cxux
+ ;;
+ nindy960)
+ basic_machine=i960-intel
+ os=-nindy
+ ;;
+ np1)
+ basic_machine=np1-gould
+ ;;
+ pa-hitachi)
+ basic_machine=hppa1.1-hitachi
+ os=-hiuxwe2
+ ;;
+ paragon)
+ basic_machine=i860-intel
+ os=-osf
+ ;;
+ pbd)
+ basic_machine=sparc-tti
+ ;;
+ pbb)
+ basic_machine=m68k-tti
+ ;;
+ pc532 | pc532-*)
+ basic_machine=ns32k-pc532
+ ;;
+ pentium | p5)
+ basic_machine=i586-intel
+ ;;
+ pentiumpro | p6)
+ basic_machine=i686-intel
+ ;;
+ pentium-* | p5-*)
+ basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ pentiumpro-* | p6-*)
+ basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ k5)
+ # We don't have specific support for AMD's K5 yet, so just call it a Pentium
+ basic_machine=i586-amd
+ ;;
+ nexen)
+ # We don't have specific support for Nexgen yet, so just call it a Pentium
+ basic_machine=i586-nexgen
+ ;;
+ pn)
+ basic_machine=pn-gould
+ ;;
+ power) basic_machine=rs6000-ibm
+ ;;
+ ppc) basic_machine=powerpc-unknown
+ ;;
+ ppc-*) basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ ppcle | powerpclittle | ppc-le | powerpc-little)
+ basic_machine=powerpcle-unknown
+ ;;
+ ppcle-* | powerpclittle-*)
+ basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ ps2)
+ basic_machine=i386-ibm
+ ;;
+ rm[46]00)
+ basic_machine=mips-siemens
+ ;;
+ rtpc | rtpc-*)
+ basic_machine=romp-ibm
+ ;;
+ sequent)
+ basic_machine=i386-sequent
+ ;;
+ sh)
+ basic_machine=sh-hitachi
+ os=-hms
+ ;;
+ sps7)
+ basic_machine=m68k-bull
+ os=-sysv2
+ ;;
+ spur)
+ basic_machine=spur-unknown
+ ;;
+ sun2)
+ basic_machine=m68000-sun
+ ;;
+ sun2os3)
+ basic_machine=m68000-sun
+ os=-sunos3
+ ;;
+ sun2os4)
+ basic_machine=m68000-sun
+ os=-sunos4
+ ;;
+ sun3os3)
+ basic_machine=m68k-sun
+ os=-sunos3
+ ;;
+ sun3os4)
+ basic_machine=m68k-sun
+ os=-sunos4
+ ;;
+ sun4os3)
+ basic_machine=sparc-sun
+ os=-sunos3
+ ;;
+ sun4os4)
+ basic_machine=sparc-sun
+ os=-sunos4
+ ;;
+ sun4sol2)
+ basic_machine=sparc-sun
+ os=-solaris2
+ ;;
+ sun3 | sun3-*)
+ basic_machine=m68k-sun
+ ;;
+ sun4)
+ basic_machine=sparc-sun
+ ;;
+ sun386 | sun386i | roadrunner)
+ basic_machine=i386-sun
+ ;;
+ symmetry)
+ basic_machine=i386-sequent
+ os=-dynix
+ ;;
+ tower | tower-32)
+ basic_machine=m68k-ncr
+ ;;
+ udi29k)
+ basic_machine=a29k-amd
+ os=-udi
+ ;;
+ ultra3)
+ basic_machine=a29k-nyu
+ os=-sym1
+ ;;
+ vaxv)
+ basic_machine=vax-dec
+ os=-sysv
+ ;;
+ vms)
+ basic_machine=vax-dec
+ os=-vms
+ ;;
+ vpp*|vx|vx-*)
+ basic_machine=f301-fujitsu
+ ;;
+ vxworks960)
+ basic_machine=i960-wrs
+ os=-vxworks
+ ;;
+ vxworks68)
+ basic_machine=m68k-wrs
+ os=-vxworks
+ ;;
+ vxworks29k)
+ basic_machine=a29k-wrs
+ os=-vxworks
+ ;;
+ xmp)
+ basic_machine=xmp-cray
+ os=-unicos
+ ;;
+ xps | xps100)
+ basic_machine=xps100-honeywell
+ ;;
+ none)
+ basic_machine=none-none
+ os=-none
+ ;;
+
+# Here we handle the default manufacturer of certain CPU types. It is in
+# some cases the only manufacturer, in others, it is the most popular.
+ mips)
+ basic_machine=mips-mips
+ ;;
+ romp)
+ basic_machine=romp-ibm
+ ;;
+ rs6000)
+ basic_machine=rs6000-ibm
+ ;;
+ vax)
+ basic_machine=vax-dec
+ ;;
+ pdp11)
+ basic_machine=pdp11-dec
+ ;;
+ we32k)
+ basic_machine=we32k-att
+ ;;
+ sparc)
+ basic_machine=sparc-sun
+ ;;
+ cydra)
+ basic_machine=cydra-cydrome
+ ;;
+ orion)
+ basic_machine=orion-highlevel
+ ;;
+ orion105)
+ basic_machine=clipper-highlevel
+ ;;
+ *)
+ echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2
+ exit 1
+ ;;
+esac
+
+# Here we canonicalize certain aliases for manufacturers.
+case $basic_machine in
+ *-digital*)
+ basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'`
+ ;;
+ *-commodore*)
+ basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'`
+ ;;
+ *)
+ ;;
+esac
+
+# Decode manufacturer-specific aliases for certain operating systems.
+
+if [ x"$os" != x"" ]
+then
+case $os in
+ # First match some system type aliases
+ # that might get confused with valid system types.
+ # -solaris* is a basic system type, with this one exception.
+ -solaris1 | -solaris1.*)
+ os=`echo $os | sed -e 's|solaris1|sunos4|'`
+ ;;
+ -solaris)
+ os=-solaris2
+ ;;
+ -unixware* | svr4*)
+ os=-sysv4
+ ;;
+ -gnu/linux*)
+ os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'`
+ ;;
+ # First accept the basic system types.
+	# The portable systems come first.
+ # Each alternative MUST END IN A *, to match a version number.
+ # -sysv* is not here because it comes later, after sysvr4.
+ -gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \
+ | -*vms* | -sco* | -esix* | -isc* | -aix* | -sunos | -sunos[34]*\
+ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -solaris* | -sym* \
+ | -amigados* | -msdos* | -newsos* | -unicos* | -aof* | -aos* \
+ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \
+ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \
+ | -hiux* | -386bsd* | -netbsd* | -openbsd* | -freebsd* | -riscix* \
+ | -lynxos* | -bosx* | -nextstep* | -cxux* | -aout* | -elf* \
+ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \
+ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \
+ | -cygwin32* | -pe* | -psos* | -moss* | -proelf* | -rtems* \
+ | -linux-gnu* | -uxpv*)
+ # Remember, each alternative MUST END IN *, to match a version number.
+ ;;
+ -linux*)
+ os=`echo $os | sed -e 's|linux|linux-gnu|'`
+ ;;
+ -sunos5*)
+ os=`echo $os | sed -e 's|sunos5|solaris2|'`
+ ;;
+ -sunos6*)
+ os=`echo $os | sed -e 's|sunos6|solaris3|'`
+ ;;
+ -osfrose*)
+ os=-osfrose
+ ;;
+ -osf*)
+ os=-osf
+ ;;
+ -utek*)
+ os=-bsd
+ ;;
+ -dynix*)
+ os=-bsd
+ ;;
+ -acis*)
+ os=-aos
+ ;;
+ -ctix* | -uts*)
+ os=-sysv
+ ;;
+ -ns2 )
+ os=-nextstep2
+ ;;
+ # Preserve the version number of sinix5.
+ -sinix5.*)
+ os=`echo $os | sed -e 's|sinix|sysv|'`
+ ;;
+ -sinix*)
+ os=-sysv4
+ ;;
+ -triton*)
+ os=-sysv3
+ ;;
+ -oss*)
+ os=-sysv3
+ ;;
+ -svr4)
+ os=-sysv4
+ ;;
+ -svr3)
+ os=-sysv3
+ ;;
+ -sysvr4)
+ os=-sysv4
+ ;;
+ # This must come after -sysvr4.
+ -sysv*)
+ ;;
+ -xenix)
+ os=-xenix
+ ;;
+ -none)
+ ;;
+ *)
+ # Get rid of the `-' at the beginning of $os.
+ os=`echo $os | sed 's/[^-]*-//'`
+ echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2
+ exit 1
+ ;;
+esac
+else
+
+# Here we handle the default operating systems that come with various machines.
+# The value should be what the vendor currently ships out the door with their
+# machine or, put another way, the most popular OS provided with the machine.
+
+# Note that if you're going to try to match "-MANUFACTURER" here (say,
+# "-sun"), then you have to tell the case statement up towards the top
+# that MANUFACTURER isn't an operating system. Otherwise, code above
+# will signal an error saying that MANUFACTURER isn't an operating
+# system, and we'll never get to this point.
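+#
+# For example, a bare hypothetical `vax' was expanded to `vax-dec' above,
+# and the vax-* case below then supplies -ultrix4.2, giving
+# vax-dec-ultrix4.2.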
+
+case $basic_machine in
+ *-acorn)
+ os=-riscix1.2
+ ;;
+ arm*-semi)
+ os=-aout
+ ;;
+ pdp11-*)
+ os=-none
+ ;;
+ *-dec | vax-*)
+ os=-ultrix4.2
+ ;;
+ m68*-apollo)
+ os=-domain
+ ;;
+ i386-sun)
+ os=-sunos4.0.2
+ ;;
+ m68000-sun)
+ os=-sunos3
+ # This also exists in the configure program, but was not the
+ # default.
+ # os=-sunos4
+ ;;
+ *-tti) # must be before sparc entry or we get the wrong os.
+ os=-sysv3
+ ;;
+ sparc-* | *-sun)
+ os=-sunos4.1.1
+ ;;
+ *-ibm)
+ os=-aix
+ ;;
+ *-hp)
+ os=-hpux
+ ;;
+ *-hitachi)
+ os=-hiux
+ ;;
+ i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent)
+ os=-sysv
+ ;;
+ *-cbm)
+ os=-amigados
+ ;;
+ *-dg)
+ os=-dgux
+ ;;
+ *-dolphin)
+ os=-sysv3
+ ;;
+ m68k-ccur)
+ os=-rtu
+ ;;
+ m88k-omron*)
+ os=-luna
+ ;;
+ *-next )
+ os=-nextstep
+ ;;
+ *-sequent)
+ os=-ptx
+ ;;
+ *-crds)
+ os=-unos
+ ;;
+ *-ns)
+ os=-genix
+ ;;
+ i370-*)
+ os=-mvs
+ ;;
+ *-next)
+ os=-nextstep3
+ ;;
+ *-gould)
+ os=-sysv
+ ;;
+ *-highlevel)
+ os=-bsd
+ ;;
+ *-encore)
+ os=-bsd
+ ;;
+ *-sgi)
+ os=-irix
+ ;;
+ *-siemens)
+ os=-sysv4
+ ;;
+ *-masscomp)
+ os=-rtu
+ ;;
+ f301-fujitsu)
+ os=-uxpv
+ ;;
+ *)
+ os=-none
+ ;;
+esac
+fi
+
+# Here we handle the case where we know the os, and the CPU type, but not the
+# manufacturer. We pick the logical manufacturer.
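+# For example, a hypothetical `sparclite-sunos4' input reaches this point
+# as `sparclite-unknown' with os=-sunos4; the -sunos* case below picks
+# `sun', giving sparclite-sun-sunos4.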
+vendor=unknown
+case $basic_machine in
+ *-unknown)
+ case $os in
+ -riscix*)
+ vendor=acorn
+ ;;
+ -sunos*)
+ vendor=sun
+ ;;
+ -aix*)
+ vendor=ibm
+ ;;
+ -hpux*)
+ vendor=hp
+ ;;
+ -hiux*)
+ vendor=hitachi
+ ;;
+ -unos*)
+ vendor=crds
+ ;;
+ -dgux*)
+ vendor=dg
+ ;;
+ -luna*)
+ vendor=omron
+ ;;
+ -genix*)
+ vendor=ns
+ ;;
+ -mvs*)
+ vendor=ibm
+ ;;
+ -ptx*)
+ vendor=sequent
+ ;;
+ -vxsim* | -vxworks*)
+ vendor=wrs
+ ;;
+ -aux*)
+ vendor=apple
+ ;;
+ esac
+ basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"`
+ ;;
+esac
+
+echo $basic_machine$os
--- /dev/null
+#! /bin/sh
+
+# Guess values for system-dependent variables and create Makefiles.
+# Generated automatically using autoconf version 2.12
+# Copyright (C) 1992, 93, 94, 95, 96 Free Software Foundation, Inc.
+#
+# This configure script is free software; the Free Software Foundation
+# gives unlimited permission to copy, distribute and modify it.
+
+# Defaults:
+ac_help=
+ac_default_prefix=/usr/local
+# Any additions from configure.in:
+ac_help="$ac_help
+ --with-socks use the socks library"
+ac_help="$ac_help
+ --disable-opie disable support for opie or s/key FTP login"
+ac_help="$ac_help
+ --disable-digest disable support for HTTP digest authorization"
+ac_help="$ac_help
+ --disable-debug disable support for debugging output"
+ac_help="$ac_help
+ --disable-nls do not use Native Language Support"
+
+# Initialize some variables set by options.
+# The variables have the same names as the options, with
+# dashes changed to underlines.
+build=NONE
+cache_file=./config.cache
+exec_prefix=NONE
+host=NONE
+no_create=
+nonopt=NONE
+no_recursion=
+prefix=NONE
+program_prefix=NONE
+program_suffix=NONE
+program_transform_name=s,x,x,
+silent=
+site=
+srcdir=
+target=NONE
+verbose=
+x_includes=NONE
+x_libraries=NONE
+bindir='${exec_prefix}/bin'
+sbindir='${exec_prefix}/sbin'
+libexecdir='${exec_prefix}/libexec'
+datadir='${prefix}/share'
+sysconfdir='${prefix}/etc'
+sharedstatedir='${prefix}/com'
+localstatedir='${prefix}/var'
+libdir='${exec_prefix}/lib'
+includedir='${prefix}/include'
+oldincludedir='/usr/include'
+infodir='${prefix}/info'
+mandir='${prefix}/man'
+
+# Initialize some other variables.
+subdirs=
+MFLAGS= MAKEFLAGS=
+# Maximum number of lines to put in a shell here document.
+ac_max_here_lines=12
+
+ac_prev=
+for ac_option
+do
+
+ # If the previous option needs an argument, assign it.
+ if test -n "$ac_prev"; then
+ eval "$ac_prev=\$ac_option"
+ ac_prev=
+ continue
+ fi
+
+ case "$ac_option" in
+ -*=*) ac_optarg=`echo "$ac_option" | sed 's/[-_a-zA-Z0-9]*=//'` ;;
+ *) ac_optarg= ;;
+ esac
+
+ # Accept the important Cygnus configure options, so we can diagnose typos.
+
+ case "$ac_option" in
+
+ -bindir | --bindir | --bindi | --bind | --bin | --bi)
+ ac_prev=bindir ;;
+ -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*)
+ bindir="$ac_optarg" ;;
+
+ -build | --build | --buil | --bui | --bu)
+ ac_prev=build ;;
+ -build=* | --build=* | --buil=* | --bui=* | --bu=*)
+ build="$ac_optarg" ;;
+
+ -cache-file | --cache-file | --cache-fil | --cache-fi \
+ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c)
+ ac_prev=cache_file ;;
+ -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \
+ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*)
+ cache_file="$ac_optarg" ;;
+
+ -datadir | --datadir | --datadi | --datad | --data | --dat | --da)
+ ac_prev=datadir ;;
+ -datadir=* | --datadir=* | --datadi=* | --datad=* | --data=* | --dat=* \
+ | --da=*)
+ datadir="$ac_optarg" ;;
+
+ -disable-* | --disable-*)
+ ac_feature=`echo $ac_option|sed -e 's/-*disable-//'`
+ # Reject names that are not valid shell variable names.
+ if test -n "`echo $ac_feature| sed 's/[-a-zA-Z0-9_]//g'`"; then
+ { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; }
+ fi
+ ac_feature=`echo $ac_feature| sed 's/-/_/g'`
+ eval "enable_${ac_feature}=no" ;;
+
+ -enable-* | --enable-*)
+ ac_feature=`echo $ac_option|sed -e 's/-*enable-//' -e 's/=.*//'`
+ # Reject names that are not valid shell variable names.
+ if test -n "`echo $ac_feature| sed 's/[-_a-zA-Z0-9]//g'`"; then
+ { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; }
+ fi
+ ac_feature=`echo $ac_feature| sed 's/-/_/g'`
+ case "$ac_option" in
+ *=*) ;;
+ *) ac_optarg=yes ;;
+ esac
+ eval "enable_${ac_feature}='$ac_optarg'" ;;
+
+ -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \
+ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \
+ | --exec | --exe | --ex)
+ ac_prev=exec_prefix ;;
+ -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \
+ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \
+ | --exec=* | --exe=* | --ex=*)
+ exec_prefix="$ac_optarg" ;;
+
+ -gas | --gas | --ga | --g)
+ # Obsolete; use --with-gas.
+ with_gas=yes ;;
+
+ -help | --help | --hel | --he)
+ # Omit some internal or obsolete options to make the list less imposing.
+ # This message is too long to be a string in the A/UX 3.1 sh.
+ cat << EOF
+Usage: configure [options] [host]
+Options: [defaults in brackets after descriptions]
+Configuration:
+  --cache-file=FILE       cache test results in FILE
+  --help                  print this message
+  --no-create             do not create output files
+  --quiet, --silent       do not print \`checking...' messages
+  --version               print the version of autoconf that created configure
+Directory and file names:
+  --prefix=PREFIX         install architecture-independent files in PREFIX
+                          [$ac_default_prefix]
+  --exec-prefix=EPREFIX   install architecture-dependent files in EPREFIX
+                          [same as prefix]
+  --bindir=DIR            user executables in DIR [EPREFIX/bin]
+  --sbindir=DIR           system admin executables in DIR [EPREFIX/sbin]
+  --libexecdir=DIR        program executables in DIR [EPREFIX/libexec]
+  --datadir=DIR           read-only architecture-independent data in DIR
+                          [PREFIX/share]
+  --sysconfdir=DIR        read-only single-machine data in DIR [PREFIX/etc]
+  --sharedstatedir=DIR    modifiable architecture-independent data in DIR
+                          [PREFIX/com]
+  --localstatedir=DIR     modifiable single-machine data in DIR [PREFIX/var]
+  --libdir=DIR            object code libraries in DIR [EPREFIX/lib]
+  --includedir=DIR        C header files in DIR [PREFIX/include]
+  --oldincludedir=DIR     C header files for non-gcc in DIR [/usr/include]
+  --infodir=DIR           info documentation in DIR [PREFIX/info]
+  --mandir=DIR            man documentation in DIR [PREFIX/man]
+  --srcdir=DIR            find the sources in DIR [configure dir or ..]
+  --program-prefix=PREFIX prepend PREFIX to installed program names
+  --program-suffix=SUFFIX append SUFFIX to installed program names
+  --program-transform-name=PROGRAM
+                          run sed PROGRAM on installed program names
+EOF
+ cat << EOF
+Host type:
+  --build=BUILD           configure for building on BUILD [BUILD=HOST]
+  --host=HOST             configure for HOST [guessed]
+  --target=TARGET         configure for TARGET [TARGET=HOST]
+Features and packages:
+  --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
+  --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]
+  --with-PACKAGE[=ARG]    use PACKAGE [ARG=yes]
+  --without-PACKAGE       do not use PACKAGE (same as --with-PACKAGE=no)
+  --x-includes=DIR        X include files are in DIR
+  --x-libraries=DIR       X library files are in DIR
+EOF
+ if test -n "$ac_help"; then
+ echo "--enable and --with options recognized:$ac_help"
+ fi
+ exit 0 ;;
+
+ -host | --host | --hos | --ho)
+ ac_prev=host ;;
+ -host=* | --host=* | --hos=* | --ho=*)
+ host="$ac_optarg" ;;
+
+ -includedir | --includedir | --includedi | --included | --include \
+ | --includ | --inclu | --incl | --inc)
+ ac_prev=includedir ;;
+ -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \
+ | --includ=* | --inclu=* | --incl=* | --inc=*)
+ includedir="$ac_optarg" ;;
+
+ -infodir | --infodir | --infodi | --infod | --info | --inf)
+ ac_prev=infodir ;;
+ -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*)
+ infodir="$ac_optarg" ;;
+
+ -libdir | --libdir | --libdi | --libd)
+ ac_prev=libdir ;;
+ -libdir=* | --libdir=* | --libdi=* | --libd=*)
+ libdir="$ac_optarg" ;;
+
+ -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \
+ | --libexe | --libex | --libe)
+ ac_prev=libexecdir ;;
+ -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \
+ | --libexe=* | --libex=* | --libe=*)
+ libexecdir="$ac_optarg" ;;
+
+ -localstatedir | --localstatedir | --localstatedi | --localstated \
+ | --localstate | --localstat | --localsta | --localst \
+ | --locals | --local | --loca | --loc | --lo)
+ ac_prev=localstatedir ;;
+ -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \
+ | --localstate=* | --localstat=* | --localsta=* | --localst=* \
+ | --locals=* | --local=* | --loca=* | --loc=* | --lo=*)
+ localstatedir="$ac_optarg" ;;
+
+ -mandir | --mandir | --mandi | --mand | --man | --ma | --m)
+ ac_prev=mandir ;;
+ -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*)
+ mandir="$ac_optarg" ;;
+
+ -nfp | --nfp | --nf)
+ # Obsolete; use --without-fp.
+ with_fp=no ;;
+
+ -no-create | --no-create | --no-creat | --no-crea | --no-cre \
+ | --no-cr | --no-c)
+ no_create=yes ;;
+
+ -no-recursion | --no-recursion | --no-recursio | --no-recursi \
+ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r)
+ no_recursion=yes ;;
+
+ -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \
+ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \
+ | --oldin | --oldi | --old | --ol | --o)
+ ac_prev=oldincludedir ;;
+ -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \
+ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \
+ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*)
+ oldincludedir="$ac_optarg" ;;
+
+ -prefix | --prefix | --prefi | --pref | --pre | --pr | --p)
+ ac_prev=prefix ;;
+ -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*)
+ prefix="$ac_optarg" ;;
+
+ -program-prefix | --program-prefix | --program-prefi | --program-pref \
+ | --program-pre | --program-pr | --program-p)
+ ac_prev=program_prefix ;;
+ -program-prefix=* | --program-prefix=* | --program-prefi=* \
+ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*)
+ program_prefix="$ac_optarg" ;;
+
+ -program-suffix | --program-suffix | --program-suffi | --program-suff \
+ | --program-suf | --program-su | --program-s)
+ ac_prev=program_suffix ;;
+ -program-suffix=* | --program-suffix=* | --program-suffi=* \
+ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*)
+ program_suffix="$ac_optarg" ;;
+
+ -program-transform-name | --program-transform-name \
+ | --program-transform-nam | --program-transform-na \
+ | --program-transform-n | --program-transform- \
+ | --program-transform | --program-transfor \
+ | --program-transfo | --program-transf \
+ | --program-trans | --program-tran \
+ | --progr-tra | --program-tr | --program-t)
+ ac_prev=program_transform_name ;;
+ -program-transform-name=* | --program-transform-name=* \
+ | --program-transform-nam=* | --program-transform-na=* \
+ | --program-transform-n=* | --program-transform-=* \
+ | --program-transform=* | --program-transfor=* \
+ | --program-transfo=* | --program-transf=* \
+ | --program-trans=* | --program-tran=* \
+ | --progr-tra=* | --program-tr=* | --program-t=*)
+ program_transform_name="$ac_optarg" ;;
+
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil)
+ silent=yes ;;
+
+ -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
+ ac_prev=sbindir ;;
+ -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
+ | --sbi=* | --sb=*)
+ sbindir="$ac_optarg" ;;
+
+ -sharedstatedir | --sharedstatedir | --sharedstatedi \
+ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \
+ | --sharedst | --shareds | --shared | --share | --shar \
+ | --sha | --sh)
+ ac_prev=sharedstatedir ;;
+ -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \
+ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \
+ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \
+ | --sha=* | --sh=*)
+ sharedstatedir="$ac_optarg" ;;
+
+ -site | --site | --sit)
+ ac_prev=site ;;
+ -site=* | --site=* | --sit=*)
+ site="$ac_optarg" ;;
+
+ -srcdir | --srcdir | --srcdi | --srcd | --src | --sr)
+ ac_prev=srcdir ;;
+ -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*)
+ srcdir="$ac_optarg" ;;
+
+ -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \
+ | --syscon | --sysco | --sysc | --sys | --sy)
+ ac_prev=sysconfdir ;;
+ -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \
+ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*)
+ sysconfdir="$ac_optarg" ;;
+
+ -target | --target | --targe | --targ | --tar | --ta | --t)
+ ac_prev=target ;;
+ -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*)
+ target="$ac_optarg" ;;
+
+ -v | -verbose | --verbose | --verbos | --verbo | --verb)
+ verbose=yes ;;
+
+ -version | --version | --versio | --versi | --vers)
+ echo "configure generated by autoconf version 2.12"
+ exit 0 ;;
+
+ -with-* | --with-*)
+ ac_package=`echo $ac_option|sed -e 's/-*with-//' -e 's/=.*//'`
+ # Reject names that are not valid shell variable names.
+ if test -n "`echo $ac_package| sed 's/[-_a-zA-Z0-9]//g'`"; then
+ { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; }
+ fi
+ ac_package=`echo $ac_package| sed 's/-/_/g'`
+ case "$ac_option" in
+ *=*) ;;
+ *) ac_optarg=yes ;;
+ esac
+ eval "with_${ac_package}='$ac_optarg'" ;;
+
+ -without-* | --without-*)
+ ac_package=`echo $ac_option|sed -e 's/-*without-//'`
+ # Reject names that are not valid shell variable names.
+ if test -n "`echo $ac_package| sed 's/[-a-zA-Z0-9_]//g'`"; then
+ { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; }
+ fi
+ ac_package=`echo $ac_package| sed 's/-/_/g'`
+ eval "with_${ac_package}=no" ;;
+
+ --x)
+ # Obsolete; use --with-x.
+ with_x=yes ;;
+
+ -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \
+ | --x-incl | --x-inc | --x-in | --x-i)
+ ac_prev=x_includes ;;
+ -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \
+ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*)
+ x_includes="$ac_optarg" ;;
+
+ -x-libraries | --x-libraries | --x-librarie | --x-librari \
+ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l)
+ ac_prev=x_libraries ;;
+ -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \
+ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*)
+ x_libraries="$ac_optarg" ;;
+
+ -*) { echo "configure: error: $ac_option: invalid option; use --help to show usage" 1>&2; exit 1; }
+ ;;
+
+ *)
+ if test -n "`echo $ac_option| sed 's/[-a-z0-9.]//g'`"; then
+ echo "configure: warning: $ac_option: invalid host type" 1>&2
+ fi
+ if test "x$nonopt" != xNONE; then
+ { echo "configure: error: can only configure for one host and one target at a time" 1>&2; exit 1; }
+ fi
+ nonopt="$ac_option"
+ ;;
+
+ esac
+done
+
+if test -n "$ac_prev"; then
+ { echo "configure: error: missing argument to --`echo $ac_prev | sed 's/_/-/g'`" 1>&2; exit 1; }
+fi
+
+trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15
+
+# File descriptor usage:
+# 0 standard input
+# 1 file creation
+# 2 errors and warnings
+# 3 some systems may open it to /dev/tty
+# 4 used on the Kubota Titan
+# 6 checking for... messages and results
+# 5 compiler messages saved in config.log
+if test "$silent" = yes; then
+ exec 6>/dev/null
+else
+ exec 6>&1
+fi
+exec 5>./config.log
+
+echo "\
+This file contains any messages produced by compilers while
+running configure, to aid debugging if configure makes a mistake.
+" 1>&5
+
+# Strip out --no-create and --no-recursion so they do not pile up.
+# Also quote any args containing shell metacharacters.
+ac_configure_args=
+for ac_arg
+do
+ case "$ac_arg" in
+ -no-create | --no-create | --no-creat | --no-crea | --no-cre \
+ | --no-cr | --no-c) ;;
+ -no-recursion | --no-recursion | --no-recursio | --no-recursi \
+ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) ;;
+  *" "*|*"	"*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?]*)
+ ac_configure_args="$ac_configure_args '$ac_arg'" ;;
+ *) ac_configure_args="$ac_configure_args $ac_arg" ;;
+ esac
+done
+
+# NLS nuisances.
+# Only set these to C if already set. These must not be set unconditionally
+# because not all systems understand e.g. LANG=C (notably SCO).
+# Fixing LC_MESSAGES prevents Solaris sh from translating var values in `set'!
+# Non-C LC_CTYPE values break the ctype check.
+if test "${LANG+set}" = set; then LANG=C; export LANG; fi
+if test "${LC_ALL+set}" = set; then LC_ALL=C; export LC_ALL; fi
+if test "${LC_MESSAGES+set}" = set; then LC_MESSAGES=C; export LC_MESSAGES; fi
+if test "${LC_CTYPE+set}" = set; then LC_CTYPE=C; export LC_CTYPE; fi
+
+# confdefs.h avoids OS command line length limits that DEFS can exceed.
+rm -rf conftest* confdefs.h
+# AIX cpp loses on an empty file, so make sure it contains at least a newline.
+echo > confdefs.h
+
+# A filename unique to this package, relative to the directory that
+# configure is in, which we can look for to find out if srcdir is correct.
+ac_unique_file=src/version.c
+
+# Find the source files, if location was not specified.
+if test -z "$srcdir"; then
+ ac_srcdir_defaulted=yes
+ # Try the directory containing this script, then its parent.
+ ac_prog=$0
+ ac_confdir=`echo $ac_prog|sed 's%/[^/][^/]*$%%'`
+ test "x$ac_confdir" = "x$ac_prog" && ac_confdir=.
+ srcdir=$ac_confdir
+ if test ! -r $srcdir/$ac_unique_file; then
+ srcdir=..
+ fi
+else
+ ac_srcdir_defaulted=no
+fi
+if test ! -r $srcdir/$ac_unique_file; then
+ if test "$ac_srcdir_defaulted" = yes; then
+ { echo "configure: error: can not find sources in $ac_confdir or .." 1>&2; exit 1; }
+ else
+ { echo "configure: error: can not find sources in $srcdir" 1>&2; exit 1; }
+ fi
+fi
+srcdir=`echo "${srcdir}" | sed 's%\([^/]\)/*$%\1%'`
+
+# Prefer explicitly selected file to automatically selected ones.
+if test -z "$CONFIG_SITE"; then
+ if test "x$prefix" != xNONE; then
+ CONFIG_SITE="$prefix/share/config.site $prefix/etc/config.site"
+ else
+ CONFIG_SITE="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site"
+ fi
+fi
+for ac_site_file in $CONFIG_SITE; do
+ if test -r "$ac_site_file"; then
+ echo "loading site script $ac_site_file"
+ . "$ac_site_file"
+ fi
+done
+
+if test -r "$cache_file"; then
+ echo "loading cache $cache_file"
+ . $cache_file
+else
+ echo "creating cache $cache_file"
+ > $cache_file
+fi
+
+ac_ext=c
+# CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options.
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5'
+ac_link='${CC-cc} -o conftest $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5'
+cross_compiling=$ac_cv_prog_cc_cross
+
+if (echo "testing\c"; echo 1,2,3) | grep c >/dev/null; then
+ # Stardent Vistra SVR4 grep lacks -e, says ghazi@caip.rutgers.edu.
+ if (echo -n testing; echo 1,2,3) | sed s/-n/xn/ | grep xn >/dev/null; then
+ ac_n= ac_c='
+' ac_t=' '
+ else
+ ac_n=-n ac_c= ac_t=
+ fi
+else
+ ac_n= ac_c='\c' ac_t=
+fi
+
+
+
+
+
+VERSION=`sed -e 's/^.*"\(.*\)";$/\1/' ${srcdir}/src/version.c`
+echo "configuring for GNU Wget $VERSION"
+
+PACKAGE=wget
+
+
+ac_aux_dir=
+for ac_dir in $srcdir $srcdir/.. $srcdir/../..; do
+ if test -f $ac_dir/install-sh; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/install-sh -c"
+ break
+ elif test -f $ac_dir/install.sh; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/install.sh -c"
+ break
+ fi
+done
+if test -z "$ac_aux_dir"; then
+ { echo "configure: error: can not find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." 1>&2; exit 1; }
+fi
+ac_config_guess=$ac_aux_dir/config.guess
+ac_config_sub=$ac_aux_dir/config.sub
+ac_configure=$ac_aux_dir/configure # This should be Cygnus configure.
+
+
+# Make sure we can run config.sub.
+if $ac_config_sub sun4 >/dev/null 2>&1; then :
+else { echo "configure: error: can not run $ac_config_sub" 1>&2; exit 1; }
+fi
+
+echo $ac_n "checking host system type""... $ac_c" 1>&6
+echo "configure:567: checking host system type" >&5
+
+host_alias=$host
+case "$host_alias" in
+NONE)
+ case $nonopt in
+ NONE)
+ if host_alias=`$ac_config_guess`; then :
+ else { echo "configure: error: can not guess host type; you must specify one" 1>&2; exit 1; }
+ fi ;;
+ *) host_alias=$nonopt ;;
+ esac ;;
+esac
+
+host=`$ac_config_sub $host_alias`
+host_cpu=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'`
+host_vendor=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'`
+host_os=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`
+echo "$ac_t""$host" 1>&6
+
+cat >> confdefs.h <<EOF
+#define OS_TYPE "$host_os"
+EOF
+
+
+# Check whether --with-socks or --without-socks was given.
+if test "${with_socks+set}" = set; then
+ withval="$with_socks"
+ cat >> confdefs.h <<\EOF
+#define HAVE_SOCKS 1
+EOF
+
+fi
+
+
+# Check whether --enable-opie or --disable-opie was given.
+if test "${enable_opie+set}" = set; then
+ enableval="$enable_opie"
+ USE_OPIE=$enableval
+else
+ USE_OPIE=yes
+fi
+
+test x"${USE_OPIE}" = xyes && cat >> confdefs.h <<\EOF
+#define USE_OPIE 1
+EOF
+
+
+# Check whether --enable-digest or --disable-digest was given.
+if test "${enable_digest+set}" = set; then
+ enableval="$enable_digest"
+ USE_DIGEST=$enableval
+else
+ USE_DIGEST=yes
+fi
+
+test x"${USE_DIGEST}" = xyes && cat >> confdefs.h <<\EOF
+#define USE_DIGEST 1
+EOF
+
+
+# Check whether --enable-debug or --disable-debug was given.
+if test "${enable_debug+set}" = set; then
+ enableval="$enable_debug"
+ DEBUG=$enableval
+else
+ DEBUG=yes
+fi
+
+test x"${DEBUG}" = xyes && cat >> confdefs.h <<\EOF
+#define DEBUG 1
+EOF
+
+
+case "${USE_OPIE}${USE_DIGEST}" in
+*yes*)
+ MD5_OBJ='md5$o'
+esac
+if test x"$USE_OPIE" = xyes; then
+ OPIE_OBJ='ftp-opie$o'
+fi
+
+
+
+echo $ac_n "checking whether ${MAKE-make} sets \${MAKE}""... $ac_c" 1>&6
+echo "configure:652: checking whether ${MAKE-make} sets \${MAKE}" >&5
+set dummy ${MAKE-make}; ac_make=`echo "$2" | sed 'y%./+-%__p_%'`
+if eval "test \"`echo '$''{'ac_cv_prog_make_${ac_make}_set'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftestmake <<\EOF
+all:
+	@echo 'ac_maketemp="${MAKE}"'
+EOF
+# GNU make sometimes prints "make[1]: Entering...", which would confuse us.
+eval `${MAKE-make} -f conftestmake 2>/dev/null | grep temp=`
+if test -n "$ac_maketemp"; then
+ eval ac_cv_prog_make_${ac_make}_set=yes
+else
+ eval ac_cv_prog_make_${ac_make}_set=no
+fi
+rm -f conftestmake
+fi
+if eval "test \"`echo '$ac_cv_prog_make_'${ac_make}_set`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ SET_MAKE=
+else
+ echo "$ac_t""no" 1>&6
+ SET_MAKE="MAKE=${MAKE-make}"
+fi
+
+
+# Find a good install program. We prefer a C program (faster),
+# so one script is as good as another. But avoid the broken or
+# incompatible versions:
+# SysV /etc/install, /usr/sbin/install
+# SunOS /usr/etc/install
+# IRIX /sbin/install
+# AIX /bin/install
+# AFS /usr/afsws/bin/install, which mishandles nonexistent args
+# SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff"
+# ./install, which can be erroneously created by make from ./install.sh.
+echo $ac_n "checking for a BSD compatible install""... $ac_c" 1>&6
+echo "configure:690: checking for a BSD compatible install" >&5
+if test -z "$INSTALL"; then
+if eval "test \"`echo '$''{'ac_cv_path_install'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ IFS="${IFS= }"; ac_save_IFS="$IFS"; IFS="${IFS}:"
+ for ac_dir in $PATH; do
+ # Account for people who put trailing slashes in PATH elements.
+ case "$ac_dir/" in
+ /|./|.//|/etc/*|/usr/sbin/*|/usr/etc/*|/sbin/*|/usr/afsws/bin/*|/usr/ucb/*) ;;
+ *)
+ # OSF1 and SCO ODT 3.0 have their own names for install.
+ for ac_prog in ginstall installbsd scoinst install; do
+ if test -f $ac_dir/$ac_prog; then
+ if test $ac_prog = install &&
+ grep dspmsg $ac_dir/$ac_prog >/dev/null 2>&1; then
+ # AIX install. It has an incompatible calling convention.
+ # OSF/1 installbsd also uses dspmsg, but is usable.
+ :
+ else
+ ac_cv_path_install="$ac_dir/$ac_prog -c"
+ break 2
+ fi
+ fi
+ done
+ ;;
+ esac
+ done
+ IFS="$ac_save_IFS"
+
+fi
+ if test "${ac_cv_path_install+set}" = set; then
+ INSTALL="$ac_cv_path_install"
+ else
+ # As a last resort, use the slow shell script. We don't cache a
+ # path for INSTALL within a source directory, because that will
+ # break other packages using the cache if that directory is
+ # removed, or if the path is relative.
+ INSTALL="$ac_install_sh"
+ fi
+fi
+echo "$ac_t""$INSTALL" 1>&6
+
+# Use test -z because SunOS4 sh mishandles braces in ${var-val}.
+# It thinks the first close brace ends the variable substitution.
+test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}'
+
+test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644'
+
+
+
+test -z "$CFLAGS" && CFLAGS= auto_cflags=1
+test -z "$CC" && cc_specified=yes
+
+# Extract the first word of "gcc", so it can be a program name with args.
+set dummy gcc; ac_word=$2
+echo $ac_n "checking for $ac_word""... $ac_c" 1>&6
+echo "configure:747: checking for $ac_word" >&5
+if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+ IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:"
+ for ac_dir in $PATH; do
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$ac_word; then
+ ac_cv_prog_CC="gcc"
+ break
+ fi
+ done
+ IFS="$ac_save_ifs"
+fi
+fi
+CC="$ac_cv_prog_CC"
+if test -n "$CC"; then
+ echo "$ac_t""$CC" 1>&6
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+if test -z "$CC"; then
+ # Extract the first word of "cc", so it can be a program name with args.
+set dummy cc; ac_word=$2
+echo $ac_n "checking for $ac_word""... $ac_c" 1>&6
+echo "configure:776: checking for $ac_word" >&5
+if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+ IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:"
+ ac_prog_rejected=no
+ for ac_dir in $PATH; do
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$ac_word; then
+ if test "$ac_dir/$ac_word" = "/usr/ucb/cc"; then
+ ac_prog_rejected=yes
+ continue
+ fi
+ ac_cv_prog_CC="cc"
+ break
+ fi
+ done
+ IFS="$ac_save_ifs"
+if test $ac_prog_rejected = yes; then
+ # We found a bogon in the path, so make sure we never use it.
+ set dummy $ac_cv_prog_CC
+ shift
+ if test $# -gt 0; then
+ # We chose a different compiler from the bogus one.
+ # However, it has the same basename, so the bogon will be chosen
+ # first if we set CC to just the basename; use the full file name.
+ shift
+ set dummy "$ac_dir/$ac_word" "$@"
+ shift
+ ac_cv_prog_CC="$@"
+ fi
+fi
+fi
+fi
+CC="$ac_cv_prog_CC"
+if test -n "$CC"; then
+ echo "$ac_t""$CC" 1>&6
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+ test -z "$CC" && { echo "configure: error: no acceptable cc found in \$PATH" 1>&2; exit 1; }
+fi
+
+echo $ac_n "checking whether the C compiler ($CC $CFLAGS $LDFLAGS) works""... $ac_c" 1>&6
+echo "configure:824: checking whether the C compiler ($CC $CFLAGS $LDFLAGS) works" >&5
+
+ac_ext=c
+# CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options.
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5'
+ac_link='${CC-cc} -o conftest $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5'
+cross_compiling=$ac_cv_prog_cc_cross
+
+cat > conftest.$ac_ext <<EOF
+#line 834 "configure"
+#include "confdefs.h"
+main(){return(0);}
+EOF
+if { (eval echo configure:838: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ ac_cv_prog_cc_works=yes
+ # If we can't run a trivial program, we are probably using a cross compiler.
+ if (./conftest; exit) 2>/dev/null; then
+ ac_cv_prog_cc_cross=no
+ else
+ ac_cv_prog_cc_cross=yes
+ fi
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ ac_cv_prog_cc_works=no
+fi
+rm -fr conftest*
+
+echo "$ac_t""$ac_cv_prog_cc_works" 1>&6
+if test $ac_cv_prog_cc_works = no; then
+ { echo "configure: error: installation or configuration problem: C compiler cannot create executables." 1>&2; exit 1; }
+fi
+echo $ac_n "checking whether the C compiler ($CC $CFLAGS $LDFLAGS) is a cross-compiler""... $ac_c" 1>&6
+echo "configure:858: checking whether the C compiler ($CC $CFLAGS $LDFLAGS) is a cross-compiler" >&5
+echo "$ac_t""$ac_cv_prog_cc_cross" 1>&6
+cross_compiling=$ac_cv_prog_cc_cross
+
+echo $ac_n "checking whether we are using GNU C""... $ac_c" 1>&6
+echo "configure:863: checking whether we are using GNU C" >&5
+if eval "test \"`echo '$''{'ac_cv_prog_gcc'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.c <<EOF
+#ifdef __GNUC__
+ yes;
+#endif
+EOF
+if { ac_try='${CC-cc} -E conftest.c'; { (eval echo configure:872: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }; } | egrep yes >/dev/null 2>&1; then
+ ac_cv_prog_gcc=yes
+else
+ ac_cv_prog_gcc=no
+fi
+fi
+
+echo "$ac_t""$ac_cv_prog_gcc" 1>&6
+
+if test $ac_cv_prog_gcc = yes; then
+ GCC=yes
+ ac_test_CFLAGS="${CFLAGS+set}"
+ ac_save_CFLAGS="$CFLAGS"
+ CFLAGS=
+ echo $ac_n "checking whether ${CC-cc} accepts -g""... $ac_c" 1>&6
+echo "configure:887: checking whether ${CC-cc} accepts -g" >&5
+if eval "test \"`echo '$''{'ac_cv_prog_cc_g'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ echo 'void f(){}' > conftest.c
+if test -z "`${CC-cc} -g -c conftest.c 2>&1`"; then
+ ac_cv_prog_cc_g=yes
+else
+ ac_cv_prog_cc_g=no
+fi
+rm -f conftest*
+
+fi
+
+echo "$ac_t""$ac_cv_prog_cc_g" 1>&6
+ if test "$ac_test_CFLAGS" = set; then
+ CFLAGS="$ac_save_CFLAGS"
+ elif test $ac_cv_prog_cc_g = yes; then
+ CFLAGS="-g -O2"
+ else
+ CFLAGS="-O2"
+ fi
+else
+ GCC=
+ test "${CFLAGS+set}" = set || CFLAGS="-g"
+fi
+
+
+if test -n "$auto_cflags"; then
+ if test -n "$GCC"; then
+ CFLAGS="$CFLAGS -O2 -Wall -Wno-implicit"
+ else
+ case "$host_os" in
+ *hpux*) CFLAGS="$CFLAGS +O3" ;;
+ *ultrix* | *osf*) CFLAGS="$CFLAGS -O -Olimit 2000" ;;
+ *) CFLAGS="$CFLAGS -O" ;;
+ esac
+ fi
+fi
+
+echo $ac_n "checking how to run the C preprocessor""... $ac_c" 1>&6
+echo "configure:928: checking how to run the C preprocessor" >&5
+# On Suns, sometimes $CPP names a directory.
+if test -n "$CPP" && test -d "$CPP"; then
+ CPP=
+fi
+if test -z "$CPP"; then
+if eval "test \"`echo '$''{'ac_cv_prog_CPP'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ # This must be in double quotes, not single quotes, because CPP may get
+ # substituted into the Makefile and "${CC-cc}" will confuse make.
+ CPP="${CC-cc} -E"
+ # On the NeXT, cc -E runs the code through the compiler's parser,
+ # not just through cpp.
+ cat > conftest.$ac_ext <<EOF
+#line 943 "configure"
+#include "confdefs.h"
+#include <assert.h>
+Syntax Error
+EOF
+ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out"
+{ (eval echo configure:949: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }
+ac_err=`grep -v '^ *+' conftest.out`
+if test -z "$ac_err"; then
+ :
+else
+ echo "$ac_err" >&5
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ CPP="${CC-cc} -E -traditional-cpp"
+ cat > conftest.$ac_ext <<EOF
+#line 960 "configure"
+#include "confdefs.h"
+#include <assert.h>
+Syntax Error
+EOF
+ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out"
+{ (eval echo configure:966: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }
+ac_err=`grep -v '^ *+' conftest.out`
+if test -z "$ac_err"; then
+ :
+else
+ echo "$ac_err" >&5
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ CPP=/lib/cpp
+fi
+rm -f conftest*
+fi
+rm -f conftest*
+ ac_cv_prog_CPP="$CPP"
+fi
+ CPP="$ac_cv_prog_CPP"
+else
+ ac_cv_prog_CPP="$CPP"
+fi
+echo "$ac_t""$CPP" 1>&6
+
+echo $ac_n "checking for AIX""... $ac_c" 1>&6
+echo "configure:989: checking for AIX" >&5
+cat > conftest.$ac_ext <<EOF
+#line 991 "configure"
+#include "confdefs.h"
+#ifdef _AIX
+ yes
+#endif
+
+EOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ egrep "yes" >/dev/null 2>&1; then
+ rm -rf conftest*
+ echo "$ac_t""yes" 1>&6; cat >> confdefs.h <<\EOF
+#define _ALL_SOURCE 1
+EOF
+
+else
+ rm -rf conftest*
+ echo "$ac_t""no" 1>&6
+fi
+rm -f conftest*
+
+
+
+case "$host_os" in
+ *win32) exeext='.exe';;
+ *) exeext='';;
+esac
+
+
+
+echo $ac_n "checking for ${CC-cc} option to accept ANSI C""... $ac_c" 1>&6
+echo "configure:1021: checking for ${CC-cc} option to accept ANSI C" >&5
+if eval "test \"`echo '$''{'am_cv_prog_cc_stdc'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ am_cv_prog_cc_stdc=no
+ac_save_CC="$CC"
+# Don't try gcc -ansi; that turns off useful extensions and
+# breaks some systems' header files.
+# AIX -qlanglvl=ansi
+# Ultrix and OSF/1 -std1
+# HP-UX -Aa -D_HPUX_SOURCE
+# SVR4 -Xc -D__EXTENSIONS__
+for ac_arg in "" -qlanglvl=ansi -std1 "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__"
+do
+ CC="$ac_save_CC $ac_arg"
+ cat > conftest.$ac_ext <<EOF
+#line 1037 "configure"
+#include "confdefs.h"
+#if !defined(__STDC__) || __STDC__ != 1
+choke me
+#endif
+/* DYNIX/ptx V4.1.3 can't compile sys/stat.h with -Xc -D__EXTENSIONS__. */
+#ifdef _SEQUENT_
+# include <sys/types.h>
+# include <sys/stat.h>
+#endif
+
+int main() {
+
+int test (int i, double x);
+struct s1 {int (*f) (int a);};
+struct s2 {int (*f) (double a);};
+; return 0; }
+EOF
+if { (eval echo configure:1055: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then
+ rm -rf conftest*
+ am_cv_prog_cc_stdc="$ac_arg"; break
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+fi
+rm -f conftest*
+done
+CC="$ac_save_CC"
+
+fi
+
+echo "$ac_t""$am_cv_prog_cc_stdc" 1>&6
+case "x$am_cv_prog_cc_stdc" in
+ x|xno) ;;
+ *) CC="$CC $am_cv_prog_cc_stdc" ;;
+esac
+
+
+
+echo $ac_n "checking for function prototypes""... $ac_c" 1>&6
+echo "configure:1077: checking for function prototypes" >&5
+if test "$am_cv_prog_cc_stdc" != no; then
+ echo "$ac_t""yes" 1>&6
+ cat >> confdefs.h <<\EOF
+#define PROTOTYPES 1
+EOF
+
+ U= ANSI2KNR=
+else
+ echo "$ac_t""no" 1>&6
+ U=_ ANSI2KNR=./ansi2knr
+ # Ensure some checks needed by ansi2knr itself.
+ echo $ac_n "checking for ANSI C header files""... $ac_c" 1>&6
+echo "configure:1090: checking for ANSI C header files" >&5
+if eval "test \"`echo '$''{'ac_cv_header_stdc'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1095 "configure"
+#include "confdefs.h"
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <float.h>
+EOF
+ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out"
+{ (eval echo configure:1103: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }
+ac_err=`grep -v '^ *+' conftest.out`
+if test -z "$ac_err"; then
+ rm -rf conftest*
+ ac_cv_header_stdc=yes
+else
+ echo "$ac_err" >&5
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ ac_cv_header_stdc=no
+fi
+rm -f conftest*
+
+if test $ac_cv_header_stdc = yes; then
+ # SunOS 4.x string.h does not declare mem*, contrary to ANSI.
+cat > conftest.$ac_ext <<EOF
+#line 1120 "configure"
+#include "confdefs.h"
+#include <string.h>
+EOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ egrep "memchr" >/dev/null 2>&1; then
+ :
+else
+ rm -rf conftest*
+ ac_cv_header_stdc=no
+fi
+rm -f conftest*
+
+fi
+
+if test $ac_cv_header_stdc = yes; then
+ # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI.
+cat > conftest.$ac_ext <<EOF
+#line 1138 "configure"
+#include "confdefs.h"
+#include <stdlib.h>
+EOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ egrep "free" >/dev/null 2>&1; then
+ :
+else
+ rm -rf conftest*
+ ac_cv_header_stdc=no
+fi
+rm -f conftest*
+
+fi
+
+if test $ac_cv_header_stdc = yes; then
+ # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi.
+if test "$cross_compiling" = yes; then
+ :
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1159 "configure"
+#include "confdefs.h"
+#include <ctype.h>
+#define ISLOWER(c) ('a' <= (c) && (c) <= 'z')
+#define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c))
+#define XOR(e, f) (((e) && !(f)) || (!(e) && (f)))
+int main () { int i; for (i = 0; i < 256; i++)
+if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) exit(2);
+exit (0); }
+
+EOF
+if { (eval echo configure:1170: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest && (./conftest; exit) 2>/dev/null
+then
+ :
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -fr conftest*
+ ac_cv_header_stdc=no
+fi
+rm -fr conftest*
+fi
+
+fi
+fi
+
+echo "$ac_t""$ac_cv_header_stdc" 1>&6
+if test $ac_cv_header_stdc = yes; then
+ cat >> confdefs.h <<\EOF
+#define STDC_HEADERS 1
+EOF
+
+fi
+
+ for ac_hdr in string.h
+do
+ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'`
+echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6
+echo "configure:1197: checking for $ac_hdr" >&5
+if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1202 "configure"
+#include "confdefs.h"
+#include <$ac_hdr>
+EOF
+ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out"
+{ (eval echo configure:1207: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }
+ac_err=`grep -v '^ *+' conftest.out`
+if test -z "$ac_err"; then
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=yes"
+else
+ echo "$ac_err" >&5
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=no"
+fi
+rm -f conftest*
+fi
+if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_hdr 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+fi
+
+
+echo $ac_n "checking for working const""... $ac_c" 1>&6
+echo "configure:1237: checking for working const" >&5
+if eval "test \"`echo '$''{'ac_cv_c_const'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1242 "configure"
+#include "confdefs.h"
+
+int main() {
+
+/* Ultrix mips cc rejects this. */
+typedef int charset[2]; const charset x;
+/* SunOS 4.1.1 cc rejects this. */
+char const *const *ccp;
+char **p;
+/* NEC SVR4.0.2 mips cc rejects this. */
+struct point {int x, y;};
+static struct point const zero = {0,0};
+/* AIX XL C 1.02.0.0 rejects this.
+ It does not let you subtract one const X* pointer from another in an arm
+ of an if-expression whose if-part is not a constant expression */
+const char *g = "string";
+ccp = &g + (g ? g-g : 0);
+/* HPUX 7.0 cc rejects these. */
+++ccp;
+p = (char**) ccp;
+ccp = (char const *const *) p;
+{ /* SCO 3.2v4 cc rejects this. */
+ char *t;
+ char const *s = 0 ? (char *) 0 : (char const *) 0;
+
+ *t++ = 0;
+}
+{ /* Someone thinks the Sun supposedly-ANSI compiler will reject this. */
+ int x[] = {25, 17};
+ const int *foo = &x[0];
+ ++foo;
+}
+{ /* Sun SC1.0 ANSI compiler rejects this -- but not the above. */
+ typedef const int *iptr;
+ iptr p = 0;
+ ++p;
+}
+{ /* AIX XL C 1.02.0.0 rejects this saying
+ "k.c", line 2.27: 1506-025 (S) Operand must be a modifiable lvalue. */
+ struct s { int j; const int *ap[3]; };
+ struct s *b; b->j = 5;
+}
+{ /* ULTRIX-32 V3.1 (Rev 9) vcc rejects this */
+ const int foo = 10;
+}
+
+; return 0; }
+EOF
+if { (eval echo configure:1291: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then
+ rm -rf conftest*
+ ac_cv_c_const=yes
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ ac_cv_c_const=no
+fi
+rm -f conftest*
+fi
+
+echo "$ac_t""$ac_cv_c_const" 1>&6
+if test $ac_cv_c_const = no; then
+ cat >> confdefs.h <<\EOF
+#define const
+EOF
+
+fi
+
+echo $ac_n "checking for size_t""... $ac_c" 1>&6
+echo "configure:1312: checking for size_t" >&5
+if eval "test \"`echo '$''{'ac_cv_type_size_t'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1317 "configure"
+#include "confdefs.h"
+#include <sys/types.h>
+#if STDC_HEADERS
+#include <stdlib.h>
+#include <stddef.h>
+#endif
+EOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ egrep "size_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then
+ rm -rf conftest*
+ ac_cv_type_size_t=yes
+else
+ rm -rf conftest*
+ ac_cv_type_size_t=no
+fi
+rm -f conftest*
+
+fi
+echo "$ac_t""$ac_cv_type_size_t" 1>&6
+if test $ac_cv_type_size_t = no; then
+ cat >> confdefs.h <<\EOF
+#define size_t unsigned
+EOF
+
+fi
+
+echo $ac_n "checking for pid_t""... $ac_c" 1>&6
+echo "configure:1345: checking for pid_t" >&5
+if eval "test \"`echo '$''{'ac_cv_type_pid_t'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1350 "configure"
+#include "confdefs.h"
+#include <sys/types.h>
+#if STDC_HEADERS
+#include <stdlib.h>
+#include <stddef.h>
+#endif
+EOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ egrep "pid_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then
+ rm -rf conftest*
+ ac_cv_type_pid_t=yes
+else
+ rm -rf conftest*
+ ac_cv_type_pid_t=no
+fi
+rm -f conftest*
+
+fi
+echo "$ac_t""$ac_cv_type_pid_t" 1>&6
+if test $ac_cv_type_pid_t = no; then
+ cat >> confdefs.h <<\EOF
+#define pid_t int
+EOF
+
+fi
+
+echo $ac_n "checking whether byte ordering is bigendian""... $ac_c" 1>&6
+echo "configure:1378: checking whether byte ordering is bigendian" >&5
+if eval "test \"`echo '$''{'ac_cv_c_bigendian'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ ac_cv_c_bigendian=unknown
+# See if sys/param.h defines the BYTE_ORDER macro.
+cat > conftest.$ac_ext <<EOF
+#line 1385 "configure"
+#include "confdefs.h"
+#include <sys/types.h>
+#include <sys/param.h>
+int main() {
+
+#if !BYTE_ORDER || !BIG_ENDIAN || !LITTLE_ENDIAN
+ bogus endian macros
+#endif
+; return 0; }
+EOF
+if { (eval echo configure:1396: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then
+ rm -rf conftest*
+ # It does; now see whether it defined to BIG_ENDIAN or not.
+cat > conftest.$ac_ext <<EOF
+#line 1400 "configure"
+#include "confdefs.h"
+#include <sys/types.h>
+#include <sys/param.h>
+int main() {
+
+#if BYTE_ORDER != BIG_ENDIAN
+ not big endian
+#endif
+; return 0; }
+EOF
+if { (eval echo configure:1411: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then
+ rm -rf conftest*
+ ac_cv_c_bigendian=yes
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ ac_cv_c_bigendian=no
+fi
+rm -f conftest*
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+fi
+rm -f conftest*
+if test $ac_cv_c_bigendian = unknown; then
+if test "$cross_compiling" = yes; then
+ { echo "configure: error: can not run test program while cross compiling" 1>&2; exit 1; }
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1431 "configure"
+#include "confdefs.h"
+main () {
+ /* Are we little or big endian? From Harbison&Steele. */
+ union
+ {
+ long l;
+ char c[sizeof (long)];
+ } u;
+ u.l = 1;
+ exit (u.c[sizeof (long) - 1] == 1);
+}
+EOF
+if { (eval echo configure:1444: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest && (./conftest; exit) 2>/dev/null
+then
+ ac_cv_c_bigendian=no
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -fr conftest*
+ ac_cv_c_bigendian=yes
+fi
+rm -fr conftest*
+fi
+
+fi
+fi
+
+echo "$ac_t""$ac_cv_c_bigendian" 1>&6
+if test $ac_cv_c_bigendian = yes; then
+ cat >> confdefs.h <<\EOF
+#define WORDS_BIGENDIAN 1
+EOF
+
+fi
+
+
+for ac_hdr in string.h stdarg.h unistd.h sys/time.h utime.h sys/utime.h
+do
+ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'`
+echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6
+echo "configure:1472: checking for $ac_hdr" >&5
+if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1477 "configure"
+#include "confdefs.h"
+#include <$ac_hdr>
+EOF
+ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out"
+{ (eval echo configure:1482: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }
+ac_err=`grep -v '^ *+' conftest.out`
+if test -z "$ac_err"; then
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=yes"
+else
+ echo "$ac_err" >&5
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=no"
+fi
+rm -f conftest*
+fi
+if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_hdr 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+for ac_hdr in sys/select.h sys/utsname.h pwd.h signal.h
+do
+ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'`
+echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6
+echo "configure:1512: checking for $ac_hdr" >&5
+if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1517 "configure"
+#include "confdefs.h"
+#include <$ac_hdr>
+EOF
+ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out"
+{ (eval echo configure:1522: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }
+ac_err=`grep -v '^ *+' conftest.out`
+if test -z "$ac_err"; then
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=yes"
+else
+ echo "$ac_err" >&5
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=no"
+fi
+rm -f conftest*
+fi
+if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_hdr 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+echo $ac_n "checking whether time.h and sys/time.h may both be included""... $ac_c" 1>&6
+echo "configure:1549: checking whether time.h and sys/time.h may both be included" >&5
+if eval "test \"`echo '$''{'ac_cv_header_time'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1554 "configure"
+#include "confdefs.h"
+#include <sys/types.h>
+#include <sys/time.h>
+#include <time.h>
+int main() {
+struct tm *tp;
+; return 0; }
+EOF
+if { (eval echo configure:1563: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then
+ rm -rf conftest*
+ ac_cv_header_time=yes
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ ac_cv_header_time=no
+fi
+rm -f conftest*
+fi
+
+echo "$ac_t""$ac_cv_header_time" 1>&6
+if test $ac_cv_header_time = yes; then
+ cat >> confdefs.h <<\EOF
+#define TIME_WITH_SYS_TIME 1
+EOF
+
+fi
+
+
+echo $ac_n "checking return type of signal handlers""... $ac_c" 1>&6
+echo "configure:1585: checking return type of signal handlers" >&5
+if eval "test \"`echo '$''{'ac_cv_type_signal'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1590 "configure"
+#include "confdefs.h"
+#include <sys/types.h>
+#include <signal.h>
+#ifdef signal
+#undef signal
+#endif
+#ifdef __cplusplus
+extern "C" void (*signal (int, void (*)(int)))(int);
+#else
+void (*signal ()) ();
+#endif
+
+int main() {
+int i;
+; return 0; }
+EOF
+if { (eval echo configure:1607: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then
+ rm -rf conftest*
+ ac_cv_type_signal=void
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ ac_cv_type_signal=int
+fi
+rm -f conftest*
+fi
+
+echo "$ac_t""$ac_cv_type_signal" 1>&6
+cat >> confdefs.h <<EOF
+#define RETSIGTYPE $ac_cv_type_signal
+EOF
+
+
+
+echo $ac_n "checking for struct utimbuf""... $ac_c" 1>&6
+echo "configure:1627: checking for struct utimbuf" >&5
+if test x"$ac_cv_header_utime_h" = xyes; then
+ cat > conftest.$ac_ext <<EOF
+#line 1630 "configure"
+#include "confdefs.h"
+#include <utime.h>
+EOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ egrep "struct[ ]+utimbuf" >/dev/null 2>&1; then
+ rm -rf conftest*
+ cat >> confdefs.h <<\EOF
+#define HAVE_STRUCT_UTIMBUF 1
+EOF
+
+ echo "$ac_t""yes" 1>&6
+else
+ rm -rf conftest*
+ echo "$ac_t""no" 1>&6
+fi
+rm -f conftest*
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+# The Ultrix 4.2 mips builtin alloca declared by alloca.h only works
+# for constant arguments. Useless!
+echo $ac_n "checking for working alloca.h""... $ac_c" 1>&6
+echo "configure:1655: checking for working alloca.h" >&5
+if eval "test \"`echo '$''{'ac_cv_header_alloca_h'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1660 "configure"
+#include "confdefs.h"
+#include <alloca.h>
+int main() {
+char *p = alloca(2 * sizeof(int));
+; return 0; }
+EOF
+if { (eval echo configure:1667: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ ac_cv_header_alloca_h=yes
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ ac_cv_header_alloca_h=no
+fi
+rm -f conftest*
+fi
+
+echo "$ac_t""$ac_cv_header_alloca_h" 1>&6
+if test $ac_cv_header_alloca_h = yes; then
+ cat >> confdefs.h <<\EOF
+#define HAVE_ALLOCA_H 1
+EOF
+
+fi
+
+echo $ac_n "checking for alloca""... $ac_c" 1>&6
+echo "configure:1688: checking for alloca" >&5
+if eval "test \"`echo '$''{'ac_cv_func_alloca_works'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1693 "configure"
+#include "confdefs.h"
+
+#ifdef __GNUC__
+# define alloca __builtin_alloca
+#else
+# if HAVE_ALLOCA_H
+# include <alloca.h>
+# else
+# ifdef _AIX
+ #pragma alloca
+# else
+# ifndef alloca /* predefined by HP cc +Olibcalls */
+char *alloca ();
+# endif
+# endif
+# endif
+#endif
+
+int main() {
+char *p = (char *) alloca(1);
+; return 0; }
+EOF
+if { (eval echo configure:1716: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ ac_cv_func_alloca_works=yes
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ ac_cv_func_alloca_works=no
+fi
+rm -f conftest*
+fi
+
+echo "$ac_t""$ac_cv_func_alloca_works" 1>&6
+if test $ac_cv_func_alloca_works = yes; then
+ cat >> confdefs.h <<\EOF
+#define HAVE_ALLOCA 1
+EOF
+
+fi
+
+if test $ac_cv_func_alloca_works = no; then
+ # The SVR3 libPW and SVR4 libucb both contain incompatible functions
+ # that cause trouble. Some versions do not even contain alloca or
+ # contain a buggy version. If you still want to use their alloca,
+ # use ar to extract alloca.o from them instead of compiling alloca.c.
+ ALLOCA=alloca.o
+ cat >> confdefs.h <<\EOF
+#define C_ALLOCA 1
+EOF
+
+
+echo $ac_n "checking whether alloca needs Cray hooks""... $ac_c" 1>&6
+echo "configure:1748: checking whether alloca needs Cray hooks" >&5
+if eval "test \"`echo '$''{'ac_cv_os_cray'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1753 "configure"
+#include "confdefs.h"
+#if defined(CRAY) && ! defined(CRAY2)
+webecray
+#else
+wenotbecray
+#endif
+
+EOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ egrep "webecray" >/dev/null 2>&1; then
+ rm -rf conftest*
+ ac_cv_os_cray=yes
+else
+ rm -rf conftest*
+ ac_cv_os_cray=no
+fi
+rm -f conftest*
+
+fi
+
+echo "$ac_t""$ac_cv_os_cray" 1>&6
+if test $ac_cv_os_cray = yes; then
+for ac_func in _getb67 GETB67 getb67; do
+ echo $ac_n "checking for $ac_func""... $ac_c" 1>&6
+echo "configure:1778: checking for $ac_func" >&5
+if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1783 "configure"
+#include "confdefs.h"
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $ac_func(); below. */
+#include <assert.h>
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char $ac_func();
+
+int main() {
+
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_$ac_func) || defined (__stub___$ac_func)
+choke me
+#else
+$ac_func();
+#endif
+
+; return 0; }
+EOF
+if { (eval echo configure:1806: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=no"
+fi
+rm -f conftest*
+fi
+
+if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ cat >> confdefs.h <<EOF
+#define CRAY_STACKSEG_END $ac_func
+EOF
+
+ break
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+done
+fi
+
+echo $ac_n "checking stack direction for C alloca""... $ac_c" 1>&6
+echo "configure:1833: checking stack direction for C alloca" >&5
+if eval "test \"`echo '$''{'ac_cv_c_stack_direction'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ if test "$cross_compiling" = yes; then
+ ac_cv_c_stack_direction=0
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1841 "configure"
+#include "confdefs.h"
+find_stack_direction ()
+{
+ static char *addr = 0;
+ auto char dummy;
+ if (addr == 0)
+ {
+ addr = &dummy;
+ return find_stack_direction ();
+ }
+ else
+ return (&dummy > addr) ? 1 : -1;
+}
+main ()
+{
+ exit (find_stack_direction() < 0);
+}
+EOF
+if { (eval echo configure:1860: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest && (./conftest; exit) 2>/dev/null
+then
+ ac_cv_c_stack_direction=1
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -fr conftest*
+ ac_cv_c_stack_direction=-1
+fi
+rm -fr conftest*
+fi
+
+fi
+
+echo "$ac_t""$ac_cv_c_stack_direction" 1>&6
+cat >> confdefs.h <<EOF
+#define STACK_DIRECTION $ac_cv_c_stack_direction
+EOF
+
+fi
+
+for ac_func in strdup strstr strcasecmp strncasecmp
+do
+echo $ac_n "checking for $ac_func""... $ac_c" 1>&6
+echo "configure:1884: checking for $ac_func" >&5
+if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1889 "configure"
+#include "confdefs.h"
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $ac_func(); below. */
+#include <assert.h>
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char $ac_func();
+
+int main() {
+
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_$ac_func) || defined (__stub___$ac_func)
+choke me
+#else
+$ac_func();
+#endif
+
+; return 0; }
+EOF
+if { (eval echo configure:1912: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=no"
+fi
+rm -f conftest*
+fi
+
+if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_func 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+for ac_func in gettimeofday mktime strptime
+do
+echo $ac_n "checking for $ac_func""... $ac_c" 1>&6
+echo "configure:1939: checking for $ac_func" >&5
+if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1944 "configure"
+#include "confdefs.h"
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $ac_func(); below. */
+#include <assert.h>
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char $ac_func();
+
+int main() {
+
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_$ac_func) || defined (__stub___$ac_func)
+choke me
+#else
+$ac_func();
+#endif
+
+; return 0; }
+EOF
+if { (eval echo configure:1967: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=no"
+fi
+rm -f conftest*
+fi
+
+if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_func 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+for ac_func in strerror vsnprintf select signal symlink access isatty
+do
+echo $ac_n "checking for $ac_func""... $ac_c" 1>&6
+echo "configure:1994: checking for $ac_func" >&5
+if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 1999 "configure"
+#include "confdefs.h"
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $ac_func(); below. */
+#include <assert.h>
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char $ac_func();
+
+int main() {
+
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_$ac_func) || defined (__stub___$ac_func)
+choke me
+#else
+$ac_func();
+#endif
+
+; return 0; }
+EOF
+if { (eval echo configure:2022: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=no"
+fi
+rm -f conftest*
+fi
+
+if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_func 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+for ac_func in uname gethostname
+do
+echo $ac_n "checking for $ac_func""... $ac_c" 1>&6
+echo "configure:2049: checking for $ac_func" >&5
+if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 2054 "configure"
+#include "confdefs.h"
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $ac_func(); below. */
+#include <assert.h>
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char $ac_func();
+
+int main() {
+
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_$ac_func) || defined (__stub___$ac_func)
+choke me
+#else
+$ac_func();
+#endif
+
+; return 0; }
+EOF
+if { (eval echo configure:2077: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=no"
+fi
+rm -f conftest*
+fi
+
+if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_func 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+
+for ac_func in gethostbyname
+do
+echo $ac_n "checking for $ac_func""... $ac_c" 1>&6
+echo "configure:2105: checking for $ac_func" >&5
+if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 2110 "configure"
+#include "confdefs.h"
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $ac_func(); below. */
+#include <assert.h>
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char $ac_func();
+
+int main() {
+
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_$ac_func) || defined (__stub___$ac_func)
+choke me
+#else
+$ac_func();
+#endif
+
+; return 0; }
+EOF
+if { (eval echo configure:2133: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=no"
+fi
+rm -f conftest*
+fi
+
+if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_func 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+echo $ac_n "checking for gethostbyname in -lnsl""... $ac_c" 1>&6
+echo "configure:2155: checking for gethostbyname in -lnsl" >&5
+ac_lib_var=`echo nsl'_'gethostbyname | sed 'y%./+-%__p_%'`
+if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ ac_save_LIBS="$LIBS"
+LIBS="-lnsl $LIBS"
+cat > conftest.$ac_ext <<EOF
+#line 2163 "configure"
+#include "confdefs.h"
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char gethostbyname();
+
+int main() {
+gethostbyname()
+; return 0; }
+EOF
+if { (eval echo configure:2174: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=no"
+fi
+rm -f conftest*
+LIBS="$ac_save_LIBS"
+
+fi
+if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_lib=HAVE_LIB`echo nsl | sed -e 's/[^a-zA-Z0-9_]/_/g' \
+ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_lib 1
+EOF
+
+ LIBS="-lnsl $LIBS"
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+
+fi
+done
+
+
+
+echo $ac_n "checking for socket in -lsocket""... $ac_c" 1>&6
+echo "configure:2208: checking for socket in -lsocket" >&5
+ac_lib_var=`echo socket'_'socket | sed 'y%./+-%__p_%'`
+if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ ac_save_LIBS="$LIBS"
+LIBS="-lsocket $LIBS"
+cat > conftest.$ac_ext <<EOF
+#line 2216 "configure"
+#include "confdefs.h"
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char socket();
+
+int main() {
+socket()
+; return 0; }
+EOF
+if { (eval echo configure:2227: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=no"
+fi
+rm -f conftest*
+LIBS="$ac_save_LIBS"
+
+fi
+if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_lib=HAVE_LIB`echo socket | sed -e 's/[^a-zA-Z0-9_]/_/g' \
+ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_lib 1
+EOF
+
+ LIBS="-lsocket $LIBS"
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+
+if test "x${with_socks}" = xyes
+then
+ echo $ac_n "checking for main in -lresolv""... $ac_c" 1>&6
+echo "configure:2258: checking for main in -lresolv" >&5
+ac_lib_var=`echo resolv'_'main | sed 'y%./+-%__p_%'`
+if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ ac_save_LIBS="$LIBS"
+LIBS="-lresolv $LIBS"
+cat > conftest.$ac_ext <<EOF
+#line 2266 "configure"
+#include "confdefs.h"
+
+int main() {
+main()
+; return 0; }
+EOF
+if { (eval echo configure:2273: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=no"
+fi
+rm -f conftest*
+LIBS="$ac_save_LIBS"
+
+fi
+if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_lib=HAVE_LIB`echo resolv | sed -e 's/[^a-zA-Z0-9_]/_/g' \
+ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_lib 1
+EOF
+
+ LIBS="-lresolv $LIBS"
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+ echo $ac_n "checking for Rconnect in -lsocks""... $ac_c" 1>&6
+echo "configure:2301: checking for Rconnect in -lsocks" >&5
+ac_lib_var=`echo socks'_'Rconnect | sed 'y%./+-%__p_%'`
+if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ ac_save_LIBS="$LIBS"
+LIBS="-lsocks $LIBS"
+cat > conftest.$ac_ext <<EOF
+#line 2309 "configure"
+#include "confdefs.h"
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char Rconnect();
+
+int main() {
+Rconnect()
+; return 0; }
+EOF
+if { (eval echo configure:2320: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=no"
+fi
+rm -f conftest*
+LIBS="$ac_save_LIBS"
+
+fi
+if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_lib=HAVE_LIB`echo socks | sed -e 's/[^a-zA-Z0-9_]/_/g' \
+ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_lib 1
+EOF
+
+ LIBS="-lsocks $LIBS"
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+fi
+
+ALL_LINGUAS="cs de hr no it pt_BR"
+
+echo $ac_n "checking whether NLS is requested""... $ac_c" 1>&6
+echo "configure:2352: checking whether NLS is requested" >&5
+ # Check whether --enable-nls or --disable-nls was given.
+if test "${enable_nls+set}" = set; then
+ enableval="$enable_nls"
+ HAVE_NLS=$enableval
+else
+ HAVE_NLS=yes
+fi
+
+ echo "$ac_t""$HAVE_NLS" 1>&6
+
+
+ if test x"$HAVE_NLS" = xyes; then
+ echo "$ac_t"""language catalogs: $ALL_LINGUAS"" 1>&6
+ # Extract the first word of "msgfmt", so it can be a program name with args.
+set dummy msgfmt; ac_word=$2
+echo $ac_n "checking for $ac_word""... $ac_c" 1>&6
+echo "configure:2369: checking for $ac_word" >&5
+if eval "test \"`echo '$''{'ac_cv_path_MSGFMT'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ case "$MSGFMT" in
+ /*)
+ ac_cv_path_MSGFMT="$MSGFMT" # Let the user override the test with a path.
+ ;;
+ *)
+ IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:"
+ for ac_dir in $PATH; do
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$ac_word; then
+ if test -z "`$ac_dir/$ac_word -h 2>&1 | grep 'dv '`"; then
+ ac_cv_path_MSGFMT="$ac_dir/$ac_word"
+ break
+ fi
+ fi
+ done
+ IFS="$ac_save_ifs"
+ test -z "$ac_cv_path_MSGFMT" && ac_cv_path_MSGFMT="msgfmt"
+ ;;
+esac
+fi
+MSGFMT="$ac_cv_path_MSGFMT"
+if test -n "$MSGFMT"; then
+ echo "$ac_t""$MSGFMT" 1>&6
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+ # Extract the first word of "xgettext", so it can be a program name with args.
+set dummy xgettext; ac_word=$2
+echo $ac_n "checking for $ac_word""... $ac_c" 1>&6
+echo "configure:2403: checking for $ac_word" >&5
+if eval "test \"`echo '$''{'ac_cv_path_XGETTEXT'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ case "$XGETTEXT" in
+ /*)
+ ac_cv_path_XGETTEXT="$XGETTEXT" # Let the user override the test with a path.
+ ;;
+ *)
+ IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:"
+ for ac_dir in $PATH; do
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$ac_word; then
+ if test -z "`$ac_dir/$ac_word -h 2>&1 | grep '(HELP)'`"; then
+ ac_cv_path_XGETTEXT="$ac_dir/$ac_word"
+ break
+ fi
+ fi
+ done
+ IFS="$ac_save_ifs"
+ test -z "$ac_cv_path_XGETTEXT" && ac_cv_path_XGETTEXT=":"
+ ;;
+esac
+fi
+XGETTEXT="$ac_cv_path_XGETTEXT"
+if test -n "$XGETTEXT"; then
+ echo "$ac_t""$XGETTEXT" 1>&6
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+
+ # Extract the first word of "gmsgfmt", so it can be a program name with args.
+set dummy gmsgfmt; ac_word=$2
+echo $ac_n "checking for $ac_word""... $ac_c" 1>&6
+echo "configure:2438: checking for $ac_word" >&5
+if eval "test \"`echo '$''{'ac_cv_path_GMSGFMT'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ case "$GMSGFMT" in
+ /*)
+ ac_cv_path_GMSGFMT="$GMSGFMT" # Let the user override the test with a path.
+ ;;
+ *)
+ IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:"
+ for ac_dir in $PATH; do
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$ac_word; then
+ ac_cv_path_GMSGFMT="$ac_dir/$ac_word"
+ break
+ fi
+ done
+ IFS="$ac_save_ifs"
+ test -z "$ac_cv_path_GMSGFMT" && ac_cv_path_GMSGFMT="$MSGFMT"
+ ;;
+esac
+fi
+GMSGFMT="$ac_cv_path_GMSGFMT"
+if test -n "$GMSGFMT"; then
+ echo "$ac_t""$GMSGFMT" 1>&6
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+ CATOBJEXT=.gmo
+ INSTOBJEXT=.mo
+ DATADIRNAME=share
+
+ if test "$XGETTEXT" != ":"; then
+ if $XGETTEXT --omit-header /dev/null 2> /dev/null; then
+ : ;
+ else
+      echo "$ac_t""found xgettext program is not GNU xgettext; ignore it" 1>&6
+ XGETTEXT=":"
+ fi
+ fi
+
+ for ac_hdr in locale.h libintl.h
+do
+ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'`
+echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6
+echo "configure:2484: checking for $ac_hdr" >&5
+if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 2489 "configure"
+#include "confdefs.h"
+#include <$ac_hdr>
+EOF
+ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out"
+{ (eval echo configure:2494: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; }
+ac_err=`grep -v '^ *+' conftest.out`
+if test -z "$ac_err"; then
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=yes"
+else
+ echo "$ac_err" >&5
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_header_$ac_safe=no"
+fi
+rm -f conftest*
+fi
+if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_hdr 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+fi
+done
+
+
+ for ac_func in gettext
+do
+echo $ac_n "checking for $ac_func""... $ac_c" 1>&6
+echo "configure:2524: checking for $ac_func" >&5
+if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ cat > conftest.$ac_ext <<EOF
+#line 2529 "configure"
+#include "confdefs.h"
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char $ac_func(); below. */
+#include <assert.h>
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char $ac_func();
+
+int main() {
+
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_$ac_func) || defined (__stub___$ac_func)
+choke me
+#else
+$ac_func();
+#endif
+
+; return 0; }
+EOF
+if { (eval echo configure:2552: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_func_$ac_func=no"
+fi
+rm -f conftest*
+fi
+
+if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+ ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+ cat >> confdefs.h <<EOF
+#define $ac_tr_func 1
+EOF
+
+else
+ echo "$ac_t""no" 1>&6
+echo $ac_n "checking for gettext in -lintl""... $ac_c" 1>&6
+echo "configure:2574: checking for gettext in -lintl" >&5
+ac_lib_var=`echo intl'_'gettext | sed 'y%./+-%__p_%'`
+if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ ac_save_LIBS="$LIBS"
+LIBS="-lintl $LIBS"
+cat > conftest.$ac_ext <<EOF
+#line 2582 "configure"
+#include "confdefs.h"
+/* Override any gcc2 internal prototype to avoid an error. */
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char gettext();
+
+int main() {
+gettext()
+; return 0; }
+EOF
+if { (eval echo configure:2593: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=yes"
+else
+ echo "configure: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ rm -rf conftest*
+ eval "ac_cv_lib_$ac_lib_var=no"
+fi
+rm -f conftest*
+LIBS="$ac_save_LIBS"
+
+fi
+if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then
+ echo "$ac_t""yes" 1>&6
+
+ LIBS="-lintl $LIBS"
+ cat >> confdefs.h <<\EOF
+#define HAVE_GETTEXT 1
+EOF
+
+
+else
+ echo "$ac_t""no" 1>&6
+
+ echo "$ac_t""gettext not found; disabling NLS" 1>&6
+ HAVE_NLS=no
+
+fi
+
+
+fi
+done
+
+
+ for lang in $ALL_LINGUAS; do
+ GMOFILES="$GMOFILES $lang.gmo"
+ POFILES="$POFILES $lang.po"
+ done
+ for lang in $ALL_LINGUAS; do
+ CATALOGS="$CATALOGS ${lang}${CATOBJEXT}"
+ done
+
+
+
+
+
+
+
+
+ fi
+
+ USE_NLS=$HAVE_NLS
+
+ if test "x$HAVE_NLS" = xyes; then
+ cat >> confdefs.h <<\EOF
+#define HAVE_NLS 1
+EOF
+
+ fi
+
+
+
+for ac_prog in makeinfo emacs xemacs
+do
+# Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+echo $ac_n "checking for $ac_word""... $ac_c" 1>&6
+echo "configure:2661: checking for $ac_word" >&5
+if eval "test \"`echo '$''{'ac_cv_prog_MAKEINFO'+set}'`\" = set"; then
+ echo $ac_n "(cached) $ac_c" 1>&6
+else
+ if test -n "$MAKEINFO"; then
+ ac_cv_prog_MAKEINFO="$MAKEINFO" # Let the user override the test.
+else
+ IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:"
+ for ac_dir in $PATH; do
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$ac_word; then
+ ac_cv_prog_MAKEINFO="$ac_prog"
+ break
+ fi
+ done
+ IFS="$ac_save_ifs"
+fi
+fi
+MAKEINFO="$ac_cv_prog_MAKEINFO"
+if test -n "$MAKEINFO"; then
+ echo "$ac_t""$MAKEINFO" 1>&6
+else
+ echo "$ac_t""no" 1>&6
+fi
+
+test -n "$MAKEINFO" && break
+done
+
+
+case "${MAKEINFO}" in
+ *makeinfo) MAKEINFO="${MAKEINFO} \$(srcdir)/wget.texi" ;;
+ *emacs | *xemacs) MAKEINFO="${MAKEINFO} -batch -q -no-site-file -eval '(find-file \"\$(srcdir)/wget.texi\")' -l texinfmt -f texinfo-format-buffer -f save-buffer" ;;
+ *) MAKEINFO="makeinfo \$(srcdir)/wget.texi" ;;
+esac
+
+trap '' 1 2 15
+cat > confcache <<\EOF
+# This file is a shell script that caches the results of configure
+# tests run on this system so they can be shared between configure
+# scripts and configure runs. It is not useful on other systems.
+# If it contains results you don't want to keep, you may remove or edit it.
+#
+# By default, configure uses ./config.cache as the cache file,
+# creating it if it does not exist already. You can give configure
+# the --cache-file=FILE option to use a different cache file; that is
+# what configure does when it calls configure scripts in
+# subdirectories, so they share the cache.
+# Giving --cache-file=/dev/null disables caching, for debugging configure.
+# config.status only pays attention to the cache file if you give it the
+# --recheck option to rerun configure.
+#
+EOF
+# The following way of writing the cache mishandles newlines in values,
+# but we know of no workaround that is simple, portable, and efficient.
+# So, don't put newlines in cache variables' values.
+# Ultrix sh set writes to stderr and can't be redirected directly,
+# and sets the high bit in the cache file unless we assign to the vars.
+(set) 2>&1 |
+ case `(ac_space=' '; set) 2>&1` in
+ *ac_space=\ *)
+ # `set' does not quote correctly, so add quotes (double-quote substitution
+ # turns \\\\ into \\, and sed turns \\ into \).
+ sed -n \
+ -e "s/'/'\\\\''/g" \
+ -e "s/^\\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\\)=\\(.*\\)/\\1=\${\\1='\\2'}/p"
+ ;;
+ *)
+ # `set' quotes correctly as required by POSIX, so do not add quotes.
+ sed -n -e 's/^\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\)=\(.*\)/\1=${\1=\2}/p'
+ ;;
+ esac >> confcache
+if cmp -s $cache_file confcache; then
+ :
+else
+ if test -w $cache_file; then
+ echo "updating cache $cache_file"
+ cat confcache > $cache_file
+ else
+ echo "not updating unwritable cache $cache_file"
+ fi
+fi
+rm -f confcache
+
+trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15
+
+test "x$prefix" = xNONE && prefix=$ac_default_prefix
+# Let make expand exec_prefix.
+test "x$exec_prefix" = xNONE && exec_prefix='${prefix}'
+
+# Any assignment to VPATH causes Sun make to only execute
+# the first set of double-colon rules, so remove it if not needed.
+# If there is a colon in the path, we need to keep it.
+if test "x$srcdir" = x.; then
+ ac_vpsub='/^[ ]*VPATH[ ]*=[^:]*$/d'
+fi
+
+trap 'rm -f $CONFIG_STATUS conftest*; exit 1' 1 2 15
+
+DEFS=-DHAVE_CONFIG_H
+
+# Without the "./", some shells look in PATH for config.status.
+: ${CONFIG_STATUS=./config.status}
+
+echo creating $CONFIG_STATUS
+rm -f $CONFIG_STATUS
+cat > $CONFIG_STATUS <<EOF
+#! /bin/sh
+# Generated automatically by configure.
+# Run this file to recreate the current configuration.
+# This directory was configured as follows,
+# on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
+#
+# $0 $ac_configure_args
+#
+# Compiler output produced by configure, useful for debugging
+# configure, is in ./config.log if it exists.
+
+ac_cs_usage="Usage: $CONFIG_STATUS [--recheck] [--version] [--help]"
+for ac_option
+do
+ case "\$ac_option" in
+ -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r)
+ echo "running \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion"
+ exec \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion ;;
+ -version | --version | --versio | --versi | --vers | --ver | --ve | --v)
+ echo "$CONFIG_STATUS generated by autoconf version 2.12"
+ exit 0 ;;
+ -help | --help | --hel | --he | --h)
+ echo "\$ac_cs_usage"; exit 0 ;;
+ *) echo "\$ac_cs_usage"; exit 1 ;;
+ esac
+done
+
+ac_given_srcdir=$srcdir
+ac_given_INSTALL="$INSTALL"
+
+trap 'rm -fr `echo "Makefile src/Makefile doc/Makefile util/Makefile po/Makefile.in src/config.h" | sed "s/:[^ ]*//g"` conftest*; exit 1' 1 2 15
+EOF
+cat >> $CONFIG_STATUS <<EOF
+
+# Protect against being on the right side of a sed subst in config.status.
+sed 's/%@/@@/; s/@%/@@/; s/%g\$/@g/; /@g\$/s/[\\\\&%]/\\\\&/g;
+ s/@@/%@/; s/@@/@%/; s/@g\$/%g/' > conftest.subs <<\\CEOF
+$ac_vpsub
+$extrasub
+s%@CFLAGS@%$CFLAGS%g
+s%@CPPFLAGS@%$CPPFLAGS%g
+s%@CXXFLAGS@%$CXXFLAGS%g
+s%@DEFS@%$DEFS%g
+s%@LDFLAGS@%$LDFLAGS%g
+s%@LIBS@%$LIBS%g
+s%@exec_prefix@%$exec_prefix%g
+s%@prefix@%$prefix%g
+s%@program_transform_name@%$program_transform_name%g
+s%@bindir@%$bindir%g
+s%@sbindir@%$sbindir%g
+s%@libexecdir@%$libexecdir%g
+s%@datadir@%$datadir%g
+s%@sysconfdir@%$sysconfdir%g
+s%@sharedstatedir@%$sharedstatedir%g
+s%@localstatedir@%$localstatedir%g
+s%@libdir@%$libdir%g
+s%@includedir@%$includedir%g
+s%@oldincludedir@%$oldincludedir%g
+s%@infodir@%$infodir%g
+s%@mandir@%$mandir%g
+s%@VERSION@%$VERSION%g
+s%@PACKAGE@%$PACKAGE%g
+s%@host@%$host%g
+s%@host_alias@%$host_alias%g
+s%@host_cpu@%$host_cpu%g
+s%@host_vendor@%$host_vendor%g
+s%@host_os@%$host_os%g
+s%@MD5_OBJ@%$MD5_OBJ%g
+s%@OPIE_OBJ@%$OPIE_OBJ%g
+s%@SET_MAKE@%$SET_MAKE%g
+s%@INSTALL_PROGRAM@%$INSTALL_PROGRAM%g
+s%@INSTALL_DATA@%$INSTALL_DATA%g
+s%@CC@%$CC%g
+s%@CPP@%$CPP%g
+s%@exeext@%$exeext%g
+s%@U@%$U%g
+s%@ANSI2KNR@%$ANSI2KNR%g
+s%@ALLOCA@%$ALLOCA%g
+s%@MSGFMT@%$MSGFMT%g
+s%@XGETTEXT@%$XGETTEXT%g
+s%@GMSGFMT@%$GMSGFMT%g
+s%@CATALOGS@%$CATALOGS%g
+s%@CATOBJEXT@%$CATOBJEXT%g
+s%@DATADIRNAME@%$DATADIRNAME%g
+s%@GMOFILES@%$GMOFILES%g
+s%@INSTOBJEXT@%$INSTOBJEXT%g
+s%@INTLLIBS@%$INTLLIBS%g
+s%@POFILES@%$POFILES%g
+s%@HAVE_NLS@%$HAVE_NLS%g
+s%@USE_NLS@%$USE_NLS%g
+s%@MAKEINFO@%$MAKEINFO%g
+
+CEOF
+EOF
+
+cat >> $CONFIG_STATUS <<\EOF
+
+# Split the substitutions into bite-sized pieces for seds with
+# small command number limits, like on Digital OSF/1 and HP-UX.
+ac_max_sed_cmds=90 # Maximum number of lines to put in a sed script.
+ac_file=1 # Number of current file.
+ac_beg=1 # First line for current file.
+ac_end=$ac_max_sed_cmds # Line after last line for current file.
+ac_more_lines=:
+ac_sed_cmds=""
+while $ac_more_lines; do
+ if test $ac_beg -gt 1; then
+ sed "1,${ac_beg}d; ${ac_end}q" conftest.subs > conftest.s$ac_file
+ else
+ sed "${ac_end}q" conftest.subs > conftest.s$ac_file
+ fi
+ if test ! -s conftest.s$ac_file; then
+ ac_more_lines=false
+ rm -f conftest.s$ac_file
+ else
+ if test -z "$ac_sed_cmds"; then
+ ac_sed_cmds="sed -f conftest.s$ac_file"
+ else
+ ac_sed_cmds="$ac_sed_cmds | sed -f conftest.s$ac_file"
+ fi
+ ac_file=`expr $ac_file + 1`
+ ac_beg=$ac_end
+ ac_end=`expr $ac_end + $ac_max_sed_cmds`
+ fi
+done
+if test -z "$ac_sed_cmds"; then
+ ac_sed_cmds=cat
+fi
+EOF
+
+cat >> $CONFIG_STATUS <<EOF
+
+CONFIG_FILES=\${CONFIG_FILES-"Makefile src/Makefile doc/Makefile util/Makefile po/Makefile.in"}
+EOF
+cat >> $CONFIG_STATUS <<\EOF
+for ac_file in .. $CONFIG_FILES; do if test "x$ac_file" != x..; then
+ # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in".
+ case "$ac_file" in
+ *:*) ac_file_in=`echo "$ac_file"|sed 's%[^:]*:%%'`
+ ac_file=`echo "$ac_file"|sed 's%:.*%%'` ;;
+ *) ac_file_in="${ac_file}.in" ;;
+ esac
+
+ # Adjust a relative srcdir, top_srcdir, and INSTALL for subdirectories.
+
+ # Remove last slash and all that follows it. Not all systems have dirname.
+ ac_dir=`echo $ac_file|sed 's%/[^/][^/]*$%%'`
+ if test "$ac_dir" != "$ac_file" && test "$ac_dir" != .; then
+ # The file is in a subdirectory.
+ test ! -d "$ac_dir" && mkdir "$ac_dir"
+ ac_dir_suffix="/`echo $ac_dir|sed 's%^\./%%'`"
+ # A "../" for each directory in $ac_dir_suffix.
+ ac_dots=`echo $ac_dir_suffix|sed 's%/[^/]*%../%g'`
+ else
+ ac_dir_suffix= ac_dots=
+ fi
+
+ case "$ac_given_srcdir" in
+ .) srcdir=.
+ if test -z "$ac_dots"; then top_srcdir=.
+ else top_srcdir=`echo $ac_dots|sed 's%/$%%'`; fi ;;
+ /*) srcdir="$ac_given_srcdir$ac_dir_suffix"; top_srcdir="$ac_given_srcdir" ;;
+ *) # Relative path.
+ srcdir="$ac_dots$ac_given_srcdir$ac_dir_suffix"
+ top_srcdir="$ac_dots$ac_given_srcdir" ;;
+ esac
+
+ case "$ac_given_INSTALL" in
+ [/$]*) INSTALL="$ac_given_INSTALL" ;;
+ *) INSTALL="$ac_dots$ac_given_INSTALL" ;;
+ esac
+
+ echo creating "$ac_file"
+ rm -f "$ac_file"
+ configure_input="Generated automatically from `echo $ac_file_in|sed 's%.*/%%'` by configure."
+ case "$ac_file" in
+ *Makefile*) ac_comsub="1i\\
+# $configure_input" ;;
+ *) ac_comsub= ;;
+ esac
+
+ ac_file_inputs=`echo $ac_file_in|sed -e "s%^%$ac_given_srcdir/%" -e "s%:% $ac_given_srcdir/%g"`
+ sed -e "$ac_comsub
+s%@configure_input@%$configure_input%g
+s%@srcdir@%$srcdir%g
+s%@top_srcdir@%$top_srcdir%g
+s%@INSTALL@%$INSTALL%g
+" $ac_file_inputs | (eval "$ac_sed_cmds") > $ac_file
+fi; done
+rm -f conftest.s*
+
+# These sed commands are passed to sed as "A NAME B NAME C VALUE D", where
+# NAME is the cpp macro being defined and VALUE is the value it is being given.
+#
+# ac_d sets the value in "#define NAME VALUE" lines.
+ac_dA='s%^\([ ]*\)#\([ ]*define[ ][ ]*\)'
+ac_dB='\([ ][ ]*\)[^ ]*%\1#\2'
+ac_dC='\3'
+ac_dD='%g'
+# ac_u turns "#undef NAME" with trailing blanks into "#define NAME VALUE".
+ac_uA='s%^\([ ]*\)#\([ ]*\)undef\([ ][ ]*\)'
+ac_uB='\([ ]\)%\1#\2define\3'
+ac_uC=' '
+ac_uD='\4%g'
+# ac_e turns "#undef NAME" without trailing blanks into "#define NAME VALUE".
+ac_eA='s%^\([ ]*\)#\([ ]*\)undef\([ ][ ]*\)'
+ac_eB='$%\1#\2define\3'
+ac_eC=' '
+ac_eD='%g'
+
+if test "${CONFIG_HEADERS+set}" != set; then
+EOF
+cat >> $CONFIG_STATUS <<EOF
+ CONFIG_HEADERS="src/config.h"
+EOF
+cat >> $CONFIG_STATUS <<\EOF
+fi
+for ac_file in .. $CONFIG_HEADERS; do if test "x$ac_file" != x..; then
+ # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in".
+ case "$ac_file" in
+ *:*) ac_file_in=`echo "$ac_file"|sed 's%[^:]*:%%'`
+ ac_file=`echo "$ac_file"|sed 's%:.*%%'` ;;
+ *) ac_file_in="${ac_file}.in" ;;
+ esac
+
+ echo creating $ac_file
+
+ rm -f conftest.frag conftest.in conftest.out
+ ac_file_inputs=`echo $ac_file_in|sed -e "s%^%$ac_given_srcdir/%" -e "s%:% $ac_given_srcdir/%g"`
+ cat $ac_file_inputs > conftest.in
+
+EOF
+
+# Transform confdefs.h into a sed script conftest.vals that substitutes
+# the proper values into config.h.in to produce config.h. And first:
+# Protect against being on the right side of a sed subst in config.status.
+# Protect against being in an unquoted here document in config.status.
+rm -f conftest.vals
+cat > conftest.hdr <<\EOF
+s/[\\&%]/\\&/g
+s%[\\$`]%\\&%g
+s%#define \([A-Za-z_][A-Za-z0-9_]*\) *\(.*\)%${ac_dA}\1${ac_dB}\1${ac_dC}\2${ac_dD}%gp
+s%ac_d%ac_u%gp
+s%ac_u%ac_e%gp
+EOF
+sed -n -f conftest.hdr confdefs.h > conftest.vals
+rm -f conftest.hdr
+
+# This sed command replaces #undef with comments. This is necessary, for
+# example, in the case of _POSIX_SOURCE, which is predefined and required
+# on some systems where configure will not decide to define it.
+cat >> conftest.vals <<\EOF
+s%^[ ]*#[ ]*undef[ ][ ]*[a-zA-Z_][a-zA-Z_0-9]*%/* & */%
+EOF
+
+# Break up conftest.vals because some shells have a limit on
+# the size of here documents, and old seds have small limits too.
+
+rm -f conftest.tail
+while :
+do
+ ac_lines=`grep -c . conftest.vals`
+ # grep -c gives empty output for an empty file on some AIX systems.
+ if test -z "$ac_lines" || test "$ac_lines" -eq 0; then break; fi
+ # Write a limited-size here document to conftest.frag.
+ echo ' cat > conftest.frag <<CEOF' >> $CONFIG_STATUS
+ sed ${ac_max_here_lines}q conftest.vals >> $CONFIG_STATUS
+ echo 'CEOF
+ sed -f conftest.frag conftest.in > conftest.out
+ rm -f conftest.in
+ mv conftest.out conftest.in
+' >> $CONFIG_STATUS
+ sed 1,${ac_max_here_lines}d conftest.vals > conftest.tail
+ rm -f conftest.vals
+ mv conftest.tail conftest.vals
+done
+rm -f conftest.vals
+
+cat >> $CONFIG_STATUS <<\EOF
+ rm -f conftest.frag conftest.h
+ echo "/* $ac_file. Generated automatically by configure. */" > conftest.h
+ cat conftest.in >> conftest.h
+ rm -f conftest.in
+ if cmp -s $ac_file conftest.h 2>/dev/null; then
+ echo "$ac_file is unchanged"
+ rm -f conftest.h
+ else
+ # Remove last slash and all that follows it. Not all systems have dirname.
+ ac_dir=`echo $ac_file|sed 's%/[^/][^/]*$%%'`
+ if test "$ac_dir" != "$ac_file" && test "$ac_dir" != .; then
+ # The file is in a subdirectory.
+ test ! -d "$ac_dir" && mkdir "$ac_dir"
+ fi
+ rm -f $ac_file
+ mv conftest.h $ac_file
+ fi
+fi; done
+
+EOF
+cat >> $CONFIG_STATUS <<EOF
+
+EOF
+cat >> $CONFIG_STATUS <<\EOF
+srcdir=$ac_given_srcdir # Advanced autoconf hackery
+ if test "x$srcdir" != "x."; then
+ if test "x`echo $srcdir | sed 's@/.*@@'`" = "x"; then
+ posrcprefix="$srcdir/"
+ else
+ posrcprefix="../$srcdir/"
+ fi
+ else
+ posrcprefix="../"
+ fi
+ rm -f po/POTFILES
+ echo "generating po/POTFILES from $srcdir/po/POTFILES.in"
+ sed -e "/^#/d" -e "/^\$/d" -e "s,.*, $posrcprefix& \\\\," \
+ -e "\$s/\(.*\) \\\\/\1/" \
+ < $srcdir/po/POTFILES.in > po/POTFILES
+ echo "creating po/Makefile"
+ sed -e "/POTFILES =/r po/POTFILES" po/Makefile.in > po/Makefile
+
+test -z "$CONFIG_HEADERS" || echo timestamp > stamp-h
+exit 0
+EOF
+chmod +x $CONFIG_STATUS
+rm -fr confdefs* $ac_clean_files
+test "$no_create" = yes || ${CONFIG_SHELL-/bin/sh} $CONFIG_STATUS || exit 1
+
--- /dev/null
+@echo off
+rem Configure batch file for `Wget' utility
+rem Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+rem This program is free software; you can redistribute it and/or modify
+rem it under the terms of the GNU General Public License as published by
+rem the Free Software Foundation; either version 2 of the License, or
+rem (at your option) any later version.
+
+rem This program is distributed in the hope that it will be useful,
+rem but WITHOUT ANY WARRANTY; without even the implied warranty of
+rem MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+rem GNU General Public License for more details.
+
+rem You should have received a copy of the GNU General Public License
+rem along with this program; if not, write to the Free Software
+rem Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+cls
+if .%1 == .--borland goto :borland
+if .%1 == .--msvc goto :msvc
+if not .%BORPATH% == . goto :borland
+if not .%1 == . goto :usage
+
+:msvc
+copy windows\config.h.ms src\config.h > nul
+copy windows\Makefile.top Makefile > nul
+copy windows\Makefile.src src\Makefile > nul
+copy windows\Makefile.doc doc\Makefile > nul
+
+echo Type NMAKE to start compiling.
+echo If it doesn't work, try executing MSDEV\BIN\VCVARS32.BAT first,
+echo and then NMAKE.
+goto :end
+
+:borland
+copy windows\config.h.bor src\config.h > nul
+copy windows\Makefile.top.bor Makefile > nul
+copy windows\Makefile.src.bor src\Makefile > nul
+copy windows\Makefile.doc doc\Makefile > nul
+
+echo Type MAKE to start compiling.
+goto :end
+
+:usage
+echo Usage: Configure [--borland | --msvc]
+:end
--- /dev/null
+dnl Template file for GNU Autoconf
+dnl Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+dnl This program is free software; you can redistribute it and/or modify
+dnl it under the terms of the GNU General Public License as published by
+dnl the Free Software Foundation; either version 2 of the License, or
+dnl (at your option) any later version.
+
+dnl This program is distributed in the hope that it will be useful,
+dnl but WITHOUT ANY WARRANTY; without even the implied warranty of
+dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+dnl GNU General Public License for more details.
+
+dnl You should have received a copy of the GNU General Public License
+dnl along with this program; if not, write to the Free Software
+dnl Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+dnl
+dnl Process this file with autoconf to produce a configure script.
+dnl
+
+AC_INIT(src/version.c)
+AC_PREREQ(2.12)
+AC_CONFIG_HEADER(src/config.h)
+
+dnl
+dnl What version of Wget are we building?
+dnl
+VERSION=`sed -e 's/^.*"\(.*\)";$/\1/' ${srcdir}/src/version.c`
+echo "configuring for GNU Wget $VERSION"
+AC_SUBST(VERSION)
+PACKAGE=wget
+AC_SUBST(PACKAGE)
+
+dnl
+dnl Get canonical host
+dnl
+AC_CANONICAL_HOST
+AC_DEFINE_UNQUOTED(OS_TYPE, "$host_os")
+
+dnl
+dnl Process features.
+dnl
+AC_ARG_WITH(socks,
+[ --with-socks use the socks library],
+[AC_DEFINE(HAVE_SOCKS)])
+
+AC_ARG_ENABLE(opie,
+[ --disable-opie disable support for opie or s/key FTP login],
+USE_OPIE=$enableval, USE_OPIE=yes)
+test x"${USE_OPIE}" = xyes && AC_DEFINE(USE_OPIE)
+
+AC_ARG_ENABLE(digest,
+[ --disable-digest disable support for HTTP digest authorization],
+USE_DIGEST=$enableval, USE_DIGEST=yes)
+test x"${USE_DIGEST}" = xyes && AC_DEFINE(USE_DIGEST)
+
+AC_ARG_ENABLE(debug,
+[ --disable-debug disable support for debugging output],
+DEBUG=$enableval, DEBUG=yes)
+test x"${DEBUG}" = xyes && AC_DEFINE(DEBUG)
+
+case "${USE_OPIE}${USE_DIGEST}" in
+*yes*)
+ MD5_OBJ='md5$o'
+esac
+if test x"$USE_OPIE" = xyes; then
+ OPIE_OBJ='ftp-opie$o'
+fi
+AC_SUBST(MD5_OBJ)
+AC_SUBST(OPIE_OBJ)
+
+dnl
+dnl Whether make sets $(MAKE)...
+dnl
+AC_PROG_MAKE_SET
+
+dnl
+dnl Find a good install
+dnl
+AC_PROG_INSTALL
+
+dnl
+dnl Find the compiler
+dnl
+
+dnl We want these before the checks, so the checks can modify their values.
+test -z "$CFLAGS" && CFLAGS= auto_cflags=1
+test -z "$CC" && cc_specified=yes
+
+AC_PROG_CC
+
+dnl
+dnl if the user hasn't specified CFLAGS, then
+dnl if compiler is gcc, then use -O2 and some warning flags
+dnl else use os-specific flags or -O
+dnl
+if test -n "$auto_cflags"; then
+ if test -n "$GCC"; then
+ CFLAGS="$CFLAGS -O2 -Wall -Wno-implicit"
+ else
+ case "$host_os" in
+ *hpux*) CFLAGS="$CFLAGS +O3" ;;
+ *ultrix* | *osf*) CFLAGS="$CFLAGS -O -Olimit 2000" ;;
+ *) CFLAGS="$CFLAGS -O" ;;
+ esac
+ fi
+fi
+
+dnl
+dnl Handle AIX
+dnl
+AC_AIX
+
+dnl
+dnl In case of {cyg,gnu}win32. Should be a _target_ test.
+dnl Might also be relevant for DJGPP.
+dnl
+case "$host_os" in
+ *win32) exeext='.exe';;
+ *) exeext='';;
+esac
+AC_SUBST(exeext)
+
+dnl
+dnl Check if we can handle prototypes.
+dnl
+AM_C_PROTOTYPES
+
+dnl
+dnl Checks for typedefs, structures, and compiler characteristics.
+dnl
+AC_C_CONST
+AC_TYPE_SIZE_T
+AC_TYPE_PID_T
+dnl #### This generates a warning. What do I do to shut it up?
+AC_C_BIGENDIAN
+
+dnl
+dnl Checks for headers
+dnl
+AC_CHECK_HEADERS(string.h stdarg.h unistd.h sys/time.h utime.h sys/utime.h)
+AC_CHECK_HEADERS(sys/select.h sys/utsname.h pwd.h signal.h)
+AC_HEADER_TIME
+
+dnl
+dnl Return type of signal-handlers
+dnl
+AC_TYPE_SIGNAL
+
+dnl
+dnl Check for struct utimbuf
+WGET_STRUCT_UTIMBUF
+
+dnl
+dnl Checks for library functions.
+dnl
+AC_FUNC_ALLOCA
+AC_CHECK_FUNCS(strdup strstr strcasecmp strncasecmp)
+AC_CHECK_FUNCS(gettimeofday mktime strptime)
+AC_CHECK_FUNCS(strerror vsnprintf select signal symlink access isatty)
+AC_CHECK_FUNCS(uname gethostname)
+
+AC_CHECK_FUNCS(gethostbyname, [], [
+ AC_CHECK_LIB(nsl, gethostbyname)
+])
+
+dnl
+dnl Checks for libraries.
+dnl
+
+AC_CHECK_LIB(socket, socket)
+
+dnl #### This appears to be deficient with later versions of SOCKS.
+if test "x${with_socks}" = xyes
+then
+ AC_CHECK_LIB(resolv, main)
+ AC_CHECK_LIB(socks, Rconnect)
+fi
+
+dnl Set of available languages.
+dnl
+dnl #### This kind of sucks. Shouldn't the configure process
+dnl determine this automagically by scanning `.po' files in `po/'
+dnl subdirectory?
+ALL_LINGUAS="cs de hr no it pt_BR"
+
+dnl internationalization macros
+WGET_WITH_NLS
+
+dnl
+dnl Find makeinfo. If makeinfo is not found, look for Emacs. If
+dnl Emacs cannot be found, look for XEmacs.
+dnl
+
+AC_CHECK_PROGS(MAKEINFO, makeinfo emacs xemacs)
+
+case "${MAKEINFO}" in
+ *makeinfo) MAKEINFO="${MAKEINFO} \$(srcdir)/wget.texi" ;;
+ *emacs | *xemacs) MAKEINFO="${MAKEINFO} -batch -q -no-site-file -eval '(find-file \"\$(srcdir)/wget.texi\")' -l texinfmt -f texinfo-format-buffer -f save-buffer" ;;
+ *) MAKEINFO="makeinfo \$(srcdir)/wget.texi" ;;
+esac
+
+dnl
+dnl Create output
+dnl
+AC_OUTPUT([Makefile src/Makefile doc/Makefile util/Makefile po/Makefile.in],
+[WGET_PROCESS_PO
+test -z "$CONFIG_HEADERS" || echo timestamp > stamp-h])
--- /dev/null
+1998-09-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (HTTP Options): Warn against masquerading as Mozilla.
+
+1998-05-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (clean): Remove HTML files.
+
+1998-05-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Various updates.
+ (Proxies): New node.
+
+1998-05-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * texinfo.tex: New file.
+
+1998-05-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (dvi): New target.
+
+1998-05-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Recursive Retrieval): Fix typo. Suggested by
+ Francois Pinard.
+
+1998-04-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Fixed @dircategory, courtesy Karl Eichwalder.
+
+1998-03-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in: Don't attempt to (un)install the man-page.
+
+1998-03-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.1: Removed it.
+
+1998-03-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Invoking): Split into more sections, analogous to
+ output of `wget --help'.
+ (HTTP Options): Document --user-agent.
+
+1998-03-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Contributors): Updated with oodles of new names.
+
+1998-02-22 Karl Eichwalder <ke@suse.de>
+
+ * Makefile.in (install.info): only info files (no *info.orig,
+ etc.).
+
+1998-01-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (install.wgetrc): Don't use `!'.
+
+1998-01-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Expanded.
+
+1998-01-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document `--cache'.
+ (Contributors): Added Brian.
+
+1997-07-26 Francois Pinard <pinard@iro.umontreal.ca>
+
+ * Makefile.in (install.wgetrc): Print the sample.wgetrc warning
+ only if the files actually differ.
+
+1998-01-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in: Use `test ...' rather than `[ ... ]'.
+
	* wget.texi (Advanced Options): Explained suffixes.
+
+1998-01-23 Karl Heuer <kwzh@gnu.org>
+
+ * wget.texi (Advanced Options): Updated.
+
+1997-12-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Mailing List): Update.
+
+1997-04-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document `--follow-ftp'.
+
+1997-02-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --proxy-user and
+ --proxy-passwd.
+
+1997-02-14 Karl Eichwalder <ke@ke.Central.DE>
+
+ * Makefile.in (install.wgetrc): Never ever nuke an existing rc file.
+
+1997-02-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Updated and revised.
+
+ * wget.texi (Contributors): Update.
+ (Advanced Options): Removed bogus **/* example.
+
+ * wget.texi: Use ``...'' instead of "...".
+
+1997-02-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Domain Acceptance): Document --exclude-domains.
+
+1997-01-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --ignore-length.
+
+1997-01-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Time-Stamping): New node.
+
+1997-01-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (distclean): Don't remove wget.info*.
+
+1997-01-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Mailing List): Update archive.
+ (Portability): Update the Windows port by Budor.
+
+1996-12-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Security Considerations): New node.
+
+1996-12-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --passive.
+
+1996-12-12 Dieter Baron <dillo@danbala.tuwien.ac.at>
+
+ * wget.texi (Advanced Usage): Would reference prep instead of
+ wuarchive.
+
+1996-11-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Documented --retr-symlinks.
+
+1996-11-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --delete-after.
+
+1996-11-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Portability): Add IRIX and FreeBSD as the "regular"
+ platforms.
+
+1996-11-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Usage): Document dot-style.
+
+1996-11-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Usage): Dot customization example.
+ (Sample Wgetrc): Likewise.
+
+1996-11-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Wgetrc Syntax): Explained emptying lists.
+
+1996-11-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document includes/excludes.
+ (Wgetrc Commands): Likewise.
+
+1996-11-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document headers.
+
+1996-11-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * sample.wgetrc: Added header examples.
+
+1996-11-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * sample.wgetrc: Rewritten.
+
+ * Makefile.in (install.wgetrc): Install sample.wgetrc.
+ (uninstall.info): Use $(RM).
+
+1996-11-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Docfixes.
+
+1996-11-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Proofread; *many* docfixes.
+
+1996-11-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Introduction): Updated robots mailing list address.
+
+1996-11-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Minor docfixes.
+
+1996-10-26 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Usage): Document passwords better.
+
+ * Makefile.in (distclean): Remove wget.1 on make distclean.
+
+ * wget.texi (Option Syntax): Explain --.
+
+1996-10-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * fetch.texi (No Parent): update.
+
+1996-10-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * fetch.texi (Advanced Options): Docfix.
+
+1996-10-17 Tage Stabell-Kulo <tage@acm.org>
+
+ * geturl.texi (Advanced Options): Sort alphabetically.
+
+1996-10-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Advanced Options): Describe -nr.
+ (Advanced Usage): Moved -O pipelines to Guru Usage.
+ (Simple Usage): Update.
+ (Advanced Options): Docfix.
+
+ * Makefile.in (RM): RM = rm -f.
+
+1996-10-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Guru Usage): Add proxy-filling example.
+
+1996-10-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Advanced Options): Added --spider.
+
+1996-10-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Advanced Options): Added -X.
+
+ * Makefile.in: Added $(srcdir) where appropriate (I hope).
--- /dev/null
+# Makefile for `wget' utility
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+#
+# Version: @VERSION@
+#
+
+SHELL = /bin/sh
+
+# Program to format Texinfo source into Info files.
+MAKEINFO = @MAKEINFO@
+# Program to format Texinfo source into DVI files.
+TEXI2DVI = texi2dvi
+# Program to convert DVI files to PostScript
+DVIPS = dvips -D 300
+# Program to convert texinfo files to html
+TEXI2HTML = texi2html -expandinfo -split_chapter
+
+top_srcdir = @top_srcdir@
+srcdir = @srcdir@
+VPATH = @srcdir@
+
+prefix = @prefix@
+infodir = @infodir@
+mandir = @mandir@
+manext = 1
+sysconfdir = @sysconfdir@
+
+INSTALL = @INSTALL@
+INSTALL_DATA = @INSTALL_DATA@
+RM = rm -f
+
+MAN = wget.$(manext)
+WGETRC = $(sysconfdir)/wgetrc
+
+#
+# Dependencies for building
+#
+
+all: wget.info # wget.cat
+
+everything: all wget_us.ps wget_a4.ps wget_toc.html
+
+wget.info: wget.texi
+ -$(MAKEINFO)
+
+#wget.cat: $(MAN)
+# nroff -man $(srcdir)/$(MAN) > wget.cat
+
+dvi: wget.dvi
+
+wget.dvi: wget.texi
+ $(TEXI2DVI) $(srcdir)/wget.texi
+
+wget_us.ps: wget.dvi
+ $(DVIPS) -t letter -o $@ wget.dvi
+
+wget_a4.ps: wget.dvi
+ $(DVIPS) -t a4 -o $@ wget.dvi
+
+wget_toc.html: wget.texi
+ $(TEXI2HTML) $(srcdir)/wget.texi
+
+#
+# Dependencies for installing
+#
+
+# install all the documentation
+install: install.info install.wgetrc # install.man
+
+# uninstall all the documentation
+uninstall: uninstall.info # uninstall.man
+
+# install info pages, creating install directory if necessary
+install.info: wget.info
+ $(top_srcdir)/mkinstalldirs $(infodir)
+ -for file in $(srcdir)/wget.info $(srcdir)/wget.info-*[0-9]; do \
+ test -f "$$file" && $(INSTALL_DATA) $$file $(infodir) ; \
+ done
+
+# install man page, creating install directory if necessary
+#install.man:
+# $(top_srcdir)/mkinstalldirs $(mandir)/man$(manext)
+# $(INSTALL_DATA) $(srcdir)/$(MAN) $(mandir)/man$(manext)/$(MAN)
+
+# install sample.wgetrc
+install.wgetrc:
+ $(top_srcdir)/mkinstalldirs $(sysconfdir)
+ @if test -f $(WGETRC); then \
+ if cmp -s $(srcdir)/sample.wgetrc $(WGETRC); then echo ""; \
+ else \
+ echo ' $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC).new'; \
+ $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC).new; \
+ echo "WARNING: File \`$(WGETRC)' already exists and is spared."; \
+ echo " You might want to consider \`$(WGETRC).new',"; \
+ echo " and merge both into \`$(WGETRC)', for the best."; \
+ fi; \
+ else \
+ $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC); \
+ fi
+
+# uninstall info pages
+uninstall.info:
+ $(RM) $(infodir)/wget.info*
+
+# uninstall man page
+#uninstall.man:
+# $(RM) $(mandir)/man$(manext)/$(MAN)
+
+#
+# Dependencies for cleanup
+#
+
+clean:
+ $(RM) *~ *.bak *.cat *.html
+ $(RM) *.dvi *.aux *.cp *.cps *.fn *.toc *.tp *.vr *.ps *.ky *.pg *.log
+
+distclean: clean
+ $(RM) Makefile
+
+realclean: distclean
+ $(RM) wget.info*
+
+#
+# Dependencies for maintenance
+#
+
+subdir = doc
+
+Makefile: Makefile.in ../config.status
+ cd .. && CONFIG_FILES=$(subdir)/$@ CONFIG_HEADERS= ./config.status
--- /dev/null
+.TH ANSI2KNR 1 "31 December 1990"
+.SH NAME
+ansi2knr \- convert ANSI C to Kernighan & Ritchie C
+.SH SYNOPSIS
+.I ansi2knr
+input_file output_file
+.SH DESCRIPTION
+If no output_file is supplied, output goes to stdout.
+.br
+There are no error messages.
+.sp
+.I ansi2knr
+recognizes functions by seeing a non-keyword identifier at the left margin, followed by a left parenthesis, with a right parenthesis as the last character on the line. It will recognize a multi-line header if the last character on each line but the last is a left parenthesis or comma. These algorithms ignore whitespace and comments, except that the function name must be the first thing on the line.
+.sp
+The following constructs will confuse it:
+.br
+ - Any other construct that starts at the left margin and follows the above syntax (such as a macro or function call).
+.br
+ - Macros that tinker with the syntax of the function header.
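+.sp
+As an illustration (hypothetical input; the exact whitespace of the
+converted output may differ), a one-line ANSI C function header such as
+.br
+	int sum(int a, int b)
+.br
+would be rewritten in K&R style roughly as
+.br
+	int sum(a, b)
+.br
+	int a;
+.br
+	int b;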
--- /dev/null
+###
+### Sample Wget initialization file .wgetrc
+###
+
+## You can use this file to change the default behaviour of wget or to
+## avoid having to type many many command-line options. This file does
+## not contain a comprehensive list of commands -- look at the manual
+## to find out what you can put into this file.
+##
+## The Wget initialization file can reside in /usr/local/etc/wgetrc
+## (global, for all users) or $HOME/.wgetrc (for a single user).
+##
+## To use any of the settings in this file, you will have to uncomment
+## them (and probably change them).
+
+
+##
+## Global settings (useful for setting up in /usr/local/etc/wgetrc).
+## Think well before you change them, since they may reduce wget's
+## functionality, and make it behave contrary to the documentation:
+##
+
+# You can set a retrieval quota for beginners by specifying a value
+# optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
+# default quota is unlimited.
+#quota = inf
+
+# You can lower (or raise) the default number of retries when
+# downloading a file (default is 20).
+#tries = 20
+
+# Lowering the maximum depth of the recursive retrieval is handy to
+# prevent newbies from going too "deep" when they unwittingly start
+# the recursive retrieval. The default is 5.
+#reclevel = 5
+
+# Many sites are behind firewalls that do not allow initiation of
+# connections from the outside. On these sites you have to use the
+# `passive' feature of FTP. If you are behind such a firewall, you
+# can turn this on to make Wget use passive FTP by default.
+#passive_ftp = off
+
+
+##
+## Local settings (for a user to set in his $HOME/.wgetrc). It is
+## *highly* undesirable to put these settings in the global file, since
+## they are potentially dangerous to "normal" users.
+##
+## Even when setting up your own ~/.wgetrc, you should know what you
+## are doing before doing so.
+##
+
+# Set this to on to use timestamping by default:
+#timestamping = off
+
+# It is a good idea to make Wget send your email address in a `From:'
+# header with your request (so that server administrators can contact
+# you in case of errors). Wget does *not* send `From:' by default.
+#header = From: Your Name <username@site.domain>
+
+# You can set up other headers, like Accept-Language. Accept-Language
+# is *not* sent by default.
+#header = Accept-Language: en
+
+# You can set the default proxy for Wget to use. It will override the
+# value in the environment.
+#http_proxy = http://proxy.yoyodyne.com:18023/
+
+# If you do not want to use a proxy at all, set this to off.
+#use_proxy = on
+
+# You can customize the retrieval progress display. Valid styles are
+# default, binary, mega and micro.
+#dot_style = default
+
+# Setting this to off makes Wget not download /robots.txt. Be sure to
+# know *exactly* what /robots.txt is and how it is used before changing
+# the default!
+#robots = on
+
+# It can be useful to make Wget wait between connections. Set this to
+# the number of seconds you want Wget to wait.
+#wait = 0
+
+# You can force creation of a directory structure, even if a single
+# file is being retrieved, by setting this to on.
+#dirstruct = off
+
+# You can turn on recursive retrieving by default (don't do this if
+# you are not sure you know what it means) by setting this to on.
+#recursive = off
+
+# To have Wget follow FTP links from HTML files by default, set this
+# to on:
+#follow_ftp = off
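+
+# As a combined illustration (hypothetical values -- adjust to taste,
+# and see the manual for the full list of commands), a careful user
+# who mirrors a slow site by default might uncomment something like:
+#tries = 3
+#wait = 2
+#timestamping = on
+#passive_ftp = on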
--- /dev/null
+% texinfo.tex -- TeX macros to handle Texinfo files.
+% $Id: texinfo.tex 2 1999-12-02 07:42:23Z kwget $
+%
+% Copyright (C) 1985, 86, 88, 90, 91, 92, 93, 94, 95, 96, 97, 98
+% Free Software Foundation, Inc.
+%
+% This texinfo.tex file is free software; you can redistribute it and/or
+% modify it under the terms of the GNU General Public License as
+% published by the Free Software Foundation; either version 2, or (at
+% your option) any later version.
+%
+% This texinfo.tex file is distributed in the hope that it will be
+% useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+% of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+% General Public License for more details.
+%
+% You should have received a copy of the GNU General Public License
+% along with this texinfo.tex file; see the file COPYING. If not, write
+% to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+% Boston, MA 02111-1307, USA.
+%
+% In other words, you are welcome to use, share and improve this program.
+% You are forbidden to forbid anyone else to use, share and improve
+% what you give them. Help stamp out software-hoarding!
+%
+% Please try the latest version of texinfo.tex before submitting bug
+% reports; you can get the latest version from:
+% /home/gd/gnu/doc/texinfo.tex on the GNU machines.
+% ftp://ftp.gnu.org/pub/gnu/texinfo.tex
+% (and all GNU mirrors)
+% ftp://tug.org/tex/texinfo.tex
+% ftp://ctan.org/macros/texinfo/texinfo.tex
+% (and all CTAN mirrors, finger ctan@tug.org for a list).
+%
+% Send bug reports to bug-texinfo@gnu.org.
+% Please include a precise test case in each bug report,
+% including a complete document with which we can reproduce the problem.
+%
+% Texinfo macros (with @macro) are *not* supported by texinfo.tex. You
+% have to run makeinfo -E to expand macros first; the texi2dvi script
+% does this.
+%
+% To process a Texinfo manual with TeX, it's most reliable to use the
+% texi2dvi shell script that comes with the distribution. For simple
+% manuals, you can get away with:
+% tex foo.texi
+% texindex foo.??
+% tex foo.texi
+% tex foo.texi
+% dvips foo.dvi -o # or whatever, to process the dvi file.
+% The extra runs of TeX get the cross-reference information correct.
+% Sometimes one run after texindex suffices, and sometimes you need more
+% than two; texi2dvi does it as many times as necessary.
+
+
+% Make it possible to create a .fmt file just by loading this file:
+% if the underlying format is not loaded, start by loading it now.
+% Added by gildea November 1993.
+\expandafter\ifx\csname fmtname\endcsname\relax\input plain\fi
+
+% This automatically updates the version number based on RCS.
+\def\deftexinfoversion$#1: #2 ${\def\texinfoversion{#2}}
+\deftexinfoversion$Revision: 2 $
+\message{Loading texinfo package [Version \texinfoversion]:}
+
+% If in a .fmt file, print the version number
+% and turn on active characters that we couldn't do earlier because
+% they might have appeared in the input file name.
+\everyjob{\message{[Texinfo version \texinfoversion]}\message{}
+ \catcode`+=\active \catcode`\_=\active}
+
+% Save some parts of plain tex whose names we will redefine.
+
+\let\ptexb=\b
+\let\ptexbullet=\bullet
+\let\ptexc=\c
+\let\ptexcomma=\,
+\let\ptexdot=\.
+\let\ptexdots=\dots
+\let\ptexend=\end
+\let\ptexequiv=\equiv
+\let\ptexexclam=\!
+\let\ptexi=\i
+\let\ptexlbrace=\{
+\let\ptexrbrace=\}
+\let\ptexstar=\*
+\let\ptext=\t
+
+% We never want plain's outer \+ definition in Texinfo.
+% For @tex, we can use \tabalign.
+\let\+ = \relax
+
+
+\message{Basics,}
+\chardef\other=12
+
+% If this character appears in an error message or help string, it
+% starts a new line in the output.
+\newlinechar = `^^J
+
+% Set up fixed words for English if not already set.
+\ifx\putwordAppendix\undefined \gdef\putwordAppendix{Appendix}\fi
+\ifx\putwordChapter\undefined \gdef\putwordChapter{Chapter}\fi
+\ifx\putwordfile\undefined \gdef\putwordfile{file}\fi
+\ifx\putwordInfo\undefined \gdef\putwordInfo{Info}\fi
+\ifx\putwordMethodon\undefined \gdef\putwordMethodon{Method on}\fi
+\ifx\putwordon\undefined \gdef\putwordon{on}\fi
+\ifx\putwordpage\undefined \gdef\putwordpage{page}\fi
+\ifx\putwordsection\undefined \gdef\putwordsection{section}\fi
+\ifx\putwordSection\undefined \gdef\putwordSection{Section}\fi
+\ifx\putwordsee\undefined \gdef\putwordsee{see}\fi
+\ifx\putwordSee\undefined \gdef\putwordSee{See}\fi
+\ifx\putwordShortContents\undefined \gdef\putwordShortContents{Short Contents}\fi
+\ifx\putwordTableofContents\undefined\gdef\putwordTableofContents{Table of Contents}\fi
+
+% Ignore a token.
+%
+\def\gobble#1{}
+
+\hyphenation{ap-pen-dix}
+\hyphenation{mini-buf-fer mini-buf-fers}
+\hyphenation{eshell}
+\hyphenation{white-space}
+
+% Margin to add to right of even pages, to left of odd pages.
+\newdimen \bindingoffset
+\newdimen \normaloffset
+\newdimen\pagewidth \newdimen\pageheight
+
+% Sometimes it is convenient to have everything in the transcript file
+% and nothing on the terminal. We don't just call \tracingall here,
+% since that produces some useless output on the terminal.
+%
+\def\gloggingall{\begingroup \globaldefs = 1 \loggingall \endgroup}%
+\def\loggingall{\tracingcommands2 \tracingstats2
+ \tracingpages1 \tracingoutput1 \tracinglostchars1
+ \tracingmacros2 \tracingparagraphs1 \tracingrestores1
+ \showboxbreadth\maxdimen\showboxdepth\maxdimen
+}%
+
+% For @cropmarks command.
+% Do @cropmarks to get crop marks.
+%
+\newif\ifcropmarks
+\let\cropmarks = \cropmarkstrue
+%
+% Dimensions to add cropmarks at corners.
+% Added by P. A. MacKay, 12 Nov. 1986
+%
+\newdimen\cornerlong \newdimen\cornerthick
+\newdimen\topandbottommargin
+\newdimen\outerhsize \newdimen\outervsize
+\cornerlong=1pc\cornerthick=.3pt % These set size of cropmarks
+\outerhsize=7in
+%\outervsize=9.5in
+% Alternative @smallbook page size is 9.25in
+\outervsize=9.25in
+\topandbottommargin=.75in
+
+% Main output routine.
+\chardef\PAGE = 255
+\output = {\onepageout{\pagecontents\PAGE}}
+
+\newbox\headlinebox
+\newbox\footlinebox
+
+% \onepageout takes a vbox as an argument. Note that \pagecontents
+% does insertions, but you have to call it yourself.
+\def\onepageout#1{%
+ \ifcropmarks \hoffset=0pt \else \hoffset=\normaloffset \fi
+ %
+ \ifodd\pageno \advance\hoffset by \bindingoffset
+ \else \advance\hoffset by -\bindingoffset\fi
+ %
+ % Do this outside of the \shipout so @code etc. will be expanded in
+ % the headline as they should be, not taken literally (outputting ''code).
+ \setbox\headlinebox = \vbox{\let\hsize=\pagewidth \makeheadline}%
+ \setbox\footlinebox = \vbox{\let\hsize=\pagewidth \makefootline}%
+ %
+ {%
+ % Have to do this stuff outside the \shipout because we want it to
+ % take effect in \write's, yet the group defined by the \vbox ends
+ % before the \shipout runs.
+ %
+ \escapechar = `\\ % use backslash in output files.
+ \indexdummies % don't expand commands in the output.
+ \normalturnoffactive % \ in index entries must not stay \, e.g., if
+ % the page break happens to be in the middle of an example.
+ \shipout\vbox{%
+ \ifcropmarks \vbox to \outervsize\bgroup
+ \hsize = \outerhsize
+ \line{\ewtop\hfil\ewtop}%
+ \nointerlineskip
+ \line{%
+ \vbox{\moveleft\cornerthick\nstop}%
+ \hfill
+ \vbox{\moveright\cornerthick\nstop}%
+ }%
+ \vskip\topandbottommargin
+ \line\bgroup
+ \hfil % center the page within the outer (page) hsize.
+ \ifodd\pageno\hskip\bindingoffset\fi
+ \vbox\bgroup
+ \fi
+ %
+ \unvbox\headlinebox
+ \pagebody{#1}%
+ \ifdim\ht\footlinebox > 0pt
+ % Only leave this space if the footline is nonempty.
+ % (We lessened \vsize for it in \oddfootingxxx.)
+ % The \baselineskip=24pt in plain's \makefootline has no effect.
+ \vskip 2\baselineskip
+ \unvbox\footlinebox
+ \fi
+ %
+ \ifcropmarks
+ \egroup % end of \vbox\bgroup
+ \hfil\egroup % end of (centering) \line\bgroup
+ \vskip\topandbottommargin plus1fill minus1fill
+ \boxmaxdepth = \cornerthick
+ \line{%
+ \vbox{\moveleft\cornerthick\nsbot}%
+ \hfill
+ \vbox{\moveright\cornerthick\nsbot}%
+ }%
+ \nointerlineskip
+ \line{\ewbot\hfil\ewbot}%
+ \egroup % \vbox from first cropmarks clause
+ \fi
+ }% end of \shipout\vbox
+ }% end of group with \turnoffactive
+ \advancepageno
+ \ifnum\outputpenalty>-20000 \else\dosupereject\fi
+}
+
+\newinsert\margin \dimen\margin=\maxdimen
+
+\def\pagebody#1{\vbox to\pageheight{\boxmaxdepth=\maxdepth #1}}
+{\catcode`\@ =11
+\gdef\pagecontents#1{\ifvoid\topins\else\unvbox\topins\fi
+% marginal hacks, juha@viisa.uucp (Juha Takala)
+\ifvoid\margin\else % marginal info is present
+ \rlap{\kern\hsize\vbox to\z@{\kern1pt\box\margin \vss}}\fi
+\dimen@=\dp#1 \unvbox#1
+\ifvoid\footins\else\vskip\skip\footins\footnoterule \unvbox\footins\fi
+\ifr@ggedbottom \kern-\dimen@ \vfil \fi}
+}
+
+% Here are the rules for the cropmarks. Note that they are
+% offset so that the space between them is truly \outerhsize or \outervsize
+% (P. A. MacKay, 12 November, 1986)
+%
+\def\ewtop{\vrule height\cornerthick depth0pt width\cornerlong}
+\def\nstop{\vbox
+ {\hrule height\cornerthick depth\cornerlong width\cornerthick}}
+\def\ewbot{\vrule height0pt depth\cornerthick width\cornerlong}
+\def\nsbot{\vbox
+ {\hrule height\cornerlong depth\cornerthick width\cornerthick}}
+
+% Parse an argument, then pass it to #1. The argument is the rest of
+% the input line (except we remove a trailing comment). #1 should be a
+% macro which expects an ordinary undelimited TeX argument.
+%
+\def\parsearg#1{%
+ \let\next = #1%
+ \begingroup
+ \obeylines
+ \futurelet\temp\parseargx
+}
+
+% If the next token is an obeyed space (from an @example environment or
+% the like), remove it and recurse. Otherwise, we're done.
+\def\parseargx{%
+ % \obeyedspace is defined far below, after the definition of \sepspaces.
+ \ifx\obeyedspace\temp
+ \expandafter\parseargdiscardspace
+ \else
+ \expandafter\parseargline
+ \fi
+}
+
+% Remove a single space (as the delimiter token to the macro call).
+{\obeyspaces %
+ \gdef\parseargdiscardspace {\futurelet\temp\parseargx}}
+
+{\obeylines %
+ \gdef\parseargline#1^^M{%
+ \endgroup % End of the group started in \parsearg.
+ %
+ % First remove any @c comment, then any @comment.
+ % Result of each macro is put in \toks0.
+ \argremovec #1\c\relax %
+ \expandafter\argremovecomment \the\toks0 \comment\relax %
+ %
+ % Call the caller's macro, saved as \next in \parsearg.
+ \expandafter\next\expandafter{\the\toks0}%
+ }%
+}
+
+% Since all \c{,omment} does is throw away the argument, we can let TeX
+% do that for us. The \relax here is matched by the \relax in the call
+% in \parseargline; it could be more or less anything, its purpose is
+% just to delimit the argument to the \c.
+\def\argremovec#1\c#2\relax{\toks0 = {#1}}
+\def\argremovecomment#1\comment#2\relax{\toks0 = {#1}}
+
+% \argremovec{,omment} might leave us with trailing spaces, though; e.g.,
+% @end itemize @c foo
+% will have two active spaces as part of the argument with the
+% `itemize'. Here we remove all active spaces from #1, and assign the
+% result to \toks0.
+%
+% This loses if there are any *other* active characters besides spaces
+% in the argument -- _ ^ +, for example -- since they get expanded.
+% Fortunately, Texinfo does not define any such commands. (If it ever
+% does, the catcode of the characters in question will have to be changed
+% here.) But this means we cannot call \removeactivespaces as part of
+% \argremovec{,omment}, since @c uses \parsearg, and thus the argument
+% that \parsearg gets might well have any character at all in it.
+%
+\def\removeactivespaces#1{%
+ \begingroup
+ \ignoreactivespaces
+ \edef\temp{#1}%
+ \global\toks0 = \expandafter{\temp}%
+ \endgroup
+}
+
+% Change the active space to expand to nothing.
+%
+\begingroup
+ \obeyspaces
+ \gdef\ignoreactivespaces{\obeyspaces\let =\empty}
+\endgroup
+
+
+\def\flushcr{\ifx\par\lisppar \def\next##1{}\else \let\next=\relax \fi \next}
+
+%% These are used to keep @begin/@end levels from running away
+%% Call \inENV within environments (after a \begingroup)
+\newif\ifENV \ENVfalse \def\inENV{\ifENV\relax\else\ENVtrue\fi}
+\def\ENVcheck{%
+\ifENV\errmessage{Still within an environment. Type Return to continue.}
+\endgroup\fi} % This is not perfect, but it should reduce lossage
+
+% @begin foo is the same as @foo, for now.
+\newhelp\EMsimple{Type <Return> to continue.}
+
+\outer\def\begin{\parsearg\beginxxx}
+
+\def\beginxxx #1{%
+\expandafter\ifx\csname #1\endcsname\relax
+{\errhelp=\EMsimple \errmessage{Undefined command @begin #1}}\else
+\csname #1\endcsname\fi}
+
+% @end foo executes the definition of \Efoo.
+%
+\def\end{\parsearg\endxxx}
+\def\endxxx #1{%
+ \removeactivespaces{#1}%
+ \edef\endthing{\the\toks0}%
+ %
+ \expandafter\ifx\csname E\endthing\endcsname\relax
+ \expandafter\ifx\csname \endthing\endcsname\relax
+ % There's no \foo, i.e., no ``environment'' foo.
+ \errhelp = \EMsimple
+ \errmessage{Undefined command `@end \endthing'}%
+ \else
+ \unmatchedenderror\endthing
+ \fi
+ \else
+ % Everything's ok; the right environment has been started.
+ \csname E\endthing\endcsname
+ \fi
+}
+
+% There is an environment #1, but it hasn't been started. Give an error.
+%
+\def\unmatchedenderror#1{%
+ \errhelp = \EMsimple
+ \errmessage{This `@end #1' doesn't have a matching `@#1'}%
+}
+
+% Define the control sequence \E#1 to give an unmatched @end error.
+%
+\def\defineunmatchedend#1{%
+ \expandafter\def\csname E#1\endcsname{\unmatchedenderror{#1}}%
+}
+
+
+% Single-spacing is done by various environments (specifically, in
+% \nonfillstart and \quotations).
+\newskip\singlespaceskip \singlespaceskip = 12.5pt
+\def\singlespace{%
+ % Why was this kern here? It messes up equalizing space above and below
+ % environments. --karl, 6may93
+ %{\advance \baselineskip by -\singlespaceskip
+ %\kern \baselineskip}%
+ \setleading \singlespaceskip
+}
+
+%% Simple single-character @ commands
+
+% @@ prints an @
+% Kludge this until the fonts are right (grr).
+\def\@{{\tt\char64}}
+
+% This is turned off because it was never documented
+% and you can use @w{...} around a quote to suppress ligatures.
+%% Define @` and @' to be the same as ` and '
+%% but suppressing ligatures.
+%\def\`{{`}}
+%\def\'{{'}}
+
+% Used to generate quoted braces.
+\def\mylbrace {{\tt\char123}}
+\def\myrbrace {{\tt\char125}}
+\let\{=\mylbrace
+\let\}=\myrbrace
+\begingroup
+ % Definitions to produce actual \{ & \} command in an index.
+ \catcode`\{ = 12 \catcode`\} = 12
+ \catcode`\[ = 1 \catcode`\] = 2
+ \catcode`\@ = 0 \catcode`\\ = 12
+ @gdef@lbracecmd[\{]%
+ @gdef@rbracecmd[\}]%
+@endgroup
+
+% Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent
+% Others are defined by plain TeX: @` @' @" @^ @~ @= @v @H.
+\let\, = \c
+\let\dotaccent = \.
+\def\ringaccent#1{{\accent23 #1}}
+\let\tieaccent = \t
+\let\ubaraccent = \b
+\let\udotaccent = \d
+
+% Other special characters: @questiondown @exclamdown
+% Plain TeX defines: @AA @AE @O @OE @L (and lowercase versions) @ss.
+\def\questiondown{?`}
+\def\exclamdown{!`}
+
+% Dotless i and dotless j, used for accents.
+\def\imacro{i}
+\def\jmacro{j}
+\def\dotless#1{%
+ \def\temp{#1}%
+ \ifx\temp\imacro \ptexi
+ \else\ifx\temp\jmacro \j
+ \else \errmessage{@dotless can be used only with i or j}%
+ \fi\fi
+}
+
+% Be sure we're in horizontal mode when doing a tie, since we make space
+% equivalent to this in @example-like environments. Otherwise, a space
+% at the beginning of a line will start with \penalty -- and
+% since \penalty is valid in vertical mode, we'd end up putting the
+% penalty on the vertical list instead of in the new paragraph.
+{\catcode`@ = 11
+ % Avoid using \@M directly, because that causes trouble
+ % if the definition is written into an index file.
+ \global\let\tiepenalty = \@M
+ \gdef\tie{\leavevmode\penalty\tiepenalty\ }
+}
+
+% @: forces normal size whitespace following.
+\def\:{\spacefactor=1000 }
+
+% @* forces a line break.
+\def\*{\hfil\break\hbox{}\ignorespaces}
+
+% @. is an end-of-sentence period.
+\def\.{.\spacefactor=3000 }
+
+% @! is an end-of-sentence bang.
+\def\!{!\spacefactor=3000 }
+
+% @? is an end-of-sentence query.
+\def\?{?\spacefactor=3000 }
+
+% @w prevents a word break. Without the \leavevmode, @w at the
+% beginning of a paragraph, when TeX is still in vertical mode, would
+% produce a whole line of output instead of starting the paragraph.
+\def\w#1{\leavevmode\hbox{#1}}
+
+% @group ... @end group forces ... to be all on one page, by enclosing
+% it in a TeX vbox. We use \vtop instead of \vbox to construct the box
+% to keep its height that of a normal line. According to the rules for
+% \topskip (p.114 of the TeXbook), the glue inserted is
+% max (\topskip - \ht (first item), 0). If that height is large,
+% therefore, no glue is inserted, and the space between the headline and
+% the text is small, which looks bad.
+%
+\def\group{\begingroup
+ \ifnum\catcode13=\active \else
+ \errhelp = \groupinvalidhelp
+ \errmessage{@group invalid in context where filling is enabled}%
+ \fi
+ %
+ % The \vtop we start below produces a box with normal height and large
+ % depth; thus, TeX puts \baselineskip glue before it, and (when the
+ % next line of text is done) \lineskip glue after it. (See p.82 of
+ % the TeXbook.) Thus, space below is not quite equal to space
+ % above. But it's pretty close.
+ \def\Egroup{%
+ \egroup % End the \vtop.
+ \endgroup % End the \group.
+ }%
+ %
+ \vtop\bgroup
+ % We have to put a strut on the last line in case the @group is in
+ % the midst of an example, rather than completely enclosing it.
+ % Otherwise, the interline space between the last line of the group
+ % and the first line afterwards is too small. But we can't put the
+ % strut in \Egroup, since there it would be on a line by itself.
+ % Hence this just inserts a strut at the beginning of each line.
+ \everypar = {\strut}%
+ %
+ % Since we have a strut on every line, we don't need any of TeX's
+ % normal interline spacing.
+ \offinterlineskip
+ %
+ % OK, but now we have to do something about blank
+ % lines in the input in @example-like environments, which normally
+ % just turn into \lisppar, which will insert no space now that we've
+ % turned off the interline space. Simplest is to make them be an
+ % empty paragraph.
+ \ifx\par\lisppar
+ \edef\par{\leavevmode \par}%
+ %
+ % Reset ^^M's definition to new definition of \par.
+ \obeylines
+ \fi
+ %
+ % Do @comment since we are called inside an environment such as
+ % @example, where each end-of-line in the input causes an
+ % end-of-line in the output. We don't want the end-of-line after
+ % the `@group' to put extra space in the output. Since @group
+ % should appear on a line by itself (according to the Texinfo
+ % manual), we don't worry about eating any user text.
+ \comment
+}
+%
+% TeX puts in an \escapechar (i.e., `@') at the beginning of the help
+% message, so this ends up printing `@group can only ...'.
+%
+\newhelp\groupinvalidhelp{%
+group can only be used in environments such as @example,^^J%
+where each line of input produces a line of output.}
+
+% @need space-in-mils
+% forces a page break if there is not space-in-mils remaining.
+
+\newdimen\mil \mil=0.001in
+
+\def\need{\parsearg\needx}
+
+% Old definition--didn't work.
+%\def\needx #1{\par %
+%% This method tries to make TeX break the page naturally
+%% if the depth of the box does not fit.
+%{\baselineskip=0pt%
+%\vtop to #1\mil{\vfil}\kern -#1\mil\penalty 10000
+%\prevdepth=-1000pt
+%}}
+
+\def\needx#1{%
+ % Go into vertical mode, so we don't make a big box in the middle of a
+ % paragraph.
+ \par
+ %
+ % Don't add any leading before our big empty box, but allow a page
+ % break, since the best break might be right here.
+ \allowbreak
+ \nointerlineskip
+ \vtop to #1\mil{\vfil}%
+ %
+ % TeX does not even consider page breaks if a penalty added to the
+ % main vertical list is 10000 or more. But in order to see if the
+ % empty box we just added fits on the page, we must make it consider
+ % page breaks. On the other hand, we don't want to actually break the
+ % page after the empty box. So we use a penalty of 9999.
+ %
+ % There is an extremely small chance that TeX will actually break the
+ % page at this \penalty, if there are no other feasible breakpoints in
+ % sight. (If the user is using lots of big @group commands, which
+ % almost-but-not-quite fill up a page, TeX will have a hard time doing
+ % good page breaking, for example.) However, I could not construct an
+ % example where a page broke at this \penalty; if it happens in a real
+ % document, then we can reconsider our strategy.
+ \penalty9999
+ %
+ % Back up by the size of the box, whether we did a page break or not.
+ \kern -#1\mil
+ %
+ % Do not allow a page break right after this kern.
+ \nobreak
+}
+
+% @br forces paragraph break
+
+\let\br = \par
+
+% @dots{} output an ellipsis using the current font.
+% We do .5em per period so that it has the same spacing in a typewriter
+% font as three actual period characters.
+%
+\def\dots{\hbox to 1.5em{%
+ \hskip 0pt plus 0.25fil minus 0.25fil
+ .\hss.\hss.%
+ \hskip 0pt plus 0.5fil minus 0.5fil
+}}
+
+% @enddots{} is an end-of-sentence ellipsis.
+%
+\def\enddots{%
+ \hbox to 2em{%
+ \hskip 0pt plus 0.25fil minus 0.25fil
+ .\hss.\hss.\hss.%
+ \hskip 0pt plus 0.5fil minus 0.5fil
+ }%
+ \spacefactor=3000
+}
+
+
+% @page forces the start of a new page
+
+\def\page{\par\vfill\supereject}
+
+% @exdent text....
+% outputs text on a separate line in roman font, starting at the standard page margin
+
+% This records the amount of indent in the innermost environment.
+% That's how much \exdent should take out.
+\newskip\exdentamount
+
+% This defn is used inside fill environments such as @defun.
+\def\exdent{\parsearg\exdentyyy}
+\def\exdentyyy #1{{\hfil\break\hbox{\kern -\exdentamount{\rm#1}}\hfil\break}}
+
+% This defn is used inside nofill environments such as @example.
+\def\nofillexdent{\parsearg\nofillexdentyyy}
+\def\nofillexdentyyy #1{{\advance \leftskip by -\exdentamount
+\leftline{\hskip\leftskip{\rm#1}}}}
+
+% @inmargin{TEXT} puts TEXT in the margin next to the current paragraph.
+
+\def\inmargin#1{%
+\strut\vadjust{\nobreak\kern-\strutdepth
+ \vtop to \strutdepth{\baselineskip\strutdepth\vss
+ \llap{\rightskip=\inmarginspacing \vbox{\noindent #1}}\null}}}
+\newskip\inmarginspacing \inmarginspacing=1cm
+\def\strutdepth{\dp\strutbox}
+
+
+% @include FILE inserts the text of that file as input.
+% Allow normal characters that we make active in the argument (a file name).
+\def\include{\begingroup
+ \catcode`\\=12
+ \catcode`~=12
+ \catcode`^=12
+ \catcode`_=12
+ \catcode`|=12
+ \catcode`<=12
+ \catcode`>=12
+ \catcode`+=12
+ \parsearg\includezzz}
+% Restore active chars for included file.
+\def\includezzz#1{\endgroup\begingroup
+ % Read the included file in a group so nested @include's work.
+ \def\thisfile{#1}%
+ \input\thisfile
+\endgroup}
+
+\def\thisfile{}
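+% Illustrative usage in a manual (the file name is an example only):
+%   @include version.texi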
+
+% @center line outputs that line, centered
+
+\def\center{\parsearg\centerzzz}
+\def\centerzzz #1{{\advance\hsize by -\leftskip
+\advance\hsize by -\rightskip
+\centerline{#1}}}
+
+% @sp n outputs n lines of vertical space
+
+\def\sp{\parsearg\spxxx}
+\def\spxxx #1{\vskip #1\baselineskip}
+
+% @comment ...line which is ignored...
+% @c is the same as @comment
+% @ignore ... @end ignore is another way to write a comment
+
+\def\comment{\catcode 64=\other \catcode 123=\other \catcode 125=\other%
+\parsearg \commentxxx}
+
+\def\commentxxx #1{\catcode 64=0 \catcode 123=1 \catcode 125=2 }
+
+\let\c=\comment
+
+% @paragraphindent is defined for the Info formatting commands only.
+\let\paragraphindent=\comment
+
+% Prevent errors for section commands.
+% Used in @ignore and in failing conditionals.
+\def\ignoresections{%
+\let\chapter=\relax
+\let\unnumbered=\relax
+\let\top=\relax
+\let\unnumberedsec=\relax
+\let\unnumberedsection=\relax
+\let\unnumberedsubsec=\relax
+\let\unnumberedsubsection=\relax
+\let\unnumberedsubsubsec=\relax
+\let\unnumberedsubsubsection=\relax
+\let\section=\relax
+\let\subsec=\relax
+\let\subsubsec=\relax
+\let\subsection=\relax
+\let\subsubsection=\relax
+\let\appendix=\relax
+\let\appendixsec=\relax
+\let\appendixsection=\relax
+\let\appendixsubsec=\relax
+\let\appendixsubsection=\relax
+\let\appendixsubsubsec=\relax
+\let\appendixsubsubsection=\relax
+\let\contents=\relax
+\let\smallbook=\relax
+\let\titlepage=\relax
+}
+
+% Used in nested conditionals, where we have to parse the Texinfo source
+% and so want to turn off most commands, in case they are used
+% incorrectly.
+%
+\def\ignoremorecommands{%
+ \let\defcodeindex = \relax
+ \let\defcv = \relax
+ \let\deffn = \relax
+ \let\deffnx = \relax
+ \let\defindex = \relax
+ \let\defivar = \relax
+ \let\defmac = \relax
+ \let\defmethod = \relax
+ \let\defop = \relax
+ \let\defopt = \relax
+ \let\defspec = \relax
+ \let\deftp = \relax
+ \let\deftypefn = \relax
+ \let\deftypefun = \relax
+ \let\deftypevar = \relax
+ \let\deftypevr = \relax
+ \let\defun = \relax
+ \let\defvar = \relax
+ \let\defvr = \relax
+ \let\ref = \relax
+ \let\xref = \relax
+ \let\printindex = \relax
+ \let\pxref = \relax
+ \let\settitle = \relax
+ \let\setchapternewpage = \relax
+ \let\setchapterstyle = \relax
+ \let\everyheading = \relax
+ \let\evenheading = \relax
+ \let\oddheading = \relax
+ \let\everyfooting = \relax
+ \let\evenfooting = \relax
+ \let\oddfooting = \relax
+ \let\headings = \relax
+ \let\include = \relax
+ \let\lowersections = \relax
+ \let\down = \relax
+ \let\raisesections = \relax
+ \let\up = \relax
+ \let\set = \relax
+ \let\clear = \relax
+ \let\item = \relax
+}
+
+% Ignore @ignore ... @end ignore.
+%
+\def\ignore{\doignore{ignore}}
+
+% Ignore @ifinfo, @ifhtml, @ifnottex, @html, @menu, and @direntry text.
+%
+\def\ifinfo{\doignore{ifinfo}}
+\def\ifhtml{\doignore{ifhtml}}
+\def\ifnottex{\doignore{ifnottex}}
+\def\html{\doignore{html}}
+\def\menu{\doignore{menu}}
+\def\direntry{\doignore{direntry}}
+
+% Also ignore @macro ... @end macro. The user must run texi2dvi,
+% which runs makeinfo to do macro expansion. Ignore @unmacro, too.
+\def\macro{\doignore{macro}}
+\def\macrocsname{macro}
+\let\unmacro = \comment
+
+
+% @dircategory CATEGORY -- specify a category of the dir file
+% which this file should belong to. Ignore this in TeX.
+\let\dircategory = \comment
+
+% Ignore text until a line `@end #1'.
+%
+\def\doignore#1{\begingroup
+ % Don't complain about control sequences we have declared \outer.
+ \ignoresections
+ %
+ % Define a command to swallow text until we reach `@end #1'.
+ % This @ is a catcode 12 token (that is the normal catcode of @ in
+ % this texinfo.tex file). We change the catcode of @ below to match.
+ \long\def\doignoretext##1@end #1{\enddoignore}%
+ %
+ % Make sure that spaces turn into tokens that match what \doignoretext wants.
+ \catcode32 = 10
+ %
+ % Ignore braces, too, so mismatched braces don't cause trouble.
+ \catcode`\{ = 9
+ \catcode`\} = 9
+ %
+ % We must not have @c interpreted as a control sequence.
+ \catcode`\@ = 12
+ %
+ % Make the letter c a comment character so that the rest of the line
+ % will be ignored. This way, the document can have (for example)
+ % @c @end ifinfo
+ % and the @end ifinfo will be properly ignored.
+ % (We've just changed @ to catcode 12.)
+ %
+ % But we can't do this if #1 is `macro', since that actually contains a c.
+ % Happily, none of the other conditionals have the letter `c' in their names!
+ \def\temp{#1}%
+ \ifx\temp\macrocsname \else
+ \catcode`\c = 14
+ \fi
+ %
+ % And now expand that command.
+ \doignoretext
+}
+
+% What we do to finish off ignored text.
+%
+\def\enddoignore{\endgroup\ignorespaces}%
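+% Illustrative Texinfo source handled by \doignore (example text only):
+%   @ignore
+%   Draft notes that should appear in no output format.
+%   @end ignore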
+
+\newif\ifwarnedobs\warnedobsfalse
+\def\obstexwarn{%
+ \ifwarnedobs\relax\else
+ % We need to warn folks that they may have trouble with TeX 3.0.
+ % This uses \immediate\write16 rather than \message to get newlines.
+ \immediate\write16{}
+ \immediate\write16{***WARNING*** for users of Unix TeX 3.0!}
+ \immediate\write16{This manual trips a bug in TeX version 3.0 (tex hangs).}
+ \immediate\write16{If you are running another version of TeX, relax.}
+ \immediate\write16{If you are running Unix TeX 3.0, kill this TeX process.}
+ \immediate\write16{ Then upgrade your TeX installation if you can.}
+ \immediate\write16{ (See ftp://ftp.gnu.ai.mit.edu/pub/gnu/TeX.README.)}
+ \immediate\write16{If you are stuck with version 3.0, run the}
+ \immediate\write16{ script ``tex3patch'' from the Texinfo distribution}
+ \immediate\write16{ to use a workaround.}
+ \immediate\write16{}
+ \global\warnedobstrue
+ \fi
+}
+
+% **In TeX 3.0, setting text in \nullfont hangs tex. For a
+% workaround (which requires the file ``dummy.tfm'' to be installed),
+% uncomment the following line:
+%%%%%\font\nullfont=dummy\let\obstexwarn=\relax
+
+% Ignore text, except that we keep track of conditional commands for
+% purposes of nesting, up to an `@end #1' command.
+%
+\def\nestedignore#1{%
+ \obstexwarn
+ % We must actually expand the ignored text to look for the @end
+ % command, so that nested ignore constructs work. Thus, we put the
+ % text into a \vbox and then do nothing with the result. To minimize
+ % the chance of memory overflow, we follow the approach outlined on
+ % page 401 of the TeXbook: make the current font be a dummy font.
+ %
+ \setbox0 = \vbox\bgroup
+ % Don't complain about control sequences we have declared \outer.
+ \ignoresections
+ %
+ % Define `@end #1' to end the box, which will in turn undefine the
+ % @end command again.
+ \expandafter\def\csname E#1\endcsname{\egroup\ignorespaces}%
+ %
+ % We are going to be parsing Texinfo commands. Most cause no
+ % trouble when they are used incorrectly, but some commands do
+ % complicated argument parsing or otherwise get confused, so we
+ % undefine them.
+ %
+ % We can't do anything about stray @-signs, unfortunately;
+ % they'll produce `undefined control sequence' errors.
+ \ignoremorecommands
+ %
+ % Set the current font to be \nullfont, a TeX primitive, and define
+ % all the font commands to also use \nullfont. We don't use
+ % dummy.tfm, as suggested in the TeXbook, because not all sites
+ % might have that installed. Therefore, math mode will still
+ % produce output, but that should be an extremely small amount of
+ % stuff compared to the main input.
+ %
+ \nullfont
+ \let\tenrm = \nullfont \let\tenit = \nullfont \let\tensl = \nullfont
+ \let\tenbf = \nullfont \let\tentt = \nullfont \let\smallcaps = \nullfont
+ \let\tensf = \nullfont
+ % Similarly for index fonts (mostly for their use in
+ % smallexample)
+ \let\indrm = \nullfont \let\indit = \nullfont \let\indsl = \nullfont
+ \let\indbf = \nullfont \let\indtt = \nullfont \let\indsc = \nullfont
+ \let\indsf = \nullfont
+ %
+ % Don't complain when characters are missing from the fonts.
+ \tracinglostchars = 0
+ %
+ % Don't bother to do space factor calculations.
+ \frenchspacing
+ %
+ % Don't report underfull hboxes.
+ \hbadness = 10000
+ %
+ % Do minimal line-breaking.
+ \pretolerance = 10000
+ %
+ % Do not execute instructions in @tex
+ \def\tex{\doignore{tex}}%
+}
+
+% @set VAR sets the variable VAR to an empty value.
+% @set VAR REST-OF-LINE sets VAR to the value REST-OF-LINE.
+%
+% Since we want to separate VAR from REST-OF-LINE (which might be
+% empty), we can't just use \parsearg; we have to insert a space of our
+% own to delimit the rest of the line, and then take it out again if we
+% didn't need it. Make sure the catcode of space is correct to avoid
+% losing inside @example, for instance.
+%
+\def\set{\begingroup\catcode` =10
+ \catcode`\-=12 \catcode`\_=12 % Allow - and _ in VAR.
+ \parsearg\setxxx}
+\def\setxxx#1{\setyyy#1 \endsetyyy}
+\def\setyyy#1 #2\endsetyyy{%
+ \def\temp{#2}%
+ \ifx\temp\empty \global\expandafter\let\csname SET#1\endcsname = \empty
+ \else \setzzz{#1}#2\endsetzzz % Remove the trailing space \setxxx inserted.
+ \fi
+ \endgroup
+}
+% Can't use \xdef to pre-expand #2 and save some time, since \temp or
+% \next or other control sequences that we've defined might get us into
+% an infinite loop. Consider `@set foo @cite{bar}'.
+\def\setzzz#1#2 \endsetzzz{\expandafter\gdef\csname SET#1\endcsname{#2}}
+
+% @clear VAR clears (i.e., unsets) the variable VAR.
+%
+\def\clear{\parsearg\clearxxx}
+\def\clearxxx#1{\global\expandafter\let\csname SET#1\endcsname=\relax}
+
+% @value{foo} gets the text saved in variable foo.
+%
+\def\value{\begingroup
+ \catcode`\-=12 \catcode`\_=12 % Allow - and _ in VAR.
+ \valuexxx}
+\def\valuexxx#1{%
+ \expandafter\ifx\csname SET#1\endcsname\relax
+ {\{No value for ``#1''\}}%
+ \else
+ \csname SET#1\endcsname
+ \fi
+\endgroup}
+
+% @ifset VAR ... @end ifset reads the `...' iff VAR has been defined
+% with @set.
+%
+\def\ifset{\parsearg\ifsetxxx}
+\def\ifsetxxx #1{%
+ \expandafter\ifx\csname SET#1\endcsname\relax
+ \expandafter\ifsetfail
+ \else
+ \expandafter\ifsetsucceed
+ \fi
+}
+\def\ifsetsucceed{\conditionalsucceed{ifset}}
+\def\ifsetfail{\nestedignore{ifset}}
+\defineunmatchedend{ifset}
+
+% @ifclear VAR ... @end ifclear reads the `...' iff VAR has never been
+% defined with @set, or has been undefined with @clear.
+%
+\def\ifclear{\parsearg\ifclearxxx}
+\def\ifclearxxx #1{%
+ \expandafter\ifx\csname SET#1\endcsname\relax
+ \expandafter\ifclearsucceed
+ \else
+ \expandafter\ifclearfail
+ \fi
+}
+\def\ifclearsucceed{\conditionalsucceed{ifclear}}
+\def\ifclearfail{\nestedignore{ifclear}}
+\defineunmatchedend{ifclear}
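+% Illustrative use of these commands in a manual (the variable name and
+% value are examples only):
+%   @set VERSION 1.5
+%   @ifset VERSION
+%   This manual documents version @value{VERSION}.
+%   @end ifset
+%   @clear VERSION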
+
+% @iftex, @ifnothtml, @ifnotinfo always succeed; we read the text
+% following, through the first @end iftex (etc.). Make `@end iftex'
+% (etc.) valid only after an @iftex.
+%
+\def\iftex{\conditionalsucceed{iftex}}
+\def\ifnothtml{\conditionalsucceed{ifnothtml}}
+\def\ifnotinfo{\conditionalsucceed{ifnotinfo}}
+\defineunmatchedend{iftex}
+\defineunmatchedend{ifnothtml}
+\defineunmatchedend{ifnotinfo}
+
+% We can't just start a group at @iftex (for example) and end it
+% at @end iftex, since then @set commands inside the conditional have no
+% effect (they'd get reverted at the end of the group). So we must
+% define \Eiftex to redefine itself to be its previous value. (We can't
+% just define it to fail again with an ``unmatched end'' error, since
+% the @ifset might be nested.)
+%
+\def\conditionalsucceed#1{%
+ \edef\temp{%
+ % Remember the current value of \E#1.
+ \let\nece{prevE#1} = \nece{E#1}%
+ %
+ % At the `@end #1', redefine \E#1 to be its previous value.
+ \def\nece{E#1}{\let\nece{E#1} = \nece{prevE#1}}%
+ }%
+ \temp
+}
+
+% We need to expand lots of \csname's, but we don't want to expand the
+% control sequences after we've constructed them.
+%
+\def\nece#1{\expandafter\noexpand\csname#1\endcsname}
+
+% @asis just yields its argument. Used with @table, for example.
+%
+\def\asis#1{#1}
+
+% @math means output in math mode.
+% We don't use $'s directly in the definition of \math because control
+% sequences like \math are expanded when the toc file is written. Then,
+% we read the toc file back, the $'s will be normal characters (as they
+% should be, according to the definition of Texinfo). So we must use a
+% control sequence to switch into and out of math mode.
+%
+% This isn't quite enough for @math to work properly in indices, but it
+% seems unlikely it will ever be needed there.
+%
+\let\implicitmath = $
+\def\math#1{\implicitmath #1\implicitmath}
+
+% @bullet and @minus need the same treatment as @math, just above.
+\def\bullet{\implicitmath\ptexbullet\implicitmath}
+\def\minus{\implicitmath-\implicitmath}
+
+\def\node{\ENVcheck\parsearg\nodezzz}
+\def\nodezzz#1{\nodexxx [#1,]}
+\def\nodexxx[#1,#2]{\gdef\lastnode{#1}}
+\let\nwnode=\node
+\let\lastnode=\relax
+
+\def\donoderef{\ifx\lastnode\relax\else
+\expandafter\expandafter\expandafter\setref{\lastnode}\fi
+\global\let\lastnode=\relax}
+
+\def\unnumbnoderef{\ifx\lastnode\relax\else
+\expandafter\expandafter\expandafter\unnumbsetref{\lastnode}\fi
+\global\let\lastnode=\relax}
+
+\def\appendixnoderef{\ifx\lastnode\relax\else
+\expandafter\expandafter\expandafter\appendixsetref{\lastnode}\fi
+\global\let\lastnode=\relax}
+
+% @refill is a no-op.
+\let\refill=\relax
+
+% @setfilename is done at the beginning of every texinfo file.
+% So this is where we open the files we need to have open while reading the input.
+% This makes it possible to make a .fmt file for texinfo.
+\def\setfilename{%
+ \readauxfile
+ \opencontents
+ \openindices
+ \fixbackslash % Turn off hack to swallow `\input texinfo'.
+ \global\let\setfilename=\comment % Ignore extra @setfilename cmds.
+ %
+ % If texinfo.cnf is present on the system, read it.
+ % Useful for site-wide @afourpaper, etc.
+ % Just to be on the safe side, close the input stream before the \input.
+ \openin 1 texinfo.cnf
+ \ifeof1 \let\temp=\relax \else \def\temp{\input texinfo.cnf }\fi
+ \closein1
+ \temp
+ %
+ \comment % Ignore the actual filename.
+}
+
+% @bye.
+\outer\def\bye{\pagealignmacro\tracingstats=1\ptexend}
+
+% \def\macro#1{\begingroup\ignoresections\catcode`\#=6\def\macrotemp{#1}\parsearg\macroxxx}
+% \def\macroxxx#1#2 \end macro{%
+% \expandafter\gdef\macrotemp#1{#2}%
+% \endgroup}
+
+%\def\linemacro#1{\begingroup\ignoresections\catcode`\#=6\def\macrotemp{#1}\parsearg\linemacroxxx}
+%\def\linemacroxxx#1#2 \end linemacro{%
+%\let\parsearg=\relax
+%\edef\macrotempx{\csname M\butfirst\expandafter\string\macrotemp\endcsname}%
+%\expandafter\xdef\macrotemp{\parsearg\macrotempx}%
+%\expandafter\gdef\macrotempx#1{#2}%
+%\endgroup}
+
+%\def\butfirst#1{}
+
+
+\message{fonts,}
+
+% Font-change commands.
+
+% Texinfo supports the sans serif font style, which plain TeX does not.
+% So we set up a \sf analogous to plain's \rm, etc.
+\newfam\sffam
+\def\sf{\fam=\sffam \tensf}
+\let\li = \sf % Sometimes we call it \li, not \sf.
+
+% We don't need math for this one.
+\def\ttsl{\tenttsl}
+
+% Use Computer Modern fonts at \magstephalf (11pt).
+\newcount\mainmagstep
+\mainmagstep=\magstephalf
+
+% Set the font macro #1 to the font named #2, adding on the
+% specified font prefix (normally `cm').
+% #3 is the font's design size, #4 is a scale factor
+\def\setfont#1#2#3#4{\font#1=\fontprefix#2#3 scaled #4}
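+% For example, with the default \fontprefix of `cm',
+%   \setfont\textrm\rmshape{10}{1000}
+% expands to `\font\textrm=cmr10 scaled 1000'.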
+
+% Use cm as the default font prefix.
+% To specify the font prefix, you must define \fontprefix
+% before you read in texinfo.tex.
+\ifx\fontprefix\undefined
+\def\fontprefix{cm}
+\fi
+% Support font families that don't use the same naming scheme as CM.
+\def\rmshape{r}
+\def\rmbshape{bx} %where the normal face is bold
+\def\bfshape{b}
+\def\bxshape{bx}
+\def\ttshape{tt}
+\def\ttbshape{tt}
+\def\ttslshape{sltt}
+\def\itshape{ti}
+\def\itbshape{bxti}
+\def\slshape{sl}
+\def\slbshape{bxsl}
+\def\sfshape{ss}
+\def\sfbshape{ss}
+\def\scshape{csc}
+\def\scbshape{csc}
+
+\ifx\bigger\relax
+\let\mainmagstep=\magstep1
+\setfont\textrm\rmshape{12}{1000}
+\setfont\texttt\ttshape{12}{1000}
+\else
+\setfont\textrm\rmshape{10}{\mainmagstep}
+\setfont\texttt\ttshape{10}{\mainmagstep}
+\fi
+% Instead of cmb10, you may want to use cmbx10.
+% cmbx10 is a prettier font on its own, but cmb10
+% looks better when embedded in a line with cmr10.
+\setfont\textbf\bfshape{10}{\mainmagstep}
+\setfont\textit\itshape{10}{\mainmagstep}
+\setfont\textsl\slshape{10}{\mainmagstep}
+\setfont\textsf\sfshape{10}{\mainmagstep}
+\setfont\textsc\scshape{10}{\mainmagstep}
+\setfont\textttsl\ttslshape{10}{\mainmagstep}
+\font\texti=cmmi10 scaled \mainmagstep
+\font\textsy=cmsy10 scaled \mainmagstep
+
+% A few fonts for @defun, etc.
+\setfont\defbf\bxshape{10}{\magstep1} %was 1314
+\setfont\deftt\ttshape{10}{\magstep1}
+\def\df{\let\tentt=\deftt \let\tenbf = \defbf \bf}
+
+% Fonts for indices and small examples (9pt).
+% We actually use the slanted font rather than the italic,
+% because texinfo normally uses the slanted fonts for that.
+% Do not make many font distinctions in general in the index, since they
+% aren't very useful.
+\setfont\ninett\ttshape{9}{1000}
+\setfont\indrm\rmshape{9}{1000}
+\setfont\indit\slshape{9}{1000}
+\let\indsl=\indit
+\let\indtt=\ninett
+\let\indttsl=\ninett
+\let\indsf=\indrm
+\let\indbf=\indrm
+\setfont\indsc\scshape{10}{900}
+\font\indi=cmmi9
+\font\indsy=cmsy9
+
+% Fonts for title page:
+\setfont\titlerm\rmbshape{12}{\magstep3}
+\setfont\titleit\itbshape{10}{\magstep4}
+\setfont\titlesl\slbshape{10}{\magstep4}
+\setfont\titlett\ttbshape{12}{\magstep3}
+\setfont\titlettsl\ttslshape{10}{\magstep4}
+\setfont\titlesf\sfbshape{17}{\magstep1}
+\let\titlebf=\titlerm
+\setfont\titlesc\scbshape{10}{\magstep4}
+\font\titlei=cmmi12 scaled \magstep3
+\font\titlesy=cmsy10 scaled \magstep4
+\def\authorrm{\secrm}
+
+% Chapter (and unnumbered) fonts (17.28pt).
+\setfont\chaprm\rmbshape{12}{\magstep2}
+\setfont\chapit\itbshape{10}{\magstep3}
+\setfont\chapsl\slbshape{10}{\magstep3}
+\setfont\chaptt\ttbshape{12}{\magstep2}
+\setfont\chapttsl\ttslshape{10}{\magstep3}
+\setfont\chapsf\sfbshape{17}{1000}
+\let\chapbf=\chaprm
+\setfont\chapsc\scbshape{10}{\magstep3}
+\font\chapi=cmmi12 scaled \magstep2
+\font\chapsy=cmsy10 scaled \magstep3
+
+% Section fonts (14.4pt).
+\setfont\secrm\rmbshape{12}{\magstep1}
+\setfont\secit\itbshape{10}{\magstep2}
+\setfont\secsl\slbshape{10}{\magstep2}
+\setfont\sectt\ttbshape{12}{\magstep1}
+\setfont\secttsl\ttslshape{10}{\magstep2}
+\setfont\secsf\sfbshape{12}{\magstep1}
+\let\secbf\secrm
+\setfont\secsc\scbshape{10}{\magstep2}
+\font\seci=cmmi12 scaled \magstep1
+\font\secsy=cmsy10 scaled \magstep2
+
+% \setfont\ssecrm\bxshape{10}{\magstep1} % This size and font looked bad.
+% \setfont\ssecit\itshape{10}{\magstep1} % The letters were too crowded.
+% \setfont\ssecsl\slshape{10}{\magstep1}
+% \setfont\ssectt\ttshape{10}{\magstep1}
+% \setfont\ssecsf\sfshape{10}{\magstep1}
+
+%\setfont\ssecrm\bfshape{10}{1315} % Note the use of cmb rather than cmbx.
+%\setfont\ssecit\itshape{10}{1315} % Also, the size is a little larger than
+%\setfont\ssecsl\slshape{10}{1315} % being scaled magstep1.
+%\setfont\ssectt\ttshape{10}{1315}
+%\setfont\ssecsf\sfshape{10}{1315}
+
+%\let\ssecbf=\ssecrm
+
+% Subsection fonts (13.15pt).
+\setfont\ssecrm\rmbshape{12}{\magstephalf}
+\setfont\ssecit\itbshape{10}{1315}
+\setfont\ssecsl\slbshape{10}{1315}
+\setfont\ssectt\ttbshape{12}{\magstephalf}
+\setfont\ssecttsl\ttslshape{10}{1315}
+\setfont\ssecsf\sfbshape{12}{\magstephalf}
+\let\ssecbf\ssecrm
+\setfont\ssecsc\scbshape{10}{\magstep1}
+\font\sseci=cmmi12 scaled \magstephalf
+\font\ssecsy=cmsy10 scaled 1315
+% The smallcaps and symbol fonts should actually be scaled \magstep1.5,
+% but that is not a standard magnification.
+
+% In order for the font changes to affect most math symbols and letters,
+% we have to define the \textfont of the standard families. Since
+% texinfo doesn't allow for producing subscripts and superscripts, we
+% don't bother to reset \scriptfont and \scriptscriptfont (which would
+% also require loading a lot more fonts).
+%
+\def\resetmathfonts{%
+ \textfont0 = \tenrm \textfont1 = \teni \textfont2 = \tensy
+ \textfont\itfam = \tenit \textfont\slfam = \tensl \textfont\bffam = \tenbf
+ \textfont\ttfam = \tentt \textfont\sffam = \tensf
+}
+
+
+% The font-changing commands redefine the meanings of \tenSTYLE, instead
+% of just \STYLE. We do this so that font changes will continue to work
+% in math mode, where it is the current \fam that is relevant in most
+% cases, not the current font. Plain TeX does \def\bf{\fam=\bffam
+% \tenbf}, for example. By redefining \tenbf, we obviate the need to
+% redefine \bf itself.
+\def\textfonts{%
+ \let\tenrm=\textrm \let\tenit=\textit \let\tensl=\textsl
+ \let\tenbf=\textbf \let\tentt=\texttt \let\smallcaps=\textsc
+ \let\tensf=\textsf \let\teni=\texti \let\tensy=\textsy \let\tenttsl=\textttsl
+ \resetmathfonts}
+\def\titlefonts{%
+ \let\tenrm=\titlerm \let\tenit=\titleit \let\tensl=\titlesl
+ \let\tenbf=\titlebf \let\tentt=\titlett \let\smallcaps=\titlesc
+ \let\tensf=\titlesf \let\teni=\titlei \let\tensy=\titlesy
+ \let\tenttsl=\titlettsl
+ \resetmathfonts \setleading{25pt}}
+\def\titlefont#1{{\titlefonts\rm #1}}
+\def\chapfonts{%
+ \let\tenrm=\chaprm \let\tenit=\chapit \let\tensl=\chapsl
+ \let\tenbf=\chapbf \let\tentt=\chaptt \let\smallcaps=\chapsc
+ \let\tensf=\chapsf \let\teni=\chapi \let\tensy=\chapsy \let\tenttsl=\chapttsl
+ \resetmathfonts \setleading{19pt}}
+\def\secfonts{%
+ \let\tenrm=\secrm \let\tenit=\secit \let\tensl=\secsl
+ \let\tenbf=\secbf \let\tentt=\sectt \let\smallcaps=\secsc
+ \let\tensf=\secsf \let\teni=\seci \let\tensy=\secsy \let\tenttsl=\secttsl
+ \resetmathfonts \setleading{16pt}}
+\def\subsecfonts{%
+ \let\tenrm=\ssecrm \let\tenit=\ssecit \let\tensl=\ssecsl
+ \let\tenbf=\ssecbf \let\tentt=\ssectt \let\smallcaps=\ssecsc
+ \let\tensf=\ssecsf \let\teni=\sseci \let\tensy=\ssecsy \let\tenttsl=\ssecttsl
+ \resetmathfonts \setleading{15pt}}
+\let\subsubsecfonts = \subsecfonts % Maybe make sssec fonts scaled magstephalf?
+\def\indexfonts{%
+ \let\tenrm=\indrm \let\tenit=\indit \let\tensl=\indsl
+ \let\tenbf=\indbf \let\tentt=\indtt \let\smallcaps=\indsc
+ \let\tensf=\indsf \let\teni=\indi \let\tensy=\indsy \let\tenttsl=\indttsl
+ \resetmathfonts \setleading{12pt}}
+
+% Set up the default fonts, so we can use them for creating boxes.
+%
+\textfonts
+
+% Define these so they can be easily changed for other fonts.
+\def\angleleft{$\langle$}
+\def\angleright{$\rangle$}
+
+% Count depth in font-changes, for error checks
+\newcount\fontdepth \fontdepth=0
+
+% Fonts for short table of contents.
+\setfont\shortcontrm\rmshape{12}{1000}
+\setfont\shortcontbf\bxshape{12}{1000}
+\setfont\shortcontsl\slshape{12}{1000}
+
+%% Add scribe-like font environments, plus @l for inline lisp (usually sans
+%% serif) and @ii for TeX italic
+
+% \smartitalic{ARG} outputs arg in italics, followed by an italic correction
+% unless the following character is such as not to need one.
+\def\smartitalicx{\ifx\next,\else\ifx\next-\else\ifx\next.\else\/\fi\fi\fi}
+\def\smartitalic#1{{\sl #1}\futurelet\next\smartitalicx}
+
+\let\i=\smartitalic
+\let\var=\smartitalic
+\let\dfn=\smartitalic
+\let\emph=\smartitalic
+\let\cite=\smartitalic
+
+\def\b#1{{\bf #1}}
+\let\strong=\b
+
+% We can't just use \exhyphenpenalty, because that only has effect at
+% the end of a paragraph. Restore normal hyphenation at the end of the
+% group within which \nohyphenation is presumably called.
+%
+\def\nohyphenation{\hyphenchar\font = -1 \aftergroup\restorehyphenation}
+\def\restorehyphenation{\hyphenchar\font = `- }
+
+\def\t#1{%
+ {\tt \rawbackslash \frenchspacing #1}%
+ \null
+}
+\let\ttfont=\t
+\def\samp#1{`\tclose{#1}'\null}
+\setfont\smallrm\rmshape{8}{1000}
+\font\smallsy=cmsy9
+\def\key#1{{\smallrm\textfont2=\smallsy \leavevmode\hbox{%
+ \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{%
+ \vbox{\hrule\kern-0.4pt
+ \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}%
+ \kern-0.4pt\hrule}%
+ \kern-.06em\raise0.4pt\hbox{\angleright}}}}
+% The old definition, with no lozenge:
+%\def\key #1{{\ttsl \nohyphenation \uppercase{#1}}\null}
+\def\ctrl #1{{\tt \rawbackslash \hat}#1}
+
+\let\file=\samp
+
+% @code is a modification of @t,
+% which makes spaces the same size as normal in the surrounding text.
+\def\tclose#1{%
+ {%
+ % Change normal interword space to be same as for the current font.
+ \spaceskip = \fontdimen2\font
+ %
+ % Switch to typewriter.
+ \tt
+ %
+ % But `\ ' produces the large typewriter interword space.
+ \def\ {{\spaceskip = 0pt{} }}%
+ %
+ % Turn off hyphenation.
+ \nohyphenation
+ %
+ \rawbackslash
+ \frenchspacing
+ #1%
+ }%
+ \null
+}
+
+% We *must* turn on hyphenation at `-' and `_' in \code.
+% Otherwise, it is too hard to avoid overfull hboxes
+% in the Emacs manual, the Library manual, etc.
+
+% Unfortunately, TeX uses one parameter (\hyphenchar) to control
+% both hyphenation at - and hyphenation within words.
+% We must therefore turn them both off (\tclose does that)
+% and arrange explicitly to hyphenate at a dash.
+% -- rms.
+{
+\catcode`\-=\active
+\catcode`\_=\active
+\catcode`\|=\active
+\global\def\code{\begingroup \catcode`\-=\active \let-\codedash \catcode`\_=\active \let_\codeunder \codex}
+% The following is used by \doprintindex to ensure that long function names
+% wrap around. It is necessary for - and _ to be active before the index is
+% read from the file, as \entry parses the arguments long before \code is
+% ever called. -- mycroft
+% _ is always active; and it shouldn't be \let = to an _ that is a
+% subscript character anyway. Then, @cindex @samp{_} (for example)
+% fails. --karl
+\global\def\indexbreaks{%
+ \catcode`\-=\active \let-\realdash
+}
+}
+
+\def\realdash{-}
+\def\codedash{-\discretionary{}{}{}}
+\def\codeunder{\ifusingtt{\normalunderscore\discretionary{}{}{}}{\_}}
+\def\codex #1{\tclose{#1}\endgroup}
+
+%\let\exp=\tclose %Was temporary
+
+% @kbd is like @code, except that if the argument is just one @key command,
+% then @kbd has no effect.
+
+% @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always),
+% `example' (@kbd uses ttsl only inside of @example and friends),
+% or `code' (@kbd uses normal tty font always).
+\def\kbdinputstyle{\parsearg\kbdinputstylexxx}
+\def\kbdinputstylexxx#1{%
+ \def\arg{#1}%
+ \ifx\arg\worddistinct
+ \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}%
+ \else\ifx\arg\wordexample
+ \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}%
+ \else\ifx\arg\wordcode
+ \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}%
+ \fi\fi\fi
+}
+\def\worddistinct{distinct}
+\def\wordexample{example}
+\def\wordcode{code}
+
+% Default is `@kbdinputstyle distinct'. (Too much of a hassle to call the macro,
+% the catcodes are wrong for parsearg to work.)
+\gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}
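+% Illustrative usage in a manual:
+%   @kbdinputstyle distinct
+% makes @kbd always use the slanted typewriter font.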
+
+\def\xkey{\key}
+\def\kbdfoo#1#2#3\par{\def\one{#1}\def\three{#3}\def\threex{??}%
+\ifx\one\xkey\ifx\threex\three \key{#2}%
+\else{\tclose{\kbdfont\look}}\fi
+\else{\tclose{\kbdfont\look}}\fi}
+
+% @url. Quotes do not seem necessary, so use \code.
+\let\url=\code
+
+% @uref (abbreviation for `urlref') takes an optional second argument
+% specifying the text to display. First (mandatory) arg is the url.
+% Perhaps eventually put in a hypertex \special here.
+%
+\def\uref#1{\urefxxx #1,,\finish}
+\def\urefxxx#1,#2,#3\finish{%
+ \setbox0 = \hbox{\ignorespaces #2}%
+ \ifdim\wd0 > 0pt
+ \unhbox0\ (\code{#1})%
+ \else
+ \code{#1}%
+ \fi
+}
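+% Illustrative usage (the URL is an example only):
+%   @uref{ftp://ftp.gnu.org/pub/gnu/}
+%   @uref{ftp://ftp.gnu.org/pub/gnu/, the GNU ftp site}
+% The second form prints the display text followed by the url in parentheses.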
+
+% rms does not like the angle brackets --karl, 17may97.
+% So now @email is just like @uref.
+%\def\email#1{\angleleft{\tt #1}\angleright}
+\let\email=\uref
+
+% Check if we are currently using a typewriter font. Since all the
+% Computer Modern typewriter fonts have zero interword stretch (and
+% shrink), and it is reasonable to expect all typewriter fonts to have
+% this property, we can check that font parameter.
+%
+\def\ifmonospace{\ifdim\fontdimen3\font=0pt }
+
+% Typeset a dimension, e.g., `in' or `pt'. The only reason for the
+% argument is to make the input look right: @dmn{pt} instead of
+% @dmn{}pt.
+%
+\def\dmn#1{\thinspace #1}
+
+\def\kbd#1{\def\look{#1}\expandafter\kbdfoo\look??\par}
+
+% @l was never documented to mean ``switch to the Lisp font'',
+% and it is not used as such in any manual I can find. We need it for
+% Polish suppressed-l. --karl, 22sep96.
+%\def\l#1{{\li #1}\null}
+
+\def\r#1{{\rm #1}} % roman font
+% Use of \lowercase was suggested.
+\def\sc#1{{\smallcaps#1}} % smallcaps font
+\def\ii#1{{\it #1}} % italic font
+
+% @pounds{} is a sterling sign.
+\def\pounds{{\it\$}}
+
+
+\message{page headings,}
+
+\newskip\titlepagetopglue \titlepagetopglue = 1.5in
+\newskip\titlepagebottomglue \titlepagebottomglue = 2pc
+
+% First the title page. Must do @settitle before @titlepage.
+\newif\ifseenauthor
+\newif\iffinishedtitlepage
+
+\def\shorttitlepage{\parsearg\shorttitlepagezzz}
+\def\shorttitlepagezzz #1{\begingroup\hbox{}\vskip 1.5in \chaprm \centerline{#1}%
+ \endgroup\page\hbox{}\page}
+
+\def\titlepage{\begingroup \parindent=0pt \textfonts
+ \let\subtitlerm=\tenrm
+% I deinstalled the following change because \cmr12 is undefined.
+% This change was not in the ChangeLog anyway. --rms.
+% \let\subtitlerm=\cmr12
+ \def\subtitlefont{\subtitlerm \normalbaselineskip = 13pt \normalbaselines}%
+ %
+ \def\authorfont{\authorrm \normalbaselineskip = 16pt \normalbaselines}%
+ %
+ % Leave some space at the very top of the page.
+ \vglue\titlepagetopglue
+ %
+ % Now you can print the title using @title.
+ \def\title{\parsearg\titlezzz}%
+ \def\titlezzz##1{\leftline{\titlefonts\rm ##1}
+ % print a rule at the page bottom also.
+ \finishedtitlepagefalse
+ \vskip4pt \hrule height 4pt width \hsize \vskip4pt}%
+ % No rule at page bottom unless we print one at the top with @title.
+ \finishedtitlepagetrue
+ %
+ % Now you can put text using @subtitle.
+ \def\subtitle{\parsearg\subtitlezzz}%
+ \def\subtitlezzz##1{{\subtitlefont \rightline{##1}}}%
+ %
+ % @author should come last, but may come many times.
+ \def\author{\parsearg\authorzzz}%
+ \def\authorzzz##1{\ifseenauthor\else\vskip 0pt plus 1filll\seenauthortrue\fi
+ {\authorfont \leftline{##1}}}%
+ %
+ % Most title ``pages'' are actually two pages long, with space
+ % at the top of the second. We don't want the ragged left on the second.
+ \let\oldpage = \page
+ \def\page{%
+ \iffinishedtitlepage\else
+ \finishtitlepage
+ \fi
+ \oldpage
+ \let\page = \oldpage
+ \hbox{}}%
+% \def\page{\oldpage \hbox{}}
+}
+
+\def\Etitlepage{%
+ \iffinishedtitlepage\else
+ \finishtitlepage
+ \fi
+ % It is important to do the page break before ending the group,
+ % because the headline and footline are only empty inside the group.
+ % If we use the new definition of \page, we always get a blank page
+ % after the title page, which we certainly don't want.
+ \oldpage
+ \endgroup
+ \HEADINGSon
+}
+
+\def\finishtitlepage{%
+ \vskip4pt \hrule height 2pt width \hsize
+ \vskip\titlepagebottomglue
+ \finishedtitlepagetrue
+}
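+
+% For illustration (an assumed, typical usage; not part of the original
+% macros): a Texinfo source would exercise the commands above like this,
+% with @author last, as noted above:
+%   @titlepage
+%   @title Sample Manual
+%   @subtitle Edition 1.0
+%   @author A. U. Thor
+%   @end titlepage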
+
+%%% Set up page headings and footings.
+
+\let\thispage=\folio
+
+\newtoks \evenheadline % Token sequence for heading line of even pages
+\newtoks \oddheadline % Token sequence for heading line of odd pages
+\newtoks \evenfootline % Token sequence for footing line of even pages
+\newtoks \oddfootline % Token sequence for footing line of odd pages
+
+% Now make TeX use those variables.
+\headline={{\textfonts\rm \ifodd\pageno \the\oddheadline
+ \else \the\evenheadline \fi}}
+\footline={{\textfonts\rm \ifodd\pageno \the\oddfootline
+ \else \the\evenfootline \fi}\HEADINGShook}
+\let\HEADINGShook=\relax
+
+% Commands to set those variables.
+% For example, this is what @headings on does
+% @evenheading @thistitle|@thispage|@thischapter
+% @oddheading @thischapter|@thispage|@thistitle
+% @evenfooting @thisfile||
+% @oddfooting ||@thisfile
+
+\def\evenheading{\parsearg\evenheadingxxx}
+\def\oddheading{\parsearg\oddheadingxxx}
+\def\everyheading{\parsearg\everyheadingxxx}
+
+\def\evenfooting{\parsearg\evenfootingxxx}
+\def\oddfooting{\parsearg\oddfootingxxx}
+\def\everyfooting{\parsearg\everyfootingxxx}
+
+{\catcode`\@=0 %
+
+\gdef\evenheadingxxx #1{\evenheadingyyy #1@|@|@|@|\finish}
+\gdef\evenheadingyyy #1@|#2@|#3@|#4\finish{%
+\global\evenheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}
+
+\gdef\oddheadingxxx #1{\oddheadingyyy #1@|@|@|@|\finish}
+\gdef\oddheadingyyy #1@|#2@|#3@|#4\finish{%
+\global\oddheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}
+
+\gdef\everyheadingxxx#1{\oddheadingxxx{#1}\evenheadingxxx{#1}}%
+
+\gdef\evenfootingxxx #1{\evenfootingyyy #1@|@|@|@|\finish}
+\gdef\evenfootingyyy #1@|#2@|#3@|#4\finish{%
+\global\evenfootline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}
+
+\gdef\oddfootingxxx #1{\oddfootingyyy #1@|@|@|@|\finish}
+\gdef\oddfootingyyy #1@|#2@|#3@|#4\finish{%
+ \global\oddfootline = {\rlap{\centerline{#2}}\line{#1\hfil#3}}%
+ %
+ % Leave some space for the footline. Hopefully ok to assume
+ % @evenfooting will not be used by itself.
+ \global\advance\pageheight by -\baselineskip
+ \global\advance\vsize by -\baselineskip
+}
+
+\gdef\everyfootingxxx#1{\oddfootingxxx{#1}\evenfootingxxx{#1}}
+%
+}% unbind the catcode of @.
+
+% @headings double turns headings on for double-sided printing.
+% @headings single turns headings on for single-sided printing.
+% @headings off turns them off.
+% @headings on same as @headings double, retained for compatibility.
+% @headings after turns on double-sided headings after this page.
+% @headings doubleafter turns on double-sided headings after this page.
+% @headings singleafter turns on single-sided headings after this page.
+% By default, they are off at the start of a document,
+% and turned `on' after @end titlepage.
+
+\def\headings #1 {\csname HEADINGS#1\endcsname}
+
+\def\HEADINGSoff{
+\global\evenheadline={\hfil} \global\evenfootline={\hfil}
+\global\oddheadline={\hfil} \global\oddfootline={\hfil}}
+\HEADINGSoff
+% When we turn headings on, set the page number to 1.
+% For double-sided printing, put current file name in lower left corner,
+% chapter name on inside top of right hand pages, document
+% title on inside top of left hand pages, and page numbers on outside top
+% edge of all pages.
+\def\HEADINGSdouble{
+\global\pageno=1
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\folio\hfil\thistitle}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chapoddpage
+}
+\let\contentsalignmacro = \chappager
+
+% For single-sided printing, chapter title goes across top left of page,
+% page number on top right.
+\def\HEADINGSsingle{
+\global\pageno=1
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\thischapter\hfil\folio}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chappager
+}
+\def\HEADINGSon{\HEADINGSdouble}
+
+\def\HEADINGSafter{\let\HEADINGShook=\HEADINGSdoublex}
+\let\HEADINGSdoubleafter=\HEADINGSafter
+\def\HEADINGSdoublex{%
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\folio\hfil\thistitle}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chapoddpage
+}
+
+\def\HEADINGSsingleafter{\let\HEADINGShook=\HEADINGSsinglex}
+\def\HEADINGSsinglex{%
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\thischapter\hfil\folio}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chappager
+}
+
+% Subroutines used in generating headings
+% Produces Day Month Year style of output.
+\def\today{\number\day\space
+\ifcase\month\or
+January\or February\or March\or April\or May\or June\or
+July\or August\or September\or October\or November\or December\fi
+\space\number\year}
+
+% Use this if you want the Month Day, Year style of output.
+%\def\today{\ifcase\month\or
+%January\or February\or March\or April\or May\or June\or
+%July\or August\or September\or October\or November\or December\fi
+%\space\number\day, \number\year}
+
+% @settitle line... specifies the title of the document, for headings.
+% It generates no output of its own.
+
+\def\thistitle{No Title}
+\def\settitle{\parsearg\settitlezzz}
+\def\settitlezzz #1{\gdef\thistitle{#1}}
+
+
+\message{tables,}
+% Tables -- @table, @ftable, @vtable, @item(x), @kitem(x), @xitem(x).
+
+% default indentation of table text
+\newdimen\tableindent \tableindent=.8in
+% default indentation of @itemize and @enumerate text
+\newdimen\itemindent \itemindent=.3in
+% margin between end of table item and start of table text.
+\newdimen\itemmargin \itemmargin=.1in
+
+% used internally for \itemindent minus \itemmargin
+\newdimen\itemmax
+
+% Note @table, @ftable, and @vtable define @item, @itemx, etc., with
+% these defs.  They also define \itemindex to index the item name in
+% whatever manner is desired (perhaps none).
+
+\newif\ifitemxneedsnegativevskip
+
+\def\itemxpar{\par\ifitemxneedsnegativevskip\nobreak\vskip-\parskip\nobreak\fi}
+
+\def\internalBitem{\smallbreak \parsearg\itemzzz}
+\def\internalBitemx{\itemxpar \parsearg\itemzzz}
+
+\def\internalBxitem "#1"{\def\xitemsubtopix{#1} \smallbreak \parsearg\xitemzzz}
+\def\internalBxitemx "#1"{\def\xitemsubtopix{#1} \itemxpar \parsearg\xitemzzz}
+
+\def\internalBkitem{\smallbreak \parsearg\kitemzzz}
+\def\internalBkitemx{\itemxpar \parsearg\kitemzzz}
+
+\def\kitemzzz #1{\dosubind {kw}{\code{#1}}{for {\bf \lastfunction}}%
+ \itemzzz {#1}}
+
+\def\xitemzzz #1{\dosubind {kw}{\code{#1}}{for {\bf \xitemsubtopix}}%
+ \itemzzz {#1}}
+
+\def\itemzzz #1{\begingroup %
+ \advance\hsize by -\rightskip
+ \advance\hsize by -\tableindent
+ \setbox0=\hbox{\itemfont{#1}}%
+ \itemindex{#1}%
+ \nobreak % This prevents a break before @itemx.
+ %
+ % Be sure we are not still in the middle of a paragraph.
+ %{\parskip = 0in
+ %\par
+ %}%
+ %
+ % If the item text does not fit in the space we have, put it on a line
+ % by itself, and do not allow a page break either before or after that
+ % line. We do not start a paragraph here because then if the next
+ % command is, e.g., @kindex, the whatsit would get put into the
+ % horizontal list on a line by itself, resulting in extra blank space.
+ \ifdim \wd0>\itemmax
+ %
+ % Make this a paragraph so we get the \parskip glue and wrapping,
+ % but leave it ragged-right.
+ \begingroup
+ \advance\leftskip by-\tableindent
+ \advance\hsize by\tableindent
+ \advance\rightskip by0pt plus1fil
+ \leavevmode\unhbox0\par
+ \endgroup
+ %
+ % We're going to be starting a paragraph, but we don't want the
+ % \parskip glue -- logically it's part of the @item we just started.
+ \nobreak \vskip-\parskip
+ %
+ % Stop a page break at the \parskip glue coming up. Unfortunately
+ % we can't prevent a possible page break at the following
+ % \baselineskip glue.
+ \nobreak
+ \endgroup
+ \itemxneedsnegativevskipfalse
+ \else
+ % The item text fits into the space. Start a paragraph, so that the
+ % following text (if any) will end up on the same line. Since that
+ % text will be indented by \tableindent, we make the item text be in
+ % a zero-width box.
+ \noindent
+ \rlap{\hskip -\tableindent\box0}\ignorespaces%
+ \endgroup%
+ \itemxneedsnegativevskiptrue%
+ \fi
+}
+
+\def\item{\errmessage{@item while not in a table}}
+\def\itemx{\errmessage{@itemx while not in a table}}
+\def\kitem{\errmessage{@kitem while not in a table}}
+\def\kitemx{\errmessage{@kitemx while not in a table}}
+\def\xitem{\errmessage{@xitem while not in a table}}
+\def\xitemx{\errmessage{@xitemx while not in a table}}
+
+%% Contains a kludge to get @end[description] to work
+\def\description{\tablez{\dontindex}{1}{}{}{}{}}
+
+\def\table{\begingroup\inENV\obeylines\obeyspaces\tablex}
+{\obeylines\obeyspaces%
+\gdef\tablex #1^^M{%
+\tabley\dontindex#1 \endtabley}}
+
+\def\ftable{\begingroup\inENV\obeylines\obeyspaces\ftablex}
+{\obeylines\obeyspaces%
+\gdef\ftablex #1^^M{%
+\tabley\fnitemindex#1 \endtabley
+\def\Eftable{\endgraf\afterenvbreak\endgroup}%
+\let\Etable=\relax}}
+
+\def\vtable{\begingroup\inENV\obeylines\obeyspaces\vtablex}
+{\obeylines\obeyspaces%
+\gdef\vtablex #1^^M{%
+\tabley\vritemindex#1 \endtabley
+\def\Evtable{\endgraf\afterenvbreak\endgroup}%
+\let\Etable=\relax}}
+
+\def\dontindex #1{}
+\def\fnitemindex #1{\doind {fn}{\code{#1}}}%
+\def\vritemindex #1{\doind {vr}{\code{#1}}}%
+
+{\obeyspaces %
+\gdef\tabley#1#2 #3 #4 #5 #6 #7\endtabley{\endgroup%
+\tablez{#1}{#2}{#3}{#4}{#5}{#6}}}
+
+\def\tablez #1#2#3#4#5#6{%
+\aboveenvbreak %
+\begingroup %
+\def\Edescription{\Etable}% Necessary kludge.
+\let\itemindex=#1%
+\ifnum 0#3>0 \advance \leftskip by #3\mil \fi %
+\ifnum 0#4>0 \tableindent=#4\mil \fi %
+\ifnum 0#5>0 \advance \rightskip by #5\mil \fi %
+\def\itemfont{#2}%
+\itemmax=\tableindent %
+\advance \itemmax by -\itemmargin %
+\advance \leftskip by \tableindent %
+\exdentamount=\tableindent
+\parindent = 0pt
+\parskip = \smallskipamount
+\ifdim \parskip=0pt \parskip=2pt \fi%
+\def\Etable{\endgraf\afterenvbreak\endgroup}%
+\let\item = \internalBitem %
+\let\itemx = \internalBitemx %
+\let\kitem = \internalBkitem %
+\let\kitemx = \internalBkitemx %
+\let\xitem = \internalBxitem %
+\let\xitemx = \internalBxitemx %
+}
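+
+% For illustration (a hypothetical document fragment): @table takes a
+% formatting command to apply to the item names, e.g.:
+%   @table @code
+%   @item foo
+%   Description of @code{foo}.
+%   @item bar
+%   Description of @code{bar}.
+%   @end table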
+
+% This is the counter used by @enumerate, which is implemented in terms
+% of @itemize.
+
+\newcount \itemno
+
+\def\itemize{\parsearg\itemizezzz}
+
+\def\itemizezzz #1{%
+ \begingroup % ended by the @end itemize
+ \itemizey {#1}{\Eitemize}
+}
+
+\def\itemizey #1#2{%
+\aboveenvbreak %
+\itemmax=\itemindent %
+\advance \itemmax by -\itemmargin %
+\advance \leftskip by \itemindent %
+\exdentamount=\itemindent
+\parindent = 0pt %
+\parskip = \smallskipamount %
+\ifdim \parskip=0pt \parskip=2pt \fi%
+\def#2{\endgraf\afterenvbreak\endgroup}%
+\def\itemcontents{#1}%
+\let\item=\itemizeitem}
+
+% Set sfcode to normal for the chars that usually have another value.
+% These are `.?!:;,'
+\def\frenchspacing{\sfcode46=1000 \sfcode63=1000 \sfcode33=1000
+ \sfcode58=1000 \sfcode59=1000 \sfcode44=1000 }
+
+% \splitoff TOKENS\endmark defines \first to be the first token in
+% TOKENS, and \rest to be the remainder.
+%
+\def\splitoff#1#2\endmark{\def\first{#1}\def\rest{#2}}%
+
+% Allow an optional argument of an uppercase letter, lowercase letter,
+% or number, to specify the first label in the enumerated list. No
+% argument is the same as `1'.
+%
+\def\enumerate{\parsearg\enumeratezzz}
+\def\enumeratezzz #1{\enumeratey #1 \endenumeratey}
+\def\enumeratey #1 #2\endenumeratey{%
+ \begingroup % ended by the @end enumerate
+ %
+ % If we were given no argument, pretend we were given `1'.
+ \def\thearg{#1}%
+ \ifx\thearg\empty \def\thearg{1}\fi
+ %
+ % Detect if the argument is a single token. If so, it might be a
+ % letter. Otherwise, the only valid thing it can be is a number.
+ % (We will always have one token, because of the test we just made.
+ % This is a good thing, since \splitoff doesn't work given nothing at
+ % all -- the first parameter is undelimited.)
+ \expandafter\splitoff\thearg\endmark
+ \ifx\rest\empty
+ % Only one token in the argument. It could still be anything.
+ % A ``lowercase letter'' is one whose \lccode is nonzero and equal
+ % to the character code of the letter itself; an ``uppercase letter''
+ % is one whose \lccode is nonzero but different from the character.
+ % If the \lccode is zero, we assume it's a number.
+ %
+ % We need the \relax at the end of the \ifnum lines to stop TeX from
+ % continuing to look for a <number>.
+ %
+ \ifnum\lccode\expandafter`\thearg=0\relax
+ \numericenumerate % a number (we hope)
+ \else
+ % It's a letter.
+ \ifnum\lccode\expandafter`\thearg=\expandafter`\thearg\relax
+ \lowercaseenumerate % lowercase letter
+ \else
+ \uppercaseenumerate % uppercase letter
+ \fi
+ \fi
+ \else
+ % Multiple tokens in the argument. We hope it's a number.
+ \numericenumerate
+ \fi
+}
+
+% An @enumerate whose labels are integers. The starting integer is
+% given in \thearg.
+%
+\def\numericenumerate{%
+ \itemno = \thearg
+ \startenumeration{\the\itemno}%
+}
+
+% The starting (lowercase) letter is in \thearg.
+\def\lowercaseenumerate{%
+ \itemno = \expandafter`\thearg
+ \startenumeration{%
+ % Be sure we're not beyond the end of the alphabet.
+ \ifnum\itemno=0
+ \errmessage{No more lowercase letters in @enumerate; get a bigger
+ alphabet}%
+ \fi
+ \char\lccode\itemno
+ }%
+}
+
+% The starting (uppercase) letter is in \thearg.
+\def\uppercaseenumerate{%
+ \itemno = \expandafter`\thearg
+ \startenumeration{%
+ % Be sure we're not beyond the end of the alphabet.
+ \ifnum\itemno=0
+ \errmessage{No more uppercase letters in @enumerate; get a bigger
+ alphabet}
+ \fi
+ \char\uccode\itemno
+ }%
+}
+
+% Call itemizey, adding a period to the first argument and supplying the
+% common last two arguments. Also subtract one from the initial value in
+% \itemno, since @item increments \itemno.
+%
+\def\startenumeration#1{%
+ \advance\itemno by -1
+ \itemizey{#1.}\Eenumerate\flushcr
+}
+
+% @alphaenumerate and @capsenumerate are abbreviations for giving an arg
+% to @enumerate.
+%
+\def\alphaenumerate{\enumerate{a}}
+\def\capsenumerate{\enumerate{A}}
+\def\Ealphaenumerate{\Eenumerate}
+\def\Ecapsenumerate{\Eenumerate}
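+
+% For illustration (hypothetical usage): the optional argument selects
+% the first label, so
+%   @enumerate a
+%   @item one
+%   @item two
+%   @end enumerate
+% labels the items `a.' and `b.'.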
+
+% Definition of @item while inside @itemize.
+
+\def\itemizeitem{%
+\advance\itemno by 1
+{\let\par=\endgraf \smallbreak}%
+\ifhmode \errmessage{In hmode at itemizeitem}\fi
+{\parskip=0in \hskip 0pt
+\hbox to 0pt{\hss \itemcontents\hskip \itemmargin}%
+\vadjust{\penalty 1200}}%
+\flushcr}
+
+% @multitable macros
+% Amy Hendrickson, 8/18/94, 3/6/96
+%
+% @multitable ... @end multitable will make as many columns as desired.
+% Contents of each column will wrap at width given in preamble. Width
+% can be specified either with sample text given in a template line,
+% or in percent of \hsize, the current width of text on page.
+
+% Table can continue over pages but will only break between lines.
+
+% To make preamble:
+%
+% Either define widths of columns in terms of percent of \hsize:
+% @multitable @columnfractions .25 .3 .45
+% @item ...
+%
+% Numbers following @columnfractions are the fractions of the total
+% current \hsize to be used for each column. You may use as many
+% columns as desired.
+
+
+% Or use a template:
+% @multitable {Column 1 template} {Column 2 template} {Column 3 template}
+% @item ...
+% using the widest term desired in each column.
+%
+% For those who want to use more than one line's worth of words in
+% the preamble, break the line within one argument and it
+% will parse correctly, i.e.,
+%
+% @multitable {Column 1 template} {Column 2 template} {Column 3
+% template}
+% Not:
+% @multitable {Column 1 template} {Column 2 template}
+% {Column 3 template}
+
+% Each new table line starts with @item, each subsequent new column
+% starts with @tab. Empty columns may be produced by supplying @tab's
+% with nothing between them for as many times as empty columns are needed,
+% i.e., @tab@tab@tab will produce two empty columns.
+
+% @item, @tab, @multitable or @end multitable do not need to be on their
+% own lines, but it will not hurt if they are.
+
+% Sample multitable:
+
+% @multitable {Column 1 template} {Column 2 template} {Column 3 template}
+% @item first col stuff @tab second col stuff @tab third col
+% @item
+% first col stuff
+% @tab
+% second col stuff
+% @tab
+% third col
+% @item first col stuff @tab second col stuff
+% @tab Many paragraphs of text may be used in any column.
+%
+% They will wrap at the width determined by the template.
+% @item@tab@tab This will be in third column.
+% @end multitable
+
+% Default dimensions may be reset by user.
+% @multitableparskip is vertical space between paragraphs in table.
+% @multitableparindent is paragraph indent in table.
+% @multitablecolmargin is horizontal space to be left between columns.
+% @multitablelinespace is space to leave between table items, baseline
+% to baseline.
+% 0pt means it depends on current normal line spacing.
+%
+\newskip\multitableparskip
+\newskip\multitableparindent
+\newdimen\multitablecolspace
+\newskip\multitablelinespace
+\multitableparskip=0pt
+\multitableparindent=6pt
+\multitablecolspace=12pt
+\multitablelinespace=0pt
+
+% Macros used to set up halign preamble:
+%
+\let\endsetuptable\relax
+\def\xendsetuptable{\endsetuptable}
+\let\columnfractions\relax
+\def\xcolumnfractions{\columnfractions}
+\newif\ifsetpercent
+
+% 2/1/96, to allow fractions to be given with more than one digit.
+\def\pickupwholefraction#1 {\global\advance\colcount by1 %
+\expandafter\xdef\csname col\the\colcount\endcsname{.#1\hsize}%
+\setuptable}
+
+\newcount\colcount
+\def\setuptable#1{\def\firstarg{#1}%
+\ifx\firstarg\xendsetuptable\let\go\relax%
+\else
+ \ifx\firstarg\xcolumnfractions\global\setpercenttrue%
+ \else
+ \ifsetpercent
+ \let\go\pickupwholefraction % In this case arg of setuptable
+ % is the decimal point before the
+ % number given in percent of hsize.
+ % We don't need this so we don't use it.
+ \else
+ \global\advance\colcount by1
+ \setbox0=\hbox{#1 }% Add a normal word space as a separator;
+ % typically that is always in the input, anyway.
+ \expandafter\xdef\csname col\the\colcount\endcsname{\the\wd0}%
+ \fi%
+ \fi%
+\ifx\go\pickupwholefraction\else\let\go\setuptable\fi%
+\fi\go}
+
+% multitable syntax
+\def\tab{&\hskip1sp\relax} % 2/2/96
+ % tiny skip here makes sure this column space is
+ % maintained, even if it is never used.
+
+% @multitable ... @end multitable definitions:
+
+\def\multitable{\parsearg\dotable}
+\def\dotable#1{\bgroup
+ \vskip\parskip
+ \let\item\crcr
+ \tolerance=9500
+ \hbadness=9500
+ \setmultitablespacing
+ \parskip=\multitableparskip
+ \parindent=\multitableparindent
+ \overfullrule=0pt
+ \global\colcount=0
+ \def\Emultitable{\global\setpercentfalse\cr\egroup\egroup}%
+ %
+ % To parse everything between @multitable and @item:
+ \setuptable#1 \endsetuptable
+ %
+ % \everycr will reset column counter, \colcount, at the end of
+ % each line. Every column entry will cause \colcount to advance by one.
+ % The table preamble
+ % looks at the current \colcount to find the correct column width.
+ \everycr{\noalign{%
+ %
+ % \filbreak%% keeps underfull box messages off when table breaks over pages.
+ % Maybe so, but it also creates really weird page breaks when the table
+ % breaks over pages. Wouldn't \vfil be better? Wait until the problem
+ % manifests itself, so it can be fixed for real --karl.
+ \global\colcount=0\relax}}%
+ %
+ % This preamble sets up a generic column definition, which will
+ % be used as many times as user calls for columns.
+ % \vtop will set a single line and will also let text wrap and
+ % continue for many paragraphs if desired.
+ \halign\bgroup&\global\advance\colcount by 1\relax
+ \multistrut\vtop{\hsize=\expandafter\csname col\the\colcount\endcsname
+ %
+ % In order to keep entries from bumping into each other
+ % we will add a \leftskip of \multitablecolspace to all columns after
+ % the first one.
+ %
+ % If a template has been used, we will add \multitablecolspace
+ % to the width of each template entry.
+ %
+ % If the user has set preamble in terms of percent of \hsize we will
+ % use that dimension as the width of the column, and the \leftskip
+ % will keep entries from bumping into each other. Table will start at
+ % left margin and final column will justify at right margin.
+ %
+ % Make sure we don't inherit \rightskip from the outer environment.
+ \rightskip=0pt
+ \ifnum\colcount=1
+ % The first column will be indented with the surrounding text.
+ \advance\hsize by\leftskip
+ \else
+ \ifsetpercent \else
+ % If user has not set preamble in terms of percent of \hsize
+ % we will advance \hsize by \multitablecolspace.
+ \advance\hsize by \multitablecolspace
+ \fi
+ % In either case we will make \leftskip=\multitablecolspace:
+ \leftskip=\multitablecolspace
+ \fi
+ % Ignoring space at the beginning and end avoids an occasional spurious
+ % blank line, when TeX decides to break the line at the space before the
+ % box from the multistrut, so the strut ends up on a line by itself.
+ % For example:
+ % @multitable @columnfractions .11 .89
+ % @item @code{#}
+ % @tab Legal holiday which is valid in major parts of the whole country.
+ % Is automatically provided with highlighting sequences respectively marking
+ % characters.
+ \noindent\ignorespaces##\unskip\multistrut}\cr
+}
+
+\def\setmultitablespacing{% test to see if user has set \multitablelinespace.
+% If so, do nothing. If not, give it an appropriate dimension based on
+% current baselineskip.
+\ifdim\multitablelinespace=0pt
+%% strut to put in table in case some entry doesn't have descenders,
+%% to keep lines equally spaced
+\let\multistrut = \strut
+\else
+\gdef\multistrut{\vrule height\multitablelinespace depth\dp0
+width0pt\relax} \fi
+%% Test to see if parskip is larger than space between lines of
+%% table. If not, do nothing.
+%% If so, set to same dimension as multitablelinespace.
+\ifdim\multitableparskip>\multitablelinespace
+\global\multitableparskip=\multitablelinespace
+\global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller
+ %% than skip between lines in the table.
+\fi%
+\ifdim\multitableparskip=0pt
+\global\multitableparskip=\multitablelinespace
+\global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller
+ %% than skip between lines in the table.
+\fi}
+
+
+\message{indexing,}
+% Index generation facilities
+
+% Define \newwrite to be identical to plain tex's \newwrite
+% except not \outer, so it can be used within \newindex.
+{\catcode`\@=11
+\gdef\newwrite{\alloc@7\write\chardef\sixt@@n}}
+
+% \newindex {foo} defines an index named foo.
+% It automatically defines \fooindex such that
+% \fooindex ...rest of line... puts an entry in the index foo.
+% It also defines \fooindfile to be the number of the output channel for
+% the file that accumulates this index. The file's extension is foo.
+% The name of an index should be no more than 2 characters long
+% for the sake of VMS.
+
+\def\newindex #1{
+\expandafter\newwrite \csname#1indfile\endcsname% Define number for output file
+\openout \csname#1indfile\endcsname \jobname.#1 % Open the file
+\expandafter\xdef\csname#1index\endcsname{% % Define \xxxindex
+\noexpand\doindex {#1}}
+}
+
+% @defindex foo == \newindex{foo}
+
+\def\defindex{\parsearg\newindex}
+
+% Define @defcodeindex, like @defindex except put all entries in @code.
+
+\def\newcodeindex #1{
+\expandafter\newwrite \csname#1indfile\endcsname% Define number for output file
+\openout \csname#1indfile\endcsname \jobname.#1 % Open the file
+\expandafter\xdef\csname#1index\endcsname{% % Define \xxxindex
+\noexpand\docodeindex {#1}}
+}
+
+\def\defcodeindex{\parsearg\newcodeindex}
+
+% @synindex foo bar makes index foo feed into index bar.
+% Do this instead of @defindex foo if you don't want it as a separate index.
+% The \closeout helps reduce unnecessary open files; the limit on the
+% Acorn RISC OS is a mere 16 files.
+\def\synindex#1 #2 {%
+ \expandafter\let\expandafter\synindexfoo\expandafter=\csname#2indfile\endcsname
+ \expandafter\closeout\csname#1indfile\endcsname
+ \expandafter\let\csname#1indfile\endcsname=\synindexfoo
+ \expandafter\xdef\csname#1index\endcsname{% define \xxxindex
+ \noexpand\doindex{#2}}%
+}
+
+% @syncodeindex foo bar similar, but put all entries made for index foo
+% inside @code.
+\def\syncodeindex#1 #2 {%
+ \expandafter\let\expandafter\synindexfoo\expandafter=\csname#2indfile\endcsname
+ \expandafter\closeout\csname#1indfile\endcsname
+ \expandafter\let\csname#1indfile\endcsname=\synindexfoo
+ \expandafter\xdef\csname#1index\endcsname{% define \xxxindex
+ \noexpand\docodeindex{#2}}%
+}
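+
+% For illustration (hypothetical usage in a Texinfo source):
+%   @defcodeindex au          @c new index `au', entries set in @code
+%   @auindex some-entry       @c add an entry to the `au' index
+%   @synindex au cp           @c or merge index `au' into `cp'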
+
+% Define \doindex, the driver for all \fooindex macros.
+% Argument #1 is generated by the calling \fooindex macro,
+% and it is "foo", the name of the index.
+
+% \doindex just uses \parsearg; it calls \doind for the actual work.
+% This is because \doind is more useful to call from other macros.
+
+% There is also \dosubind {index}{topic}{subtopic}
+% which makes an entry in a two-level index such as the operation index.
+
+\def\doindex#1{\edef\indexname{#1}\parsearg\singleindexer}
+\def\singleindexer #1{\doind{\indexname}{#1}}
+
+% like the previous two, but they put @code around the argument.
+\def\docodeindex#1{\edef\indexname{#1}\parsearg\singlecodeindexer}
+\def\singlecodeindexer #1{\doind{\indexname}{\code{#1}}}
+
+\def\indexdummies{%
+\def\ { }%
+% Take care of the plain tex accent commands.
+\def\"{\realbackslash "}%
+\def\`{\realbackslash `}%
+\def\'{\realbackslash '}%
+\def\^{\realbackslash ^}%
+\def\~{\realbackslash ~}%
+\def\={\realbackslash =}%
+\def\b{\realbackslash b}%
+\def\c{\realbackslash c}%
+\def\d{\realbackslash d}%
+\def\u{\realbackslash u}%
+\def\v{\realbackslash v}%
+\def\H{\realbackslash H}%
+% Take care of the plain tex special European modified letters.
+\def\oe{\realbackslash oe}%
+\def\ae{\realbackslash ae}%
+\def\aa{\realbackslash aa}%
+\def\OE{\realbackslash OE}%
+\def\AE{\realbackslash AE}%
+\def\AA{\realbackslash AA}%
+\def\o{\realbackslash o}%
+\def\O{\realbackslash O}%
+\def\l{\realbackslash l}%
+\def\L{\realbackslash L}%
+\def\ss{\realbackslash ss}%
+% Take care of texinfo commands likely to appear in an index entry.
+% (Must be a way to avoid doing expansion at all, and thus not have to
+% laboriously list every single command here.)
+\def\@{@}% will be @@ when we switch to @ as escape char.
+%\let\{ = \lbracecmd
+%\let\} = \rbracecmd
+\def\_{{\realbackslash _}}%
+\def\w{\realbackslash w }%
+\def\bf{\realbackslash bf }%
+%\def\rm{\realbackslash rm }%
+\def\sl{\realbackslash sl }%
+\def\sf{\realbackslash sf}%
+\def\tt{\realbackslash tt}%
+\def\gtr{\realbackslash gtr}%
+\def\less{\realbackslash less}%
+\def\hat{\realbackslash hat}%
+%\def\char{\realbackslash char}%
+\def\TeX{\realbackslash TeX}%
+\def\dots{\realbackslash dots }%
+\def\result{\realbackslash result}%
+\def\equiv{\realbackslash equiv}%
+\def\expansion{\realbackslash expansion}%
+\def\print{\realbackslash print}%
+\def\error{\realbackslash error}%
+\def\point{\realbackslash point}%
+\def\copyright{\realbackslash copyright}%
+\def\tclose##1{\realbackslash tclose {##1}}%
+\def\code##1{\realbackslash code {##1}}%
+\def\dotless##1{\realbackslash dotless {##1}}%
+\def\samp##1{\realbackslash samp {##1}}%
+\def\,##1{\realbackslash ,{##1}}%
+\def\t##1{\realbackslash t {##1}}%
+\def\r##1{\realbackslash r {##1}}%
+\def\i##1{\realbackslash i {##1}}%
+\def\b##1{\realbackslash b {##1}}%
+\def\sc##1{\realbackslash sc {##1}}%
+\def\cite##1{\realbackslash cite {##1}}%
+\def\key##1{\realbackslash key {##1}}%
+\def\file##1{\realbackslash file {##1}}%
+\def\var##1{\realbackslash var {##1}}%
+\def\kbd##1{\realbackslash kbd {##1}}%
+\def\dfn##1{\realbackslash dfn {##1}}%
+\def\emph##1{\realbackslash emph {##1}}%
+\def\value##1{\realbackslash value {##1}}%
+\unsepspaces
+}
+
+% If an index command is used in an @example environment, any spaces
+% therein should become regular spaces in the raw index file, not the
+% expansion of \tie (\leavevmode \penalty \@M \ ).
+{\obeyspaces
+ \gdef\unsepspaces{\obeyspaces\let =\space}}
+
+% \indexnofonts no-ops all font-change commands.
+% This is used when outputting the strings to sort the index by.
+\def\indexdummyfont#1{#1}
+\def\indexdummytex{TeX}
+\def\indexdummydots{...}
+
+\def\indexnofonts{%
+% Just ignore accents.
+\let\,=\indexdummyfont
+\let\"=\indexdummyfont
+\let\`=\indexdummyfont
+\let\'=\indexdummyfont
+\let\^=\indexdummyfont
+\let\~=\indexdummyfont
+\let\==\indexdummyfont
+\let\b=\indexdummyfont
+\let\c=\indexdummyfont
+\let\d=\indexdummyfont
+\let\u=\indexdummyfont
+\let\v=\indexdummyfont
+\let\H=\indexdummyfont
+\let\dotless=\indexdummyfont
+% Take care of the plain tex special European modified letters.
+\def\oe{oe}%
+\def\ae{ae}%
+\def\aa{aa}%
+\def\OE{OE}%
+\def\AE{AE}%
+\def\AA{AA}%
+\def\o{o}%
+\def\O{O}%
+\def\l{l}%
+\def\L{L}%
+\def\ss{ss}%
+\let\w=\indexdummyfont
+\let\t=\indexdummyfont
+\let\r=\indexdummyfont
+\let\i=\indexdummyfont
+\let\b=\indexdummyfont
+\let\emph=\indexdummyfont
+\let\strong=\indexdummyfont
+\let\cite=\indexdummyfont
+\let\sc=\indexdummyfont
+%Don't no-op \tt, since it isn't a user-level command
+% and is used in the definitions of the active chars like <, >, |...
+%\let\tt=\indexdummyfont
+\let\tclose=\indexdummyfont
+\let\code=\indexdummyfont
+\let\file=\indexdummyfont
+\let\samp=\indexdummyfont
+\let\kbd=\indexdummyfont
+\let\key=\indexdummyfont
+\let\var=\indexdummyfont
+\let\TeX=\indexdummytex
+\let\dots=\indexdummydots
+\def\@{@}%
+}
+
+% To define \realbackslash, we must make \ not be an escape.
+% We must first make another character (@) an escape
+% so we do not become unable to do a definition.
+
+{\catcode`\@=0 \catcode`\\=\other
+@gdef@realbackslash{\}}
+
+\let\indexbackslash=0 %overridden during \printindex.
+
+\let\SETmarginindex=\relax %initialize!
+% workhorse for all \fooindexes
+% #1 is name of index, #2 is stuff to put there
+\def\doind #1#2{%
+ % Put the index entry in the margin if desired.
+ \ifx\SETmarginindex\relax\else
+ \insert\margin{\hbox{\vrule height8pt depth3pt width0pt #2}}%
+ \fi
+ {%
+ \count255=\lastpenalty
+ {%
+ \indexdummies % Must do this here, since \bf, etc expand at this stage
+ \escapechar=`\\
+ {%
+ \let\folio=0% We will expand all macros now EXCEPT \folio.
+ \def\rawbackslashxx{\indexbackslash}% \indexbackslash isn't defined now
+ % so it will be output as is; and it will print as backslash.
+ %
+ % First process the index-string with all font commands turned off
+ % to get the string to sort by.
+ {\indexnofonts \xdef\indexsorttmp{#2}}%
+ %
+ % Now produce the complete index entry, with both the sort key and the
+ % original text, including any font commands.
+ \toks0 = {#2}%
+ \edef\temp{%
+ \write\csname#1indfile\endcsname{%
+ \realbackslash entry{\indexsorttmp}{\folio}{\the\toks0}}%
+ }%
+ \temp
+ }%
+ }%
+ \penalty\count255
+ }%
+}
+
+\def\dosubind #1#2#3{%
+{\count10=\lastpenalty %
+{\indexdummies % Must do this here, since \bf, etc expand at this stage
+\escapechar=`\\%
+{\let\folio=0%
+\def\rawbackslashxx{\indexbackslash}%
+%
+% Now process the index-string once, with all font commands turned off,
+% to get the string to sort the index by.
+{\indexnofonts
+\xdef\temp1{#2 #3}%
+}%
+% Now produce the complete index entry. We process the index-string again,
+% this time with font commands expanded, to get what to print in the index.
+\edef\temp{%
+\write \csname#1indfile\endcsname{%
+\realbackslash entry {\temp1}{\folio}{#2}{#3}}}%
+\temp }%
+}\penalty\count10}}
+
+% The index entry written in the file actually looks like
+% \entry {sortstring}{page}{topic}
+% or
+% \entry {sortstring}{page}{topic}{subtopic}
+% The texindex program reads in these files and writes files
+% containing these kinds of lines:
+% \initial {c}
+% before the first topic whose initial is c
+% \entry {topic}{pagelist}
+% for a topic that is used without subtopics
+% \primary {topic}
+% for the beginning of a topic that is used with subtopics
+% \secondary {subtopic}{pagelist}
+% for each subtopic.
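+% For example (an illustrative sketch, not output from a real run):
+% an entry indexed under `foo' on page 7 could be written to the raw
+% index file as
+%	\entry {foo}{7}{foo}
+% and texindex might then emit
+%	\initial {f}
+%	\entry {foo}{7}
+% for \printindex to typeset.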
+
+% Define the user-accessible indexing commands
+% @findex, @vindex, @kindex, @cindex.
+
+\def\findex {\fnindex}
+\def\kindex {\kyindex}
+\def\cindex {\cpindex}
+\def\vindex {\vrindex}
+\def\tindex {\tpindex}
+\def\pindex {\pgindex}
+
+\def\cindexsub {\begingroup\obeylines\cindexsub}
+{\obeylines %
+\gdef\cindexsub "#1" #2^^M{\endgroup %
+\dosubind{cp}{#2}{#1}}}
+
+% Define the macros used in formatting output of the sorted index material.
+
+% @printindex causes a particular index (the ??s file) to get printed.
+% It does not print any chapter heading (usually an @unnumbered).
+%
+\def\printindex{\parsearg\doprintindex}
+\def\doprintindex#1{\begingroup
+ \dobreak \chapheadingskip{10000}%
+ %
+ \indexfonts \rm
+ \tolerance = 9500
+ \indexbreaks
+ %
+ % See if the index file exists and is nonempty.
+ % Change catcode of @ here so that if the index file contains
+ % \initial {@}
+ % as its first line, TeX doesn't complain about mismatched braces
+ % (because it thinks @} is a control sequence).
+ \catcode`\@ = 11
+ \openin 1 \jobname.#1s
+ \ifeof 1
+ % \enddoublecolumns gets confused if there is no text in the index,
+ % and it loses the chapter title and the aux file entries for the
+ % index. The easiest way to prevent this problem is to make sure
+ % there is some text.
+ (Index is nonexistent)
+ \else
+ %
+ % If the index file exists but is empty, then \openin leaves \ifeof
+ % false. We have to make TeX try to read something from the file, so
+ % it can discover if there is anything in it.
+ \read 1 to \temp
+ \ifeof 1
+ (Index is empty)
+ \else
+ % Index files are almost Texinfo source, but we use \ as the escape
+ % character. It would be better to use @, but that's too big a change
+ % to make right now.
+ \def\indexbackslash{\rawbackslashxx}%
+ \catcode`\\ = 0
+ \escapechar = `\\
+ \begindoublecolumns
+ \input \jobname.#1s
+ \enddoublecolumns
+ \fi
+ \fi
+ \closein 1
+\endgroup}
+
+% These macros are used by the sorted index file itself.
+% Change them to control the appearance of the index.
+
+% Same as \bigskipamount except no shrink.
+% \balancecolumns gets confused if there is any shrink.
+\newskip\initialskipamount \initialskipamount 12pt plus4pt
+
+\def\initial #1{%
+{\let\tentt=\sectt \let\tt=\sectt \let\sf=\sectt
+\ifdim\lastskip<\initialskipamount
+\removelastskip \penalty-200 \vskip \initialskipamount\fi
+\line{\secbf#1\hfill}\kern 2pt\penalty10000}}
+
+% This typesets a paragraph consisting of #1, dot leaders, and then #2
+% flush to the right margin. It is used for index and table of contents
+% entries. The paragraph is indented by \leftskip.
+%
+\def\entry #1#2{\begingroup
+ %
+ % Start a new paragraph if necessary, so our assignments below can't
+ % affect previous text.
+ \par
+ %
+ % Do not fill out the last line with white space.
+ \parfillskip = 0in
+ %
+ % No extra space above this paragraph.
+ \parskip = 0in
+ %
+ % Do not prefer a separate line ending with a hyphen to fewer lines.
+ \finalhyphendemerits = 0
+ %
+ % \hangindent is only relevant when the entry text and page number
+  % don't both fit on one line.  In that case, Bob suggests starting the
+ % dots pretty far over on the line. Unfortunately, a large
+ % indentation looks wrong when the entry text itself is broken across
+ % lines. So we use a small indentation and put up with long leaders.
+ %
+ % \hangafter is reset to 1 (which is the value we want) at the start
+ % of each paragraph, so we need not do anything with that.
+ \hangindent=2em
+ %
+ % When the entry text needs to be broken, just fill out the first line
+ % with blank space.
+ \rightskip = 0pt plus1fil
+ %
+ % Start a ``paragraph'' for the index entry so the line breaking
+ % parameters we've set above will have an effect.
+ \noindent
+ %
+ % Insert the text of the index entry. TeX will do line-breaking on it.
+ #1%
+ % The following is kludged to not output a line of dots in the index if
+ % there are no page numbers. The next person who breaks this will be
+ % cursed by a Unix daemon.
+ \def\tempa{{\rm }}%
+ \def\tempb{#2}%
+ \edef\tempc{\tempa}%
+ \edef\tempd{\tempb}%
+ \ifx\tempc\tempd\ \else%
+ %
+ % If we must, put the page number on a line of its own, and fill out
+ % this line with blank space. (The \hfil is overwhelmed with the
+ % fill leaders glue in \indexdotfill if the page number does fit.)
+ \hfil\penalty50
+ \null\nobreak\indexdotfill % Have leaders before the page number.
+ %
+ % The `\ ' here is removed by the implicit \unskip that TeX does as
+ % part of (the primitive) \par. Without it, a spurious underfull
+ % \hbox ensues.
+ \ #2% The page number ends the paragraph.
+ \fi%
+ \par
+\endgroup}
+
+% Like \dotfill except takes at least 1 em.
+\def\indexdotfill{\cleaders
+ \hbox{$\mathsurround=0pt \mkern1.5mu ${\it .}$ \mkern1.5mu$}\hskip 1em plus 1fill}
+
+\def\primary #1{\line{#1\hfil}}
+
+\newskip\secondaryindent \secondaryindent=0.5cm
+
+\def\secondary #1#2{
+{\parfillskip=0in \parskip=0in
+\hangindent =1in \hangafter=1
+\noindent\hskip\secondaryindent\hbox{#1}\indexdotfill #2\par
+}}
+
+% Define two-column mode, which we use to typeset indexes.
+% Adapted from the TeXbook, page 416, which is to say,
+% the manmac.tex format used to print the TeXbook itself.
+\catcode`\@=11
+
+\newbox\partialpage
+\newdimen\doublecolumnhsize
+
+\def\begindoublecolumns{\begingroup % ended by \enddoublecolumns
+ % Grab any single-column material above us.
+ \output = {\global\setbox\partialpage = \vbox{%
+ %
+ % Here is a possibility not foreseen in manmac: if we accumulate a
+ % whole lot of material, we might end up calling this \output
+ % routine twice in a row (see the doublecol-lose test, which is
+ % essentially a couple of indexes with @setchapternewpage off). In
+ % that case, we must prevent the second \partialpage from
+ % simply overwriting the first, causing us to lose the page.
+ % This will preserve it until a real output routine can ship it
+ % out. Generally, \partialpage will be empty when this runs and
+ % this will be a no-op.
+ \unvbox\partialpage
+ %
+ % Unvbox the main output page.
+ \unvbox255
+ \kern-\topskip \kern\baselineskip
+ }}%
+ \eject
+ %
+ % Use the double-column output routine for subsequent pages.
+ \output = {\doublecolumnout}%
+ %
+ % Change the page size parameters. We could do this once outside this
+ % routine, in each of @smallbook, @afourpaper, and the default 8.5x11
+ % format, but then we repeat the same computation. Repeating a couple
+  % of assignments once per index is clearly negligible for the
+ % execution time, so we may as well do it in one place.
+ %
+ % First we halve the line length, less a little for the gutter between
+ % the columns. We compute the gutter based on the line length, so it
+ % changes automatically with the paper format. The magic constant
+ % below is chosen so that the gutter has the same value (well, +-<1pt)
+ % as it did when we hard-coded it.
+ %
+  % We put the result in a separate register, \doublecolumnhsize, so we
+ % can restore it in \pagesofar, after \hsize itself has (potentially)
+ % been clobbered.
+ %
+ \doublecolumnhsize = \hsize
+ \advance\doublecolumnhsize by -.04154\hsize
+ \divide\doublecolumnhsize by 2
+ \hsize = \doublecolumnhsize
+ %
+ % Double the \vsize as well. (We don't need a separate register here,
+ % since nobody clobbers \vsize.)
+ \vsize = 2\vsize
+}
+\def\doublecolumnout{%
+ \splittopskip=\topskip \splitmaxdepth=\maxdepth
+ % Get the available space for the double columns -- the normal
+ % (undoubled) page height minus any material left over from the
+ % previous page.
+ \dimen@=\pageheight \advance\dimen@ by-\ht\partialpage
+ % box0 will be the left-hand column, box2 the right.
+ \setbox0=\vsplit255 to\dimen@ \setbox2=\vsplit255 to\dimen@
+ \onepageout\pagesofar
+ \unvbox255
+ \penalty\outputpenalty
+}
+\def\pagesofar{%
+ % Re-output the contents of the output page -- any previous material,
+ % followed by the two boxes we just split.
+ \unvbox\partialpage
+ \hsize = \doublecolumnhsize
+ \wd0=\hsize \wd2=\hsize \hbox to\pagewidth{\box0\hfil\box2}%
+}
+\def\enddoublecolumns{%
+ \output = {\balancecolumns}\eject % split what we have
+ \endgroup % started in \begindoublecolumns
+ %
+ % Back to normal single-column typesetting, but take account of the
+ % fact that we just accumulated some stuff on the output page.
+ \pagegoal = \vsize
+}
+\def\balancecolumns{%
+ % Called at the end of the double column material.
+ \setbox0 = \vbox{\unvbox255}%
+ \dimen@ = \ht0
+ \advance\dimen@ by \topskip
+ \advance\dimen@ by-\baselineskip
+ \divide\dimen@ by 2
+ \splittopskip = \topskip
+ % Loop until we get a decent breakpoint.
+ {\vbadness=10000 \loop
+ \global\setbox3=\copy0
+ \global\setbox1=\vsplit3 to\dimen@
+ \ifdim\ht3>\dimen@ \global\advance\dimen@ by1pt
+ \repeat}%
+ \setbox0=\vbox to\dimen@{\unvbox1}%
+ \setbox2=\vbox to\dimen@{\unvbox3}%
+ \pagesofar
+}
+\catcode`\@ = \other
+
+
+\message{sectioning,}
+% Define chapters, sections, etc.
+
+\newcount\chapno
+\newcount\secno \secno=0
+\newcount\subsecno \subsecno=0
+\newcount\subsubsecno \subsubsecno=0
+
+% This counter is funny since it counts through charcodes of letters A, B, ...
+\newcount\appendixno \appendixno = `\@
+\def\appendixletter{\char\the\appendixno}
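+% For example, `\@ is the character code just before `A', so the first
+% @appendix advances \appendixno to the code for `A' and
+% \appendixletter prints `A'; the second prints `B', and so on.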
+
+\newwrite\contentsfile
+% This is called from \setfilename.
+\def\opencontents{\openout\contentsfile = \jobname.toc }
+
+% Each @chapter defines this as the name of the chapter.
+% Page headings and footings can use it.  @section does likewise.
+
+\def\thischapter{} \def\thissection{}
+\def\seccheck#1{\ifnum \pageno<0
+ \errmessage{@#1 not allowed after generating table of contents}%
+\fi}
+
+\def\chapternofonts{%
+ \let\rawbackslash=\relax
+ \let\frenchspacing=\relax
+ \def\result{\realbackslash result}%
+ \def\equiv{\realbackslash equiv}%
+ \def\expansion{\realbackslash expansion}%
+ \def\print{\realbackslash print}%
+ \def\TeX{\realbackslash TeX}%
+ \def\dots{\realbackslash dots}%
+ \def\error{\realbackslash error}%
+ \def\point{\realbackslash point}%
+ \def\copyright{\realbackslash copyright}%
+ \def\tt{\realbackslash tt}%
+ \def\bf{\realbackslash bf}%
+ \def\w{\realbackslash w}%
+ \def\less{\realbackslash less}%
+ \def\gtr{\realbackslash gtr}%
+ \def\hat{\realbackslash hat}%
+ \def\char{\realbackslash char}%
+ \def\tclose##1{\realbackslash tclose{##1}}%
+ \def\code##1{\realbackslash code{##1}}%
+ \def\samp##1{\realbackslash samp{##1}}%
+ \def\r##1{\realbackslash r{##1}}%
+ \def\b##1{\realbackslash b{##1}}%
+ \def\key##1{\realbackslash key{##1}}%
+ \def\file##1{\realbackslash file{##1}}%
+ \def\kbd##1{\realbackslash kbd{##1}}%
+ % These are redefined because @smartitalic wouldn't work inside xdef.
+ \def\i##1{\realbackslash i{##1}}%
+ \def\cite##1{\realbackslash cite{##1}}%
+ \def\var##1{\realbackslash var{##1}}%
+ \def\emph##1{\realbackslash emph{##1}}%
+ \def\dfn##1{\realbackslash dfn{##1}}%
+}
+
+\newcount\absseclevel % used to calculate proper heading level
+\newcount\secbase\secbase=0 % @raise/lowersections modify this count
+
+% @raisesections: treat @section as chapter, @subsection as section, etc.
+\def\raisesections{\global\advance\secbase by -1}
+\let\up=\raisesections % original name, from Brian Fox's texinfo
+
+% @lowersections: treat @chapter as section, @section as subsection, etc.
+\def\lowersections{\global\advance\secbase by 1}
+\let\down=\lowersections % original name, from Brian Fox's texinfo
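+% Typical usage (a sketch): to pull in a file whose @chapter commands
+% should be typeset as sections, bracket the @include like so:
+%	@lowersections
+%	@include inner.texi
+%	@raisesections
+% where `inner.texi' is a hypothetical included file.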
+
+% Choose a numbered-heading macro
+% #1 is heading level if unmodified by @raisesections or @lowersections
+% #2 is text for heading
+\def\numhead#1#2{\absseclevel=\secbase\advance\absseclevel by #1
+\ifcase\absseclevel
+ \chapterzzz{#2}
+\or
+ \seczzz{#2}
+\or
+ \numberedsubseczzz{#2}
+\or
+ \numberedsubsubseczzz{#2}
+\else
+ \ifnum \absseclevel<0
+ \chapterzzz{#2}
+ \else
+ \numberedsubsubseczzz{#2}
+ \fi
+\fi
+}
+
+% like \numhead, but chooses appendix heading levels
+\def\apphead#1#2{\absseclevel=\secbase\advance\absseclevel by #1
+\ifcase\absseclevel
+ \appendixzzz{#2}
+\or
+ \appendixsectionzzz{#2}
+\or
+ \appendixsubseczzz{#2}
+\or
+ \appendixsubsubseczzz{#2}
+\else
+ \ifnum \absseclevel<0
+ \appendixzzz{#2}
+ \else
+ \appendixsubsubseczzz{#2}
+ \fi
+\fi
+}
+
+% like \numhead, but chooses numberless heading levels
+\def\unnmhead#1#2{\absseclevel=\secbase\advance\absseclevel by #1
+\ifcase\absseclevel
+ \unnumberedzzz{#2}
+\or
+ \unnumberedseczzz{#2}
+\or
+ \unnumberedsubseczzz{#2}
+\or
+ \unnumberedsubsubseczzz{#2}
+\else
+ \ifnum \absseclevel<0
+ \unnumberedzzz{#2}
+ \else
+ \unnumberedsubsubseczzz{#2}
+ \fi
+\fi
+}
+
+
+\def\thischaptername{No Chapter Title}
+\outer\def\chapter{\parsearg\chapteryyy}
+\def\chapteryyy #1{\numhead0{#1}} % normally numhead0 calls chapterzzz
+\def\chapterzzz #1{\seccheck{chapter}%
+\secno=0 \subsecno=0 \subsubsecno=0
+\global\advance \chapno by 1 \message{\putwordChapter \the\chapno}%
+\chapmacro {#1}{\the\chapno}%
+\gdef\thissection{#1}%
+\gdef\thischaptername{#1}%
+% We don't substitute the actual chapter name into \thischapter
+% because we don't want its macros evaluated now.
+\xdef\thischapter{\putwordChapter{} \the\chapno: \noexpand\thischaptername}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash chapentry{\the\toks0}{\the\chapno}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\donoderef %
+\global\let\section = \numberedsec
+\global\let\subsection = \numberedsubsec
+\global\let\subsubsection = \numberedsubsubsec
+}}
+
+\outer\def\appendix{\parsearg\appendixyyy}
+\def\appendixyyy #1{\apphead0{#1}} % normally apphead0 calls appendixzzz
+\def\appendixzzz #1{\seccheck{appendix}%
+\secno=0 \subsecno=0 \subsubsecno=0
+\global\advance \appendixno by 1 \message{Appendix \appendixletter}%
+\chapmacro {#1}{\putwordAppendix{} \appendixletter}%
+\gdef\thissection{#1}%
+\gdef\thischaptername{#1}%
+\xdef\thischapter{\putwordAppendix{} \appendixletter: \noexpand\thischaptername}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash chapentry{\the\toks0}%
+ {\putwordAppendix{} \appendixletter}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\appendixnoderef %
+\global\let\section = \appendixsec
+\global\let\subsection = \appendixsubsec
+\global\let\subsubsection = \appendixsubsubsec
+}}
+
+% @centerchap is like @unnumbered, but the heading is centered.
+\outer\def\centerchap{\parsearg\centerchapyyy}
+\def\centerchapyyy #1{{\let\unnumbchapmacro=\centerchapmacro \unnumberedyyy{#1}}}
+
+\outer\def\top{\parsearg\unnumberedyyy}
+\outer\def\unnumbered{\parsearg\unnumberedyyy}
+\def\unnumberedyyy #1{\unnmhead0{#1}} % normally unnmhead0 calls unnumberedzzz
+\def\unnumberedzzz #1{\seccheck{unnumbered}%
+\secno=0 \subsecno=0 \subsubsecno=0
+%
+% This used to be simply \message{#1}, but TeX fully expands the
+% argument to \message. Therefore, if #1 contained @-commands, TeX
+% expanded them. For example, in `@unnumbered The @cite{Book}', TeX
+% expanded @cite (which turns out to cause errors because \cite is meant
+% to be executed, not expanded).
+%
+% Anyway, we don't want the fully-expanded definition of @cite to appear
+% as a result of the \message, we just want `@cite' itself. We use
+% \the<toks register> to achieve this: TeX expands \the<toks> only once,
+% simply yielding the contents of the <toks register>.
+\toks0 = {#1}\message{(\the\toks0)}%
+%
+\unnumbchapmacro {#1}%
+\gdef\thischapter{#1}\gdef\thissection{#1}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash unnumbchapentry{\the\toks0}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\unnumbnoderef %
+\global\let\section = \unnumberedsec
+\global\let\subsection = \unnumberedsubsec
+\global\let\subsubsection = \unnumberedsubsubsec
+}}
+
+\outer\def\numberedsec{\parsearg\secyyy}
+\def\secyyy #1{\numhead1{#1}} % normally calls seczzz
+\def\seczzz #1{\seccheck{section}%
+\subsecno=0 \subsubsecno=0 \global\advance \secno by 1 %
+\gdef\thissection{#1}\secheading {#1}{\the\chapno}{\the\secno}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash secentry %
+{\the\toks0}{\the\chapno}{\the\secno}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\donoderef %
+\penalty 10000 %
+}}
+
+\outer\def\appendixsection{\parsearg\appendixsecyyy}
+\outer\def\appendixsec{\parsearg\appendixsecyyy}
+\def\appendixsecyyy #1{\apphead1{#1}} % normally calls appendixsectionzzz
+\def\appendixsectionzzz #1{\seccheck{appendixsection}%
+\subsecno=0 \subsubsecno=0 \global\advance \secno by 1 %
+\gdef\thissection{#1}\secheading {#1}{\appendixletter}{\the\secno}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash secentry %
+{\the\toks0}{\appendixletter}{\the\secno}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\appendixnoderef %
+\penalty 10000 %
+}}
+
+\outer\def\unnumberedsec{\parsearg\unnumberedsecyyy}
+\def\unnumberedsecyyy #1{\unnmhead1{#1}} % normally calls unnumberedseczzz
+\def\unnumberedseczzz #1{\seccheck{unnumberedsec}%
+\plainsecheading {#1}\gdef\thissection{#1}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash unnumbsecentry{\the\toks0}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\unnumbnoderef %
+\penalty 10000 %
+}}
+
+\outer\def\numberedsubsec{\parsearg\numberedsubsecyyy}
+\def\numberedsubsecyyy #1{\numhead2{#1}} % normally calls numberedsubseczzz
+\def\numberedsubseczzz #1{\seccheck{subsection}%
+\gdef\thissection{#1}\subsubsecno=0 \global\advance \subsecno by 1 %
+\subsecheading {#1}{\the\chapno}{\the\secno}{\the\subsecno}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash subsecentry %
+{\the\toks0}{\the\chapno}{\the\secno}{\the\subsecno}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\donoderef %
+\penalty 10000 %
+}}
+
+\outer\def\appendixsubsec{\parsearg\appendixsubsecyyy}
+\def\appendixsubsecyyy #1{\apphead2{#1}} % normally calls appendixsubseczzz
+\def\appendixsubseczzz #1{\seccheck{appendixsubsec}%
+\gdef\thissection{#1}\subsubsecno=0 \global\advance \subsecno by 1 %
+\subsecheading {#1}{\appendixletter}{\the\secno}{\the\subsecno}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash subsecentry %
+{\the\toks0}{\appendixletter}{\the\secno}{\the\subsecno}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\appendixnoderef %
+\penalty 10000 %
+}}
+
+\outer\def\unnumberedsubsec{\parsearg\unnumberedsubsecyyy}
+\def\unnumberedsubsecyyy #1{\unnmhead2{#1}} %normally calls unnumberedsubseczzz
+\def\unnumberedsubseczzz #1{\seccheck{unnumberedsubsec}%
+\plainsubsecheading {#1}\gdef\thissection{#1}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash unnumbsubsecentry{\the\toks0}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\unnumbnoderef %
+\penalty 10000 %
+}}
+
+\outer\def\numberedsubsubsec{\parsearg\numberedsubsubsecyyy}
+\def\numberedsubsubsecyyy #1{\numhead3{#1}} % normally numberedsubsubseczzz
+\def\numberedsubsubseczzz #1{\seccheck{subsubsection}%
+\gdef\thissection{#1}\global\advance \subsubsecno by 1 %
+\subsubsecheading {#1}
+ {\the\chapno}{\the\secno}{\the\subsecno}{\the\subsubsecno}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash subsubsecentry{\the\toks0}
+ {\the\chapno}{\the\secno}{\the\subsecno}{\the\subsubsecno}
+ {\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\donoderef %
+\penalty 10000 %
+}}
+
+\outer\def\appendixsubsubsec{\parsearg\appendixsubsubsecyyy}
+\def\appendixsubsubsecyyy #1{\apphead3{#1}} % normally appendixsubsubseczzz
+\def\appendixsubsubseczzz #1{\seccheck{appendixsubsubsec}%
+\gdef\thissection{#1}\global\advance \subsubsecno by 1 %
+\subsubsecheading {#1}
+ {\appendixletter}{\the\secno}{\the\subsecno}{\the\subsubsecno}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash subsubsecentry{\the\toks0}%
+ {\appendixletter}
+ {\the\secno}{\the\subsecno}{\the\subsubsecno}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\appendixnoderef %
+\penalty 10000 %
+}}
+
+\outer\def\unnumberedsubsubsec{\parsearg\unnumberedsubsubsecyyy}
+\def\unnumberedsubsubsecyyy #1{\unnmhead3{#1}} %normally unnumberedsubsubseczzz
+\def\unnumberedsubsubseczzz #1{\seccheck{unnumberedsubsubsec}%
+\plainsubsubsecheading {#1}\gdef\thissection{#1}%
+{\chapternofonts%
+\toks0 = {#1}%
+\edef\temp{{\realbackslash unnumbsubsubsecentry{\the\toks0}{\noexpand\folio}}}%
+\escapechar=`\\%
+\write \contentsfile \temp %
+\unnumbnoderef %
+\penalty 10000 %
+}}
+
+% These are variants which are not "outer", so they can appear in @ifinfo.
+% Actually, they should now be obsolete; ordinary section commands should work.
+\def\infotop{\parsearg\unnumberedzzz}
+\def\infounnumbered{\parsearg\unnumberedzzz}
+\def\infounnumberedsec{\parsearg\unnumberedseczzz}
+\def\infounnumberedsubsec{\parsearg\unnumberedsubseczzz}
+\def\infounnumberedsubsubsec{\parsearg\unnumberedsubsubseczzz}
+
+\def\infoappendix{\parsearg\appendixzzz}
+\def\infoappendixsec{\parsearg\appendixsectionzzz}
+\def\infoappendixsubsec{\parsearg\appendixsubseczzz}
+\def\infoappendixsubsubsec{\parsearg\appendixsubsubseczzz}
+
+\def\infochapter{\parsearg\chapterzzz}
+\def\infosection{\parsearg\seczzz}
+\def\infosubsection{\parsearg\numberedsubseczzz}
+\def\infosubsubsection{\parsearg\numberedsubsubseczzz}
+
+% These macros control what the section commands do, according
+% to what kind of chapter we are in (ordinary, appendix, or unnumbered).
+% Define them by default for a numbered chapter.
+\global\let\section = \numberedsec
+\global\let\subsection = \numberedsubsec
+\global\let\subsubsection = \numberedsubsubsec
+
+% Define @majorheading, @heading and @subheading
+
+% NOTE on use of \vbox for chapter headings, section headings, and
+% such:
+% 1) We use \vbox rather than the earlier \line to permit
+% overlong headings to fold.
+% 2) \hyphenpenalty is set to 10000 because hyphenation in a
+% heading is obnoxious; this forbids it.
+% 3) Likewise, headings look best if no \parindent is used, and
+% if justification is not attempted. Hence \raggedright.
+
+
+\def\majorheading{\parsearg\majorheadingzzz}
+\def\majorheadingzzz #1{%
+{\advance\chapheadingskip by 10pt \chapbreak }%
+{\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
+ \parindent=0pt\raggedright
+ \rm #1\hfill}}\bigskip \par\penalty 200}
+
+\def\chapheading{\parsearg\chapheadingzzz}
+\def\chapheadingzzz #1{\chapbreak %
+{\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
+ \parindent=0pt\raggedright
+ \rm #1\hfill}}\bigskip \par\penalty 200}
+
+% @heading, @subheading, @subsubheading.
+\def\heading{\parsearg\plainsecheading}
+\def\subheading{\parsearg\plainsubsecheading}
+\def\subsubheading{\parsearg\plainsubsubsecheading}
+
+% These macros generate a chapter, section, etc. heading only
+% (including whitespace, linebreaking, etc. around it),
+% given all the information in convenient, parsed form.
+
+%%% Args are the skip and penalty (usually negative)
+\def\dobreak#1#2{\par\ifdim\lastskip<#1\removelastskip\penalty#2\vskip#1\fi}
+
+\def\setchapterstyle #1 {\csname CHAPF#1\endcsname}
+
+%%% Define plain chapter starts, and page on/off switching for it
+% Parameter controlling skip before chapter headings (if needed)
+
+\newskip\chapheadingskip
+
+\def\chapbreak{\dobreak \chapheadingskip {-4000}}
+\def\chappager{\par\vfill\supereject}
+\def\chapoddpage{\chappager \ifodd\pageno \else \hbox to 0pt{} \chappager\fi}
+
+\def\setchapternewpage #1 {\csname CHAPPAG#1\endcsname}
+
+\def\CHAPPAGoff{
+\global\let\contentsalignmacro = \chappager
+\global\let\pchapsepmacro=\chapbreak
+\global\let\pagealignmacro=\chappager}
+
+\def\CHAPPAGon{
+\global\let\contentsalignmacro = \chappager
+\global\let\pchapsepmacro=\chappager
+\global\let\pagealignmacro=\chappager
+\global\def\HEADINGSon{\HEADINGSsingle}}
+
+\def\CHAPPAGodd{
+\global\let\contentsalignmacro = \chapoddpage
+\global\let\pchapsepmacro=\chapoddpage
+\global\let\pagealignmacro=\chapoddpage
+\global\def\HEADINGSon{\HEADINGSdouble}}
+
+\CHAPPAGon
+
+\def\CHAPFplain{
+\global\let\chapmacro=\chfplain
+\global\let\unnumbchapmacro=\unnchfplain
+\global\let\centerchapmacro=\centerchfplain}
+
+% Plain chapter opening.
+% #1 is the text, #2 the chapter number or empty if unnumbered.
+\def\chfplain#1#2{%
+ \pchapsepmacro
+ {%
+ \chapfonts \rm
+ \def\chapnum{#2}%
+ \setbox0 = \hbox{#2\ifx\chapnum\empty\else\enspace\fi}%
+ \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright
+ \hangindent = \wd0 \centerparametersmaybe
+ \unhbox0 #1\par}%
+ }%
+ \nobreak\bigskip % no page break after a chapter title
+ \nobreak
+}
+
+% Plain opening for unnumbered.
+\def\unnchfplain#1{\chfplain{#1}{}}
+
+% @centerchap -- centered and unnumbered.
+\let\centerparametersmaybe = \relax
+\def\centerchfplain#1{{%
+ \def\centerparametersmaybe{%
+ \advance\rightskip by 3\rightskip
+ \leftskip = \rightskip
+ \parfillskip = 0pt
+ }%
+ \chfplain{#1}{}%
+}}
+
+\CHAPFplain % The default
+
+\def\unnchfopen #1{%
+\chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
+ \parindent=0pt\raggedright
+ \rm #1\hfill}}\bigskip \par\penalty 10000 %
+}
+
+\def\chfopen #1#2{\chapoddpage {\chapfonts
+\vbox to 3in{\vfil \hbox to\hsize{\hfil #2} \hbox to\hsize{\hfil #1} \vfil}}%
+\par\penalty 5000 %
+}
+
+\def\centerchfopen #1{%
+\chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
+ \parindent=0pt
+ \hfill {\rm #1}\hfill}}\bigskip \par\penalty 10000 %
+}
+
+\def\CHAPFopen{
+\global\let\chapmacro=\chfopen
+\global\let\unnumbchapmacro=\unnchfopen
+\global\let\centerchapmacro=\centerchfopen}
+
+
+% Section titles.
+\newskip\secheadingskip
+\def\secheadingbreak{\dobreak \secheadingskip {-1000}}
+\def\secheading#1#2#3{\sectionheading{sec}{#2.#3}{#1}}
+\def\plainsecheading#1{\sectionheading{sec}{}{#1}}
+
+% Subsection titles.
+\newskip \subsecheadingskip
+\def\subsecheadingbreak{\dobreak \subsecheadingskip {-500}}
+\def\subsecheading#1#2#3#4{\sectionheading{subsec}{#2.#3.#4}{#1}}
+\def\plainsubsecheading#1{\sectionheading{subsec}{}{#1}}
+
+% Subsubsection titles.
+\let\subsubsecheadingskip = \subsecheadingskip
+\let\subsubsecheadingbreak = \subsecheadingbreak
+\def\subsubsecheading#1#2#3#4#5{\sectionheading{subsubsec}{#2.#3.#4.#5}{#1}}
+\def\plainsubsubsecheading#1{\sectionheading{subsubsec}{}{#1}}
+
+
+% Print any size section title.
+%
+% #1 is the section type (sec/subsec/subsubsec), #2 is the section
+% number (maybe empty), #3 the text.
+\def\sectionheading#1#2#3{%
+ {%
+ \expandafter\advance\csname #1headingskip\endcsname by \parskip
+ \csname #1headingbreak\endcsname
+ }%
+ {%
+ % Switch to the right set of fonts.
+ \csname #1fonts\endcsname \rm
+ %
+ % Only insert the separating space if we have a section number.
+ \def\secnum{#2}%
+ \setbox0 = \hbox{#2\ifx\secnum\empty\else\enspace\fi}%
+ %
+ \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright
+ \hangindent = \wd0 % zero if no section number
+ \unhbox0 #3}%
+ }%
+ \ifdim\parskip<10pt \nobreak\kern10pt\nobreak\kern-\parskip\fi \nobreak
+}
+
+
+\message{toc printing,}
+% Finish up the main text and prepare to read what we've written
+% to \contentsfile.
+
+\newskip\contentsrightmargin \contentsrightmargin=1in
+\def\startcontents#1{%
+ % If @setchapternewpage on, and @headings double, the contents should
+ % start on an odd page, unlike chapters. Thus, we maintain
+ % \contentsalignmacro in parallel with \pagealignmacro.
+ % From: Torbjorn Granlund <tege@matematik.su.se>
+ \contentsalignmacro
+ \immediate\closeout \contentsfile
+ \ifnum \pageno>0
+ \pageno = -1 % Request roman numbered pages.
+ \fi
+ % Don't need to put `Contents' or `Short Contents' in the headline.
+ % It is abundantly clear what they are.
+ \unnumbchapmacro{#1}\def\thischapter{}%
+ \begingroup % Set up to handle contents files properly.
+ \catcode`\\=0 \catcode`\{=1 \catcode`\}=2 \catcode`\@=11
+ % We can't do this, because then an actual ^ in a section
+ % title fails, e.g., @chapter ^ -- exponentiation. --karl, 9jul97.
+ %\catcode`\^=7 % to see ^^e4 as \"a etc. juha@piuha.ydi.vtt.fi
+ \raggedbottom % Worry more about breakpoints than the bottom.
+ \advance\hsize by -\contentsrightmargin % Don't use the full line length.
+}
+
+
+% Normal (long) toc.
+\outer\def\contents{%
+ \startcontents{\putwordTableofContents}%
+ \input \jobname.toc
+ \endgroup
+ \vfill \eject
+}
+
+% And just the chapters.
+\outer\def\summarycontents{%
+ \startcontents{\putwordShortContents}%
+ %
+ \let\chapentry = \shortchapentry
+ \let\unnumbchapentry = \shortunnumberedentry
+ % We want a true roman here for the page numbers.
+ \secfonts
+ \let\rm=\shortcontrm \let\bf=\shortcontbf \let\sl=\shortcontsl
+ \rm
+ \hyphenpenalty = 10000
+ \advance\baselineskip by 1pt % Open it up a little.
+ \def\secentry ##1##2##3##4{}
+ \def\unnumbsecentry ##1##2{}
+ \def\subsecentry ##1##2##3##4##5{}
+ \def\unnumbsubsecentry ##1##2{}
+ \def\subsubsecentry ##1##2##3##4##5##6{}
+ \def\unnumbsubsubsecentry ##1##2{}
+ \input \jobname.toc
+ \endgroup
+ \vfill \eject
+}
+\let\shortcontents = \summarycontents
+
+% These macros generate individual entries in the table of contents.
+% The first argument is the chapter or section name.
+% The last argument is the page number.
+% The arguments in between are the chapter number, section number, ...
+
+% Chapter-level things, for both the long and short contents.
+\def\chapentry#1#2#3{\dochapentry{#2\labelspace#1}{#3}}
+
+% See comments in \dochapentry re vbox and related settings
+\def\shortchapentry#1#2#3{%
+ \tocentry{\shortchaplabel{#2}\labelspace #1}{\doshortpageno{#3}}%
+}
+
+% Typeset the label for a chapter or appendix for the short contents.
+% The arg is, e.g. `Appendix A' for an appendix, or `3' for a chapter.
+% We could simplify the code here by writing out an \appendixentry
+% command in the toc file for appendices, instead of using \chapentry
+% for both, but it doesn't seem worth it.
+\setbox0 = \hbox{\shortcontrm \putwordAppendix }
+\newdimen\shortappendixwidth \shortappendixwidth = \wd0
+
+\def\shortchaplabel#1{%
+ % We typeset #1 in a box of constant width, regardless of the text of
+ % #1, so the chapter titles will come out aligned.
+ \setbox0 = \hbox{#1}%
+ \dimen0 = \ifdim\wd0 > \shortappendixwidth \shortappendixwidth \else 0pt \fi
+ %
+ % This space should be plenty, since a single number is .5em, and the
+ % widest letter (M) is 1em, at least in the Computer Modern fonts.
+ % (This space doesn't include the extra space that gets added after
+ % the label; that gets put in by \shortchapentry above.)
+ \advance\dimen0 by 1.1em
+ \hbox to \dimen0{#1\hfil}%
+}
+
+\def\unnumbchapentry#1#2{\dochapentry{#1}{#2}}
+\def\shortunnumberedentry#1#2{\tocentry{#1}{\doshortpageno{#2}}}
+
+% Sections.
+\def\secentry#1#2#3#4{\dosecentry{#2.#3\labelspace#1}{#4}}
+\def\unnumbsecentry#1#2{\dosecentry{#1}{#2}}
+
+% Subsections.
+\def\subsecentry#1#2#3#4#5{\dosubsecentry{#2.#3.#4\labelspace#1}{#5}}
+\def\unnumbsubsecentry#1#2{\dosubsecentry{#1}{#2}}
+
+% And subsubsections.
+\def\subsubsecentry#1#2#3#4#5#6{%
+ \dosubsubsecentry{#2.#3.#4.#5\labelspace#1}{#6}}
+\def\unnumbsubsubsecentry#1#2{\dosubsubsecentry{#1}{#2}}
+
+% This parameter controls the indentation of the various levels.
+\newdimen\tocindent \tocindent = 3pc
+
+% Now for the actual typesetting. In all these, #1 is the text and #2 is the
+% page number.
+%
+% If the toc has to be broken over pages, we want it to be at chapters
+% if at all possible; hence the \penalty.
+\def\dochapentry#1#2{%
+ \penalty-300 \vskip1\baselineskip plus.33\baselineskip minus.25\baselineskip
+ \begingroup
+ \chapentryfonts
+ \tocentry{#1}{\dopageno{#2}}%
+ \endgroup
+ \nobreak\vskip .25\baselineskip plus.1\baselineskip
+}
+
+\def\dosecentry#1#2{\begingroup
+ \secentryfonts \leftskip=\tocindent
+ \tocentry{#1}{\dopageno{#2}}%
+\endgroup}
+
+\def\dosubsecentry#1#2{\begingroup
+ \subsecentryfonts \leftskip=2\tocindent
+ \tocentry{#1}{\dopageno{#2}}%
+\endgroup}
+
+\def\dosubsubsecentry#1#2{\begingroup
+ \subsubsecentryfonts \leftskip=3\tocindent
+ \tocentry{#1}{\dopageno{#2}}%
+\endgroup}
+
+% Final typesetting of a toc entry; we use the same \entry macro as for
+% the index entries, but we want to suppress hyphenation here. (We
+% can't do that in the \entry macro, since index entries might consist
+% of hyphenated-identifiers-that-do-not-fit-on-a-line-and-nothing-else.)
+\def\tocentry#1#2{\begingroup
+ \vskip 0pt plus1pt % allow a little stretch for the sake of nice page breaks
+ % Do not use \turnoffactive in these arguments.  Since the toc is
+ % typeset in cmr, characters such as _ would come out wrong; we
+ % have to do the usual translation tricks.
+ \entry{#1}{#2}%
+\endgroup}
+
+% Space between chapter (or whatever) number and the title.
+\def\labelspace{\hskip1em \relax}
+
+\def\dopageno#1{{\rm #1}}
+\def\doshortpageno#1{{\rm #1}}
+
+\def\chapentryfonts{\secfonts \rm}
+\def\secentryfonts{\textfonts}
+\let\subsecentryfonts = \textfonts
+\let\subsubsecentryfonts = \textfonts
+
+
+\message{environments,}
+
+% Since these characters are used in examples, each should be an even
+% number of \tt widths.  Each \tt character is 1en, so two makes it 1em.
+% Furthermore, these definitions must come after we define our fonts.
+\newbox\dblarrowbox \newbox\longdblarrowbox
+\newbox\pushcharbox \newbox\bullbox
+\newbox\equivbox \newbox\errorbox
+
+%{\tentt
+%\global\setbox\dblarrowbox = \hbox to 1em{\hfil$\Rightarrow$\hfil}
+%\global\setbox\longdblarrowbox = \hbox to 1em{\hfil$\mapsto$\hfil}
+%\global\setbox\pushcharbox = \hbox to 1em{\hfil$\dashv$\hfil}
+%\global\setbox\equivbox = \hbox to 1em{\hfil$\ptexequiv$\hfil}
+% Adapted from the manmac format (p.420 of TeXbook)
+%\global\setbox\bullbox = \hbox to 1em{\kern.15em\vrule height .75ex width .85ex
+% depth .1ex\hfil}
+%}
+
+% @point{}, @result{}, @expansion{}, @print{}, @equiv{}.
+\def\point{$\star$}
+\def\result{\leavevmode\raise.15ex\hbox to 1em{\hfil$\Rightarrow$\hfil}}
+\def\expansion{\leavevmode\raise.1ex\hbox to 1em{\hfil$\mapsto$\hfil}}
+\def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}}
+\def\equiv{\leavevmode\lower.1ex\hbox to 1em{\hfil$\ptexequiv$\hfil}}
+
+% Adapted from the TeXbook's \boxit.
+{\tentt \global\dimen0 = 3em}% Width of the box.
+\dimen2 = .55pt % Thickness of rules
+% The text. (`r' is open on the right, `e' somewhat less so on the left.)
+\setbox0 = \hbox{\kern-.75pt \tensf error\kern-1.5pt}
+
+\global\setbox\errorbox=\hbox to \dimen0{\hfil
+ \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right.
+ \advance\hsize by -2\dimen2 % Rules.
+ \vbox{
+ \hrule height\dimen2
+ \hbox{\vrule width\dimen2 \kern3pt % Space to left of text.
+ \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below.
+ \kern3pt\vrule width\dimen2}% Space to right.
+ \hrule height\dimen2}
+ \hfil}
+
+% The @error{} command.
+\def\error{\leavevmode\lower.7ex\copy\errorbox}
+
+% @tex ... @end tex escapes into raw TeX temporarily.
+% One exception: @ is still an escape character, so that @end tex works.
+% But \@ or @@ will get a plain TeX @ character.
+
+\def\tex{\begingroup
+ \catcode `\\=0 \catcode `\{=1 \catcode `\}=2
+ \catcode `\$=3 \catcode `\&=4 \catcode `\#=6
+ \catcode `\^=7 \catcode `\_=8 \catcode `\~=13 \let~=\tie
+ \catcode `\%=14
+ \catcode 43=12 % plus
+ \catcode`\"=12
+ \catcode`\==12
+ \catcode`\|=12
+ \catcode`\<=12
+ \catcode`\>=12
+ \escapechar=`\\
+ %
+ \let\b=\ptexb
+ \let\bullet=\ptexbullet
+ \let\c=\ptexc
+ \let\,=\ptexcomma
+ \let\.=\ptexdot
+ \let\dots=\ptexdots
+ \let\equiv=\ptexequiv
+ \let\!=\ptexexclam
+ \let\i=\ptexi
+ \let\{=\ptexlbrace
+ \let\+=\tabalign
+ \let\}=\ptexrbrace
+ \let\*=\ptexstar
+ \let\t=\ptext
+ %
+ \def\endldots{\mathinner{\ldots\ldots\ldots\ldots}}%
+ \def\enddots{\relax\ifmmode\endldots\else$\mathsurround=0pt \endldots\,$\fi}%
+ \def\@{@}%
+\let\Etex=\endgroup}
+
+% Define @lisp ... @endlisp.
+% @lisp does a \begingroup so it can rebind things,
+% including the definition of @endlisp (which normally is erroneous).
+
+% Amount to narrow the margins by for @lisp.
+\newskip\lispnarrowing \lispnarrowing=0.4in
+
+% This is the definition that ^^M gets inside @lisp, @example, and other
+% such environments. \null is better than a space, since it doesn't
+% have any width.
+\def\lisppar{\null\endgraf}
+
+% Make each space character in the input produce a normal interword
+% space in the output. Don't allow a line break at this space, as this
+% is used only in environments like @example, where each line of input
+% should produce a line of output anyway.
+%
+{\obeyspaces %
+\gdef\sepspaces{\obeyspaces\let =\tie}}
+
+% Define \obeyedspace to be our active space, whatever it is. This is
+% for use in \parsearg.
+{\sepspaces%
+\global\let\obeyedspace= }
+
+% This space is always present above and below environments.
+\newskip\envskipamount \envskipamount = 0pt
+
+% Make spacing above and below an environment symmetrical.  We use
+% \parskip here to help in doing that, since in @example-like
+% environments \parskip is reset to zero; thus the \afterenvbreak
+% inserts no space -- but the start of the next paragraph will insert
+% \parskip.
+%
+\def\aboveenvbreak{{\advance\envskipamount by \parskip
+\endgraf \ifdim\lastskip<\envskipamount
+\removelastskip \penalty-50 \vskip\envskipamount \fi}}
+
+\let\afterenvbreak = \aboveenvbreak
+
+% \nonarrowing is a flag. If "set", @lisp etc don't narrow margins.
+\let\nonarrowing=\relax
+
+% @cartouche ... @end cartouche: draw rectangle w/rounded corners around
+% environment contents.
+\font\circle=lcircle10
+\newdimen\circthick
+\newdimen\cartouter\newdimen\cartinner
+\newskip\normbskip\newskip\normpskip\newskip\normlskip
+\circthick=\fontdimen8\circle
+%
+\def\ctl{{\circle\char'013\hskip -6pt}}% 6pt from pl file: 1/2charwidth
+\def\ctr{{\hskip 6pt\circle\char'010}}
+\def\cbl{{\circle\char'012\hskip -6pt}}
+\def\cbr{{\hskip 6pt\circle\char'011}}
+\def\carttop{\hbox to \cartouter{\hskip\lskip
+ \ctl\leaders\hrule height\circthick\hfil\ctr
+ \hskip\rskip}}
+\def\cartbot{\hbox to \cartouter{\hskip\lskip
+ \cbl\leaders\hrule height\circthick\hfil\cbr
+ \hskip\rskip}}
+%
+\newskip\lskip\newskip\rskip
+
+\long\def\cartouche{%
+\begingroup
+ \lskip=\leftskip \rskip=\rightskip
+ \leftskip=0pt\rightskip=0pt %we want these *outside*.
+ \cartinner=\hsize \advance\cartinner by-\lskip
+ \advance\cartinner by-\rskip
+ \cartouter=\hsize
+ \advance\cartouter by 18.4pt % allow for 3pt kerns on either
+% side, and for 6pt waste from
+% each corner char, and rule thickness
+ \normbskip=\baselineskip \normpskip=\parskip \normlskip=\lineskip
+ % Flag to tell @lisp, etc., not to narrow margin.
+ \let\nonarrowing=\comment
+ \vbox\bgroup
+ \baselineskip=0pt\parskip=0pt\lineskip=0pt
+ \carttop
+ \hbox\bgroup
+ \hskip\lskip
+ \vrule\kern3pt
+ \vbox\bgroup
+ \hsize=\cartinner
+ \kern3pt
+ \begingroup
+ \baselineskip=\normbskip
+ \lineskip=\normlskip
+ \parskip=\normpskip
+ \vskip -\parskip
+\def\Ecartouche{%
+ \endgroup
+ \kern3pt
+ \egroup
+ \kern3pt\vrule
+ \hskip\rskip
+ \egroup
+ \cartbot
+ \egroup
+\endgroup
+}}
+
+
+% This macro is called at the beginning of all the @example variants,
+% inside a group.
+\def\nonfillstart{%
+ \aboveenvbreak
+ \inENV % This group ends at the end of the body
+ \hfuzz = 12pt % Don't be fussy
+ \sepspaces % Make spaces be word-separators rather than space tokens.
+ \singlespace
+ \let\par = \lisppar % don't ignore blank lines
+ \obeylines % each line of input is a line of output
+ \parskip = 0pt
+ \parindent = 0pt
+ \emergencystretch = 0pt % don't try to avoid overfull boxes
+ % @cartouche defines \nonarrowing to inhibit narrowing
+ % at next level down.
+ \ifx\nonarrowing\relax
+ \advance \leftskip by \lispnarrowing
+ \exdentamount=\lispnarrowing
+ \let\exdent=\nofillexdent
+ \let\nonarrowing=\relax
+ \fi
+}
+
+% To end an @example-like environment, we first end the paragraph
+% (via \afterenvbreak's vertical glue), and then the group. That way we
+% keep the zero \parskip that the environments set -- \parskip glue
+% will be inserted at the beginning of the next paragraph in the
+% document, after the environment.
+%
+\def\nonfillfinish{\afterenvbreak\endgroup}%
+
+\def\lisp{\begingroup
+ \nonfillstart
+ \let\Elisp = \nonfillfinish
+ \tt
+ % Make @kbd do something special, if requested.
+ \let\kbdfont\kbdexamplefont
+ \rawbackslash % have \ input char produce \ char from current font
+ \gobble
+}
+
+% Define the \E... control sequence only if we are inside the
+% environment, so the error checking in \end will work.
+%
+% We must call \lisp last in the definition, since it reads the
+% return following the @example (or whatever) command.
+%
+\def\example{\begingroup \def\Eexample{\nonfillfinish\endgroup}\lisp}
+\def\smallexample{\begingroup \def\Esmallexample{\nonfillfinish\endgroup}\lisp}
+\def\smalllisp{\begingroup \def\Esmalllisp{\nonfillfinish\endgroup}\lisp}
+
+% @smallexample and @smalllisp. This is not used unless the @smallbook
+% command is given. Originally contributed by Pavel@xerox.
+%
+\def\smalllispx{\begingroup
+ \nonfillstart
+ \let\Esmalllisp = \nonfillfinish
+ \let\Esmallexample = \nonfillfinish
+ %
+ % Smaller fonts for small examples.
+ \indexfonts \tt
+ \rawbackslash % make \ output the \ character from the current font (tt)
+ \gobble
+}
+
+% This is @display; same as @lisp except use roman font.
+%
+\def\display{\begingroup
+ \nonfillstart
+ \let\Edisplay = \nonfillfinish
+ \gobble
+}
+
+% This is @format; same as @display except don't narrow margins.
+%
+\def\format{\begingroup
+ \let\nonarrowing = t
+ \nonfillstart
+ \let\Eformat = \nonfillfinish
+ \gobble
+}
+
+% @flushleft (same as @format) and @flushright.
+%
+\def\flushleft{\begingroup
+ \let\nonarrowing = t
+ \nonfillstart
+ \let\Eflushleft = \nonfillfinish
+ \gobble
+}
+\def\flushright{\begingroup
+ \let\nonarrowing = t
+ \nonfillstart
+ \let\Eflushright = \nonfillfinish
+ \advance\leftskip by 0pt plus 1fill
+ \gobble}
+
+% @quotation does normal linebreaking (hence we can't use \nonfillstart)
+% and narrows the margins.
+%
+\def\quotation{%
+ \begingroup\inENV %This group ends at the end of the @quotation body
+ {\parskip=0pt \aboveenvbreak}% because \aboveenvbreak inserts \parskip
+ \singlespace
+ \parindent=0pt
+ % We have retained a nonzero parskip for the environment, since we're
+ % doing normal filling. So to avoid extra space below the environment...
+ \def\Equotation{\parskip = 0pt \nonfillfinish}%
+ %
+ % @cartouche defines \nonarrowing to inhibit narrowing at next level down.
+ \ifx\nonarrowing\relax
+ \advance\leftskip by \lispnarrowing
+ \advance\rightskip by \lispnarrowing
+ \exdentamount = \lispnarrowing
+ \let\nonarrowing = \relax
+ \fi
+}
+
+\message{defuns,}
+% Define formatter for defuns
+% First, allow user to change definition object font (\df) internally
+\def\setdeffont #1 {\csname DEF#1\endcsname}
+
+\newskip\defbodyindent \defbodyindent=.4in
+\newskip\defargsindent \defargsindent=50pt
+\newskip\deftypemargin \deftypemargin=12pt
+\newskip\deflastargmargin \deflastargmargin=18pt
+
+\newcount\parencount
+% define \functionparens, which makes ( and ) and & do special things.
+% \functionparens affects the group it is contained in.
+\def\activeparens{%
+\catcode`\(=\active \catcode`\)=\active \catcode`\&=\active
+\catcode`\[=\active \catcode`\]=\active}
+
+% Make control sequences which act like normal parenthesis chars.
+\let\lparen = ( \let\rparen = )
+
+{\activeparens % Now, smart parens don't turn on until &foo (see \amprm)
+
+% Be sure that we always have a definition for `(', etc. For example,
+% if the fn name has parens in it, \boldbrax will not be in effect yet,
+% so TeX would otherwise complain about an undefined control sequence.
+\global\let(=\lparen \global\let)=\rparen
+\global\let[=\lbrack \global\let]=\rbrack
+
+\gdef\functionparens{\boldbrax\let&=\amprm\parencount=0 }
+\gdef\boldbrax{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb}
+% This is used to turn on special parens
+% but make & act ordinary (given that it's active).
+\gdef\boldbraxnoamp{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb\let&=\ampnr}
+
+% Definitions of (, ) and & used in args for functions.
+% This is the definition of ( outside of all parentheses.
+\gdef\oprm#1 {{\rm\char`\(}#1 \bf \let(=\opnested
+ \global\advance\parencount by 1
+}
+%
+% This is the definition of ( when already inside a level of parens.
+\gdef\opnested{\char`\(\global\advance\parencount by 1 }
+%
+\gdef\clrm{% Print a paren in roman if it is taking us back to depth of 0.
+ % also in that case restore the outer-level definition of (.
+ \ifnum \parencount=1 {\rm \char `\)}\sl \let(=\oprm \else \char `\) \fi
+ \global\advance \parencount by -1 }
+% If we encounter &foo, then turn on ()-hacking afterwards
+\gdef\amprm#1 {{\rm\&#1}\let(=\oprm \let)=\clrm\ }
+%
+\gdef\normalparens{\boldbrax\let&=\ampnr}
+} % End of definition inside \activeparens
+%% These parens (in \boldbrax) actually are a little bolder than the
+%% contained text. This is especially needed for [ and ]
+\def\opnr{{\sf\char`\(}\global\advance\parencount by 1 }
+\def\clnr{{\sf\char`\)}\global\advance\parencount by -1 }
+\def\ampnr{\&}
+\def\lbrb{{\bf\char`\[}}
+\def\rbrb{{\bf\char`\]}}
+
+% First, defname, which formats the header line itself.
+% #1 should be the function name.
+% #2 should be the type of definition, such as "Function".
+
+\def\defname #1#2{%
+% Get the values of \leftskip and \rightskip as they were
+% outside the @def...
+\dimen2=\leftskip
+\advance\dimen2 by -\defbodyindent
+\dimen3=\rightskip
+\advance\dimen3 by -\defbodyindent
+\noindent %
+\setbox0=\hbox{\hskip \deflastargmargin{\rm #2}\hskip \deftypemargin}%
+\dimen0=\hsize \advance \dimen0 by -\wd0 % compute size for first line
+\dimen1=\hsize \advance \dimen1 by -\defargsindent %size for continuations
+\parshape 2 0in \dimen0 \defargsindent \dimen1 %
+% Now output arg 2 ("Function" or some such)
+% ending at \deftypemargin from the right margin,
+% but stuck inside a box of width 0 so it does not interfere with linebreaking
+{% Adjust \hsize to exclude the ambient margins,
+% so that \rightline will obey them.
+\advance \hsize by -\dimen2 \advance \hsize by -\dimen3
+\rlap{\rightline{{\rm #2}\hskip \deftypemargin}}}%
+% Make all lines underfull and no complaints:
+\tolerance=10000 \hbadness=10000
+\advance\leftskip by -\defbodyindent
+\exdentamount=\defbodyindent
+{\df #1}\enskip % Generate function name
+}
+
+% Actually process the body of a definition
+% #1 should be the terminating control sequence, such as \Edefun.
+% #2 should be the "another name" control sequence, such as \defunx.
+% #3 should be the control sequence that actually processes the header,
+% such as \defunheader.
+
+\def\defparsebody #1#2#3{\begingroup\inENV% Environment for definition body
+\medbreak %
+% Define the end token that this defining construct specifies
+% so that it will exit this group.
+\def#1{\endgraf\endgroup\medbreak}%
+\def#2{\begingroup\obeylines\activeparens\spacesplit#3}%
+\parindent=0in
+\advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent
+\exdentamount=\defbodyindent
+\begingroup %
+\catcode 61=\active % 61 is `='
+\obeylines\activeparens\spacesplit#3}
+
+% #1 is the \E... control sequence to end the definition (which we define).
+% #2 is the \...x control sequence for consecutive fns (which we define).
+% #3 is the control sequence to call to resume processing.
+% #4, delimited by the space, is the class name.
+%
+\def\defmethparsebody#1#2#3#4 {\begingroup\inENV %
+\medbreak %
+% Define the end token that this defining construct specifies
+% so that it will exit this group.
+\def#1{\endgraf\endgroup\medbreak}%
+\def#2##1 {\begingroup\obeylines\activeparens\spacesplit{#3{##1}}}%
+\parindent=0in
+\advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent
+\exdentamount=\defbodyindent
+\begingroup\obeylines\activeparens\spacesplit{#3{#4}}}
+
+% @deftypemethod has an extra argument that nothing else does. Sigh.
+%
+\def\deftypemethparsebody#1#2#3#4 #5 {\begingroup\inENV %
+\medbreak %
+% Define the end token that this defining construct specifies
+% so that it will exit this group.
+\def#1{\endgraf\endgroup\medbreak}%
+\def#2##1 {\begingroup\obeylines\activeparens\spacesplit{#3{##1}}}%
+\parindent=0in
+\advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent
+\exdentamount=\defbodyindent
+\begingroup\obeylines\activeparens\spacesplit{#3{#4}{#5}}}
+
+\def\defopparsebody #1#2#3#4#5 {\begingroup\inENV %
+\medbreak %
+% Define the end token that this defining construct specifies
+% so that it will exit this group.
+\def#1{\endgraf\endgroup\medbreak}%
+\def#2##1 ##2 {\def#4{##1}%
+\begingroup\obeylines\activeparens\spacesplit{#3{##2}}}%
+\parindent=0in
+\advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent
+\exdentamount=\defbodyindent
+\begingroup\obeylines\activeparens\spacesplit{#3{#5}}}
+
+% These parsing functions are similar to the preceding ones
+% except that they do not make parens into active characters.
+% These are used for "variables" since they have no arguments.
+
+\def\defvarparsebody #1#2#3{\begingroup\inENV% Environment for definition body
+\medbreak %
+% Define the end token that this defining construct specifies
+% so that it will exit this group.
+\def#1{\endgraf\endgroup\medbreak}%
+\def#2{\begingroup\obeylines\spacesplit#3}%
+\parindent=0in
+\advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent
+\exdentamount=\defbodyindent
+\begingroup %
+\catcode 61=\active %
+\obeylines\spacesplit#3}
+
+% This is used for \def{tp,vr}parsebody. It could probably be used for
+% some of the others, too, with some judicious conditionals.
+%
+\def\parsebodycommon#1#2#3{%
+ \begingroup\inENV %
+ \medbreak %
+ % Define the end token that this defining construct specifies
+ % so that it will exit this group.
+ \def#1{\endgraf\endgroup\medbreak}%
+ \def#2##1 {\begingroup\obeylines\spacesplit{#3{##1}}}%
+ \parindent=0in
+ \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent
+ \exdentamount=\defbodyindent
+ \begingroup\obeylines
+}
+
+\def\defvrparsebody#1#2#3#4 {%
+ \parsebodycommon{#1}{#2}{#3}%
+ \spacesplit{#3{#4}}%
+}
+
+% This loses on `@deftp {Data Type} {struct termios}' -- it thinks the
+% type is just `struct', because we lose the braces in `{struct
+% termios}' when \spacesplit reads its undelimited argument. Sigh.
+% \let\deftpparsebody=\defvrparsebody
+%
+% So, to get around this, we put \empty in with the type name. That
+% way, TeX won't find exactly `{...}' as an undelimited argument, and
+% won't strip off the braces.
+%
+\def\deftpparsebody #1#2#3#4 {%
+ \parsebodycommon{#1}{#2}{#3}%
+ \spacesplit{\parsetpheaderline{#3{#4}}}\empty
+}
+
+% Fine, but then we have to eventually remove the \empty *and* the
+% braces (if any). That's what this does.
+%
+\def\removeemptybraces\empty#1\relax{#1}
+
+% After \spacesplit has done its work, this is called -- #1 is the final
+% thing to call, #2 the type name (which starts with \empty), and #3
+% (which might be empty) the arguments.
+%
+\def\parsetpheaderline#1#2#3{%
+ #1{\removeemptybraces#2\relax}{#3}%
+}%
+
+\def\defopvarparsebody #1#2#3#4#5 {\begingroup\inENV %
+\medbreak %
+% Define the end token that this defining construct specifies
+% so that it will exit this group.
+\def#1{\endgraf\endgroup\medbreak}%
+\def#2##1 ##2 {\def#4{##1}%
+\begingroup\obeylines\spacesplit{#3{##2}}}%
+\parindent=0in
+\advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent
+\exdentamount=\defbodyindent
+\begingroup\obeylines\spacesplit{#3{#5}}}
+
+% Split up #2 at the first space token.
+% Call #1 with two arguments:
+% the first is all of #2 before the space token,
+% the second is all of #2 after that space token.
+% If #2 contains no space token, all of it is passed as the first arg
+% and the second is passed as empty.
+
+{\obeylines
+\gdef\spacesplit#1#2^^M{\endgroup\spacesplitfoo{#1}#2 \relax\spacesplitfoo}%
+\long\gdef\spacesplitfoo#1#2 #3#4\spacesplitfoo{%
+\ifx\relax #3%
+#1{#2}{}\else #1{#2}{#3#4}\fi}}
+
+% So much for the things common to all kinds of definitions.
+
+% Define @defun.
+
+% First, define the processing that is wanted for arguments of \defun
+% Use this to expand the args and terminate the paragraph they make up
+
+\def\defunargs #1{\functionparens \sl
+% Expand, preventing hyphenation at `-' chars.
+% Note that groups don't affect changes in \hyphenchar.
+\hyphenchar\tensl=0
+#1%
+\hyphenchar\tensl=45
+\ifnum\parencount=0 \else \errmessage{Unbalanced parentheses in @def}\fi%
+\interlinepenalty=10000
+\advance\rightskip by 0pt plus 1fil
+\endgraf\penalty 10000\vskip -\parskip\penalty 10000%
+}
+
+\def\deftypefunargs #1{%
+% Expand, preventing hyphenation at `-' chars.
+% Note that groups don't affect changes in \hyphenchar.
+% Use \boldbraxnoamp, not \functionparens, so that & is not special.
+\boldbraxnoamp
+\tclose{#1}% avoid \code because of side effects on active chars
+\interlinepenalty=10000
+\advance\rightskip by 0pt plus 1fil
+\endgraf\penalty 10000\vskip -\parskip\penalty 10000%
+}
+
+% Do complete processing of one @defun or @defunx line already parsed.
+
+% @deffn Command forward-char nchars
+
+\def\deffn{\defmethparsebody\Edeffn\deffnx\deffnheader}
+
+\def\deffnheader #1#2#3{\doind {fn}{\code{#2}}%
+\begingroup\defname {#2}{#1}\defunargs{#3}\endgroup %
+\catcode 61=\other % Turn off change made in \defparsebody
+}
+
+% @defun == @deffn Function
+
+\def\defun{\defparsebody\Edefun\defunx\defunheader}
+
+\def\defunheader #1#2{\doind {fn}{\code{#1}}% Make entry in function index
+\begingroup\defname {#1}{Function}%
+\defunargs {#2}\endgroup %
+\catcode 61=\other % Turn off change made in \defparsebody
+}
+
+% @deftypefun int foobar (int @var{foo}, float @var{bar})
+
+\def\deftypefun{\defparsebody\Edeftypefun\deftypefunx\deftypefunheader}
+
+% #1 is the data type. #2 is the name and args.
+\def\deftypefunheader #1#2{\deftypefunheaderx{#1}#2 \relax}
+% #1 is the data type, #2 the name, #3 the args.
+\def\deftypefunheaderx #1#2 #3\relax{%
+\doind {fn}{\code{#2}}% Make entry in function index
+\begingroup\defname {\defheaderxcond#1\relax$$$#2}{Function}%
+\deftypefunargs {#3}\endgroup %
+\catcode 61=\other % Turn off change made in \defparsebody
+}
+
+% @deftypefn {Library Function} int foobar (int @var{foo}, float @var{bar})
+
+\def\deftypefn{\defmethparsebody\Edeftypefn\deftypefnx\deftypefnheader}
+
+% \defheaderxcond#1\relax$$$
+% puts #1 in @code, followed by a space, but does nothing if #1 is null.
+\def\defheaderxcond#1#2$$${\ifx#1\relax\else\code{#1#2} \fi}
+
+% #1 is the classification. #2 is the data type. #3 is the name and args.
+\def\deftypefnheader #1#2#3{\deftypefnheaderx{#1}{#2}#3 \relax}
+% #1 is the classification, #2 the data type, #3 the name, #4 the args.
+\def\deftypefnheaderx #1#2#3 #4\relax{%
+\doind {fn}{\code{#3}}% Make entry in function index
+\begingroup
+\normalparens % notably, turn off `&' magic, which prevents
+% at least some C++ text from working
+\defname {\defheaderxcond#2\relax$$$#3}{#1}%
+\deftypefunargs {#4}\endgroup %
+\catcode 61=\other % Turn off change made in \defparsebody
+}
+
+% @defmac == @deffn Macro
+
+\def\defmac{\defparsebody\Edefmac\defmacx\defmacheader}
+
+\def\defmacheader #1#2{\doind {fn}{\code{#1}}% Make entry in function index
+\begingroup\defname {#1}{Macro}%
+\defunargs {#2}\endgroup %
+\catcode 61=\other % Turn off change made in \defparsebody
+}
+
+% @defspec == @deffn Special Form
+
+\def\defspec{\defparsebody\Edefspec\defspecx\defspecheader}
+
+\def\defspecheader #1#2{\doind {fn}{\code{#1}}% Make entry in function index
+\begingroup\defname {#1}{Special Form}%
+\defunargs {#2}\endgroup %
+\catcode 61=\other % Turn off change made in \defparsebody
+}
+
+% This definition is run if you use @defunx
+% anywhere other than immediately after a @defun or @defunx.
+
+\def\deffnx #1 {\errmessage{@deffnx in invalid context}}
+\def\defunx #1 {\errmessage{@defunx in invalid context}}
+\def\defmacx #1 {\errmessage{@defmacx in invalid context}}
+\def\defspecx #1 {\errmessage{@defspecx in invalid context}}
+\def\deftypefnx #1 {\errmessage{@deftypefnx in invalid context}}
+\def\deftypemethodx #1 {\errmessage{@deftypemethodx in invalid context}}
+\def\deftypefunx #1 {\errmessage{@deftypefunx in invalid context}}
+
+% @defmethod, and so on
+
+% @defop CATEGORY CLASS OPERATION ARG...
+
+\def\defop #1 {\def\defoptype{#1}%
+\defopparsebody\Edefop\defopx\defopheader\defoptype}
+
+\def\defopheader #1#2#3{%
+\dosubind {fn}{\code{#2}}{\putwordon\ #1}% Make entry in function index
+\begingroup\defname {#2}{\defoptype{} on #1}%
+\defunargs {#3}\endgroup %
+}
+
+% @deftypemethod CLASS RETURN-TYPE METHOD ARG...
+%
+\def\deftypemethod{%
+ \deftypemethparsebody\Edeftypemethod\deftypemethodx\deftypemethodheader}
+%
+% #1 is the class name, #2 the data type, #3 the method name, #4 the args.
+\def\deftypemethodheader#1#2#3#4{%
+ \dosubind{fn}{\code{#3}}{\putwordon\ \code{#1}}% entry in function index
+ \begingroup
+ \defname{\defheaderxcond#2\relax$$$#3}{\putwordMethodon\ \code{#1}}%
+ \deftypefunargs{#4}%
+ \endgroup
+}
+
+% @defmethod == @defop Method
+%
+\def\defmethod{\defmethparsebody\Edefmethod\defmethodx\defmethodheader}
+%
+% #1 is the class name, #2 the method name, #3 the args.
+\def\defmethodheader#1#2#3{%
+ \dosubind{fn}{\code{#2}}{\putwordon\ \code{#1}}% entry in function index
+ \begingroup
+ \defname{#2}{\putwordMethodon\ \code{#1}}%
+ \defunargs{#3}%
+ \endgroup
+}
+
+% @defcv {Class Option} foo-class foo-flag
+
+\def\defcv #1 {\def\defcvtype{#1}%
+\defopvarparsebody\Edefcv\defcvx\defcvarheader\defcvtype}
+
+\def\defcvarheader #1#2#3{%
+\dosubind {vr}{\code{#2}}{of #1}% Make entry in var index
+\begingroup\defname {#2}{\defcvtype{} of #1}%
+\defvarargs {#3}\endgroup %
+}
+
+% @defivar == @defcv {Instance Variable}
+
+\def\defivar{\defvrparsebody\Edefivar\defivarx\defivarheader}
+
+\def\defivarheader #1#2#3{%
+\dosubind {vr}{\code{#2}}{of #1}% Make entry in var index
+\begingroup\defname {#2}{Instance Variable of #1}%
+\defvarargs {#3}\endgroup %
+}
+
+% These definitions are run if you use @defmethodx, etc.,
+% anywhere other than immediately after a @defmethod, etc.
+
+\def\defopx #1 {\errmessage{@defopx in invalid context}}
+\def\defmethodx #1 {\errmessage{@defmethodx in invalid context}}
+\def\defcvx #1 {\errmessage{@defcvx in invalid context}}
+\def\defivarx #1 {\errmessage{@defivarx in invalid context}}
+
+% Now @defvar
+
+% First, define the processing that is wanted for arguments of @defvar.
+% This is actually simple: just print them in roman.
+% This must expand the args and terminate the paragraph they make up
+\def\defvarargs #1{\normalparens #1%
+\interlinepenalty=10000
+\endgraf\penalty 10000\vskip -\parskip\penalty 10000}
+
+% @defvr Counter foo-count
+
+\def\defvr{\defvrparsebody\Edefvr\defvrx\defvrheader}
+
+\def\defvrheader #1#2#3{\doind {vr}{\code{#2}}%
+\begingroup\defname {#2}{#1}\defvarargs{#3}\endgroup}
+
+% @defvar == @defvr Variable
+
+\def\defvar{\defvarparsebody\Edefvar\defvarx\defvarheader}
+
+\def\defvarheader #1#2{\doind {vr}{\code{#1}}% Make entry in var index
+\begingroup\defname {#1}{Variable}%
+\defvarargs {#2}\endgroup %
+}
+
+% @defopt == @defvr {User Option}
+
+\def\defopt{\defvarparsebody\Edefopt\defoptx\defoptheader}
+
+\def\defoptheader #1#2{\doind {vr}{\code{#1}}% Make entry in var index
+\begingroup\defname {#1}{User Option}%
+\defvarargs {#2}\endgroup %
+}
+
+% @deftypevar int foobar
+
+\def\deftypevar{\defvarparsebody\Edeftypevar\deftypevarx\deftypevarheader}
+
+% #1 is the data type. #2 is the name, perhaps followed by text that
+% is actually part of the data type, which should not be put into the index.
+\def\deftypevarheader #1#2{%
+\dovarind#2 \relax% Make entry in variables index
+\begingroup\defname {\defheaderxcond#1\relax$$$#2}{Variable}%
+\interlinepenalty=10000
+\endgraf\penalty 10000\vskip -\parskip\penalty 10000
+\endgroup}
+\def\dovarind#1 #2\relax{\doind{vr}{\code{#1}}}
+
+% @deftypevr {Global Flag} int enable
+
+\def\deftypevr{\defvrparsebody\Edeftypevr\deftypevrx\deftypevrheader}
+
+\def\deftypevrheader #1#2#3{\dovarind#3 \relax%
+\begingroup\defname {\defheaderxcond#2\relax$$$#3}{#1}
+\interlinepenalty=10000
+\endgraf\penalty 10000\vskip -\parskip\penalty 10000
+\endgroup}
+
+% This definition is run if you use @defvarx
+% anywhere other than immediately after a @defvar or @defvarx.
+
+\def\defvrx #1 {\errmessage{@defvrx in invalid context}}
+\def\defvarx #1 {\errmessage{@defvarx in invalid context}}
+\def\defoptx #1 {\errmessage{@defoptx in invalid context}}
+\def\deftypevarx #1 {\errmessage{@deftypevarx in invalid context}}
+\def\deftypevrx #1 {\errmessage{@deftypevrx in invalid context}}
+
+% Now define @deftp
+% Args are printed in bold, a slight difference from @defvar.
+
+\def\deftpargs #1{\bf \defvarargs{#1}}
+
+% @deftp Class window height width ...
+
+\def\deftp{\deftpparsebody\Edeftp\deftpx\deftpheader}
+
+\def\deftpheader #1#2#3{\doind {tp}{\code{#2}}%
+\begingroup\defname {#2}{#1}\deftpargs{#3}\endgroup}
+
+% This definition is run if you use @deftpx, etc
+% anywhere other than immediately after a @deftp, etc.
+
+\def\deftpx #1 {\errmessage{@deftpx in invalid context}}
+
+
+\message{cross reference,}
+\newwrite\auxfile
+
+\newif\ifhavexrefs % True if xref values are known.
+\newif\ifwarnedxrefs % True if we warned once that they aren't known.
+
+% @inforef is relatively simple.
+\def\inforef #1{\inforefzzz #1,,,,**}
+\def\inforefzzz #1,#2,#3,#4**{\putwordSee{} \putwordInfo{} \putwordfile{} \file{\ignorespaces #3{}},
+ node \samp{\ignorespaces#1{}}}
+
+% @setref{foo} defines a cross-reference point named foo.
+
+\def\setref#1{%
+\dosetq{#1-title}{Ytitle}%
+\dosetq{#1-pg}{Ypagenumber}%
+\dosetq{#1-snt}{Ysectionnumberandtype}}
+
+\def\unnumbsetref#1{%
+\dosetq{#1-title}{Ytitle}%
+\dosetq{#1-pg}{Ypagenumber}%
+\dosetq{#1-snt}{Ynothing}}
+
+\def\appendixsetref#1{%
+\dosetq{#1-title}{Ytitle}%
+\dosetq{#1-pg}{Ypagenumber}%
+\dosetq{#1-snt}{Yappendixletterandtype}}
+
+% \xref, \pxref, and \ref generate cross-references to specified points.
+% For \xrefX, #1 is the node name, #2 the name of the Info
+% cross-reference, #3 the printed node name, #4 the name of the Info
+% file, #5 the name of the printed manual. All but the node name can be
+% omitted.
+%
+\def\pxref#1{\putwordsee{} \xrefX[#1,,,,,,,]}
+\def\xref#1{\putwordSee{} \xrefX[#1,,,,,,,]}
+\def\ref#1{\xrefX[#1,,,,,,,]}
+\def\xrefX[#1,#2,#3,#4,#5,#6]{\begingroup
+ \def\printedmanual{\ignorespaces #5}%
+ \def\printednodename{\ignorespaces #3}%
+ \setbox1=\hbox{\printedmanual}%
+ \setbox0=\hbox{\printednodename}%
+ \ifdim \wd0 = 0pt
+ % No printed node name was explicitly given.
+ \expandafter\ifx\csname SETxref-automatic-section-title\endcsname\relax
+ % Use the node name inside the square brackets.
+ \def\printednodename{\ignorespaces #1}%
+ \else
+ % Use the actual chapter/section title that appears inside
+ % the square brackets. Use the real section title if we have it.
+ \ifdim \wd1>0pt%
+ % It is in another manual, so we don't have it.
+ \def\printednodename{\ignorespaces #1}%
+ \else
+ \ifhavexrefs
+ % We know the real title if we have the xref values.
+ \def\printednodename{\refx{#1-title}{}}%
+ \else
+ % Otherwise just copy the Info node name.
+ \def\printednodename{\ignorespaces #1}%
+ \fi%
+ \fi
+ \fi
+ \fi
+ %
+ % If we use \unhbox0 and \unhbox1 to print the node names, TeX does not
+ % insert empty discretionaries after hyphens, which means that it will
+ % not find a line break at a hyphen in a node name.  Since some manuals
+ % are best written with fairly long node names, containing hyphens, this
+ % is a loss. Therefore, we give the text of the node name again, so it
+ % is as if TeX is seeing it for the first time.
+ \ifdim \wd1 > 0pt
+ \putwordsection{} ``\printednodename'' in \cite{\printedmanual}%
+ \else
+ % _ (for example) has to be the character _ for the purposes of the
+ % control sequence corresponding to the node, but it has to expand
+ % into the usual \leavevmode...\vrule stuff for purposes of
+ % printing. So we \turnoffactive for the \refx-snt, back on for the
+ % printing, back off for the \refx-pg.
+ {\normalturnoffactive \refx{#1-snt}{}}%
+ \space [\printednodename],\space
+ \turnoffactive \putwordpage\tie\refx{#1-pg}{}%
+ \fi
+\endgroup}
+
+% \dosetq is the interface for calls from other macros
+
+% Use \normalturnoffactive so that punctuation chars such as underscore
+% and backslash work in node names. (\turnoffactive doesn't do \.)
+\def\dosetq#1#2{%
+ {\let\folio=0
+ \normalturnoffactive
+ \edef\next{\write\auxfile{\internalsetq{#1}{#2}}}%
+ \next
+ }%
+}
+
+% \internalsetq {foo}{page} expands into
+% CHARACTERS 'xrdef {foo}{...expansion of \Ypage...}
+% When the aux file is read, ' is the escape character
+
+\def\internalsetq #1#2{'xrdef {#1}{\csname #2\endcsname}}
+
+% Things to be expanded by \internalsetq
+
+\def\Ypagenumber{\folio}
+
+\def\Ytitle{\thissection}
+
+\def\Ynothing{}
+
+\def\Ysectionnumberandtype{%
+\ifnum\secno=0 \putwordChapter\xreftie\the\chapno %
+\else \ifnum \subsecno=0 \putwordSection\xreftie\the\chapno.\the\secno %
+\else \ifnum \subsubsecno=0 %
+\putwordSection\xreftie\the\chapno.\the\secno.\the\subsecno %
+\else %
+\putwordSection\xreftie\the\chapno.\the\secno.\the\subsecno.\the\subsubsecno %
+\fi \fi \fi }
+
+\def\Yappendixletterandtype{%
+\ifnum\secno=0 \putwordAppendix\xreftie'char\the\appendixno{}%
+\else \ifnum \subsecno=0 \putwordSection\xreftie'char\the\appendixno.\the\secno %
+\else \ifnum \subsubsecno=0 %
+\putwordSection\xreftie'char\the\appendixno.\the\secno.\the\subsecno %
+\else %
+\putwordSection\xreftie'char\the\appendixno.\the\secno.\the\subsecno.\the\subsubsecno %
+\fi \fi \fi }
+
+\gdef\xreftie{'tie}
+
+% Use TeX 3.0's \inputlineno to get the line number, for better error
+% messages, but if we're using an old version of TeX, don't do anything.
+%
+\ifx\inputlineno\thisisundefined
+ \let\linenumber = \empty % Non-3.0.
+\else
+ \def\linenumber{\the\inputlineno:\space}
+\fi
+
+% Define \refx{NAME}{SUFFIX} to reference a cross-reference string named NAME.
+% If its value is nonempty, SUFFIX is output afterward.
+
+\def\refx#1#2{%
+ \expandafter\ifx\csname X#1\endcsname\relax
+ % If not defined, say something at least.
+ \angleleft un\-de\-fined\angleright
+ \ifhavexrefs
+ \message{\linenumber Undefined cross reference `#1'.}%
+ \else
+ \ifwarnedxrefs\else
+ \global\warnedxrefstrue
+ \message{Cross reference values unknown; you must run TeX again.}%
+ \fi
+ \fi
+ \else
+ % It's defined, so just use it.
+ \csname X#1\endcsname
+ \fi
+ #2% Output the suffix in any case.
+}
+
+% This is the macro invoked by entries in the aux file.
+%
+\def\xrdef#1{\begingroup
+ % Reenable \ as an escape while reading the second argument.
+ \catcode`\\ = 0
+ \afterassignment\endgroup
+ \expandafter\gdef\csname X#1\endcsname
+}
+
+% Read the last existing aux file, if any. No error if none exists.
+\def\readauxfile{\begingroup
+ \catcode`\^^@=\other
+ \catcode`\^^A=\other
+ \catcode`\^^B=\other
+ \catcode`\^^C=\other
+ \catcode`\^^D=\other
+ \catcode`\^^E=\other
+ \catcode`\^^F=\other
+ \catcode`\^^G=\other
+ \catcode`\^^H=\other
+ \catcode`\^^K=\other
+ \catcode`\^^L=\other
+ \catcode`\^^N=\other
+ \catcode`\^^P=\other
+ \catcode`\^^Q=\other
+ \catcode`\^^R=\other
+ \catcode`\^^S=\other
+ \catcode`\^^T=\other
+ \catcode`\^^U=\other
+ \catcode`\^^V=\other
+ \catcode`\^^W=\other
+ \catcode`\^^X=\other
+ \catcode`\^^Z=\other
+ \catcode`\^^[=\other
+ \catcode`\^^\=\other
+ \catcode`\^^]=\other
+ \catcode`\^^^=\other
+ \catcode`\^^_=\other
+ \catcode`\@=\other
+ \catcode`\^=\other
+ % It was suggested to define this as 7, which would allow ^^e4 etc.
+ % in xref tags, i.e., node names. But since ^^e4 notation isn't
+ % supported in the main text, it doesn't seem desirable. Furthermore,
+ % that is not enough: for node names that actually contain a ^
+ % character, we would end up writing a line like this: 'xrdef {'hat
+ % b-title}{'hat b} and \xrdef does a \csname...\endcsname on the first
+ % argument, and \hat is not an expandable control sequence. It could
+ % all be worked out, but why? Either we support ^^ or we don't.
+ %
+ % The other change necessary for this was to define \auxhat:
+ % \def\auxhat{\def^{'hat }}% extra space so ok if followed by letter
+ % and then to call \auxhat in \setq.
+ %
+ \catcode`\~=\other
+ \catcode`\[=\other
+ \catcode`\]=\other
+ \catcode`\"=\other
+ \catcode`\_=\other
+ \catcode`\|=\other
+ \catcode`\<=\other
+ \catcode`\>=\other
+ \catcode`\$=\other
+ \catcode`\#=\other
+ \catcode`\&=\other
+ \catcode`+=\other % avoid \+ for paranoia even though we've turned it off
+ % Make the characters 128-255 be printing characters
+ {%
+ \count 1=128
+ \def\loop{%
+ \catcode\count 1=\other
+ \advance\count 1 by 1
+ \ifnum \count 1<256 \loop \fi
+ }%
+ }%
+ % The aux file uses ' as the escape (for now).
+ % Turn off \ as an escape so we do not lose on
+ % entries which were dumped with control sequences in their names.
+ % For example, 'xrdef {$\leq $-fun}{page ...} made by @defun ^^
+ % Reference to such entries still does not work the way one would wish,
+ % but at least they do not bomb out when the aux file is read in.
+ \catcode`\{=1
+ \catcode`\}=2
+ \catcode`\%=\other
+ \catcode`\'=0
+ \catcode`\\=\other
+ %
+ \openin 1 \jobname.aux
+ \ifeof 1 \else
+ \closein 1
+ \input \jobname.aux
+ \global\havexrefstrue
+ \global\warnedobstrue
+ \fi
+ % Open the new aux file. TeX will close it automatically at exit.
+ \openout\auxfile=\jobname.aux
+\endgroup}
+
+
+% Footnotes.
+
+\newcount \footnoteno
+
+% The trailing space in the following definition for supereject is
+% vital for proper filling; pages come out unaligned when you do a
+% pagealignmacro call if that space before the closing brace is
+% removed. (Generally, numeric constants should always be followed by a
+% space to prevent strange expansion errors.)
+\def\supereject{\par\penalty -20000\footnoteno =0 }
+
+% @footnotestyle is meaningful for info output only.
+\let\footnotestyle=\comment
+
+\let\ptexfootnote=\footnote
+
+{\catcode `\@=11
+%
+% Auto-number footnotes. Otherwise like plain.
+\gdef\footnote{%
+ \global\advance\footnoteno by \@ne
+ \edef\thisfootno{$^{\the\footnoteno}$}%
+ %
+ % In case the footnote comes at the end of a sentence, preserve the
+ % extra spacing after we do the footnote number.
+ \let\@sf\empty
+ \ifhmode\edef\@sf{\spacefactor\the\spacefactor}\/\fi
+ %
+ % Remove inadvertent blank space before typesetting the footnote number.
+ \unskip
+ \thisfootno\@sf
+ \footnotezzz
+}%
+
+% Don't bother with the trickery in plain.tex to not require the
+% footnote text as a parameter. Our footnotes don't need to be so general.
+%
+% Oh yes, they do; otherwise, @ifset and anything else that uses
+% \parseargline fail inside footnotes because the tokens are fixed when
+% the footnote is read. --karl, 16nov96.
+%
+\long\gdef\footnotezzz{\insert\footins\bgroup
+ % We want to typeset this text as a normal paragraph, even if the
+ % footnote reference occurs in (for example) a display environment.
+ % So reset some parameters.
+ \interlinepenalty\interfootnotelinepenalty
+ \splittopskip\ht\strutbox % top baseline for broken footnotes
+ \splitmaxdepth\dp\strutbox
+ \floatingpenalty\@MM
+ \leftskip\z@skip
+ \rightskip\z@skip
+ \spaceskip\z@skip
+ \xspaceskip\z@skip
+ \parindent\defaultparindent
+ %
+ % Hang the footnote text off the number.
+ \hang
+ \textindent{\thisfootno}%
+ %
+ % Don't crash into the line above the footnote text. Since this
+ % expands into a box, it must come within the paragraph, lest it
+ % provide a place where TeX can split the footnote.
+ \footstrut
+ \futurelet\next\fo@t
+}
+\def\fo@t{\ifcat\bgroup\noexpand\next \let\next\f@@t
+ \else\let\next\f@t\fi \next}
+\def\f@@t{\bgroup\aftergroup\@foot\let\next}
+\def\f@t#1{#1\@foot}
+\def\@foot{\strut\egroup}
+
+}%end \catcode `\@=11
+
+% Set the baselineskip to #1, and the lineskip and strut size
+% correspondingly. There is no deep meaning behind these magic numbers
+% used as factors; they just match (closely enough) what Knuth defined.
+%
+\def\lineskipfactor{.08333}
+\def\strutheightpercent{.70833}
+\def\strutdepthpercent {.29167}
+%
+\def\setleading#1{%
+ \normalbaselineskip = #1\relax
+ \normallineskip = \lineskipfactor\normalbaselineskip
+ \normalbaselines
+ \setbox\strutbox =\hbox{%
+ \vrule width0pt height\strutheightpercent\baselineskip
+ depth \strutdepthpercent \baselineskip
+ }%
+}
+
+% @| inserts a changebar to the left of the current line. It should
+% surround any changed text. This approach does *not* work if the
+% change spans more than two lines of output. To handle that, we would
+% have to adopt a much more difficult approach (putting marks into the main
+% vertical list for the beginning and end of each change).
+%
+\def\|{%
+ % \vadjust can only be used in horizontal mode.
+ \leavevmode
+ %
+ % Append this vertical mode material after the current line in the output.
+ \vadjust{%
+ % We want to insert a rule with the height and depth of the current
+ % leading; that is exactly what \strutbox is supposed to record.
+ \vskip-\baselineskip
+ %
+ % \vadjust-items are inserted at the left edge of the type. So
+ % the \llap here moves out into the left-hand margin.
+ \llap{%
+ %
+ % For a thicker or thinner bar, change the `1pt'.
+ \vrule height\baselineskip width1pt
+ %
+ % This is the space between the bar and the text.
+ \hskip 12pt
+ }%
+ }%
+}
+
+% For a final copy, take out the rectangles
+% that mark overfull boxes (in case you have decided
+% that the text looks ok even though it passes the margin).
+%
+\def\finalout{\overfullrule=0pt}
+
+% @image. We use the macros from epsf.tex to support this.
+% If epsf.tex is not installed and @image is used, we complain.
+%
+% Check for and read epsf.tex up front. If we read it only at @image
+% time, we might be inside a group, and then its definitions would get
+% undone and the next image would fail.
+\openin 1 = epsf.tex
+\ifeof 1 \else
+ \closein 1
+ \def\epsfannounce{\toks0 = }% do not bother showing banner
+ \input epsf.tex
+\fi
+%
+\newif\ifwarnednoepsf
+\newhelp\noepsfhelp{epsf.tex must be installed for images to
+ work. It is also included in the Texinfo distribution, or you can get
+ it from ftp://ftp.tug.org/tex/epsf.tex.}
+%
+% Only complain once about lack of epsf.tex.
+\def\image#1{%
+ \ifx\epsfbox\undefined
+ \ifwarnednoepsf \else
+ \errhelp = \noepsfhelp
+ \errmessage{epsf.tex not found, images will be ignored}%
+ \global\warnednoepsftrue
+ \fi
+ \else
+ \imagexxx #1,,,\finish
+ \fi
+}
+%
+% Arguments to @image:
+% #1 is (mandatory) image filename; we tack on .eps extension.
+% #2 is (optional) width, #3 is (optional) height.
+% #4 is just the usual extra ignored arg for parsing this stuff.
+\def\imagexxx#1,#2,#3,#4\finish{%
+ % \epsfbox itself resets \epsf?size at each figure.
+ \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \epsfxsize=#2\relax \fi
+ \setbox0 = \hbox{\ignorespaces #3}\ifdim\wd0 > 0pt \epsfysize=#3\relax \fi
+ \epsfbox{#1.eps}%
+}
+
+% End of control word definitions.
+
+
+\message{and turning on texinfo input format.}
+
+\def\openindices{%
+ \newindex{cp}%
+ \newcodeindex{fn}%
+ \newcodeindex{vr}%
+ \newcodeindex{tp}%
+ \newcodeindex{ky}%
+ \newcodeindex{pg}%
+}
+
+% Set some numeric style parameters, for 8.5 x 11 format.
+
+\hsize = 6in
+\hoffset = .25in
+\newdimen\defaultparindent \defaultparindent = 15pt
+\parindent = \defaultparindent
+\parskip 3pt plus 2pt minus 1pt
+\setleading{13.2pt}
+\advance\topskip by 1.2cm
+
+\chapheadingskip = 15pt plus 4pt minus 2pt
+\secheadingskip = 12pt plus 3pt minus 2pt
+\subsecheadingskip = 9pt plus 2pt minus 2pt
+
+% Prevent underfull vbox error messages.
+\vbadness=10000
+
+% Following George Bush, just get rid of widows and orphans.
+\widowpenalty=10000
+\clubpenalty=10000
+
+% Use TeX 3.0's \emergencystretch to help line breaking, but if we're
+% using an old version of TeX, don't do anything. We want the amount of
+% stretch added to depend on the line length, hence the dependence on
+% \hsize. This makes it come to about 9pt for the 8.5x11 format.
+%
+\ifx\emergencystretch\thisisundefined
+ % Allow us to assign to \emergencystretch anyway.
+ \def\emergencystretch{\dimen0}%
+\else
+ \emergencystretch = \hsize
+ \divide\emergencystretch by 45
+\fi
+
+% Use @smallbook to reset parameters for 7x9.5 format (or else 7x9.25)
+\def\smallbook{
+ \global\chapheadingskip = 15pt plus 4pt minus 2pt
+ \global\secheadingskip = 12pt plus 3pt minus 2pt
+ \global\subsecheadingskip = 9pt plus 2pt minus 2pt
+ %
+ \global\lispnarrowing = 0.3in
+ \setleading{12pt}
+ \advance\topskip by -1cm
+ \global\parskip 2pt plus 1pt
+ \global\hsize = 5in
+ \global\vsize=7.5in
+ \global\tolerance=700
+ \global\hfuzz=1pt
+ \global\contentsrightmargin=0pt
+ \global\deftypemargin=0pt
+ \global\defbodyindent=.5cm
+ %
+ \global\pagewidth=\hsize
+ \global\pageheight=\vsize
+ %
+ \global\let\smalllisp=\smalllispx
+ \global\let\smallexample=\smalllispx
+ \global\def\Esmallexample{\Esmalllisp}
+}
+
+% Use @afourpaper to print on European A4 paper.
+\def\afourpaper{
+\global\tolerance=700
+\global\hfuzz=1pt
+\setleading{12pt}
+\global\parskip 15pt plus 1pt
+
+\global\vsize= 53\baselineskip
+\advance\vsize by \topskip
+%\global\hsize= 5.85in % A4 wide 10pt
+\global\hsize= 6.5in
+\global\outerhsize=\hsize
+\global\advance\outerhsize by 0.5in
+\global\outervsize=\vsize
+\global\advance\outervsize by 0.6in
+
+\global\pagewidth=\hsize
+\global\pageheight=\vsize
+}
+
+\bindingoffset=0pt
+\normaloffset=\hoffset
+\pagewidth=\hsize
+\pageheight=\vsize
+
+% Allow control of the text dimensions. Parameters in order: textheight;
+% textwidth; voffset; hoffset; binding offset; topskip.
+% All require a dimension;
+% header is additional; added length extends the bottom of the page.
+
+\def\changepagesizes#1#2#3#4#5#6{
+ \global\vsize= #1
+ \global\topskip= #6
+ \advance\vsize by \topskip
+ \global\voffset= #3
+ \global\hsize= #2
+ \global\outerhsize=\hsize
+ \global\advance\outerhsize by 0.5in
+ \global\outervsize=\vsize
+ \global\advance\outervsize by 0.6in
+ \global\pagewidth=\hsize
+ \global\pageheight=\vsize
+ \global\normaloffset= #4
+ \global\bindingoffset= #5}
+
+% A specific text layout, 24x15cm overall, intended for A4 paper. Top margin
+% 29mm, hence bottom margin 28mm, nominal side margin 3cm.
+\def\afourlatex
+ {\global\tolerance=700
+ \global\hfuzz=1pt
+ \setleading{12pt}
+ \global\parskip 15pt plus 1pt
+ \advance\baselineskip by 1.6pt
+ \changepagesizes{237mm}{150mm}{3.6mm}{3.6mm}{3mm}{7mm}
+ }
+
+% Use @afourwide to print on European A4 paper in wide format.
+\def\afourwide{\afourpaper
+\changepagesizes{9.5in}{6.5in}{\hoffset}{\normaloffset}{\bindingoffset}{7mm}}
+
+% Define macros to output various characters with catcode for normal text.
+\catcode`\"=\other
+\catcode`\~=\other
+\catcode`\^=\other
+\catcode`\_=\other
+\catcode`\|=\other
+\catcode`\<=\other
+\catcode`\>=\other
+\catcode`\+=\other
+\def\normaldoublequote{"}
+\def\normaltilde{~}
+\def\normalcaret{^}
+\def\normalunderscore{_}
+\def\normalverticalbar{|}
+\def\normalless{<}
+\def\normalgreater{>}
+\def\normalplus{+}
+
+% This macro is used to make a character print one way in ttfont
+% where it can probably just be output, and another way in other fonts,
+% where something hairier probably needs to be done.
+%
+% #1 is what to print if we are indeed using \tt; #2 is what to print
+% otherwise. Since all the Computer Modern typewriter fonts have zero
+% interword stretch (and shrink), and it is reasonable to expect all
+% typewriter fonts to have this, we can check that font parameter.
+%
+\def\ifusingtt#1#2{\ifdim \fontdimen3\the\font=0pt #1\else #2\fi}
+
+% Turn off all special characters except @
+% (and those which the user can use as if they were ordinary).
+% Most of these we simply print from the \tt font, but for some, we can
+% use math or other variants that look better in normal text.
+
+\catcode`\"=\active
+\def\activedoublequote{{\tt\char34}}
+\let"=\activedoublequote
+\catcode`\~=\active
+\def~{{\tt\char126}}
+\chardef\hat=`\^
+\catcode`\^=\active
+\def^{{\tt \hat}}
+
+\catcode`\_=\active
+\def_{\ifusingtt\normalunderscore\_}
+% Subroutine for the previous macro.
+\def\_{\leavevmode \kern.06em \vbox{\hrule width.3em height.1ex}}
+
+\catcode`\|=\active
+\def|{{\tt\char124}}
+\chardef \less=`\<
+\catcode`\<=\active
+\def<{{\tt \less}}
+\chardef \gtr=`\>
+\catcode`\>=\active
+\def>{{\tt \gtr}}
+\catcode`\+=\active
+\def+{{\tt \char 43}}
+%\catcode 27=\active
+%\def^^[{$\diamondsuit$}
+
+% Set up an active definition for =, but don't enable it most of the time.
+{\catcode`\==\active
+\global\def={{\tt \char 61}}}
+
+\catcode`+=\active
+\catcode`\_=\active
+
+% If a .fmt file is being used, characters that might appear in a file
+% name cannot be active until we have parsed the command line.
+% So turn them off again, and have \everyjob (or @setfilename) turn them on.
+% \otherifyactive is called near the end of this file.
+\def\otherifyactive{\catcode`+=\other \catcode`\_=\other}
+
+\catcode`\@=0
+
+% \rawbackslashxx outputs one backslash character in the current font
+\global\chardef\rawbackslashxx=`\\
+%{\catcode`\\=\other
+%@gdef@rawbackslashxx{\}}
+
+% \rawbackslash redefines \ as input to do \rawbackslashxx.
+{\catcode`\\=\active
+@gdef@rawbackslash{@let\=@rawbackslashxx }}
+
+% \normalbackslash outputs one backslash in fixed width font.
+\def\normalbackslash{{\tt\rawbackslashxx}}
+
+% Say @foo, not \foo, in error messages.
+\escapechar=`\@
+
+% \catcode 17=0 % Define control-q
+\catcode`\\=\active
+
+% Used sometimes to turn off (effectively) the active characters
+% even after parsing them.
+@def@turnoffactive{@let"=@normaldoublequote
+@let\=@realbackslash
+@let~=@normaltilde
+@let^=@normalcaret
+@let_=@normalunderscore
+@let|=@normalverticalbar
+@let<=@normalless
+@let>=@normalgreater
+@let+=@normalplus}
+
+@def@normalturnoffactive{@let"=@normaldoublequote
+@let\=@normalbackslash
+@let~=@normaltilde
+@let^=@normalcaret
+@let_=@normalunderscore
+@let|=@normalverticalbar
+@let<=@normalless
+@let>=@normalgreater
+@let+=@normalplus}
+
+% Make _ and + \other characters, temporarily.
+% This is canceled by @fixbackslash.
+@otherifyactive
+
+% If a .fmt file is being used, we don't want the `\input texinfo' to show up.
+% That is what \eatinput is for; after that, the `\' should revert to printing
+% a backslash.
+%
+@gdef@eatinput input texinfo{@fixbackslash}
+@global@let\ = @eatinput
+
+% On the other hand, perhaps the file did not have a `\input texinfo'. Then
+% the first `\{' in the file would cause an error.  This macro tries to fix
+% that, assuming it is called before the first `\' could plausibly occur.
+% Also turn back on active characters that might appear in the input
+% file name, in case not using a pre-dumped format.
+%
+@gdef@fixbackslash{@ifx\@eatinput @let\ = @normalbackslash @fi
+ @catcode`+=@active @catcode`@_=@active}
+
+% These look ok in all fonts, so just make them not special. The @rm below
+% makes sure that the current font starts out as the newly loaded cmr10
+@catcode`@$=@other @catcode`@%=@other @catcode`@&=@other @catcode`@#=@other
+
+@textfonts
+@rm
+
+@c Local variables:
+@c page-delimiter: "^\\\\message"
+@c End:
--- /dev/null
+This is Info file wget.info, produced by Makeinfo version 1.67 from the
+input file ./wget.texi.
+
+INFO-DIR-SECTION Net Utilities
+INFO-DIR-SECTION World Wide Web
+START-INFO-DIR-ENTRY
+* Wget: (wget). The non-interactive network downloader.
+END-INFO-DIR-ENTRY
+
+   This file documents the GNU Wget utility for downloading network
+data.
+
+ Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.
+
+ Permission is granted to make and distribute verbatim copies of this
+manual provided the copyright notice and this permission notice are
+preserved on all copies.
+
+ Permission is granted to copy and distribute modified versions of
+this manual under the conditions for verbatim copying, provided also
+that the sections entitled "Copying" and "GNU General Public License"
+are included exactly as in the original, and provided that the entire
+resulting derived work is distributed under the terms of a permission
+notice identical to this one.
+
+\1f
+Indirect:
+wget.info-1: 955
+wget.info-2: 50818
+wget.info-3: 88475
+\1f
+Tag Table:
+(Indirect)
+Node: Top\7f955
+Node: Overview\7f1832
+Node: Invoking\7f5006
+Node: URL Format\7f5815
+Node: Option Syntax\7f8147
+Node: Basic Startup Options\7f9571
+Node: Logging and Input File Options\7f10271
+Node: Download Options\7f12665
+Node: Directory Options\7f18450
+Node: HTTP Options\7f20928
+Node: FTP Options\7f24524
+Node: Recursive Retrieval Options\7f25717
+Node: Recursive Accept/Reject Options\7f27493
+Node: Recursive Retrieval\7f29575
+Node: Following Links\7f31871
+Node: Relative Links\7f32903
+Node: Host Checking\7f33417
+Node: Domain Acceptance\7f35450
+Node: All Hosts\7f37120
+Node: Types of Files\7f37547
+Node: Directory-Based Limits\7f39997
+Node: FTP Links\7f42637
+Node: Time-Stamping\7f43507
+Node: Time-Stamping Usage\7f45144
+Node: HTTP Time-Stamping Internals\7f46713
+Node: FTP Time-Stamping Internals\7f47922
+Node: Startup File\7f49130
+Node: Wgetrc Location\7f50003
+Node: Wgetrc Syntax\7f50818
+Node: Wgetrc Commands\7f51533
+Node: Sample Wgetrc\7f58229
+Node: Examples\7f62521
+Node: Simple Usage\7f63128
+Node: Advanced Usage\7f65522
+Node: Guru Usage\7f68273
+Node: Various\7f69935
+Node: Proxies\7f70459
+Node: Distribution\7f73224
+Node: Mailing List\7f73566
+Node: Reporting Bugs\7f74265
+Node: Portability\7f76050
+Node: Signals\7f77425
+Node: Appendices\7f78079
+Node: Robots\7f78494
+Node: Introduction to RES\7f79641
+Node: RES Format\7f81534
+Node: User-Agent Field\7f82638
+Node: Disallow Field\7f83402
+Node: Norobots Examples\7f84013
+Node: Security Considerations\7f84967
+Node: Contributors\7f85963
+Node: Copying\7f88475
+Node: Concept Index\7f107638
+\1f
+End Tag Table
--- /dev/null
+This is Info file wget.info, produced by Makeinfo version 1.67 from the
+input file ./wget.texi.
+
+INFO-DIR-SECTION Net Utilities
+INFO-DIR-SECTION World Wide Web
+START-INFO-DIR-ENTRY
+* Wget: (wget). The non-interactive network downloader.
+END-INFO-DIR-ENTRY
+
+   This file documents the GNU Wget utility for downloading network
+data.
+
+ Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.
+
+ Permission is granted to make and distribute verbatim copies of this
+manual provided the copyright notice and this permission notice are
+preserved on all copies.
+
+ Permission is granted to copy and distribute modified versions of
+this manual under the conditions for verbatim copying, provided also
+that the sections entitled "Copying" and "GNU General Public License"
+are included exactly as in the original, and provided that the entire
+resulting derived work is distributed under the terms of a permission
+notice identical to this one.
+
+\1f
+File: wget.info, Node: Top, Next: Overview, Prev: (dir), Up: (dir)
+
+Wget 1.5.3
+**********
+
+ This manual documents version 1.5.3 of GNU Wget, the freely
+available utility for network downloads.
+
+ Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.
+
+* Menu:
+
+* Overview:: Features of Wget.
+* Invoking:: Wget command-line arguments.
+* Recursive Retrieval:: Description of recursive retrieval.
+* Following Links:: The available methods of chasing links.
+* Time-Stamping:: Mirroring according to time-stamps.
+* Startup File:: Wget's initialization file.
+* Examples:: Examples of usage.
+* Various:: The stuff that doesn't fit anywhere else.
+* Appendices:: Some useful references.
+* Copying:: You may give out copies of Wget.
+* Concept Index:: Topics covered by this manual.
+
+\1f
+File: wget.info, Node: Overview, Next: Invoking, Prev: Top, Up: Top
+
+Overview
+********
+
+ GNU Wget is a freely available network utility to retrieve files from
+the World Wide Web, using HTTP (Hypertext Transfer Protocol) and FTP
+(File Transfer Protocol), the two most widely used Internet protocols.
+It has many useful features to make downloading easier, some of them
+being:
+
+ * Wget is non-interactive, meaning that it can work in the
+ background, while the user is not logged on. This allows you to
+ start a retrieval and disconnect from the system, letting Wget
+     finish the work.  By contrast, most Web browsers require the
+     user's constant presence, which can be a great hindrance when
+ transferring a lot of data.
+
+ * Wget is capable of descending recursively through the structure of
+ HTML documents and FTP directory trees, making a local copy of the
+ directory hierarchy similar to the one on the remote server. This
+ feature can be used to mirror archives and home pages, or traverse
+ the web in search of data, like a WWW robot (*Note Robots::). In
+ that spirit, Wget understands the `norobots' convention.
+
+ * File name wildcard matching and recursive mirroring of directories
+ are available when retrieving via FTP. Wget can read the
+ time-stamp information given by both HTTP and FTP servers, and
+ store it locally. Thus Wget can see if the remote file has
+     changed since the last retrieval, and automatically retrieve the new
+ version if it has. This makes Wget suitable for mirroring of FTP
+ sites, as well as home pages.
+
+ * Wget works exceedingly well on slow or unstable connections,
+ retrying the document until it is fully retrieved, or until a
+ user-specified retry count is surpassed. It will try to resume the
+ download from the point of interruption, using `REST' with FTP and
+ `Range' with HTTP servers that support them.
+
+ * By default, Wget supports proxy servers, which can lighten the
+ network load, speed up retrieval and provide access behind
+ firewalls. However, if you are behind a firewall that requires
+ that you use a socks style gateway, you can get the socks library
+     and build Wget with support for socks.  Wget also supports
+     passive FTP downloading as an option.
+
+   * Built-in features offer mechanisms to tune which links you wish to
+ follow (*Note Following Links::).
+
+   * The retrieval is conveniently traced by printing dots, each dot
+     representing a fixed amount of data received (1KB by default).
+     This display can be customized to your preferences.
+
+ * Most of the features are fully configurable, either through
+ command line options, or via the initialization file `.wgetrc'
+ (*Note Startup File::). Wget allows you to define "global"
+ startup files (`/usr/local/etc/wgetrc' by default) for site
+ settings.
+
+ * Finally, GNU Wget is free software. This means that everyone may
+ use it, redistribute it and/or modify it under the terms of the
+ GNU General Public License, as published by the Free Software
+ Foundation (*Note Copying::).
+
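   The resumption behaviour described in the list above can be
sketched in a few lines.  The following Python fragment is a
hypothetical illustration only, not Wget's actual implementation
(Wget is written in C); it shows the idea of requesting only the
missing tail of a file, which is what `Range' (HTTP) and `REST'
(FTP) express on the wire.

```python
# Hypothetical sketch of resumed retrieval: an assumption for
# illustration, not Wget's real implementation (Wget is C code).

def resume_request_header(local_size: int) -> str:
    """The HTTP header a client sends to resume at byte local_size."""
    return "Range: bytes=%d-" % local_size

def resume(remote: bytes, partial: bytes) -> bytes:
    """Simulate a resumed download: fetch only the missing tail."""
    start = len(partial)              # bytes already on disk
    return partial + remote[start:]   # server sends remote[start:]

print(resume_request_header(1000))
print(resume(b"hello world", b"hello"))
```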
+\1f
+File: wget.info, Node: Invoking, Next: Recursive Retrieval, Prev: Overview, Up: Top
+
+Invoking
+********
+
+ By default, Wget is very simple to invoke. The basic syntax is:
+
+ wget [OPTION]... [URL]...
+
+ Wget will simply download all the URLs specified on the command
+line. URL is a "Uniform Resource Locator", as defined below.
+
+ However, you may wish to change some of the default parameters of
+Wget.  You can do it in two ways: permanently, by adding the appropriate
+command to `.wgetrc' (*Note Startup File::), or by specifying it on the
+command line.
+
+* Menu:
+
+* URL Format::
+* Option Syntax::
+* Basic Startup Options::
+* Logging and Input File Options::
+* Download Options::
+* Directory Options::
+* HTTP Options::
+* FTP Options::
+* Recursive Retrieval Options::
+* Recursive Accept/Reject Options::
+
+\1f
+File: wget.info, Node: URL Format, Next: Option Syntax, Prev: Invoking, Up: Invoking
+
+URL Format
+==========
+
+ "URL" is an acronym for Uniform Resource Locator. A uniform
+resource locator is a compact string representation for a resource
+available via the Internet. Wget recognizes the URL syntax as per
+RFC1738. This is the most widely used form (square brackets denote
+optional parts):
+
+ http://host[:port]/directory/file
+ ftp://host[:port]/directory/file
+
+ You can also encode your username and password within a URL:
+
+ ftp://user:password@host/path
+ http://user:password@host/path
+
+ Either USER or PASSWORD, or both, may be left out. If you leave out
+either the HTTP username or password, no authentication will be sent.
+If you leave out the FTP username, `anonymous' will be used. If you
+leave out the FTP password, your email address will be supplied as a
+default password.(1)
+
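   These rules can be seen at work in any RFC 1738 URL parser; the
following Python sketch (an independent illustration using the
standard library, not anything Wget uses internally) splits the user
and password fields out of such URLs:

```python
from urllib.parse import urlsplit

# Independent illustration of the user:password@host syntax using
# Python's standard URL parser (not something Wget itself uses).
url = urlsplit("ftp://user:password@host/path")
print(url.username)   # -> user
print(url.password)   # -> password
print(url.hostname)   # -> host
```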
+ You can encode unsafe characters in a URL as `%xy', `xy' being the
+hexadecimal representation of the character's ASCII value. Some common
+unsafe characters include `%' (quoted as `%25'), `:' (quoted as `%3A'),
+and `@' (quoted as `%40'). Refer to RFC1738 for a comprehensive list
+of unsafe characters.
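
   As an illustration of the `%xy' scheme, a generic percent-encoding
routine such as Python's standard `quote' function (shown here for
demonstration only; Wget does not use it) produces exactly these
escapes:

```python
from urllib.parse import quote

# Percent-encode the unsafe characters mentioned above; safe=""
# means no character is exempt from quoting.  Illustration only.
for ch in "%:@":
    print(ch, "->", quote(ch, safe=""))
```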
+
+ Wget also supports the `type' feature for FTP URLs. By default, FTP
+documents are retrieved in the binary mode (type `i'), which means that
+they are downloaded unchanged. Another useful mode is the `a'
+("ASCII") mode, which converts the line delimiters between the
+different operating systems, and is thus useful for text files. Here
+is an example:
+
+ ftp://host/directory/file;type=a
+
+ Two alternative variants of URL specification are also supported,
+because of historical (hysterical?) reasons and their widespread use.
+
+ FTP-only syntax (supported by `NcFTP'):
+ host:/dir/file
+
+ HTTP-only syntax (introduced by `Netscape'):
+ host[:port]/dir/file
+
+ These two alternative forms are deprecated, and may cease being
+supported in the future.
+
+ If you do not understand the difference between these notations, or
+do not know which one to use, just use the plain ordinary format you use
+with your favorite browser, like `Lynx' or `Netscape'.
+
+ ---------- Footnotes ----------
+
+ (1) If you have a `.netrc' file in your home directory, password
+will also be searched for there.
+
+\1f
+File: wget.info, Node: Option Syntax, Next: Basic Startup Options, Prev: URL Format, Up: Invoking
+
+Option Syntax
+=============
+
+   Since Wget uses GNU getopt to process its arguments, every option
+has a short form and a long form. Long options are more convenient to
+remember, but take time to type. You may freely mix different option
+styles, or specify options after the command-line arguments. Thus you
+may write:
+
+ wget -r --tries=10 http://fly.cc.fer.hr/ -o log
+
+ The space between the option accepting an argument and the argument
+may be omitted.  Instead of `-o log' you can write `-olog'.
+
+ You may put several options that do not require arguments together,
+like:
+
+ wget -drc URL
+
+   This is completely equivalent to:
+
+ wget -d -r -c URL
+
+ Since the options can be specified after the arguments, you may
+terminate them with `--'. So the following will try to download URL
+`-x', reporting failure to `log':
+
+ wget -o log -- -x
+
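   All three conventions shown above (bundled short options, options
after arguments, and the `--' terminator) are standard GNU getopt
behaviour.  The following Python sketch demonstrates them with the
standard `getopt' module; it is an illustration only, since Wget
relies on the C-library getopt:

```python
import getopt

# Illustration of GNU getopt conventions with Python's standard
# `getopt' module (a demonstration only; Wget itself uses the
# C-library getopt).

# Bundled short options: -drc parses exactly like -d -r -c.
opts, args = getopt.gnu_getopt(["-drc", "URL"], "drc")

# `--' ends option parsing, so a URL named -x survives as an argument.
opts2, args2 = getopt.gnu_getopt(["-o", "log", "--", "-x"], "o:drc")

print(opts, args)
print(opts2, args2)
```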
+ The options that accept comma-separated lists all respect the
+convention that specifying an empty list clears its value. This can be
+useful to clear the `.wgetrc' settings. For instance, if your `.wgetrc'
+sets `exclude_directories' to `/cgi-bin', the following example will
+first reset it, and then set it to exclude `/~nobody' and `/~somebody'.
+You can also clear the lists in `.wgetrc' (*Note Wgetrc Syntax::).
+
+ wget -X '' -X /~nobody,/~somebody
+
+\1f
+File: wget.info, Node: Basic Startup Options, Next: Logging and Input File Options, Prev: Option Syntax, Up: Invoking
+
+Basic Startup Options
+=====================
+
+`-V'
+`--version'
+ Display the version of Wget.
+
+`-h'
+`--help'
+ Print a help message describing all of Wget's command-line options.
+
+`-b'
+`--background'
+ Go to background immediately after startup. If no output file is
+ specified via the `-o' option, output is redirected to `wget-log'.
+
+`-e COMMAND'
+`--execute COMMAND'
+ Execute COMMAND as if it were a part of `.wgetrc' (*Note Startup
+ File::). A command thus invoked will be executed *after* the
+ commands in `.wgetrc', thus taking precedence over them.
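+ As a sketch (the host below is illustrative; `tries' is an ordinary
+ `.wgetrc' command), a startup-file setting can be supplied for a
+ single run:

```shell
# Run wget with a startup-file command given on the command line;
# the -e argument uses the same syntax as a .wgetrc line and is
# executed after .wgetrc, so it overrides it.  The URL is an example.
wget -e tries=3 http://fly.cc.fer.hr/
```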
+
+\1f
+File: wget.info, Node: Logging and Input File Options, Next: Download Options, Prev: Basic Startup Options, Up: Invoking
+
+Logging and Input File Options
+==============================
+
+`-o LOGFILE'
+`--output-file=LOGFILE'
+ Log all messages to LOGFILE. The messages are normally reported
+ to standard error.
+
+`-a LOGFILE'
+`--append-output=LOGFILE'
+ Append to LOGFILE. This is the same as `-o', only it appends to
+ LOGFILE instead of overwriting the old log file. If LOGFILE does
+ not exist, a new file is created.
+
+`-d'
+`--debug'
+ Turn on debug output, printing various information important to
+ the developers of Wget when it does not work properly. Your system
+ administrator may have chosen to compile Wget without debug
+ support, in which case `-d' will not work. Please note that
+ compiling with debug support is always safe--Wget compiled with
+ debug support will *not* print any debug info unless requested
+ with `-d'. *Note Reporting Bugs:: for more information on how to
+ use `-d' for sending bug reports.
+
+`-q'
+`--quiet'
+ Turn off Wget's output.
+
+`-v'
+`--verbose'
+ Turn on verbose output, with all the available data. The default
+ output is verbose.
+
+`-nv'
+`--non-verbose'
+ Non-verbose output--turn off verbose without being completely quiet
+ (use `-q' for that), which means that error messages and basic
+ information still get printed.
+
+`-i FILE'
+`--input-file=FILE'
+ Read URLs from FILE, in which case no URLs need to be on the
+ command line. If there are URLs both on the command line and in
+ an input file, those on the command line will be the first ones to
+ be retrieved. The FILE need not be an HTML document (but no harm
+ if it is)--it is enough if the URLs are just listed sequentially.
+
+ However, if you specify `--force-html', the document will be
+ regarded as `html'. In that case you may have problems with
+ relative links, which you can solve either by adding `<base
+ href="URL">' to the documents or by specifying `--base=URL' on the
+ command line.
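+ A sketch of the second approach (the file name and URL are
+ illustrative):

```shell
# url-list.html holds relative links such as <a href="pics/a.gif">;
# --base supplies the prefix against which they are resolved,
# and --force-html makes Wget parse the local file as HTML.
wget --input-file=url-list.html --force-html --base=http://fly.cc.fer.hr/
```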
+
+`-F'
+`--force-html'
+ When input is read from a file, force it to be treated as an HTML
+ file. This enables you to retrieve relative links from existing
+ HTML files on your local disk, by adding `<base href="URL">' to
+ HTML, or using the `--base' command-line option.
+
+\1f
+File: wget.info, Node: Download Options, Next: Directory Options, Prev: Logging and Input File Options, Up: Invoking
+
+Download Options
+================
+
+`-t NUMBER'
+`--tries=NUMBER'
+ Set number of retries to NUMBER. Specify 0 or `inf' for infinite
+ retrying.
+
+`-O FILE'
+`--output-document=FILE'
+ The documents will not be written to the appropriate files, but
+ all will be concatenated together and written to FILE. If FILE
+ already exists, it will be overwritten. If the FILE is `-', the
+ documents will be written to standard output. Including this
+ option automatically sets the number of tries to 1.
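+ A sketch of combining `-O -' with a pipeline (the URL is
+ illustrative):

```shell
# Send the document to standard output and filter it;
# -q suppresses Wget's own progress output on the terminal.
wget -q -O - http://fly.cc.fer.hr/ | grep -i '<title>'
```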
+
+`-nc'
+`--no-clobber'
+ Do not clobber existing files when saving to a directory hierarchy
+ during recursive retrieval of several files. This option is
+ *extremely* useful when you wish to continue where you left off
+ with retrieval of many files. If the files have the `.html' or
+ (yuck) `.htm' suffix, they will be loaded from the local disk, and
+ parsed as if they had been retrieved from the Web.
+
+`-c'
+`--continue'
+ Continue getting an existing file. This is useful when you want to
+ finish up the download started by another program, or a previous
+ instance of Wget. Thus you can write:
+
+ wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
+
+ If there is a file named `ls-lR.Z' in the current directory, Wget
+ will assume that it is the first portion of the remote file, and
+ will request that the server continue the retrieval from an offset
+ equal to the length of the local file.
+
+ Note that you need not specify this option if all you want is Wget
+ to continue retrieving where it left off when the connection is
+ lost--Wget does this by default. You need this option only when
+ you want to continue retrieval of a file already halfway
+ retrieved, saved by another FTP client, or left by Wget being
+ killed.
+
+ Without `-c', the previous example would just begin to download the
+ remote file to `ls-lR.Z.1'. The `-c' option is also applicable
+ for HTTP servers that support the `Range' header.
+
+`--dot-style=STYLE'
+ Set the retrieval style to STYLE. Wget traces the retrieval of
+ each document by printing dots on the screen, each dot
+ representing a fixed amount of retrieved data. Any number of dots
+ may be separated in a "cluster", to make counting easier. This
+ option allows you to choose one of the pre-defined styles,
+ determining the number of bytes represented by a dot, the number
+ of dots in a cluster, and the number of dots on the line.
+
+ With the `default' style each dot represents 1K, there are ten dots
+ in a cluster and 50 dots in a line. The `binary' style has a more
+ "computer"-like orientation--8K dots, 16-dot clusters and 48 dots
+ per line (which makes for 384K lines). The `mega' style is
+ suitable for downloading very large files--each dot represents 64K
+ retrieved, there are eight dots in a cluster, and 48 dots on each
+ line (so each line contains 3M). The `micro' style is exactly the
+ reverse; it is suitable for downloading small files, with 128-byte
+ dots, 8 dots per cluster, and 48 dots (6K) per line.
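+ For instance (the host and file name are illustrative):

```shell
# Use the `mega' style (64K per dot) while fetching a large file,
# so the progress display stays compact.
wget --dot-style=mega ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
```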
+
+`-N'
+`--timestamping'
+ Turn on time-stamping. *Note Time-Stamping:: for details.
+
+`-S'
+`--server-response'
+ Print the headers sent by HTTP servers and responses sent by FTP
+ servers.
+
+`--spider'
+ When invoked with this option, Wget will behave as a Web "spider",
+ which means that it will not download the pages, just check that
+ they are there. You can use it to check your bookmarks, e.g. with:
+
+ wget --spider --force-html -i bookmarks.html
+
+ This feature needs much more work for Wget to get close to the
+ functionality of real WWW spiders.
+
+`-T SECONDS'
+`--timeout=SECONDS'
+ Set the read timeout to SECONDS seconds. Whenever a network read
+ is issued, the file descriptor is checked for a timeout, which
+ could otherwise leave a pending connection (uninterrupted read).
+ The default timeout is 900 seconds (fifteen minutes). Setting
+ timeout to 0 will disable checking for timeouts.
+
+ Please do not lower the default timeout value with this option
+ unless you know what you are doing.
+
+`-w SECONDS'
+`--wait=SECONDS'
+ Wait the specified number of seconds between the retrievals. Use
+ of this option is recommended, as it lightens the server load by
+ making the requests less frequent. Instead of in seconds, the
+ time can be specified in minutes using the `m' suffix, in hours
+ using `h' suffix, or in days using `d' suffix.
+
+ Specifying a large value for this option is useful if the network
+ or the destination host is down, so that Wget can wait long enough
+ to reasonably expect the network error to be fixed before the
+ retry.
+
+`-Y on/off'
+`--proxy=on/off'
+ Turn proxy support on or off. The proxy is on by default if the
+ appropriate environment variable is defined.
+
+`-Q QUOTA'
+`--quota=QUOTA'
+ Specify download quota for automatic retrievals. The value can be
+ specified in bytes (default), kilobytes (with `k' suffix), or
+ megabytes (with `m' suffix).
+
+ Note that quota will never affect downloading a single file. So
+ if you specify `wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz',
+ all of the `ls-lR.gz' will be downloaded. The same goes even when
+ several URLs are specified on the command-line. However, quota is
+ respected when retrieving either recursively, or from an input
+ file. Thus you may safely type `wget -Q2m -i sites'--download
+ will be aborted when the quota is exceeded.
+
+ Setting quota to 0 or to `inf' unlimits the download quota.
+
+\1f
+File: wget.info, Node: Directory Options, Next: HTTP Options, Prev: Download Options, Up: Invoking
+
+Directory Options
+=================
+
+`-nd'
+`--no-directories'
+ Do not create a hierarchy of directories when retrieving
+ recursively. With this option turned on, all files will get saved
+ to the current directory, without clobbering (if a name shows up
+ more than once, the filenames will get extensions `.n').
+
+`-x'
+`--force-directories'
+ The opposite of `-nd'--create a hierarchy of directories, even if
+ one would not have been created otherwise. E.g. `wget -x
+ http://fly.cc.fer.hr/robots.txt' will save the downloaded file to
+ `fly.cc.fer.hr/robots.txt'.
+
+`-nH'
+`--no-host-directories'
+ Disable generation of host-prefixed directories. By default,
+ invoking Wget with `-r http://fly.cc.fer.hr/' will create a
+ structure of directories beginning with `fly.cc.fer.hr/'. This
+ option disables such behavior.
+
+`--cut-dirs=NUMBER'
+ Ignore NUMBER directory components. This is useful for getting a
+ fine-grained control over the directory where recursive retrieval
+ will be saved.
+
+ Take, for example, the directory at
+ `ftp://ftp.xemacs.org/pub/xemacs/'. If you retrieve it with `-r',
+ it will be saved locally under `ftp.xemacs.org/pub/xemacs/'.
+ While the `-nH' option can remove the `ftp.xemacs.org/' part, you
+ are still stuck with `pub/xemacs'. This is where `--cut-dirs'
+ comes in handy; it makes Wget not "see" NUMBER remote directory
+ components. Here are several examples of how `--cut-dirs' option
+ works.
+
+ No options -> ftp.xemacs.org/pub/xemacs/
+ -nH -> pub/xemacs/
+ -nH --cut-dirs=1 -> xemacs/
+ -nH --cut-dirs=2 -> .
+
+ --cut-dirs=1 -> ftp.xemacs.org/xemacs/
+ ...
+
+ If you just want to get rid of the directory structure, this
+ option is similar to a combination of `-nd' and `-P'. However,
+ unlike `-nd', `--cut-dirs' does not lose the subdirectories--for
+ instance, with `-nH --cut-dirs=1', a `beta/' subdirectory will be
+ placed in `xemacs/beta', as one would expect.
+
+`-P PREFIX'
+`--directory-prefix=PREFIX'
+ Set directory prefix to PREFIX. The "directory prefix" is the
+ directory where all other files and subdirectories will be saved
+ to, i.e. the top of the retrieval tree. The default is `.' (the
+ current directory).
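+ For example (the directory and URL are illustrative):

```shell
# Place the whole retrieval tree under /tmp/mirror instead of
# the current directory.
wget -r -P /tmp/mirror http://fly.cc.fer.hr/
```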
+
+\1f
+File: wget.info, Node: HTTP Options, Next: FTP Options, Prev: Directory Options, Up: Invoking
+
+HTTP Options
+============
+
+`--http-user=USER'
+`--http-passwd=PASSWORD'
+ Specify the username USER and password PASSWORD on an HTTP server.
+ According to the type of the challenge, Wget will encode them
+ using either the `basic' (insecure) or the `digest' authentication
+ scheme.
+
+ Another way to specify username and password is in the URL itself
+ (*Note URL Format::). For more information about security issues
+ with Wget, *Note Security Considerations::.
+
+`-C on/off'
+`--cache=on/off'
+ When set to off, disable server-side caching. In this case, Wget
+ will send the remote server an appropriate directive (`Pragma:
+ no-cache') to get the file from the remote server, rather than
+ returning the cached version. This is especially useful for
+ retrieving and flushing out-of-date documents on proxy servers.
+
+ Caching is allowed by default.
+
+`--ignore-length'
+ Unfortunately, some HTTP servers (CGI programs, to be more
+ precise) send out bogus `Content-Length' headers, which makes Wget
+ go wild, as it thinks not all the document was retrieved. You can
+ spot this syndrome if Wget retries getting the same document again
+ and again, each time claiming that the (otherwise normal)
+ connection has closed on the very same byte.
+
+ With this option, Wget will ignore the `Content-Length' header--as
+ if it never existed.
+
+`--header=ADDITIONAL-HEADER'
+ Define an ADDITIONAL-HEADER to be passed to the HTTP servers.
+ Headers must contain a `:' preceded by one or more non-blank
+ characters, and must not contain newlines.
+
+ You may define more than one additional header by specifying
+ `--header' more than once.
+
+ wget --header='Accept-Charset: iso-8859-2' \
+ --header='Accept-Language: hr' \
+ http://fly.cc.fer.hr/
+
+ Specification of an empty string as the header value will clear all
+ previous user-defined headers.
+
+`--proxy-user=USER'
+`--proxy-passwd=PASSWORD'
+ Specify the username USER and password PASSWORD for authentication
+ on a proxy server. Wget will encode them using the `basic'
+ authentication scheme.
+
+`-s'
+`--save-headers'
+ Save the headers sent by the HTTP server to the file, preceding the
+ actual contents, with an empty line as the separator.
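+ For instance (the URL is illustrative):

```shell
# The saved index.html will begin with the HTTP response headers,
# then an empty line, then the document itself.
wget -s http://fly.cc.fer.hr/index.html
```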
+
+`-U AGENT-STRING'
+`--user-agent=AGENT-STRING'
+ Identify as AGENT-STRING to the HTTP server.
+
+ The HTTP protocol allows the clients to identify themselves using a
+ `User-Agent' header field. This enables distinguishing the WWW
+ software, usually for statistical purposes or for tracing of
+ protocol violations. Wget normally identifies as `Wget/VERSION',
+ VERSION being the current version number of Wget.
+
+ However, some sites have been known to impose the policy of
+ tailoring the output according to the `User-Agent'-supplied
+ information. While conceptually this is not such a bad idea, it
+ has been abused by servers denying information to clients other
+ than `Mozilla' or Microsoft `Internet Explorer'. This option
+ allows you to change the `User-Agent' line issued by Wget. Use of
+ this option is discouraged, unless you really know what you are
+ doing.
+
+ *NOTE* that Netscape Communications Corp. has claimed that false
+ transmissions of `Mozilla' as the `User-Agent' are a copyright
+ infringement, which will be prosecuted. *DO NOT* misrepresent
+ Wget as Mozilla.
+
+\1f
+File: wget.info, Node: FTP Options, Next: Recursive Retrieval Options, Prev: HTTP Options, Up: Invoking
+
+FTP Options
+===========
+
+`--retr-symlinks'
+ Retrieve symbolic links on FTP sites as if they were plain files,
+ i.e. don't just create links locally.
+
+`-g on/off'
+`--glob=on/off'
+ Turn FTP globbing on or off. Globbing means you may use the
+ shell-like special characters ("wildcards"), like `*', `?', `['
+ and `]' to retrieve more than one file from the same directory at
+ once, like:
+
+ wget ftp://gnjilux.cc.fer.hr/*.msg
+
+ By default, globbing will be turned on if the URL contains a
+ globbing character. This option may be used to turn globbing on
+ or off permanently.
+
+ You may have to quote the URL to protect it from being expanded by
+ your shell. Globbing makes Wget look for a directory listing,
+ which is system-specific. This is why it currently works only
+ with Unix FTP servers (and the ones emulating Unix `ls' output).
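+ For example, quoting keeps the wildcard away from the local shell
+(the host is illustrative):

```shell
# The quotes prevent the shell from expanding `*' against local
# files; Wget instead matches the pattern against the FTP server's
# directory listing.
wget 'ftp://gnjilux.cc.fer.hr/*.msg'
```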
+
+`--passive-ftp'
+ Use the "passive" FTP retrieval scheme, in which the client
+ initiates the data connection. This is sometimes required for FTP
+ to work behind firewalls.
+
+\1f
+File: wget.info, Node: Recursive Retrieval Options, Next: Recursive Accept/Reject Options, Prev: FTP Options, Up: Invoking
+
+Recursive Retrieval Options
+===========================
+
+`-r'
+`--recursive'
+ Turn on recursive retrieving. *Note Recursive Retrieval:: for more
+ details.
+
+`-l DEPTH'
+`--level=DEPTH'
+ Specify recursion maximum depth level DEPTH (*Note Recursive
+ Retrieval::). The default maximum depth is 5.
+
+`--delete-after'
+ This option tells Wget to delete every single file it downloads,
+ *after* having done so. It is useful for pre-fetching popular
+ pages through a proxy, e.g.:
+
+ wget -r -nd --delete-after http://whatever.com/~popular/page/
+
+ The `-r' option is to retrieve recursively, and `-nd' not to
+ create directories.
+
+`-k'
+`--convert-links'
+ Convert the non-relative links to relative ones locally. Only the
+ references to the documents actually downloaded will be converted;
+ the rest will be left unchanged.
+
+ Note that only at the end of the download can Wget know which
+ links have been downloaded. Because of that, much of the work
+ done by `-k' will be performed at the end of the downloads.
+
+`-m'
+`--mirror'
+ Turn on options suitable for mirroring. This option turns on
+ recursion and time-stamping, sets infinite recursion depth and
+ keeps FTP directory listings. It is currently equivalent to `-r
+ -N -l inf -nr'.
+
+`-nr'
+`--dont-remove-listing'
+ Don't remove the temporary `.listing' files generated by FTP
+ retrievals. Normally, these files contain the raw directory
+ listings received from FTP servers. Not removing them can be
+ useful to access the full remote file list when running a mirror,
+ or for debugging purposes.
+
+\1f
+File: wget.info, Node: Recursive Accept/Reject Options, Prev: Recursive Retrieval Options, Up: Invoking
+
+Recursive Accept/Reject Options
+===============================
+
+`-A ACCLIST --accept ACCLIST'
+`-R REJLIST --reject REJLIST'
+ Specify comma-separated lists of file name suffixes or patterns to
+ accept or reject (*Note Types of Files:: for more details).
+
+`-D DOMAIN-LIST'
+`--domains=DOMAIN-LIST'
+ Set domains to be accepted and DNS looked-up, where DOMAIN-LIST is
+ a comma-separated list. Note that it does *not* turn on `-H'.
+ This option speeds things up, even if only one host is spanned
+ (*Note Domain Acceptance::).
+
+`--exclude-domains DOMAIN-LIST'
+ Exclude the domains given in a comma-separated DOMAIN-LIST from
+ DNS-lookup (*Note Domain Acceptance::).
+
+`-L'
+`--relative'
+ Follow relative links only. Useful for retrieving a specific home
+ page without any distractions, not even those from the same hosts
+ (*Note Relative Links::).
+
+`--follow-ftp'
+ Follow FTP links from HTML documents. Without this option, Wget
+ will ignore all the FTP links.
+
+`-H'
+`--span-hosts'
+ Enable spanning across hosts when doing recursive retrieving
+ (*Note All Hosts::).
+
+`-I LIST'
+`--include-directories=LIST'
+ Specify a comma-separated list of directories you wish to follow
+ when downloading (*Note Directory-Based Limits:: for more
+ details.) Elements of LIST may contain wildcards.
+
+`-X LIST'
+`--exclude-directories=LIST'
+ Specify a comma-separated list of directories you wish to exclude
+ from download (*Note Directory-Based Limits:: for more details.)
+ Elements of LIST may contain wildcards.
+
+`-nh'
+`--no-host-lookup'
+ Disable the time-consuming DNS lookup of almost all hosts (*Note
+ Host Checking::).
+
+`-np'
+`--no-parent'
+ Do not ever ascend to the parent directory when retrieving
+ recursively. This is a useful option, since it guarantees that
+ only the files *below* a certain hierarchy will be downloaded.
+ *Note Directory-Based Limits:: for more details.
+
+\1f
+File: wget.info, Node: Recursive Retrieval, Next: Following Links, Prev: Invoking, Up: Top
+
+Recursive Retrieval
+*******************
+
+ GNU Wget is capable of traversing parts of the Web (or a single HTTP
+or FTP server), depth-first following links and directory structure.
+This is called "recursive" retrieving, or "recursion".
+
+ With HTTP URLs, Wget retrieves and parses the HTML document at the
+given URL, retrieving the files the document refers to, through markup
+such as `href' or `src'. If the freshly downloaded file is also of
+type `text/html', it will be parsed and followed further.
+
+ The maximum "depth" to which the retrieval may descend is specified
+with the `-l' option (the default maximum depth is five layers). *Note
+Recursive Retrieval::.
+
+ When retrieving an FTP URL recursively, Wget will retrieve all the
+data from the given directory tree (including the subdirectories up to
+the specified depth) on the remote server, creating its mirror image
+locally. FTP retrieval is also limited by the `depth' parameter.
+
+ By default, Wget will create a local directory tree, corresponding to
+the one found on the remote server.
+
+ Recursive retrieving has a number of applications, the most
+important of which is mirroring. It is also useful for WWW
+presentations, and any other situations where slow network
+connections should be bypassed by storing the files locally.
+
+ You should be warned that invoking recursion may cause grave
+overloading on your system, because of the fast exchange of data
+through the network; all of this may hamper other users' work. The
+same stands for the foreign server you are mirroring--the more requests
+it gets in a row, the greater its load.
+
+ Careless retrieving can also fill your file system uncontrollably,
+which can grind the machine to a halt.
+
+ The load can be minimized by lowering the maximum recursion level
+(`-l') and/or by lowering the number of retries (`-t'). You may also
+consider using the `-w' option to slow down your requests to the remote
+servers, as well as the numerous options to narrow the number of
+followed links (*Note Following Links::).
+
+ Recursive retrieval is a good thing when used properly. Please take
+all precautions not to wreak havoc through carelessness.
+
+\1f
+File: wget.info, Node: Following Links, Next: Time-Stamping, Prev: Recursive Retrieval, Up: Top
+
+Following Links
+***************
+
+ When retrieving recursively, one does not wish to retrieve the loads
+of unnecessary data. Most of the time the users bear in mind exactly
+what they want to download, and want Wget to follow only specific links.
+
+ For example, if you wish to download the music archive from
+`fly.cc.fer.hr', you will not want to download all the home pages that
+happen to be referenced by an obscure part of the archive.
+
+ Wget possesses several mechanisms that allow you to fine-tune which
+links it will follow.
+
+* Menu:
+
+* Relative Links:: Follow relative links only.
+* Host Checking:: Follow links on the same host.
+* Domain Acceptance:: Check on a list of domains.
+* All Hosts:: No host restrictions.
+* Types of Files:: Getting only certain files.
+* Directory-Based Limits:: Getting only certain directories.
+* FTP Links:: Following FTP links.
+
+\1f
+File: wget.info, Node: Relative Links, Next: Host Checking, Prev: Following Links, Up: Following Links
+
+Relative Links
+==============
+
+ When only relative links are followed (option `-L'), recursive
+retrieving will never span hosts. No time-expensive DNS-lookups will
+be performed, and the process will be very fast, with minimum strain
+on the network. This will often suit your needs, especially when
+mirroring the output of various `x2html' converters, since they
+generally output relative links.
+
+\1f
+File: wget.info, Node: Host Checking, Next: Domain Acceptance, Prev: Relative Links, Up: Following Links
+
+Host Checking
+=============
+
+ The drawback of following the relative links solely is that humans
+often tend to mix them with absolute links to the very same host, and
+the very same page. In this mode (which is the default mode for
+following links) all URLs that refer to the same host will be
+retrieved.
+
+ The problem with this option is the aliases of the hosts and
+domains. Thus there is no way for Wget to know that `regoc.srce.hr' and
+`www.srce.hr' are the same host, or that `fly.cc.fer.hr' is the same as
+`fly.cc.etf.hr'. Whenever an absolute link is encountered, the host is
+DNS-looked-up with `gethostbyname' to check whether we are maybe
+dealing with the same hosts. Although the results of `gethostbyname'
+are cached, it is still a great slowdown, e.g. when dealing with large
+indices of home pages on different hosts (because each of the hosts
+must be DNS-resolved to see whether it just *might* be an alias of the
+starting host).
+
+ To avoid the overhead you may use `-nh', which will turn off
+DNS-resolving and make Wget compare hosts literally. This will make
+things run much faster, but also much less reliable (e.g. `www.srce.hr'
+and `regoc.srce.hr' will be flagged as different hosts).
+
+ Note that modern HTTP servers allow one IP address to host several
+"virtual servers", each having its own directory hierarchy. Such
+"servers" are distinguished by their hostnames (all of which point to
+the same IP address); for this to work, a client must send a `Host'
+header, which is what Wget does. However, in that case Wget *must not*
+try to divine a host's "real" address, nor try to use the same hostname
+for each access, i.e. `-nh' must be turned on.
+
+ In other words, the `-nh' option must be used to enable retrieval
+from virtual servers distinguished by their hostnames. As the number
+of such server setups grows, the behavior of `-nh' may become the
+default in the future.
+
+\1f
+File: wget.info, Node: Domain Acceptance, Next: All Hosts, Prev: Host Checking, Up: Following Links
+
+Domain Acceptance
+=================
+
+ With the `-D' option you may specify the domains that will be
+followed. Hosts whose domain is not in this list will not be
+DNS-resolved. Thus you can specify `-Dmit.edu' just to make sure that
+*nothing outside of MIT gets looked up*. This is very important and
+useful. It also means that `-D' does *not* imply `-H' (span all
+hosts), which must be specified explicitly. Feel free to use this
+option since it will speed things up, with almost all the reliability
+of checking for all hosts. Thus you could invoke
+
+ wget -r -D.hr http://fly.cc.fer.hr/
+
+ to make sure that only the hosts in `.hr' domain get DNS-looked-up
+for being equal to `fly.cc.fer.hr'. So `fly.cc.etf.hr' will be checked
+(only once!) and found equal, but `www.gnu.ai.mit.edu' will not even be
+checked.
+
+ Of course, domain acceptance can be used to limit the retrieval to
+particular domains with spanning of hosts in them, but then you must
+specify `-H' explicitly. E.g.:
+
+ wget -r -H -Dmit.edu,stanford.edu http://www.mit.edu/
+
+ will start with `http://www.mit.edu/', following links across MIT
+and Stanford.
+
+ If there are domains you want to exclude specifically, you can do it
+with `--exclude-domains', which accepts the same type of arguments as
+`-D', but will *exclude* all the listed domains. For example, if you
+want to download all the hosts from `foo.edu' domain, with the
+exception of `sunsite.foo.edu', you can do it like this:
+
+ wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu http://www.foo.edu/
+
+\1f
+File: wget.info, Node: All Hosts, Next: Types of Files, Prev: Domain Acceptance, Up: Following Links
+
+All Hosts
+=========
+
+ When `-H' is specified without `-D', all hosts are freely spanned.
+There are no restrictions whatsoever as to what part of the net Wget
+will go to fetch documents, other than maximum retrieval depth. If a
+page references `www.yahoo.com', so be it. Such an option is rarely
+useful by itself.
+
+\1f
+File: wget.info, Node: Types of Files, Next: Directory-Based Limits, Prev: All Hosts, Up: Following Links
+
+Types of Files
+==============
+
+ When downloading material from the web, you will often want to
+restrict the retrieval to only certain file types. For example, if you
+are interested in downloading GIFs, you will not be overjoyed to get
+loads of PostScript documents, and vice versa.
+
+ Wget offers two options to deal with this problem. Each option
+description lists a short name, a long name, and the equivalent command
+in `.wgetrc'.
+
+`-A ACCLIST'
+`--accept ACCLIST'
+`accept = ACCLIST'
+ The argument to `--accept' option is a list of file suffixes or
+ patterns that Wget will download during recursive retrieval. A
+ suffix is the ending part of a file name, and consists of "normal"
+ letters, e.g. `gif' or `.jpg'. A matching pattern contains
+ shell-like wildcards, e.g. `books*' or `zelazny*196[0-9]*'.
+
+ So, specifying `wget -A gif,jpg' will make Wget download only the
+ files ending with `gif' or `jpg', i.e. GIFs and JPEGs. On the
+ other hand, `wget -A "zelazny*196[0-9]*"' will download only files
+ beginning with `zelazny' and containing numbers from 1960 to 1969
+ anywhere within. Look up the manual of your shell for a
+ description of how pattern matching works.
+
+ Of course, any number of suffixes and patterns can be combined
+ into a comma-separated list, and given as an argument to `-A'.
+
+`-R REJLIST'
+`--reject REJLIST'
+`reject = REJLIST'
+ The `--reject' option works the same way as `--accept', only its
+ logic is the reverse; Wget will download all files *except* the
+ ones matching the suffixes (or patterns) in the list.
+
+ So, if you want to download a whole page except for the cumbersome
+ MPEGs and .AU files, you can use `wget -R mpg,mpeg,au'.
+ Analogously, to download all files except the ones beginning with
+ `bjork', use `wget -R "bjork*"'. The quotes are to prevent
+ expansion by the shell.
+
+ The `-A' and `-R' options may be combined to achieve even better
+fine-tuning of which files to retrieve. E.g. `wget -A "*zelazny*" -R
+.ps' will download all the files having `zelazny' as a part of their
+name, but *not* the PostScript files.
+
+ Note that these two options do not affect the downloading of HTML
+files; Wget must load all the HTMLs to know where to go at
+all--recursive retrieval would make no sense otherwise.
+
+\1f
+File: wget.info, Node: Directory-Based Limits, Next: FTP Links, Prev: Types of Files, Up: Following Links
+
+Directory-Based Limits
+======================
+
+ Regardless of other link-following facilities, it is often useful to
+place the restriction of what files to retrieve based on the directories
+those files are placed in. There can be many reasons for this--the
+home pages may be organized in a reasonable directory structure; or some
+directories may contain useless information, e.g. `/cgi-bin' or `/dev'
+directories.
+
+ Wget offers three different options to deal with this requirement.
+Each option description lists a short name, a long name, and the
+equivalent command in `.wgetrc'.
+
+`-I LIST'
+`--include LIST'
+`include_directories = LIST'
+ The `-I' option accepts a comma-separated list of directories
+ included in the retrieval. Any other directories will simply be
+ ignored. The directories are absolute paths.
+
+ So, if you wish to download from `http://host/people/bozo/'
+ following only links to bozo's colleagues in the `/people'
+ directory and the bogus scripts in `/cgi-bin', you can specify:
+
+ wget -I /people,/cgi-bin http://host/people/bozo/
+
+`-X LIST'
+`--exclude LIST'
+`exclude_directories = LIST'
+ The `-X' option is exactly the reverse of `-I'--this is a list of
+ directories *excluded* from the download. E.g. if you do not want
+ Wget to download things from `/cgi-bin' directory, specify `-X
+ /cgi-bin' on the command line.
+
+ The same as with `-A'/`-R', these two options can be combined to
+ get a better fine-tuning of downloading subdirectories. E.g. if
+ you want to load all the files from `/pub' hierarchy except for
+ `/pub/worthless', specify `-I/pub -X/pub/worthless'.
+
+`-np'
+`--no-parent'
+`no_parent = on'
+ The simplest, and often very useful way of limiting directories is
+ disallowing retrieval of the links that refer to the hierarchy
+ *above* the beginning directory, i.e. disallowing ascent to the
+ parent directory/directories.
+
+ The `--no-parent' option (short `-np') is useful in this case.
+ Using it guarantees that you will never leave the existing
+ hierarchy. Supposing you issue Wget with:
+
+ wget -r --no-parent http://somehost/~luzer/my-archive/
+
+ You may rest assured that none of the references to
+ `/~his-girls-homepage/' or `/~luzer/all-my-mpegs/' will be
+ followed. Only the archive you are interested in will be
+ downloaded. Essentially, `--no-parent' is similar to
+ `-I/~luzer/my-archive', only it handles redirections in a more
+ intelligent fashion.
+
+\1f
+File: wget.info, Node: FTP Links, Prev: Directory-Based Limits, Up: Following Links
+
+Following FTP Links
+===================
+
+ The rules for FTP are somewhat specific, as they necessarily have
+to be. FTP links in HTML documents are often included for purposes of
+reference, and it is often inconvenient to download them by default.
+
+ To have FTP links followed from HTML documents, you need to specify
+the `--follow-ftp' option. Having done that, FTP links will span hosts
+regardless of `-H' setting. This is logical, as FTP links rarely point
+to the same host where the HTTP server resides. For similar reasons,
+the `-L' option has no effect on such downloads. On the other hand,
+domain acceptance (`-D') and suffix rules (`-A' and `-R') apply
+normally.
+
+ Also note that followed links to FTP directories will not be
+recursively retrieved any further.
+
+\1f
+File: wget.info, Node: Time-Stamping, Next: Startup File, Prev: Following Links, Up: Top
+
+Time-Stamping
+*************
+
+ One of the most important aspects of mirroring information from the
+Internet is updating your archives.
+
+ Downloading the whole archive again and again, just to replace a few
+changed files is expensive, both in terms of wasted bandwidth and money,
+and the time to do the update. This is why all the mirroring tools
+offer the option of incremental updating.
+
+ Such an updating mechanism means that the remote server is scanned in
+search of "new" files. Only those new files will be downloaded in the
+place of the old ones.
+
+ A file is considered new if one of these two conditions is met:
+
+ 1. A file of that name does not already exist locally.
+
+ 2. A file of that name does exist, but the remote file was modified
+ more recently than the local file.
+
+ To implement this, the program needs to be aware of the time of last
+modification of both remote and local files. Such information is
+called a "time-stamp".
+
+ The time-stamping in GNU Wget is turned on using the
+`--timestamping' (`-N') option, or through the `timestamping = on'
+directive in `.wgetrc'.
+With this option, for each file it intends to download, Wget will check
+whether a local file of the same name exists. If it does, and the
+remote file is older, Wget will not download it.
+
+ If the local file does not exist, or the sizes of the files do not
+match, Wget will download the remote file no matter what the time-stamps
+say.
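
The decision just described can be sketched in shell. This is a
minimal illustration, not Wget's actual code; the second file merely
stands in for the remote time-stamp, which Wget would really obtain
from the server:

```shell
# Sketch of the `-N' decision.  remote.html stands in for the remote
# file; its stamp would really come from the server.
touch -t 200001010000 local.html       # local copy, older
touch -t 200006010000 remote.html      # "remote" copy, newer

if [ ! -e local.html ]; then
    decision="download: no local copy"
elif [ remote.html -nt local.html ]; then
    decision="download: remote is newer"
else
    decision="skip: local copy is up to date"
fi
echo "$decision"
```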
+
+* Menu:
+
+* Time-Stamping Usage::
+* HTTP Time-Stamping Internals::
+* FTP Time-Stamping Internals::
+
+\1f
+File: wget.info, Node: Time-Stamping Usage, Next: HTTP Time-Stamping Internals, Prev: Time-Stamping, Up: Time-Stamping
+
+Time-Stamping Usage
+===================
+
+ The usage of time-stamping is simple. Say you would like to
+download a file so that it keeps its date of modification.
+
+ wget -S http://www.gnu.ai.mit.edu/
+
+ A simple `ls -l' shows that the time stamp on the local file
+matches the `Last-Modified' header returned by the server. As
+you can see, the time-stamping info is preserved locally, even without
+`-N'.
+
+ Several days later, you would like Wget to check if the remote file
+has changed, and download it if it has.
+
+ wget -N http://www.gnu.ai.mit.edu/
+
+ Wget will ask the server for the last-modified date. If the local
+file is newer, the remote file will not be re-fetched. However, if the
+remote file is more recent, Wget will proceed fetching it normally.
+
+ The same goes for FTP. For example:
+
+ wget ftp://ftp.ifi.uio.no/pub/emacs/gnus/*
+
+ `ls' will show that the timestamps are set according to the state on
+the remote server. Reissuing the command with `-N' will make Wget
+re-fetch *only* the files that have been modified.
+
+ In both HTTP and FTP retrieval Wget will time-stamp the local file
+correctly (with or without `-N') if it gets the stamps, i.e. gets the
+directory listing for FTP or the `Last-Modified' header for HTTP.
+
+ If you wished to mirror the GNU archive every week, you would use the
+following command every week:
+
+ wget --timestamping -r ftp://prep.ai.mit.edu/pub/gnu/
+
+\1f
+File: wget.info, Node: HTTP Time-Stamping Internals, Next: FTP Time-Stamping Internals, Prev: Time-Stamping Usage, Up: Time-Stamping
+
+HTTP Time-Stamping Internals
+============================
+
+ Time-stamping in HTTP is implemented by checking the
+`Last-Modified' header. If you wish to retrieve the file `foo.html'
+through HTTP, Wget will check whether `foo.html' exists locally. If it
+doesn't, `foo.html' will be retrieved unconditionally.
+
+ If the file does exist locally, Wget will first check its local
+time-stamp (similar to the way `ls -l' checks it), and then send a
+`HEAD' request to the remote server, asking for information about
+the remote file.
+
+ The `Last-Modified' header is examined to find which file was
+modified more recently (which makes it "newer"). If the remote file is
+newer, it will be downloaded; if it is older, Wget will give up.(1)
+
+ Arguably, HTTP time-stamping should be implemented using the
+`If-Modified-Since' request.
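
The comparison can be sketched as follows; the numeric time-stamps
and sizes are hypothetical values of the kind that would come from
the local file and from the `Last-Modified' and `Content-Length'
headers of the `HEAD' response:

```shell
# Hypothetical values; in reality taken from the file on disk and
# from the HEAD response headers.
local_time=900000000;  local_size=4694     # local foo.html
remote_time=910000000; remote_size=4694    # Last-Modified / Content-Length

if [ "$local_size" -ne "$remote_size" ]; then
    verdict=download        # sizes differ: fetch regardless of stamps
elif [ "$remote_time" -gt "$local_time" ]; then
    verdict=download        # remote file is newer
else
    verdict="give up"       # local copy is up to date
fi
echo "$verdict"
```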
+
+ ---------- Footnotes ----------
+
+ (1) As an additional check, Wget will look at the `Content-Length'
+header, and compare the sizes; if they are not the same, the remote
+file will be downloaded no matter what the time-stamp says.
+
+\1f
+File: wget.info, Node: FTP Time-Stamping Internals, Prev: HTTP Time-Stamping Internals, Up: Time-Stamping
+
+FTP Time-Stamping Internals
+===========================
+
+ In theory, FTP time-stamping works much the same as HTTP, only FTP
+has no headers--time-stamps must be received from the directory
+listings.
+
+ For each directory that files must be retrieved from, Wget will use the
+`LIST' command to get the listing. It will try to analyze the listing,
+assuming that it is a Unix `ls -l' listing, and extract the
+time-stamps. The rest is exactly the same as for HTTP.
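
As a sketch, the fields of a Unix-style listing line sit at fixed
word positions, which is what makes the extraction tractable (the
sample line below is made up):

```shell
# A made-up line of the kind a Unix `ls -l' style LIST reply contains.
line='-rw-r--r--   1 ftp      ftp          1340 May 12  1998 welcome.msg'

set -- $line        # split into whitespace-separated fields
fields="size=$5 stamp=$6 $7 $8 name=$9"
echo "$fields"
```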
+
+ The assumption that every directory listing is a Unix-style one may
+sound extremely constraining, but in practice it is not, as many
+non-Unix FTP servers use the Unixoid listing format because most (all?)
+of the clients understand it. Bear in mind that RFC 959 defines no
+standard way to get a file list, let alone the time-stamps. We can
+only hope that a future standard will define this.
+
+ Another non-standard solution is the `MDTM' command, supported by
+some FTP servers (including the popular `wu-ftpd'), which returns the
+exact modification time of the specified file. Wget may support this
+command in the future.
+
+\1f
+File: wget.info, Node: Startup File, Next: Examples, Prev: Time-Stamping, Up: Top
+
+Startup File
+************
+
+ Once you know how to change default settings of Wget through command
+line arguments, you may wish to make some of those settings permanent.
+You can do that in a convenient way by creating the Wget startup
+file--`.wgetrc'.
+
+ While `.wgetrc' is the "main" initialization file, it is convenient
+to have a special facility for storing passwords. Thus Wget also
+reads and interprets the contents of `$HOME/.netrc', if it finds it.
+You can find the `.netrc' format described in your system manuals.
+
+ Wget reads `.wgetrc' upon startup, recognizing a limited set of
+commands.
+
+* Menu:
+
+* Wgetrc Location:: Location of various wgetrc files.
+* Wgetrc Syntax:: Syntax of wgetrc.
+* Wgetrc Commands:: List of available commands.
+* Sample Wgetrc:: A wgetrc example.
+
+\1f
+File: wget.info, Node: Wgetrc Location, Next: Wgetrc Syntax, Prev: Startup File, Up: Startup File
+
+Wgetrc Location
+===============
+
+ When initializing, Wget will look for a "global" startup file,
+`/usr/local/etc/wgetrc' by default (or some prefix other than
+`/usr/local', if Wget was not installed there) and read commands from
+there, if it exists.
+
+ Then it will look for the user's file. If the environment variable
+`WGETRC' is set, Wget will try to load that file. Failing that, no
+further attempts will be made.
+
+ If `WGETRC' is not set, Wget will try to load `$HOME/.wgetrc'.
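
The choice of the user's file can be sketched as follows (file names
only; this hypothetical helper reads nothing):

```shell
# Which user startup file Wget would pick, per the rules above.
pick_user_wgetrc() {
    if [ -n "$WGETRC" ]; then
        echo "$WGETRC"            # WGETRC set: use it, with no fallback
    else
        echo "$HOME/.wgetrc"      # WGETRC unset: the default location
    fi
}

WGETRC=
HOME=/home/luzer
pick_user_wgetrc
```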
+
+ The fact that the user's settings are loaded after the system-wide
+ones means that, in case of collision, the user's wgetrc *overrides* the
+system-wide wgetrc (in `/usr/local/etc/wgetrc' by default). Fascist
+admins, away!
+
--- /dev/null
+This is Info file wget.info, produced by Makeinfo version 1.67 from the
+input file ./wget.texi.
+
+INFO-DIR-SECTION Net Utilities
+INFO-DIR-SECTION World Wide Web
+START-INFO-DIR-ENTRY
+* Wget: (wget). The non-interactive network downloader.
+END-INFO-DIR-ENTRY
+
+ This file documents the GNU Wget utility for downloading network
+data.
+
+ Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.
+
+ Permission is granted to make and distribute verbatim copies of this
+manual provided the copyright notice and this permission notice are
+preserved on all copies.
+
+ Permission is granted to copy and distribute modified versions of
+this manual under the conditions for verbatim copying, provided also
+that the sections entitled "Copying" and "GNU General Public License"
+are included exactly as in the original, and provided that the entire
+resulting derived work is distributed under the terms of a permission
+notice identical to this one.
+
+\1f
+File: wget.info, Node: Wgetrc Syntax, Next: Wgetrc Commands, Prev: Wgetrc Location, Up: Startup File
+
+Wgetrc Syntax
+=============
+
+ The syntax of a wgetrc command is simple:
+
+ variable = value
+
+ The "variable" will also be called "command". Valid "values" are
+different for different commands.
+
+ The commands are case-insensitive and underscore-insensitive. Thus
+`DIr__PrefiX' is the same as `dirprefix'. Empty lines, lines beginning
+with `#' and lines containing white-space only are discarded.
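
The normalization is simple enough to sketch with a hypothetical
helper (not Wget's actual code):

```shell
# Lower-case the command name and drop underscores, as Wget's parser
# effectively does before matching command names.
normalize() { printf '%s\n' "$1" | tr 'A-Z' 'a-z' | tr -d '_'; }

normalize 'DIr__PrefiX'
```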
+
+ Commands that expect a comma-separated list will clear the list on an
+empty command. So, if you wish to reset the rejection list specified in
+global `wgetrc', you can do it with:
+
+ reject =
+
+\1f
+File: wget.info, Node: Wgetrc Commands, Next: Sample Wgetrc, Prev: Wgetrc Syntax, Up: Startup File
+
+Wgetrc Commands
+===============
+
+ The complete set of commands is listed below, the letter after `='
+denoting the value the command takes. It is `on/off' for `on' or `off'
+(which can also be `1' or `0'), STRING for any non-empty string, or N
+for a positive integer. For example, you may specify `use_proxy = off'
+to disable use of proxy servers by default. You may use `inf' for
+infinite values, where appropriate.
+
+ Most of the commands have their equivalent command-line option
+(*Note Invoking::), except some more obscure or rarely used ones.
+
+accept/reject = STRING
+ Same as `-A'/`-R' (*Note Types of Files::).
+
+add_hostdir = on/off
+ Enable/disable host-prefixed file names. `-nH' disables it.
+
+continue = on/off
+ Enable/disable continuation of the retrieval, the same as `-c'
+ (which enables it).
+
+background = on/off
+ Enable/disable going to background, the same as `-b' (which enables
+ it).
+
+base = STRING
+ Set base for relative URLs, the same as `-B'.
+
+cache = on/off
+ When set to off, disallow server-caching. See the `-C' option.
+
+convert_links = on/off
+ Convert non-relative links locally. The same as `-k'.
+
+cut_dirs = N
+ Ignore N remote directory components.
+
+debug = on/off
+ Debug mode, same as `-d'.
+
+delete_after = on/off
+ Delete after download, the same as `--delete-after'.
+
+dir_prefix = STRING
+ Top of directory tree, the same as `-P'.
+
+dirstruct = on/off
+ Turn dirstruct on or off, the same as `-x' or `-nd',
+ respectively.
+
+domains = STRING
+ Same as `-D' (*Note Domain Acceptance::).
+
+dot_bytes = N
+ Specify the number of bytes "contained" in a dot, as seen
+ throughout the retrieval (1024 by default). You can postfix the
+ value with `k' or `m', representing kilobytes and megabytes,
+ respectively. With dot settings you can tailor the dot retrieval
+ to suit your needs, or you can use the predefined "styles" (*Note
+ Download Options::).
+
+dots_in_line = N
+ Specify the number of dots that will be printed in each line
+ throughout the retrieval (50 by default).
+
+dot_spacing = N
+ Specify the number of dots in a single cluster (10 by default).
+
+dot_style = STRING
+ Specify the dot retrieval "style", as with `--dot-style'.
+
+exclude_directories = STRING
+ Specify a comma-separated list of directories you wish to exclude
+ from download, the same as `-X' (*Note Directory-Based Limits::).
+
+exclude_domains = STRING
+ Same as `--exclude-domains' (*Note Domain Acceptance::).
+
+follow_ftp = on/off
+ Follow FTP links from HTML documents, the same as `-f'.
+
+force_html = on/off
+ If set to on, force the input filename to be regarded as an HTML
+ document, the same as `-F'.
+
+ftp_proxy = STRING
+ Use STRING as FTP proxy, instead of the one specified in
+ environment.
+
+glob = on/off
+ Turn globbing on/off, the same as `-g'.
+
+header = STRING
+ Define an additional header, like `--header'.
+
+http_passwd = STRING
+ Set HTTP password.
+
+http_proxy = STRING
+ Use STRING as HTTP proxy, instead of the one specified in
+ environment.
+
+http_user = STRING
+ Set HTTP user to STRING.
+
+ignore_length = on/off
+ When set to on, ignore `Content-Length' header; the same as
+ `--ignore-length'.
+
+include_directories = STRING
+ Specify a comma-separated list of directories you wish to follow
+ when downloading, the same as `-I'.
+
+input = STRING
+ Read the URLs from STRING, like `-i'.
+
+kill_longer = on/off
+ Consider data longer than specified in content-length header as
+ invalid (and retry getting it). The default behaviour is to save
+ as much data as there is, provided the amount is at least equal to
+ the value in `Content-Length'.
+
+logfile = STRING
+ Set logfile, the same as `-o'.
+
+login = STRING
+ Your user name on the remote machine, for FTP. Defaults to
+ `anonymous'.
+
+mirror = on/off
+ Turn mirroring on/off. The same as `-m'.
+
+netrc = on/off
+ Turn reading netrc on or off.
+
+noclobber = on/off
+ Same as `-nc'.
+
+no_parent = on/off
+ Disallow retrieving outside the directory hierarchy, like
+ `--no-parent' (*Note Directory-Based Limits::).
+
+no_proxy = STRING
+ Use STRING as the comma-separated list of domains to avoid in
+ proxy loading, instead of the one specified in environment.
+
+output_document = STRING
+ Set the output filename, the same as `-O'.
+
+passive_ftp = on/off
+ Set passive FTP, the same as `--passive-ftp'.
+
+passwd = STRING
+ Set your FTP password to STRING. Without this setting, the
+ password defaults to `username@hostname.domainname'.
+
+proxy_user = STRING
+ Set proxy authentication user name to STRING, like `--proxy-user'.
+
+proxy_passwd = STRING
+ Set proxy authentication password to STRING, like `--proxy-passwd'.
+
+quiet = on/off
+ Quiet mode, the same as `-q'.
+
+quota = QUOTA
+ Specify the download quota, which is useful to put in global
+ wgetrc. When download quota is specified, Wget will stop retrieving
+ after the download sum has become greater than quota. The quota
+ can be specified in bytes (default), kbytes (`k' appended) or mbytes
+ (`m' appended). Thus `quota = 5m' will set the quota to 5 mbytes.
+ Note that the user's startup file overrides system settings.
+
+reclevel = N
+ Recursion level, the same as `-l'.
+
+recursive = on/off
+ Recursive on/off, the same as `-r'.
+
+relative_only = on/off
+ Follow only relative links, the same as `-L' (*Note Relative
+ Links::).
+
+remove_listing = on/off
+ If set to on, remove FTP listings downloaded by Wget. Setting it
+ to off is the same as `-nr'.
+
+retr_symlinks = on/off
+ When set to on, retrieve symbolic links as if they were plain
+ files; the same as `--retr-symlinks'.
+
+robots = on/off
+ Use (or not) `/robots.txt' file (*Note Robots::). Be sure to know
+ what you are doing before changing the default (which is `on').
+
+server_response = on/off
+ Choose whether or not to print the HTTP and FTP server responses,
+ the same as `-S'.
+
+simple_host_check = on/off
+ Same as `-nh' (*Note Host Checking::).
+
+span_hosts = on/off
+ Same as `-H'.
+
+timeout = N
+ Set timeout value, the same as `-T'.
+
+timestamping = on/off
+ Turn timestamping on/off. The same as `-N' (*Note Time-Stamping::).
+
+tries = N
+ Set number of retries per URL, the same as `-t'.
+
+use_proxy = on/off
+ Turn proxy support on/off. The same as `-Y'.
+
+verbose = on/off
+ Turn verbose on/off, the same as `-v'/`-nv'.
+
+wait = N
+ Wait N seconds between retrievals, the same as `-w'.
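
The `k'/`m' suffix convention used by size-valued commands such as
`quota' and `dot_bytes' can be sketched as (a hypothetical helper,
not Wget's actual parser):

```shell
# Expand a size with an optional k/m suffix into plain bytes.
parse_size() {
    case "$1" in
        *k) echo $(( ${1%k} * 1024 )) ;;
        *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
        *)  echo "$1" ;;
    esac
}

parse_size 5m       # 5 mbytes, as in `quota = 5m' -> 5242880
```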
+
+\1f
+File: wget.info, Node: Sample Wgetrc, Prev: Wgetrc Commands, Up: Startup File
+
+Sample Wgetrc
+=============
+
+ This is the sample initialization file, as given in the distribution.
+It is divided in two sections--one for global usage (suitable for global
+startup file), and one for local usage (suitable for `$HOME/.wgetrc').
+Be careful about the things you change.
+
+ Note that all the lines are commented out. For any line to have
+effect, you must remove the `#' prefix at the beginning of the line.
+
+ ###
+ ### Sample Wget initialization file .wgetrc
+ ###
+
+ ## You can use this file to change the default behaviour of wget or to
+ ## avoid having to type many many command-line options. This file does
+ ## not contain a comprehensive list of commands -- look at the manual
+ ## to find out what you can put into this file.
+ ##
+ ## Wget initialization file can reside in /usr/local/etc/wgetrc
+ ## (global, for all users) or $HOME/.wgetrc (for a single user).
+ ##
+ ## To use any of the settings in this file, you will have to uncomment
+ ## them (and probably change them).
+
+
+ ##
+ ## Global settings (useful for setting up in /usr/local/etc/wgetrc).
+ ## Think well before you change them, since they may reduce wget's
+ ## functionality, and make it behave contrary to the documentation:
+ ##
+
+ # You can set retrieve quota for beginners by specifying a value
+ # optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
+ # default quota is unlimited.
+ #quota = inf
+
+ # You can lower (or raise) the default number of retries when
+ # downloading a file (default is 20).
+ #tries = 20
+
+ # Lowering the maximum depth of the recursive retrieval is handy to
+ # prevent newbies from going too "deep" when they unwittingly start
+ # the recursive retrieval. The default is 5.
+ #reclevel = 5
+
+ # Many sites are behind firewalls that do not allow initiation of
+ # connections from the outside. On these sites you have to use the
+ # `passive' feature of FTP. If you are behind such a firewall, you
+ # can turn this on to make Wget use passive FTP by default.
+ #passive_ftp = off
+
+
+ ##
+ ## Local settings (for a user to set in his $HOME/.wgetrc). It is
+ ## *highly* undesirable to put these settings in the global file, since
+ ## they are potentially dangerous to "normal" users.
+ ##
+ ## Even when setting up your own ~/.wgetrc, you should know what you
+ ## are doing before doing so.
+ ##
+
+ # Set this to on to use timestamping by default:
+ #timestamping = off
+
+ # It is a good idea to make Wget send your email address in a `From:'
+ # header with your request (so that server administrators can contact
+ # you in case of errors). Wget does *not* send `From:' by default.
+ #header = From: Your Name <username@site.domain>
+
+ # You can set up other headers, like Accept-Language. Accept-Language
+ # is *not* sent by default.
+ #header = Accept-Language: en
+
+ # You can set the default proxy for Wget to use. It will override the
+ # value in the environment.
+ #http_proxy = http://proxy.yoyodyne.com:18023/
+
+ # If you do not want to use proxy at all, set this to off.
+ #use_proxy = on
+
+ # You can customize the retrieval outlook. Valid options are default,
+ # binary, mega and micro.
+ #dot_style = default
+
+ # Setting this to off makes Wget not download /robots.txt. Be sure to
+ # know *exactly* what /robots.txt is and how it is used before changing
+ # the default!
+ #robots = on
+
+ # It can be useful to make Wget wait between connections. Set this to
+ # the number of seconds you want Wget to wait.
+ #wait = 0
+
+ # You can force creating directory structure, even if a single file
+ # is being retrieved, by setting this to on.
+ #dirstruct = off
+
+ # You can turn on recursive retrieving by default (don't do this if
+ # you are not sure you know what it means) by setting this to on.
+ #recursive = off
+
+ # To have Wget follow FTP links from HTML files by default, set this
+ # to on:
+ #follow_ftp = off
+
+\1f
+File: wget.info, Node: Examples, Next: Various, Prev: Startup File, Up: Top
+
+Examples
+********
+
+ The examples are divided into three sections for clarity.
+The first section is a tutorial for beginners. The second section
+explains some of the more complex program features. The third section
+contains advice for mirror administrators, as well as even more complex
+features (that some would call perverted).
+
+* Menu:
+
+* Simple Usage:: Simple, basic usage of the program.
+* Advanced Usage:: Advanced techniques of usage.
+* Guru Usage:: Mirroring and the hairy stuff.
+
+\1f
+File: wget.info, Node: Simple Usage, Next: Advanced Usage, Prev: Examples, Up: Examples
+
+Simple Usage
+============
+
+ * Say you want to download a URL. Just type:
+
+ wget http://fly.cc.fer.hr/
+
+ The response will be something like:
+
+ --13:30:45-- http://fly.cc.fer.hr:80/en/
+ => `index.html'
+ Connecting to fly.cc.fer.hr:80... connected!
+ HTTP request sent, awaiting response... 200 OK
+ Length: 4,694 [text/html]
+
+ 0K -> .... [100%]
+
+ 13:30:46 (23.75 KB/s) - `index.html' saved [4694/4694]
+
+ * But what will happen if the connection is slow, and the file is
+ lengthy? The connection will probably fail before the whole file
+ is retrieved, more than once. In this case, Wget will try getting
+ the file until it either gets the whole of it, or exceeds the
+ default number of retries (this being 20). It is easy to change
+ the number of tries to 45, to ensure that the whole file will
+ arrive safely:
+
+ wget --tries=45 http://fly.cc.fer.hr/jpg/flyweb.jpg
+
+ * Now let's leave Wget to work in the background, and write its
+ progress to log file `log'. It is tiring to type `--tries', so we
+ shall use `-t'.
+
+ wget -t 45 -o log http://fly.cc.fer.hr/jpg/flyweb.jpg &
+
+ The ampersand at the end of the line makes sure that Wget works in
+ the background. To unlimit the number of retries, use `-t inf'.
+
+ * Using FTP is just as simple. Wget will take care of the login and
+ password.
+
+ $ wget ftp://gnjilux.cc.fer.hr/welcome.msg
+ --10:08:47-- ftp://gnjilux.cc.fer.hr:21/welcome.msg
+ => `welcome.msg'
+ Connecting to gnjilux.cc.fer.hr:21... connected!
+ Logging in as anonymous ... Logged in!
+ ==> TYPE I ... done. ==> CWD not needed.
+ ==> PORT ... done. ==> RETR welcome.msg ... done.
+ Length: 1,340 (unauthoritative)
+
+ 0K -> . [100%]
+
+ 10:08:48 (1.28 MB/s) - `welcome.msg' saved [1340]
+
+ * If you specify a directory, Wget will retrieve the directory
+ listing, parse it and convert it to HTML. Try:
+
+ wget ftp://prep.ai.mit.edu/pub/gnu/
+ lynx index.html
+
+\1f
+File: wget.info, Node: Advanced Usage, Next: Guru Usage, Prev: Simple Usage, Up: Examples
+
+Advanced Usage
+==============
+
+ * You would like to read the list of URLs from a file? No problem:
+
+ wget -i file
+
+ If you specify `-' as file name, the URLs will be read from
+ standard input.
+
+ * Create a mirror image of GNU WWW site (with the same directory
+ structure the original has) with only one try per document, saving
+ the log of the activities to `gnulog':
+
+ wget -r -t1 http://www.gnu.ai.mit.edu/ -o gnulog
+
+ * Retrieve the first layer of yahoo links:
+
+ wget -r -l1 http://www.yahoo.com/
+
+ * Retrieve the index.html of `www.lycos.com', showing the original
+ server headers:
+
+ wget -S http://www.lycos.com/
+
+ * Save the server headers with the file:
+ wget -s http://www.lycos.com/
+ more index.html
+
+ * Retrieve the first two levels of `wuarchive.wustl.edu', saving them
+ to /tmp.
+
+ wget -P/tmp -l2 ftp://wuarchive.wustl.edu/
+
+ * You want to download all the GIFs from an HTTP directory. `wget
+ http://host/dir/*.gif' doesn't work, since HTTP retrieval does not
+ support globbing. In that case, use:
+
+ wget -r -l1 --no-parent -A.gif http://host/dir/
+
+ It is a bit of a kludge, but it works. `-r -l1' means to retrieve
+ recursively (*Note Recursive Retrieval::), with maximum depth of 1.
+ `--no-parent' means that references to the parent directory are
+ ignored (*Note Directory-Based Limits::), and `-A.gif' means to
+ download only the GIF files. `-A "*.gif"' would have worked too.
+
+ * Suppose you were in the middle of downloading, when Wget was
+ interrupted. Now you do not want to clobber the files already
+ present. It would be:
+
+ wget -nc -r http://www.gnu.ai.mit.edu/
+
+ * If you want to encode your own username and password to HTTP or
+ FTP, use the appropriate URL syntax (*Note URL Format::).
+
+ wget ftp://hniksic:mypassword@jagor.srce.hr/.emacs
+
+ * If you do not like the default retrieval visualization (1K dots
+ with 10 dots per cluster and 50 dots per line), you can customize
+ it through dot settings (*Note Wgetrc Commands::). For example,
+ many people like the "binary" style of retrieval, with 8K dots and
+ 512K lines:
+
+ wget --dot-style=binary ftp://prep.ai.mit.edu/pub/gnu/README
+
+ You can experiment with other styles, like:
+
+ wget --dot-style=mega ftp://ftp.xemacs.org/pub/xemacs/xemacs-20.4/xemacs-20.4.tar.gz
+ wget --dot-style=micro http://fly.cc.fer.hr/
+
+ To make these settings permanent, put them in your `.wgetrc', as
+ described before (*Note Sample Wgetrc::).
+
+\1f
+File: wget.info, Node: Guru Usage, Prev: Advanced Usage, Up: Examples
+
+Guru Usage
+==========
+
+ * If you wish Wget to keep a mirror of a page (or FTP
+ subdirectories), use `--mirror' (`-m'), which is the shorthand for
+ `-r -N'. You can put Wget in the crontab file asking it to
+ recheck a site each Sunday:
+
+ crontab
+ 0 0 * * 0 wget --mirror ftp://ftp.xemacs.org/pub/xemacs/ -o /home/me/weeklog
+
+ * You may wish to do the same with someone's home page. But you do
+ not want to download all those images--you're only interested in
+ HTML.
+
+ wget --mirror -A.html http://www.w3.org/
+
+ * But what about mirroring the hosts networkologically close to you?
+ It seems so awfully slow because of all that DNS resolving. Just
+ use `-D' (*Note Domain Acceptance::).
+
+ wget -rN -Dsrce.hr http://www.srce.hr/
+
+ Now Wget will correctly find out that `regoc.srce.hr' is the same
+ as `www.srce.hr', but will not even take into consideration the
+ link to `www.mit.edu'.
+
+ * You have a presentation and would like the dumb absolute links to
+ be converted to relative? Use `-k':
+
+ wget -k -r URL
+
+ * You would like the output documents to go to standard output
+ instead of to files? OK, but Wget will automatically shut up
+ (turn on `--quiet') to prevent mixing of Wget output and the
+ retrieved documents.
+
+ wget -O - http://jagor.srce.hr/ http://www.srce.hr/
+
+ You can also combine the two options and make weird pipelines to
+ retrieve the documents from remote hotlists:
+
+ wget -O - http://cool.list.com/ | wget --force-html -i -
+
+\1f
+File: wget.info, Node: Various, Next: Appendices, Prev: Examples, Up: Top
+
+Various
+*******
+
+ This chapter contains all the stuff that could not fit anywhere else.
+
+* Menu:
+
+* Proxies:: Support for proxy servers
+* Distribution:: Getting the latest version.
+* Mailing List:: Wget mailing list for announcements and discussion.
+* Reporting Bugs:: How and where to report bugs.
+* Portability:: The systems Wget works on.
+* Signals:: Signal-handling performed by Wget.
+
+\1f
+File: wget.info, Node: Proxies, Next: Distribution, Prev: Various, Up: Various
+
+Proxies
+=======
+
+ "Proxies" are special-purpose HTTP servers designed to transfer data
+from remote servers to local clients. One typical use of proxies is
+lightening network load for users behind a slow connection. This is
+achieved by channeling all HTTP and FTP requests through the proxy
+which caches the transferred data. When a cached resource is requested
+again, the proxy will return the data from its cache. Another use for proxies
+is for companies that separate (for security reasons) their internal
+networks from the rest of the Internet. In order to obtain information
+from the Web, their users connect and retrieve remote data using an
+authorized proxy.
+
+ Wget supports proxies for both HTTP and FTP retrievals. The
+standard way to specify proxy location, which Wget recognizes, is using
+the following environment variables:
+
+`http_proxy'
+ This variable should contain the URL of the proxy for HTTP
+ connections.
+
+`ftp_proxy'
+ This variable should contain the URL of the proxy for FTP
+ connections. It is quite common that HTTP_PROXY and FTP_PROXY are
+ set to the same URL.
+
+`no_proxy'
+ This variable should contain a comma-separated list of domain
+ extensions for which the proxy should *not* be used. For instance,
+ if the value of `no_proxy' is `.mit.edu', the proxy will not be used to
+ retrieve documents from MIT.
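
For example, the environment-variable route might look like this in
a shell startup file; the proxy host, port and domain below are
hypothetical:

```shell
# Hypothetical proxy settings; the same proxy serves HTTP and FTP.
http_proxy='http://proxy.company.com:8001/'
ftp_proxy="$http_proxy"
no_proxy='.company.com'     # never proxy requests to the local domain
export http_proxy ftp_proxy no_proxy

echo "$ftp_proxy"
```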
+
+ In addition to the environment variables, proxy location and settings
+may be specified from within Wget itself.
+
+`-Y on/off'
+`--proxy=on/off'
+`proxy = on/off'
+ This option may be used to turn the proxy support on or off. Proxy
+ support is on by default, provided that the appropriate environment
+ variables are set.
+
+`http_proxy = URL'
+`ftp_proxy = URL'
+`no_proxy = STRING'
+ These startup file variables allow you to override the proxy
+ settings specified by the environment.
+
+ Some proxy servers require authorization to enable you to use them.
+The authorization consists of "username" and "password", which must be
+sent by Wget. As with HTTP authorization, several authentication
+schemes exist. For proxy authorization only the `Basic' authentication
+scheme is currently implemented.
+
+ You may specify your username and password either through the proxy
+URL or through the command-line options. Assuming that the company's
+proxy is located at `proxy.srce.hr' at port 8001, a proxy URL location
+containing authorization data might look like this:
+
+ http://hniksic:mypassword@proxy.company.com:8001/
+
+ Alternatively, you may use the `--proxy-user' and `--proxy-passwd'
+options, and the equivalent `.wgetrc' settings `proxy_user' and
+`proxy_passwd' to set the proxy username and password.
+
+\1f
+File: wget.info, Node: Distribution, Next: Mailing List, Prev: Proxies, Up: Various
+
+Distribution
+============
+
+ Like all GNU utilities, the latest version of Wget can be found at
+the master GNU archive site prep.ai.mit.edu, and its mirrors. For
+example, Wget 1.5.3 can be found at
+`ftp://prep.ai.mit.edu/pub/gnu/wget-1.5.3.tar.gz'.
+
+\1f
+File: wget.info, Node: Mailing List, Next: Reporting Bugs, Prev: Distribution, Up: Various
+
+Mailing List
+============
+
+ Wget has its own mailing list at <wget@sunsite.auc.dk>, thanks to
+Karsten Thygesen. The mailing list is for discussion of Wget features
+and the Web, for reporting Wget bugs (those that you think may be of
+interest to the public) and for mailing announcements. You are welcome
+to subscribe. The more people on the list, the better!
+
+ To subscribe, send mail to <wget-subscribe@sunsite.auc.dk> with the
+magic word `subscribe' in the subject line. Unsubscribe by mailing to
+<wget-unsubscribe@sunsite.auc.dk>.
+
+ The mailing list is archived at `http://fly.cc.fer.hr/archive/wget'.
+
+\1f
+File: wget.info, Node: Reporting Bugs, Next: Portability, Prev: Mailing List, Up: Various
+
+Reporting Bugs
+==============
+
+ You are welcome to send bug reports about GNU Wget to
+<bug-wget@gnu.org>. The bugs that you think are of interest to the
+public (i.e. more people should be informed about them) can be Cc-ed to
+the mailing list at <wget@sunsite.auc.dk>.
+
+ Before actually submitting a bug report, please try to follow a few
+simple guidelines.
+
+ 1. Please try to ascertain that the behaviour you see really is a
+ bug. If Wget crashes, it's a bug. If Wget does not behave as
+     documented, it's a bug. If things work strangely, but you are not
+ sure about the way they are supposed to work, it might well be a
+ bug.
+
+ 2. Try to repeat the bug in as simple circumstances as possible.
+ E.g. if Wget crashes on `wget -rLl0 -t5 -Y0 http://yoyodyne.com -o
+ /tmp/log', you should try to see if it will crash with a simpler
+ set of options.
+
+ Also, while I will probably be interested to know the contents of
+ your `.wgetrc' file, just dumping it into the debug message is
+ probably a bad idea. Instead, you should first try to see if the
+ bug repeats with `.wgetrc' moved out of the way. Only if it turns
+ out that `.wgetrc' settings affect the bug, should you mail me the
+ relevant parts of the file.
+
+  3. Please start Wget with the `-d' option and send the log (or the
+ relevant parts of it). If Wget was compiled without debug support,
+ recompile it. It is *much* easier to trace bugs with debug support
+ on.
+
+ 4. If Wget has crashed, try to run it in a debugger, e.g. `gdb `which
+ wget` core' and type `where' to get the backtrace.
+
+ 5. Find where the bug is, fix it and send me the patches. :-)
+
+\1f
+File: wget.info, Node: Portability, Next: Signals, Prev: Reporting Bugs, Up: Various
+
+Portability
+===========
+
+ Since Wget uses GNU Autoconf for building and configuring, and avoids
+using "special" ultra-mega-cool features of any particular Unix, it
+should compile (and work) on all common Unix flavors.
+
+ Various Wget versions have been compiled and tested under many kinds
+of Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital
+Unix), Ultrix, *BSD, IRIX, and others; refer to the file `MACHINES' in
+the distribution directory for a comprehensive list. If you compile it
+on an architecture not listed there, please let me know so I can update
+it.
+
+   Wget should also compile on other Unix systems not listed in
+`MACHINES'. If it doesn't, please let me know.
+
+ Thanks to kind contributors, this version of Wget compiles and works
+on Microsoft Windows 95 and Windows NT platforms. It has been compiled
+successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
+with Winsock as networking software. Naturally, it lacks some of the
+features available on Unix, but it should work as a substitute for
+people stuck with Windows. Note that the Windows port is *neither
+tested nor maintained* by me--all questions and problems should be
+reported to the Wget mailing list at <wget@sunsite.auc.dk> where the
+maintainers will look at them.
+
+\1f
+File: wget.info, Node: Signals, Prev: Portability, Up: Various
+
+Signals
+=======
+
+   Since the purpose of Wget is background work, it catches the
+hangup signal (`SIGHUP') and handles it specially. If the output was
+on standard output, it will be redirected to a file named `wget-log'.
+Otherwise, `SIGHUP' is ignored. This is convenient when you wish to
+redirect the output of Wget after having started it.
+
+ $ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &
+ $ kill -HUP %% # Redirect the output to wget-log
+
+ Other than that, Wget will not try to interfere with signals in any
+way. `C-c', `kill -TERM' and `kill -KILL' should kill it alike.
+
+\1f
+File: wget.info, Node: Appendices, Next: Copying, Prev: Various, Up: Top
+
+Appendices
+**********
+
+ This chapter contains some references I consider useful, like the
+Robots Exclusion Standard specification, as well as a list of
+contributors to GNU Wget.
+
+* Menu:
+
+* Robots:: Wget as a WWW robot.
+* Security Considerations:: Security with Wget.
+* Contributors:: People who helped.
+
+\1f
+File: wget.info, Node: Robots, Next: Security Considerations, Prev: Appendices, Up: Appendices
+
+Robots
+======
+
+ Since Wget is able to traverse the web, it counts as one of the Web
+"robots". Thus Wget understands the "Robots Exclusion Standard"
+(RES)--the contents of `/robots.txt', used by server administrators to
+shield parts of their systems from the wanderings of Wget.
+
+ Norobots support is turned on only when retrieving recursively, and
+*never* for the first page. Thus, you may issue:
+
+ wget -r http://fly.cc.fer.hr/
+
+ First the index of fly.cc.fer.hr will be downloaded. If Wget finds
+anything worth downloading on the same host, only *then* will it load
+the robots file, and decide whether or not to load the links after all.
+`/robots.txt' is loaded only once per host. Wget does not support the
+robots `META' tag.
+
+   The description of the norobots standard was written and is
+maintained by Martijn Koster <m.koster@webcrawler.com>. With his
+permission, I contribute a (slightly modified) texified version of the
+RES.
+
+* Menu:
+
+* Introduction to RES::
+* RES Format::
+* User-Agent Field::
+* Disallow Field::
+* Norobots Examples::
+
+\1f
+File: wget.info, Node: Introduction to RES, Next: RES Format, Prev: Robots, Up: Robots
+
+Introduction to RES
+-------------------
+
+ "WWW Robots" (also called "wanderers" or "spiders") are programs
+that traverse many pages in the World Wide Web by recursively
+retrieving linked pages. For more information see the robots page.
+
+   In 1993 and 1994 there were occasions when robots visited WWW
+servers where they weren't welcome for various reasons. Sometimes
+these reasons were robot specific, e.g. certain robots swamped servers
+with rapid-fire requests, or retrieved the same files repeatedly. In
+other situations robots traversed parts of WWW servers that weren't
+suitable, e.g. very deep virtual trees, duplicated information,
+temporary information, or cgi-scripts with side-effects (such as
+voting).
+
+ These incidents indicated the need for established mechanisms for
+WWW servers to indicate to robots which parts of their server should
+not be accessed. This standard addresses this need with an operational
+solution.
+
+ This document represents a consensus on 30 June 1994 on the robots
+mailing list (`robots@webcrawler.com'), between the majority of robot
+authors and other people with an interest in robots. It has also been
+open for discussion on the Technical World Wide Web mailing list
+(`www-talk@info.cern.ch'). This document is based on a previous working
+draft under the same title.
+
+ It is not an official standard backed by a standards body, or owned
+by any commercial organization. It is not enforced by anybody, and
+there is no guarantee that all current and future robots will use it.
+Consider it a common facility the majority of robot authors offer the
+WWW community to protect WWW servers against unwanted accesses by
+their robots.
+
+ The latest version of this document can be found at
+`http://info.webcrawler.com/mak/projects/robots/norobots.html'.
+
+\1f
+File: wget.info, Node: RES Format, Next: User-Agent Field, Prev: Introduction to RES, Up: Robots
+
+RES Format
+----------
+
+ The format and semantics of the `/robots.txt' file are as follows:
+
+ The file consists of one or more records separated by one or more
+blank lines (terminated by `CR', `CR/NL', or `NL'). Each record
+contains lines of the form:
+
+ <field>:<optionalspace><value><optionalspace>
+
+ The field name is case insensitive.
+
+   Comments can be included in the file using UNIX Bourne shell
+conventions:
+the `#' character is used to indicate that preceding space (if any) and
+the remainder of the line up to the line termination is discarded.
+Lines containing only a comment are discarded completely, and therefore
+do not indicate a record boundary.
+
+ The record starts with one or more User-agent lines, followed by one
+or more Disallow lines, as detailed below. Unrecognized headers are
+ignored.
+
+ The presence of an empty `/robots.txt' file has no explicit
+associated semantics; it will be treated as if it were not present,
+i.e. all robots will consider themselves welcome.
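+
+   The record structure described above is easy to parse. Here is a
+rough sketch in Python (not the parser Wget itself uses):
+
```python
def parse_robots(text):
    """Split a /robots.txt body into records of (field, value) pairs.

    Field names are lower-cased, since they are case insensitive.
    Comment-only lines are discarded and do not end a record; blank
    lines do.
    """
    records, current = [], []
    for raw in text.splitlines():
        # Strip a `#' comment and surrounding whitespace.
        line = raw.split('#', 1)[0].strip()
        if not raw.strip():
            # A truly blank line terminates the current record.
            if current:
                records.append(current)
                current = []
        elif line:
            # <field>:<optionalspace><value><optionalspace>
            field, _, value = line.partition(':')
            current.append((field.strip().lower(), value.strip()))
        # else: a comment-only line -- ignored, no record boundary.
    if current:
        records.append(current)
    return records
```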
+
+\1f
+File: wget.info, Node: User-Agent Field, Next: Disallow Field, Prev: RES Format, Up: Robots
+
+User-Agent Field
+----------------
+
+ The value of this field is the name of the robot the record is
+describing access policy for.
+
+   If more than one User-agent field is present, the record describes
+an identical access policy for more than one robot. At least one field
+needs to be present per record.
+
+ The robot should be liberal in interpreting this field. A case
+insensitive substring match of the name without version information is
+recommended.
+
+ If the value is `*', the record describes the default access policy
+for any robot that has not matched any of the other records. It is not
+allowed to have multiple such records in the `/robots.txt' file.
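+
+   The matching rule above might be sketched like this (a guess at a
+reasonable implementation, not Wget's own code):
+
```python
def record_applies(record, robot_name):
    """Does a parsed record (a list of (field, value) pairs) apply to
    the named robot?  Uses the recommended case-insensitive substring
    match; a value of `*' marks the default record."""
    for field, value in record:
        if field == "user-agent":
            if value == "*" or value.lower() in robot_name.lower():
                return True
    return False
```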
+
+\1f
+File: wget.info, Node: Disallow Field, Next: Norobots Examples, Prev: User-Agent Field, Up: Robots
+
+Disallow Field
+--------------
+
+ The value of this field specifies a partial URL that is not to be
+visited. This can be a full path, or a partial path; any URL that
+starts with this value will not be retrieved. For example,
+`Disallow: /help' disallows both `/help.html' and `/help/index.html',
+whereas `Disallow: /help/' would disallow `/help/index.html' but allow
+`/help.html'.
+
+   An empty value indicates that all URLs can be retrieved. At least
+one Disallow field needs to be present in a record.
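+
+   In code, the check amounts to a simple prefix comparison; for
+instance (again, only a sketch):
+
```python
def is_allowed(path, disallow_values):
    """Check a URL path against the Disallow values of one record.
    An empty value permits everything; otherwise any path that starts
    with a listed value must not be retrieved."""
    for value in disallow_values:
        if value and path.startswith(value):
            return False
    return True
```
+
+   With the example above, `is_allowed("/help.html", ["/help"])' is
+false, while `is_allowed("/help.html", ["/help/"])' is true.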
+
+\1f
+File: wget.info, Node: Norobots Examples, Prev: Disallow Field, Up: Robots
+
+Norobots Examples
+-----------------
+
+ The following example `/robots.txt' file specifies that no robots
+should visit any URL starting with `/cyberworld/map/' or `/tmp/':
+
+ # robots.txt for http://www.site.com/
+
+ User-agent: *
+ Disallow: /cyberworld/map/ # This is an infinite virtual URL space
+ Disallow: /tmp/ # these will soon disappear
+
+ This example `/robots.txt' file specifies that no robots should
+visit any URL starting with `/cyberworld/map/', except the robot called
+`cybermapper':
+
+ # robots.txt for http://www.site.com/
+
+ User-agent: *
+ Disallow: /cyberworld/map/ # This is an infinite virtual URL space
+
+ # Cybermapper knows where to go.
+ User-agent: cybermapper
+ Disallow:
+
+ This example indicates that no robots should visit this site further:
+
+ # go away
+ User-agent: *
+ Disallow: /
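+
+   As it happens, Python's standard library implements this very
+standard, so the second example above can be checked mechanically
+(this is unrelated to Wget's own implementation):
+
```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "# robots.txt for http://www.site.com/",
    "",
    "User-agent: *",
    "Disallow: /cyberworld/map/",
    "",
    "# Cybermapper knows where to go.",
    "User-agent: cybermapper",
    "Disallow:",
])

# Everyone else is kept out of /cyberworld/map/ ...
wget_ok = rp.can_fetch("wget", "http://www.site.com/cyberworld/map/x.html")
# ... but cybermapper may go anywhere.
cyber_ok = rp.can_fetch("cybermapper", "http://www.site.com/cyberworld/map/x.html")
```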
+
+\1f
+File: wget.info, Node: Security Considerations, Next: Contributors, Prev: Robots, Up: Appendices
+
+Security Considerations
+=======================
+
+ When using Wget, you must be aware that it sends unencrypted
+passwords through the network, which may present a security problem.
+Here are the main issues, and some solutions.
+
+ 1. The passwords on the command line are visible using `ps'. If this
+     is a problem, avoid putting passwords on the command line--e.g.
+ you can use `.netrc' for this.
+
+  2. With the insecure "basic" authentication scheme, unencrypted
+     passwords are transmitted through network routers and gateways.
+
+ 3. The FTP passwords are also in no way encrypted. There is no good
+ solution for this at the moment.
+
+ 4. Although the "normal" output of Wget tries to hide the passwords,
+ debugging logs show them, in all forms. This problem is avoided by
+ being careful when you send debug logs (yes, even when you send
+ them to me).
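+
+   For the first point, a `.netrc' entry keeps the password out of
+the process listing. A hypothetical entry looks like this (and the
+file should be readable by you alone, e.g. `chmod 600 ~/.netrc'):
+
```
machine ftp.example.com
  login myname
  password mypassword
```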
+
+\1f
+File: wget.info, Node: Contributors, Prev: Security Considerations, Up: Appendices
+
+Contributors
+============
+
+ GNU Wget was written by Hrvoje Niksic <hniksic@srce.hr>. However,
+its development could never have gone as far as it has, were it not for
+the help of many people, either with bug reports, feature proposals,
+patches, or letters saying "Thanks!".
+
+   Special thanks go to the following people (in no particular order):
+
+ * Karsten Thygesen--donated the mailing list and the initial FTP
+ space.
+
+ * Shawn McHorse--bug reports and patches.
+
+ * Kaveh R. Ghazi--on-the-fly `ansi2knr'-ization.
+
+ * Gordon Matzigkeit--`.netrc' support.
+
+ * Zlatko Calusic, Tomislav Vujec and Drazen Kacar--feature
+ suggestions and "philosophical" discussions.
+
+ * Darko Budor--initial port to Windows.
+
+   * Antonio Rosella--help and suggestions, plus the Italian
+ translation.
+
+ * Tomislav Petrovic, Mario Mikocevic--many bug reports and
+ suggestions.
+
+ * Francois Pinard--many thorough bug reports and discussions.
+
+ * Karl Eichwalder--lots of help with internationalization and other
+ things.
+
+ * Junio Hamano--donated support for Opie and HTTP `Digest'
+ authentication.
+
+ * Brian Gough--a generous donation.
+
+ The following people have provided patches, bug/build reports, useful
+suggestions, beta testing services, fan mail and all the other things
+that make maintenance so much fun:
+
+ Tim Adam, Martin Baehr, Dieter Baron, Roger Beeman and the Gurus at
+Cisco, Mark Boyns, John Burden, Wanderlei Cavassin, Gilles Cedoc, Tim
+Charron, Noel Cragg, Kristijan Conkas, Damir Dzeko, Andrew Davison,
+Ulrich Drepper, Marc Duponcheel, Aleksandar Erkalovic, Andy Eskilsson,
+Masashi Fujita, Howard Gayle, Marcel Gerrits, Hans Grobler, Mathieu
+Guillaume, Karl Heuer, Gregor Hoffleit, Erik Magnus Hulthen, Richard
+Huveneers, Simon Josefsson, Mario Juric, Goran Kezunovic, Robert Kleine,
+Fila Kolodny, Alexander Kourakos, Martin Kraemer, Simos KSenitellis,
+Tage Stabell-Kulo, Hrvoje Lacko, Dave Love, Jordan Mendelson, Lin Zhe
+Min, Charlie Negyesi, Andrew Pollock, Steve Pothier, Marin Purgar, Jan
+Prikryl, Keith Refson, Tobias Ringstrom, Juan Jose Rodrigues, Heinz
+Salzmann, Robert Schmidt, Toomas Soome, Sven Sternberger, Markus
+Strasser, Szakacsits Szabolcs, Mike Thomas, Russell Vincent, Douglas E.
+Wegscheid, Jasmin Zainul, Bojan Zdrnja, Kristijan Zimmer.
+
+   Apologies to all whom I accidentally left out, and many thanks to
+all the subscribers of the Wget mailing list.
+
--- /dev/null
+This is Info file wget.info, produced by Makeinfo version 1.67 from the
+input file ./wget.texi.
+
+INFO-DIR-SECTION Net Utilities
+INFO-DIR-SECTION World Wide Web
+START-INFO-DIR-ENTRY
+* Wget: (wget). The non-interactive network downloader.
+END-INFO-DIR-ENTRY
+
+   This file documents the GNU Wget utility for downloading network
+data.
+
+ Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.
+
+ Permission is granted to make and distribute verbatim copies of this
+manual provided the copyright notice and this permission notice are
+preserved on all copies.
+
+ Permission is granted to copy and distribute modified versions of
+this manual under the conditions for verbatim copying, provided also
+that the sections entitled "Copying" and "GNU General Public License"
+are included exactly as in the original, and provided that the entire
+resulting derived work is distributed under the terms of a permission
+notice identical to this one.
+
+\1f
+File: wget.info, Node: Copying, Next: Concept Index, Prev: Appendices, Up: Top
+
+GNU GENERAL PUBLIC LICENSE
+**************************
+
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+ 675 Mass Ave, Cambridge, MA 02139, USA
+
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+Preamble
+========
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it in
+new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software,
+and (2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 1. This License applies to any program or other work which contains a
+ notice placed by the copyright holder saying it may be distributed
+ under the terms of this General Public License. The "Program",
+ below, refers to any such program or work, and a "work based on
+ the Program" means either the Program or any derivative work under
+ copyright law: that is to say, a work containing the Program or a
+ portion of it, either verbatim or with modifications and/or
+ translated into another language. (Hereinafter, translation is
+ included without limitation in the term "modification".) Each
+ licensee is addressed as "you".
+
+ Activities other than copying, distribution and modification are
+ not covered by this License; they are outside its scope. The act
+ of running the Program is not restricted, and the output from the
+ Program is covered only if its contents constitute a work based on
+ the Program (independent of having been made by running the
+ Program). Whether that is true depends on what the Program does.
+
+ 2. You may copy and distribute verbatim copies of the Program's
+ source code as you receive it, in any medium, provided that you
+ conspicuously and appropriately publish on each copy an appropriate
+ copyright notice and disclaimer of warranty; keep intact all the
+ notices that refer to this License and to the absence of any
+ warranty; and give any other recipients of the Program a copy of
+ this License along with the Program.
+
+ You may charge a fee for the physical act of transferring a copy,
+ and you may at your option offer warranty protection in exchange
+ for a fee.
+
+ 3. You may modify your copy or copies of the Program or any portion
+ of it, thus forming a work based on the Program, and copy and
+ distribute such modifications or work under the terms of Section 1
+ above, provided that you also meet all of these conditions:
+
+ a. You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b. You must cause any work that you distribute or publish, that
+ in whole or in part contains or is derived from the Program
+ or any part thereof, to be licensed as a whole at no charge
+ to all third parties under the terms of this License.
+
+ c. If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display
+ an announcement including an appropriate copyright notice and
+ a notice that there is no warranty (or else, saying that you
+ provide a warranty) and that users may redistribute the
+ program under these conditions, and telling the user how to
+ view a copy of this License. (Exception: if the Program
+ itself is interactive but does not normally print such an
+ announcement, your work based on the Program is not required
+ to print an announcement.)
+
+ These requirements apply to the modified work as a whole. If
+ identifiable sections of that work are not derived from the
+ Program, and can be reasonably considered independent and separate
+ works in themselves, then this License, and its terms, do not
+ apply to those sections when you distribute them as separate
+ works. But when you distribute the same sections as part of a
+ whole which is a work based on the Program, the distribution of
+ the whole must be on the terms of this License, whose permissions
+ for other licensees extend to the entire whole, and thus to each
+ and every part regardless of who wrote it.
+
+ Thus, it is not the intent of this section to claim rights or
+ contest your rights to work written entirely by you; rather, the
+ intent is to exercise the right to control the distribution of
+ derivative or collective works based on the Program.
+
+ In addition, mere aggregation of another work not based on the
+ Program with the Program (or with a work based on the Program) on
+ a volume of a storage or distribution medium does not bring the
+ other work under the scope of this License.
+
+ 4. You may copy and distribute the Program (or a work based on it,
+ under Section 2) in object code or executable form under the terms
+ of Sections 1 and 2 above provided that you also do one of the
+ following:
+
+ a. Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of
+ Sections 1 and 2 above on a medium customarily used for
+ software interchange; or,
+
+ b. Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a
+ medium customarily used for software interchange; or,
+
+ c. Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with
+ such an offer, in accord with Subsection b above.)
+
+ The source code for a work means the preferred form of the work for
+ making modifications to it. For an executable work, complete
+ source code means all the source code for all modules it contains,
+ plus any associated interface definition files, plus the scripts
+ used to control compilation and installation of the executable.
+ However, as a special exception, the source code distributed need
+ not include anything that is normally distributed (in either
+ source or binary form) with the major components (compiler,
+ kernel, and so on) of the operating system on which the executable
+ runs, unless that component itself accompanies the executable.
+
+ If distribution of executable or object code is made by offering
+ access to copy from a designated place, then offering equivalent
+ access to copy the source code from the same place counts as
+ distribution of the source code, even though third parties are not
+ compelled to copy the source along with the object code.
+
+ 5. You may not copy, modify, sublicense, or distribute the Program
+ except as expressly provided under this License. Any attempt
+ otherwise to copy, modify, sublicense or distribute the Program is
+ void, and will automatically terminate your rights under this
+ License. However, parties who have received copies, or rights,
+ from you under this License will not have their licenses
+ terminated so long as such parties remain in full compliance.
+
+ 6. You are not required to accept this License, since you have not
+ signed it. However, nothing else grants you permission to modify
+ or distribute the Program or its derivative works. These actions
+ are prohibited by law if you do not accept this License.
+ Therefore, by modifying or distributing the Program (or any work
+ based on the Program), you indicate your acceptance of this
+ License to do so, and all its terms and conditions for copying,
+ distributing or modifying the Program or works based on it.
+
+ 7. Each time you redistribute the Program (or any work based on the
+ Program), the recipient automatically receives a license from the
+ original licensor to copy, distribute or modify the Program
+ subject to these terms and conditions. You may not impose any
+ further restrictions on the recipients' exercise of the rights
+ granted herein. You are not responsible for enforcing compliance
+ by third parties to this License.
+
+ 8. If, as a consequence of a court judgment or allegation of patent
+ infringement or for any other reason (not limited to patent
+ issues), conditions are imposed on you (whether by court order,
+ agreement or otherwise) that contradict the conditions of this
+ License, they do not excuse you from the conditions of this
+ License. If you cannot distribute so as to satisfy simultaneously
+ your obligations under this License and any other pertinent
+ obligations, then as a consequence you may not distribute the
+ Program at all. For example, if a patent license would not permit
+ royalty-free redistribution of the Program by all those who
+ receive copies directly or indirectly through you, then the only
+ way you could satisfy both it and this License would be to refrain
+ entirely from distribution of the Program.
+
+ If any portion of this section is held invalid or unenforceable
+ under any particular circumstance, the balance of the section is
+ intended to apply and the section as a whole is intended to apply
+ in other circumstances.
+
+ It is not the purpose of this section to induce you to infringe any
+ patents or other property right claims or to contest validity of
+ any such claims; this section has the sole purpose of protecting
+ the integrity of the free software distribution system, which is
+ implemented by public license practices. Many people have made
+ generous contributions to the wide range of software distributed
+ through that system in reliance on consistent application of that
+ system; it is up to the author/donor to decide if he or she is
+ willing to distribute software through any other system and a
+ licensee cannot impose that choice.
+
+ This section is intended to make thoroughly clear what is believed
+ to be a consequence of the rest of this License.
+
+ 9. If the distribution and/or use of the Program is restricted in
+ certain countries either by patents or by copyrighted interfaces,
+ the original copyright holder who places the Program under this
+ License may add an explicit geographical distribution limitation
+ excluding those countries, so that distribution is permitted only
+ in or among countries not thus excluded. In such case, this
+ License incorporates the limitation as if written in the body of
+ this License.
+
+ 10. The Free Software Foundation may publish revised and/or new
+ versions of the General Public License from time to time. Such
+ new versions will be similar in spirit to the present version, but
+ may differ in detail to address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+ Program specifies a version number of this License which applies
+ to it and "any later version", you have the option of following
+ the terms and conditions either of that version or of any later
+ version published by the Free Software Foundation. If the Program
+ does not specify a version number of this License, you may choose
+ any version ever published by the Free Software Foundation.
+
+ 11. If you wish to incorporate parts of the Program into other free
+ programs whose distribution conditions are different, write to the
+ author to ask for permission. For software which is copyrighted
+ by the Free Software Foundation, write to the Free Software
+ Foundation; we sometimes make exceptions for this. Our decision
+ will be guided by the two goals of preserving the free status of
+ all derivatives of our free software and of promoting the sharing
+ and reuse of software generally.
+
+ NO WARRANTY
+
+ 12. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
+ WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE
+ LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
+ WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT
+ NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
+ FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE
+ QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+ PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY
+ SERVICING, REPAIR OR CORRECTION.
+
+ 13. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+ WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY
+ MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE
+ LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
+ INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR
+ INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU
+ OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY
+ OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN
+ ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+How to Apply These Terms to Your New Programs
+=============================================
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these
+terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ ONE LINE TO GIVE THE PROGRAM'S NAME AND AN IDEA OF WHAT IT DOES.
+ Copyright (C) 19YY NAME OF AUTHOR
+
+ This program is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License
+ as published by the Free Software Foundation; either version 2
+ of the License, or (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ Also add information on how to contact you by electronic and paper
+mail.
+
+ If the program is interactive, make it output a short notice like
+this when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) 19YY NAME OF AUTHOR
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
+ type `show w'. This is free software, and you are welcome
+ to redistribute it under certain conditions; type `show c'
+ for details.
+
+ The hypothetical commands `show w' and `show c' should show the
+appropriate parts of the General Public License. Of course, the
+commands you use may be called something other than `show w' and `show
+c'; they could even be mouse-clicks or menu items--whatever suits your
+program.
+
+ You should also get your employer (if you work as a programmer) or
+your school, if any, to sign a "copyright disclaimer" for the program,
+if necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright
+ interest in the program `Gnomovision'
+ (which makes passes at compilers) written
+ by James Hacker.
+
+ SIGNATURE OF TY COON, 1 April 1989
+ Ty Coon, President of Vice
+
+ This General Public License does not permit incorporating your
+program into proprietary programs. If your program is a subroutine
+library, you may consider it more useful to permit linking proprietary
+applications with the library. If this is what you want to do, use the
+GNU Library General Public License instead of this License.
+
+\1f
+File: wget.info, Node: Concept Index, Prev: Copying, Up: Top
+
+Concept Index
+*************
+
+* Menu:
+
+* .netrc: Startup File.
+* .wgetrc: Startup File.
+* accept directories: Directory-Based Limits.
+* accept suffixes: Types of Files.
+* accept wildcards: Types of Files.
+* all hosts: All Hosts.
+* append to log: Logging and Input File Options.
+* arguments: Invoking.
+* authentication: HTTP Options.
+* bug reports: Reporting Bugs.
+* bugs: Reporting Bugs.
+* cache: HTTP Options.
+* command line: Invoking.
+* Content-Length, ignore: HTTP Options.
+* continue retrieval: Download Options.
+* contributors: Contributors.
+* conversion of links: Recursive Retrieval Options.
+* copying: Copying.
+* cut directories: Directory Options.
+* debug: Logging and Input File Options.
+* delete after retrieval: Recursive Retrieval Options.
+* directories: Directory-Based Limits.
+* directories, exclude: Directory-Based Limits.
+* directories, include: Directory-Based Limits.
+* directory limits: Directory-Based Limits.
+* directory prefix: Directory Options.
+* DNS lookup: Host Checking.
+* dot style: Download Options.
+* examples: Examples.
+* exclude directories: Directory-Based Limits.
+* execute wgetrc command: Basic Startup Options.
+* features: Overview.
+* filling proxy cache: Recursive Retrieval Options.
+* follow FTP links: Recursive Accept/Reject Options.
+* following ftp links: FTP Links.
+* following links: Following Links.
+* force html: Logging and Input File Options.
+* ftp time-stamping: FTP Time-Stamping Internals.
+* globbing, toggle: FTP Options.
+* GPL: Copying.
+* hangup: Signals.
+* header, add: HTTP Options.
+* host checking: Host Checking.
+* host lookup: Host Checking.
+* http password: HTTP Options.
+* http time-stamping: HTTP Time-Stamping Internals.
+* http user: HTTP Options.
+* ignore length: HTTP Options.
+* include directories: Directory-Based Limits.
+* incremental updating: Time-Stamping.
+* input-file: Logging and Input File Options.
+* invoking: Invoking.
+* latest version: Distribution.
+* links: Following Links.
+* links conversion: Recursive Retrieval Options.
+* list: Mailing List.
+* location of wgetrc: Wgetrc Location.
+* log file: Logging and Input File Options.
+* mailing list: Mailing List.
+* mirroring: Guru Usage.
+* no parent: Directory-Based Limits.
+* no warranty: Copying.
+* no-clobber: Download Options.
+* nohup: Invoking.
+* norobots disallow: Disallow Field.
+* norobots examples: Norobots Examples.
+* norobots format: RES Format.
+* norobots introduction: Introduction to RES.
+* norobots user-agent: User-Agent Field.
+* number of retries: Download Options.
+* operating systems: Portability.
+* option syntax: Option Syntax.
+* output file: Logging and Input File Options.
+* overview: Overview.
+* passive ftp: FTP Options.
+* pause: Download Options.
+* portability: Portability.
+* proxies: Proxies.
+* proxy <1>: Download Options.
+* proxy: HTTP Options.
+* proxy authentication: HTTP Options.
+* proxy filling: Recursive Retrieval Options.
+* proxy password: HTTP Options.
+* proxy user: HTTP Options.
+* quiet: Logging and Input File Options.
+* quota: Download Options.
+* recursion: Recursive Retrieval.
+* recursive retrieval: Recursive Retrieval.
+* redirecting output: Guru Usage.
+* reject directories: Directory-Based Limits.
+* reject suffixes: Types of Files.
+* reject wildcards: Types of Files.
+* relative links: Relative Links.
+* reporting bugs: Reporting Bugs.
+* retries: Download Options.
+* retrieval tracing style: Download Options.
+* retrieve symbolic links: FTP Options.
+* retrieving: Recursive Retrieval.
+* robots: Robots.
+* robots.txt: Robots.
+* sample wgetrc: Sample Wgetrc.
+* security: Security Considerations.
+* server maintenance: Robots.
+* server response, print: Download Options.
+* server response, save: HTTP Options.
+* signal handling: Signals.
+* span hosts: All Hosts.
+* spider: Download Options.
+* startup: Startup File.
+* startup file: Startup File.
+* suffixes, accept: Types of Files.
+* suffixes, reject: Types of Files.
+* syntax of options: Option Syntax.
+* syntax of wgetrc: Wgetrc Syntax.
+* time-stamping: Time-Stamping.
+* time-stamping usage: Time-Stamping Usage.
+* timeout: Download Options.
+* timestamping: Time-Stamping.
+* tries: Download Options.
+* types of files: Types of Files.
+* updating the archives: Time-Stamping.
+* URL: URL Format.
+* URL syntax: URL Format.
+* usage, time-stamping: Time-Stamping Usage.
+* user-agent: HTTP Options.
+* various: Various.
+* verbose: Logging and Input File Options.
+* wait: Download Options.
+* Wget as spider: Download Options.
+* wgetrc: Startup File.
+* wgetrc commands: Wgetrc Commands.
+* wgetrc location: Wgetrc Location.
+* wgetrc syntax: Wgetrc Syntax.
+* wildcards, accept: Types of Files.
+* wildcards, reject: Types of Files.
+
+
--- /dev/null
+\input texinfo @c -*-texinfo-*-
+
+@c %**start of header
+@setfilename wget.info
+@settitle GNU Wget Manual
+@c Disable the monstrous rectangles beside overfull hbox-es.
+@finalout
+@c Use `odd' to print double-sided.
+@setchapternewpage on
+@c %**end of header
+
+@iftex
+@c Remove this if you don't use A4 paper.
+@afourpaper
+@end iftex
+
+@set VERSION 1.5.3
+@set UPDATED Sep 1998
+
+@dircategory Net Utilities
+@dircategory World Wide Web
+@direntry
+* Wget: (wget). The non-interactive network downloader.
+@end direntry
+
+@ifinfo
+This file documents the GNU Wget utility for downloading network
+data.
+
+Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.
+
+Permission is granted to make and distribute verbatim copies of
+this manual provided the copyright notice and this permission notice
+are preserved on all copies.
+
+@ignore
+Permission is granted to process this file through TeX and print the
+results, provided the printed document carries a copying permission
+notice identical to this one except for the removal of this paragraph
+(this paragraph not being relevant to the printed manual).
+@end ignore
+Permission is granted to copy and distribute modified versions of this
+manual under the conditions for verbatim copying, provided also that the
+sections entitled ``Copying'' and ``GNU General Public License'' are
+included exactly as in the original, and provided that the entire
+resulting derived work is distributed under the terms of a permission
+notice identical to this one.
+@end ifinfo
+
+@titlepage
+@title GNU Wget
+@subtitle The noninteractive downloading utility
+@subtitle Updated for Wget @value{VERSION}, @value{UPDATED}
+@author by Hrvoje Nik@v{s}i@'{c}
+
+@page
+@vskip 0pt plus 1filll
+Copyright @copyright{} 1996, 1997, 1998 Free Software Foundation, Inc.
+
+Permission is granted to make and distribute verbatim copies of this
+manual provided the copyright notice and this permission notice are
+preserved on all copies.
+
+Permission is granted to copy and distribute modified versions of this
+manual under the conditions for verbatim copying, provided also that the
+sections entitled ``Copying'' and ``GNU General Public License'' are
+included exactly as in the original, and provided that the entire
+resulting derived work is distributed under the terms of a permission
+notice identical to this one.
+
+Permission is granted to copy and distribute translations of this manual
+into another language, under the above conditions for modified versions,
+except that this permission notice may be stated in a translation
+approved by the Free Software Foundation.
+@end titlepage
+
+@ifinfo
+@node Top, Overview, (dir), (dir)
+@top Wget @value{VERSION}
+
+This manual documents version @value{VERSION} of GNU Wget, the freely
+available utility for downloading files from the network.
+
+Copyright @copyright{} 1996, 1997, 1998 Free Software Foundation, Inc.
+
+@menu
+* Overview:: Features of Wget.
+* Invoking:: Wget command-line arguments.
+* Recursive Retrieval:: Description of recursive retrieval.
+* Following Links:: The available methods of chasing links.
+* Time-Stamping:: Mirroring according to time-stamps.
+* Startup File:: Wget's initialization file.
+* Examples:: Examples of usage.
+* Various:: The stuff that doesn't fit anywhere else.
+* Appendices:: Some useful references.
+* Copying:: You may give out copies of Wget.
+* Concept Index:: Topics covered by this manual.
+@end menu
+@end ifinfo
+
+@node Overview, Invoking, Top, Top
+@chapter Overview
+@cindex overview
+@cindex features
+
+GNU Wget is a freely available network utility to retrieve files from
+the World Wide Web, using @sc{http} (Hypertext Transfer Protocol) and
+@sc{ftp} (File Transfer Protocol), the two most widely used Internet
+protocols. It has many useful features to make downloading easier, some
+of them being:
+
+@itemize @bullet
+@item
+Wget is non-interactive, meaning that it can work in the background,
+while the user is not logged on. This allows you to start a retrieval
+and disconnect from the system, letting Wget finish the work. By
+contrast, most Web browsers require the user's constant presence,
+which can be a great hindrance when transferring a lot of data.
+
+@sp 1
+@item
+Wget is capable of descending recursively through the structure of
+@sc{html} documents and @sc{ftp} directory trees, making a local copy of
+the directory hierarchy similar to the one on the remote server. This
+feature can be used to mirror archives and home pages, or traverse the
+web in search of data, like a @sc{www} robot (@xref{Robots}). In that
+spirit, Wget understands the @code{norobots} convention.
+
+@sp 1
+@item
+File name wildcard matching and recursive mirroring of directories are
+available when retrieving via @sc{ftp}. Wget can read the time-stamp
+information given by both @sc{http} and @sc{ftp} servers, and store it
+locally. Thus Wget can see if the remote file has changed since last
+retrieval, and automatically retrieve the new version if it has. This
+makes Wget suitable for mirroring of @sc{ftp} sites, as well as home
+pages.
+
+@sp 1
+@item
+Wget works exceedingly well on slow or unstable connections,
+retrying the document until it is fully retrieved, or until a
+user-specified retry count is surpassed. It will try to resume the
+download from the point of interruption, using @code{REST} with @sc{ftp}
+and @code{Range} with @sc{http} servers that support them.
+
+@sp 1
+@item
+By default, Wget supports proxy servers, which can lighten the network
+load, speed up retrieval and provide access behind firewalls. However,
+if you are behind a firewall that requires a @sc{socks}-style gateway,
+you can get the @sc{socks} library and build Wget with @sc{socks}
+support. Wget also supports passive @sc{ftp} downloading as an
+option.
+
+@sp 1
+@item
+Built-in features offer mechanisms to tune which links you wish to follow
+(@xref{Following Links}).
+
+@sp 1
+@item
+Retrieval progress is conveniently traced by printing dots, each dot
+representing a fixed amount of data received (1KB by default). The
+display can be customized to your preferences.
+
+@sp 1
+@item
+Most of the features are fully configurable, either through command line
+options, or via the initialization file @file{.wgetrc} (@xref{Startup
+File}). Wget allows you to define @dfn{global} startup files
+(@file{/usr/local/etc/wgetrc} by default) for site settings.
+
+@sp 1
+@item
+Finally, GNU Wget is free software. This means that everyone may use
+it, redistribute it and/or modify it under the terms of the GNU General
+Public License, as published by the Free Software Foundation
+(@xref{Copying}).
+@end itemize
+
+@node Invoking, Recursive Retrieval, Overview, Top
+@chapter Invoking
+@cindex invoking
+@cindex command line
+@cindex arguments
+@cindex nohup
+
+By default, Wget is very simple to invoke. The basic syntax is:
+
+@example
+wget [@var{option}]@dots{} [@var{URL}]@dots{}
+@end example
+
+Wget will simply download all the @sc{url}s specified on the command
+line. @var{URL} is a @dfn{Uniform Resource Locator}, as defined below.
+
+However, you may wish to change some of the default parameters of
+Wget. You can do it in two ways: permanently, by adding the appropriate
+command to @file{.wgetrc} (@xref{Startup File}), or by specifying it on
+the command line.
+
+@menu
+* URL Format::
+* Option Syntax::
+* Basic Startup Options::
+* Logging and Input File Options::
+* Download Options::
+* Directory Options::
+* HTTP Options::
+* FTP Options::
+* Recursive Retrieval Options::
+* Recursive Accept/Reject Options::
+@end menu
+
+@node URL Format, Option Syntax, Invoking, Invoking
+@section URL Format
+@cindex URL
+@cindex URL syntax
+
+@dfn{URL} is an acronym for Uniform Resource Locator. A uniform
+resource locator is a compact string representation for a resource
+available via the Internet. Wget recognizes the @sc{url} syntax as per
+@sc{rfc1738}. This is the most widely used form (square brackets denote
+optional parts):
+
+@example
+http://host[:port]/directory/file
+ftp://host[:port]/directory/file
+@end example
+
+You can also encode your username and password within a @sc{url}:
+
+@example
+ftp://user:password@@host/path
+http://user:password@@host/path
+@end example
+
+Either @var{user} or @var{password}, or both, may be left out. If you
+leave out either the @sc{http} username or password, no authentication
+will be sent. If you leave out the @sc{ftp} username, @samp{anonymous}
+will be used. If you leave out the @sc{ftp} password, your email
+address will be supplied as a default password.@footnote{If you have a
+@file{.netrc} file in your home directory, password will also be
+searched for there.}
+
+You can encode unsafe characters in a @sc{url} as @samp{%xy}, @code{xy}
+being the hexadecimal representation of the character's @sc{ascii}
+value. Some common unsafe characters include @samp{%} (quoted as
+@samp{%25}), @samp{:} (quoted as @samp{%3A}), and @samp{@@} (quoted as
+@samp{%40}). Refer to @sc{rfc1738} for a comprehensive list of unsafe
+characters.
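+
+For example, a file name containing a space would have the space
+(@sc{ascii} 32, hex 20) encoded as @samp{%20}; the host and file names
+below are hypothetical:
+
+@example
+wget http://host/My%20File.html
+@end example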
+
+Wget also supports the @code{type} feature for @sc{ftp} @sc{url}s. By
+default, @sc{ftp} documents are retrieved in the binary mode (type
+@samp{i}), which means that they are downloaded unchanged. Another
+useful mode is the @samp{a} (@dfn{ASCII}) mode, which converts the line
+delimiters between the different operating systems, and is thus useful
+for text files. Here is an example:
+
+@example
+ftp://host/directory/file;type=a
+@end example
+
+Two alternative variants of @sc{url} specification are also supported,
+for historical (hysterical?) reasons and because of their widespread use.
+
+@sc{ftp}-only syntax (supported by @code{NcFTP}):
+@example
+host:/dir/file
+@end example
+
+@sc{http}-only syntax (introduced by @code{Netscape}):
+@example
+host[:port]/dir/file
+@end example
+
+These two alternative forms are deprecated, and may cease being
+supported in the future.
+
+If you do not understand the difference between these notations, or do
+not know which one to use, just use the plain ordinary format you use
+with your favorite browser, like @code{Lynx} or @code{Netscape}.
+
+@node Option Syntax, Basic Startup Options, URL Format, Invoking
+@section Option Syntax
+@cindex option syntax
+@cindex syntax of options
+
+Since Wget uses GNU getopt to process its arguments, every option has a
+short form and a long form. Long options are more convenient to
+remember, but take time to type. You may freely mix different option
+styles, or specify options after the command-line arguments. Thus you
+may write:
+
+@example
+wget -r --tries=10 http://fly.cc.fer.hr/ -o log
+@end example
+
+The space between the option accepting an argument and the argument may
+be omitted. Instead of @samp{-o log} you can write @samp{-olog}.
+
+You may put several options that do not require arguments together,
+like:
+
+@example
+wget -drc @var{URL}
+@end example
+
+This is completely equivalent to:
+
+@example
+wget -d -r -c @var{URL}
+@end example
+
+Since the options can be specified after the arguments, you may
+terminate them with @samp{--}. So the following will try to download
+@sc{url} @samp{-x}, reporting failure to @file{log}:
+
+@example
+wget -o log -- -x
+@end example
+
+The options that accept comma-separated lists all respect the convention
+that specifying an empty list clears its value. This can be useful to
+clear the @file{.wgetrc} settings. For instance, if your @file{.wgetrc}
+sets @code{exclude_directories} to @file{/cgi-bin}, the following
+example will first reset it, and then set it to exclude @file{/~nobody}
+and @file{/~somebody}. You can also clear the lists in @file{.wgetrc}
+(@xref{Wgetrc Syntax}).
+
+@example
+wget -X '' -X /~nobody,/~somebody
+@end example
+
+@node Basic Startup Options, Logging and Input File Options, Option Syntax, Invoking
+@section Basic Startup Options
+
+@table @samp
+@item -V
+@itemx --version
+Display the version of Wget.
+
+@item -h
+@itemx --help
+Print a help message describing all of Wget's command-line options.
+
+@item -b
+@itemx --background
+Go to background immediately after startup. If no output file is
+specified via @samp{-o}, output is redirected to @file{wget-log}.
+
+@cindex execute wgetrc command
+@item -e @var{command}
+@itemx --execute @var{command}
+Execute @var{command} as if it were a part of @file{.wgetrc}
+(@xref{Startup File}). A command thus invoked will be executed
+@emph{after} the commands in @file{.wgetrc}, thus taking precedence over
+them.
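+
+For example, assuming @samp{tries} is the @file{.wgetrc} command
+corresponding to @samp{-t}, the following sketch sets the retry count
+for a single run without editing @file{.wgetrc}:
+
+@example
+wget -e 'tries = 5' http://fly.cc.fer.hr/
+@end example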
+@end table
+
+@node Logging and Input File Options, Download Options, Basic Startup Options, Invoking
+@section Logging and Input File Options
+
+@table @samp
+@cindex output file
+@cindex log file
+@item -o @var{logfile}
+@itemx --output-file=@var{logfile}
+Log all messages to @var{logfile}. The messages are normally reported
+to standard error.
+
+@cindex append to log
+@item -a @var{logfile}
+@itemx --append-output=@var{logfile}
+Append to @var{logfile}. This is the same as @samp{-o}, only it appends
+to @var{logfile} instead of overwriting the old log file. If
+@var{logfile} does not exist, a new file is created.
+
+@cindex debug
+@item -d
+@itemx --debug
+Turn on debug output, meaning various information important to the
+developers of Wget if it does not work properly. Your system
+administrator may have chosen to compile Wget without debug support, in
+which case @samp{-d} will not work. Please note that compiling with
+debug support is always safe---Wget compiled with the debug support will
+@emph{not} print any debug info unless requested with @samp{-d}.
+@xref{Reporting Bugs} for more information on how to use @samp{-d} for
+sending bug reports.
+
+@cindex quiet
+@item -q
+@itemx --quiet
+Turn off Wget's output.
+
+@cindex verbose
+@item -v
+@itemx --verbose
+Turn on verbose output, with all the available data. The default output
+is verbose.
+
+@item -nv
+@itemx --non-verbose
+Non-verbose output---turn off verbose without being completely quiet
+(use @samp{-q} for that), which means that error messages and basic
+information still get printed.
+
+@cindex input-file
+@item -i @var{file}
+@itemx --input-file=@var{file}
+Read @sc{url}s from @var{file}, in which case no @sc{url}s need to be on
+the command line. If there are @sc{url}s both on the command line and
+in an input file, those on the command line will be the first ones to
+be retrieved. The @var{file} need not be an @sc{html} document (but no
+harm if it is)---it is enough if the @sc{url}s are just listed
+sequentially.
+
+However, if you specify @samp{--force-html}, the document will be
+regarded as @sc{html}. In that case you may have problems with
+relative links, which you can solve either by adding @code{<base
+href="@var{url}">} to the documents or by specifying
+@samp{--base=@var{url}} on the command line.
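+
+For instance, to retrieve the relative links listed in a local file
+@file{list.html} (a hypothetical file name), resolving them against a
+base @sc{url}, one might use:
+
+@example
+wget --force-html --base=http://fly.cc.fer.hr/ -i list.html
+@end example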
+
+@cindex force html
+@item -F
+@itemx --force-html
+When input is read from a file, force it to be treated as an @sc{html}
+file. This enables you to retrieve relative links from existing
+@sc{html} files on your local disk, by adding @code{<base
+href="@var{url}">} to @sc{html}, or using the @samp{--base} command-line
+option.
+@end table
+
+@node Download Options, Directory Options, Logging and Input File Options, Invoking
+@section Download Options
+
+@table @samp
+@cindex retries
+@cindex tries
+@cindex number of retries
+@item -t @var{number}
+@itemx --tries=@var{number}
+Set number of retries to @var{number}. Specify 0 or @samp{inf} for
+infinite retrying.
+
+@item -O @var{file}
+@itemx --output-document=@var{file}
+The documents will not be written to the appropriate files, but all will
+be concatenated together and written to @var{file}. If @var{file}
+already exists, it will be overwritten. If the @var{file} is @samp{-},
+the documents will be written to standard output. Including this option
+automatically sets the number of tries to 1.
+
+@cindex no-clobber
+@item -nc
+@itemx --no-clobber
+Do not clobber existing files when saving to a directory hierarchy
+during recursive retrieval of several files. This option is
+@emph{extremely} useful when you wish to continue where you left off
+with retrieval of many files. If the files have the @samp{.html} or
+(yuck) @samp{.htm} suffix, they will be loaded from the local disk, and
+parsed as if they had been retrieved from the Web.
+
+@cindex continue retrieval
+@item -c
+@itemx --continue
+Continue getting an existing file. This is useful when you want to
+finish up the download started by another program, or a previous
+instance of Wget. Thus you can write:
+
+@example
+wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
+@end example
+
+If there is a file named @file{ls-lR.Z} in the current directory, Wget
+will assume that it is the first portion of the remote file, and will
+ask the server to continue the retrieval from an offset equal to the
+length of the local file.
+
+Note that you need not specify this option if all you want is Wget to
+continue retrieving where it left off when the connection is lost---Wget
+does this by default. You need this option only when you want to
+continue retrieval of a file already halfway retrieved, saved by another
+@sc{ftp} client, or left by Wget being killed.
+
+Without @samp{-c}, the previous example would just begin to download the
+remote file to @file{ls-lR.Z.1}. The @samp{-c} option is also
+applicable for @sc{http} servers that support the @code{Range} header.
+
+@cindex dot style
+@cindex retrieval tracing style
+@item --dot-style=@var{style}
+Set the retrieval style to @var{style}. Wget traces the retrieval of
+each document by printing dots on the screen, each dot representing a
+fixed amount of retrieved data. Dots may be grouped into a
+@dfn{cluster}, to make counting easier. This option allows you to
+choose one of the pre-defined styles, determining the number of bytes
+represented by a dot, the number of dots in a cluster, and the number of
+dots on the line.
+
+With the @code{default} style each dot represents 1K, there are ten dots
+in a cluster and 50 dots in a line. The @code{binary} style has a more
+``computer''-like orientation---8K dots, 16-dot clusters and 48 dots
+per line (so each line represents 384K). The @code{mega} style is
+suitable for downloading very large files---each dot represents 64K
+retrieved, there are eight dots in a cluster, and 48 dots on each line
+(so each line contains 3M). The @code{micro} style is exactly the
+reverse; it is suitable for downloading small files, with 128-byte dots,
+8 dots per cluster, and 48 dots (6K) per line.
+
+@item -N
+@itemx --timestamping
+Turn on time-stamping. @xref{Time-Stamping} for details.
+
+@cindex server response, print
+@item -S
+@itemx --server-response
+Print the headers sent by @sc{http} servers and responses sent by
+@sc{ftp} servers.
+
+@cindex Wget as spider
+@cindex spider
+@item --spider
+When invoked with this option, Wget will behave as a Web @dfn{spider},
+which means that it will not download the pages, just check that they
+are there. You can use it to check your bookmarks, e.g. with:
+
+@example
+wget --spider --force-html -i bookmarks.html
+@end example
+
+This feature needs much more work for Wget to get close to the
+functionality of real @sc{www} spiders.
+
+@cindex timeout
+@item -T seconds
+@itemx --timeout=@var{seconds}
+Set the read timeout to @var{seconds} seconds. Whenever a network read
+is issued, the file descriptor is checked for a timeout, which could
+otherwise leave a pending connection (uninterrupted read). The default
+timeout is 900 seconds (fifteen minutes). Setting timeout to 0 will
+disable checking for timeouts.
+
+Please do not lower the default timeout value with this option unless
+you know what you are doing.
+
+@cindex pause
+@cindex wait
+@item -w @var{seconds}
+@itemx --wait=@var{seconds}
+Wait the specified number of seconds between retrievals. Use of
+this option is recommended, as it lightens the server load by making the
+requests less frequent. The time can also be specified in minutes using
+the @code{m} suffix, in hours using the @code{h} suffix, or in days
+using the @code{d} suffix.
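+
+For example, both of the following sketches pause two minutes between
+retrievals (@file{sites} is a hypothetical input file):
+
+@example
+wget --wait=120 -i sites
+wget --wait=2m -i sites
+@end example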
+
+Specifying a large value for this option is useful if the network or the
+destination host is down, so that Wget can wait long enough to
+reasonably expect the network error to be fixed before the retry.
+
+@cindex proxy
+@item -Y on/off
+@itemx --proxy=on/off
+Turn proxy support on or off. The proxy is on by default if the
+appropriate environment variable is defined.
+
+@cindex quota
+@item -Q @var{quota}
+@itemx --quota=@var{quota}
+Specify download quota for automatic retrievals. The value can be
+specified in bytes (default), kilobytes (with @samp{k} suffix), or
+megabytes (with @samp{m} suffix).
+
+Note that quota will never affect downloading a single file. So if you
+specify @samp{wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz}, all of the
+@file{ls-lR.gz} will be downloaded. The same goes even when several
+@sc{url}s are specified on the command-line. However, quota is
+respected when retrieving either recursively, or from an input file.
+Thus you may safely type @samp{wget -Q2m -i sites}---download will be
+aborted when the quota is exceeded.
+
+Setting quota to 0 or to @samp{inf} unlimits the download quota.
+@end table
+
+@node Directory Options, HTTP Options, Download Options, Invoking
+@section Directory Options
+
+@table @samp
+@item -nd
+@itemx --no-directories
+Do not create a hierarchy of directories when retrieving
+recursively. With this option turned on, all files will get saved to the
+current directory, without clobbering (if a name shows up more than
+once, the filenames will get extensions @samp{.n}).
+
+@item -x
+@itemx --force-directories
+The opposite of @samp{-nd}---create a hierarchy of directories, even if
+one would not have been created otherwise. E.g. @samp{wget -x
+http://fly.cc.fer.hr/robots.txt} will save the downloaded file to
+@file{fly.cc.fer.hr/robots.txt}.
+
+@item -nH
+@itemx --no-host-directories
+Disable generation of host-prefixed directories. By default, invoking
+Wget with @samp{-r http://fly.cc.fer.hr/} will create a structure of
+directories beginning with @file{fly.cc.fer.hr/}. This option disables
+such behavior.
+
+@cindex cut directories
+@item --cut-dirs=@var{number}
+Ignore @var{number} directory components. This is useful for getting a
+fine-grained control over the directory where recursive retrieval will
+be saved.
+
+Take, for example, the directory at
+@samp{ftp://ftp.xemacs.org/pub/xemacs/}. If you retrieve it with
+@samp{-r}, it will be saved locally under
+@file{ftp.xemacs.org/pub/xemacs/}. While the @samp{-nH} option can
+remove the @file{ftp.xemacs.org/} part, you are still stuck with
+@file{pub/xemacs}. This is where @samp{--cut-dirs} comes in handy; it
+makes Wget not ``see'' @var{number} remote directory components. Here
+are several examples of how the @samp{--cut-dirs} option works.
+
+@example
+@group
+No options -> ftp.xemacs.org/pub/xemacs/
+-nH -> pub/xemacs/
+-nH --cut-dirs=1 -> xemacs/
+-nH --cut-dirs=2 -> .
+
+--cut-dirs=1 -> ftp.xemacs.org/xemacs/
+...
+@end group
+@end example
+
+If you just want to get rid of the directory structure, this option is
+similar to a combination of @samp{-nd} and @samp{-P}. However, unlike
+@samp{-nd}, @samp{--cut-dirs} does not lose subdirectories---for
+instance, with @samp{-nH --cut-dirs=1}, a @file{beta/} subdirectory will
+be placed in @file{xemacs/beta}, as one would expect.
+
+@cindex directory prefix
+@item -P @var{prefix}
+@itemx --directory-prefix=@var{prefix}
+Set directory prefix to @var{prefix}. The @dfn{directory prefix} is the
+directory where all other files and subdirectories will be saved to,
+i.e. the top of the retrieval tree. The default is @samp{.} (the
+current directory).
+@end table
+
+@node HTTP Options, FTP Options, Directory Options, Invoking
+@section HTTP Options
+
+@table @samp
+@cindex http user
+@cindex http password
+@cindex authentication
+@item --http-user=@var{user}
+@itemx --http-passwd=@var{password}
+Specify the username @var{user} and password @var{password} on an
+@sc{http} server. According to the type of the challenge, Wget will
+encode them using either the @code{basic} (insecure) or the
+@code{digest} authentication scheme.
+
+Another way to specify username and password is in the @sc{url} itself
+(@xref{URL Format}). For more information about security issues with
+Wget, @xref{Security Considerations}.
+
+@cindex proxy
+@cindex cache
+@item -C on/off
+@itemx --cache=on/off
+When set to off, disable server-side caching. In this case, Wget will
+send the remote server an appropriate directive (@samp{Pragma:
+no-cache}) to get the file from the remote server, rather than
+returning the cached version. This is especially useful for retrieving
+and flushing out-of-date documents on proxy servers.
+
+Caching is allowed by default.
+
+@cindex Content-Length, ignore
+@cindex ignore length
+@item --ignore-length
+Unfortunately, some @sc{http} servers (@sc{cgi} programs, to be more
+precise) send out bogus @code{Content-Length} headers, which makes Wget
+go wild, as it thinks the document was not fully retrieved. You can spot
+this syndrome if Wget retries getting the same document again and again,
+each time claiming that the (otherwise normal) connection has closed on
+the very same byte.
+
+With this option, Wget will ignore the @code{Content-Length} header---as
+if it never existed.
+
+@cindex header, add
+@item --header=@var{additional-header}
+Define an @var{additional-header} to be passed to the @sc{http} servers.
+Headers must contain a @samp{:} preceded by one or more non-blank
+characters, and must not contain newlines.
+
+You may define more than one additional header by specifying
+@samp{--header} more than once.
+
+@example
+@group
+wget --header='Accept-Charset: iso-8859-2' \
+ --header='Accept-Language: hr' \
+ http://fly.cc.fer.hr/
+@end group
+@end example
+
+Specification of an empty string as the header value will clear all
+previous user-defined headers.
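+
+For instance, assuming @file{.wgetrc} defines some headers, the
+following sketch discards them and sends only a freshly specified one
+(order matters---the empty header must come first):
+
+@example
+wget --header='' --header='Accept-Language: hr' http://fly.cc.fer.hr/
+@end example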
+
+@cindex proxy user
+@cindex proxy password
+@cindex proxy authentication
+@item --proxy-user=@var{user}
+@itemx --proxy-passwd=@var{password}
+Specify the username @var{user} and password @var{password} for
+authentication on a proxy server. Wget will encode them using the
+@code{basic} authentication scheme.
+
+@cindex server response, save
+@item -s
+@itemx --save-headers
+Save the headers sent by the @sc{http} server to the file, preceding the
+actual contents, with an empty line as the separator.
+
+@cindex user-agent
+@item -U @var{agent-string}
+@itemx --user-agent=@var{agent-string}
+Identify as @var{agent-string} to the @sc{http} server.
+
+The @sc{http} protocol allows clients to identify themselves using a
+@code{User-Agent} header field. This enables distinguishing the
+@sc{www} software, usually for statistical purposes or for tracing of
+protocol violations. Wget normally identifies as
+@samp{Wget/@var{version}}, @var{version} being the current version
+number of Wget.
+
+However, some sites have been known to impose the policy of tailoring
+the output according to the @code{User-Agent}-supplied information.
+While conceptually this is not such a bad idea, it has been abused by
+servers denying information to clients other than @code{Mozilla} or
+Microsoft @code{Internet Explorer}. This option allows you to change
+the @code{User-Agent} line issued by Wget. Use of this option is
+discouraged, unless you really know what you are doing.
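+
+For example, to identify as a hypothetical crawler of your own (both
+the agent string and the host are placeholders):
+
+@example
+wget --user-agent='MyMirror/0.1' http://server/
+@end example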
+
+@strong{NOTE} that Netscape Communications Corp. has claimed that false
+transmissions of @samp{Mozilla} as the @code{User-Agent} are a copyright
+infringement, which will be prosecuted. @strong{DO NOT} misrepresent
+Wget as Mozilla.
+@end table
+
+@node FTP Options, Recursive Retrieval Options, HTTP Options, Invoking
+@section FTP Options
+
+@table @samp
+@cindex retrieve symbolic links
+@item --retr-symlinks
+Retrieve symbolic links on @sc{ftp} sites as if they were plain files,
+i.e. don't just create links locally.
+
+@cindex globbing, toggle
+@item -g on/off
+@itemx --glob=on/off
+Turn @sc{ftp} globbing on or off. Globbing means you may use the
+shell-like special characters (@dfn{wildcards}), like @samp{*},
+@samp{?}, @samp{[} and @samp{]} to retrieve more than one file from the
+same directory at once, like:
+
+@example
+wget ftp://gnjilux.cc.fer.hr/*.msg
+@end example
+
+By default, globbing will be turned on if the @sc{url} contains a
+globbing character. This option may be used to turn globbing on or off
+permanently.
+
+You may have to quote the @sc{url} to protect it from being expanded by
+your shell. Globbing makes Wget look for a directory listing, which is
+system-specific. This is why it currently works only with Unix @sc{ftp}
+servers (and the ones emulating Unix @code{ls} output).
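+
+For example, quoting the @sc{url} keeps your shell from expanding the
+wildcard, so that Wget itself performs the globbing (the host name is
+hypothetical):
+
+@example
+wget 'ftp://server/pub/*.msg'
+@end example
+
+With @samp{--glob=off}, the same @sc{url} would be requested
+literally, wildcard characters and all.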
+
+@cindex passive ftp
+@item --passive-ftp
+Use the @dfn{passive} @sc{ftp} retrieval scheme, in which the client
+initiates the data connection. This is sometimes required for @sc{ftp}
+to work behind firewalls.
+@end table
+
+@node Recursive Retrieval Options, Recursive Accept/Reject Options, FTP Options, Invoking
+@section Recursive Retrieval Options
+
+@table @samp
+@item -r
+@itemx --recursive
+Turn on recursive retrieving. @xref{Recursive Retrieval} for more
+details.
+
+@item -l @var{depth}
+@itemx --level=@var{depth}
+Specify recursion maximum depth level @var{depth} (@pxref{Recursive
+Retrieval}). The default maximum depth is 5.
+
+@cindex proxy filling
+@cindex delete after retrieval
+@cindex filling proxy cache
+@item --delete-after
+This option tells Wget to delete every single file it downloads,
+@emph{after} having done so. It is useful for pre-fetching popular
+pages through a proxy, e.g.:
+
+@example
+wget -r -nd --delete-after http://whatever.com/~popular/page/
+@end example
+
+The @samp{-r} option is to retrieve recursively, and @samp{-nd} not to
+create directories.
+
+@cindex conversion of links
+@cindex links conversion
+@item -k
+@itemx --convert-links
+Convert the non-relative links to relative ones locally. Only the
+references to the documents actually downloaded will be converted; the
+rest will be left unchanged.
+
+Note that only at the end of the download can Wget know which links have
+been downloaded. Because of that, much of the work done by @samp{-k}
+will be performed at the end of the downloads.
+
+@item -m
+@itemx --mirror
+Turn on options suitable for mirroring. This option turns on recursion
+and time-stamping, sets infinite recursion depth and keeps @sc{ftp}
+directory listings. It is currently equivalent to
+@samp{-r -N -l inf -nr}.
+
+@item -nr
+@itemx --dont-remove-listing
+Don't remove the temporary @file{.listing} files generated by @sc{ftp}
+retrievals. Normally, these files contain the raw directory listings
+received from @sc{ftp} servers. Not removing them can be useful to
+access the full remote file list when running a mirror, or for debugging
+purposes.
+@end table
+
+@node Recursive Accept/Reject Options, , Recursive Retrieval Options, Invoking
+@section Recursive Accept/Reject Options
+
+@table @samp
+@item -A @var{acclist} --accept @var{acclist}
+@itemx -R @var{rejlist} --reject @var{rejlist}
+Specify comma-separated lists of file name suffixes or patterns to
+accept or reject (@pxref{Types of Files} for more details).
+
+@item -D @var{domain-list}
+@itemx --domains=@var{domain-list}
+Set domains to be accepted and @sc{dns} looked-up, where
+@var{domain-list} is a comma-separated list. Note that it does
+@emph{not} turn on @samp{-H}. This option speeds things up, even if
+only one host is spanned (@pxref{Domain Acceptance}).
+
+@item --exclude-domains @var{domain-list}
+Exclude the domains given in a comma-separated @var{domain-list} from
+@sc{dns}-lookup (@pxref{Domain Acceptance}).
+
+@item -L
+@itemx --relative
+Follow relative links only. Useful for retrieving a specific home page
+without any distractions, not even those from the same host
+(@pxref{Relative Links}).
+
+@cindex follow FTP links
+@item --follow-ftp
+Follow @sc{ftp} links from @sc{html} documents. Without this option,
+Wget will ignore all the @sc{ftp} links.
+
+@item -H
+@itemx --span-hosts
+Enable spanning across hosts when doing recursive retrieving
+(@pxref{All Hosts}).
+
+@item -I @var{list}
+@itemx --include-directories=@var{list}
+Specify a comma-separated list of directories you wish to follow when
+downloading (@pxref{Directory-Based Limits} for more details). Elements
+of @var{list} may contain wildcards.
+
+@item -X @var{list}
+@itemx --exclude-directories=@var{list}
+Specify a comma-separated list of directories you wish to exclude from
+download (@pxref{Directory-Based Limits} for more details). Elements of
+@var{list} may contain wildcards.
+
+@item -nh
+@itemx --no-host-lookup
+Disable the time-consuming @sc{dns} lookup of almost all hosts
+(@pxref{Host Checking}).
+
+@item -np
+@itemx --no-parent
+Do not ever ascend to the parent directory when retrieving recursively.
+This is a useful option, since it guarantees that only the files
+@emph{below} a certain hierarchy will be downloaded.
+@xref{Directory-Based Limits} for more details.
+@end table
+
+@node Recursive Retrieval, Following Links, Invoking, Top
+@chapter Recursive Retrieval
+@cindex recursion
+@cindex retrieving
+@cindex recursive retrieval
+
+GNU Wget is capable of traversing parts of the Web (or a single
+@sc{http} or @sc{ftp} server), depth-first following links and directory
+structure. This is called @dfn{recursive} retrieving, or
+@dfn{recursion}.
+
+With @sc{http} @sc{url}s, Wget retrieves and parses the @sc{html}
+document at the given @sc{url}, retrieving the files the document
+refers to, through markup like @code{href} or @code{src}. If the
+freshly downloaded file is also of type @code{text/html}, it will be
+parsed and followed further.
+
+The maximum @dfn{depth} to which the retrieval may descend is specified
+with the @samp{-l} option (the default maximum depth is five layers).
+@xref{Recursive Retrieval Options}.
+
+When retrieving an @sc{ftp} @sc{url} recursively, Wget will retrieve all
+the data from the given directory tree (including the subdirectories up
+to the specified depth) on the remote server, creating its mirror image
+locally. @sc{ftp} retrieval is also limited by the @code{depth}
+parameter.
+
+By default, Wget will create a local directory tree, corresponding to
+the one found on the remote server.
+
+Recursive retrieving has a number of applications, the most important
+of which is mirroring. It is also useful for @sc{www} presentations,
+and any other situations where slow network connections should be
+bypassed by storing the files locally.
+
+You should be warned that invoking recursion may cause grave
+overloading on your system, because of the fast exchange of data
+through the network; all of this may hamper other users' work. The
+same stands for the foreign server you are mirroring---the more
+requests it gets in a row, the greater its load.
+
+Careless retrieving can also fill your file system uncontrollably,
+which can grind the machine to a halt.
+
+The load can be minimized by lowering the maximum recursion level
+(@samp{-l}) and/or by lowering the number of retries (@samp{-t}). You
+may also consider using the @samp{-w} option to slow down your requests
+to the remote servers, as well as the numerous options to narrow the
+number of followed links (@pxref{Following Links}).
+
+Recursive retrieval is a good thing when used properly. Please take all
+precautions not to wreak havoc through carelessness.
+
+@node Following Links, Time-Stamping, Recursive Retrieval, Top
+@chapter Following Links
+@cindex links
+@cindex following links
+
+When retrieving recursively, one does not wish to retrieve loads of
+unnecessary data. Most of the time users know exactly what they want
+to download, and want Wget to follow only specific links.
+
+For example, if you wish to download the music archive from
+@samp{fly.cc.fer.hr}, you will not want to download all the home pages
+that happen to be referenced by an obscure part of the archive.
+
+Wget possesses several mechanisms that allow you to fine-tune which
+links it will follow.
+
+@menu
+* Relative Links:: Follow relative links only.
+* Host Checking:: Follow links on the same host.
+* Domain Acceptance:: Check on a list of domains.
+* All Hosts:: No host restrictions.
+* Types of Files:: Getting only certain files.
+* Directory-Based Limits:: Getting only certain directories.
+* FTP Links:: Following FTP links.
+@end menu
+
+@node Relative Links, Host Checking, Following Links, Following Links
+@section Relative Links
+@cindex relative links
+
+When only relative links are followed (option @samp{-L}), recursive
+retrieving will never span hosts. No time-expensive @sc{dns} lookups
+will be performed, and the process will be very fast, with minimum
+strain on the network. This will often suit your needs, especially
+when mirroring the output of various @code{x2html} converters, since
+they generally output relative links.
+
+@node Host Checking, Domain Acceptance, Relative Links, Following Links
+@section Host Checking
+@cindex DNS lookup
+@cindex host lookup
+@cindex host checking
+
+The drawback of following only relative links is that humans often
+tend to mix them with absolute links to the very same host, and the
+very same page. In this mode (which is the default mode for following
+links) all @sc{url}s that refer to the same host will be retrieved.
+
+The problem with this option is host and domain aliases. There is no
+way for Wget to know that @samp{regoc.srce.hr} and @samp{www.srce.hr}
+are the same host, or that @samp{fly.cc.fer.hr} is the same as
+@samp{fly.cc.etf.hr}. Whenever an absolute link is encountered, the
+host is @sc{dns}-looked-up with @code{gethostbyname} to check whether
+we are perhaps dealing with the same host. Although the results of
+@code{gethostbyname} are cached, it is still a great slowdown,
+e.g. when dealing with large indices of home pages on different hosts
+(because each of the hosts must be @sc{dns}-resolved to see whether it
+just @emph{might} be an alias of the starting host).
+
+To avoid the overhead you may use @samp{-nh}, which will turn off
+@sc{dns}-resolving and make Wget compare hosts literally. This will
+make things run much faster, but also much less reliable
+(e.g. @samp{www.srce.hr} and @samp{regoc.srce.hr} will be flagged as
+different hosts).
+
+Note that modern @sc{http} servers allow one IP address to host
+several @dfn{virtual servers}, each having its own directory
+hierarchy. Such ``servers'' are distinguished by their hostnames (all
+of which point to the same IP address); for this to work, a client
+must send a @code{Host} header, which is what Wget does. However, in
+that case Wget @emph{must not} try to divine a host's ``real''
+address, nor try to use the same hostname for each access,
+i.e. @samp{-nh} must be turned on.
+
+In other words, the @samp{-nh} option must be used to enable the
+retrieval from virtual servers distinguished by their hostnames. As
+the number of such server setups grows, the behavior of @samp{-nh} may
+become the default in the future.
+
+@node Domain Acceptance, All Hosts, Host Checking, Following Links
+@section Domain Acceptance
+
+With the @samp{-D} option you may specify the domains that will be
+followed. The hosts whose domain is not in this list will not be
+@sc{dns}-resolved. Thus you can specify @samp{-Dmit.edu} just to make
+sure that @strong{nothing outside of @sc{mit} gets looked up}. This is
+very important and useful. It also means that @samp{-D} does @emph{not}
+imply @samp{-H} (span all hosts), which must be specified explicitly.
+Feel free to use this option since it will speed things up, with almost
+all the reliability of checking for all hosts. Thus you could invoke
+
+@example
+wget -r -D.hr http://fly.cc.fer.hr/
+@end example
+
+to make sure that only the hosts in @samp{.hr} domain get
+@sc{dns}-looked-up for being equal to @samp{fly.cc.fer.hr}. So
+@samp{fly.cc.etf.hr} will be checked (only once!) and found equal, but
+@samp{www.gnu.ai.mit.edu} will not even be checked.
+
+Of course, domain acceptance can be used to limit the retrieval to
+particular domains with spanning of hosts in them, but then you must
+specify @samp{-H} explicitly. E.g.:
+
+@example
+wget -r -H -Dmit.edu,stanford.edu http://www.mit.edu/
+@end example
+
+will start with @samp{http://www.mit.edu/}, following links across
+@sc{mit} and Stanford.
+
+If there are domains you want to exclude specifically, you can do it
+with @samp{--exclude-domains}, which accepts the same type of arguments
+as @samp{-D}, but will @emph{exclude} all the listed domains. For
+example, if you want to download all the hosts from @samp{foo.edu}
+domain, with the exception of @samp{sunsite.foo.edu}, you can do it like
+this:
+
+@example
+wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu http://www.foo.edu/
+@end example
+
+@node All Hosts, Types of Files, Domain Acceptance, Following Links
+@section All Hosts
+@cindex all hosts
+@cindex span hosts
+
+When @samp{-H} is specified without @samp{-D}, all hosts are freely
+spanned. There are no restrictions whatsoever as to what part of the
+net Wget will go to fetch documents, other than maximum retrieval depth.
+If a page references @samp{www.yahoo.com}, so be it. Such an option is
+rarely useful by itself.
+
+@node Types of Files, Directory-Based Limits, All Hosts, Following Links
+@section Types of Files
+@cindex types of files
+
+When downloading material from the web, you will often want to restrict
+the retrieval to only certain file types. For example, if you are
+interested in downloading @sc{gifs}, you will not be overjoyed to get
+loads of Postscript documents, and vice versa.
+
+Wget offers two options to deal with this problem. Each option
+description lists a short name, a long name, and the equivalent command
+in @file{.wgetrc}.
+
+@cindex accept wildcards
+@cindex accept suffixes
+@cindex wildcards, accept
+@cindex suffixes, accept
+@table @samp
+@item -A @var{acclist}
+@itemx --accept @var{acclist}
+@itemx accept = @var{acclist}
+The argument to the @samp{--accept} option is a list of file suffixes
+or patterns that Wget will download during recursive retrieval. A
+suffix is the ending part of a file name, and consists of ``normal''
+letters, e.g. @samp{gif} or @samp{.jpg}. A matching pattern contains
+shell-like wildcards, e.g. @samp{books*} or @samp{zelazny*196[0-9]*}.
+
+So, specifying @samp{wget -A gif,jpg} will make Wget download only the
+files ending with @samp{gif} or @samp{jpg}, i.e. @sc{gif}s and
+@sc{jpeg}s. On the other hand, @samp{wget -A "zelazny*196[0-9]*"} will
+download only files beginning with @samp{zelazny} and containing numbers
+from 1960 to 1969 anywhere within. Look up the manual of your shell for
+a description of how pattern matching works.
+
+Of course, any number of suffixes and patterns can be combined into a
+comma-separated list, and given as an argument to @samp{-A}.
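+
+Because these patterns use shell-like wildcard syntax, you can preview
+what a given pattern accepts with your shell's @code{case} construct.
+This is a local experiment with made-up file names, not a Wget
+invocation:
+
+@example
+result=
+for f in zelazny-1967-lord.txt zelazny-1975-door.txt \
+         zelazny-1968-iso.jpg; do
+  case "$f" in
+    zelazny*196[0-9]*) result="$result $f" ;;
+  esac
+done
+echo "accepted:$result"
+@end example
+
+Only the two names containing @samp{196} followed by another digit are
+collected.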
+
+@cindex reject wildcards
+@cindex reject suffixes
+@cindex wildcards, reject
+@cindex suffixes, reject
+@item -R @var{rejlist}
+@itemx --reject @var{rejlist}
+@itemx reject = @var{rejlist}
+The @samp{--reject} option works the same way as @samp{--accept}, only
+its logic is the reverse; Wget will download all files @emph{except} the
+ones matching the suffixes (or patterns) in the list.
+
+So, if you want to download a whole page except for the cumbersome
+@sc{mpeg}s and @sc{.au} files, you can use @samp{wget -R mpg,mpeg,au}.
+Analogously, to download all files except the ones beginning with
+@samp{bjork}, use @samp{wget -R "bjork*"}. The quotes are to prevent
+expansion by the shell.
+@end table
+
+The @samp{-A} and @samp{-R} options may be combined to achieve even
+better fine-tuning of which files to retrieve. E.g. @samp{wget -A
+"*zelazny*" -R .ps} will download all the files having @samp{zelazny} as
+a part of their name, but @emph{not} the PostScript files.
+
+Note that these two options do not affect the downloading of @sc{html}
+files; Wget must load all the @sc{html}s to know where to go at
+all---recursive retrieval would make no sense otherwise.
+
+@node Directory-Based Limits, FTP Links, Types of Files, Following Links
+@section Directory-Based Limits
+@cindex directories
+@cindex directory limits
+
+Regardless of other link-following facilities, it is often useful to
+restrict the retrieval based on the directories the files are placed
+in. There can be many reasons for this---the home pages may be
+organized in a reasonable directory structure; or some directories may
+contain useless information, e.g. the @file{/cgi-bin} or @file{/dev}
+directories.
+
+Wget offers three different options to deal with this requirement. Each
+option description lists a short name, a long name, and the equivalent
+command in @file{.wgetrc}.
+
+@cindex directories, include
+@cindex include directories
+@cindex accept directories
+@table @samp
+@item -I @var{list}
+@itemx --include @var{list}
+@itemx include_directories = @var{list}
+The @samp{-I} option accepts a comma-separated list of directories
+included in the retrieval. Any other directories will simply be
+ignored. The directories are absolute paths.
+
+So, if you wish to download from @samp{http://host/people/bozo/}
+following only links to bozo's colleagues in the @file{/people}
+directory and the bogus scripts in @file{/cgi-bin}, you can specify:
+
+@example
+wget -I /people,/cgi-bin http://host/people/bozo/
+@end example
+
+@cindex directories, exclude
+@cindex exclude directories
+@cindex reject directories
+@item -X @var{list}
+@itemx --exclude @var{list}
+@itemx exclude_directories = @var{list}
+The @samp{-X} option is exactly the reverse of @samp{-I}---this is a
+list of directories @emph{excluded} from the download. E.g. if you do
+not want Wget to download things from the @file{/cgi-bin} directory,
+specify @samp{-X /cgi-bin} on the command line.
+
+The same as with @samp{-A}/@samp{-R}, these two options can be combined
+to get a better fine-tuning of downloading subdirectories. E.g. if you
+want to load all the files from @file{/pub} hierarchy except for
+@file{/pub/worthless}, specify @samp{-I/pub -X/pub/worthless}.
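+
+Spelled out as a full command line (with a hypothetical host), that
+combination reads:
+
+@example
+wget -r -I/pub -X/pub/worthless ftp://server/pub/
+@end example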
+
+@cindex no parent
+@item -np
+@itemx --no-parent
+@itemx no_parent = on
+The simplest, and often very useful way of limiting directories is
+disallowing retrieval of the links that refer to the hierarchy
+@emph{above} the beginning directory, i.e. disallowing ascent to the
+parent directory/directories.
+
+The @samp{--no-parent} option (short @samp{-np}) is useful in this case.
+Using it guarantees that you will never leave the existing hierarchy.
+Supposing you issue Wget with:
+
+@example
+wget -r --no-parent http://somehost/~luzer/my-archive/
+@end example
+
+You may rest assured that none of the references to
+@file{/~his-girls-homepage/} or @file{/~luzer/all-my-mpegs/} will be
+followed. Only the archive you are interested in will be downloaded.
+Essentially, @samp{--no-parent} is similar to
+@samp{-I/~luzer/my-archive}, only it handles redirections in a more
+intelligent fashion.
+@end table
+
+@node FTP Links, , Directory-Based Limits, Following Links
+@section Following FTP Links
+@cindex following ftp links
+
+The rules for @sc{ftp} are somewhat specific, as it is necessary for
+them to be. @sc{ftp} links in @sc{html} documents are often included
+for purposes of reference, and it is often inconvenient to download them
+by default.
+
+To have @sc{ftp} links followed from @sc{html} documents, you need to
+specify the @samp{--follow-ftp} option. Having done that, @sc{ftp}
+links will span hosts regardless of the @samp{-H} setting. This is
+logical, as @sc{ftp} links rarely point to the same host where the
+@sc{http} server resides. For similar reasons, the @samp{-L} option
+has no effect on such downloads. On the other hand, domain acceptance
+(@samp{-D}) and suffix rules (@samp{-A} and @samp{-R}) apply normally.
+
+Also note that followed links to @sc{ftp} directories will not be
+retrieved recursively further.
+
+@node Time-Stamping, Startup File, Following Links, Top
+@chapter Time-Stamping
+@cindex time-stamping
+@cindex timestamping
+@cindex updating the archives
+@cindex incremental updating
+
+One of the most important aspects of mirroring information from the
+Internet is updating your archives.
+
+Downloading the whole archive again and again, just to replace a few
+changed files, is expensive in terms of wasted bandwidth, money, and
+the time needed for the update. This is why all the mirroring tools
+offer the option of incremental updating.
+
+Such an updating mechanism means that the remote server is scanned in
+search of @dfn{new} files. Only those new files will be downloaded in
+the place of the old ones.
+
+A file is considered new if one of these two conditions is met:
+
+@enumerate
+@item
+A file of that name does not already exist locally.
+
+@item
+A file of that name does exist, but the remote file was modified more
+recently than the local file.
+@end enumerate
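+
+The two conditions amount to a simple decision rule, sketched below in
+shell. This is a hypothetical illustration, not code from Wget, and
+@code{stat -c %Y} is the GNU form (BSD systems use @code{stat -f %m}):
+
+@example
+file=/tmp/wget-demo.$$       # hypothetical local file name
+remote_mtime=$(date +%s)     # pretend remote modification time
+if [ ! -e "$file" ]; then
+  verdict="new: no local copy"
+elif [ "$remote_mtime" -gt "$(stat -c %Y "$file")" ]; then
+  verdict="new: remote is more recent"
+else
+  verdict="not new"
+fi
+echo "$verdict"
+@end example
+
+With no local copy present, the first condition fires.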
+
+To implement this, the program needs to be aware of the time of last
+modification of both the remote and local files. Such information is
+called the @dfn{time-stamp} of a file.
+
+The time-stamping in GNU Wget is turned on using the
+@samp{--timestamping} (@samp{-N}) option, or through the
+@code{timestamping = on} directive in @file{.wgetrc}. With this
+option, for each file it intends to download, Wget will check whether
+a local file of the same name exists. If it does, and the remote file
+is not newer, Wget will not download it.
+
+If the local file does not exist, or the sizes of the files do not
+match, Wget will download the remote file no matter what the time-stamps
+say.
+
+@menu
+* Time-Stamping Usage::
+* HTTP Time-Stamping Internals::
+* FTP Time-Stamping Internals::
+@end menu
+
+@node Time-Stamping Usage, HTTP Time-Stamping Internals, Time-Stamping, Time-Stamping
+@section Time-Stamping Usage
+@cindex time-stamping usage
+@cindex usage, time-stamping
+
+The usage of time-stamping is simple. Say you would like to download a
+file so that it keeps its date of modification.
+
+@example
+wget -S http://www.gnu.ai.mit.edu/
+@end example
+
+A simple @code{ls -l} shows that the time stamp on the local file
+matches the @code{Last-Modified} header returned by the server. As you
+can see, the time-stamping info is preserved locally, even without
+@samp{-N}.
+
+Several days later, you would like Wget to check if the remote file has
+changed, and download it if it has.
+
+@example
+wget -N http://www.gnu.ai.mit.edu/
+@end example
+
+Wget will ask the server for the last-modified date. If the local file
+is newer, the remote file will not be re-fetched. However, if the remote
+file is more recent, Wget will proceed fetching it normally.
+
+The same goes for @sc{ftp}. For example:
+
+@example
+wget ftp://ftp.ifi.uio.no/pub/emacs/gnus/*
+@end example
+
+@code{ls} will show that the timestamps are set according to the state
+on the remote server. Reissuing the command with @samp{-N} will make
+Wget re-fetch @emph{only} the files that have been modified.
+
+In both @sc{http} and @sc{ftp} retrieval Wget will time-stamp the local
+file correctly (with or without @samp{-N}) if it gets the stamps,
+i.e. gets the directory listing for @sc{ftp} or the @code{Last-Modified}
+header for @sc{http}.
+
+If you wished to mirror the GNU archive every week, you would use the
+following command every week:
+
+@example
+wget --timestamping -r ftp://prep.ai.mit.edu/pub/gnu/
+@end example
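+
+To run such an update automatically, the command can be placed in a
+@code{crontab} entry, e.g. scheduled for 4 AM every Sunday (the
+schedule is only an example):
+
+@example
+0 4 * * 0 wget --timestamping -r ftp://prep.ai.mit.edu/pub/gnu/
+@end example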
+
+@node HTTP Time-Stamping Internals, FTP Time-Stamping Internals, Time-Stamping Usage, Time-Stamping
+@section HTTP Time-Stamping Internals
+@cindex http time-stamping
+
+Time-stamping in @sc{http} is implemented by checking the
+@code{Last-Modified} header. If you wish to retrieve the file
+@file{foo.html} through @sc{http}, Wget will check whether
+@file{foo.html} exists locally. If it doesn't, @file{foo.html} will be
+retrieved unconditionally.
+
+If the file does exist locally, Wget will first check its local
+time-stamp (similar to the way @code{ls -l} checks it), and then send a
+@code{HEAD} request to the remote server, requesting information about
+the remote file.
+
+The @code{Last-Modified} header is examined to find which file was
+modified more recently (which makes it ``newer''). If the remote file
+is newer, it will be downloaded; if it is older, Wget will give
+up.@footnote{As an additional check, Wget will look at the
+@code{Content-Length} header, and compare the sizes; if they are not the
+same, the remote file will be downloaded no matter what the time-stamp
+says.}
+
+Arguably, @sc{http} time-stamping should be implemented using the
+@code{If-Modified-Since} request.
+
+@node FTP Time-Stamping Internals, , HTTP Time-Stamping Internals, Time-Stamping
+@section FTP Time-Stamping Internals
+@cindex ftp time-stamping
+
+In theory, @sc{ftp} time-stamping works much the same as @sc{http}, only
+@sc{ftp} has no headers---time-stamps must be received from the
+directory listings.
+
+For each directory from which files are to be retrieved, Wget will use
+the @code{LIST} command to get the listing. It will try to analyze the
+listing, assuming that it is a Unix @code{ls -l} listing, and extract
+the time-stamps. The rest is exactly the same as for @sc{http}.
+
+The assumption that every directory listing is a Unix-style listing
+may sound extremely constraining, but in practice it is not, as many
+non-Unix @sc{ftp} servers use the Unixoid listing format because most
+(all?) of the clients understand it. Bear in mind that @sc{rfc959}
+defines no standard way to get a file list, let alone the time-stamps.
+We can only hope that a future standard will define this.
+
+Another non-standard solution is the @code{MDTM} command, supported by
+some @sc{ftp} servers (including the popular @code{wu-ftpd}), which
+returns the exact time of the specified file. Wget may support this
+command in the future.
+
+@node Startup File, Examples, Time-Stamping, Top
+@chapter Startup File
+@cindex startup file
+@cindex wgetrc
+@cindex .wgetrc
+@cindex startup
+@cindex .netrc
+
+Once you know how to change default settings of Wget through command
+line arguments, you may wish to make some of those settings permanent.
+You can do that in a convenient way by creating the Wget startup
+file---@file{.wgetrc}.
+
+While @file{.wgetrc} is the ``main'' initialization file, it is also
+convenient to have a special facility for storing passwords. Thus
+Wget reads and interprets the contents of @file{$HOME/.netrc}, if it
+finds it. The @file{.netrc} format is described in your system
+manuals.
+
+Wget reads @file{.wgetrc} upon startup, recognizing a limited set of
+commands.
+
+@menu
+* Wgetrc Location:: Location of various wgetrc files.
+* Wgetrc Syntax:: Syntax of wgetrc.
+* Wgetrc Commands:: List of available commands.
+* Sample Wgetrc:: A wgetrc example.
+@end menu
+
+@node Wgetrc Location, Wgetrc Syntax, Startup File, Startup File
+@section Wgetrc Location
+@cindex wgetrc location
+@cindex location of wgetrc
+
+When initializing, Wget will look for a @dfn{global} startup file,
+@file{/usr/local/etc/wgetrc} by default (or some prefix other than
+@file{/usr/local}, if Wget was not installed there) and read commands
+from there, if it exists.
+
+Then it will look for the user's file. If the environment variable
+@code{WGETRC} is set, Wget will try to load that file. Failing that, no
+further attempts will be made.
+
+If @code{WGETRC} is not set, Wget will try to load @file{$HOME/.wgetrc}.
+
+The fact that the user's settings are loaded after the system-wide
+ones means that in case of collision the user's wgetrc @emph{overrides}
+the system-wide wgetrc (in @file{/usr/local/etc/wgetrc} by default).
+Fascist admins, away!
+
+@node Wgetrc Syntax, Wgetrc Commands, Wgetrc Location, Startup File
+@section Wgetrc Syntax
+@cindex wgetrc syntax
+@cindex syntax of wgetrc
+
+The syntax of a wgetrc command is simple:
+
+@example
+variable = value
+@end example
+
+The @dfn{variable} will also be called @dfn{command}. Valid
+@dfn{values} are different for different commands.
+
+The commands are case-insensitive and underscore-insensitive. Thus
+@samp{DIr__PrefiX} is the same as @samp{dirprefix}. Empty lines, lines
+beginning with @samp{#} and lines containing white-space only are
+discarded.
+
+Commands that expect a comma-separated list will clear the list on an
+empty command. So, if you wish to reset the rejection list specified in
+global @file{wgetrc}, you can do it with:
+
+@example
+reject =
+@end example
+
+@node Wgetrc Commands, Sample Wgetrc, Wgetrc Syntax, Startup File
+@section Wgetrc Commands
+@cindex wgetrc commands
+
+The complete set of commands is listed below, the letter after @samp{=}
+denoting the value the command takes. It is @samp{on/off} for @samp{on}
+or @samp{off} (which can also be @samp{1} or @samp{0}), @var{string} for
+any non-empty string or @var{n} for a positive integer. For example,
+you may specify @samp{use_proxy = off} to disable use of proxy servers
+by default. You may use @samp{inf} for infinite values, where
+appropriate.
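+
+For illustration, here is a hypothetical @file{.wgetrc} fragment using
+each kind of value (remember that comments must be on lines of their
+own):
+
+@example
+# an on/off value
+use_proxy = off
+# a string value
+dot_style = binary
+# a numeric value, with `inf' allowed
+tries = inf
+@end example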
+
+Most of the commands have their equivalent command-line option
+(@pxref{Invoking}), except some more obscure or rarely used ones.
+
+@table @asis
+@item accept/reject = @var{string}
+Same as @samp{-A}/@samp{-R} (@pxref{Types of Files}).
+
+@item add_hostdir = on/off
+Enable/disable host-prefixed file names. @samp{-nH} disables it.
+
+@item continue = on/off
+Enable/disable continuation of the retrieval, the same as @samp{-c}
+(which enables it).
+
+@item background = on/off
+Enable/disable going to background, the same as @samp{-b} (which enables
+it).
+
+@c @item backups = @var{number}
+@c #### Document me!
+@item base = @var{string}
+Set base for relative @sc{url}s, the same as @samp{-B}.
+
+@item cache = on/off
+When set to off, disallow server-caching. See the @samp{-C} option.
+
+@item convert_links = on/off
+Convert non-relative links locally. The same as @samp{-k}.
+
+@item cut_dirs = @var{n}
+Ignore @var{n} remote directory components.
+
+@item debug = on/off
+Debug mode, same as @samp{-d}.
+
+@item delete_after = on/off
+Delete after download, the same as @samp{--delete-after}.
+
+@item dir_prefix = @var{string}
+Top of directory tree, the same as @samp{-P}.
+
+@item dirstruct = on/off
+Turning dirstruct on or off, the same as @samp{-x} or @samp{-nd},
+respectively.
+
+@item domains = @var{string}
+Same as @samp{-D} (@xref{Domain Acceptance}).
+
+@item dot_bytes = @var{n}
+Specify the number of bytes ``contained'' in a dot, as seen throughout
+the retrieval (1024 by default). You can postfix the value with
+@samp{k} or @samp{m}, representing kilobytes and megabytes,
+respectively. With dot settings you can tailor the dot retrieval to
+suit your needs, or you can use the predefined @dfn{styles}
+(@xref{Download Options}).
+
+@item dots_in_line = @var{n}
+Specify the number of dots that will be printed in each line throughout
+the retrieval (50 by default).
+
+@item dot_spacing = @var{n}
+Specify the number of dots in a single cluster (10 by default).
+
+@item dot_style = @var{string}
+Specify the dot retrieval @dfn{style}, as with @samp{--dot-style}.
+
+@item exclude_directories = @var{string}
+Specify a comma-separated list of directories you wish to exclude from
+download, the same as @samp{-X} (@xref{Directory-Based Limits}).
+
+@item exclude_domains = @var{string}
+Same as @samp{--exclude-domains} (@xref{Domain Acceptance}).
+
+@item follow_ftp = on/off
+Follow @sc{ftp} links from @sc{html} documents, the same as @samp{-f}.
+
+@item force_html = on/off
+If set to on, force the input filename to be regarded as an @sc{html}
+document, the same as @samp{-F}.
+
+@item ftp_proxy = @var{string}
+Use @var{string} as @sc{ftp} proxy, instead of the one specified in
+environment.
+
+@item glob = on/off
+Turn globbing on/off, the same as @samp{-g}.
+
+@item header = @var{string}
+Define an additional header, like @samp{--header}.
+
+@item http_passwd = @var{string}
+Set @sc{http} password.
+
+@item http_proxy = @var{string}
+Use @var{string} as @sc{http} proxy, instead of the one specified in
+environment.
+
+@item http_user = @var{string}
+Set @sc{http} user to @var{string}.
+
+@item ignore_length = on/off
+When set to on, ignore @code{Content-Length} header; the same as
+@samp{--ignore-length}.
+
+@item include_directories = @var{string}
+Specify a comma-separated list of directories you wish to follow when
+downloading, the same as @samp{-I}.
+
+@item input = @var{string}
+Read the @sc{url}s from @var{string}, like @samp{-i}.
+
+@item kill_longer = on/off
+Consider data longer than specified in the @code{Content-Length} header
+as invalid (and retry getting it). The default behaviour is to save
+as much data as there is, provided the amount is greater than or equal
+to the value in @code{Content-Length}.
+
+@item logfile = @var{string}
+Set logfile, the same as @samp{-o}.
+
+@item login = @var{string}
+Your user name on the remote machine, for @sc{ftp}. Defaults to
+@samp{anonymous}.
+
+@item mirror = on/off
+Turn mirroring on/off. The same as @samp{-m}.
+
+@item netrc = on/off
+Turn reading netrc on or off.
+
+@item noclobber = on/off
+Same as @samp{-nc}.
+
+@item no_parent = on/off
+Disallow retrieving outside the directory hierarchy, like
+@samp{--no-parent} (@xref{Directory-Based Limits}).
+
+@item no_proxy = @var{string}
+Use @var{string} as the comma-separated list of domains to avoid in
+proxy loading, instead of the one specified in environment.
+
+@item output_document = @var{string}
+Set the output filename, the same as @samp{-O}.
+
+@item passive_ftp = on/off
+Set passive @sc{ftp}, the same as @samp{--passive-ftp}.
+
+@item passwd = @var{string}
+Set your @sc{ftp} password to @var{string}. Without this setting, the
+password defaults to @samp{username@@hostname.domainname}.
+
+@item proxy_user = @var{string}
+Set proxy authentication user name to @var{string}, like
+@samp{--proxy-user}.
+
+@item proxy_passwd = @var{string}
+Set proxy authentication password to @var{string}, like
+@samp{--proxy-passwd}.
+
+@item quiet = on/off
+Quiet mode, the same as @samp{-q}.
+
+@item quota = @var{quota}
+Specify the download quota, which is useful to put in the global
+@file{wgetrc}. When the download quota is specified, Wget will stop
+retrieving after the download sum has become greater than the quota.
+The quota can be specified in bytes (default), kbytes (@samp{k}
+appended) or mbytes (@samp{m} appended). Thus @samp{quota = 5m} will
+set the quota to 5
+mbytes. Note that the user's startup file overrides system settings.
+
+@item reclevel = @var{n}
+Recursion level, the same as @samp{-l}.
+
+@item recursive = on/off
+Recursive on/off, the same as @samp{-r}.
+
+@item relative_only = on/off
+Follow only relative links, the same as @samp{-L} (@xref{Relative
+Links}).
+
+@item remove_listing = on/off
+If set to on, remove @sc{ftp} listings downloaded by Wget. Setting it
+to off is the same as @samp{-nr}.
+
+@item retr_symlinks = on/off
+When set to on, retrieve symbolic links as if they were plain files; the
+same as @samp{--retr-symlinks}.
+
+@item robots = on/off
+Use (or not) @file{/robots.txt} file (@xref{Robots}). Be sure to know
+what you are doing before changing the default (which is @samp{on}).
+
+@item server_response = on/off
+Choose whether or not to print the @sc{http} and @sc{ftp} server
+responses, the same as @samp{-S}.
+
+@item simple_host_check = on/off
+Same as @samp{-nh} (@xref{Host Checking}).
+
+@item span_hosts = on/off
+Same as @samp{-H}.
+
+@item timeout = @var{n}
+Set timeout value, the same as @samp{-T}.
+
+@item timestamping = on/off
+Turn timestamping on/off. The same as @samp{-N} (@xref{Time-Stamping}).
+
+@item tries = @var{n}
+Set number of retries per @sc{url}, the same as @samp{-t}.
+
+@item use_proxy = on/off
+Turn proxy support on/off. The same as @samp{-Y}.
+
+@item verbose = on/off
+Turn verbose on/off, the same as @samp{-v}/@samp{-nv}.
+
+@item wait = @var{n}
+Wait @var{n} seconds between retrievals, the same as @samp{-w}.
+@end table
+
+@node Sample Wgetrc, , Wgetrc Commands, Startup File
+@section Sample Wgetrc
+@cindex sample wgetrc
+
+This is the sample initialization file, as given in the distribution.
+It is divided in two sections---one for global usage (suitable for the
+global startup file), and one for local usage (suitable for
+@file{$HOME/.wgetrc}). Be careful about the things you change.
+
+Note that all the lines are commented out. For a line to take effect,
+you must remove the @samp{#} prefix at the beginning of the line.
+
+@example
+###
+### Sample Wget initialization file .wgetrc
+###
+
+## You can use this file to change the default behaviour of wget or to
+## avoid having to type many many command-line options. This file does
+## not contain a comprehensive list of commands -- look at the manual
+## to find out what you can put into this file.
+##
+## Wget initialization file can reside in /usr/local/etc/wgetrc
+## (global, for all users) or $HOME/.wgetrc (for a single user).
+##
+## To use any of the settings in this file, you will have to uncomment
+## them (and probably change them).
+
+
+##
+## Global settings (useful for setting up in /usr/local/etc/wgetrc).
+## Think well before you change them, since they may reduce wget's
+## functionality, and make it behave contrary to the documentation:
+##
+
+# You can set a retrieval quota for beginners by specifying a value
+# optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
+# default quota is unlimited.
+#quota = inf
+
+# You can lower (or raise) the default number of retries when
+# downloading a file (default is 20).
+#tries = 20
+
+# Lowering the maximum depth of the recursive retrieval is handy to
+# prevent newbies from going too "deep" when they unwittingly start
+# the recursive retrieval. The default is 5.
+#reclevel = 5
+
+# Many sites are behind firewalls that do not allow initiation of
+# connections from the outside. On these sites you have to use the
+# `passive' feature of FTP. If you are behind such a firewall, you
+# can turn this on to make Wget use passive FTP by default.
+#passive_ftp = off
+
+
+##
+## Local settings (for a user to set in his $HOME/.wgetrc). It is
+## *highly* undesirable to put these settings in the global file, since
+## they are potentially dangerous to "normal" users.
+##
+## Even when setting up your own ~/.wgetrc, you should know what you
+## are doing before doing so.
+##
+
+# Set this to on to use timestamping by default:
+#timestamping = off
+
+# It is a good idea to make Wget send your email address in a `From:'
+# header with your request (so that server administrators can contact
+# you in case of errors). Wget does *not* send `From:' by default.
+#header = From: Your Name <username@@site.domain>
+
+# You can set up other headers, like Accept-Language. Accept-Language
+# is *not* sent by default.
+#header = Accept-Language: en
+
+# You can set the default proxy for Wget to use. It will override the
+# value in the environment.
+#http_proxy = http://proxy.yoyodyne.com:18023/
+
+# If you do not want to use proxy at all, set this to off.
+#use_proxy = on
+
+# You can customize the retrieval outlook. Valid options are default,
+# binary, mega and micro.
+#dot_style = default
+
+# Setting this to off makes Wget not download /robots.txt. Be sure to
+# know *exactly* what /robots.txt is and how it is used before changing
+# the default!
+#robots = on
+
+# It can be useful to make Wget wait between connections. Set this to
+# the number of seconds you want Wget to wait.
+#wait = 0
+
+# You can force creating directory structure, even if a single file is
+# being retrieved, by setting this to on.
+#dirstruct = off
+
+# You can turn on recursive retrieving by default (don't do this if
+# you are not sure you know what it means) by setting this to on.
+#recursive = off
+
+# To have Wget follow FTP links from HTML files by default, set this
+# to on:
+#follow_ftp = off
+@end example
+
+@node Examples, Various, Startup File, Top
+@chapter Examples
+@cindex examples
+
+The examples are divided into three sections for the sake of clarity.
+The first section is a tutorial for beginners. The second section
+explains some of the more complex program features. The third section
+contains advice for mirror administrators, as well as even more complex
+features (that some would call perverted).
+
+@menu
+* Simple Usage:: Simple, basic usage of the program.
+* Advanced Usage:: Advanced techniques of usage.
+* Guru Usage:: Mirroring and the hairy stuff.
+@end menu
+
+@node Simple Usage, Advanced Usage, Examples, Examples
+@section Simple Usage
+
+@itemize @bullet
+@item
+Say you want to download a @sc{url}. Just type:
+
+@example
+wget http://fly.cc.fer.hr/
+@end example
+
+The response will be something like:
+
+@example
+@group
+--13:30:45-- http://fly.cc.fer.hr:80/en/
+ => `index.html'
+Connecting to fly.cc.fer.hr:80... connected!
+HTTP request sent, awaiting response... 200 OK
+Length: 4,694 [text/html]
+
+ 0K -> .... [100%]
+
+13:30:46 (23.75 KB/s) - `index.html' saved [4694/4694]
+@end group
+@end example
+
+@item
+But what will happen if the connection is slow, and the file is lengthy?
+The connection will probably fail before the whole file is retrieved,
+more than once. In this case, Wget will try getting the file until it
+either gets the whole of it, or exceeds the default number of retries
+(this being 20). It is easy to change the number of tries to 45, to
+ensure that the whole file will arrive safely:
+
+@example
+wget --tries=45 http://fly.cc.fer.hr/jpg/flyweb.jpg
+@end example
+
+@item
+Now let's leave Wget to work in the background, and write its progress
+to log file @file{log}. It is tiring to type @samp{--tries}, so we
+shall use @samp{-t}.
+
+@example
+wget -t 45 -o log http://fly.cc.fer.hr/jpg/flyweb.jpg &
+@end example
+
+The ampersand at the end of the line makes sure that Wget works in the
+background. To unlimit the number of retries, use @samp{-t inf}.
+
+@item
+Using @sc{ftp} is just as simple. Wget will take care of the login and
+password.
+
+@example
+@group
+$ wget ftp://gnjilux.cc.fer.hr/welcome.msg
+--10:08:47-- ftp://gnjilux.cc.fer.hr:21/welcome.msg
+ => `welcome.msg'
+Connecting to gnjilux.cc.fer.hr:21... connected!
+Logging in as anonymous ... Logged in!
+==> TYPE I ... done. ==> CWD not needed.
+==> PORT ... done. ==> RETR welcome.msg ... done.
+Length: 1,340 (unauthoritative)
+
+ 0K -> . [100%]
+
+10:08:48 (1.28 MB/s) - `welcome.msg' saved [1340]
+@end group
+@end example
+
+@item
+If you specify a directory, Wget will retrieve the directory listing,
+parse it and convert it to @sc{html}. Try:
+
+@example
+wget ftp://prep.ai.mit.edu/pub/gnu/
+lynx index.html
+@end example
+@end itemize
+
+@node Advanced Usage, Guru Usage, Simple Usage, Examples
+@section Advanced Usage
+
+@itemize @bullet
+@item
+You would like to read the list of @sc{url}s from a file? No
+problem:
+
+@example
+wget -i file
+@end example
+
+If you specify @samp{-} as the file name, the @sc{url}s will be read from
+standard input.
+
+@item
+Create a mirror image of the GNU @sc{www} site (with the same directory
+structure
+the original has) with only one try per document, saving the log of the
+activities to @file{gnulog}:
+
+@example
+wget -r -t1 http://www.gnu.ai.mit.edu/ -o gnulog
+@end example
+
+@item
+Retrieve the first layer of Yahoo links:
+
+@example
+wget -r -l1 http://www.yahoo.com/
+@end example
+
+@item
+Retrieve @file{index.html} of @samp{www.lycos.com}, showing the original
+server headers:
+
+@example
+wget -S http://www.lycos.com/
+@end example
+
+@item
+Save the server headers with the file:
+@example
+wget -s http://www.lycos.com/
+more index.html
+@end example
+
+@item
+Retrieve the first two levels of @samp{wuarchive.wustl.edu}, saving them
+to @file{/tmp}.
+
+@example
+wget -P/tmp -l2 ftp://wuarchive.wustl.edu/
+@end example
+
+@item
+You want to download all the @sc{gif}s from an @sc{http} directory.
+@samp{wget http://host/dir/*.gif} doesn't work, since @sc{http}
+retrieval does not support globbing. In that case, use:
+
+@example
+wget -r -l1 --no-parent -A.gif http://host/dir/
+@end example
+
+It is a bit of a kludge, but it works. @samp{-r -l1} means to retrieve
+recursively (@xref{Recursive Retrieval}), with maximum depth of 1.
+@samp{--no-parent} means that references to the parent directory are
+ignored (@xref{Directory-Based Limits}), and @samp{-A.gif} means to
+download only the @sc{gif} files. @samp{-A "*.gif"} would have worked
+too.
+
+@item
+Suppose you were in the middle of downloading when Wget was
+interrupted. Now you do not want to clobber the files already present.
+The command would be:
+
+@example
+wget -nc -r http://www.gnu.ai.mit.edu/
+@end example
+
+@item
+If you want to encode your own username and password to @sc{http} or
+@sc{ftp}, use the appropriate @sc{url} syntax (@xref{URL Format}).
+
+@example
+wget ftp://hniksic:mypassword@@jagor.srce.hr/.emacs
+@end example
+
+@item
+If you do not like the default retrieval visualization (1K dots with 10
+dots per cluster and 50 dots per line), you can customize it through dot
+settings (@xref{Wgetrc Commands}). For example, many people like the
+``binary'' style of retrieval, with 8K dots and 512K lines:
+
+@example
+wget --dot-style=binary ftp://prep.ai.mit.edu/pub/gnu/README
+@end example
+
+You can experiment with other styles, like:
+
+@example
+wget --dot-style=mega ftp://ftp.xemacs.org/pub/xemacs/xemacs-20.4/xemacs-20.4.tar.gz
+wget --dot-style=micro http://fly.cc.fer.hr/
+@end example
+
+To make these settings permanent, put them in your @file{.wgetrc}, as
+described before (@xref{Sample Wgetrc}).
+@end itemize
+
+@node Guru Usage, , Advanced Usage, Examples
+@section Guru Usage
+
+@cindex mirroring
+@itemize @bullet
+@item
+If you wish Wget to keep a mirror of a page (or @sc{ftp}
+subdirectories), use @samp{--mirror} (@samp{-m}), which is the shorthand
+for @samp{-r -N}. You can put Wget in the crontab file asking it to
+recheck a site each Sunday:
+
+@example
+crontab
+0 0 * * 0 wget --mirror ftp://ftp.xemacs.org/pub/xemacs/ -o /home/me/weeklog
+@end example
+
+@item
+You may wish to do the same with someone's home page. But you do not
+want to download all those images---you're only interested in @sc{html}.
+
+@example
+wget --mirror -A.html http://www.w3.org/
+@end example
+
+@item
+But what about mirroring the hosts networkologically close to you? It
+seems so awfully slow because of all that @sc{dns} resolving. Just use
+@samp{-D} (@xref{Domain Acceptance}).
+
+@example
+wget -rN -Dsrce.hr http://www.srce.hr/
+@end example
+
+Now Wget will correctly find out that @samp{regoc.srce.hr} is the same
+as @samp{www.srce.hr}, but will not even take into consideration the
+link to @samp{www.mit.edu}.
+
+@item
+You have a presentation and would like the dumb absolute links to be
+converted to relative? Use @samp{-k}:
+
+@example
+wget -k -r @var{URL}
+@end example
+
+@cindex redirecting output
+@item
+You would like the output documents to go to standard output instead of
+to files? OK, but Wget will automatically shut up (turn on
+@samp{--quiet}) to prevent mixing of Wget output and the retrieved
+documents.
+
+@example
+wget -O - http://jagor.srce.hr/ http://www.srce.hr/
+@end example
+
+You can also combine the two options and make weird pipelines to
+retrieve the documents from remote hotlists:
+
+@example
+wget -O - http://cool.list.com/ | wget --force-html -i -
+@end example
+@end itemize
+
+@node Various, Appendices, Examples, Top
+@chapter Various
+@cindex various
+
+This chapter contains all the stuff that could not fit anywhere else.
+
+@menu
+* Proxies:: Support for proxy servers
+* Distribution:: Getting the latest version.
+* Mailing List:: Wget mailing list for announcements and discussion.
+* Reporting Bugs:: How and where to report bugs.
+* Portability:: The systems Wget works on.
+* Signals:: Signal-handling performed by Wget.
+@end menu
+
+@node Proxies, Distribution, Various, Various
+@section Proxies
+@cindex proxies
+
+@dfn{Proxies} are special-purpose @sc{http} servers designed to transfer
+data from remote servers to local clients. One typical use of proxies
+is lightening network load for users behind a slow connection. This is
+achieved by channeling all @sc{http} and @sc{ftp} requests through the
+proxy, which caches the transferred data. When a cached resource is
+requested again, the proxy will return the data from its cache. Another
+use for proxies is for companies that separate (for security reasons)
+their internal networks from the rest of the Internet. In order to obtain
+information from the Web, their users connect and retrieve remote data
+using an authorized proxy.
+
+Wget supports proxies for both @sc{http} and @sc{ftp} retrievals. The
+standard way to specify proxy location, which Wget recognizes, is using
+the following environment variables:
+
+@table @code
+@item http_proxy
+This variable should contain the @sc{url} of the proxy for @sc{http}
+connections.
+
+@item ftp_proxy
+This variable should contain the @sc{url} of the proxy for @sc{ftp}
+connections. It is quite common that @code{http_proxy} and @code{ftp_proxy}
+are set to the same @sc{url}.
+
+@item no_proxy
+This variable should contain a comma-separated list of domain extensions
+the proxy should @emph{not} be used for. For instance, if the value of
+@code{no_proxy} is @samp{.mit.edu}, the proxy will not be used to retrieve
+documents from MIT.
+@end table
+
+In addition to the environment variables, proxy location and settings
+may be specified from within Wget itself.
+
+@table @samp
+@item -Y on/off
+@itemx --proxy=on/off
+@itemx proxy = on/off
+This option may be used to turn the proxy support on or off. Proxy
+support is on by default, provided that the appropriate environment
+variables are set.
+
+@item http_proxy = @var{URL}
+@itemx ftp_proxy = @var{URL}
+@itemx no_proxy = @var{string}
+These startup file variables allow you to override the proxy settings
+specified by the environment.
+@end table
+
+Some proxy servers require authorization to enable you to use them. The
+authorization consists of a @dfn{username} and a @dfn{password}, which must
+be sent by Wget. As with @sc{http} authorization, several
+authentication schemes exist. For proxy authorization only the
+@code{Basic} authentication scheme is currently implemented.
+
+You may specify your username and password either through the proxy
+@sc{url} or through the command-line options. Assuming that the
+company's proxy is located at @samp{proxy.srce.hr} at port 8001, a proxy
+@sc{url} location containing authorization data might look like this:
+
+@example
+http://hniksic:mypassword@@proxy.company.com:8001/
+@end example
+
+Alternatively, you may use the @samp{--proxy-user} and
+@samp{--proxy-passwd} options, and the equivalent @file{.wgetrc}
+settings @code{proxy_user} and @code{proxy_passwd} to set the proxy
+username and password.
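+
+For instance, the following @file{.wgetrc} lines (the values are
+placeholders) would set the proxy username and password:
+
+@example
+proxy_user = hniksic
+proxy_passwd = mypassword
+@end example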
+
+@node Distribution, Mailing List, Proxies, Various
+@section Distribution
+@cindex latest version
+
+Like all GNU utilities, the latest version of Wget can be found at the
+master GNU archive site prep.ai.mit.edu, and its mirrors. For example,
+Wget @value{VERSION} can be found at
+@url{ftp://prep.ai.mit.edu/pub/gnu/wget-@value{VERSION}.tar.gz}.
+
+@node Mailing List, Reporting Bugs, Distribution, Various
+@section Mailing List
+@cindex mailing list
+@cindex list
+
+Wget has its own mailing list at @email{wget@@sunsite.auc.dk}, thanks
+to Karsten Thygesen. The mailing list is for discussion of Wget
+features and the Web, reporting Wget bugs (those that you think may be of
+interest to the public) and mailing announcements. You are welcome to
+subscribe. The more people on the list, the better!
+
+To subscribe, send mail to @email{wget-subscribe@@sunsite.auc.dk} with
+the magic word @samp{subscribe} in the subject line. Unsubscribe by
+mailing to @email{wget-unsubscribe@@sunsite.auc.dk}.
+
+The mailing list is archived at @url{http://fly.cc.fer.hr/archive/wget}.
+
+@node Reporting Bugs, Portability, Mailing List, Various
+@section Reporting Bugs
+@cindex bugs
+@cindex reporting bugs
+@cindex bug reports
+
+You are welcome to send bug reports about GNU Wget to
+@email{bug-wget@@gnu.org}. The bugs that you think are of
+interest to the public (i.e. more people should be informed about them)
+can be Cc-ed to the mailing list at @email{wget@@sunsite.auc.dk}.
+
+Before actually submitting a bug report, please try to follow a few
+simple guidelines.
+
+@enumerate
+@item
+Please try to ascertain that the behaviour you see really is a bug. If
+Wget crashes, it's a bug. If Wget does not behave as documented,
+it's a bug. If things work strangely, but you are not sure about the way
+they are supposed to work, it might well be a bug.
+
+@item
+Try to repeat the bug in as simple circumstances as possible. E.g. if
+Wget crashes on @samp{wget -rLl0 -t5 -Y0 http://yoyodyne.com -o
+/tmp/log}, you should try to see if it will crash with a simpler set of
+options.
+
+Also, while I will probably be interested to know the contents of your
+@file{.wgetrc} file, just dumping it into the debug message is probably
+a bad idea. Instead, you should first try to see if the bug repeats
+with @file{.wgetrc} moved out of the way. Only if it turns out that
+@file{.wgetrc} settings affect the bug, should you mail me the relevant
+parts of the file.
+
+@item
+Please start Wget with @samp{-d} option and send the log (or the
+relevant parts of it). If Wget was compiled without debug support,
+recompile it. It is @emph{much} easier to trace bugs with debug support
+on.
+
+@item
+If Wget has crashed, try to run it in a debugger, e.g. @code{gdb `which
+wget` core} and type @code{where} to get the backtrace.
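+
+Such a debugging session might look like this (assuming the @file{core}
+file is in the current directory):
+
+@example
+$ gdb `which wget` core
+(gdb) where
+@end example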
+
+@item
+Find where the bug is, fix it and send me the patches. :-)
+@end enumerate
+
+@node Portability, Signals, Reporting Bugs, Various
+@section Portability
+@cindex portability
+@cindex operating systems
+
+Since Wget uses GNU Autoconf for building and configuring, and avoids
+using ``special'' ultra--mega--cool features of any particular Unix, it
+should compile (and work) on all common Unix flavors.
+
+Various Wget versions have been compiled and tested under many kinds of
+Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital Unix),
+Ultrix, *BSD, IRIX, and others; refer to the file @file{MACHINES} in the
+distribution directory for a comprehensive list. If you compile it on
+an architecture not listed there, please let me know so I can update it.
+
+Wget should also compile on other Unix systems not listed in
+@file{MACHINES}. If it doesn't, please let me know.
+
+Thanks to kind contributors, this version of Wget compiles and works on
+Microsoft Windows 95 and Windows NT platforms. It has been compiled
+successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
+with Winsock as networking software. Naturally, it lacks some
+features available on Unix, but it should work as a substitute for
+people stuck with Windows. Note that the Windows port is
+@strong{neither tested nor maintained} by me---all questions and
+problems should be reported to the Wget mailing list at
+@email{wget@@sunsite.auc.dk}, where the maintainers will look at them.
+
+@node Signals, , Portability, Various
+@section Signals
+@cindex signal handling
+@cindex hangup
+
+Since the purpose of Wget is background work, it catches the hangup
+signal (@code{SIGHUP}). If the output was going to standard output, it
+will be redirected to a file named @file{wget-log}. Otherwise,
+@code{SIGHUP} is ignored. This is convenient when you wish
+to redirect the output of Wget after having started it.
+
+@example
+$ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &
+$ kill -HUP %% # Redirect the output to wget-log
+@end example
+
+Other than that, Wget will not try to interfere with signals in any
+way. @kbd{C-c}, @code{kill -TERM} and @code{kill -KILL} should kill it
+alike.
+
+@node Appendices, Copying, Various, Top
+@chapter Appendices
+
+This chapter contains some references I consider useful, like the Robots
+Exclusion Standard specification, as well as a list of contributors to
+GNU Wget.
+
+@menu
+* Robots:: Wget as a WWW robot.
+* Security Considerations:: Security with Wget.
+* Contributors:: People who helped.
+@end menu
+
+@node Robots, Security Considerations, Appendices, Appendices
+@section Robots
+@cindex robots
+@cindex robots.txt
+@cindex server maintenance
+
+Since Wget is able to traverse the web, it counts as one of the Web
+@dfn{robots}. Thus Wget understands the @dfn{Robots Exclusion Standard}
+(@sc{res})---the contents of @file{/robots.txt}, used by server
+administrators to shield parts of their systems from the wanderings of
+Wget.
+
+Norobots support is turned on only when retrieving recursively, and
+@emph{never} for the first page. Thus, you may issue:
+
+@example
+wget -r http://fly.cc.fer.hr/
+@end example
+
+First the index of fly.cc.fer.hr will be downloaded. If Wget finds
+anything worth downloading on the same host, only @emph{then} will it
+load the robots, and decide whether or not to load the links after all.
+@file{/robots.txt} is loaded only once per host. Wget does not support
+the robots @code{META} tag.
+
+The description of the norobots standard was written, and is
+maintained, by Martijn Koster @email{m.koster@@webcrawler.com}. With his
+permission, I contribute a (slightly modified) texified version of the
+@sc{res}.
+
+@menu
+* Introduction to RES::
+* RES Format::
+* User-Agent Field::
+* Disallow Field::
+* Norobots Examples::
+@end menu
+
+@node Introduction to RES, RES Format, Robots, Robots
+@subsection Introduction to RES
+@cindex norobots introduction
+
+@dfn{WWW Robots} (also called @dfn{wanderers} or @dfn{spiders}) are
+programs that traverse many pages in the World Wide Web by recursively
+retrieving linked pages. For more information see the robots page.
+
+In 1993 and 1994 there were occasions where robots visited
+@sc{www} servers where they weren't welcome for various
+reasons. Sometimes these reasons were robot specific, e.g. certain
+robots swamped servers with rapid-fire requests, or retrieved the same
+files repeatedly. In other situations robots traversed parts of @sc{www}
+servers that weren't suitable, e.g. very deep virtual trees, duplicated
+information, temporary information, or cgi-scripts with side-effects
+(such as voting).
+
+These incidents indicated the need for established mechanisms for
+@sc{www} servers to indicate to robots which parts of their server
+should not be accessed. This standard addresses this need with an
+operational solution.
+
+This document represents a consensus on 30 June 1994 on the robots
+mailing list (@code{robots@@webcrawler.com}), between the majority of
+robot authors and other people with an interest in robots. It has also
+been open for discussion on the Technical World Wide Web mailing list
+(@code{www-talk@@info.cern.ch}). This document is based on a previous
+working draft under the same title.
+
+It is not an official standard backed by a standards body, or owned by
+any commercial organization. It is not enforced by anybody, and there is
+no guarantee that all current and future robots will use it. Consider
+it a common facility the majority of robot authors offer the @sc{www}
+community to protect @sc{www} servers against unwanted accesses by their
+robots.
+
+The latest version of this document can be found at
+@url{http://info.webcrawler.com/mak/projects/robots/norobots.html}.
+
+@node RES Format, User-Agent Field, Introduction to RES, Robots
+@subsection RES Format
+@cindex norobots format
+
+The format and semantics of the @file{/robots.txt} file are as follows:
+
+The file consists of one or more records separated by one or more blank
+lines (terminated by @code{CR}, @code{CR/NL}, or @code{NL}). Each
+record contains lines of the form:
+
+@example
+<field>:<optionalspace><value><optionalspace>
+@end example
+
+The field name is case insensitive.
+
+Comments can be included in the file using Unix Bourne shell conventions:
+the @samp{#} character is used to indicate that the preceding space (if any)
+and the remainder of the line up to the line termination is discarded.
+Lines containing only a comment are discarded completely, and therefore
+do not indicate a record boundary.
+
+The record starts with one or more User-agent lines, followed by one or
+more Disallow lines, as detailed below. Unrecognized headers are
+ignored.
+
+The presence of an empty @file{/robots.txt} file has no explicit
+associated semantics; it will be treated as if it were not present,
+i.e. all robots will consider themselves welcome.
+
+@node User-Agent Field, Disallow Field, RES Format, Robots
+@subsection User-Agent Field
+@cindex norobots user-agent
+
+The value of this field is the name of the robot the record is
+describing access policy for.
+
+If more than one User-agent field is present, the record describes an
+identical access policy for more than one robot. At least one field
+needs to be present per record.
+
+The robot should be liberal in interpreting this field. A case
+insensitive substring match of the name without version information is
+recommended.
+
+If the value is @samp{*}, the record describes the default access policy
+for any robot that has not matched any of the other records. It is not
+allowed to have multiple such records in the @file{/robots.txt} file.
+
+@node Disallow Field, Norobots Examples, User-Agent Field, Robots
+@subsection Disallow Field
+@cindex norobots disallow
+
+The value of this field specifies a partial @sc{url} that is not to be
+visited. This can be a full path, or a partial path; any @sc{url} that
+starts with this value will not be retrieved. For example,
+@w{@samp{Disallow: /help}} disallows both @samp{/help.html} and
+@samp{/help/index.html}, whereas @w{@samp{Disallow: /help/}} would
+disallow @samp{/help/index.html} but allow @samp{/help.html}.
+
+An empty value indicates that all @sc{url}s can be retrieved. At least
+one Disallow field must be present in a record.
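The prefix semantics above can be written as a single predicate (an illustrative Python sketch under the rules just stated, not Wget's actual implementation; the function name is invented):

```python
def is_disallowed(path, disallow_value):
    """True if `path` may not be retrieved under this Disallow rule.
    An empty value permits everything; otherwise any path that starts
    with the value is excluded."""
    return disallow_value != "" and path.startswith(disallow_value)
```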
+
+@node Norobots Examples, , Disallow Field, Robots
+@subsection Norobots Examples
+@cindex norobots examples
+
+The following example @samp{/robots.txt} file specifies that no robots
+should visit any @sc{url} starting with @samp{/cyberworld/map/} or
+@samp{/tmp/}:
+
+@example
+# robots.txt for http://www.site.com/
+
+User-agent: *
+Disallow: /cyberworld/map/ # This is an infinite virtual URL space
+Disallow: /tmp/ # these will soon disappear
+@end example
+
+This example @samp{/robots.txt} file specifies that no robots should
+visit any @sc{url} starting with @samp{/cyberworld/map/}, except the
+robot called @samp{cybermapper}:
+
+@example
+# robots.txt for http://www.site.com/
+
+User-agent: *
+Disallow: /cyberworld/map/ # This is an infinite virtual URL space
+
+# Cybermapper knows where to go.
+User-agent: cybermapper
+Disallow:
+@end example
+
+This example indicates that no robots should visit this site further:
+
+@example
+# go away
+User-agent: *
+Disallow: /
+@end example
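Putting the rules of this appendix together, a minimal parser for the record format might look like this (an illustrative Python sketch, not Wget's parser; the sample text is the second example above):

```python
EXAMPLE = """\
# robots.txt for http://www.site.com/

User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space

# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
"""

def parse_robots(text):
    """Split a /robots.txt body into records of User-agent and
    Disallow values.  '#' starts a comment; comment-only lines are
    discarded entirely and do not end a record; blank lines do.
    Field names are case insensitive; unknown fields are ignored."""
    records, agents, disallow = [], [], []
    def flush():
        if agents or disallow:
            records.append({"agents": list(agents),
                            "disallow": list(disallow)})
            agents.clear()
            disallow.clear()
    for raw in text.splitlines():
        code = raw.split("#", 1)[0]
        if "#" in raw and code.strip() == "":
            continue                 # comment-only line: not a boundary
        if code.strip() == "":
            flush()                  # blank line ends the record
            continue
        field, _, value = code.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            agents.append(value)
        elif field == "disallow":
            disallow.append(value)
        # any other field name is silently ignored
    flush()
    return records

records = parse_robots(EXAMPLE)
```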
+
+@node Security Considerations, Contributors, Robots, Appendices
+@section Security Considerations
+@cindex security
+
+When using Wget, you must be aware that it sends unencrypted passwords
+through the network, which may present a security problem. Here are the
+main issues, and some solutions.
+
+@enumerate
+@item
+The passwords on the command line are visible using @code{ps}. If this
+is a problem, avoid putting passwords on the command line---for
+instance, you can use @file{.netrc} for this purpose.
+
+@item
+If you use the insecure @dfn{basic} authentication scheme, unencrypted
+passwords are transmitted through the network routers and gateways.
+
+@item
+The @sc{ftp} passwords are likewise not encrypted in any way. There is
+no good solution for this at the moment.
+
+@item
+Although the ``normal'' output of Wget tries to hide the passwords,
+debugging logs show them, in all forms. You can avoid this problem by
+being careful when you send debug logs (yes, even when you send them to
+me).
+@end enumerate
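As an example of the first point, a @file{.netrc} entry of the following form lets you keep the password off the command line (the host name and credentials here are placeholders):

```
machine ftp.example.com
login myusername
password mysecret
```

The file should normally be readable only by you (e.g. @samp{chmod 600 ~/.netrc}).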
+
+@node Contributors, , Security Considerations, Appendices
+@section Contributors
+@cindex contributors
+
+@iftex
+GNU Wget was written by Hrvoje Nik@v{s}i@'{c} @email{hniksic@@srce.hr}.
+@end iftex
+@ifinfo
+GNU Wget was written by Hrvoje Niksic @email{hniksic@@srce.hr}.
+@end ifinfo
+However, its development could never have gone as far as it has, were it
+not for the help of many people, either with bug reports, feature
+proposals, patches, or letters saying ``Thanks!''.
+
+Special thanks go to the following people (in no particular order):
+
+@itemize @bullet
+@item
+Karsten Thygesen---donated the mailing list and the initial @sc{ftp}
+space.
+
+@item
+Shawn McHorse---bug reports and patches.
+
+@item
+Kaveh R. Ghazi---on-the-fly @code{ansi2knr}-ization.
+
+@item
+Gordon Matzigkeit---@file{.netrc} support.
+
+@item
+@iftex
+Zlatko @v{C}alu@v{s}i@'{c}, Tomislav Vujec and Dra@v{z}en
+Ka@v{c}ar---feature suggestions and ``philosophical'' discussions.
+@end iftex
+@ifinfo
+Zlatko Calusic, Tomislav Vujec and Drazen Kacar---feature suggestions
+and ``philosophical'' discussions.
+@end ifinfo
+
+@item
+Darko Budor---initial port to Windows.
+
+@item
+Antonio Rosella---help and suggestions, plus the Italian translation.
+
+@item
+@iftex
+Tomislav Petrovi@'{c}, Mario Miko@v{c}evi@'{c}---many bug reports and
+suggestions.
+@end iftex
+@ifinfo
+Tomislav Petrovic, Mario Mikocevic---many bug reports and suggestions.
+@end ifinfo
+
+@item
+@iftex
+Fran@,{c}ois Pinard---many thorough bug reports and discussions.
+@end iftex
+@ifinfo
+Francois Pinard---many thorough bug reports and discussions.
+@end ifinfo
+
+@item
+Karl Eichwalder---lots of help with internationalization and other
+things.
+
+@item
+Junio Hamano---donated support for Opie and @sc{http} @code{Digest}
+authentication.
+
+@item
+Brian Gough---a generous donation.
+@end itemize
+
+The following people have provided patches, bug/build reports, useful
+suggestions, beta testing services, fan mail and all the other things
+that make maintenance so much fun:
+
+Tim Adam,
+Martin Baehr,
+Dieter Baron,
+Roger Beeman and the Gurus at Cisco,
+Mark Boyns,
+John Burden,
+Wanderlei Cavassin,
+Gilles Cedoc,
+Tim Charron,
+Noel Cragg,
+@iftex
+Kristijan @v{C}onka@v{s},
+@end iftex
+@ifinfo
+Kristijan Conkas,
+@end ifinfo
+@iftex
+Damir D@v{z}eko,
+@end iftex
+@ifinfo
+Damir Dzeko,
+@end ifinfo
+Andrew Davison,
+Ulrich Drepper,
+Marc Duponcheel,
+@iftex
+Aleksandar Erkalovi@'{c},
+@end iftex
+@ifinfo
+Aleksandar Erkalovic,
+@end ifinfo
+Andy Eskilsson,
+Masashi Fujita,
+Howard Gayle,
+Marcel Gerrits,
+Hans Grobler,
+Mathieu Guillaume,
+Karl Heuer,
+Gregor Hoffleit,
+Erik Magnus Hulthen,
+Richard Huveneers,
+Simon Josefsson,
+@iftex
+Mario Juri@'{c},
+@end iftex
+@ifinfo
+Mario Juric,
+@end ifinfo
+@iftex
+Goran Kezunovi@'{c},
+@end iftex
+@ifinfo
+Goran Kezunovic,
+@end ifinfo
+Robert Kleine,
+Fila Kolodny,
+Alexander Kourakos,
+Martin Kraemer,
+@tex
+$\Sigma\acute{\iota}\mu o\varsigma\;
+\Xi\varepsilon\nu\iota\tau\acute{\epsilon}\lambda\lambda\eta\varsigma$
+(Simos KSenitellis),
+@end tex
+@ifinfo
+Simos KSenitellis,
+@end ifinfo
+Tage Stabell-Kulo,
+Hrvoje Lacko,
+Dave Love,
+Jordan Mendelson,
+Lin Zhe Min,
+Charlie Negyesi,
+Andrew Pollock,
+Steve Pothier,
+Marin Purgar,
+Jan Prikryl,
+Keith Refson,
+Tobias Ringstrom,
+@c Texinfo doesn't grok @'{@i}, so we have to use TeX itself.
+@tex
+Juan Jos\'{e} Rodr\'{\i}gues,
+@end tex
+@ifinfo
+Juan Jose Rodrigues,
+@end ifinfo
+Heinz Salzmann,
+Robert Schmidt,
+Toomas Soome,
+Sven Sternberger,
+Markus Strasser,
+Szakacsits Szabolcs,
+Mike Thomas,
+Russell Vincent,
+Douglas E. Wegscheid,
+Jasmin Zainul,
+@iftex
+Bojan @v{Z}drnja,
+@end iftex
+@ifinfo
+Bojan Zdrnja,
+@end ifinfo
+Kristijan Zimmer.
+
+Apologies to all whom I accidentally left out, and many thanks to all
+the subscribers of the Wget mailing list.
+
+@node Copying, Concept Index, Appendices, Top
+@unnumbered GNU GENERAL PUBLIC LICENSE
+@cindex copying
+@cindex GPL
+@center Version 2, June 1991
+
+@display
+Copyright @copyright{} 1989, 1991 Free Software Foundation, Inc.
+675 Mass Ave, Cambridge, MA 02139, USA
+
+Everyone is permitted to copy and distribute verbatim copies
+of this license document, but changing it is not allowed.
+@end display
+
+@unnumberedsec Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software---to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+@iftex
+@unnumberedsec TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+@end iftex
+@ifinfo
+@center TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+@end ifinfo
+
+@enumerate
+@item
+This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License. The ``Program'', below,
+refers to any such program or work, and a ``work based on the Program''
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language. (Hereinafter, translation is included without limitation in
+the term ``modification''.) Each licensee is addressed as ``you''.
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+@item
+You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+@item
+You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+@enumerate a
+@item
+You must cause the modified files to carry prominent notices
+stating that you changed the files and the date of any change.
+
+@item
+You must cause any work that you distribute or publish, that in
+whole or in part contains or is derived from the Program or any
+part thereof, to be licensed as a whole at no charge to all third
+parties under the terms of this License.
+
+@item
+If the modified program normally reads commands interactively
+when run, you must cause it, when started running for such
+interactive use in the most ordinary way, to print or display an
+announcement including an appropriate copyright notice and a
+notice that there is no warranty (or else, saying that you provide
+a warranty) and that users may redistribute the program under
+these conditions, and telling the user how to view a copy of this
+License. (Exception: if the Program itself is interactive but
+does not normally print such an announcement, your work based on
+the Program is not required to print an announcement.)
+@end enumerate
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+@item
+You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+@enumerate a
+@item
+Accompany it with the complete corresponding machine-readable
+source code, which must be distributed under the terms of Sections
+1 and 2 above on a medium customarily used for software interchange; or,
+
+@item
+Accompany it with a written offer, valid for at least three
+years, to give any third party, for a charge no more than your
+cost of physically performing source distribution, a complete
+machine-readable copy of the corresponding source code, to be
+distributed under the terms of Sections 1 and 2 above on a medium
+customarily used for software interchange; or,
+
+@item
+Accompany it with the information you received as to the offer
+to distribute corresponding source code. (This alternative is
+allowed only for noncommercial distribution and only if you
+received the program in object code or executable form with such
+an offer, in accord with Subsection b above.)
+@end enumerate
+
+The source code for a work means the preferred form of the work for
+making modifications to it. For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable. However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+@item
+You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+@item
+You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Program or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+@item
+Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+@item
+If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all. For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+@item
+If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+@item
+The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of this License which applies to it and ``any
+later version'', you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+@item
+If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+@iftex
+@heading NO WARRANTY
+@end iftex
+@ifinfo
+@center NO WARRANTY
+@end ifinfo
+@cindex no warranty
+
+@item
+BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM ``AS IS'' WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+@item
+IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+@end enumerate
+
+@iftex
+@heading END OF TERMS AND CONDITIONS
+@end iftex
+@ifinfo
+@center END OF TERMS AND CONDITIONS
+@end ifinfo
+
+@page
+@unnumberedsec How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the ``copyright'' line and a pointer to where the full notice is found.
+
+@smallexample
+@var{one line to give the program's name and an idea of what it does.}
+Copyright (C) 19@var{yy} @var{name of author}
+
+This program is free software; you can redistribute it and/or
+modify it under the terms of the GNU General Public License
+as published by the Free Software Foundation; either version 2
+of the License, or (at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+@end smallexample
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+@smallexample
+Gnomovision version 69, Copyright (C) 19@var{yy} @var{name of author}
+Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
+type `show w'. This is free software, and you are welcome
+to redistribute it under certain conditions; type `show c'
+for details.
+@end smallexample
+
+The hypothetical commands @samp{show w} and @samp{show c} should show
+the appropriate parts of the General Public License. Of course, the
+commands you use may be called something other than @samp{show w} and
+@samp{show c}; they could even be mouse-clicks or menu items---whatever
+suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a ``copyright disclaimer'' for the program, if
+necessary. Here is a sample; alter the names:
+
+@smallexample
+@group
+Yoyodyne, Inc., hereby disclaims all copyright
+interest in the program `Gnomovision'
+(which makes passes at compilers) written
+by James Hacker.
+
+@var{signature of Ty Coon}, 1 April 1989
+Ty Coon, President of Vice
+@end group
+@end smallexample
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Library General
+Public License instead of this License.
+
+@node Concept Index, , Copying, Top
+@unnumbered Concept Index
+@printindex cp
+
+@contents
+
+@bye
--- /dev/null
+#! /bin/sh
+#
+# install - install a program, script, or datafile
+# This comes from X11R5 (mit/util/scripts/install.sh).
+#
+# Copyright 1991 by the Massachusetts Institute of Technology
+#
+# Permission to use, copy, modify, distribute, and sell this software and its
+# documentation for any purpose is hereby granted without fee, provided that
+# the above copyright notice appear in all copies and that both that
+# copyright notice and this permission notice appear in supporting
+# documentation, and that the name of M.I.T. not be used in advertising or
+# publicity pertaining to distribution of the software without specific,
+# written prior permission. M.I.T. makes no representations about the
+# suitability of this software for any purpose. It is provided "as is"
+# without express or implied warranty.
+#
+# Calling this script install-sh is preferred over install.sh, to prevent
+# `make' implicit rules from creating a file called install from it
+# when there is no Makefile.
+#
+# This script is compatible with the BSD install script, but was written
+# from scratch. It can only install one file at a time, a restriction
+# shared with many OS's install programs.
+
+
+# set DOITPROG to echo to test this script
+
+# Don't use :- since 4.3BSD and earlier shells don't like it.
+doit="${DOITPROG-}"
+
+
+# put in absolute paths if you don't have them in your path; or use env. vars.
+
+mvprog="${MVPROG-mv}"
+cpprog="${CPPROG-cp}"
+chmodprog="${CHMODPROG-chmod}"
+chownprog="${CHOWNPROG-chown}"
+chgrpprog="${CHGRPPROG-chgrp}"
+stripprog="${STRIPPROG-strip}"
+rmprog="${RMPROG-rm}"
+mkdirprog="${MKDIRPROG-mkdir}"
+
+transformbasename=""
+transformarg=""
+instcmd="$mvprog"
+chmodcmd="$chmodprog 0755"
+chowncmd=""
+chgrpcmd=""
+stripcmd=""
+rmcmd="$rmprog -f"
+mvcmd="$mvprog"
+src=""
+dst=""
+dir_arg=""
+
+while [ x"$1" != x ]; do
+ case $1 in
+ -c) instcmd="$cpprog"
+ shift
+ continue;;
+
+ -d) dir_arg=true
+ shift
+ continue;;
+
+ -m) chmodcmd="$chmodprog $2"
+ shift
+ shift
+ continue;;
+
+ -o) chowncmd="$chownprog $2"
+ shift
+ shift
+ continue;;
+
+ -g) chgrpcmd="$chgrpprog $2"
+ shift
+ shift
+ continue;;
+
+ -s) stripcmd="$stripprog"
+ shift
+ continue;;
+
+ -t=*) transformarg=`echo $1 | sed 's/-t=//'`
+ shift
+ continue;;
+
+ -b=*) transformbasename=`echo $1 | sed 's/-b=//'`
+ shift
+ continue;;
+
+ *) if [ x"$src" = x ]
+ then
+ src=$1
+ else
+ # this colon is to work around a 386BSD /bin/sh bug
+ :
+ dst=$1
+ fi
+ shift
+ continue;;
+ esac
+done
+
+if [ x"$src" = x ]
+then
+ echo "install: no input file specified"
+ exit 1
+else
+ true
+fi
+
+if [ x"$dir_arg" != x ]; then
+ dst=$src
+ src=""
+
+ if [ -d $dst ]; then
+ instcmd=:
+ else
+ instcmd=mkdir
+ fi
+else
+
+# Waiting for this to be detected by the "$instcmd $src $dsttmp" command
+# might cause directories to be created, which would be especially bad
+# if $src (and thus $dsttmp) contains '*'.
+
+ if [ -f $src -o -d $src ]
+ then
+ true
+ else
+ echo "install: $src does not exist"
+ exit 1
+ fi
+
+ if [ x"$dst" = x ]
+ then
+ echo "install: no destination specified"
+ exit 1
+ else
+ true
+ fi
+
+# If destination is a directory, append the input filename; if your system
+# does not like double slashes in filenames, you may need to add some logic
+
+ if [ -d $dst ]
+ then
+ dst="$dst"/`basename $src`
+ else
+ true
+ fi
+fi
+
+## this sed command emulates the dirname command
+dstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'`
+
+# Make sure that the destination directory exists.
+# this part is taken from Noah Friedman's mkinstalldirs script
+
+# Skip lots of stat calls in the usual case.
+if [ ! -d "$dstdir" ]; then
+defaultIFS='
+'
+IFS="${IFS-${defaultIFS}}"
+
+oIFS="${IFS}"
+# Some sh's can't handle IFS=/ for some reason.
+IFS='%'
+set - `echo ${dstdir} | sed -e 's@/@%@g' -e 's@^%@/@'`
+IFS="${oIFS}"
+
+pathcomp=''
+
+while [ $# -ne 0 ] ; do
+ pathcomp="${pathcomp}${1}"
+ shift
+
+ if [ ! -d "${pathcomp}" ] ;
+ then
+ $mkdirprog "${pathcomp}"
+ else
+ true
+ fi
+
+ pathcomp="${pathcomp}/"
+done
+fi
+
+if [ x"$dir_arg" != x ]
+then
+ $doit $instcmd $dst &&
+
+ if [ x"$chowncmd" != x ]; then $doit $chowncmd $dst; else true ; fi &&
+ if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dst; else true ; fi &&
+ if [ x"$stripcmd" != x ]; then $doit $stripcmd $dst; else true ; fi &&
+ if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dst; else true ; fi
+else
+
+# If we're going to rename the final executable, determine the name now.
+
+ if [ x"$transformarg" = x ]
+ then
+ dstfile=`basename $dst`
+ else
+ dstfile=`basename $dst $transformbasename |
+ sed $transformarg`$transformbasename
+ fi
+
+# don't allow the sed command to completely eliminate the filename
+
+ if [ x"$dstfile" = x ]
+ then
+ dstfile=`basename $dst`
+ else
+ true
+ fi
+
+# Make a temp file name in the proper directory.
+
+ dsttmp=$dstdir/#inst.$$#
+
+# Move or copy the file name to the temp name
+
+ $doit $instcmd $src $dsttmp &&
+
+ trap "rm -f ${dsttmp}" 0 &&
+
+# and set any options; do chmod last to preserve setuid bits
+
+# If any of these fail, we abort the whole thing. If we want to
+# ignore errors from any of these, just make sure not to ignore
+# errors from the above "$doit $instcmd $src $dsttmp" command.
+
+ if [ x"$chowncmd" != x ]; then $doit $chowncmd $dsttmp; else true;fi &&
+ if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dsttmp; else true;fi &&
+ if [ x"$stripcmd" != x ]; then $doit $stripcmd $dsttmp; else true;fi &&
+ if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dsttmp; else true;fi &&
+
+# Now rename the file to the real destination.
+
+ $doit $rmcmd -f $dstdir/$dstfile &&
+ $doit $mvcmd $dsttmp $dstdir/$dstfile
+
+fi &&
+
+
+exit 0
--- /dev/null
+#! /bin/sh
+# mkinstalldirs --- make directory hierarchy
+# Author: Noah Friedman <friedman@prep.ai.mit.edu>
+# Created: 1993-05-16
+# Public domain
+
+# $Id: mkinstalldirs 2 1999-12-02 07:42:23Z kwget $
+
+errstatus=0
+
+for file
+do
+ set fnord `echo ":$file" | sed -ne 's/^:\//#/;s/^://;s/\// /g;s/^#/\//;p'`
+ shift
+
+ pathcomp=
+ for d
+ do
+ pathcomp="$pathcomp$d"
+ case "$pathcomp" in
+ -* ) pathcomp=./$pathcomp ;;
+ esac
+
+ if test ! -d "$pathcomp"; then
+ echo "mkdir $pathcomp" 1>&2
+
+ mkdir "$pathcomp" || lasterr=$?
+
+ if test ! -d "$pathcomp"; then
+ errstatus=$lasterr
+ fi
+ fi
+
+ pathcomp="$pathcomp/"
+ done
+done
+
+exit $errstatus
+
+# mkinstalldirs ends here
--- /dev/null
+# Makefile for program source directory in GNU NLS utilities package.
+# Copyright (C) 1995, 1996, 1997 by Ulrich Drepper <drepper@gnu.ai.mit.edu>
+#
+# This file may be copied and used freely without restrictions. It can
+# be used in projects which are not available under the GNU Public License
+# but which still want to provide support for the GNU gettext functionality.
+# Please note that the actual code is *not* freely available.
+
+PACKAGE = @PACKAGE@
+VERSION = @VERSION@
+
+SHELL = /bin/sh
+@SET_MAKE@
+
+srcdir = @srcdir@
+top_srcdir = @top_srcdir@
+VPATH = @srcdir@
+
+prefix = @prefix@
+exec_prefix = @exec_prefix@
+datadir = $(prefix)/@DATADIRNAME@
+localedir = $(datadir)/locale
+gnulocaledir = $(prefix)/share/locale
+gettextsrcdir = $(prefix)/share/gettext/po
+subdir = po
+
+INSTALL = @INSTALL@
+INSTALL_DATA = @INSTALL_DATA@
+
+CC = @CC@
+GMSGFMT = PATH=../src:$$PATH @GMSGFMT@
+MSGFMT = @MSGFMT@
+XGETTEXT = PATH=../src:$$PATH @XGETTEXT@
+MSGMERGE = PATH=../src:$$PATH msgmerge
+
+DEFS = @DEFS@
+CFLAGS = @CFLAGS@
+CPPFLAGS = @CPPFLAGS@
+
+INCLUDES = -I.. -I$(top_srcdir)/intl
+
+COMPILE = $(CC) -c $(DEFS) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) $(XCFLAGS)
+
+POFILES = @POFILES@
+GMOFILES = @GMOFILES@
+DISTFILES = ChangeLog Makefile.in.in POTFILES.in $(PACKAGE).pot \
+$(POFILES) $(GMOFILES) $(SOURCES)
+
+POTFILES = \
+
+CATALOGS = @CATALOGS@
+CATOBJEXT = @CATOBJEXT@
+INSTOBJEXT = @INSTOBJEXT@
+
+.SUFFIXES:
+.SUFFIXES: .c .o .po .pox .gmo .mo .msg
+
+.c.o:
+ $(COMPILE) $<
+
+.po.pox:
+ $(MAKE) $(PACKAGE).pot
+ $(MSGMERGE) $< $(srcdir)/$(PACKAGE).pot -o $*.pox
+
+.po.mo:
+ $(MSGFMT) -o $@ $<
+
+.po.gmo:
+ file=$(srcdir)/`echo $* | sed 's,.*/,,'`.gmo \
+ && rm -f $$file && $(GMSGFMT) -o $$file $<
+
+
+all: all-@USE_NLS@
+
+all-yes: $(CATALOGS)
+all-no:
+
+$(srcdir)/$(PACKAGE).pot: $(POTFILES)
+ $(XGETTEXT) --default-domain=$(PACKAGE) --directory=$(top_srcdir) \
+ --add-comments --keyword=_ --keyword=N_ \
+ --files-from=$(srcdir)/POTFILES.in
+ rm -f $(srcdir)/$(PACKAGE).pot
+ mv $(PACKAGE).po $(srcdir)/$(PACKAGE).pot
+
+install.mo: install
+install: install-exec install-data
+install-exec:
+install-data: install-data-@USE_NLS@
+install-data-no: all
+install-data-yes: all
+ @catalogs='$(CATALOGS)'; \
+ for cat in $$catalogs; do \
+ cat=`basename $$cat`; \
+ case "$$cat" in \
+ *.gmo) destdir=$(gnulocaledir);; \
+ *) destdir=$(localedir);; \
+ esac; \
+ lang=`echo $$cat | sed 's/\$(CATOBJEXT)$$//'`; \
+ dir=$$destdir/$$lang/LC_MESSAGES; \
+ $(top_srcdir)/mkinstalldirs $$dir; \
+ if test -r $$cat; then \
+ $(INSTALL_DATA) $$cat $$dir/$(PACKAGE)$(INSTOBJEXT); \
+ echo "installing $$cat as $$dir/$(PACKAGE)$(INSTOBJEXT)"; \
+ else \
+ $(INSTALL_DATA) $(srcdir)/$$cat $$dir/$(PACKAGE)$(INSTOBJEXT); \
+ echo "installing $(srcdir)/$$cat as" \
+ "$$dir/$(PACKAGE)$(INSTOBJEXT)"; \
+ fi; \
+ if test -r $$cat.m; then \
+ $(INSTALL_DATA) $$cat.m $$dir/$(PACKAGE)$(INSTOBJEXT).m; \
+ echo "installing $$cat.m as $$dir/$(PACKAGE)$(INSTOBJEXT).m"; \
+ else \
+ if test -r $(srcdir)/$$cat.m ; then \
+ $(INSTALL_DATA) $(srcdir)/$$cat.m \
+ $$dir/$(PACKAGE)$(INSTOBJEXT).m; \
+ echo "installing $(srcdir)/$$cat as" \
+ "$$dir/$(PACKAGE)$(INSTOBJEXT).m"; \
+ else \
+ true; \
+ fi; \
+ fi; \
+ done
+ if test "$(PACKAGE)" = "gettext"; then \
+ $(INSTALL_DATA) $(srcdir)/Makefile.in.in \
+ $(gettextsrcdir)/Makefile.in.in; \
+ else \
+ : ; \
+ fi
+
+# Define this as empty until I find a useful application.
+installcheck:
+
+uninstall:
+ catalogs='$(CATALOGS)'; \
+ for cat in $$catalogs; do \
+ cat=`basename $$cat`; \
+ lang=`echo $$cat | sed 's/\$(CATOBJEXT)$$//'`; \
+ rm -f $(localedir)/$$lang/LC_MESSAGES/$(PACKAGE)$(INSTOBJEXT); \
+ rm -f $(localedir)/$$lang/LC_MESSAGES/$(PACKAGE)$(INSTOBJEXT).m; \
+ rm -f $(gnulocaledir)/$$lang/LC_MESSAGES/$(PACKAGE)$(INSTOBJEXT); \
+ rm -f $(gnulocaledir)/$$lang/LC_MESSAGES/$(PACKAGE)$(INSTOBJEXT).m; \
+ done
+	rm -f $(gettextsrcdir)/Makefile.in.in
+
+check: all
+
+cat-id-tbl.o: ../intl/libgettext.h
+
+dvi info tags TAGS ID:
+
+mostlyclean:
+ rm -f core core.* *.pox $(PACKAGE).po *.old.po
+ rm -fr *.o
+
+clean: mostlyclean
+
+distclean: clean
+ rm -f Makefile Makefile.in POTFILES *.mo *.msg
+
+maintainer-clean: distclean
+ @echo "This command is intended for maintainers to use;"
+ @echo "it deletes files that may require special tools to rebuild."
+ rm -f $(GMOFILES)
+
+distdir = ../$(PACKAGE)-$(VERSION)/$(subdir)
+dist distdir: update-po $(DISTFILES)
+ dists="$(DISTFILES)"; \
+ for file in $$dists; do \
+ ln $(srcdir)/$$file $(distdir) 2> /dev/null \
+ || cp -p $(srcdir)/$$file $(distdir); \
+ done
+
+update-po: Makefile
+ $(MAKE) $(PACKAGE).pot
+ PATH=`pwd`/../src:$$PATH; \
+ cd $(srcdir); \
+ catalogs='$(CATALOGS)'; \
+ for cat in $$catalogs; do \
+ cat=`basename $$cat`; \
+ lang=`echo $$cat | sed 's/\$(CATOBJEXT)$$//'`; \
+ mv $$lang.po $$lang.old.po; \
+ echo "$$lang:"; \
+ if $(MSGMERGE) $$lang.old.po $(PACKAGE).pot -o $$lang.po; then \
+ rm -f $$lang.old.po; \
+ else \
+ echo "msgmerge for $$cat failed!"; \
+ rm -f $$lang.po; \
+ mv $$lang.old.po $$lang.po; \
+ fi; \
+ done
+
+POTFILES: POTFILES.in
+ ( if test 'x$(srcdir)' != 'x.'; then \
+ posrcprefix='$(top_srcdir)/'; \
+ else \
+ posrcprefix="../"; \
+ fi; \
+ rm -f $@-t $@ \
+ && (sed -e '/^#/d' -e '/^[ ]*$$/d' \
+ -e "s@.*@ $$posrcprefix& \\\\@" < $(srcdir)/$@.in \
+ | sed -e '$$s/\\$$//') > $@-t \
+ && chmod a-w $@-t \
+ && mv $@-t $@ )
+
+Makefile: Makefile.in.in ../config.status POTFILES
+ cd .. \
+ && CONFIG_FILES=$(subdir)/$@.in CONFIG_HEADERS= \
+ $(SHELL) ./config.status
+
+# Tell versions [3.59,3.63) of GNU make not to export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
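The `install-data-yes` and `uninstall` rules above both derive a language code from each catalog file name and map it to an `LC_MESSAGES` directory. A minimal sketch of that derivation, assuming `CATOBJEXT` is `.gmo` and using a hypothetical catalog `cs.gmo` under a hypothetical install prefix:

```shell
#!/bin/sh
# Mirror the catalog-name handling from the install-data-yes rule.
CATOBJEXT=".gmo"                      # assumption: GNU-format catalogs
localedir="/usr/local/share/locale"   # hypothetical install prefix
cat="po/cs.gmo"                       # hypothetical catalog from $(GMSGFMT)

cat=`basename $cat`                         # strip the directory -> cs.gmo
lang=`echo $cat | sed "s/$CATOBJEXT\$//"`   # strip the extension -> cs
dir="$localedir/$lang/LC_MESSAGES"          # destination directory
echo "installing $cat as $dir/wget.mo"
```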
--- /dev/null
+# List of files which contain translatable strings.
+# Copyright (C) 1998 Free Software Foundation, Inc.
+
+# Package source files
+src/cmpt.c
+src/connect.c
+src/fnmatch.c
+src/ftp-basic.c
+src/ftp-ls.c
+src/ftp-opie.c
+src/ftp.c
+src/getopt.c
+src/headers.c
+src/host.c
+src/html.c
+src/http.c
+src/init.c
+src/log.c
+src/main.c
+src/mswindows.c
+src/netrc.c
+src/rbuf.c
+src/recur.c
+src/retr.c
+src/url.c
+src/utils.c
--- /dev/null
+# Czech translations for GNU wget
+# Copyright (C) 1998 Free Software Foundation, Inc.
+# Jan Prikryl <prikryl@acm.org>, 1998
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: GNU wget 1.5.2-b1\n"
+"POT-Creation-Date: 1998-09-21 19:08+0200\n"
+"PO-Revision-Date: 1998-06-05 08:47\n"
+"Last-Translator: Jan Prikryl <prikryl@acm.org>\n"
+"Language-Team: Czech <cs@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=iso-8859-2\n"
+"Content-Transfer-Encoding: 8-bit\n"
+
+# , c-format
+#. Login to the server:
+#. First: Establish the control connection.
+#: src/ftp.c:147 src/http.c:346
+#, c-format
+msgid "Connecting to %s:%hu... "
+msgstr "Navazuji spojení s %s:%hu... "
+
+# , c-format
+#: src/ftp.c:169 src/ftp.c:411 src/http.c:363
+#, c-format
+msgid "Connection to %s:%hu refused.\n"
+msgstr "Spojení s %s:%hu odmítnuto.\n"
+
+#. Second: Login with proper USER/PASS sequence.
+#: src/ftp.c:190 src/http.c:374
+msgid "connected!\n"
+msgstr "spojeno!\n"
+
+# , c-format
+#: src/ftp.c:191
+#, c-format
+msgid "Logging in as %s ... "
+msgstr "Pøihla¹uji se jako %s ... "
+
+#: src/ftp.c:200 src/ftp.c:253 src/ftp.c:301 src/ftp.c:353 src/ftp.c:447
+#: src/ftp.c:520 src/ftp.c:568 src/ftp.c:616
+msgid "Error in server response, closing control connection.\n"
+msgstr "Server odpovìdìl chybnì, uzavírám øídicí spojení.\n"
+
+#: src/ftp.c:208
+msgid "Error in server greeting.\n"
+msgstr "Úvodní odpovìï serveru je chybná.\n"
+
+#: src/ftp.c:216 src/ftp.c:262 src/ftp.c:310 src/ftp.c:362 src/ftp.c:457
+#: src/ftp.c:530 src/ftp.c:578 src/ftp.c:626
+msgid "Write failed, closing control connection.\n"
+msgstr "Nemohu zapsat data, uzavírám øídicí spojení.\n"
+
+#: src/ftp.c:223
+msgid "The server refuses login.\n"
+msgstr "Server odmítá pøihlá¹ení.\n"
+
+#: src/ftp.c:230
+msgid "Login incorrect.\n"
+msgstr "Chyba pøi pøihlá¹ení.\n"
+
+#: src/ftp.c:237
+msgid "Logged in!\n"
+msgstr "Pøihlá¹en!\n"
+
+# , c-format
+#: src/ftp.c:270
+#, c-format
+msgid "Unknown type `%c', closing control connection.\n"
+msgstr "Neznámý typ `%c', uzavírám øídicí spojení.\n"
+
+#: src/ftp.c:283
+msgid "done. "
+msgstr "hotovo. "
+
+#: src/ftp.c:289
+msgid "==> CWD not needed.\n"
+msgstr "==> CWD není potøeba.\n"
+
+# , c-format
+#: src/ftp.c:317
+#, c-format
+msgid ""
+"No such directory `%s'.\n"
+"\n"
+msgstr ""
+"Adresáø `%s' neexistuje.\n"
+"\n"
+
+#: src/ftp.c:331 src/ftp.c:599 src/ftp.c:647 src/url.c:1431
+msgid "done.\n"
+msgstr "hotovo.\n"
+
+#. do not CWD
+#: src/ftp.c:335
+msgid "==> CWD not required.\n"
+msgstr "==> CWD není potøeba.\n"
+
+#: src/ftp.c:369
+msgid "Cannot initiate PASV transfer.\n"
+msgstr "Nemohu inicializovat pøenos pøíkazem PASV.\n"
+
+#: src/ftp.c:373
+msgid "Cannot parse PASV response.\n"
+msgstr "Odpovìï na PASV není pochopitelná.\n"
+
+# , c-format
+#: src/ftp.c:387
+#, c-format
+msgid "Will try connecting to %s:%hu.\n"
+msgstr "Pokusím se spojit s %s:%hu.\n"
+
+#: src/ftp.c:432 src/ftp.c:504 src/ftp.c:548
+msgid "done. "
+msgstr "hotovo. "
+
+# , c-format
+#: src/ftp.c:474
+#, c-format
+msgid "Bind error (%s).\n"
+msgstr "Chyba pøi operaci \"bind\" (%s).\n"
+
+#: src/ftp.c:490
+msgid "Invalid PORT.\n"
+msgstr "Neplatný PORT.\n"
+
+#: src/ftp.c:537
+msgid ""
+"\n"
+"REST failed, starting from scratch.\n"
+msgstr ""
+"\n"
+"Pøíkaz REST selhal, pøená¹ím soubor od zaèátku.\n"
+
+# , c-format
+#: src/ftp.c:586
+#, c-format
+msgid ""
+"No such file `%s'.\n"
+"\n"
+msgstr ""
+"Soubor `%s' neexistuje.\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:634
+#, c-format
+msgid ""
+"No such file or directory `%s'.\n"
+"\n"
+msgstr ""
+"Soubor èi adresáø `%s' neexistuje.\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:692 src/ftp.c:699
+#, c-format
+msgid "Length: %s"
+msgstr "Délka: %s"
+
+# , c-format
+#: src/ftp.c:694 src/ftp.c:701
+#, c-format
+msgid " [%s to go]"
+msgstr " [%s zbývá]"
+
+#: src/ftp.c:703
+msgid " (unauthoritative)\n"
+msgstr " (není smìrodatné)\n"
+
+# , c-format
+#: src/ftp.c:721
+#, c-format
+msgid "%s: %s, closing control connection.\n"
+msgstr "%s: %s, uzavírám øídicí spojení.\n"
+
+# , c-format
+#: src/ftp.c:729
+#, c-format
+msgid "%s (%s) - Data connection: %s; "
+msgstr "%s (%s) - Datové spojení: %s; "
+
+#: src/ftp.c:746
+msgid "Control connection closed.\n"
+msgstr "Øídicí spojení uzavøeno.\n"
+
+#: src/ftp.c:764
+msgid "Data transfer aborted.\n"
+msgstr "Pøenos dat byl pøedèasnì ukonèen.\n"
+
+# , c-format
+#: src/ftp.c:830
+#, c-format
+msgid "File `%s' already there, not retrieving.\n"
+msgstr "Soubor `%s' je ji¾ zde, nebudu jej pøená¹et.\n"
+
+# , c-format
+#: src/ftp.c:896 src/http.c:922
+#, c-format
+msgid "(try:%2d)"
+msgstr "(pokus:%2d)"
+
+# , c-format
+#: src/ftp.c:955 src/http.c:1116
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' ulo¾en [%ld]\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:1001
+#, c-format
+msgid "Using `%s' as listing tmp file.\n"
+msgstr "Seznam souborù bude doèasnì ulo¾en v `%s'.\n"
+
+# , c-format
+#: src/ftp.c:1013
+#, c-format
+msgid "Removed `%s'.\n"
+msgstr "Vymazal jsem `%s'.\n"
+
+# , c-format
+#: src/ftp.c:1049
+#, c-format
+msgid "Recursion depth %d exceeded max. depth %d.\n"
+msgstr "Hloubka rekurze %d pøekroèila maximální povolenou hloubku %d.\n"
+
+# , c-format
+#: src/ftp.c:1096 src/http.c:1054
+#, c-format
+msgid ""
+"Local file `%s' is more recent, not retrieving.\n"
+"\n"
+msgstr ""
+"Soubor `%s' nebudu pøená¹et, proto¾e lokální verze je novìj¹í.\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:1102 src/http.c:1060
+#, c-format
+msgid "The sizes do not match (local %ld), retrieving.\n"
+msgstr "Velikosti se neshodují (lokálnì %ld), pøená¹ím.\n"
+
+#: src/ftp.c:1119
+msgid "Invalid name of the symlink, skipping.\n"
+msgstr "Neplatný název symbolického odkazu, pøeskakuji.\n"
+
+# , c-format
+#: src/ftp.c:1136
+#, c-format
+msgid ""
+"Already have correct symlink %s -> %s\n"
+"\n"
+msgstr ""
+"Korektní symbolický odkaz %s -> %s ji¾ existuje.\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:1144
+#, c-format
+msgid "Creating symlink %s -> %s\n"
+msgstr "Vytváøím symbolický odkaz %s -> %s\n"
+
+# , c-format
+#: src/ftp.c:1155
+#, c-format
+msgid "Symlinks not supported, skipping symlink `%s'.\n"
+msgstr ""
+"Pøeskakuji symbolický odkaz `%s', proto¾e tento systém symbolické odkazy\n"
+"nepodporuje.\n"
+
+# , c-format
+#: src/ftp.c:1167
+#, c-format
+msgid "Skipping directory `%s'.\n"
+msgstr "Pøeskakuji adresáø `%s'.\n"
+
+# , c-format
+#: src/ftp.c:1176
+#, c-format
+msgid "%s: unknown/unsupported file type.\n"
+msgstr "%s: neznámý/nepodporovaný typ souboru.\n"
+
+# , c-format
+#: src/ftp.c:1193
+#, c-format
+msgid "%s: corrupt time-stamp.\n"
+msgstr "%s: èasové razítko souboru je poru¹ené.\n"
+
+# , c-format
+#: src/ftp.c:1213
+#, c-format
+msgid "Will not retrieve dirs since depth is %d (max %d).\n"
+msgstr ""
+"Podadresáøe nebudu pøená¹et, proto¾e jsme ji¾ v hloubce %d (maximum je %d).\n"
+
+# , c-format
+#: src/ftp.c:1252
+#, c-format
+msgid "Not descending to `%s' as it is excluded/not-included.\n"
+msgstr ""
+"Nesestupuji do adresáøe `%s', proto¾e tento adresáø se má vynechat èi\n"
+"nebyl zadán k procházení.\n"
+
+# , c-format
+#: src/ftp.c:1297
+#, c-format
+msgid "Rejecting `%s'.\n"
+msgstr "Odmítám `%s'.\n"
+
+# , c-format
+#. No luck.
+#. #### This message SUCKS. We should see what was the
+#. reason that nothing was retrieved.
+#: src/ftp.c:1344
+#, c-format
+msgid "No matches on pattern `%s'.\n"
+msgstr "Vzorku `%s' nic neodpovídá.\n"
+
+# , c-format
+#: src/ftp.c:1404
+#, c-format
+msgid "Wrote HTML-ized index to `%s' [%ld].\n"
+msgstr "Výpis adresáøe v HTML formátu byl zapsán do `%s' [%ld].\n"
+
+# , c-format
+#: src/ftp.c:1409
+#, c-format
+msgid "Wrote HTML-ized index to `%s'.\n"
+msgstr "Výpis adresáøe v HTML formátu byl zapsán do `%s'.\n"
+
+# , c-format
+#: src/getopt.c:454
+#, c-format
+msgid "%s: option `%s' is ambiguous\n"
+msgstr "%s: pøepínaè `%s' není jednoznaèný\n"
+
+# , c-format
+#: src/getopt.c:478
+#, c-format
+msgid "%s: option `--%s' doesn't allow an argument\n"
+msgstr "%s: pøepínaè `--%s' nemá argument\n"
+
+# , c-format
+#: src/getopt.c:483
+#, c-format
+msgid "%s: option `%c%s' doesn't allow an argument\n"
+msgstr "%s: pøepínaè `%c%s' nemá argument\n"
+
+# , c-format
+#: src/getopt.c:498
+#, c-format
+msgid "%s: option `%s' requires an argument\n"
+msgstr "%s: pøepínaè `%s' vy¾aduje argument\n"
+
+# , c-format
+#. --option
+#: src/getopt.c:528
+#, c-format
+msgid "%s: unrecognized option `--%s'\n"
+msgstr "%s: neznámý pøepínaè `--%s'\n"
+
+# , c-format
+#. +option or -option
+#: src/getopt.c:532
+#, c-format
+msgid "%s: unrecognized option `%c%s'\n"
+msgstr "%s: neznámý pøepínaè `%c%s'\n"
+
+# , c-format
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:563
+#, c-format
+msgid "%s: illegal option -- %c\n"
+msgstr "%s: nepøípustný pøepínaè -- %c\n"
+
+# , c-format
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:602
+#, c-format
+msgid "%s: option requires an argument -- %c\n"
+msgstr "%s: pøepínaè vy¾aduje argument -- %c\n"
+
+#: src/host.c:432
+#, c-format
+msgid "%s: Cannot determine user-id.\n"
+msgstr "%s: Nemohu identifikovat u¾ivatele.\n"
+
+# , c-format
+#: src/host.c:444
+#, c-format
+msgid "%s: Warning: uname failed: %s\n"
+msgstr "%s: Varování: volání \"uname\" skonèilo chybou %s\n"
+
+#: src/host.c:456
+#, c-format
+msgid "%s: Warning: gethostname failed\n"
+msgstr "%s: Varování: volání \"gethostname\" skonèilo chybou\n"
+
+#: src/host.c:484
+#, c-format
+msgid "%s: Warning: cannot determine local IP address.\n"
+msgstr "%s: Varování: nemohu urèit lokální IP adresu.\n"
+
+#: src/host.c:498
+#, c-format
+msgid "%s: Warning: cannot reverse-lookup local IP address.\n"
+msgstr "%s: Varování: lokální IP adresa nemá reverzní DNS záznam.\n"
+
+#. This gets ticked pretty often. Karl Berry reports
+#. that there can be valid reasons for the local host
+#. name not to be an FQDN, so I've decided to remove the
+#. annoying warning.
+#: src/host.c:511
+#, c-format
+msgid "%s: Warning: reverse-lookup of local address did not yield FQDN!\n"
+msgstr ""
+"%s: Varování: reverzní vyhledání lokální adresy nenavrátilo plnì\n"
+" kvalifikované jméno!\n"
+
+#: src/host.c:539
+msgid "Host not found"
+msgstr "Poèítaè nebyl nalezen"
+
+#: src/host.c:541
+msgid "Unknown error"
+msgstr "Neznámá chyba"
+
+# , c-format
+#: src/html.c:439 src/html.c:441
+#, c-format
+msgid "Index of /%s on %s:%d"
+msgstr "Obsah /%s na %s:%d"
+
+#: src/html.c:463
+msgid "time unknown "
+msgstr "èas neznámý "
+
+#: src/html.c:467
+msgid "File "
+msgstr "Soubor "
+
+#: src/html.c:470
+msgid "Directory "
+msgstr "Adresáø "
+
+#: src/html.c:473
+msgid "Link "
+msgstr "Sym. odkaz "
+
+#: src/html.c:476
+msgid "Not sure "
+msgstr "Neznámý typ "
+
+# , c-format
+#: src/html.c:494
+#, c-format
+msgid " (%s bytes)"
+msgstr " (%s bajtù)"
+
+#: src/http.c:492
+msgid "Failed writing HTTP request.\n"
+msgstr "HTTP po¾adavek nebylo mo¾né odeslat.\n"
+
+# , c-format
+#: src/http.c:497
+#, c-format
+msgid "%s request sent, awaiting response... "
+msgstr "%s po¾adavek odeslán, èekám na odpovìï ... "
+
+#: src/http.c:536
+msgid "End of file while parsing headers.\n"
+msgstr "Hlavièka není úplná.\n"
+
+# , c-format
+#: src/http.c:547
+#, c-format
+msgid "Read error (%s) in headers.\n"
+msgstr "Chyba (%s) pøi ètení hlavièek.\n"
+
+#: src/http.c:587
+msgid "No data received"
+msgstr "Nepøi¹la ¾ádná data"
+
+#: src/http.c:589
+msgid "Malformed status line"
+msgstr "Odpovìï serveru má zkomolený stavový øádek"
+
+#: src/http.c:594
+msgid "(no description)"
+msgstr "(¾ádný popis)"
+
+#. If we have tried it already, then there is no point in
+#. retrying it.
+#: src/http.c:678
+msgid "Authorization failed.\n"
+msgstr "Autorizace selhala.\n"
+
+#: src/http.c:685
+msgid "Unknown authentication scheme.\n"
+msgstr "Server po¾aduje neznámý zpùsob autentifikace.\n"
+
+# , c-format
+#: src/http.c:748
+#, c-format
+msgid "Location: %s%s\n"
+msgstr "Pøesmìrováno na: %s%s\n"
+
+#: src/http.c:749 src/http.c:774
+msgid "unspecified"
+msgstr "neudáno"
+
+#: src/http.c:750
+msgid " [following]"
+msgstr " [následuji]"
+
+#. No need to print this output if the body won't be
+#. downloaded at all, or if the original server response is
+#. printed.
+#: src/http.c:764
+msgid "Length: "
+msgstr "Délka: "
+
+# , c-format
+#: src/http.c:769
+#, c-format
+msgid " (%s to go)"
+msgstr " (%s zbývá)"
+
+#: src/http.c:774
+msgid "ignored"
+msgstr "je ignorována"
+
+#: src/http.c:857
+msgid "Warning: wildcards not supported in HTTP.\n"
+msgstr "Varování: HTTP nepodporuje ¾olíkové znaky.\n"
+
+# , c-format
+#. If opt.noclobber is turned on and file already exists, do not
+#. retrieve the file
+#: src/http.c:872
+#, c-format
+msgid "File `%s' already there, will not retrieve.\n"
+msgstr "Soubor `%s' nebudu pøená¹et, je ji¾ zde.\n"
+
+# , c-format
+#: src/http.c:978
+#, c-format
+msgid "Cannot write to `%s' (%s).\n"
+msgstr "Nemohu zapsat do `%s' (%s).\n"
+
+# , c-format
+#: src/http.c:988
+#, c-format
+msgid "ERROR: Redirection (%d) without location.\n"
+msgstr "CHYBA: Pøesmìrování (%d) bez udané nové adresy.\n"
+
+# , c-format
+#: src/http.c:1011
+#, c-format
+msgid "%s ERROR %d: %s.\n"
+msgstr "%s CHYBA %d: %s.\n"
+
+#: src/http.c:1023
+msgid "Last-modified header missing -- time-stamps turned off.\n"
+msgstr ""
+"Nebudu pou¾ívat èasová razítka (`time-stamps'), proto¾e hlavièka\n"
+"\"Last-modified\" v odpovìdi serveru schází.\n"
+
+#: src/http.c:1031
+msgid "Last-modified header invalid -- time-stamp ignored.\n"
+msgstr ""
+"Ignoruji èasové razítko souboru (`time-stamp'), proto¾e hlavièka \n"
+"\"Last-modified\" obsahuje neplatné údaje.\n"
+
+#: src/http.c:1064
+msgid "Remote file is newer, retrieving.\n"
+msgstr "Vzdálený soubor je novìj¹ího data, pøená¹ím.\n"
+
+# , c-format
+#: src/http.c:1098
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' ulo¾en [%ld/%ld]\n"
+"\n"
+
+# , c-format
+#: src/http.c:1130
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld. "
+msgstr "%s (%s) - Spojení uzavøeno na bajtu %ld. "
+
+# , c-format
+#: src/http.c:1138
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld])\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' ulo¾eno [%ld/%ld])\n"
+"\n"
+
+# , c-format
+#: src/http.c:1150
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld/%ld. "
+msgstr "%s (%s) - Spojení uzavøeno na bajtu %ld/%ld. "
+
+# , c-format
+#: src/http.c:1161
+#, c-format
+msgid "%s (%s) - Read error at byte %ld (%s)."
+msgstr "%s (%s) - Chyba pøi ètení dat na bajtu %ld (%s)."
+
+# , c-format
+#: src/http.c:1169
+#, c-format
+msgid "%s (%s) - Read error at byte %ld/%ld (%s). "
+msgstr "%s (%s) - Chyba pøi ètení dat na bajtu %ld/%ld (%s). "
+
+# , c-format
+#: src/init.c:312 src/netrc.c:250
+#, c-format
+msgid "%s: Cannot read %s (%s).\n"
+msgstr "%s: Nemohu pøeèíst %s (%s).\n"
+
+# , c-format
+#: src/init.c:333 src/init.c:339
+#, c-format
+msgid "%s: Error in %s at line %d.\n"
+msgstr "%s: Chyba v %s na øádku %d.\n"
+
+# , c-format
+#: src/init.c:370
+#, c-format
+msgid "%s: Warning: Both system and user wgetrc point to `%s'.\n"
+msgstr ""
+"%s: Varování: Globální i u¾ivatelské wgetrc jsou shodnì ulo¾eny v `%s'.\n"
+
+# , c-format
+#: src/init.c:458
+#, c-format
+msgid "%s: BUG: unknown command `%s', value `%s'.\n"
+msgstr "%s: Chyba: Neznámý pøíkaz `%s', hodnota `%s'.\n"
+
+# , c-format
+#: src/init.c:485
+#, c-format
+msgid "%s: %s: Please specify on or off.\n"
+msgstr "%s: %s: Zadejte prosím `on' nebo `off'.\n"
+
+# , c-format
+#: src/init.c:503 src/init.c:760 src/init.c:782 src/init.c:855
+#, c-format
+msgid "%s: %s: Invalid specification `%s'.\n"
+msgstr "%s: %s: Neplatná specifikace `%s'\n"
+
+# , c-format
+#: src/init.c:616 src/init.c:638 src/init.c:660 src/init.c:686
+#, c-format
+msgid "%s: Invalid specification `%s'\n"
+msgstr "%s: Neplatná specifikace `%s'\n"
+
+# , c-format
+#: src/main.c:101
+#, c-format
+msgid "Usage: %s [OPTION]... [URL]...\n"
+msgstr "Pou¾ití: %s [PØEPÍNAÈ]... [URL]...\n"
+
+# , c-format
+#: src/main.c:109
+#, c-format
+msgid "GNU Wget %s, a non-interactive network retriever.\n"
+msgstr "GNU Wget %s, program pro neinteraktivní stahování souborù.\n"
+
+#. Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+#. don't bitch. Also, it makes translation much easier.
+#: src/main.c:114
+msgid ""
+"\n"
+"Mandatory arguments to long options are mandatory for short options too.\n"
+"\n"
+msgstr ""
+"\n"
+"Argumenty, povinné u dlouhých pøepínaèù, jsou povinné i pro krátké verze\n"
+"pøepínaèù.\n"
+"\n"
+
+#: src/main.c:117
+msgid ""
+"Startup:\n"
+" -V, --version display the version of Wget and exit.\n"
+" -h, --help print this help.\n"
+" -b, --background go to background after startup.\n"
+" -e, --execute=COMMAND execute a `.wgetrc' command.\n"
+"\n"
+msgstr ""
+"Zaèátek:\n"
+" -V, --version vypi¹ informaci o verzi programu Wget a skonèi\n"
+" -h, --help vypi¹ tuto nápovìdu\n"
+" -b, --background po spu¹tìní pokraèuj v bìhu na pozadí\n"
+" -e, --execute=PØÍKAZ proveï `.wgetrc' pøíkaz\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:123
+msgid ""
+"Logging and input file:\n"
+" -o, --output-file=FILE log messages to FILE.\n"
+" -a, --append-output=FILE append messages to FILE.\n"
+" -d, --debug print debug output.\n"
+" -q, --quiet quiet (no output).\n"
+" -v, --verbose be verbose (this is the default).\n"
+" -nv, --non-verbose turn off verboseness, without being quiet.\n"
+" -i, --input-file=FILE read URL-s from file.\n"
+" -F, --force-html treat input file as HTML.\n"
+"\n"
+msgstr ""
+"Protokolování a vstupní soubor:\n"
+" -o, --output-file=SOUBOR do tohoto souboru ukládej protokol\n"
+" -a, --append-output=SOUBOR protokol pøipoj na konec tohoto souboru\n"
+" -d, --debug vypisuj ladicí informace\n"
+" -q, --quiet nevypisuj vùbec nic\n"
+" -v, --verbose buï upovídaný (implicitnì zapnuto)\n"
+" -nv, --non-verbose vypisuj pouze nejdùle¾itìj¹í informace\n"
+" -i, --input-file=SOUBOR poèáteèní URL odkazy naèti z tohoto souboru\n"
+" -F, --force-html soubor s URL je v HTML formátu\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:133
+msgid ""
+"Download:\n"
+" -t, --tries=NUMBER set number of retries to NUMBER (0 "
+"unlimits).\n"
+" -O --output-document=FILE write documents to FILE.\n"
+" -nc, --no-clobber don't clobber existing files.\n"
+" -c, --continue restart getting an existing file.\n"
+" --dot-style=STYLE set retrieval display style.\n"
+" -N, --timestamping don't retrieve files if older than local.\n"
+" -S, --server-response print server response.\n"
+" --spider don't download anything.\n"
+" -T, --timeout=SECONDS set the read timeout to SECONDS.\n"
+" -w, --wait=SECONDS wait SECONDS between retrievals.\n"
+" -Y, --proxy=on/off turn proxy on or off.\n"
+" -Q, --quota=NUMBER set retrieval quota to NUMBER.\n"
+"\n"
+msgstr ""
+"Stahování:\n"
+" -t, --tries=ÈÍSLO poèet pokusù stáhnout URL (0 donekoneèna)\n"
+" -O --output-document=SOUBOR sta¾ené dokumenty ukládej do tohoto souboru\n"
+" -nc, --no-clobber nepøepisuj existující soubory\n"
+" -c, --continue zaèni stahovat ji¾ èásteènì pøenesená data\n"
+" --dot-style=STYL nastav zpùsob zobrazení pøi stahování dat\n"
+" -N, --timestamping nestahuj star¹í soubory (zapni èasová "
+"razítka)\n"
+" -S, --server-response vypisuj odpovìdi serveru\n"
+" --spider nic nestahuj\n"
+" -T, --timeout=SEKUNDY nastav timeout pøi ètení na tuto hodnotu\n"
+" -w, --wait=SEKUND pøed ka¾dým stahováním poèkej SEKUND sekund\n"
+" -Y, --proxy=on/off zapni pøenos pøes proxy (standardnì `off')\n"
+" -Q, --quota=ÈÍSLO nastav limit objemu ulo¾ených dat\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:147
+msgid ""
+"Directories:\n"
+" -nd --no-directories don't create directories.\n"
+" -x, --force-directories force creation of directories.\n"
+" -nH, --no-host-directories don't create host directories.\n"
+" -P, --directory-prefix=PREFIX save files to PREFIX/...\n"
+" --cut-dirs=NUMBER ignore NUMBER remote directory "
+"components.\n"
+"\n"
+msgstr ""
+"Adresáøe:\n"
+" -nd --no-directories nevytváøej adresáøe\n"
+" -x, --force-directories v¾dy vytváøej adresáøe\n"
+" -nH, --no-host-directories nevytváøej adresáøe s adresou serveru\n"
+" -P, --directory-prefix=PREFIX ukládej data do PREFIX/...\n"
+" --cut-dirs=POÈET nevytváøej prvních POÈET podadresáøù\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:154
+msgid ""
+"HTTP options:\n"
+" --http-user=USER set http user to USER.\n"
+" --http-passwd=PASS set http password to PASS.\n"
+" -C, --cache=on/off (dis)allow server-cached data (normally "
+"allowed).\n"
+" --ignore-length ignore `Content-Length' header field.\n"
+" --header=STRING insert STRING among the headers.\n"
+" --proxy-user=USER set USER as proxy username.\n"
+" --proxy-passwd=PASS set PASS as proxy password.\n"
+" -s, --save-headers save the HTTP headers to file.\n"
+" -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n"
+"\n"
+msgstr ""
+"Pøepínaèe pro HTTP:\n"
+" --http-user=U®IVATEL u¾ivatelské jméno pro autorizovaný http pøenos\n"
+" --http-passwd=HESLO heslo pro autorizovaný http pøenos \n"
+" -C, --cache=on/off povol èi zaka¾ pou¾ití vyrovnávací pamìti na\n"
+" stranì serveru (implicitnì `on')\n"
+" --ignore-length ignoruj pole `Content-Length' v hlavièce\n"
+" --header=ØETÌZEC po¹li ØETÌZEC serveru jako souèást hlavièek\n"
+" --proxy-user=U®IVATEL jméno u¾ivatele vy¾adované pro proxy pøenos\n"
+" --proxy-passwd=HESLO heslo pro proxy pøenos\n"
+" -s, --save-headers do stahovaného souboru ulo¾ i HTTP hlavièky\n"
+" -U, --user-agent=AGENT místo identifikace `Wget/VERZE' posílej\n"
+" v hlavièce identifikaèní øetìzec AGENT\n"
+
+# , fuzzy
+#: src/main.c:165
+msgid ""
+"FTP options:\n"
+" --retr-symlinks retrieve FTP symbolic links.\n"
+" -g, --glob=on/off turn file name globbing on or off.\n"
+" --passive-ftp use the \"passive\" transfer mode.\n"
+"\n"
+msgstr ""
+"Pøepínaèe pro FTP protokol:\n"
+" --retr-symlinks stahuj i symbolické odkazy\n"
+" -g, --glob=on/off zapni èi vypni expanzi ¾olíkù ve jménech souborù\n"
+" (implicitnì `on')\n"
+" --passive-ftp pou¾ij pasivní mód pøenosu dat\n"
+"\n"
+
+#: src/main.c:170
+msgid ""
+"Recursive retrieval:\n"
+" -r, --recursive recursive web-suck -- use with care!.\n"
+" -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n"
+" --delete-after delete downloaded files.\n"
+" -k, --convert-links convert non-relative links to relative.\n"
+" -m, --mirror turn on options suitable for mirroring.\n"
+" -nr, --dont-remove-listing don't remove `.listing' files.\n"
+"\n"
+msgstr ""
+"Rekurzivní stahování:\n"
+" -r, --recursive rekurzivní stahování -- buïte opatrní!\n"
+" -l, --level=ÈÍSLO maximální hloubka rekurze (0 bez limitu)\n"
+" --delete-after po pøenosu sma¾ sta¾ené soubory\n"
+" -k, --convert-links absolutní URL pøeveï na relativní\n"
+" -m, --mirror zapni pøepínaèe vhodné pro zrcadlení dat\n"
+" -nr, --dont-remove-listing nema¾ soubory `.listing' s obsahy adresáøù\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:178
+msgid ""
+"Recursive accept/reject:\n"
+" -A, --accept=LIST list of accepted extensions.\n"
+" -R, --reject=LIST list of rejected extensions.\n"
+" -D, --domains=LIST list of accepted domains.\n"
+" --exclude-domains=LIST comma-separated list of rejected "
+"domains.\n"
+" -L, --relative follow relative links only.\n"
+" --follow-ftp follow FTP links from HTML documents.\n"
+" -H, --span-hosts go to foreign hosts when recursive.\n"
+" -I, --include-directories=LIST list of allowed directories.\n"
+" -X, --exclude-directories=LIST list of excluded directories.\n"
+" -nh, --no-host-lookup don't DNS-lookup hosts.\n"
+" -np, --no-parent don't ascend to the parent directory.\n"
+"\n"
+msgstr ""
+"Omezení pøi rekurzi:\n"
+" -A, --accept=SEZNAM seznam povolených extenzí souborù\n"
+" -R, --reject=SEZNAM seznam nepovolených extenzí souborù\n"
+" -D, --domains=SEZNAM seznam povolených domén\n"
+" --exclude-domains=SEZNAM seznam nepovolených domén\n"
+" -L, --relative následuj pouze relativní odkazy\n"
+" --follow-ftp následuj FTP odkazy v HTML dokumentech\n"
+" -H, --span-hosts naèítej dokumenty i z ostatních serverù\n"
+" -I, --include-directories=SEZNAM seznam povolených adresáøù\n"
+" -X, --exclude-directories=SEZNAM seznam vylouèených adresáøù\n"
+" -nh, --no-host-lookup nevyhledávej adresy v DNS\n"
+" -np, --no-parent nesestupuj do rodièovského adresáøe\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:191
+msgid "Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"
+msgstr ""
+"Zprávy o chybách a návrhy na vylep¹ení programu zasílejte na adresu\n"
+"<bug-wget@gnu.org> (pouze anglicky).\n"
+"Komentáøe k èeskému pøekladu zasílejte na adresu <cs@li.org>. \n"
+
+# , fuzzy
+#: src/main.c:347
+#, c-format
+msgid "%s: debug support not compiled in.\n"
+msgstr "%s: program nebyl zkompilován s podporou pro ladìní.\n"
+
+#: src/main.c:395
+msgid ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"This program is distributed in the hope that it will be useful,\n"
+"but WITHOUT ANY WARRANTY; without even the implied warranty of\n"
+"MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n"
+"GNU General Public License for more details.\n"
+msgstr ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"Tento program je ¹íøen v nadìji, ¾e bude u¾iteèný, av¹ak\n"
+"BEZ JAKÉKOLI ZÁRUKY; neposkytují se ani odvozené záruky PRODEJNOSTI \n"
+"anebo VHODNOSTI PRO URÈITÝ ÚÈEL. Dal¹í podrobnosti hledejte \n"
+"v Obecné veøejné licenci GNU.\n"
+
+#: src/main.c:401
+msgid ""
+"\n"
+"Written by Hrvoje Niksic <hniksic@srce.hr>.\n"
+msgstr ""
+"\n"
+"Autorem tohoto programu je Hrvoje Nik¹iæ <hniksic@srce.hr>\n"
+
+# , c-format
+#: src/main.c:465
+#, c-format
+msgid "%s: %s: invalid command\n"
+msgstr "%s: %s: neplatný pøíkaz\n"
+
+# , c-format
+#: src/main.c:515
+#, c-format
+msgid "%s: illegal option -- `-n%c'\n"
+msgstr "%s: nepøípustný pøepínaè -- `-n%c'\n"
+
+# , c-format
+#. #### Something nicer should be printed here -- similar to the
+#. pre-1.5 `--help' page.
+#: src/main.c:518 src/main.c:560 src/main.c:591
+#, c-format
+msgid "Try `%s --help' for more options.\n"
+msgstr "Pøíkaz `%s --help' vypí¹e význam platných pøepínaèù.\n"
+
+#: src/main.c:571
+msgid "Can't be verbose and quiet at the same time.\n"
+msgstr "Nedoká¾u být upovídaný a zticha najednou.\n"
+
+#: src/main.c:577
+msgid "Can't timestamp and not clobber old files at the same time.\n"
+msgstr "Nedoká¾u pou¾ívat èasová razítka a nemazat pøitom staré soubory.\n"
+
+#. No URL specified.
+#: src/main.c:586
+#, c-format
+msgid "%s: missing URL\n"
+msgstr "%s: postrádám URL\n"
+
+# , c-format
+#: src/main.c:674
+#, c-format
+msgid "No URLs found in %s.\n"
+msgstr "V souboru `%s' nebyla nalezena ¾ádná URL.\n"
+
+# , c-format
+#: src/main.c:683
+#, c-format
+msgid ""
+"\n"
+"FINISHED --%s--\n"
+"Downloaded: %s bytes in %d files\n"
+msgstr ""
+"\n"
+"KONEC --%s--\n"
+"Celkem naèteno %s bajtù v %d souborech\n"
+
+# , c-format
+#: src/main.c:688
+#, c-format
+msgid "Download quota (%s bytes) EXCEEDED!\n"
+msgstr "Pøekroèen limit objemu ulo¾ených dat (%s bajtù)!\n"
+
+#. Please note that the double `%' in `%%s' is intentional, because
+#. redirect_output passes tmp through printf.
+#: src/main.c:715
+msgid "%s received, redirecting output to `%%s'.\n"
+msgstr "Zachycen signál %s , výstup pøesmìrován do `%%s'.\n"
+
+# , c-format
+#: src/mswindows.c:118
+#, c-format
+msgid ""
+"\n"
+"CTRL+Break received, redirecting output to `%s'.\n"
+"Execution continued in background.\n"
+"You may stop Wget by pressing CTRL+ALT+DELETE.\n"
+msgstr ""
+"\n"
+"Stisknut CTRL+Break, pøesmìrovávám výstup do `%s'\n"
+"Program pokraèuje v bìhu na pozadí.\n"
+"Wget lze zastavit stiskem CTRL+ALT+DELETE.\n"
+
+#. parent, no error
+#: src/mswindows.c:135 src/utils.c:268
+msgid "Continuing in background.\n"
+msgstr "Pokraèuji v bìhu na pozadí.\n"
+
+# , c-format
+#: src/mswindows.c:137 src/utils.c:270
+#, c-format
+msgid "Output will be written to `%s'.\n"
+msgstr "Výstup bude zapsán do `%s'.\n"
+
+# , c-format
+#: src/mswindows.c:227
+#, c-format
+msgid "Starting WinHelp %s\n"
+msgstr "Spou¹tím WinHelp %s\n"
+
+#: src/mswindows.c:254 src/mswindows.c:262
+#, c-format
+msgid "%s: Couldn't find usable socket driver.\n"
+msgstr "%s: Nemohu najít pou¾itelný ovladaè socketù.\n"
+
+# , c-format
+#: src/netrc.c:334
+#, c-format
+msgid "%s: %s:%d: warning: \"%s\" token appears before any machine name\n"
+msgstr ""
+"%s: %s:%d varování: token \"%s\" je uveden je¹tì pøed jakýmkoliv\n"
+" názvem poèítaèe\n"
+
+# , c-format
+#: src/netrc.c:365
+#, c-format
+msgid "%s: %s:%d: unknown token \"%s\"\n"
+msgstr "%s: %s:%d: neznámý token \"%s\"\n"
+
+# , c-format
+#: src/netrc.c:429
+#, c-format
+msgid "Usage: %s NETRC [HOSTNAME]\n"
+msgstr "Pou¾ití: %s NETRC [NÁZEV POÈÍTAÈE]\n"
+
+# , c-format
+#: src/netrc.c:439
+#, c-format
+msgid "%s: cannot stat %s: %s\n"
+msgstr "%s: volání `stat %s' skonèilo chybou: %s\n"
+
+# , c-format
+#: src/recur.c:449 src/retr.c:462
+#, c-format
+msgid "Removing %s.\n"
+msgstr "Ma¾u %s.\n"
+
+# , c-format
+#: src/recur.c:450
+#, c-format
+msgid "Removing %s since it should be rejected.\n"
+msgstr "Ma¾u %s, proto¾e tento soubor není po¾adován.\n"
+
+#: src/recur.c:609
+msgid "Loading robots.txt; please ignore errors.\n"
+msgstr "Naèítám `robots.txt'. Chybová hlá¹ení ignorujte, prosím.\n"
+
+# , c-format
+#: src/retr.c:193
+#, c-format
+msgid ""
+"\n"
+" [ skipping %dK ]"
+msgstr ""
+"\n"
+" [ pøeskakuji %dK ]"
+
+#: src/retr.c:344
+msgid "Could not find proxy host.\n"
+msgstr "Nemohu najít proxy server.\n"
+
+# , c-format
+#: src/retr.c:355
+#, c-format
+msgid "Proxy %s: Must be HTTP.\n"
+msgstr "Proxy %s: Musí být HTTP.\n"
+
+# , c-format
+#: src/retr.c:398
+#, c-format
+msgid "%s: Redirection to itself.\n"
+msgstr "%s: Pøesmìrování na sebe sama.\n"
+
+#: src/retr.c:483
+msgid ""
+"Giving up.\n"
+"\n"
+msgstr ""
+"Vzdávám to.\n"
+"\n"
+
+#: src/retr.c:483
+msgid ""
+"Retrying.\n"
+"\n"
+msgstr ""
+"Zkou¹ím to znovu.\n"
+"\n"
+
+# , c-format
+#: src/url.c:940
+#, c-format
+msgid "Error (%s): Link %s without a base provided.\n"
+msgstr "Chyba (%s): K relativnímu odkazu %s nelze najít bázový odkaz.\n"
+
+# , c-format
+#: src/url.c:955
+#, c-format
+msgid "Error (%s): Base %s relative, without referer URL.\n"
+msgstr "Chyba (%s): Bázový odkaz %s nesmí být relativní.\n"
+
+# , c-format
+#: src/url.c:1373
+#, c-format
+msgid "Converting %s... "
+msgstr "Konvertuji %s... "
+
+# , c-format
+#: src/url.c:1378 src/url.c:1389
+#, c-format
+msgid "Cannot convert links in %s: %s\n"
+msgstr "Nedoká¾u pøevést odkazy v %s: %s\n"
+
+# , c-format
+#: src/utils.c:71
+#, c-format
+msgid "%s: %s: Not enough memory.\n"
+msgstr "%s: %s: Není dost pamìti.\n"
+
+#: src/utils.c:203
+msgid "Unknown/unsupported protocol"
+msgstr "Neznámý/nepodporovaný protokol"
+
+#: src/utils.c:206
+msgid "Invalid port specification"
+msgstr "Neplatná specifikace portu"
+
+#: src/utils.c:209
+msgid "Invalid host name"
+msgstr "Neplatné jméno stroje"
+
+# , c-format
+#: src/utils.c:430
+#, c-format
+msgid "Failed to unlink symlink `%s': %s\n"
+msgstr "Nebylo mo¾né odstranit symbolický odkaz `%s': %s\n"
--- /dev/null
+# German messages for GNU Wget.
+# Copyright © 1997, 1998 Free Software Foundation, Inc.
+# Karl Eichwalder <ke@suse.de>, 1998
+# Karl Eichwalder <ke@ke.Central.DE>, 1997-1998
+#
+# 1998-06-15 19:31:58 MEST
+# Kosmetische Änderungen für 1.5.2-b4. -ke-
+#
+# 1998-05-03 09:56:27 MEST
+# Nachträge für wget-1.5.1. -ke-
+#
+# 1998-04-01 20:19:31 MEST
+# Nachträge für wget-1.5-b14.
+# getopt.c übersetzt. -ke-
+#
+# 1998-02-21 13:39:23 MET
+# Nachträge für wget-1.5-b8. -ke-
+#
+# 1998-02-08 12:29:34 MET
+# Abstimmungen auf wget-1.5-b5.
+# Meldungen von getopt.c habe ich bewusst nicht übersetzt. -ke-
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: wget 1.5.2-b4\n"
+"POT-Creation-Date: 1998-09-21 19:08+0200\n"
+"PO-Revision-Date: 1998-06-15 19:25+02:00\n"
+"Last-Translator: Karl Eichwalder <ke@suse.de>\n"
+"Language-Team: German <de@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=iso-8859-1\n"
+"Content-Transfer-Encoding: 8-bit\n"
+
+#. Login to the server:
+#. First: Establish the control connection.
+#: src/ftp.c:147 src/http.c:346
+#, c-format
+msgid "Connecting to %s:%hu... "
+msgstr "Verbindungsaufbau zu %s:%hu... "
+
+#: src/ftp.c:169 src/ftp.c:411 src/http.c:363
+#, c-format
+msgid "Connection to %s:%hu refused.\n"
+msgstr "Verbindung nach %s:%hu zurückgewiesen.\n"
+
+#. Second: Login with proper USER/PASS sequence.
+#: src/ftp.c:190 src/http.c:374
+msgid "connected!\n"
+msgstr "verbunden!\n"
+
+#: src/ftp.c:191
+#, c-format
+msgid "Logging in as %s ... "
+msgstr "Einloggen als %s ... "
+
+# Ist das gemeint?
+#: src/ftp.c:200 src/ftp.c:253 src/ftp.c:301 src/ftp.c:353 src/ftp.c:447
+#: src/ftp.c:520 src/ftp.c:568 src/ftp.c:616
+msgid "Error in server response, closing control connection.\n"
+msgstr "Fehler bei der Antwort des Servers, schließe Kontroll-Verbindung.\n"
+
+#: src/ftp.c:208
+msgid "Error in server greeting.\n"
+msgstr "Fehler bei der Begrüßung des Servers.\n"
+
+#: src/ftp.c:216 src/ftp.c:262 src/ftp.c:310 src/ftp.c:362 src/ftp.c:457
+#: src/ftp.c:530 src/ftp.c:578 src/ftp.c:626
+msgid "Write failed, closing control connection.\n"
+msgstr "Schreiben schlug fehl, schließe Kontroll-Verbindung.\n"
+
+#: src/ftp.c:223
+msgid "The server refuses login.\n"
+msgstr "Der Server weist Einloggen zurück.\n"
+
+#: src/ftp.c:230
+msgid "Login incorrect.\n"
+msgstr "Einloggen nicht richtig.\n"
+
+#: src/ftp.c:237
+msgid "Logged in!\n"
+msgstr "Eingeloggt!\n"
+
+#: src/ftp.c:270
+#, c-format
+msgid "Unknown type `%c', closing control connection.\n"
+msgstr "Unbekannte Art »%c«, schließe Kontroll-Verbindung.\n"
+
+#: src/ftp.c:283
+msgid "done. "
+msgstr "fertig. "
+
+#: src/ftp.c:289
+msgid "==> CWD not needed.\n"
+msgstr "==> CWD nicht notwendig.\n"
+
+#: src/ftp.c:317
+#, c-format
+msgid ""
+"No such directory `%s'.\n"
+"\n"
+msgstr ""
+"Kein solches Verzeichnis »%s«.\n"
+"\n"
+
+#: src/ftp.c:331 src/ftp.c:599 src/ftp.c:647 src/url.c:1431
+msgid "done.\n"
+msgstr "fertig.\n"
+
+#. do not CWD
+#: src/ftp.c:335
+msgid "==> CWD not required.\n"
+msgstr "==> CWD nicht erforderlich.\n"
+
+#: src/ftp.c:369
+msgid "Cannot initiate PASV transfer.\n"
+msgstr "Kann PASV-Übertragung nicht beginnen.\n"
+
+#: src/ftp.c:373
+msgid "Cannot parse PASV response.\n"
+msgstr "Kann PASV-Antwort nicht auswerten.\n"
+
+#: src/ftp.c:387
+#, c-format
+msgid "Will try connecting to %s:%hu.\n"
+msgstr "Versuche Verbindung zu %s:%hu herzustellen.\n"
+
+#: src/ftp.c:432 src/ftp.c:504 src/ftp.c:548
+msgid "done. "
+msgstr "fertig. "
+
+#: src/ftp.c:474
+#, c-format
+msgid "Bind error (%s).\n"
+msgstr "Verbindungsfehler (%s).\n"
+
+#: src/ftp.c:490
+msgid "Invalid PORT.\n"
+msgstr "Ungültiger PORT.\n"
+
+#: src/ftp.c:537
+msgid ""
+"\n"
+"REST failed, starting from scratch.\n"
+msgstr ""
+"\n"
+"REST schlug fehl, starte von Null.\n"
+
+#: src/ftp.c:586
+#, c-format
+msgid ""
+"No such file `%s'.\n"
+"\n"
+msgstr ""
+"Keine solche Datei »%s«.\n"
+"\n"
+
+#: src/ftp.c:634
+#, c-format
+msgid ""
+"No such file or directory `%s'.\n"
+"\n"
+msgstr ""
+"Keine solche Datei oder kein solches Verzeichnis »%s«.\n"
+"\n"
+
+#: src/ftp.c:692 src/ftp.c:699
+#, c-format
+msgid "Length: %s"
+msgstr "Länge: %s"
+
+#: src/ftp.c:694 src/ftp.c:701
+#, c-format
+msgid " [%s to go]"
+msgstr " [noch %s]"
+
+# wohl "unmaßgeblich", nicht "ohne Berechtigung"
+#: src/ftp.c:703
+msgid " (unauthoritative)\n"
+msgstr " (unmaßgeblich)\n"
+
+#: src/ftp.c:721
+#, c-format
+msgid "%s: %s, closing control connection.\n"
+msgstr "%s: %s, schließe Kontroll-Verbindung.\n"
+
+#: src/ftp.c:729
+#, c-format
+msgid "%s (%s) - Data connection: %s; "
+msgstr "%s (%s) - Daten-Verbindung: %s; "
+
+#: src/ftp.c:746
+msgid "Control connection closed.\n"
+msgstr "Kontroll-Verbindung geschlossen.\n"
+
+#: src/ftp.c:764
+msgid "Data transfer aborted.\n"
+msgstr "Daten-Übertragung abgeschlossen.\n"
+
+#: src/ftp.c:830
+#, c-format
+msgid "File `%s' already there, not retrieving.\n"
+msgstr "Datei »%s« ist schon vorhanden, kein Hol-Versuch.\n"
+
+#: src/ftp.c:896 src/http.c:922
+#, c-format
+msgid "(try:%2d)"
+msgstr "(versuche:%2d)"
+
+# oder "gesichert"?
+#: src/ftp.c:955 src/http.c:1116
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - »%s« gespeichert [%ld]\n"
+"\n"
+
+#: src/ftp.c:1001
+#, c-format
+msgid "Using `%s' as listing tmp file.\n"
+msgstr "Benutze »%s« als temporäre Auflistungsdatei.\n"
+
+#: src/ftp.c:1013
+#, c-format
+msgid "Removed `%s'.\n"
+msgstr "Entfernt »%s«.\n"
+
+#: src/ftp.c:1049
+#, c-format
+msgid "Recursion depth %d exceeded max. depth %d.\n"
+msgstr "Die Rekursionstiefe %d übersteigt die max. Tiefe %d.\n"
+
+#: src/ftp.c:1096 src/http.c:1054
+#, c-format
+msgid ""
+"Local file `%s' is more recent, not retrieving.\n"
+"\n"
+msgstr ""
+"Lokale Datei »%s« ist neuer, kein Hol-Versuch.\n"
+"\n"
+
+#: src/ftp.c:1102 src/http.c:1060
+#, c-format
+msgid "The sizes do not match (local %ld), retrieving.\n"
+msgstr "Größen stimmen nicht überein (lokal %ld), Hol-Versuch.\n"
+
+#: src/ftp.c:1119
+msgid "Invalid name of the symlink, skipping.\n"
+msgstr "Ungültiger Name für einen symbolischen Verweis, überspringe.\n"
+
+#: src/ftp.c:1136
+#, c-format
+msgid ""
+"Already have correct symlink %s -> %s\n"
+"\n"
+msgstr "Der richtige symbolische Verweis %s -> %s ist schon vorhanden\n"
+
+#: src/ftp.c:1144
+#, c-format
+msgid "Creating symlink %s -> %s\n"
+msgstr "Lege symbolischen Verweis %s -> %s an\n"
+
+#: src/ftp.c:1155
+#, c-format
+msgid "Symlinks not supported, skipping symlink `%s'.\n"
+msgstr ""
+"Symbolische Verweise nicht unterstützt, überspringe symbolischen Verweis "
+"»%s«.\n"
+
+#: src/ftp.c:1167
+#, c-format
+msgid "Skipping directory `%s'.\n"
+msgstr "Überspringe Verzeichnis »%s«.\n"
+
+#: src/ftp.c:1176
+#, c-format
+msgid "%s: unknown/unsupported file type.\n"
+msgstr "%s: unbekannter/nicht unterstüzter Dateityp.\n"
+
+#: src/ftp.c:1193
+#, c-format
+msgid "%s: corrupt time-stamp.\n"
+msgstr "%s: beschädigter Zeitstempel.\n"
+
+#: src/ftp.c:1213
+#, c-format
+msgid "Will not retrieve dirs since depth is %d (max %d).\n"
+msgstr "Hole Verzeichnisse nicht, da die Tiefe %d ist (max %d).\n"
+
+#: src/ftp.c:1252
+#, c-format
+msgid "Not descending to `%s' as it is excluded/not-included.\n"
+msgstr ""
+"Steige nicht zu »%s« hinab, da es ausgeschlossen/nicht eingeschlossen ist.\n"
+
+#: src/ftp.c:1297
+#, c-format
+msgid "Rejecting `%s'.\n"
+msgstr "Weise zurück »%s«.\n"
+
+#. No luck.
+#. #### This message SUCKS. We should see what was the
+#. reason that nothing was retrieved.
+#: src/ftp.c:1344
+#, c-format
+msgid "No matches on pattern `%s'.\n"
+msgstr "Keine Übereinstimmungen bei dem Muster »%s«.\n"
+
+#: src/ftp.c:1404
+#, c-format
+msgid "Wrote HTML-ized index to `%s' [%ld].\n"
+msgstr "Schreibe HTML-artigen Index nach »%s« [%ld].\n"
+
+#: src/ftp.c:1409
+#, c-format
+msgid "Wrote HTML-ized index to `%s'.\n"
+msgstr "HTML-artiger Index nach »%s« geschrieben.\n"
+
+#: src/getopt.c:454
+#, c-format
+msgid "%s: option `%s' is ambiguous\n"
+msgstr "%s: Option `%s' ist zweideutig\n"
+
+#: src/getopt.c:478
+#, c-format
+msgid "%s: option `--%s' doesn't allow an argument\n"
+msgstr "%s: Option `--%s' erlaubt kein Argument\n"
+
+#: src/getopt.c:483
+#, c-format
+msgid "%s: option `%c%s' doesn't allow an argument\n"
+msgstr "%s: Option `%c%s' erlaubt kein Argument\n"
+
+#: src/getopt.c:498
+#, c-format
+msgid "%s: option `%s' requires an argument\n"
+msgstr "%s: Option `%s' benötigt kein Argument\n"
+
+#. --option
+#: src/getopt.c:528
+#, c-format
+msgid "%s: unrecognized option `--%s'\n"
+msgstr "%s: nicht erkannte Option `--%s'\n"
+
+#. +option or -option
+#: src/getopt.c:532
+#, c-format
+msgid "%s: unrecognized option `%c%s'\n"
+msgstr "%s: nicht erkannte Option `%c%s'\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:563
+#, c-format
+msgid "%s: illegal option -- %c\n"
+msgstr "%s: ungültige Option -- %c\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:602
+#, c-format
+msgid "%s: option requires an argument -- %c\n"
+msgstr "%s: Option verlangt ein Argument -- %c\n"
+
+#: src/host.c:432
+#, c-format
+msgid "%s: Cannot determine user-id.\n"
+msgstr "%s: Kann Benutzer-Kennung (User-ID) nicht bestimmen.\n"
+
+#: src/host.c:444
+#, c-format
+msgid "%s: Warning: uname failed: %s\n"
+msgstr "%s: Warnung: uname fehlgeschlagen: %s\n"
+
+#: src/host.c:456
+#, c-format
+msgid "%s: Warning: gethostname failed\n"
+msgstr "%s: Warnung: gethostname fehlgeschlagen\n"
+
+#: src/host.c:484
+#, c-format
+msgid "%s: Warning: cannot determine local IP address.\n"
+msgstr "%s: Warnung: lokale IP-Adresse nicht bestimmbar.\n"
+
+#: src/host.c:498
+#, c-format
+msgid "%s: Warning: cannot reverse-lookup local IP address.\n"
+msgstr "%s: Warnung: kein \"reverse-lookup\" für lokale IP-Adresse möglich.\n"
+
+#. This gets ticked pretty often. Karl Berry reports
+#. that there can be valid reasons for the local host
+#. name not to be an FQDN, so I've decided to remove the
+#. annoying warning.
+#: src/host.c:511
+#, c-format
+msgid "%s: Warning: reverse-lookup of local address did not yield FQDN!\n"
+msgstr ""
+"%s: Warnung: \"reverse-lookup\" für lokale Adresse ergibt keinen FQDN!\n"
+
+#: src/host.c:539
+msgid "Host not found"
+msgstr "Host nicht gefunden"
+
+#: src/host.c:541
+msgid "Unknown error"
+msgstr "Unbekannter Fehler"
+
+#: src/html.c:439 src/html.c:441
+#, c-format
+msgid "Index of /%s on %s:%d"
+msgstr "Index von /%s auf %s:%d"
+
+#: src/html.c:463
+msgid "time unknown "
+msgstr "Zeit unbekannt "
+
+#: src/html.c:467
+msgid "File "
+msgstr "Datei "
+
+#: src/html.c:470
+msgid "Directory "
+msgstr "Verzeichnis "
+
+#: src/html.c:473
+msgid "Link "
+msgstr "Verweis "
+
+#: src/html.c:476
+msgid "Not sure "
+msgstr "Nicht sicher"
+
+#: src/html.c:494
+#, c-format
+msgid " (%s bytes)"
+msgstr " (%s Bytes)"
+
+#: src/http.c:492
+msgid "Failed writing HTTP request.\n"
+msgstr "HTTP-Anforderung zu schreiben schlug fehl.\n"
+
+#: src/http.c:497
+#, c-format
+msgid "%s request sent, awaiting response... "
+msgstr "%s Anforderung gesendet, warte auf Antwort... "
+
+#: src/http.c:536
+msgid "End of file while parsing headers.\n"
+msgstr "Dateiende beim auswerten der Kopfzeilen.\n"
+
+#: src/http.c:547
+#, c-format
+msgid "Read error (%s) in headers.\n"
+msgstr "Lesefehler (%s) bei den Kopfzeilen.\n"
+
+#: src/http.c:587
+msgid "No data received"
+msgstr "Keine Daten empfangen"
+
+#: src/http.c:589
+msgid "Malformed status line"
+msgstr "Nicht korrekte Statuszeile"
+
+#: src/http.c:594
+msgid "(no description)"
+msgstr "(keine Beschreibung)"
+
+#. If we have tried it already, then there is not point
+#. retrying it.
+#: src/http.c:678
+msgid "Authorization failed.\n"
+msgstr "Authorisierung fehlgeschlagen.\n"
+
+#: src/http.c:685
+msgid "Unknown authentication scheme.\n"
+msgstr "Unbekannten Authentifizierungsablauf.\n"
+
+#: src/http.c:748
+#, c-format
+msgid "Location: %s%s\n"
+msgstr "Platz: %s%s\n"
+
+#: src/http.c:749 src/http.c:774
+msgid "unspecified"
+msgstr "nicht spezifiziert"
+
+#: src/http.c:750
+msgid " [following]"
+msgstr "[folge]"
+
+# Header
+#. No need to print this output if the body won't be
+#. downloaded at all, or if the original server response is
+#. printed.
+#: src/http.c:764
+msgid "Length: "
+msgstr "Länge: "
+
+#: src/http.c:769
+#, c-format
+msgid " (%s to go)"
+msgstr " (noch %s)"
+
+#: src/http.c:774
+msgid "ignored"
+msgstr "übergangen"
+
+#: src/http.c:857
+msgid "Warning: wildcards not supported in HTTP.\n"
+msgstr "Warnung: Joker-Zeichen werden bei HTTP nicht unterstützt.\n"
+
+#. If opt.noclobber is turned on and file already exists, do not
+#. retrieve the file
+#: src/http.c:872
+#, c-format
+msgid "File `%s' already there, will not retrieve.\n"
+msgstr "Datei »%s« schon vorhanden, kein Hol-Versuch.\n"
+
+#: src/http.c:978
+#, c-format
+msgid "Cannot write to `%s' (%s).\n"
+msgstr "Kann nicht nach »%s« schreiben (%s).\n"
+
+# Was meint hier location?
+#: src/http.c:988
+#, c-format
+msgid "ERROR: Redirection (%d) without location.\n"
+msgstr "FEHLER: Redirektion (%d) ohne Ziel(?).\n"
+
+#: src/http.c:1011
+#, c-format
+msgid "%s ERROR %d: %s.\n"
+msgstr "%s FEHLER %d: %s.\n"
+
+#: src/http.c:1023
+msgid "Last-modified header missing -- time-stamps turned off.\n"
+msgstr "»Last-modified«-Kopfzeile fehlt -- Zeitstempel abgeschaltet.\n"
+
+#: src/http.c:1031
+msgid "Last-modified header invalid -- time-stamp ignored.\n"
+msgstr "»Last-modified«-Kopfzeile ungültig -- Zeitstempeln übergangen.\n"
+
+#: src/http.c:1064
+msgid "Remote file is newer, retrieving.\n"
+msgstr "Entfernte Datei ist neuer, Hol-Versuch.\n"
+
+#: src/http.c:1098
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - »%s« gesichert [%ld/%ld]\n"
+"\n"
+
+#: src/http.c:1130
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld. "
+msgstr "%s (%s) - Verbindung bei Byte %ld geschlossen. "
+
+#: src/http.c:1138
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld])\n"
+"\n"
+msgstr ""
+"%s (%s) - »%s« gesichert [%ld/%ld])\n"
+"\n"
+
+#: src/http.c:1150
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld/%ld. "
+msgstr "%s (%s) - Verbindung bei Byte %ld/%ld geschlossen. "
+
+#: src/http.c:1161
+#, c-format
+msgid "%s (%s) - Read error at byte %ld (%s)."
+msgstr "%s (%s) - Lesefehler bei Byte %ld (%s)."
+
+#: src/http.c:1169
+#, c-format
+msgid "%s (%s) - Read error at byte %ld/%ld (%s). "
+msgstr "%s (%s) - Lesefehler bei Byte %ld/%ld (%s). "
+
+#: src/init.c:312 src/netrc.c:250
+#, c-format
+msgid "%s: Cannot read %s (%s).\n"
+msgstr "%s: Kann »%s« nicht lesen (%s).\n"
+
+#: src/init.c:333 src/init.c:339
+#, c-format
+msgid "%s: Error in %s at line %d.\n"
+msgstr "%s: Fehler in »%s« bei Zeile %d.\n"
+
+#: src/init.c:370
+#, c-format
+msgid "%s: Warning: Both system and user wgetrc point to `%s'.\n"
+msgstr "%s: Warnung: wgetrc des Systems und des Benutzers zeigen nach »%s«.\n"
+
+#: src/init.c:458
+#, c-format
+msgid "%s: BUG: unknown command `%s', value `%s'.\n"
+msgstr "%s: Unbekannter Befehl »%s«, Wert »%s«.\n"
+
+#: src/init.c:485
+#, c-format
+msgid "%s: %s: Please specify on or off.\n"
+msgstr "%s: %s: Bitte »on« oder »off« angeben.\n"
+
+#: src/init.c:503 src/init.c:760 src/init.c:782 src/init.c:855
+#, c-format
+msgid "%s: %s: Invalid specification `%s'.\n"
+msgstr "%s: %s: Ungültige Angabe »%s«\n"
+
+#: src/init.c:616 src/init.c:638 src/init.c:660 src/init.c:686
+#, c-format
+msgid "%s: Invalid specification `%s'\n"
+msgstr "%s: Ungültige Angabe »%s«\n"
+
+#: src/main.c:101
+#, c-format
+msgid "Usage: %s [OPTION]... [URL]...\n"
+msgstr "Syntax: %s [OPTION]... [URL]...\n"
+
+#: src/main.c:109
+#, c-format
+msgid "GNU Wget %s, a non-interactive network retriever.\n"
+msgstr "GNU Wget %s, ein nicht-interaktives Netzwerk-Tool zum Holen.\n"
+
+#. Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+#. don't bitch. Also, it makes translation much easier.
+#: src/main.c:114
+msgid ""
+"\n"
+"Mandatory arguments to long options are mandatory for short options too.\n"
+"\n"
+msgstr ""
+"\n"
+"Zwingende Argumente zu langen Optionen sind auch zwingend bei kurzen "
+"Optionen.\n"
+"\n"
+
+#: src/main.c:117
+msgid ""
+"Startup:\n"
+" -V, --version display the version of Wget and exit.\n"
+" -h, --help print this help.\n"
+" -b, --background go to background after startup.\n"
+" -e, --execute=COMMAND execute a `.wgetrc' command.\n"
+"\n"
+msgstr ""
+"Beim Start:\n"
+" -V, --version Programmversion anzeigen\n"
+" -h, --help diese Hilfe anzeigen\n"
+" -b, --background nach dem Starten in den Hintergrund gehen\n"
+" -e, --execute=BEFEHL einen ».wgetrc«-Befehl ausführen\n"
+"\n"
+
+#: src/main.c:123
+msgid ""
+"Logging and input file:\n"
+" -o, --output-file=FILE log messages to FILE.\n"
+" -a, --append-output=FILE append messages to FILE.\n"
+" -d, --debug print debug output.\n"
+" -q, --quiet quiet (no output).\n"
+" -v, --verbose be verbose (this is the default).\n"
+" -nv, --non-verbose turn off verboseness, without being quiet.\n"
+" -i, --input-file=FILE read URL-s from file.\n"
+" -F, --force-html treat input file as HTML.\n"
+"\n"
+msgstr ""
+"Log-Datei schreiben und Eingabe-Datei:\n"
+" -o, --output-file=DATEI Log-Meldungen in DATEI schreiben\n"
+" -a, --append-output=DATEI Meldungen der DATEI anhängen\n"
+" -d, --debug Debug-Ausgabe anzeigen\n"
+" -q, --quiet still (keine Ausgabe von Meldungen)\n"
+" -v, --verbose mitteilsam (dies ist Standard)\n"
+" -nv, --non-verbose Mitteilsamkeit reduzieren; nicht ganz still\n"
+" -i, --input-file=DATEI URLs aus DATEI lesen\n"
+" -F, --force-html Eingabe-Datei als HTML behandeln\n"
+"\n"
+
+#: src/main.c:133
+msgid ""
+"Download:\n"
+" -t, --tries=NUMBER set number of retries to NUMBER (0 "
+"unlimits).\n"
+" -O --output-document=FILE write documents to FILE.\n"
+" -nc, --no-clobber don't clobber existing files.\n"
+" -c, --continue restart getting an existing file.\n"
+" --dot-style=STYLE set retrieval display style.\n"
+" -N, --timestamping don't retrieve files if older than local.\n"
+" -S, --server-response print server response.\n"
+" --spider don't download anything.\n"
+" -T, --timeout=SECONDS set the read timeout to SECONDS.\n"
+" -w, --wait=SECONDS wait SECONDS between retrievals.\n"
+" -Y, --proxy=on/off turn proxy on or off.\n"
+" -Q, --quota=NUMBER set retrieval quota to NUMBER.\n"
+"\n"
+msgstr ""
+"Holen (download):\n"
+" -t, --tries=ZAHL setze Anzahl der Wiederholversuch auf ZAHL\n"
+" (0 ohne Beschränkung)\n"
+" -O --output-document=DATEI schreibe Dokumente in DATEI\n"
+" -nc, --no-clobber bestehende Dateien nicht überschreiben\n"
+" -c, --continue beginne erneut, eine existierende Datei\n"
+" zu holen\n"
+" --dot-style=STYLE Hol-Anzeige auf STYLE setzen\n"
+" -N, --timestamping hole keine Dateien, die älter als die "
+"lokalen\n"
+" sind\n"
+" -S, --server-response Antwort des Servers anzeigen\n"
+" --spider nichts holen (don't download anything)\n"
+" -T, --timeout=SEKUNDEN den Lese-Timeout auf SEKUNDEN setzen\n"
+" -w, --wait=SEKUNDEN SEKUNDEN zwischen den Hol-Versuchen warten\n"
+" -Y, --proxy=on/off Proxy ein (»on«) oder aus (»off«) stellen\n"
+" -Q, --quota=ZAHL setze die Hol-Vorgänge auf ZAHL\n"
+"\n"
+
+#: src/main.c:147
+msgid ""
+"Directories:\n"
+" -nd --no-directories don't create directories.\n"
+" -x, --force-directories force creation of directories.\n"
+" -nH, --no-host-directories don't create host directories.\n"
+" -P, --directory-prefix=PREFIX save files to PREFIX/...\n"
+" --cut-dirs=NUMBER ignore NUMBER remote directory "
+"components.\n"
+"\n"
+msgstr ""
+"Verzeichnisse:\n"
+" -nd --no-directories keine Verzeichnisse anlegen\n"
+" -x, --force-directories Anlegen von Verzeichnissen erwingen\n"
+" -nH, --no-host-directories keine Host-Verzeichnisse anlegen\n"
+" -P, --directory-prefix=PREFIX Dateien nach PREFIX/... sichern\n"
+" --cut-dirs=ZAHL ignoriere die ZAHL der entfernten\n"
+" Verzeichnisbestandteile\n"
+"\n"
+
+#: src/main.c:154
+msgid ""
+"HTTP options:\n"
+" --http-user=USER set http user to USER.\n"
+" --http-passwd=PASS set http password to PASS.\n"
+" -C, --cache=on/off (dis)allow server-cached data (normally "
+"allowed).\n"
+" --ignore-length ignore `Content-Length' header field.\n"
+" --header=STRING insert STRING among the headers.\n"
+" --proxy-user=USER set USER as proxy username.\n"
+" --proxy-passwd=PASS set PASS as proxy password.\n"
+" -s, --save-headers save the HTTP headers to file.\n"
+" -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n"
+"\n"
+msgstr ""
+"HTTP-Optionen:\n"
+" --http-user=USER setze http-Benutzer auf USER\n"
+" --http-passwd=PASS setse http-Passwort auf PASS\n"
+" -C, --cache=on/off erlaube/verbiete server-gepufferte Daten\n"
+" (server-cached data) (normalerweise "
+"erlaubt)\n"
+" --ignore-length ignoriere das »Content-Length«-Kopffeld\n"
+" --header=ZEICHENKETTE ZEICHENKETTE zwischen die Kopfzeilen einfügen\n"
+" --proxy-user=USER setze USER als Proxy-Benutzername\n"
+" --proxy-passwd=PASS setze PASS als Proxy-Passwort\n"
+" -s, --save-headers sichere die HTTP-Kopfzeilen in Datei\n"
+" -U, --user-agent=AGENT als AGENT anstelle of Wget/VERSION "
+"identifizieren\n"
+"\n"
+
+#: src/main.c:165
+msgid ""
+"FTP options:\n"
+" --retr-symlinks retrieve FTP symbolic links.\n"
+" -g, --glob=on/off turn file name globbing on or off.\n"
+" --passive-ftp use the \"passive\" transfer mode.\n"
+"\n"
+msgstr ""
+"FTP-Optionen:\n"
+" --retr-symlinks hole symbolische Verweise (FTP)\n"
+" -g, --glob=on/off Dateinamen-»Globbing« ein (»on«) oder aus (»off«)\n"
+" stellen\n"
+" --passive-ftp den \"passiven\" Übertragungsmodus verwenden\n"
+"\n"
+
+#: src/main.c:170
+msgid ""
+"Recursive retrieval:\n"
+" -r, --recursive recursive web-suck -- use with care!.\n"
+" -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n"
+" --delete-after delete downloaded files.\n"
+" -k, --convert-links convert non-relative links to relative.\n"
+" -m, --mirror turn on options suitable for mirroring.\n"
+" -nr, --dont-remove-listing don't remove `.listing' files.\n"
+"\n"
+msgstr ""
+"Rekursives Holen:\n"
+" -r, --recursive rekursives Web-Saugen -- mit Umsicht "
+"verwenden!\n"
+" -l, --level=Zahl maximale Rekursionstiefe (0 ohne Begrenzung)\n"
+" --delete-after geholte Dateien löschen\n"
+" -k, --convert-links nicht-relative Verweise in relative "
+"umwandeln\n"
+" -m, --mirror geeignete Optionen fürs Spiegeln (mirroring)\n"
+" einschalten\n"
+" -nr, --dont-remove-listing ».listing«-Dateien nicht entfernen\n"
+"\n"
+
+#: src/main.c:178
+msgid ""
+"Recursive accept/reject:\n"
+" -A, --accept=LIST list of accepted extensions.\n"
+" -R, --reject=LIST list of rejected extensions.\n"
+" -D, --domains=LIST list of accepted domains.\n"
+" --exclude-domains=LIST comma-separated list of rejected "
+"domains.\n"
+" -L, --relative follow relative links only.\n"
+" --follow-ftp follow FTP links from HTML documents.\n"
+" -H, --span-hosts go to foreign hosts when recursive.\n"
+" -I, --include-directories=LIST list of allowed directories.\n"
+" -X, --exclude-directories=LIST list of excluded directories.\n"
+" -nh, --no-host-lookup don't DNS-lookup hosts.\n"
+" -np, --no-parent don't ascend to the parent directory.\n"
+"\n"
+msgstr ""
+"Recursiv erlauben/zurückweisen:\n"
+" -A, --accept=LISTE Liste der erlaubten Erweiterungen\n"
+" -R, --reject=LISTE Liste der zurückzuweisenden "
+"Erweiterungen\n"
+" -D, --domains=LISTE Liste der erlaubten Domains\n"
+" --exclude-domains=LISTE komma-unterteilte Liste der\n"
+" zurückzuweisen Domains\n"
+" -L, --relative nur relativen Verweisen folgen\n"
+" --follow-ftp FTP-Verweisen von HTML-Dokumenten aus\n"
+" folgen\n"
+" -H, --span-hosts wenn »--recursive«, auch zu fremden "
+"Hosts\n"
+" gehen\n"
+" -I, --include-directories=LISTE Liste der erlaubten Verzeichnisse\n"
+" -X, --exclude-directories=LISTE Liste der auszuschließenden "
+"Verzeichnisse\n"
+" -nh, --no-host-lookup kein DNS-lookup für Hosts durchführen\n"
+" -np, --no-parent nicht zum übergeordneten Verzeichnis\n"
+" hinaufsteigen\n"
+"\n"
+
+#: src/main.c:191
+msgid "Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"
+msgstr ""
+"Fehlerberichte und Verbesserungsvorschläge bitte an <bug-wget@gnu.org>\n"
+"schicken.\n"
+"\n"
+"Für die deutsche Übersetzung ist die Mailingliste <de@li.org> zuständig.\n"
+
+#: src/main.c:347
+#, c-format
+msgid "%s: debug support not compiled in.\n"
+msgstr "%s: Debug-Unterstützung nicht hineinkompiliert.\n"
+
+#: src/main.c:395
+msgid ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"This program is distributed in the hope that it will be useful,\n"
+"but WITHOUT ANY WARRANTY; without even the implied warranty of\n"
+"MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n"
+"GNU General Public License for more details.\n"
+msgstr ""
+"Copyright © 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"Es gibt KEINERLEI Garantie, nicht einmal für die TAUGLICHKEIT oder die,\n"
+"VERWENDBARKEIT ZU EINEM ANGEGEBENEN ZWECK. In den Quellen befindet sich "
+"die\n"
+"Lizenz- und Kopierbedingung; die Einzelheiten sind in der Datei COPYING\n"
+"(GNU General Public License) beschrieben.\n"
+
+#: src/main.c:401
+msgid ""
+"\n"
+"Written by Hrvoje Niksic <hniksic@srce.hr>.\n"
+msgstr ""
+"\n"
+"Geschrieben von Hrvoje Niksic <hniksic@srce.hr>.\n"
+
+#: src/main.c:465
+#, c-format
+msgid "%s: %s: invalid command\n"
+msgstr "%s: %s: ungültiger Befehl\n"
+
+#: src/main.c:515
+#, c-format
+msgid "%s: illegal option -- `-n%c'\n"
+msgstr "%s: ungültige Option -- »-n%c«\n"
+
+#. #### Something nicer should be printed here -- similar to the
+#. pre-1.5 `--help' page.
+#: src/main.c:518 src/main.c:560 src/main.c:591
+#, c-format
+msgid "Try `%s --help' for more options.\n"
+msgstr "»%s --help« gibt weitere Informationen.\n"
+
+#: src/main.c:571
+msgid "Can't be verbose and quiet at the same time.\n"
+msgstr "\"Mitteilsam\" und \"still\" ist gleichzeitig unmöglich.\n"
+
+#: src/main.c:577
+msgid "Can't timestamp and not clobber old files at the same time.\n"
+msgstr ""
+"Zeitstempeln und nicht Überschreiben alter Dateien ist gleichzeitig "
+"unmöglich.\n"
+
+#. No URL specified.
+#: src/main.c:586
+#, c-format
+msgid "%s: missing URL\n"
+msgstr "%s: URL fehlt\n"
+
+#: src/main.c:674
+#, c-format
+msgid "No URLs found in %s.\n"
+msgstr "Keine URLs in %s gefunden.\n"
+
+#: src/main.c:683
+#, c-format
+msgid ""
+"\n"
+"FINISHED --%s--\n"
+"Downloaded: %s bytes in %d files\n"
+msgstr ""
+"\n"
+"BEENDET --%s--\n"
+"Geholt: %s Bytes in %d Dateien\n"
+
+#: src/main.c:688
+#, c-format
+msgid "Download quota (%s bytes) EXCEEDED!\n"
+msgstr "Hol-Kontingent (%s Bytes) ERSCHÖPFT!\n"
+
+#. Please note that the double `%' in `%%s' is intentional, because
+#. redirect_output passes tmp through printf.
+#: src/main.c:715
+msgid "%s received, redirecting output to `%%s'.\n"
+msgstr "%s erhalten, weise Ausgabe nach »%%s« zurück.\n"
+
+#: src/mswindows.c:118
+#, c-format
+msgid ""
+"\n"
+"CTRL+Break received, redirecting output to `%s'.\n"
+"Execution continued in background.\n"
+"You may stop Wget by pressing CTRL+ALT+DELETE.\n"
+msgstr ""
+"\n"
+"CTRL+Break (= Strg+Abbruch) empfangen, Ausgabe wird nach »%s« umgeleitet.\n"
+"Ausführung wird im Hintergrund fortgeführt.\n"
+"Wget kann durch das Drücken von CTRL+ALT+DELETE (= Strg+Alt+Entf)\n"
+"gestopt werden.\n"
+
+#. parent, no error
+#: src/mswindows.c:135 src/utils.c:268
+msgid "Continuing in background.\n"
+msgstr "Im Hintergrund geht's weiter.\n"
+
+#: src/mswindows.c:137 src/utils.c:270
+#, c-format
+msgid "Output will be written to `%s'.\n"
+msgstr "Ausgabe wird nach »%s« geschrieben.\n"
+
+#: src/mswindows.c:227
+#, c-format
+msgid "Starting WinHelp %s\n"
+msgstr "WinHelp %s wird gestartet\n"
+
+#: src/mswindows.c:254 src/mswindows.c:262
+#, c-format
+msgid "%s: Couldn't find usable socket driver.\n"
+msgstr "%s: Kann keinen benutzbaren \"socket driver\" finden.\n"
+
+#: src/netrc.c:334
+#, c-format
+msgid "%s: %s:%d: warning: \"%s\" token appears before any machine name\n"
+msgstr "%s: %s:%d: Warnung: »%s«-Wortteil erscheint vor einem Maschinennamen\n"
+
+#: src/netrc.c:365
+#, c-format
+msgid "%s: %s:%d: unknown token \"%s\"\n"
+msgstr "%s: %s:%d: unbekannter Wortteil »%s«\n"
+
+#: src/netrc.c:429
+#, c-format
+msgid "Usage: %s NETRC [HOSTNAME]\n"
+msgstr "Syntax: %s NETRC [HOSTNAME]\n"
+
+# stat
+#: src/netrc.c:439
+#, c-format
+msgid "%s: cannot stat %s: %s\n"
+msgstr "%s: kann %s nicht finden: %s\n"
+
+#: src/recur.c:449 src/retr.c:462
+#, c-format
+msgid "Removing %s.\n"
+msgstr "Entferne »%s«.\n"
+
+#: src/recur.c:450
+#, c-format
+msgid "Removing %s since it should be rejected.\n"
+msgstr "Entferne »%s«, da dies zurückgewiesen werden soll.\n"
+
+#: src/recur.c:609
+msgid "Loading robots.txt; please ignore errors.\n"
+msgstr "Lade »robots.txt«; bitte Fehler ignorieren.\n"
+
+#: src/retr.c:193
+#, c-format
+msgid ""
+"\n"
+" [ skipping %dK ]"
+msgstr ""
+"\n"
+" [ überspringe %dK ]"
+
+#: src/retr.c:344
+msgid "Could not find proxy host.\n"
+msgstr "Kann Proxy-Host nicht finden.\n"
+
+#: src/retr.c:355
+#, c-format
+msgid "Proxy %s: Must be HTTP.\n"
+msgstr "Proxy %s: Muss HTTP sein.\n"
+
+#: src/retr.c:398
+#, c-format
+msgid "%s: Redirection to itself.\n"
+msgstr "%s: Redirektion auf sich selbst.\n"
+
+#: src/retr.c:483
+msgid ""
+"Giving up.\n"
+"\n"
+msgstr ""
+"Gebe auf.\n"
+"\n"
+
+#: src/retr.c:483
+msgid ""
+"Retrying.\n"
+"\n"
+msgstr ""
+"Versuche erneut.\n"
+"\n"
+
+# ???
+#: src/url.c:940
+#, c-format
+msgid "Error (%s): Link %s without a base provided.\n"
+msgstr "Fehler (%s): Verweis »%s« ohne »base« versucht.\n"
+
+#: src/url.c:955
+#, c-format
+msgid "Error (%s): Base %s relative, without referer URL.\n"
+msgstr "Fehler (%s): »Base« %s relativ, ohne Bezugs-URL.\n"
+
+#: src/url.c:1373
+#, c-format
+msgid "Converting %s... "
+msgstr "Wandle %s um... "
+
+#: src/url.c:1378 src/url.c:1389
+#, c-format
+msgid "Cannot convert links in %s: %s\n"
+msgstr "Kann Verweise in %s nicht umwandeln: %s\n"
+
+#: src/utils.c:71
+#, c-format
+msgid "%s: %s: Not enough memory.\n"
+msgstr "%s: %s: Nicht genügend Speicher.\n"
+
+#: src/utils.c:203
+msgid "Unknown/unsupported protocol"
+msgstr "Unbekanntes/nicht unterstütztes Protokoll"
+
+#: src/utils.c:206
+msgid "Invalid port specification"
+msgstr "Ungültige Port-Angabe"
+
+#: src/utils.c:209
+msgid "Invalid host name"
+msgstr "Ungültiger Hostname"
+
+#: src/utils.c:430
+#, c-format
+msgid "Failed to unlink symlink `%s': %s\n"
+msgstr "Entfernen des symbolischen Verweises »%s« schlug fehl: %s\n"
--- /dev/null
+# Croatian messages for GNU Wget
+# Copyright (C) 1998 Free Software Foundation, Inc.
+# Hrvoje Niksic <hniksic@srce.hr>, 1998.
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: wget 1.5.2-b2\n"
+"POT-Creation-Date: 1998-09-21 19:08+0200\n"
+"PO-Revision-Date: 1998-02-29 21:05+01:00\n"
+"Last-Translator: Hrvoje Niksic <hniksic@srce.hr>\n"
+"Language-Team: Croatian <hr-translation@bagan.srce.hr>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=iso-8859-2\n"
+"Content-Transfer-Encoding: 8bit\n"
+
+#. Login to the server:
+#. First: Establish the control connection.
+#: src/ftp.c:147 src/http.c:346
+#, c-format
+msgid "Connecting to %s:%hu... "
+msgstr "Spajam se na %s:%hu... "
+
+#: src/ftp.c:169 src/ftp.c:411 src/http.c:363
+#, c-format
+msgid "Connection to %s:%hu refused.\n"
+msgstr "%s:%hu odbija vezu.\n"
+
+#. Second: Login with proper USER/PASS sequence.
+#: src/ftp.c:190 src/http.c:374
+msgid "connected!\n"
+msgstr "spojen!\n"
+
+#: src/ftp.c:191
+#, c-format
+msgid "Logging in as %s ... "
+msgstr "Logiram se kao %s ... "
+
+#: src/ftp.c:200 src/ftp.c:253 src/ftp.c:301 src/ftp.c:353 src/ftp.c:447
+#: src/ftp.c:520 src/ftp.c:568 src/ftp.c:616
+msgid "Error in server response, closing control connection.\n"
+msgstr "Gre¹ka u odgovoru, zatvaram kontrolnu vezu.\n"
+
+#: src/ftp.c:208
+msgid "Error in server greeting.\n"
+msgstr "Gre¹ka u poslu¾iteljevom pozdravu.\n"
+
+#: src/ftp.c:216 src/ftp.c:262 src/ftp.c:310 src/ftp.c:362 src/ftp.c:457
+#: src/ftp.c:530 src/ftp.c:578 src/ftp.c:626
+msgid "Write failed, closing control connection.\n"
+msgstr "Write nije uspio, zatvaram kontrolnu vezu.\n"
+
+#: src/ftp.c:223
+msgid "The server refuses login.\n"
+msgstr "Poslu¾itelj odbija prijavu.\n"
+
+#: src/ftp.c:230
+msgid "Login incorrect.\n"
+msgstr "Pogre¹na prijava.\n"
+
+#: src/ftp.c:237
+msgid "Logged in!\n"
+msgstr "Ulogiran!\n"
+
+#: src/ftp.c:270
+#, c-format
+msgid "Unknown type `%c', closing control connection.\n"
+msgstr "Nepoznat tip `%c', zatvaram kontrolnu vezu.\n"
+
+#: src/ftp.c:283
+msgid "done. "
+msgstr "gotovo. "
+
+#: src/ftp.c:289
+msgid "==> CWD not needed.\n"
+msgstr "==> CWD ne treba.\n"
+
+#: src/ftp.c:317
+#, c-format
+msgid ""
+"No such directory `%s'.\n"
+"\n"
+msgstr ""
+"Nema direktorija `%s'.\n"
+"\n"
+
+#: src/ftp.c:331 src/ftp.c:599 src/ftp.c:647 src/url.c:1431
+msgid "done.\n"
+msgstr "gotovo.\n"
+
+#. do not CWD
+#: src/ftp.c:335
+msgid "==> CWD not required.\n"
+msgstr "==> CWD se ne tra¾i.\n"
+
+#: src/ftp.c:369
+msgid "Cannot initiate PASV transfer.\n"
+msgstr "Ne mogu otpoèeti PASV prijenos.\n"
+
+#: src/ftp.c:373
+msgid "Cannot parse PASV response.\n"
+msgstr "Ne mogu raspoznati PASV odgovor.\n"
+
+#: src/ftp.c:387
+#, c-format
+msgid "Will try connecting to %s:%hu.\n"
+msgstr "Poku¹at æu se spojiti na %s:%hu.\n"
+
+#: src/ftp.c:432 src/ftp.c:504 src/ftp.c:548
+msgid "done. "
+msgstr "gotovo. "
+
+#: src/ftp.c:474
+#, c-format
+msgid "Bind error (%s).\n"
+msgstr "Gre¹ka u bindu (%s).\n"
+
+#: src/ftp.c:490
+msgid "Invalid PORT.\n"
+msgstr "Pogre¹an PORT.\n"
+
+#: src/ftp.c:537
+msgid ""
+"\n"
+"REST failed, starting from scratch.\n"
+msgstr ""
+"\n"
+"REST nije uspio, poèinjem ispoèetka.\n"
+
+#: src/ftp.c:586
+#, c-format
+msgid ""
+"No such file `%s'.\n"
+"\n"
+msgstr ""
+"Nema datoteke `%s'.\n"
+"\n"
+
+#: src/ftp.c:634
+#, c-format
+msgid ""
+"No such file or directory `%s'.\n"
+"\n"
+msgstr ""
+"Nema datoteke ili direktorija `%s'.\n"
+"\n"
+
+#: src/ftp.c:692 src/ftp.c:699
+#, c-format
+msgid "Length: %s"
+msgstr "Duljina: %s"
+
+#: src/ftp.c:694 src/ftp.c:701
+#, c-format
+msgid " [%s to go]"
+msgstr " [jo¹ %s]"
+
+#: src/ftp.c:703
+msgid " (unauthoritative)\n"
+msgstr " (neautorizirana)\n"
+
+#: src/ftp.c:721
+#, c-format
+msgid "%s: %s, closing control connection.\n"
+msgstr "%s: %s, zatvaram kontrolnu vezu.\n"
+
+#: src/ftp.c:729
+#, c-format
+msgid "%s (%s) - Data connection: %s; "
+msgstr "%s (%s) - Podatkovna veza: %s; "
+
+#: src/ftp.c:746
+msgid "Control connection closed.\n"
+msgstr "Kontrolna veza prekinuta.\n"
+
+#: src/ftp.c:764
+msgid "Data transfer aborted.\n"
+msgstr "Prijenos podataka prekinut.\n"
+
+#: src/ftp.c:830
+#, c-format
+msgid "File `%s' already there, not retrieving.\n"
+msgstr "Datoteka `%s' veæ postoji, ne skidam.\n"
+
+#: src/ftp.c:896 src/http.c:922
+#, c-format
+msgid "(try:%2d)"
+msgstr "(pok:%2d)"
+
+#: src/ftp.c:955 src/http.c:1116
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' snimljen [%ld]\n"
+"\n"
+
+#: src/ftp.c:1001
+#, c-format
+msgid "Using `%s' as listing tmp file.\n"
+msgstr "Koristim `%s' kao privremenu datoteku za listing.\n"
+
+#: src/ftp.c:1013
+#, c-format
+msgid "Removed `%s'.\n"
+msgstr "Izbrisao `%s'.\n"
+
+#: src/ftp.c:1049
+#, c-format
+msgid "Recursion depth %d exceeded max. depth %d.\n"
+msgstr "Dubina rekurzije %d prelazi najveæu dozvoljenu %d.\n"
+
+#: src/ftp.c:1096 src/http.c:1054
+#, c-format
+msgid ""
+"Local file `%s' is more recent, not retrieving.\n"
+"\n"
+msgstr ""
+"Lokalna datoteka `%s' je novija, ne skidam.\n"
+"\n"
+
+#: src/ftp.c:1102 src/http.c:1060
+#, c-format
+msgid "The sizes do not match (local %ld), retrieving.\n"
+msgstr "Velièine se ne sla¾u (lokalno %ld), skidam.\n"
+
+#: src/ftp.c:1119
+msgid "Invalid name of the symlink, skipping.\n"
+msgstr "Pogre¹no ime simbolièkog linka, preskaèem.\n"
+
+#: src/ftp.c:1136
+#, c-format
+msgid ""
+"Already have correct symlink %s -> %s\n"
+"\n"
+msgstr ""
+"Veæ postoji ispravan link %s -> %s\n"
+"\n"
+
+#: src/ftp.c:1144
+#, c-format
+msgid "Creating symlink %s -> %s\n"
+msgstr "Stvaram simbolièki link %s -> %s\n"
+
+#: src/ftp.c:1155
+#, c-format
+msgid "Symlinks not supported, skipping symlink `%s'.\n"
+msgstr "Simbolièki linkovi nisu podr¾ani, preskaèem link `%s'.\n"
+
+#: src/ftp.c:1167
+#, c-format
+msgid "Skipping directory `%s'.\n"
+msgstr "Preskaèem direktorij `%s'.\n"
+
+#: src/ftp.c:1176
+#, c-format
+msgid "%s: unknown/unsupported file type.\n"
+msgstr "%s: nepoznata/nepodr¾ana vrsta datoteke.\n"
+
+#: src/ftp.c:1193
+#, c-format
+msgid "%s: corrupt time-stamp.\n"
+msgstr "%s: pogre¹no vrijeme.\n"
+
+#: src/ftp.c:1213
+#, c-format
+msgid "Will not retrieve dirs since depth is %d (max %d).\n"
+msgstr "Ne skidam direktorije jer je dubina %d (maksimalno %d).\n"
+
+#: src/ftp.c:1252
+#, c-format
+msgid "Not descending to `%s' as it is excluded/not-included.\n"
+msgstr "Ne idem u `%s' jer je iskljuèen ili nije ukljuèen.\n"
+
+#: src/ftp.c:1297
+#, c-format
+msgid "Rejecting `%s'.\n"
+msgstr "Odbijam `%s'.\n"
+
+#. No luck.
+#. #### This message SUCKS. We should see what was the
+#. reason that nothing was retrieved.
+#: src/ftp.c:1344
+#, c-format
+msgid "No matches on pattern `%s'.\n"
+msgstr "Ni¹ta ne ide uz `%s'.\n"
+
+#: src/ftp.c:1404
+#, c-format
+msgid "Wrote HTML-ized index to `%s' [%ld].\n"
+msgstr "Snimio HTML-iziran indeks u `%s' [%ld].\n"
+
+#: src/ftp.c:1409
+#, c-format
+msgid "Wrote HTML-ized index to `%s'.\n"
+msgstr "Snimio HTML-iziran indeks u `%s'.\n"
+
+#: src/getopt.c:454
+#, c-format
+msgid "%s: option `%s' is ambiguous\n"
+msgstr "%s: opcija `%s' je dvosmislena\n"
+
+#: src/getopt.c:478
+#, c-format
+msgid "%s: option `--%s' doesn't allow an argument\n"
+msgstr "%s: uz opciju `--%s' ne ide argument\n"
+
+#: src/getopt.c:483
+#, c-format
+msgid "%s: option `%c%s' doesn't allow an argument\n"
+msgstr "%s: opcija `%c%s' ne dozvoljava argument\n"
+
+#: src/getopt.c:498
+#, c-format
+msgid "%s: option `%s' requires an argument\n"
+msgstr "%s: opcija `%s' tra¾i argument\n"
+
+#. --option
+#: src/getopt.c:528
+#, c-format
+msgid "%s: unrecognized option `--%s'\n"
+msgstr "%s: nepoznata opcija `--%s'\n"
+
+#. +option or -option
+#: src/getopt.c:532
+#, c-format
+msgid "%s: unrecognized option `%c%s'\n"
+msgstr "%s: nepoznata opcija `%c%s'\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:563
+#, c-format
+msgid "%s: illegal option -- %c\n"
+msgstr "%s: nedozvoljena opcija -- %c\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:602
+#, c-format
+msgid "%s: option requires an argument -- %c\n"
+msgstr "%s: opcija tra¾i argument -- %c\n"
+
+#: src/host.c:432
+#, c-format
+msgid "%s: Cannot determine user-id.\n"
+msgstr "%s: Ne mogu utvrditi user-id.\n"
+
+#: src/host.c:444
+#, c-format
+msgid "%s: Warning: uname failed: %s\n"
+msgstr "%s: Upozorenje: uname nije uspio: %s\n"
+
+#: src/host.c:456
+#, c-format
+msgid "%s: Warning: gethostname failed\n"
+msgstr "%s: Upozorenje: gethostname nije uspio\n"
+
+#: src/host.c:484
+#, c-format
+msgid "%s: Warning: cannot determine local IP address.\n"
+msgstr "%s: Upozorenje: ne mogu utvrditi lokalnu IP adresu.\n"
+
+#: src/host.c:498
+#, c-format
+msgid "%s: Warning: cannot reverse-lookup local IP address.\n"
+msgstr "%s: Upozorenje: ne mogu napraviti reverzni lookup lokalne IP adrese.\n"
+
+#. This gets ticked pretty often. Karl Berry reports
+#. that there can be valid reasons for the local host
+#. name not to be an FQDN, so I've decided to remove the
+#. annoying warning.
+#: src/host.c:511
+#, c-format
+msgid "%s: Warning: reverse-lookup of local address did not yield FQDN!\n"
+msgstr "%s: Upozorenje: reverzni lookup lokalne adrese ne daje FQDN!\n"
+
+#: src/host.c:539
+msgid "Host not found"
+msgstr "Raèunalo nije pronaðeno"
+
+#: src/host.c:541
+msgid "Unknown error"
+msgstr "Nepoznata gre¹ka"
+
+#: src/html.c:439 src/html.c:441
+#, c-format
+msgid "Index of /%s on %s:%d"
+msgstr "Indeks direktorija /%s na %s:%d"
+
+#: src/html.c:463
+msgid "time unknown "
+msgstr "nepoznato vrijeme "
+
+#: src/html.c:467
+msgid "File "
+msgstr "Datoteka "
+
+#: src/html.c:470
+msgid "Directory "
+msgstr "Direktorij "
+
+#: src/html.c:473
+msgid "Link "
+msgstr "Link "
+
+#: src/html.c:476
+msgid "Not sure "
+msgstr "Ne znam "
+
+#: src/html.c:494
+#, c-format
+msgid " (%s bytes)"
+msgstr " (%s bajtova)"
+
+#: src/http.c:492
+msgid "Failed writing HTTP request.\n"
+msgstr "Nisam uspio poslati HTTP zahtjev.\n"
+
+#: src/http.c:497
+#, c-format
+msgid "%s request sent, awaiting response... "
+msgstr "%s zahtjev poslan, èekam odgovor... "
+
+#: src/http.c:536
+msgid "End of file while parsing headers.\n"
+msgstr "Kraj datoteke za vrijeme obrade zaglavlja.\n"
+
+#: src/http.c:547
+#, c-format
+msgid "Read error (%s) in headers.\n"
+msgstr "Gre¹ka pri èitanju zaglavlja (%s).\n"
+
+#: src/http.c:587
+msgid "No data received"
+msgstr "Podaci nisu primljeni"
+
+#: src/http.c:589
+msgid "Malformed status line"
+msgstr "Deformirana statusna linija"
+
+#: src/http.c:594
+msgid "(no description)"
+msgstr "(bez opisa)"
+
+#. If we have tried it already, then there is not point
+#. retrying it.
+#: src/http.c:678
+msgid "Authorization failed.\n"
+msgstr "Ovjera nije uspjela.\n"
+
+#: src/http.c:685
+msgid "Unknown authentication scheme.\n"
+msgstr "Nepoznata metoda ovjere.\n"
+
+#: src/http.c:748
+#, c-format
+msgid "Location: %s%s\n"
+msgstr "Polo¾aj: %s%s\n"
+
+#: src/http.c:749 src/http.c:774
+msgid "unspecified"
+msgstr "neodreðen"
+
+#: src/http.c:750
+msgid " [following]"
+msgstr " [pratim]"
+
+#. No need to print this output if the body won't be
+#. downloaded at all, or if the original server response is
+#. printed.
+#: src/http.c:764
+msgid "Length: "
+msgstr "Duljina: "
+
+#: src/http.c:769
+#, c-format
+msgid " (%s to go)"
+msgstr " (jo¹ %s)"
+
+#: src/http.c:774
+msgid "ignored"
+msgstr "zanemarena"
+
+#: src/http.c:857
+msgid "Warning: wildcards not supported in HTTP.\n"
+msgstr "Upozorenje: wildcardi nisu podr¾ani za HTTP.\n"
+
+#. If opt.noclobber is turned on and file already exists, do not
+#. retrieve the file
+#: src/http.c:872
+#, c-format
+msgid "File `%s' already there, will not retrieve.\n"
+msgstr "Datoteka `%s' veæ postoji, ne skidam.\n"
+
+#: src/http.c:978
+#, c-format
+msgid "Cannot write to `%s' (%s).\n"
+msgstr "Ne mogu pisati u `%s' (%s).\n"
+
+#: src/http.c:988
+#, c-format
+msgid "ERROR: Redirection (%d) without location.\n"
+msgstr "GRE©KA: Redirekcija (%d) bez novog polo¾aja (location).\n"
+
+#: src/http.c:1011
+#, c-format
+msgid "%s ERROR %d: %s.\n"
+msgstr "%s GRE©KA %d: %s.\n"
+
+#: src/http.c:1023
+msgid "Last-modified header missing -- time-stamps turned off.\n"
+msgstr "Nedostaje Last-Modified zaglavlje -- ignoriram vremensku oznaku.\n"
+
+#: src/http.c:1031
+msgid "Last-modified header invalid -- time-stamp ignored.\n"
+msgstr "Nevaljan Last-Modified header -- ignoriram vremensku oznaku.\n"
+
+#: src/http.c:1064
+msgid "Remote file is newer, retrieving.\n"
+msgstr "Datoteka na poslu¾itelju je novija, skidam.\n"
+
+#: src/http.c:1098
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' snimljen [%ld/%ld]\n"
+"\n"
+
+#: src/http.c:1130
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld. "
+msgstr "%s (%s) - Veza zatvorena na bajtu %ld. "
+
+#: src/http.c:1138
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld])\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' snimljen [%ld/%ld])\n"
+"\n"
+
+#: src/http.c:1150
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld/%ld. "
+msgstr "%s (%s) - Veza zatvorena na bajtu %ld/%ld. "
+
+#: src/http.c:1161
+#, c-format
+msgid "%s (%s) - Read error at byte %ld (%s)."
+msgstr "%s (%s) - Gre¹ka pri èitanju na bajtu %ld (%s)."
+
+#: src/http.c:1169
+#, c-format
+msgid "%s (%s) - Read error at byte %ld/%ld (%s). "
+msgstr "%s (%s) - Gre¹ka pri èitanju na bajtu %ld/%ld (%s). "
+
+#: src/init.c:312 src/netrc.c:250
+#, c-format
+msgid "%s: Cannot read %s (%s).\n"
+msgstr "%s: Ne mogu proèitati %s (%s).\n"
+
+#: src/init.c:333 src/init.c:339
+#, c-format
+msgid "%s: Error in %s at line %d.\n"
+msgstr "%s: Gre¹ka u %s na liniji %d.\n"
+
+#: src/init.c:370
+#, c-format
+msgid "%s: Warning: Both system and user wgetrc point to `%s'.\n"
+msgstr "%s: Upozorenje: sistemski i korisnikov wgetrc su `%s'.\n"
+
+#: src/init.c:458
+#, c-format
+msgid "%s: BUG: unknown command `%s', value `%s'.\n"
+msgstr "%s: BUG: Nepoznata naredba `%s', vrijednost `%s'.\n"
+
+#: src/init.c:485
+#, c-format
+msgid "%s: %s: Please specify on or off.\n"
+msgstr "%s: %s: Molim postavite na on ili off.\n"
+
+#: src/init.c:503 src/init.c:760 src/init.c:782 src/init.c:855
+#, c-format
+msgid "%s: %s: Invalid specification `%s'.\n"
+msgstr "%s: %s: Pogre¹na specifikacija `%s'.\n"
+
+#: src/init.c:616 src/init.c:638 src/init.c:660 src/init.c:686
+#, c-format
+msgid "%s: Invalid specification `%s'\n"
+msgstr "%s: Pogre¹na specifikacija `%s'\n"
+
+#: src/main.c:101
+#, c-format
+msgid "Usage: %s [OPTION]... [URL]...\n"
+msgstr "Uporaba: %s [OPCIJA]... [URL]...\n"
+
+#: src/main.c:109
+#, c-format
+msgid "GNU Wget %s, a non-interactive network retriever.\n"
+msgstr "GNU Wget %s, alat za neinteraktivno skidanje preko mre¾e.\n"
+
+#. Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+#. don't bitch. Also, it makes translation much easier.
+#: src/main.c:114
+msgid ""
+"\n"
+"Mandatory arguments to long options are mandatory for short options too.\n"
+"\n"
+msgstr ""
+"\n"
+"Ako duga opcija zahtijeva argument, tada to vrijedi i za kratku.\n"
+"\n"
+
+#: src/main.c:117
+msgid ""
+"Startup:\n"
+" -V, --version display the version of Wget and exit.\n"
+" -h, --help print this help.\n"
+" -b, --background go to background after startup.\n"
+" -e, --execute=COMMAND execute a `.wgetrc' command.\n"
+"\n"
+msgstr ""
+"Pokretanje:\n"
+" -V, --version prika¾i verziju Wget-a i izaði.\n"
+" -h, --help ispi¹i pomoæ.\n"
+" -b, --background radi u pozadini nakon pokretanja.\n"
+" -e, --execute=NAREDBA izvr¹i naredbu `.wgetrc'-a.\n"
+"\n"
+
+#: src/main.c:123
+msgid ""
+"Logging and input file:\n"
+" -o, --output-file=FILE log messages to FILE.\n"
+" -a, --append-output=FILE append messages to FILE.\n"
+" -d, --debug print debug output.\n"
+" -q, --quiet quiet (no output).\n"
+" -v, --verbose be verbose (this is the default).\n"
+" -nv, --non-verbose turn off verboseness, without being quiet.\n"
+" -i, --input-file=FILE read URL-s from file.\n"
+" -F, --force-html treat input file as HTML.\n"
+"\n"
+msgstr ""
+"Zapisivanje i ulazna datoteka:\n"
+" -o, --output-file=DATOTEKA spremaj poruke u DATOTEKU.\n"
+" -a, --append-output=DATOTEKA dodaj poruke u DATOTEKU.\n"
+" -d, --debug ispisuj debug izlaz.\n"
+" -q, --quiet ti¹ina (bez ispisa).\n"
+" -v, --verbose ukljuèi puni ispis (podrazumijeva se).\n"
+" -nv, --non-verbose iskljuèi veæinu ispisa.\n"
+" -i, --input-file=DATOTEKA èitaj URL-ove iz DATOTEKE.\n"
+" -F, --force-html tretiraj ulaznu datoteku kao HTML.\n"
+"\n"
+
+#: src/main.c:133
+msgid ""
+"Download:\n"
+" -t, --tries=NUMBER set number of retries to NUMBER (0 "
+"unlimits).\n"
+" -O --output-document=FILE write documents to FILE.\n"
+" -nc, --no-clobber don't clobber existing files.\n"
+" -c, --continue restart getting an existing file.\n"
+" --dot-style=STYLE set retrieval display style.\n"
+" -N, --timestamping don't retrieve files if older than local.\n"
+" -S, --server-response print server response.\n"
+" --spider don't download anything.\n"
+" -T, --timeout=SECONDS set the read timeout to SECONDS.\n"
+" -w, --wait=SECONDS wait SECONDS between retrievals.\n"
+" -Y, --proxy=on/off turn proxy on or off.\n"
+" -Q, --quota=NUMBER set retrieval quota to NUMBER.\n"
+"\n"
+msgstr ""
+"Skidanje:\n"
+"  -t, --tries=BROJ              postavi broj poku¹aja na BROJ (0 je beskonaèno).\n"
+" -O --output-document=DATOTEKA pi¹i dokumente u DATOTEKU.\n"
+" -nc, --no-clobber nemoj prebrisati postojeæe datoteke.\n"
+"  -c, --continue                nastavi skidanje postojeæe datoteke.\n"
+" --dot-style=STIL postavi stil prikaza skidanja.\n"
+" -N, --timestamping ne skidaj datoteke starije od lokalnih.\n"
+" -S, --server-response ispisuj poslu¾iteljev odaziv.\n"
+" --spider ni¹ta ne skidaj.\n"
+" -T, --timeout=SEKUNDE postavi timeout èitanja na SEKUNDE.\n"
+" -w, --wait=SEKUNDE èekaj SEKUNDE izmeðu skidanja.\n"
+" -Y, --proxy=on/off ukljuèi ili iskljuèi proxy.\n"
+" -Q, --quota=BROJ postavi ogranièenje skidanja na BROJ.\n"
+"\n"
+
+#: src/main.c:147
+msgid ""
+"Directories:\n"
+" -nd --no-directories don't create directories.\n"
+" -x, --force-directories force creation of directories.\n"
+" -nH, --no-host-directories don't create host directories.\n"
+" -P, --directory-prefix=PREFIX save files to PREFIX/...\n"
+" --cut-dirs=NUMBER ignore NUMBER remote directory "
+"components.\n"
+"\n"
+msgstr ""
+"Direktoriji:\n"
+" -nd --no-directories ne stvaraj direktorije.\n"
+" -x, --force-directories uvijek stvaraj direktorije.\n"
+" -nH, --no-host-directories ne stvaraj direktorije po raèunalima.\n"
+" -P, --directory-prefix=PREFIKS snimaj datoteke u PREFIKS/...\n"
+" --cut-dirs=BROJ ignoriraj BROJ stranih direktorija.\n"
+"\n"
+
+#: src/main.c:154
+msgid ""
+"HTTP options:\n"
+" --http-user=USER set http user to USER.\n"
+" --http-passwd=PASS set http password to PASS.\n"
+" -C, --cache=on/off (dis)allow server-cached data (normally "
+"allowed).\n"
+" --ignore-length ignore `Content-Length' header field.\n"
+" --header=STRING insert STRING among the headers.\n"
+" --proxy-user=USER set USER as proxy username.\n"
+" --proxy-passwd=PASS set PASS as proxy password.\n"
+" -s, --save-headers save the HTTP headers to file.\n"
+" -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n"
+"\n"
+msgstr ""
+"HTTP opcije:\n"
+" --http-user=KORISNIK postavi HTTP korisnika na KORISNIK.\n"
+" --http-passwd=ZAPORKA postavi HTTP zaporku na ZAPORKA.\n"
+" -C, --cache=on/off dozvoli ili zabrani ke¹iranje na "
+"poslu¾itelju\n"
+" (obièno dozvoljeno).\n"
+" --ignore-length ignoriraj `Content-Length' zaglavlje.\n"
+" --header=STRING umetni STRING meðu zaglavlja.\n"
+" --proxy-user=KORISNIK postavi KORISNIKA kao proxy korisnika\n"
+" --proxy-passwd=ZAPORKA postavi proxy zaporku na ZAPORKU.\n"
+" -s, --save-headers snimaj HTTP zaglavlja na disk.\n"
+" -U, --user-agent=KLIJENT identificiraj se kao KLIJENT umjesto\n"
+" Wget/VERZIJA.\n"
+"\n"
+
+#: src/main.c:165
+msgid ""
+"FTP options:\n"
+" --retr-symlinks retrieve FTP symbolic links.\n"
+" -g, --glob=on/off turn file name globbing on or off.\n"
+" --passive-ftp use the \"passive\" transfer mode.\n"
+"\n"
+msgstr ""
+"FTP opcije:\n"
+" --retr-symlinks skidaj FTP simbolièke linkove.\n"
+" -g, --glob=on/off ukljuèi ili iskljuèi globbing.\n"
+" --passive-ftp koristi \"pasivni\" mod prijenosa.\n"
+"\n"
+
+#: src/main.c:170
+msgid ""
+"Recursive retrieval:\n"
+" -r, --recursive recursive web-suck -- use with care!.\n"
+" -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n"
+" --delete-after delete downloaded files.\n"
+" -k, --convert-links convert non-relative links to relative.\n"
+" -m, --mirror turn on options suitable for mirroring.\n"
+" -nr, --dont-remove-listing don't remove `.listing' files.\n"
+"\n"
+msgstr ""
+"Rekurzivno skidanje:\n"
+" -r, --recursive rekurzivno skidanje -- koristi pa¾ljivo!\n"
+" -l, --level=NUMBER maksimalna dubina rekurzije (0 za "
+"beskonaènu).\n"
+" --delete-after bri¹i skinute datoteke.\n"
+" -k, --convert-links konvertiraj apsolutne linkove u relativne.\n"
+" -m, --mirror ukljuèi opcije pogodne za \"mirror\".\n"
+" -nr, --dont-remove-listing ne uklanjaj `.listing' datoteke.\n"
+"\n"
+
+#: src/main.c:178
+msgid ""
+"Recursive accept/reject:\n"
+" -A, --accept=LIST list of accepted extensions.\n"
+" -R, --reject=LIST list of rejected extensions.\n"
+" -D, --domains=LIST list of accepted domains.\n"
+" --exclude-domains=LIST comma-separated list of rejected "
+"domains.\n"
+" -L, --relative follow relative links only.\n"
+" --follow-ftp follow FTP links from HTML documents.\n"
+" -H, --span-hosts go to foreign hosts when recursive.\n"
+" -I, --include-directories=LIST list of allowed directories.\n"
+" -X, --exclude-directories=LIST list of excluded directories.\n"
+" -nh, --no-host-lookup don't DNS-lookup hosts.\n"
+" -np, --no-parent don't ascend to the parent directory.\n"
+"\n"
+msgstr ""
+"Rekurzivno prihvaæanje/odbijanje:\n"
+" -A, --accept=POPIS popis prihvaæenih nastavaka.\n"
+" -R, --reject=POPIS popis odbijenih nastavaka.\n"
+" -D, --domains=POPIS popis prihvaæenih domena.\n"
+" --exclude-domains=POPIS zarezom odvojen popis odbijenih "
+"domena.\n"
+" -L, --relative prati samo relativne linkove.\n"
+" --follow-ftp prati FTP linkove iz HTML dokumenata.\n"
+" -H, --span-hosts idi na strana raèunala pri rekurzivnom\n"
+" skidanju.\n"
+" -I, --include-directories=POPIS popis dozvoljenih direktorija.\n"
+" -X, --exclude-directories=POPIS popis nedozvoljenih direktorija.\n"
+" -nh, --no-host-lookup nemoj pregledavati hostove DNS-om.\n"
+" -np, --no-parent ne idi u direktorij vi¹e.\n"
+"\n"
+
+#: src/main.c:191
+msgid "Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"
+msgstr "©aljite izvje¹taje o bugovima i prijedloge na <bug-wget@gnu.org>.\n"
+
+#: src/main.c:347
+#, c-format
+msgid "%s: debug support not compiled in.\n"
+msgstr "%s: podr¹ka za debugiranje nije ugraðena.\n"
+
+#: src/main.c:395
+msgid ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"This program is distributed in the hope that it will be useful,\n"
+"but WITHOUT ANY WARRANTY; without even the implied warranty of\n"
+"MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n"
+"GNU General Public License for more details.\n"
+msgstr ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"Sva prava zadr¾ana. Ovaj program distribuira se u nadi da æe biti\n"
+"koristan, ali BEZ IKAKVOG JAMSTVA; bez èak i impliciranog jamstva\n"
+"PROIZVODNOSTI ili UPOTREBLJIVOSTI ZA ODREÐENU SVRHU. Pogledajte GNU\n"
+"General Public License za vi¹e detalja.\n"
+
+#: src/main.c:401
+msgid ""
+"\n"
+"Written by Hrvoje Niksic <hniksic@srce.hr>.\n"
+msgstr ""
+"\n"
+"Napisao Hrvoje Nik¹iæ <hniksic@srce.hr>.\n"
+
+#: src/main.c:465
+#, c-format
+msgid "%s: %s: invalid command\n"
+msgstr "%s: %s: nedozvoljena naredba\n"
+
+#: src/main.c:515
+#, c-format
+msgid "%s: illegal option -- `-n%c'\n"
+msgstr "%s: nedozvoljena opcija -- `-n%c'\n"
+
+#. #### Something nicer should be printed here -- similar to the
+#. pre-1.5 `--help' page.
+#: src/main.c:518 src/main.c:560 src/main.c:591
+#, c-format
+msgid "Try `%s --help' for more options.\n"
+msgstr "Poku¹ajte `%s --help' za vi¹e opcija.\n"
+
+#: src/main.c:571
+msgid "Can't be verbose and quiet at the same time.\n"
+msgstr "Ne mogu istovremeno biti verbozan i tih.\n"
+
+#: src/main.c:577
+msgid "Can't timestamp and not clobber old files at the same time.\n"
+msgstr "Ne mogu istovremeno paziti na vrijeme i ne gaziti stare datoteke.\n"
+
+#. No URL specified.
+#: src/main.c:586
+#, c-format
+msgid "%s: missing URL\n"
+msgstr "%s: nedostaje URL\n"
+
+#: src/main.c:674
+#, c-format
+msgid "No URLs found in %s.\n"
+msgstr "Nijedan URL nije pronaðen u %s.\n"
+
+#: src/main.c:683
+#, c-format
+msgid ""
+"\n"
+"FINISHED --%s--\n"
+"Downloaded: %s bytes in %d files\n"
+msgstr ""
+"\n"
+"ZAVR©IO --%s--\n"
+"Skinuo: %s bajta u %d datoteka\n"
+
+#: src/main.c:688
+#, c-format
+msgid "Download quota (%s bytes) EXCEEDED!\n"
+msgstr "Kvota (%s bajtova) je PREKORAÈENA!\n"
+
+#. Please note that the double `%' in `%%s' is intentional, because
+#. redirect_output passes tmp through printf.
+#: src/main.c:715
+msgid "%s received, redirecting output to `%%s'.\n"
+msgstr "%s primljen, usmjeravam izlaz na `%%s'.\n"
+
+#: src/mswindows.c:118
+#, c-format
+msgid ""
+"\n"
+"CTRL+Break received, redirecting output to `%s'.\n"
+"Execution continued in background.\n"
+"You may stop Wget by pressing CTRL+ALT+DELETE.\n"
+msgstr ""
+"\n"
+"CTRL+Break je pritisnut, usmjeravam izlaz u `%s'.\n"
+"Izvr¹avanje se nastavlja u pozadini.\n"
+"Mo¾ete prekinuti Wget pritiskom na CTRL+ALT+DELETE.\n"
+
+#. parent, no error
+#: src/mswindows.c:135 src/utils.c:268
+msgid "Continuing in background.\n"
+msgstr "Nastavljam u pozadini.\n"
+
+#: src/mswindows.c:137 src/utils.c:270
+#, c-format
+msgid "Output will be written to `%s'.\n"
+msgstr "Izlaz se sprema u `%s'.\n"
+
+#: src/mswindows.c:227
+#, c-format
+msgid "Starting WinHelp %s\n"
+msgstr "Pokreæem WinHelp %s\n"
+
+#: src/mswindows.c:254 src/mswindows.c:262
+#, c-format
+msgid "%s: Couldn't find usable socket driver.\n"
+msgstr "%s: Ne mogu naæi upotrebljiv driver za sockete.\n"
+
+#: src/netrc.c:334
+#, c-format
+msgid "%s: %s:%d: warning: \"%s\" token appears before any machine name\n"
+msgstr ""
+"%s: %s:%d: upozorenje: \"%s\" token se pojavljuje prije naziva stroja\n"
+
+#: src/netrc.c:365
+#, c-format
+msgid "%s: %s:%d: unknown token \"%s\"\n"
+msgstr "%s: %s:%d: nepoznat token \"%s\"\n"
+
+#: src/netrc.c:429
+#, c-format
+msgid "Usage: %s NETRC [HOSTNAME]\n"
+msgstr "Uporaba: %s NETRC [RAÈUNALO]\n"
+
+#: src/netrc.c:439
+#, c-format
+msgid "%s: cannot stat %s: %s\n"
+msgstr "%s: ne mogu stat-irati %s: %s\n"
+
+#: src/recur.c:449 src/retr.c:462
+#, c-format
+msgid "Removing %s.\n"
+msgstr "Uklanjam %s.\n"
+
+#: src/recur.c:450
+#, c-format
+msgid "Removing %s since it should be rejected.\n"
+msgstr "Uklanjam %s buduæi da bi ga trebalo odbiti.\n"
+
+#: src/recur.c:609
+msgid "Loading robots.txt; please ignore errors.\n"
+msgstr "Uèitavam robots.txt; molim ne obazirati se na gre¹ke.\n"
+
+#: src/retr.c:193
+#, c-format
+msgid ""
+"\n"
+" [ skipping %dK ]"
+msgstr ""
+"\n"
+" [ preskaèem %dK ]"
+
+#: src/retr.c:344
+msgid "Could not find proxy host.\n"
+msgstr "Ne mogu naæi proxy raèunalo.\n"
+
+#: src/retr.c:355
+#, c-format
+msgid "Proxy %s: Must be HTTP.\n"
+msgstr "Proxy %s: Mora biti HTTP.\n"
+
+#: src/retr.c:398
+#, c-format
+msgid "%s: Redirection to itself.\n"
+msgstr "%s: Redirekcija na samog sebe.\n"
+
+#: src/retr.c:483
+msgid ""
+"Giving up.\n"
+"\n"
+msgstr ""
+"Odustajem.\n"
+"\n"
+
+#: src/retr.c:483
+msgid ""
+"Retrying.\n"
+"\n"
+msgstr ""
+"Poku¹avam ponovo.\n"
+"\n"
+
+#: src/url.c:940
+#, c-format
+msgid "Error (%s): Link %s without a base provided.\n"
+msgstr "Gre¹ka (%s): Zadan je link %s bez osnove.\n"
+
+#: src/url.c:955
+#, c-format
+msgid "Error (%s): Base %s relative, without referer URL.\n"
+msgstr "Gre¹ka (%s): Baza %s je relativna, bez referirajuæeg URL-a.\n"
+
+#: src/url.c:1373
+#, c-format
+msgid "Converting %s... "
+msgstr "Konvertiram %s... "
+
+#: src/url.c:1378 src/url.c:1389
+#, c-format
+msgid "Cannot convert links in %s: %s\n"
+msgstr "Ne mogu konvertirati linkove u %s: %s\n"
+
+#: src/utils.c:71
+#, c-format
+msgid "%s: %s: Not enough memory.\n"
+msgstr "%s: %s: Nema dovoljno memorije.\n"
+
+#: src/utils.c:203
+msgid "Unknown/unsupported protocol"
+msgstr "Nepoznat/nepodr¾an protokol"
+
+#: src/utils.c:206
+msgid "Invalid port specification"
+msgstr "Pogre¹na specifikacija porta"
+
+#: src/utils.c:209
+msgid "Invalid host name"
+msgstr "Pogre¹an naziv raèunala"
+
+#: src/utils.c:430
+#, c-format
+msgid "Failed to unlink symlink `%s': %s\n"
+msgstr "Ne mogu izbrisati link `%s': %s\n"
--- /dev/null
+# Italian messages for GNU Wget
+# Copyright (C) 1998 Free Software Foundation, Inc.
+# Giovanni Bortolozzo <borto@dei.unipd.it>, 1998
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: wget 1.5.2-b1\n"
+"POT-Creation-Date: 1998-09-21 19:08+0200\n"
+"PO-Revision-Date: 1998-06-13 15:22+02:00\n"
+"Last-Translator: Giovanni Bortolozzo <borto@dei.unipd.it>\n"
+"Language-Team: Italian <it@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=iso-8859-1\n"
+"Content-Transfer-Encoding: 8bit\n"
+
+#. Login to the server:
+#. First: Establish the control connection.
+#: src/ftp.c:147 src/http.c:346
+#, c-format
+msgid "Connecting to %s:%hu... "
+msgstr "Mi sto connettendo a %s:%hu... "
+
+#: src/ftp.c:169 src/ftp.c:411 src/http.c:363
+#, c-format
+msgid "Connection to %s:%hu refused.\n"
+msgstr "Connessione a %s:%hu rifiutata.\n"
+
+#. Second: Login with proper USER/PASS sequence.
+#: src/ftp.c:190 src/http.c:374
+msgid "connected!\n"
+msgstr "connesso!\n"
+
+#: src/ftp.c:191
+#, c-format
+msgid "Logging in as %s ... "
+msgstr "Accesso come utente %s ... "
+
+#: src/ftp.c:200 src/ftp.c:253 src/ftp.c:301 src/ftp.c:353 src/ftp.c:447
+#: src/ftp.c:520 src/ftp.c:568 src/ftp.c:616
+msgid "Error in server response, closing control connection.\n"
+msgstr ""
+"Errore nella risposta del server, chiudo la connessione di controllo.\n"
+
+#: src/ftp.c:208
+msgid "Error in server greeting.\n"
+msgstr "Errore nel codice di benvenuto del server.\n"
+
+#: src/ftp.c:216 src/ftp.c:262 src/ftp.c:310 src/ftp.c:362 src/ftp.c:457
+#: src/ftp.c:530 src/ftp.c:578 src/ftp.c:626
+msgid "Write failed, closing control connection.\n"
+msgstr "Errore in scrittura, chiudo la connessione di controllo.\n"
+
+#: src/ftp.c:223
+msgid "The server refuses login.\n"
+msgstr "Il server rifiuta il login.\n"
+
+#: src/ftp.c:230
+msgid "Login incorrect.\n"
+msgstr "Login non corretto.\n"
+
+#: src/ftp.c:237
+msgid "Logged in!\n"
+msgstr "Login eseguito!\n"
+
+#: src/ftp.c:270
+#, c-format
+msgid "Unknown type `%c', closing control connection.\n"
+msgstr "Tipo `%c' sconosciuto, chiudo la connessione di controllo.\n"
+
+#: src/ftp.c:283
+msgid "done. "
+msgstr "fatto. "
+
+#: src/ftp.c:289
+msgid "==> CWD not needed.\n"
+msgstr "==> CWD non necessaria.\n"
+
+#: src/ftp.c:317
+#, c-format
+msgid ""
+"No such directory `%s'.\n"
+"\n"
+msgstr ""
+"La directory `%s' non esiste.\n"
+"\n"
+
+#: src/ftp.c:331 src/ftp.c:599 src/ftp.c:647 src/url.c:1431
+msgid "done.\n"
+msgstr "fatto.\n"
+
+#. do not CWD
+#: src/ftp.c:335
+msgid "==> CWD not required.\n"
+msgstr "==> CWD non necessaria.\n"
+
+#: src/ftp.c:369
+msgid "Cannot initiate PASV transfer.\n"
+msgstr "Non riesco ad inizializzare il trasferimento PASV.\n"
+
+#: src/ftp.c:373
+msgid "Cannot parse PASV response.\n"
+msgstr "Non riesco a comprendere la risposta PASV.\n"
+
+#: src/ftp.c:387
+#, c-format
+msgid "Will try connecting to %s:%hu.\n"
+msgstr "Proverò a connettermi a %s:%hu.\n"
+
+#: src/ftp.c:432 src/ftp.c:504 src/ftp.c:548
+msgid "done. "
+msgstr "fatto. "
+
+#: src/ftp.c:474
+#, c-format
+msgid "Bind error (%s).\n"
+msgstr "Errore di bind (%s).\n"
+
+#: src/ftp.c:490
+msgid "Invalid PORT.\n"
+msgstr "PORT non valido.\n"
+
+#: src/ftp.c:537
+msgid ""
+"\n"
+"REST failed, starting from scratch.\n"
+msgstr ""
+"\n"
+"REST fallito, ricomincio dall'inizio.\n"
+
+#: src/ftp.c:586
+#, c-format
+msgid ""
+"No such file `%s'.\n"
+"\n"
+msgstr ""
+"Il file `%s' non esiste.\n"
+"\n"
+
+#: src/ftp.c:634
+#, c-format
+msgid ""
+"No such file or directory `%s'.\n"
+"\n"
+msgstr ""
+"Il file o la directory `%s' non esiste.\n"
+"\n"
+
+#: src/ftp.c:692 src/ftp.c:699
+#, c-format
+msgid "Length: %s"
+msgstr "Lunghezza: %s"
+
+#: src/ftp.c:694 src/ftp.c:701
+#, c-format
+msgid " [%s to go]"
+msgstr " [%s alla fine]"
+
+#: src/ftp.c:703
+msgid " (unauthoritative)\n"
+msgstr " (non autorevole)\n"
+
+#: src/ftp.c:721
+#, c-format
+msgid "%s: %s, closing control connection.\n"
+msgstr "%s: %s, chiudo la connessione di controllo.\n"
+
+#: src/ftp.c:729
+#, c-format
+msgid "%s (%s) - Data connection: %s; "
+msgstr "%s (%s) - Connessione dati: %s; "
+
+#: src/ftp.c:746
+msgid "Control connection closed.\n"
+msgstr "Connessione di controllo chiusa.\n"
+
+#: src/ftp.c:764
+msgid "Data transfer aborted.\n"
+msgstr "Trasferimento dati abortito.\n"
+
+#: src/ftp.c:830
+#, c-format
+msgid "File `%s' already there, not retrieving.\n"
+msgstr "Il file `%s' è già presente, non lo scarico.\n"
+
+#: src/ftp.c:896 src/http.c:922
+#, c-format
+msgid "(try:%2d)"
+msgstr "(provo:%2d)"
+
+#: src/ftp.c:955 src/http.c:1116
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' salvato [%ld]\n"
+"\n"
+
+#: src/ftp.c:1001
+#, c-format
+msgid "Using `%s' as listing tmp file.\n"
+msgstr "Utilizzo `%s' come file temporaneo per il listing.\n"
+
+#: src/ftp.c:1013
+#, c-format
+msgid "Removed `%s'.\n"
+msgstr "`%s' rimosso.\n"
+
+#: src/ftp.c:1049
+#, c-format
+msgid "Recursion depth %d exceeded max. depth %d.\n"
+msgstr "La profondità di ricorsione %d eccede il massimo (%d).\n"
+
+#: src/ftp.c:1096 src/http.c:1054
+#, c-format
+msgid ""
+"Local file `%s' is more recent, not retrieving.\n"
+"\n"
+msgstr ""
+"Il file locale `%s' è più recente, non lo scarico.\n"
+"\n"
+
+#: src/ftp.c:1102 src/http.c:1060
+#, c-format
+msgid "The sizes do not match (local %ld), retrieving.\n"
+msgstr "Le dimensioni non coincidono (locale %ld), lo scarico.\n"
+
+#: src/ftp.c:1119
+msgid "Invalid name of the symlink, skipping.\n"
+msgstr "Il nome del link simbolico non è valido, passo oltre.\n"
+
+#: src/ftp.c:1136
+#, c-format
+msgid ""
+"Already have correct symlink %s -> %s\n"
+"\n"
+msgstr ""
+"Ho già il link simbolico %s -> %s\n"
+"\n"
+
+#: src/ftp.c:1144
+#, c-format
+msgid "Creating symlink %s -> %s\n"
+msgstr "Creo il link simbolico %s -> %s\n"
+
+#: src/ftp.c:1155
+#, c-format
+msgid "Symlinks not supported, skipping symlink `%s'.\n"
+msgstr "Link simbolici non supportati, ignoro il link `%s'.\n"
+
+#: src/ftp.c:1167
+#, c-format
+msgid "Skipping directory `%s'.\n"
+msgstr "Ignoro la directory `%s'.\n"
+
+#: src/ftp.c:1176
+#, c-format
+msgid "%s: unknown/unsupported file type.\n"
+msgstr "%s: tipo di file sconosciuto/non supportato.\n"
+
+#: src/ftp.c:1193
+#, c-format
+msgid "%s: corrupt time-stamp.\n"
+msgstr "%s: time-stamp corrotto.\n"
+
+#: src/ftp.c:1213
+#, c-format
+msgid "Will not retrieve dirs since depth is %d (max %d).\n"
+msgstr "Non scarico le directory perché la profondità è %d (max %d).\n"
+
+#: src/ftp.c:1252
+#, c-format
+msgid "Not descending to `%s' as it is excluded/not-included.\n"
+msgstr "Non scendo nella directory `%s' perché è esclusa/non inclusa.\n"
+
+#: src/ftp.c:1297
+#, c-format
+msgid "Rejecting `%s'.\n"
+msgstr "Rifiuto `%s'.\n"
+
+#. No luck.
+#. #### This message SUCKS. We should see what was the
+#. reason that nothing was retrieved.
+#: src/ftp.c:1344
+#, c-format
+msgid "No matches on pattern `%s'.\n"
+msgstr "Nessuna corrispondenza con il modello `%s'.\n"
+
+#: src/ftp.c:1404
+#, c-format
+msgid "Wrote HTML-ized index to `%s' [%ld].\n"
+msgstr "Indice in formato HTML scritto in `%s' [%ld].\n"
+
+#: src/ftp.c:1409
+#, c-format
+msgid "Wrote HTML-ized index to `%s'.\n"
+msgstr "Indice in formato HTML scritto in `%s'.\n"
+
+#: src/getopt.c:454
+#, c-format
+msgid "%s: option `%s' is ambiguous\n"
+msgstr "%s: l'opzione `%s' è ambigua\n"
+
+#: src/getopt.c:478
+#, c-format
+msgid "%s: option `--%s' doesn't allow an argument\n"
+msgstr "%s: l'opzione `--%s' non ammette argomenti\n"
+
+#: src/getopt.c:483
+#, c-format
+msgid "%s: option `%c%s' doesn't allow an argument\n"
+msgstr "%s: l'opzione `%c%s' non ammette argomenti\n"
+
+#: src/getopt.c:498
+#, c-format
+msgid "%s: option `%s' requires an argument\n"
+msgstr "%s: l'opzione `%s' richiede un argomento\n"
+
+#. --option
+#: src/getopt.c:528
+#, c-format
+msgid "%s: unrecognized option `--%s'\n"
+msgstr "%s: opzione non riconosciuta `--%s'\n"
+
+#. +option or -option
+#: src/getopt.c:532
+#, c-format
+msgid "%s: unrecognized option `%c%s'\n"
+msgstr "%s: opzione non riconosciuta `%c%s'\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:563
+#, c-format
+msgid "%s: illegal option -- %c\n"
+msgstr "%s: opzione illegale -- %c\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:602
+#, c-format
+msgid "%s: option requires an argument -- %c\n"
+msgstr "%s: l'opzione richiede un argomento -- %c\n"
+
+#: src/host.c:432
+#, c-format
+msgid "%s: Cannot determine user-id.\n"
+msgstr "%s: Impossibile determinare lo user-id.\n"
+
+#: src/host.c:444
+#, c-format
+msgid "%s: Warning: uname failed: %s\n"
+msgstr "%s: Attenzione: uname fallita: %s\n"
+
+#: src/host.c:456
+#, c-format
+msgid "%s: Warning: gethostname failed\n"
+msgstr "%s: Attenzione: gethostname fallita\n"
+
+#: src/host.c:484
+#, c-format
+msgid "%s: Warning: cannot determine local IP address.\n"
+msgstr "%s: Attenzione: impossibile determinare l'indirizzo IP locale.\n"
+
+#: src/host.c:498
+#, c-format
+msgid "%s: Warning: cannot reverse-lookup local IP address.\n"
+msgstr ""
+"%s: Attenzione: impossibile fare la risoluzione inversa dell'indirizzo\n"
+" IP locale.\n"
+
+#. This gets ticked pretty often. Karl Berry reports
+#. that there can be valid reasons for the local host
+#. name not to be an FQDN, so I've decided to remove the
+#. annoying warning.
+#: src/host.c:511
+#, c-format
+msgid "%s: Warning: reverse-lookup of local address did not yield FQDN!\n"
+msgstr ""
+"%s: Attenzione: la risoluzione inversa dell'indirizzo locale non ha\n"
+" prodotto un FQDN!\n"
+
+#: src/host.c:539
+msgid "Host not found"
+msgstr "Host non trovato"
+
+#: src/host.c:541
+msgid "Unknown error"
+msgstr "Errore sconosciuto"
+
+#: src/html.c:439 src/html.c:441
+#, c-format
+msgid "Index of /%s on %s:%d"
+msgstr "Indice della directory /%s su %s:%d"
+
+#: src/html.c:463
+msgid "time unknown "
+msgstr "data sconosciuta "
+
+#: src/html.c:467
+msgid "File "
+msgstr "File "
+
+#: src/html.c:470
+msgid "Directory "
+msgstr "Directory "
+
+#: src/html.c:473
+msgid "Link "
+msgstr "Link "
+
+#: src/html.c:476
+msgid "Not sure "
+msgstr "Incerto "
+
+#: src/html.c:494
+#, c-format
+msgid " (%s bytes)"
+msgstr " (%s byte)"
+
+#: src/http.c:492
+msgid "Failed writing HTTP request.\n"
+msgstr "Non riesco a scrivere la richiesta HTTP.\n"
+
+#: src/http.c:497
+#, c-format
+msgid "%s request sent, awaiting response... "
+msgstr "%s richiesta inviata, aspetto la risposta... "
+
+#: src/http.c:536
+msgid "End of file while parsing headers.\n"
+msgstr "Raggiunta la fine del file durante l'analisi degli header.\n"
+
+#: src/http.c:547
+#, c-format
+msgid "Read error (%s) in headers.\n"
+msgstr "Errore di lettura degli header (%s).\n"
+
+#: src/http.c:587
+msgid "No data received"
+msgstr "Nessun dato ricevuto"
+
+#: src/http.c:589
+msgid "Malformed status line"
+msgstr "Riga di stato malformata"
+
+#: src/http.c:594
+msgid "(no description)"
+msgstr "(nessuna descrizione)"
+
+#. If we have tried it already, then there is not point
+#. retrying it.
+#: src/http.c:678
+msgid "Authorization failed.\n"
+msgstr "Autorizzazione fallita.\n"
+
+#: src/http.c:685
+msgid "Unknown authentication scheme.\n"
+msgstr "Schema di autenticazione sconosciuto.\n"
+
+#: src/http.c:748
+#, c-format
+msgid "Location: %s%s\n"
+msgstr "Location: %s%s\n"
+
+#: src/http.c:749 src/http.c:774
+msgid "unspecified"
+msgstr "non specificato"
+
+#: src/http.c:750
+msgid " [following]"
+msgstr " [segue]"
+
+#. No need to print this output if the body won't be
+#. downloaded at all, or if the original server response is
+#. printed.
+#: src/http.c:764
+msgid "Length: "
+msgstr "Lunghezza: "
+
+#: src/http.c:769
+#, c-format
+msgid " (%s to go)"
+msgstr " (%s per finire)"
+
+#: src/http.c:774
+msgid "ignored"
+msgstr "ignorato"
+
+#: src/http.c:857
+msgid "Warning: wildcards not supported in HTTP.\n"
+msgstr "Attenzione: le wildcard non sono supportate in HTTP.\n"
+
+#. If opt.noclobber is turned on and file already exists, do not
+#. retrieve the file
+#: src/http.c:872
+#, c-format
+msgid "File `%s' already there, will not retrieve.\n"
+msgstr "Il file `%s' è già presente, non lo scarico.\n"
+
+#: src/http.c:978
+#, c-format
+msgid "Cannot write to `%s' (%s).\n"
+msgstr "Non riesco a scrivere in `%s' (%s).\n"
+
+#: src/http.c:988
+#, c-format
+msgid "ERROR: Redirection (%d) without location.\n"
+msgstr "ERRORE: Redirezione (%d) senza posizione.\n"
+
+#: src/http.c:1011
+#, c-format
+msgid "%s ERROR %d: %s.\n"
+msgstr "%s ERRORE %d: %s.\n"
+
+#: src/http.c:1023
+msgid "Last-modified header missing -- time-stamps turned off.\n"
+msgstr "Manca l'header last-modified -- date disattivate.\n"
+
+#: src/http.c:1031
+msgid "Last-modified header invalid -- time-stamp ignored.\n"
+msgstr "Header last-modified non valido -- data ignorata.\n"
+
+#: src/http.c:1064
+msgid "Remote file is newer, retrieving.\n"
+msgstr "Il file remoto è più recente, lo scarico.\n"
+
+#: src/http.c:1098
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' salvato [%ld/%ld]\n"
+"\n"
+
+#: src/http.c:1130
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld. "
+msgstr "%s (%s) - Connessione chiusa al byte %ld. "
+
+#: src/http.c:1138
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld])\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' salvato [%ld/%ld])\n"
+"\n"
+
+#: src/http.c:1150
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld/%ld. "
+msgstr "%s (%s) - Connessione chiusa al byte %ld/%ld. "
+
+#: src/http.c:1161
+#, c-format
+msgid "%s (%s) - Read error at byte %ld (%s)."
+msgstr "%s (%s) - Errore di lettura al byte %ld (%s). "
+
+#: src/http.c:1169
+#, c-format
+msgid "%s (%s) - Read error at byte %ld/%ld (%s). "
+msgstr "%s (%s) - Errore di lettura al byte %ld/%ld (%s). "
+
+#: src/init.c:312 src/netrc.c:250
+#, c-format
+msgid "%s: Cannot read %s (%s).\n"
+msgstr "%s: Impossibile leggere %s (%s).\n"
+
+#: src/init.c:333 src/init.c:339
+#, c-format
+msgid "%s: Error in %s at line %d.\n"
+msgstr "%s: Errore in %s alla linea %d.\n"
+
+#: src/init.c:370
+#, c-format
+msgid "%s: Warning: Both system and user wgetrc point to `%s'.\n"
+msgstr ""
+"%s: Attenzione: Sia il wgetrc di sistema che quello personale puntano\n"
+" a `%s'.\n"
+
+#: src/init.c:458
+#, c-format
+msgid "%s: BUG: unknown command `%s', value `%s'.\n"
+msgstr "%s: BUG: comando `%s' sconosciuto, valore `%s'.\n"
+
+#: src/init.c:485
+#, c-format
+msgid "%s: %s: Please specify on or off.\n"
+msgstr "%s: %s: Specificare on oppure off.\n"
+
+#: src/init.c:503 src/init.c:760 src/init.c:782 src/init.c:855
+#, c-format
+msgid "%s: %s: Invalid specification `%s'.\n"
+msgstr "%s: %s: Specificazione non valida `%s'.\n"
+
+#: src/init.c:616 src/init.c:638 src/init.c:660 src/init.c:686
+#, c-format
+msgid "%s: Invalid specification `%s'\n"
+msgstr "%s: Specificazione non valida `%s'\n"
+
+#: src/main.c:101
+#, c-format
+msgid "Usage: %s [OPTION]... [URL]...\n"
+msgstr "Uso: %s [OPZIONE]... [URL]...\n"
+
+#: src/main.c:109
+#, c-format
+msgid "GNU Wget %s, a non-interactive network retriever.\n"
+msgstr ""
+"GNU Wget %s, un programma non interattivo per scaricare file dalla rete.\n"
+
+#. Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+#. don't bitch. Also, it makes translation much easier.
+#: src/main.c:114
+msgid ""
+"\n"
+"Mandatory arguments to long options are mandatory for short options too.\n"
+"\n"
+msgstr ""
+"\n"
+"Gli argomenti obbligatori per le opzioni lunghe lo sono anche per quelle\n"
+"corte.\n"
+
+#: src/main.c:117
+msgid ""
+"Startup:\n"
+" -V, --version display the version of Wget and exit.\n"
+" -h, --help print this help.\n"
+" -b, --background go to background after startup.\n"
+" -e, --execute=COMMAND execute a `.wgetrc' command.\n"
+"\n"
+msgstr ""
+"Avvio:\n"
+" -V, --version mostra la versione di Wget ed esce.\n"
+" -h, --help mostra questo aiuto.\n"
+" -b, --background va in background dopo l'avvio.\n"
+" -e, --execute=COMANDO esegue un comando `.wgetrc'.\n"
+"\n"
+
+#: src/main.c:123
+msgid ""
+"Logging and input file:\n"
+" -o, --output-file=FILE log messages to FILE.\n"
+" -a, --append-output=FILE append messages to FILE.\n"
+" -d, --debug print debug output.\n"
+" -q, --quiet quiet (no output).\n"
+" -v, --verbose be verbose (this is the default).\n"
+" -nv, --non-verbose turn off verboseness, without being quiet.\n"
+" -i, --input-file=FILE read URL-s from file.\n"
+" -F, --force-html treat input file as HTML.\n"
+"\n"
+msgstr ""
+"File di log e d'ingresso:\n"
+" -o, --output-file=FILE registra i messaggi su FILE.\n"
+" -a, --append-output=FILE accoda i messaggi a FILE.\n"
+" -d, --debug mostra l'output di debug.\n"
+" -q, --quiet silenzioso (nessun output).\n"
+" -v, --verbose prolisso (questo è il comportamento\n"
+" predefinito).\n"
+" -nv, --non-verbose meno prolisso, senza diventare silenzioso.\n"
+" -i, --input-file=FILE legge gli URL da FILE.\n"
+" -F, --force-html tratta il file di input come HTML.\n"
+"\n"
+
+#: src/main.c:133
+msgid ""
+"Download:\n"
+" -t, --tries=NUMBER set number of retries to NUMBER (0 "
+"unlimits).\n"
+" -O --output-document=FILE write documents to FILE.\n"
+" -nc, --no-clobber don't clobber existing files.\n"
+" -c, --continue restart getting an existing file.\n"
+" --dot-style=STYLE set retrieval display style.\n"
+" -N, --timestamping don't retrieve files if older than local.\n"
+" -S, --server-response print server response.\n"
+" --spider don't download anything.\n"
+" -T, --timeout=SECONDS set the read timeout to SECONDS.\n"
+" -w, --wait=SECONDS wait SECONDS between retrievals.\n"
+" -Y, --proxy=on/off turn proxy on or off.\n"
+" -Q, --quota=NUMBER set retrieval quota to NUMBER.\n"
+"\n"
+msgstr ""
+"Download:\n"
+" -t, --tries=NUMERO imposta il numero di tentativi a NUMERO\n"
+" (0 = illimitati)\n"
+" -O --output-document=FILE scrive l'output su FILE.\n"
+" -nc, --no-clobber non sovrascrive i file già esistenti.\n"
+" -c, --continue riprende a scaricare un file già esistente.\n"
+" --dot-style=STILE imposta lo stile di visualizzazione dello\n"
+" scaricamento.\n"
+" -N, --timestamping non scarica i file se sono più vecchi di\n"
+" quelli locali.\n"
+" -S, --server-response mostra le risposte del server.\n"
+" --spider non scarica niente.\n"
+" -T, --timeout=SECONDI imposta il timeout di lettura a SECONDI.\n"
+" -w, --wait=SECONDI aspetta SECONDI tra i vari scarichi.\n"
+" -Y, --proxy=on/off attiva o disabilita l'uso del proxy.\n"
+" -Q, --quota=NUMERO imposta la quota di scarico a NUMERO.\n"
+"\n"
+
+#: src/main.c:147
+msgid ""
+"Directories:\n"
+" -nd --no-directories don't create directories.\n"
+" -x, --force-directories force creation of directories.\n"
+" -nH, --no-host-directories don't create host directories.\n"
+" -P, --directory-prefix=PREFIX save files to PREFIX/...\n"
+" --cut-dirs=NUMBER ignore NUMBER remote directory "
+"components.\n"
+"\n"
+msgstr ""
+"Directory:\n"
+" -nd --no-directories non crea directory.\n"
+" -x, --force-directories forza la creazione delle directory.\n"
+" -nH, --no-host-directories non crea directory sull'host.\n"
+" -P, --directory-prefix=PREFISSO salva i file in PREFISSO/...\n"
+" --cut-dirs=NUMERO ignora NUMERO componenti delle\n"
+" directory remote.\n"
+"\n"
+
+#: src/main.c:154
+msgid ""
+"HTTP options:\n"
+" --http-user=USER set http user to USER.\n"
+" --http-passwd=PASS set http password to PASS.\n"
+" -C, --cache=on/off (dis)allow server-cached data (normally "
+"allowed).\n"
+" --ignore-length ignore `Content-Length' header field.\n"
+" --header=STRING insert STRING among the headers.\n"
+" --proxy-user=USER set USER as proxy username.\n"
+" --proxy-passwd=PASS set PASS as proxy password.\n"
+" -s, --save-headers save the HTTP headers to file.\n"
+" -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n"
+"\n"
+msgstr ""
+"Opzioni HTTP:\n"
+" --http-user=UTENTE imposta l'utente http a UTENTE.\n"
+" --http-passwd=PASS Imposta la password http a PASS.\n"
+" -C, --cache=on/off permette o non permette la cache dei dati sul\n"
+" server (normalmente permessa).\n"
+" --ignore-length ignora il campo `Content-Length' degli header.\n"
+" --header=STRINGA inserisce STRINGA tra gli header.\n"
+" --proxy-user=UTENTE usa UTENTE come nome utente per il proxy.\n"
+" --proxy-passwd=PASS usa PASS come password per il proxy.\n"
+" -s, --save-headers salva gli header HTTP sul file.\n"
+" -U, --user-agent=AGENT si identifica come AGENT invece che come\n"
+" Wget/VERSIONE.\n"
+"\n"
+
+#: src/main.c:165
+msgid ""
+"FTP options:\n"
+" --retr-symlinks retrieve FTP symbolic links.\n"
+" -g, --glob=on/off turn file name globbing on or off.\n"
+" --passive-ftp use the \"passive\" transfer mode.\n"
+"\n"
+msgstr ""
+"Opzioni FTP:\n"
+" --retr-symlinks scarica i link simbolici FTP.\n"
+" -g, --glob=on/off abilita o disabilita il file name globbing.\n"
+" --passive-ftp usa il modo di trasferimento \"passivo\".\n"
+"\n"
+
+#: src/main.c:170
+msgid ""
+"Recursive retrieval:\n"
+" -r, --recursive recursive web-suck -- use with care!.\n"
+" -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n"
+" --delete-after delete downloaded files.\n"
+" -k, --convert-links convert non-relative links to relative.\n"
+" -m, --mirror turn on options suitable for mirroring.\n"
+" -nr, --dont-remove-listing don't remove `.listing' files.\n"
+"\n"
+msgstr ""
+"Scarico ricorsivo:\n"
+" -r, --recursive web-suck ricorsivo -- usare con cautela!\n"
+" -l, --level=NUMERO profondità massima di ricorsione\n"
+" (0 = illimitata).\n"
+" --delete-after cancella i file scaricati.\n"
+" -k, --convert-links converte i link non relativi in relativi.\n"
+" -m, --mirror abilita le opzioni adatte per il mirroring.\n"
+" -nr, --dont-remove-listing non rimuove i file `.listing'.\n"
+"\n"
+
+#: src/main.c:178
+msgid ""
+"Recursive accept/reject:\n"
+" -A, --accept=LIST list of accepted extensions.\n"
+" -R, --reject=LIST list of rejected extensions.\n"
+" -D, --domains=LIST list of accepted domains.\n"
+" --exclude-domains=LIST comma-separated list of rejected "
+"domains.\n"
+" -L, --relative follow relative links only.\n"
+" --follow-ftp follow FTP links from HTML documents.\n"
+" -H, --span-hosts go to foreign hosts when recursive.\n"
+" -I, --include-directories=LIST list of allowed directories.\n"
+" -X, --exclude-directories=LIST list of excluded directories.\n"
+" -nh, --no-host-lookup don't DNS-lookup hosts.\n"
+" -np, --no-parent don't ascend to the parent directory.\n"
+"\n"
+msgstr ""
+"Accetto/rifiuto ricorsivo:\n"
+" -A, --accept=LISTA lista di estensioni accettate.\n"
+" -R, --reject=LISTA lista di estensioni rifiutate.\n"
+" -D, --domains=LISTA lista di domini accettati.\n"
+" --exclude-domains=LISTA lista separata da virgole di domini\n"
+" rifiutati\n"
+" -L, --relative segue solo i link relativi.\n"
+" --follow-ftp segue i link FTP dai documenti HTML.\n"
+" -H, --span-hosts in modo ricorsivo passa anche ad altri\n"
+" host\n"
+" -I, --include-directories=LISTA lista di directory permesse.\n"
+" -X, --exclude-directories=LISTA lista di directory escluse.\n"
+" -nh, --no-host-lookup non effettua la risoluzione DNS degli\n"
+" host.\n"
+" -np, --no-parent non risale alla directory genitrice.\n"
+"\n"
+
+#: src/main.c:191
+msgid "Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"
+msgstr "Inviare segnalazioni di bug e suggerimenti a <bug-wget@gnu.org>.\n"
+
+#: src/main.c:347
+#, c-format
+msgid "%s: debug support not compiled in.\n"
+msgstr ""
+"%s: supporto per il debug non attivato in fase di compilazione.\n"
+
+#: src/main.c:395
+msgid ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"This program is distributed in the hope that it will be useful,\n"
+"but WITHOUT ANY WARRANTY; without even the implied warranty of\n"
+"MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n"
+"GNU General Public License for more details.\n"
+msgstr ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"Questo programma è distribuito nella speranza che possa essere utile,\n"
+"ma SENZA ALCUNA GARANZIA; anche senza la garanzia implicita di\n"
+"COMMERCIABILITA` o di ADEGUATEZZA AD UN PARTICOLARE SCOPO. Si consulti\n"
+"la GNU General Public License per maggiori dettagli.\n"
+
+#: src/main.c:401
+msgid ""
+"\n"
+"Written by Hrvoje Niksic <hniksic@srce.hr>.\n"
+msgstr ""
+"\n"
+"Scritto da Hrvoje Niksic <hniksic@srce.hr>.\n"
+
+#: src/main.c:465
+#, c-format
+msgid "%s: %s: invalid command\n"
+msgstr "%s: %s: comando non valido\n"
+
+#: src/main.c:515
+#, c-format
+msgid "%s: illegal option -- `-n%c'\n"
+msgstr "%s: opzione illegale -- `-n%c'\n"
+
+#. #### Something nicer should be printed here -- similar to the
+#. pre-1.5 `--help' page.
+#: src/main.c:518 src/main.c:560 src/main.c:591
+#, c-format
+msgid "Try `%s --help' for more options.\n"
+msgstr "Usare `%s --help' per ulteriori opzioni.\n"
+
+#: src/main.c:571
+msgid "Can't be verbose and quiet at the same time.\n"
+msgstr "Non posso essere prolisso e silenzioso allo stesso tempo.\n"
+
+#: src/main.c:577
+msgid "Can't timestamp and not clobber old files at the same time.\n"
+msgstr ""
+"Non posso impostare le date e contemporaneamente non modificare\n"
+"i vecchi file.\n"
+
+#. No URL specified.
+#: src/main.c:586
+#, c-format
+msgid "%s: missing URL\n"
+msgstr "%s: manca l'URL\n"
+
+#: src/main.c:674
+#, c-format
+msgid "No URLs found in %s.\n"
+msgstr "Non ci sono URL in %s.\n"
+
+#: src/main.c:683
+#, c-format
+msgid ""
+"\n"
+"FINISHED --%s--\n"
+"Downloaded: %s bytes in %d files\n"
+msgstr ""
+"\n"
+"FINITO --%s--\n"
+"Scaricati: %s byte in %d file\n"
+
+#: src/main.c:688
+#, c-format
+msgid "Download quota (%s bytes) EXCEEDED!\n"
+msgstr "Quota per lo scarico (%s byte) SUPERATA!\n"
+
+#. Please note that the double `%' in `%%s' is intentional, because
+#. redirect_output passes tmp through printf.
+#: src/main.c:715
+msgid "%s received, redirecting output to `%%s'.\n"
+msgstr "%s ricevuti, redirigo l'output su `%%s'.\n"
+
+#: src/mswindows.c:118
+#, c-format
+msgid ""
+"\n"
+"CTRL+Break received, redirecting output to `%s'.\n"
+"Execution continued in background.\n"
+"You may stop Wget by pressing CTRL+ALT+DELETE.\n"
+msgstr ""
+"\n"
+"CTRL+Break intercettato, ridirigo l'output su `%s'.\n"
+"L'esecuzione continuerà in background.\n"
+"Wget può essere fermato premendo CTRL+ALT+DELETE.\n"
+
+#. parent, no error
+#: src/mswindows.c:135 src/utils.c:268
+msgid "Continuing in background.\n"
+msgstr "Continuo in background.\n"
+
+#: src/mswindows.c:137 src/utils.c:270
+#, c-format
+msgid "Output will be written to `%s'.\n"
+msgstr "L'output sarà scritto su `%s'.\n"
+
+#: src/mswindows.c:227
+#, c-format
+msgid "Starting WinHelp %s\n"
+msgstr "Avvio WinHelp %s\n"
+
+#: src/mswindows.c:254 src/mswindows.c:262
+#, c-format
+msgid "%s: Couldn't find usable socket driver.\n"
+msgstr "%s: Non riesco a trovare un driver utilizzabile per i socket.\n"
+
+#: src/netrc.c:334
+#, c-format
+msgid "%s: %s:%d: warning: \"%s\" token appears before any machine name\n"
+msgstr ""
+"%s: %s:%d: attenzione: il token \"%s\" appare prima di un nome di macchina\n"
+
+#: src/netrc.c:365
+#, c-format
+msgid "%s: %s:%d: unknown token \"%s\"\n"
+msgstr "%s: %s:%d: token \"%s\" sconosciuto\n"
+
+#: src/netrc.c:429
+#, c-format
+msgid "Usage: %s NETRC [HOSTNAME]\n"
+msgstr "Uso: %s NETRC [HOSTNAME]\n"
+
+#: src/netrc.c:439
+#, c-format
+msgid "%s: cannot stat %s: %s\n"
+msgstr "%s: stat su %s fallita: %s\n"
+
+#: src/recur.c:449 src/retr.c:462
+#, c-format
+msgid "Removing %s.\n"
+msgstr "Rimuovo %s.\n"
+
+#: src/recur.c:450
+#, c-format
+msgid "Removing %s since it should be rejected.\n"
+msgstr "Rimuovo %s poiché deve essere rifiutato.\n"
+
+#: src/recur.c:609
+msgid "Loading robots.txt; please ignore errors.\n"
+msgstr "Carico robots.txt; si ignorino eventuali errori.\n"
+
+#: src/retr.c:193
+#, c-format
+msgid ""
+"\n"
+" [ skipping %dK ]"
+msgstr ""
+"\n"
+" [ salto %dK ]"
+
+#: src/retr.c:344
+msgid "Could not find proxy host.\n"
+msgstr "Non riesco a trovare il proxy host.\n"
+
+#: src/retr.c:355
+#, c-format
+msgid "Proxy %s: Must be HTTP.\n"
+msgstr "Proxy %s: Deve essere HTTP.\n"
+
+#: src/retr.c:398
+#, c-format
+msgid "%s: Redirection to itself.\n"
+msgstr "%s: Redirezione su se stesso.\n"
+
+#: src/retr.c:483
+msgid ""
+"Giving up.\n"
+"\n"
+msgstr ""
+"Rinuncio.\n"
+"\n"
+
+#: src/retr.c:483
+msgid ""
+"Retrying.\n"
+"\n"
+msgstr ""
+"Ritento.\n"
+"\n"
+
+#: src/url.c:940
+#, c-format
+msgid "Error (%s): Link %s without a base provided.\n"
+msgstr "Errore (%s): Link %s fornito senza una base.\n"
+
+#: src/url.c:955
+#, c-format
+msgid "Error (%s): Base %s relative, without referer URL.\n"
+msgstr "Errore (%s): Base %s relativa, senza URL di riferimento.\n"
+
+#: src/url.c:1373
+#, c-format
+msgid "Converting %s... "
+msgstr "Converto %s... "
+
+#: src/url.c:1378 src/url.c:1389
+#, c-format
+msgid "Cannot convert links in %s: %s\n"
+msgstr "Non riesco a convertire i link in %s: %s\n"
+
+#: src/utils.c:71
+#, c-format
+msgid "%s: %s: Not enough memory.\n"
+msgstr "%s: %s: Memoria insufficiente.\n"
+
+#: src/utils.c:203
+msgid "Unknown/unsupported protocol"
+msgstr "Protocollo sconosciuto/non supportato"
+
+#: src/utils.c:206
+msgid "Invalid port specification"
+msgstr "Porta specificata non valida"
+
+#: src/utils.c:209
+msgid "Invalid host name"
+msgstr "Nome host non valido"
+
+#: src/utils.c:430
+#, c-format
+msgid "Failed to unlink symlink `%s': %s\n"
+msgstr "Non riesco a rimuovere il link simbolico `%s': %s\n"
--- /dev/null
+# Norwegian messages for GNU wget (bokmål dialect)
+# Copyright (C) 1998 Free Software Foundation, Inc.
+# Robert Schmidt <rsc@vingmed.no>, 1998.
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: wget 1.5.2-b1\n"
+"POT-Creation-Date: 1998-09-21 19:08+0200\n"
+"PO-Revision-Date: 1998-05-22 09:00+0100\n"
+"Last-Translator: Robert Schmidt <rsc@vingmed.no>\n"
+"Language-Team: Norwegian <no@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=iso-8859-1\n"
+"Content-Transfer-Encoding: 8bit\n"
+
+#. Login to the server:
+#. First: Establish the control connection.
+#: src/ftp.c:147 src/http.c:346
+#, c-format
+msgid "Connecting to %s:%hu... "
+msgstr "Kontakter %s:%hu... "
+
+#: src/ftp.c:169 src/ftp.c:411 src/http.c:363
+#, c-format
+msgid "Connection to %s:%hu refused.\n"
+msgstr "Kontakt med %s:%hu nektet.\n"
+
+#. Second: Login with proper USER/PASS sequence.
+#: src/ftp.c:190 src/http.c:374
+msgid "connected!\n"
+msgstr "kontakt!\n"
+
+#: src/ftp.c:191
+#, c-format
+msgid "Logging in as %s ... "
+msgstr "Logger inn som %s ... "
+
+#: src/ftp.c:200 src/ftp.c:253 src/ftp.c:301 src/ftp.c:353 src/ftp.c:447
+#: src/ftp.c:520 src/ftp.c:568 src/ftp.c:616
+msgid "Error in server response, closing control connection.\n"
+msgstr "Feil i svar fra tjener, lukker kontrollforbindelsen.\n"
+
+#: src/ftp.c:208
+msgid "Error in server greeting.\n"
+msgstr "Feil i melding fra tjener.\n"
+
+#: src/ftp.c:216 src/ftp.c:262 src/ftp.c:310 src/ftp.c:362 src/ftp.c:457
+#: src/ftp.c:530 src/ftp.c:578 src/ftp.c:626
+msgid "Write failed, closing control connection.\n"
+msgstr "Feil ved skriving, lukker kontrollforbindelsen.\n"
+
+#: src/ftp.c:223
+msgid "The server refuses login.\n"
+msgstr "Tjeneren tillater ikke innlogging.\n"
+
+#: src/ftp.c:230
+msgid "Login incorrect.\n"
+msgstr "Feil ved innlogging.\n"
+
+#: src/ftp.c:237
+msgid "Logged in!\n"
+msgstr "Logget inn!\n"
+
+#: src/ftp.c:270
+#, c-format
+msgid "Unknown type `%c', closing control connection.\n"
+msgstr "Ukjent type «%c», lukker kontrollforbindelsen.\n"
+
+#: src/ftp.c:283
+msgid "done. "
+msgstr "OK. "
+
+#: src/ftp.c:289
+msgid "==> CWD not needed.\n"
+msgstr "==> CWD ikke nødvendig.\n"
+
+#: src/ftp.c:317
+#, c-format
+msgid ""
+"No such directory `%s'.\n"
+"\n"
+msgstr ""
+"Ingen katalog ved navn «%s».\n"
+"\n"
+
+#: src/ftp.c:331 src/ftp.c:599 src/ftp.c:647 src/url.c:1431
+msgid "done.\n"
+msgstr "OK.\n"
+
+#. do not CWD
+#: src/ftp.c:335
+msgid "==> CWD not required.\n"
+msgstr "==> CWD ikke nødvendig.\n"
+
+#: src/ftp.c:369
+msgid "Cannot initiate PASV transfer.\n"
+msgstr "Kan ikke sette opp PASV-overføring.\n"
+
+#: src/ftp.c:373
+msgid "Cannot parse PASV response.\n"
+msgstr "Kan ikke tolke PASV-tilbakemelding.\n"
+
+#: src/ftp.c:387
+#, c-format
+msgid "Will try connecting to %s:%hu.\n"
+msgstr "Vil prøve å kontakte %s:%hu.\n"
+
+#: src/ftp.c:432 src/ftp.c:504 src/ftp.c:548
+msgid "done. "
+msgstr "OK. "
+
+#: src/ftp.c:474
+#, c-format
+msgid "Bind error (%s).\n"
+msgstr "Bind-feil (%s).\n"
+
+#: src/ftp.c:490
+msgid "Invalid PORT.\n"
+msgstr "Ugyldig PORT.\n"
+
+#: src/ftp.c:537
+msgid ""
+"\n"
+"REST failed, starting from scratch.\n"
+msgstr ""
+"\n"
+"Feil ved REST, starter fra begynnelsen.\n"
+
+#: src/ftp.c:586
+#, c-format
+msgid ""
+"No such file `%s'.\n"
+"\n"
+msgstr ""
+"Ingen fil ved navn «%s».\n"
+"\n"
+
+#: src/ftp.c:634
+#, c-format
+msgid ""
+"No such file or directory `%s'.\n"
+"\n"
+msgstr ""
+"Ingen fil eller katalog ved navn «%s».\n"
+"\n"
+
+#: src/ftp.c:692 src/ftp.c:699
+#, c-format
+msgid "Length: %s"
+msgstr "Lengde: %s"
+
+#: src/ftp.c:694 src/ftp.c:701
+#, c-format
+msgid " [%s to go]"
+msgstr " [%s igjen]"
+
+#: src/ftp.c:703
+msgid " (unauthoritative)\n"
+msgstr " (ubekreftet)\n"
+
+#: src/ftp.c:721
+#, c-format
+msgid "%s: %s, closing control connection.\n"
+msgstr "%s: %s, lukker kontrollforbindelsen.\n"
+
+#: src/ftp.c:729
+#, c-format
+msgid "%s (%s) - Data connection: %s; "
+msgstr "%s (%s) - dataforbindelse: %s; "
+
+#: src/ftp.c:746
+msgid "Control connection closed.\n"
+msgstr "Kontrollforbindelsen lukket.\n"
+
+#: src/ftp.c:764
+msgid "Data transfer aborted.\n"
+msgstr "Dataoverføring brutt.\n"
+
+#: src/ftp.c:830
+#, c-format
+msgid "File `%s' already there, not retrieving.\n"
+msgstr "Filen «%s» eksisterer allerede, ignoreres.\n"
+
+#: src/ftp.c:896 src/http.c:922
+#, c-format
+msgid "(try:%2d)"
+msgstr "(forsøk:%2d)"
+
+#: src/ftp.c:955 src/http.c:1116
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - «%s» lagret [%ld]\n"
+"\n"
+
+#: src/ftp.c:1001
+#, c-format
+msgid "Using `%s' as listing tmp file.\n"
+msgstr "Bruker «%s» som temporær katalogliste.\n"
+
+#: src/ftp.c:1013
+#, c-format
+msgid "Removed `%s'.\n"
+msgstr "Slettet «%s».\n"
+
+#: src/ftp.c:1049
+#, c-format
+msgid "Recursion depth %d exceeded max. depth %d.\n"
+msgstr "Rekursjonsdybde %d overskred maksimal dybde %d.\n"
+
+#: src/ftp.c:1096 src/http.c:1054
+#, c-format
+msgid ""
+"Local file `%s' is more recent, not retrieving.\n"
+"\n"
+msgstr ""
+"Lokal fil «%s» er samme/nyere, ignoreres.\n"
+"\n"
+
+#: src/ftp.c:1102 src/http.c:1060
+#, c-format
+msgid "The sizes do not match (local %ld), retrieving.\n"
+msgstr "Filstørrelsene er forskjellige (lokal %ld), hentes.\n"
+
+#: src/ftp.c:1119
+msgid "Invalid name of the symlink, skipping.\n"
+msgstr "Ugyldig navn for symbolsk link, ignoreres.\n"
+
+#: src/ftp.c:1136
+#, c-format
+msgid ""
+"Already have correct symlink %s -> %s\n"
+"\n"
+msgstr ""
+"Har allerede gyldig symbolsk link %s -> %s\n"
+"\n"
+
+#: src/ftp.c:1144
+#, c-format
+msgid "Creating symlink %s -> %s\n"
+msgstr "Lager symbolsk link %s -> %s\n"
+
+#: src/ftp.c:1155
+#, c-format
+msgid "Symlinks not supported, skipping symlink `%s'.\n"
+msgstr "Symbolske linker ikke støttet, ignorerer «%s».\n"
+
+#: src/ftp.c:1167
+#, c-format
+msgid "Skipping directory `%s'.\n"
+msgstr "Ignorerer katalog «%s».\n"
+
+#: src/ftp.c:1176
+#, c-format
+msgid "%s: unknown/unsupported file type.\n"
+msgstr "%s: filtypen er ukjent/ikke støttet.\n"
+
+#: src/ftp.c:1193
+#, c-format
+msgid "%s: corrupt time-stamp.\n"
+msgstr "%s: ugyldig tidsstempel.\n"
+
+#: src/ftp.c:1213
+#, c-format
+msgid "Will not retrieve dirs since depth is %d (max %d).\n"
+msgstr "Henter ikke kataloger på dybde %d (max %d).\n"
+
+#: src/ftp.c:1252
+#, c-format
+msgid "Not descending to `%s' as it is excluded/not-included.\n"
+msgstr "Behandler ikke «%s» da det er ekskludert/ikke inkludert.\n"
+
+#: src/ftp.c:1297
+#, c-format
+msgid "Rejecting `%s'.\n"
+msgstr "Ignorerer «%s».\n"
+
+#. No luck.
+#. #### This message SUCKS. We should see what was the
+#. reason that nothing was retrieved.
+#: src/ftp.c:1344
+#, c-format
+msgid "No matches on pattern `%s'.\n"
+msgstr "Ingenting passer med mønsteret «%s».\n"
+
+#: src/ftp.c:1404
+#, c-format
+msgid "Wrote HTML-ized index to `%s' [%ld].\n"
+msgstr "Skrev HTML-formattert indeks til «%s» [%ld].\n"
+
+#: src/ftp.c:1409
+#, c-format
+msgid "Wrote HTML-ized index to `%s'.\n"
+msgstr "Skrev HTML-formattert indeks til «%s».\n"
+
+#: src/getopt.c:454
+#, c-format
+msgid "%s: option `%s' is ambiguous\n"
+msgstr "%s: flagget «%s» er tvetydig\n"
+
+#: src/getopt.c:478
+#, c-format
+msgid "%s: option `--%s' doesn't allow an argument\n"
+msgstr "%s: flagget «--%s» tillater ikke argumenter\n"
+
+#: src/getopt.c:483
+#, c-format
+msgid "%s: option `%c%s' doesn't allow an argument\n"
+msgstr "%s: flagget «%c%s» tillater ikke argumenter\n"
+
+#: src/getopt.c:498
+#, c-format
+msgid "%s: option `%s' requires an argument\n"
+msgstr "%s: flagget «%s» krever et argument\n"
+
+#. --option
+#: src/getopt.c:528
+#, c-format
+msgid "%s: unrecognized option `--%s'\n"
+msgstr "%s: ukjent flagg «--%s»\n"
+
+#. +option or -option
+#: src/getopt.c:532
+#, c-format
+msgid "%s: unrecognized option `%c%s'\n"
+msgstr "%s: ukjent flagg «%c%s»\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:563
+#, c-format
+msgid "%s: illegal option -- %c\n"
+msgstr "%s: ugyldig flagg -- %c\n"
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:602
+#, c-format
+msgid "%s: option requires an argument -- %c\n"
+msgstr "%s: flagget krever et argument -- %c\n"
+
+#: src/host.c:432
+#, c-format
+msgid "%s: Cannot determine user-id.\n"
+msgstr "%s: Fant ikke bruker-ID.\n"
+
+#: src/host.c:444
+#, c-format
+msgid "%s: Warning: uname failed: %s\n"
+msgstr "%s: Advarsel: feil fra «uname»: %s\n"
+
+#: src/host.c:456
+#, c-format
+msgid "%s: Warning: gethostname failed\n"
+msgstr "%s: Advarsel: feil fra «gethostname»\n"
+
+#: src/host.c:484
+#, c-format
+msgid "%s: Warning: cannot determine local IP address.\n"
+msgstr "%s: Advarsel: fant ikke lokal IP-adresse.\n"
+
+#: src/host.c:498
+#, c-format
+msgid "%s: Warning: cannot reverse-lookup local IP address.\n"
+msgstr "%s: Advarsel: feil fra tilbake-oppslag for lokal IP-adresse.\n"
+
+#. This gets ticked pretty often. Karl Berry reports
+#. that there can be valid reasons for the local host
+#. name not to be an FQDN, so I've decided to remove the
+#. annoying warning.
+#: src/host.c:511
+#, c-format
+msgid "%s: Warning: reverse-lookup of local address did not yield FQDN!\n"
+msgstr ""
+"%s: Advarsel: fikk ikke FQDN fra tilbake-oppslag for lokal IP-adresse!\n"
+
+#: src/host.c:539
+msgid "Host not found"
+msgstr "Tjener ikke funnet"
+
+#: src/host.c:541
+msgid "Unknown error"
+msgstr "Ukjent feil"
+
+#: src/html.c:439 src/html.c:441
+#, c-format
+msgid "Index of /%s on %s:%d"
+msgstr "Indeks for /%s på %s:%d"
+
+#: src/html.c:463
+msgid "time unknown "
+msgstr "ukjent tid "
+
+#: src/html.c:467
+msgid "File "
+msgstr "Fil "
+
+#: src/html.c:470
+msgid "Directory "
+msgstr "Katalog "
+
+#: src/html.c:473
+msgid "Link "
+msgstr "Link "
+
+#: src/html.c:476
+msgid "Not sure "
+msgstr "Usikker "
+
+#: src/html.c:494
+#, c-format
+msgid " (%s bytes)"
+msgstr " (%s bytes)"
+
+#: src/http.c:492
+msgid "Failed writing HTTP request.\n"
+msgstr "Feil ved sending av HTTP-forespørsel.\n"
+
+#: src/http.c:497
+#, c-format
+msgid "%s request sent, awaiting response... "
+msgstr "%s forespørsel sendt, venter på svar... "
+
+#: src/http.c:536
+msgid "End of file while parsing headers.\n"
+msgstr "Filslutt funnet ved lesing av topptekster.\n"
+
+#: src/http.c:547
+#, c-format
+msgid "Read error (%s) in headers.\n"
+msgstr "Lesefeil (%s) i topptekster.\n"
+
+#: src/http.c:587
+msgid "No data received"
+msgstr "Ingen data mottatt"
+
+#: src/http.c:589
+msgid "Malformed status line"
+msgstr "Feil i statuslinje"
+
+#: src/http.c:594
+msgid "(no description)"
+msgstr "(ingen beskrivelse)"
+
+#. If we have tried it already, then there is not point
+#. retrying it.
+#: src/http.c:678
+msgid "Authorization failed.\n"
+msgstr "Autorisasjon mislyktes.\n"
+
+#: src/http.c:685
+msgid "Unknown authentication scheme.\n"
+msgstr "Ukjent autorisasjons-protokoll.\n"
+
+#: src/http.c:748
+#, c-format
+msgid "Location: %s%s\n"
+msgstr "Sted: %s%s\n"
+
+#: src/http.c:749 src/http.c:774
+msgid "unspecified"
+msgstr "uspesifisert"
+
+#: src/http.c:750
+msgid " [following]"
+msgstr " [omdirigert]"
+
+#. No need to print this output if the body won't be
+#. downloaded at all, or if the original server response is
+#. printed.
+#: src/http.c:764
+msgid "Length: "
+msgstr "Lengde: "
+
+#: src/http.c:769
+#, c-format
+msgid " (%s to go)"
+msgstr " (%s igjen)"
+
+#: src/http.c:774
+msgid "ignored"
+msgstr "ignoreres"
+
+#: src/http.c:857
+msgid "Warning: wildcards not supported in HTTP.\n"
+msgstr "Advarsel: jokertegn ikke støttet i HTTP.\n"
+
+#. If opt.noclobber is turned on and file already exists, do not
+#. retrieve the file
+#: src/http.c:872
+#, c-format
+msgid "File `%s' already there, will not retrieve.\n"
+msgstr "Filen «%s» hentes ikke, fordi den allerede eksisterer.\n"
+
+#: src/http.c:978
+#, c-format
+msgid "Cannot write to `%s' (%s).\n"
+msgstr "Kan ikke skrive til «%s» (%s).\n"
+
+#: src/http.c:988
+#, c-format
+msgid "ERROR: Redirection (%d) without location.\n"
+msgstr "FEIL: Omdirigering (%d) uten nytt sted.\n"
+
+#: src/http.c:1011
+#, c-format
+msgid "%s ERROR %d: %s.\n"
+msgstr "%s FEIL %d: %s.\n"
+
+#: src/http.c:1023
+msgid "Last-modified header missing -- time-stamps turned off.\n"
+msgstr "Last-modified topptekst mangler -- tidsstempling slås av.\n"
+
+#: src/http.c:1031
+msgid "Last-modified header invalid -- time-stamp ignored.\n"
+msgstr "Last-modified topptekst ugyldig -- tidsstempel ignoreres.\n"
+
+#: src/http.c:1064
+msgid "Remote file is newer, retrieving.\n"
+msgstr "Fil på tjener er nyere - hentes.\n"
+
+#: src/http.c:1098
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - «%s» lagret [%ld/%ld]\n"
+"\n"
+
+#: src/http.c:1130
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld. "
+msgstr "%s (%s) - Forbindelse brutt ved byte %ld. "
+
+#: src/http.c:1138
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld])\n"
+"\n"
+msgstr ""
+"%s (%s) - «%s» lagret [%ld/%ld]\n"
+"\n"
+
+#: src/http.c:1150
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld/%ld. "
+msgstr "%s (%s) - Forbindelse brutt ved byte %ld/%ld. "
+
+#: src/http.c:1161
+#, c-format
+msgid "%s (%s) - Read error at byte %ld (%s)."
+msgstr "%s (%s) - Lesefeil ved byte %ld (%s)."
+
+#: src/http.c:1169
+#, c-format
+msgid "%s (%s) - Read error at byte %ld/%ld (%s). "
+msgstr "%s (%s) - Lesefeil ved byte %ld/%ld (%s). "
+
+#: src/init.c:312 src/netrc.c:250
+#, c-format
+msgid "%s: Cannot read %s (%s).\n"
+msgstr "%s: Kan ikke lese %s (%s).\n"
+
+#: src/init.c:333 src/init.c:339
+#, c-format
+msgid "%s: Error in %s at line %d.\n"
+msgstr "%s: Feil i %s på linje %d.\n"
+
+#: src/init.c:370
+#, c-format
+msgid "%s: Warning: Both system and user wgetrc point to `%s'.\n"
+msgstr "%s: Advarsel: Både systemets og brukerens wgetrc peker til «%s».\n"
+
+#: src/init.c:458
+#, c-format
+msgid "%s: BUG: unknown command `%s', value `%s'.\n"
+msgstr "%s: BUG: ukjent kommando «%s», verdi «%s».\n"
+
+#: src/init.c:485
+#, c-format
+msgid "%s: %s: Please specify on or off.\n"
+msgstr "%s: %s: Vennligst spesifiser «on» eller «off».\n"
+
+#: src/init.c:503 src/init.c:760 src/init.c:782 src/init.c:855
+#, c-format
+msgid "%s: %s: Invalid specification `%s'.\n"
+msgstr "%s: %s: Ugyldig spesifikasjon «%s».\n"
+
+#: src/init.c:616 src/init.c:638 src/init.c:660 src/init.c:686
+#, c-format
+msgid "%s: Invalid specification `%s'\n"
+msgstr "%s: Ugyldig spesifikasjon «%s»\n"
+
+#: src/main.c:101
+#, c-format
+msgid "Usage: %s [OPTION]... [URL]...\n"
+msgstr "Bruk: %s [FLAGG]... [URL]...\n"
+
+#: src/main.c:109
+#, c-format
+msgid "GNU Wget %s, a non-interactive network retriever.\n"
+msgstr "GNU Wget %s, en ikke-interaktiv informasjonsagent.\n"
+
+#. Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+#. don't bitch. Also, it makes translation much easier.
+#: src/main.c:114
+msgid ""
+"\n"
+"Mandatory arguments to long options are mandatory for short options too.\n"
+"\n"
+msgstr ""
+"\n"
+"Obligatoriske argumenter til lange flagg er obligatoriske også \n"
+"for korte.\n"
+"\n"
+
+#: src/main.c:117
+msgid ""
+"Startup:\n"
+" -V, --version display the version of Wget and exit.\n"
+" -h, --help print this help.\n"
+" -b, --background go to background after startup.\n"
+" -e, --execute=COMMAND execute a `.wgetrc' command.\n"
+"\n"
+msgstr ""
+"Oppstart:\n"
+" -V, --version viser Wget's versjonsnummer og avslutter.\n"
+" -h, --help skriver ut denne hjelpeteksten.\n"
+" -b, --background kjører i bakgrunnen etter oppstart.\n"
+" -e, --execute=KOMMANDO utfør en «.wgetrc»-kommando.\n"
+"\n"
+
+#: src/main.c:123
+msgid ""
+"Logging and input file:\n"
+" -o, --output-file=FILE log messages to FILE.\n"
+" -a, --append-output=FILE append messages to FILE.\n"
+" -d, --debug print debug output.\n"
+" -q, --quiet quiet (no output).\n"
+" -v, --verbose be verbose (this is the default).\n"
+" -nv, --non-verbose turn off verboseness, without being quiet.\n"
+" -i, --input-file=FILE read URL-s from file.\n"
+" -F, --force-html treat input file as HTML.\n"
+"\n"
+msgstr ""
+"Utskrifter og innlesing:\n"
+" -o, --output-file=FIL skriv meldinger til ny FIL.\n"
+" -a, --append-output=FIL skriv meldinger på slutten av FIL.\n"
+" -d, --debug skriv avlusingsinformasjon.\n"
+" -q, --quiet stille (ingen utskrifter).\n"
+" -v, --verbose vær utførlig (standard).\n"
+" -nv, --non-verbose mindre utførlig, men ikke stille.\n"
+" -i, --input-file=FIL les URLer fra FIL.\n"
+" -F, --force-html les inn filer som HTML.\n"
+"\n"
+
+#: src/main.c:133
+msgid ""
+"Download:\n"
+" -t, --tries=NUMBER set number of retries to NUMBER (0 "
+"unlimits).\n"
+" -O --output-document=FILE write documents to FILE.\n"
+" -nc, --no-clobber don't clobber existing files.\n"
+" -c, --continue restart getting an existing file.\n"
+" --dot-style=STYLE set retrieval display style.\n"
+" -N, --timestamping don't retrieve files if older than local.\n"
+" -S, --server-response print server response.\n"
+" --spider don't download anything.\n"
+" -T, --timeout=SECONDS set the read timeout to SECONDS.\n"
+" -w, --wait=SECONDS wait SECONDS between retrievals.\n"
+" -Y, --proxy=on/off turn proxy on or off.\n"
+" -Q, --quota=NUMBER set retrieval quota to NUMBER.\n"
+"\n"
+msgstr ""
+"Nedlasting:\n"
+" -t, --tries=ANTALL maksimalt antall forsøk (0 for uendelig).\n"
+" -O --output-document=FIL skriv nedlastede filer til FIL.\n"
+" -nc, --no-clobber ikke berør eksisterende filer.\n"
+" -c, --continue fortsett nedlasting av en eksisterende fil.\n"
+" --dot-style=TYPE velg format for nedlastings-status.\n"
+" -N, --timestamping ikke hent filer som er eldre enn "
+"eksisterende.\n"
+" -S, --server-response vis svar fra tjeneren.\n"
+" --spider ikke hent filer.\n"
+" -T, --timeout=SEKUNDER sett ventetid ved lesing til SEKUNDER.\n"
+" -w, --wait=SEKUNDER sett ventetid mellom filer til SEKUNDER.\n"
+" -Y, --proxy=on/off sett bruk av proxy på eller av.\n"
+" -Q, --quota=ANTALL sett nedlastingskvote til ANTALL.\n"
+"\n"
+
+#: src/main.c:147
+msgid ""
+"Directories:\n"
+" -nd --no-directories don't create directories.\n"
+" -x, --force-directories force creation of directories.\n"
+" -nH, --no-host-directories don't create host directories.\n"
+" -P, --directory-prefix=PREFIX save files to PREFIX/...\n"
+" --cut-dirs=NUMBER ignore NUMBER remote directory "
+"components.\n"
+"\n"
+msgstr ""
+"Kataloger:\n"
+" -nd --no-directories ikke lag kataloger.\n"
+" -x, --force-directories lag kataloger.\n"
+" -nH, --no-host-directories ikke lag ovenstående kataloger.\n"
+" -P, --directory-prefix=PREFIKS skriv filer til PREFIKS/...\n"
+" --cut-dirs=ANTALL ignorer ANTALL komponenter av tjenerens\n"
+" katalognavn.\n"
+"\n"
+
+#: src/main.c:154
+msgid ""
+"HTTP options:\n"
+" --http-user=USER set http user to USER.\n"
+" --http-passwd=PASS set http password to PASS.\n"
+" -C, --cache=on/off (dis)allow server-cached data (normally "
+"allowed).\n"
+" --ignore-length ignore `Content-Length' header field.\n"
+" --header=STRING insert STRING among the headers.\n"
+" --proxy-user=USER set USER as proxy username.\n"
+" --proxy-passwd=PASS set PASS as proxy password.\n"
+" -s, --save-headers save the HTTP headers to file.\n"
+" -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n"
+"\n"
+msgstr ""
+"HTTP-flagg:\n"
+" --http-user=BRUKER sett HTTP-bruker til BRUKER.\n"
+" --http-passwd=PASSORD sett HTTP-passord til PASSORD.\n"
+" -C, --cache=on/off (ikke) tillat bruk av hurtiglager på tjener.\n"
+" --ignore-length ignorer «Content-Length» felt i topptekst.\n"
+" --header=TEKST sett TEKST inn som en topptekst.\n"
+" --proxy-user=BRUKER sett proxy-bruker til BRUKER.\n"
+" --proxy-passwd=PASSORD sett proxy-passord til PASSORD.\n"
+" -s, --save-headers skriv HTTP-topptekster til fil.\n"
+" -U, --user-agent=AGENT identifiser som AGENT i stedet for \n"
+" «Wget/VERSJON».\n"
+"\n"
+
+#: src/main.c:165
+msgid ""
+"FTP options:\n"
+" --retr-symlinks retrieve FTP symbolic links.\n"
+" -g, --glob=on/off turn file name globbing on or off.\n"
+" --passive-ftp use the \"passive\" transfer mode.\n"
+"\n"
+msgstr ""
+"FTP-flagg:\n"
+" --retr-symlinks hent symbolske linker via FTP.\n"
+" -g, --glob=on/off (ikke) tolk bruk av jokertegn i filnavn.\n"
+" --passive-ftp bruk passiv overføringsmodus.\n"
+"\n"
+
+#: src/main.c:170
+msgid ""
+"Recursive retrieval:\n"
+" -r, --recursive recursive web-suck -- use with care!.\n"
+" -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n"
+" --delete-after delete downloaded files.\n"
+" -k, --convert-links convert non-relative links to relative.\n"
+" -m, --mirror turn on options suitable for mirroring.\n"
+" -nr, --dont-remove-listing don't remove `.listing' files.\n"
+"\n"
+msgstr ""
+"Rekursiv nedlasting:\n"
+" -r, --recursive tillat rekursiv nedlasting -- bruk med "
+"omtanke!\n"
+" -l, --level=ANTALL maksimalt antall rekursjonsnivåer "
+"(0=uendelig).\n"
+" --delete-after slett nedlastede filer.\n"
+" -k, --convert-links konverter absolutte linker til relative.\n"
+" -m, --mirror sett passende flagg for speiling av tjenere.\n"
+" -nr, --dont-remove-listing ikke slett «.listing»-filer.\n"
+"\n"
+
+#: src/main.c:178
+msgid ""
+"Recursive accept/reject:\n"
+" -A, --accept=LIST list of accepted extensions.\n"
+" -R, --reject=LIST list of rejected extensions.\n"
+" -D, --domains=LIST list of accepted domains.\n"
+" --exclude-domains=LIST comma-separated list of rejected "
+"domains.\n"
+" -L, --relative follow relative links only.\n"
+" --follow-ftp follow FTP links from HTML documents.\n"
+" -H, --span-hosts go to foreign hosts when recursive.\n"
+" -I, --include-directories=LIST list of allowed directories.\n"
+" -X, --exclude-directories=LIST list of excluded directories.\n"
+" -nh, --no-host-lookup don't DNS-lookup hosts.\n"
+" -np, --no-parent don't ascend to the parent directory.\n"
+"\n"
+msgstr ""
+"Hva er tillatt ved rekursjon:\n"
+" -A, --accept=LISTE liste med tillatte filtyper.\n"
+" -R, --reject=LISTE liste med ikke tillatte filtyper.\n"
+" -D, --domains=LISTE liste med tillatte domener.\n"
+" --exclude-domains=LISTE liste med ikke tillatte domener.\n"
+" -L, --relative følg kun relative linker.\n"
+" --follow-ftp følg FTP-linker fra HTML-dokumenter.\n"
+" -H, --span-hosts følg linker til andre tjenere.\n"
+" -I, --include-directories=LISTE liste med tillatte katalognavn.\n"
+" -X, --exclude-directories=LISTE liste med ikke tillatte katalognavn.\n"
+" -nh, --no-host-lookup ikke let etter tjenernavn via DNS.\n"
+" -np, --no-parent ikke følg linker til ovenstående "
+"katalog.\n"
+"\n"
+
+#: src/main.c:191
+msgid "Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"
+msgstr "Rapportér feil og send forslag til <bug-wget@gnu.org>.\n"
+
+#: src/main.c:347
+#, c-format
+msgid "%s: debug support not compiled in.\n"
+msgstr "%s: støtte for avlusing ikke inkludert ved kompilering.\n"
+
+#: src/main.c:395
+msgid ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"This program is distributed in the hope that it will be useful,\n"
+"but WITHOUT ANY WARRANTY; without even the implied warranty of\n"
+"MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n"
+"GNU General Public License for more details.\n"
+msgstr ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"Dette programmet distribueres i håp om at det blir funnet nyttig,\n"
+"men UTEN NOEN GARANTIER; ikke en gang for SALGBARHET eller\n"
+"EGNETHET TIL NOEN SPESIELL OPPGAVE.\n"
+"Se «GNU General Public License» for detaljer.\n"
+
+#: src/main.c:401
+msgid ""
+"\n"
+"Written by Hrvoje Niksic <hniksic@srce.hr>.\n"
+msgstr ""
+"\n"
+"Skrevet av Hrvoje Niksic <hniksic@srce.hr>.\n"
+
+#: src/main.c:465
+#, c-format
+msgid "%s: %s: invalid command\n"
+msgstr "%s: %s: ugyldig kommando\n"
+
+#: src/main.c:515
+#, c-format
+msgid "%s: illegal option -- `-n%c'\n"
+msgstr "%s: ugyldig flagg -- «-n%c»\n"
+
+#. #### Something nicer should be printed here -- similar to the
+#. pre-1.5 `--help' page.
+#: src/main.c:518 src/main.c:560 src/main.c:591
+#, c-format
+msgid "Try `%s --help' for more options.\n"
+msgstr "Prøv «%s --help» for flere flagg.\n"
+
+#: src/main.c:571
+msgid "Can't be verbose and quiet at the same time.\n"
+msgstr "Kan ikke være utførlig og stille på samme tid.\n"
+
+#: src/main.c:577
+msgid "Can't timestamp and not clobber old files at the same time.\n"
+msgstr ""
+"Kan ikke tidsstemple og la være å berøre eksisterende filer på samme tid.\n"
+
+#. No URL specified.
+#: src/main.c:586
+#, c-format
+msgid "%s: missing URL\n"
+msgstr "%s: URL mangler\n"
+
+#: src/main.c:674
+#, c-format
+msgid "No URLs found in %s.\n"
+msgstr "Fant ingen URLer i %s.\n"
+
+#: src/main.c:683
+#, c-format
+msgid ""
+"\n"
+"FINISHED --%s--\n"
+"Downloaded: %s bytes in %d files\n"
+msgstr ""
+"\n"
+"FERDIG --%s--\n"
+"Lastet ned %s bytes i %d filer\n"
+
+#: src/main.c:688
+#, c-format
+msgid "Download quota (%s bytes) EXCEEDED!\n"
+msgstr "Nedlastingskvote (%s bytes) overskredet!\n"
+
+#. Please note that the double `%' in `%%s' is intentional, because
+#. redirect_output passes tmp through printf.
+#: src/main.c:715
+msgid "%s received, redirecting output to `%%s'.\n"
+msgstr "%s mottatt, omdirigerer utskrifter til «%%s».\n"
+
+#: src/mswindows.c:118
+#, c-format
+msgid ""
+"\n"
+"CTRL+Break received, redirecting output to `%s'.\n"
+"Execution continued in background.\n"
+"You may stop Wget by pressing CTRL+ALT+DELETE.\n"
+msgstr ""
+"\n"
+"CTRL+Break mottatt, omdirigerer utskrifter til «%s».\n"
+"Kjøring fortsetter i bakgrunnen.\n"
+"Du kan stoppe Wget ved å trykke CTRL+ALT+DELETE.\n"
+
+#. parent, no error
+#: src/mswindows.c:135 src/utils.c:268
+msgid "Continuing in background.\n"
+msgstr "Fortsetter i bakgrunnen.\n"
+
+#: src/mswindows.c:137 src/utils.c:270
+#, c-format
+msgid "Output will be written to `%s'.\n"
+msgstr "Utskrifter vil bli skrevet til «%s».\n"
+
+#: src/mswindows.c:227
+#, c-format
+msgid "Starting WinHelp %s\n"
+msgstr "Starter WinHelp %s\n"
+
+#: src/mswindows.c:254 src/mswindows.c:262
+#, c-format
+msgid "%s: Couldn't find usable socket driver.\n"
+msgstr "%s: Fant ingen brukbar socket-driver.\n"
+
+#: src/netrc.c:334
+#, c-format
+msgid "%s: %s:%d: warning: \"%s\" token appears before any machine name\n"
+msgstr "%s: %s:%d: Advarsel: symbolet «%s» funnet før tjener-navn\n"
+
+#: src/netrc.c:365
+#, c-format
+msgid "%s: %s:%d: unknown token \"%s\"\n"
+msgstr "%s: %s:%d: ukjent symbol «%s»\n"
+
+#: src/netrc.c:429
+#, c-format
+msgid "Usage: %s NETRC [HOSTNAME]\n"
+msgstr "Bruk: %s NETRC [TJENERNAVN]\n"
+
+#: src/netrc.c:439
+#, c-format
+msgid "%s: cannot stat %s: %s\n"
+msgstr "%s: «stat» feilet for %s: %s\n"
+
+#: src/recur.c:449 src/retr.c:462
+#, c-format
+msgid "Removing %s.\n"
+msgstr "Fjerner %s.\n"
+
+#: src/recur.c:450
+#, c-format
+msgid "Removing %s since it should be rejected.\n"
+msgstr "Fjerner %s fordi den skal forkastes.\n"
+
+#: src/recur.c:609
+msgid "Loading robots.txt; please ignore errors.\n"
+msgstr "Henter robots.txt; ignorer eventuelle feilmeldinger.\n"
+
+#: src/retr.c:193
+#, c-format
+msgid ""
+"\n"
+" [ skipping %dK ]"
+msgstr ""
+"\n"
+" [ hopper over %dK ]"
+
+#: src/retr.c:344
+msgid "Could not find proxy host.\n"
+msgstr "Fant ikke proxy-tjener.\n"
+
+#: src/retr.c:355
+#, c-format
+msgid "Proxy %s: Must be HTTP.\n"
+msgstr "Proxy %s: Må støtte HTTP.\n"
+
+#: src/retr.c:398
+#, c-format
+msgid "%s: Redirection to itself.\n"
+msgstr "%s: Omdirigerer til seg selv.\n"
+
+#: src/retr.c:483
+msgid ""
+"Giving up.\n"
+"\n"
+msgstr ""
+"Gir opp.\n"
+"\n"
+
+#: src/retr.c:483
+msgid ""
+"Retrying.\n"
+"\n"
+msgstr ""
+"Prøver igjen.\n"
+"\n"
+
+#: src/url.c:940
+#, c-format
+msgid "Error (%s): Link %s without a base provided.\n"
+msgstr "Feil (%s): Link %s gitt uten utgangspunkt.\n"
+
+#: src/url.c:955
+#, c-format
+msgid "Error (%s): Base %s relative, without referer URL.\n"
+msgstr "Feil (%s): Utgangspunktet %s er relativt, ukjent URL som referent.\n"
+
+#: src/url.c:1373
+#, c-format
+msgid "Converting %s... "
+msgstr "Konverterer %s... "
+
+#: src/url.c:1378 src/url.c:1389
+#, c-format
+msgid "Cannot convert links in %s: %s\n"
+msgstr "Kan ikke konvertere linker i %s: %s\n"
+
+#: src/utils.c:71
+#, c-format
+msgid "%s: %s: Not enough memory.\n"
+msgstr "%s: %s: Ikke nok minne.\n"
+
+#: src/utils.c:203
+msgid "Unknown/unsupported protocol"
+msgstr "Protokollen er ukjent/ikke støttet"
+
+#: src/utils.c:206
+msgid "Invalid port specification"
+msgstr "Port-spesifikasjonen er ugyldig"
+
+#: src/utils.c:209
+msgid "Invalid host name"
+msgstr "Tjenernavnet er ugyldig"
+
+#: src/utils.c:430
+#, c-format
+msgid "Failed to unlink symlink `%s': %s\n"
+msgstr "Kan ikke slette den symbolske linken «%s»: %s\n"
--- /dev/null
+# Brazilian Portuguese translation of the "wget" messages
+# Copyright (C) 1998 Free Software Foundation, Inc.
+# Wanderlei Antonio Cavassin <cavassin@conectiva.com.br>, 1998.
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: wget 1.5-b9\n"
+"POT-Creation-Date: 1998-09-21 19:08+0200\n"
+"PO-Revision-Date: 1998-04-06 22:09-0300\n"
+"Last-Translator: Wanderlei Antonio Cavasin <cavassin@conectiva.com.br>\n"
+"Language-Team: Portuguese <pt@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=ISO-8859-1\n"
+"Content-Transfer-Encoding: 8bit\n"
+
+# , c-format
+#. Login to the server:
+#. First: Establish the control connection.
+#: src/ftp.c:147 src/http.c:346
+#, c-format
+msgid "Connecting to %s:%hu... "
+msgstr "Conectando-se a %s:%hu... "
+
+# , c-format
+#: src/ftp.c:169 src/ftp.c:411 src/http.c:363
+#, c-format
+msgid "Connection to %s:%hu refused.\n"
+msgstr "Conexão para %s:%hu recusada.\n"
+
+#. Second: Login with proper USER/PASS sequence.
+#: src/ftp.c:190 src/http.c:374
+msgid "connected!\n"
+msgstr "conectado!\n"
+
+# , c-format
+#: src/ftp.c:191
+#, c-format
+msgid "Logging in as %s ... "
+msgstr "Logando como %s ... "
+
+#: src/ftp.c:200 src/ftp.c:253 src/ftp.c:301 src/ftp.c:353 src/ftp.c:447
+#: src/ftp.c:520 src/ftp.c:568 src/ftp.c:616
+msgid "Error in server response, closing control connection.\n"
+msgstr "Erro na resposta do servidor, fechando a conexão de controle.\n"
+
+#: src/ftp.c:208
+msgid "Error in server greeting.\n"
+msgstr "Erro na saudação do servidor.\n"
+
+#: src/ftp.c:216 src/ftp.c:262 src/ftp.c:310 src/ftp.c:362 src/ftp.c:457
+#: src/ftp.c:530 src/ftp.c:578 src/ftp.c:626
+msgid "Write failed, closing control connection.\n"
+msgstr "Falha de escrita, fechando a conexão de controle.\n"
+
+#: src/ftp.c:223
+msgid "The server refuses login.\n"
+msgstr "O servidor recusou o login.\n"
+
+#: src/ftp.c:230
+msgid "Login incorrect.\n"
+msgstr "Login incorreto.\n"
+
+#: src/ftp.c:237
+msgid "Logged in!\n"
+msgstr "Logado!\n"
+
+# , c-format
+#: src/ftp.c:270
+#, c-format
+msgid "Unknown type `%c', closing control connection.\n"
+msgstr "Tipo `%c' desconhecido, fechando a conexão de controle.\n"
+
+#: src/ftp.c:283
+msgid "done. "
+msgstr "feito. "
+
+#: src/ftp.c:289
+msgid "==> CWD not needed.\n"
+msgstr "==> CWD não necessário.\n"
+
+# , c-format
+#: src/ftp.c:317
+#, c-format
+msgid ""
+"No such directory `%s'.\n"
+"\n"
+msgstr ""
+"Diretório `%s' não encontrado.\n"
+"\n"
+
+#: src/ftp.c:331 src/ftp.c:599 src/ftp.c:647 src/url.c:1431
+msgid "done.\n"
+msgstr "feito.\n"
+
+#. do not CWD
+#: src/ftp.c:335
+msgid "==> CWD not required.\n"
+msgstr "==> CWD não requerido.\n"
+
+#: src/ftp.c:369
+msgid "Cannot initiate PASV transfer.\n"
+msgstr "Não foi possível iniciar transferência PASV.\n"
+
+#: src/ftp.c:373
+msgid "Cannot parse PASV response.\n"
+msgstr "Não foi possível entender resposta do comando PASV.\n"
+
+# , c-format
+#: src/ftp.c:387
+#, c-format
+msgid "Will try connecting to %s:%hu.\n"
+msgstr "Tentando conectar-se a %s:%hu.\n"
+
+#: src/ftp.c:432 src/ftp.c:504 src/ftp.c:548
+msgid "done. "
+msgstr "feito. "
+
+# , c-format
+#: src/ftp.c:474
+#, c-format
+msgid "Bind error (%s).\n"
+msgstr "Erro no bind (%s).\n"
+
+#: src/ftp.c:490
+msgid "Invalid PORT.\n"
+msgstr "PORT inválido.\n"
+
+#: src/ftp.c:537
+msgid ""
+"\n"
+"REST failed, starting from scratch.\n"
+msgstr ""
+"\n"
+"REST falhou, recomeçando do zero.\n"
+
+# , c-format
+#: src/ftp.c:586
+#, c-format
+msgid ""
+"No such file `%s'.\n"
+"\n"
+msgstr ""
+"Arquivo `%s' não encontrado.\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:634
+#, c-format
+msgid ""
+"No such file or directory `%s'.\n"
+"\n"
+msgstr ""
+"Arquivo ou diretório `%s' não encontrado.\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:692 src/ftp.c:699
+#, c-format
+msgid "Length: %s"
+msgstr "Tamanho: %s"
+
+# , c-format
+#: src/ftp.c:694 src/ftp.c:701
+#, c-format
+msgid " [%s to go]"
+msgstr " [%s para terminar]"
+
+#: src/ftp.c:703
+msgid " (unauthoritative)\n"
+msgstr " (não confirmado)\n"
+
+# , c-format
+#: src/ftp.c:721
+#, c-format
+msgid "%s: %s, closing control connection.\n"
+msgstr "%s: %s, fechando conexão de controle.\n"
+
+# , c-format
+#: src/ftp.c:729
+#, c-format
+msgid "%s (%s) - Data connection: %s; "
+msgstr "%s (%s) - Conexão de dados: %s; "
+
+#: src/ftp.c:746
+msgid "Control connection closed.\n"
+msgstr "Conexão de controle fechada.\n"
+
+#: src/ftp.c:764
+msgid "Data transfer aborted.\n"
+msgstr "Transferência dos dados abortada.\n"
+
+# , c-format
+#: src/ftp.c:830
+#, c-format
+msgid "File `%s' already there, not retrieving.\n"
+msgstr "Arquivo `%s' já existente, não será baixado.\n"
+
+# , c-format
+#: src/ftp.c:896 src/http.c:922
+#, c-format
+msgid "(try:%2d)"
+msgstr "(tentativa:%2d)"
+
+# , c-format
+#: src/ftp.c:955 src/http.c:1116
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' recebido [%ld]\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:1001
+#, c-format
+msgid "Using `%s' as listing tmp file.\n"
+msgstr "Usando `%s' como arquivo temporário de listagem.\n"
+
+# , c-format
+#: src/ftp.c:1013
+#, c-format
+msgid "Removed `%s'.\n"
+msgstr "Removido `%s'.\n"
+
+# , c-format
+#: src/ftp.c:1049
+#, c-format
+msgid "Recursion depth %d exceeded max. depth %d.\n"
+msgstr "Nível de recursão %d excede nível máximo %d.\n"
+
+# , c-format
+#: src/ftp.c:1096 src/http.c:1054
+#, c-format
+msgid ""
+"Local file `%s' is more recent, not retrieving.\n"
+"\n"
+msgstr ""
+"Arquivo local `%s' é mais novo, não será baixado.\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:1102 src/http.c:1060
+#, c-format
+msgid "The sizes do not match (local %ld), retrieving.\n"
+msgstr "Os tamanhos não são iguais (local %ld), baixando.\n"
+
+#: src/ftp.c:1119
+msgid "Invalid name of the symlink, skipping.\n"
+msgstr "Nome inválido do link simbólico, ignorando.\n"
+
+# , c-format
+#: src/ftp.c:1136
+#, c-format
+msgid ""
+"Already have correct symlink %s -> %s\n"
+"\n"
+msgstr ""
+"Link simbólico já está correto %s -> %s\n"
+"\n"
+
+# , c-format
+#: src/ftp.c:1144
+#, c-format
+msgid "Creating symlink %s -> %s\n"
+msgstr "Criando link simbólico %s -> %s\n"
+
+# , c-format
+#: src/ftp.c:1155
+#, c-format
+msgid "Symlinks not supported, skipping symlink `%s'.\n"
+msgstr "Links simbólicos não suportados, ignorando link simbólico `%s'.\n"
+
+# , c-format
+#: src/ftp.c:1167
+#, c-format
+msgid "Skipping directory `%s'.\n"
+msgstr "Ignorando diretório `%s'.\n"
+
+# , c-format
+#: src/ftp.c:1176
+#, c-format
+msgid "%s: unknown/unsupported file type.\n"
+msgstr "%s: tipo de arquivo desconhecido/não suportado.\n"
+
+# , c-format
+#: src/ftp.c:1193
+#, c-format
+msgid "%s: corrupt time-stamp.\n"
+msgstr "%s: horário (timestamp) inválido.\n"
+
+# , c-format
+#: src/ftp.c:1213
+#, c-format
+msgid "Will not retrieve dirs since depth is %d (max %d).\n"
+msgstr ""
+"Não serão buscados diretórios, pois o nível de recursão é %d (max %d).\n"
+
+# , c-format
+#: src/ftp.c:1252
+#, c-format
+msgid "Not descending to `%s' as it is excluded/not-included.\n"
+msgstr "Não descendo para `%s', pois está excluído/não incluído.\n"
+
+# , c-format
+#: src/ftp.c:1297
+#, c-format
+msgid "Rejecting `%s'.\n"
+msgstr "Rejeitando `%s'.\n"
+
+# , c-format
+#. No luck.
+#. #### This message SUCKS. We should see what was the
+#. reason that nothing was retrieved.
+#: src/ftp.c:1344
+#, c-format
+msgid "No matches on pattern `%s'.\n"
+msgstr "Nada encontrado com o padrão `%s'.\n"
+
+# , c-format
+#: src/ftp.c:1404
+#, c-format
+msgid "Wrote HTML-ized index to `%s' [%ld].\n"
+msgstr "Escrito índice em formato HTML para `%s' [%ld].\n"
+
+# , c-format
+#: src/ftp.c:1409
+#, c-format
+msgid "Wrote HTML-ized index to `%s'.\n"
+msgstr "Escrito índice em formato HTML para `%s'.\n"
+
+# , c-format
+#: src/getopt.c:454
+#, c-format
+msgid "%s: option `%s' is ambiguous\n"
+msgstr "%s: opção `%s' é ambígua\n"
+
+# , c-format
+#: src/getopt.c:478
+#, c-format
+msgid "%s: option `--%s' doesn't allow an argument\n"
+msgstr "%s: opção `--%s' não permite argumento\n"
+
+# , c-format
+#: src/getopt.c:483
+#, c-format
+msgid "%s: option `%c%s' doesn't allow an argument\n"
+msgstr "%s: opção `%c%s' não permite argumento\n"
+
+# , c-format
+#: src/getopt.c:498
+#, c-format
+msgid "%s: option `%s' requires an argument\n"
+msgstr "%s: opção `%s' requer um argumento\n"
+
+# , c-format
+#. --option
+#: src/getopt.c:528
+#, c-format
+msgid "%s: unrecognized option `--%s'\n"
+msgstr "%s: opção não reconhecida `--%s'\n"
+
+# , c-format
+#. +option or -option
+#: src/getopt.c:532
+#, c-format
+msgid "%s: unrecognized option `%c%s'\n"
+msgstr "%s: opção não reconhecida `%c%s'\n"
+
+# , c-format
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:563
+#, c-format
+msgid "%s: illegal option -- %c\n"
+msgstr "%s: opção ilegal -- %c\n"
+
+# , c-format
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:602
+#, c-format
+msgid "%s: option requires an argument -- %c\n"
+msgstr "%s: opção requer um argumento -- %c\n"
+
+#: src/host.c:432
+#, c-format
+msgid "%s: Cannot determine user-id.\n"
+msgstr "%s: Não foi possível determinar user-id.\n"
+
+# , c-format
+#: src/host.c:444
+#, c-format
+msgid "%s: Warning: uname failed: %s\n"
+msgstr "%s: Aviso: falha em uname: %s\n"
+
+#: src/host.c:456
+#, c-format
+msgid "%s: Warning: gethostname failed\n"
+msgstr "%s: Aviso: falha em gethostname\n"
+
+#: src/host.c:484
+#, c-format
+msgid "%s: Warning: cannot determine local IP address.\n"
+msgstr "%s: Aviso: não foi possível determinar endereço IP local.\n"
+
+#: src/host.c:498
+#, c-format
+msgid "%s: Warning: cannot reverse-lookup local IP address.\n"
+msgstr "%s: Aviso: não foi possível resolver endereço IP local.\n"
+
+#. This gets ticked pretty often. Karl Berry reports
+#. that there can be valid reasons for the local host
+#. name not to be an FQDN, so I've decided to remove the
+#. annoying warning.
+#: src/host.c:511
+#, c-format
+msgid "%s: Warning: reverse-lookup of local address did not yield FQDN!\n"
+msgstr "%s: Aviso: resolução do endereço local não resultou em FQDN!\n"
+
+#: src/host.c:539
+msgid "Host not found"
+msgstr "Host não encontrado"
+
+#: src/host.c:541
+msgid "Unknown error"
+msgstr "Erro desconhecido"
+
+# , c-format
+#: src/html.c:439 src/html.c:441
+#, c-format
+msgid "Index of /%s on %s:%d"
+msgstr "Índice de /%s em %s:%d"
+
+#: src/html.c:463
+msgid "time unknown "
+msgstr "horário desconhecido "
+
+#: src/html.c:467
+msgid "File "
+msgstr "Arquivo "
+
+#: src/html.c:470
+msgid "Directory "
+msgstr "Diretório "
+
+#: src/html.c:473
+msgid "Link "
+msgstr "Link "
+
+#: src/html.c:476
+msgid "Not sure "
+msgstr "Sem certeza "
+
+# , c-format
+#: src/html.c:494
+#, c-format
+msgid " (%s bytes)"
+msgstr " (%s bytes)"
+
+#: src/http.c:492
+msgid "Failed writing HTTP request.\n"
+msgstr "Falha ao escrever a requisição HTTP.\n"
+
+# , c-format
+#: src/http.c:497
+#, c-format
+msgid "%s request sent, awaiting response... "
+msgstr "%s requisição enviada, aguardando resposta... "
+
+#: src/http.c:536
+msgid "End of file while parsing headers.\n"
+msgstr "Fim de arquivo durante a leitura dos headers.\n"
+
+# , c-format
+#: src/http.c:547
+#, c-format
+msgid "Read error (%s) in headers.\n"
+msgstr "Erro de leitura (%s) nos headers.\n"
+
+#: src/http.c:587
+msgid "No data received"
+msgstr "Nenhum dado recebido"
+
+#: src/http.c:589
+msgid "Malformed status line"
+msgstr "Linha de status mal formada"
+
+#: src/http.c:594
+msgid "(no description)"
+msgstr "(sem descrição)"
+
+#. If we have tried it already, then there is not point
+#. retrying it.
+#: src/http.c:678
+msgid "Authorization failed.\n"
+msgstr "Falha na autorização.\n"
+
+#: src/http.c:685
+msgid "Unknown authentication scheme.\n"
+msgstr "Esquema de autenticação desconhecido.\n"
+
+# , c-format
+#: src/http.c:748
+#, c-format
+msgid "Location: %s%s\n"
+msgstr "Localização: %s%s\n"
+
+#: src/http.c:749 src/http.c:774
+msgid "unspecified"
+msgstr "não especificado"
+
+#: src/http.c:750
+msgid " [following]"
+msgstr " [seguindo]"
+
+#. No need to print this output if the body won't be
+#. downloaded at all, or if the original server response is
+#. printed.
+#: src/http.c:764
+msgid "Length: "
+msgstr "Tamanho: "
+
+# , c-format
+#: src/http.c:769
+#, c-format
+msgid " (%s to go)"
+msgstr " (%s para o fim)"
+
+#: src/http.c:774
+msgid "ignored"
+msgstr "ignorado"
+
+#: src/http.c:857
+msgid "Warning: wildcards not supported in HTTP.\n"
+msgstr "Aviso: wildcards não suportados para HTTP.\n"
+
+# , c-format
+#. If opt.noclobber is turned on and file already exists, do not
+#. retrieve the file
+#: src/http.c:872
+#, c-format
+msgid "File `%s' already there, will not retrieve.\n"
+msgstr "Arquivo `%s' já presente, não será baixado.\n"
+
+# , c-format
+#: src/http.c:978
+#, c-format
+msgid "Cannot write to `%s' (%s).\n"
+msgstr "Não foi possível escrever em `%s' (%s).\n"
+
+# , c-format
+#: src/http.c:988
+#, c-format
+msgid "ERROR: Redirection (%d) without location.\n"
+msgstr "ERRO: Redirecionamento (%d) sem Location.\n"
+
+# , c-format
+#: src/http.c:1011
+#, c-format
+msgid "%s ERROR %d: %s.\n"
+msgstr "%s ERRO %d: %s.\n"
+
+#: src/http.c:1023
+msgid "Last-modified header missing -- time-stamps turned off.\n"
+msgstr "Header Last-modified não recebido -- time-stamps desligados.\n"
+
+#: src/http.c:1031
+msgid "Last-modified header invalid -- time-stamp ignored.\n"
+msgstr "Header Last-modified inválido -- time-stamp ignorado.\n"
+
+#: src/http.c:1064
+msgid "Remote file is newer, retrieving.\n"
+msgstr "Arquivo remoto é mais novo, buscando.\n"
+
+# , c-format
+#: src/http.c:1098
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld]\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' recebido [%ld/%ld]\n"
+"\n"
+
+# , c-format
+#: src/http.c:1130
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld. "
+msgstr "%s (%s) - Conexão fechada no byte %ld. "
+
+# , c-format
+#: src/http.c:1138
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld])\n"
+"\n"
+msgstr ""
+"%s (%s) - `%s' recebido [%ld/%ld])\n"
+"\n"
+
+# , c-format
+#: src/http.c:1150
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld/%ld. "
+msgstr "%s (%s) - Conexão fechada no byte %ld/%ld. "
+
+# , c-format
+#: src/http.c:1161
+#, c-format
+msgid "%s (%s) - Read error at byte %ld (%s)."
+msgstr "%s (%s) - Erro de leitura no byte %ld (%s)."
+
+# , c-format
+#: src/http.c:1169
+#, c-format
+msgid "%s (%s) - Read error at byte %ld/%ld (%s). "
+msgstr "%s (%s) - Erro de leitura no byte %ld/%ld (%s). "
+
+# , c-format
+#: src/init.c:312 src/netrc.c:250
+#, c-format
+msgid "%s: Cannot read %s (%s).\n"
+msgstr "%s: Não foi possível ler %s (%s).\n"
+
+# , c-format
+#: src/init.c:333 src/init.c:339
+#, c-format
+msgid "%s: Error in %s at line %d.\n"
+msgstr "%s: Erro em %s na linha %d.\n"
+
+# , c-format
+#: src/init.c:370
+#, c-format
+msgid "%s: Warning: Both system and user wgetrc point to `%s'.\n"
+msgstr ""
+"%s: Aviso: os arquivos wgetrc do sistema e do usuário apontam para `%s'.\n"
+
+# , c-format
+#: src/init.c:458
+#, c-format
+msgid "%s: BUG: unknown command `%s', value `%s'.\n"
+msgstr "%s: BUG: comando desconhecido `%s', valor `%s'.\n"
+
+# , c-format
+#: src/init.c:485
+#, c-format
+msgid "%s: %s: Please specify on or off.\n"
+msgstr "%s: %s: Por favor especifique on ou off.\n"
+
+# , c-format
+#: src/init.c:503 src/init.c:760 src/init.c:782 src/init.c:855
+#, c-format
+msgid "%s: %s: Invalid specification `%s'.\n"
+msgstr "%s: %s: Especificação inválida `%s'.\n"
+
+# , c-format
+#: src/init.c:616 src/init.c:638 src/init.c:660 src/init.c:686
+#, c-format
+msgid "%s: Invalid specification `%s'\n"
+msgstr "%s: Especificação inválida `%s'\n"
+
+# , c-format
+#: src/main.c:101
+#, c-format
+msgid "Usage: %s [OPTION]... [URL]...\n"
+msgstr "Uso: %s [OPÇÃO]... [URL]...\n"
+
+# , c-format
+#: src/main.c:109
+#, c-format
+msgid "GNU Wget %s, a non-interactive network retriever.\n"
+msgstr ""
+"GNU Wget %s, um programa não interativo para buscar arquivos da rede.\n"
+
+#. Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+#. don't bitch. Also, it makes translation much easier.
+#: src/main.c:114
+msgid ""
+"\n"
+"Mandatory arguments to long options are mandatory for short options too.\n"
+"\n"
+msgstr ""
+"\n"
+"Argumentos obrigatórios para opções longas são também\n"
+"obrigatórios para opções curtas.\n"
+"\n"
+
+#: src/main.c:117
+msgid ""
+"Startup:\n"
+" -V, --version display the version of Wget and exit.\n"
+" -h, --help print this help.\n"
+" -b, --background go to background after startup.\n"
+" -e, --execute=COMMAND execute a `.wgetrc' command.\n"
+"\n"
+msgstr ""
+"Início:\n"
+" -V, --version mostra a versão do Wget e sai.\n"
+" -h, --help mostra esta ajuda.\n"
+" -b, --background executa em background.\n"
+" -e, --execute=COMANDO executa um comando `.wgetrc'.\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:123
+msgid ""
+"Logging and input file:\n"
+" -o, --output-file=FILE log messages to FILE.\n"
+" -a, --append-output=FILE append messages to FILE.\n"
+" -d, --debug print debug output.\n"
+" -q, --quiet quiet (no output).\n"
+" -v, --verbose be verbose (this is the default).\n"
+" -nv, --non-verbose turn off verboseness, without being quiet.\n"
+" -i, --input-file=FILE read URL-s from file.\n"
+" -F, --force-html treat input file as HTML.\n"
+"\n"
+msgstr ""
+"Geração de log e arquivo de entrada:\n"
+" -o, --output-file=ARQUIVO mensagens de log para ARQUIVO.\n"
+" -a, --append-output=ARQUIVO acrescenta mensagens em ARQUIVO.\n"
+" -d, --debug mostra saídas de debug.\n"
+" -q, --quiet quieto (sem saídas).\n"
+" -v, --verbose               modo verboso (é o padrão).\n"
+" -nv, --non-verbose desliga modo verboso, sem ser quieto.\n"
+" -i, --input-file=ARQUIVO lê URL-s de ARQUIVO.\n"
+" -F, --force-html trata arquivo de entrada como HTML.\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:133
+msgid ""
+"Download:\n"
+" -t, --tries=NUMBER set number of retries to NUMBER (0 "
+"unlimits).\n"
+" -O --output-document=FILE write documents to FILE.\n"
+" -nc, --no-clobber don't clobber existing files.\n"
+" -c, --continue restart getting an existing file.\n"
+" --dot-style=STYLE set retrieval display style.\n"
+" -N, --timestamping don't retrieve files if older than local.\n"
+" -S, --server-response print server response.\n"
+" --spider don't download anything.\n"
+" -T, --timeout=SECONDS set the read timeout to SECONDS.\n"
+" -w, --wait=SECONDS wait SECONDS between retrievals.\n"
+" -Y, --proxy=on/off turn proxy on or off.\n"
+" -Q, --quota=NUMBER set retrieval quota to NUMBER.\n"
+"\n"
+msgstr ""
+"Download:\n"
+" -t, --tries=NÚMERO configura número de tentativas "
+"(0=infinitas).\n"
+" -O --output-document=ARQUIVO escreve os documentos no ARQUIVO.\n"
+" -nc, --no-clobber não sobrescreve arquivos existentes.\n"
+" -c,  --continue               recomeça a busca de um arquivo existente.\n"
+" --dot-style=ESTILO configura estilo do display de download.\n"
+" -N, --timestamping não busca arquivos mais antigos que os "
+"locais.\n"
+" -S, --server-response mostra respostas do servidor.\n"
+" --spider não baixa nenhum arquivo.\n"
+" -T, --timeout=SEGUNDOS configura o timeout de leitura.\n"
+" -w, --wait=SEGUNDOS espera SEGUNDOS entre buscas de arquivos.\n"
+" -Y, --proxy=on/off liga ou desliga proxy.\n"
+" -Q, --quota=NÚMERO configura quota de recepção.\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:147
+msgid ""
+"Directories:\n"
+" -nd --no-directories don't create directories.\n"
+" -x, --force-directories force creation of directories.\n"
+" -nH, --no-host-directories don't create host directories.\n"
+" -P, --directory-prefix=PREFIX save files to PREFIX/...\n"
+" --cut-dirs=NUMBER ignore NUMBER remote directory "
+"components.\n"
+"\n"
+msgstr ""
+"Diretórios:\n"
+" -nd --no-directories não cria diretórios.\n"
+" -x, --force-directories força a criação de diretórios.\n"
+" -nH, --no-host-directories não cria diretórios com nome do host.\n"
+" -P, --directory-prefix=PREFIXO salva arquivos em PREFIXO/...\n"
+"     --cut-dirs=NÚMERO          ignora NÚMERO componentes dos diretórios\n"
+"                                remotos.\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:154
+msgid ""
+"HTTP options:\n"
+" --http-user=USER set http user to USER.\n"
+" --http-passwd=PASS set http password to PASS.\n"
+" -C, --cache=on/off (dis)allow server-cached data (normally "
+"allowed).\n"
+" --ignore-length ignore `Content-Length' header field.\n"
+" --header=STRING insert STRING among the headers.\n"
+" --proxy-user=USER set USER as proxy username.\n"
+" --proxy-passwd=PASS set PASS as proxy password.\n"
+" -s, --save-headers save the HTTP headers to file.\n"
+" -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n"
+"\n"
+msgstr ""
+"Opções HTTP:\n"
+" --http-user=USUÁRIO configura usuário http.\n"
+" --http-passwd=SENHA configura senha http.\n"
+" -C, --cache=on/off liga/desliga busca de dados do cache\n"
+" (normalmente ligada).\n"
+" --ignore-length ignora o header `Content-Length'.\n"
+" --header=STRING insere STRING entre os headers.\n"
+" --proxy-user=USUÁRIO configura nome do usuário do proxy.\n"
+" --proxy-passwd=SENHA configura a senha do usuário do proxy.\n"
+" -s, --save-headers salva os headers HTTP no arquivo.\n"
+" -U, --user-agent=AGENTE   identifica-se como AGENTE em vez de\n"
+"                           Wget/VERSÃO.\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:165
+msgid ""
+"FTP options:\n"
+" --retr-symlinks retrieve FTP symbolic links.\n"
+" -g, --glob=on/off turn file name globbing on or off.\n"
+" --passive-ftp use the \"passive\" transfer mode.\n"
+"\n"
+msgstr ""
+"Opções FTP:\n"
+" --retr-symlinks busca links simbólicos FTP.\n"
+" -g, --glob=on/off liga/desliga expansão de nomes de arquivos.\n"
+" --passive-ftp usa modo de transferência \"passivo\".\n"
+"\n"
+
+#: src/main.c:170
+msgid ""
+"Recursive retrieval:\n"
+" -r, --recursive recursive web-suck -- use with care!.\n"
+" -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n"
+" --delete-after delete downloaded files.\n"
+" -k, --convert-links convert non-relative links to relative.\n"
+" -m, --mirror turn on options suitable for mirroring.\n"
+" -nr, --dont-remove-listing don't remove `.listing' files.\n"
+"\n"
+msgstr ""
+"Busca recursiva:\n"
+" -r, --recursive busca recursiva -- use com cuidado!.\n"
+" -l, --level=NÚMERO nível máximo de recursão (0 para ilimitado).\n"
+" --delete-after deleta arquivos baixados.\n"
+" -k, --convert-links converte links não relativos para relativos.\n"
+" -m, --mirror liga opções para espelhamento (mirror).\n"
+" -nr, --dont-remove-listing não remove arquivos `.listing'.\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:178
+msgid ""
+"Recursive accept/reject:\n"
+" -A, --accept=LIST list of accepted extensions.\n"
+" -R, --reject=LIST list of rejected extensions.\n"
+" -D, --domains=LIST list of accepted domains.\n"
+" --exclude-domains=LIST comma-separated list of rejected "
+"domains.\n"
+" -L, --relative follow relative links only.\n"
+" --follow-ftp follow FTP links from HTML documents.\n"
+" -H, --span-hosts go to foreign hosts when recursive.\n"
+" -I, --include-directories=LIST list of allowed directories.\n"
+" -X, --exclude-directories=LIST list of excluded directories.\n"
+" -nh, --no-host-lookup don't DNS-lookup hosts.\n"
+" -np, --no-parent don't ascend to the parent directory.\n"
+"\n"
+msgstr ""
+"Aceitação/rejeição recursiva:\n"
+" -A, --accept=LISTA              lista de extensões aceitas.\n"
+" -R, --reject=LISTA              lista de extensões rejeitadas.\n"
+" -D, --domains=LISTA             lista de domínios aceitos.\n"
+"     --exclude-domains=LISTA     lista de domínios rejeitados, separados\n"
+"                                 por vírgulas.\n"
+" -L, --relative                  segue somente links relativos.\n"
+" --follow-ftp segue links FTP em documentos HTML.\n"
+" -H, --span-hosts segue hosts externos quando recursivo.\n"
+" -I, --include-directories=LISTA lista de diretórios permitidos.\n"
+" -X, --exclude-directories=LISTA lista de diretórios excluídos.\n"
+" -nh, --no-host-lookup não faz DNS-lookup dos hosts.\n"
+" -np, --no-parent não sobe para o diretório pai.\n"
+"\n"
+
+# , fuzzy
+#: src/main.c:191
+msgid "Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"
+msgstr "Relatos de bugs e sugestões para <bug-wget@gnu.org>.\n"
+
+# , fuzzy
+#: src/main.c:347
+#, c-format
+msgid "%s: debug support not compiled in.\n"
+msgstr "%s: compilado sem debug.\n"
+
+#: src/main.c:395
+msgid ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"This program is distributed in the hope that it will be useful,\n"
+"but WITHOUT ANY WARRANTY; without even the implied warranty of\n"
+"MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n"
+"GNU General Public License for more details.\n"
+msgstr ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"Este programa é distribuído com o objetivo de que seja útil,\n"
+"mas SEM QUALQUER GARANTIA; nem mesmo a garantia implícita de\n"
+"COMERCIABILIDADE ou de UTILIDADE PARA UM PROPÓSITO PARTICULAR.\n"
+"Veja a Licença Pública Geral GNU (GNU GPL) para mais detalhes.\n"
+
+#: src/main.c:401
+msgid ""
+"\n"
+"Written by Hrvoje Niksic <hniksic@srce.hr>.\n"
+msgstr ""
+"\n"
+"Escrito por Hrvoje Niksic <hniksic@srce.hr>.\n"
+
+# , c-format
+#: src/main.c:465
+#, c-format
+msgid "%s: %s: invalid command\n"
+msgstr "%s: %s: comando inválido\n"
+
+# , c-format
+#: src/main.c:515
+#, c-format
+msgid "%s: illegal option -- `-n%c'\n"
+msgstr "%s: opção ilegal -- `-n%c'\n"
+
+# , c-format
+#. #### Something nicer should be printed here -- similar to the
+#. pre-1.5 `--help' page.
+#: src/main.c:518 src/main.c:560 src/main.c:591
+#, c-format
+msgid "Try `%s --help' for more options.\n"
+msgstr "Tente `%s --help' para mais opções.\n"
+
+#: src/main.c:571
+msgid "Can't be verbose and quiet at the same time.\n"
+msgstr "Não pode ser verboso e quieto ao mesmo tempo.\n"
+
+#: src/main.c:577
+msgid "Can't timestamp and not clobber old files at the same time.\n"
+msgstr ""
+"Não é possível usar as opções \"timestamp\" e \"no clobber\" ao mesmo "
+"tempo.\n"
+
+#. No URL specified.
+#: src/main.c:586
+#, c-format
+msgid "%s: missing URL\n"
+msgstr "%s: URL faltando\n"
+
+# , c-format
+#: src/main.c:674
+#, c-format
+msgid "No URLs found in %s.\n"
+msgstr "Nenhuma URL encontrada em %s.\n"
+
+# , c-format
+#: src/main.c:683
+#, c-format
+msgid ""
+"\n"
+"FINISHED --%s--\n"
+"Downloaded: %s bytes in %d files\n"
+msgstr ""
+"\n"
+"FINALIZADO --%s--\n"
+"Baixados: %s bytes em %d arquivos\n"
+
+# , c-format
+#: src/main.c:688
+#, c-format
+msgid "Download quota (%s bytes) EXCEEDED!\n"
+msgstr "EXCEDIDA a quota (%s bytes) de recepção!\n"
+
+#. Please note that the double `%' in `%%s' is intentional, because
+#. redirect_output passes tmp through printf.
+#: src/main.c:715
+msgid "%s received, redirecting output to `%%s'.\n"
+msgstr "%s recebido, redirecionando saída para `%%s'.\n"
+
+# , c-format
+#: src/mswindows.c:118
+#, c-format
+msgid ""
+"\n"
+"CTRL+Break received, redirecting output to `%s'.\n"
+"Execution continued in background.\n"
+"You may stop Wget by pressing CTRL+ALT+DELETE.\n"
+msgstr ""
+"\n"
+"CTRL+Break recebido, redirecionando saída para `%s'.\n"
+"Execução continuará em background.\n"
+"Você pode parar o Wget pressionando CTRL+ALT+DELETE.\n"
+
+#. parent, no error
+#: src/mswindows.c:135 src/utils.c:268
+msgid "Continuing in background.\n"
+msgstr "Continuando em background.\n"
+
+# , c-format
+#: src/mswindows.c:137 src/utils.c:270
+#, c-format
+msgid "Output will be written to `%s'.\n"
+msgstr "Saída será escrita em `%s'.\n"
+
+# , c-format
+#: src/mswindows.c:227
+#, c-format
+msgid "Starting WinHelp %s\n"
+msgstr "Disparando WinHelp %s\n"
+
+#: src/mswindows.c:254 src/mswindows.c:262
+#, c-format
+msgid "%s: Couldn't find usable socket driver.\n"
+msgstr "%s: Não foi possível encontrar um driver de sockets usável.\n"
+
+# , c-format
+#: src/netrc.c:334
+#, c-format
+msgid "%s: %s:%d: warning: \"%s\" token appears before any machine name\n"
+msgstr ""
+"%s: %s:%d: aviso: token \"%s\" aparece antes de qualquer nome de máquina\n"
+
+# , c-format
+#: src/netrc.c:365
+#, c-format
+msgid "%s: %s:%d: unknown token \"%s\"\n"
+msgstr "%s: %s:%d: token desconhecido \"%s\"\n"
+
+# , c-format
+#: src/netrc.c:429
+#, c-format
+msgid "Usage: %s NETRC [HOSTNAME]\n"
+msgstr "Uso: %s NETRC [NOME DO HOST]\n"
+
+# , c-format
+#: src/netrc.c:439
+#, c-format
+msgid "%s: cannot stat %s: %s\n"
+msgstr "%s: não foi possível acessar %s: %s\n"
+
+# , c-format
+#: src/recur.c:449 src/retr.c:462
+#, c-format
+msgid "Removing %s.\n"
+msgstr "Removendo %s.\n"
+
+# , c-format
+#: src/recur.c:450
+#, c-format
+msgid "Removing %s since it should be rejected.\n"
+msgstr "Removendo %s pois ele deve ser rejeitado.\n"
+
+#: src/recur.c:609
+msgid "Loading robots.txt; please ignore errors.\n"
+msgstr "Buscando robots.txt; por favor ignore qualquer erro.\n"
+
+# , c-format
+#: src/retr.c:193
+#, c-format
+msgid ""
+"\n"
+" [ skipping %dK ]"
+msgstr ""
+"\n"
+" [ ignorando %dK ]"
+
+#: src/retr.c:344
+msgid "Could not find proxy host.\n"
+msgstr "Não foi possível encontrar o proxy.\n"
+
+# , c-format
+#: src/retr.c:355
+#, c-format
+msgid "Proxy %s: Must be HTTP.\n"
+msgstr "Proxy %s: Deve ser HTTP.\n"
+
+# , c-format
+#: src/retr.c:398
+#, c-format
+msgid "%s: Redirection to itself.\n"
+msgstr "%s: Redirecionamento para si mesmo.\n"
+
+#: src/retr.c:483
+msgid ""
+"Giving up.\n"
+"\n"
+msgstr ""
+"Desistindo.\n"
+"\n"
+
+#: src/retr.c:483
+msgid ""
+"Retrying.\n"
+"\n"
+msgstr ""
+"Tentando novamente.\n"
+"\n"
+
+# , c-format
+#: src/url.c:940
+#, c-format
+msgid "Error (%s): Link %s without a base provided.\n"
+msgstr "Erro (%s): Link %s sem uma base fornecida.\n"
+
+# , c-format
+#: src/url.c:955
+#, c-format
+msgid "Error (%s): Base %s relative, without referer URL.\n"
+msgstr "Erro (%s): Base %s relativa, sem URL referenciadora.\n"
+
+# , c-format
+#: src/url.c:1373
+#, c-format
+msgid "Converting %s... "
+msgstr "Convertendo %s... "
+
+# , c-format
+#: src/url.c:1378 src/url.c:1389
+#, c-format
+msgid "Cannot convert links in %s: %s\n"
+msgstr "Não foi possível converter links em %s: %s\n"
+
+# , c-format
+#: src/utils.c:71
+#, c-format
+msgid "%s: %s: Not enough memory.\n"
+msgstr "%s: %s: Memória insuficiente.\n"
+
+#: src/utils.c:203
+msgid "Unknown/unsupported protocol"
+msgstr "Protocolo desconhecido/não suportado"
+
+#: src/utils.c:206
+msgid "Invalid port specification"
+msgstr "Especificação de porta inválida"
+
+#: src/utils.c:209
+msgid "Invalid host name"
+msgstr "Nome do host inválido"
+
+# , c-format
+#: src/utils.c:430
+#, c-format
+msgid "Failed to unlink symlink `%s': %s\n"
+msgstr "Falha na remoção do link simbólico `%s': %s\n"
+
+# , c-format
+#~ msgid "%s: unrecognized option, character code 0%o\n"
+#~ msgstr "%s: opção não reconhecida, caractere código 0%o\n"
+
+# , c-format
+#~ msgid "%s: unrecognized option `-%c'\n"
+#~ msgstr "%s: opção não reconhecida `-%c'\n"
+
+# , c-format
+#~ msgid "%s: option `-%c' requires an argument\n"
+#~ msgstr "%s: opção `-%c' requer um argumento\n"
+
+# , c-format
+#~ msgid "wget: %s: Invalid specification `%s'.\n"
+#~ msgstr "wget: %s: Especificação inválida `%s'.\n"
+
+# , c-format
+#~ msgid "wget: illegal option -- `-n%c'\n"
+#~ msgstr "wget: opção ilegal -- `-n%c'\n"
+
+#~ msgid "done."
+#~ msgstr "feito."
+
+#~ msgid "UNKNOWN"
+#~ msgstr "DESCONHECIDO"
--- /dev/null
+# SOME DESCRIPTIVE TITLE.
+# Copyright (C) YEAR Free Software Foundation, Inc.
+# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
+#
+#, fuzzy
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"POT-Creation-Date: 1998-09-21 19:08+0200\n"
+"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
+"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
+"Language-Team: LANGUAGE <LL@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=CHARSET\n"
+"Content-Transfer-Encoding: ENCODING\n"
+
+#. Login to the server:
+#. First: Establish the control connection.
+#: src/ftp.c:147 src/http.c:346
+#, c-format
+msgid "Connecting to %s:%hu... "
+msgstr ""
+
+#: src/ftp.c:169 src/ftp.c:411 src/http.c:363
+#, c-format
+msgid "Connection to %s:%hu refused.\n"
+msgstr ""
+
+#. Second: Login with proper USER/PASS sequence.
+#: src/ftp.c:190 src/http.c:374
+msgid "connected!\n"
+msgstr ""
+
+#: src/ftp.c:191
+#, c-format
+msgid "Logging in as %s ... "
+msgstr ""
+
+#: src/ftp.c:200 src/ftp.c:253 src/ftp.c:301 src/ftp.c:353 src/ftp.c:447
+#: src/ftp.c:520 src/ftp.c:568 src/ftp.c:616
+msgid "Error in server response, closing control connection.\n"
+msgstr ""
+
+#: src/ftp.c:208
+msgid "Error in server greeting.\n"
+msgstr ""
+
+#: src/ftp.c:216 src/ftp.c:262 src/ftp.c:310 src/ftp.c:362 src/ftp.c:457
+#: src/ftp.c:530 src/ftp.c:578 src/ftp.c:626
+msgid "Write failed, closing control connection.\n"
+msgstr ""
+
+#: src/ftp.c:223
+msgid "The server refuses login.\n"
+msgstr ""
+
+#: src/ftp.c:230
+msgid "Login incorrect.\n"
+msgstr ""
+
+#: src/ftp.c:237
+msgid "Logged in!\n"
+msgstr ""
+
+#: src/ftp.c:270
+#, c-format
+msgid "Unknown type `%c', closing control connection.\n"
+msgstr ""
+
+#: src/ftp.c:283
+msgid "done. "
+msgstr ""
+
+#: src/ftp.c:289
+msgid "==> CWD not needed.\n"
+msgstr ""
+
+#: src/ftp.c:317
+#, c-format
+msgid ""
+"No such directory `%s'.\n"
+"\n"
+msgstr ""
+
+#: src/ftp.c:331 src/ftp.c:599 src/ftp.c:647 src/url.c:1431
+msgid "done.\n"
+msgstr ""
+
+#. do not CWD
+#: src/ftp.c:335
+msgid "==> CWD not required.\n"
+msgstr ""
+
+#: src/ftp.c:369
+msgid "Cannot initiate PASV transfer.\n"
+msgstr ""
+
+#: src/ftp.c:373
+msgid "Cannot parse PASV response.\n"
+msgstr ""
+
+#: src/ftp.c:387
+#, c-format
+msgid "Will try connecting to %s:%hu.\n"
+msgstr ""
+
+#: src/ftp.c:432 src/ftp.c:504 src/ftp.c:548
+msgid "done. "
+msgstr ""
+
+#: src/ftp.c:474
+#, c-format
+msgid "Bind error (%s).\n"
+msgstr ""
+
+#: src/ftp.c:490
+msgid "Invalid PORT.\n"
+msgstr ""
+
+#: src/ftp.c:537
+msgid ""
+"\n"
+"REST failed, starting from scratch.\n"
+msgstr ""
+
+#: src/ftp.c:586
+#, c-format
+msgid ""
+"No such file `%s'.\n"
+"\n"
+msgstr ""
+
+#: src/ftp.c:634
+#, c-format
+msgid ""
+"No such file or directory `%s'.\n"
+"\n"
+msgstr ""
+
+#: src/ftp.c:692 src/ftp.c:699
+#, c-format
+msgid "Length: %s"
+msgstr ""
+
+#: src/ftp.c:694 src/ftp.c:701
+#, c-format
+msgid " [%s to go]"
+msgstr ""
+
+#: src/ftp.c:703
+msgid " (unauthoritative)\n"
+msgstr ""
+
+#: src/ftp.c:721
+#, c-format
+msgid "%s: %s, closing control connection.\n"
+msgstr ""
+
+#: src/ftp.c:729
+#, c-format
+msgid "%s (%s) - Data connection: %s; "
+msgstr ""
+
+#: src/ftp.c:746
+msgid "Control connection closed.\n"
+msgstr ""
+
+#: src/ftp.c:764
+msgid "Data transfer aborted.\n"
+msgstr ""
+
+#: src/ftp.c:830
+#, c-format
+msgid "File `%s' already there, not retrieving.\n"
+msgstr ""
+
+#: src/ftp.c:896 src/http.c:922
+#, c-format
+msgid "(try:%2d)"
+msgstr ""
+
+#: src/ftp.c:955 src/http.c:1116
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld]\n"
+"\n"
+msgstr ""
+
+#: src/ftp.c:1001
+#, c-format
+msgid "Using `%s' as listing tmp file.\n"
+msgstr ""
+
+#: src/ftp.c:1013
+#, c-format
+msgid "Removed `%s'.\n"
+msgstr ""
+
+#: src/ftp.c:1049
+#, c-format
+msgid "Recursion depth %d exceeded max. depth %d.\n"
+msgstr ""
+
+#: src/ftp.c:1096 src/http.c:1054
+#, c-format
+msgid ""
+"Local file `%s' is more recent, not retrieving.\n"
+"\n"
+msgstr ""
+
+#: src/ftp.c:1102 src/http.c:1060
+#, c-format
+msgid "The sizes do not match (local %ld), retrieving.\n"
+msgstr ""
+
+#: src/ftp.c:1119
+msgid "Invalid name of the symlink, skipping.\n"
+msgstr ""
+
+#: src/ftp.c:1136
+#, c-format
+msgid ""
+"Already have correct symlink %s -> %s\n"
+"\n"
+msgstr ""
+
+#: src/ftp.c:1144
+#, c-format
+msgid "Creating symlink %s -> %s\n"
+msgstr ""
+
+#: src/ftp.c:1155
+#, c-format
+msgid "Symlinks not supported, skipping symlink `%s'.\n"
+msgstr ""
+
+#: src/ftp.c:1167
+#, c-format
+msgid "Skipping directory `%s'.\n"
+msgstr ""
+
+#: src/ftp.c:1176
+#, c-format
+msgid "%s: unknown/unsupported file type.\n"
+msgstr ""
+
+#: src/ftp.c:1193
+#, c-format
+msgid "%s: corrupt time-stamp.\n"
+msgstr ""
+
+#: src/ftp.c:1213
+#, c-format
+msgid "Will not retrieve dirs since depth is %d (max %d).\n"
+msgstr ""
+
+#: src/ftp.c:1252
+#, c-format
+msgid "Not descending to `%s' as it is excluded/not-included.\n"
+msgstr ""
+
+#: src/ftp.c:1297
+#, c-format
+msgid "Rejecting `%s'.\n"
+msgstr ""
+
+#. No luck.
+#. #### This message SUCKS. We should see what was the
+#. reason that nothing was retrieved.
+#: src/ftp.c:1344
+#, c-format
+msgid "No matches on pattern `%s'.\n"
+msgstr ""
+
+#: src/ftp.c:1404
+#, c-format
+msgid "Wrote HTML-ized index to `%s' [%ld].\n"
+msgstr ""
+
+#: src/ftp.c:1409
+#, c-format
+msgid "Wrote HTML-ized index to `%s'.\n"
+msgstr ""
+
+#: src/getopt.c:454
+#, c-format
+msgid "%s: option `%s' is ambiguous\n"
+msgstr ""
+
+#: src/getopt.c:478
+#, c-format
+msgid "%s: option `--%s' doesn't allow an argument\n"
+msgstr ""
+
+#: src/getopt.c:483
+#, c-format
+msgid "%s: option `%c%s' doesn't allow an argument\n"
+msgstr ""
+
+#: src/getopt.c:498
+#, c-format
+msgid "%s: option `%s' requires an argument\n"
+msgstr ""
+
+#. --option
+#: src/getopt.c:528
+#, c-format
+msgid "%s: unrecognized option `--%s'\n"
+msgstr ""
+
+#. +option or -option
+#: src/getopt.c:532
+#, c-format
+msgid "%s: unrecognized option `%c%s'\n"
+msgstr ""
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:563
+#, c-format
+msgid "%s: illegal option -- %c\n"
+msgstr ""
+
+#. 1003.2 specifies the format of this message.
+#: src/getopt.c:602
+#, c-format
+msgid "%s: option requires an argument -- %c\n"
+msgstr ""
+
+#: src/host.c:432
+#, c-format
+msgid "%s: Cannot determine user-id.\n"
+msgstr ""
+
+#: src/host.c:444
+#, c-format
+msgid "%s: Warning: uname failed: %s\n"
+msgstr ""
+
+#: src/host.c:456
+#, c-format
+msgid "%s: Warning: gethostname failed\n"
+msgstr ""
+
+#: src/host.c:484
+#, c-format
+msgid "%s: Warning: cannot determine local IP address.\n"
+msgstr ""
+
+#: src/host.c:498
+#, c-format
+msgid "%s: Warning: cannot reverse-lookup local IP address.\n"
+msgstr ""
+
+#. This gets ticked pretty often. Karl Berry reports
+#. that there can be valid reasons for the local host
+#. name not to be an FQDN, so I've decided to remove the
+#. annoying warning.
+#: src/host.c:511
+#, c-format
+msgid "%s: Warning: reverse-lookup of local address did not yield FQDN!\n"
+msgstr ""
+
+#: src/host.c:539
+msgid "Host not found"
+msgstr ""
+
+#: src/host.c:541
+msgid "Unknown error"
+msgstr ""
+
+#: src/html.c:439 src/html.c:441
+#, c-format
+msgid "Index of /%s on %s:%d"
+msgstr ""
+
+#: src/html.c:463
+msgid "time unknown "
+msgstr ""
+
+#: src/html.c:467
+msgid "File "
+msgstr ""
+
+#: src/html.c:470
+msgid "Directory "
+msgstr ""
+
+#: src/html.c:473
+msgid "Link "
+msgstr ""
+
+#: src/html.c:476
+msgid "Not sure "
+msgstr ""
+
+#: src/html.c:494
+#, c-format
+msgid " (%s bytes)"
+msgstr ""
+
+#: src/http.c:492
+msgid "Failed writing HTTP request.\n"
+msgstr ""
+
+#: src/http.c:497
+#, c-format
+msgid "%s request sent, awaiting response... "
+msgstr ""
+
+#: src/http.c:536
+msgid "End of file while parsing headers.\n"
+msgstr ""
+
+#: src/http.c:547
+#, c-format
+msgid "Read error (%s) in headers.\n"
+msgstr ""
+
+#: src/http.c:587
+msgid "No data received"
+msgstr ""
+
+#: src/http.c:589
+msgid "Malformed status line"
+msgstr ""
+
+#: src/http.c:594
+msgid "(no description)"
+msgstr ""
+
+#. If we have tried it already, then there is not point
+#. retrying it.
+#: src/http.c:678
+msgid "Authorization failed.\n"
+msgstr ""
+
+#: src/http.c:685
+msgid "Unknown authentication scheme.\n"
+msgstr ""
+
+#: src/http.c:748
+#, c-format
+msgid "Location: %s%s\n"
+msgstr ""
+
+#: src/http.c:749 src/http.c:774
+msgid "unspecified"
+msgstr ""
+
+#: src/http.c:750
+msgid " [following]"
+msgstr ""
+
+#. No need to print this output if the body won't be
+#. downloaded at all, or if the original server response is
+#. printed.
+#: src/http.c:764
+msgid "Length: "
+msgstr ""
+
+#: src/http.c:769
+#, c-format
+msgid " (%s to go)"
+msgstr ""
+
+#: src/http.c:774
+msgid "ignored"
+msgstr ""
+
+#: src/http.c:857
+msgid "Warning: wildcards not supported in HTTP.\n"
+msgstr ""
+
+#. If opt.noclobber is turned on and file already exists, do not
+#. retrieve the file
+#: src/http.c:872
+#, c-format
+msgid "File `%s' already there, will not retrieve.\n"
+msgstr ""
+
+#: src/http.c:978
+#, c-format
+msgid "Cannot write to `%s' (%s).\n"
+msgstr ""
+
+#: src/http.c:988
+#, c-format
+msgid "ERROR: Redirection (%d) without location.\n"
+msgstr ""
+
+#: src/http.c:1011
+#, c-format
+msgid "%s ERROR %d: %s.\n"
+msgstr ""
+
+#: src/http.c:1023
+msgid "Last-modified header missing -- time-stamps turned off.\n"
+msgstr ""
+
+#: src/http.c:1031
+msgid "Last-modified header invalid -- time-stamp ignored.\n"
+msgstr ""
+
+#: src/http.c:1064
+msgid "Remote file is newer, retrieving.\n"
+msgstr ""
+
+#: src/http.c:1098
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld]\n"
+"\n"
+msgstr ""
+
+#: src/http.c:1130
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld. "
+msgstr ""
+
+#: src/http.c:1138
+#, c-format
+msgid ""
+"%s (%s) - `%s' saved [%ld/%ld])\n"
+"\n"
+msgstr ""
+
+#: src/http.c:1150
+#, c-format
+msgid "%s (%s) - Connection closed at byte %ld/%ld. "
+msgstr ""
+
+#: src/http.c:1161
+#, c-format
+msgid "%s (%s) - Read error at byte %ld (%s)."
+msgstr ""
+
+#: src/http.c:1169
+#, c-format
+msgid "%s (%s) - Read error at byte %ld/%ld (%s). "
+msgstr ""
+
+#: src/init.c:312 src/netrc.c:250
+#, c-format
+msgid "%s: Cannot read %s (%s).\n"
+msgstr ""
+
+#: src/init.c:333 src/init.c:339
+#, c-format
+msgid "%s: Error in %s at line %d.\n"
+msgstr ""
+
+#: src/init.c:370
+#, c-format
+msgid "%s: Warning: Both system and user wgetrc point to `%s'.\n"
+msgstr ""
+
+#: src/init.c:458
+#, c-format
+msgid "%s: BUG: unknown command `%s', value `%s'.\n"
+msgstr ""
+
+#: src/init.c:485
+#, c-format
+msgid "%s: %s: Please specify on or off.\n"
+msgstr ""
+
+#: src/init.c:503 src/init.c:760 src/init.c:782 src/init.c:855
+#, c-format
+msgid "%s: %s: Invalid specification `%s'.\n"
+msgstr ""
+
+#: src/init.c:616 src/init.c:638 src/init.c:660 src/init.c:686
+#, c-format
+msgid "%s: Invalid specification `%s'\n"
+msgstr ""
+
+#: src/main.c:101
+#, c-format
+msgid "Usage: %s [OPTION]... [URL]...\n"
+msgstr ""
+
+#: src/main.c:109
+#, c-format
+msgid "GNU Wget %s, a non-interactive network retriever.\n"
+msgstr ""
+
+#. Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+#. don't bitch. Also, it makes translation much easier.
+#: src/main.c:114
+msgid ""
+"\n"
+"Mandatory arguments to long options are mandatory for short options too.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:117
+msgid ""
+"Startup:\n"
+" -V, --version display the version of Wget and exit.\n"
+" -h, --help print this help.\n"
+" -b, --background go to background after startup.\n"
+" -e, --execute=COMMAND execute a `.wgetrc' command.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:123
+msgid ""
+"Logging and input file:\n"
+" -o, --output-file=FILE log messages to FILE.\n"
+" -a, --append-output=FILE append messages to FILE.\n"
+" -d, --debug print debug output.\n"
+" -q, --quiet quiet (no output).\n"
+" -v, --verbose be verbose (this is the default).\n"
+" -nv, --non-verbose turn off verboseness, without being quiet.\n"
+" -i, --input-file=FILE read URL-s from file.\n"
+" -F, --force-html treat input file as HTML.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:133
+msgid ""
+"Download:\n"
+" -t, --tries=NUMBER set number of retries to NUMBER (0 "
+"unlimits).\n"
+" -O --output-document=FILE write documents to FILE.\n"
+" -nc, --no-clobber don't clobber existing files.\n"
+" -c, --continue restart getting an existing file.\n"
+" --dot-style=STYLE set retrieval display style.\n"
+" -N, --timestamping don't retrieve files if older than local.\n"
+" -S, --server-response print server response.\n"
+" --spider don't download anything.\n"
+" -T, --timeout=SECONDS set the read timeout to SECONDS.\n"
+" -w, --wait=SECONDS wait SECONDS between retrievals.\n"
+" -Y, --proxy=on/off turn proxy on or off.\n"
+" -Q, --quota=NUMBER set retrieval quota to NUMBER.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:147
+msgid ""
+"Directories:\n"
+" -nd --no-directories don't create directories.\n"
+" -x, --force-directories force creation of directories.\n"
+" -nH, --no-host-directories don't create host directories.\n"
+" -P, --directory-prefix=PREFIX save files to PREFIX/...\n"
+" --cut-dirs=NUMBER ignore NUMBER remote directory "
+"components.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:154
+msgid ""
+"HTTP options:\n"
+" --http-user=USER set http user to USER.\n"
+" --http-passwd=PASS set http password to PASS.\n"
+" -C, --cache=on/off (dis)allow server-cached data (normally "
+"allowed).\n"
+" --ignore-length ignore `Content-Length' header field.\n"
+" --header=STRING insert STRING among the headers.\n"
+" --proxy-user=USER set USER as proxy username.\n"
+" --proxy-passwd=PASS set PASS as proxy password.\n"
+" -s, --save-headers save the HTTP headers to file.\n"
+" -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:165
+msgid ""
+"FTP options:\n"
+" --retr-symlinks retrieve FTP symbolic links.\n"
+" -g, --glob=on/off turn file name globbing on or off.\n"
+" --passive-ftp use the \"passive\" transfer mode.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:170
+msgid ""
+"Recursive retrieval:\n"
+" -r, --recursive recursive web-suck -- use with care!.\n"
+" -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n"
+" --delete-after delete downloaded files.\n"
+" -k, --convert-links convert non-relative links to relative.\n"
+" -m, --mirror turn on options suitable for mirroring.\n"
+" -nr, --dont-remove-listing don't remove `.listing' files.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:178
+msgid ""
+"Recursive accept/reject:\n"
+" -A, --accept=LIST list of accepted extensions.\n"
+" -R, --reject=LIST list of rejected extensions.\n"
+" -D, --domains=LIST list of accepted domains.\n"
+" --exclude-domains=LIST comma-separated list of rejected "
+"domains.\n"
+" -L, --relative follow relative links only.\n"
+" --follow-ftp follow FTP links from HTML documents.\n"
+" -H, --span-hosts go to foreign hosts when recursive.\n"
+" -I, --include-directories=LIST list of allowed directories.\n"
+" -X, --exclude-directories=LIST list of excluded directories.\n"
+" -nh, --no-host-lookup don't DNS-lookup hosts.\n"
+" -np, --no-parent don't ascend to the parent directory.\n"
+"\n"
+msgstr ""
+
+#: src/main.c:191
+msgid "Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"
+msgstr ""
+
+#: src/main.c:347
+#, c-format
+msgid "%s: debug support not compiled in.\n"
+msgstr ""
+
+#: src/main.c:395
+msgid ""
+"Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n"
+"This program is distributed in the hope that it will be useful,\n"
+"but WITHOUT ANY WARRANTY; without even the implied warranty of\n"
+"MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n"
+"GNU General Public License for more details.\n"
+msgstr ""
+
+#: src/main.c:401
+msgid ""
+"\n"
+"Written by Hrvoje Niksic <hniksic@srce.hr>.\n"
+msgstr ""
+
+#: src/main.c:465
+#, c-format
+msgid "%s: %s: invalid command\n"
+msgstr ""
+
+#: src/main.c:515
+#, c-format
+msgid "%s: illegal option -- `-n%c'\n"
+msgstr ""
+
+#. #### Something nicer should be printed here -- similar to the
+#. pre-1.5 `--help' page.
+#: src/main.c:518 src/main.c:560 src/main.c:591
+#, c-format
+msgid "Try `%s --help' for more options.\n"
+msgstr ""
+
+#: src/main.c:571
+msgid "Can't be verbose and quiet at the same time.\n"
+msgstr ""
+
+#: src/main.c:577
+msgid "Can't timestamp and not clobber old files at the same time.\n"
+msgstr ""
+
+#. No URL specified.
+#: src/main.c:586
+#, c-format
+msgid "%s: missing URL\n"
+msgstr ""
+
+#: src/main.c:674
+#, c-format
+msgid "No URLs found in %s.\n"
+msgstr ""
+
+#: src/main.c:683
+#, c-format
+msgid ""
+"\n"
+"FINISHED --%s--\n"
+"Downloaded: %s bytes in %d files\n"
+msgstr ""
+
+#: src/main.c:688
+#, c-format
+msgid "Download quota (%s bytes) EXCEEDED!\n"
+msgstr ""
+
+#. Please note that the double `%' in `%%s' is intentional, because
+#. redirect_output passes tmp through printf.
+#: src/main.c:715
+msgid "%s received, redirecting output to `%%s'.\n"
+msgstr ""
+
+#: src/mswindows.c:118
+#, c-format
+msgid ""
+"\n"
+"CTRL+Break received, redirecting output to `%s'.\n"
+"Execution continued in background.\n"
+"You may stop Wget by pressing CTRL+ALT+DELETE.\n"
+msgstr ""
+
+#. parent, no error
+#: src/mswindows.c:135 src/utils.c:268
+msgid "Continuing in background.\n"
+msgstr ""
+
+#: src/mswindows.c:137 src/utils.c:270
+#, c-format
+msgid "Output will be written to `%s'.\n"
+msgstr ""
+
+#: src/mswindows.c:227
+#, c-format
+msgid "Starting WinHelp %s\n"
+msgstr ""
+
+#: src/mswindows.c:254 src/mswindows.c:262
+#, c-format
+msgid "%s: Couldn't find usable socket driver.\n"
+msgstr ""
+
+#: src/netrc.c:334
+#, c-format
+msgid "%s: %s:%d: warning: \"%s\" token appears before any machine name\n"
+msgstr ""
+
+#: src/netrc.c:365
+#, c-format
+msgid "%s: %s:%d: unknown token \"%s\"\n"
+msgstr ""
+
+#: src/netrc.c:429
+#, c-format
+msgid "Usage: %s NETRC [HOSTNAME]\n"
+msgstr ""
+
+#: src/netrc.c:439
+#, c-format
+msgid "%s: cannot stat %s: %s\n"
+msgstr ""
+
+#: src/recur.c:449 src/retr.c:462
+#, c-format
+msgid "Removing %s.\n"
+msgstr ""
+
+#: src/recur.c:450
+#, c-format
+msgid "Removing %s since it should be rejected.\n"
+msgstr ""
+
+#: src/recur.c:609
+msgid "Loading robots.txt; please ignore errors.\n"
+msgstr ""
+
+#: src/retr.c:193
+#, c-format
+msgid ""
+"\n"
+" [ skipping %dK ]"
+msgstr ""
+
+#: src/retr.c:344
+msgid "Could not find proxy host.\n"
+msgstr ""
+
+#: src/retr.c:355
+#, c-format
+msgid "Proxy %s: Must be HTTP.\n"
+msgstr ""
+
+#: src/retr.c:398
+#, c-format
+msgid "%s: Redirection to itself.\n"
+msgstr ""
+
+#: src/retr.c:483
+msgid ""
+"Giving up.\n"
+"\n"
+msgstr ""
+
+#: src/retr.c:483
+msgid ""
+"Retrying.\n"
+"\n"
+msgstr ""
+
+#: src/url.c:940
+#, c-format
+msgid "Error (%s): Link %s without a base provided.\n"
+msgstr ""
+
+#: src/url.c:955
+#, c-format
+msgid "Error (%s): Base %s relative, without referer URL.\n"
+msgstr ""
+
+#: src/url.c:1373
+#, c-format
+msgid "Converting %s... "
+msgstr ""
+
+#: src/url.c:1378 src/url.c:1389
+#, c-format
+msgid "Cannot convert links in %s: %s\n"
+msgstr ""
+
+#: src/utils.c:71
+#, c-format
+msgid "%s: %s: Not enough memory.\n"
+msgstr ""
+
+#: src/utils.c:203
+msgid "Unknown/unsupported protocol"
+msgstr ""
+
+#: src/utils.c:206
+msgid "Invalid port specification"
+msgstr ""
+
+#: src/utils.c:209
+msgid "Invalid host name"
+msgstr ""
+
+#: src/utils.c:430
+#, c-format
+msgid "Failed to unlink symlink `%s': %s\n"
+msgstr ""
--- /dev/null
+1998-09-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.3 is released.
+
+1998-09-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (ftp_getaddress): Don't warn when reverse-lookup of local
+ address doesn't yield FQDN.
+
+1998-09-21 Andreas Schwab <schwab@issan.informatik.uni-dortmund.de>
+
+ * cmpt.c (strerror): Fix declaration of sys_errlist.
+
+1998-09-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * main.c (main): Don't use an array subscript as the first
+ argument to STRDUP_ALLOCA.
+ From Kaveh R. Ghazi.
+
+1998-09-11 Szakacsits Szabolcs <szaka@sienet.hu>
+
+ * html.c (htmlfindurl): Download table background.
+
+1998-09-11 Hans Grobler <grobh@conde.ee.sun.ac.za>
+
+ * init.c (parse_line): Would free *com before allocating it.
+ (parse_line): Would free com instead of *com.
+
+1998-09-10 Howard Gayle <howard@fjst.com>
+
+ * url.c (get_urls_html): Would drop the last character of the
+ link.
+
+1998-09-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): Don't print status code if quiet.
+
+1998-09-10 Kaveh R. Ghazi <ghazi@caip.rutgers.edu>
+
+ * log.c: Use <stdarg.h> only when __STDC__.
+
+1998-09-10 Adam D. Moss <adam@foxbox.org>
+
+ * html.c (htmlfindurl): Download <layer src=...>.
+
+1998-09-10 Howard Gayle <howard@fjst.com>
+
+ * ftp.c (ftp_retrieve_list): Don't update the time stamp of a file
+ not retrieved.
+
+1998-06-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c: Include <libc.h> on NeXT.
+
+1998-06-26 Heinz Salzmann <heinz.salzmann@intermetall.de>
+
+ * url.c (get_urls_html): Fix calculation of URL position.
+
+1998-06-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.2 is released.
+
+1998-06-23 Dave Love <d.love@dl.ac.uk>
+
+ * ftp.c, init.c, netrc.c: Include errno.h.
+
+ * http.c: Include errno.h and time header.
+
+ * Makefile.in (exext): Define.
+ (install.bin, uninstall.bin): Use it.
+
+1998-06-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): Don't attempt to compare local and remote
+ sizes if the remote size is unknown.
+
+1998-06-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (get_urls_html): Use malloc() instead of alloca in the
+ loop.
+
+1998-06-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.2-b4 is released.
+
+1998-06-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (get_urls_html): Ignore spaces before and after the URI.
+
+1998-06-08 Wanderlei Antonio Cavassin <cavassin@conectiva.com.br>
+
+ * ftp.c (getftp): Translate `done'.
+
+1998-06-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.2-b3 is released.
+
+1998-06-06 Alexander Kourakos <awk@bnt.com>
+
+ * init.c (cleanup): Close dfp, don't free it.
+
+1998-06-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (make_directory): Twiddle.
+
+ * config.h.in: Added template for access().
+
+1998-06-05 Mathieu Guillaume <mat@cythere.com>
+
+	* html.c (htmlfindurl): Download <input src=...>.
+
+1998-06-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (file_exists_p): Use access() with two arguments.
+
+1998-05-27 Martin Kraemer <Martin.Kraemer@mch.sni.de>
+
+ * netrc.c (parse_netrc): Correct logic.
+
+1998-05-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Added `break'; suggested by Lin Zhe Min
+ <ljm@ljm.wownet.net>.
+
+1998-05-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.2-b2 is released.
+
+1998-05-18 Juan Jose Rodriguez <jcnsoft@jal1.telmex.net.mx>
+
+ * mswindows.h: Don't translate mkdir to _mkdir under Borland.
+
+1998-05-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (elapsed_time): Return correct value when
+ HAVE_GETTIMEOFDAY is undefined.
+
+1998-05-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.2-b1 is released.
+
+1998-05-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * getopt.c (_getopt_internal): Use exec_name instead of argv[0].
+ (_getopt_internal): Don't translate `#if 0'-ed strings.
+
+1998-05-06 Douglas E. Wegscheid <wegscd@whirlpool.com>
+
+ * mswindows.c (ws_handler): Use fork_to_background().
+
+1998-05-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.1 is released.
+
+1998-05-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (parse_http_status_line): Avoid `minor' and `major'
+ names.
+
+1998-05-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (mkdirhier): Renamed to make_directory.
+
+1998-05-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * mswindows.c (fork_to_background): Define under Windows.
+
+ * utils.c (fork_to_background): New function.
+
+	* html.c (htmlfindurl): Removed redundant casts.
+
+1998-05-01 Douglas E. Wegscheid <wegscd@whirlpool.com>
+
+ * mswindows.c (ws_mypath): Cache the path.
+
+1998-04-30 Douglas E. Wegscheid <wegscd@whirlpool.com>
+
+ * ftp.h: Prefix enum ftype members with FT_.
+
+ * ftp-ls.c, ftp.c, html.h: Adjust accordingly.
+
+ * mswindows.h: Use stat under Borland, _stat under MSVC.
+
+1998-04-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (known_authentication_scheme_p): New function.
+ (gethttp): Handle authorization more correctly.
+
+ * ftp-basic.h: Removed.
+
+ * cmpt.h: Removed.
+
+ * utils.c: Include <unistd.h> before <pwd.h>; needed under SunOS
+ with gcc 2.8.
+ (numdigit): Use `while' loop.
+
+ * http.c (create_authorization_line): Detect authentication
+ schemes case-insensitively.
+
+ * http.c (extract_header_attr): Use strdupdelim().
+ (digest_authentication_encode): Move declaration of local
+ variables to smaller scope.
+ (digest_authentication_encode): Reset REALM, OPAQUE and NONCE.
+ (create_authorization_line): Detect authentication schemes
+ case-insensitively.
+
+ * utils.c (touch): Constify.
+
+ * http.c (gethttp): Report a nicer error when no data is received.
+
+ * rbuf.h (RBUF_READCHAR): Ditto.
+
+ * ftp-basic.c (ftp_response): Use sizeof.
+
+1998-04-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (print_percentage): EXPECTED is long, not int.
+ (print_percentage): Use floating-point arithmetic to avoid
+ overflow with large files' sizes multiplied with 100.
+
+1998-04-27 Gregor Hoffleit <flight@mathi.uni-heidelberg.de>
+
+ * config.h.in: Added pid_t stub.
+
+ * sysdep.h (S_ISREG): Moved here from mswindows.h (NeXT doesn't
+ define it).
+
+1998-04-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5.0 is released.
+
+1998-04-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (str_url): Ditto.
+
+ * ftp-basic.c (ftp_rest): Use new name.
+
+ * utils.c (long_to_string): Renamed from prnum().
+
+1998-04-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b17 is released.
+
+1998-04-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * headers.c (header_get): New argument FLAGS.
+
+ * http.c (gethttp): If request is malformed, bail out of the
+ header loop.
+ (gethttp): Check for empty header *after* the status line checks.
+ (gethttp): Disallow continuations for status line.
+
+1998-04-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b16 is released.
+
+1998-04-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (commands): Renamed `always_rest' to `continue'.
+
+1998-04-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * all: Use it.
+
+ * log.c (logputs): New argument.
+ (logvprintf): Ditto.
+ (logprintf): Ditto.
+
+1998-04-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_atotm): Update comment.
+
+ * main.c (i18n_initialize): Set LC_MESSAGES, not LC_ALL.
+
+ * wget.h: Renamed ENABLED_NLS to HAVE_NLS.
+
+ * main.c (i18n_initialize): New function.
+ (main): Use it.
+
+ * log.c: Include <unistd.h>.
+
+ * retr.c (show_progress): Cast alloca to char *.
+
+1998-04-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b15 is released.
+
+1998-04-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.h: Declare file_non_directory_p().
+
+1998-04-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * main.c (main): It's `tries', not `numtries' now.
+
+1998-04-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (getperms): Removed.
+
+1998-04-01 Tim Charron <tcharron@interlog.com>
+
+ * log.c (logvprintf): Don't use ARGS twice.
+
+1998-04-01 John <john@futuresguide.com>
+
+ * mswindows.c: Cleaned up.
+
+1998-04-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b14 is released.
+
+1998-04-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp-opie.c (STRLEN4): New macro.
+ (btoe): Use it.
+
+1998-04-01 Junio Hamano <junio@twinsun.com>
+
+ * http.c: Document all the Digest functions.
+
+1998-04-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (file_non_directory_p): Renamed from isfile().
+
+ * mswindows.h (S_ISREG): New macro, suggested by Tim Adam.
+
+1998-03-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (mkdirhier): Use 0777 instead of opt.dirmode.
+
+ * init.c (cmd_spec_dotstyle): Use 48 dots per line for binary
+ style.
+ (cmd_permissions): Removed.
+
+ * config.h.in: Add template for WORDS_BIGENDIAN.
+
+1998-03-31 Junio Hamano <junio@twinsun.com>
+
+ * http.c (HEXD2asc): New macro.
+ (dump_hash): Use it.
+
+1998-03-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b13 is released.
+
+1998-03-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * main.c (main): Don't try to use `com'.
+
+1998-03-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (cmd_permissions): New function.
+
+1998-03-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b12 is released.
+
+1998-03-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (commands): Renamed `numtries' to `tries'.
+ (cmd_spec_debug): Removed.
+ (home_dir): Under Windows, return `C:\' if HOME is undefined.
+
+1998-03-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * config.h.in: Define _XOPEN_SOURCE.
+
+ * init.c (check_user_specified_header): New function.
+ (cmd_spec_header): Use it.
+ (cmd_spec_useragent): New function.
+
+1998-03-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b11 is released.
+
+1998-03-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.h: Include <libintl.h> only if NLS is enabled.
+
+1998-03-26 Hrvoje Niksic <hniksic@srce.hr>
+
+ * options.h (struct options): Made `wait' a long.
+ (struct options): Ditto for `timeout'.
+
+1998-03-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (exists): Renamed to file_exists_p.
+ (file_exists_p): Use access() if available.
+
+1998-03-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (memfatal): Set save_log_p to 0 to avoid potential
+ infloop.
+
+ * log.c: do_logging -> save_log_p.
+
+ * config.h.in: Added template for HAVE_VSNPRINTF.
+
+1998-03-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c: Ditto.
+
+ * http.c: Protect declaration against non-ANSI compiler.
+
+ * log.c (logvprintf): Use vsnprintf() if available.
+
+ * getopt.c (main): Don't translate test stuff.
+
+1998-03-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b10 is released.
+
+1998-03-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Don't translate "CWD %s".
+
+ * wget.h (GCC_FORMAT_ATTR): Renamed from FORMAT_ATTR.
+
+1998-03-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp-opie.c (btoe): Use memcpy() instead of strncat().
+
+ * log.c (logputs): New function.
+ (logvprintf): Renamed from vlogmsg; use logputs().
+
+ * retr.c (show_progress): Print `[100%]' when the retrieval is
+ finished.
+
+ * init.c (run_wgetrc): Use FILE, not PATH.
+ (wgetrc_file_name): Ditto.
+
+1998-03-07 Tim Adam <tma@osa.com.au>
+
+ * recur.c (parse_robots): Correctly reset `entries' on empty
+ disallow.
+
+1998-03-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (cmd_spec_debug): Use cmd_boolean().
+
+1998-02-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (gethttp): Create proxy-authorization correctly.
+
+1998-02-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * md5.c: Ditto.
+
+ * getopt.c: Use ANSI function definitions.
+
+ * ftp-opie.c: New file.
+
+ * options.h: Don't redefine EXTERN.
+
+ * init.c: Sort it correctly.
+
+1998-02-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b9 is released.
+
+1998-02-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (recursive_retrieve): Reset `first_time'.
+
+ * ftp.c (getftp): Added `default' clause to switches of uerr_t.
+
+ * rbuf.c (rbuf_peek): Simplified.
+ (rbuf_flush): Use MINVAL.
+
+ * wget.h (MINVAL): Moved from url.h.
+
+ * rbuf.h (RBUF_FD): New macro.
+
+ * url.c (add_url): Add to the head of the list.
+
+ * ftp.c (ftp_retrieve_list): Set the permissions to downloaded
+ file.
+ (getftp): Set the default permissions to 0600.
+
+1998-02-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (get_urls_html): Ditto.
+ (convert_links): Ditto.
+
+ * recur.c (parse_robots): Ditto.
+
+ * html.c (ftp_index): Ditto.
+
+ * ftp-ls.c (ftp_parse_unix_ls): Open file as binary.
+
+ * init.c (defaults): Initialize `opt' to zero via memset.
+
+ * http.c (digest_authentication_encode): goto considered harmful.
+
+1998-02-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (delelement): Simplify and fix leak.
+
+1998-02-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (dump_hash): Use HEXD2ASC instead of home-grown stuff.
+
+ * url.h (HEXD2ASC): Removed warning.
+
+ * init.c (comind): Use binary search.
+ (commands): Reorganized.
+ (setval): Ditto.
+ (cmd_boolean): New function.
+ (cmd_number): Ditto.
+ (cmd_number_inf): Ditto.
+ (cmd_string): Ditto.
+ (cmd_vector): Ditto.
+ (cmd_directory_vector): Ditto.
+ (cmd_bytes): Ditto.
+ (cmd_time): Ditto.
+ (cmd_spec_debug): Ditto.
+ (cmd_spec_dirmode): Ditto.
+ (cmd_spec_dirstruct): Ditto.
+ (cmd_spec_dotstyle): Ditto.
+ (cmd_spec_header): Ditto.
+ (cmd_spec_htmlify): Ditto.
+ (cmd_spec_mirror): Ditto.
+ (cmd_spec_outputdocument): Ditto.
+ (cmd_spec_recursive): Ditto.
+ (settime): Merged with cmd_time().
+ (setbytes): Merged with cmd_bytes().
+ (setonoff): Merged with cmd_boolean().
+ (onoff): Ditto.
+
+1998-02-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (distclean): Remove `config.h'.
+
+1998-02-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b8 is released.
+
+1998-02-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (digest_authentication_encode): New function.
+ (create_authorization_line): Use it.
+ (dump_hash): New function.
+ (digest_authentication_encode): Use it.
+
+ * fnmatch.c: Renamed from `mtch.c'.
+
+1998-02-15 Karl Eichwalder <ke@suse.de>
+
+ * main.c (main): Tag "Written by..." string as translatable.
+
+1998-02-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.h (FREE_MAYBE): New macro.
+
+ * http.c (create_authorization_line): Don't use ANSI C string
+ concatenation feature.
+ (basic_authentication_encode): Use alloca() for temporary
+ variables.
+
+ * recur.h: Ditto.
+
+ * http.c: Ditto.
+
+ * headers.h: Ditto.
+
+ * ftp-basic.c: Protect declaration against non-ANSI compiler.
+
+ * http.c (create_authorization_line): Cast `unsigned char *' to
+ `char *' for sprintf, to shut up the noisy Digital Unix cc.
+
+1998-02-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b7 is released.
+
+1998-02-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * cmpt.c (strstr): Synched with glibc-2.0.6.
+
+ * ftp-basic.c (calculate_skey_response): Ditto.
+ (calculate_skey_response): Use alloca().
+
+ * http.c (create_authorization_line): Work with FSF's version of
+ md5.c.
+
+ * md5.c: New file, from GNU libc.
+
+1998-02-14 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.h (URL_CLEANSE): Name the temporary variable more carefully.
+
+1998-02-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (basic_authentication_encode): New function, instead of
+ the macro.
+
+1998-02-13 Junio Hamano <junio@twinsun.com>
+
+ * http.c: Add HTTP-DA support.
+ * ftp-basic.c: Add Opie/S-key support.
+ * config.h.in, Makefile.in: Add HTTP-DA and Opie/S-key support.
+ * md5.c, md5.h: New files.
+
+1998-02-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_process_range): Renamed from hprocrange().
+ (http_process_range): Parse the whole header.
+
+ * headers.c: New file.
+ (header_process): New function.
+ (header_get): Renamed from fetch_next_header.
+
+ * all: Include utils.h only where necessary.
+
+ * wget.h: Declare xmalloc(), xrealloc() and xstrdup() here.
+
+ * wget.h: Add provisions for dmalloc.
+
+1998-02-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b6 is released.
+
+1998-02-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_loop): Determine `filename' more precisely.
+
+ * init.c (setval): Don't set `opt.quiet' if output-document is
+ `-'.
+
+ * log.c (log_init): Print to STDERR instead of STDOUT.
+ (vlogmsg): Use STDERR by default.
+ (logflush): Ditto.
+
+1998-02-11 Simon Josefsson <jas@pdc.kth.se>
+
+ * host.c: Use addr_in again.
+
+1998-02-08 Karl Eichwalder <karl@suse.de>
+
+ * http.c (gethttp): Fixed typo.
+
+1998-02-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b5 is released.
+
+1998-02-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (show_progress): Use it.
+
+ * log.c (logflush): New function.
+
+ * wget.h: Utilize __attribute__ if on gcc.
+
+1998-02-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (base64_encode_line): New argument LENGTH.
+ (BASIC_AUTHENTICATION_ENCODE): Use it.
+ (BASIC_AUTHENTICATION_ENCODE): Take length of HEADER into account.
+
+ * main.c (main): Fixed fprintf() format mismatch.
+
+1998-02-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b4 is released.
+
+1998-02-03 Simon Josefsson <jas@pdc.kth.se>
+
+	* host.c: Use sockaddr_in instead of addr_in.
+
+1998-02-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (cleanup): Use it.
+
+ * recur.c (recursive_cleanup): New function.
+
+ * retr.c (retrieve_from_file): Ditto.
+
+ * main.c (main): Use it.
+
+ * recur.c (recursive_reset): New function.
+
+ * retr.c (retrieve_from_file): Ditto.
+
+ * main.c (main): Simplify call to recursive_retrieve().
+
+ * recur.c (recursive_retrieve): Removed FLAGS argument.
+
+ * http.c (gethttp): Changed call to iwrite().
+
+1998-02-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (get_urls_html): Ditto.
+ (free_urlpos): Ditto.
+ (mkstruct): Ditto.
+ (construct): Ditto.
+
+ * retr.c (retrieve_url): Move declaration of local variables to
+ smaller scope.
+
+ * url.c (urlproto): Use it.
+ (parseurl): Ditto.
+ (str_url): Ditto.
+ (get_urls_html): Ditto.
+
+ * utils.h (ARRAY_SIZE): New macro.
+
+ * url.c (proto): Moved from url.h.
+
+ * url.h (URL_CLEANSE): Reformatted.
+ (USE_PROXY_P): Renamed from USE_PROXY.
+
+ * ftp-basic.c: Adjust to the new interface of iwrite().
+
+ * ftp-basic.c (ftp_port): Use alloca().
+
+1998-02-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b3 is released.
+
+ * host.c (ftp_getaddress): Don't print to stderr directly.
+
+ * init.c (setbytes): Support `g' for gigabytes.
+ (cmdtype): New specification CTIME.
+ (setval): Use it with settime().
+ (commands): Use it for WAIT and TIMEOUT.
+
+1998-02-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (BASIC_AUTHENTICATION_ENCODE): New macro.
+ (gethttp): Use it.
+
+ * utils.c (unique_name_1): Moved from url.c.
+ (unique_name): Ditto.
+
+ * url.c (url_filename): Ditto.
+
+ * log.c (redirect_output): Changed call to unique_name().
+
+ * url.c (unique_name_1): Renamed from unique_name().
+ (unique_name): Changed interface.
+
+ * init.c (enum cmdid): Moved from init.h.
+ (cmdtype): Ditto.
+ (struct cmd): Ditto.
+
+ * main.c (main): Use it.
+ (main): Moved `--backups' to not have a short option.
+
+ * options.h (struct options): New member BACKGROUND.
+
+ * main.c (print_help): Rearranged.
+ (main): New long options for -n* short options: --no-directories,
+ --no-host-directories, --non-verbose, --no-host-lookup and
+ --dont-remove-listing.
+
+1998-02-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * main.c (main): Use log_close().
+
+ * log.c: New variable LOGFP.
+ (vlogmsg): Use it.
+ (redirect_output): Don't open /dev/null; set LOGFP to stdin
+ instead.
+ (log_close): New function.
+
+ * options.h (struct options): Removed LFILE.
+
+ * log.c (log_enable): Removed.
+
+ * main.c (main): Use it.
+
+ * log.c (log_init): New function.
+
+ * url.c (get_urls_html): Removed needless assignment to BASE.
+
+ * host.c (add_hlist): Don't set CMP needlessly.
+
+ * utils.c (match_backwards): Ditto.
+ (in_acclist): Ditto.
+
+ * url.c (findurl): Ditto.
+
+ * netrc.c (parse_netrc): Ditto.
+
+ * log.c (log_dump): Ditto.
+
+ * html.c (html_quote_string): Ditto.
+
+ * ftp-basic.c (ftp_request): Made static.
+
+ * connect.c: Made global variables static.
+
+ * url.c (construct): Ditto.
+
+ * init.c (init_path): Avoid assignment inside `if'-condition.
+
+ * ftp.c: Don't include in.h or winsock.h.
+
+ * ftp.c (ftp_loop): Use SZ.
+
+ * connect.c (bindport): Cast &addrlen to int *.
+ (conaddr): Ditto.
+
+ * init.c (initialize): Don't use SYSTEM_WGETRC unconditionally.
+
+1998-01-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Initialize opt.ftp_pass here.
+ (ftp_retrieve_dirs): Use alloca().
+
+ * init.c (defaults): Don't initialize opt.ftp_pass.
+
+ * sysdep.h (S_ISLNK): Declare for OS/2; ditto for lstat.
+ From Ivan F. Martinez <ivanfm@ecodigit.com.br>.
+
+1998-01-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (parse_robots): Check for comments more correctly.
+
+ * host.c (ftp_getaddress): Use STRDUP_ALLOCA.
+ (ftp_getaddress): Add diagnostics when reverse-lookup yields only
+ hostname.
+
+1998-01-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget 1.5-b2 is released.
+
+ * netrc.c (NETRC_FILE_NAME): Moved from netrc.h.
+
+ * utils.c (proclist): Pass FNM_PATHNAME to fnmatch().
+
+ * ftp-basic.c (ftp_pasv): Avoid unnecessary casting to unsigned
+ char.
+
+ * log.c: Don't attempt to hide arguments from ansi2knr.
+
+ * cmpt.c: Synched strptime() and mktime() with glibc-2.0.6.
+
+ * ansi2knr.c: Use a later version, from fileutils-3.16l alpha.
+
+ * ftp.c (getftp): Ditto.
+
+ * http.c (gethttp): Use it.
+
+ * retr.c (get_contents): New argument EXPECTED; pass it to
+ show_progress().
+ (show_progress): New argument EXPECTED; use it to display
+ percentages.
+
+ * init.c (setval): Ditto.
+
+ * http.c (gethttp): Ditto.
+ (http_loop): Ditto.
+
+ * ftp.c (getftp): Ditto.
+ (ftp_loop_internal): Ditto.
+
+ * ftp-ls.c (ftp_parse_unix_ls): Use abort() instead of assert(0).
+
+ * sysdep.h (CLOSE): Simplify; use DEBUGP.
+
+ * netrc.c (search_netrc): Use alloca().
+
+ * init.c (defaults): Initialize no_flush.
+
+ * log.c (vlogmsg): Don't flush if no_flush.
+
+ * options.h (struct options): New variable no_flush.
+
+ * main.c (main): Don't play games with buffering.
+
+ * log.c (vlogmsg): Flush the output after every message.
+
+1998-01-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (parse_line): Ditto.
+
+ * url.c (get_urls_html): Ditto.
+
+ * main.c (main): Don't cast to unsigned char.
+
+ * init.c (run_wgetrc): Don't cast to unsigned char.
+ (parse_line): Accept char instead of unsigned char.
+
+ * html.c (htmlfindurl): Use char instead of unsigned char.
+
+ * all: Use them.
+
+ * sysdep.h: Add wrappers to ctype macros to make them
+ eight-bit-clean:
+
+1998-01-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * html.c (htmlfindurl): Download <img lowsrc=...>.
+
+ * main.c (main): Ignore SIGPIPE.
+
+ * connect.c (select_fd): New argument WRITEP.
+ (iwrite): Call select_fd().
+
+1997-02-27 Fila Kolodny <fila@ibi.com>
+
+ * ftp.c (ftp_retrieve_list): If retrieving symlink and the proper
+ one already exists, just skip it.
+
+1998-01-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (gethttp): Cosmetic changes.
+
+ * http.c (check_end): Allow `+D...' instead of `GMT'.
+ From Fabrizio Pollastri <pollastri@cstv.to.cnr.it>.
+
+ * url.c (process_ftp_type): New function.
+ (parseurl): Use it.
+
+ * connect.c (iwrite): Allow writing in a few chunks.
+ (bindport): Made SRV static, so addr can point to it.
+ (select_fd): Removed HPUX kludge.
+
+ * host.c (free_hlist): Incorporated into clean_hosts().
+
+1998-01-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (hlist): Made static.
+ (search_address): Cosmetic change.
+
+1998-01-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget v1.5-b1 is released.
+
+ * http.c (hgetlen): Use sizeof() to get the header length.
+ (hgetrange): Ditto.
+ (hgettype): Ditto.
+ (hgetlocation): Ditto.
+ (hgetmodified): Ditto.
+ (haccepts_none): Ditto.
+
+ * main.c (main): Updated `--version' and `--help' output, as per
+ Francois Pinard's suggestions.
+
+ * main.c: Include locale.h; call setlocale(), bindtextdomain() and
+ textdomain().
+
+ * config.h.in: Define stubs for I18N3.
+
+ * wget.h: Include libintl.h.
+
+1998-01-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (mkstruct): Check for opt.cut_dirs.
+ (mkstruct): alloca()-te more, xmalloc() less.
+
+ * utils.c (load_file): Check for ferror().
+
+ * url.c (get_urls_file): Close only the files we opened.
+ (get_urls_html): Ditto.
+ (count_slashes): New function.
+
+ * http.h: Removed.
+
+ * http.c (gethttp): Respect username and password provided by
+ proxy URL.
+ (base64_encode_line): Write into an existing buffer instead of
+ malloc-ing a new one.
+ (struct http_stat): Moved from http.h
+
+ * retr.c (retrieve_url): Free SUF.
+
+ * all: Removed lots of unnecessary .h dependencies.
+
+ * html.c (global_state): Made static.
+
+ * utils.h (ALLOCA_ARRAY): New macro.
+
+ * main.c (main): New option `--cut-dirs'.
+
+ * url.c (construct): Use alloca() for T.
+
+ * utils.c (mkdirhier): Use STRDUP_ALLOCA.
+
+ * host.c (_host_t): Moved from host.h.
+ (struct host): Renamed from _host_t.
+ (store_hostaddress): Use STRDUP_ALLOCA for INET_S.
+ (realhost): Ditto.
+
+ * host.h: Don't include url.h.
+
+ * ftp.c (LIST_FILENAME): Moved from ftp.h.
+
+ * init.c (DEFAULT_FTP_ACCT): Moved from ftp.h.
+
+ * main.c (main): Enable log if the output goes to a TTY.
+
+ * connect.h: Removed unused constant `BACKLOG'.
+
+ * config.h.in: Check for isatty().
+
+ * Makefile.in (LINK): Use CFLAGS when linking.
+
+1998-01-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * mswindows.c (ws_hangup): Use redirect_output().
+
+ * main.c (redirect_output_signal): New function; use
+ redirect_output().
+
+ * log.c (redirect_output): New function, based on hangup(), which
+ is deleted.
+
+ * log.c (vlogmsg): New function.
+
+ * wget.h (DEBUGP): Use debug_logmsg().
+
+ * main.c (hangup): Use it.
+
+ * log.c (log_dump): New function.
+
+ * utils.h (DO_REALLOC): Use `long' for various sizes.
+
+ * http.c (hskip_lws): Use `while', for clarity.
+ (HTTP_DYNAMIC_LINE_BUFFER): New constant.
+ (fetch_next_header): Use it instead of DYNAMIC_LINE_BUFFER.
+
+ * ftp-basic.c (FTP_DYNAMIC_LINE_BUFFER): New constant.
+ (ftp_response): Use it instead of DYNAMIC_LINE_BUFFER.
+
+ * utils.c (DYNAMIC_LINE_BUFFER): Moved from utils.h.
+ (LEGIBLE_SEPARATOR): Ditto.
+ (FILE_BUFFER_SIZE): Ditto.
+
+ * retr.c (BUFFER_SIZE): Moved from retr.h.
+
+ * log.c: New file.
+ (logmsg): Moved from utils.c.
+ (debug_logmsg): New function.
+
+ * mswindows.h: Include it here.
+
+ * init.c: Ditto.
+
+ * utils.c: Don't include <windows.h>.
+
+1998-01-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (ftp_getaddress): Ditto.
+
+ * main.c (main): Use it.
+
+ * utils.h (STRDUP_ALLOCA): New macro.
+
+ * init.c: Prepend `wget: ' to error messages printed on stderr.
+
+ * utils.c (mkdirhier): Renamed from mymkdir.
+ (touch): Renamed from my_touch.
+ (pwd_cuserid): Renamed from my_cuserid().
+
+1998-01-24 Andy Eskilsson <andy.eskilsson@telelogic.se>
+
+ * utils.c (accdir): Process wildcards.
+ (proclist): New function.
+ (accdir): Use it to avoid code repetition.
+
+1998-01-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (parse_robots): Respect opt.useragent; use alloca().
+
+ * http.c (gethttp): Construct useragent accordingly.
+
+ * version.c: Changed version string to numbers-only.
+
+ * main.c (print_help): List all the options.
+
+ * mswindows.c (windows_main_junk): Initialize argv0 here.
+
+1998-01-24 Karl Heuer <kwzh@gnu.org>
+
+ * netrc.c (search_netrc): Initialize `l' only after processing
+ netrc.
+
+ * main.c (main): Don't trap SIGHUP if it's being ignored.
+
+1998-01-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * all: Use logmsg().
+
+ * utils.c (time_str): Moved from retr.c.
+ (logmsg): New function.
+ (logmsg_noflush): Ditto.
+
+ * rbuf.c: New file, moved buf_* functions here.
+
+ * ftp.c (ftp_expected_bytes): Moved from ftp-basic.c.
+
+ * ftp-basic.c (ftp_rest): Use prnum().
+
+ * ftp-basic.c: Ditto.
+
+ * ftp.c: Use the new reading functions and macros.
+
+ * retr.c (buf_initialize): New function.
+ (buf_initialized_p): Ditto.
+ (buf_uninitialize): Ditto.
+ (buf_fd): Ditto.
+
+ * http.c (fetch_next_header): Use the BUF_READCHAR macro for
+ efficiency.
+ (gethttp): Use alloca() where appropriate.
+
+ * retr.c (buf_readchar): Use it.
+ (buf_peek): Use rstreams.
+
+ * retr.h (BUF_READCHAR): New macro.
+
+ * init.c (home_dir): Rewritten for clarity.
+ (init_path): Ditto.
+
+ * mswindows.c (ws_backgnd): Made static.
+ (read_registry): Ditto.
+ (ws_cleanup): Ditto.
+ (ws_handler): Ditto.
+
+1998-01-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * alloca.c: New file.
+
+ * Makefile.in (ALLOCA): Define.
+
+ * mswindows.c (ws_help): Constify.
+ (ws_help): Use alloca.
+
+ * mswindows.c: Reformat.
+
+ * all: Added _(...) annotations for I18N snarfing and translation.
+
+ * host.c (ftp_getaddress): Nuke SYSINFO.
+ (ftp_getaddress): Don't use getdomainname().
+ (ftp_getaddress): Use uname(), where available.
+
+ * http.c (gethttp): Protect a stray fprintf().
+
+ * init.c (settime): New function.
+ (setval): Treat WAIT specially, allowing suffixes like `m' for
+ minutes, etc.
+
+1998-01-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (get_urls_html): Use alloca() for TEMP.
+
+1998-01-21 Jordan Mendelson <jordy@wserv.com>
+
+ * url.c (rotate_backups): New function.
+
+ * http.c (gethttp): Ditto.
+
+ * ftp.c (getftp): Rotate backups.
+
+1997-12-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * all: Renamed nmalloc(), nrealloc() and nstrdup() to xmalloc(),
+ xrealloc() and xstrdup(). Use the new functions.
+
+ * url.c (decode_string): Made static.
+ (has_proto): Ditto.
+ (parse_dir): Ditto.
+ (parse_uname): Ditto.
+ (mkstruct): Ditto.
+ (construct): Ditto.
+ (construct_relative): Ditto.
+
+ * retr.c (show_progress): Made static.
+
+ * recur.c (robots_url): Made static.
+ (retrieve_robots): Ditto.
+ (parse_robots): Ditto.
+ (robots_match): Ditto.
+
+ * main.h: Removed.
+
+ * main.c (printhelp): Made static.
+ (hangup): Ditto.
+
+ * init.c (comind): Made static.
+ (defaults): Ditto.
+ (init_path): Ditto.
+ (run_wgetrc): Ditto.
+ (onoff): Ditto.
+ (setonoff): Ditto.
+ (setnum): Ditto.
+ (myatoi): Ditto.
+ (getperms): Ditto.
+ (setbytes): Ditto.
+
+ * http.c (fetch_next_header): Made static.
+ (hparsestatline): Ditto.
+ (hskip_lws): Ditto.
+ (hgetlen): Ditto.
+ (hgetrange): Ditto.
+ (hgettype): Ditto.
+ (hgetlocation): Ditto.
+ (hgetmodified): Ditto.
+ (haccepts_none): Ditto.
+ (gethttp): Ditto.
+ (base64_encode_line): Ditto.
+ (mktime_from_utc): Ditto.
+ (http_atotm): Ditto.
+
+ * html.c (idmatch): Made static.
+
+ * host.c (search_host): Made static.
+ (search_address): Ditto.
+ (free_hlist): Ditto.
+
+ * ftp.c (getftp): Made static.
+ (ftp_loop_internal): Ditto.
+ (ftp_get_listing): Ditto.
+ (ftp_retrieve_list): Ditto.
+ (ftp_retrieve_dirs): Ditto.
+ (ftp_retrieve_glob): Ditto.
+ (freefileinfo): Ditto.
+ (delelement): Ditto.
+
+ * ftp-ls.c (symperms): Made static.
+ (ftp_parse_unix_ls): Ditto.
+
+ * connect.c (select_fd): Made static.
+
+ * utils.c (xmalloc): Renamed from nmalloc.
+ (xrealloc): Renamed from nrealloc.
+ (xstrdup): Renamed from nstrdup.
+
+ * getopt.c (exchange): Use alloca.
+
+ * mswindows.c (mycuserid): Use strncpy.
+
+ * New files mswindows.c, mswindows.h, sysdep.h; winjunk.c,
+ systhings.h, windecl.h and winjunk.h removed.
+
+ * mswindows.c (sleep): New function.
+
+ * utils.c: Include <windows.h> under Windows.
+
+1997-06-12 Darko Budor <dbudor@zesoi.fer.hr>
+
+ * url.h (URL_UNSAFE): Change default under Windows.
+
+ * retr.c (retrieve_from_file): Respect opt.delete_after.
+
+ * main.c (main): Call ws_help on Windows.
+
+ * winjunk.c (windows_main_junk): New function.
+
+ * main.c (main): Junk-process argv[0].
+
+ * http.c (mktime_from_utc): Return -1 if mktime failed.
+
+ * http.c (http_loop): Ditto.
+
+ * ftp.c (ftp_loop_internal): Change title on Windows when using a
+ new URL.
+
+ * winjunk.c (getdomainname): Lots of functions.
+
+1997-06-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * cmpt.c (strptime_internal): Handle years more correctly for
+ `%y'.
+
+1997-06-09 Mike Thomas <mthomas@reality.ctron.com>
+
+ * http.c (gethttp): Allocate enough space for
+ `Proxy-Authorization' header.
+
+1997-05-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Wget/1.4.5 is released.
+
+1997-05-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (get_contents): Check return value of fwrite more
+ carefully.
+
+1997-03-30 Andreas Schwab <schwab@issan.informatik.uni-dortmund.de>
+
+ * cmpt.c (strptime_internal) [case 'Y']: Always subtract 1900 from
+ year, regardless of century.
+
+1997-03-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (isfile): Use `lstat' instead of `stat'.
+
+1997-03-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (numdigit): Use explicit test.
+
+1997-03-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): Always use `url_filename' to get u->local.
+
+1997-03-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c: Recognize https.
+
+1997-03-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (recursive_retrieve): Lowercase just the host name.
+
+1997-03-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (get_urls_file): Use the correct test.
+ (get_urls_html): Ditto.
+
+1997-03-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * connect.c: Reverted addrlen to int.
+
+ * init.c (parse_line): Check for -1 instead of NONE.
+
+ * version.c: Changed version to 1.4.5.
+
+1997-02-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c: New option netrc.
+ (initialize): Don't parse .netrc.
+
+ * cmpt.c (recursive): Return rp.
+ (strptime_internal): Match the long strings first, the abbreviated
+ second.
+
+1997-02-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (check_end): New function.
+ (http_atotm): Use it.
+
+1997-02-13 gilles Cedoc <gilles@cedocar.fr>
+
+ * http.c (gethttp): Use them.
+
+ * init.c: New options proxy_user and proxy_passwd.
+
+1997-02-14 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_retrieve_list): Create links even if not relative.
+
+1997-02-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (recursive_retrieve): Lowercase the host name, if the
+ URL is not "optimized".
+
+ * host.c (realhost): Return l->hostname, even if it matches with
+ host.
+
+1997-02-10 Marin Purgar <pmc@asgard.hr>
+
+ * connect.c: Make addrlen size_t instead of int.
+ (conaddr): Ditto.
+
+1997-02-09 Gregor Hoffleit <flight@mathi.uni-heidelberg.DE>
+
+ * systhings.h: Define S_ISLNK on NeXT too.
+
+1997-02-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4.3.
+
+ * url.c: Further update to list of protostrings.
+ (skip_proto): Skip `//' correctly for FTP and HTTP.
+
+ * url.c (get_urls_html): Handle bogus `http:' things a little
+ differently.
+
+ * main.c (main): Removed `follow-ftp' from `f'.
+ (main): Dumped the `prefix-files' and `file-prefix' options and
+ features; old and bogus.
+ (main): Exit on failed setval() in `-e'.
+
+ * http.c (fetch_next_header): Use it to detect header continuation
+ correctly.
+
+ * retr.c (buf_peek): New function.
+
+1997-02-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.h: Include time.h and stuff.
+
+1997-02-08 Roger Beeman <beeman@cisco.com>
+
+ * ftp.c: Include <time.h>.
+
+1997-02-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (findurl): Would read over buffer limits.
+
+1997-02-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp-ls.c (ftp_parse_unix_ls): Allow spaces in file names.
+
+1997-02-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_atotm): Initialize tm.is_dst.
+
+1997-02-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (gethttp): Don't print the number of retrieved headers.
+
+ * main.c (main): New option `--no-clobber', alias for `-nc'.
+
+ * url.c: Recognize `https://'.
+
+1997-02-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (herrmsg): Don't use h_errno.
+
+1997-01-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (accept_domain): Use it.
+
+ * main.c (main): New option `--exclude-domains'.
+
+ * retr.c (retrieve_url): Use it.
+ (retrieve_url): Bail out when an URL is redirecting to itself.
+
+ * url.c (url_equal): New function.
+
+1997-01-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * connect.c: Include arpa/inet.h instead of arpa/nameser.h.
+
+ * http.c (mk_utc_time): New function.
+ (http_atotm): Use it; handle time zones correctly.
+
+1997-01-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c: Ditto.
+
+ * ftp-basic.c: Use it instead of WRITE.
+
+ * connect.c (iwrite): New function.
+
+1997-01-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * cmpt.c (mktime): New function.
+
+ * netrc.c: Include <sys/types.h>.
+
+ * main.c (main): Wouldn't recognize --spider.
+
+ * retr.c (rate): Use `B', `KB' and `MB'.
+ (reset_timer,elapsed_time): Moved from utils.c.
+
+ * ftp.c (ftp_retrieve_list): Ditto.
+
+ * http.c (http_loop): Don't touch the file if opt.dfp.
+
+1997-01-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * cmpt.c: New file.
+
+ * ftp.c (ftp_retrieve_glob): New argument semantics.
+ (ftp_retrieve_dirs): Use it.
+ (ftp_loop): Ditto.
+
+ * html.c (htmlfindurl): Recognize `'' as the quote char.
+
+1997-01-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_loop_internal): Use it.
+
+ * utils.c (remove_link): New function.
+
+1997-01-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (retrieve_url): Require STRICT redirection URL.
+
+ * url.c (parseurl): New argument STRICT.
+
+ * http.c (hparsestatline): Be a little bit less strict about
+ status line format.
+
+1997-01-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (gethttp): Use it.
+
+ * main.c (main): Don't use '<digit>' as options.
+
+ * init.c: New option ignore_length.
+
+ * http.c (gethttp): Ditto.
+ (http_loop): Check for redirection without Location:.
+ (gethttp): Don't print Length unless RETROKF.
+
+ * ftp.c (getftp): Use it.
+
+ * url.c (mkalldirs): New function.
+
+ * utils.c (mymkdir): Don't check for existing non-directory.
+
+ * url.c (mkstruct): Don't create the directory.
+
+1997-01-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (setval): Removed NO_RECURSION checks.
+
+1997-01-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4.3-pre2.
+
+ * recur.c (recursive_retrieve): Bypass host checking only if URL
+ is ftp AND parent URL is not ftp.
+
+ * ftp-basic.c (ftp_request): Print out Turtle Power.
+
+ * ftp.c (ftp_loop): Call ftp_retrieve_glob with 0 if there's no
+ wildcard.
+ (ftp_retrieve_glob): Call ftp_loop_internal even on empty list, if
+ not glob.
+
+ * http.c (gethttp): Be a little bit smarter about status codes.
+
+ * recur.c (recursive_retrieve): Always reset opt.recursive when
+ dealing with FTP.
+
+1997-01-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (retrieve_url): New variable location_changed; use it for
+ tests instead of mynewloc.
+ (retrieve_url): Allow heuristic adding of html.
+
+ * url.c (url_filename): Don't use the `%' in Windows file names.
+
+ * http.c (http_loop): Always time-stamp the local file.
+
+ * http.c (http_loop): Ditto.
+
+ * ftp.c (ftp_retrieve_list): Use it.
+
+ * utils.c (my_touch): New function.
+
+ * ftp.c (ftp_retrieve_list): Use #ifdef HAVE_STRUCT_UTIMBUF
+ instead of #ifndef NeXT.
+
+ * utils.c (strptime): New version, by Ulrich Drepper.
+
+1997-01-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (haccepts_none): Renamed from `haccepts_bytes'.
+ (gethttp): If haccepts_none(), disable ACCEPTRANGES.
+ (http_loop): Would remove ACCEPTRANGES.
+
+ * ftp.c (getftp): Call ftp_list with NULL.
+
+1997-01-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * html.c (ftp_index): Don't print minutes and seconds if we don't
+ know them; beautify the output.
+
+ * ftp.c (getftp): Don't close the socket on FTPNSFOD.
+
+1997-01-14 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (strptime): New function.
+ (strptime): Don't use get_alt_number.
+ (strptime): Don't use locale.
+ (match_string): Made it a function.
+
+1997-01-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_atotm): New function.
+ (http_loop): Use it.
+
+ * atotm.c: Removed from the distribution.
+
+ * http.c (base64_encode_line): Rewrite.
+
+1997-01-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Use ftp_expected_bytes; print size.
+
+ * ftp-basic.c (ftp_response): Use ftp_last_respline.
+ (ftp_expected_bytes): New function.
+
+ * ftp.c (getftp): Print the unauthoritative file length.
+
+ * ftp-ls.c: Renamed from ftp-unix.c.
+ (ftp_parse_ls): Moved from ftp.c.
+ (ftp_parse_unix_ls): Recognize seconds in time spec.
+ (ftp_parse_unix_ls): Recognize year-less dates of the previous
+ year.
+
+1997-01-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp-basic.c: Don't declare errno if #defined.
+
+ * host.c (ftp_getaddress): Check for sysinfo legally.
+
+1997-01-08 Darko Budor <dbudor@diana.zems.fer.hr>
+
+ * connect.c (iread): Use READ.
+
+1996-12-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c: Recognize finger, rlogin, tn3270, mid and cid as valid
+ schemes.
+
+1996-12-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (ftp_getaddress): Allow `.' in hostname.
+
+1996-12-26 Darko Budor <dbudor@zems.fer.hr>
+
+ * wget.h: New READ and WRITE macros for use instead of read and
+ write on sockets; grep READ *.c and grep WRITE *.c for the uses.
+
+ * wsstartup.c: New file; startup code for Winsock.
+
+ * wsstartup.h: New file.
+
+ * win32decl.h: New file; fixup for <errno.h> and Winsock trouble.
+
+ * configure.bat: Configure utility for MSVC.
+
+ * src/Makefile.ms, config.h.ms: New files for use with MSVC 4.x.
+
+1996-12-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4.3-pre.
+
+ * utils.c (prnum): Accept long.
+ (legible): Use prnum().
+
+ * connect.c (make_connection): Accept port as short.
+ (bindport): Ditto.
+
+ * http.c (gethttp): Use search_netrc.
+
+1996-12-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Use search_netrc.
+
+ * netrc.c (free_netrc): New function.
+
+ * init.c (home_dir): New function.
+
+ * url.c (convert_links): Allow REL2ABS changes.
+
+1996-12-21 Gordon Matzigkeit <gord@gnu.ai.mit.edu>
+
+ * netrc.c: New file.
+ (parse_netrc, maybe_add_to_list): New functions.
+
+1996-12-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (retrieve_url): Reset opt.recursion before calling
+ ftp_loop if it is reached through newloc.
+
+ * init.c (run_wgetrc): Print the wgetrc path too, when reporting
+ error; don't use "Syntax error", since we don't know if it is
+ really a syntax error.
+
+1996-12-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (acceptable): Extract the filename part of the path.
+
+ * recur.c (recursive_retrieve): Call acceptable() with the right
+ argument; would bug out on wildcards.
+
+ * init.c (parse_line): Likewise.
+
+ * html.c (htmlfindurl): Cast to char * when calling stuff.
+
+1996-12-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Use ftp_pasv.
+
+ * ftp-basic.c (ftp_request): Accept NULL value.
+ (ftp_pasv): New function.
+
+ * options.h (struct options): Add passive FTP option.
+
+1996-12-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (parseurl): Debug output.
+
+ * utils.c (path_simplify): New one, adapted from bash's
+ canonicalize_pathname().
+
+1996-12-14 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Don't discard the buffer.
+
+ * retr.c (get_contents): New parameter nobuf.
+
+1996-12-13 Shawn McHorse <riffraff@txdirect.net>
+
+ * html.c (htmlfindurl): Recognize <meta contents="d; URL=...".
+
+ * init.c (setval): Strip the trailing slashes on CVECDIR.
+
+1996-12-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c: Make excludes and includes under CVECDIR instead of
+ CVEC.
+
+1996-12-13 Shawn McHorse <riffraff@txdirect.net>
+
+ * url.c (get_urls_html): Skip "http:".
+
+1996-12-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (strcasecmp): From glibc.
+ (strncasecmp): Also.
+ (strstr): Also.
+
+ * url.c: Added javascript: to the list of URLs prefixes.
+
+1996-12-12 Shawn McHorse <riffraff@txdirect.net>
+
+ * recur.c (retrieve_robots): Print the warning message only if
+ verbose.
+
+1996-12-12 Gregor Hoffleit <flight@mathi.uni-heidelberg.DE>
+
+ * ftp.c (ftp_retrieve_list): Use NeXT old utime interface.
+
+1996-12-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * systhings.h: New file.
+
+ * ../configure.in: Check for utime.h.
+
+ * ftp.c: Check whether we have unistd.h.
+
+1996-11-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (recursive_retrieve): Send the canonical URL as referer.
+ (recursive_retrieve): Call get_urls_html with the canonical URL.
+
+1996-12-13 Kaveh R. Ghazi <ghazi@caip.rutgers.edu>
+
+ * (configure.in, config.h.in, src/Makefile.in, src/*.[ch]): Add
+ ansi2knr support for compilers which don't support ANSI style
+ function prototypes and signatures.
+
+ * (aclocal.m4, src/ansi2knr.c, src/ansi2knr.1): New files.
+
+1996-11-26 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c: Use it; Recognize paths ending with "." and ".." as
+ directories.
+ (url_filename): Append .n whenever file exists and could be a
+ directory.
+
+ * url.h (ISDDOT): New macro.
+
+ * init.c (parse_line): Use unsigned char.
+
+ * url.c (get_urls_html): Cast to unsigned char * when calling
+ htmlfindurl.
+
+ * html.c (htmlfindurl): Use unsigned char.
+
+ * version.c: Changed version to 1.4.3.
+
+1996-11-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4.2.
+
+ * ftp.c (getftp): Simplified assertion.
+ (ftp_loop_internal): Remove symlink before downloading.
+ (ftp_retrieve_list): Unlink the symlink name before attempting to
+ create a symlink!
+
+ * options.h (struct options): Renamed print_server_response to
+ server_response.
+
+ * ftp.c (rel_constr): Removed.
+ (ftp_retrieve_list): Don't use it.
+ (ftp_retrieve_list): Use opt.retr_symlinks.
+
+1996-11-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * main.c (main): New option retr_symlinks.
+
+ * url.c (convert_links): Print verbose message.
+
+1996-11-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): Reset newloc in the beginning of function;
+ would cause FMR in retrieve_url.
+
+1996-11-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (convert_all_links): Find the URL of each HTML document,
+ and feed it to get_urls_html; would bug out.
+ (convert_all_links): Check for l2 instead of dl; removed dl.
+
+ * url.c (convert_links): Don't refer to freed newname.
+
+ * recur.c (recursive_retrieve): Add this_url to urls_downloaded.
+
+ * main.c (main): Print the OS_TYPE in the debug output, too.
+
+ * recur.c (recursive_retrieve): Check for opt.delete_after.
+
+ * main.c (main): New option delete-after.
+
+ * init.c (setval): Cleaned up.
+
+1996-11-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (wget): Make `wget' the default target.
+
+ * ftp.c (ftp_loop_internal): Move noclobber checking out of the
+ loop.
+ (ftp_retrieve_list): Warn about non-matching sizes.
+
+ * http.c (http_loop): Made -nc non-dependent on opt.recursive.
+
+ * init.c (setnum): Renamed from setnuminf; New argument flags.
+ (setval): Use it.
+
+ * main.c (main): Sorted the options.
+ (main): New option --wait.
+
+1996-11-21 Shawn McHorse <riffraff@txdirect.net>
+
+ * html.c (htmlfindurl): Reset s->in_quote after getting out of
+ quotes.
+
+1996-11-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Changed version to 1.4.2.
+
+1996-11-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4.1.
+
+ * html.c (html_quote_string): New function.
+ (ftp_index): Use it.
+ (htmlfindurl): A more gentle ending debug message.
+
+ * ftp.c (ftp_loop): Check for opt.htmlify.
+
+ * init.c: New command htmlify.
+
+ * ftp.c (getftp): Nicer error messages, with `'-encapsulated
+ strings.
+ (ftp_loop): Print size of index.html.
+
+ * init.c (setval): Implement "styles".
+
+ * main.c (main): New option dotstyle.
+
+1996-11-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Close the master socket in case of errors, after
+ bindport().
+
+ * connect.c (bindport): Initialize msock to -1.
+
+ * ftp.c (getftp): Initialize dtsock to -1.
+
+ * connect.c (closeport): Don't close sock if sock == -1.
+
+1996-11-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (setnuminf): Nuked default value -- just leave unchanged.
+ (setval): Don't send default values.
+ (defaults): Use DEFAULT_TIMEOUT -- aaargh.
+
+ * options.h (struct options): Use long for dot_bytes.
+
+ * init.c (setquota): Renamed to setbytes.
+ (setval): Use setbytes on DOTBYTES.
+
+1996-11-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Initialize con->dltime.
+
+ * recur.c (recursive_retrieve): Use same_host instead of
+ try_robots; simply load robots_txt whenever the host is changed.
+ (recursive_retrieve): Free forbidden before calling parse_robots.
+
+1996-11-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (show_progress): Use them.
+
+ * options.h (struct options): New options dot_bytes, dots_on_line
+ and dot_spacing.
+
+1996-11-16 Mark Boyns <boyns@sdsu.edu>
+
+ * recur.c (recursive_retrieve): Retrieve directories regardless of
+ acc/rej rules; check for empty u->file.
+
+1996-11-14 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (setval): Use it.
+
+ * utils.c (merge_vecs): New function.
+
+ * init.c (setval): Reset the list-type functions when encountering
+ "".
+
+1996-11-14 Shawn McHorse <riffraff@txdirect.net>
+
+ * recur.c (recursive_retrieve): Use base_url instead of this_url
+ for no_parent.
+
+1996-11-14 Shawn McHorse <riffraff@txdirect.net>
+
+ * html.c (htmlfindurl): Reset s->in_quote after exiting the quote.
+
+1996-11-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c (sepstring): Rewrote; don't use strtok.
+
+ * recur.c (recursive_retrieve): Enter assorted this_url to slist
+ when running the first time.
+ (retrieve_robots): Warn to ignore errors when robots are loaded.
+
+ * utils.c (load_file): Moved from url.c.
+
+ * http.c: Made static variables const too in h* functions.
+
+ * main.c (main): Renamed --continue-ftp to --continue.
+
+ * recur.c (recursive_retrieve): Use it.
+
+ * utils.c (frontcmp): New function.
+
+ * url.c (accdir): New function.
+
+ * html.c (htmlfindurl): Recognize <area href=...>.
+
+ * ftp.c (ftp_retrieve_dirs): Implemented opt.includes.
+
+ * init.c (setval): Free the existing opt.excludes and
+ opt.includes, if available.
+
+ * main.c (main): New option -I.
+
+1996-11-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_retrieve_glob): Do not weed out directories.
+
+ * version.c: Changed version to 1.4.1.
+
+1996-11-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4.0.
+
+1996-11-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * main.c (main): Free com and val after parse_line.
+ (printhelp): Reorder the listing.
+
+ * http.c: More robust header parsing.
+
+ * http.c: Allow any number of spaces, or no spaces, to precede ':'.
+ (hskip_lws): New function.
+ (haccepts_bytes): New function.
+ (gethttp): Use it.
+
+ * init.c (setval): Check header sanity.
+ (setval): Allow resetting of headers.
+
+1996-11-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): Don't use has_wildcards.
+
+ * http.c (gethttp): Free all_headers -- would leak.
+
+ * recur.c (recursive_retrieve): Initialize depth to 1 instead of
+ 0 -- this fixes a long-standing bug in -rl.
+
+1996-11-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c: Use -1 as "impossible" value for con->fd.
+
+ * url.h (URL_SEPARATOR): Don't treat `*' and `+' as separators.
+
+ * init.c (parse_line): Use isalpha.
+
+ * ftp-unix.c: Use HAVE_UNISTD_H.
+
+ * mtch.c (has_wildcards): Don't match \.
+
+ * http.c (http_loop): Warn on HTTP wildcard usage.
+
+1996-11-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (url_filename): Do not create numbered suffixes if
+ opt.noclobber -- would bug out on -nc.
+
+1996-11-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (parse_robots): Don't chuck out the commands without
+ arguments (`Disallow:<empty>' didn't work).
+ (parse_robots): Compare versions lowercase.
+ (parse_robots): Match on base_version, not version_string!
+ (parse_robots): Handle comments properly.
+ (parse_robots): Match versions in a sane way.
+
+ * init.c: Print nicer error messages.
+
+ * version.c: Changed version to 1.4.0.
+
+1996-11-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4.0-test2.
+
+ * init.c (run_wgetrc): Close fp.
+
+ * ftp.c (ftp_retrieve_dirs): Allocate the correct length for
+ u->dir.
+
+1996-11-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c (setquota): Allow inf as quota specification.
+
+1996-11-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_retrieve_dirs): Return QUOTEXC if quota exceeded.
+ (ftp_retrieve_glob): Return QUOTEXC on quota exceeded.
+
+ * main.c (main): Check for quota by comparison with downloaded
+ stuff, not from status.
+
+ * connect.c (select_fd): Should compile on HPUX without warnings now.
+
+ * ftp.c (ftp_get_listing): Check whether ftp_loop_internal
+ returned RETROK.
+
+1996-11-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_retrieve_glob): Print the pattern nicely.
+ (getftp): Return FTPRETRINT on control connection error.
+
+ * html.c (htmlfindurl): Recognize <embed src=...> and
+ <bgsound src=...>.
+ (ftp_index): Handle username and password correctly.
+
+ * main.c (main): Made `-np' a synonym for --no-parent.
+
+1996-11-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_loop): Check for opt.ftp_glob too before calling
+ ftp_retrieve_glob.
+
+ * version.c: Changed version to 1.4.0-test2.
+
+1996-11-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4.0-test1.
+
+ * url.c (str_url): Don't use sprintf when creating %2F-prefixed
+ directory.
+ (convert_links): Removed definition of make_backup.
+
+ * http.h: Removed definition of MAX_ERROR_LENGTH.
+
+ * host.c (ftp_getaddress): Check for "(none)" domains.
+
+ * ftp.c (ftp_retrieve_dirs): Docfix.
+
+ * http.c (gethttp): Use ou->referer instead of u->referer.
+
+ * retr.c (retrieve_url): Reset u to avoid freeing pointers twice;
+ this was known to cause coredumps on Linux.
+
+ * html.c (ftp_index): Cast the argument to local_time to time_t *.
+
+1996-11-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * connect.c (select_fd): Use exceptfds -- once and for all.
+
+ * retr.c (retrieve_from_file): Free filename after
+ recursive_retrieve.
+ (retrieve_from_file): Send RFIRST_TIME to recursive_retrieve on
+ first-time retrieval.
+ (retrieve_from_file): Return uerr_t; new argument, count.
+ (retrieve_from_file): Break on QUOTEXC.
+
+ * init.c (setquota): Fixed a bug that caused rejection of
+ non-postfixed values.
+
+1996-10-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Changed name to wget.
+
+ * connect.c (iread): Smarter use of select.
+ (select_fd): Set errno on timeout. If not timeout, return 1
+ instead of 0.
+
+1996-10-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_loop_internal): Don't use con->cmd before
+ establishing it.
+
+1996-10-26 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (gethttp): Send correct referer when using proxy.
+ (gethttp): Use struct urlinfo ou to access the relevant data; send
+ correct authorization in all cases.
+
+ * host.c (same_host): Use skip_uname to skip username and
+ password.
+
+ * url.c (skip_uname): New function.
+ (parseurl): Use it.
+
+ * host.c (same_host): Do not assume HTTP -- same_host should now
+ be totally foolproof.
+
+ * url.c (skip_proto): New function.
+ (parse_uname): Use it.
+
+ * http.c (gethttp): Create local user and passwd from what is
+ given.
+
+ * url.c (parseurl): Check for HTTP username and password too.
+
+1996-10-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * config.h.in: Removed #define gethostbyname R...
+
+1996-10-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Changed version to 1.4.0-test1.
+
+1996-10-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b29.
+
+ * recur.c (recursive_retrieve): Check for no_parent.
+
+ * init.c (setval): Option update.
+
+ * main.c (main): New option no-parent.
+
+ * options.h (struct options): New variable no_parent.
+
+ * recur.c (recursive_retrieve): Only files are checked for
+ opt.accepts and opt.rejects.
+ (recursive_retrieve): Check directories for opt.excludes.
+ (recursive_retrieve): Make the dir absolute when checking
+ opt.excludes.
+
+ * html.c (htmlfindurl): Recognize <applet code=...> and <script
+ src=...>
+
+1996-10-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (getftp): Do not line-break assert entries at all.
+ (ftp_retrieve_dirs): Docfix.
+
+ * connect.c (select_fd): Use fd + 1 as nfds.
+
+ * version.c: Changed version to 1.4b29.
+
+1996-10-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b28.
+
+ * ftp.c (ftp_loop_internal): Check whether f->size == len and
+ don't continue the loop if it is.
+ (ftp_get_listing): Remove list_filename on unsuccessful loop.
+
+1996-10-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_loop_internal): Use strcpy to initialize tmp.
+ (getftp): Do not use multiline assert.
+
+ * http.c (hparsestatline): Use mjr and mnr instead of major and
+ minor, which don't compile on Ultrix.
+ (http_loop): Use strcpy() to initialize tmp.
+
+ * all: Geturl -> Fetch
+
+1996-10-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (parse_robots): Fixed an off-by-one bug when looking for
+ ':'.
+
+ * html.c (htmlfindurl): Fixed several possible off-by-one bugs by
+ moving `bufsize &&' to the beginning of each check in for-loops.
+
+ * recur.c (parse_robots): Close fp on exit.
+
+ * url.c (mymkdir): Check for each directory before creating.
+
+1996-10-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Changed version to 1.4b28.
+
+1996-10-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b27.
+
+ * init.c (parse_line): Use isspace.
+ (parse_line): Free *com on all errors.
+
+ * ftp.c (ftp_loop): Change FTPOK to RETROK before exiting.
+ (delelement): Use next instead of f->next and prev instead of
+ f->prev.
+ (delelement): Free the members of the deleted element.
+
+ * http.c (http_loop): Do not return RETROK on code != 20x.
+
+ * init.c (cleanup): Free opt.user_header.
+ (cleanup): Free opt.domains.
+
+ * url.c (freelists): Moved to cleanup().
+
+ * http.c (hparsestatline): Docfix.
+
+ * main.c (main): Return with error status on unsuccessful
+ retrieval.
+
+ * init.c (setval): Do not remove listing when mirroring.
+
+ * url.c (url_filename): Use opt.fileprefix.
+
+ * ftp.c (ftp_get_listing): Use url_filename to get filename for
+ .listing.
+
+ * main.c (main): New option: -rn.
+
+1996-10-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (RM): Added RM = rm -f.
+
+ * host.c (clean_hosts): New function.
+ (free_hlist): Just free the list, no reset.
+
+ * version.c: Changed version to 1.4b27.
+
+1996-10-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b26.
+
+ * retr.c (retrieve_from_file): Call get_urls_html with
+ opt.spider to make it silent in spider mode.
+
+ * url.c (str_url): Use CLEANDUP instead of URL_CLEANSE.
+
+ * url.h (CLEANDUP): New macro.
+
+ * http.c (gethttp): Fixed a bug that freed location only when it
+ was NULL.
+
+ * retr.c (retrieve_url): Free url if it will not be stored,
+ i.e. newloc is NULL.
+
+ * html.c (htmlfindurl): Handle exiting from quotes correctly; the
+ old version would bug out on <a href="x#a"href="y">.
+
+ * html.h (state_t): New member in_quote.
+
+ * html.c (htmlfindurl): Free s->attr at the beginning of
+ attr-loop.
+
+ * recur.c (recursive_retrieve): Recognize RCLEANUP.
+ (tried_robots): Make hosts a global variable.
+ (recursive_retrieve): Free constr after URL host optimization.
+ (tried_robots): Free urlinfo before exiting.
+
+ * utils.c (free_slist): New function.
+
+ * recur.c (recursive_retrieve): Use flags to add cleanup
+ possibility.
+
+ * main.c (main): Free filename after recursive_retrieve.
+
+ * http.c (gethttp): Store successful responses too.
+
+1996-10-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * all: Constified the whole source. This required some minor
+ changes in many functions in url.c, possibly introducing bugs -- I
+ hope not.
+
+ * ftp-basic.c: Removed last_respline.
+
+ * http.c (gethttp): Free type.
+
+ * host.c (same_host): Free real1 and real2.
+
+ * main.c (main): New option --spider.
+
+ * retr.c (get_contents): Don't reset errno.
+
+ * main.c (main): Sorted the options.
+
+ * connect.c (iread): Set errno to ETIMEDOUT only if it was left
+ uninitialized by select().
+
+ * http.c (http_loop): Print the time when the connection is
+ closed.
+ (gethttp): Debug-print the HTTP request.
+
+1996-10-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * connect.c (iread): Do not try reading after timeout.
+
+ * main.c (main): Would bug out on -T.
+
+ * connect.c (select_fd): Do not use exceptfds.
+ (iread): Set ETIMEDOUT on select_fd <= 0.
+
+ * version.c: Changed version to 1.4b26.
+
+1996-10-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b25.
+
+ * ftp-unix.c (ftp_parse_unix_ls): Ignore lines without file name
+ or link name.
+
+ * http.c (gethttp): Add errcode to struct hstat.
+ (http_loop): Use it.
+
+ * url.c (no_proxy_match): Simplify using char** for no_proxy.
+
+ * options.h (struct options): Make opt.no_proxy a vector.
+
+ * utils.c (sepstring): Use !*s instead of !strlen(s).
+
+ * init.c (setval): Set opt.maxreclevel to 0 on --mirror.
+ (getperms): Use ISODIGIT instead of isdigit.
+
+ * ftp.c (getftp): Print time.
+
+ * main.c (main): Use legible output of downloaded quantity.
+
+ * ftp.c (getftp): Use elapsed_time().
+ (ftp_loop_internal): Use rate().
+
+ * http.c (http_loop): Add download ratio output; Use rate().
+
+ * utils.c (rate): New function.
+
+1996-10-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): Use timer.
+
+ * ftp.c: Split to ftp-basic.c and ftp.c
+
+ * utils.c (reset_timer): New function.
+ (elapsed_time): New function.
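
A reset_timer/elapsed_time pair needs nothing beyond a remembered start time. A one-second-resolution sketch (the actual implementation and its resolution are not given by the entry above):

```c
#include <time.h>

static time_t timer_start;

/* Remember the current time, so elapsed_time can later report
   how long an operation took.  */
static void
reset_timer (void)
{
  timer_start = time (NULL);
}

/* Seconds elapsed since the last call to reset_timer.  */
static long
elapsed_time (void)
{
  return (long) (time (NULL) - timer_start);
}
```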
+
+ * retr.c (show_progress): Make bytes_in_line and offs long; should
+ work on 16-bit machines.
+
+1996-10-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (in_acclist): New argument backward.
+
+ * ftp.c (ftp_retrieve_glob): Use acceptable() to determine whether
+ a file should be retrieved according to suffix.
+ (ftp_get_listing): Check the return value of unlink; Do not call
+ ftp_retrieve_dirs if depth reached maxreclevel.
+ (ftp_retrieve_dirs): Check whether the directory is in
+ exclude-list.
+
+ * main.c (main): Print the version number at the beginning of
+ DEBUG output.
+ (main): Use strrchr when creating exec_name.
+
+ * ftp.c (ftp_retrieve_glob): Do not close control connection.
+
+ * version.c: Changed version to 1.4b25.
+
+1996-10-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b24.
+
+ * Makefile.in: Rewrite.
+
+ * ftp.c (ftp_loop_internal): Likewise.
+
+ * retr.c (time_str): Check for failed time().
+
+ * html.c (htmlfindurl): Recognize <fig src> and <overlay src> from
+ HTML3.0.
+
+ * retr.c (time_str): Return time_t *.
+
+ * connect.c (bindport): Close msock on unsuccessful bind.
+ (bindport): The same for getsockname and listen.
+
+ * retr.c (retrieve_url): Allow any number of retries on
+ proxy.
+
+ * http.c (gethttp): Do not treat errno == 0 as timeout.
+ (http_loop): Likewise.
+ (http_loop): Cosmetic changes.
+
+ * connect.c (iread): Set errno to ETIMEDOUT in case of timeout.
+
+ * retr.c (get_contents): Reset errno.
+
+ * ftp.c (getftp): Minor fixes.
+
+1996-10-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c: Do not use backups.
+
+ * geturl.1 (WARNING): Warn that the man page could be obsolete.
+
+ * getopt.c (getopt_long): Moved to getopt.c
+
+ * geturl.texi: Enhanced.
+
+ * main.c (main): Use it.
+
+ * recur.c (convert_all_links): New function.
+
+ * utils.c (add_slist): New argument flags.
+
+ * recur.c (recursive_retrieve): Update a list of downloaded URLs.
+ (parse_robots): Do not chuck out empty value fields.
+ (parse_robots): Make yourself welcome on empty Disallow.
+
+ * version.c: Changed version to 1.4b24.
+
+1996-10-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b23.
+
+ * ftp.c (ftp_loop_internal): Get the time after getftp.
+
+ * Makefile.in (install.info): New target.
+ (install): Use it.
+
+ * http.c (http_loop): Fix output when doing -O.
+
+1996-10-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi: New file.
+
+ * main.c (main): Do not print the warnings and download summary if
+ opt.quiet is set.
+
+ * version.c: Changed version to 1.4b23.
+
+1996-10-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b22.
+
+ * atotm.c (atotm): Use True and False instead of TRUE and FALSE,
+ to avoid redefinition warnings.
+
+ * host.c (store_hostaddress): Use memcpy() to copy the address
+ returned by inet_addr.
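
The memcpy() change is about copying the binary address returned by inet_addr() byte by byte instead of through a pointer cast, which avoids alignment and type-punning trouble. A hypothetical sketch (the real store_hostaddress signature may differ):

```c
#include <string.h>
#include <arpa/inet.h>

/* Parse a dotted-quad HOSTNAME and copy the resulting 4-byte address
   into WHERE with memcpy.  Return 1 on success, 0 on a bad address.  */
static int
store_hostaddress (unsigned char *where, const char *hostname)
{
  in_addr_t addr = inet_addr (hostname);
  if (addr == (in_addr_t) -1)
    return 0;
  memcpy (where, &addr, sizeof addr);
  return 1;
}
```

Note that inet_addr cannot represent 255.255.255.255, since that value collides with its error return; the limitation is inherent to the API.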
+
+ * version.c: Changed version to 1.4b22.
+
+1996-10-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b21.
+
+ * ftp-unix.c (ftp_parse_ls): Renamed to ftp_parse_unix_ls.
+
+ * ftp.c (ftp_port): Use conaddr.
+ (getftp): Print the file length.
+ (ftp_retrieve_list): Check the stamps of plain files only.
+
+ * connect.c (closeport): Do not call shutdown().
+ (conaddr): New function.
+
+ * html.c (ftp_index): Made it dfp-aware.
+
+ * init.c (cleanup): New name of freemem. Close opt.dfp.
+
+ * ftp.c (getftp): Use opt.dfp if it is set.
+
+ * ftp-unix.c (ftp_parse_ls): Recognize time in h:mm format.
+
+ * ftp.c (ftp_retrieve_dirs): Fixed a bug that caused incorrect
+ CWDs to be sent with recursive FTP retrievals.
+
+1996-10-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * recur.c (parse_robots): Made it more compliant with "official"
+ specifications.
+
+ * http.c: New function.
+
+ * ftp-unix.c (ftp_parse_ls): Added better debug output.
+
+ * ftp.c (getftp): Print out the LIST in case of
+ opt.print_server_response.
+
+ * version.c: Changed version to 1.4b21.
+
+1996-10-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b20.
+
+ * README: Update.
+
+ * http.c (gethttp): Preset lengths of various headers instead of
+ calculating them dynamically.
+ (gethttp): Check for 206 partial contents.
+
+1996-09-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * configure.in: Set SYSTEM_GETURLRC to $libdir/geturlrc
+
+ * http.c (gethttp): Send the port number in the Host: header.
+
+1996-09-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (gethttp): Send host: header.
+ (gethttp): Add the possibility of user-defined headers.
+ (gethttp): Move decision about pragma: no-cache to http_loop,
+ where it belongs.
+ (gethttp): Pass a struct instead of enormous argument list.
+ (http_loop): Use a new, fancier display format.
+ (ftp_loop): Likewise.
+
+ * main.c (hangup): Turn off buffering of the new log file.
+
+ * install-sh: Likewise.
+
+ * config.sub: Replace with the one in autoconf-2.10
+
+ * geturl.1: Update.
+
+ * init.c: New options httpuser and httppasswd.
+
+ * http.c (base64_encode_line): New function.
+ (gethttp): Send authentication.
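
Basic authentication sends "user:password" base64-encoded in the Authorization header. A self-contained sketch of the standard encoding (base64_encode_line itself is not shown in this log, so its exact interface here is an assumption):

```c
/* Encode LENGTH bytes of S into base64, writing the NUL-terminated
   result to STORE, which must hold at least 4 * ((LENGTH + 2) / 3) + 1
   bytes.  Standard alphabet and '=' padding per the MIME rules.  */
static void
base64_encode (const unsigned char *s, int length, char *store)
{
  static const char tbl[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  char *q = store;
  int i;

  for (i = 0; i + 2 < length; i += 3)
    {
      *q++ = tbl[s[i] >> 2];
      *q++ = tbl[((s[i] & 3) << 4) | (s[i + 1] >> 4)];
      *q++ = tbl[((s[i + 1] & 0xf) << 2) | (s[i + 2] >> 6)];
      *q++ = tbl[s[i + 2] & 0x3f];
    }
  if (length - i == 1)              /* one leftover byte: "XX==" */
    {
      *q++ = tbl[s[i] >> 2];
      *q++ = tbl[(s[i] & 3) << 4];
      *q++ = '=';
      *q++ = '=';
    }
  else if (length - i == 2)         /* two leftover bytes: "XXX=" */
    {
      *q++ = tbl[s[i] >> 2];
      *q++ = tbl[((s[i] & 3) << 4) | (s[i + 1] >> 4)];
      *q++ = tbl[(s[i + 1] & 0xf) << 2];
      *q++ = '=';
    }
  *q = '\0';
}
```

The header is then "Authorization: Basic " followed by the encoded pair.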
+
+ * connect.c (make_connection): Use store_hostaddress.
+
+1996-09-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (store_hostaddress): New function.
+
+ * NEWS: Update.
+
+ * http.c (hgetrange): New function.
+ (gethttp): Use ranges.
+
+ * utils.c (numdigit): Accept long instead of int.
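
numdigit takes long now because downloaded byte counts overflow a 16-bit int. The function can be as simple as (a sketch; close to, but not necessarily, the original):

```c
/* Return the number of decimal digits needed to print the
   non-negative number A.  */
static int
numdigit (long a)
{
  int res = 1;
  while ((a /= 10) != 0)
    ++res;
  return res;
}
```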
+
+ * http.c (http_loop): Add restart capabilities.
+
+ * ftp.c (ftp_retrieve_glob): Fixed a bug that could cause matchres
+ being used uninitialized.
+ (ftp_retrieve_list): Similar fix.
+
+ * host.c (add_hlist): Fixed a bug that could cause cmp being used
+ uninitialized.
+
+ * url.c (construct_relative): New function.
+
+ * recur.c (recursive_retrieve): Use it.
+
+ * retr.c (convert_links): New function.
+
+1996-09-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (free_urlpos): New function.
+
+ * recur.c (recursive_retrieve): Adapt.
+
+ * url.c (get_urls_html): Return a linked list instead of a vector.
+
+ * url.c (get_urls_file): Return a linked list instead of a vector.
+
+ * geturl.1: Update.
+
+ * http.c (gethttp): Implement it.
+
+ * init.c (setval): New option: SAVEHEADERS
+
+ * ftp.c (ftp_loop_internal): Do not set restval if listing is to
+ be retrieved. Lack of this test caused bugs when the connection
+ was lost during listing.
+
+ * retr.c (retrieve_url): Fixed a bug that caused
+ coredumps. *newloc is now reset by default.
+ (retrieve_url): Lift the twenty-tries limit on proxies.
+
+ * version.c: Changed version to 1.4b20.
+
+1996-09-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b19.
+
+1996-09-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_loop_internal): Renamed from ftp_1fl_loop.
+ (getftp): Changed prototype to accept ccon *.
+
+1996-09-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_retrieve_list): Fixed a bug that caused setting
+ incorrect values to files pointed to by symbolic links.
+ (ftp_1fl_loop): Do not count listings among the downloaded URLs.
+
+1996-09-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (mkstruct): Do not prepend "./" in front of a pathname.
+
+ * main.c (main): New option: --user-agent.
+
+ * geturl.1: Ditto.
+
+ * init.h: Ditto.
+
+ * init.c (setval): Ditto.
+
+ * main.c (main): Rename "server-headers" to "server-response".
+
+ * ftp-unix.c (ftp_parse_ls): Check for asterisks at the end of
+ executables in 'ls -F' listings.
+
+1996-09-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (parseurl): Remove realloc() and sprintf().
+ (str_url): Get rid of sprintf().
+
+ * recur.c (recursive_retrieve): Enable FTP recursion through proxy
+ servers.
+
+ * url.h (URL_CLEANSE): Made it else-resistant.
+ (USE_PROXY): New macro.
+
+1996-09-14 Hrvoje Niksic <hniksic@srce.hr>
+
+ * NEWS: Update.
+
+ * version.c: Changed version to 1.4b19.
+
+1996-09-14 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b18.
+
+ * url.c: Made it reallocate space exponentially.
+
+1996-09-14 Drazen Kacar <dave@fly.cc.fer.hr>
+
+ * html.c (htmlfindurl): Added <frame src> and <iframe src> among
+ the list of stuff to fetch.
+
+1996-09-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (get_urls_html): Fixed a bug that caused SIGSEGV's with
+ -Fi.
+
+ * html.c (htmlfindurl): Rewrite.
+
+ * http.c (gethttp): Use opt.proxy_cache.
+
+ * main.c (main): Added --cache option.
+
+ * ftp.c (ftp_response): Print server response if
+ opt.print_server_response is set.
+ (getftp): Print newlines after each request if the server response
+ is to be printed.
+ (ftp_response): Copy the last response line to last_respline.
+
+ * http.c (gethttp): Add Pragma: nocache for retried
+ proxy-retrievals.
+
+ * ftp.c (getftp): Use it.
+
+ * retr.c (buf_discard): New function.
+
+ * ftp.c (ftp_response): Use buf_readchar().
+ (getftp): Flush the control connection buffer before calling
+ get_contents().
+
+ * retr.c (buf_readchar): New function.
+ (buf_flush): New function.
+ (get_contents): Use buf_readchar() instead of read(x, x, 1).
+ (get_contents): Use buf_flush.
+
+1996-09-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c: Incorporate changes to ftp_response.
+
+ * ftp.c (ftp_response): Allocate the server response dynamically,
+ as in read_whole_line and fetch_next_header.
+
+ * utils.c (read_whole_line): Fixed a bug that prevented reading
+ the last line if it is not \n-terminated. Also fixed a possible
+ memory overflow.
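
The bug described above is a common one in hand-rolled line readers: the last line of a file with no trailing \n must still be returned, and the buffer must grow safely. A minimal sketch in the spirit of read_whole_line (the name is real, the body is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read one line of arbitrary length from FP into a malloc'd buffer,
   growing the buffer as needed.  The terminating newline, if any, is
   kept.  Return NULL only when nothing at all was read, so an
   unterminated final line is still returned.  */
static char *
read_whole_line (FILE *fp)
{
  size_t length = 0, bufsize = 81;
  char *line = malloc (bufsize);
  int c;

  while ((c = getc (fp)) != EOF)
    {
      if (length + 1 >= bufsize)
        {
          bufsize <<= 1;            /* grow exponentially */
          line = realloc (line, bufsize);
        }
      line[length++] = c;
      if (c == '\n')
        break;
    }
  if (length == 0)                  /* EOF before any character */
    {
      free (line);
      return NULL;
    }
  line[length] = '\0';
  return line;
}
```

(Allocation failures go unchecked here to keep the sketch short.)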
+
+ * http.c (fetch_next_header): Return malloc-ed string as large as
+ needed.
+ (gethttp): Use new fetch_next_header.
+
+1996-09-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (hgetlen): Compute the header length the first time only.
+ (hgettype): Ditto.
+ (hgetlocation): Ditto.
+ (hgetmodified): Ditto.
+
+1996-09-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * sample.geturlrc: Update.
+
+1996-09-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): Ditto.
+
+ * ftp.c (getftp): Open the output file as binary.
+
+ * version.c: Changed version to 1.4b18.
+
+1996-09-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b17.
+
+ * ftp-unix.c (ftp_parse_ls): If unable to open file, return NULL
+ instead of failed assertion.
+
+1996-09-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_get_listing): Add a numbered suffix to LIST_FILENAME
+ if a file of that name already exists.
+
+1996-09-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_1fl_loop): Handle FTPPORTERR and FOPENERR correctly.
+
+ * config.h.in: Define gethostbyname as Rgethostbyname when using
+ Socks.
+
+ * configure.in: Check for -lresolv if using Socks.
+
+ * version.c: Changed version to 1.4b17.
+
+1996-07-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b16.
+
+ * http.c (gethttp): More intelligent check for first line of HTTP
+ response.
+ (gethttp): Would bug out on time-stamping.
+
+ * version.c: Changed version to 1.4b16.
+
+1996-07-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Released 1.4b15.
+
+ * http.c (http_loop): Print \n after the loop entry, not before.
+
+ * url.c (url_filename): Use ISDOT.
+
+ * url.h (ISDOT): New macro.
+
+ * recur.c (recursive_retrieve): Change only opt.recursive for
+ following FTP.
+
+1996-07-11 Antonio Rosella <antonio.rosella@agip.it>
+
+ * socks/geturl.cgi: Fixed version No.
+
+ * socks/download-netscape.html: Ditto.
+
+ * socks/download.html: Changed socks.html to download.html.
+
+1996-07-11 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (url_filename): Check for opt.dirstruct instead for
+ opt.recursive && opt.dirstruct.
+
+ * init.c (defaults): Ditto.
+ (defaults): Reset dirstruct by default.
+ (setval): Set opt.dirstruct whenever setting recursive.
+
+ * init.h: Removed FORCEDIRHIER.
+
+ * INSTALL: Added -L to socks-description.
+
+ * version.c: Changed version to 1.4b15.
+
+1996-07-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b14.
+
+ * geturl.1: Update AUTHOR to include Rosella as contributor.
+
+ * NEWS: Update.
+
+ * socks/geturl.cgi: Simplified command creation, nuked <blink>.
+
+ * socks/geturl.cgi: Wrap nutscape extensions within if $netscape.
+ (cal_time): Fix == to eq.
+
+ * socks/geturl.cgi: GPL-ized with permission of A. Rosella.
+
+ * geturl.1 (hostname): Moved URL CONVENTIONS to the beginning.
+
+ * Makefile.in: Use @VERSION@.
+
+ * configure.in: Check version from version.c.
+
+ * socks/geturl.cgi: Changed /pub/bin/perl to /usr/bin/perl.
+
+ * socks/download.html: Created from download-netscape.html, made
+ HTML-2.0 compliant.
+
+ * recur.c (recursive_retrieve): Set opt.force_dir_hier when
+ following FTP links from recursions.
+
+1996-07-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (mymkdir): Fixed a bug that prevented mymkdir() to create
+ absolute directories correctly.
+
+ * version.c: Changed version to 1.4b14.
+
+1996-07-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b13.
+
+ * url.c (make_backup): New function.
+
+ * http.c (http_loop): Make a backup copy of the local file (using
+ rename(2)) before opening it.
+
+ * main.c (main): Added --backups.
+
+ * host.c (ftp_getaddress): Bail out on failed mycuserid().
+ (ftp_getaddress): Check for leading dot on MY_DOMAIN.
+ (ftp_getaddress): Check for empty, null or (null) domain.
+
+ * url.c (get_urls_html): If this_url is NULL, the base must have a
+ protocol.
+ (parseurl): Use has_proto.
+
+ * retr.c (retrieve_url): Warn when proxy is used with more than 20
+ retries.
+
+ * url.c (mkstruct): Create the directory (calling mymkdir()) only
+ if it is not already there.
+ (has_proto): New function.
+ (get_urls_html): Eliminate the remaining call to findurl -- use
+ has_proto.
+
+ * geturl.1: Ditto.
+
+ * main.c: Change -X to -x.
+
+ * url.c (url_filename): Simplify creation of filename if
+ prefix_files is set.
+ (url_filename): Simplify everything. And I do mean *everything*.
+ (mkstruct): Add dir_prefix before hostname.
+ (path_simplify): Fixed a bug that caused writing outside the path
+ string in case of "." and ".." path strings.
+
+1996-07-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * init.c: Added --mirror.
+
+ * main.c (main): Added -X to force saving of directory hierarchy.
+
+ * ftp.c (ftp_retrieve_list): Added recursion depth counter.
+ (ftp_retrieve_list): Check whether quota is exceeded.
+
+ * url.c (get_urls_html): Skip leading blanks for absolute URIs.
+
+ * http.c (gethttp): Use referer if present.
+
+ * recur.c (recursive_retrieve): Set u->referer before calling
+ retrieve_url.
+
+ * url.c (newurl): Use memset to nullify the struct members.
+ (freeurl): Free the referer field too.
+
+ * url.h: Added referer to urlinfo.
+
+ * geturl.1: Updated the manual to document some of the new features.
+
+ * utils.c (numdigit): Moved from url.c.
+
+ * README: Rewritten.
+
+ * config.h.in: Add the support for socks.
+
+ * configure.in: Add the support for socks.
+
+ * url.c (url_filename): If the dir_prefix is ".", work with just
+ the file name.
+ (url_filename): Do not look for .n extensions if timestamping is
+ turned on.
+
+ * retr.c (show_progress): Skip the over-abundant restval data, and
+ print the rest of it with ',' instead of '.'.
+
+1996-07-05 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (show_progress): Changed second arg. to long (as it
+ should be).
+ (show_progress): Moved to retr.c.
+ (get_contents): Moved to retr.c.
+
+ * version.c: Change version to 1.4b13.
+
+1996-07-05 Hrvoje Lacko <hlacko@fly.cc.fer.hr>
+
+ * url.c (in_acclist): Would return after the first suffix.
+
+1996-07-04 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: "Released" 1.4b12.
+
+ * url.c (path_simplify): More kludgifications.
+ (get_urls_html): Use new parameters for htmlfindurl.
+
+ * html.c: Removed memorizing "parser states", since the new
+ organization does not require them.
+
+ * init.c (run_geturlrc): Use read_whole_line.
+
+ * ftp-unix.c (ftp_parse_ls): Use read_whole_line.
+
+ * recur.c (parse_robots): Use read_whole_line.
+
+ * utils.c (read_whole_line): New function.
+
+ * recur.c (tried_robots): Use add_slist/in_slist, *much* cleaner.
+
+ * host.c (ngethostbyname): Call inet_addr just once. Yet to be
+ tested on OSF and Ultrix.
+ (add_hlist): New function.
+ (free_hlist): New function.
+ (search_host): New function.
+ (search_address): New function.
+ (realhost): Use search_host, search_address and add_hlist.
+ (same_host): Replaced realloc() with strdupdelim(), made
+ case-insensitive, fixed a memory leak.
+
+ * html.c (ftp_index): Fixed tm_min and tm_sec to be tm_hour and
+ tm_min, as intended.
+
+ * version.c: Change user agent information to
+ Geturl/version.
+
+1996-07-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * utils.c: Renamed nmalloc.c to utils.c, .h likewise.
+
+ * url.c (acceptable): Always accept directories.
+
+ * ftp-unix.c (ftp_parse_ls): Support brain-damaged "ls -F"-loving
+ servers by stripping trailing @ from symlinks and trailing / from
+ directories.
+
+ * ftp.c (ftp_loop): Debugged the "enhanced" heuristics. :-)
+
+ * url.c (skip_url): Use toupper instead of UCASE.
+
+ * host.c (sufmatch): Made it case-insensitive.
+
+ * url.c (match_backwards_or_pattern): Fixed i == -1 to j == -1.
+ (match_backwards): New function, instead of
+ match_backwards_or_pattern.
+
+ * recur.c (recursive_retrieve): Increased performance by
+ introducing inl, which reduces number of calls to in_slist to only
+ one.
+
+ * ftp.c (ftp_loop): Enhanced the heuristics that decides which
+ routine to use.
+
+ * main.c (printhelp): Removed the warranty stuff.
+
+1996-07-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (add_slist): Simplify.
+ (match_backwards_or_pattern): New function.
+ (in_acclist): Use match_backwards_or_pattern.
+ (matches): Remove.
+
+1996-06-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (ftp_loop): Call ftp_index on empty file names, if not
+ recursive.
+
+ * html.c (ftp_index): Fixed to work. Beautified the output.
+
+ * ftp.c (ftp_retrieve_glob): Another argument to control whether
+ globbing is to be used.
+ (ftp_retrieve_list): Compare the time-stamps of local and remote
+ files to determine whether to download.
+
+1996-06-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (rel_constr): New function.
+
+ * retr.c (retrieve_from_file): Check for text/html before
+ retrieving recursively.
+
+ * main.c (main): Check whether the file is HTML before going into
+ recursive HTML retrieving.
+
+ * ftp.c (ftp_retrieve_list): Manage directories.
+ (ftp_retrieve_glob): Pass all the file-types to ftp_retrieve_list.
+ (ftp_1fl_loop): Fixed a bug that caused con->com to be incorrectly
+ initialized, causing bugchecks in getftp to fail.
+
+ * configure.in: Check for symlink.
+
+ * ftp.c (ftp_retrieve_list): Added support for symlinks.
+
+ * version.c: "Released" 1.4b10.
+
+ * atotm.c (atotm): Redeclared as time_t.
+
+ * init.c: New variable "timestamping".
+
+ * main.c (main): New option 'N'.
+
+ * http.c (hgetlocation): Case-insensitive match.
+ (hgetmodified): New function.
+ (http_loop): Implement time-stamping.
+
+1996-06-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: Changed version to 1.4b10
+
+ * atotm.c: New file, from phttpd.
+
+ * options.h (struct options): New parameter timestamping.
+
+ * version.c: 1.4b9 "released".
+
+ * recur.c (recursive_retrieve): Used linked list (ulist) for
+ faster storing of URLs.
+
+ * url.c (get_urls_html): Removed the old kludge with comparing the
+ outputs of htmlfindurl and findurl.
+ (get_urls_html): Added better protocol support here.
+ (create_hash): Removed, as well as add_hash and in_hash.
+ (add_slist): New function.
+ (in_slist): Ditto.
+
+ * version.c: Released 1.4b8, changed version to b9.
+
+1996-06-27 Hrvoje Niksic <hniksic@srce.hr>
+
+ * ftp.c (freefileinfo): New function.
+ (delelement): New function.
+
+ * everywhere: GPL!
+
+ * ftp.c (ftp_loop): Use ccon.
+ (ftp_retrieve_glob): Likewise.
+
+ * ftp.h: Define ccon, to define status of control connection.
+
+ * ftp.c (ftp_get_listing): New function.
+ (ftp_retrieve_more): New function.
+ (ftp_retrieve_glob): New function.
+
+1996-06-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * configure.in: Removed the search for cuserid().
+
+ * init.c (getmode): Renamed to getperms.
+
+1996-06-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * version.c: New version.
+
+ * main.c (hangup): New function, that handles hangup. Hangup
+ signal now causes geturl to stop writing on stdout, and to write
+ to a log file.
+
+ * ftp.c (getftp): "Released" 1.4b7.
+
+ * html.c (htmlfindurl): Ignore everything inside <head>...</head>.
+ (ftp_index): Use fileinfo/urlinfo.
+
+ * ftp-unix.c (ftp_parse_ls): New function.
+ (symperms): New function.
+
+ * ftp.c (ftp_1fl_loop): New function, to handle 1-file loops.
+
+ * retr.c (retrieve_url): Added FTP support.
+
+1996-06-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.h: Removed NOTFTP2HTML enum.
+ Added DO_LOGIN, DO_CWD and DO_LIST. LIST_ONLY is obsolete.
+
+ * ftp.c (getftp): Resynched with urlinfo.
+ (getftp): Removed HTML-ization of index.html from getftp.
+
+ * version.c: 1.4b6 "released".
+
+ * options.h (options): New struct, to keep options in.
+
+ * http.c (http_loop,gethttp): Synched with proxy.
+
+ * retr.c (retrieve_url): Implemented proxy retrieval.
+
+ * main.c (main): Use retrieve_from_file.
+
+1996-06-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * retr.c (retrieve_from_file): New function.
+
+ * url.c (parseurl): Modified to return URLOK if all OK. Protocol
+ can be found in u->proto.
+
+ * ftp.c (ftp_response): Fixed to accept multi-line responses as
+ per RFC 959.
+
+ * recur.c (recursive_retrieve): Take newloc from retrieve_url.
+
+ * url.c (mymkdir): Removed the file of the same name, if one
+ exists.
+ (isfile): New function.
+ (mkstruct): Fixed the '/' glitches.
+ (path_simplify): Hacked to treat something/.. correctly.
+
+1996-06-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (gethttp): Close the socket after error in headers.
+ (http_loop): HEOF no longer a fatal header.
+
+ * loop.c (retrieve_url): When dt is NULL, do not modify it. This
+ simplifies the syntax of calling retrieve_url.
+
+ * recr.c (recursive_retrieve): Modified to use get_urls_html.
+
+ * url.c (get_urls_file): New function.
+ (get_urls_html): New function.
+
+ * recr.c (recursive_retrieve): Patched up to conform to the
+ standards.
+
+ * http.c (gethttp): Synched with the rest...
+ (gethttp): Treat only CONREFUSED specially, with connection
+ errors.
+
+ * init.c,geturl.1,http.c (http_loop): Removed kill_error.
+
+1996-06-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * http.c (http_loop): New function.
+
+ * loop.c: Removed *lots* of stuff from retrieve_url.
+
+ * url.c (parseurl): Changed to work with urlinfo. Integrated
+ username finding and path parsing.
+ (newurl): New function.
+ (freeurl): New function.
+ (mkstruct): Removed the old bogosities, made it urlinfo-compliant.
+ (url_filename): Likewise.
+ (path_simplify): Accept relative paths too.
+ (opt_url): Made urlinfo-compliant, removed bogosities.
+ (path_simplify): Expanded to accept relative paths.
+	(str_url): A replacement for hide_url.
+ (decode_string): Fixed a bug that caused malfunctioning when
+ encountering an illegal %.. combination.
+ (opt_url): Removed the argument. Dot-optimizations are now default.
+
+ * nmalloc.c (strdupdelim): New function.
+
+	* url.h: Added the urlinfo structure.
+
+1996-06-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * url.c (hide_url): Thrown out the protocol assertion. Do not
+	change the URL if the protocol is not recognized.
+ (findurl): Put continue instead of break.
+
+1996-06-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * sample.geturlrc: Changed the defaults to be commented out and
+ harmless (previous defaults caused pains if copied to
+ ~/.geturlrc).
+
+ * http.c (gethttp): Print the HTTP request in debug mode.
+
+ * connect.c (iread): Added EINTR check loop to select-ing
+ too. EINTR is now correctly handled with select().
+
+	* TODO: New file.
+
+1996-05-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * host.c (same_host): Made the function a little bit more
+ intelligent regarding diversified URL syntaxes.
+
+ * url.c (skip_url): Spaces are now skipped after URL:
+
+ * Released 1.3.1 with the patch to prevent crashing when sending
+ NULL to robot* functions and the patch to compile "out of the box"
+ on AIX.
+
+ * recr.c (recursive_retrieve): Added checking whether this_url is
+ NULL when calling the robot functions.
+
+ * ChangeLog: New file.
--- /dev/null
+# Makefile for `wget' utility
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+#
+# Version: @VERSION@
+#
+
+SHELL = /bin/sh
+
+top_srcdir = @top_srcdir@
+srcdir = @srcdir@
+VPATH = @srcdir@
+ANSI2KNR = @ANSI2KNR@
+o = .@U@o
+
+prefix = @prefix@
+exec_prefix = @exec_prefix@
+bindir = @bindir@
+sysconfdir = @sysconfdir@
+localedir = $(prefix)/share/locale
+
+CC = @CC@
+CPPFLAGS = @CPPFLAGS@
+# The following line is losing on some versions of make!
+DEFS = @DEFS@ -DSYSTEM_WGETRC=\"$(sysconfdir)/wgetrc\" -DLOCALEDIR=\"$(localedir)\"
+CFLAGS = @CFLAGS@
+LDFLAGS = @LDFLAGS@
+LIBS = @LIBS@
+exeext = @exeext@
+
+INCLUDES = -I. -I$(srcdir)
+
+COMPILE = $(CC) $(INCLUDES) $(CPPFLAGS) $(DEFS) $(CFLAGS)
+LINK = $(CC) $(CFLAGS) $(LDFLAGS) -o $@
+INSTALL = @INSTALL@
+INSTALL_PROGRAM = @INSTALL_PROGRAM@
+RM = rm -f
+ETAGS = etags
+
+# Conditional compiles
+ALLOCA = @ALLOCA@
+MD5_OBJ = @MD5_OBJ@
+OPIE_OBJ = @OPIE_OBJ@
+
+OBJ = $(ALLOCA) cmpt$o connect$o fnmatch$o ftp$o ftp-basic$o \
+ ftp-ls$o $(OPIE_OBJ) getopt$o headers$o host$o html$o \
+ http$o init$o log$o main$o $(MD5_OBJ) netrc$o rbuf$o \
+ recur$o retr$o url$o utils$o version$o
+
+.SUFFIXES:
+.SUFFIXES: .c .o ._c ._o
+
+.c.o:
+ $(COMPILE) -c $<
+
+.c._c: $(ANSI2KNR)
+ $(ANSI2KNR) $< > $*.tmp && mv $*.tmp $@
+
+._c._o:
+ @echo $(COMPILE) -c $<
+ @rm -f _$*.c
+ @ln $< _$*.c && $(COMPILE) -c _$*.c && mv _$*.o $@ && rm _$*.c
+
+.c._o: $(ANSI2KNR)
+ $(ANSI2KNR) $< > $*.tmp && mv $*.tmp $*._c
+ @echo $(COMPILE) -c $*._c
+ @rm -f _$*.c
+ @ln $*._c _$*.c && $(COMPILE) -c _$*.c && mv _$*.o $@ && rm _$*.c
+
+# Dependencies for building
+
+wget$(exeext): $(OBJ)
+ $(LINK) $(OBJ) $(LIBS)
+
+ansi2knr: ansi2knr.o
+ $(CC) -o ansi2knr ansi2knr.o $(LIBS)
+
+$(OBJ): $(ANSI2KNR)
+
+#
+# Dependencies for installing
+#
+
+install: install.bin
+
+uninstall: uninstall.bin
+
+install.bin: wget$(exeext)
+ $(top_srcdir)/mkinstalldirs $(bindir)
+ $(INSTALL_PROGRAM) wget$(exeext) $(bindir)/wget$(exeext)
+
+uninstall.bin:
+ $(RM) $(bindir)/wget$(exeext)
+
+#
+# Dependencies for cleanup
+#
+
+clean:
+ $(RM) *.o wget$(exeext) *~ *.bak core $(ANSI2KNR) *._o *._c
+
+distclean: clean
+ $(RM) Makefile config.h
+
+realclean: distclean
+ $(RM) TAGS
+
+#
+# Dependencies for maintenance
+#
+
+subdir = src
+
+Makefile: Makefile.in ../config.status
+ cd .. && CONFIG_FILES=$(subdir)/$@ CONFIG_HEADERS= ./config.status
+
+TAGS: *.c *.h
+ -$(ETAGS) *.c *.h
+
+# DO NOT DELETE THIS LINE -- make depend depends on it.
+
+cmpt$o: config.h wget.h sysdep.h options.h
+connect$o: config.h wget.h sysdep.h options.h connect.h host.h
+fnmatch$o: config.h wget.h sysdep.h options.h fnmatch.h
+ftp-basic$o: config.h wget.h sysdep.h options.h utils.h rbuf.h connect.h host.h
+ftp-ls$o: config.h wget.h sysdep.h options.h utils.h ftp.h rbuf.h
+ftp-opie$o: config.h wget.h sysdep.h options.h md5.h
+ftp$o: config.h wget.h sysdep.h options.h utils.h url.h rbuf.h retr.h ftp.h html.h connect.h host.h fnmatch.h netrc.h
+getopt$o: wget.h sysdep.h options.h
+headers$o: config.h wget.h sysdep.h options.h connect.h rbuf.h headers.h
+host$o: config.h wget.h sysdep.h options.h utils.h host.h url.h
+html$o: config.h wget.h sysdep.h options.h url.h utils.h ftp.h rbuf.h html.h
+http$o: config.h wget.h sysdep.h options.h utils.h url.h host.h rbuf.h retr.h headers.h connect.h fnmatch.h netrc.h
+init$o: config.h wget.h sysdep.h options.h utils.h init.h host.h recur.h netrc.h
+log$o: config.h wget.h sysdep.h options.h utils.h
+main$o: config.h wget.h sysdep.h options.h utils.h getopt.h init.h retr.h rbuf.h recur.h host.h
+md5$o: wget.h sysdep.h options.h md5.h
+mswindows$o: config.h winsock.h wget.h sysdep.h options.h url.h
+netrc$o: wget.h sysdep.h options.h utils.h netrc.h init.h
+rbuf$o: config.h wget.h sysdep.h options.h rbuf.h connect.h
+recur$o: config.h wget.h sysdep.h options.h url.h recur.h utils.h retr.h rbuf.h ftp.h fnmatch.h host.h
+retr$o: config.h wget.h sysdep.h options.h utils.h retr.h rbuf.h url.h recur.h ftp.h host.h connect.h
+url$o: config.h wget.h sysdep.h options.h utils.h url.h host.h html.h
+utils$o: config.h wget.h sysdep.h options.h utils.h fnmatch.h
--- /dev/null
+/* alloca.c -- allocate automatically reclaimed memory
+ (Mostly) portable public-domain implementation -- D A Gwyn
+
+ This implementation of the PWB library alloca function,
+ which is used to allocate space off the run-time stack so
+ that it is automatically reclaimed upon procedure exit,
+ was inspired by discussions with J. Q. Johnson of Cornell.
+ J.Otto Tennant <jot@cray.com> contributed the Cray support.
+
+ There are some preprocessor constants that can
+ be defined when compiling for your specific system, for
+ improved efficiency; however, the defaults should be okay.
+
+ The general concept of this implementation is to keep
+ track of all alloca-allocated blocks, and reclaim any
+ that are found to be deeper in the stack than the current
+ invocation. This heuristic does not reclaim storage as
+ soon as it becomes invalid, but it will do so eventually.
+
+ As a special case, alloca(0) reclaims storage without
+ allocating any. It is a good idea to use alloca(0) in
+ your main control loop, etc. to force garbage collection. */
+
+#ifdef HAVE_CONFIG_H
+#include <config.h>
+#endif
+
+#ifdef HAVE_STRING_H
+#include <string.h>
+#endif
+#ifdef HAVE_STDLIB_H
+#include <stdlib.h>
+#endif
+
+#ifdef emacs
+#include "blockinput.h"
+#endif
+
+/* If compiling with GCC 2, this file's not needed. */
+#if !defined (__GNUC__) || __GNUC__ < 2
+
+/* If someone has defined alloca as a macro,
+ there must be some other way alloca is supposed to work. */
+#ifndef alloca
+
+#ifdef emacs
+#ifdef static
+/* actually, only want this if static is defined as ""
+ -- this is for usg, in which emacs must undefine static
+ in order to make unexec workable
+ */
+#ifndef STACK_DIRECTION
+you
+lose
+-- must know STACK_DIRECTION at compile-time
+#endif /* STACK_DIRECTION undefined */
+#endif /* static */
+#endif /* emacs */
+
+/* If your stack is a linked list of frames, you have to
+ provide an "address metric" ADDRESS_FUNCTION macro. */
+
+#if defined (CRAY) && defined (CRAY_STACKSEG_END)
+long i00afunc ();
+#define ADDRESS_FUNCTION(arg) (char *) i00afunc (&(arg))
+#else
+#define ADDRESS_FUNCTION(arg) &(arg)
+#endif
+
+#if __STDC__
+typedef void *pointer;
+#else
+typedef char *pointer;
+#endif
+
+#ifndef NULL
+#define NULL 0
+#endif
+
+/* Different portions of Emacs need to call different versions of
+ malloc. The Emacs executable needs alloca to call xmalloc, because
+ ordinary malloc isn't protected from input signals. On the other
+ hand, the utilities in lib-src need alloca to call malloc; some of
+ them are very simple, and don't have an xmalloc routine.
+
+ Non-Emacs programs expect this to call xmalloc.
+
+ Callers below should use malloc. */
+
+#ifndef emacs
+#define malloc xmalloc
+#endif
+extern pointer malloc ();
+
+/* Define STACK_DIRECTION if you know the direction of stack
+ growth for your system; otherwise it will be automatically
+ deduced at run-time.
+
+ STACK_DIRECTION > 0 => grows toward higher addresses
+ STACK_DIRECTION < 0 => grows toward lower addresses
+ STACK_DIRECTION = 0 => direction of growth unknown */
+
+#ifndef STACK_DIRECTION
+#define STACK_DIRECTION 0 /* Direction unknown. */
+#endif
+
+#if STACK_DIRECTION != 0
+
+#define STACK_DIR STACK_DIRECTION /* Known at compile-time. */
+
+#else /* STACK_DIRECTION == 0; need run-time code. */
+
+static int stack_dir; /* 1 or -1 once known. */
+#define STACK_DIR stack_dir
+
+static void
+find_stack_direction ()
+{
+ static char *addr = NULL; /* Address of first `dummy', once known. */
+ auto char dummy; /* To get stack address. */
+
+ if (addr == NULL)
+ { /* Initial entry. */
+ addr = ADDRESS_FUNCTION (dummy);
+
+ find_stack_direction (); /* Recurse once. */
+ }
+ else
+ {
+ /* Second entry. */
+ if (ADDRESS_FUNCTION (dummy) > addr)
+ stack_dir = 1; /* Stack grew upward. */
+ else
+ stack_dir = -1; /* Stack grew downward. */
+ }
+}
+
+#endif /* STACK_DIRECTION == 0 */
+
+/* An "alloca header" is used to:
+ (a) chain together all alloca'ed blocks;
+ (b) keep track of stack depth.
+
+ It is very important that sizeof(header) agree with malloc
+ alignment chunk size. The following default should work okay. */
+
+#ifndef ALIGN_SIZE
+#define ALIGN_SIZE sizeof(double)
+#endif
+
+typedef union hdr
+{
+ char align[ALIGN_SIZE]; /* To force sizeof(header). */
+ struct
+ {
+ union hdr *next; /* For chaining headers. */
+ char *deep; /* For stack depth measure. */
+ } h;
+} header;
+
+static header *last_alloca_header = NULL; /* -> last alloca header. */
+
+/* Return a pointer to at least SIZE bytes of storage,
+ which will be automatically reclaimed upon exit from
+ the procedure that called alloca. Originally, this space
+ was supposed to be taken from the current stack frame of the
+ caller, but that method cannot be made to work for some
+ implementations of C, for example under Gould's UTX/32. */
+
+pointer
+alloca (size)
+ unsigned size;
+{
+ auto char probe; /* Probes stack depth: */
+ register char *depth = ADDRESS_FUNCTION (probe);
+
+#if STACK_DIRECTION == 0
+ if (STACK_DIR == 0) /* Unknown growth direction. */
+ find_stack_direction ();
+#endif
+
+ /* Reclaim garbage, defined as all alloca'd storage that
+ was allocated from deeper in the stack than currently. */
+
+ {
+ register header *hp; /* Traverses linked list. */
+
+#ifdef emacs
+ BLOCK_INPUT;
+#endif
+
+ for (hp = last_alloca_header; hp != NULL;)
+ if ((STACK_DIR > 0 && hp->h.deep > depth)
+ || (STACK_DIR < 0 && hp->h.deep < depth))
+ {
+ register header *np = hp->h.next;
+
+ free ((pointer) hp); /* Collect garbage. */
+
+ hp = np; /* -> next header. */
+ }
+ else
+ break; /* Rest are not deeper. */
+
+ last_alloca_header = hp; /* -> last valid storage. */
+
+#ifdef emacs
+ UNBLOCK_INPUT;
+#endif
+ }
+
+ if (size == 0)
+ return NULL; /* No allocation required. */
+
+ /* Allocate combined header + user data storage. */
+
+ {
+ register pointer new = malloc (sizeof (header) + size);
+ /* Address of header. */
+
+ if (new == 0)
+ abort();
+
+ ((header *) new)->h.next = last_alloca_header;
+ ((header *) new)->h.deep = depth;
+
+ last_alloca_header = (header *) new;
+
+ /* User storage begins just after header. */
+
+ return (pointer) ((char *) new + sizeof (header));
+ }
+}
+
+#if defined (CRAY) && defined (CRAY_STACKSEG_END)
+
+#ifdef DEBUG_I00AFUNC
+#include <stdio.h>
+#endif
+
+#ifndef CRAY_STACK
+#define CRAY_STACK
+#ifndef CRAY2
+/* Stack structures for CRAY-1, CRAY X-MP, and CRAY Y-MP */
+struct stack_control_header
+ {
+ long shgrow:32; /* Number of times stack has grown. */
+ long shaseg:32; /* Size of increments to stack. */
+ long shhwm:32; /* High water mark of stack. */
+ long shsize:32; /* Current size of stack (all segments). */
+ };
+
+/* The stack segment linkage control information occurs at
+ the high-address end of a stack segment. (The stack
+ grows from low addresses to high addresses.) The initial
+ part of the stack segment linkage control information is
+ 0200 (octal) words. This provides for register storage
+ for the routine which overflows the stack. */
+
+struct stack_segment_linkage
+ {
+ long ss[0200]; /* 0200 overflow words. */
+ long sssize:32; /* Number of words in this segment. */
+ long ssbase:32; /* Offset to stack base. */
+ long:32;
+ long sspseg:32; /* Offset to linkage control of previous
+ segment of stack. */
+ long:32;
+ long sstcpt:32; /* Pointer to task common address block. */
+ long sscsnm; /* Private control structure number for
+ microtasking. */
+ long ssusr1; /* Reserved for user. */
+ long ssusr2; /* Reserved for user. */
+ long sstpid; /* Process ID for pid based multi-tasking. */
+ long ssgvup; /* Pointer to multitasking thread giveup. */
+ long sscray[7]; /* Reserved for Cray Research. */
+ long ssa0;
+ long ssa1;
+ long ssa2;
+ long ssa3;
+ long ssa4;
+ long ssa5;
+ long ssa6;
+ long ssa7;
+ long sss0;
+ long sss1;
+ long sss2;
+ long sss3;
+ long sss4;
+ long sss5;
+ long sss6;
+ long sss7;
+ };
+
+#else /* CRAY2 */
+/* The following structure defines the vector of words
+ returned by the STKSTAT library routine. */
+struct stk_stat
+ {
+ long now; /* Current total stack size. */
+ long maxc; /* Amount of contiguous space which would
+ be required to satisfy the maximum
+ stack demand to date. */
+ long high_water; /* Stack high-water mark. */
+ long overflows; /* Number of stack overflow ($STKOFEN) calls. */
+ long hits; /* Number of internal buffer hits. */
+ long extends; /* Number of block extensions. */
+ long stko_mallocs; /* Block allocations by $STKOFEN. */
+ long underflows; /* Number of stack underflow calls ($STKRETN). */
+ long stko_free; /* Number of deallocations by $STKRETN. */
+ long stkm_free; /* Number of deallocations by $STKMRET. */
+ long segments; /* Current number of stack segments. */
+ long maxs; /* Maximum number of stack segments so far. */
+ long pad_size; /* Stack pad size. */
+ long current_address; /* Current stack segment address. */
+ long current_size; /* Current stack segment size. This
+ number is actually corrupted by STKSTAT to
+ include the fifteen word trailer area. */
+ long initial_address; /* Address of initial segment. */
+ long initial_size; /* Size of initial segment. */
+ };
+
+/* The following structure describes the data structure which trails
+ any stack segment. I think that the description in 'asdef' is
+ out of date. I only describe the parts that I am sure about. */
+
+struct stk_trailer
+ {
+ long this_address; /* Address of this block. */
+ long this_size; /* Size of this block (does not include
+ this trailer). */
+ long unknown2;
+ long unknown3;
+ long link; /* Address of trailer block of previous
+ segment. */
+ long unknown5;
+ long unknown6;
+ long unknown7;
+ long unknown8;
+ long unknown9;
+ long unknown10;
+ long unknown11;
+ long unknown12;
+ long unknown13;
+ long unknown14;
+ };
+
+#endif /* CRAY2 */
+#endif /* not CRAY_STACK */
+
+#ifdef CRAY2
+/* Determine a "stack measure" for an arbitrary ADDRESS.
+ I doubt that "lint" will like this much. */
+
+static long
+i00afunc (long *address)
+{
+ struct stk_stat status;
+ struct stk_trailer *trailer;
+ long *block, size;
+ long result = 0;
+
+ /* We want to iterate through all of the segments. The first
+ step is to get the stack status structure. We could do this
+ more quickly and more directly, perhaps, by referencing the
+ $LM00 common block, but I know that this works. */
+
+ STKSTAT (&status);
+
+ /* Set up the iteration. */
+
+ trailer = (struct stk_trailer *) (status.current_address
+ + status.current_size
+ - 15);
+
+ /* There must be at least one stack segment. Therefore it is
+ a fatal error if "trailer" is null. */
+
+ if (trailer == 0)
+ abort ();
+
+ /* Discard segments that do not contain our argument address. */
+
+ while (trailer != 0)
+ {
+ block = (long *) trailer->this_address;
+ size = trailer->this_size;
+ if (block == 0 || size == 0)
+ abort ();
+ trailer = (struct stk_trailer *) trailer->link;
+ if ((block <= address) && (address < (block + size)))
+ break;
+ }
+
+ /* Set the result to the offset in this segment and add the sizes
+ of all predecessor segments. */
+
+ result = address - block;
+
+ if (trailer == 0)
+ {
+ return result;
+ }
+
+ do
+ {
+ if (trailer->this_size <= 0)
+ abort ();
+ result += trailer->this_size;
+ trailer = (struct stk_trailer *) trailer->link;
+ }
+ while (trailer != 0);
+
+ /* We are done. Note that if you present a bogus address (one
+ not in any segment), you will get a different number back, formed
+ from subtracting the address of the first block. This is probably
+ not what you want. */
+
+ return (result);
+}
+
+#else /* not CRAY2 */
+/* Stack address function for a CRAY-1, CRAY X-MP, or CRAY Y-MP.
+ Determine the number of the cell within the stack,
+ given the address of the cell. The purpose of this
+ routine is to linearize, in some sense, stack addresses
+ for alloca. */
+
+static long
+i00afunc (long address)
+{
+ long stkl = 0;
+
+ long size, pseg, this_segment, stack;
+ long result = 0;
+
+ struct stack_segment_linkage *ssptr;
+
+ /* Register B67 contains the address of the end of the
+ current stack segment. If you (as a subprogram) store
+ your registers on the stack and find that you are past
+ the contents of B67, you have overflowed the segment.
+
+ B67 also points to the stack segment linkage control
+ area, which is what we are really interested in. */
+
+ stkl = CRAY_STACKSEG_END ();
+ ssptr = (struct stack_segment_linkage *) stkl;
+
+ /* If one subtracts 'size' from the end of the segment,
+ one has the address of the first word of the segment.
+
+ If this is not the first segment, 'pseg' will be
+ nonzero. */
+
+ pseg = ssptr->sspseg;
+ size = ssptr->sssize;
+
+ this_segment = stkl - size;
+
+ /* It is possible that calling this routine itself caused
+ a stack overflow. Discard stack segments which do not
+ contain the target address. */
+
+ while (!(this_segment <= address && address <= stkl))
+ {
+#ifdef DEBUG_I00AFUNC
+ fprintf (stderr, "%011o %011o %011o\n", this_segment, address, stkl);
+#endif
+ if (pseg == 0)
+ break;
+ stkl = stkl - pseg;
+ ssptr = (struct stack_segment_linkage *) stkl;
+ size = ssptr->sssize;
+ pseg = ssptr->sspseg;
+ this_segment = stkl - size;
+ }
+
+ result = address - this_segment;
+
+ /* If you subtract pseg from the current end of the stack,
+ you get the address of the previous stack segment's end.
+ This seems a little convoluted to me, but I'll bet you save
+ a cycle somewhere. */
+
+ while (pseg != 0)
+ {
+#ifdef DEBUG_I00AFUNC
+ fprintf (stderr, "%011o %011o\n", pseg, size);
+#endif
+ stkl = stkl - pseg;
+ ssptr = (struct stack_segment_linkage *) stkl;
+ size = ssptr->sssize;
+ pseg = ssptr->sspseg;
+ result += size;
+ }
+ return (result);
+}
+
+#endif /* not CRAY2 */
+#endif /* CRAY */
+
+#endif /* no alloca */
+#endif /* not GCC version 2 */
--- /dev/null
+/* ansi2knr.c */
+/* Convert ANSI C function definitions to K&R ("traditional C") syntax */
+
+/*
+ansi2knr is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY. No author or distributor accepts responsibility to anyone for the
+consequences of using it or for whether it serves any particular purpose or
+works at all, unless he says so in writing. Refer to the GNU General Public
+License (the "GPL") for full details.
+
+Everyone is granted permission to copy, modify and redistribute ansi2knr,
+but only under the conditions described in the GPL. A copy of this license
+is supposed to have been given to you along with ansi2knr so you can know
+your rights and responsibilities. It should be in a file named COPYLEFT.
+Among other things, the copyright notice and this notice must be preserved
+on all copies.
+
+We explicitly state here what we believe is already implied by the GPL: if
+the ansi2knr program is distributed as a separate set of sources and a
+separate executable file which are aggregated on a storage medium together
+with another program, this in itself does not bring the other program under
+the GPL, nor does the mere fact that such a program or the procedures for
+constructing it invoke the ansi2knr executable bring any other part of the
+program under the GPL.
+*/
+
+/*
+ * Usage:
+ ansi2knr input_file [output_file]
+ * If no output_file is supplied, output goes to stdout.
+ * There are no error messages.
+ *
+ * ansi2knr recognizes function definitions by seeing a non-keyword
+ * identifier at the left margin, followed by a left parenthesis,
+ * with a right parenthesis as the last character on the line,
+ * and with a left brace as the first token on the following line
+ * (ignoring possible intervening comments).
+ * It will recognize a multi-line header provided that no intervening
+ * line ends with a left or right brace or a semicolon.
+ * These algorithms ignore whitespace and comments, except that
+ * the function name must be the first thing on the line.
+ * The following constructs will confuse it:
+ * - Any other construct that starts at the left margin and
+ * follows the above syntax (such as a macro or function call).
+ * - Some macros that tinker with the syntax of the function header.
+ */
+
+/*
+ * The original and principal author of ansi2knr is L. Peter Deutsch
+ * <ghost@aladdin.com>. Other authors are noted in the change history
+ * that follows (in reverse chronological order):
+ lpd 96-01-21 added code to cope with not HAVE_CONFIG_H and with
+ compilers that don't understand void, as suggested by
+ Tom Lane
+ lpd 96-01-15 changed to require that the first non-comment token
+ on the line following a function header be a left brace,
+ to reduce sensitivity to macros, as suggested by Tom Lane
+ <tgl@sss.pgh.pa.us>
+ lpd 95-06-22 removed #ifndefs whose sole purpose was to define
+ undefined preprocessor symbols as 0; changed all #ifdefs
+ for configuration symbols to #ifs
+ lpd 95-04-05 changed copyright notice to make it clear that
+ including ansi2knr in a program does not bring the entire
+ program under the GPL
+ lpd 94-12-18 added conditionals for systems where ctype macros
+ don't handle 8-bit characters properly, suggested by
+ Francois Pinard <pinard@iro.umontreal.ca>;
+ removed --varargs switch (this is now the default)
+ lpd 94-10-10 removed CONFIG_BROKETS conditional
+ lpd 94-07-16 added some conditionals to help GNU `configure',
+ suggested by Francois Pinard <pinard@iro.umontreal.ca>;
+ properly erase prototype args in function parameters,
+ contributed by Jim Avera <jima@netcom.com>;
+ correct error in writeblanks (it shouldn't erase EOLs)
+ lpd 89-xx-xx original version
+ */
+
+/* Most of the conditionals here are to make ansi2knr work with */
+/* or without the GNU configure machinery. */
+
+#if HAVE_CONFIG_H
+# include <config.h>
+#endif
+
+#include <stdio.h>
+#include <ctype.h>
+
+#if HAVE_CONFIG_H
+
+/*
+ For properly autoconfiguring ansi2knr, use AC_CONFIG_HEADER(config.h).
+ This will define HAVE_CONFIG_H and so, activate the following lines.
+ */
+
+# if STDC_HEADERS || HAVE_STRING_H
+# include <string.h>
+# else
+# include <strings.h>
+# endif
+
+#else /* not HAVE_CONFIG_H */
+
+/* Otherwise do it the hard way */
+
+# ifdef BSD
+# include <strings.h>
+# else
+# ifdef VMS
+ extern int strlen(), strncmp();
+# else
+# include <string.h>
+# endif
+# endif
+
+#endif /* not HAVE_CONFIG_H */
+
+#if STDC_HEADERS
+# include <stdlib.h>
+#else
+/*
+ malloc and free should be declared in stdlib.h,
+ but if you've got a K&R compiler, they probably aren't.
+ */
+# ifdef MSDOS
+# include <malloc.h>
+# else
+# ifdef VMS
+ extern char *malloc();
+ extern void free();
+# else
+ extern char *malloc();
+ extern int free();
+# endif
+# endif
+
+#endif
+
+/*
+ * The ctype macros don't always handle 8-bit characters correctly.
+ * Compensate for this here.
+ */
+#ifdef isascii
+# undef HAVE_ISASCII /* just in case */
+# define HAVE_ISASCII 1
+#else
+#endif
+#if STDC_HEADERS || !HAVE_ISASCII
+# define is_ascii(c) 1
+#else
+# define is_ascii(c) isascii(c)
+#endif
+
+#define is_space(c) (is_ascii(c) && isspace(c))
+#define is_alpha(c) (is_ascii(c) && isalpha(c))
+#define is_alnum(c) (is_ascii(c) && isalnum(c))
+
+/* Scanning macros */
+#define isidchar(ch) (is_alnum(ch) || (ch) == '_')
+#define isidfirstchar(ch) (is_alpha(ch) || (ch) == '_')
+
+/* Forward references */
+char *skipspace();
+int writeblanks();
+int test1();
+int convert1();
+
+/* The main program */
+int
+main(argc, argv)
+ int argc;
+ char *argv[];
+{ FILE *in, *out;
+#define bufsize 5000 /* arbitrary size */
+ char *buf;
+ char *line;
+ char *more;
+ /*
+ * In previous versions, ansi2knr recognized a --varargs switch.
+ * If this switch was supplied, ansi2knr would attempt to convert
+ * a ... argument to va_alist and va_dcl; if this switch was not
+ * supplied, ansi2knr would simply drop any such arguments.
+ * Now, ansi2knr always does this conversion, and we only
+ * check for this switch for backward compatibility.
+ */
+ int convert_varargs = 1;
+
+ if ( argc > 1 && argv[1][0] == '-' )
+ { if ( !strcmp(argv[1], "--varargs") )
+ { convert_varargs = 1;
+ argc--;
+ argv++;
+ }
+ else
+ { fprintf(stderr, "Unrecognized switch: %s\n", argv[1]);
+ exit(1);
+ }
+ }
+ if (argc < 2 || argc > 3)
+ {
+ printf("Usage: ansi2knr input_file [output_file]\n");
+ exit(1);
+ }
+ in = fopen(argv[1], "r");
+ if ( in == NULL )
+ {
+ fprintf(stderr, "Cannot open input file %s\n", argv[1]);
+ exit(1);
+ }
+ if (argc == 3)
+ {
+ out = fopen(argv[2], "w");
+ if ( out == NULL )
+ {
+ fprintf(stderr, "Cannot open output file %s\n", argv[2]);
+ exit(1);
+ }
+ }
+ else
+ {
+ out = stdout;
+ }
+ fprintf(out, "#line 1 \"%s\"\n", argv[1]);
+ buf = malloc(bufsize);
+ line = buf;
+ while ( fgets(line, (unsigned)(buf + bufsize - line), in) != NULL )
+ {
+test: line += strlen(line);
+ switch ( test1(buf) )
+ {
+ case 2: /* a function header */
+ convert1(buf, out, 1, convert_varargs);
+ break;
+ case 1: /* a function */
+ /* Check for a { at the start of the next line. */
+ more = ++line;
+f: if ( line >= buf + (bufsize - 1) ) /* overflow check */
+ goto wl;
+ if ( fgets(line, (unsigned)(buf + bufsize - line), in) == NULL )
+ goto wl;
+ switch ( *skipspace(more, 1) )
+ {
+ case '{':
+ /* Definitely a function header. */
+ convert1(buf, out, 0, convert_varargs);
+ fputs(more, out);
+ break;
+ case 0:
+ /* The next line was blank or a comment: */
+ /* keep scanning for a non-comment. */
+ line += strlen(line);
+ goto f;
+ default:
+ /* buf isn't a function header, but */
+ /* more might be. */
+ fputs(buf, out);
+ strcpy(buf, more);
+ line = buf;
+ goto test;
+ }
+ break;
+ case -1: /* maybe the start of a function */
+ if ( line != buf + (bufsize - 1) ) /* overflow check */
+ continue;
+ /* falls through */
+ default: /* not a function */
+wl: fputs(buf, out);
+ break;
+ }
+ line = buf;
+ }
+ if ( line != buf )
+ fputs(buf, out);
+ free(buf);
+ fclose(out);
+ fclose(in);
+ return 0;
+}
+
+/* Skip over space and comments, in either direction. */
+char *
+skipspace(p, dir)
+ register char *p;
+ register int dir; /* 1 for forward, -1 for backward */
+{ for ( ; ; )
+ { while ( is_space(*p) )
+ p += dir;
+ if ( !(*p == '/' && p[dir] == '*') )
+ break;
+ p += dir; p += dir;
+ while ( !(*p == '*' && p[dir] == '/') )
+ { if ( *p == 0 )
+ return p; /* multi-line comment?? */
+ p += dir;
+ }
+ p += dir; p += dir;
+ }
+ return p;
+}
+
+/*
+ * Write blanks over part of a string.
+ * Don't overwrite end-of-line characters.
+ */
+int
+writeblanks(start, end)
+ char *start;
+ char *end;
+{ char *p;
+ for ( p = start; p < end; p++ )
+ if ( *p != '\r' && *p != '\n' )
+ *p = ' ';
+ return 0;
+}
+
+/*
+ * Test whether the string in buf is a function definition.
+ * The string may contain and/or end with a newline.
+ * Return as follows:
+ * 0 - definitely not a function definition;
+ * 1 - definitely a function definition;
+ * 2 - definitely a function prototype (NOT USED);
+ * -1 - may be the beginning of a function definition,
+ * append another line and look again.
+ * The reason we don't attempt to convert function prototypes is that
+ * Ghostscript's declaration-generating macros look too much like
+ * prototypes, and confuse the algorithms.
+ */
+int
+test1(buf)
+ char *buf;
+{ register char *p = buf;
+ char *bend;
+ char *endfn;
+ int contin;
+
+ if ( !isidfirstchar(*p) )
+ return 0; /* no name at left margin */
+ bend = skipspace(buf + strlen(buf) - 1, -1);
+ switch ( *bend )
+ {
+ case ';': contin = 0 /*2*/; break;
+ case ')': contin = 1; break;
+ case '{': return 0; /* not a function */
+ case '}': return 0; /* not a function */
+ default: contin = -1;
+ }
+ while ( isidchar(*p) )
+ p++;
+ endfn = p;
+ p = skipspace(p, 1);
+ if ( *p++ != '(' )
+ return 0; /* not a function */
+ p = skipspace(p, 1);
+ if ( *p == ')' )
+ return 0; /* no parameters */
+ /* Check that the apparent function name isn't a keyword. */
+ /* We only need to check for keywords that could be followed */
+ /* by a left parenthesis (which, unfortunately, is most of them). */
+ { static char *words[] =
+ { "asm", "auto", "case", "char", "const", "double",
+ "extern", "float", "for", "if", "int", "long",
+ "register", "return", "short", "signed", "sizeof",
+ "static", "switch", "typedef", "unsigned",
+ "void", "volatile", "while", 0
+ };
+ char **key = words;
+ char *kp;
+ int len = endfn - buf;
+
+ while ( (kp = *key) != 0 )
+ { if ( strlen(kp) == len && !strncmp(kp, buf, len) )
+ return 0; /* name is a keyword */
+ key++;
+ }
+ }
+ return contin;
+}
+
+/* Convert a recognized function definition or header to K&R syntax. */
+int
+convert1(buf, out, header, convert_varargs)
+ char *buf;
+ FILE *out;
+ int header; /* Boolean */
+ int convert_varargs; /* Boolean */
+{ char *endfn;
+ register char *p;
+ /*
+ * The breaks table contains pointers to the beginning and end
+ * of each argument.
+ */
+ char **breaks;
+ unsigned num_breaks = 2; /* for testing */
+ char **btop;
+ char **bp;
+ char **ap;
+ char *vararg = 0;
+
+ /* Pre-ANSI implementations don't agree on whether strchr */
+ /* is called strchr or index, so we open-code it here. */
+ for ( endfn = buf; *(endfn++) != '('; )
+ ;
+top: p = endfn;
+ breaks = (char **)malloc(sizeof(char *) * num_breaks * 2);
+ if ( breaks == 0 )
+ { /* Couldn't allocate break table, give up */
+ fprintf(stderr, "Unable to allocate break table!\n");
+ fputs(buf, out);
+ return -1;
+ }
+ btop = breaks + num_breaks * 2 - 2;
+ bp = breaks;
+ /* Parse the argument list */
+ do
+ { int level = 0;
+ char *lp = NULL;
+      char *rp = NULL;		/* set at each matching ')' */
+ char *end = NULL;
+
+ if ( bp >= btop )
+ { /* Filled up break table. */
+ /* Allocate a bigger one and start over. */
+ free((char *)breaks);
+ num_breaks <<= 1;
+ goto top;
+ }
+ *bp++ = p;
+ /* Find the end of the argument */
+ for ( ; end == NULL; p++ )
+ { switch(*p)
+ {
+ case ',':
+ if ( !level ) end = p;
+ break;
+ case '(':
+ if ( !level ) lp = p;
+ level++;
+ break;
+ case ')':
+ if ( --level < 0 ) end = p;
+ else rp = p;
+ break;
+ case '/':
+ p = skipspace(p, 1) - 1;
+ break;
+ default:
+ ;
+ }
+ }
+ /* Erase any embedded prototype parameters. */
+ if ( lp )
+ writeblanks(lp + 1, rp);
+ p--; /* back up over terminator */
+ /* Find the name being declared. */
+ /* This is complicated because of procedure and */
+ /* array modifiers. */
+ for ( ; ; )
+ { p = skipspace(p - 1, -1);
+ switch ( *p )
+ {
+ case ']': /* skip array dimension(s) */
+ case ')': /* skip procedure args OR name */
+ { int level = 1;
+ while ( level )
+ switch ( *--p )
+ {
+ case ']': case ')': level++; break;
+ case '[': case '(': level--; break;
+ case '/': p = skipspace(p, -1) + 1; break;
+ default: ;
+ }
+ }
+ if ( *p == '(' && *skipspace(p + 1, 1) == '*' )
+ { /* We found the name being declared */
+ while ( !isidfirstchar(*p) )
+ p = skipspace(p, 1) + 1;
+ goto found;
+ }
+ break;
+ default:
+ goto found;
+ }
+ }
+found: if ( *p == '.' && p[-1] == '.' && p[-2] == '.' )
+ { if ( convert_varargs )
+ { *bp++ = "va_alist";
+ vararg = p-2;
+ }
+ else
+ { p++;
+ if ( bp == breaks + 1 ) /* sole argument */
+ writeblanks(breaks[0], p);
+ else
+ writeblanks(bp[-1] - 1, p);
+ bp--;
+ }
+ }
+ else
+ { while ( isidchar(*p) ) p--;
+ *bp++ = p+1;
+ }
+ p = end;
+ }
+ while ( *p++ == ',' );
+ *bp = p;
+ /* Make a special check for 'void' arglist */
+ if ( bp == breaks+2 )
+ { p = skipspace(breaks[0], 1);
+ if ( !strncmp(p, "void", 4) )
+ { p = skipspace(p+4, 1);
+ if ( p == breaks[2] - 1 )
+ { bp = breaks; /* yup, pretend arglist is empty */
+ writeblanks(breaks[0], p + 1);
+ }
+ }
+ }
+ /* Put out the function name and left parenthesis. */
+ p = buf;
+ while ( p != endfn ) putc(*p, out), p++;
+ /* Put out the declaration. */
+ if ( header )
+ { fputs(");", out);
+ for ( p = breaks[0]; *p; p++ )
+ if ( *p == '\r' || *p == '\n' )
+ putc(*p, out);
+ }
+ else
+ { for ( ap = breaks+1; ap < bp; ap += 2 )
+ { p = *ap;
+ while ( isidchar(*p) )
+ putc(*p, out), p++;
+ if ( ap < bp - 1 )
+ fputs(", ", out);
+ }
+ fputs(") ", out);
+ /* Put out the argument declarations */
+ for ( ap = breaks+2; ap <= bp; ap += 2 )
+ (*ap)[-1] = ';';
+ if ( vararg != 0 )
+ { *vararg = 0;
+ fputs(breaks[0], out); /* any prior args */
+ fputs("va_dcl", out); /* the final arg */
+ fputs(bp[0], out);
+ }
+ else
+ fputs(breaks[0], out);
+ }
+ free((char *)breaks);
+ return 0;
+}
--- /dev/null
+/* Replacements for routines missing on some systems.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif /* HAVE_STRING_H */
+#include <ctype.h>
+
+#include <sys/types.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#include <limits.h>
+
+#include "wget.h"
+
+#ifndef HAVE_STRERROR
+/* A strerror() clone, for systems that don't have it. */
+char *
+strerror (int err)
+{
+ /* This loses on a system without `sys_errlist'. */
+ extern char *sys_errlist[];
+ return sys_errlist[err];
+}
+#endif /* not HAVE_STRERROR */
+
+/* Some systems don't have some str* functions in libc. Here we
+ define them. The same goes for strptime. */
+
+#ifndef HAVE_STRCASECMP
+/* From GNU libc. */
+/* Compare S1 and S2, ignoring case, returning less than, equal to or
+   greater than zero if S1 is lexicographically less than,
+   equal to or greater than S2. */
+int
+strcasecmp (const char *s1, const char *s2)
+{
+ register const unsigned char *p1 = (const unsigned char *) s1;
+ register const unsigned char *p2 = (const unsigned char *) s2;
+ unsigned char c1, c2;
+
+ if (p1 == p2)
+ return 0;
+
+ do
+ {
+ c1 = tolower (*p1++);
+ c2 = tolower (*p2++);
+ if (c1 == '\0')
+ break;
+ }
+ while (c1 == c2);
+
+ return c1 - c2;
+}
+#endif /* not HAVE_STRCASECMP */
+
+#ifndef HAVE_STRNCASECMP
+/* From GNU libc. */
+/* Compare no more than N characters of S1 and S2,
+ ignoring case, returning less than, equal to or
+ greater than zero if S1 is lexicographically less
+ than, equal to or greater than S2. */
+int
+strncasecmp (const char *s1, const char *s2, size_t n)
+{
+ register const unsigned char *p1 = (const unsigned char *) s1;
+ register const unsigned char *p2 = (const unsigned char *) s2;
+ unsigned char c1, c2;
+
+ if (p1 == p2 || n == 0)
+ return 0;
+
+ do
+ {
+ c1 = tolower (*p1++);
+ c2 = tolower (*p2++);
+ if (c1 == '\0' || c1 != c2)
+ return c1 - c2;
+ } while (--n > 0);
+
+ return c1 - c2;
+}
+#endif /* not HAVE_STRNCASECMP */
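+
+/* The contract the two fallbacks above reproduce can be checked
+ * against the system versions (POSIX declares them in <strings.h>).
+ * The wrapper names below are illustrative only: zero on a
+ * case-insensitive match, and a sign taken from the first differing
+ * pair of lowercased bytes.
+ */
```c
#include <stddef.h>
#include <strings.h>   /* POSIX strcasecmp / strncasecmp */

/* Thin wrappers used only to exercise the comparison semantics. */
int icmp(const char *a, const char *b)            { return strcasecmp(a, b); }
int nicmp(const char *a, const char *b, size_t n) { return strncasecmp(a, b, n); }
```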
+
+#ifndef HAVE_STRSTR
+/* From GNU libc 2.0.6. */
+/* Return the first occurrence of NEEDLE in HAYSTACK. */
+/*
+ * My personal strstr() implementation that beats most other algorithms.
+ * Until someone tells me otherwise, I assume that this is the
+ * fastest implementation of strstr() in C.
+ * I deliberately chose not to comment it. You should have at least
+ * as much fun trying to understand it, as I had to write it :-).
+ *
+ * Stephen R. van den Berg, berg@pool.informatik.rwth-aachen.de */
+typedef unsigned chartype;
+
+char *
+strstr (phaystack, pneedle)
+ const char *phaystack;
+ const char *pneedle;
+{
+ register const unsigned char *haystack, *needle;
+ register chartype b, c;
+
+ haystack = (const unsigned char *) phaystack;
+ needle = (const unsigned char *) pneedle;
+
+ b = *needle;
+ if (b != '\0')
+ {
+ haystack--; /* possible ANSI violation */
+ do
+ {
+ c = *++haystack;
+ if (c == '\0')
+ goto ret0;
+ }
+ while (c != b);
+
+ c = *++needle;
+ if (c == '\0')
+ goto foundneedle;
+ ++needle;
+ goto jin;
+
+ for (;;)
+ {
+ register chartype a;
+ register const unsigned char *rhaystack, *rneedle;
+
+ do
+ {
+ a = *++haystack;
+ if (a == '\0')
+ goto ret0;
+ if (a == b)
+ break;
+ a = *++haystack;
+ if (a == '\0')
+ goto ret0;
+shloop: }
+ while (a != b);
+
+jin: a = *++haystack;
+ if (a == '\0')
+ goto ret0;
+
+ if (a != c)
+ goto shloop;
+
+ rhaystack = haystack-- + 1;
+ rneedle = needle;
+ a = *rneedle;
+
+ if (*rhaystack == a)
+ do
+ {
+ if (a == '\0')
+ goto foundneedle;
+ ++rhaystack;
+ a = *++needle;
+ if (*rhaystack != a)
+ break;
+ if (a == '\0')
+ goto foundneedle;
+ ++rhaystack;
+ a = *++needle;
+ }
+ while (*rhaystack == a);
+
+ needle = rneedle; /* took the register-poor approach */
+
+ if (a == '\0')
+ break;
+ }
+ }
+foundneedle:
+ return (char*) haystack;
+ret0:
+ return 0;
+}
+#endif /* not HAVE_STRSTR */
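+
+/* Whatever the internal trickery, the fallback above must agree with
+ * ISO C strstr: a pointer to the first occurrence of NEEDLE in
+ * HAYSTACK, HAYSTACK itself for an empty needle, and NULL when there
+ * is no match.  A minimal check against the standard function
+ * (find_sub is an illustrative name):
+ */
```c
#include <string.h>

/* Wrapper over ISO C strstr, used only to demonstrate the contract. */
const char *
find_sub(const char *haystack, const char *needle)
{
    return strstr(haystack, needle);
}
```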
+
+#ifndef HAVE_MKTIME
+/* From GNU libc 2.0. */
+
+/* Copyright (C) 1993, 1994, 1995, 1996, 1997 Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+ Contributed by Paul Eggert (eggert@twinsun.com). */
+
+#ifdef _LIBC
+# define HAVE_LIMITS_H 1
+# define HAVE_LOCALTIME_R 1
+# define STDC_HEADERS 1
+#endif
+
+/* Assume that leap seconds are possible, unless told otherwise.
+ If the host has a `zic' command with a `-L leapsecondfilename' option,
+ then it supports leap seconds; otherwise it probably doesn't. */
+#ifndef LEAP_SECONDS_POSSIBLE
+# define LEAP_SECONDS_POSSIBLE 1
+#endif
+
+#ifndef __P
+# define __P(args) PARAMS (args)
+#endif /* Not __P. */
+
+#ifndef CHAR_BIT
+# define CHAR_BIT 8
+#endif
+
+#ifndef INT_MIN
+# define INT_MIN (~0 << (sizeof (int) * CHAR_BIT - 1))
+#endif
+#ifndef INT_MAX
+# define INT_MAX (~0 - INT_MIN)
+#endif
+
+#ifndef TIME_T_MIN
+/* The outer cast to time_t works around a bug in Cray C 5.0.3.0. */
+# define TIME_T_MIN ((time_t) \
+ (0 < (time_t) -1 ? (time_t) 0 \
+ : ~ (time_t) 0 << (sizeof (time_t) * CHAR_BIT - 1)))
+#endif
+#ifndef TIME_T_MAX
+# define TIME_T_MAX (~ (time_t) 0 - TIME_T_MIN)
+#endif
+
+#define TM_YEAR_BASE 1900
+#define EPOCH_YEAR 1970
+
+#ifndef __isleap
+/* Nonzero if YEAR is a leap year (every 4 years,
+ except every 100th isn't, and every 400th is). */
+# define __isleap(year) \
+ ((year) % 4 == 0 && ((year) % 100 != 0 || (year) % 400 == 0))
+#endif
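+
+/* The __isleap rule written out as a plain function for illustration
+ * (is_leap_year is a hypothetical name): leap years are the multiples
+ * of 4, except centuries, except multiples of 400.
+ */
```c
/* Gregorian leap-year rule, identical to the __isleap macro above. */
static int
is_leap_year(int year)
{
    /* Divisible by 4, and either not a century or a multiple of 400. */
    return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
}
```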
+
+/* How many days come before each month (0-12). */
+const unsigned short int __mon_yday[2][13] =
+ {
+ /* Normal years. */
+ { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
+ /* Leap years. */
+ { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
+ };
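+
+/* The table above is used to turn a month and day-of-month into a
+ * 0-based day-of-year.  A small sketch with a local copy of the table
+ * (day_of_year is an illustrative name; MON is 0-11, MDAY is 1-based):
+ */
```c
/* Cumulative days before each month, normal and leap years, as above. */
static const unsigned short mon_yday[2][13] = {
    { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
    { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
};

/* 0-based day of year for month MON (0-11) and day MDAY (1-based). */
static int
day_of_year(int leap, int mon, int mday)
{
    return mon_yday[leap][mon] + mday - 1;
}
```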
+
+static time_t ydhms_tm_diff __P ((int, int, int, int, int, const struct tm *));
+time_t __mktime_internal __P ((struct tm *,
+ struct tm *(*) (const time_t *, struct tm *),
+ time_t *));
+
+
+#ifdef _LIBC
+# define localtime_r __localtime_r
+#else
+# if ! HAVE_LOCALTIME_R && ! defined localtime_r
+/* Approximate localtime_r as best we can in its absence. */
+# define localtime_r my_mktime_localtime_r
+static struct tm *localtime_r __P ((const time_t *, struct tm *));
+static struct tm *
+localtime_r (t, tp)
+ const time_t *t;
+ struct tm *tp;
+{
+ struct tm *l = localtime (t);
+ if (! l)
+ return 0;
+ *tp = *l;
+ return tp;
+}
+# endif /* ! HAVE_LOCALTIME_R && ! defined (localtime_r) */
+#endif /* ! _LIBC */
+
+
+/* Yield the difference between (YEAR-YDAY HOUR:MIN:SEC) and (*TP),
+ measured in seconds, ignoring leap seconds.
+ YEAR uses the same numbering as TM->tm_year.
+ All values are in range, except possibly YEAR.
+ If overflow occurs, yield the low order bits of the correct answer. */
+static time_t
+ydhms_tm_diff (year, yday, hour, min, sec, tp)
+ int year, yday, hour, min, sec;
+ const struct tm *tp;
+{
+ /* Compute intervening leap days correctly even if year is negative.
+ Take care to avoid int overflow. time_t overflow is OK, since
+ only the low order bits of the correct time_t answer are needed.
+ Don't convert to time_t until after all divisions are done, since
+ time_t might be unsigned. */
+ int a4 = (year >> 2) + (TM_YEAR_BASE >> 2) - ! (year & 3);
+ int b4 = (tp->tm_year >> 2) + (TM_YEAR_BASE >> 2) - ! (tp->tm_year & 3);
+ int a100 = a4 / 25 - (a4 % 25 < 0);
+ int b100 = b4 / 25 - (b4 % 25 < 0);
+ int a400 = a100 >> 2;
+ int b400 = b100 >> 2;
+ int intervening_leap_days = (a4 - b4) - (a100 - b100) + (a400 - b400);
+ time_t years = year - (time_t) tp->tm_year;
+ time_t days = (365 * years + intervening_leap_days
+ + (yday - tp->tm_yday));
+ return (60 * (60 * (24 * days + (hour - tp->tm_hour))
+ + (min - tp->tm_min))
+ + (sec - tp->tm_sec));
+}
+
+
+static time_t localtime_offset;
+
+/* Convert *TP to a time_t value. */
+time_t
+mktime (tp)
+ struct tm *tp;
+{
+#ifdef _LIBC
+ /* POSIX.1 8.1.1 requires that whenever mktime() is called, the
+ time zone names contained in the external variable `tzname' shall
+ be set as if the tzset() function had been called. */
+ __tzset ();
+#endif
+
+ return __mktime_internal (tp, localtime_r, &localtime_offset);
+}
+
+/* Convert *TP to a time_t value, inverting
+ the monotonic and mostly-unit-linear conversion function CONVERT.
+ Use *OFFSET to keep track of a guess at the offset of the result,
+ compared to what the result would be for UTC without leap seconds.
+ If *OFFSET's guess is correct, only one CONVERT call is needed. */
+time_t
+__mktime_internal (tp, convert, offset)
+ struct tm *tp;
+ struct tm *(*convert) __P ((const time_t *, struct tm *));
+ time_t *offset;
+{
+ time_t t, dt, t0;
+ struct tm tm;
+
+ /* The maximum number of probes (calls to CONVERT) should be enough
+ to handle any combinations of time zone rule changes, solar time,
+ and leap seconds. Posix.1 prohibits leap seconds, but some hosts
+ have them anyway. */
+ int remaining_probes = 4;
+
+ /* Time requested. Copy it in case CONVERT modifies *TP; this can
+ occur if TP is localtime's returned value and CONVERT is localtime. */
+ int sec = tp->tm_sec;
+ int min = tp->tm_min;
+ int hour = tp->tm_hour;
+ int mday = tp->tm_mday;
+ int mon = tp->tm_mon;
+ int year_requested = tp->tm_year;
+ int isdst = tp->tm_isdst;
+
+ /* Ensure that mon is in range, and set year accordingly. */
+ int mon_remainder = mon % 12;
+ int negative_mon_remainder = mon_remainder < 0;
+ int mon_years = mon / 12 - negative_mon_remainder;
+ int year = year_requested + mon_years;
+
+ /* The other values need not be in range:
+ the remaining code handles minor overflows correctly,
+ assuming int and time_t arithmetic wraps around.
+ Major overflows are caught at the end. */
+
+ /* Calculate day of year from year, month, and day of month.
+ The result need not be in range. */
+ int yday = ((__mon_yday[__isleap (year + TM_YEAR_BASE)]
+ [mon_remainder + 12 * negative_mon_remainder])
+ + mday - 1);
+
+ int sec_requested = sec;
+#if LEAP_SECONDS_POSSIBLE
+ /* Handle out-of-range seconds specially,
+ since ydhms_tm_diff assumes every minute has 60 seconds. */
+ if (sec < 0)
+ sec = 0;
+ if (59 < sec)
+ sec = 59;
+#endif
+
+ /* Invert CONVERT by probing. First assume the same offset as last time.
+ Then repeatedly use the error to improve the guess. */
+
+ tm.tm_year = EPOCH_YEAR - TM_YEAR_BASE;
+ tm.tm_yday = tm.tm_hour = tm.tm_min = tm.tm_sec = 0;
+ t0 = ydhms_tm_diff (year, yday, hour, min, sec, &tm);
+
+ for (t = t0 + *offset;
+ (dt = ydhms_tm_diff (year, yday, hour, min, sec, (*convert) (&t, &tm)));
+ t += dt)
+ if (--remaining_probes == 0)
+ return -1;
+
+ /* Check whether tm.tm_isdst has the requested value, if any. */
+ if (0 <= isdst && 0 <= tm.tm_isdst)
+ {
+ int dst_diff = (isdst != 0) - (tm.tm_isdst != 0);
+ if (dst_diff)
+ {
+ /* Move two hours in the direction indicated by the disagreement,
+ probe some more, and switch to a new time if found.
+ The largest known fallback due to daylight savings is two hours:
+ once, in Newfoundland, 1988-10-30 02:00 -> 00:00. */
+ time_t ot = t - 2 * 60 * 60 * dst_diff;
+ while (--remaining_probes != 0)
+ {
+ struct tm otm;
+ if (! (dt = ydhms_tm_diff (year, yday, hour, min, sec,
+ (*convert) (&ot, &otm))))
+ {
+ t = ot;
+ tm = otm;
+ break;
+ }
+ if ((ot += dt) == t)
+ break; /* Avoid a redundant probe. */
+ }
+ }
+ }
+
+ *offset = t - t0;
+
+#if LEAP_SECONDS_POSSIBLE
+ if (sec_requested != tm.tm_sec)
+ {
+ /* Adjust time to reflect the tm_sec requested, not the normalized value.
+ Also, repair any damage from a false match due to a leap second. */
+ t += sec_requested - sec + (sec == 0 && tm.tm_sec == 60);
+ (*convert) (&t, &tm);
+ }
+#endif
+
+ if (TIME_T_MAX / INT_MAX / 366 / 24 / 60 / 60 < 3)
+ {
+ /* time_t isn't large enough to rule out overflows in ydhms_tm_diff,
+ so check for major overflows. A gross check suffices,
+ since if t has overflowed, it is off by a multiple of
+ TIME_T_MAX - TIME_T_MIN + 1. So ignore any component of
+ the difference that is bounded by a small value. */
+
+ double dyear = (double) year_requested + mon_years - tm.tm_year;
+ double dday = 366 * dyear + mday;
+ double dsec = 60 * (60 * (24 * dday + hour) + min) + sec_requested;
+
+ if (TIME_T_MAX / 3 - TIME_T_MIN / 3 < (dsec < 0 ? - dsec : dsec))
+ return -1;
+ }
+
+ *tp = tm;
+ return t;
+}
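+
+/* The core idea of __mktime_internal is to invert CONVERT by probing:
+ * guess a time_t, run the forward conversion, measure the error in
+ * seconds with ydhms_tm_diff, and add the error back to the guess,
+ * giving up after a few probes.  A toy version of that loop on a
+ * simple monotone function (forward/invert are hypothetical names;
+ * the fixed offset stands in for the time-zone conversion):
+ */
```c
/* Stand-in for the forward conversion: a mostly-linear monotone map. */
static long forward(long t) { return t + 3600; }

/* Invert FORWARD by probe-and-correct, as __mktime_internal does:
   add the observed error to the guess, with a small probe limit. */
static long
invert(long target, long guess)
{
    int probes = 4;                 /* same small bound as above */
    long err;

    while ((err = target - forward(guess)) != 0) {
        if (--probes == 0)
            return -1;              /* give up, as mktime returns -1 */
        guess += err;               /* error feeds the next guess */
    }
    return guess;
}
```
+/* With a constant offset the loop converges after one correction;
+ * time-zone rule changes are what can force extra probes. */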
+
+#ifdef weak_alias
+weak_alias (mktime, timelocal)
+#endif
+#endif /* not HAVE_MKTIME */
+
+
+#ifndef HAVE_STRPTIME
+/* From GNU libc 2.0.6. */
+/* Ulrich, thanks for helping me out with this! --hniksic */
+
+/* strptime - Convert a string representation of time to a time value.
+ Copyright (C) 1996, 1997 Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+ Contributed by Ulrich Drepper <drepper@cygnus.com>, 1996. */
+
+/* XXX This version of the implementation is not really complete.
+   Some of the fields cannot add information alone.  But if several
+   of them are seen together in the same format (such as year, week
+   and weekday), they provide enough information for determining the
+   date. */
+
+#ifndef __P
+# define __P(args) PARAMS (args)
+#endif /* not __P */
+
+#if ! HAVE_LOCALTIME_R && ! defined (localtime_r)
+#ifdef _LIBC
+#define localtime_r __localtime_r
+#else
+/* Approximate localtime_r as best we can in its absence. */
+#define localtime_r my_localtime_r
+static struct tm *localtime_r __P ((const time_t *, struct tm *));
+static struct tm *
+localtime_r (t, tp)
+ const time_t *t;
+ struct tm *tp;
+{
+ struct tm *l = localtime (t);
+ if (! l)
+ return 0;
+ *tp = *l;
+ return tp;
+}
+#endif /* ! _LIBC */
+#endif /* ! HAVE_LOCALTIME_R && ! defined (localtime_r) */
+
+
+#define match_char(ch1, ch2) if (ch1 != ch2) return NULL
+#if defined __GNUC__ && __GNUC__ >= 2
+# define match_string(cs1, s2) \
+ ({ size_t len = strlen (cs1); \
+ int result = strncasecmp ((cs1), (s2), len) == 0; \
+ if (result) (s2) += len; \
+ result; })
+#else
+/* Oh come on. Get a reasonable compiler. */
+# define match_string(cs1, s2) \
+ (strncasecmp ((cs1), (s2), strlen (cs1)) ? 0 : ((s2) += strlen (cs1), 1))
+#endif
+/* We intentionally do not use isdigit() for testing because this will
+ lead to problems with the wide character version. */
+#define get_number(from, to) \
+ do { \
+ val = 0; \
+ if (*rp < '0' || *rp > '9') \
+ return NULL; \
+ do { \
+ val *= 10; \
+ val += *rp++ - '0'; \
+ } while (val * 10 <= to && *rp >= '0' && *rp <= '9'); \
+ if (val < from || val > to) \
+ return NULL; \
+ } while (0)
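+
+/* The get_number macro parses a bounded decimal field, stopping early
+ * once appending another digit could exceed the upper bound.  The
+ * same logic as a plain function, for illustration (get_bounded is a
+ * hypothetical name; -1 stands in for the macro's `return NULL'):
+ */
```c
/* Parse a decimal number in [FROM, TO] from *RP, advancing *RP past
   the digits consumed.  Stops consuming once val * 10 would pass TO,
   exactly like the get_number macro.  Returns -1 on failure. */
static int
get_bounded(const char **rp, int from, int to)
{
    const char *p = *rp;
    int val = 0;

    if (*p < '0' || *p > '9')
        return -1;                  /* need at least one digit */
    do {
        val = val * 10 + (*p++ - '0');
    } while (val * 10 <= to && *p >= '0' && *p <= '9');
    if (val < from || val > to)
        return -1;                  /* out of range */
    *rp = p;
    return val;
}
```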
+#ifdef _NL_CURRENT
+# define get_alt_number(from, to) \
+ do { \
+ if (*decided != raw) \
+ { \
+ const char *alts = _NL_CURRENT (LC_TIME, ALT_DIGITS); \
+ val = 0; \
+ while (*alts != '\0') \
+ { \
+ size_t len = strlen (alts); \
+ if (strncasecmp (alts, rp, len) == 0) \
+ break; \
+ alts = strchr (alts, '\0') + 1; \
+ ++val; \
+ } \
+ if (*alts == '\0') \
+ { \
+ if (*decided == loc && val != 0) \
+ return NULL; \
+ } \
+ else \
+ { \
+ *decided = loc; \
+ break; \
+ } \
+ } \
+ get_number (from, to); \
+ } while (0)
+#else
+# define get_alt_number(from, to) \
+ /* We don't have the alternate representation. */ \
+ get_number(from, to)
+#endif
+#define recursive(new_fmt) \
+ (*(new_fmt) != '\0' \
+ && (rp = strptime_internal (rp, (new_fmt), tm, decided)) != NULL)
+
+
+#ifdef _LIBC
+/* This is defined in locale/C-time.c in the GNU libc. */
+extern const struct locale_data _nl_C_LC_TIME;
+
+# define weekday_name (&_nl_C_LC_TIME.values[_NL_ITEM_INDEX (DAY_1)].string)
+# define ab_weekday_name \
+ (&_nl_C_LC_TIME.values[_NL_ITEM_INDEX (ABDAY_1)].string)
+# define month_name (&_nl_C_LC_TIME.values[_NL_ITEM_INDEX (MON_1)].string)
+# define ab_month_name (&_nl_C_LC_TIME.values[_NL_ITEM_INDEX (ABMON_1)].string)
+# define HERE_D_T_FMT (_nl_C_LC_TIME.values[_NL_ITEM_INDEX (D_T_FMT)].string)
+# define HERE_D_FMT (_nl_C_LC_TIME.values[_NL_ITEM_INDEX (D_FMT)].string)
+# define HERE_AM_STR (_nl_C_LC_TIME.values[_NL_ITEM_INDEX (AM_STR)].string)
+# define HERE_PM_STR (_nl_C_LC_TIME.values[_NL_ITEM_INDEX (PM_STR)].string)
+# define HERE_T_FMT_AMPM \
+ (_nl_C_LC_TIME.values[_NL_ITEM_INDEX (T_FMT_AMPM)].string)
+# define HERE_T_FMT (_nl_C_LC_TIME.values[_NL_ITEM_INDEX (T_FMT)].string)
+#else
+static char const weekday_name[][10] =
+ {
+ "Sunday", "Monday", "Tuesday", "Wednesday",
+ "Thursday", "Friday", "Saturday"
+ };
+static char const ab_weekday_name[][4] =
+ {
+ "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"
+ };
+static char const month_name[][10] =
+ {
+ "January", "February", "March", "April", "May", "June",
+ "July", "August", "September", "October", "November", "December"
+ };
+static char const ab_month_name[][4] =
+ {
+ "Jan", "Feb", "Mar", "Apr", "May", "Jun",
+ "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
+ };
+# define HERE_D_T_FMT "%a %b %e %H:%M:%S %Y"
+# define HERE_D_FMT "%m/%d/%y"
+# define HERE_AM_STR "AM"
+# define HERE_PM_STR "PM"
+# define HERE_T_FMT_AMPM "%I:%M:%S %p"
+# define HERE_T_FMT "%H:%M:%S"
+#endif
+
+/* Status of lookup: do we use the locale data or the raw data? */
+enum locale_status { not, loc, raw };
+
+static char *
+strptime_internal __P ((const char *buf, const char *format, struct tm *tm,
+ enum locale_status *decided));
+
+static char *
+strptime_internal (buf, format, tm, decided)
+ const char *buf;
+ const char *format;
+ struct tm *tm;
+ enum locale_status *decided;
+{
+ const char *rp;
+ const char *fmt;
+ int cnt;
+ size_t val;
+ int have_I, is_pm;
+
+ rp = buf;
+ fmt = format;
+ have_I = is_pm = 0;
+
+ while (*fmt != '\0')
+ {
+      /* A white space in the format string matches zero or more
+	 whitespace characters in the input string. */
+ if (isspace (*fmt))
+ {
+ while (isspace (*rp))
+ ++rp;
+ ++fmt;
+ continue;
+ }
+
+      /* Any character but `%' must be matched by the same character
+	 in the input string. */
+ if (*fmt != '%')
+ {
+ match_char (*fmt++, *rp++);
+ continue;
+ }
+
+ ++fmt;
+#ifndef _NL_CURRENT
+ /* We need this for handling the `E' modifier. */
+ start_over:
+#endif
+ switch (*fmt++)
+ {
+ case '%':
+ /* Match the `%' character itself. */
+ match_char ('%', *rp++);
+ break;
+ case 'a':
+ case 'A':
+ /* Match day of week. */
+ for (cnt = 0; cnt < 7; ++cnt)
+ {
+#ifdef _NL_CURRENT
+	      if (*decided != raw)
+ {
+ if (match_string (_NL_CURRENT (LC_TIME, DAY_1 + cnt), rp))
+ {
+ if (*decided == not
+ && strcmp (_NL_CURRENT (LC_TIME, DAY_1 + cnt),
+ weekday_name[cnt]))
+ *decided = loc;
+ break;
+ }
+ if (match_string (_NL_CURRENT (LC_TIME, ABDAY_1 + cnt), rp))
+ {
+ if (*decided == not
+ && strcmp (_NL_CURRENT (LC_TIME, ABDAY_1 + cnt),
+ ab_weekday_name[cnt]))
+ *decided = loc;
+ break;
+ }
+ }
+#endif
+ if (*decided != loc
+ && (match_string (weekday_name[cnt], rp)
+ || match_string (ab_weekday_name[cnt], rp)))
+ {
+ *decided = raw;
+ break;
+ }
+ }
+ if (cnt == 7)
+ /* Does not match a weekday name. */
+ return NULL;
+ tm->tm_wday = cnt;
+ break;
+ case 'b':
+ case 'B':
+ case 'h':
+ /* Match month name. */
+ for (cnt = 0; cnt < 12; ++cnt)
+ {
+#ifdef _NL_CURRENT
+	      if (*decided != raw)
+ {
+ if (match_string (_NL_CURRENT (LC_TIME, MON_1 + cnt), rp))
+ {
+ if (*decided == not
+ && strcmp (_NL_CURRENT (LC_TIME, MON_1 + cnt),
+ month_name[cnt]))
+ *decided = loc;
+ break;
+ }
+ if (match_string (_NL_CURRENT (LC_TIME, ABMON_1 + cnt), rp))
+ {
+ if (*decided == not
+ && strcmp (_NL_CURRENT (LC_TIME, ABMON_1 + cnt),
+ ab_month_name[cnt]))
+ *decided = loc;
+ break;
+ }
+ }
+#endif
+ if (match_string (month_name[cnt], rp)
+ || match_string (ab_month_name[cnt], rp))
+ {
+ *decided = raw;
+ break;
+ }
+ }
+ if (cnt == 12)
+ /* Does not match a month name. */
+ return NULL;
+ tm->tm_mon = cnt;
+ break;
+ case 'c':
+ /* Match locale's date and time format. */
+#ifdef _NL_CURRENT
+ if (*decided != raw)
+ {
+ if (!recursive (_NL_CURRENT (LC_TIME, D_T_FMT)))
+ {
+ if (*decided == loc)
+ return NULL;
+ }
+ else
+ {
+ if (*decided == not &&
+ strcmp (_NL_CURRENT (LC_TIME, D_T_FMT), HERE_D_T_FMT))
+ *decided = loc;
+ break;
+ }
+ *decided = raw;
+ }
+#endif
+ if (!recursive (HERE_D_T_FMT))
+ return NULL;
+ break;
+ case 'C':
+ /* Match century number. */
+ get_number (0, 99);
+ /* We don't need the number. */
+ break;
+ case 'd':
+ case 'e':
+ /* Match day of month. */
+ get_number (1, 31);
+ tm->tm_mday = val;
+ break;
+ case 'x':
+#ifdef _NL_CURRENT
+ if (*decided != raw)
+ {
+ if (!recursive (_NL_CURRENT (LC_TIME, D_FMT)))
+ {
+ if (*decided == loc)
+ return NULL;
+ }
+ else
+ {
+		  if (*decided == not
+ && strcmp (_NL_CURRENT (LC_TIME, D_FMT), HERE_D_FMT))
+ *decided = loc;
+ break;
+ }
+ *decided = raw;
+ }
+#endif
+ /* Fall through. */
+ case 'D':
+ /* Match standard day format. */
+ if (!recursive (HERE_D_FMT))
+ return NULL;
+ break;
+ case 'H':
+ /* Match hour in 24-hour clock. */
+ get_number (0, 23);
+ tm->tm_hour = val;
+ have_I = 0;
+ break;
+ case 'I':
+ /* Match hour in 12-hour clock. */
+ get_number (1, 12);
+ tm->tm_hour = val % 12;
+ have_I = 1;
+ break;
+ case 'j':
+ /* Match day number of year. */
+ get_number (1, 366);
+ tm->tm_yday = val - 1;
+ break;
+ case 'm':
+ /* Match number of month. */
+ get_number (1, 12);
+ tm->tm_mon = val - 1;
+ break;
+ case 'M':
+ /* Match minute. */
+ get_number (0, 59);
+ tm->tm_min = val;
+ break;
+ case 'n':
+ case 't':
+ /* Match any white space. */
+ while (isspace (*rp))
+ ++rp;
+ break;
+ case 'p':
+ /* Match locale's equivalent of AM/PM. */
+#ifdef _NL_CURRENT
+ if (*decided != raw)
+ {
+ if (match_string (_NL_CURRENT (LC_TIME, AM_STR), rp))
+ {
+ if (strcmp (_NL_CURRENT (LC_TIME, AM_STR), HERE_AM_STR))
+ *decided = loc;
+ break;
+ }
+ if (match_string (_NL_CURRENT (LC_TIME, PM_STR), rp))
+ {
+ if (strcmp (_NL_CURRENT (LC_TIME, PM_STR), HERE_PM_STR))
+ *decided = loc;
+ is_pm = 1;
+ break;
+ }
+ *decided = raw;
+ }
+#endif
+ if (!match_string (HERE_AM_STR, rp))
+ if (match_string (HERE_PM_STR, rp))
+ is_pm = 1;
+ else
+ return NULL;
+ break;
+ case 'r':
+#ifdef _NL_CURRENT
+ if (*decided != raw)
+ {
+ if (!recursive (_NL_CURRENT (LC_TIME, T_FMT_AMPM)))
+ {
+ if (*decided == loc)
+ return NULL;
+ }
+ else
+ {
+ if (*decided == not &&
+ strcmp (_NL_CURRENT (LC_TIME, T_FMT_AMPM),
+ HERE_T_FMT_AMPM))
+ *decided = loc;
+ break;
+ }
+ *decided = raw;
+ }
+#endif
+ if (!recursive (HERE_T_FMT_AMPM))
+ return NULL;
+ break;
+ case 'R':
+ if (!recursive ("%H:%M"))
+ return NULL;
+ break;
+ case 's':
+ {
+ /* The number of seconds may be very high so we cannot use
+ the `get_number' macro. Instead read the number
+ character for character and construct the result while
+ doing this. */
+	    time_t secs = 0;
+ if (*rp < '0' || *rp > '9')
+ /* We need at least one digit. */
+ return NULL;
+
+ do
+ {
+ secs *= 10;
+ secs += *rp++ - '0';
+ }
+ while (*rp >= '0' && *rp <= '9');
+
+ if (localtime_r (&secs, tm) == NULL)
+ /* Error in function. */
+ return NULL;
+ }
+ break;
+ case 'S':
+ get_number (0, 61);
+ tm->tm_sec = val;
+ break;
+ case 'X':
+#ifdef _NL_CURRENT
+ if (*decided != raw)
+ {
+ if (!recursive (_NL_CURRENT (LC_TIME, T_FMT)))
+ {
+ if (*decided == loc)
+ return NULL;
+ }
+ else
+ {
+ if (strcmp (_NL_CURRENT (LC_TIME, T_FMT), HERE_T_FMT))
+ *decided = loc;
+ break;
+ }
+ *decided = raw;
+ }
+#endif
+ /* Fall through. */
+ case 'T':
+ if (!recursive (HERE_T_FMT))
+ return NULL;
+ break;
+ case 'u':
+ get_number (1, 7);
+ tm->tm_wday = val % 7;
+ break;
+ case 'g':
+ get_number (0, 99);
+ /* XXX This cannot determine any field in TM. */
+ break;
+ case 'G':
+ if (*rp < '0' || *rp > '9')
+ return NULL;
+ /* XXX Ignore the number since we would need some more
+ information to compute a real date. */
+ do
+ ++rp;
+ while (*rp >= '0' && *rp <= '9');
+ break;
+ case 'U':
+ case 'V':
+ case 'W':
+ get_number (0, 53);
+ /* XXX This cannot determine any field in TM without some
+ information. */
+ break;
+ case 'w':
+ /* Match number of weekday. */
+ get_number (0, 6);
+ tm->tm_wday = val;
+ break;
+ case 'y':
+ /* Match year within century. */
+ get_number (0, 99);
+ tm->tm_year = val >= 50 ? val : val + 100;
+ break;
+ case 'Y':
+ /* Match year including century number. */
+ if (sizeof (time_t) > 4)
+ get_number (0, 9999);
+ else
+ get_number (0, 2036);
+ tm->tm_year = val - 1900;
+ break;
+ case 'Z':
+ /* XXX How to handle this? */
+ break;
+ case 'E':
+#ifdef _NL_CURRENT
+ switch (*fmt++)
+ {
+ case 'c':
+ /* Match locale's alternate date and time format. */
+ if (*decided != raw)
+ {
+ const char *fmt = _NL_CURRENT (LC_TIME, ERA_D_T_FMT);
+
+ if (*fmt == '\0')
+ fmt = _NL_CURRENT (LC_TIME, D_T_FMT);
+
+ if (!recursive (fmt))
+ {
+ if (*decided == loc)
+ return NULL;
+ }
+ else
+ {
+ if (strcmp (fmt, HERE_D_T_FMT))
+ *decided = loc;
+ break;
+ }
+ *decided = raw;
+ }
+ /* The C locale has no era information, so use the
+ normal representation. */
+ if (!recursive (HERE_D_T_FMT))
+ return NULL;
+ break;
+ case 'C':
+ case 'y':
+ case 'Y':
+ /* Match name of base year in locale's alternate
+ representation. */
+ /* XXX This is currently not implemented. It should
+ use the value _NL_CURRENT (LC_TIME, ERA). */
+ break;
+ case 'x':
+ if (*decided != raw)
+ {
+ const char *fmt = _NL_CURRENT (LC_TIME, ERA_D_FMT);
+
+ if (*fmt == '\0')
+ fmt = _NL_CURRENT (LC_TIME, D_FMT);
+
+ if (!recursive (fmt))
+ {
+ if (*decided == loc)
+ return NULL;
+ }
+ else
+ {
+ if (strcmp (fmt, HERE_D_FMT))
+ *decided = loc;
+ break;
+ }
+ *decided = raw;
+ }
+ if (!recursive (HERE_D_FMT))
+ return NULL;
+ break;
+ case 'X':
+ if (*decided != raw)
+ {
+ const char *fmt = _NL_CURRENT (LC_TIME, ERA_T_FMT);
+
+ if (*fmt == '\0')
+ fmt = _NL_CURRENT (LC_TIME, T_FMT);
+
+ if (!recursive (fmt))
+ {
+ if (*decided == loc)
+ return NULL;
+ }
+ else
+ {
+ if (strcmp (fmt, HERE_T_FMT))
+ *decided = loc;
+ break;
+ }
+ *decided = raw;
+ }
+ if (!recursive (HERE_T_FMT))
+ return NULL;
+ break;
+ default:
+ return NULL;
+ }
+ break;
+#else
+ /* We have no information about the era format. Just use
+ the normal format. */
+ if (*fmt != 'c' && *fmt != 'C' && *fmt != 'y' && *fmt != 'Y'
+ && *fmt != 'x' && *fmt != 'X')
+ /* This is an illegal format. */
+ return NULL;
+
+ goto start_over;
+#endif
+ case 'O':
+ switch (*fmt++)
+ {
+ case 'd':
+ case 'e':
+ /* Match day of month using alternate numeric symbols. */
+ get_alt_number (1, 31);
+ tm->tm_mday = val;
+ break;
+ case 'H':
+ /* Match hour in 24-hour clock using alternate numeric
+ symbols. */
+ get_alt_number (0, 23);
+ tm->tm_hour = val;
+ have_I = 0;
+ break;
+ case 'I':
+ /* Match hour in 12-hour clock using alternate numeric
+ symbols. */
+ get_alt_number (1, 12);
+	      tm->tm_hour = val % 12;
+ have_I = 1;
+ break;
+ case 'm':
+ /* Match month using alternate numeric symbols. */
+ get_alt_number (1, 12);
+ tm->tm_mon = val - 1;
+ break;
+ case 'M':
+ /* Match minutes using alternate numeric symbols. */
+ get_alt_number (0, 59);
+ tm->tm_min = val;
+ break;
+ case 'S':
+ /* Match seconds using alternate numeric symbols. */
+ get_alt_number (0, 61);
+ tm->tm_sec = val;
+ break;
+ case 'U':
+ case 'V':
+ case 'W':
+ get_alt_number (0, 53);
+ /* XXX This cannot determine any field in TM without
+ further information. */
+ break;
+ case 'w':
+ /* Match number of weekday using alternate numeric symbols. */
+ get_alt_number (0, 6);
+ tm->tm_wday = val;
+ break;
+ case 'y':
+ /* Match year within century using alternate numeric symbols. */
+ get_alt_number (0, 99);
+ break;
+ default:
+ return NULL;
+ }
+ break;
+ default:
+ return NULL;
+ }
+ }
+
+ if (have_I && is_pm)
+ tm->tm_hour += 12;
+
+ return (char *) rp;
+}
+
+
+char *
+strptime (buf, format, tm)
+ const char *buf;
+ const char *format;
+ struct tm *tm;
+{
+ enum locale_status decided;
+#ifdef _NL_CURRENT
+ decided = not;
+#else
+ decided = raw;
+#endif
+ return strptime_internal (buf, format, tm, &decided);
+}
+#endif /* not HAVE_STRPTIME */
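+
+/* Typical use of strptime (whether this fallback or the POSIX system
+ * version, assumed available here): parse an FTP/HTTP-style date into
+ * a struct tm.  Fields not mentioned in the format are left
+ * untouched, so the structure should be zeroed first.  parse_date is
+ * an illustrative helper, not part of Wget.
+ */
```c
#define _XOPEN_SOURCE 600   /* expose strptime in <time.h> on POSIX */
#include <string.h>
#include <time.h>

/* Parse a "25 Jan 1998 14:30:00"-style date; nonzero on success. */
static int
parse_date(const char *s, struct tm *tm)
{
    memset(tm, 0, sizeof *tm);  /* strptime only sets matched fields */
    return strptime(s, "%d %b %Y %H:%M:%S", tm) != NULL;
}
```
+/* Remember the struct tm conventions: tm_mon is 0-based and tm_year
+ * counts from 1900, exactly as in the conversions above. */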
--- /dev/null
+/* Configuration header file.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef CONFIG_H
+#define CONFIG_H
+
+/* Define if you have the <alloca.h> header file. */
+#undef HAVE_ALLOCA_H
+
+/* AIX requires this to be the first thing in the file. */
+#ifdef __GNUC__
+# define alloca __builtin_alloca
+#else
+# if HAVE_ALLOCA_H
+# include <alloca.h>
+# else
+# ifdef _AIX
+ #pragma alloca
+# else
+# ifndef alloca /* predefined by HP cc +Olibcalls */
+char *alloca ();
+# endif
+# endif
+# endif
+#endif
+
+/* Define if on AIX 3.
+ System headers sometimes define this.
+ We just want to avoid a redefinition error message. */
+#ifndef _ALL_SOURCE
+#undef _ALL_SOURCE
+#endif
+
+/* Define to empty if the keyword does not work. */
+#undef const
+
+/* Define to `unsigned' if <sys/types.h> doesn't define. */
+#undef size_t
+
+/* Define to `int' if <sys/types.h> doesn't define. */
+#undef pid_t
+
+/* Define if you have the ANSI C header files. */
+#undef STDC_HEADERS
+
+/* Define as the return type of signal handlers (int or void). */
+#undef RETSIGTYPE
+
+/* Define if your architecture is big endian (with the most
+ significant byte first). */
+#undef WORDS_BIGENDIAN
+
+/* Define this if you want the NLS support. */
+#undef HAVE_NLS
+
+/* Define if you want the FTP support for Opie compiled in. */
+#undef USE_OPIE
+
+/* Define if you want the HTTP Digest Authorization compiled in. */
+#undef USE_DIGEST
+
+/* Define if you want the debug output support compiled in. */
+#undef DEBUG
+
+/* Define if you have sys/time.h header. */
+#undef HAVE_SYS_TIME_H
+
+/* Define if you can safely include both <sys/time.h> and <time.h>. */
+#undef TIME_WITH_SYS_TIME
+
+/* Define if you have struct utimbuf. */
+#undef HAVE_STRUCT_UTIMBUF
+
+/* Define if you have the uname function. */
+#undef HAVE_UNAME
+
+/* Define if you have the gethostname function. */
+#undef HAVE_GETHOSTNAME
+
+/* Define if you have the select function. */
+#undef HAVE_SELECT
+
+/* Define if you have the gettimeofday function. */
+#undef HAVE_GETTIMEOFDAY
+
+/* Define if you have the strdup function. */
+#undef HAVE_STRDUP
+
+/* Define if you have the sys/utsname.h header. */
+#undef HAVE_SYS_UTSNAME_H
+
+/* Define if you have the strerror function. */
+#undef HAVE_STRERROR
+
+/* Define if you have the vsnprintf function. */
+#undef HAVE_VSNPRINTF
+
+/* Define if you have the strstr function. */
+#undef HAVE_STRSTR
+
+/* Define if you have the strcasecmp function. */
+#undef HAVE_STRCASECMP
+
+/* Define if you have the strncasecmp function. */
+#undef HAVE_STRNCASECMP
+
+/* Define if you have the strptime function. */
+#undef HAVE_STRPTIME
+
+/* Define if you have the mktime function. */
+#undef HAVE_MKTIME
+
+/* Define if you have the symlink function. */
+#undef HAVE_SYMLINK
+
+/* Define if you have the access function. */
+#undef HAVE_ACCESS
+
+/* Define if you have the isatty function. */
+#undef HAVE_ISATTY
+
+/* Define if you have the signal function. */
+#undef HAVE_SIGNAL
+
+/* Define if you have the gettext function. */
+#undef HAVE_GETTEXT
+
+/* Define if you have the <string.h> header file. */
+#undef HAVE_STRING_H
+
+/* Define if you have the <stdarg.h> header file. */
+#undef HAVE_STDARG_H
+
+/* Define if you have the <unistd.h> header file. */
+#undef HAVE_UNISTD_H
+
+/* Define if you have the <utime.h> header file. */
+#undef HAVE_UTIME_H
+
+/* Define if you have the <sys/utime.h> header file. */
+#undef HAVE_SYS_UTIME_H
+
+/* Define if you have the <sys/select.h> header file. */
+#undef HAVE_SYS_SELECT_H
+
+/* Define if you have the <pwd.h> header file. */
+#undef HAVE_PWD_H
+
+/* Define if you have the <signal.h> header file. */
+#undef HAVE_SIGNAL_H
+
+/* Define if you have the <libintl.h> header file. */
+#undef HAVE_LIBINTL_H
+
+/* Define if you have the <locale.h> header file. */
+#undef HAVE_LOCALE_H
+
+/* Define to be the name of the operating system. */
+#undef OS_TYPE
+
+/* Define if you wish to compile with socks support. */
+#undef HAVE_SOCKS
+
+/* Define to 1 if ANSI function prototypes are usable. */
+#undef PROTOTYPES
+
+#endif /* CONFIG_H */
--- /dev/null
+/* Establishing and handling network connections.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <sys/types.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+
+#ifdef WINDOWS
+# include <winsock.h>
+#else
+# include <sys/socket.h>
+# include <netdb.h>
+# include <netinet/in.h>
+# include <arpa/inet.h>
+#endif /* WINDOWS */
+
+#include <errno.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif /* HAVE_STRING_H */
+#ifdef HAVE_SYS_SELECT_H
+# include <sys/select.h>
+#endif /* HAVE_SYS_SELECT_H */
+
+#include "wget.h"
+#include "connect.h"
+#include "host.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+/* Variables shared by bindport and acceptport: */
+static int msock = -1;
+static struct sockaddr *addr;
+
+
+/* Create an internet connection to HOSTNAME on PORT. The created
+ socket will be stored to *SOCK. */
+uerr_t
+make_connection (int *sock, char *hostname, unsigned short port)
+{
+ struct sockaddr_in sock_name;
+ /* struct hostent *hptr; */
+
+ /* Get internet address of the host. We can do it either by calling
+ ngethostbyname, or by calling store_hostaddress, from host.c.
+ store_hostaddress is better since it caches calls to
+ gethostbyname. */
+#if 1
+ if (!store_hostaddress ((unsigned char *)&sock_name.sin_addr, hostname))
+ return HOSTERR;
+#else /* never */
+ if (!(hptr = ngethostbyname (hostname)))
+ return HOSTERR;
+ /* Copy the address of the host to socket description. */
+ memcpy (&sock_name.sin_addr, hptr->h_addr, hptr->h_length);
+#endif /* never */
+
+ /* Set port and protocol */
+ sock_name.sin_family = AF_INET;
+ sock_name.sin_port = htons (port);
+
+ /* Make an internet socket, stream type. */
+ if ((*sock = socket (AF_INET, SOCK_STREAM, 0)) == -1)
+ return CONSOCKERR;
+
+ /* Connect the socket to the remote host. */
+ if (connect (*sock, (struct sockaddr *) &sock_name, sizeof (sock_name)))
+ {
+ if (errno == ECONNREFUSED)
+ return CONREFUSED;
+ else
+ return CONERROR;
+ }
+ DEBUGP (("Created fd %d.\n", *sock));
+ return NOCONERROR;
+}
+
+/* Bind the local port PORT. This does all the necessary work, which
+ is creating a socket, setting SO_REUSEADDR option on it, then
+ calling bind() and listen(). If *PORT is 0, a random port is
+ chosen by the system, and its value is stored to *PORT. The
+ internal variable MPORT is set to the value of the ensuing master
+ socket. Call acceptport() to block for and accept a connection. */
+uerr_t
+bindport (unsigned short *port)
+{
+ int optval = 1;
+ static struct sockaddr_in srv;
+
+ msock = -1;
+ addr = (struct sockaddr *) &srv;
+ if ((msock = socket (AF_INET, SOCK_STREAM, 0)) < 0)
+ return CONSOCKERR;
+ if (setsockopt (msock, SOL_SOCKET, SO_REUSEADDR,
+ (char *)&optval, sizeof (optval)) < 0)
+ return CONSOCKERR;
+ srv.sin_family = AF_INET;
+ srv.sin_addr.s_addr = htonl (INADDR_ANY);
+ srv.sin_port = htons (*port);
+ if (bind (msock, addr, sizeof (struct sockaddr_in)) < 0)
+ {
+ CLOSE (msock);
+ msock = -1;
+ return BINDERR;
+ }
+ DEBUGP (("Master socket fd %d bound.\n", msock));
+ if (!*port)
+ {
+ int addrlen = sizeof (struct sockaddr_in);
+ if (getsockname (msock, addr, &addrlen) < 0)
+ {
+ CLOSE (msock);
+ msock = -1;
+ return CONPORTERR;
+ }
+ *port = ntohs (srv.sin_port);
+ }
+ if (listen (msock, 1) < 0)
+ {
+ CLOSE (msock);
+ msock = -1;
+ return LISTENERR;
+ }
+ return BINDOK;
+}
+
+#ifdef HAVE_SELECT
+/* Wait for file descriptor FD to be readable, MAXTIME being the
+ timeout in seconds. If WRITEP is non-zero, checks for FD being
+ writable instead.
+
+ Returns 1 if FD is accessible, 0 for timeout and -1 for error in
+ select(). */
+static int
+select_fd (int fd, int maxtime, int writep)
+{
+ fd_set fds, exceptfds;
+ struct timeval timeout;
+
+ FD_ZERO (&fds);
+ FD_SET (fd, &fds);
+ FD_ZERO (&exceptfds);
+ FD_SET (fd, &exceptfds);
+ timeout.tv_sec = maxtime;
+ timeout.tv_usec = 0;
+ /* HPUX reportedly warns here. What is the correct incantation? */
+ return select (fd + 1, writep ? NULL : &fds, writep ? &fds : NULL,
+ &exceptfds, &timeout);
+}
+#endif /* HAVE_SELECT */
+
+/* Call accept() on MSOCK and store the result to *SOCK. This assumes
+ that bindport() has been used to initialize MSOCK to a correct
+ value. It blocks the caller until a connection is established. If
+ no connection is established for OPT.TIMEOUT seconds, the function
+ exits with an error status. */
+uerr_t
+acceptport (int *sock)
+{
+ int addrlen = sizeof (struct sockaddr_in);
+
+#ifdef HAVE_SELECT
+ if (select_fd (msock, opt.timeout, 0) <= 0)
+ return ACCEPTERR;
+#endif
+ if ((*sock = accept (msock, addr, &addrlen)) < 0)
+ return ACCEPTERR;
+ DEBUGP (("Created socket fd %d.\n", *sock));
+ return ACCEPTOK;
+}
+
+/* Close SOCK, as well as the most recently remembered MSOCK, created
+ via bindport(). If SOCK is -1, close MSOCK only. */
+void
+closeport (int sock)
+{
+ /*shutdown (sock, 2);*/
+ if (sock != -1)
+ CLOSE (sock);
+ if (msock != -1)
+ CLOSE (msock);
+ msock = -1;
+}
+
+/* Return the local IP address associated with the connection on FD.
+ It is returned in a static buffer. */
+unsigned char *
+conaddr (int fd)
+{
+ static unsigned char res[4];
+ struct sockaddr_in mysrv;
+ struct sockaddr *myaddr;
+ int addrlen = sizeof (mysrv);
+
+ myaddr = (struct sockaddr *) (&mysrv);
+ if (getsockname (fd, myaddr, &addrlen) < 0)
+ return NULL;
+ memcpy (res, &mysrv.sin_addr, 4);
+ return res;
+}
+
+/* Read at most LEN bytes from FD, storing them to BUF. This is
+ virtually the same as read(), but takes care of EINTR braindamage
+ and uses select() to timeout the stale connections (a connection is
+ stale if more than OPT.TIMEOUT time is spent in select() or
+ read()). */
+int
+iread (int fd, char *buf, int len)
+{
+ int res;
+
+ do
+ {
+#ifdef HAVE_SELECT
+ if (opt.timeout)
+ {
+ do
+ {
+ res = select_fd (fd, opt.timeout, 0);
+ }
+ while (res == -1 && errno == EINTR);
+ if (res <= 0)
+ {
+ /* Set errno to ETIMEDOUT on timeout. */
+ if (res == 0)
+ /* #### Potentially evil! */
+ errno = ETIMEDOUT;
+ return -1;
+ }
+ }
+#endif
+ res = READ (fd, buf, len);
+ }
+ while (res == -1 && errno == EINTR);
+
+ return res;
+}
+
+/* Write LEN bytes from BUF to FD. This is similar to iread(), but
+ doesn't bother with select(). Unlike iread(), it makes sure that
+ all of BUF is actually written to FD, so callers needn't bother
+ with checking that the return value equals LEN. Instead, you
+ should simply check for -1. */
+int
+iwrite (int fd, char *buf, int len)
+{
+ int res = 0;
+
+ /* `write' may write less than LEN bytes, thus the outward loop
+ keeps trying it until all was written, or an error occurred. The
+ inner loop is reserved for the usual EINTR f*kage, and the
+ innermost loop deals with the same during select(). */
+ while (len > 0)
+ {
+ do
+ {
+#ifdef HAVE_SELECT
+ if (opt.timeout)
+ {
+ do
+ {
+ res = select_fd (fd, opt.timeout, 1);
+ }
+ while (res == -1 && errno == EINTR);
+ if (res <= 0)
+ {
+ /* Set errno to ETIMEDOUT on timeout. */
+ if (res == 0)
+ /* #### Potentially evil! */
+ errno = ETIMEDOUT;
+ return -1;
+ }
+ }
+#endif
+ res = WRITE (fd, buf, len);
+ }
+ while (res == -1 && errno == EINTR);
+ if (res <= 0)
+ break;
+ buf += res;
+ len -= res;
+ }
+ return res;
+}
--- /dev/null
+/* Declarations for connect.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef CONNECT_H
+#define CONNECT_H
+
+/* Function declarations */
+uerr_t make_connection PARAMS ((int *, char *, unsigned short));
+uerr_t bindport PARAMS ((unsigned short *));
+uerr_t acceptport PARAMS ((int *));
+void closeport PARAMS ((int));
+unsigned char *conaddr PARAMS ((int));
+
+int iread PARAMS ((int, char *, int));
+int iwrite PARAMS ((int, char *, int));
+
+#endif /* CONNECT_H */
--- /dev/null
+/* Pattern matching (globbing).
+ Copyright (C) 1991, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+/* NOTE: Some Un*xes have their own fnmatch() -- yet, they are
+ reportedly unreliable and buggy. Thus I chose never to use it;
+ this version (from GNU Bash) is used unconditionally. */
+
+#include <config.h>
+
+#include <errno.h>
+#include "wget.h"
+#include "fnmatch.h"
+
+/* Match STRING against the filename pattern PATTERN, returning zero
+ if it matches, FNM_NOMATCH if not. */
+int
+fnmatch (const char *pattern, const char *string, int flags)
+{
+ register const char *p = pattern, *n = string;
+ register char c;
+
+ if ((flags & ~__FNM_FLAGS) != 0)
+ {
+ errno = EINVAL;
+ return (-1);
+ }
+
+ while ((c = *p++) != '\0')
+ {
+ switch (c)
+ {
+ case '?':
+ if (*n == '\0')
+ return (FNM_NOMATCH);
+ else if ((flags & FNM_PATHNAME) && *n == '/')
+ return (FNM_NOMATCH);
+ else if ((flags & FNM_PERIOD) && *n == '.' &&
+ (n == string || ((flags & FNM_PATHNAME) && n[-1] == '/')))
+ return (FNM_NOMATCH);
+ break;
+
+ case '\\':
+ if (!(flags & FNM_NOESCAPE))
+ c = *p++;
+ if (*n != c)
+ return (FNM_NOMATCH);
+ break;
+
+ case '*':
+ if ((flags & FNM_PERIOD) && *n == '.' &&
+ (n == string || ((flags & FNM_PATHNAME) && n[-1] == '/')))
+ return (FNM_NOMATCH);
+
+ for (c = *p++; c == '?' || c == '*'; c = *p++, ++n)
+ if (((flags & FNM_PATHNAME) && *n == '/') ||
+ (c == '?' && *n == '\0'))
+ return (FNM_NOMATCH);
+
+ if (c == '\0')
+ return (0);
+
+ {
+ char c1 = (!(flags & FNM_NOESCAPE) && c == '\\') ? *p : c;
+ for (--p; *n != '\0'; ++n)
+ if ((c == '[' || *n == c1) &&
+ fnmatch (p, n, flags & ~FNM_PERIOD) == 0)
+ return (0);
+ return (FNM_NOMATCH);
+ }
+
+ case '[':
+ {
+ /* Nonzero if the sense of the character class is
+ inverted. */
+ register int not;
+
+ if (*n == '\0')
+ return (FNM_NOMATCH);
+
+ if ((flags & FNM_PERIOD) && *n == '.' &&
+ (n == string || ((flags & FNM_PATHNAME) && n[-1] == '/')))
+ return (FNM_NOMATCH);
+
+ /* Make sure there is a closing `]'. If there isn't,
+ the `[' is just a character to be matched. */
+ {
+ register const char *np;
+
+ for (np = p; np && *np && *np != ']'; np++);
+
+ if (np && !*np)
+ {
+ if (*n != '[')
+ return (FNM_NOMATCH);
+ goto next_char;
+ }
+ }
+
+ not = (*p == '!' || *p == '^');
+ if (not)
+ ++p;
+
+ c = *p++;
+ while (1)
+ {
+ register char cstart = c, cend = c;
+
+ if (!(flags & FNM_NOESCAPE) && c == '\\')
+ cstart = cend = *p++;
+
+ if (c == '\0')
+ /* [ (unterminated) loses. */
+ return (FNM_NOMATCH);
+
+ c = *p++;
+
+ if ((flags & FNM_PATHNAME) && c == '/')
+ /* [/] can never match. */
+ return (FNM_NOMATCH);
+
+ if (c == '-' && *p != ']')
+ {
+ cend = *p++;
+ if (!(flags & FNM_NOESCAPE) && cend == '\\')
+ cend = *p++;
+ if (cend == '\0')
+ return (FNM_NOMATCH);
+ c = *p++;
+ }
+
+ if (*n >= cstart && *n <= cend)
+ goto matched;
+
+ if (c == ']')
+ break;
+ }
+ if (!not)
+ return (FNM_NOMATCH);
+
+ next_char:
+ break;
+
+ matched:
+ /* Skip the rest of the [...] that already matched. */
+ while (c != ']')
+ {
+ if (c == '\0')
+ /* [... (unterminated) loses. */
+ return (FNM_NOMATCH);
+
+ c = *p++;
+ if (!(flags & FNM_NOESCAPE) && c == '\\')
+ /* 1003.2d11 is unclear if this is right. %%% */
+ ++p;
+ }
+ if (not)
+ return (FNM_NOMATCH);
+ }
+ break;
+
+ default:
+ if (c != *n)
+ return (FNM_NOMATCH);
+ }
+
+ ++n;
+ }
+
+ if (*n == '\0')
+ return (0);
+
+ return (FNM_NOMATCH);
+}
+
+/* Return non-zero if S contains globbing wildcards (`*', `?', `[' or
+ `]'). */
+int
+has_wildcards_p (const char *s)
+{
+ for (; *s; s++)
+ if (*s == '*' || *s == '?' || *s == '[' || *s == ']')
+ return 1;
+ return 0;
+}
--- /dev/null
+/* Declarations for fnmatch.c.
+ Copyright (C) 1991, 1995, 1996 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef MTCH_H
+#define MTCH_H
+
+/* Bits set in the FLAGS argument to `fnmatch'. */
+#define FNM_PATHNAME (1 << 0) /* No wildcard can ever match `/'. */
+#define FNM_NOESCAPE (1 << 1) /* Backslashes don't quote special chars. */
+#define FNM_PERIOD (1 << 2) /* Leading `.' is matched only explicitly. */
+#define __FNM_FLAGS (FNM_PATHNAME|FNM_NOESCAPE|FNM_PERIOD)
+
+/* Value returned by `fnmatch' if STRING does not match PATTERN. */
+#define FNM_NOMATCH 1
+
+int fnmatch PARAMS ((const char *, const char *, int));
+int has_wildcards_p PARAMS ((const char *));
+
+#endif /* MTCH_H */
--- /dev/null
+/* Basic FTP routines.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <ctype.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#include <sys/types.h>
+
+#ifdef WINDOWS
+# include <winsock.h>
+#endif
+
+#include "wget.h"
+#include "utils.h"
+#include "rbuf.h"
+#include "connect.h"
+#include "host.h"
+
+#ifndef errno
+extern int errno;
+#endif
+#ifndef h_errno
+extern int h_errno;
+#endif
+
+char ftp_last_respline[128];
+
+\f
+/* Get the response of FTP server and allocate enough room to handle
+ it. <CR> and <LF> characters are stripped from the line, and the
+ line is 0-terminated. All the response lines but the last one are
+ skipped. The last line is determined as described in RFC 959. */
+uerr_t
+ftp_response (struct rbuf *rbuf, char **line)
+{
+ int i;
+ int bufsize = 40;
+
+ *line = (char *)xmalloc (bufsize);
+ do
+ {
+ for (i = 0; 1; i++)
+ {
+ int res;
+ if (i > bufsize - 1)
+ *line = (char *)xrealloc (*line, (bufsize <<= 1));
+ res = RBUF_READCHAR (rbuf, *line + i);
+ /* RES is number of bytes read. */
+ if (res == 1)
+ {
+ if ((*line)[i] == '\n')
+ {
+ (*line)[i] = '\0';
+ /* Get rid of \r. */
+ if (i > 0 && (*line)[i - 1] == '\r')
+ (*line)[i - 1] = '\0';
+ break;
+ }
+ }
+ else
+ return FTPRERR;
+ }
+ if (opt.server_response)
+ logprintf (LOG_ALWAYS, "%s\n", *line);
+ else
+ DEBUGP (("%s\n", *line));
+ }
+ while (!(i >= 3 && ISDIGIT (**line) && ISDIGIT ((*line)[1]) &&
+ ISDIGIT ((*line)[2]) && (*line)[3] == ' '));
+ strncpy (ftp_last_respline, *line, sizeof (ftp_last_respline));
+ ftp_last_respline[sizeof (ftp_last_respline) - 1] = '\0';
+ return FTPOK;
+}
+
+/* Returns the malloc-ed FTP request, ending with <CR><LF>, printing
+ it if printing is required. If VALUE is NULL, just use
+ command<CR><LF>. */
+static char *
+ftp_request (const char *command, const char *value)
+{
+ char *res = (char *)xmalloc (strlen (command)
+ + (value ? (1 + strlen (value)) : 0)
+ + 2 + 1);
+ sprintf (res, "%s%s%s\r\n", command, value ? " " : "", value ? value : "");
+ if (opt.server_response)
+ {
+ /* Hack: don't print out password. */
+ if (strncmp (res, "PASS", 4) != 0)
+ logprintf (LOG_ALWAYS, "--> %s\n", res);
+ else
+ logputs (LOG_ALWAYS, "--> PASS Turtle Power!\n");
+ }
+ else
+ DEBUGP (("\n--> %s\n", res));
+ return res;
+}
+
+#ifdef USE_OPIE
+const char *calculate_skey_response PARAMS ((int, const char *, const char *));
+#endif
+
+/* Sends the USER and PASS commands to the server over the FTP
+ control connection described by RBUF. */
+uerr_t
+ftp_login (struct rbuf *rbuf, const char *acc, const char *pass)
+{
+ uerr_t err;
+ char *request, *respline;
+ int nwritten;
+
+ /* Get greeting. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline != '2')
+ {
+ free (respline);
+ return FTPSRVERR;
+ }
+ free (respline);
+ /* Send USER username. */
+ request = ftp_request ("USER", acc);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ /* Handle the improbable case of logging in without a password. */
+ if (*respline == '2')
+ {
+ free (respline);
+ return FTPOK;
+ }
+ /* Else, only response 3 is appropriate. */
+ if (*respline != '3')
+ {
+ free (respline);
+ return FTPLOGREFUSED;
+ }
+#ifdef USE_OPIE
+ {
+ static const char *skey_head[] = {
+ "331 s/key ",
+ "331 opiekey "
+ };
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE (skey_head); i++)
+ {
+ if (strncmp (skey_head[i], respline, strlen (skey_head[i])) == 0)
+ break;
+ }
+ if (i < ARRAY_SIZE (skey_head))
+ {
+ const char *cp;
+ int skey_sequence = 0;
+
+ for (cp = respline + strlen (skey_head[i]);
+ '0' <= *cp && *cp <= '9';
+ cp++)
+ {
+ skey_sequence = skey_sequence * 10 + *cp - '0';
+ }
+ if (*cp == ' ')
+ cp++;
+ else
+ {
+ bad:
+ free (respline);
+ return FTPLOGREFUSED;
+ }
+ if ((cp = calculate_skey_response (skey_sequence, cp, pass)) == 0)
+ goto bad;
+ pass = cp;
+ }
+ }
+#endif /* USE_OPIE */
+ free (respline);
+ /* Send PASS password. */
+ request = ftp_request ("PASS", pass);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline != '2')
+ {
+ free (respline);
+ return FTPLOGINC;
+ }
+ free (respline);
+ /* All OK. */
+ return FTPOK;
+}
+
+/* Bind a port and send the appropriate PORT command to the FTP
+ server. Use acceptport after RETR to get the socket of the data
+ connection. */
+uerr_t
+ftp_port (struct rbuf *rbuf)
+{
+ uerr_t err;
+ char *request, *respline, *bytes;
+ unsigned char *in_addr;
+ int nwritten;
+ unsigned short port;
+
+ /* Setting port to 0 lets the system choose a free port. */
+ port = 0;
+ /* Bind the port. */
+ err = bindport (&port);
+ if (err != BINDOK)
+ return err;
+ /* Get the address of this side of the connection. */
+ if (!(in_addr = conaddr (RBUF_FD (rbuf))))
+ return HOSTERR;
+ /* Construct the argument of PORT (of the form a,b,c,d,e,f). */
+ bytes = (char *)alloca (6 * 4 + 1);
+ sprintf (bytes, "%d,%d,%d,%d,%d,%d", in_addr[0], in_addr[1],
+ in_addr[2], in_addr[3], (unsigned) (port & 0xff00) >> 8,
+ port & 0xff);
+ /* Send PORT request. */
+ request = ftp_request ("PORT", bytes);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline != '2')
+ {
+ free (respline);
+ return FTPPORTERR;
+ }
+ free (respline);
+ return FTPOK;
+}
+
+/* Similar to ftp_port, but uses `PASV' to initiate the passive FTP
+ transfer. Reads the response from the server and parses it,
+ storing the host address and port bytes into ADDR. */
+uerr_t
+ftp_pasv (struct rbuf *rbuf, unsigned char *addr)
+{
+ char *request, *respline, *s;
+ int nwritten, i;
+ uerr_t err;
+
+ /* Form the request. */
+ request = ftp_request ("PASV", NULL);
+ /* And send it. */
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get the server response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline != '2')
+ {
+ free (respline);
+ return FTPNOPASV;
+ }
+ /* Parse the request. */
+ s = respline;
+ for (s += 4; *s && !ISDIGIT (*s); s++);
+ if (!*s)
+ return FTPINVPASV;
+ for (i = 0; i < 6; i++)
+ {
+ addr[i] = 0;
+ for (; ISDIGIT (*s); s++)
+ addr[i] = (*s - '0') + 10 * addr[i];
+ if (*s == ',')
+ s++;
+ else if (i < 5)
+ {
+ /* When on the last number, anything can be a terminator. */
+ free (respline);
+ return FTPINVPASV;
+ }
+ }
+ free (respline);
+ return FTPOK;
+}
+
+/* Sends the TYPE request to the server. */
+uerr_t
+ftp_type (struct rbuf *rbuf, int type)
+{
+ char *request, *respline;
+ int nwritten;
+ uerr_t err;
+ char stype[2];
+
+ /* Construct argument. */
+ stype[0] = type;
+ stype[1] = 0;
+ /* Send TYPE request. */
+ request = ftp_request ("TYPE", stype);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline != '2')
+ {
+ free (respline);
+ return FTPUNKNOWNTYPE;
+ }
+ free (respline);
+ /* All OK. */
+ return FTPOK;
+}
+
+/* Changes the working directory by issuing a CWD command to the
+ server. */
+uerr_t
+ftp_cwd (struct rbuf *rbuf, const char *dir)
+{
+ char *request, *respline;
+ int nwritten;
+ uerr_t err;
+
+ /* Send CWD request. */
+ request = ftp_request ("CWD", dir);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline == '5')
+ {
+ free (respline);
+ return FTPNSFOD;
+ }
+ if (*respline != '2')
+ {
+ free (respline);
+ return FTPRERR;
+ }
+ free (respline);
+ /* All OK. */
+ return FTPOK;
+}
+
+/* Sends REST command to the FTP server. */
+uerr_t
+ftp_rest (struct rbuf *rbuf, long offset)
+{
+ char *request, *respline;
+ int nwritten;
+ uerr_t err;
+ static char numbuf[20]; /* Buffer for the number */
+
+ long_to_string (numbuf, offset);
+ request = ftp_request ("REST", numbuf);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline != '3')
+ {
+ free (respline);
+ return FTPRESTFAIL;
+ }
+ free (respline);
+ /* All OK. */
+ return FTPOK;
+}
+
+/* Sends RETR command to the FTP server. */
+uerr_t
+ftp_retr (struct rbuf *rbuf, const char *file)
+{
+ char *request, *respline;
+ int nwritten;
+ uerr_t err;
+
+ /* Send RETR request. */
+ request = ftp_request ("RETR", file);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline == '5')
+ {
+ free (respline);
+ return FTPNSFOD;
+ }
+ if (*respline != '1')
+ {
+ free (respline);
+ return FTPRERR;
+ }
+ free (respline);
+ /* All OK. */
+ return FTPOK;
+}
+
+/* Sends the LIST command to the server. If FILE is NULL, send just
+ `LIST' (no space). */
+uerr_t
+ftp_list (struct rbuf *rbuf, const char *file)
+{
+ char *request, *respline;
+ int nwritten;
+ uerr_t err;
+
+ /* Send LIST request. */
+ request = ftp_request ("LIST", file);
+ nwritten = iwrite (RBUF_FD (rbuf), request, strlen (request));
+ if (nwritten < 0)
+ {
+ free (request);
+ return WRITEFAILED;
+ }
+ free (request);
+ /* Get appropriate response. */
+ err = ftp_response (rbuf, &respline);
+ if (err != FTPOK)
+ {
+ free (respline);
+ return err;
+ }
+ if (*respline == '5')
+ {
+ free (respline);
+ return FTPNSFOD;
+ }
+ if (*respline != '1')
+ {
+ free (respline);
+ return FTPRERR;
+ }
+ free (respline);
+ /* All OK. */
+ return FTPOK;
+}
--- /dev/null
+/* Parsing FTP `ls' output.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#include <sys/types.h>
+#include <ctype.h>
+#include <errno.h>
+
+#include "wget.h"
+#include "utils.h"
+#include "ftp.h"
+
+/* Converts symbolic permissions to their octal equivalent, e.g. the
+ string rwxr-xr-x to 755. For now, it knows nothing of
+ setuid/setgid/sticky. ACLs are ignored. */
+static int
+symperms (const char *s)
+{
+ int perms = 0, i;
+
+ if (strlen (s) < 9)
+ return 0;
+ for (i = 0; i < 3; i++, s += 3)
+ {
+ perms <<= 3;
+ perms += (((s[0] == 'r') << 2) + ((s[1] == 'w') << 1) +
+ (s[2] == 'x' || s[2] == 's'));
+ }
+ return perms;
+}
+
+
+/* Convert the Un*x-ish style directory listing stored in FILE to a
+ linked list of fileinfo (system-independent) entries. The contents
+ of FILE are considered to be produced by the standard Unix `ls -la'
+ output (whatever that might be). BSD (no group) and SYSV (with
+ group) listings are handled.
+
+ The time stamps are stored in a separate variable, time_t
+ compatible (I hope). The timezones are ignored. */
+static struct fileinfo *
+ftp_parse_unix_ls (const char *file)
+{
+ FILE *fp;
+ static const char *months[] = {
+ "Jan", "Feb", "Mar", "Apr", "May", "Jun",
+ "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
+ };
+ int next, len, i, error, ignore;
+ int year, month, day; /* for time analysis */
+ int hour, min, sec;
+ struct tm timestruct, *tnow;
+ time_t timenow;
+
+ char *line, *tok; /* tokenizer */
+ struct fileinfo *dir, *l, cur; /* list creation */
+
+ fp = fopen (file, "rb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, "%s: %s\n", file, strerror (errno));
+ return NULL;
+ }
+ dir = l = NULL;
+
+ /* Line loop to end of file: */
+ while ((line = read_whole_line (fp)))
+ {
+ DEBUGP (("%s\n", line));
+ len = strlen (line);
+ /* Destroy <CR> if there is one. */
+ if (len && line[len - 1] == '\r')
+ line[--len] = '\0';
+
+ /* Skip if total... */
+ if (!strncasecmp (line, "total", 5))
+ {
+ free (line);
+ continue;
+ }
+ /* Get the first token (permissions). */
+ tok = strtok (line, " ");
+ if (!tok)
+ {
+ free (line);
+ continue;
+ }
+
+ cur.name = NULL;
+ cur.linkto = NULL;
+
+ /* Decide whether we deal with a file or a directory. */
+ switch (*tok)
+ {
+ case '-':
+ cur.type = FT_PLAINFILE;
+ DEBUGP (("PLAINFILE; "));
+ break;
+ case 'd':
+ cur.type = FT_DIRECTORY;
+ DEBUGP (("DIRECTORY; "));
+ break;
+ case 'l':
+ cur.type = FT_SYMLINK;
+ DEBUGP (("SYMLINK; "));
+ break;
+ default:
+ cur.type = FT_UNKNOWN;
+ DEBUGP (("UNKNOWN; "));
+ break;
+ }
+
+ cur.perms = symperms (tok + 1);
+ DEBUGP (("perms %0o; ", cur.perms));
+
+ error = ignore = 0; /* Erroneous and ignored entries are
+ treated equally for now. */
+ year = hour = min = sec = 0; /* Silence the compiler. */
+ month = day = 0;
+ next = -1;
+ /* While there are tokens on the line, parse them. Next is the
+ number of tokens left until the filename.
+
+ Use the month-name token as the "anchor" (the place where the
+ position wrt the file name is "known"). When a month name is
+ encountered, `next' is set to 5. Also, the preceding
+ characters are parsed to get the file size.
+
+ This tactic is quite dubious when it comes to
+ internationalization issues (non-English month names), but it
+ works for now. */
+ while ((tok = strtok (NULL, " ")))
+ {
+ --next;
+ if (next < 0) /* a month name was not encountered */
+ {
+ for (i = 0; i < 12; i++)
+ if (!strcmp (tok, months[i]))
+ break;
+ /* If we got a month, it means the token before it is the
+ size, and the filename is three tokens away. */
+ if (i != 12)
+ {
+ char *t = tok - 2;
+ long mul = 1;
+
+ for (cur.size = 0; t > line && ISDIGIT (*t); mul *= 10, t--)
+ cur.size += mul * (*t - '0');
+ if (t == line)
+ {
+ /* Something is seriously wrong. */
+ error = 1;
+ break;
+ }
+ month = i;
+ next = 5;
+ DEBUGP (("month: %s; ", months[month]));
+ }
+ }
+ else if (next == 4) /* days */
+ {
+ if (tok[1]) /* two-digit... */
+ day = 10 * (*tok - '0') + tok[1] - '0';
+ else /* ...or one-digit */
+ day = *tok - '0';
+ DEBUGP (("day: %d; ", day));
+ }
+ else if (next == 3)
+ {
+ /* This ought to be either the time, or the year. Let's
+ be flexible!
+
+ If we have a number x, it's a year. If we have x:y,
+ it's hours and minutes. If we have x:y:z, z are
+ seconds. */
+ year = 0;
+ min = hour = sec = 0;
+ /* We must deal with digits. */
+ if (ISDIGIT (*tok))
+ {
+ /* Suppose it's year. */
+ for (; ISDIGIT (*tok); tok++)
+ year = (*tok - '0') + 10 * year;
+ if (*tok == ':')
+ {
+ /* This means these were hours! */
+ hour = year;
+ year = 0;
+ ++tok;
+ /* Get the minutes... */
+ for (; ISDIGIT (*tok); tok++)
+ min = (*tok - '0') + 10 * min;
+ if (*tok == ':')
+ {
+ /* ...and the seconds. */
+ ++tok;
+ for (; ISDIGIT (*tok); tok++)
+ sec = (*tok - '0') + 10 * sec;
+ }
+ }
+ }
+ if (year)
+ DEBUGP (("year: %d (no tm); ", year));
+ else
+ DEBUGP (("time: %02d:%02d:%02d (no yr); ", hour, min, sec));
+ }
+ else if (next == 2) /* The file name */
+ {
+ int fnlen;
+ char *p;
+
+ /* Since the file name may contain a SPC, it is possible
+ for strtok to handle it wrong. */
+ fnlen = strlen (tok);
+ if (fnlen < len - (tok - line))
+ {
+ /* So we have a SPC in the file name. Restore the
+ original. */
+ tok[fnlen] = ' ';
+ /* If the file is a symbolic link, it should have a
+ ` -> ' somewhere. */
+ if (cur.type == FT_SYMLINK)
+ {
+ p = strstr (tok, " -> ");
+ if (!p)
+ {
+ error = 1;
+ break;
+ }
+ cur.linkto = xstrdup (p + 4);
+ DEBUGP (("link to: %s\n", cur.linkto));
+ /* And separate it from the file name. */
+ *p = '\0';
+ }
+ }
+ /* If we have the filename, add it to the list of files or
+ directories. */
+ /* "." and ".." are an exception! */
+ if (!strcmp (tok, ".") || !strcmp (tok, ".."))
+ {
+ DEBUGP (("\nIgnoring `.' and `..'; "));
+ ignore = 1;
+ break;
+ }
+ /* Some FTP sites choose to have ls -F as their default
+ LIST output, which marks the symlinks with a trailing
+ `@', directory names with a trailing `/' and
+ executables with a trailing `*'. This is no problem
+ unless encountering a symbolic link ending with `@',
+ or an executable ending with `*' on a server without
+ default -F output. I believe these cases are very
+ rare. */
+ fnlen = strlen (tok); /* re-calculate `fnlen' */
+ cur.name = (char *)xmalloc (fnlen + 1);
+ memcpy (cur.name, tok, fnlen + 1);
+ if (fnlen)
+ {
+ if (cur.type == FT_DIRECTORY && cur.name[fnlen - 1] == '/')
+ {
+ cur.name[fnlen - 1] = '\0';
+ DEBUGP (("trailing `/' on dir.\n"));
+ }
+ else if (cur.type == FT_SYMLINK && cur.name[fnlen - 1] == '@')
+ {
+ cur.name[fnlen - 1] = '\0';
+ DEBUGP (("trailing `@' on link.\n"));
+ }
+ else if (cur.type == FT_PLAINFILE
+ && (cur.perms & 0111)
+ && cur.name[fnlen - 1] == '*')
+ {
+ cur.name[fnlen - 1] = '\0';
+ DEBUGP (("trailing `*' on exec.\n"));
+ }
+ } /* if (fnlen) */
+ else
+ error = 1;
+ break;
+ }
+ else
+ abort ();
+ } /* while */
+
+ if (!cur.name || (cur.type == FT_SYMLINK && !cur.linkto))
+ error = 1;
+
+ DEBUGP (("\n"));
+
+ if (error || ignore)
+ {
+ DEBUGP (("Skipping.\n"));
+ FREE_MAYBE (cur.name);
+ FREE_MAYBE (cur.linkto);
+ free (line);
+ continue;
+ }
+
+ if (!dir)
+ {
+ l = dir = (struct fileinfo *)xmalloc (sizeof (struct fileinfo));
+ memcpy (l, &cur, sizeof (cur));
+ l->prev = l->next = NULL;
+ }
+ else
+ {
+ cur.prev = l;
+ l->next = (struct fileinfo *)xmalloc (sizeof (struct fileinfo));
+ l = l->next;
+ memcpy (l, &cur, sizeof (cur));
+ l->next = NULL;
+ }
+ /* Get the current time. */
+ timenow = time (NULL);
+ tnow = localtime (&timenow);
+ /* Build the time-stamp (the idea by zaga@fly.cc.fer.hr). */
+ timestruct.tm_sec = sec;
+ timestruct.tm_min = min;
+ timestruct.tm_hour = hour;
+ timestruct.tm_mday = day;
+ timestruct.tm_mon = month;
+ if (year == 0)
+ {
+ /* Some listings will not specify the year if it is "obvious"
+ that the file was from the previous year. E.g. if today
+ is 97-01-12, and you see a file of Dec 15th, its year is
+ 1996, not 1997. Thanks to Vladimir Volovich for
+ mentioning this! */
+ if (month > tnow->tm_mon)
+ timestruct.tm_year = tnow->tm_year - 1;
+ else
+ timestruct.tm_year = tnow->tm_year;
+ }
+ else
+ timestruct.tm_year = year;
+ if (timestruct.tm_year >= 1900)
+ timestruct.tm_year -= 1900;
+ timestruct.tm_wday = 0;
+ timestruct.tm_yday = 0;
+ timestruct.tm_isdst = -1;
+ l->tstamp = mktime (&timestruct); /* store the time-stamp */
+
+ free (line);
+ }
+
+ fclose (fp);
+ return dir;
+}
+
+/* This function is just a stub. It should actually accept some kind
+ of information what system it is running on -- e.g. FPL_UNIX,
+ FPL_DOS, FPL_NT, FPL_VMS, etc. and a "guess-me" value, like
+ FPL_GUESS. Then it would call the appropriate parsers to fill up
+ fileinfos.
+
+ Since we currently support only the Unix FTP servers, this function
+ simply returns the result of ftp_parse_unix_ls(). */
+struct fileinfo *
+ftp_parse_ls (const char *file)
+{
+ return ftp_parse_unix_ls (file);
+}
--- /dev/null
+/* Opie (s/key) support for FTP.
+ Copyright (C) 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+
+#include "wget.h"
+#include "md5.h"
+
+/* Dictionary for integer-word translations. */
+static char Wp[2048][4] = {
+ { 'A', '\0', '\0', '\0' },
+ { 'A', 'B', 'E', '\0' },
+ { 'A', 'C', 'E', '\0' },
+ { 'A', 'C', 'T', '\0' },
+ { 'A', 'D', '\0', '\0' },
+ { 'A', 'D', 'A', '\0' },
+ { 'A', 'D', 'D', '\0' },
+ { 'A', 'G', 'O', '\0' },
+ { 'A', 'I', 'D', '\0' },
+ { 'A', 'I', 'M', '\0' },
+ { 'A', 'I', 'R', '\0' },
+ { 'A', 'L', 'L', '\0' },
+ { 'A', 'L', 'P', '\0' },
+ { 'A', 'M', '\0', '\0' },
+ { 'A', 'M', 'Y', '\0' },
+ { 'A', 'N', '\0', '\0' },
+ { 'A', 'N', 'A', '\0' },
+ { 'A', 'N', 'D', '\0' },
+ { 'A', 'N', 'N', '\0' },
+ { 'A', 'N', 'T', '\0' },
+ { 'A', 'N', 'Y', '\0' },
+ { 'A', 'P', 'E', '\0' },
+ { 'A', 'P', 'S', '\0' },
+ { 'A', 'P', 'T', '\0' },
+ { 'A', 'R', 'C', '\0' },
+ { 'A', 'R', 'E', '\0' },
+ { 'A', 'R', 'K', '\0' },
+ { 'A', 'R', 'M', '\0' },
+ { 'A', 'R', 'T', '\0' },
+ { 'A', 'S', '\0', '\0' },
+ { 'A', 'S', 'H', '\0' },
+ { 'A', 'S', 'K', '\0' },
+ { 'A', 'T', '\0', '\0' },
+ { 'A', 'T', 'E', '\0' },
+ { 'A', 'U', 'G', '\0' },
+ { 'A', 'U', 'K', '\0' },
+ { 'A', 'V', 'E', '\0' },
+ { 'A', 'W', 'E', '\0' },
+ { 'A', 'W', 'K', '\0' },
+ { 'A', 'W', 'L', '\0' },
+ { 'A', 'W', 'N', '\0' },
+ { 'A', 'X', '\0', '\0' },
+ { 'A', 'Y', 'E', '\0' },
+ { 'B', 'A', 'D', '\0' },
+ { 'B', 'A', 'G', '\0' },
+ { 'B', 'A', 'H', '\0' },
+ { 'B', 'A', 'M', '\0' },
+ { 'B', 'A', 'N', '\0' },
+ { 'B', 'A', 'R', '\0' },
+ { 'B', 'A', 'T', '\0' },
+ { 'B', 'A', 'Y', '\0' },
+ { 'B', 'E', '\0', '\0' },
+ { 'B', 'E', 'D', '\0' },
+ { 'B', 'E', 'E', '\0' },
+ { 'B', 'E', 'G', '\0' },
+ { 'B', 'E', 'N', '\0' },
+ { 'B', 'E', 'T', '\0' },
+ { 'B', 'E', 'Y', '\0' },
+ { 'B', 'I', 'B', '\0' },
+ { 'B', 'I', 'D', '\0' },
+ { 'B', 'I', 'G', '\0' },
+ { 'B', 'I', 'N', '\0' },
+ { 'B', 'I', 'T', '\0' },
+ { 'B', 'O', 'B', '\0' },
+ { 'B', 'O', 'G', '\0' },
+ { 'B', 'O', 'N', '\0' },
+ { 'B', 'O', 'O', '\0' },
+ { 'B', 'O', 'P', '\0' },
+ { 'B', 'O', 'W', '\0' },
+ { 'B', 'O', 'Y', '\0' },
+ { 'B', 'U', 'B', '\0' },
+ { 'B', 'U', 'D', '\0' },
+ { 'B', 'U', 'G', '\0' },
+ { 'B', 'U', 'M', '\0' },
+ { 'B', 'U', 'N', '\0' },
+ { 'B', 'U', 'S', '\0' },
+ { 'B', 'U', 'T', '\0' },
+ { 'B', 'U', 'Y', '\0' },
+ { 'B', 'Y', '\0', '\0' },
+ { 'B', 'Y', 'E', '\0' },
+ { 'C', 'A', 'B', '\0' },
+ { 'C', 'A', 'L', '\0' },
+ { 'C', 'A', 'M', '\0' },
+ { 'C', 'A', 'N', '\0' },
+ { 'C', 'A', 'P', '\0' },
+ { 'C', 'A', 'R', '\0' },
+ { 'C', 'A', 'T', '\0' },
+ { 'C', 'A', 'W', '\0' },
+ { 'C', 'O', 'D', '\0' },
+ { 'C', 'O', 'G', '\0' },
+ { 'C', 'O', 'L', '\0' },
+ { 'C', 'O', 'N', '\0' },
+ { 'C', 'O', 'O', '\0' },
+ { 'C', 'O', 'P', '\0' },
+ { 'C', 'O', 'T', '\0' },
+ { 'C', 'O', 'W', '\0' },
+ { 'C', 'O', 'Y', '\0' },
+ { 'C', 'R', 'Y', '\0' },
+ { 'C', 'U', 'B', '\0' },
+ { 'C', 'U', 'E', '\0' },
+ { 'C', 'U', 'P', '\0' },
+ { 'C', 'U', 'R', '\0' },
+ { 'C', 'U', 'T', '\0' },
+ { 'D', 'A', 'B', '\0' },
+ { 'D', 'A', 'D', '\0' },
+ { 'D', 'A', 'M', '\0' },
+ { 'D', 'A', 'N', '\0' },
+ { 'D', 'A', 'R', '\0' },
+ { 'D', 'A', 'Y', '\0' },
+ { 'D', 'E', 'E', '\0' },
+ { 'D', 'E', 'L', '\0' },
+ { 'D', 'E', 'N', '\0' },
+ { 'D', 'E', 'S', '\0' },
+ { 'D', 'E', 'W', '\0' },
+ { 'D', 'I', 'D', '\0' },
+ { 'D', 'I', 'E', '\0' },
+ { 'D', 'I', 'G', '\0' },
+ { 'D', 'I', 'N', '\0' },
+ { 'D', 'I', 'P', '\0' },
+ { 'D', 'O', '\0', '\0' },
+ { 'D', 'O', 'E', '\0' },
+ { 'D', 'O', 'G', '\0' },
+ { 'D', 'O', 'N', '\0' },
+ { 'D', 'O', 'T', '\0' },
+ { 'D', 'O', 'W', '\0' },
+ { 'D', 'R', 'Y', '\0' },
+ { 'D', 'U', 'B', '\0' },
+ { 'D', 'U', 'D', '\0' },
+ { 'D', 'U', 'E', '\0' },
+ { 'D', 'U', 'G', '\0' },
+ { 'D', 'U', 'N', '\0' },
+ { 'E', 'A', 'R', '\0' },
+ { 'E', 'A', 'T', '\0' },
+ { 'E', 'D', '\0', '\0' },
+ { 'E', 'E', 'L', '\0' },
+ { 'E', 'G', 'G', '\0' },
+ { 'E', 'G', 'O', '\0' },
+ { 'E', 'L', 'I', '\0' },
+ { 'E', 'L', 'K', '\0' },
+ { 'E', 'L', 'M', '\0' },
+ { 'E', 'L', 'Y', '\0' },
+ { 'E', 'M', '\0', '\0' },
+ { 'E', 'N', 'D', '\0' },
+ { 'E', 'S', 'T', '\0' },
+ { 'E', 'T', 'C', '\0' },
+ { 'E', 'V', 'A', '\0' },
+ { 'E', 'V', 'E', '\0' },
+ { 'E', 'W', 'E', '\0' },
+ { 'E', 'Y', 'E', '\0' },
+ { 'F', 'A', 'D', '\0' },
+ { 'F', 'A', 'N', '\0' },
+ { 'F', 'A', 'R', '\0' },
+ { 'F', 'A', 'T', '\0' },
+ { 'F', 'A', 'Y', '\0' },
+ { 'F', 'E', 'D', '\0' },
+ { 'F', 'E', 'E', '\0' },
+ { 'F', 'E', 'W', '\0' },
+ { 'F', 'I', 'B', '\0' },
+ { 'F', 'I', 'G', '\0' },
+ { 'F', 'I', 'N', '\0' },
+ { 'F', 'I', 'R', '\0' },
+ { 'F', 'I', 'T', '\0' },
+ { 'F', 'L', 'O', '\0' },
+ { 'F', 'L', 'Y', '\0' },
+ { 'F', 'O', 'E', '\0' },
+ { 'F', 'O', 'G', '\0' },
+ { 'F', 'O', 'R', '\0' },
+ { 'F', 'R', 'Y', '\0' },
+ { 'F', 'U', 'M', '\0' },
+ { 'F', 'U', 'N', '\0' },
+ { 'F', 'U', 'R', '\0' },
+ { 'G', 'A', 'B', '\0' },
+ { 'G', 'A', 'D', '\0' },
+ { 'G', 'A', 'G', '\0' },
+ { 'G', 'A', 'L', '\0' },
+ { 'G', 'A', 'M', '\0' },
+ { 'G', 'A', 'P', '\0' },
+ { 'G', 'A', 'S', '\0' },
+ { 'G', 'A', 'Y', '\0' },
+ { 'G', 'E', 'E', '\0' },
+ { 'G', 'E', 'L', '\0' },
+ { 'G', 'E', 'M', '\0' },
+ { 'G', 'E', 'T', '\0' },
+ { 'G', 'I', 'G', '\0' },
+ { 'G', 'I', 'L', '\0' },
+ { 'G', 'I', 'N', '\0' },
+ { 'G', 'O', '\0', '\0' },
+ { 'G', 'O', 'T', '\0' },
+ { 'G', 'U', 'M', '\0' },
+ { 'G', 'U', 'N', '\0' },
+ { 'G', 'U', 'S', '\0' },
+ { 'G', 'U', 'T', '\0' },
+ { 'G', 'U', 'Y', '\0' },
+ { 'G', 'Y', 'M', '\0' },
+ { 'G', 'Y', 'P', '\0' },
+ { 'H', 'A', '\0', '\0' },
+ { 'H', 'A', 'D', '\0' },
+ { 'H', 'A', 'L', '\0' },
+ { 'H', 'A', 'M', '\0' },
+ { 'H', 'A', 'N', '\0' },
+ { 'H', 'A', 'P', '\0' },
+ { 'H', 'A', 'S', '\0' },
+ { 'H', 'A', 'T', '\0' },
+ { 'H', 'A', 'W', '\0' },
+ { 'H', 'A', 'Y', '\0' },
+ { 'H', 'E', '\0', '\0' },
+ { 'H', 'E', 'M', '\0' },
+ { 'H', 'E', 'N', '\0' },
+ { 'H', 'E', 'R', '\0' },
+ { 'H', 'E', 'W', '\0' },
+ { 'H', 'E', 'Y', '\0' },
+ { 'H', 'I', '\0', '\0' },
+ { 'H', 'I', 'D', '\0' },
+ { 'H', 'I', 'M', '\0' },
+ { 'H', 'I', 'P', '\0' },
+ { 'H', 'I', 'S', '\0' },
+ { 'H', 'I', 'T', '\0' },
+ { 'H', 'O', '\0', '\0' },
+ { 'H', 'O', 'B', '\0' },
+ { 'H', 'O', 'C', '\0' },
+ { 'H', 'O', 'E', '\0' },
+ { 'H', 'O', 'G', '\0' },
+ { 'H', 'O', 'P', '\0' },
+ { 'H', 'O', 'T', '\0' },
+ { 'H', 'O', 'W', '\0' },
+ { 'H', 'U', 'B', '\0' },
+ { 'H', 'U', 'E', '\0' },
+ { 'H', 'U', 'G', '\0' },
+ { 'H', 'U', 'H', '\0' },
+ { 'H', 'U', 'M', '\0' },
+ { 'H', 'U', 'T', '\0' },
+ { 'I', '\0', '\0', '\0' },
+ { 'I', 'C', 'Y', '\0' },
+ { 'I', 'D', 'A', '\0' },
+ { 'I', 'F', '\0', '\0' },
+ { 'I', 'K', 'E', '\0' },
+ { 'I', 'L', 'L', '\0' },
+ { 'I', 'N', 'K', '\0' },
+ { 'I', 'N', 'N', '\0' },
+ { 'I', 'O', '\0', '\0' },
+ { 'I', 'O', 'N', '\0' },
+ { 'I', 'Q', '\0', '\0' },
+ { 'I', 'R', 'A', '\0' },
+ { 'I', 'R', 'E', '\0' },
+ { 'I', 'R', 'K', '\0' },
+ { 'I', 'S', '\0', '\0' },
+ { 'I', 'T', '\0', '\0' },
+ { 'I', 'T', 'S', '\0' },
+ { 'I', 'V', 'Y', '\0' },
+ { 'J', 'A', 'B', '\0' },
+ { 'J', 'A', 'G', '\0' },
+ { 'J', 'A', 'M', '\0' },
+ { 'J', 'A', 'N', '\0' },
+ { 'J', 'A', 'R', '\0' },
+ { 'J', 'A', 'W', '\0' },
+ { 'J', 'A', 'Y', '\0' },
+ { 'J', 'E', 'T', '\0' },
+ { 'J', 'I', 'G', '\0' },
+ { 'J', 'I', 'M', '\0' },
+ { 'J', 'O', '\0', '\0' },
+ { 'J', 'O', 'B', '\0' },
+ { 'J', 'O', 'E', '\0' },
+ { 'J', 'O', 'G', '\0' },
+ { 'J', 'O', 'T', '\0' },
+ { 'J', 'O', 'Y', '\0' },
+ { 'J', 'U', 'G', '\0' },
+ { 'J', 'U', 'T', '\0' },
+ { 'K', 'A', 'Y', '\0' },
+ { 'K', 'E', 'G', '\0' },
+ { 'K', 'E', 'N', '\0' },
+ { 'K', 'E', 'Y', '\0' },
+ { 'K', 'I', 'D', '\0' },
+ { 'K', 'I', 'M', '\0' },
+ { 'K', 'I', 'N', '\0' },
+ { 'K', 'I', 'T', '\0' },
+ { 'L', 'A', '\0', '\0' },
+ { 'L', 'A', 'B', '\0' },
+ { 'L', 'A', 'C', '\0' },
+ { 'L', 'A', 'D', '\0' },
+ { 'L', 'A', 'G', '\0' },
+ { 'L', 'A', 'M', '\0' },
+ { 'L', 'A', 'P', '\0' },
+ { 'L', 'A', 'W', '\0' },
+ { 'L', 'A', 'Y', '\0' },
+ { 'L', 'E', 'A', '\0' },
+ { 'L', 'E', 'D', '\0' },
+ { 'L', 'E', 'E', '\0' },
+ { 'L', 'E', 'G', '\0' },
+ { 'L', 'E', 'N', '\0' },
+ { 'L', 'E', 'O', '\0' },
+ { 'L', 'E', 'T', '\0' },
+ { 'L', 'E', 'W', '\0' },
+ { 'L', 'I', 'D', '\0' },
+ { 'L', 'I', 'E', '\0' },
+ { 'L', 'I', 'N', '\0' },
+ { 'L', 'I', 'P', '\0' },
+ { 'L', 'I', 'T', '\0' },
+ { 'L', 'O', '\0', '\0' },
+ { 'L', 'O', 'B', '\0' },
+ { 'L', 'O', 'G', '\0' },
+ { 'L', 'O', 'P', '\0' },
+ { 'L', 'O', 'S', '\0' },
+ { 'L', 'O', 'T', '\0' },
+ { 'L', 'O', 'U', '\0' },
+ { 'L', 'O', 'W', '\0' },
+ { 'L', 'O', 'Y', '\0' },
+ { 'L', 'U', 'G', '\0' },
+ { 'L', 'Y', 'E', '\0' },
+ { 'M', 'A', '\0', '\0' },
+ { 'M', 'A', 'C', '\0' },
+ { 'M', 'A', 'D', '\0' },
+ { 'M', 'A', 'E', '\0' },
+ { 'M', 'A', 'N', '\0' },
+ { 'M', 'A', 'O', '\0' },
+ { 'M', 'A', 'P', '\0' },
+ { 'M', 'A', 'T', '\0' },
+ { 'M', 'A', 'W', '\0' },
+ { 'M', 'A', 'Y', '\0' },
+ { 'M', 'E', '\0', '\0' },
+ { 'M', 'E', 'G', '\0' },
+ { 'M', 'E', 'L', '\0' },
+ { 'M', 'E', 'N', '\0' },
+ { 'M', 'E', 'T', '\0' },
+ { 'M', 'E', 'W', '\0' },
+ { 'M', 'I', 'D', '\0' },
+ { 'M', 'I', 'N', '\0' },
+ { 'M', 'I', 'T', '\0' },
+ { 'M', 'O', 'B', '\0' },
+ { 'M', 'O', 'D', '\0' },
+ { 'M', 'O', 'E', '\0' },
+ { 'M', 'O', 'O', '\0' },
+ { 'M', 'O', 'P', '\0' },
+ { 'M', 'O', 'S', '\0' },
+ { 'M', 'O', 'T', '\0' },
+ { 'M', 'O', 'W', '\0' },
+ { 'M', 'U', 'D', '\0' },
+ { 'M', 'U', 'G', '\0' },
+ { 'M', 'U', 'M', '\0' },
+ { 'M', 'Y', '\0', '\0' },
+ { 'N', 'A', 'B', '\0' },
+ { 'N', 'A', 'G', '\0' },
+ { 'N', 'A', 'N', '\0' },
+ { 'N', 'A', 'P', '\0' },
+ { 'N', 'A', 'T', '\0' },
+ { 'N', 'A', 'Y', '\0' },
+ { 'N', 'E', '\0', '\0' },
+ { 'N', 'E', 'D', '\0' },
+ { 'N', 'E', 'E', '\0' },
+ { 'N', 'E', 'T', '\0' },
+ { 'N', 'E', 'W', '\0' },
+ { 'N', 'I', 'B', '\0' },
+ { 'N', 'I', 'L', '\0' },
+ { 'N', 'I', 'P', '\0' },
+ { 'N', 'I', 'T', '\0' },
+ { 'N', 'O', '\0', '\0' },
+ { 'N', 'O', 'B', '\0' },
+ { 'N', 'O', 'D', '\0' },
+ { 'N', 'O', 'N', '\0' },
+ { 'N', 'O', 'R', '\0' },
+ { 'N', 'O', 'T', '\0' },
+ { 'N', 'O', 'V', '\0' },
+ { 'N', 'O', 'W', '\0' },
+ { 'N', 'U', '\0', '\0' },
+ { 'N', 'U', 'N', '\0' },
+ { 'N', 'U', 'T', '\0' },
+ { 'O', '\0', '\0', '\0' },
+ { 'O', 'A', 'F', '\0' },
+ { 'O', 'A', 'K', '\0' },
+ { 'O', 'A', 'R', '\0' },
+ { 'O', 'A', 'T', '\0' },
+ { 'O', 'D', 'D', '\0' },
+ { 'O', 'D', 'E', '\0' },
+ { 'O', 'F', '\0', '\0' },
+ { 'O', 'F', 'F', '\0' },
+ { 'O', 'F', 'T', '\0' },
+ { 'O', 'H', '\0', '\0' },
+ { 'O', 'I', 'L', '\0' },
+ { 'O', 'K', '\0', '\0' },
+ { 'O', 'L', 'D', '\0' },
+ { 'O', 'N', '\0', '\0' },
+ { 'O', 'N', 'E', '\0' },
+ { 'O', 'R', '\0', '\0' },
+ { 'O', 'R', 'B', '\0' },
+ { 'O', 'R', 'E', '\0' },
+ { 'O', 'R', 'R', '\0' },
+ { 'O', 'S', '\0', '\0' },
+ { 'O', 'T', 'T', '\0' },
+ { 'O', 'U', 'R', '\0' },
+ { 'O', 'U', 'T', '\0' },
+ { 'O', 'V', 'A', '\0' },
+ { 'O', 'W', '\0', '\0' },
+ { 'O', 'W', 'E', '\0' },
+ { 'O', 'W', 'L', '\0' },
+ { 'O', 'W', 'N', '\0' },
+ { 'O', 'X', '\0', '\0' },
+ { 'P', 'A', '\0', '\0' },
+ { 'P', 'A', 'D', '\0' },
+ { 'P', 'A', 'L', '\0' },
+ { 'P', 'A', 'M', '\0' },
+ { 'P', 'A', 'N', '\0' },
+ { 'P', 'A', 'P', '\0' },
+ { 'P', 'A', 'R', '\0' },
+ { 'P', 'A', 'T', '\0' },
+ { 'P', 'A', 'W', '\0' },
+ { 'P', 'A', 'Y', '\0' },
+ { 'P', 'E', 'A', '\0' },
+ { 'P', 'E', 'G', '\0' },
+ { 'P', 'E', 'N', '\0' },
+ { 'P', 'E', 'P', '\0' },
+ { 'P', 'E', 'R', '\0' },
+ { 'P', 'E', 'T', '\0' },
+ { 'P', 'E', 'W', '\0' },
+ { 'P', 'H', 'I', '\0' },
+ { 'P', 'I', '\0', '\0' },
+ { 'P', 'I', 'E', '\0' },
+ { 'P', 'I', 'N', '\0' },
+ { 'P', 'I', 'T', '\0' },
+ { 'P', 'L', 'Y', '\0' },
+ { 'P', 'O', '\0', '\0' },
+ { 'P', 'O', 'D', '\0' },
+ { 'P', 'O', 'E', '\0' },
+ { 'P', 'O', 'P', '\0' },
+ { 'P', 'O', 'T', '\0' },
+ { 'P', 'O', 'W', '\0' },
+ { 'P', 'R', 'O', '\0' },
+ { 'P', 'R', 'Y', '\0' },
+ { 'P', 'U', 'B', '\0' },
+ { 'P', 'U', 'G', '\0' },
+ { 'P', 'U', 'N', '\0' },
+ { 'P', 'U', 'P', '\0' },
+ { 'P', 'U', 'T', '\0' },
+ { 'Q', 'U', 'O', '\0' },
+ { 'R', 'A', 'G', '\0' },
+ { 'R', 'A', 'M', '\0' },
+ { 'R', 'A', 'N', '\0' },
+ { 'R', 'A', 'P', '\0' },
+ { 'R', 'A', 'T', '\0' },
+ { 'R', 'A', 'W', '\0' },
+ { 'R', 'A', 'Y', '\0' },
+ { 'R', 'E', 'B', '\0' },
+ { 'R', 'E', 'D', '\0' },
+ { 'R', 'E', 'P', '\0' },
+ { 'R', 'E', 'T', '\0' },
+ { 'R', 'I', 'B', '\0' },
+ { 'R', 'I', 'D', '\0' },
+ { 'R', 'I', 'G', '\0' },
+ { 'R', 'I', 'M', '\0' },
+ { 'R', 'I', 'O', '\0' },
+ { 'R', 'I', 'P', '\0' },
+ { 'R', 'O', 'B', '\0' },
+ { 'R', 'O', 'D', '\0' },
+ { 'R', 'O', 'E', '\0' },
+ { 'R', 'O', 'N', '\0' },
+ { 'R', 'O', 'T', '\0' },
+ { 'R', 'O', 'W', '\0' },
+ { 'R', 'O', 'Y', '\0' },
+ { 'R', 'U', 'B', '\0' },
+ { 'R', 'U', 'E', '\0' },
+ { 'R', 'U', 'G', '\0' },
+ { 'R', 'U', 'M', '\0' },
+ { 'R', 'U', 'N', '\0' },
+ { 'R', 'Y', 'E', '\0' },
+ { 'S', 'A', 'C', '\0' },
+ { 'S', 'A', 'D', '\0' },
+ { 'S', 'A', 'G', '\0' },
+ { 'S', 'A', 'L', '\0' },
+ { 'S', 'A', 'M', '\0' },
+ { 'S', 'A', 'N', '\0' },
+ { 'S', 'A', 'P', '\0' },
+ { 'S', 'A', 'T', '\0' },
+ { 'S', 'A', 'W', '\0' },
+ { 'S', 'A', 'Y', '\0' },
+ { 'S', 'E', 'A', '\0' },
+ { 'S', 'E', 'C', '\0' },
+ { 'S', 'E', 'E', '\0' },
+ { 'S', 'E', 'N', '\0' },
+ { 'S', 'E', 'T', '\0' },
+ { 'S', 'E', 'W', '\0' },
+ { 'S', 'H', 'E', '\0' },
+ { 'S', 'H', 'Y', '\0' },
+ { 'S', 'I', 'N', '\0' },
+ { 'S', 'I', 'P', '\0' },
+ { 'S', 'I', 'R', '\0' },
+ { 'S', 'I', 'S', '\0' },
+ { 'S', 'I', 'T', '\0' },
+ { 'S', 'K', 'I', '\0' },
+ { 'S', 'K', 'Y', '\0' },
+ { 'S', 'L', 'Y', '\0' },
+ { 'S', 'O', '\0', '\0' },
+ { 'S', 'O', 'B', '\0' },
+ { 'S', 'O', 'D', '\0' },
+ { 'S', 'O', 'N', '\0' },
+ { 'S', 'O', 'P', '\0' },
+ { 'S', 'O', 'W', '\0' },
+ { 'S', 'O', 'Y', '\0' },
+ { 'S', 'P', 'A', '\0' },
+ { 'S', 'P', 'Y', '\0' },
+ { 'S', 'U', 'B', '\0' },
+ { 'S', 'U', 'D', '\0' },
+ { 'S', 'U', 'E', '\0' },
+ { 'S', 'U', 'M', '\0' },
+ { 'S', 'U', 'N', '\0' },
+ { 'S', 'U', 'P', '\0' },
+ { 'T', 'A', 'B', '\0' },
+ { 'T', 'A', 'D', '\0' },
+ { 'T', 'A', 'G', '\0' },
+ { 'T', 'A', 'N', '\0' },
+ { 'T', 'A', 'P', '\0' },
+ { 'T', 'A', 'R', '\0' },
+ { 'T', 'E', 'A', '\0' },
+ { 'T', 'E', 'D', '\0' },
+ { 'T', 'E', 'E', '\0' },
+ { 'T', 'E', 'N', '\0' },
+ { 'T', 'H', 'E', '\0' },
+ { 'T', 'H', 'Y', '\0' },
+ { 'T', 'I', 'C', '\0' },
+ { 'T', 'I', 'E', '\0' },
+ { 'T', 'I', 'M', '\0' },
+ { 'T', 'I', 'N', '\0' },
+ { 'T', 'I', 'P', '\0' },
+ { 'T', 'O', '\0', '\0' },
+ { 'T', 'O', 'E', '\0' },
+ { 'T', 'O', 'G', '\0' },
+ { 'T', 'O', 'M', '\0' },
+ { 'T', 'O', 'N', '\0' },
+ { 'T', 'O', 'O', '\0' },
+ { 'T', 'O', 'P', '\0' },
+ { 'T', 'O', 'W', '\0' },
+ { 'T', 'O', 'Y', '\0' },
+ { 'T', 'R', 'Y', '\0' },
+ { 'T', 'U', 'B', '\0' },
+ { 'T', 'U', 'G', '\0' },
+ { 'T', 'U', 'M', '\0' },
+ { 'T', 'U', 'N', '\0' },
+ { 'T', 'W', 'O', '\0' },
+ { 'U', 'N', '\0', '\0' },
+ { 'U', 'P', '\0', '\0' },
+ { 'U', 'S', '\0', '\0' },
+ { 'U', 'S', 'E', '\0' },
+ { 'V', 'A', 'N', '\0' },
+ { 'V', 'A', 'T', '\0' },
+ { 'V', 'E', 'T', '\0' },
+ { 'V', 'I', 'E', '\0' },
+ { 'W', 'A', 'D', '\0' },
+ { 'W', 'A', 'G', '\0' },
+ { 'W', 'A', 'R', '\0' },
+ { 'W', 'A', 'S', '\0' },
+ { 'W', 'A', 'Y', '\0' },
+ { 'W', 'E', '\0', '\0' },
+ { 'W', 'E', 'B', '\0' },
+ { 'W', 'E', 'D', '\0' },
+ { 'W', 'E', 'E', '\0' },
+ { 'W', 'E', 'T', '\0' },
+ { 'W', 'H', 'O', '\0' },
+ { 'W', 'H', 'Y', '\0' },
+ { 'W', 'I', 'N', '\0' },
+ { 'W', 'I', 'T', '\0' },
+ { 'W', 'O', 'K', '\0' },
+ { 'W', 'O', 'N', '\0' },
+ { 'W', 'O', 'O', '\0' },
+ { 'W', 'O', 'W', '\0' },
+ { 'W', 'R', 'Y', '\0' },
+ { 'W', 'U', '\0', '\0' },
+ { 'Y', 'A', 'M', '\0' },
+ { 'Y', 'A', 'P', '\0' },
+ { 'Y', 'A', 'W', '\0' },
+ { 'Y', 'E', '\0', '\0' },
+ { 'Y', 'E', 'A', '\0' },
+ { 'Y', 'E', 'S', '\0' },
+ { 'Y', 'E', 'T', '\0' },
+ { 'Y', 'O', 'U', '\0' },
+ { 'A', 'B', 'E', 'D' },
+ { 'A', 'B', 'E', 'L' },
+ { 'A', 'B', 'E', 'T' },
+ { 'A', 'B', 'L', 'E' },
+ { 'A', 'B', 'U', 'T' },
+ { 'A', 'C', 'H', 'E' },
+ { 'A', 'C', 'I', 'D' },
+ { 'A', 'C', 'M', 'E' },
+ { 'A', 'C', 'R', 'E' },
+ { 'A', 'C', 'T', 'A' },
+ { 'A', 'C', 'T', 'S' },
+ { 'A', 'D', 'A', 'M' },
+ { 'A', 'D', 'D', 'S' },
+ { 'A', 'D', 'E', 'N' },
+ { 'A', 'F', 'A', 'R' },
+ { 'A', 'F', 'R', 'O' },
+ { 'A', 'G', 'E', 'E' },
+ { 'A', 'H', 'E', 'M' },
+ { 'A', 'H', 'O', 'Y' },
+ { 'A', 'I', 'D', 'A' },
+ { 'A', 'I', 'D', 'E' },
+ { 'A', 'I', 'D', 'S' },
+ { 'A', 'I', 'R', 'Y' },
+ { 'A', 'J', 'A', 'R' },
+ { 'A', 'K', 'I', 'N' },
+ { 'A', 'L', 'A', 'N' },
+ { 'A', 'L', 'E', 'C' },
+ { 'A', 'L', 'G', 'A' },
+ { 'A', 'L', 'I', 'A' },
+ { 'A', 'L', 'L', 'Y' },
+ { 'A', 'L', 'M', 'A' },
+ { 'A', 'L', 'O', 'E' },
+ { 'A', 'L', 'S', 'O' },
+ { 'A', 'L', 'T', 'O' },
+ { 'A', 'L', 'U', 'M' },
+ { 'A', 'L', 'V', 'A' },
+ { 'A', 'M', 'E', 'N' },
+ { 'A', 'M', 'E', 'S' },
+ { 'A', 'M', 'I', 'D' },
+ { 'A', 'M', 'M', 'O' },
+ { 'A', 'M', 'O', 'K' },
+ { 'A', 'M', 'O', 'S' },
+ { 'A', 'M', 'R', 'A' },
+ { 'A', 'N', 'D', 'Y' },
+ { 'A', 'N', 'E', 'W' },
+ { 'A', 'N', 'N', 'A' },
+ { 'A', 'N', 'N', 'E' },
+ { 'A', 'N', 'T', 'E' },
+ { 'A', 'N', 'T', 'I' },
+ { 'A', 'Q', 'U', 'A' },
+ { 'A', 'R', 'A', 'B' },
+ { 'A', 'R', 'C', 'H' },
+ { 'A', 'R', 'E', 'A' },
+ { 'A', 'R', 'G', 'O' },
+ { 'A', 'R', 'I', 'D' },
+ { 'A', 'R', 'M', 'Y' },
+ { 'A', 'R', 'T', 'S' },
+ { 'A', 'R', 'T', 'Y' },
+ { 'A', 'S', 'I', 'A' },
+ { 'A', 'S', 'K', 'S' },
+ { 'A', 'T', 'O', 'M' },
+ { 'A', 'U', 'N', 'T' },
+ { 'A', 'U', 'R', 'A' },
+ { 'A', 'U', 'T', 'O' },
+ { 'A', 'V', 'E', 'R' },
+ { 'A', 'V', 'I', 'D' },
+ { 'A', 'V', 'I', 'S' },
+ { 'A', 'V', 'O', 'N' },
+ { 'A', 'V', 'O', 'W' },
+ { 'A', 'W', 'A', 'Y' },
+ { 'A', 'W', 'R', 'Y' },
+ { 'B', 'A', 'B', 'E' },
+ { 'B', 'A', 'B', 'Y' },
+ { 'B', 'A', 'C', 'H' },
+ { 'B', 'A', 'C', 'K' },
+ { 'B', 'A', 'D', 'E' },
+ { 'B', 'A', 'I', 'L' },
+ { 'B', 'A', 'I', 'T' },
+ { 'B', 'A', 'K', 'E' },
+ { 'B', 'A', 'L', 'D' },
+ { 'B', 'A', 'L', 'E' },
+ { 'B', 'A', 'L', 'I' },
+ { 'B', 'A', 'L', 'K' },
+ { 'B', 'A', 'L', 'L' },
+ { 'B', 'A', 'L', 'M' },
+ { 'B', 'A', 'N', 'D' },
+ { 'B', 'A', 'N', 'E' },
+ { 'B', 'A', 'N', 'G' },
+ { 'B', 'A', 'N', 'K' },
+ { 'B', 'A', 'R', 'B' },
+ { 'B', 'A', 'R', 'D' },
+ { 'B', 'A', 'R', 'E' },
+ { 'B', 'A', 'R', 'K' },
+ { 'B', 'A', 'R', 'N' },
+ { 'B', 'A', 'R', 'R' },
+ { 'B', 'A', 'S', 'E' },
+ { 'B', 'A', 'S', 'H' },
+ { 'B', 'A', 'S', 'K' },
+ { 'B', 'A', 'S', 'S' },
+ { 'B', 'A', 'T', 'E' },
+ { 'B', 'A', 'T', 'H' },
+ { 'B', 'A', 'W', 'D' },
+ { 'B', 'A', 'W', 'L' },
+ { 'B', 'E', 'A', 'D' },
+ { 'B', 'E', 'A', 'K' },
+ { 'B', 'E', 'A', 'M' },
+ { 'B', 'E', 'A', 'N' },
+ { 'B', 'E', 'A', 'R' },
+ { 'B', 'E', 'A', 'T' },
+ { 'B', 'E', 'A', 'U' },
+ { 'B', 'E', 'C', 'K' },
+ { 'B', 'E', 'E', 'F' },
+ { 'B', 'E', 'E', 'N' },
+ { 'B', 'E', 'E', 'R' },
+ { 'B', 'E', 'E', 'T' },
+ { 'B', 'E', 'L', 'A' },
+ { 'B', 'E', 'L', 'L' },
+ { 'B', 'E', 'L', 'T' },
+ { 'B', 'E', 'N', 'D' },
+ { 'B', 'E', 'N', 'T' },
+ { 'B', 'E', 'R', 'G' },
+ { 'B', 'E', 'R', 'N' },
+ { 'B', 'E', 'R', 'T' },
+ { 'B', 'E', 'S', 'S' },
+ { 'B', 'E', 'S', 'T' },
+ { 'B', 'E', 'T', 'A' },
+ { 'B', 'E', 'T', 'H' },
+ { 'B', 'H', 'O', 'Y' },
+ { 'B', 'I', 'A', 'S' },
+ { 'B', 'I', 'D', 'E' },
+ { 'B', 'I', 'E', 'N' },
+ { 'B', 'I', 'L', 'E' },
+ { 'B', 'I', 'L', 'K' },
+ { 'B', 'I', 'L', 'L' },
+ { 'B', 'I', 'N', 'D' },
+ { 'B', 'I', 'N', 'G' },
+ { 'B', 'I', 'R', 'D' },
+ { 'B', 'I', 'T', 'E' },
+ { 'B', 'I', 'T', 'S' },
+ { 'B', 'L', 'A', 'B' },
+ { 'B', 'L', 'A', 'T' },
+ { 'B', 'L', 'E', 'D' },
+ { 'B', 'L', 'E', 'W' },
+ { 'B', 'L', 'O', 'B' },
+ { 'B', 'L', 'O', 'C' },
+ { 'B', 'L', 'O', 'T' },
+ { 'B', 'L', 'O', 'W' },
+ { 'B', 'L', 'U', 'E' },
+ { 'B', 'L', 'U', 'M' },
+ { 'B', 'L', 'U', 'R' },
+ { 'B', 'O', 'A', 'R' },
+ { 'B', 'O', 'A', 'T' },
+ { 'B', 'O', 'C', 'A' },
+ { 'B', 'O', 'C', 'K' },
+ { 'B', 'O', 'D', 'E' },
+ { 'B', 'O', 'D', 'Y' },
+ { 'B', 'O', 'G', 'Y' },
+ { 'B', 'O', 'H', 'R' },
+ { 'B', 'O', 'I', 'L' },
+ { 'B', 'O', 'L', 'D' },
+ { 'B', 'O', 'L', 'O' },
+ { 'B', 'O', 'L', 'T' },
+ { 'B', 'O', 'M', 'B' },
+ { 'B', 'O', 'N', 'A' },
+ { 'B', 'O', 'N', 'D' },
+ { 'B', 'O', 'N', 'E' },
+ { 'B', 'O', 'N', 'G' },
+ { 'B', 'O', 'N', 'N' },
+ { 'B', 'O', 'N', 'Y' },
+ { 'B', 'O', 'O', 'K' },
+ { 'B', 'O', 'O', 'M' },
+ { 'B', 'O', 'O', 'N' },
+ { 'B', 'O', 'O', 'T' },
+ { 'B', 'O', 'R', 'E' },
+ { 'B', 'O', 'R', 'G' },
+ { 'B', 'O', 'R', 'N' },
+ { 'B', 'O', 'S', 'E' },
+ { 'B', 'O', 'S', 'S' },
+ { 'B', 'O', 'T', 'H' },
+ { 'B', 'O', 'U', 'T' },
+ { 'B', 'O', 'W', 'L' },
+ { 'B', 'O', 'Y', 'D' },
+ { 'B', 'R', 'A', 'D' },
+ { 'B', 'R', 'A', 'E' },
+ { 'B', 'R', 'A', 'G' },
+ { 'B', 'R', 'A', 'N' },
+ { 'B', 'R', 'A', 'Y' },
+ { 'B', 'R', 'E', 'D' },
+ { 'B', 'R', 'E', 'W' },
+ { 'B', 'R', 'I', 'G' },
+ { 'B', 'R', 'I', 'M' },
+ { 'B', 'R', 'O', 'W' },
+ { 'B', 'U', 'C', 'K' },
+ { 'B', 'U', 'D', 'D' },
+ { 'B', 'U', 'F', 'F' },
+ { 'B', 'U', 'L', 'B' },
+ { 'B', 'U', 'L', 'K' },
+ { 'B', 'U', 'L', 'L' },
+ { 'B', 'U', 'N', 'K' },
+ { 'B', 'U', 'N', 'T' },
+ { 'B', 'U', 'O', 'Y' },
+ { 'B', 'U', 'R', 'G' },
+ { 'B', 'U', 'R', 'L' },
+ { 'B', 'U', 'R', 'N' },
+ { 'B', 'U', 'R', 'R' },
+ { 'B', 'U', 'R', 'T' },
+ { 'B', 'U', 'R', 'Y' },
+ { 'B', 'U', 'S', 'H' },
+ { 'B', 'U', 'S', 'S' },
+ { 'B', 'U', 'S', 'T' },
+ { 'B', 'U', 'S', 'Y' },
+ { 'B', 'Y', 'T', 'E' },
+ { 'C', 'A', 'D', 'Y' },
+ { 'C', 'A', 'F', 'E' },
+ { 'C', 'A', 'G', 'E' },
+ { 'C', 'A', 'I', 'N' },
+ { 'C', 'A', 'K', 'E' },
+ { 'C', 'A', 'L', 'F' },
+ { 'C', 'A', 'L', 'L' },
+ { 'C', 'A', 'L', 'M' },
+ { 'C', 'A', 'M', 'E' },
+ { 'C', 'A', 'N', 'E' },
+ { 'C', 'A', 'N', 'T' },
+ { 'C', 'A', 'R', 'D' },
+ { 'C', 'A', 'R', 'E' },
+ { 'C', 'A', 'R', 'L' },
+ { 'C', 'A', 'R', 'R' },
+ { 'C', 'A', 'R', 'T' },
+ { 'C', 'A', 'S', 'E' },
+ { 'C', 'A', 'S', 'H' },
+ { 'C', 'A', 'S', 'K' },
+ { 'C', 'A', 'S', 'T' },
+ { 'C', 'A', 'V', 'E' },
+ { 'C', 'E', 'I', 'L' },
+ { 'C', 'E', 'L', 'L' },
+ { 'C', 'E', 'N', 'T' },
+ { 'C', 'E', 'R', 'N' },
+ { 'C', 'H', 'A', 'D' },
+ { 'C', 'H', 'A', 'R' },
+ { 'C', 'H', 'A', 'T' },
+ { 'C', 'H', 'A', 'W' },
+ { 'C', 'H', 'E', 'F' },
+ { 'C', 'H', 'E', 'N' },
+ { 'C', 'H', 'E', 'W' },
+ { 'C', 'H', 'I', 'C' },
+ { 'C', 'H', 'I', 'N' },
+ { 'C', 'H', 'O', 'U' },
+ { 'C', 'H', 'O', 'W' },
+ { 'C', 'H', 'U', 'B' },
+ { 'C', 'H', 'U', 'G' },
+ { 'C', 'H', 'U', 'M' },
+ { 'C', 'I', 'T', 'E' },
+ { 'C', 'I', 'T', 'Y' },
+ { 'C', 'L', 'A', 'D' },
+ { 'C', 'L', 'A', 'M' },
+ { 'C', 'L', 'A', 'N' },
+ { 'C', 'L', 'A', 'W' },
+ { 'C', 'L', 'A', 'Y' },
+ { 'C', 'L', 'O', 'D' },
+ { 'C', 'L', 'O', 'G' },
+ { 'C', 'L', 'O', 'T' },
+ { 'C', 'L', 'U', 'B' },
+ { 'C', 'L', 'U', 'E' },
+ { 'C', 'O', 'A', 'L' },
+ { 'C', 'O', 'A', 'T' },
+ { 'C', 'O', 'C', 'A' },
+ { 'C', 'O', 'C', 'K' },
+ { 'C', 'O', 'C', 'O' },
+ { 'C', 'O', 'D', 'A' },
+ { 'C', 'O', 'D', 'E' },
+ { 'C', 'O', 'D', 'Y' },
+ { 'C', 'O', 'E', 'D' },
+ { 'C', 'O', 'I', 'L' },
+ { 'C', 'O', 'I', 'N' },
+ { 'C', 'O', 'K', 'E' },
+ { 'C', 'O', 'L', 'A' },
+ { 'C', 'O', 'L', 'D' },
+ { 'C', 'O', 'L', 'T' },
+ { 'C', 'O', 'M', 'A' },
+ { 'C', 'O', 'M', 'B' },
+ { 'C', 'O', 'M', 'E' },
+ { 'C', 'O', 'O', 'K' },
+ { 'C', 'O', 'O', 'L' },
+ { 'C', 'O', 'O', 'N' },
+ { 'C', 'O', 'O', 'T' },
+ { 'C', 'O', 'R', 'D' },
+ { 'C', 'O', 'R', 'E' },
+ { 'C', 'O', 'R', 'K' },
+ { 'C', 'O', 'R', 'N' },
+ { 'C', 'O', 'S', 'T' },
+ { 'C', 'O', 'V', 'E' },
+ { 'C', 'O', 'W', 'L' },
+ { 'C', 'R', 'A', 'B' },
+ { 'C', 'R', 'A', 'G' },
+ { 'C', 'R', 'A', 'M' },
+ { 'C', 'R', 'A', 'Y' },
+ { 'C', 'R', 'E', 'W' },
+ { 'C', 'R', 'I', 'B' },
+ { 'C', 'R', 'O', 'W' },
+ { 'C', 'R', 'U', 'D' },
+ { 'C', 'U', 'B', 'A' },
+ { 'C', 'U', 'B', 'E' },
+ { 'C', 'U', 'F', 'F' },
+ { 'C', 'U', 'L', 'L' },
+ { 'C', 'U', 'L', 'T' },
+ { 'C', 'U', 'N', 'Y' },
+ { 'C', 'U', 'R', 'B' },
+ { 'C', 'U', 'R', 'D' },
+ { 'C', 'U', 'R', 'E' },
+ { 'C', 'U', 'R', 'L' },
+ { 'C', 'U', 'R', 'T' },
+ { 'C', 'U', 'T', 'S' },
+ { 'D', 'A', 'D', 'E' },
+ { 'D', 'A', 'L', 'E' },
+ { 'D', 'A', 'M', 'E' },
+ { 'D', 'A', 'N', 'A' },
+ { 'D', 'A', 'N', 'E' },
+ { 'D', 'A', 'N', 'G' },
+ { 'D', 'A', 'N', 'K' },
+ { 'D', 'A', 'R', 'E' },
+ { 'D', 'A', 'R', 'K' },
+ { 'D', 'A', 'R', 'N' },
+ { 'D', 'A', 'R', 'T' },
+ { 'D', 'A', 'S', 'H' },
+ { 'D', 'A', 'T', 'A' },
+ { 'D', 'A', 'T', 'E' },
+ { 'D', 'A', 'V', 'E' },
+ { 'D', 'A', 'V', 'Y' },
+ { 'D', 'A', 'W', 'N' },
+ { 'D', 'A', 'Y', 'S' },
+ { 'D', 'E', 'A', 'D' },
+ { 'D', 'E', 'A', 'F' },
+ { 'D', 'E', 'A', 'L' },
+ { 'D', 'E', 'A', 'N' },
+ { 'D', 'E', 'A', 'R' },
+ { 'D', 'E', 'B', 'T' },
+ { 'D', 'E', 'C', 'K' },
+ { 'D', 'E', 'E', 'D' },
+ { 'D', 'E', 'E', 'M' },
+ { 'D', 'E', 'E', 'R' },
+ { 'D', 'E', 'F', 'T' },
+ { 'D', 'E', 'F', 'Y' },
+ { 'D', 'E', 'L', 'L' },
+ { 'D', 'E', 'N', 'T' },
+ { 'D', 'E', 'N', 'Y' },
+ { 'D', 'E', 'S', 'K' },
+ { 'D', 'I', 'A', 'L' },
+ { 'D', 'I', 'C', 'E' },
+ { 'D', 'I', 'E', 'D' },
+ { 'D', 'I', 'E', 'T' },
+ { 'D', 'I', 'M', 'E' },
+ { 'D', 'I', 'N', 'E' },
+ { 'D', 'I', 'N', 'G' },
+ { 'D', 'I', 'N', 'T' },
+ { 'D', 'I', 'R', 'E' },
+ { 'D', 'I', 'R', 'T' },
+ { 'D', 'I', 'S', 'C' },
+ { 'D', 'I', 'S', 'H' },
+ { 'D', 'I', 'S', 'K' },
+ { 'D', 'I', 'V', 'E' },
+ { 'D', 'O', 'C', 'K' },
+ { 'D', 'O', 'E', 'S' },
+ { 'D', 'O', 'L', 'E' },
+ { 'D', 'O', 'L', 'L' },
+ { 'D', 'O', 'L', 'T' },
+ { 'D', 'O', 'M', 'E' },
+ { 'D', 'O', 'N', 'E' },
+ { 'D', 'O', 'O', 'M' },
+ { 'D', 'O', 'O', 'R' },
+ { 'D', 'O', 'R', 'A' },
+ { 'D', 'O', 'S', 'E' },
+ { 'D', 'O', 'T', 'E' },
+ { 'D', 'O', 'U', 'G' },
+ { 'D', 'O', 'U', 'R' },
+ { 'D', 'O', 'V', 'E' },
+ { 'D', 'O', 'W', 'N' },
+ { 'D', 'R', 'A', 'B' },
+ { 'D', 'R', 'A', 'G' },
+ { 'D', 'R', 'A', 'M' },
+ { 'D', 'R', 'A', 'W' },
+ { 'D', 'R', 'E', 'W' },
+ { 'D', 'R', 'U', 'B' },
+ { 'D', 'R', 'U', 'G' },
+ { 'D', 'R', 'U', 'M' },
+ { 'D', 'U', 'A', 'L' },
+ { 'D', 'U', 'C', 'K' },
+ { 'D', 'U', 'C', 'T' },
+ { 'D', 'U', 'E', 'L' },
+ { 'D', 'U', 'E', 'T' },
+ { 'D', 'U', 'K', 'E' },
+ { 'D', 'U', 'L', 'L' },
+ { 'D', 'U', 'M', 'B' },
+ { 'D', 'U', 'N', 'E' },
+ { 'D', 'U', 'N', 'K' },
+ { 'D', 'U', 'S', 'K' },
+ { 'D', 'U', 'S', 'T' },
+ { 'D', 'U', 'T', 'Y' },
+ { 'E', 'A', 'C', 'H' },
+ { 'E', 'A', 'R', 'L' },
+ { 'E', 'A', 'R', 'N' },
+ { 'E', 'A', 'S', 'E' },
+ { 'E', 'A', 'S', 'T' },
+ { 'E', 'A', 'S', 'Y' },
+ { 'E', 'B', 'E', 'N' },
+ { 'E', 'C', 'H', 'O' },
+ { 'E', 'D', 'D', 'Y' },
+ { 'E', 'D', 'E', 'N' },
+ { 'E', 'D', 'G', 'E' },
+ { 'E', 'D', 'G', 'Y' },
+ { 'E', 'D', 'I', 'T' },
+ { 'E', 'D', 'N', 'A' },
+ { 'E', 'G', 'A', 'N' },
+ { 'E', 'L', 'A', 'N' },
+ { 'E', 'L', 'B', 'A' },
+ { 'E', 'L', 'L', 'A' },
+ { 'E', 'L', 'S', 'E' },
+ { 'E', 'M', 'I', 'L' },
+ { 'E', 'M', 'I', 'T' },
+ { 'E', 'M', 'M', 'A' },
+ { 'E', 'N', 'D', 'S' },
+ { 'E', 'R', 'I', 'C' },
+ { 'E', 'R', 'O', 'S' },
+ { 'E', 'V', 'E', 'N' },
+ { 'E', 'V', 'E', 'R' },
+ { 'E', 'V', 'I', 'L' },
+ { 'E', 'Y', 'E', 'D' },
+ { 'F', 'A', 'C', 'E' },
+ { 'F', 'A', 'C', 'T' },
+ { 'F', 'A', 'D', 'E' },
+ { 'F', 'A', 'I', 'L' },
+ { 'F', 'A', 'I', 'N' },
+ { 'F', 'A', 'I', 'R' },
+ { 'F', 'A', 'K', 'E' },
+ { 'F', 'A', 'L', 'L' },
+ { 'F', 'A', 'M', 'E' },
+ { 'F', 'A', 'N', 'G' },
+ { 'F', 'A', 'R', 'M' },
+ { 'F', 'A', 'S', 'T' },
+ { 'F', 'A', 'T', 'E' },
+ { 'F', 'A', 'W', 'N' },
+ { 'F', 'E', 'A', 'R' },
+ { 'F', 'E', 'A', 'T' },
+ { 'F', 'E', 'E', 'D' },
+ { 'F', 'E', 'E', 'L' },
+ { 'F', 'E', 'E', 'T' },
+ { 'F', 'E', 'L', 'L' },
+ { 'F', 'E', 'L', 'T' },
+ { 'F', 'E', 'N', 'D' },
+ { 'F', 'E', 'R', 'N' },
+ { 'F', 'E', 'S', 'T' },
+ { 'F', 'E', 'U', 'D' },
+ { 'F', 'I', 'E', 'F' },
+ { 'F', 'I', 'G', 'S' },
+ { 'F', 'I', 'L', 'E' },
+ { 'F', 'I', 'L', 'L' },
+ { 'F', 'I', 'L', 'M' },
+ { 'F', 'I', 'N', 'D' },
+ { 'F', 'I', 'N', 'E' },
+ { 'F', 'I', 'N', 'K' },
+ { 'F', 'I', 'R', 'E' },
+ { 'F', 'I', 'R', 'M' },
+ { 'F', 'I', 'S', 'H' },
+ { 'F', 'I', 'S', 'K' },
+ { 'F', 'I', 'S', 'T' },
+ { 'F', 'I', 'T', 'S' },
+ { 'F', 'I', 'V', 'E' },
+ { 'F', 'L', 'A', 'G' },
+ { 'F', 'L', 'A', 'K' },
+ { 'F', 'L', 'A', 'M' },
+ { 'F', 'L', 'A', 'T' },
+ { 'F', 'L', 'A', 'W' },
+ { 'F', 'L', 'E', 'A' },
+ { 'F', 'L', 'E', 'D' },
+ { 'F', 'L', 'E', 'W' },
+ { 'F', 'L', 'I', 'T' },
+ { 'F', 'L', 'O', 'C' },
+ { 'F', 'L', 'O', 'G' },
+ { 'F', 'L', 'O', 'W' },
+ { 'F', 'L', 'U', 'B' },
+ { 'F', 'L', 'U', 'E' },
+ { 'F', 'O', 'A', 'L' },
+ { 'F', 'O', 'A', 'M' },
+ { 'F', 'O', 'G', 'Y' },
+ { 'F', 'O', 'I', 'L' },
+ { 'F', 'O', 'L', 'D' },
+ { 'F', 'O', 'L', 'K' },
+ { 'F', 'O', 'N', 'D' },
+ { 'F', 'O', 'N', 'T' },
+ { 'F', 'O', 'O', 'D' },
+ { 'F', 'O', 'O', 'L' },
+ { 'F', 'O', 'O', 'T' },
+ { 'F', 'O', 'R', 'D' },
+ { 'F', 'O', 'R', 'E' },
+ { 'F', 'O', 'R', 'K' },
+ { 'F', 'O', 'R', 'M' },
+ { 'F', 'O', 'R', 'T' },
+ { 'F', 'O', 'S', 'S' },
+ { 'F', 'O', 'U', 'L' },
+ { 'F', 'O', 'U', 'R' },
+ { 'F', 'O', 'W', 'L' },
+ { 'F', 'R', 'A', 'U' },
+ { 'F', 'R', 'A', 'Y' },
+ { 'F', 'R', 'E', 'D' },
+ { 'F', 'R', 'E', 'E' },
+ { 'F', 'R', 'E', 'T' },
+ { 'F', 'R', 'E', 'Y' },
+ { 'F', 'R', 'O', 'G' },
+ { 'F', 'R', 'O', 'M' },
+ { 'F', 'U', 'E', 'L' },
+ { 'F', 'U', 'L', 'L' },
+ { 'F', 'U', 'M', 'E' },
+ { 'F', 'U', 'N', 'D' },
+ { 'F', 'U', 'N', 'K' },
+ { 'F', 'U', 'R', 'Y' },
+ { 'F', 'U', 'S', 'E' },
+ { 'F', 'U', 'S', 'S' },
+ { 'G', 'A', 'F', 'F' },
+ { 'G', 'A', 'G', 'E' },
+ { 'G', 'A', 'I', 'L' },
+ { 'G', 'A', 'I', 'N' },
+ { 'G', 'A', 'I', 'T' },
+ { 'G', 'A', 'L', 'A' },
+ { 'G', 'A', 'L', 'E' },
+ { 'G', 'A', 'L', 'L' },
+ { 'G', 'A', 'L', 'T' },
+ { 'G', 'A', 'M', 'E' },
+ { 'G', 'A', 'N', 'G' },
+ { 'G', 'A', 'R', 'B' },
+ { 'G', 'A', 'R', 'Y' },
+ { 'G', 'A', 'S', 'H' },
+ { 'G', 'A', 'T', 'E' },
+ { 'G', 'A', 'U', 'L' },
+ { 'G', 'A', 'U', 'R' },
+ { 'G', 'A', 'V', 'E' },
+ { 'G', 'A', 'W', 'K' },
+ { 'G', 'E', 'A', 'R' },
+ { 'G', 'E', 'L', 'D' },
+ { 'G', 'E', 'N', 'E' },
+ { 'G', 'E', 'N', 'T' },
+ { 'G', 'E', 'R', 'M' },
+ { 'G', 'E', 'T', 'S' },
+ { 'G', 'I', 'B', 'E' },
+ { 'G', 'I', 'F', 'T' },
+ { 'G', 'I', 'L', 'D' },
+ { 'G', 'I', 'L', 'L' },
+ { 'G', 'I', 'L', 'T' },
+ { 'G', 'I', 'N', 'A' },
+ { 'G', 'I', 'R', 'D' },
+ { 'G', 'I', 'R', 'L' },
+ { 'G', 'I', 'S', 'T' },
+ { 'G', 'I', 'V', 'E' },
+ { 'G', 'L', 'A', 'D' },
+ { 'G', 'L', 'E', 'E' },
+ { 'G', 'L', 'E', 'N' },
+ { 'G', 'L', 'I', 'B' },
+ { 'G', 'L', 'O', 'B' },
+ { 'G', 'L', 'O', 'M' },
+ { 'G', 'L', 'O', 'W' },
+ { 'G', 'L', 'U', 'E' },
+ { 'G', 'L', 'U', 'M' },
+ { 'G', 'L', 'U', 'T' },
+ { 'G', 'O', 'A', 'D' },
+ { 'G', 'O', 'A', 'L' },
+ { 'G', 'O', 'A', 'T' },
+ { 'G', 'O', 'E', 'R' },
+ { 'G', 'O', 'E', 'S' },
+ { 'G', 'O', 'L', 'D' },
+ { 'G', 'O', 'L', 'F' },
+ { 'G', 'O', 'N', 'E' },
+ { 'G', 'O', 'N', 'G' },
+ { 'G', 'O', 'O', 'D' },
+ { 'G', 'O', 'O', 'F' },
+ { 'G', 'O', 'R', 'E' },
+ { 'G', 'O', 'R', 'Y' },
+ { 'G', 'O', 'S', 'H' },
+ { 'G', 'O', 'U', 'T' },
+ { 'G', 'O', 'W', 'N' },
+ { 'G', 'R', 'A', 'B' },
+ { 'G', 'R', 'A', 'D' },
+ { 'G', 'R', 'A', 'Y' },
+ { 'G', 'R', 'E', 'G' },
+ { 'G', 'R', 'E', 'W' },
+ { 'G', 'R', 'E', 'Y' },
+ { 'G', 'R', 'I', 'D' },
+ { 'G', 'R', 'I', 'M' },
+ { 'G', 'R', 'I', 'N' },
+ { 'G', 'R', 'I', 'T' },
+ { 'G', 'R', 'O', 'W' },
+ { 'G', 'R', 'U', 'B' },
+ { 'G', 'U', 'L', 'F' },
+ { 'G', 'U', 'L', 'L' },
+ { 'G', 'U', 'N', 'K' },
+ { 'G', 'U', 'R', 'U' },
+ { 'G', 'U', 'S', 'H' },
+ { 'G', 'U', 'S', 'T' },
+ { 'G', 'W', 'E', 'N' },
+ { 'G', 'W', 'Y', 'N' },
+ { 'H', 'A', 'A', 'G' },
+ { 'H', 'A', 'A', 'S' },
+ { 'H', 'A', 'C', 'K' },
+ { 'H', 'A', 'I', 'L' },
+ { 'H', 'A', 'I', 'R' },
+ { 'H', 'A', 'L', 'E' },
+ { 'H', 'A', 'L', 'F' },
+ { 'H', 'A', 'L', 'L' },
+ { 'H', 'A', 'L', 'O' },
+ { 'H', 'A', 'L', 'T' },
+ { 'H', 'A', 'N', 'D' },
+ { 'H', 'A', 'N', 'G' },
+ { 'H', 'A', 'N', 'K' },
+ { 'H', 'A', 'N', 'S' },
+ { 'H', 'A', 'R', 'D' },
+ { 'H', 'A', 'R', 'K' },
+ { 'H', 'A', 'R', 'M' },
+ { 'H', 'A', 'R', 'T' },
+ { 'H', 'A', 'S', 'H' },
+ { 'H', 'A', 'S', 'T' },
+ { 'H', 'A', 'T', 'E' },
+ { 'H', 'A', 'T', 'H' },
+ { 'H', 'A', 'U', 'L' },
+ { 'H', 'A', 'V', 'E' },
+ { 'H', 'A', 'W', 'K' },
+ { 'H', 'A', 'Y', 'S' },
+ { 'H', 'E', 'A', 'D' },
+ { 'H', 'E', 'A', 'L' },
+ { 'H', 'E', 'A', 'R' },
+ { 'H', 'E', 'A', 'T' },
+ { 'H', 'E', 'B', 'E' },
+ { 'H', 'E', 'C', 'K' },
+ { 'H', 'E', 'E', 'D' },
+ { 'H', 'E', 'E', 'L' },
+ { 'H', 'E', 'F', 'T' },
+ { 'H', 'E', 'L', 'D' },
+ { 'H', 'E', 'L', 'L' },
+ { 'H', 'E', 'L', 'M' },
+ { 'H', 'E', 'R', 'B' },
+ { 'H', 'E', 'R', 'D' },
+ { 'H', 'E', 'R', 'E' },
+ { 'H', 'E', 'R', 'O' },
+ { 'H', 'E', 'R', 'S' },
+ { 'H', 'E', 'S', 'S' },
+ { 'H', 'E', 'W', 'N' },
+ { 'H', 'I', 'C', 'K' },
+ { 'H', 'I', 'D', 'E' },
+ { 'H', 'I', 'G', 'H' },
+ { 'H', 'I', 'K', 'E' },
+ { 'H', 'I', 'L', 'L' },
+ { 'H', 'I', 'L', 'T' },
+ { 'H', 'I', 'N', 'D' },
+ { 'H', 'I', 'N', 'T' },
+ { 'H', 'I', 'R', 'E' },
+ { 'H', 'I', 'S', 'S' },
+ { 'H', 'I', 'V', 'E' },
+ { 'H', 'O', 'B', 'O' },
+ { 'H', 'O', 'C', 'K' },
+ { 'H', 'O', 'F', 'F' },
+ { 'H', 'O', 'L', 'D' },
+ { 'H', 'O', 'L', 'E' },
+ { 'H', 'O', 'L', 'M' },
+ { 'H', 'O', 'L', 'T' },
+ { 'H', 'O', 'M', 'E' },
+ { 'H', 'O', 'N', 'E' },
+ { 'H', 'O', 'N', 'K' },
+ { 'H', 'O', 'O', 'D' },
+ { 'H', 'O', 'O', 'F' },
+ { 'H', 'O', 'O', 'K' },
+ { 'H', 'O', 'O', 'T' },
+ { 'H', 'O', 'R', 'N' },
+ { 'H', 'O', 'S', 'E' },
+ { 'H', 'O', 'S', 'T' },
+ { 'H', 'O', 'U', 'R' },
+ { 'H', 'O', 'V', 'E' },
+ { 'H', 'O', 'W', 'E' },
+ { 'H', 'O', 'W', 'L' },
+ { 'H', 'O', 'Y', 'T' },
+ { 'H', 'U', 'C', 'K' },
+ { 'H', 'U', 'E', 'D' },
+ { 'H', 'U', 'F', 'F' },
+ { 'H', 'U', 'G', 'E' },
+ { 'H', 'U', 'G', 'H' },
+ { 'H', 'U', 'G', 'O' },
+ { 'H', 'U', 'L', 'K' },
+ { 'H', 'U', 'L', 'L' },
+ { 'H', 'U', 'N', 'K' },
+ { 'H', 'U', 'N', 'T' },
+ { 'H', 'U', 'R', 'D' },
+ { 'H', 'U', 'R', 'L' },
+ { 'H', 'U', 'R', 'T' },
+ { 'H', 'U', 'S', 'H' },
+ { 'H', 'Y', 'D', 'E' },
+ { 'H', 'Y', 'M', 'N' },
+ { 'I', 'B', 'I', 'S' },
+ { 'I', 'C', 'O', 'N' },
+ { 'I', 'D', 'E', 'A' },
+ { 'I', 'D', 'L', 'E' },
+ { 'I', 'F', 'F', 'Y' },
+ { 'I', 'N', 'C', 'A' },
+ { 'I', 'N', 'C', 'H' },
+ { 'I', 'N', 'T', 'O' },
+ { 'I', 'O', 'N', 'S' },
+ { 'I', 'O', 'T', 'A' },
+ { 'I', 'O', 'W', 'A' },
+ { 'I', 'R', 'I', 'S' },
+ { 'I', 'R', 'M', 'A' },
+ { 'I', 'R', 'O', 'N' },
+ { 'I', 'S', 'L', 'E' },
+ { 'I', 'T', 'C', 'H' },
+ { 'I', 'T', 'E', 'M' },
+ { 'I', 'V', 'A', 'N' },
+ { 'J', 'A', 'C', 'K' },
+ { 'J', 'A', 'D', 'E' },
+ { 'J', 'A', 'I', 'L' },
+ { 'J', 'A', 'K', 'E' },
+ { 'J', 'A', 'N', 'E' },
+ { 'J', 'A', 'V', 'A' },
+ { 'J', 'E', 'A', 'N' },
+ { 'J', 'E', 'F', 'F' },
+ { 'J', 'E', 'R', 'K' },
+ { 'J', 'E', 'S', 'S' },
+ { 'J', 'E', 'S', 'T' },
+ { 'J', 'I', 'B', 'E' },
+ { 'J', 'I', 'L', 'L' },
+ { 'J', 'I', 'L', 'T' },
+ { 'J', 'I', 'V', 'E' },
+ { 'J', 'O', 'A', 'N' },
+ { 'J', 'O', 'B', 'S' },
+ { 'J', 'O', 'C', 'K' },
+ { 'J', 'O', 'E', 'L' },
+ { 'J', 'O', 'E', 'Y' },
+ { 'J', 'O', 'H', 'N' },
+ { 'J', 'O', 'I', 'N' },
+ { 'J', 'O', 'K', 'E' },
+ { 'J', 'O', 'L', 'T' },
+ { 'J', 'O', 'V', 'E' },
+ { 'J', 'U', 'D', 'D' },
+ { 'J', 'U', 'D', 'E' },
+ { 'J', 'U', 'D', 'O' },
+ { 'J', 'U', 'D', 'Y' },
+ { 'J', 'U', 'J', 'U' },
+ { 'J', 'U', 'K', 'E' },
+ { 'J', 'U', 'L', 'Y' },
+ { 'J', 'U', 'N', 'E' },
+ { 'J', 'U', 'N', 'K' },
+ { 'J', 'U', 'N', 'O' },
+ { 'J', 'U', 'R', 'Y' },
+ { 'J', 'U', 'S', 'T' },
+ { 'J', 'U', 'T', 'E' },
+ { 'K', 'A', 'H', 'N' },
+ { 'K', 'A', 'L', 'E' },
+ { 'K', 'A', 'N', 'E' },
+ { 'K', 'A', 'N', 'T' },
+ { 'K', 'A', 'R', 'L' },
+ { 'K', 'A', 'T', 'E' },
+ { 'K', 'E', 'E', 'L' },
+ { 'K', 'E', 'E', 'N' },
+ { 'K', 'E', 'N', 'O' },
+ { 'K', 'E', 'N', 'T' },
+ { 'K', 'E', 'R', 'N' },
+ { 'K', 'E', 'R', 'R' },
+ { 'K', 'E', 'Y', 'S' },
+ { 'K', 'I', 'C', 'K' },
+ { 'K', 'I', 'L', 'L' },
+ { 'K', 'I', 'N', 'D' },
+ { 'K', 'I', 'N', 'G' },
+ { 'K', 'I', 'R', 'K' },
+ { 'K', 'I', 'S', 'S' },
+ { 'K', 'I', 'T', 'E' },
+ { 'K', 'L', 'A', 'N' },
+ { 'K', 'N', 'E', 'E' },
+ { 'K', 'N', 'E', 'W' },
+ { 'K', 'N', 'I', 'T' },
+ { 'K', 'N', 'O', 'B' },
+ { 'K', 'N', 'O', 'T' },
+ { 'K', 'N', 'O', 'W' },
+ { 'K', 'O', 'C', 'H' },
+ { 'K', 'O', 'N', 'G' },
+ { 'K', 'U', 'D', 'O' },
+ { 'K', 'U', 'R', 'D' },
+ { 'K', 'U', 'R', 'T' },
+ { 'K', 'Y', 'L', 'E' },
+ { 'L', 'A', 'C', 'E' },
+ { 'L', 'A', 'C', 'K' },
+ { 'L', 'A', 'C', 'Y' },
+ { 'L', 'A', 'D', 'Y' },
+ { 'L', 'A', 'I', 'D' },
+ { 'L', 'A', 'I', 'N' },
+ { 'L', 'A', 'I', 'R' },
+ { 'L', 'A', 'K', 'E' },
+ { 'L', 'A', 'M', 'B' },
+ { 'L', 'A', 'M', 'E' },
+ { 'L', 'A', 'N', 'D' },
+ { 'L', 'A', 'N', 'E' },
+ { 'L', 'A', 'N', 'G' },
+ { 'L', 'A', 'R', 'D' },
+ { 'L', 'A', 'R', 'K' },
+ { 'L', 'A', 'S', 'S' },
+ { 'L', 'A', 'S', 'T' },
+ { 'L', 'A', 'T', 'E' },
+ { 'L', 'A', 'U', 'D' },
+ { 'L', 'A', 'V', 'A' },
+ { 'L', 'A', 'W', 'N' },
+ { 'L', 'A', 'W', 'S' },
+ { 'L', 'A', 'Y', 'S' },
+ { 'L', 'E', 'A', 'D' },
+ { 'L', 'E', 'A', 'F' },
+ { 'L', 'E', 'A', 'K' },
+ { 'L', 'E', 'A', 'N' },
+ { 'L', 'E', 'A', 'R' },
+ { 'L', 'E', 'E', 'K' },
+ { 'L', 'E', 'E', 'R' },
+ { 'L', 'E', 'F', 'T' },
+ { 'L', 'E', 'N', 'D' },
+ { 'L', 'E', 'N', 'S' },
+ { 'L', 'E', 'N', 'T' },
+ { 'L', 'E', 'O', 'N' },
+ { 'L', 'E', 'S', 'K' },
+ { 'L', 'E', 'S', 'S' },
+ { 'L', 'E', 'S', 'T' },
+ { 'L', 'E', 'T', 'S' },
+ { 'L', 'I', 'A', 'R' },
+ { 'L', 'I', 'C', 'E' },
+ { 'L', 'I', 'C', 'K' },
+ { 'L', 'I', 'E', 'D' },
+ { 'L', 'I', 'E', 'N' },
+ { 'L', 'I', 'E', 'S' },
+ { 'L', 'I', 'E', 'U' },
+ { 'L', 'I', 'F', 'E' },
+ { 'L', 'I', 'F', 'T' },
+ { 'L', 'I', 'K', 'E' },
+ { 'L', 'I', 'L', 'A' },
+ { 'L', 'I', 'L', 'T' },
+ { 'L', 'I', 'L', 'Y' },
+ { 'L', 'I', 'M', 'A' },
+ { 'L', 'I', 'M', 'B' },
+ { 'L', 'I', 'M', 'E' },
+ { 'L', 'I', 'N', 'D' },
+ { 'L', 'I', 'N', 'E' },
+ { 'L', 'I', 'N', 'K' },
+ { 'L', 'I', 'N', 'T' },
+ { 'L', 'I', 'O', 'N' },
+ { 'L', 'I', 'S', 'A' },
+ { 'L', 'I', 'S', 'T' },
+ { 'L', 'I', 'V', 'E' },
+ { 'L', 'O', 'A', 'D' },
+ { 'L', 'O', 'A', 'F' },
+ { 'L', 'O', 'A', 'M' },
+ { 'L', 'O', 'A', 'N' },
+ { 'L', 'O', 'C', 'K' },
+ { 'L', 'O', 'F', 'T' },
+ { 'L', 'O', 'G', 'E' },
+ { 'L', 'O', 'I', 'S' },
+ { 'L', 'O', 'L', 'A' },
+ { 'L', 'O', 'N', 'E' },
+ { 'L', 'O', 'N', 'G' },
+ { 'L', 'O', 'O', 'K' },
+ { 'L', 'O', 'O', 'N' },
+ { 'L', 'O', 'O', 'T' },
+ { 'L', 'O', 'R', 'D' },
+ { 'L', 'O', 'R', 'E' },
+ { 'L', 'O', 'S', 'E' },
+ { 'L', 'O', 'S', 'S' },
+ { 'L', 'O', 'S', 'T' },
+ { 'L', 'O', 'U', 'D' },
+ { 'L', 'O', 'V', 'E' },
+ { 'L', 'O', 'W', 'E' },
+ { 'L', 'U', 'C', 'K' },
+ { 'L', 'U', 'C', 'Y' },
+ { 'L', 'U', 'G', 'E' },
+ { 'L', 'U', 'K', 'E' },
+ { 'L', 'U', 'L', 'U' },
+ { 'L', 'U', 'N', 'D' },
+ { 'L', 'U', 'N', 'G' },
+ { 'L', 'U', 'R', 'A' },
+ { 'L', 'U', 'R', 'E' },
+ { 'L', 'U', 'R', 'K' },
+ { 'L', 'U', 'S', 'H' },
+ { 'L', 'U', 'S', 'T' },
+ { 'L', 'Y', 'L', 'E' },
+ { 'L', 'Y', 'N', 'N' },
+ { 'L', 'Y', 'O', 'N' },
+ { 'L', 'Y', 'R', 'A' },
+ { 'M', 'A', 'C', 'E' },
+ { 'M', 'A', 'D', 'E' },
+ { 'M', 'A', 'G', 'I' },
+ { 'M', 'A', 'I', 'D' },
+ { 'M', 'A', 'I', 'L' },
+ { 'M', 'A', 'I', 'N' },
+ { 'M', 'A', 'K', 'E' },
+ { 'M', 'A', 'L', 'E' },
+ { 'M', 'A', 'L', 'I' },
+ { 'M', 'A', 'L', 'L' },
+ { 'M', 'A', 'L', 'T' },
+ { 'M', 'A', 'N', 'A' },
+ { 'M', 'A', 'N', 'N' },
+ { 'M', 'A', 'N', 'Y' },
+ { 'M', 'A', 'R', 'C' },
+ { 'M', 'A', 'R', 'E' },
+ { 'M', 'A', 'R', 'K' },
+ { 'M', 'A', 'R', 'S' },
+ { 'M', 'A', 'R', 'T' },
+ { 'M', 'A', 'R', 'Y' },
+ { 'M', 'A', 'S', 'H' },
+ { 'M', 'A', 'S', 'K' },
+ { 'M', 'A', 'S', 'S' },
+ { 'M', 'A', 'S', 'T' },
+ { 'M', 'A', 'T', 'E' },
+ { 'M', 'A', 'T', 'H' },
+ { 'M', 'A', 'U', 'L' },
+ { 'M', 'A', 'Y', 'O' },
+ { 'M', 'E', 'A', 'D' },
+ { 'M', 'E', 'A', 'L' },
+ { 'M', 'E', 'A', 'N' },
+ { 'M', 'E', 'A', 'T' },
+ { 'M', 'E', 'E', 'K' },
+ { 'M', 'E', 'E', 'T' },
+ { 'M', 'E', 'L', 'D' },
+ { 'M', 'E', 'L', 'T' },
+ { 'M', 'E', 'M', 'O' },
+ { 'M', 'E', 'N', 'D' },
+ { 'M', 'E', 'N', 'U' },
+ { 'M', 'E', 'R', 'T' },
+ { 'M', 'E', 'S', 'H' },
+ { 'M', 'E', 'S', 'S' },
+ { 'M', 'I', 'C', 'E' },
+ { 'M', 'I', 'K', 'E' },
+ { 'M', 'I', 'L', 'D' },
+ { 'M', 'I', 'L', 'E' },
+ { 'M', 'I', 'L', 'K' },
+ { 'M', 'I', 'L', 'L' },
+ { 'M', 'I', 'L', 'T' },
+ { 'M', 'I', 'M', 'I' },
+ { 'M', 'I', 'N', 'D' },
+ { 'M', 'I', 'N', 'E' },
+ { 'M', 'I', 'N', 'I' },
+ { 'M', 'I', 'N', 'K' },
+ { 'M', 'I', 'N', 'T' },
+ { 'M', 'I', 'R', 'E' },
+ { 'M', 'I', 'S', 'S' },
+ { 'M', 'I', 'S', 'T' },
+ { 'M', 'I', 'T', 'E' },
+ { 'M', 'I', 'T', 'T' },
+ { 'M', 'O', 'A', 'N' },
+ { 'M', 'O', 'A', 'T' },
+ { 'M', 'O', 'C', 'K' },
+ { 'M', 'O', 'D', 'E' },
+ { 'M', 'O', 'L', 'D' },
+ { 'M', 'O', 'L', 'E' },
+ { 'M', 'O', 'L', 'L' },
+ { 'M', 'O', 'L', 'T' },
+ { 'M', 'O', 'N', 'A' },
+ { 'M', 'O', 'N', 'K' },
+ { 'M', 'O', 'N', 'T' },
+ { 'M', 'O', 'O', 'D' },
+ { 'M', 'O', 'O', 'N' },
+ { 'M', 'O', 'O', 'R' },
+ { 'M', 'O', 'O', 'T' },
+ { 'M', 'O', 'R', 'E' },
+ { 'M', 'O', 'R', 'N' },
+ { 'M', 'O', 'R', 'T' },
+ { 'M', 'O', 'S', 'S' },
+ { 'M', 'O', 'S', 'T' },
+ { 'M', 'O', 'T', 'H' },
+ { 'M', 'O', 'V', 'E' },
+ { 'M', 'U', 'C', 'H' },
+ { 'M', 'U', 'C', 'K' },
+ { 'M', 'U', 'D', 'D' },
+ { 'M', 'U', 'F', 'F' },
+ { 'M', 'U', 'L', 'E' },
+ { 'M', 'U', 'L', 'L' },
+ { 'M', 'U', 'R', 'K' },
+ { 'M', 'U', 'S', 'H' },
+ { 'M', 'U', 'S', 'T' },
+ { 'M', 'U', 'T', 'E' },
+ { 'M', 'U', 'T', 'T' },
+ { 'M', 'Y', 'R', 'A' },
+ { 'M', 'Y', 'T', 'H' },
+ { 'N', 'A', 'G', 'Y' },
+ { 'N', 'A', 'I', 'L' },
+ { 'N', 'A', 'I', 'R' },
+ { 'N', 'A', 'M', 'E' },
+ { 'N', 'A', 'R', 'Y' },
+ { 'N', 'A', 'S', 'H' },
+ { 'N', 'A', 'V', 'E' },
+ { 'N', 'A', 'V', 'Y' },
+ { 'N', 'E', 'A', 'L' },
+ { 'N', 'E', 'A', 'R' },
+ { 'N', 'E', 'A', 'T' },
+ { 'N', 'E', 'C', 'K' },
+ { 'N', 'E', 'E', 'D' },
+ { 'N', 'E', 'I', 'L' },
+ { 'N', 'E', 'L', 'L' },
+ { 'N', 'E', 'O', 'N' },
+ { 'N', 'E', 'R', 'O' },
+ { 'N', 'E', 'S', 'S' },
+ { 'N', 'E', 'S', 'T' },
+ { 'N', 'E', 'W', 'S' },
+ { 'N', 'E', 'W', 'T' },
+ { 'N', 'I', 'B', 'S' },
+ { 'N', 'I', 'C', 'E' },
+ { 'N', 'I', 'C', 'K' },
+ { 'N', 'I', 'L', 'E' },
+ { 'N', 'I', 'N', 'A' },
+ { 'N', 'I', 'N', 'E' },
+ { 'N', 'O', 'A', 'H' },
+ { 'N', 'O', 'D', 'E' },
+ { 'N', 'O', 'E', 'L' },
+ { 'N', 'O', 'L', 'L' },
+ { 'N', 'O', 'N', 'E' },
+ { 'N', 'O', 'O', 'K' },
+ { 'N', 'O', 'O', 'N' },
+ { 'N', 'O', 'R', 'M' },
+ { 'N', 'O', 'S', 'E' },
+ { 'N', 'O', 'T', 'E' },
+ { 'N', 'O', 'U', 'N' },
+ { 'N', 'O', 'V', 'A' },
+ { 'N', 'U', 'D', 'E' },
+ { 'N', 'U', 'L', 'L' },
+ { 'N', 'U', 'M', 'B' },
+ { 'O', 'A', 'T', 'H' },
+ { 'O', 'B', 'E', 'Y' },
+ { 'O', 'B', 'O', 'E' },
+ { 'O', 'D', 'I', 'N' },
+ { 'O', 'H', 'I', 'O' },
+ { 'O', 'I', 'L', 'Y' },
+ { 'O', 'I', 'N', 'T' },
+ { 'O', 'K', 'A', 'Y' },
+ { 'O', 'L', 'A', 'F' },
+ { 'O', 'L', 'D', 'Y' },
+ { 'O', 'L', 'G', 'A' },
+ { 'O', 'L', 'I', 'N' },
+ { 'O', 'M', 'A', 'N' },
+ { 'O', 'M', 'E', 'N' },
+ { 'O', 'M', 'I', 'T' },
+ { 'O', 'N', 'C', 'E' },
+ { 'O', 'N', 'E', 'S' },
+ { 'O', 'N', 'L', 'Y' },
+ { 'O', 'N', 'T', 'O' },
+ { 'O', 'N', 'U', 'S' },
+ { 'O', 'R', 'A', 'L' },
+ { 'O', 'R', 'G', 'Y' },
+ { 'O', 'S', 'L', 'O' },
+ { 'O', 'T', 'I', 'S' },
+ { 'O', 'T', 'T', 'O' },
+ { 'O', 'U', 'C', 'H' },
+ { 'O', 'U', 'S', 'T' },
+ { 'O', 'U', 'T', 'S' },
+ { 'O', 'V', 'A', 'L' },
+ { 'O', 'V', 'E', 'N' },
+ { 'O', 'V', 'E', 'R' },
+ { 'O', 'W', 'L', 'Y' },
+ { 'O', 'W', 'N', 'S' },
+ { 'Q', 'U', 'A', 'D' },
+ { 'Q', 'U', 'I', 'T' },
+ { 'Q', 'U', 'O', 'D' },
+ { 'R', 'A', 'C', 'E' },
+ { 'R', 'A', 'C', 'K' },
+ { 'R', 'A', 'C', 'Y' },
+ { 'R', 'A', 'F', 'T' },
+ { 'R', 'A', 'G', 'E' },
+ { 'R', 'A', 'I', 'D' },
+ { 'R', 'A', 'I', 'L' },
+ { 'R', 'A', 'I', 'N' },
+ { 'R', 'A', 'K', 'E' },
+ { 'R', 'A', 'N', 'K' },
+ { 'R', 'A', 'N', 'T' },
+ { 'R', 'A', 'R', 'E' },
+ { 'R', 'A', 'S', 'H' },
+ { 'R', 'A', 'T', 'E' },
+ { 'R', 'A', 'V', 'E' },
+ { 'R', 'A', 'Y', 'S' },
+ { 'R', 'E', 'A', 'D' },
+ { 'R', 'E', 'A', 'L' },
+ { 'R', 'E', 'A', 'M' },
+ { 'R', 'E', 'A', 'R' },
+ { 'R', 'E', 'C', 'K' },
+ { 'R', 'E', 'E', 'D' },
+ { 'R', 'E', 'E', 'F' },
+ { 'R', 'E', 'E', 'K' },
+ { 'R', 'E', 'E', 'L' },
+ { 'R', 'E', 'I', 'D' },
+ { 'R', 'E', 'I', 'N' },
+ { 'R', 'E', 'N', 'A' },
+ { 'R', 'E', 'N', 'D' },
+ { 'R', 'E', 'N', 'T' },
+ { 'R', 'E', 'S', 'T' },
+ { 'R', 'I', 'C', 'E' },
+ { 'R', 'I', 'C', 'H' },
+ { 'R', 'I', 'C', 'K' },
+ { 'R', 'I', 'D', 'E' },
+ { 'R', 'I', 'F', 'T' },
+ { 'R', 'I', 'L', 'L' },
+ { 'R', 'I', 'M', 'E' },
+ { 'R', 'I', 'N', 'G' },
+ { 'R', 'I', 'N', 'K' },
+ { 'R', 'I', 'S', 'E' },
+ { 'R', 'I', 'S', 'K' },
+ { 'R', 'I', 'T', 'E' },
+ { 'R', 'O', 'A', 'D' },
+ { 'R', 'O', 'A', 'M' },
+ { 'R', 'O', 'A', 'R' },
+ { 'R', 'O', 'B', 'E' },
+ { 'R', 'O', 'C', 'K' },
+ { 'R', 'O', 'D', 'E' },
+ { 'R', 'O', 'I', 'L' },
+ { 'R', 'O', 'L', 'L' },
+ { 'R', 'O', 'M', 'E' },
+ { 'R', 'O', 'O', 'D' },
+ { 'R', 'O', 'O', 'F' },
+ { 'R', 'O', 'O', 'K' },
+ { 'R', 'O', 'O', 'M' },
+ { 'R', 'O', 'O', 'T' },
+ { 'R', 'O', 'S', 'A' },
+ { 'R', 'O', 'S', 'E' },
+ { 'R', 'O', 'S', 'S' },
+ { 'R', 'O', 'S', 'Y' },
+ { 'R', 'O', 'T', 'H' },
+ { 'R', 'O', 'U', 'T' },
+ { 'R', 'O', 'V', 'E' },
+ { 'R', 'O', 'W', 'E' },
+ { 'R', 'O', 'W', 'S' },
+ { 'R', 'U', 'B', 'E' },
+ { 'R', 'U', 'B', 'Y' },
+ { 'R', 'U', 'D', 'E' },
+ { 'R', 'U', 'D', 'Y' },
+ { 'R', 'U', 'I', 'N' },
+ { 'R', 'U', 'L', 'E' },
+ { 'R', 'U', 'N', 'G' },
+ { 'R', 'U', 'N', 'S' },
+ { 'R', 'U', 'N', 'T' },
+ { 'R', 'U', 'S', 'E' },
+ { 'R', 'U', 'S', 'H' },
+ { 'R', 'U', 'S', 'K' },
+ { 'R', 'U', 'S', 'S' },
+ { 'R', 'U', 'S', 'T' },
+ { 'R', 'U', 'T', 'H' },
+ { 'S', 'A', 'C', 'K' },
+ { 'S', 'A', 'F', 'E' },
+ { 'S', 'A', 'G', 'E' },
+ { 'S', 'A', 'I', 'D' },
+ { 'S', 'A', 'I', 'L' },
+ { 'S', 'A', 'L', 'E' },
+ { 'S', 'A', 'L', 'K' },
+ { 'S', 'A', 'L', 'T' },
+ { 'S', 'A', 'M', 'E' },
+ { 'S', 'A', 'N', 'D' },
+ { 'S', 'A', 'N', 'E' },
+ { 'S', 'A', 'N', 'G' },
+ { 'S', 'A', 'N', 'K' },
+ { 'S', 'A', 'R', 'A' },
+ { 'S', 'A', 'U', 'L' },
+ { 'S', 'A', 'V', 'E' },
+ { 'S', 'A', 'Y', 'S' },
+ { 'S', 'C', 'A', 'N' },
+ { 'S', 'C', 'A', 'R' },
+ { 'S', 'C', 'A', 'T' },
+ { 'S', 'C', 'O', 'T' },
+ { 'S', 'E', 'A', 'L' },
+ { 'S', 'E', 'A', 'M' },
+ { 'S', 'E', 'A', 'R' },
+ { 'S', 'E', 'A', 'T' },
+ { 'S', 'E', 'E', 'D' },
+ { 'S', 'E', 'E', 'K' },
+ { 'S', 'E', 'E', 'M' },
+ { 'S', 'E', 'E', 'N' },
+ { 'S', 'E', 'E', 'S' },
+ { 'S', 'E', 'L', 'F' },
+ { 'S', 'E', 'L', 'L' },
+ { 'S', 'E', 'N', 'D' },
+ { 'S', 'E', 'N', 'T' },
+ { 'S', 'E', 'T', 'S' },
+ { 'S', 'E', 'W', 'N' },
+ { 'S', 'H', 'A', 'G' },
+ { 'S', 'H', 'A', 'M' },
+ { 'S', 'H', 'A', 'W' },
+ { 'S', 'H', 'A', 'Y' },
+ { 'S', 'H', 'E', 'D' },
+ { 'S', 'H', 'I', 'M' },
+ { 'S', 'H', 'I', 'N' },
+ { 'S', 'H', 'O', 'D' },
+ { 'S', 'H', 'O', 'E' },
+ { 'S', 'H', 'O', 'T' },
+ { 'S', 'H', 'O', 'W' },
+ { 'S', 'H', 'U', 'N' },
+ { 'S', 'H', 'U', 'T' },
+ { 'S', 'I', 'C', 'K' },
+ { 'S', 'I', 'D', 'E' },
+ { 'S', 'I', 'F', 'T' },
+ { 'S', 'I', 'G', 'H' },
+ { 'S', 'I', 'G', 'N' },
+ { 'S', 'I', 'L', 'K' },
+ { 'S', 'I', 'L', 'L' },
+ { 'S', 'I', 'L', 'O' },
+ { 'S', 'I', 'L', 'T' },
+ { 'S', 'I', 'N', 'E' },
+ { 'S', 'I', 'N', 'G' },
+ { 'S', 'I', 'N', 'K' },
+ { 'S', 'I', 'R', 'E' },
+ { 'S', 'I', 'T', 'E' },
+ { 'S', 'I', 'T', 'S' },
+ { 'S', 'I', 'T', 'U' },
+ { 'S', 'K', 'A', 'T' },
+ { 'S', 'K', 'E', 'W' },
+ { 'S', 'K', 'I', 'D' },
+ { 'S', 'K', 'I', 'M' },
+ { 'S', 'K', 'I', 'N' },
+ { 'S', 'K', 'I', 'T' },
+ { 'S', 'L', 'A', 'B' },
+ { 'S', 'L', 'A', 'M' },
+ { 'S', 'L', 'A', 'T' },
+ { 'S', 'L', 'A', 'Y' },
+ { 'S', 'L', 'E', 'D' },
+ { 'S', 'L', 'E', 'W' },
+ { 'S', 'L', 'I', 'D' },
+ { 'S', 'L', 'I', 'M' },
+ { 'S', 'L', 'I', 'T' },
+ { 'S', 'L', 'O', 'B' },
+ { 'S', 'L', 'O', 'G' },
+ { 'S', 'L', 'O', 'T' },
+ { 'S', 'L', 'O', 'W' },
+ { 'S', 'L', 'U', 'G' },
+ { 'S', 'L', 'U', 'M' },
+ { 'S', 'L', 'U', 'R' },
+ { 'S', 'M', 'O', 'G' },
+ { 'S', 'M', 'U', 'G' },
+ { 'S', 'N', 'A', 'G' },
+ { 'S', 'N', 'O', 'B' },
+ { 'S', 'N', 'O', 'W' },
+ { 'S', 'N', 'U', 'B' },
+ { 'S', 'N', 'U', 'G' },
+ { 'S', 'O', 'A', 'K' },
+ { 'S', 'O', 'A', 'R' },
+ { 'S', 'O', 'C', 'K' },
+ { 'S', 'O', 'D', 'A' },
+ { 'S', 'O', 'F', 'A' },
+ { 'S', 'O', 'F', 'T' },
+ { 'S', 'O', 'I', 'L' },
+ { 'S', 'O', 'L', 'D' },
+ { 'S', 'O', 'M', 'E' },
+ { 'S', 'O', 'N', 'G' },
+ { 'S', 'O', 'O', 'N' },
+ { 'S', 'O', 'O', 'T' },
+ { 'S', 'O', 'R', 'E' },
+ { 'S', 'O', 'R', 'T' },
+ { 'S', 'O', 'U', 'L' },
+ { 'S', 'O', 'U', 'R' },
+ { 'S', 'O', 'W', 'N' },
+ { 'S', 'T', 'A', 'B' },
+ { 'S', 'T', 'A', 'G' },
+ { 'S', 'T', 'A', 'N' },
+ { 'S', 'T', 'A', 'R' },
+ { 'S', 'T', 'A', 'Y' },
+ { 'S', 'T', 'E', 'M' },
+ { 'S', 'T', 'E', 'W' },
+ { 'S', 'T', 'I', 'R' },
+ { 'S', 'T', 'O', 'W' },
+ { 'S', 'T', 'U', 'B' },
+ { 'S', 'T', 'U', 'N' },
+ { 'S', 'U', 'C', 'H' },
+ { 'S', 'U', 'D', 'S' },
+ { 'S', 'U', 'I', 'T' },
+ { 'S', 'U', 'L', 'K' },
+ { 'S', 'U', 'M', 'S' },
+ { 'S', 'U', 'N', 'G' },
+ { 'S', 'U', 'N', 'K' },
+ { 'S', 'U', 'R', 'E' },
+ { 'S', 'U', 'R', 'F' },
+ { 'S', 'W', 'A', 'B' },
+ { 'S', 'W', 'A', 'G' },
+ { 'S', 'W', 'A', 'M' },
+ { 'S', 'W', 'A', 'N' },
+ { 'S', 'W', 'A', 'T' },
+ { 'S', 'W', 'A', 'Y' },
+ { 'S', 'W', 'I', 'M' },
+ { 'S', 'W', 'U', 'M' },
+ { 'T', 'A', 'C', 'K' },
+ { 'T', 'A', 'C', 'T' },
+ { 'T', 'A', 'I', 'L' },
+ { 'T', 'A', 'K', 'E' },
+ { 'T', 'A', 'L', 'E' },
+ { 'T', 'A', 'L', 'K' },
+ { 'T', 'A', 'L', 'L' },
+ { 'T', 'A', 'N', 'K' },
+ { 'T', 'A', 'S', 'K' },
+ { 'T', 'A', 'T', 'E' },
+ { 'T', 'A', 'U', 'T' },
+ { 'T', 'E', 'A', 'L' },
+ { 'T', 'E', 'A', 'M' },
+ { 'T', 'E', 'A', 'R' },
+ { 'T', 'E', 'C', 'H' },
+ { 'T', 'E', 'E', 'M' },
+ { 'T', 'E', 'E', 'N' },
+ { 'T', 'E', 'E', 'T' },
+ { 'T', 'E', 'L', 'L' },
+ { 'T', 'E', 'N', 'D' },
+ { 'T', 'E', 'N', 'T' },
+ { 'T', 'E', 'R', 'M' },
+ { 'T', 'E', 'R', 'N' },
+ { 'T', 'E', 'S', 'S' },
+ { 'T', 'E', 'S', 'T' },
+ { 'T', 'H', 'A', 'N' },
+ { 'T', 'H', 'A', 'T' },
+ { 'T', 'H', 'E', 'E' },
+ { 'T', 'H', 'E', 'M' },
+ { 'T', 'H', 'E', 'N' },
+ { 'T', 'H', 'E', 'Y' },
+ { 'T', 'H', 'I', 'N' },
+ { 'T', 'H', 'I', 'S' },
+ { 'T', 'H', 'U', 'D' },
+ { 'T', 'H', 'U', 'G' },
+ { 'T', 'I', 'C', 'K' },
+ { 'T', 'I', 'D', 'E' },
+ { 'T', 'I', 'D', 'Y' },
+ { 'T', 'I', 'E', 'D' },
+ { 'T', 'I', 'E', 'R' },
+ { 'T', 'I', 'L', 'E' },
+ { 'T', 'I', 'L', 'L' },
+ { 'T', 'I', 'L', 'T' },
+ { 'T', 'I', 'M', 'E' },
+ { 'T', 'I', 'N', 'A' },
+ { 'T', 'I', 'N', 'E' },
+ { 'T', 'I', 'N', 'T' },
+ { 'T', 'I', 'N', 'Y' },
+ { 'T', 'I', 'R', 'E' },
+ { 'T', 'O', 'A', 'D' },
+ { 'T', 'O', 'G', 'O' },
+ { 'T', 'O', 'I', 'L' },
+ { 'T', 'O', 'L', 'D' },
+ { 'T', 'O', 'L', 'L' },
+ { 'T', 'O', 'N', 'E' },
+ { 'T', 'O', 'N', 'G' },
+ { 'T', 'O', 'N', 'Y' },
+ { 'T', 'O', 'O', 'K' },
+ { 'T', 'O', 'O', 'L' },
+ { 'T', 'O', 'O', 'T' },
+ { 'T', 'O', 'R', 'E' },
+ { 'T', 'O', 'R', 'N' },
+ { 'T', 'O', 'T', 'E' },
+ { 'T', 'O', 'U', 'R' },
+ { 'T', 'O', 'U', 'T' },
+ { 'T', 'O', 'W', 'N' },
+ { 'T', 'R', 'A', 'G' },
+ { 'T', 'R', 'A', 'M' },
+ { 'T', 'R', 'A', 'Y' },
+ { 'T', 'R', 'E', 'E' },
+ { 'T', 'R', 'E', 'K' },
+ { 'T', 'R', 'I', 'G' },
+ { 'T', 'R', 'I', 'M' },
+ { 'T', 'R', 'I', 'O' },
+ { 'T', 'R', 'O', 'D' },
+ { 'T', 'R', 'O', 'T' },
+ { 'T', 'R', 'O', 'Y' },
+ { 'T', 'R', 'U', 'E' },
+ { 'T', 'U', 'B', 'A' },
+ { 'T', 'U', 'B', 'E' },
+ { 'T', 'U', 'C', 'K' },
+ { 'T', 'U', 'F', 'T' },
+ { 'T', 'U', 'N', 'A' },
+ { 'T', 'U', 'N', 'E' },
+ { 'T', 'U', 'N', 'G' },
+ { 'T', 'U', 'R', 'F' },
+ { 'T', 'U', 'R', 'N' },
+ { 'T', 'U', 'S', 'K' },
+ { 'T', 'W', 'I', 'G' },
+ { 'T', 'W', 'I', 'N' },
+ { 'T', 'W', 'I', 'T' },
+ { 'U', 'L', 'A', 'N' },
+ { 'U', 'N', 'I', 'T' },
+ { 'U', 'R', 'G', 'E' },
+ { 'U', 'S', 'E', 'D' },
+ { 'U', 'S', 'E', 'R' },
+ { 'U', 'S', 'E', 'S' },
+ { 'U', 'T', 'A', 'H' },
+ { 'V', 'A', 'I', 'L' },
+ { 'V', 'A', 'I', 'N' },
+ { 'V', 'A', 'L', 'E' },
+ { 'V', 'A', 'R', 'Y' },
+ { 'V', 'A', 'S', 'E' },
+ { 'V', 'A', 'S', 'T' },
+ { 'V', 'E', 'A', 'L' },
+ { 'V', 'E', 'D', 'A' },
+ { 'V', 'E', 'I', 'L' },
+ { 'V', 'E', 'I', 'N' },
+ { 'V', 'E', 'N', 'D' },
+ { 'V', 'E', 'N', 'T' },
+ { 'V', 'E', 'R', 'B' },
+ { 'V', 'E', 'R', 'Y' },
+ { 'V', 'E', 'T', 'O' },
+ { 'V', 'I', 'C', 'E' },
+ { 'V', 'I', 'E', 'W' },
+ { 'V', 'I', 'N', 'E' },
+ { 'V', 'I', 'S', 'E' },
+ { 'V', 'O', 'I', 'D' },
+ { 'V', 'O', 'L', 'T' },
+ { 'V', 'O', 'T', 'E' },
+ { 'W', 'A', 'C', 'K' },
+ { 'W', 'A', 'D', 'E' },
+ { 'W', 'A', 'G', 'E' },
+ { 'W', 'A', 'I', 'L' },
+ { 'W', 'A', 'I', 'T' },
+ { 'W', 'A', 'K', 'E' },
+ { 'W', 'A', 'L', 'E' },
+ { 'W', 'A', 'L', 'K' },
+ { 'W', 'A', 'L', 'L' },
+ { 'W', 'A', 'L', 'T' },
+ { 'W', 'A', 'N', 'D' },
+ { 'W', 'A', 'N', 'E' },
+ { 'W', 'A', 'N', 'G' },
+ { 'W', 'A', 'N', 'T' },
+ { 'W', 'A', 'R', 'D' },
+ { 'W', 'A', 'R', 'M' },
+ { 'W', 'A', 'R', 'N' },
+ { 'W', 'A', 'R', 'T' },
+ { 'W', 'A', 'S', 'H' },
+ { 'W', 'A', 'S', 'T' },
+ { 'W', 'A', 'T', 'S' },
+ { 'W', 'A', 'T', 'T' },
+ { 'W', 'A', 'V', 'E' },
+ { 'W', 'A', 'V', 'Y' },
+ { 'W', 'A', 'Y', 'S' },
+ { 'W', 'E', 'A', 'K' },
+ { 'W', 'E', 'A', 'L' },
+ { 'W', 'E', 'A', 'N' },
+ { 'W', 'E', 'A', 'R' },
+ { 'W', 'E', 'E', 'D' },
+ { 'W', 'E', 'E', 'K' },
+ { 'W', 'E', 'I', 'R' },
+ { 'W', 'E', 'L', 'D' },
+ { 'W', 'E', 'L', 'L' },
+ { 'W', 'E', 'L', 'T' },
+ { 'W', 'E', 'N', 'T' },
+ { 'W', 'E', 'R', 'E' },
+ { 'W', 'E', 'R', 'T' },
+ { 'W', 'E', 'S', 'T' },
+ { 'W', 'H', 'A', 'M' },
+ { 'W', 'H', 'A', 'T' },
+ { 'W', 'H', 'E', 'E' },
+ { 'W', 'H', 'E', 'N' },
+ { 'W', 'H', 'E', 'T' },
+ { 'W', 'H', 'O', 'A' },
+ { 'W', 'H', 'O', 'M' },
+ { 'W', 'I', 'C', 'K' },
+ { 'W', 'I', 'F', 'E' },
+ { 'W', 'I', 'L', 'D' },
+ { 'W', 'I', 'L', 'L' },
+ { 'W', 'I', 'N', 'D' },
+ { 'W', 'I', 'N', 'E' },
+ { 'W', 'I', 'N', 'G' },
+ { 'W', 'I', 'N', 'K' },
+ { 'W', 'I', 'N', 'O' },
+ { 'W', 'I', 'R', 'E' },
+ { 'W', 'I', 'S', 'E' },
+ { 'W', 'I', 'S', 'H' },
+ { 'W', 'I', 'T', 'H' },
+ { 'W', 'O', 'L', 'F' },
+ { 'W', 'O', 'N', 'T' },
+ { 'W', 'O', 'O', 'D' },
+ { 'W', 'O', 'O', 'L' },
+ { 'W', 'O', 'R', 'D' },
+ { 'W', 'O', 'R', 'E' },
+ { 'W', 'O', 'R', 'K' },
+ { 'W', 'O', 'R', 'M' },
+ { 'W', 'O', 'R', 'N' },
+ { 'W', 'O', 'V', 'E' },
+ { 'W', 'R', 'I', 'T' },
+ { 'W', 'Y', 'N', 'N' },
+ { 'Y', 'A', 'L', 'E' },
+ { 'Y', 'A', 'N', 'G' },
+ { 'Y', 'A', 'N', 'K' },
+ { 'Y', 'A', 'R', 'D' },
+ { 'Y', 'A', 'R', 'N' },
+ { 'Y', 'A', 'W', 'L' },
+ { 'Y', 'A', 'W', 'N' },
+ { 'Y', 'E', 'A', 'H' },
+ { 'Y', 'E', 'A', 'R' },
+ { 'Y', 'E', 'L', 'L' },
+ { 'Y', 'O', 'G', 'A' },
+ { 'Y', 'O', 'K', 'E' }
+};
+
+/* Extract LENGTH bits from the char array S starting with bit number
+ START. */
+static unsigned long
+extract (const char *s, int start, int length)
+{
+ unsigned char cl = s[start / 8];
+ unsigned char cc = s[start / 8 + 1];
+ unsigned char cr = s[start / 8 + 2];
+ unsigned long x = ((long)(cl << 8 | cc) << 8 | cr);
+
+ x = x >> (24 - (length + (start % 8)));
+ x = (x & (0xffff >> (16 - length)));
+ return x;
+}
+
+#define STRLEN4(s) (!*(s) ? 0 : \
+ (!*(s + 1) ? 1 : \
+ (!*(s + 2) ? 2 : \
+ (!*(s + 3) ? 3 : 4))))
+
+/* Encode the 8 bytes at C as a string of six English words and store
+   the result in STORE.  Returns STORE. */
+static char *
+btoe (char *store, const char *c)
+{
+ char cp[10]; /* add in room for the parity 2 bits +
+ extract() slop. */
+ int p, i;
+ char *ostore = store;
+
+ *store = '\0';
+  /* Work around extract() reading beyond the end of the data. */
+ memset (cp, 0, sizeof(cp));
+ memcpy (cp, c, 8);
+ /* Compute parity. */
+ for (p = 0, i = 0; i < 64; i += 2)
+ p += extract (cp, i, 2);
+
+ cp[8] = (char)p << 6;
+ memcpy (store, &Wp[extract (cp, 0, 11)][0], 4);
+ store += STRLEN4 (store);
+ *store++ = ' ';
+ memcpy (store, &Wp[extract (cp, 11, 11)][0], 4);
+ store += STRLEN4 (store);
+ *store++ = ' ';
+ memcpy (store, &Wp[extract (cp, 22, 11)][0], 4);
+ store += STRLEN4 (store);
+ *store++ = ' ';
+ memcpy (store, &Wp[extract (cp, 33, 11)][0], 4);
+ store += STRLEN4 (store);
+ *store++ = ' ';
+ memcpy (store, &Wp[extract (cp, 44, 11)][0], 4);
+ store += STRLEN4 (store);
+ *store++ = ' ';
+ memcpy (store, &Wp[extract (cp, 55, 11)][0], 4);
+
+ DEBUGP (("store is `%s'\n", ostore));
+
+ return ostore;
+}
+
+/* Compute the S/KEY response for the challenge given by SEQUENCE and
+   SEED, using the secret pass phrase PASS.  The seed and pass phrase
+   are concatenated and hashed with MD5; the 128-bit digest is folded
+   to 64 bits by XORing its halves, and the hash-and-fold step is then
+   repeated SEQUENCE times.  The result is returned encoded as six
+   English words.  Note that the return value points to a static
+   buffer, overwritten on the next call. */
+const char *
+calculate_skey_response (int sequence, const char *seed, const char *pass)
+{
+ char key[8];
+ static char buf[33];
+
+ struct md5_ctx ctx;
+ unsigned long results[4]; /* #### this looks 32-bit-minded */
+ char *feed = (char *) alloca (strlen (seed) + strlen (pass) + 1);
+
+ strcpy (feed, seed);
+ strcat (feed, pass);
+
+ md5_init_ctx (&ctx);
+ md5_process_bytes (feed, strlen (feed), &ctx);
+ md5_finish_ctx (&ctx, results);
+
+ results[0] ^= results[2];
+ results[1] ^= results[3];
+ memcpy (key, (char *) results, 8);
+
+ while (0 < sequence--)
+ {
+ md5_init_ctx (&ctx);
+ md5_process_bytes (key, 8, &ctx);
+ md5_finish_ctx (&ctx, results);
+ results[0] ^= results[2];
+ results[1] ^= results[3];
+ memcpy (key, (char *) results, 8);
+ }
+ btoe (buf, key);
+ return buf;
+}
--- /dev/null
+/* File Transfer Protocol support.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <ctype.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#include <sys/types.h>
+#include <assert.h>
+#include <errno.h>
+
+#include "wget.h"
+#include "utils.h"
+#include "url.h"
+#include "rbuf.h"
+#include "retr.h"
+#include "ftp.h"
+#include "html.h"
+#include "connect.h"
+#include "host.h"
+#include "fnmatch.h"
+#include "netrc.h"
+
+#ifndef errno
+extern int errno;
+#endif
+#ifndef h_errno
+extern int h_errno;
+#endif
+
+/* File where the "ls -al" listing will be saved. */
+#define LIST_FILENAME ".listing"
+
+extern char ftp_last_respline[];
+
+/* Look for the regexp "( *[0-9]+ *byte" (with a literal parenthesis)
+   anywhere in the string S; return the number, converted to long, if
+   found, and 0 otherwise. */
+static long
+ftp_expected_bytes (const char *s)
+{
+ long res;
+
+ while (1)
+ {
+ while (*s && *s != '(')
+ ++s;
+ if (!*s)
+ return 0;
+ for (++s; *s && ISSPACE (*s); s++);
+ if (!*s)
+ return 0;
+ if (!ISDIGIT (*s))
+ continue;
+ res = 0;
+ do
+ {
+ res = (*s - '0') + 10 * res;
+ ++s;
+ }
+ while (*s && ISDIGIT (*s));
+ if (!*s)
+ return 0;
+ while (*s && ISSPACE (*s))
+ ++s;
+ if (!*s)
+ return 0;
+ if (tolower (*s) != 'b')
+ continue;
+ if (strncasecmp (s, "byte", 4))
+ continue;
+ else
+ break;
+ }
+ return res;
+}
+
+/* Retrieve one file from the FTP server described by the given
+   parameters.  The data connection is always closed; the control
+   connection is closed too in case of error. */
+static uerr_t
+getftp (const struct urlinfo *u, long *len, long restval, ccon *con)
+{
+ int csock, dtsock, res;
+ uerr_t err;
+ FILE *fp;
+ char *user, *passwd, *respline;
+ char *tms, *tmrate;
+ unsigned char pasv_addr[6];
+ int cmd = con->cmd;
+ int passive_mode_open = 0;
+ long expected_bytes = 0L;
+
+ assert (con != NULL);
+ assert (u->local != NULL);
+  /* Debug-check the sanity of the request by making sure that LIST
+     and RETR are never both requested (since we can handle only one
+     at a time). */
+ assert (!((cmd & DO_LIST) && (cmd & DO_RETR)));
+ /* Make sure that at least *something* is requested. */
+ assert ((cmd & (DO_LIST | DO_CWD | DO_RETR | DO_LOGIN)) != 0);
+
+ user = u->user;
+ passwd = u->passwd;
+ search_netrc (u->host, (const char **)&user, (const char **)&passwd, 1);
+ user = user ? user : opt.ftp_acc;
+ if (!opt.ftp_pass)
+ opt.ftp_pass = xstrdup (ftp_getaddress ());
+ passwd = passwd ? passwd : opt.ftp_pass;
+ assert (user && passwd);
+
+ dtsock = -1;
+ con->dltime = 0;
+
+ if (!(cmd & DO_LOGIN))
+ csock = RBUF_FD (&con->rbuf);
+ else /* cmd & DO_LOGIN */
+ {
+ /* Login to the server: */
+
+ /* First: Establish the control connection. */
+ logprintf (LOG_VERBOSE, _("Connecting to %s:%hu... "), u->host, u->port);
+ err = make_connection (&csock, u->host, u->port);
+ if (cmd & LEAVE_PENDING)
+ rbuf_initialize (&con->rbuf, csock);
+ else
+ rbuf_uninitialize (&con->rbuf);
+ switch (err)
+ {
+	  /* Do not close the socket in the first several cases, since
+	     it wasn't created at all. */
+ case HOSTERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "%s: %s\n", u->host, herrmsg (h_errno));
+ return HOSTERR;
+ break;
+ case CONSOCKERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "socket: %s\n", strerror (errno));
+ return CONSOCKERR;
+ break;
+ case CONREFUSED:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, _("Connection to %s:%hu refused.\n"),
+ u->host, u->port);
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return CONREFUSED;
+ case CONERROR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "connect: %s\n", strerror (errno));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return CONERROR;
+ break;
+ default:
+ DO_NOTHING;
+ /* #### Hmm? */
+ }
+ /* Since this is a new connection, we may safely discard
+ anything left in the buffer. */
+ rbuf_discard (&con->rbuf);
+
+ /* Second: Login with proper USER/PASS sequence. */
+ logputs (LOG_VERBOSE, _("connected!\n"));
+ logprintf (LOG_VERBOSE, _("Logging in as %s ... "), user);
+ if (opt.server_response)
+ logputs (LOG_ALWAYS, "\n");
+ err = ftp_login (&con->rbuf, user, passwd);
+ /* FTPRERR, FTPSRVERR, WRITEFAILED, FTPLOGREFUSED, FTPLOGINC */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPSRVERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("Error in server greeting.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPLOGREFUSED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("The server refuses login.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return FTPLOGREFUSED;
+ break;
+ case FTPLOGINC:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("Login incorrect.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return FTPLOGINC;
+ break;
+ case FTPOK:
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, _("Logged in!\n"));
+ break;
+ default:
+ abort ();
+ exit (1);
+ break;
+ }
+ /* Third: Set type to Image (binary). */
+ if (!opt.server_response)
+ logprintf (LOG_VERBOSE, "==> TYPE %c ... ", toupper (u->ftp_type));
+ err = ftp_type (&con->rbuf, toupper (u->ftp_type));
+ /* FTPRERR, WRITEFAILED, FTPUNKNOWNTYPE */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPUNKNOWNTYPE:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET,
+ _("Unknown type `%c', closing control connection.\n"),
+ toupper (u->ftp_type));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ case FTPOK:
+ /* Everything is OK. */
+ break;
+ default:
+ abort ();
+ break;
+ }
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, _("done. "));
+ } /* do login */
+
+ if (cmd & DO_CWD)
+ {
+ if (!*u->dir)
+ logputs (LOG_VERBOSE, _("==> CWD not needed.\n"));
+ else
+ {
+ /* Change working directory. */
+ if (!opt.server_response)
+ logprintf (LOG_VERBOSE, "==> CWD %s ... ", u->dir);
+ err = ftp_cwd (&con->rbuf, u->dir);
+ /* FTPRERR, WRITEFAILED, FTPNSFOD */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPNSFOD:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, _("No such directory `%s'.\n\n"),
+ u->dir);
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPOK:
+ /* fine and dandy */
+ break;
+ default:
+ abort ();
+ break;
+ }
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, _("done.\n"));
+ }
+ }
+ else /* do not CWD */
+ logputs (LOG_VERBOSE, _("==> CWD not required.\n"));
+
+ /* If anything is to be retrieved, PORT (or PASV) must be sent. */
+ if (cmd & (DO_LIST | DO_RETR))
+ {
+ if (opt.ftp_pasv)
+ {
+ char thost[256];
+ unsigned short tport;
+
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, "==> PASV ... ");
+ err = ftp_pasv (&con->rbuf, pasv_addr);
+ /* FTPRERR, WRITEFAILED, FTPNOPASV, FTPINVPASV */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPNOPASV:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("Cannot initiate PASV transfer.\n"));
+ break;
+ case FTPINVPASV:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("Cannot parse PASV response.\n"));
+ break;
+ case FTPOK:
+ /* fine and dandy */
+ break;
+ default:
+ abort ();
+ break;
+ }
+	  if (err == FTPOK)
+ {
+ sprintf (thost, "%d.%d.%d.%d",
+ pasv_addr[0], pasv_addr[1], pasv_addr[2], pasv_addr[3]);
+ tport = (pasv_addr[4] << 8) + pasv_addr[5];
+ DEBUGP ((_("Will try connecting to %s:%hu.\n"), thost, tport));
+ err = make_connection (&dtsock, thost, tport);
+ switch (err)
+ {
+		  /* Do not close the socket in the first several
+		     cases, since it wasn't created at all. */
+ case HOSTERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "%s: %s\n", thost,
+ herrmsg (h_errno));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return HOSTERR;
+ break;
+ case CONSOCKERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "socket: %s\n", strerror (errno));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return CONSOCKERR;
+ break;
+ case CONREFUSED:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET,
+ _("Connection to %s:%hu refused.\n"),
+ thost, tport);
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ closeport (dtsock);
+ return CONREFUSED;
+ case CONERROR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "connect: %s\n",
+ strerror (errno));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ closeport (dtsock);
+ return CONERROR;
+ break;
+ default:
+ /* #### What?! */
+ DO_NOTHING;
+ }
+	      passive_mode_open = 1;	/* flag to skip the accept step */
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, _("done. "));
+	    } /* err == FTPOK */
+ }
+
+ if (!passive_mode_open) /* Try to use a port command if PASV failed */
+ {
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, "==> PORT ... ");
+ err = ftp_port (&con->rbuf);
+ /* FTPRERR, WRITEFAILED, bindport (CONSOCKERR, CONPORTERR, BINDERR,
+ LISTENERR), HOSTERR, FTPPORTERR */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case CONSOCKERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "socket: %s\n", strerror (errno));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case CONPORTERR: case BINDERR: case LISTENERR:
+ /* What now? These problems are local... */
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, _("Bind error (%s).\n"),
+ strerror (errno));
+ closeport (dtsock);
+ return err;
+ break;
+ case HOSTERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "%s: %s\n", u->host,
+ herrmsg (h_errno));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return HOSTERR;
+ break;
+ case FTPPORTERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("Invalid PORT.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPOK:
+ /* fine and dandy */
+ break;
+ default:
+ abort ();
+ break;
+ } /* port switch */
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, _("done. "));
+ } /* dtsock == -1 */
+ } /* cmd & (DO_LIST | DO_RETR) */
+
+ /* Restart if needed. */
+ if (restval && (cmd & DO_RETR))
+ {
+ if (!opt.server_response)
+ logprintf (LOG_VERBOSE, "==> REST %ld ... ", restval);
+ err = ftp_rest (&con->rbuf, restval);
+
+ /* FTPRERR, WRITEFAILED, FTPRESTFAIL */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPRESTFAIL:
+ logputs (LOG_VERBOSE, _("\nREST failed, starting from scratch.\n"));
+ restval = 0L;
+ break;
+ case FTPOK:
+ /* fine and dandy */
+ break;
+ default:
+ abort ();
+ break;
+ }
+ if (err != FTPRESTFAIL && !opt.server_response)
+ logputs (LOG_VERBOSE, _("done. "));
+ } /* restval && cmd & DO_RETR */
+
+ if (cmd & DO_RETR)
+ {
+ if (opt.verbose)
+ {
+ if (!opt.server_response)
+ {
+ if (restval)
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_VERBOSE, "==> RETR %s ... ", u->file);
+ }
+ }
+ err = ftp_retr (&con->rbuf, u->file);
+ /* FTPRERR, WRITEFAILED, FTPNSFOD */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPNSFOD:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, _("No such file `%s'.\n\n"), u->file);
+ closeport (dtsock);
+ return err;
+ break;
+ case FTPOK:
+ /* fine and dandy */
+ break;
+ default:
+ abort ();
+ break;
+ }
+
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, _("done.\n"));
+ expected_bytes = ftp_expected_bytes (ftp_last_respline);
+ } /* do retrieve */
+
+ if (cmd & DO_LIST)
+ {
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, "==> LIST ... ");
+ /* As Maciej W. Rozycki (macro@ds2.pg.gda.pl) says, `LIST'
+ without arguments is better than `LIST .'; confirmed by
+ RFC959. */
+ err = ftp_list (&con->rbuf, NULL);
+ /* FTPRERR, WRITEFAILED */
+ switch (err)
+ {
+ case FTPRERR:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("\
+Error in server response, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case WRITEFAILED:
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET,
+ _("Write failed, closing control connection.\n"));
+ CLOSE (csock);
+ closeport (dtsock);
+ rbuf_uninitialize (&con->rbuf);
+ return err;
+ break;
+ case FTPNSFOD:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, _("No such file or directory `%s'.\n\n"),
+ ".");
+ closeport (dtsock);
+ return err;
+ break;
+ case FTPOK:
+ /* fine and dandy */
+ break;
+ default:
+ abort ();
+ break;
+ }
+ if (!opt.server_response)
+ logputs (LOG_VERBOSE, _("done.\n"));
+ expected_bytes = ftp_expected_bytes (ftp_last_respline);
+ } /* cmd & DO_LIST */
+
+ /* If no transmission was required, then everything is OK. */
+ if (!(cmd & (DO_LIST | DO_RETR)))
+ return RETRFINISHED;
+
+  if (!passive_mode_open)	/* we are not using passive mode, so
+				   we need to accept */
+ {
+ /* Open the data transmission socket by calling acceptport(). */
+ err = acceptport (&dtsock);
+ /* Possible errors: ACCEPTERR. */
+ if (err == ACCEPTERR)
+ {
+ logprintf (LOG_NOTQUIET, "accept: %s\n", strerror (errno));
+ return err;
+ }
+ }
+
+ /* Open the file -- if opt.dfp is set, use it instead. */
+ if (!opt.dfp || con->cmd & DO_LIST)
+ {
+ mkalldirs (u->local);
+ if (opt.backups)
+ rotate_backups (u->local);
+ /* #### Is this correct? */
+ chmod (u->local, 0600);
+
+ fp = fopen (u->local, restval ? "ab" : "wb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, "%s: %s\n", u->local, strerror (errno));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ closeport (dtsock);
+ return FOPENERR;
+ }
+ }
+ else
+ fp = opt.dfp;
+
+ if (*len)
+ {
+ logprintf (LOG_VERBOSE, _("Length: %s"), legible (*len));
+ if (restval)
+ logprintf (LOG_VERBOSE, _(" [%s to go]"), legible (*len - restval));
+ logputs (LOG_VERBOSE, "\n");
+ }
+ else if (expected_bytes)
+ {
+ logprintf (LOG_VERBOSE, _("Length: %s"), legible (expected_bytes));
+ if (restval)
+ logprintf (LOG_VERBOSE, _(" [%s to go]"),
+ legible (expected_bytes - restval));
+ logputs (LOG_VERBOSE, _(" (unauthoritative)\n"));
+ }
+ reset_timer ();
+ /* Get the contents of the document. */
+ res = get_contents (dtsock, fp, len, restval, expected_bytes, &con->rbuf);
+ con->dltime = elapsed_time ();
+ tms = time_str (NULL);
+ tmrate = rate (*len - restval, con->dltime);
+ /* Close data connection socket. */
+ closeport (dtsock);
+ /* Close the local file. */
+ if (!opt.dfp || con->cmd & DO_LIST)
+ fclose (fp);
+ else
+ fflush (fp);
+ /* If get_contents couldn't write to fp, bail out. */
+ if (res == -2)
+ {
+ logprintf (LOG_NOTQUIET, _("%s: %s, closing control connection.\n"),
+ u->local, strerror (errno));
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return FWRITEERR;
+ }
+ else if (res == -1)
+ {
+ logprintf (LOG_NOTQUIET, _("%s (%s) - Data connection: %s; "),
+ tms, tmrate, strerror (errno));
+ if (opt.server_response)
+ logputs (LOG_ALWAYS, "\n");
+ }
+
+ /* Get the server to tell us if everything is retrieved. */
+ err = ftp_response (&con->rbuf, &respline);
+ /* ...and empty the buffer. */
+ rbuf_discard (&con->rbuf);
+ if (err != FTPOK)
+ {
+ free (respline);
+ /* The control connection is decidedly closed. Print the time
+ only if it hasn't already been printed. */
+ if (res != -1)
+ logprintf (LOG_NOTQUIET, "%s (%s) - ", tms, tmrate);
+ logputs (LOG_NOTQUIET, _("Control connection closed.\n"));
+ /* If there is an error on the control connection, close it, but
+ return FTPRETRINT, since there is a possibility that the
+ whole file was retrieved nevertheless (but that is for
+ ftp_loop_internal to decide). */
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ return FTPRETRINT;
+ } /* err != FTPOK */
+ /* If retrieval failed for any reason, return FTPRETRINT, but do not
+ close socket, since the control connection is still alive. If
+ there is something wrong with the control connection, it will
+ become apparent later. */
+ if (*respline != '2')
+ {
+ free (respline);
+ if (res != -1)
+ logprintf (LOG_NOTQUIET, "%s (%s) - ", tms, tmrate);
+ logputs (LOG_NOTQUIET, _("Data transfer aborted.\n"));
+ return FTPRETRINT;
+ }
+ free (respline);
+
+ if (res == -1)
+ {
+ /* What now? The data connection was erroneous, whereas the
+ response says everything is OK. We shall play it safe. */
+ return FTPRETRINT;
+ }
+
+ if (!(cmd & LEAVE_PENDING))
+ {
+ /* I should probably send 'QUIT' and check for a reply, but this
+ is faster. #### Is it OK, though? */
+ CLOSE (csock);
+ rbuf_uninitialize (&con->rbuf);
+ }
+ /* If it was a listing, and opt.server_response is true,
+ print it out. */
+ if (opt.server_response && (con->cmd & DO_LIST))
+ {
+ mkalldirs (u->local);
+ fp = fopen (u->local, "r");
+ if (!fp)
+ logprintf (LOG_ALWAYS, "%s: %s\n", u->local, strerror (errno));
+ else
+ {
+ char *line;
+	  /* The lines are read with read_whole_line because opt.lfile
+	     is unbuffered. */
+ while ((line = read_whole_line (fp)))
+ {
+ logprintf (LOG_ALWAYS, "%s\n", line);
+ free (line);
+ }
+ fclose (fp);
+ }
+ } /* con->cmd & DO_LIST && server_response */
+
+ return RETRFINISHED;
+}
+
+/* A one-file FTP loop. This is the part where FTP retrieval is
+ retried, and retried, and retried, and...
+
+ This loop either gets commands from con, or (if ON_YOUR_OWN is
+ set), makes them up to retrieve the file given by the URL. */
+static uerr_t
+ftp_loop_internal (struct urlinfo *u, struct fileinfo *f, ccon *con)
+{
+ static int first_retrieval = 1;
+
+ int count, orig_lp;
+ long restval, len;
+ char *tms, *tmrate, *locf;
+ uerr_t err;
+ struct stat st;
+
+ if (!u->local)
+ u->local = url_filename (u);
+
+ if (opt.noclobber && file_exists_p (u->local))
+ {
+ logprintf (LOG_VERBOSE,
+ _("File `%s' already there, not retrieving.\n"), u->local);
+ /* If the file is there, we suppose it's retrieved OK. */
+ return RETROK;
+ }
+
+ /* Remove it if it's a link. */
+ remove_link (u->local);
+ if (!opt.output_document)
+ locf = u->local;
+ else
+ locf = opt.output_document;
+
+ count = 0;
+
+ if (con->st & ON_YOUR_OWN)
+ con->st = ON_YOUR_OWN;
+
+ orig_lp = con->cmd & LEAVE_PENDING ? 1 : 0;
+
+ /* THE loop. */
+ do
+ {
+ /* Increment the pass counter. */
+ ++count;
+ /* Wait before the retrieval (unless this is the very first
+ retrieval). */
+ if (!first_retrieval && opt.wait)
+ sleep (opt.wait);
+ if (first_retrieval)
+ first_retrieval = 0;
+ if (con->st & ON_YOUR_OWN)
+ {
+ con->cmd = 0;
+ con->cmd |= (DO_RETR | LEAVE_PENDING);
+ if (rbuf_initialized_p (&con->rbuf))
+ con->cmd &= ~ (DO_LOGIN | DO_CWD);
+ else
+ con->cmd |= (DO_LOGIN | DO_CWD);
+ }
+ else /* not on your own */
+ {
+ if (rbuf_initialized_p (&con->rbuf))
+ con->cmd &= ~DO_LOGIN;
+ else
+ con->cmd |= DO_LOGIN;
+ if (con->st & DONE_CWD)
+ con->cmd &= ~DO_CWD;
+ else
+ con->cmd |= DO_CWD;
+ }
+ /* Assume no restarting. */
+ restval = 0L;
+ if ((count > 1 || opt.always_rest)
+ && !(con->cmd & DO_LIST)
+ && file_exists_p (u->local))
+ if (stat (u->local, &st) == 0)
+ restval = st.st_size;
+ /* Get the current time string. */
+ tms = time_str (NULL);
+ /* Print fetch message, if opt.verbose. */
+ if (opt.verbose)
+ {
+ char *hurl = str_url (u->proxy ? u->proxy : u, 1);
+ char tmp[15];
+ strcpy (tmp, " ");
+ if (count > 1)
+ sprintf (tmp, _("(try:%2d)"), count);
+ logprintf (LOG_VERBOSE, "--%s-- %s\n %s => `%s'\n",
+ tms, hurl, tmp, locf);
+#ifdef WINDOWS
+ ws_changetitle (hurl, 1);
+#endif
+ free (hurl);
+ }
+ /* Send getftp the proper length, if fileinfo was provided. */
+ if (f)
+ len = f->size;
+ else
+ len = 0;
+ err = getftp (u, &len, restval, con);
+ /* Time? */
+ tms = time_str (NULL);
+ tmrate = rate (len - restval, con->dltime);
+
+ if (!rbuf_initialized_p (&con->rbuf))
+ con->st &= ~DONE_CWD;
+ else
+ con->st |= DONE_CWD;
+
+ switch (err)
+ {
+ case HOSTERR: case CONREFUSED: case FWRITEERR: case FOPENERR:
+ case FTPNSFOD: case FTPLOGINC: case FTPNOPASV:
+ /* Fatal errors, give up. */
+ return err;
+ break;
+ case CONSOCKERR: case CONERROR: case FTPSRVERR: case FTPRERR:
+ case WRITEFAILED: case FTPUNKNOWNTYPE: case CONPORTERR:
+ case BINDERR: case LISTENERR: case ACCEPTERR:
+ case FTPPORTERR: case FTPLOGREFUSED: case FTPINVPASV:
+ printwhat (count, opt.ntry);
+ /* non-fatal errors */
+ continue;
+ break;
+ case FTPRETRINT:
+ /* If the control connection was closed, the retrieval
+ will be considered OK if f->size == len. */
+ if (!f || len != f->size)
+ {
+ printwhat (count, opt.ntry);
+ continue;
+ }
+ break;
+ case RETRFINISHED:
+ /* Great! */
+ break;
+ default:
+ /* Not as great. */
+ abort ();
+ }
+ if (con->st & ON_YOUR_OWN)
+ {
+ CLOSE (RBUF_FD (&con->rbuf));
+ rbuf_uninitialize (&con->rbuf);
+ }
+ logprintf (LOG_VERBOSE, _("%s (%s) - `%s' saved [%ld]\n\n"),
+ tms, tmrate, locf, len);
+ logprintf (LOG_NONVERBOSE, "%s URL: %s [%ld] -> \"%s\" [%d]\n",
+ tms, u->url, len, locf, count);
+ /* Do not count listings among the downloaded stuff, since they
+ will get deleted anyway. */
+ if (!(con->cmd & DO_LIST))
+ {
+ ++opt.numurls;
+ opt.downloaded += len;
+ }
+ /* Restore the original leave-pendingness. */
+ if (orig_lp)
+ con->cmd |= LEAVE_PENDING;
+ else
+ con->cmd &= ~LEAVE_PENDING;
+ return RETROK;
+ } while (!opt.ntry || (count < opt.ntry));
+
+ if (rbuf_initialized_p (&con->rbuf) && (con->st & ON_YOUR_OWN))
+ {
+ CLOSE (RBUF_FD (&con->rbuf));
+ rbuf_uninitialize (&con->rbuf);
+ }
+ return TRYLIMEXC;
+}
+
+/* Return the directory listing in a reusable format. The directory
+   is specified in u->dir. */
+static struct fileinfo *
+ftp_get_listing (struct urlinfo *u, ccon *con)
+{
+ struct fileinfo *f;
+ uerr_t err;
+ char *olocal = u->local;
+ char *list_filename, *ofile;
+
+ con->st &= ~ON_YOUR_OWN;
+ con->cmd |= (DO_LIST | LEAVE_PENDING);
+ con->cmd &= ~DO_RETR;
+ /* Get the listing filename. */
+ ofile = u->file;
+ u->file = LIST_FILENAME;
+ list_filename = url_filename (u);
+ u->file = ofile;
+ u->local = list_filename;
+ DEBUGP ((_("Using `%s' as listing tmp file.\n"), list_filename));
+ err = ftp_loop_internal (u, NULL, con);
+ u->local = olocal;
+ if (err == RETROK)
+ f = ftp_parse_ls (list_filename);
+ else
+ f = NULL;
+ if (opt.remove_listing)
+ {
+ if (unlink (list_filename))
+ logprintf (LOG_NOTQUIET, "unlink: %s\n", strerror (errno));
+ else
+ logprintf (LOG_VERBOSE, _("Removed `%s'.\n"), list_filename);
+ }
+ free (list_filename);
+ con->cmd &= ~DO_LIST;
+ return f;
+}
+
+static uerr_t ftp_retrieve_dirs PARAMS ((struct urlinfo *, struct fileinfo *,
+ ccon *));
+static uerr_t ftp_retrieve_glob PARAMS ((struct urlinfo *, ccon *, int));
+static struct fileinfo *delelement PARAMS ((struct fileinfo *,
+ struct fileinfo **));
+static void freefileinfo PARAMS ((struct fileinfo *f));
+
+/* Retrieve the files given in a struct fileinfo linked list.  If a
+   file is a symbolic link, do not retrieve it, but rather try to
+   set up a similar link on the local disk, if symlinks are
+   supported.
+
+ If opt.recursive is set, after all files have been retrieved,
+ ftp_retrieve_dirs will be called to retrieve the directories. */
+static uerr_t
+ftp_retrieve_list (struct urlinfo *u, struct fileinfo *f, ccon *con)
+{
+ static int depth = 0;
+ uerr_t err;
+ char *olocal, *ofile;
+ struct fileinfo *orig;
+ long local_size;
+ time_t tml;
+ int dlthis;
+
+ /* Increase the depth. */
+ ++depth;
+ if (opt.reclevel && depth > opt.reclevel)
+ {
+ DEBUGP ((_("Recursion depth %d exceeded max. depth %d.\n"),
+ depth, opt.reclevel));
+ --depth;
+ return RECLEVELEXC;
+ }
+
+ assert (f != NULL);
+ orig = f;
+
+ con->st &= ~ON_YOUR_OWN;
+ if (!(con->st & DONE_CWD))
+ con->cmd |= DO_CWD;
+ else
+ con->cmd &= ~DO_CWD;
+ con->cmd |= (DO_RETR | LEAVE_PENDING);
+
+ if (!rbuf_initialized_p (&con->rbuf))
+ con->cmd |= DO_LOGIN;
+ else
+ con->cmd &= ~DO_LOGIN;
+
+ err = RETROK; /* in case it's not used */
+
+ while (f)
+ {
+ if (opt.quota && opt.downloaded > opt.quota)
+ {
+ --depth;
+ return QUOTEXC;
+ }
+ olocal = u->local;
+ ofile = u->file;
+ u->file = f->name;
+ u->local = url_filename (u);
+ err = RETROK;
+
+ dlthis = 1;
+ if (opt.timestamping && f->type == FT_PLAINFILE)
+ {
+ struct stat st;
+ if (!stat (u->local, &st))
+ {
+ /* Else, get it from the file. */
+ local_size = st.st_size;
+ tml = st.st_mtime;
+ if (local_size == f->size && tml >= f->tstamp)
+ {
+ logprintf (LOG_VERBOSE, _("\
+Local file `%s' is more recent, not retrieving.\n\n"), u->local);
+ dlthis = 0;
+ }
+ else if (local_size != f->size)
+ {
+ logprintf (LOG_VERBOSE, _("\
+The sizes do not match (local %ld), retrieving.\n"), local_size);
+ }
+ }
+ } /* opt.timestamping && f->type == FT_PLAINFILE */
+ switch (f->type)
+ {
+ case FT_SYMLINK:
+ /* If opt.retr_symlinks is defined, we treat symlinks as
+ if they were normal files. There is currently no way
+ to distinguish whether they might be directories, and
+ follow them. */
+ if (!opt.retr_symlinks)
+ {
+#ifdef HAVE_SYMLINK
+ if (!f->linkto)
+ logputs (LOG_NOTQUIET,
+ _("Invalid name of the symlink, skipping.\n"));
+ else
+ {
+ struct stat st;
+ /* Check whether we already have the correct
+ symbolic link. */
+ int rc = lstat (u->local, &st);
+ if (rc == 0)
+ {
+ size_t len = strlen (f->linkto) + 1;
+ if (S_ISLNK (st.st_mode))
+ {
+ char *link_target = (char *)alloca (len);
+ size_t n = readlink (u->local, link_target, len);
+ if ((n == len - 1)
+ && (memcmp (link_target, f->linkto, n) == 0))
+ {
+ logprintf (LOG_VERBOSE, _("\
+Already have correct symlink %s -> %s\n\n"),
+ u->local, f->linkto);
+ dlthis = 0;
+ break;
+ }
+ }
+ }
+ logprintf (LOG_VERBOSE, _("Creating symlink %s -> %s\n"),
+ u->local, f->linkto);
+ /* Unlink before creating symlink! */
+ unlink (u->local);
+ if (symlink (f->linkto, u->local) == -1)
+ logprintf (LOG_NOTQUIET, "symlink: %s\n",
+ strerror (errno));
+ logputs (LOG_VERBOSE, "\n");
+ } /* have f->linkto */
+#else /* not HAVE_SYMLINK */
+ logprintf (LOG_NOTQUIET,
+ _("Symlinks not supported, skipping symlink `%s'.\n"),
+ u->local);
+#endif /* not HAVE_SYMLINK */
+ }
+ else /* opt.retr_symlinks */
+ {
+ if (dlthis)
+ err = ftp_loop_internal (u, f, con);
+ } /* opt.retr_symlinks */
+ break;
+ case FT_DIRECTORY:
+ if (!opt.recursive)
+ logprintf (LOG_NOTQUIET, _("Skipping directory `%s'.\n"),
+ f->name);
+ break;
+ case FT_PLAINFILE:
+ /* Call the retrieve loop. */
+ if (dlthis)
+ err = ftp_loop_internal (u, f, con);
+ break;
+ case FT_UNKNOWN:
+ logprintf (LOG_NOTQUIET, _("%s: unknown/unsupported file type.\n"),
+ f->name);
+ break;
+ } /* switch */
+
+ /* Set the time-stamp on the local file. Symlinks are not to be
+ stamped because touching a symlink sets the stamp on the file
+ it points to. :( */
+ if (!opt.dfp
+ && !(f->type == FT_SYMLINK && !opt.retr_symlinks)
+ && f->tstamp != -1
+ && dlthis
+ && file_exists_p (u->local))
+ {
+ touch (u->local, f->tstamp);
+ }
+ else if (f->tstamp == -1)
+ logprintf (LOG_NOTQUIET, _("%s: corrupt time-stamp.\n"), u->local);
+
+ if (f->perms && dlthis)
+ chmod (u->local, f->perms);
+ else
+ DEBUGP (("Unrecognized permissions for %s.\n", u->local));
+
+ free (u->local);
+ u->local = olocal;
+ u->file = ofile;
+ /* Break on fatals. */
+ if (err == QUOTEXC || err == HOSTERR || err == FWRITEERR)
+ break;
+ con->cmd &= ~ (DO_CWD | DO_LOGIN);
+ f = f->next;
+ } /* while */
+ /* Recurse into subdirectories, unless the maximum recursion
+ depth has been reached. */
+ if (opt.recursive && !(opt.reclevel && depth >= opt.reclevel))
+ err = ftp_retrieve_dirs (u, orig, con);
+ else if (opt.recursive)
+ DEBUGP ((_("Will not retrieve dirs since depth is %d (max %d).\n"),
+ depth, opt.reclevel));
+ --depth;
+ return err;
+}
+
+/* Retrieve the directories given in a file list. This function works
+ by simply going through the linked list and calling
+ ftp_retrieve_glob on each directory entry. The function knows
+ about excluded directories. */
+static uerr_t
+ftp_retrieve_dirs (struct urlinfo *u, struct fileinfo *f, ccon *con)
+{
+ char *odir;
+ char *current_container = NULL;
+ int current_length = 0;
+
+ for (; f; f = f->next)
+ {
+ int len;
+
+ if (opt.quota && opt.downloaded > opt.quota)
+ break;
+ if (f->type != FT_DIRECTORY)
+ continue;
+ odir = u->dir;
+ len = 1 + strlen (u->dir) + 1 + strlen (f->name) + 1;
+ /* Allocate u->dir off stack, but reallocate only if a larger
+ string is needed. */
+ if (len > current_length)
+ current_container = (char *)alloca (len);
+ u->dir = current_container;
+ /* When retrieving recursively, all directories must be
+ absolute. This restriction will (hopefully!) be lifted in
+ the future. */
+ sprintf (u->dir, "/%s%s%s", odir + (*odir == '/'),
+ (!*odir || (*odir == '/' && !* (odir + 1))) ? "" : "/", f->name);
+ if (!accdir (u->dir, ALLABS))
+ {
+ logprintf (LOG_VERBOSE, _("\
+Not descending to `%s' as it is excluded/not-included.\n"), u->dir);
+ u->dir = odir;
+ continue;
+ }
+ con->st &= ~DONE_CWD;
+ ftp_retrieve_glob (u, con, GETALL);
+ /* Set the time-stamp? */
+ u->dir = odir;
+ }
+ if (opt.quota && opt.downloaded > opt.quota)
+ return QUOTEXC;
+ else
+ return RETROK;
+}
+
+
+/* A near-top-level function to retrieve the files in a directory.
+ The function calls ftp_get_listing, to get a linked list of files.
+ Then it weeds out the file names that do not match the pattern.
+ ftp_retrieve_list is called with this updated list as an argument.
+
+ If the argument ACTION is GETONE, just download the file (but first
+ get the listing, so that the time-stamp is heeded); if it's GLOBALL,
+ use globbing; if it's GETALL, download the whole directory. */
+static uerr_t
+ftp_retrieve_glob (struct urlinfo *u, ccon *con, int action)
+{
+ struct fileinfo *orig, *start;
+ uerr_t res;
+
+ con->cmd |= LEAVE_PENDING;
+
+ orig = ftp_get_listing (u, con);
+ start = orig;
+ /* First: weed out the files that do not conform to the global
+ rules given in opt.accepts and opt.rejects. */
+ if (opt.accepts || opt.rejects)
+ {
+ struct fileinfo *f = orig;
+
+ while (f)
+ {
+ if (f->type != FT_DIRECTORY && !acceptable (f->name))
+ {
+ logprintf (LOG_VERBOSE, _("Rejecting `%s'.\n"), f->name);
+ f = delelement (f, &start);
+ }
+ else
+ f = f->next;
+ }
+ }
+ /* Now weed out the files that do not match our globbing pattern.
+ If we are dealing with a globbing pattern, that is. */
+ if (*u->file && (action == GLOBALL || action == GETONE))
+ {
+ int matchres = 0;
+ struct fileinfo *f = start;
+
+ while (f)
+ {
+ matchres = fnmatch (u->file, f->name, 0);
+ if (matchres == -1)
+ {
+ logprintf (LOG_NOTQUIET, "%s: %s\n", u->local,
+ strerror (errno));
+ break;
+ }
+ if (matchres == FNM_NOMATCH)
+ f = delelement (f, &start); /* delete the element from the list */
+ else
+ f = f->next; /* leave the element in the list */
+ }
+ if (matchres == -1)
+ {
+ freefileinfo (start);
+ return RETRBADPATTERN;
+ }
+ }
+ res = RETROK;
+ if (start)
+ {
+ /* Just get everything. */
+ ftp_retrieve_list (u, start, con);
+ }
+ else
+ {
+ if (action == GLOBALL)
+ {
+ /* No luck. */
+ /* #### This message SUCKS. We should see what was the
+ reason that nothing was retrieved. */
+ logprintf (LOG_VERBOSE, _("No matches on pattern `%s'.\n"), u->file);
+ }
+ else /* GETONE or GETALL */
+ {
+ /* Let's try retrieving it anyway. */
+ con->st |= ON_YOUR_OWN;
+ res = ftp_loop_internal (u, NULL, con);
+ return res;
+ }
+ }
+ freefileinfo (start);
+ if (opt.quota && opt.downloaded > opt.quota)
+ return QUOTEXC;
+ else
+ /* #### Should we return `res' here? */
+ return RETROK;
+}
+
+/* The wrapper that calls an appropriate routine according to the
+ contents of the URL. Inherently, its capabilities are limited by
+ what can be encoded into a URL. */
+uerr_t
+ftp_loop (struct urlinfo *u, int *dt)
+{
+ ccon con; /* FTP connection */
+ uerr_t res;
+
+ *dt = 0;
+
+ rbuf_uninitialize (&con.rbuf);
+ con.st = ON_YOUR_OWN;
+ res = RETROK; /* in case it's not used */
+
+ /* If the file name is empty, the user probably wants a directory
+ index. We'll provide one, properly HTML-ized. Unless
+ opt.htmlify is 0, of course. :-) */
+ if (!*u->file && !opt.recursive)
+ {
+ struct fileinfo *f = ftp_get_listing (u, &con);
+
+ if (f)
+ {
+ if (opt.htmlify)
+ {
+ char *filename = (opt.output_document
+ ? xstrdup (opt.output_document)
+ : (u->local ? xstrdup (u->local)
+ : url_filename (u)));
+ res = ftp_index (filename, u, f);
+ if (res == FTPOK && opt.verbose)
+ {
+ if (!opt.output_document)
+ {
+ struct stat st;
+ long sz;
+ if (stat (filename, &st) == 0)
+ sz = st.st_size;
+ else
+ sz = -1;
+ logprintf (LOG_NOTQUIET,
+ _("Wrote HTML-ized index to `%s' [%ld].\n"),
+ filename, sz);
+ }
+ else
+ logprintf (LOG_NOTQUIET,
+ _("Wrote HTML-ized index to `%s'.\n"),
+ filename);
+ }
+ free (filename);
+ }
+ freefileinfo (f);
+ }
+ }
+ else
+ {
+ int wild = has_wildcards_p (u->file);
+ if ((opt.ftp_glob && wild) || opt.recursive || opt.timestamping)
+ {
+ /* ftp_retrieve_glob is a catch-all function that gets called
+ if we need globbing, time-stamping or recursion. Its
+ third argument is just what we really need. */
+ ftp_retrieve_glob (u, &con,
+ (opt.ftp_glob && wild) ? GLOBALL : GETONE);
+ }
+ else
+ res = ftp_loop_internal (u, NULL, &con);
+ }
+ if (res == FTPOK)
+ res = RETROK;
+ if (res == RETROK)
+ *dt |= RETROKF;
+ /* If a connection was left, quench it. */
+ if (rbuf_initialized_p (&con.rbuf))
+ CLOSE (RBUF_FD (&con.rbuf));
+ return res;
+}
+
+/* Delete an element from the fileinfo linked list. Returns the
+ address of the next element, or NULL if the list is exhausted. It
+ can modify the start of the list. */
+static struct fileinfo *
+delelement (struct fileinfo *f, struct fileinfo **start)
+{
+ struct fileinfo *prev = f->prev;
+ struct fileinfo *next = f->next;
+
+ free (f->name);
+ FREE_MAYBE (f->linkto);
+ free (f);
+
+ if (next)
+ next->prev = prev;
+ if (prev)
+ prev->next = next;
+ else
+ *start = next;
+ return next;
+}
+
+/* Free the fileinfo linked list of files. */
+static void
+freefileinfo (struct fileinfo *f)
+{
+ while (f)
+ {
+ struct fileinfo *next = f->next;
+ free (f->name);
+ if (f->linkto)
+ free (f->linkto);
+ free (f);
+ f = next;
+ }
+}
--- /dev/null
+/* Declarations for FTP support.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef FTP_H
+#define FTP_H
+
+/* Need it for struct rbuf. */
+#include "rbuf.h"
+
+uerr_t ftp_response PARAMS ((struct rbuf *, char **));
+uerr_t ftp_login PARAMS ((struct rbuf *, const char *, const char *));
+uerr_t ftp_port PARAMS ((struct rbuf *));
+uerr_t ftp_pasv PARAMS ((struct rbuf *, unsigned char *));
+uerr_t ftp_type PARAMS ((struct rbuf *, int));
+uerr_t ftp_cwd PARAMS ((struct rbuf *, const char *));
+uerr_t ftp_retr PARAMS ((struct rbuf *, const char *));
+uerr_t ftp_rest PARAMS ((struct rbuf *, long));
+uerr_t ftp_list PARAMS ((struct rbuf *, const char *));
+
+struct urlinfo;
+
+/* File types. */
+enum ftype
+{
+ FT_PLAINFILE,
+ FT_DIRECTORY,
+ FT_SYMLINK,
+ FT_UNKNOWN
+};
+
+
+/* Globbing (used by ftp_retrieve_glob). */
+enum
+{
+ GLOBALL, GETALL, GETONE
+};
+
+/* Information about one filename in a linked list. */
+struct fileinfo
+{
+ enum ftype type; /* file type */
+ char *name; /* file name */
+ long size; /* file size */
+ long tstamp; /* time-stamp */
+ int perms; /* file permissions */
+ char *linkto; /* link to which file points */
+ struct fileinfo *prev; /* previous... */
+ struct fileinfo *next; /* ...and next structure. */
+};
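+
+/* A sketch (not part of the original source) of walking this doubly
+   linked list, e.g. to count the plain files it holds:
+
+     int count = 0;
+     struct fileinfo *f;
+     for (f = head; f; f = f->next)
+       if (f->type == FT_PLAINFILE)
+         ++count;
+
+   Here `head' is a hypothetical pointer to the first element, such
+   as a list returned by ftp_parse_ls. */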
+
+/* Commands for FTP functions. */
+enum command
+{
+ DO_LOGIN = 0x0001, /* Connect and login to the server. */
+ DO_CWD = 0x0002, /* Change current directory. */
+ DO_RETR = 0x0004, /* Retrieve the file. */
+ DO_LIST = 0x0008, /* Retrieve the directory list. */
+ LEAVE_PENDING = 0x0010 /* Do not close the socket. */
+};
+
+enum fstatus
+{
+ NOTHING = 0x0000, /* Nothing done yet. */
+ ON_YOUR_OWN = 0x0001, /* The ftp_loop_internal sets the
+ defaults. */
+ DONE_CWD = 0x0002 /* The current working directory is
+ correct. */
+};
+
+typedef struct
+{
+ int st; /* connection status */
+ int cmd; /* command code */
+ struct rbuf rbuf; /* control connection buffer */
+ long dltime; /* time of the download */
+} ccon;
+
+struct fileinfo *ftp_parse_ls PARAMS ((const char *));
+uerr_t ftp_loop PARAMS ((struct urlinfo *, int *));
+
+#endif /* FTP_H */
--- /dev/null
+/* Getopt for GNU.
+ NOTE: getopt is now part of the C library, so if you don't know what
+ "Keep this file name-space clean" means, talk to roland@gnu.ai.mit.edu
+ before changing it!
+
+ Copyright (C) 1987, 88, 89, 90, 91, 92, 1993
+ Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by the
+ Free Software Foundation; either version 2, or (at your option) any
+ later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+
+\f
+/* NOTE!!! AIX requires this to be the first thing in the file.
+ Do not put ANYTHING before it! */
+
+#ifdef HAVE_CONFIG_H
+# include <config.h>
+#endif /* HAVE_CONFIG_H */
+#include "wget.h"
+
+#if !__STDC__ && !defined(const) && IN_GCC
+#define const
+#endif
+
+/* This tells Alpha OSF/1 not to define a getopt prototype in <stdio.h>. */
+#ifndef _NO_PROTO
+#define _NO_PROTO
+#endif
+
+#include <stdio.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+
+/* Comment out all this code if we are using the GNU C Library, and are not
+ actually compiling the library itself. This code is part of the GNU C
+ Library, but also included in many other GNU distributions. Compiling
+ and linking in this code is a waste when using the GNU C library
+ (especially if it is a shared library). Rather than having every GNU
+ program understand `configure --with-gnu-libc' and omit the object files,
+ it is simpler to just do this in the source for each such file. */
+
+#if defined (_LIBC) || !defined (__GNU_LIBRARY__)
+
+
+#include <stdlib.h>
+
+/* If GETOPT_COMPAT is defined, `+' as well as `--' can introduce a
+ long-named option. Because this is not POSIX.2 compliant, it is
+ being phased out. */
+/* #define GETOPT_COMPAT */
+
+/* This version of `getopt' appears to the caller like standard Unix `getopt'
+ but it behaves differently for the user, since it allows the user
+ to intersperse the options with the other arguments.
+
+ As `getopt' works, it permutes the elements of ARGV so that,
+ when it is done, all the options precede everything else. Thus
+ all application programs are extended to handle flexible argument order.
+
+ Setting the environment variable POSIXLY_CORRECT disables permutation.
+ Then the behavior is completely standard.
+
+ GNU application programs can use a third alternative mode in which
+ they can distinguish the relative order of options and other arguments. */
+
+#include "getopt.h"
+
+/* For communication from `getopt' to the caller.
+ When `getopt' finds an option that takes an argument,
+ the argument value is returned here.
+ Also, when `ordering' is RETURN_IN_ORDER,
+ each non-option ARGV-element is returned here. */
+
+char *optarg = 0;
+
+/* Index in ARGV of the next element to be scanned.
+ This is used for communication to and from the caller
+ and for communication between successive calls to `getopt'.
+
+ On entry to `getopt', zero means this is the first call; initialize.
+
+ When `getopt' returns EOF, this is the index of the first of the
+ non-option elements that the caller should itself scan.
+
+ Otherwise, `optind' communicates from one call to the next
+ how much of ARGV has been scanned so far. */
+
+/* XXX 1003.2 says this must be 1 before any call. */
+int optind = 0;
+
+/* The next char to be scanned in the option-element
+ in which the last option character we returned was found.
+ This allows us to pick up the scan where we left off.
+
+ If this is zero, or a null string, it means resume the scan
+ by advancing to the next ARGV-element. */
+
+static char *nextchar;
+
+/* Callers store zero here to inhibit the error message
+ for unrecognized options. */
+
+int opterr = 1;
+
+/* Set to an option character which was unrecognized.
+ This must be initialized on some systems to avoid linking in the
+ system's own getopt implementation. */
+
+int optopt = '?';
+
+/* Describe how to deal with options that follow non-option ARGV-elements.
+
+ If the caller did not specify anything,
+ the default is REQUIRE_ORDER if the environment variable
+ POSIXLY_CORRECT is defined, PERMUTE otherwise.
+
+ REQUIRE_ORDER means don't recognize them as options;
+ stop option processing when the first non-option is seen.
+ This is what Unix does.
+ This mode of operation is selected by either setting the environment
+ variable POSIXLY_CORRECT, or using `+' as the first character
+ of the list of option characters.
+
+ PERMUTE is the default. We permute the contents of ARGV as we scan,
+ so that eventually all the non-options are at the end. This allows options
+ to be given in any order, even with programs that were not written to
+ expect this.
+
+ RETURN_IN_ORDER is an option available to programs that were written
+ to expect options and other ARGV-elements in any order and that care about
+ the ordering of the two. We describe each non-option ARGV-element
+ as if it were the argument of an option with character code 1.
+ Using `-' as the first character of the list of option characters
+ selects this mode of operation.
+
+ The special argument `--' forces an end of option-scanning regardless
+ of the value of `ordering'. In the case of RETURN_IN_ORDER, only
+ `--' can cause `getopt' to return EOF with `optind' != ARGC. */
+
+static enum
+{
+ REQUIRE_ORDER, PERMUTE, RETURN_IN_ORDER
+} ordering;
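+
+/* For example (a hypothetical command line, not from this file):
+   given `prog -a foo -b' and optstring "ab", PERMUTE returns `a'
+   and `b' and leaves `foo' at the end of ARGV; REQUIRE_ORDER stops
+   at `foo' and returns EOF there; and with optstring "-ab",
+   RETURN_IN_ORDER returns `foo' as the argument of an option with
+   character code 1. */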
+\f
+#ifdef __GNU_LIBRARY__
+/* We want to avoid inclusion of string.h with non-GNU libraries
+ because there are many ways it can cause trouble.
+ On some systems, it contains special magic macros that don't work
+ in GCC. */
+#include <string.h>
+#define my_index strchr
+#define my_bcopy(src, dst, n) memcpy ((dst), (src), (n))
+#else
+
+/* Avoid depending on library functions or files
+ whose names are inconsistent. */
+
+char *getenv ();
+
+static char *
+my_index (const char *str, int chr)
+{
+ while (*str)
+ {
+ if (*str == chr)
+ return (char *) str;
+ str++;
+ }
+ return 0;
+}
+
+static void
+my_bcopy (const char *from, char *to, int size)
+{
+ int i;
+ for (i = 0; i < size; i++)
+ to[i] = from[i];
+}
+#endif /* GNU C library. */
+\f
+/* Handle permutation of arguments. */
+
+/* Describe the part of ARGV that contains non-options that have
+ been skipped. `first_nonopt' is the index in ARGV of the first of them;
+ `last_nonopt' is the index after the last of them. */
+
+static int first_nonopt;
+static int last_nonopt;
+
+/* Exchange two adjacent subsequences of ARGV.
+ One subsequence is elements [first_nonopt,last_nonopt)
+ which contains all the non-options that have been skipped so far.
+ The other is elements [last_nonopt,optind), which contains all
+ the options processed since those non-options were skipped.
+
+ `first_nonopt' and `last_nonopt' are relocated so that they describe
+ the new indices of the non-options in ARGV after they are moved. */
+
+static void
+exchange (char **argv)
+{
+ int nonopts_size = (last_nonopt - first_nonopt) * sizeof (char *);
+ char **temp = (char **) alloca (nonopts_size);
+
+ /* Interchange the two blocks of data in ARGV. */
+
+ my_bcopy ((char *) &argv[first_nonopt], (char *) temp, nonopts_size);
+ my_bcopy ((char *) &argv[last_nonopt], (char *) &argv[first_nonopt],
+ (optind - last_nonopt) * sizeof (char *));
+ my_bcopy ((char *) temp,
+ (char *) &argv[first_nonopt + optind - last_nonopt],
+ nonopts_size);
+
+ /* Update records for the slots the non-options now occupy. */
+
+ first_nonopt += (optind - last_nonopt);
+ last_nonopt = optind;
+}
+\f
+/* Scan elements of ARGV (whose length is ARGC) for option characters
+ given in OPTSTRING.
+
+ If an element of ARGV starts with '-', and is not exactly "-" or "--",
+ then it is an option element. The characters of this element
+ (aside from the initial '-') are option characters. If `getopt'
+ is called repeatedly, it returns successively each of the option characters
+ from each of the option elements.
+
+ If `getopt' finds another option character, it returns that character,
+ updating `optind' and `nextchar' so that the next call to `getopt' can
+ resume the scan with the following option character or ARGV-element.
+
+ If there are no more option characters, `getopt' returns `EOF'.
+ Then `optind' is the index in ARGV of the first ARGV-element
+ that is not an option. (The ARGV-elements have been permuted
+ so that those that are not options now come last.)
+
+ OPTSTRING is a string containing the legitimate option characters.
+ If an option character is seen that is not listed in OPTSTRING,
+ return '?' after printing an error message. If you set `opterr' to
+ zero, the error message is suppressed but we still return '?'.
+
+ If a char in OPTSTRING is followed by a colon, that means it wants an arg,
+ so the following text in the same ARGV-element, or the text of the following
+ ARGV-element, is returned in `optarg'. Two colons mean an option that
+ wants an optional arg; if there is text in the current ARGV-element,
+ it is returned in `optarg', otherwise `optarg' is set to zero.
+
+ If OPTSTRING starts with `-' or `+', it requests different methods of
+ handling the non-option ARGV-elements.
+ See the comments about RETURN_IN_ORDER and REQUIRE_ORDER, above.
+
+ Long-named options begin with `--' instead of `-'.
+ Their names may be abbreviated as long as the abbreviation is unique
+ or is an exact match for some defined option. If they have an
+ argument, it follows the option name in the same ARGV-element, separated
+ from the option name by a `=', or else in the next ARGV-element.
+ When `getopt' finds a long-named option, it returns 0 if that option's
+ `flag' field is nonzero, the value of the option's `val' field
+ if the `flag' field is zero.
+
+ The elements of ARGV aren't really const, because we permute them.
+ But we pretend they're const in the prototype to be compatible
+ with other systems.
+
+ LONGOPTS is a vector of `struct option' terminated by an
+ element containing a name which is zero.
+
+ LONGIND returns the index in LONGOPT of the long-named option found.
+ It is only valid when a long-named option has been found by the most
+ recent call.
+
+ If LONG_ONLY is nonzero, '-' as well as '--' can introduce
+ long-named options. */
+
+int
+_getopt_internal (int argc, char *const *argv, const char *optstring,
+ const struct option *longopts, int *longind, int long_only)
+{
+ int option_index;
+
+ optarg = 0;
+
+ /* Initialize the internal data when the first call is made.
+ Start processing options with ARGV-element 1 (since ARGV-element 0
+ is the program name); the sequence of previously skipped
+ non-option ARGV-elements is empty. */
+
+ if (optind == 0)
+ {
+ first_nonopt = last_nonopt = optind = 1;
+
+ nextchar = NULL;
+
+ /* Determine how to handle the ordering of options and nonoptions. */
+
+ if (optstring[0] == '-')
+ {
+ ordering = RETURN_IN_ORDER;
+ ++optstring;
+ }
+ else if (optstring[0] == '+')
+ {
+ ordering = REQUIRE_ORDER;
+ ++optstring;
+ }
+ else if (getenv ("POSIXLY_CORRECT") != NULL)
+ ordering = REQUIRE_ORDER;
+ else
+ ordering = PERMUTE;
+ }
+
+ if (nextchar == NULL || *nextchar == '\0')
+ {
+ if (ordering == PERMUTE)
+ {
+ /* If we have just processed some options following some non-options,
+ exchange them so that the options come first. */
+
+ if (first_nonopt != last_nonopt && last_nonopt != optind)
+ exchange ((char **) argv);
+ else if (last_nonopt != optind)
+ first_nonopt = optind;
+
+ /* Now skip any additional non-options
+ and extend the range of non-options previously skipped. */
+
+ while (optind < argc
+ && (argv[optind][0] != '-' || argv[optind][1] == '\0')
+#ifdef GETOPT_COMPAT
+ && (longopts == NULL
+ || argv[optind][0] != '+' || argv[optind][1] == '\0')
+#endif /* GETOPT_COMPAT */
+ )
+ optind++;
+ last_nonopt = optind;
+ }
+
+ /* Special ARGV-element `--' means premature end of options.
+ Skip it like a null option,
+ then exchange with previous non-options as if it were an option,
+ then skip everything else like a non-option. */
+
+ if (optind != argc && !strcmp (argv[optind], "--"))
+ {
+ optind++;
+
+ if (first_nonopt != last_nonopt && last_nonopt != optind)
+ exchange ((char **) argv);
+ else if (first_nonopt == last_nonopt)
+ first_nonopt = optind;
+ last_nonopt = argc;
+
+ optind = argc;
+ }
+
+ /* If we have done all the ARGV-elements, stop the scan
+ and back over any non-options that we skipped and permuted. */
+
+ if (optind == argc)
+ {
+ /* Set the next-arg-index to point at the non-options
+ that we previously skipped, so the caller will digest them. */
+ if (first_nonopt != last_nonopt)
+ optind = first_nonopt;
+ return EOF;
+ }
+
+ /* If we have come to a non-option and did not permute it,
+ either stop the scan or describe it to the caller and pass it by. */
+
+ if ((argv[optind][0] != '-' || argv[optind][1] == '\0')
+#ifdef GETOPT_COMPAT
+ && (longopts == NULL
+ || argv[optind][0] != '+' || argv[optind][1] == '\0')
+#endif /* GETOPT_COMPAT */
+ )
+ {
+ if (ordering == REQUIRE_ORDER)
+ return EOF;
+ optarg = argv[optind++];
+ return 1;
+ }
+
+ /* We have found another option-ARGV-element.
+ Start decoding its characters. */
+
+ nextchar = (argv[optind] + 1
+ + (longopts != NULL && argv[optind][1] == '-'));
+ }
+
+ if (longopts != NULL
+ && ((argv[optind][0] == '-'
+ && (argv[optind][1] == '-' || long_only))
+#ifdef GETOPT_COMPAT
+ || argv[optind][0] == '+'
+#endif /* GETOPT_COMPAT */
+ ))
+ {
+ const struct option *p;
+ char *s = nextchar;
+ int exact = 0;
+ int ambig = 0;
+ const struct option *pfound = NULL;
+ int indfound;
+
+ indfound = 0; /* To silence the compiler. */
+
+ while (*s && *s != '=')
+ s++;
+
+ /* Test all options for either exact match or abbreviated matches. */
+ for (p = longopts, option_index = 0; p->name;
+ p++, option_index++)
+ if (!strncmp (p->name, nextchar, s - nextchar))
+ {
+ if (s - nextchar == strlen (p->name))
+ {
+ /* Exact match found. */
+ pfound = p;
+ indfound = option_index;
+ exact = 1;
+ break;
+ }
+ else if (pfound == NULL)
+ {
+ /* First nonexact match found. */
+ pfound = p;
+ indfound = option_index;
+ }
+ else
+ /* Second nonexact match found. */
+ ambig = 1;
+ }
+
+ if (ambig && !exact)
+ {
+ if (opterr)
+ fprintf (stderr, _("%s: option `%s' is ambiguous\n"),
+ exec_name, argv[optind]);
+ nextchar += strlen (nextchar);
+ optind++;
+ return '?';
+ }
+
+ if (pfound != NULL)
+ {
+ option_index = indfound;
+ optind++;
+ if (*s)
+ {
+ /* Don't test has_arg with >, because some C compilers don't
+ allow it to be used on enums. */
+ if (pfound->has_arg)
+ optarg = s + 1;
+ else
+ {
+ if (opterr)
+ {
+ if (argv[optind - 1][1] == '-')
+ /* --option */
+ fprintf (stderr,
+ _("%s: option `--%s' doesn't allow an argument\n"),
+ exec_name, pfound->name);
+ else
+ /* +option or -option */
+ fprintf (stderr,
+ _("%s: option `%c%s' doesn't allow an argument\n"),
+ exec_name, argv[optind - 1][0], pfound->name);
+ }
+ nextchar += strlen (nextchar);
+ return '?';
+ }
+ }
+ else if (pfound->has_arg == 1)
+ {
+ if (optind < argc)
+ optarg = argv[optind++];
+ else
+ {
+ if (opterr)
+ fprintf (stderr,
+ _("%s: option `%s' requires an argument\n"),
+ exec_name, argv[optind - 1]);
+ nextchar += strlen (nextchar);
+ return optstring[0] == ':' ? ':' : '?';
+ }
+ }
+ nextchar += strlen (nextchar);
+ if (longind != NULL)
+ *longind = option_index;
+ if (pfound->flag)
+ {
+ *(pfound->flag) = pfound->val;
+ return 0;
+ }
+ return pfound->val;
+ }
+ /* Can't find it as a long option. If this is not getopt_long_only,
+ or the option starts with '--' or is not a valid short
+ option, then it's an error.
+ Otherwise interpret it as a short option. */
+ if (!long_only || argv[optind][1] == '-'
+#ifdef GETOPT_COMPAT
+ || argv[optind][0] == '+'
+#endif /* GETOPT_COMPAT */
+ || my_index (optstring, *nextchar) == NULL)
+ {
+ if (opterr)
+ {
+ if (argv[optind][1] == '-')
+ /* --option */
+ fprintf (stderr, _("%s: unrecognized option `--%s'\n"),
+ exec_name, nextchar);
+ else
+ /* +option or -option */
+ fprintf (stderr, _("%s: unrecognized option `%c%s'\n"),
+ exec_name, argv[optind][0], nextchar);
+ }
+ nextchar = (char *) "";
+ optind++;
+ return '?';
+ }
+ }
+
+ /* Look at and handle the next option-character. */
+
+ {
+ char c = *nextchar++;
+ char *temp = my_index (optstring, c);
+
+ /* Increment `optind' when we start to process its last character. */
+ if (*nextchar == '\0')
+ ++optind;
+
+ if (temp == NULL || c == ':')
+ {
+ if (opterr)
+ {
+#if 0
+ if (c < 040 || c >= 0177)
+ fprintf (stderr, "%s: unrecognized option, character code 0%o\n",
+ exec_name, c);
+ else
+ fprintf (stderr, "%s: unrecognized option `-%c'\n", exec_name, c);
+#else
+ /* 1003.2 specifies the format of this message. */
+ fprintf (stderr, _("%s: illegal option -- %c\n"), exec_name, c);
+#endif
+ }
+ optopt = c;
+ return '?';
+ }
+ if (temp[1] == ':')
+ {
+ if (temp[2] == ':')
+ {
+ /* This is an option that accepts an argument optionally. */
+ if (*nextchar != '\0')
+ {
+ optarg = nextchar;
+ optind++;
+ }
+ else
+ optarg = 0;
+ nextchar = NULL;
+ }
+ else
+ {
+ /* This is an option that requires an argument. */
+ if (*nextchar != '\0')
+ {
+ optarg = nextchar;
+ /* If we end this ARGV-element by taking the rest as an arg,
+ we must advance to the next element now. */
+ optind++;
+ }
+ else if (optind == argc)
+ {
+ if (opterr)
+ {
+#if 0
+ fprintf (stderr, "%s: option `-%c' requires an argument\n",
+ exec_name, c);
+#else
+ /* 1003.2 specifies the format of this message. */
+ fprintf (stderr, _("%s: option requires an argument -- %c\n"),
+ exec_name, c);
+#endif
+ }
+ optopt = c;
+ if (optstring[0] == ':')
+ c = ':';
+ else
+ c = '?';
+ }
+ else
+ /* We already incremented `optind' once;
+ increment it again when taking next ARGV-elt as argument. */
+ optarg = argv[optind++];
+ nextchar = NULL;
+ }
+ }
+ return c;
+ }
+}
+
+/* Calls internal getopt function to enable long option names. */
+int
+getopt_long (int argc, char *const *argv, const char *shortopts,
+ const struct option *longopts, int *longind)
+{
+ return _getopt_internal (argc, argv, shortopts, longopts, longind, 0);
+}
+
+int
+getopt (int argc, char *const *argv, const char *optstring)
+{
+ return _getopt_internal (argc, argv, optstring,
+ (const struct option *) 0,
+ (int *) 0,
+ 0);
+}
+
+#endif /* _LIBC or not __GNU_LIBRARY__. */
+\f
+#ifdef TEST
+
+/* Compile with -DTEST to make an executable for use in testing
+ the above definition of `getopt'. */
+
+int
+main (argc, argv)
+ int argc;
+ char **argv;
+{
+ int c;
+ int digit_optind = 0;
+
+ while (1)
+ {
+ int this_option_optind = optind ? optind : 1;
+
+ c = getopt (argc, argv, "abc:d:0123456789");
+ if (c == EOF)
+ break;
+
+ switch (c)
+ {
+ case '0':
+ case '1':
+ case '2':
+ case '3':
+ case '4':
+ case '5':
+ case '6':
+ case '7':
+ case '8':
+ case '9':
+ if (digit_optind != 0 && digit_optind != this_option_optind)
+ printf ("digits occur in two different argv-elements.\n");
+ digit_optind = this_option_optind;
+ printf ("option %c\n", c);
+ break;
+
+ case 'a':
+ printf ("option a\n");
+ break;
+
+ case 'b':
+ printf ("option b\n");
+ break;
+
+ case 'c':
+ printf ("option c with value `%s'\n", optarg);
+ break;
+
+ case '?':
+ break;
+
+ default:
+ printf ("?? getopt returned character code 0%o ??\n", c);
+ }
+ }
+
+ if (optind < argc)
+ {
+ printf ("non-option ARGV-elements: ");
+ while (optind < argc)
+ printf ("%s ", argv[optind++]);
+ printf ("\n");
+ }
+
+ exit (0);
+}
+
+#endif /* TEST */
--- /dev/null
+/* Declarations for getopt.
+ Copyright (C) 1989, 1990, 1991, 1992, 1993 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by the
+ Free Software Foundation; either version 2, or (at your option) any
+ later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef _GETOPT_H
+#define _GETOPT_H 1
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* For communication from `getopt' to the caller.
+ When `getopt' finds an option that takes an argument,
+ the argument value is returned here.
+ Also, when `ordering' is RETURN_IN_ORDER,
+ each non-option ARGV-element is returned here. */
+
+extern char *optarg;
+
+/* Index in ARGV of the next element to be scanned.
+ This is used for communication to and from the caller
+ and for communication between successive calls to `getopt'.
+
+ On entry to `getopt', zero means this is the first call; initialize.
+
+ When `getopt' returns EOF, this is the index of the first of the
+ non-option elements that the caller should itself scan.
+
+ Otherwise, `optind' communicates from one call to the next
+ how much of ARGV has been scanned so far. */
+
+extern int optind;
+
+/* Callers store zero here to inhibit the error message `getopt' prints
+ for unrecognized options. */
+
+extern int opterr;
+
+/* Set to an option character which was unrecognized. */
+
+extern int optopt;
+
+/* Describe the long-named options requested by the application.
+ The LONG_OPTIONS argument to getopt_long or getopt_long_only is a vector
+ of `struct option' terminated by an element containing a name which is
+ zero.
+
+ The field `has_arg' is:
+ no_argument (or 0) if the option does not take an argument,
+ required_argument (or 1) if the option requires an argument,
+ optional_argument (or 2) if the option takes an optional argument.
+
+ If the field `flag' is not NULL, it points to a variable that is set
+ to the value given in the field `val' when the option is found, but
+ left unchanged if the option is not found.
+
+ To have a long-named option do something other than set an `int' to
+ a compiled-in constant, such as set a value from `optarg', set the
+ option's `flag' field to zero and its `val' field to a nonzero
+ value (the equivalent single-letter option character, if there is
+ one). For long options that have a zero `flag' field, `getopt'
+ returns the contents of the `val' field. */
+
+struct option
+{
+#if __STDC__
+ const char *name;
+#else
+ char *name;
+#endif
+ /* has_arg can't be an enum because some compilers complain about
+ type mismatches in all the code that assumes it is an int. */
+ int has_arg;
+ int *flag;
+ int val;
+};
+
+/* Names for the values of the `has_arg' field of `struct option'. */
+
+#define no_argument 0
+#define required_argument 1
+#define optional_argument 2
+
+#if __STDC__
+#if defined(__GNU_LIBRARY__)
+/* Many other libraries have conflicting prototypes for getopt, with
+ differences in the consts, in stdlib.h. To avoid compilation
+ errors, only prototype getopt for the GNU C library. */
+extern int getopt (int argc, char *const *argv, const char *shortopts);
+#else /* not __GNU_LIBRARY__ */
+extern int getopt ();
+#endif /* not __GNU_LIBRARY__ */
+extern int getopt_long (int argc, char *const *argv, const char *shortopts,
+ const struct option *longopts, int *longind);
+extern int getopt_long_only (int argc, char *const *argv,
+ const char *shortopts,
+ const struct option *longopts, int *longind);
+
+/* Internal only. Users should not call this directly. */
+extern int _getopt_internal (int argc, char *const *argv,
+ const char *shortopts,
+ const struct option *longopts, int *longind,
+ int long_only);
+#else /* not __STDC__ */
+extern int getopt ();
+extern int getopt_long ();
+extern int getopt_long_only ();
+
+extern int _getopt_internal ();
+#endif /* not __STDC__ */
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _GETOPT_H */
--- /dev/null
+/* Generic support for headers.
+ Copyright (C) 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <ctype.h>
+
+#include "wget.h"
+#include "connect.h"
+#include "rbuf.h"
+#include "headers.h"
+
+/* This file contains the generic routines for working with headers.
+ Currently they are used only by HTTP in http.c, but they can be
+ used by anything that cares about RFC822-style headers.
+
+   A header is defined in RFC2068, as quoted below.  Note that this
+ definition is not HTTP-specific -- it is virtually
+ indistinguishable from the one given in RFC822 or RFC1036.
+
+ message-header = field-name ":" [ field-value ] CRLF
+
+ field-name = token
+ field-value = *( field-content | LWS )
+
+ field-content = <the OCTETs making up the field-value
+ and consisting of either *TEXT or combinations
+ of token, tspecials, and quoted-string>
+
+ The public functions are header_get() and header_process(), which
+ see. */
+
+\f
+/* Get a header from read-buffer RBUF and return it in *HDR.
+
+ As defined in RFC2068 and elsewhere, a header can be folded into
+ multiple lines if the continuation line begins with a space or
+ horizontal TAB. Also, this function will accept a header ending
+ with just LF instead of CRLF.
+
+ The header may be of arbitrary length; the function will allocate
+ as much memory as necessary for it to fit. It need not contain a
+   `:', so you can use it to retrieve, say, the HTTP status line.
+
+ The trailing CRLF or LF are stripped from the header, and it is
+ zero-terminated. #### Is this well-behaved? */
+int
+header_get (struct rbuf *rbuf, char **hdr, enum header_get_flags flags)
+{
+ int i;
+ int bufsize = 80;
+
+ *hdr = (char *)xmalloc (bufsize);
+ for (i = 0; 1; i++)
+ {
+ int res;
+ /* #### Use DO_REALLOC? */
+ if (i > bufsize - 1)
+ *hdr = (char *)xrealloc (*hdr, (bufsize <<= 1));
+ res = RBUF_READCHAR (rbuf, *hdr + i);
+ if (res == 1)
+ {
+ if ((*hdr)[i] == '\n')
+ {
+ if (!((flags & HG_NO_CONTINUATIONS)
+ || i == 0
+ || (i == 1 && (*hdr)[0] == '\r')))
+ {
+ char next;
+ /* If the header is non-empty, we need to check if
+ it continues on to the other line. We do that by
+ peeking at the next character. */
+ res = rbuf_peek (rbuf, &next);
+ if (res == 0)
+ return HG_EOF;
+ else if (res == -1)
+ return HG_ERROR;
+ /* If the next character is HT or SP, just continue. */
+ if (next == '\t' || next == ' ')
+ continue;
+ }
+ /* The header ends. */
+ (*hdr)[i] = '\0';
+ /* Get rid of '\r'. */
+ if (i > 0 && (*hdr)[i - 1] == '\r')
+ (*hdr)[i - 1] = '\0';
+ break;
+ }
+ }
+ else if (res == 0)
+ return HG_EOF;
+ else
+ return HG_ERROR;
+ }
+ DEBUGP (("%s\n", *hdr));
+ return HG_OK;
+}
+\f
+/* Check whether HEADER begins with NAME and, if yes, skip the `:' and
+ the whitespace, and call PROCFUN with the arguments of HEADER's
+ contents (after the `:' and space) and ARG. Otherwise, return 0. */
+int
+header_process (const char *header, const char *name,
+ int (*procfun) (const char *, void *),
+ void *arg)
+{
+ /* Check whether HEADER matches NAME. */
+ while (*name && (tolower (*name) == tolower (*header)))
+ ++name, ++header;
+ if (*name || *header++ != ':')
+ return 0;
+
+ header += skip_lws (header);
+
+ return ((*procfun) (header, arg));
+}
+\f
+/* Helper functions for use with header_process(). */
+
+/* Extract a long integer from HEADER and store it to CLOSURE. If an
+ error is encountered, return 0, else 1. */
+int
+header_extract_number (const char *header, void *closure)
+{
+ const char *p = header;
+ long result;
+
+ for (result = 0; ISDIGIT (*p); p++)
+ result = 10 * result + (*p - '0');
+ if (*p)
+ return 0;
+
+ *(long *)closure = result;
+ return 1;
+}
+
+/* Strdup HEADER, and store the resulting pointer into CLOSURE. */
+int
+header_strdup (const char *header, void *closure)
+{
+ *(char **)closure = xstrdup (header);
+ return 1;
+}
+
+/* Skip LWS (linear white space), if present. Returns number of
+ characters to skip. */
+int
+skip_lws (const char *string)
+{
+ const char *p = string;
+
+ while (*p == ' ' || *p == '\t' || *p == '\r' || *p == '\n')
+ ++p;
+ return p - string;
+}
--- /dev/null
+/* Declarations for `headers.c'.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+enum {
+ HG_OK, HG_ERROR, HG_EOF
+};
+
+enum header_get_flags { HG_NONE = 0,
+ HG_NO_CONTINUATIONS = 0x2 };
+
+int header_get PARAMS ((struct rbuf *, char **, enum header_get_flags));
+int header_process PARAMS ((const char *, const char *,
+ int (*) (const char *, void *),
+ void *));
+
+int header_extract_number PARAMS ((const char *, void *));
+int header_strdup PARAMS ((const char *, void *));
+
+int skip_lws PARAMS ((const char *));
--- /dev/null
+/* Dealing with host names.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <ctype.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <assert.h>
+#include <sys/types.h>
+
+#ifdef WINDOWS
+# include <winsock.h>
+#else
+# include <sys/socket.h>
+# include <netinet/in.h>
+# include <arpa/inet.h>
+# include <netdb.h>
+#endif /* WINDOWS */
+
+#ifdef HAVE_SYS_UTSNAME_H
+# include <sys/utsname.h>
+#endif
+#include <errno.h>
+
+#include "wget.h"
+#include "utils.h"
+#include "host.h"
+#include "url.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+/* Host list entry */
+struct host
+{
+  /* Host's symbolic name, as encountered at the time of first
+ inclusion, e.g. "fly.cc.fer.hr". */
+ char *hostname;
+  /* Host's "real" name, i.e. its IP address, written out in the
+     ASCII form N.N.N.N, e.g. "161.53.70.130". */
+ char *realname;
+ /* More than one HOSTNAME can correspond to the same REALNAME. For
+ our purposes, the canonical name of the host is its HOSTNAME when
+ it was first encountered. This entry is said to have QUALITY. */
+ int quality;
+ /* Next entry in the list. */
+ struct host *next;
+};
+
+static struct host *hlist;
+
+static struct host *add_hlist PARAMS ((struct host *, const char *,
+ const char *, int));
+
+/* The same as gethostbyname, but supports internet addresses of the
+ form `N.N.N.N'. */
+struct hostent *
+ngethostbyname (const char *name)
+{
+ struct hostent *hp;
+ unsigned long addr;
+
+ addr = (unsigned long)inet_addr (name);
+ if ((int)addr != -1)
+ hp = gethostbyaddr ((char *)&addr, sizeof (addr), AF_INET);
+ else
+ hp = gethostbyname (name);
+ return hp;
+}
+
+/* Search for HOST in the linked list L, by hostname. Return the
+ entry, if found, or NULL. The search is case-insensitive. */
+static struct host *
+search_host (struct host *l, const char *host)
+{
+ for (; l; l = l->next)
+ if (strcasecmp (l->hostname, host) == 0)
+ return l;
+ return NULL;
+}
+
+/* Like search_host, but searches by address. */
+static struct host *
+search_address (struct host *l, const char *address)
+{
+ for (; l; l = l->next)
+ {
+ int cmp = strcmp (l->realname, address);
+ if (cmp == 0)
+ return l;
+ else if (cmp > 0)
+ return NULL;
+ }
+ return NULL;
+}
+
+/* Store the address of HOSTNAME, internet-style, to WHERE. First
+ check for it in the host list, and (if not found), use
+ ngethostbyname to get it.
+
+   Return 1 if the address was successfully determined, 0 otherwise. */
+int
+store_hostaddress (unsigned char *where, const char *hostname)
+{
+ struct host *t;
+ unsigned long addr;
+ struct hostent *hptr;
+ struct in_addr in;
+ char *inet_s;
+
+ /* If the address is of the form d.d.d.d, there will be no trouble
+ with it. */
+ addr = (unsigned long)inet_addr (hostname);
+ if ((int)addr == -1)
+ {
+ /* If it is not of that form, try to find it in the cache. */
+ t = search_host (hlist, hostname);
+ if (t)
+ addr = (unsigned long)inet_addr (t->realname);
+ }
+ /* If we have the numeric address, just store it. */
+ if ((int)addr != -1)
+ {
+ /* This works on both little and big endian architecture, as
+ inet_addr returns the address in the proper order. It
+ appears to work on 64-bit machines too. */
+ memcpy (where, &addr, 4);
+ return 1;
+ }
+ /* Since all else has failed, let's try gethostbyname(). Note that
+ we use gethostbyname() rather than ngethostbyname(), because we
+ *know* the address is not numerical. */
+ hptr = gethostbyname (hostname);
+ if (!hptr)
+ return 0;
+ /* Copy the address of the host to socket description. */
+ memcpy (where, hptr->h_addr_list[0], hptr->h_length);
+  /* Now that we're here, we might as well cache the hostname for
+ future use, as in realhost(). First, we have to look for it by
+ address to know if it's already in the cache by another name. */
+
+ /* Originally, we copied to in.s_addr, but it appears to be missing
+ on some systems. */
+ memcpy (&in, *hptr->h_addr_list, sizeof (in));
+ STRDUP_ALLOCA (inet_s, inet_ntoa (in));
+ t = search_address (hlist, inet_s);
+ if (t) /* Found in the list, as realname. */
+ {
+ /* Set the default, 0 quality. */
+ hlist = add_hlist (hlist, hostname, inet_s, 0);
+ return 1;
+ }
+ /* Since this is really the first time this host is encountered,
+ set quality to 1. */
+ hlist = add_hlist (hlist, hostname, inet_s, 1);
+ return 1;
+}
+
+/* Add a host to the host list. The list is sorted by addresses. For
+ equal addresses, the entries with quality should bubble towards the
+ beginning of the list. */
+static struct host *
+add_hlist (struct host *l, const char *nhost, const char *nreal, int quality)
+{
+ struct host *t, *old, *beg;
+
+ /* The entry goes to the beginning of the list if the list is empty
+ or the order requires it. */
+ if (!l || (strcmp (nreal, l->realname) < 0))
+ {
+ t = (struct host *)xmalloc (sizeof (struct host));
+ t->hostname = xstrdup (nhost);
+ t->realname = xstrdup (nreal);
+ t->quality = quality;
+ t->next = l;
+ return t;
+ }
+
+ beg = l;
+  /* Seek to the one-before-the-last element. */
+ while (l->next)
+ {
+ int cmp;
+ old = l;
+ l = l->next;
+ cmp = strcmp (nreal, l->realname);
+ if (cmp >= 0)
+ continue;
+ /* If the next list element is greater than s, put s between the
+ current and the next list element. */
+ t = (struct host *)xmalloc (sizeof (struct host));
+ old->next = t;
+ t->next = l;
+ t->hostname = xstrdup (nhost);
+ t->realname = xstrdup (nreal);
+ t->quality = quality;
+ return beg;
+ }
+ t = (struct host *)xmalloc (sizeof (struct host));
+ t->hostname = xstrdup (nhost);
+ t->realname = xstrdup (nreal);
+ t->quality = quality;
+ /* Insert the new element after the last element. */
+ l->next = t;
+ t->next = NULL;
+ return beg;
+}
+
+/* Determine the "real" name of HOST, as perceived by Wget. If HOST
+   is referenced by more than one name, the "real" name is considered
+ be the first one encountered in the past.
+
+ If the host cannot be found in the list of already dealt-with
+ hosts, try with its INET address. If this fails too, add it to the
+ list. The routine does not call gethostbyname twice for the same
+ host if it can possibly avoid it. */
+char *
+realhost (const char *host)
+{
+ struct host *l;
+ struct in_addr in;
+ struct hostent *hptr;
+ char *inet_s;
+
+ DEBUGP (("Checking for %s.\n", host));
+ /* Look for the host, looking by the host name. */
+ l = search_host (hlist, host);
+ if (l && l->quality) /* Found it with quality */
+ {
+ DEBUGP (("%s was already used, by that name.\n", host));
+ /* Here we return l->hostname, not host, because of the possible
+ case differences (e.g. jaGOR.srce.hr and jagor.srce.hr are
+	 the same), but we want the one that came first. */
+ return xstrdup (l->hostname);
+ }
+ else if (!l) /* Not found, with or without quality */
+ {
+ /* The fact that gethostbyname will get called makes it
+ necessary to store it to the list, to ensure that
+ gethostbyname will not be called twice for the same string.
+ However, the quality argument must be set appropriately.
+
+ Note that add_hlist must be called *after* the realname
+	 search, or the quality would always be set to 0. */
+ DEBUGP (("This is the first time I hear about host %s by that name.\n",
+ host));
+ hptr = ngethostbyname (host);
+ if (!hptr)
+ return xstrdup (host);
+ /* Originally, we copied to in.s_addr, but it appears to be
+ missing on some systems. */
+ memcpy (&in, *hptr->h_addr_list, sizeof (in));
+ STRDUP_ALLOCA (inet_s, inet_ntoa (in));
+ }
+ else /* Found, without quality */
+ {
+ /* This case happens when host is on the list,
+ but not as first entry (the one with quality).
+ Then we just get its INET address and pick
+ up the first entry with quality. */
+ DEBUGP (("We've dealt with host %s, but under the name %s.\n",
+ host, l->realname));
+ STRDUP_ALLOCA (inet_s, l->realname);
+ }
+
+ /* Now we certainly have the INET address. The following loop is
+ guaranteed to pick either an entry with quality (because it is
+ the first one), or none at all. */
+ l = search_address (hlist, inet_s);
+ if (l) /* Found in the list, as realname. */
+ {
+ /* Set the default, 0 quality. */
+ hlist = add_hlist (hlist, host, inet_s, 0);
+ return xstrdup (l->hostname);
+ }
+ /* Since this is really the first time this host is encountered,
+ set quality to 1. */
+ hlist = add_hlist (hlist, host, inet_s, 1);
+ return xstrdup (host);
+}
+
+/* Compare two hostnames (out of URL-s if the arguments are URL-s),
+ taking care of aliases. It uses realhost() to determine a unique
+   hostname for each of the two hosts.  If opt.simple_check is set,
+   only a case-insensitive string comparison of the names is done. */
+int
+same_host (const char *u1, const char *u2)
+{
+ const char *s;
+ char *p1, *p2;
+ char *real1, *real2;
+
+ /* Skip protocol, if present. */
+ u1 += skip_url (u1);
+ u2 += skip_url (u2);
+ u1 += skip_proto (u1);
+ u2 += skip_proto (u2);
+
+  /* Skip username and password, if present. */
+ u1 += skip_uname (u1);
+ u2 += skip_uname (u2);
+
+ for (s = u1; *u1 && *u1 != '/' && *u1 != ':'; u1++);
+ p1 = strdupdelim (s, u1);
+ for (s = u2; *u2 && *u2 != '/' && *u2 != ':'; u2++);
+ p2 = strdupdelim (s, u2);
+ DEBUGP (("Comparing hosts %s and %s...\n", p1, p2));
+ if (strcasecmp (p1, p2) == 0)
+ {
+ free (p1);
+ free (p2);
+ DEBUGP (("They are quite alike.\n"));
+ return 1;
+ }
+ else if (opt.simple_check)
+ {
+ free (p1);
+ free (p2);
+ DEBUGP (("Since checking is simple, I'd say they are not the same.\n"));
+ return 0;
+ }
+ real1 = realhost (p1);
+ real2 = realhost (p2);
+ free (p1);
+ free (p2);
+ if (strcasecmp (real1, real2) == 0)
+ {
+ DEBUGP (("They are alike, after realhost()->%s.\n", real1));
+ free (real1);
+ free (real2);
+ return 1;
+ }
+ else
+ {
+ DEBUGP (("They are not the same (%s, %s).\n", real1, real2));
+ free (real1);
+ free (real2);
+ return 0;
+ }
+}
+
+/* Determine whether a URL is acceptable to be followed, according to
+ a list of domains to accept. */
+int
+accept_domain (struct urlinfo *u)
+{
+ assert (u->host != NULL);
+ if (opt.domains)
+ {
+ if (!sufmatch ((const char **)opt.domains, u->host))
+ return 0;
+ }
+ if (opt.exclude_domains)
+ {
+ if (sufmatch ((const char **)opt.exclude_domains, u->host))
+ return 0;
+ }
+ return 1;
+}
+
+/* Check whether WHAT is matched in LIST, each element of LIST being a
+ pattern to match WHAT against, using backward matching (see
+ match_backwards() in utils.c).
+
+ If an element of LIST matched, 1 is returned, 0 otherwise. */
+int
+sufmatch (const char **list, const char *what)
+{
+ int i, j, k, lw;
+
+ lw = strlen (what);
+ for (i = 0; list[i]; i++)
+ {
+ for (j = strlen (list[i]), k = lw; j >= 0 && k >= 0; j--, k--)
+ if (tolower (list[i][j]) != tolower (what[k]))
+ break;
+      /* A match counts only if the whole domain pattern was consumed. */
+ if (j == -1)
+ return 1;
+ }
+ return 0;
+}
+
+/* Return an email address of the form username@FQDN suitable for
+ anonymous FTP passwords. This process is error-prone, and the
+ escape hatch is the MY_HOST preprocessor constant, which can be
+ used to hard-code either your hostname or FQDN at compile-time.
+
+ If the FQDN cannot be determined, a warning is printed, and the
+ function returns a short `username@' form, accepted by most
+ anonymous servers.
+
+   If even the username cannot be divined, things are seriously
+   broken, and Wget exits. */
+char *
+ftp_getaddress (void)
+{
+ static char *address;
+
+ /* Do the drill only the first time, as it won't change. */
+ if (!address)
+ {
+ char userid[32]; /* 9 should be enough for Unix, but
+ I'd rather be on the safe side. */
+ char *host, *fqdn;
+
+ if (!pwd_cuserid (userid))
+ {
+ logprintf (LOG_ALWAYS, _("%s: Cannot determine user-id.\n"),
+ exec_name);
+ exit (1);
+ }
+#ifdef MY_HOST
+ STRDUP_ALLOCA (host, MY_HOST);
+#else /* not MY_HOST */
+#ifdef HAVE_UNAME
+ {
+ struct utsname ubuf;
+ if (uname (&ubuf) < 0)
+ {
+ logprintf (LOG_ALWAYS, _("%s: Warning: uname failed: %s\n"),
+ exec_name, strerror (errno));
+ fqdn = "";
+ goto giveup;
+ }
+ STRDUP_ALLOCA (host, ubuf.nodename);
+ }
+#else /* not HAVE_UNAME */
+#ifdef HAVE_GETHOSTNAME
+ host = alloca (256);
+ if (gethostname (host, 256) < 0)
+ {
+ logprintf (LOG_ALWAYS, _("%s: Warning: gethostname failed\n"),
+ exec_name);
+ fqdn = "";
+ goto giveup;
+ }
+#else /* not HAVE_GETHOSTNAME */
+ #error Cannot determine host name.
+#endif /* not HAVE_GETHOSTNAME */
+#endif /* not HAVE_UNAME */
+#endif /* not MY_HOST */
+ /* If the address we got so far contains a period, don't bother
+ anymore. */
+ if (strchr (host, '.'))
+ fqdn = host;
+ else
+ {
+ /* #### I've seen the following scheme fail on at least one
+ system! Do we care? */
+ char *tmpstore;
+ /* According to Richard Stevens, the correct way to find the
+ FQDN is to (1) find the host name, (2) find its IP
+ address using gethostbyname(), and (3) get the FQDN using
+ gethostbyaddr(). So that's what we'll do. Step one has
+ been done above. */
+ /* (2) */
+ struct hostent *hp = gethostbyname (host);
+ if (!hp || !hp->h_addr_list)
+ {
+ logprintf (LOG_ALWAYS, _("\
+%s: Warning: cannot determine local IP address.\n"),
+ exec_name);
+ fqdn = "";
+ goto giveup;
+ }
+ /* Copy the argument, so the call to gethostbyaddr doesn't
+ clobber it -- just in case. */
+ tmpstore = (char *)alloca (hp->h_length);
+ memcpy (tmpstore, *hp->h_addr_list, hp->h_length);
+ /* (3) */
+ hp = gethostbyaddr (tmpstore, hp->h_length, hp->h_addrtype);
+ if (!hp || !hp->h_name)
+ {
+ logprintf (LOG_ALWAYS, _("\
+%s: Warning: cannot reverse-lookup local IP address.\n"),
+ exec_name);
+ fqdn = "";
+ goto giveup;
+ }
+ if (!strchr (hp->h_name, '.'))
+ {
+#if 0
+ /* This gets ticked pretty often. Karl Berry reports
+ that there can be valid reasons for the local host
+ name not to be an FQDN, so I've decided to remove the
+ annoying warning. */
+ logprintf (LOG_ALWAYS, _("\
+%s: Warning: reverse-lookup of local address did not yield FQDN!\n"),
+ exec_name);
+#endif
+ fqdn = "";
+ goto giveup;
+ }
+ /* Once we're here, hp->h_name contains the correct FQDN. */
+ STRDUP_ALLOCA (fqdn, hp->h_name);
+ }
+ giveup:
+ address = (char *)xmalloc (strlen (userid) + 1 + strlen (fqdn) + 1);
+ sprintf (address, "%s@%s", userid, fqdn);
+ }
+ return address;
+}
+
+/* Return an error message for host lookup errors. */
+char *
+herrmsg (int error)
+{
+ /* Can't use switch since some constants are equal (at least on my
+ system), and the compiler signals "duplicate case value". */
+ if (error == HOST_NOT_FOUND
+ || error == NO_RECOVERY
+ || error == NO_DATA
+ || error == NO_ADDRESS
+ || error == TRY_AGAIN)
+ return _("Host not found");
+ else
+ return _("Unknown error");
+}
+
+/* Clean the host list. This is a separate function, so we needn't
+ export HLIST and its implementation. Ha! */
+void
+clean_hosts (void)
+{
+ struct host *l = hlist;
+
+ while (l)
+ {
+ struct host *p = l->next;
+ free (l->hostname);
+ free (l->realname);
+ free (l);
+ l = p;
+ }
+ hlist = NULL;
+}
--- /dev/null
+/* Declarations for host.c
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef HOST_H
+#define HOST_H
+
+struct urlinfo;
+
+/* Function declarations */
+
+struct hostent *ngethostbyname PARAMS ((const char *));
+int store_hostaddress PARAMS ((unsigned char *, const char *));
+
+void clean_hosts PARAMS ((void));
+
+char *realhost PARAMS ((const char *));
+int same_host PARAMS ((const char *, const char *));
+int accept_domain PARAMS ((struct urlinfo *));
+int sufmatch PARAMS ((const char **, const char *));
+
+char *ftp_getaddress PARAMS ((void));
+
+char *herrmsg PARAMS ((int));
+
+#endif /* HOST_H */
--- /dev/null
+/* A simple HTML parser.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <ctype.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <errno.h>
+
+#include "wget.h"
+#include "url.h"
+#include "utils.h"
+#include "ftp.h"
+#include "html.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+static state_t global_state;
+
+struct tag_attr {
+ char *tag;
+ char *attr;
+};
+
+
+/* Match a string against a null-terminated list of identifiers. */
+static int
+idmatch (struct tag_attr *tags, const char *tag, const char *attr)
+{
+ int i;
+
+ if (!tag || !attr)
+ return 0;
+
+ for (i = 0; tags[i].tag; i++)
+ if (!strcasecmp (tags[i].tag, tag) && !strcasecmp (tags[i].attr, attr))
+ return 1;
+ return 0;
+}
+
+/* Parse BUF (a buffer of BUFSIZE characters) searching for HTML tags
+ describing URLs to follow. When a tag is encountered, extract its
+   components (as described by the html_allow[] array), and return the
+ address and the length of the string. Return NULL if no URL is
+ found. */
+const char *
+htmlfindurl (const char *buf, int bufsize, int *size, int init)
+{
+ const char *p, *ph;
+ state_t *s;
+ /* NULL-terminated list of tags and modifiers someone would want to
+ follow -- feel free to edit to suit your needs: */
+ static struct tag_attr html_allow[] = {
+ { "a", "href" },
+ { "img", "src" },
+ { "img", "href" },
+ { "body", "background" },
+ { "frame", "src" },
+ { "iframe", "src" },
+ { "fig", "src" },
+ { "overlay", "src" },
+ { "applet", "code" },
+ { "script", "src" },
+ { "embed", "src" },
+ { "bgsound", "src" },
+ { "area", "href" },
+ { "img", "lowsrc" },
+ { "input", "src" },
+ { "layer", "src" },
+ { "table", "background"},
+ { "th", "background"},
+ { "td", "background"},
+ /* Tags below this line are treated specially. */
+ { "base", "href" },
+ { "meta", "content" },
+ { NULL, NULL }
+ };
+
+ s = &global_state;
+
+ if (init)
+ {
+ DEBUGP (("Resetting a parser state.\n"));
+ memset (s, 0, sizeof (*s));
+ }
+
+ while (1)
+ {
+ if (!bufsize)
+ break;
+ /* Let's look for a tag, if we are not already in one. */
+ if (!s->at_value)
+ {
+ /* Find '<'. */
+ if (*buf != '<')
+ for (; bufsize && *buf != '<'; ++buf, --bufsize);
+ if (!bufsize)
+ break;
+ /* Skip spaces. */
+ for (++buf, --bufsize; bufsize && ISSPACE (*buf) && *buf != '>';
+ ++buf, --bufsize);
+ if (!bufsize)
+ break;
+ p = buf;
+ /* Find the tag end. */
+ for (; bufsize && !ISSPACE (*buf) && *buf != '>' && *buf != '=';
+ ++buf, --bufsize);
+ if (!bufsize)
+ break;
+ if (*buf == '=')
+ {
+ /* <tag=something> is illegal. Just skip it. */
+ ++buf, --bufsize;
+ continue;
+ }
+ if (p == buf)
+ {
+ /* *buf == '>'. */
+ ++buf, --bufsize;
+ continue;
+ }
+ s->tag = strdupdelim (p, buf);
+ if (*buf == '>')
+ {
+ free (s->tag);
+ s->tag = NULL;
+ ++buf, --bufsize;
+ continue;
+ }
+ }
+ else /* s->at_value */
+ {
+ /* Reset AT_VALUE. */
+ s->at_value = 0;
+ /* If in quotes, just skip out of them and continue living. */
+ if (s->in_quote)
+ {
+ s->in_quote = 0;
+ for (; bufsize && *buf != s->quote_char; ++buf, --bufsize);
+ if (!bufsize)
+ break;
+ ++buf, --bufsize;
+ }
+ if (!bufsize)
+ break;
+ if (*buf == '>')
+ {
+ FREE_MAYBE (s->tag);
+ FREE_MAYBE (s->attr);
+ s->tag = s->attr = NULL;
+ continue;
+ }
+ }
+ /* Find the attributes. */
+ do
+ {
+ FREE_MAYBE (s->attr);
+ s->attr = NULL;
+ if (!bufsize)
+ break;
+ /* Skip the spaces if we have them. We don't have them at
+ places like <img alt="something"src="something-else">.
+ ^ no spaces here */
+ if (ISSPACE (*buf))
+ for (++buf, --bufsize; bufsize && ISSPACE (*buf) && *buf != '>';
+ ++buf, --bufsize);
+ if (!bufsize || *buf == '>')
+ break;
+ if (*buf == '=')
+ {
+ /* This is the case of <tag = something>, which is
+ illegal. Just skip it. */
+ ++buf, --bufsize;
+ continue;
+ }
+ p = buf;
+ /* Find the attribute end. */
+ for (; bufsize && !ISSPACE (*buf) && *buf != '>' && *buf != '=';
+ ++buf, --bufsize);
+ if (!bufsize || *buf == '>')
+ break;
+ /* Construct the attribute. */
+ s->attr = strdupdelim (p, buf);
+ /* Now we must skip the spaces to find '='. */
+ if (*buf != '=')
+ {
+ for (; bufsize && ISSPACE (*buf) && *buf != '>'; ++buf, --bufsize);
+ if (!bufsize || *buf == '>')
+ break;
+ }
+ /* If we still don't have '=', something is amiss. */
+ if (*buf != '=')
+ continue;
+ /* Find the beginning of attribute value by skipping the
+ spaces. */
+ ++buf, --bufsize;
+ for (; bufsize && ISSPACE (*buf) && *buf != '>'; ++buf, --bufsize);
+ if (!bufsize || *buf == '>')
+ break;
+ ph = NULL;
+ /* The value of an attribute can, but does not have to be
+ quoted. */
+ if (*buf == '\"' || *buf == '\'')
+ {
+ s->in_quote = 1;
+ s->quote_char = *buf;
+ p = buf + 1;
+ for (++buf, --bufsize;
+ bufsize && *buf != s->quote_char && *buf != '\n';
+ ++buf, --bufsize)
+ if (*buf == '#')
+ ph = buf;
+ if (!bufsize)
+ {
+ s->in_quote = 0;
+ break;
+ }
+ if (*buf == '\n')
+ {
+ /* #### Is the following logic good?
+
+ Obviously no longer in quote. It might be well
+ to check whether '>' was encountered, but that
+ would be encouraging writers of invalid HTMLs,
+ and we don't want that, now do we? */
+ s->in_quote = 0;
+ continue;
+ }
+ }
+ else
+ {
+ p = buf;
+ for (; bufsize && !ISSPACE (*buf) && *buf != '>'; ++buf, --bufsize)
+ if (*buf == '#')
+ ph = buf;
+ if (!bufsize)
+ break;
+ }
+ /* If '#' was found unprotected in a URI, it is probably an
+ HTML marker, or color spec. */
+ *size = (ph ? ph : buf) - p;
+ /* The URI is liable to be returned if:
+ 1) *size != 0;
+ 2) its tag and attribute are found in html_allow. */
+ if (*size && idmatch (html_allow, s->tag, s->attr))
+ {
+ if (!strcasecmp (s->tag, "base") && !strcasecmp (s->attr, "href"))
+ {
+ FREE_MAYBE (s->base);
+ s->base = strdupdelim (p, buf);
+ }
+ else if (!strcasecmp (s->tag, "meta") && !strcasecmp (s->attr, "content"))
+ {
+ /* Some pages use a META tag to specify that the page
+ be refreshed by a new page after a given number of
+ seconds. We need to attempt to extract an URL for
+ the new page from the other garbage present. The
+ general format for this is:
+ <META HTTP-EQUIV=Refresh CONTENT="0; URL=index2.html">
+
+ So we just need to skip past the "0; URL="
+ garbage to get to the URL. META tags are also
+ used for specifying random things like the page
+ author's name and what editor was used to create
+ it. So we need to be careful to ignore them and
+ not assume that an URL will be present at all. */
+ for (; *size && ISDIGIT (*p); p++, *size -= 1);
+ if (*p == ';')
+ {
+ for (p++, *size -= 1; *size && ISSPACE (*p); p++, *size -= 1) ;
+ if (!strncasecmp (p, "URL=", 4))
+ {
+ p += 4, *size -= 4;
+ s->at_value = 1;
+ return p;
+ }
+ }
+ }
+ else
+ {
+ s->at_value = 1;
+ return p;
+ }
+ }
+ /* Exit from quote. */
+ if (*buf == s->quote_char)
+ {
+ s->in_quote = 0;
+ ++buf, --bufsize;
+ }
+ } while (*buf != '>');
+ FREE_MAYBE (s->tag);
+ FREE_MAYBE (s->attr);
+ s->tag = s->attr = NULL;
+ if (!bufsize)
+ break;
+ }
+
+ FREE_MAYBE (s->tag);
+ FREE_MAYBE (s->attr);
+ FREE_MAYBE (s->base);
+ memset (s, 0, sizeof (*s)); /* just to be sure */
+ DEBUGP (("HTML parser ends here (state destroyed).\n"));
+ return NULL;
+}
+
+/* The function returns the base reference of the HTML document
+ currently being parsed, or NULL if one wasn't defined for it. */
+const char *
+html_base (void)
+{
+ return global_state.base;
+}
+
+/* The function returns the pointer to the malloc-ed quoted version of
+ string s. It will recognize and quote numeric and special graphic
+ entities, as per RFC1866:
+
+ `&' -> `&amp;'
+ `<' -> `&lt;'
+ `>' -> `&gt;'
+ `"' -> `&quot;'
+
+ No other entities are recognized or replaced. */
+static char *
+html_quote_string (const char *s)
+{
+ const char *b = s;
+ char *p, *res;
+ int i;
+
+ /* Pass through the string, and count the new size. */
+ for (i = 0; *s; s++, i++)
+ {
+ if (*s == '&')
+ i += 4; /* `amp;' */
+ else if (*s == '<' || *s == '>')
+ i += 3; /* `lt;' and `gt;' */
+ else if (*s == '\"')
+ i += 5; /* `quot;' */
+ }
+ res = (char *)xmalloc (i + 1);
+ s = b;
+ for (p = res; *s; s++)
+ {
+ switch (*s)
+ {
+ case '&':
+ *p++ = '&';
+ *p++ = 'a';
+ *p++ = 'm';
+ *p++ = 'p';
+ *p++ = ';';
+ break;
+ case '<': case '>':
+ *p++ = '&';
+ *p++ = (*s == '<' ? 'l' : 'g');
+ *p++ = 't';
+ *p++ = ';';
+ break;
+ case '\"':
+ *p++ = '&';
+ *p++ = 'q';
+ *p++ = 'u';
+ *p++ = 'o';
+ *p++ = 't';
+ *p++ = ';';
+ break;
+ default:
+ *p++ = *s;
+ }
+ }
+ *p = '\0';
+ return res;
+}
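The quoting routine above makes two passes over the string: one to size the result, one to fill it. A minimal standalone sketch of the same two-pass RFC1866 quoting (using plain `malloc` in place of Wget's `xmalloc`; the helper name `quote_html` is hypothetical):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Two-pass HTML quoting, as in html_quote_string: first count the
   expanded size, then copy, turning &, <, > and " into entities. */
static char *
quote_html (const char *s)
{
  const char *b = s;
  size_t n;
  char *res, *p;

  for (n = 0; *s; s++, n++)
    {
      if (*s == '&')
        n += 4;                 /* "amp;" */
      else if (*s == '<' || *s == '>')
        n += 3;                 /* "lt;" / "gt;" */
      else if (*s == '"')
        n += 5;                 /* "quot;" */
    }
  res = malloc (n + 1);
  if (!res)
    return NULL;
  for (s = b, p = res; *s; s++)
    switch (*s)
      {
      case '&': memcpy (p, "&amp;", 5);  p += 5; break;
      case '<': memcpy (p, "&lt;", 4);   p += 4; break;
      case '>': memcpy (p, "&gt;", 4);   p += 4; break;
      case '"': memcpy (p, "&quot;", 6); p += 6; break;
      default:  *p++ = *s;
      }
  *p = '\0';
  return res;
}
```

The up-front sizing pass is what lets the fill pass run without any bounds checks or reallocation.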
+
+/* The function creates an HTML index containing references to given
+ directories and files on the appropriate host. The references are
+ FTP URLs. */
+uerr_t
+ftp_index (const char *file, struct urlinfo *u, struct fileinfo *f)
+{
+ FILE *fp;
+ char *upwd;
+ char *htclfile; /* HTML-clean file name */
+
+ if (!opt.dfp)
+ {
+ fp = fopen (file, "wb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, "%s: %s\n", file, strerror (errno));
+ return FOPENERR;
+ }
+ }
+ else
+ fp = opt.dfp;
+ if (u->user)
+ {
+ char *tmpu, *tmpp; /* temporary, clean user and passwd */
+
+ tmpu = CLEANDUP (u->user);
+ tmpp = u->passwd ? CLEANDUP (u->passwd) : NULL;
+ upwd = (char *)xmalloc (strlen (tmpu)
+ + (tmpp ? (1 + strlen (tmpp)) : 0) + 2);
+ sprintf (upwd, "%s%s%s@", tmpu, tmpp ? ":" : "", tmpp ? tmpp : "");
+ free (tmpu);
+ FREE_MAYBE (tmpp);
+ }
+ else
+ upwd = xstrdup ("");
+ fprintf (fp, "<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n");
+ fprintf (fp, "<html>\n<head>\n<title>");
+ fprintf (fp, _("Index of /%s on %s:%d"), u->dir, u->host, u->port);
+ fprintf (fp, "</title>\n</head>\n<body>\n<h1>");
+ fprintf (fp, _("Index of /%s on %s:%d"), u->dir, u->host, u->port);
+ fprintf (fp, "</h1>\n<hr>\n<pre>\n");
+ while (f)
+ {
+ fprintf (fp, " ");
+ if (f->tstamp != -1)
+ {
+ /* #### Should we translate the months? */
+ static char *months[] = {
+ "Jan", "Feb", "Mar", "Apr", "May", "Jun",
+ "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
+ };
+ struct tm *ptm = localtime ((time_t *)&f->tstamp);
+
+ fprintf (fp, "%d %s %02d ", ptm->tm_year + 1900, months[ptm->tm_mon],
+ ptm->tm_mday);
+ if (ptm->tm_hour)
+ fprintf (fp, "%02d:%02d ", ptm->tm_hour, ptm->tm_min);
+ else
+ fprintf (fp, " ");
+ }
+ else
+ fprintf (fp, _("time unknown "));
+ switch (f->type)
+ {
+ case FT_PLAINFILE:
+ fprintf (fp, _("File "));
+ break;
+ case FT_DIRECTORY:
+ fprintf (fp, _("Directory "));
+ break;
+ case FT_SYMLINK:
+ fprintf (fp, _("Link "));
+ break;
+ default:
+ fprintf (fp, _("Not sure "));
+ break;
+ }
+ htclfile = html_quote_string (f->name);
+ fprintf (fp, "<a href=\"ftp://%s%s:%hu", upwd, u->host, u->port);
+ if (*u->dir != '/')
+ putc ('/', fp);
+ fprintf (fp, "%s", u->dir);
+ if (*u->dir)
+ putc ('/', fp);
+ fprintf (fp, "%s", htclfile);
+ if (f->type == FT_DIRECTORY)
+ putc ('/', fp);
+ fprintf (fp, "\">%s", htclfile);
+ if (f->type == FT_DIRECTORY)
+ putc ('/', fp);
+ fprintf (fp, "</a> ");
+ if (f->type == FT_PLAINFILE)
+ fprintf (fp, _(" (%s bytes)"), legible (f->size));
+ else if (f->type == FT_SYMLINK)
+ fprintf (fp, "-> %s", f->linkto ? f->linkto : "(nil)");
+ putc ('\n', fp);
+ free (htclfile);
+ f = f->next;
+ }
+ fprintf (fp, "</pre>\n</body>\n</html>\n");
+ free (upwd);
+ if (!opt.dfp)
+ fclose (fp);
+ else
+ fflush (fp);
+ return FTPOK;
+}
--- /dev/null
+/* HTML parser declarations.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef HTML_H
+#define HTML_H
+
+/* Structure of a parser state */
+typedef struct
+{
+ int at_value, in_quote;
+ char quote_char;
+ char *tag, *attr;
+ char *base;
+} state_t;
+
+struct fileinfo;
+
+/* Function declarations */
+const char *htmlfindurl PARAMS ((const char *, int, int *, int));
+const char *html_base PARAMS ((void));
+uerr_t ftp_index PARAMS ((const char *, struct urlinfo *, struct fileinfo *));
+
+#endif /* HTML_H */
--- /dev/null
+/* HTTP support.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <ctype.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#include <assert.h>
+#include <errno.h>
+#if TIME_WITH_SYS_TIME
+# include <sys/time.h>
+# include <time.h>
+#else
+# if HAVE_SYS_TIME_H
+# include <sys/time.h>
+# else
+# include <time.h>
+# endif
+#endif
+
+#ifdef WINDOWS
+# include <winsock.h>
+#endif
+
+#include "wget.h"
+#include "utils.h"
+#include "url.h"
+#include "host.h"
+#include "rbuf.h"
+#include "retr.h"
+#include "headers.h"
+#include "connect.h"
+#include "fnmatch.h"
+#include "netrc.h"
+#if USE_DIGEST
+# include "md5.h"
+#endif
+
+extern char *version_string;
+
+#ifndef errno
+extern int errno;
+#endif
+#ifndef h_errno
+extern int h_errno;
+#endif
+\f
+
+#define TEXTHTML_S "text/html"
+#define HTTP_ACCEPT "*/*"
+
+/* Some status code validation macros: */
+#define H_20X(x) (((x) >= 200) && ((x) < 300))
+#define H_PARTIAL(x) ((x) == HTTP_STATUS_PARTIAL_CONTENTS)
+#define H_REDIRECTED(x) (((x) == HTTP_STATUS_MOVED_PERMANENTLY) \
+ || ((x) == HTTP_STATUS_MOVED_TEMPORARILY))
+
+/* HTTP/1.0 status codes from RFC1945, provided for reference. */
+/* Successful 2xx. */
+#define HTTP_STATUS_OK 200
+#define HTTP_STATUS_CREATED 201
+#define HTTP_STATUS_ACCEPTED 202
+#define HTTP_STATUS_NO_CONTENT 204
+#define HTTP_STATUS_PARTIAL_CONTENTS 206
+
+/* Redirection 3xx. */
+#define HTTP_STATUS_MULTIPLE_CHOICES 300
+#define HTTP_STATUS_MOVED_PERMANENTLY 301
+#define HTTP_STATUS_MOVED_TEMPORARILY 302
+#define HTTP_STATUS_NOT_MODIFIED 304
+
+/* Client error 4xx. */
+#define HTTP_STATUS_BAD_REQUEST 400
+#define HTTP_STATUS_UNAUTHORIZED 401
+#define HTTP_STATUS_FORBIDDEN 403
+#define HTTP_STATUS_NOT_FOUND 404
+
+/* Server errors 5xx. */
+#define HTTP_STATUS_INTERNAL 500
+#define HTTP_STATUS_NOT_IMPLEMENTED 501
+#define HTTP_STATUS_BAD_GATEWAY 502
+#define HTTP_STATUS_UNAVAILABLE 503
+
+\f
+/* Parse the HTTP status line, which is of format:
+
+ HTTP-Version SP Status-Code SP Reason-Phrase
+
+ The function returns the status-code, or -1 if the status line is
+ malformed. A pointer to the reason-phrase is returned in
+ *REASON_PHRASE_PTR. */
+static int
+parse_http_status_line (const char *line, const char **reason_phrase_ptr)
+{
+ /* (the variables must not be named `major' and `minor', because
+ that breaks compilation with SunOS4 cc.) */
+ int mjr, mnr, statcode;
+ const char *p;
+
+ *reason_phrase_ptr = NULL;
+
+ /* The standard format of HTTP-Version is: `HTTP/X.Y', where X is
+ major version, and Y is minor version. */
+ if (strncmp (line, "HTTP/", 5) != 0)
+ return -1;
+ line += 5;
+
+ /* Calculate major HTTP version. */
+ p = line;
+ for (mjr = 0; ISDIGIT (*line); line++)
+ mjr = 10 * mjr + (*line - '0');
+ if (*line != '.' || p == line)
+ return -1;
+ ++line;
+
+ /* Calculate minor HTTP version. */
+ p = line;
+ for (mnr = 0; ISDIGIT (*line); line++)
+ mnr = 10 * mnr + (*line - '0');
+ if (*line != ' ' || p == line)
+ return -1;
+ /* Wget will accept only 1.0 and higher HTTP-versions. The value of
+ minor version can be safely ignored. */
+ if (mjr < 1)
+ return -1;
+ ++line;
+
+ /* Calculate status code. */
+ if (!(ISDIGIT (*line) && ISDIGIT (line[1]) && ISDIGIT (line[2])))
+ return -1;
+ statcode = 100 * (*line - '0') + 10 * (line[1] - '0') + (line[2] - '0');
+
+ /* Set up the reason phrase pointer. */
+ line += 3;
+ /* RFC2068 requires SPC here, but we allow the string to finish
+ here, in case no reason-phrase is present. */
+ if (*line != ' ')
+ {
+ if (!*line)
+ *reason_phrase_ptr = line;
+ else
+ return -1;
+ }
+ else
+ *reason_phrase_ptr = line + 1;
+
+ return statcode;
+}
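parse_http_status_line accepts any `HTTP/x.y` version with a three-digit status code, and tolerates a missing reason-phrase. A self-contained sketch of the same status-code extraction (hypothetical helper name; ctype's `isdigit` stands in for Wget's ISDIGIT, and the major-version >= 1 check is omitted for brevity):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Return the 3-digit status code from an HTTP status line, or -1 if
   the line is malformed.  Mirrors parse_http_status_line above. */
static int
status_code (const char *line)
{
  const char *p;

  if (strncmp (line, "HTTP/", 5) != 0)
    return -1;
  line += 5;
  for (p = line; isdigit ((unsigned char) *line); line++); /* major */
  if (*line != '.' || p == line)
    return -1;
  ++line;
  for (p = line; isdigit ((unsigned char) *line); line++); /* minor */
  if (*line != ' ' || p == line)
    return -1;
  ++line;
  if (!(isdigit ((unsigned char) line[0])
        && isdigit ((unsigned char) line[1])
        && isdigit ((unsigned char) line[2])))
    return -1;
  return 100 * (line[0] - '0') + 10 * (line[1] - '0') + (line[2] - '0');
}
```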
+\f
+/* Functions to be used as arguments to header_process(): */
+
+struct http_process_range_closure {
+ long first_byte_pos;
+ long last_byte_pos;
+ long entity_length;
+};
+
+/* Parse the `Content-Range' header and extract the information it
+ contains. Returns 1 if successful, 0 otherwise. */
+static int
+http_process_range (const char *hdr, void *arg)
+{
+ struct http_process_range_closure *closure
+ = (struct http_process_range_closure *)arg;
+ long num;
+
+ /* Certain versions of Nutscape proxy server send out
+ `Content-Range' without the "bytes" specifier, which is a breach of
+ RFC2068 (as well as the HTTP/1.1 draft which was current at the
+ time). But hell, I must support it... */
+ if (!strncasecmp (hdr, "bytes", 5))
+ {
+ hdr += 5;
+ hdr += skip_lws (hdr);
+ if (!*hdr)
+ return 0;
+ }
+ if (!ISDIGIT (*hdr))
+ return 0;
+ for (num = 0; ISDIGIT (*hdr); hdr++)
+ num = 10 * num + (*hdr - '0');
+ if (*hdr != '-' || !ISDIGIT (*(hdr + 1)))
+ return 0;
+ closure->first_byte_pos = num;
+ ++hdr;
+ for (num = 0; ISDIGIT (*hdr); hdr++)
+ num = 10 * num + (*hdr - '0');
+ if (*hdr != '/' || !ISDIGIT (*(hdr + 1)))
+ return 0;
+ closure->last_byte_pos = num;
+ ++hdr;
+ for (num = 0; ISDIGIT (*hdr); hdr++)
+ num = 10 * num + (*hdr - '0');
+ closure->entity_length = num;
+ return 1;
+}
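The Content-Range value the function above walks through has the shape `bytes first-last/total`. A condensed stand-in using `sscanf` (an assumption for brevity: here the `bytes` prefix is mandatory, whereas the digit-by-digit parser above also accepts values that omit it):

```c
#include <assert.h>
#include <stdio.h>

/* Parse a Content-Range value of the form "bytes FIRST-LAST/TOTAL"
   into its three numbers.  Returns 1 on success, 0 on a malformed
   value.  A sketch of what http_process_range extracts. */
static int
parse_content_range (const char *hdr, long *first, long *last, long *total)
{
  return sscanf (hdr, "bytes %ld-%ld/%ld", first, last, total) == 3;
}
```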
+
+/* Set *ARG to 1 if HDR contains the word "none", to 0 otherwise.
+ Used for `Accept-Ranges'. */
+static int
+http_process_none (const char *hdr, void *arg)
+{
+ int *where = (int *)arg;
+
+ if (strstr (hdr, "none"))
+ *where = 1;
+ else
+ *where = 0;
+ return 1;
+}
+
+/* Place a malloc-ed copy of HDR, truncated at the last `;', to ARG. */
+static int
+http_process_type (const char *hdr, void *arg)
+{
+ char **result = (char **)arg;
+ char *p;
+
+ p = strrchr (hdr, ';');
+ if (p)
+ {
+ int len = p - hdr;
+ *result = (char *)xmalloc (len + 1);
+ memcpy (*result, hdr, len);
+ (*result)[len] = '\0';
+ }
+ else
+ *result = xstrdup (hdr);
+ return 1;
+}
+
+\f
+struct http_stat
+{
+ long len; /* received length */
+ long contlen; /* expected length */
+ long restval; /* the restart value */
+ int res; /* the result of last read */
+ char *newloc; /* new location (redirection) */
+ char *remote_time; /* remote time-stamp string */
+ char *error; /* textual HTTP error */
+ int statcode; /* status code */
+ long dltime; /* time of the download */
+};
+
+/* Free the elements of hstat X. */
+#define FREEHSTAT(x) do \
+{ \
+ FREE_MAYBE ((x).newloc); \
+ FREE_MAYBE ((x).remote_time); \
+ FREE_MAYBE ((x).error); \
+ (x).newloc = (x).remote_time = (x).error = NULL; \
+} while (0)
+
+static char *create_authorization_line PARAMS ((const char *, const char *,
+ const char *, const char *,
+ const char *));
+static char *basic_authentication_encode PARAMS ((const char *, const char *,
+ const char *));
+static int known_authentication_scheme_p PARAMS ((const char *));
+
+static time_t http_atotm PARAMS ((char *));
+
+/* Retrieve a document through HTTP protocol. It recognizes status
+ code, and correctly handles redirections. It closes the network
+ socket. If it receives an error from the functions below it, it
+ will print it if there is enough information to do so (almost
+ always), returning the error to the caller (i.e. http_loop).
+
+ Various HTTP parameters are stored to hs. Although it parses the
+ response code correctly, it is not used in a sane way. The caller
+ can do that, though.
+
+ If u->proxy is non-NULL, the URL u will be taken as a proxy URL,
+ and u->proxy->url will be given to the proxy server (bad naming,
+ I'm afraid). */
+static uerr_t
+gethttp (struct urlinfo *u, struct http_stat *hs, int *dt)
+{
+ char *request, *type, *command, *path;
+ char *user, *passwd;
+ char *pragma_h, *referer, *useragent, *range, *wwwauth, *remhost;
+ char *authenticate_h;
+ char *proxyauth;
+ char *all_headers;
+ int sock, hcount, num_written, all_length, remport, statcode;
+ long contlen, contrange;
+ struct urlinfo *ou;
+ uerr_t err;
+ FILE *fp;
+ int auth_tried_already;
+ struct rbuf rbuf;
+
+ /* Let the others worry about local filename... */
+ if (!(*dt & HEAD_ONLY))
+ assert (u->local != NULL);
+
+ authenticate_h = 0;
+ auth_tried_already = 0;
+
+ again:
+ /* We need to come back here when the initial attempt to retrieve
+ without authorization header fails. */
+
+ /* Initialize certain elements of struct hstat. */
+ hs->len = 0L;
+ hs->contlen = -1;
+ hs->res = -1;
+ hs->newloc = NULL;
+ hs->remote_time = NULL;
+ hs->error = NULL;
+
+ /* Which structure to use to retrieve the original URL data. */
+ if (u->proxy)
+ ou = u->proxy;
+ else
+ ou = u;
+
+ /* First: establish the connection. */
+ logprintf (LOG_VERBOSE, _("Connecting to %s:%hu... "), u->host, u->port);
+ err = make_connection (&sock, u->host, u->port);
+ switch (err)
+ {
+ case HOSTERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "%s: %s.\n", u->host, herrmsg (h_errno));
+ return HOSTERR;
+ break;
+ case CONSOCKERR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "socket: %s\n", strerror (errno));
+ return CONSOCKERR;
+ break;
+ case CONREFUSED:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET,
+ _("Connection to %s:%hu refused.\n"), u->host, u->port);
+ CLOSE (sock);
+ return CONREFUSED;
+ case CONERROR:
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, "connect: %s\n", strerror (errno));
+ CLOSE (sock);
+ return CONERROR;
+ break;
+ case NOCONERROR:
+ /* Everything is fine! */
+ logputs (LOG_VERBOSE, _("connected!\n"));
+ break;
+ default:
+ abort ();
+ break;
+ } /* switch */
+
+ if (u->proxy)
+ path = u->proxy->url;
+ else
+ path = u->path;
+ command = (*dt & HEAD_ONLY) ? "HEAD" : "GET";
+ referer = NULL;
+ if (ou->referer)
+ {
+ referer = (char *)alloca (9 + strlen (ou->referer) + 3);
+ sprintf (referer, "Referer: %s\r\n", ou->referer);
+ }
+ if (*dt & SEND_NOCACHE)
+ pragma_h = "Pragma: no-cache\r\n";
+ else
+ pragma_h = "";
+ if (hs->restval)
+ {
+ range = (char *)alloca (13 + numdigit (hs->restval) + 4);
+ /* #### Gag me! Some servers (e.g. WebSitePro) have been known
+ to misinterpret the following `Range' format, and return the
+ document as multipart/x-byte-ranges MIME type!
+
+ #### TODO: Interpret MIME types, recognize bullshit similar
+ to the one described above, and deal with it! */
+ sprintf (range, "Range: bytes=%ld-\r\n", hs->restval);
+ }
+ else
+ range = NULL;
+ if (opt.useragent)
+ STRDUP_ALLOCA (useragent, opt.useragent);
+ else
+ {
+ useragent = (char *)alloca (10 + strlen (version_string));
+ sprintf (useragent, "Wget/%s", version_string);
+ }
+ /* Construct the authentication, if userid is present. */
+ user = ou->user;
+ passwd = ou->passwd;
+ search_netrc (u->host, (const char **)&user, (const char **)&passwd, 0);
+ user = user ? user : opt.http_user;
+ passwd = passwd ? passwd : opt.http_passwd;
+
+ wwwauth = NULL;
+ if (authenticate_h && user && passwd)
+ {
+ wwwauth = create_authorization_line (authenticate_h, user, passwd,
+ command, ou->path);
+ }
+
+ proxyauth = NULL;
+ if (u->proxy)
+ {
+ char *proxy_user, *proxy_passwd;
+ /* For normal username and password, URL components override
+ command-line/wgetrc parameters. With proxy authentication,
+ it's the reverse, because proxy URLs are normally the
+ "permanent" ones, so command-line args should take
+ precedence. */
+ if (opt.proxy_user && opt.proxy_passwd)
+ {
+ proxy_user = opt.proxy_user;
+ proxy_passwd = opt.proxy_passwd;
+ }
+ else
+ {
+ proxy_user = u->user;
+ proxy_passwd = u->passwd;
+ }
+ /* #### This is junky. Can't the proxy request, say, `Digest'
+ authentication? */
+ if (proxy_user && proxy_passwd)
+ proxyauth = basic_authentication_encode (proxy_user, proxy_passwd,
+ "Proxy-Authorization");
+ }
+ remhost = ou->host;
+ remport = ou->port;
+ /* Allocate the memory for the request. */
+ request = (char *)alloca (strlen (command) + strlen (path)
+ + strlen (useragent)
+ + strlen (remhost) + numdigit (remport)
+ + strlen (HTTP_ACCEPT)
+ + (referer ? strlen (referer) : 0)
+ + (wwwauth ? strlen (wwwauth) : 0)
+ + (proxyauth ? strlen (proxyauth) : 0)
+ + (range ? strlen (range) : 0)
+ + strlen (pragma_h)
+ + (opt.user_header ? strlen (opt.user_header) : 0)
+ + 64);
+ /* Construct the request. */
+ sprintf (request, "\
+%s %s HTTP/1.0\r\n\
+User-Agent: %s\r\n\
+Host: %s:%d\r\n\
+Accept: %s\r\n\
+%s%s%s%s%s%s\r\n",
+ command, path, useragent, remhost, remport, HTTP_ACCEPT,
+ referer ? referer : "",
+ wwwauth ? wwwauth : "",
+ proxyauth ? proxyauth : "",
+ range ? range : "",
+ pragma_h,
+ opt.user_header ? opt.user_header : "");
+ DEBUGP (("---request begin---\n%s---request end---\n", request));
+ /* Free the temporary memory. */
+ FREE_MAYBE (wwwauth);
+ FREE_MAYBE (proxyauth);
+
+ /* Send the request to server. */
+ num_written = iwrite (sock, request, strlen (request));
+ if (num_written < 0)
+ {
+ logputs (LOG_VERBOSE, _("Failed writing HTTP request.\n"));
+ CLOSE (sock);
+ return WRITEFAILED;
+ }
+ logprintf (LOG_VERBOSE, _("%s request sent, awaiting response... "),
+ u->proxy ? "Proxy" : "HTTP");
+ contlen = contrange = -1;
+ type = NULL;
+ statcode = -1;
+ *dt &= ~RETROKF;
+
+ /* Before reading anything, initialize the rbuf. */
+ rbuf_initialize (&rbuf, sock);
+
+ all_headers = NULL;
+ all_length = 0;
+ /* Header-fetching loop. */
+ hcount = 0;
+ while (1)
+ {
+ char *hdr;
+ int status;
+
+ ++hcount;
+ /* Get the header. */
+ status = header_get (&rbuf, &hdr,
+ /* Disallow continuations for status line. */
+ (hcount == 1 ? HG_NO_CONTINUATIONS : HG_NONE));
+
+ /* Check for errors. */
+ if (status == HG_EOF && *hdr)
+ {
+ /* This used to be an unconditional error, but that was
+ somewhat controversial, because of a large number of
+ broken CGI's that happily "forget" to send the second EOL
+ before closing the connection of a HEAD request.
+
+ So, the deal is to check whether the header is empty
+ (*hdr is zero if it is); if yes, it means that the
+ previous header was fully retrieved, and that -- most
+ probably -- the request is complete. "...be liberal in
+ what you accept." Oh boy. */
+ logputs (LOG_VERBOSE, "\n");
+ logputs (LOG_NOTQUIET, _("End of file while parsing headers.\n"));
+ free (hdr);
+ FREE_MAYBE (type);
+ FREE_MAYBE (hs->newloc);
+ FREE_MAYBE (all_headers);
+ CLOSE (sock);
+ return HEOF;
+ }
+ else if (status == HG_ERROR)
+ {
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, _("Read error (%s) in headers.\n"),
+ strerror (errno));
+ free (hdr);
+ FREE_MAYBE (type);
+ FREE_MAYBE (hs->newloc);
+ FREE_MAYBE (all_headers);
+ CLOSE (sock);
+ return HERR;
+ }
+
+ /* If the headers are to be saved to a file later, save them to
+ memory now. */
+ if (opt.save_headers)
+ {
+ int lh = strlen (hdr);
+ all_headers = (char *)xrealloc (all_headers, all_length + lh + 2);
+ memcpy (all_headers + all_length, hdr, lh);
+ all_length += lh;
+ all_headers[all_length++] = '\n';
+ all_headers[all_length] = '\0';
+ }
+
+ /* Print the header if requested. */
+ if (opt.server_response && hcount != 1)
+ logprintf (LOG_VERBOSE, "\n%d %s", hcount, hdr);
+
+ /* Check for status line. */
+ if (hcount == 1)
+ {
+ const char *error;
+ /* Parse the first line of server response. */
+ statcode = parse_http_status_line (hdr, &error);
+ hs->statcode = statcode;
+ /* Store the descriptive response. */
+ if (statcode == -1) /* malformed response */
+ {
+ /* A common reason for "malformed response" error is the
+ case when no data was actually received. Handle this
+ special case. */
+ if (!*hdr)
+ hs->error = xstrdup (_("No data received"));
+ else
+ hs->error = xstrdup (_("Malformed status line"));
+ free (hdr);
+ break;
+ }
+ else if (!*error)
+ hs->error = xstrdup (_("(no description)"));
+ else
+ hs->error = xstrdup (error);
+
+ if ((statcode != -1)
+#ifdef DEBUG
+ && !opt.debug
+#endif
+ )
+ logprintf (LOG_VERBOSE, "%d %s", statcode, error);
+
+ goto done_header;
+ }
+
+ /* Exit on empty header. */
+ if (!*hdr)
+ {
+ free (hdr);
+ break;
+ }
+
+ /* Try getting content-length. */
+ if (contlen == -1 && !opt.ignore_length)
+ if (header_process (hdr, "Content-Length", header_extract_number,
+ &contlen))
+ goto done_header;
+ /* Try getting content-type. */
+ if (!type)
+ if (header_process (hdr, "Content-Type", http_process_type, &type))
+ goto done_header;
+ /* Try getting location. */
+ if (!hs->newloc)
+ if (header_process (hdr, "Location", header_strdup, &hs->newloc))
+ goto done_header;
+ /* Try getting last-modified. */
+ if (!hs->remote_time)
+ if (header_process (hdr, "Last-Modified", header_strdup,
+ &hs->remote_time))
+ goto done_header;
+ /* Try getting www-authentication. */
+ if (!authenticate_h)
+ if (header_process (hdr, "WWW-Authenticate", header_strdup,
+ &authenticate_h))
+ goto done_header;
+ /* Check for accept-ranges header. If it contains the word
+ `none', disable the ranges. */
+ if (*dt & ACCEPTRANGES)
+ {
+ int nonep;
+ if (header_process (hdr, "Accept-Ranges", http_process_none, &nonep))
+ {
+ if (nonep)
+ *dt &= ~ACCEPTRANGES;
+ goto done_header;
+ }
+ }
+ /* Try getting content-range. */
+ if (contrange == -1)
+ {
+ struct http_process_range_closure closure;
+ if (header_process (hdr, "Content-Range", http_process_range, &closure))
+ {
+ contrange = closure.first_byte_pos;
+ goto done_header;
+ }
+ }
+ done_header:
+ free (hdr);
+ }
+
+ logputs (LOG_VERBOSE, "\n");
+
+ if ((statcode == HTTP_STATUS_UNAUTHORIZED)
+ && authenticate_h)
+ {
+ /* Authorization is required. */
+ FREE_MAYBE (type);
+ type = NULL;
+ FREEHSTAT (*hs);
+ CLOSE (sock);
+ if (auth_tried_already)
+ {
+ /* If we have tried it already, then there is no point in
+ retrying it. */
+ logputs (LOG_NOTQUIET, _("Authorization failed.\n"));
+ free (authenticate_h);
+ return AUTHFAILED;
+ }
+ else if (!known_authentication_scheme_p (authenticate_h))
+ {
+ free (authenticate_h);
+ logputs (LOG_NOTQUIET, _("Unknown authentication scheme.\n"));
+ return AUTHFAILED;
+ }
+ else
+ {
+ auth_tried_already = 1;
+ goto again;
+ }
+ }
+ /* We do not need this anymore. */
+ if (authenticate_h)
+ {
+ free (authenticate_h);
+ authenticate_h = NULL;
+ }
+
+ /* 20x responses are counted among successful by default. */
+ if (H_20X (statcode))
+ *dt |= RETROKF;
+
+ if (type && !strncasecmp (type, TEXTHTML_S, strlen (TEXTHTML_S)))
+ *dt |= TEXTHTML;
+ else
+ /* We don't assume text/html by default. */
+ *dt &= ~TEXTHTML;
+
+ if (contrange == -1)
+ hs->restval = 0;
+ else if (contrange != hs->restval ||
+ (H_PARTIAL (statcode) && contrange == -1))
+ {
+ /* This means the whole request was somehow misunderstood by the
+ server. Bail out. */
+ FREE_MAYBE (type);
+ FREE_MAYBE (hs->newloc);
+ FREE_MAYBE (all_headers);
+ CLOSE (sock);
+ return RANGEERR;
+ }
+
+ if (hs->restval)
+ {
+ if (contlen != -1)
+ contlen += contrange;
+ else
+ contrange = -1; /* If content-length was not sent,
+ content-range will be ignored. */
+ }
+ hs->contlen = contlen;
+
+ /* Return if redirected. */
+ if (H_REDIRECTED (statcode) || statcode == HTTP_STATUS_MULTIPLE_CHOICES)
+ {
+ /* RFC2068 says that in case of the 300 (multiple choices)
+ response, the server can output a preferred URL through
+ `Location' header; otherwise, the request should be treated
+ like GET. So, if the location is set, it will be a
+ redirection; otherwise, just proceed normally. */
+ if (statcode == HTTP_STATUS_MULTIPLE_CHOICES && !hs->newloc)
+ *dt |= RETROKF;
+ else
+ {
+ logprintf (LOG_VERBOSE,
+ _("Location: %s%s\n"),
+ hs->newloc ? hs->newloc : _("unspecified"),
+ hs->newloc ? _(" [following]") : "");
+ CLOSE (sock);
+ FREE_MAYBE (type);
+ FREE_MAYBE (all_headers);
+ return NEWLOCATION;
+ }
+ }
+ if (opt.verbose)
+ {
+ if ((*dt & RETROKF) && !opt.server_response)
+ {
+ /* No need to print this output if the body won't be
+ downloaded at all, or if the original server response is
+ printed. */
+ logputs (LOG_VERBOSE, _("Length: "));
+ if (contlen != -1)
+ {
+ logputs (LOG_VERBOSE, legible (contlen));
+ if (contrange != -1)
+ logprintf (LOG_VERBOSE, _(" (%s to go)"),
+ legible (contlen - contrange));
+ }
+ else
+ logputs (LOG_VERBOSE,
+ opt.ignore_length ? _("ignored") : _("unspecified"));
+ if (type)
+ logprintf (LOG_VERBOSE, " [%s]\n", type);
+ else
+ logputs (LOG_VERBOSE, "\n");
+ }
+ }
+ FREE_MAYBE (type);
+ type = NULL; /* We don't need it any more. */
+
+ /* Return if we have no intention of further downloading. */
+ if (!(*dt & RETROKF) || (*dt & HEAD_ONLY))
+ {
+ /* In case someone cares to look... */
+ hs->len = 0L;
+ hs->res = 0;
+ FREE_MAYBE (type);
+ FREE_MAYBE (all_headers);
+ CLOSE (sock);
+ return RETRFINISHED;
+ }
+
+ /* Open the local file. */
+ if (!opt.dfp)
+ {
+ mkalldirs (u->local);
+ if (opt.backups)
+ rotate_backups (u->local);
+ fp = fopen (u->local, hs->restval ? "ab" : "wb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, "%s: %s\n", u->local, strerror (errno));
+ CLOSE (sock);
+ FREE_MAYBE (all_headers);
+ return FOPENERR;
+ }
+ }
+ else /* opt.dfp */
+ fp = opt.dfp;
+
+ /* #### This confuses the code that checks for file size. There
+ should be some overhead information. */
+ if (opt.save_headers)
+ fwrite (all_headers, 1, all_length, fp);
+ reset_timer ();
+ /* Get the contents of the document. */
+ hs->res = get_contents (sock, fp, &hs->len, hs->restval,
+ (contlen != -1 ? contlen : 0),
+ &rbuf);
+ hs->dltime = elapsed_time ();
+ if (!opt.dfp)
+ fclose (fp);
+ else
+ fflush (fp);
+ FREE_MAYBE (all_headers);
+ CLOSE (sock);
+ if (hs->res == -2)
+ return FWRITEERR;
+ return RETRFINISHED;
+}
+
+/* The genuine HTTP loop! This is the part where the retrieval is
+ retried, and retried, and retried, and... */
+uerr_t
+http_loop (struct urlinfo *u, char **newloc, int *dt)
+{
+ static int first_retrieval = 1;
+
+ int count;
+ int use_ts, got_head = 0; /* time-stamping info */
+ char *tms, *suf, *locf, *tmrate;
+ uerr_t err;
+ time_t tml = -1, tmr = -1; /* local and remote time-stamps */
+ long local_size = 0; /* the size of the local file */
+ struct http_stat hstat; /* HTTP status */
+ struct stat st;
+
+ *newloc = NULL;
+
+ /* Warn on (likely bogus) wildcard usage in HTTP. Don't use
+ has_wildcards_p because it would also warn on `?', and we know
+ that shows up in CGI paths a *lot*. */
+ if (strchr (u->url, '*'))
+ logputs (LOG_VERBOSE, _("Warning: wildcards not supported in HTTP.\n"));
+
+ /* Determine the local filename. */
+ if (!u->local)
+ u->local = url_filename (u->proxy ? u->proxy : u);
+
+ if (!opt.output_document)
+ locf = u->local;
+ else
+ locf = opt.output_document;
+
+ if (opt.noclobber && file_exists_p (u->local))
+ {
+ /* If opt.noclobber is turned on and file already exists, do not
+ retrieve the file */
+ logprintf (LOG_VERBOSE, _("\
+File `%s' already there, will not retrieve.\n"), u->local);
+ /* If the file is there, we suppose it's retrieved OK. */
+ *dt |= RETROKF;
+
+ /* #### Bogusness alert. */
+ /* If its suffix is "html" or (yuck!) "htm", we suppose it's
+ text/html, a harmless lie. */
+ if (((suf = suffix (u->local)) != NULL)
+ && (!strcmp (suf, "html") || !strcmp (suf, "htm")))
+ *dt |= TEXTHTML;
+ free (suf);
+ /* Another harmless lie: */
+ return RETROK;
+ }
+
+ use_ts = 0;
+ if (opt.timestamping)
+ {
+ if (stat (u->local, &st) == 0)
+ {
+ use_ts = 1;
+ tml = st.st_mtime;
+ local_size = st.st_size;
+ got_head = 0;
+ }
+ }
+ /* Reset the counter. */
+ count = 0;
+ *dt = 0 | ACCEPTRANGES;
+ /* THE loop */
+ do
+ {
+ /* Increment the pass counter. */
+ ++count;
+ /* Wait before the retrieval (unless this is the very first
+ retrieval). */
+ if (!first_retrieval && opt.wait)
+ sleep (opt.wait);
+ if (first_retrieval)
+ first_retrieval = 0;
+ /* Get the current time string. */
+ tms = time_str (NULL);
+ /* Print fetch message, if opt.verbose. */
+ if (opt.verbose)
+ {
+ char *hurl = str_url (u->proxy ? u->proxy : u, 1);
+ char tmp[15];
+ strcpy (tmp, " ");
+ if (count > 1)
+ sprintf (tmp, _("(try:%2d)"), count);
+ logprintf (LOG_VERBOSE, "--%s-- %s\n %s => `%s'\n",
+ tms, hurl, tmp, locf);
+#ifdef WINDOWS
+ ws_changetitle (hurl, 1);
+#endif
+ free (hurl);
+ }
+
+ /* Default document type is empty. However, if spider mode is
+ on or time-stamping is employed, a HEAD_ONLY command is
+ encoded within *dt. */
+ if (opt.spider || (use_ts && !got_head))
+ *dt |= HEAD_ONLY;
+ else
+ *dt &= ~HEAD_ONLY;
+ /* Assume no restarting. */
+ hstat.restval = 0L;
+ /* Decide whether or not to restart. */
+ if (((count > 1 && (*dt & ACCEPTRANGES)) || opt.always_rest)
+ && file_exists_p (u->local))
+ if (stat (u->local, &st) == 0)
+ hstat.restval = st.st_size;
+ /* Decide whether to send the no-cache directive. */
+ if (u->proxy && (count > 1 || (opt.proxy_cache == 0)))
+ *dt |= SEND_NOCACHE;
+ else
+ *dt &= ~SEND_NOCACHE;
+
+ /* Try fetching the document, or at least its head. :-) */
+ err = gethttp (u, &hstat, dt);
+ /* Time? */
+ tms = time_str (NULL);
+ /* Get the new location (with or without the redirection). */
+ if (hstat.newloc)
+ *newloc = xstrdup (hstat.newloc);
+ switch (err)
+ {
+ case HERR: case HEOF: case CONSOCKERR: case CONCLOSED:
+ case CONERROR: case READERR: case WRITEFAILED:
+ case RANGEERR:
+ /* Non-fatal errors continue executing the loop, which will
+ bring them to the `while' statement at the end, to judge
+ whether the number of tries was exceeded. */
+ FREEHSTAT (hstat);
+ printwhat (count, opt.ntry);
+ continue;
+ break;
+ case HOSTERR: case CONREFUSED: case PROXERR: case AUTHFAILED:
+ /* Fatal errors just return from the function. */
+ FREEHSTAT (hstat);
+ return err;
+ break;
+ case FWRITEERR: case FOPENERR:
+ /* Another fatal error. */
+ logputs (LOG_VERBOSE, "\n");
+ logprintf (LOG_NOTQUIET, _("Cannot write to `%s' (%s).\n"),
+ u->local, strerror (errno));
+ FREEHSTAT (hstat);
+ return err;
+ break;
+ case NEWLOCATION:
+ /* Return the new location to the caller. */
+ if (!hstat.newloc)
+ {
+ logprintf (LOG_NOTQUIET,
+ _("ERROR: Redirection (%d) without location.\n"),
+ hstat.statcode);
+ return WRONGCODE;
+ }
+ FREEHSTAT (hstat);
+ return NEWLOCATION;
+ break;
+ case RETRFINISHED:
+ /* Deal with you later. */
+ break;
+ default:
+ /* All possibilities should have been exhausted. */
+ abort ();
+ }
+ if (!(*dt & RETROKF))
+ {
+ if (!opt.verbose)
+ {
+ /* #### Ugly ugly ugly! */
+ char *hurl = str_url (u->proxy ? u->proxy : u, 1);
+ logprintf (LOG_NONVERBOSE, "%s:\n", hurl);
+ free (hurl);
+ }
+ logprintf (LOG_NOTQUIET, _("%s ERROR %d: %s.\n"),
+ tms, hstat.statcode, hstat.error);
+ logputs (LOG_VERBOSE, "\n");
+ FREEHSTAT (hstat);
+ return WRONGCODE;
+ }
+
+ /* Did we get the time-stamp? */
+ if (!got_head)
+ {
+ if (opt.timestamping && !hstat.remote_time)
+ {
+ logputs (LOG_NOTQUIET, _("\
+Last-modified header missing -- time-stamps turned off.\n"));
+ }
+ else if (hstat.remote_time)
+ {
+ /* Convert the date-string into struct tm. */
+ tmr = http_atotm (hstat.remote_time);
+ if (tmr == (time_t) (-1))
+ logputs (LOG_VERBOSE, _("\
+Last-modified header invalid -- time-stamp ignored.\n"));
+ }
+ }
+
+ /* The time-stamping section. */
+ if (use_ts)
+ {
+ got_head = 1;
+ *dt &= ~HEAD_ONLY;
+ use_ts = 0; /* no more time-stamping */
+ count = 0; /* the retrieve count for HEAD is
+ reset */
+ if (hstat.remote_time && tmr != (time_t) (-1))
+ {
+ /* Now time-stamping can be used validly. Time-stamping
+ means that if the sizes of the local and remote file
+ match, and local file is newer than the remote file,
+ it will not be retrieved. Otherwise, the normal
+ download procedure is resumed. */
+ if (tml >= tmr &&
+ (hstat.contlen == -1 || local_size == hstat.contlen))
+ {
+ logprintf (LOG_VERBOSE, _("\
+Local file `%s' is more recent, not retrieving.\n\n"), u->local);
+ FREEHSTAT (hstat);
+ return RETROK;
+ }
+ else if (tml >= tmr)
+ logprintf (LOG_VERBOSE, _("\
+The sizes do not match (local %ld), retrieving.\n"), local_size);
+ else
+ logputs (LOG_VERBOSE,
+ _("Remote file is newer, retrieving.\n"));
+ }
+ FREEHSTAT (hstat);
+ continue;
+ }
+ if (!opt.dfp
+ && (tmr != (time_t) (-1))
+ && !opt.spider
+ && ((hstat.len == hstat.contlen) ||
+ ((hstat.res == 0) &&
+ ((hstat.contlen == -1) ||
+ (hstat.len >= hstat.contlen && !opt.kill_longer)))))
+ {
+ touch (u->local, tmr);
+ }
+ /* End of time-stamping section. */
+
+ if (opt.spider)
+ {
+ logprintf (LOG_NOTQUIET, "%d %s\n\n", hstat.statcode, hstat.error);
+ return RETROK;
+ }
+
+ /* It is now safe to free the remainder of hstat, since the
+ strings within it will no longer be used. */
+ FREEHSTAT (hstat);
+
+ tmrate = rate (hstat.len - hstat.restval, hstat.dltime);
+
+ if (hstat.len == hstat.contlen)
+ {
+ if (*dt & RETROKF)
+ {
+ logprintf (LOG_VERBOSE,
+ _("%s (%s) - `%s' saved [%ld/%ld]\n\n"),
+ tms, tmrate, locf, hstat.len, hstat.contlen);
+ logprintf (LOG_NONVERBOSE,
+ "%s URL:%s [%ld/%ld] -> \"%s\" [%d]\n",
+ tms, u->url, hstat.len, hstat.contlen, locf, count);
+ }
+ ++opt.numurls;
+ opt.downloaded += hstat.len;
+ return RETROK;
+ }
+ else if (hstat.res == 0) /* No read error */
+ {
+ if (hstat.contlen == -1) /* We don't know how much we were
+ supposed to get, so... */
+ {
+ if (*dt & RETROKF)
+ {
+ logprintf (LOG_VERBOSE,
+ _("%s (%s) - `%s' saved [%ld]\n\n"),
+ tms, tmrate, locf, hstat.len);
+ logprintf (LOG_NONVERBOSE,
+ "%s URL:%s [%ld] -> \"%s\" [%d]\n",
+ tms, u->url, hstat.len, locf, count);
+ }
+ ++opt.numurls;
+ opt.downloaded += hstat.len;
+ return RETROK;
+ }
+ else if (hstat.len < hstat.contlen) /* meaning we lost the
+ connection too soon */
+ {
+ logprintf (LOG_VERBOSE,
+ _("%s (%s) - Connection closed at byte %ld. "),
+ tms, tmrate, hstat.len);
+ printwhat (count, opt.ntry);
+ continue;
+ }
+ else if (!opt.kill_longer) /* meaning we got more than expected */
+ {
+ logprintf (LOG_VERBOSE,
+ _("%s (%s) - `%s' saved [%ld/%ld])\n\n"),
+ tms, tmrate, locf, hstat.len, hstat.contlen);
+ logprintf (LOG_NONVERBOSE,
+ "%s URL:%s [%ld/%ld] -> \"%s\" [%d]\n",
+ tms, u->url, hstat.len, hstat.contlen, locf, count);
+ ++opt.numurls;
+ opt.downloaded += hstat.len;
+ return RETROK;
+ }
+ else /* the same, but not accepted */
+ {
+ logprintf (LOG_VERBOSE,
+ _("%s (%s) - Connection closed at byte %ld/%ld. "),
+ tms, tmrate, hstat.len, hstat.contlen);
+ printwhat (count, opt.ntry);
+ continue;
+ }
+ }
+ else /* now hstat.res can only be -1 */
+ {
+ if (hstat.contlen == -1)
+ {
+ logprintf (LOG_VERBOSE,
+ _("%s (%s) - Read error at byte %ld (%s)."),
+ tms, tmrate, hstat.len, strerror (errno));
+ printwhat (count, opt.ntry);
+ continue;
+ }
+ else /* hstat.res == -1 and contlen is given */
+ {
+ logprintf (LOG_VERBOSE,
+ _("%s (%s) - Read error at byte %ld/%ld (%s). "),
+ tms, tmrate, hstat.len, hstat.contlen,
+ strerror (errno));
+ printwhat (count, opt.ntry);
+ continue;
+ }
+ }
+ /* not reached */
+ break;
+ }
+ while (!opt.ntry || (count < opt.ntry));
+ return TRYLIMEXC;
+}
+\f
+/* Converts struct tm to time_t, assuming the data in tm is UTC rather
+ than local timezone (mktime assumes the latter).
+
+ Contributed by Roger Beeman <beeman@cisco.com>, with the help of
+ Mark Baushke <mdb@cisco.com> and the rest of the Gurus at CISCO. */
+static time_t
+mktime_from_utc (struct tm *t)
+{
+ time_t tl, tb;
+
+ tl = mktime (t);
+ if (tl == -1)
+ return -1;
+ tb = mktime (gmtime (&tl));
+ return (tl <= tb ? (tl + (tl - tb)) : (tl - (tb - tl)));
+}
+
+/* Check whether the result of strptime() indicates success.
+ strptime() returns the pointer to how far it got to in the string.
+ The processing has been successful if the string is at `GMT' or
+ `+X', or at the end of the string.
+
+ In extended regexp parlance, the function returns 1 if P matches
+ "^ *(GMT|[+-][0-9]|$)", 0 otherwise. P being NULL (a valid result of
+ strptime()) is considered a failure and 0 is returned. */
+static int
+check_end (char *p)
+{
+ if (!p)
+ return 0;
+ while (ISSPACE (*p))
+ ++p;
+ if (!*p
+ || (p[0] == 'G' && p[1] == 'M' && p[2] == 'T')
+ || ((p[0] == '+' || p[0] == '-') && ISDIGIT (p[1])))
+ return 1;
+ else
+ return 0;
+}
+
+/* Convert TIME_STRING time to time_t. TIME_STRING can be in any of
+ the three formats RFC2068 allows the HTTP servers to emit --
+ RFC1123-date, RFC850-date or asctime-date. Timezones are ignored,
+ and should be GMT.
+
+ We use strptime() to recognize various dates, which makes it a
+ little bit slacker than the RFC1123/RFC850/asctime (e.g. it always
+ allows shortened dates and months, one-digit days, etc.). It also
+ allows more than one space anywhere where the specs require one SP.
+ The routine should probably be even more forgiving (as recommended
+ by RFC2068), but I do not have the time to write one.
+
+ Return the computed time_t representation, or -1 if all the
+ schemes fail.
+
+ Needless to say, what we *really* need here is something like
+ Marcus Hennecke's atotm(), which is forgiving, fast, to-the-point,
+ and does not use strptime(). atotm() is to be found in the sources
+ of `phttpd', a little-known HTTP server written by Peter Erikson. */
+static time_t
+http_atotm (char *time_string)
+{
+ struct tm t;
+
+ /* Roger Beeman says: "This function dynamically allocates struct tm
+ t, but does no initialization. The only field that actually
+ needs initialization is tm_isdst, since the others will be set by
+ strptime. Since strptime does not set tm_isdst, it will return
+ the data structure with whatever data was in tm_isdst to begin
+ with. For those of us in timezones where DST can occur, there
+ can be a one hour shift depending on the previous contents of the
+ data area where the data structure is allocated." */
+ t.tm_isdst = -1;
+
+ /* Note that under foreign locales Solaris strptime() fails to
+ recognize English dates, which renders this function useless. I
+ assume that other non-GNU strptime's are plagued by the same
+ disease. We solve this by setting only LC_MESSAGES in
+ i18n_initialize(), instead of LC_ALL.
+
+ Another solution could be to temporarily set locale to C, invoke
+ strptime(), and restore it back. This is slow and dirty,
+ however, and locale support other than LC_MESSAGES can mess up
+ other things, so I chose to stick with just setting LC_MESSAGES.
+
+ Also note that none of this is necessary under GNU strptime(),
+ because it recognizes both international and local dates. */
+
+ /* NOTE: We don't use `%n' for white space, as OSF's strptime uses
+ it to eat all white space up to (and including) a newline, and
+ the function fails if there is no newline (!).
+
+ Let's hope all strptime() implementations use ` ' to skip *all*
+ whitespace instead of just one (it works that way on all the
+ systems I've tested it on). */
+
+ /* RFC1123: Thu, 29 Jan 1998 22:12:57 */
+ if (check_end (strptime (time_string, "%a, %d %b %Y %T", &t)))
+ return mktime_from_utc (&t);
+ /* RFC850: Thu, 29-Jan-98 22:12:57 */
+ if (check_end (strptime (time_string, "%a, %d-%b-%y %T", &t)))
+ return mktime_from_utc (&t);
+ /* asctime: Thu Jan 29 22:12:57 1998 */
+ if (check_end (strptime (time_string, "%a %b %d %T %Y", &t)))
+ return mktime_from_utc (&t);
+ /* Failure. */
+ return -1;
+}
+\f
+/* Authorization support: We support two authorization schemes:
+
+ * `Basic' scheme, consisting of base64-ing USER:PASSWORD string;
+
+ * `Digest' scheme, added by Junio Hamano <junio@twinsun.com>,
+ consisting of answering to the server's challenge with the proper
+ MD5 digests. */
+
+/* How many bytes it will take to store LEN bytes in base64. */
+#define BASE64_LENGTH(len) (4 * (((len) + 2) / 3))
+
+/* Encode the string S of length LENGTH to base64 format and place it
+ to STORE. STORE will be 0-terminated, and must point to a writable
+ buffer of at least 1+BASE64_LENGTH(length) bytes. */
+static void
+base64_encode (const char *s, char *store, int length)
+{
+ /* Conversion table. */
+ static char tbl[64] = {
+ 'A','B','C','D','E','F','G','H',
+ 'I','J','K','L','M','N','O','P',
+ 'Q','R','S','T','U','V','W','X',
+ 'Y','Z','a','b','c','d','e','f',
+ 'g','h','i','j','k','l','m','n',
+ 'o','p','q','r','s','t','u','v',
+ 'w','x','y','z','0','1','2','3',
+ '4','5','6','7','8','9','+','/'
+ };
+ int i;
+ unsigned char *p = (unsigned char *)store;
+
+ /* Transform the 3x8 bits to 4x6 bits, as required by base64. */
+ for (i = 0; i < length; i += 3)
+ {
+ *p++ = tbl[s[0] >> 2];
+ *p++ = tbl[((s[0] & 3) << 4) + (s[1] >> 4)];
+ *p++ = tbl[((s[1] & 0xf) << 2) + (s[2] >> 6)];
+ *p++ = tbl[s[2] & 0x3f];
+ s += 3;
+ }
+ /* Pad the result if necessary... */
+ if (i == length + 1)
+ *(p - 1) = '=';
+ else if (i == length + 2)
+ *(p - 1) = *(p - 2) = '=';
+ /* ...and zero-terminate it. */
+ *p = '\0';
+}
+
+/* Create the authentication header contents for the `Basic' scheme.
+ This is done by encoding the string `USER:PASS' in base64 and
+ prepending `HEADER: Basic ' to it. */
+static char *
+basic_authentication_encode (const char *user, const char *passwd,
+ const char *header)
+{
+ char *t1, *t2, *res;
+ int len1 = strlen (user) + 1 + strlen (passwd);
+ int len2 = BASE64_LENGTH (len1);
+
+ t1 = (char *)alloca (len1 + 1);
+ sprintf (t1, "%s:%s", user, passwd);
+ t2 = (char *)alloca (1 + len2);
+ base64_encode (t1, t2, len1);
+ res = (char *)malloc (len2 + 11 + strlen (header));
+ sprintf (res, "%s: Basic %s\r\n", header, t2);
+
+ return res;
+}
+
+#ifdef USE_DIGEST
+/* Parse HTTP `WWW-Authenticate:' header. AU points to the beginning
+ of a field in such a header. If the field is the one specified by
+ ATTR_NAME ("realm", "opaque", and "nonce" are used by the current
+ digest authorization code), extract its value in the (char*)
+ variable pointed by RET. Returns negative on a malformed header,
+ or number of bytes that have been parsed by this call. */
+static int
+extract_header_attr (const char *au, const char *attr_name, char **ret)
+{
+ const char *cp, *ep;
+
+ ep = cp = au;
+
+ if (strncmp (cp, attr_name, strlen (attr_name)) == 0)
+ {
+ cp += strlen (attr_name);
+ if (!*cp)
+ return -1;
+ cp += skip_lws (cp);
+ if (*cp != '=')
+ return -1;
+ if (!*++cp)
+ return -1;
+ cp += skip_lws (cp);
+ if (*cp != '\"')
+ return -1;
+ if (!*++cp)
+ return -1;
+ for (ep = cp; *ep && *ep != '\"'; ep++)
+ ;
+ if (!*ep)
+ return -1;
+ FREE_MAYBE (*ret);
+ *ret = strdupdelim (cp, ep);
+ return ep - au + 1;
+ }
+ else
+ return 0;
+}
+
+/* Response value needs to be in lowercase, so we cannot use HEXD2ASC
+ from url.h. See RFC 2069 2.1.2 for the syntax of response-digest. */
+#define HEXD2asc(x) (((x) < 10) ? ((x) + '0') : ((x) - 10 + 'a'))
+
+/* Dump the hexadecimal representation of HASH to BUF. HASH should be
+ an array of 16 bytes containing the hash keys, and BUF should be a
+ buffer of 33 writable characters (32 for hex digits plus one for
+ zero termination). */
+static void
+dump_hash (unsigned char *buf, const unsigned char *hash)
+{
+ int i;
+
+ for (i = 0; i < MD5_HASHLEN; i++, hash++)
+ {
+ *buf++ = HEXD2asc (*hash >> 4);
+ *buf++ = HEXD2asc (*hash & 0xf);
+ }
+ *buf = '\0';
+}
+
+/* Take the line apart to find the challenge, and compose a digest
+ authorization header. See RFC2069 section 2.1.2. */
+char *
+digest_authentication_encode (const char *au, const char *user,
+ const char *passwd, const char *method,
+ const char *path)
+{
+ static char *realm, *opaque, *nonce;
+ static struct {
+ const char *name;
+ char **variable;
+ } options[] = {
+ { "realm", &realm },
+ { "opaque", &opaque },
+ { "nonce", &nonce }
+ };
+ char *res;
+
+ realm = opaque = nonce = NULL;
+
+ au += 6; /* skip over `Digest' */
+ while (*au)
+ {
+ int i;
+
+ au += skip_lws (au);
+ for (i = 0; i < ARRAY_SIZE (options); i++)
+ {
+ int skip = extract_header_attr (au, options[i].name,
+ options[i].variable);
+ if (skip < 0)
+ {
+ FREE_MAYBE (realm);
+ FREE_MAYBE (opaque);
+ FREE_MAYBE (nonce);
+ return NULL;
+ }
+ else if (skip)
+ {
+ au += skip;
+ break;
+ }
+ }
+ if (i == ARRAY_SIZE (options))
+ {
+ while (*au && *au != '=')
+ au++;
+ if (*au && *++au)
+ {
+ au += skip_lws (au);
+ if (*au == '\"')
+ {
+ au++;
+ while (*au && *au != '\"')
+ au++;
+ if (*au)
+ au++;
+ }
+ }
+ }
+ while (*au && *au != ',')
+ au++;
+ if (*au)
+ au++;
+ }
+ if (!realm || !nonce || !user || !passwd || !path || !method)
+ {
+ FREE_MAYBE (realm);
+ FREE_MAYBE (opaque);
+ FREE_MAYBE (nonce);
+ return NULL;
+ }
+
+ /* Calculate the digest value. */
+ {
+ struct md5_ctx ctx;
+ unsigned char hash[MD5_HASHLEN];
+ unsigned char a1buf[MD5_HASHLEN * 2 + 1], a2buf[MD5_HASHLEN * 2 + 1];
+ unsigned char response_digest[MD5_HASHLEN * 2 + 1];
+
+ /* A1BUF = H(user ":" realm ":" password) */
+ md5_init_ctx (&ctx);
+ md5_process_bytes (user, strlen (user), &ctx);
+ md5_process_bytes (":", 1, &ctx);
+ md5_process_bytes (realm, strlen (realm), &ctx);
+ md5_process_bytes (":", 1, &ctx);
+ md5_process_bytes (passwd, strlen (passwd), &ctx);
+ md5_finish_ctx (&ctx, hash);
+ dump_hash (a1buf, hash);
+
+ /* A2BUF = H(method ":" path) */
+ md5_init_ctx (&ctx);
+ md5_process_bytes (method, strlen (method), &ctx);
+ md5_process_bytes (":", 1, &ctx);
+ md5_process_bytes (path, strlen (path), &ctx);
+ md5_finish_ctx (&ctx, hash);
+ dump_hash (a2buf, hash);
+
+ /* RESPONSE_DIGEST = H(A1BUF ":" nonce ":" A2BUF) */
+ md5_init_ctx (&ctx);
+ md5_process_bytes (a1buf, MD5_HASHLEN * 2, &ctx);
+ md5_process_bytes (":", 1, &ctx);
+ md5_process_bytes (nonce, strlen (nonce), &ctx);
+ md5_process_bytes (":", 1, &ctx);
+ md5_process_bytes (a2buf, MD5_HASHLEN * 2, &ctx);
+ md5_finish_ctx (&ctx, hash);
+ dump_hash (response_digest, hash);
+
+ res = (char*) xmalloc (strlen (user)
+ + strlen (user)
+ + strlen (realm)
+ + strlen (nonce)
+ + strlen (path)
+ + 2 * MD5_HASHLEN /*strlen (response_digest)*/
+ + (opaque ? strlen (opaque) : 0)
+ + 128);
+ sprintf (res, "Authorization: Digest \
+username=\"%s\", realm=\"%s\", nonce=\"%s\", uri=\"%s\", response=\"%s\"",
+ user, realm, nonce, path, response_digest);
+ if (opaque)
+ {
+ char *p = res + strlen (res);
+ strcat (p, ", opaque=\"");
+ strcat (p, opaque);
+ strcat (p, "\"");
+ }
+ strcat (res, "\r\n");
+ }
+ return res;
+}
+#endif /* USE_DIGEST */
+
+
+#define HACK_O_MATIC(line, string_constant) \
+ (!strncasecmp (line, string_constant, sizeof (string_constant) - 1) \
+ && (ISSPACE (line[sizeof (string_constant) - 1]) \
+ || !line[sizeof (string_constant) - 1]))
+
+static int
+known_authentication_scheme_p (const char *au)
+{
+ return HACK_O_MATIC (au, "Basic") || HACK_O_MATIC (au, "Digest");
+}
+
+#undef HACK_O_MATIC
+
+/* Create the HTTP authorization request header. When the
+ `WWW-Authenticate' response header is seen, according to the
+ authorization scheme specified in that header (`Basic' and `Digest'
+ are supported by the current implementation), produce an
+ appropriate HTTP authorization request header. */
+static char *
+create_authorization_line (const char *au, const char *user,
+ const char *passwd, const char *method,
+ const char *path)
+{
+ char *wwwauth = NULL;
+
+ if (!strncasecmp (au, "Basic", 5))
+ wwwauth = basic_authentication_encode (user, passwd, "Authorization");
+#ifdef USE_DIGEST
+ else if (!strncasecmp (au, "Digest", 6))
+ wwwauth = digest_authentication_encode (au, user, passwd, method, path);
+#endif /* USE_DIGEST */
+ return wwwauth;
+}
--- /dev/null
+/* Reading/parsing the initialization file.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <ctype.h>
+#include <sys/types.h>
+#include <stdlib.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <errno.h>
+
+#ifdef HAVE_PWD_H
+#include <pwd.h>
+#endif
+
+#include "wget.h"
+#include "utils.h"
+#include "init.h"
+#include "host.h"
+#include "recur.h"
+#include "netrc.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+
+#define CMD_DECLARE(func) static int func \
+ PARAMS ((const char *, const char *, void *))
+
+CMD_DECLARE (cmd_boolean);
+CMD_DECLARE (cmd_number);
+CMD_DECLARE (cmd_number_inf);
+CMD_DECLARE (cmd_string);
+CMD_DECLARE (cmd_vector);
+CMD_DECLARE (cmd_directory_vector);
+CMD_DECLARE (cmd_bytes);
+CMD_DECLARE (cmd_time);
+
+CMD_DECLARE (cmd_spec_dirstruct);
+CMD_DECLARE (cmd_spec_dotstyle);
+CMD_DECLARE (cmd_spec_header);
+CMD_DECLARE (cmd_spec_htmlify);
+CMD_DECLARE (cmd_spec_mirror);
+CMD_DECLARE (cmd_spec_outputdocument);
+CMD_DECLARE (cmd_spec_recursive);
+CMD_DECLARE (cmd_spec_useragent);
+
+/* List of recognized commands, each consisting of name, closure and
+ function. When adding a new command, simply add it to the list,
+ but be sure to keep the list sorted alphabetically, as comind()
+ depends on it. */
+static struct {
+ char *name;
+ void *closure;
+ int (*action) PARAMS ((const char *, const char *, void *));
+} commands[] = {
+ { "accept", &opt.accepts, cmd_vector },
+ { "addhostdir", &opt.add_hostdir, cmd_boolean },
+ { "alwaysrest", &opt.always_rest, cmd_boolean }, /* deprecated */
+ { "background", &opt.background, cmd_boolean },
+ { "backups", &opt.backups, cmd_number },
+ { "base", &opt.base_href, cmd_string },
+ { "cache", &opt.proxy_cache, cmd_boolean },
+ { "continue", &opt.always_rest, cmd_boolean },
+ { "convertlinks", &opt.convert_links, cmd_boolean },
+ { "cutdirs", &opt.cut_dirs, cmd_number },
+#ifdef DEBUG
+ { "debug", &opt.debug, cmd_boolean },
+#endif
+ { "deleteafter", &opt.delete_after, cmd_boolean },
+ { "dirprefix", &opt.dir_prefix, cmd_string },
+ { "dirstruct", NULL, cmd_spec_dirstruct },
+ { "domains", &opt.domains, cmd_vector },
+ { "dotbytes", &opt.dot_bytes, cmd_bytes },
+ { "dotsinline", &opt.dots_in_line, cmd_number },
+ { "dotspacing", &opt.dot_spacing, cmd_number },
+ { "dotstyle", NULL, cmd_spec_dotstyle },
+ { "excludedirectories", &opt.excludes, cmd_directory_vector },
+ { "excludedomains", &opt.exclude_domains, cmd_vector },
+ { "followftp", &opt.follow_ftp, cmd_boolean },
+ { "forcehtml", &opt.force_html, cmd_boolean },
+ { "ftpproxy", &opt.ftp_proxy, cmd_string },
+ { "glob", &opt.ftp_glob, cmd_boolean },
+ { "header", NULL, cmd_spec_header },
+ { "htmlify", NULL, cmd_spec_htmlify },
+ { "httppasswd", &opt.http_passwd, cmd_string },
+ { "httpproxy", &opt.http_proxy, cmd_string },
+ { "httpuser", &opt.http_user, cmd_string },
+ { "ignorelength", &opt.ignore_length, cmd_boolean },
+ { "includedirectories", &opt.includes, cmd_directory_vector },
+ { "input", &opt.input_filename, cmd_string },
+ { "killlonger", &opt.kill_longer, cmd_boolean },
+ { "logfile", &opt.lfilename, cmd_string },
+ { "login", &opt.ftp_acc, cmd_string },
+ { "mirror", NULL, cmd_spec_mirror },
+ { "netrc", &opt.netrc, cmd_boolean },
+ { "noclobber", &opt.noclobber, cmd_boolean },
+ { "noparent", &opt.no_parent, cmd_boolean },
+ { "noproxy", &opt.no_proxy, cmd_vector },
+ { "numtries", &opt.ntry, cmd_number_inf }, /* deprecated */
+ { "outputdocument", NULL, cmd_spec_outputdocument },
+ { "passiveftp", &opt.ftp_pasv, cmd_boolean },
+ { "passwd", &opt.ftp_pass, cmd_string },
+ { "proxypasswd", &opt.proxy_passwd, cmd_string },
+ { "proxyuser", &opt.proxy_user, cmd_string },
+ { "quiet", &opt.quiet, cmd_boolean },
+ { "quota", &opt.quota, cmd_bytes },
+ { "reclevel", &opt.reclevel, cmd_number_inf },
+ { "recursive", NULL, cmd_spec_recursive },
+ { "reject", &opt.rejects, cmd_vector },
+ { "relativeonly", &opt.relative_only, cmd_boolean },
+ { "removelisting", &opt.remove_listing, cmd_boolean },
+ { "retrsymlinks", &opt.retr_symlinks, cmd_boolean },
+ { "robots", &opt.use_robots, cmd_boolean },
+ { "saveheaders", &opt.save_headers, cmd_boolean },
+ { "serverresponse", &opt.server_response, cmd_boolean },
+ { "simplehostcheck", &opt.simple_check, cmd_boolean },
+ { "spanhosts", &opt.spanhost, cmd_boolean },
+ { "spider", &opt.spider, cmd_boolean },
+ { "timeout", &opt.timeout, cmd_time },
+ { "timestamping", &opt.timestamping, cmd_boolean },
+ { "tries", &opt.ntry, cmd_number_inf },
+ { "useproxy", &opt.use_proxy, cmd_boolean },
+ { "useragent", NULL, cmd_spec_useragent },
+ { "verbose", &opt.verbose, cmd_boolean },
+ { "wait", &opt.wait, cmd_time }
+};
+
+/* Return index of COM if it is a valid command, or -1 otherwise. COM
+ is looked up in `commands' using binary search algorithm. */
+static int
+comind (const char *com)
+{
+ int min = 0, max = ARRAY_SIZE (commands) - 1;
+
+ do
+ {
+ int i = (min + max) / 2;
+ int cmp = strcasecmp (com, commands[i].name);
+ if (cmp == 0)
+ return i;
+ else if (cmp < 0)
+ max = i - 1;
+ else
+ min = i + 1;
+ }
+ while (min <= max);
+ return -1;
+}
+\f
+/* Reset the variables to default values. */
+static void
+defaults (void)
+{
+ char *tmp;
+
+ /* Most of the default values are 0. Just reset everything, and
+ fill in the non-zero values. Note that initializing pointers to
+ NULL this way is technically illegal, but porting Wget to a
+ machine where NULL is not an all-zero bit pattern will be the least
+ of the implementors' worries. */
+ memset (&opt, 0, sizeof (opt));
+
+ opt.verbose = -1;
+ opt.dir_prefix = xstrdup (".");
+ opt.ntry = 20;
+ opt.reclevel = 5;
+ opt.add_hostdir = 1;
+ opt.ftp_acc = xstrdup ("anonymous");
+ /*opt.ftp_pass = xstrdup (ftp_getaddress ());*/
+ opt.netrc = 1;
+ opt.ftp_glob = 1;
+ opt.htmlify = 1;
+ opt.use_proxy = 1;
+ tmp = getenv ("no_proxy");
+ if (tmp)
+ opt.no_proxy = sepstring (tmp);
+ opt.proxy_cache = 1;
+
+#ifdef HAVE_SELECT
+ opt.timeout = 900;
+#endif
+ opt.use_robots = 1;
+
+ opt.remove_listing = 1;
+
+ opt.dot_bytes = 1024;
+ opt.dot_spacing = 10;
+ opt.dots_in_line = 50;
+}
+\f
+/* Return the user's home directory (strdup-ed), or NULL if none is
+ found. */
+char *
+home_dir (void)
+{
+ char *home = getenv ("HOME");
+
+ if (!home)
+ {
+#ifndef WINDOWS
+ /* If HOME is not defined, try getting it from the password
+ file. */
+ struct passwd *pwd = getpwuid (getuid ());
+ if (!pwd || !pwd->pw_dir)
+ return NULL;
+ home = pwd->pw_dir;
+#else /* WINDOWS */
+ home = "C:\\";
+ /* #### Maybe I should grab home_dir from registry, but the best
+ that I could get from there is user's Start menu. It sucks! */
+#endif /* WINDOWS */
+ }
+
+ return home ? xstrdup (home) : NULL;
+}
+
+/* Return the path to the user's .wgetrc. This is either the value of
+ `WGETRC' environment variable, or `$HOME/.wgetrc'.
+
+ If the `WGETRC' variable exists but the file does not exist, the
+ function will exit(). */
+static char *
+wgetrc_file_name (void)
+{
+ char *env, *home;
+ char *file = NULL;
+
+ /* Try the environment. */
+ env = getenv ("WGETRC");
+ if (env && *env)
+ {
+ if (!file_exists_p (env))
+ {
+ fprintf (stderr, "%s: %s: %s.\n", exec_name, env, strerror (errno));
+ exit (1);
+ }
+ return xstrdup (env);
+ }
+
+#ifndef WINDOWS
+ /* If that failed, try $HOME/.wgetrc. */
+ home = home_dir ();
+ if (home)
+ {
+ file = (char *)xmalloc (strlen (home) + 1 + strlen (".wgetrc") + 1);
+ sprintf (file, "%s/.wgetrc", home);
+ }
+#else /* WINDOWS */
+ /* Under Windows, "home" is (for the purposes of this function) the
+ directory where `wget.exe' resides, and `wget.ini' will be used
+ as file name. SYSTEM_WGETRC should not be defined under WINDOWS.
+
+ It is not as trivial as I assumed, because on 95 argv[0] is full
+ path, but on NT you get what you typed in command line. --dbudor */
+ home = ws_mypath ();
+ if (home)
+ {
+ file = (char *)xmalloc (strlen (home) + strlen ("wget.ini") + 1);
+ sprintf (file, "%swget.ini", home);
+ }
+#endif /* WINDOWS */
+
+ FREE_MAYBE (home);
+ if (!file)
+ return NULL;
+ if (!file_exists_p (file))
+ {
+ free (file);
+ return NULL;
+ }
+ return file;
+}
+
+/* Initialize variables from a wgetrc file */
+static void
+run_wgetrc (const char *file)
+{
+ FILE *fp;
+ char *line;
+ int ln;
+
+ fp = fopen (file, "rb");
+ if (!fp)
+ {
+ fprintf (stderr, _("%s: Cannot read %s (%s).\n"), exec_name,
+ file, strerror (errno));
+ return;
+ }
+ /* Reset line number. */
+ ln = 1;
+ while ((line = read_whole_line (fp)))
+ {
+ char *com, *val;
+ int status;
+ int length = strlen (line);
+
+ if (length && line[length - 1] == '\r')
+ line[length - 1] = '\0';
+ /* Parse the line. */
+ status = parse_line (line, &com, &val);
+ free (line);
+ /* If everything is OK, set the value. */
+ if (status == 1)
+ {
+ if (!setval (com, val))
+ fprintf (stderr, _("%s: Error in %s at line %d.\n"), exec_name,
+ file, ln);
+ free (com);
+ free (val);
+ }
+ else if (status == 0)
+ fprintf (stderr, _("%s: Error in %s at line %d.\n"), exec_name,
+ file, ln);
+ ++ln;
+ }
+ fclose (fp);
+}
+
+/* Initialize the defaults and run the system wgetrc and user's own
+ wgetrc. */
+void
+initialize (void)
+{
+ char *file;
+
+ /* Load the hard-coded defaults. */
+ defaults ();
+
+ /* If SYSTEM_WGETRC is defined, use it. */
+#ifdef SYSTEM_WGETRC
+ if (file_exists_p (SYSTEM_WGETRC))
+ run_wgetrc (SYSTEM_WGETRC);
+#endif
+ /* Override it with your own, if one exists. */
+ file = wgetrc_file_name ();
+ if (!file)
+ return;
+ /* #### We should somehow canonicalize `file' and SYSTEM_WGETRC,
+ really. */
+#ifdef SYSTEM_WGETRC
+ if (!strcmp (file, SYSTEM_WGETRC))
+ {
+ fprintf (stderr, _("\
+%s: Warning: Both system and user wgetrc point to `%s'.\n"),
+ exec_name, file);
+ }
+ else
+#endif
+ run_wgetrc (file);
+ free (file);
+ return;
+}
+\f
+/* Parse the line pointed to by LINE, with the syntax:
+ <sp>* command <sp>* = <sp>* value <newline>
+ Uses malloc to allocate space for command and value.
+ If the line is invalid, data is freed and 0 is returned.
+
+ Return values:
+ 1 - success
+ 0 - failure
+ -1 - empty */
+int
+parse_line (const char *line, char **com, char **val)
+{
+ const char *p = line;
+ const char *orig_comptr, *end;
+ char *new_comptr;
+
+ /* Skip spaces. */
+ while (*p == ' ' || *p == '\t')
+ ++p;
+
+ /* Don't process empty lines. */
+ if (!*p || *p == '\n' || *p == '#')
+ return -1;
+
+ for (orig_comptr = p; ISALPHA (*p) || *p == '_' || *p == '-'; p++)
+ ;
+ /* The next char should be space or '='. */
+ if (!ISSPACE (*p) && (*p != '='))
+ return 0;
+ *com = (char *)xmalloc (p - orig_comptr + 1);
+ for (new_comptr = *com; orig_comptr < p; orig_comptr++)
+ {
+ if (*orig_comptr == '_' || *orig_comptr == '-')
+ continue;
+ *new_comptr++ = *orig_comptr;
+ }
+ *new_comptr = '\0';
+ /* If the command is invalid, exit now. */
+ if (comind (*com) == -1)
+ {
+ free (*com);
+ return 0;
+ }
+
+ /* Skip spaces before '='. */
+ for (; ISSPACE (*p); p++);
+ /* If '=' not found, bail out. */
+ if (*p != '=')
+ {
+ free (*com);
+ return 0;
+ }
+ /* Skip spaces after '='. */
+ for (++p; ISSPACE (*p); p++);
+ /* Get the ending position. */
+ for (end = p; *end && *end != '\n'; end++);
+ /* Allocate *val, and copy from line. */
+ *val = strdupdelim (p, end);
+ return 1;
+}
+
+/* Set COM to VAL. This is the meat behind processing `.wgetrc'.
+ Nothing here is fatal -- on a bad value a warning is printed and
+ the variable keeps its previous setting. All error messages go to
+ stderr, *not* to opt.lfile, since opt.lfile wasn't even generated yet. */
+int
+setval (const char *com, const char *val)
+{
+ int ind;
+
+ if (!com || !val)
+ return 0;
+ ind = comind (com);
+ if (ind == -1)
+ {
+ /* #### Should I just abort()? */
+#ifdef DEBUG
+ fprintf (stderr, _("%s: BUG: unknown command `%s', value `%s'.\n"),
+ exec_name, com, val);
+#endif
+ return 0;
+ }
+ return ((*commands[ind].action) (com, val, commands[ind].closure));
+}
+\f
+/* Generic helper functions, for use with `commands'. */
+
+static int myatoi PARAMS ((const char *s));
+
+/* Store the boolean value from VAL to CLOSURE. COM is ignored,
+ except for error messages. */
+static int
+cmd_boolean (const char *com, const char *val, void *closure)
+{
+ int bool_value;
+
+ if (!strcasecmp (val, "on")
+ || (*val == '1' && !*(val + 1)))
+ bool_value = 1;
+ else if (!strcasecmp (val, "off")
+ || (*val == '0' && !*(val + 1)))
+ bool_value = 0;
+ else
+ {
+ fprintf (stderr, _("%s: %s: Please specify on or off.\n"),
+ exec_name, com);
+ return 0;
+ }
+
+ *(int *)closure = bool_value;
+ return 1;
+}
+
+/* Set the non-negative integer value from VAL to CLOSURE. With
+ incorrect specification, the number remains unchanged. */
+static int
+cmd_number (const char *com, const char *val, void *closure)
+{
+ int num = myatoi (val);
+
+ if (num == -1)
+ {
+ fprintf (stderr, _("%s: %s: Invalid specification `%s'.\n"),
+ exec_name, com, val);
+ return 0;
+ }
+ *(int *)closure = num;
+ return 1;
+}
+
+/* Similar to cmd_number(), but also accepts `inf' as a synonym for 0. */
+static int
+cmd_number_inf (const char *com, const char *val, void *closure)
+{
+ if (!strcasecmp (val, "inf"))
+ {
+ *(int *)closure = 0;
+ return 1;
+ }
+ return cmd_number (com, val, closure);
+}
+
+/* Copy (strdup) the string at VAL to a new location and place a
+   pointer to it in *CLOSURE. */
+static int
+cmd_string (const char *com, const char *val, void *closure)
+{
+ char **pstring = (char **)closure;
+
+ FREE_MAYBE (*pstring);
+ *pstring = xstrdup (val);
+ return 1;
+}
+
+/* Merge the vector (array of strings separated with `,') in VAL with
+   the vector (NULL-terminated array of strings) pointed to by
+   CLOSURE. */
+static int
+cmd_vector (const char *com, const char *val, void *closure)
+{
+ char ***pvec = (char ***)closure;
+
+ if (*val)
+ *pvec = merge_vecs (*pvec, sepstring (val));
+ else
+ {
+ free_vec (*pvec);
+ *pvec = NULL;
+ }
+ return 1;
+}
+
+static int
+cmd_directory_vector (const char *com, const char *val, void *closure)
+{
+ char ***pvec = (char ***)closure;
+
+ if (*val)
+ {
+ /* Strip the trailing slashes from directories. */
+ char **t, **seps;
+
+ seps = sepstring (val);
+ for (t = seps; t && *t; t++)
+ {
+ int len = strlen (*t);
+ /* Skip degenerate case of root directory. */
+ if (len > 1)
+ {
+ if ((*t)[len - 1] == '/')
+ (*t)[len - 1] = '\0';
+ }
+ }
+ *pvec = merge_vecs (*pvec, seps);
+ }
+ else
+ {
+ free_vec (*pvec);
+ *pvec = NULL;
+ }
+ return 1;
+}
+
+/* Set the value stored in VAL to CLOSURE (which should point to a
+   long int), allowing several postfixes, with the following syntax
+   (regexp):
+
+   [0-9]+          -> bytes
+   [0-9]+[kK]      -> bytes * 1024
+   [0-9]+[mM]      -> bytes * 1024 * 1024
+   [0-9]+[gG]      -> bytes * 1024 * 1024 * 1024
+   inf             -> 0
+
+   Anything else is flagged as incorrect, and CLOSURE is unchanged. */
+static int
+cmd_bytes (const char *com, const char *val, void *closure)
+{
+ long result;
+ long *out = (long *)closure;
+ const char *p;
+
+ result = 0;
+ p = val;
+ /* Check for "inf". */
+ if (p[0] == 'i' && p[1] == 'n' && p[2] == 'f' && p[3] == '\0')
+ {
+ *out = 0;
+ return 1;
+ }
+ /* Search for digits and construct result. */
+ for (; *p && ISDIGIT (*p); p++)
+ result = (10 * result) + (*p - '0');
+ /* If no digits were found, or more than one character is following
+ them, bail out. */
+ if (p == val || (*p != '\0' && *(p + 1) != '\0'))
+ {
+ printf (_("%s: Invalid specification `%s'\n"), com, val);
+ return 0;
+ }
+ /* Search for a designator. */
+ switch (tolower (*p))
+ {
+ case '\0':
+ /* None */
+ break;
+ case 'k':
+ /* Kilobytes */
+ result *= 1024;
+ break;
+ case 'm':
+ /* Megabytes */
+ result *= (long)1024 * 1024;
+ break;
+ case 'g':
+ /* Gigabytes */
+ result *= (long)1024 * 1024 * 1024;
+ break;
+ default:
+ printf (_("%s: Invalid specification `%s'\n"), com, val);
+ return 0;
+ }
+ *out = result;
+ return 1;
+}
+
+/* Store the value of VAL to *OUT, allowing suffixes for minutes and
+ hours. */
+static int
+cmd_time (const char *com, const char *val, void *closure)
+{
+ long result = 0;
+ const char *p = val;
+
+ /* Search for digits and construct result. */
+ for (; *p && ISDIGIT (*p); p++)
+ result = (10 * result) + (*p - '0');
+ /* If no digits were found, or more than one character is following
+ them, bail out. */
+ if (p == val || (*p != '\0' && *(p + 1) != '\0'))
+ {
+ printf (_("%s: Invalid specification `%s'\n"), com, val);
+ return 0;
+ }
+ /* Search for a suffix. */
+ switch (tolower (*p))
+ {
+ case '\0':
+ /* None */
+ break;
+ case 'm':
+ /* Minutes */
+ result *= 60;
+ break;
+ case 'h':
+ /* Hours */
+ result *= 3600;
+ break;
+ case 'd':
+ /* Days (overflow on 16bit machines) */
+ result *= 86400L;
+ break;
+ case 'w':
+ /* Weeks :-) */
+ result *= 604800L;
+ break;
+ default:
+ printf (_("%s: Invalid specification `%s'\n"), com, val);
+ return 0;
+ }
+ *(long *)closure = result;
+ return 1;
+}
+\f
+/* Specialized helper functions, used by `commands' to handle some
+ options specially. */
+
+static int check_user_specified_header PARAMS ((const char *));
+
+static int
+cmd_spec_dirstruct (const char *com, const char *val, void *closure)
+{
+ if (!cmd_boolean (com, val, &opt.dirstruct))
+ return 0;
+ /* Since the dirstruct behaviour is being set explicitly,
+ no_dirstruct must be set to the opposite value. */
+ if (opt.dirstruct)
+ opt.no_dirstruct = 0;
+ else
+ opt.no_dirstruct = 1;
+ return 1;
+}
+
+static int
+cmd_spec_dotstyle (const char *com, const char *val, void *closure)
+{
+ /* Retrieval styles. */
+ if (!strcasecmp (val, "default"))
+ {
+ /* Default style: 1K dots, 10 dots in a cluster, 50 dots in a
+ line. */
+ opt.dot_bytes = 1024;
+ opt.dot_spacing = 10;
+ opt.dots_in_line = 50;
+ }
+ else if (!strcasecmp (val, "binary"))
+ {
+ /* "Binary" retrieval: 8K dots, 16 dots in a cluster, 48 dots
+ (384K) in a line. */
+ opt.dot_bytes = 8192;
+ opt.dot_spacing = 16;
+ opt.dots_in_line = 48;
+ }
+ else if (!strcasecmp (val, "mega"))
+ {
+ /* "Mega" retrieval, for retrieving very long files; each dot is
+ 64K, 8 dots in a cluster, 6 clusters (3M) in a line. */
+ opt.dot_bytes = 65536L;
+ opt.dot_spacing = 8;
+ opt.dots_in_line = 48;
+ }
+ else if (!strcasecmp (val, "giga"))
+ {
+ /* "Giga" retrieval, for retrieving very very *very* long files;
+ each dot is 1M, 8 dots in a cluster, 4 clusters (32M) in a
+ line. */
+ opt.dot_bytes = (1L << 20);
+ opt.dot_spacing = 8;
+ opt.dots_in_line = 32;
+ }
+ else if (!strcasecmp (val, "micro"))
+ {
+ /* "Micro" retrieval, for retrieving very small files (and/or
+ slow connections); each dot is 128 bytes, 8 dots in a
+ cluster, 6 clusters (6K) in a line. */
+ opt.dot_bytes = 128;
+ opt.dot_spacing = 8;
+ opt.dots_in_line = 48;
+ }
+ else
+ {
+ fprintf (stderr, _("%s: %s: Invalid specification `%s'.\n"),
+ exec_name, com, val);
+ return 0;
+ }
+ return 1;
+}
+
+static int
+cmd_spec_header (const char *com, const char *val, void *closure)
+{
+ if (!*val)
+ {
+ /* Empty header means reset headers. */
+ FREE_MAYBE (opt.user_header);
+ opt.user_header = NULL;
+ }
+ else
+ {
+ int i;
+
+ if (!check_user_specified_header (val))
+ {
+ fprintf (stderr, _("%s: %s: Invalid specification `%s'.\n"),
+ exec_name, com, val);
+ return 0;
+ }
+ i = opt.user_header ? strlen (opt.user_header) : 0;
+ opt.user_header = (char *)xrealloc (opt.user_header, i + strlen (val)
+ + 2 + 1);
+ strcpy (opt.user_header + i, val);
+ i += strlen (val);
+ opt.user_header[i++] = '\r';
+ opt.user_header[i++] = '\n';
+ opt.user_header[i] = '\0';
+ }
+ return 1;
+}
+
+static int
+cmd_spec_htmlify (const char *com, const char *val, void *closure)
+{
+ int flag = cmd_boolean (com, val, &opt.htmlify);
+ if (flag && !opt.htmlify)
+ opt.remove_listing = 0;
+ return flag;
+}
+
+static int
+cmd_spec_mirror (const char *com, const char *val, void *closure)
+{
+ int mirror;
+
+ if (!cmd_boolean (com, val, &mirror))
+ return 0;
+ if (mirror)
+ {
+ opt.recursive = 1;
+ if (!opt.no_dirstruct)
+ opt.dirstruct = 1;
+ opt.timestamping = 1;
+ opt.reclevel = 0;
+ opt.remove_listing = 0;
+ }
+ return 1;
+}
+
+static int
+cmd_spec_outputdocument (const char *com, const char *val, void *closure)
+{
+ FREE_MAYBE (opt.output_document);
+ opt.output_document = xstrdup (val);
+ opt.ntry = 1;
+ return 1;
+}
+
+static int
+cmd_spec_recursive (const char *com, const char *val, void *closure)
+{
+ if (!cmd_boolean (com, val, &opt.recursive))
+ return 0;
+ else
+ {
+ if (opt.recursive && !opt.no_dirstruct)
+ opt.dirstruct = 1;
+ }
+ return 1;
+}
+
+static int
+cmd_spec_useragent (const char *com, const char *val, void *closure)
+{
+ /* Just check for empty string and newline, so we don't throw total
+ junk to the server. */
+ if (!*val || strchr (val, '\n'))
+ {
+ fprintf (stderr, _("%s: %s: Invalid specification `%s'.\n"),
+ exec_name, com, val);
+ return 0;
+ }
+ opt.useragent = xstrdup (val);
+ return 1;
+}
+\f
+/* Miscellaneous useful routines. */
+
+/* Return the value of the non-negative decimal integer written in S,
+   or -1 if an error was encountered. */
+static int
+myatoi (const char *s)
+{
+ int res;
+ const char *orig = s;
+
+ for (res = 0; *s && ISDIGIT (*s); s++)
+ res = 10 * res + (*s - '0');
+ if (*s || orig == s)
+ return -1;
+ else
+ return res;
+}
+
+#define ISODIGIT(x) ((x) >= '0' && (x) <= '7')
+
+static int
+check_user_specified_header (const char *s)
+{
+ const char *p;
+
+ for (p = s; *p && *p != ':' && !ISSPACE (*p); p++);
+ /* The header MUST contain `:' preceded by at least one
+ non-whitespace character. */
+ if (*p != ':' || p == s)
+ return 0;
+ /* The header MUST NOT contain newlines. */
+ if (strchr (s, '\n'))
+ return 0;
+ return 1;
+}
+\f
+/* Free the memory allocated by global variables. */
+void
+cleanup (void)
+{
+ extern acc_t *netrc_list;
+
+ recursive_cleanup ();
+ clean_hosts ();
+ free_netrc (netrc_list);
+ if (opt.dfp)
+ fclose (opt.dfp);
+ FREE_MAYBE (opt.lfilename);
+ free (opt.dir_prefix);
+ FREE_MAYBE (opt.input_filename);
+ FREE_MAYBE (opt.output_document);
+ free_vec (opt.accepts);
+ free_vec (opt.rejects);
+ free_vec (opt.excludes);
+ free_vec (opt.includes);
+ free_vec (opt.domains);
+ free (opt.ftp_acc);
+ free (opt.ftp_pass);
+ FREE_MAYBE (opt.ftp_proxy);
+ FREE_MAYBE (opt.http_proxy);
+ free_vec (opt.no_proxy);
+ FREE_MAYBE (opt.useragent);
+ FREE_MAYBE (opt.http_user);
+ FREE_MAYBE (opt.http_passwd);
+ FREE_MAYBE (opt.user_header);
+}
--- /dev/null
+/* Declarations for init.c.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef INIT_H
+#define INIT_H
+
+void initialize PARAMS ((void));
+int parse_line PARAMS ((const char *, char **, char **));
+int setval PARAMS ((const char *, const char *));
+char *home_dir PARAMS ((void));
+void cleanup PARAMS ((void));
+
+#endif /* INIT_H */
--- /dev/null
+/* Messages logging.
+ Copyright (C) 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <stdlib.h>
+#if defined(__STDC__) && defined(HAVE_STDARG_H)
+/* If we have __STDC__ and stdarg.h, we'll assume it's an ANSI system. */
+# define USE_STDARG
+# include <stdarg.h>
+#else
+# include <varargs.h>
+#endif
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#include <assert.h>
+#include <errno.h>
+
+#include "wget.h"
+#include "utils.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+/* We expect no message passed to logprintf() to be bigger than this.
+ Before a message is printed, we make sure that at least this much
+ room is left for printing it. */
+#define SAVED_LOG_MAXMSG 32768
+
+/* Maximum size (10 MB) to which the saved log is allowed to grow. */
+#define SAVED_LOG_MAXSIZE (10 * 1L << 20)
+
+static char *saved_log;
+/* Size of the current log. */
+static long saved_log_size;
+/* Offset into the log where we are about to print (size of the
+ used-up part of SAVED_LOG). */
+static long saved_log_offset;
+/* Whether logging is saved at all. */
+int save_log_p;
+
+static FILE *logfp;
+
+/* Check X against opt.verbose and opt.quiet.  The semantics are as
+ follows:
+
+ * LOG_ALWAYS - print the message unconditionally;
+
+ * LOG_NOTQUIET - print the message if opt.quiet is zero;
+
+ * LOG_NONVERBOSE - print the message if opt.verbose and opt.quiet
+ are both zero;
+
+ * LOG_VERBOSE - print the message if opt.verbose is non-zero. */
+#define CHECK_VERBOSE(x) \
+ switch (x) \
+ { \
+ case LOG_ALWAYS: \
+ break; \
+ case LOG_NOTQUIET: \
+ if (opt.quiet) \
+ return; \
+ break; \
+ case LOG_NONVERBOSE: \
+ if (opt.verbose || opt.quiet) \
+ return; \
+ break; \
+ case LOG_VERBOSE: \
+ if (!opt.verbose) \
+ return; \
+ }
+
+#define CANONICALIZE_LOGFP_OR_RETURN do { \
+ if (logfp == stdin) \
+ return; \
+ else if (!logfp) \
+ /* #### Should this ever happen? */ \
+ logfp = stderr; \
+} while (0)
+
+\f
+void
+logputs (enum log_options o, const char *s)
+{
+ CHECK_VERBOSE (o);
+ CANONICALIZE_LOGFP_OR_RETURN;
+
+ fputs (s, logfp);
+ if (!opt.no_flush)
+ fflush (logfp);
+
+ if (save_log_p && saved_log_size < SAVED_LOG_MAXSIZE)
+ {
+ int len = strlen (s);
+
+ /* Increase size of SAVED_LOG exponentially. */
+ DO_REALLOC (saved_log, saved_log_size,
+ saved_log_offset + len + 1, char);
+ memcpy (saved_log + saved_log_offset, s, len + 1);
+ saved_log_offset += len;
+ }
+}
+
+/* Print a message to the log file logfp.  If logfp is NULL, print to
+ stderr.  If logfp is stdin, don't print at all.  A copy of the
+ message will be saved to saved_log, for later reuse by log_dump(). */
+static void
+logvprintf (enum log_options o, const char *fmt, va_list args)
+{
+ CHECK_VERBOSE (o);
+ CANONICALIZE_LOGFP_OR_RETURN;
+
+ /* Originally, we first used vfprintf(), and then checked whether
+ the message needs to be stored with vsprintf(). However, Watcom
+ C didn't like ARGS being used twice, so now we first vsprintf()
+ the message, and then fwrite() it to LOGFP. */
+ if (save_log_p && saved_log_size < SAVED_LOG_MAXSIZE)
+ {
+ int len;
+ /* Increase size of `saved_log' exponentially. */
+ DO_REALLOC (saved_log, saved_log_size,
+ saved_log_offset + SAVED_LOG_MAXMSG, char);
+ /* Print the message to the log saver... */
+#ifdef HAVE_VSNPRINTF
+ vsnprintf (saved_log + saved_log_offset, SAVED_LOG_MAXMSG, fmt, args);
+#else /* not HAVE_VSNPRINTF */
+ vsprintf (saved_log + saved_log_offset, fmt, args);
+#endif /* not HAVE_VSNPRINTF */
+ /* ...and then dump it to LOGFP. */
+ len = strlen (saved_log + saved_log_offset);
+ fwrite (saved_log + saved_log_offset, len, 1, logfp);
+ saved_log_offset += len;
+ /* If we ran off the limits and corrupted something, bail out
+ immediately. */
+ assert (saved_log_size >= saved_log_offset);
+ }
+ else
+ vfprintf (logfp, fmt, args);
+
+ if (!opt.no_flush)
+ fflush (logfp);
+}
+
+/* Flush LOGFP. */
+void
+logflush (void)
+{
+ CANONICALIZE_LOGFP_OR_RETURN;
+ fflush (logfp);
+}
+
+/* Portability makes these two functions look like @#%#@$@#$. */
+
+#ifdef USE_STDARG
+void
+logprintf (enum log_options o, const char *fmt, ...)
+#else /* not USE_STDARG */
+void
+logprintf (va_alist)
+ va_dcl
+#endif /* not USE_STDARG */
+{
+ va_list args;
+#ifndef USE_STDARG
+ enum log_options o;
+ const char *fmt;
+#endif
+
+#ifdef USE_STDARG
+ va_start (args, fmt);
+#else
+ va_start (args);
+ o = va_arg (args, enum log_options);
+ fmt = va_arg (args, char *);
+#endif
+ logvprintf (o, fmt, args);
+ va_end (args);
+}
+
+#ifdef DEBUG
+/* The same as logprintf(), but does anything only if opt.debug is
+ non-zero. */
+#ifdef USE_STDARG
+void
+debug_logprintf (const char *fmt, ...)
+#else /* not USE_STDARG */
+void
+debug_logprintf (va_alist)
+ va_dcl
+#endif /* not USE_STDARG */
+{
+ if (opt.debug)
+ {
+ va_list args;
+#ifndef USE_STDARG
+ const char *fmt;
+#endif
+
+#ifdef USE_STDARG
+ va_start (args, fmt);
+#else
+ va_start (args);
+ fmt = va_arg (args, char *);
+#endif
+ logvprintf (LOG_ALWAYS, fmt, args);
+ va_end (args);
+ }
+}
+#endif /* DEBUG */
+\f
+/* Open FILE and set up a logging stream. If FILE cannot be opened,
+ exit with status of 1. */
+void
+log_init (const char *file, int appendp)
+{
+ if (file)
+ {
+ logfp = fopen (file, appendp ? "a" : "w");
+ if (!logfp)
+ {
+ perror (opt.lfilename);
+ exit (1);
+ }
+ }
+ else
+ {
+ logfp = stderr;
+ /* If the output is a TTY, enable logging, which will make Wget
+ remember all the printed messages, to be able to dump them to
+ a log file in case SIGHUP or SIGUSR1 is received (or
+ Ctrl+Break is pressed under Windows). */
+ if (1
+#ifdef HAVE_ISATTY
+ && isatty (fileno (logfp))
+#endif
+ )
+ {
+ save_log_p = 1;
+ }
+ }
+}
+
+/* Close LOGFP, inhibit further logging and free the memory associated
+ with it. */
+void
+log_close (void)
+{
+ fclose (logfp);
+ save_log_p = 0;
+ FREE_MAYBE (saved_log);
+ saved_log = NULL;
+ saved_log_size = saved_log_offset = 0;
+}
+
+/* Dump SAVED_LOG using logputs(), and quit further logging to memory.
+ Also, free the memory occupied by saved_log. */
+static void
+log_dump (void)
+{
+ save_log_p = 0;
+ if (!saved_log)
+ return;
+ logputs (LOG_ALWAYS, saved_log);
+ free (saved_log);
+ saved_log = NULL;
+ saved_log_size = saved_log_offset = 0;
+}
+
+/* Redirect logging output to a unique file based on DEFAULT_LOGFILE
+ (normally `wget-log').  MESSIJ is printed on stdout, and should
+ contain *exactly one* `%s', which will be replaced by the log
+ file name.
+
+ If logging to memory was not enabled, MESSIJ is not printed and
+ nothing is redirected. */
+void
+redirect_output (const char *messij)
+{
+ char *logfile;
+
+ if (!save_log_p)
+ return;
+
+ logfile = unique_name (DEFAULT_LOGFILE);
+ logfp = fopen (logfile, "w");
+ if (!logfp)
+ {
+ printf ("%s: %s: %s\n", exec_name, logfile, strerror (errno));
+ /* `stdin' is magic to not print anything. */
+ logfp = stdin;
+ }
+ printf (messij, logfile);
+ free (logfile);
+ /* Dump all the previous messages to LOGFILE. */
+ log_dump ();
+}
--- /dev/null
+/* Command line parsing.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <ctype.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif /* HAVE_UNISTD_H */
+#include <sys/types.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif /* HAVE_STRING_H */
+#ifdef HAVE_SIGNAL_H
+# include <signal.h>
+#endif
+#ifdef HAVE_NLS
+#ifdef HAVE_LOCALE_H
+# include <locale.h>
+#endif /* HAVE_LOCALE_H */
+#endif /* HAVE_NLS */
+
+#define OPTIONS_DEFINED_HERE /* for options.h */
+
+#include "wget.h"
+#include "utils.h"
+#include "getopt.h"
+#include "init.h"
+#include "retr.h"
+#include "recur.h"
+#include "host.h"
+
+#ifndef PATH_SEPARATOR
+# define PATH_SEPARATOR '/'
+#endif
+
+extern char *version_string;
+
+#ifndef errno
+extern int errno;
+#endif
+
+struct options opt;
+
+/* From log.c. */
+void log_init PARAMS ((const char *, int));
+void log_close PARAMS ((void));
+void redirect_output PARAMS ((const char *));
+
+static RETSIGTYPE redirect_output_signal PARAMS ((int));
+
+const char *exec_name;
+\f
+/* Initialize I18N. The initialization amounts to invoking
+ setlocale(), bindtextdomain() and textdomain().
+ Does nothing if NLS is disabled or missing. */
+static void
+i18n_initialize (void)
+{
+ /* If HAVE_NLS is defined, assume the existence of the three
+ functions invoked here. */
+#ifdef HAVE_NLS
+ /* Set the current locale. */
+ /* Here we use LC_MESSAGES instead of LC_ALL, for two reasons.
+ First, message catalogs are all of I18N Wget uses anyway.
+ Second, setting LC_ALL has a dangerous potential of messing
+ things up. For example, when in a foreign locale, Solaris
+ strptime() fails to handle international dates correctly, which
+ makes http_atotm() malfunction. */
+ setlocale (LC_MESSAGES, "");
+ /* Set the text message domain. */
+ bindtextdomain ("wget", LOCALEDIR);
+ textdomain ("wget");
+#endif /* HAVE_NLS */
+}
+\f
+/* Print the usage message. */
+static void
+print_usage (void)
+{
+ printf (_("Usage: %s [OPTION]... [URL]...\n"), exec_name);
+}
+
+/* Print the help message, describing all the available options. If
+ you add an option, be sure to update this list. */
+static void
+print_help (void)
+{
+ printf (_("GNU Wget %s, a non-interactive network retriever.\n"),
+ version_string);
+ print_usage ();
+ /* Had to split this in parts, so the #@@#%# Ultrix compiler and cpp
+ don't bitch. Also, it makes translation much easier. */
+ printf ("%s%s%s%s%s%s%s%s%s%s", _("\
+\n\
+Mandatory arguments to long options are mandatory for short options too.\n\
+\n"), _("\
+Startup:\n\
+ -V, --version display the version of Wget and exit.\n\
+ -h, --help print this help.\n\
+ -b, --background go to background after startup.\n\
+ -e, --execute=COMMAND execute a `.wgetrc\' command.\n\
+\n"), _("\
+Logging and input file:\n\
+ -o, --output-file=FILE log messages to FILE.\n\
+ -a, --append-output=FILE append messages to FILE.\n\
+ -d, --debug print debug output.\n\
+ -q, --quiet quiet (no output).\n\
+ -v, --verbose be verbose (this is the default).\n\
+ -nv, --non-verbose turn off verbosity, without being quiet.\n\
+ -i, --input-file=FILE read URL-s from file.\n\
+ -F, --force-html treat input file as HTML.\n\
+\n"), _("\
+Download:\n\
+ -t, --tries=NUMBER set number of retries to NUMBER (0 unlimits).\n\
+ -O, --output-document=FILE write documents to FILE.\n\
+ -nc, --no-clobber don\'t clobber existing files.\n\
+ -c, --continue restart getting an existing file.\n\
+ --dot-style=STYLE set retrieval display style.\n\
+ -N, --timestamping don\'t retrieve files if older than local.\n\
+ -S, --server-response print server response.\n\
+ --spider don\'t download anything.\n\
+ -T, --timeout=SECONDS set the read timeout to SECONDS.\n\
+ -w, --wait=SECONDS wait SECONDS between retrievals.\n\
+ -Y, --proxy=on/off turn proxy on or off.\n\
+ -Q, --quota=NUMBER set retrieval quota to NUMBER.\n\
+\n"), _("\
+Directories:\n\
+ -nd, --no-directories don\'t create directories.\n\
+ -x, --force-directories force creation of directories.\n\
+ -nH, --no-host-directories don\'t create host directories.\n\
+ -P, --directory-prefix=PREFIX save files to PREFIX/...\n\
+ --cut-dirs=NUMBER ignore NUMBER remote directory components.\n\
+\n"), _("\
+HTTP options:\n\
+ --http-user=USER set http user to USER.\n\
+ --http-passwd=PASS set http password to PASS.\n\
+ -C, --cache=on/off (dis)allow server-cached data (normally allowed).\n\
+ --ignore-length ignore `Content-Length\' header field.\n\
+ --header=STRING insert STRING among the headers.\n\
+ --proxy-user=USER set USER as proxy username.\n\
+ --proxy-passwd=PASS set PASS as proxy password.\n\
+ -s, --save-headers save the HTTP headers to file.\n\
+ -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.\n\
+\n"), _("\
+FTP options:\n\
+ --retr-symlinks retrieve FTP symbolic links.\n\
+ -g, --glob=on/off turn file name globbing on or off.\n\
+ --passive-ftp use the \"passive\" transfer mode.\n\
+\n"), _("\
+Recursive retrieval:\n\
+ -r, --recursive recursive web-suck -- use with care!\n\
+ -l, --level=NUMBER maximum recursion depth (0 to unlimit).\n\
+ --delete-after delete downloaded files.\n\
+ -k, --convert-links convert non-relative links to relative.\n\
+ -m, --mirror turn on options suitable for mirroring.\n\
+ -nr, --dont-remove-listing don\'t remove `.listing\' files.\n\
+\n"), _("\
+Recursive accept/reject:\n\
+ -A, --accept=LIST list of accepted extensions.\n\
+ -R, --reject=LIST list of rejected extensions.\n\
+ -D, --domains=LIST list of accepted domains.\n\
+ --exclude-domains=LIST comma-separated list of rejected domains.\n\
+ -L, --relative follow relative links only.\n\
+ --follow-ftp follow FTP links from HTML documents.\n\
+ -H, --span-hosts go to foreign hosts when recursive.\n\
+ -I, --include-directories=LIST list of allowed directories.\n\
+ -X, --exclude-directories=LIST list of excluded directories.\n\
+ -nh, --no-host-lookup don\'t DNS-lookup hosts.\n\
+ -np, --no-parent don\'t ascend to the parent directory.\n\
+\n"), _("Mail bug reports and suggestions to <bug-wget@gnu.org>.\n"));
+}
+\f
+int
+main (int argc, char *const *argv)
+{
+ char **url, **t;
+ int i, c, nurl, status, append_to_log;
+
+ static struct option long_options[] =
+ {
+ { "background", no_argument, NULL, 'b' },
+ { "continue", no_argument, NULL, 'c' },
+ { "convert-links", no_argument, NULL, 'k' },
+ { "debug", no_argument, NULL, 'd' },
+ { "dont-remove-listing", no_argument, NULL, 21 },
+ { "email-address", no_argument, NULL, 'E' }, /* undocumented (debug) */
+ { "follow-ftp", no_argument, NULL, 14 },
+ { "force-directories", no_argument, NULL, 'x' },
+ { "force-hier", no_argument, NULL, 'x' }, /* obsolete */
+ { "force-html", no_argument, NULL, 'F'},
+ { "help", no_argument, NULL, 'h' },
+ { "ignore-length", no_argument, NULL, 10 },
+ { "mirror", no_argument, NULL, 'm' },
+ { "no-clobber", no_argument, NULL, 13 },
+ { "no-directories", no_argument, NULL, 19 },
+ { "no-host-directories", no_argument, NULL, 20 },
+ { "no-host-lookup", no_argument, NULL, 22 },
+ { "no-parent", no_argument, NULL, 5 },
+ { "non-verbose", no_argument, NULL, 18 },
+ { "passive-ftp", no_argument, NULL, 11 },
+ { "quiet", no_argument, NULL, 'q' },
+ { "recursive", no_argument, NULL, 'r' },
+ { "relative", no_argument, NULL, 'L' },
+ { "retr-symlinks", no_argument, NULL, 9 },
+ { "save-headers", no_argument, NULL, 's' },
+ { "server-response", no_argument, NULL, 'S' },
+ { "span-hosts", no_argument, NULL, 'H' },
+ { "spider", no_argument, NULL, 4 },
+ { "timestamping", no_argument, NULL, 'N' },
+ { "verbose", no_argument, NULL, 'v' },
+ { "version", no_argument, NULL, 'V' },
+
+ { "accept", required_argument, NULL, 'A' },
+ { "append-output", required_argument, NULL, 'a' },
+ { "backups", required_argument, NULL, 23 }, /* undocumented */
+ { "base", required_argument, NULL, 'B' },
+ { "cache", required_argument, NULL, 'C' },
+ { "cut-dirs", required_argument, NULL, 17 },
+ { "delete-after", no_argument, NULL, 8 },
+ { "directory-prefix", required_argument, NULL, 'P' },
+ { "domains", required_argument, NULL, 'D' },
+ { "dot-style", required_argument, NULL, 6 },
+ { "execute", required_argument, NULL, 'e' },
+ { "exclude-directories", required_argument, NULL, 'X' },
+ { "exclude-domains", required_argument, NULL, 12 },
+ { "glob", required_argument, NULL, 'g' },
+ { "header", required_argument, NULL, 3 },
+ { "htmlify", required_argument, NULL, 7 },
+ { "http-passwd", required_argument, NULL, 2 },
+ { "http-user", required_argument, NULL, 1 },
+ { "include-directories", required_argument, NULL, 'I' },
+ { "input-file", required_argument, NULL, 'i' },
+ { "level", required_argument, NULL, 'l' },
+ { "no", required_argument, NULL, 'n' },
+ { "output-document", required_argument, NULL, 'O' },
+ { "output-file", required_argument, NULL, 'o' },
+ { "proxy", required_argument, NULL, 'Y' },
+ { "proxy-passwd", required_argument, NULL, 16 },
+ { "proxy-user", required_argument, NULL, 15 },
+ { "quota", required_argument, NULL, 'Q' },
+ { "reject", required_argument, NULL, 'R' },
+ { "timeout", required_argument, NULL, 'T' },
+ { "tries", required_argument, NULL, 't' },
+ { "user-agent", required_argument, NULL, 'U' },
+ { "use-proxy", required_argument, NULL, 'Y' },
+ { "wait", required_argument, NULL, 'w' },
+ { 0, 0, 0, 0 }
+ };
+
+ i18n_initialize ();
+
+ append_to_log = 0;
+
+ /* Construct the name of the executable, without the directory part. */
+ exec_name = strrchr (argv[0], PATH_SEPARATOR);
+ if (!exec_name)
+ exec_name = argv[0];
+ else
+ ++exec_name;
+
+#ifdef WINDOWS
+ windows_main_junk (&argc, (char **) argv, (char **) &exec_name);
+#endif
+
+ initialize ();
+
+ while ((c = getopt_long (argc, argv, "\
+hVqvdksxmNWrHSLcFbEY:g:T:U:O:l:n:i:o:a:t:D:A:R:P:B:e:Q:X:I:w:",
+ long_options, (int *)0)) != EOF)
+ {
+ switch (c)
+ {
+ /* Options without arguments: */
+ case 4:
+ setval ("spider", "on");
+ break;
+ case 5:
+ setval ("noparent", "on");
+ break;
+ case 8:
+ setval ("deleteafter", "on");
+ break;
+ case 9:
+ setval ("retrsymlinks", "on");
+ break;
+ case 10:
+ setval ("ignorelength", "on");
+ break;
+ case 11:
+ setval ("passiveftp", "on");
+ break;
+ case 13:
+ setval ("noclobber", "on");
+ break;
+ case 14:
+ setval ("followftp", "on");
+ break;
+ case 17:
+ setval ("cutdirs", optarg);
+ break;
+ case 18:
+ setval ("verbose", "off");
+ break;
+ case 19:
+ setval ("dirstruct", "off");
+ break;
+ case 20:
+ setval ("addhostdir", "off");
+ break;
+ case 21:
+ setval ("removelisting", "off");
+ break;
+ case 22:
+ setval ("simplehostcheck", "on");
+ break;
+ case 'b':
+ setval ("background", "on");
+ break;
+ case 'c':
+ setval ("continue", "on");
+ break;
+ case 'd':
+#ifdef DEBUG
+ setval ("debug", "on");
+#else /* not DEBUG */
+ fprintf (stderr, _("%s: debug support not compiled in.\n"),
+ exec_name);
+#endif /* not DEBUG */
+ break;
+ case 'E':
+ /* For debugging purposes. */
+ printf ("%s\n", ftp_getaddress ());
+ exit (0);
+ break;
+ case 'F':
+ setval ("forcehtml", "on");
+ break;
+ case 'H':
+ setval ("spanhosts", "on");
+ break;
+ case 'h':
+ print_help ();
+#ifdef WINDOWS
+ ws_help (exec_name);
+#endif
+ exit (0);
+ break;
+ case 'k':
+ setval ("convertlinks", "on");
+ break;
+ case 'L':
+ setval ("relativeonly", "on");
+ break;
+ case 'm':
+ setval ("mirror", "on");
+ break;
+ case 'N':
+ setval ("timestamping", "on");
+ break;
+ case 'S':
+ setval ("serverresponse", "on");
+ break;
+ case 's':
+ setval ("saveheaders", "on");
+ break;
+ case 'q':
+ setval ("quiet", "on");
+ break;
+ case 'r':
+ setval ("recursive", "on");
+ break;
+ case 'V':
+ printf ("GNU Wget %s\n\n", version_string);
+ printf ("%s", _("\
+Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.\n\
+This program is distributed in the hope that it will be useful,\n\
+but WITHOUT ANY WARRANTY; without even the implied warranty of\n\
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n\
+GNU General Public License for more details.\n"));
+ printf (_("\nWritten by Hrvoje Niksic <hniksic@srce.hr>.\n"));
+ exit (0);
+ break;
+ case 'v':
+ setval ("verbose", "on");
+ break;
+ case 'x':
+ setval ("dirstruct", "on");
+ break;
+
+ /* Options accepting an argument: */
+ case 1:
+ setval ("httpuser", optarg);
+ break;
+ case 2:
+ setval ("httppasswd", optarg);
+ break;
+ case 3:
+ setval ("header", optarg);
+ break;
+ case 6:
+ setval ("dotstyle", optarg);
+ break;
+ case 7:
+ setval ("htmlify", optarg);
+ break;
+ case 12:
+ setval ("excludedomains", optarg);
+ break;
+ case 15:
+ setval ("proxyuser", optarg);
+ break;
+ case 16:
+ setval ("proxypasswd", optarg);
+ break;
+ case 23:
+ setval ("backups", optarg);
+ break;
+ case 'A':
+ setval ("accept", optarg);
+ break;
+ case 'a':
+ setval ("logfile", optarg);
+ append_to_log = 1;
+ break;
+ case 'B':
+ setval ("base", optarg);
+ break;
+ case 'C':
+ setval ("cache", optarg);
+ break;
+ case 'D':
+ setval ("domains", optarg);
+ break;
+ case 'e':
+ {
+ char *com, *val;
+ if (parse_line (optarg, &com, &val))
+ {
+ if (!setval (com, val))
+ exit (1);
+ }
+ else
+ {
+ fprintf (stderr, _("%s: %s: invalid command\n"), exec_name,
+ optarg);
+ exit (1);
+ }
+ free (com);
+ free (val);
+ }
+ break;
+ case 'g':
+ setval ("glob", optarg);
+ break;
+ case 'I':
+ setval ("includedirectories", optarg);
+ break;
+ case 'i':
+ setval ("input", optarg);
+ break;
+ case 'l':
+ setval ("reclevel", optarg);
+ break;
+ case 'n':
+ {
+ /* #### The n? options are utter crock! */
+ char *p;
+
+ for (p = optarg; *p; p++)
+ switch (*p)
+ {
+ case 'v':
+ setval ("verbose", "off");
+ break;
+ case 'h':
+ setval ("simplehostcheck", "on");
+ break;
+ case 'H':
+ setval ("addhostdir", "off");
+ break;
+ case 'd':
+ setval ("dirstruct", "off");
+ break;
+ case 'c':
+ setval ("noclobber", "on");
+ break;
+ case 'r':
+ setval ("removelisting", "off");
+ break;
+ case 'p':
+ setval ("noparent", "on");
+ break;
+ default:
+ printf (_("%s: illegal option -- `-n%c'\n"), exec_name, *p);
+ print_usage ();
+ printf ("\n");
+ printf (_("Try `%s --help\' for more options.\n"), exec_name);
+ exit (1);
+ }
+	  }
+	  break;
+ case 'O':
+ setval ("outputdocument", optarg);
+ break;
+ case 'o':
+ setval ("logfile", optarg);
+ break;
+ case 'P':
+ setval ("dirprefix", optarg);
+ break;
+ case 'Q':
+ setval ("quota", optarg);
+ break;
+ case 'R':
+ setval ("reject", optarg);
+ break;
+ case 'T':
+ setval ("timeout", optarg);
+ break;
+ case 't':
+ setval ("tries", optarg);
+ break;
+ case 'U':
+ setval ("useragent", optarg);
+ break;
+ case 'w':
+ setval ("wait", optarg);
+ break;
+ case 'X':
+ setval ("excludedirectories", optarg);
+ break;
+ case 'Y':
+ setval ("useproxy", optarg);
+ break;
+
+ case '?':
+ print_usage ();
+ printf ("\n");
+ printf (_("Try `%s --help' for more options.\n"), exec_name);
+ exit (0);
+ break;
+ }
+ }
+ if (opt.verbose == -1)
+ opt.verbose = !opt.quiet;
+
+ /* Sanity checks. */
+ if (opt.verbose && opt.quiet)
+ {
+ printf (_("Can't be verbose and quiet at the same time.\n"));
+ print_usage ();
+ exit (1);
+ }
+ if (opt.timestamping && opt.noclobber)
+ {
+ printf (_("\
+Can't timestamp and not clobber old files at the same time.\n"));
+ print_usage ();
+ exit (1);
+ }
+ nurl = argc - optind;
+ if (!nurl && !opt.input_filename)
+ {
+ /* No URL specified. */
+ printf (_("%s: missing URL\n"), exec_name);
+ print_usage ();
+ printf ("\n");
+ /* #### Something nicer should be printed here -- similar to the
+ pre-1.5 `--help' page. */
+ printf (_("Try `%s --help' for more options.\n"), exec_name);
+ exit (1);
+ }
+
+ if (opt.background)
+ fork_to_background ();
+
+ /* Allocate basic pointer. */
+ url = ALLOCA_ARRAY (char *, nurl + 1);
+ /* Fill in the arguments. */
+ for (i = 0; i < nurl; i++, optind++)
+ {
+ char *irix4_cc_needs_this;
+ STRDUP_ALLOCA (irix4_cc_needs_this, argv[optind]);
+ url[i] = irix4_cc_needs_this;
+ }
+ url[i] = NULL;
+
+ /* Change the title of console window on Windows. #### I think this
+ statement should belong to retrieve_url(). --hniksic. */
+#ifdef WINDOWS
+ ws_changetitle (*url, nurl);
+#endif
+
+ /* Initialize logging. */
+ log_init (opt.lfilename, append_to_log);
+
+ DEBUGP (("DEBUG output created by Wget %s on %s.\n\n", version_string,
+ OS_TYPE));
+ /* Open the output filename if necessary. */
+ if (opt.output_document)
+ {
+ if (HYPHENP (opt.output_document))
+ opt.dfp = stdout;
+ else
+ {
+ opt.dfp = fopen (opt.output_document, "wb");
+ if (opt.dfp == NULL)
+ {
+ perror (opt.output_document);
+ exit (1);
+ }
+ }
+ }
+
+#ifdef WINDOWS
+ ws_startup ();
+#endif
+
+ /* Setup the signal handler to redirect output when hangup is
+ received. */
+#ifdef HAVE_SIGNAL
+ if (signal(SIGHUP, SIG_IGN) != SIG_IGN)
+ signal(SIGHUP, redirect_output_signal);
+ /* ...and do the same for SIGUSR1. */
+ signal (SIGUSR1, redirect_output_signal);
+ /* Writing to a closed socket normally signals SIGPIPE, and the
+ process exits. What we want is to ignore SIGPIPE and just check
+ for the return value of write(). */
+ signal (SIGPIPE, SIG_IGN);
+#endif /* HAVE_SIGNAL */
+
+ status = RETROK; /* initialize it, just-in-case */
+ recursive_reset ();
+ /* Retrieve the URLs from argument list. */
+ for (t = url; *t; t++)
+ {
+ char *filename, *new_file;
+ int dt;
+
+ status = retrieve_url (*t, &filename, &new_file, NULL, &dt);
+ if (opt.recursive && status == RETROK && (dt & TEXTHTML))
+ status = recursive_retrieve (filename, new_file ? new_file : *t);
+ FREE_MAYBE (new_file);
+ FREE_MAYBE (filename);
+ }
+
+ /* And then from the input file, if any. */
+ if (opt.input_filename)
+ {
+ int count;
+ status = retrieve_from_file (opt.input_filename, opt.force_html, &count);
+ if (!count)
+ logprintf (LOG_NOTQUIET, _("No URLs found in %s.\n"),
+ opt.input_filename);
+ }
+ /* Print the downloaded sum. */
+ if (opt.recursive
+ || nurl > 1
+ || (opt.input_filename && opt.downloaded != 0))
+ {
+ logprintf (LOG_NOTQUIET,
+ _("\nFINISHED --%s--\nDownloaded: %s bytes in %d files\n"),
+ time_str (NULL), legible (opt.downloaded), opt.numurls);
+ /* Print quota warning, if exceeded. */
+ if (opt.quota && opt.downloaded > opt.quota)
+ logprintf (LOG_NOTQUIET,
+ _("Download quota (%s bytes) EXCEEDED!\n"),
+ legible (opt.quota));
+ }
+ if (opt.convert_links)
+ {
+ convert_all_links ();
+ }
+ log_close ();
+ cleanup ();
+ if (status == RETROK)
+ return 0;
+ else
+ return 1;
+}
+\f
+/* Hangup signal handler.  When wget receives SIGHUP or SIGUSR1, it
+   will continue operating as usual, trying to write into a log file.
+   If that is impossible, the output will be turned off.  */
+
+#ifdef HAVE_SIGNAL
+static RETSIGTYPE
+redirect_output_signal (int sig)
+{
+ char tmp[100];
+ signal (sig, redirect_output_signal);
+ /* Please note that the double `%' in `%%s' is intentional, because
+ redirect_output passes tmp through printf. */
+ sprintf (tmp, _("%s received, redirecting output to `%%s'.\n"),
+ (sig == SIGHUP ? "SIGHUP" :
+ (sig == SIGUSR1 ? "SIGUSR1" :
+ "WTF?!")));
+ redirect_output (tmp);
+}
+#endif /* HAVE_SIGNAL */
--- /dev/null
+/* md5.c - Functions to compute MD5 message digest of files or memory blocks
+ according to the definition of MD5 in RFC 1321 from April 1992.
+ Copyright (C) 1995, 1996 Free Software Foundation, Inc.
+ This file is part of the GNU C library.
+
+ The GNU C Library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Library General Public License as
+ published by the Free Software Foundation; either version 2 of the
+ License, or (at your option) any later version.
+
+ The GNU C Library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Library General Public License for more details.
+
+ You should have received a copy of the GNU Library General Public
+ License along with the GNU C Library; see the file COPYING.LIB. If not,
+ write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ Boston, MA 02111-1307, USA. */
+
+/* Written by Ulrich Drepper <drepper@gnu.ai.mit.edu>, 1995. */
+
+#ifdef HAVE_CONFIG_H
+# include <config.h>
+#endif
+
+/* Wget */
+/*#if STDC_HEADERS || defined _LIBC*/
+# include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+/*#else*/
+/*# ifndef HAVE_MEMCPY*/
+/*# define memcpy(d, s, n) bcopy ((s), (d), (n))*/
+/*# endif*/
+/*#endif*/
+
+#include "wget.h"
+#include "md5.h"
+
+#ifdef _LIBC
+# include <endian.h>
+# if __BYTE_ORDER == __BIG_ENDIAN
+# define WORDS_BIGENDIAN 1
+# endif
+#endif
+
+#ifdef WORDS_BIGENDIAN
+# define SWAP(n) \
+ (((n) << 24) | (((n) & 0xff00) << 8) | (((n) >> 8) & 0xff00) | ((n) >> 24))
+#else
+# define SWAP(n) (n)
+#endif
+
+
+/* This array contains the bytes used to pad the buffer to the next
+ 64-byte boundary. (RFC 1321, 3.1: Step 1) */
+static const unsigned char fillbuf[64] = { 0x80, 0 /* , 0, 0, ... */ };
+
+
+/* Initialize structure containing state of computation.
+ (RFC 1321, 3.3: Step 3) */
+void
+md5_init_ctx (struct md5_ctx *ctx)
+{
+ ctx->A = 0x67452301;
+ ctx->B = 0xefcdab89;
+ ctx->C = 0x98badcfe;
+ ctx->D = 0x10325476;
+
+ ctx->total[0] = ctx->total[1] = 0;
+ ctx->buflen = 0;
+}
+
+/* Put result from CTX in first 16 bytes following RESBUF. The result
+ must be in little endian byte order.
+
+ IMPORTANT: On some systems it is required that RESBUF is correctly
+   aligned for a 32-bit value. */
+void *
+md5_read_ctx (const struct md5_ctx *ctx, void *resbuf)
+{
+ ((md5_uint32 *) resbuf)[0] = SWAP (ctx->A);
+ ((md5_uint32 *) resbuf)[1] = SWAP (ctx->B);
+ ((md5_uint32 *) resbuf)[2] = SWAP (ctx->C);
+ ((md5_uint32 *) resbuf)[3] = SWAP (ctx->D);
+
+ return resbuf;
+}
+
+/* Process the remaining bytes in the internal buffer, append the
+   standard padding and length, and write the result to RESBUF.
+
+ IMPORTANT: On some systems it is required that RESBUF is correctly
+   aligned for a 32-bit value. */
+void *
+md5_finish_ctx (struct md5_ctx *ctx, void *resbuf)
+{
+ /* Take yet unprocessed bytes into account. */
+ md5_uint32 bytes = ctx->buflen;
+ size_t pad;
+
+ /* Now count remaining bytes. */
+ ctx->total[0] += bytes;
+ if (ctx->total[0] < bytes)
+ ++ctx->total[1];
+
+ pad = bytes >= 56 ? 64 + 56 - bytes : 56 - bytes;
+ memcpy (&ctx->buffer[bytes], fillbuf, pad);
+
+ /* Put the 64-bit file length in *bits* at the end of the buffer. */
+ *(md5_uint32 *) &ctx->buffer[bytes + pad] = SWAP (ctx->total[0] << 3);
+ *(md5_uint32 *) &ctx->buffer[bytes + pad + 4] = SWAP ((ctx->total[1] << 3) |
+ (ctx->total[0] >> 29));
+
+ /* Process last bytes. */
+ md5_process_block (ctx->buffer, bytes + pad + 8, ctx);
+
+ return md5_read_ctx (ctx, resbuf);
+}
+
+/* Unused in Wget */
+#if 0
+/* Compute MD5 message digest for bytes read from STREAM. The
+ resulting message digest number will be written into the 16 bytes
+ beginning at RESBLOCK. */
+int
+md5_stream (FILE *stream, void *resblock)
+{
+ /* Important: BLOCKSIZE must be a multiple of 64. */
+#define BLOCKSIZE 4096
+ struct md5_ctx ctx;
+ char buffer[BLOCKSIZE + 72];
+ size_t sum;
+
+ /* Initialize the computation context. */
+ md5_init_ctx (&ctx);
+
+ /* Iterate over full file contents. */
+ while (1)
+ {
+ /* We read the file in blocks of BLOCKSIZE bytes. One call of the
+ computation function processes the whole buffer so that with the
+ next round of the loop another block can be read. */
+ size_t n;
+ sum = 0;
+
+ /* Read block. Take care for partial reads. */
+ do
+ {
+ n = fread (buffer + sum, 1, BLOCKSIZE - sum, stream);
+
+ sum += n;
+ }
+ while (sum < BLOCKSIZE && n != 0);
+ if (n == 0 && ferror (stream))
+ return 1;
+
+ /* If end of file is reached, end the loop. */
+ if (n == 0)
+ break;
+
+ /* Process buffer with BLOCKSIZE bytes. Note that
+ BLOCKSIZE % 64 == 0
+ */
+ md5_process_block (buffer, BLOCKSIZE, &ctx);
+ }
+
+ /* Add the last bytes if necessary. */
+ if (sum > 0)
+ md5_process_bytes (buffer, sum, &ctx);
+
+ /* Construct result in desired memory. */
+ md5_finish_ctx (&ctx, resblock);
+ return 0;
+}
+
+/* Compute MD5 message digest for LEN bytes beginning at BUFFER. The
+ result is always in little endian byte order, so that a byte-wise
+   output yields the wanted ASCII representation of the message
+ digest. */
+void *
+md5_buffer (const char *buffer, size_t len, void *resblock)
+{
+ struct md5_ctx ctx;
+
+ /* Initialize the computation context. */
+ md5_init_ctx (&ctx);
+
+ /* Process whole buffer but last len % 64 bytes. */
+ md5_process_bytes (buffer, len, &ctx);
+
+ /* Put result in desired memory area. */
+ return md5_finish_ctx (&ctx, resblock);
+}
+#endif /* 0 */
+
+
+void
+md5_process_bytes (const void *buffer, size_t len, struct md5_ctx *ctx)
+{
+ /* When we already have some bits in our internal buffer concatenate
+ both inputs first. */
+ if (ctx->buflen != 0)
+ {
+ size_t left_over = ctx->buflen;
+ size_t add = 128 - left_over > len ? len : 128 - left_over;
+
+ memcpy (&ctx->buffer[left_over], buffer, add);
+ ctx->buflen += add;
+
+ if (left_over + add > 64)
+ {
+ md5_process_block (ctx->buffer, (left_over + add) & ~63, ctx);
+ /* The regions in the following copy operation cannot overlap. */
+ memcpy (ctx->buffer, &ctx->buffer[(left_over + add) & ~63],
+ (left_over + add) & 63);
+ ctx->buflen = (left_over + add) & 63;
+ }
+
+ buffer = (const char *) buffer + add;
+ len -= add;
+ }
+
+ /* Process available complete blocks. */
+ if (len > 64)
+ {
+ md5_process_block (buffer, len & ~63, ctx);
+ buffer = (const char *) buffer + (len & ~63);
+ len &= 63;
+ }
+
+ /* Move remaining bytes in internal buffer. */
+ if (len > 0)
+ {
+ memcpy (ctx->buffer, buffer, len);
+ ctx->buflen = len;
+ }
+}
+
+
+/* These are the four functions used in the four steps of the MD5 algorithm
+ and defined in the RFC 1321. The first function is a little bit optimized
+ (as found in Colin Plumbs public domain implementation). */
+/* #define FF(b, c, d) ((b & c) | (~b & d)) */
+#define FF(b, c, d) (d ^ (b & (c ^ d)))
+#define FG(b, c, d) FF (d, b, c)
+#define FH(b, c, d) (b ^ c ^ d)
+#define FI(b, c, d) (c ^ (b | ~d))
+
+/* Process LEN bytes of BUFFER, accumulating context into CTX.
+ It is assumed that LEN % 64 == 0. */
+
+void
+md5_process_block (const void *buffer, size_t len, struct md5_ctx *ctx)
+{
+ md5_uint32 correct_words[16];
+ const md5_uint32 *words = (md5_uint32 *)buffer;
+ size_t nwords = len / sizeof (md5_uint32);
+ const md5_uint32 *endp = words + nwords;
+ md5_uint32 A = ctx->A;
+ md5_uint32 B = ctx->B;
+ md5_uint32 C = ctx->C;
+ md5_uint32 D = ctx->D;
+
+ /* First increment the byte count. RFC 1321 specifies the possible
+ length of the file up to 2^64 bits. Here we only compute the
+ number of bytes. Do a double word increment. */
+ ctx->total[0] += len;
+ if (ctx->total[0] < len)
+ ++ctx->total[1];
+
+ /* Process all bytes in the buffer with 64 bytes in each round of
+ the loop. */
+ while (words < endp)
+ {
+ md5_uint32 *cwp = correct_words;
+ md5_uint32 A_save = A;
+ md5_uint32 B_save = B;
+ md5_uint32 C_save = C;
+ md5_uint32 D_save = D;
+
+ /* First round: using the given function, the context and a constant
+	 the next context is computed.  Because the algorithm's
+	 processing unit is a 32-bit word defined to work in little
+	 endian byte order, we may have to change the byte order
+	 before the computation.  To reduce the work for the next steps
+ we store the swapped words in the array CORRECT_WORDS. */
+
+#define OP(a, b, c, d, s, T) \
+ do \
+ { \
+ a += FF (b, c, d) + (*cwp++ = SWAP (*words)) + T; \
+ ++words; \
+ CYCLIC (a, s); \
+ a += b; \
+ } \
+ while (0)
+
+ /* It is unfortunate that C does not provide an operator for
+ cyclic rotation. Hope the C compiler is smart enough. */
+#define CYCLIC(w, s) (w = (w << s) | (w >> (32 - s)))
+
+      /* Before we start, a word about the strange constants.
+ They are defined in RFC 1321 as
+
+ T[i] = (int) (4294967296.0 * fabs (sin (i))), i=1..64
+ */
+
+ /* Round 1. */
+ OP (A, B, C, D, 7, 0xd76aa478);
+ OP (D, A, B, C, 12, 0xe8c7b756);
+ OP (C, D, A, B, 17, 0x242070db);
+ OP (B, C, D, A, 22, 0xc1bdceee);
+ OP (A, B, C, D, 7, 0xf57c0faf);
+ OP (D, A, B, C, 12, 0x4787c62a);
+ OP (C, D, A, B, 17, 0xa8304613);
+ OP (B, C, D, A, 22, 0xfd469501);
+ OP (A, B, C, D, 7, 0x698098d8);
+ OP (D, A, B, C, 12, 0x8b44f7af);
+ OP (C, D, A, B, 17, 0xffff5bb1);
+ OP (B, C, D, A, 22, 0x895cd7be);
+ OP (A, B, C, D, 7, 0x6b901122);
+ OP (D, A, B, C, 12, 0xfd987193);
+ OP (C, D, A, B, 17, 0xa679438e);
+ OP (B, C, D, A, 22, 0x49b40821);
+
+ /* For the second to fourth round we have the possibly swapped words
+ in CORRECT_WORDS. Redefine the macro to take an additional first
+ argument specifying the function to use. */
+#undef OP
+#define OP(f, a, b, c, d, k, s, T) \
+ do \
+ { \
+ a += f (b, c, d) + correct_words[k] + T; \
+ CYCLIC (a, s); \
+ a += b; \
+ } \
+ while (0)
+
+ /* Round 2. */
+ OP (FG, A, B, C, D, 1, 5, 0xf61e2562);
+ OP (FG, D, A, B, C, 6, 9, 0xc040b340);
+ OP (FG, C, D, A, B, 11, 14, 0x265e5a51);
+ OP (FG, B, C, D, A, 0, 20, 0xe9b6c7aa);
+ OP (FG, A, B, C, D, 5, 5, 0xd62f105d);
+ OP (FG, D, A, B, C, 10, 9, 0x02441453);
+ OP (FG, C, D, A, B, 15, 14, 0xd8a1e681);
+ OP (FG, B, C, D, A, 4, 20, 0xe7d3fbc8);
+ OP (FG, A, B, C, D, 9, 5, 0x21e1cde6);
+ OP (FG, D, A, B, C, 14, 9, 0xc33707d6);
+ OP (FG, C, D, A, B, 3, 14, 0xf4d50d87);
+ OP (FG, B, C, D, A, 8, 20, 0x455a14ed);
+ OP (FG, A, B, C, D, 13, 5, 0xa9e3e905);
+ OP (FG, D, A, B, C, 2, 9, 0xfcefa3f8);
+ OP (FG, C, D, A, B, 7, 14, 0x676f02d9);
+ OP (FG, B, C, D, A, 12, 20, 0x8d2a4c8a);
+
+ /* Round 3. */
+ OP (FH, A, B, C, D, 5, 4, 0xfffa3942);
+ OP (FH, D, A, B, C, 8, 11, 0x8771f681);
+ OP (FH, C, D, A, B, 11, 16, 0x6d9d6122);
+ OP (FH, B, C, D, A, 14, 23, 0xfde5380c);
+ OP (FH, A, B, C, D, 1, 4, 0xa4beea44);
+ OP (FH, D, A, B, C, 4, 11, 0x4bdecfa9);
+ OP (FH, C, D, A, B, 7, 16, 0xf6bb4b60);
+ OP (FH, B, C, D, A, 10, 23, 0xbebfbc70);
+ OP (FH, A, B, C, D, 13, 4, 0x289b7ec6);
+ OP (FH, D, A, B, C, 0, 11, 0xeaa127fa);
+ OP (FH, C, D, A, B, 3, 16, 0xd4ef3085);
+ OP (FH, B, C, D, A, 6, 23, 0x04881d05);
+ OP (FH, A, B, C, D, 9, 4, 0xd9d4d039);
+ OP (FH, D, A, B, C, 12, 11, 0xe6db99e5);
+ OP (FH, C, D, A, B, 15, 16, 0x1fa27cf8);
+ OP (FH, B, C, D, A, 2, 23, 0xc4ac5665);
+
+ /* Round 4. */
+ OP (FI, A, B, C, D, 0, 6, 0xf4292244);
+ OP (FI, D, A, B, C, 7, 10, 0x432aff97);
+ OP (FI, C, D, A, B, 14, 15, 0xab9423a7);
+ OP (FI, B, C, D, A, 5, 21, 0xfc93a039);
+ OP (FI, A, B, C, D, 12, 6, 0x655b59c3);
+ OP (FI, D, A, B, C, 3, 10, 0x8f0ccc92);
+ OP (FI, C, D, A, B, 10, 15, 0xffeff47d);
+ OP (FI, B, C, D, A, 1, 21, 0x85845dd1);
+ OP (FI, A, B, C, D, 8, 6, 0x6fa87e4f);
+ OP (FI, D, A, B, C, 15, 10, 0xfe2ce6e0);
+ OP (FI, C, D, A, B, 6, 15, 0xa3014314);
+ OP (FI, B, C, D, A, 13, 21, 0x4e0811a1);
+ OP (FI, A, B, C, D, 4, 6, 0xf7537e82);
+ OP (FI, D, A, B, C, 11, 10, 0xbd3af235);
+ OP (FI, C, D, A, B, 2, 15, 0x2ad7d2bb);
+ OP (FI, B, C, D, A, 9, 21, 0xeb86d391);
+
+ /* Add the starting values of the context. */
+ A += A_save;
+ B += B_save;
+ C += C_save;
+ D += D_save;
+ }
+
+ /* Put checksum in context given as argument. */
+ ctx->A = A;
+ ctx->B = B;
+ ctx->C = C;
+ ctx->D = D;
+}
--- /dev/null
+/* md5.h - Declaration of functions and data types used for MD5 sum
+ computing library functions.
+ Copyright (C) 1995, 1996 Free Software Foundation, Inc.
+ NOTE: The canonical source of this file is maintained with the GNU C
+ Library. Bugs can be reported to bug-glibc@prep.ai.mit.edu.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by the
+ Free Software Foundation; either version 2, or (at your option) any
+ later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software Foundation,
+ Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */
+
+#ifndef _MD5_H
+#define _MD5_H 1
+
+#include <stdio.h>
+
+#if defined HAVE_LIMITS_H || _LIBC
+# include <limits.h>
+#endif
+
+/* The following contortions are an attempt to use the C preprocessor
+ to determine an unsigned integral type that is 32 bits wide. An
+ alternative approach is to use autoconf's AC_CHECK_SIZEOF macro, but
+ doing that would require that the configure script compile and *run*
+ the resulting executable. Locally running cross-compiled executables
+ is usually not possible. */
+
+#ifdef _LIBC
+# include <sys/types.h>
+typedef u_int32_t md5_uint32;
+#else
+# if defined __STDC__ && __STDC__
+# define UINT_MAX_32_BITS 4294967295U
+# else
+# define UINT_MAX_32_BITS 0xFFFFFFFF
+# endif
+
+/* If UINT_MAX isn't defined, assume it's a 32-bit type.
+ This should be valid for all systems GNU cares about because
+ that doesn't include 16-bit systems, and only modern systems
+ (that certainly have <limits.h>) have 64+-bit integral types. */
+
+# ifndef UINT_MAX
+# define UINT_MAX UINT_MAX_32_BITS
+# endif
+
+# if UINT_MAX == UINT_MAX_32_BITS
+ typedef unsigned int md5_uint32;
+# else
+# if USHRT_MAX == UINT_MAX_32_BITS
+ typedef unsigned short md5_uint32;
+# else
+# if ULONG_MAX == UINT_MAX_32_BITS
+ typedef unsigned long md5_uint32;
+# else
+ /* The following line is intended to evoke an error.
+ Using #error is not portable enough. */
+ "Cannot determine unsigned 32-bit data type."
+# endif
+# endif
+# endif
+#endif
+
+/* Structure to save state of computation between the single steps. */
+struct md5_ctx
+{
+ md5_uint32 A;
+ md5_uint32 B;
+ md5_uint32 C;
+ md5_uint32 D;
+
+ md5_uint32 total[2];
+ md5_uint32 buflen;
+ char buffer[128];
+};
+
+/*
+ * The following three functions build up the low-level interface used in
+ * the functions `md5_stream' and `md5_buffer'.
+ */
+
+/* Initialize structure containing state of computation.
+ (RFC 1321, 3.3: Step 3) */
+extern void md5_init_ctx PARAMS ((struct md5_ctx *ctx));
+
+/* Starting with the result of former calls of this function (or the
+   initialization function), update the context for the next LEN bytes
+ starting at BUFFER.
+ It is necessary that LEN is a multiple of 64!!! */
+extern void md5_process_block PARAMS ((const void *buffer, size_t len,
+ struct md5_ctx *ctx));
+
+/* Starting with the result of former calls of this function (or the
+   initialization function), update the context for the next LEN bytes
+ starting at BUFFER.
+ It is NOT required that LEN is a multiple of 64. */
+extern void md5_process_bytes PARAMS ((const void *buffer, size_t len,
+ struct md5_ctx *ctx));
+
+/* Process the remaining bytes in the buffer and put result from CTX
+ in first 16 bytes following RESBUF. The result is always in little
+   endian byte order, so that a byte-wise output yields the wanted
+ ASCII representation of the message digest.
+
+ IMPORTANT: On some systems it is required that RESBUF is correctly
+   aligned for a 32-bit value. */
+extern void *md5_finish_ctx PARAMS ((struct md5_ctx *ctx, void *resbuf));
+
+
+/* Put result from CTX in first 16 bytes following RESBUF. The result is
+   always in little endian byte order, so that a byte-wise output
+   yields the wanted ASCII representation of the message digest.
+
+ IMPORTANT: On some systems it is required that RESBUF is correctly
+   aligned for a 32-bit value. */
+extern void *md5_read_ctx PARAMS ((const struct md5_ctx *ctx, void *resbuf));
+
+
+/* Compute MD5 message digest for bytes read from STREAM. The
+ resulting message digest number will be written into the 16 bytes
+ beginning at RESBLOCK. */
+extern int md5_stream PARAMS ((FILE *stream, void *resblock));
+
+/* Compute MD5 message digest for LEN bytes beginning at BUFFER. The
+ result is always in little endian byte order, so that a byte-wise
+   output yields the wanted ASCII representation of the message
+ digest. */
+extern void *md5_buffer PARAMS ((const char *buffer, size_t len,
+ void *resblock));
+
+#endif
--- /dev/null
+/* mswindows.c -- Windows-specific support
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+/* #### Someone document these functions! */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <winsock.h>
+#include <string.h>
+#include <assert.h>
+
+#include "wget.h"
+#include "url.h"
+
+char *argv0;
+
+/* Defined in log.c. */
+void redirect_output (const char *);
+
+static int windows_nt_p;
+
+
+/* Emulation of Unix sleep. */
+unsigned int
+sleep (unsigned seconds)
+{
+ Sleep (1000 * seconds);
+ /* Unix sleep() is interruptible. To make it semi-usable, it
+ returns a value that says how much it "really" slept, or some
+ junk like that. Ignore it. */
+ return 0U;
+}
+
+static char *
+read_registry (HKEY hkey, char *subkey, char *valuename, char *buf, int *len)
+{
+ HKEY result;
+ DWORD size = *len;
+ DWORD type = REG_SZ;
+  if (RegOpenKeyEx (hkey, subkey, 0, KEY_READ, &result) != ERROR_SUCCESS)
+ return NULL;
+ if (RegQueryValueEx (result, valuename, NULL, &type, buf, &size) != ERROR_SUCCESS)
+ buf = NULL;
+ *len = size;
+ RegCloseKey (result);
+ return buf;
+}
+
+char *
+pwd_cuserid (char *where)
+{
+ char buf[32], *ptr;
+ int len = sizeof (buf);
+ if (GetUserName (buf, (LPDWORD) &len) == TRUE)
+ {
+ ;
+ }
+ else if (!!(ptr = getenv ("USERNAME")))
+ {
+ strcpy (buf, ptr);
+ }
+ else if (!read_registry (HKEY_LOCAL_MACHINE, "Network\\Logon",
+ "username", buf, &len))
+ {
+ return NULL;
+ }
+ if (where)
+ {
+ strncpy (where, buf, len);
+ return where;
+ }
+ return xstrdup (buf);
+}
+
+void
+windows_main_junk (int *argc, char **argv, char **exec_name)
+{
+ char *p;
+
+ argv0 = argv[0];
+
+ /* Remove .EXE from filename if it has one. */
+ *exec_name = xstrdup (*exec_name);
+ p = strrchr (*exec_name, '.');
+ if (p)
+ *p = '\0';
+}
+\f
+/* Winsock stuff. */
+
+static void
+ws_cleanup (void)
+{
+ WSACleanup ();
+}
+
+static void
+ws_hangup (void)
+{
+ redirect_output (_("\n\
+CTRL+Break received, redirecting output to `%s'.\n\
+Execution continued in background.\n\
+You may stop Wget by pressing CTRL+ALT+DELETE.\n"));
+}
+
+void
+fork_to_background (void)
+{
+  /* Whether we supplied our own opt.lfilename here.  */
+ int changedp = 0;
+
+ if (!opt.lfilename)
+ {
+ opt.lfilename = unique_name (DEFAULT_LOGFILE);
+ changedp = 1;
+ }
+ printf (_("Continuing in background.\n"));
+ if (changedp)
+ printf (_("Output will be written to `%s'.\n"), opt.lfilename);
+
+ ws_hangup ();
+ if (!windows_nt_p)
+ FreeConsole ();
+}
+
+static BOOL WINAPI
+ws_handler (DWORD dwEvent)
+{
+ switch (dwEvent)
+ {
+#ifdef CTRLC_BACKGND
+ case CTRL_C_EVENT:
+#endif
+#ifdef CTRLBREAK_BACKGND
+ case CTRL_BREAK_EVENT:
+#endif
+ fork_to_background ();
+ break;
+ case CTRL_SHUTDOWN_EVENT:
+ case CTRL_CLOSE_EVENT:
+ case CTRL_LOGOFF_EVENT:
+ default:
+ WSACleanup ();
+ return FALSE;
+ }
+ return TRUE;
+}
+
+void
+ws_changetitle (char *url, int nurl)
+{
+ char *title_buf;
+ if (!nurl)
+ return;
+
+ title_buf = (char *)xmalloc (strlen (url) + 20);
+ sprintf (title_buf, "Wget %s%s", url, nurl == 1 ? "" : " ...");
+ /* #### What are the semantics of SetConsoleTitle? Will it free the
+ given memory later? */
+ SetConsoleTitle (title_buf);
+}
+
+char *
+ws_mypath (void)
+{
+ static char *wspathsave;
+ char *buffer;
+ int rrr;
+ char *ptr;
+
+ if (wspathsave)
+ {
+ return wspathsave;
+ }
+ ptr = strrchr (argv0, '\\');
+ if (ptr)
+ {
+ *(ptr + 1) = '\0';
+ wspathsave = (char*) xmalloc (strlen(argv0)+1);
+ strcpy (wspathsave, argv0);
+ return wspathsave;
+ }
+ buffer = (char*) xmalloc (256);
+ rrr = SearchPath (NULL, argv0, strchr (argv0, '.') ? NULL : ".EXE",
+ 256, buffer, &ptr);
+ if (rrr && rrr <= 256)
+ {
+ *ptr = '\0';
+ wspathsave = (char*) xmalloc (strlen(buffer)+1);
+ strcpy (wspathsave, buffer);
+ return wspathsave;
+ }
+ free (buffer);
+ return NULL;
+}
+
+void
+ws_help (const char *name)
+{
+ char *mypath = ws_mypath ();
+
+ if (mypath)
+ {
+ struct stat sbuf;
+ char *buf = (char *)alloca (strlen (mypath) + strlen (name) + 4 + 1);
+ sprintf (buf, "%s%s.HLP", mypath, name);
+ if (stat (buf, &sbuf) == 0)
+ {
+ printf (_("Starting WinHelp %s\n"), buf);
+ WinHelp (NULL, buf, HELP_INDEX, NULL);
+ }
+ else
+ {
+ printf ("%s: %s\n", buf, strerror (errno));
+ }
+ }
+}
+
+void
+ws_startup (void)
+{
+ WORD requested;
+ WSADATA data;
+ int err;
+ OSVERSIONINFO os;
+
+  os.dwOSVersionInfoSize = sizeof (os);
+  if (GetVersionEx (&os) == TRUE
+      && os.dwPlatformId == VER_PLATFORM_WIN32_NT)
+ windows_nt_p = 1;
+
+ requested = MAKEWORD (1, 1);
+ err = WSAStartup (requested, &data);
+
+ if (err != 0)
+ {
+ fprintf (stderr, _("%s: Couldn't find usable socket driver.\n"),
+ exec_name);
+ exit (1);
+ }
+
+  if (LOBYTE (data.wVersion) < 1 || (LOBYTE (data.wVersion) == 1 &&
+				     HIBYTE (data.wVersion) < 1))
+ {
+ fprintf (stderr, _("%s: Couldn't find usable socket driver.\n"),
+ exec_name);
+ WSACleanup ();
+ exit (1);
+ }
+ atexit (ws_cleanup);
+ SetConsoleCtrlHandler (ws_handler, TRUE);
+}
--- /dev/null
+/* Declarations for windows
+   Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef MSWINDOWS_H
+#define MSWINDOWS_H
+
+#ifndef S_ISDIR
+# define S_ISDIR(m) (((m) & (_S_IFMT)) == (_S_IFDIR))
+#endif
+#ifndef S_ISLNK
+# define S_ISLNK(a) 0
+#endif
+
+/* We have strcasecmp and strncasecmp, just under a different name. */
+#define strcasecmp stricmp
+#define strncasecmp strnicmp
+
+/* No lstat on Windows.  */
+#define lstat stat
+
+#define PATH_SEPARATOR '\\'
+
+/* Microsoft says stat is _stat, Borland doesn't */
+#ifdef _MSC_VER
+# define stat _stat
+#endif
+
+#define REALCLOSE(x) closesocket (x)
+
+/* read & write don't work with sockets on Windows 95. */
+#define READ(fd, buf, cnt) recv ((fd), (buf), (cnt), 0)
+#define WRITE(fd, buf, cnt) send ((fd), (buf), (cnt), 0)
+
+/* #### Do we need this? */
+#include <direct.h>
+
+/* Windows compilers accept only one arg to mkdir. */
+#ifndef __BORLANDC__
+# define mkdir(a, b) _mkdir(a)
+#else /* __BORLANDC__ */
+# define mkdir(a, b) mkdir(a)
+#endif /* __BORLANDC__ */
+
+#include <windows.h>
+
+/* Declarations of various socket errors: */
+
+#define EWOULDBLOCK WSAEWOULDBLOCK
+#define EINPROGRESS WSAEINPROGRESS
+#define EALREADY WSAEALREADY
+#define ENOTSOCK WSAENOTSOCK
+#define EDESTADDRREQ WSAEDESTADDRREQ
+#define EMSGSIZE WSAEMSGSIZE
+#define EPROTOTYPE WSAEPROTOTYPE
+#define ENOPROTOOPT WSAENOPROTOOPT
+#define EPROTONOSUPPORT WSAEPROTONOSUPPORT
+#define ESOCKTNOSUPPORT WSAESOCKTNOSUPPORT
+#define EOPNOTSUPP WSAEOPNOTSUPP
+#define EPFNOSUPPORT WSAEPFNOSUPPORT
+#define EAFNOSUPPORT WSAEAFNOSUPPORT
+#define EADDRINUSE WSAEADDRINUSE
+#define EADDRNOTAVAIL WSAEADDRNOTAVAIL
+#define ENETDOWN WSAENETDOWN
+#define ENETUNREACH WSAENETUNREACH
+#define ENETRESET WSAENETRESET
+#define ECONNABORTED WSAECONNABORTED
+#define ECONNRESET WSAECONNRESET
+#define ENOBUFS WSAENOBUFS
+#define EISCONN WSAEISCONN
+#define ENOTCONN WSAENOTCONN
+#define ESHUTDOWN WSAESHUTDOWN
+#define ETOOMANYREFS WSAETOOMANYREFS
+#define ETIMEDOUT WSAETIMEDOUT
+#define ECONNREFUSED WSAECONNREFUSED
+#define ELOOP WSAELOOP
+#define EHOSTDOWN WSAEHOSTDOWN
+#define EHOSTUNREACH WSAEHOSTUNREACH
+#define EPROCLIM WSAEPROCLIM
+#define EUSERS WSAEUSERS
+#define EDQUOT WSAEDQUOT
+#define ESTALE WSAESTALE
+#define EREMOTE WSAEREMOTE
+
+/* Public functions. */
+
+unsigned int sleep (unsigned);
+void ws_startup (void);
+void ws_changetitle (char*, int);
+char *ws_mypath (void);
+void ws_help (const char *);
+void windows_main_junk (int *, char **, char **);
+
+#endif /* MSWINDOWS_H */
--- /dev/null
+/* Read and parse the .netrc file to get hosts, accounts, and passwords.
+ Copyright (C) 1996, Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+/* This file used to be kept in synch with the code in Fetchmail, but
+ the latter has diverged since. */
+
+#ifdef HAVE_CONFIG_H
+# include <config.h>
+#endif
+
+#include <stdio.h>
+#include <ctype.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <sys/types.h>
+#include <errno.h>
+
+#include "wget.h"
+#include "utils.h"
+#include "netrc.h"
+#include "init.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+#define NETRC_FILE_NAME ".netrc"
+
+acc_t *netrc_list;
+
+static acc_t *parse_netrc PARAMS ((const char *));
+
+/* Return the correct user and password, given the host, user (as
+ given in the URL), and password (as given in the URL).  *ACC and
+ *PASSWD may be left as NULL if no matching entry is found.
+
+ If SLACK_DEFAULT is set, allow looking for a "default" account.
+ You will typically turn it off for HTTP. */
+void
+search_netrc (const char *host, const char **acc, const char **passwd,
+ int slack_default)
+{
+ acc_t *l;
+ static int processed_netrc;
+
+ if (!opt.netrc)
+ return;
+ /* Find ~/.netrc. */
+ if (!processed_netrc)
+ {
+ char *home = home_dir();
+
+ netrc_list = NULL;
+ processed_netrc = 1;
+ if (home)
+ {
+ int err;
+ struct stat buf;
+ char *path = (char *)alloca (strlen (home) + 1
+ + strlen (NETRC_FILE_NAME) + 1);
+ sprintf (path, "%s/%s", home, NETRC_FILE_NAME);
+ free (home);
+ err = stat (path, &buf);
+ if (err == 0)
+ netrc_list = parse_netrc (path);
+ }
+ }
+ /* If nothing to do... */
+ if (!netrc_list)
+ return;
+ /* Acc and password found; all OK. */
+ if (*acc && *passwd)
+ return;
+ if (!*acc && !slack_default)
+ return;
+ /* Some data not given -- try finding the host. */
+ for (l = netrc_list; l; l = l->next)
+ {
+ if (!l->host)
+ continue;
+ else if (!strcasecmp (l->host, host))
+ break;
+ }
+ if (l)
+ {
+ if (*acc)
+ {
+ /* Looking for password in .netrc. */
+ if (!strcmp (l->acc, *acc))
+ *passwd = l->passwd; /* usernames match; password OK */
+ else
+ *passwd = NULL; /* usernames don't match */
+ }
+ else /* NOT *acc */
+ {
+ /* If password was given, use it. The account is l->acc. */
+ *acc = l->acc;
+ if (l->passwd)
+ *passwd = l->passwd;
+ }
+ return;
+ }
+ else
+ {
+ if (!slack_default)
+ return;
+ if (*acc)
+ return;
+ /* Try looking for the default account. */
+ for (l = netrc_list; l; l = l->next)
+ if (!l->host)
+ break;
+ if (!l)
+ return;
+ *acc = l->acc;
+ if (!*passwd)
+ *passwd = l->passwd;
+ return;
+ }
+}
+
+
+#ifdef STANDALONE
+/* Normally, these functions would be defined by your package. */
+# define xmalloc malloc
+# define xstrdup strdup
+
+/* Read a whole line from FP.  The line is read into realloc-ed
+ storage that grows exponentially, doubling after each overflow to
+ minimize the number of calls to realloc().
+
+ It is not an exemplar of correctness, since it strips the
+ newline (and no, there is no way to know whether there was a
+ newline before EOF). */
+# define xrealloc realloc
+# define DYNAMIC_LINE_BUFFER 40
+
+char *
+read_whole_line (FILE *fp)
+{
+ char *line;
+ int i, bufsize, c;
+
+ i = 0;
+ bufsize = DYNAMIC_LINE_BUFFER;
+ line = xmalloc(bufsize);
+ /* Construct the line. */
+ while ((c = getc(fp)) != EOF && c != '\n')
+ {
+ if (i > bufsize - 1)
+ line = (char *)xrealloc(line, (bufsize <<= 1));
+ line[i++] = c;
+ }
+ if (c == EOF && !i)
+ {
+ free(line);
+ return NULL;
+ }
+
+ /* Check for overflow at zero-termination (no need to double the
+ buffer in this case). */
+ if (i == bufsize)
+ line = (char *)xrealloc(line, i + 1);
+ line[i] = '\0';
+ return line;
+}
+
+#endif /* STANDALONE */
+
+/* Maybe add NEWENTRY to the account information list, LIST. NEWENTRY is
+ set to a ready-to-use acc_t, in any event. */
+static void
+maybe_add_to_list (acc_t **newentry, acc_t **list)
+{
+ acc_t *a, *l;
+ a = *newentry;
+ l = *list;
+
+ /* We need an account name in order to add the entry to the list. */
+ if (a && ! a->acc)
+ {
+ /* Free any allocated space. */
+ free (a->host);
+ free (a->acc);
+ free (a->passwd);
+ }
+ else
+ {
+ if (a)
+ {
+ /* Add the current machine into our list. */
+ a->next = l;
+ l = a;
+ }
+
+ /* Allocate a new acc_t structure. */
+ a = (acc_t *)xmalloc (sizeof (acc_t));
+ }
+
+ /* Zero the structure, so that it is ready to use. */
+ memset (a, 0, sizeof(*a));
+
+ /* Return the new pointers. */
+ *newentry = a;
+ *list = l;
+ return;
+}
+
+
+/* Parse a .netrc file (as described in the ftp(1) manual page). */
+static acc_t *
+parse_netrc (const char *path)
+{
+ FILE *fp;
+ char *line, *p, *tok, *premature_token;
+ acc_t *current, *retval;
+ int ln;
+
+ /* The latest token we've seen in the file. */
+ enum
+ {
+ tok_nothing, tok_account, tok_login, tok_macdef, tok_machine, tok_password
+ } last_token = tok_nothing;
+
+ current = retval = NULL;
+
+ fp = fopen (path, "r");
+ if (!fp)
+ {
+ fprintf (stderr, _("%s: Cannot read %s (%s).\n"), exec_name,
+ path, strerror (errno));
+ return retval;
+ }
+
+ /* Initialize the file data. */
+ ln = 0;
+ premature_token = NULL;
+
+ /* While there are lines in the file... */
+ while ((line = read_whole_line (fp)))
+ {
+ ln ++;
+
+ /* Parse the line. */
+ p = line;
+
+ /* If the line is empty, then end any macro definition. */
+ if (last_token == tok_macdef && !*p)
+ /* End of macro if the line is empty. */
+ last_token = tok_nothing;
+
+ /* If we are defining macros, then skip parsing the line. */
+ while (*p && last_token != tok_macdef)
+ {
+ /* Skip any whitespace. */
+ while (*p && ISSPACE (*p))
+ p ++;
+
+ /* Discard end-of-line comments. */
+ if (*p == '#')
+ break;
+
+ tok = p;
+
+ /* Find the end of the token. */
+ while (*p && !ISSPACE (*p))
+ p ++;
+
+ /* Null-terminate the token, if it isn't already. */
+ if (*p)
+ *p ++ = '\0';
+
+ switch (last_token)
+ {
+ case tok_login:
+ if (current)
+ current->acc = xstrdup (tok);
+ else
+ premature_token = "login";
+ break;
+
+ case tok_machine:
+ /* Start a new machine entry. */
+ maybe_add_to_list (&current, &retval);
+ current->host = xstrdup (tok);
+ break;
+
+ case tok_password:
+ if (current)
+ current->passwd = xstrdup (tok);
+ else
+ premature_token = "password";
+ break;
+
+ /* We handle most of tok_macdef above. */
+ case tok_macdef:
+ if (!current)
+ premature_token = "macdef";
+ break;
+
+ /* We don't handle the account keyword at all. */
+ case tok_account:
+ if (!current)
+ premature_token = "account";
+ break;
+
+ /* We handle tok_nothing below this switch. */
+ case tok_nothing:
+ break;
+ }
+
+ if (premature_token)
+ {
+ fprintf (stderr, _("\
+%s: %s:%d: warning: \"%s\" token appears before any machine name\n"),
+ exec_name, path, ln, premature_token);
+ premature_token = NULL;
+ }
+
+ if (last_token != tok_nothing)
+ /* We got a value, so reset the token state. */
+ last_token = tok_nothing;
+ else
+ {
+ /* Fetch the next token. */
+ if (!strcmp (tok, "account"))
+ last_token = tok_account;
+ else if (!strcmp (tok, "default"))
+ {
+ maybe_add_to_list (&current, &retval);
+ }
+ else if (!strcmp (tok, "login"))
+ last_token = tok_login;
+
+ else if (!strcmp (tok, "macdef"))
+ last_token = tok_macdef;
+
+ else if (!strcmp (tok, "machine"))
+ last_token = tok_machine;
+
+ else if (!strcmp (tok, "password"))
+ last_token = tok_password;
+
+ else
+ fprintf (stderr, _("%s: %s:%d: unknown token \"%s\"\n"),
+ exec_name, path, ln, tok);
+ }
+ }
+
+ free (line);
+ }
+
+ fclose (fp);
+
+ /* Finalize the last machine entry we found. */
+ maybe_add_to_list (&current, &retval);
+ free (current);
+
+ /* Reverse the order of the list so that it appears in file order. */
+ current = retval;
+ retval = NULL;
+ while (current)
+ {
+ acc_t *saved_reference;
+
+ /* Change the direction of the pointers. */
+ saved_reference = current->next;
+ current->next = retval;
+
+ /* Advance to the next node. */
+ retval = current;
+ current = saved_reference;
+ }
+
+ return retval;
+}
+
+
+/* Free a netrc list. */
+void
+free_netrc(acc_t *l)
+{
+ acc_t *t;
+
+ while (l)
+ {
+ t = l->next;
+ FREE_MAYBE (l->acc);
+ FREE_MAYBE (l->passwd);
+ FREE_MAYBE (l->host);
+ free(l);
+ l = t;
+ }
+}
+
+#ifdef STANDALONE
+#include <sys/types.h>
+#include <sys/stat.h>
+
+int
+main (int argc, char **argv)
+{
+ struct stat sb;
+ char *program_name, *file, *target;
+ acc_t *head, *a;
+
+ if (argc < 2 || argc > 3)
+ {
+ fprintf (stderr, _("Usage: %s NETRC [HOSTNAME]\n"), argv[0]);
+ exit (1);
+ }
+
+ program_name = argv[0];
+ file = argv[1];
+ target = argv[2];
+
+ if (stat (file, &sb))
+ {
+ fprintf (stderr, _("%s: cannot stat %s: %s\n"), argv[0], file,
+ strerror (errno));
+ exit (1);
+ }
+
+ head = parse_netrc (file);
+ a = head;
+ while (a)
+ {
+ /* Skip if we have a target and this isn't it. */
+ if (target && a->host && strcmp (target, a->host))
+ {
+ a = a->next;
+ continue;
+ }
+
+ if (!target)
+ {
+ /* Print the host name if we have no target. */
+ if (a->host)
+ fputs (a->host, stdout);
+ else
+ fputs ("DEFAULT", stdout);
+
+ fputc (' ', stdout);
+ }
+
+ /* Print the account name. */
+ fputs (a->acc, stdout);
+
+ if (a->passwd)
+ {
+ /* Print the password, if there is any. */
+ fputc (' ', stdout);
+ fputs (a->passwd, stdout);
+ }
+
+ fputc ('\n', stdout);
+
+ /* Exit if we found the target. */
+ if (target)
+ exit (0);
+ a = a->next;
+ }
+
+ /* Exit with failure if we had a target, success otherwise. */
+ if (target)
+ exit (1);
+
+ exit (0);
+}
+#endif /* STANDALONE */
--- /dev/null
+/* Declarations for netrc.c
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+typedef struct _acc_t
+{
+ char *host; /* NULL if this is the default machine
+ entry. */
+ char *acc;
+ char *passwd; /* NULL if there is no password. */
+ struct _acc_t *next;
+} acc_t;
+
+void search_netrc PARAMS((const char *, const char **, const char **, int));
+void free_netrc PARAMS((acc_t *l));
--- /dev/null
+/* struct options.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+/* Needed for the FILE type (used by the dfp member below). */
+#include <stdio.h>
+
+struct options
+{
+ int verbose; /* Are we verbose? */
+ int quiet; /* Are we quiet? */
+ int ntry; /* Number of tries per URL */
+ int background; /* Whether we should work in background. */
+ int kill_longer; /* Do we reject messages with *more*
+ data than specified in
+ content-length? */
+ int ignore_length; /* Do we heed content-length at all? */
+ int recursive; /* Are we recursive? */
+ int spanhost; /* Do we span across hosts in
+ recursion? */
+ int relative_only; /* Follow only relative links. */
+ int no_parent; /* Restrict access to the parent
+ directory. */
+ int simple_check; /* Should we use simple checking
+ (strcmp) or do we create a host
+ hash and call gethostbyname? */
+ int reclevel; /* Maximum level of recursion */
+ int dirstruct; /* Do we build the directory structure
+ as we go along? */
+ int no_dirstruct; /* Do we hate dirstruct? */
+ int cut_dirs; /* Number of directory components to cut. */
+ int add_hostdir; /* Do we add hostname directory? */
+ int noclobber; /* Disables clobbering of existing
+ data. */
+ char *dir_prefix; /* The top of directory tree */
+ char *lfilename; /* Log filename */
+ int no_flush; /* If non-zero, inhibit flushing log. */
+ char *input_filename; /* Input filename */
+ int force_html; /* Is the input file an HTML file? */
+
+ int spider; /* Is Wget in spider mode? */
+
+ char **accepts; /* List of patterns to accept. */
+ char **rejects; /* List of patterns to reject. */
+ char **excludes; /* List of excluded FTP directories. */
+ char **includes; /* List of FTP directories to
+ follow. */
+
+ char **domains; /* See host.c */
+ char **exclude_domains;
+
+ int follow_ftp; /* Are FTP URL-s followed in recursive
+ retrieving? */
+ int retr_symlinks; /* Whether we retrieve symlinks in
+ FTP. */
+ char *output_document; /* The output file to which the
+ documents will be printed. */
+ FILE *dfp; /* The file pointer to the output
+ document. */
+
+ int always_rest; /* Always use REST. */
+ char *ftp_acc; /* FTP username */
+ char *ftp_pass; /* FTP password */
+ int netrc; /* Whether to read .netrc. */
+ int ftp_glob; /* FTP globbing */
+ int ftp_pasv; /* Passive FTP. */
+
+ char *http_user; /* HTTP user. */
+ char *http_passwd; /* HTTP password. */
+ char *user_header; /* User-defined header(s). */
+
+ int use_proxy; /* Do we use proxy? */
+ int proxy_cache; /* Do we load from proxy cache? */
+ char *http_proxy, *ftp_proxy;
+ char **no_proxy;
+ char *base_href;
+ char *proxy_user; /*oli*/
+ char *proxy_passwd;
+#ifdef HAVE_SELECT
+ long timeout; /* The value of read timeout in
+ seconds. */
+#endif
+ long wait; /* The wait period between retries. */
+ int use_robots; /* Do we heed robots.txt? */
+
+ long quota; /* Maximum number of bytes to
+ retrieve. */
+ long downloaded; /* How much we downloaded already. */
+ int numurls; /* Number of successfully downloaded
+ URLs */
+
+ int server_response; /* Do we print server response? */
+ int save_headers; /* Do we save headers together with
+ file? */
+
+#ifdef DEBUG
+ int debug; /* Debugging on/off */
+#endif /* DEBUG */
+
+ int timestamping; /* Whether to use time-stamping. */
+ int backups; /* Are backups made? */
+
+ char *useragent; /* Naughty User-Agent, which can be
+ set to something other than
+ Wget. */
+ int convert_links; /* Will the links be converted
+ locally? */
+ int remove_listing; /* Do we remove .listing files
+ generated by FTP? */
+ int htmlify; /* Do we HTML-ify the OS-dependent
+ listings? */
+
+ long dot_bytes; /* How many bytes in a printing
+ dot. */
+ int dots_in_line; /* How many dots in one line. */
+ int dot_spacing; /* How many dots between spacings. */
+
+ int delete_after; /* Whether the files will be deleted
+ after download. */
+};
+
+#ifndef OPTIONS_DEFINED_HERE
+extern struct options opt;
+#endif
--- /dev/null
+/* Buffering read.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+/* This is a simple implementation of buffering IO-read functions. */
+
+#include <config.h>
+
+#include <stdio.h>
+
+#include "wget.h"
+#include "rbuf.h"
+#include "connect.h"
+
+void
+rbuf_initialize (struct rbuf *rbuf, int fd)
+{
+ rbuf->fd = fd;
+ rbuf->buffer_pos = rbuf->buffer;
+ rbuf->buffer_left = 0;
+}
+
+int
+rbuf_initialized_p (struct rbuf *rbuf)
+{
+ return rbuf->fd != -1;
+}
+
+void
+rbuf_uninitialize (struct rbuf *rbuf)
+{
+ rbuf->fd = -1;
+}
+
+/* Currently unused -- see RBUF_READCHAR. */
+#if 0
+/* Function version of RBUF_READCHAR. */
+int
+rbuf_readchar (struct rbuf *rbuf, char *store)
+{
+ return RBUF_READCHAR (rbuf, store);
+}
+#endif
+
+/* Like rbuf_readchar(), only don't move the buffer position. */
+int
+rbuf_peek (struct rbuf *rbuf, char *store)
+{
+ if (!rbuf->buffer_left)
+ {
+ int res;
+ rbuf->buffer_pos = rbuf->buffer;
+ rbuf->buffer_left = 0;
+ res = iread (rbuf->fd, rbuf->buffer, sizeof (rbuf->buffer));
+ if (res <= 0)
+ return res;
+ rbuf->buffer_left = res;
+ }
+ *store = *rbuf->buffer_pos;
+ return 1;
+}
+
+/* Flush RBUF's buffer to WHERE. Flush MAXSIZE bytes at most.
+ Returns the number of bytes actually copied. If the buffer is
+ empty, 0 is returned. */
+int
+rbuf_flush (struct rbuf *rbuf, char *where, int maxsize)
+{
+ if (!rbuf->buffer_left)
+ return 0;
+ else
+ {
+ int howmuch = MINVAL (rbuf->buffer_left, maxsize);
+
+ if (where)
+ memcpy (where, rbuf->buffer_pos, howmuch);
+ rbuf->buffer_left -= howmuch;
+ rbuf->buffer_pos += howmuch;
+ return howmuch;
+ }
+}
+
+/* Discard any cached data in RBUF. */
+void
+rbuf_discard (struct rbuf *rbuf)
+{
+ rbuf->buffer_left = 0;
+ rbuf->buffer_pos = rbuf->buffer;
+}
--- /dev/null
+/* Declarations for rbuf.c.
+ Copyright (C) 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef RBUF_H
+#define RBUF_H
+
+/* Retrieval stream */
+struct rbuf
+{
+ int fd;
+ char buffer[4096]; /* the input buffer */
+ char *buffer_pos; /* current position in the buffer */
+ size_t buffer_left; /* number of bytes left in the buffer:
+ buffer_left = buffer_end - buffer_pos */
+ int internal_dont_touch_this; /* used by RBUF_READCHAR macro */
+};
+
+/* Read a character from RBUF. If there is anything in the buffer,
+ the character is returned from the buffer. Otherwise, refill the
+ buffer and return the first character.
+
+ The return value is the same as with read(2). On buffered read,
+ the function returns 1.
+
+ #### That return value is totally screwed up, and is a direct
+ result of historical implementation of header code. The macro
+ should return the character or EOF, and in case of error store it
+ to rbuf->err or something. */
+#define RBUF_READCHAR(rbuf, store) \
+((rbuf)->buffer_left \
+ ? (--(rbuf)->buffer_left, \
+ *((char *) (store)) = *(rbuf)->buffer_pos++, 1) \
+ : ((rbuf)->buffer_pos = (rbuf)->buffer, \
+ ((((rbuf)->internal_dont_touch_this \
+ = iread ((rbuf)->fd, (rbuf)->buffer, \
+ sizeof ((rbuf)->buffer))) <= 0) \
+ ? (rbuf)->internal_dont_touch_this \
+ : ((rbuf)->buffer_left = (rbuf)->internal_dont_touch_this - 1, \
+ *((char *) (store)) = *(rbuf)->buffer_pos++, \
+ 1))))
+
+/* Return the file descriptor of RBUF. */
+#define RBUF_FD(rbuf) ((rbuf)->fd)
+
+/* Function declarations */
+void rbuf_initialize PARAMS ((struct rbuf *, int));
+int rbuf_initialized_p PARAMS ((struct rbuf *));
+void rbuf_uninitialize PARAMS ((struct rbuf *));
+int rbuf_readchar PARAMS ((struct rbuf *, char *));
+int rbuf_peek PARAMS ((struct rbuf *, char *));
+int rbuf_flush PARAMS ((struct rbuf *, char *, int));
+void rbuf_discard PARAMS ((struct rbuf *));
+
+#endif /* RBUF_H */
--- /dev/null
+/* Handling of recursive HTTP retrieving.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif /* HAVE_STRING_H */
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif /* HAVE_UNISTD_H */
+#include <errno.h>
+#include <assert.h>
+#include <ctype.h>
+#include <sys/types.h>
+
+#include "wget.h"
+#include "url.h"
+#include "recur.h"
+#include "utils.h"
+#include "retr.h"
+#include "ftp.h"
+#include "fnmatch.h"
+#include "host.h"
+
+extern char *version_string;
+
+#define ROBOTS_FILENAME "robots.txt"
+
+/* #### Many of these lists should really be hashtables! */
+
+/* List of downloaded URLs. */
+static urlpos *urls_downloaded;
+
+/* List of HTML URLs. */
+static slist *urls_html;
+
+/* List of undesirable-to-load URLs. */
+static slist *ulist;
+
+/* List of forbidden locations. */
+static char **forbidden = NULL;
+
+/* Current recursion depth. */
+static int depth;
+
+/* Base directory we're recursing from (used by no_parent). */
+static char *base_dir;
+
+/* The host name for which we last checked robots. */
+static char *robots_host;
+
+static int first_time = 1;
+
+/* Construct the robots URL. */
+static struct urlinfo *robots_url PARAMS ((const char *, const char *));
+static uerr_t retrieve_robots PARAMS ((const char *, const char *));
+static char **parse_robots PARAMS ((const char *));
+static int robots_match PARAMS ((struct urlinfo *, char **));
+
+
+/* Cleanup the data structures associated with recursive retrieving
+ (the variables above). */
+void
+recursive_cleanup (void)
+{
+ free_slist (ulist);
+ ulist = NULL;
+ free_vec (forbidden);
+ forbidden = NULL;
+ free_slist (urls_html);
+ urls_html = NULL;
+ free_urlpos (urls_downloaded);
+ urls_downloaded = NULL;
+ FREE_MAYBE (base_dir);
+ FREE_MAYBE (robots_host);
+ first_time = 1;
+}
+
+/* Reset FIRST_TIME to 1, so that some action can be taken in
+ recursive_retrieve(). */
+void
+recursive_reset (void)
+{
+ first_time = 1;
+}
+
+/* The core of recursive retrieving. Endless recursion is avoided by
+ storing all visited URLs in a linked list, which is checked
+ before loading any URL. That way no URL can get loaded twice.
+
+ The function also supports specification of maximum recursion depth
+ and a number of other goodies. */
+uerr_t
+recursive_retrieve (const char *file, const char *this_url)
+{
+ char *constr, *filename, *newloc;
+ char *canon_this_url = NULL;
+ int dt, inl;
+ int this_url_ftp; /* See the explanation below. */
+ uerr_t err;
+ struct urlinfo *rurl;
+ urlpos *url_list, *cur_url;
+ char *rfile; /* For robots */
+ struct urlinfo *u;
+
+ assert (this_url != NULL);
+ assert (file != NULL);
+ /* If quota was exceeded earlier, bail out. */
+ if (opt.quota && (opt.downloaded > opt.quota))
+ return QUOTEXC;
+ /* Cache the current URL in the list. */
+ if (first_time)
+ {
+ ulist = add_slist (ulist, this_url, 0);
+ urls_downloaded = NULL;
+ urls_html = NULL;
+ /* Enter this_url to the slist, in original and "enhanced" form. */
+ u = newurl ();
+ err = parseurl (this_url, u, 0);
+ if (err == URLOK)
+ {
+ ulist = add_slist (ulist, u->url, 0);
+ urls_downloaded = add_url (urls_downloaded, u->url, file);
+ urls_html = add_slist (urls_html, file, NOSORT);
+ if (opt.no_parent)
+ base_dir = xstrdup (u->dir); /* Set the base dir. */
+ /* Set the canonical this_url to be sent as referer. This
+ problem exists only when running the first time. */
+ canon_this_url = xstrdup (u->url);
+ }
+ else
+ {
+ DEBUGP (("Double yuck! The *base* URL is broken.\n"));
+ base_dir = NULL;
+ }
+ freeurl (u, 1);
+ depth = 1;
+ robots_host = NULL;
+ forbidden = NULL;
+ first_time = 0;
+ }
+ else
+ ++depth;
+
+ /* Bail out if opt.reclevel is exceeded. */
+ if ((opt.reclevel != 0) && (depth > opt.reclevel))
+ {
+ DEBUGP (("Recursion depth %d exceeded max. depth %d.\n",
+ depth, opt.reclevel));
+ --depth;
+ return RECLEVELEXC;
+ }
+
+ /* Determine whether this_url is an FTP URL. If it is, it means
+ that the retrieval is done through a proxy. In that case, FTP
+ links will be followed by default and recursion will not be
+ turned off when following them. */
+ this_url_ftp = (urlproto (this_url) == URLFTP);
+
+ /* Get the URL-s from an HTML file: */
+ url_list = get_urls_html (file,
+ canon_this_url ? canon_this_url : this_url, 0);
+
+ /* Decide what to do with each of the URLs. A URL will be loaded if
+ it meets several requirements, discussed later. */
+ for (cur_url = url_list; cur_url; cur_url = cur_url->next)
+ {
+ /* If quota was exceeded earlier, bail out. */
+ if (opt.quota && (opt.downloaded > opt.quota))
+ break;
+ /* Parse the URL for convenient use in other functions, as well
+ as to get the optimized form. It also checks URL integrity. */
+ u = newurl ();
+ if (parseurl (cur_url->url, u, 0) != URLOK)
+ {
+ DEBUGP (("Yuck! A bad URL.\n"));
+ freeurl (u, 1);
+ continue;
+ }
+ if (u->proto == URLFILE)
+ {
+ DEBUGP (("Nothing to do with file:// around here.\n"));
+ freeurl (u, 1);
+ continue;
+ }
+ assert (u->url != NULL);
+ constr = xstrdup (u->url);
+
+ /* Several checks to determine whether a file is acceptable to load:
+ 1. check if URL is ftp, and we don't load it
+ 2. check for relative links (if relative_only is set)
+ 3. check for domain
+ 4. check for no-parent
+ 5. check for excludes && includes
+ 6. check for suffix
+ 7. check for same host (if spanhost is unset), with possible
+ gethostbyname baggage
+ 8. check for robots.txt
+
+ Addendum: If the URL is FTP, and it is to be loaded, only the
+ domain and suffix settings are "stronger".
+
+ Note that .html and (yuck) .htm will get loaded
+ regardless of suffix rules (but that is remedied later with
+ unlink).
+
+ More time- and memory- consuming tests should be put later on
+ the list. */
+
+ /* inl is set if the URL we are working on (constr) is stored in
+ ulist. Using it is crucial to avoid the incessant calls to
+ in_slist, which is quite slow. */
+ inl = in_slist (ulist, constr);
+
+ /* If it is FTP, and FTP is not followed, chuck it out. */
+ if (!inl)
+ if (u->proto == URLFTP && !opt.follow_ftp && !this_url_ftp)
+ {
+ DEBUGP (("Uh, it is FTP but I'm not in the mood to follow FTP.\n"));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ /* If it is absolute link and they are not followed, chuck it
+ out. */
+ if (!inl && u->proto != URLFTP)
+ if (opt.relative_only && !(cur_url->flags & URELATIVE))
+ {
+ DEBUGP (("It doesn't really look like a relative link.\n"));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ /* If its domain is not to be accepted/looked-up, chuck it out. */
+ if (!inl)
+ if (!accept_domain (u))
+ {
+ DEBUGP (("I don't like the smell of that domain.\n"));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ /* Check for parent directory. */
+ if (!inl && opt.no_parent
+ /* If the new URL is FTP and the old was not, ignore
+ opt.no_parent. */
+ && !(!this_url_ftp && u->proto == URLFTP))
+ {
+ /* Check for base_dir first. */
+ if (!(base_dir && frontcmp (base_dir, u->dir)))
+ {
+ /* Failing that, check for parent dir. */
+ struct urlinfo *ut = newurl ();
+ if (parseurl (this_url, ut, 0) != URLOK)
+ DEBUGP (("Double yuck! The *base* URL is broken.\n"));
+ else if (!frontcmp (ut->dir, u->dir))
+ {
+ /* Failing that too, kill the URL. */
+ DEBUGP (("Trying to escape parental guidance with no_parent on.\n"));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ freeurl (ut, 1);
+ }
+ }
+ /* If the file does not match the acceptance list, or is on the
+ rejection list, chuck it out. The same goes for the
+ directory exclude- and include- lists. */
+ if (!inl && (opt.includes || opt.excludes))
+ {
+ if (!accdir (u->dir, ALLABS))
+ {
+ DEBUGP (("%s (%s) is excluded/not-included.\n", constr, u->dir));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ }
+ if (!inl)
+ {
+ char *suf = NULL;
+ /* We check for acceptance/rejection rules only for non-HTML
+ documents. Since we don't know whether they really are
+ HTML, it will be deduced from (an OR-ed list):
+
+ 1) u->file is "" (meaning it is a directory)
+ 2) suffix exists, AND:
+ a) it is "html", OR
+ b) it is "htm"
+
+ If the file *is* supposed to be HTML, it will *not* be
+ subject to acc/rej rules. That's why the `!'. */
+ if (!
+ (!*u->file
+ || (((suf = suffix (constr)) != NULL)
+ && (!strcmp (suf, "html") || !strcmp (suf, "htm")))))
+ {
+ if (!acceptable (u->file))
+ {
+ DEBUGP (("%s (%s) does not match acc/rej rules.\n",
+ constr, u->file));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ }
+ FREE_MAYBE (suf);
+ }
+ /* Optimize the URL (which includes possible DNS lookup) only
+ after all other possibilities have been exhausted. */
+ if (!inl)
+ {
+ if (!opt.simple_check)
+ opt_url (u);
+ else
+ {
+ char *p;
+ /* Just lowercase the hostname. */
+ for (p = u->host; *p; p++)
+ *p = tolower (*p);
+ free (u->url);
+ u->url = str_url (u, 0);
+ }
+ free (constr);
+ constr = xstrdup (u->url);
+ inl = in_slist (ulist, constr);
+ if (!inl && !((u->proto == URLFTP) && !this_url_ftp))
+ if (!opt.spanhost && this_url && !same_host (this_url, constr))
+ {
+ DEBUGP (("This is not the same hostname as the parent's.\n"));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ }
+ /* What about robots.txt? */
+ if (!inl && opt.use_robots && u->proto == URLHTTP)
+ {
+ /* Since Wget knows about only one set of robot rules at a
+ time, /robots.txt must be reloaded whenever a new host is
+ accessed.
+
+ robots_host holds the host the current `forbid' variable
+ is assigned to. */
+ if (!robots_host || !same_host (robots_host, u->host))
+ {
+ FREE_MAYBE (robots_host);
+ /* Now make robots_host the new host, no matter what the
+ result will be. So if there is no /robots.txt on the
+ site, Wget will not retry getting robots all the
+ time. */
+ robots_host = xstrdup (u->host);
+ free_vec (forbidden);
+ forbidden = NULL;
+ err = retrieve_robots (constr, ROBOTS_FILENAME);
+ if (err == ROBOTSOK)
+ {
+ rurl = robots_url (constr, ROBOTS_FILENAME);
+ rfile = url_filename (rurl);
+ forbidden = parse_robots (rfile);
+ freeurl (rurl, 1);
+ free (rfile);
+ }
+ }
+
+ /* Now that we have (or don't have) robots, we can check for
+ them. */
+ if (!robots_match (u, forbidden))
+ {
+ DEBUGP (("Stuffing %s because %s forbids it.\n", constr,
+ ROBOTS_FILENAME));
+ ulist = add_slist (ulist, constr, 0);
+ inl = 1;
+ }
+ }
+
+ filename = NULL;
+ /* If it wasn't chucked out, do something with it. */
+ if (!inl)
+ {
+ DEBUGP (("I've decided to load it -> "));
+ /* Add it to the list of already-loaded URL-s. */
+ ulist = add_slist (ulist, constr, 0);
+ /* Automatically followed FTPs will *not* be downloaded
+ recursively. */
+ if (u->proto == URLFTP)
+ {
+ /* Don't you adore side-effects? */
+ opt.recursive = 0;
+ }
+ /* Reset its type. */
+ dt = 0;
+ /* Retrieve it. */
+ retrieve_url (constr, &filename, &newloc,
+ canon_this_url ? canon_this_url : this_url, &dt);
+ if (u->proto == URLFTP)
+ {
+ /* Restore... */
+ opt.recursive = 1;
+ }
+ if (newloc)
+ {
+ free (constr);
+ constr = newloc;
+ }
+ /* In case of convert_links: If there was no error, add it to
+ the list of downloaded URLs. We might need it for
+ conversion. */
+ if (opt.convert_links && filename)
+ {
+ if (dt & RETROKF)
+ {
+ urls_downloaded = add_url (urls_downloaded, constr, filename);
+ /* If the URL is HTML, note it. */
+ if (dt & TEXTHTML)
+ urls_html = add_slist (urls_html, filename, NOSORT);
+ }
+ }
+ /* If there was no error, and the type is text/html, parse
+ it recursively. */
+ if (dt & TEXTHTML)
+ {
+ if (dt & RETROKF)
+ recursive_retrieve (filename, constr);
+ }
+ else
+ DEBUGP (("%s is not text/html so we don't chase.\n",
+ filename ? filename: "(null)"));
+ /* If a suffix-rejected file was loaded only because it was HTML,
+ undo the error now. */
+ if (filename && (opt.delete_after || !acceptable (filename)))
+ {
+ logprintf (LOG_VERBOSE,
+ (opt.delete_after ? _("Removing %s.\n")
+ : _("Removing %s since it should be rejected.\n")),
+ filename);
+ if (unlink (filename))
+ logprintf (LOG_NOTQUIET, "unlink: %s\n", strerror (errno));
+ dt &= ~RETROKF;
+ }
+ /* If everything was OK, and links are to be converted, let's
+ store the local filename. */
+ if (opt.convert_links && (dt & RETROKF) && (filename != NULL))
+ {
+ cur_url->flags |= UABS2REL;
+ cur_url->local_name = xstrdup (filename);
+ }
+ }
+ else
+ DEBUGP (("%s already in list, so we don't load.\n", constr));
+ /* Free filename and constr. */
+ FREE_MAYBE (filename);
+ FREE_MAYBE (constr);
+ freeurl (u, 1);
+ }
+ if (opt.convert_links)
+ convert_links (file, url_list);
+ /* Free the linked list of URL-s. */
+ free_urlpos (url_list);
+ /* Free the canonical this_url. */
+ FREE_MAYBE (canon_this_url);
+ /* Decrement the recursion depth. */
+ --depth;
+ if (opt.quota && (opt.downloaded > opt.quota))
+ return QUOTEXC;
+ else
+ return RETROK;
+}
+\f
+/* Simple calls to convert_links will often fail because only the
+ downloaded files are converted, and Wget cannot know which files
+ will be converted in the future. So, if we have file fileone.html
+ with:
+
+ <a href=/c/something.gif>
+
+ and /c/something.gif was not downloaded because it exceeded the
+ recursion depth, the reference will *not* be changed.
+
+ However, later we can encounter /c/something.gif from an "upper"
+ level HTML (let's call it filetwo.html), and it gets downloaded.
+
+ But now we have a problem because /c/something.gif will be
+ correctly transformed in filetwo.html, but not in fileone.html,
+ since Wget could not have known that /c/something.gif will be
+ downloaded in the future.
+
+ This is why Wget must, after the whole retrieval, call
+ convert_all_links to go once more through the entire list of
+ retrieved HTML-s, and re-convert them.
+
+ All the downloaded HTMLs are kept in urls_html, and downloaded URLs
+ in urls_downloaded. From these two lists information is
+ extracted. */
+void
+convert_all_links (void)
+{
+ uerr_t res;
+ urlpos *l1, *l2, *urls;
+ struct urlinfo *u;
+ slist *html;
+ urlpos *urlhtml;
+
+ for (html = urls_html; html; html = html->next)
+ {
+ DEBUGP (("Rescanning %s\n", html->string));
+ /* Determine the URL of the HTML file. get_urls_html will need
+ it. */
+ for (urlhtml = urls_downloaded; urlhtml; urlhtml = urlhtml->next)
+ if (!strcmp (urlhtml->local_name, html->string))
+ break;
+ if (urlhtml)
+ DEBUGP (("It should correspond to %s.\n", urlhtml->url));
+ else
+ DEBUGP (("I cannot find the corresponding URL.\n"));
+ /* Parse the HTML file... */
+ urls = get_urls_html (html->string, urlhtml ? urlhtml->url : NULL, 1);
+ if (!urls)
+ continue;
+ for (l1 = urls; l1; l1 = l1->next)
+ {
+ /* The URL must be in canonical form to be compared. */
+ u = newurl ();
+ res = parseurl (l1->url, u, 0);
+ if (res != URLOK)
+ {
+ freeurl (u, 1);
+ continue;
+ }
+ /* We decide the direction of conversion according to whether
+ a URL was downloaded. Downloaded URLs will be converted
+ ABS2REL, whereas non-downloaded will be converted REL2ABS.
+ Note: not yet implemented; only ABS2REL works. */
+ for (l2 = urls_downloaded; l2; l2 = l2->next)
+ if (!strcmp (l2->url, u->url))
+ {
+ DEBUGP (("%s flagged for conversion, local %s\n",
+ l2->url, l2->local_name));
+ break;
+ }
+ /* Clear the flags. */
+ l1->flags &= ~ (UABS2REL | UREL2ABS);
+ /* Decide on the conversion direction. */
+ if (l2)
+ {
+ l1->flags |= UABS2REL;
+ l1->local_name = xstrdup (l2->local_name);
+ }
+ else
+ {
+ l1->flags |= UREL2ABS;
+ l1->local_name = NULL;
+ }
+ freeurl (u, 1);
+ }
+ /* Convert the links in the file. */
+ convert_links (html->string, urls);
+ /* Free the data. */
+ free_urlpos (urls);
+ }
+}
+\f
+/* Robots support. */
+
+/* Construct the robots URL. */
+static struct urlinfo *
+robots_url (const char *url, const char *robots_filename)
+{
+ struct urlinfo *u = newurl ();
+ uerr_t err;
+
+ err = parseurl (url, u, 0);
+ assert (err == URLOK && u->proto == URLHTTP);
+ free (u->file);
+ free (u->dir);
+ free (u->url);
+ u->dir = xstrdup ("");
+ u->file = xstrdup (robots_filename);
+ u->url = str_url (u, 0);
+ return u;
+}
+
+/* Retrieves the robots_filename from the root server directory, if
+ possible. Returns ROBOTSOK if robots were retrieved OK, and
+ NOROBOTS if robots could not be retrieved for any reason. */
+static uerr_t
+retrieve_robots (const char *url, const char *robots_filename)
+{
+ int dt;
+ uerr_t err;
+ struct urlinfo *u;
+
+ u = robots_url (url, robots_filename);
+ logputs (LOG_VERBOSE, _("Loading robots.txt; please ignore errors.\n"));
+ err = retrieve_url (u->url, NULL, NULL, NULL, &dt);
+ freeurl (u, 1);
+ if (err == RETROK)
+ return ROBOTSOK;
+ else
+ return NOROBOTS;
+}
+
+/* Parse the robots_filename and return the disallowed path components
+ in a malloc-ed vector of character pointers.
+
+ It should be fully compliant with the syntax as described in the
+ file norobots.txt, adopted by the robots mailing list
+ (robots@webcrawler.com). */
+static char **
+parse_robots (const char *robots_filename)
+{
+ FILE *fp;
+ char **entries;
+ char *line, *cmd, *str, *p;
+ char *base_version, *version;
+ int len, num, i;
+ int wget_matched; /* is the part meant for Wget? */
+
+ entries = NULL;
+
+ num = 0;
+ fp = fopen (robots_filename, "rb");
+ if (!fp)
+ return NULL;
+
+ /* Kill version number. */
+ if (opt.useragent)
+ {
+ STRDUP_ALLOCA (base_version, opt.useragent);
+ STRDUP_ALLOCA (version, opt.useragent);
+ }
+ else
+ {
+ int ver_len = 10 + strlen (version_string);
+ base_version = (char *)alloca (ver_len);
+ sprintf (base_version, "Wget/%s", version_string);
+ version = (char *)alloca (ver_len);
+ sprintf (version, "Wget/%s", version_string);
+ }
+ for (p = version; *p; p++)
+ *p = tolower (*p);
+ for (p = base_version; *p && *p != '/'; p++)
+ *p = tolower (*p);
+ *p = '\0';
+
+ /* Setting this to 1 means that Wget considers itself under
+ restrictions by default, even if the User-Agent field is not
+ present. However, if it finds the user-agent set to anything
+ other than Wget, the rest will be ignored (up to the following
+ User-Agent field). Thus you may have something like:
+
+ Disallow: 1
+ Disallow: 2
+ User-Agent: stupid-robot
+ Disallow: 3
+ Disallow: 4
+ User-Agent: Wget*
+ Disallow: 5
+ Disallow: 6
+ User-Agent: *
+ Disallow: 7
+
+ In this case the 1, 2, 5, 6 and 7 disallow lines will be
+ stored. */
+ wget_matched = 1;
+ while ((line = read_whole_line (fp)))
+ {
+ len = strlen (line);
+ /* Destroy <CR> if there is one. */
+ if (len && line[len - 1] == '\r')
+ line[len - 1] = '\0';
+ /* According to specifications, optional space may be at the
+ end... */
+ DEBUGP (("Line: %s\n", line));
+ /* Skip spaces. */
+ for (cmd = line; *cmd && ISSPACE (*cmd); cmd++);
+ if (!*cmd)
+ {
+ free (line);
+ DEBUGP (("(chucked out)\n"));
+ continue;
+ }
+ /* Look for ':'. */
+ for (str = cmd; *str && *str != ':'; str++);
+ if (!*str)
+ {
+ free (line);
+ DEBUGP (("(chucked out)\n"));
+ continue;
+ }
+ /* Zero-terminate the command. */
+ *str++ = '\0';
+ /* Look for the string beginning... */
+ for (; *str && ISSPACE (*str); str++);
+ /* Look for comments and kill them off. */
+ for (p = str; *p; p++)
+ if (*p && ISSPACE (*p) && *(p + 1) == '#')
+ {
+ /* We have found a shell-style comment `<sp>+ #'. Now
+ rewind to the beginning of the spaces and place '\0'
+ there. */
+ while (p > str && ISSPACE (*p))
+ --p;
+ if (p == str)
+ *p = '\0';
+ else
+ *(p + 1) = '\0';
+ break;
+ }
+ if (!strcasecmp (cmd, "User-agent"))
+ {
+ int match = 0;
+ /* Lowercase the agent string. */
+ for (p = str; *p; p++)
+ *p = tolower (*p);
+ /* If the string is `*', it matches. */
+ if (*str == '*' && !*(str + 1))
+ match = 1;
+ else
+ {
+ /* If the string contains wildcards, we'll run it through
+ fnmatch(). */
+ if (has_wildcards_p (str))
+ {
+ /* If the string contains '/', compare with the full
+ version. Else, compare it to base_version. */
+ if (strchr (str, '/'))
+ match = !fnmatch (str, version, 0);
+ else
+ match = !fnmatch (str, base_version, 0);
+ }
+ else /* Substring search */
+ {
+ if (strstr (version, str))
+ match = 1;
+ else
+ match = 0;
+ }
+ }
+ /* If Wget is not matched, skip all the entries up to the
+ next User-agent field. */
+ wget_matched = match;
+ }
+ else if (!wget_matched)
+ {
+ free (line);
+ DEBUGP (("(chucking out since it is not applicable for Wget)\n"));
+ continue;
+ }
+ else if (!strcasecmp (cmd, "Disallow"))
+ {
+ /* If "Disallow" is empty, the robot is welcome. */
+ if (!*str)
+ {
+ free_vec (entries);
+ entries = (char **)xmalloc (sizeof (char *));
+ *entries = NULL;
+ num = 0;
+ }
+ else
+ {
+ /* Strip trailing spaces, according to specifications; do it
+ before the string is saved with xstrdup. */
+ for (i = strlen (str) - 1; i >= 0 && ISSPACE (str[i]); i--)
+ str[i] = '\0';
+ entries = (char **)xrealloc (entries, (num + 2) * sizeof (char *));
+ entries[num] = xstrdup (str);
+ entries[++num] = NULL;
+ }
+ }
+ else
+ {
+ /* unknown command */
+ DEBUGP (("(chucked out)\n"));
+ }
+ free (line);
+ }
+ fclose (fp);
+ return entries;
+}
+
+/* May the URL url be loaded according to disallowing rules stored in
+ forbidden? */
+static int
+robots_match (struct urlinfo *u, char **forbidden)
+{
+ int l;
+
+ if (!forbidden)
+ return 1;
+ DEBUGP (("Matching %s against: ", u->path));
+ for (; *forbidden; forbidden++)
+ {
+ DEBUGP (("%s ", *forbidden));
+ l = strlen (*forbidden);
+ /* If dir is forbidden, we may not load the file. */
+ if (strncmp (u->path, *forbidden, l) == 0)
+ {
+ DEBUGP (("matched.\n"));
+ return 0; /* Matches, i.e. does not load... */
+ }
+ }
+ DEBUGP (("not matched.\n"));
+ return 1;
+}
--- /dev/null
+/* Declarations for recur.c.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef RECUR_H
+#define RECUR_H
+
+void recursive_cleanup PARAMS ((void));
+void recursive_reset PARAMS ((void));
+uerr_t recursive_retrieve PARAMS ((const char *, const char *));
+
+void convert_all_links PARAMS ((void));
+
+#endif /* RECUR_H */
--- /dev/null
+/* File retrieval.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif /* HAVE_UNISTD_H */
+#include <errno.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif /* HAVE_STRING_H */
+#include <ctype.h>
+#include <assert.h>
+
+#include "wget.h"
+#include "utils.h"
+#include "retr.h"
+#include "url.h"
+#include "recur.h"
+#include "ftp.h"
+#include "host.h"
+#include "connect.h"
+
+/* Internal variables used by the timer. */
+static long internal_secs, internal_msecs;
+
+void logflush PARAMS ((void));
+
+/* From http.c. */
+uerr_t http_loop PARAMS ((struct urlinfo *, char **, int *));
+\f
+/* Flags for show_progress(). */
+enum spflags { SP_NONE, SP_INIT, SP_FINISH };
+
+static int show_progress PARAMS ((long, long, enum spflags));
+
+/* Reads the contents of file descriptor FD, until it is closed, or a
+ read error occurs. The data is read in 8K chunks, and stored to
+ stream FP, which should have been opened for writing. If RBUF is
+ non-NULL and its file descriptor is equal to FD, flush RBUF first.
+ This function will *not* use the rbuf_* functions!
+
+ The EXPECTED argument is passed to show_progress() unchanged, but
+ otherwise ignored.
+
+ If opt.verbose is set, the progress is also shown. RESTVAL
+ represents a value from which to start downloading (which will be
+ shown accordingly). If RESTVAL is non-zero, the stream should have
+ been open for appending.
+
+ The function exits and returns codes of 0, -1 and -2 if the
+ connection was closed, there was a read error, or if it could not
+ write to the output stream, respectively.
+
+ IMPORTANT: The function flushes the contents of the buffer in
+ rbuf_flush() before actually reading from fd. If you wish to read
+ from fd immediately, flush or discard the buffer. */
+int
+get_contents (int fd, FILE *fp, long *len, long restval, long expected,
+ struct rbuf *rbuf)
+{
+ int res;
+ static char c[8192];
+
+ *len = restval;
+ if (opt.verbose)
+ show_progress (restval, expected, SP_INIT);
+ if (rbuf && RBUF_FD (rbuf) == fd)
+ {
+ while ((res = rbuf_flush (rbuf, c, sizeof (c))) != 0)
+ {
+ if (fwrite (c, sizeof (char), res, fp) < res)
+ return -2;
+ if (opt.verbose)
+ {
+ if (show_progress (res, expected, SP_NONE))
+ fflush (fp);
+ }
+ *len += res;
+ }
+ }
+ /* Read from fd while there is available data. */
+ do
+ {
+ res = iread (fd, c, sizeof (c));
+ if (res > 0)
+ {
+ if (fwrite (c, sizeof (char), res, fp) < res)
+ return -2;
+ if (opt.verbose)
+ {
+ if (show_progress (res, expected, SP_NONE))
+ fflush (fp);
+ }
+ *len += res;
+ }
+ } while (res > 0);
+ if (res < -1)
+ res = -1;
+ if (opt.verbose)
+ show_progress (0, expected, SP_FINISH);
+ return res;
+}
+
+static void
+print_percentage (long bytes, long expected)
+{
+ int percentage = (int)(100.0 * bytes / expected);
+ logprintf (LOG_VERBOSE, " [%3d%%]", percentage);
+}
+
+/* Show the dotted progress report of file loading. Called with
+ length and a flag to tell it whether to reset or not. It keeps the
+ offset information in static local variables.
+
+ Return value: 1 or 0, designating whether any dots have been drawn.
+
+ If the init argument is set, the routine will initialize.
+
+ If RES is non-zero, res/line_bytes lines are skipped
+ (meaning the appropriate number of kilobytes), and the number of
+ "dots" fitting on the first line is drawn as ','. */
+static int
+show_progress (long res, long expected, enum spflags flags)
+{
+ static long line_bytes;
+ static long offs;
+ static int ndot, nrow;
+ int any_output = 0;
+
+ if (flags == SP_FINISH)
+ {
+ if (expected)
+ {
+ int dot = ndot;
+ char *tmpstr = (char *)alloca (2 * opt.dots_in_line + 1);
+ char *tmpp = tmpstr;
+ for (; dot < opt.dots_in_line; dot++)
+ {
+ if (!(dot % opt.dot_spacing))
+ *tmpp++ = ' ';
+ *tmpp++ = ' ';
+ }
+ *tmpp = '\0';
+ logputs (LOG_VERBOSE, tmpstr);
+ print_percentage (nrow * line_bytes + ndot * opt.dot_bytes + offs,
+ expected);
+ }
+ logputs (LOG_VERBOSE, "\n\n");
+ return 0;
+ }
+
+ /* Temporarily disable flushing. */
+ opt.no_flush = 1;
+ /* init set means initialization. If res is set, it also means that
+ the retrieval is *not* done from the beginning. The part that
+ was already retrieved is not shown again. */
+ if (flags == SP_INIT)
+ {
+ /* Generic initialization of static variables. */
+ offs = 0L;
+ ndot = nrow = 0;
+ line_bytes = (long)opt.dots_in_line * opt.dot_bytes;
+ if (res)
+ {
+ if (res >= line_bytes)
+ {
+ nrow = res / line_bytes;
+ res %= line_bytes;
+ logprintf (LOG_VERBOSE,
+ _("\n [ skipping %dK ]"),
+ (int) ((nrow * line_bytes) / 1024));
+ ndot = 0;
+ }
+ }
+ logprintf (LOG_VERBOSE, "\n%5ldK ->", nrow * line_bytes / 1024);
+ }
+ /* Offset gets incremented by current value. */
+ offs += res;
+ /* While offset is >= opt.dot_bytes, print dots, taking care to
+ precede every 50th dot with a status message. */
+ for (; offs >= opt.dot_bytes; offs -= opt.dot_bytes)
+ {
+ if (!(ndot % opt.dot_spacing))
+ logputs (LOG_VERBOSE, " ");
+ any_output = 1;
+ logputs (LOG_VERBOSE, flags == SP_INIT ? "," : ".");
+ ++ndot;
+ if (ndot == opt.dots_in_line)
+ {
+ ndot = 0;
+ ++nrow;
+ if (expected)
+ print_percentage (nrow * line_bytes, expected);
+ logprintf (LOG_VERBOSE, "\n%5ldK ->", nrow * line_bytes / 1024);
+ }
+ }
+ /* Reenable flushing. */
+ opt.no_flush = 0;
+ if (any_output)
+ /* Force flush. #### Oh, what a kludge! */
+ logflush ();
+ return any_output;
+}
+\f
+/* Reset the internal timer. */
+void
+reset_timer (void)
+{
+#ifdef HAVE_GETTIMEOFDAY
+ struct timeval t;
+ gettimeofday (&t, NULL);
+ internal_secs = t.tv_sec;
+ internal_msecs = t.tv_usec / 1000;
+#else
+ internal_secs = time (NULL);
+ internal_msecs = 0;
+#endif
+}
+
+/* Return the time elapsed from the last call to reset_timer(), in
+ milliseconds. */
+long
+elapsed_time (void)
+{
+#ifdef HAVE_GETTIMEOFDAY
+ struct timeval t;
+ gettimeofday (&t, NULL);
+ return ((t.tv_sec - internal_secs) * 1000
+ + (t.tv_usec / 1000 - internal_msecs));
+#else
+ return 1000 * ((long)time (NULL) - internal_secs);
+#endif
+}
+
+/* Print out the appropriate download rate. Appropriate means that if
+ the rate is > 1024 bytes per second, kilobytes are used, and if the
+ rate is > 1024 * 1024 bytes per second, megabytes are used. */
+char *
+rate (long bytes, long msecs)
+{
+ static char res[15];
+ double dlrate;
+
+ if (!msecs)
+ ++msecs;
+ dlrate = (double)1000 * bytes / msecs;
+ /* #### Should these strings be translatable? */
+ if (dlrate < 1024.0)
+ sprintf (res, "%.2f B/s", dlrate);
+ else if (dlrate < 1024.0 * 1024.0)
+ sprintf (res, "%.2f KB/s", dlrate / 1024.0);
+ else
+ sprintf (res, "%.2f MB/s", dlrate / (1024.0 * 1024.0));
+ return res;
+}
+\f
+#define USE_PROXY_P(u) (opt.use_proxy && getproxy((u)->proto) \
+ && no_proxy_match((u)->host, \
+ (const char **)opt.no_proxy))
+
+/* Retrieve the given URL. Decides which loop to call -- HTTP, FTP,
+ or simply copy it with file:// (#### the latter not yet
+ implemented!). */
+uerr_t
+retrieve_url (const char *origurl, char **file, char **newloc,
+ const char *refurl, int *dt)
+{
+ uerr_t result;
+ char *url;
+ int location_changed, already_redirected, dummy;
+ int local_use_proxy;
+ char *mynewloc, *proxy;
+ struct urlinfo *u;
+
+ /* If dt is NULL, just ignore it. */
+ if (!dt)
+ dt = &dummy;
+ url = xstrdup (origurl);
+ if (newloc)
+ *newloc = NULL;
+ if (file)
+ *file = NULL;
+ already_redirected = 0;
+
+ again:
+ u = newurl ();
+ /* Parse the URL. RFC2068 requires `Location' to contain an
+ absoluteURI, but many sites break this requirement. #### We
+ should be liberal and accept a relative location, too. */
+ result = parseurl (url, u, already_redirected);
+ if (result != URLOK)
+ {
+ freeurl (u, 1);
+ logprintf (LOG_NOTQUIET, "%s: %s.\n", url, uerrmsg (result));
+ return result;
+ }
+
+ /* Set the referer. */
+ if (refurl)
+ u->referer = xstrdup (refurl);
+ else
+ u->referer = NULL;
+
+ local_use_proxy = USE_PROXY_P (u);
+ if (local_use_proxy)
+ {
+ struct urlinfo *pu = newurl ();
+
+ /* Copy the original URL to new location. */
+ memcpy (pu, u, sizeof (*u));
+ pu->proxy = NULL; /* A minor correction :) */
+ /* Initialize u to nil. */
+ memset (u, 0, sizeof (*u));
+ u->proxy = pu;
+ /* Get the appropriate proxy server, appropriate for the
+ current protocol. */
+ proxy = getproxy (pu->proto);
+ if (!proxy)
+ {
+ logputs (LOG_NOTQUIET, _("Could not find proxy host.\n"));
+ freeurl (u, 1);
+ return PROXERR;
+ }
+ /* Parse the proxy URL. */
+ result = parseurl (proxy, u, 0);
+ if (result != URLOK || u->proto != URLHTTP)
+ {
+ if (u->proto == URLHTTP)
+ logprintf (LOG_NOTQUIET, "Proxy %s: %s.\n", proxy, uerrmsg (result));
+ else
+ logprintf (LOG_NOTQUIET, _("Proxy %s: Must be HTTP.\n"), proxy);
+ freeurl (u, 1);
+ return PROXERR;
+ }
+ u->proto = URLHTTP;
+ }
+
+ assert (u->proto != URLFILE); /* #### Implement me! */
+ mynewloc = NULL;
+
+ if (u->proto == URLHTTP)
+ result = http_loop (u, &mynewloc, dt);
+ else if (u->proto == URLFTP)
+ {
+ /* If this is a redirection, we must not allow recursive FTP
+ retrieval, so we save recursion to oldrec, and restore it
+ later. */
+ int oldrec = opt.recursive;
+ if (already_redirected)
+ opt.recursive = 0;
+ result = ftp_loop (u, dt);
+ opt.recursive = oldrec;
+ /* There is a possibility of having HTTP being redirected to
+ FTP. In these cases we must decide whether the text is HTML
+ according to the suffix. The HTML suffixes are `.html' and
+ `.htm', case-insensitive.
+
+ #### All of this is, of course, crap. These types should be
+ determined through mailcap. */
+ if (already_redirected && u->local && (u->proto == URLFTP ))
+ {
+ char *suf = suffix (u->local);
+ if (suf && (!strcasecmp (suf, "html") || !strcasecmp (suf, "htm")))
+ *dt |= TEXTHTML;
+ FREE_MAYBE (suf);
+ }
+ }
+ location_changed = (result == NEWLOCATION);
+ if (location_changed)
+ {
+ /* Check for redirection to oneself. */
+ if (url_equal (url, mynewloc))
+ {
+ logprintf (LOG_NOTQUIET, _("%s: Redirection to itself.\n"),
+ mynewloc);
+ free (url);
+ free (mynewloc);
+ freeurl (u, 1);
+ return WRONGCODE;
+ }
+ if (mynewloc)
+ {
+ free (url);
+ url = mynewloc;
+ }
+ freeurl (u, 1);
+ already_redirected = 1;
+ goto again;
+ }
+ if (file)
+ {
+ if (u->local)
+ *file = xstrdup (u->local);
+ else
+ *file = NULL;
+ }
+ freeurl (u, 1);
+
+ if (newloc)
+ *newloc = url;
+ else
+ free (url);
+
+ return result;
+}
+
+/* Find the URL-s in the file and call retrieve_url() for each of
+ them. If HTML is non-zero, treat the file as HTML, and construct
+ the URL-s accordingly.
+
+ If opt.recursive is set, call recursive_retrieve() for each file. */
+uerr_t
+retrieve_from_file (const char *file, int html, int *count)
+{
+ uerr_t status;
+ urlpos *url_list, *cur_url;
+
+ /* If spider-mode is on, we do not want get_urls_html barfing
+ errors on baseless links. */
+ url_list = (html ? get_urls_html (file, NULL, opt.spider)
+ : get_urls_file (file));
+ status = RETROK; /* Suppose everything is OK. */
+ *count = 0; /* Reset the URL count. */
+ recursive_reset ();
+ for (cur_url = url_list; cur_url; cur_url = cur_url->next, ++*count)
+ {
+ char *filename, *new_file;
+ int dt;
+
+ if (opt.quota && opt.downloaded > opt.quota)
+ {
+ status = QUOTEXC;
+ break;
+ }
+ status = retrieve_url (cur_url->url, &filename, &new_file, NULL, &dt);
+ if (opt.recursive && status == RETROK && (dt & TEXTHTML))
+ status = recursive_retrieve (filename, new_file ? new_file : cur_url->url);
+
+ if (filename && opt.delete_after && file_exists_p (filename))
+ {
+ logprintf (LOG_VERBOSE, _("Removing %s.\n"), filename);
+ if (unlink (filename))
+ logprintf (LOG_NOTQUIET, "unlink: %s\n", strerror (errno));
+ dt &= ~RETROKF;
+ }
+
+ FREE_MAYBE (new_file);
+ FREE_MAYBE (filename);
+ }
+
+ /* Free the linked list of URL-s. */
+ free_urlpos (url_list);
+
+ return status;
+}
+
+/* Print `giving up', or `retrying', depending on the impending
+ action. N1 and N2 are the attempt number and the attempt limit. */
+void
+printwhat (int n1, int n2)
+{
+ logputs (LOG_VERBOSE, (n1 == n2) ? _("Giving up.\n\n") : _("Retrying.\n\n"));
+}
--- /dev/null
+/* Declarations for retr.c.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef RETR_H
+#define RETR_H
+
+#include "rbuf.h"
+
+int get_contents PARAMS ((int, FILE *, long *, long, long, struct rbuf *));
+
+uerr_t retrieve_url PARAMS ((const char *, char **, char **,
+ const char *, int *));
+uerr_t retrieve_from_file PARAMS ((const char *, int, int *));
+
+void reset_timer PARAMS ((void));
+long elapsed_time PARAMS ((void));
+char *rate PARAMS ((long, long));
+
+void printwhat PARAMS ((int, int));
+
+#endif /* RETR_H */
--- /dev/null
+/* Dirty system-dependent hacks.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+/* This file is included by wget.h. Random .c files need not include
+ it. */
+
+#ifndef SYSDEP_H
+#define SYSDEP_H
+
+/* We need these to be playing with various stuff. */
+#ifdef TIME_WITH_SYS_TIME
+# include <sys/time.h>
+# include <time.h>
+#else /* not TIME_WITH_SYS_TIME */
+#ifdef HAVE_SYS_TIME_H
+# include <sys/time.h>
+#else /* not HAVE_SYS_TIME_H */
+# include <time.h>
+#endif /* HAVE_SYS_TIME_H */
+#endif /* TIME_WITH_SYS_TIME */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+
+#ifdef WINDOWS
+/* Windows doesn't have some functions. Include mswindows.h so we get
+ their declarations, as well as some additional declarations and
+ macros. This must come first, so it can set things up. */
+#include <mswindows.h>
+#endif /* WINDOWS */
+
+/* Allegedly needed for compilation under OS/2: */
+#ifdef EMXOS2
+#ifndef S_ISLNK
+# define S_ISLNK(m) 0
+#endif
+#ifndef lstat
+# define lstat stat
+#endif
+#endif /* EMXOS2 */
+
+/* Reportedly, stat() macros are broken on some old systems. Those
+ systems will have to fend for themselves, as I will not introduce
+ new code to handle it.
+
+ However, I will add code for *missing* macros, and the following
+ are missing from many systems. */
+#ifndef S_ISLNK
+# define S_ISLNK(m) (((m) & S_IFMT) == S_IFLNK)
+#endif
+#ifndef S_ISDIR
+# define S_ISDIR(m) (((m) & (_S_IFMT)) == (_S_IFDIR))
+#endif
+#ifndef S_ISREG
+# define S_ISREG(m) (((m) & _S_IFMT) == _S_IFREG)
+#endif
+
+/* Bletch! SPARC compiler doesn't define sparc (needed by
+ arpa/nameser.h) when in -Xc mode. Luckily, it always defines
+ __sparc. */
+#ifdef __sparc
+#ifndef sparc
+#define sparc
+#endif
+#endif
+
+/* mswindows.h defines these. */
+#ifndef READ
+# define READ(fd, buf, cnt) read ((fd), (buf), (cnt))
+#endif
+#ifndef WRITE
+# define WRITE(fd, buf, cnt) write ((fd), (buf), (cnt))
+#endif
+#ifndef REALCLOSE
+# define REALCLOSE(x) close (x)
+#endif
+
+#define CLOSE(x) \
+do { \
+ REALCLOSE (x); \
+ DEBUGP (("Closing fd %d\n", x)); \
+} while (0)
+
+/* OK, now define a decent interface to ctype macros. The regular
+ ones misfire when you feed them chars >= 127, as they understand
+ them as "negative", which results in out-of-bound access at
+ table-lookup, yielding random results. This is, of course, totally
+ bogus. One way to "solve" this is to use `unsigned char'
+ everywhere, but it is nearly impossible to do that cleanly, because
+ all of the library functions and system calls accept `char'.
+
+ Thus we define our wrapper macros which simply cast the argument to
+ unsigned char before passing it to the <ctype.h> macro. These
+ versions are used consistently across the code. */
+#define ISASCII(x) isascii ((unsigned char)(x))
+#define ISALPHA(x) isalpha ((unsigned char)(x))
+#define ISSPACE(x) isspace ((unsigned char)(x))
+#define ISDIGIT(x) isdigit ((unsigned char)(x))
+#define ISXDIGIT(x) isxdigit ((unsigned char)(x))
+
+/* Defined in cmpt.c: */
+#ifndef HAVE_STRERROR
+char *strerror ();
+#endif
+#ifndef HAVE_STRCASECMP
+int strcasecmp ();
+#endif
+#ifndef HAVE_STRNCASECMP
+int strncasecmp ();
+#endif
+#ifndef HAVE_STRSTR
+char *strstr ();
+#endif
+#ifndef HAVE_STRPTIME
+char *strptime ();
+#endif
+
+/* SunOS brain damage -- for some reason, SunOS header files fail to
+ declare the functions below, which causes all kinds of problems
+ (compiling errors) with pointer arithmetic and similar.
+
+ This used to be only within `#ifdef STDC_HEADERS', but it got
+ tripped on other systems (AIX), thus causing havoc. Fortunately,
+ SunOS appears to be the only system braindamaged that badly, so I
+ added an extra `#ifdef sun' guard. */
+#ifndef STDC_HEADERS
+#ifdef sun
+#ifndef __cplusplus
+char *strstr ();
+char *strchr ();
+char *strrchr ();
+char *strtok ();
+char *strdup ();
+void *memcpy ();
+#endif /* not __cplusplus */
+#endif /* sun */
+#endif /* STDC_HEADERS */
+
+#endif /* SYSDEP_H */
--- /dev/null
+/* URL handling.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else
+# include <strings.h>
+#endif
+#include <ctype.h>
+#include <sys/types.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#include <errno.h>
+#include <assert.h>
+
+#include "wget.h"
+#include "utils.h"
+#include "url.h"
+#include "host.h"
+#include "html.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+/* Default port definitions */
+#define DEFAULT_HTTP_PORT 80
+#define DEFAULT_FTP_PORT 21
+
+/* URL separator (for findurl) */
+#define URL_SEPARATOR "!\"#'(),>`{}|<>"
+
+/* A list of unsafe characters for encoding, as per RFC1738. '@' and
+ ':' (not listed in RFC) were added because of user/password
+ encoding, and \033 for safe printing. */
+
+#ifndef WINDOWS
+# define URL_UNSAFE " <>\"#%{}|\\^~[]`@:\033"
+#else /* WINDOWS */
+# define URL_UNSAFE " <>\"%{}|\\^[]`\033"
+#endif /* WINDOWS */
+
+/* If S contains unsafe characters, free it and replace it with a
+ version that doesn't. */
+#define URL_CLEANSE(s) do \
+{ \
+ if (contains_unsafe (s)) \
+ { \
+ char *uc_tmp = encode_string (s); \
+ free (s); \
+ (s) = uc_tmp; \
+ } \
+} while (0)
+
+/* Is a directory "."? */
+#define DOTP(x) ((*(x) == '.') && (!*(x + 1)))
+/* Is a directory ".."? */
+#define DDOTP(x) ((*(x) == '.') && (*(x + 1) == '.') && (!*(x + 2)))
+
+/* NULL-terminated list of strings to be recognized as prototypes (URL
+ schemes). Note that recognized doesn't mean supported -- only HTTP
+ and FTP are currently supported.
+
+ However, a string that does not match anything in the list will be
+ considered a relative URL. Thus it's important that this list
+ contain anything anyone could think of as legal.
+
+ There are wild things here. :-) Take a look at
+ <URL:http://www.w3.org/pub/WWW/Addressing/schemes.html> for more
+ fun. */
+static char *protostrings[] =
+{
+ "cid:",
+ "clsid:",
+ "file:",
+ "finger:",
+ "ftp:",
+ "gopher:",
+ "hdl:",
+ "http:",
+ "https:",
+ "ilu:",
+ "ior:",
+ "irc:",
+ "java:",
+ "javascript:",
+ "lifn:",
+ "mailto:",
+ "mid:",
+ "news:",
+ "nntp:",
+ "path:",
+ "prospero:",
+ "rlogin:",
+ "service:",
+ "shttp:",
+ "snews:",
+ "stanf:",
+ "telnet:",
+ "tn3270:",
+ "wais:",
+ "whois++:",
+ NULL
+};
+
+struct proto
+{
+ char *name;
+ uerr_t ind;
+ unsigned short port;
+};
+
+/* Similar to the former, but for supported protocols: */
+static struct proto sup_protos[] =
+{
+ { "http://", URLHTTP, DEFAULT_HTTP_PORT },
+ { "ftp://", URLFTP, DEFAULT_FTP_PORT },
+ /*{ "file://", URLFILE, DEFAULT_FTP_PORT },*/
+};
+
+static void parse_dir PARAMS ((const char *, char **, char **));
+static uerr_t parse_uname PARAMS ((const char *, char **, char **));
+static char *construct PARAMS ((const char *, const char *, int , int));
+static char *construct_relative PARAMS ((const char *, const char *));
+static char process_ftp_type PARAMS ((char *));
+
+\f
+/* Returns the number of characters to be skipped if the first thing
+ in a URL is URL: (which is 0 or 4+). The optional spaces after
+ URL: are also skipped. */
+int
+skip_url (const char *url)
+{
+ int i;
+
+ if (toupper (url[0]) == 'U'
+ && toupper (url[1]) == 'R'
+ && toupper (url[2]) == 'L'
+ && url[3] == ':')
+ {
+ /* Skip blanks. */
+ for (i = 4; url[i] && ISSPACE (url[i]); i++);
+ return i;
+ }
+ else
+ return 0;
+}
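A self-contained equivalent of skip_url() can be sketched as follows (the name skip_url_prefix is hypothetical, used here only so the sketch compiles on its own):

```c
#include <ctype.h>

/* Return the number of leading characters to skip when S begins with
   "URL:" (any case), including any blanks after the colon; 0 otherwise. */
int
skip_url_prefix (const char *s)
{
  int i;

  if (toupper ((unsigned char)s[0]) == 'U'
      && toupper ((unsigned char)s[1]) == 'R'
      && toupper ((unsigned char)s[2]) == 'L'
      && s[3] == ':')
    {
      for (i = 4; s[i] && isspace ((unsigned char)s[i]); i++)
        ;
      return i;
    }
  return 0;
}
```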
+
+/* Returns 1 if the string contains unsafe characters, 0 otherwise. */
+int
+contains_unsafe (const char *s)
+{
+ for (; *s; s++)
+ if (strchr (URL_UNSAFE, *s))
+ return 1;
+ return 0;
+}
+
+/* Decodes the forms %xy in a URL to the character the hexadecimal
+ code of which is xy. xy are hexadecimal digits from
+ [0123456789ABCDEF] (case-insensitive). If x or y are not
+ hex-digits or `%' precedes `\0', the sequence is inserted
+ literally. */
+
+static void
+decode_string (char *s)
+{
+ char *p = s;
+
+ for (; *s; s++, p++)
+ {
+ if (*s != '%')
+ *p = *s;
+ else
+ {
+ /* Do nothing if at the end of the string, or if the chars
+ are not hex-digits. */
+ if (!*(s + 1) || !*(s + 2)
+ || !(ISXDIGIT (*(s + 1)) && ISXDIGIT (*(s + 2))))
+ {
+ *p = *s;
+ continue;
+ }
+ *p = (ASC2HEXD (*(s + 1)) << 4) + ASC2HEXD (*(s + 2));
+ s += 2;
+ }
+ }
+ *p = '\0';
+}
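The decoding rules documented above can be exercised standalone. This sketch mirrors the in-place %xy decoding, with invalid sequences copied literally; hexval and url_decode are illustrative names, not Wget's:

```c
#include <ctype.h>

/* Value of a hex digit character; assumes isxdigit (c) is true. */
static int
hexval (int c)
{
  return isdigit (c) ? c - '0' : toupper (c) - 'A' + 10;
}

/* Decode %xy escapes in S in place.  A `%' not followed by two hex
   digits (or cut off by the end of the string) is copied literally. */
void
url_decode (char *s)
{
  char *p = s;

  for (; *s; s++, p++)
    {
      if (*s == '%'
          && isxdigit ((unsigned char)s[1])
          && isxdigit ((unsigned char)s[2]))
        {
          *p = (char)((hexval ((unsigned char)s[1]) << 4)
                      | hexval ((unsigned char)s[2]));
          s += 2;
        }
      else
        *p = *s;
    }
  *p = '\0';
}
```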
+
+/* Encodes the unsafe characters (listed in URL_UNSAFE) in a given
+ string, returning a malloc-ed %XX encoded string. */
+char *
+encode_string (const char *s)
+{
+ const char *b;
+ char *p, *res;
+ int i;
+
+ b = s;
+ for (i = 0; *s; s++, i++)
+ if (strchr (URL_UNSAFE, *s))
+ i += 2; /* Two more characters (hex digits) */
+ res = (char *)xmalloc (i + 1);
+ s = b;
+ for (p = res; *s; s++)
+ if (strchr (URL_UNSAFE, *s))
+ {
+ const unsigned char c = *s;
+ *p++ = '%';
+ *p++ = HEXD2ASC (c >> 4);
+ *p++ = HEXD2ASC (c & 0xf);
+ }
+ else
+ *p++ = *s;
+ *p = '\0';
+ return res;
+}
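The two-pass scheme of encode_string() (measure first, then copy and escape) can be shown with a reduced, illustrative unsafe set; url_encode and UNSAFE_CHARS are assumptions for this sketch, not Wget identifiers:

```c
#include <stdlib.h>
#include <string.h>

#define UNSAFE_CHARS " <>\"#%"   /* illustrative subset of URL_UNSAFE */

/* Return a malloc'd copy of S with each unsafe byte replaced by %XX. */
char *
url_encode (const char *s)
{
  static const char hex[] = "0123456789ABCDEF";
  const char *q;
  char *res, *p;
  size_t len = 0;

  for (q = s; *q; q++)             /* first pass: measure */
    len += strchr (UNSAFE_CHARS, *q) ? 3 : 1;
  res = p = (char *) malloc (len + 1);
  for (q = s; *q; q++)             /* second pass: copy or escape */
    if (strchr (UNSAFE_CHARS, *q))
      {
        unsigned char c = (unsigned char) *q;
        *p++ = '%';
        *p++ = hex[c >> 4];
        *p++ = hex[c & 0xf];
      }
    else
      *p++ = *q;
  *p = '\0';
  return res;
}
```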
+\f
+/* Returns the protocol type if URL's protocol is supported, or
+ URLUNKNOWN if not. */
+uerr_t
+urlproto (const char *url)
+{
+ int i;
+
+ url += skip_url (url);
+ for (i = 0; i < ARRAY_SIZE (sup_protos); i++)
+ if (!strncasecmp (url, sup_protos[i].name, strlen (sup_protos[i].name)))
+ return sup_protos[i].ind;
+ for (i = 0; url[i] && url[i] != ':' && url[i] != '/'; i++);
+ if (url[i] == ':')
+ {
+ for (++i; url[i] && url[i] != '/'; i++)
+ if (!ISDIGIT (url[i]))
+ return URLBADPORT;
+ if (url[i - 1] == ':')
+ return URLFTP;
+ else
+ return URLHTTP;
+ }
+ else
+ return URLHTTP;
+}
+
+/* Skip the protocol part of the URL, e.g. `http://'. If no protocol
+ part is found, returns 0. */
+int
+skip_proto (const char *url)
+{
+ char **s;
+ int l;
+
+ for (s = protostrings; *s; s++)
+ if (!strncasecmp (*s, url, strlen (*s)))
+ break;
+ if (!*s)
+ return 0;
+ l = strlen (*s);
+ /* HTTP and FTP protocols are expected to yield exact host names
+ (i.e. the `//' part must be skipped, too). */
+ if (!strcmp (*s, "http:") || !strcmp (*s, "ftp:"))
+ l += 2;
+ return l;
+}
+
+/* Returns 1 if the URL begins with a protocol (supported or
+ unsupported), 0 otherwise. */
+static int
+has_proto (const char *url)
+{
+ char **s;
+
+ url += skip_url (url);
+ for (s = protostrings; *s; s++)
+ if (strncasecmp (url, *s, strlen (*s)) == 0)
+ return 1;
+ return 0;
+}
+
+/* Skip the username and password, if present in the URL. The function
+ should be called *not* with the complete URL, but with the part
+ right after the protocol.
+
+ If no username and password are found, return 0. */
+int
+skip_uname (const char *url)
+{
+ const char *p;
+ for (p = url; *p && *p != '/'; p++)
+ if (*p == '@')
+ break;
+ /* If a `@' was found before the first occurrence of `/', skip
+ it. */
+ if (*p == '@')
+ return p - url + 1;
+ else
+ return 0;
+}
+\f
+/* Allocate a new urlinfo structure, fill it with default values and
+ return a pointer to it. */
+struct urlinfo *
+newurl (void)
+{
+ struct urlinfo *u;
+
+ u = (struct urlinfo *)xmalloc (sizeof (struct urlinfo));
+ memset (u, 0, sizeof (*u));
+ u->proto = URLUNKNOWN;
+ return u;
+}
+
+/* Perform a "deep" free of the urlinfo structure. The structure
+ should have been created with newurl, but need not have been used.
+ If free_pointer is non-0, free the pointer itself. */
+void
+freeurl (struct urlinfo *u, int complete)
+{
+ assert (u != NULL);
+ FREE_MAYBE (u->url);
+ FREE_MAYBE (u->host);
+ FREE_MAYBE (u->path);
+ FREE_MAYBE (u->file);
+ FREE_MAYBE (u->dir);
+ FREE_MAYBE (u->user);
+ FREE_MAYBE (u->passwd);
+ FREE_MAYBE (u->local);
+ FREE_MAYBE (u->referer);
+ if (u->proxy)
+ freeurl (u->proxy, 1);
+ if (complete)
+ free (u);
+ return;
+}
+\f
+/* Extract the given URL of the form
+ (http:|ftp:)// (user (:password)?@)?hostname (:port)? (/path)?
+ 1. hostname (terminated with `/' or `:')
+ 2. port number (terminated with `/'), or chosen for the protocol
+ 3. dirname (everything after hostname)
+ Most errors are handled. No allocation is done; you must supply
+ pointers to allocated memory.
+ ...and a host of other stuff :-)
+
+ - Recognizes hostname:dir/file for FTP and
+ hostname (:portnum)?/dir/file for HTTP.
+ - Parses the path to yield directory and file
+ - Parses the URL to yield the username and passwd (if present)
+ - Decodes the strings, in case they contain "forbidden" characters
+ - Writes the result to struct urlinfo
+
+ If the argument STRICT is set, it recognizes only the canonical
+ form. */
+uerr_t
+parseurl (const char *url, struct urlinfo *u, int strict)
+{
+ int i, l, abs_ftp;
+ int recognizable; /* Recognizable URL is the one where
+ the protocol name was explicitly
+ named, i.e. it wasn't deduced from
+ the URL format. */
+ uerr_t type;
+
+ DEBUGP (("parseurl (\"%s\") -> ", url));
+ url += skip_url (url);
+ recognizable = has_proto (url);
+ if (strict && !recognizable)
+ return URLUNKNOWN;
+ for (i = 0, l = 0; i < ARRAY_SIZE (sup_protos); i++)
+ {
+ l = strlen (sup_protos[i].name);
+ if (!strncasecmp (sup_protos[i].name, url, l))
+ break;
+ }
+  /* If the protocol is recognizable but unsupported, bail out, else
+     suppose unknown.  Note that when the loop above finds no match,
+     I equals ARRAY_SIZE (sup_protos), so sup_protos[i] must not be
+     dereferenced here. */
+  if (recognizable && i == ARRAY_SIZE (sup_protos))
+    return URLUNKNOWN;
+ else if (i == ARRAY_SIZE (sup_protos))
+ type = URLUNKNOWN;
+ else
+ u->proto = type = sup_protos[i].ind;
+
+ if (type == URLUNKNOWN)
+ l = 0;
+ /* Allow a username and password to be specified (i.e. just skip
+ them for now). */
+ if (recognizable)
+ l += skip_uname (url + l);
+ for (i = l; url[i] && url[i] != ':' && url[i] != '/'; i++);
+ if (i == l)
+ return URLBADHOST;
+ /* Get the hostname. */
+ u->host = strdupdelim (url + l, url + i);
+ DEBUGP (("host %s -> ", u->host));
+
+ /* Assume no port has been given. */
+ u->port = 0;
+ if (url[i] == ':')
+ {
+ /* We have a colon delimiting the hostname. It could mean that
+ a port number is following it, or a directory. */
+ if (ISDIGIT (url[++i])) /* A port number */
+ {
+ if (type == URLUNKNOWN)
+ u->proto = type = URLHTTP;
+ for (; url[i] && url[i] != '/'; i++)
+ if (ISDIGIT (url[i]))
+ u->port = 10 * u->port + (url[i] - '0');
+ else
+ return URLBADPORT;
+ if (!u->port)
+ return URLBADPORT;
+ DEBUGP (("port %hu -> ", u->port));
+ }
+ else if (type == URLUNKNOWN) /* or a directory */
+ u->proto = type = URLFTP;
+ else /* or just a misformed port number */
+ return URLBADPORT;
+ }
+ else if (type == URLUNKNOWN)
+ u->proto = type = URLHTTP;
+ if (!u->port)
+ {
+ int i;
+ for (i = 0; i < ARRAY_SIZE (sup_protos); i++)
+ if (sup_protos[i].ind == type)
+ break;
+ if (i == ARRAY_SIZE (sup_protos))
+ return URLUNKNOWN;
+ u->port = sup_protos[i].port;
+ }
+ /* Some delimiter troubles... */
+ if (url[i] == '/' && url[i - 1] != ':')
+ ++i;
+ if (type == URLHTTP)
+ while (url[i] && url[i] == '/')
+ ++i;
+ u->path = (char *)xmalloc (strlen (url + i) + 8);
+ strcpy (u->path, url + i);
+ if (type == URLFTP)
+ {
+ u->ftp_type = process_ftp_type (u->path);
+ /* #### We don't handle type `d' correctly yet. */
+ if (!u->ftp_type || toupper (u->ftp_type) == 'D')
+ u->ftp_type = 'I';
+ }
+ DEBUGP (("opath %s -> ", u->path));
+ /* Parse the username and password (if existing). */
+ parse_uname (url, &u->user, &u->passwd);
+ /* Decode the strings, as per RFC 1738. */
+ decode_string (u->host);
+ decode_string (u->path);
+ if (u->user)
+ decode_string (u->user);
+ if (u->passwd)
+ decode_string (u->passwd);
+ /* Parse the directory. */
+ parse_dir (u->path, &u->dir, &u->file);
+ DEBUGP (("dir %s -> file %s -> ", u->dir, u->file));
+ /* Simplify the directory. */
+ path_simplify (u->dir);
+ /* Remove the leading `/' in HTTP. */
+ if (type == URLHTTP && *u->dir == '/')
+ strcpy (u->dir, u->dir + 1);
+ DEBUGP (("ndir %s\n", u->dir));
+ /* Strip trailing `/'. */
+ l = strlen (u->dir);
+ if (l && u->dir[l - 1] == '/')
+ u->dir[l - 1] = '\0';
+ /* Re-create the path: */
+ abs_ftp = (u->proto == URLFTP && *u->dir == '/');
+ /* sprintf (u->path, "%s%s%s%s", abs_ftp ? "%2F": "/",
+ abs_ftp ? (u->dir + 1) : u->dir, *u->dir ? "/" : "", u->file); */
+ strcpy (u->path, abs_ftp ? "%2F" : "/");
+ strcat (u->path, abs_ftp ? (u->dir + 1) : u->dir);
+ strcat (u->path, *u->dir ? "/" : "");
+ strcat (u->path, u->file);
+ URL_CLEANSE (u->path);
+ /* Create the clean URL. */
+ u->url = str_url (u, 0);
+ return URLOK;
+}
+\f
+/* Build the directory and filename components of the path. Both
+ components are *separately* malloc-ed strings! It does not change
+ the contents of path.
+
+ If the path ends with "." or "..", they are (correctly) counted as
+ directories. */
+static void
+parse_dir (const char *path, char **dir, char **file)
+{
+ int i, l;
+
+ for (i = l = strlen (path); i && path[i] != '/'; i--);
+ if (!i && *path != '/') /* Just filename */
+ {
+ if (DOTP (path) || DDOTP (path))
+ {
+ *dir = xstrdup (path);
+ *file = xstrdup ("");
+ }
+ else
+ {
+ *dir = xstrdup (""); /* This is required because of FTP */
+ *file = xstrdup (path);
+ }
+ }
+ else if (!i) /* /filename */
+ {
+ if (DOTP (path + 1) || DDOTP (path + 1))
+ {
+ *dir = xstrdup (path);
+ *file = xstrdup ("");
+ }
+ else
+ {
+ *dir = xstrdup ("/");
+ *file = xstrdup (path + 1);
+ }
+ }
+ else /* Nonempty directory with or without a filename */
+ {
+ if (DOTP (path + i + 1) || DDOTP (path + i + 1))
+ {
+ *dir = xstrdup (path);
+ *file = xstrdup ("");
+ }
+ else
+ {
+ *dir = strdupdelim (path, path + i);
+ *file = strdupdelim (path + i + 1, path + l + 1);
+ }
+ }
+}
+
+/* Find the optional username and password within the URL, as per
+ RFC1738. The returned user and passwd char pointers are
+ malloc-ed. */
+static uerr_t
+parse_uname (const char *url, char **user, char **passwd)
+{
+ int l;
+ const char *p, *col;
+ char **where;
+
+ *user = NULL;
+ *passwd = NULL;
+ url += skip_url (url);
+ /* Look for end of protocol string. */
+ l = skip_proto (url);
+ if (!l)
+ return URLUNKNOWN;
+ /* Add protocol offset. */
+ url += l;
+ /* Is there an `@' character? */
+ for (p = url; *p && *p != '/'; p++)
+ if (*p == '@')
+ break;
+ /* If not, return. */
+ if (*p != '@')
+ return URLOK;
+ /* Else find the username and password. */
+ for (p = col = url; *p != '@'; p++)
+ {
+ if (*p == ':' && !*user)
+ {
+ *user = (char *)xmalloc (p - url + 1);
+ memcpy (*user, url, p - url);
+ (*user)[p - url] = '\0';
+ col = p + 1;
+ }
+ }
+ /* Decide whether you have only the username or both. */
+ where = *user ? passwd : user;
+ *where = (char *)xmalloc (p - col + 1);
+ memcpy (*where, col, p - col);
+ (*where)[p - col] = '\0';
+ return URLOK;
+}
+
+/* If PATH ends with `;type=X', return the character X. */
+static char
+process_ftp_type (char *path)
+{
+ int len = strlen (path);
+
+ if (len >= 7
+ && !memcmp (path + len - 7, ";type=", 6))
+ {
+ path[len - 7] = '\0';
+ return path[len - 1];
+ }
+ else
+ return '\0';
+}
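The `;type=X` suffix check above is compact enough to demonstrate standalone (ftp_type_suffix is an illustrative name for this sketch):

```c
#include <string.h>

/* If PATH ends with ";type=X", strip the suffix in place and return X;
   otherwise return '\0'. */
char
ftp_type_suffix (char *path)
{
  size_t len = strlen (path);

  if (len >= 7 && memcmp (path + len - 7, ";type=", 6) == 0)
    {
      char t = path[len - 1];
      path[len - 7] = '\0';
      return t;
    }
  return '\0';
}
```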
+\f
+/* Return the URL as a well-formed string, with a proper protocol, port
+ number, directory and optional user/password. If HIDE is non-zero,
+ password will be hidden. The forbidden characters in the URL will
+ be cleansed. */
+char *
+str_url (const struct urlinfo *u, int hide)
+{
+ char *res, *host, *user, *passwd, *proto_name, *dir, *file;
+ int i, l, ln, lu, lh, lp, lf, ld;
+
+ /* Look for the protocol name. */
+ for (i = 0; i < ARRAY_SIZE (sup_protos); i++)
+ if (sup_protos[i].ind == u->proto)
+ break;
+ if (i == ARRAY_SIZE (sup_protos))
+ return NULL;
+ proto_name = sup_protos[i].name;
+ host = CLEANDUP (u->host);
+ dir = CLEANDUP (u->dir);
+ file = CLEANDUP (u->file);
+ user = passwd = NULL;
+ if (u->user)
+ user = CLEANDUP (u->user);
+ if (u->passwd)
+ {
+ int i;
+ passwd = CLEANDUP (u->passwd);
+ if (hide)
+ for (i = 0; passwd[i]; i++)
+ passwd[i] = 'x';
+ }
+ if (u->proto == URLFTP && *dir == '/')
+ {
+ char *tmp = (char *)xmalloc (strlen (dir) + 3);
+ /*sprintf (tmp, "%%2F%s", dir + 1);*/
+ *tmp = '%';
+ tmp[1] = '2';
+ tmp[2] = 'F';
+ strcpy (tmp + 3, dir + 1);
+ free (dir);
+ dir = tmp;
+ }
+
+ ln = strlen (proto_name);
+ lu = user ? strlen (user) : 0;
+ lp = passwd ? strlen (passwd) : 0;
+ lh = strlen (host);
+ ld = strlen (dir);
+ lf = strlen (file);
+ res = (char *)xmalloc (ln + lu + lp + lh + ld + lf + 20); /* safe sex */
+ /* sprintf (res, "%s%s%s%s%s%s:%d/%s%s%s", proto_name,
+ (user ? user : ""), (passwd ? ":" : ""),
+ (passwd ? passwd : ""), (user ? "@" : ""),
+ host, u->port, dir, *dir ? "/" : "", file); */
+ l = 0;
+ memcpy (res, proto_name, ln);
+ l += ln;
+ if (user)
+ {
+ memcpy (res + l, user, lu);
+ l += lu;
+ if (passwd)
+ {
+ res[l++] = ':';
+ memcpy (res + l, passwd, lp);
+ l += lp;
+ }
+ res[l++] = '@';
+ }
+ memcpy (res + l, host, lh);
+ l += lh;
+ res[l++] = ':';
+ long_to_string (res + l, (long)u->port);
+ l += numdigit (u->port);
+ res[l++] = '/';
+ memcpy (res + l, dir, ld);
+ l += ld;
+ if (*dir)
+ res[l++] = '/';
+ strcpy (res + l, file);
+ free (host);
+ free (dir);
+ free (file);
+ FREE_MAYBE (user);
+ FREE_MAYBE (passwd);
+ return res;
+}
+
+/* Check whether two URL-s are equivalent, i.e. pointing to the same
+ location. Uses parseurl to parse them, and compares the canonical
+ forms.
+
+ Returns 1 if the URL1 is equivalent to URL2, 0 otherwise. Also
+ return 0 on error. */
+int
+url_equal (const char *url1, const char *url2)
+{
+ struct urlinfo *u1, *u2;
+ uerr_t err;
+ int res;
+
+ u1 = newurl ();
+ err = parseurl (url1, u1, 0);
+ if (err != URLOK)
+ {
+ freeurl (u1, 1);
+ return 0;
+ }
+ u2 = newurl ();
+ err = parseurl (url2, u2, 0);
+ if (err != URLOK)
+ {
+ freeurl (u2, 1);
+ return 0;
+ }
+ res = !strcmp (u1->url, u2->url);
+ freeurl (u1, 1);
+ freeurl (u2, 1);
+ return res;
+}
+\f
+/* Find a URL of the form scheme:hostname[:port]/dir in a buffer. The
+ buffer may contain pretty much anything; no errors are signaled. */
+static const char *
+findurl (const char *buf, int howmuch, int *count)
+{
+ char **prot;
+ const char *s1, *s2;
+
+ for (s1 = buf; howmuch; s1++, howmuch--)
+ for (prot = protostrings; *prot; prot++)
+ if (howmuch <= strlen (*prot))
+ continue;
+ else if (!strncasecmp (*prot, s1, strlen (*prot)))
+ {
+ for (s2 = s1, *count = 0;
+ howmuch && *s2 && *s2 >= 32 && *s2 < 127 && !ISSPACE (*s2) &&
+ !strchr (URL_SEPARATOR, *s2);
+ s2++, (*count)++, howmuch--);
+ return s1;
+ }
+ return NULL;
+}
+
+/* Scans the file for signs of URL-s. Returns a vector of pointers,
+ each pointer representing a URL string. The file is *not* assumed
+ to be HTML. */
+urlpos *
+get_urls_file (const char *file)
+{
+ long nread;
+ FILE *fp;
+ char *buf;
+ const char *pbuf;
+ int size;
+ urlpos *first, *current, *old;
+
+ if (file && !HYPHENP (file))
+ {
+ fp = fopen (file, "rb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, "%s: %s\n", file, strerror (errno));
+ return NULL;
+ }
+ }
+ else
+ fp = stdin;
+ /* Load the file. */
+ load_file (fp, &buf, &nread);
+ if (file && !HYPHENP (file))
+ fclose (fp);
+ DEBUGP (("Loaded %s (size %ld).\n", file, nread));
+ first = current = NULL;
+ /* Fill the linked list with URLs. */
+ for (pbuf = buf; (pbuf = findurl (pbuf, nread - (pbuf - buf), &size));
+ pbuf += size)
+ {
+ /* Allocate the space. */
+ old = current;
+ current = (urlpos *)xmalloc (sizeof (urlpos));
+ if (old)
+ old->next = current;
+ memset (current, 0, sizeof (*current));
+ current->next = NULL;
+ current->url = (char *)xmalloc (size + 1);
+ memcpy (current->url, pbuf, size);
+ current->url[size] = '\0';
+ if (!first)
+ first = current;
+ }
+ /* Free the buffer. */
+ free (buf);
+
+ return first;
+}
+
+/* Similar to get_urls_file, but for HTML files. FILE is scanned as
+ an HTML document using htmlfindurl(), which see. get_urls_html()
+ constructs complete URLs from the relative href-s.
+
+ If SILENT is non-zero, do not barf on baseless relative links. */
+urlpos *
+get_urls_html (const char *file, const char *this_url, int silent)
+{
+ long nread;
+ FILE *fp;
+ char *orig_buf;
+ const char *buf;
+ int step, first_time;
+ urlpos *first, *current, *old;
+
+ if (file && !HYPHENP (file))
+ {
+ fp = fopen (file, "rb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, "%s: %s\n", file, strerror (errno));
+ return NULL;
+ }
+ }
+ else
+ fp = stdin;
+ /* Load the file. */
+ load_file (fp, &orig_buf, &nread);
+ if (file && !HYPHENP (file))
+ fclose (fp);
+ DEBUGP (("Loaded HTML file %s (size %ld).\n", file, nread));
+ first = current = NULL;
+ first_time = 1;
+ /* Iterate over the URLs in BUF, picked by htmlfindurl(). */
+ for (buf = orig_buf;
+ (buf = htmlfindurl (buf, nread - (buf - orig_buf), &step, first_time));
+ buf += step)
+ {
+ int i, no_proto;
+ int size = step;
+ const char *pbuf = buf;
+ char *constr, *base;
+ const char *cbase;
+
+ first_time = 0;
+
+ /* A frequent phenomenon that needs to be handled are pages
+ generated by brain-damaged HTML generators, which refer to
+ URI-s as <a href="<spaces>URI<spaces>">. We simply ignore
+ any spaces at the beginning or at the end of the string.
+ This is probably not strictly correct, but that's what the
+ browsers do, so we may follow. May the authors of "WYSIWYG"
+ HTML tools burn in hell for the damage they've inflicted! */
+ while ((pbuf < buf + step) && ISSPACE (*pbuf))
+ {
+ ++pbuf;
+ --size;
+ }
+ while (size && ISSPACE (pbuf[size - 1]))
+ --size;
+ if (!size)
+ break;
+
+ for (i = 0; protostrings[i]; i++)
+ {
+ if (!strncasecmp (protostrings[i], pbuf,
+ MINVAL (strlen (protostrings[i]), size)))
+ break;
+ }
+ /* Check for http:RELATIVE_URI. See below for details. */
+ if (protostrings[i]
+ && !(strncasecmp (pbuf, "http:", 5) == 0
+ && strncasecmp (pbuf, "http://", 7) != 0))
+ {
+ no_proto = 0;
+ }
+ else
+ {
+ no_proto = 1;
+ /* This is for extremely brain-damaged pages that refer to
+ relative URI-s as <a href="http:URL">. Just strip off the
+ silly leading "http:" (as well as any leading blanks
+ before it). */
+ if ((size > 5) && !strncasecmp ("http:", pbuf, 5))
+ pbuf += 5, size -= 5;
+ }
+ if (!no_proto)
+ {
+ for (i = 0; i < ARRAY_SIZE (sup_protos); i++)
+ {
+ if (!strncasecmp (sup_protos[i].name, pbuf,
+ MINVAL (strlen (sup_protos[i].name), size)))
+ break;
+ }
+ /* Do *not* accept a non-supported protocol. */
+ if (i == ARRAY_SIZE (sup_protos))
+ continue;
+ }
+ if (no_proto)
+ {
+ /* First, construct the base, which can be relative itself.
+
+ Criteria for creating the base are:
+ 1) html_base created by <base href="...">
+ 2) current URL
+ 3) base provided from the command line */
+ cbase = html_base ();
+ if (!cbase)
+ cbase = this_url;
+ if (!cbase)
+ cbase = opt.base_href;
+ if (!cbase) /* Error condition -- a baseless
+ relative link. */
+ {
+ if (!opt.quiet && !silent)
+ {
+ /* Use malloc, not alloca because this is called in
+ a loop. */
+ char *temp = (char *)malloc (size + 1);
+ strncpy (temp, pbuf, size);
+ temp[size] = '\0';
+ logprintf (LOG_NOTQUIET,
+ _("Error (%s): Link %s without a base provided.\n"),
+ file, temp);
+ free (temp);
+ }
+ continue;
+ }
+ if (this_url)
+ base = construct (this_url, cbase, strlen (cbase),
+ !has_proto (cbase));
+ else
+ {
+ /* Base must now be absolute, with host name and
+ protocol. */
+ if (!has_proto (cbase))
+ {
+ logprintf (LOG_NOTQUIET, _("\
+Error (%s): Base %s relative, without referer URL.\n"),
+ file, cbase);
+ continue;
+ }
+ base = xstrdup (cbase);
+ }
+ constr = construct (base, pbuf, size, no_proto);
+ free (base);
+ }
+ else /* has proto */
+ {
+ constr = (char *)xmalloc (size + 1);
+ strncpy (constr, pbuf, size);
+ constr[size] = '\0';
+ }
+#ifdef DEBUG
+ if (opt.debug)
+ {
+ char *tmp;
+ const char *tmp2;
+
+ tmp2 = html_base ();
+ /* Use malloc, not alloca because this is called in a loop. */
+ tmp = (char *)xmalloc (size + 1);
+ strncpy (tmp, pbuf, size);
+ tmp[size] = '\0';
+ logprintf (LOG_ALWAYS,
+ "file %s; this_url %s; base %s\nlink: %s; constr: %s\n",
+ file, this_url ? this_url : "(null)",
+ tmp2 ? tmp2 : "(null)", tmp, constr);
+ free (tmp);
+ }
+#endif
+
+ /* Allocate the space. */
+ old = current;
+ current = (urlpos *)xmalloc (sizeof (urlpos));
+ if (old)
+ old->next = current;
+ if (!first)
+ first = current;
+ /* Fill the values. */
+ memset (current, 0, sizeof (*current));
+ current->next = NULL;
+ current->url = constr;
+ current->size = size;
+ current->pos = pbuf - orig_buf;
+ /* A URL is relative if the host and protocol are not named,
+ and the name does not start with `/'. */
+ if (no_proto && *pbuf != '/')
+ current->flags |= (URELATIVE | UNOPROTO);
+ else if (no_proto)
+ current->flags |= UNOPROTO;
+ }
+ free (orig_buf);
+
+ return first;
+}
+\f
+/* Free the linked list of urlpos. */
+void
+free_urlpos (urlpos *l)
+{
+ while (l)
+ {
+ urlpos *next = l->next;
+ free (l->url);
+ FREE_MAYBE (l->local_name);
+ free (l);
+ l = next;
+ }
+}
+
+/* Rotate FNAME opt.backups times */
+void
+rotate_backups(const char *fname)
+{
+ int maxlen = strlen (fname) + 1 + numdigit (opt.backups) + 1;
+ char *from = (char *)alloca (maxlen);
+ char *to = (char *)alloca (maxlen);
+ struct stat sb;
+ int i;
+
+ if (stat (fname, &sb) == 0)
+ if (S_ISREG (sb.st_mode) == 0)
+ return;
+
+ for (i = opt.backups; i > 1; i--)
+ {
+ sprintf (from, "%s.%d", fname, i - 1);
+ sprintf (to, "%s.%d", fname, i);
+ /* #### This will fail on machines without the rename() system
+ call. */
+ rename (from, to);
+ }
+
+ sprintf (to, "%s.%d", fname, 1);
+ rename(fname, to);
+}
+
+/* Create all the necessary directories for PATH (a file). Calls
+ mkdirhier() internally. */
+int
+mkalldirs (const char *path)
+{
+ const char *p;
+ char *t;
+ struct stat st;
+ int res;
+
+ p = path + strlen (path);
+ for (; *p != '/' && p != path; p--);
+ /* Don't create if it's just a file. */
+ if ((p == path) && (*p != '/'))
+ return 0;
+ t = strdupdelim (path, p);
+ /* Check whether the directory exists. */
+ if ((stat (t, &st) == 0))
+ {
+ if (S_ISDIR (st.st_mode))
+ {
+ free (t);
+ return 0;
+ }
+ else
+ {
+ /* If the dir exists as a file name, remove it first. This
+ is *only* for Wget to work with buggy old CERN http
+ servers. Here is the scenario: When Wget tries to
+ retrieve a directory without a slash, e.g.
+ http://foo/bar (bar being a directory), CERN server will
+ not redirect it to http://foo/bar/ -- it will generate a
+ directory listing containing links to bar/file1,
+ bar/file2, etc. Wget will lose because it saves this
+ HTML listing to a file `bar', so it cannot create the
+ directory. To work around this, if the file of the same
+ name exists, we just remove it and create the directory
+ anyway. */
+ DEBUGP (("Removing %s because of directory danger!\n", t));
+ unlink (t);
+ }
+ }
+ res = make_directory (t);
+ if (res != 0)
+ logprintf (LOG_NOTQUIET, "%s: %s", t, strerror (errno));
+ free (t);
+ return res;
+}
+
+static int
+count_slashes (const char *s)
+{
+ int i = 0;
+ while (*s)
+ if (*s++ == '/')
+ ++i;
+ return i;
+}
+
+/* Return the path name of the URL-equivalent file name, with a
+ remote-like structure of directories. */
+static char *
+mkstruct (const struct urlinfo *u)
+{
+ char *host, *dir, *file, *res, *dirpref;
+ int l;
+
+ assert (u->dir != NULL);
+ assert (u->host != NULL);
+
+ if (opt.cut_dirs)
+ {
+ char *ptr = u->dir + (*u->dir == '/');
+ int slash_count = 1 + count_slashes (ptr);
+ int cut = MINVAL (opt.cut_dirs, slash_count);
+ for (; cut && *ptr; ptr++)
+ if (*ptr == '/')
+ --cut;
+ STRDUP_ALLOCA (dir, ptr);
+ }
+ else
+ dir = u->dir + (*u->dir == '/');
+
+ host = xstrdup (u->host);
+ /* Check for the true name (or at least a consistent name for saving
+ to directory) of HOST, reusing the hlist if possible. */
+ if (opt.add_hostdir && !opt.simple_check)
+ {
+ char *nhost = realhost (host);
+ free (host);
+ host = nhost;
+ }
+ /* Add dir_prefix and hostname (if required) to the beginning of
+ dir. */
+ if (opt.add_hostdir)
+ {
+ if (!DOTP (opt.dir_prefix))
+ {
+ dirpref = (char *)alloca (strlen (opt.dir_prefix) + 1
+ + strlen (host) + 1);
+ sprintf (dirpref, "%s/%s", opt.dir_prefix, host);
+ }
+ else
+ STRDUP_ALLOCA (dirpref, host);
+ }
+ else /* not add_hostdir */
+ {
+ if (!DOTP (opt.dir_prefix))
+ dirpref = opt.dir_prefix;
+ else
+ dirpref = "";
+ }
+ free (host);
+
+ /* If there is a prefix, prepend it. */
+ if (*dirpref)
+ {
+ char *newdir = (char *)alloca (strlen (dirpref) + 1 + strlen (dir) + 2);
+ sprintf (newdir, "%s%s%s", dirpref, *dir == '/' ? "" : "/", dir);
+ dir = newdir;
+ }
+ dir = xstrdup (dir);
+ URL_CLEANSE (dir);
+ l = strlen (dir);
+ if (l && dir[l - 1] == '/')
+ dir[l - 1] = '\0';
+
+ if (!*u->file)
+ file = "index.html";
+ else
+ file = u->file;
+
+ /* Finally, construct the full name. */
+ res = (char *)xmalloc (strlen (dir) + 1 + strlen (file) + 1);
+ sprintf (res, "%s%s%s", dir, *dir ? "/" : "", file);
+ free (dir);
+ return res;
+}
+
+/* Create a unique filename, corresponding to a given URL. Calls
+ mkstruct if necessary. Does *not* actually create any directories. */
+char *
+url_filename (const struct urlinfo *u)
+{
+ char *file, *name;
+ int have_prefix = 0; /* whether we must prepend opt.dir_prefix */
+
+ if (opt.dirstruct)
+ {
+ file = mkstruct (u);
+ have_prefix = 1;
+ }
+ else
+ {
+ if (!*u->file)
+ file = xstrdup ("index.html");
+ else
+ file = xstrdup (u->file);
+ }
+
+ if (!have_prefix)
+ {
+ /* Check whether the prefix directory is something other than "."
+ before prepending it. */
+ if (!DOTP (opt.dir_prefix))
+ {
+ char *nfile = (char *)xmalloc (strlen (opt.dir_prefix)
+ + 1 + strlen (file) + 1);
+ sprintf (nfile, "%s/%s", opt.dir_prefix, file);
+ free (file);
+ file = nfile;
+ }
+ }
+ /* DOS-ish file systems don't like `%' signs in them; we change it
+ to `@'. */
+#ifdef WINDOWS
+ {
+ char *p = file;
+ for (p = file; *p; p++)
+ if (*p == '%')
+ *p = '@';
+ }
+#endif /* WINDOWS */
+
+ /* Check the cases in which the unique extensions are not used:
+ 1) Clobbering is turned off (-nc).
+ 2) Retrieval with regetting.
+ 3) Timestamping is used.
+ 4) Hierarchy is built.
+
+ The exception is the case when file does exist and is a
+ directory (actually support for bad httpd-s). */
+ if ((opt.noclobber || opt.always_rest || opt.timestamping || opt.dirstruct)
+ && !(file_exists_p (file) && !file_non_directory_p (file)))
+ return file;
+
+ /* Find a unique name. */
+ name = unique_name (file);
+ free (file);
+ return name;
+}
+
+/* Construct an absolute URL, given a (possibly) relative one. This
+ is more tricky than it might seem, but it works. */
+static char *
+construct (const char *url, const char *sub, int subsize, int no_proto)
+{
+ char *constr;
+
+ if (no_proto)
+ {
+ int i;
+
+ if (*sub != '/')
+ {
+ for (i = strlen (url); i && url[i] != '/'; i--);
+ if (!i || (url[i] == url[i - 1]))
+ {
+ int l = strlen (url);
+ char *t = (char *)alloca (l + 2);
+ strcpy (t, url);
+ t[l] = '/';
+ t[l + 1] = '\0';
+ url = t;
+ i = l;
+ }
+ constr = (char *)xmalloc (i + 1 + subsize + 1);
+ strncpy (constr, url, i + 1);
+ constr[i + 1] = '\0';
+ strncat (constr, sub, subsize);
+ }
+ else /* *sub == `/' */
+ {
+ int fl;
+
+ i = 0;
+ do
+ {
+ for (; url[i] && url[i] != '/'; i++);
+ if (!url[i])
+ break;
+ fl = (url[i] == url[i + 1] && url[i + 1] == '/');
+ if (fl)
+ i += 2;
+ }
+ while (fl);
+ if (!url[i])
+ {
+ int l = strlen (url);
+ char *t = (char *)alloca (l + 2);
+ strcpy (t, url);
+ t[l] = '/';
+ t[l + 1] = '\0';
+ url = t;
+ }
+ constr = (char *)xmalloc (i + 1 + subsize + 1);
+ strncpy (constr, url, i);
+ constr[i] = '\0';
+ strncat (constr + i, sub, subsize);
+ constr[i + subsize] = '\0';
+ } /* *sub == `/' */
+ }
+ else /* !no_proto */
+ {
+ constr = (char *)xmalloc (subsize + 1);
+ strncpy (constr, sub, subsize);
+ constr[subsize] = '\0';
+ }
+ return constr;
+}
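The simplest case construct() handles, a relative reference with neither a leading `/` nor a protocol, amounts to replacing everything after the base's last `/`. A hedged standalone sketch (resolve_simple is an illustrative name; absolute paths and `//` detection are deliberately left out):

```c
#include <stdlib.h>
#include <string.h>

/* Resolve relative reference REF against BASE by keeping BASE up to
   and including its last '/', then appending REF.  Returns a malloc'd
   string. */
char *
resolve_simple (const char *base, const char *ref)
{
  const char *slash = strrchr (base, '/');
  size_t keep = slash ? (size_t)(slash - base) + 1 : strlen (base);
  char *res = (char *) malloc (keep + strlen (ref) + 1);

  memcpy (res, base, keep);
  strcpy (res + keep, ref);
  return res;
}
```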
+\f
+/* Optimize URL by host, destructively replacing u->host with realhost
+ (u->host). Do this regardless of opt.simple_check. */
+void
+opt_url (struct urlinfo *u)
+{
+ /* Find the "true" host. */
+ char *host = realhost (u->host);
+ free (u->host);
+ u->host = host;
+ assert (u->dir != NULL); /* the URL must have been parsed */
+ /* Refresh the printed representation. */
+ free (u->url);
+ u->url = str_url (u, 0);
+}
+\f
+/* Returns proxy host address, in accordance with PROTO. */
+char *
+getproxy (uerr_t proto)
+{
+ if (proto == URLHTTP)
+ return opt.http_proxy ? opt.http_proxy : getenv ("http_proxy");
+ else if (proto == URLFTP)
+ return opt.ftp_proxy ? opt.ftp_proxy : getenv ("ftp_proxy");
+ else
+ return NULL;
+}
+
+/* Should HOST be accessed through a proxy, according to the no_proxy list? */
+int
+no_proxy_match (const char *host, const char **no_proxy)
+{
+ if (!no_proxy)
+ return 1;
+ else
+ return !sufmatch (no_proxy, host);
+}
+\f
+/* Change the links in an HTML document. Accepts a structure that
+ defines the positions of all the links. */
+void
+convert_links (const char *file, urlpos *l)
+{
+ FILE *fp;
+ char *buf, *p, *p2;
+ long size;
+
+ logprintf (LOG_VERBOSE, _("Converting %s... "), file);
+ /* Read from the file.... */
+ fp = fopen (file, "rb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, _("Cannot convert links in %s: %s\n"),
+ file, strerror (errno));
+ return;
+ }
+ /* ...to a buffer. */
+ load_file (fp, &buf, &size);
+ fclose (fp);
+ /* Now open the file for writing. */
+ fp = fopen (file, "wb");
+ if (!fp)
+ {
+ logprintf (LOG_NOTQUIET, _("Cannot convert links in %s: %s\n"),
+ file, strerror (errno));
+ free (buf);
+ return;
+ }
+ for (p = buf; l; l = l->next)
+ {
+ if (l->pos >= size)
+ {
+ DEBUGP (("Something strange is going on. Please investigate."));
+ break;
+ }
+ /* If the URL already is relative or it is not to be converted
+ for some other reason (e.g. because of not having been
+ downloaded in the first place), skip it. */
+ if ((l->flags & URELATIVE) || !(l->flags & UABS2REL))
+ {
+ DEBUGP (("Skipping %s at position %d (flags %d).\n", l->url,
+ l->pos, l->flags));
+ continue;
+ }
+ /* Else, reach the position of the offending URL, echoing
+ everything up to it to the outfile. */
+ for (p2 = buf + l->pos; p < p2; p++)
+ putc (*p, fp);
+ if (l->flags & UABS2REL)
+ {
+ char *newname = construct_relative (file, l->local_name);
+ fprintf (fp, "%s", newname);
+ DEBUGP (("ABS2REL: %s to %s at position %d in %s.\n",
+ l->url, newname, l->pos, file));
+ free (newname);
+ }
+ p += l->size;
+ }
+ if (p - buf < size)
+ {
+ for (p2 = buf + size; p < p2; p++)
+ putc (*p, fp);
+ }
+ fclose (fp);
+ free (buf);
+ logputs (LOG_VERBOSE, _("done.\n"));
+}
+
+/* Construct and return a malloced copy of the relative link from two
+ pieces of information: local name S1 of the referring file and
+ local name S2 of the referred file.
+
+ So, if S1 is "jagor.srce.hr/index.html" and S2 is
+ "jagor.srce.hr/images/news.gif", the function will return
+ "images/news.gif".
+
+ Alternately, if S1 is "fly.cc.fer.hr/ioccc/index.html", and S2 is
+ "fly.cc.fer.hr/images/fly.gif", the function will return
+ "../images/fly.gif".
+
+ Caveats: S1 should not begin with `/', unless S2 also begins with
+ '/'. S1 should not contain things like ".." and such --
+ construct_relative ("fly/ioccc/../index.html",
+ "fly/images/fly.gif") will fail. (A workaround is to call
+ something like path_simplify() on S1). */
+static char *
+construct_relative (const char *s1, const char *s2)
+{
+ int i, cnt, sepdirs1;
+ char *res;
+
+ if (*s2 == '/')
+ return xstrdup (s2);
+ /* S1 should *not* be absolute, if S2 wasn't. */
+ assert (*s1 != '/');
+ i = cnt = 0;
+ /* Skip the directories common to both strings. */
+ while (1)
+ {
+ while (s1[i] && s2[i]
+ && (s1[i] == s2[i])
+ && (s1[i] != '/')
+ && (s2[i] != '/'))
+ ++i;
+ if (s1[i] == '/' && s2[i] == '/')
+ cnt = ++i;
+ else
+ break;
+ }
+ for (sepdirs1 = 0; s1[i]; i++)
+ if (s1[i] == '/')
+ ++sepdirs1;
+ /* Now, construct the file as of:
+ - ../ repeated sepdirs1 time
+ - all the non-mutual directories of S2. */
+ res = (char *)xmalloc (3 * sepdirs1 + strlen (s2 + cnt) + 1);
+ for (i = 0; i < sepdirs1; i++)
+ memcpy (res + 3 * i, "../", 3);
+ strcpy (res + 3 * i, s2 + cnt);
+ return res;
+}
+\f
+/* Add URL to the head of the list L. */
+urlpos *
+add_url (urlpos *l, const char *url, const char *file)
+{
+ urlpos *t;
+
+ t = (urlpos *)xmalloc (sizeof (urlpos));
+ memset (t, 0, sizeof (*t));
+ t->url = xstrdup (url);
+ t->local_name = xstrdup (file);
+ t->next = l;
+ return t;
+}
--- /dev/null
+/* Declarations for url.c.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef URL_H
+#define URL_H
+
+/* If the string contains unsafe characters, duplicate it with
+ encode_string, otherwise just copy it with strdup. */
+#define CLEANDUP(x) (contains_unsafe (x) ? encode_string (x) : xstrdup (x))
+
+/* Structure containing info on a URL. */
+struct urlinfo
+{
+ char *url; /* Unchanged URL */
+ uerr_t proto; /* URL protocol */
+ char *host; /* Extracted hostname */
+ unsigned short port;
+ char ftp_type;
+ char *path, *dir, *file; /* Path, as well as dir and file
+ (properly decoded) */
+ char *user, *passwd; /* Username and password */
+ struct urlinfo *proxy; /* The exact string to pass to proxy
+ server */
+ char *referer; /* The source from which the request
+ URI was obtained */
+ char *local; /* The local filename of the URL
+ document */
+};
+
+enum uflags
+{
+ URELATIVE = 0x0001, /* Is URL relative? */
+ UNOPROTO = 0x0002, /* Is URL without a protocol? */
+ UABS2REL = 0x0004, /* Convert absolute to relative? */
+ UREL2ABS = 0x0008 /* Convert relative to absolute? */
+};
+
+/* A structure that defines the whereabouts of a URL, i.e. its
+ position in an HTML document, etc. */
+typedef struct _urlpos
+{
+ char *url; /* URL */
+ char *local_name; /* Local file to which it was saved */
+ enum uflags flags; /* Various flags */
+ int pos, size; /* Relative position in the buffer */
+ struct _urlpos *next; /* Next struct in list */
+} urlpos;
+
+
+/* Function declarations */
+
+int skip_url PARAMS ((const char *));
+
+int contains_unsafe PARAMS ((const char *));
+char *encode_string PARAMS ((const char *));
+
+struct urlinfo *newurl PARAMS ((void));
+void freeurl PARAMS ((struct urlinfo *, int));
+uerr_t urlproto PARAMS ((const char *));
+int skip_proto PARAMS ((const char *));
+int skip_uname PARAMS ((const char *));
+
+uerr_t parseurl PARAMS ((const char *, struct urlinfo *, int));
+char *str_url PARAMS ((const struct urlinfo *, int));
+int url_equal PARAMS ((const char *, const char *));
+
+urlpos *get_urls_file PARAMS ((const char *));
+urlpos *get_urls_html PARAMS ((const char *, const char *, int));
+void free_urlpos PARAMS ((urlpos *));
+
+void rotate_backups PARAMS ((const char *));
+int mkalldirs PARAMS ((const char *));
+char *url_filename PARAMS ((const struct urlinfo *));
+void opt_url PARAMS ((struct urlinfo *));
+
+char *getproxy PARAMS ((uerr_t));
+int no_proxy_match PARAMS ((const char *, const char **));
+
+void convert_links PARAMS ((const char *, urlpos *));
+urlpos *add_url PARAMS ((urlpos *, const char *, const char *));
+
+#endif /* URL_H */
--- /dev/null
+/* Various functions of utilitarian nature.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#ifdef HAVE_STRING_H
+# include <string.h>
+#else /* not HAVE_STRING_H */
+# include <strings.h>
+#endif /* not HAVE_STRING_H */
+#include <ctype.h>
+#include <sys/types.h>
+#ifdef HAVE_UNISTD_H
+# include <unistd.h>
+#endif
+#ifdef HAVE_PWD_H
+# include <pwd.h>
+#endif
+#include <limits.h>
+#ifdef HAVE_UTIME_H
+# include <utime.h>
+#endif
+#ifdef HAVE_SYS_UTIME_H
+# include <sys/utime.h>
+#endif
+#include <errno.h>
+#ifdef NeXT
+# include <libc.h> /* for access() */
+#endif
+
+#include "wget.h"
+#include "utils.h"
+#include "fnmatch.h"
+
+#ifndef errno
+extern int errno;
+#endif
+
+
+/* Croak the fatal memory error and bail out with non-zero exit
+ status. */
+static void
+memfatal (const char *s)
+{
+ /* HACK: expose save_log_p from log.c, so we can turn it off in
+ order to prevent saving the log. Saving the log is dangerous
+ because logprintf() and logputs() can call malloc(), so this
+ could infloop. When logging is turned off, infloop can no longer
+ happen. */
+ extern int save_log_p;
+
+ save_log_p = 0;
+ logprintf (LOG_ALWAYS, _("%s: %s: Not enough memory.\n"), exec_name, s);
+ exit (1);
+}
+
+/* xmalloc, xrealloc and xstrdup exit the program if there is not
+ enough memory. xstrdup also implements strdup on systems that do
+ not have it. */
+void *
+xmalloc (size_t size)
+{
+ void *res;
+
+ res = malloc (size);
+ if (!res)
+ memfatal ("malloc");
+ return res;
+}
+
+void *
+xrealloc (void *obj, size_t size)
+{
+ void *res;
+
+ /* Not all Un*xes have the feature of realloc() that calling it with
+ a NULL-pointer is the same as malloc(), but it is easy to
+ simulate. */
+ if (obj)
+ res = realloc (obj, size);
+ else
+ res = malloc (size);
+ if (!res)
+ memfatal ("realloc");
+ return res;
+}
+
+char *
+xstrdup (const char *s)
+{
+#ifndef HAVE_STRDUP
+ int l = strlen (s);
+ char *s1 = malloc (l + 1);
+ if (!s1)
+ memfatal ("strdup");
+ memcpy (s1, s, l + 1);
+ return s1;
+#else /* HAVE_STRDUP */
+ char *s1 = strdup (s);
+ if (!s1)
+ memfatal ("strdup");
+ return s1;
+#endif /* HAVE_STRDUP */
+}
+\f
+/* Copy the string delimited by two pointers (one at the beginning,
+ the other at the char after the last char) to a new, malloc-ed,
+ 0-terminated location. */
+char *
+strdupdelim (const char *beg, const char *end)
+{
+ char *res = (char *)xmalloc (end - beg + 1);
+ memcpy (res, beg, end - beg);
+ res[end - beg] = '\0';
+ return res;
+}
+
+/* Parse a string containing comma-separated elements, and return a
+ vector of char pointers with the elements. Spaces following the
+ commas are ignored. */
+char **
+sepstring (const char *s)
+{
+ char **res;
+ const char *p;
+ int i = 0;
+
+ if (!s || !*s)
+ return NULL;
+ res = NULL;
+ p = s;
+ while (*s)
+ {
+ if (*s == ',')
+ {
+ res = (char **)xrealloc (res, (i + 2) * sizeof (char *));
+ res[i] = strdupdelim (p, s);
+ res[++i] = NULL;
+ ++s;
+ /* Skip the blanks following the ','. */
+ while (ISSPACE (*s))
+ ++s;
+ p = s;
+ }
+ else
+ ++s;
+ }
+ res = (char **)xrealloc (res, (i + 2) * sizeof (char *));
+ res[i] = strdupdelim (p, s);
+ res[i + 1] = NULL;
+ return res;
+}
+\f
+/* Return pointer to a static char[] buffer in which zero-terminated
+ string-representation of TM (in form hh:mm:ss) is printed. It is
+ shamelessly non-reentrant, but it doesn't matter, really.
+
+ If TM is non-NULL, the time_t of the current time will be stored
+ there. */
+char *
+time_str (time_t *tm)
+{
+ static char tms[15];
+ struct tm *ptm;
+ time_t tim;
+
+ *tms = '\0';
+ tim = time (tm);
+ if (tim == -1)
+ return tms;
+ ptm = localtime (&tim);
+ sprintf (tms, "%02d:%02d:%02d", ptm->tm_hour, ptm->tm_min, ptm->tm_sec);
+ return tms;
+}
+
+/* Returns an error message for ERRNUM. #### This requires more work.
+ This function, as well as the whole error system, is very
+ ill-conceived. */
+const char *
+uerrmsg (uerr_t errnum)
+{
+ switch (errnum)
+ {
+ case URLUNKNOWN:
+ return _("Unknown/unsupported protocol");
+ break;
+ case URLBADPORT:
+ return _("Invalid port specification");
+ break;
+ case URLBADHOST:
+ return _("Invalid host name");
+ break;
+ default:
+ abort ();
+ /* $@#@#$ compiler. */
+ return NULL;
+ }
+}
+\f
+/* The Windows versions of the following two functions are defined in
+ mswindows.c. */
+
+/* A cuserid() imitation using getpwuid(), to avoid hassling with
+ utmp. Besides, not all systems have cuserid(). Under Windows, it
+ is defined in mswindows.c.
+
+ If WHERE is non-NULL, the username will be stored there.
+ Otherwise, it will be returned as a static buffer (as returned by
+ getpwuid()). In the latter case, the buffer should be copied
+ before calling getpwuid() or pwd_cuserid() again. */
+#ifndef WINDOWS
+char *
+pwd_cuserid (char *where)
+{
+ struct passwd *pwd;
+
+ if (!(pwd = getpwuid (getuid ())) || !pwd->pw_name)
+ return NULL;
+ if (where)
+ {
+ strcpy (where, pwd->pw_name);
+ return where;
+ }
+ else
+ return pwd->pw_name;
+}
+
+void
+fork_to_background (void)
+{
+ pid_t pid;
+ /* Whether we arrange our own version of opt.lfilename here. */
+ int changedp = 0;
+
+ if (!opt.lfilename)
+ {
+ opt.lfilename = unique_name (DEFAULT_LOGFILE);
+ changedp = 1;
+ }
+ pid = fork ();
+ if (pid < 0)
+ {
+ /* parent, error */
+ perror ("fork");
+ exit (1);
+ }
+ else if (pid != 0)
+ {
+ /* parent, no error */
+ printf (_("Continuing in background.\n"));
+ if (changedp)
+ printf (_("Output will be written to `%s'.\n"), opt.lfilename);
+ exit (0);
+ }
+ /* child: keep running */
+}
+#endif /* not WINDOWS */
+\f
+/* Canonicalize PATH, and return a new path. The new path differs from PATH
+ in that:
+ Multiple `/'s are collapsed to a single `/'.
+ Leading `./'s and trailing `/.'s are removed.
+ Trailing `/'s are removed.
+ Non-leading `../'s and trailing `..'s are handled by removing
+ portions of the path.
+
+ E.g. "a/b/c/./../d/.." will yield "a/b". This function originates
+ from GNU Bash.
+
+ Changes for Wget:
+ Always use '/' as stub_char.
+ Don't check for local things using canon_stat.
+ Change the original string instead of strdup-ing.
+ React correctly when beginning with `./' and `../'. */
+void
+path_simplify (char *path)
+{
+ register int i, start, ddot;
+ char stub_char;
+
+ if (!*path)
+ return;
+
+ /*stub_char = (*path == '/') ? '/' : '.';*/
+ stub_char = '/';
+
+ /* Addition: Remove all `./'-s preceding the string. If `../'-s
+ precede, put `/' in front and remove them too. */
+ i = 0;
+ ddot = 0;
+ while (1)
+ {
+ if (path[i] == '.' && path[i + 1] == '/')
+ i += 2;
+ else if (path[i] == '.' && path[i + 1] == '.' && path[i + 2] == '/')
+ {
+ i += 3;
+ ddot = 1;
+ }
+ else
+ break;
+ }
+ if (i)
+ strcpy (path, path + i - ddot);
+
+ /* Replace single `.' or `..' with `/'. */
+ if ((path[0] == '.' && path[1] == '\0')
+ || (path[0] == '.' && path[1] == '.' && path[2] == '\0'))
+ {
+ path[0] = stub_char;
+ path[1] = '\0';
+ return;
+ }
+ /* Walk along PATH looking for things to compact. */
+ i = 0;
+ while (1)
+ {
+ if (!path[i])
+ break;
+
+ while (path[i] && path[i] != '/')
+ i++;
+
+ start = i++;
+
+ /* If we didn't find any slashes, then there is nothing left to do. */
+ if (!path[start])
+ break;
+
+ /* Handle multiple `/'s in a row. */
+ while (path[i] == '/')
+ i++;
+
+ if ((start + 1) != i)
+ {
+ strcpy (path + start + 1, path + i);
+ i = start + 1;
+ }
+
+ /* Check for trailing `/'. */
+ if (start && !path[i])
+ {
+ zero_last:
+ path[--i] = '\0';
+ break;
+ }
+
+ /* Check for `../', `./' or trailing `.' by itself. */
+ if (path[i] == '.')
+ {
+ /* Handle trailing `.' by itself. */
+ if (!path[i + 1])
+ goto zero_last;
+
+ /* Handle `./'. */
+ if (path[i + 1] == '/')
+ {
+ strcpy (path + i, path + i + 1);
+ i = (start < 0) ? 0 : start;
+ continue;
+ }
+
+ /* Handle `../' or trailing `..' by itself. */
+ if (path[i + 1] == '.' &&
+ (path[i + 2] == '/' || !path[i + 2]))
+ {
+ while (--start > -1 && path[start] != '/');
+ strcpy (path + start + 1, path + i + 2);
+ i = (start < 0) ? 0 : start;
+ continue;
+ }
+ } /* path == '.' */
+ } /* while */
+
+ if (!*path)
+ {
+ *path = stub_char;
+ path[1] = '\0';
+ }
+}
+\f
+/* "Touch" FILE, i.e. make its atime and mtime equal to the time
+ specified with TM. */
+void
+touch (const char *file, time_t tm)
+{
+#ifdef HAVE_STRUCT_UTIMBUF
+ struct utimbuf times;
+ times.actime = times.modtime = tm;
+#else
+ time_t times[2];
+ times[0] = times[1] = tm;
+#endif
+
+ if (utime (file, &times) == -1)
+ logprintf (LOG_NOTQUIET, "utime: %s\n", strerror (errno));
+}
+
+/* Checks if FILE is a symbolic link, and removes it if it is. Does
+ nothing under MS-Windows. */
+int
+remove_link (const char *file)
+{
+ int err = 0;
+ struct stat st;
+
+ if (lstat (file, &st) == 0 && S_ISLNK (st.st_mode))
+ {
+ DEBUGP (("Unlinking %s (symlink).\n", file));
+ err = unlink (file);
+ if (err != 0)
+ logprintf (LOG_VERBOSE, _("Failed to unlink symlink `%s': %s\n"),
+ file, strerror (errno));
+ }
+ return err;
+}
+
+/* Does FILENAME exist? This is quite a lousy implementation, since
+ it supplies no error codes -- only a yes-or-no answer. Thus it
+ will return that a file does not exist if, e.g., the directory is
+ unreadable. I don't mind it too much currently, though. The
+ proper way should, of course, be to have a third, error state,
+ other than true/false, but that would introduce uncalled-for
+ additional complexity to the callers. */
+int
+file_exists_p (const char *filename)
+{
+#ifdef HAVE_ACCESS
+ return access (filename, F_OK) >= 0;
+#else
+ struct stat buf;
+ return stat (filename, &buf) >= 0;
+#endif
+}
+
+/* Returns 0 if PATH is a directory, 1 otherwise (any kind of file).
+ Returns 0 on error. */
+int
+file_non_directory_p (const char *path)
+{
+ struct stat buf;
+ /* Use lstat() rather than stat() so that symbolic links pointing to
+ directories can be identified correctly. */
+ if (lstat (path, &buf) != 0)
+ return 0;
+ return S_ISDIR (buf.st_mode) ? 0 : 1;
+}
+
+/* Return a unique filename, given a prefix and count. */
+static char *
+unique_name_1 (const char *fileprefix, int count)
+{
+ char *filename;
+
+ if (count)
+ {
+ filename = (char *)xmalloc (strlen (fileprefix) + numdigit (count) + 2);
+ sprintf (filename, "%s.%d", fileprefix, count);
+ }
+ else
+ filename = xstrdup (fileprefix);
+
+ if (!file_exists_p (filename))
+ return filename;
+ else
+ {
+ free (filename);
+ return NULL;
+ }
+}
+
+/* Return a unique file name, based on PREFIX. */
+char *
+unique_name (const char *prefix)
+{
+ char *file = NULL;
+ int count = 0;
+
+ while (!file)
+ file = unique_name_1 (prefix, count++);
+ return file;
+}
+\f
+/* Create DIRECTORY. If some of the pathname components of DIRECTORY
+ are missing, create them first. In case any mkdir() call fails,
+ return its error status. Returns 0 on successful completion.
+
+ The behaviour of this function should be identical to the behaviour
+ of `mkdir -p' on systems where mkdir supports the `-p' option. */
+int
+make_directory (const char *directory)
+{
+ int quit = 0;
+ int i;
+ char *dir;
+
+ /* Make a copy of dir, to be able to write to it. Otherwise, the
+ function is unsafe if called with a read-only char *argument. */
+ STRDUP_ALLOCA (dir, directory);
+
+ /* If the first character of dir is '/', skip it (and thus enable
+ creation of absolute-pathname directories). */
+ for (i = (*dir == '/'); 1; ++i)
+ {
+ for (; dir[i] && dir[i] != '/'; i++)
+ ;
+ if (!dir[i])
+ quit = 1;
+ dir[i] = '\0';
+ /* Check whether the directory already exists. */
+ if (!file_exists_p (dir))
+ {
+ if (mkdir (dir, 0777) < 0)
+ return -1;
+ }
+ if (quit)
+ break;
+ else
+ dir[i] = '/';
+ }
+ return 0;
+}
+\f
+static int in_acclist PARAMS ((const char *const *, const char *, int));
+
+/* Determine whether a file is acceptable to be followed, according to
+ lists of patterns to accept/reject. */
+int
+acceptable (const char *s)
+{
+ int l = strlen (s);
+
+ while (l && s[l] != '/')
+ --l;
+ if (s[l] == '/')
+ s += (l + 1);
+ if (opt.accepts)
+ {
+ if (opt.rejects)
+ return (in_acclist ((const char *const *)opt.accepts, s, 1)
+ && !in_acclist ((const char *const *)opt.rejects, s, 1));
+ else
+ return in_acclist ((const char *const *)opt.accepts, s, 1);
+ }
+ else if (opt.rejects)
+ return !in_acclist ((const char *const *)opt.rejects, s, 1);
+ return 1;
+}
+
+/* Compare S1 and S2 frontally; S2 must begin with S1. E.g. if S1 is
+ `/something', frontcmp() will return 1 only if S2 begins with
+ `/something'. Otherwise, 0 is returned. */
+int
+frontcmp (const char *s1, const char *s2)
+{
+ for (; *s1 && *s2 && (*s1 == *s2); ++s1, ++s2);
+ return !*s1;
+}
+
+/* Iterate through STRLIST, and return the first element that matches
+ S, through wildcards or front comparison (as appropriate). */
+static char *
+proclist (char **strlist, const char *s, enum accd flags)
+{
+ char **x;
+
+ for (x = strlist; *x; x++)
+ if (has_wildcards_p (*x))
+ {
+ if (fnmatch (*x, s, FNM_PATHNAME) == 0)
+ break;
+ }
+ else
+ {
+ char *p = *x + ((flags & ALLABS) && (**x == '/')); /* Remove '/' */
+ if (frontcmp (p, s))
+ break;
+ }
+ return *x;
+}
+
+/* Returns whether DIRECTORY is acceptable for download, wrt the
+ include/exclude lists.
+
+ If FLAGS is ALLABS, the leading `/' is ignored in paths; relative
+ and absolute paths may be freely intermixed. */
+int
+accdir (const char *directory, enum accd flags)
+{
+ /* Remove starting '/'. */
+ if (flags & ALLABS && *directory == '/')
+ ++directory;
+ if (opt.includes)
+ {
+ if (!proclist (opt.includes, directory, flags))
+ return 0;
+ }
+ if (opt.excludes)
+ {
+ if (proclist (opt.excludes, directory, flags))
+ return 0;
+ }
+ return 1;
+}
+
+/* Match the end of STRING against PATTERN. For instance:
+
+ match_backwards ("abc", "bc") -> 1
+ match_backwards ("abc", "ab") -> 0
+ match_backwards ("abc", "abc") -> 1 */
+static int
+match_backwards (const char *string, const char *pattern)
+{
+ int i, j;
+
+ for (i = strlen (string), j = strlen (pattern); i >= 0 && j >= 0; i--, j--)
+ if (string[i] != pattern[j])
+ break;
+ /* If the pattern was exhausted, the match was successful. */
+ if (j == -1)
+ return 1;
+ else
+ return 0;
+}
+
+/* Checks whether string S matches any element of ACCEPTS. Each
+ element is matched with either fnmatch() or match_backwards(),
+ according to whether it contains wildcards.
+
+ If BACKWARD is 0, don't do backward comparison -- just compare
+ them normally. */
+static int
+in_acclist (const char *const *accepts, const char *s, int backward)
+{
+ for (; *accepts; accepts++)
+ {
+ if (has_wildcards_p (*accepts))
+ {
+ /* fnmatch returns 0 if the pattern *does* match the
+ string. */
+ if (fnmatch (*accepts, s, 0) == 0)
+ return 1;
+ }
+ else
+ {
+ if (backward)
+ {
+ if (match_backwards (s, *accepts))
+ return 1;
+ }
+ else
+ {
+ if (!strcmp (s, *accepts))
+ return 1;
+ }
+ }
+ }
+ return 0;
+}
+
+/* Return the malloc-ed suffix of STR. For instance:
+ suffix ("foo.bar") -> "bar"
+ suffix ("foo.bar.baz") -> "baz"
+ suffix ("/foo/bar") -> NULL
+ suffix ("/foo.bar/baz") -> NULL */
+char *
+suffix (const char *str)
+{
+ int i;
+
+ for (i = strlen (str); i && str[i] != '/' && str[i] != '.'; i--);
+ if (str[i++] == '.')
+ return xstrdup (str + i);
+ else
+ return NULL;
+}
+
+/* Read a line from FP. The function reallocs the storage as needed
+ to accommodate a line of any length. The storage grows
+ exponentially, doubling after each overflow to minimize the number
+ of calls to realloc().
+
+ It is not an exemplar of correctness, since it kills off the
+ newline (and no, there is no way to know if there was a newline at
+ EOF). */
+char *
+read_whole_line (FILE *fp)
+{
+ char *line;
+ int i, bufsize, c;
+
+ i = 0;
+ bufsize = 40;
+ line = (char *)xmalloc (bufsize);
+ /* Construct the line. */
+ while ((c = getc (fp)) != EOF && c != '\n')
+ {
+ if (i > bufsize - 1)
+ line = (char *)xrealloc (line, (bufsize <<= 1));
+ line[i++] = c;
+ }
+ if (c == EOF && !i)
+ {
+ free (line);
+ return NULL;
+ }
+ /* Check for overflow at zero-termination (no need to double the
+ buffer in this case). */
+ if (i == bufsize)
+ line = (char *)xrealloc (line, i + 1);
+ line[i] = '\0';
+ return line;
+}
+
+/* Load the file pointed to by FP into memory and return the
+ malloc-ed buffer with the contents. *NREAD will contain the number
+ of bytes read. The file is loaded in chunks, with the chunk size
+ doubling from an initial 512 bytes. */
+void
+load_file (FILE *fp, char **buf, long *nread)
+{
+ long bufsize;
+
+ bufsize = 512;
+ *nread = 0;
+ *buf = NULL;
+ while (!feof (fp) && !ferror (fp))
+ {
+ *buf = (char *)xrealloc (*buf, bufsize + *nread);
+ *nread += fread (*buf + *nread, sizeof (char), bufsize, fp);
+ bufsize <<= 1;
+ }
+ /* #### No indication of encountered error?? */
+}
+
+/* Free the pointers in a NULL-terminated vector of pointers, then
+ free the pointer itself. */
+void
+free_vec (char **vec)
+{
+ if (vec)
+ {
+ char **p = vec;
+ while (*p)
+ free (*p++);
+ free (vec);
+ }
+}
+
+/* Append vector V2 to vector V1. The function frees V2 and
+ reallocates V1 (thus you may not use the contents of either
+ pointer after the call). If V1 is NULL, V2 is returned. */
+char **
+merge_vecs (char **v1, char **v2)
+{
+ int i, j;
+
+ if (!v1)
+ return v2;
+ if (!v2)
+ return v1;
+ if (!*v2)
+ {
+ /* To avoid j == 0 */
+ free (v2);
+ return v1;
+ }
+ /* Count v1. */
+ for (i = 0; v1[i]; i++);
+ /* Count v2. */
+ for (j = 0; v2[j]; j++);
+ /* Reallocate v1. */
+ v1 = (char **)xrealloc (v1, (i + j + 1) * sizeof (char **));
+ memcpy (v1 + i, v2, (j + 1) * sizeof (char *));
+ free (v2);
+ return v1;
+}
+
+/* A set of simple-minded routines to store and search for strings in
+ a linked list. You may add a string to the slist, and peek whether
+ it's still in there at any time later. */
+
+/* Add an element to the list. If flags is NOSORT, the list will not
+ be sorted. */
+slist *
+add_slist (slist *l, const char *s, int flags)
+{
+ slist *t, *old, *beg;
+ int cmp;
+
+ if (flags & NOSORT)
+ {
+ if (!l)
+ {
+ t = (slist *)xmalloc (sizeof (slist));
+ t->string = xstrdup (s);
+ t->next = NULL;
+ return t;
+ }
+ beg = l;
+ /* Find the last element. */
+ while (l->next)
+ l = l->next;
+ t = (slist *)xmalloc (sizeof (slist));
+ l->next = t;
+ t->string = xstrdup (s);
+ t->next = NULL;
+ return beg;
+ }
+ /* Empty list or changing the first element. */
+ if (!l || (cmp = strcmp (l->string, s)) > 0)
+ {
+ t = (slist *)xmalloc (sizeof (slist));
+ t->string = xstrdup (s);
+ t->next = l;
+ return t;
+ }
+
+ beg = l;
+ if (cmp == 0)
+ return beg;
+
+ /* Seek to the one-before-the-last element. */
+ while (l->next)
+ {
+ old = l;
+ l = l->next;
+ cmp = strcmp (s, l->string);
+ if (cmp == 0) /* no repeating in the list */
+ return beg;
+ else if (cmp > 0)
+ continue;
+ /* If the next list element is greater than s, put s between the
+ current and the next list element. */
+ t = (slist *)xmalloc (sizeof (slist));
+ old->next = t;
+ t->next = l;
+ t->string = xstrdup (s);
+ return beg;
+ }
+ t = (slist *)xmalloc (sizeof (slist));
+ t->string = xstrdup (s);
+ /* Insert the new element after the last element. */
+ l->next = t;
+ t->next = NULL;
+ return beg;
+}
+
+/* Is there a specific entry in the list? */
+int
+in_slist (slist *l, const char *s)
+{
+ int cmp;
+
+ while (l)
+ {
+ cmp = strcmp (l->string, s);
+ if (cmp == 0)
+ return 1;
+ else if (cmp > 0) /* the list is ordered! */
+ return 0;
+ l = l->next;
+ }
+ return 0;
+}
+
+/* Free the whole slist. */
+void
+free_slist (slist *l)
+{
+ slist *n;
+
+ while (l)
+ {
+ n = l->next;
+ free (l->string);
+ free (l);
+ l = n;
+ }
+}
+
+/* Legible -- return a static pointer to the legibly printed long. */
+char *
+legible (long l)
+{
+ static char outbuf[20];
+ char inbuf[20];
+ int i, i1, mod;
+ char *outptr, *inptr;
+
+ /* Print the number into the buffer. */
+ long_to_string (inbuf, l);
+ /* Reset the pointers. */
+ outptr = outbuf;
+ inptr = inbuf;
+ /* If the number is negative, shift the pointers. */
+ if (*inptr == '-')
+ {
+ *outptr++ = '-';
+ ++inptr;
+ }
+ /* How many digits before the first separator? */
+ mod = strlen (inptr) % 3;
+ /* Insert them. */
+ for (i = 0; i < mod; i++)
+ *outptr++ = inptr[i];
+ /* Now insert the rest of them, putting separator before every
+ third digit. */
+ for (i1 = i, i = 0; inptr[i1]; i++, i1++)
+ {
+ if (i % 3 == 0 && i1 != 0)
+ *outptr++ = ',';
+ *outptr++ = inptr[i1];
+ }
+ /* Zero-terminate the string. */
+ *outptr = '\0';
+ return outbuf;
+}
+
+/* Count the digits in a (long) integer. */
+int
+numdigit (long a)
+{
+ int res = 1;
+ while ((a /= 10) != 0)
+ ++res;
+ return res;
+}
+
+/* Print NUMBER to BUFFER. The digits are first written in reverse
+ order (the least significant digit first), and are then reversed. */
+void
+long_to_string (char *buffer, long number)
+{
+ char *p;
+ int i, l;
+
+ if (number < 0)
+ {
+ *buffer++ = '-';
+ number = -number;
+ }
+ p = buffer;
+ /* Print the digits to the string. */
+ do
+ {
+ *p++ = number % 10 + '0';
+ number /= 10;
+ }
+ while (number);
+ /* And reverse them. */
+ l = p - buffer - 1;
+ for (i = l/2; i >= 0; i--)
+ {
+ char c = buffer[i];
+ buffer[i] = buffer[l - i];
+ buffer[l - i] = c;
+ }
+ buffer[l + 1] = '\0';
+}
--- /dev/null
+/* Declarations for utils.c.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+#ifndef UTILS_H
+#define UTILS_H
+
+/* Flags for slist. */
+enum {
+ NOSORT = 1
+};
+
+enum accd {
+ ALLABS = 1
+};
+
+/* A linked list of strings. The list is ordered alphabetically. */
+typedef struct _slist
+{
+ char *string;
+ struct _slist *next;
+} slist;
+
+char *time_str PARAMS ((time_t *));
+const char *uerrmsg PARAMS ((uerr_t));
+
+char *strdupdelim PARAMS ((const char *, const char *));
+char **sepstring PARAMS ((const char *));
+int frontcmp PARAMS ((const char *, const char *));
+char *pwd_cuserid PARAMS ((char *));
+void fork_to_background PARAMS ((void));
+void path_simplify PARAMS ((char *));
+
+void touch PARAMS ((const char *, time_t));
+int remove_link PARAMS ((const char *));
+int file_exists_p PARAMS ((const char *));
+int file_non_directory_p PARAMS ((const char *));
+int make_directory PARAMS ((const char *));
+char *unique_name PARAMS ((const char *));
+
+int acceptable PARAMS ((const char *));
+int accdir PARAMS ((const char *s, enum accd));
+char *suffix PARAMS ((const char *s));
+
+char *read_whole_line PARAMS ((FILE *));
+void load_file PARAMS ((FILE *, char **, long *));
+
+void free_vec PARAMS ((char **));
+char **merge_vecs PARAMS ((char **, char **));
+slist *add_slist PARAMS ((slist *, const char *, int));
+int in_slist PARAMS ((slist *, const char *));
+void free_slist PARAMS ((slist *));
+
+char *legible PARAMS ((long));
+int numdigit PARAMS ((long));
+void long_to_string PARAMS ((char *, long));
+
+#endif /* UTILS_H */
--- /dev/null
+char *version_string = "1.5.3";
--- /dev/null
+/* Miscellaneous declarations.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+This file is part of Wget.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+/* This file contains some declarations that don't fit anywhere else.
+ It also contains some useful includes, like the obnoxious TIME_H
+ inclusion. */
+
+#ifndef WGET_H
+#define WGET_H
+
+#ifndef DEBUG
+# define NDEBUG /* To kill off assertions */
+#endif /* not DEBUG */
+
+#ifndef PARAMS
+# if PROTOTYPES
+# define PARAMS(args) args
+# else
+# define PARAMS(args) ()
+# endif
+#endif
+
+/* `gettext (FOO)' is long to write, so we use `_(FOO)'. If NLS is
+ unavailable, _(STRING) simply returns STRING. */
+#ifdef HAVE_NLS
+# define _(string) gettext (string)
+# ifdef HAVE_LIBINTL_H
+# include <libintl.h>
+# endif /* HAVE_LIBINTL_H */
+#else /* not HAVE_NLS */
+# define _(string) string
+#endif /* not HAVE_NLS */
+
+/* I18N NOTE: You will notice that none of the DEBUG messages are
+ marked as translatable. This is intentional, for a few reasons:
+
+ 1) The debug messages are not meant for the users to look at, but
+ for the developers; as such, they should be considered more like
+ source comments than real program output.
+
+ 2) The messages are numerous, and yet they are random and frivolous
+ ("double yuck!" and such). There would be a lot of work with no
+ gain.
+
+ 3) Finally, the debug messages are meant to be a clue for me to
+ debug problems with Wget. If I get them in a language I don't
+ understand, debugging will become a new challenge of its own! :-) */
+
+
+/* Include these, so random files need not include them. */
+#include "sysdep.h"
+#include "options.h"
+
+#define DO_NOTHING do {} while (0)
+
+/* Print X if debugging is enabled; a no-op otherwise. */
+#ifdef DEBUG
+# define DEBUGP(x) do { debug_logprintf x; } while (0)
+#else /* not DEBUG */
+# define DEBUGP(x) DO_NOTHING
+#endif /* not DEBUG */
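+
+/* Note the double parentheses when invoking DEBUGP -- X becomes the
+   entire argument list of debug_logprintf.  An illustrative call
+   (the format string and variable are made up):
+
+     DEBUGP (("Read %d bytes.\n", nread));  */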
+
+/* Make gcc check the format arguments of logprintf() and
+   debug_logprintf(). */
+#ifdef __GNUC__
+# define GCC_FORMAT_ATTR(a, b) __attribute__ ((format (printf, a, b)))
+#else /* not __GNUC__ */
+# define GCC_FORMAT_ATTR(a, b)
+#endif /* not __GNUC__ */
+
+/* These are from log.c, but they are used everywhere, so we declare
+ them here. */
+enum log_options { LOG_VERBOSE, LOG_NOTQUIET, LOG_NONVERBOSE, LOG_ALWAYS };
+
+void logprintf PARAMS ((enum log_options, const char *, ...))
+ GCC_FORMAT_ATTR (2, 3);
+void debug_logprintf PARAMS ((const char *, ...)) GCC_FORMAT_ATTR (1, 2);
+void logputs PARAMS ((enum log_options, const char *));
+
+/* Defined in `utils.c', but used literally everywhere. */
+void *xmalloc PARAMS ((size_t));
+void *xrealloc PARAMS ((void *, size_t));
+char *xstrdup PARAMS ((const char *));
+
+/* #### Find a better place for this. */
+/* The log file to which Wget writes after receiving SIGHUP. */
+#define DEFAULT_LOGFILE "wget-log"
+
+#define MD5_HASHLEN 16
+\f
+/* Useful macros used across the code: */
+
+/* Is the string a hyphen only? */
+#define HYPHENP(x) (*(x) == '-' && !*((x) + 1))
+
+/* The smaller value of the two. */
+#define MINVAL(x, y) ((x) < (y) ? (x) : (y))
+
+/* ASCII char -> HEX digit */
+#define ASC2HEXD(x) (((x) >= '0' && (x) <= '9') ? \
+ ((x) - '0') : (toupper(x) - 'A' + 10))
+
+/* HEX digit -> ASCII char */
+#define HEXD2ASC(x) (((x) < 10) ? ((x) + '0') : ((x) - 10 + 'A'))
+
+#define ARRAY_SIZE(array) (sizeof (array) / sizeof (*(array)))
+
+/* Note that this much more elegant definition cannot be used:
+
+ #define STRDUP_ALLOCA(str) (strcpy ((char *)alloca (strlen (str) + 1), str))
+
+   This is because some compilers don't correctly handle alloca()
+   passed as a function argument.  Gcc on Intel has been reported to
+   offend in this case. */
+
+#define STRDUP_ALLOCA(ptr, str) do { \
+ (ptr) = (char *)alloca (strlen (str) + 1); \
+ strcpy (ptr, str); \
+} while (0)
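+
+/* An illustrative use (COPY lives on the stack, so it is valid only
+   until the enclosing function returns):
+
+     char *copy;
+     STRDUP_ALLOCA (copy, str);  */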
+
+#define ALLOCA_ARRAY(type, len) ((type *) alloca ((len) * sizeof (type)))
+
+#define XREALLOC_ARRAY(ptr, type, len) \
+ ((void) (ptr = (type *) xrealloc (ptr, (len) * sizeof (type))))
+
+/* Generally useful if you want to avoid arbitrary size limits but
+ don't need a full dynamic array. Assumes that BASEVAR points to a
+ malloced array of TYPE objects (or possibly a NULL pointer, if
+ SIZEVAR is 0), with the total size stored in SIZEVAR. This macro
+ will realloc BASEVAR as necessary so that it can hold at least
+ NEEDED_SIZE objects. The reallocing is done by doubling, which
+ ensures constant amortized time per element. */
+#define DO_REALLOC(basevar, sizevar, needed_size, type) do \
+{ \
+ /* Avoid side-effectualness. */ \
+ long do_realloc_needed_size = (needed_size); \
+ long do_realloc_newsize = 0; \
+ while ((sizevar) < (do_realloc_needed_size)) { \
+ do_realloc_newsize = 2*(sizevar); \
+ if (do_realloc_newsize < 32) \
+ do_realloc_newsize = 32; \
+ (sizevar) = do_realloc_newsize; \
+ } \
+ if (do_realloc_newsize) \
+ XREALLOC_ARRAY (basevar, type, do_realloc_newsize); \
+} while (0)
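+
+/* A sketch of typical use, with made-up names: growing a malloced
+   buffer while appending characters.
+
+     char *buf = NULL;
+     long bufsize = 0, len = 0;
+     ...
+     DO_REALLOC (buf, bufsize, len + 1, char);
+     buf[len++] = c;
+
+   Since the size doubles each time it grows, the amortized cost of
+   an append stays constant. */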
+
+/* Free FOO if it is non-NULL. */
+#define FREE_MAYBE(foo) do { if (foo) free (foo); } while (0)
+
+/* #### Hack: OPTIONS_DEFINED_HERE is defined in main.c. */
+#ifndef OPTIONS_DEFINED_HERE
+extern const char *exec_name;
+#endif
+
+\f
+/* Document-type flags */
+enum
+{
+ TEXTHTML = 0x0001, /* document is of type text/html */
+ RETROKF = 0x0002, /* retrieval was OK */
+ HEAD_ONLY = 0x0004, /* only send the HEAD request */
+ SEND_NOCACHE = 0x0008, /* send Pragma: no-cache directive */
+ ACCEPTRANGES = 0x0010 /* Accept-ranges header was found */
+};
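+
+/* These are bit flags and are meant to be OR-ed together; e.g. a
+   successfully retrieved HTML document would have both TEXTHTML and
+   RETROKF set, tested as something like (dt & TEXTHTML). */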
+
+/* Universal error type -- used almost everywhere.
+ This is, of course, utter crock. */
+typedef enum
+{
+ NOCONERROR, HOSTERR, CONSOCKERR, CONERROR,
+ CONREFUSED, NEWLOCATION, NOTENOUGHMEM, CONPORTERR,
+ BINDERR, BINDOK, LISTENERR, ACCEPTERR, ACCEPTOK,
+ CONCLOSED, FTPOK, FTPLOGINC, FTPLOGREFUSED, FTPPORTERR,
+ FTPNSFOD, FTPRETROK, FTPUNKNOWNTYPE, FTPRERR,
+ FTPREXC, FTPSRVERR, FTPRETRINT, FTPRESTFAIL,
+ URLOK, URLHTTP, URLFTP, URLFILE, URLUNKNOWN, URLBADPORT,
+ URLBADHOST, FOPENERR, FWRITEERR, HOK, HLEXC, HEOF,
+ HERR, RETROK, RECLEVELEXC, FTPACCDENIED, WRONGCODE,
+ FTPINVPASV, FTPNOPASV,
+ RETRFINISHED, READERR, TRYLIMEXC, URLBADPATTERN,
+ FILEBADFILE, RANGEERR, RETRBADPATTERN, RETNOTSUP,
+ ROBOTSOK, NOROBOTS, PROXERR, AUTHFAILED, QUOTEXC, WRITEFAILED
+} uerr_t;
+
+#endif /* WGET_H */
--- /dev/null
+# Makefile for `wget' utility
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+#
+# Version: @VERSION@
+#
+
+SHELL = /bin/sh
+
+srcdir = @srcdir@
+VPATH = @srcdir@
+
+RM = rm -f
+
+all:
+
+clean:
+
+distclean: clean
+ $(RM) Makefile
+
+realclean: distclean
+
--- /dev/null
+ -*- text -*-
+
+This directory contains various optional utilities to help you use
+Wget.
+
+
+Socks:
+======
+Antonio Rosella <antonio.rosella@agip.it> has written a sample HTML
+frontend and a Perl script to demonstrate the use of socksified Wget
+as a web retriever.
+
+To configure Wget to use socks, run:
+$ ./configure --with-socks
+
+download.html and download-netscape.html are examples of how you can
+use socksified Wget to schedule the WWW requests. wget.cgi is a
+CGI Perl script used in conjunction with download.html, which
+schedules requests using the "at" command.
+
+To get the script, contact Antonio.
+
+rmold.pl
+========
+This Perl script is used to check which local files are no longer on
+the remote server.  You can use it to get the list of such files,
+or to remove them outright:
+$ rmold.pl [dir] | xargs rm
+
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
+<html>
+ <head>
+ <title>Wget Gateway</title>
+ <link rev="made" href="mailto:Antonio.Rosella@agip.it">
+ </head>
+
+ <body>
+ <center>
+ <h1>Wget Gateway</h1>
+ </center>
+ <p>
+      Welcome to Wget Gateway, a simple page showing the use of
+      socksified wget behind a firewall.  In my configuration it is
+      very useful because:
+      <ul>
+	<li>Only a few users can connect outside the firewall
+	<li>Many users need information that is available on the Internet
+	<li>I cannot download big files during working hours, so I
+	have to schedule the requests for after work time
+ </ul>
+
+ <p>
+      With the combination of a socksified wget and a simple CGI
+      script that schedules the requests, I can achieve this.  All
+      you need is:
+ <ul>
+ <li> A socksified copy of
+ <a href="ftp://gnjilux.cc.fer.hr/pub/unix/util/wget/wget.tar.gz">
+ wget</a>
+ <li> Perl (available on all the GNU mirroring sites)
+ <li> cgi-lib.pl (available at
+ <a href="ftp://ftp.switch.ch/mirror/CPAN/ROADMAP.html">CPAN</a>)
+ <li> A customized copy of this html
+	<li> A customized copy of wget.cgi
+ </ul>
+      This is my hardware/software configuration:
+ <pre>
+
++----------+ +----------------------------------+ +---------------------+
+| Firewall | | Host that can exit from firewall | | Intranet www server |
++----------+ | htceff | +---------------------+
+ +----------------------------------+ | Wget.html |
+ | socksified wget | +---------------------+
+ | cgi-lib.pl |
+ | perl |
+ | wget.cgi |
+ +----------------------------------+
+ </pre>
+ <p>
+ wget.cgi, wget and cgi-lib.pl are located in the usual
+ cgi-bin directory. The customization of wget.cgi and
+      wget.html has to reflect your installation, i.e.:
+ <ul>
+ <li> download-netscape.html requires wget.cgi
+ <li> wget.cgi requires Perl, cgi-lib.pl and wget
+ <li>
+ wget.cgi has to download the files to a directory writable
+ by the user submitting the request. At the moment I have an
+ anonymous ftp installed on <em>htceff</em>, and wget puts
+ dowloaded files to /pub/incoming directory (if you look at
+	downloaded files in the /pub/incoming directory (if you look at
+ the user leaves it blank).
+ </ul>
+ <p>
+ You can also add other parameters that you want to pass to wget,
+      but in this case you will also have to modify wget.cgi.
+
+ <hr>
+ <form method="get" action="http://localhost/cgi-bin/wget.cgi">
+ <center>
+ <table border=1>
+ <td>Recursive Download
+ <td><select name=Recursion>
+ <Option selected value=N>No</Option>
+ <Option value=Y>Yes</Option>
+ </select>
+ </table>
+ <hr>
+ <table border=1>
+ <td>Depth
+ <td><input type="radio" name=depth value=1 checked> 1
+ <td><input type="radio" name=depth value=2 > 2
+ <td><input type="radio" name=depth value=3 > 3
+ <td><input type="radio" name=depth value=4 > 4
+ <td><input type="radio" name=depth value=5 > 5
+ </table>
+ <hr>
+ <table>
+ <td>Url to download: <td><input name="url" size=50><TR>
+ <td>Destination directory: <td><input name="destdir" size=50><TR>
+ </table>
+ <hr>
+ Now you can
+ <font color=yellow><input type="submit" value="download"></font>
+ the requested URL or
+ <font color=yellow><input type="reset" value="reset"></font>
+ the form.
+ </form>
+ <hr>
+ Feedback is always useful! Please contact me at
+ <address>
+        <a href="mailto:Antonio.Rosella@agip.it">Antonio Rosella &lt;Antonio.Rosella@agip.it&gt;</a>.
+ </address>
+ You can send your suggestions or bug reports for Wget to
+ <address>
+        <a href="mailto:hniksic@srce.hr">Hrvoje Niksic &lt;hniksic@srce.hr&gt;</a>.
+ </address>
+ <!-- hhmts start -->
+Last modified: Thu Mar 26 16:26:36 MET 1998
+<!-- hhmts end -->
+ </body>
+</html>
+
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
+<html>
+ <head>
+ <title>Wget Gateway</title>
+ <link rev="made" href="mailto:Antonio.Rosella@agip.it">
+ </head>
+
+ <body>
+ <h1>Wget Gateway</h1>
+ <p>
+      Welcome to Wget Gateway, a simple page showing the use of
+      socksified wget behind a firewall.  In my configuration it is
+      very useful because:
+      <ul>
+	<li>Only a few users can connect outside the firewall
+	<li>Many users need information that is available on the Internet
+	<li>I cannot download big files during working hours, so I
+	have to schedule the requests for after work time
+ </ul>
+
+ <p>
+      With the combination of a socksified wget and a simple CGI
+      script that schedules the requests, I can achieve this.  All
+      you need is:
+ <ul>
+ <li> A socksified copy of
+ <a href="ftp://gnjilux.cc.fer.hr/pub/unix/util/wget/wget.tar.gz">
+ wget</a>
+ <li> Perl (available on all the GNU mirroring sites)
+ <li> cgi-lib.pl (available at
+ <a href="ftp://ftp.switch.ch/mirror/CPAN/ROADMAP.html">CPAN</a>)
+ <li> A customized copy of this html
+	<li> A customized copy of wget.cgi
+ </ul>
+      This is my hardware/software configuration:
+ <pre>
+
++----------+ +----------------------------------+ +---------------------+
+| Firewall | | Host that can exit from firewall | | Intranet www server |
++----------+ | htceff | +---------------------+
+ +----------------------------------+ | Wget.html |
+ | socksified wget | +---------------------+
+ | cgi-lib.pl |
+ | perl |
+ | wget.cgi |
+ +----------------------------------+
+ </pre>
+ <p>
+ wget.cgi, wget and cgi-lib.pl are located in the usual
+ cgi-bin directory. The customization of wget.cgi and
+      wget.html has to reflect your installation, i.e.:
+ <ul>
+ <li> download.html requires wget.cgi
+ <li> wget.cgi requires Perl, cgi-lib.pl and wget
+ <li>
+ wget.cgi has to download the files to a directory writable
+ by the user submitting the request. At the moment I have an
+ anonymous ftp installed on <em>htceff</em>, and wget puts
+	downloaded files in the /pub/incoming directory (if you look at
+ wget.cgi, it sets the destdir to "/u/ftp/pub/incoming" if
+ the user leaves it blank).
+ </ul>
+ <p>
+ You can also add other parameters that you want to pass to wget,
+      but in this case you will also have to modify wget.cgi.
+
+ <hr>
+ <form method="get" action="http://localhost/cgi-bin/wget.cgi">
+ <h3>Downloading (optionally recursive)</h3>
+ <ul>
+ <li>
+ Recursion:
+ <Select name=Recursion>
+ <Option selected value=N>No</Option>
+ <Option value=Y>Yes</Option>
+ </Select>
+ <li>
+ Depth:
+ <input type="radio" name=depth value=1 checked>1
+ <input type="radio" name=depth value=2 >2
+ <input type="radio" name=depth value=3 >3
+ <input type="radio" name=depth value=4 >4
+ <input type="radio" name=depth value=5 >5
+ <li>
+ Url to download: <input name="url" size=50>
+ <li>
+ Destination directory: <input name="destdir" size=50>
+ </ul>
+ Now you can <input type="submit" value="download"> the
+ requested URL or <input type="reset" value="reset"> the form.
+ </form>
+ <hr>
+ Feedback is always useful! Please contact me at
+ <address>
+        <a href="mailto:Antonio.Rosella@agip.it">Antonio Rosella &lt;Antonio.Rosella@agip.it&gt;</a>.
+ </address>
+ You can send your suggestions or bug reports for Wget to
+ <address>
+        <a href="mailto:hniksic@srce.hr">Hrvoje Niksic &lt;hniksic@srce.hr&gt;</a>.
+ </address>
+ <!-- hhmts start -->
+Last modified: Thu Mar 26 16:26:39 MET 1998
+<!-- hhmts end -->
+ </body>
+</html>
+
--- /dev/null
+#! /usr/bin/perl -w
+
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+
+# This script is a very lame hack to remove local files, until the
+# time when Wget proper will have this functionality.
+# Use with utmost care!
+
+# If the remote server supports BSD-style listings, set this to 0.
+$sysvlisting = 1;
+
+$verbose = 0;
+
+if (@ARGV && ($ARGV[0] eq '-v')) {
+ shift;
+ $verbose = 1;
+}
+
+defined($dirs[0] = shift) || ($dirs[0] = '.');
+while (defined($_ = shift)) {
+ @dirs = (@dirs, $_);
+}
+
+foreach $_ (@dirs) {
+ &procdir($_);
+}
+
+# End here
+
+sub procdir
+{
+  local($dir) = $_[0];
+ local(@lcfiles, @lcdirs, %files, @fl);
+
+ print STDERR "Processing directory '$dir':\n" if $verbose;
+
+ opendir(DH, $dir) || die("Cannot open $dir: $!\n");
+ @lcfiles = ();
+ @lcdirs = ();
+ # Read local files and directories.
+ foreach $_ (readdir(DH)) {
+ /^(\.listing|\.\.?)$/ && next;
+ if (-d "$dir/$_" || -l "$dir/$_") {
+ @lcdirs = (@lcdirs, $_);
+ }
+ else {
+ @lcfiles = (@lcfiles, $_);
+ }
+ }
+ closedir(DH);
+ # Parse .listing
+ if (open(FD, "<$dir/.listing")) {
+    %files = ();
+ while (<FD>)
+ {
+ # Weed out the line beginning with 'total'
+ /^total/ && next;
+ # Weed out everything but plain files and symlinks.
+ /^[-l]/ || next;
+ @fl = split;
+ $files{$fl[7 + $sysvlisting]} = 1;
+ }
+ close FD;
+ foreach $_ (@lcfiles) {
+ if (!$files{$_}) {
+ print "$dir/$_\n";
+ }
+ }
+ }
+ else {
+ print STDERR "Warning: $dir/.listing: $!\n";
+ }
+ foreach $_ (@lcdirs) {
+ &procdir("$dir/$_");
+ }
+}
+
--- /dev/null
+Summary: A command-line client to download WWW/FTP documents with optional recursion.
+Name: wget
+%define version 1.4.5
+Version: %{version}
+Release: 3
+Source: ftp://prep.ai.mit.edu/pub/gnu/wget-1.4.5.tar.gz
+Group: Applications/Networking
+Copyright: GPL
+Buildroot: /var/tmp/wget-root
+Packager: Jeff Johnson <jbj@jbj.org>
+
+%description
+GNU Wget is a freely available network utility to retrieve files from
+the World Wide Web, using HTTP (Hyper Text Transfer Protocol) and
+FTP (File Transfer Protocol), the two most widely used Internet
+protocols.
+
+%prep
+%setup
+
+%build
+./configure --prefix=/usr --sysconfdir=/etc
+make
+
+%install
+rm -rf $RPM_BUILD_ROOT
+
+make prefix=$RPM_BUILD_ROOT/usr sysconfdir=$RPM_BUILD_ROOT/etc INSTALL_PROGRAM="install -s" install
+
+gzip -9nf $RPM_BUILD_ROOT/usr/info/wget*
+
+%post
+
+/sbin/install-info /usr/info/wget.info.gz /usr/info/dir --entry="* wget: (wget). GNU Wget Manual."
+
+%preun
+
+if [ $1 = 0 ]; then
+ /sbin/install-info --delete /usr/info/wget.info.gz /usr/info/dir --entry="* wget: (wget). GNU Wget Manual."
+fi
+
+%clean
+rm -rf $RPM_BUILD_ROOT
+
+%files
+%doc README NEWS AUTHORS COPYING INSTALL MACHINES MAILING-LIST
+/usr/bin/wget
+/etc/wgetrc
+/usr/info/wget*
+/usr/man/man1/wget.1
+
+%changelog
+
+* Thu Feb 26 1998 Jeff Johnson <jbj@jbj.org>
+
+- Simplify previous contrib version.
+
--- /dev/null
+# Makefile for `wget' utility
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+RM = del
+
+all: wget.hlp
+
+# You probably need the makeinfo utility
+# wget it from URL:http://www.sunsite.auc.dk/wget/makeinfo.zip
+
+.IGNORE:
+wget.hlp: wget.texi
+ makeinfo --no-validate --no-warn --force \
+--hpj wget.hpj --output wget.rtf wget.texi
+ hcrtf -xn wget.hpj
+
+clean:
+ $(RM) *.bak
+ $(RM) *.hpj
+ $(RM) *.rtf
+ $(RM) *.ph
+
+distclean: clean
+ $(RM) wget.hlp
+ $(RM) Makefile
+
+realclean: distclean
+
--- /dev/null
+# Makefile for `wget' utility for MSVC 4.0
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+#
+# Version: 1.4.4
+#
+
+SHELL = command
+
+VPATH = .
+o = .obj
+OUTDIR = .
+
+CC = cl
+LD = link
+
+CFLAGS = /nologo /MT /W0 /O2
+#DEBUGCF = /DDEBUG /Zi /Od #/Fd /FR
+CPPFLAGS =
+DEFS = /DWINDOWS /D_CONSOLE /DHAVE_CONFIG_H /DSYSTEM_WGETRC=\"wgetrc\"
+LDFLAGS = /subsystem:console /incremental:no /warn:3
+#DEBUGLF = /pdb:wget.pdb /debug /debugtype:cv /map:wget.map /opt:noref
+LIBS = kernel32.lib advapi32.lib wsock32.lib user32.lib
+
+INCLUDES = /I.
+
+COMPILE = $(CC) $(INCLUDES) $(CPPFLAGS) $(DEBUGCF) $(DEFS) $(CFLAGS)
+LINK = $(LD) $(LDFLAGS) $(DEBUGLF) /out:$@
+
+#INSTALL = @INSTALL@
+#INSTALL_PROGRAM = @INSTALL_PROGRAM@
+
+RM = del
+
+SRC = alloca.c cmpt.c connect.c host.c http.c netrc.c ftp-basic.c ftp.c ftp-ls.c \
+ ftp-opie.c getopt.c headers.c html.c retr.c recur.c url.c init.c utils.c main.c \
+ version.c mswindows.c fnmatch.c md5.c rbuf.c log.c
+
+OBJ = alloca$o cmpt$o connect$o host$o http$o netrc$o ftp-basic$o ftp$o ftp-ls$o \
+ ftp-opie$o headers$o html$o retr$o recur$o url$o init$o utils$o main$o \
+ getopt$o version$o mswindows$o fnmatch$o md5$o rbuf$o log$o
+
+.SUFFIXES: .c .obj
+
+.c.obj:
+ $(COMPILE) /c $<
+
+# Dependencies for building
+
+wget: wget.exe
+
+wget.exe: $(OBJ)
+ $(LD) @<< $(LDFLAGS) $(DEBUGLF) /out:$@ $(LIBS) $(OBJ)
+<<
+ ren wget.exe WGET.EXE
+
+
+#
+# Dependencies for cleanup
+#
+
+clean:
+ $(RM) *.obj
+ $(RM) *.exe
+ $(RM) *.bak
+ $(RM) *.pdb
+ $(RM) *.map
+
+distclean: clean
+ $(RM) Makefile
+
+realclean: distclean
+ $(RM) TAGS
+
+# Dependencies:
+
+!include "..\windows\wget.dep"
--- /dev/null
+## Compiler, linker, and lib stuff
+## Makefile for use with watcom win95/winnt executable.
+
+CC=bcc32
+LINK=tlink32
+
+LFLAGS=
+CFLAGS=-DWINDOWS -DHAVE_CONFIG_H -I. -H -H=wget.csm -w-
+
+## variables
+OBJS=cmpt.obj connect.obj fnmatch.obj ftp.obj ftp-basic.obj \
+ ftp-ls.obj ftp-opie.obj getopt.obj headers.obj host.obj html.obj \
+ http.obj init.obj log.obj main.obj md5.obj netrc.obj rbuf.obj \
+ alloca.obj \
+ recur.obj retr.obj url.obj utils.obj version.obj mswindows.obj
+
+LIBDIR=$(MAKEDIR)\..\lib
+
+wget.exe: $(OBJS)
+ $(LINK) @&&|
+$(LFLAGS) -Tpe -ap -c +
+$(LIBDIR)\c0x32.obj+
+alloca.obj+
+version.obj+
+utils.obj+
+url.obj+
+retr.obj+
+recur.obj+
+rbuf.obj+
+netrc.obj+
+mswindows.obj+
+md5.obj+
+main.obj+
+log.obj+
+init.obj+
+http.obj+
+html.obj+
+host.obj+
+headers.obj+
+getopt.obj+
+ftp-opie.obj+
+ftp-ls.obj+
+ftp-basic.obj+
+ftp.obj+
+fnmatch.obj+
+connect.obj+
+cmpt.obj
+$<,$*
+$(LIBDIR)\import32.lib+
+$(LIBDIR)\cw32.lib
+
+
+
+|
+
+o = .obj
+
+!include "..\windows\wget.dep"
--- /dev/null
+# Makefile for `Wget' utility
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+RM = del
+CP = copy
+
+# flags passed to recursive makes in subdirectories
+
+# subdirectories in the distribution
+SUBDIRS = src doc #util
+
+# default target
+all: Makefile $(SUBDIRS)
+
+$(SUBDIRS): FORCE
+ cd $@
+ $(MAKE)
+ cd ..
+
+FORCE:
+
+# install everything
+install:
+ echo Just do it.
+
+clean: clean-recursive clean-top
+distclean: distclean-recursive distclean-top
+realclean: realclean-recursive realclean-top
+
+clean-top:
+ $(RM) *.bak
+ $(RM) *.zip
+
+distclean-top: clean-top
+ $(RM) Makefile
+ $(RM) config.h
+
+realclean-top: distclean-top
+
+clean-recursive distclean-recursive realclean-recursive:
+ cd src
+ $(MAKE) $(@:-recursive=)
+ cd ..\\doc
+ $(MAKE) $(@:-recursive=)
+ cd ..
+
+bindist: wget.zip
+
+wget.zip: $(SUBDIRS)
+ $(RM) wget.zip
+ zip -Djl9 wget.zip AUTHORS COPYING INSTALL MACHINES MAILING-LIST NEWS README DOC\\sample.wgetrc
+ zip -Dj9 wget.zip SRC\\WGET.EXE DOC\\WGET.HLP
+
--- /dev/null
+# Makefile for `Wget' utility
+# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+RM = del
+CP = copy
+
+# flags passed to recursive makes in subdirectories
+
+# subdirectories in the distribution
+SUBDIRS = src doc #util
+
+# default target
+all: Makefile $(SUBDIRS)
+
+$(SUBDIRS):
+ cd $@
+ $(MAKE)
+ cd ..
+
+# install everything
+install:
+ echo Just do it.
+
+clean: clean-recursive clean-top
+distclean: distclean-recursive distclean-top
+realclean: realclean-recursive realclean-top
+
+clean-top:
+ $(RM) *.bak
+ $(RM) *.zip
+
+distclean-top: clean-top
+ $(RM) Makefile
+ $(RM) config.h
+
+realclean-top: distclean-top
+
+clean-recursive distclean-recursive realclean-recursive:
+ cd src
+ $(MAKE) $(@:-recursive=)
+ cd ..\\doc
+ $(MAKE) $(@:-recursive=)
+ cd ..
+
+bindist: wget.zip
+
+wget.zip: $(SUBDIRS)
+ $(RM) wget.zip
+ zip -Djl9 wget.zip AUTHORS COPYING INSTALL MACHINES MAILING-LIST NEWS README DOC\\sample.wgetrc
+ zip -Dj9 wget.zip SRC\\WGET.EXE DOC\\WGET.HLP
+
--- /dev/null
+## Compiler, linker, and lib stuff
+## Makefile for use with watcom win95/winnt executable.
+
+CC=wcc386
+LINK=wlink
+
+#disabled for faster compiler
+LFLAGS=sys nt op st=32767 op version=15013 op map
+CFLAGS=/zp4 /d1 /w4 /fpd /5s /fp5 /bm /mf /os /bt=nt /DWINDOWS /DHAVE_CONFIG_H /I=f:\lang\watcom\h;f:\lang\watcom\h\nt;f:\code\wgetb13\src
+
+## variables
+OBJS = FILE ALLOCA.obj,cmpt.obj,connect.obj,fnmatch.obj,ftp.obj,ftp-basic.obj, &
+ ftp-ls.obj,ftp-opie.obj,getopt.obj,headers.obj,host.obj,html.obj, &
+ http.obj,init.obj,log.obj,main.obj,md5.obj,netrc.obj,rbuf.obj, &
+ recur.obj,retr.obj,url.obj,utils.obj,version.obj,mswindows.obj
+LINKOBJS = ALLOCA.obj cmpt.obj connect.obj fnmatch.obj ftp.obj ftp-basic.obj &
+ ftp-ls.obj ftp-opie.obj getopt.obj headers.obj host.obj html.obj &
+ http.obj init.obj log.obj main.obj md5.obj netrc.obj rbuf.obj &
+ recur.obj retr.obj url.obj utils.obj version.obj mswindows.obj
+LIBFILES =
+
+BINNAME=wget.exe
+
+$(BINNAME): $(LINKOBJS)
+ $(LINK) $(LFLAGS) NAME $(BINNAME) $(OBJS) $(LIBPATH) $(LIBFILES)
+
+alloca.obj : alloca.c config.h
+ $(CC) $(CFLAGS) alloca.c
+
+cmpt.obj : cmpt.c cmpt.h wget.h config.h
+ $(CC) $(CFLAGS) cmpt.c
+
+connect.obj : connect.c wget.h connect.h host.h config.h
+ $(CC) $(CFLAGS) connect.c
+
+fnmatch.obj : fnmatch.c wget.h fnmatch.h config.h
+ $(CC) $(CFLAGS) fnmatch.c
+
+ftp.obj : ftp.c wget.h utils.h url.h rbuf.h retr.h ftp.h html.h connect.h host.h fnmatch.h netrc.h config.h
+ $(CC) $(CFLAGS) ftp.c
+
+ftp-basic.obj : ftp-basic.c wget.h utils.h rbuf.h connect.h host.h config.h
+ $(CC) $(CFLAGS) ftp-basic.c
+
+ftp-ls.obj : ftp-ls.c wget.h utils.h ftp.h config.h
+ $(CC) $(CFLAGS) ftp-ls.c
+
+ftp-opie.obj : ftp-opie.c wget.h md5.h config.h
+ $(CC) $(CFLAGS) ftp-opie.c
+
+getopt.obj : getopt.c wget.h getopt.h config.h
+ $(CC) $(CFLAGS) getopt.c
+
+headers.obj : headers.c headers.h wget.h rbuf.h connect.h config.h
+ $(CC) $(CFLAGS) headers.c
+
+host.obj : host.c wget.h host.h utils.h url.h config.h
+ $(CC) $(CFLAGS) host.c
+
+html.obj : html.c wget.h url.h utils.h ftp.h html.h config.h
+ $(CC) $(CFLAGS) html.c
+
+http.obj : http.c wget.h utils.h url.h host.h rbuf.h retr.h headers.h connect.h fnmatch.h netrc.h config.h
+ $(CC) $(CFLAGS) http.c
+
+init.obj : init.c wget.h utils.h init.h host.h recur.h netrc.h config.h
+ $(CC) $(CFLAGS) init.c
+
+log.obj : log.c wget.h utils.h config.h
+ $(CC) $(CFLAGS) log.c
+
+main.obj : main.c wget.h utils.h getopt.h init.h retr.h host.h recur.h config.h mswindows.h
+ $(CC) $(CFLAGS) main.c
+
+md5.obj : md5.c wget.h md5.h config.h
+ $(CC) $(CFLAGS) md5.c
+
+mswindows.obj : mswindows.c wget.h url.h config.h
+ $(CC) $(CFLAGS) mswindows.c
+
+netrc.obj : netrc.c wget.h utils.h netrc.h init.h config.h
+ $(CC) $(CFLAGS) netrc.c
+
+rbuf.obj : rbuf.c wget.h rbuf.h connect.h config.h
+ $(CC) $(CFLAGS) rbuf.c
+
+recur.obj : recur.c wget.h url.h recur.h utils.h retr.h ftp.h fnmatch.h host.h config.h
+ $(CC) $(CFLAGS) recur.c
+
+retr.obj : retr.c wget.h utils.h retr.h url.h recur.h ftp.h host.h connect.h config.h
+ $(CC) $(CFLAGS) retr.c
+
+url.obj : url.c wget.h url.h host.h html.h utils.h config.h
+ $(CC) $(CFLAGS) url.c
+
+utils.obj : utils.c wget.h fnmatch.h utils.h config.h
+ $(CC) $(CFLAGS) utils.c
+
+version.obj : version.c config.h
+ $(CC) $(CFLAGS) version.c
+
--- /dev/null
+ -*- text -*-
+
+To build Wget with VC++ 5.0 run configure.bat in the wget directory,
+and then run nmake. If you want to build the help file you will need
+a copy of makeinfo to convert wget.texi to rtf.  I've made a copy
+available at <URL:http://www.sunsite.auc.dk/wget/makeinfo.zip>.  This
+copy of makeinfo is from the MiKTeX 1.10 archive available from CTAN.
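+
+A typical build session, assuming the sources have been unpacked in
+C:\wget, would then look something like this:
+
+    C:\wget> configure.bat
+    C:\wget> nmake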
+
+Windows contributors:
+
+* Darko Budor <dbudor@zesoi.fer.hr> -- the initial work on the Windows
+ port;
+
+* Tim Charron <tcharron@interlog.com> -- additional cleanup and
+ contribution of the Watcom makefile;
+
+* John Burden <john@futuresguide.com> -- cleanup of the VC++ makefile
+ to get a clean build with VC++ 5.0 on Windows 95;
+
+* Douglas E. Wegscheid -- maintains configure.bat and various Windows
+ makefiles.
--- /dev/null
+/* Configuration header file.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+
+#ifndef CONFIG_H
+#define CONFIG_H
+
+/* Define if you have the <alloca.h> header file. */
+#undef HAVE_ALLOCA_H
+
+/* AIX requires this to be the first thing in the file. */
+#ifdef __GNUC__
+# define alloca __builtin_alloca
+#else
+# if HAVE_ALLOCA_H
+# include <alloca.h>
+# else
+# ifdef _AIX
+ #pragma alloca
+# else
+# ifndef alloca /* predefined by HP cc +Olibcalls */
+char *alloca ();
+# endif
+# endif
+# endif
+#endif
+
+/* Define if on AIX 3.
+ System headers sometimes define this.
+ We just want to avoid a redefinition error message. */
+#ifndef _ALL_SOURCE
+/* #undef _ALL_SOURCE */
+#endif
+
+/* Define to empty if the keyword does not work. */
+/* #undef const */
+
+/* Define to `unsigned' if <sys/types.h> doesn't define. */
+/* #undef size_t */
+
+/* Define if you have the ANSI C header files. */
+#define STDC_HEADERS 1
+
+/* Define as the return type of signal handlers (int or void). */
+#define RETSIGTYPE int
+
+/* Define if your architecture is big endian (with the most
+ significant byte first). */
+#undef WORDS_BIGENDIAN
+
+/* Define this if you want the NLS support. */
+#undef ENABLE_NLS
+
+/* Define if you want the FTP support for Opie compiled in. */
+#define USE_OPIE 1
+
+/* Define if you want the HTTP Digest Authorization compiled in. */
+#define USE_DIGEST 1
+
+/* Define if you want the debug output support compiled in. */
+#define DEBUG
+
+/* Define if you have sys/time.h header. */
+#undef HAVE_SYS_TIME_H
+
+/* Define if you can safely include both <sys/time.h> and <time.h>. */
+#undef TIME_WITH_SYS_TIME
+
+/* Define if you have struct utimbuf. */
+#define HAVE_STRUCT_UTIMBUF 1
+
+/* Define if you have the gethostbyname function. */
+/* #undef HAVE_GETHOSTBYNAME */
+
+/* Define if you have the uname function. */
+#undef HAVE_UNAME
+
+/* Define if you have the gethostname function. */
+#define HAVE_GETHOSTNAME 1
+
+/* Define if you have the select function. */
+#define HAVE_SELECT 1
+
+/* Define if you have the gettimeofday function. */
+#undef HAVE_GETTIMEOFDAY
+
+/* Define if you have the strdup function. */
+#define HAVE_STRDUP 1
+
+/* Define if you have the sys/utsname.h header. */
+#undef HAVE_SYS_UTSNAME_H
+
+/* Define if you have the strerror function. */
+#define HAVE_STRERROR 1
+
+/* Define if you have the strstr function. */
+#define HAVE_STRSTR 1
+
+/* Define if you have the strcasecmp function. */
+#define HAVE_STRCASECMP 1
+
+/* Define if you have the strncasecmp function. */
+#define HAVE_STRNCASECMP 1
+
+/* Define if you have the strptime function. */
+#undef HAVE_STRPTIME
+
+/* Define if you have the mktime function. */
+#define HAVE_MKTIME 1
+
+/* Define if you have the symlink function. */
+#undef HAVE_SYMLINK
+
+/* Define if you have the signal function. */
+#undef HAVE_SIGNAL
+
+/* Define if you have the <stdarg.h> header file. */
+#define HAVE_STDARG_H 1
+
+/* Define if you have the <stdlib.h> header file. */
+#define HAVE_STDLIB_H 1
+
+/* Define if you have the <string.h> header file. */
+#define HAVE_STRING_H 1
+
+/* Define if you have the <unistd.h> header file. */
+#undef HAVE_UNISTD_H
+
+/* Define if you have the <utime.h> header file. */
+#define HAVE_UTIME_H 1
+
+/* Define if you have the <sys/utime.h> header file. */
+#undef HAVE_SYS_UTIME_H
+
+/* Define if you have the <sys/select.h> header file. */
+#undef HAVE_SYS_SELECT_H
+
+/* Define if you have the <pwd.h> header file. */
+#undef HAVE_PWD_H
+
+/* Define if you have the <signal.h> header file. */
+#undef HAVE_SIGNAL_H
+
+/* Define to be the name of the operating system. */
+#define OS_TYPE "Windows"
+
+#define CTRLBREAK_BACKGND 1
+
+/* Define if you wish to compile with socks support. */
+/* #undef HAVE_SOCKS */
+
+/* Define to 1 if ANSI function prototypes are usable. */
+#define PROTOTYPES 1
+
+#endif /* CONFIG_H */
--- /dev/null
+/* Configuration header file.
+ Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+
+
+#ifndef CONFIG_H
+#define CONFIG_H
+
+/* Define if you have the <alloca.h> header file. */
+#undef HAVE_ALLOCA_H
+
+/* AIX requires this to be the first thing in the file. */
+#ifdef __GNUC__
+# define alloca __builtin_alloca
+#else
+# if HAVE_ALLOCA_H
+# include <alloca.h>
+# else
+# ifdef _AIX
+ #pragma alloca
+# else
+# ifndef alloca /* predefined by HP cc +Olibcalls */
+char *alloca ();
+# endif
+# endif
+# endif
+#endif
+
+/* Define if on AIX 3.
+ System headers sometimes define this.
+ We just want to avoid a redefinition error message. */
+#ifndef _ALL_SOURCE
+/* #undef _ALL_SOURCE */
+#endif
+
+/* Define to empty if the keyword does not work. */
+/* #undef const */
+
+/* Define to `unsigned' if <sys/types.h> doesn't define. */
+/* #undef size_t */
+
+/* Define if you have the ANSI C header files. */
+#define STDC_HEADERS 1
+
+/* Define as the return type of signal handlers (int or void). */
+#define RETSIGTYPE int
+
+/* Define if your architecture is big endian (with the most
+ significant byte first). */
+#undef WORDS_BIGENDIAN
+
+/* Define this if you want the NLS support. */
+#undef ENABLE_NLS
+
+/* Define if you want the FTP support for Opie compiled in. */
+#define USE_OPIE 1
+
+/* Define if you want the HTTP Digest Authorization compiled in. */
+#define USE_DIGEST 1
+
+/* Define if you want the debug output support compiled in. */
+#define DEBUG
+
+/* Define if you have sys/time.h header. */
+#undef HAVE_SYS_TIME_H
+
+/* Define if you can safely include both <sys/time.h> and <time.h>. */
+#undef TIME_WITH_SYS_TIME
+
+/* Define if you have struct utimbuf. */
+#define HAVE_STRUCT_UTIMBUF 1
+
+/* Define if you have the gethostbyname function. */
+/* #undef HAVE_GETHOSTBYNAME */
+
+/* Define if you have the uname function. */
+#undef HAVE_UNAME
+
+/* Define if you have the gethostname function. */
+#define HAVE_GETHOSTNAME 1
+
+/* Define if you have the select function. */
+#define HAVE_SELECT 1
+
+/* Define if you have the gettimeofday function. */
+#undef HAVE_GETTIMEOFDAY
+
+/* Define if you have the strdup function. */
+#define HAVE_STRDUP 1
+
+/* Define if you have the sys/utsname.h header. */
+#undef HAVE_SYS_UTSNAME_H
+
+/* Define if you have the strerror function. */
+#define HAVE_STRERROR 1
+
+/* Define if you have the strstr function. */
+#define HAVE_STRSTR 1
+
+/* Define if you have the strcasecmp function. */
+#define HAVE_STRCASECMP 1
+
+/* Define if you have the strncasecmp function. */
+#define HAVE_STRNCASECMP 1
+
+/* Define if you have the strptime function. */
+#undef HAVE_STRPTIME
+
+/* Define if you have the mktime function. */
+#define HAVE_MKTIME 1
+
+/* Define if you have the symlink function. */
+#undef HAVE_SYMLINK
+
+/* Define if you have the signal function. */
+#undef HAVE_SIGNAL
+
+/* Define if you have the <stdarg.h> header file. */
+#define HAVE_STDARG_H 1
+
+/* Define if you have the <stdlib.h> header file. */
+#define HAVE_STDLIB_H 1
+
+/* Define if you have the <string.h> header file. */
+#define HAVE_STRING_H 1
+
+/* Define if you have the <unistd.h> header file. */
+/* #define HAVE_UNISTD_H 1 */
+#undef HAVE_UNISTD_H
+
+/* Define if you have the <utime.h> header file. */
+#undef HAVE_UTIME_H
+
+/* Define if you have the <sys/utime.h> header file. */
+#define HAVE_SYS_UTIME_H 1
+
+/* Define if you have the <sys/select.h> header file. */
+#undef HAVE_SYS_SELECT_H
+
+/* Define if you have the <pwd.h> header file. */
+#undef HAVE_PWD_H
+
+/* Define if you have the <signal.h> header file. */
+#undef HAVE_SIGNAL_H
+
+/* Define to be the name of the operating system. */
+#define OS_TYPE "Windows"
+
+#define CTRLBREAK_BACKGND 1
+
+/* Define if you wish to compile with socks support. */
+/* #undef HAVE_SOCKS */
+
+/* Define to 1 if ANSI function prototypes are usable. */
+#define PROTOTYPES 1
+
+#endif /* CONFIG_H */
--- /dev/null
+alloca$o: alloca.c
+ansi2knr$o: ansi2knr.c
+cmpt$o: cmpt.c config.h wget.h sysdep.h options.h
+connect$o: connect.c config.h wget.h sysdep.h options.h connect.h host.h
+fnmatch$o: fnmatch.c config.h wget.h sysdep.h options.h fnmatch.h
+ftp-basic$o: ftp-basic.c config.h wget.h sysdep.h options.h utils.h rbuf.h connect.h host.h
+ftp-ls$o: ftp-ls.c config.h wget.h sysdep.h options.h utils.h ftp.h rbuf.h
+ftp-opie$o: ftp-opie.c config.h wget.h sysdep.h options.h md5.h
+ftp$o: ftp.c config.h wget.h sysdep.h options.h utils.h url.h rbuf.h retr.h ftp.h html.h connect.h host.h fnmatch.h netrc.h
+getopt$o: getopt.c wget.h sysdep.h options.h getopt.h
+headers$o: headers.c config.h wget.h sysdep.h options.h connect.h rbuf.h headers.h
+host$o: host.c config.h wget.h sysdep.h options.h utils.h host.h url.h
+html$o: html.c config.h wget.h sysdep.h options.h url.h utils.h ftp.h rbuf.h html.h
+http$o: http.c config.h wget.h sysdep.h options.h utils.h url.h host.h rbuf.h retr.h headers.h connect.h fnmatch.h netrc.h md5.h
+init$o: init.c config.h wget.h sysdep.h options.h utils.h init.h host.h recur.h netrc.h
+log$o: log.c config.h wget.h sysdep.h options.h utils.h
+main$o: main.c config.h wget.h sysdep.h options.h utils.h getopt.h init.h retr.h rbuf.h recur.h host.h
+md5$o: md5.c wget.h sysdep.h options.h md5.h
+netrc$o: netrc.c wget.h sysdep.h options.h utils.h netrc.h init.h
+rbuf$o: rbuf.c config.h wget.h sysdep.h options.h rbuf.h connect.h
+recur$o: recur.c config.h wget.h sysdep.h options.h url.h recur.h utils.h retr.h rbuf.h ftp.h fnmatch.h host.h
+retr$o: retr.c config.h wget.h sysdep.h options.h utils.h retr.h rbuf.h url.h recur.h ftp.h host.h connect.h
+url$o: url.c config.h wget.h sysdep.h options.h utils.h url.h host.h html.h
+utils$o: utils.c config.h wget.h sysdep.h options.h utils.h fnmatch.h
+version$o: version.c