
Wget commands: examples

wget - GNU Wget manual

SYNTAX

wget [options]... [URL]...

DESCRIPTION

GNU Wget is an open-source utility for downloading files from the Internet. It supports the HTTP, HTTPS, and FTP protocols, as well as downloading through HTTP proxy servers.

Wget can follow links in HTML pages and create local copies of remote web sites, fully recreating the folder structure of the original site ("recursive downloading"). While doing this, Wget respects the robots exclusion file (/robots.txt). Wget can also convert the links in downloaded HTML files so that the site can be viewed offline ("off-line browsing").
Checking file headers: Wget can read file headers (available over HTTP and FTP) and compare them with the headers of previously downloaded files, after which it can download newer versions of the files. This allows Wget to be used for mirroring sites and sets of files over FTP.
Wget is designed for slow or unstable connections: if a problem occurs during a download, Wget will keep trying to continue it. If the server supports resuming, Wget will continue the download from exactly the point where it was interrupted.
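
For instance, a download over an unstable link might combine resuming with unlimited retries (the URL is a placeholder):

wget -c -t 0 http://example.com/big-archive.tar.gz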

OPTIONS

Main parameters

-V, --version Display the version of Wget.
-h, --help Display Wget's command-line options.
-b, --background Go to background after startup. If no file for messages is specified with the -o parameter, they are written to wget-log.
-e command, --execute command Execute command as if it were part of .wgetrc. The command is executed after the commands in .wgetrc.
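
As a brief illustration of these options (the URL is a placeholder), the following goes to the background and executes a .wgetrc-style command before downloading:

wget -b -e robots=off http://example.com/file.zip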

Message parameters

-o logfile, --output-file=logfile Log all messages to logfile. Otherwise they are directed to stderr.
-a logfile, --append-output=logfile Append to logfile. The same as -o, except that logfile is not replaced but appended to. If logfile does not exist, a new file is created.
-d, --debug Display debug messages, miscellaneous information that is important to Wget's developers.
-q, --quiet Turn off Wget's messages.
-v, --verbose Turn on verbose messages, with all available data. Enabled by default.
-nv, --non-verbose Use abbreviated messages (to disable messages entirely, see -q). Error messages and basic information are still displayed.
-i file, --input-file=file Read URLs from file. In this case you do not need to specify URLs on the command line. If URLs are specified both on the command line and in file, the URLs from the command line are downloaded first. file does not have to be HTML (but it is fine if it is); the URLs just have to be listed in it. If you specify --force-html, the file is read as HTML. In that case there may be problems with relative links, which can be prevented by adding <base href="url"> to the document or by specifying --base=url on the command line.
-F, --force-html When reading URLs from a file, read the file as HTML. To prevent errors in the case of a local HTML file, add <base href="url"> to the file or specify the --base parameter on the command line.
-B url, --base=url When reading URLs from a file (-F), defines the url that is prepended to the relative addresses in the file specified by the -i parameter.
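
For example, assuming a plain-text list of URLs in a hypothetical urls.txt, the following reads the URLs from that file and appends abbreviated messages to fetch.log:

wget -nv -a fetch.log -i urls.txt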

Download options

--bind-address=ADDRESS For TCP/IP connections, bind() to ADDRESS on the local machine. ADDRESS may be specified as a hostname or an IP address. Used if your computer has multiple IP addresses.
-t number, --tries=number Sets the number of retries to number. Specify 0 or inf to retry forever.
-O file, --output-document=file The documents will not be written to the corresponding files, but will be merged together and written to file. If file exists, it is replaced. If file is given as -, the documents are written to stdout. This parameter automatically sets the number of retries to 1. Useful when downloading split files from mail servers through a web interface.
-nc, --no-clobber When Wget is started without the parameters -N, -nc, or -r, downloading the same file into the same folder creates a copy named file.1. If a file with that name exists too, the third copy is named file.2, and so on. With the -nc parameter, warnings about this are displayed instead. When Wget runs with -r but without -N or -nc, a new download of the site replaces the files already downloaded. With -nc specified, downloading continues from where it stopped, and already-downloaded files are not fetched again (unless they have changed). When Wget runs with -N, with or without -r, a file is downloaded only if it is newer than the existing one or if its size does not match the existing copy (see comparison by date). -nc cannot be combined with -N.
With the -nc parameter specified, files with the .html or .htm extensions are loaded from the local disk as if they had come from the Internet.
-c, --continue Resume downloading a file. Used if the download of a file was interrupted. For example: wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

If the current folder already contains a file named ls-lR.Z, Wget will assume that it is the first portion of the remote file and will ask the server to continue the download from an offset equal to the length of the local file. Note that Wget retries interrupted downloads on its own, even without -c; only when it "gives up" and exits is this parameter needed to resume downloading the file.
Without -c, the previous example would download the specified file again under the name ls-lR.Z.1, without touching the existing ls-lR.Z.
Since version 1.7, with -c specified, if the file on the server is equal in size to or smaller than the local file, Wget will not download anything and will print a message saying so.
However, when using -c, any file on the server that is larger than the local file is considered under-downloaded. In this case, only (length(remote file) - length(local file)) bytes are downloaded and appended to the end of the file. This can be useful if you need to fetch the new entries appended to a log.
Moreover, if the downloaded file is larger because it has changed, you will end up with a corrupted file (that is, the file may turn out to be completely different from the original). You need to be especially careful when using -c together with -r, since every modified file is a candidate for an "incomplete download".
You will also get a corrupted file if your HTTP proxy is broken and writes a "transfer interrupted" message into the file when the connection drops. Wget will probably fix this in future versions.
Remember that -c works only with FTP and HTTP servers that support Range headers (that is, file resuming).
--progress=type The download progress indicator and its type. Possible values are "dot" and "bar"; the default is "bar", which draws a pleasant ASCII indicator (like a "thermometer"). If standard output is not a TTY, "dot" is used. Specify --progress=dot to switch to the "dot" type: progress is shown by adding dots or equal signs to the line, each character representing the same amount of data. With this type you can also specify a style, dot:style. In the "default" style each character represents 1 KB, with 10 characters per cluster and 50 per line. The "binary" style has a more "computer" look: 8 KB per character, 16 characters per cluster, and 48 characters per line (a 384 KB line). The "mega" style is for downloading large files: each character represents 64 KB, with 8 characters per cluster and 48 characters per line (3 MB per line).
You can set the default style with the "progress" command in .wgetrc. If you want the "bar" indicator to be used always (and not only when writing to stdout), specify --progress=bar:force.
-N, --timestamping Enable comparison by date.
-S, --server-response Display the headers sent by HTTP servers and the responses sent by FTP servers.
--spider Set Wget to behave like a spider: it will not download files, only check that they exist. This can be used to check a site's bookmarks and links. For example:

wget --spider --force-html -i bookmarks.html

Wget does not contain all the features of "real" WWW spiders.
-T seconds, --timeout=seconds Wait time in seconds. The default timeout is 900 seconds (15 minutes). Setting the value to 0 disables the timeout check. Do not lower the timeout value unless you know exactly what you are doing.
-w seconds, --wait=seconds Pause for the given number of seconds between downloads (including retries). This reduces the load on the server. To give the value in minutes append "m" to the number, in hours "h", in days "d". A large value is useful if the network is unstable (for example, when a modem connection keeps dropping).
--waitretry=seconds Pause only between retries of interrupted downloads. Wget waits 1 second after the first failure, 2 seconds after the second failure on the same file, and so on, up to the maximum given in seconds. For example, with this parameter set to 10, Wget waits a total of (1 + 2 + ... + 10) = 55 seconds per file. This value is set by default in the wgetrc file.
--random-wait Some servers, analyzing their logs for the pauses between file requests, can detect recursive downloading by robots such as Wget. This parameter varies the pauses between requests, using times from 0 to 2*wait seconds (where wait is given by -w), to disguise Wget. Remember that Wget's source code is available, so even this masking can be worked out if desired.
-Y on/off, --proxy=on/off Proxy server support. Enabled by default if a proxy is defined.
-Q quota, --quota=quota Quota on the total size of downloaded files, given in bytes (by default), kilobytes (trailing k), or megabytes (trailing m). When the quota is exhausted, the current file is downloaded to the end; that is, the quota has no effect when downloading a single file. For example, after wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz the file ls-lR.gz will be downloaded in full. Likewise, all files given on the command line are always downloaded in full, unlike files listed in an input file or fetched recursively. Specifying 0 or inf cancels the quota.
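
As an illustration of the politeness options (urls.txt is a hypothetical list of URLs), the following waits between requests, randomizes the pauses, and caps the total download at 100 MB:

wget -w 2 --random-wait -Q100m -i urls.txt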

Folder download options

-nd, --no-directories Do not create a folder structure when downloading recursively. With this parameter, all files are downloaded into one folder. If a file with a given name already exists, it is saved as FileName.n.
-x, --force-directories The opposite of -nd: create a folder structure starting from the root of the server. For example, wget -x http://fly.srk.fer.hr/robots.txt downloads the file into the folder fly.srk.fer.hr.
-nH, --no-host-directories Do not create the host folder at the start of the structure. For example, downloading ftp://ftp.xemacs.org/pub/xemacs/ with the -r parameter saves it under the name ftp.xemacs.org/pub/xemacs/. With -nH, the starting folder ftp.xemacs.org/ is cut from the name, and it is called pub/xemacs. The --cut-dirs parameter removes number components. Examples of how --cut-dirs works:
no parameters -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/

If you just want to get rid of the folder structure, you can replace this parameter with -nd and -P. Unlike -nd, --cut-dirs does not lose subdirectories: for example, with -nH --cut-dirs=1 the subdirectory beta/ is written as xemacs/beta.
-P prefix, --directory-prefix=prefix Defines the starting folder in which the site's folder structure (or just the files) will be saved. By default, this parameter is . (the current folder).
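
Continuing the ftp.xemacs.org example above, a sketch that drops the host folder and the two leading path components, saving everything under a local mirror/ folder:

wget -r -nH --cut-dirs=2 -P mirror ftp://ftp.xemacs.org/pub/xemacs/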

HTTP parameters

-E, --html-extension If the type of a downloaded file is text/html and its address does not end in .html, this parameter appends .html to its name. This can be useful when mirroring sites that use .asp pages, if you do not want them to interfere with your Apache server. Another use case is downloading the response pages of CGI scripts: a page with a URL like http://site.com/article.cgi?25 will be saved as article.cgi?25.html. Note: when updating or re-downloading pages with this parameter, the pages are downloaded again in any case, because Wget cannot tell whether the local file X.html corresponds to the downloaded URL X. To avoid unnecessary re-downloads, use the -k and -K options; the original versions of the files will then also be saved as X.orig.
--http-user=user --http-passwd=password Username user and password password for the HTTP server. Depending on the type of response, Wget uses either "basic" (insecure) or "digest" (secure) authentication. You can also specify a username and password in the URL itself.
-C on/off, --cache=on/off Enables or disables server-side caching; Wget sends the corresponding request (Pragma: no-cache). Also used to quickly refresh files on a proxy server. Caching is enabled by default.
--cookies=on/off Enables or disables the use of cookies. The server sends a cookie to the client using the "Set-Cookie" header, and the client responds with the same cookie, which lets the server keep visitor statistics. By default, cookies are used, but writing them to disk is disabled.
--load-cookies file Load cookies from file before the first HTTP download. file is in the text format of cookies.txt for Netscape. This option is used when mirroring: Wget sends the same cookies that your browser would send when connecting to the HTTP server. Just give Wget the path to cookies.txt. Different browsers store cookies in different folders:
Netscape 4.x: the file is ~/.netscape/cookies.txt.
Mozilla and Netscape 6.x: Mozilla stores cookies in cookies.txt, located somewhere under ~/.mozilla in your profile folder; the full path usually ends with something like ~/.mozilla/default/some-weird-string/cookies.txt.
Internet Explorer: to export cookies for Wget, select File, Import & Export, and in the wizard choose Export Cookies. Tested in Internet Explorer 5; may not work in earlier versions.
Other browsers: the --load-cookies parameter works with any cookies in the Netscape format supported by Wget.
If you cannot use --load-cookies, there is still a way out. If your browser lets you view cookies, write down the name and value of the cookie and manually tell Wget to send it: wget --cookies=off --header "Cookie: name=value"
--save-cookies file Save cookies to file at the end of the session. Expired cookies are not saved.
--ignore-length Some HTTP servers (more precisely, CGI scripts) send "Content-Length" headers that tell Wget that not everything has been downloaded, and Wget downloads the same document multiple times. With this parameter, Wget ignores "Content-Length" headers.
--header=additional-header Defines an additional-header sent to the HTTP server. It must contain a colon (:) with characters after it. You can define several additional headers by using --header repeatedly. wget --header="Accept-Charset: iso-8859-2" --header="Accept-Language: hr" http://fly.srk.fer.hr/

Specifying an empty string as the header value clears all previously defined user headers.
--proxy-user=user --proxy-passwd=password Specifies the username user and the password password for authorization on a proxy server. The authorization type is "basic".
--referer=url Adds the header Referer: url to the HTTP request. Used when downloading pages that are served correctly only if the server knows which page you came from.
-s, --save-headers Save the headers sent by HTTP servers.
-U agent-string, --user-agent=agent-string Identify as agent-string when talking to an HTTP server. The HTTP protocol allows clients to identify themselves with a user-agent header. Wget identifies itself by default as Wget/version, where version is the Wget version. Some servers provide the requested information only to browsers identifying themselves as "Mozilla" or Microsoft "Internet Explorer". This option lets you fool such servers.
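
For example, a request that identifies itself as a browser and supplies a Referer (all values and URLs here are placeholders):

wget --user-agent="Mozilla/5.0" --referer=http://example.com/index.html http://example.com/download/file.zip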

FTP options

-nr, --dont-remove-listing Do not delete the temporary .listing files generated by FTP downloads. These files contain information about the folders on FTP servers. Keeping them helps you quickly check that your mirror of the server's folders is up to date. If you do not remove .listing files, remember your safety: a file of this name could, for example, be made a symbolic link to /etc/passwd or something else.
-g on/off, --glob=on/off Enables or disables the use of special (mask) characters over FTP. These can be *, ?, [ and ]. For example: wget ftp://gnjilux.srk.fer.hr/*.msg

By default, mask characters are allowed if the URL contains them. You can also enclose the URL in quotes. Globbing only works on Unix FTP servers (and those emulating the output of Unix "ls").
--passive-ftp Enables passive FTP mode, in which the connection is initiated by the client. Used when there is a firewall.
--retr-symlinks When FTP folders are downloaded recursively, files pointed to by symbolic links are not downloaded; this option disables that behavior. --retr-symlinks currently works only for files, not folders. Remember that this option has no effect when downloading a single file.
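
For instance, a masked download in passive mode (the host here is a placeholder); the URL is quoted so that the shell does not expand the mask itself:

wget --passive-ftp -g on "ftp://ftp.example.com/pub/*.msg"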

Recursive loading options

-r, --recursive Enable recursive downloading.
-l depth, --level=depth Maximum recursion depth depth. The default value is 5.
--delete-after Delete every page (locally) after downloading it. Used to pre-fetch new versions of frequently requested pages through a proxy. For example: wget -r -nd --delete-after http://whatever.com/~popular/page/

The -r parameter enables recursive downloading, and -nd disables the creation of folders. When --delete-after is specified, the --convert-links parameter is ignored.
-k, --convert-links After the download is complete, convert the links in the documents for offline viewing. This applies not only to visible links to other documents, but to any part of a document that refers to external local content, such as embedded images and style sheets. Each link is modified in one of two ways:

* Links to files downloaded by Wget are changed to the corresponding relative links. For example, if the downloaded file is /foo/doc.html, then a link to the also-downloaded file /bar/img.gif will look like ../bar/img.gif. This method works when there is a deducible relationship between the folders of the two files.
* Links to files not downloaded by Wget are changed to the absolute addresses of those files on the remote server. For example, if the downloaded file /foo/doc.html contains a link to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html is changed to http://host/bar/img.gif.
Thanks to this, offline viewing of the site and its files is possible: if a file that is linked to has been downloaded, the link points to it; if not, the link points to its Internet address (if one exists). The conversion uses relative links, which means you can move the downloaded site to another folder without changing its structure.
Only after the download is complete does Wget know which files have been downloaded, so with the -k parameter the conversion takes place only once the download is finished.
-K, --backup-converted Before converting a file, back up the original version with a .orig extension. Affects the behavior of -N.
-m, --mirror Enable options suitable for mirroring sites. This parameter is equivalent to several parameters: -r -N -l inf -nr. It is a convenient way to keep simple mirrored copies of sites.
-p, --page-requisites Download all files needed to display HTML pages, for example pictures, sounds, and cascading style sheets. By default these files are not downloaded. -r and -l specified together may help, but since Wget does not distinguish between external and inline documents, there is no guarantee that everything required will be loaded. For example, suppose 1.html contains an <IMG> tag referencing 1.gif and an <A> tag pointing to the external document 2.html. Page 2.html is similar, but its picture is 2.gif and it links to 3.html. Say this continues up to some arbitrarily high number. If the command is given: wget -r -l 2 http://site/1.html

then 1.html, 1.gif, 2.html, 2.gif and 3.html will be downloaded. As you can see, 3.html is downloaded without 3.gif, because Wget simply counts the number of hops it has made from 1.html, reaches 2, and stops. And with the parameters:

wget -r -l 2 -p http://site/1.html

all the above files and 3.html's picture 3.gif will be downloaded. Likewise,

wget -r -l 1 -p http://site/1.html

will download 1.html, 1.gif, 2.html and 2.gif. To download a single specified HTML page with all its elements, just do not specify -r and -l:

wget -p http://site/1.html

In this case, Wget behaves as if -r had been specified, but only the page and its supporting files are downloaded. If you want supporting files on other servers (that is, via absolute links) to be downloaded too, use:

wget -E -H -k -K -p http://site/document

Finally, it should be said that for Wget an external link is any URL specified in an <A> tag, an <AREA> tag, or a <LINK> tag, except <LINK REL="stylesheet">.

Options for allowing and disallowing recursive downloading

-A acclist, --accept acclist; -R rejlist, --reject rejlist A comma-separated list of file names to download or not to download. File names may be given as masks.
-D domain-list, --domains=domain-list A comma-separated list domain-list of domains from which files may be downloaded. This parameter does not enable -H.
--exclude-domains domain-list A list of domains from which files may not be downloaded.
--follow-ftp Follow FTP links from HTML pages. Otherwise, links to FTP files are ignored.
--follow-tags=list Wget has a built-in table of the HTML tags in which it looks for links to other files. You can specify additional tags in the comma-separated list list given in this parameter.
-G list, --ignore-tags=list The reverse of --follow-tags. To skip certain HTML tags when downloading recursively, specify them in the comma-separated list list. Formerly the -G parameter was the best way to download individual pages with their supporting files; you can see how this worked with the command wget -Ga,area -H -k -K -r http://site/document

But now the best parameter for downloading a single page with its supporting files is considered to be --page-requisites.
-H, --span-hosts Allow visits to any servers to which there is a link.
-L, --relative Follow relative links only. With this parameter, files from other servers will definitely not be downloaded.
-I list, --include-directories=list A comma-separated list of folders from which files may be downloaded. List items may contain mask characters.
-X list, --exclude-directories=list A comma-separated list of folders excluded from downloading (see restricting by folders). List items may contain mask characters.
-np, --no-parent Do not climb above the starting address when downloading recursively.
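
As a sketch of these filters (example.com is a placeholder), the following recursively fetches only PDF files, without climbing above the starting folder:

wget -r -l2 -np -A "*.pdf" http://example.com/docs/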

EXAMPLES OF USE

The examples are divided into three categories according to their difficulty.

Simple use

* If you need to download a URL, enter: wget http://fly.srk.fer.hr/
* But what if the connection is slow and the file is large? The connection may drop before the download completes. In that case, Wget will keep trying a new connection until it runs out of attempts (20 by default). You can change this number, for example to 45: wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
* Now let's leave Wget running in the background and write its messages to the log file log. Typing --tries every time is long, so we use -t: wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &

The ampersand at the end tells the shell to continue without waiting for Wget to finish. To make the program retry indefinitely, use -t inf.
* Using FTP is also very easy. Wget takes care of all the authorization concerns.

wget ftp://gnjilux.srk.fer.hr/welcome.msg
* If you specify a folder address, Wget will download the listing for that folder (that is, the files and subfolders it contains) and convert it to HTML. For example: wget ftp://prep.ai.mit.edu/pub/gnu/ ; links index.html

Extended use

* If you have a file with URLs that you want to download, use the -i parameter: wget -i file

If you specify - instead of a file name, the URLs will be read from stdin.
* Create a five-level copy of the GNU site with the original folder structure, with one download attempt per file, saving messages to gnulog:

wget -r http://www.gnu.org/ -o gnulog
* As in the example above, but converting the links in the HTML files to local ones for later offline viewing: wget --convert-links -r http://www.gnu.org/ -o gnulog
* Download one HTML page and all the files required to display it (e.g. pictures, cascading style files, and so on), and convert all the links to these files: wget -p --convert-links http://www.server.com/dir/page.html

The HTML page will be saved in www.server.com/dir/page.html, and the pictures, cascading styles and so on in the folder www.server.com/, except when files are downloaded from other servers.
* As in the example above, but without the www.server.com/ folder, saving all files in the subfolder download/:

wget -p --convert-links -nH -nd -Pdownload http://www.server.com/dir/page.html
* Download index.html from www.lycos.com, displaying the server headers: wget -S http://www.lycos.com/
* Save the headers to a file for later use: wget -s http://www.lycos.com/ ; more index.html
* Download the top two levels of wuarchive.wustl.edu into /tmp: wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
* Download the GIF files of a folder on an HTTP server. The command wget http://www.server.com/dir/*.gif will not work, since wildcards are not supported for HTTP downloads. Use: wget -r -l1 --no-parent -A.gif http://www.server.com/dir/

-r -l1 enables recursive downloading with a maximum depth of 1, --no-parent disables following links to the parent folder of the top level, and -A.gif allows downloading only files with the .GIF extension (-A "*.gif" also works).
* Suppose that during a recursive download you urgently need to shut down or restart your computer. To avoid re-downloading existing files, use:

wget -nc -r http://www.gnu.org/
* If you want to supply a username and password for an HTTP or FTP server, use the appropriate URL syntax: wget ftp://hniksic:mypassword@unix.server.com/.emacs
* Do you want the downloaded documents to go to standard output instead of files? wget -O - http://jagor.srce.hr/ http://www.srce.hr/

If you want to set up a pipeline and download all the sites that are linked from one page:

wget -O - http://cool.list.com/ | wget --force-html -i -

Professional use

* To keep a mirror of a page (or an FTP folder), use --mirror (-m), which replaces -r -l inf -N. You can add Wget to your crontab, asking it to check for updates every Sunday: crontab 0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
* You also want the links converted for local viewing. But after reading this manual, you know that link conversion does not play well with date comparison, so tell Wget to keep backup copies of the HTML files before converting them. Command: wget --mirror --convert-links --backup-converted http://www.gnu.org/ -o /home/me/weeklog
* And if local viewing does not work for HTML files with extensions other than .html, for example index.cgi, you need to tell Wget to rename all such files (content-type text/html) to name.html: wget --mirror --convert-links --backup-converted --html-extension -o /home/me/weeklog http://www.gnu.org/

The same with the short equivalents of the commands:

wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog

Files

/usr/local/etc/wgetrc The default location of the global settings file.
.wgetrc The user settings file.

BUGS

You can submit bug reports on GNU Wget to <bug-wget@gnu.org> (in English).
Before sending:

1. Make sure that the behavior of the program is really erroneous. If Wget crashes, it is a bug. If the behavior of Wget does not match the documentation, it is a bug. If everything works strangely but you are not sure how it should actually work, that may also be a bug.
2. Try to reproduce the erroneous situation in the minimum number of actions. Do not rush to send your .wgetrc; first try to perform all the actions that led to the error with a different settings file (or without one at all).
3. Run Wget with the -d parameter and send the log (or the relevant parts of it). It is much easier to find errors with such logs.
4. If Wget crashed, try running it in a debugger, for example gdb `which wget` core, and type where to get the backtrace.

SEE ALSO

GNU Info for wget.

AUTHORS

COPYRIGHT

Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
This is free software; see the source texts for copying conditions. There is NO warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

GNU Wget is a free, non-interactive console program for downloading files over the network. Supports HTTP, FTP and HTTPS protocols, and also supports working through an HTTP proxy server. The program is included in almost all GNU / Linux distributions.

GNU Wget is a non-interactive program. This means that after it is launched, the user can influence its operation only with the process control tools of the operating system itself. As a rule, the keyboard shortcut Ctrl+C is used to interrupt the program, and Ctrl+Z to put the current task into the background.

Modern browsers usually have a file download function, but since a browser is designed for interactive use, downloading a large number of files by hand can be tedious. Browsers generally do not provide means to automate such tasks. GNU Wget, for example, supports downloading URLs listed in a file: you can make a list of files and download them at any convenient time with GNU Wget.

The command line interface allows you to control GNU Wget from other programs and scripts, which is used to automate file downloads (regular updates, monitoring server availability, etc.).

GNU Wget allows you to download any file on the world wide web (including (X) HTML pages) via HTTP and HTTPS, as well as files and directory lists via FTP.

Files can be downloaded recursively by following links in HTML pages, either from a single site to a given link depth or from several sites. In addition, when downloading over FTP, files can be fetched "by mask" (that is, you can specify a group of files with "*").

GNU Wget also supports resuming a file if the connection is lost.

We all download files from the Internet from time to time. With graphical programs this is extremely simple. However, when working on the Linux command line, things get a little more complicated, especially for those who are not familiar with the right tools. One such tool is the extremely powerful wget utility, which is suitable for all kinds of downloads. Below are twelve examples that will help you master its basic features.

1. Downloading a single file

To download a single file, it is enough to pass its URL to wget:

$ wget https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia
After entering this command, the download of Nagios Core will begin. During this process, you will be able to see data about the download, for example, information about how much data has already been downloaded, the current speed, and how much time is left until the end of the download.

2. Download the file and save it with a new name

If we want to save the downloaded file with a different name from its original name, the wget command with the -O parameter comes in handy:

$ wget -O nagios_latest https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia
With this approach, the downloaded file will be saved under the name nagios_latest.

3. Limiting the speed of downloading files

You can limit the download speed of files using wget if necessary. As a result, this operation will not occupy the entire available data transmission channel and will not affect other processes associated with the network. This can be done by using the --limit-rate parameter and specifying the rate limit expressed in bytes (as a regular number), kilobytes (adding a K after the number) or megabytes (M) per second:

$ wget --limit-rate=500K https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia
The download speed is limited to 500 KB/s.

4. Resuming an interrupted download

If this operation is interrupted while downloading files, you can resume downloading by using the -c parameter of the wget command:

$ wget -c https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia
If this parameter is not used, then the download of the incomplete file will start from the beginning.

5. Downloading a file in the background

If you are downloading a huge file and want to perform this operation in the background, you can do so using the -b option:

$ wget -b https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia

6. Downloading multiple files

If you have a list of URLs of files to download, but you do not want to trigger each download manually, you can use the -i parameter. Before starting the download, you need to create a file containing all the addresses. For example, you can do this with the following command:

$ vi url.txt
Add the addresses to this file, one per line. Then it only remains to run wget, passing the newly created file with the list of downloads to the utility:

$ wget -i url.txt
Running this command results in the sequential downloading of all the files in the list.

7. Increasing the number of download attempts

In order to configure the number of retries to download a file, you can use the --tries parameter:

$ wget --tries=100 https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia

8. Downloading from an FTP server

The command to download a file from an anonymous FTP server using wget looks like this:

$ wget FTP-URL
If a username and password are required to access the file, the command will look like this:

$ wget --ftp-user=dan --ftp-password=********* FTP-URL

9. Create a local copy of the website

If you need to download the content of an entire website, you can do so using the --mirror parameter:

$ wget --mirror -p --convert-links -P /home/dan xyz.com
Note the additional command line parameters:

  • -p: download all files required for the correct display of HTML pages.
  • --convert-links: links in documents will be converted for local viewing of the site.
  • -P /home/dan: content will be saved to the /home/dan folder.

10. Downloading from the site only files of a certain type

In order to download only files of a certain type from the site, you can use the -r -A parameters:

$ wget -r -A.txt Website_url

11. Skipping files of a certain type

If you want to copy an entire website, but you do not need files of a certain type, you can disable their loading using the --reject parameter:

$ wget --reject = png Website_url

12. Download using your own .log file

To download a file and write the messages to your own .log file, use the -o option and specify the name of the log file:

$ wget -o wgetfile.log https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia

Conclusion

Wget is a fairly easy to use and very useful Linux utility, and what we have covered here is only a small part of what it can do. Hopefully this overview helps those unfamiliar with wget to appreciate the program, and perhaps to add it to their daily arsenal of command-line tools.

Dear readers! Do you use Linux command-line tools to download files? If so, please tell us about them.

'--bind-address=ADDRESS'

When making client TCP/IP connections, bind to ADDRESS on the local machine. ADDRESS may be specified as a hostname or IP address. This option can be useful if your machine is bound to multiple IPs.

'--bind-dns-address=ADDRESS'

[libcares only] This address overrides the route for DNS requests. If you ever need to circumvent the standard settings from /etc/resolv.conf, this option together with '--dns-servers' is your friend. ADDRESS must be specified either as an IPv4 or IPv6 address. Wget needs to be built with libcares for this option to be available.

'--dns-servers=ADDRESSES'

[libcares only] The given address(es) override the standard nameserver addresses, e.g. as configured in /etc/resolv.conf. ADDRESSES may be specified either as IPv4 or IPv6 addresses, comma-separated. Wget needs to be built with libcares for this option to be available.
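
A minimal sketch combining the two DNS options (the addresses are hypothetical, and wget must be built with libcares):

wget --bind-dns-address=192.168.1.10 --dns-servers=192.168.1.1 http://example.com/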

'-t number'
'--tries=number'

Set number of tries to number. Specify 0 or 'inf' for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like "connection refused" or "not found" (404), which are not retried.

'-O file'
'--output-document=file'

The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If '-' is used as file, documents will be printed to standard output, disabling link conversion. (Use ‘./-’ to print to a file literally named ‘-’.)

Use of '-O' is not intended to mean simply "use the name file instead of the one in the URL;" rather, it is analogous to shell redirection: 'wget -O file http://foo' is intended to work like 'wget -O - http://foo > file'; file will be truncated immediately, and all downloaded content will be written there.

For this reason, ‘-N’ (for timestamp-checking) is not supported in combination with ‘-O’: since file is always newly created, it will always have a very new timestamp. A warning will be issued if this combination is used.

For this reason, "-N" (for checking the timestamp) is not supported in conjunction with "-O": since the file is always created, it will always have a very new timestamp. A warning will be issued when using this combination.

Similarly, using '-r' or '-p' with '-O' may not work as you expect: Wget won't just download the first file to file and then download the rest to their normal names: all downloaded content will be placed in file. This was disabled in version 1.11, but has been reinstated (with a warning) in 1.11.2, as there are some cases where this behavior can actually have some use.

A combination with '-nc' is only accepted if the given output file does not exist.

Note that a combination with '-k' is only permitted when downloading a single document, as in that case it will just convert all relative URIs to external ones; '-k' makes no sense for multiple URIs when they're all being downloaded to a single file; '-k' can be used only when the output is a regular file.

'-nc'
'--no-clobber'

If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including '-nc'. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.

When running Wget without '-N', '-nc', '-r', or '-p', downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named 'file.1'. If that file is downloaded yet again, the third copy will be named 'file.2', and so on. (This is also the behavior with '-nd', even if '-r' or '-p' are in effect.)

When '-nc' is specified, this behavior is suppressed, and Wget will refuse to download newer copies of 'file'. Therefore, "no-clobber" is actually a misnomer in this mode: it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.

When running Wget with '-r' or '-p', but without '-N', '-nd', or '-nc', re-downloading a file will result in the new copy simply overwriting the old. Adding '-nc' will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.

When running Wget with '-N', with or without '-r' or '-p', the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file (see Time-Stamping). '-nc' may not be specified at the same time as '-N'.

A combination with ‘-O’ / ’- output-document’ is only accepted if the given output file does not exist.

Note that when '-nc' is specified, files with the suffixes '.html' or '.htm' will be loaded from the local disk and parsed as if they had been retrieved from the Web.

'--backups=backups'

Before (over) writing a file, back up an existing file by adding a ‘.1’ suffix (‘_1’ on VMS) to the file name. Such backup files are rotated to ‘.2’, ‘.3’, and so on, up to backups (and lost beyond that).

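A minimal sketch, assuming a periodically refreshed report.html at a placeholder URL: with '--backups=3', each overwrite rotates the previous copies through report.html.1, .2 and .3:

wget -N --backups=3 http://example.com/report.html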

'-c'
'--continue'

Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:

wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.

Note that you don't need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. '-c' only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around.

Without '-c', the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.

If you use '-c' on a non-empty file, and the server does not support continued downloading, Wget will restart the download from scratch and overwrite the existing file entirely.

Beginning with Wget 1.7, if you use '-c' on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt); because "continuing" is not meaningful, no download occurs.

On the other side of the coin, while using '-c', any file that's bigger on the server than locally will be considered an incomplete download and only (length(remote) - length(local)) bytes will be downloaded and tacked onto the end of the local file. This behavior can be desirable in certain cases; for instance, you can use 'wget -c' to download just the new portion that's been appended to a data collection or log file.

However, if the file is bigger on the server because it’s been changed, as opposed to just appended to, you’ll end up with a garbled file. Wget has no way of verifying that the local file is really a valid prefix of the remote file. You need to be especially careful of this when using ‘-c’ in conjunction with ‘-r’, since every file will be considered as an “incomplete download” candidate.

Another instance where you’ll get a garbled file if you try to use ’-c’ is if you have a lame HTTP proxy that inserts a “transfer interrupted” string into the local file. In the future a “rollback” option may be added to deal with this case.

Note that '-c' only works with FTP servers and with HTTP servers that support the Range header.

'--start-pos=OFFSET'

Start downloading at zero-based position OFFSET. Offset may be expressed in bytes, kilobytes with the ‘k’ suffix, or megabytes with the ‘m’ suffix, etc.

'--start-pos' has higher precedence over '--continue'. When '--start-pos' and '--continue' are both specified, wget will emit a warning then proceed as if '--continue' was absent.

Server support for continued download is required, otherwise '--start-pos' cannot help. See '-c' for details.
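
For instance, to start fetching at byte offset 1024 (placeholder URL; the server must support ranged requests, see '-c'):

wget --start-pos=1024 -O tail.bin http://example.com/file.bin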

'--progress=type'

Select the type of the progress indicator you wish to use. Legal indicators are “dot” and “bar”.

The "bar" indicator is used by default. It draws an ASCII progress bar (a.k.a. "thermometer" display) indicating the status of retrieval. If the output is not a TTY, the "dot" bar will be used by default.

Use '--progress=dot' to switch to the "dot" display. It traces the retrieval by printing dots on the screen, each dot representing a fixed amount of downloaded data.

The progress type can also take one or more parameters. The parameters vary based on the type selected. Parameters to type are passed by appending them to the type separated by a colon (:) like this: '--progress=type:parameter1:parameter2'.

When using the dotted retrieval, you may set the style by specifying the type as 'dot:style'. Different styles assign different meaning to one dot. With the default style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The binary style has a more "computer"-like orientation: 8K dots, 16-dots clusters and 48 dots per line (which makes for 384K lines). The mega style is suitable for downloading large files: each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M). If mega is not enough then you can use the giga style: each dot represents 1M retrieved, there are eight dots in a cluster, and 32 dots on each line (so each line contains 32M).
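
For example, to pick the mega style for a large download (placeholder URL):

wget --progress=dot:mega http://example.com/large.iso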

With '--progress=bar', there are currently two possible parameters, force and noscroll.

When the output is not a TTY, the progress bar always falls back to "dot", even if '--progress=bar' was passed to Wget during invocation. This behavior can be overridden and the "bar" output forced by using the "force" parameter as '--progress=bar:force'.

By default, the 'bar' style progress bar scrolls the name of the file from left to right for the file being downloaded if the filename exceeds the maximum length allotted for its display. In certain cases, such as with '--progress=bar:force', one may not want the scrolling filename in the progress bar. By passing the "noscroll" parameter, Wget can be forced to display as much of the filename as possible without scrolling through it.

Note that you can set the default style using the progress command in .wgetrc. That setting may be overridden from the command line. For example, to force the bar output without scrolling, use '--progress=bar:force:noscroll'.

'--show-progress'

Force wget to display the progress bar in any verbosity.

By default, wget only displays the progress bar in verbose mode. One may, however, want wget to display the progress bar on screen in conjunction with any other verbosity modes like '--no-verbose' or '--quiet'. This is often a desired property when invoking wget to download several small/large files. In such a case, wget could simply be invoked with this parameter to get a much cleaner output on the screen.

This option will also force the progress bar to be printed to stderr when used alongside the '--logfile' option.
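
For instance, a quiet invocation that still draws the progress bar (placeholder URL):

wget -q --show-progress http://example.com/file.zip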

'-N'
'--timestamping'

Turn on time-stamping. See Time-Stamping, for details.

'--no-if-modified-since'

Do not send the If-Modified-Since header in '-N' mode; send a preliminary HEAD request instead. This only has an effect in '-N' mode.

'--no-use-server-timestamps'

Don’t set the local file’s timestamp by the one on the server.

By default, when a file is downloaded, its timestamps are set to match those from the remote file. This allows the use of '--timestamping' on subsequent invocations of wget. However, it is sometimes useful to base the local file's timestamp on when it was actually downloaded; for that purpose, the '--no-use-server-timestamps' option has been provided.

'-S'
'--server-response'

Print the headers sent by HTTP servers and responses sent by FTP servers.

'--spider'

When invoked with this option, Wget will behave as a Web spider, which means that it will not download the pages, just check that they are there. For example, you can use Wget to check your bookmarks:

wget --spider --force-html -i bookmarks.html
This feature needs much more work for Wget to get close to the functionality of real web spiders.

'-T seconds'
'--timeout=seconds'

Set the network timeout to seconds seconds. This is equivalent to specifying '--dns-timeout', '--connect-timeout', and '--read-timeout', all at the same time.

When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default timeout settings.

All timeout-related options accept decimal values, as well as subsecond values. For example, ‘0.1’ seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency.

'--dns-timeout=seconds'

Set the DNS lookup timeout to seconds seconds. DNS lookups that don’t complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.

'--connect-timeout=seconds'

Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.

'--read-timeout=seconds'

Set the read (and write) timeout to seconds seconds. The “time” of this timeout refers to idle time: if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted. This option does not directly affect the duration of the entire download.

Of course, the remote server may choose to terminate the connection sooner than this option requires. The default read timeout is 900 seconds.

'--limit-rate=amount'

Limit the download speed to amount bytes per second. The amount may be expressed in bytes, kilobytes with the 'k' suffix, or megabytes with the 'm' suffix. For example, '--limit-rate=20k' will limit the retrieval rate to 20 KB/s. This is useful when, for whatever reason, you don't want Wget to consume the entire available bandwidth.

This option allows the use of decimal numbers, usually in conjunction with power suffixes; for example, '--limit-rate=2.5k' is a legal value.

Note that Wget implements the limiting by sleeping the appropriate amount of time after a network read that took less time than specified by the rate. Eventually this strategy causes the TCP transfer to slow down to approximately the specified rate. However, it may take some time for this balance to be achieved, so don’t be surprised if limiting the rate doesn’t work well with very small files.
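
An illustrative invocation that caps the transfer at roughly 300 KB/s (placeholder URL):

wget --limit-rate=300k https://example.com/big.iso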

‘-W seconds’
'--wait=seconds'

Wait the specified number of seconds between retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of seconds, the time can be specified in minutes using the 'm' suffix, in hours using the 'h' suffix, or in days using the 'd' suffix.

Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to reasonably expect the network error to be fixed before the retry. The waiting interval specified by this function is influenced by '--random-wait', which see.

'--waitretry=seconds'

If you don’t want Wget to wait between every retrieval, but only between retries of failed downloads, you can use this option. Wget will use linear backoff, waiting 1 second after the first failure on a given file, then waiting 2 seconds after the second failure on that file, up to the maximum number of seconds you specify.

By default, Wget will assume a value of 10 seconds.
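
As an illustrative combination, the following waits two seconds between documents, but up to 30 seconds between retries of a failed download:

wget -w 2 --waitretry=30 -r https://example.com/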

'--random-wait'

Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for statistically significant similarities in the time between requests. This option causes the time between requests to vary between 0.5 and 1.5 * wait seconds, where wait was specified using the '--wait' option, in order to mask Wget's presence from such analysis.

A 2001 article in a publication devoted to development on a popular consumer platform provided code to perform this analysis on the fly. Its author suggested blocking at the class C address level to ensure automated retrieval programs were blocked despite changing DHCP-supplied addresses.

The '--random-wait' option was inspired by this ill-advised recommendation to block many unrelated users from a web site due to the actions of one.
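
An illustrative polite-mirroring call, where each pause varies between 0.5 and 1.5 times the one-second base wait:

wget --wait=1 --random-wait -r https://example.com/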

'--no-proxy'

Don't use proxies, even if the appropriate *_proxy environment variable is defined.

See Proxies, for more information about the use of proxies with Wget.

'-Q quota'
'--quota=quota'

Specify download quota for automatic retrievals. The value can be specified in bytes (default), kilobytes (with ‘k’ suffix), or megabytes (with ‘m’ suffix).

Note that quota will never affect downloading a single file. So if you specify 'wget -Q10k https://example.com/ls-lR.gz', all of ls-lR.gz will be downloaded. The same goes even when several URLs are specified on the command line. However, quota is respected when retrieving either recursively or from an input file. Thus you may safely type 'wget -Q2m -i sites'; the download will be aborted when the quota is exceeded.

Setting quota to 0 or to 'inf' unlimits the download quota.

'--no-dns-cache'

Turn off caching of DNS lookups. Normally, Wget remembers the IP addresses it looked up from DNS so it doesn’t have to repeatedly contact the DNS server for the same (typically small) set of hosts it retrieves from. This cache exists in memory only; a new Wget run will contact DNS again.

However, it has been reported that in some situations it is not desirable to cache host names, even for the duration of a short-running application like Wget. With this option Wget issues a new DNS lookup (more precisely, a new call to gethostbyname or getaddrinfo) each time it makes a new connection. Please note that this option will not affect caching that might be performed by the resolving library or by an external caching layer, such as NSCD.

If you don’t understand exactly what this option does, you probably won’t need it.
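
Illustrative use, forcing a fresh DNS lookup for every new connection during a recursive run:

wget --no-dns-cache -r https://example.com/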

'--restrict-file-names=modes'

Change which characters found in remote URLs must be escaped during generation of local filenames. Characters that are restricted by this option are escaped, i.e. replaced with '%HH', where 'HH' is the hexadecimal number that corresponds to the restricted character. This option may also be used to force all alphabetical cases to be either lower- or uppercase.

By default, Wget escapes the characters that are not valid or safe as part of file names on your operating system, as well as control characters that are typically unprintable. This option is useful for changing these defaults, perhaps because you are downloading to a non-native partition, or because you want to disable escaping of the control characters, or you want to further restrict characters to only those in the ASCII range of values.

The modes are a comma-separated set of text values. The acceptable values are 'unix', 'windows', 'nocontrol', 'ascii', 'lowercase', and 'uppercase'. The values 'unix' and 'windows' are mutually exclusive (one will override the other), as are 'lowercase' and 'uppercase'. Those last are special cases, as they do not change the set of characters that would be escaped, but rather force local file paths to be converted either to lower- or uppercase.

When 'unix' is specified, Wget escapes the character '/' and the control characters in the ranges 0-31 and 128-159. This is the default on Unix-like operating systems.

When 'windows' is given, Wget escapes the characters '\', '|', '/', ':', '?', '"', '*', '<', '>', and the control characters in the ranges 0-31 and 128-159. In addition, Wget in Windows mode uses '+' instead of ':' to separate host and port in local file names, and uses '@' instead of '?' to separate the query portion of the file name from the rest. Therefore, a URL that would be saved as 'www.xemacs.org:4300/search.pl?input=blah' in Unix mode would be saved as 'www.xemacs.org+4300/search.pl@input=blah' in Windows mode. This mode is the default on Windows.

If you specify 'nocontrol', then the escaping of the control characters is also switched off. This option may make sense when you are downloading URLs whose names contain UTF-8 characters, on a system which can save and display filenames in UTF-8 (some possible byte values used in UTF-8 byte sequences fall in the range of values designated by Wget as "controls").

The 'ascii' mode is used to specify that any bytes whose values are outside the range of ASCII characters (that is, greater than 127) shall be escaped. This can be useful when saving filenames whose encoding does not match the one used locally.
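
For example, this illustrative call applies Windows-safe escaping and lowercases the resulting local names:

wget --restrict-file-names=windows,lowercase https://example.com/Some%20Page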

‘-4’
'--inet4-only'
‘-6’
'--inet6-only'

Force connecting to IPv4 or IPv6 addresses. With '--inet4-only' or '-4', Wget will only connect to IPv4 hosts, ignoring AAAA records in DNS, and refusing to connect to IPv6 addresses specified in URLs. Conversely, with '--inet6-only' or '-6', Wget will only connect to IPv6 hosts and ignore A records and IPv4 addresses.

Neither option should normally be needed. By default, an IPv6-aware Wget will use the address family specified by the host's DNS record. If the DNS responds with both IPv4 and IPv6 addresses, Wget will try them in sequence until it finds one it can connect to. (Also see the '--prefer-family' option described below.)

These options can be used to deliberately force the use of IPv4 or IPv6 address families on dual-family systems, usually to aid debugging or to deal with broken network configuration. Only one of '--inet6-only' and '--inet4-only' may be specified at the same time. Neither option is available in Wget compiled without IPv6 support.
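
For instance, to test the IPv6 path of a dual-stack host explicitly (placeholder URL):

wget -6 https://example.com/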

'--prefer-family=none/IPv4/IPv6'

When given a choice of several addresses, connect to the addresses with the specified address family first. By default, the address order returned by DNS is used without change.

This avoids spurious errors and connect attempts when accessing hosts that resolve to both IPv6 and IPv4 addresses from IPv4 networks. For example, 'www.kame.net' resolves to '2001:200:0:8002:203:47ff:fea5:3085' and to '203.178.141.194'. When the preferred family is IPv4, the IPv4 address is used first; when the preferred family is IPv6, the IPv6 address is used first; if the specified value is none, the address order returned by DNS is used without change.

Unlike '-4' and '-6', this option doesn't inhibit access to any address family; it only changes the order in which the addresses are accessed. Also note that the reordering performed by this option is stable: it doesn't affect the order of addresses of the same family. That is, the relative order of all IPv4 addresses and of all IPv6 addresses remains intact in all cases.

Unlike "-4" and "-6", this parameter does not prohibit access to any address family, it changes the order of access to addresses. Also note that the reordering performed by this option is stable — it does not affect the order of addresses in the same address family. That is, the relative ordering of all IPv4 addresses and all IPv6 addresses remains intact in all cases.

'--retry-connrefused'

Consider “connection refused” a transient error and try again. Normally Wget gives up on a URL when it is unable to connect to the site because failure to connect is taken as a sign that the server is not running at all and that retries would not help. This option is for mirroring unreliable sites whose servers tend to disappear for short periods of time.

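Illustrative use when mirroring a flaky server, retrying up to 10 times even after refused connections:

wget --retry-connrefused --waitretry=60 -t 10 https://example.com/
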
'--user=user'
'--password=password'

Specify the username user and password password for both FTP and HTTP file retrieval. These parameters can be overridden using the '--ftp-user' and '--ftp-password' options for FTP connections and the '--http-user' and '--http-password' options for HTTP connections.

'--ask-password'

Prompt for a password for each connection established. Cannot be specified when '--password' is being used, because they are mutually exclusive.

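For example, an illustrative authenticated download that prompts for the password interactively instead of exposing it in the shell history:

wget --user=alice --ask-password https://example.com/protected/report.pdf
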
'--no-iri'

Turn off internationalized URI (IRI) support. Use '--iri' to turn it on. IRI support is activated by default.

You can set the default state of IRI support using the iri command in .wgetrc. That setting may be overridden from the command line.

'--local-encoding=encoding'

Force Wget to use encoding as the default system encoding. That affects how Wget converts URLs specified as arguments from locale to UTF-8 for IRI support.

Wget uses the nl_langinfo() function and then the CHARSET environment variable to get the locale. If this fails, ASCII is used.

You can set the default local encoding using the local_encoding command in .wgetrc. That setting may be overridden from the command line.

'--remote-encoding=encoding'

Force Wget to use encoding as the default remote server encoding. That affects how Wget converts URIs found in files from remote encoding to UTF-8 during a recursive fetch. This option is only useful for IRI support, for the interpretation of non-ASCII characters.

For HTTP, the remote encoding can be found in the HTTP Content-Type header and in the HTML Content-Type http-equiv meta tag.

You can set the default encoding using the remoteencoding command in .wgetrc. That setting may be overridden from the command line.

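An illustrative recursive fetch that declares both the local terminal encoding and the remote site's encoding for IRI handling:

wget --local-encoding=UTF-8 --remote-encoding=iso-8859-1 -r https://example.com/
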
'--unlink'

Force Wget to unlink the file instead of clobbering the existing file. This option is useful for downloading to a directory with hard links.
