path: root/lib/libalpm/dload.c
Commit message (Author, Date)
* Fix whitespace and other formatting issues (Jason St. John, 2013-11-15)
  This commit:
  -- replaces space-based indents with tabs per the coding standards
  -- removes extraneous whitespace (e.g. extra spaces between function args)
  -- adds missing braces for a one-line if statement
  Signed-off-by: Jason St. John <jstjohn@purdue.edu>
* Remove spaces between the opening "if" and the opening parenthesis (Jason St. John, 2013-11-08)
  Signed-off-by: Jason St. John <jstjohn@purdue.edu>
  Signed-off-by: Allan McRae <allan@archlinux.org>
* dload: avoid renaming files downloaded via sync operations (Christian Hesse, 2013-09-18)
  If the server redirects from ${repo}.db to ${repo}.db.tar.gz, pacman gets this
  wrong: it saves to the new filename and then fails when accessing ${repo}.db.
  We need the remote filename only when downloading remote files with pacman's
  -U operation.
  This introduces a new field 'trust_remote_name' to the payload. If set, pacman
  downloads to the filename given by the server. The field trust_remote_name is
  set in alpm_fetch_pkgurl().
  Fixes FS#36791 ([pacman] downloads to wrong filename with redirect).
  [dave: remove redundant assignment leading to memory leak]
  Signed-off-by: Allan McRae <allan@archlinux.org>
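  A minimal sketch of the idea described above; the struct layout and helper
  are illustrative only, not the actual libalpm code:

      /* Only honor the server-provided filename when the caller opted in,
       * e.g. for -U <url> downloads via alpm_fetch_pkgurl(). */
      struct dload_payload_sketch {
          char *remote_name;      /* filename the caller asked for */
          int trust_remote_name;  /* nonzero: follow the server's name */
      };

      static const char *choose_destname(const struct dload_payload_sketch *p,
              const char *server_name)
      {
          if(p->trust_remote_name && server_name) {
              return server_name;    /* -U: accept the redirected filename */
          }
          return p->remote_name;     /* sync: keep ${repo}.db as requested */
      }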
* Do not refer to FlySpray numbers (Allan McRae, 2013-08-21)
  These references to bug numbers assume we will forever be using that bug
  tracker. It is better to properly comment the code instead (which was done in
  almost all cases anyway).
  Signed-off-by: Allan McRae <allan@archlinux.org>
* Hide unused parameter warnings when building without libcurl (Allan McRae, 2013-07-22)
  Signed-off-by: Allan McRae <allan@archlinux.org>
* do not check error from close(2) (Dave Reisner, 2013-07-05)
  On operating systems we support, the behavior is always such that the kernel
  will do the right thing as far as invalidating the file descriptor, regardless
  of the eventual return value. Therefore, potentially looping and calling close
  multiple times is wrong. At best, we call close again on an invalid FD and
  throw a spurious EBADF error. At worst, we might close an FD which doesn't
  belong to us when a multi-threaded application opens its own file descriptor
  between iterations of the loop.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Allan McRae <allan@archlinux.org>
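  A sketch contrasting the retry loop argued against above with a single
  unchecked close(); the helper name is illustrative:

      #include <unistd.h>

      /* Problematic pattern: retrying close() on EINTR can, in a threaded
       * program, close a descriptor that was reopened in the meantime:
       *     while(close(fd) == -1 && errno == EINTR) ;
       * Preferred: call close() exactly once and ignore the result; the
       * kernel invalidates the descriptor either way. */
      static void close_once(int fd)
      {
          (void)close(fd);
      }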
* dload: don't download sig if package is found in cache (Dave Reisner, 2013-02-24)
  Avoids the segfault seen in FS#33911.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Allan McRae <allan@archlinux.org>
* dload: pass back the effective URL to callers of _alpm_download (Dave Reisner, 2013-01-29)
  I suspect that eventually we're going to end up returning a pointer to an
  allocated struct to describe the download result, but that's for another patch
  when the need arises...
  Fixes FS#33508.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Allan McRae <allan@archlinux.org>
* Relax requirement of what constitutes a dead connection (LANGLOIS Olivier PIS -EXT, 2013-01-29)
  Users have hit issues behind corporate firewalls that initially throttle
  downloads to ~1B/sec.
  Signed-off-by: Olivier Langlois <olivier.pis.langlois@transport.alstom.com>
  Signed-off-by: Allan McRae <allan@archlinux.org>
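  In libcurl, stall detection of this kind is tuned through the low-speed
  options; a minimal sketch with illustrative thresholds, not necessarily the
  values chosen by this patch:

      #include <curl/curl.h>

      /* Declare the transfer dead only if it stays below LOW_SPEED_LIMIT
       * bytes/sec for LOW_SPEED_TIME seconds, so firewalls that throttle
       * the start of a download do not trigger spurious aborts. */
      static void set_stall_detection(CURL *curl)
      {
          curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
          curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 10L);
      }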
* dload: avoid showing progress bars on some redirects (Dave Reisner, 2013-01-17)
  RFC 2616 doesn't forbid a 301 or 302 response from having a body, and servers
  exist in the wild that show this behavior. In order to prevent pacman from
  showing a progress bar when we aren't actually downloading a package (and
  merely following one of these pain in the butt redirects), capture the server
  response code in the response header, rather than waiting to peel it off the
  handle after the download has finished.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Reported-by: Alexandre Filgueira <alexfilgueira@cinnarch.com>
  Signed-off-by: Allan McRae <allan@archlinux.org>
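  A sketch of capturing the status code from a header callback so the progress
  callback can stay quiet while a redirect body streams past; this assumes a
  callback installed via CURLOPT_HEADERFUNCTION/CURLOPT_HEADERDATA and is not
  the exact parsing used in dload.c:

      #include <stdio.h>
      #include <string.h>

      /* Records the HTTP status as soon as the status line arrives. Header
       * data is not NUL-terminated, so copy a bounded prefix before parsing. */
      static size_t parse_status_cb(char *buffer, size_t size, size_t nitems,
              void *userdata)
      {
          long *respcode = userdata;
          size_t total = size * nitems, len = total;
          char line[128];
          long code;

          if(len >= sizeof(line)) {
              len = sizeof(line) - 1;
          }
          memcpy(line, buffer, len);
          line[len] = '\0';

          if(sscanf(line, "HTTP/%*s %ld", &code) == 1) {
              *respcode = code;   /* e.g. 301/302 while a redirect is followed */
          }
          return total;
      }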
* Update copyright year for 2013 (Allan McRae, 2013-01-03)
  Signed-off-by: Allan McRae <allan@archlinux.org>
* Plug various minor memory leaks (Andrew Gregory, 2012-12-14)
  Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
  Signed-off-by: Allan McRae <allan@archlinux.org>
* fix -Wshadow warnings as reported by gcc 4.4.3 (Dave Reisner, 2012-05-20)
  Apparently gcc 4.7 has decided that -Wshadow warnings aren't worth reporting
  anymore even with the flag enabled. These were found on an Ubuntu 10.04
  install.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Merge branch 'maint' (Dan McGee, 2012-04-12)
* Fix issues with uninitialized variable value usage (Dan McGee, 2012-04-09)
  Detected by clang scan-build static code analyzer.
    * Don't attempt to free an uninitialized gpgme key variable
    * Initialize answer variable before asking frontend a question
    * Pass by reference instead of value if uninitialized fields are possible
      in download signal handler code
    * Ensure we never call strlen() on NULL payload->remote_name value
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Merge branch 'maint' (Dan McGee, 2012-03-16)
  Conflicts:
      lib/libalpm/sync.c
* dload: reset payload filename members before download (Dave Reisner, 2012-03-14)
  To avoid conflicts on reusing a payload after a failed download, ensure that
  we reset the filename hints in the payload struct prior to the download
  operation.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Merge branch 'maint' (Dan McGee, 2012-02-20)
  Conflicts:
      contrib/pacsysclean.in
      src/pacman/conf.h
* Update SIGPIPE signal handler comment (Dan McGee, 2012-02-14)
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Print error message when to-be-downloaded file cannot be created (Nagy Gabor, 2012-02-20)
  It can happen that the to-be-downloaded file cannot be created in cachedir.
  For example, I am an -Sup user, and it is comfortable to set --cachedir to
  /mnt/pendrive, which is a FAT filesystem, so files like
  capseo-1:0.3-2-i686.pkg.tar.xz cannot be downloaded there.

  Before this patch, pacman didn't give clear output about what happened when
  the download code could not create the necessary file. This can be confusing
  with -Su. An example output:
  ***
  $ sudo pacman -S capseo bochs --cachedir /c/TEMP
  resolving dependencies...
  looking for inter-conflicts...

  Targets (2): bochs-2.4.6-1 capseo-1:0.3-2

  Total Download Size: 0.61 MiB
  Total Installed Size: 2.61 MiB

  Proceed with installation? [Y/n]
  :: Retrieving packages from extra...
  warning: failed to retrieve some files from extra
  bochs-2.4.6-1-i686 611.5 KiB 118K/s 00:05 [------------------] 97%
  error: failed to commit transaction (unexpected error)
  Errors occurred, no packages were upgraded.
  ***

  After the patch, pacman will give a more informative error message (and
  pm_errno is set properly):
  ***
  error: could not open file '/c/TEMP/capseo-1:0.3-2-i686.pkg.tar.xz.part': Invalid argument
  error: failed to commit transaction (failed to retrieve some files)
  ***

  Unfortunately, the "could not open file" error message is printed for every
  mirror (that can be dozens of lines), which is ugly, but at least
  informative... Without modifying the download logic (for example, by
  introducing a -2 return value for _alpm_download() to indicate giving up),
  this ugliness cannot be eliminated.
  Signed-off-by: Nagy Gabor <ngaba@bibl.u-szeged.hu>
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Merge branch 'maint' (Dan McGee, 2012-01-23)
  Conflicts:
      lib/libalpm/diskspace.c
      src/pacman/util.h
* lib/dload: give uniform naming to curl CB functions (Dave Reisner, 2012-01-23)
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
* lib/dload: enforce usage of TCP keepalives (Dave Reisner, 2012-01-23)
  This is particularly important in the case of FTP control connections, which
  may be closed by rogue NAT/firewall devices detecting idle connections on
  larger transfers which may take 5-10+ minutes.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
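  With a sufficiently recent libcurl this can be requested through the TCP
  keepalive options; a sketch with illustrative timings (the original patch may
  have configured the socket differently):

      #include <curl/curl.h>

      /* Send keepalive probes so idle FTP control connections are not
       * silently dropped by NAT/firewall devices during long transfers. */
      static void enable_tcp_keepalive(CURL *curl)
      {
          curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
          curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 60L);
          curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 60L);
      }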
* Update copyright on changed files since beginning of year (Dan McGee, 2012-01-18)
  Signed-off-by: Dan McGee <dan@archlinux.org>
* fetch_url: look for files in cache before downloading (Dave Reisner, 2012-01-18)
  We lost this logic somewhere between the libfetch and libcurl transition, as
  it existed in the internal downloader, but was pulled back only into the sync
  workflow. Add a helper function that will let us check for existence in the
  filecache prior to calling the downloader.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
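  A generic sketch of such a cache lookup; the function name and use of plain
  POSIX calls are illustrative, not the alpm filecache API:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      /* 'cachedirs' is a NULL-terminated array of directory paths. Return
       * an allocated path if 'filename' already exists in one of them, or
       * NULL so the caller proceeds to download. */
      static char *find_in_cachedirs(const char *filename,
              const char *const *cachedirs)
      {
          for(; *cachedirs; cachedirs++) {
              size_t len = strlen(*cachedirs) + strlen(filename) + 2;
              char *path = malloc(len);
              if(!path) {
                  return NULL;
              }
              snprintf(path, len, "%s/%s", *cachedirs, filename);
              if(access(path, R_OK) == 0) {
                  return path;   /* cache hit: no download needed */
              }
              free(path);
          }
          return NULL;
      }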
* include config.h via Makefiles (Dave Reisner, 2011-12-21)
  Ensures that config.h is always ordered correctly (first) in the includes.
  Also means that new source files get this for free without having to remember
  to add it. We opt for -imacros over -include as it's more portable, and the
  added constraint by -imacros doesn't bother us for config.h.
  This also touches the HACKING file to remove the explicit mention of config.h
  as part of the includes.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Merge branch 'maint' (Dan McGee, 2011-12-07)
* Enforce signature download size limit on -U <url> operations (Dan McGee, 2011-12-05)
  We had a 16 KiB limit on database signatures, we should do the same here too
  to have a slight sanity check, even if we can't do so for the package itself
  yet.
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Add OPEN() and CLOSE() util macros (Dan McGee, 2011-11-01)
  These wrap the normal open() and close() low-level I/O calls and ensure EINTR
  is handled correctly.
  Signed-off-by: Dan McGee <dan@archlinux.org>
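  An illustrative EINTR-retrying open wrapper in the spirit of such a macro,
  not necessarily the exact definition in pacman's util.h:

      #include <errno.h>
      #include <fcntl.h>

      /* Retry open() while it fails with EINTR, leaving the resulting
       * descriptor (or -1) in 'fd'. */
      #define OPEN(fd, path, flags) \
          do { (fd) = open((path), (flags)); } while((fd) == -1 && errno == EINTR)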
* dload: remove redundant conditional (Dave Reisner, 2011-10-27)
  Placing the strdup after the first NULL check assures that we continue with
  payload->remote_name defined.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
* dload: chmod tempfiles to respect umask (Dave Reisner, 2011-10-27)
  Dan: fix mask calculation, add it to the success/fail block instead.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
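  A sketch of the underlying technique: mkstemp() creates files as 0600, so the
  mode is widened according to the process umask (helper name illustrative):

      #include <stdlib.h>
      #include <sys/stat.h>
      #include <sys/types.h>
      #include <unistd.h>

      static int create_tempfile(char *path_template)
      {
          int fd = mkstemp(path_template);
          if(fd >= 0) {
              mode_t mask = umask(0);      /* umask can only be read by setting it */
              umask(mask);                 /* restore it immediately */
              fchmod(fd, 0666 & ~mask);    /* match ordinary file creation */
          }
          return fd;
      }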
* Add more logging to download code (Dan McGee, 2011-10-24)
  This adds a logger to the CURLE_OK case so we can always know the return code
  if it was >= 400, and debug log it regardless. Also adjust another logger to
  use the cURL error message directly, as well as use fstat() when we have an
  open file handle rather than stat().
  Signed-off-by: Dan McGee <dan@archlinux.org>
* curl_gethost() potential bug fixups (Dan McGee, 2011-10-13)
  This is in the realm of "probably not going to happen", but if someone were
  to translate "disk" to a string longer than 256 characters, we would have a
  smashed/corrupted stack due to our unchecked strcpy() call. Rework the
  function to always length-check the value we copy into the hostname buffer,
  and do it with memcpy rather than the more cumbersome and unnecessary
  snprintf. Finally, move the magic 256 value into a constant and pass it into
  the function which is going to get inlined anyway.
  Signed-off-by: Dan McGee <dan@archlinux.org>
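  A sketch of a length-checked copy of that kind; the constant and helper names
  are illustrative:

      #include <string.h>

      #define HOSTNAME_SIZE 256

      /* Copy 'len' bytes into a fixed-size buffer only if they fit,
       * refusing (instead of smashing the stack) when they do not.
       * Usage: char host[HOSTNAME_SIZE];
       *        copy_hostname(host, HOSTNAME_SIZE, start, len); */
      static int copy_hostname(char *buf, size_t bufsiz,
              const char *src, size_t len)
      {
          if(len >= bufsiz) {
              return 1;           /* too long: caller treats this as an error */
          }
          memcpy(buf, src, len);
          buf[len] = '\0';
          return 0;
      }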
* dload: unhook error buffer after transfer finishes (Dave Reisner, 2011-10-10)
  Similar to what we did in edd9ed6a, disconnect the relationship with our
  stack allocated error buffer from the curl handle. Just as an FTP connection
  might have some network chatter on teardown causing the progress callback to
  be triggered, we might also hit an error condition that causes curl to write
  to our (now out of scope) error buffer.
  I'm unable to reproduce FS#26327, but I have a suspicion that this should fix
  it.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
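  A sketch of the shape of such a fix: the error buffer is stack storage in the
  download function, so it is detached before that frame returns (names
  illustrative):

      #include <curl/curl.h>

      static void download_sketch(CURL *curl)
      {
          char errbuf[CURL_ERROR_SIZE] = "";

          curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
          curl_easy_perform(curl);
          /* ... inspect errbuf while it is still in scope ... */

          /* detach before errbuf goes out of scope, so late teardown
           * chatter cannot write through a dangling pointer */
          curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, NULL);
      }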
* move prevprogress onto payload handle (Dave Reisner, 2011-09-29)
  This is a poor place for it, and it will likely move again in the future, but
  it's better to have it here than as a static variable. Initialization of this
  variable is now no longer necessary as it's zeroed on creation of the payload
  struct.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Refactor download payload reset and free (Dan McGee, 2011-09-28)
  This was done to squash a memory leak in the sync database download code.
  When we downloaded a database and then reused the payload struct, we could
  find ourselves calling get_fullpath() for the signatures and overwriting
  non-freed values we had left over from the database download.
  Refactor the payload_free function into a payload_reset function that we can
  call that does NOT free the payload itself, so we can reuse payload structs.
  This also allows us to move the payload to the stack in some call paths,
  relieving us of the need to alloc space.
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Initialize cURL library on first use (Dan McGee, 2011-09-28)
  Rather than always initializing it on any handle creation. There are several
  frontend operations (search, info, etc.) that never need the download code,
  so spending time initializing this every single time is a bit silly. This
  makes it a bit more like the GPGME code init path.
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Fix memory leak in download payload->remote_name (Dan McGee, 2011-09-28)
  In the sync code, we explicitly allocated a string for this field, while in
  the dload code itself it was filled in with a pointer to another string. This
  led to a memory leak in the sync download case. Make remote_name non-const
  and always explicitly allocate it. This patch ensures this as well as uses
  malloc + snprintf (rather than calloc) in several codepaths, and eliminates
  the only use of PATH_MAX in the download code.
  Signed-off-by: Dan McGee <dan@archlinux.org>
* dload: avoid using memrchr (Dave Reisner, 2011-09-18)
  This function doesn't exist on OSX. Since there aren't any other candidates
  in alpm for which this function would make sense to use, simply replace the
  function call with a loop that does the equivalent.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
* dload: remove user:pass@ definition from hostname (Dave Reisner, 2011-09-18)
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
* dload: provide optional netrc support (Dave Reisner, 2011-09-11)
  If ~/.netrc exists and has credentials for the hostname requested in a
  download, they will be provided in an http auth request. This can still be
  overridden by explicitly declaring user:pass in the URL.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
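  In libcurl this behavior maps onto the optional netrc mode; a minimal sketch:

      #include <curl/curl.h>

      /* Consult ~/.netrc for credentials when present; credentials given
       * explicitly in the URL still take precedence. */
      static void enable_netrc(CURL *curl)
      {
          curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_OPTIONAL);
      }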
* dload: use intmax_t when printing off_t (Dan McGee, 2011-09-06)
  This works for both 32-bit and 64-bit platforms.
  Signed-off-by: Dan McGee <dan@archlinux.org>
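  A one-function sketch of the pattern: off_t has no dedicated printf length
  modifier, so it is cast to intmax_t and printed with %jd:

      #include <inttypes.h>
      #include <stdio.h>
      #include <sys/types.h>

      static void print_size(off_t size)
      {
          printf("%jd bytes\n", (intmax_t)size);
      }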
* dload: abstract dload_interrupted reasons (Dave Reisner, 2011-09-06)
  This gives us some amount of room to grow in case we ever find another reason
  that we might return with an error from the progress callback.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
* dload: improve debug output (Dave Reisner, 2011-09-06)
  We lost some of this output in the fetch->curl conversion, but I also noticed
  in FS#25852 that we just lack some of this useful information along the way.
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
* Fix possible mismatched type with several curl arguments (Dan McGee, 2011-08-28)
  After commit 2e7d0023150664, we use off_t rather than long variables. Use the
  _LARGE variants of the methods to indicate we are passing off_t sized
  variables, and cast using (curl_off_t) accordingly.
  Signed-off-by: Dan McGee <dan@archlinux.org>
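  A sketch of what such calls look like; the particular options and parameter
  names are illustrative:

      #include <curl/curl.h>
      #include <sys/types.h>

      /* Use the *_LARGE option variants and cast to curl_off_t so large
       * values are not truncated on 32-bit builds. */
      static void set_size_options(CURL *curl, off_t initial_size, off_t max_size)
      {
          curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE,
                  (curl_off_t)initial_size);
          curl_easy_setopt(curl, CURLOPT_MAXFILESIZE_LARGE,
                  (curl_off_t)max_size);
      }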
* Finish large file download attack prevention (Dan McGee, 2011-08-25)
  This handles the no Content-Length header problem as stated in the comments
  of FS#23413. We add a quick check to the callback that will force an abort if
  the downloaded data exceeds the payload size, and then check for this error
  in the post-download cleanup code.
  Signed-off-by: Dan McGee <dan@archlinux.org>
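  A sketch of such a check in a classic libcurl progress callback, where a
  nonzero return aborts the transfer; the struct and names are illustrative:

      #include <curl/curl.h>

      struct size_guard {
          curl_off_t expected;   /* known size of the file being fetched */
      };

      static int check_size_cb(void *clientp, double dltotal, double dlnow,
              double ultotal, double ulnow)
      {
          const struct size_guard *guard = clientp;
          (void)dltotal; (void)ultotal; (void)ulnow;
          if(guard->expected > 0 && (curl_off_t)dlnow > guard->expected) {
              return 1;   /* abort: more data than the payload should have */
          }
          return 0;
      }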
* Use off_t rather than double where possible (Dan McGee, 2011-08-25)
  Beautiful of libcurl to use floating point types for what are never
  fractional values. We can do better, and we usually want these values in
  their integer form anyway.
  Signed-off-by: Dan McGee <dan@archlinux.org>
* dload: prevent need to copy struct in mask_signal() (Dan McGee, 2011-08-22)
  Since we store this directly in the download function, just rework
  mask_signal() to take a pointer to a location to store the original.
  Signed-off-by: Dan McGee <dan@archlinux.org>
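  A sketch of the calling convention described above, using sigaction(); the
  function names mirror the commit but the bodies are illustrative:

      #include <signal.h>
      #include <stddef.h>

      /* Install 'handler' for 'sig', saving the previous disposition into
       * caller-provided storage instead of returning a struct by value. */
      static void mask_signal(int sig, void (*handler)(int),
              struct sigaction *origaction)
      {
          struct sigaction newaction;
          newaction.sa_handler = handler;
          sigemptyset(&newaction.sa_mask);
          newaction.sa_flags = 0;
          sigaction(sig, &newaction, origaction);
      }

      static void unmask_signal(int sig, struct sigaction *origaction)
      {
          sigaction(sig, origaction, NULL);
      }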
* dload: extract tempfile creation to its own function (Dave Reisner, 2011-08-22)
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>
* dload: move (un)masking of signals to separate functions (Dave Reisner, 2011-08-22)
  Signed-off-by: Dave Reisner <dreisner@archlinux.org>
  Signed-off-by: Dan McGee <dan@archlinux.org>