Conflicts:
	lib/libalpm/dload.c
libfetch supports checking the mtime itself, so we do not need to do it
manually. When the databases were already up to date, initiating a connection
with fetchXGet and closing it right after with fetchIO_close took a very long
time (up to 10 minutes!) on some networks.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
(cherry picked from commit d7675e393ff3cecb5408c243898ebaae80c5988d)
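A minimal sketch of the idea, under the assumption that the installed libfetch
honours url->ims_time together with the "i" (If-Modified-Since) flag and
reports an unchanged file via FETCH_UNCHANGED; it is not the actual dload.c
change:

    #include <sys/stat.h>
    #include <fetch.h>

    /* Illustrative only: let libfetch do the mtime comparison instead of
     * opening a connection just to learn that the database is up to date. */
    static int fetch_if_newer(const char *remote, const char *localfile)
    {
        struct url *url;
        struct url_stat ust;
        struct stat st;
        fetchIO *dl;

        if((url = fetchParseURL(remote)) == NULL) {
            return -1;
        }
        if(stat(localfile, &st) == 0) {
            /* assumption: this libfetch honours ims_time for If-Modified-Since */
            url->ims_time = st.st_mtime;
        }

        dl = fetchXGet(url, &ust, "i");
        if(dl == NULL) {
            /* assumption: an unchanged file is reported as FETCH_UNCHANGED */
            int unchanged = (fetchLastErrCode == FETCH_UNCHANGED);
            fetchFreeURL(url);
            return unchanged ? 0 : -1;   /* 0 means already up to date */
        }

        /* ...read with fetchIO_read() and write out the new file here... */
        fetchIO_close(dl);
        fetchFreeURL(url);
        return 1;                        /* a newer copy was downloaded */
    }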
We had 10000 as our timeout value, assuming it was expressed in milliseconds.
Looking at the current code shows that assumption is false (the value is in
seconds), so reset it back to 10 seconds.
Addresses FS#15369.
Signed-off-by: Dan McGee <dan@archlinux.org>
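For context, a sketch of what the corrected setting looks like, assuming the
timeout in question is libfetch's global fetchTimeout (measured in seconds):

    #include <fetch.h>

    static void set_download_timeout(void)
    {
        /* assumption: the timeout being set is libfetch's global, which is
         * measured in seconds, not milliseconds */
        fetchTimeout = 10;
    }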
- fix one memleak if get_filename failed
- cleanup according to Joerg's feedback:
"url_for_string: If fetchParseURL returned successfully, you should always
have a scheme set. The logic for anonftp should only be needed for very
broken servers -- do you know of any such?
download_internal: Specifying 'p' is now a no-op -- passive FTP is tried by
default first, with a fall-back to active FTP."
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
[Dan: remove from pacman.conf and pacman.conf.5]
Signed-off-by: Dan McGee <dan@archlinux.org>
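A sketch along the lines of that feedback, using only the documented
fetchParseURL() / struct url / SCHEME_* definitions from fetch.h; it is not
the real url_for_string() body:

    #include <stdio.h>
    #include <string.h>
    #include <fetch.h>

    static struct url *parse_server_url(const char *str)
    {
        struct url *url = fetchParseURL(str);
        if(url == NULL) {
            fprintf(stderr, "url '%s' is invalid\n", str);
            return NULL;
        }
        /* on success fetchParseURL() always fills in url->scheme, so no
         * separate "missing scheme" fix-up pass is needed */
        if(strcmp(url->scheme, SCHEME_FTP) == 0) {
            /* libfetch tries passive FTP first and falls back to active FTP,
             * so no 'p' flag needs to be passed any more */
        }
        return url;
    }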
If /sbin is not in the PATH and sudo is used, ldconfig cannot be found, so
use the full /sbin/ldconfig path instead. The code already checked for the
existence of /sbin/ldconfig anyway.
Signed-off-by: Marc - A. Dahlhaus <mad@wol.de>
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
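A generic sketch of the approach (not the actual _alpm_ldconfig() code): call
ldconfig by its absolute path so it still works when /sbin is missing from
PATH, keeping the existence check that was already there:

    #include <stdlib.h>
    #include <unistd.h>

    static void run_ldconfig(void)
    {
        /* /sbin may not be in PATH (e.g. under sudo), so use the full path */
        if(access("/sbin/ldconfig", X_OK) == 0) {
            system("/sbin/ldconfig");
        }
    }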
I assume the loop never iterated more than once: the write location was not
updated at each loop iteration (buffer instead of buffer + nwritten), yet we
never had any reports of corrupted downloads.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
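A generic C sketch of the corrected pattern, independent of the exact I/O
primitives the real loop uses: the write position must advance by the number
of bytes already written on every iteration:

    #include <sys/types.h>
    #include <unistd.h>

    /* Write 'length' bytes from 'buffer' to 'fd', resuming after short writes. */
    static int write_all(int fd, const char *buffer, size_t length)
    {
        size_t total = 0;
        while(total < length) {
            ssize_t nwritten = write(fd, buffer + total, length - total);
            if(nwritten < 0) {
                return -1;
            }
            /* 'buffer + total', not plain 'buffer', is what the old loop
             * got wrong */
            total += (size_t)nwritten;
        }
        return 0;
    }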
libfetch supports checking the mtime itself, so we do not need to do it
manually. When the databases were already up to date, initiating a connection
with fetchXGet and closing it right after with fetchIO_close took a very long
time (up to 10 minutes!) on some networks.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
After commit 30c4d53ce5c16cbbb17a88fe1ad14faf53d91999, get_destfile and
get_tempfile are only used by the internal download code, so move these two
functions inside the #ifdef.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
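A sketch of the resulting layout; the guard macro name (HAVE_LIBFETCH) and the
helper bodies shown here are assumptions for illustration only:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #ifdef HAVE_LIBFETCH
    /* helpers only needed by the internal (libfetch-based) downloader */
    static char *get_destfile(const char *path, const char *filename)
    {
        char *dest = malloc(strlen(path) + strlen(filename) + 1);
        if(dest) {
            sprintf(dest, "%s%s", path, filename);
        }
        return dest;
    }

    static char *get_tempfile(const char *path, const char *filename)
    {
        char *temp = malloc(strlen(path) + strlen(filename) + strlen(".part") + 1);
        if(temp) {
            sprintf(temp, "%s%s.part", path, filename);
        }
        return temp;
    }
    #endif /* HAVE_LIBFETCH */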
'make distcheck' had issues with this one and reformatted it. In addition,
it found a fuzzy message, caused by an inadvertent msgid edit, which is now
fixed.
Signed-off-by: Dan McGee <dan@archlinux.org>
Thanks to Avramucz Peter <muczyjoe@gmail.com>
for translating all the scripts!
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
This allows a frontend to define its own download algorithm so that the
libfetch dependency can be dropped without resorting to an external process.
The callback is used if it is defined; otherwise the old behavior applies.
Signed-off-by: Sebastian Nowicki <sebnow@gmail.com>
[Dan: minor cleanups]
Signed-off-by: Dan McGee <dan@archlinux.org>
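A hypothetical illustration of such a callback from the front-end side; the
type name, signature and return-value convention are invented for this example
and are not necessarily what libalpm exposes:

    #include <time.h>

    /* invented callback type: fetch 'url' into 'localpath'; return 0 on
     * success, 1 if the file was already up to date, -1 on error */
    typedef int (*download_cb)(const char *url, const char *localpath,
                               time_t mtime_old, time_t *mtime_new);

    static int my_downloader(const char *url, const char *localpath,
                             time_t mtime_old, time_t *mtime_new)
    {
        /* e.g. shell out to curl, or use a built-in HTTP client */
        (void)url;
        (void)localpath;
        (void)mtime_old;
        (void)mtime_new;
        return -1;
    }

The front-end would then register my_downloader through whichever option
setter libalpm provides for the callback, instead of relying on the libfetch
code path.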
If the user switches from an unstable repo to a stable one, it is quite hard
to sync their system with the new repo (the user will see many "Local is
newer than stable" messages, and nothing more happens). That's why I
introduced -Suu, which treats a sync package as an upgrade if and only if its
version differs from the locally installed one.
I added a new pactest (sync104.py) to test this, and I updated the
documentation of -Su.
Signed-off-by: Nagy Gabor <ngaba@bibl.u-szeged.hu>
[Dan: slight doc reword]
Signed-off-by: Dan McGee <dan@archlinux.org>
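A sketch of the version test behind this behaviour, using the public
alpm_pkg_vercmp(); the helper itself is illustrative, not the patched sync
code:

    #include <alpm.h>

    static int is_sync_candidate(const char *syncver, const char *localver,
                                 int allow_downgrades)
    {
        int cmp = alpm_pkg_vercmp(syncver, localver);
        if(allow_downgrades) {
            return cmp != 0;   /* -Suu: any version difference counts */
        }
        return cmp > 0;        /* -Su: only strictly newer packages */
    }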
This happens, for example, when you install a new package and one of its
backup config files is already on the filesystem.
If the local file was different, it was saved to .pacorig, which is fine.
However, if the local file and the package file were identical, the package
file (temporarily extracted as .paccheck) was left behind on the system.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
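A sketch of the cleanup idea using the public alpm_compute_md5sum(); the real
add.c logic differs in its details, this only shows the "identical content,
so drop the temporary .paccheck file" step:

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <alpm.h>

    static void drop_paccheck_if_identical(const char *localfile,
                                           const char *paccheck)
    {
        char *h1 = alpm_compute_md5sum(localfile);
        char *h2 = alpm_compute_md5sum(paccheck);
        if(h1 && h2 && strcmp(h1, h2) == 0) {
            /* identical content: keep the existing file, drop the temp copy */
            unlink(paccheck);
        }
        free(h1);
        free(h2);
    }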
This fixes FS#15546.
Also fix the clumsy interface of unlink_file, which took an alpm_list_t that
only ever held a single element.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
A package can now replace a symlink to a directory (symdir->dir) with a real
directory without triggering file conflicts.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
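A small sketch of how the case in question can be detected, i.e. a path that
is itself a symlink but resolves to a directory:

    #include <sys/stat.h>

    /* Return 1 if 'path' is a symlink that resolves to a directory. */
    static int is_symlink_to_dir(const char *path)
    {
        struct stat lsbuf, sbuf;
        if(lstat(path, &lsbuf) != 0 || stat(path, &sbuf) != 0) {
            return 0;
        }
        return S_ISLNK(lsbuf.st_mode) && S_ISDIR(sbuf.st_mode);
    }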
When a package wants to replace a directory with a file, we check that all
files in that directory were owned by that package.
Additionally, pacman can now be more verbose when extraction of the symlink
(or file) fails. The patch to add.c looks more complex than it is; I just
moved and reindented code to handle cases 10 and 11 together.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
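A sketch of the ownership test using the public alpm_list helpers; the real
conflict check walks the filesystem and the package file list differently,
this only illustrates the "every path under the directory must belong to the
package" rule:

    #include <alpm_list.h>

    /* fs_entries: paths found on disk under the directory being replaced
     * pkg_files:  paths owned by the package (both are lists of char *) */
    static int dir_fully_owned(alpm_list_t *fs_entries, alpm_list_t *pkg_files)
    {
        alpm_list_t *i;
        for(i = fs_entries; i; i = alpm_list_next(i)) {
            const char *path = i->data;
            if(!alpm_list_find_str(pkg_files, path)) {
                return 0;   /* something in the directory is not ours */
            }
        }
        return 1;
    }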
This fixes FS#15294.
The code to run a command inside a chroot was refactored from the
_alpm_runscriptlet function to _alpm_run_chroot.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
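A generic sketch of running a command inside a chroot (fork, chroot, exec,
wait); the real _alpm_run_chroot helper does more than this minimal version:

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static int run_in_chroot(const char *root, const char *cmd)
    {
        int status;
        pid_t pid = fork();

        if(pid < 0) {
            return -1;
        }
        if(pid == 0) {
            /* child: enter the chroot and run the command through a shell */
            if(chroot(root) != 0 || chdir("/") != 0) {
                _exit(1);
            }
            execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
            _exit(1);
        }
        waitpid(pid, &status, 0);
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
    }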
See FS#14642 - this allows -Qs output to be fed back into pacman without
problems and without having to strip off the 'local/' prefix manually.
Signed-off-by: Dan McGee <dan@archlinux.org>
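A sketch of the accepted-input change: strip an optional 'local/' prefix from
query targets before the database lookup (the helper name is made up for the
example):

    #include <string.h>

    static const char *strip_local_prefix(const char *target)
    {
        if(strncmp(target, "local/", 6) == 0) {
            return target + 6;
        }
        return target;
    }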
See FS#13099. This makes sense especially for the pacman frontend, as we
show groups in the search output.
Signed-off-by: Dan McGee <dan@archlinux.org>
$ sudo pacman -S mc
Old output:
***********
:: mc conflicts with mc-mp. Remove mc-mp? [Y/n] y
...
(1/1) checking for file conflicts [################] 100%
(1/1) installing mc [################] 100%
New output:
***********
:: mc conflicts with mc-mp. Remove mc-mp? [Y/n] y
...
(1/1) checking for file conflicts [################] 100%
(1/1) removing mc-mp [################] 100%
(1/1) installing mc [################] 100%
Signed-off-by: Nagy Gabor <ngaba@bibl.u-szeged.hu>
Signed-off-by: Dan McGee <dan@archlinux.org>
This fixes FS#14899. When running an -Sp operation without servers
configured for a repository, we would segfault, so add an assert to the
backend method that returns the first server, preventing a null pointer
dereference.
In addition, add a new error code to libalpm that indicates we have no
servers configured for a repository. This makes -Sy and -S <package>
operations fail gracefully and helpfully when a repo is set up with no
servers, as the default mirrorlist in Arch is provided this way.
Signed-off-by: Dan McGee <dan@archlinux.org>
The main purpose of this function is to make our code more readable.
It frees the transaction-specific fields of pmpkg_t. (It is used when a
package is removed from the target list.)
Signed-off-by: Nagy Gabor <ngaba@bibl.u-szeged.hu>
Signed-off-by: Dan McGee <dan@archlinux.org>
There is apparently no need to handle the re-compression manually when
applying an xdelta patch in the case of bzip2 or xz.
Only gzip needs to be handled specially, to disable the timestamp with the
-n option.
After this patch, if xdelta gains xz support (a one-line patch), it will be
transparent on pacman's side.
Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
This flag indicates that the front-end will not call alpm_trans_commit(),
so the database needn't be locked. This is the first step toward fixing
FS#8905.
If this flag is set, alpm_trans_commit() does nothing and returns an error
(with the new error code PM_ERR_TRANS_NOT_LOCKED).
Signed-off-by: Nagy Gabor <ngaba@bibl.u-szeged.hu>
Signed-off-by: Dan McGee <dan@archlinux.org>
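A conceptual sketch of the library-side guard; only the PM_ERR_TRANS_NOT_LOCKED
name is taken from the message above, the flag name and the numeric values
are placeholders:

    /* sketch only: the numeric values below are placeholders, and the flag
     * name is an assumption */
    #define PM_TRANS_FLAG_NOLOCK    (1 << 0)
    #define PM_ERR_TRANS_NOT_LOCKED 1

    static int trans_commit(int flags)
    {
        if(flags & PM_TRANS_FLAG_NOLOCK) {
            /* nothing was locked because the front-end promised not to commit */
            return PM_ERR_TRANS_NOT_LOCKED;
        }
        /* ...normal commit path... */
        return 0;
    }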