Use of yum and repoquery outputs an additional warning when a newer version of
subscription-manager is combined with an older version of yum (as on RHEL 7.1).
Installing or upgrading a newer docker can pull that subscription-manager in,
causing problems with older versions of ansible and its yum module, as well as
with any use of repoquery/yum commands in our playbooks. This change explicitly
checks for the problem using repoquery and fails early if it is found. The
check runs early in both config and upgrade.
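
A minimal sketch of such a pre-flight check, assuming Ansible tasks of roughly
this shape (the task names, the exact repoquery invocation, and the failure
condition are illustrative assumptions, not the playbook's actual
implementation):

```yaml
# Run repoquery once, early, and abort if its output is polluted by the
# subscription-manager warning described above.
- name: Check that repoquery runs cleanly
  command: repoquery --plugins --qf '%{version}' yum
  register: repoquery_check
  changed_when: false
  failed_when: false

- name: Fail early on yum / subscription-manager incompatibility
  fail:
    msg: >
      repoquery produced warnings; the installed subscription-manager appears
      to be too new for this version of yum. Resolve this before running the
      config or upgrade playbooks.
  when: repoquery_check.stderr | default('') != ''
```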

Reconcile role bindings for jenkins pipeline during upgrade.

See https://github.com/openshift/origin/issues/11170 for more info.
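
As a rough illustration of what reconciling role bindings during an upgrade
looks like, a hedged sketch (the client binary fact and the flags shown are
assumptions for this example):

```yaml
# Reconcile cluster role bindings after the control plane upgrade so newer
# defaults (such as those needed by the jenkins pipeline) are applied.
- name: Reconcile cluster role bindings
  command: >
    {{ openshift.common.client_binary }} adm policy
    reconcile-cluster-role-bindings --additive-only=true --confirm
  run_once: true
```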

Bug 1393663 - Failed to upgrade v3.2 to v3.3

upgrade.

Don't upgrade etcd on backup operations

Fixes BZ 1393187.

Fix HA etcd upgrade when facts cache has been deleted.

The simplest way to reproduce this issue is to attempt an upgrade after
removing /etc/ansible/facts.d/openshift.fact. The actual cause in the field is
not entirely known, but critically it is possible for embedded_etcd to default
to true, causing the etcd fact lookup to check the wrong file and fail
silently, which results in no etcd_data_dir fact being set.
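
A simplified sketch of the kind of fix this implies, assuming the data
directory is derived explicitly instead of read from a cached fact (the paths
and fact names below are assumptions):

```yaml
# Pick the etcd data dir based on whether etcd is embedded, rather than
# trusting a fact that silently fails to populate when the local facts cache
# has been removed.
- name: Set etcd data dir
  set_fact:
    etcd_data_dir: "{{ '/var/lib/origin/openshift.local.etcd'
                       if openshift.master.embedded_etcd | default(false) | bool
                       else '/var/lib/etcd/' }}"
```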

Revert openshift.node.nodename changes

This reverts commit 1f2276fff1e41c1d9440ee8b589042ee249b95d7.

Prior to RHEL 7.2, curl did not properly negotiate up the TLS protocol, so
force it to use tlsv1.2. Fixes bug 1390869.
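
For illustration only, forcing the protocol from an Ansible task might look
like the following (the health-check URL, port fact, and CA path are
placeholders):

```yaml
# Pass --tlsv1.2 explicitly so the older curl on RHEL 7.1 does not negotiate
# down to a protocol the API server rejects.
- name: Check master API health with TLS 1.2 forced
  command: >
    curl --silent --tlsv1.2 --cacert /etc/origin/master/ca.crt
    https://{{ openshift.common.hostname }}:{{ openshift.master.api_port }}/healthz
  register: api_health
  changed_when: false
```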

Bug 1388016 - The insecure-registry address was removed during upgrade

existing /etc/sysconfig/docker.
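
A hedged sketch of preserving an existing setting rather than overwriting it
(the variable name and grep-based approach are assumptions, not the playbooks'
actual mechanism):

```yaml
# Capture any INSECURE_REGISTRY line already present so the upgrade can
# re-apply it instead of writing a fresh file that drops it.
- name: Read existing insecure registry configuration
  shell: grep '^INSECURE_REGISTRY=' /etc/sysconfig/docker || true
  register: existing_insecure_registry
  changed_when: false
```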

Fix and reorder control plane service restart.

This restart was missed in the standalone control plane upgrade playbook. It
also looks to be out of order: we should restart before reconciling and
upgrading nodes. The restart has therefore been moved directly into the common
control plane upgrade code and placed before reconciliation.
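
A sketch of the ordering described above, assuming the usual split
api/controllers services in HA setups (the service names and HA variable are
assumptions):

```yaml
# Restart the control plane first; reconciliation and node upgrades follow.
- name: Restart master services
  service:
    name: "{{ openshift.common.service_type }}-master-{{ item }}"
    state: restarted
  with_items:
    - api
    - controllers
  when: openshift_master_ha | default(false) | bool
```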

This file was removed and is no longer used.

Fix typos

Drop pacemaker restart logic.

Pacemaker clusters are no longer supported, and in some cases bugs in this
logic were causing upgrade failures.

Switch from "oadm" to "oc adm" and fix bug in binary sync.

Found a bug when syncing binaries to containerized hosts: if a symlink already
existed but pointed to the wrong destination, it was not corrected. Also
switched to using oc adm instead of oadm.
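
The symlink half of the fix can be illustrated with Ansible's file module,
which corrects a link that exists but points at the wrong target (the paths
below are placeholders):

```yaml
# Ensure the client symlink points at the freshly synced binary, even when a
# stale link from a previous sync already exists.
- name: Ensure oc symlink is correct
  file:
    src: /usr/local/bin/openshift
    dest: /usr/local/bin/oc
    state: link
    force: yes
```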

Template with_items for upstream ansible-2.2 compat.
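
The change this refers to is the move from bare variable names to explicitly
templated lists, which Ansible 2.2 deprecates; a generic before/after (the
variable and service names are placeholders):

```yaml
# Before (deprecated in Ansible 2.2):  with_items: master_services
# After: template the variable explicitly.
- name: Restart master services
  service:
    name: "{{ item }}"
    state: restarted
  with_items: "{{ master_services }}"
```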

[logging] Use inventory variables rather than facts

Fixes an error introduced in commit 245fef16573757b6e691c448075d8564f5d569f4.
As it turns out, this is the only place an rpm-based node can be restarted
during upgrade. This restores the restart but makes it conditional, to avoid
the two issues reported with out-of-sync node restarts.
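
A rough sketch of a conditional restart of that kind (both guard conditions
are assumptions for illustration):

```yaml
# Restart the node service only on RPM-based hosts, and only when the caller
# has not opted out of rolling node restarts.
- name: Restart node service
  service:
    name: "{{ openshift.common.service_type }}-node"
    state: restarted
  when:
    - not openshift.common.is_containerized | bool
    - openshift_rolling_restart_nodes | default(true) | bool
```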

update handling of use_dnsmasq

This looks to be causing a customer issue where some HA upgrades fail due to a
missing EgressNetworkPolicy API. We update the master rpms and do not restart
services yet, but then restart the node service, which tries to talk to an API
that does not yet exist (pending the master restart). Restarting the node here
is very out of place and appears not to be required.

Changes for Nuage HA

frontends/backends.

3.4 Upgrade Improvements

It is not valid Ansible to use a when on an include that contains plays, as
the conditional cannot be applied to plays. An issue has been filed upstream
for a better error, or to get this working.
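
To make the limitation concrete, a sketch with placeholder file and variable
names:

```yaml
# Problematic (per the note above): a `when` on an include whose file contains
# plays cannot be applied to those plays.
#
#   - include: upgrade_control_plane.yml
#     when: do_control_plane_upgrade | default(true) | bool
#
# Workable alternative: keep the conditional on includes of task files (or on
# the tasks themselves), where `when` is valid.
- include: upgrade_control_plane_tasks.yml
  when: do_control_plane_upgrade | default(true) | bool
```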

This can fail with a transient "object has been modified" error asking you to
retry your changes against the latest version of the object. Allow up to three
retries to see if we can get the change to take effect.
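
In Ansible terms the retry looks roughly like this (the command shown is a
placeholder for whichever reconcile step hits the conflict):

```yaml
# Retry up to three times when the API returns the transient
# "object has been modified" conflict.
- name: Reconcile cluster roles
  command: >
    {{ openshift.common.client_binary }} adm policy
    reconcile-cluster-roles --additive-only=true --confirm
  register: reconcile_out
  until: reconcile_out.rc == 0
  retries: 3
  delay: 5
  run_once: true
```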

This improves the situation further and prevents configuration changes from
accidentally triggering docker restarts before we've evacuated nodes. In two
places we now skip the role entirely, instead of the previous implementation,
which only skipped upgrading the installed version (and therefore did not
catch config-triggered restarts).
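
A minimal sketch of skipping the role outright (the host group, role name, and
variable are assumptions):

```yaml
# Apply the docker role conditionally so that, during the pre-evacuation
# phase, configuration changes cannot trigger a docker restart.
- hosts: oo_nodes_to_upgrade
  roles:
    - role: docker
      when: not (skip_docker_role | default(false) | bool)
```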