[uninstall] Remove excluder packages
You will lose hours of your life if you don't do this.
Deprecate node 'evacuation' with 'drain'
* https://trello.com/c/TeaEB9fX/307-3-deprecate-node-evacuation
Add master config hook for 3.4 upgrade and fix facts ordering
hook run.
* Removed unneeded rules
* Moved etcd rule to conditional based on usage of embedded etcd
https://bugzilla.redhat.com/show_bug.cgi?id=1386329
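A hedged sketch of the kind of conditional role dependency this describes; the role name, variable, and port here are illustrative assumptions rather than the repository's actual definitions:

    # meta/main.yml style sketch (names and port assumed)
    dependencies:
    - role: os_firewall
      os_firewall_allow:
      - service: etcd embedded
        port: 4001/tcp
      # pull in the etcd client-port rule only when etcd is embedded in the master
      when: openshift_master_embedded_etcd | default(true) | bool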
* Added checks to `make ci` for yaml linting
* Modified y(a)ml files to pass lint checks
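For context, a hedged sketch of the sort of yamllint configuration a `make ci` target could invoke (e.g. via `yamllint -c .yamllint .`); this is not the repository's actual lint setup:

    # .yamllint sketch (assumed values)
    extends: default
    rules:
      line-length:
        max: 120
        level: warning
      document-start:
        present: true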
Fix metricsPublicURL only being set correctly on first master.
The problem was caused by facts not being set for that master. To fix it, this
patch cleans up the calculation of metricsPublicURL in general. Because this
value is templated into the master config file by openshift_master, we now
define these facts more clearly in openshift_master_facts and add a dependency
on that role to openshift_metrics.
The calculation of the default sub-domain is also changed to remove it from
system facts (neither value is really a fact about the system) and to use
plain variables instead.
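A rough sketch of the wiring this describes, with assumed variable and role names rather than the exact ones used in the repository:

    # roles/openshift_master_facts/defaults/main.yml (sketch; names assumed)
    openshift_master_default_subdomain: "router.default.svc.cluster.local"
    openshift_hosted_metrics_public_url: "https://hawkular-metrics.{{ openshift_master_default_subdomain }}/hawkular/metrics"

    # roles/openshift_metrics/meta/main.yml (sketch): depending on the facts role
    # ensures the URL is defined before it is templated into the master config.
    dependencies:
    - role: openshift_master_facts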
Drop 3.2 upgrade playbooks.
Silence warnings when using some commands directly
etcd_upgrade: Simplify package installation
Scheduler upgrades
- do not upgrade predicates if openshift_master_scheduler_predicates is
  defined
- do not upgrade priorities if openshift_master_scheduler_priorities is
  defined
- do not upgrade predicates/priorities unless they match known previous
  default configs
- output a WARNING to the user if predicates/priorities are not updated
  during install
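A minimal sketch of the guard described in the list above; the variable names for the current and known-default values are assumptions, not the playbook's actual ones:

    # Plan a predicates upgrade only when the user has not overridden them and
    # the existing config matches a known previous default.
    - set_fact:
        upgrade_scheduler_predicates: true
      when:
      - openshift_master_scheduler_predicates is not defined
      - current_scheduler_predicates in known_default_scheduler_predicates

    - debug:
        msg: "WARNING: scheduler predicates were not updated; they are user-defined or do not match a known default"
      when: not upgrade_scheduler_predicates | default(false)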
Scheduler var fix
upgrade_control_plane.yml: systemd_units.yaml needs the master facts
|
| | | | |
|
|/ / /
| | |
| | |
| | |
| | |
| | |
| | | |
inventory_hostname
When using a dynamic inventory, inventory_hostname isn't guaranteed to be
usable. We should use openshift.common.hostname, which already copes with this.
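Illustrative only (openshift.common.hostname is the fact populated by openshift_facts):

    # Prefer the gathered fact over inventory_hostname when templating config.
    - name: Show the hostname that will be used in generated configs
      debug:
        msg: "{{ openshift.common.hostname | default(ansible_fqdn) }}"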
Fixes #2738
In 3.3 one of our services lays down a systemd drop-in that configures
Docker networking to use lbr0. In 3.4 this has changed, but the file must be
cleaned up manually by us.
However, after removing the file Docker requires a restart. This had big
implications, particularly in containerized environments where an upgrade is
already a fragile series of package upgrades and service restarts.
To avoid double Docker restarts, and thus double service restarts in
containerized environments, this change does the following:
- Skip the restart during the Docker upgrade, if one is required; we will
  restart on our own later.
- Skip containerized service restarts when we upgrade the services
  themselves.
- Cleanly shut down all containerized services.
- Restart Docker (always; previously this only happened if Docker needed an
  upgrade).
- Ensure all containerized services are restarted.
- Restart rpm node services (always).
- Mark the node schedulable again.
At the end of this process, docker0 should be back on the system.
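A minimal sketch of the cleanup step, assuming a hypothetical drop-in path; the real playbook coordinates this with the containerized service shutdown and restarts described above:

    - name: Remove the obsolete lbr0 Docker networking drop-in (path assumed)
      file:
        path: /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf
        state: absent
      register: sdn_dropin

    - name: Reload systemd so the removed drop-in is forgotten
      command: systemctl daemon-reload
      when: sdn_dropin.changed

    - name: Restart Docker exactly once, after containerized services are stopped
      service:
        name: docker
        state: restarted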
Update scheduler defaults
We now require Ansible >= 2.2.0. Update the version-checking playbook to
reflect this change.
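A sketch of such a check using 2.2-era syntax (the repository's actual version-checking playbook may differ):

    - hosts: localhost
      connection: local
      gather_facts: no
      tasks:
      - name: Abort on unsupported Ansible versions
        fail:
          msg: "openshift-ansible requires Ansible >= 2.2.0, found {{ ansible_version.full }}"
        when: ansible_version.full | version_compare('2.2.0', '<')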
Remove duplicate when key
Fix rare failure to deploy new registry/router after upgrade.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The router/registry update and re-deploy was recently reordered to
immediately follow the control plane upgrade, right before we proceed to the
node upgrade.
In some situations (small or single-host clusters) it appears possible that
the deployer pods are still running when the node in question is evacuated
for upgrade. When the deployer pod dies, the deployment is marked failed and
the router/registry continue running the old version, despite the deployment
config having been updated correctly.
This change re-orders the router/registry upgrade to follow the node upgrade.
For a separate control plane upgrade, however, the router/registry update
still occurs at the end, because the router/registry logically belong in a
control plane upgrade and the user will presumably not launch the node
upgrade so quickly as to trigger an evacuation of the node in question.
The workaround for this problem, when it does occur, is simply to run:
oc deploy docker-registry --latest
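The resulting ordering, sketched with illustrative file names (not the repository's actual layout):

    # full-cluster upgrade entry point (sketch)
    - include: upgrade_control_plane.yml
    - include: upgrade_nodes.yml
    # the router/registry are redeployed only after the nodes are upgraded, so a
    # deployer pod cannot be killed by the evacuation of the node running it
    - include: post_control_plane.yml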
Fixes Bug 1395945
Fix invalid embedded etcd fact in etcd upgrade playbook.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
We were getting a different failure here complaining that 'openshift' was not
in the facts, because facts had not been loaded for the first master during
the playbook run. However, this check was recently used in
upgrade_control_plane and should be more reliable.
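A sketch of the more reliable, group-based style of check; the fact name set here is an assumption:

    # etcd is embedded in the master when no dedicated etcd hosts are configured,
    # so this does not depend on facts gathered from the first master.
    - set_fact:
        etcd_is_embedded: "{{ groups.oo_etcd_to_config | default([]) | length == 0 }}"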
lhuard1A/fix_list_after_create_on_libvirt_and_openstack
Fix the list done after cluster creation on libvirt and OpenStack
Since 82449c6, the `list.yml` playbooks have been using cloud-provider-specific
variables to find the IPs of the VMs.
Those “cloud provider specific” variables are the ones provided by the dynamic
inventories.
But there was a problem when the `list.yml` playbooks were invoked from the
`launch.yml` ones: in that case, the inventory does not come from the dynamic
inventory scripts, but from the `add_host` done inside `launch_instances.yml`.
Whereas the GCE and AWS `launch_instances.yml` correctly added the variables
used by `list.yml` to their `add_host` calls, libvirt and OpenStack were
missing them.
Fixes #2856
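A sketch of the kind of `add_host` call involved; the variable names are illustrative assumptions about what `list.yml` expects:

    - name: Register the new VM in the in-memory inventory with the IPs list.yml reads
      add_host:
        name: "{{ item.name }}"
        groups: "tag_environment-{{ cluster_env | default('dev') }}"
        ansible_ssh_host: "{{ item.public_ip }}"
        openshift_public_ip: "{{ item.public_ip }}"
        openshift_ip: "{{ item.private_ip }}"
      with_items: "{{ created_vms }}"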