| Commit message | Author | Age | Files | Lines |
- do not upgrade predicates if openshift_master_scheduler_predicates is
defined
- do not upgrade priorities if openshift_master_scheduler_priorities is
defined
- do not upgrade predicates/priorities unless they match known previous
default configs
- output a WARNING to the user if predicates/priorities are not updated during
install
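
The guard conditions above can be sketched in Ansible roughly as follows; this is a minimal illustration, and all task and variable names other than openshift_master_scheduler_predicates are hypothetical, not the playbook's actual code:

```yaml
# Hypothetical sketch: only upgrade predicates when the user has not
# overridden them AND the current config matches a known prior default.
- name: Decide whether scheduler predicates may be upgraded
  set_fact:
    upgrade_predicates: >-
      {{ openshift_master_scheduler_predicates is not defined
         and current_predicates in known_default_predicates }}

- name: Warn when predicates are left untouched
  debug:
    msg: "WARNING: scheduler predicates not updated; they do not match a known default."
  when: not upgrade_predicates | bool
```

The same shape would apply to priorities via openshift_master_scheduler_priorities.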
In 3.3 one of our services lays down a systemd drop-in for configuring
Docker networking to use lbr0. In 3.4, this has been changed but the
file must be cleaned up manually by us.
However, after removing the file, Docker requires a restart. This had big
implications, particularly in containerized environments, where upgrade is a
fragile series of upgrades and service restarts.
To avoid double docker restarts, and thus double service restarts in
containerized environments, this change does the following:
- Skip restart during docker upgrade, if it is required. We will restart
on our own later.
- Skip containerized service restarts when we upgrade the services
themselves.
- Clean shutdown of all containerized services.
- Restart Docker (always; previously this happened only if it needed an
upgrade).
- Ensure all containerized services are restarted.
- Restart rpm node services. (always)
- Mark node schedulable again.
At the end of this process, docker0 should be back on the system.
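
In Ansible terms, the single-restart flow described above might look roughly like this; a sketch only, with task and variable names that are illustrative rather than the playbook's real ones:

```yaml
# Hypothetical sketch: stop containerized services cleanly, restart
# Docker exactly once, then bring everything back up.
- name: Cleanly stop all containerized services
  service:
    name: "{{ item }}"
    state: stopped
  loop: "{{ containerized_service_names }}"

- name: Restart Docker unconditionally so docker0 is recreated
  service:
    name: docker
    state: restarted

- name: Start containerized services again
  service:
    name: "{{ item }}"
    state: started
  loop: "{{ containerized_service_names }}"
```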
Fixes Bug 1395945
Fix invalid embedded etcd fact in etcd upgrade playbook.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
We were getting a different failure here, complaining that openshift was not
in the facts, because we had not loaded facts for the first master during the
playbook run. However, this check was used recently in upgrade_control_plane
and should be more reliable.
Merge admission plugin configs
Move the values in kube_admission_plugin_config up one level per
the new format from 1.3:
"The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved
and merged into admissionConfig.pluginConfig."
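
As a sketch of the move described above (the plugin name and its settings are placeholders, not values from this change):

```yaml
# Old, pre-1.3 location of plugin config:
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      ExamplePlugin:
        configuration:
          apiVersion: v1
          kind: DefaultAdmissionConfig

# New 1.3 location: the entries are moved up one level and merged
# into the top-level admissionConfig.pluginConfig.
admissionConfig:
  pluginConfig:
    ExamplePlugin:
      configuration:
        apiVersion: v1
        kind: DefaultAdmissionConfig
```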
containerized.
The Ansible package module will call the correct package manager for the
underlying OS.
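
For example (the package name here is illustrative), one task works on both yum- and dnf-based hosts:

```yaml
# The generic package module delegates to yum, dnf, apt, etc.
# depending on the managed host's OS.
- name: Install etcd using whichever package manager the host provides
  package:
    name: etcd
    state: present
```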
etcd upgrade playbooks
On Fedora we just blindly upgrade to the latest.
On RHEL we do stepwise upgrades: 2.0, 2.1, 2.2, 2.3, 3.0.
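
A hedged sketch of the stepwise approach, in modern Ansible syntax; the included file and loop variable names are hypothetical:

```yaml
# Upgrade etcd one supported step at a time on RHEL. Fedora would
# instead jump straight to the latest available version.
- name: Perform stepwise etcd upgrades
  include_tasks: upgrade_etcd_step.yml
  loop: ['2.0', '2.1', '2.2', '2.3', '3.0']
  loop_control:
    loop_var: etcd_target_version
```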
This variable is referenced in the systemd unit templates; this seems like the
easiest and most consistent fix.
Reconcile role bindings for jenkins pipeline during upgrade.
See https://github.com/openshift/origin/issues/11170 for more info.
Bug 1393663 - Failed to upgrade v3.2 to v3.3
upgrade.
Don't upgrade etcd on backup operations
Fixes Bug 1393187
Fixes BZ1393187
Fix HA etcd upgrade when facts cache has been deleted.
The simplest way to reproduce this issue is to attempt an upgrade after
removing /etc/ansible/facts.d/openshift.fact. The actual cause in the field is
not entirely known, but critically it is possible for embedded_etcd to default
to true, causing the etcd fact lookup to check the wrong file and fail
silently, resulting in no etcd_data_dir fact being set.
Revert openshift.node.nodename changes
This reverts commit 1f2276fff1e41c1d9440ee8b589042ee249b95d7.
Fix and reorder control plane service restart.
This was missed in the standalone upgrade control plane playbook. However, it
also looks to be out of order: we should restart before reconciling and
upgrading nodes. As such, the restart was moved directly into the common
control plane upgrade code and placed before reconciliation.
This file was removed and is no longer used.
Fix typos
Switch from "oadm" to "oc adm" and fix bug in binary sync.
Found a bug syncing binaries to containerized hosts: if a symlink was
pre-existing but pointed to the wrong destination, it would not be corrected.
Also switched to using oc adm instead of oadm.
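
The symlink fix can be illustrated with Ansible's file module; the paths below are examples, not the playbook's actual ones. The key is force, which replaces a pre-existing link even when it points somewhere else:

```yaml
# Without force, an existing symlink pointing at the wrong target
# would be left in place; force: yes replaces it.
- name: Ensure the oc symlink points at the synced binary
  file:
    src: /usr/local/bin/openshift
    dest: /usr/local/bin/oc
    state: link
    force: yes
```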
Error in commit 245fef16573757b6e691c448075d8564f5d569f4. As it turns out,
this is the only place an rpm-based node can be restarted during upgrade.
Restoring the restart, but making it conditional to avoid the two issues
reported with out-of-sync node restarts.
This looks to be causing a customer issue where some HA upgrades fail due to
a missing EgressNetworkPolicy API. We update master rpms without restarting
services yet, but then restart the node service, which tries to talk to an
API that does not yet exist (pending restart). Restarting the node here is
very out of place and appears not to be required.
It is invalid Ansible to use a "when" on an include that contains plays, as
the conditional cannot be applied to plays. An issue was filed upstream for a
better error, or to get it working.
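
The invalid pattern looks roughly like this; the file and variable names are illustrative:

```yaml
# Invalid: upgrade_nodes.yml contains plays, and a `when` condition
# can only be applied to tasks, not to plays.
- include: upgrade_nodes.yml
  when: not skip_node_upgrade | bool
```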
This can fail with a transient "object has been modified" error asking
you to re-try your changes on the latest version of the object.
Allow up to three retries to see if we can get the change to take
effect.
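
A sketch of the retry pattern; the command and names are illustrative, not the playbook's actual code:

```yaml
# Retry the change a few times to ride out transient
# "object has been modified" conflicts.
- name: Apply the change, retrying on conflict
  command: oc apply -f /tmp/resource.yml
  register: apply_result
  until: apply_result.rc == 0
  retries: 3
  delay: 5
```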
This improves the situation further and prevents configuration changes from
accidentally triggering Docker restarts before we've evacuated nodes. Now, in
two places, we skip the role entirely, instead of the previous implementation,
which only skipped upgrading the installed version (and did not catch config
issues).