| Commit message | Author | Age | Files | Lines |

* This file was removed and is no longer used.
* Fix typos.
* Switch from "oadm" to "oc adm" and fix a bug in binary sync.
* Found a bug syncing binaries to containerized hosts: if a symlink was
  pre-existing but pointed to the wrong destination, it would not be
  corrected. Switched to using "oc adm" instead of "oadm".
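A minimal sketch of the symlink fix, assuming Ansible's `file` module; with `force: yes` a pre-existing link that points at the wrong destination is replaced rather than left alone. Paths here are illustrative, not necessarily the ones the role uses:

```yaml
# Ensure the client symlink points at the freshly synced binary.
# force: yes corrects a link that already exists but points elsewhere.
- name: Ensure client symlink points at the synced binary
  file:
    src: /usr/local/bin/openshift   # illustrative path
    dest: /usr/local/bin/oc         # illustrative path
    state: link
    force: yes
```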
* Fix an error in commit 245fef16573757b6e691c448075d8564f5d569f4.
  As it turns out, this is the only place an RPM-based node can be
  restarted during upgrade. Restoring the restart, but making it
  conditional to avoid the two issues reported with out-of-sync node
  restarts.
* This looks to be causing a customer issue where some HA upgrades fail
  due to a missing EgressNetworkPolicy API. We update master RPMs but do
  not yet restart services, then restart the node service, which tries
  to talk to an API that does not yet exist (pending restart).
  Restarting the node here is very out of place and appears not to be
  required.
* It is invalid Ansible to use a `when` on an include that contains
  plays, as `when` cannot be applied to plays. An issue has been filed
  upstream for a better error message, or to get it working.
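A sketch of the pattern described above (file names are illustrative). A `when` on an include whose file contains plays cannot be evaluated, so the condition has to move onto task-level includes inside the plays instead:

```yaml
# Invalid: the included file contains plays, and `when` cannot be
# applied to plays:
#
#   - include: docker_upgrade_plays.yml
#     when: docker_upgrade | default(true) | bool
#
# Valid: apply the condition to a task-level include inside a play.
- hosts: nodes
  tasks:
  - include: docker_upgrade_tasks.yml
    when: docker_upgrade | default(true) | bool
```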
* This can fail with a transient "object has been modified" error
  asking you to retry your change against the latest version of the
  object. Allow up to three retries to see if we can get the change to
  take effect.
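The retry can be sketched with Ansible's `until`/`retries`/`delay` task keywords; the command shown is a hypothetical `oc` invocation, since the transient conflict usually succeeds on a later attempt:

```yaml
# Retry a change that can hit a transient
# "object has been modified" conflict (command is illustrative).
- name: Apply change, retrying on transient conflicts
  command: oc label node {{ openshift.node.nodename }} region=infra --overwrite
  register: label_result
  until: label_result.rc == 0
  retries: 3
  delay: 5
```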
* This improves the situation further and prevents configuration
  changes from accidentally triggering Docker restarts before we have
  evacuated nodes. In two places we now skip the role entirely, instead
  of the previous implementation, which only skipped upgrading the
  installed version (and thus did not catch config issues).
* Allow overriding the Docker 1.10 requirement for upgrade.
* Respect an explicit docker_version, and the use of
  docker_upgrade=False.
* Handlers normally trigger only at the end of the play, but by then we
  have just set our node schedulable again, resulting in it immediately
  being taken down again.
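The fix can be sketched with `meta: flush_handlers`, which runs any notified handlers immediately instead of at the end of the play (task names are illustrative). Without the flush, a restart handler would fire after the node is already schedulable again, taking it back down:

```yaml
# Run any pending restart handlers now, before the node is
# marked schedulable again.
- meta: flush_handlers

- name: Set node schedulable again
  command: >
    oc adm manage-node {{ openshift.node.nodename }} --schedulable=true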
* Previously we were setting schedulability to the state defined in the
  inventory, without regard to whether the node had been manually made
  schedulable or unschedulable. The right thing seems to be to record
  the state prior to upgrade and set it back afterward.
* Add a warning at the end of the 3.3 upgrade if pluginOrderOverride is
  found.
* Prevents the network egress bug causing node restart to fail during
  the 3.3 upgrade (even though a separate fix is incoming for this).
  The only catch is preventing the openshift_cli role, which requires
  Docker, from triggering a potential upgrade, which we still do not
  want at this point. To avoid this, we use the same variable to
  protect the installed Docker version as we use in pre.yml.
* In a parallel step prior to the real upgrade tasks, clear out all
  unused Docker images on all hosts. This should be relatively safe to
  interrupt, as no real upgrade steps have taken place yet. Once into
  the actual upgrade, we again clear all images, this time with force,
  and after stopping and removing all containers. Both rmi commands use
  a new and hopefully less error-prone command to do the removal, which
  should avoid the missed orphans we were hitting before. Added some
  logging around the current image count before and after this step;
  most of the messages are only printed if we are crossing the 1.10
  boundary, but one is not, just for additional information in your
  Ansible log.
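The two-phase cleanup can be sketched as shell tasks; these commands are illustrative, not necessarily the exact ones the playbook runs. The pre-upgrade pass removes only unused images (in-use ones fail harmlessly), while the in-upgrade pass removes containers first and then force-removes everything:

```yaml
# Phase 1 (pre-upgrade, safe to interrupt): remove unused images only.
- name: Remove all unused Docker images
  shell: docker rmi $(docker images -q)
  failed_when: false

# Phase 2 (during upgrade): stop and remove all containers,
# then force-remove all images.
- name: Remove all containers and force-remove all images
  shell: >
    docker rm -f $(docker ps -aq);
    docker rmi -f $(docker images -q)
  failed_when: false
```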
* This avoids the automatic image migration in Docker 1.10, which can
  take a very long time and can potentially cause RPM DB corruption.
* 1.3 / 3.3 Upgrades
* Refactored the 3.2 upgrade common files out to a path that does not
  indicate they are strictly for 3.2. The 3.3 upgrade then becomes a
  relatively small copy of the byo entry point, all calling the same
  code as the 3.2 upgrade. Thus far there are no known 3.3-specific
  upgrade tasks; in future we will likely want to allow hooks out to
  version-specific pre/upgrade/post tasks. Also fixes a bug where the
  handlers were not restarting node/openvswitch containers during
  upgrades, due to a change in Ansible 2+.
* Copy the openshift binary instead of using a wrapper script.
* Fix bugs with Origin 1.2 RPM-based upgrades.