How on earth did the bot merge this? The upgrade test should've stalled
indefinitely.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1490677
The versioning scheme for 3.7 pre-releases has changed: all versions are
now 3.7.0, and the release field is incremented on each build, i.e.
3.7.0-0.124.0 upgrades to 3.7.0-0.125.0. If we know we're performing an
upgrade and the user hasn't requested a specific package version, defer
the defaulting of openshift_pkg_version until the upgrade playbooks, and
there set it to the available version, including the release.
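A minimal sketch of the deferred defaulting, assuming a repoquery lookup
and illustrative task names (neither is the commit's actual code):

    # Illustrative only: during upgrade, discover the full version-release
    # available in the repos and use it as the default openshift_pkg_version.
    - name: Query the available version, including the release
      command: repoquery --queryformat '%{version}-%{release}' atomic-openshift
      register: available_pkg
      changed_when: false

    - name: Default openshift_pkg_version to the full version-release
      set_fact:
        openshift_pkg_version: "-{{ available_pkg.stdout }}"
      when: openshift_pkg_version is not defined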
Without this we were pulling in unbounded dependencies and upgrading to
the latest version available in a repo.
Dependencies of these, particularly the SDN package, can cause the
entire transaction to jump to a newer openshift than we requested, if
something newer is available in the repositories. By being specific
about the version for multiple packages, we avoid this problem and get
the version we actually require.
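A hedged sketch of what pinning multiple packages in one transaction can
look like (the package names and version variable are assumptions for
illustration, not taken from the commit):

    # Illustrative only: pin every related package to the same
    # version-release in a single transaction so dependency resolution
    # cannot drift to a newer openshift.
    - name: Install node packages at the requested version
      yum:
        name:
          - "atomic-openshift-node{{ openshift_pkg_version }}"
          - "atomic-openshift-sdn-ovs{{ openshift_pkg_version }}"
          - "atomic-openshift-clients{{ openshift_pkg_version }}"
        state: present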
In 3.3 one of our services lays down a systemd drop-in that configures
Docker networking to use lbr0. In 3.4 this has changed, but the file
must be cleaned up manually by us.
However, after removing the file Docker requires a restart. This has big
implications, particularly in containerized environments, where an
upgrade is a very fragile series of package upgrades and service
restarts.
To avoid double Docker restarts, and thus double service restarts in
containerized environments, this change does the following:
- Skip the restart during the Docker upgrade, if one is required; we
will restart on our own later.
- Skip containerized service restarts when we upgrade the services
themselves.
- Cleanly shut down all containerized services.
- Restart Docker (always; previously this happened only if Docker needed
an upgrade).
- Ensure all containerized services are restarted.
- Restart rpm node services (always).
- Mark the node schedulable again.
At the end of this process, docker0 should be back on the system.
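A simplified sketch of the resulting ordering (the service names and the
containerized flag are assumptions for illustration; the real playbooks
carry more steps):

    # Illustrative only: one clean shutdown, a single Docker restart,
    # then bring services back, instead of restarting Docker twice.
    - name: Stop containerized services
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - atomic-openshift-node
        - openvswitch
      when: openshift_is_containerized | bool

    - name: Restart Docker
      service:
        name: docker
        state: restarted

    - name: Start containerized services
      service:
        name: "{{ item }}"
        state: started
      with_items:
        - atomic-openshift-node
        - openvswitch
      when: openshift_is_containerized | bool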
The Ansible package module will call the correct package manager for the
underlying OS.
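For reference, a minimal example of the module (the package name here is
arbitrary):

    # The package module dispatches to yum, dnf, apt, etc. for the host OS.
    - name: Ensure etcd is installed
      package:
        name: etcd
        state: present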
Fixes an error in commit 245fef16573757b6e691c448075d8564f5d569f4.
As it turns out, this is the only place an rpm-based node can be
restarted during upgrade. Restoring the restart, but making it
conditional to avoid the two issues reported with out-of-sync node
restarts.
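A hedged sketch of what a conditional restart can look like (the guard
variable is a hypothetical stand-in, not the exact condition from this
commit):

    # Illustrative only: restart the rpm-based node service only when the
    # play explicitly allows it, avoiding out-of-sync restarts.
    - name: Restart node service
      service:
        name: atomic-openshift-node
        state: restarted
      when:
        - not openshift_is_containerized | bool
        - openshift_node_restart_allowed | default(false) | bool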
This looks to be causing a customer issue where some HA upgrades fail
due to a missing EgressNetworkPolicy API. We update the master rpms, we
don't restart master services yet, but then restart the node service,
which tries to talk to an API that does not yet exist (pending restart).
Restarting the node here is very out of place and appears not to be
required.
Refactored the 3.2 upgrade common files out to a path that does not
indicate they are strictly for 3.2.
The 3.3 upgrade then becomes a relatively small copy of the byo entry
point, all calling the same code as the 3.2 upgrade.
Thus far there are no known 3.3-specific upgrade tasks. In the future we
will likely want to allow hooks out to version-specific pre/upgrade/post
tasks.
Also fixes a bug where the handlers were not restarting
node/openvswitch containers during upgrades, due to a change in Ansible
2+.
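The commit does not say which Ansible 2+ change broke the handlers; as
general background, one well-known pitfall is that notify matches
handler names literally, so a handler name that relies on a variable can
silently never fire. A minimal hedged sketch of the safe pattern (task
and service names are illustrative):

    # Illustrative only: notify a handler by a static, literal name.
    - name: Update node configuration
      template:
        src: node.yaml.j2
        dest: /etc/origin/node/node-config.yaml
      notify: restart openvswitch

    # handlers:
    - name: restart openvswitch
      service:
        name: openvswitch
        state: restarted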