path: root/playbooks/common/openshift-cluster/upgrades/rpm_upgrade.yml
Commit log for this file (message, author, date, files changed, lines -/+):
* Remove debugging statements and pause module (Scott Dodson, 2017-09-22, 1 file, -3/+0)
  How on earth did the bot merge this? The upgrade test should've stalled indefinitely.
* Default openshift_pkg_version to full version-release during upgrades (Scott Dodson, 2017-09-20, 1 file, -2/+4)
  Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1490677. The versioning scheme for 3.7 pre-releases has changed: all versions are now 3.7.0 and only the release field is incremented between builds, e.g. 3.7.0-0.124.0 upgrades to 3.7.0-0.125.0. If we know we are performing an upgrade and no specific package version has been requested, defer the defaulting of openshift_pkg_version to the upgrade playbooks, which set it to the available version including the release.
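A minimal sketch of that defaulting, not the actual diff: query the repo for the full version-release and pin openshift_pkg_version to it only when the user has not set one. The repoquery invocation, the package name, and the leading-dash convention are assumptions for illustration.

# Sketch only: look up the available version-release and default openshift_pkg_version to it.
- name: Query the available version-release of the OpenShift package
  command: repoquery --queryformat '%{version}-%{release}' atomic-openshift
  register: avail_pkg
  changed_when: false

- name: Default openshift_pkg_version to the full version-release during upgrade
  set_fact:
    # Leading '-' (assumed convention) so the value can be appended directly to a package name.
    openshift_pkg_version: "-{{ avail_pkg.stdout }}"
  when: openshift_pkg_version is not defined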
* Perform package upgrades in one transaction (Scott Dodson, 2017-05-08, 1 file, -22/+34)
  Without this we were pulling in unbounded dependencies and upgrading to the latest version available in a repo.
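To illustrate the single-transaction approach (a sketch under assumed package names and the openshift_pkg_version convention above), one yum task with a list of pinned packages lets the resolver see everything at once, so no dependency can drag a package past the requested version-release:

# Sketch only: all packages upgraded together in one yum transaction.
- name: Upgrade OpenShift packages in a single transaction
  yum:
    name:
      - "atomic-openshift{{ openshift_pkg_version | default('') }}"
      - "atomic-openshift-master{{ openshift_pkg_version | default('') }}"
      - "atomic-openshift-node{{ openshift_pkg_version | default('') }}"
      - "atomic-openshift-sdn-ovs{{ openshift_pkg_version | default('') }}"
      - "atomic-openshift-clients{{ openshift_pkg_version | default('') }}"
    state: present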
* Upgrade specific rpms instead of just master/node. (Devan Goodwin, 2017-03-27, 1 file, -2/+21)
  Dependencies for these, particularly the SDN package, can cause the entire transaction to jump to a newer OpenShift than we requested if something newer is available in the repositories. By being specific about multiple packages, we avoid this problem and get the version we actually require.
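For contrast, a sketch of the per-package style this commit describes (later consolidated into the single transaction shown above): each rpm is named explicitly with the pinned version so the SDN dependency cannot pull the others forward. Package names and the version variable are illustrative assumptions.

# Sketch only: explicit, versioned upgrade tasks for each rpm.
- name: Upgrade master rpm to the requested version
  yum:
    name: "atomic-openshift-master{{ openshift_pkg_version }}"
    state: present

- name: Upgrade sdn-ovs rpm to the requested version
  yum:
    name: "atomic-openshift-sdn-ovs{{ openshift_pkg_version }}"
    state: present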
* Cleanup ovs file and restart docker on every upgrade. (Devan Goodwin, 2016-11-30, 1 file, -4/+0)
  In 3.3 one of our services lays down a systemd drop-in for configuring Docker networking to use lbr0. In 3.4 this has changed, but the file must be cleaned up manually by us. However, after removing the file docker requires a restart. This has big implications, particularly in containerized environments where upgrade is a fragile series of upgrades and service restarts. To avoid double docker restarts, and thus double service restarts in containerized environments, this change does the following:
  - Skip the restart during docker upgrade, if one is required; we will restart on our own later.
  - Skip containerized service restarts when we upgrade the services themselves.
  - Cleanly shut down all containerized services.
  - Restart Docker (always; previously this only happened if it needed an upgrade).
  - Ensure all containerized services are restarted.
  - Restart rpm node services (always).
  - Mark the node schedulable again.
  At the end of this process, docker0 should be back on the system.
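A rough sketch of the cleanup and restart ordering described in the list above, with the drop-in path and service name as assumptions; the real playbook also wraps the docker restart with the containerized service shutdown and startup steps.

# Sketch only: remove the obsolete drop-in, then restart docker so it no longer applies.
- name: Remove the lbr0 docker networking drop-in
  file:
    path: /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf   # assumed path
    state: absent

- name: Restart docker after removing the drop-in
  systemd:
    name: docker
    state: restarted
    daemon_reload: yes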
* Refactor to use Ansible package module (Russell Teague, 2016-11-17, 1 file, -2/+3)
  The Ansible package module will call the correct package manager for the underlying OS.
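As a sketch of what that looks like in a task (package name and version variable are illustrative), the generic package module dispatches to yum, dnf, or whatever backend the host uses, so the task no longer hard-codes a package manager:

# Sketch only: OS-agnostic package module instead of a yum-specific task.
- name: Upgrade the node package via the host's package manager
  package:
    name: "origin-node{{ openshift_pkg_version | default('') }}"
    state: present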
* Resume restarting node after upgrading node rpms. (Devan Goodwin, 2016-10-14, 1 file, -0/+4)
  Fixes an error introduced in commit 245fef16573757b6e691c448075d8564f5d569f4. As it turns out, this is the only place an rpm-based node can be restarted during upgrade. Restore the restart, but make it conditional to avoid the two issues reported with out-of-sync node restarts.
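A sketch of a restart made conditional along those lines, with the variable and service names as assumptions; the restart only fires on rpm-based (non-containerized) nodes:

# Sketch only: restart the node service after the rpm upgrade on rpm hosts only.
- name: Restart node service after upgrading node rpms
  service:
    name: origin-node
    state: restarted
  when: not (openshift_is_containerized | default(false) | bool)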
* Stop restarting node after upgrading master rpms. (Devan Goodwin, 2016-10-12, 1 file, -3/+0)
  This looks to be causing a customer issue where some HA upgrades fail due to a missing EgressNetworkPolicy API. We update the master rpms but do not restart master services yet, then restart the node service, which tries to talk to an API that does not yet exist (pending the master restart). Restarting the node here is very out of place and appears not to be required.
* Introduce 1.3/3.3 upgrade path. (Devan Goodwin, 2016-07-25, 1 file, -0/+10)
  Refactored the 3.2 upgrade common files out to a path that does not indicate they are strictly for 3.2. The 3.3 upgrade then becomes a relatively small copy of the byo entry point, all calling the same code as the 3.2 upgrade. Thus far there are no known 3.3-specific upgrade tasks. In the future we will likely want to allow hooks out to version-specific pre/upgrade/post tasks. Also fixes a bug where the handlers were not restarting nodes/openvswitch containers during upgrades, due to a change in Ansible 2+.