Commit messages

So that the excluder is disabled and reset within the scope of each of
those included plays, in addition to the overall playbook.
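
A minimal sketch of the pattern (role and file names here are
illustrative, not the repository's actual ones): each play disables the
excluder up front and resets it when it finishes, instead of relying on
a single disable/reset pair around the entire playbook:

    # Illustrative only: per-play excluder handling.
    - name: Upgrade control plane
      hosts: masters
      pre_tasks:
      - include: disable_excluder.yml          # hypothetical task file
      tasks:
      - include: upgrade_control_plane_tasks.yml
      post_tasks:
      - include: reset_excluder.yml            # hypothetical task file
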
This was done far into the process, potentially leaving the user in a
difficult situation if they had not considered that they were running
the upgrade playbook on a host that would be restarted. Instead, check
the configuration and which host we're running on in pre-upgrade, and
allow the user to abort before making any substantial changes.

This is a step towards merging master upgrade into one serial process.
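
A hedged sketch of the idea (the variable name is an assumption, not
necessarily the real one): surface the restart behavior in pre-upgrade
and give the user a chance to abort before anything has been changed:

    # Illustrative pre-upgrade confirmation; runs before any
    # substantial changes are made.
    - name: Verify restart configuration
      hosts: masters
      tasks:
      - pause:
          prompt: >
            Masters will be restarted during this upgrade. Press Enter
            to continue, or Ctrl+C then 'A' to abort
        when: openshift_rolling_restart_mode | default('services') == 'system'
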
* Added checks to make ci for YAML linting (an example lint
  configuration is sketched below)
* Modified y(a)ml files to pass the lint checks
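
For illustration, a minimal yamllint configuration of the kind such a
check might use (example only; the repository's actual rules may
differ):

    # .yamllint -- example configuration, not the actual rule set
    extends: default
    rules:
      line-length:
        max: 120
      indentation:
        spaces: 2
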
Router/registry update and re-deploy was recently reordered to
immediately follow control plane upgrade, right before we proceed to
node upgrade.

In some situations (small or single-host clusters) it appears possible
that the deployer pods are still running when the node in question is
evacuated for upgrade. When the deployer pod dies, the deployment is
marked failed and the router/registry continue running the old version,
despite the deployment config being updated correctly.

This change re-orders the router/registry upgrade to follow node
upgrade. However, for the separate control plane upgrade, the
router/registry upgrade still occurs at the end. This is because the
router/registry seem like they should logically be included in a
control plane upgrade, and presumably the user will not manually launch
the node upgrade so quickly as to trigger an evacuation of the node in
question.

The workaround for this problem, when it does occur, is simply:

    oc deploy docker-registry --latest
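
A sketch of the resulting ordering in the combined upgrade path
(include file names are hypothetical, not the actual ones): the
router/registry redeploy now runs only after the nodes have been
upgraded:

    # Illustrative include ordering; names are not the actual files.
    - include: upgrade_control_plane.yml
    - include: upgrade_nodes.yml
    - include: upgrade_router_registry.yml   # now after node upgrade
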
This was missed in the standalone control plane upgrade playbook.
However, it also looks to be out of order: we should restart before
reconciling and before upgrading nodes. As such, the restart was moved
directly into the common control plane upgrade code and placed before
reconciliation.
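
Roughly, the resulting ordering in the common control plane upgrade
code would be (include names are illustrative, not the actual files):

    # Illustrative ordering: restart first, then reconcile, then nodes.
    - include: restart_masters.yml
    - include: reconcile_roles_and_bindings.yml
    - include: upgrade_nodes.yml   # only in the combined upgrade path
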
This improves the situation further and prevents configuration changes
from accidentally triggering docker restarts before we've evacuated
nodes. In two places we now skip the role entirely, instead of the
previous implementation, which only skipped upgrading the installed
version (and therefore did not catch config issues).
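
A minimal sketch of the approach (the variable name is an assumption):
gate the entire docker role on a skip flag, rather than gating only the
version-upgrade task inside it:

    # Illustrative only: skipping the whole role means neither a
    # version change nor a config change (and the docker restart
    # either would trigger) can run before the node has been evacuated.
    roles:
    - role: docker
      when: not (skip_docker_role | default(False) | bool)
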
Refactored the 3.2 upgrade common files out to a path that does not
indicate they are strictly for 3.2. The 3.3 upgrade then becomes a
relatively small copy of the byo entry point, all calling the same code
as the 3.2 upgrade.

Thus far there are no known 3.3-specific upgrade tasks. In the future
we will likely want to allow hooks out to version-specific
pre/upgrade/post tasks.

Also fixes a bug where the handlers were not restarting the
node/openvswitch containers during upgrades, due to a change in Ansible
2+.
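
The commit only says the breakage came from a change in Ansible 2+; as
a generic illustration of the notify/handler pattern involved (service
names and task details are assumptions, not the actual fix):

    # Illustrative only: a config task notifying the container
    # restart handlers for the node and openvswitch services.
    tasks:
    - name: Update node configuration
      template:
        src: node-config.yaml.j2
        dest: /etc/origin/node/node-config.yaml
      notify:
      - restart node
      - restart openvswitch

    handlers:
    - name: restart node
      service:
        name: atomic-openshift-node
        state: restarted
    - name: restart openvswitch
      service:
        name: openvswitch
        state: restarted
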