Closes #3070
|
|\
| |
| | |
Deprecate node 'evacuation' with 'drain'
|
| |
| |
| |
| | |
* https://trello.com/c/TeaEB9fX/307-3-deprecate-node-evacuation
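
For context, the deprecated call and its replacement both reduce to a single admin command per node; below is a hypothetical Ansible task using the new form (the flags, node name, and delegation target are illustrative assumptions, not the playbook's actual code):

    # Hypothetical drain task using the new 'oc adm drain' form; flags
    # and delegation target are assumptions for this sketch.
    - name: Drain node in preparation for upgrade
      command: >
        oc adm drain {{ inventory_hostname }}
        --force --ignore-daemonsets --delete-local-data
      delegate_to: "{{ groups.masters.0 }}"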

hook run.

* Added yaml linting checks to make ci
* Modified y(a)ml files to pass lint checks
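
As a rough illustration of what such a lint gate looks like, here is a minimal yamllint configuration; the rule values below are assumptions, not the repository's actual settings:

    # Hypothetical .yamllint config; the repo's real rules may differ.
    extends: default
    rules:
      line-length:
        max: 120
      indentation:
        spaces: 2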

Scheduler var fix

Fixes #2738

Update scheduler defaults

Remove duplicate when key

Fix rare failure to deploy new registry/router after upgrade.

The router/registry update and re-deploy was recently reordered to immediately follow the control plane upgrade, right before we proceed to node upgrade.

In some situations (small or single-host clusters) it appears possible that the deployer pods are still running when the node in question is evacuated for upgrade. When the deployer pod dies, the deployment is marked failed and the router/registry continue running the old version, despite the deployment config being updated correctly.

This change re-orders the router/registry upgrade to follow node upgrade. For a separate control plane upgrade, however, the router/registry step still occurs at the end. This is because the router/registry logically belong in a control plane upgrade, and presumably the user will not launch the node upgrade manually so quickly as to trigger an evacuation of the node in question.

The workaround for this problem, when it does occur, is simply to:

    oc deploy docker-registry --latest
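
A minimal sketch of the resulting ordering, with invented include names (the real playbook files in openshift-ansible are laid out differently):

    # Hypothetical top-level upgrade ordering; file names are illustrative.
    - include: upgrade_control_plane.yml
    - include: upgrade_nodes.yml
    # Redeploying the router/registry only after node upgrade means the
    # deployer pods can no longer be killed by evacuating their own node.
    - include: redeploy_router_registry.yml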
|
|\ \
| |/
|/| |
Added a BYO playbook for configuring NetworkManager on nodes
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |

In order to do a full install of OpenShift using the byo/config.yml playbook, it is currently required that NetworkManager be installed and configured on the nodes prior to the installation. This playbook introduces a very simple default configuration that can be used to install, configure, and enable NetworkManager on the nodes.

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
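
A minimal sketch of what such a playbook amounts to, assuming the package and service are both named NetworkManager (the layout and task names are illustrative, not the shipped playbook):

    # Hypothetical BYO NetworkManager playbook; the playbook shipped in
    # openshift-ansible may differ in layout and variable names.
    - hosts: nodes
      become: yes
      tasks:
        - name: Install NetworkManager
          package:
            name: NetworkManager
            state: present

        - name: Enable and start NetworkManager
          service:
            name: NetworkManager
            state: started
            enabled: yes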

On Fedora we just blindly upgrade to the latest. On RHEL we do stepwise upgrades: 2.0, 2.1, 2.2, 2.3, 3.0.
|
| | |
|
|/
|
|
| |
This reverts commit 1f2276fff1e41c1d9440ee8b589042ee249b95d7.

Update link to latest versions upgrade README
|
| | |
|
|\ \
| | |
| | | |
Add support for 3.4 upgrade.
|
| |/
| |
| |
| |
| | |
This is a direct copy of 3.3 upgrade playbooks, with 3.3 specific hooks
removed and version numbers adjusted appropriately.

Fix and reorder control plane service restart.

This was missed in the standalone upgrade control plane playbook. It also looks to be out of order: we should restart services before reconciling and upgrading nodes. The restart has therefore been moved directly into the control plane upgrade common code, placed before reconciliation.
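
A hedged sketch of the restart-before-reconcile ordering described above; the task layout is illustrative and service names vary by deployment type:

    # Hypothetical ordering: restart control plane services first, then
    # reconcile. Service names differ between origin and enterprise.
    - name: Restart master services
      service:
        name: "{{ item }}"
        state: restarted
      with_items:
        - atomic-openshift-master-api
        - atomic-openshift-master-controllers

    - name: Reconcile cluster roles
      command: oc adm policy reconcile-cluster-roles --confirm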

Switch from "oadm" to "oc adm" and fix bug in binary sync.

Found a bug when syncing binaries to containerized hosts: if a symlink was pre-existing but pointed to the wrong destination, it would not be corrected.

Also switched to using oc adm instead of oadm.
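
The fix pattern here is the standard one for stale symlinks; a hypothetical Ansible task (the paths are invented for illustration):

    # Hypothetical task: 'force: yes' replaces a pre-existing symlink
    # even when it points at the wrong destination. Paths are invented.
    - name: Ensure client symlink points at the synced binary
      file:
        src: /usr/local/bin/openshift
        dest: /usr/local/bin/oc
        state: link
        force: yes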

The transition to being able to specify which nodes to upgrade caused standalone nodes to get skipped in this playbook.

3.4 Upgrade Improvements

This improves the situation further and prevents configuration changes from accidentally triggering docker restarts before we've evacuated nodes. In two places we now skip the docker role entirely, instead of the previous implementation, which only skipped upgrading the installed version (and therefore did not catch config issues).
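
A minimal sketch of skipping the role outright, with an invented guard variable:

    # Hypothetical guard; 'docker_skip_upgrade' is invented for this sketch.
    - hosts: nodes
      roles:
        - role: docker
          when: not (docker_skip_upgrade | default(False) | bool)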

Add the Registry deployment subtype as an option in the quick installer.
|
| |
|
| |
|
| |
|
|\
| |
| | |
1.3 / 3.3 Upgrades
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Refactored the 3.2 upgrade common files out to a path that does not
indicate they are strictly for 3.2.
3.3 upgrade then becomes a relatively small copy of the byo entry point,
all calling the same code as 3.2 upgrade.
Thus far there are no known 3.3 specific upgrade tasks. In future we
will likely want to allow hooks out to version specific pre/upgrade/post
tasks.
Also fixes a bug where the handlers were not restarting
nodes/openvswitch containers doing upgrades, due to a change in Ansible
2+.
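
For illustration, the notify/handler shape involved looks roughly like this; the file, variable, and handler names are assumptions, not the repo's actual ones:

    # Hypothetical play showing handlers that restart the containerized
    # node and openvswitch services; names are illustrative.
    - hosts: nodes
      tasks:
        - name: Update node configuration
          template:
            src: node-config.yaml.j2
            dest: /etc/origin/node/node-config.yaml
          notify:
            - restart node
            - restart openvswitch
      handlers:
        - name: restart node
          service:
            name: "{{ openshift.common.service_type }}-node"
            state: restarted
        - name: restart openvswitch
          service:
            name: openvswitch
            state: restarted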