path: root/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml
Commit message | Author | Date | Files | Lines
* v3.3 Upgrade Refactor | Russell Teague | 2017-05-02 | 1 | -103/+1
* Refactor initialize groups tasks | Russell Teague | 2017-04-12 | 1 | -0/+2
Two tasks for initializing group names for the byo playbooks were located in the common folder, in the std_include.yml file. Byo dependencies should not live in the common folder, so the two tasks have been moved out of common/openshift-cluster/std_include.yml into a new file, byo/openshift-cluster/initialize_groups.yml. All references that previously included these tasks from std_include.yml or other files have been updated to use the byo initialize_groups.yml. The methodology follows the pattern of setting up groups in byo and then calling out to playbooks in common, which are shared by all deployments.
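The resulting shape of a byo entry point is simple: set up the groups first, then hand off to the shared code. A minimal sketch, assuming illustrative file names and paths apart from initialize_groups.yml itself:

    ---
    # byo/openshift-cluster/<some_playbook>.yml (illustrative name)
    # Set up the byo host groups first, on every run.
    - include: initialize_groups.yml
      tags:
      - always

    # Then call out to the shared playbooks under common/ (path approximate).
    - include: ../../common/openshift-cluster/config.yml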
* Update excluders to latest; in non-upgrade scenarios do not update | Jan Chaloupka | 2017-03-07 | 1 | -1/+1
- check that both available excluder versions are at most the upgrade target version
- get excluder status through the status command
- make excluder enablement configurable (a hedged inventory example follows below)
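Configurable excluder enablement means it is driven from the inventory. A hedged example of the relevant variables (names as I recall them from the openshift-ansible inventory examples; verify against the documentation for your release):

    # group_vars-style settings; the variable names are assumptions to verify.
    openshift_enable_docker_excluder: true
    openshift_enable_openshift_excluder: true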
* Move excluder disablement into control plane and node upgrade playbooks | Scott Dodson | 2017-02-06 | 1 | -0/+4
So that the excluder is disabled and reset within the scope of each of those playbooks, in addition to the overall playbook.
* Adding names to plays and standardizing | Russell Teague | 2017-01-27 | 1 | -2/+2
* Validate system restart policy during pre-upgrade. | Devan Goodwin | 2017-01-18 | 1 | -0/+4
This was done far into the process, potentially leaving the user in a difficult situation if they had not considered that they were running the upgrade playbook on a host that would be restarted. Instead, check the configuration and which host we're running on during pre-upgrade, and allow the user to abort before making any substantial changes. This is a step towards merging the master upgrade into one serial process.
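A minimal sketch of the kind of pre-upgrade check described here, assuming the documented openshift_rolling_restart_mode variable; the host-detection condition and group name are illustrative, not the exact logic the playbooks use:

    ---
    - name: Verify the upgrade will not reboot the host running it
      hosts: oo_first_master
      tasks:
      - name: Abort early if a system restart would kill this playbook run
        fail:
          msg: >
            openshift_rolling_restart_mode is 'system' and this playbook is
            running on a master that would be rebooted; re-run from another
            host or set openshift_rolling_restart_mode=services.
        when:
        - openshift_rolling_restart_mode | default('services') == 'system'
        # Illustrative check: the target host is also the control host.
        - ansible_hostname == lookup('pipe', 'hostname')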
* YAML Linting | Russell Teague | 2016-12-12 | 1 | -1/+0
- Added checks to make ci for YAML linting
- Modified y(a)ml files to pass the lint checks
* Fix rare failure to deploy new registry/router after upgrade. | Devan Goodwin | 2016-11-21 | 1 | -2/+2
Router/registry update and re-deploy was recently reordered to immediately follow the control plane upgrade, right before we proceed to node upgrade.

In some situations (small or single-host clusters) it appears possible that the deployer pods are still running when the node in question is evacuated for upgrade. When the deployer pod dies, the deployment is marked failed and the router/registry continue running the old version, despite the deployment config being updated correctly.

This change re-orders the router/registry upgrade to follow node upgrade. However, for a separate control plane upgrade, the router/registry update still occurs at the end. This is because the router/registry seem like they should logically be included in a control plane upgrade, and presumably the user will not manually launch the node upgrade so quickly as to trigger an evacuation on the node in question.

The workaround for this problem, when it does occur, is simply to: oc deploy docker-registry --latest
* Fix and reorder control plane service restart. | Devan Goodwin | 2016-10-21 | 1 | -3/+2
This was missed in the standalone upgrade control plane playbook. However, it also looks to be out of order: we should restart before reconciling and upgrading nodes. As such, the restart was moved directly into the common control plane upgrade code and placed before reconciliation.
* Use pre_upgrade tag instead of a dry run variable. | Devan Goodwin | 2016-09-29 | 1 | -6/+23
* Move etcd backup from pre-upgrade to upgrade itself. | Devan Goodwin | 2016-09-29 | 1 | -2/+0
* Skip the docker role in early upgrade stages. | Devan Goodwin | 2016-09-29 | 1 | -4/+6
This improves the situation further and prevents configuration changes from accidentally triggering docker restarts before we've evacuated nodes. We now skip the role entirely in two places, instead of the previous implementation, which only skipped upgrading the installed version (and therefore did not catch config issues).
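Skipping a role wholesale, rather than only its upgrade step, can be done by gating the role itself. An illustrative sketch only; the flag name docker_role_skip is hypothetical and not the variable the playbooks actually use:

    ---
    - hosts: oo_masters_to_config:oo_nodes_to_config
      roles:
      # Gate the whole docker role behind a flag set by the upgrade playbooks
      # (hypothetical flag name).
      - role: docker
        when: not (docker_role_skip | default(False) | bool)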
* Allow filtering nodes to upgrade by label. | Devan Goodwin | 2016-09-29 | 1 | -4/+5
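A hedged example of how the label filter is supplied; the variable name openshift_upgrade_nodes_label is as I recall it from the upgrade documentation and should be verified for your release:

    # Passed as an extra variable, e.g.
    #   ansible-playbook ... -e openshift_upgrade_nodes_label="region=infra"
    openshift_upgrade_nodes_label: "region=infra"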
* Split upgrade for control plane/nodes. | Devan Goodwin | 2016-09-29 | 1 | -8/+22
* Attempt to tease apart pre-upgrade for masters/nodes. | Devan Goodwin | 2016-09-28 | 1 | -2/+51
* Split upgrade entry points into control plane/node. | Devan Goodwin | 2016-09-28 | 1 | -48/+1
* Upgrade configs for protobuf support. | Devan Goodwin | 2016-08-08 | 1 | -0/+2
* Introduce 1.3/3.3 upgrade path. | Devan Goodwin | 2016-07-25 | 1 | -0/+65
Refactored the 3.2 upgrade common files out to a path that does not indicate they are strictly for 3.2. The 3.3 upgrade then becomes a relatively small copy of the byo entry point, all calling the same code as the 3.2 upgrade. Thus far there are no known 3.3-specific upgrade tasks; in the future we will likely want to allow hooks out to version-specific pre/upgrade/post tasks.

Also fixes a bug where the handlers were not restarting nodes/openvswitch containers during upgrades, due to a change in Ansible 2+.
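A hedged sketch of what the thin byo v3_3 entry point looks like under this approach: little more than a hand-off to the shared upgrade code under common/, which the 3.2 entry point also calls. The relative path is approximate:

    ---
    # playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml (sketch)
    - include: ../../../../common/openshift-cluster/upgrades/upgrade.yml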