path: root/roles/openshift_metrics/vars
author	Devan Goodwin <dgoodwin@redhat.com>	2016-11-21 11:29:07 -0400
committer	Devan Goodwin <dgoodwin@redhat.com>	2016-11-21 11:29:07 -0400
commit	27c0a29f266def67e22667fef6823062b8167be5 (patch)
tree	06ba5c73e85812324ea4f34ebfe4da3e0030cf59 /roles/openshift_metrics/vars
parent	6782fa3c9e01b02e6a29e676f6bbe53d040b9708 (diff)
Fix rare failure to deploy new registry/router after upgrade.
Router/registry update and re-deploy was recently reordered to immediately follow the control plane upgrade, right before we proceed to node upgrade. In some situations (small or single-host clusters) it appears possible that the deployer pods are still running when the node in question is evacuated for upgrade. When the deployer pod dies, the deployment is marked failed and the router/registry continue running the old version, despite the deployment config being updated correctly.

This change re-orders the router/registry upgrade to follow node upgrade. For a separate control plane upgrade, however, the router/registry update still occurs at the end: the router/registry logically belong in a control plane upgrade, and presumably the user will not launch a node upgrade manually so quickly as to trigger an evacuation on the node in question.

The workaround when this problem does occur is simply to run: oc deploy docker-registry --latest
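A minimal sketch of the manual recovery described above, assuming cluster-admin access and that the router/registry run in the default project; the router command is an assumption mirroring the documented registry workaround, not something stated in this commit:

    # Inspect the registry/router deployment configs to see whether the
    # latest deployment failed (config updated, old pods still running).
    oc project default
    oc get dc docker-registry router

    # Re-trigger the deployments from the updated deployment configs.
    oc deploy docker-registry --latest   # workaround quoted in this commit message
    oc deploy router --latest            # assumed equivalent step if the router deployment also failed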
Diffstat (limited to 'roles/openshift_metrics/vars')
0 files changed, 0 insertions, 0 deletions