| Commit message | Author | Age | Files | Lines |

Near the end of node upgrade, we now wait for the node to report Ready
before marking it schedulable again. This should help eliminate delays
when pods need to relocate as the next node in line is evacuated.
Because this happens near the end of the process, the only remaining task
is to mark the node schedulable again, so a failure here is easy for
admins to detect and recover from.
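A minimal Ansible sketch of this ordering (the task bodies and retry counts here are illustrative assumptions, not the playbook's actual code):

```yaml
# Hypothetical sketch: wait for the node to report Ready, then mark it
# schedulable again.
- name: Wait for node to report Ready
  command: >
    oc get node {{ openshift.node.nodename }}
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  register: node_ready
  until: node_ready.stdout == "True"
  retries: 30
  delay: 10

- name: Mark node schedulable again
  command: oadm manage-node {{ openshift.node.nodename }} --schedulable=true
```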
* Added checks to run YAML linting in CI
* Modified y(a)ml files to pass lint checks
Drop 3.2 upgrade playbooks.
Silence warnings when using some commands directly
etcd_upgrade: Simplify package installation
Scheduler upgrades
- do not upgrade predicates if openshift_master_scheduler_predicates is
defined
- do not upgrade priorities if openshift_master_scheduler_priorities is
defined
- do not upgrade predicates/priorities unless they match known previous
default configs
- output WARNING to the user if predicates/priorities are not updated during
  install
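A sketch of those guards as Ansible conditionals (aside from the two openshift_master_scheduler_* variables, all names here are assumptions):

```yaml
# Only replace predicates when the admin has not overridden them and the
# current config matches a known previous default. Illustrative only.
- set_fact:
    upgraded_scheduler_predicates: "{{ new_default_predicates }}"
  when:
  - openshift_master_scheduler_predicates is not defined
  - current_predicates in known_default_predicates

- debug:
    msg: "WARNING: scheduler predicates were not updated"
  when: openshift_master_scheduler_predicates is defined or
        current_predicates not in known_default_predicates
```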
In 3.3 one of our services lays down a systemd drop-in for configuring
Docker networking to use lbr0. In 3.4, this has been changed but the
file must be cleaned up manually by us.
However, after removing the file, docker requires a restart. This has big
implications, particularly in containerized environments, where an upgrade
is a fragile series of package upgrades and service restarts.
To avoid double docker restarts, and thus double service restarts in
containerized environments, this change does the following:
- Skip restart during docker upgrade, if it is required. We will restart
on our own later.
- Skip containerized service restarts when we upgrade the services
themselves.
- Clean shutdown of all containerized services.
- Restart Docker. (always, previously this only happened if it needed an
upgrade)
- Ensure all containerized services are restarted.
- Restart rpm node services. (always)
- Mark node schedulable again.
At the end of this process, docker0 should be back on the system.
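The list above might look roughly like this as ordered tasks (the service names and the drop-in path are assumptions for illustration, not the actual playbook):

```yaml
# Illustrative ordering only, not the actual upgrade playbook.
- name: Remove the old Docker networking drop-in (assumed path)
  file:
    path: /etc/systemd/system/docker.service.d/docker-network.conf
    state: absent

- name: Upgrade docker if needed, deferring its restart
  package:
    name: docker
    state: latest

- name: Cleanly stop containerized services
  service:
    name: "{{ item }}"
    state: stopped
  with_items: "{{ containerized_node_services | default([]) }}"

- name: Restart docker (always)
  service:
    name: docker
    state: restarted

- name: Restart containerized and rpm node services
  service:
    name: "{{ item }}"
    state: restarted
  with_items: "{{ node_services | default([]) }}"
```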
Fixes Bug 1395945
Fix invalid embedded etcd fact in etcd upgrade playbook.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
We were getting a different failure here complaining that openshift was not
in the facts, because we had not loaded facts for the first master during
the playbook run. However, this check was recently used in
upgrade_control_plane and should be more reliable.
Merge admission plugin configs
Move the values in kube_admission_plugin_config up one level per
the new format from 1.3:
"The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved
and merged into admissionConfig.pluginConfig."
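The quoted move, sketched on a fragment of master-config.yaml (the plugin name and payload are placeholders):

```yaml
# Before (pre-1.3): nested under kubernetesMasterConfig
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      ExamplePlugin:
        configuration:
          apiVersion: v1

# After (1.3): moved and merged into the top-level admissionConfig
admissionConfig:
  pluginConfig:
    ExamplePlugin:
      configuration:
        apiVersion: v1
```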
containerized.
The Ansible package module will call the correct package manager for the
underlying OS.
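For example, a task like the following resolves to yum, dnf, or apt depending on the host (a generic sketch, not the playbook's exact task):

```yaml
- name: Install etcd with whatever package manager the OS uses
  package:
    name: etcd
    state: present
```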
etcd upgrade playbooks
On Fedora we just blindly upgrade to the latest.
On RHEL we do stepwise upgrades: 2.0, 2.1, 2.2, 2.3, 3.0.
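One way to express the RHEL stepwise path (a loose sketch; the version-glob syntax is an assumption):

```yaml
# Walk etcd through each intermediate version rather than jumping to latest.
- name: Stepwise etcd upgrade on RHEL
  package:
    name: "etcd-{{ item }}*"
    state: latest
  with_items: ['2.0', '2.1', '2.2', '2.3', '3.0']
```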
This variable is referenced in the systemd unit templates; this seems
like the easiest and most consistent fix.
Reconcile role bindings for jenkins pipeline during upgrade.
https://github.com/openshift/origin/issues/11170 for more info.
Bug 1393663 - Failed to upgrade v3.2 to v3.3
upgrade.
Don't upgrade etcd on backup operations
Fixes Bug 1393187
Fix HA etcd upgrade when facts cache has been deleted.
The simplest way to reproduce this issue is to attempt an upgrade after
removing /etc/ansible/facts.d/openshift.fact. The actual cause in the field
is not entirely known, but critically it is possible for embedded_etcd to
default to true, causing the etcd fact lookup to check the wrong file
and fail silently, resulting in no etcd_data_dir fact being set.
Revert openshift.node.nodename changes
This reverts commit 1f2276fff1e41c1d9440ee8b589042ee249b95d7.
Fix and reorder control plane service restart.
This was missed in the standalone upgrade control plane playbook.
However, it also looks to be out of order: we should restart before
reconciling and upgrading nodes. As such, the restart was moved directly
into the common control plane upgrade code and placed before
reconciliation.