Two tasks for initializing group names for the byo playbooks were located
in the common folder, in the std_include.yml file. Byo dependencies
should not be in the common folder, so the two tasks have been moved out
of common/openshift-cluster/std_include.yml into a new file,
byo/openshift-cluster/initialize_groups.yml. All references that previously
included these tasks, whether from std_include.yml or various other files,
have been updated to use the byo initialize_groups.yml. The methodology
follows the pattern of setting up groups in byo and then calling out to
playbooks in common, which are common to all deployments.
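
The resulting entry-point pattern, as a minimal sketch (contents here are
illustrative, not verbatim from the commit):

    # byo/openshift-cluster/config.yml (illustrative sketch)
    - include: initialize_groups.yml
    - include: ../../common/openshift-cluster/config.yml

Each byo playbook first evaluates its groups locally, then hands off to the
deployment-agnostic logic under common/.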
The playbooks were crossing byo/common boundaries for task includes.
This moves all 'common' files/tasks into the 'common' folder.
In openshift_repos and everywhere else, ensure deployment_type and
openshift_deployment_type are defined and the same.
We really want to set openshift_deployment_type, but users will likely
still have just deployment_type, so accept both, and don't make every
playbook default openshift_deployment_type to deployment_type.
This introduces the openshift_sanitize_inventory role, which runs before anything else.
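
A minimal sketch of what such a sanitization role could do (task wording and
file path are assumptions, not the commit's exact contents):

    # roles/openshift_sanitize_inventory/tasks/main.yml (illustrative)
    - name: Abort when deployment_type and openshift_deployment_type disagree
      fail:
        msg: deployment_type and openshift_deployment_type must match.
      when:
      - deployment_type is defined
      - openshift_deployment_type is defined
      - openshift_deployment_type != deployment_type

    - name: Accept either variable, standardizing on openshift_deployment_type
      set_fact:
        openshift_deployment_type: "{{ openshift_deployment_type | default(deployment_type) }}"
        deployment_type: "{{ deployment_type | default(openshift_deployment_type) }}"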
Merged by openshift-bot
WIP: update excluders to latest by default; in non-upgrade scenarios, do not update
- check that both available excluder versions are at most the upgrade target version
- get excluder status through the status command
- make excluder enablement configurable
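
The excluder packages ship a control script, so the checks above presumably
reduce to calls along these lines (variable and package names are assumptions):

    # illustrative: query and toggle the docker excluder
    - name: Get docker excluder status
      command: "{{ openshift.common.service_type }}-docker-excluder status"
      register: docker_excluder_status
      changed_when: false
      failed_when: false

    - name: Enable the docker excluder when requested
      command: "{{ openshift.common.service_type }}-docker-excluder exclude"
      when:
      - enable_docker_excluder | default(true) | bool
      - docker_excluder_status.rc != 0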
upgrade README file
Fixes Bug 1423425
Fixed issue where upgrade fails when using daemon sets (e.g. aggregated logging)
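
Daemon-set pods cannot be evicted by a drain, so the fix presumably amounts
to telling the drain operation to skip them, roughly (node name and exact
flags are placeholders):

    oc adm drain node1.example.com --ignore-daemonsets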
Add an upgrade job step that runs after the entire upgrade completes
So that the excluder is disabled and reset within the scope of each of those
plays, in addition to the overall playbook.
Correct usage of draining nodes
The add_host: task does not change any data on the host, so as a matter of
practice it is configured with changed_when: False. This commit standardizes
that usage in the byo and common playbooks. Additionally, task names
are added to each task to improve troubleshooting.
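
A minimal sketch of the standardized pattern (group and variable names are
illustrative):

    - name: Evaluate group oo_masters_to_config
      add_host:
        name: "{{ item }}"
        groups: oo_masters_to_config
      with_items: "{{ g_master_hosts | default([]) }}"
      changed_when: false

The explicit name makes failures easy to locate in output, and
changed_when: false keeps in-memory inventory bookkeeping from being
reported as a change.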
Make upgrade playbooks consistent with one another
This was done far into the process, potentially leaving the user in a
difficult situation if they had not considered that they were running the
upgrade playbook on a host that would be restarted. Instead, check the
configuration and which host we're running on in pre-upgrade, and allow
the user to abort before making any substantial changes.
This is a step towards merging master upgrade into one serial process.
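
An abort gate like this can be expressed as a pause prompt in pre-upgrade,
sketched here (the condition variable is an assumption):

    - name: Confirm restart of the host running the upgrade
      hosts: oo_first_master
      tasks:
      - pause:
          prompt: >
            This upgrade will restart {{ inventory_hostname }}.
            Press Enter to continue, or Ctrl+C then A to abort.
        when: openshift_rolling_restart_mode | default('services') == 'system'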
Logging deployer tasks
deployer image
Begin requiring Docker 1.12.
Building off the work done for Docker 1.10, we now require Docker 1.12
by default.
The upgrade process was already set to ensure you are running the latest
docker during upgrade, and the standalone docker upgrade playbook can
also be used if desired.
As before, you can override the Docker 1.12 requirement by setting
docker_version=1.10.3 (or similar), and you can skip the default docker
upgrade by setting docker_upgrade=False.
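
Those overrides are plain inventory variables; a minimal sketch (the group
name is the conventional one, assumed here):

    # inventory (illustrative)
    [OSEv3:vars]
    docker_version=1.10.3
    docker_upgrade=False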
Closes #3070
Deprecate node 'evacuation' with 'drain'
* https://trello.com/c/TeaEB9fX/307-3-deprecate-node-evacuation
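
In CLI terms the rename amounts to swapping the deprecated verb for its
replacement, roughly (exact flags are an assumption):

    # before (deprecated)
    oadm manage-node <node> --evacuate --force
    # after
    oadm drain <node> --force --ignore-daemonsets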
hook run.
* Added yaml lint checks to make ci
* Modified y(a)ml files to pass lint checks
Update scheduler defaults
Remove duplicate when key
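
For context: YAML forbids duplicate keys in a mapping, and most parsers
silently keep only the last one, so a task like the illustrative sketch
below loses a condition; the fix is to merge them:

    # broken (illustrative): the first when is silently discarded
    - command: echo hello
      when: foo is defined
      when: bar is defined

    # fixed
    - command: echo hello
      when: foo is defined and bar is defined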
Router/registry update and re-deploy was recently reordered to
immediately follow control plane upgrade, right before we proceed to
node upgrade.
In some situations (small or single host clusters) it appears possible
that the deployer pods are running when the node in question is
evacuated for upgrade. When the deployer pod dies, the deployment is
failed and the router/registry continue running the old version, despite
the deployment config being updated correctly.
This change re-orders the router/registry upgrade to follow node
upgrade. However, for a separate control plane upgrade, the router/registry
upgrade still occurs at the end. This is because the router/registry seem
like they should logically be included in a control plane upgrade, and
presumably the user will not manually launch node upgrade so quickly as to
trigger an evacuation on the node in question.
The workaround for this problem, when it does occur, is simply to
re-trigger the deployment:

    oc deploy docker-registry --latest
On Fedora we just blindly upgrade to the latest.
On RHEL we do stepwise upgrades: 2.0, 2.1, 2.2, 2.3, 3.0.