path: root/playbooks/byo
Commit log (each entry: commit message, author, date; files changed, lines -removed/+added)
* initialize oo_nodes_to_upgrade group when running control plane upgrade only (Jan Chaloupka, 2017-02-16; 3 files, -0/+9)
* Merge pull request #3367 from soltysh/upgrade_jobs (Scott Dodson, 2017-02-15; 2 files, -0/+4)
  Add upgrade job step after the entire upgrade performs
  * Add upgrade job step after the entire upgrade performs (Maciej Szulik, 2017-02-15; 2 files, -0/+4)
* upgrades: fix path to disable_excluder.yml (Jan Chaloupka, 2017-02-15; 1 file, -1/+1)
* Modify playbooks to use oadm_manage_node module (Russell Teague, 2017-02-13; 1 file, -7/+24)
* Introduce tag notation for checks (Rodolfo Carvalho, 2017-02-10; 1 file, -3/+1)
  This allows us to refer to a group of checks using a single handle.
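  A hedged illustration of what such a handle might look like in a preflight playbook;
  the play, role name, action plugin name, and the '@preflight' group handle below are
  assumptions for illustration, not taken from this commit:

    - name: Run preflight checks on all hosts
      hosts: all
      roles:
        - openshift_health_checker   # assumed role providing the check plugin
      post_tasks:
        - name: Run every check tagged with the 'preflight' handle
          action: openshift_health_check
          args:
            checks: ['@preflight']   # one handle refers to a whole group of checks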
* Replace multi-role checks with action plugin (Rodolfo Carvalho, 2017-02-10; 1 file, -29/+12)
  This approach should make it easier to add new checks without having to write lots of YAML and doing things against Ansible (e.g. ignore_errors).
  A single action plugin determines which checks to run for each host, including arguments to the check.
  A check is implemented as a class with a run method, with the same signature as an action plugin and module, and is normally backed by a regular Ansible module. Each check is implemented as a separate Python file. This allows whoever adds a new check to focus solely on a single Python module, and potentially an Ansible module within library/ too.
  All checks are automatically loaded, and only active checks that are requested by the playbook get executed.
* Fix playbooks/byo/openshift_facts.yml include path (Scott Dodson, 2017-02-07; 1 file, -1/+1)
  Fixes Bug 1419893
* Add missing symlink to roles (Rodolfo Carvalho, 2017-02-07; 1 file, -0/+1)
  It turned out that the playbook `playbooks/byo/openshift-preflight/check.yml` would only work under a certain `ansible.cfg` in which `roles/` was added to `roles_path`. That was the case with the example config prior to b804e70cdd0bc8601bfc87fcf3e34043223828ee.
* Merge pull request #3261 from sdodson/excluder (Scott Dodson, 2017-02-06; 9 files, -0/+36)
  Manage the excluder functionality
  * Move excluder disablement into control plane and node upgrade playbooks (Scott Dodson, 2017-02-06; 9 files, -0/+36)
    So that the excluder is disabled and reset within the scope of each of those playbooks, in addition to the overall playbook.
* Fix RHEL Subscribe std_include path (Tim Bielawa, 2017-02-06; 1 file, -1/+1)
  Closes #3268
* Restructure certificate redeploy playbooks (Andrew Butcher, 2017-02-02; 16 files, -124/+85)
* Create v3_5 upgrade playbooks (Russell Teague, 2017-01-30; 6 files, -2/+322)
* Adding names to plays and standardizing (Russell Teague, 2017-01-27; 15 files, -34/+85)
* Merge pull request #3198 from mtnbikenc/drain-fix (Russell Teague, 2017-01-26; 1 file, -1/+1)
  Correct usage of draining nodes
  * Correct usage of draining nodes (Russell Teague, 2017-01-26; 1 file, -1/+1)
* Standardize add_host: with name and changed_when (Russell Teague, 2017-01-25; 12 files, -19/+41)
  The add_host: task does not change any data on the host and as a practice has been configured with changed_when: False. This commit standardizes that usage in the byo and common playbooks. Additionally, task names are added to each task to improve troubleshooting.
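  A minimal sketch of the standardized pattern described in this commit; the play name,
  group name, and host variable below are illustrative assumptions, not the actual
  playbook contents:

    - name: Evaluate host groups
      hosts: localhost
      connection: local
      become: no
      tasks:
        - name: Add first master to the oo_first_master group
          add_host:
            name: "{{ groups.masters.0 }}"
            groups: oo_first_master
          changed_when: False   # add_host changes no data on the target host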
* Cleaning repo cache earlier (Russell Teague, 2017-01-19; 1 file, -2/+2)
* Merge pull request #3093 from mtnbikenc/upgrade-fix (Scott Dodson, 2017-01-19; 1 file, -0/+2)
  Correct consistency between upgrade playbooks
  * Correct consistency between upgrade playbooks (Russell Teague, 2017-01-13; 1 file, -0/+2)
* Perform master upgrades in a single play serially (Devan Goodwin, 2017-01-18; 1 file, -1/+11)
* Validate system restart policy during pre-upgrade (Devan Goodwin, 2017-01-18; 5 files, -0/+18)
  This was done far into the process, potentially leaving the user in a difficult situation if they had not considered that they were running the upgrade playbook on a host that would be restarted. Instead, check the configuration and which host we're running on during pre-upgrade, and allow the user to abort before making any substantial changes. This is a step towards merging master upgrade into one serial process.
* Merge pull request #2640 from ewolinetz/logging_deployer_tasks (Scott Dodson, 2017-01-17; 1 file, -0/+35)
  Logging deployer tasks
  * minor updates for code reviews, remove unused params (Jeff Cantrill, 2016-12-19; 1 file, -0/+5)
  * Creating openshift_logging role for deploying Aggregated Logging without a deployer image (ewolinetz, 2016-12-14; 1 file, -0/+30)
* Merge pull request #2786 from dgoodwin/docker-1.12 (Scott Dodson, 2017-01-17; 1 file, -2/+0)
  Begin requiring Docker 1.12.
  * Begin requiring Docker 1.12 (Devan Goodwin, 2016-11-10; 1 file, -2/+0)
    Building off the work done for Docker 1.10, we now require Docker 1.12 by default. The upgrade process was already set to ensure you are running the latest docker during upgrade, and the standalone docker upgrade playbook can also be used if desired. As before, you can override this Docker 1.12 requirement by setting docker_version=1.10.3 (or similar), and you can skip the default docker upgrade by setting docker_upgrade=False.
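    A hedged sketch of the overrides named above; the variable names come from the
    commit message, while placing them in a group_vars file (rather than directly in
    the inventory) is only an assumption for illustration:

      # group_vars/OSEv3.yml (hypothetical location)
      docker_version: "1.10.3"   # pin an older Docker instead of the 1.12 default
      docker_upgrade: False      # skip the automatic Docker upgrade during upgrades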
* Merge pull request #3083 from rhcarvalho/doc-playbooks (Scott Dodson, 2017-01-17; 1 file, -0/+11)
  Document playbook directories
  * Document playbook directories (Rodolfo Carvalho, 2017-01-13; 1 file, -0/+11)
* Rename subrole facts -> init (Rodolfo Carvalho, 2017-01-13; 1 file, -1/+1)
  Trying to improve the name: `init` needs to be loaded before calling other subroles. We don't make `init` a dependency of `common`, `masters` and `nodes` to avoid running the relatively slow `openshift_facts` multiple times.
* Replace custom variables with openshift_facts (Rodolfo Carvalho, 2017-01-12; 1 file, -1/+0)
  Note: on a simple example run of ansible-playbook against a single docker-based host, I saw the execution time jump from 7s to 17s. That's unfortunate, but it is probably better to reuse openshift_facts than to come up with new variables.
* Move playbook to BYO (Rodolfo Carvalho, 2017-01-12; 2 files, -0/+75)
  Because that's the main playbook directory in use.
* Add a fact to select --evacuate or --drain based on your OCP version (Tim Bielawa, 2017-01-11; 1 file, -1/+1)
  Closes #3070
* Merge pull request #2986 from tbielawa/deprecate_node_evacuation (Tim Bielawa, 2016-12-19; 1 file, -4/+4)
  Deprecate node 'evacuation' with 'drain'
  * Deprecate node 'evacuation' with 'drain' (Tim Bielawa, 2016-12-16; 1 file, -4/+4)
    * https://trello.com/c/TeaEB9fX/307-3-deprecate-node-evacuation
* Add master config hook for 3.4 upgrade and fix facts ordering for config hook run (Andrew Butcher, 2016-12-16; 1 file, -0/+2)
* YAML Linting (Russell Teague, 2016-12-12; 8 files, -35/+32)
  * Added checks to make ci for yaml linting
  * Modified y(a)ml files to pass lint checks
* Removed verify_ansible_version playbook refs (Russell Teague, 2016-12-08; 5 files, -14/+2)
* Drop 3.2 upgrade playbooks (Devan Goodwin, 2016-12-08; 2 files, -83/+0)
* Merge pull request #2920 from detiber/schedulerVarFix (Andrew Butcher, 2016-12-05; 1 file, -0/+2)
  Scheduler var fix
  * fix tags (Jason DeTiberus, 2016-12-01; 1 file, -0/+2)
* Explicitly set etcd vars for byo scaleup (Samuel Munilla, 2016-11-30; 1 file, -0/+2)
  Fixes #2738
* Merge pull request #2855 from detiber/updateSchedulerDefaults (Scott Dodson, 2016-11-29; 1 file, -0/+1)
  Update scheduler defaults
  * do not report changed for group mapping (Jason DeTiberus, 2016-11-29; 1 file, -0/+1)
* Merge pull request #2880 from mtnbikenc/docker-dup (Jason DeTiberus, 2016-11-29; 1 file, -1/+0)
  Remove duplicate when key
  * Remove duplicate when key (Russell Teague, 2016-11-29; 1 file, -1/+0)
* Merge pull request #2831 from dgoodwin/upgrade-ordering (Scott Dodson, 2016-11-29; 2 files, -4/+4)
  Fix rare failure to deploy new registry/router after upgrade.
  * Fix rare failure to deploy new registry/router after upgrade (Devan Goodwin, 2016-11-21; 2 files, -4/+4)
    Router/registry update and re-deploy was recently reordered to immediately follow control plane upgrade, right before we proceed to node upgrade. In some situations (small or single-host clusters) it appears possible that the deployer pods are running when the node in question is evacuated for upgrade. When the deployer pod dies, the deployment is failed and the router/registry continue running the old version, despite the deployment config being updated correctly.
    This change re-orders the router/registry upgrade to follow node upgrade. For a separate control plane upgrade, however, the router/registry upgrade still occurs at the end. This is because the router/registry seem like they should logically be included in a control plane upgrade, and presumably the user will not manually launch node upgrade so quickly as to trigger an evac on the node in question.
    The workaround for this problem when it does occur is simply to run: oc deploy docker-registry --latest
* Merge pull request #2771 from stevekuznetsov/skuznets/network-manager (Scott Dodson, 2016-11-22; 1 file, -0/+36)
  Added a BYO playbook for configuring NetworkManager on nodes