path: root/playbooks
Commit message (Author, Date, Files changed, Lines -deleted/+added)
* Merge pull request #2946 from dagwieers/patch-1 (Scott Dodson, 2016-12-08, 1 file, -0/+2)
    Silence warnings when using some commands directly
  * Silence warnings when using rpm directly (Dag Wieers, 2016-12-08, 1 file, -0/+2)
* Silence warnings when using rpm directly (Dag Wieers, 2016-12-08, 1 file, -0/+2)
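
The usual way to silence these warnings when shelling out to rpm from an Ansible task is to set warn: no on the command; the change here presumably does something along those lines. A minimal sketch, assuming that approach (this is not the actual task from the commit):

    # Hypothetical task: query an rpm directly without triggering Ansible's
    # "Consider using the yum/rpm module" warning.
    - name: Check whether the etcd package is installed
      command: rpm -q etcd
      args:
        warn: no            # assumed fix: suppress the command-module warning
      register: etcd_rpm_check
      failed_when: false
      changed_when: false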
* Merge pull request #2934 from sdodson/etcd3-v2 (Scott Dodson, 2016-12-07, 1 file, -7/+4)
    etcd_upgrade: Simplify package installation
  * etcd_upgrade: Simplify package installation (Scott Dodson, 2016-12-07, 1 file, -7/+4)
* Merge pull request #2892 from detiber/upgradeScheduler (Scott Dodson, 2016-12-07, 3 files, -12/+177)
    Scheduler upgrades
  * add comments and remove debug code (Jason DeTiberus, 2016-12-07, 1 file, -2/+8)
  * Handle updating of scheduler config during upgrade (Jason DeTiberus, 2016-12-06, 2 files, -3/+162)
      - do not upgrade predicates if openshift_master_scheduler_predicates is defined
      - do not upgrade priorities if openshift_master_scheduler_priorities is defined
      - do not upgrade predicates/priorities unless they match known previous default configs
      - output a WARNING to the user if predicates/priorities are not updated during install
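
A hedged sketch of what that kind of guard can look like as a task; only openshift_master_scheduler_predicates is taken from the commit message, the other variable names are illustrative:

    # Illustrative only: update the scheduler predicates during upgrade unless
    # the user pinned their own value or the existing config is not a known default.
    - name: Update scheduler predicates to the new defaults
      set_fact:
        upgraded_scheduler_predicates: "{{ new_default_predicates }}"
      when:
        - openshift_master_scheduler_predicates is not defined
        - current_scheduler_predicates in known_default_predicate_sets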
  * Fix templating (Jason DeTiberus, 2016-12-06, 1 file, -9/+9)
* Always install latest etcd for containerized hosts (Scott Dodson, 2016-12-06, 1 file, -3/+5)
* etcd_upgrade: Use different variables for rpm vs container versions (Scott Dodson, 2016-12-06, 1 file, -10/+10)
* Merge pull request #2920 from detiber/schedulerVarFix (Andrew Butcher, 2016-12-05, 1 file, -0/+2)
    Scheduler var fix
  * fix tags (Jason DeTiberus, 2016-12-01, 1 file, -0/+2)
* Conditionalize master config update for admission_plugin_config. (Andrew Butcher, 2016-12-05, 2 files, -0/+2)
* Merge pull request #2908 from tremble/upgrade_facts (Scott Dodson, 2016-12-05, 1 file, -0/+1)
    upgrade_control_plane.yml: systemd_units.yaml needs the master facts
  * upgrade_control_plane.yml: systemd_units.yaml needs the master facts (Mark Chappell, 2016-12-02, 1 file, -0/+1)
* openshift-master/restart: use openshift.common.hostname instead of inventory_hostname (Mark Chappell, 2016-12-02, 2 files, -2/+2)
    When using a dynamic inventory, inventory_hostname isn't guaranteed to be usable. We should use
    openshift.common.hostname, which already copes with this.
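
A rough sketch of the pattern; the task, service name and group are illustrative assumptions, not the actual contents of the restart playbook:

    # Illustrative: resolve the delegation target from gathered openshift facts
    # rather than inventory_hostname, which a dynamic inventory may not render
    # as a reachable name.
    - name: Restart the master service on each master
      service:
        name: atomic-openshift-master
        state: restarted
      delegate_to: "{{ hostvars[item].openshift.common.hostname }}"
      with_items: "{{ groups['masters'] | default([]) }}"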
* Explicitly set etcd vars for byo scaleup (Samuel Munilla, 2016-11-30, 1 file, -0/+2)
    Fixes #2738
* Cleanup ovs file and restart docker on every upgrade. (Devan Goodwin, 2016-11-30, 6 files, -38/+69)
    In 3.3 one of our services lays down a systemd drop-in for configuring Docker networking to use lbr0.
    In 3.4 this has been changed, but the file must be cleaned up manually by us. However, after removing
    the file docker requires a restart. This had big implications, particularly in containerized
    environments where upgrade is a very fragile series of upgrades and service restarts.

    To avoid double docker restarts, and thus double service restarts in containerized environments,
    this change does the following:
      - Skip restart during docker upgrade, if it is required. We will restart on our own later.
      - Skip containerized service restarts when we upgrade the services themselves.
      - Clean shutdown of all containerized services.
      - Restart Docker. (always; previously this only happened if it needed an upgrade)
      - Ensure all containerized services are restarted.
      - Restart rpm node services. (always)
      - Mark node schedulable again.

    At the end of this process, docker0 should be back on the system.
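
A minimal sketch of that restart ordering as playbook tasks, assuming illustrative service names and drop-in path (not the actual tasks from the commit):

    # Illustrative ordering only: stop containerized services cleanly, remove the
    # stale docker networking drop-in, restart docker once, then start services.
    - name: Stop containerized node services
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - atomic-openshift-node
        - openvswitch

    - name: Remove the stale docker networking drop-in    # path is an assumption
      file:
        path: /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf
        state: absent

    - name: Restart docker (always)
      service:
        name: docker
        state: restarted

    - name: Start containerized node services again
      service:
        name: "{{ item }}"
        state: started
      with_items:
        - openvswitch
        - atomic-openshift-node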
* Merge pull request #2855 from detiber/updateSchedulerDefaults (Scott Dodson, 2016-11-29, 6 files, -6/+20)
    Update scheduler defaults
  * update tests and flake8/pylint fixes (Jason DeTiberus, 2016-11-29, 3 files, -5/+6)
  * fix tagging (Jason DeTiberus, 2016-11-29, 1 file, -0/+2)
  * do not report changed for group mapping (Jason DeTiberus, 2016-11-29, 2 files, -1/+12)
* Ansible version check update (Tim Bielawa, 2016-11-29, 1 file, -3/+3)
    We require ansible >= 2.2.0 now. Updating the version checking playbook to reflect this change.
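
A minimal sketch of such a check, assuming a standalone playbook run against localhost; the actual playbook's structure and failure message may differ:

    # Illustrative: fail fast when the control host runs an Ansible release
    # older than the supported minimum.
    - name: Verify Ansible version is >= 2.2.0
      hosts: localhost
      connection: local
      gather_facts: false
      tasks:
        - fail:
            msg: "Ansible 2.2.0 or newer is required; found {{ ansible_version.full }}"
          when: not (ansible_version.full | version_compare('2.2.0', '>='))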
* Merge pull request #2880 from mtnbikenc/docker-dup (Jason DeTiberus, 2016-11-29, 1 file, -1/+0)
    Remove duplicate when key
  * Remove duplicate when key (Russell Teague, 2016-11-29, 1 file, -1/+0)
* Merge pull request #2831 from dgoodwin/upgrade-ordering (Scott Dodson, 2016-11-29, 2 files, -4/+4)
    Fix rare failure to deploy new registry/router after upgrade.
  * Fix rare failure to deploy new registry/router after upgrade. (Devan Goodwin, 2016-11-21, 2 files, -4/+4)
      Router/registry update and re-deploy was recently reordered to immediately follow control plane
      upgrade, right before we proceed to node upgrade. In some situations (small or single-host clusters)
      it appears possible that the deployer pods are running when the node in question is evacuated for
      upgrade. When the deployer pod dies, the deployment is failed and the router/registry continue
      running the old version, despite the deployment config being updated correctly.

      This change re-orders the router/registry upgrade to follow node upgrade. However, for a separate
      control plane upgrade, the router/registry upgrade still occurs at the end. This is because the
      router/registry seem like they should logically be included in a control plane upgrade, and
      presumably the user will not manually launch node upgrade so quickly as to trigger an evacuation
      on the node in question.

      The workaround for this problem when it does occur is simply to:
          oc deploy docker-registry --latest
* etcd upgrade playbook is not currently applicable to embedded etcd installs (Scott Dodson, 2016-11-28, 1 file, -0/+3)
    Fixes Bug 1395945
* Merge pull request #2872 from dgoodwin/etcd-embedded-backup (Scott Dodson, 2016-11-28, 1 file, -1/+1)
    Fix invalid embedded etcd fact in etcd upgrade playbook.
  * Fix invalid embedded etcd fact in etcd upgrade playbook. (Devan Goodwin, 2016-11-28, 1 file, -1/+1)
      Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
      Was getting a different failure here complaining that openshift was not in the facts, as we had not
      loaded facts for the first master during the playbook run. However this check was used recently in
      upgrade_control_plane and should be more reliable.
* Merge pull request #2858 from lhuard1A/fix_list_after_create_on_libvirt_and_openstack (Jason DeTiberus, 2016-11-28, 2 files, -0/+13)
    Fix the list done after cluster creation on libvirt and OpenStack
  * Fix the list done after cluster creation on libvirt and OpenStack (Lénaïc Huard, 2016-11-24, 2 files, -0/+13)
      The `list.yml` playbooks are using cloud provider specific variables to find the IPs of the VMs
      since 82449c6. Those "cloud provider specific" variables are the ones provided by the dynamic
      inventories. But there was a problem when the `list.yml` playbooks are invoked from the `launch.yml`
      ones because, in that case, the inventory is not coming from the dynamic inventory scripts, but
      from the `add_host` done inside `launch_instances.yml`. Whereas the GCE and AWS
      `launch_instances.yml` were correctly adding in the `add_host` the variables used by `list.yml`,
      libvirt and OpenStack were missing that.
      Fixes #2856
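
A hedged sketch of the kind of add_host call this implies for the libvirt/OpenStack launch_instances.yml; the group name and the variable names consumed by list.yml are assumptions:

    # Illustrative: register the freshly created VM in the in-memory inventory
    # with the same variables the dynamic inventory would normally provide,
    # so list.yml can find its IP when invoked from launch.yml.
    - name: Add the new instances to the inventory
      add_host:
        name: "{{ item.name }}"
        groups: oo_hosts_to_config            # group name is an assumption
        ansible_ssh_host: "{{ item.ip }}"
        public_v4: "{{ item.public_ip }}"     # variables read by list.yml (assumed)
        private_v4: "{{ item.ip }}"
      with_items: "{{ created_instances }}"   # hypothetical registered result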
* Merge pull request #2836 from abutcher/BZ1393645 (Scott Dodson, 2016-11-28, 2 files, -0/+30)
    Merge admission plugin configs
  * Merge kube_admission_plugin_config with admission_plugin_config (Samuel Munilla, 2016-11-22, 2 files, -0/+30)
      Move the values in kube_admission_plugin_config up one level, per the new format from 1.3:
      "The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved and merged into
      admissionConfig.pluginConfig."
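
A before/after illustration of that move in master-config.yaml; the plugin shown is only an example, not something taken from this commit:

    # Before (old format): pluginConfig nested under kubernetesMasterConfig.
    kubernetesMasterConfig:
      admissionConfig:
        pluginConfig:
          ClusterResourceOverride:            # example plugin, illustrative only
            configuration:
              apiVersion: v1
              kind: ClusterResourceOverrideConfig
              memoryRequestToLimitPercent: 25

    # After (1.3+ format): the same pluginConfig merged into the top-level admissionConfig.
    admissionConfig:
      pluginConfig:
        ClusterResourceOverride:
          configuration:
            apiVersion: v1
            kind: ClusterResourceOverrideConfig
            memoryRequestToLimitPercent: 25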
* Reference master binaries when delegating from node hosts which may be containerized. (Andrew Butcher, 2016-11-22, 1 file, -4/+4)
* Merge pull request #2771 from stevekuznetsov/skuznets/network-manager (Scott Dodson, 2016-11-22, 1 file, -0/+36)
    Added a BYO playbook for configuring NetworkManager on nodes
  * Added a BYO playbook for configuring NetworkManager on nodes (Steve Kuznetsov, 2016-11-22, 1 file, -0/+36)
      In order to do a full install of OpenShift using the byo/config.yml playbook, it is currently
      required that NetworkManager be installed and configured on the nodes prior to the installation.
      This playbook introduces a very simple default configuration that can be used to install, configure
      and enable NetworkManager on the nodes.
      Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
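
A minimal sketch of what such a BYO playbook can look like; the host group and the absence of any NetworkManager configuration tweaks are assumptions:

    # Illustrative: install, enable and start NetworkManager on the node hosts
    # before running byo/config.yml.
    - name: Configure NetworkManager on nodes
      hosts: nodes                  # group name is an assumption
      become: yes
      tasks:
        - name: Install NetworkManager
          package:
            name: NetworkManager
            state: present

        - name: Enable and start NetworkManager
          service:
            name: NetworkManager
            state: started
            enabled: yes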
* Merge pull request #2818 from mtnbikenc/package-refactor (Scott Dodson, 2016-11-21, 5 files, -10/+11)
    Refactor to use Ansible package module
  * Refactor to use Ansible package module (Russell Teague, 2016-11-17, 5 files, -10/+11)
      The Ansible package module will call the correct package manager for the underlying OS.
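
An illustration of the refactor on a hypothetical task (not one of the five files touched here):

    # Before: hard-wired to yum, breaks on dnf- or apt-based hosts.
    - name: Install etcd
      yum:
        name: etcd
        state: present

    # After: the generic package module delegates to whichever package
    # manager the target OS actually uses.
    - name: Install etcd
      package:
        name: etcd
        state: present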
* Merge pull request #2827 from abutcher/BZ1377619 (Jason DeTiberus, 2016-11-21, 2 files, -33/+7)
    Allow ansible to continue when a node is inaccessible or fails.
  * Delegate openshift_manage_node tasks to master host. (Andrew Butcher, 2016-11-21, 1 file, -32/+2)
  * Allow ansible to continue when a node is inaccessible or fails. (Andrew Butcher, 2016-11-18, 1 file, -1/+5)
* Merge pull request #2820 from dgoodwin/yum-check-skip-atomic (Scott Dodson, 2016-11-18, 1 file, -1/+2)
    Fix yum/subman version check on Atomic.
  * Fix yum/subman version check on Atomic. (Devan Goodwin, 2016-11-17, 1 file, -1/+2)
* Escape LOGNAME variable according to GCE rules (Jacek Suchenia, 2016-11-18, 1 file, -1/+1)
* Merge pull request #2734 from dougbtv/openstack_timeout_option (Jason DeTiberus, 2016-11-16, 2 files, -1/+3)
    [openstack] allows timeout option for heat create stack
  * [openstack] allows timeout option for heat create stack (dougbtv, 2016-11-05, 2 files, -1/+3)
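
A hedged sketch of plumbing such a timeout through to Heat; the variable name, stack name and exact invocation are assumptions, and the real playbook may drive Heat differently:

    # Illustrative: pass an optional stack-creation timeout (in minutes)
    # through to the heat CLI when creating the cluster stack.
    - name: Create the OpenStack Heat stack
      command: >
        heat stack-create openshift-cluster
        -f files/heat_stack.yaml
        --timeout {{ openstack_heat_timeout | default(60) }}
        -P cluster_id={{ cluster_id }}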
* Merge pull request #2815 from dgoodwin/yumCheck (Scott Dodson, 2016-11-16, 1 file, -0/+16)
    Check for bad versions of yum and subscription-manager.
  * Check for bad versions of yum and subscription-manager. (Devan Goodwin, 2016-11-16, 1 file, -0/+16)
      Use of yum and repoquery will output the given additional warning when using newer versions of
      subscription-manager with older versions of yum (RHEL 7.1). Installing/upgrading newer docker can
      pull this subscription-manager in, resulting in problems with older versions of ansible and its
      yum module, as well as any use of repoquery/yum commands in our playbooks.

      This change explicitly checks for the problem by using repoquery and fails early if found. This is
      run early in both config and upgrade.
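
A hedged sketch of such an early check; the repoquery invocation and the warning text being matched are assumptions, not the actual contents of the commit:

    # Illustrative: run a harmless repoquery and fail fast if it emits the
    # plugin/API warning produced by a too-new subscription-manager paired
    # with an old yum.
    - name: Probe repoquery for the yum/subscription-manager warning
      command: repoquery --plugins --whatprovides yum
      register: repoquery_probe
      failed_when: false
      changed_when: false

    - name: Fail early on a bad yum/subscription-manager combination
      fail:
        msg: "Incompatible yum and subscription-manager detected; update yum before continuing."
      when: "'requires API' in repoquery_probe.stderr"   # matched string is an assumption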