path: root/playbooks
Commit log entries: subject (Author, Date, Files changed, Lines -/+)
* Merge pull request #2908 from tremble/upgrade_facts (Scott Dodson, 2016-12-05, 1 file, -0/+1)
  upgrade_control_plane.yml: systemd_units.yaml needs the master facts
* upgrade_control_plane.yml: systemd_units.yaml needs the master facts (Mark Chappell, 2016-12-02, 1 file, -0/+1)
* openshift-master/restart: use openshift.common.hostname instead of inventory_hostname (Mark Chappell, 2016-12-02, 2 files, -2/+2)
  When using a dynamic inventory, inventory_hostname isn't guaranteed to be usable. We should use openshift.common.hostname, which already copes with this.
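  A minimal sketch of the substitution described, with an illustrative task (the task shape and port are assumptions, not the actual diff):
```yaml
# Illustrative only: the point is where the hostname comes from, not this task.
- name: Wait for the master API to come back after restart
  wait_for:
    # was: host: "{{ inventory_hostname }}", which with a dynamic inventory
    # can be an alias or key rather than a reachable hostname
    host: "{{ openshift.common.hostname }}"
    port: 8443        # assumed API port
    state: started
```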
* Explicitly set etcd vars for byo scaleup (Samuel Munilla, 2016-11-30, 1 file, -0/+2)
  Fixes #2738
* Clean up ovs file and restart docker on every upgrade. (Devan Goodwin, 2016-11-30, 6 files, -38/+69)
  In 3.3 one of our services lays down a systemd drop-in for configuring Docker networking to use lbr0. In 3.4 this has been changed, but the file must be cleaned up manually by us. However, after removing the file docker requires a restart. This had big implications, particularly in containerized environments where upgrade is a very fragile series of upgrades and service restarts.
  To avoid double docker restarts, and thus double service restarts in containerized environments, this change does the following (see the sketch after this list):
  - Skip restart during docker upgrade, if it is required. We will restart on our own later.
  - Skip containerized service restarts when we upgrade the services themselves.
  - Clean shutdown of all containerized services.
  - Restart Docker. (Always; previously this only happened if it needed an upgrade.)
  - Ensure all containerized services are restarted.
  - Restart rpm node services. (Always.)
  - Mark node schedulable again.
  At the end of this process, docker0 should be back on the system.
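  An illustrative shape of the middle of that sequence; the drop-in path, variable, and service list are assumptions:
```yaml
- name: Remove the obsolete lbr0 Docker networking drop-in   # path assumed
  file:
    path: /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf
    state: absent

- name: Stop containerized services before the single Docker restart
  service: name={{ item }} state=stopped
  with_items: "{{ containerized_node_services | default([]) }}"  # hypothetical list
  when: openshift.common.is_containerized | bool

- name: Restart Docker (always, not only when it was upgraded)
  service:
    name: docker
    state: restarted

- name: Ensure containerized services are running again
  service: name={{ item }} state=started
  with_items: "{{ containerized_node_services | default([]) }}"
  when: openshift.common.is_containerized | bool
```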
* Merge pull request #2855 from detiber/updateSchedulerDefaults (Scott Dodson, 2016-11-29, 6 files, -6/+20)
  Update scheduler defaults
* update tests and flake8/pylint fixes (Jason DeTiberus, 2016-11-29, 3 files, -5/+6)
* fix tagging (Jason DeTiberus, 2016-11-29, 1 file, -0/+2)
* do not report changed for group mapping (Jason DeTiberus, 2016-11-29, 2 files, -1/+12)
* Ansible version check update (Tim Bielawa, 2016-11-29, 1 file, -3/+3)
  We require ansible >= 2.2.0 now. Update the version-checking playbook to reflect this change.
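  A hedged sketch of what such a check can look like (the repo's actual playbook may differ):
```yaml
- name: Verify the Ansible version on the control host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
  - fail:
      msg: "Unsupported ansible version {{ ansible_version.full }}; 2.2.0 or newer is required."
    when: not ansible_version.full | version_compare('2.2.0', 'ge')
```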
* Merge pull request #2880 from mtnbikenc/docker-dup (Jason DeTiberus, 2016-11-29, 1 file, -1/+0)
  Remove duplicate when key
* Remove duplicate when key (Russell Teague, 2016-11-29, 1 file, -1/+0)
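  For context, YAML keeps only one of two duplicate mapping keys, so a hypothetical task like this silently loses a condition:
```yaml
- name: Example of the bug class being removed (hypothetical task)
  command: /bin/true
  when: first_condition | bool
  when: second_condition | bool  # duplicate key: the loader keeps only one, the other is dropped
```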
* Merge pull request #2831 from dgoodwin/upgrade-ordering (Scott Dodson, 2016-11-29, 2 files, -4/+4)
  Fix rare failure to deploy new registry/router after upgrade.
* Fix rare failure to deploy new registry/router after upgrade. (Devan Goodwin, 2016-11-21, 2 files, -4/+4)
  Router/registry update and re-deploy was recently reordered to immediately follow control plane upgrade, right before we proceed to node upgrade. In some situations (small or single-host clusters) it appears possible that the deployer pods are running when the node in question is evacuated for upgrade. When the deployer pod dies, the deployment is failed and the router/registry continue running the old version, despite the deployment config being updated correctly.
  This change re-orders the router/registry upgrade to follow node upgrade. However, for a separate control plane upgrade, the router/registry upgrade still occurs at the end. This is because the router/registry seem like they should logically be included in a control plane upgrade, and presumably the user will not manually launch node upgrade so quickly as to trigger an evacuation on the node in question.
  The workaround for this problem, when it does occur, is simply to run: oc deploy docker-registry --latest
* etcd upgrade playbook is not currently applicable to embedded etcd installs (Scott Dodson, 2016-11-28, 1 file, -0/+3)
  Fixes Bug 1395945
* Merge pull request #2872 from dgoodwin/etcd-embedded-backup (Scott Dodson, 2016-11-28, 1 file, -1/+1)
  Fix invalid embedded etcd fact in etcd upgrade playbook.
* Fix invalid embedded etcd fact in etcd upgrade playbook. (Devan Goodwin, 2016-11-28, 1 file, -1/+1)
  Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
  We were getting a different failure here complaining that openshift was not in the facts, as we had not loaded facts for the first master during the playbook run. However, this check was used recently in upgrade_control_plane and should be more reliable.
* Merge pull request #2858 from lhuard1A/fix_list_after_create_on_libvirt_and_openstack (Jason DeTiberus, 2016-11-28, 2 files, -0/+13)
  Fix the list done after cluster creation on libvirt and OpenStack
* Fix the list done after cluster creation on libvirt and OpenStack (Lénaïc Huard, 2016-11-24, 2 files, -0/+13)
  The `list.yml` playbooks are using cloud-provider-specific variables to find the IPs of the VMs since 82449c6. Those “cloud provider specific” variables are the ones provided by the dynamic inventories. But there was a problem when the `list.yml` playbooks are invoked from the `launch.yml` ones because, in that case, the inventory is not coming from the dynamic inventory scripts, but from the `add_host` done inside `launch_instances.yml`. Whereas the GCE and AWS `launch_instances.yml` were correctly adding in the `add_host` the variables used by `list.yml`, libvirt and OpenStack were missing that.
  Fixes #2856
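  A minimal sketch of the fix's shape: the launch playbook registers the same hostvars the dynamic inventory script would provide, since `list.yml` reads them. All names here are illustrative:
```yaml
- name: Add launched instances to the in-memory inventory
  add_host:
    name: "{{ item.name }}"
    groups: "{{ new_instance_groups }}"            # hypothetical
    ansible_ssh_host: "{{ item.public_ip }}"       # hypothetical
    # Hostvars that list.yml expects; the OpenStack dynamic inventory
    # would normally supply these, so add_host must too.
    openstack:
      public_v4: "{{ item.public_ip }}"
      private_v4: "{{ item.private_ip }}"
  with_items: "{{ launched_instances }}"           # hypothetical
```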
* Merge pull request #2836 from abutcher/BZ1393645 (Scott Dodson, 2016-11-28, 2 files, -0/+30)
  Merge admission plugin configs
* Merge kube_admission_plugin_config with admission_plugin_config (Samuel Munilla, 2016-11-22, 2 files, -0/+30)
  Move the values in kube_admission_plugin_config up one level, per the new format from 1.3: "The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved and merged into admissionConfig.pluginConfig."
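  In master-config.yaml terms, the move looks roughly like this (plugin name and config body are placeholders):
```yaml
# Before (pre-1.3): plugin config nested under kubernetesMasterConfig
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      ExamplePlugin:
        configuration: {}

# After (1.3+): moved and merged one level up
admissionConfig:
  pluginConfig:
    ExamplePlugin:
      configuration: {}
```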
* Reference master binaries when delegating from node hosts which may be containerized. (Andrew Butcher, 2016-11-22, 1 file, -4/+4)
* Merge pull request #2771 from stevekuznetsov/skuznets/network-manager (Scott Dodson, 2016-11-22, 1 file, -0/+36)
  Added a BYO playbook for configuring NetworkManager on nodes
* Added a BYO playbook for configuring NetworkManager on nodes (Steve Kuznetsov, 2016-11-22, 1 file, -0/+36)
  In order to do a full install of OpenShift using the byo/config.yml playbook, it is currently required that NetworkManager be installed and configured on the nodes prior to the installation. This playbook introduces a very simple default configuration that can be used to install, configure, and enable NetworkManager on the nodes.
  Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
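  A minimal sketch of such a playbook, assuming the package and service are both named NetworkManager:
```yaml
- name: Install and enable NetworkManager on all nodes
  hosts: nodes
  become: yes
  tasks:
  - name: Install NetworkManager
    package:
      name: NetworkManager
      state: present

  - name: Enable and start the service
    service:
      name: NetworkManager
      state: started
      enabled: yes
```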
* Merge pull request #2818 from mtnbikenc/package-refactor (Scott Dodson, 2016-11-21, 5 files, -10/+11)
  Refactor to use Ansible package module
* Refactor to use Ansible package module (Russell Teague, 2016-11-17, 5 files, -10/+11)
  The Ansible package module will call the correct package manager for the underlying OS.
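  The pattern in question, with an illustrative package name:
```yaml
# Before: hard-wired to one package manager
- name: Install etcd
  yum:
    name: etcd
    state: present

# After: package dispatches to yum/dnf/apt based on the detected OS
- name: Install etcd
  package:
    name: etcd
    state: present
```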
* Merge pull request #2827 from abutcher/BZ1377619 (Jason DeTiberus, 2016-11-21, 2 files, -33/+7)
  Allow ansible to continue when a node is inaccessible or fails.
* Delegate openshift_manage_node tasks to master host. (Andrew Butcher, 2016-11-21, 1 file, -32/+2)
* Allow ansible to continue when a node is inaccessible or fails. (Andrew Butcher, 2016-11-18, 1 file, -1/+5)
* Merge pull request #2820 from dgoodwin/yum-check-skip-atomic (Scott Dodson, 2016-11-18, 1 file, -1/+2)
  Fix yum/subman version check on Atomic.
* Fix yum/subman version check on Atomic. (Devan Goodwin, 2016-11-17, 1 file, -1/+2)
* Escape LOGNAME variable according to GCE rules (Jacek Suchenia, 2016-11-18, 1 file, -1/+1)
* Merge pull request #2734 from dougbtv/openstack_timeout_option (Jason DeTiberus, 2016-11-16, 2 files, -1/+3)
  [openstack] allows timeout option for heat create stack
* [openstack] allows timeout option for heat create stack (dougbtv, 2016-11-05, 2 files, -1/+3)
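  One way to express such an option — a sketch passing a timeout through to Heat stack creation; the variable name, stack name, and module usage are assumptions, not necessarily the repo's mechanism:
```yaml
- name: Create the OpenShift cluster Heat stack
  os_stack:
    name: "openshift-ansible-{{ cluster_id }}-stack"        # name illustrative
    template: files/heat_stack.yaml                         # path illustrative
    state: present
    wait: yes
    timeout: "{{ openstack_heat_timeout | default(180) }}"  # the new knob
```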
* Merge pull request #2815 from dgoodwin/yumCheck (Scott Dodson, 2016-11-16, 1 file, -0/+16)
  Check for bad versions of yum and subscription-manager.
* Check for bad versions of yum and subscription-manager. (Devan Goodwin, 2016-11-16, 1 file, -0/+16)
  Use of yum and repoquery will output an additional warning when using newer versions of subscription-manager with older versions of yum (RHEL 7.1). Installing/upgrading newer docker can pull this subscription-manager in, resulting in problems with older versions of ansible and its yum module, as well as any use of repoquery/yum commands in our playbooks. This change explicitly checks for the problem by using repoquery and fails early if found. This is run early in both config and upgrade.
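  A hedged sketch of the early-failure check described; the repoquery invocation and warning string are assumptions:
```yaml
- name: Probe repoquery for the known-bad yum/subscription-manager combination
  command: repoquery --installed --qf '%{version}' yum
  register: repoquery_out
  changed_when: false

- name: Fail early rather than let later yum tasks break obscurely
  fail:
    msg: Incompatible yum and subscription-manager versions detected; update yum/yum-utils first.
  when: "'Warning' in repoquery_out.stderr"   # warning text illustrative
```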
* Merge pull request #2813 from lhuard1A/optimize_list (Jason DeTiberus, 2016-11-16, 4 files, -42/+9)
  Optimize the cloud-specific list.yml playbooks
* Optimize the cloud-specific list.yml playbooks (Lénaïc Huard, 2016-11-16, 4 files, -42/+9)
  Remove the need to gather facts on all VMs in order to list them, and prettify the output of the AWS list the same way it is done for the other cloud providers.
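  The essence of the optimization, sketched: read IPs from the inventory data the dynamic inventory already supplies, without connecting to any VM (group and hostvar names illustrative):
```yaml
- name: List cluster instances without touching them
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
  - debug:
      msg: "{{ item }}: {{ hostvars[item].ansible_ssh_host | default('unknown') }}"
    with_items: "{{ groups['cluster_hosts'] | default([]) }}"
```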
* Merge pull request #2814 from lhuard1A/fix_gce_subnet (Jason DeTiberus, 2016-11-16, 2 files, -543/+1)
  Fix GCE cluster creation
* Fix GCE cluster creation (Lénaïc Huard, 2016-11-16, 2 files, -543/+1)
  Attempting to create a GCE cluster when the `gce.ini` configuration file contains a non-default network leads to the following error:
```
TASK [Launch instance(s)] ******************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Unexpected error attempting to create instance lenaic2-master-74f10, error: {'domain': 'global', 'message': \"Invalid value for field 'resource.networkInterfaces[0]': ''. Subnetwork should be specified for custom subnetmode network\", 'reason': 'invalid'}"}
```
  The `subnetwork` parameter needs to be added and taken into account.
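  A sketch of the fix's shape using the gce module's subnetwork parameter (the surrounding parameters are illustrative):
```yaml
- name: Launch instance(s)
  gce:
    instance_names: "{{ instances }}"
    machine_type: "{{ machine_type }}"
    image: "{{ image }}"
    zone: "{{ zone }}"
    network: "{{ network }}"
    # Pass the subnetwork through; required when the network uses
    # custom subnet mode, harmlessly omitted otherwise.
    subnetwork: "{{ subnetwork | default(omit) }}"
```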
* Merge pull request #2562 from sdodson/etcd3 (Scott Dodson, 2016-11-14, 12 files, -73/+322)
  etcd upgrade playbooks
* Actually upgrade host etcdctl no matter what (Scott Dodson, 2016-11-14, 1 file, -2/+2)
* Make etcd containerized upgrade stepwise (Scott Dodson, 2016-11-14, 2 files, -18/+51)
* Add updates for containerized (Scott Dodson, 2016-11-14, 3 files, -6/+55)
* Add etcd upgrade for RHEL and Fedora (Scott Dodson, 2016-11-14, 9 files, -1/+164)
  On Fedora we just blindly upgrade to the latest. On RHEL we do stepwise upgrades: 2.0 → 2.1 → 2.2 → 2.3 → 3.0.
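  A hedged sketch of the stepwise idea: one include pass per intermediate version, so each step can upgrade, restart, and health-check etcd before the next jump. The file and variable names are assumptions:
```yaml
# In the upgrade playbook: one pass per intermediate version.
- include: etcd_upgrade_step.yml   # hypothetical file
  with_items: ['2.1', '2.2', '2.3', '3.0']
  loop_control:
    loop_var: etcd_upgrade_version

# etcd_upgrade_step.yml (hypothetical) would then do roughly:
# - name: Upgrade etcd rpm by one version
#   yum: name="etcd-{{ etcd_upgrade_version }}*" state=latest
# - name: Restart etcd and verify cluster health before the next step
#   service: name=etcd state=restarted
```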
* Drop /etc/profile.d/etcdctl.sh (Scott Dodson, 2016-11-14, 1 file, -0/+1)
  Includes bash functions for etcdctl2 and etcdctl3, which provide reasonable defaults for etcdctl functions on a host that's configured with openshift_etcd.
* Move backups to a separate file for re-use (Scott Dodson, 2016-11-14, 2 files, -73/+74)
* Uninstall etcd3 package (Scott Dodson, 2016-11-12, 1 file, -0/+2)
* Merge pull request #2794 from dgoodwin/no-fact-cache (Scott Dodson, 2016-11-14, 1 file, -0/+1)
  Fix HA upgrade when fact cache deleted.
* Fix HA upgrade when fact cache deleted. (Devan Goodwin, 2016-11-14, 1 file, -0/+1)
  This variable is referenced in the systemd unit templates; this seems like the easiest and most consistent fix.