* Default openshift_pkg_version to full version-release during upgrades (Scott Dodson, 2017-09-20; 3 files, -6/+17)
  Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1490677. The versioning scheme for 3.7
  pre-releases has changed: every build is now version 3.7.0 and only the release field is
  incremented, e.g. 3.7.0-0.124.0 upgrades to 3.7.0-0.125.0. If we know we are performing an
  upgrade and no specific package version has been requested, defer the defaulting of
  openshift_pkg_version to the upgrade playbooks and there set it to the available version,
  including the release.

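  A minimal Ansible sketch of the deferred defaulting described above; the fact name
  openshift_available_pkg_version_release is an assumption used for illustration, not
  necessarily the variable the upgrade playbooks use:

  ```yaml
  # Hypothetical sketch: only default openshift_pkg_version during an upgrade,
  # and include the full version-release (e.g. -3.7.0-0.125.0) when doing so.
  - name: Default openshift_pkg_version to the available version-release
    set_fact:
      openshift_pkg_version: "-{{ openshift_available_pkg_version_release }}"
    when: openshift_pkg_version is not defined
  ```
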
* Merge pull request #5378 from mgugino-upstream-stage/cleanup-deployment-types (OpenShift Merge Robot, 2017-09-20; 21 files, -121/+107)
  Automatic merge from submit-queue.
  Cleanup old deployment types. Previously, openshift-ansible supported various types of
  deployments through the variable "openshift_deployment_type". Currently, openshift-ansible
  only supports two deployment types, "origin" and "openshift-enterprise". This commit removes
  all logic and references to deprecated deployment types.

  * Cleanup old deployment types (Michael Gugino, 2017-09-20; 21 files, -121/+107)

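  With only two deployment types remaining, inventory validation reduces to a membership check;
  a hedged sketch (the task and message wording are illustrative, not the role's actual task):

  ```yaml
  # Illustrative sketch: only "origin" and "openshift-enterprise" remain valid.
  - name: Validate openshift_deployment_type
    fail:
      msg: "openshift_deployment_type must be 'origin' or 'openshift-enterprise'"
    when: openshift_deployment_type not in ['origin', 'openshift-enterprise']
  ```
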
* Merge pull request #5322 from mtnbikenc/proposal-playbook-consolidation (Scott Dodson, 2017-09-20; 1 file, -0/+178)
  [Proposal] OpenShift-Ansible Playbook Consolidation

  * Rework openshift-cluster into deploy_cluster.yml (Russell Teague, 2017-09-13; 1 file, -20/+15)

  * [Proposal] OpenShift-Ansible Playbook Consolidation (Russell Teague, 2017-09-06; 1 file, -0/+183)

* Merge pull request #3753 from soltysh/issue12558 (OpenShift Merge Robot, 2017-09-20; 2 files, -0/+41)
  Automatic merge from submit-queue.
  Increase rate limiting in journald.conf. @sdodson ptal; this addresses the issues reported in
  https://github.com/openshift/origin/issues/12558. @smarterclayton @stevekuznetsov fyi.

  * Increase rate limiting in journald.conf (Maciej Szulik, 2017-09-12; 2 files, -0/+41)

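  A hedged sketch of how the journald rate limits might be raised from Ansible; RateLimitInterval
  and RateLimitBurst are standard journald.conf settings, but the values shown are examples
  rather than the ones this commit applies:

  ```yaml
  # Illustrative sketch: raise journald's rate limits so bursts of node and
  # container logs are not dropped; restart systemd-journald afterwards.
  - name: Raise journald rate limits
    lineinfile:
      path: /etc/systemd/journald.conf
      regexp: "^#?{{ item.key }}="
      line: "{{ item.key }}={{ item.value }}"
    with_items:
      - { key: RateLimitInterval, value: 1s }
      - { key: RateLimitBurst, value: 10000 }
  ```
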
* Merge pull request #3778 from lhuard1A/rh_subscription_resilient (OpenShift Merge Robot, 2017-09-19; 1 file, -0/+6)
  Automatic merge from submit-queue.
  Make RH subscription more resilient to temporary failures. subscription-manager can sometimes
  fail because of server-side errors, and manually replaying the command usually works. So let's
  make openshift-ansible more resilient to temporary failures of subscription-manager by
  retrying the failed commands, with a maximum of 3 retries. Here is an example of such sporadic
  errors:
  ```
  TASK [rhel_subscribe : Retrieve the OpenShift Pool ID] *************************
  ok: [lenaic-node-compute-c96e7]
  ok: [lenaic-master-bbe09]
  ok: [lenaic-node-compute-2976a]
  fatal: [lenaic-node-infra-47ba5]: FAILED! => {"changed": false, "cmd": ["subscription-manager", "list", "--available", "--matches=Red Hat OpenShift Container Platform, Premium*", "--pool-only"], "delta": "0:00:07.152650", "end": "2017-04-04 11:24:59.729405", "failed": true, "rc": 70, "start": "2017-04-04 11:24:52.576755", "stderr": "Unable to verify server's identity: (104, 'Connection reset by peer')", "stdout": "", "stdout_lines": [], "warnings": []}

  TASK [rhel_subscribe : Determine if OpenShift Pool Already Attached] ***********
  skipping: [lenaic-master-bbe09]
  skipping: [lenaic-node-compute-2976a]
  skipping: [lenaic-node-compute-c96e7]

  TASK [rhel_subscribe : fail] ***************************************************
  skipping: [lenaic-node-compute-2976a]
  skipping: [lenaic-master-bbe09]
  skipping: [lenaic-node-compute-c96e7]

  TASK [rhel_subscribe : Attach to OpenShift Pool] *******************************
  fatal: [lenaic-node-compute-c96e7]: FAILED! => {"changed": true, "cmd": ["subscription-manager", "subscribe", "--pool", "8a85f9814ff0134a014ff43b44095513"], "delta": "0:00:21.421300", "end": "2017-04-04 11:25:20.655873", "failed": true, "rc": 70, "start": "2017-04-04 11:24:59.234573", "stderr": "Unable to verify server's identity: (104, 'Connection reset by peer')", "stdout": "Successfully attached a subscription for: Red Hat OpenShift Container Platform, Premium (1-2 Sockets)", "stdout_lines": ["Successfully attached a subscription for: Red Hat OpenShift Container Platform, Premium (1-2 Sockets)"], "warnings": []}
  changed: [lenaic-master-bbe09]
  changed: [lenaic-node-compute-2976a]
  ```
  In this example, subscription-manager failed on some nodes but not all. Retrying on the failed
  nodes would have avoided abandoning them.

  * Make RH subscription more resilient to temporary failures (Lénaïc Huard, 2017-05-02; 1 file, -0/+6)

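  A minimal sketch of the retry pattern (register/until/retries/delay) applied to one of the
  failing commands above; the task name and registered variable are placeholders rather than the
  rhel_subscribe role's exact contents:

  ```yaml
  # Illustrative sketch: retry a flaky subscription-manager call up to 3 times.
  - name: Retrieve the OpenShift Pool ID
    command: >
      subscription-manager list --available
      --matches="Red Hat OpenShift Container Platform, Premium*" --pool-only
    register: openshift_pool_id
    until: openshift_pool_id.rc == 0
    retries: 3
    delay: 10
  ```
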
* Merge pull request #5380 from mgugino-upstream-stage/fix-openshift-version-pkg-install (OpenShift Merge Robot, 2017-09-19; 3 files, -1/+14)
  Automatic merge from submit-queue.
  Only install base openshift package on masters and nodes. Recent refactoring to remove
  openshift_common resulted in the base openshift RPMs being installed on more hosts than
  before, which leaves hosts that would otherwise not need access to OpenShift repositories
  requiring them. This patch set results in only OpenShift masters and nodes having the base
  openshift package installed.

  * Only install base openshift package on masters and nodes (Michael Gugino, 2017-09-12; 3 files, -1/+14)

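  One way such a restriction can be expressed, sketched with assumed group names
  (oo_masters_to_config, oo_nodes_to_config) and the package name of an origin deployment; the
  role's actual tasks may differ:

  ```yaml
  # Illustrative sketch: install the base package only on masters and nodes.
  - name: Install base OpenShift package
    package:
      name: "origin{{ openshift_pkg_version | default('') }}"
      state: present
    when: >
      inventory_hostname in groups['oo_masters_to_config'] | default([]) or
      inventory_hostname in groups['oo_nodes_to_config'] | default([])
  ```
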
* Merge pull request #5464 from sosiouxme/20170919-repoquery-bz1482551 (OpenShift Merge Robot, 2017-09-19; 2 files, -1/+5)
  Automatic merge from submit-queue.
  repoquery bz1482551 follow-up: add retries on the repoqueries missed in
  https://github.com/openshift/openshift-ansible/pull/5401.

  * more retries on repoquery_cmd (Luke Meyer, 2017-09-19; 2 files, -1/+5)

* Merge pull request #5416 from wozniakjan/bug1491636/honor_ops_nodeselector (OpenShift Merge Robot, 2017-09-19; 1 file, -0/+2)
  Automatic merge from submit-queue.
  Bug 1491636 - honor openshift_logging_es_ops_nodeselector
  https://bugzilla.redhat.com/show_bug.cgi?id=1491636

  * Bug 1491636 - honor openshift_logging_es_ops_nodeselector (Jan Wozniak, 2017-09-14; 1 file, -0/+2)

* Merge pull request #5274 from sosiouxme/20170828-checks-save-results (OpenShift Merge Robot, 2017-09-19; 15 files, -82/+430)
  Automatic merge from submit-queue.
  openshift_checks: enable writing results to files. An iteration on how to record check results
  in a directory structure readable by machines and humans. Some refactoring of checks and the
  action plugin enables writing files locally about the check operation and results, if the user
  wants them. This is aimed at enabling persistent and machine-readable results from recurring
  runs of health checks.
  Now, rather than trying to build a result hash to return from running each check, checks can
  simply register what they need to as they go along, and the action plugin processes the state
  when the check is done. Checks can register failures, notes about what they saw, and arbitrary
  files to be saved into a directory structure the user specifies. If no directory is specified,
  no files are written. At this time checks can still return a result hash, but that will likely
  be refactored away in the next iteration. Multiple failures can be registered without halting
  check execution; throwing an exception or returning a hash with "failed" is registered as a
  failure.
  execute_module now does a little more with the results: results are automatically included in
  notes and written individually as files, "changed" results are propagated, and some JSON
  results are decoded. A few of the checks were enhanced to use these features; all get some of
  the features for free.
  Action items:
  - [x] Provide a way for the user to specify an output directory where they want results written
  - [x] Enable a check to register multiple failures and not have to assemble them in the result
  - [x] Enable a check to register "notes" that will be saved to files but not displayed
  - [x] Have module invocations recorded individually as well as in notes
  - [x] Enable a check to register files (logs, etc.) from the remote host that are to be copied to the output dir
  - [x] Enable a check to register arbitrary file contents that are to be written to output
  - [ ] Take advantage of these features where possible in checks
  (The last item is done in part; more should happen as we go along.)

  * openshift_checks: enable providing file outputs (Luke Meyer, 2017-09-18; 15 files, -82/+430)

* Merge pull request #5450 from ingvagabund/fix-etcd-backup-msg-error (Jan Chaloupka, 2017-09-19; 1 file, -1/+1)
  Fix etcd backup msg error

  * fix etcd backup message error (Jan Chaloupka, 2017-09-19; 1 file, -1/+1)

* Merge pull request #5156 from mangirdaz/5155-hotfix (OpenShift Merge Robot, 2017-09-18; 1 file, -1/+1)
  Automatic merge from submit-queue.
  Hot fix for env variable resolve. If we use environment variables in our inventory files (and
  from what I have seen we do this everywhere we deploy OCP), our fact engine ignores those
  variables. So if my inventory looks like
  ```
  openshift_hosted_registry_routecertificates={"certfile": "{{inventory_dir}}/../files/certs/wildcard.registry.company.local.crt", "keyfile": "{{inventory_dir}}/../files/certs/wildcard.registry.companylocal.key", "cafile":"{{inventory_dir}}/../files/certs/CompanyLocalRootCA.crt"}
  openshift_hosted_registry_routehost=containers.registry.comany.local
  ```
  the result is `/../files/certs/RoSLocalRootCA.crt`. In the long run we need to fix our fact
  setting to read Ansible variables. The same was already done for the router certificates.

  * hot fix for env variable resolve (Mangirdas, 2017-08-22; 1 file, -1/+1)

* Merge pull request #5441 from mgugino-upstream-stage/fix-reg-auth (OpenShift Merge Robot, 2017-09-18; 2 files, -4/+4)
  Automatic merge from submit-queue.
  Fix registry auth task ordering. Currently, registry authentication credentials are not
  produced until after the docker systemd service files are created. This commit ensures the
  credentials are created before the systemd service files, so that the proper boolean is set to
  include the read-only mount of credentials inside containerized nodes and masters.
  Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1316341

  * Fix registry auth task ordering (Michael Gugino, 2017-09-18; 2 files, -4/+4)

* Merge pull request #5439 from zgalor/prometheus-fixes (OpenShift Merge Robot, 2017-09-18; 2 files, -2/+5)
  Automatic merge from submit-queue.
  Prometheus role fixes:
  - Use official prometheus-alert-buffer image
  - Add prometheus annotations to service

  * Prometheus role fixes (Zohar Galor, 2017-09-18; 2 files, -2/+5)

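  The service annotations in question are presumably the conventional Prometheus scrape hints; a
  hedged Kubernetes sketch in which the service name, selector, and port are illustrative rather
  than taken from the role:

  ```yaml
  # Illustrative sketch: annotations that let Prometheus discover and scrape
  # a service; name and port here are examples only.
  apiVersion: v1
  kind: Service
  metadata:
    name: prometheus-alert-buffer
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9099"
  spec:
    selector:
      app: prometheus
    ports:
      - port: 9099
        targetPort: 9099
  ```
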
* Merge pull request #5430 from ashcrow/always-required-new-variables (OpenShift Merge Robot, 2017-09-18; 6 files, -20/+45)
  Automatic merge from submit-queue.
  Always require new variables. Related to https://bugzilla.redhat.com/show_bug.cgi?id=1451023

  * papr: Update inventory to include required vars (Steve Milner, 2017-09-15; 1 file, -0/+3)
    Signed-off-by: Steve Milner <smilner@redhat.com>

  * testing: Skip net vars on integration tests (Steve Milner, 2017-09-15; 2 files, -2/+7)
    Signed-off-by: Steve Milner <smilner@redhat.com>

  * inventory: Update network variable doc (Steve Milner, 2017-09-15; 2 files, -4/+20)
    Signed-off-by: Steve Milner <smilner@redhat.com>

  * openshift_sanitize_inventory: Check for required vars (Steve Milner, 2017-09-15; 2 files, -15/+16)
    Moved the checks for osm_cluster_network_cidr, osm_host_subnet_length, and
    openshift_portal_net from upgrade to openshift_sanitize_inventory, as these are now
    considered required variables for install, upgrade, or scale-up.
    Signed-off-by: Steve Milner <smilner@redhat.com>

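  A hedged sketch of what such a required-variable check could look like in
  openshift_sanitize_inventory; the error message and loop shape are illustrative, not the
  role's exact tasks:

  ```yaml
  # Illustrative sketch: fail early when a now-required network variable is unset.
  - name: Ensure required network variables are set
    fail:
      msg: "{{ item }} is required for install, upgrade, or scale-up"
    when: hostvars[inventory_hostname][item] is not defined
    with_items:
      - osm_cluster_network_cidr
      - osm_host_subnet_length
      - openshift_portal_net
  ```
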
* Merge pull request #5237 from smarterclayton/gce (OpenShift Merge Robot, 2017-09-18; 7 files, -0/+570)
  Automatic merge from submit-queue.
  Port origin-gce roles for cluster setup to copy AWS provisioning. This is a rough cut of the
  existing origin-gce structure (itself a refined version of the reference architecture). I've
  removed everything except core cluster provisioning, image building, and inventory setup. Node
  groups are part of the "all at once" provisioning but can be changed.
  @kwoodson we should talk on Monday; this is me adapting the origin-gce dynamic provisioning to
  be roughly parallel to openshift_aws. Still some topics we should discuss.

  * Port origin-gce roles for cluster setup to copy AWS provisioning (Clayton Coleman, 2017-09-14; 7 files, -0/+570)

* Merge pull request #5392 from ingvagabund/pull-openshift_master-deps-out-into-a-play (OpenShift Merge Robot, 2017-09-18; 2 files, -31/+22)
  Automatic merge from submit-queue.
  Pull openshift_master deps out into a play. The openshift_master role is called in only a
  single play, so we can pull out all of its dependencies without duplicating the dependency
  role invocations. Both lib_openshift and lib_os_firewall are required dependencies, as they
  define Ansible modules used inside the openshift_master role. I have also rearranged the
  variable definitions so that variables used only inside a single role are part of the
  include_role statement. At the moment, we can't use include_role due to
  https://github.com/ansible/ansible/issues/21890

  * pull openshift_master deps out into a play (Jan Chaloupka, 2017-09-13; 2 files, -31/+22)

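  A sketch of the resulting play shape, assuming the module-providing roles are simply listed
  ahead of openshift_master in the masters play; the group name and role ordering are
  illustrative:

  ```yaml
  # Illustrative sketch: list the module-providing roles once in the play
  # instead of declaring them as meta dependencies of openshift_master.
  - name: Configure masters
    hosts: oo_masters_to_config
    roles:
      - role: lib_openshift     # provides modules used by openshift_master
      - role: lib_os_firewall   # provides the firewall module
      - role: openshift_master
  ```
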
* Merge pull request #5431 from sosiouxme/20170915-system-container-cwd (OpenShift Merge Robot, 2017-09-18; 2 files, -2/+2)
  Automatic merge from submit-queue.
  Update system container cwd. This changes the cwd for the system container to the base of the
  openshift-ansible content. This way the playbook can be specified as a relative path, and in
  the future, when we drop the symlinks for various plugins and rely on cwd to find them, this
  will still work. Looking through the Dockerfile side of things, I noticed that the run script
  changes directories to WORK_DIR, which is the content base, so this change brings the two
  methods closer together.
  I was also looking for anything that writes to the current directory (which is $HOME at the
  beginning of the run script) and found one: the vault password. It seemed slightly more robust
  to write that to a temporary location instead, so I tacked on a commit to do that as well.

  * installer image: use tmp file for vaultpass (Luke Meyer, 2017-09-15; 1 file, -1/+1)

  * system container: use ansible root as cwd (Luke Meyer, 2017-09-15; 1 file, -1/+1)

* Merge pull request #5334 from juanluisvaladas/move-sysctl (OpenShift Merge Robot, 2017-09-16; 2 files, -11/+5)
  Automatic merge from submit-queue.
  Move sysctl.conf customizations to a separate file: from /etc/sysctl.conf to
  /etc/sysctl.d/99-openshift.conf. This is a good idea because:
  1. /etc/sysctl.conf is evaluated later, so it can easily be overwritten by previous
     customizations
  2. It's likely that there is an agent like Puppet monitoring this file
  3. It's easier to know what's being changed by OpenShift

  * Move sysctl.conf customizations to a separate file (Juan Luis de Sousa-Valadas Castaño, 2017-09-08; 2 files, -11/+5)
    Move them from /etc/sysctl.conf to /etc/sysctl.d/99-openshift.conf

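  A hedged sketch of the drop-in-file approach using Ansible's sysctl module and its sysctl_file
  option; the key and value shown are examples, not the full set the role manages:

  ```yaml
  # Illustrative sketch: write OpenShift's sysctl customizations to their own
  # drop-in file rather than appending to /etc/sysctl.conf.
  - name: Enable IP forwarding for OpenShift
    sysctl:
      name: net.ipv4.ip_forward
      value: "1"
      sysctl_file: /etc/sysctl.d/99-openshift.conf
      reload: yes
  ```
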
* Merge pull request #5345 from smarterclayton/firewall (OpenShift Merge Robot, 2017-09-15; 1 file, -1/+3)
  Automatic merge from submit-queue.
  Add `openshift_node_open_ports` to allow arbitrary firewall exposure. It should be possible
  for an admin to define an arbitrary set of ports to be exposed on each node that relate to the
  cluster function. This adds a new global variable for the node that supports
  Array(Object{'service':<name>,'port':<port_spec>,'cond':<boolean>}), which is the same format
  accepted by the firewall role.
  @sdodson as discussed, open to alternatives. I used this from origin-gce with

      openshift_node_open_ports:
      - service: Router stats
        port: 1936/tcp
      - service: Open node ports
        port: 9000-10000/tcp
      - service: Open node ports
        port: 9000-10000/udp

  which then allows me to set firewall rules appropriately. Alternatives considered:
  - Simpler external format (have to parse inputs)
  - Additional parameter to the role (felt ugly)

  * Add `openshift_node_open_ports` to allow arbitrary firewall exposure (Clayton Coleman, 2017-09-11; 1 file, -1/+3)

* Merge pull request #5407 from sdodson/bz1490739 (OpenShift Merge Robot, 2017-09-15; 1 file, -1/+1)
  Automatic merge from submit-queue.
  Only attempt to start iptables on hosts in the current batch. If the os_firewall role is
  called from within a play that uses serial, it was attempting to start iptables on hosts that
  may not have had iptables installed on them yet, so limit the hosts to the current batch.
  According to the Ansible docs, in plays where serial is unused this is the same as
  ansible_play_hosts. See http://docs.ansible.com/ansible/latest/playbooks_variables.html
  Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1490739

  * Only attempt to start iptables on hosts in the current batch (Scott Dodson, 2017-09-13; 1 file, -1/+1)

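  A minimal sketch of the batch-limiting idea using Ansible's ansible_play_batch magic variable;
  the surrounding task is an assumption about the role's shape, not its literal contents:

  ```yaml
  # Illustrative sketch: when one task starts iptables across hosts via
  # delegation, iterate over the current serial batch, not all play hosts.
  - name: Start and enable iptables
    service:
      name: iptables
      state: started
      enabled: yes
    delegate_to: "{{ item }}"
    run_once: true
    with_items: "{{ ansible_play_batch }}"
  ```
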
* Merge pull request #5427 from adelton/issue-2454-2 (Scott Dodson, 2017-09-15; 1 file, -5/+0)
  No conversion to boolean and no quoting for include_granted_scopes.

  * No conversion to boolean and no quoting for include_granted_scopes. (Jan Pazdziora, 2017-09-15; 1 file, -5/+0)

* Merge pull request #5425 from mtnbikenc/fix-openshift-nfs (Scott Dodson, 2017-09-15; 2 files, -1/+7)
  1491657 Correct firewall install for openshift-nfs

  * Correct firewall install for openshift-nfs (Russell Teague, 2017-09-15; 2 files, -1/+7)

* Merge pull request #5401 from sosiouxme/20170913-retries-subset (Luke Meyer, 2017-09-14; 5 files, -12/+27)
  add retries on repoquery

  * add retry on repoquery_cmd (Luke Meyer, 2017-09-13; 2 files, -1/+3)