Updating docs for Ansible 2.2 requirements
Verify the presence of dbus python binding
|
| | |
| | |
| | |
| | |
| | |
| | | |
While the proper fix is to have it installed by default, this commit
will also permit a better error message when the module is not present
(as when running on Python 3).
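
A minimal sketch of the "installed by default" half of that fix as an Ansible task; the package name varies by distribution and Python version, so it is an assumption here:

```yaml
# Hypothetical task: ensure the dbus python binding is present
# (the package is named python-dbus, dbus-python or python3-dbus
# depending on the distribution and Python version).
- name: Install the dbus python binding
  package:
    name: python-dbus
    state: present
```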
Merge admission plugin configs
Move the values in kube_admission_plugin_config up one level per
the new format from 1.3:
"The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved
and merged into admissionConfig.pluginConfig."
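
For illustration, a sketch of the relocation in the master configuration; the plugin name is a placeholder, not one from this change:

```yaml
# Before (pre-1.3 format): pluginConfig nested under kubernetesMasterConfig
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      ExamplePlugin:                  # placeholder plugin name
        configuration:
          apiVersion: v1
          kind: DefaultAdmissionConfig
          disable: false
---
# After (1.3 format): moved and merged into the top-level admissionConfig
admissionConfig:
  pluginConfig:
    ExamplePlugin:                    # placeholder plugin name
      configuration:
        apiVersion: v1
        kind: DefaultAdmissionConfig
        disable: false
```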
`systemctl show` exits with RC=1 for non-existent services as of systemd v231.
This caused the Ansible systemd module to fail running the `systemctl show`
command instead of reporting that the service was not found. This change
catches both failure modes, on older and newer versions of systemd alike.
The change in systemd exit status may be resolved in systemd v232:
https://github.com/systemd/systemd/commit/3dced37b7c2c9a5c733817569d2bbbaa397adaf7
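
A minimal sketch of that pattern, assuming a probe task; the unit name and the exact strings matched are assumptions:

```yaml
- name: Check whether the service exists
  command: systemctl show example.service    # placeholder unit name
  register: show_result
  changed_when: false
  # Treat "unit not found" as a normal result under both behaviours:
  # older systemd reports rc=0 with LoadState=not-found in the output,
  # while systemd v231 returns a non-zero rc for a non-existent unit.
  failed_when: show_result.rc != 0 and 'not-found' not in show_result.stdout
```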
Fix issues encountered in mixed environments
containerized.
Make os_firewall_manage_iptables run on python3
It fails with this traceback:

Traceback (most recent call last):
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 273, in <module>
    main()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 257, in main
    iptables_manager.add_rule(port, protocol)
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 87, in add_rule
    self.verify_chain()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 82, in verify_chain
    self.create_jump()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 142, in create_jump
    input_rules = [s.split() for s in output.split('\n')]
Refactor os_firewall role
* Remove unneeded tasks duplicated by new module functionality
* Ansible systemd module has 'masked' and 'daemon_reload' options
* Ansible firewalld module has 'immediate' option
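
A sketch of the module options this refers to; the service name and port are placeholders:

```yaml
# One systemd task replaces separate unmask and daemon-reload commands.
- name: Ensure firewalld is unmasked, enabled and running
  systemd:
    name: firewalld
    masked: no
    daemon_reload: yes
    enabled: yes
    state: started

# The firewalld module's 'immediate' option applies a permanent rule to
# the running firewall as well, removing the need for a separate reload.
- name: Open a port                          # placeholder port
  firewalld:
    port: 8443/tcp
    permanent: true
    immediate: true
    state: enabled
```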
Modified the error message being checked for
Add hawkular admin cluster role to management admin
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
On F24 and earlier, `systemctl show` always returned 0. On F25 it
returns 1 when a service does not exist, so the role fails on
Fedora 25 Cloud Edition.
Refactor to use Ansible package module
The Ansible package module will call the correct package manager for the
underlying OS.
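
For illustration, the generic form; the package name is a placeholder:

```yaml
# `package` dispatches to the underlying manager (dnf, yum, apt, ...),
# so the same task works across distributions.
- name: Install a dependency                 # placeholder package name
  package:
    name: iptables-services
    state: present
```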
Only run tuned-adm if tuned exists.
Fedora Atomic Host does not have tuned installed.
Fixes #2809
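
A minimal sketch of that guard; the lookup command and profile name are assumptions:

```yaml
- name: Check whether tuned is installed
  command: which tuned-adm
  register: tuned_installed
  changed_when: false
  failed_when: false

- name: Set the tuned profile                # placeholder profile name
  command: tuned-adm profile virtual-guest
  when: tuned_installed.rc == 0
```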
Allow Ansible to continue when a node is inaccessible or fails.
node_dnsmasq -- Set dnsmasq as our only nameserver
storage/nfs_lvm: Also export as ReadWriteOnce
While NFS supports `ReadWriteMany`, it's very common for pod authors
to only need `ReadWriteOnce`. At the moment, kube will not auto-bind
a `RWO` claim to a `RWM` volume.
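
A sketch of a persistent volume exporting both modes so `RWO` claims can bind; the name, size, and NFS details are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-volume          # placeholder name
spec:
  capacity:
    storage: 1Gi                    # placeholder size
  accessModes:
    - ReadWriteOnce                 # added so RWO claims can bind
    - ReadWriteMany
  nfs:
    path: /exports/example          # placeholder export path
    server: nfs.example.com         # placeholder server
```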
Added ip forwarding for nuage
Prevent useless master restarts by reworking the template for the master service env file
Add nuage rest server port to haproxy firewall rules.
Support 3rd party scheduler
Fix metrics deployment in 3.4
[#2698] Change to allow cni deployments without openshift SDN
The roles/openshift_facts main task did not pass the cni plugin variable on to the later role playbooks.
The master.yaml and node.yaml templates did not allow a cni configuration without installing either openshift sdn or nuage.
This change allows setting os_sdn_network_plugin_name=cni and openshift_use_openshift_sdn=false for deployments that use a cni plugin which neither needs nor wants openshift sdn to be installed.
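
For illustration, the inventory variables this enables, shown as group variables; the surrounding layout is an assumption:

```yaml
# Deploy with a third-party CNI plugin and no openshift sdn:
openshift_use_openshift_sdn: false
os_sdn_network_plugin_name: cni
```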
Add rolebinding-reader
Fixes BZ1390913
EricMountain-1A/fix_docker_fatal_selinux_4upstream-github: Docker daemon is started prematurely.
The Docker service was started before configuration changes were applied;
the handlers then did not restart it, so the configuration changes never
took effect. We now start the docker service only once all config changes
have been made.
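
A minimal sketch of that ordering as an Ansible playbook; the host group, template, and destination path are placeholders:

```yaml
- hosts: docker_hosts                        # placeholder group
  tasks:
    - name: Apply docker configuration      # placeholder template
      template:
        src: docker.j2
        dest: /etc/sysconfig/docker
      notify: restart docker

    # Start docker only after the configuration tasks above have run,
    # so the daemon comes up with the final configuration in place.
    - name: Ensure docker is started and enabled
      service:
        name: docker
        state: started
        enabled: yes

  handlers:
    - name: restart docker
      service:
        name: docker
        state: restarted
```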