Remove duplicate when key
Fix rare failure to deploy new registry/router after upgrade.
Router/registry update and re-deploy was recently reordered to
immediately follow control plane upgrade, right before we proceed to
node upgrade.

In some situations (small or single-host clusters) it appears possible
that the deployer pods are still running when the node in question is
evacuated for upgrade. When the deployer pod dies, the deployment
fails and the router/registry continue running the old version, despite
the deployment config being updated correctly.

This change re-orders the router/registry upgrade to follow node
upgrade. For a separate control plane upgrade, however, the
router/registry upgrade still occurs at the end: the router/registry seem
like they should logically be included in a control plane upgrade, and
presumably the user will not launch the node upgrade manually so quickly
as to trigger an evacuation of the node in question.

The workaround, when this problem does occur, is simply to run:

oc deploy docker-registry --latest
Set nameservers on DHCPv6 event
A dhcp6-change event may happen on nodes running dual-stack
IPv4/IPv6 with DHCP, even if OpenShift itself doesn't use IPv6.
/etc/resolv.conf needs to be adjusted in this case as well.
Fix SELinux issues with etcd container
Make it so that we don't relabel /etc/etcd/ (via `:z`) on every run.
Relabeling on every run causes systemd to fail to access /etc/etcd/etcd.conf
when trying to run the systemd unit file on the next run. Convert the mount
from `:z` to `:ro` since we only need read-only access to the files.
Fixes #2811
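
For illustration, a minimal Ansible sketch of the idea (the task and image
names here are assumptions, not the role's actual unit template): mount the
config directory read-only instead of relabeling it on every container start.

# Illustrative only: mount /etc/etcd read-only (:ro) rather than
# relabeling it with :z on every start of the container.
- name: Run containerized etcd (sketch)
  docker_container:
    name: etcd
    image: registry.access.redhat.com/rhel7/etcd   # assumed image name
    volumes:
      - /etc/etcd:/etc/etcd:ro
      - /var/lib/etcd:/var/lib/etcd:z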
Refactored to use Ansible systemd module
* Ansible systemd module used in place of service module
* Refactored command tasks which are no longer necessary
* Applying rules from openshift-ansible Best Practices Guide
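
As a hedged illustration of the pattern (the unit name is an example, not
necessarily one touched by this commit), a daemon-reload command plus a
generic service task can collapse into a single systemd-module task:

# Before: explicit daemon-reload followed by the generic service module.
- command: systemctl daemon-reload

- service:
    name: origin-node
    state: restarted

# After: the systemd module performs the reload and the restart together.
- systemd:
    name: origin-node
    state: restarted
    daemon_reload: yes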
Gracefully handle OpenSSL module absence
Should fix #2869
etcd upgrade playbook is not currently applicable to embedded etcd in…
Fixes Bug 1395945
Fix invalid embedded etcd fact in etcd upgrade playbook.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
We were getting a different failure here, complaining that 'openshift' was
not in the facts, because facts had not been loaded for the first master
during the playbook run. However, this check was recently used in
upgrade_control_plane and should be more reliable.
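
A rough sketch of the pattern (the group, role, and fact names below are
assumptions, not necessarily the exact ones this commit relies on): load
facts on the first master before evaluating the embedded-etcd condition.

# Illustrative only: gather facts first so the embedded-etcd check
# does not fail with "openshift is not in the facts".
- name: Load facts for the first master
  hosts: oo_first_master
  roles:
    - openshift_facts

- name: Abort the etcd upgrade for embedded etcd
  hosts: oo_first_master
  tasks:
    - fail:
        msg: "This playbook does not apply to embedded etcd."
      when: openshift.master.embedded_etcd | default(false) | bool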
lhuard1A/fix_list_after_create_on_libvirt_and_openstack
Fix the list done after cluster creation on libvirt and OpenStack
The `list.yml` playbooks have been using cloud-provider-specific variables to
find the IPs of the VMs since 82449c6. Those “cloud provider specific”
variables are the ones provided by the dynamic inventories.

But there is a problem when the `list.yml` playbooks are invoked from the
`launch.yml` ones because, in that case, the inventory is not coming from the
dynamic inventory scripts, but from the `add_host` done inside
`launch_instances.yml`.

Whereas the GCE and AWS `launch_instances.yml` were correctly adding the
variables used by `list.yml` to their `add_host` calls, libvirt and OpenStack
were missing them.

Fixes #2856
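
A hedged sketch of the pattern (group and variable names are illustrative,
not the exact ones these playbooks use): the `add_host` call in
`launch_instances.yml` has to register the same IP variables that `list.yml`
later reads.

# Illustrative only: make the in-memory inventory created by add_host
# carry the IP variables that list.yml expects.
- name: Add launched instances to the in-memory inventory
  add_host:
    name: "{{ item.name }}"
    groups: "oo_hosts_to_config"
    ansible_ssh_host: "{{ item.ip }}"
    # Variables later consumed by list.yml (names are assumptions):
    oo_public_ipv4: "{{ item.public_ip }}"
    oo_private_ipv4: "{{ item.private_ip }}"
  with_items: "{{ launched_instances }}"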
Updating docs for Ansible 2.2 requirements
Verify the presence of dbus python binding
While the proper fix is to have it installed by default, this commit also
provides a better error message when the module is not present (for example
when running on Python 3).
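
One way such a pre-flight check could be expressed as Ansible tasks (a hedged
sketch; the wording and task layout are assumptions, not this commit's actual
implementation):

# Illustrative only: fail early with a clear message when the python
# dbus binding cannot be imported on the target host.
- name: Check for the python dbus binding
  command: python -c 'import dbus'
  register: dbus_check
  changed_when: false
  failed_when: false

- name: Give a clear error when the dbus binding is missing
  fail:
    msg: "The python dbus binding is required on this host; please install it."
  when: dbus_check.rc != 0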
Merge admission plugin configs
Move the values in kube_admission_plugin_config up one level per
the new format from 1.3:
"The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved
and merged into admissionConfig.pluginConfig."
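
For illustration, the relocation in master-config.yaml looks roughly like this
(the plugin shown is only an example):

# Old layout (pre-1.3): plugin config nested under kubernetesMasterConfig.
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      ClusterResourceOverride:
        configuration:
          apiVersion: v1
          kind: ClusterResourceOverrideConfig

# New layout (1.3+): the same entries merged into the top-level admissionConfig.
admissionConfig:
  pluginConfig:
    ClusterResourceOverride:
      configuration:
        apiVersion: v1
        kind: ClusterResourceOverrideConfig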
Systemd `systemctl show` workaround
`systemctl show` exits with RC=1 for non-existent services in systemd v231.
This caused the Ansible systemd module to fail running the `systemctl show`
command instead of reporting that the service was not found.
This change catches both failure modes, on both older and newer versions of
systemd. The change in systemd exit status may be resolved in systemd v232:
https://github.com/systemd/systemd/commit/3dced37b7c2c9a5c733817569d2bbbaa397adaf7
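
A hedged sketch of the kind of guard involved (the unit name and the exact
condition are illustrative, not this commit's code): tolerate RC=1 from
`systemctl show` and decide from its output whether the unit exists.

# Illustrative only: systemd v231 returns rc=1 for unknown units while
# older versions return rc=0, so accept both and inspect the output.
- name: Check whether the etcd unit exists
  command: systemctl show etcd.service --property=LoadState
  register: etcd_show
  changed_when: false
  failed_when: etcd_show.rc not in [0, 1]

- name: Record whether the unit was found
  set_fact:
    etcd_unit_present: "{{ 'not-found' not in etcd_show.stdout }}"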
Update README.md
Add missing dependencies
Fix issues encountered in mixed environments
containerized.
Make os_firewall_manage_iptables run on python3
It fails with this traceback:

Traceback (most recent call last):
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 273, in <module>
    main()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 257, in main
    iptables_manager.add_rule(port, protocol)
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 87, in add_rule
    self.verify_chain()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 82, in verify_chain
    self.create_jump()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 142, in create_jump
    input_rules = [s.split() for s in output.split('\n')]
Refactor os_firewall role
* Remove unneeded tasks duplicated by new module functionality
* Ansible systemd module has 'masked' and 'daemon_reload' options
* Ansible firewalld module has 'immediate' option
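
A hedged sketch of what those options replace (the port and unit names are
examples): 'immediate' removes the need for a separate firewalld reload task,
and 'masked'/'daemon_reload' remove extra systemctl command tasks.

# Illustrative only: open a port both permanently and immediately,
# so no separate "reload firewalld" command task is needed.
- name: Open the OpenShift API port
  firewalld:
    port: 8443/tcp
    permanent: true
    immediate: true
    state: enabled

# Illustrative only: unmask and daemon-reload in a single task.
- name: Ensure the iptables service is unmasked
  systemd:
    name: iptables
    masked: no
    daemon_reload: yes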
Modified the error message being checked for
Added a BYO playbook for configuring NetworkManager on nodes
In order to do a full install of OpenShift using the byo/config.yml
playbook, it is currently required that NetworkManager be installed
and configured on the nodes prior to the installation. This playbook
introduces a very simple default configuration that can be used to
install, configure and enable NetworkManager on the nodes.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
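
A hedged sketch of what such a BYO play could contain (the host group is an
assumption and the tasks are simplified, not necessarily the playbook added
by this commit):

# Illustrative only: install and enable NetworkManager on the node hosts
# before running byo/config.yml.
- name: Configure NetworkManager on nodes
  hosts: nodes
  become: true
  tasks:
    - name: Install NetworkManager
      package:
        name: NetworkManager
        state: present

    - name: Enable and start NetworkManager
      systemd:
        name: NetworkManager
        state: started
        enabled: yes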
Add hawkular admin cluster role to management admin
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
Make the role work on F25 Cloud
On F24 and earlier, systemctl show always returned 0. On F25, it
returns 1 when a service does not exist, and thus the role fails
on Fedora 25 Cloud edition.
Refactor to use Ansible package module
The Ansible package module will call the correct package manager for the
underlying OS.
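
As a hedged illustration (the package name is just an example), a
distribution-specific yum or dnf task becomes a single package task:

# Illustrative only: let Ansible pick yum, dnf, or another backend
# appropriate for the target host.
- name: Install firewalld
  package:
    name: firewalld
    state: present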
Only run tuned-adm if tuned exists.
Fedora Atomic Host does not have tuned installed.
Fixes #2809
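
A hedged sketch of the guard (the profile name is an example, not necessarily
what the role actually sets):

# Illustrative only: skip tuned-adm entirely when the binary is absent,
# as on Fedora Atomic Host.
- name: Check whether tuned-adm is available
  command: which tuned-adm
  register: tuned_adm_check
  changed_when: false
  failed_when: false

- name: Apply the tuned profile
  command: tuned-adm profile virtual-guest
  when: tuned_adm_check.rc == 0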
Allow Ansible to continue when a node is unreachable or fails.