YAML Linting with CI checking
* Added yaml lint checks to the 'make ci' target
* Modified y(a)ml files to pass lint checks
Fix metricsPublicURL only being set correctly on first master.
The problem was caused by facts not being set for that master. To fix this, the patch cleans up the calculation of metricsPublicURL in general. Because this value is used in openshift_master to template the master config file, we now define these facts more clearly in openshift_master_facts, and add a dependency on it to openshift_metrics.

The calculation of the default sub-domain is also changed to remove it from system facts (as neither of these is a fact about the system) and to use plain variables instead.
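
As a rough illustration of the "plain variables" approach, the sub-domain and metrics URL could be expressed as role defaults like the following; the variable names and values here are assumptions for the sketch, not the role's exact definitions:

    # Illustrative defaults only: metricsPublicURL derived from a plain
    # variable rather than a system fact.
    openshift_master_default_subdomain: "apps.example.com"
    openshift_hosted_metrics_public_url: "https://hawkular-metrics.{{ openshift_master_default_subdomain }}/hawkular/metrics"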
Pre-pull master/node/ovs images during upgrade.
We did this for install but not upgrade, leading to situations where the
service restarts after upgrade could take much longer than expected as
docker pulls down the new image. Now the images are present when we
restart services and should allow them to come back online much more
quickly, equivalent to rpm service restarts.
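
A minimal sketch of what the pre-pull could look like as an upgrade task, assuming containerized hosts; the image prefix/tag variable names are illustrative:

    # Pull the new image up front so the post-upgrade restart does not
    # block on a docker pull.
    - name: Pre-pull openvswitch image
      command: docker pull {{ openshift_image_prefix | default('openshift/') }}openvswitch:{{ openshift_image_tag }}
      when: openshift_is_containerized | bool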
Silence warnings when using rpm directly
Scheduler upgrades
- do not upgrade predicates if openshift_master_scheduler_predicates is defined
- do not upgrade priorities if openshift_master_scheduler_priorities is defined
- do not upgrade predicates/priorities unless they match known previous default configs
- output a WARNING to the user if predicates/priorities are not updated during install (guard logic sketched below)
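
A hedged sketch of that guard logic; the current/default/known-defaults variable names are assumptions for illustration:

    # Replace predicates only when the user has not set them explicitly
    # and the current config matches a known previous default.
    - name: Upgrade scheduler predicates
      set_fact:
        openshift_master_scheduler_predicates: "{{ openshift_master_scheduler_default_predicates }}"
      when:
        - openshift_master_scheduler_predicates is not defined
        - openshift_master_scheduler_current_predicates in openshift_master_scheduler_known_default_predicates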
Scheduler var fix
- Introduce additional variables for the current scheduler config and its default values, to better determine whether the values we are getting are user-defined, come from config, or are the defaults.
node_dnsmasq - restart dnsmasq if it's not currently running
Fixes BZ1401425
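
The restart-if-not-running behavior could look roughly like this, assuming dnsmasq is managed through systemd (task shape and register name are illustrative):

    # Probe dnsmasq without failing the play, then start/restart it only
    # when it is not currently active.
    - name: Check whether dnsmasq is running
      command: systemctl is-active dnsmasq
      register: dnsmasq_state
      failed_when: false
      changed_when: false

    - name: Restart dnsmasq when it is not running
      systemd:
        name: dnsmasq
        state: restarted
      when: dnsmasq_state.rc != 0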
openshift_node_dnsmasq - Remove strict-order option from dnsmasq
strict-order forces dnsmasq to iterate through nameservers in order. If one of the nameservers is down, this will slow things down while dnsmasq waits for a timeout. Also, this option prevents dnsmasq from querying other nameservers if the first one returns a negative result. While I think it's odd to have a nameserver return negative results for a query that another answers positively, this does seem to fix the issue in testing.

Fixes Bug 1399577
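
A minimal sketch of dropping the option from the node dnsmasq config; the file path and handler name are assumptions for illustration:

    # Remove strict-order so dnsmasq may fall back to other nameservers.
    - name: Remove strict-order from dnsmasq configuration
      lineinfile:
        dest: /etc/dnsmasq.d/origin-dns.conf
        line: strict-order
        state: absent
      notify: restart dnsmasq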
Cleanup ovs file and restart docker on every upgrade.
In 3.3 one of our services lays down a systemd drop-in for configuring Docker networking to use lbr0. In 3.4 this has changed, and the file must be cleaned up manually by us. However, after removing the file, docker requires a restart. This had big implications, particularly in containerized environments, where upgrade is a very fragile series of upgrades and service restarts.

To avoid double docker restarts, and thus double service restarts in containerized environments, this change does the following:

- Skip the restart during docker upgrade, if one is required; we will restart on our own later.
- Skip containerized service restarts when we upgrade the services themselves.
- Cleanly shut down all containerized services.
- Restart docker. (always; previously this only happened if it needed an upgrade)
- Ensure all containerized services are restarted.
- Restart rpm node services. (always)
- Mark the node schedulable again.

At the end of this process, docker0 should be back on the system.
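
The cleanup-and-restart core of this could be sketched as below; the drop-in path and task shapes are assumptions, not the playbook's exact tasks:

    # Remove the 3.3-era drop-in that pointed Docker networking at lbr0,
    # then restart docker unconditionally so docker0 comes back.
    - name: Remove Docker networking drop-in from 3.3
      file:
        path: /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf
        state: absent

    - name: Restart docker
      systemd:
        name: docker
        state: restarted
        daemon_reload: yes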
allow 'latest' origin_image_tag
xpaas v1.3.5
Update scheduler defaults
Set nameservers on DHCPv6 event
A dhcp6-change event may happen on nodes running dual-stack IPv4/IPv6 with DHCP, even if OpenShift itself doesn't use IPv6. /etc/resolv.conf needs to be adjusted in this case as well.
fix selinux issues with etcd container
Make it so that we don't relabel /etc/etcd/ (via `:z`) on every run.
Doing this causes systemd to fail accessing /etc/etcd/etcd.conf when
trying to run the systemd unit file on the next run. Convert it from
`:z` to `:ro` since we only need read-only access to the files.
Fixes #2811
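
A sketch of the mount-flag change on the containerized etcd invocation; the run command and image name are illustrative, not the role's exact unit template:

    # Read-only bind mount for config; keep :z only where relabeling is
    # actually needed (the data directory).
    - name: Run containerized etcd with a read-only config mount
      command: >
        docker run --name etcd
        -v /etc/etcd:/etc/etcd:ro
        -v /var/lib/etcd:/var/lib/etcd:z
        registry.access.redhat.com/rhel7/etcd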
* Use the Ansible systemd module in place of the service module (swap sketched below)
* Refactored away command tasks that are no longer necessary
* Applied rules from the openshift-ansible Best Practices Guide
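
An illustrative before/after of the service-to-systemd module swap; the service name is a placeholder:

    # Before: generic service module.
    - service:
        name: origin-node
        state: restarted

    # After: systemd module, which can also reload unit files in the
    # same task.
    - systemd:
        name: origin-node
        state: restarted
        daemon_reload: yes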
Updating docs for Ansible 2.2 requirements
Verify the presence of dbus python binding
While the proper fix is to have it installed by default, this commit also permits a better error message in case the module is not present (as when running on Python 3).
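
A minimal sketch of such a pre-flight check, assuming a plain python invocation (both tasks are illustrative):

    # Probe for the binding, then fail with a clear message if missing.
    - name: Check for the dbus python binding
      command: python -c 'import dbus'
      register: pydbus_check
      failed_when: false
      changed_when: false

    - name: Fail clearly when the dbus binding is unavailable
      fail:
        msg: The dbus python binding is required but could not be imported.
      when: pydbus_check.rc != 0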
Merge admission plugin configs
Move the values in kube_admission_plugin_config up one level per
the new format from 1.3:
"The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved
and merged into admissionConfig.pluginConfig."
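
The move the quote describes, as an illustrative master-config fragment (plugin name and contents are placeholders):

    # Before (pre-1.3 location):
    kubernetesMasterConfig:
      admissionConfig:
        pluginConfig:
          ExamplePlugin:
            configuration:
              apiVersion: v1
    ---
    # After: merged up into the top-level admissionConfig.
    admissionConfig:
      pluginConfig:
        ExamplePlugin:
          configuration:
            apiVersion: v1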
`systemctl show` would exit with RC=1 for non-existent services in systemd v231. This caused the Ansible systemd module to fail on running the `systemctl show` command instead of reporting that the service was not found. This change catches both failures, on either older or newer versions of systemd. The change in systemd exit status may be resolved in systemd v232.
https://github.com/systemd/systemd/commit/3dced37b7c2c9a5c733817569d2bbbaa397adaf7
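
A hedged sketch of tolerating both behaviors; the service name and the LoadState probe are assumptions for illustration:

    # Treat "not-found" as a clean miss whether systemctl show exits 0
    # (older systemd) or 1 (v231).
    - name: Check whether the node service exists
      command: systemctl show atomic-openshift-node -p LoadState
      register: node_service_state
      failed_when:
        - node_service_state.rc != 0
        - "'not-found' not in node_service_state.stdout"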
Fix issues encountered in mixed environments
containerized.
Make os_firewall_manage_iptables run on python3
It fails with the traceback below, likely because under Python 3 the captured command output is bytes and splitting it with a str separator fails:

Traceback (most recent call last):
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 273, in <module>
    main()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 257, in main
    iptables_manager.add_rule(port, protocol)
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 87, in add_rule
    self.verify_chain()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 82, in verify_chain
    self.create_jump()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 142, in create_jump
    input_rules = [s.split() for s in output.split('\n')]
Refactor os_firewall role
* Remove unneeded tasks duplicated by new module functionality
* Ansible systemd module has 'masked' and 'daemon_reload' options
* Ansible firewalld module has an 'immediate' option (examples sketched below)
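
Illustrative tasks using those options; the service and port values are placeholders:

    # masked prevents the unit from being started; daemon_reload picks
    # up unit file changes in the same task.
    - name: Mask iptables when firewalld manages the rules
      systemd:
        name: iptables
        masked: yes
        daemon_reload: yes

    # immediate applies the permanent rule to the running firewall too.
    - name: Open the master API port
      firewalld:
        port: 8443/tcp
        permanent: yes
        immediate: yes
        state: enabled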