etcd_upgrade: Simplify package installation
Speed up 'make ci' and trim the output
The virtualenv is now rebuilt only if the test requirements file has been modified, which saves upwards of 30 seconds in iterative 'make ci' runs.
The pylint output is now trimmed to exclude disabled tests.
The order of the 'ci' target prerequisites has been changed so the fastest tests run first.
Closes #2933
Scheduler upgrades
- do not upgrade predicates if openshift_master_scheduler_predicates is defined
- do not upgrade priorities if openshift_master_scheduler_priorities is defined
- do not upgrade predicates/priorities unless they match known previous default configs
- output a WARNING to the user if predicates/priorities are not updated during install (see the sketch below)
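A minimal Ansible sketch of that guard, assuming illustrative variable names (openshift_master_scheduler_current_predicates, openshift_master_scheduler_default_predicates, openshift_master_scheduler_known_default_predicates) alongside the documented openshift_master_scheduler_predicates; the real tasks in the upgrade plays may differ:

# Sketch only: upgrade predicates when the user has not overridden them
# and the current config still matches a known previous default. All
# variable names except openshift_master_scheduler_predicates are
# assumptions for illustration.
- name: Upgrade scheduler predicates to the new defaults
  set_fact:
    openshift_upgrade_scheduler_predicates: "{{ openshift_master_scheduler_default_predicates }}"
  when:
    - openshift_master_scheduler_predicates is not defined
    - openshift_master_scheduler_current_predicates in openshift_master_scheduler_known_default_predicates

- name: Warn when predicates were not updated
  debug:
    msg: "WARNING: scheduler predicates do not match a known default and were left unchanged"
  when:
    - openshift_master_scheduler_predicates is not defined
    - openshift_master_scheduler_current_predicates not in openshift_master_scheduler_known_default_predicates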
- fix nose coverage flags
- add coverage support for files tested outside of the utils directory
- exclude stdlib and virtualenv-installed dependencies
Fix etcd upgrades to etcd 3.x
Scheduler var fix
- Introduce additional variables for the current scheduler config and the default values, to better determine whether the values we are getting are user-defined, coming from config, or simply the defaults (a sketch follows below).
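A sketch of the idea, with fact names assumed for illustration:

# Illustrative only: keep separate views of the scheduler config so later
# logic can tell the sources apart: what the master currently has, what
# the shipped defaults are, and whether the user supplied anything.
- name: Record current and default scheduler config
  set_fact:
    openshift_master_scheduler_current_config: "{{ scheduler_config_from_master_config | default({}) }}"
    openshift_master_scheduler_default_config: "{{ shipped_scheduler_defaults }}"
    openshift_master_scheduler_user_config_defined: "{{ openshift_master_scheduler_predicates is defined or openshift_master_scheduler_priorities is defined }}"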
node_dnsmasq - restart dnsmasq if it's not currently running
Fixes BZ1401425
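A sketch of the check-and-restart, assuming the role manages dnsmasq through the standard service module:

# Sketch: if dnsmasq is installed but not active (for example because a
# previous run changed nothing and no handler fired), restart it anyway.
- name: Check whether dnsmasq is running
  command: systemctl is-active dnsmasq
  register: dnsmasq_state
  failed_when: false
  changed_when: false

- name: Restart dnsmasq when it is not currently running
  service:
    name: dnsmasq
    state: restarted
  when: dnsmasq_state.rc != 0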
Conditionalize master config update for admission_plugin_config.
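A sketch of the conditional update; the key paths and fact layout are illustrative assumptions, with modify_yaml used as the helper for master-config edits:

# Sketch: only rewrite admissionConfig.pluginConfig when the fact is set,
# so upgrades on clusters without admission_plugin_config leave
# master-config.yaml untouched.
- name: Update admission plugin config
  modify_yaml:
    dest: "{{ openshift.common.config_base }}/master/master-config.yaml"
    yaml_key: admissionConfig.pluginConfig
    yaml_value: "{{ openshift.master.admission_plugin_config }}"
  when: "'admission_plugin_config' in openshift.master"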
upgrade_control_plane.yml: systemd_units.yaml needs the master facts
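A sketch of the fix's shape: populate the master facts before the include, since systemd_units.yaml templates against openshift.master values (the local_facts shown are illustrative):

# Sketch: gather/refresh master facts so the unit templates have the
# values they reference.
- name: Set master facts needed by systemd_units.yaml
  openshift_facts:
    role: master
    local_facts:
      api_port: "{{ openshift_master_api_port | default(None) }}"

- include: systemd_units.yaml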
openshift-master/restart: use openshift.common.hostname instead of inventory_hostname
When using a dynamic inventory, inventory_hostname isn't guaranteed to be usable. We should use openshift.common.hostname, which already copes with this.
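An illustrative fragment of the substitution, not the exact task from the restart playbook:

# Illustrative: wait for the restarted master using the hostname fact,
# which stays valid even when a dynamic inventory yields unusable
# inventory_hostname values.
- name: Wait for master API to come back up
  wait_for:
    host: "{{ openshift.common.hostname }}"
    port: "{{ openshift.master.api_port }}"
    state: started
    timeout: 600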
openshift_node_dnsmasq - Remove strict-order option from dnsmasq
strict-order forces dnsmasq to iterate through nameservers in order. If one of the nameservers is down, this slows things down while dnsmasq waits for a timeout. The option also prevents dnsmasq from querying other nameservers if the first one returns a negative result. While I think it's odd to have a nameserver that returns negative results for a query that another returns positive results for, this does seem to fix the issue in testing.
Fixes Bug 1399577
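A sketch of removing the option from the dnsmasq config the role lays down (file path assumed for illustration):

# Sketch: ensure the strict-order option is absent so dnsmasq is free to
# query other nameservers when the first is down or returns NXDOMAIN.
- name: Remove strict-order from dnsmasq configuration
  lineinfile:
    dest: /etc/dnsmasq.d/origin-dns.conf
    line: strict-order
    state: absent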
Explicitly set etcd vars for byo scaleup
Fixes #2738
Cleanup ovs file and restart docker on every upgrade.
In 3.3 one of our services lays down a systemd drop-in that configures Docker networking to use lbr0. In 3.4 this has changed, but the file must be cleaned up manually by us.
However, after removing the file Docker requires a restart. This has big implications, particularly in containerized environments where upgrade is already a fragile series of upgrades and service restarts.
To avoid double Docker restarts, and thus double service restarts in containerized environments, this change does the following (a minimal sketch of the cleanup step follows below):
- Skip the restart during the Docker upgrade, if one is required; we will restart on our own later.
- Skip containerized service restarts when we upgrade the services themselves.
- Cleanly shut down all containerized services.
- Restart Docker (always; previously this only happened if it needed an upgrade).
- Ensure all containerized services are restarted.
- Restart rpm node services (always).
- Mark the node schedulable again.
At the end of this process, docker0 should be back on the system.
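A minimal sketch of the cleanup step itself (drop-in path shown as an assumption for illustration; the surrounding upgrade orchestration is omitted):

# Sketch: remove the 3.3-era Docker networking drop-in and restart Docker
# so the change takes effect; the full upgrade plays also handle the
# containerized service shutdown/restart ordering described above.
- name: Remove lbr0 docker networking drop-in
  file:
    path: /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf
    state: absent

- name: Restart docker
  service:
    name: docker
    state: restarted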
Sync latest image stream and templates for v1.3 and v1.4
allow 'latest' origin_image_tag
xpaas v1.3.5
Update scheduler defaults
Ansible version check update
We now require Ansible >= 2.2.0. Updating the version-checking playbook to reflect this change.
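A sketch of the kind of check the playbook performs (message text illustrative):

# Sketch: fail early when the controller runs an Ansible older than 2.2.0.
- name: Verify Ansible version is >= 2.2.0
  fail:
    msg: "Unsupported Ansible version {{ ansible_version.full }}; 2.2.0 or newer is required"
  when: ansible_version.full | version_compare('2.2.0', '<')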
Remove duplicate when key
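For context, a hypothetical example of the problem and the fix: a task mapping with two 'when' keys is invalid, and depending on the parser one of the conditions is silently dropped, so they must be combined into a single list:

# Before (broken, hypothetical): the second 'when' overrides the first.
# - name: Example task
#   command: /bin/true
#   when: first_condition | bool
#   when: second_condition | bool
#
# After: one 'when' with both conditions (ANDed together).
- name: Example task
  command: /bin/true
  when:
    - first_condition | bool
    - second_condition | bool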
Fix rare failure to deploy new registry/router after upgrade.
Router/registry update and re-deploy was recently reordered to immediately follow control plane upgrade, right before we proceed to node upgrade.
In some situations (small or single-host clusters) it appears possible that the deployer pods are still running when the node in question is evacuated for upgrade. When the deployer pod dies the deployment is marked failed, and the router/registry continue running the old version despite the deployment config being updated correctly.
This change re-orders the router/registry upgrade to follow node upgrade. For a separate control plane upgrade, however, the router/registry update still occurs at the end. This is because the router/registry logically belong in a control plane upgrade, and presumably the user will not manually launch the node upgrade so quickly as to trigger an evacuation of the node in question.
The workaround when this problem does occur is simply:
oc deploy docker-registry --latest
Set nameservers on DHCPv6 event
A dhcp6-change event may happen on nodes running dual-stack IPv4/IPv6 and DHCP, even if OpenShift itself doesn't use IPv6. /etc/resolv.conf needs to be adjusted as well in this case.
fix selinux issues with etcd container