| Commit message | Author | Age | Files | Lines |
In AWS where the master node was not part of the nodes and unschedulable
in an unschedulable way
Bug 1369410 - uninstall fail at task [restart docker] on atomic-host
* Moved the restarting of the docker and network services lower.
* Added /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf to the list of
files to be removed (I suspect the RPM uninstall handles this for
non-containerized installs).
* Sorted the file names.
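
Purely as an illustration of that ordering (task names and the file list are assumed, not copied from the playbook), the uninstall flow would look roughly like:

    - name: Remove OpenShift-related systemd drop-ins and config files
      file:
        path: "{{ item }}"
        state: absent
      with_items:
      # kept sorted; the drop-in below is the newly added entry
      - /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf

    # Restart docker and network only after the drop-ins are gone, so the
    # services do not come back up pointing at files that no longer exist.
    - name: Restart docker
      service:
        name: docker
        state: restarted

    - name: Restart network
      service:
        name: network
        state: restarted
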
add run_once to repeatable actions
Add the Registry deployment subtype as an option in the quick installer.
Metrics improvements
The metrics deployer now checks for route activation, so we need a router in
place before we install metrics.
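
As a rough sketch only (the include file names are assumptions about the playbook layout, not actual paths), the ordering this implies is:

    # The router must exist before metrics: the deployer now waits for the
    # metrics route to activate, and no route activates without a router.
    - include: openshift_hosted.yml      # deploys the default router
    - include: openshift_metrics.yml     # runs the metrics deployer afterwards
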
Remove duplicate flannel registration
Signed-off-by: Adam Miller <maxamillion@fedoraproject.org>
Add warning at end of 3.3 upgrade if pluginOrderOverride is found.
Replace some virsh commands with the native virt_XXX Ansible modules
Fix etcd uninstall
Open OpenStack security group for the service node port range
With OpenShift 3.2, creating a service that is accessible from outside the
cluster thanks to `nodePort` automatically opens the “local” `iptables`
firewall to allow incoming connections on the `nodePort` of the service.
In order to benefit from this improvement, the OpenStack security group
shouldn’t block those incoming connections.
This change opens, on the OS nodes, the port range dedicated to service
node ports.
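
A minimal sketch of opening that range with the os_security_group_rule module; the group name, the CIDR, and the 30000-32767 range (the Kubernetes default servicesNodePortRange) are assumptions here, and the real change may instead live in the provider's Heat templates:

    - name: Allow service node ports on the node security group
      os_security_group_rule:
        security_group: openshift-node   # assumed group name
        protocol: tcp
        port_range_min: 30000            # assumed servicesNodePortRange
        port_range_max: 32767
        remote_ip_prefix: 0.0.0.0/0      # or restrict to known clients
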
Fix the “node on master” feature
What we want to do is to add the master as a node if:
* `g_nodeonmaster` is set to true, and
* we are not in the case where we want to add new nodes.
The second test was done by only checking whether `g_new_node_hosts` was defined.
This was wrong because, in all cloud-provider setups, this variable was set
with the default value of “empty list” (`[]`).
The test has been changed to use the `bool` filter so that it correctly evaluates
to false (and hence effectively adds the master as a node) when `g_new_node_hosts`
is the empty list.
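
A sketch of what the corrected condition amounts to (the task itself is illustrative; group names, defaults, and placement are assumptions):

    - name: Add the master to the node group when g_nodeonmaster is set
      # Old test, "g_new_node_hosts is not defined", was always false on
      # cloud providers because the variable defaults to [].
      # New test: the bool filter turns that empty list into false.
      add_host:
        name: "{{ groups.masters.0 }}"
        groups: nodes
      when:
      - g_nodeonmaster | default(false) | bool
      - not (g_new_node_hosts | default(false) | bool)
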
Fix standalone Docker upgrade missing symlink.
Some expressions now need to be enclosed inside `{{…}}`.
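
For context, this is the usual Ansible 2.x move away from bare variable references; the variable name below is made up purely to illustrate the change:

    - name: Old style, a bare reference that is now deprecated
      debug:
        var: item
      with_items: docker_files_to_remove        # made-up variable name

    - name: Same loop with the expression enclosed in Jinja2 delimiters
      debug:
        var: item
      with_items: "{{ docker_files_to_remove }}"
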
Fixes #2317
Prevents the network egress bug that causes node restart to fail during the
3.3 upgrade (even though a separate fix is incoming for this).
The only catch is preventing the openshift_cli role, which requires docker,
from triggering a potential upgrade, which we still don't want at this
point. To avoid that, we use the same variable to protect the installed docker
version as we use in pre.yml.
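
Sketched out, the guard looks something like the play below; the variable name is only a stand-in for whatever flag pre.yml actually sets:

    - hosts: oo_masters_to_config
      vars:
        # Stand-in name: tell the docker dependency pulled in by openshift_cli
        # not to touch the installed docker version at this point.
        docker_protect_installed_version: true
      roles:
      - openshift_cli
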
fixing openshift key error in case of node failure during run (ssh is…
Improvements for Docker 1.10+ Upgrade Image Nuking
In a parallel step prior to the real upgrade tasks, clear out all unused
Docker images on all hosts. This should be relatively safe to interrupt,
as no real upgrade steps have taken place yet.
Once into the actual upgrade, we again clear all images, only this time with
force, and after stopping and removing all containers.
Both rmi passes use a new and hopefully less error-prone command to do the
removal; this should avoid the missed orphans we were hitting before.
Added some logging around the current image count before and after this
step; most of the messages are only printed if we're crossing the 1.10
boundary, but one is not, just for additional information in your Ansible log.
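
A loose sketch of those two passes (the exact commands, task names, and error handling in the playbook will differ):

    # Pass 1, before any upgrade work: try to remove every image; images still
    # used by containers simply fail to delete and are left alone, so this is
    # safe to interrupt.
    - name: Remove unused Docker images
      shell: docker images -q | xargs --no-run-if-empty docker rmi
      failed_when: false

    # Pass 2, during the upgrade itself: stop and remove all containers, then
    # force-remove every image so Docker 1.10 has nothing left to migrate.
    - name: Remove all containers
      shell: docker ps -aq | xargs --no-run-if-empty docker rm -f
    - name: Force-remove all Docker images
      shell: docker images -q | xargs --no-run-if-empty docker rmi -f
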
This avoids the automatic image migration in 1.10, which can take a very
long time and potentially cause rpm db corruption.
1.3 / 3.3 Upgrades
Refactored the 3.2 upgrade common files out to a path that does not
indicate they are strictly for 3.2.
The 3.3 upgrade then becomes a relatively small copy of the byo entry point,
all calling the same code as the 3.2 upgrade.
Thus far there are no known 3.3-specific upgrade tasks. In the future we
will likely want to allow hooks out to version-specific pre/upgrade/post
tasks.
Also fixes a bug where the handlers were not restarting
node/openvswitch containers during upgrades, due to a change in Ansible
2+.