* Fix yum/subman version check on Atomic.

* [openstack] Allow a timeout option for Heat stack creation

* Check for bad versions of yum and subscription-manager.

  Using yum or repoquery with newer versions of subscription-manager and
  older versions of yum (RHEL 7.1) outputs an additional warning.
  Installing or upgrading newer docker can pull in this newer
  subscription-manager, resulting in problems with older versions of
  ansible and its yum module, as well as with any use of repoquery/yum
  commands in our playbooks.

  This change explicitly checks for the problem by using repoquery and
  fails early if it is found. The check is run early in both config and
  upgrade.

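The early-failure idea can be sketched in shell. This is an illustrative sketch only, not the playbook's actual task; the `REPOQUERY_BIN` override and the matched warning text are assumptions:

```shell
# Hypothetical sketch: run repoquery, capture its stderr, and fail early
# if an incompatibility warning appears. REPOQUERY_BIN and the
# 'requires api' pattern are illustrative assumptions, not the
# playbook's actual code.
check_yum_subman_compat() {
    # Capture stderr only; discard normal repoquery output.
    err=$("${REPOQUERY_BIN:-repoquery}" --installed subscription-manager 2>&1 >/dev/null)
    if printf '%s\n' "$err" | grep -qi 'requires api'; then
        echo "Incompatible yum/subscription-manager detected: $err" >&2
        return 1
    fi
    return 0
}
```

Running a check like this before any yum-based task surfaces the broken combination immediately instead of mid-playbook.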
* Optimize the cloud-specific list.yml playbooks

  Remove the need to gather facts on all VMs in order to list them, and
  prettify the output of the AWS list the same way it is done for other
  cloud providers.

* Fix GCE cluster creation

  Attempting to create a GCE cluster when the `gce.ini` configuration
  file contains a non-default network leads to the following error:

  ```
  TASK [Launch instance(s)] ******************************************************
  fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Unexpected error attempting to create instance lenaic2-master-74f10, error: {'domain': 'global', 'message': \"Invalid value for field 'resource.networkInterfaces[0]': ''. Subnetwork should be specified for custom subnetmode network\", 'reason': 'invalid'}"}
  ```

  The `subnetwork` parameter needs to be added and taken into account.

* etcd upgrade playbooks

  On Fedora we just blindly upgrade to the latest. On RHEL we do
  stepwise upgrades: 2.0, 2.1, 2.2, 2.3, 3.0.

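The stepwise path can be illustrated with a small bash helper (hypothetical, not the playbooks' implementation): given the currently installed series, it prints the remaining steps in order.

```shell
# Hypothetical illustration of the RHEL stepwise upgrade path:
# print each step remaining after the currently installed series.
etcd_upgrade_path() {
    current="$1"
    emit=0
    for step in 2.0 2.1 2.2 2.3 3.0; do
        [ "$emit" -eq 1 ] && echo "$step"
        [ "$step" = "$current" ] && emit=1
    done
    return 0
}

# e.g. etcd_upgrade_path 2.1 prints 2.2, 2.3, 3.0 (one per line)
```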
  Includes bash functions for etcdctl2 and etcdctl3 which provide
  reasonable defaults for etcdctl on a host that is configured with
  openshift_etcd.

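Such wrappers might look like the sketch below; the certificate paths, endpoint, and `ETCDCTL_BIN` override hook are assumptions for illustration, not the exact defaults installed by openshift_etcd.

```shell
# Illustrative sketch of etcdctl wrapper functions. The cert paths,
# endpoint, and ETCDCTL_BIN hook are assumptions, not the shipped
# defaults.
etcdctl2() {
    ETCDCTL_API=2 "${ETCDCTL_BIN:-etcdctl}" \
        --cert-file /etc/etcd/peer.crt \
        --key-file /etc/etcd/peer.key \
        --ca-file /etc/etcd/ca.crt \
        --endpoints "https://$(hostname):2379" "$@"
}

etcdctl3() {
    ETCDCTL_API=3 "${ETCDCTL_BIN:-etcdctl}" \
        --cert /etc/etcd/peer.crt \
        --key /etc/etcd/peer.key \
        --cacert /etc/etcd/ca.crt \
        --endpoints "https://$(hostname):2379" "$@"
}
```

The v2 and v3 tools spell their TLS flags differently (`--cert-file`/`--ca-file` vs `--cert`/`--cacert`), which is exactly the kind of detail such wrappers hide.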
* Fix HA upgrade when fact cache deleted.

  This variable is referenced in the systemd unit templates; this seems
  like the easiest and most consistent fix.

* Reconcile role bindings for jenkins pipeline during upgrade.

  See https://github.com/openshift/origin/issues/11170 for more info.

* Bug 1393663 - Failed to upgrade v3.2 to v3.3

  upgrade.

* Don't upgrade etcd on backup operations

  Fixes Bug 1393187

* Fix HA etcd upgrade when facts cache has been deleted.

  The simplest way to reproduce this issue is to attempt an upgrade
  after removing /etc/ansible/facts.d/openshift.fact. The actual cause
  in the field is not entirely known, but critically it is possible for
  embedded_etcd to default to true, causing the etcd fact lookup to
  check the wrong file and fail silently, resulting in no etcd_data_dir
  fact being set.

* Revert openshift.node.nodename changes

  This reverts commit aaaf82ba6032d0b1e9c36a39a7eda25b8c5f4b84.

  This reverts commit 1f2276fff1e41c1d9440ee8b589042ee249b95d7.

* curl, prior to RHEL 7.2, did not properly negotiate up to a newer TLS
  protocol version, so force it to use tlsv1.2.

  Fixes bug 1390869

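The idea can be sketched as a small wrapper (`CURL_BIN` is a hypothetical override hook for illustration; the real fix simply adds the flag to the existing curl invocation):

```shell
# Sketch: always request TLS 1.2 explicitly instead of relying on
# curl's protocol negotiation (broken on RHEL < 7.2).
# CURL_BIN is a hypothetical override hook, not part of the real fix.
secure_curl() {
    "${CURL_BIN:-curl}" --tlsv1.2 "$@"
}
```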
* Bug 1388016 - The insecure-registry address was removed during upgrade

  existing /etc/sysconfig/docker.

* Update link to latest versions upgrade README

* Add support for 3.4 upgrade.

  This is a direct copy of the 3.3 upgrade playbooks, with 3.3-specific
  hooks removed and version numbers adjusted appropriately.

* Fix and reorder control plane service restart.

  This was missed in the standalone upgrade control plane playbook.
  However, it also looks to be out of order: we should restart before
  reconciling and upgrading nodes. As such, the restart was moved
  directly into the control plane upgrade common code and placed before
  reconciliation.

* This file was removed and is no longer used

* Fix typos