- uses the value of oauthConfig.masterCA if present, otherwise sets it to
ca.crt
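
A minimal sketch of that fallback as an Ansible task (master_config and
named_cert_ca are illustrative names, not the playbook's actual variables):

    # Hypothetical sketch: prefer oauthConfig.masterCA, else fall back to ca.crt.
    - name: Default the CA used for named certificates
      set_fact:
        named_cert_ca: "{{ master_config.oauthConfig.masterCA | default('ca.crt', true) }}"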

Making the uninstall playbook more flexible

This handles stage environments as well as the eventual change of aep3_beta
to aep3.
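
One way to cover both names is to glob rather than hard-code them; a rough
sketch, assuming the repo files live under /etc/yum.repos.d (these tasks are
illustrative, not the playbook's actual ones):

    # Hypothetical sketch: match aep3_beta, aep3, and stage variants in one pass.
    - name: Find AEP repo files
      find:
        paths: /etc/yum.repos.d
        patterns: 'aep3*.repo'
      register: aep_repo_files

    - name: Remove matching repo files
      file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ aep_repo_files.files }}"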

Differentiate machine types on GCE (master and nodes)
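
A sketch of what per-role machine types can look like when launching GCE
instances (the role variable and machine type values are illustrative, and
credential parameters are omitted for brevity):

    # Hypothetical sketch: pick the machine type by role instead of one global value.
    - name: Launch a GCE instance with a role-specific machine type
      gce:
        name: "{{ instance_name }}"
        machine_type: "{{ 'n1-standard-2' if instance_role == 'master' else 'n1-standard-1' }}"
        image: rhel-7
        zone: us-central1-a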
|
| | | |
|
|\ \ \
| |_|/
|/| | |
Docker on master aws

Refactor named certificates

Uninstall - Remove systemd wants file for node
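
Removing a wants file is a one-task change; a sketch, assuming the standard
multi-user.target wants path and service name (both illustrative here):

    # Hypothetical sketch: drop the leftover systemd wants symlink for the node.
    - name: Remove the node service wants file
      file:
        path: /etc/systemd/system/multi-user.target.wants/atomic-openshift-node.service
        state: absent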

Fix EC2 instance type override
|
| |/ / |
|
|/ / |
|
|/ |
|
|\
| |
| | |
Better structure the output of the list playbook
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The list playbook listed the IPs of the VMs without logging their role like:
TASK: [debug ] ************************************************************
ok: [10.64.109.37] => {
"msg": "public:10.64.109.37 private:192.168.165.5"
}
ok: [10.64.109.47] => {
"msg": "public:10.64.109.47 private:192.168.165.6"
}
ok: [10.64.109.36] => {
"msg": "public:10.64.109.36 private:192.168.165.4"
}
ok: [10.64.109.215] => {
"msg": "public:10.64.109.215 private:192.168.165.2"
}
The list playbook now prints the information in a more structured way, with a
list of masters, a list of nodes, and each node's subtype:
TASK: [debug ] ************************************************************
ok: [localhost] => {
"msg": {
"lenaicnewlist": {
"master": [
{
"name": "10.64.109.215",
"private IP": "192.168.165.2",
"public IP": "10.64.109.215",
"subtype": "default"
}
],
"node": [
{
"name": "10.64.109.47",
"private IP": "192.168.165.6",
"public IP": "10.64.109.47",
"subtype": "compute"
},
{
"name": "10.64.109.37",
"private IP": "192.168.165.5",
"public IP": "10.64.109.37",
"subtype": "compute"
},
{
"name": "10.64.109.36",
"private IP": "192.168.165.4",
"public IP": "10.64.109.36",
"subtype": "infra"
}
]
}
}
}
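
A rough sketch of how output in that shape can be produced from a single debug
task on localhost (registered_hosts stands in for whatever fact actually holds
the VM list; selectattr's equalto test needs Jinja2 2.8+):

    # Hypothetical sketch: group the hosts by role and print once from localhost.
    - hosts: localhost
      gather_facts: no
      vars:
        registered_hosts:
          - { name: 10.64.109.215, role: master, subtype: default }
          - { name: 10.64.109.47, role: node, subtype: compute }
      tasks:
        - debug:
            msg:
              master: "{{ registered_hosts | selectattr('role', 'equalto', 'master') | list }}"
              node: "{{ registered_hosts | selectattr('role', 'equalto', 'node') | list }}"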

Gate upgrade steps
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
- Add gateing tests on the 3.0 to 3.1 upgrade
- Ensure that each stage does not proceed if a subset of the hosts fail,
since ansible will continue through the playbook as long as all hosts in a
play haven't failed.
- Fix up some left over references to byo group names
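
One way to get that all-or-nothing behavior is to mark each gating play so a
single host failure aborts the run; a minimal sketch (the group names and the
host_upgrade_ok variable are illustrative):

    # Hypothetical sketch: abort the whole run if any host fails this play,
    # instead of letting Ansible continue with the surviving hosts.
    - name: Verify upgrade targets
      hosts: masters:nodes
      any_errors_fatal: true
      tasks:
        - fail:
            msg: "Host does not meet the upgrade prerequisites"
          when: not host_upgrade_ok | default(false) | bool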

Update etcd default facts setting
|
| | | |
|
|\ \ \
| |/ /
|/| | |
Update master facts prior to upgrading from 3.0 to 3.1

- Reorder so that all non-changing checks run first
- Remove multiple plays where possible
- Make formatting more consistent
- Add additional comments to break up the different stages of the upgrade
- Use group names more consistently
- Add package version checking to nodes

- Split playbooks into two, one for 3.0 minor upgrades and one for 3.0 to 3.1
  upgrades
- Move upgrade playbooks to common/openshift/cluster/upgrades from adhoc
- Added byo wrapper playbooks to set the groups based on the byo conventions;
  other providers will need similar playbooks added eventually (see the sketch
  below)
- Installer wrapper updates for refactored upgrade playbooks
- Call the new 3.0 to 3.1 upgrade playbook
- Various fixes for edge cases I hit with a really old config lying around
- Fix output of host facts to show the connect_to value
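
A rough sketch of what such a byo wrapper could look like (the group names and
include path are illustrative, not the repository's actual ones):

    # Hypothetical sketch: map byo inventory groups onto the common group names,
    # then hand off to the shared upgrade playbook.
    - name: Populate common group names from byo groups
      hosts: localhost
      gather_facts: no
      tasks:
        - add_host:
            name: "{{ item }}"
            groups: etcd_hosts_to_config
          with_items: "{{ groups['etcd'] | default([]) }}"

    - include: ../../common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml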

Instead of combining this with the tasks that restart services, add a separate
started+enabled play for masters and nodes at the end of the playbook.
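
A minimal sketch of that final play (the service name is illustrative of an
enterprise install):

    # Hypothetical sketch: a closing play that leaves services both running and
    # enabled, independent of the restarts performed during the upgrade.
    - name: Ensure master services are started and enabled
      hosts: masters
      tasks:
        - service:
            name: atomic-openshift-master
            state: started
            enabled: yes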

With the openshift to atomic-openshift renames, some services were not enabled
after upgrade. Added enabled directives to all service restart lines in the
upgrade playbook.

Remove upgrade playbook restriction on 3.0.2.

This was incorrectly blocking 3.0.1 to 3.1 upgrades, which is a scenario we
should support.
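
A sketch of the relaxed gate, assuming a version fact and Ansible's
version_compare filter (the variable name and message are illustrative):

    # Hypothetical sketch: require any 3.0.x release rather than exactly 3.0.2.
    - fail:
        msg: "Upgrading to 3.1 requires an existing 3.0.x installation"
      when: openshift_release | version_compare('3.0', '<')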

Read etcd data dir from appropriate config file.

Rather than assuming the etcd data dir, we now read it from master-config.yaml
if using embedded etcd, otherwise from etcd.conf. Doing so requires PyYAML to
parse the config file when gathering facts. Also fixed a discrepancy between
the data_dir fact and the openshift-enterprise deployment_type.
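
The change itself lives in the Python fact-gathering code; as Ansible tasks the
two cases would look roughly like this (the embedded_etcd variable and file
paths are illustrative):

    # Hypothetical sketch: embedded etcd records its data dir in
    # master-config.yaml; external etcd uses /etc/etcd/etcd.conf.
    - slurp:
        src: /etc/openshift/master/master-config.yaml
      register: master_config_file
      when: embedded_etcd | bool

    - set_fact:
        etcd_data_dir: "{{ (master_config_file.content | b64decode | from_yaml).etcdConfig.storageDirectory }}"
      when: embedded_etcd | bool

    - shell: grep '^ETCD_DATA_DIR=' /etc/etcd/etcd.conf | cut -d= -f2
      register: etcd_conf_data_dir
      when: not embedded_etcd | bool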

Add support for flannel

Signed-off-by: Sylvain Baubeau <sbaubeau@redhat.com>
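
Flannel is toggled from the inventory; a sketch of the relevant group_vars,
assuming this change introduces an openshift_use_flannel switch (the interface
value is illustrative):

    # Hypothetical sketch: enable flannel instead of the default SDN plugin.
    openshift_use_flannel: true
    flannel_interface: eth1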

Fix indentation on when
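
For context, when must be indented as a task-level keyword rather than nested
under the module's arguments; a minimal illustration with made-up names:

    # Wrong: 'when' sits under the module arguments, so it is passed to the
    # module as a parameter instead of acting as a conditional.
    - name: Restart service
      service:
        name: myservice
        state: restarted
        when: do_restart | bool

    # Right: 'when' is indented at the task level.
    - name: Restart service
      service:
        name: myservice
        state: restarted
      when: do_restart | bool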
|
| | | |
|
|\| |
| | |
| | | |
Upgrade enhancements

Skip some 3.1 checks if doing a 3.0.x to 3.0.2 upgrade. Improve the error
message when oc whoami fails (i.e. openshift is down) during pre-upgrade
checks, rather than assuming the binary doesn't exist.
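
A sketch of how that pre-upgrade check can tell a down master from a missing
binary (the task names and message are illustrative):

    # Hypothetical sketch: run 'oc whoami' and report a useful error when the
    # master is down, rather than assuming the 'oc' binary does not exist.
    - name: Verify the API is reachable
      command: oc whoami
      register: oc_whoami_result
      ignore_errors: true

    - fail:
        msg: "Unable to run 'oc whoami'; is the master service running?"
      when: oc_whoami_result.rc != 0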