Commit messages
The list playbook used to print the IPs of the VMs without indicating their role, for example:
TASK: [debug ] ************************************************************
ok: [10.64.109.37] => {
"msg": "public:10.64.109.37 private:192.168.165.5"
}
ok: [10.64.109.47] => {
"msg": "public:10.64.109.47 private:192.168.165.6"
}
ok: [10.64.109.36] => {
"msg": "public:10.64.109.36 private:192.168.165.4"
}
ok: [10.64.109.215] => {
"msg": "public:10.64.109.215 private:192.168.165.2"
}
The list playbook now prints the information in a more structured way, with a list of masters, a list of nodes, and the subtype of each node, for example:
TASK: [debug ] ************************************************************
ok: [localhost] => {
"msg": {
"lenaicnewlist": {
"master": [
{
"name": "10.64.109.215",
"private IP": "192.168.165.2",
"public IP": "10.64.109.215",
"subtype": "default"
}
],
"node": [
{
"name": "10.64.109.47",
"private IP": "192.168.165.6",
"public IP": "10.64.109.47",
"subtype": "compute"
},
{
"name": "10.64.109.37",
"private IP": "192.168.165.5",
"public IP": "10.64.109.37",
"subtype": "compute"
},
{
"name": "10.64.109.36",
"private IP": "192.168.165.4",
"public IP": "10.64.109.36",
"subtype": "infra"
}
]
}
}
}
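The grouping itself is easy to express in plain Python. The sketch below is illustrative only, not the playbook's actual code: it assumes each VM is described by a flat dict with hypothetical type, subtype, public_ip and private_ip fields, and rebuilds the role-keyed structure shown above:

# Illustrative sketch (not the playbook's actual filter): rebuild the
# master/node structure shown above from a flat list of host facts.
# The input field names are assumptions for this example.
import json

def group_hosts_by_role(hosts):
    grouped = {}
    for host in hosts:
        role = host.get("type", "node")
        grouped.setdefault(role, []).append({
            "name": host.get("public_ip"),
            "public IP": host.get("public_ip"),
            "private IP": host.get("private_ip"),
            "subtype": host.get("subtype", "default"),
        })
    return grouped

if __name__ == "__main__":
    hosts = [
        {"type": "master", "public_ip": "10.64.109.215",
         "private_ip": "192.168.165.2", "subtype": "default"},
        {"type": "node", "public_ip": "10.64.109.47",
         "private_ip": "192.168.165.6", "subtype": "compute"},
    ]
    print(json.dumps({"lenaicnewlist": group_hosts_by_role(hosts)}, indent=4))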
atomic-openshift-installer: Update prompts and help messages
atomic-openshift-installer: Update nopwd sudo test
This updates the passwordless sudo test to address some inconsistencies between group-based and user-based sudo permissions.
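As a rough illustration of what such a check involves (this is not the installer's actual test code), passwordless sudo can be probed with sudo's non-interactive flag, which behaves the same whether the privilege comes from a per-user sudoers entry or from group membership:

# Illustrative only: probe for passwordless sudo. `sudo -n true` exits
# non-zero whenever sudo would need to prompt, regardless of whether the
# privilege comes from a per-user sudoers entry or a group such as %wheel.
import subprocess

def can_sudo_without_password():
    result = subprocess.run(
        ["sudo", "-n", "true"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    print("passwordless sudo:", can_sudo_without_password())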
Test additions for cli_installer:get_hosts_to_install_on
Fixed a bug: dependencies are now also added to SLAs upon creation.
Adding support for zabbix slas.
Sync with the latest image streams
Update xpaas streams and templates for their v1.1.0 release
This removes existing templates from disk and from the openshift namespace.
3.0 to 3.1 general cleanup and template update fix
- Reorder so that all non-changing checks run first
- Remove multiple plays where possible
- Make formatting more consistent
- Add additional comments to break up the different stages of the upgrade
- Use group names more consistently
- Add package version checking to nodes
Add zabbix pieces to hold AWS S3 bucket stats
oo_filter: don't fail when attribute is not defined
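A minimal sketch of the pattern this change implies, not the repository's actual oo_filters code: when collecting an attribute across a list of host dicts, fall back to a default instead of failing when the attribute is not defined:

# Illustrative sketch only (not the repo's oo_filters implementation):
# tolerate entries that do not define the requested attribute.
def collect_attribute(items, attribute, default=None):
    return [item.get(attribute, default) for item in items]

if __name__ == "__main__":
    hosts = [{"name": "master-1", "ip": "10.0.0.1"}, {"name": "node-1"}]
    # ['10.0.0.1', None] instead of an error for the host without an "ip"
    print(collect_attribute(hosts, "ip"))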
add ansible dep to vagrant doc
Refactor upgrade
- Split the playbooks in two: one for 3.0 minor upgrades and one for 3.0 to 3.1 upgrades
- Move the upgrade playbooks from adhoc to common/openshift/cluster/upgrades
- Add byo wrapper playbooks to set the groups based on the byo conventions; other providers will need similar playbooks added eventually
- Update the installer wrapper for the refactored upgrade playbooks
- Call the new 3.0 to 3.1 upgrade playbook
- Various fixes for edge cases hit with a really old config lying around
- Fix the output of host facts to show the connect_to value
Automatic commit of package [openshift-ansible] release [3.0.8-1].
Add origin-clients to uninstall playbook.
atomic-openshift-installer: Remove question for container install
Removing the option for a container-based install from the quick installer while it is in tech preview.
examples: include logging and metrics infrastructure
Remove references to multi_ec2.py
Package the default ansible.cfg with atomic-openshift-utils.
Instead of combining this with tasks to restart services, add a separate
started+enabled play for masters and nodes at the end of the playbook.
With the openshift to atomic-openshift renames, some services were not enabled
after upgrade. Added enabled directives to all service restart lines in the
upgrade playbook.
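Both of the changes above amount to making sure services end up running and enabled at boot rather than merely restarted. As a rough Python illustration of the equivalent systemctl operations (the real changes are Ansible service tasks, and the unit name here is only an example):

# Illustrative only: the equivalent of a "started + enabled" service task.
# Requires privileges to manage units; the unit name is an example.
import subprocess

def ensure_started_and_enabled(unit):
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "start", unit], check=True)

if __name__ == "__main__":
    ensure_started_and_enabled("atomic-openshift-node")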
A late change to the original PR was not properly tested. There is a problem in the facts when upgrading with the openshift-enterprise deployment type: the system facts start reporting data_dir and config_base as referencing origin directories, which are not yet symlinked to their previous openshift variants. To correct this, we watch for the scenario where these evaluate to origin directories that don't exist while the openshift ones do (installation can still point at the origin variety).
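A minimal sketch of that fallback, with the concrete paths assumed for illustration; this is not the actual facts module code:

# Illustrative only: prefer the origin-style directory, but if it does not
# exist yet while the older openshift-style directory does, keep using the
# openshift one. The concrete paths are assumptions for this example.
import os

def resolve_config_base(origin_path="/etc/origin", openshift_path="/etc/openshift"):
    if not os.path.exists(origin_path) and os.path.exists(openshift_path):
        return openshift_path
    return origin_path

if __name__ == "__main__":
    print(resolve_config_base())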
If this file exists on disk, the installer will use it when the user didn't specify an ansible config file on the CLI. Also rename the share directory to match the rpm name (utils vs. util).
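A minimal sketch of that selection order; the packaged default path is an assumption, and this is not the installer's actual code:

# Illustrative only: prefer an ansible.cfg passed on the CLI, otherwise fall
# back to the packaged default if it exists. The default path is an assumption.
import os

def choose_ansible_config(cli_config=None,
                          default_config="/usr/share/atomic-openshift-utils/ansible.cfg"):
    if cli_config:
        return cli_config
    if os.path.exists(default_config):
        return default_config
    return None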