Commit messages
- Added byo playbooks (see the sketch after this list)
- Added a byo (example) inventory
- Added a README_OSE.md for getting started with Enterprise deployments
- Added an ansible.cfg as an example of configuration that is helpful for
  playbooks and roles
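
As a rough illustration of how the byo ("bring your own" hosts) entry point could tie into the existing plays, the sketch below simply reuses the master and node configuration playbooks against groups the user defines in the example inventory. The include paths and group names here are assumptions for illustration, not the actual playbook contents.

    # Hypothetical byo entry-point playbook: run the existing configuration
    # plays against user-supplied hosts (e.g. [masters] and [nodes] groups
    # defined in the byo example inventory).
    - include: openshift-master/config.yml
    - include: openshift-node/config.yml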
- Add openshift_facts role and module
- Created new role openshift_facts that contains an openshift_facts module
- Refactor openshift_* roles to use openshift_facts instead of relying on
defaults
- Refactor playbooks to use openshift_facts
- Cleanup inventory group_vars
- Update defaults
- update openshift_master role firewall defaults (sketched after this list)
- remove etcd peer port, since we will not be supporting clustered embedded
etcd
- remove 8444 since console now runs on the api port by default
- add 8444 and 7001 to disabled services to ensure removal if updating
- Add new role os_env_extras_node that is a subset of the docker role
- previously, we were starting/enabling docker which was causing issues with some
installations
- Does not install or start docker, since the openshift-node role will
handle that for us
- Only adds root to the dockerroot group
- Update playbooks to use os_env_extras_node role instead of docker role
- os_firewall bug fixes
- ignore ip6tables for now, since we are not configuring any ipv6 rules
- if installing a package, do a daemon-reload before starting/enabling the service
- Add aws support to bin/cluster
- Add list action to bin/cluster
- Add update action to bin/cluster
- clean up some stray debug statements
- some variable renaming for clarity
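
To make the firewall-default changes listed above concrete, here is a minimal sketch of what the reworked openshift_master defaults might look like, assuming the role feeds os_firewall-style allow/deny lists and that the API (and now the console) listens on 8443. The variable names, file path, and the 8443 value are assumptions based on the repo's conventions, not a quote of the actual defaults.

    # Hypothetical roles/openshift_master/defaults/main.yml fragment
    os_firewall_allow:
    - service: OpenShift api/console https
      port: 8443/tcp
    # previously-opened ports are explicitly denied so updates remove them
    os_firewall_deny:
    - service: former web console port
      port: 8444/tcp
    - service: former embedded etcd peer port
      port: 7001/tcp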
multi_ec2.py.
Use ansible playbook to initialize openshift cluster
- playbooks/gce/openshift-cluster:
- Remove some stray debugging statements
- Some minor formatting fixes
- removing unnecessary quotes
- cleaning up some jinja templates for readability
- add a play to the launch playbook to apply the os_update_latest role on
all hosts in the new environment
- improve setting groups and gce_public_ip when using add_host module (see the sketch after this list)
- set gce_public_ip as a variable for the host using the returned gce instance_data
- add a group for each tag configured on the host (prepending tag_ to the
tag name)
- update the openshift-master/config.yml and openshift-node/config.yml
includes to use the tag_env-host-type groups
- openshift-{master,node}/config.yml
- Some cleanup
- remove some extraneous quotes
- remove connection: ssh from remote hosts, since it is the default
- remove user: root and instead set ansible_ssh_user in
inventory/gce/group_vars/all
- set openshift_public_ip and openshift_env to templated values in
inventory/gce/group_vars/all as well
- no longer set openshift_node_ips for the master host, since nodes will
register themselves now when they are configured (prevent reboot on
adding nodes)
- move setting openshift_master_ips and openshift_public_master_ips from
  set_fact tasks to the vars: of the 'Configure Instances' play
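
The add_host handling described above might look roughly like the fragment below: each launched instance is registered in memory with its public IP as a host variable and one tag_-prefixed group per GCE tag. The instance_data field names (name, public_ip, tags) and the group expression are assumptions for illustration, not the actual play.

    # Hypothetical fragment of the launch play
    - name: Add launched instances to in-memory host groups
      add_host:
        hostname: "{{ item.name }}"
        gce_public_ip: "{{ item.public_ip }}"
        groups: "tag_{{ item.tags | join(',tag_') }}"
      with_items: gce.instance_data

The tag_env-host-type style groups produced this way are then what the openshift-master/config.yml and openshift-node/config.yml includes target.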
Explicitly use python2
Some distributions use python3 as the default python.
On those, we need to explicitly use python2.
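
The commit message does not spell out the mechanism, but one common way to pin Ansible to python2 on such distributions, shown here purely as an illustration and not necessarily what this change does, is to set the interpreter explicitly for managed hosts (for the repo's own inventory scripts the equivalent would be a python2 shebang):

    # e.g. in inventory group_vars (illustrative assumption)
    ansible_python_interpreter: /usr/bin/python2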
This allows us to construct hostnames from a format string
plus ec2 tag values.
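
For illustration only: using the ec2_tag_* host variables that the EC2 inventory exposes for instance tags, such a hostname could be rendered from a template like the one below. The variable name and tag names are assumptions, not the actual option this change adds.

    # Hypothetical example: hostname built from a format string plus ec2 tags
    openshift_hostname: "{{ ec2_tag_Name }}.{{ ec2_tag_environment }}.example.com"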
providers if you ran multi_ec2.py from the inventory directory.
Added a readme so it's obvious how to run tests
Leaving this alone. Getting cleaned up in the next PR
Fixing space
pretty print string.