move zbxapi module to a new os_zabbix role
- cleans up repo root a bit

fixed the opssh default output behavior to be consistent with pssh. Also fixed a bug in how directories are named for --outdir and --errdir.

Node registration changes (master)

- added byo playbooks
- added byo (example) inventory
- added a README_OSE.md for getting started with Enterprise deployments
- Added an ansible.cfg as an example of configuration helpful for playbooks/roles

- Add openshift_facts role and module
- Created new role openshift_facts that contains an openshift_facts module (see the sketch after this list)
- Refactor openshift_* roles to use openshift_facts instead of relying on defaults
- Refactor playbooks to use openshift_facts
- Cleanup inventory group_vars
- Update defaults
- update openshift_master role firewall defaults
- remove etcd peer port, since we will not be supporting clustered embedded etcd
- remove 8444 since console now runs on the api port by default
- add 8444 and 7001 to disabled services to ensure removal if updating
- Add new role os_env_extras_node that is a subset of the docker role
- previously, we were starting/enabling docker which was causing issues with some installations
- Does not install or start docker, since the openshift-node role will handle that for us
- Only adds root to the dockerroot group
- Update playbooks to use os_env_extras_node role instead of docker role
- os_firewall bug fixes
- ignore ip6tables for now, since we are not configuring any ipv6 rules
- if installing a package, do a daemon-reload before starting/enabling the service
- Add aws support to bin/cluster
- Add list action to bin/cluster
- Add update action to bin/cluster
- cleanup some stray debug statements
- some variable renaming for clarity
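A facts-gathering module of the kind described above can be sketched as a small Ansible module that returns ansible_facts. The following is only an illustration of the pattern; the fact keys, role choices, and defaults are assumptions for the example, not the actual openshift_facts implementation.

    #!/usr/bin/python
    # Illustrative sketch of a fact-gathering Ansible module in the spirit of
    # openshift_facts. Fact names and defaults are placeholders, not the real code.
    import socket

    from ansible.module_utils.basic import AnsibleModule


    def gather_facts(role):
        # Derive host-level facts once, so roles can consume them instead of
        # falling back to per-role defaults.
        hostname = socket.getfqdn()
        return {
            'common': {'hostname': hostname},
            'role': role,
        }


    def main():
        module = AnsibleModule(
            argument_spec=dict(
                role=dict(required=True, choices=['common', 'master', 'node']),
            ),
            supports_check_mode=True,
        )
        facts = gather_facts(module.params['role'])
        # Values returned under ansible_facts become available to later tasks.
        module.exit_json(changed=False, ansible_facts={'openshift': facts})


    if __name__ == '__main__':
        main()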
Adding zabbix ansible module with a generic playbook example to fetch problem triggers. Also added oo_flatten to filters for arrays of arrays.
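An oo_flatten-style filter for arrays of arrays can live in a small Ansible filter plugin; the sketch below only shows the general shape of such a plugin and is not the repository's actual code.

    # Illustrative filter plugin providing an oo_flatten-style filter that
    # flattens a list of lists one level, e.g. [[1, 2], [3]] -> [1, 2, 3].


    class FilterModule(object):
        @staticmethod
        def oo_flatten(data):
            if not isinstance(data, list):
                raise TypeError('oo_flatten expects a list of lists')
            return [item for sublist in data for item in sublist]

        def filters(self):
            # Ansible calls filters() to discover the filters a plugin provides.
            return {'oo_flatten': self.oo_flatten}

In a playbook this would then be used as something like {{ nested_groups | oo_flatten }}.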
Fixing bash completion for ossh/oscp. Adding for opssh.

created a python package named openshift_ansible

added config file support to opssh, ossh, and oscp
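One way to layer a config file under command-line defaults is an INI-style file read before argument parsing. The path, section, and option names below are assumptions for illustration, not necessarily what opssh/ossh/oscp actually read.

    # Sketch: seed CLI defaults from an optional INI config file.
    # The config path and option names here are hypothetical.
    import argparse
    import configparser
    import os

    CONFIG_PATH = '/etc/openshift_ansible/openshift_ansible.conf'  # assumed path


    def load_defaults(section='main'):
        defaults = {'user': 'root', 'parallelism': '10'}
        if os.path.isfile(CONFIG_PATH):
            config = configparser.ConfigParser()
            config.read(CONFIG_PATH)
            if config.has_section(section):
                defaults.update(dict(config.items(section)))
        return defaults


    def main():
        defaults = load_defaults()
        parser = argparse.ArgumentParser(description='opssh-style tool (sketch)')
        parser.add_argument('--user', default=defaults['user'])
        parser.add_argument('--parallelism', type=int,
                            default=int(defaults['parallelism']))
        args = parser.parse_args()
        print(args)


    if __name__ == '__main__':
        main()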
added the ability to have a config file in /etc/openshift_ansible to multi_ec2.py.
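Having the inventory script look for a config file under /etc/openshift_ansible before falling back to built-in defaults could look roughly like this sketch; the file name and keys are assumptions for the example, not the script's real schema.

    # Sketch: optional YAML config for an inventory script, with built-in
    # defaults as a fallback. File name and keys are hypothetical.
    import os

    import yaml

    DEFAULT_CONFIG = '/etc/openshift_ansible/multi_ec2.yaml'  # assumed location


    def load_config(path=DEFAULT_CONFIG):
        config = {
            'cache_max_age': 300,
            'accounts': [],
        }
        if os.path.isfile(path):
            with open(path) as stream:
                config.update(yaml.safe_load(stream) or {})
        return config


    if __name__ == '__main__':
        print(load_config())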
* Refactor bin/cluster to use argparse.subparsers
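Modeling bin/cluster's sub-commands with argparse.subparsers could look like the sketch below; the specific action names (beyond the list and update actions mentioned earlier) and options are assumptions for illustration.

    # Sketch: a bin/cluster-style CLI built on argparse sub-commands.
    # Action names and options are illustrative.
    import argparse


    def main():
        parser = argparse.ArgumentParser(description='cluster tool (sketch)')
        parser.add_argument('--provider', default='gce', choices=['gce', 'aws'])
        subparsers = parser.add_subparsers(dest='action')
        subparsers.required = True

        for action in ('create', 'terminate', 'list', 'update'):
            sub = subparsers.add_parser(action, help='run the %s action' % action)
            sub.add_argument('cluster_id', help='name of the cluster to act on')

        args = parser.parse_args()
        # A real implementation would dispatch to the matching ansible-playbook run.
        print('%s cluster %s on %s' % (args.action, args.cluster_id, args.provider))


    if __name__ == '__main__':
        main()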
Use ansible playbook to initialize openshift cluster

on inventory/playbook variables for openshift_hostname

- Remove default value for openshift_hostname and make it required
- Remove workarounds that are no longer needed
- Remove resources parameter from openshift_register_node module
- pre-create node certificates for each node before registering node
- distribute created node certificates to each node
- Move node registration logic to a new openshift_register_nodes role
- This is because we now have to run the steps on a master as opposed to on the nodes, as we were doing previously.
- Rename openshift_register_node module to kubernetes_register_node, one more step toward genericizing it enough for upstreaming; however, there are still plenty of openshift-specific commands that need to be genericized.
- Rename repos role to openshift_repos
- Make openshift_repos a dependency of openshift_common
- Add README and metadata for openshift_repos
- Playbook updates for role rename
- Verify libselinux-python is installed, otherwise some of the built-in modules we use fail

- Does not install or start docker, since the openshift-node role will handle that for us
- Only adds root to the dockerroot group and configures the enter-container script.

- Add verify_chain action to os_firewall_manage_iptables module
- Update os_firewall module to use os_firewall_manage_iptables for creating the DOCKER chain.
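A verify_chain-style check, confirming that an iptables chain such as DOCKER exists and creating it when missing, can be done by shelling out to iptables. The sketch below illustrates the idea only and is not the module's actual implementation; it also needs root privileges to run.

    # Sketch: verify an iptables chain exists, creating it if missing.
    # `iptables -n -L <chain>` exits non-zero when the chain is absent.
    import subprocess


    def verify_chain(chain='DOCKER', create_if_missing=True):
        exists = subprocess.call(
            ['iptables', '-n', '-L', chain],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ) == 0
        if not exists and create_if_missing:
            subprocess.check_call(['iptables', '-N', chain])
            exists = True
        return exists


    if __name__ == '__main__':
        print(verify_chain('DOCKER'))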
- don't use set_fact on localhost for openshift_master_ips and openshift_master_public_ips
- we are only using it for the configure play
- move definition to vars section of configure play
- otherwise we'd have to set openshift_master_ips and openshift_master_public_ips from hostvars['localhost'], and since we aren't referencing it anywhere else, might as well just do it in vars instead of set_fact on localhost.

os_update_latest after repo config

* Added playbooks/gce/openshift-cluster
* Added bin/cluster (will replace cluster.sh)