Commit message

Add missing atomic- and openshift-enterprise

Some checks related to *enterprise deployments were still only looking for
the "enterprise" deployment_type. Update them to also cover the
atomic-enterprise and openshift-enterprise deployment types.
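
A minimal sketch of the broadened check, assuming a guard task of this shape
(the task itself is illustrative, not the actual diff):

```yaml
# Illustrative: accept all three enterprise deployment types.
- fail:
    msg: "This play requires an enterprise deployment_type"
  when: deployment_type not in ['enterprise', 'atomic-enterprise', 'openshift-enterprise']
```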

Refactor storage options

Replace fail with a warn and prompt if running ansible from a host that will
be rebooted. Re-organize playbooks.

Blocks running ansible on a host that will be restarted. Can restart just
the services or, optionally, the full system.
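
A sketch of how such a warn-and-prompt could look in Ansible (the
control-host detection via a pipe lookup is an assumption, not the actual
implementation):

```yaml
# Illustrative: pause for confirmation instead of failing outright when
# the Ansible control host is among the hosts about to be restarted.
- name: Warn before restarting the control host
  pause:
    prompt: >-
      This host will be restarted as part of the run.
      Press Enter to continue or Ctrl+C to abort.
  when: inventory_hostname == lookup('pipe', 'hostname -f')
```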

- Move debug_level into vars.yml and the byo inventory
- Change the variables in cluster_hosts.yml to be g_* and update the
  playbooks to use those values directly instead of setting them indirectly
- Add a new g_all_hosts entry in cluster_hosts.yml for the update playbook to
  use instead of unioning all host types within the playbook (see the sketch
  below)
- Add a cluster_hosts.yml for the byo playbook
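
A minimal sketch of what such a cluster_hosts.yml could look like; the g_*
names other than g_all_hosts, and the inventory group names, are illustrative:

```yaml
# cluster_hosts.yml (illustrative)
g_etcd_hosts: "{{ groups.etcd | default([]) }}"
g_master_hosts: "{{ groups.masters | default([]) }}"
g_node_hosts: "{{ groups.nodes | default([]) }}"

# One list of every host, so the update playbook no longer has to union
# the individual host types itself.
g_all_hosts: "{{ g_etcd_hosts | union(g_master_hosts) | union(g_node_hosts) }}"
```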

- Modify the host evaluation play to set oo_nodes_to_config from a new
  g_new_nodes_group variable, if defined, rather than from g_nodes_group, and
  skip adding the master when g_new_nodes_group is set (see the sketch below)
- Remove the byo-specific naming from
  playbooks/common/openshift-cluster/scaleup.yml and create a new
  playbooks/byo/openshift-cluster/scaleup.yml playbook
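
A sketch of the group-evaluation logic this describes (task shapes are
illustrative; g_masters_group is an assumed companion variable):

```yaml
# Illustrative: during a scaleup, only the new nodes are configured.
- name: Evaluate oo_nodes_to_config
  add_host:
    name: "{{ item }}"
    groups: oo_nodes_to_config
  with_items: "{{ groups[g_new_nodes_group] if g_new_nodes_group is defined
                  else groups[g_nodes_group] | default([]) }}"

# Illustrative: the master is only scheduled as a node on a fresh install.
- name: Add the master to oo_nodes_to_config
  add_host:
    name: "{{ groups[g_masters_group][0] }}"
    groups: oo_nodes_to_config
  when: g_new_nodes_group is not defined
```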

- Split the upgrade playbooks in two: one for 3.0 minor upgrades and one for
  3.0 to 3.1 upgrades
- Move the upgrade playbooks from adhoc to common/openshift-cluster/upgrades
- Add byo wrapper playbooks that set the groups based on the byo conventions;
  other providers will need similar playbooks added eventually (see the
  sketch below)
- Installer wrapper updates for the refactored upgrade playbooks
- Call the new 3.0 to 3.1 upgrade playbook
- Various fixes for edge cases hit with a really old config lying around
- Fix the output of host facts to show the connect_to value
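
A rough sketch of what a byo wrapper could look like, mapping the byo
inventory groups onto the group names the common playbook expects (the file
path and group names are illustrative):

```yaml
# playbooks/byo/openshift-cluster/upgrades/upgrade.yml (illustrative)
- hosts: localhost
  gather_facts: no
  tasks:
    # Re-publish the byo groups under the names the common playbook uses.
    - add_host:
        name: "{{ item }}"
        groups: g_all_hosts
      with_items: "{{ groups.masters | union(groups.nodes | default([])) }}"

- include: ../../../common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml
```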

Set loglevel=2 as our default across the board

This seems to be what's used in other places.
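
For instance, the default could sit in a shared vars file (placement is an
assumption; debug_level is the variable named elsewhere in this log):

```yaml
# vars.yml (illustrative): feeds the masters' and nodes' --loglevel option.
debug_level: 2
```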

Allows reuse outside of Vagrant, e.g. to subscribe systems on their own
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Add support to bin/cluster for specifying etcd hosts
- defaults to 0, if no etcd hosts are selected, then configures embedded
etcd
- Updates for the byo inventory file for etcd and master as node by default
- Consolidation of cluster logic more centrally into common playbook
- Added etcd config support to playbooks
- Restructured byo playbooks to leverage the common openshift-cluster playbook
- Added support to common master playbook to generate and apply external etcd
client certs from the etcd ca
- start of refactor for better handling of master certs in a multi-master
environment.
- added the openshift_master_ca and openshift_master_certificates roles to
manage master certs instead of generating them in the openshift_master
role
- added etcd host groups to the cluster update playbooks
- aded better handling of host groups when they are either not present or are
empty.
- Update AWS readme
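
A hedged sketch of the byo inventory shape this implies (host names are
illustrative); leaving [etcd] empty falls back to embedded etcd:

```ini
# inventory/byo/hosts (illustrative)
[OSEv3:children]
masters
nodes
etcd

[etcd]
etcd1.example.com

[masters]
master.example.com

[nodes]
# The master is registered as a node by default.
master.example.com
node1.example.com
```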

- Fix firewall conflict issues with co-located etcd and openshift hosts
- Add an os_firewall dependency to the etcd role
- Update the etcd template to better handle clustered and non-clustered
  installs
- Add an etcd_ca role
  - Generates a self-signed cert to manage etcd certificates, since etcd peer
    certificates are required to be both client and server certs, and the
    openshift CA will only generate client or server certs (not one
    authorized for both); see the sketch below
- Rename the openshift_etcd_certs role to etcd_certificates and update it to
  manage certificates generated from the CA managed by the etcd_ca role
- Remove the hard-coded etcd_port in openshift_facts
- Updates for the openshift-etcd common playbook
- Remove the etcd and openshift-etcd playbooks from the byo playbooks
  directory
- Add a common playbook for setting etcd launch facts
- Add an openshift-etcd common service playbook
- Remove unused variables
- Fix the tests for embedded_{etcd,dns,kube} in openshift_master
- Remove the old workaround for reloading systemd units
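
The peer-certificate constraint comes down to the extendedKeyUsage on the
certs the etcd CA signs; a minimal openssl extension sketch (the section name
is illustrative):

```ini
# openssl.cnf fragment (illustrative): etcd peer certs must be valid as
# both TLS client and TLS server, which a single-purpose CA profile
# cannot produce.
[ etcd_peer_ext ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
```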

- Add an initial etcd role
- Add an etcd playbook to create etcd client certs
- Hook the master up to etcd
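
Hooking the master up to external etcd ultimately lands in the master
config's etcdClientInfo stanza; a sketch with illustrative paths and URLs:

```yaml
# master-config.yaml fragment (illustrative)
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://etcd1.example.com:2379
```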

- Templatize the node config
- Templatize the master config
- Integrate the sdn changes
- Updates for openshift_facts
  - Add support for node, master and sdn related changes
  - registry_url
  - Add identity provider facts
- Remove the openshift_sdn_* roles
- Install httpd-tools if configuring htpasswd auth (see the sketch below)
- Remove references to external_id
  - Setting external_id interferes with nodes associating with the generated
    node object when pre-registering nodes
- osc/oc and osadm/oadm binary detection in openshift_facts

Misc changes:
- Make the non-errata puddle the default for the byo example
- Comment out the master in the list of nodes in inventory/byo/hosts
- Remove non-error errors from the fluentd_* roles
- Use the admin kubeconfig instead of openshift-client
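
A hedged sketch of the templatized identity-provider section of the master
config with htpasswd auth (the provider name and file path are illustrative):

```yaml
# master-config.yaml.j2 fragment (illustrative)
oauthConfig:
  identityProviders:
  - name: htpasswd_auth
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/openshift/htpasswd
```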

- Fix a bug where playbooks/byo/config.yml would error if only a master is
  defined in the inventory
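
One way such a fix can look: treat the missing group as empty rather than
erroring (a sketch under that assumption, not the actual patch):

```yaml
# Illustrative: a missing [nodes] group becomes an empty list instead of
# an undefined-variable error during group evaluation.
- set_fact:
    g_node_hosts: "{{ groups.nodes | default([]) }}"
```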

Configuration updates for latest builds

- Switch to using create-node-config
- Switch the sdn services to use etcd over SSL
  - This re-uses the client certificate deployed on each node
- Additional node registration changes
- Do not assume that the metadata service is available in the openshift_facts
  module
- Call systemctl daemon-reload after installing openshift-master,
  openshift-sdn-master, openshift-node and openshift-sdn-node (see the sketch
  below)
- Fix a bug overriding openshift_hostname and openshift_public_hostname in
  the byo playbooks
- Start moving the generated configs to /etc/openshift
- Some custom module cleanup
- Add a known issue with ansible-1.9 to README_OSE.md
- Update to genericize the kubernetes_register_node module
  - Default to using kubectl for commands
  - Allow for overriding kubectl_cmd
  - In the openshift_register_node role, override kubectl_cmd to
    openshift_kube
- Set a default openshift_registry_url for enterprise when deployment_type is
  enterprise
- Fix openshift_register_node for the client config change
- Ensure that the master certs directory is created
- Add roles and filter_plugin symlinks to playbooks/common/openshift-master
  and openshift-node
- Allow a non-root user with sudo nopasswd access
- Updates for README_OSE.md
- Update the byo inventory to add additional comments
- Updates for node cert/config sync to work with a non-root user using sudo
- Move the node config/certs to /etc/openshift/node
- Don't use a path for mktemp; addresses
  https://github.com/openshift/openshift-ansible/issues/154

Create common playbooks:
- Create common/openshift-master/config.yml
- Create common/openshift-node/config.yml
- Update the playbooks to use the new common playbooks
- Update the launch playbooks to call the update playbooks
- Fix openshift_registry and openshift_node_ip usage

Set the default deployment type to origin:
- openshift_repo updates for enabling origin deployments
  - Also separate the repo and gpgkey file structure
  - Remove the kubernetes repo since it isn't currently needed
- Full deployment type support for bin/cluster
  - Honor the OS_DEPLOYMENT_TYPE env variable
  - Add a --deployment-type option, which overrides OS_DEPLOYMENT_TYPE if set
  - If neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, default to
    origin installs

Additional changes:
- Add a separate config action to bin/cluster that runs the ansible
  configuration but does not update packages
- Some more duplication reduction in the cluster playbooks
- Rename the task files in the playbooks dirs to have "tasks" in their names
  for clarity
- Update the aws/gce scripts to use a directory for inventory (otherwise,
  when no hosts are returned from dynamic inventory, there is an error)

libvirt refactor and update:
- Add a libvirt dynamic inventory
- Updates to use the dynamic inventory for libvirt
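
The daemon-reload step, sketched as an Ansible task of that era (Ansible 1.9
predates the systemd module's daemon_reload option, so a plain command is
assumed):

```yaml
# Illustrative: pick up the unit files the packages just installed.
- name: Reload systemd units
  command: systemctl daemon-reload
```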

- Add byo playbooks
- Add a byo (example) inventory
- Add a README_OSE.md for getting started with Enterprise deployments
- Add an ansible.cfg as an example of configuration helpful for
  playbooks/roles
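
A small sketch of what such an example ansible.cfg might contain (the
specific settings are illustrative; all are standard Ansible options):

```ini
# ansible.cfg (illustrative)
[defaults]
# Avoid interactive SSH host-key prompts on freshly provisioned hosts.
host_key_checking = False
# Keep a log of runs for debugging.
log_path = ./ansible.log
# Fail fast on undefined variables instead of templating blanks.
error_on_undefined_vars = True
```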