path: root/playbooks
Commit message | Author | Age | Files | Lines
* openshift_fact and misc fixes | Jason DeTiberus | 2015-05-06 | 1 | -3/+2
    - Do not attempt to fetch a file to the same file location when playbooks are run locally on the master
    - Fix openshift_facts when run against a host in a VPC that does not assign internal/external hostnames or IPs
    - Fix setting of labels and annotations on node instances and in openshift_facts
    - Converted openshift_facts to use JSON for local fact storage instead of an ini file; included code that should migrate existing ini users to JSON (see the sketch below)
    - Added region/zone setting to byo inventory
    - Fix fact-related bug where deployment_type was being set on the node role instead of the common role for node hosts
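    As background on the JSON local-fact storage: Ansible reads files under /etc/ansible/facts.d/ ending in .fact and parses non-executable ones as JSON (or ini), exposing them as ansible_local facts. A minimal sketch of seeding such a file; the openshift.fact path follows this repo's convention, but the keys shown are illustrative, not the module's exact schema.
    ```
    # Sketch: write local facts as JSON so the setup module reads them back
    # as ansible_local.openshift on the next fact-gathering pass.
    - name: Seed openshift local facts as JSON
      hosts: all
      tasks:
        - name: Ensure the local facts directory exists
          file:
            path: /etc/ansible/facts.d
            state: directory

        - name: Store facts as JSON instead of an ini file
          copy:
            dest: /etc/ansible/facts.d/openshift.fact
            content: "{{ {'common': {'deployment_type': 'origin'}} | to_json }}"
    ```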
* Add vagrantfile and minor bugfixes | Jason DeTiberus | 2015-04-28 | 1 | -0/+4
    - Add Vagrantfile for configuring a basic cluster
    - Add an initial readme for using vagrant
    - Explicitly set connection: local and sudo: false for localhost actions in playbooks/common/openshift-node/config.yml (see the sketch below)
    - Fix permissions issue with openshift config file for non-root user
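    For context on the connection: local fix, a minimal sketch (play and task names are assumptions) of pinning a localhost play so it neither SSHes back into the control host nor escalates privileges:
    ```
    # Sketch: bookkeeping tasks that only touch the control host should run
    # locally and without sudo; otherwise a run started on the first master
    # may try to SSH to itself or escalate for no reason.
    - name: Local bookkeeping on the control host
      hosts: localhost
      connection: local
      sudo: false          # Ansible 1.x spelling; later releases use become: false
      tasks:
        - name: Record a value for later plays
          set_fact:
            sync_tmpdir: /tmp/openshift-sync   # illustrative value only
    ```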
* lvm-direct support for aws | Jason DeTiberus | 2015-04-23 | 5 | -3/+54
    - Create a separate docker volume in aws openshift-cluster playbooks
      - default to using ephemeral storage, but allow it to be overridden
      - allow root volume settings to be overridden as well
    - Add user-data cloud-config to bootstrap the installation/configuration of docker-storage-setup (see the sketch below)
    - pylint cleanup for oo_filters.py
    - Remove leftover traces of the previously removed deployment_type tags:
      - the oo_get_deployment_type_from_groups filter in oo_filters.py
      - the cluster list playbooks' references to the oo_get_deployment_type_from_groups filter
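    A rough illustration of the user-data approach (the device name, volume group name, and values are assumptions, not the playbook's actual settings): cloud-init writes the docker-storage-setup config before docker starts, so first boot carves docker's storage out of the dedicated volume.
    ```
    #cloud-config
    # Sketch: seed /etc/sysconfig/docker-storage-setup via EC2 user-data so
    # docker-storage-setup puts docker's LVM storage on the extra volume.
    write_files:
      - path: /etc/sysconfig/docker-storage-setup
        owner: root:root
        permissions: '0644'
        content: |
          DEVS=/dev/xvdb   # assumed: the separate docker volume's device
          VG=docker_vg     # assumed volume group name
    ```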
* Allow variable overriding for ec2 deployment_vars | Jason DeTiberus | 2015-04-22 | 8 | -27/+393
    - Users can now override the deployment_vars variables with the associated ec2_* variables
    - Added deployment_type- and env-specific vars files that load some ec2_* overrides
    - Added the ability to search for AMIs by ami_name
      - this allows us to specify a base name with a wildcard to have the playbook choose the latest available image for that image name (see the sketch below)
    - Added a copy of the ec2_find_ami module that will be in ansible 2.0, until we can make ansible 2.0 a requirement
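    A minimal sketch of the wildcard search (the module copied here landed upstream in Ansible 2.0 as ec2_ami_find; the region and name pattern below are assumptions):
    ```
    # Sketch: resolve a base-name wildcard to the newest matching AMI.
    - name: Find the latest AMI for a base image name
      ec2_ami_find:
        region: us-east-1
        name: "openshift-base-*"   # wildcard matches every build of the image
        sort: creationDate         # newest first...
        sort_order: descending
        sort_end: 1                # ...and keep only the top result
      register: ami_result

    - name: Show the image that would be launched
      debug:
        msg: "Latest AMI is {{ ami_result.results[0].ami_id }}"
    ```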
* Update openshift-cluster/vars for online defaults | Wesley Hearn | 2015-04-22 | 1 | -5/+5
* Merge pull request #166 from detiber/awsTerminate | Thomas Wiest | 2015-04-21 | 4 | -121/+69
    aws terminate playbook improvements
* aws terminate playbook improvements | Jason DeTiberus | 2015-04-20 | 4 | -121/+69
    - Reduce duplication in terminate playbooks between openshift-master and openshift-node (they both now just include playbooks/aws/terminate.yml)
    - Update the openshift-cluster terminate playbook to include the new shared terminate playbook, and delete all cluster hosts at once instead of treating masters and nodes differently
    - Remove env, host-type and env-host-type tags from instances before terminating (since most users can't terminate, we are mostly just renaming instances to -terminate and stopping them, so this prevents "terminated" hosts from being returned by the dynamic inventory, at least after the cache is refreshed)
* Merge pull request #172 from detiber/aws_vpc | Thomas Wiest | 2015-04-21 | 2 | -8/+33
    add vpc support to ec2 cluster, add more overrides for variables
* add vpc support to ec2 cluster, add more overrides for variables | Jason DeTiberus | 2015-04-21 | 2 | -8/+33
* Merge pull request #164 from detiber/bugFixRunOnMaster | Thomas Wiest | 2015-04-21 | 1 | -7/+10
    Fix common node config playbook when ansible is run on the first master
* Fix common node config playbook when ansible is run on the first master | Jason DeTiberus | 2015-04-20 | 1 | -7/+10
* Merge pull request #163 from detiber/todoForSync | Thomas Wiest | 2015-04-21 | 1 | -0/+3
    Todo for sync
* Add TODO for making node certificate sync more efficient | Jason DeTiberus | 2015-04-20 | 1 | -0/+3
* Remove deployment-type tags | Jason DeTiberus | 2015-04-20 | 3 | -4/+1
* Merge pull request #139 from detiber/configUpdatesMaster | Thomas Wiest | 2015-04-20 | 66 | -819/+880
    Massive refactor, deployment-type support, config updates, reduce duplication
* Fixup typos | Jason DeTiberus | 2015-04-15 | 2 | -2/+2
* Merge pull request #19 from lhuard1A/move_pool-refresh | Jason DeTiberus | 2015-04-15 | 2 | -4/+3
    Move `virsh pool-refresh`
* Move `virsh pool-refresh` | Lénaïc Huard | 2015-04-15 | 2 | -4/+3
    The `pool-refresh` command asks libvirt to rescan the content of a volume pool. This is used to make libvirt take into account volumes that were created outside of libvirt's control, i.e. not with a `virsh` command.
    `pool-refresh` is useless after a `pool-create`, as the content is scanned at creation; it is mandatory after having created files inside an existing pool.
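    A minimal sketch of where the refresh belongs (the pool name, image path, and connection URI are assumptions): run it after writing files into an existing pool, not after creating the pool.
    ```
    # Sketch: make libvirt rescan a pool after a volume was created behind
    # its back with qemu-img.
    - name: Create a disk image directly inside the pool's directory
      command: qemu-img create -f qcow2 /var/lib/libvirt/images/node0.qcow2 10G

    - name: Ask libvirt to rescan the pool so the new volume is visible
      command: virsh -c qemu:///system pool-refresh default
    ```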
* Merge pull request #20 from lhuard1A/locale_proof | Jason DeTiberus | 2015-04-15 | 2 | -2/+2
    Make the error message checks locale proof
* Make the error message checks locale proof | Lénaïc Huard | 2015-04-15 | 2 | -2/+2
    On a computer which has a locale set, the error messages look like this:
    ```
    $ virsh net-info foo
    erreur :impossible de récupérer le réseau « foo »
    erreur :Réseau non trouvé : no network with matching name 'foo'
    ```
    ```
    $ virsh pool-info foo
    erreur :impossible de récupérer le pool « foo »
    erreur :Pool de stockage introuvable : no storage pool with matching name 'foo'
    ```
    The classical way to make those tests locale proof is to force a given locale, like this:
    ```
    $ LANG=POSIX virsh net-info foo
    error: failed to get network 'foo'
    error: Réseau non trouvé : no network with matching name 'foo'
    ```
    ```
    $ LANG=POSIX virsh pool-info foo
    error: failed to get pool 'foo'
    error: Pool de stockage introuvable : no storage pool with matching name 'foo'
    ```
    It looks like the "Network not found" and "Storage pool not found" parts of the message are generated by the `libvirtd` daemon and are not subject to the locale of the `virsh` client.
    The clean fix would be patching `libvirt` so that `virsh` sends its locale to the `libvirtd` daemon. In the meantime, it is safer to have our playbook match the part of the message which is not subject to the daemon locale.
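    A minimal sketch of such a locale-proof check (the network name, URI, and exact condition are assumptions): force LANG=POSIX on the client and match only the client-generated, English half of the message.
    ```
    # Sketch: detect a missing network without tripping over translations.
    # LANG=POSIX forces the virsh client to print English; we match only that
    # part, since the rest of the line follows libvirtd's own locale.
    - name: Check whether the cluster network already exists
      command: virsh -c qemu:///system net-info openshift-cluster
      environment:
        LANG: POSIX
      register: net_info
      failed_when: net_info.rc != 0 and 'failed to get network' not in net_info.stderr
    ```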
* Fix libvirt metadata used to store ansible tags | Lénaïc Huard | 2015-04-16 | 1 | -4/+6
    According to https://libvirt.org/formatdomain.html#elementsMetadata, the `metadata` tag can contain only one top-level element per namespace. Because of that, libvirt stored only the `deployment-type-{{ deployment_type }}` tag. As a consequence, the dynamic inventory reported no `env-{{ cluster }}` group. This is problematic for the `terminate.yml` playbook, which iterates over `groups['tag-env-{{ cluster-id }}']`.
    The symptom is that `oo_hosts_to_terminate` was not defined. In the end, as Ansible couldn't iterate on the value of `groups['oo_hosts_to_terminate']`, it iterated on its letters:
    ```
    TASK: [Destroy VMs] ***********************************************************
    failed: [localhost] => (item=['g', 'destroy']) => {"failed": true, "item": ["g", "destroy"]}
    msg: virtual machine g not found
    failed: [localhost] => (item=['g', 'undefine']) => {"failed": true, "item": ["g", "undefine"]}
    msg: virtual machine g not found
    failed: [localhost] => (item=['r', 'destroy']) => {"failed": true, "item": ["r", "destroy"]}
    msg: virtual machine r not found
    failed: [localhost] => (item=['r', 'undefine']) => {"failed": true, "item": ["r", "undefine"]}
    msg: virtual machine r not found
    failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
    msg: virtual machine o not found
    failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
    msg: virtual machine o not found
    failed: [localhost] => (item=['u', 'destroy']) => {"failed": true, "item": ["u", "destroy"]}
    msg: virtual machine u not found
    failed: [localhost] => (item=['u', 'undefine']) => {"failed": true, "item": ["u", "undefine"]}
    msg: virtual machine u not found
    failed: [localhost] => (item=['p', 'destroy']) => {"failed": true, "item": ["p", "destroy"]}
    msg: virtual machine p not found
    failed: [localhost] => (item=['p', 'undefine']) => {"failed": true, "item": ["p", "undefine"]}
    msg: virtual machine p not found
    failed: [localhost] => (item=['s', 'destroy']) => {"failed": true, "item": ["s", "destroy"]}
    msg: virtual machine s not found
    failed: [localhost] => (item=['s', 'undefine']) => {"failed": true, "item": ["s", "undefine"]}
    msg: virtual machine s not found
    failed: [localhost] => (item=['[', 'destroy']) => {"failed": true, "item": ["[", "destroy"]}
    msg: virtual machine [ not found
    failed: [localhost] => (item=['[', 'undefine']) => {"failed": true, "item": ["[", "undefine"]}
    msg: virtual machine [ not found
    failed: [localhost] => (item=["'", 'destroy']) => {"failed": true, "item": ["'", "destroy"]}
    msg: virtual machine ' not found
    failed: [localhost] => (item=["'", 'undefine']) => {"failed": true, "item": ["'", "undefine"]}
    msg: virtual machine ' not found
    failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
    msg: virtual machine o not found
    failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
    msg: virtual machine o not found
    etc…
    ```
* fix missed absolute path reference to mktemp | Jason DeTiberus | 2015-04-15 | 1 | -1/+1
* Configuration updates for latest builds and major refactor | Jason DeTiberus | 2015-04-14 | 67 | -892/+952
    Configuration updates for latest builds:
    - Switch to using create-node-config
    - Switch sdn services to use etcd over SSL
      - This re-uses the client certificate deployed on each node
    - Additional node registration changes
    - Do not assume that the metadata service is available in the openshift_facts module
    - Call systemctl daemon-reload after installing openshift-master, openshift-sdn-master, openshift-node, openshift-sdn-node
    - Fix bug overriding openshift_hostname and openshift_public_hostname in byo playbooks
    - Start moving generated configs to /etc/openshift
    - Some custom module cleanup
    - Add known issue with ansible-1.9 to README_OSE.md
    - Update to genericize the kubernetes_register_node module
      - Default to using kubectl for commands
      - Allow for overriding kubectl_cmd
      - In the openshift_register_node role, override kubectl_cmd to openshift_kube
    - Set default openshift_registry_url for enterprise when deployment_type is enterprise
    - Fix openshift_register_node for client config change
    - Ensure that the master certs directory is created
    - Add roles and filter_plugin symlinks to playbooks/common/openshift-master and node
    - Allow a non-root user with sudo nopasswd access
    - Updates for README_OSE.md
    - Update byo inventory, adding additional comments
    - Updates for node cert/config sync to work with a non-root user using sudo
    - Move node config/certs to /etc/openshift/node
    - Don't use path for mktemp; addresses https://github.com/openshift/openshift-ansible/issues/154

    Create common playbooks:
    - create common/openshift-master/config.yml
    - create common/openshift-node/config.yml
    - update playbooks to use the new common playbooks
    - update launch playbooks to call update playbooks
    - fix openshift_registry and openshift_node_ip usage

    Set default deployment type to origin:
    - openshift_repo updates for enabling origin deployments
      - also separate repo and gpgkey file structure
      - remove the kubernetes repo since it isn't currently needed
    - full deployment type support for bin/cluster
      - honor the OS_DEPLOYMENT_TYPE env variable
      - add a --deployment-type option, which will override OS_DEPLOYMENT_TYPE if set
      - if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, default to origin installs

    Additional changes:
    - Add a separate config action to bin/cluster that runs ansible config but does not update packages
    - Some more duplication reduction in cluster playbooks
    - Rename task files in playbooks dirs to have "tasks" in their name for clarity
    - update aws/gce scripts to use a directory for inventory (otherwise, when there are no hosts returned from dynamic inventory, there is an error)

    libvirt refactor and update:
    - add libvirt dynamic inventory
    - updates to use dynamic inventory for libvirt
* update tower ami image to latest libra-ops-rhel7 | Troy Dawson | 2015-04-16 | 1 | -1/+1
* Merge pull request #152 from net-engine/aws_readme | Thomas Wiest | 2015-04-14 | 1 | -1/+2
    Launch openshift on AWS issues
* Add extra information for AWS README | Ricardo Bernardeli | 2015-04-13 | 1 | -1/+2
    Make the security group an environment variable with a default of 'public' (see the sketch below)
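    A minimal sketch of that pattern (the variable name ec2_security_group is an assumption): read the value from the environment and fall back to 'public' when it is unset or empty.
    ```
    # Sketch: an environment variable overrides the security group; the
    # boolean second argument to default() applies the fallback even when
    # the variable exists but is empty.
    - name: Show which security group would be used
      hosts: localhost
      connection: local
      vars:
        ec2_security_group: "{{ lookup('env', 'ec2_security_group') | default('public', true) }}"
      tasks:
        - debug:
            msg: "Launching into security group {{ ec2_security_group }}"
    ```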
* Add libvirt as a provider | Lénaïc Huard | 2015-04-10 | 18 | -0/+463
* move zbxapi module to a new os_zabbix role | Jason DeTiberus | 2015-04-08 | 3 | -2/+4
    - cleans up repo root a bit
* Add byo playbooks and enterprise docs | Jason DeTiberus | 2015-04-03 | 9 | -0/+100
    - added byo playbooks
    - added byo (example) inventory
    - added a README_OSE.md for getting started with Enterprise deployments
    - added an ansible.cfg as an example of configuration helpful for playbooks/roles
* openshift_facts role/module refactor default settings | Jason DeTiberus | 2015-04-03 | 28 | -171/+502
    - Add openshift_facts role and module
      - Created a new role, openshift_facts, that contains an openshift_facts module
      - Refactor openshift_* roles to use openshift_facts instead of relying on defaults
      - Refactor playbooks to use openshift_facts
      - Cleanup inventory group_vars
    - Update defaults
      - update openshift_master role firewall defaults
        - remove the etcd peer port, since we will not be supporting clustered embedded etcd
        - remove 8444, since the console now runs on the api port by default
        - add 8444 and 7001 to disabled services to ensure removal if updating
    - Add new role os_env_extras_node that is a subset of the docker role
      - previously, we were starting/enabling docker, which was causing issues with some installations
      - Does not install or start docker, since the openshift-node role will handle that for us
      - Only adds root to the dockerroot group
      - Update playbooks to use the os_env_extras_node role instead of the docker role
    - os_firewall bug fixes
      - ignore ip6tables for now, since we are not configuring any ipv6 rules
      - if installing a package, do a daemon-reload before starting/enabling the service
    - Add aws support to bin/cluster
    - Add list action to bin/cluster
    - Add update action to bin/cluster
    - cleanup some stray debug statements
    - some variable renaming for clarity
* Adding the zabbix module along with a generic playbook to fetch current problem triggers. Also added oo_flatten to filters for arrays of arrays. | Kenny Woodson | 2015-04-01 | 3 | -0/+41
* * repos role renamed to openshift_repos | Jhon Honce | 2015-03-24 | 1 | -1/+1
* Revert "Jwhonce wip/cluster"Jhon Honce2015-03-241-1/+0
|
* gce inventory/playbook updates for node registration changes | Jason DeTiberus | 2015-03-24 | 1 | -3/+3
* Rename repos role to openshift_repos | Jason DeTiberus | 2015-03-24 | 1 | -1/+1
    - Rename repos role to openshift_repos
    - Make openshift_repos a dependency of openshift_common
    - Add README and metadata for openshift_repos
    - Playbook updates for role rename
    - Verify libselinux-python is installed, otherwise some of the built-in modules we use fail
* * Updates from code reviews | Jhon Honce | 2015-03-24 | 3 | -1/+3
* Add new role os_env_extras_node that is a subset of the docker role | Jason DeTiberus | 2015-03-24 | 1 | -1/+1
    - Does not install or start docker, since the openshift-node role will handle that for us
    - Only adds root to the dockerroot group and configures the enter-container script
* * Add DOCKER chain to iptables | Jhon Honce | 2015-03-24 | 1 | -5/+0
* use more specific variable names in gce/openshift-cluster/launch.yml | Jason DeTiberus | 2015-03-24 | 1 | -6/+6
* replace oo_hosts_to_config with oo_nodes_to_config and oo_masters_to_config | Jason DeTiberus | 2015-03-24 | 7 | -17/+16
* Fix openshift_master_ips and openshift_master_public_ips resolution | Jason DeTiberus | 2015-03-24 | 1 | -2/+1
    - Don't use set_fact on localhost for openshift_master_ips and openshift_master_public_ips
      - we are only using them for the configure play, so move the definition to the vars section of the configure play (see the sketch below)
      - otherwise we'd have to set openshift_master_ips and openshift_master_public_ips from hostvars['localhost'], and since we aren't referencing them anywhere else, we might as well just do it in vars instead of set_fact on localhost
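    A minimal sketch of the difference (group, play, and variable names are assumptions; master_ips stands in for however the addresses are actually gathered):
    ```
    # Anti-pattern being removed: a localhost play whose only job is to
    # stash a value for a later play via set_fact.
    - hosts: localhost
      tasks:
        - set_fact:
            openshift_master_ips: "{{ master_ips }}"

    # Replacement: define the value in the vars of the play that consumes
    # it, so no task has to reach into hostvars['localhost'].
    - name: Configure Instances
      hosts: oo_nodes_to_config
      vars:
        openshift_master_ips: "{{ master_ips }}"
      tasks:
        - debug:
            var: openshift_master_ips
    ```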
* add repos role to gce cluster launch so that we are applying os_update_latest after repo config | Jason DeTiberus | 2015-03-24 | 1 | -0/+1
* Use env for gce params | Jason DeTiberus | 2015-03-24 | 1 | -6/+6
* Use ansible playbook to initialize openshift cluster | Jhon Honce | 2015-03-24 | 1 | -0/+1
    * Added playbooks/gce/openshift-cluster
    * Added bin/cluster (will replace cluster.sh)
* Various fixes | Jason DeTiberus | 2015-03-24 | 4 | -89/+129
    - playbooks/gce/openshift-cluster:
      - Remove some stray debugging statements
      - Some minor formatting fixes
        - removing unnecessary quotes
        - cleaning up some jinja templates for readability
      - add a play to the launch playbook to apply the os_update_latest role on all hosts in the new environment
      - improve setting groups and gce_public_ip when using the add_host module (see the sketch below)
        - set gce_public_ip as a variable for the host, using the returned gce instance_data
        - add a group for each tag configured on the host (pre-pending tag_ to the tag name)
      - update the openshift-master/config.yml and openshift-node/config.yml includes to use the tag_env-host-type groups
    - openshift-{master,node}/config.yml
      - Some cleanup
        - remove some extraneous quotes
        - remove connection: ssh from remote hosts, since it is the default
        - remove user: root and instead set ansible_ssh_user in inventory/gce/group_vars/all
      - set openshift_public_ip and openshift_env to templated values in inventory/gce/group_vars/all as well
      - no longer set openshift_node_ips for the master host, since nodes will now register themselves when they are configured (prevents a reboot on adding nodes)
      - stop setting openshift_master_ips and openshift_public_master_ips via set_fact and instead use the vars: of the 'Configure Instances' play
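    A minimal sketch of the add_host pattern described above (the instance fields and tag layout are assumptions about what the gce module returns): each launched instance is added to the in-memory inventory with its public IP as a host variable and one tag_<name> group per configured tag.
    ```
    # Sketch: register launched instances so later plays can target
    # tag_env-host-type style groups; extra key/value pairs on add_host
    # become host variables (here, gce_public_ip).
    - name: Add new instances to the inventory
      add_host:
        name: "{{ item.name }}"
        groups: "{{ item.tags | map('regex_replace', '^', 'tag_') | join(',') }}"
        gce_public_ip: "{{ item.public_ip }}"
      with_items: gce.instance_data
    ```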
* add roles symlink to playbooks/gce/openshift-cluster to allow launch to call os_update_latest role | Jason DeTiberus | 2015-03-24 | 1 | -0/+1
* Use ansible playbook to initialize openshift cluster | Jhon Honce | 2015-03-24 | 9 | -6/+149
    * Added playbooks/gce/openshift-cluster
    * Added bin/cluster (will replace cluster.sh)
* Merge pull request #118 from liangxia/master | Thomas Wiest | 2015-03-24 | 2 | -2/+2
    minor fix
* minor fix | liangxia | 2015-03-19 | 2 | -2/+2
* Rename repos role to openshift_repos | Jason DeTiberus | 2015-03-18 | 4 | -6/+2
    - Rename repos role to openshift_repos
    - Make openshift_repos a dependency of openshift_common
    - Add README and metadata for openshift_repos
    - Playbook updates for role rename
    - Verify libselinux-python is installed, otherwise some of the built-in modules we use fail