Diffstat (limited to 'playbooks/aws')
-rw-r--r--  playbooks/aws/BUILD_AMI.md                                 |  21
-rw-r--r--  playbooks/aws/PREREQUISITES.md                             |  40
-rw-r--r--  playbooks/aws/README.md                                    | 199
-rwxr-xr-x  playbooks/aws/openshift-cluster/accept.yml                 |  53
-rw-r--r--  playbooks/aws/openshift-cluster/add_nodes.yml              |  35
-rw-r--r--  playbooks/aws/openshift-cluster/build_ami.yml              |  38
-rw-r--r--  playbooks/aws/openshift-cluster/cluster_hosts.yml          |  23
-rw-r--r--  playbooks/aws/openshift-cluster/config.yml                 |  38
-rw-r--r--  playbooks/aws/openshift-cluster/install.yml                |  25
-rw-r--r--  playbooks/aws/openshift-cluster/launch.yml                 |  54
-rw-r--r--  playbooks/aws/openshift-cluster/library/ec2_ami_find.py    | 303
-rw-r--r--  playbooks/aws/openshift-cluster/list.yml                   |  23
-rw-r--r--  playbooks/aws/openshift-cluster/prerequisites.yml          |   8
-rw-r--r--  playbooks/aws/openshift-cluster/provision.yml              |  17
-rw-r--r--  playbooks/aws/openshift-cluster/provision_install.yml      |  16
-rw-r--r--  playbooks/aws/openshift-cluster/provision_instance.yml     |  12
-rw-r--r--  playbooks/aws/openshift-cluster/provision_nodes.yml        |  18
-rw-r--r--  playbooks/aws/openshift-cluster/provision_sec_group.yml    |  13
-rw-r--r--  playbooks/aws/openshift-cluster/provision_ssh_keypair.yml  |  12
-rw-r--r--  playbooks/aws/openshift-cluster/provision_vpc.yml          |  10
-rw-r--r--  playbooks/aws/openshift-cluster/scaleup.yml                |  32
-rw-r--r--  playbooks/aws/openshift-cluster/seal_ami.yml               |  12
-rw-r--r--  playbooks/aws/openshift-cluster/service.yml                |  31
-rw-r--r--  playbooks/aws/openshift-cluster/tasks/launch_instances.yml | 188
-rw-r--r--  playbooks/aws/openshift-cluster/templates/user_data.j2     |  22
-rw-r--r--  playbooks/aws/openshift-cluster/terminate.yml              |  77
-rw-r--r--  playbooks/aws/openshift-cluster/update.yml                 |  34
-rw-r--r--  playbooks/aws/openshift-cluster/vars.yml                   |  33
-rw-r--r--  playbooks/aws/provisioning-inventory.example.ini           |  25
-rw-r--r--  playbooks/aws/provisioning_vars.yml.example                | 120
30 files changed, 637 insertions, 895 deletions
diff --git a/playbooks/aws/BUILD_AMI.md b/playbooks/aws/BUILD_AMI.md
new file mode 100644
index 000000000..468264a9a
--- /dev/null
+++ b/playbooks/aws/BUILD_AMI.md
@@ -0,0 +1,21 @@
+# Build AMI
+
+When building an AMI for deploying a working OpenShift cluster with these
+plays, the following steps are performed:
+
+1. Create an instance, using a specified ssh key.
+2. Run openshift-ansible setup roles to ensure packages and services are correctly configured.
+3. Create the AMI.
+4. If encryption is desired:
+ - A KMS key is created with the name of $clusterid.
+ - An encrypted AMI will be produced with the $clusterid KMS key.
+5. Terminate the instance used to configure the AMI.
+
+More AMI-specific options can be found in ['openshift_aws/defaults/main.yml'](../../roles/openshift_aws/defaults/main.yml). When creating an encrypted AMI, set `openshift_aws_ami_encrypt`:
+```
+# openshift_aws_ami_encrypt: True # defaults to false
+```
+
+**Note**: When set to true, this takes the recently created AMI and encrypts it for later use. If encryption is not desired, leave the value set to false (the default). The AMI id will be fetched and used according to its most recent creation date.
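+
+For example, to build an encrypted AMI (a sketch, assuming the inventory and provisioning variables are set up as described in README.md):
+```
+$ ansible-playbook -i inventory.yml build_ami.yml -e @provisioning_vars.yml -e openshift_aws_ami_encrypt=True
+```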
diff --git a/playbooks/aws/PREREQUISITES.md b/playbooks/aws/PREREQUISITES.md
new file mode 100644
index 000000000..4f428dcc3
--- /dev/null
+++ b/playbooks/aws/PREREQUISITES.md
@@ -0,0 +1,40 @@
+# Prerequisites
+
+When seeking to deploy a working OpenShift cluster using these plays, a few
+items must be in place.
+
+These are:
+
+1) A VPC
+2) A security group to build the AMI in
+3) An SSH key pair to log in to instances
+
+These items can be provisioned ahead of time, or you can utilize the plays here
+to create these items.
+
+If you wish to provision these items yourself, or you already have these items
+provisioned and wish to utilize existing components, please refer to
+provisioning_vars.yml.example.
+
+If you wish to have these items created for you, continue with this document.
+
+# Running prerequisites.yml
+
+Warning: Running these plays will provision items in your AWS account (if not
+present), and you may incur billing charges. These plays are not suitable
+for the free-tier.
+
+## Step 1:
+Ensure you have specified all the necessary provisioning variables. See
+provisioning_vars.yml.example and README.md for more information.
+
+## Step 2:
+```
+$ ansible-playbook -i inventory.yml prerequisites.yml -e @provisioning_vars.yml
+```
+
+This will create a VPC, security group, and SSH key pair. These plays are idempotent,
+and multiple runs should result in no additional provisioning of these components.
+
+You can also use these plays to verify that existing components will be
+utilized successfully.
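+
+If you are bringing existing components instead, a minimal sketch of the
+relevant overrides (variable names from provisioning_vars.yml.example; the
+values here are placeholders) might look like:
+
+```yaml
+openshift_aws_create_vpc: false
+openshift_aws_vpc_name: my-existing-vpc
+openshift_aws_subnet_name: my-existing-subnet
+openshift_aws_create_security_groups: false
+openshift_aws_ssh_key_name: myuser_key
+```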
diff --git a/playbooks/aws/README.md b/playbooks/aws/README.md
index 99698b4d0..4e5c1017b 100644
--- a/playbooks/aws/README.md
+++ b/playbooks/aws/README.md
@@ -1,4 +1,199 @@
# AWS playbooks
-This playbook directory is meant to be driven by [`bin/cluster`](../../bin),
-which is community supported and most use is considered deprecated.
+## Provisioning
+
+With the recent desire for provisioning from customers and developers alike, the
+AWS playbook directory now supports a limited set of Ansible playbooks to
+achieve a complete cluster setup. These playbooks bring into alignment our
+desire to deploy highly scalable OpenShift clusters utilizing AWS Auto Scaling
+groups and custom AMIs.
+
+To speed up the provisioning of medium and large clusters, openshift-node
+instances are created using a pre-built AMI. A list of pre-built AMIs will
+be available soon.
+
+If the deployer wishes to build their own AMI for provisioning, instructions
+to do so are provided here.
+
+### Where do I start?
+
+Before any provisioning may occur, AWS account credentials must be present in the environment. This can be done in two ways:
+
+- Create the following file `~/.aws/credentials` with the contents (substitute your access key and secret key):
+ ```
+ [myaccount]
+ aws_access_key_id = <Your access_key here>
+ aws_secret_access_key = <Your secret access key here>
+ ```
+ From the shell:
+ ```
+ $ export AWS_PROFILE=myaccount
+ ```
+ ---
+- Alternatively, instead of using a profile, you can export your AWS credentials as environment variables.
+ ```
+ $ export AWS_ACCESS_KEY_ID=AKIXXXXXX
+ $ export AWS_SECRET_ACCESS_KEY=XXXXXX
+ ```
+
+### Let's Provision!
+
+Warning: Running these plays will provision items in your AWS account (if not
+present), and you may incur billing charges. These plays are not suitable
+for the free-tier.
+
+#### High-level overview
+- prerequisites.yml - Provisions a VPC, security groups, and SSH keys, if needed. See PREREQUISITES.md for more information.
+- build_ami.yml - Builds a custom AMI. See BUILD_AMI.md for more information.
+- provision.yml - Creates a VPC, ELBs, security groups, launch configs, ASGs, etc.
+- install.yml - Calls the openshift-ansible installer on the newly created instances.
+- provision_nodes.yml - Creates the infra and compute node scale groups.
+- accept.yml - A playbook to accept infra and compute nodes into the cluster.
+- provision_install.yml - A combination of the above playbooks (provision, install, provision_nodes, and accept).
+
+The current expected workflow is to provide an AMI with access to OpenShift repositories. A repository should be specified in the `openshift_additional_repos` parameter of the inventory file. Next, a minimal set of values in the `provisioning_vars.yml` file configures the desired settings for cluster instances. These settings are AWS specific and should be tailored to the consumer's custom AWS account settings.
+
+Values specified in provisioning_vars.yml may instead be specified in your inventory group_vars
+under the appropriate groups. Most variables can exist in the 'all' group.
+
+```yaml
+---
+# Minimum mandatory provisioning variables. See provisioning_vars.yml.example
+# for more information.
+openshift_deployment_type: # 'origin' or 'openshift-enterprise'
+openshift_release: # example: v3.7
+openshift_pkg_version: # example: -3.7.0
+openshift_aws_ssh_key_name: # example: myuser_key
+openshift_aws_base_ami: # example: ami-12345678
+openshift_aws_iam_cert_path: # example: '/path/to/wildcard.<clusterid>.example.com.crt'
+openshift_aws_iam_cert_key_path: # example: '/path/to/wildcard.<clusterid>.example.com.key'
+```
+
+If customization is required for the instances, scale groups, or any other configurable option please see the ['openshift_aws/defaults/main.yml'](../../roles/openshift_aws/defaults/main.yml) for variables and overrides. These overrides can be placed in the `provisioning_vars.yml`, `inventory`, or `group_vars`.
+
+In order to create the bootstrap-able AMI we need to create a basic openshift-ansible inventory. This enables us to create the AMI using the openshift-ansible node roles. This inventory should not include any hosts, but certain variables should be defined in the appropriate groups, just as when deploying a cluster
+using the normal openshift-ansible method. See provisioning-inventory.example.ini for an example.
+
+There are more examples of cluster inventory settings [`here`](../../inventory/byo/).
+
+#### Step 0 (optional)
+
+You may provision a VPC, Security Group, and SSH keypair to build the AMI.
+
+```
+$ ansible-playbook -i inventory.yml prerequisites.yml -e @provisioning_vars.yml
+```
+
+See PREREQUISITES.md for more information.
+
+#### Step 1
+
+Once the `inventory` and the `provisioning_vars.yml` file have been updated with the correct settings for the desired AWS account, we are ready to build an AMI.
+
+```
+$ ansible-playbook -i inventory.yml build_ami.yml -e @provisioning_vars.yml
+```
+
+#### Step 2
+
+Now that we have created an AMI for our OpenShift installation, there are two ways to use the AMI.
+
+1. By default, the id of the most recently created AMI will be found and used.
+2. The `openshift_aws_ami` option can be specified. This allows the user to override the default behavior of the role and use a custom AMI specified in the `openshift_aws_ami` variable (see the example below).
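+
+For example, a custom AMI can be supplied on the command line (the AMI id here is a placeholder; substitute your own):
+```
+$ ansible-playbook -i inventory.yml provision_install.yml -e @provisioning_vars.yml -e openshift_aws_ami=ami-12345678
+```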
+
+We are now ready to provision and install the cluster. This can be accomplished by calling all of the following steps at once or one-by-one. The all-in-one approach can be called like this:
+```
+$ ansible-playbook -i inventory.yml provision_install.yml -e @provisioning_vars.yml
+```
+
+If this is the first time running through this process, please attempt the following steps one-by-one and ensure the setup works correctly.
+
+#### Step 3
+
+We are ready to create the master instances.
+
+```
+$ ansible-playbook provision.yml -e @provisioning_vars.yml
+```
+
+This playbook runs through the following steps:
+1. Creates an s3 bucket for the registry named $clusterid-docker-registry.
+2. Creates the master security groups.
+3. Creates a master launch config.
+4. Creates the master auto scaling groups.
+5. If certificates are desired for the ELB, they will be uploaded.
+6. Creates internal and external master ELBs.
+7. Adds newly created masters to the correct groups.
+8. Sets a couple of important facts for the masters.
+
+At this point we have successfully created the infrastructure including the master nodes.
+
+#### Step 4
+
+Now it is time to install OpenShift using the openshift-ansible installer. This can be achieved by running the following playbook:
+
+```
+$ ansible-playbook -i inventory.yml install.yml -e @provisioning_vars.yml
+```
+This playbook accomplishes the following:
+1. Builds a dynamic inventory file by querying AWS.
+2. Runs the [`byo`](../../common/openshift-cluster/config.yml) cluster configuration playbook.
+
+Once this playbook completes, the cluster masters should be installed and configured.
+
+#### Step 5
+
+Now that we have the cluster masters deployed, we need to deploy our infrastructure and compute nodes:
+
+```
+$ ansible-playbook provision_nodes.yml -e @provisioning_vars.yml
+```
+
+Once this playbook completes, it should create the compute and infra node scale groups. These nodes will attempt to register themselves to the cluster. These requests must be approved by an administrator in Step 6.
+
+#### Step 6
+
+To facilitate the node registration process, nodes may be accepted by running the `accept.yml` playbook. This playbook can accept nodes in a few different ways.
+- approve_all - **Note**: this option is for development and test environments. Security is bypassed.
+- nodes - A list of node names that will be accepted into the cluster.
+
+```yaml
+ oc_adm_csr:
+ #approve_all: True
+ nodes: < list of nodes here >
+ timeout: 0
+```
+
+Once the desired accept method is chosen, run the `accept.yml` playbook:
+```
+$ ansible-playbook accept.yml -e @provisioning_vars.yml
+```
+
+Log in to a master and run the following command:
+```
+ssh root@<master ip address>
+$ oc --config=/etc/origin/master/admin.kubeconfig get csr
+node-bootstrapper-client-ip-172-31-49-148-ec2-internal 1h system:serviceaccount:openshift-infra:node-bootstrapper Approved,Issued
+node-bootstrapper-server-ip-172-31-49-148-ec2-internal 1h system:node:ip-172-31-49-148.ec2.internal Approved,Issued
+```
+
+Verify the `CONDITION` is `Approved,Issued` on the `csr` objects. Two are required for each node:
+1. `node-bootstrapper-client` is a request to access the api/controllers.
+2. `node-bootstrapper-server` is a request to join the cluster.
+
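+If a request remains `Pending`, it can be approved manually from the master
+(assuming the standard `oc` CLI; substitute a csr name from the output above):
+```
+$ oc --config=/etc/origin/master/admin.kubeconfig adm certificate approve <csr name>
+```
+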
+Once this is complete, verify that the nodes have joined the cluster and are `Ready`.
+
+```
+$ oc --config=/etc/origin/master/admin.kubeconfig get nodes
+NAME STATUS AGE VERSION
+ip-172-31-49-148.ec2.internal Ready 1h v1.6.1+5115d708d7
+```
+
+### Ready To Work!
+
+At this point your cluster should be ready for workloads. Proceed to deploy applications on your cluster.
+
+### Still to come
+
+More provisioning enhancements are on the way, including additional playbooks that extend these capabilities.
diff --git a/playbooks/aws/openshift-cluster/accept.yml b/playbooks/aws/openshift-cluster/accept.yml
new file mode 100755
index 000000000..c2c8bea50
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/accept.yml
@@ -0,0 +1,53 @@
+#!/usr/bin/ansible-playbook
+---
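+# Query AWS for the cluster's running master and node instances, then approve
+# the new nodes' CSRs on the first master so they can join the cluster.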
+- name: Accept nodes into the cluster
+ hosts: localhost
+ remote_user: root
+ gather_facts: no
+ tasks:
+ - name: Alert user to variables needed - clusterid
+ debug:
+ msg: "openshift_aws_clusterid={{ openshift_aws_clusterid | default('default') }}"
+
+ - name: Alert user to variables needed - region
+ debug:
+ msg: "openshift_aws_region={{ openshift_aws_region | default('us-east-1') }}"
+
+ - name: bring lib_openshift into scope
+ include_role:
+ name: lib_openshift
+
+ - name: fetch masters
+ ec2_remote_facts:
+ region: "{{ openshift_aws_region | default('us-east-1') }}"
+ filters:
+ "tag:clusterid": "{{ openshift_aws_clusterid | default('default') }}"
+ "tag:host-type": master
+ instance-state-name: running
+ register: mastersout
+ retries: 20
+ delay: 3
+ until: "'instances' in mastersout and mastersout.instances|length > 0"
+
+ - name: fetch new node instances
+ ec2_remote_facts:
+ region: "{{ openshift_aws_region | default('us-east-1') }}"
+ filters:
+ "tag:clusterid": "{{ openshift_aws_clusterid | default('default') }}"
+ "tag:host-type": node
+ instance-state-name: running
+ register: instancesout
+ retries: 20
+ delay: 3
+ until: "'instances' in instancesout and instancesout.instances|length > 0"
+
+ - debug:
+ msg: "{{ instancesout.instances|map(attribute='private_dns_name') | list }}"
+
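+ # Approve the node certificate requests from the first master,
+ # which holds the cluster admin credentials.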
+ - name: approve nodes
+ oc_adm_csr:
+ #approve_all: True
+ nodes: "{{ instancesout.instances|map(attribute='private_dns_name') | list }}"
+ timeout: 60
+ register: nodeout
+ delegate_to: "{{ mastersout.instances[0].public_ip_address }}"
diff --git a/playbooks/aws/openshift-cluster/add_nodes.yml b/playbooks/aws/openshift-cluster/add_nodes.yml
deleted file mode 100644
index 0e8eb90c1..000000000
--- a/playbooks/aws/openshift-cluster/add_nodes.yml
+++ /dev/null
@@ -1,35 +0,0 @@
----
-- name: Launch instance(s)
- hosts: localhost
- connection: local
- become: no
- gather_facts: no
- vars_files:
- - vars.yml
- vars:
- oo_extend_env: True
- tasks:
- - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
- vars:
- type: "compute"
- count: "{{ num_nodes }}"
- - include: tasks/launch_instances.yml
- vars:
- instances: "{{ node_names }}"
- cluster: "{{ cluster_id }}"
- type: "{{ k8s_type }}"
- g_sub_host_type: "{{ sub_host_type }}"
-
- - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
- vars:
- type: "infra"
- count: "{{ num_infra }}"
- - include: tasks/launch_instances.yml
- vars:
- instances: "{{ node_names }}"
- cluster: "{{ cluster_id }}"
- type: "{{ k8s_type }}"
- g_sub_host_type: "{{ sub_host_type }}"
-
-- include: scaleup.yml
-- include: list.yml
diff --git a/playbooks/aws/openshift-cluster/build_ami.yml b/playbooks/aws/openshift-cluster/build_ami.yml
new file mode 100644
index 000000000..5b4a6a1e8
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/build_ami.yml
@@ -0,0 +1,38 @@
+---
+- hosts: localhost
+ connection: local
+ gather_facts: no
+ tasks:
+ - name: Require openshift_aws_base_ami
+ fail:
+ msg: "A base AMI is required for AMI building. Please ensure `openshift_aws_base_ami` is defined."
+ when: openshift_aws_base_ami is undefined
+
+ - name: "Alert user to variables needed and their values - {{ item.name }}"
+ debug:
+ msg: "{{ item.msg }}"
+ with_items:
+ - name: openshift_aws_clusterid
+ msg: "openshift_aws_clusterid={{ openshift_aws_clusterid | default('default') }}"
+ - name: openshift_aws_region
+ msg: "openshift_aws_region={{ openshift_aws_region | default('us-east-1') }}"
+
+- include: provision_instance.yml
+ vars:
+ openshift_aws_node_group_type: compute
+
+- hosts: nodes
+ gather_facts: False
+ tasks:
+ - name: set the user to perform installation
+ set_fact:
+ ansible_ssh_user: "{{ openshift_aws_build_ami_ssh_user | default(ansible_ssh_user) }}"
+ openshift_node_bootstrap: True
+
+# This is the part that installs all of the software and configs for the instance
+# to become a node.
+- include: ../../common/openshift-node/image_prep.yml
+
+- include: seal_ami.yml
+ vars:
+ openshift_aws_ami_name: "openshift-gi-{{ lookup('pipe', 'date +%Y%m%d%H%M')}}"
diff --git a/playbooks/aws/openshift-cluster/cluster_hosts.yml b/playbooks/aws/openshift-cluster/cluster_hosts.yml
deleted file mode 100644
index 119df9c7d..000000000
--- a/playbooks/aws/openshift-cluster/cluster_hosts.yml
+++ /dev/null
@@ -1,23 +0,0 @@
----
-g_all_hosts: "{{ groups['tag_clusterid_' ~ cluster_id] | default([])
- | intersect(groups['tag_environment_' ~ cluster_env] | default([])) }}"
-
-g_etcd_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type_etcd'] | default([])) }}"
-
-g_lb_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type_lb'] | default([])) }}"
-
-g_nfs_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type_nfs'] | default([])) }}"
-
-g_glusterfs_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type-glusterfs'] | default([])) }}"
-
-g_master_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type_master'] | default([])) }}"
-
-g_new_master_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type_new_master'] | default([])) }}"
-
-g_node_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type_node'] | default([])) }}"
-
-g_new_node_hosts: "{{ g_all_hosts | intersect(groups['tag_host-type_new_node'] | default([])) }}"
-
-g_infra_hosts: "{{ g_node_hosts | intersect(groups['tag_sub-host-type_infra'] | default([])) }}"
-
-g_compute_hosts: "{{ g_node_hosts | intersect(groups['tag_sub-host-type_compute'] | default([])) }}"
diff --git a/playbooks/aws/openshift-cluster/config.yml b/playbooks/aws/openshift-cluster/config.yml
deleted file mode 100644
index 8d64b0521..000000000
--- a/playbooks/aws/openshift-cluster/config.yml
+++ /dev/null
@@ -1,38 +0,0 @@
----
-- hosts: localhost
- gather_facts: no
- tasks:
- - include_vars: vars.yml
- - include_vars: cluster_hosts.yml
- - add_host:
- name: "{{ item }}"
- groups: l_oo_all_hosts
- with_items: "{{ g_all_hosts | default([]) }}"
-
-- hosts: l_oo_all_hosts
- gather_facts: no
- tasks:
- - include_vars: vars.yml
- - include_vars: cluster_hosts.yml
-
-- include: ../../common/openshift-cluster/config.yml
- vars:
- g_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- g_sudo: "{{ deployment_vars[deployment_type].become }}"
- g_nodeonmaster: true
- openshift_cluster_id: "{{ cluster_id }}"
- openshift_debug_level: "{{ debug_level }}"
- openshift_deployment_type: "{{ deployment_type }}"
- openshift_public_hostname: "{{ ec2_ip_address }}"
- openshift_hosted_registry_selector: 'type=infra'
- openshift_hosted_router_selector: 'type=infra'
- openshift_node_labels:
- region: "{{ deployment_vars[deployment_type].region }}"
- type: "{{ hostvars[inventory_hostname]['ec2_tag_sub-host-type'] }}"
- openshift_master_cluster_method: 'native'
- openshift_use_openshift_sdn: "{{ lookup('oo_option', 'use_openshift_sdn') }}"
- os_sdn_network_plugin_name: "{{ lookup('oo_option', 'sdn_network_plugin_name') }}"
- openshift_use_flannel: "{{ lookup('oo_option', 'use_flannel') }}"
- openshift_use_calico: "{{ lookup('oo_option', 'use_calico') }}"
- openshift_use_fluentd: "{{ lookup('oo_option', 'use_fluentd') }}"
- openshift_use_dnsmasq: false
diff --git a/playbooks/aws/openshift-cluster/install.yml b/playbooks/aws/openshift-cluster/install.yml
new file mode 100644
index 000000000..4d0bf9531
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/install.yml
@@ -0,0 +1,25 @@
+---
+- name: Setup the master node group
+ hosts: localhost
+ tasks:
+ - include_role:
+ name: openshift_aws
+ tasks_from: setup_master_group.yml
+
+- name: set the master facts for hostname to elb
+ hosts: masters
+ gather_facts: no
+ remote_user: root
+ tasks:
+ - include_role:
+ name: openshift_aws
+ tasks_from: master_facts.yml
+
+- name: normalize groups
+ include: ../../byo/openshift-cluster/initialize_groups.yml
+
+- name: run the std_include
+ include: ../../common/openshift-cluster/std_include.yml
+
+- name: run the config
+ include: ../../common/openshift-cluster/config.yml
diff --git a/playbooks/aws/openshift-cluster/launch.yml b/playbooks/aws/openshift-cluster/launch.yml
deleted file mode 100644
index 3edace493..000000000
--- a/playbooks/aws/openshift-cluster/launch.yml
+++ /dev/null
@@ -1,54 +0,0 @@
----
-- name: Launch instance(s)
- hosts: localhost
- connection: local
- become: no
- gather_facts: no
- vars_files:
- - vars.yml
- tasks:
- - include: ../../common/openshift-cluster/tasks/set_etcd_launch_facts.yml
- - include: tasks/launch_instances.yml
- vars:
- instances: "{{ etcd_names }}"
- cluster: "{{ cluster_id }}"
- type: "{{ k8s_type }}"
- g_sub_host_type: "default"
-
- - include: ../../common/openshift-cluster/tasks/set_master_launch_facts.yml
- - include: tasks/launch_instances.yml
- vars:
- instances: "{{ master_names }}"
- cluster: "{{ cluster_id }}"
- type: "{{ k8s_type }}"
- g_sub_host_type: "default"
-
- - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
- vars:
- type: "compute"
- count: "{{ num_nodes }}"
- - include: tasks/launch_instances.yml
- vars:
- instances: "{{ node_names }}"
- cluster: "{{ cluster_id }}"
- type: "{{ k8s_type }}"
- g_sub_host_type: "{{ sub_host_type }}"
-
- - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
- vars:
- type: "infra"
- count: "{{ num_infra }}"
- - include: tasks/launch_instances.yml
- vars:
- instances: "{{ node_names }}"
- cluster: "{{ cluster_id }}"
- type: "{{ k8s_type }}"
- g_sub_host_type: "{{ sub_host_type }}"
-
- - add_host:
- name: "{{ master_names.0 }}"
- groups: service_master
- when: master_names is defined and master_names.0 is defined
-
-- include: update.yml
-- include: list.yml
diff --git a/playbooks/aws/openshift-cluster/library/ec2_ami_find.py b/playbooks/aws/openshift-cluster/library/ec2_ami_find.py
deleted file mode 100644
index 99d0f44f0..000000000
--- a/playbooks/aws/openshift-cluster/library/ec2_ami_find.py
+++ /dev/null
@@ -1,303 +0,0 @@
-#!/usr/bin/python
-#pylint: skip-file
-# flake8: noqa
-#
-# This file is part of Ansible
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
-
-DOCUMENTATION = '''
----
-module: ec2_ami_find
-version_added: 2.0
-short_description: Searches for AMIs to obtain the AMI ID and other information
-description:
- - Returns list of matching AMIs with AMI ID, along with other useful information
- - Can search AMIs with different owners
- - Can search by matching tag(s), by AMI name and/or other criteria
- - Results can be sorted and sliced
-author: Tom Bamford
-notes:
- - This module is not backwards compatible with the previous version of the ec2_search_ami module which worked only for Ubuntu AMIs listed on cloud-images.ubuntu.com.
- - See the example below for a suggestion of how to search by distro/release.
-options:
- region:
- description:
- - The AWS region to use.
- required: true
- aliases: [ 'aws_region', 'ec2_region' ]
- owner:
- description:
- - Search AMIs owned by the specified owner
- - Can specify an AWS account ID, or one of the special IDs 'self', 'amazon' or 'aws-marketplace'
- - If not specified, all EC2 AMIs in the specified region will be searched.
- - You can include wildcards in many of the search options. An asterisk (*) matches zero or more characters, and a question mark (?) matches exactly one character. You can escape special characters using a backslash (\) before the character. For example, a value of \*amazon\?\\ searches for the literal string *amazon?\.
- required: false
- default: null
- ami_id:
- description:
- - An AMI ID to match.
- default: null
- required: false
- ami_tags:
- description:
- - A hash/dictionary of tags to match for the AMI.
- default: null
- required: false
- architecture:
- description:
- - An architecture type to match (e.g. x86_64).
- default: null
- required: false
- hypervisor:
- description:
- - A hypervisor type type to match (e.g. xen).
- default: null
- required: false
- is_public:
- description:
- - Whether or not the image(s) are public.
- choices: ['yes', 'no']
- default: null
- required: false
- name:
- description:
- - An AMI name to match.
- default: null
- required: false
- platform:
- description:
- - Platform type to match.
- default: null
- required: false
- sort:
- description:
- - Optional attribute which with to sort the results.
- - If specifying 'tag', the 'tag_name' parameter is required.
- choices: ['name', 'description', 'tag']
- default: null
- required: false
- sort_tag:
- description:
- - Tag name with which to sort results.
- - Required when specifying 'sort=tag'.
- default: null
- required: false
- sort_order:
- description:
- - Order in which to sort results.
- - Only used when the 'sort' parameter is specified.
- choices: ['ascending', 'descending']
- default: 'ascending'
- required: false
- sort_start:
- description:
- - Which result to start with (when sorting).
- - Corresponds to Python slice notation.
- default: null
- required: false
- sort_end:
- description:
- - Which result to end with (when sorting).
- - Corresponds to Python slice notation.
- default: null
- required: false
- state:
- description:
- - AMI state to match.
- default: 'available'
- required: false
- virtualization_type:
- description:
- - Virtualization type to match (e.g. hvm).
- default: null
- required: false
- no_result_action:
- description:
- - What to do when no results are found.
- - "'success' reports success and returns an empty array"
- - "'fail' causes the module to report failure"
- choices: ['success', 'fail']
- default: 'success'
- required: false
-requirements:
- - boto
-
-'''
-
-EXAMPLES = '''
-# Note: These examples do not set authentication details, see the AWS Guide for details.
-
-# Search for the AMI tagged "project:website"
-- ec2_ami_find:
- owner: self
- tags:
- project: website
- no_result_action: fail
- register: ami_find
-
-# Search for the latest Ubuntu 14.04 AMI
-- ec2_ami_find:
- name: "ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"
- owner: 099720109477
- sort: name
- sort_order: descending
- sort_end: 1
- register: ami_find
-
-# Launch an EC2 instance
-- ec2:
- image: "{{ ami_search.results[0].ami_id }}"
- instance_type: m4.medium
- key_name: mykey
- wait: yes
-'''
-
-try:
- import boto.ec2
- HAS_BOTO=True
-except ImportError:
- HAS_BOTO=False
-
-import json
-
-def main():
- argument_spec = ec2_argument_spec()
- argument_spec.update(dict(
- region = dict(required=True,
- aliases = ['aws_region', 'ec2_region']),
- owner = dict(required=False, default=None),
- ami_id = dict(required=False),
- ami_tags = dict(required=False, type='dict',
- aliases = ['search_tags', 'image_tags']),
- architecture = dict(required=False),
- hypervisor = dict(required=False),
- is_public = dict(required=False),
- name = dict(required=False),
- platform = dict(required=False),
- sort = dict(required=False, default=None,
- choices=['name', 'description', 'tag']),
- sort_tag = dict(required=False),
- sort_order = dict(required=False, default='ascending',
- choices=['ascending', 'descending']),
- sort_start = dict(required=False),
- sort_end = dict(required=False),
- state = dict(required=False, default='available'),
- virtualization_type = dict(required=False),
- no_result_action = dict(required=False, default='success',
- choices = ['success', 'fail']),
- )
- )
-
- module = AnsibleModule(
- argument_spec=argument_spec,
- )
-
- if not HAS_BOTO:
- module.fail_json(msg='boto required for this module, install via pip or your package manager')
-
- ami_id = module.params.get('ami_id')
- ami_tags = module.params.get('ami_tags')
- architecture = module.params.get('architecture')
- hypervisor = module.params.get('hypervisor')
- is_public = module.params.get('is_public')
- name = module.params.get('name')
- owner = module.params.get('owner')
- platform = module.params.get('platform')
- sort = module.params.get('sort')
- sort_tag = module.params.get('sort_tag')
- sort_order = module.params.get('sort_order')
- sort_start = module.params.get('sort_start')
- sort_end = module.params.get('sort_end')
- state = module.params.get('state')
- virtualization_type = module.params.get('virtualization_type')
- no_result_action = module.params.get('no_result_action')
-
- filter = {'state': state}
-
- if ami_id:
- filter['image_id'] = ami_id
- if ami_tags:
- for tag in ami_tags:
- filter['tag:'+tag] = ami_tags[tag]
- if architecture:
- filter['architecture'] = architecture
- if hypervisor:
- filter['hypervisor'] = hypervisor
- if is_public:
- filter['is_public'] = is_public
- if name:
- filter['name'] = name
- if platform:
- filter['platform'] = platform
- if virtualization_type:
- filter['virtualization_type'] = virtualization_type
-
- ec2 = ec2_connect(module)
-
- images_result = ec2.get_all_images(owners=owner, filters=filter)
-
- if no_result_action == 'fail' and len(images_result) == 0:
- module.fail_json(msg="No AMIs matched the attributes: %s" % json.dumps(filter))
-
- results = []
- for image in images_result:
- data = {
- 'ami_id': image.id,
- 'architecture': image.architecture,
- 'description': image.description,
- 'is_public': image.is_public,
- 'name': image.name,
- 'owner_id': image.owner_id,
- 'platform': image.platform,
- 'root_device_name': image.root_device_name,
- 'root_device_type': image.root_device_type,
- 'state': image.state,
- 'tags': image.tags,
- 'virtualization_type': image.virtualization_type,
- }
-
- if image.kernel_id:
- data['kernel_id'] = image.kernel_id
- if image.ramdisk_id:
- data['ramdisk_id'] = image.ramdisk_id
-
- results.append(data)
-
- if sort == 'tag':
- if not sort_tag:
- module.fail_json(msg="'sort_tag' option must be given with 'sort=tag'")
- results.sort(key=lambda e: e['tags'][sort_tag], reverse=(sort_order=='descending'))
- elif sort:
- results.sort(key=lambda e: e[sort], reverse=(sort_order=='descending'))
-
- try:
- if sort and sort_start and sort_end:
- results = results[int(sort_start):int(sort_end)]
- elif sort and sort_start:
- results = results[int(sort_start):]
- elif sort and sort_end:
- results = results[:int(sort_end)]
- except TypeError:
- module.fail_json(msg="Please supply numeric values for sort_start and/or sort_end")
-
- module.exit_json(results=results)
-
-# import module snippets
-from ansible.module_utils.basic import *
-from ansible.module_utils.ec2 import *
-
-if __name__ == '__main__':
- main()
-
diff --git a/playbooks/aws/openshift-cluster/list.yml b/playbooks/aws/openshift-cluster/list.yml
deleted file mode 100644
index ed8aac398..000000000
--- a/playbooks/aws/openshift-cluster/list.yml
+++ /dev/null
@@ -1,23 +0,0 @@
----
-- name: Generate oo_list_hosts group
- hosts: localhost
- gather_facts: no
- connection: local
- become: no
- vars_files:
- - vars.yml
- tasks:
- - set_fact: scratch_group=tag_clusterid_{{ cluster_id }}
- when: cluster_id != ''
- - set_fact: scratch_group=all
- when: cluster_id == ''
- - add_host:
- name: "{{ item }}"
- groups: oo_list_hosts
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- oo_public_ipv4: "{{ hostvars[item].ec2_ip_address }}"
- oo_private_ipv4: "{{ hostvars[item].ec2_private_ip_address }}"
- with_items: "{{ groups[scratch_group] | default([]) | difference(['localhost']) }}"
- - debug:
- msg: "{{ hostvars | oo_select_keys(groups[scratch_group] | default([])) | oo_pretty_print_cluster }}"
diff --git a/playbooks/aws/openshift-cluster/prerequisites.yml b/playbooks/aws/openshift-cluster/prerequisites.yml
new file mode 100644
index 000000000..df77fe3bc
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/prerequisites.yml
@@ -0,0 +1,8 @@
+---
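+# Create the VPC, ssh key pair, and security group needed before
+# provisioning a cluster or building an AMI. See ../PREREQUISITES.md.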
+- include: provision_vpc.yml
+
+- include: provision_ssh_keypair.yml
+
+- include: provision_sec_group.yml
+ vars:
+ openshift_aws_node_group_type: compute
diff --git a/playbooks/aws/openshift-cluster/provision.yml b/playbooks/aws/openshift-cluster/provision.yml
new file mode 100644
index 000000000..4b5bd22ea
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/provision.yml
@@ -0,0 +1,17 @@
+---
+- name: Setup the elb and the master node group
+ hosts: localhost
+ tasks:
+
+ - name: Alert user to variables needed - clusterid
+ debug:
+ msg: "openshift_aws_clusterid={{ openshift_aws_clusterid | default('default') }}"
+
+ - name: Alert user to variables needed - region
+ debug:
+ msg: "openshift_aws_region={{ openshift_aws_region | default('us-east-1') }}"
+
+ - name: provision cluster
+ include_role:
+ name: openshift_aws
+ tasks_from: provision.yml
diff --git a/playbooks/aws/openshift-cluster/provision_install.yml b/playbooks/aws/openshift-cluster/provision_install.yml
new file mode 100644
index 000000000..e787deced
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/provision_install.yml
@@ -0,0 +1,16 @@
+---
+# Once an AMI is built, this playbook is used as a one-stop shop
+# to provision and install a cluster.
+# This playbook is run with the following parameters:
+# ansible-playbook -i openshift-ansible-inventory provision_install.yml
+- name: Include the provision.yml playbook to create cluster
+ include: provision.yml
+
+- name: Include the install.yml playbook to install cluster
+ include: install.yml
+
+- name: Include the provision_nodes.yml playbook to provision the cluster nodes
+ include: provision_nodes.yml
+
+- name: Include the accept.yml playbook to accept nodes into the cluster
+ include: accept.yml
diff --git a/playbooks/aws/openshift-cluster/provision_instance.yml b/playbooks/aws/openshift-cluster/provision_instance.yml
new file mode 100644
index 000000000..6e843453c
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/provision_instance.yml
@@ -0,0 +1,12 @@
+---
+# If running this play directly, be sure the variable
+# 'openshift_aws_node_group_type' is set correctly for your usage.
+# See build_ami.yml for an example.
+- hosts: localhost
+ connection: local
+ gather_facts: no
+ tasks:
+ - name: create an instance and prepare for ami
+ include_role:
+ name: openshift_aws
+ tasks_from: provision_instance.yml
diff --git a/playbooks/aws/openshift-cluster/provision_nodes.yml b/playbooks/aws/openshift-cluster/provision_nodes.yml
new file mode 100644
index 000000000..44c686e08
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/provision_nodes.yml
@@ -0,0 +1,18 @@
+---
+- name: create the node scale groups
+ hosts: localhost
+ connection: local
+ gather_facts: yes
+ tasks:
+ - name: Alert user to variables needed - clusterid
+ debug:
+ msg: "openshift_aws_clusterid={{ openshift_aws_clusterid | default('default') }}"
+
+ - name: Alert user to variables needed - region
+ debug:
+ msg: "openshift_aws_region={{ openshift_aws_region | default('us-east-1') }}"
+
+ - name: create the node groups
+ include_role:
+ name: openshift_aws
+ tasks_from: provision_nodes.yml
diff --git a/playbooks/aws/openshift-cluster/provision_sec_group.yml b/playbooks/aws/openshift-cluster/provision_sec_group.yml
new file mode 100644
index 000000000..039357adb
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/provision_sec_group.yml
@@ -0,0 +1,13 @@
+---
+# If running this play directly, be sure the variable
+# 'openshift_aws_node_group_type' is set correctly for your usage.
+# See build_ami.yml for an example.
+- hosts: localhost
+ connection: local
+ gather_facts: no
+ tasks:
+ - name: create the security group
+ include_role:
+ name: openshift_aws
+ tasks_from: security_group.yml
+ when: openshift_aws_create_security_groups | default(True) | bool
diff --git a/playbooks/aws/openshift-cluster/provision_ssh_keypair.yml b/playbooks/aws/openshift-cluster/provision_ssh_keypair.yml
new file mode 100644
index 000000000..3ec683958
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/provision_ssh_keypair.yml
@@ -0,0 +1,12 @@
+---
+- hosts: localhost
+ connection: local
+ gather_facts: no
+ tasks:
+ - name: create the ssh key pairs
+ include_role:
+ name: openshift_aws
+ tasks_from: ssh_keys.yml
+ vars:
+ openshift_aws_node_group_type: compute
+ when: openshift_aws_users | default([]) | length > 0
diff --git a/playbooks/aws/openshift-cluster/provision_vpc.yml b/playbooks/aws/openshift-cluster/provision_vpc.yml
new file mode 100644
index 000000000..0a23a6d32
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/provision_vpc.yml
@@ -0,0 +1,10 @@
+---
+- hosts: localhost
+ connection: local
+ gather_facts: no
+ tasks:
+ - name: create a vpc
+ include_role:
+ name: openshift_aws
+ tasks_from: vpc.yml
+ when: openshift_aws_create_vpc | default(True) | bool
diff --git a/playbooks/aws/openshift-cluster/scaleup.yml b/playbooks/aws/openshift-cluster/scaleup.yml
deleted file mode 100644
index 6fa9142a0..000000000
--- a/playbooks/aws/openshift-cluster/scaleup.yml
+++ /dev/null
@@ -1,32 +0,0 @@
----
-
-- hosts: localhost
- gather_facts: no
- connection: local
- become: no
- vars_files:
- - vars.yml
- tasks:
- - name: Evaluate oo_hosts_to_update
- add_host:
- name: "{{ item }}"
- groups: oo_hosts_to_update
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- with_items: "{{ groups.nodes_to_add }}"
-
-- include: ../../common/openshift-cluster/update_repos_and_packages.yml
-
-- include: ../../common/openshift-cluster/scaleup.yml
- vars_files:
- - ../../aws/openshift-cluster/vars.yml
- - ../../aws/openshift-cluster/cluster_hosts.yml
- vars:
- g_new_node_hosts: "{{ groups.nodes_to_add }}"
- g_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- g_sudo: "{{ deployment_vars[deployment_type].become }}"
- g_nodeonmaster: true
- openshift_cluster_id: "{{ cluster_id }}"
- openshift_debug_level: "{{ debug_level }}"
- openshift_deployment_type: "{{ deployment_type }}"
- openshift_public_hostname: "{{ ec2_ip_address }}"
diff --git a/playbooks/aws/openshift-cluster/seal_ami.yml b/playbooks/aws/openshift-cluster/seal_ami.yml
new file mode 100644
index 000000000..8239a64fb
--- /dev/null
+++ b/playbooks/aws/openshift-cluster/seal_ami.yml
@@ -0,0 +1,12 @@
+---
+# If running this play directly, be sure the variable
+# 'openshift_aws_ami_name' is set correctly for your usage.
+# See build_ami.yml for an example.
+- hosts: localhost
+ connection: local
+ become: no
+ tasks:
+ - name: seal the ami
+ include_role:
+ name: openshift_aws
+ tasks_from: seal_ami.yml
diff --git a/playbooks/aws/openshift-cluster/service.yml b/playbooks/aws/openshift-cluster/service.yml
deleted file mode 100644
index f7f4812bb..000000000
--- a/playbooks/aws/openshift-cluster/service.yml
+++ /dev/null
@@ -1,31 +0,0 @@
----
-- name: Call same systemctl command for openshift on all instance(s)
- hosts: localhost
- connection: local
- become: no
- gather_facts: no
- vars_files:
- - vars.yml
- - cluster_hosts.yml
- tasks:
- - fail: msg="cluster_id is required to be injected in this playbook"
- when: cluster_id is not defined
-
- - name: Evaluate g_service_masters
- add_host:
- name: "{{ item }}"
- groups: g_service_masters
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- with_items: "{{ master_hosts | default([]) }}"
-
- - name: Evaluate g_service_nodes
- add_host:
- name: "{{ item }}"
- groups: g_service_nodes
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- with_items: "{{ node_hosts | default([]) }}"
-
-- include: ../../common/openshift-node/service.yml
-- include: ../../common/openshift-master/service.yml
diff --git a/playbooks/aws/openshift-cluster/tasks/launch_instances.yml b/playbooks/aws/openshift-cluster/tasks/launch_instances.yml
deleted file mode 100644
index 608512b79..000000000
--- a/playbooks/aws/openshift-cluster/tasks/launch_instances.yml
+++ /dev/null
@@ -1,188 +0,0 @@
----
-- set_fact:
- created_by: "{{ lookup('env', 'LOGNAME')|default(cluster, true) }}"
- docker_vol_ephemeral: "{{ lookup('env', 'os_docker_vol_ephemeral') | default(false, true) }}"
- cluster: "{{ cluster_id }}"
- env: "{{ cluster_env }}"
- host_type: "{{ type }}"
- sub_host_type: "{{ g_sub_host_type }}"
-
-- set_fact:
- ec2_instance_type: "{{ lookup('env', 'ec2_master_instance_type') | default(deployment_vars[deployment_type].type, true) }}"
- ec2_security_groups: "{{ lookup('env', 'ec2_master_security_groups') | default(deployment_vars[deployment_type].security_groups, true) }}"
- when: host_type == "master" and sub_host_type == "default"
-
-- set_fact:
- ec2_instance_type: "{{ lookup('env', 'ec2_etcd_instance_type') | default(deployment_vars[deployment_type].type, true) }}"
- ec2_security_groups: "{{ lookup('env', 'ec2_etcd_security_groups') | default(deployment_vars[deployment_type].security_groups, true) }}"
- when: host_type == "etcd" and sub_host_type == "default"
-
-- set_fact:
- ec2_instance_type: "{{ lookup('env', 'ec2_infra_instance_type') | default(deployment_vars[deployment_type].type, true) }}"
- ec2_security_groups: "{{ lookup('env', 'ec2_infra_security_groups') | default(deployment_vars[deployment_type].security_groups, true) }}"
- when: host_type == "node" and sub_host_type == "infra"
-
-- set_fact:
- ec2_instance_type: "{{ lookup('env', 'ec2_node_instance_type') | default(deployment_vars[deployment_type].type, true) }}"
- ec2_security_groups: "{{ lookup('env', 'ec2_node_security_groups') | default(deployment_vars[deployment_type].security_groups, true) }}"
- when: host_type == "node" and sub_host_type == "compute"
-
-- set_fact:
- ec2_instance_type: "{{ deployment_vars[deployment_type].type }}"
- when: ec2_instance_type is not defined
-- set_fact:
- ec2_security_groups: "{{ deployment_vars[deployment_type].security_groups }}"
- when: ec2_security_groups is not defined
-
-- name: Find amis for deployment_type
- ec2_ami_find:
- region: "{{ deployment_vars[deployment_type].region }}"
- ami_id: "{{ deployment_vars[deployment_type].image }}"
- name: "{{ deployment_vars[deployment_type].image_name }}"
- register: ami_result
-
-- fail: msg="Could not find requested ami"
- when: not ami_result.results
-
-- set_fact:
- latest_ami: "{{ ami_result.results | oo_ami_selector(deployment_vars[deployment_type].image_name) }}"
- volume_defs:
- etcd:
- root:
- volume_size: "{{ lookup('env', 'os_etcd_root_vol_size') | default(25, true) }}"
- device_type: "{{ lookup('env', 'os_etcd_root_vol_type') | default('gp2', true) }}"
- iops: "{{ lookup('env', 'os_etcd_root_vol_iops') | default(500, true) }}"
- master:
- root:
- volume_size: "{{ lookup('env', 'os_master_root_vol_size') | default(25, true) }}"
- device_type: "{{ lookup('env', 'os_master_root_vol_type') | default('gp2', true) }}"
- iops: "{{ lookup('env', 'os_master_root_vol_iops') | default(500, true) }}"
- docker:
- volume_size: "{{ lookup('env', 'os_docker_vol_size') | default(10, true) }}"
- device_type: "{{ lookup('env', 'os_docker_vol_type') | default('gp2', true) }}"
- iops: "{{ lookup('env', 'os_docker_vol_iops') | default(500, true) }}"
- node:
- root:
- volume_size: "{{ lookup('env', 'os_node_root_vol_size') | default(85, true) }}"
- device_type: "{{ lookup('env', 'os_node_root_vol_type') | default('gp2', true) }}"
- iops: "{{ lookup('env', 'os_node_root_vol_iops') | default(500, true) }}"
- docker:
- volume_size: "{{ lookup('env', 'os_docker_vol_size') | default(32, true) }}"
- device_type: "{{ lookup('env', 'os_docker_vol_type') | default('gp2', true) }}"
- iops: "{{ lookup('env', 'os_docker_vol_iops') | default(500, true) }}"
-
-- set_fact:
- volumes: "{{ volume_defs | oo_ec2_volume_definition(host_type, docker_vol_ephemeral | bool) }}"
-
-- name: Launch instance(s)
- ec2:
- state: present
- region: "{{ deployment_vars[deployment_type].region }}"
- keypair: "{{ deployment_vars[deployment_type].keypair }}"
- group: "{{ deployment_vars[deployment_type].security_groups }}"
- instance_type: "{{ ec2_instance_type }}"
- image: "{{ deployment_vars[deployment_type].image }}"
- count: "{{ instances | length }}"
- vpc_subnet_id: "{{ deployment_vars[deployment_type].vpc_subnet }}"
- assign_public_ip: "{{ deployment_vars[deployment_type].assign_public_ip }}"
- user_data: "{{ lookup('template', '../templates/user_data.j2') }}"
- wait: yes
- instance_tags:
- created-by: "{{ created_by }}"
- clusterid: "{{ cluster }}"
- environment: "{{ cluster_env }}"
- host-type: "{{ host_type }}"
- sub-host-type: "{{ sub_host_type }}"
- volumes: "{{ volumes }}"
- register: ec2
-
-- name: Add Name tag to instances
- ec2_tag: resource={{ item.1.id }} region={{ deployment_vars[deployment_type].region }} state=present
- with_together:
- - "{{ instances }}"
- - "{{ ec2.instances }}"
- args:
- tags:
- Name: "{{ item.0 }}"
-
-- set_fact:
- instance_groups: >
- tag_created-by_{{ created_by }}, tag_clusterid_{{ cluster }},
- tag_environment_{{ cluster_env }}, tag_host-type_{{ host_type }},
- tag_sub-host-type_{{ sub_host_type }}
-
-- set_fact:
- node_label:
- region: "{{ deployment_vars[deployment_type].region }}"
- type: "{{sub_host_type}}"
- when: host_type == "node"
-
-- set_fact:
- node_label:
- region: "{{ deployment_vars[deployment_type].region }}"
- type: "{{host_type}}"
- when: host_type != "node"
-
-- set_fact:
- logrotate:
- - name: syslog
- path: |
- /var/log/cron
- /var/log/maillog
- /var/log/messages
- /var/log/secure
- /var/log/spooler"
- options:
- - daily
- - rotate 7
- - compress
- - sharedscripts
- - missingok
- scripts:
- postrotate: "/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true"
-
-- name: Add new instances groups and variables
- add_host:
- hostname: "{{ item.0 }}"
- ansible_ssh_host: "{{ item.1.dns_name }}"
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- groups: "{{ instance_groups }}"
- ec2_private_ip_address: "{{ item.1.private_ip }}"
- ec2_ip_address: "{{ item.1.public_ip }}"
- ec2_tag_sub-host-type: "{{ sub_host_type }}"
- openshift_node_labels: "{{ node_label }}"
- logrotate_scripts: "{{ logrotate }}"
- with_together:
- - "{{ instances }}"
- - "{{ ec2.instances }}"
-
-- name: Add new instances to nodes_to_add group if needed
- add_host:
- hostname: "{{ item.0 }}"
- ansible_ssh_host: "{{ item.1.dns_name }}"
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- groups: nodes_to_add
- ec2_private_ip_address: "{{ item.1.private_ip }}"
- ec2_ip_address: "{{ item.1.public_ip }}"
- openshift_node_labels: "{{ node_label }}"
- logrotate_scripts: "{{ logrotate }}"
- with_together:
- - "{{ instances }}"
- - "{{ ec2.instances }}"
- when: oo_extend_env is defined and oo_extend_env | bool
-
-- name: Wait for ssh
- wait_for: "port=22 host={{ item.dns_name }}"
- with_items: "{{ ec2.instances }}"
-
-- name: Wait for user setup
- command: "ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null {{ hostvars[item.0].ansible_ssh_user }}@{{ item.1.dns_name }} echo {{ hostvars[item.0].ansible_ssh_user }} user is setup"
- register: result
- until: result.rc == 0
- retries: 20
- delay: 10
- with_together:
- - "{{ instances }}"
- - "{{ ec2.instances }}"
diff --git a/playbooks/aws/openshift-cluster/templates/user_data.j2 b/playbooks/aws/openshift-cluster/templates/user_data.j2
deleted file mode 100644
index b1087f9c4..000000000
--- a/playbooks/aws/openshift-cluster/templates/user_data.j2
+++ /dev/null
@@ -1,22 +0,0 @@
-#cloud-config
-{% if type in ['node', 'master'] and 'docker' in volume_defs[type] %}
-mounts:
-- [ xvdb ]
-- [ ephemeral0 ]
-{% endif %}
-
-write_files:
-{% if type in ['node', 'master'] and 'docker' in volume_defs[type] %}
-- content: |
- DEVS=/dev/xvdb
- VG=docker_vg
- path: /etc/sysconfig/docker-storage-setup
- owner: root:root
- permissions: '0644'
-{% endif %}
-{% if deployment_vars[deployment_type].become | bool %}
-- path: /etc/sudoers.d/99-{{ deployment_vars[deployment_type].ssh_user }}-cloud-init-requiretty
- permissions: 440
- content: |
- Defaults:{{ deployment_vars[deployment_type].ssh_user }} !requiretty
-{% endif %}
diff --git a/playbooks/aws/openshift-cluster/terminate.yml b/playbooks/aws/openshift-cluster/terminate.yml
deleted file mode 100644
index 1f15aa4bf..000000000
--- a/playbooks/aws/openshift-cluster/terminate.yml
+++ /dev/null
@@ -1,77 +0,0 @@
----
-- name: Terminate instance(s)
- hosts: localhost
- connection: local
- become: no
- gather_facts: no
- vars_files:
- - vars.yml
- tasks:
- - add_host:
- name: "{{ item }}"
- groups: oo_hosts_to_terminate
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- with_items: "{{ (groups['tag_clusterid_' ~ cluster_id] | default([])) | difference(['localhost']) }}"
-
-- name: Unsubscribe VMs
- hosts: oo_hosts_to_terminate
- roles:
- - role: rhel_unsubscribe
- when: deployment_type in ['atomic-enterprise', 'enterprise', 'openshift-enterprise'] and
- ansible_distribution == "RedHat" and
- lookup('oo_option', 'rhel_skip_subscription') | default(rhsub_skip, True) |
- default('no', True) | lower in ['no', 'false']
-
-- name: Terminate instances
- hosts: localhost
- connection: local
- become: no
- gather_facts: no
- tasks:
- - name: Remove tags from instances
- ec2_tag:
- resource: "{{ hostvars[item]['ec2_id'] }}"
- region: "{{ hostvars[item]['ec2_region'] }}"
- state: absent
- tags:
- environment: "{{ hostvars[item]['ec2_tag_environment'] }}"
- clusterid: "{{ hostvars[item]['ec2_tag_clusterid'] }}"
- host-type: "{{ hostvars[item]['ec2_tag_host-type'] }}"
- sub_host_type: "{{ hostvars[item]['ec2_tag_sub-host-type'] }}"
- with_items: "{{ groups.oo_hosts_to_terminate }}"
- when: "'oo_hosts_to_terminate' in groups"
-
- - name: Terminate instances
- ec2:
- state: absent
- instance_ids: ["{{ hostvars[item].ec2_id }}"]
- region: "{{ hostvars[item].ec2_region }}"
- ignore_errors: yes
- register: ec2_term
- with_items: "{{ groups.oo_hosts_to_terminate }}"
- when: "'oo_hosts_to_terminate' in groups"
-
- # Fail if any of the instances failed to terminate with an error other
- # than 403 Forbidden
- - fail:
- msg: "Terminating instance {{ item.ec2_id }} failed with message {{ item.msg }}"
- when: "'oo_hosts_to_terminate' in groups and item.has_key('failed') and item.failed"
- with_items: "{{ ec2_term.results }}"
-
- - name: Stop instance if termination failed
- ec2:
- state: stopped
- instance_ids: ["{{ item.item.ec2_id }}"]
- region: "{{ item.item.ec2_region }}"
- register: ec2_stop
- when: "'oo_hosts_to_terminate' in groups and item.has_key('failed') and item.failed"
- with_items: "{{ ec2_term.results }}"
-
- - name: Rename stopped instances
- ec2_tag: resource={{ item.item.item.ec2_id }} region={{ item.item.item.ec2_region }} state=present
- args:
- tags:
- Name: "{{ item.item.item.ec2_tag_Name }}-terminate"
- with_items: "{{ ec2_stop.results }}"
- when: ec2_stop | changed
diff --git a/playbooks/aws/openshift-cluster/update.yml b/playbooks/aws/openshift-cluster/update.yml
deleted file mode 100644
index ed05d61ed..000000000
--- a/playbooks/aws/openshift-cluster/update.yml
+++ /dev/null
@@ -1,34 +0,0 @@
----
-- hosts: localhost
- gather_facts: no
- tasks:
- - include_vars: vars.yml
- - include_vars: cluster_hosts.yml
- - add_host:
- name: "{{ item }}"
- groups: l_oo_all_hosts
- with_items: "{{ g_all_hosts }}"
-
-- hosts: l_oo_all_hosts
- gather_facts: no
- tasks:
- - include_vars: vars.yml
- - include_vars: cluster_hosts.yml
-
-- name: Update - Populate oo_hosts_to_update group
- hosts: localhost
- connection: local
- become: no
- gather_facts: no
- tasks:
- - name: Update - Evaluate oo_hosts_to_update
- add_host:
- name: "{{ item }}"
- groups: oo_hosts_to_update
- ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
- ansible_become: "{{ deployment_vars[deployment_type].become }}"
- with_items: "{{ g_all_hosts | default([]) }}"
-
-- include: ../../common/openshift-cluster/update_repos_and_packages.yml
-
-- include: config.yml
diff --git a/playbooks/aws/openshift-cluster/vars.yml b/playbooks/aws/openshift-cluster/vars.yml
deleted file mode 100644
index d774187f0..000000000
--- a/playbooks/aws/openshift-cluster/vars.yml
+++ /dev/null
@@ -1,33 +0,0 @@
----
-debug_level: 2
-
-deployment_rhel7_ent_base:
- # rhel-7.1, requires cloud access subscription
- image: "{{ lookup('oo_option', 'ec2_image') | default('ami-10251c7a', True) }}"
- image_name: "{{ lookup('oo_option', 'ec2_image_name') | default(None, True) }}"
- region: "{{ lookup('oo_option', 'ec2_region') | default('us-east-1', True) }}"
- ssh_user: ec2-user
- become: yes
- keypair: "{{ lookup('oo_option', 'ec2_keypair') | default('libra', True) }}"
- type: "{{ lookup('oo_option', 'ec2_instance_type') | default('m4.large', True) }}"
- security_groups: "{{ lookup('oo_option', 'ec2_security_groups') | default([ 'public' ], True) }}"
- vpc_subnet: "{{ lookup('oo_option', 'ec2_vpc_subnet') | default(omit, True) }}"
- assign_public_ip: "{{ lookup('oo_option', 'ec2_assign_public_ip') | default(omit, True) }}"
-
-deployment_vars:
- origin:
- # centos-7, requires marketplace
- image: "{{ lookup('oo_option', 'ec2_image') | default('ami-6d1c2007', True) }}"
- image_name: "{{ lookup('oo_option', 'ec2_image_name') | default(None, True) }}"
- region: "{{ lookup('oo_option', 'ec2_region') | default('us-east-1', True) }}"
- ssh_user: centos
- become: yes
- keypair: "{{ lookup('oo_option', 'ec2_keypair') | default('libra', True) }}"
- type: "{{ lookup('oo_option', 'ec2_instance_type') | default('m4.large', True) }}"
- security_groups: "{{ lookup('oo_option', 'ec2_security_groups') | default([ 'public' ], True) }}"
- vpc_subnet: "{{ lookup('oo_option', 'ec2_vpc_subnet') | default(omit, True) }}"
- assign_public_ip: "{{ lookup('oo_option', 'ec2_assign_public_ip') | default(omit, True) }}"
-
- enterprise: "{{ deployment_rhel7_ent_base }}"
- openshift-enterprise: "{{ deployment_rhel7_ent_base }}"
- atomic-enterprise: "{{ deployment_rhel7_ent_base }}"
diff --git a/playbooks/aws/provisioning-inventory.example.ini b/playbooks/aws/provisioning-inventory.example.ini
new file mode 100644
index 000000000..238a7eb2f
--- /dev/null
+++ b/playbooks/aws/provisioning-inventory.example.ini
@@ -0,0 +1,25 @@
+[OSEv3:children]
+masters
+nodes
+etcd
+
+[OSEv3:vars]
+################################################################################
+# Ensure these variables are set for bootstrap
+################################################################################
+# openshift_deployment_type is required for installation
+openshift_deployment_type=origin
+
+openshift_master_bootstrap_enabled=True
+
+openshift_hosted_router_wait=False
+openshift_hosted_registry_wait=False
+
+################################################################################
+# cluster specific settings may be placed here
+
+[masters]
+
+[etcd]
+
+[nodes]
diff --git a/playbooks/aws/provisioning_vars.yml.example b/playbooks/aws/provisioning_vars.yml.example
new file mode 100644
index 000000000..1491fb868
--- /dev/null
+++ b/playbooks/aws/provisioning_vars.yml.example
@@ -0,0 +1,120 @@
+---
+# Variables that are commented in this file are optional; uncommented variables
+# are mandatory.
+
+# Default values for each variable are provided, as applicable.
+# Example values for mandatory variables are provided as a comment at the end
+# of the line.
+
+# ------------------------ #
+# Common/Cluster Variables #
+# ------------------------ #
+# Variables in this section affect all areas of the cluster
+
+# Deployment type must be specified.
+openshift_deployment_type: # 'origin' or 'openshift-enterprise'
+
+# openshift_release must be specified. Use any version of OpenShift
+# that is supported by openshift-ansible.
+openshift_release: # v3.7
+
+# This will be dependent on the version provided by the yum repository
+openshift_pkg_version: # -3.7.0
+
+# specify a clusterid
+# This value is also used as the default value for many other components.
+#openshift_aws_clusterid: default
+
+# AWS region
+# This value will instruct the plays where all items should be created.
+# Multi-region deployments are not supported using these plays at this time.
+#openshift_aws_region: us-east-1
+
+#openshift_aws_create_launch_config: true
+#openshift_aws_create_scale_group: true
+
+# --- #
+# VPC #
+# --- #
+
+# openshift_aws_create_vpc defaults to true. If you don't wish to provision
+# a vpc, set this to false.
+#openshift_aws_create_vpc: true
+
+# Name of the vpc. Needs to be set if using a pre-existing vpc.
+#openshift_aws_vpc_name: "{{ openshift_aws_clusterid }}"
+
+# Name of the subnet in the vpc to use. Needs to be set if using a pre-existing
+# vpc + subnet.
+#openshift_aws_subnet_name:
+
+# -------------- #
+# Security Group #
+# -------------- #
+
+# openshift_aws_create_security_groups defaults to true. If you wish to use
+# an existing security group, set this to false.
+#openshift_aws_create_security_groups: true
+
+# openshift_aws_build_ami_group is the name of the security group to build the
+# ami in. This defaults to the value of openshift_aws_clusterid.
+#openshift_aws_build_ami_group: "{{ openshift_aws_clusterid }}"
+
+# openshift_aws_launch_config_security_groups specifies the security groups to
+# apply to the launch config. The launch config security groups will be what
+# the cluster actually is deployed in.
+#openshift_aws_launch_config_security_groups: see roles/openshift_aws/defaults/main.yml
+
+# openshift_aws_node_security_groups are created when
+# openshift_aws_create_security_groups is set to true.
+#openshift_aws_node_security_groups: see roles/openshift_aws/defaults/main.yml
+
+# -------- #
+# ssh keys #
+# -------- #
+
+# Specify the key pair name here to connect to the provisioned instances. This
+# can be an existing key, or it can be one of the keys specified in
+# openshift_aws_users
+openshift_aws_ssh_key_name: # myuser_key
+
+# This will ensure these user and public keys are created.
+#openshift_aws_users:
+#- key_name: myuser_key
+# username: myuser
+# pub_key: |
+# ssh-rsa AAAA
+
+# When building the AMI, specify the user to ssh to the instance as.
+# openshift_aws_build_ami_ssh_user: root
+
+# --------- #
+# AMI Build #
+# --------- #
+# Variables in this section apply to building a node AMI for use in your
+# OpenShift cluster.
+
+# must specify a base_ami when building an AMI
+openshift_aws_base_ami: # ami-12345678
+
+# To create an encrypted AMI, set this to True.
+#openshift_aws_ami_encrypt: False
+
+# -- #
+# S3 #
+# -- #
+
+# Create an s3 bucket.
+#openshift_aws_create_s3: True
+
+# --- #
+# ELB #
+# --- #
+
+# openshift_aws_elb_name will be the base-name of the ELBs.
+#openshift_aws_elb_name: "{{ openshift_aws_clusterid }}"
+
+# custom certificates are required for the ELB
+openshift_aws_iam_cert_path: # '/path/to/wildcard.<clusterid>.example.com.crt'
+openshift_aws_iam_cert_key_path: # '/path/to/wildcard.<clusterid>.example.com.key'
+openshift_aws_iam_cert_chain_path: # '/path/to/cert.ca.crt'