Diffstat (limited to 'playbooks/provisioning/openstack')
-rw-r--r--  playbooks/provisioning/openstack/README.md                              | 258
-rw-r--r--  playbooks/provisioning/openstack/advanced-configuration.md              | 773
-rw-r--r--  playbooks/provisioning/openstack/ansible.cfg                             |  24
-rw-r--r--  playbooks/provisioning/openstack/custom-actions/add-cas.yml              |  13
-rw-r--r--  playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml  |  90
-rw-r--r--  playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml        |  13
-rw-r--r--  playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml        |  12
-rw-r--r--  playbooks/provisioning/openstack/custom_flavor_check.yaml                |   9
-rw-r--r--  playbooks/provisioning/openstack/custom_image_check.yaml                 |   9
-rw-r--r--  playbooks/provisioning/openstack/galaxy-requirements.yaml                |  10
-rw-r--r--  playbooks/provisioning/openstack/net_vars_check.yaml                     |  14
-rw-r--r--  playbooks/provisioning/openstack/post-install.yml                        |  57
-rw-r--r--  playbooks/provisioning/openstack/post-provision-openstack.yml            | 118
-rw-r--r--  playbooks/provisioning/openstack/pre-install.yml                         |  19
-rw-r--r--  playbooks/provisioning/openstack/pre_tasks.yml                           |  53
-rw-r--r--  playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml   |  67
-rw-r--r--  playbooks/provisioning/openstack/prerequisites.yml                       | 123
-rw-r--r--  playbooks/provisioning/openstack/provision-openstack.yml                 |  35
-rw-r--r--  playbooks/provisioning/openstack/provision.yaml                          |   4
l---------  playbooks/provisioning/openstack/roles                                   |   1
-rw-r--r--  playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml   |  59
-rw-r--r--  playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml     | 166
-rwxr-xr-x  playbooks/provisioning/openstack/sample-inventory/inventory.py           |  88
-rw-r--r--  playbooks/provisioning/openstack/scale-up.yaml                           |  75
-rw-r--r--  playbooks/provisioning/openstack/stack_params.yaml                       |  49
25 files changed, 2139 insertions, 0 deletions
diff --git a/playbooks/provisioning/openstack/README.md b/playbooks/provisioning/openstack/README.md
new file mode 100644
index 000000000..a2f553f4c
--- /dev/null
+++ b/playbooks/provisioning/openstack/README.md
@@ -0,0 +1,258 @@
+# OpenStack Provisioning
+
+This directory contains [Ansible][ansible] playbooks and roles to create
+OpenStack resources (servers, networking, volumes, security groups,
+etc.). The result is an environment ready for OpenShift installation
+via [openshift-ansible].
+
+We provide everything necessary to install OpenShift on OpenStack
+(including the DNS and load balancer servers when necessary). In
+addition, we are working on integration with the OpenStack-native
+services (storage, LBaaS, bare metal as a service, DNS, etc.).
+
+
+## OpenStack Requirements
+
+Before you start the installation, you need to have an OpenStack
+environment to connect to. You can use a public cloud or an OpenStack
+within your organisation. It is also possible to
+use [Devstack][devstack] or [TripleO][tripleo]. In the case of
+TripleO, we will be running on top of the **overcloud**.
+
+The OpenStack release must be Newton (for Red Hat OpenStack this is
+version 10) or newer. It must also satisfy these requirements:
+
+* Heat (Orchestration) must be available
+* The deployment image (CentOS 7 or RHEL 7) must be loaded
+* The deployment flavor must be available to your user
+ - `m1.medium` / 4GB RAM + 40GB disk should be enough for testing
+ - look at
+ the [Minimum Hardware Requirements page][hardware-requirements]
+ for production
+* The keypair for SSH must be available in OpenStack
+* A `keystonerc` file that lets you talk to the OpenStack services
+ * NOTE: only Keystone V2 is currently supported
+
+Optional:
+* External Neutron network with a floating IP address pool
+
+
+## Installation
+
+There are four main parts to the installation:
+
+1. [Preparing Ansible and dependencies](#1-preparing-ansible-and-dependencies)
+2. [Configuring the desired OpenStack environment and OpenShift cluster](#2-configuring-the-openstack-environment-and-openshift-cluster)
+3. [Creating the OpenStack resources (VMs, networking, etc.)](#3-creating-the-openstack-resources-vms-networking-etc)
+4. [Installing OpenShift](#4-installing-openshift)
+
+This guide is going to install [OpenShift Origin][origin]
+using [CentOS 7][centos7] images, with minimal customisation.
+
+We will create the VMs for running OpenShift in a new Neutron network,
+assign floating IP addresses and configure DNS.
+
+The OpenShift cluster will have a single Master node that will run
+`etcd`, a single Infra node and two App nodes.
+
+You can look at
+the [Advanced Configuration page][advanced-configuration] for
+additional options.
+
+
+
+### 1. Preparing Ansible and dependencies
+
+First, you need to select where to run [Ansible][ansible] from (the
+*Ansible host*). This can be the computer you read this guide on or an
+OpenStack VM you'll create specifically for this purpose.
+
+We will use a
+[Docker image that has all the dependencies installed][control-host-image]
+to make things easier. If you don't want to use Docker, take a look at
+the [Ansible host dependencies][ansible-dependencies] and make sure
+they're installed.
+
+Your *Ansible host* needs to have the following:
+
+1. Docker
+2. `keystonerc` file with your OpenStack credentials
+3. SSH private key for logging in to your OpenShift nodes
+
+Assuming your private key is `~/.ssh/id_rsa` and your `keystonerc` is in
+your current directory:
+
+```bash
+$ sudo docker run -it -v ~/.ssh:/mnt/.ssh:Z \
+ -v $PWD/keystonerc:/root/.config/openstack/keystonerc.sh:Z \
+ redhatcop/control-host-openstack bash
+```
+
+This will create the container, add your SSH key and source your
+`keystonerc`. Everything should now be set up for the installation.
+
+You can verify that everything is in order:
+
+
+```bash
+$ less .ssh/id_rsa
+$ ansible --version
+$ openstack image list
+```
+
+
+### 2. Configuring the OpenStack Environment and OpenShift Cluster
+
+The configuration is all done in an Ansible inventory directory. We
+will clone the [openshift-ansible-contrib][contrib] repository and set
+things up for a minimal installation.
+
+
+```
+$ git clone https://github.com/openshift/openshift-ansible-contrib
+$ cp -r openshift-ansible-contrib/playbooks/provisioning/openstack/sample-inventory/ inventory
+```
+
+If you're testing multiple configurations, you can have multiple
+inventories and switch between them.
+
+#### OpenStack Configuration
+
+The OpenStack configuration is in `inventory/group_vars/all.yml`.
+
+Open the file and plug in the image, flavor and network configuration
+corresponding to your OpenStack installation.
+
+```bash
+$ vi inventory/group_vars/all.yml
+```
+
+1. Set the `openstack_ssh_public_key` to your OpenStack keypair name.
+ - See `openstack keypair list` to find the keypairs registered with
+ OpenStack.
+ - This must correspond to your private SSH key in `~/.ssh/id_rsa`
+2. Set the `openstack_external_network_name` to the floating IP
+ network of your OpenStack.
+ - See `openstack network list` for the list of networks.
+ - It's often called `public`, `external` or `ext-net`.
+3. Set the `openstack_default_image_name` to the image you want your
+ OpenShift VMs to run.
+ - See `openstack image list` for the list of available images.
+4. Set the `openstack_default_flavor` to the flavor you want your
+ OpenShift VMs to use.
+ - See `openstack flavor list` for the list of available flavors.
+
+**NOTE**: In most OpenStack environments, you will also need to
+configure the forwarders for the DNS server we create. This depends on
+your environment.
+
+Launch a VM in your OpenStack environment, look at its `/etc/resolv.conf`,
+and put those IP addresses into `public_dns_nameservers` in
+`inventory/group_vars/all.yml`.
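+
+Putting it all together, a minimal `all.yml` might look like this (all
+names and addresses below are illustrative -- use the values from your
+own environment):
+
+```yaml
+openstack_ssh_public_key: my-keypair
+openstack_external_network_name: public
+openstack_default_image_name: centos7
+openstack_default_flavor: m1.medium
+public_dns_nameservers:
+  - 192.168.0.3
+  - 192.168.0.2
+```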
+
+
+#### OpenShift configuration
+
+The OpenShift configuration is in `inventory/group_vars/OSEv3.yml`.
+
+The default options will mostly work, but unless you used the large
+flavors for a production-ready environment, openshift-ansible's
+hardware check will fail.
+
+Let's disable those checks by putting this in
+`inventory/group_vars/OSEv3.yml`:
+
+```yaml
+openshift_disable_check: disk_availability,memory_availability
+```
+
+**NOTE**: The default authentication method will allow **any username
+and password** in! If you're running this in a public place, you need
+to set up access control.
+
+Feel free to look at
+the [Sample OpenShift Inventory][sample-openshift-inventory] and
+the [advanced configuration][advanced-configuration].
+
+
+### 3. Creating the OpenStack resources (VMs, networking, etc.)
+
+We will install the DNS server roles using Ansible Galaxy and then run
+the OpenStack provisioning playbook. The `ansible.cfg` file we provide
+has useful defaults -- copy it to the directory you're going to run
+Ansible from.
+
+```bash
+$ ansible-galaxy install -r openshift-ansible-contrib/playbooks/provisioning/openstack/galaxy-requirements.yaml -p openshift-ansible-contrib/roles
+$ cp openshift-ansible-contrib/playbooks/provisioning/openstack/ansible.cfg ansible.cfg
+```
+(you will only need to do this once)
+
+Then run the provisioning playbook -- this will create the OpenStack
+resources:
+
+```bash
+$ ansible-playbook -i inventory openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml
+```
+
+If you're using multiple inventories, make sure you pass the path to
+the right one to `-i`.
+
+
+### 4. Installing OpenShift
+
+We will use the `openshift-ansible` project to install OpenShift on
+top of the OpenStack nodes we have prepared:
+
+```bash
+$ git clone https://github.com/openshift/openshift-ansible
+$ ansible-playbook -i inventory openshift-ansible/playbooks/byo/config.yml
+```
+
+
+### Next Steps
+
+And that's it! You should have a small but functional OpenShift
+cluster now.
+
+Take a look at [how to access the cluster][accessing-openshift]
+and [how to remove it][uninstall-openshift] as well as the more
+advanced configuration:
+
+* [Accessing the OpenShift cluster][accessing-openshift]
+* [Removing the OpenShift cluster][uninstall-openshift]
+* Set Up Authentication (TODO)
+* [Multiple Masters with a load balancer][loadbalancer]
+* [External DNS][external-dns]
+* Multiple Clusters (TODO)
+* [Cinder Registry][cinder-registry]
+* [Bastion Node][bastion]
+
+
+[ansible]: https://www.ansible.com/
+[openshift-ansible]: https://github.com/openshift/openshift-ansible
+[devstack]: https://docs.openstack.org/devstack/
+[tripleo]: http://tripleo.org/
+[ansible-dependencies]: ./advanced-configuration.md#dependencies-for-localhost-ansible-controladmin-node
+[contrib]: https://github.com/openshift/openshift-ansible-contrib
+[control-host-image]: https://hub.docker.com/r/redhatcop/control-host-openstack/
+[hardware-requirements]: https://docs.openshift.org/latest/install_config/install/prerequisites.html#hardware
+[origin]: https://www.openshift.org/
+[centos7]: https://www.centos.org/
+[sample-openshift-inventory]: https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.example
+[advanced-configuration]: ./advanced-configuration.md
+[accessing-openshift]: ./advanced-configuration.md#accessing-the-openshift-cluster
+[uninstall-openshift]: ./advanced-configuration.md#removing-the-openshift-cluster
+[loadbalancer]: ./advanced-configuration.md#multi-master-configuration
+[external-dns]: ./advanced-configuration.md#dns-configuration-variables
+[cinder-registry]: ./advanced-configuration.md#creating-and-using-a-cinder-volume-for-the-openshift-registry
+[bastion]: ./advanced-configuration.md#configure-static-inventory-and-access-via-a-bastion-node
+
+
+
+## License
+
+Like the rest of the openshift-ansible-contrib repository, the code
+here is licensed under Apache 2.
diff --git a/playbooks/provisioning/openstack/advanced-configuration.md b/playbooks/provisioning/openstack/advanced-configuration.md
new file mode 100644
index 000000000..72bb95254
--- /dev/null
+++ b/playbooks/provisioning/openstack/advanced-configuration.md
@@ -0,0 +1,773 @@
+## Dependencies for localhost (ansible control/admin node)
+
+* [Ansible 2.3](https://pypi.python.org/pypi/ansible)
+* [Ansible-galaxy](https://pypi.python.org/pypi/ansible-galaxy-local-deps)
+* [jinja2](http://jinja.pocoo.org/docs/2.9/)
+* [shade](https://pypi.python.org/pypi/shade)
+* python-jmespath / [jmespath](https://pypi.python.org/pypi/jmespath)
+* python-dns / [dnspython](https://pypi.python.org/pypi/dnspython)
+* Become (sudo) is not required.
+
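+A minimal sketch for installing the Python dependencies with `pip`
+(assuming a working Python environment; the package names are as
+published on PyPI):
+
+    pip install 'ansible>=2.3' jinja2 shade jmespath dnspython
+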
+**NOTE**: You can use a Docker image with all dependencies set up.
+Find more in the [Deployment section](#deployment).
+
+### Optional Dependencies for localhost
+**Note**: When using RHEL images, the `rhel-7-server-openstack-10-rpms` repository is required in order to install these packages.
+
+* `python-openstackclient`
+* `python-heatclient`
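+
+For example, on a RHEL host with that repository enabled, a minimal
+install sketch would be:
+
+    yum install python-openstackclient python-heatclient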
+
+## Dependencies for OpenStack hosted cluster nodes (servers)
+
+There are no additional dependencies for the cluster nodes. Required
+configuration steps are done by Heat given a specific user data config
+that normally should not be changed.
+
+## Required galaxy modules
+
+In order to pull in external dependencies for DNS configuration steps,
+the following commands need to be executed:
+
+ ansible-galaxy install \
+ -r openshift-ansible-contrib/playbooks/provisioning/openstack/galaxy-requirements.yaml \
+ -p openshift-ansible-contrib/roles
+
+Alternatively you can install directly from GitHub:
+
+ ansible-galaxy install git+https://github.com/redhat-cop/infra-ansible,master \
+ -p openshift-ansible-contrib/roles
+
+Notes:
+* This assumes we're in the directory that contains the cloned
+openshift-ansible-contrib repo in its root path.
+* When trying to install a different version, the previous one must be removed first
+(`infra-ansible` directory from [roles](https://github.com/openshift/openshift-ansible-contrib/tree/master/roles)).
+Otherwise, even if there are differences between the two versions, installation of the newer version is skipped.
+
+
+## Accessing the OpenShift Cluster
+
+### Use the Cluster DNS
+
+In addition to the OpenShift nodes, we created a DNS server with all
+the necessary entries. We will configure your *Ansible host* to use
+this new DNS and talk to the deployed OpenShift.
+
+First, get the DNS IP address:
+
+```bash
+$ openstack server show dns-0.openshift.example.com --format value --column addresses
+openshift-ansible-openshift.example.com-net=192.168.99.11, 10.40.128.129
+```
+
+Note the floating IP address (it's `10.40.128.129` in this case) -- if
+you're not sure, try pinging them both -- it's the one that responds
+to pings.
+
+Next, edit your `/etc/resolv.conf` as root and put `nameserver DNS_IP` as your
+**first entry**.
+
+If your `/etc/resolv.conf` currently looks like this:
+
+```
+; generated by /usr/sbin/dhclient-script
+search openstacklocal
+nameserver 192.168.0.3
+nameserver 192.168.0.2
+```
+
+Change it to this:
+
+```
+; generated by /usr/sbin/dhclient-script
+search openstacklocal
+nameserver 10.40.128.129
+nameserver 192.168.0.3
+nameserver 192.168.0.2
+```
+
+### Get the `oc` Client
+
+**NOTE**: You can skip this section if you're using the Docker image
+-- it already has the `oc` binary.
+
+You need to download the OpenShift command line client (called `oc`).
+You can download and extract `openshift-origin-client-tools` from the
+OpenShift release page:
+
+https://github.com/openshift/origin/releases/latest/
+
+Or you can now copy it from the master node:
+
+ $ ansible -i inventory masters[0] -m fetch -a "src=/bin/oc dest=oc"
+
+Either way, find the `oc` binary and put it in your `PATH`.
+
+
+### Logging in Using the Command Line
+
+
+```
+oc login --insecure-skip-tls-verify=true https://master-0.openshift.example.com:8443 -u user -p password
+oc new-project test
+oc new-app --template=cakephp-mysql-example
+oc status -v
+curl http://cakephp-mysql-example-test.apps.openshift.example.com
+```
+
+This will trigger an image build. You can run `oc logs -f
+bc/cakephp-mysql-example` to follow its progress.
+
+Wait until the build has finished and both pods are deployed and running:
+
+```
+$ oc status -v
+In project test on server https://master-0.openshift.example.com:8443
+
+http://cakephp-mysql-example-test.apps.openshift.example.com (svc/cakephp-mysql-example)
+ dc/cakephp-mysql-example deploys istag/cakephp-mysql-example:latest <-
+ bc/cakephp-mysql-example source builds https://github.com/openshift/cakephp-ex.git on openshift/php:7.0
+ deployment #1 deployed about a minute ago - 1 pod
+
+svc/mysql - 172.30.144.36:3306
+ dc/mysql deploys openshift/mysql:5.7
+ deployment #1 deployed 3 minutes ago - 1 pod
+
+Info:
+ * pod/cakephp-mysql-example-1-build has no liveness probe to verify pods are still running.
+ try: oc set probe pod/cakephp-mysql-example-1-build --liveness ...
+View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
+
+```
+
+You can now look at the deployed app using its route:
+
+```
+$ curl http://cakephp-mysql-example-test.apps.openshift.example.com
+```
+
+Its `title` should say: "Welcome to OpenShift".
+
+
+### Accessing the UI
+
+You can also access the OpenShift cluster with a web browser by going to:
+
+https://master-0.openshift.example.com:8443
+
+Note that for this to work, the OpenShift nodes must be accessible
+from your computer and its DNS configuration must use the cluster's
+DNS.
+
+
+## Removing the OpenShift Cluster
+
+Everything in the cluster is contained within a Heat stack. To
+completely remove the cluster and all the related OpenStack resources,
+run this command:
+
+```bash
+openstack stack delete --wait --yes openshift.example.com
+```
+
+
+## DNS configuration variables
+
+Pay special attention to the values in the first paragraph -- these
+will depend on your OpenStack environment.
+
+Note that the provisioning playbooks update the original Neutron subnet
+created with the Heat stack to point to the configured DNS servers.
+So the provisioned cluster nodes will start using those natively as
+default nameservers. Technically, this allows deploying OpenShift
+clusters without dnsmasq proxies.
+
+The `env_id` and `public_dns_domain` will form the cluster's DNS domain
+all your servers will be under. With the default values, this will be
+`openshift.example.com`. For workloads, the default subdomain is 'apps'.
+That subdomain can be set as well by the `openshift_app_domain` variable
+in the inventory.
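+
+For example, the defaults correspond to these inventory values:
+
+    env_id: openshift
+    public_dns_domain: example.com
+    openshift_app_domain: apps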
+
+The `openstack_<role name>_hostname` is a set of variables used for
+customising hostnames of servers with a given role. When such a variable
+stays commented out, the default hostname (usually the role name) is used.
+
+The `public_dns_nameservers` is a list of DNS servers accessible from all
+the created Nova servers. These will be serving as your DNS forwarders for
+external FQDNs that do not belong to the cluster's DNS domain and its subdomains.
+If you're unsure what to put in here, you can try the Google or OpenDNS
+servers, but note that some organizations may be blocking them.
+
+The `openshift_use_dnsmasq` variable controls whether dnsmasq is deployed
+or not. By default, dnsmasq is deployed and becomes the first nameserver
+entry in the hosts' /etc/resolv.conf file; it points to the local host
+instance of the dnsmasq daemon, which in turn proxies DNS requests to the
+authoritative DNS server. When NetworkManager is enabled for provisioned
+cluster nodes, which is normally the case, you should not change the
+defaults and always deploy dnsmasq.
+
+`external_nsupdate_keys` describes external authoritative DNS server(s)
+processing dynamic record updates in the public and private cluster views:
+
+ external_nsupdate_keys:
+ public:
+ key_secret: <some nsupdate key>
+ key_algorithm: 'hmac-md5'
+ key_name: 'update-key'
+ server: <public DNS server IP>
+ private:
+ key_secret: <some nsupdate key 2>
+ key_algorithm: 'hmac-sha256'
+ server: <public or private DNS server IP>
+
+Here, for the public view section, we specified another key algorithm and
+the optional `key_name`, which normally defaults to the cluster's DNS
+domain. This illustrates a compatibility mode with a DNS service deployed
+by OpenShift on the OSP10 reference architecture, used in a mixed mode
+with another external DNS server.
+
+Another example defines an external DNS server for the public view, in
+addition to the in-stack DNS server used only for the private view:
+
+ external_nsupdate_keys:
+ public:
+ key_secret: <some nsupdate key>
+ key_algorithm: 'hmac-sha256'
+ server: <public DNS server IP>
+
+Here, updates matching the public view will hit the given public server
+IP, while updates matching the private view will be sent to the
+auto-evaluated in-stack DNS server's **public** IP.
+
+Note that for the in-stack DNS server, private view updates may be sent
+only via the public IP of the server. You cannot send updates via the
+private IP yet. This forces the in-stack private server to have a
+floating IP. See also the [security notes](#security-notes).
+
+## Flannel networking
+
+In order to configure the
+[flannel networking](https://docs.openshift.com/container-platform/3.6/install_config/configuring_sdn.html#using-flannel),
+uncomment and adjust the appropriate `inventory/group_vars/OSEv3.yml` group vars.
+Note that the `osm_cluster_network_cidr` must not overlap with the default
+Docker bridge subnet of 172.17.0.0/16; otherwise, change the default
+docker0 CIDR range, for example by adding `--bip=192.168.2.1/24` to
+`DOCKER_NETWORK_OPTIONS` located in `/etc/sysconfig/docker-network`.
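+
+With that option, `/etc/sysconfig/docker-network` would contain a line
+like this (using the example CIDR from above):
+
+    DOCKER_NETWORK_OPTIONS='--bip=192.168.2.1/24'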
+
+Also note that the flannel network will be provisioned on a separate,
+isolated Neutron subnet defined from `osm_cluster_network_cidr` and with
+port security disabled. Use the `openstack_private_data_network_name`
+variable to define the network name for the Heat stack resource.
+
+After the cluster deployment is done, you should run an additional
+post-installation step for flannel and docker iptables configuration:
+
+ ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-install.yml
+
+## Other configuration variables
+
+`openstack_ssh_public_key` is a Nova keypair - you can see your
+keypairs with `openstack keypair list`. It must correspond to the
+private SSH key Ansible will use to log into the created VMs. This is
+`~/.ssh/id_rsa` by default, but you can use a different key by passing
+`--private-key` to `ansible-playbook`.
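+
+For example, to use a different key (the key path here is hypothetical):
+
+    ansible-playbook --private-key ~/.ssh/openshift_key \
+        openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml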
+
+`openstack_default_image_name` is the default name of the Glance image the
+servers will use. You can see your images with `openstack image list`.
+In order to set a different image for a role, uncomment the line with the
+corresponding variable (e.g. `openstack_lb_image_name` for load balancer) and
+set its value to another available image name. `openstack_default_image_name`
+must stay defined as it is used as a default value for the rest of the roles.
+
+`openstack_default_flavor` is the default Nova flavor the servers will use.
+You can see your flavors with `openstack flavor list`.
+In order to set a different flavor for a role, uncomment the line with the
+corresponding variable (e.g. `openstack_lb_flavor` for load balancer) and
+set its value to another available flavor. `openstack_default_flavor` must
+stay defined as it is used as a default value for the rest of the roles.
+
+`openstack_external_network_name` is the name of the Neutron network
+providing external connectivity. It is often called `public`,
+`external` or `ext-net`. You can see your networks with `openstack
+network list`.
+
+`openstack_private_network_name` is the name of the private Neutron
+network providing admin/control access for Ansible. It can be merged with
+other cluster networks; there are no special requirements for networking.
+
+The `openstack_num_masters`, `openstack_num_infra` and
+`openstack_num_nodes` values specify the number of Master, Infra and
+App nodes to create.
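+
+For example, the topology described in the README (a single Master, one
+Infra node and two App nodes) corresponds to:
+
+    openstack_num_masters: 1
+    openstack_num_infra: 1
+    openstack_num_nodes: 2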
+
+The `openshift_cluster_node_labels` defines custom labels for your openshift
+cluster node groups. It currently supports app and infra node groups.
+The default value of this variable sets `region: primary` to app nodes and
+`region: infra` to infra nodes.
+An example of setting a customised label:
+```
+openshift_cluster_node_labels:
+ app:
+ mylabel: myvalue
+```
+
+The `openstack_nodes_to_remove` variable allows you to specify the
+numerical indexes of App nodes that should be removed, for example:
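+
+    openstack_nodes_to_remove: [ '0', '2' ]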
+
+The `docker_volume_size` is the default Docker volume size the servers
+will use. In order to set a different volume size for a role, uncomment
+the line with the corresponding variable (e.g. `docker_master_volume_size`
+for master) and change its value. `docker_volume_size` must stay defined
+as it is used as a default value for some of the servers (master, infra,
+app node). The rest of the roles (etcd, load balancer, dns) have their
+defaults hard-coded.
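+
+A sketch with illustrative sizes in GB (the values are arbitrary):
+
+    docker_volume_size: 15
+    # docker_master_volume_size: 25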
+
+**Note**: If `ephemeral_volumes` is set to `true`, the `*_volume_size`
+variables will be ignored and the deployment will not create any Cinder
+volumes.
+
+The `openstack_flat_secgrp` variable controls Neutron security group
+creation for Heat stacks. Set it to true if you experience issues with
+security group rule quotas. It trades security for the number of rules by
+sharing the same set of firewall rules for master, node, etcd and infra
+nodes.
+
+The `required_packages` variable also provides a list of the additional
+prerequisite packages to be installed before deploying an OpenShift
+cluster. These are ignored, though, if `manage_packages: False` is set.
+
+The `openstack_inventory` variable controls whether a static inventory
+will be created after the cluster nodes are provisioned on the OpenStack
+cloud. Note that a fully dynamic inventory is yet to be supported, so the
+static inventory will be created anyway.
+
+The `openstack_inventory_path` points to the directory hosting the
+generated static inventory. It should point to the copied example
+inventory directory; otherwise it creates a new one for you.
+
+## Multi-master configuration
+
+Please refer to the official documentation for the
+[multi-master setup](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#multiple-masters)
+and define the corresponding [inventory
+variables](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#configuring-cluster-variables)
+in `inventory/group_vars/OSEv3.yml`. For example, given a load balancer node
+under the ansible group named `ext_lb`:
+
+ openshift_master_cluster_method: native
+ openshift_master_cluster_hostname: "{{ groups.ext_lb.0 }}"
+ openshift_master_cluster_public_hostname: "{{ groups.ext_lb.0 }}"
+
+## Provider Network
+
+Normally, the playbooks create a new Neutron network and subnet and attach
+floating IP addresses to each node. If you have a provider network set up, this
+is all unnecessary as you can just access servers that are placed in the
+provider network directly.
+
+To use a provider network, set its name in `openstack_provider_network_name` in
+`inventory/group_vars/all.yml`.
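+
+For example (the network name is illustrative):
+
+    openstack_provider_network_name: my-provider-net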
+
+If you set the provider network name, the `openstack_external_network_name` and
+`openstack_private_network_name` fields will be ignored.
+
+**NOTE**: this will not update the nodes' DNS, so running openshift-ansible
+right after provisioning will fail (unless you're using an external DNS server
+your provider network knows about). You must make sure your nodes are able to
+resolve each other by name.
+
+## Security notes
+
+Configure required `*_ingress_cidr` variables to restrict public access
+to provisioned servers from your laptop (a /32 notation should be used)
+or your trusted network. The most important is `node_ingress_cidr`, which
+restricts public access to the deployed DNS server and the cluster nodes'
+ephemeral port range.
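+
+For example, to allow access only from a single admin machine (the
+address is illustrative; /32 denotes a single host):
+
+    node_ingress_cidr: 203.0.113.10/32
+    ssh_ingress_cidr: 203.0.113.10/32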
+
+Note: the command ``curl https://api.ipify.org`` helps finding the
+external IP address of your box (the ansible admin node).
+
+There is also the `manage_packages` variable (defaults to True) you
+may want to turn off in order to speed up the provisioning tasks. This may
+be the case for development environments. When turned off, the servers will
+be provisioned omitting the ``yum update`` command. This has security
+implications though, and is not recommended for production deployments.
+
+### DNS servers security options
+
+Aside from `node_ingress_cidr` restricting public access to in-stack DNS
+servers, there are following (bind/named specific) DNS security
+options available:
+
+ named_public_recursion: 'no'
+ named_private_recursion: 'yes'
+
+External DNS servers, which are not included in the 'dns' hosts group,
+are not managed. It is up to you to configure them.
+
+## Configure the OpenShift parameters
+
+Finally, you need to update the DNS entry in
+`inventory/group_vars/OSEv3.yml` (look at
+`openshift_master_default_subdomain`).
+
+In addition, this is the place where you can customise your OpenShift
+installation, for example by specifying the authentication method.
+
+The full list of options is available in this sample inventory:
+
+https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example
+
+Note that in order to deploy OpenShift Origin, you should update the
+following variables in `inventory/group_vars/OSEv3.yml` and `all.yml`:
+
+ deployment_type: origin
+ openshift_deployment_type: "{{ deployment_type }}"
+
+
+## Setting a custom entrypoint
+
+In order to set a custom entrypoint, update `openshift_master_cluster_public_hostname`
+
+ openshift_master_cluster_public_hostname: api.openshift.example.com
+
+Note that an empty hostname does not work, so if your domain is `openshift.example.com`,
+you cannot set this value to simply `openshift.example.com`.
+
+## Creating and using a Cinder volume for the OpenShift registry
+
+You can optionally have the playbooks create a Cinder volume and set
+it up as the OpenShift hosted registry.
+
+To do that, you need to specify the desired Cinder volume name and size
+in gigabytes in `inventory/group_vars/all.yml`:
+
+ cinder_hosted_registry_name: cinder-registry
+ cinder_hosted_registry_size_gb: 10
+
+With this, the playbooks will create the volume and set up its
+filesystem. If there is an existing volume of the same name, we will
+use it but keep the existing data on it.
+
+To use the volume for the registry, you must first configure it with
+the OpenStack credentials by putting the following to `OSEv3.yml`:
+
+ openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
+ openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
+ openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
+ openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}"
+
+This will use the credentials from your shell environment. If you want
+to enter them explicitly, you can. You can also use credentials
+different from the provisioning ones (say for quota or access control
+reasons).
+
+**NOTE**: If you're testing this on [DevStack][devstack], you must
+explicitly set your Keystone API version to v2 (e.g.
+`OS_AUTH_URL=http://10.34.37.47/identity/v2.0`) instead of the default
+value provided by `openrc`. You may also encounter the following issue
+with Cinder:
+
+https://github.com/kubernetes/kubernetes/issues/50461
+
+You can read the [OpenShift documentation on configuring
+OpenStack][openstack] for more information.
+
+[devstack]: https://docs.openstack.org/devstack/latest/
+[openstack]: https://docs.openshift.org/latest/install_config/configuring_openstack.html
+
+
+Next, we need to instruct OpenShift to use the Cinder volume for its
+registry. Again in `OSEv3.yml`:
+
+ #openshift_hosted_registry_storage_kind: openstack
+ #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
+ #openshift_hosted_registry_storage_openstack_filesystem: xfs
+
+The filesystem value here will be used in the initial formatting of
+the volume.
+
+If you're using the dynamic inventory, you must uncomment these two values as
+well:
+
+ #openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', cinder_hosted_registry_name).id }}"
+ #openshift_hosted_registry_storage_volume_size: "{{ cinder_hosted_registry_size_gb }}Gi"
+
+But note that they use the `os_cinder` lookup plugin we provide, so you must
+tell Ansible where to find it either in `ansible.cfg` (the one we provide is
+configured properly) or by exporting the
+`ANSIBLE_LOOKUP_PLUGINS=openshift-ansible-contrib/lookup_plugins` environment
+variable.
+
+
+
+## Use an existing Cinder volume for the OpenShift registry
+
+You can also use a pre-existing Cinder volume for the storage of your
+OpenShift registry.
+
+To do that, you need to have a Cinder volume. You can create one by
+running:
+
+ openstack volume create --size <volume size in gb> <volume name>
+
+The volume needs to have a file system created before you put it to
+use.
+
+As with the automatically-created volume, you have to set up the
+OpenStack credentials in `inventory/group_vars/OSEv3.yml` as well as
+registry values:
+
+ #openshift_hosted_registry_storage_kind: openstack
+ #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
+ #openshift_hosted_registry_storage_openstack_filesystem: xfs
+ #openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05
+ #openshift_hosted_registry_storage_volume_size: 10Gi
+
+Note the `openshift_hosted_registry_storage_openstack_volumeID` and
+`openshift_hosted_registry_storage_volume_size` values: these need to
+be added in addition to the previous variables.
+
+The **Cinder volume ID**, **filesystem** and **volume size** variables
+must correspond to the values in your volume. The volume ID must be
+the **UUID** of the Cinder volume, *not its name*.
+
+We can format the volume for you if you ask for it in
+`inventory/group_vars/all.yml`:
+
+ prepare_and_format_registry_volume: true
+
+**NOTE:** doing so **will destroy any data that's currently on the volume**!
+
+You can also run the registry setup playbook directly:
+
+ ansible-playbook -i inventory playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml
+
+(the provisioning phase must be completed first)
+
+
+
+## Configure static inventory and access via a bastion node
+
+Example inventory variables:
+
+ openstack_use_bastion: true
+ bastion_ingress_cidr: "{{openstack_subnet_prefix}}.0/24"
+ openstack_private_ssh_key: ~/.ssh/id_rsa
+ openstack_inventory: static
+ openstack_inventory_path: ../../../../inventory
+ openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.openshift.example.com
+
+The `openstack_subnet_prefix` is the prefix of the OpenStack private
+network for your cluster. The `bastion_ingress_cidr` defines the accepted
+range for SSH connections to nodes, in addition to the `ssh_ingress_cidr`
+(see the security notes above).
+
+The SSH config will be stored on the ansible control node at the given
+path. Ansible uses it automatically. To access the cluster nodes with
+that SSH config, use the `-F` option, e.g.:
+
+ ssh -F /tmp/ssh.config.openshift.ansible.openshift.example.com master-0.openshift.example.com echo OK
+
+Note that relative paths will not work for the `openstack_ssh_config_path`,
+but they work for the `openstack_private_ssh_key` and
+`openstack_inventory_path`. In this guide, the latter points to the current
+directory, where you run ansible commands from.
+
+To verify node connectivity, use the command:
+
+ ansible -v -i inventory/hosts -m ping all
+
+If something is broken, double-check the inventory variables, paths and the
+generated `<openstack_inventory_path>/hosts` and `openstack_ssh_config_path` files.
+
+The `openstack_inventory: dynamic` setting can be used instead to access
+cluster nodes directly via floating IPs. In this mode you cannot use a
+bastion node and should specify the dynamic inventory file in your
+ansible commands, like `-i openstack.py`.
+
+## Using Docker on the Ansible host
+
+If you don't want to worry about the dependencies, you can use the
+[OpenStack Control Host image][control-host-image].
+
+[control-host-image]: https://hub.docker.com/r/redhatcop/control-host-openstack/
+
+It has all the dependencies installed, but you'll need to map your
+code and credentials to it. Assuming your SSH keys live in `~/.ssh`
+and everything else is in your current directory (i.e. `ansible.cfg`,
+`keystonerc`, `inventory`, `openshift-ansible`,
+`openshift-ansible-contrib`), this is how you run the deployment:
+
+ sudo docker run -it -v ~/.ssh:/mnt/.ssh:Z \
+ -v $PWD:/root/openshift:Z \
+ -v $PWD/keystonerc:/root/.config/openstack/keystonerc.sh:Z \
+ redhatcop/control-host-openstack bash
+
+(feel free to replace `$PWD` with an actual path to your inventory and
+checkouts, but note that relative paths don't work)
+
+The first run may take a few minutes while the image is being
+downloaded. After that, you'll be inside the container and you can run
+the playbooks:
+
+ cd openshift
+ ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml
+
+
+### Run the playbook
+
+Assuming your OpenStack (Keystone) credentials are in the `keystonerc`
+file, this is how you start the provisioning process from your ansible
+control node:
+
+ . keystonerc
+ ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml
+
+Note that here you start with an empty inventory. The static inventory
+will be populated with data, so you can omit additional arguments for
+future ansible commands.
+
+If the bastion is enabled, the generated SSH config must be applied for
+ansible. Otherwise, it is auto-included by the previous step. In order to
+execute it as a separate playbook, use the following command:
+
+ ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-provision-openstack.yml
+
+The first infra node then becomes a bastion node as well and proxies
+access for future ansible commands. The post-provision step also
+configures Satellite, if requested, and the DNS server, and ensures other
+OpenShift requirements are met.
+
+
+## Running Custom Post-Provision Actions
+
+A custom playbook can be run like this:
+
+```
+ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml
+```
+
+If you'd like to limit the run to one particular host, you can do so as follows:
+
+```
+ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml -l app-node-0.openshift.example.com
+```
+
+You can also create your own custom playbook. Here are a few examples:
+
+### Adding additional YUM repositories
+
+```
+---
+- hosts: app
+ tasks:
+
+ # enable EPEL
+ - name: Add repository
+ yum_repository:
+ name: epel
+ description: EPEL YUM repo
+ baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/
+```
+
+This example runs against app nodes. The list of options includes:
+
+ - cluster_hosts (all hosts: app, infra, masters, dns, lb)
+ - OSEv3 (app, infra, masters)
+ - app
+ - dns
+ - masters
+ - infra_hosts
+
+### Attaching additional RHN pools
+
+```
+---
+- hosts: cluster_hosts
+ tasks:
+ - name: Attach additional RHN pool
+ become: true
+ command: "/usr/bin/subscription-manager attach --pool=<pool ID>"
+ register: attach_rhn_pool_result
+ until: attach_rhn_pool_result.rc == 0
+ retries: 10
+ delay: 1
+```
+
+This playbook runs against all cluster nodes. To cope with slow or
+transient connectivity problems, the task is retried 10 times in case of
+initial failure.
+Note that in order for this example to work in your deployment, your servers must use the RHEL image.
+
+### Adding extra Docker registry URLs
+
+This playbook is located in the [custom-actions](https://github.com/openshift/openshift-ansible-contrib/tree/master/playbooks/provisioning/openstack/custom-actions) directory.
+
+It adds URLs passed as arguments to the docker registry configuration file.
+Going into more detail, the configuration file (which is in YAML format) is loaded into an ansible variable
+([lines 27-30](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L27-L30))
+and in its structure, the `registries` and `insecure_registries` sections are expanded with the newly added items
+([lines 56-76](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L56-L76)).
+The new content is then saved into the original file
+([lines 78-82](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L78-L82))
+and docker is restarted.
+
+Example usage:
+```
+ansible-playbook -i <inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml --extra-vars '{"registries": "reg1", "insecure_registries": ["ins_reg1","ins_reg2"]}'
+```
+
+### Adding extra CAs to the trust chain
+
+This playbook is also located in the [custom-actions](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions) directory.
+It copies passed CAs to the trust chain location and updates the trust chain on each selected host.
+
+Example usage:
+```
+ansible-playbook -i <inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/add-cas.yml --extra-vars '{"ca_files": [<absolute path to ca1 file>, <absolute path to ca2 file>]}'
+```
+
+Please consider contributing your custom playbook back to openshift-ansible-contrib!
+
+A library of custom post-provision actions exists in `openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions`. Playbooks include:
+
+* [add-yum-repos.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml): adds a list of custom yum repositories to every node in the cluster
+* [add-rhn-pools.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml): attaches a list of additional RHN pools to every node in the cluster
+* [add-docker-registry.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml): adds a list of docker registries to the docker configuration on every node in the cluster
+* [add-cas.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-cas.yml): adds a list of CAs to the trust chain on every node in the cluster
+
+
+## Install OpenShift
+
+Once provisioning succeeds, you can install OpenShift by running:
+
+ ansible-playbook openshift-ansible/playbooks/byo/config.yml
+
+## Access UI
+
+The OpenShift UI may be accessed via the first master node's FQDN, on port 8443.
+
+When using a bastion, you may want to make an SSH tunnel from your control
+node to access the UI at `https://localhost:8443`, with this inventory
+variable:
+
+ openshift_ui_ssh_tunnel: True
+
+Note that this requires sudo rights on the ansible control node and an absolute path
+for the `openstack_private_ssh_key`. You should also update the control node's
+`/etc/hosts`:
+
+ 127.0.0.1 master-0.openshift.example.com
+
+In order to access the UI, the ssh-tunnel service will be created and
+started on the control node. Make sure to remove these changes and the
+service manually when they are no longer needed.
+
+## Scale Deployment up/down
+
+### Scaling up
+
+One can scale up the number of application nodes by executing the ansible playbook
+`openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml`.
+This process can be done even if there is currently no deployment available.
+The `increment_by` variable is used to specify by how much the deployment should
+be scaled up (if none exists, it serves as a target number of application nodes).
+The path to the `openshift-ansible` directory can be customised via the
+`openshift_ansible_dir` variable. Its value must be an absolute path to
+`openshift-ansible` and must not end with a trailing '/'.
+
+Usage:
+
+```
+ansible-playbook -i <path to inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml [-e increment_by=<number>] [-e openshift_ansible_dir=<path to openshift-ansible>]
+```
+
+Note: This playbook works only without a bastion node (`openstack_use_bastion: False`).
diff --git a/playbooks/provisioning/openstack/ansible.cfg b/playbooks/provisioning/openstack/ansible.cfg
new file mode 100644
index 000000000..a21f023ea
--- /dev/null
+++ b/playbooks/provisioning/openstack/ansible.cfg
@@ -0,0 +1,24 @@
+# config file for ansible -- http://ansible.com/
+# ==============================================
+[defaults]
+ansible_user = openshift
+forks = 50
+# work around privilege escalation timeouts in ansible
+timeout = 30
+host_key_checking = false
+inventory = inventory
+inventory_ignore_extensions = secrets.py, .pyc, .cfg, .crt
+gathering = smart
+retry_files_enabled = false
+fact_caching = jsonfile
+fact_caching_connection = .ansible/cached_facts
+fact_caching_timeout = 900
+stdout_callback = skippy
+callback_whitelist = profile_tasks
+lookup_plugins = openshift-ansible-contrib/lookup_plugins
+
+
+[ssh_connection]
+ssh_args = -o ControlMaster=auto -o ControlPersist=900s -o GSSAPIAuthentication=no
+control_path = /var/tmp/%%h-%%r
+pipelining = True
diff --git a/playbooks/provisioning/openstack/custom-actions/add-cas.yml b/playbooks/provisioning/openstack/custom-actions/add-cas.yml
new file mode 100644
index 000000000..b2c195f91
--- /dev/null
+++ b/playbooks/provisioning/openstack/custom-actions/add-cas.yml
@@ -0,0 +1,13 @@
+---
+- hosts: cluster_hosts
+ become: true
+ vars:
+ ca_files: []
+ tasks:
+ - name: Copy CAs to the trusted CAs location
+ with_items: "{{ ca_files }}"
+ copy:
+ src: "{{ item }}"
+ dest: /etc/pki/ca-trust/source/anchors/
+ - name: Update trusted CAs
+ shell: 'update-ca-trust enable && update-ca-trust extract'
diff --git a/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml b/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml
new file mode 100644
index 000000000..e118a71dc
--- /dev/null
+++ b/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml
@@ -0,0 +1,90 @@
+---
+- hosts: OSEv3
+ become: true
+ vars:
+ registries: []
+ insecure_registries: []
+
+ tasks:
+ - name: Check if docker is even installed
+ command: docker
+
+ - name: Install atomic-registries package
+ yum:
+ name: atomic-registries
+ state: latest
+
+ - name: Get registry configuration file
+ register: file_result
+ stat:
+ path: /etc/containers/registries.conf
+
+ - name: Check if it exists
+ assert:
+ that: 'file_result.stat.exists'
+ msg: "Configuration file does not exist."
+
+ - name: Load configuration file
+ shell: cat /etc/containers/registries.conf
+ register: file_content
+
+ - name: Store file content into a variable
+ set_fact:
+ docker_conf: "{{ file_content.stdout | from_yaml }}"
+
+ - name: Make sure that docker file content is a dictionary
+ when: '(docker_conf is string) and (not docker_conf)'
+ set_fact:
+ docker_conf: {}
+
+ - name: Make sure that registries is a list
+ when: 'registries is string'
+ set_fact:
+ registries_list: [ "{{ registries }}" ]
+
+ - name: Make sure that insecure_registries is a list
+ when: 'insecure_registries is string'
+ set_fact:
+ insecure_registries_list: [ "{{ insecure_registries }}" ]
+
+ - name: Set default values if there are no registries defined
+ set_fact:
+ docker_conf_registries: "{{ [] if docker_conf['registries'] is not defined else docker_conf['registries'] }}"
+ docker_conf_insecure_registries: "{{ [] if docker_conf['insecure_registries'] is not defined else docker_conf['insecure_registries'] }}"
+
+ - name: Add other registries
+ when: 'registries_list is not defined'
+ register: registries_merge_result
+ set_fact:
+ docker_conf: "{{ docker_conf | combine({'registries': (docker_conf_registries + registries) | unique}, recursive=True) }}"
+
+ - name: Add other registries (if registries had to be converted)
+ when: 'registries_merge_result|skipped'
+ set_fact:
+ docker_conf: "{{ docker_conf | combine({'registries': (docker_conf_registries + registries_list) | unique}, recursive=True) }}"
+
+ - name: Add insecure registries
+ when: 'insecure_registries_list is not defined'
+ register: insecure_registries_merge_result
+ set_fact:
+ docker_conf: "{{ docker_conf | combine({'insecure_registries': (docker_conf_insecure_registries + insecure_registries) | unique }, recursive=True) }}"
+
+ - name: Add insecure registries (if insecure_registries had to be converted)
+ when: 'insecure_registries_merge_result|skipped'
+ set_fact:
+ docker_conf: "{{ docker_conf | combine({'insecure_registries': (docker_conf_insecure_registries + insecure_registries_list) | unique }, recursive=True) }}"
+
+ - name: Load variable back to file
+ copy:
+ content: "{{ docker_conf | to_yaml }}"
+ dest: /etc/containers/registries.conf
+
+ - name: Restart registries service
+ service:
+ name: registries
+ state: restarted
+
+ - name: Restart docker
+ service:
+ name: docker
+ state: restarted
diff --git a/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml b/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml
new file mode 100644
index 000000000..d17c1e335
--- /dev/null
+++ b/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml
@@ -0,0 +1,13 @@
+---
+- hosts: cluster_hosts
+ vars:
+ rhn_pools: []
+ tasks:
+ - name: Attach additional RHN pools
+ become: true
+ with_items: "{{ rhn_pools }}"
+ command: "/usr/bin/subscription-manager attach --pool={{ item }}"
+ register: attach_rhn_pools_result
+ until: attach_rhn_pools_result.rc == 0
+ retries: 10
+ delay: 1
diff --git a/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml b/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml
new file mode 100644
index 000000000..ffebcb642
--- /dev/null
+++ b/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml
@@ -0,0 +1,12 @@
+---
+- hosts: cluster_hosts
+ vars:
+ yum_repos: []
+ tasks:
+ # enable additional yum repos
+ - name: Add repository
+ yum_repository:
+ name: "{{ item.name }}"
+ description: "{{ item.description }}"
+ baseurl: "{{ item.baseurl }}"
+ with_items: "{{ yum_repos }}"
diff --git a/playbooks/provisioning/openstack/custom_flavor_check.yaml b/playbooks/provisioning/openstack/custom_flavor_check.yaml
new file mode 100644
index 000000000..e11874c28
--- /dev/null
+++ b/playbooks/provisioning/openstack/custom_flavor_check.yaml
@@ -0,0 +1,9 @@
+---
+- name: Try to get flavor facts
+ os_flavor_facts:
+ name: "{{ flavor }}"
+ register: flavor_result
+- name: Check that custom flavor is available
+ assert:
+ that: "flavor_result.ansible_facts.openstack_flavors"
+ msg: "Flavor {{ flavor }} is not available."
diff --git a/playbooks/provisioning/openstack/custom_image_check.yaml b/playbooks/provisioning/openstack/custom_image_check.yaml
new file mode 100644
index 000000000..452e1e4d8
--- /dev/null
+++ b/playbooks/provisioning/openstack/custom_image_check.yaml
@@ -0,0 +1,9 @@
+---
+- name: Try to get image facts
+ os_image_facts:
+ image: "{{ image }}"
+ register: image_result
+- name: Check that custom image is available
+ assert:
+ that: "image_result.ansible_facts.openstack_image"
+ msg: "Image {{ image }} is not available."
diff --git a/playbooks/provisioning/openstack/galaxy-requirements.yaml b/playbooks/provisioning/openstack/galaxy-requirements.yaml
new file mode 100644
index 000000000..1d745dcc3
--- /dev/null
+++ b/playbooks/provisioning/openstack/galaxy-requirements.yaml
@@ -0,0 +1,10 @@
+---
+# This is the Ansible Galaxy requirements file to pull in the correct roles
+
+# From 'infra-ansible'
+- src: https://github.com/redhat-cop/infra-ansible
+ version: master
+
+# From 'openshift-ansible'
+- src: https://github.com/openshift/openshift-ansible
+ version: master
diff --git a/playbooks/provisioning/openstack/net_vars_check.yaml b/playbooks/provisioning/openstack/net_vars_check.yaml
new file mode 100644
index 000000000..68afde415
--- /dev/null
+++ b/playbooks/provisioning/openstack/net_vars_check.yaml
@@ -0,0 +1,14 @@
+---
+- name: Check the provider network configuration
+ fail:
+ msg: "Flannel SDN requires a dedicated containers data network and can not work over a provider network"
+ when:
+ - openstack_provider_network_name is defined
+ - openstack_private_data_network_name is defined
+
+- name: Check the flannel network configuration
+ fail:
+ msg: "A dedicated containers data network is only supported with Flannel SDN"
+ when:
+ - openstack_private_data_network_name is defined
+ - not openshift_use_flannel|default(False)|bool
diff --git a/playbooks/provisioning/openstack/post-install.yml b/playbooks/provisioning/openstack/post-install.yml
new file mode 100644
index 000000000..417813e2a
--- /dev/null
+++ b/playbooks/provisioning/openstack/post-install.yml
@@ -0,0 +1,57 @@
+---
+- hosts: OSEv3
+ gather_facts: False
+ become: True
+ tasks:
+ - name: Save iptables rules to a backup file
+ when: openshift_use_flannel|default(False)|bool
+ shell: iptables-save > /etc/sysconfig/iptables.orig-$(date +%Y%m%d%H%M%S)
+
+# Enable iptables service on app nodes to persist custom rules (flannel SDN)
+# FIXME(bogdando) w/a https://bugzilla.redhat.com/show_bug.cgi?id=1490820
+- hosts: app
+ gather_facts: False
+ become: True
+ vars:
+ os_firewall_allow:
+ - service: dnsmasq tcp
+ port: 53/tcp
+ - service: dnsmasq udp
+ port: 53/udp
+ tasks:
+ - when: openshift_use_flannel|default(False)|bool
+ block:
+ - include_role:
+ name: openshift-ansible/roles/os_firewall
+ - include_role:
+ name: openshift-ansible/roles/lib_os_firewall
+ - name: set allow rules for dnsmasq
+ os_firewall_manage_iptables:
+ name: "{{ item.service }}"
+ action: add
+ protocol: "{{ item.port.split('/')[1] }}"
+ port: "{{ item.port.split('/')[0] }}"
+ with_items: "{{ os_firewall_allow }}"
+
+- hosts: OSEv3
+ gather_facts: False
+ become: True
+ tasks:
+ - name: Apply post-install iptables hacks for Flannel SDN (the best effort)
+ when: openshift_use_flannel|default(False)|bool
+ block:
+    - name: set allow/masquerade rules for flannel/docker
+ shell: >-
+ (iptables-save | grep -q custom-flannel-docker-1) ||
+ iptables -A DOCKER -w
+ -p all -j ACCEPT
+ -m comment --comment "custom-flannel-docker-1";
+ (iptables-save | grep -q custom-flannel-docker-2) ||
+ iptables -t nat -A POSTROUTING -w
+ -o {{flannel_interface|default('eth1')}}
+ -m comment --comment "custom-flannel-docker-2"
+ -j MASQUERADE
+
+ # NOTE(bogdando) the rules will not be restored, when iptables service unit is disabled & masked
+ - name: Persist in-memory iptables rules (w/o dynamic KUBE rules)
+ shell: iptables-save | grep -v KUBE > /etc/sysconfig/iptables
diff --git a/playbooks/provisioning/openstack/post-provision-openstack.yml b/playbooks/provisioning/openstack/post-provision-openstack.yml
new file mode 100644
index 000000000..e460fbf12
--- /dev/null
+++ b/playbooks/provisioning/openstack/post-provision-openstack.yml
@@ -0,0 +1,118 @@
+---
+- hosts: cluster_hosts
+  name: Wait for the nodes to come up
+ become: False
+ gather_facts: False
+ tasks:
+ - when: not openstack_use_bastion|default(False)|bool
+ wait_for_connection:
+ - when: openstack_use_bastion|default(False)|bool
+ delegate_to: bastion
+ wait_for_connection:
+
+- hosts: cluster_hosts
+ gather_facts: True
+ tasks:
+ - name: Debug hostvar
+ debug:
+ msg: "{{ hostvars[inventory_hostname] }}"
+ verbosity: 2
+
+- name: OpenShift Pre-Requisites (part 1)
+ include: pre-install.yml
+
+- name: Assign hostnames
+ hosts: cluster_hosts
+ gather_facts: False
+ become: true
+ roles:
+ - role: hostnames
+
+- name: Subscribe DNS Host to allow for configuration below
+ hosts: dns
+ gather_facts: False
+ become: true
+ roles:
+ - role: subscription-manager
+ when: hostvars.localhost.rhsm_register|default(False)
+ tags: 'subscription-manager'
+
+- name: Determine which DNS server(s) to use for our generated records
+ hosts: localhost
+ gather_facts: False
+ become: False
+ roles:
+ - dns-server-detect
+
+- name: Build the DNS Server Views and Configure DNS Server(s)
+ hosts: dns
+ gather_facts: False
+ become: true
+ roles:
+ - role: dns-views
+ - role: infra-ansible/roles/dns-server
+
+- name: Build and process DNS Records
+ hosts: localhost
+ gather_facts: True
+ become: False
+ roles:
+ - role: dns-records
+ use_bastion: "{{ openstack_use_bastion|default(False)|bool }}"
+ - role: infra-ansible/roles/dns
+
+- name: Switch the stack subnet to the configured private DNS server
+ hosts: localhost
+ gather_facts: False
+ become: False
+ vars_files:
+ - stack_params.yaml
+ tasks:
+ - include_role:
+ name: openstack-stack
+ tasks_from: subnet_update_dns_servers
+
+- name: OpenShift Pre-Requisites (part 2)
+ hosts: OSEv3
+ gather_facts: true
+ become: true
+ vars:
+ interface: "{{ flannel_interface|default('eth1') }}"
+ interface_file: /etc/sysconfig/network-scripts/ifcfg-{{ interface }}
+ interface_config:
+ DEVICE: "{{ interface }}"
+ TYPE: Ethernet
+ BOOTPROTO: dhcp
+ ONBOOT: 'yes'
+      DEFROUTE: 'no'
+ PEERDNS: 'no'
+ pre_tasks:
+ - name: "Include DNS configuration to ensure proper name resolution"
+ lineinfile:
+ state: present
+ dest: /etc/sysconfig/network
+        regexp: "^IP4_NAMESERVERS="
+ line: "IP4_NAMESERVERS={{ hostvars['localhost'].private_dns_server }}"
+ - name: "Configure the flannel interface options"
+ when: openshift_use_flannel|default(False)|bool
+ block:
+ - file:
+ dest: "{{ interface_file }}"
+ state: touch
+ mode: 0644
+ owner: root
+ group: root
+ - lineinfile:
+ state: present
+ dest: "{{ interface_file }}"
+ regexp: "{{ item.key }}="
+ line: "{{ item.key }}={{ item.value }}"
+ with_dict: "{{ interface_config }}"
+ roles:
+ - node-network-manager
+
+- include: prepare-and-format-cinder-volume.yaml
+ when: >
+ prepare_and_format_registry_volume|default(False) or
+ (cinder_registry_volume is defined and
+ cinder_registry_volume.changed|default(False))
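
The lineinfile loop in the pre_tasks above writes one KEY=value line per interface_config entry into the ifcfg file. A debug-only sketch showing the exact lines it would produce for the default eth1 interface, with the values copied from the play above (illustration only):

    - name: Show the ifcfg lines the interface_config loop writes
      debug:
        msg: "{{ item.key }}={{ item.value }}"
      with_dict:
        DEVICE: eth1
        TYPE: Ethernet
        BOOTPROTO: dhcp
        ONBOOT: 'yes'
        DEFROUTE: 'no'
        PEERDNS: 'no'
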
diff --git a/playbooks/provisioning/openstack/pre-install.yml b/playbooks/provisioning/openstack/pre-install.yml
new file mode 100644
index 000000000..45e9005cc
--- /dev/null
+++ b/playbooks/provisioning/openstack/pre-install.yml
@@ -0,0 +1,19 @@
+---
+###############################
+# OpenShift Pre-Requisites
+
+# - subscribe hosts
+# - prepare docker
+# - other prep (install additional packages, etc.)
+#
+- hosts: OSEv3
+ become: true
+ roles:
+ - { role: subscription-manager, when: hostvars.localhost.rhsm_register|default(False), tags: 'subscription-manager', ansible_sudo: true }
+ - { role: docker, tags: 'docker' }
+ - { role: openshift-prep, tags: 'openshift-prep' }
+
+- hosts: localhost:cluster_hosts
+ become: False
+ tasks:
+ - include: pre_tasks.yml
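
The inline role dicts above work, but `ansible_sudo` is a legacy alias for privilege escalation, and the play already sets `become: true`. A hedged sketch of the same roles section in block form with the newer `become` keyword:

    roles:
      - role: subscription-manager
        become: true
        when: hostvars.localhost.rhsm_register|default(False)
        tags: subscription-manager
      - role: docker
        tags: docker
      - role: openshift-prep
        tags: openshift-prep
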
diff --git a/playbooks/provisioning/openstack/pre_tasks.yml b/playbooks/provisioning/openstack/pre_tasks.yml
new file mode 100644
index 000000000..11fe2dd84
--- /dev/null
+++ b/playbooks/provisioning/openstack/pre_tasks.yml
@@ -0,0 +1,53 @@
+---
+- name: Generate Environment ID
+ set_fact:
+ env_random_id: "{{ ansible_date_time.epoch }}"
+ run_once: true
+ delegate_to: localhost
+
+- name: Set default Environment ID
+ set_fact:
+ default_env_id: "openshift-{{ lookup('env','OS_USERNAME') }}-{{ env_random_id }}"
+ delegate_to: localhost
+
+- name: Set common facts
+ set_fact:
+ env_id: "{{ env_id | default(default_env_id) }}"
+ delegate_to: localhost
+
+- name: Update the DNS domain to include env_id (if not empty)
+ set_fact:
+ full_dns_domain: "{{ (env_id|trim == '') | ternary(public_dns_domain, env_id + '.' + public_dns_domain) }}"
+ delegate_to: localhost
+
+- name: Set the APP domain for OpenShift use
+ set_fact:
+ openshift_app_domain: "{{ openshift_app_domain | default('apps') }}"
+ delegate_to: localhost
+
+- name: Set the default app domain for routing purposes
+ set_fact:
+ openshift_master_default_subdomain: "{{ openshift_app_domain }}.{{ full_dns_domain }}"
+ delegate_to: localhost
+ when:
+ - openshift_master_default_subdomain is undefined
+
+# Check that openshift_cluster_node_labels has regions defined for all groups
+# NOTE(kpilatov): if node labels are to be enabled for more groups,
+# this check needs to be modified as well
+- name: Set openshift_cluster_node_labels if undefined (should not happen)
+ set_fact:
+ openshift_cluster_node_labels: {'app': {'region': 'primary'}, 'infra': {'region': 'infra'}}
+ when: openshift_cluster_node_labels is not defined
+
+- name: Set openshift_cluster_node_labels for the infra group
+ set_fact:
+ openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'infra': {'region': 'infra'}}, recursive=True) }}"
+
+- name: Set openshift_cluster_node_labels for the app group
+ set_fact:
+ openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'app': {'region': 'primary'}}, recursive=True) }}"
+
+- name: Set openshift_cluster_node_labels for auto-scaling app nodes
+ set_fact:
+ openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'app': {'autoscaling': 'app'}}, recursive=True) }}"
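
The combine() calls above depend on recursive merging: with recursive=True the nested dicts are merged key by key, whereas without it each top-level key would be replaced wholesale (adding 'autoscaling' would drop the 'region' label). A standalone debug sketch of the behavior, not part of this patch:

    # Prints {'app': {'region': 'primary', 'autoscaling': 'app'}}
    - name: Illustrate recursive combine
      debug:
        msg: >-
          {{ {'app': {'region': 'primary'}}
             | combine({'app': {'autoscaling': 'app'}}, recursive=True) }}
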
diff --git a/playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml b/playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml
new file mode 100644
index 000000000..30e094459
--- /dev/null
+++ b/playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml
@@ -0,0 +1,67 @@
+---
+- hosts: localhost
+ gather_facts: False
+ become: False
+ tasks:
+ - set_fact:
+ cinder_volume: "{{ hostvars[groups.masters[0]].openshift_hosted_registry_storage_openstack_volumeID }}"
+ cinder_fs: "{{ hostvars[groups.masters[0]].openshift_hosted_registry_storage_openstack_filesystem }}"
+
+ - name: Attach the volume to the VM
+ os_server_volume:
+ state: present
+ server: "{{ groups['masters'][0] }}"
+ volume: "{{ cinder_volume }}"
+ register: volume_attachment
+
+ - set_fact:
+ attached_device: >-
+ {{ volume_attachment['attachments']|json_query("[?volume_id=='" + cinder_volume + "'].device | [0]") }}
+
+ - delegate_to: "{{ groups['masters'][0] }}"
+ block:
+ - name: Wait for the device to appear
+ wait_for: path={{ attached_device }}
+
+ - name: Create a temp directory for mounting the volume
+ tempfile:
+ prefix: cinder-volume
+ state: directory
+ register: cinder_mount_dir
+
+ - name: Format the device
+ filesystem:
+ fstype: "{{ cinder_fs }}"
+ dev: "{{ attached_device }}"
+
+ - name: Mount the device
+ mount:
+ name: "{{ cinder_mount_dir.path }}"
+ src: "{{ attached_device }}"
+ state: mounted
+ fstype: "{{ cinder_fs }}"
+
+ - name: Change mode on the filesystem
+ file:
+ path: "{{ cinder_mount_dir.path }}"
+ state: directory
+ recurse: true
+ mode: 0777
+
+ - name: Unmount the device
+ mount:
+ name: "{{ cinder_mount_dir.path }}"
+ src: "{{ attached_device }}"
+ state: absent
+ fstype: "{{ cinder_fs }}"
+
+ - name: Delete the temp directory
+ file:
+ name: "{{ cinder_mount_dir.path }}"
+ state: absent
+
+ - name: Detach the volume from the VM
+ os_server_volume:
+ state: absent
+ server: "{{ groups['masters'][0] }}"
+ volume: "{{ cinder_volume }}"
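
The attached_device fact above is extracted with a JMESPath query over the attachments list returned by os_server_volume. A self-contained debug sketch of the same query against made-up sample data (requires the jmespath module checked in prerequisites.yml):

    - name: Illustrate the attachments device lookup (sample data only)
      vars:
        sample_attachments:
          - volume_id: abc123
            device: /dev/vdb
      debug:
        msg: >-
          {{ sample_attachments | json_query("[?volume_id=='abc123'].device | [0]") }}

This prints /dev/vdb, the device path of the matching volume.
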
diff --git a/playbooks/provisioning/openstack/prerequisites.yml b/playbooks/provisioning/openstack/prerequisites.yml
new file mode 100644
index 000000000..11a31411e
--- /dev/null
+++ b/playbooks/provisioning/openstack/prerequisites.yml
@@ -0,0 +1,123 @@
+---
+- hosts: localhost
+ tasks:
+
+ # Sanity check of inventory variables
+ - include: net_vars_check.yaml
+
+ # Check ansible
+ - name: Check Ansible version
+ assert:
+ that: >
+ (ansible_version.major == 2 and ansible_version.minor >= 3) or
+ (ansible_version.major > 2)
+ msg: "Ansible version must be at least 2.3"
+
+ # Check shade
+ - name: Try to import python module shade
+ command: python -c "import shade"
+ ignore_errors: yes
+ register: shade_result
+ - name: Check if shade is installed
+ assert:
+ that: 'shade_result.rc == 0'
+ msg: "Python module shade is not installed"
+
+ # Check jmespath
+    - name: Try to import python module jmespath
+ command: python -c "import jmespath"
+ ignore_errors: yes
+ register: jmespath_result
+ - name: Check if jmespath is installed
+ assert:
+ that: 'jmespath_result.rc == 0'
+ msg: "Python module jmespath is not installed"
+
+ # Check python-dns
+ - name: Try to import python DNS module
+ command: python -c "import dns"
+ ignore_errors: yes
+ register: pythondns_result
+ - name: Check if python-dns is installed
+ assert:
+ that: 'pythondns_result.rc == 0'
+ msg: "Python module python-dns is not installed"
+
+ # Check jinja2
+ - name: Try to import jinja2 module
+ command: python -c "import jinja2"
+ ignore_errors: yes
+ register: jinja_result
+ - name: Check if jinja2 is installed
+ assert:
+ that: 'jinja_result.rc == 0'
+ msg: "Python module jinja2 is not installed"
+
+ # Check Glance image
+ - name: Try to get image facts
+ os_image_facts:
+ image: "{{ openstack_default_image_name }}"
+ register: image_result
+ - name: Check that image is available
+ assert:
+ that: "image_result.ansible_facts.openstack_image"
+ msg: "Image {{ openstack_default_image_name }} is not available"
+
+ # Check network name
+ - name: Try to get network facts
+ os_networks_facts:
+ name: "{{ openstack_external_network_name }}"
+ register: network_result
+ when: not openstack_provider_network_name|default(None)
+ - name: Check that network is available
+ assert:
+ that: "network_result.ansible_facts.openstack_networks"
+ msg: "Network {{ openstack_external_network_name }} is not available"
+ when: not openstack_provider_network_name|default(None)
+
+ # Check keypair
+ # TODO kpilatov: there is no Ansible module for getting OS keypairs
+ # (os_keypair is not suitable for this)
+ # this method does not force python-openstackclient dependency
+ - name: Try to show keypair
+ command: >
+ python -c 'import shade; cloud = shade.openstack_cloud();
+ exit(cloud.get_keypair("{{ openstack_ssh_public_key }}") is None)'
+ ignore_errors: yes
+ register: key_result
+ - name: Check that keypair is available
+ assert:
+ that: 'key_result.rc == 0'
+ msg: "Keypair {{ openstack_ssh_public_key }} is not available"
+
+# Check that custom images and flavors exist
+- hosts: localhost
+
+ # Include variables that will be used by heat
+ vars_files:
+ - stack_params.yaml
+
+ tasks:
+ # Check that custom images are available
+ - include: custom_image_check.yaml
+ with_items:
+ - "{{ openstack_master_image }}"
+ - "{{ openstack_infra_image }}"
+ - "{{ openstack_node_image }}"
+ - "{{ openstack_lb_image }}"
+ - "{{ openstack_etcd_image }}"
+ - "{{ openstack_dns_image }}"
+ loop_control:
+ loop_var: image
+
+ # Check that custom flavors are available
+ - include: custom_flavor_check.yaml
+ with_items:
+ - "{{ master_flavor }}"
+ - "{{ infra_flavor }}"
+ - "{{ node_flavor }}"
+ - "{{ lb_flavor }}"
+ - "{{ etcd_flavor }}"
+ - "{{ dns_flavor }}"
+ loop_control:
+ loop_var: flavor
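
The four import checks above follow an identical try-then-assert pattern; an equivalent, more compact sketch collapses them into a single loop. The trade-off is that a missing module fails the task directly instead of producing the friendlier assert message:

    - name: Check that the required python modules are importable
      command: python -c "import {{ item }}"
      changed_when: false
      with_items:
        - shade
        - jmespath
        - dns
        - jinja2
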
diff --git a/playbooks/provisioning/openstack/provision-openstack.yml b/playbooks/provisioning/openstack/provision-openstack.yml
new file mode 100644
index 000000000..bf424676d
--- /dev/null
+++ b/playbooks/provisioning/openstack/provision-openstack.yml
@@ -0,0 +1,35 @@
+---
+- hosts: localhost
+ gather_facts: True
+ become: False
+ vars_files:
+ - stack_params.yaml
+ pre_tasks:
+ - include: pre_tasks.yml
+ roles:
+ - role: openstack-stack
+ - role: openstack-create-cinder-registry
+ when:
+ - cinder_hosted_registry_name is defined
+ - cinder_hosted_registry_size_gb is defined
+ - role: static_inventory
+ when: openstack_inventory|default('static') == 'static'
+ inventory_path: "{{ openstack_inventory_path|default(inventory_dir) }}"
+ private_ssh_key: "{{ openstack_private_ssh_key|default('') }}"
+ ssh_config_path: "{{ openstack_ssh_config_path|default('/tmp/ssh.config.openshift.ansible' + '.' + stack_name) }}"
+ ssh_user: "{{ ansible_user }}"
+
+- name: Refresh Server inventory or exit to apply SSH config
+ hosts: localhost
+ connection: local
+ become: False
+ gather_facts: False
+ tasks:
+ - name: Exit to apply SSH config for a bastion
+ meta: end_play
+ when: openstack_use_bastion|default(False)|bool
+ - name: Refresh Server inventory
+ meta: refresh_inventory
+
+- include: post-provision-openstack.yml
+ when: not openstack_use_bastion|default(False)|bool
diff --git a/playbooks/provisioning/openstack/provision.yaml b/playbooks/provisioning/openstack/provision.yaml
new file mode 100644
index 000000000..474c9c803
--- /dev/null
+++ b/playbooks/provisioning/openstack/provision.yaml
@@ -0,0 +1,4 @@
+---
+- include: "prerequisites.yml"
+
+- include: "provision-openstack.yml"
diff --git a/playbooks/provisioning/openstack/roles b/playbooks/provisioning/openstack/roles
new file mode 120000
index 000000000..e2b799b9d
--- /dev/null
+++ b/playbooks/provisioning/openstack/roles
@@ -0,0 +1 @@
+../../../roles/ \ No newline at end of file
diff --git a/playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml b/playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml
new file mode 100644
index 000000000..949a323a7
--- /dev/null
+++ b/playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml
@@ -0,0 +1,59 @@
+---
+openshift_deployment_type: origin
+#openshift_deployment_type: openshift-enterprise
+#openshift_release: v3.5
+openshift_master_default_subdomain: "apps.{{ env_id }}.{{ public_dns_domain }}"
+
+openshift_master_cluster_method: native
+openshift_master_cluster_hostname: "{{ groups.lb.0|default(groups.masters.0) }}"
+openshift_master_cluster_public_hostname: "{{ groups.lb.0|default(groups.masters.0) }}"
+
+osm_default_node_selector: 'region=primary'
+
+openshift_hosted_router_wait: True
+openshift_hosted_registry_wait: True
+
+## Openstack credentials
+#openshift_cloudprovider_kind: openstack
+#openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
+#openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
+#openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
+#openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}"
+#openshift_cloudprovider_openstack_region: "{{ lookup('env', 'OS_REGION_NAME') }}"
+
+
+## Use Cinder volume for Openshift registry:
+#openshift_hosted_registry_storage_kind: openstack
+#openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
+#openshift_hosted_registry_storage_openstack_filesystem: xfs
+
+## NOTE(shadower): This won't work until the openshift-ansible issue #5657 is fixed:
+## https://github.com/openshift/openshift-ansible/issues/5657
+## If you're using the `cinder_hosted_registry_name` option from
+## `all.yml`, uncomment these lines:
+#openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', cinder_hosted_registry_name).id }}"
+#openshift_hosted_registry_storage_volume_size: "{{ cinder_hosted_registry_size_gb }}Gi"
+
+## If you're using a Cinder volume you've set up yourself, uncomment these lines:
+#openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05
+#openshift_hosted_registry_storage_volume_size: 10Gi
+
+
+# NOTE(shadower): the hostname check seems to always fail because the
+# host's floating IP address doesn't match the address received from
+# inside the host.
+openshift_override_hostname_check: true
+
+# For PoCs or demo environments using instances smaller than the officially
+# recommended values for RAM and disk, uncomment the line below.
+#openshift_disable_check: disk_availability,memory_availability
+
+# NOTE(shadower): Always switch to root on the OSEv3 nodes.
+# openshift-ansible requires an explicit `become`.
+ansible_become: true
+
+# # Flannel networking
+#osm_cluster_network_cidr: 10.128.0.0/14
+#openshift_use_openshift_sdn: false
+#openshift_use_flannel: true
+#flannel_interface: eth1
diff --git a/playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml b/playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml
new file mode 100644
index 000000000..83289307d
--- /dev/null
+++ b/playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml
@@ -0,0 +1,166 @@
+---
+env_id: "openshift"
+public_dns_domain: "example.com"
+public_dns_nameservers: []
+
+# # Used Hostnames
+# # - set custom hostnames for roles by uncommenting corresponding lines
+#openstack_master_hostname: "master"
+#openstack_infra_hostname: "infra-node"
+#openstack_node_hostname: "app-node"
+#openstack_lb_hostname: "lb"
+#openstack_etcd_hostname: "etcd"
+#openstack_dns_hostname: "dns"
+
+openstack_ssh_public_key: "openshift"
+openstack_external_network_name: "public"
+#openstack_private_network_name: "openshift-ansible-{{ stack_name }}-net"
+# # A dedicated Neutron network name for the containers' data network
+# # Configures the data network to be separate from openstack_private_network_name
+# # NOTE: this is currently only supported with the Flannel SDN
+#openstack_private_data_network_name: "openshift-ansible-{{ stack_name }}-data-net"
+
+## If you want to use a provider network, set its name here.
+## NOTE: the `openstack_external_network_name` and
+## `openstack_private_network_name` options will be ignored when using a
+## provider network.
+#openstack_provider_network_name: "provider"
+
+# # Used Images
+# # - set specific images for roles by uncommenting corresponding lines
+# # - note: do not remove openstack_default_image_name definition
+#openstack_master_image_name: "centos7"
+#openstack_infra_image_name: "centos7"
+#openstack_node_image_name: "centos7"
+#openstack_lb_image_name: "centos7"
+#openstack_etcd_image_name: "centos7"
+#openstack_dns_image_name: "centos7"
+openstack_default_image_name: "centos7"
+
+openstack_num_masters: 1
+openstack_num_infra: 1
+openstack_num_nodes: 2
+
+# # Used Flavors
+# # - set specific flavors for roles by uncommenting corresponding lines
+# # - note: do not remove openstack_default_flavor definition
+#openstack_master_flavor: "m1.medium"
+#openstack_infra_flavor: "m1.medium"
+#openstack_node_flavor: "m1.medium"
+#openstack_lb_flavor: "m1.medium"
+#openstack_etcd_flavor: "m1.medium"
+#openstack_dns_flavor: "m1.medium"
+openstack_default_flavor: "m1.medium"
+
+# # Numerical index of nodes to remove
+# openstack_nodes_to_remove: []
+
+# # Docker volume size
+# # - set specific volume size for roles by uncommenting corresponding lines
+# # - note: do not remove docker_default_volume_size definition
+#docker_master_volume_size: "15"
+#docker_infra_volume_size: "15"
+#docker_node_volume_size: "15"
+#docker_etcd_volume_size: "2"
+#docker_dns_volume_size: "1"
+#docker_lb_volume_size: "5"
+docker_volume_size: "15"
+
+## Specify server group policies for master and infra nodes. Nova must be configured to
+## enable these policies. 'anti-affinity' will ensure that each VM is launched on a
+## different physical host.
+#openstack_master_server_group_policies: [anti-affinity]
+#openstack_infra_server_group_policies: [anti-affinity]
+
+## Create a Cinder volume and use it for the OpenShift registry.
+## NOTE: the openstack credentials and hosted registry options must be set in OSEv3.yml!
+#cinder_hosted_registry_name: cinder-registry
+#cinder_hosted_registry_size_gb: 10
+
+## Set up a filesystem on the cinder volume specified in `OSEv3.yml`.
+## You need to specify the file system and volume ID in OSEv3 via
+## `openshift_hosted_registry_storage_openstack_filesystem` and
+## `openshift_hosted_registry_storage_openstack_volumeID`.
+## WARNING: This will delete any data on the volume!
+#prepare_and_format_registry_volume: False
+
+openstack_subnet_prefix: "192.168.99"
+
+## Red Hat subscription defaults to false, which means we will not attempt to
+## subscribe the nodes
+#rhsm_register: False
+
+# # Using Red Hat Satellite:
+#rhsm_register: True
+#rhsm_satellite: 'sat-6.example.com'
+#rhsm_org: 'OPENSHIFT_ORG'
+#rhsm_activationkey: '<activation-key>'
+
+# # Or using RHN username, password and optionally pool:
+#rhsm_register: True
+#rhsm_username: '<username>'
+#rhsm_password: '<password>'
+#rhsm_pool: '<pool id>'
+
+#rhsm_repos:
+# - "rhel-7-server-rpms"
+# - "rhel-7-server-ose-3.5-rpms"
+# - "rhel-7-server-extras-rpms"
+# - "rhel-7-fast-datapath-rpms"
+
+
+# # Roll-your-own DNS
+#openstack_num_dns: 0
+#external_nsupdate_keys:
+# public:
+# key_secret: 'SKqKNdpfk7llKxZ57bbxUnUDobaaJp9t8CjXLJPl+fRI5mPcSBuxTAyvJPa6Y9R7vUg9DwCy/6WTpgLNqnV4Hg=='
+# key_algorithm: 'hmac-md5'
+# server: '192.168.1.1'
+# private:
+# key_secret: 'kVE2bVTgZjrdJipxPhID8BEZmbHD8cExlVPR+zbFpW6la8kL5wpXiwOh8q5AAosXQI5t95UXwq3Inx8QT58duw=='
+# key_algorithm: 'hmac-md5'
+# server: '192.168.1.2'
+
+# # Customize DNS server security options
+#named_public_recursion: 'no'
+#named_private_recursion: 'yes'
+
+
+# NOTE(shadower): Do not change this value. The Ansible user is currently
+# hardcoded to `openshift`.
+ansible_user: openshift
+
+# # Use a single security group for a cluster (default: false)
+#openstack_flat_secgrp: false
+
+# # OpenStack inventory type and the cluster nodes' access pattern.
+# # Defaults to 'static'.
+# # Use 'dynamic' to access cluster nodes directly, via floating IPs
+# # and a dynamic inventory script such as openstack.py.
+#openstack_inventory: static
+# # The path to checkpoint the static inventory from the in-memory one
+#openstack_inventory_path: ../../../../inventory
+
+# # Use bastion node to access cluster nodes (Defaults to False).
+# # Requires a static inventory.
+#openstack_use_bastion: False
+#bastion_ingress_cidr: "{{openstack_subnet_prefix}}.0/24"
+#
+# # The Nova key-pair's private SSH key to access inventory nodes
+#openstack_private_ssh_key: ~/.ssh/openshift
+# # The path for the SSH config to access all nodes
+#openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.{{ env_id }}.{{ public_dns_domain }}
+
+
+# If you want to use the VM storage instead of Cinder volumes, set this to `true`.
+# NOTE: this is for testing only! Your data will be gone once the VM disappears!
+# ephemeral_volumes: false
+
+# # OpenShift node labels
+# # - in order to customise node labels for app and/or infra group, set the
+# # openshift_cluster_node_labels variable
+#openshift_cluster_node_labels:
+# app:
+# region: primary
+# infra:
+# region: infra
diff --git a/playbooks/provisioning/openstack/sample-inventory/inventory.py b/playbooks/provisioning/openstack/sample-inventory/inventory.py
new file mode 100755
index 000000000..6a1b74b3d
--- /dev/null
+++ b/playbooks/provisioning/openstack/sample-inventory/inventory.py
@@ -0,0 +1,88 @@
+#!/usr/bin/env python
+
+from __future__ import print_function
+
+import json
+
+import shade
+
+
+if __name__ == '__main__':
+ cloud = shade.openstack_cloud()
+
+ inventory = {}
+
+ # TODO(shadower): filter the servers based on the `OPENSHIFT_CLUSTER`
+ # environment variable.
+ cluster_hosts = [
+ server for server in cloud.list_servers()
+ if 'metadata' in server and 'clusterid' in server.metadata]
+
+ masters = [server.name for server in cluster_hosts
+ if server.metadata['host-type'] == 'master']
+
+ etcd = [server.name for server in cluster_hosts
+ if server.metadata['host-type'] == 'etcd']
+ if not etcd:
+ etcd = masters
+
+ infra_hosts = [server.name for server in cluster_hosts
+ if server.metadata['host-type'] == 'node' and
+ server.metadata['sub-host-type'] == 'infra']
+
+ app = [server.name for server in cluster_hosts
+ if server.metadata['host-type'] == 'node' and
+ server.metadata['sub-host-type'] == 'app']
+
+ nodes = list(set(masters + infra_hosts + app))
+
+ dns = [server.name for server in cluster_hosts
+ if server.metadata['host-type'] == 'dns']
+
+ lb = [server.name for server in cluster_hosts
+ if server.metadata['host-type'] == 'lb']
+
+ osev3 = list(set(nodes + etcd + lb))
+
+ groups = [server.metadata.group for server in cluster_hosts
+ if 'group' in server.metadata]
+
+ inventory['cluster_hosts'] = {'hosts': [s.name for s in cluster_hosts]}
+ inventory['OSEv3'] = {'hosts': osev3}
+ inventory['masters'] = {'hosts': masters}
+ inventory['etcd'] = {'hosts': etcd}
+ inventory['nodes'] = {'hosts': nodes}
+ inventory['infra_hosts'] = {'hosts': infra_hosts}
+ inventory['app'] = {'hosts': app}
+ inventory['dns'] = {'hosts': dns}
+ inventory['lb'] = {'hosts': lb}
+
+ for server in cluster_hosts:
+ if 'group' in server.metadata:
+ group = server.metadata.group
+ if group not in inventory:
+ inventory[group] = {'hosts': []}
+ inventory[group]['hosts'].append(server.name)
+
+ inventory['_meta'] = {'hostvars': {}}
+
+    for server in cluster_hosts:
+        # Prefer the floating IP for SSH and fall back to the private one
+        ssh_ip_address = server.public_v4 or server.private_v4
+        server_vars = {
+            'ansible_host': ssh_ip_address
+        }
+
+        if ssh_ip_address:
+            server_vars['public_v4'] = ssh_ip_address
+        # TODO(shadower): what about multiple networks?
+        if server.private_v4:
+            server_vars['private_v4'] = server.private_v4
+
+        node_labels = server.metadata.get('node_labels')
+        if node_labels:
+            server_vars['openshift_node_labels'] = node_labels
+
+        inventory['_meta']['hostvars'][server.name] = server_vars
+
+ print(json.dumps(inventory, indent=4, sort_keys=True))
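
The script builds all groups purely from Nova server metadata (clusterid, host-type, sub-host-type, group, node_labels), which the openstack-stack Heat templates set at provisioning time. A hand-rolled sketch, with hypothetical values and assuming OS_* credentials in the environment, of booting a server that this inventory would classify as an app node:

    - name: Boot a server carrying the metadata this inventory expects (sketch only)
      os_server:
        name: app-node-0.openshift.example.com
        image: centos7
        flavor: m1.medium
        key_name: openshift
        meta:
          clusterid: openshift.example.com
          host-type: node
          sub-host-type: app
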
diff --git a/playbooks/provisioning/openstack/scale-up.yaml b/playbooks/provisioning/openstack/scale-up.yaml
new file mode 100644
index 000000000..79fc09050
--- /dev/null
+++ b/playbooks/provisioning/openstack/scale-up.yaml
@@ -0,0 +1,75 @@
+---
+# Get the needed information about the current deployment
+- hosts: masters[0]
+ tasks:
+ - name: Get number of app nodes
+ shell: oc get nodes -l autoscaling=app --no-headers=true | wc -l
+ register: oc_old_num_nodes
+ - name: Get names of app nodes
+ shell: oc get nodes -l autoscaling=app --no-headers=true | cut -f1 -d " "
+ register: oc_old_app_nodes
+
+- hosts: localhost
+ tasks:
+    # Since both the number and the names of the app nodes are needed
+    # later on localhost, store them as localhost variables
+ - name: Store old number and names of app nodes locally (if there is an existing deployment)
+ when: '"masters" in groups'
+ register: set_fact_result
+ set_fact:
+ oc_old_num_nodes: "{{ hostvars[groups['masters'][0]]['oc_old_num_nodes'].stdout }}"
+ oc_old_app_nodes: "{{ hostvars[groups['masters'][0]]['oc_old_app_nodes'].stdout_lines }}"
+
+ - name: Set default values for old app nodes (if there is no existing deployment)
+ when: 'set_fact_result | skipped'
+ set_fact:
+ oc_old_num_nodes: 0
+ oc_old_app_nodes: []
+
+ # Set how many nodes are to be added (1 by default)
+ - name: Set how many nodes are to be added
+ set_fact:
+ increment_by: 1
+ - name: Check that the number corresponds to scaling up (not down)
+ assert:
+ that: 'increment_by | int >= 1'
+ msg: >
+ FAIL: The value of increment_by must be at least 1
+ (but it is {{ increment_by | int }}).
+ - name: Update openstack_num_nodes variable
+ set_fact:
+ openstack_num_nodes: "{{ oc_old_num_nodes | int + increment_by | int }}"
+
+# Run provision.yaml with a higher number of nodes to create the new app-node VM(s)
+- include: provision.yaml
+
+# Run config.yml to perform the OpenShift installation
+# The path to openshift-ansible can be customised:
+# - the value of openshift_ansible_dir has to be an absolute path
+# - the path must not end with a trailing '/'
+
+# Creating a new deployment via the full installation
+- include: "{{ openshift_ansible_dir }}/playbooks/byo/config.yml"
+ vars:
+ openshift_ansible_dir: ../../../../openshift-ansible
+ when: 'not groups["new_nodes"] | list'
+
+# Scaling up existing deployment
+- include: "{{ openshift_ansible_dir }}/playbooks/byo/openshift-node/scaleup.yml"
+ vars:
+ openshift_ansible_dir: ../../../../openshift-ansible
+ when: 'groups["new_nodes"] | list'
+
+# Post-verification: Verify new number of nodes
+- hosts: masters[0]
+ tasks:
+ - name: Get number of nodes
+ shell: oc get nodes -l autoscaling=app --no-headers=true | wc -l
+ register: oc_new_num_nodes
+ - name: Check that the actual result matches the defined value
+ assert:
+ that: 'oc_new_num_nodes.stdout | int == (hostvars["localhost"]["oc_old_num_nodes"] | int + hostvars["localhost"]["increment_by"] | int)'
+ msg: >
+ FAIL: Number of application nodes has not been increased accordingly
+ (it should be {{ hostvars["localhost"]["oc_old_num_nodes"] | int + hostvars["localhost"]["increment_by"] | int }}
+ but it is {{ oc_new_num_nodes.stdout | int }}).
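
The play above pins increment_by to 1 via set_fact. Because extra vars outrank facts in Ansible's variable precedence, passing `-e increment_by=2` still wins; a sketch that makes the default explicit rather than hardcoded:

    # 1 unless overridden, e.g. with: -e increment_by=2
    - name: Set how many nodes are to be added
      set_fact:
        increment_by: "{{ increment_by | default(1) }}"
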
diff --git a/playbooks/provisioning/openstack/stack_params.yaml b/playbooks/provisioning/openstack/stack_params.yaml
new file mode 100644
index 000000000..a4da31bfe
--- /dev/null
+++ b/playbooks/provisioning/openstack/stack_params.yaml
@@ -0,0 +1,49 @@
+---
+stack_name: "{{ env_id }}.{{ public_dns_domain }}"
+dns_domain: "{{ public_dns_domain }}"
+dns_nameservers: "{{ public_dns_nameservers }}"
+subnet_prefix: "{{ openstack_subnet_prefix }}"
+master_hostname: "{{ openstack_master_hostname | default('master') }}"
+infra_hostname: "{{ openstack_infra_hostname | default('infra-node') }}"
+node_hostname: "{{ openstack_node_hostname | default('app-node') }}"
+lb_hostname: "{{ openstack_lb_hostname | default('lb') }}"
+etcd_hostname: "{{ openstack_etcd_hostname | default('etcd') }}"
+dns_hostname: "{{ openstack_dns_hostname | default('dns') }}"
+ssh_public_key: "{{ openstack_ssh_public_key }}"
+openstack_image: "{{ openstack_default_image_name }}"
+lb_flavor: "{{ openstack_lb_flavor | default(openstack_default_flavor) }}"
+etcd_flavor: "{{ openstack_etcd_flavor | default(openstack_default_flavor) }}"
+master_flavor: "{{ openstack_master_flavor | default(openstack_default_flavor) }}"
+node_flavor: "{{ openstack_node_flavor | default(openstack_default_flavor) }}"
+infra_flavor: "{{ openstack_infra_flavor | default(openstack_default_flavor) }}"
+dns_flavor: "{{ openstack_dns_flavor | default(openstack_default_flavor) }}"
+openstack_master_image: "{{ openstack_master_image_name | default(openstack_default_image_name) }}"
+openstack_infra_image: "{{ openstack_infra_image_name | default(openstack_default_image_name) }}"
+openstack_node_image: "{{ openstack_node_image_name | default(openstack_default_image_name) }}"
+openstack_lb_image: "{{ openstack_lb_image_name | default(openstack_default_image_name) }}"
+openstack_etcd_image: "{{ openstack_etcd_image_name | default(openstack_default_image_name) }}"
+openstack_dns_image: "{{ openstack_dns_image_name | default(openstack_default_image_name) }}"
+openstack_private_network: >-
+ {% if openstack_provider_network_name | default(None) -%}
+ {{ openstack_provider_network_name }}
+ {%- else -%}
+  {{ openstack_private_network_name | default('openshift-ansible-' + stack_name + '-net') }}
+ {%- endif -%}
+provider_network: "{{ openstack_provider_network_name | default(None) }}"
+external_network: "{{ openstack_external_network_name | default(None) }}"
+num_etcd: "{{ openstack_num_etcd | default(0) }}"
+num_masters: "{{ openstack_num_masters }}"
+num_nodes: "{{ openstack_num_nodes }}"
+num_infra: "{{ openstack_num_infra }}"
+num_dns: "{{ openstack_num_dns | default(1) }}"
+master_server_group_policies: "{{ openstack_master_server_group_policies | default([]) | to_yaml }}"
+infra_server_group_policies: "{{ openstack_infra_server_group_policies | default([]) | to_yaml }}"
+master_volume_size: "{{ docker_master_volume_size | default(docker_volume_size) }}"
+infra_volume_size: "{{ docker_infra_volume_size | default(docker_volume_size) }}"
+node_volume_size: "{{ docker_node_volume_size | default(docker_volume_size) }}"
+etcd_volume_size: "{{ docker_etcd_volume_size | default('2') }}"
+dns_volume_size: "{{ docker_dns_volume_size | default('1') }}"
+lb_volume_size: "{{ docker_lb_volume_size | default('5') }}"
+nodes_to_remove: "{{ openstack_nodes_to_remove | default([]) | to_yaml }}"
+use_bastion: "{{ openstack_use_bastion|default(False) }}"
+ui_ssh_tunnel: "{{ openshift_ui_ssh_tunnel|default(False) }}"
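
The openstack_private_network block template above can also be written with the boolean form of default(), which treats an empty provider network name as unset just like the `| default(None)` truth test does; an equivalent one-liner sketch:

    openstack_private_network: >-
      {{ openstack_provider_network_name
         | default(openstack_private_network_name
                   | default('openshift-ansible-' + stack_name + '-net'), true) }}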