Diffstat (limited to 'roles/openshift_cfme')
-rw-r--r--  roles/openshift_cfme/README.md                        404
-rw-r--r--  roles/openshift_cfme/defaults/main.yml                 42
-rw-r--r--  roles/openshift_cfme/files/miq-template.yaml          566
-rw-r--r--  roles/openshift_cfme/files/openshift_cfme.exports       3
-rw-r--r--  roles/openshift_cfme/handlers/main.yml                 37
-rw-r--r--  roles/openshift_cfme/img/CFMEBasicDeployment.png      bin 38316 -> 0 bytes
-rw-r--r--  roles/openshift_cfme/meta/main.yml                     19
-rw-r--r--  roles/openshift_cfme/tasks/create_pvs.yml              36
-rw-r--r--  roles/openshift_cfme/tasks/main.yml                   117
-rw-r--r--  roles/openshift_cfme/tasks/nfs.yml                     51
-rw-r--r--  roles/openshift_cfme/tasks/tune_masters.yml            12
-rw-r--r--  roles/openshift_cfme/tasks/uninstall.yml               46
-rw-r--r--  roles/openshift_cfme/templates/miq-pv-db.yaml.j2       13
-rw-r--r--  roles/openshift_cfme/templates/miq-pv-region.yaml.j2   13
-rw-r--r--  roles/openshift_cfme/templates/miq-pv-server.yaml.j2   13
15 files changed, 0 insertions, 1372 deletions
diff --git a/roles/openshift_cfme/README.md b/roles/openshift_cfme/README.md
deleted file mode 100644
index 8283afed6..000000000
--- a/roles/openshift_cfme/README.md
+++ /dev/null
@@ -1,404 +0,0 @@
-# OpenShift-Ansible - CFME Role
-
-# PROOF OF CONCEPT - Alpha Version
-
-This role is based on the work in the upstream
-[manageiq/manageiq-pods](https://github.com/ManageIQ/manageiq-pods)
-project. For additional literature on configuration specific to
-ManageIQ (optional post-installation tasks), visit the project's
-[upstream documentation page](http://manageiq.org/docs/get-started/basic-configuration).
-
-Please submit a
-[new issue](https://github.com/openshift/openshift-ansible/issues/new)
-if you run into bugs with this role or wish to request enhancements.
-
-# Important Notes
-
-This is an early *proof of concept* role to install the CloudForms
-Management Engine (ManageIQ) on OpenShift Container Platform (OCP).
-
-* This role is still in **ALPHA STATUS**
-* Many options are still hard-coded (e.g., the NFS setup)
-* Not many configurable options yet
-* **Should** be run on a dedicated cluster
-* **Will not run** on undersized infra
-* The terms *CFME* and *MIQ* / *ManageIQ* are interchangeable
-
-## Requirements
-
-**NOTE:** These requirements are copied from the upstream
-[manageiq/manageiq-pods](https://github.com/ManageIQ/manageiq-pods)
-project.
-
-### Prerequisites
-
-* [OpenShift Origin 1.5](https://docs.openshift.com/container-platform/3.5/welcome/index.html)
-  or [higher](https://docs.openshift.com/container-platform/latest/welcome/index.html)
-  provisioned
-* NFS or other compatible volume provider
-* A cluster-admin user (created by the role if required)
-
-### Cluster Sizing
-
-To avoid random deployment failures due to resource starvation, we
-recommend the following minimum cluster size for a **test**
-environment.
-
-| Type | Size | CPUs | Memory |
-|----------------|---------|----------|----------|
-| Masters | `1+` | `8` | `12GB` |
-| Nodes | `2+` | `4` | `8GB` |
-| PV Storage | `25GB` | `N/A` | `N/A` |
-
-
-![Basic CFME Deployment](img/CFMEBasicDeployment.png)
-
-**CFME has hard requirements for memory. CFME will NOT install if your
- infrastructure does not meet or exceed the requirements given
- above. Do not run this playbook if you do not have the required
- memory; you will only waste your time.**
-
-
-### Other sizing considerations
-
-* Recommendations assume MIQ will be the **only application running**
- on this cluster.
-* Alternatively, you can provision an infrastructure node to run
- registry/metrics/router/logging pods.
-* Each MIQ application pod will consume at least `3GB` of RAM on initial
- deployment (blank deployment without providers).
-* RAM consumption ramps up with appliance use; once providers are
-  added, expect higher resource consumption.
-
-
-### Assumptions
-
-1) You meet/exceed the [cluster sizing](#cluster-sizing) requirements
-1) Your NFS server is on your master host
-1) The NFS storage volume backing your PVs is mounted on `/exports/`
-
-Required directories that NFS will export to back the PVs:
-
-* `/exports/miq-pv0[123]`
-
-If the required directories are not present at install-time, they will
-be created using the recommended permissions per the
-[upstream documentation](https://github.com/ManageIQ/manageiq-pods#make-persistent-volumes-to-host-the-miq-database-and-application-data):
-
-* UID/GID: `root`/`root`
-* Mode: `0775`
-
-**IMPORTANT:** If you are using a separate volume (`/dev/vdX`) for NFS
- storage, **ensure** it is mounted on `/exports/` **before** running
- this role.
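-
-For example, a matching setup might be prepared like this before
-running the role (a sketch; the device name `/dev/vdb` is
-illustrative, and the role will create the directories itself if they
-are missing):
-
-```
-# Mount the dedicated NFS backing volume, then create the export
-# directories with the recommended ownership and mode
-mount /dev/vdb /exports
-mkdir -p /exports/miq-pv01 /exports/miq-pv02 /exports/miq-pv03
-chown root:root /exports/miq-pv0*
-chmod 0775 /exports/miq-pv0*
-```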
-
-
-
-## Role Variables
-
-Core variables in this role:
-
-| Name | Default value | Description |
-|-------------------------------|---------------|---------------|
-| `openshift_cfme_install_app` | `False` | `True`: Install everything and create a new CFME app, `False`: Just install all of the templates and scaffolding |
-
-
-Variables you may override have defaults defined in
-[defaults/main.yml](defaults/main.yml).
-
-
-# Important Notes
-
-This role is presently in **tech preview** status. Use it with the
-same caution you would give any other pre-release software.
-
-**Most importantly**, follow this one rule: don't re-run the entrypoint
-playbook multiple times in a row without cleaning up after previous
-runs if some of the CFME steps have run. This is a known
-flake. Cleanup instructions are provided at the bottom of this README.
-
-
-# Usage
-
-This section describes the basic usage of this role. All parameters
-will use their [default values](defaults/main.yml).
-
-## Pre-flight Checks
-
-**IMPORTANT:** As documented above in [the prerequisites](#prerequisites),
- you **must already** have your OCP cluster up and running.
-
-**Optional:** The ManageIQ pod is fairly large (about 1.7 GB), so to
-save some spin-up time post-deployment, you can begin pre-pulling the
-docker image onto each of your nodes now:
-
-```
-root@node0x # docker pull docker.io/manageiq/manageiq-pods:app-latest-fine
-```
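-
-If you would rather pre-pull on all of your nodes at once, an ad-hoc
-Ansible command is one option (a sketch; this assumes your inventory
-defines the standard `nodes` group):
-
-```
-$ ansible nodes -i <INVENTORY_FILE> -m command \
-    -a "docker pull docker.io/manageiq/manageiq-pods:app-latest-fine"
-```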
-
-## Getting Started
-
-1) The *entry point playbook* to install CFME is located in
-[the BYO playbooks](../../playbooks/byo/openshift-cfme/config.yml)
-directory.
-
-2) Update your existing `hosts` inventory file and ensure the
-parameter `openshift_cfme_install_app` is set to `True` under the
-`[OSEv3:vars]` block, for example:
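-
-```
-# hosts inventory excerpt (illustrative)
-[OSEv3:vars]
-openshift_cfme_install_app=True
-```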
-
-3) Using your existing `hosts` inventory file, run `ansible-playbook`
-with the entry point playbook:
-
-```
-$ ansible-playbook -v -i <INVENTORY_FILE> playbooks/byo/openshift-cfme/config.yml
-```
-
-## Next Steps
-
-Once complete, the playbook will let you know:
-
-
-```
-TASK [openshift_cfme : Status update] *********************************************************
-ok: [ho.st.na.me] => {
- "msg": "CFME has been deployed. Note that there will be a delay before it is fully initialized.\n"
-}
-```
-
-This will take several minutes (*possibly 10 or more*, depending on
-your network connection). In the meantime, you can get some insight
-into the deployment process while it initializes.
-
-### oc describe pod manageiq-0
-
-*Some useful information about the output of the `oc describe pod
-manageiq-0` command*
-
-**Readiness probes**: these will take a while to become
-`Healthy`. The initial health probes won't even fire for at least 8
-minutes, and possibly longer depending on how long it takes to pull
-down the large images. ManageIQ is a large application, so it may take
-a considerable amount of time for it to deploy and be marked as
-`Healthy`.
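-
-For example, to check just the probe configuration for the pod
-(output omitted here):
-
-```
-[root@cfme-master01 ~]# oc describe pod manageiq-0 | grep -A1 'Liveness\|Readiness'
-```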
-
-If you go to the node the application is running on (check for
-`Successfully assigned manageiq-0 to <HOST|IP>` in the `describe`
-output), you can run a `docker pull` command to monitor the progress
-of the image pull:
-
-```
-[root@cfme-node ~]# docker pull docker.io/manageiq/manageiq-pods:app-latest-fine
-Trying to pull repository docker.io/manageiq/manageiq-pods ...
-sha256:6c055ca9d3c65cd694d6c0e28986b5239ba56bbdf0488cccdaa283d545258f8a: Pulling from docker.io/manageiq/manageiq-pods
-Digest: sha256:6c055ca9d3c65cd694d6c0e28986b5239ba56bbdf0488cccdaa283d545258f8a
-Status: Image is up to date for docker.io/manageiq/manageiq-pods:app-latest-fine
-```
-
-The example above demonstrates the case where the image has already
-been pulled successfully.
-
-If the image hasn't been fully pulled yet, you will see multiple
-progress bars detailing the download status of each image layer.
-
-
-### rsh
-
-*Useful inspection/progress monitoring techniques with the `oc rsh`
-command.*
-
-
-On your master node, switch to the `cfme` project (or whatever you
-named it if you overrode the `openshift_cfme_project` variable) and
-check on the pod states:
-
-```
-[root@cfme-master01 ~]# oc project cfme
-Now using project "cfme" on server "https://10.10.0.100:8443".
-
-[root@cfme-master01 ~]# oc get pod
-NAME READY STATUS RESTARTS AGE
-manageiq-0 0/1 Running 0 14m
-memcached-1-3lk7g 1/1 Running 0 14m
-postgresql-1-12slb 1/1 Running 0 14m
-```
-
-Note how the `manageiq-0` pod says `0/1` under the **READY**
-column. After some time (depending on your network connection) you'll
-be able to `rsh` into the pod to find out more about what's happening
-in real time. First, the easy-mode command: run this once `rsh` is
-available, then watch until it says `Started Initialize Appliance
-Database`:
-
-```
-[root@cfme-master01 ~]# oc rsh manageiq-0 journalctl -f -u appliance-initialize.service
-```
-
-For the full explanation of what this means, and more interactive
-inspection techniques, read on.
-
-To obtain a shell on the `manageiq-0` pod, we use this command:
-
-```
-[root@cfme-master01 ~]# oc rsh manageiq-0 bash -l
-```
-
-The `rsh` command opens a shell in your pod for you. In this case it's
-the pod called `manageiq-0`. `systemd` is managing the services in
-this pod, so we can use the `list-units` command to see what is
-currently running: `# systemctl list-units | grep appliance`.
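-
-While initialization is still underway, that will look something like
-this (illustrative output):
-
-```
-[root@manageiq-0 vmdb]# systemctl list-units | grep appliance
-appliance-initialize.service    loaded active running   Initialize Appliance Database
-```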
-
-If you see the `appliance-initialize` service running, this indicates
-that basic setup is still in progress. We can monitor the process with
-the `journalctl` command like so:
-
-
-```
-[root@manageiq-0 vmdb]# journalctl -f -u appliance-initialize.service
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Checking deployment status ==
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: No pre-existing EVM configuration found on region PV
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Checking for existing data on server PV ==
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Starting New Deployment ==
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Applying memcached config ==
-Jun 14 14:55:53 manageiq-0 appliance-initialize.sh[58]: == Initializing Appliance ==
-Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: create encryption key
-Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: configuring external database
-Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: Checking for connections to the database...
-Jun 14 14:56:09 manageiq-0 appliance-initialize.sh[58]: Create region starting
-Jun 14 14:58:15 manageiq-0 appliance-initialize.sh[58]: Create region complete
-Jun 14 14:58:15 manageiq-0 appliance-initialize.sh[58]: == Initializing PV data ==
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: == Initializing PV data backup ==
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: sending incremental file list
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: created directory /persistent/server-deploy/backup/backup_2017_06_14_145816
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/REGION
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/certs/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/certs/v2_key
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/config/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/config/database.yml
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/vmdb/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/vmdb/GUID
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: sent 1330 bytes received 136 bytes 2932.00 bytes/sec
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: total size is 770 speedup is 0.53
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: == Restoring PV data symlinks ==
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/REGION symlink is already in place, skipping
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/config/database.yml symlink is already in place, skipping
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/certs/v2_key symlink is already in place, skipping
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/log symlink is already in place, skipping
-Jun 14 14:58:28 manageiq-0 systemctl[304]: Removed symlink /etc/systemd/system/multi-user.target.wants/appliance-initialize.service.
-Jun 14 14:58:29 manageiq-0 systemd[1]: Started Initialize Appliance Database.
-```
-
-Most of what we see here is the initial database seeding
-process. This process isn't very quick, so be patient.
-
-At the bottom of the log there is a notable line from `systemctl`:
-`Removed symlink
-/etc/systemd/system/multi-user.target.wants/appliance-initialize.service`. This
-means the `appliance-initialize` service is no longer enabled, which
-indicates that the base application initialization is now complete.
-
-We're not done yet, though: other ancillary services run in this pod
-to support the application. *Still in the rsh shell*, use the `ps`
-command to watch for the `httpd` processes starting. You will see
-output similar to the following when that stage has completed:
-
-```
-[root@manageiq-0 vmdb]# ps aux | grep http
-root 1941 0.0 0.1 249820 7640 ? Ss 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
-apache 1942 0.0 0.0 250752 6012 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
-apache 1943 0.0 0.0 250472 5952 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
-apache 1944 0.0 0.0 250472 5916 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
-apache 1945 0.0 0.0 250360 5764 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
-```
-
-Furthermore, you can find other related processes by just looking for
-ones with `MIQ` in their name:
-
-```
-[root@manageiq-0 vmdb]# ps aux | grep miq
-root 333 27.7 4.2 555884 315916 ? Sl 14:58 3:59 MIQ Server
-root 1976 0.6 4.0 507224 303740 ? SNl 15:02 0:03 MIQ: MiqGenericWorker id: 1, queue: generic
-root 1984 0.6 4.0 507224 304312 ? SNl 15:02 0:03 MIQ: MiqGenericWorker id: 2, queue: generic
-root 1992 0.9 4.0 508252 304888 ? SNl 15:02 0:05 MIQ: MiqPriorityWorker id: 3, queue: generic
-root 2000 0.7 4.0 510308 304696 ? SNl 15:02 0:04 MIQ: MiqPriorityWorker id: 4, queue: generic
-root 2008 1.2 4.0 514000 303612 ? SNl 15:02 0:07 MIQ: MiqScheduleWorker id: 5
-root 2026 0.2 4.0 517504 303644 ? SNl 15:02 0:01 MIQ: MiqEventHandler id: 6, queue: ems
-root 2036 0.2 4.0 518532 303768 ? SNl 15:02 0:01 MIQ: MiqReportingWorker id: 7, queue: reporting
-root 2044 0.2 4.0 519560 303812 ? SNl 15:02 0:01 MIQ: MiqReportingWorker id: 8, queue: reporting
-root 2059 0.2 4.0 528372 303956 ? SNl 15:02 0:01 puma 3.3.0 (tcp://127.0.0.1:5000) [MIQ: Web Server Worker]
-root 2067 0.9 4.0 529664 305716 ? SNl 15:02 0:05 puma 3.3.0 (tcp://127.0.0.1:3000) [MIQ: Web Server Worker]
-root 2075 0.2 4.0 529408 304056 ? SNl 15:02 0:01 puma 3.3.0 (tcp://127.0.0.1:4000) [MIQ: Web Server Worker]
-root 2329 0.0 0.0 10640 972 ? S+ 15:13 0:00 grep --color=auto -i miq
-```
-
-Finally, *still in the rsh shell*, to test if the application is
-running correctly, we can request the application homepage. If the
-page is available, its title will be `ManageIQ: Login`:
-
-```
-[root@manageiq-0 vmdb]# curl -s -k https://localhost | grep -A2 '<title>'
-<title>
-ManageIQ: Login
-</title>
-```
-
-**Note:** The `-s` flag makes `curl` silent and the `-k` flag tells
-it to ignore errors about untrusted certificates.
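-
-From outside the pod you can perform a similar check against the
-exposed route (a sketch; assumes the default route name `manageiq`
-and the `cfme` project; substitute the host shown under HOST/PORT
-for `<ROUTE_HOST>`):
-
-```
-$ oc get route manageiq -n cfme
-$ curl -s -k https://<ROUTE_HOST> | grep -A2 '<title>'
-```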
-
-
-
-# Additional Upstream Resources
-
-Below are some useful resources from the upstream project
-documentation.
-
-* [Verify Setup Was Successful](https://github.com/ManageIQ/manageiq-pods#verifying-the-setup-was-successful)
-* [POD Access And Routes](https://github.com/ManageIQ/manageiq-pods#pod-access-and-routes)
-* [Troubleshooting](https://github.com/ManageIQ/manageiq-pods#troubleshooting)
-
-
-# Manual Cleanup
-
-At this time uninstallation/cleanup is still a manual process. You
-will have to follow a few steps to fully remove CFME from your
-cluster.
-
-Delete the project:
-
-* `oc delete project cfme`
-
-Delete the PVs:
-
-* `oc delete pv miq-pv01`
-* `oc delete pv miq-pv02`
-* `oc delete pv miq-pv03`
-
-Clean out the old PV data:
-
-* `cd /exports/`
-* `find miq* -type f -delete`
-* `find miq* -type d -delete`
-
-Remove the NFS exports:
-
-* `rm /etc/exports.d/openshift_cfme.exports`
-* `exportfs -ar`
-
-Delete the user:
-
-* `oc delete user cfme`
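-
-Collected into a single pass, the cleanup looks like this (a sketch;
-assumes the default `cfme` project/user names and the NFS layout
-described in this README):
-
-```
-oc delete project cfme
-oc delete pv miq-pv01 miq-pv02 miq-pv03
-cd /exports/
-find miq* -type f -delete
-find miq* -type d -delete
-rm /etc/exports.d/openshift_cfme.exports
-exportfs -ar
-oc delete user cfme
-```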
-
-**NOTE:** The `oc delete project cfme` command will return quickly;
-however, the deletion will continue to run in the background. Continue
-running `oc get project` after you've completed the other steps to
-monitor the pods and the final project termination progress.
diff --git a/roles/openshift_cfme/defaults/main.yml b/roles/openshift_cfme/defaults/main.yml
deleted file mode 100644
index b82c2e602..000000000
--- a/roles/openshift_cfme/defaults/main.yml
+++ /dev/null
@@ -1,42 +0,0 @@
----
-# Namespace for the CFME project (Note: changed post-3.6 to use
-# reserved 'openshift-' namespace prefix)
-openshift_cfme_project: openshift-cfme
-# Namespace/project description
-openshift_cfme_project_description: ManageIQ - CloudForms Management Engine
-# Basic user assigned the `admin` role for the project
-openshift_cfme_user: cfme
-# Project system account for enabling privileged pods
-openshift_cfme_service_account: "system:serviceaccount:{{ openshift_cfme_project }}:default"
-# All the required exports
-openshift_cfme_pv_exports:
- - miq-pv01
- - miq-pv02
- - miq-pv03
-# PV template files and their created object names
-openshift_cfme_pv_data:
- - pv_name: miq-pv01
- pv_template: miq-pv-db.yaml
- pv_label: CFME DB PV
- - pv_name: miq-pv02
- pv_template: miq-pv-region.yaml
- pv_label: CFME Region PV
- - pv_name: miq-pv03
- pv_template: miq-pv-server.yaml
- pv_label: CFME Server PV
-
-# Tuning parameter to use more than 5 images at once from an ImageStream
-openshift_cfme_maxImagesBulkImportedPerRepository: 100
-# TODO: Refactor '_install_app' variable. This is just for testing but
-# maybe in the future it should control the entire yes/no for CFME.
-#
-# Whether or not the manageiq app should be initialized ('oc new-app
-# --template=manageiq'). If False, everything UP TO 'new-app' is run.
-openshift_cfme_install_app: False
-# Docker image to pull
-openshift_cfme_application_img_name: "{{ 'registry.access.redhat.com/cloudforms45/cfme-openshift-app' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
-openshift_cfme_postgresql_img_name: "{{ 'registry.access.redhat.com/cloudforms45/cfme-openshift-postgresql' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
-openshift_cfme_memcached_img_name: "{{ 'registry.access.redhat.com/cloudforms45/cfme-openshift-memcached' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
-openshift_cfme_application_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'app-latest-fine' }}"
-openshift_cfme_memcached_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'memcached-latest-fine' }}"
-openshift_cfme_postgresql_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'postgresql-latest-fine' }}"
diff --git a/roles/openshift_cfme/files/miq-template.yaml b/roles/openshift_cfme/files/miq-template.yaml
deleted file mode 100644
index 8f0d2af38..000000000
--- a/roles/openshift_cfme/files/miq-template.yaml
+++ /dev/null
@@ -1,566 +0,0 @@
----
-path: /tmp/miq-template-out
-data:
- apiVersion: v1
- kind: Template
- labels:
- template: manageiq
- metadata:
- name: manageiq
- annotations:
- description: "ManageIQ appliance with persistent storage"
- tags: "instant-app,manageiq,miq"
- iconClass: "icon-rails"
- objects:
- - apiVersion: v1
- kind: Secret
- metadata:
- name: "${NAME}-secrets"
- stringData:
- pg-password: "${DATABASE_PASSWORD}"
- - apiVersion: v1
- kind: Service
- metadata:
- annotations:
- description: "Exposes and load balances ManageIQ pods"
- service.alpha.openshift.io/dependencies: '[{"name":"${DATABASE_SERVICE_NAME}","namespace":"","kind":"Service"},{"name":"${MEMCACHED_SERVICE_NAME}","namespace":"","kind":"Service"}]'
- name: ${NAME}
- spec:
- clusterIP: None
- ports:
- - name: http
- port: 80
- protocol: TCP
- targetPort: 80
- - name: https
- port: 443
- protocol: TCP
- targetPort: 443
- selector:
- name: ${NAME}
- - apiVersion: v1
- kind: Route
- metadata:
- name: ${NAME}
- spec:
- host: ${APPLICATION_DOMAIN}
- port:
- targetPort: https
- tls:
- termination: passthrough
- to:
- kind: Service
- name: ${NAME}
- - apiVersion: v1
- kind: ImageStream
- metadata:
- name: miq-app
- annotations:
- description: "Keeps track of the ManageIQ image changes"
- spec:
- dockerImageRepository: "${APPLICATION_IMG_NAME}"
- - apiVersion: v1
- kind: ImageStream
- metadata:
- name: miq-postgresql
- annotations:
- description: "Keeps track of the PostgreSQL image changes"
- spec:
- dockerImageRepository: "${POSTGRESQL_IMG_NAME}"
- - apiVersion: v1
- kind: ImageStream
- metadata:
- name: miq-memcached
- annotations:
- description: "Keeps track of the Memcached image changes"
- spec:
- dockerImageRepository: "${MEMCACHED_IMG_NAME}"
- - apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: "${NAME}-${DATABASE_SERVICE_NAME}"
- spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: ${DATABASE_VOLUME_CAPACITY}
- - apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: "${NAME}-region"
- spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: ${APPLICATION_REGION_VOLUME_CAPACITY}
- - apiVersion: apps/v1beta1
- kind: "StatefulSet"
- metadata:
- name: ${NAME}
- annotations:
- description: "Defines how to deploy the ManageIQ appliance"
- spec:
- serviceName: "${NAME}"
- replicas: "${APPLICATION_REPLICA_COUNT}"
- template:
- metadata:
- labels:
- name: ${NAME}
- name: ${NAME}
- spec:
- containers:
- - name: manageiq
- image: "${APPLICATION_IMG_NAME}:${APPLICATION_IMG_TAG}"
- livenessProbe:
- tcpSocket:
- port: 443
- initialDelaySeconds: 480
- timeoutSeconds: 3
- readinessProbe:
- httpGet:
- path: /
- port: 443
- scheme: HTTPS
- initialDelaySeconds: 200
- timeoutSeconds: 3
- ports:
- - containerPort: 80
- protocol: TCP
- - containerPort: 443
- protocol: TCP
- securityContext:
- privileged: true
- volumeMounts:
- -
- name: "${NAME}-server"
- mountPath: "/persistent"
- -
- name: "${NAME}-region"
- mountPath: "/persistent-region"
- env:
- -
- name: "APPLICATION_INIT_DELAY"
- value: "${APPLICATION_INIT_DELAY}"
- -
- name: "DATABASE_SERVICE_NAME"
- value: "${DATABASE_SERVICE_NAME}"
- -
- name: "DATABASE_REGION"
- value: "${DATABASE_REGION}"
- -
- name: "MEMCACHED_SERVICE_NAME"
- value: "${MEMCACHED_SERVICE_NAME}"
- -
- name: "POSTGRESQL_USER"
- value: "${DATABASE_USER}"
- -
- name: "POSTGRESQL_PASSWORD"
- valueFrom:
- secretKeyRef:
- name: "${NAME}-secrets"
- key: "pg-password"
- -
- name: "POSTGRESQL_DATABASE"
- value: "${DATABASE_NAME}"
- -
- name: "POSTGRESQL_MAX_CONNECTIONS"
- value: "${POSTGRESQL_MAX_CONNECTIONS}"
- -
- name: "POSTGRESQL_SHARED_BUFFERS"
- value: "${POSTGRESQL_SHARED_BUFFERS}"
- resources:
- requests:
- memory: "${APPLICATION_MEM_REQ}"
- cpu: "${APPLICATION_CPU_REQ}"
- limits:
- memory: "${APPLICATION_MEM_LIMIT}"
- lifecycle:
- preStop:
- exec:
- command:
- - /opt/manageiq/container-scripts/sync-pv-data
- volumes:
- -
- name: "${NAME}-region"
- persistentVolumeClaim:
- claimName: ${NAME}-region
- volumeClaimTemplates:
- - metadata:
- name: "${NAME}-server"
- annotations:
- # Uncomment this if using dynamic volume provisioning.
- # https://docs.openshift.org/latest/install_config/persistent_storage/dynamically_provisioning_pvs.html
- # volume.alpha.kubernetes.io/storage-class: anything
- spec:
- accessModes: [ ReadWriteOnce ]
- resources:
- requests:
- storage: "${APPLICATION_VOLUME_CAPACITY}"
- - apiVersion: v1
- kind: "Service"
- metadata:
- name: "${MEMCACHED_SERVICE_NAME}"
- annotations:
- description: "Exposes the memcached server"
- spec:
- ports:
- -
- name: "memcached"
- port: 11211
- targetPort: 11211
- selector:
- name: "${MEMCACHED_SERVICE_NAME}"
- - apiVersion: v1
- kind: "DeploymentConfig"
- metadata:
- name: "${MEMCACHED_SERVICE_NAME}"
- annotations:
- description: "Defines how to deploy memcached"
- spec:
- strategy:
- type: "Recreate"
- triggers:
- -
- type: "ImageChange"
- imageChangeParams:
- automatic: true
- containerNames:
- - "memcached"
- from:
- kind: "ImageStreamTag"
- name: "miq-memcached:${MEMCACHED_IMG_TAG}"
- -
- type: "ConfigChange"
- replicas: 1
- selector:
- name: "${MEMCACHED_SERVICE_NAME}"
- template:
- metadata:
- name: "${MEMCACHED_SERVICE_NAME}"
- labels:
- name: "${MEMCACHED_SERVICE_NAME}"
- spec:
- volumes: []
- containers:
- -
- name: "memcached"
- image: "${MEMCACHED_IMG_NAME}:${MEMCACHED_IMG_TAG}"
- ports:
- -
- containerPort: 11211
- readinessProbe:
- timeoutSeconds: 1
- initialDelaySeconds: 5
- tcpSocket:
- port: 11211
- livenessProbe:
- timeoutSeconds: 1
- initialDelaySeconds: 30
- tcpSocket:
- port: 11211
- volumeMounts: []
- env:
- -
- name: "MEMCACHED_MAX_MEMORY"
- value: "${MEMCACHED_MAX_MEMORY}"
- -
- name: "MEMCACHED_MAX_CONNECTIONS"
- value: "${MEMCACHED_MAX_CONNECTIONS}"
- -
- name: "MEMCACHED_SLAB_PAGE_SIZE"
- value: "${MEMCACHED_SLAB_PAGE_SIZE}"
- resources:
- requests:
- memory: "${MEMCACHED_MEM_REQ}"
- cpu: "${MEMCACHED_CPU_REQ}"
- limits:
- memory: "${MEMCACHED_MEM_LIMIT}"
- - apiVersion: v1
- kind: "Service"
- metadata:
- name: "${DATABASE_SERVICE_NAME}"
- annotations:
- description: "Exposes the database server"
- spec:
- ports:
- -
- name: "postgresql"
- port: 5432
- targetPort: 5432
- selector:
- name: "${DATABASE_SERVICE_NAME}"
- - apiVersion: v1
- kind: "DeploymentConfig"
- metadata:
- name: "${DATABASE_SERVICE_NAME}"
- annotations:
- description: "Defines how to deploy the database"
- spec:
- strategy:
- type: "Recreate"
- triggers:
- -
- type: "ImageChange"
- imageChangeParams:
- automatic: true
- containerNames:
- - "postgresql"
- from:
- kind: "ImageStreamTag"
- name: "miq-postgresql:${POSTGRESQL_IMG_TAG}"
- -
- type: "ConfigChange"
- replicas: 1
- selector:
- name: "${DATABASE_SERVICE_NAME}"
- template:
- metadata:
- name: "${DATABASE_SERVICE_NAME}"
- labels:
- name: "${DATABASE_SERVICE_NAME}"
- spec:
- volumes:
- -
- name: "miq-pgdb-volume"
- persistentVolumeClaim:
- claimName: "${NAME}-${DATABASE_SERVICE_NAME}"
- containers:
- -
- name: "postgresql"
- image: "${POSTGRESQL_IMG_NAME}:${POSTGRESQL_IMG_TAG}"
- ports:
- -
- containerPort: 5432
- readinessProbe:
- timeoutSeconds: 1
- initialDelaySeconds: 15
- exec:
- command:
- - "/bin/sh"
- - "-i"
- - "-c"
- - "psql -h 127.0.0.1 -U ${POSTGRESQL_USER} -q -d ${POSTGRESQL_DATABASE} -c 'SELECT 1'"
- livenessProbe:
- timeoutSeconds: 1
- initialDelaySeconds: 60
- tcpSocket:
- port: 5432
- volumeMounts:
- -
- name: "miq-pgdb-volume"
- mountPath: "/var/lib/pgsql/data"
- env:
- -
- name: "POSTGRESQL_USER"
- value: "${DATABASE_USER}"
- -
- name: "POSTGRESQL_PASSWORD"
- valueFrom:
- secretKeyRef:
- name: "${NAME}-secrets"
- key: "pg-password"
- -
- name: "POSTGRESQL_DATABASE"
- value: "${DATABASE_NAME}"
- -
- name: "POSTGRESQL_MAX_CONNECTIONS"
- value: "${POSTGRESQL_MAX_CONNECTIONS}"
- -
- name: "POSTGRESQL_SHARED_BUFFERS"
- value: "${POSTGRESQL_SHARED_BUFFERS}"
- resources:
- requests:
- memory: "${POSTGRESQL_MEM_REQ}"
- cpu: "${POSTGRESQL_CPU_REQ}"
- limits:
- memory: "${POSTGRESQL_MEM_LIMIT}"
-
- parameters:
- -
- name: "NAME"
- displayName: Name
- required: true
- description: "The name assigned to all of the frontend objects defined in this template."
- value: manageiq
- -
- name: "DATABASE_SERVICE_NAME"
- displayName: "PostgreSQL Service Name"
- required: true
- description: "The name of the OpenShift Service exposed for the PostgreSQL container."
- value: "postgresql"
- -
- name: "DATABASE_USER"
- displayName: "PostgreSQL User"
- required: true
- description: "PostgreSQL user that will access the database."
- value: "root"
- -
- name: "DATABASE_PASSWORD"
- displayName: "PostgreSQL Password"
- required: true
- description: "Password for the PostgreSQL user."
- from: "[a-zA-Z0-9]{8}"
- generate: expression
- -
- name: "DATABASE_NAME"
- required: true
- displayName: "PostgreSQL Database Name"
- description: "Name of the PostgreSQL database accessed."
- value: "vmdb_production"
- -
- name: "DATABASE_REGION"
- required: true
- displayName: "Application Database Region"
- description: "Database region that will be used for application."
- value: "0"
- -
- name: "MEMCACHED_SERVICE_NAME"
- required: true
- displayName: "Memcached Service Name"
- description: "The name of the OpenShift Service exposed for the Memcached container."
- value: "memcached"
- -
- name: "MEMCACHED_MAX_MEMORY"
- displayName: "Memcached Max Memory"
- description: "Memcached maximum memory for memcached object storage in MB."
- value: "64"
- -
- name: "MEMCACHED_MAX_CONNECTIONS"
- displayName: "Memcached Max Connections"
- description: "Memcached maximum number of connections allowed."
- value: "1024"
- -
- name: "MEMCACHED_SLAB_PAGE_SIZE"
- displayName: "Memcached Slab Page Size"
- description: "Memcached size of each slab page."
- value: "1m"
- -
- name: "POSTGRESQL_MAX_CONNECTIONS"
- displayName: "PostgreSQL Max Connections"
- description: "PostgreSQL maximum number of database connections allowed."
- value: "100"
- -
- name: "POSTGRESQL_SHARED_BUFFERS"
- displayName: "PostgreSQL Shared Buffer Amount"
- description: "Amount of memory dedicated for PostgreSQL shared memory buffers."
- value: "256MB"
- -
- name: "APPLICATION_CPU_REQ"
- displayName: "Application Min CPU Requested"
- required: true
- description: "Minimum amount of CPU time the Application container will need (expressed in millicores)."
- value: "1000m"
- -
- name: "POSTGRESQL_CPU_REQ"
- displayName: "PostgreSQL Min CPU Requested"
- required: true
- description: "Minimum amount of CPU time the PostgreSQL container will need (expressed in millicores)."
- value: "500m"
- -
- name: "MEMCACHED_CPU_REQ"
- displayName: "Memcached Min CPU Requested"
- required: true
- description: "Minimum amount of CPU time the Memcached container will need (expressed in millicores)."
- value: "200m"
- -
- name: "APPLICATION_MEM_REQ"
- displayName: "Application Min RAM Requested"
- required: true
- description: "Minimum amount of memory the Application container will need."
- value: "6144Mi"
- -
- name: "POSTGRESQL_MEM_REQ"
- displayName: "PostgreSQL Min RAM Requested"
- required: true
- description: "Minimum amount of memory the PostgreSQL container will need."
- value: "1024Mi"
- -
- name: "MEMCACHED_MEM_REQ"
- displayName: "Memcached Min RAM Requested"
- required: true
- description: "Minimum amount of memory the Memcached container will need."
- value: "64Mi"
- -
- name: "APPLICATION_MEM_LIMIT"
- displayName: "Application Max RAM Limit"
- required: true
- description: "Maximum amount of memory the Application container can consume."
- value: "16384Mi"
- -
- name: "POSTGRESQL_MEM_LIMIT"
- displayName: "PostgreSQL Max RAM Limit"
- required: true
- description: "Maximum amount of memory the PostgreSQL container can consume."
- value: "8192Mi"
- -
- name: "MEMCACHED_MEM_LIMIT"
- displayName: "Memcached Max RAM Limit"
- required: true
- description: "Maximum amount of memory the Memcached container can consume."
- value: "256Mi"
- -
- name: "POSTGRESQL_IMG_NAME"
- displayName: "PostgreSQL Image Name"
- description: "This is the PostgreSQL image name requested to deploy."
- value: "docker.io/manageiq/manageiq-pods"
- -
- name: "POSTGRESQL_IMG_TAG"
- displayName: "PostgreSQL Image Tag"
- description: "This is the PostgreSQL image tag/version requested to deploy."
- value: "postgresql-latest-fine"
- -
- name: "MEMCACHED_IMG_NAME"
- displayName: "Memcached Image Name"
- description: "This is the Memcached image name requested to deploy."
- value: "docker.io/manageiq/manageiq-pods"
- -
- name: "MEMCACHED_IMG_TAG"
- displayName: "Memcached Image Tag"
- description: "This is the Memcached image tag/version requested to deploy."
- value: "memcached-latest-fine"
- -
- name: "APPLICATION_IMG_NAME"
- displayName: "Application Image Name"
- description: "This is the Application image name requested to deploy."
- value: "docker.io/manageiq/manageiq-pods"
- -
- name: "APPLICATION_IMG_TAG"
- displayName: "Application Image Tag"
- description: "This is the Application image tag/version requested to deploy."
- value: "app-latest-fine"
- -
- name: "APPLICATION_DOMAIN"
- displayName: "Application Hostname"
- description: "The exposed hostname that will route to the application service, if left blank a value will be defaulted."
- value: ""
- -
- name: "APPLICATION_REPLICA_COUNT"
- displayName: "Application Replica Count"
- description: "This is the number of Application replicas requested to deploy."
- value: "1"
- -
- name: "APPLICATION_INIT_DELAY"
- displayName: "Application Init Delay"
- required: true
- description: "Delay in seconds before we attempt to initialize the application."
- value: "15"
- -
- name: "APPLICATION_VOLUME_CAPACITY"
- displayName: "Application Volume Capacity"
- required: true
- description: "Volume space available for application data."
- value: "5Gi"
- -
- name: "APPLICATION_REGION_VOLUME_CAPACITY"
- displayName: "Application Region Volume Capacity"
- required: true
- description: "Volume space available for region application data."
- value: "5Gi"
- -
- name: "DATABASE_VOLUME_CAPACITY"
- displayName: "Database Volume Capacity"
- required: true
- description: "Volume space available for database."
- value: "15Gi"
diff --git a/roles/openshift_cfme/files/openshift_cfme.exports b/roles/openshift_cfme/files/openshift_cfme.exports
deleted file mode 100644
index 5457d41fc..000000000
--- a/roles/openshift_cfme/files/openshift_cfme.exports
+++ /dev/null
@@ -1,3 +0,0 @@
-/exports/miq-pv01 *(rw,no_root_squash,no_wdelay)
-/exports/miq-pv02 *(rw,no_root_squash,no_wdelay)
-/exports/miq-pv03 *(rw,no_root_squash,no_wdelay)
diff --git a/roles/openshift_cfme/handlers/main.yml b/roles/openshift_cfme/handlers/main.yml
deleted file mode 100644
index 7e90b09a4..000000000
--- a/roles/openshift_cfme/handlers/main.yml
+++ /dev/null
@@ -1,37 +0,0 @@
----
-######################################################################
-# NOTE: These are duplicated from roles/openshift_master/handlers/main.yml
-#
-# TODO: Use the consolidated 'openshift_handlers' role once it's ready
-# See: https://github.com/openshift/openshift-ansible/pull/4041#discussion_r118770782
-######################################################################
-
-- name: restart master api
- systemd: name={{ openshift.common.service_type }}-master-api state=restarted
- when: (not (master_api_service_status_changed | default(false) | bool)) and openshift.master.cluster_method == 'native'
- notify: Verify API Server
-
-- name: restart master controllers
- systemd: name={{ openshift.common.service_type }}-master-controllers state=restarted
- when: (not (master_controllers_service_status_changed | default(false) | bool)) and openshift.master.cluster_method == 'native'
-
-- name: Verify API Server
- # Using curl here since the uri module requires python-httplib2 and
- # wait_for port doesn't provide health information.
- command: >
- curl --silent --tlsv1.2
- {% if openshift.common.version_gte_3_2_or_1_2 | bool %}
- --cacert {{ openshift.common.config_base }}/master/ca-bundle.crt
- {% else %}
- --cacert {{ openshift.common.config_base }}/master/ca.crt
- {% endif %}
- {{ openshift.master.api_url }}/healthz/ready
- args:
- # Disables the following warning:
- # Consider using get_url or uri module rather than running curl
- warn: no
- register: api_available_output
- until: api_available_output.stdout == 'ok'
- retries: 120
- delay: 1
- changed_when: false
diff --git a/roles/openshift_cfme/img/CFMEBasicDeployment.png b/roles/openshift_cfme/img/CFMEBasicDeployment.png
deleted file mode 100644
index a89c1e325..000000000
--- a/roles/openshift_cfme/img/CFMEBasicDeployment.png
+++ /dev/null
Binary files differ
diff --git a/roles/openshift_cfme/meta/main.yml b/roles/openshift_cfme/meta/main.yml
deleted file mode 100644
index 162d817f0..000000000
--- a/roles/openshift_cfme/meta/main.yml
+++ /dev/null
@@ -1,19 +0,0 @@
----
-galaxy_info:
- author: Tim Bielawa
-  description: OpenShift CFME (ManageIQ) Deployer
- company: Red Hat, Inc.
- license: Apache License, Version 2.0
- min_ansible_version: 2.2
- version: 1.0
- platforms:
- - name: EL
- versions:
- - 7
- categories:
- - cloud
- - system
-dependencies:
-- role: lib_openshift
-- role: lib_utils
-- role: openshift_master_facts
diff --git a/roles/openshift_cfme/tasks/create_pvs.yml b/roles/openshift_cfme/tasks/create_pvs.yml
deleted file mode 100644
index 7fa7d3997..000000000
--- a/roles/openshift_cfme/tasks/create_pvs.yml
+++ /dev/null
@@ -1,36 +0,0 @@
----
-# Check for existence and then conditionally:
-# - evaluate templates
-# - create PVs
-#
-# These tasks idempotently create the required CFME PV objects. Do not
-# call this file directly. This file is intended to be run as an
-# include with a 'with_items' attached to it, hence the use below
-# of variables like "{{ item.pv_label }}"
-
-- name: "Check if the {{ item.pv_label }} template has been created already"
- oc_obj:
- namespace: "{{ openshift_cfme_project }}"
- state: list
- kind: pv
- name: "{{ item.pv_name }}"
- register: miq_pv_check
-
-# Skip all of this if the PV already exists
-- block:
- - name: "Ensure the {{ item.pv_label }} template is evaluated"
- template:
- src: "{{ item.pv_template }}.j2"
- dest: "{{ template_dir }}/{{ item.pv_template }}"
-
- - name: "Ensure {{ item.pv_label }} is created"
- oc_obj:
- namespace: "{{ openshift_cfme_project }}"
- kind: pv
- name: "{{ item.pv_name }}"
- state: present
- delete_after: True
- files:
- - "{{ template_dir }}/{{ item.pv_template }}"
- when:
- - not miq_pv_check.results.results.0
diff --git a/roles/openshift_cfme/tasks/main.yml b/roles/openshift_cfme/tasks/main.yml
deleted file mode 100644
index 74ae16d91..000000000
--- a/roles/openshift_cfme/tasks/main.yml
+++ /dev/null
@@ -1,117 +0,0 @@
----
-######################################################################
-# Users, projects, and privileges
-
-- name: Ensure the CFME user exists
- oc_user:
- state: present
- username: "{{ openshift_cfme_user }}"
-
-- name: Ensure the CFME namespace exists with CFME user as admin
- oc_project:
- state: present
- name: "{{ openshift_cfme_project }}"
- display_name: "{{ openshift_cfme_project_description }}"
- admin: "{{ openshift_cfme_user }}"
-
-- name: Ensure the CFME namespace service account is privileged
- oc_adm_policy_user:
- namespace: "{{ openshift_cfme_project }}"
- user: "{{ openshift_cfme_service_account }}"
- resource_kind: scc
- resource_name: privileged
- state: present
-
-######################################################################
-# NFS
-# In the case that we are not running on a cloud provider, volumes must be statically provisioned
-
-- include: nfs.yml
- when: not (openshift_cloudprovider_kind is defined and (openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce'))
-
-######################################################################
-# CFME App Template
-#
-# Note, this is different from the create_pvs.yml tasks in that the
-# application template does not require any jinja2 evaluation.
-#
-# TODO: Handle the case where the server template is updated in
-# openshift-ansible and the change needs to be landed on the managed
-# cluster.
-
-- name: Check if the CFME Server template has been created already
- oc_obj:
- namespace: "{{ openshift_cfme_project }}"
- state: list
- kind: template
- name: manageiq
- register: miq_server_check
-
-- name: Copy over CFME Server template
- copy:
- src: miq-template.yaml
- dest: "{{ template_dir }}/miq-template.yaml"
-
-- name: Ensure the server template was read from disk
- debug:
-    var: r_openshift_cfme_miq_template_content
-
-- name: Ensure CFME Server Template exists
- oc_obj:
- namespace: "{{ openshift_cfme_project }}"
- kind: template
- name: "manageiq"
- state: present
- content: "{{ r_openshift_cfme_miq_template_content }}"
-
-######################################################################
-# Let's do this
-
-- name: Ensure the CFME Server is created
- oc_process:
- namespace: "{{ openshift_cfme_project }}"
- template_name: manageiq
- create: True
- params:
- APPLICATION_IMG_NAME: "{{ openshift_cfme_application_img_name }}"
- POSTGRESQL_IMG_NAME: "{{ openshift_cfme_postgresql_img_name }}"
- MEMCACHED_IMG_NAME: "{{ openshift_cfme_memcached_img_name }}"
- APPLICATION_IMG_TAG: "{{ openshift_cfme_application_img_tag }}"
- POSTGRESQL_IMG_TAG: "{{ openshift_cfme_postgresql_img_tag }}"
- MEMCACHED_IMG_TAG: "{{ openshift_cfme_memcached_img_tag }}"
- register: cfme_new_app_process
- run_once: True
- when:
- # User said to install CFME in their inventory
- - openshift_cfme_install_app | bool
- # # The server app doesn't exist already
- # - not miq_server_check.results.results.0
-
-- debug:
- var: cfme_new_app_process
-
-######################################################################
-# Various cleanup steps
-
-# TODO: Not sure what to do about this right now. Might be able to
-# just delete it? This currently warns about "Unable to find
-# '<TEMP_DIR>' in expected paths."
-- name: Ensure the temporary PV/App templates are erased
- file:
- path: "{{ item }}"
- state: absent
- with_fileglob:
- - "{{ template_dir }}/*.yaml"
-
-- name: Ensure the temporary PV/app template directory is erased
- file:
- path: "{{ template_dir }}"
- state: absent
-
-######################################################################
-
-- name: Status update
- debug:
- msg: >
- CFME has been deployed. Note that there will be a delay before
- it is fully initialized.
diff --git a/roles/openshift_cfme/tasks/nfs.yml b/roles/openshift_cfme/tasks/nfs.yml
deleted file mode 100644
index ca04628a8..000000000
--- a/roles/openshift_cfme/tasks/nfs.yml
+++ /dev/null
@@ -1,51 +0,0 @@
----
-# Tasks to statically provision NFS volumes
-# Include if not using dynamic volume provisioning
-
-- name: Set openshift_cfme_nfs_server fact
- when: openshift_cfme_nfs_server is not defined
- set_fact:
- # Hostname/IP of the NFS server. Currently defaults to first master
- openshift_cfme_nfs_server: "{{ oo_nfs_to_config.0 }}"
-
-- name: Ensure the /exports/ directory exists
- file:
- path: /exports/
- state: directory
- mode: 0755
- owner: root
- group: root
-
-- name: Ensure the miq-pv0X export directories exist
- file:
- path: "/exports/{{ item }}"
- state: directory
- mode: 0775
- owner: root
- group: root
- with_items: "{{ openshift_cfme_pv_exports }}"
-
-- name: Ensure the NFS exports for CFME PVs exist
- copy:
- src: openshift_cfme.exports
- dest: /etc/exports.d/openshift_cfme.exports
- register: nfs_exports_updated
-
-- name: Ensure the NFS export table is refreshed if exports were added
- command: exportfs -ar
- when:
- - nfs_exports_updated.changed
-
-
-######################################################################
-# Create the required CFME PVs. Check out these online docs if you
-# need a refresher on includes looping with items:
-# * http://docs.ansible.com/ansible/playbooks_loops.html#loops-and-includes-in-2-0
-# * http://stackoverflow.com/a/35128533
-#
-# TODO: Handle the case where a PV template is updated in
-# openshift-ansible and the change needs to be landed on the managed
-# cluster.
-
-- include: create_pvs.yml
- with_items: "{{ openshift_cfme_pv_data }}"
diff --git a/roles/openshift_cfme/tasks/tune_masters.yml b/roles/openshift_cfme/tasks/tune_masters.yml
deleted file mode 100644
index 02b0f10bf..000000000
--- a/roles/openshift_cfme/tasks/tune_masters.yml
+++ /dev/null
@@ -1,12 +0,0 @@
----
-- name: Ensure bulk image import limit is tuned
- yedit:
- src: /etc/origin/master/master-config.yaml
- key: 'imagePolicyConfig.maxImagesBulkImportedPerRepository'
- value: "{{ openshift_cfme_maxImagesBulkImportedPerRepository | int() }}"
- state: present
- backup: True
- notify:
- - restart master
-
-- meta: flush_handlers
diff --git a/roles/openshift_cfme/tasks/uninstall.yml b/roles/openshift_cfme/tasks/uninstall.yml
deleted file mode 100644
index 406b59364..000000000
--- a/roles/openshift_cfme/tasks/uninstall.yml
+++ /dev/null
@@ -1,46 +0,0 @@
----
-- include_role:
- name: lib_openshift
-
-- name: Uninstall CFME - ManageIQ
- debug:
-    msg: Uninstalling CloudForms Management Engine - ManageIQ
-
-- name: Ensure the CFME project is removed
- oc_project:
- state: absent
- name: "{{ openshift_cfme_project }}"
-
-- name: Ensure the CFME template is removed
- oc_obj:
- namespace: "{{ openshift_cfme_project }}"
- state: absent
- kind: template
- name: manageiq
-
-- name: Ensure the CFME PVs are removed
- oc_obj:
- state: absent
- all_namespaces: True
- kind: pv
- name: "{{ item }}"
- with_items: "{{ openshift_cfme_pv_exports }}"
- when: not (openshift_cloudprovider_kind is defined and (openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce'))
-
-- name: Ensure the CFME user is removed
- oc_user:
- state: absent
- username: "{{ openshift_cfme_user }}"
-
-- name: Ensure the CFME NFS Exports are removed
- file:
- path: /etc/exports.d/openshift_cfme.exports
- state: absent
- register: nfs_exports_removed
- when: not (openshift_cloudprovider_kind is defined and (openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce'))
-
-- name: Ensure the NFS export table is refreshed if exports were removed
- command: exportfs -ar
- when:
- - nfs_exports_removed.changed
- - not (openshift_cloudprovider_kind is defined and (openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce'))
diff --git a/roles/openshift_cfme/templates/miq-pv-db.yaml.j2 b/roles/openshift_cfme/templates/miq-pv-db.yaml.j2
deleted file mode 100644
index 280f3e97a..000000000
--- a/roles/openshift_cfme/templates/miq-pv-db.yaml.j2
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: miq-pv01
-spec:
- capacity:
- storage: 15Gi
- accessModes:
- - ReadWriteOnce
- nfs:
- path: {{ openshift_cfme_nfs_directory }}/miq-pv01
- server: {{ openshift_cfme_nfs_server }}
- persistentVolumeReclaimPolicy: Retain
diff --git a/roles/openshift_cfme/templates/miq-pv-region.yaml.j2 b/roles/openshift_cfme/templates/miq-pv-region.yaml.j2
deleted file mode 100644
index fe80dffa5..000000000
--- a/roles/openshift_cfme/templates/miq-pv-region.yaml.j2
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: miq-pv02
-spec:
- capacity:
- storage: 5Gi
- accessModes:
- - ReadWriteOnce
- nfs:
- path: {{ openshift_cfme_nfs_directory }}/miq-pv02
- server: {{ openshift_cfme_nfs_server }}
- persistentVolumeReclaimPolicy: Retain
diff --git a/roles/openshift_cfme/templates/miq-pv-server.yaml.j2 b/roles/openshift_cfme/templates/miq-pv-server.yaml.j2
deleted file mode 100644
index f84b67ea9..000000000
--- a/roles/openshift_cfme/templates/miq-pv-server.yaml.j2
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: miq-pv03
-spec:
- capacity:
- storage: 5Gi
- accessModes:
- - ReadWriteOnce
- nfs:
- path: {{ openshift_cfme_nfs_directory }}/miq-pv03
- server: {{ openshift_cfme_nfs_server }}
- persistentVolumeReclaimPolicy: Retain