Diffstat (limited to 'docs')
-rw-r--r--  docs/best_practices_guide.adoc        39
-rw-r--r--  docs/proposals/role_decomposition.md  353
-rw-r--r--  docs/pull_requests.md                  95
-rw-r--r--  docs/repo_structure.md                 67
4 files changed, 517 insertions, 37 deletions
diff --git a/docs/best_practices_guide.adoc b/docs/best_practices_guide.adoc
index 7f3d85d40..e66c5addb 100644
--- a/docs/best_practices_guide.adoc
+++ b/docs/best_practices_guide.adoc
@@ -11,44 +11,9 @@ All new pull requests created against this repository MUST comply with this guid
This guide complies with https://www.ietf.org/rfc/rfc2119.txt[RFC2119].
-== Pull Requests
-
-
-
-[[All-pull-requests-MUST-pass-the-build-bot-before-they-are-merged]]
-[cols="2v,v"]
-|===
-| <<All-pull-requests-MUST-pass-the-build-bot-before-they-are-merged, Rule>>
-| All pull requests MUST pass the build bot *before* they are merged.
-|===
-
-The purpose of this rule is to avoid cases where the build bot will fail pull requests for code modified in a previous pull request.
-
-The tooling is flexible enough that exceptions can be made so that the tool the build bot is running will ignore certain areas or certain checks, but the build bot itself must pass for the pull request to be merged.
-
-
== Python
-=== Python Source Files
-
-'''
-[[Python-source-files-MUST-contain-the-following-vim-mode-line]]
-[cols="2v,v"]
-|===
-| <<Python-source-files-MUST-contain-the-following-vim-mode-line, Rule>>
-| Python source files MUST contain the following vim mode line.
-|===
-
-[source]
-----
-# vim: expandtab:tabstop=4:shiftwidth=4
-----
-
-Since most developers contributing to this repository use vim, this rule helps to promote consistency.
-
-If mode lines for other editors are needed, please open a GitHub issue.
-
=== Method Signatures
'''
@@ -509,12 +474,12 @@ The Ansible `package` module calls the associated package manager for the underl
# tasks.yml
- name: Install etcd (for etcdctl)
yum: name=etcd state=latest
- when: "ansible_pkg_mgr == yum"
+ when: ansible_pkg_mgr == "yum"
register: install_result
- name: Install etcd (for etcdctl)
dnf: name=etcd state=latest
- when: "ansible_pkg_mgr == dnf"
+ when: ansible_pkg_mgr == "dnf"
register: install_result
----
diff --git a/docs/proposals/role_decomposition.md b/docs/proposals/role_decomposition.md
new file mode 100644
index 000000000..b6c1d8c5b
--- /dev/null
+++ b/docs/proposals/role_decomposition.md
@@ -0,0 +1,353 @@
+# Scaffolding for decomposing large roles
+
+## Why?
+
+Currently we have roles that are very large and encompass many different
+components. This requires a lot of logic within the role, can create complex
+conditionals, and increases the learning curve for the role.
+
+## How?
+
+By creating a guide on how to approach breaking up a large role into smaller,
+component-based roles, and by describing how to develop new roles so as to
+avoid creating large ones.
+
+## Proposal
+
+Create a new guide, or append to the current contributing guide, a process for
+identifying large roles that can be split up and for composing smaller roles
+going forward.
+
+### Large roles
+
+A role should be considered for decomposition if it:
+
+1) Configures/installs more than one product.
+1) Can configure multiple variations of the same product that can live
+side by side.
+1) Has different entry points for upgrading and installing a product.
+
+Large roles<sup>1</sup> should be responsible for:
+> <sup>1</sup> or composing playbooks
+
+1) Composing smaller roles to provide a full solution such as an OpenShift master
+1) Ensuring that smaller roles are called in the correct order if necessary
+1) Calling smaller roles with their required variables
+1) Performing prerequisite tasks that small roles may depend on being in place
+(for example, openshift_logging certificate generation)
+
+### Small roles
+
+A small role should be able to:
+
+1) Be deployed independently of other products (this is different from
+requiring installation after other base components such as OCP)
+1) Be self-contained and able to determine the facts that it requires to
+complete
+1) Fail fast when facts it requires are not available or are invalid (see the
+sketch after this list)
+1) "Make it so" based on provided variables and anything that may be required
+as part of doing so (this should include data migrations)
+1) Have a minimal set of dependencies in meta/main.yml, just enough to do its job
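+
+As a minimal sketch of the fail-fast behavior, a small role's tasks could
+begin with an assertion; the variable names below are illustrative, not taken
+from an existing role:
+
+```yaml
+# First task of a hypothetical small role's tasks/main.yaml.
+# The asserted variable names are illustrative only.
+- name: Fail fast when required facts are missing
+  assert:
+    that:
+    - openshift_logging_example_namespace is defined
+    - openshift_logging_example_master_url is defined
+    msg: This role requires a namespace and a master url to be provided
+```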
+
+### Example using decomposition of openshift_logging
+
+The `openshift_logging` role was created as a port from the deployer image for
+the `3.5` deliverable. It was a large role that created the service accounts,
+configmaps, secrets, routes, and deployment configs/daemonset required for each
+of its different components (Fluentd, Kibana, Curator, Elasticsearch).
+
+It was possible to configure any of the components independently of one
+another, up to a point. However, it was an all-or-nothing installation, and
+there was a need from customers to be able to do things like deploy just
+Fluentd.
+
+Also, supporting multiple versions of configuration files would become
+increasingly messy with a large role, especially if the components changed at
+different intervals.
+
+#### Folding of responsibility
+
+There was duplicated work within the installation of three of the four logging
+components, where it was possible to deploy both an 'operations' and a
+'non-operations' cluster side by side. The first step was to collapse that
+duplicated work into a single path and allow a variable to be provided to
+select which of the two deployments is created.
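+
+As an illustration of the single-path approach, a boolean deployment variable
+can select between the two variants (`openshift_logging_es_ops_deployment`
+appears in the snippets below; the fact being set here is illustrative):
+
+```yaml
+# One task path serves both clusters; a boolean picks the 'ops' variant.
+- name: Select the Elasticsearch component for this deployment
+  set_fact:
+    es_component: "{{ 'logging-es-ops' if openshift_logging_es_ops_deployment | default(false) | bool else 'logging-es' }}"
+```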
+
+#### Consolidation of responsibility
+
+The OCP objects required for each component were being created in the same
+task file: all service accounts were created at the same time, as were all
+secrets, configmaps, etc. The only objects that were not generated at the same
+time were the deployment configs and the daemonset. The second step was to
+make the small roles self-contained so that each generates its own required
+objects.
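+
+For example, a self-contained small role might create its own service account
+directly with the `oc_serviceaccount` module from `lib_openshift`, instead of
+relying on a shared task file (the account name and namespace variable are
+illustrative):
+
+```yaml
+# A small role creating the objects it needs itself, rather than having
+# them generated by a shared generate_serviceaccounts.yaml.
+- name: Create Curator service account
+  oc_serviceaccount:
+    state: present
+    name: aggregated-logging-curator
+    namespace: "{{ openshift_logging_curator_namespace }}"
+```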
+
+#### Consideration for prerequisites
+
+Currently the Aggregated Logging stack generates its own certificates, as it
+has some requirements that prevent it from utilizing the OCP cert generation
+service. To make sure that all components are able to trust one another as
+they did previously, until the cert generation service can be used, the
+certificate generation is handled within the top-level `openshift_logging`
+role, which provides the location of the generated certificates to the
+individual roles (see the snippets below).
+
+#### Snippets
+
+[openshift_logging/tasks/install_logging.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/tasks/install_logging.yaml)
+```yaml
+- name: Gather OpenShift Logging Facts
+ openshift_logging_facts:
+ oc_bin: "{{openshift.common.client_binary}}"
+ openshift_logging_namespace: "{{openshift_logging_namespace}}"
+
+- name: Set logging project
+ oc_project:
+ state: present
+ name: "{{ openshift_logging_namespace }}"
+
+- name: Create logging cert directory
+ file:
+ path: "{{ openshift.common.config_base }}/logging"
+ state: directory
+ mode: 0755
+ changed_when: False
+ check_mode: no
+
+- include: generate_certs.yaml
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+
+## Elasticsearch
+- include_role:
+ name: openshift_logging_elasticsearch
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+
+- include_role:
+ name: openshift_logging_elasticsearch
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+ openshift_logging_es_ops_deployment: true
+ when:
+ - openshift_logging_use_ops | bool
+
+
+## Kibana
+- include_role:
+ name: openshift_logging_kibana
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+ openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}"
+ openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}"
+ openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}"
+ openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}"
+ openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}"
+ openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_replica_count }}"
+ openshift_logging_kibana_es_host: "{{ openshift_logging_es_host }}"
+ openshift_logging_kibana_es_port: "{{ openshift_logging_es_port }}"
+ openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
+
+- include_role:
+ name: openshift_logging_kibana
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+ openshift_logging_kibana_ops_deployment: true
+ openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}"
+ openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}"
+ openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}"
+ openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}"
+ openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}"
+ openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
+ openshift_logging_kibana_es_host: "{{ openshift_logging_es_ops_host }}"
+ openshift_logging_kibana_es_port: "{{ openshift_logging_es_ops_port }}"
+ openshift_logging_kibana_nodeselector: "{{ openshift_logging_kibana_ops_nodeselector }}"
+ openshift_logging_kibana_cpu_limit: "{{ openshift_logging_kibana_ops_cpu_limit }}"
+ openshift_logging_kibana_memory_limit: "{{ openshift_logging_kibana_ops_memory_limit }}"
+ openshift_logging_kibana_hostname: "{{ openshift_logging_kibana_ops_hostname }}"
+ openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_ops_replica_count }}"
+ openshift_logging_kibana_proxy_debug: "{{ openshift_logging_kibana_ops_proxy_debug }}"
+ openshift_logging_kibana_proxy_cpu_limit: "{{ openshift_logging_kibana_ops_proxy_cpu_limit }}"
+ openshift_logging_kibana_proxy_memory_limit: "{{ openshift_logging_kibana_ops_proxy_memory_limit }}"
+ openshift_logging_kibana_cert: "{{ openshift_logging_kibana_ops_cert }}"
+ openshift_logging_kibana_key: "{{ openshift_logging_kibana_ops_key }}"
+ openshift_logging_kibana_ca: "{{ openshift_logging_kibana_ops_ca}}"
+ when:
+ - openshift_logging_use_ops | bool
+
+
+## Curator
+- include_role:
+ name: openshift_logging_curator
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+ openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}"
+ openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}"
+ openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}"
+ openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}"
+ openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
+
+- include_role:
+ name: openshift_logging_curator
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+ openshift_logging_curator_ops_deployment: true
+ openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}"
+ openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}"
+ openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}"
+ openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}"
+ openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
+ openshift_logging_curator_cpu_limit: "{{ openshift_logging_curator_ops_cpu_limit }}"
+ openshift_logging_curator_memory_limit: "{{ openshift_logging_curator_ops_memory_limit }}"
+ openshift_logging_curator_nodeselector: "{{ openshift_logging_curator_ops_nodeselector }}"
+ when:
+ - openshift_logging_use_ops | bool
+
+
+## Fluentd
+- include_role:
+ name: openshift_logging_fluentd
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+
+- include: update_master_config.yaml
+```
+
+[openshift_logging_elasticsearch/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging_elasticsearch/meta/main.yaml)
+```yaml
+---
+galaxy_info:
+ author: OpenShift Red Hat
+ description: OpenShift Aggregated Logging Elasticsearch Component
+ company: Red Hat, Inc.
+ license: Apache License, Version 2.0
+ min_ansible_version: 2.2
+ platforms:
+ - name: EL
+ versions:
+ - 7
+ categories:
+ - cloud
+dependencies:
+- role: lib_openshift
+```
+
+[openshift_logging/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/meta/main.yaml)
+```yaml
+---
+galaxy_info:
+ author: OpenShift Red Hat
+ description: OpenShift Aggregated Logging
+ company: Red Hat, Inc.
+ license: Apache License, Version 2.0
+ min_ansible_version: 2.2
+ platforms:
+ - name: EL
+ versions:
+ - 7
+ categories:
+ - cloud
+dependencies:
+- role: lib_openshift
+- role: openshift_facts
+```
+
+[openshift_logging/tasks/install_support.yaml - old](https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_logging/tasks/install_support.yaml)
+```yaml
+---
+# This is the base configuration for installing the other components
+- name: Check for logging project already exists
+ command: >
+ {{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig get project {{openshift_logging_namespace}} --no-headers
+ register: logging_project_result
+ ignore_errors: yes
+ when: not ansible_check_mode
+ changed_when: no
+
+- name: "Create logging project"
+ command: >
+ {{ openshift.common.admin_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig new-project {{openshift_logging_namespace}}
+ when: not ansible_check_mode and "not found" in logging_project_result.stderr
+
+- name: Create logging cert directory
+ file: path={{openshift.common.config_base}}/logging state=directory mode=0755
+ changed_when: False
+ check_mode: no
+
+- include: generate_certs.yaml
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+
+- name: Create temp directory for all our templates
+ file: path={{mktemp.stdout}}/templates state=directory mode=0755
+ changed_when: False
+ check_mode: no
+
+- include: generate_secrets.yaml
+ vars:
+ generated_certs_dir: "{{openshift.common.config_base}}/logging"
+
+- include: generate_configmaps.yaml
+
+- include: generate_services.yaml
+
+- name: Generate kibana-proxy oauth client
+ template: src=oauth-client.j2 dest={{mktemp.stdout}}/templates/oauth-client.yaml
+ vars:
+ secret: "{{oauth_secret}}"
+ when: oauth_secret is defined
+ check_mode: no
+ changed_when: no
+
+- include: generate_clusterroles.yaml
+
+- include: generate_rolebindings.yaml
+
+- include: generate_clusterrolebindings.yaml
+
+- include: generate_serviceaccounts.yaml
+
+- include: generate_routes.yaml
+```
+
+## Limitations
+
+There will always be exceptions to some of these rules; however, the majority
+of roles should be able to fall within these guidelines.
+
+## Additional considerations
+
+### Playbooks including playbooks
+In some circumstances it does not make sense to have a composing role; instead,
+a playbook is best for orchestrating the role flow. Decisions made regarding
+playbooks including playbooks will need to be taken into consideration as part
+of defining this process.
+Ref: (link to rteague's presentation?)
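+
+A rough sketch of a playbook orchestrating role flow by including other
+playbooks (the file names are illustrative):
+
+```yaml
+# Hypothetical top-level playbook composed of other playbooks.
+- include: prerequisites.yml
+- include: deploy_logging_components.yml
+- include: update_master_config.yml
+```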
+
+### Role dependencies
+We want to make sure that our roles do not carry extra or unnecessary
+dependencies in meta/main.yml. No new dependency should be added without:
+
+1. Proposing the inclusion in a team meeting or as part of the PR review and getting agreement
+1. Documenting in meta/main.yml why it is there and when it was agreed to (date), as sketched below
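+
+A minimal sketch of such documentation in meta/main.yml (the rationale and
+date shown are illustrative):
+
+```yaml
+dependencies:
+# Provides the oc_* modules used throughout this role's tasks.
+# Agreed during PR review, 2017-06-01 (illustrative date).
+- role: lib_openshift
+```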
+
+### Avoiding overly verbose roles
+When we are splitting our roles up into smaller components we want to ensure
+we avoid creating roles that are, for lack of a better term, overly verbose.
+What do we mean by that? Taking `openshift_master` as an example, if we were
+to split it up, we would have a component for `etcd`, `docker`, and possibly
+for its rpms/configs. We would want to avoid creating a role that just creates
+certificates, as those make more sense contained with the rpms and configs.
+Likewise, when it comes to being able to restart the master, we wouldn't have
+a role whose sole purpose was that.
+
+The same would apply for the `etcd` and `docker` roles. Anything that is
+required as part of installing `etcd`, such as generating certificates,
+installing rpms, and upgrading data between versions, should be contained
+within the single `etcd` role.
+
+### Enforcing standards
+Certain naming standards, such as variable names, could be verified as part of
+a Travis test (see the sketch below). If we were also going to enforce that a
+role has either tasks or includes (for example), then we could create tests
+for that as well.
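+
+A rough sketch of how such a check could be wired into Travis (the check
+script named below is hypothetical and does not exist yet):
+
+```yaml
+# .travis.yml fragment; check_role_standards.sh is a hypothetical script
+# that would verify variable naming and role layout conventions.
+script:
+- ./test/check_role_standards.sh
+```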
+
+### CI tests for individual roles
+If we are able to correctly split up roles, it should be possible to test role
+installations/upgrades like unit tests (assuming the roles can be installed
+independently of other components), as in the sketch below.
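+
+A hypothetical "unit test" playbook exercising a single small role in
+isolation (the role and variable shown are illustrative):
+
+```yaml
+# Installs one small role on its own, the way a unit test exercises one unit.
+- hosts: localhost
+  roles:
+  - role: openshift_logging_curator
+    openshift_logging_curator_namespace: logging
+```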
diff --git a/docs/pull_requests.md b/docs/pull_requests.md
new file mode 100644
index 000000000..45ae01a9d
--- /dev/null
+++ b/docs/pull_requests.md
@@ -0,0 +1,95 @@
+# Pull Request process
+
+Pull Requests in the `openshift-ansible` project follow a
+[Continuous](https://en.wikipedia.org/wiki/Continuous_integration)
+[Integration](https://martinfowler.com/articles/continuousIntegration.html)
+process that is similar to the process observed in other repositories such as
+[`origin`](https://github.com/openshift/origin).
+
+Whenever a
+[Pull Request is opened](../CONTRIBUTING.md#submitting-contributions), some
+automated test jobs must be successfully run before the PR can be merged.
+
+Some of these jobs are automatically triggered, e.g., Travis, PAPR, and
+Coveralls. Other jobs need to be manually triggered by a member of the
+[Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors).
+
+## Triggering tests
+
+We currently have two different Jenkins infrastructures, and consequently two
+commands that each trigger a different set of test jobs. We are working on
+simplifying the workflow towards a single infrastructure in the future.
+
+- **Test jobs on the older infrastructure**
+
+ Members of the [OpenShift organization](https://github.com/orgs/openshift/people)
+ can trigger the set of test jobs in the older infrastructure by writing a
+ comment with the exact text `aos-ci-test` and nothing else.
+
+ The Jenkins host is not publicly accessible. Test results are posted to S3
+ buckets when complete, and links are available both at the bottom of the Pull
+ Request page and as comments posted by
+ [@openshift-bot](https://github.com/openshift-bot).
+
+- **Test jobs on the newer infrastructure**
+
+ Members of the
+ [Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors)
+ can trigger the set of test jobs in the newer infrastructure by writing a
+ comment containing `[test]` anywhere in the comment body.
+
+ The [Jenkins host](https://ci.openshift.redhat.com/jenkins/job/test_pull_request_openshift_ansible/)
+ is publicly accessible. As with the older infrastructure, the result of each
+ job is posted to the Pull Request as comments and summarized at the
+ bottom of the Pull Request page.
+
+### Fedora tests
+
+There is a set of tests that run on Fedora infrastructure. They are started
+automatically with every pull request.
+
+They are implemented using the [`PAPR` framework](https://github.com/projectatomic/papr).
+
+To re-run tests, write a comment containing only `bot, retest this please`.
+
+## Triggering merge
+
+After a PR is properly reviewed and a set of
+[required jobs](https://github.com/openshift/aos-cd-jobs/blob/master/sjb/test_status_config.yml)
+reported successfully, it can be tagged for merge by a member of the
+[Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors)
+by writing a comment containing `[merge]` anywhere in the comment body.
+
+Tagging a Pull Request for merge puts it in an automated merge queue. The
+[@openshift-bot](https://github.com/openshift-bot) monitors the queue and merges
+PRs that pass all of the required tests.
+
+### Manual merges
+
+The normal process described above should be followed: `aos-ci-test` and
+`[test]` / `[merge]`.
+
+In exceptional cases, such as when known problems with the merge queue prevent
+PRs from being merged, a PR may be manually merged if _all_ of these conditions
+are true:
+
+- [ ] Travis job must have passed (as enforced by GitHub)
+- [ ] Must have passed `aos-ci-test` (as enforced by GitHub)
+- [ ] Must have a positive review (as enforced by GitHub)
+- [ ] Must have failed the `[merge]` queue with a reported flake at least twice
+- [ ] Must have [issues labeled kind/test-flake](https://github.com/openshift/origin/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Ftest-flake) in [Origin](https://github.com/openshift/origin) linked in comments for the failures
+- [ ] Content must not have changed since all of the above conditions have been met (no rebases, no new commits)
+
+This exception is temporary and should be completely removed in the future once
+the merge queue has become more stable.
+
+Only members of the
+[Team OpenShift Ansible Committers](https://github.com/orgs/openshift/teams/team-openshift-ansible-committers)
+can perform manual merges.
+
+## Useful links
+
+- Repository containing Jenkins job definitions: https://github.com/openshift/aos-cd-jobs
+- List of required successful jobs before merge: https://github.com/openshift/aos-cd-jobs/blob/master/sjb/test_status_config.yml
+- Source code of the bot responsible for testing and merging PRs: https://github.com/openshift/test-pull-requests/
+- Trend of the time taken by merge jobs: https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_openshift_ansible/buildTimeTrend
diff --git a/docs/repo_structure.md b/docs/repo_structure.md
new file mode 100644
index 000000000..f598f22c3
--- /dev/null
+++ b/docs/repo_structure.md
@@ -0,0 +1,67 @@
+# Repository structure
+
+## Ansible
+
+```
+.
+├── inventory   Contains dynamic inventory scripts, and examples of
+│               Ansible inventories.
+├── library     Contains Python modules used by the playbooks.
+├── playbooks   Contains Ansible playbooks targeting multiple use cases.
+└── roles       Contains Ansible roles, units of shared behavior among
+                playbooks.
+```
+
+### Ansible plugins
+
+These are plugins used in playbooks and roles:
+
+```
+.
+├── ansible-profile
+├── callback_plugins
+├── filter_plugins
+└── lookup_plugins
+```
+
+## Scripts
+
+```
+.
+├── bin       [DEPRECATED] Contains the `bin/cluster` script, a
+│             wrapper around the Ansible playbooks that ensures proper
+│             configuration, and facilitates installing, updating,
+│             destroying and configuring OpenShift clusters.
+│             Note: this tool is kept in the repository for legacy
+│             reasons and will be removed at some point.
+└── utils     Contains the `atomic-openshift-installer` command, an
+              interactive CLI utility to install OpenShift across a
+              set of hosts.
+```
+
+## Documentation
+
+```
+.
+└── docs Contains documentation for this repository.
+```
+
+## Tests
+
+```
+.
+└── test Contains tests.
+```
+
+## CI
+
+These files are used by [PAPR](https://github.com/projectatomic/papr),
+which is very similar in workflow to Travis, with the test
+environment and test scripts defined in a YAML file.
+
+```
+.
+├── .papr.yml
+├── .papr.sh
+└── .papr.inventory
+```