| Commit message | Author | Age | Files | Lines |
|
|
|
| |
certificates.
|
|
|
|
|
|
| |
QE found that for fresh installs we were basing the docker version facts on the
images that could be pulled prior to configuring /etc/sysconfig/docker. This
is an edge case, but something we need to fix.
|
|
|
|
| |
containerized env
|
|
|
|
|
|
|
|
| |
Previously we were trying to use the running container to get the current
version. There are cases in which the Master or Node may not be running during
upgrade. It's actually safer to just run the container to fetch the version
that would be launched if the container were running. Then we pull the image to
see what the latest image contains.
|
|
|
|
| |
containerized systemd units
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Move openshift_router to openshift_hosted role which will eventually
contain registry, metrics and logging.
* Adds option for specifying an openshift_hosted_router_certificate
cert and key pair.
* Removes dependency on node label variables and retrieves the node
list from the API such that this role can be applied to any cluster with
existing nodes. I've added an openshift_hosted playbook that occurs
after node install to account for this.
* Infrastructure nodes are selected using
openshift_hosted_router_selector which is based on deployment type
by default; openshift-enterprise -> "region=infra" and online ->
"type=infra".
|
|\
| |
| | |
Fixing bugs 1322788 and 1323123
|
| |
| |
| |
| | |
and atomic-openshift-master-controllers
|
|\ \
| | |
| | | |
Pacemaker is unsupported for 3.2
|
| |/ |
|
|\ \
| |/
|/| |
We require docker 1.9 for the 3.2 upgrade
|
| | |
|
| |
| |
| |
| |
| |
| | |
- gather facts requiring docker only if docker is present and running
- Update reference to etcd role in playbooks/common/openshift-etcd/config.yml
to use openshift_etcd
|
| |
| |
| |
| | |
and atomic-openshift-master-controllers
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Currently there's no good way to install from a registry that requires
authentication. This applies both to RPM and containerized installs:
https://bugzilla.redhat.com/show_bug.cgi?id=1316341
The workaround is to 'docker login' as root and then have ansible pull the
images to the image cache.
|
| | |
|
| | |
|
|\ \
| | |
| | | |
Bug 1317755 - Set insecure-registry for internal registry by default
|
| | | |
|
| | | |
|
|/ / |
|
| |
| |
| |
| |
| |
| |
| | |
- Prevents roles that need common facts from needing to require
openshift_common, which pulls in the openshift binary.
- Add dependency on openshift_facts to os_firewall, since it uses
openshift.common facts
|
| | |
|
| | |
|
| |
| |
| |
| |
| | |
If the master or node aren't running we can't determine the correct version
that is currently installed.
|
| |
| |
| |
| |
| |
| |
| |
| | |
defined.
We already have a check in pre.yml to make sure openshift_image_tag is set to a
range that is allowed. This is an advanced setting and should be used to
override whatever is returned by the 'latest' image in a given registry.
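A hedged inventory sketch of using openshift_image_tag as that override; the tag value is purely illustrative:

    [OSEv3:vars]
    # Pin the containerized image tag instead of taking the registry's 'latest'.
    openshift_image_tag=v3.2.0.20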
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- refactors the docker role to push generic config into docker role and wrap
openshift-specific variables into an openshift_docker role and its
dependent openshift_docker_facts role
- adds support for setting --confirm-def-push flag (Resolves
https://github.com/openshift/openshift-ansible/issues/1014)
- moves docker related facts from common/node roles to a new docker role
- renames cli_docker_* role variables to openshift_docker_* (maintaining
backward compatibility)
- update role dependencies to pull in openshift_docker conditionally based on
is_containerized
- remove playbooks/common/openshift-docker since the docker role is now
conditionally included
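A brief inventory sketch under the renamed openshift_docker_* variables; the registry addresses and options are placeholders, and per the message the older cli_docker_* names remain accepted for backward compatibility:

    [OSEv3:vars]
    # Illustrative values only.
    openshift_docker_additional_registries=registry.example.com
    openshift_docker_insecure_registries=172.30.0.0/16
    openshift_docker_options="--log-driver=json-file"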
|
| | |
|
|/
|
|
|
|
| |
Previously I was grepping for 'ose' in the systemd units. That was only working
on my machine because my Nodes were also Masters. It's safer to grep for
openshift3 since that would be present for Masters or Nodes.
|
|\
| |
| | |
Reverting to pre-pulling the master image
|
| |\
| | |
| | |
| | | |
https://github.com/abutcher/openshift-ansible
|
| | | |
|
|/ /
| |
| |
| |
| |
| | |
connections
Bug 1315563 - Upgrade failed to containerized install OSE 3.1 on RHEL
|
|\ \
| | |
| | | |
Bug 1315637 - The docker wasn't upgraded on node during upgrade
|
| |/ |
|
|/ |
|
|\
| |
| | |
BZ1315151: Support openshift_image_tag
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This is the containerized openshift_pkg_version equivalent. Originally I was
hoping to reuse openshift_pkg_version for containerized installs but the fact
that it's tightly coupled to yum made that pretty ugly.
However, I did opt to rely on the previously existing 'openshift_version'
variable. Containerized and RPM installs can both use that variable and it
will be set appropriately if either openshift_pkg_version or
openshift_image_tag are set. I suspect someday containerized installs will be
the only option and I didn't want to have things like openshift_pkg_version and
openshift_image_tag in the playbooks any more than necessary.
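A sketch of how the two variables sit side by side in an inventory; the version strings are illustrative, and per the message openshift_version is derived from whichever one is set:

    [OSEv3:vars]
    # Set one or the other depending on install type.
    # RPM installs: pin the package version (note the leading dash).
    openshift_pkg_version=-3.2.0.20
    # Containerized installs: pin the image tag instead.
    openshift_image_tag=v3.2.0.20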
|
|/ |
|
|
|
|
| |
object' has no attribute 'stdout'"
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
configure debug_level for master and node from cli
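A one-line inventory sketch, assuming the conventional debug_level variable this commit refers to; the value is illustrative:

    [OSEv3:vars]
    # Log level applied to both master and node services.
    debug_level=2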
|