| Age | Commit message | Author |
|
Add missing v3.9 gluster templates
|
|
Bug 1527178 - installation of logging stack failed: Invalid version s…
|
|
Bug 1532787 - Add empty node selector to openshift-web-console namespace
|
|
Automatic merge from submit-queue.
logging: fix jinja filters to support py3
|
|
Automatic merge from submit-queue.
Update web console template
Update the web console template based on changes in
https://github.com/openshift/origin/pull/17575
/assign @sdodson
@deads2k fyi
|
|
coreydaley/trello_1435_default_tolerations_via_buildconfig_defaulter
Automatic merge from submit-queue.
Ability to specify default tolerations via the buildconfig defaulter
Trello: https://trello.com/c/LNxlMjjU/1435-5-ability-to-specify-default-tolerations-via-the-buildconfig-defaulter-builds
Dependent on:
https://github.com/openshift/origin/pull/17955
|
|
Updating tsb image names
|
|
Automatic merge from submit-queue.
Add the ability to specify a timeout for node drain operations
A timeout for draining pods from nodes can be specified to ensure that the upgrade continues even if nodes fail to drain in the allowed time. The default value of 0 waits indefinitely, allowing the admin to investigate the root cause and ensuring that disruption budgets are respected. In practice the `oc adm drain` command eventually errors out (at least that is what we have seen in our large online clusters); when that happens a second attempt is made to drain the nodes, and if that fails as well the upgrade is aborted for that node or for the entire cluster, depending on your defined `openshift_upgrade_nodes_max_fail_percentage`.
`openshift_upgrade_nodes_drain_timeout=0` is the default and waits until all pods have been drained successfully.
`openshift_upgrade_nodes_drain_timeout=600` waits 600s before moving on to the tasks that forcefully stop pods, such as stopping docker, node, and openvswitch.
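A minimal inventory sketch of the two settings described above, expressed as YAML group vars; the failure threshold value is only an example:
```yaml
# Wait up to 600s for each node to drain before forcefully stopping pods.
openshift_upgrade_nodes_drain_timeout: 600
# Example threshold only: abort the upgrade if more than 10% of nodes fail.
openshift_upgrade_nodes_max_fail_percentage: 10
```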
|
|
Ensure that openshift_facts role is imported whenever we rely on
|
|
Fixes Bug 1532961
|
|
vrutkovs/3.9-upgrades-remove-openshift.common.service_type
3.9 upgrade: remove openshift.common.service_type
|
|
failure_summary: make sure msg is always a string
|
|
Add defaults for openshift_pkg_version
|
|
Fixing openshift_hosted variable.
|
|
Add vsphere provider
|
|
|
|
Add key existence check when collecting facts for rolebindings
|
|
Automatic merge from submit-queue.
Don't hardcode the network interface in the openshift_logging_mux role
The openshift_logging_mux role hardcodes the 'eth0' interface alias
for determining the IP address to use for incoming external client
connections. This will cause the playbook to fail with an undefined
variable error on systems where an 'eth0' interface does not exist.
This patch changes the default IP address for external connections
to use the 'ansible_default_ipv4' fact. It also allows this to be
overridden by a new 'openshift_logging_mux_external_address' variable.
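A minimal sketch of the resulting default, assuming a conventional defaults/main.yml layout:
```yaml
# Default to the address of the default IPv4 interface; operators can
# override openshift_logging_mux_external_address in their inventory.
openshift_logging_mux_external_address: "{{ ansible_default_ipv4.address }}"
```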
|
|
|
|
This variable may or may not be defined by the user.
During deployments, it will be set to '-{{ openshift_version }}'
if undefined.
During upgrades, it will remain undefined.
This commit ensures that if the variable is undefined,
it is set to the empty string ''.
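A minimal sketch of the fallback described in this commit (task wording assumed):
```yaml
# During upgrades openshift_pkg_version may remain undefined, so fall back
# to an empty string instead of leaving the variable unset.
- name: Default openshift_pkg_version to an empty string when undefined
  set_fact:
    openshift_pkg_version: "{{ openshift_pkg_version | default('') }}"
```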
|
|
mgugino-upstream-stage/node-reduce-package-commands
Install node packages in one task instead of 3
|
|
Remove become statements
|
|
Automatic merge from submit-queue.
Limit host group scope on control-plane upgrades
This commit limits common init code to exclude
oo_nodes_to_config during upgrade_control_plane runs.
|
|
Since py3 returns a `dict_keys` view for the dict.keys() call instead of a list,
it should be converted into a list for compatibility
Signed-off-by: Vadim Rutkovsky <vrutkovs@redhat.com>
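An illustrative sketch of the py3-safe pattern; the variable names are hypothetical, not the actual filter code:
```yaml
# Under Python 3, dict.keys() returns a view object rather than a list,
# so pipe it through | list before treating it as one.
- name: Collect component names (hypothetical example)
  set_fact:
    logging_components: "{{ logging_facts.keys() | list }}"
```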
|
|
Automatic merge from submit-queue.
Adding logic to do a full cluster restart if we are incrementing our major versions of ES
This will help with the upgrade from 2.x to 5.x for ES. It also fixes something I came across with the handler on 3.7, where it checks the prior deployed version of the ES pod rather than the new one.
|
|
Automatic merge from submit-queue.
Add iptables rules for flannel
[WIP] When using flannel there are iptables rules that need
to be added as stated here:
https://access.redhat.com/documentation/en-us/reference_architectures/2017/html-single/deploying_red_hat_openshift_container_platform_3.4_on_red_hat_openstack_platform_10/#run_ansible_installer
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1493955
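A heavily hedged sketch of one such rule using Ansible's iptables module; the exact chains and rules required for flannel are listed in the referenced documentation:
```yaml
# Assumed rule for illustration: accept inter-container traffic on the
# DOCKER chain so flannel-routed traffic is not dropped.
- name: Accept traffic on the DOCKER chain (assumed rule)
  iptables:
    chain: DOCKER
    jump: ACCEPT
```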
|
|
Automatic merge from submit-queue.
ensure containerized bools are cast
|
|
Trello: https://trello.com/c/LNxlMjjU/1435-5-ability-to-specify-default-tolerations-via-the-buildconfig-defaulter-builds
|
|
After removing become:no statements on local_action tasks,
we need to ensure that the proper file permissions are
applied to local temp directories.
The reason for this is that the 'fetch' module
does not use 'become' for the localhost, just the remote
host.
Additionally, users may not wish for the localhost to
use become during a fetch. local_action will execute with
whatever permissions are specified in inventory or via
cli.
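A minimal sketch of the kind of fix described, with an assumed variable name for the local temp directory:
```yaml
# 'fetch' does not escalate privileges on the control host, so make sure the
# local directory it writes into is usable by the calling user.
- name: Ensure local temp directory permissions
  file:
    path: "{{ local_tmp_dir }}"   # hypothetical variable name
    state: directory
    mode: "0750"
  delegate_to: localhost
  become: false
```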
|
|
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1532787
|
|
Automatic merge from submit-queue.
container-engine: move registry_auth.yml before pull
so that the atomic pull takes into account the credentials if
required.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
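A sketch of the intended task ordering; registry_auth.yml comes from the commit message, while the pull task file name is assumed:
```yaml
# Set up registry credentials before the atomic pull so authenticated
# registries work.
- include_tasks: registry_auth.yml
- include_tasks: pull.yml   # assumed file name for the pull tasks
```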
|
|
This commit removes become:no statements that break
the installer in various ways.
|
|
Provide an example of how to use osm_etcd_image
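A minimal inventory-style example (the registry path is illustrative):
```yaml
# Use a custom etcd image location.
osm_etcd_image: "registry.example.com/rhel7/etcd"
```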
|
|
Invalid version specified for Elasticsearch
openshift_logging_{curator,elasticsearch,fluentd,kibana,mux}/vars/main.yml:
- adding "3_9" to __allowed_.*_versions
- bumping __latest_.*_version to "3_9"
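A sketch of the vars/main.yml change for one of the roles; the pre-existing version list is assumed:
```yaml
# openshift_logging_kibana/vars/main.yml (illustrative)
__latest_kibana_version: "3_9"
__allowed_kibana_versions: ["3_5", "3_6", "3_7", "3_8", "3_9"]
```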
|
|
This commit limits common init code to exclude
oo_nodes_to_config during upgrade_control_plane runs.
|
|
This commit changes how we handle the openshift_version role.
Most of the version initialization code is only run
on the first master now. All other hosts have values
set from the master.
Afterwards, we run some basic RPM queries to ensure
that the correct version is available on the other nodes.
Containerized hosts need to do their own image checks elsewhere.
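A minimal sketch of propagating the value from the first master to the remaining hosts (fact layout assumed):
```yaml
- name: Set openshift_version from the first master
  set_fact:
    openshift_version: "{{ hostvars[groups.oo_first_master.0].openshift_version }}"
  when: inventory_hostname != groups.oo_first_master.0
```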
|
|
upgrades: set openshift_client_binary fact when running on oo_first_master host
|
|
vrutkovs/containerized_upgrade_set_openshift_use_openshift_sdn
Automatic merge from submit-queue.
upgrades: use openshift_node_use_openshift_sdn when trying to pre-pull the image
This affects 3.8/3.9 upgrades for containerized hosts, if nodes are separate from master.
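A sketch of the guard, with an assumed image variable; the pre-pulled image is assumed to be the openvswitch image:
```yaml
# Skip the pre-pull when the OpenShift SDN is disabled on the node.
- name: Pre-pull openvswitch image
  command: "docker pull {{ osn_ovs_image }}"   # hypothetical variable
  when: openshift_node_use_openshift_sdn | default(true) | bool
```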
|
|
|
|
openshift_client_binary
|
|
|
|
Update the web console template based on changes in
https://github.com/openshift/origin/pull/17575
|
|
docker storage setup for ami building
|
|
Signed-off-by: Vadim Rutkovsky <vrutkovs@redhat.com>
|
|
|
|
Fix: change import_role to include_role
|
|
Build containerized host group dynamically
|
|
Properly cast crio boolean variables to bool
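A sketch of the cast pattern; the task is illustrative and the variable name stands in for one of the crio-related booleans:
```yaml
- name: Run a CRI-O specific step only when CRI-O is enabled
  debug:
    msg: "CRI-O is enabled"
  when: openshift_use_crio | default(false) | bool
```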
|
|
It appears that when one role dynamically includes
another, usage of import_role inside the dynamically
included role is not possible.
If something is included with include_role (dynamic),
all tasks therein must also use include_role (dynamic).
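A minimal sketch of the resulting pattern inside a dynamically included role:
```yaml
# Because the parent role was pulled in via include_role (dynamic), nested
# role usage must also be dynamic.
- include_role:
    name: openshift_facts
```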
|
|
add host to g_new_node_hosts so that plays run against the AMI instance
update example vars so that overlay2 is used by default for docker storage
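A sketch of the first change, with a hypothetical variable holding the launched instance's address:
```yaml
# Register the freshly launched AMI build instance so that subsequent node
# plays target it.
- name: Add the AMI build instance to g_new_node_hosts
  add_host:
    name: "{{ ami_build_instance_ip }}"   # hypothetical variable
    groups: g_new_node_hosts
```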
|