Commit message
Move the repeating pre_tasks to the pre-install
(OpenShift Pre-Requisites) step.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
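A minimal sketch of what such a shared pre-install play could look like; the hosts pattern and the check are assumptions, not quoted from the repo:

```yaml
# Illustrative only: one shared play replacing pre_tasks that were
# repeated across several playbooks.
- name: OpenShift Pre-Requisites
  hosts: OSEv3
  gather_facts: false
  tasks:
    - name: Fail early when a required variable is missing (assumed check)
      assert:
        that: openshift_deployment_type is defined
```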
* node labels: add checks for custom labels
- README: add more info about customising labels
- pre_tasks: add checks for label values, set to an empty dict if undefined
- group_vars: move the labels customisation from OSEv3 to all
* pre_tasks: tried a new approach to updating variables
* pre_tasks: variable update fixed
* pre_tasks: roll back the upscaling changes (to be added in the upscaling PR)
* pre_tasks: blank line removed
* pre_tasks: add a check for an undefined variable (should not happen, though)
* pre_tasks: make sure regions are defined
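A hedged sketch of such a pre_task; the variable name and the region rule are assumptions:

```yaml
- name: Default the node labels to an empty dict when undefined
  set_fact:
    openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | default({}) }}"

- name: Be sure the app nodes carry a region label (assumed rule)
  assert:
    that: openshift_cluster_node_labels.app.region is defined
  when: openshift_cluster_node_labels.app is defined
```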
* Add documentation regarding running custom post-provision tasks
* Moved the post-provision doc to the openstack README
* Added a reference to OSEv3, clarified some text
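A hypothetical example of such a custom post-provision task file (file name, group and task all invented for illustration):

```yaml
# custom-post-provision.yml (hypothetical)
- hosts: cluster_hosts
  become: true
  tasks:
    - name: Run extra configuration on all provisioned nodes
      package:
        name: vim
        state: present
```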
[WIP] Add docs and defaults for multi-master setup
Additionally, add the lb group, containing the lb nodes, to the
static inventory template. Include the lb group in the
OSEv3 group, in order to apply the cluster group vars to it.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
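In a static inventory, the change boils down to something like this (hostnames illustrative):

```ini
[lb]
lb-0.openshift.example.com

[OSEv3:children]
masters
nodes
etcd
lb
```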
This allows our users to keep the ansible.cfg file in the inventory,
as well as put e.g. LDAP certificates there.
Fixes #481
* Update openshift_release in the sample inventory
This removes the version pin for OpenShift Origin, because only
the latest release is actually available. So if a new Origin
release comes out, the installation will fail.
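In other words, the Origin version is now left unpinned; a pin (version illustrative) would break as soon as a newer release replaces it:

```yaml
# Only set openshift_release where older releases remain available;
# for Origin, leave it unset.
#openshift_release: "1.5"
```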
* README, all.yml, stack_params.yaml, openstack-stack: added docker volume size customisation
- app_volume_size changed to node_volume_size (it is "node" everywhere else)
* all.yml, stack_params.yaml, openstack-stack: added customisation for lb, etcd, dns
* README: updated
* README: updated info about ephemeral volumes
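The variable names below follow the commit's node_volume_size rename but are otherwise assumed; the sizes are in GB and purely illustrative:

```yaml
# group_vars/all.yml (sketch): per-role Cinder volume sizes, in GB
master_volume_size: 10
node_volume_size: 15
lb_volume_size: 5
etcd_volume_size: 2
dns_volume_size: 1
```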
* README, all.yml, stack_params.yml, heat_stack.yaml.j2: hostname customisation added
* hostnames customisation: default set in stack_params
* heat_stack: bug fix
* fixed commented defaults in group_vars/all.yml
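A sketch of the resulting knobs in group_vars/all.yml; the variable names are assumed, with the defaults living in stack_params per the commit:

```yaml
# Commented out: the stack_params defaults apply unless overridden
#master_hostname: master
#app_hostname: app-node
#lb_hostname: lb
```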
When using a bastion and a single master, use the lb-secgrp
to allow access to the UI port from the bastion node's ingress CIDR.
For HA (masters > 1), the UI should still be accessed via
the LB node's ingress CIDR, omitting the bastion.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
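A hedged Heat fragment for the single-master case; 8443 is the default OpenShift web UI port, while the parameter name and resource layout here are assumptions:

```yaml
lb-secgrp:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      # Allow the UI port only from the bastion's ingress CIDR
      - protocol: tcp
        port_range_min: 8443
        port_range_max: 8443
        remote_ip_prefix: { get_param: bastion_ingress_cidr }
```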
and documented (#638)
* all.yml: set up new variables for specifying images for roles
* stack_params.yaml: add image name variables for different roles
* more roles added
* heat_stack.yaml.j2: openstack_image changed to updated image names
* README: updated documentation for specifying image names
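The shape of the new variables, with assumed names and an illustrative image:

```yaml
# group_vars/all.yml (sketch): one default image plus per-role overrides
openstack_default_image_name: centos7-base
#openstack_master_image_name: ...
#openstack_infra_image_name: ...
#openstack_lb_image_name: ...
```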
Add openstack_private_network_name to filter by the wanted private
network.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
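Usage is a single inventory variable (value illustrative):

```yaml
openstack_private_network_name: openshift-private-net
```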
For testing cases it's sometimes useful to not create Cinder volumes for
the VMs. It can also sometimes be a little faster and more robust (but
unfit for production).
This adds an option called `ephemeral_volumes` that will use the VM's
storage instead of creating volumes when set to true.
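Usage, with the option name taken from the commit text:

```yaml
# group_vars/all.yml: use the VMs' own storage instead of Cinder volumes
ephemeral_volumes: true
```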
With the move to the static inventory, we don't need it anymore, so it's
now just an unnecessary step in the deployment.
Note that users may still want to use clouds.yaml for openstack
credentials instead of sourcing the `OS_*` environment variables, but
they can do that at their own discretion.
The reason we had the clouds.yaml here was that the `openstack.py`
dynamic inventory used the servers' UUIDs as ansible hosts by default,
and the options we put in caused it to use the hostnames (as desired).
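For reference, the kind of clouds.yaml stanza that used to be needed so openstack.py would key hosts by name rather than UUID; the options reflect the os-client-config `ansible` section as recalled, not as quoted from this repo:

```yaml
# clouds.yaml (no longer required by the provisioning)
ansible:
  use_hostnames: true     # inventory hosts named after servers, not UUIDs
  expand_hostvars: true
```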
Add wildcard record for Private DNS
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* At the provisioning stage, allow users to auto-generate SSH config,
when using a static inventory.
* Run the provision and post-provision playbooks separately when
using a bastion. This re-applies the SSH config, which ansible can't
do on the fly.
* Support a pre-installed bastion node, colocated with the 1st infra
node.
* With a bastion enabled, reduce the floating IP footprint to the infra
and dns nodes only, effectively isolating the cluster in a private
network.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
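The generated SSH config presumably boils down to proxying through the bastion; a hypothetical fragment (host pattern, user, and options all invented):

```
Host *.openshift.example.com
    ProxyCommand ssh -W %h:%p -q cloud-user@bastion.openshift.example.com
    StrictHostKeyChecking no
```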
* README in provisioning: note about infra-ansible not updating versions if one exists
* README in provisioning: minor change
* README: improved readability
* At the provisioning stage, allow users to auto-generate a static
inventory without manual steps. The alternative of
going fully dynamic is TBD.
* Move the openshift pre-install playbook into the post-provision
playbook, where the second part of the pre-install tasks is already placed.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Autogenerate inventory/hosts when 'inventory: static' (the default),
with the shade-inventory tool.
* Drop what is no longer used: openstack.py and the associated GPL
notes, and the example static inventory; omit the manual updates of
the inventory DNS names from the deployment guide.
* Switch the openstack.py-formatted inventory hostvars
to the shade-inventory format (omit openstack.* from hostvars).
* Populate node labels from inventory vars instead of the heat
templates combined with inventory vars.
* Add an app (k8s minions) nodes group for the primary node labels.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
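A sketch of the inventory-vars shape this implies, with the app group carrying the primary node labels; the variable name and values are assumed:

```yaml
openshift_cluster_node_labels:
  app:
    region: primary
  infra:
    region: infra
```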
Added prerequisite for python-openstackclient installation
dependencies
python-openstackclient installation
Because openshift-ansible requires root on the cluster nodes, but it
doesn't explicitly set it in the playbooks (like we do), let's set it
in our inventory instead of requiring users to pass `--become` to
`ansible-playbook`.
That will simplify the installation steps as well as let us include
the provisioning and openshift-ansible playbooks in a single playbook.
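In inventory terms this is a single variable on the cluster group; the exact file placement is assumed:

```yaml
# inventory/group_vars/OSEv3.yml (placement assumed)
ansible_become: true
```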
* Set up NetworkManager automatically
This removes the extra step of running the
`openshift-ansible/playbooks/byo/openshift-node/network_manager.yml`
before installing openshift. In addition, the playbook relies on a
host group that the provisioning doesn't provide (oo_all_hosts).
Instead, we set up NetworkManager on CentOS nodes automatically. And
we restart it on RHEL (which is necessary for the nodes to pick up the
new DNS we configured the subnet with).
This makes the provisioning easier and more resilient.
* Apply the node-network-manager role to every node
It makes the code simpler and more consistent across distros.
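Applying the role to every node then looks roughly like this; the role name comes from the commit, the play details are assumed:

```yaml
- name: Set up NetworkManager on all nodes
  hosts: nodes
  become: true
  roles:
    - node-network-manager
```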
* Switch the sample inventory to CentOS
This changes the image name and deployment types to use centos instead
of rhel and sets `rhsm_register` to false.
With these changes, the inventory should be immediately deployable
using the default values (assuming the image, network and flavor names
match).
Ideally, the upstream CI will just end up using this inventory with
little to no changes at some point, too.
* Specify the origin openshift_release
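The gist of the sample inventory after the switch (image name and values illustrative):

```yaml
openstack_default_image_name: CentOS-7-x86_64-GenericCloud
openshift_deployment_type: origin
rhsm_register: false
```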
* Add defaults values for some openstack vars
Ansible shows errors when the `rhsm_register` and
`openstack_flat_secgrp` values are not present in the inventory even
though they have sensible default values.
This makes them both default to false when they're not specified.
* Comment out the flat security group option in the inventory
It's no longer required to be there, so let's comment it out.
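The defaulting pattern, sketched at a point of consumption:

```yaml
# Tolerate the flag being absent from the inventory entirely
- name: Example task guarded by an optional flag
  debug:
    msg: "rhsm_register is enabled"
  when: rhsm_register | default(false)
```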
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
provisioning (#518)
* prerequisites.yml: check prerequisites on localhost needed for provisioning
provision.yml: includes prerequisites.yml
* prerequisites: indentation fixed
* prerequisites.yml: used the ansible_version variable and the OpenStack modules for Ansible
* prerequisites.yml: os_keypair is not suitable for this purpose
* prerequisites.yml: openstack keypair command exchanged for shade
- there is no Ansible module for this now
- os_keypair is not suitable for this purpose
- python-openstackclient dependency is not desirable
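A hedged reconstruction of such a localhost check; the minimum version and the shade probe are assumptions:

```yaml
# prerequisites.yml (sketch): sanity checks run before provisioning
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Check the Ansible version
      assert:
        that: ansible_version.full | version_compare('2.3', '>=')

    - name: Check that the shade library is importable
      command: python -c 'import shade'
      changed_when: false
```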
Manage packages to install/update for openstack provider
Allow the required-packages install and the "yum update all" step to be
optionally disabled.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
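The flag names below are hypothetical; they only illustrate the two independent switches the commit describes:

```yaml
# Hypothetical flag names for the two independent switches
install_required_packages: true   # install the required packages
update_all_packages: false        # skip the "yum update all" step
```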
* Firstly, provision a Heat stack with given public resolvers.
* After the DNS node is configured as an authoritative server,
switch the Heat stack's Neutron subnet to that resolver
(private_dns_server) so that it becomes the first entry pushed
into the hosts' /etc/resolv.conf. It will be serving the cluster
domain requests for OpenShift nodes and workloads.
* Drop the post-provision /etc/resolv.conf nameserver hacks as not
needed anymore.
* Fix the dns floating IPs output and add the private IPs output as well.
* Update docs, clarify localhost vs servers requirements, add the
required NetworkManager setup step.
* Use post-provision task names instead of comments.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
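The subnet switch could be expressed with Ansible's os_subnet module; the resource names and CIDR here are illustrative, while private_dns_server comes from the commit:

```yaml
- name: Point the cluster subnet at the in-stack DNS server
  os_subnet:
    name: openshift-subnet
    network_name: openshift-net
    cidr: 192.168.0.0/24
    dns_nameservers:
      - "{{ private_dns_server }}"
```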
The `wait_for_connection` module is more reliable as it uses Ansible's `ping`
to verify the nodes are really accessible. Using `wait_for` and checking that
port 22 is open runs into the possibility of SSH being up but the public keys
or users not being set up yet (as that's done with cloud-init).
In addition, we were gathering facts before running the wait_for task which
rendered it useless.
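Side by side, the difference looks like this (timeout illustrative); note the check has to run before fact gathering to be of any use:

```yaml
# Before: only proves port 22 is open, not that SSH auth works yet
- wait_for:
    host: "{{ ansible_host }}"
    port: 22
  delegate_to: localhost

# After: a real Ansible round-trip, so cloud-init must have finished
- wait_for_connection:
    timeout: 300
```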
Add node_removal_policies variable to openstack provisioning to allow for scaling down
all.yml: removed whitespace in front of variables
OSEv3.yml: added an option to ignore the set hardware limits for RAM and disk