path: root/playbooks/libvirt
Commit message (Author, Date, Files, Lines)
* infra_node fixes (Jason DeTiberus, 2016-02-01, 1 file, -0/+2)
      - openshift_master role update
        - infra_nodes was previously being set to num_infra, which is an
          integer value when using the cloud providers; added a new variable
          osm_infra_nodes that is expected to be a list of hosts
        - if openshift_infra_nodes is not already set, create it from the
          nodes that have the region=infra label (see the sketch below)
      - Cloud provider config playbook updates
        - override openshift_router_selector for cloud providers to avoid
          using the default of 'region=infra' when deployment_type is not
          'online'
        - set openshift_infra_nodes to g_infra_hosts for cloud providers
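      A minimal sketch of the label-based fallback described above; the
      filter chain and the 'nodes' group name are assumptions, not the
      role's actual code:

      ```
      # Sketch only: default openshift_infra_nodes to the nodes carrying the
      # region=infra label when the caller has not set it explicitly.
      - name: Default openshift_infra_nodes from node labels
        set_fact:
          openshift_infra_nodes: >-
            {{ groups['nodes'] | default([])
               | map('extract', hostvars)
               | selectattr('openshift_node_labels.region', 'defined')
               | selectattr('openshift_node_labels.region', 'equalto', 'infra')
               | map(attribute='inventory_hostname') | list }}
        when: openshift_infra_nodes is not defined
      ```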
* Update cluster_hosts.yml for cloud providers (Jason DeTiberus, 2016-01-19, 1 file, -16/+11)
      - Add g_infra_hosts (nodes with sub-type infra)
      - Add g_compute_hosts (nodes with sub-type compute)
      - Reduce duplication by re-using previously defined variables
* Merge pull request #1128 from lhuard1A/bin_cluster_ose_3.1 (Thomas Wiest, 2016-01-11, 1 file, -15/+16)
|\
      Make bin/cluster able to spawn an OSE 3.1 cluster
| * Make bin/cluster able to spawn OSE 3.1 clusters (Lénaïc Huard, 2016-01-08, 1 file, -15/+16)
| |
* | Fix VM drive cleanup during terminate on libvirt (Lénaïc Huard, 2016-01-07, 1 file, -1/+5)
|/
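      Drive cleanup on terminate, as in the commit above, amounts to deleting
      each VM's volumes from the storage pool once the domain is destroyed; a
      hedged sketch, with the pool variable and volume naming purely
      illustrative:

      ```
      # Sketch only: delete each terminated VM's disk volume from the pool
      # after the domain has been destroyed and undefined. Names are
      # illustrative, not the playbook's actual layout.
      - name: Delete the VM drives
        command: "virsh -c {{ libvirt_uri }} vol-delete --pool {{ libvirt_storage_pool }} {{ item }}.qcow2"
        with_items: "{{ groups['oo_hosts_to_terminate'] }}"
      ```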
* Fix error in byo cluster_hosts.yml (Jason DeTiberus, 2016-01-04, 1 file, -1/+1)
|
* Cleanup and fixes for cluster_id change (Jason DeTiberus, 2016-01-04, 4 files, -39/+23)
      - Move debug_level into vars.yml and byo inventory
      - Change variables in cluster_hosts.yml to be g_* and update playbooks
        to use those values directly instead of setting them indirectly
        (see the sketch below)
      - Added a new g_all_hosts entry in cluster_hosts to use in the update
        playbook instead of unioning all host types within the playbook
      - Added a cluster_hosts.yml for the byo playbook
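      A hedged sketch of the resulting cluster_hosts.yml shape; the g_* names
      come from the commit messages above, while the tag-based group names
      and filter expressions are assumptions:

      ```
      # Sketch only: per-type host lists plus a precomputed union, so
      # playbooks consume g_* variables directly instead of rebuilding
      # these sets themselves.
      g_etcd_hosts: "{{ groups['tag_host-type-etcd'] | default([]) }}"
      g_master_hosts: "{{ groups['tag_host-type-master'] | default([]) }}"
      g_node_hosts: "{{ groups['tag_host-type-node'] | default([]) }}"
      g_infra_hosts: "{{ g_node_hosts | intersect(groups['tag_sub-host-type-infra'] | default([])) }}"
      g_compute_hosts: "{{ g_node_hosts | intersect(groups['tag_sub-host-type-compute'] | default([])) }}"
      g_all_hosts: "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) }}"
      ```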
* Removing environment and env tags. (Kenny Woodson, 2016-01-04, 7 files, -16/+38)
|
* Create nfs host group with registry volume attachment. (Andrew Butcher, 2015-12-15, 1 file, -0/+1)
|
* Merge pull request #1028 from kwoodson/remove_env_host_type (Kenny Woodson, 2015-12-14, 5 files, -11/+14)
|\
      Removing env-host-type in preparation for env and environment changes.
| * Updating env-host-type to host patterns (Kenny Woodson, 2015-12-11, 5 files, -11/+14)
| |
* | Merge pull request #954 from damaestro/update_latest_cloud_image (Thomas Wiest, 2015-12-10, 3 files, -3/+13)
|\ \
      Update for latest CentOS-7-x86_64-GenericCloud.
| * | Use join for the uncompress command. (Jonathan Steffan, 2015-11-22, 1 file, -1/+1)
| | |
| * | Update for latest CentOS-7-x86_64-GenericCloud. (Jonathan Steffan, 2015-11-22, 3 files, -3/+13)
| |/
      - Use xz compressed image
      - Update sha256 for new image
      - Update docs to reflect new settings
* / Enforce connection: local and become: no on all localhost plays (Jason DeTiberus, 2015-11-30, 6 files, -0/+16)
|/
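      The play header this commit enforces, as a minimal sketch (play name
      and task list are illustrative):

      ```
      # Sketch only: a play targeting localhost never attempts an SSH
      # connection or privilege escalation on the control machine.
      - name: Evaluate host groups
        hosts: localhost
        connection: local
        become: no
        gather_facts: no
        tasks: []
      ```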
* Better structure the output of the list playbook (Lénaïc Huard, 2015-11-13, 1 file, -1/+7)
      The list playbook listed the IPs of the VMs without logging their role,
      like:

          TASK: [debug ] ************************************************************
          ok: [10.64.109.37] => {
              "msg": "public:10.64.109.37 private:192.168.165.5"
          }
          ok: [10.64.109.47] => {
              "msg": "public:10.64.109.47 private:192.168.165.6"
          }
          ok: [10.64.109.36] => {
              "msg": "public:10.64.109.36 private:192.168.165.4"
          }
          ok: [10.64.109.215] => {
              "msg": "public:10.64.109.215 private:192.168.165.2"
          }

      The list playbook now prints the information in a more structured way,
      with a list of masters, a list of nodes and the subtype of the nodes,
      like:

          TASK: [debug ] ************************************************************
          ok: [localhost] => {
              "msg": {
                  "lenaicnewlist": {
                      "master": [
                          { "name": "10.64.109.215", "private IP": "192.168.165.2", "public IP": "10.64.109.215", "subtype": "default" }
                      ],
                      "node": [
                          { "name": "10.64.109.47", "private IP": "192.168.165.6", "public IP": "10.64.109.47", "subtype": "compute" },
                          { "name": "10.64.109.37", "private IP": "192.168.165.5", "public IP": "10.64.109.37", "subtype": "compute" },
                          { "name": "10.64.109.36", "private IP": "192.168.165.4", "public IP": "10.64.109.36", "subtype": "infra" }
                      ]
                  }
              }
          }
* Add the sub-host-type tag to the libvirt VMs (Lénaïc Huard, 2015-11-13, 2 files, -1/+2)
|
* Fix lb group related errors (Jason DeTiberus, 2015-11-05, 1 file, -0/+1)
|
* Refactor common group evaluation to avoid duplication (Jason DeTiberus, 2015-11-04, 1 file, -4/+4)
|
* Disable requiretty for only the openshift user (error10, 2015-11-01, 1 file, -1/+6)
      Use write_files to disable requiretty for the openshift user, as
      suggested by @detiberm. Fixes #773.
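      A minimal cloud-config sketch of the write_files approach; the drop-in
      path is illustrative:

      ```
      #cloud-config
      # Sketch only: scope the requiretty exemption to the openshift user via
      # a sudoers drop-in instead of disabling it globally. The file name is
      # illustrative.
      write_files:
        - path: /etc/sudoers.d/99-openshift-no-requiretty
          permissions: '0440'
          content: |
            Defaults:openshift !requiretty
      ```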
* Don't require tty to run sudo (error10, 2015-10-30, 1 file, -0/+1)
      Set "Defaults !requiretty" so that ansible can run sudo without a
      terminal. Fixes #773.
* Increase sleep when waiting for IP. (Jaroslav Henner, 2015-10-20, 1 file, -1/+1)
      It was timing out on slower hardware.
* Use runcmd to restart network. (Jaroslav Henner, 2015-10-20, 1 file, -1/+1)
      Using bootcmd in cloud-config led to restarts prior to starting
      systemd-hostnamed, which was the probable cause of the failure when the
      DHCP client was failing to send the hostname; subsequently,
      openshift-ansible was not able to identify the VM among the others when
      checking DHCP leases. The failure looked like the following:

          10:17:31 failed: [localhost] => {"attempts": 60, "changed": true, "cmd": "virsh -c qemu:///system net-dhcp-leases openshift-ansible | egrep -c 'experiment-node-compute-453d0|experiment-node-compute-61e16'", "delta": "0:00:00.033061", "end": "2015-10-19 10:17:31.409434", "failed": true, "rc": 0, "start": "2015-10-19 10:17:31.376373", "warnings": []}
          10:17:31 stdout: 1
          10:17:31 msg: Task failed as maximum retries was encountered
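      A minimal cloud-config sketch of the change, assuming a CentOS 7 guest
      where the legacy network service is in use:

      ```
      #cloud-config
      # Sketch only: runcmd runs late in boot, after services such as
      # systemd-hostnamed are already up; bootcmd runs very early, before
      # them. The service name assumes a CentOS 7 guest.
      runcmd:
        - [ systemctl, restart, network ]
      ```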
* Prevent dns resolution recursion (loop). (Jaroslav Henner, 2015-09-30, 1 file, -1/+1)
      The dnsmasq should not be resolving example.com recursively, because in
      case we have, in /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf:

          server=/example.com/192.168.55.1

      the dnsmasq will be asking itself, therefore a dns resolution loop is
      created, which causes

          Maximum number of concurrent DNS queries reached (max: 150)

      and performance degradation of dns resolution on the whole hypervisor
      and guests. This patch fixes that in the domain.xml, which causes

          local=/example.com/

      to be added to /var/lib/libvirt/dnsmasq/openshift-ansible.conf,
      effectively fixing the problem.
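      A hedged guess at the relevant fragment of the libvirt network
      definition; in libvirt's network XML, localOnly='yes' is what makes the
      generated dnsmasq config carry local=/example.com/ so queries for the
      domain are never forwarded (the variable name is illustrative):

      ```
      # Sketch only: XML fragment carried in a YAML block scalar; the
      # localOnly attribute stops dnsmasq from forwarding example.com
      # queries back into the loop described above.
      libvirt_network_domain_fragment: |
        <domain name='example.com' localOnly='yes'/>
      ```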
* Add etcd nodes management in libvirt (Lénaïc Huard, 2015-08-25, 2 files, -2/+11)
|
* Merge pull request #405 from sdodson/loglevel2 (Brenton Leanhardt, 2015-08-17, 1 file, -1/+1)
|\
      Set loglevel=2 as our default across the board
| * Set loglevel=2 as our default across the board (Scott Dodson, 2015-07-29, 1 file, -1/+1)
| |
* | Fix infra node support on libvirt (Lénaïc Huard, 2015-08-11, 1 file, -1/+1)
|/
* Infra node support (Wesley Hearn, 2015-07-23, 1 file, -0/+16)
|
* Implement RHEL subscription for enterprise deployment type (Lénaïc Huard, 2015-07-17, 3 files, -4/+24)
|
* Playbook updates for clustered etcd (Jason DeTiberus, 2015-07-10, 2 files, -25/+12)
      - Add support to bin/cluster for specifying etcd hosts
        - defaults to 0; if no etcd hosts are selected, then embedded etcd is
          configured (see the sketch after this list)
      - Updates for the byo inventory file for etcd and master-as-node by
        default
      - Consolidation of cluster logic more centrally into the common
        playbook
      - Added etcd config support to playbooks
      - Restructured byo playbooks to leverage the common openshift-cluster
        playbook
      - Added support to the common master playbook to generate and apply
        external etcd client certs from the etcd ca
      - Start of a refactor for better handling of master certs in a
        multi-master environment
        - added the openshift_master_ca and openshift_master_certificates
          roles to manage master certs instead of generating them in the
          openshift_master role
      - Added etcd host groups to the cluster update playbooks
      - Added better handling of host groups when they are either not present
        or are empty
      - Update AWS readme
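      A hedged one-liner capturing the embedded-vs-external decision in the
      first bullet above; the variable name is illustrative, not the role's
      actual fact:

      ```
      # Sketch only: fall back to embedded etcd when no dedicated etcd hosts
      # were requested.
      openshift_master_embedded_etcd: "{{ (g_etcd_hosts | default([]) | length) == 0 }}"
      ```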
* Add a generic mechanism for passing options (Lénaïc Huard, 2015-07-03, 3 files, -3/+8)
      And use it in the libvirt and openstack playbooks
* Templatize configs and 0.5.2 changes (Jason DeTiberus, 2015-06-10, 1 file, -0/+1)
      - Templatize node config
      - Templatize master config
      - Integrated sdn changes
      - Updates for openshift_facts
        - Added support for node, master and sdn related changes
        - registry_url
        - added identity provider facts
      - Removed openshift_sdn_* roles
      - Install httpd-tools if configuring htpasswd auth
      - Remove references to external_id
        - Setting external_id interferes with nodes associating with the
          generated node object when pre-registering nodes.
      - osc/oc and osadm/oadm binary detection in openshift_facts

      Misc changes:
      - make non-errata puddle default for byo example
      - comment out master in list of nodes in inventory/byo/hosts
      - remove non-error errors from fluentd_* roles
      - Use admin kubeconfig instead of openshift-client
* Fix libvirt playbook (Lénaïc Huard, 2015-06-07, 1 file, -2/+2)
      If we don’t explicitly specify the libvirt URI to use for virsh, it
      will use the LIBVIRT_DEFAULT_URI environment variable. For consistent
      behavior, all `virsh` invocations must be done with the
      `-c <libvirt_uri>` parameter.
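      A minimal sketch of the resulting pattern, assuming a libvirt_uri
      variable like the one these playbooks pass around:

      ```
      # Sketch only: pass the connection URI on every virsh call so the
      # play's behavior does not depend on the caller's LIBVIRT_DEFAULT_URI.
      - name: List the cluster VMs
        command: "virsh -c {{ libvirt_uri }} list --name"
        register: virsh_list
      ```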
* Infrastructure - Add service action to bin/cluster (Jhon Honce, 2015-06-03, 1 file, -0/+32)
      - Add necessary playbooks/roles
      - Cleanup bin/cluster to meet new design guidelines
* [libvirt cluster] Use net-dhcp-leases to find VMs’ IPs (Lénaïc Huard, 2015-05-22, 1 file, -9/+3)
      Query libvirt’s DHCP leases rather than inspecting the host’s ARP cache
      to find the VMs’ IPs.
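      A hedged sketch of the lease-based wait, modeled on the command visible
      in the failure log quoted under "Use runcmd to restart network." above;
      the vm_names list is illustrative:

      ```
      # Sketch only: read the VMs' IPs from libvirt's own DHCP lease table
      # instead of scraping the host's ARP cache, retrying until every VM
      # shows up.
      - name: Wait for the VMs to get an IP
        shell: >
          virsh -c {{ libvirt_uri }} net-dhcp-leases openshift-ansible
          | egrep -c '{{ vm_names | join("|") }}'
        register: nb_allocated_ips
        until: nb_allocated_ips.stdout | int == vm_names | length
        retries: 60
        delay: 3
      ```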
* lvm-direct support for aws (Jason DeTiberus, 2015-04-23, 1 file, -1/+1)
      - Create a separate docker volume in aws openshift-cluster playbooks
        - default to using ephemeral storage, but allow it to be overridden
        - allow root volume settings to be overridden as well
        - add user-data cloud-config to bootstrap the
          installation/configuration of docker-storage-setup
      - pylint cleanup for oo_filters.py
      - remove leftover traces of the deployment_type tags, which were
        previously removed
        - oo_get_deployment_type_from_groups filter in oo_filters.py
        - cluster list playbooks' references to the
          oo_get_deployment_type_from_groups filter
* Remove deployment-type tags (Jason DeTiberus, 2015-04-20, 1 file, -1/+0)
|
* Merge pull request #19 from lhuard1A/move_pool-refresh (Jason DeTiberus, 2015-04-15, 2 files, -4/+3)
|\
      Move `virsh pool-refresh`
| * Move `virsh pool-refresh` (Lénaïc Huard, 2015-04-15, 2 files, -4/+3)
      The `pool-refresh` command is used to ask libvirt to rescan the content
      of a volume pool. This is used to make libvirt take into account
      volumes that were created outside of libvirt control, i.e. not with a
      `virsh` command. `pool-refresh` is useless after a `pool-create`, as
      the content is scanned at creation. `pool-refresh` is mandatory after
      having created files inside an existing pool.
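      A hedged sketch of the refresh task; variable names are illustrative:

      ```
      # Sketch only: no refresh is needed right after pool-create; it is
      # needed after image files are copied into an existing pool behind
      # libvirt's back.
      - name: Refresh the storage pool so libvirt sees the copied volumes
        command: "virsh -c {{ libvirt_uri }} pool-refresh {{ libvirt_storage_pool }}"
      ```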
* | Merge pull request #20 from lhuard1A/locale_proof (Jason DeTiberus, 2015-04-15, 2 files, -2/+2)
|\ \
      Make the error message checks locale-proof
| * | Make the error message checks locale-proof (Lénaïc Huard, 2015-04-15, 2 files, -2/+2)
| |/
      On a computer which has a locale set, the error messages look like
      this:

      ```
      $ virsh net-info foo
      erreur :impossible de récupérer le réseau « foo »
      erreur :Réseau non trouvé : no network with matching name 'foo'
      ```

      ```
      $ virsh pool-info foo
      erreur :impossible de récupérer le pool « foo »
      erreur :Pool de stockage introuvable : no storage pool with matching name 'foo'
      ```

      The classical way to make those tests locale-proof is to force a given
      locale, like this:

      ```
      $ LANG=POSIX virsh net-info foo
      error: failed to get network 'foo'
      error: Réseau non trouvé : no network with matching name 'foo'
      ```

      ```
      $ LANG=POSIX virsh pool-info foo
      error: failed to get pool 'foo'
      error: Pool de stockage introuvable : no storage pool with matching name 'foo'
      ```

      It looks like the "Network not found" or "Storage pool not found" parts
      of the message are generated by the `libvirtd` daemon and are not
      subject to the locale of the `virsh` client. The clean fix consists of
      patching `libvirt` so that `virsh` sends its locale to the `libvirtd`
      daemon. But in the meantime, it is safer to have our playbook match the
      part of the message which is not subject to the daemon locale.
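      A hedged sketch of a locale-pinned check; the task shape is an
      assumption, but pinning LANG through the task environment is the
      technique described above:

      ```
      # Sketch only: LANG is pinned so the match below sees the stable,
      # client-side half of the error message regardless of the user's
      # locale.
      - name: Check if the libvirt network exists
        command: "virsh -c {{ libvirt_uri }} net-info {{ libvirt_network }}"
        register: net_info
        failed_when: "net_info.rc != 0 and 'failed to get network' not in net_info.stderr"
        environment:
          LANG: POSIX
      ```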
* / Fix libvirt metadata used to store ansible tags (Lénaïc Huard, 2015-04-16, 1 file, -4/+6)
|/
      According to https://libvirt.org/formatdomain.html#elementsMetadata,
      the `metadata` tag can contain only one top-level element per
      namespace. Because of that, libvirt stored only the
      `deployment-type-{{ deployment_type }}` tag. As a consequence, the
      dynamic inventory reported no `env-{{ cluster }}` group. This is
      problematic for the `terminate.yml` playbook, which iterates over
      `groups['tag-env-{{ cluster-id }}']`.

      The symptom is that `oo_hosts_to_terminate` was not defined. In the
      end, as Ansible couldn’t iterate on the value of
      `groups['oo_hosts_to_terminate']`, it iterated on its letters:

      ```
      TASK: [Destroy VMs] ***********************************************************
      failed: [localhost] => (item=['g', 'destroy']) => {"failed": true, "item": ["g", "destroy"]}
      msg: virtual machine g not found
      failed: [localhost] => (item=['g', 'undefine']) => {"failed": true, "item": ["g", "undefine"]}
      msg: virtual machine g not found
      failed: [localhost] => (item=['r', 'destroy']) => {"failed": true, "item": ["r", "destroy"]}
      msg: virtual machine r not found
      failed: [localhost] => (item=['r', 'undefine']) => {"failed": true, "item": ["r", "undefine"]}
      msg: virtual machine r not found
      failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
      msg: virtual machine o not found
      failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
      msg: virtual machine o not found
      failed: [localhost] => (item=['u', 'destroy']) => {"failed": true, "item": ["u", "destroy"]}
      msg: virtual machine u not found
      failed: [localhost] => (item=['u', 'undefine']) => {"failed": true, "item": ["u", "undefine"]}
      msg: virtual machine u not found
      failed: [localhost] => (item=['p', 'destroy']) => {"failed": true, "item": ["p", "destroy"]}
      msg: virtual machine p not found
      failed: [localhost] => (item=['p', 'undefine']) => {"failed": true, "item": ["p", "undefine"]}
      msg: virtual machine p not found
      failed: [localhost] => (item=['s', 'destroy']) => {"failed": true, "item": ["s", "destroy"]}
      msg: virtual machine s not found
      failed: [localhost] => (item=['s', 'undefine']) => {"failed": true, "item": ["s", "undefine"]}
      msg: virtual machine s not found
      failed: [localhost] => (item=['[', 'destroy']) => {"failed": true, "item": ["[", "destroy"]}
      msg: virtual machine [ not found
      failed: [localhost] => (item=['[', 'undefine']) => {"failed": true, "item": ["[", "undefine"]}
      msg: virtual machine [ not found
      failed: [localhost] => (item=["'", 'destroy']) => {"failed": true, "item": ["'", "destroy"]}
      msg: virtual machine ' not found
      failed: [localhost] => (item=["'", 'undefine']) => {"failed": true, "item": ["'", "undefine"]}
      msg: virtual machine ' not found
      failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
      msg: virtual machine o not found
      failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
      msg: virtual machine o not found
      etc…
      ```
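      A hedged sketch of metadata that respects the
      one-top-level-element-per-namespace rule: all tags nested under a
      single namespaced element. The namespace URI, variable name, and tag
      values are illustrative:

      ```
      # Sketch only: one top-level element in the ansible namespace inside
      # <metadata>, with every tag as a child, so libvirt keeps them all.
      libvirt_domain_metadata: |
        <metadata>
          <ansible:tags xmlns:ansible="https://github.com/ansible/ansible">
            <ansible:tag>deployment-type-origin</ansible:tag>
            <ansible:tag>env-mycluster</ansible:tag>
            <ansible:tag>host-type-node</ansible:tag>
          </ansible:tags>
        </metadata>
      ```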
* fix missed absolute path reference to mktemp (Jason DeTiberus, 2015-04-15, 1 file, -1/+1)
|
* Configuration updates for latest builds and major refactor (Jason DeTiberus, 2015-04-14, 25 files, -376/+387)
      Configuration updates for latest builds:
      - Switch to using create-node-config
      - Switch sdn services to use etcd over SSL
        - This re-uses the client certificate deployed on each node
      - Additional node registration changes
      - Do not assume that metadata service is available in openshift_facts
        module
      - Call systemctl daemon-reload after installing openshift-master,
        openshift-sdn-master, openshift-node, openshift-sdn-node
      - Fix bug overriding openshift_hostname and openshift_public_hostname
        in byo playbooks
      - Start moving generated configs to /etc/openshift
      - Some custom module cleanup
      - Add known issue with ansible-1.9 to README_OSE.md
      - Update to genericize the kubernetes_register_node module
        - Default to use kubectl for commands
        - Allow for overriding kubectl_cmd
        - In openshift_register_node role, override kubectl_cmd to
          openshift_kube
      - Set default openshift_registry_url for enterprise when
        deployment_type is enterprise
      - Fix openshift_register_node for client config change
      - Ensure that master certs directory is created
      - Add roles and filter_plugin symlinks to
        playbooks/common/openshift-master and node
      - Allow non-root user with sudo nopasswd access
      - Updates for README_OSE.md
      - Update byo inventory for adding additional comments
      - Updates for node cert/config sync to work with non-root user using
        sudo
      - Move node config/certs to /etc/openshift/node
      - Don't use path for mktemp; addresses
        https://github.com/openshift/openshift-ansible/issues/154

      Create common playbooks:
      - create common/openshift-master/config.yml
      - create common/openshift-node/config.yml
      - update playbooks to use new common playbooks
      - update launch playbooks to call update playbooks
      - fix openshift_registry and openshift_node_ip usage

      Set default deployment type to origin:
      - openshift_repo updates for enabling origin deployments
        - also separate repo and gpgkey file structure
        - remove kubernetes repo since it isn't currently needed
      - full deployment type support for bin/cluster
        - honor OS_DEPLOYMENT_TYPE env variable
        - add --deployment-type option, which will override
          OS_DEPLOYMENT_TYPE if set
        - if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set,
          defaults to origin installs

      Additional changes:
      - Add separate config action to bin/cluster that runs ansible config
        but does not update packages
      - Some more duplication reduction in cluster playbooks
      - Rename task files in playbooks dirs to have tasks in their name for
        clarity
      - update aws/gce scripts to use a directory for inventory (otherwise
        when there are no hosts returned from dynamic inventory there is an
        error)

      libvirt refactor and update:
      - add libvirt dynamic inventory
      - updates to use dynamic inventory for libvirt
* Add libvirt as a provider (Lénaïc Huard, 2015-04-10, 18 files, -0/+463)