Commit log

- SDN overrides
- Allow overrides for use_flannel and use_fluentd (see the sketch below)
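A minimal inventory sketch of the new toggles (variable names from this
commit; values illustrative):

```yaml
# Opt in to flannel and out of fluentd for this cluster (sketch).
use_flannel: true
use_fluentd: false
```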
- Set openshift_master_cluster_method=native for all cloud providers so that
  bin/cluster will build the HA masters correctly
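As an inventory sketch (the variable and value are from this commit; where it
gets set for each cloud provider is assumed):

```yaml
# Force the native HA method so bin/cluster builds the HA masters.
openshift_master_cluster_method: native
```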
- openshift_master role update
  - infra_nodes was previously being set to num_infra, an integer value when
    using the cloud providers; added a new variable osm_infra_nodes that is
    expected to be a list of hosts (see the sketch below)
  - if openshift_infra_nodes is not already set, create it from the nodes
    that carry the region=infra label
- Cloud provider config playbook updates
  - override openshift_router_selector for cloud providers to avoid the
    default of 'region=infra' when deployment_type is not 'online'
  - set openshift_infra_nodes to g_infra_hosts for cloud providers
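A sketch of the distinction (hostnames hypothetical):

```yaml
# osm_infra_nodes expects a list of hosts, unlike the integer num_infra.
osm_infra_nodes:
  - infra-node-1.example.com
  - infra-node-2.example.com
```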
- Add g_infra_hosts (nodes with sub-type infra)
- Add g_compute_hosts (nodes with sub-type compute); see the sketch below
- Reduce duplication by re-using previously defined variables
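A hypothetical cluster_hosts.yml sketch of how the new groups might be
derived (the tag group names are assumed):

```yaml
g_infra_hosts: "{{ g_node_hosts | intersect(groups['tag_sub-host-type-infra'] | default([])) }}"
g_compute_hosts: "{{ g_node_hosts | intersect(groups['tag_sub-host-type-compute'] | default([])) }}"
```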
Make bin/cluster able to spawn an OSE 3.1 cluster
- Move debug_level into vars.yml and the byo inventory
- Change the variables in cluster_hosts.yml to be g_* and update the
  playbooks to use those values directly instead of setting them indirectly
- Add a new g_all_hosts entry in cluster_hosts.yml for the update playbook to
  use instead of unioning all host types within the playbook (see the sketch
  below)
- Add a cluster_hosts.yml for the byo playbook
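The new entry might be built roughly like this (a sketch; the exact list of
unioned groups is assumed):

```yaml
# g_all_hosts unions the per-type groups once, so the update playbook
# can consume it directly.
g_all_hosts: "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) }}"
```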
Removing env-host-type in preparation for the env and environment changes.
Update for latest CentOS-7-x86_64-GenericCloud.
- Use the xz-compressed image
- Update the sha256 for the new image
- Update the docs to reflect the new settings (sketched below)
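The touched settings would look roughly like this (variable names assumed;
the checksum is a placeholder, not the real value):

```yaml
image_url: "http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz"
image_sha256: "<sha256 of the xz image>"
```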
The list playbook listed the IPs of the VMs without logging their role:

```
TASK: [debug ] ************************************************************
ok: [10.64.109.37] => {
    "msg": "public:10.64.109.37 private:192.168.165.5"
}
ok: [10.64.109.47] => {
    "msg": "public:10.64.109.47 private:192.168.165.6"
}
ok: [10.64.109.36] => {
    "msg": "public:10.64.109.36 private:192.168.165.4"
}
ok: [10.64.109.215] => {
    "msg": "public:10.64.109.215 private:192.168.165.2"
}
```

The list playbook now prints the information in a more structured way, with a
list of masters, a list of nodes and the subtype of each node:

```
TASK: [debug ] ************************************************************
ok: [localhost] => {
    "msg": {
        "lenaicnewlist": {
            "master": [
                {
                    "name": "10.64.109.215",
                    "private IP": "192.168.165.2",
                    "public IP": "10.64.109.215",
                    "subtype": "default"
                }
            ],
            "node": [
                {
                    "name": "10.64.109.47",
                    "private IP": "192.168.165.6",
                    "public IP": "10.64.109.47",
                    "subtype": "compute"
                },
                {
                    "name": "10.64.109.37",
                    "private IP": "192.168.165.5",
                    "public IP": "10.64.109.37",
                    "subtype": "compute"
                },
                {
                    "name": "10.64.109.36",
                    "private IP": "192.168.165.4",
                    "public IP": "10.64.109.36",
                    "subtype": "infra"
                }
            ]
        }
    }
}
```
Use write_files to disable requiretty for the openshift user as suggested by @detiberm, fixes #773
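A minimal cloud-config sketch of the idea (path and filename assumed):

```yaml
#cloud-config
write_files:
  # Drop a sudoers snippet so ansible can run sudo without a tty.
  - path: /etc/sudoers.d/99-requiretty
    permissions: '0440'
    content: |
      Defaults:openshift !requiretty
```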
Set Defaults !requiretty so that ansible can run sudo without a terminal. Fixes #773
It was timing out on slower hardware.
Using bootcmd in cloud-config led to restarts prior to the start of
systemd-hostnamed, which was the probable cause of the failure: the DHCP
client failed to send the hostname and, subsequently, openshift-ansible was
unable to identify the VM among the others when checking the DHCP leases.
The failure looked like the following:

```
10:17:31 failed: [localhost] => {"attempts": 60, "changed": true, "cmd": "virsh -c qemu:///system net-dhcp-leases openshift-ansible | egrep -c 'experiment-node-compute-453d0|experiment-node-compute-61e16'", "delta": "0:00:00.033061", "end": "2015-10-19 10:17:31.409434", "failed": true, "rc": 0, "start": "2015-10-19 10:17:31.376373", "warnings": []}
10:17:31 stdout: 1
10:17:31 msg: Task failed as maximum retries was encountered
```
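A sketch of the safer shape, letting cloud-init's own hostname module do the
work instead of bootcmd (the keys are standard cloud-init; the actual change
in this commit may differ):

```yaml
#cloud-config
# Set the hostname without a bootcmd-triggered restart racing
# systemd-hostnamed and the DHCP client.
hostname: experiment-node-compute-453d0
preserve_hostname: false
```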
dnsmasq should not resolve example.com recursively: when
/etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf contains

    server=/example.com/192.168.55.1

dnsmasq will be asking itself, creating a DNS resolution loop. That loop
causes

    Maximum number of concurrent DNS queries reached (max: 150)

and degrades DNS resolution on the whole hypervisor and its guests.
This patch adds the fix to the domain.xml, which results in

    local=/example.com/

being added to /var/lib/libvirt/dnsmasq/openshift-ansible.conf, effectively
fixing the problem.
Set loglevel=2 as our default across the board
- Add support to bin/cluster for specifying etcd hosts
  - defaults to 0; if no etcd hosts are selected, embedded etcd is configured
- Update the byo inventory file for etcd and master-as-node by default
- Consolidate cluster logic more centrally into the common playbook
- Add etcd config support to the playbooks
- Restructure the byo playbooks to leverage the common openshift-cluster
  playbook
- Add support to the common master playbook for generating and applying
  external etcd client certs from the etcd CA
- Start a refactor for better handling of master certs in a multi-master
  environment
  - added the openshift_master_ca and openshift_master_certificates roles to
    manage master certs instead of generating them in the openshift_master
    role
- Add etcd host groups to the cluster update playbooks
- Add better handling of host groups when they are either not present or
  empty (see the sketch below)
- Update the AWS readme
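A minimal sketch of that host-group guard (group names assumed; the real
playbook logic may differ):

```yaml
# Tolerate an absent or empty etcd group; an empty selection means
# embedded etcd gets configured instead.
- name: Evaluate oo_etcd_to_config
  add_host:
    name: "{{ item }}"
    groups: oo_etcd_to_config
  with_items: "{{ groups['etcd'] | default([]) }}"
```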
And use it in the libvirt and openstack playbooks
- Templatize the node config
- Templatize the master config
- Integrate the sdn changes
- Updates for openshift_facts
  - added support for node, master and sdn related changes
  - registry_url
  - added identity provider facts
- Remove the openshift_sdn_* roles
- Install httpd-tools if configuring htpasswd auth (see the sketch below)
- Remove references to external_id
  - setting external_id interferes with nodes associating with the generated
    node object when pre-registering nodes
- osc/oc and osadm/oadm binary detection in openshift_facts
Misc changes:
- Make the non-errata puddle the default for the byo example
- Comment out the master in the list of nodes in inventory/byo/hosts
- Remove non-error errors from the fluentd_* roles
- Use the admin kubeconfig instead of openshift-client
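For the htpasswd case, the identity-provider facts would be fed by an
inventory entry along these lines (the structure is assumed, not necessarily
this commit's exact schema):

```yaml
openshift_master_identity_providers:
  - name: htpasswd_auth
    kind: HTPasswdPasswordIdentityProvider
    login: true
    challenge: true
    filename: /etc/openshift/htpasswd  # managed with htpasswd from httpd-tools
```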
If we don't explicitly specify the libvirt URI to use for virsh, it will use
the LIBVIRT_DEFAULT_URI environment variable. For consistent behavior, all
`virsh` invocations must be made with the `-c <libvirt_uri>` parameter.
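In playbook terms, every call follows this pattern (the variable name is
assumed):

```yaml
# Pass -c explicitly so LIBVIRT_DEFAULT_URI cannot change which
# hypervisor virsh talks to.
- name: List the defined VMs
  command: "virsh -c {{ libvirt_uri }} list --all"
```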
* Add the necessary playbooks/roles
* Clean up bin/cluster to meet the new design guidelines
Query libvirt’s DHCP leases rather than inspecting the host’s ARP cache
to find the VMs’ IPs.
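A sketch of the lease-based lookup (the network name is taken from elsewhere
in this log; the VM-name variable and retry budget are illustrative):

```yaml
- name: Wait for the VM to obtain a DHCP lease
  shell: "virsh -c {{ libvirt_uri }} net-dhcp-leases openshift-ansible | grep -c {{ vm_name }}"
  register: leases
  until: leases.stdout != '0'
  retries: 60
  delay: 3
```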
- Create a separate docker volume in the aws openshift-cluster playbooks
  - default to using ephemeral storage, but allow this to be overridden
  - allow the root volume settings to be overridden as well
  - add a user-data cloud-config to bootstrap the installation/configuration
    of docker-storage-setup (see the sketch below)
- pylint cleanup for oo_filters.py
- Remove leftover traces of the previously removed deployment_type tags
  - the oo_get_deployment_type_from_groups filter in oo_filters.py
  - the cluster list playbooks' references to the
    oo_get_deployment_type_from_groups filter
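A user-data sketch of that bootstrap (the device and volume group names are
assumed):

```yaml
#cloud-config
write_files:
  # docker-storage-setup reads this at first boot and carves docker's
  # storage out of the separate volume.
  - path: /etc/sysconfig/docker-storage-setup
    content: |
      DEVS=/dev/xvdb
      VG=docker
```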
Move `virsh pool-refresh`
The `pool-refresh` command asks libvirt to rescan the content of a volume
pool. It makes `libvirt` take into account volumes that were created outside
of libvirt's control, i.e. not with a `virsh` command. `pool-refresh` is
useless right after a `pool-create`, as the content is scanned at creation,
but it is mandatory after creating files inside an existing pool.
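In task form, the refresh belongs right after the out-of-band copy (the
variable names are assumed):

```yaml
# Right after pool-create this would be a no-op; after copying images
# into the pool from outside libvirt, it is required.
- name: Make libvirt scan the newly copied images
  command: "virsh -c {{ libvirt_uri }} pool-refresh {{ libvirt_storage_pool }}"
```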
Make the error message checks locale-proof
On a computer which has a locale set, the error messages look like this:
```
$ virsh net-info foo
erreur :impossible de récupérer le réseau « foo »
erreur :Réseau non trouvé : no network with matching name 'foo'
```
```
$ virsh pool-info foo
erreur :impossible de récupérer le pool « foo »
erreur :Pool de stockage introuvable : no storage pool with matching name 'foo'
```
The classical way to make those tests locale-proof is to force a given
locale, like this:
```
$ LANG=POSIX virsh net-info foo
error: failed to get network 'foo'
error: Réseau non trouvé : no network with matching name 'foo'
```
```
$ LANG=POSIX virsh pool-info foo
error: failed to get pool 'foo'
error: Pool de stockage introuvable : no storage pool with matching name 'foo'
```
It looks like the "Network not found" and "Storage pool not found" parts of
the message are generated by the `libvirtd` daemon and are not subject to the
locale of the `virsh` client. The clean fix would be to patch `libvirt` so
that `virsh` sends its locale to the `libvirtd` daemon. But in the meantime,
it is safer to have our playbook match the part of the message which is not
subject to the daemon's locale.
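A sketch of what such a locale-proof check can look like in the playbook (the
task shape is assumed; the matched fragment comes from the transcripts above):

```yaml
- name: Check whether the network is already defined
  command: "virsh -c {{ libvirt_uri }} net-info {{ libvirt_network }}"
  register: net_info
  # Match the English fragment that survives any client/daemon locale.
  failed_when: net_info.rc != 0 and 'no network with matching name' not in net_info.stderr
```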
According to https://libvirt.org/formatdomain.html#elementsMetadata , the
`metadata` tag can contain only one top-level element per namespace. Because
of that, libvirt stored only the `deployment-type-{{ deployment_type }}` tag.
As a consequence, the dynamic inventory reported no `env-{{ cluster }}` group.
This is problematic for the `terminate.yml` playbook, which iterates over
`groups['tag-env-{{ cluster-id }}']`. The symptom is that
`oo_hosts_to_terminate` was not defined. In the end, as Ansible couldn’t
iterate on the value of `groups['oo_hosts_to_terminate']`, it iterated on its
letters:
```
TASK: [Destroy VMs] ***********************************************************
failed: [localhost] => (item=['g', 'destroy']) => {"failed": true, "item": ["g", "destroy"]}
msg: virtual machine g not found
failed: [localhost] => (item=['g', 'undefine']) => {"failed": true, "item": ["g", "undefine"]}
msg: virtual machine g not found
failed: [localhost] => (item=['r', 'destroy']) => {"failed": true, "item": ["r", "destroy"]}
msg: virtual machine r not found
failed: [localhost] => (item=['r', 'undefine']) => {"failed": true, "item": ["r", "undefine"]}
msg: virtual machine r not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
failed: [localhost] => (item=['u', 'destroy']) => {"failed": true, "item": ["u", "destroy"]}
msg: virtual machine u not found
failed: [localhost] => (item=['u', 'undefine']) => {"failed": true, "item": ["u", "undefine"]}
msg: virtual machine u not found
failed: [localhost] => (item=['p', 'destroy']) => {"failed": true, "item": ["p", "destroy"]}
msg: virtual machine p not found
failed: [localhost] => (item=['p', 'undefine']) => {"failed": true, "item": ["p", "undefine"]}
msg: virtual machine p not found
failed: [localhost] => (item=['s', 'destroy']) => {"failed": true, "item": ["s", "destroy"]}
msg: virtual machine s not found
failed: [localhost] => (item=['s', 'undefine']) => {"failed": true, "item": ["s", "undefine"]}
msg: virtual machine s not found
failed: [localhost] => (item=['[', 'destroy']) => {"failed": true, "item": ["[", "destroy"]}
msg: virtual machine [ not found
failed: [localhost] => (item=['[', 'undefine']) => {"failed": true, "item": ["[", "undefine"]}
msg: virtual machine [ not found
failed: [localhost] => (item=["'", 'destroy']) => {"failed": true, "item": ["'", "destroy"]}
msg: virtual machine ' not found
failed: [localhost] => (item=["'", 'undefine']) => {"failed": true, "item": ["'", "undefine"]}
msg: virtual machine ' not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
etc…
```
Configuration updates for latest builds
- Switch to using create-node-config
- Switch the sdn services to use etcd over SSL
  - this re-uses the client certificate deployed on each node
- Additional node registration changes
- Do not assume that the metadata service is available in the openshift_facts
  module
- Call systemctl daemon-reload after installing openshift-master,
  openshift-sdn-master, openshift-node and openshift-sdn-node
- Fix a bug overriding openshift_hostname and openshift_public_hostname in
  the byo playbooks
- Start moving generated configs to /etc/openshift
- Some custom module cleanup
- Add a known issue with ansible-1.9 to README_OSE.md
- Update to genericize the kubernetes_register_node module
  - default to using kubectl for commands
  - allow for overriding kubectl_cmd
  - in the openshift_register_node role, override kubectl_cmd to
    openshift_kube
- Set a default openshift_registry_url for enterprise when deployment_type is
  enterprise
- Fix openshift_register_node for the client config change
- Ensure that the master certs directory is created
- Add roles and filter_plugin symlinks to playbooks/common/openshift-master
  and node
- Allow a non-root user with sudo nopasswd access
- Updates for README_OSE.md
- Update the byo inventory to add additional comments
- Updates for node cert/config sync to work with a non-root user using sudo
- Move node config/certs to /etc/openshift/node
- Don't use a path for mktemp; addresses
  https://github.com/openshift/openshift-ansible/issues/154

Create common playbooks
- create common/openshift-master/config.yml
- create common/openshift-node/config.yml
- update playbooks to use the new common playbooks (see the sketch below)
- update the launch playbooks to call the update playbooks
- fix openshift_registry and openshift_node_ip usage

Set the default deployment type to origin
- openshift_repo updates to enable origin deployments
  - also separate the repo and gpgkey file structure
  - remove the kubernetes repo since it isn't currently needed
- full deployment type support for bin/cluster
  - honor the OS_DEPLOYMENT_TYPE env variable
  - add a --deployment-type option, which will override OS_DEPLOYMENT_TYPE if
    set
  - if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, default to
    origin installs

Additional changes:
- Add a separate config action to bin/cluster that runs the ansible config
  but does not update packages
- Some more duplication reduction in the cluster playbooks
- Rename task files in the playbooks dirs to have "tasks" in their name for
  clarity
- Update the aws/gce scripts to use a directory for inventory (otherwise,
  when there are no hosts returned from dynamic inventory, there is an error)

libvirt refactor and update
- add a libvirt dynamic inventory
- update to use the dynamic inventory for libvirt
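A sketch of the delegation pattern the new common playbooks enable (the
include path is from this commit; the vars wiring is assumed):

```yaml
# playbooks/byo/openshift-master/config.yml (sketch)
- include: ../../common/openshift-master/config.yml
  vars:
    openshift_deployment_type: "{{ deployment_type | default('origin') }}"
```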