Add missing atomic- and openshift-enterprise
Some checks related to *enterprise deployments were still only
looking for the "enterprise" deployment_type. Update them to
also cover the atomic-enterprise and openshift-enterprise deployment types.
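A check of this shape might look like the following sketch (the task and file names are illustrative, not taken from the commit):

```yaml
# Hypothetical task: gate enterprise-only steps on all three deployment types.
- name: Run enterprise-only configuration
  include: enterprise.yml   # illustrative file name
  when: deployment_type in ['enterprise', 'atomic-enterprise', 'openshift-enterprise']
```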
Make pod_eviction_timeout configurable from cli
Add a DNS server on OpenStack clusters
Allow the compression option to be set to empty for non-compressed images
Support tgz and gzip compressed images
Additional overrides for cloud provider playbooks
- SDN overrides
- Allow overrides for use_flannel and use_fluentd
Check that openshift_hostname resolves to an IP on our host
Refactor storage options
Fix scaleup playbook.
- Fix overrides for GCE machine type and GCE machine image
- Update default image for origin
- Update default ssh user for origin and enterprise
- Remove old commented-out code
- Remove wip and join_node playbooks
- Added an add_nodes playbook, which now allows using bin/cluster to add
  additional nodes
- Allow env override of ssh_user
- Improve the list playbook
Install and start one etcd server before the others.
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
In faster environments (i.e. all local VMs), etcd nodes could come online at
roughly the same time, leading to conflicting self-elections and a
non-functional cluster.
To solve this, we configure the first etcd host by itself, then configure the
remaining hosts in parallel to keep things as fast as possible.
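The staged bootstrap described above could be sketched as two plays (the group and role names here are illustrative, not taken from the commit):

```yaml
# Hypothetical playbook sketch: configure the first etcd host alone so a
# single node bootstraps the cluster, then configure the rest in parallel.
- name: Configure first etcd host by itself
  hosts: etcd[0]
  roles:
    - etcd

- name: Configure remaining etcd hosts in parallel
  hosts: etcd[1:]
  roles:
    - etcd
```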
s3_registry: fix "no filter named 'lookup'" error
* Added a default function for the lookup.
* Per the best practices guide [1], added default(,true) to avoid empty strings.
[1] https://github.com/openshift/openshift-ansible/blob/master/docs/best_practices_guide.adoc#filters
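The resulting template expression presumably looks something like this sketch (the variable and environment-variable names are illustrative):

```yaml
# Hypothetical vars sketch: prefer the environment variable but fall back to a
# play variable; the second argument 'true' makes an empty string fall through
# to the default as well.
aws_access_key: "{{ lookup('env', 'S3_ACCESS_KEY_ID') | default(aws_access_key_default, true) }}"
```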
Multi-master fixes for provider playbooks
- Set openshift_master_cluster_method=native for all cloud providers so that
  bin/cluster will build the HA masters correctly
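As a vars setting, this is simply (shown here in YAML vars form):

```yaml
# Select the native HA method so bin/cluster configures clustered masters
# directly rather than an external HA mechanism.
openshift_master_cluster_method: native
```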
Fix hostname for the AWS cloud provider
- No longer set openshift_hostname to the instance's private IP
- openshift_master role update
  - infra_nodes was previously being set to num_infra, which is an integer
    value when using the cloud providers; added a new variable osm_infra_nodes
    that is expected to be a list of hosts
  - If openshift_infra_nodes is not already set, create it from the nodes that
    have the region=infra label
- Cloud provider config playbook updates
  - Override openshift_router_selector for cloud providers to avoid using the
    default of 'region=infra' when deployment_type is not 'online'
  - Set openshift_infra_nodes to g_infra_host for cloud providers
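The label-based fallback might be sketched like this (the group name and label variable are assumptions, not taken from the commit):

```yaml
# Hypothetical task: default openshift_infra_nodes to the nodes carrying the
# region=infra label when the variable has not been set explicitly.
- name: Derive openshift_infra_nodes from node labels
  set_fact:
    openshift_infra_nodes: >-
      {{ groups['nodes'] | default([])
         | map('extract', hostvars)
         | selectattr('openshift_node_labels.region', 'defined')
         | selectattr('openshift_node_labels.region', 'equalto', 'infra')
         | map(attribute='inventory_hostname') | list }}
  when: openshift_infra_nodes is not defined
```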
Fix update_repos_and_packages playbook which now needs openshift_facts
`rhel_subscribe`
Increase OpenStack stack creation/deletion timeout
Fix for bug 1298
adhoc s3 registry - add auth part in the registry config sample
Without the auth section, after spawning the registry we were not able to authenticate:
```
docker login -u .. -p ... 172.30.234.98:5000
Error response from daemon: no successful auth challenge for http://172.30.234.98:5000/v2/ - errors: []
```
Simply add this section to the registry config sample.
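An auth section in a Docker registry config.yml has roughly this shape (the driver and realm values here are assumptions, not the exact sample from the commit):

```yaml
# Hypothetical registry config fragment: without some auth block like this,
# `docker login` against the registry fails as shown above.
auth:
  openshift:
    realm: openshift   # illustrative realm name
```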
Allow a custom bucket name and region
Files: playbooks/adhoc/s3_registry/s3_registry*
To allow a different bucket name and region, aws_bucket and aws_region are now available.
* Add variables for the region and bucket to the j2 template
* Update the Usage comment
* Add defaults for aws_bucket_name and aws_bucket_region
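Overriding the new variables could look like this sketch (the values are placeholders):

```yaml
# Hypothetical overrides for the s3_registry playbook; the names follow the
# commit message (aws_bucket / aws_region), the values are placeholders.
aws_bucket: my-registry-bucket   # placeholder bucket name
aws_region: us-east-1            # placeholder region
```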
Fix checking for update package availability
Currently, if `yum list available` returns two versions, for whatever reason
no sorting is imposed, so an upgraded package version may be available yet go
undetected.
This patch sorts the version list so that the most recent version is always
picked first.
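One way to impose a version-aware ordering is GNU sort's --version-sort; a rough sketch of the idea, not the patch's actual implementation (the package name and the exact pipeline are assumptions):

```yaml
# Hypothetical task: list available versions and keep the newest one, using a
# version-aware sort rather than relying on yum's output order.
- name: Determine newest available package version
  shell: >-
    yum list available origin --showduplicates
    | awk '/^origin/ {print $2}' | sort --version-sort | tail -n 1
  register: latest_pkg_version
  changed_when: false
```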