Commit messages
Use ansible playbook to initialize openshift cluster
on inventory/playbook variables for openshift_hostname
- Remove default value for openshift_hostname and make it required
- Remove workarounds that are no longer needed
- Remove resources parameter from openshift_register_node module
- Pre-create node certificates for each node before registering the node
- Distribute the created node certificates to each node
- Move node registration logic to a new openshift_register_nodes role, since
  these steps now have to run on a master rather than on the nodes as before
  (see the sketch after this list)
- Rename openshift_register_node module to kubernetes_register_node, one more
  step toward genericizing it enough for upstreaming; plenty of
  openshift-specific commands still need to be genericized
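A minimal sketch of how certificate pre-creation and distribution from a master could be wired up in Ansible. The openshift_register_nodes role name comes from the commit; the group names, certificate paths, and tasks below are illustrative assumptions, not the repository's actual playbooks.

# Hypothetical two-play flow: register nodes from a master, then push the
# pre-created certificates out to each node. Paths and group names are assumed.
- name: Register nodes and pre-create their certificates on a master
  hosts: masters
  roles:
    - openshift_register_nodes            # new role named in the commit
  post_tasks:
    - name: Fetch generated node certificates to the control host (illustrative)
      fetch:
        src: "/etc/openshift/generated-configs/{{ item }}/node.crt"
        dest: "/tmp/node-certs/{{ item }}/"
        flat: true
      with_items: "{{ groups['nodes'] }}"

- name: Distribute the pre-created node certificates
  hosts: nodes
  tasks:
    - name: Copy this node's certificate onto the node (illustrative path)
      copy:
        src: "/tmp/node-certs/{{ inventory_hostname }}/node.crt"
        dest: /etc/openshift/node/node.crt
        mode: "0600"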
- Rename repos role to openshift_repos
- Make openshift_repos a dependency of openshift_common (see the sketch below)
- Add README and metadata for openshift_repos
- Playbook updates for role rename
- Verify libselinux-python is installed, otherwise some of the built-in
  modules we use fail
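A short sketch of what the dependency wiring and the libselinux-python check could look like; the file paths and task wording are assumptions, not the repository's actual contents.

# roles/openshift_common/meta/main.yml (illustrative): pull in openshift_repos
# automatically whenever openshift_common is applied.
---
dependencies:
  - role: openshift_repos

# roles/openshift_repos/tasks/main.yml (illustrative): make sure
# libselinux-python is present so modules that touch files on SELinux-enabled
# hosts do not fail.
---
- name: Ensure libselinux-python is installed
  yum:
    name: libselinux-python
    state: present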
- Does not install or start docker, since the openshift-node role will handle
  that for us
- Only adds root to the dockerroot group and configures the enter-container
  script (see the sketch below)
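A minimal sketch of those two tasks, assuming the helper script is shipped as a role file and installed to /usr/local/bin; the script location and task names are assumptions.

# Illustrative tasks only; not the role's actual task file.
- name: Add root to the dockerroot group
  user:
    name: root
    groups: dockerroot
    append: true

- name: Install the enter-container helper script (path assumed)
  copy:
    src: enter-container
    dest: /usr/local/bin/enter-container
    mode: "0755"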
- Add verify_chain action to os_firewall_manage_iptables module
- Update os_firewall module to use os_firewall_manage_iptables for creating
the DOCKER chain.
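Conceptually, verify_chain checks that a chain exists and creates it when missing. The sketch below uses plain iptables commands rather than the os_firewall_manage_iptables module itself, since the module's parameters are not shown in the commit message.

# Conceptual equivalent of verify_chain for the DOCKER chain; illustration only.
- name: Check whether the DOCKER chain already exists
  command: iptables -n -L DOCKER
  register: docker_chain
  failed_when: false
  changed_when: false

- name: Create the DOCKER chain if it is missing
  command: iptables -N DOCKER
  when: docker_chain.rc != 0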
- Don't use set_fact on localhost for openshift_master_ips and
  openshift_master_public_ips
  - We only use them in the configure play
  - Move the definitions to the vars section of the configure play
  - Otherwise we would have to read openshift_master_ips and
    openshift_master_public_ips from hostvars['localhost'], and since they
    aren't referenced anywhere else, it is simpler to define them in vars
    instead of via set_fact on localhost (see the sketch below)
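A sketch of that vars-based approach, using current Ansible filter syntax for illustration; the play name, group names, and the hostvars keys used to build the lists are assumptions rather than the playbook's actual expressions.

# Illustrative 'Configure Instances' play: master IP lists built in vars
# instead of via set_fact on localhost.
- name: Configure Instances
  hosts: nodes
  vars:
    openshift_master_ips: "{{ groups['masters']
                              | map('extract', hostvars, ['ansible_default_ipv4', 'address'])
                              | list }}"
    openshift_master_public_ips: "{{ groups['masters']
                                     | map('extract', hostvars, 'gce_public_ip')
                                     | list }}"
  tasks:
    - name: Show the computed master IP lists (illustration only)
      debug:
        msg: "{{ openshift_master_ips }} / {{ openshift_master_public_ips }}"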
os_update_latest after repo config
* Added playbooks/gce/openshift-cluster
* Added bin/cluster (will replace cluster.sh)
- playbooks/gce/openshift-cluster:
  - Remove some stray debugging statements
  - Some minor formatting fixes
  - Remove unnecessary quotes
  - Clean up some jinja templates for readability
  - Add a play to the launch playbook to apply the os_update_latest role on
    all hosts in the new environment
  - Improve setting groups and gce_public_ip when using the add_host module
    (see the sketch after this list)
    - Set gce_public_ip as a variable for the host using the returned gce
      instance_data
    - Add a group for each tag configured on the host (prepending tag_ to the
      tag name)
  - Update the openshift-master/config.yml and openshift-node/config.yml
    includes to use the tag_env-host-type groups
- openshift-{master,node}/config.yml
  - Some cleanup
    - Remove some extraneous quotes
    - Remove connection: ssh from remote hosts, since it is the default
    - Remove user: root and instead set ansible_ssh_user in
      inventory/gce/group_vars/all
  - Set openshift_public_ip and openshift_env to templated values in
    inventory/gce/group_vars/all as well
  - No longer set openshift_node_ips for the master host, since nodes now
    register themselves when they are configured (prevents a reboot when
    adding nodes)
  - Move away from setting openshift_master_ips and openshift_public_master_ips
    via set_fact and instead use the vars: of the 'Configure Instances' play
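One way the add_host handling described above could look. The 'gce' variable is assumed to be the registered result of the gce provisioning task, and the instance_data fields, cluster_id variable, and exact group names are assumptions based on the commit message, not the playbook's actual code.

# Illustrative post-launch task: put each new instance into in-memory groups
# and record its public IP as a host variable.
- name: Add launched instances to in-memory inventory
  add_host:
    name: "{{ item.name }}"
    groups: "tag_env-{{ cluster_id }},tag_host-type-node,tag_env-host-type-{{ cluster_id }}-node"
    gce_public_ip: "{{ item.public_ip }}"
  with_items: "{{ gce.instance_data }}"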
os_update_latest role
* Added playbooks/gce/openshift-cluster
* Added bin/cluster (will replace cluster.sh)
Added tito build stuff
minor fix
Rename repos role to openshift_repos
- Rename repos role to openshift_repos
- Make openshift_repos a dependency of openshift_common
- Add README and metadata for openshift_repos
- Playbook updates for role rename
- Verify libselinux-python is installed, otherwise some of the built-in
  modules we use fail
Bug squashing
- Set --hostname flag in node config in openshift_node role
- Support some additional node attributes in openshift_node role
  - podCIDR
  - labels
  - annotations
- Support both output types for openshift ex config view in
  openshift_register_node module
- Support multiple api versions in openshift_register_node module
- Support additional attributes in openshift_register_node module
  - annotations
  - labels
  - pod_cidr
  - external_ips (v1beta3, will be available after next kube rebase)
  - internal_ips (v1beta3, will be available after next kube rebase)
  - hostnames (v1beta3, will be available after next kube rebase)
  - external_id (v1beta3, will be available after next kube rebase)
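Illustrative host variables for the attributes listed above; the variable names are assumptions about how the openshift_node role and kubernetes_register_node module might be parameterised, not their actual interfaces.

# Hypothetical inventory host_vars for one node (names are assumptions).
openshift_hostname: node1.example.com
openshift_node_pod_cidr: 10.1.0.0/24
openshift_node_labels:
  region: primary
  zone: default
openshift_node_annotations:
  owner: ops
# The v1beta3-only fields mentioned in the commit (available after the next
# kube rebase) would be additional entries of the same kind, e.g. external_ips.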
- Always set the hostname if it does not match openshift_hostname (see the
  sketch below)
- Use the local IP instead of the public IP as the hostname for the workaround
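A minimal sketch of the first point, assuming the standard hostname module and the ansible_fqdn fact; it illustrates the idea rather than reproducing the role's actual task.

# Illustrative task: correct the system hostname whenever it drifts from
# openshift_hostname.
- name: Set the system hostname to openshift_hostname
  hostname:
    name: "{{ openshift_hostname }}"
  when: ansible_fqdn != openshift_hostname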
Renamed AnsibleUtil to AwsUtil. Fixed a bug in AwsUtil for hosts without an environment set.
Add workaround for openshift-master startup timeout