Commit message
Automatic merge from submit-queue.
Include Deprecation: Convert to include_tasks
For all roles/
* Converts to include_tasks: for dynamic includes
* Converts to import_tasks: for static includes
Trello: https://trello.com/c/ZTyZu3UM/484-3-ansible-24-include-deprecation
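A minimal sketch of the conversion, assuming an illustrative task
file name:

```yaml
# Before (deprecated as of Ansible 2.4):
#   - include: install.yml

# After -- dynamic include, evaluated at runtime, so it can react to
# facts gathered during the run:
- include_tasks: install.yml

# After -- static import, resolved once when the playbook is parsed:
- import_tasks: install.yml
```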
Remove hosted vars from openshift_facts.
The current pattern is causing a bunch of undesired side effects.

openshift_logging pattern

- all logging and metrics images change their default imagePullPolicy
from Always to IfNotPresent
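A sketch of the new default in a pod spec; the container name and
image are illustrative placeholders:

```yaml
containers:
- name: logging-fluentd            # illustrative name
  image: openshift/origin-logging-fluentd:latest
  imagePullPolicy: IfNotPresent    # previously Always
```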
We now use a CPU request to ensure logging infrastructure pods are
not capped by default for CPU usage, while still guaranteeing a
minimum amount of CPU. We keep the *_cpu_limit variables so that
the existing behavior is maintained.
Note that we don't want to cap an infra pod's CPU usage by default,
since it should be able to use whatever resources it needs to
complete its tasks.
Bug 1501960 (https://bugzilla.redhat.com/show_bug.cgi?id=1501960)
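A sketch of the resulting resources stanza, with an illustrative
request value:

```yaml
resources:
  requests:
    cpu: 100m   # guarantee a minimum amount of CPU
  # No limits.cpu by default, so the pod may burst above its
  # request; a limit is applied only when the corresponding
  # *_cpu_limit variable is set, preserving existing behavior.
```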

Merged by openshift-bot

logging: set memory request to limit
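A sketch with an illustrative value; setting the request equal to
the limit reserves the memory for the pod up front:

```yaml
resources:
  requests:
    memory: 512Mi   # illustrative; set equal to the limit
  limits:
    memory: 512Mi
```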

Merged by openshift-bot

Allow specifying an image version for each logging component
https://bugzilla.redhat.com/show_bug.cgi?id=1471322
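Assuming per-component inventory variables along these lines (the
exact names are an assumption, not confirmed by the commit message):

```yaml
# Hypothetical inventory snippet: pin each logging component to its
# own image version instead of relying on one global version.
openshift_logging_image_version: v3.6           # global default
openshift_logging_kibana_image_version: v3.6.1  # per-component override
openshift_logging_curator_image_version: v3.6.0
```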

bug 1468987: kibana_proxy OOM

We currently set the memory allocated to the kibana-proxy container
to be the same as `max_old_space_size` for Node.js. But in V8, the
heap consists of multiple spaces, and the old space is only one of
them: measuring the heap used by kibana-proxy shows that at least
an additional 32MB is needed in the code space when
`max_old_space_size` peaks.
We therefore set the default memory limit to 256MB here and change
the default calculation of `max_old_space_size` in the image
repository to only half of what the container receives, to leave
some heap for the other spaces.
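A sketch of the effect; the memory limit comes from the commit
message, while the Node.js flag value illustrates the halving done
in the image repository:

```yaml
containers:
- name: kibana-proxy
  resources:
    limits:
      memory: 256Mi   # new default
# Inside the image, max_old_space_size is now derived as roughly
# half the container memory, i.e. the equivalent of:
#   node --max_old_space_size=128 ...
# leaving the rest for V8's code space and other heap spaces.
```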
Without that, playbook runs print warnings such as this:
[WARNING]: when statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: {{ g_etcd_hosts is not
defined and g_new_etcd_hosts is not
defined}}
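A sketch of the kind of change that silences the warning, reusing
the expression quoted above; the debug task is an illustrative
stand-in:

```yaml
# Before: templating delimiters inside `when` trigger the warning
- debug:
    msg: "no etcd hosts defined"
  when: "{{ g_etcd_hosts is not defined and g_new_etcd_hosts is not defined }}"

# After: `when` is already evaluated as a raw Jinja2 expression
- debug:
    msg: "no etcd hosts defined"
  when: g_etcd_hosts is not defined and g_new_etcd_hosts is not defined
```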

creeping

oauthclient generation for ops
ES dc creation
In order to ensure that the Kubernetes machinery can determine when the
Kibana Pods are becoming ready, we need to add a readiness probe to the
Containers that make up those pods. The Kibana readiness probe simply
hits the base URL at `http://localhost:5601/` and expects a 200.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
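A sketch of such a probe; the path and port come from the commit
message, while the timing values are illustrative assumptions:

```yaml
readinessProbe:
  httpGet:
    path: /
    port: 5601
  initialDelaySeconds: 5   # illustrative
  timeoutSeconds: 4        # illustrative
```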
When we currently create the set of logging `DeploymentConfig`s, we
create them with zero desired replicas. This causes the deployment
to succeed immediately, as there is no work to be done, which
inhibits our ability to use nice CLI UX features like
`oc rollout status` to monitor the logging stack deployments.
Instead, we can create the configs with the correct number of
replicas in the first place and stop using `oc scale` to bring them
up after the fact.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
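A sketch of the difference; the name and replica count are
illustrative:

```yaml
# Before: created with `replicas: 0`, then brought up afterwards via
#   oc scale dc/logging-kibana --replicas=1
# After: the desired count is set at creation time, so
#   oc rollout status dc/logging-kibana
# can track the rollout.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: logging-kibana
spec:
  replicas: 1
```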