| Commit message | Author | Age | Files | Lines |
Automatic merge from submit-queue.
bug 1506073. Lower cpu request for logging when it exceeds limit
This PR fixes https://bugzilla.redhat.com/show_bug.cgi?id=1506073 by:
* Lowering the CPU request to match the limit when the request is greater than the specified limit
Open question: is this an acceptable change, or does it make the outcome unexpected? Should we instead fail the deployment and advise the operator to correct their inventory?
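For context, Kubernetes rejects a container spec whose CPU request exceeds its CPU limit, which is why the role clamps the request rather than passing both values through. A minimal sketch of the resulting (valid) resources block, with made-up values:

```yaml
resources:
  limits:
    cpu: "500m"      # limit taken from the operator's inventory
  requests:
    cpu: "500m"      # request lowered to match the limit instead of the larger configured value
```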
Automatic merge from submit-queue.
Bug 1452939 - change imagePullPolicy in logging and metrics
cc: @jcantrill
|
| |
| |
| |
| |
| | |
- all logging and metrics images change their default imagePullPolicy
from Always to IfNotPresent
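Illustrative container snippet showing the new default (image name and tag are placeholders):

```yaml
containers:
- name: kibana
  image: openshift/origin-logging-kibana:v3.7
  imagePullPolicy: IfNotPresent   # was Always; IfNotPresent skips the pull when the image is already on the node
```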
Updating to use same image as origin until enterprise image is built/specified
Use "requests" for CPU resources instead of limits
We now use a CPU request so that logging infrastructure pods are not
capped for CPU usage by default, while still guaranteeing a minimum
amount of CPU.
We keep the *_cpu_limit variables so that the existing behavior is
maintained.
Note that we don't want to cap an infra pod's CPU usage by default,
since it should be able to use whatever resources it needs to complete
its tasks.
Bug 1501960 (https://bugzilla.redhat.com/show_bug.cgi?id=1501960)
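A rough sketch of what this looks like in the pod spec, with invented values: a CPU request reserves a minimum while leaving CPU uncapped.

```yaml
resources:
  requests:
    cpu: "100m"       # minimum CPU reserved for the infra pod
  limits:
    memory: "736Mi"   # memory stays limited; no cpu limit, so CPU usage is not capped
```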
bug 1489498. preserve replica and shard settings
Automatic merge from submit-queue.
Bug 1496271 - Preserve SCC for ES local persistent storage
ES can be modified to use node-local persistent storage. This requires changing the SCC and is described in the docs:
https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html
During an upgrade, the SCC defined by the user is ignored. This fix fetches the user-defined SCC as a fact and adds it to the ES DC, where it is used later.
Also includes the cherry-picked fix for Bug 1482661 - Preserve ES dc nodeSelector and supplementalGroups
cc @jcantrill
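A rough sketch of the kind of user-set DeploymentConfig fields the upgrade now preserves (values are invented for illustration):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        logging-es-node: "1"          # user-set node pinning, kept across upgrades
      securityContext:
        supplementalGroups: [65534]   # user-set group for the storage, kept across upgrades
```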
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
ES can be modified to use node-local persistent storage. This requires
changing the SCC and is described in the docs:
https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html
During an upgrade, the SCC defined by the user is ignored. This fix fetches
the user-defined SCC as a fact and adds it to the ES DC, where it is used later.
|
| |
| |
| |
| | |
(cherry picked from commit 601e35cbf4410972c7fa0a1d3d5c6327b82353ac)
Automatic merge from submit-queue.
Add logging es prometheus endpoint
This PR adds a Prometheus endpoint to the logging Elasticsearch pod.
PR https://github.com/openshift/openshift-ansible/pull/3509 removed any
usage of `openshift_logging_es_cpu_limit`.
Currently, `openshift_logging_elasticsearch_cpu_limit` is either the default
'1000m' or derived from `openshift_logging_es_ops_cpu_limit`, but if the user
sets `openshift_logging_es_cpu_limit` in the inventory as documented, its
value is ignored.
This PR fixes the issue by setting
openshift_logging_elasticsearch_cpu_limit=openshift_logging_es_cpu_limit,
and including the role as -ops overrides this setting.
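A sketch of the documented inventory knob this restores (the value is an example):

```yaml
# inventory / group_vars; with this fix the value flows through to
# openshift_logging_elasticsearch_cpu_limit instead of being silently ignored
openshift_logging_es_cpu_limit: "2000m"
```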
Merged by openshift-bot
Merged by openshift-bot
logging set memory request to limit
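Presumably this means pinning the memory request to the configured memory limit; a minimal sketch with an example value:

```yaml
resources:
  limits:
    memory: "8Gi"
  requests:
    memory: "8Gi"   # request set equal to the limit so the pod gets guaranteed memory
```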
bug 1480878. Default pvc for logging
Merged by openshift-bot
Allow specifying an image version for each logging component
https://bugzilla.redhat.com/show_bug.cgi?id=1471322
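Illustrative inventory overrides; the per-component variable names are my assumption based on the role's naming scheme, so check the role defaults before relying on them:

```yaml
openshift_logging_kibana_image_version: "v3.6"         # assumed variable name, example tag
openshift_logging_curator_image_version: "v3.6"        # assumed variable name, example tag
openshift_logging_elasticsearch_image_version: "v3.6"  # assumed variable name, example tag
```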
avoid idempotent issues
creeping
openshift_logging_elasticsearch
Merged by openshift-bot
The readiness probe, as currently implemented, together with ES
master discovery through the logging-es service, causes a deadlock on
clusters with more than one node. The readiness probe will be
reintroduced in a later release.
More information in:
https://bugzilla.redhat.com/show_bug.cgi?id=1459430
logging: write ES heap dump to persistent storage
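Presumably this points the JVM heap-dump path at the ES persistent volume; a hedged sketch of the idea (the env var plumbing and file name are assumptions; only the /elasticsearch/persistent mount path comes from this log):

```yaml
env:
- name: ES_JAVA_OPTS
  # dump lands on the persistent volume instead of the container's ephemeral filesystem
  value: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/elasticsearch/persistent/heapdump.hprof"
```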
bugzilla:1463577 - Fix for dynamic pvs when using storageclasses.
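For reference, a dynamically provisioned logging PVC looks roughly like this (claim name and storage class are examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0
spec:
  storageClassName: glusterfs-storage   # example StorageClass; triggers dynamic provisioning
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
```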
Merged by openshift-bot
Merged by openshift-bot
bug 1460564. Fixes [BZ #1460564](https://bugzilla.redhat.com/show_bug.cgi?id=1460564).
Unfortunately, the defaults for Elasticsearch prior to v5 allow more
than one "node" to access the same configured storage volume(s).
This change forces the value to 1 to ensure we don't have an ES pod
starting up and accessing a volume while another ES pod is shutting
down during a redeploy. Otherwise, "1" directories can be created in
`/elasticsearch/persistent/${CLUSTER_NAME}/data/${CLUSTER_NAME}/nodes/`;
by default ES uses a "0" directory there when only one node is accessing
it.
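If my reading is right, the setting being pinned is Elasticsearch's node.max_local_storage_nodes (the setting name is my assumption, not stated in the log); a sketch of the corresponding elasticsearch.yml line:

```yaml
# elasticsearch.yml
node.max_local_storage_nodes: 1   # only one ES node may open this data path at a time
```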