This Helm chart installs the OpenTelemetry Demo in a Kubernetes cluster.
Add the OpenTelemetry Helm repository:

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```
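If the repository was added previously, refreshing the local chart index ensures the latest chart version is available; this uses the standard Helm command:

```shell
helm repo update
```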
To install the chart with the release name `my-otel-demo`, run the following command:

```shell
helm install my-otel-demo open-telemetry/opentelemetry-demo
```
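Chart parameters (documented below) can be supplied at install time through Helm's standard `--values` flag; the file name here is illustrative:

```shell
helm install my-otel-demo open-telemetry/opentelemetry-demo --values my-values.yaml
```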
To upgrade the chart, see UPGRADING.md.
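Version-specific migration steps are covered in UPGRADING.md; the generic Helm command to upgrade a release installed as above would be:

```shell
helm upgrade my-otel-demo open-telemetry/opentelemetry-demo
```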
Installing the chart on OpenShift requires the following additional steps:

1. Create a new project:

   ```shell
   oc new-project opentelemetry-demo
   ```

2. Create a new service account:

   ```shell
   oc create sa opentelemetry-demo
   ```

3. Add the service account to the `anyuid` SCC (may require cluster admin):

   ```shell
   oc adm policy add-scc-to-user anyuid -z opentelemetry-demo
   ```

4. Install the chart with the following command:

   ```shell
   helm install my-otel-demo charts/opentelemetry-demo \
     --namespace opentelemetry-demo \
     --set serviceAccount.create=false \
     --set serviceAccount.name=opentelemetry-demo \
     --set prometheus.rbac.create=false \
     --set prometheus.serviceAccounts.server.create=false \
     --set prometheus.serviceAccounts.server.name=opentelemetry-demo \
     --set grafana.rbac.create=false \
     --set grafana.serviceAccount.create=false \
     --set grafana.serviceAccount.name=opentelemetry-demo
   ```
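Once the release is installed, a quick way to confirm the demo pods are starting is the standard `oc` status check:

```shell
oc get pods -n opentelemetry-demo
```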
Chart parameters are separated into four general sections. The default parameters below apply to all demo components:
| Property | Description | Default |
|---|---|---|
| `default.env` | Environment variables added to all components | Array of several OpenTelemetry environment variables |
| `default.envOverrides` | Used to override individual environment variables without re-specifying the entire array | `[]` |
| `default.image.repository` | Demo components image name | `otel/demo` |
| `default.image.tag` | Demo components image tag (leave blank to use app version) | `nil` |
| `default.image.pullPolicy` | Demo components image pull policy | `IfNotPresent` |
| `default.image.pullSecrets` | Demo components image pull secrets | `[]` |
| `default.replicas` | Number of replicas for each component | `1` |
| `default.schedulingRules.nodeSelector` | Node labels for pod assignment | `{}` |
| `default.schedulingRules.affinity` | Map of node/pod affinities | `{}` |
| `default.schedulingRules.tolerations` | Tolerations for pod assignment | `[]` |
| `default.securityContext` | Demo components container security context | `{}` |
| `serviceAccount.annotations` | Annotations for the ServiceAccount | `{}` |
| `serviceAccount.create` | Whether to create a ServiceAccount or use an existing one | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to use for demo components | `""` |
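As an illustration, a values file overriding a few of these defaults might look like the sketch below; `OTEL_RESOURCE_ATTRIBUTES` is a standard OpenTelemetry variable assumed to be part of `default.env`, and the attribute value is a placeholder:

```yaml
# my-values.yaml -- illustrative overrides of the chart-wide defaults
default:
  image:
    pullPolicy: Always            # re-pull images on every pod start
  schedulingRules:
    nodeSelector:
      kubernetes.io/os: linux     # schedule demo pods on Linux nodes only
  envOverrides:
    # Overrides one variable without restating the whole default.env array.
    - name: OTEL_RESOURCE_ATTRIBUTES      # assumed present in default.env
      value: "deployment.environment=dev" # placeholder value
```

Apply it with `helm install my-otel-demo open-telemetry/opentelemetry-demo --values my-values.yaml`.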
The OpenTelemetry Demo contains several components (microservices). Each component is configured with a common set of parameters. All components are defined within `components.[NAME]`, where `[NAME]` is the name of the demo component.

**Note**: The following parameters require a `components.[NAME].` prefix, where `[NAME]` is the name of the demo component.
| Parameter | Description | Default |
|---|---|---|
| `enabled` | Is this component enabled | `true` |
| `useDefault.env` | Use the default environment variables in this component | `true` |
| `imageOverride.repository` | Name of image for this component | Defaults to the overall default image repository |
| `imageOverride.tag` | Tag of the image for this component | Defaults to the overall default image tag |
| `imageOverride.pullPolicy` | Image pull policy for this component | `IfNotPresent` |
| `imageOverride.pullSecrets` | Image pull secrets for this component | `[]` |
| `service.type` | Service type used for this component | `ClusterIP` |
| `service.port` | Service port used for this component | `nil` |
| `service.nodePort` | Service node port used for this component | `nil` |
| `service.annotations` | Annotations to add to the component's service | `{}` |
| `ports` | Array of ports to open for the deployment and service of this component | `[]` |
| `env` | Array of environment variables added to this component | Each component will have its own set of environment variables |
| `envOverrides` | Used to override individual environment variables without re-specifying the entire array | `[]` |
| `replicas` | Number of replicas for this component | `1` for `ffsPostgres`, `kafka`, and `redis`; `nil` otherwise |
| `resources` | CPU/Memory resource requests/limits | Each component will have a default memory limit set |
| `schedulingRules.nodeSelector` | Node labels for pod assignment | `{}` |
| `schedulingRules.affinity` | Map of node/pod affinities | `{}` |
| `schedulingRules.tolerations` | Tolerations for pod assignment | `[]` |
| `securityContext` | Container security context to define user ID (UID), group ID (GID), and other security policies | `{}` |
| `podAnnotations` | Pod annotations for this component | `{}` |
| `ingress.enabled` | Enable the creation of Ingress rules | `false` |
| `ingress.annotations` | Annotations to add to the ingress rule | `{}` |
| `ingress.ingressClassName` | Ingress class to use. If not specified, the default Ingress class will be used. | `nil` |
| `ingress.hosts` | Array of hosts to use for the ingress rule | `[]` |
| `ingress.hosts[].paths` | Array of paths/routes to use for the ingress rule host | `[]` |
| `ingress.hosts[].paths[].path` | Actual path route to use | `nil` |
| `ingress.hosts[].paths[].pathType` | Path type to use for the given path. Typically this is `Prefix`. | `nil` |
| `ingress.hosts[].paths[].port` | Port to use for the given path | `nil` |
| `ingress.additionalIngresses` | Array of additional ingress rules to add. This is handy if you need differently annotated ingress rules. | `[]` |
| `ingress.additionalIngresses[].name` | Each additional ingress rule needs to have a unique name | `nil` |
| `command` | Command and arguments to pass to the container being spun up for this service | `[]` |
| `configuration` | Configuration for the container being spun up; will create a ConfigMap, Volume, and VolumeMount | `{}` |
| `initContainers` | Array of init containers to add to the pod | `[]` |
| `initContainers[].name` | Name of the init container | `nil` |
| `initContainers[].image` | Image to use for the init container | `nil` |
| `initContainers[].command` | Command to run for the init container | `nil` |
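For example, a sketch enabling an ingress for one component might look like the following; `frontend` stands in for any demo component name, and the host, ingress class, and port are placeholders:

```yaml
components:
  frontend:                  # any demo component name
    replicas: 2
    ingress:
      enabled: true
      ingressClassName: nginx          # assumes an nginx ingress controller
      hosts:
        - host: otel-demo.example.com  # placeholder host
          paths:
            - path: /
              pathType: Prefix
              port: 8080               # placeholder port
```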
The OpenTelemetry Demo Helm chart depends on four sub-charts:

- OpenTelemetry Collector
- Jaeger
- Prometheus
- Grafana

Parameters for each sub-chart can be specified within that sub-chart's respective top level. This chart overrides some of the dependent sub-chart parameters by default. The overridden parameters are specified below.
**Note**: The following parameters have an `opentelemetry-collector.` prefix.
| Parameter | Description | Default |
|---|---|---|
| `enabled` | Install the OpenTelemetry Collector | `true` |
| `nameOverride` | Name that will be used by the sub-chart release | `otelcol` |
| `mode` | The Deployment or DaemonSet mode | `deployment` |
| `resources` | CPU/Memory resource requests/limits | 100Mi memory limit |
| `service.type` | Service type to use | `ClusterIP` |
| `ports` | Ports to enable for the collector pod and service | `metrics` is enabled and `prometheus` is defined/enabled |
| `podAnnotations` | Pod annotations | Annotations leveraged by Prometheus scrape |
| `config` | OpenTelemetry Collector configuration | Configuration required for the demo |
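As a sketch, extra collector configuration can be layered under the sub-chart's `config` key; the exporter name and endpoint below are placeholders, and with Helm's usual map-merge/list-replace semantics, setting a pipeline's `exporters` list typically replaces the demo's default exporters for that pipeline rather than appending to them:

```yaml
opentelemetry-collector:
  config:
    exporters:
      otlphttp/external:                              # placeholder exporter name
        endpoint: "https://collector.example.com:4318" # placeholder endpoint
    service:
      pipelines:
        traces:
          exporters: [otlphttp/external]  # replaces the default trace exporters
```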
**Note**: The following parameters have a `jaeger.` prefix.
| Parameter | Description | Default |
|---|---|---|
| `enabled` | Install the Jaeger sub-chart | `true` |
| `provisionDataStore.cassandra` | Provision a Cassandra data store | `false` (required for AllInOne mode) |
| `allInOne.enabled` | Enable the All-in-One in-memory configuration | `true` |
| `allInOne.args` | Command arguments to pass to the All-in-One deployment | `["--memory.max-traces", "10000", "--query.base-path", "/jaeger/ui"]` |
| `allInOne.resources` | CPU/Memory resource requests/limits for All-in-One | 275Mi memory limit |
| `storage.type` | Storage type to use | `none` (required for AllInOne mode) |
| `agent.enabled` | Enable the Jaeger agent | `false` (required for AllInOne mode) |
| `collector.enabled` | Enable the Jaeger Collector | `false` (required for AllInOne mode) |
| `query.enabled` | Enable Jaeger Query | `false` (required for AllInOne mode) |
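The All-in-One defaults keep traces in memory; a sketch of raising its memory limit (or dropping Jaeger entirely) via values might look like this:

```yaml
jaeger:
  allInOne:
    resources:
      limits:
        memory: 400Mi   # illustrative; the chart default limit is 275Mi
  # To omit Jaeger from the install instead:
  # enabled: false
```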
**Note**: The following parameters have a `prometheus.` prefix.
| Parameter | Description | Default |
|---|---|---|
| `enabled` | Install the Prometheus sub-chart | `true` |
| `alertmanager.enabled` | Install the Alertmanager | `false` |
| `configmapReload.prometheus.enabled` | Install the configmap-reload container | `false` |
| `kube-state-metrics.enabled` | Install the kube-state-metrics sub-chart | `false` |
| `prometheus-node-exporter.enabled` | Install the Prometheus Node Exporter sub-chart | `false` |
| `prometheus-pushgateway.enabled` | Install the Prometheus Pushgateway sub-chart | `false` |
| `server.extraFlags` | Additional flags to add to the Prometheus server | `["enable-feature=exemplar-storage"]` |
| `server.persistentVolume.enabled` | Enable persistent storage for Prometheus data | `false` |
| `server.global.scrape_interval` | How frequently to scrape targets by default | `5s` |
| `server.global.scrape_timeout` | How long until a scrape request times out | `3s` |
| `server.global.evaluation_interval` | How frequently to evaluate rules | `30s` |
| `service.servicePort` | Service port used | `9090` |
| `serverFiles.prometheus.yml` | Prometheus configuration file | Scrape config to get metrics from the OpenTelemetry Collector |
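For instance, enabling persistence and relaxing the scrape interval could be sketched as:

```yaml
prometheus:
  server:
    persistentVolume:
      enabled: true        # keep metric data across pod restarts
    global:
      scrape_interval: 15s # illustrative; the chart default is 5s
```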
**Note**: The following parameters have a `grafana.` prefix.
| Parameter | Description | Default |
|---|---|---|
| `enabled` | Install the Grafana sub-chart | `true` |
| `grafana.ini` | Grafana's primary configuration | Enables anonymous login and proxying through the frontendProxy service |
| `adminPassword` | Password used by the admin user | `admin` |
| `rbac.pspEnabled` | Enable PodSecurityPolicy resources | `false` |
| `datasources` | Configure Grafana data sources (passed through `tpl`) | Prometheus and Jaeger data sources |
| `dashboardProviders` | Configure Grafana dashboard providers | Defines a default provider based on a file path |
| `dashboardConfigMaps` | ConfigMap references that contain dashboards | Dashboard ConfigMap deployed with this Helm chart |
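Since the chart enables anonymous login by default, a minimal hardening sketch is to set a non-default admin password (the value shown is a placeholder):

```yaml
grafana:
  adminPassword: change-me   # placeholder; the chart default is "admin"
```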