# Kubernetes Cluster Receiver

| Status        |                                     |
|---------------|-------------------------------------|
| Stability     | beta: metrics                       |
|               | development: logs                   |
| Distributions | contrib, observiq, splunk, sumo     |
| Code Owners   | @dmitryax, @TylerHelmuth, @povilasv |
The Kubernetes Cluster receiver collects cluster-level metrics and entity events from the Kubernetes API server. It uses the K8s API to listen for updates. A single instance of this receiver should be used to monitor a cluster.
Currently, this receiver supports authentication via service accounts only. See the example below for more information.
Details about the metrics produced by this receiver can be found in metadata.yaml and documentation.md.
The following settings are required:

- `auth_type` (default = `serviceAccount`): Determines how to authenticate to the K8s API server. This can be one of `none` (for no auth), `serviceAccount` (to use the standard service account token provided to the agent pod), or `kubeConfig` (to use credentials from `~/.kube/config`).

The following settings are optional:
- `collection_interval` (default = `10s`): This receiver continuously watches for events using the K8s API. However, the collected metrics are emitted only once per collection interval. `collection_interval` determines the frequency at which this receiver emits metrics.
- `metadata_collection_interval` (default = `5m`): Collection interval for metadata of K8s entities such as pods, nodes, etc. Metadata of a particular entity is collected when the entity changes. In addition, metadata of all entities is collected periodically even if no changes happen. This setting controls the interval between periodic collections. Setting the duration to 0 disables periodic collection (but does not affect metadata collection on changes).
- `node_conditions_to_report` (default = `[Ready]`): An array of node conditions this receiver should report. See here for the list of node conditions. The receiver will emit one metric per entry in the array.
- `distribution` (default = `kubernetes`): The Kubernetes distribution used by the cluster. Currently supported values are `kubernetes` and `openshift`. Setting the value to `openshift` enables OpenShift-specific metrics in addition to the standard Kubernetes ones.
- `allocatable_types_to_report` (default = `[]`): An array of allocatable resource types this receiver should report (for example, `cpu` and `memory`).
- `metrics`: Allows enabling/disabling individual metrics.
- `resource_attributes`: Allows enabling/disabling individual resource attributes.

Example:
```yaml
k8s_cluster:
  auth_type: kubeConfig
  node_conditions_to_report: [Ready, MemoryPressure]
  allocatable_types_to_report: [cpu, memory]
  metrics:
    k8s.container.cpu_limit:
      enabled: false
  resource_attributes:
    container.id:
      enabled: false
```
The full list of settings exposed for this receiver is documented here, with detailed sample configurations here.
For example, with the config below the receiver will emit two metrics, `k8s.node.condition_ready` and `k8s.node.condition_memory_pressure`, one for each condition in the config. The value will be `1` if the `ConditionStatus` for the corresponding `Condition` is `True`, `0` if it is `False`, and `-1` if it is `Unknown`.
```yaml
...
k8s_cluster:
  node_conditions_to_report:
    - Ready
    - MemoryPressure
...
```
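The condition-to-value mapping described above can be sketched in Go as follows. This is an illustrative stand-alone function, not the receiver's actual implementation; the `nodeConditionValue` name is hypothetical, and a plain `string` is used in place of the Kubernetes `ConditionStatus` type to keep the sketch self-contained.

```go
package main

import "fmt"

// nodeConditionValue sketches the mapping described above:
// "True" -> 1, "False" -> 0, anything else ("Unknown") -> -1.
func nodeConditionValue(status string) int64 {
	switch status {
	case "True":
		return 1
	case "False":
		return 0
	default:
		return -1
	}
}

func main() {
	fmt.Println(nodeConditionValue("True"), nodeConditionValue("False"), nodeConditionValue("Unknown"))
	// prints: 1 0 -1
}
```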
`metadata_exporters`: A list of metadata exporters to which metadata collected by this receiver should be synced. Exporters specified in this list are expected to implement the following interface. If an exporter that does not implement the interface is listed, startup will fail.
```go
type MetadataExporter interface {
	ConsumeMetadata(metadata []*MetadataUpdate) error
}

type MetadataUpdate struct {
	ResourceIDKey string
	ResourceID    ResourceID
	MetadataDelta
}

type MetadataDelta struct {
	MetadataToAdd    map[string]string
	MetadataToRemove map[string]string
	MetadataToUpdate map[string]string
}
```
See here for details about the above types.
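For illustration, a minimal exporter satisfying this interface might look like the sketch below. The `logMetadataExporter` name and its logging behavior are hypothetical, and the types are redeclared locally so the sketch compiles on its own; a real exporter would use the receiver's actual type definitions instead.

```go
package main

import "fmt"

// Local stand-ins for the types shown above, so this sketch is self-contained.
type ResourceID string

type MetadataDelta struct {
	MetadataToAdd    map[string]string
	MetadataToRemove map[string]string
	MetadataToUpdate map[string]string
}

type MetadataUpdate struct {
	ResourceIDKey string
	ResourceID    ResourceID
	MetadataDelta
}

// logMetadataExporter is a hypothetical exporter that just prints each update.
// Any exporter listed under metadata_exporters must provide ConsumeMetadata
// with this signature.
type logMetadataExporter struct{}

func (e *logMetadataExporter) ConsumeMetadata(metadata []*MetadataUpdate) error {
	for _, u := range metadata {
		fmt.Printf("resource %s=%s: add=%d remove=%d update=%d\n",
			u.ResourceIDKey, u.ResourceID,
			len(u.MetadataToAdd), len(u.MetadataToRemove), len(u.MetadataToUpdate))
	}
	return nil
}

func main() {
	exp := &logMetadataExporter{}
	_ = exp.ConsumeMetadata([]*MetadataUpdate{{
		ResourceIDKey: "k8s.pod.uid",
		ResourceID:    "abc-123",
		MetadataDelta: MetadataDelta{MetadataToAdd: map[string]string{"k8s.pod.phase": "Running"}},
	}})
}
```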
The same metadata will also be emitted as entity events in the form of log records if this receiver is connected to a logs pipeline. See here for the format of emitted log records.
Here is an example deployment of the collector that sets up this receiver along with the debug exporter. Follow the sections below to set up the Kubernetes resources required for the deployment.
Create a ConfigMap with the config for `otelcontribcol`:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
data:
  config.yaml: |
    receivers:
      k8s_cluster:
        collection_interval: 10s
    exporters:
      debug:
    service:
      pipelines:
        metrics:
          receivers: [k8s_cluster]
          exporters: [debug]
        logs/entity_events:
          receivers: [k8s_cluster]
          exporters: [debug]
EOF
```
Create a service account that the collector should use.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: otelcontribcol
  name: otelcontribcol
EOF
```
Use the commands below to create a `ClusterRole` with the required permissions and a `ClusterRoleBinding` to grant the role to the service account created above.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - namespaces/status
  - nodes
  - nodes/spec
  - pods
  - pods/status
  - replicationcontrollers
  - replicationcontrollers/status
  - resourcequotas
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
EOF
```
```bash
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otelcontribcol
subjects:
- kind: ServiceAccount
  name: otelcontribcol
  namespace: default
EOF
```
Create a Deployment to deploy the collector.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otelcontribcol
  template:
    metadata:
      labels:
        app: otelcontribcol
    spec:
      serviceAccountName: otelcontribcol
      containers:
      - name: otelcontribcol
        image: otel/opentelemetry-collector-contrib
        args: ["--config", "/etc/config/config.yaml"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
        imagePullPolicy: IfNotPresent
      volumes:
      - name: config
        configMap:
          name: otelcontribcol
EOF
```
You can enable OpenShift support to collect OpenShift-specific metrics in addition to the default Kubernetes ones. To do this, set the `distribution` key to `openshift`.
Example:

```yaml
k8s_cluster:
  distribution: openshift
```
Add the following rules to your `ClusterRole`:

```yaml
- apiGroups:
  - quota.openshift.io
  resources:
  - clusterresourcequotas
  verbs:
  - get
  - list
  - watch
```