# Google Managed Service for Prometheus Exporter

| Status        |           |
| ------------- |-----------|
| Stability     | [beta]: metrics |
| Distributions | [contrib], [observiq] |
| Issues        | [![Open issues](https://img.shields.io/github/issues-search/open-telemetry/opentelemetry-collector-contrib?query=is%3Aissue%20is%3Aopen%20label%3Aexporter%2Fgooglemanagedprometheus%20&label=open&color=orange&logo=opentelemetry)](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues?q=is%3Aopen+is%3Aissue+label%3Aexporter%2Fgooglemanagedprometheus) [![Closed issues](https://img.shields.io/github/issues-search/open-telemetry/opentelemetry-collector-contrib?query=is%3Aissue%20is%3Aclosed%20label%3Aexporter%2Fgooglemanagedprometheus%20&label=closed&color=blue&logo=opentelemetry)](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues?q=is%3Aclosed+is%3Aissue+label%3Aexporter%2Fgooglemanagedprometheus) |
| [Code Owners](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/CONTRIBUTING.md#becoming-a-code-owner) | [@aabmass](https://www.github.com/aabmass), [@dashpole](https://www.github.com/dashpole), [@jsuereth](https://www.github.com/jsuereth), [@punya](https://www.github.com/punya), [@damemi](https://www.github.com/damemi), [@psx95](https://www.github.com/psx95) |

[beta]: https://github.com/open-telemetry/opentelemetry-collector#beta
[contrib]: https://github.com/open-telemetry/opentelemetry-collector-releases/tree/main/distributions/otelcol-contrib
[observiq]: https://github.com/observIQ/observiq-otel-collector

This exporter can be used to send metrics (including trace exemplars) to [Google Cloud Managed Service for Prometheus](https://cloud.google.com/stackdriver/docs/managed-prometheus). It is one of [several supported approaches for sending metrics to Google Cloud Managed Service for Prometheus](https://cloud.google.com/stackdriver/docs/managed-prometheus#gmp-data-collection).

## Configuration Reference

The following configuration options are supported:

- `project` (optional): GCP project identifier.
- `user_agent` (optional): Override the user agent string sent on requests to Cloud Monitoring (currently only applies to metrics). Specify `{{version}}` to include the application version number. Defaults to `opentelemetry-collector-contrib {{version}}`.
- `metric` (optional): Configuration for sending metrics to Cloud Monitoring.
  - `endpoint` (optional): Endpoint where metric data is going to be sent to. Replaces `endpoint`.
  - `compression` (optional): Compression format for Metrics gRPC requests. Supported values: [`gzip`]. Defaults to no compression.
  - `grpc_pool_size` (optional): Sets the size of the connection pool in the GCP client. Defaults to a single connection.
  - `use_insecure` (optional): If true, disables gRPC client transport security. Only has effect if Endpoint is not "".
  - `add_metric_suffixes` (default=`true`): Add type and unit suffixes to metrics.
  - `extra_metrics_config` (optional): Enable or disable additional metrics.
    - `enable_target_info` (default=`true`): Add `target_info` metric based on the resource.
    - `enable_scope_info` (default=`true`): Add `otel_scope_info` metric and `scope_name`/`scope_version` attributes to all other metrics.
  - `resource_filters` (optional): Provides a list of filters to match resource attributes which will be included in metric labels.
    - `prefix` (optional): Match resource attribute keys by prefix.
    - `regex` (optional): Match resource attribute keys by regex.
- `sending_queue` (optional): Configuration for how to buffer metrics before sending.
  - `enabled` (default = true)
  - `num_consumers` (default = 10): Number of consumers that dequeue batches; ignored if `enabled` is `false`.
  - `queue_size` (default = 1000): Maximum number of batches kept in memory before dropping data; ignored if `enabled` is `false`. Users should calculate this as `num_seconds * requests_per_second`, where:
    - `num_seconds` is the number of seconds to buffer in case of a backend outage.
    - `requests_per_second` is the average number of requests per second.

Note: The `sending_queue` is provided (and documented) by the [Exporter Helper](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/exporterhelper#configuration)
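For reference, the sketch below shows how the options above could be combined in an exporter block. The project ID, filter prefix, and regex are placeholder values, and every field shown is optional; this is illustrative rather than a recommended configuration.

```yaml
exporters:
  googlemanagedprometheus:
    project: my-project          # placeholder project ID
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 1000
    metric:
      compression: gzip          # defaults to no compression
      add_metric_suffixes: true
      extra_metrics_config:
        enable_target_info: true
        enable_scope_info: true
      resource_filters:
        - prefix: "k8s.pod."     # placeholder prefix filter
        - regex: 'container\..*' # placeholder regex filter
```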
## Example Configuration

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        # Add your prometheus scrape configuration here.
        # Using kubernetes_sd_configs with namespaced resources (e.g. pod)
        # ensures the namespace is set on your metrics.
        - job_name: 'kubernetes-pods'
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
            - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
              action: replace
              regex: (.+):(?:\d+);(\d+)
              replacement: $$1:$$2
              target_label: __address__
            - action: labelmap
              regex: __meta_kubernetes_pod_label_(.+)
processors:
  batch:
    # batch metrics before sending to reduce API usage
    send_batch_max_size: 200
    send_batch_size: 200
    timeout: 5s
  memory_limiter:
    # drop metrics if memory usage gets too high
    check_interval: 1s
    limit_percentage: 65
    spike_limit_percentage: 20
  resourcedetection:
    # detect cluster name and location
    detectors: [gcp]
    timeout: 10s
  transform:
    # "location", "cluster", "namespace", "job", "instance", and "project_id" are reserved, and
    # metrics containing these labels will be rejected. Prefix them with exported_ to prevent this.
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["exported_location"], attributes["location"])
          - delete_key(attributes, "location")
          - set(attributes["exported_cluster"], attributes["cluster"])
          - delete_key(attributes, "cluster")
          - set(attributes["exported_namespace"], attributes["namespace"])
          - delete_key(attributes, "namespace")
          - set(attributes["exported_job"], attributes["job"])
          - delete_key(attributes, "job")
          - set(attributes["exported_instance"], attributes["instance"])
          - delete_key(attributes, "instance")
          - set(attributes["exported_project_id"], attributes["project_id"])
          - delete_key(attributes, "project_id")
exporters:
  googlemanagedprometheus:

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch, memory_limiter, transform, resourcedetection]
      exporters: [googlemanagedprometheus]
```

## Resource Attribute Handling

The Google Managed Prometheus exporter maps metrics to the [prometheus_target](https://cloud.google.com/monitoring/api/resources#tag_prometheus_target) monitored resource. The logic for mapping to monitored resources is designed to be used with the prometheus receiver, but can be used with other receivers as well. To avoid collisions (i.e. "duplicate timeseries encountered" errors), you need to ensure the prometheus_target resource uniquely identifies the source of metrics.

The exporter uses the following resource attributes to determine the monitored resource:

* location: [`location`, `cloud.availability_zone`, `cloud.region`]
* cluster: [`cluster`, `k8s.cluster.name`]
* namespace: [`namespace`, `k8s.namespace.name`]
* job: [`service.name` + `service.namespace`]
* instance: [`service.instance.id`]

In the configuration above, `cloud.availability_zone`, `cloud.region`, and `k8s.cluster.name` are detected using the `resourcedetection` processor with the `gcp` detector. The prometheus receiver sets `service.name` to the configured `job_name`, and `service.instance.id` is set to the scrape target's `instance`. The prometheus receiver sets `k8s.namespace.name` when using `role: pod`.
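As a concrete illustration of this mapping, a metric scraped by the example configuration above might carry the resource attributes shown below; all values are hypothetical.

```yaml
# Hypothetical resource attributes on a scraped metric:
#   cloud.availability_zone: us-central1-a     # from the resourcedetection processor
#   k8s.cluster.name:        example-cluster   # from the resourcedetection processor
#   k8s.namespace.name:      default           # from the prometheus receiver with role: pod
#   service.name:            kubernetes-pods   # from the configured job_name
#   service.instance.id:     10.8.0.15:9090    # from the scrape target's instance
#
# Resulting prometheus_target monitored resource labels:
#   location:  us-central1-a
#   cluster:   example-cluster
#   namespace: default
#   job:       kubernetes-pods   # service.namespace would also be incorporated if it were set
#   instance:  10.8.0.15:9090
```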
### Manually Setting location, cluster, or namespace

In GMP, the above attributes are used to identify the `prometheus_target` monitored resource. As such, it is recommended to avoid writing metric or resource labels that match these keys. Doing so can cause errors when exporting metrics to GMP or when trying to query from GMP. The recommended way to set them is with the [resourcedetection processor](../../processor/resourcedetectionprocessor). If you still need to set `location`, `cluster`, or `namespace` labels (such as when running in non-GCP environments), you can do so with the [resource processor](../../processor/resourceprocessor) like so:

```yaml
processors:
  resource:
    attributes:
      - key: "location"
        value: "us-east1"
        action: upsert
```

### Setting cluster, location, or namespace using metric labels

The transform processor in the example configuration above copies the `location` metric attribute to a new `exported_location` attribute, then deletes the original `location`. It is recommended to use the `exported_*` prefix, which is consistent with GMP's behavior.

You can also use the [groupbyattrs processor](../../processor/groupbyattrsprocessor) to move metric labels to resource labels. This is useful in situations where, for example, an exporter monitors multiple namespaces (with each namespace exported as a metric label). One such example is kube-state-metrics. Using `groupbyattrs` will promote that label to a resource label and associate those metrics with the new resource. For example:

```yaml
processors:
  groupbyattrs:
    keys:
      - namespace
      - cluster
      - location
```
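One possible way to wire this in is to add `groupbyattrs` to the metrics pipeline so the promoted labels are resource attributes by the time the exporter maps them to the `prometheus_target` monitored resource. The processor ordering below mirrors the example configuration above and is illustrative, not prescriptive.

```yaml
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      # groupbyattrs promotes the listed metric labels to resource attributes
      # before the exporter maps them to the prometheus_target resource.
      processors: [batch, memory_limiter, groupbyattrs, resourcedetection]
      exporters: [googlemanagedprometheus]
```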