Kafka Exporter

Status
  Stability: beta (traces, metrics, logs)
  Distributions: contrib, aws, observiq, splunk, sumo
  Code Owners: @pavolloffay, @MovieStoreGuy

The Kafka exporter exports logs, metrics, and traces to Kafka. It uses a synchronous producer that blocks and does not batch messages, so it should be paired with the batch and queued retry processors for higher throughput and resiliency (see the example pipeline at the end of this document). Message payload encoding is configurable.

The following settings are required:

  • protocol_version (no default): The Kafka protocol version, e.g. 2.0.0.

The following settings can be optionally configured:

  • brokers (default = localhost:9092): The list of Kafka brokers.
  • resolve_canonical_bootstrap_servers_only (default = false): Whether to resolve then reverse-lookup broker IPs during startup.
  • topic (default = otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs): The name of the Kafka topic to export to.
  • encoding (default = otlp_proto): The encoding of the payload sent to Kafka (an example follows this list). All available encodings:
    • otlp_proto: the payload is Protobuf-serialized from ExportTraceServiceRequest for traces, ExportMetricsServiceRequest for metrics, or ExportLogsServiceRequest for logs.
    • otlp_json: the payload is JSON-serialized from the same request messages.
    The following encodings are valid only for traces:
    • jaeger_proto: the payload is serialized to a single Jaeger proto Span, and keyed by TraceID.
    • jaeger_json: the payload is serialized to a single Jaeger JSON Span using jsonpb, and keyed by TraceID.
    • zipkin_proto: the payload is serialized to Zipkin v2 proto Span.
    • zipkin_json: the payload is serialized to Zipkin v2 JSON Span.
    The following encoding is valid only for logs:
    • raw: if the log record body is a byte array, it is sent as-is; otherwise, it is serialized to JSON. Resource and record attributes are discarded.
  • auth (see the authentication sketch after this list)
    • plain_text
      • username: The username to use.
      • password: The password to use.
    • sasl
      • username: The username to use.
      • password: The password to use.
      • mechanism: The SASL mechanism to use (SCRAM-SHA-256, SCRAM-SHA-512, AWS_MSK_IAM, or PLAIN).
      • version (default = 0): The SASL protocol version to use (0 or 1).
      • aws_msk.region: AWS region, required when the AWS_MSK_IAM mechanism is used.
      • aws_msk.broker_addr: MSK broker address, required when the AWS_MSK_IAM mechanism is used.
    • tls
      • ca_file: Path to the CA certificate. For a client this verifies the server certificate. Should only be used if insecure is set to false.
      • cert_file: Path to the TLS certificate to use for TLS-required connections. Should only be used if insecure is set to false.
      • key_file: Path to the TLS key to use for TLS-required connections. Should only be used if insecure is set to false.
      • insecure (default = false): Disable verifying the server's certificate chain and host name (InsecureSkipVerify in the TLS config).
      • server_name_override: ServerName indicates the name of the server requested by the client in order to support virtual hosting.
    • kerberos
      • service_name: Kerberos service name.
      • realm: Kerberos realm.
      • use_keytab: If true, the keytab file is used for authentication instead of the password.
      • username: The Kerberos username used to authenticate with the KDC.
      • password: The Kerberos password used to authenticate with the KDC.
      • config_file: Path to the Kerberos configuration, e.g. /etc/krb5.conf.
      • keytab_file: Path to the keytab file, e.g. /etc/security/kafka.keytab.
  • metadata
    • full (default = true): Whether to maintain a full set of metadata. When disabled, the client does not make the initial metadata request to the broker at startup.
    • retry
      • max (default = 3): The number of retries when fetching metadata.
      • backoff (default = 250ms): How long to wait between metadata retries.
  • timeout (default = 5s): The timeout for every attempt to send data to the backend.
  • retry_on_failure
    • enabled (default = true)
    • initial_interval (default = 5s): Time to wait after the first failure before retrying; ignored if enabled is false.
    • max_interval (default = 30s): The upper bound on backoff; ignored if enabled is false.
    • max_elapsed_time (default = 120s): The maximum amount of time spent trying to send a batch; ignored if enabled is false.
  • sending_queue
    • enabled (default = true)
    • num_consumers (default = 10): Number of consumers that dequeue batches; ignored if enabled is false.
    • queue_size (default = 1000): Maximum number of batches kept in memory before dropping data; ignored if enabled is false. Users should calculate this as num_seconds * requests_per_second (a tuned example follows the configuration samples below), where:
      • num_seconds is the number of seconds to buffer in case of a backend outage
      • requests_per_second is the average number of requests per second
  • producer
    • max_message_bytes (default = 1000000): The maximum permitted size of a message in bytes.
    • required_acks (default = 1): Controls when a message is regarded as transmitted.
    • compression (default = 'none'): The compression used when producing messages to Kafka. The options are: 'none', 'gzip', 'snappy', 'lz4', and 'zstd'.
    • flush_max_messages (default = 0): The maximum number of messages the producer will send in a single broker request.
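
As an illustration of the topic and encoding settings above, a logs exporter that overrides the default topic and uses the logs-only raw encoding could look like this (the topic name is just an example, not a convention):

exporters:
  kafka/logs:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
    topic: app_logs   # example name; the default for logs is otlp_logs
    encoding: raw     # logs-only; byte-array record bodies are sent as-is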
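
A minimal sketch of the auth block, assuming a SASL/SCRAM-secured cluster; the username, environment variable, and CA path are placeholders for your own values:

exporters:
  kafka:
    brokers:
      - broker-1:9093
    protocol_version: 2.0.0
    auth:
      sasl:
        username: otel                     # placeholder credentials
        password: ${env:KAFKA_PASSWORD}    # placeholder environment variable
        mechanism: SCRAM-SHA-512
      tls:
        ca_file: /etc/ssl/certs/kafka-ca.pem   # example CA path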

Example configuration:

exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
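
Because the producer is synchronous, a typical deployment pairs the exporter with the batch processor, as noted above. A sketch of a full traces pipeline, assuming an OTLP receiver:

receivers:
  otlp:
    protocols:
      grpc:
processors:
  batch:
exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [kafka]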
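
The retry and queue settings can also be tuned for resiliency. The numbers below are illustrative only, with queue_size sized by the num_seconds * requests_per_second rule of thumb (a 60-second outage buffer at 50 requests per second):

exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
    timeout: 10s
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 120s
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 3000   # 60 s outage buffer * 50 requests/s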