# Kafka Exporter

| Status | |
|---|---|
| Stability | beta: traces, metrics, logs |
| Distributions | contrib, aws, observiq, splunk, sumo |
| Issues | |
| Code Owners | @pavolloffay, @MovieStoreGuy |
The Kafka exporter exports logs, metrics, and traces to Kafka. It uses a synchronous producer that blocks and does not batch messages; for higher throughput and resiliency it should therefore be used together with the batch and queued retry processors. Message payload encoding is configurable.
The following settings are required:

- `protocol_version` (no default): Kafka protocol version, e.g. `2.0.0`.
The following settings can be optionally configured:

- `brokers` (default = `localhost:9092`): The list of Kafka brokers.
- `resolve_canonical_bootstrap_servers_only` (default = `false`): Whether to resolve then reverse-lookup broker IPs during startup.
- `topic` (default = `otlp_spans` for traces, `otlp_metrics` for metrics, `otlp_logs` for logs): The name of the Kafka topic to export to.
- `encoding` (default = `otlp_proto`): The encoding of the payload sent to Kafka. All available encodings:
  - `otlp_proto`: payload is Protobuf serialized from `ExportTraceServiceRequest` if set as a traces exporter, `ExportMetricsServiceRequest` for metrics, or `ExportLogsServiceRequest` for logs.
  - `otlp_json`: payload is JSON serialized from `ExportTraceServiceRequest` if set as a traces exporter, `ExportMetricsServiceRequest` for metrics, or `ExportLogsServiceRequest` for logs.
  - `jaeger_proto`: the payload is serialized to a single Jaeger proto `Span`, and keyed by TraceID.
  - `jaeger_json`: the payload is serialized to a single Jaeger JSON `Span` using `jsonpb`, and keyed by TraceID.
  - `zipkin_proto`: the payload is serialized to Zipkin v2 proto `Span`.
  - `zipkin_json`: the payload is serialized to Zipkin v2 JSON `Span`.
  - `raw`: if the log record body is a byte array, it is sent as is. Otherwise, it is serialized to JSON. Resource and record attributes are discarded.
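As a sketch, a logs pipeline that publishes raw log bodies to a custom topic could be configured like this (the exporter name suffix and topic name are illustrative):

```yaml
exporters:
  kafka/logs:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
    topic: app_logs   # illustrative topic name
    encoding: raw     # log record bodies sent as-is; attributes are discarded
```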
- `auth`
  - `plain_text`
    - `username`: The username to use.
    - `password`: The password to use.
  - `sasl`
    - `username`: The username to use.
    - `password`: The password to use.
    - `mechanism`: The SASL mechanism to use (`SCRAM-SHA-256`, `SCRAM-SHA-512`, `AWS_MSK_IAM` or `PLAIN`).
    - `version` (default = `0`): The SASL protocol version to use (`0` or `1`).
    - `aws_msk.region`: AWS region, in case of the `AWS_MSK_IAM` mechanism.
    - `aws_msk.broker_addr`: MSK broker address, in case of the `AWS_MSK_IAM` mechanism.
  - `tls`
    - `ca_file`: path to the CA cert. For a client this verifies the server certificate. Should only be used if `insecure` is set to `false`.
    - `cert_file`: path to the TLS cert to use for TLS-required connections. Should only be used if `insecure` is set to `false`.
    - `key_file`: path to the TLS key to use for TLS-required connections. Should only be used if `insecure` is set to `false`.
    - `insecure` (default = `false`): Disable verifying the server's certificate chain and host name (`InsecureSkipVerify` in the TLS config).
    - `server_name_override`: ServerName indicates the name of the server requested by the client in order to support virtual hosting.
  - `kerberos`
    - `service_name`: Kerberos service name.
    - `realm`: Kerberos realm.
    - `use_keytab`: If `true`, the keytab file will be used for authentication instead of the password.
    - `username`: The Kerberos username used to authenticate with the KDC.
    - `password`: The Kerberos password used to authenticate with the KDC.
    - `config_file`: Path to the Kerberos configuration, e.g. `/etc/krb5.conf`.
    - `keytab_file`: Path to the keytab file, e.g. `/etc/security/kafka.keytab`.
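To make the auth settings concrete, a minimal sketch of SASL authentication over TLS might look like the following; the username, CA file path, and the environment-variable reference for the password are illustrative assumptions, not values the exporter requires:

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
    auth:
      sasl:
        username: kafka-user             # illustrative username
        password: ${env:KAFKA_PASSWORD}  # assumes env var substitution is used
        mechanism: SCRAM-SHA-512
      tls:
        ca_file: /etc/ssl/certs/ca.pem   # illustrative path
        insecure: false
```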
- `metadata`
  - `full` (default = `true`): Whether to maintain a full set of metadata. When disabled, the client does not make the initial request to the broker at startup.
  - `retry`
    - `max` (default = `3`): The number of retries to get metadata.
    - `backoff` (default = `250ms`): How long to wait between metadata retries.
- `timeout` (default = `5s`): The timeout for every attempt to send data to the backend.
- `retry_on_failure`
  - `enabled` (default = `true`)
  - `initial_interval` (default = `5s`): Time to wait after the first failure before retrying; ignored if `enabled` is `false`.
  - `max_interval` (default = `30s`): The upper bound on backoff; ignored if `enabled` is `false`.
  - `max_elapsed_time` (default = `120s`): The maximum amount of time spent trying to send a batch; ignored if `enabled` is `false`.
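For instance, the retry behavior could be tightened for a pipeline where stale data loses value quickly; the interval values below are illustrative, not recommendations:

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
    retry_on_failure:
      enabled: true
      initial_interval: 2s   # illustrative: retry sooner than the 5s default
      max_interval: 10s      # cap backoff below the 30s default
      max_elapsed_time: 60s  # give up on a batch after one minute
```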
- `sending_queue`
  - `enabled` (default = `true`)
  - `num_consumers` (default = `10`): Number of consumers that dequeue batches; ignored if `enabled` is `false`.
  - `queue_size` (default = `1000`): Maximum number of batches kept in memory before dropping data; ignored if `enabled` is `false`.
    Users should calculate this as `num_seconds * requests_per_second`, where:
    - `num_seconds` is the number of seconds to buffer in case of a backend outage.
    - `requests_per_second` is the average number of requests per second.
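Applying that sizing rule with assumed figures: to buffer a 60-second broker outage at an average of 50 requests per second, `queue_size` would be `60 * 50 = 3000`:

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
    sending_queue:
      enabled: true
      queue_size: 3000   # 60 s outage budget * 50 req/s (assumed workload)
```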
- `producer`
  - `max_message_bytes` (default = `1000000`): The maximum permitted size of a message in bytes.
  - `required_acks` (default = `1`): Controls when a message is regarded as transmitted. See https://pkg.go.dev/github.com/IBM/sarama@v1.30.0#RequiredAcks.
  - `compression` (default = `none`): The compression used when producing messages to Kafka. The options are: `none`, `gzip`, `snappy`, `lz4`, and `zstd`. See https://pkg.go.dev/github.com/IBM/sarama@v1.30.0#CompressionCodec.
  - `flush_max_messages` (default = `0`): The maximum number of messages the producer will send in a single broker request.

Example configuration:
```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
```
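The producer settings can be layered onto the same base configuration. The sketch below trades CPU for bandwidth with `gzip` compression and waits for all in-sync replicas to acknowledge; the specific values are illustrative, not recommendations:

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    protocol_version: 2.0.0
    producer:
      compression: gzip
      max_message_bytes: 4000000  # illustrative: allow larger messages than the 1 MB default
      required_acks: -1           # sarama WaitForAll: all in-sync replicas must ack
```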