
File Exporter

Status
Stability: alpha (traces, metrics, logs)
Distributions: core, contrib, aws, observiq, splunk, sumo
Code Owners: @atingchen

The exporter supports the following features:

  • Support for writing pipeline data to a file.

  • Support for rotation of telemetry files.

  • Support for compressing the telemetry data before exporting.

Please note that there is no guarantee that exact field names will remain stable. This exporter is intended primarily for debugging the Collector without setting up backends.

The official opentelemetry-collector-contrib container does not have a writable filesystem by default since it's built on the scratch layer. As such, you will need to create a writable directory for the path, potentially by mounting writable volumes or creating a custom image.

Configuration options:

The following settings are required:

  • path [no default]: where to write information.

The following settings are optional:

  • rotation settings to rotate telemetry files.

    • max_megabytes: [default: 100]: the maximum size in megabytes of the telemetry file before it is rotated.
    • max_days: [no default (unlimited)]: the maximum number of days to retain telemetry files based on the timestamp encoded in their filename.
    • max_backups: [default: 100]: the maximum number of old telemetry files to retain.
    • localtime: [default: false (use UTC)] whether or not the timestamps in backup files are formatted according to the host's local time.
  • format [default: json]: defines the data format of encoded telemetry data. It can be set to proto instead.

  • compression [no default]: the compression algorithm used when exporting telemetry data to file. Supported compression algorithms: zstd

  • flush_interval [default: 1s]: time.Duration interval between flushes. See time.ParseDuration for valid formats. NOTE: a value without a unit is interpreted as nanoseconds. flush_interval is ignored, and writes are not buffered, if rotation is set.

File Rotation

Telemetry data is exported to a single file by default. fileexporter only enables file rotation when the user specifies rotation: in the config. If rotation: is specified, the default values listed above apply to any rotation settings that are omitted.

Telemetry is first written to a file that exactly matches the path setting. When the file size exceeds max_megabytes or age exceeds max_days, the file will be rotated.

When a file is rotated, it is renamed by inserting a timestamp of the current time into the name immediately before the file's extension (or at the end of the filename if there is no extension). A new telemetry file will be created at the original path.

For example, if your path is data.json and rotation is triggered, the file will be renamed to data-2022-09-14T05-02-14.173.json and a new telemetry file will be created at data.json.

File Compression

Telemetry data is compressed according to the compression setting. fileexporter does not compress data by default.

Currently, fileexporter supports the zstd compression algorithm; more compression algorithms may be supported in the future.

File Format

Telemetry data is encoded according to the format setting and then written to the file.

When format is json and compression is none, telemetry data is written to the file in JSON format. Each line in the file is a JSON object.
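
For illustration only, here is a minimal Go sketch of reading such a file back using the collector's pdata package. The file name metrics.json is a placeholder, and pmetric.JSONUnmarshaler is just one way to decode the lines; nothing here is required by the exporter itself.

package main

import (
	"bufio"
	"fmt"
	"os"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	// Open a file written with format: json and no compression (path is hypothetical).
	f, err := os.Open("metrics.json")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	unmarshaler := &pmetric.JSONUnmarshaler{}
	scanner := bufio.NewScanner(f)
	// Large exports can produce long lines, so grow the scanner buffer beyond the 64 KiB default.
	scanner.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)

	// Each line is one JSON-encoded metrics payload.
	for scanner.Scan() {
		metrics, err := unmarshaler.UnmarshalMetrics(scanner.Bytes())
		if err != nil {
			panic(err)
		}
		fmt.Println("data points:", metrics.DataPointCount())
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}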

Otherwise, when using proto format or any kind of compression, each encoded object is preceded by 4 bytes (an unsigned 32-bit integer) that represent the number of bytes contained in the encoded object. To read the messages back, read the size, read that many bytes into a separate buffer, and then parse from that buffer.
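
A rough Go sketch of that read path follows, assuming a file written with format: proto and no compression. The file name traces.bin is a placeholder, and the big-endian interpretation of the 4-byte prefix is an assumption not spelled out in this README.

package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"os"

	"go.opentelemetry.io/collector/pdata/ptrace"
)

func main() {
	// Open a file written with format: proto and no compression (path is hypothetical).
	f, err := os.Open("traces.bin")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	unmarshaler := &ptrace.ProtoUnmarshaler{}
	var sizeBuf [4]byte
	for {
		// Read the 4-byte length prefix (assumed big-endian here).
		if _, err := io.ReadFull(f, sizeBuf[:]); err != nil {
			if errors.Is(err, io.EOF) {
				break // clean end of file
			}
			panic(err)
		}
		size := binary.BigEndian.Uint32(sizeBuf[:])

		// Read exactly that many bytes and parse them as one traces payload.
		msg := make([]byte, size)
		if _, err := io.ReadFull(f, msg); err != nil {
			panic(err)
		}
		traces, err := unmarshaler.UnmarshalTraces(msg)
		if err != nil {
			panic(err)
		}
		fmt.Println("spans:", traces.SpanCount())
	}
}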

Example:

exporters:
  file/no_rotation:
    path: ./foo

  file/rotation_with_default_settings:
    path: ./foo
    rotation:

  file/rotation_with_custom_settings:
    path: ./foo
    rotation:
      max_megabytes: 10
      max_days: 3
      max_backups: 3
      localtime: true
    format: proto
    compression: zstd

  file/flush_every_5_seconds:
    path: ./foo
    flush_interval: 5s

Get Started in an existing cluster

We will follow the OpenTelemetry Operator documentation to first install the operator in an existing cluster and then create an OpenTelemetry Collector (otelcol) instance, mounting an additional volume under /data, to which the file exporter will write metrics.json:

kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: fileexporter
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:

    exporters:
      debug:
      file:
        path: /data/metrics.json

    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: []
          exporters: [debug, file]
  volumes:
    - name: file
      emptyDir: {}
  volumeMounts: 
    - name: file
      mountPath: /data
EOF