# Journald Receiver
| Status | |
| --- | --- |
| Stability | alpha: logs |
| Distributions | contrib, observiq, splunk, sumo |
| Issues | |
| Code Owners | @sumo-drosiek, @djaglowski |
Parses journald events from the systemd journal. The journald receiver requires that the `journalctl` binary is present in the `$PATH` of the agent.
| Field | Default | Description |
| --- | --- | --- |
| `directory` | `/run/log/journal` or `/run/journal` | A directory containing journal files to read entries from |
| `files` | | A list of journal files to read entries from |
| `start_at` | `end` | At startup, where to start reading logs from the file. Options are `beginning` or `end` |
| `units` | | A list of units to read entries from. See the Multiple filtering options examples below. |
| `identifiers` | | Filter output by message identifiers (`SYSLOG_IDENTIFIER`). See the Multiple filtering options examples below. |
| `matches` | | A list of matches to read entries from. See the Matches and Multiple filtering options examples below. |
| `priority` | `info` | Filter output by message priorities or priority ranges. See the Multiple filtering options examples below. |
| `grep` | | Filter output to entries where the `MESSAGE=` field matches the specified regular expression. See the Multiple filtering options examples below. |
| `dmesg` | `false` | Show only kernel messages. This shows logs from the current boot and adds the match `_TRANSPORT=kernel`. See the Multiple filtering options examples below. |
| `storage` | none | The ID of a storage extension to be used to store cursors. Cursors allow the receiver to pick up where it left off in the case of a collector restart. If no storage extension is used, the receiver will manage cursors in memory only. |
| `retry_on_failure.enabled` | `false` | If `true`, the receiver will pause reading a file and attempt to resend the current batch of logs if it encounters an error from downstream components. |
| `retry_on_failure.initial_interval` | `1 second` | Time to wait after the first failure before retrying. |
| `retry_on_failure.max_interval` | `30 seconds` | Upper bound on the retry backoff interval. Once this value is reached, the delay between consecutive retries remains constant at the specified value. |
| `retry_on_failure.max_elapsed_time` | `5 minutes` | Maximum amount of time (including retries) spent trying to send a logs batch to a downstream consumer. Once this value is reached, the data is discarded. Retrying never stops if set to `0`. |
| `operators` | `[]` | An array of operators. See below for more details |
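To see how the `storage` and `retry_on_failure` fields from the table fit together, consider the following sketch. It assumes a `file_storage` extension is available in your collector build; the extension name and directory path are illustrative, not prescribed by this receiver:

```yaml
# Sketch only: assumes the file_storage extension is included in your build.
extensions:
  file_storage:
    directory: /var/lib/otelcol/storage   # must be writable by the collector user

receivers:
  journald:
    units:
      - ssh
    storage: file_storage   # persist cursors so reading resumes after a restart
    retry_on_failure:
      enabled: true
      initial_interval: 1s
      max_interval: 30s
      max_elapsed_time: 5m
```

Without the `storage` entry, cursors live only in memory and the receiver starts over (according to `start_at`) after each restart.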
Each operator performs a simple responsibility, such as parsing a timestamp or JSON. Chain together operators to process logs into a desired format.

- Every operator has a `type`.
- Every operator can be given a unique `id`. If you use the same type of operator more than once in a pipeline, you must specify an `id`. Otherwise, the `id` defaults to the value of `type`.
- The optional `output` parameter can be used to specify the `id` of another operator to which logs will be passed directly.

Example configuration:

```yaml
receivers:
  journald:
    directory: /run/log/journal
    units:
      - ssh
      - kubelet
      - docker
      - containerd
    priority: info
```
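The `operators` field can be combined with the filtering options above. As an illustrative sketch (the `add` operator and its `field`/`value` parameters come from the shared log-collection operator set; verify availability against your collector version):

```yaml
# Sketch only: tags each journald entry with an attribute downstream
# processors can use to identify the source.
receivers:
  journald:
    units:
      - ssh
    operators:
      - type: add
        field: attributes.log_source
        value: journald
```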
The following configuration:

```yaml
- type: journald_input
  matches:
    - _SYSTEMD_UNIT: ssh
    - _SYSTEMD_UNIT: kubelet
      _UID: "1000"
```

will be passed to `journalctl` as the following arguments: `journalctl ... _SYSTEMD_UNIT=ssh + _SYSTEMD_UNIT=kubelet _UID=1000`, which is going to retrieve all entries that match at least one of the following rules:

- `_SYSTEMD_UNIT` is `ssh`
- `_SYSTEMD_UNIT` is `kubelet` and `_UID` is `1000`
When multiple of the above filtering options are used together, conditions across different options are logically ANDed, while conditions within a single option are logically ORed:

```
( dmesg )
AND
( priority )
AND
( units[0] OR units[1] OR units[2] OR ... units[U] )
AND
( identifier[0] OR identifier[1] OR identifier[2] OR ... identifier[I] )
AND
( matches[0] OR matches[1] OR matches[2] OR ... matches[M] )
AND
( grep )
```
Consider the following example:

```yaml
- type: journald_input
  matches:
    - _SYSTEMD_UNIT: ssh
    - _SYSTEMD_UNIT: kubelet
      _UID: "1000"
  units:
    - kubelet
    - systemd
  priority: info
  identifiers:
    - systemd
```

The above configuration will be passed to `journalctl` as the following arguments: `journalctl ... --priority=info --unit=kubelet --unit=systemd --identifier=systemd _SYSTEMD_UNIT=ssh + _SYSTEMD_UNIT=kubelet _UID=1000`,
which effectively retrieves all entries that match the following set of rules:

- `_PRIORITY` is `6`, and
- `_SYSTEMD_UNIT` is `kubelet` or `systemd`, and
- `SYSLOG_IDENTIFIER` is `systemd`, and
- the entry matches at least one of the following rules:
  - `_SYSTEMD_UNIT` is `ssh`
  - `_SYSTEMD_UNIT` is `kubelet` and `_UID` is `1000`
The user running the collector must have enough permissions to access the journal; not granting them will lead to issues.
When running in a containerized environment, differences in the systemd version running on the host and on the container may prevent access to logs due to different features and configurations (e.g. zstd compression, keyed hash etc).
When running otelcol in a container, note that:

- the journal directory (`/run/log/journal`, `/var/log/journal`, ...) must be mounted in the container
- the official otelcol images do not contain the `journalctl` binary; you will need to create a custom image or find one that does.
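One way to obtain such an image is to layer the collector binary on top of a base image that ships `journalctl`. The following is only a sketch: the image tags, binary path, and config path are assumptions to adjust for your setup:

```dockerfile
# Sketch only: image names, tags, and paths are assumptions.
FROM otel/opentelemetry-collector-contrib:latest AS otelcol

FROM debian:bookworm-slim
# The systemd package provides the journalctl binary.
RUN apt-get update && apt-get install -y --no-install-recommends systemd \
    && rm -rf /var/lib/apt/lists/*
COPY --from=otelcol /otelcol-contrib /otelcol-contrib
ENTRYPOINT ["/otelcol-contrib"]
CMD ["--config", "/etc/otelcol-contrib/config.yaml"]
```

Remember to also mount the host's journal directory (and, depending on your setup, `/etc/machine-id`) into the running container.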
When installing otelcol as a Linux package, you will most likely need to add the otelcol-contrib
or otel
user to the systemd-journal
group. The exact user and group might vary depending on your package and Linux distribution of choice.
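For example, on distributions that use the `systemd-journal` group, granting access could look like the following; the `otelcol-contrib` user and service names are assumptions to check against your package:

```
# Add the collector user to the group that can read the journal,
# then restart the service so the new membership takes effect.
sudo usermod -aG systemd-journal otelcol-contrib
sudo systemctl restart otelcol-contrib
```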
You can test whether the user has sufficient permissions by running something like the following (adjust the shell and user to your setup):

```sh
sudo su -s /bin/bash -c 'journalctl --lines 5' otelcol-contrib
```

If the permissions are set correctly, you will see some logs; otherwise, you will get a clear error message.
See the instructions for Docker and adapt according to your Kubernetes distribution and node OS.