6.3. Working with Logs #
This section describes the steps required to manage logs.
6.3.1. Adding and Configuring the filelog Receiver #
The filelog receiver is an open component of OpenTelemetry that is used for collecting logs from the DBMS instance. Detailed information about this receiver is available in the OpenTelemetry documentation.
The filelog receiver should be added to the receivers section and configured.
The receiver configuration depends on the database instance setup and the log format used (see the logging_collector and log_destination parameters). The collector supports log collection in the CSV and JSON formats.
Regardless of the log format, the path to the log directory and the template for log file names need to be specified.
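For reference, a database instance that produces such log files might use postgresql.conf settings along the following lines (the directory and file name pattern are illustrative and must match the paths used in the receiver configuration below):

  logging_collector = on
  log_destination = 'jsonlog'          # or 'csvlog' for the CSV format
  log_directory = '/var/log/postgresql'
  log_filename = 'postgresql-%Y-%m-%d.log'
  # For jsonlog and csvlog, PostgreSQL replaces the .log extension
  # with .json or .csv, respectively, in the actual file names.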
An example of setting up a receiver for collecting logs in the JSON format:

receivers:
  filelog:
    include: [ /var/log/postgresql/*.json ]
    start_at: end
    retry_on_failure:
      enabled: true
      initial_interval: 1s
      max_interval: 30s
      max_elapsed_time: 5m
    operators:
      - type: json_parser
        parse_ints: true
        timestamp:
          parse_from: attributes.timestamp
          layout_type: strptime
          layout: '%Y-%m-%d %H:%M:%S.%L %Z'
      - type: remove
        field: attributes.timestamp
An example of setting up a receiver for collecting logs in the CSV format:

receivers:
  filelog:
    include: [ /var/log/postgresql/*.csv ]
    start_at: end
    retry_on_failure:
      enabled: true
      initial_interval: 1s
      max_interval: 30s
      max_elapsed_time: 5m
    multiline:
      line_start_pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}
    operators:
      - type: csv_parser
        header: timestamp,user,dbname,pid,connection_from,session_id,line_num,ps,session_start,vxid,txid,error_severity,state_code,message,detail,hint,internal_query,internal_position,context,statement,cursor_position,func_name,application_name,backend_type,leader_pid,query_id
        timestamp:
          parse_from: attributes.timestamp
          layout_type: strptime
          layout: '%Y-%m-%d %H:%M:%S.%L %Z'
      - type: remove
        field: attributes.timestamp
Note
The CSV configuration requires more parameters than other formats because it must account for the specifics of CSV logging, such as multiline records and the fixed column layout.
A detailed description of configuration parameters with examples can be found in the /usr/share/doc/pgpro-otel-collector/examples directory.
6.3.2. Adding and Configuring the attributes and resource Processors #
The attributes and resource processors are open components of OpenTelemetry.
The processor configuration also depends on the database instance setup and the log format used (see the logging_collector and log_destination parameters).
The resource processor needs to be configured when sending logs to Elastic. Regardless of the log format, the service.name and service.instance.id attributes need to be specified.
An example of setting up processors for collecting logs in the JSON format:
processors:
  attributes/convert:
    actions:
      - key: query_id
        action: convert
        converted_type: string
  resource:
    attributes:
      - key: service.name
        action: upsert
        value: postgresql
      - key: service.instance.id
        action: upsert
        value: 1.2.3.4:5432
An example of setting up processors for collecting logs in the CSV format:
processors:
  attributes/convert:
    actions:
      - key: pid
        action: convert
        converted_type: int
      - key: line_num
        action: convert
        converted_type: int
      - key: txid
        action: convert
        converted_type: int
      - key: remote_port
        action: convert
        converted_type: int
      - key: cursor_position
        action: convert
        converted_type: int
      - key: internal_position
        action: convert
        converted_type: int
      - key: leader_pid
        action: convert
        converted_type: int
  resource:
    attributes:
      - key: service.name
        action: upsert
        value: postgresql
      - key: service.instance.id
        action: upsert
        value: 1.2.3.4:5432
6.3.3. Adding and Configuring the otlphttp Exporter #
The otlphttp exporter is an open component of OpenTelemetry that is used for exporting collected logs to an OTLP-compatible storage or monitoring system, which must be deployed in advance and accessible. Detailed information about this exporter is available in the OpenTelemetry documentation.
To configure the otlphttp exporter, it is sufficient to specify the address of the target system where data should be sent:
exporters:
  otlphttp:
    endpoint: https://otlp.example.org
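If the target system requires authentication or TLS tuning, the standard OpenTelemetry otlphttp exporter settings headers and tls can also be specified. A minimal sketch; the header name and token value below are placeholders that depend on the target system:

exporters:
  otlphttp:
    endpoint: https://otlp.example.org
    headers:
      Authorization: "Bearer <token>"   # placeholder credential
    tls:
      insecure_skip_verify: false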
6.3.4. Setting up a Pipeline #
Once receivers, processors, and exporters are added and configured, they need to be combined into a pipeline. The pipeline is configured in the service section. The pipeline contents depend entirely on the previously added components (there is no default configuration).
Below is an example of how to set up a pipeline for log management: the data is collected by the filelog receiver, processed by the resource and attributes processors, and exported by the otlphttp exporter. All components used in the pipeline must also be added and configured in the configuration file.
service:
  extensions: []
  pipelines:
    logs:
      receivers:
        - filelog
      processors:
        - resource
        - attributes/convert
      exporters:
        - otlphttp