G.1. pgpro_pwr — workload reports #
- G.1.1. pgpro_pwr Architecture
- G.1.2. Prerequisites
- G.1.3. Installation and Setup
- G.1.4. Managing Servers
- G.1.5. Managing Samples
- G.1.6. Managing the Collection of Relation Sizes
- G.1.7. Managing Subsamples
- G.1.8. Managing Baselines
- G.1.9. Data Export and Import
- G.1.10. Report Generation Functions
- G.1.11. pgpro_pwr Report Sections
- G.1.12. pgpro_pwr Diagnostic Tools
- G.1.13. Important Notes
The pgpro_pwr module is designed to discover the most resource-intensive activities in your database. (PWR, pronounced like "power", is an abbreviation of Postgres Pro Workload Reporting.) This extension is based on Postgres Pro's Statistics Collector views and the pgpro_stats or pg_stat_statements extension.
Note
Although pgpro_pwr can work with the pg_stat_statements extension, it is recommended that you use the pgpro_stats extension since it provides statement plans, wait event sampling, and load distribution statistics for databases, roles, client hosts, and applications.
Below, use of pgpro_stats is assumed unless otherwise noted.
If you cannot use pgpro_stats for an observed database, but the pg_stat_kcache extension is available, pgpro_pwr can process pg_stat_kcache data, which also provides information about CPU resource usage of statements and filesystem load (rusage).
pgpro_pwr can obtain summary wait statistics from the pg_wait_sampling extension. When pg_wait_sampling is in use, pgpro_pwr will reset the wait sampling profile on every sample.
pgpro_pwr is based on cumulative statistics sampling. Each sample contains statistics increments for the most active objects and queries since the time when the previous sample was taken, that is, since the previous sample. This data is later used to generate reports.
pgpro_pwr provides functions to collect samples. Regular sampling allows building a report on the database workload in the past.
pgpro_pwr allows you to take explicit samples during batch processing, load testing, etc.
Any time a sample is taken, pgpro_stats_statements_reset() (see pgpro_stats for the function description) is called to ensure that statement statistics are not lost when the statement count exceeds pgpro_stats.max (see Section G.2.7.1). The report will also contain a section informing you whether the count of captured statements in any sample reached 90% of pgpro_stats.max.
pgpro_pwr installed on one Postgres Pro server can also collect statistics from other servers. This feature is useful for gathering workload statistics from hot standbys on the primary server. To benefit from it, make sure that all server names and connection strings are specified and that the pgpro_pwr server can connect to all databases on all servers.
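For example, a hot standby could be registered for remote collection as in the following sketch; the host address and credentials here are purely illustrative, and the create_server() function is described in Section G.1.4:

```sql
-- Register a standby for statistics collection (hypothetical host and credentials)
SELECT profile.create_server('standby1',
  'host=192.168.1.101 dbname=postgres port=5432 user=pwr_collector password=collector_pwd');
```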
G.1.1. pgpro_pwr Architecture #
The extension consists of the following parts:
Historical repository is a storage for sampling data. The repository is a set of extension tables.
Note
Among the rest, pgpro_pwr tables store query texts, which can contain sensitive information. So, for security reasons, restrict access to the repository as appropriate.
Sample management engine comprises functions used to take samples and maintain the repository by removing obsolete sample data.
Report engine comprises functions for generating reports based on data from the historical repository.
Administrative functions allow you to create and manage servers and baselines.
G.1.2. Prerequisites #
The prerequisites assume that pgpro_pwr is installed in the target cluster, i.e., the cluster whose workload you will mainly track, although the extension can also collect performance data from other clusters.
G.1.2.1. For the pgpro_pwr Database #
The pgpro_pwr extension depends on PL/pgSQL and the dblink extension.
G.1.2.2. For the Target Server #
The target server must allow connections to all databases from the server where pgpro_pwr is running. To connect to the target server, provide a connection string where a particular database on this server is specified. This database is of high importance for pgpro_pwr since the functionality of the pgpro_stats or pg_stat_statements extensions will be provided through this database. Note, however, that pgpro_pwr will also connect to all the other databases on this server.
Optionally, for completeness of gathered statistics:
If statement statistics are needed in reports, pgpro_stats must be installed and configured in the aforementioned database. The following settings may affect the completeness and accuracy of gathered statistics:
pgpro_stats.max

A low setting of this parameter may cause some statement statistics to be wiped out before the sample is taken. A report will warn you if the value of pgpro_stats.max seems undersized.

pgpro_stats.track

Avoid changing the default value of 'top' (note that the value of 'all' will affect the accuracy of %Total fields in statement-related sections of a report).
Set the parameters of the Postgres Pro's Statistics Collector as follows:
track_activities = on
track_counts = on
track_io_timing = on
track_wal_io_timing = on   # since PostgreSQL 14
track_functions = all/pl
G.1.3. Installation and Setup #
pgpro_pwr is provided with Postgres Pro Standard as a separate pre-built package pgpro-pwr-std-17
(for the detailed installation instructions, see Chapter 16).
Note
pgpro_pwr creates a bunch of database objects, so installation in a dedicated schema is recommended.
Although running pgpro_pwr with superuser privileges causes no issues, superuser privileges are not required. You can therefore choose one of the following setup procedures depending on your configuration and security requirements, or customize them to meet your needs:
G.1.3.1. Simple Setup #
Use this setup procedure when pgpro_pwr is to be installed as superuser on the target cluster to track only that cluster's workload.
Create a schema for the pgpro_pwr installation and create the extension:
CREATE SCHEMA profile;
CREATE EXTENSION pgpro_pwr SCHEMA profile;
G.1.3.2. Complex Setup #
Use this setup procedure when you intend to use pgpro_pwr for tracking workload on one or more servers and need to follow the principle of least privilege.
G.1.3.2.1. In the Target Server Database #
Create a user for pgpro_pwr on the target server:
CREATE USER pwr_collector PASSWORD 'collector_pwd';
Make sure this user has permission to connect to any database in the target cluster (true by default) and that pg_hba.conf permits such a connection from the pgpro_pwr database host. Also, grant pwr_collector membership in the pg_read_all_stats role and the EXECUTE privilege on the following functions:
GRANT pg_read_all_stats TO pwr_collector;
GRANT EXECUTE ON FUNCTION pgpro_stats_statements_reset TO pwr_collector;
GRANT EXECUTE ON FUNCTION pgpro_stats_totals_reset(text,bigint) TO pwr_collector;
Also ensure the SELECT privilege on the pgpro_stats_archiver view:
GRANT SELECT ON pgpro_stats_archiver TO pwr_collector;
G.1.3.2.2. In the pgpro_pwr Database #
Create an unprivileged user:
CREATE USER pwr_user;
This user will be the owner of the extension schema and will collect samples.
Create a schema for the pgpro_pwr installation:
CREATE SCHEMA profile AUTHORIZATION pwr_user;
Grant the USAGE privilege on the schema where the dblink extension resides:
GRANT USAGE ON SCHEMA public TO pwr_user;
Create the extension using the pwr_user account:
\c - pwr_user
CREATE EXTENSION pgpro_pwr SCHEMA profile;
Define the connection parameters of the target server for pgpro_pwr. For example:
SELECT profile.create_server('target_server_name','host=192.168.1.100 dbname=postgres port=5432');
The connection string provided will be used in the dblink_connect() call while executing the take_sample() function.
Note
Connection strings are stored in a pgpro_pwr table in clear-text form. Make sure no other database users can access tables of the pgpro_pwr extension.
G.1.3.3. Setting Up pgpro_pwr Roles #
Up to three roles can be distinguished when pgpro_pwr is in operation:
The pgpro_pwr owner role is the owner of the pgpro_pwr extension.
The collecting role is used by pgpro_pwr to connect to databases and collect statistics.
The reporting role is used to generate reports.
If all the actions with pgpro_pwr are performed by the superuser role postgres, you can skip most of the setup explained below.
G.1.3.3.1. The pgpro_pwr Owner #
This role can be used to perform all actions related to pgpro_pwr. This role will have access to server connection strings, which may contain passwords. You should use this role to call the take_sample() function. The dblink extension is needed for this user.
Consider an example assuming each extension in its own schema:
\c postgres postgres
CREATE SCHEMA dblink;
CREATE EXTENSION dblink SCHEMA dblink;
CREATE USER pwr_usr WITH PASSWORD 'pwr_pwd';
GRANT USAGE ON SCHEMA dblink TO pwr_usr;
CREATE SCHEMA profile AUTHORIZATION pwr_usr;
\c postgres pwr_usr
CREATE EXTENSION pgpro_pwr SCHEMA profile;
G.1.3.3.2. The Collecting Role #
This role should be used by pgpro_pwr to connect to databases and collect statistics. Unprivileged users cannot open connections using dblink without a password, so you need to provide the password in the connection string for each server. This role should have access to all supported statistics extensions. It should also be able to perform a reset of statistics extensions.
Consider an example. If you use pgpro_stats to collect statistics, set up the collecting role as follows:
\c postgres postgres
CREATE SCHEMA pgps;
CREATE EXTENSION pgpro_stats SCHEMA pgps;
CREATE USER pwr_collector WITH PASSWORD 'collector_pwd';
GRANT pg_read_all_stats TO pwr_collector;
GRANT USAGE ON SCHEMA pgps TO pwr_collector;
GRANT EXECUTE ON FUNCTION pgps.pgpro_stats_statements_reset TO pwr_collector;
If you use pg_stat_statements to collect statistics, set up the collecting role as follows:
\c postgres postgres
CREATE SCHEMA pgss;
CREATE SCHEMA pgsk;
CREATE SCHEMA pgws;
CREATE EXTENSION pg_stat_statements SCHEMA pgss;
CREATE EXTENSION pg_stat_kcache SCHEMA pgsk;
CREATE EXTENSION pg_wait_sampling SCHEMA pgws;
CREATE USER pwr_collector WITH PASSWORD 'collector_pwd';
GRANT pg_read_all_stats TO pwr_collector;
GRANT USAGE ON SCHEMA pgss TO pwr_collector;
GRANT USAGE ON SCHEMA pgsk TO pwr_collector;
GRANT USAGE ON SCHEMA pgws TO pwr_collector;
GRANT EXECUTE ON FUNCTION pgss.pg_stat_statements_reset TO pwr_collector;
GRANT EXECUTE ON FUNCTION pgsk.pg_stat_kcache_reset TO pwr_collector;
GRANT EXECUTE ON FUNCTION pgws.pg_wait_sampling_reset_profile TO pwr_collector;
Now you should set up a connection string pointing to the database with statistics extensions installed:
\c postgres pwr_usr
SELECT profile.set_server_connstr('local','dbname=postgres port=5432 host=localhost user=pwr_collector password=collector_pwd');
Password authentication must be configured in the pg_hba.conf file for the pwr_collector user.
Obviously, the collecting role should be properly configured on all servers observed by pgpro_pwr.
Now you should be able to call take_sample() using the pwr_usr role:
\c postgres pwr_usr
SELECT * FROM take_sample();
And now it's time to configure the scheduler (in our example, the crontab of the postgres user):
*/30 * * * * psql -U pwr_usr -d postgres -c 'SELECT profile.take_sample()' > /dev/null 2>&1
Note that you can use the Postgres Pro password file to store passwords.
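For illustration, a password file entry for the collecting role might look like the following sketch; the host, port, and password are placeholders, and the file must have 0600 permissions:

```
# ~/.pgpass
# hostname:port:database:username:password
localhost:5432:*:pwr_collector:collector_pwd
```

With such an entry in place, the password can be omitted from connection strings used by the scheduler.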
G.1.3.3.3. The Reporting Role #
Any user can build a pgpro_pwr report. The minimal privileges needed to generate pgpro_pwr reports are granted to the public role. However, a full report, with query texts, is only available to members of the pg_read_all_stats role. In any case, the reporting role cannot access server connection strings, so it cannot obtain server passwords.
G.1.3.4. Setting Extension Parameters #
In postgresql.conf, you can define the following pgpro_pwr parameters:
pgpro_pwr.max (integer) #

Number of top objects (statements, relations, etc.) to be reported in each sorted report table. This parameter affects the size of a sample.

The default value is 20.

pgpro_pwr.max_sample_age (integer) #

Retention time of the sample, in days. Samples aged pgpro_pwr.max_sample_age days and older are automatically deleted on the next take_sample() call.

The default value is 7 days.

pgpro_pwr.max_query_length (integer) #

Maximum query length allowed in reports. All queries in a report will be truncated to pgpro_pwr.max_query_length characters.

The default value is 20 000 characters.

pgpro_pwr.track_sample_timings (boolean) #

Enables collecting detailed timing statistics of pgpro_pwr's own sampling procedures. Set this parameter to diagnose why sampling functions run slowly. Collected timing statistics are available in the v_sample_timings view.

The default value is off.

pgpro_pwr.statements_reset (boolean) #

Controls the reset of pgpro_stats/pg_stat_statements statistics when a sample is taken. New techniques allow pgpro_pwr to skip resetting the statistics: when this parameter is disabled, pgpro_pwr tracks statement evictions using the value of the calls field. However, this method does not completely prevent statistics loss. pg_stat_statements v1.11 and pgpro_stats v1.8 contain time tracking abilities that can reduce the possible data loss. When this setting is disabled, you can temporarily enable it in a session if you occasionally want to perform a reset of pgpro_stats/pg_stat_statements.

The default value is on.
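As an illustration, a postgresql.conf fragment combining these parameters might read as follows; the values are arbitrary examples, not recommendations:

```
pgpro_pwr.max = 30                  # report top 30 objects per sorted table
pgpro_pwr.max_sample_age = 14       # keep samples for two weeks
pgpro_pwr.max_query_length = 5000   # truncate long query texts in reports
pgpro_pwr.track_sample_timings = off
pgpro_pwr.statements_reset = on
```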
G.1.4. Managing Servers #
Once installed, pgpro_pwr creates one enabled local server for the current cluster. If a server is enabled, pgpro_pwr includes it in sampling when no server is explicitly specified (see take_sample() for details). A server that is not enabled is referred to as disabled.
The default connection string for a local node contains only the dbname and port parameters. The values of these parameters are taken from the connection used to create the extension. You can change the server connection string using the set_server_connstr() function when needed.
G.1.4.1. Server Management Functions #
Use the following pgpro_pwr functions for server management:
- create_server(server name, connstr text, enabled boolean DEFAULT TRUE, max_sample_age integer DEFAULT NULL, description text DEFAULT NULL) #

  Creates a server definition.

  Arguments:

  server — server name. Must be unique.
  connstr — connection string. Must contain all the settings necessary to connect from the pgpro_pwr server to the target server database.
  enabled — set to include the server in sampling by the take_sample() function without arguments.
  max_sample_age — retention time of the sample. Overrides the global pgpro_pwr.max_sample_age setting for this server.
  description — server description text, to be included in reports.

  Here is an example of how to create a server definition:

  SELECT profile.create_server('omega','host=192.168.1.100 dbname=postgres port=5432');
- drop_server(server name) #

  Drops a server and all its samples.
- set_server_description(server name, description text) #

  Sets a new server description.
- set_server_subsampling(server name, subsample_enabled boolean, min_query_duration interval, min_xact_duration interval, min_xact_age integer, min_idle_xact_dur interval hour to second) #

  Defines subsample settings for a server.

  Arguments:

  server — server name.
  subsample_enabled — defines whether subsampling is enabled for the server, that is, whether the take_subsample() function should actually take a subsample.
  min_query_duration — the query duration threshold.
  min_xact_duration — the transaction duration threshold.
  min_xact_age — the transaction age threshold.
  min_idle_xact_dur — the idle transaction threshold.
- enable_server(server name) #

  Includes a server in sampling by the take_sample() function without arguments.
- disable_server(server name) #

  Excludes a server from sampling by the take_sample() function without arguments.
- rename_server(server name, new_name name) #

  Renames a server.
- set_server_max_sample_age(server name, max_sample_age integer) #

  Sets the retention period for a server (in days). To reset the server retention, set the value of max_sample_age to NULL.
- set_server_db_exclude(server name, exclude_db name[]) #

  Excludes a list of databases on a server from sampling. Use when pgpro_pwr is unable to connect to some databases in a cluster (for example, in Amazon RDS instances).
- set_server_connstr(server name, server_connstr text) #

  Sets the connection string for a server.
- show_servers() #

  Displays the list of configured servers.
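Assuming the extension is installed in the profile schema, a typical management session might look like the following sketch; 'omega' is a hypothetical server and the retention value is arbitrary:

```sql
-- Temporarily exclude a server from scheduled sampling
SELECT profile.disable_server('omega');
-- Give it a dedicated 30-day retention
SELECT profile.set_server_max_sample_age('omega', 30);
-- Re-enable it and review the configuration
SELECT profile.enable_server('omega');
SELECT * FROM profile.show_servers();
```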
G.1.5. Managing Samples #
A sample contains the database workload statistics accumulated since the previous sample.
G.1.5.1. Sampling Functions #
The following pgpro_pwr functions relate to sampling:
- take_sample()
  take_sample(server name [, skip_sizes boolean]) #

  Takes samples.

  If the parameter is omitted, the function takes a sample on each enabled server. Servers are accessed for sampling sequentially, one by one. The function returns a table with the following columns:

  server — server name.
  result — result of taking the sample. Can be OK if the sample was taken successfully, or contain the error trace text in case of an exception.
  elapsed — time elapsed while the sample was taken.

  If called with the parameter, the function takes a sample on the specified server even if this server is disabled. Use when you need different sampling frequencies on specific servers. Returns 0 on success.

  Arguments:

  server — server name.
  skip_sizes — if omitted or set to null, the size-collection policy applies; if false, relation sizes are collected; if true, the collection of relation sizes is skipped.

- take_sample_subset([sets_cnt integer, current_set integer]) #

  Takes a sample on each server in a subset of servers. Use to take samples on servers in parallel if you have many enabled servers. Although PL/pgSQL does not support parallel execution, you can call this function in parallel sessions. This function returns the same type as take_sample(). If both parameters are omitted, the function behaves like the take_sample() function, i.e., it takes a sample on all enabled servers one by one.

  Arguments:

  sets_cnt — number of subsets to divide all enabled servers into.
  current_set — number of the subset to collect samples for. Takes values from 0 through sets_cnt - 1. For the specified subset, samples are collected as usual, server by server.
If a reset of statistics since the previous sample was detected, pgpro_pwr treats corresponding absolute values as differentials; however, the accuracy will be affected anyway.
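For example, sampling a large fleet in two parallel sessions could be scheduled with take_sample_subset() as in this crontab sketch; the role, database, and subset count are assumptions:

```
*/30 * * * * psql -U pwr_usr -d postgres -c 'SELECT profile.take_sample_subset(2,0)' > /dev/null 2>&1
*/30 * * * * psql -U pwr_usr -d postgres -c 'SELECT profile.take_sample_subset(2,1)' > /dev/null 2>&1
```

Each line processes its own half of the enabled servers, so the two sessions run concurrently.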
-
show_samples([
#server
name
,] [days
integer
]) Returns a table with information on server samples (
local
server is assumed ifserver
is omitted) for the lastdays
days (all existing samples are assumed if omitted). This table has the following columns:sample
— sample identifier.sample_time
— time when this sample was taken.dbstats_reset
—NULL
or the statistics reset timestamp of thepg_stat_database
view if the statistics were reset since the previous sample.clustats_reset
—NULL
or the statistics reset timestamp of thepg_stat_bgwriter
view if the statistics were reset since the previous sample.archstats_reset
—NULL
or the statistics reset timestamp of thepg_stat_archiver
view if the statistics were reset since the previous sample.
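For instance, last week's samples on the local server can be reviewed with a query like this sketch; sample identifiers and schema placement will differ in your installation:

```sql
SELECT sample, sample_time, dbstats_reset
FROM profile.show_samples('local', 7);
```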
Sampling functions also maintain the server repository by deleting obsolete samples and baselines according to the retention policy.
G.1.5.2. Taking Samples #
To take samples for all enabled servers, call the take_sample() function. Usually, one or two samples per hour is sufficient. You can use a cron-like tool to schedule sampling. Here is an example for a 30-minute sampling period:
*/30 * * * * psql -c 'SELECT profile.take_sample()' &> /dev/null
However, the results of such a call are not checked for errors. In a production environment, function results can be used for monitoring. This function returns OK for all servers with successfully taken samples and shows the error text for failed servers:
SELECT * FROM take_sample();
  server   |                                   result                                    |   elapsed
-----------+-----------------------------------------------------------------------------+-------------
 ok_node   | OK                                                                          | 00:00:00.48
 fail_node | could not establish connection                                             +| 00:00:00
           | SQL statement "SELECT dblink_connect('server_connection',server_connstr)"  +|
           | PL/pgSQL function take_sample(integer) line 69 at PERFORM                  +|
           | PL/pgSQL function take_sample_subset(integer,integer) line 27 at assignment+|
           | SQL function "take_sample" statement 1                                     +|
           | FATAL: database "postgresno" does not exist                                 |
(2 rows)
G.1.5.3. Sample Retention Policy #
You can define sample retention at the following levels:
Global

  The value of the pgpro_pwr.max_sample_age parameter in the postgresql.conf file defines a common retention setting, which is effective if none of the other related settings are defined.

Server

  Specifying the max_sample_age parameter while creating a server or calling the set_server_max_sample_age(server, max_sample_age) function for an existing server defines the retention for that server. A server retention setting overrides pgpro_pwr.max_sample_age for a specific server.

Baseline

  A created baseline overrides all the other retention periods for the samples it includes.
G.1.6. Managing the Collection of Relation Sizes #
It may take considerable time to collect the sizes of all relations in a database using Postgres Pro relation-size functions. Besides, those functions require AccessExclusiveLock on a relation. However, collecting relation sizes on a daily basis may be sufficient for you. pgpro_pwr allows you to skip collecting relation sizes by defining a size-collection policy for servers. The policy defines:
A daily window when the collection of relation sizes is permitted.
A minimum gap between two samples with relation sizes collected.
When the size-collection policy is defined, sampling functions collect relation sizes only when the sample is taken in the defined window and the previous sample with sizes is older than the gap. The following function defines this policy:
- set_server_size_sampling(server name, window_start time with time zone DEFAULT NULL, window_duration interval hour to second DEFAULT NULL, sample_interval interval day to minute DEFAULT NULL) #

  Defines the size-collection policy for a server.

  Arguments:

  server — server name.
  window_start — start time of the size-collection window.
  window_duration — duration of the size-collection window.
  sample_interval — minimum time gap between two samples with relation sizes collected.
Note
When you build a report between samples either of which lacks relation-size data, relation-growth sections will be based on pg_class.relpages data. However, you can expand the report interval bounds to the nearest samples with relation sizes collected using the with_growth parameter of report generation functions; this makes the growth data more accurate.
Relation sizes are needed to calculate sequentially scanned volume for tables and explicit vacuum load for indexes.
Example:
SELECT set_server_size_sampling('local','23:00+03',interval '2 hour',interval '8 hour');
The show_servers_size_sampling function shows size-collection policies for all servers:
postgres=# SELECT * FROM show_servers_size_sampling();
 server_name | window_start | window_end  | window_duration | sample_interval | limited_collection
-------------+--------------+-------------+-----------------+-----------------+--------------------
 local       | 23:00:00+03  | 01:00:00+03 | 02:00:00        | 08:00:00        | t
G.1.7. Managing Subsamples #
Some performance-related data available in Postgres Pro is not cumulative. For example, the most often used data about session states is available through the pg_stat_activity view and can only be obtained with frequent samples. However, the take_sample() function is heavy and can take a considerable amount of time, so it is not suitable for collecting session state data.
The subsample feature provides a new fast take_subsample()
function. It can be used to collect relatively fast changing data. Every subsample is bound to the next regular sample and is deleted by the retention policy together with it.
The subsample feature can be used to capture the most interesting session states:
Long running queries
Long transactions
Aged transactions, that is, those that hold a snapshot behind a lot of other transactions
Transactions being in an idle state for a long time
G.1.7.1. Subsampling Functions #
The following pgpro_pwr functions relate to subsampling:
- take_subsample()
  take_subsample(server name) #

  If the parameter is omitted, the function takes a subsample on each enabled server with subsampling enabled (see set_server_subsampling for details). Server subsamples are taken sequentially, one by one. The function returns a table with the following columns:

  server — server name.
  result — result of taking the subsample. Can be OK if the subsample was taken successfully, or contain the error text in case of an exception.
  elapsed — time elapsed while the subsample was taken.

  This tabular return format makes it easy to control subsample creation using an SQL query.

  If called with the parameter, the function takes a subsample for the specified server. Use when you need different subsampling frequencies on servers or if you want to take an explicit subsample on a specific server.

  Arguments:

  server — server name.
Note
An attempt to take a subsample while a sample is being taken will fail.
- take_subsample_subset([sets_cnt integer,] [current_set integer]) #

  Takes subsamples for a subset of enabled servers with subsampling enabled. Although subsamples should be fast enough for serial processing, they can be taken in parallel, like regular samples. This function returns the same type as take_subsample(). If both parameters are omitted, the function behaves like the take_subsample() function.

  Arguments:

  sets_cnt — number of server subsets.
  current_set — the subset to process. Takes values from 0 through sets_cnt - 1. For the specified subset, subsamples are collected as usual, server by server.
G.1.7.2. Configuring the Subsample Feature #
The following settings affect the subsample behavior:

pgpro_pwr.subsample_enabled — defines whether the take_subsample() function should actually take a subsample.
pgpro_pwr.min_query_duration — the long-running query threshold.
pgpro_pwr.min_xact_duration — the long transaction threshold.
pgpro_pwr.min_xact_age — the transaction age threshold.
pgpro_pwr.min_idle_xact_dur_age — the idle transaction threshold.
The subsample behavior can be defined at the server level using the set_server_subsampling function.
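Based on the set_server_subsampling() signature documented above, a server-level configuration might look like this sketch; all threshold values are arbitrary examples:

```sql
SELECT profile.set_server_subsampling('local',
  true,                    -- enable subsampling for this server
  interval '30 seconds',   -- min_query_duration
  interval '1 minute',     -- min_xact_duration
  5000,                    -- min_xact_age
  interval '10 minutes');  -- idle transaction threshold
```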
The last observed session state is saved in the repository when any of the following threshold-related events happens:

During query execution, the difference between now() and query_start exceeds the pgpro_pwr.min_query_duration threshold.

During a transaction, the difference between now() and xact_start exceeds the pgpro_pwr.min_xact_duration threshold.

During a transaction, age(backend_xmin) exceeds the pgpro_pwr.min_xact_age threshold.

During a transaction in the idle in transaction or idle in transaction (aborted) state, the difference between now() and state_change exceeds the pgpro_pwr.min_idle_xact_duration threshold.
See Chapter 26 for details of the mentioned fields. Every subsample can hold at most pgpro_pwr.max entries for every threshold type.
G.1.7.3. Scheduling Subsamples #
Subsamples are fast enough to be taken quite often. However, usually you do not need more than 2-4 subsamples per minute. Obviously, the subsample frequency depends on the shortest duration threshold in use. Cron only allows one call per minute, so some effort is needed to schedule more frequent subsamples. For example, the \watch psql command can be used:
echo "select take_subsample(); \watch 15" | psql &> /dev/null
The psql call can be wrapped in a systemd service like this:

[Unit]
Description=pg_profile subsampling unit

[Service]
Type=simple
ExecStart=/bin/sh -c 'echo "select take_subsample(); \\watch 15" | /path/to/psql -qo /dev/null'
User=postgres
Group=postgres

[Install]
WantedBy=multi-user.target
G.1.8. Managing Baselines #
A baseline is a named sequence of samples that has its own retention setting. A baseline can be used as a sample interval in report generation functions. An undefined baseline retention means infinite retention. Use baselines to save information about the database workload for a certain time interval.
G.1.8.1. Baseline Management Functions #
Use the following pgpro_pwr functions for baseline management:
- create_baseline([server name,] baseline varchar(25), start_id integer, end_id integer [, days integer])
  create_baseline([server name,] baseline varchar(25), time_range tstzrange [, days integer]) #

  Creates a baseline.

  Arguments:

  server — server name. The local server is assumed if omitted.
  baseline — baseline name. Must be unique for a server.
  start_id — identifier of the first sample in the baseline.
  end_id — identifier of the last sample in the baseline.
  time_range — time interval for the baseline. The baseline will include all samples for the minimal interval that covers time_range.
  days — baseline retention time, defined in integer days since now(). Omit or set to null for infinite retention.
- drop_baseline([server name,] baseline varchar(25)) #

  Drops a baseline. For the meaning and usage details of function arguments, see create_baseline. Dropping a baseline does not immediately drop all its samples; the baseline retention just no longer applies to them.
- keep_baseline([server name,] baseline varchar(25) [, days integer]) #

  Changes the retention of a baseline. For the meaning and usage details of function arguments, see create_baseline. Omit the baseline parameter or pass null to change the retention of all existing baselines.
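For example, samples covering a load test could be preserved as in this sketch; the baseline name and sample identifiers are illustrative:

```sql
-- Keep samples 120 through 130 as a named baseline with infinite retention
SELECT profile.create_baseline('local', 'load_test_v2', 120, 130);
-- Later, limit its retention to 90 days
SELECT profile.keep_baseline('local', 'load_test_v2', 90);
```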
G.1.9. Data Export and Import #
Collected samples can be exported from one instance of the pgpro_pwr extension and then loaded into another one. This feature helps you to move server data from one instance to another or to send collected data to your support team.
G.1.9.1. Data Export #
The export_data function exports data to a regular table. You can use any method available to export this table from your database. For example, you can use the \copy meta-command of psql to obtain a single csv file:
postgres=# \copy (select * from export_data()) to 'export.csv'
G.1.9.2. Data Import #
Since data can only be imported from a local table, first load the data you exported, using the \copy meta-command again:
postgres=# CREATE TABLE import (section_id bigint, row_data json);
CREATE TABLE
postgres=# \copy import from 'export.csv'
COPY 6437
Now you can import the data by providing the import table to the import_data function:
postgres=# SELECT * FROM import_data('import');
After a successful import, you can drop the import table.
Note
If server data is imported for the first time, local pgpro_pwr servers with matching names will cause a conflict during import. To avoid this, you can temporarily rename such servers, or you can specify a server name prefix for the import operation. However, during import of new data for already imported servers, servers are matched by system identifiers, so feel free to rename imported servers. Also keep in mind that pgpro_pwr sets servers being imported to the disabled state so that take_sample() bypasses them.
G.1.9.3. Export and Import Functions #
Use these functions to export or import data:
-
export_data([server name, [min_sample_id integer,] [max_sample_id integer,]] [, obfuscate_queries boolean])
Exports collected data.
Arguments:
- server — server name. All configured servers are assumed if omitted.
- min_sample_id, max_sample_id — sample identifiers to bound the export (inclusive). If min_sample_id is omitted or set to null, all samples up to max_sample_id are exported; if max_sample_id is omitted or set to null, all samples since min_sample_id are exported.
- obfuscate_queries — if true, query texts are exported as MD5 hashes.
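For example, to export samples 100 through 200 of a single server with query texts obfuscated (the server name omega and the sample identifiers are illustrative):
postgres=# \copy (select * from export_data('omega', 100, 200, true)) to 'export_omega.csv'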
-
import_data(data regclass [, server_name_prefix text])
Imports previously exported data. Returns the number of rows actually loaded into pgpro_pwr tables.
Arguments:
- data — name of the table containing the import data.
- server_name_prefix — server name prefix for the import operation. It can be used to avoid name conflicts.
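For example, assuming exported data was loaded into an import table as shown above, the following call imports it with a server name prefix to avoid conflicts with locally configured servers (the prefix imported_ is illustrative):
postgres=# SELECT * FROM import_data('import', 'imported_');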
G.1.10. Report Generation Functions #
pgpro_pwr reports are generated in HTML format by reporting functions. The following types of reports are available:
- Regular reports provide statistics on the workload for an interval.
- Differential reports provide statistics on the same objects for two intervals. Corresponding values are located next to each other, which makes it easy to compare the workloads.
Reporting functions take sample identifiers, baselines, or time ranges to determine the report intervals. For a time range, the report interval is the minimal sample interval that covers the specified range.
G.1.10.1. Regular Reports #
Use this function to generate regular reports:
-
get_report([server name,] start_id integer, end_id integer [, description text [, with_growth boolean]])
get_report([server name,] time_range tstzrange [, description text [, with_growth boolean]])
get_report([server name,] baseline varchar(25) [, description text [, with_growth boolean]])
Generates a regular report.
Arguments:
- server — server name. The local server is assumed if omitted.
- start_id — identifier of the interval starting sample.
- end_id — identifier of the interval ending sample.
- baseline — baseline name.
- time_range — time range.
- description — short text to be included in the report as its description.
- with_growth — flag requesting interval expansion to the nearest bounds with data on relation growth available. The default value is false.
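For example, assuming a baseline named weekly_peak exists on server omega (both names are illustrative), a regular report with a description could be generated as:
psql -Aqtc "SELECT profile.get_report('omega', 'weekly_peak', 'Peak load, week 21')" -o report_peak.html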
G.1.10.2. Differential Reports #
Use this function to generate differential reports:
-
get_diffreport([server name,] start1_id integer, end1_id integer, start2_id integer, end2_id integer [, description text [, with_growth boolean]])
get_diffreport([server name,] time_range1 tstzrange, time_range2 tstzrange [, description text [, with_growth boolean]])
get_diffreport([server name,] baseline1 varchar(25), baseline2 varchar(25) [, description text [, with_growth boolean]])
get_diffreport([server name,] baseline1 varchar(25), time_range2 tstzrange [, description text [, with_growth boolean]])
get_diffreport([server name,] time_range1 tstzrange, baseline2 varchar(25) [, description text [, with_growth boolean]])
get_diffreport([server name,] start1_id integer, end1_id integer, baseline2 varchar(25) [, description text [, with_growth boolean]])
get_diffreport([server name,] baseline1 varchar(25), start2_id integer, end2_id integer [, description text [, with_growth boolean]])
Generates a differential report for two intervals. The combinations of arguments provide the possible ways to specify the two intervals.
Arguments:
- server — server name. The local server is assumed if omitted.
- start1_id, end1_id — identifiers of the starting and ending samples for the first interval.
- start2_id, end2_id — identifiers of the starting and ending samples for the second interval.
- baseline1 — baseline name for the first interval.
- baseline2 — baseline name for the second interval.
- time_range1 — time range for the first interval.
- time_range2 — time range for the second interval.
- description — short text to be included in the report as its description.
- with_growth — flag requesting interval expansion to the nearest bounds with data on relation growth available. The default value is false.
G.1.10.3. Report Generation Example #
Generate a report for the local server and an interval defined by samples:
psql -Aqtc "SELECT profile.get_report(480,482)" -o report_480_482.html
For any other server, provide its name:
psql -Aqtc "SELECT profile.get_report('omega',12,14)" -o report_omega_12_14.html
Generate a report using time ranges:
psql -Aqtc "SELECT profile.get_report(tstzrange('2020-05-13 11:51:35+03','2020-05-13 11:52:18+03'))" -o report_range.html
Generate a relative time-range report:
psql -Aqtc "SELECT profile.get_report(tstzrange(now() - interval '1 day',now()))" -o report_last_day.html
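A differential report can be generated the same way; for example, to compare two sample intervals on the omega server (the sample identifiers are illustrative):
psql -Aqtc "SELECT profile.get_diffreport('omega',12,14,24,26)" -o diffreport_omega.html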
G.1.11. pgpro_pwr Report Sections #
Each pgpro_pwr report is divided into sections, described below. The number of top objects reported in each sorted report table is specified by the pgpro_pwr.max parameter.
Almost every item in the report can be accentuated with a single mouse click. The accentuated item is instantly highlighted in all report sections, making it easy to find, and the attributes identifying the item appear in the bottom-right corner of the page. For example, if you click on a database name in the “Database statistics” report table, a small table with the database attributes appears in the bottom-right corner of the page.
When you scroll down the report, its table of contents becomes available on the right side of the page. It can be hidden with a single mouse click on the “content” tag.
A substring-based filter is also available to limit the report contents to particular objects; in particular, substring filtering is applied to query texts.
G.1.11.1. Server statistics #
Tables in this section of a pgpro_pwr report are described below.
The report table “Database statistics” provides per-database statistics for the report interval. The statistics are based on the pg_stat_database view. Table G.1 lists columns of this report table.
Table G.1. Database statistics
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | datname |
Commits | Number of committed transactions | xact_commit |
Rollbacks | Number of rolled back transactions | xact_rollback |
Deadlocks | Number of deadlocks detected | deadlocks |
Checksum Failures | Number of data page checksum failures detected in this database. This field is only shown if any checksum failures were detected in this database during the report interval. | checksum_failures |
Checksums Last | Time at which the last data page checksum failure was detected in this database. This field is only shown if any checksum failures were detected in this database during the report interval. | checksum_last_failure |
Hit% | Buffer cache hit ratio, i.e., percentage of pages fetched from buffers in all pages fetched | |
Read | Number of disk blocks read in this database | blks_read |
Hit | Number of times disk blocks were found already in the buffer cache | blks_hit |
Ret | Number of returned tuples | tup_returned |
Fet | Number of fetched tuples | tup_fetched |
Ins | Number of inserted tuples | tup_inserted |
Upd | Number of updated tuples | tup_updated |
Del | Number of deleted tuples | tup_deleted |
Temp Size | Total amount of data written to temporary files by queries in this database | temp_bytes |
Temp Files | Number of temporary files created by queries in this database | temp_files |
Size | Database size at the time of the last sample in the report interval | pg_database_size() |
Growth | Database growth during the report interval | pg_database_size() increment between interval bounds |
The report table “Cluster I/O statistics” provides I/O statistics by object types, backend types and contexts. This table is based on the pg_stat_io view of the Cumulative Statistics System, available since Postgres Pro 16. Table G.2 lists columns of this report table. Times are provided in seconds.
Table G.2. Cluster I/O statistics
Column | Description |
---|---|
Object | Target object of an I/O operation |
Backend | Type of the backend that performed an I/O operation |
Context | The context of an I/O operation |
Reads Count | Number of read operations |
Reads Bytes | Amount of data read |
Reads Time | Time spent in read operations |
Writes Count | Number of write operations |
Writes Bytes | Amount of data written |
Writes Time | Time spent in write operations |
Writebacks Count | Number of blocks which the process requested the kernel write out to permanent storage |
Writebacks Bytes | Amount of data requested for write out to permanent storage |
Writebacks Time | Time spent in writeback operations, including the time spent queueing write-out requests and, potentially, the time spent to write out the dirty data |
Extends Count | Number of relation extend operations |
Extends Bytes | Amount of space used by extend operations |
Extends Time | Time spent in extend operations |
Hits | The number of times a desired block was found in a shared buffer |
Evictions | Number of times a block has been written out from a shared or local buffer in order to make it available for another use |
Reuses | The number of times an existing buffer in a size-limited ring buffer outside of shared buffers was reused as part of an I/O operation in the bulkread, bulkwrite, or vacuum contexts |
Fsyncs Count | Number of fsync calls. These are only tracked in context normal . |
Fsyncs Time | Time spent in fsync operations |
The report table “Cluster SLRU statistics” provides access statistics on SLRU (simple least-recently-used) caches. This table is based on the pg_stat_slru view of the Cumulative Statistics System. Table G.3 lists columns of this report table. Times are provided in seconds.
Table G.3. Cluster SLRU statistics
Column | Description | Field/Calculation |
---|---|---|
Name | Name of the SLRU | name |
Zeroed | Number of blocks zeroed during initializations | blks_zeroed |
Hits | Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system's file system cache) | blks_hit |
Reads | Number of disk blocks read for this SLRU | blks_read |
%Hit | Number of disk block hits for this SLRU as the percentage of Reads + Hits | blks_hit * 100 / (blks_read + blks_hit) |
Writes | Number of disk blocks written for this SLRU | blks_written |
Checked | Number of blocks checked for existence for this SLRU | blks_exists |
Flushes | Number of flushes of dirty data for this SLRU | flushes |
Truncates | Number of truncates for this SLRU | truncates |
Table “Session statistics by database” is available in the report for Postgres Pro databases starting with version 14. This table is based on the pg_stat_database view of the Statistics Collector. Table G.4 lists columns of this report table. Times are provided in seconds.
Table G.4. Session statistics by database
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | |
Timing Total | Time spent by database sessions in this database during the report interval (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won't be included) | session_time |
Timing Active | Time spent executing SQL statements in this database during the report interval (this corresponds to the states active and fastpath function call in pg_stat_activity ) | active_time |
Timing Idle(T) | Time spent idling while in a transaction in this database during the report interval (this corresponds to the states idle in transaction and idle in transaction (aborted) in pg_stat_activity ) | idle_in_transaction_time |
Sessions Established | Total number of sessions established to this database during the report interval | sessions |
Sessions Abandoned | Number of database sessions to this database that were terminated because connection to the client was lost during the report interval | sessions_abandoned |
Sessions Fatal | Number of database sessions to this database that were terminated by fatal errors during the report interval | sessions_fatal |
Sessions Killed | Number of database sessions to this database that were terminated by operator intervention during the report interval | sessions_killed |
In Postgres Pro Standard databases of versions that include pgpro_stats 1.4 or later, workload statistics of vacuum processes are available. The “Database vacuum statistics” report table provides per-database aggregated total vacuum statistics based on the pgpro_stats_vacuum_tables view. Table G.5 lists columns of this report table. Times are provided in seconds.
Table G.5. Database vacuum statistics
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | |
Blocks fetched | Total number of database blocks fetched by vacuum operations | total_blks_read + total_blks_hit |
%Total | Total number of database blocks fetched (read+hit) by vacuum operations as the percentage of all blocks fetched in the cluster | Blocks fetched * 100 / Cluster fetched |
Blocks read | Total number of database blocks read by vacuum operations | total_blks_read |
%Total | Total number of database blocks read by vacuum operations as the percentage of all blocks read in the cluster | Blocks read * 100 / Cluster read |
VM Frozen | Total number of blocks marked all-frozen in the visibility map | pages_frozen |
VM Visible | Total number of blocks marked all-visible in the visibility map | pages_all_visible |
Tuples deleted | Total number of dead tuples vacuum operations deleted from tables of this database | tuples_deleted |
Tuples left | Total number of dead tuples vacuum operations left in tables of this database due to their visibility in transactions | dead_tuples |
%Eff | Vacuum efficiency in terms of deleted tuples. This is the percentage of tuples deleted from tables of this database in all dead tuples to be deleted from tables of this database. | tuples_deleted * 100 / (tuples_deleted + dead_tuples ) |
WAL size | Total amount of WAL bytes generated by vacuum operations performed on tables of this database | wal_bytes |
Read I/O time | Time spent reading database blocks by vacuum operations performed on tables of this database | blk_read_time |
Write I/O time | Time spent writing database blocks by vacuum operations performed on tables of this database | blk_write_time |
%Total | Vacuum I/O time spent as the percentage of whole cluster I/O time | |
Total vacuum time | Total time of vacuuming tables of this database | total_time |
Delay vacuum time | Time spent sleeping in a vacuum delay point by vacuum operations performed on tables of this database | delay_time |
User CPU time | User CPU time of vacuuming tables of this database | user_time |
System CPU time | System CPU time of vacuuming tables of this database | system_time |
Interrupts | Number of times vacuum operations performed on tables of this database were interrupted on any errors | interrupts |
If the pgpro_stats extension supporting invalidation statistics was available during the report interval, the “Invalidation messages by database” report table provides per-database aggregated total invalidation message statistics. Table G.6 lists columns of this report table.
Table G.6. Invalidation messages by database
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | |
Invalidation messages sent | Total number of invalidation messages sent by backends in this database. Statistics are provided for corresponding message types of pgpro_stats_inval_msgs | Fields of pgpro_stats_totals .inval_msgs |
Cache resets | Total number of shared cache resets | pgpro_stats_totals .cache_resets |
If the pgpro_stats extension was available during the report interval, the “Statement statistics by database” report table provides per-database aggregated total statistics for the pgpro_stats_statements view data. Table G.7 lists columns of this report table. Times are provided in seconds.
Table G.7. Statement statistics by database
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | |
Calls | Number of times all statements in the database were executed | calls |
Plan Time | Time spent planning all statements in the database | Sum of total_plan_time |
Exec Time | Time spent executing all statements in the database | Sum of total_exec_time |
Read Time | Time spent reading blocks by all statements in the database | Sum of blk_read_time |
Write Time | Time spent writing blocks by all statements in the database | Sum of blk_write_time |
Trg Time | Time spent executing trigger functions by all statements in the database | |
Shared Fetched | Total number of shared blocks fetched by all statements in the database | Sum of (shared_blks_read + shared_blks_hit ) |
Local Fetched | Total number of local blocks fetched by all statements in the database | Sum of (local_blks_read + local_blks_hit ) |
Shared Dirtied | Total number of shared blocks dirtied by all statements in the database | Sum of shared_blks_dirtied |
Local Dirtied | Total number of local blocks dirtied by all statements in the database | Sum of local_blks_dirtied |
Read Temp | Total number of temp blocks read by all statements in the database | Sum of temp_blks_read |
Write Temp | Total number of temp blocks written by all statements in the database | Sum of temp_blks_written |
Read Local | Total number of local blocks read by all statements in the database | Sum of local_blks_read |
Write Local | Total number of local blocks written by all statements in the database | Sum of local_blks_written |
Statements | Total number of captured statements | |
WAL Size | Total amount of WAL generated by all statements in the database | Sum of wal_bytes |
The “Statement average min/max timings” report table contains per-database aggregated min/max timing statistics from whichever of the pgpro_stats or pg_stat_statements extensions was available during the report interval, with pgpro_stats taking precedence. This report is sensitive to the fastest and slowest planning and execution of every statement in a cluster, so you can assess planning and execution stability in your database. Table G.8 lists columns of this report table.
Table G.8. Statement average min/max timings
Column | Description |
---|---|
Database | Database name |
Min average planning time (ms) | The average value of min_plan_time for all statements and all samples included in the report |
Max average planning time (ms) | The average value of max_plan_time for all statements and all samples included in the report |
Delta% of average planning times | Difference between the mean max_plan_time and mean min_plan_time as the percentage of the mean min_plan_time . The smaller this difference, the more stable query planning in your database is. |
Min average execution time (ms) | The average value of min_exec_time for all statements and all samples included in the report |
Max average execution time (ms) | The average value of max_exec_time for all statements and all samples included in the report |
Delta% of average execution times | Difference between the mean max_exec_time and mean min_exec_time as the percentage of the mean min_exec_time . The smaller this difference, the more stable query execution in your database is. |
Statements | Total count of captured statements |
If JIT-related statistics were available in the statement statistics extension during the report interval, the “JIT statistics by database” report table provides per-database aggregated total statistics of JIT executions. Table G.9 lists columns of this report table. Times are provided in seconds.
Table G.9. JIT statistics by database
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | |
Calls | Number of times all statements in the database were executed | calls |
Plan Time | Time spent planning all statements in the database | Sum of total_plan_time |
Exec Time | Time spent executing all statements in the database | Sum of total_exec_time |
Generation count | Total number of functions JIT-compiled by the statements | Sum of jit_functions |
Generation time | Total time spent by the statements on generating JIT code | Sum of jit_generation_time |
Inlining count | Number of times functions have been inlined | Sum of jit_inlining_count |
Inlining time | Total time spent by statements on inlining functions | Sum of jit_inlining_time |
Optimization count | Number of times statements have been optimized | Sum of jit_optimization_count |
Optimization time | Total time spent by statements on optimizing | Sum of jit_optimization_time |
Emission count | Number of times code has been emitted | Sum of jit_emission_count |
Emission time | Total time spent by statements on emitting code | Sum of jit_emission_time |
Deform count | Number of tuple deform functions JIT-compiled by the statements in the database | |
Deform time | Total time spent by the statements in the database on JIT-compiling tuple deform functions | |
The report table “Cluster statistics” provides data from the pg_stat_bgwriter view. Table G.10 lists rows of this report table. Times are provided in seconds.
Table G.10. Cluster statistics
Row | Description | Field/Calculation |
---|---|---|
Scheduled checkpoints | Number of scheduled checkpoints that have been performed | checkpoints_timed |
Requested checkpoints | Number of requested checkpoints that have been performed | checkpoints_req |
Checkpoint write time (s) | Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk | checkpoint_write_time |
Checkpoint sync time (s) | Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk | checkpoint_sync_time |
Checkpoint buffers written | Number of buffers written during checkpoints | buffers_checkpoint |
Background buffers written | Number of buffers written by the background writer | buffers_clean |
Backend buffers written | Number of buffers written directly by a backend. Will not be shown since Postgres Pro 17. | buffers_backend |
Backend fsync count | Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write). Will not be shown since Postgres Pro 17. | buffers_backend_fsync |
Bgwriter interrupts (too many buffers) | Number of times the background writer stopped a cleaning scan because it had written too many buffers | maxwritten_clean |
Number of buffers allocated | Total number of buffers allocated | buffers_alloc |
WAL generated | Total amount of WAL generated | pg_current_wal_lsn() value increment |
Start LSN | Log sequence number at the start of a report interval | pg_current_wal_lsn() at the first sample of a report |
End LSN | Log sequence number at the end of a report interval | pg_current_wal_lsn() at the last sample of a report |
WAL generated by vacuum | Total amount of WAL generated by vacuum | Based on the wal_bytes field of the pgpro_stats_vacuum_databases view. |
WAL segments archived | Total number of archived WAL segments | Based on pg_stat_archiver .archived_count |
WAL segments archive failed | Total number of WAL segment archiver failures | Based on pg_stat_archiver .failed_count . |
Archiver performance | Average archiver process performance per second | Based on the active_time field of the pgpro_stats_archiver view. |
Archive command performance | Average archive_command performance per second | Based on the archive_command_time field of the pgpro_stats_archiver view. |
Table “WAL statistics” is available in the report for Postgres Pro databases starting with version 14. This table is based on the pg_stat_wal view of the Statistics Collector. Table G.11 lists columns of this report table. Times are provided in seconds.
Table G.11. WAL statistics
Row | Description | Field/Calculation |
---|---|---|
WAL generated | Total amount of WAL generated during the report interval | wal_bytes |
WAL per second | Average amount of WAL generated per second during the report interval | wal_bytes / report_duration |
WAL records | Total number of WAL records generated during the report interval | wal_records |
WAL FPI | Total number of WAL full page images generated during the report interval | wal_fpi |
WAL buffers full | Number of times WAL data was written to disk because WAL buffers became full during the report interval | wal_buffers_full |
WAL writes | Number of times WAL buffers were written out to disk via XLogWrite request during the report interval | wal_write |
WAL writes per second | Average number of times WAL buffers were written out to disk via XLogWrite request per second during the report interval | wal_write / report_duration |
WAL sync | Number of times WAL files were synced to disk via issue_xlog_fsync request during the report interval (if fsync is on and wal_sync_method is either fdatasync , fsync or fsync_writethrough , otherwise zero). See Section 27.5 for more information about the internal WAL function issue_xlog_fsync . | wal_sync |
WAL syncs per second | Average number of times WAL files were synced to disk via issue_xlog_fsync request per second during the report interval | wal_sync / report_duration |
WAL write time | Total amount of time spent writing WAL buffers to disk via XLogWrite request during the report interval (if track_wal_io_timing is enabled, otherwise zero; for more details, see Section 18.9). This includes the sync time when wal_sync_method is either open_datasync or open_sync . | wal_write_time |
WAL write duty | WAL write time as the percentage of the report interval duration | wal_write_time * 100 / report_duration |
WAL sync time | Total amount of time spent syncing WAL files to disk via issue_xlog_fsync request during the report interval (if track_wal_io_timing is enabled, fsync is on, and wal_sync_method is either fdatasync , fsync or fsync_writethrough , otherwise zero). | wal_sync_time |
WAL sync duty | WAL sync time as the percentage of the report interval duration | wal_sync_time * 100 / report_duration |
The report table “Tablespace statistics” provides information on the sizes and growth of tablespaces. Table G.12 lists columns of this report table.
Table G.12. Tablespace statistics
Column | Description | Field/Calculation |
---|---|---|
Tablespace | Tablespace name | pg_tablespace .spcname |
Path | Tablespace path | pg_tablespace_location() |
Size | Tablespace size at the time of the last sample in the report interval | pg_tablespace_size() |
Growth | Tablespace growth during the report interval | pg_tablespace_size() increment between interval bounds |
If the pgpro_stats extension was available during the report interval, the report table “Wait statistics by database” shows the total wait time by wait event type and database. Table G.13 lists columns of this report table.
Table G.13. Wait statistics by database
Column | Description |
---|---|
Database | Database name |
Wait event type | Type of event for which the backends were waiting. Asterisk means aggregation of all wait event types in the database. |
Waited (s) | Time spent waiting in events of Wait event type , in seconds |
%Total | Percentage of wait time spent in the database events of Wait event type in all wait time for the cluster |
If the pgpro_stats extension was available during the report interval, the report table “Top wait events” shows top wait events in the cluster by wait time. Table G.14 lists columns of this report table.
Table G.14. Top wait events
Column | Description |
---|---|
Database | Database name |
Wait event type | The type of event for which the backends were waiting |
Wait event | Wait event name for which the backends were waiting |
Waited | Total wait time spent in Wait event of the database, in seconds |
%Total | Percentage of wait time spent in Wait event of the database in all wait time in the cluster |
G.1.11.2. Load distribution #
This section of a pgpro_pwr report is based on the pgpro_stats_totals view of the pgpro_stats extension, if it was available during the report interval. Each table in this section provides data for the report interval on load distribution over a certain kind of objects for which aggregated statistics are collected, such as databases, applications, hosts, or users. Each table contains one row per resource (for example, total time or shared blocks written), with the load distribution shown graphically as a stacked bar chart for the top objects by load of this resource. If the bar chart area that corresponds to an object is too narrow to include captions, hover over that area to get a hint with the caption, value and percentage. The report tables “Load distribution among heavily loaded databases”, “Load distribution among heavily loaded applications”, “Load distribution among heavily loaded hosts” and “Load distribution among heavily loaded users” show load distribution for the respective objects. Table G.15 lists rows of these report tables.
Table G.15. Load distribution
Row | Description | Calculation |
---|---|---|
Total time (sec.) | Total time spent in the planning and execution of statements | total_plan_time + total_exec_time |
Executed count | Number of queries executed | queries_executed |
I/O time (sec.) | Total time the statements spent reading or writing blocks (if track_io_timing is enabled, otherwise zero) | blk_read_time + blk_write_time |
Blocks fetched | Total number of shared block cache hits and shared blocks read by the statements | shared_blks_hit + shared_blks_read |
Shared blocks read | Total number of shared blocks read by the statements | shared_blks_read |
Shared blocks dirtied | Total number of shared blocks dirtied by the statements | shared_blks_dirtied |
Shared blocks written | Total number of shared blocks written by the statements | shared_blks_written |
WAL generated | Total amount of WAL generated by the statements | wal_bytes |
Temp and Local blocks written | Total number of temporary and local blocks written by the statements | temp_blks_written + local_blks_written |
Temp and Local blocks read | Total number of temp and local blocks read by the statements | temp_blks_read + local_blks_read |
Invalidation messages sent | Total number of all invalidation messages sent by backends in this database | (pgpro_stats_totals .inval_msgs ).all |
Cache resets | Total number of shared cache resets | pgpro_stats_totals .cache_resets |
G.1.11.3. Session states observed by subsamples #
This section of a pgpro_pwr report provides information about session states captured by subsamples during the report interval.
Tables of this report section are described below.
The report subsection “Chart with session state” visualizes session states captured by subsamples. It is a timeline chart showing captured session states in backends and transactions. Each state has a popup with the session state attributes. Click on a state to see it in the table of session states.
The report table “Session state statistics by database” shows the aggregated data on session states. Only session states captured in subsamples are counted. Table G.16 lists columns of this report table.
Table G.16. Session state statistics by database
Column | Description |
---|---|
Database | Database name |
Summary Active | Overall time of active states captured in subsamples |
Summary Idle in xact | Overall time of idle in transaction states captured in subsamples |
Summary Idle in xact (A) | Overall time of idle in transaction (aborted) states captured in subsamples |
Maximal Active | Time of the longest active state captured in subsamples |
Maximal Idle in xact | Time of the longest idle in transaction state captured in subsamples |
Maximal Idle in xact (A) | Time of the longest idle in transaction (aborted) state captured in subsamples |
Maximal xact age | Maximal transaction age detected in subsamples |
The report table “Top 'idle in transaction' session states by duration” shows the top pgpro_pwr.topn longest idle in transaction states that were last observed in the pg_stat_activity view for each session. Table G.17 lists columns of this report table.
Table G.17. Top 'idle in transaction' session states by duration
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | datname |
User | User name | usename |
App | Application name | application_name |
Pid | Process ID | pid |
Xact start | Transaction start timestamp | xact_start |
State change | State change timestamp | state_change |
State duration | State duration | clock_timestamp() - state_change |
The report table “Top 'active' session states by duration” shows the top pgpro_pwr.topn longest active states that were last observed in the pg_stat_activity view for each session. Table G.18 lists columns of this report table.
Table G.18. Top 'active' session states by duration
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | datname |
User | User name | usename |
App | Application name | application_name |
Pid | Process ID | pid |
Xact start | Transaction start timestamp | xact_start |
State change | State change timestamp | state_change |
State duration | State duration | clock_timestamp() - state_change |
The report table “Top states by transaction age” shows top session states by transaction age that were last observed in the pg_stat_activity view for each session. Table G.19 lists columns of this report table.
Table G.19. Top states by transaction age
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | datname |
User | User name | usename |
App | Application name | application_name |
Pid | Process ID | pid |
Xact start | Transaction start timestamp | xact_start |
Xact duration | Transaction duration | clock_timestamp() - xact_start |
Age | Transaction age | age( |
State | Session state at the maximum age detected | |
State change | State change timestamp | state_change |
State duration | State duration | clock_timestamp() - state_change |
The report table “Top states by transaction duration” shows top longest session states that were last observed in the pg_stat_activity view for each session. Table G.20 lists columns of this report table.
Table G.20. Top states by transaction duration
Column | Description | Field/Calculation |
---|---|---|
Database | Database name | datname |
User | User name | usename |
App | Application name | application_name |
Pid | Process ID | pid |
Xact start | Transaction start timestamp | xact_start |
Xact duration | Transaction duration | clock_timestamp() - xact_start |
Age | Transaction age | age( |
State | Session state at the maximum age detected | |
State change | State change timestamp | state_change |
State duration | State duration | clock_timestamp() - state_change |
G.1.11.4. SQL query statistics #
This section of a pgpro_pwr report provides data for the report interval on top statements by several important statistics. The data is mainly captured from the views of whichever of the pgpro_stats and pg_stat_statements extensions was available during the report interval, with pgpro_stats taking precedence. A single mouse click on a statement highlights it in all SQL-related sections and shows a query text preview just under the query statistics row. A second click on the statement hides the preview.
Tables of this report section are described below.
The report table “Top SQL by elapsed time” shows top statements by the sum of the total_plan_time and total_exec_time fields of the pgpro_stats_statements or pg_stat_statements view. Table G.21 lists columns of this report table. Times are provided in seconds.
Table G.21. Top SQL by elapsed time
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
%Total | Percentage of elapsed time of this statement plan in the total elapsed time of all statements in the cluster | |
Elapsed Time (s) | Total time spent in planning and execution of the statement plan | total_plan_time + total_exec_time |
Plan Time (s) | Total time spent in planning of the statement | total_plan_time |
Exec Time (s) | Total time spent in execution of the statement plan | total_exec_time |
JIT Time (s) | Total time spent by JIT executing this statement plan, in seconds | jit_generation_time + jit_inlining_time + jit_optimization_time + jit_emission_time |
Read I/O time (s) | Total time the statement spent reading blocks | blk_read_time |
Write I/O time (s) | Total time the statement spent writing blocks | blk_write_time |
Usr CPU time (s) | Time spent on CPU in the user space, in seconds | rusage .user_time |
Sys CPU time (s) | Time spent on CPU in the system space, in seconds | rusage .system_time |
Plans | Number of times the statement was planned | plans |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
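The ranking key and the %Total column of this table follow directly from the fields above: elapsed time is `total_plan_time + total_exec_time`, and %Total is that value as a share of the cluster-wide elapsed time of all statements. A sketch with made-up statistics (field names follow pg_stat_statements; the query IDs and numbers are invented):

```python
# Hypothetical per-statement statistics.
stats = [
    {"queryid": 0xA1, "total_plan_time": 2.0, "total_exec_time": 38.0},
    {"queryid": 0xB2, "total_plan_time": 0.5, "total_exec_time": 9.5},
    {"queryid": 0xC3, "total_plan_time": 1.0, "total_exec_time": 49.0},
]
topn = 2  # plays the role of the pgpro_pwr.topn setting

def elapsed(s):
    return s["total_plan_time"] + s["total_exec_time"]

cluster_elapsed = sum(elapsed(s) for s in stats)  # denominator of %Total
ranked = sorted(stats, key=elapsed, reverse=True)[:topn]
for s in ranked:
    print(f"{s['queryid']:#x}  elapsed={elapsed(s):.1f}s"
          f"  %Total={100 * elapsed(s) / cluster_elapsed:.1f}")
```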
The report table “Top SQL by planning time” shows top statements by the value of the total_plan_time field of the pgpro_stats_statements or pg_stat_statements view. Table G.22 lists columns of this report table.
Table G.22. Top SQL by planning time
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Plan elapsed(s) | Total time spent in planning of the statement, in seconds | total_plan_time |
%Elapsed | Percentage of total_plan_time in the sum of total_plan_time and total_exec_time of this statement plan | |
Mean plan time | Mean time spent planning the statement, in milliseconds | mean_plan_time |
Min plan time | Minimum time spent planning the statement, in milliseconds | min_plan_time |
Max plan time | Maximum time spent planning the statement, in milliseconds | max_plan_time |
StdErr plan time | Population standard deviation of time spent planning the statement, in milliseconds | stddev_plan_time |
Plans | Number of times the statement was planned | plans |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by execution time” shows top statements by the value of the total_exec_time field of the pgpro_stats_statements or pg_stat_statements view. Table G.23 lists columns of this report table.
Table G.23. Top SQL by execution time
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Exec (s) | Total time spent executing the statement plan, in seconds | total_exec_time |
%Elapsed | Percentage of total_exec_time of this statement plan in this statement elapsed time | |
%Total | Percentage of total_exec_time of this statement plan in the total elapsed time of all statements in the cluster | |
JIT Time (s) | Total time spent by JIT executing this statement plan, in seconds | jit_generation_time + jit_inlining_time + jit_optimization_time + jit_emission_time |
Read I/O time (s) | Total time spent in reading pages while executing the statement plan, in seconds | blk_read_time |
Write I/O time (s) | Total time spent in writing pages while executing the statement plan, in seconds | blk_write_time |
Usr CPU time (s) | Time spent on CPU in the user space, in seconds | rusage .user_time |
Sys CPU time (s) | Time spent on CPU in the system space, in seconds | rusage .system_time |
Rows | Number of rows retrieved or affected by execution of the statement plan | rows |
Mean execution time | Mean time spent executing the statement plan, in milliseconds | mean_exec_time |
Min execution time | Minimum time spent executing the statement plan, in milliseconds | min_exec_time |
Max execution time | Maximum time spent executing the statement plan, in milliseconds | max_exec_time |
StdErr execution time | Population standard deviation of time spent executing the statement plan, in milliseconds | stddev_exec_time |
Executions | Number of executions of this statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by mean execution time” shows top pgpro_pwr.topn statements by the value of the mean_time or mean_exec_time field of the pgpro_stats_statements or pg_stat_statements view. Table G.24 lists columns of this report table.
Table G.24. Top SQL by mean execution time
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Mean execution time (ms) | Mean time spent executing the statement, in milliseconds | mean_exec_time |
Min execution time (ms) | Minimum time spent executing the statement, in milliseconds | min_exec_time |
Max execution time (ms) | Maximum time spent executing the statement, in milliseconds | max_exec_time |
StdErr execution time (ms) | Population standard deviation of time spent executing the statement, in milliseconds | stddev_exec_time |
Exec (s) | Time spent executing this statement, in seconds | total_exec_time |
%Elapsed | Execution time of this statement as the percentage of the statement elapsed time | |
%Total | Execution time of this statement as the percentage of the total elapsed time of all statements in the cluster | |
JIT time (s) | Total time spent by JIT executing this statement, in seconds | jit_generation_time + jit_inlining_time + jit_optimization_time + jit_emission_time |
Read I/O time (s) | Time spent reading blocks, in seconds | blk_read_time |
Write I/O time (s) | Time spent writing blocks, in seconds | blk_write_time |
Rows | Number of rows retrieved or affected by the statement | rows |
Executions | Number of times this statement was executed | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
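Note that stddev_exec_time is the population standard deviation (divisor n), not the sample standard deviation (divisor n-1). A minimal sketch of the Mean/StdErr columns over made-up per-call execution times:

```python
import math

# Hypothetical per-call execution times, in milliseconds.
times_ms = [10.0, 12.0, 8.0, 14.0]

mean = sum(times_ms) / len(times_ms)  # mean_exec_time
# Population standard deviation: divide by n, not n - 1.
stddev = math.sqrt(sum((t - mean) ** 2 for t in times_ms) / len(times_ms))

print(mean, round(stddev, 3))  # 11.0 2.236
```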
The report table “Top SQL by executions” shows top statements by the value of the calls field of the pgpro_stats_statements or pg_stat_statements view. Table G.25 lists columns of this report table.
Table G.25. Top SQL by executions
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Executions | Number of executions of the statement plan | calls |
%Total | Percentage of calls of this statement plan in the total calls of all statements in the cluster | |
Rows | Number of rows retrieved or affected by execution of the statement plan | rows |
Mean (ms) | Mean time spent executing the statement plan, in milliseconds | mean_exec_time |
Min (ms) | Minimum time spent executing the statement plan, in milliseconds | min_exec_time |
Max (ms) | Maximum time spent executing the statement plan, in milliseconds | max_exec_time |
StdErr (ms) | Population standard deviation of time spent executing the statement plan, in milliseconds | stddev_exec_time |
Elapsed(s) | Total time spent executing the statement plan, in seconds | total_exec_time |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by I/O wait time” shows top statements by read and write time, i.e., the sum of the blk_read_time and blk_write_time fields of the pgpro_stats_statements or pg_stat_statements view. Table G.26 lists columns of this report table. Times are provided in seconds.
Table G.26. Top SQL by I/O wait time
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
IO(s) | Total time spent in reading and writing while executing this statement plan, i.e., I/O time | blk_read_time + blk_write_time |
R(s) | Total time spent in reading while executing this statement plan | blk_read_time |
W(s) | Total time spent in writing while executing this statement plan | blk_write_time |
%Total | Percentage of I/O time of this statement plan in the total I/O time of all statements in the cluster | |
Shr Reads | Total number of shared blocks read while executing the statement plan | shared_blks_read |
Loc Reads | Total number of local blocks read while executing the statement plan | local_blks_read |
Tmp Reads | Total number of temp blocks read while executing the statement plan | temp_blks_read |
Shr Writes | Total number of shared blocks written while executing the statement plan | shared_blks_written |
Loc Writes | Total number of local blocks written while executing the statement plan | local_blks_written |
Tmp Writes | Total number of temp blocks written while executing the statement plan | temp_blks_written |
Elapsed(s) | Total time spent in execution of the statement plan | total_plan_time + total_exec_time |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by shared blocks fetched” shows top statements by the number of read and hit blocks, which helps to detect the most data-intensive statements. Table G.27 lists columns of this report table.
Table G.27. Top SQL by shared blocks fetched
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Blks fetched | Number of blocks retrieved while executing the statement plan | shared_blks_hit + shared_blks_read |
%Total | Percentage of blocks fetched while executing the statement plan in all blocks fetched for all statements in the cluster | |
Hits(%) | Percentage of blocks found in shared buffers (hits) among all blocks fetched | |
Elapsed(s) | Total time spent in execution of the statement plan, in seconds | total_plan_time + total_exec_time |
Rows | Number of rows retrieved or affected by execution of the statement plan | rows |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
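The Blks fetched and Hits(%) columns combine the two counters named above: blocks fetched is `shared_blks_hit + shared_blks_read`, and Hits(%) is the hit share of that total. A minimal sketch (the counter values are made up):

```python
def hit_percent(shared_blks_hit, shared_blks_read):
    """Hits(%): share of fetched blocks served from shared buffers."""
    fetched = shared_blks_hit + shared_blks_read  # "Blks fetched" column
    return 100.0 * shared_blks_hit / fetched if fetched else 0.0

print(hit_percent(900, 100))  # 90.0 -- most fetches hit the buffer cache
```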
The report table “Top SQL by shared blocks read” shows top statements by the number of shared reads, which helps to detect the most read-intensive statements. Table G.28 lists columns of this report table.
Table G.28. Top SQL by shared blocks read
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Reads | Number of shared blocks read while executing this statement plan | shared_blks_read |
%Total | Percentage of shared reads for this statement plan in all shared reads of all statements in the cluster | |
Hits(%) | Percentage of blocks found in shared buffers (hits) among all blocks fetched while executing this statement plan | |
Elapsed(s) | Total time spent in execution of the statement plan, in seconds | total_plan_time + total_exec_time |
Rows | Number of rows retrieved or affected by execution of the statement plan | rows |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by shared blocks dirtied” shows top statements by the number of shared dirtied buffers, which helps to detect statements that do most data changes in the cluster. Table G.29 lists columns of this report table.
Table G.29. Top SQL by shared blocks dirtied
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Dirtied | Number of shared buffers dirtied while executing this statement plan | shared_blks_dirtied |
%Total | Percentage of dirtied shared buffers for this statement plan in all dirtied shared buffers of all statements in the cluster | |
Hits(%) | Percentage of blocks found in shared buffers (hits) among all blocks fetched while executing this statement plan | |
WAL | Total amount of WAL bytes generated by the statement plan | wal_bytes |
%Total | Percentage of WAL bytes generated by the statement plan in total WAL generated in the cluster | |
Elapsed(s) | Total time spent in execution of the statement plan, in seconds | total_plan_time + total_exec_time |
Rows | Number of rows retrieved or affected by execution of the statement plan | rows |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by shared blocks written” shows top statements by the number of blocks written. Table G.30 lists columns of this report table.
Table G.30. Top SQL by shared blocks written
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Written | Number of blocks written while executing this statement plan | shared_blks_written |
%Total | Percentage of blocks written by this statement plan in all written blocks in the cluster | Percentage of shared_blks_written in (pg_stat_bgwriter .buffers_checkpoint + pg_stat_bgwriter .buffers_clean + pg_stat_bgwriter .buffers_backend ) |
%BackendW | Percentage of blocks written by this statement plan in all blocks in the cluster written by backends | Percentage of shared_blks_written in pg_stat_bgwriter .buffers_backend |
Hits(%) | Percentage of blocks found in shared buffers (hits) among all blocks fetched while executing this statement plan | |
Elapsed(s) | Total time spent in execution of the statement plan, in seconds | total_plan_time + total_exec_time |
Rows | Number of rows retrieved or affected by execution of the statement plan | rows |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by WAL size” shows top statements by the amount of WAL generated. Table G.31 lists columns of this report table.
Table G.31. Top SQL by WAL size
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
WAL | Total amount of WAL bytes generated by the statement plan | wal_bytes |
%Total | Percentage of WAL bytes generated by the statement plan in total WAL generated in the cluster | |
Dirtied | Number of shared buffers dirtied while executing this statement plan | shared_blks_dirtied |
WAL FPI | Total number of WAL full page images generated by the statement plan | wal_fpi |
WAL records | Total number of WAL records generated by the statement plan | wal_records |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
The report table “Top SQL by temp usage” shows top statements by temporary I/O, which is calculated as the sum of the temp_blks_read, temp_blks_written, local_blks_read and local_blks_written fields. Table G.32 lists columns of this report table.
Table G.32. Top SQL by temp usage
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Local fetched | Number of local blocks retrieved | local_blks_hit + local_blks_read |
Hits(%) | Percentage of local blocks found in buffers (hits) among all local blocks fetched | |
Write Local (blk) | Number of blocks written by this statement plan that are used in temporary tables | local_blks_written |
Write Local %Total | Percentage of local_blks_written of this statement plan in the total of local_blks_written for all statements in the cluster | |
Read Local (blk) | Number of blocks read by this statement plan that are used in temporary tables | local_blks_read |
Read Local %Total | Percentage of local_blks_read of this statement plan in the total of local_blks_read for all statements in the cluster | |
Write Temp (blk) | Number of temp blocks written by this statement plan | temp_blks_written |
Write Temp %Total | Percentage of temp_blks_written of this statement plan in the total of temp_blks_written for all statements in the cluster | |
Read Temp (blk) | Number of temp blocks read by this statement plan | temp_blks_read |
Read Temp %Total | Percentage of temp_blks_read of this statement plan in the total of temp_blks_read for all statements in the cluster | |
Elapsed(s) | Total time spent in execution of the statement plan, in seconds | total_plan_time + total_exec_time |
Rows | Number of rows retrieved or affected by execution of the statement plan | rows |
Executions | Number of executions of the statement plan | calls |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
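The ranking key of this table is the four-field sum stated above. A minimal sketch (field names follow pg_stat_statements; the values are made up):

```python
def temp_io(s):
    """Temporary I/O: the sort key of "Top SQL by temp usage"."""
    return (s["temp_blks_read"] + s["temp_blks_written"]
            + s["local_blks_read"] + s["local_blks_written"])

stmt = {"temp_blks_read": 10, "temp_blks_written": 20,
        "local_blks_read": 5, "local_blks_written": 15}
print(temp_io(stmt))  # 50
```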
The report table “Top SQL by invalidation messages sent” shows top statements by the number of invalidation messages sent. Table G.33 lists columns of this report table.
Table G.33. Top SQL by invalidation messages sent
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | queryid |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Invalidation messages sent | Total number of invalidation messages sent by backends executing this statement. Statistics are provided for corresponding message types of pgpro_stats_inval_msgs | fields of pgpro_stats_statements .inval_msgs |
G.1.11.4.1. rusage statistics #
This section is included in the report only if the pgpro_stats or pg_stat_kcache extension was available during the report interval.
The report table “Top SQL by system and user time” shows top statements by the sum of the user_time and system_time fields of the pg_stat_kcache or pgpro_stats_totals view. Table G.34 lists columns of this report table.
Table G.34. Top SQL by system and user time
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Plan User (s) | User CPU time elapsed during planning, in seconds | plan_user_time |
Exec User (s) | User CPU time elapsed during execution, in seconds | exec_user_time |
User %Total | Percentage of plan_user_time + exec_user_time in the total user CPU time for all statements | |
Plan System (s) | System CPU time elapsed during planning, in seconds | plan_system_time |
Exec System (s) | System CPU time elapsed during execution, in seconds | exec_system_time |
System %Total | Percentage of plan_system_time + exec_system_time in the total system CPU time for all statements |
The report table “Top SQL by reads/writes done by filesystem layer” shows top statements by the sum of the reads and writes fields of pg_stat_kcache. Table G.35 lists columns of this report table.
Table G.35. Top SQL by reads/writes done by filesystem layer
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Plan Read Bytes | Bytes read during planning | plan_reads |
Exec Read Bytes | Bytes read during execution | exec_reads |
Read Bytes %Total | Percentage of plan_reads + exec_reads in the total number of bytes read by the filesystem layer for all statements | |
Plan Writes | Bytes written during planning | plan_writes |
Exec Writes | Bytes written during execution | exec_writes |
Write %Total | Percentage of plan_writes + exec_writes in the total number of bytes written by the filesystem layer for all statements |
G.1.11.5. SQL query wait statistics #
If the pgpro_stats extension was available during the report interval, this section of the report will contain a table that is split into sections, each showing top statements by overall wait time or by wait time for a certain wait event type. Table sections related to specific wait events follow in the descending order of the total wait time in wait events of this type. Table G.36 lists columns of this report table. Times are provided in seconds.
Table G.36. SQL query wait statistics
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Hash of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
Waited | Total wait time for all wait events of this statement plan | |
%Total | Percentage of the total wait time of this statement plan in all the wait time in the cluster | |
Details | Waits of this statement plan by wait types |
If the JIT-related statistics were available in the statement statistics extension during the report interval, the report table “Top SQL by JIT elapsed time” shows top statements by the sum of the jit_*_time fields of the pgpro_stats_statements or pg_stat_statements view. Table G.37 lists columns of this report table. Times are provided in seconds.
Table G.37. Top SQL by JIT elapsed time
Column | Description | Field/Calculation |
---|---|---|
Query ID | Hex representation of queryid . The hash of the query ID, database ID and user ID is in square brackets. The (N) mark will appear here for nested statements (such as statements invoked within top-level statements). | |
Plan ID | Internal hash code, computed from the tree of the statement plan | planid |
Database | Database name for the statement | Derived from dbid |
User | Name of the user executing the statement | Derived from userid |
JIT Time (s) | Total time spent by JIT executing this statement plan | jit_generation_time + jit_inlining_time + jit_optimization_time + jit_emission_time |
Generation count | Total number of functions JIT-compiled by this statement | Sum of jit_functions |
Generation time | Total time spent by this statement on generating JIT code | Sum of jit_generation_time |
Inlining count | Number of times functions have been inlined | Sum of jit_inlining_count |
Inlining time | Total time spent by this statement on inlining functions | Sum of jit_inlining_time |
Optimization count | Number of times this statement has been optimized | Sum of jit_optimization_count |
Optimization time | Total time spent by this statement on optimizing | Sum of jit_optimization_time |
Emission count | Number of times code has been emitted | Sum of jit_emission_count |
Emission time | Total time spent by this statement on emitting code | Sum of jit_emission_time |
Deform count | Number of tuple deform functions JIT-compiled by the statement | |
Deform time | Total time spent by the statement on JIT-compiling the tuple deform | |
Plan Time (s) | Total time spent in planning of the statement | total_plan_time |
Exec Time (s) | Total time spent in execution of the statement plan | total_exec_time |
Read I/O time (s) | Total time the statement spent reading blocks | blk_read_time |
Write I/O time (s) | Total time the statement spent writing blocks | blk_write_time |
%Cvr | Coverage: duration of statement statistics collection as the percentage of the report duration |
G.1.11.6. Complete list of SQL texts #
The “Complete list of SQL texts” section of the report contains a table that provides query and plan texts for all statements mentioned in the report. Use the corresponding Query ID/Plan ID link in any statistics table to see the query or plan text. Table G.38 lists columns of this report table.
Table G.38. Complete list of SQL texts
Column | Description |
---|---|
ID | Hex representation of the query or plan identifier |
Query/Plan Text | Text of the query or statement plan |
G.1.11.7. Schema object statistics #
Tables in this section of the report show top database objects by statistics from Postgres Pro's Statistics Collector views.
The report table “Top tables by estimated sequentially scanned volume” shows top tables by estimated volume read by sequential scans. This can help you find database tables that possibly lack an index. When there are no relation sizes collected with pg_relation_size(), relation-size estimates are based on the pg_class.relpages field. Since such values are less accurate, they are shown in square brackets. The data is based on the pg_stat_all_tables view. Table G.39 lists columns of this report table.
Table G.39. Top tables by estimated sequentially scanned volume
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
~SeqBytes | Estimated volume read by sequential scans | Sum of (pg_relation_size() * seq_scan ) |
SeqScan | Number of sequential scans performed on the table | seq_scan |
IxScan | Number of index scans initiated on the table | idx_scan |
IxFet | Number of live rows fetched by index scans | idx_tup_fetch |
Ins | Number of rows inserted | n_tup_ins |
Upd | Number of rows updated | n_tup_upd |
Del | Number of rows deleted | n_tup_del |
Upd(HOT) | Number of rows HOT updated | n_tup_hot_upd |
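As a rough live counterpart to the ~SeqBytes estimate in Table G.39, a query like the following can be run directly. This is only a sketch, not part of pgpro_pwr: it reads cumulative counters accumulated since the last statistics reset, whereas the report uses increments between samples.

```sql
-- Estimate the volume read by sequential scans per table
-- (current relation size * cumulative seq_scan count).
SELECT schemaname,
       relname,
       seq_scan,
       idx_scan,
       pg_size_pretty(pg_relation_size(relid) * seq_scan) AS approx_seq_bytes
FROM pg_stat_all_tables
WHERE seq_scan > 0
ORDER BY pg_relation_size(relid) * seq_scan DESC
LIMIT 10;
```

Tables that rank high here with a low idx_scan count are the usual candidates for a missing index.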
In the report table “Top tables by blocks fetched”, blocks fetched include blocks processed from disk (read) and from shared buffers (hit). This report table shows top database tables by the sum of blocks fetched for the table's heap, indexes, TOAST table (if any) and TOAST table index (if any). This can help you focus on tables with excessive processing of blocks. The data is based on the pg_statio_all_tables view. Table G.40 lists columns of this report table.
Table G.40. Top tables by blocks fetched
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Heap Blks | Number of blocks fetched for the table's heap | heap_blks_read + heap_blks_hit |
Heap Blks %Total | Percentage of blocks fetched for the table's heap in all blocks fetched in the cluster | |
Ix Blks | Number of blocks fetched for the table's indexes | idx_blks_read + idx_blks_hit |
Ix Blks %Total | Percentage of blocks fetched for the table's indexes in all blocks fetched in the cluster | |
TOAST Blks | Number of blocks fetched for the table's TOAST table | toast_blks_read + toast_blks_hit |
TOAST Blks %Total | Percentage of blocks fetched for the table's TOAST table in all blocks fetched in the cluster | |
TOAST-Ix Blks | Number of blocks fetched for the table's TOAST index | tidx_blks_read + tidx_blks_hit |
TOAST-Ix Blks %Total | Percentage of blocks fetched for the table's TOAST index in all blocks fetched in the cluster | |
The report table “Top tables by blocks read” shows top database tables by the number of blocks read for the table's heap, indexes, TOAST table (if any) and TOAST table index (if any). This can help you focus on tables with excessive block reads. The data is based on the pg_statio_all_tables view. Table G.41 lists columns of this report table.
Table G.41. Top tables by blocks read
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Heap Blks | Number of blocks read for the table's heap | heap_blks_read |
Heap Blks %Total | Percentage of blocks read from the table's heap in all blocks read in the cluster | |
Ix Blks | Number of blocks read from the table's indexes | idx_blks_read |
Ix Blks %Total | Percentage of blocks read from the table's indexes in all blocks read in the cluster | |
TOAST Blks | Number of blocks read from the table's TOAST table | toast_blks_read |
TOAST Blks %Total | Percentage of blocks read from the table's TOAST table in all blocks read in the cluster | |
TOAST-Ix Blks | Number of blocks read from the table's TOAST index | tidx_blks_read |
TOAST-Ix Blks %Total | Percentage of blocks read from the table's TOAST index in all blocks read in the cluster | |
Hit(%) | Percentage of this table's heap, index, TOAST and TOAST index blocks found in shared buffers, out of all such blocks fetched for this table from either the file system or buffers | |
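The Hit(%) ratio of Table G.41 can be approximated live with a sketch like the one below (again over cumulative counters, not per-sample increments; not part of pgpro_pwr):

```sql
-- Buffer hit ratio per table across heap, index, TOAST and TOAST-index blocks.
SELECT schemaname, relname,
       round(100.0 *
             (heap_blks_hit + coalesce(idx_blks_hit, 0)
              + coalesce(toast_blks_hit, 0) + coalesce(tidx_blks_hit, 0))
             / nullif(heap_blks_read + heap_blks_hit
                      + coalesce(idx_blks_read, 0) + coalesce(idx_blks_hit, 0)
                      + coalesce(toast_blks_read, 0) + coalesce(toast_blks_hit, 0)
                      + coalesce(tidx_blks_read, 0) + coalesce(tidx_blks_hit, 0), 0),
             2) AS hit_pct
FROM pg_statio_all_tables
ORDER BY hit_pct NULLS LAST
LIMIT 10;
```

Tables with a low hit_pct are the ones doing the most physical reads relative to their total block traffic.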
The report table “Top DML tables” shows top tables by the number of DML-affected rows, i.e., by the sum of n_tup_ins, n_tup_upd and n_tup_del (including TOAST tables). The data is based on the pg_stat_all_tables view. Table G.42 lists columns of this report table.
Table G.42. Top DML tables
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Ins | Number of rows inserted | n_tup_ins |
Upd | Number of rows updated, including HOT | n_tup_upd |
Del | Number of rows deleted | n_tup_del |
Upd(HOT) | Number of rows HOT updated | n_tup_hot_upd |
SeqScan | Number of sequential scans performed on the table | seq_scan |
SeqFet | Number of live rows fetched by sequential scans | seq_tup_read |
IxScan | Number of index scans initiated on this table | idx_scan |
IxFet | Number of live rows fetched by index scans | idx_tup_fetch |
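The ranking criterion of Table G.42 maps onto a simple live query (a sketch over cumulative counters, not part of pgpro_pwr):

```sql
-- Tables ranked by total DML-affected rows since the last stats reset.
SELECT schemaname, relname,
       n_tup_ins, n_tup_upd, n_tup_del,
       n_tup_ins + n_tup_upd + n_tup_del AS dml_rows
FROM pg_stat_all_tables
ORDER BY dml_rows DESC
LIMIT 10;
```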
The report table “Top tables by updated/deleted tuples” shows top tables by tuples modified by UPDATE/DELETE operations, i.e., by the sum of n_tup_upd and n_tup_del (including TOAST tables). The data is based on the pg_stat_all_tables view. Table G.43 lists columns of this report table.
Table G.43. Top tables by updated/deleted tuples
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Upd | Number of rows updated, including HOT | n_tup_upd |
Upd(HOT) | Number of rows HOT updated | n_tup_hot_upd |
Del | Number of rows deleted | n_tup_del |
Vacuum | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
AutoVacuum | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
Analyze | Number of times this table was manually analyzed | analyze_count |
AutoAnalyze | Number of times this table was analyzed by the autovacuum daemon | autoanalyze_count |
The report table “Top tables by removed all-visible marks” shows top tables by the number of times that the all-visible mark was removed from the visibility map by any backend. This report section is only shown when corresponding statistics are available. Table G.44 lists columns of this report table.
Table G.44. Top tables by removed all-visible marks
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
All-Visible marks cleared | Number of times that the all-visible mark was removed from the relation visibility map | rev_all_visible_pages |
All-Visible marks set | Number of times that the all-visible mark was set in the relation visibility map | pages_all_visible |
All-Visible marks %Set | Percentage of the number of times that the all-visible mark was set in the number of times that it was set or removed | pages_all_visible * 100% / (rev_all_visible_pages + pages_all_visible) |
Vacuum | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
AutoVacuum | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
The report table “Top tables by removed all-frozen marks” shows top tables by the number of times that the all-frozen mark was removed from the visibility map by any backend. This report section is only shown when corresponding statistics are available. Table G.45 lists columns of this report table.
Table G.45. Top tables by removed all-frozen marks
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
All-Frozen marks cleared | Number of times that the all-frozen mark was removed from the relation visibility map | rev_all_frozen_pages |
All-Frozen marks set | Number of times that the all-frozen mark was set in the relation visibility map | pages_frozen |
All-Frozen marks %Set | Percentage of the number of times that the all-frozen mark was set in the number of times that it was set or removed | pages_frozen * 100% / (rev_all_frozen_pages + pages_frozen) |
Vacuum | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
AutoVacuum | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
The report table “Top tables by new-page updated tuples” shows top tables by the number of rows updated where the successor version goes onto a new heap page, leaving behind an original version with a t_ctid field that points to a different heap page. These are always non-HOT updates. Table G.46 lists columns of this report table.
Table G.46. Top tables by new-page updated tuples
Column | Description |
---|---|
DB | Database name for the table |
Tablespace | Name of the tablespace where the table is located |
Schema | Schema name for the table |
Table | Table name |
NP Upd | Number of rows updated to a new heap page |
%Upd | Number of new-page updated rows as the percentage of all rows updated |
Upd | Number of rows updated, including HOT |
Upd(HOT) | Number of rows HOT updated (i.e., with no separate index update required) |
The report table “Top growing tables” shows top tables by growth. The data is based on the pg_stat_all_tables view. When there are no relation sizes collected with pg_relation_size(), relation-size estimates are based on the pg_class.relpages field. Since such values are less accurate, they are shown in square brackets. Table G.47 lists columns of this report table.
Table G.47. Top growing tables
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Size | Table size at the time of the last sample in the report interval | pg_table_size() - pg_relation_size(toast) |
Growth | Table growth | |
Ins | Number of rows inserted | n_tup_ins |
Upd | Number of rows updated, including HOT | n_tup_upd |
Del | Number of rows deleted | n_tup_del |
Upd(HOT) | Number of rows HOT updated | n_tup_hot_upd |
In the report table “Top indexes by blocks fetched”, blocks fetched include index blocks processed from disk (read) and from shared buffers (hit). The data is based on the pg_statio_all_indexes view. Table G.48 lists columns of this report table.
Table G.48. Top indexes by blocks fetched
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the index | |
Tablespace | Name of the tablespace where the index is located | |
Schema | Schema name for the underlying table | |
Table | Underlying table name | |
Index | Index name | |
Scans | Number of index scans initiated on this index | idx_scan |
Blks | Number of blocks fetched for this index | idx_blks_read + idx_blks_hit |
%Total | Percentage of blocks fetched for this index in all blocks fetched in the cluster |
The report table “Top indexes by blocks read” is also based on the pg_statio_all_indexes and pg_stat_all_indexes views. Table G.49 lists columns of this report table.
Table G.49. Top indexes by blocks read
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the index | |
Tablespace | Name of the tablespace where the index is located | |
Schema | Schema name for the underlying table | |
Table | Underlying table name | |
Index | Index name | |
Scans | Number of index scans initiated on this index | idx_scan |
Blk Reads | Number of disk blocks read from this index | idx_blks_read |
%Total | Percentage of disk blocks read from this index in all disk blocks read in the cluster | |
Hits(%) | Percentage of index blocks found in shared buffers, out of all blocks fetched for this index | |
The report table “Top growing indexes” shows top indexes by growth. The table uses data from the pg_stat_all_tables and pg_stat_all_indexes views. When there are no relation sizes collected with pg_relation_size(), relation-size estimates are based on the pg_class.relpages field. Since such values are less accurate, they are shown in square brackets. Table G.50 lists columns of this report table.
Table G.50. Top growing indexes
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the index | |
Tablespace | Name of the tablespace where the index is located | |
Schema | Schema name for the underlying table | |
Table | Underlying table name | |
Index | Index name | |
Index Size | Index size at the time of the last sample in the report interval | pg_relation_size() |
Index Growth | Index growth during the report interval | |
Table Ins | Number of rows inserted into the underlying table | n_tup_ins |
Table Upd | Number of rows updated in the underlying table, without HOT | n_tup_upd - n_tup_hot_upd |
Table Del | Number of rows deleted from the underlying table | n_tup_del |
The report table “Unused indexes” shows indexes that were not scanned during the report interval, ranked by the number of DML operations on the underlying tables that nevertheless required index maintenance. Constraint indexes are not counted. The table uses data from the pg_stat_all_tables view. Table G.51 lists columns of this report table.
Table G.51. Unused indexes
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the index | |
Tablespace | Name of the tablespace where the index is located | |
Schema | Schema name for the underlying table | |
Table | Underlying table name | |
Index | Index name | |
Index Size | Index size at the time of the last sample in the report interval | pg_relation_size() |
Index Growth | Index growth during the report interval | |
Table Ins | Number of rows inserted into the underlying table | n_tup_ins |
Table Upd | Number of rows updated in the underlying table, without HOT | n_tup_upd - n_tup_hot_upd |
Table Del | Number of rows deleted from the underlying table | n_tup_del |
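A live check in the same spirit as Table G.51 can be sketched as follows. This is not pgpro_pwr's exact logic (the report works on per-interval increments and its constraint filtering may differ); it lists user indexes never scanned since the last stats reset, excluding indexes that back a constraint:

```sql
-- Candidate unused indexes: never scanned, not backing a constraint.
SELECT s.schemaname, s.relname AS tablename, s.indexrelname,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
WHERE s.idx_scan = 0
  AND NOT EXISTS (SELECT 1 FROM pg_constraint c
                  WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Indexes listed here still cost every INSERT, UPDATE and DELETE on their tables while contributing no scans, which is exactly why the report ranks them by table DML volume.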
G.1.11.8. User function statistics #
Tables in this section of the report show top functions in the cluster by statistics from the pg_stat_user_functions view. Times in the tables are provided in seconds.
The report table “Top functions by total time” shows top functions by the total time elapsed. The report table “Top functions by executions” shows top functions by the number of executions. The report table “Top trigger functions by total time” shows top trigger functions by the total time elapsed. Table G.52 lists columns of these report tables.
Table G.52. User function statistics
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the function | |
Schema | Schema name for the function | |
Function | Function name | |
Executions | Number of times this function has been called | calls |
Total Time (s) | Total time spent in this function and all other functions called by it | total_time |
Self Time (s) | Total time spent in this function itself, not including other functions called by it | self_time |
Mean Time (s) | Mean time of a single function execution | total_time /calls |
Mean self Time (s) | Mean self time of a single function execution | self_time /calls |
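A live counterpart to Table G.52 can be sketched as below. Note that pg_stat_user_functions is only populated when track_functions is enabled, and its time columns are in milliseconds, so they are divided by 1000 to match the report's seconds:

```sql
-- Top functions by total time, with mean time per call (in seconds).
SELECT schemaname, funcname, calls,
       total_time / 1000 AS total_time_s,
       self_time  / 1000 AS self_time_s,
       total_time / nullif(calls, 0) / 1000 AS mean_time_s
FROM pg_stat_user_functions
ORDER BY total_time DESC
LIMIT 10;
```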
G.1.11.9. Vacuum-related statistics #
The report table “Top tables by vacuum time spent” is available if pgpro_stats can provide extended vacuum statistics. This table shows top tables by total time spent vacuuming them. The data is based on the pgpro_stats_vacuum_tables view. Table G.53 lists columns of this report table.
Table G.53. Top tables by vacuum time spent
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Total vacuum time | Total time of vacuuming this table | total_time |
Delay vacuum time | Time spent sleeping in a vacuum delay point by vacuum operations performed on this table | delay_time |
Read I/O time | Time spent reading database blocks by vacuum operations performed on this table | blk_read_time |
Write I/O time | Time spent writing database blocks by vacuum operations performed on this table | blk_write_time |
User CPU time | User CPU time of vacuuming this table | user_time |
System CPU time | System CPU time of vacuuming this table | system_time |
Vacuum count | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
Autovacuum count | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
Total fetched | Total number of database blocks fetched by vacuum operations performed on this table | total_blks_read + total_blks_hit |
Heap fetched | Total number of blocks fetched from this table by vacuum operations performed on it | rel_blks_read + rel_blks_hit |
Scanned | Number of pages examined by vacuum operations performed on this table | pages_scanned |
The report table “Top indexes by vacuum time spent” is available if pgpro_stats can provide extended vacuum statistics. This table shows top indexes by total time spent vacuuming them. The data is based on the pgpro_stats_vacuum_indexes view. Table G.54 lists columns of this report table.
Table G.54. Top indexes by vacuum time spent
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Index | Index name | |
Total vacuum time | Total time of vacuuming this index | total_time |
Delay vacuum time | Time spent sleeping in a vacuum delay point by vacuum operations performed on this index | delay_time |
Read I/O time | Time spent reading database blocks by vacuum operations performed on this index | blk_read_time |
Write I/O time | Time spent writing database blocks by vacuum operations performed on this index | blk_write_time |
User CPU time | User CPU time of vacuuming this index | user_time |
System CPU time | System CPU time of vacuuming this index | system_time |
Vacuum count | Number of times the underlying table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
Autovacuum count | Number of times the underlying table has been vacuumed by the autovacuum daemon | autovacuum_count |
Total fetched | Total number of database blocks fetched by vacuum operations performed on the underlying table | total_blks_read + total_blks_hit |
Index fetched | Total number of blocks fetched from this index by vacuum operations performed on it | rel_blks_read + rel_blks_hit |
The report table “Top tables by blocks vacuum fetched” is available if pgpro_stats can provide extended vacuum statistics. This table shows top tables by the number of blocks fetched by vacuum operations performed on them. The data is based on the pgpro_stats_vacuum_tables view. Table G.55 lists columns of this report table.
Table G.55. Top tables by blocks vacuum fetched
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
DB fetched | Total number of database blocks fetched by vacuum operations performed on this table | total_blks_read + total_blks_hit |
%Total | Total number of database blocks fetched by vacuum operations performed on this table as the percentage of all blocks fetched in the cluster | |
DB read | Total number of database blocks read by vacuum operations performed on this table | total_blks_read |
%Total | Total number of database blocks read by vacuum operations performed on this table as the percentage of all blocks read in the cluster | |
Heap fetched | Total number of blocks fetched from this table by vacuum operations performed on it | rel_blks_read + rel_blks_hit |
%Rel | Total number of table blocks fetched by vacuum operations performed on this table as the percentage of all blocks fetched from this table | |
Heap read | Total number of blocks read from this table by vacuum operations performed on it | rel_blks_read |
%Rel | Total number of table blocks read by vacuum operations performed on this table as the percentage of all blocks read from this table | |
Scanned | Number of pages examined by vacuum operations performed on this table | pages_scanned |
The report table “Top indexes by blocks vacuum fetched” is available if pgpro_stats can provide extended vacuum statistics. This table shows top indexes by the number of blocks fetched by vacuum operations performed on their underlying tables. The data is based on the pgpro_stats_vacuum_indexes view. Table G.56 lists columns of this report table.
Table G.56. Top indexes by blocks vacuum fetched
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Index | Index name | |
DB fetched | Total number of database blocks fetched by vacuum operations performed on this index | total_blks_read + total_blks_hit |
%Total | Total number of database blocks fetched by vacuum operations performed on this index as the percentage of all blocks fetched in the cluster | |
DB read | Total number of database blocks read by vacuum operations performed on this index | total_blks_read |
%Total | Total number of database blocks read by vacuum operations performed on this index as the percentage of all blocks read in the cluster | |
Idx fetched | Total number of blocks fetched from this index by vacuum operations on it | rel_blks_read + rel_blks_hit |
%Idx | Total number of index blocks fetched by vacuum operations performed on this index as the percentage of all blocks fetched from this index | |
Idx read | Total number of blocks read from this index by vacuum operations performed on it | rel_blks_read |
%Idx | Total number of index blocks read by vacuum operations performed on this index as the percentage of all blocks read from this index | |
The report table “Top tables by blocks vacuum read” is available if pgpro_stats can provide extended vacuum statistics. This table shows top tables by the number of blocks read by vacuum operations performed on them. The data is based on the pgpro_stats_vacuum_tables view. Table G.57 lists columns of this report table.
Table G.57. Top tables by blocks vacuum read
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
DB read | Total number of database blocks read by vacuum operations performed on this table | total_blks_read |
%Total | Total number of database blocks read by vacuum operations performed on this table as the percentage of all blocks read in the cluster | |
%Hit | Total number of database blocks found in shared buffers by vacuum operations performed on this table as the percentage of all blocks fetched by vacuum operations performed on this table | |
Heap read | Total number of database blocks vacuum operations read from this table | rel_blks_read |
%Rel | Total number of table blocks read by vacuum operations performed on this table as the percentage of all blocks read from this table | |
%Hit | Total number of table blocks found in shared buffers by vacuum operations performed on this table as the percentage of table blocks fetched by vacuum operations performed on this table | |
Scanned | Number of pages examined by vacuum operations performed on this table | pages_scanned |
The report table “Top indexes by blocks vacuum read” is available if pgpro_stats can provide extended vacuum statistics. This table shows top indexes by the number of blocks read by vacuum operations performed on their underlying tables. The data is based on the pgpro_stats_vacuum_indexes view. Table G.58 lists columns of this report table.
Table G.58. Top indexes by blocks vacuum read
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Index | Index name | |
DB read | Total number of database blocks read by vacuum operations performed on this index | total_blks_read |
%Total | Total number of database blocks read by vacuum operations performed on this index as the percentage of all blocks read in the cluster | |
%Hit | Total number of database blocks found in shared buffers by vacuum operations performed on this index as the percentage of all database blocks fetched by vacuum operations performed on this index | |
Idx read | Total number of blocks read from this index by vacuum operations performed on it | rel_blks_read |
%Idx | Total number of index blocks read by vacuum operations performed on this index as the percentage of all blocks read from this index | |
%Hit | Total number of index blocks found in shared buffers by vacuum operations performed on this index as the percentage of index blocks fetched by vacuum operations performed on this index | |
The report table “Top tables by dead tuples vacuum left” is available if pgpro_stats can provide extended vacuum statistics. This table shows top tables by the number of dead tuples left by vacuum due to their visibility in transactions. The data is based on the pgpro_stats_vacuum_tables view. Table G.59 lists columns of this report table.
Table G.59. Top tables by dead tuples vacuum left
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Dead tuples left | Total number of dead tuples vacuum operations left in this table due to their visibility in transactions | dead_tuples |
Dead tuples deleted | Total number of dead tuples vacuum operations deleted from this table | tuples_deleted |
%Eff | Vacuum efficiency in terms of deleted tuples. This is the percentage of dead tuples deleted from this table in all dead tuples to be deleted from this table. | tuples_deleted * 100 / (tuples_deleted + dead_tuples ) |
Tuples del | Number of rows deleted | pg_stat_all_tables .n_tup_del |
Tuples upd | Number of rows updated (includes HOT updated rows) | pg_stat_all_tables .n_tup_upd |
Vacuum | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
Autovacuum | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
The report table “Top tables by WAL size generated by vacuum” is available if pgpro_stats can provide extended vacuum statistics. This table shows top tables by the amount of WAL generated by vacuum operations performed on them. The data is based on the pgpro_stats_vacuum_tables view. Table G.60 lists columns of this report table.
Table G.60. Top tables by WAL size generated by vacuum
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
WAL size | Total amount of WAL bytes generated by vacuum operations performed on this table | wal_bytes |
%Total | Total amount of WAL bytes generated by vacuum operations performed on this table as the percentage of all WAL generated in the cluster | |
WAL FPI | Total number of WAL full page images generated by vacuum operations performed on this table | wal_fpi |
Scanned blocks | Number of pages examined by vacuum operations performed on this table | pages_scanned |
Dirtied blocks | Number of database blocks dirtied by vacuum operations performed on this table | total_blks_dirtied |
Removed blocks | Number of pages removed by vacuum operations performed on this table | pages_removed |
Vacuum | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
Autovacuum | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
The report table “Top tables by vacuum operations” shows top tables by the number of vacuum operations performed (vacuum_count + autovacuum_count). The data is based on the pg_stat_all_tables view. Table G.61 lists columns of this report table.
Table G.61. Top tables by vacuum operations
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Vacuum count | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
Autovacuum count | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
Ins | Number of rows inserted | n_tup_ins |
Upd | Number of rows updated (includes HOT updated rows) | n_tup_upd |
Del | Number of rows deleted | n_tup_del |
Upd(HOT) | Number of rows HOT updated | n_tup_hot_upd |
The report table “Top tables by analyze operations” shows top tables by the number of analyze operations performed (analyze_count + autoanalyze_count). The data is based on the pg_stat_all_tables view. Table G.62 lists columns of this report table.
Table G.62. Top tables by analyze operations
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Tablespace | Name of the tablespace where the table is located | |
Schema | Schema name for the table | |
Table | Table name | |
Analyze count | Number of times this table has been manually analyzed | analyze_count |
Autoanalyze count | Number of times this table has been analyzed by the autovacuum daemon | autoanalyze_count |
Ins | Number of rows inserted | n_tup_ins |
Upd | Number of rows updated, including HOT | n_tup_upd |
Del | Number of rows deleted | n_tup_del |
Upd(HOT) | Number of rows HOT updated | n_tup_hot_upd |
The report table “Top indexes by estimated vacuum load” shows top indexes by estimated implicit vacuum load. This load is calculated as the number of vacuum operations performed on the underlying table multiplied by the index size. The data is based on the pg_stat_all_indexes view. When there are no relation sizes collected with pg_relation_size(), relation-size estimates are based on the pg_class.relpages field. Since such values are less accurate, they are shown in square brackets. Table G.63 lists columns of this report table.
Table G.63. Top indexes by estimated vacuum load
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the index | |
Tablespace | Name of the tablespace where the index is located | |
Schema | Schema name for the underlying table | |
Table | Underlying table name | |
Index | Index name | |
~Vacuum bytes | Vacuum load estimation | (vacuum_count + autovacuum_count ) * index_size |
Vacuum count | Number of times this table has been manually vacuumed (not counting VACUUM FULL ) | vacuum_count |
Autovacuum count | Number of times this table has been vacuumed by the autovacuum daemon | autovacuum_count |
IX size | Average index size during the report interval | |
Relsize | Average relation size during the report interval | |
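The load estimate above can be approximated ad hoc on a live server with a query like the following (a sketch against the standard pg_stat_all_tables and pg_stat_all_indexes views; the current index size from pg_relation_size() stands in for the report's size averaged over the interval):

```sql
-- Approximate "~Vacuum bytes": vacuum operations on the underlying table
-- multiplied by the current index size. The report averages index size over
-- the report interval; here the current size is used instead.
SELECT t.schemaname,
       t.relname        AS "table",
       i.indexrelname   AS "index",
       (t.vacuum_count + t.autovacuum_count) * pg_relation_size(i.indexrelid)
         AS est_vacuum_bytes
FROM pg_stat_all_tables t
JOIN pg_stat_all_indexes i ON i.relid = t.relid
ORDER BY est_vacuum_bytes DESC
LIMIT 10;
```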
The report table “Top tables by dead tuples ratio” shows top tables larger than 5 MB by the ratio of dead tuples. Statistics are valid for the last sample in the report interval. The data is based on the pg_stat_all_tables view. When there are no relation sizes collected with pg_relation_size(), relation-size estimates are based on the pg_class.relpages field. Since such values are less accurate, they are shown in square brackets. Table G.64 lists columns of this report table.
Table G.64. Top tables by dead tuples ratio
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Schema | Schema name for the table | |
Table | Table name | |
Live | Estimated number of live rows | n_live_tup |
Dead | Estimated number of dead rows | n_dead_tup |
%Dead | Percentage of dead rows in all table rows | |
Last AV | Last time at which this table was vacuumed by the autovacuum daemon | last_autovacuum |
Size | Table size | pg_table_size() - pg_relation_size(toast) |
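The same figures can be inspected directly between samples. A sketch using pg_stat_all_tables (the 5 MB cutoff mirrors the report's filter; plain pg_table_size() is used here, whereas the report additionally subtracts the TOAST size):

```sql
-- Tables larger than 5 MB, ordered by the share of dead tuples.
SELECT schemaname,
       relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 2)
         AS pct_dead,
       last_autovacuum
FROM pg_stat_all_tables
WHERE pg_table_size(relid) > 5 * 1024 * 1024
ORDER BY pct_dead DESC NULLS LAST
LIMIT 10;
```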
The report table “Top tables by modified tuples ratio” shows top tables larger than 5 MB by the ratio of modified tuples. Statistics are valid for the last sample in the report interval. The data is based on the pg_stat_all_tables view. When there are no relation sizes collected with pg_relation_size(), relation-size estimates are based on the pg_class.relpages field. Since such values are less accurate, they are shown in square brackets. Table G.65 lists columns of this report table.
Table G.65. Top tables by modified tuples ratio
Column | Description | Field/Calculation |
---|---|---|
DB | Database name for the table | |
Schema | Schema name for the table | |
Table | Table name | |
Live | Estimated number of live rows | n_live_tup |
Dead | Estimated number of dead rows | n_dead_tup |
Mod | Estimated number of rows modified since this table was last analyzed | n_mod_since_analyze |
%Mod | Percentage of modified rows in all table rows | |
Last AA | Last time at which this table was analyzed by the autovacuum daemon | last_autoanalyze |
Size | Table size | pg_table_size() - pg_relation_size(toast) |
G.1.11.10. Cluster settings during the report interval #
This section of the report contains a table with Postgres Pro GUC parameters, the values of the version(), pg_postmaster_start_time() and pg_conf_load_time() functions, and the system_identifier field of the pg_control_system() function during the report interval. The data in the table is grouped under Defined settings and Default settings. Table G.66 lists columns of this report table.
Table G.66. Cluster settings during the report interval
Column | Description |
---|---|
Setting | Name of the parameter |
reset_val | reset_val field of the pg_settings view. Settings changed during the report interval are shown in bold font. |
Unit | Unit of the setting |
Source | Configuration file where this setting is defined and the line number within that file |
Notes | Timestamp of the sample where this value was first observed |
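The underlying data comes from the pg_settings view. For a quick look at non-default settings with their source location on a live server (an ad-hoc sketch, not part of pgpro_pwr itself):

```sql
-- Settings that differ from the built-in defaults, with where they were set.
SELECT name,
       reset_val,
       unit,
       coalesce(sourcefile || ':' || sourceline, source) AS source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY name;
```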
G.1.12. pgpro_pwr Diagnostic Tools #
pgpro_pwr provides self-diagnostic tools.
G.1.12.1. Collecting Detailed Timing Statistics for Sampling Procedures #
pgpro_pwr collects detailed timing statistics of taking samples when the pgpro_pwr.track_sample_timings parameter is on. You can get the results from the v_sample_timings view. Table G.67 lists columns of this view.
Table G.67. v_sample_timings View
Column | Description |
---|---|
server_name | Name of the server |
sample_id | Sample identifier |
sample_time | Time when the sample was taken |
sampling_event | Sampling stage. See Table G.68 for descriptions of sampling stages. |
time_spent | Time spent in the event |
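To see where sampling time goes, the parameter can be enabled and the view queried for the most recent sample. A sketch (setting the parameter via ALTER SYSTEM and the server name 'local' are assumptions; use whatever mechanism and server name apply to your setup):

```sql
-- Enable per-stage timing collection for subsequent samples.
ALTER SYSTEM SET pgpro_pwr.track_sample_timings = on;
SELECT pg_reload_conf();

-- Per-stage timings of the latest sample for server 'local'.
SELECT sampling_event, time_spent
FROM v_sample_timings
WHERE server_name = 'local'
  AND sample_id = (SELECT max(sample_id)
                   FROM v_sample_timings
                   WHERE server_name = 'local')
ORDER BY time_spent DESC;
```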
Table G.68. sampling_event Description
Event | Description |
---|---|
total | Taking the sample (all stages) |
connect | Making dblink connection to the server |
get server environment | Getting server GUC parameters, available extensions, etc. |
collect database stats | Querying the pg_stat_database view for statistics on databases |
calculate database stats | Calculating differential statistics on databases since the previous sample |
collect tablespace stats | Querying the pg_tablespace view for statistics on tablespaces |
collect statement stats | Collecting statistics on statements using the pgpro_stats and pg_stat_kcache extensions |
query pg_stat_bgwriter | Collecting cluster statistics using the pg_stat_bgwriter view |
query pg_stat_archiver | Collecting cluster statistics using the pg_stat_archiver view |
collect object stats | Collecting statistics on database objects. Includes events from Table G.69. |
maintain repository | Executing support routines |
calculate tablespace stats | Calculating differential statistics on tablespaces |
calculate object stats | Calculating differential statistics on database objects. Includes events from Table G.70. |
calculate cluster stats | Calculating cluster differential statistics |
calculate archiver stats | Calculating archiver differential statistics |
delete obsolete samples | Deleting obsolete baselines and samples |
Table G.69. Events of Collecting Statistics on Database Objects
Event | Description |
---|---|
db: dbname collect tables stats | Collecting statistics on tables for the dbname database |
db: dbname collect indexes stats | Collecting statistics on indexes for the dbname database |
db: dbname collect functions stats | Collecting statistics on functions for the dbname database |
Table G.70. Events of Calculating Differences of Statistics on Database Objects
Event | Description |
---|---|
calculate tables stats | Calculating differential statistics on tables of all databases |
calculate indexes stats | Calculating differential statistics on indexes of all databases |
calculate functions stats | Calculating differential statistics on functions of all databases |
G.1.13. Important Notes #
When using the pgpro_pwr extension, be aware of the following:
Postgres Pro collects execution statistics only after the execution is complete. If a single execution of a statement lasts for several samples, it affects only the statistics of the sample in which the execution completed. In addition, statistics on statements that are still running are unavailable. Maintenance processes, such as vacuum and the checkpointer, update their statistics only on completion.
Resetting any Postgres Pro statistics may affect the accuracy of the next sample.
Exclusive locks on relations conflict with calculation of the relation size. If the take_sample() function is unable to acquire a lock within a short period of time (3 seconds), it fails and no sample is generated.
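The effect is similar to measuring a relation's size under a short lock_timeout: if another session holds a conflicting exclusive lock, the attempt fails quickly instead of blocking. A hedged sketch of that pattern (this illustrates the behavior, not the internals of take_sample(); some_table is a hypothetical name):

```sql
-- Fail fast instead of waiting behind an exclusive lock, similar to how
-- take_sample() gives up after about 3 seconds.
BEGIN;
SET LOCAL lock_timeout = '3s';
SELECT pg_relation_size('some_table');  -- hypothetical table name
COMMIT;
```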