E.2. Postgres Pro Enterprise 16.4.1
Release Date: 2024-09-09
E.2.1. Overview
This release is based on PostgreSQL 16.4 and Postgres Pro Enterprise 16.3.2. All changes inherited from PostgreSQL 16.4 are listed in PostgreSQL 16.4 Release Notes. As compared with Postgres Pro Enterprise 16.3.2, this version also provides the following changes:
Enhanced performance of segment search by implementing a new search strategy, allowing faster detection of the last segment.
Reduced the number of unneeded replanning attempts by adding a backend memory consumption trigger, whose threshold is defined by the replan_memory_limit configuration parameter, and by changing the replanning behavior triggered by the number of processed node tuples.
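A minimal sketch of setting this trigger via ALTER SYSTEM; the value, its memory-unit syntax, and whether the parameter takes effect on reload are assumptions shown for illustration only:

    -- Illustrative only: cap backend memory growth that triggers replanning.
    -- The memory-unit syntax and reload behavior are assumptions.
    ALTER SYSTEM SET replan_memory_limit = '256MB';
    SELECT pg_reload_conf();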
Implemented the interaction of the PASSWORD_GRACE_TIME profile parameter with the VALID UNTIL role attribute. Now if both of them are set, a warning about password expiration is displayed (an illustrative configuration sketch is shown below).
Prevented potential authentication delays due to locking by not updating a role's last login time if the role's profile USER_INACTIVE_TIME is set to unlimited (see Section 54.40 for details).
Optimized the pruning logic, which is now delayed until a page is nearly full rather than relying solely on the fill factor. This reduces the frequency of pruning during UPDATE operations, leading to better performance in tables with frequent updates.
Fixed an issue with statistics handling in autonomous transactions. Previously, statistics changes were only saved if both the autonomous transaction and the parent transaction were committed successfully. This issue was usually harmless, and no reported failures have been traced to it.
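For the PASSWORD_GRACE_TIME and VALID UNTIL interaction described above, a minimal sketch; the CREATE PROFILE ... LIMIT syntax, the ALTER ROLE ... PROFILE clause, and the values are assumptions for illustration, not verbatim from this release:

    -- Assumed profile syntax: allow a 3-day grace period after password expiration.
    CREATE PROFILE short_grace LIMIT PASSWORD_GRACE_TIME 3;
    ALTER ROLE app_user PROFILE short_grace;          -- assumed clause
    -- Standard role attribute: the password expires at the given timestamp.
    ALTER ROLE app_user VALID UNTIL '2025-01-01';
    -- With both set, a login near or past the expiration date now
    -- produces a password expiration warning.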
Fixed an issue with nested loop parameters that forced Memoize to constantly purge the cache. This bugfix speeds up query execution.
Fixed issues related to processing of data structures in CFS by pg_rewind. Previously, pg_rewind did not fully support CFS, which could result in data corruption.
Fixed a segmentation fault that could occur when a connection to the built-in connection pooler was reset before a new session was created in the backend.
Fixed an issue with freezing 64-bit multitransaction IDs, which could manifest itself in PANIC-level errors like “xid XXX does not fit into valid range for base YYY” during autovacuum.
Fixed an issue related to suboptimal handling of pd_prune_xid. This issue did not result in any significant operation problems but caused unnecessary page pruning, which might have produced extra WAL records.
Fixed an issue that could manifest itself in errors like “invalid FSM request size”. The code was adjusted to reflect the changes in heap page structure, removing reliance on a constant related to maximum free heap page space where it is no longer applicable.
Fixed a bug that caused the optimizer to ignore columns from query conditions. Previously, when partially using a composite index, the number of rows could be overestimated, which led to creation of an incorrect plan. The bug occurred due to a malfunction of the multi-column statistics elements.
Fixed a bug in ANALYZE that could occur because it was impossible to display the pg_statistic system catalog. For the fix to take effect, if your database has indexes with INCLUDE columns, it is recommended to run ANALYZE again for these columns after upgrading Postgres Pro.
Added support for ALT 11.
Upgraded the PostgreSQL ODBC driver to version 16.00.0005.
Improved the built-in high-availability feature to provide the following optimizations and bug fixes:
Upgraded biha to version 1.3.
Optimized the automatic rewind logic. Now if the biha.autorewind configuration parameter is set to false on a node and the cluster timelines diverge, this node stops accepting WAL records after it goes into the NODE_ERROR state. Based on the new logic, to execute queries on the node, remove biha from shared_preload_libraries on this node and/or run the manual rewind. The rewind results can now be checked in the rewind_state field of the biha.state file.
Optimized behavior of the synchronous referee node in the referee_with_wal mode, which now depends on the synchronous_commit value.
Fixed an issue that could cause the leader node to be accidentally demoted. This happened because all follower nodes that were executing queries crashed due to a conflict between a query and the recovery process. Now the extension throws a warning instead of crashing.
Fixed a segmentation fault that could occur in the control channel while trying to remove a node from the cluster.
Fixed memory leaks in bihactl.
Upgraded citus to version 12.1.5.1; the extension can now be used together with the enable_group_by_reordering configuration parameter enabled.
Upgraded dbms_lob to version 1.2, which now supports reading and writing blocks up to 1GB, an increase from the previous 32KB limit.
Added the hypopg extension, which provides support for hypothetical indexes in Postgres Pro.
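As a usage illustration (the table and column names are hypothetical), a hypothetical index can be tested with EXPLAIN without building a real index:

    CREATE EXTENSION hypopg;
    -- Register a hypothetical index; no data is written and no lock is taken.
    SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)');
    -- The planner can now consider the hypothetical index.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- Remove all hypothetical indexes when done.
    SELECT hypopg_reset();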
Upgraded the mchar extension to fix a bug that caused the mchar and mvarchar data types to ignore control characters during string comparison.
Implemented the ability to slow down transaction execution on the donor node in multimaster using the multimaster.tx_delay_on_slow_catchup configuration parameter (an illustrative setting is shown below). This is useful when a lagging node is catching up to the donor node and cannot apply changes as quickly.
Upgraded pg_filedump to version 17.0, which provides optimizations and bug fixes. In particular, contents of meta pages for GIN and SP-GiST indexes are now displayed correctly, and an issue with insufficient memory for encoding and decompression is resolved.
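For the multimaster.tx_delay_on_slow_catchup parameter mentioned above, a minimal sketch; the value, its assumed millisecond unit, and the reload behavior are illustrative assumptions:

    -- Illustrative only: slow down transactions on the donor node while a
    -- lagging node catches up; the delay is assumed to be in milliseconds.
    ALTER SYSTEM SET multimaster.tx_delay_on_slow_catchup = 10;
    SELECT pg_reload_conf();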
Upgraded pg_proaudit to provide the following optimizations and bug fixes:
Improved performance and added the pg_proaudit.max_rules_count parameter, which specifies the maximum number of rules allowed (an illustrative setting is shown below).
Corrected handling of database names containing uppercase characters by the pg_proaudit_set_rule function.
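For the pg_proaudit.max_rules_count parameter above, a minimal sketch; the value is illustrative, and whether a restart is required is an assumption:

    -- Illustrative only: allow at most 500 audit rules.
    ALTER SYSTEM SET pg_proaudit.max_rules_count = 500;
    -- A parameter that sizes shared structures typically requires a server
    -- restart; treat that as an assumption and check the extension documentation.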
Upgraded pg_probackup to version 2.8.3 Enterprise, which provides the following bug fixes:
Fixed backup validation for databases containing an OID larger than 10^9. Previously, the validation status could be displayed incorrectly in such cases.
Fixed a bug that could occur when pg_probackup was run as a user included in the postgres group while the database used CFS.
Upgraded the pgpro_rp extension to version 1.1, which now supports plan assignment groups. The database administrator can now create plan assignment groups for different roles to control resource prioritization across large numbers of database users.
Upgraded pgpro_sfile to version 1.2, which adds the sf_md5 function that calculates the MD5 hash of an sfile object.
Upgraded pgvector to version 0.7.4.
Fixed incorrect behavior of pg_wait_sampling when used with the extended query protocol.
Upgraded sr_plan to provide new features and optimizations. Notable changes are as follows:
Added the sr_plan.sandbox configuration parameter that allows testing and analyzing queries without affecting the node operation by reserving a separate area in shared memory for the nodes.
Added three configuration parameters, sr_plan.auto_capturing, sr_plan.max_captured_items, and sr_plan.max_consts_len, that allow you to configure query capturing.
Added the sr_captured_queries view that displays information about queries captured in sessions.
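A minimal sketch of the capturing workflow based on the parameters and view listed above; the value types and the reload behavior are assumptions:

    -- Illustrative only: enable automatic query capturing with assumed value types.
    ALTER SYSTEM SET sr_plan.auto_capturing = on;
    ALTER SYSTEM SET sr_plan.max_captured_items = 1000;
    ALTER SYSTEM SET sr_plan.max_consts_len = 256;
    SELECT pg_reload_conf();
    -- Inspect queries captured in sessions.
    SELECT * FROM sr_captured_queries;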
Added the sr_captured_clean function that removes all records from the sr_captured_queries view.
Renamed the sr_plan.max configuration parameter to sr_plan.fs_ctr_max.
Replaced queryid with sql_hash to reflect the new query identification logic.
Upgraded utl_http to provide support for PUT, UPLOAD, PATCH, HEAD, OPTIONS, DELETE, TRACE, as well as any custom HTTP methods.
E.2.2. Migration to Version 16.4.1
If you are upgrading from a Postgres Pro Enterprise release based on the same PostgreSQL major version, it is enough to install the new version into your current installation directory.
Important
If you are upgrading your built-in high-availability cluster from Postgres Pro Enterprise 16.3 or earlier to Postgres Pro Enterprise 16.4, take the following steps:
Set the nquorum and minnodes options to a value greater than the number of nodes in the cluster to avoid unexpected leader elections and to make the leader node change its state from LEADER_RW to LEADER_RO. Take this step using the biha.set_nquorum_and_minnodes function. After setting the values, wait until the follower nodes have the same number of WAL records as the leader node. You can check this in the pg_stat_replication view on the leader node: the replay_lag column will be NULL for all the follower nodes.
Stop the follower using the pg_ctl command.
Upgrade the follower node server.
Start the follower using the pg_ctl command.
Promote the upgraded follower node using the biha.set_leader function.
Upgrade the servers of the remaining followers and the old leader.
Promote the old leader node using the biha.set_leader function.
Set nquorum and minnodes back to the values that were used before starting the Postgres Pro Enterprise upgrade. Take this step using the biha.set_nquorum_and_minnodes function.
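A sketch of the sequence above in SQL form; the argument lists of biha.set_nquorum_and_minnodes and biha.set_leader, the node names, and the data directory path are assumptions shown for illustration only:

    -- On the leader: raise nquorum and minnodes above the node count (assumed
    -- two-argument form) so that no election starts and the leader turns LEADER_RO.
    SELECT biha.set_nquorum_and_minnodes(10, 10);

    -- On the leader: wait until every follower has replayed all WAL records.
    SELECT application_name, replay_lag FROM pg_stat_replication;
    -- replay_lag must be NULL for all follower nodes before continuing.

    -- On each follower, at the OS level (the path is illustrative):
    --   pg_ctl stop  -D /var/lib/pgpro/ent-16/data
    --   <upgrade the server packages>
    --   pg_ctl start -D /var/lib/pgpro/ent-16/data

    -- Promote an upgraded follower (assumed single-argument form taking a node name).
    SELECT biha.set_leader('node2');

    -- After all nodes are upgraded, restore the original quorum settings.
    SELECT biha.set_nquorum_and_minnodes(2, 2);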
Note that if a node with the Postgres Pro Enterprise 16.1 server goes into the NODE_ERROR state, other nodes may “see” its state as incorrect, for example, as a REFEREE. In this case, it is recommended to stop the node, upgrade its server, synchronize it using pg_rewind, and start it once again.
Important
When upgrading your high-availability cluster from Postgres Pro Enterprise versions 16.3.x or lower, first disable automatic failover if it was enabled and upgrade all the standby servers, then upgrade the primary server, promote a standby, and restart the former primary (possibly with pg_rewind).
If you take backups using pg_probackup and you have previously upgraded it to version 2.8.0 Enterprise or 2.8.1 Enterprise, make sure to upgrade it to version 2.8.2 Enterprise or higher and retake a full backup after upgrade, since backups taken using those versions might be corrupted. If you suspect that your backups taken with versions 2.8.0 or 2.8.1 may be corrupted, you can validate them using version 2.8.2.
If you are upgrading your built-in high-availability cluster from Postgres Pro Enterprise 16.1 or 16.2 to Postgres Pro Enterprise 16.3, take the following steps:
Stop the follower using the pg_ctl command.
Upgrade the follower node server.
Start the follower using the pg_ctl command.
Promote the upgraded follower node using the biha.set_leader function.
Upgrade the servers of the remaining followers and the old leader.
Promote the old leader node using the biha.set_leader function.
Note that if a node with the Postgres Pro Enterprise 16.1 server goes into the NODE_ERROR state, other nodes may “see” its state as incorrect, for example, as a REFEREE. In this case, it is recommended to stop the node, upgrade its server, synchronize it using pg_rewind, and start it once again.
To migrate from PostgreSQL, as well as Postgres Pro Standard or Postgres Pro Enterprise based on a previous PostgreSQL major version, see the migration instructions for version 16.