E.4. Postgres Pro Enterprise 16.3.1 #

Release Date: 2024-06-03

E.4.1. Overview #

This release is based on PostgreSQL 16.3 and Postgres Pro Enterprise 16.2.2. All changes inherited from PostgreSQL 16.3 are listed in PostgreSQL 16.3 Release Notes. As compared with Postgres Pro Enterprise 16.2.2, this version also provides the following changes:

  • Disabled the system startup timeout by setting the TimeoutSec parameter to 0 in systemd. Previously, big databases could fail to start within the specified timeout.
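A change like this typically corresponds to a systemd drop-in override such as the following sketch; the unit and file names are assumptions and may differ depending on packaging:

```ini
# /etc/systemd/system/postgrespro-ent-16.service.d/timeout.conf
# Hypothetical unit name; adjust to your installation.
[Service]
# 0 disables the startup timeout so large databases can finish recovery.
TimeoutSec=0
```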

  • Reduced the logging level for restore point creation at checkpoint timeout for inactive databases only, eliminating excessive log messages.

  • Improved performance of subsystems built on top of SLRU. The LWLock usage was reworked so that each SLRU bank is protected by a separate LWLock, which improves scalability. In addition, a new concurrency model for the SLRU implementation, based on atomic reads and writes, was proposed.

  • Introduced new features, optimizations and bug fixes for CFS. Notable changes are as follows:

    • Added the cfs_log_verbose configuration parameter that controls the CFS log level of defragmentation messages. It allows reducing the number of CFS messages in the log file.

    • Added the cfs_gc_lock_file configuration parameter that sets the path to the lock file to be used to ensure running only one garbage collection worker at a time for several Postgres Pro servers.

    • Modified the logic of truncating *.cfm files in CFS to improve performance. Previously, files were processed even when their size had not changed, which could be unexpectedly expensive. Now the file size is checked first, and the file is truncated only if it differs from the desired size, reducing unnecessary system calls.

    • Reduced the overhead of file size determination during query planning, particularly for queries involving numerous partitioned tables, which could be especially costly for CFS. Now file size is determined directly from the header without fully opening the file.

    • Fixed an issue in CFS that could cause the GC worker to fail with the warning CFS GC failed to read block 0 of file X at position 0 size 0: Success. It was caused by incorrect handling of the first megabyte of a data file that contained only zero pages.
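The two new CFS parameters above would be set in postgresql.conf; the values and value syntax below are illustrative assumptions, not documented defaults:

```
# Illustrative settings; consult the CFS documentation for actual defaults.
cfs_log_verbose = off                                   # suppress defragmentation messages (assumed syntax)
cfs_gc_lock_file = '/var/run/postgrespro/cfs_gc.lock'   # hypothetical path shared by several servers
```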

  • Added the xml_parse_huge configuration parameter that allows allocating up to 1 GB of memory for processing XML data.
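A minimal sketch of using the new parameter in a session, assuming it can be set with SET (the exact scope is not stated here) and with a hypothetical input file:

```sql
-- Allow the XML parser to allocate up to 1 GB for large documents.
SET xml_parse_huge = on;

-- Hypothetical file path for illustration.
SELECT xmlparse(document pg_read_file('/tmp/huge.xml'));
```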

  • Fixed a bug resulting in the error cache lookup failed for collation 128. Collation is now ignored for additional INCLUDE index columns.

  • Fixed two planner issues. First, an expression duplication issue caused underestimation of node selectivity and cost in some cases, which could lead to the choice of a suboptimal plan. Second, selectivity overestimation for an AppendOr node, caused by overestimated subnode selectivity, led to overestimation of the plan cost.

  • Fixed an issue where logical replication did not stream certain data to a subscriber after execution of the ALTER PUBLICATION pub ADD TABLE tab command. The root cause was insufficient interlocking between ALTER PUBLICATION and snapshot acquisition.

  • Fixed an issue with automatic aggressive vacuum being triggered too often. Now an aggressive vacuum scan occurs only for tables whose multixact age is strictly greater than autovacuum_multixact_freeze_max_age.

  • Added support for Astra Linux 1.8 and ended support for Astra Linux Orel 2.12 and Astra Linux Smolensk 1.6.

  • Added support for Red OS Murom 8.

  • Added support for Ubuntu 24.04.

  • Upgraded aqo to version 2.1, which added the aqo.wal_rw configuration parameter to enable physical replication and allow complete aqo data recovery after failure.
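The new aqo parameter would likewise be set in postgresql.conf; the value below is an assumption for illustration:

```
# Hypothetical setting; enables WAL logging of aqo data so it survives
# failures and is available on physical replicas.
aqo.wal_rw = on
```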

  • Upgraded biha to version 1.2, which provides new features and bug fixes. Notable changes are as follows:

    • Implemented the ability to add a referee node in biha, which allows setting up a 2+1 high-availability cluster with two regular nodes and one referee node. The referee participates only in elections and helps to avoid potential split-brain issues if your cluster contains only two nodes, i.e. the leader and one follower. Note that the referee does not contain any user databases and cannot be used for data querying.

    • Fixed an issue that could cause a cluster to fail when trying to call the biha.set_leader function to promote each follower manually.

    • Fixed a segmentation fault that could occur when a node started after a manual rewind using pg_rewind. Now the node can restore automatically after the rewind.

    • Fixed a bug related to removal of an alive node using biha.remove_node, which could cause leader re-elections after the removal. Now the node must be stopped before removal.

  • Upgraded citus to version 12.1.3.1.

  • Added the dbcopies_decoding module for updating 1C database copies. It is implemented as a logical replication plug-in and provided in postgrespro-ent-16-contrib.

  • Upgraded dbms_lob to version 1.1, which includes necessary adjustments to provide interoperability with the new pgpro_sfile extension.

  • Upgraded mamonsu to version 3.5.8, which provides optimizations and bugfixes. Notable changes are as follows:

    • Added support for the Zabbix 6.4 API: handling of deprecated parameters for authentication requests.

    • Removed caching of the pgsql.connections[max_connections] metric.

    • Updated the default log rotation rules.

    • Prepared for support of Python 3.12.

    • Changed metric names of the pg_locks plugin. Be aware that the changes could break custom user-defined triggers and processing functions that use the item.name parameter.

    • Fixed type mismatch for pgpro_stats and pg_wait_sampling.

    • Fixed privileges for the mamonsu role created by bootstrap.

  • Upgraded orafce to version 4.10.0.

  • Upgraded pg_probackup to version 2.8.0 Enterprise, which provides new features, optimizations and bug fixes. Notable changes are as follows:

    • Added a possibility to limit the rate of disk writes using the option --write-rate-limit=bitrate (Mbps, Gbps).

    • Decreased memory consumption by half on average when restoring long sequences of increments.

    • Added checksum validation for CFS files by checkdb.

    • Added a possibility to validate only a WAL archive.

    • Extended the use of the --dry-run option for all pg_probackup commands.

    • Made creation of a temporary slot during backups in the STREAM mode the default behavior unless it is specified otherwise.

    • Changed the default compression algorithm to zstd. If zstd is not supported by the system, lz4 has the next priority. The --compress option now sets the default values for --compress-level and --compress-algorithm.

    • Added a possibility to specify several hosts to connect to an S3 storage.

    • Implemented a new locking technique, which enables using the locks with S3 and NFS protocols.
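Putting a few of the new pg_probackup options together, a backup invocation might look like the following sketch; the backup path, instance name, and rate value are assumptions for illustration:

```shell
# Hypothetical paths and instance name. --compress now selects zstd (or lz4
# as a fallback), and STREAM-mode backups use a temporary slot by default.
pg_probackup backup \
    -B /mnt/backups \
    --instance main \
    -b FULL \
    --stream \
    --compress \
    --write-rate-limit=100Mbps
```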

  • Added the pgvector extension that provides vector similarity search for Postgres Pro.
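Standard pgvector usage applies; a minimal sketch with an illustrative table and data:

```sql
CREATE EXTENSION vector;

CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');

-- Nearest neighbor by Euclidean distance.
SELECT id FROM items ORDER BY embedding <-> '[2,3,4]' LIMIT 1;
```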

  • Added the pgpro_sfile module, which provides functionality similar to Oracle LOBs. It allows storing multiple large objects, called sfile objects. Both the maximum number of such objects and the maximum object size in bytes are limited to 2^63 - 1.

  • Upgraded pgpro_stats to version 1.7.1, which provides optimizations and bug fixes. Notable changes are as follows:

    • Added saving non-normalized plans to pgpro_stats for queries where previously no plans were saved.

    • Fixed an issue that hindered monitoring when the pgpro_stats_statements view contained many rows with the same values of plan and queryid but different values of planid. The issue was caused by an error in parsing a plan tree containing a T_Memoize node.

  • Fixed an error ERROR: query failed: ERROR: tablespace "XXXX" does not exist that could occur when the pg_repack command was trying to reorganize tables in a tablespace whose name started with a digit. The root cause of the issue was that pg_repack expected extra quotes.

  • Added the pljava module that brings Java stored procedures, triggers, and functions to the Postgres Pro backend.
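The usual PL/Java workflow applies; a brief sketch in which the jar URL, jar name, and class are hypothetical:

```sql
-- Install a jar and add it to the schema classpath (hypothetical names).
SELECT sqlj.install_jar('file:///opt/jars/hello.jar', 'hello_jar', true);
SELECT sqlj.set_classpath('public', 'hello_jar');

-- Expose a static Java method as a SQL function.
CREATE FUNCTION hello(name text) RETURNS text
  LANGUAGE java
  AS 'com.example.Hello.greet';
```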

  • Added the plpgsql_check extension that provides static code analysis for PL/pgSQL in Postgres Pro.
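A minimal plpgsql_check sketch; the checked function is an illustrative example containing a deliberate mistake:

```sql
CREATE EXTENSION plpgsql_check;

-- Hypothetical function referencing a nonexistent column.
CREATE FUNCTION f() RETURNS void LANGUAGE plpgsql AS $$
BEGIN
  PERFORM nonexistent_col FROM pg_class;
END $$;

-- Static analysis reports the bad reference without executing f().
SELECT * FROM plpgsql_check_function('f()');
```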

  • Upgraded sr_plan to provide new features, optimizations and bug fixes. Notable changes are as follows:

    • Added the sr_plan_hintset_update function, which allows replacing the generated hint set with a set of custom hints.

    • Added the sr_plan.max_local_cache_size configuration parameter, which allows setting the maximum size of local cache, in kB. Also, the default value of sr_plan.max_items has been changed to 100.

    • Restricted query registration process so that only one query can be registered per backend.

    • Improved the algorithm for identifying a frozen query.

    • Implemented storage of query plans as separate JSON files.

    • Implemented type handling to attempt casting constants in the query to match the types of frozen query parameters automatically. If type casting is not possible, the frozen plan is ignored.

    • Removed the query tree validation for hint-set plans, allowing the use of hint-set plans during table recreations, field additions, etc.
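The new sr_plan cache parameter would be set in postgresql.conf; the value below is illustrative, not a documented default:

```
# Illustrative value; limits the local sr_plan cache, in kB.
sr_plan.max_local_cache_size = 1024
# sr_plan.max_items now defaults to 100, per the notes above.
```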

E.4.2. Migration to Version 16.3.1 #

If you are upgrading from a Postgres Pro Enterprise release based on the same PostgreSQL major version, it is enough to install the new version into your current installation directory.

Important

If you are upgrading your built-in high-availability cluster from Postgres Pro Enterprise 16.1 or 16.2 to Postgres Pro Enterprise 16.3, take the following steps:

  1. Stop the follower using the pg_ctl command.

  2. Upgrade the follower node server.

  3. Start the follower using the pg_ctl command.

  4. Promote the upgraded follower node using the biha.set_leader function.

  5. Upgrade the servers of the remaining followers and the old leader.

  6. Promote the old leader node using the biha.set_leader function.

Note that if a node with the Postgres Pro Enterprise 16.1 server goes into the NODE_ERROR state, other nodes may see its state as incorrect, for example, as a REFEREE. In this case, it is recommended to stop the node, upgrade its server, synchronize it using pg_rewind, and start it once again.
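On each follower, the steps above might look like the following sketch; the data directory, upgrade command, and node name are assumptions for illustration, and the exact biha.set_leader argument form may differ:

```shell
# 1. Stop the follower (hypothetical data directory).
pg_ctl stop -D /var/lib/pgpro/ent-16/data

# 2. Upgrade the server packages (package manager and names vary by OS).

# 3. Start the follower again.
pg_ctl start -D /var/lib/pgpro/ent-16/data

# 4. Promote the upgraded follower; the node name is hypothetical.
psql -c "SELECT biha.set_leader('node2')"
```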

To migrate from PostgreSQL, as well as Postgres Pro Standard or Postgres Pro Enterprise based on a previous PostgreSQL major version, see the migration instructions for version 16.