•   PostgreSQL   •   By Pavel Luzanov

PostgreSQL 19: part 4 or CommitFest 2026-01

Continuing the series of PostgreSQL 19 CommitFest reviews, today we’re covering the January 2026 CommitFest.

The highlights from previous CommitFests are available here: 2025-07, 2025-09, 2025-11.

Partitioning: merging and splitting partitions

commit: f2e4cc42795, 4b3d173629f

The first attempt to implement commands for merging and splitting partitions in a partitioned table was made in PostgreSQL 17, but to no avail: the patches were reverted shortly before release. This is the second attempt, though with significant limitations.

Consider a table for flight bookings that’s divided into partitions by month. This one has three partitions for the last three months of last year:

SELECT tableoid::regclass AS partition, count(*)
FROM bookings_range
GROUP BY ALL;
 partition | count  
-----------+--------
 p2025_10  | 434287
 p2025_11  | 410680
 p2025_12  | 411025
(3 rows)

If last year’s data is rarely accessed, you can merge these partitions into one:

ALTER TABLE bookings_range
    MERGE PARTITIONS (
        p2025_10,
        p2025_11,
        p2025_12
    )
    INTO p2025;

SELECT tableoid::regclass AS partition, count(*)
FROM bookings_range
GROUP BY ALL;
 partition |  count  
-----------+---------
 p2025     | 1255992
(1 row)

You can revert to monthly partitioning with the reverse command:

ALTER TABLE bookings_range
    SPLIT PARTITION p2025
    INTO (
        PARTITION p2025_10 FOR VALUES FROM ('2025-10-01') TO ('2025-11-01'),
        PARTITION p2025_11 FOR VALUES FROM ('2025-11-01') TO ('2025-12-01'),
        PARTITION p2025_12 FOR VALUES FROM ('2025-12-01') TO ('2026-01-01')
    );

SELECT tableoid::regclass AS partition, count(*)
FROM bookings_range
GROUP BY ALL;
 partition | count  
-----------+--------
 p2025_10  | 434287
 p2025_11  | 410680
 p2025_12  | 411025
(3 rows)

Both commands acquire an exclusive lock on the parent table for the entire duration of execution. Therefore, they should not be run on systems under heavy load. However, they can be useful in certain situations. Moreover, having these commands provides a solid foundation for future optimizations, both in terms of reducing lock levels and enabling parallel execution.

pg_dump[all]/pg_restore: dumping and restoring extended statistics

commit: d756fa1019f, 0e80f3f88de, c32fb29e979, 302879bd68d, efbebb4e858

Since PostgreSQL 18, pg_dump and pg_dumpall can dump basic statistics for tables and indexes, and psql and pg_restore can restore them. With these new patches, extended statistics are handled too, including expression statistics.

The following example creates an extended statistics object with three kinds of statistics for a table:

CREATE TABLE test AS SELECT 1 AS id, 'a' AS descr;

CREATE STATISTICS test_stats (ndistinct, dependencies, mcv)
ON id, descr
FROM test;
ANALYZE test;

pg_dump[all] dumps extended statistics in the form of calls to the new function pg_restore_extended_stats, which psql and pg_restore then execute. You can see the calls in the pg_dump output:

$ pg_dump --statistics-only | grep -A 12 pg_restore_extended_stats
SELECT * FROM pg_catalog.pg_restore_extended_stats(
    'version', '190000'::integer,
    'schemaname', 'public',
    'relname', 'test',
    'statistics_schemaname', 'public',
    'statistics_name', 'test_stats',
    'inherited', 'f'::boolean,
    'n_distinct', '[{"attributes": [1, 2], "ndistinct": 1}]'::pg_ndistinct,
    'dependencies', '[{"attributes": [1], "dependency": 2, "degree": 1.000000}, {"attributes": [2], "dependency": 1, "degree": 1.000000}]'::pg_dependencies,
    'most_common_vals', '{{1,a}}'::text[],
    'most_common_freqs', '{1}'::double precision[],
    'most_common_base_freqs', '{1}'::double precision[]
);

file_fdw: skipping initial rows

commit: 26cb14aea12

The article about the July PostgreSQL 19 CommitFest mentioned that, when loading data into a table with the COPY command, you can specify how many initial rows to skip before the actual data begins. This now also works in file_fdw.

Logical replication: enabling and disabling WAL logical decoding without server restart

commit: 67c20979ce7

Logical replication requires WAL to include information needed for logical decoding. The default wal_level setting (replica) doesn’t include this information. Changing WAL level requires a server restart.

This patch makes the system determine the WAL level dynamically based on whether logical replication slots exist. When the first slot is created, the effective WAL level automatically increases to logical. Conversely, when the last slot is removed, the effective level drops back to replica. You can check the new read-only parameter effective_wal_level to see the current effective WAL level.

\dconfig+ *wal_level
                     List of configuration parameters
      Parameter      |  Value  | Type |  Context   |  Access privileges   
---------------------+---------+------+------------+----------------------
 effective_wal_level | replica | enum | internal   | 
 wal_level           | replica | enum | postmaster | postgres=sA/postgres+
                     |         |      |            | alice=sA/postgres
(2 rows)

Creating a logical replication slot causes the effective WAL level to increase:

SELECT pg_create_logical_replication_slot('slot', 'pgoutput');
 pg_create_logical_replication_slot 
------------------------------------
 (slot,2/4BB9C340)
(1 row)
\dconfig *wal_level
 List of configuration parameters
      Parameter      |  Value  
---------------------+---------
 effective_wal_level | logical
 wal_level           | replica
(2 rows)

Drop the only slot, and the next checkpoint will reduce the effective WAL level:

SELECT pg_drop_replication_slot('slot');
CHECKPOINT;

\dconfig *wal_level
 List of configuration parameters
      Parameter      |  Value  
---------------------+---------
 effective_wal_level | replica
 wal_level           | replica
(2 rows)

Monitoring logical replication slot synchronization delays

commit: 76b78721ca, e68b6adad96, 5db6a344abc

When the expected logical replication slot synchronization doesn’t occur, there should be a way to see what’s causing the delay. To address this, the pg_replication_slots view now includes a new column slotsync_skip_reason.

The pg_stat_replication_slots view now has two new columns: slotsync_skip_count and slotsync_last_skip, which show how many times slot synchronization was skipped and when it last occurred.
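On a standby with slot synchronization configured, the new columns from both views can be checked together with a query along these lines (a sketch; it assumes the synchronized slots are failover slots):

SELECT s.slot_name,
       s.slotsync_skip_reason,
       st.slotsync_skip_count,
       st.slotsync_last_skip
FROM pg_replication_slots s
     JOIN pg_stat_replication_slots st USING (slot_name)
WHERE s.failover;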

pg_available_extensions shows extension installation directories

commit: f3c9e341cdf

The pg_available_extensions and pg_available_extension_versions views now include a location column that displays the directory where each extension is installed.

SELECT name, location
FROM pg_available_extensions
LIMIT 5;
       name       | location 
------------------+----------
 pg_surgery       | $system
 dict_int         | $system
 plpython3u       | $system
 seg              | $system
 ltree_plpython3u | $system
(5 rows)

The $system value represents the system’s default directory.
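A value other than $system can appear when extensions are installed into additional directories listed in the extension_control_path parameter introduced in PostgreSQL 18. A hypothetical configuration sketch (the directory name is made up):

extension_control_path = '$system:/opt/pg_extensions/share/extension'

Extensions whose control files are found in the added directory will show that directory in the location column.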

New function pg_get_multixact_stats: multixact usage statistics

commit: 97b101776ce

Let’s lock the same row in KEY SHARE mode in two sessions:

1=# BEGIN;
1=*# SELECT * FROM bookings WHERE book_ref = '2EW1SQ' FOR KEY SHARE;
 book_ref |           book_date           | total_amount 
----------+-------------------------------+--------------
 2EW1SQ   | 2025-09-01 03:00:12.557744+03 |      8125.00
(1 row)
2=# BEGIN;
2=*# SELECT * FROM bookings WHERE book_ref = '2EW1SQ' FOR KEY SHARE;
 book_ref |           book_date           | total_amount 
----------+-------------------------------+--------------
 2EW1SQ   | 2025-09-01 03:00:12.557744+03 |      8125.00
(1 row)

These two sessions are using a multitransaction to hold a shared lock on the row. The function pg_get_multixact_stats provides aggregated statistics on multitransaction usage:

SELECT *
FROM  pg_get_multixact_stats()
\gx
-[ RECORD 1 ]----+--
num_mxids        | 1
num_members      | 2
members_size     | 0
oldest_multixact | 1

This includes:

  • the total number of current multitransactions in the system
  • the total number of member transactions within these multitransactions
  • the amount of space (in bytes) used to store member information in the pg_multixact/members directory
  • the oldest multitransaction ID currently in use

Improvements to vacuum and analyze progress monitoring

commit: 0d789520619, ab40db3852d

The view pg_stat_progress_vacuum now includes two new columns:

  • mode with possible values: normal, aggressive, and failsafe
  • started_by with values: manual, autovacuum, and autovacuum_wraparound

The started_by column has also been added to the pg_stat_progress_analyze view. There it can have two values: manual and autovacuum.
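While a vacuum is running, the new columns can be checked with a simple query, for example (run it in the database being vacuumed so that relid resolves):

SELECT pid, datname, relid::regclass AS relation, phase, mode, started_by
FROM pg_stat_progress_vacuum;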

Vacuum: memory usage information

commit: 736f754eed0

The VACUUM command with the verbose option now displays a new line (memory usage) showing how much memory in total was needed to store dead tuple IDs and how many times (resets) the ID list exceeded the allocated memory limit (maintenance_work_mem).

VACUUM(verbose) bookings;
INFO:  vacuuming "demo.bookings.bookings"
…
memory usage: dead item storage 0.02 MB accumulated across 0 resets (limit 64.00 MB each)
…
VACUUM

This same line will appear in the server log if autovacuum logging is enabled.

Notably, starting with PostgreSQL 17, the vacuum process stores dead tuple IDs in a compressed radix tree, as opposed to a regular array in the older versions. This wonderful optimization made getting a resets value greater than 0 extremely difficult – even when testing for it deliberately.

vacuumdb --dry-run

commit: d107176d27c

Following the pattern of some server tools, vacuumdb now has a --dry-run option. This option shows what vacuum and analyze commands the tool would send to the server.

Let’s create a table in the public schema and see what vacuumdb wants to do to it:

$ psql -d postgres -c 'CREATE TABLE public.test (id int)'
$ vacuumdb -d postgres -n public --dry-run
vacuumdb: Executing in dry-run mode.
No commands will be sent to the server.
vacuumdb: vacuuming database "postgres"
VACUUM (SKIP_DATABASE_STATS) public.test;
VACUUM (ONLY_DATABASE_STATS);

jsonb_agg optimization

commit: 0986e95161c, b61aa76e458, 005a2907dc3

The function jsonb_agg has become significantly faster, particularly when working with integer and numeric types. A simple run time comparison between PostgreSQL 18 and 19:

18=# \timing
18=# SELECT jsonb_agg(g.x) FROM generate_series(1.0,1000000) AS g(x)\g /dev/null
Time: 677,667 ms

In PostgreSQL 19:

19=# SELECT jsonb_agg(g.x) FROM generate_series(1.0,1000000) AS g(x)\g /dev/null
Time: 374,138 ms

This improvement is specific to jsonb_agg; the regular json_agg function already performs at roughly the same speed even without this optimization.

LISTEN/NOTIFY optimization

commit: 282b1cde9de

The LISTEN/NOTIFY mechanism works well when multiple receivers monitor notifications from a single channel. However, performance degrades when different receivers wait for notifications on different channels: the notifying process attempts to deliver notifications to all receivers on all channels, rather than only to those subscribed to the specific channel.

In PostgreSQL 19, the LISTEN/NOTIFY mechanism maintains a hash table of channels and receivers, resulting in a significant performance boost for notification delivery.
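The scenario that benefits is many sessions, each listening on its own channel. A minimal illustration (channel names are arbitrary):

-- session 1
LISTEN orders;

-- session 2
LISTEN invoices;

-- session 3: with the hash table, the notification is delivered
-- only to the session listening on the orders channel
NOTIFY orders, 'new order';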

ICU: character conversion function optimization

commit: c4ff35f1044

Character case conversion functions using ICU collation have been optimized for UTF8-encoded databases.

18=# SELECT count(upper(passenger_name COLLATE "ru-x-icu"))
FROM tickets;
  count   
----------
 10836563
(1 row)

Time: 3107,008 ms (00:03,107)

In PostgreSQL 19, the same query runs significantly faster:

19=# SELECT count(upper(passenger_name COLLATE "ru-x-icu"))
FROM tickets;
  count   
----------
 10836563
(1 row)

Time: 1815,985 ms (00:01,816)

The parameter standard_conforming_strings can no longer be disabled

commit: 45762084545

The standard_conforming_strings parameter can no longer be turned off. The related escape_string_warning parameter has been removed.

Behavior in PostgreSQL 18:

SET standard_conforming_strings = off;
SET escape_string_warning = off;
SELECT 'A\nB';
 ?column? 
----------
 A       +
 B
(1 row)

In other words, in a single-quoted string the \ character is treated as an escape character.

This behavior will no longer exist in PostgreSQL 19:

SET standard_conforming_strings = off;
ERROR:  non-standard string literals are not supported
SET escape_string_warning = off;
ERROR:  unrecognized configuration parameter "escape_string_warning"
SELECT 'A\nB';
 ?column? 
----------
 A\nB
(1 row)

To use special characters, you need to prefix the string literal with E:

SELECT E'A\nB';
 ?column? 
----------
 A       +
 B
(1 row)

We could have easily overlooked this patch if it weren’t for 1C Company. Disabling standard_conforming_strings is strongly recommended for their system, which is popular in Russia and beyond. Developers working with 1C on PostgreSQL-based DBMSs will likely need to test their string handling with these changes in mind when upgrading.

 

That’s all for now. Looking forward to the fifth CommitFest in March.

