PostgreSQL 19: part 2 or CommitFest 2025-09
We continue our series of articles reviewing changes in PostgreSQL 19. This time we'll look at what emerged from the September 2025 CommitFest.
The highlights from the first CommitFest, held in July, are available here: 2025-07.
- GROUP BY ALL
- Window functions: NULL value handling
- Event triggers in PL/Python
- More precise error message for incorrect routine parameter names
- random: random date/time within a specified range
- base64url format for encode and decode functions
- New debug_print_raw_parse parameter
- The log_lock_waits parameter is now enabled by default
- pg_stat_progress_basebackup: backup type
- vacuumdb: collecting statistics on partitioned tables
- Buffer cache: using clock-sweep algorithm to find free buffers
- Fake table aliases in queries
GROUP BY ALL
commit: ef38a4d9756
This will make life easier for those who don’t want to relist all the SELECT expressions in the GROUP BY clause. The ALL keyword automatically includes all SELECT expressions that don’t use aggregate functions.
SELECT to_char(actual_departure, 'YYYY'),
count(*)
FROM flights
GROUP BY ALL
ORDER BY 1;

 to_char | count 
---------+-------
 2025    | 16477
 2026    | 42457
         | 10776
(3 rows)

If you want to add a new expression to the grouping, simply include it in the SELECT list – no need to modify the GROUP BY clause:
SELECT to_char(actual_departure, 'YYYY'),
status,
count(*)
FROM flights
GROUP BY ALL
ORDER BY 1;

 to_char |  status   | count 
---------+-----------+-------
 2025    | Arrived   | 16477
 2026    | Arrived   | 42438
 2026    | Departed  |    19
         | Boarding  |     5
         | Scheduled | 10249
         | Delayed   |     9
         | Cancelled |   358
         | On Time   |   155
(8 rows)

Not everyone in the patch discussion agreed with this change, though. Some voiced strong opposition. Their main argument was that developers should maintain full control over query code rather than relying on system defaults. However, since the SQL standards committee has already approved the proposal to include GROUP BY ALL in the standard, further debate about its necessity is moot, and PostgreSQL 19 will ship with this feature.
See also
- Waiting for SQL:202y: GROUP BY ALL (Peter Eisentraut)
Window functions: NULL value handling
commit: 25a30bbd423, 2273fa32bce
Following the SQL standard, the lag, lead, first_value, last_value, and nth_value functions now support skipping null values. You can specify IGNORE NULLS or RESPECT NULLS (the default) when calling these window functions.
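Here is a minimal sketch of the new syntax with lag (made-up data; output not shown):

SELECT a, b,
       lag(b) RESPECT NULLS OVER w AS respect_nulls,
       lag(b) IGNORE NULLS OVER w AS ignore_nulls
FROM (VALUES ('a',1),('b',NULL),('c',2)) AS t(a,b)
WINDOW w AS (ORDER BY a);

With IGNORE NULLS, the row for 'c' should get 1 (the most recent non-null preceding value) rather than the NULL that RESPECT NULLS returns. The first_value example below shows the difference with actual output: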
SELECT a, b,
first_value(b) RESPECT NULLS OVER w AS respect_nulls,
first_value(b) IGNORE NULLS OVER w AS ignore_nulls
FROM (VALUES ('a',NULL),('b',1),('c',2)) AS t(a,b)
WINDOW w AS (ORDER BY a ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING);

 a | b | respect_nulls | ignore_nulls 
---+---+---------------+--------------
 a |   |               |            1
 b | 1 |               |            1
 c | 2 |               |            1
(3 rows)

Event triggers in PL/Python
commit: 53eff471c
Event trigger functions can now be written in PL/Python. Here’s an example of a trigger that fires when a DDL operation completes. The trigger simply displays information about the executed commands:
CREATE EXTENSION IF NOT EXISTS plpython3u;
CREATE OR REPLACE FUNCTION describe_ddl()
RETURNS event_trigger AS $$
for row in plpy.cursor("SELECT command_tag, object_identity FROM pg_event_trigger_ddl_commands()"):
    plpy.notice(
        "{}. name: {}".format(
            row['command_tag'],
            row['object_identity']
        )
    )
$$ LANGUAGE plpython3u;
CREATE EVENT TRIGGER after_ddl
ON ddl_command_end EXECUTE FUNCTION describe_ddl();

Let’s test it:
CREATE TABLE test(id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY);

NOTICE: CREATE SEQUENCE. name: public.test_id_seq
NOTICE: CREATE TABLE. name: public.test
NOTICE: CREATE INDEX. name: public.test_pkey
NOTICE: ALTER SEQUENCE. name: public.test_id_seq
CREATE TABLE

More precise error message for incorrect routine parameter names
commit: 83a56419457
We have a function with a named parameter.
CREATE FUNCTION f(a int) RETURNS int LANGUAGE SQL RETURN a;
SELECT f(a=>42);

 f  
----
 42
(1 row)

But if we specify an incorrect parameter name when calling the function, we'll get a generic error message:
18=# SELECT f(b=>42);

ERROR: function f(b => integer) does not exist
LINE 1: SELECT f(b=>42);
               ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.

This hint is misleading. A function with that name and parameter types does exist, but the parameter has a different name. The error message in PostgreSQL 19 is more precise:
19=# SELECT f(b=>42);

ERROR: function f(b => integer) does not exist
LINE 1: SELECT f(b=>42);
               ^
DETAIL: No function of that name accepts the given argument names.

random: random date/time within a specified range
commit: faf071b5538, 9c24111c4da
In PostgreSQL 17, the random function gained the ability to generate uniformly distributed random numbers within a specified range. Now the same capability is available for date/time types:
SELECT random(current_date, current_date + 100);

   random   
------------
 2026-03-23
(1 row)

base64url format for encode and decode functions
commit: e1d917182c1
The encode and decode functions now support the base64url format, which is safe to use in filenames and URLs.
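For comparison, here is a minimal sketch contrasting the two alphabets (assuming the usual RFC 4648 substitutions of '+' with '-' and '/' with '_'; padding handling is not shown):

-- The bytes \xfbff encode to '+/8=' in plain base64;
-- base64url replaces '+' and '/' with the URL-safe '-' and '_'.
SELECT encode('\xfbff'::bytea, 'base64')    AS base64,
       encode('\xfbff'::bytea, 'base64url') AS base64url;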
SELECT encode(decode('+-/_', 'base64url'), 'base64url');

 encode 
--------
 --__
(1 row)

New debug_print_raw_parse parameter
commit: 06473f5a344
The debug_print_* family of parameters controls debug output of the query trees produced at different stages of query processing. The new debug_print_raw_parse parameter outputs the raw parse tree, i.e. the parser output before semantic analysis.
\dconfig debug_print_*

List of configuration parameters
       Parameter       | Value 
-----------------------+-------
 debug_print_parse     | off
 debug_print_plan      | off
 debug_print_raw_parse | off
 debug_print_rewritten | off
(4 rows)

Let’s see what it shows for the COMMIT command.
SET debug_print_raw_parse = on;
SET client_min_messages = 'LOG';
COMMIT;

LOG: raw parse tree:
DETAIL: (
{RAWSTMT
:stmt
{TRANSACTIONSTMT
:kind 2
:options <>
:savepoint_name <>
:gid <>
:chain false
:location -1
}
:stmt_location 0
:stmt_len 6
}
)
WARNING: there is no transaction in progress
COMMIT

The log_lock_waits parameter is now enabled by default
commit: 2aac62be8cb
Long lock waits are rightfully considered a problem for system operation, so information about them should be logged by default. Starting with PostgreSQL 19, it is.
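log_lock_waits writes a message to the server log once a session has been waiting for a lock longer than deadlock_timeout (1 second by default). A minimal sketch of a situation that triggers it (the accounts table here is hypothetical):

-- Session 1: update a row and keep the transaction open
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 1;

-- Session 2: tries to update the same row and blocks; once deadlock_timeout
-- elapses, a message along the lines of "still waiting for ShareLock on
-- transaction ..." appears in the server log
UPDATE accounts SET balance = balance - 100 WHERE id = 1;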
\dconfig log_lock_waits

List of configuration parameters
   Parameter    | Value 
----------------+-------
 log_lock_waits | on
(1 row)

pg_stat_progress_basebackup: backup type
commit: deb674454c5
The new backup_type column in the pg_stat_progress_basebackup view indicates whether the backup being created is full or incremental:
\d pg_stat_progress_basebackup

          View "pg_catalog.pg_stat_progress_basebackup"
        Column        |  Type   | Collation | Nullable | Default 
----------------------+---------+-----------+----------+---------
 pid                  | integer |           |          | 
 phase                | text    |           |          | 
 backup_total         | bigint  |           |          | 
 backup_streamed      | bigint  |           |          | 
 tablespaces_total    | bigint  |           |          | 
 tablespaces_streamed | bigint  |           |          | 
 backup_type          | text    |           |          | 

vacuumdb: collecting statistics on partitioned tables
commit: 6429e5b771d
The vacuumdb utility with the --analyze-only and --analyze-in-stages options will now collect statistics on partitioned tables in addition to regular tables.
Previously, collecting statistics on partitioned tables required explicitly specifying them in the --table parameter, which wasn't always convenient, especially when gathering statistics after a server upgrade.
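In terms of server-side commands, this simply means that vacuumdb now also issues ANALYZE for partitioned tables. A minimal sketch of what previously had to be done by hand (flights_partitioned is a hypothetical partitioned table):

-- Statistics for the partitioned (parent) table itself; statistics for its
-- leaf partitions were already collected by vacuumdb --analyze-only.
ANALYZE flights_partitioned;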
Buffer cache: using clock-sweep algorithm to find free buffers
commit: 2c789405275
Previously, a free buffer list was maintained to help find available buffers in the buffer cache. This list is no longer used; instead, a clock-sweep algorithm that cycles through all buffers is now employed for the search. Eliminating the free buffer list is expected to simplify future patches related to NUMA support.
From a user perspective, this patch doesn’t change anything and might not warrant mention. However, several topics in our DBA2 course discuss the free buffer list (Buffer Cache, In-Memory Locks, currently available in Russian only). That’s why we've included this patch in the overview: it serves as a reminder that these topics will definitely need updating for the PostgreSQL 19 course.
Fake table aliases in queries
commit: 585e31fcb, 6f79024df34, 5a170e992a4
Historically, the planner has generated odd aliases for row sets when names aren't explicitly specified in queries. For example, "*VALUES*":
EXPLAIN SELECT * FROM (VALUES (2),(1));

                         QUERY PLAN                          
-------------------------------------------------------------
 Values Scan on "*VALUES*"  (cost=0.00..0.03 rows=2 width=4)
(1 row)

You can even use these aliases in queries:
SELECT * FROM (VALUES (2),(1) ORDER BY "*VALUES*".column1);

 column1 
---------
       1
       2
(2 rows)

Among these names were: old, new, "*SELECT*", ANY_subquery, "*MERGE*", "*RESULT*", excluded, unnamed_subquery, unnamed_join, "*GROUP*", "*TLOCRN*", "*TROCRN*", "*SELECT* %d", "*VALUES*", xmltable, json_table.
Replacing them with something more meaningful proved challenging due to potential backward compatibility issues. It turns out that even PostgreSQL’s regression tests contain numerous queries using such names, particularly "*VALUES*".
Ultimately, only a few names were abandoned: "*SELECT*", "*SELECT* %d", "ANY_subquery", "*TLOCRN*", "*TROCRN*". These are now replaced with names following the pattern "unnamed_subquery" or "unnamed_subquery_%d".
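As a hedged illustration of where the dropped names used to surface (exact plan output depends on the query and server version, so none is shown here):

-- The branches of a set operation carry internal aliases. Before this change
-- they were labeled "*SELECT* 1", "*SELECT* 2", and so on; any such alias that
-- shows up in EXPLAIN VERBOSE output should now follow the unnamed_subquery pattern.
EXPLAIN VERBOSE
SELECT book_ref FROM bookings
UNION
SELECT book_ref FROM tickets;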
That’s all for now. Up next: news from the November CommitFest.