Thread: Server closed the connection unexpectedly (memory leak)
My fuzzer finds a bug in Postgres, which causes the connection to be closed unexpectedly. This bug can still be reproduced even after applying the patch for https://www.postgresql.org/message-id/flat/b2bd02dff61af15e3526293e2771f874cf2a3be7.camel@cybertec.at

--- Steps to reproduce the bug ---
1. /usr/local/pgsql/bin/psql -d postgres -c "drop database redb";
2. /usr/local/pgsql/bin/psql -d postgres -c "create database redb";
3. /usr/local/pgsql/bin/psql -d redb -f pg_testdb55_bk.sql;
4. /usr/local/pgsql/bin/psql -d redb -f unexpected.sql

pg_testdb55_bk.sql and unexpected.sql are attached.

--- Expected behavior ---
No error is triggered.

--- Actual behavior ---
The test case causes the connection to be closed unexpectedly:

psql:unexpected.sql:695: server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
psql:unexpected.sql:695: error: connection to server was lost

--- Postgres version ---
GitHub commit: f5c446e3367527f9db1506d7c38d2f56e20950b6
Version: PostgreSQL 16beta1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit

--- Platform information ---
Platform: Ubuntu 20.04
Kernel: Linux 5.4.0-147-generic
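For convenience, the four reproduction steps can be wrapped in one script. This is a sketch, not part of the original report: the `PSQL` variable and the `if exists` guard (so the very first run does not error out on the missing database) are additions.

```shell
#!/bin/sh
# Hypothetical wrapper for the four reproduction steps above.
# PSQL defaults to the path used in the report; override it if needed.
set -e
PSQL="${PSQL:-/usr/local/pgsql/bin/psql}"

"$PSQL" -d postgres -c "drop database if exists redb"
"$PSQL" -d postgres -c "create database redb"
"$PSQL" -d redb -f pg_testdb55_bk.sql   # load the attached schema/data
"$PSQL" -d redb -f unexpected.sql       # triggers the connection loss
```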
Attachment
Sorry that the test case is a bit large. It is difficult for me to reduce it to a minimal one, because when I remove some parts, the bug easily disappears.
Best wishes,
Zuming Jiang
From: Zu-Ming Jiang [mailto:zuming.jiang@inf.ethz.ch]
Sent: Friday, June 23, 2023 at 6:06 PM
Subject: Server closed the connection unexpectedly (memory leak)
Zu-Ming Jiang <zuming.jiang@inf.ethz.ch> writes:
> My fuzzer finds a bug in Postgres, which makes the connection closed
> unexpectedly.

TBH, I think there's not much to be learned here, beyond "a ridiculously complicated query takes a ridiculous amount of memory to plan". The reason for the backend crash is presumably that the OOM killer decided to zap it. If you run the postmaster under a "ulimit -v" setting that's small enough to act before the OOM killer does, then you get an unexciting "out of memory" error.

I did find that if you mark cte_3 as MATERIALIZED, the resource consumption is a lot less --- but you get a plan that requires 41191 lines to EXPLAIN, so it's still way outside any bounds of reasonability. Perhaps there's room there to argue that we shouldn't flatten CTE subqueries that are "too big" ... but it's hard to decide how to measure "too big".

            regards, tom lane
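Tom's "ulimit -v" suggestion can be sketched as below. The binary path, data directory, log file, and the 4 GB figure are illustrative assumptions, not values from the thread:

```shell
# Restart the postmaster inside a subshell with a capped virtual address
# space, so the oversized planner allocation fails with a clean
# "out of memory" error instead of attracting the OOM killer.
(
  ulimit -v 4194304   # cap virtual memory at ~4 GB (value is in kB)
  /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile restart
)
```

Setting the limit in a subshell keeps it from affecting the interactive shell; the limit is inherited by the postmaster and every backend it forks.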
Hi,

On 2023-06-24 12:56:06 -0400, Tom Lane wrote:
> Zu-Ming Jiang <zuming.jiang@inf.ethz.ch> writes:
> > My fuzzer finds a bug in Postgres, which makes the connection closed
> > unexpectedly.
>
> TBH, I think there's not much to be learned here, beyond "a
> ridiculously complicated query takes a ridiculous amount of memory
> to plan". The reason for the backend crash is presumably that the
> OOM killer decided to zap it. If you run the postmaster under a
> "ulimit -v" setting that's small enough to act before the OOM killer
> does, then you get an unexciting "out of memory" error.

One small thing I find interesting is that it takes quite a while to process cancel requests - which primarily appears to be because there's no CFI anywhere in copyObject().

Greetings,

Andres Freund