Re: Add client connection check during the execution of the query - Mailing list pgsql-hackers
From:           Konstantin Knizhnik
Subject:        Re: Add client connection check during the execution of the query
Date:
Msg-id:         79f5f12b-184f-dcdb-b22f-4f95b6a89f2d@postgrespro.ru
In response to: Re: Add client connection check during the execution of the query (Tatsuo Ishii <ishii@sraoss.co.jp>)
Responses:      Re: Add client connection check during the execution of the query
List:           pgsql-hackers
On 18.07.2019 6:19, Tatsuo Ishii wrote:
>
> I noticed that there are some confusions in the doc and code regarding
> what the new configuration parameter means. According to the doc:
>
> +        Default value is <literal>zero</literal> - it disables connection
> +        checks, so the backend will detect client disconnection only when trying
> +        to send a response to the query.
>
> But guc.c comment says:
>
> +            gettext_noop("Sets the time interval for checking connection with the client."),
> +            gettext_noop("A value of -1 disables this feature. Zero selects a suitable default value."),
>
> Probably the doc is correct since the actual code does so.

Yes, the value -1 is not even accepted due to the specified range.

> tps = 67715.993428 (including connections establishing)
> tps = 67717.251843 (excluding connections establishing)
>
> So the performance is about 5% down with the feature enabled in this
> case. For me, 5% down is not subtle. Probably we should warn this in
> the doc.

I also see some performance degradation, although it is not so large in my
case (I used pgbench with scale factor 10 and ran the same command as you).
In my case the difference (103k vs. 105k TPS) is less than 2%.

It seems to me that it is not necessary to enable the timeout for each
command:

@@ -4208,6 +4210,9 @@ PostgresMain(int argc, char *argv[],
              */
             CHECK_FOR_INTERRUPTS();
             DoingCommandRead = false;
+            if (client_connection_check_interval)
+                enable_timeout_after(SKIP_CLIENT_CHECK_TIMEOUT,
+                                     client_connection_check_interval);

             /*
              * (5) turn off the idle-in-transaction timeout

Unlike statement timeout or idle-in-transaction timeout, the precise start
of the measured interval is not important here, so it is possible to enable
the timeout once, before the main backend loop:

@@ -3981,6 +3983,10 @@ PostgresMain(int argc, char *argv[],
     if (!IsUnderPostmaster)
         PgStartTime = GetCurrentTimestamp();

+    if (client_connection_check_interval)
+        enable_timeout_after(SKIP_CLIENT_CHECK_TIMEOUT,
+                             client_connection_check_interval);
+
     /*
      * POSTGRES main processing loop begins here
      *

But actually I do not see much difference from moving the timeout-enabling
code. Moreover, the difference in performance hardly depends on the value of
the timeout. I set it to 100 seconds with a 30-second pgbench run (so the
timeout never fired and recv() was never called), and there was still a small
difference in performance.

After some experiments I found out that the mere presence of an active timer
results in a small performance penalty. You can easily check it: set, for
example, statement_timeout to the same large value (100 seconds) and you will
get the same small slowdown. So recv() itself is not the source of the
problem. Actually, any system call (except maybe fsync) performed less than
once per second cannot have a noticeable impact on performance. So I do not
think that recv(MSG_PEEK) can cause any performance problem on Windows or any
other platform.

But I wonder why we cannot just perform poll() with the POLLOUT flag and a
zero timeout. If the OS has detected a closed connection, it should return
POLLHUP, shouldn't it? I am not sure whether it is a more portable or more
efficient way; it just seems to me (from my point of view) a slightly more
natural way to check whether the connection is still alive.

-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
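
A minimal sketch of a recv(MSG_PEEK)-style liveness check of the kind discussed
above; this is only an illustration, not the patch code, and it assumes a POSIX
socket with MSG_DONTWAIT available (or a socket that is already non-blocking):

#include <errno.h>
#include <stdbool.h>
#include <sys/socket.h>

/* Return true if the client connection looks dead. */
static bool
client_is_gone_recv(int sock)
{
    char        c;
    ssize_t     r;

    /* Peek one byte without removing it from the receive queue. */
    r = recv(sock, &c, 1, MSG_PEEK | MSG_DONTWAIT);

    if (r == 0)
        return true;        /* orderly EOF: the client closed the connection */
    if (r < 0 && errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR)
        return true;        /* hard socket error */
    return false;           /* data pending, or nothing to report yet */
}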
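
And a minimal sketch of the poll()-with-zero-timeout alternative proposed in
the message, again only an illustration assuming a POSIX socket; note that how
promptly POLLHUP is reported after the peer closes can vary between platforms:

#include <poll.h>
#include <stdbool.h>

/* Return true if poll() reports that the connection is gone. */
static bool
client_is_gone_poll(int sock)
{
    struct pollfd pfd;

    pfd.fd = sock;
    pfd.events = POLLOUT;   /* ask for writability, as suggested above */
    pfd.revents = 0;

    /* Zero timeout: never blocks, only reports what the kernel already knows. */
    if (poll(&pfd, 1, 0) > 0 && (pfd.revents & (POLLHUP | POLLERR)))
        return true;
    return false;
}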