Re: postgres_fdw - should we tighten up batch_size, fetch_size options against non-numeric values? - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: postgres_fdw - should we tighten up batch_size, fetch_size options against non-numeric values?
Date:
Msg-id: 3079566.1621345762@sss.pgh.pa.us
In response to: Re: postgres_fdw - should we tighten up batch_size, fetch_size options against non-numeric values? (Fujii Masao <masao.fujii@oss.nttdata.com>)
Responses: Re: postgres_fdw - should we tighten up batch_size, fetch_size options against non-numeric values? (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
List: pgsql-hackers
Fujii Masao <masao.fujii@oss.nttdata.com> writes:
> On 2021/05/17 18:58, Bharath Rupireddy wrote:
>> It looks like values such as '123.456', '789.123', '100$%$#$#', and
>> '9,223,372,' are accepted and treated as valid integers for the
>> postgres_fdw options batch_size and fetch_size, whereas the
>> fdw_startup_cost and fdw_tuple_cost options throw an error for such
>> values. Attaching a patch to fix that.

> This looks like an improvement. But one issue is that restoring a
> dump taken by pg_dump from v13 may fail on v14 with this patch if
> the dump contains an invalid setting of fetch_size, e.g.,
> "fetch_size '123.456'". OTOH, since batch_size was added in v14,
> it has no such issue.
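
[For context, pg_dump emits the offending option as part of the CREATE
SERVER statement, roughly as follows; that statement is what would fail
to restore under a strict check. Server name illustrative:]

CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (fetch_size '123.456');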

Maybe better to just silently round to integer?  I think that's
what we generally do with integer GUCs these days, e.g.:
regression=# set work_mem = 102.9;
SET
regression=# show work_mem;
 work_mem 
----------
 103kB
(1 row)

I agree with throwing an error for non-numeric junk though.
Allowing that on the grounds of backwards compatibility
seems like too much of a stretch.
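
[Sketch of the behavior being proposed here, with fractional values
rounded and non-numeric junk rejected; the exact error wording is
hypothetical:]

regression=# ALTER SERVER loopback OPTIONS (SET fetch_size '123.456');
ALTER SERVER
regression=# ALTER SERVER loopback OPTIONS (SET fetch_size '100$%$#$#');
ERROR:  invalid value for integer option "fetch_size": 100$%$#$#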

            regards, tom lane