Re: Fdw batch insert error out when set batch_size > 65535 - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: Fdw batch insert error out when set batch_size > 65535
Msg-id 1ccf7409-db3e-5b9b-149c-8f1bc7e34e8b@enterprisedb.com
In response to Re: Fdw batch insert error out when set batch_size > 65535  (Alvaro Herrera <alvherre@alvh.no-ip.org>)
List pgsql-hackers

On 6/13/21 2:40 AM, Alvaro Herrera wrote:
> On 2021-Jun-12, Tomas Vondra wrote:
> 
>> There's one caveat, though - for regular builds the slowdown is pretty
>> much eliminated. But with valgrind it's still considerably slower. For
>> postgres_fdw the "make check" used to take ~5 minutes for me, now it
>> takes >1h. And yes, this is entirely due to the new test case which is
>> generating / inserting 70k rows. So maybe the test case is not worth it
>> after all, and we should get rid of it.
> 
> Hmm, what if the table is made 1600 columns wide -- would inserting 41
> rows be sufficient to trigger the problem case?  If it does, maybe it
> would reduce the runtime for valgrind/cache-clobber animals enough that
> it's no longer a concern.
> 

Good idea. I gave that a try, creating a table with 1500 columns and
inserting 50 rows (so 75k parameters). See the attached patch.
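The limit being exercised here comes from the extended-query protocol, which encodes the number of bind parameters in a single Bind message as a 16-bit integer, so one batched INSERT can carry at most 65535 parameters (libpq exposes this as PQ_QUERY_PARAM_MAX_LIMIT). A minimal sketch of the arithmetic behind the test sizing and the batch-size clamping, with a hypothetical helper name (not the actual postgres_fdw code):

```python
# Max bind parameters per protocol message: the count is sent as an Int16.
PQ_QUERY_PARAM_MAX_LIMIT = 65535

def effective_batch_size(batch_size: int, ncolumns: int) -> int:
    """Clamp a requested batch size so rows * columns stays within the limit.

    Hypothetical helper illustrating the fix's arithmetic; postgres_fdw
    performs an equivalent adjustment when building the batched INSERT.
    """
    if ncolumns <= 0:
        return batch_size
    return min(batch_size, PQ_QUERY_PARAM_MAX_LIMIT // ncolumns)

# Alvaro's sizing: a 1600-column table needs only 41 rows to overflow.
assert 1600 * 41 == 65600          # 65600 > 65535, triggers the error
assert effective_batch_size(100, 1600) == 40   # 40 * 1600 = 64000, within limit

# The attached test case: 1500 columns * 50 rows = 75000 parameters.
assert 1500 * 50 == 75000
```

The wide-table trick works because the limit is on total parameters, not rows, so multiplying the column count divides the number of rows (and hence the test runtime) needed to cross it.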

While this cuts the runtime roughly in half (to ~30 minutes on my laptop),
that's probably not enough - it's still about 6x longer than it used to
take. All these timings are with valgrind.

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

