On 3/18/21 9:06 PM, Euler Taveira wrote:
> On Thu, Mar 18, 2021, at 4:20 PM, Tomas Vondra wrote:
>> I think a better way to test this would be to use a tuple lock:
> I predicted such issues with this test.
>
>> setup:
>>
>> create table t (a int unique);
>>
>> session 1:
>>
>> begin;
>> insert into t values (1);
>> ... keep open ...
>>
>> session 2:
>>
>> begin;
>> set work_mem = '64kB';
>> insert into t select i from generate_series(1,10000) s(i);
>> ... should block ...
>>
>> Then, once the second session starts waiting on the tuple, kill the
>> backend. We might as well test that there actually is a temp file first,
>> and then test that it disappeared.
> Your suggestion works for me. Maybe you could use fewer rows in the session 2
> query. I experimented with 1k rows and it generated a temporary file.
>
OK. Can you prepare a patch with the proposed test approach?
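The checks from a third (superuser) session might look something like this — just a sketch, assuming the temp files land in the default tablespace ('base/pgsql_tmp') and that the blocked backend is the only one with a Lock wait; the exact wait event names may differ:

```sql
-- session 3 (superuser): confirm session 2 is blocked waiting on a lock
SELECT pid, wait_event_type, wait_event
  FROM pg_stat_activity
 WHERE wait_event_type = 'Lock';

-- confirm a temp file exists (missing_ok = true, in case the
-- directory has not been created yet)
SELECT count(*) FROM pg_ls_dir('base/pgsql_tmp', true, false);

-- kill the blocked backend
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE wait_event_type = 'Lock';

-- the temp file should now be gone
SELECT count(*) FROM pg_ls_dir('base/pgsql_tmp', true, false);
```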
FWIW I can reproduce this on a 32-bit ARM system (rpi4), where 500 rows
simply does not use a temp file, while 1000 rows works fine. On x86_64
the temp file is created even with 500 rows. So there clearly is some
platform dependency - not sure if it's due to 32/64 bits, alignment or
something else. In any case, 500 rows seems to be right at the
threshold.
We need to do both: stop relying on timing and increase the number of
rows, so that we consistently get temp files.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company