Re: Convert MAX_SAOP_ARRAY_SIZE to new guc - Mailing list pgsql-hackers

From James Coleman
Subject Re: Convert MAX_SAOP_ARRAY_SIZE to new guc
Date
Msg-id CAAaqYe-ZzSOkvpECrB-CYFEf9Q95pguUMr9gd5NHNBQRQAb9vA@mail.gmail.com
In response to Convert MAX_SAOP_ARRAY_SIZE to new guc  (James Coleman <jtc331@gmail.com>)
List pgsql-hackers
Note: the original email from David went to my spam folder, and it also didn't show up on the archives (I assume it was caught by a spam filter there too?).

Thanks for taking this on!

> As far as you can tell, is the default correct at 100?

I'm not sure what a good way of measuring it would be (that is, what all the possible cases are). I did try very simple SELECT * FROM t WHERE i IN (...) style queries with IN lists of increasing size and was able to see increased planning time, but nothing staggering (going from 1000 to 2000 elements increased planning time from ~1.5ms to ~2.5ms, in an admittedly very unscientific test).
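For anyone who wants to repeat that ad-hoc timing check: EXPLAIN with the SUMMARY option reports planning time without executing the query. The table t and integer column i below are just stand-ins matching the test described above:

```sql
-- Hypothetical setup mirroring the ad-hoc test above.
CREATE TABLE t (i integer);

-- EXPLAIN (SUMMARY ON) prints "Planning Time" without running the query;
-- compare the reported time as the IN list grows from ~1000 to ~2000 items.
EXPLAIN (SUMMARY ON)
SELECT * FROM t WHERE i IN (1, 2, 3 /* ... up to N literals ... */);
```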

I think it's reasonable to leave the default at 100 for now. You could make an argument for increasing it since the limit currently affects whether scalar array ops can use partial indexes with "foo is not null" conditions, but I think that's better solved more holistically, as I've attempted to do in https://www.postgresql.org/message-id/CAAaqYe8yKSvzbyu8w-dThRs9aTFMwrFxn_BkTYeXgjqe3CbNjg%40mail.gmail.com
 
> What are some issues that might arise if it's set too low/too high?

Setting it too low would result in queries being planned unsatisfactorily (e.g., scalar array ops switching from partial index scans to seq scans), while setting it too high could significantly increase planning time.
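To illustrate the "too low" case: the planner can prove a partial index's IS NOT NULL predicate from a scalar array op clause only while the array stays within this size limit. A minimal sketch (table and index names here are hypothetical):

```sql
CREATE TABLE foo (bar integer);
CREATE INDEX foo_bar_partial ON foo (bar) WHERE bar IS NOT NULL;

-- While the IN list is at or under the limit, the planner can prove
-- "bar IS NOT NULL" from the IN clause and consider the partial index.
SELECT * FROM foo WHERE bar IN (1, 2, 3);

-- Once the list exceeds the limit, predicate proving gives up, the
-- partial index is no longer usable, and a seq scan becomes likely.
```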
