
From: Tom Lane
Subject: Re: max_expr_depth
Msg-id: 12330.992925244@sss.pgh.pa.us
In response to: Re: max_expr_depth (Joseph Shraibman <jks@selectacast.net>)
List: pgsql-general

Joseph Shraibman <jks@selectacast.net> writes:
> Do you really think I should do 1000 updates in a transaction instead of
> an IN with 1000 items?  I can do my buffer flush any way I want but I'd
> have to think the overhead of making 1000 calls to the backend would
> more than overwhelm the cost of the big OR statement (especially if the
> server and client aren't on the same machine).

If your choices are 1000 separate updates or a 1000-way IN, then maybe
the IN will be faster, but it's likely not as fast as it could be.
The best plan you can hope for from the IN is one indexscan pass per
IN item.  You'd get the same sort of plan from the 1000 updates, but
it'd cost 1000 iterations of the parser and planner, so the updates
likely will come out behind.
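
For concreteness, the IN form under discussion would look something
like this (table and column names invented for illustration; the list
would run on to 1000 values):

    UPDATE events SET flushed = true
     WHERE id IN (101, 102, 103);  -- ... and so on, up to 1000 keys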

The real issue is whether you could get a better plan (merge or hash
join, say) from a join to a temp table.  Not sure about that --- unless
you go to the trouble of vacuuming the temp table, the planner won't
know much about it and is likely to pick an unhelpful plan anyway.
So maybe you should stick with what you have.  You're likely to run into
trouble if you try to scale it to ~ 100000 IN values; that max_expr_depth
check does exist for a reason.  But at ~ 1000 values it doesn't sound
too awful.
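
Concretely, that temp-table approach would look something like this
(again with invented names):

    CREATE TEMP TABLE flush_ids (id integer);
    INSERT INTO flush_ids VALUES (101);  -- repeat per key, or use COPY
    INSERT INTO flush_ids VALUES (102);
    VACUUM ANALYZE flush_ids;  -- else the planner knows nothing about it
    UPDATE events SET flushed = true
      FROM flush_ids
     WHERE events.id = flush_ids.id;  -- join form: merge/hash-joinable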

            regards, tom lane
