Re: max_expr_depth - Mailing list pgsql-general

From Joseph Shraibman
Subject Re: max_expr_depth
Msg-id 3B2EBF9F.CEC12521@selectacast.net
In response to max_expr_depth  (Joseph Shraibman <jks@selectacast.net>)
Responses Re: max_expr_depth  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Doug McNaught wrote:
>
> Joseph Shraibman <jks@selectacast.net> writes:
>
> > Doug McNaught wrote:
> > >
> > > Joseph Shraibman <jks@selectacast.net> writes:
> > >
> > > > Compared to 1000 updates that took between 25 and 47 seconds, an update
> > > > with 1000 items in the IN() took less than three seconds.
> > >
> > > Did you wrap the 1000 separate updates in a transaction?
> > >
> > > -Doug
> >
> > No, at a high level in my application I was calling the method to do the
> > update.  How would putting it in a transaction help?
>
> If you don't, every update is its own transaction, and Postgres will
> sync the disks (and wait for the sync to complete) after every one.
> Doing N updates in one transaction will only sync after the whole
> transaction is complete.  Trust me; it's *way* faster.
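
For concreteness, what Doug describes would look roughly like this from
JDBC. This is only a minimal sketch; the table and column names (items,
flushed) are made up for illustration, not from my application:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchedUpdates {
    // Run all the updates inside a single transaction so the backend
    // syncs to disk once at COMMIT instead of once per statement.
    static void updateAll(Connection conn, int[] ids) throws SQLException {
        conn.setAutoCommit(false);  // open one transaction for the whole batch
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE items SET flushed = true WHERE id = ?")) {
            for (int id : ids) {
                ps.setInt(1, id);
                ps.executeUpdate();  // still one round trip per row
            }
            conn.commit();           // the only disk sync happens here
        } catch (SQLException e) {
            conn.rollback();         // undo the whole batch on any failure
            throw e;
        } finally {
            conn.setAutoCommit(true);
        }
    }
}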

I thought WAL did away with most of the syncing.

Do you really think I should do 1000 updates in a transaction instead of
one UPDATE with 1000 items in the IN()?  I can do my buffer flush any way
I want, but I'd think the overhead of making 1000 calls to the backend
would more than overwhelm the cost of the big OR statement (especially if
the server and client aren't on the same machine).
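
Here is a minimal sketch of the single-statement version I mean, with the
same made-up names. Postgres rewrites the IN list into nested ORs, which
is what ran into max_expr_depth in the first place:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.StringJoiner;

public class InListUpdate {
    // Build one UPDATE ... WHERE id IN (...) covering every row, so the
    // whole batch is a single statement sent to the backend.
    static void updateAll(Connection conn, int[] ids) throws SQLException {
        StringJoiner in = new StringJoiner(", ", "(", ")");
        for (int id : ids) {
            in.add(Integer.toString(id));  // integer keys, so no quoting needed
        }
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("UPDATE items SET flushed = true WHERE id IN " + in);
        }
    }
}

That is one statement and one network round trip for the whole batch.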

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio.  http://www.targabot.com
