Re: Pg_upgrade speed for many tables - Mailing list pgsql-hackers

From Alvaro Herrera
Subject Re: Pg_upgrade speed for many tables
Date
Msg-id 20121105213316.GF12444@alvh.no-ip.org
In response to Re: Pg_upgrade speed for many tables  (Bruce Momjian <bruce@momjian.us>)
Responses Re: Pg_upgrade speed for many tables  (Bruce Momjian <bruce@momjian.us>)
Re: Pg_upgrade speed for many tables  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
Bruce Momjian wrote:
> On Mon, Nov  5, 2012 at 04:14:47PM -0500, Robert Haas wrote:
> > On Mon, Nov 5, 2012 at 4:07 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> > > Or have options for pg_dump and pg_restore to insert "set
> > > synchronous_commit=off" into the SQL stream?
> >
> > It would be kind of neat if we had a command that would force all
> > previously-asynchronous commits to complete.  It seems likely that
> > very, very few people would care about intermediate pg_dump states, so
> > we could do the whole dump asynchronously and then do "FORCE ALL
> > COMMITS;" or whatever at the end.
>
> Actually, I had assumed that a session disconnection forced a WAL fsync
> flush, but now I doubt that.  Seems only server shutdown does that, or a
> checkpoint.  Would this work?
>
>     SET synchronous_commit=on;
>     CREATE TABLE dummy(x int);
>     DROP TABLE dummy;

AFAIR any transaction that modifies the catalogs is forced to commit
synchronously, regardless of the setting.  And a synchronous commit
means you get to wait for all previous transactions to be flushed as
well.  So simply creating a temp table ought to do the trick ...
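[Editor's note: the trick described above could be sketched as the
following session fragment, assuming (as recalled here, not verified
against the source tree) that a catalog-modifying commit is forced
synchronous and flushes all earlier WAL; the table name is illustrative.]

    -- Run the bulk restore with asynchronous commits for speed.
    SET synchronous_commit = off;

    -- ... restore SQL stream (CREATE TABLE, COPY, etc.) runs here ...

    -- At the end, force a WAL flush: per the recollection above, a
    -- catalog-modifying transaction commits synchronously regardless
    -- of synchronous_commit, and a synchronous commit waits for all
    -- earlier WAL records to be flushed as well.
    CREATE TEMP TABLE flush_dummy (x int);
    DROP TABLE flush_dummy;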

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


