
From Kevin Grittner
Subject Re: [HACKERS] Re: pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold <
Date
Msg-id CACjxUsNZPA=Oe=mkTLVcAjNJkqOv6g_=NKWFKroRZRAv9=zPPA@mail.gmail.com
In response to Re: [HACKERS] Re: pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold <  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-committers
On Wed, Jun 8, 2016 at 2:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:

> Do you have a test case that demonstrates a problem, or an explanation
> of why you think there is one?

With old_snapshot_threshold = '1min'
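
To reproduce, note that old_snapshot_threshold can only be set at server
start; a minimal setup sketch (the restart command is illustrative):

# postgresql.conf, then restart the server, e.g. pg_ctl restart -D $PGDATA
old_snapshot_threshold = '1min'

-- verify from a new session
show old_snapshot_threshold;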

-- connection 1
drop table if exists t1;
drop table if exists t2;
create table t1 (c1 int not null);
create table t2 (c1 int not null);  -- used by connection 2 below
insert into t1 select generate_series(1, 1000000);
begin transaction isolation level repeatable read;
select 1;

-- connection 2
insert into t2 values (1);
delete from t1 where c1 between 200000 and 299999;
delete from t1 where c1 = 1000000;
vacuum analyze verbose t1;
select pg_sleep_for('2min');
vacuum analyze verbose t1;  -- repeat if needed until dead rows vacuumed
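-- optional sanity check (a sketch using the cumulative stats view; the
-- counts may lag slightly): n_dead_tup should reach 0 once the dead rows
-- have actually been removed
select n_dead_tup from pg_stat_user_tables where relname = 't1';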

-- connection 1
select c1 from t1 where c1 = 100;
select c1 from t1 where c1 = 250000;

The problem occurs when an index is built while an old snapshot
exists that cannot see the effects of early pruning/vacuuming.  The
fix prevents use of such an index until all snapshots old enough to
be affected have been released.
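
For illustration only, a sketch of that index case on top of the test
above (the index name is made up, and the expected plan follows from the
description above rather than from the test itself):

-- connection 2, while connection 1 still holds its repeatable read snapshot
create index t1_c1_idx on t1 (c1);

-- connection 1
explain select c1 from t1 where c1 = 100;
-- if the fix behaves as described, this plan should avoid t1_c1_idx until
-- the old snapshot has been released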

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

