Re: including backend ID in relpath of temp rels - updated patch - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: including backend ID in relpath of temp rels - updated patch
Msg-id: AANLkTikzBk3PPjiijHassy+V4vk+KysmwwqVWcy2S320@mail.gmail.com
In response to: Re: including backend ID in relpath of temp rels - updated patch (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Thu, Aug 12, 2010 at 12:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> Here's an updated patch, with the invalidation changes merged in and
>> hopefully-suitable adjustments elsewhere.
>
> I haven't tested this patch, but I read through it (and have I mentioned
> how unbelievably illegible -u format patches are?).

Sorry, I'll run it through filterdiff for you next time.  As you know,
I have the opposite preference, but since I was producing this
primarily for you to read....

I can clean up the rest of this stuff, but not this:

> Lastly, it bothers me that you've put in code to delete files belonging
> to temp rels during crash restart, without any code to clean up their
> catalog entries.  This will therefore lead to dangling pg_class
> references, with uncertain but probably not very nice consequences.
> I think that probably shouldn't go in until there's code to drop the
> catalog entries too.

I thought about this pretty carefully, and I don't believe that there
are any unpleasant consequences.  The code that assigns relfilenode
numbers is pretty careful to check that the newly assigned value is
unused BOTH in pg_class and in the directory where the file will be
created, so there should be no danger of a number getting used over
again while the catalog entries remain.  Also, the drop-object code
doesn't mind that the physical storage doesn't exist; it's perfectly
happy with that situation.  It is true that you will get an ERROR if
you try to SELECT from a table whose physical storage has been
removed, but that seems OK.
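
To make that reuse check concrete, here's a minimal, self-contained
sketch of the rule I'm relying on -- not a copy of the real code, just
its shape; oid_in_catalog() and the bare counter are stand-ins for the
actual catalog/OID machinery.  A candidate relfilenode is rejected if
either a pg_class row still references it or a file with that name
already exists on disk, so an orphaned temp file can't get its number
handed out again while the catalog entries linger:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>     /* access() */

typedef unsigned int Oid;

static Oid oid_counter = 16384;     /* stand-in for the OID generator */

static bool
oid_in_catalog(Oid oid)
{
    (void) oid;
    return false;               /* stand-in: real code probes pg_class */
}

static Oid
choose_new_relfilenode(const char *dir)
{
    char path[1024];

    for (;;)
    {
        Oid candidate = oid_counter++;

        /* Reject values still referenced by a pg_class row. */
        if (oid_in_catalog(candidate))
            continue;

        /* Reject values whose file already exists on disk, e.g. an
         * orphaned temp-rel file whose catalog entries still linger. */
        snprintf(path, sizeof(path), "%s/%u", dir, candidate);
        if (access(path, F_OK) == 0)
            continue;

        return candidate;
    }
}

int
main(void)
{
    printf("new relfilenode: %u\n", choose_new_relfilenode("base/12345"));
    return 0;
}

The point being: as long as both tests happen before a relfilenode is
handed out, a leftover pg_class row can't cause its number to be reused.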

We have two existing mechanisms for removing the catalog entries: when
a backend is first asked to access a temporary table, it does a DROP
SCHEMA ... CASCADE on any pre-existing temp schema; and if a table is in
wraparound trouble and the owning backend is no longer running,
autovacuum will drop it.  Improving on this seems difficult: if you
wanted to *guarantee* that the catalog entries were removed before we
started letting in connections, you'd need to fork a backend per
database and have each one iterate through all the temp schemas and
drop them.  Considering that the existing code seems to have been
pretty careful about how this stuff gets handled, I don't think it's
worth making the whole startup sequence slower for it.  What might be
worth considering is changing the autovacuum policy to eliminate the
wraparound check, and just have it drop temp table catalog entries for
any backend not currently running, period.
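
In case it helps to see what I mean, here's a rough sketch of that
relaxed policy -- hypothetical helper names, not the actual autovacuum
code: map each temp relation's pg_temp_N namespace back to backend N
and drop the relation whenever that backend isn't running, with no
wraparound test at all.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int Oid;

static int
temp_namespace_backend_id(Oid namespace_oid)
{
    /* Stand-in: the real lookup would parse a pg_temp_N nspname.
     * Here, namespace OIDs >= 100 map to backend (oid - 100). */
    return (namespace_oid >= 100) ? (int) (namespace_oid - 100) : -1;
}

static bool
backend_is_running(int backend_id)
{
    /* Stand-in for a check against the set of live backends. */
    return backend_id == 1;     /* pretend only backend 1 is alive */
}

static void
drop_relation(Oid relid)
{
    printf("dropping orphaned temp relation %u\n", relid);
}

static void
maybe_drop_orphan_temp_table(Oid relid, Oid relnamespace)
{
    int owner = temp_namespace_backend_id(relnamespace);

    if (owner < 0)
        return;                 /* not a temp table; leave it alone */

    /* Proposed policy: no wraparound check -- if the owning backend is
     * gone, its temp-table catalog entries are garbage and can go now. */
    if (!backend_is_running(owner))
        drop_relation(relid);
}

int
main(void)
{
    maybe_drop_orphan_temp_table(16400, 101);   /* backend 1 alive: kept */
    maybe_drop_orphan_temp_table(16401, 102);   /* backend 2 gone: dropped */
    return 0;
}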

Thoughts?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

