Re: drop/truncate table sucks for large values of shared buffers - Mailing list pgsql-hackers

From: Simon Riggs
Subject: Re: drop/truncate table sucks for large values of shared buffers
Date:
Msg-id: CANP8+jJUqocFBwj=j0E-d3+VtFiu64bCpKSFyYRd6b0Qu8+wCA@mail.gmail.com
In response to: Re: drop/truncate table sucks for large values of shared buffers (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: drop/truncate table sucks for large values of shared buffers (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On 28 June 2015 at 17:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Simon Riggs <simon@2ndQuadrant.com> writes:
> On 27 June 2015 at 15:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I don't like this too much because it will fail badly if the caller
>> is wrong about the maximum possible page number for the table, which
>> seems not exactly far-fetched.  (For instance, remember those kernel bugs
>> we've seen that cause lseek to lie about the EOF position?)

> If that is true, then our reliance on lseek elsewhere could also cause data
> loss, for example by failing to scan data during a seq scan.

The lseek point was a for-example, not the entire universe of possible
problem sources for this patch.  (Also, underestimating the EOF point in
a seqscan is normally not an issue since any rows in a just-added page
are by definition not visible to the scan's snapshot.  But I digress.)

> The consequences of failure of lseek in this case are nowhere near as dire,
> since by definition the data is being destroyed by the user.

I'm not sure what you consider "dire", but missing a dirty buffer
belonging to the to-be-destroyed table would result in the system being
permanently unable to checkpoint, because attempts to write out the buffer
to the no-longer-extant file would fail.  You could only get out of the
situation via a forced database crash (immediate shutdown), followed by
replaying all the WAL since the time of the problem.  In production
contexts that could be pretty dire.
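
To see why the checkpoint wedges, consider this stand-alone demonstration (not PostgreSQL code; the filename is invented). Once DROP has unlinked the file, every attempt to flush a missed dirty buffer fails the same way:

    /* Stand-alone demonstration (not PostgreSQL code) of the wedged-
     * checkpoint failure mode: once the file is unlinked, flushing a
     * missed dirty buffer fails with ENOENT on every attempt. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char page[8192] = {0};         /* stand-in for a missed dirty buffer */
        int  fd = open("relfilenode.example", O_CREAT | O_WRONLY, 0600);

        close(fd);
        unlink("relfilenode.example"); /* DROP TABLE removed the file */

        /* The checkpoint now tries to flush the buffer it failed to drop. */
        fd = open("relfilenode.example", O_WRONLY);
        if (fd < 0)
        {
            /* Every subsequent checkpoint hits this same error. */
            fprintf(stderr, "cannot flush page: %s\n", strerror(errno));
            return 1;
        }
        write(fd, page, sizeof(page));
        close(fd);
        return 0;
    }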

Yes, it's bad, but at least we notice that it has happened. We can also add code specifically to avoid this error at checkpoint time.

If lseek fails badly, then SeqScans would give *silent* data loss, which in my view is worse. Just-added pages aren't the only thing we might miss if lseek is badly wrong.

So I think this patch still has legs. We can verify that the cleanup was 100% complete when we do the buffer scan at the start of the checkpoint - that way we do just one scan of the buffer pool and move a time-consuming operation into a background process.
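
Sketching the shape of that combined scan (all names and structures below are made up for illustration; this is not the actual patch or bufmgr code):

    /* Illustrative sketch of piggybacking DROP/TRUNCATE cleanup on the
     * checkpoint's existing pass over shared buffers.  All names and
     * structures are invented for this example. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef unsigned int Oid;

    typedef struct BufSlot
    {
        Oid  relnode;              /* relation this page belongs to; 0 = invalid */
        bool dirty;
    } BufSlot;

    static bool
    relation_was_dropped(Oid relnode, const Oid *dropped, size_t ndropped)
    {
        for (size_t i = 0; i < ndropped; i++)
            if (dropped[i] == relnode)
                return true;
        return false;
    }

    /* One pass over the pool: discard buffers of dropped relations instead
     * of writing them (and instead of a separate full scan at DROP time),
     * and pick up the remaining dirty buffers for the checkpoint to flush. */
    static void
    checkpoint_buffer_scan(BufSlot *pool, size_t nbuffers,
                           const Oid *dropped, size_t ndropped)
    {
        for (size_t i = 0; i < nbuffers; i++)
        {
            if (relation_was_dropped(pool[i].relnode, dropped, ndropped))
            {
                pool[i].dirty = false; /* never write it out */
                pool[i].relnode = 0;   /* invalidate the slot */
            }
            else if (pool[i].dirty)
            {
                /* ... queue the buffer for write as checkpoints do now ... */
            }
        }
    }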

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
