Thread: Protecting against unexpected zero-pages: proposal

Protecting against unexpected zero-pages: proposal

From
Gurjeet Singh
Date:
A customer of ours is quite bothered about finding zero pages in an index
after a system crash. The task now is to improve the diagnosability of such
an issue and be able to definitively point to the source of the zero pages.

The proposed solution below has been vetted in-house at EnterpriseDB, and I
am posting it here to see any possible problems we missed, and also to see
if the community would be interested in incorporating this capability.

Background:
-----------
SUSE Linux, ATCA board, 4 dual-core CPUs => 8 cores, 24 GB RAM, 140 GB disk,
PG 8.3.11. RAID-1 SAS, with SCSI info reporting that write-caching is
disabled.

The corrupted index's file contents, based on hexdump:

    It has a total of 525 pages (cluster block size is 8K, per pg_controldata).
    Blocks 0 to 278 look sane.
    Blocks 279 to 518 are full of zeroes.
    Blocks 519 to 522 look sane.
    Block 523 is filled with zeroes.
    Block 524 looks sane.

The tail ends of blocks 278 and 522 have some non-zero data, meaning that
those index pages have some valid 'Special space' contents. Also, the heads
of blocks 519 and 524 look sane. These two findings imply that the zeroing
happened at 8K page boundaries. This is a standard ext3 FS with a 4K block
size, so this raises the question of how we can ascertain that this was
indeed a hardware/FS malfunction. And if it was a hardware/FS problem, then
why didn't we see zeroes starting at a 1/2 K boundary (generally the disk's
sector size) or a 4K boundary (the default ext3 FS block size) that does not
coincide with an 8K page boundary?

The backup from before the crash does not have these zero pages.

Disk Page Validity Check Using Magic Number
===========================================

Requirement:
------------
We have encountered quite a few zero pages in an index after a machine
crash, causing this index to be unusable. Although REINDEX is an option, we
have no way of telling whether these zero pages were caused by the hardware,
the filesystem, or by Postgres. Code analysis shows that Postgres being the
culprit is a very low probability; similarly, since our hardware is
considered good quality, with hardware-level RAID-1 over 2 disks, it is
difficult to consider the hardware to be the problem. The ext3 filesystem is
also quite a time-tested piece of software, hence it becomes very difficult
to point fingers at any of these 3 components for this corruption.

Postgres is being deployed as a component of a carrier-grade platform, and
it is required to run unattended as much as possible. There is a High
Availability monitoring component that is tasked with performing switchover
to a standby node in the event of any problem with the primary node. This HA
component needs to perform regular checks on the health of all the other
components, including Postgres, and take corrective action.

With the zero pages comes the difficulty of ascertaining whether these are
legitimate zero pages (since Postgres considers zero pages valid, maybe left
over from a previous extend-file followed by a crash), or whether these zero
pages are the result of an FS/hardware failure.

We are required to definitively differentiate between zero pages from
Postgres vs. zero pages caused by hardware failure. Obviously this is not
possible by the very nature of the problem, so we explored a few ideas,
including per-block checksums (in-block or in a checksum fork), S.M.A.R.T.
monitoring of disk drives, PageInit() before smgrextend() in
ReadBuffer_common(), and an additional member in PageHeader for a magic
number.

Following is the approach which we think is least invasive and does not
threaten code breakage, yet provides definitive detection of
corruption/data loss outside Postgres with the least performance penalty.

Implementation:
---------------

.) The basic idea is to have a magic number in every PageHeader before it
   is written to disk, and to check for this magic number when performing
   page validity checks.

.) To avoid adding a new field to PageHeader, and any code breakage, we
   reuse an existing member of the structure.

.) We exploit the following facts and assumptions:
  -) Relations/files are extended 8 KB (BLCKSZ) at a time.
  -) Every I/O unit contains a PageHeader structure (table/index/fork
     files), which in turn contains pd_lsn as its first member.
  -) Every newly written block is considered to be zero-filled.
  -) PageIsNew() assumes that if pd_upper is 0 then the page is new.
  -) PageHeaderIsValid() allows zero-filled pages to be considered valid.
  -) Anyone wishing to use a new page has to do PageInit() on the page.
  -) PageInit() does a MemSet(0) on the whole page.
  -) XLogRecPtr = {x, 0} is considered invalid.
  -) XLogRecPtr = {x, ~((uint32) 0)} is not valid either (it would point at
     the last byte of an xlog file (not segment)); we'll use this as the
     magic number.

      ... The above is my assumption, since it is not mentioned anywhere in
      the code. The XLogFileSize calculation seems to support this
      assumption.

      ... If this assumption doesn't hold good, then the previously
      mentioned invalid pointer, {x, 0} (with x > 0), can also be used to
      implement this magic number.

  -) There's only one implementation of the Storage Manager, i.e. md.c.
  -) smgr_extend() -> mdextend() is the only place where a relation is
     extended.
  -) Writing beyond EOF in a file causes the intermediate space to become a
     hole, and any read from such a hole returns zero-filled pages.
  -) Anybody trying to extend a file makes sure that there's no concurrent
     extension going on from somewhere else.
      ... This is ensured either by the implicit nature of the calling
      code, or by calling LockRelationForExtension().

.) In mdextend(), if the buffer being written is zero-filled, then we write
   the magic number in that page's pd_lsn.
   ... This check can be optimized to just check sizeof(pd_lsn) worth of
   the buffer.

.) In mdextend(), if the buffer is being written beyond the current EOF,
   then we forcibly write the intermediate blocks too, and write the magic
   number in each of those.
   ... This needs an _mdnblocks() call, plus FileSeek(SEEK_END) and
   FileWrite() calls for every block in the hole.

   ... Creation of holes is assumed to be a very limited corner case, hence
   this performance hit is acceptable in these rare cases. Tests are being
   planned using a real application, to check how often this occurs.

.) PageHeaderIsValid() needs to be modified to treat
   magic-number-followed-by-zeroes as a valid page (rather than a
   completely zero page).
   ... If a page is completely filled with zeroes, this confirms that
   either the filesystem or the disk storage zeroed the page, since
   Postgres never wrote zero pages to disk.

.) PageInit() and PageIsNew() require no change.

.) XLByteLT(), XLByteLE() and XLByteEQ() may be changed to contain
   AssertMacro( !MagicNumber(a) && !MagicNumber(b) ).

.) I haven't analyzed the effects of this change on the recovery code, but
   I have a feeling that we might not need to change anything there.

.) We can create a contrib module (a standalone binary or a loadable
   module) that goes through each disk page, checks it for being
   zero-filled, and raises an alarm if it finds any.

Thoughts welcome.
--
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device
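[To make the intended classification concrete, here is a minimal standalone sketch. The struct, constants, and function names are simplified stand-ins invented for illustration, not actual PostgreSQL code: mdextend() would stamp the magic LSN into an otherwise-zero page, and a validity check could then distinguish a Postgres-written new page from an externally zeroed one.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLCKSZ 8192

/* Simplified stand-in for PageHeaderData's first member. */
typedef struct
{
    uint32_t    xlogid;         /* log file number */
    uint32_t    xrecoff;        /* byte offset within file */
} XLogRecPtr;

/* {x, 0xFFFFFFFF} can never be a real LSN, per the proposal. */
#define MAGIC_XRECOFF (~(uint32_t) 0)

/* What mdextend() would do to an all-zero page before writing it out. */
static void
stamp_magic(char *page)
{
    XLogRecPtr  lsn = {0, MAGIC_XRECOFF};

    memcpy(page, &lsn, sizeof(lsn));
}

static int
is_all_zero(const char *buf, size_t off, size_t len)
{
    for (size_t i = off; i < off + len; i++)
        if (buf[i] != 0)
            return 0;
    return 1;
}

typedef enum
{
    PAGE_NORMAL,                /* ordinary initialized page */
    PAGE_NEW_FROM_PG,           /* zero page stamped by mdextend() */
    PAGE_EXTERNALLY_ZEROED      /* Postgres never writes this pattern */
} PageClass;

/* The distinction PageHeaderIsValid() would effectively be able to make. */
static PageClass
classify(const char *page)
{
    XLogRecPtr  lsn;

    memcpy(&lsn, page, sizeof(lsn));
    if (is_all_zero(page, 0, BLCKSZ))
        return PAGE_EXTERNALLY_ZEROED;
    if (lsn.xrecoff == MAGIC_XRECOFF &&
        is_all_zero(page, sizeof(lsn), BLCKSZ - sizeof(lsn)))
        return PAGE_NEW_FROM_PG;
    return PAGE_NORMAL;
}
```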

Re: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Gurjeet Singh <singh.gurjeet@gmail.com> writes:
> .) The basic idea is to have a magic number in every PageHeader before it is
> written to disk, and check for this magic number when performing page
> validity
> checks.

Um ... and exactly how does that differ from the existing behavior?

> .) To avoid adding a new field to PageHeader, and any code breakage, we
> reuse
>    an existing member of the structure.

The amount of fragility introduced by the assumptions you have to make
for this seems to me to be vastly riskier than the risk you are trying
to respond to.
        regards, tom lane


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Sun, Nov 7, 2010 at 4:23 AM, Gurjeet Singh <singh.gurjeet@gmail.com> wrote:
> I understand that it is a pretty low-level change, but IMHO the change is
> minimal and is being applied in well understood places. All the assumptions
> listed have been effective for quite a while, and I don't see these
> assumptions being affected in the near future. Most crucial assumptions we
> have to work with are, that XLogPtr{n, 0xFFFFFFFF} will never be used, and
> that mdextend() is the only place that extends a relation (until we
> implement an md.c sibling, say flash.c or tape.c; the last change to md.c
> regarding mdextend() was in January 2007).

I think the assumption that isn't tested here is what happens if the
server crashes. The logic may work fine as long as nothing goes wrong
but if something does it has to be fool-proof.

I think having zero-filled blocks at the end of the file if it has
been extended but hasn't been fsynced is an expected failure mode of a
number of filesystems. The log replay can't assume seeing such a block
is a problem since that may be precisely the result of the crash that
caused the replay. And if you disable checking for this during WAL
replay then you've lost your main chance to actually detect the
problem.

Another issue -- though I think a manageable one -- is that I expect
we'll want to be using posix_fallocate() sometime soon. That will
allow efficient guaranteed pre-allocated space with better contiguous
layout than currently. But ext4 can only pretend to give zero-filled
blocks, not any random bitpattern we request. I can see this being an
optional feature that is just not compatible with using
posix_fallocate() though.

It does seem like this is kind of part and parcel of adding checksums
to blocks. It's arguably kind of silly to add checksums to blocks but
have a commonly produced bitpattern in corruption cases go
undetected.

-- 
greg


Re: Protecting against unexpected zero-pages: proposal

From
Gurjeet Singh
Date:
On Sat, Nov 6, 2010 at 11:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Gurjeet Singh <singh.gurjeet@gmail.com> writes:
> .) The basic idea is to have a magic number in every PageHeader before it is
> written to disk, and check for this magic number when performing page
> validity
> checks.

Um ... and exactly how does that differ from the existing behavior?

Right now a zero-filled page is considered valid, and is treated as a new page; see PageHeaderIsValid() -> /* Check all-zeroes case */, and PageIsNew(). This means that looking at a zero-filled page on disk (say, after a crash) does not give us any clue whether it was indeed left zeroed by Postgres, or whether the FS/storage failed to do their job.

With the proposed change, a valid page (a page actually written by Postgres) will either have a sensible LSN or the magic LSN; the LSN will never be zero. OTOH, if we encounter a zero-filled page ( => LSN = {0,0} ), it clearly implicates elements outside Postgres in making that page zero.
 
The amount of fragility introduced by the assumptions you have to make
for this seems to me to be vastly riskier than the risk you are trying
to respond to.


I understand that it is a pretty low-level change, but IMHO the change is minimal and is being applied in well-understood places. All the assumptions listed have held for quite a while, and I don't see them being affected in the near future. The most crucial assumptions we have to work with are that XLogRecPtr{n, 0xFFFFFFFF} will never be used, and that mdextend() is the only place that extends a relation (until we implement an md.c sibling, say flash.c or tape.c; the last change to md.c regarding mdextend() was in January 2007).

Only mdextend() and PageHeaderIsValid() need to know about this change in behaviour; all the other APIs work and behave the same as they do now.

This change would increase the diagnosability of zero-page issues, and help users point fingers at the right places.

Regards,
--
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device

Re: Protecting against unexpected zero-pages: proposal

From
Aidan Van Dyk
Date:
On Sun, Nov 7, 2010 at 1:04 AM, Greg Stark <gsstark@mit.edu> wrote:
> It does seem like this is kind of part and parcel of adding checksums
> to blocks. It's arguably kind of silly to add checksums to blocks but
> have a commonly produced bitpattern in corruption cases go
> undetected.

Getting back to the checksum debate (and this seems like a
semi-version of the checksum debate), now that we have forks, could we
easily add block checksumming to a fork?  It would mean writing to 2
files but that shouldn't be a problem, because until the checkpoint is
done (and thus both writes), the full-page-write in WAL is going to
take precedence on recovery.
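[As a rough illustration of the fork idea (a toy sketch, not smgr code; the hash choice and function names are invented for the example), the checksum fork is conceptually just an array of per-page checksums maintained alongside every data-page write:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLCKSZ 8192

/* Toy per-page hash (FNV-1a); a real implementation would likely use a CRC. */
static uint32_t
page_checksum(const char *page)
{
    uint32_t    h = 2166136261u;

    for (int i = 0; i < BLCKSZ; i++)
    {
        h ^= (unsigned char) page[i];
        h *= 16777619u;
    }
    return h;
}

/* The "checksum fork": one uint32 per data page, updated as part of every
 * data-page write (the second file write mentioned above). */
static void
fork_update(uint32_t *fork, unsigned blkno, const char *page)
{
    fork[blkno] = page_checksum(page);
}

/* On read-back: does the page match its stored checksum? */
static int
fork_verify(const uint32_t *fork, unsigned blkno, const char *page)
{
    return fork[blkno] == page_checksum(page);
}
```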

a.


--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.


Re: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Aidan Van Dyk <aidan@highrise.ca> writes:
> Getting back to the checksum debate (and this seems like a
> semi-version of the checksum debate), now that we have forks, could we
> easily add block checksumming to a fork?  It would mean writing to 2
> files but that shouldn't be a problem, because until the checkpoint is
> done (and thus both writes), the full-page-write in WAL is going to
> take precedence on recovery.

Doesn't seem like a terribly good design: damage to a checksum page
would mean that O(1000) data pages are now thought to be bad.

More generally, this re-opens the question of whether data in secondary
forks is authoritative or just hints.  Currently, we treat it as just
hints, for both FSM and VM, and thus sidestep the problem of
guaranteeing its correctness.  To use a secondary fork for checksums,
you'd need to guarantee correctness of writes to it.  This is the same
problem that index-only scans are hung up on, ie making the VM reliable.
I forget whether Heikki had a credible design sketch for making that
happen, but in any case it didn't look easy.
        regards, tom lane


Re: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Gurjeet Singh <singh.gurjeet@gmail.com> writes:
> On Sat, Nov 6, 2010 at 11:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Um ... and exactly how does that differ from the existing behavior?

> Right now a zero-filled page is considered valid, and is treated as a new
> page; see PageHeaderIsValid() -> /* Check all-zeroes case */, and
> PageIsNew(). This means that looking at a zero-filled page on disk (say,
> after a crash) does not give us any clue whether it was indeed left zeroed
> by Postgres, or whether the FS/storage failed to do their job.

I think this is really a non-problem.  You said earlier that the
underlying filesystem uses 4K blocks.  Filesystem misfeasance would
therefore presumably affect 4K at a time.  If you see that both halves
of an 8K block are zero, it's far more likely that Postgres left it that
way than that the filesystem messed up.  Of course, if only one half of
an 8K page went to zeroes, you know the filesystem or disk did it.
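[The 4K-half reasoning above can be sketched as a simple check (illustrative only; the names are invented and FS_BLCKSZ is hard-coded to the ext3 block size from the report):]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define BLCKSZ    8192
#define FS_BLCKSZ 4096          /* ext3 block size from the report */

static int
region_is_zero(const char *p, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (p[i] != 0)
            return 0;
    return 1;
}

/* Postgres only ever writes whole 8K pages, so a page with exactly one
 * zeroed 4K half implicates the filesystem or the disk.  A fully zeroed
 * page remains ambiguous under this test. */
static int
half_zeroed(const char *page)
{
    int         lo = region_is_zero(page, FS_BLCKSZ);
    int         hi = region_is_zero(page + FS_BLCKSZ, FS_BLCKSZ);

    return lo != hi;
}
```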

There are also crosschecks that you can apply: if it's a heap page, are
there any index pages with pointers to it?  If it's an index page, are
there downlink or sibling links to it from elsewhere in the index?
A page that Postgres left as zeroes would not have any references to it.

IMO there are a lot of methods that can separate filesystem misfeasance
from Postgres errors, probably with greater reliability than this hack.
I would also suggest that you don't really need to prove conclusively
that any particular instance is one or the other --- a pattern across
multiple instances will tell you what you want to know.

> This change would increase the diagnosability of zero-page issues, and help
> the users point fingers at right places.

[ shrug... ] If there were substantial user clamor for diagnosing
zero-page issues, I might be for this.  As is, I think it's a
non-problem.  What's more, if I did believe that this was a safe and
reliable technique, I'd be unhappy about the opportunity cost of
reserving it for zero-page testing rather than other purposes.
        regards, tom lane


Re: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
I wrote:
> Aidan Van Dyk <aidan@highrise.ca> writes:
>> Getting back to the checksum debate (and this seems like a
>> semi-version of the checksum debate), now that we have forks, could we
>> easily add block checksumming to a fork?

> More generally, this re-opens the question of whether data in secondary
> forks is authoritative or just hints.  Currently, we treat it as just
> hints, for both FSM and VM, and thus sidestep the problem of
> guaranteeing its correctness.  To use a secondary fork for checksums,
> you'd need to guarantee correctness of writes to it.

... but wait a minute.  What if we treated the checksum as a hint ---
namely, on checksum failure, we just log a warning rather than doing
anything drastic?  A warning is probably all you want to happen anyway.

A corrupted page of checksums would then show up as warnings for most or
all of a range of data pages, and it'd be pretty obvious (if the data
seemed OK) where the failure had been.
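[The "run of warnings points at the checksum page" heuristic might look like this (a sketch with invented names; the majority threshold is arbitrary):]

```c
#include <assert.h>

#define CHECKSUMS_PER_FORK_PAGE 2048    /* 8K fork page / 4-byte checksum */

/* Given warn-only verification results (failed[b] != 0 when data block b
 * failed its checksum), decide whether a whole checksum-fork page looks
 * corrupt: if a majority of the data blocks it covers failed, suspect the
 * fork page itself rather than the data. */
static int
suspect_fork_page(const int *failed, unsigned nblocks, unsigned fork_pageno)
{
    unsigned    start = fork_pageno * CHECKSUMS_PER_FORK_PAGE;
    unsigned    end = start + CHECKSUMS_PER_FORK_PAGE;
    unsigned    n = 0,
                nfail = 0;

    for (unsigned b = start; b < end && b < nblocks; b++)
    {
        n++;
        if (failed[b])
            nfail++;
    }
    return n > 0 && nfail * 2 > n;
}
```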

So maybe Aidan's got a good idea here.  It would sure be a lot easier
to shoehorn checksum checking in as an optional feature if the checksums
were kept someplace else.
        regards, tom lane


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Mon, Nov 8, 2010 at 5:00 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> So maybe Aidan's got a good idea here.  It would sure be a lot easier
> to shoehorn checksum checking in as an optional feature if the checksums
> were kept someplace else.

Would it? I thought the only problem was the hint bits being set
behind the checksummer's back. That'll still happen even if it's
written to a different place.



--
greg


Re: Protecting against unexpected zero-pages: proposal

From
Aidan Van Dyk
Date:
On Mon, Nov 8, 2010 at 12:53 PM, Greg Stark <gsstark@mit.edu> wrote:
> On Mon, Nov 8, 2010 at 5:00 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> So maybe Aidan's got a good idea here.  It would sure be a lot easier
>> to shoehorn checksum checking in as an optional feature if the checksums
>> were kept someplace else.
>
> Would it? I thought the only problem was the hint bits being set
> behind the checksummer's back. That'll still happen even if it's
> written to a different place.

The problem that putting checksums in a different place solves is the
page layout (binary upgrade) problem.  You're still going to need to
"buffer" the page as you calculate the checksum and write it out.
Buffering that page is absolutely necessary no matter where you put the
checksum, unless you've got an exclusive lock that blocks even hint
updates on the page.

But if we can start using forks to put "other data", that means that
keeping the page layouts is easier, and thus binary upgrades are much
more feasible.

At least, that was my thought WRT checksums being out-of-page.

a.

--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Mon, Nov 8, 2010 at 5:59 PM, Aidan Van Dyk <aidan@highrise.ca> wrote:
> The problem that putting checksums in a different place solves is the
> page layout (binary upgrade) problem.  You're still going to need to
> "buffer" the page as you calculate the checksum and write it out.
> Buffering that page is absolutely necessary no matter where you put the
> checksum, unless you've got an exclusive lock that blocks even hint
> updates on the page.

But buffering the page only means you've got some consistent view of
the page. It doesn't mean the checksum will actually match the data in
the page that gets written out. So when you read it back in the
checksum may be invalid.

I wonder if we could get by with some global counter on the page
which you increment when you set a hint bit. That way when you read
the page back in you could compare the counter on the page and the
counter for the checksum, and if the checksum counter is behind, ignore
the checksum? It would be nice to do better but I'm not sure we can.
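A rough sketch of that counter scheme, with made-up structures purely to show the comparison (this is not the actual page header layout):

```c
#include <stdint.h>

/* Hypothetical layout: each page carries a hint-update counter, and the
 * checksum record (kept elsewhere) remembers the counter value at the
 * time the checksum was computed. */
typedef struct
{
    uint32_t hint_counter;      /* bumped on every hint-bit set */
} PageHeaderSketch;

typedef struct
{
    uint32_t checksum;
    uint32_t hint_counter;      /* counter value when checksum was taken */
} ChecksumRecord;

/* Setting a hint bit changes page contents but deliberately leaves the
 * stored checksum stale; only the counter records that this happened. */
static void set_hint_bit(PageHeaderSketch *page)
{
    page->hint_counter++;
}

/* On read: trust the checksum only if no hint bits were set since it was
 * computed; otherwise ignore it rather than raise a false alarm. */
static int checksum_is_trustworthy(const PageHeaderSketch *page,
                                   const ChecksumRecord *rec)
{
    return rec->hint_counter == page->hint_counter;
}
```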


>
> But if we can start using forks to put "other data", that means that
> keeping the page layouts is easier, and thus binary upgrades are much
> more feasible.
>

The difficulty with the page layout didn't come from the checksum
itself. We can add 4 or 8 bytes to the page header easily enough. The
difficulty came from trying to move the hint bits for all the tuples
to a dedicated area. That means three resizable areas so either one of
them would have to be relocatable or some other solution (like not
checksumming the line pointers and putting the hint bits in the line
pointers). If we're willing to have invalid checksums whenever the
hint bits get set then this wouldn't be necessary.

--
greg


Re: Protecting against unexpected zero-pages: proposal

From
Aidan Van Dyk
Date:
On Tue, Nov 9, 2010 at 8:45 AM, Greg Stark <gsstark@mit.edu> wrote:

> But buffering the page only means you've got some consistent view of
> the page. It doesn't mean the checksum will actually match the data in
> the page that gets written out. So when you read it back in the
> checksum may be invalid.

I was assuming that if the code went through the trouble to buffer the
shared page to get a "stable, non-changing" copy to use for
checksumming/writing it, it would write() the buffered copy it just
made, not the original in shared memory...  I'm not sure how that
write could be inconsistent.
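In outline (toy checksum, invented names), the buffered write would go something like:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLCKSZ 8192

/* Toy rotate-xor checksum, standing in for a real CRC. */
static uint32_t page_checksum(const unsigned char *page)
{
    uint32_t s = 0;
    for (int i = 0; i < BLCKSZ; i++)
        s = (s << 1 | s >> 31) ^ page[i];
    return s;
}

/* Copy the shared-memory page into a private buffer first, then checksum
 * and write() that same copy.  Backends setting hint bits can keep
 * scribbling on the shared page; the image handed to write() and its
 * checksum were both taken from the one stable snapshot, so they
 * cannot disagree with each other. */
static int write_page_buffered(int fd, const unsigned char *shared_page,
                               uint32_t *checksum_out)
{
    unsigned char local[BLCKSZ];

    memcpy(local, shared_page, BLCKSZ);
    *checksum_out = page_checksum(local);
    return write(fd, local, BLCKSZ) == BLCKSZ ? 0 : -1;
}
```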

a.

--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Tue, Nov 9, 2010 at 2:28 PM, Aidan Van Dyk <aidan@highrise.ca> wrote:
> On Tue, Nov 9, 2010 at 8:45 AM, Greg Stark <gsstark@mit.edu> wrote:
>
>> But buffering the page only means you've got some consistent view of
>> the page. It doesn't mean the checksum will actually match the data in
>> the page that gets written out. So when you read it back in the
>> checksum may be invalid.
>
> I was assuming that if the code went through the trouble to buffer the
> shared page to get a "stable, non-changing" copy to use for
> checksumming/writing it, it would write() the buffered copy it just
> made, not the original in shared memory...  I'm not sure how that
> write could be in-consistent.

Oh, I'm mistaken. The problem was that buffering the writes was
insufficient to deal with torn pages. Even if you buffer the writes, if
the machine crashes while only having written half the buffer out then
the checksum won't match. If the only changes on the page were hint
bit updates then there will be no full page write in the WAL log to
repair the block.

It's possible that *that* situation is rare enough to let the checksum
raise a warning but not an error.

But personally I'm pretty loath to buffer every page write. The state
of the art is zero-copy processing and we should be looking to reduce
copies rather than increase them. Though I suppose if we did a
zero-copy CRC that might actually get us this buffered write for free.


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Tue, Nov 9, 2010 at 3:25 PM, Greg Stark <gsstark@mit.edu> wrote:
> Oh, I'm mistaken. The problem was that buffering the writes was
> insufficient to deal with torn pages. Even if you buffer the writes if
> the machine crashes while only having written half the buffer out then
> the checksum won't match. If the only changes on the page were hint
> bit updates then there will be no full page write in the WAL log to
> repair the block.

Huh, this implies that if we did go through all the work of
segregating the hint bits and could arrange that they all appear on
the same 512-byte sector and if we buffered them so that we were
writing the same bits we checksummed then we actually *could* include
them in the CRC after all since even a torn page will almost certainly
not tear an individual sector.
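The alignment requirement is easy to state in code (512-byte sectors assumed for illustration; real devices may use other sizes):

```c
/* Disks tear on sector granularity, so a region that lies entirely
 * within one sector is (almost certainly) written atomically.  The
 * hint-bit area would have to satisfy this check to be safely included
 * in the CRC under this scheme. */
#define SECTOR 512

static int fits_in_one_sector(unsigned off, unsigned len)
{
    return off / SECTOR == (off + len - 1) / SECTOR;
}
```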

-- 
greg


Re: Protecting against unexpected zero-pages: proposal

From
Jim Nasby
Date:
On Nov 9, 2010, at 9:27 AM, Greg Stark wrote:
> On Tue, Nov 9, 2010 at 3:25 PM, Greg Stark <gsstark@mit.edu> wrote:
>> Oh, I'm mistaken. The problem was that buffering the writes was
>> insufficient to deal with torn pages. Even if you buffer the writes if
>> the machine crashes while only having written half the buffer out then
>> the checksum won't match. If the only changes on the page were hint
>> bit updates then there will be no full page write in the WAL log to
>> repair the block.
>
> Huh, this implies that if we did go through all the work of
> segregating the hint bits and could arrange that they all appear on
> the same 512-byte sector and if we buffered them so that we were
> writing the same bits we checksummed then we actually *could* include
> them in the CRC after all since even a torn page will almost certainly
> not tear an individual sector.

If there's a torn page then we've crashed, which means we go through crash recovery, which puts a valid page (with
valid CRC) back in place from the WAL. What am I missing?

BTW, I agree that at minimum we need to leave the option of only raising a warning when we hit a checksum failure. Some
people might want Postgres to treat it as an error by default, but most folks will at least want the option to look at
their (corrupt) data.
--
Jim C. Nasby, Database Architect                   jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net




Re: Protecting against unexpected zero-pages: proposal

From
Gurjeet Singh
Date:
On Tue, Nov 9, 2010 at 12:32 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
There are also crosschecks that you can apply: if it's a heap page, are
there any index pages with pointers to it?  If it's an index page, are
there downlink or sibling links to it from elsewhere in the index?
A page that Postgres left as zeroes would not have any references to it.

IMO there are a lot of methods that can separate filesystem misfeasance
from Postgres errors, probably with greater reliability than this hack.
I would also suggest that you don't really need to prove conclusively
that any particular instance is one or the other --- a pattern across
multiple instances will tell you what you want to know.

Doing this postmortem on a regular deployment and fixing the problem would not be too difficult. But this platform, which Postgres is a part of,  would be mostly left unattended once deployed (pardon me for not sharing the details, as I am not sure if I can).

An external HA component is supposed to detect any problems (by querying Postgres or by external means) and take evasive action. It is this automation of problem detection that we are seeking.

As Greg pointed out, even with this hack in place, we might still get zero pages from the FS (say, when ext3 does metadata journaling but not block journaling). In that case we'd rely on recovery's WAL replay of relation extension to reintroduce the magic number in pages.
 
What's more, if I did believe that this was a safe and
reliable technique, I'd be unhappy about the opportunity cost of
reserving it for zero-page testing rather than other purposes.


This is one of those times where you are a bit too terse for me. What other purposes would you rather reserve it for than zero-page testing?

Regards,
--
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device

Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Tue, Nov 9, 2010 at 4:26 PM, Jim Nasby <jim@nasby.net> wrote:
>> On Tue, Nov 9, 2010 at 3:25 PM, Greg Stark <gsstark@mit.edu> wrote:
>>> Oh, I'm mistaken. The problem was that buffering the writes was
>>> insufficient to deal with torn pages. Even if you buffer the writes if
>>> the machine crashes while only having written half the buffer out then
>>> the checksum won't match. If the only changes on the page were hint
>>> bit updates then there will be no full page write in the WAL log to
>>> repair the block.
>
> If there's a torn page then we've crashed, which means we go through crash recovery, which puts a valid page (with
> valid CRC) back in place from the WAL. What am I missing?
 

"If the only changes on the page were hint bit updates then there will
be no full page write in the WAL to repair the block"



-- 
greg


Re: Protecting against unexpected zero-pages: proposal

From
Aidan Van Dyk
Date:
On Tue, Nov 9, 2010 at 11:26 AM, Jim Nasby <jim@nasby.net> wrote:

>> Huh, this implies that if we did go through all the work of
>> segregating the hint bits and could arrange that they all appear on
>> the same 512-byte sector and if we buffered them so that we were
>> writing the same bits we checksummed then we actually *could* include
>> them in the CRC after all since even a torn page will almost certainly
>> not tear an individual sector.
>
> If there's a torn page then we've crashed, which means we go through crash recovery, which puts a valid page (with
> valid CRC) back in place from the WAL. What am I missing?

The problem case is where hint-bits have been set.  Hint bits have
always been "we don't really care, but we write them".

A torn page on hint-bit-only writes is OK, because with a torn page
(assuming you don't get zeroed pages) you get the old or new chunks
of the complete 8K buffer, but they are identical except for the
hint bits, for which either the old or the new state is sufficient.

But with a checksum, getting a torn page with only hint-bit
updates now becomes noticed.  Before, it might have happened, but we
wouldn't have noticed or cared.

So, for getting checksums, we have to offer up a few things:
1) zero-copy writes, we need to buffer the write to get a consistent
checksum (or lock the buffer tight)
2) saving hint-bits on an otherwise unchanged page.  We either need to
just not write that page, and lose the work the hint-bits did, or do
a full-page WAL write of it, so the torn-page checksum is fixed

Both of these are theoretical performance tradeoffs.  How badly do we
want to verify on read that it is *exactly* what we thought we wrote?

a.


--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.


Re: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Gurjeet Singh <singh.gurjeet@gmail.com> writes:
> On Tue, Nov 9, 2010 at 12:32 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> IMO there are a lot of methods that can separate filesystem misfeasance
>> from Postgres errors, probably with greater reliability than this hack.

> Doing this postmortem on a regular deployment and fixing the problem would
> not be too difficult. But this platform, which Postgres is a part of,  would
> be mostly left unattended once deployed (pardon me for not sharing the
> details, as I am not sure if I can).

> An external HA component is supposed to detect any problems (by querying
> Postgres or by external means) and take an evasive action. It is this
> automation of problem detection that we are seeking.

To be blunt, this argument is utter nonsense.  The changes you propose
would still require manual analysis of any detected issues in order to
do anything useful about them.  Once you know that there is, or isn't,
a filesystem-level error involved, what are you going to do next?
You're going to go try to debug the component you know is at fault,
that's what.  And that problem is still AI-complete.
        regards, tom lane


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Tue, Nov 9, 2010 at 5:06 PM, Aidan Van Dyk <aidan@highrise.ca> wrote:
> So, for getting checksums, we have to offer up a few things:
> 1) zero-copy writes, we need to buffer the write to get a consistent
> checksum (or lock the buffer tight)
> 2) saving hint-bits on an otherwise unchanged page.  We either need to
> just not write that page, and lose the work the hint-bits did, or do
> a full-page WAL write of it, so the torn-page checksum is fixed

Actually the consensus the last go-around on this topic was to
segregate the hint bits into a single area of the page and skip them
in the checksum. That way we don't have to do any of the above. It's
just that that's a lot of work.
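Assuming, purely for illustration, that all the hint bits lived in one fixed region of the page (offsets invented), skipping them would amount to masking that region out before checksumming:

```c
#include <stdint.h>
#include <string.h>

#define BLCKSZ   8192
/* Purely illustrative: pretend every hint bit lives in this one region. */
#define HINT_OFF 64
#define HINT_LEN 256

/* Toy rotate-xor checksum standing in for a real CRC. */
static uint32_t cksum(const unsigned char *buf, int len)
{
    uint32_t s = 0;
    for (int i = 0; i < len; i++)
        s = (s << 1 | s >> 31) ^ buf[i];
    return s;
}

/* Checksum the page with the hint-bit region masked to a constant, so a
 * hint bit set after the checksum was computed can never invalidate it. */
static uint32_t page_checksum_skip_hints(const unsigned char *page)
{
    unsigned char copy[BLCKSZ];

    memcpy(copy, page, BLCKSZ);
    memset(copy + HINT_OFF, 0, HINT_LEN);
    return cksum(copy, BLCKSZ);
}
```

The hard part, as noted, is not this masking but actually relocating the hint bits out of the per-tuple headers into one such region.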

--
greg


Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 12:31 PM, Greg Stark <gsstark@mit.edu> wrote:
> On Tue, Nov 9, 2010 at 5:06 PM, Aidan Van Dyk <aidan@highrise.ca> wrote:
>> So, for getting checksums, we have to offer up a few things:
>> 1) zero-copy writes, we need to buffer the write to get a consistent
>> checksum (or lock the buffer tight)
>> 2) saving hint-bits on an otherwise unchanged page.  We either need to
>> just not write that page, and lose the work the hint-bits did, or do
>> a full-page WAL write of it, so the torn-page checksum is fixed
>
> Actually the consensus the last go-around on this topic was to
> segregate the hint bits into a single area of the page and skip them
> in the checksum. That way we don't have to do any of the above. It's
> just that that's a lot of work.

And it still allows silent data corruption, because bogusly clearing a
hint bit is, at the moment, harmless, but bogusly setting one is not.
I really have to wonder how other products handle this.  PostgreSQL
isn't the only database product that uses MVCC - not by a long shot -
and the problem of detecting whether an XID is visible to the current
snapshot can't be ours alone.  So what do other people do about this?
They either don't cache the information about whether the XID is
committed in-page (in which case, are they just slower or do they have
some other means of avoiding the performance hit?) or they cache it in
the page (in which case, they either WAL log it or they don't checksum
it).  I mean, there aren't any other options, are there?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Protecting against unexpected zero-pages: proposal

From
Kenneth Marshall
Date:
On Tue, Nov 09, 2010 at 02:05:57PM -0500, Robert Haas wrote:
> On Tue, Nov 9, 2010 at 12:31 PM, Greg Stark <gsstark@mit.edu> wrote:
> > On Tue, Nov 9, 2010 at 5:06 PM, Aidan Van Dyk <aidan@highrise.ca> wrote:
> >> So, for getting checksums, we have to offer up a few things:
> >> 1) zero-copy writes, we need to buffer the write to get a consistent
> >> checksum (or lock the buffer tight)
> >> 2) saving hint-bits on an otherwise unchanged page.  We either need to
> >> just not write that page, and lose the work the hint-bits did, or do
> >> a full-page WAL write of it, so the torn-page checksum is fixed
> >
> > Actually the consensus the last go-around on this topic was to
> > segregate the hint bits into a single area of the page and skip them
> > in the checksum. That way we don't have to do any of the above. It's
> > just that that's a lot of work.
> 
> And it still allows silent data corruption, because bogusly clearing a
> hint bit is, at the moment, harmless, but bogusly setting one is not.
> I really have to wonder how other products handle this.  PostgreSQL
> isn't the only database product that uses MVCC - not by a long shot -
> and the problem of detecting whether an XID is visible to the current
> snapshot can't be ours alone.  So what do other people do about this?
> They either don't cache the information about whether the XID is
> committed in-page (in which case, are they just slower or do they have
> some other means of avoiding the performance hit?) or they cache it in
> the page (in which case, they either WAL log it or they don't checksum
> it).  I mean, there aren't any other options, are there?
> 
> -- 
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
> 

That would imply that we need to have a CRC for just the hint bit
section or some type of ECC calculation that can detect bad hint
bits independent of the CRC for the rest of the page.
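As a sketch of that (offsets and names invented), the page would carry two checksums computed side by side:

```c
#include <stdint.h>
#include <string.h>

#define BLCKSZ   8192
/* Hypothetical offsets, not the real page layout. */
#define HINT_OFF 64
#define HINT_LEN 256

/* Toy rotate-xor checksum standing in for a real CRC. */
static uint32_t cksum(const unsigned char *buf, int len)
{
    uint32_t s = 0;
    for (int i = 0; i < len; i++)
        s = (s << 1 | s >> 31) ^ buf[i];
    return s;
}

typedef struct
{
    uint32_t data_crc;          /* everything except the hint-bit region */
    uint32_t hint_crc;          /* the hint-bit region alone */
} PageChecksums;

/* Checksumming the two regions separately lets a reader tell "bad data"
 * apart from "stale hint bits": a hint_crc mismatch alone is survivable,
 * while a data_crc mismatch means real corruption. */
static PageChecksums checksum_page_split(const unsigned char *page)
{
    PageChecksums c;
    unsigned char copy[BLCKSZ];

    memcpy(copy, page, BLCKSZ);
    memset(copy + HINT_OFF, 0, HINT_LEN);   /* mask hints out of data_crc */
    c.data_crc = cksum(copy, BLCKSZ);
    c.hint_crc = cksum(page + HINT_OFF, HINT_LEN);
    return c;
}
```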

Regards,
Ken


Re: Protecting against unexpected zero-pages: proposal

From
Alvaro Herrera
Date:
Excerpts from Robert Haas's message of mar nov 09 16:05:57 -0300 2010:

> And it still allows silent data corruption, because bogusly clearing a
> hint bit is, at the moment, harmless, but bogusly setting one is not.
> I really have to wonder how other products handle this.  PostgreSQL
> isn't the only database product that uses MVCC - not by a long shot -
> and the problem of detecting whether an XID is visible to the current
> snapshot can't be ours alone.  So what do other people do about this?
> They either don't cache the information about whether the XID is
> committed in-page (in which case, are they just slower or do they have
> some other means of avoiding the performance hit?) or they cache it in
> the page (in which case, they either WAL log it or they don't checksum
> it).  I mean, there aren't any other options, are there?

Maybe allocate enough shared memory for pg_clog buffers back to the
freeze horizon, and just don't use hint bits?  Maybe some intermediate
solution, i.e. allocate a large bunch of pg_clog buffers, and do
WAL-logged setting of hint bits only for tuples that go further back.

I remember someone had a patch to set all the bits in a page that passed
a threshold of some kind.  Ah, no, that was for freezing tuples.

-- 
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
> PostgreSQL
> isn't the only database product that uses MVCC - not by a long shot -
> and the problem of detecting whether an XID is visible to the current
> snapshot can't be ours alone.  So what do other people do about this?
> They either don't cache the information about whether the XID is
> committed in-page (in which case, are they just slower or do they have
> some other means of avoiding the performance hit?) or they cache it in
> the page (in which case, they either WAL log it or they don't checksum
> it).

Well, most of the other MVCC-in-table DBMSes simply don't deal with
large, on-disk databases.  In fact, I can't think of one which does,
currently; while MVCC has been popular for the New Databases, they're
all focused on "in-memory" databases.  Oracle and InnoDB use rollback
segments.

Might be worth asking the BDB folks.

Personally, I think we're headed inevitably towards having a set of
metadata bitmaps for each table, like we do currently for the FSM.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Tue, Nov 9, 2010 at 7:37 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Well, most of the other MVCC-in-table DBMSes simply don't deal with
> large, on-disk databases.  In fact, I can't think of one which does,
> currently; while MVCC has been popular for the New Databases, they're
> all focused on "in-memory" databases.  Oracle and InnoDB use rollback
> segments.

Well rollback segments are still MVCC. However Oracle's MVCC is
block-based. So they only have to do the visibility check once per
block, not once per row. Once they find the right block version they
can process all the rows on it.

Also Oracle's snapshots are just the log position. Instead of having
to check whether every transaction committed or not, they just find
the block version which was last modified before the log position for
when their transaction started.

> Might be worth asking the BDB folks.
>
> Personally, I think we're headed inevitably towards having a set of
> metadata bitmaps for each table, like we do currently for the FSM.

Well we already have a metadata bitmap for transaction visibility.
It's called the clog. There's no point in having one structured
differently around the table.

The whole point of the hint bits is that it's in the same place as the data.


--
greg


Re: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
> The whole point of the hint bits is that it's in the same place as the data.

Yes, but the hint bits are currently causing us trouble on several
features or potential features:

* page-level CRC checks
* eliminating vacuum freeze for cold data
* index-only access
* replication
* this patch
* etc.

At a certain point, it's worth the trouble to handle them differently
because of the other features that enables or makes much easier.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Tue, Nov 9, 2010 at 8:12 PM, Josh Berkus <josh@agliodbs.com> wrote:
>> The whole point of the hint bits is that it's in the same place as the data.
>
> Yes, but the hint bits are currently causing us trouble on several
> features or potential features:

Then we might have to get rid of hint bits. But they're hint bits for
a metadata file that already exists; creating another metadata file
doesn't solve anything.

Though incidentally all of the other items you mentioned are generic
problems caused by MVCC, not hint bits.


-- 
greg


Re: Protecting against unexpected zero-pages: proposal

From
Aidan Van Dyk
Date:
On Tue, Nov 9, 2010 at 3:25 PM, Greg Stark <gsstark@mit.edu> wrote:

> Then we might have to get rid of hint bits. But they're hint bits for
> a metadata file that already exists, creating another metadata file
> doesn't solve anything.

Is there any way to instrument the writes of dirty buffers from
shared memory, and see how many of the pages normally being written are
not backed by WAL (hint-only updates)?  Just "dumping" those buffers
without writing them would allow at least *checksums* to go through
without losing all the benefits of the hint bits.

I've got a hunch (with no proof) that the penalty of not writing them
will be borne largely by small database installs.  Large OLTP databases
probably won't have pages without a WAL'ed change and hint-bits set,
and large data warehouse ones will probably vacuum freeze big tables
on load to avoid the huge write penalty the 1st time they scan the
tables...

</waving hands>

--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.


Re: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
> Though incidentally all of the other items you mentioned are generic
> problems caused by with MVCC, not hint bits.

Yes, but the hint bits prevent us from implementing workarounds.


--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 2:05 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Nov 9, 2010 at 12:31 PM, Greg Stark <gsstark@mit.edu> wrote:
>> On Tue, Nov 9, 2010 at 5:06 PM, Aidan Van Dyk <aidan@highrise.ca> wrote:
>>> So, for getting checksums, we have to offer up a few things:
>>> 1) zero-copy writes, we need to buffer the write to get a consistent
>>> checksum (or lock the buffer tight)
>>> 2) saving hint-bits on an otherwise unchanged page.  We either need to
>>> just not write that page, and lose the work the hint-bits did, or do
>>> a full-page WAL write of it, so the torn-page checksum is fixed
>>
>> Actually the consensus the last go-around on this topic was to
>> segregate the hint bits into a single area of the page and skip them
>> in the checksum. That way we don't have to do any of the above. It's
>> just that that's a lot of work.
>
> And it still allows silent data corruption, because bogusly clearing a
> hint bit is, at the moment, harmless, but bogusly setting one is not.
> I really have to wonder how other products handle this.  PostgreSQL
> isn't the only database product that uses MVCC - not by a long shot -
> and the problem of detecting whether an XID is visible to the current
> snapshot can't be ours alone.  So what do other people do about this?
> They either don't cache the information about whether the XID is
> committed in-page (in which case, are they just slower or do they have
> some other means of avoiding the performance hit?) or they cache it in
> the page (in which case, they either WAL log it or they don't checksum
> it).  I mean, there aren't any other options, are there?

An examination of the MySQL source code reveals their answer.  In
row_vers_build_for_semi_consistent_read(), which I can't swear is the
right place but seems to be, there is this comment:
                        /* We assume that a rolled-back transaction stays in
                        TRX_ACTIVE state until all the changes have been
                        rolled back and the transaction is removed from
                        the global list of transactions. */

Which makes sense.  If rows from aborted transactions are never left
in the heap indefinitely, then the list of aborted transactions that you
need to remember for MVCC purposes will remain relatively small and
you can just include those XIDs in your MVCC snapshot.  Our problem is
that we have no particular bound on the number of aborted transactions
whose XIDs may still be floating around, so we can't do it that way.
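The payoff could be sketched like this (structure and names invented; the real snapshot machinery is of course more involved):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Xid;

/* Sketch of the visibility shortcut under a bounded-aborted-set scheme:
 * keep the (small) set of aborted XIDs that vacuum has not yet scrubbed
 * out of the heap.  Any XID older than the snapshot xmin that is not in
 * that set must have committed, so no per-tuple hint bit is needed. */
typedef struct
{
    Xid       xmin;             /* oldest XID still possibly running */
    const Xid *aborted;         /* unscrubbed aborted XIDs */
    int       naborted;
} SnapshotSketch;

static bool xid_known_committed(const SnapshotSketch *snap, Xid xid)
{
    if (xid >= snap->xmin)
        return false;           /* still in doubt: needs a real lookup */
    for (int i = 0; i < snap->naborted; i++)
        if (snap->aborted[i] == xid)
            return false;       /* aborted, not yet vacuumed away */
    return true;                /* older than xmin and not aborted */
}
```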

<dons asbestos underpants>

To impose a similar bound in PostgreSQL, you'd need to maintain the
set of aborted XIDs and the relations that need to be vacuumed for
each one.  As you vacuum, you prune any tuples with aborted xmins
(which is WAL-logged already anyway) and additionally WAL-log clearing
the xmax for each tuple with an aborted xmax.  Thus, when you
finish vacuuming the relation, the aborted XID is no longer present
anywhere in it.  When you vacuum the last relation for a particular
XID, that XID no longer exists in the relation files anywhere and you
can remove it from the list of aborted XIDs.  I think that WAL logging
the list of XIDs and list of unvacuumed relations for each at each
checkpoint would be sufficient for crash safety.  If you did this, you
could then assume that any XID which precedes your snapshot's xmin is
committed.

1. When a big abort happens, you may have to carry that XID around in
every snapshot - and avoid advancing RecentGlobalXmin - for quite a
long time.
2. You have to WAL log marking the XMAX of an aborted transaction invalid.
3. You have to WAL log the not-yet-cleaned-up XIDs and the relations
each one needs vacuumed at each checkpoint.
4. There would presumably be some finite limit on the size of the
shared memory structure for aborted transactions.  I don't think
there'd be any reason to make it particularly small, but if you sat
there and aborted transactions at top speed you might eventually run
out of room, at which point any transactions you started wouldn't be
able to abort until vacuum made enough progress to free up an entry.
5. It would be pretty much impossible to run with autovacuum turned
off, and in fact you would likely need to make it a good deal more
aggressive in the specific case of aborted transactions, to mitigate
problems #1, #3, and #4.

I'm not sure how bad those things would be, or if there are more that
I'm missing (besides the obvious "it would be a lot of work").

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
On 11/9/10 1:50 PM, Robert Haas wrote:
> 5. It would be pretty much impossible to run with autovacuum turned
> off, and in fact you would likely need to make it a good deal more
> aggressive in the specific case of aborted transactions, to mitigate
> problems #1, #3, and #4.

6. This would require us to be more aggressive about VACUUMing old-cold
relations/pages, e.g. VACUUM FREEZE.  This would make one of our worst
issues for data warehousing even worse.

What about having this map (and other hint bits) be per-relation?  Hmmm.  That wouldn't work for DDL, I suppose ...

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Protecting against unexpected zero-pages: proposal

From
"Kevin Grittner"
Date:
Josh Berkus <josh@agliodbs.com> wrote:
> 6. This would require us to be more aggressive about VACUUMing
> old-cold relations/pages, e.g. VACUUM FREEZE.  This would make
> one of our worst issues for data warehousing even worse.
I continue to feel that it is insane that when a table is populated
within the same database transaction which created it (e.g., a bulk
load of a table or partition), we don't write the tuples with
hint bits set for commit and xmin frozen.  By the time any but the
creating transaction can see the tuples, *if* any other transaction
is ever able to see the tuples, these will be the correct values;
we really should be able to deal with it within the creating
transaction somehow.

If we ever handle that, would #6 be a moot point, or do you think
it's still a significant issue?
-Kevin


Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 5:03 PM, Josh Berkus <josh@agliodbs.com> wrote:
> On 11/9/10 1:50 PM, Robert Haas wrote:
>> 5. It would be pretty much impossible to run with autovacuum turned
>> off, and in fact you would likely need to make it a good deal more
>> aggressive in the specific case of aborted transactions, to mitigate
>> problems #1, #3, and #4.
>
> 6. This would require us to be more aggressive about VACUUMing old-cold
> relations/pages, e.g. VACUUM FREEZE.  This would make one of our worst
> issues for data warehousing even worse.

Uh, no it doesn't.  It only requires you to be more aggressive about
vacuuming the transactions that are in the aborted-XIDs array.  It
doesn't affect transaction wraparound vacuuming at all, either
positively or negatively.  You still have to freeze xmins before they
flip from being in the past to being in the future, but that's it.

> What about having this map (and other hintbits) be per-relation?  Hmmm.
>  That wouldn't work for DDL, I suppose ...

"This map"?  I suppose you could track aborted XIDs per relation
instead of globally, but I don't see why that would affect DDL any
differently than anything else.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 5:15 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Josh Berkus <josh@agliodbs.com> wrote:
>
>> 6. This would require us to be more aggressive about VACUUMing
>> old-cold relations/pages, e.g. VACUUM FREEZE.  This would make
>> one of our worst issues for data warehousing even worse.
>
> I continue to feel that it is insane that when a table is populated
> within the same database transaction which created it (e.g., a bulk
> load of a table or partition), we don't write the tuples with
> hint bits set for commit and xmin frozen.  By the time any but the
> creating transaction can see the tuples, *if* any other transaction
> is ever able to see the tuples, these will be the correct values;
> we really should be able to deal with it within the creating
> transaction somehow.

I agree.

> If we ever handle that, would #6 be a moot point, or do you think
> it's still a significant issue?

I think it's a moot point anyway, per previous email.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 3:05 PM, Greg Stark <gsstark@mit.edu> wrote:
> On Tue, Nov 9, 2010 at 7:37 PM, Josh Berkus <josh@agliodbs.com> wrote:
>> Well, most of the other MVCC-in-table DBMSes simply don't deal with
>> large, on-disk databases.  In fact, I can't think of one which does,
>> currently; while MVCC has been popular for the New Databases, they're
>> all focused on "in-memory" databases.  Oracle and InnoDB use rollback
>> segments.
>
> Well rollback segments are still MVCC. However Oracle's MVCC is
> block-based. So they only have to do the visibility check once per
> block, not once per row. Once they find the right block version they
> can process all the rows on it.
>
> Also Oracle's snapshots are just the log position. Instead of having
> to check whether every transaction committed or not, they just find
> the block version which was last modified before the log position for
> when their transaction started.

That is cool.  One problem is that it might sometimes result in
additional I/O.  A transaction begins and writes a tuple.  We must
write a preimage of the page (or at least, sufficient information to
reconstruct a preimage of the page) to the undo segment.  If the
transaction commits relatively quickly, and all transactions which
took their snapshots before the commit end either by committing or by
aborting, we can discard that information from the undo segment
without ever writing it to disk.  However, if that doesn't happen, the
undo log page may get evicted, and we're now doing three writes (WAL,
page, undo) rather than just two (WAL, page).  That's no worse than an
update where the old and new tuples land on different pages, but it IS
worse than an update where the old and new tuples are on the same
page, or at least I think it is.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
Robert,

> Uh, no it doesn't.  It only requires you to be more aggressive about
> vacuuming the transactions that are in the aborted-XIDs array.  It
> doesn't affect transaction wraparound vacuuming at all, either
> positively or negatively.  You still have to freeze xmins before they
> flip from being in the past to being in the future, but that's it.

Sorry, I was trying to say that it's similar to the freeze issue, not
that it affects freeze.  Sorry for the lack of clarity.

What I was getting at is that this could cause us to vacuum
relations/pages which would otherwise never be vacuumed (or at least,
not until freeze).  Imagine a very large DW table which is normally
insert-only and seldom queried, but once a month or so the insert aborts
and rolls back.

I'm not saying that your proposal isn't worth testing.  I'm just saying
that it may prove to be a net loss to overall system efficiency.

>> If we ever handle that, would #6 be a moot point, or do you think
>> > it's still a significant issue?

Kevin, the case which your solution doesn't fix is the common one of
"log tables" which keep adding records continuously, with < 5% inserts
or updates.  That may seem like a "corner case" but such a table,
partitioned or unpartitioned, exists in around 1/3 of the commercial
applications I've worked on, so it's a common pattern.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Josh Berkus <josh@agliodbs.com> writes:
>> Though incidentally all of the other items you mentioned are generic
>> problems caused by MVCC, not hint bits.

> Yes, but the hint bits prevent us from implementing workarounds.

If we got rid of hint bits, we'd need workarounds for the ensuing
massive performance loss.  There is no reason whatsoever to imagine
that we'd come out ahead in the end.
        regards, tom lane


Re: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> <dons asbestos underpants>
> 4. There would presumably be some finite limit on the size of the
> shared memory structure for aborted transactions.  I don't think
> there'd be any reason to make it particularly small, but if you sat
> there and aborted transactions at top speed you might eventually run
> out of room, at which point any transactions you started wouldn't be
> able to abort until vacuum made enough progress to free up an entry.

Um, that bit is a *complete* nonstarter.  The possibility of a failed
transaction always has to be allowed.  What if vacuum itself gets an
error for example?  Or, what if the system crashes?

I thought for a bit about inverting the idea, such that there were a
limit on the number of unvacuumed *successful* transactions rather than
the number of failed ones.  But that seems just as unforgiving: what if
you really need to commit a transaction to effect some system state
change?  An example might be dropping some enormous table that you no
longer need, but vacuum is going to insist on plowing through before
it'll let you have any more transactions.

I'm of the opinion that any design that presumes it can always fit all
the required transaction-status data in memory is probably not even
worth discussing.  There always has to be a way for status data to spill
to disk.  What's interesting is how you can achieve enough locality of
access so that most of what you need to look at is usually in memory.
        regards, tom lane


Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 5:45 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Robert,
>
>> Uh, no it doesn't.  It only requires you to be more aggressive about
>> vacuuming the transactions that are in the aborted-XIDs array.  It
>> doesn't affect transaction wraparound vacuuming at all, either
>> positively or negatively.  You still have to freeze xmins before they
>> flip from being in the past to being in the future, but that's it.
>
> Sorry, I was trying to say that it's similar to the freeze issue, not
> that it affects freeze.  Sorry for the lack of clarity.
>
> What I was getting at is that this could cause us to vacuum
> relations/pages which would otherwise never be vacuumed (or at least,
> not until freeze).  Imagine a very large DW table which is normally
> insert-only and seldom queried, but once a month or so the insert aborts
> and rolls back.

Oh, I see.  In that case, under the proposed scheme, you'd get an
immediate vacuum of everything inserted into the table since the last
failed insert.  Everything prior to the last failed insert would be
OK, since the visibility map bits would already be set for those
pages.  Yeah, that would be annoying.

There's a related problem with index-only scans.  A large DW table
which is normally insert-only, but which IS queried regularly, won't
be able to use index-only scans effectively because, without regular
vacuuming, the visibility map bits won't be set.  We've
previously discussed the possibility of having the background writer
set hint bits before writing the pages, and maybe it could even set
the all-visible bit and update the visibility map, too.  But that
won't help if the transaction inserts a large enough quantity of data
that it starts spilling buffers to disk before it commits.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Protecting against unexpected zero-pages: proposal

From
Gurjeet Singh
Date:
On Wed, Nov 10, 2010 at 1:15 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Once you know that there is, or isn't,
> a filesystem-level error involved, what are you going to do next?
> You're going to go try to debug the component you know is at fault,
> that's what.  And that problem is still AI-complete.

If we know for sure that Postgres was not at fault, then we have a
standby node to fail over to, where a Postgres warm standby is
maintained via streaming replication.

Regards
--
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device

Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 6:42 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> <dons asbestos underpants>
>> 4. There would presumably be some finite limit on the size of the
>> shared memory structure for aborted transactions.  I don't think
>> there'd be any reason to make it particularly small, but if you sat
>> there and aborted transactions at top speed you might eventually run
>> out of room, at which point any transactions you started wouldn't be
>> able to abort until vacuum made enough progress to free up an entry.
>
> Um, that bit is a *complete* nonstarter.  The possibility of a failed
> transaction always has to be allowed.  What if vacuum itself gets an
> error for example?  Or, what if the system crashes?

I wasn't proposing that it was impossible to abort, only that aborts
might have to block.  I admit I don't know what to do about VACUUM
itself failing.  A transient failure mightn't be so bad, but if you
find yourself permanently unable to eradicate the XIDs left behind by
an aborted transaction, you'll eventually have to shut down the
database, lest the XID space wrap around.

Actually, come to think of it, there's no reason you COULDN'T spill
the list of aborted-but-not-yet-cleaned-up XIDs to disk.  It's just
that XidInMVCCSnapshot() would get reeeeeeally expensive after a
while.

> I thought for a bit about inverting the idea, such that there were a
> limit on the number of unvacuumed *successful* transactions rather than
> the number of failed ones.  But that seems just as unforgiving: what if
> you really need to commit a transaction to effect some system state
> change?  An example might be dropping some enormous table that you no
> longer need, but vacuum is going to insist on plowing through before
> it'll let you have any more transactions.

The number of relevant aborted XIDs tends naturally to decline to zero
as vacuum does its thing, while the number of relevant committed XIDs
tends to grow very, very large (it starts to decline only when we
start freezing things), so remembering the not-yet-cleaned-up aborted
XIDs seems likely to be cheaper.  In fact, in many cases, the set of
not-yet-cleaned-up aborted XIDs will be completely empty.

> I'm of the opinion that any design that presumes it can always fit all
> the required transaction-status data in memory is probably not even
> worth discussing.

Well, InnoDB does it.

> There always has to be a way for status data to spill
> to disk.  What's interesting is how you can achieve enough locality of
> access so that most of what you need to look at is usually in memory.

We're not going to get any more locality of reference than we're
already getting from hint bits, are we?  The advantage of trying to do
timely cleanup of aborted transactions is that you can assume that any
XID before RecentGlobalXmin is committed, without checking CLOG and
without having to update hint bits and write out the ensuing dirty
pages.  If we could make CLOG access cheap enough that we didn't need
hint bits, that would also solve that problem, but nobody (including
me) seems to think that's feasible.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Tue, Nov 9, 2010 at 7:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Nov 9, 2010 at 5:45 PM, Josh Berkus <josh@agliodbs.com> wrote:
>> Robert,
>>
>>> Uh, no it doesn't.  It only requires you to be more aggressive about
>>> vacuuming the transactions that are in the aborted-XIDs array.  It
>>> doesn't affect transaction wraparound vacuuming at all, either
>>> positively or negatively.  You still have to freeze xmins before they
>>> flip from being in the past to being in the future, but that's it.
>>
>> Sorry, I was trying to say that it's similar to the freeze issue, not
>> that it affects freeze.  Sorry for the lack of clarity.
>>
>> What I was getting at is that this could cause us to vacuum
>> relations/pages which would otherwise never be vacuumed (or at least,
>> not until freeze).  Imagine a very large DW table which is normally
>> insert-only and seldom queried, but once a month or so the insert aborts
>> and rolls back.
>
> Oh, I see.  In that case, under the proposed scheme, you'd get an
> immediate vacuum of everything inserted into the table since the last
> failed insert.  Everything prior to the last failed insert would be
> OK, since the visibility map bits would already be set for those
> pages.  Yeah, that would be annoying.

Ah, but it might be fixable.  You wouldn't really need to do a
full-fledged vacuum.  It would be sufficient to scan the heap pages
that might contain the XID we're trying to clean up after, without
touching the indexes.  Instead of actually removing tuples with an
aborted XMIN, you could just mark the line pointers LP_DEAD.  Tuples
with an aborted XMAX don't require touching the indexes anyway.  So as
long as you have some idea which segment of the relation was
potentially dirtied by that transaction, you could just scan those
blocks and update the item pointers and/or XMAX values for the
offending tuples without doing anything else (although you'd probably
want to opportunistically grab the buffer cleanup lock and defragment
if possible).

Unfortunately, I'm now realizing another problem.  During recovery,
you have to assume that any XIDs that didn't commit are aborted; under
the scheme I proposed upthread, if a transaction that was in-flight at
crash time had begun prior to the last checkpoint, you wouldn't know
which relations it had potentially dirtied.  Ouch.  But I think this
is fixable, too.  Let's invent a new on-disk structure called the
content-modified log.  Transactions that want to insert, update, or
delete tuples allocate pages from this structure.  The header of each
page stores the XID of the transaction that owns that page and the ID
of the database to which that transaction is bound.  Following the
header, there are a series of records of the form: tablespace OID,
table OID, starting page number, ending page number.  Each such record
indicates that the given XID may have put its XID on disk within the
given page range of the specified relation.  Each checkpoint flushes
the dirty pages of the modified-content log to disk along with
everything else.  Thus, on redo, we can reconstruct the additional
entries that need to be added to the log from the contents of WAL
subsequent to the redo pointer.

If a transaction commits, we can remove all of its pages from the
modified-content log; in fact, if a transaction begins and commits
without an intervening checkpoint, the pages never need to hit the
disk at all.  If a transaction aborts, its modified-content log pages
must stick around until we've eradicated any copies of its XID in the
relation data files.  We maintain a global value for the oldest
aborted XID which is not yet fully cleaned up (let's call this the
OldestNotQuiteDeadYetXID).  When we see an XID which precedes
OldestNotQuiteDeadYetXID, we know it's committed.  Otherwise, we check
whether the XID precedes the xmin of our snapshot.  If it does, we
have to check whether the XID is committed or aborted (it must be one
or the other).  If it does not, we use our snapshot, as now.  Checking
XIDs between OldestNotQuiteDeadYetXID and our snapshot's xmin is
potentially expensive, but (1) if there aren't many aborted
transactions, this case shouldn't arise very often; (2) if the XID
turns out to be aborted and we can get an exclusive buffer content
lock, we can nuke that copy of the XID to save the next guy the
trouble of examining it; and (3) we can maintain a size-limited
per-backend cache of this information, which should help in the normal
cases where there either aren't that many XIDs that fall into this
category or our transaction doesn't see all that many of them.

This also addresses Tom's concern about needing to store all the
information in memory, and the need to WAL-log not-yet-cleaned-up XIDs
at each checkpoint.  You still need to aggressively clean up after
aborted transactions, either using our current vacuum mechanism or the
"just zap the XIDs" shortcut described above.

(An additional interesting point about this design is that you could
potentially also use it to drive vacuum activity for transactions that
commit, especially if we were to also store a flag indicating whether
each page range contained updates/deletes or only inserts.)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
> If we got rid of hint bits, we'd need workarounds for the ensuing
> massive performance loss.  There is no reason whatsoever to imagine
> that we'd come out ahead in the end.

Oh, there's no question that we need something which serves the same
purpose as the existing hint bits.  But there's a lot of question about
whether our existing implementation is optimal.

For example, imagine if the hint bits were moved to a separate per-table
bitmap outside the table instead of being stored with each row, as the
current FSM is.  Leaving aside the engineering required for this (which
would be considerable, especially when it comes to consistency and
durability), this would potentially allow solutions to the following issues:

* Index-only access
* I/O associated with hint bit setting
* Vacuum freezing old-cold data
* Page-level CRCs
* Rsyncing tables for replication

Alternately, we could attack this by hint bit purpose.  For example, if
we restructured the CLOG so that it was an efficient in-memory index
(yes, I'm being handwavy), then having the XID-is-visible hint bits
might become completely unnecessary.  We could then also improve the
visibility map to be reliable and include "frozen" bits as well.

Overall, what I'm pointing out is that our current implementation of
hint bits is blocking not one but several major features and causing our
users performance pain.  It's time to look for an implementation which
doesn't have the same problems we're familiar with.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Greg Stark
Date:
On Sun, Nov 14, 2010 at 8:52 PM, Josh Berkus <josh@agliodbs.com> wrote:
> For example, imagine if the hint bits were moved to a separate per-table
> bitmap outside the table instead of being stored with each row, as the
> current FSM is.

How many times do we have to keep going around the same block?

We *already* have separate bitmap outside the table for transaction
commit bits. It's the clog.

The only reason the hint bits exist is to cache that so we don't need
to do extra I/O to check tuple visibility. If the hint bits are moved
outside the table then they serve no purpose whatsoever. Then you have
an additional I/O to attempt to save an additional I/O.

The only difference between the clog and your proposal is that the
clog is two bits per transaction and your proposal is 4 bits per
tuple. The per-tuple idea guarantees that the extra I/O will be very
localized which isn't necessarily true for the clog but the clog is
small enough that it probably is true anyways. And even if there's no
I/O the overhead to consult the clog/per-table fork in memory is
probably significant.


-- 
greg


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Greg Stark <gsstark@mit.edu> writes:
> On Sun, Nov 14, 2010 at 8:52 PM, Josh Berkus <josh@agliodbs.com> wrote:
>> For example, imagine if the hint bits were moved to a separate per-table
>> bitmap outside the table instead of being stored with each row, as the
>> current FSM is.

> How many times do we have to keep going around the same block?

> We *already* have separate bitmap outside the table for transaction
> commit bits. It's the clog.

> The only reason the hint bits exist is to cache that so we don't need
> to do extra I/O to check tuple visibility. If the hint bits are moved
> outside the table then they serve no purpose whatsoever. Then you have
> an additional I/O to attempt to save an additional I/O.

Well, not quite.  The case this could improve is index-only scans:
you could go (or so he hopes) directly from the index to the hint
bits given the TID stored by the index.  A clog lookup is not possible
without XMIN & XMAX, which we do not keep in index entries.

But I'm just as skeptical as you are about this being a net win.
It'll pessimize too much other stuff.

Josh is ignoring the proposal that is on the table and seems actually
workable, which is to consult the visibility map during index-only
scans.  For mostly-static tables this would save trips to the heap for
very little extra I/O.  The hard part is to make the VM reliable, but
that is not obviously harder than making separately-stored hint bits
reliable.
        regards, tom lane


Re: Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Andrew Dunstan
Date:

On 11/14/2010 05:15 PM, Tom Lane wrote:
> Josh is ignoring the proposal that is on the table and seems actually
> workable, which is to consult the visibility map during index-only
> scans.  For mostly-static tables this would save trips to the heap for
> very little extra I/O.  The hard part is to make the VM reliable, but
> that is not obviously harder than making separately-stored hint bits
> reliable.

I thought we had agreement in the past that this was the way we should 
proceed.

cheers

andrew


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
Greg, Tom,

> We *already* have separate bitmap outside the table for transaction
> commit bits. It's the clog.

You didn't read my whole e-mail.  I talk about the CLOG further down.

> Josh is ignoring the proposal that is on the table and seems actually
> workable, which is to consult the visibility map during index-only
> scans.  For mostly-static tables this would save trips to the heap for
> very little extra I/O.  The hard part is to make the VM reliable, but
> that is not obviously harder than making separately-stored hint bits
> reliable.

No, I'm not.  I'm pointing out that it doesn't unblock the other 4
features/improvements I mentioned, *all* of which would be unblocked by
not storing the hint bits in the table, whatever means we use to do so.
You, for your part, are consistently ignoring these other issues.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Josh Berkus <josh@agliodbs.com> writes:
> No, I'm not.  I'm pointing out that it doesn't unblock the other 4
> features/improvements I mentioned, *all* of which would be unblocked by
> not storing the hint bits in the table, whatever means we use to do so.
>  You, for your part, are consistently ignoring these other issues.

I'm not ignoring them; I just choose to work on other issues, since
there is no viable proposal for fixing them.  I don't intend to put
my time into dead ends.
        regards, tom lane


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Josh Berkus
Date:
> I'm not ignoring them; I just choose to work on other issues, since
> there is no viable proposal for fixing them.  I don't intend to put
> my time into dead ends.

So, that's a "show me a patch and we'll talk"?  Understood, then.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Robert Haas
Date:
On Mon, Nov 15, 2010 at 2:06 PM, Josh Berkus <josh@agliodbs.com> wrote:
>> I'm not ignoring them; I just choose to work on other issues, since
>> there is no viable proposal for fixing them.  I don't intend to put
>> my time into dead ends.
>
> So, that's a "show me a patch and we'll talk"?  Understood, then.

Or even just a proposal.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Nov 15, 2010 at 2:06 PM, Josh Berkus <josh@agliodbs.com> wrote:
>>> I'm not ignoring them; I just choose to work on other issues, since
>>> there is no viable proposal for fixing them.  I don't intend to put
>>> my time into dead ends.

>> So, that's a "show me a patch and we'll talk"?  Understood, then.

> Or even just a proposal.

Well, he did have a proposal ... it just wasn't very credible.  Moving
the hint bits around is at best a zero-sum game; it seems likely to
degrade cases we now handle well more than it improves cases we don't.
I think what we need is a fundamentally new idea, and I've not seen one.
        regards, tom lane


Re: Rethinking hint bits WAS: Protecting against unexpected zero-pages: proposal

From
"Jim C. Nasby"
Date:
On Nov 14, 2010, at 3:40 PM, Greg Stark wrote:
> On Sun, Nov 14, 2010 at 8:52 PM, Josh Berkus <josh@agliodbs.com> wrote:
>> For example, imagine if the hint bits were moved to a separate per-table
>> bitmap outside the table instead of being stored with each row, as the
>> current FSM is.
>
> How many times do we have to keep going around the same block?
>
> We *already* have separate bitmap outside the table for transaction
> commit bits. It's the clog.
>
> The only reason the hint bits exist is to cache that so we don't need
> to do extra I/O to check tuple visibility. If the hint bits are moved
> outside the table then they serve no purpose whatsoever. Then you have
> an additional I/O to attempt to save an additional I/O.

Are you sure hint bits are only for IO savings? Calculating visibility
from CLOG involves a hell of a lot more CPU than checking a hint bit.

It would be extremely interesting if the CPU overhead wasn't very
noticeable, however. That would mean we *only* have to worry about CLOG
IO, and there's probably a lot of ways around that (memory-mapping CLOG
is one possibility), especially considering that 4G isn't exactly a
large amount of memory these days.
--
Jim C. Nasby, Database Architect                  jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net