Thread: MultiXacts & WAL
I am working on a possible extension of postgresql MVCC to support very
timely failure masking in the context of three-tier applications, so I am
currently studying postgresql internals...

I am wondering what the reasons are why both the MultiXactIds and the
corresponding OFFSETs and MEMBERs are currently persisted. In the
documentation at the top of multixact.c you can find the following
statement:

"...This allows us to completely rebuild the data entered since the last
checkpoint during XLOG replay..."

I can see the need to persist (not eagerly) multixactids to avoid
wraparounds. Essentially, mass storage is used to extend the limited
capacity of the SLRU data structures in shared memory.

The point I am missing is the need to be able to completely recover
multixact offsets and members data. These carry information about current
transactions holding shared locks on db tuples, which should not be
essential for recovery purposes. After a crash you want to recover the
content of your data, not the presence of shared locks on any tuple.
AFAICS, this seems true for both committed/aborted transactions (which,
being concluded, do not care any more about the fact that they could have
held any shared lock), as well as prepared transactions (which only need
to recover their exclusive locks).

I have tried to dig around the comments within the main multixact.c
functions and I have come across this one (in CreateMultiXactId()):

"...The only way for the MXID to be referenced from any data page is for
heap_lock_tuple() to have put it there, and heap_lock_tuple() generates
an XLOG record that must follow ours..."

But still I cannot see the need to recover complete shared lock info
(i.e. not only multixactids but also the corresponding registered
transaction ids that were holding the lock)...

May be this is needed to support savepoints/subtransactions? Or is it
something else that I am missing?

Thanks for your precious help!

Paolo
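[For readers unfamiliar with the structures being asked about, here is a
toy Python sketch of the offsets/members relationship: a MultiXactId
indexes into an "offsets" array, which in turn points into a flat
"members" array of transaction ids. All class and method names are
invented for illustration; the real implementation lives in multixact.c
and stores these arrays in SLRU page buffers, not Python lists.]

```python
# Toy model (plain Python, not PostgreSQL source) of how a MultiXactId
# maps to its member transaction ids via two append-only arrays,
# mirroring the roles of the pg_multixact "offsets" and "members" areas.

class MultiXactStore:
    def __init__(self):
        self.offsets = []   # offsets[mxid] = index of first member in self.members
        self.members = []   # flat array of member transaction ids

    def create(self, member_xids):
        """Allocate a new MultiXactId for the given list of locker xids."""
        mxid = len(self.offsets)
        self.offsets.append(len(self.members))
        self.members.extend(member_xids)
        return mxid

    def get_members(self, mxid):
        """Recover the member xids: slice from this offset to the next."""
        start = self.offsets[mxid]
        if mxid + 1 < len(self.offsets):
            end = self.offsets[mxid + 1]
        else:
            end = len(self.members)
        return self.members[start:end]

store = MultiXactStore()
m0 = store.create([100, 101])        # two xacts share-locking one tuple
m1 = store.create([102, 103, 104])   # three xacts on another tuple
assert store.get_members(m0) == [100, 101]
assert store.get_members(m1) == [102, 103, 104]
```

The question in the message above is precisely whether both arrays need
to be WAL-logged so that `get_members` still works after crash recovery.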
paolo romano <paolo.romano@yahoo.it> writes:
> The point i am missing is the need to be able to completely recover
> multixacts offsets and members data. These carry information about
> current transactions holding shared locks on db tuples, which should
> not be essential for recovery purposes.

This might be optimizable if we want to assume that multixacts will never
be used for any purpose except holding locks, but that seems a bit short
sighted. Is there any actually significant advantage to not logging this
information?

			regards, tom lane
On Sat, 17 Jun 2006, paolo romano wrote:
> May be this is needed to support savepoints/subtransactions? Or is it
> something else that i am missing?

It's for two-phase commit. A prepared transaction can hold locks that
need to be recovered.

- Heikki
> > May be this is needed to support savepoints/subtransactions? Or is it
> > something else that i am missing?
>
> It's for two-phase commit. A prepared transaction can hold locks that
> need to be recovered.

When a transaction (successfully) enters the prepared state it only
retains its exclusive locks and releases any shared locks (i.e.
multixacts)... or, at least, that's how it should be in principle
according to serialization theory; I haven't yet checked whether this is
what is done in postgresql.

Paolo
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> This might be optimizable if we want to assume that multixacts will
> never be used for any purpose except holding locks, but that seems a
> bit short sighted. Is there any actually significant advantage to not
> logging this information?

I can see two main advantages:

* Reduced I/O activity during transaction processing: current workloads
are typically dominated by reads (rather than updates)... and reads give
rise to multixacts (if there are at least two transactions reading the
same page, or if an explicit lock request is performed through
heap_lock_tuple()). And (long) transactions can read a lot of tuples,
which directly translates into (long) multixact logging sooner or later.
To accurately estimate the possible performance gain one should do some
profiling, but at first glance ISTM that there is good potential.

* Reduced recovery time: shorter logs & fewer data structures to
rebuild... and reducing recovery time helps improve system availability,
so it should not be overlooked.

Regards,

Paolo
On Sat, 17 Jun 2006, paolo romano wrote:
> When a transaction enters (successfully) the prepared state it only
> retains its exclusive locks and releases any shared locks (i.e.
> multixacts)... or, at least, that's how it should be in principle
> according to serialization theory, i haven't yet checked out if this
> is what is done in postgresql.

In PostgreSQL, shared locks are not taken when just reading data. They're
used to enforce foreign key constraints. When inserting a row to a table
with a foreign key, the row in the parent table is locked to keep another
transaction from deleting it. It's not safe to release the lock before
end of transaction.

- Heikki
On Sat, 17 Jun 2006, paolo romano wrote:
> * Reduced I/O Activity: during transaction processing: current
> workloads are typically dominated by reads (rather than updates)...
> and reads give rise to multixacts (if there are at least two
> transactions reading the same page or if an explicit lock request is
> performed through heap_lock_tuple). And (long) transactions can read a
> lot of tuples, which directly translates into (long) multixact logging
> sooner or later.

Read-only transactions don't acquire shared locks. And updating
transactions emit WAL records anyway; the additional I/O caused by
multixact records is negligible. Also, multixacts are only used when two
transactions hold a shared lock on the same row.

> * Reduced Recovery Time: because of shorter logs & less data
> structures to rebuild... and reducing recovery time helps improving
> system availability so should not be overlooked.

I doubt the multixact stuff makes much difference compared to all other
WAL traffic. In fact, logging the multixact stuff could be skipped when
no two-phase transactions are involved. The problem is, you don't know if
a transaction is one phase or two phase before you see COMMIT or PREPARE
TRANSACTION.

- Heikki
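[The rule Heikki states — a multixact is allocated only when a *second*
transaction share-locks a row whose xmax already holds a single locker's
xid — can be sketched as a toy Python model. Names (`HeapTuple`,
`lock_tuple_shared`) are invented for illustration and simplify away
infomask bits and multixact immutability details from the real heapam.c
logic.]

```python
# Toy sketch (invented names, not PostgreSQL source): a tuple's xmax holds
# a single locker's xid directly; a multixact is created only when a
# second transaction share-locks the same row.
from itertools import count

mxid_counter = count(1)
multixacts = {}          # mxid -> frozen set of member xids

class HeapTuple:
    def __init__(self):
        self.xmax = None  # None, a single xid, or ('mxid', n)

def lock_tuple_shared(tup, xid):
    if tup.xmax is None:
        # Common case: first locker, no multixact needed at all.
        tup.xmax = xid
    elif isinstance(tup.xmax, tuple):
        # Already a multixact: allocate a new one with the extra member
        # (real multixacts are immutable, so extension means reallocation).
        new_mxid = next(mxid_counter)
        multixacts[new_mxid] = multixacts[tup.xmax[1]] | {xid}
        tup.xmax = ('mxid', new_mxid)
    else:
        # Second locker arrives: only now is a multixact created.
        new_mxid = next(mxid_counter)
        multixacts[new_mxid] = {tup.xmax, xid}
        tup.xmax = ('mxid', new_mxid)

t = HeapTuple()
lock_tuple_shared(t, 100)
assert t.xmax == 100                          # single locker: no multixact
lock_tuple_shared(t, 101)
assert multixacts[t.xmax[1]] == {100, 101}    # multixact only at this point
```

This is why the multixact code path is expected to be exercised rarely:
the single-locker case never touches it.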
Heikki Linnakangas <hlinnaka@iki.fi> writes:
> Also, multixacts are only used when two transactions hold a shared lock
> on the same row.

Yeah, it's difficult to believe that multixact stuff could form a
noticeable fraction of the total WAL load, except perhaps under really
pathological circumstances, because the code just isn't supposed to be
exercised often. So I don't think this is worth pursuing. Paolo's free to
try to prove the opposite of course ... but I'd want to see numbers not
speculation.

			regards, tom lane
> In PostgreSQL, shared locks are not taken when just reading data.
> They're used to enforce foreign key constraints. When inserting a row
> to a table with a foreign key, the row in the parent table is locked to
> keep another transaction from deleting it. It's not safe to release the
> lock before end of transaction.

Releasing shared locks (whether used for plain reading or enforcing
foreign keys) before transaction end would be clearly wrong.
The original point I was raising is whether there is any concrete reason
(which I still can't see) to require multixact recoverability (by means
of logging).
Concerning the prepared state of two-phase commit, as I pointed out in my
previous post, shared locks can safely be released once a transaction
gets precommitted, hence they do not have to be made durable.

Paolo
> Yeah, it's difficult to believe that multixact stuff could form a
> noticeable fraction of the total WAL load, except perhaps under really
> pathological circumstances, because the code just isn't supposed to be
> exercised often. So I don't think this is worth pursuing. Paolo's free
> to try to prove the opposite of course ... but I'd want to see numbers
> not speculation.

Tom is right, mine are indeed just plain speculations, motivated by my
original doubt about whether there were hidden reasons for requiring
multixact recoverability.
I don't know if I'll find the time to do some performance tests, at least
in the short term, but I've enjoyed exchanging views with you all, so
thanks a lot for your feedback!

Just out of curiosity, what kind of benchmarks would you use to evaluate
this effect? I am quite familiar with TPC-C and TPC-W, but I am a newbie
in the postgresql community, so I was wondering if you were using any
reference benchmark...

Paolo
On Sat, 17 Jun 2006, paolo romano wrote:
> The original point I was moving is if there were any concrete reason
> (which still I can't see) to require Multixacts recoverability (by
> means of logging).
> Concerning the prepare state of two phase commit, as I was pointing out
> in my previous post, shared locks can safely be released once a
> transaction gets precommitted, hence they do not have to be made
> durable.

No, it's not safe to release them until 2nd phase commit.

Imagine table foo and table bar. Table bar has a foreign key reference to
foo.

1. Transaction A inserts a row to bar, referencing row R in foo. This
   acquires a shared lock on R.
2. Transaction A precommits, releasing the lock.
3. Transaction B deletes R. The new row inserted by A is not visible to
   B, so the delete succeeds.
4. Transaction A and B commit. Oops, the new row in bar references R that
   doesn't exist anymore.

Holding the lock until the true end of transaction, the 2nd phase of
commit, blocks B from deleting R.

- Heikki
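[Heikki's four-step scenario can be replayed as a tiny Python simulation.
This is not PostgreSQL code — the data structures and the `run` helper
are invented for illustration — but it makes the anomaly mechanical:
dropping the shared lock at PREPARE lets B's delete through, and the
committed child row ends up referencing a vanished parent.]

```python
# Toy simulation of the 2PC / foreign-key anomaly: transaction A inserts
# into bar referencing row R in foo; transaction B tries to delete R.
# The only variable is whether A's shared lock on R survives PREPARE.

def run(release_lock_at_prepare):
    parent = {"R"}                  # rows in foo
    child = []                      # rows in bar: (inserter, referenced row)
    shared_locks = {"R": {"A"}}     # A's insert share-locked R

    # Step 2: A precommits (PREPARE TRANSACTION).
    if release_lock_at_prepare:
        shared_locks["R"].discard("A")
    child.append(("A", "R"))        # A's insert, visible once A commits

    # Step 3: B deletes R; the delete is blocked iff a lock is still held.
    if not shared_locks["R"]:
        parent.discard("R")

    # Step 4: both commit. Does referential integrity still hold?
    return all(ref in parent for _, ref in child)

assert run(release_lock_at_prepare=True) is False   # dangling reference
assert run(release_lock_at_prepare=False) is True   # lock held to commit
```

So the shared lock must be durable across a crash between PREPARE and
COMMIT, which is exactly why the multixact state has to be recoverable.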
paolo romano <paolo.romano@yahoo.it> writes:
> Concerning the prepare state of two phase commit, as I was pointing out
> in my previous post, shared locks can safely be released once a
> transaction gets precommitted, hence they do not have to be made
> durable.

The above statement is plainly wrong. It would for example allow
violation of FK constraints.

			regards, tom lane
Tom, Paolo,

> Yeah, it's difficult to believe that multixact stuff could form a
> noticeable fraction of the total WAL load, except perhaps under really
> pathological circumstances, because the code just isn't supposed to be
> exercised often. So I don't think this is worth pursuing. Paolo's free
> to try to prove the opposite of course ... but I'd want to see numbers
> not speculation.

I would like to see some checking of this, though. Currently I'm doing
testing of PostgreSQL under very large numbers of connections (2000+) and
am finding that there's a huge volume of xlog output ... far more than
comparable RDBMSes. So I think we are logging stuff we don't really have
to.

--
Josh Berkus
PostgreSQL @ Sun
San Francisco
Josh Berkus <josh@agliodbs.com> writes:
> I would like to see some checking of this, though. Currently I'm doing
> testing of PostgreSQL under very large numbers of connections (2000+)
> and am finding that there's a huge volume of xlog output ... far more
> than comparable RDBMSes. So I think we are logging stuff we don't
> really have to.

Please dump some of the WAL segments with xlogdump so we can get a
feeling for what's in there.

			regards, tom lane
Tom,

> Please dump some of the WAL segments with xlogdump so we can get a
> feeling for what's in there.

OK, will do on Monday's test run. Is it possible for me to run this at
the end of the test run, or do I need to freeze it in the middle to get
useful data?

Also, we're toying with the idea of testing full_page_writes=off for
Solaris. The Solaris engineers claim that it should be safe on Sol10 +
Sun hardware. I'm not entirely sure that's true; is there a destruction
test of the bug that caused us to remove that option?

--
Josh Berkus
PostgreSQL @ Sun
San Francisco
Josh Berkus <josh@agliodbs.com> writes:
>> Please dump some of the WAL segments with xlogdump so we can get a
>> feeling for what's in there.

> OK, will do on Monday's test run. Is it possible for me to run this at
> the end of the test run, or do I need to freeze it in the middle to get
> useful data?

I'd just copy off a random sample of WAL segment files while the run is
proceeding. You don't need very many, half a dozen at most.

> Also, we're toying with the idea of testing full_page_writes=off for
> Solaris. The Solaris engineers claim that it should be safe on Sol10 +
> Sun hardware. I'm not entirely sure that's true; is there a destruction
> test of the bug that caused us to remove that option?

The bug that made us turn it off in the 8.1 branch had nothing to do with
hardware reliability or the lack thereof. As for testing, will they let
you yank the power cord?

			regards, tom lane
> No, it's not safe to release them until 2nd phase commit.
>
> Imagine table foo and table bar. Table bar has a foreign key reference
> to foo.
>
> 1. Transaction A inserts a row to bar, referencing row R in foo. This
>    acquires a shared lock on R.
> 2. Transaction A precommits, releasing the lock.
> 3. Transaction B deletes R. The new row inserted by A is not visible to
>    B, so the delete succeeds.
> 4. Transaction A and B commit. Oops, the new row in bar references R
>    that doesn't exist anymore.
>
> Holding the lock until the true end of transaction, the 2nd phase of
> commit, blocks B from deleting R.
>
> - Heikki

Heikki, thanks for the clarifications. I was not considering the
additional issues arising in case of referential integrity constraints...
in fact I was citing a known result from theory books on 2PC, which did
not include FKs in their speculations... But, as usual, in theory things
always look much simpler than in practice!

Anyway, again in theory, if one wanted to minimize logging overhead for
shared locks, one might adopt a different treatment for (i) regular
shared locks (i.e. locks due to plain reads, not requiring durability in
case of 2PC) and (ii) shared locks held because some SQL command is
referencing a tuple via a FK, which have to be persisted until the 2nd
2PC phase. (There is no other scenario in which you *must* persist shared
locks, is there?)

Of course, in practice distinguishing the two situations above may not be
so simple, and it still has to be shown whether such an optimization is
really worthwhile...
By the way, postgresql is logging *every* single shared lock in detail,
even though this is actually needed only if (i) the transaction turns out
to be a distributed one (i.e. prepare is issued on that transaction), AND
(ii) the shared lock is needed to ensure validity of a FK. AFAICS, in
most practical workloads (i) local transactions dominate distributed ones
and (ii) shared locks due to plain reads dominate locks due to FKs, so
the current implementation does not seem to be optimizing the most
frequent scenario.

regards,

paolo
paolo romano <paolo.romano@yahoo.it> writes:
> Anyway, again in theory, if one wanted to minimize logging overhead for
> shared locks, one might adopt a different treatment for (i) regular
> shared locks (i.e. locks due to plain reads not requiring durability in
> case of 2PC) and (ii) shared locks held because some SQL command is
> referencing a tuple via a FK, which have to be persisted until the 2nd
> 2PC phase (There is no other scenario in which you *must* persist
> shared locks, is there?)

I can't see any basis at all for asserting that you don't need to persist
particular types of locks. In the current system, a multixact lock might
arise from either FK locking, or a user-issued SELECT FOR SHARE. In
either case it's possible that the lock was taken to guarantee the
integrity of a data change made somewhere else. So we can't release it
before commit.

			regards, tom lane
On Sun, 18 Jun 2006, paolo romano wrote:
> Anyway, again in theory, if one wanted to minimize logging overhead for
> shared locks, one might adopt a different treatment for (i) regular
> shared locks (i.e. locks due to plain reads not requiring durability in
> case of 2PC) and (ii) shared locks held because some SQL command is
> referencing a tuple via a FK, which have to be persisted until the 2nd
> 2PC phase (There is no other scenario in which you *must* persist
> shared locks, is there?)

There are no "regular shared locks" in postgres in that sense. Shared
locks are only used for maintaining FK integrity, or by manually issuing
a SELECT FOR SHARE, but that's also for maintaining integrity. MVCC rules
take care of the "plain reads". If you're not familiar with MVCC, it's
explained in chapter 12 of the manual.

The source code in heapam.c also mentions Point In Time Recovery as
requiring the locks to be logged, though I'm not sure why.

> By the way, postgresql is logging *every* single shared lock in detail,
> even though this is actually needed only if (i) the transaction turns
> out to be a distributed one (i.e. prepare is issued on that
> transaction), AND (ii) the shared lock is needed to ensure validity of
> a FK. AFAICS, in most practical workloads (i) local transactions
> dominate distributed ones and (ii) shared locks due to plain reads
> dominate locks due to FKs, so the current implementation does not seem
> to be optimizing the most frequent scenario.

The problem is that we don't know beforehand whether a transaction is a
distributed one or not.

Feel free to write a benchmark to see how much difference the logging
makes! If it's significant, I'm sure we can figure out ways to improve
it.

- Heikki
> There are no "regular shared locks" in postgres in that sense. Shared
> locks are only used for maintaining FK integrity, or by manually
> issuing a SELECT FOR SHARE, but that's also for maintaining integrity.
> MVCC rules take care of the "plain reads". If you're not familiar with
> MVCC, it's explained in chapter 12 of the manual.
>
> The source code in heapam.c also mentions Point In Time Recovery as
> requiring the locks to be logged, though I'm not sure why.

Thanks for your explanations, now I can see what was confusing me.

> The problem is that we don't know beforehand whether a transaction is a
> distributed one or not.
>
> Feel free to write a benchmark to see how much difference the logging
> makes! If it's significant, I'm sure we can figure out ways to improve
> it.

Now that I finally see that multixacts are due only to explicit shared
lock requests or to FKs, I tend to agree with Tom's original doubts about
the actual impact of the multixact-related logging activity. Of course,
in practice such an impact would vary from application to application, so
it may still make sense for some classes of workloads to avoid multixact
logging, assuming they contain no distributed transactions and that a
hack can be found to know beforehand whether a transaction is distributed
or not... BTW, if I manage to find some free time to do some performance
tests, I'll be sure to let you know!

Thanks again,

Paolo
> I would like to see some checking of this, though. Currently
> I'm doing testing of PostgreSQL under very large numbers of
> connections (2000+) and am finding that there's a huge volume
> of xlog output ... far more than comparable RDBMSes. So I think
> we are logging stuff we don't really have to.

I think you really have to lengthen the checkpoint interval to reduce WAL
overhead (20 min or so). Also, imho, you cannot compare only the log
size/activity, since other db's write part of what pg writes to WAL to
other areas (physical log, rollback segment, ...).

If we cannot afford lengthening the checkpoint interval because of too
heavy checkpoint load, we need to find ways to tune bgwriter, and not
reduce the checkpoint interval.

Andreas