From: Heikki Linnakangas
Subject: Re: Hot Standby, release candidate?
Msg-id: 4B260B5E.3010401@enterprisedb.com
In response to: Hot Standby, release candidate? (Simon Riggs <simon@2ndQuadrant.com>)
List: pgsql-hackers
Simon Riggs wrote:
> Enclose latest version of Hot Standby. This is the "basic" patch, with
> no must-fix issues and no known bugs. Further additions will follow
> after commit, primarily around usability, which will include additional
> control functions for use in testing. Various thoughts discussed here
> http://wiki.postgresql.org/wiki/Hot_Standby_TODO

I still consider it highly important to be able to start standby from a
shutdown checkpoint. If you remove it, at the very least put it back on
the TODO. But as it is, StandbyRecoverPreparedTransactions() is dead
code, and the changes to PrescanPreparedTransactions() are not needed
either.

> Patch now includes a set of regression tests that can be run against a
> standby server using "make standbycheck"

Nice!

> (requires setup, see src/test/regress/standby_schedule).

Any chance of automating that?

> Complete with full docs and comments.
>
> Barring resolving a few points and subject to even more testing, this is
> the version I expect to commit to CVS on Wednesday. I would appreciate
> further objective testing before commit, if possible.

* Please remove any spurious whitespace. "git diff --color" makes it
stand out like a sore thumb, in red. (pgindent will fix it, but it's
always better to fix it before committing, IMO.)

* vacuum_defer_cleanup_age is very hard to tune: you need an estimate of
your transaction rate to begin with. Do we really want this setting in
its current form? And does it make sense as PGC_USERSET, given that if
one backend uses a lower setting than the others, that's the one that
actually determines when transactions are killed in the standby? I think
this should be discussed and implemented as a separate patch.

* Are you planning to remove the recovery_connections setting before
release? The documentation makes it sound like a temporary hack that
we're not really sure is needed at all. That's not very comforting.
* You removed this comment from KnownAssignedXidsInit:

- /*
-  * XXX: We should check that we don't exceed maxKnownAssignedXids.
-  * Even though the hash table might hold a few more entries than that,
-  * we use fixed-size arrays of that size elsewhere and expected all
-  * entries in the hash table to fit.
-  */

but AFAICS you didn't address the issue. It's referring to the 'xids'
array in TransactionIdIsInProgress(), which KnownAssignedXidsGet() fills
in without checking that it fits.

* LockAcquireExtended needs a function comment, or at least something
explaining what report_memory_error does. And perhaps the argument should
be renamed "reportMemoryError" to be in line with the other arguments.

* We tend not to add remarks about authors in code (comments in
standby.c).

* This optimization in GetConflictingVirtualXIDs():

> +	/*
> +	 * If we don't know the TransactionId that created the conflict, set
> +	 * it to latestCompletedXid which is the latest possible value.
> +	 */
> +	if (!TransactionIdIsValid(limitXmin))
> +		limitXmin = ShmemVariableCache->latestCompletedXid;
> +

needs a lot more explanation. It took me a very long time to figure out
why using the latest completed xid is safe.

* Are you going to leave the trace_recovery GUC in?

* Can you merge with CVS HEAD, please? There are some merge conflicts.

> Last remaining points
>
> * NonTransactionalInvalidation logging has been removed following
> review, but AFAICS that means VACUUM FULL doesn't work correctly on
> catalog tables, which regrettably will be the only ones still standing
> even after we apply the VFI patch. Did I misunderstand the original
> intent? Was it just buggy somehow? Or is this hoping VF goes completely,
> which seems unlikely in this release. Just noticed this, decided better
> to get stuff out there now.

I removed it in the hope that VF is gone before beta.

> * Are we OK with using hash indexes in standby plans, even when we know
> the indexes are stale and could return incorrect results?
It doesn't seem any more wrong than using hash indexes right after
recovery, crash recovery or otherwise. It's certainly broken, but I don't
see much value in a partial fix. The bottom line is that hash indexes
should be WAL-logged.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com