Thread: Sequence moves forward when failover is triggered
Hey All,
We used the linked guide to set up streaming replication.
When testing the failover procedure, we noticed that when the new master comes up, some sequences have moved forward (by between 30 and 40). I see there's a "cache" option when creating the sequence but we're not using that.
Is this to be expected?
Thanks,
Andy
--
Andy Chambers
Software Engineer
(e) achambers@mcna.net
(t) 954-682-0573
Andy Chambers <achambers@mcna.net> writes:
> When testing the failover procedure, we noticed that when the new master
> comes up, some sequences have moved forward (by between 30 and 40). I see
> there's a "cache" option when creating the sequence but we're not using
> that.
> Is this to be expected?

Yes. This is an artifact of an optimization that reduces the number of WAL records generated by nextval() calls --- server processes will write WAL records that say they've consumed multiple sequence values ahead of where they actually have.

AFAICS this is not distinguishably different from the case where a transaction consumes that number of sequence values and then rolls back, so I don't see much wrong with that optimization.

regards, tom lane
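To see the bookkeeping Tom describes: on 9.x a sequence can be selected like a one-row table, and its log_cnt column shows how many values the latest WAL record has pre-claimed beyond last_value. A minimal sketch (sequence name hypothetical; the pre-log constant is commonly 32, which lines up with the 30-40 jump reported above):

    CREATE SEQUENCE checks_seq;

    SELECT nextval('checks_seq');               -- returns 1
    SELECT last_value, log_cnt FROM checks_seq; -- e.g. last_value = 1, log_cnt = 32

    -- After a crash or failover the sequence resumes from the value
    -- pre-claimed in WAL, i.e. roughly log_cnt values ahead.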
On Tue, Jul 10, 2012 at 6:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Andy Chambers <achambers@mcna.net> writes:
>> When testing the failover procedure, we noticed that when the new master
>> comes up, some sequences have moved forward (by between 30 and 40). I see
>> there's a "cache" option when creating the sequence but we're not using
>> that.
>> Is this to be expected?
>
> Yes. This is an artifact of an optimization that reduces the number of
> WAL records generated by nextval() calls --- server processes will write
> WAL records that say they've consumed multiple sequence values ahead of
> where they actually have.
>
> AFAICS this is not distinguishably different from the case where a
> transaction consumes that number of sequence values and then rolls back,
> so I don't see much wrong with that optimization.
OK Cool. Thanks for confirming.
I think I made a poor decision by having our application generate checkbook numbers on demand using sequences. I've since realized, for this and other reasons (like not being able to see what nextval() would return without actually moving the sequence forward), that it would probably be better to generate an entire checkbook's worth of numbers whenever the checks are physically received from the bank, and then have the app pull the next available check.
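Something like this is what I have in mind (a rough sketch; table and column names are hypothetical, and the numbers are just for illustration):

    -- one row per physical check received from the bank
    CREATE TABLE checkbook (
        check_no    integer PRIMARY KEY,
        check_used  boolean NOT NULL DEFAULT false
    );

    -- register a new book of checks, e.g. numbers 1001-1100
    INSERT INTO checkbook (check_no)
    SELECT generate_series(1001, 1100);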
Andy
On 07/11/2012 07:23 AM, Andy Chambers wrote:
> I think I made a poor decision by having our application generate checkbook numbers on demand using sequences.
Sure did. Sequences are exempt from most transactional rules; that's why they're fast and lock-free.
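A minimal demo of that exemption (sequence name hypothetical):

    CREATE SEQUENCE demo_seq;

    BEGIN;
    SELECT nextval('demo_seq');   -- returns 1
    ROLLBACK;

    SELECT nextval('demo_seq');   -- returns 2; the rollback did not give 1 back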
I'm surprised to find that the only mention of this is at the very bottom of this page:
http://www.postgresql.org/docs/9.1/static/functions-sequence.html
as it's an important property of sequences and one that shouldn't just be a footnote.
I'd use a real table for this job.
--
Craig Ringer
Craig Ringer wrote:
> On 07/11/2012 07:23 AM, Andy Chambers wrote:
>> I think I made a poor decision by having our application
>> generate checkbook numbers on demand using sequences.
>
> Sure did. Sequences are exempt from most transactional rules;
> that's why they're fast and lock-free.

This may be another case for the "gap-less" sequence (I hate that term ... please let's call it a "uniformly-increasing sequence").

http://archives.postgresql.org/pgsql-general/2006-08/msg00744.php
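The usual shape of that technique, sketched with hypothetical names: a single-row counter table whose row lock serializes the writers and whose increment rolls back with the enclosing transaction, so no gaps appear.

    CREATE TABLE check_counter (last_no integer NOT NULL);
    INSERT INTO check_counter VALUES (1000);

    -- run inside the same transaction that records the check;
    -- concurrent writers queue on the row lock, and a rollback
    -- also undoes the increment
    UPDATE check_counter
       SET last_no = last_no + 1
    RETURNING last_no;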
On 07/10/2012 04:23 PM, Andy Chambers wrote:
> On Tue, Jul 10, 2012 at 6:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Andy Chambers <achambers@mcna.net> writes:
>>> When testing the failover procedure, we noticed that when the new master
>>> comes up, some sequences have moved forward (by between 30 and 40). I see
>>> there's a "cache" option when creating the sequence but we're not using
>>> that.
>>> Is this to be expected?
>>
>> Yes. This is an artifact of an optimization that reduces the number of
>> WAL records generated by nextval() calls --- server processes will write
>> WAL records that say they've consumed multiple sequence values ahead of
>> where they actually have.
>>
>> AFAICS this is not distinguishably different from the case where a
>> transaction consumes that number of sequence values and then rolls back,
>> so I don't see much wrong with that optimization.
>
> OK Cool. Thanks for confirming.
>
> I think I made a poor decision by having our application generate checkbook numbers on demand using sequences. I've since realized, for this and other reasons (like not being able to see what nextval() would return without actually moving the sequence forward), that it would probably be better to generate an entire checkbook's worth of numbers whenever the checks are physically received from the bank, and then have the app pull the next available check.
>
> Andy
That approach does more accurately model a checkbook (predefined bunch of numbered checks) but will potentially lead to some other problems.
If multiple users are interacting with the database and a transaction rolls back, you could still have a hole in your "checkbook." Depending on your overall design, you would have to either "void" that check or re-use the now-blank check. Non-sequential consumption of checks is a common situation in real life as well, of course (spouses using different books of checks from the same account, keeping "emergency" checks in a purse/wallet, etc.), so it's best to plan for it.
The stickier issue is queuing. You have multiple users and need to ensure that you grab an unused check from the book, but each concurrent user needs to get a different check. "SELECT check_no FROM checkbook WHERE NOT check_used ORDER BY check_no LIMIT 1 FOR UPDATE" seems like a reasonable approach, but if two users run it simultaneously, the first gets a check and succeeds while the second blocks trying to lock the same check record; once the first user's transaction completes, the second re-checks the row, finds it no longer meets the check_used criterion, and sees zero records returned (spelled out below).
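For concreteness, under the default READ COMMITTED isolation (using the checkbook table from upthread):

    -- Session A
    BEGIN;
    SELECT check_no
      FROM checkbook
     WHERE NOT check_used
     ORDER BY check_no
     LIMIT 1
       FOR UPDATE;    -- returns, say, check 1001

    -- Session B runs the same query and blocks on 1001's row lock

    -- Session A
    UPDATE checkbook SET check_used = true WHERE check_no = 1001;
    COMMIT;

    -- Session B unblocks, re-evaluates the locked row, finds
    -- check_used = true, and returns zero rows even though
    -- 1002 and up are still free.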
This site has one approach for dealing with the queuing situation using advisory locks:
http://postgres.cz/wiki/PostgreSQL_SQL_Tricks#Taking_first_unlocked_row_from_table
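The gist of the trick, roughly sketched (see the linked page for the important caveats; in particular the planner may evaluate the lock function on rows it later discards):

    -- take the first unused check whose advisory lock is free;
    -- rows locked by other sessions are skipped rather than waited on
    SELECT check_no
      FROM checkbook
     WHERE NOT check_used
       AND pg_try_advisory_lock(check_no)
     ORDER BY check_no
     LIMIT 1;

    -- once the check is marked used and committed, release the lock:
    SELECT pg_advisory_unlock(1001);   -- pass the check_no you took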
Cheers,
Steve
On 07/12/2012 04:17 AM, Steve Crawford wrote:
> The stickier issue is queuing. You have multiple users and need to
> ensure that you grab an unused check from the book, but each concurrent
> user needs to get a different check. "SELECT check_no FROM checkbook
> WHERE NOT check_used ORDER BY check_no LIMIT 1 FOR UPDATE" seems like a
> reasonable approach, but if two users run it simultaneously, the first
> gets a check and succeeds while the second blocks trying to lock the
> same check record; once the first user's transaction completes, the
> second re-checks the row, finds it no longer meets the check_used
> criterion, and sees zero records returned.

In short-transaction systems where the blocking isn't an issue, it's often fine to just re-try when you don't get a result. Much simpler than advisory locking tricks.

Using a SERIALIZABLE transaction should also work AFAIK, causing a serialization failure and forcing the app to re-issue the transaction.

--
Craig Ringer
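A rough sketch of that SERIALIZABLE variant (same hypothetical checkbook table; the application re-runs the transaction whenever it fails with SQLSTATE 40001):

    BEGIN ISOLATION LEVEL SERIALIZABLE;

    -- claim the lowest-numbered unused check
    UPDATE checkbook
       SET check_used = true
     WHERE check_no = (SELECT min(check_no)
                         FROM checkbook
                        WHERE NOT check_used)
    RETURNING check_no;

    COMMIT;

    -- If two sessions race for the same check, one of them fails with
    -- a serialization failure (SQLSTATE 40001) and simply retries.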