On 2/3/15 4:57 PM, Vasudevan, Ramya wrote:
> -> Append (cost=0.00..1748.87 rows=17 width=8) (actual time=11.454..11.454 rows=1 loops=1847)
> Buffers: shared hit=1093445 read=538
> I/O Timings: read=21.060
> -> Seq Scan on emailsubscription.reg_email_subscriptions res (cost=0.00..1728.07 rows=1 width=8) (actual time=11.316..11.316 rows=0 loops=1847)
> Output: res.registration_id
> Filter: ((res.subscription_id = 200400) AND (it.recipient_id = res.registration_id))
> Rows Removed by Filter: 77271
> Buffers: shared hit=1050943
> -> Index Only Scan using reg_email_subscriptions_p00_pkey on emailsubscription.reg_email_subscriptions_p00 res_1 (cost=0.57..2.60 rows=2 width=8) (actual time=0.033..0.033 rows=0 loops=1847)
> Output: res_1.registration_id
> Index Cond: ((res_1.registration_id = it.recipient_id) AND (res_1.subscription_id = 200400))
> Heap Fetches: 0
> Buffers: shared hit=7415 read=65
> I/O Timings: read=2.802
...
Here's the part that's slow. The index scan on each partition is taking
~2.5ms, and is being repeated 1847 times *for each partition*.
What is the table partitioned on?
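(For reference, with the inheritance-based partitioning in use here, the partition key is encoded in each child table's CHECK constraint. A sketch of a catalog query to list them, using the parent table name from the plan above:)

```sql
-- List each child partition of reg_email_subscriptions together with
-- the CHECK constraint that defines its range/key.
SELECT c.relname AS partition,
       pg_get_expr(con.conbin, con.conrelid) AS partition_constraint
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
JOIN pg_constraint con ON con.conrelid = c.oid AND con.contype = 'c'
WHERE i.inhparent = 'emailsubscription.reg_email_subscriptions'::regclass;
```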
>> I don't believe that session_replication_role does anything to affect indexing, nor do the docs indicate that. I also see nothing indicating that in the source code (search for SessionReplicationRole).
>> So if you suddenly started seeing dupes then I think your index is corrupted.
> How can we tell if any index is corrupted or not?
> If vacuum full re-indexes all the indexes in the table, would that have fixed the corruption, if we had any?
Index corruption, yes.
>> It won't report it, but it would fix it. (Note that it would throw an error if the data in the table isn't actually unique.)
> We did not get any error during re-indexing. So our understanding that there are no more duplicates in the table is correct!
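To answer the detection question directly: on later PostgreSQL releases (10+), the contrib module amcheck can verify btree index structure without rebuilding the index; on any version, a plain SQL query against the supposed unique key will surface duplicates. Both are sketches below, and I'm assuming the unique key is (registration_id, subscription_id) based on the pkey index condition in the plan:

```sql
-- Sketch 1 (PostgreSQL 10+ only): structural check of one btree index.
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('emailsubscription.reg_email_subscriptions_p00_pkey');

-- Sketch 2 (any version): find rows that violate the assumed
-- unique key (registration_id, subscription_id).
SELECT registration_id, subscription_id, count(*)
FROM emailsubscription.reg_email_subscriptions
GROUP BY registration_id, subscription_id
HAVING count(*) > 1;
```

Sketch 2 is what VACUUM FULL's index rebuild effectively enforced for you: if it returns zero rows, the rebuilt unique index is consistent with the heap.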
That's good. Now the question is: why was the index corrupted? It's
*extremely* unlikely that it's Postgres. In my experience, corruption is
caused by bad hardware, or a misconfiguration (specifically, fsync not
doing what it's supposed to).
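(Worth a quick sanity check while you investigate: confirm the durability settings haven't been relaxed. These are standard GUCs; a minimal sketch:)

```sql
SHOW fsync;             -- should be "on"
SHOW full_page_writes;  -- should be "on"
SHOW wal_sync_method;   -- how flushes reach disk; verify the
                        -- storage stack actually honors it
```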
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com