If the long-running transaction is "read committed", then we know that any new query it issues
(even one on the same table1 that is being vacuumed) will take its snapshot at the time the query
starts, not at the time the transaction starts (but then why does a read committed transaction
querying table2 cause vacuum on table1 to skip dead rows?).
Hence, if a vacuum on table1 sees that all the transactions in the database are "read committed"
and none of them is accessing table1, vacuum should be able to clear the dead rows.
For read committed transactions, queries on different tables should not interfere with each other.
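As a rough illustration of what I am observing (table1/table2, the id column
and the sleep duration are just placeholders):

    -- session 1: long running read committed query touching only table2
    BEGIN;                                       -- default READ COMMITTED
    SELECT count(*), pg_sleep(600) FROM table2;  -- holds a snapshot while it runs

    -- session 2: create dead rows in table1 and try to vacuum them away
    DELETE FROM table1 WHERE id < 1000;
    VACUUM VERBOSE table1;
    -- the deleted rows are reported as dead but not yet removable (exact
    -- wording depends on the PostgreSQL version) until session 1's query
    -- finishes and its snapshot goes away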
Virender Singla <virender.cse@gmail.com> writes:
> Currently I see that vacuum on a table skips dead rows even when the only
> long-running query is on a different table, executing in another read
> committed transaction.
> The vacuum in the first transaction skips the dead rows until the long
> running query finishes.
> Why is that the case? A long-running query on the same table blocking
> vacuum is understandable, but why does a query on a different table
> block it?
Probably because vacuum's is-this-row-dead-to-everyone tests are based
on the global xmin minimum. This must be so, because even if the
long-running transaction hasn't touched the table being vacuumed,
we don't know that it won't do so in future. So we can't remove
rows that it should be able to see if it were to look.
regards, tom lane
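One way to observe that global horizon (reusing the hypothetical
table1/table2 setup from above) is to look at backend_xmin in
pg_stat_activity while the long running query is still executing:

    SELECT pid, state, backend_xmin, left(query, 40) AS query
    FROM pg_stat_activity
    WHERE backend_xmin IS NOT NULL
    ORDER BY age(backend_xmin) DESC;
    -- the session running the query on table2 typically shows the oldest
    -- backend_xmin; vacuum cannot remove a dead row in table1 (or any other
    -- table) until the transaction that deleted it is older than that xmin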