> On Fri, Nov 01, 2019 at 03:15:33PM +0600, Alexey Ermakov wrote:
>
> I reproduced Sergei's test case on postgresql 11.5, replica hung up almost
> immediately after pgbench ran.
>
> 9907 - pid of startup process, 16387 - oid of test table, 2619 - oid of
> pg_statistic, 2840 - oid of toast table of pg_statistic.
>
> 1) pgbench on replica with one concurrent process (-c 1):
>
> this case looks a bit different from what happened in the initial report (which
> recently happened again, btw) because this time I can't even open a new
> connection via psql or run a query against pg_stat_activity - it hangs (a
> pg_locks query works).
> perhaps that's because this time the access exclusive lock is on the
> pg_statistic table too, not only on its toast table.
Interesting. I've tried the test case from the previous email on the master
branch, and it looks like I've got something similar, with similar stack
traces. After a short investigation it looks pretty strange: backend 12682
is waiting on a lock taken by 12584 (the startup process):
[12682] LOG: process 12682 still waiting for AccessShareLock on relation 2619 of database 16384 after 1000.038 ms
[12682] DETAIL: Process holding the lock: 12584. Wait queue: 12689, 12674, 12671, 12682, 12683, 12677, 12680,
12676, 12686, 12670, 12678, 12688, 12679, 12681, 12684, 12685, 12687.
[12682] STATEMENT: select * from tablename where i = 95;
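As a side note, for anyone reproducing this: since pg_locks still answers
(unlike pg_stat_activity in Alexey's case), a query along these lines, with
the relation oid taken from the log above (2619, i.e. pg_statistic; just a
sketch, adjust the oid as needed), can confirm that the conflicting lock is
indeed held by the startup process:

    select locktype, relation::regclass, pid, mode, granted
      from pg_locks
     where relation = 2619
     order by granted desc, pid;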
And if I understand correctly, the startup process is waiting inside
ResolveRecoveryConflictWithVirtualXIDs with a waitlist containing
backendId = 14:
>>> p *waitlist
$3 = {
backendId = 14,
localTransactionId = 218
}
>>> p allProcs[pgprocnos]
...
lxid = 218,
pid = 12682,
pgprocno = 87,
backendId = 14,
databaseId = 16384,
...
So it's the same backend, 12682, although I'm not sure yet why.
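For completeness, the vxid from the gdb output (backendId = 14,
localTransactionId = 218) corresponds to the value '14/218' in the
virtualtransaction column of pg_locks, so from a session that can still run
queries, something like this (a sketch; the '14/218' value is of course
specific to this run) shows which backend it belongs to, and here it should
again point at pid 12682, matching the gdb output:

    select pid, virtualtransaction, locktype, relation::regclass, mode, granted
      from pg_locks
     where virtualtransaction = '14/218';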