Even though a *global* ``statement_timeout = 61s`` was configured, backends accessing the same table were hanging on ``LWLock AioUringCompletion``.
That ``statement_timeout`` neither interrupted the query nor raised an error looks odd, and that part could be a Postgres bug in itself.
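For reference, this is the behavior I'd normally expect (a minimal sketch, using ``pg_sleep`` as a stand-in for the real query):

    -- With statement_timeout in effect, a query exceeding it should be
    -- interrupted by the timeout handler and fail, rather than hang:
    SET statement_timeout = '61s';
    SELECT pg_sleep(120);
    -- expected: ERROR:  canceling statement due to statement timeout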
Restarting the cluster did not complete until the hanging leader PID was ``SIGKILL``ed.
Am I understanding this correctly as "a normal shutdown (SIGTERM or ``pg_ctl stop``) did not complete, and the postmaster remained waiting until the process stuck on ``AioIoUringExecution`` was force-killed"?
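If this happens again (or can be reproduced), a snapshot of ``pg_stat_activity`` taken while the backends are stuck would be very helpful; something along these lines, where the wait-event filter is just my guess at what is relevant:

    -- Capture the stuck leader, any parallel workers, and their wait events
    SELECT pid, leader_pid, backend_type, state,
           wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE wait_event IN ('AioUringCompletion', 'AioIoUringExecution')
       OR leader_pid IS NOT NULL;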
I’m interested in digging into this and am wondering about the following:
1. What filesystem and storage was this instance running on?
2. Was this a parallel sequential scan, and was any index access involved?
3. By any chance, do you have a reproducible test case?
4. Can you share what ``shared_preload_libraries`` you are using? (see the snippet below)
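For 4. (and as a sanity check on the AIO configuration), something like this from psql would do; I'm assuming a PostgreSQL 18 build here, since that is where ``io_method`` was introduced:

    SELECT version();
    SHOW shared_preload_libraries;
    SHOW io_method;   -- assuming PG 18: 'io_uring' vs 'worker' vs 'sync'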