OK, I've decided to 'painfully' look at the PostgreSQL RDS logs, and they show something like the output below.
There appears to be a locking/deadlock issue of some sort somewhere.
I have checked the logs from the days prior to the patching, and these lock waits appear to be a 'normal' occurrence for this database; they weren't affecting the application then.
After the patching, however, they started affecting the application. I'm not sure what else I can check on the Aurora PostgreSQL RDS end. I may ask them to restart the app server.
[25751]:LOG: process 25751 still waiting for ShareLock on transaction 114953443 after 1000.054 ms
[25751]:DETAIL: Process holding the lock: 22297. Wait queue: 25751.
[25751]:CONTEXT: while locking tuple (1,17) in relation "[table_name]"
[25751]:STATEMENT: [SQL_STATEMENT] for update
[25751]:LOG: process 25751 acquired ShareLock on transaction 114953443 after 4756.967 ms
[25751]:CONTEXT: while locking tuple (1,17) in relation "[table_name]"
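As a sketch of how to investigate this while it is happening (rather than after the fact from the logs), a query like the following can show which backend is blocking which. It assumes PostgreSQL 9.6 or later, which provides pg_blocking_pids(); the column aliases are illustrative only.

```sql
-- Sketch: for each blocked backend, show the session(s) holding the lock.
-- Assumes PostgreSQL 9.6+ (pg_blocking_pids is available on Aurora PostgreSQL).
SELECT blocked.pid           AS blocked_pid,
       blocked.query         AS blocked_query,
       blocking.pid          AS blocking_pid,
       blocking.state        AS blocking_state,
       blocking.query        AS blocking_last_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid))
WHERE cardinality(pg_blocking_pids(blocked.pid)) > 0;
```

In the log excerpt above, this would have shown PID 22297 as the blocking_pid for the waiting process 25751.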
Edwin UY <edwin.uy@gmail.com> writes:
> I thought it could be the backend has sent something back to the client but
> it never received it and it just kept on doing the same at some intervals.
Your pg_stat_activity output shows the backend is idle, meaning it's waiting for a client command. While the session has been around for days, your "now() - pg_stat_activity.query_start AS duration" column shows that the last client command was about 47 minutes ago. I see no reason to think there is anything interesting here at all, except for a client that is sitting doing nothing for long periods.
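The check described above can be reproduced with a query along these lines; it is a sketch, and the column aliases are illustrative only.

```sql
-- Sketch: list idle backends and how long since their last statement,
-- mirroring the "now() - pg_stat_activity.query_start AS duration" check.
SELECT pid,
       state,
       now() - query_start   AS time_since_last_query,
       now() - backend_start AS session_age,
       query                 AS last_statement
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY time_since_last_query DESC;
```

A long session_age with a modest time_since_last_query, as seen here, points at a client holding a pooled connection open rather than at anything wrong on the server side.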