Thread: Transaction Size in PostgreSQL
Hello Experts,
I need a query or a way to find the transaction size in PostgreSQL. Can anyone provide SQL or a method to figure this out?
On 12/6/21 22:26, Sivasamy Subramaniam wrote:
> Hello Experts,
>
> I need a query or a way to find the transaction size in PostgreSQL.
> Can anyone provide SQL or a method to figure this out?

How do you define transaction size? By the number of bytes it changed? By the duration of the transaction? By the amount of generated WAL entries? Maybe it's the amount of memory consumed by the transaction? All of these quantities could, with a reasonable justification, be called "transaction size".

Anyway, the old adage says that size doesn't matter. It's not the size of the transaction that matters, regardless of how the size is defined, it's the magic in the optimizer. I am quoting from memory, I apologize if I got my quotes wrong.

Why would you need such a measure? Are you trying to do some capacity planning for moving to the cloud? I have usually used things like "transactions per second" and measured the system response time.

You may want to try pg_stat_xact_user_tables, which contains the numbers of updated, inserted and fetched rows. That could also be a measure of the transaction size.

Probably the most useful way would be to build a measure into the application and let the application measure whatever you define as "transaction size".

Regards

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
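A minimal sketch of what the pg_stat_xact_user_tables suggestion could look like in practice, assuming it is run inside the transaction you want to measure (the counters in that view cover only the current transaction, and they count rows, not bytes):

    -- Run inside the open transaction: per-table row counts changed so far
    -- by this transaction only.
    SELECT relname,
           n_tup_ins,
           n_tup_upd,
           n_tup_del
    FROM   pg_stat_xact_user_tables
    WHERE  n_tup_ins + n_tup_upd + n_tup_del > 0
    ORDER  BY n_tup_ins + n_tup_upd + n_tup_del DESC;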
On Monday, December 6, 2021, Sivasamy Subramaniam <sivasamyinfo@gmail.com> wrote:
> I need a query or a way to find the transaction size in PostgreSQL. Can anyone provide SQL or a method to figure this out?
That doesn't exist, nor is it even particularly well defined, as far as I know. You may want to phrase your question using more detail. Though I suspect your best avenue would be WAL decoding (but that's well outside my experience).
David J.
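Not WAL decoding as such, but a rough psql-level way to gauge how much WAL a unit of work generates: record the WAL insert position before and after (function names are as in PostgreSQL 10+). On a busy server other sessions' WAL is interleaved, so treat the result as an approximation, only a sketch:

    BEGIN;
    -- remember the WAL insert position before doing the work
    SELECT pg_current_wal_insert_lsn() AS lsn_before \gset
    -- ... the statements whose WAL volume you want to gauge ...
    -- bytes of WAL written since the starting point (approximate)
    SELECT pg_wal_lsn_diff(pg_current_wal_insert_lsn(), :'lsn_before') AS approx_wal_bytes;
    COMMIT;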
Transaction size meaning, for example, any transaction over 1GB in size. I am looking to set up an alert if any massive data changes are happening in a single transaction. I can do it for long-running transactions or queries, but I am trying to figure out an easy way to calculate the amount of data changed per transaction.
On Mon, Dec 6, 2021 at 8:04 PM David G. Johnston <david.g.johnston@gmail.com> wrote:
> On Monday, December 6, 2021, Sivasamy Subramaniam <sivasamyinfo@gmail.com> wrote:
>> I need a query or a way to find the transaction size in PostgreSQL. Can anyone provide SQL or a method to figure this out?
>
> That doesn't exist, nor is it even particularly well defined, as far as I know. You may want to phrase your question using more detail. Though I suspect your best avenue would be WAL decoding (but that's well outside my experience).
>
> David J.
--
Thank you,
Siva.
Sivasamy Subramaniam wrote on 07.12.2021 at 06:13:
> Transaction size meaning, for example, any transaction over 1GB in
> size. I am looking to set up an alert if any massive data changes are
> happening in a single transaction. I can do it for long-running
> transactions or queries, but I am trying to figure out an easy way to
> calculate the amount of data changed per transaction.

A long-running transaction doesn't necessarily mean it wrote "1GB" of data, and a transaction that writes 1GB of data isn't necessarily a "long-running" transaction.

"Massive data changes" in a single transaction aren't really a problem. In my experience a single transaction changing a lot of rows is more efficient than multiple transactions changing fewer rows.

A long _running_ transaction - even if it only changed one row - is a much bigger problem though.
On Dec 7, 2021, at 1:41 AM, Thomas Kellerer <shammat@gmx.net> wrote:
> "Massive data changes" in a single transaction aren't really a problem.
> In my experience a single transaction changing a lot of rows is more
> efficient than multiple transactions changing fewer rows.
That might work on an idle system, but I would have to disagree with that statement. In theory it may be more efficient, but in practice large, long-running transactions cause replication lag issues as well as vacuuming issues by holding on to an older xmin value. Basically, they are bad for high-transaction-rate systems.
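For the long-running / xmin side of this, a hedged sketch of the kind of monitoring query that could drive such an alert (the 10-minute threshold is only an example):

    -- Sessions with an open transaction: how long it has been open and
    -- how far back its xmin is holding the vacuum horizon.
    SELECT pid,
           now() - xact_start AS xact_age,
           age(backend_xmin)  AS xmin_age,
           state,
           left(query, 60)    AS last_query
    FROM   pg_stat_activity
    WHERE  xact_start IS NOT NULL
      AND  now() - xact_start > interval '10 minutes'
    ORDER  BY xact_start;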
On Monday, December 6, 2021, Sivasamy Subramaniam <sivasamyinfo@gmail.com> wrote:
> Transaction size meaning, for example, any transaction over 1GB in size. I am looking to set up an alert if any massive data changes are happening in a single transaction. I can do it for long-running transactions or queries, but I am trying to figure out an easy way to calculate the amount of data changed per transaction.
So if you truncate a 4GB table, you want to know about it? I'm reasonably confident PostgreSQL does not have the tools needed to accomplish your goal 100%. You can possibly pick up some subset, though, if you really want…
David J.
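One example of the kind of "subset" that can be picked up: TRUNCATE never appears in the row-change statistics, but a statement-level trigger can at least record a table's size just before it is truncated. A hedged sketch with made-up names (EXECUTE FUNCTION needs PostgreSQL 11+; older versions use EXECUTE PROCEDURE):

    -- Hypothetical helper: warn with the table's size right before TRUNCATE.
    CREATE OR REPLACE FUNCTION log_truncate_size()
    RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE WARNING 'TRUNCATE of %.% (was % bytes)',
            TG_TABLE_SCHEMA, TG_TABLE_NAME,
            pg_total_relation_size(TG_RELID::regclass);
        RETURN NULL;  -- return value is ignored for statement-level triggers
    END;
    $$;

    -- Attach it to a table you care about (table name is an example):
    CREATE TRIGGER big_table_truncate_alert
        BEFORE TRUNCATE ON big_table
        FOR EACH STATEMENT
        EXECUTE FUNCTION log_truncate_size();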
On 12/7/21 00:13, Sivasamy Subramaniam wrote:
> Transaction size meaning, for example, any transaction over 1GB in
> size. I am looking to set up an alert if any massive data changes are
> happening in a single transaction. I can do it for long-running
> transactions or queries, but I am trying to figure out an easy way to
> calculate the amount of data changed per transaction.

You will have to build such a tool into the application. The application will have to monitor pg_stat_xact_user_tables and send an alert if the alert threshold is exceeded. That will make the application unnecessarily complex and will also impact performance. This falls into the "no can do" category.

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
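If one did want to build this into the application as described, a hedged sketch of a helper (all names are made up) that the application could call just before COMMIT; note it counts rows changed, not bytes:

    -- Hypothetical helper: warn when the current transaction has changed
    -- more rows than a given threshold.
    CREATE OR REPLACE FUNCTION check_xact_size(threshold bigint)
    RETURNS void
    LANGUAGE plpgsql AS $$
    DECLARE
        changed bigint;
    BEGIN
        SELECT coalesce(sum(n_tup_ins + n_tup_upd + n_tup_del), 0)
          INTO changed
          FROM pg_stat_xact_user_tables;

        IF changed > threshold THEN
            RAISE WARNING 'current transaction has changed % rows (threshold %)',
                changed, threshold;
        END IF;
    END;
    $$;

    -- Example call at the end of a transaction, before COMMIT:
    -- SELECT check_xact_size(1000000);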