Re: Confused by 'timing' results - Mailing list pgsql-admin

From A J
Subject Re: Confused by 'timing' results
Date
Msg-id 294404.58402.qm@web120007.mail.ne1.yahoo.com
In response to Re: Confused by 'timing' results  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses Re: Confused by 'timing' results  (Scott Marlowe <scott.marlowe@gmail.com>)
Re: Confused by 'timing' results  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-admin
Kevin,
The problem I am trying to solve is: accurately measure both the database server time and the network time when several clients connect to the database from different geographic locations.
All the clients hit the database simultaneously, each running a long script of insert/update/select queries.

I don't need aggregate numbers, but the two components of the time taken for each and every query, to drive analysis. So I need something of the form:
Query1, DB Time, Total Time (or Network Time)
....
Query<n>, DB Time, Total Time (or Network Time)
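
For the client-side half of that table, one minimal sketch (in Python; `run_query` here is a hypothetical stand-in for whatever driver call actually executes the statement, e.g. a cursor's execute) is to record wall-clock time around each execution:

```python
import time

def timed_call(run_query, query):
    """Execute one query via the supplied callable and return
    (query, elapsed_ms). elapsed_ms is the total round-trip time
    as seen by the client, i.e. DB time plus network time."""
    start = time.perf_counter()
    run_query(query)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return query, elapsed_ms

# Hypothetical usage: substitute a real database call for the lambda.
q, ms = timed_call(lambda sql: time.sleep(0.01), "SELECT 1")
```

Logging `(query, elapsed_ms)` per statement on each client, keyed by client location, gives the "Total Time" column; the "DB Time" column still has to come from the server side.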

From the query, I can tell which client in which geographic location fired it, and so have the full picture.

I initially thought a combination of time + timing would give me this. Now I know 'timing' includes part of the network time, so it is not just the database server time.
On a second try, logging to log_directory/log_filename by setting log_min_duration_statement=0 seems to do something weird. The durations in the file are very high and cannot be true. My theory is that, with a single file being written to by several concurrent queries, the backends might all be queuing up, causing the times registered in the log file to be way too high. (I might be wrong.)
I could not believe the times being registered in the log file (several hundred ms, as opposed to the expected few tens of ms), so I ran the test several times over a couple of days, still getting the same high numbers.
On setting log_min_duration_statement=-1, performance returns to the normal, acceptable level (but then I cannot measure the db time).
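
For reference, the server-side statement logging described above is controlled by settings along these lines in postgresql.conf (the values shown are illustrative, not a recommendation):

```ini
# Log every statement's server-side duration (0 = all statements;
# -1 disables; a positive value logs only statements that take at
# least that many milliseconds).
log_min_duration_statement = 0

# Write to a per-server log file via the logging collector.
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d.log'
```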

So really what I want to measure is the database time for several queries issued by several concurrent users. Because each query takes only a few ms, any measurement overhead has to be introduced carefully so as not to skew the result.
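
Once per-statement durations are in the server log, the network component can be approximated by subtracting the logged server duration from the client-measured round trip. A rough sketch of parsing the standard `duration: ... ms` log lines (the regex assumes the statement text follows on the same line, as with a simple log_line_prefix):

```python
import re

# Matches lines like:  LOG:  duration: 0.445 ms  statement: SELECT 1;
DURATION_RE = re.compile(r"duration: ([0-9.]+) ms\s+statement: (.*)")

def parse_duration(log_line):
    """Return (statement, server_ms) from a Postgres log line,
    or None if the line is not a duration entry."""
    m = DURATION_RE.search(log_line)
    if m is None:
        return None
    return m.group(2), float(m.group(1))

def network_ms(total_ms, server_ms):
    """Client round-trip minus server time approximates the network
    (plus client-driver) component for one query."""
    return total_ms - server_ms

stmt, server = parse_duration(
    "LOG:  duration: 0.445 ms  statement: SELECT 1;")
```

Matching log entries back to client-side measurements by statement text (or a per-query tag embedded as a comment) would yield the Query / DB Time / Network Time rows described above.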

Looking for suggestions to solve this.

Thank you, AJ




From: Kevin Grittner <Kevin.Grittner@wicourts.gov>
To: Scott Marlowe <scott.marlowe@gmail.com>; A J <s5aly@yahoo.com>
Cc: pgsql-admin@postgresql.org
Sent: Thu, September 2, 2010 12:48:58 PM
Subject: Re: [ADMIN] Confused by 'timing' results

A J <s5aly@yahoo.com> wrote:
> I am conducting the test with several concurrent clients.

I didn't see a question in your latest email.  Do you now understand
why the network affects timings?  Do you have any other questions?
Is there some particular problem you're trying to solve, for which
you don't yet have a solution?  (If so, please describe the problem
you're trying to solve; someone might be able to suggest a solution
you won't get to by asking narrower questions.)

-Kevin
