Re: pg_stat_statements - Mailing list pgsql-hackers

From ITAGAKI Takahiro
Subject Re: pg_stat_statements
Date
Msg-id 20080616110358.7517.52131E4D@oss.ntt.co.jp
In response to Re: pg_stat_statements  (Robert Treat <xzilla@users.sourceforge.net>)
Responses Re: pg_stat_statements  ("Koichi Suzuki" <koichi.szk@gmail.com>)
Re: pg_stat_statements  (Robert Treat <xzilla@users.sourceforge.net>)
Re: pg_stat_statements  (ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp>)
List pgsql-hackers
Robert Treat <xzilla@users.sourceforge.net> wrote:

> On Friday 13 June 2008 12:58:22 Josh Berkus wrote:
> > I can see how this would be useful, but I can also see that it could be a
> > huge performance burden when activated.  So it couldn't be part of the
> > standard statistics collection.
> 
> A lower overhead way to get at this type of information is to quantize dtrace 
> results over a specific period of time.  Much nicer than doing the whole 
> logging/analyze piece.

DTrace is disabled by default in most installations and cannot be used on
some platforms (in particular Linux, where I want to use this feature).
DTrace is generally known as a tool for developers, not for DBAs, whereas
statement logging is something DBAs ask for, especially those who are used
to STATSPACK on Oracle.


I will try to measure the logging overhead of several implementations
(a rough sketch of 3 follows below):
1. Log statements and dump them into the server logs.
2. Log statements and filter them before they are written.
3. Store statements in shared memory.
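To illustrate, 3 could look roughly like the stand-alone sketch below. This
is not a patch: a real implementation would presumably allocate the table in
shared memory (e.g. with ShmemInitHash) and protect it with an LWLock, while
here a process-local array and a pthread mutex stand in for them, and the
entry layout, hash function and names are only illustrative.

/*
 * Stand-alone sketch of "store statements in shared memory":
 * a fixed-size hash table of per-statement counters, updated on the
 * hot path with no disk I/O.  Illustrative only, not PostgreSQL code.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define STMT_BUCKETS 1024
#define STMT_TEXTLEN 256

typedef struct StmtCounter
{
    char        query[STMT_TEXTLEN];    /* statement text */
    uint64_t    calls;                  /* number of executions seen */
    double      total_ms;               /* accumulated execution time */
} StmtCounter;

static StmtCounter      stmt_table[STMT_BUCKETS];
static pthread_mutex_t  stmt_lock = PTHREAD_MUTEX_INITIALIZER;

/* trivial FNV-1a string hash; real code might hash a normalized query */
static uint32_t
stmt_hash(const char *s)
{
    uint32_t    h = 2166136261u;

    for (; *s; s++)
        h = (h ^ (unsigned char) *s) * 16777619u;
    return h;
}

/* record one execution: O(1) in-memory work, no disk I/O */
static void
stmt_record(const char *query, double elapsed_ms)
{
    uint32_t    h = stmt_hash(query);

    pthread_mutex_lock(&stmt_lock);
    for (uint32_t probe = 0; probe < STMT_BUCKETS; probe++)
    {
        StmtCounter *e = &stmt_table[(h + probe) % STMT_BUCKETS];

        if (e->calls == 0)              /* empty slot: start a new entry */
            snprintf(e->query, STMT_TEXTLEN, "%s", query);
        else if (strncmp(e->query, query, STMT_TEXTLEN) != 0)
            continue;                   /* collision: keep probing */

        e->calls++;
        e->total_ms += elapsed_ms;
        break;
    }
    /* if the table is full the sample is silently dropped */
    pthread_mutex_unlock(&stmt_lock);
}

int
main(void)
{
    /* pretend the executor reported these statements */
    stmt_record("SELECT * FROM pgbench_accounts WHERE aid = $1", 0.42);
    stmt_record("SELECT * FROM pgbench_accounts WHERE aid = $1", 0.38);
    stmt_record("UPDATE pgbench_tellers SET tbalance = tbalance + $1", 1.05);

    for (int i = 0; i < STMT_BUCKETS; i++)
        if (stmt_table[i].calls > 0)
            printf("%-55s calls=%llu total=%.2f ms\n",
                   stmt_table[i].query,
                   (unsigned long long) stmt_table[i].calls,
                   stmt_table[i].total_ms);
    return 0;
}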
 

I know that 1 is slow, but I don't know which part of it is really slow.
If the problem is writing the statements to disk, 2 would be a solution;
3 will be needed if sending the statements to the logger is itself
the source of the overhead.
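To get a first feel for which part is slow, a crude stand-alone
micro-benchmark like the one below can separate the cost of formatting a
log line, writing it to disk, and merely bumping an in-memory counter.
It deliberately ignores the backend-to-logger traffic, so it can only hint
at the difference between 1/2 and 3; the statement text, iteration count
and file name are arbitrary, and real numbers would of course have to come
from something like pgbench against a patched server.

/*
 * Rough timing of three per-statement costs: format only, format + write
 * to disk, and an in-memory counter update.  Illustrative only.
 */
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 200000

static double
elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int
main(void)
{
    const char *stmt = "SELECT abalance FROM pgbench_accounts WHERE aid = 12345";
    char        line[512];
    volatile uint64_t counter = 0;      /* volatile: keep the loop honest */
    struct timespec t0, t1;
    FILE       *log = fopen("stmt_bench.log", "w");

    if (log == NULL)
        return 1;

    /* cost of formatting only (roughly what 2 saves on filtered lines) */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERATIONS; i++)
        snprintf(line, sizeof(line), "duration: 0.123 ms  statement: %s\n", stmt);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("format only      : %8.2f ms\n", elapsed_ms(t0, t1));

    /* cost of formatting plus writing to disk (approach 1) */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERATIONS; i++)
    {
        snprintf(line, sizeof(line), "duration: 0.123 ms  statement: %s\n", stmt);
        fputs(line, log);
    }
    fflush(log);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("format + write   : %8.2f ms\n", elapsed_ms(t0, t1));

    /* cost of an in-memory counter update (the hot path of approach 3) */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERATIONS; i++)
        counter++;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("in-memory counter: %8.2f ms (%llu updates)\n",
           elapsed_ms(t0, t1), (unsigned long long) counter);

    fclose(log);
    return 0;
}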

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



