Thread: Profiling tool for postgres under win32

Profiling tool for postgres under win32

From
aymen marouani
Date:
Hi,
I'm using the Postgres database system version 8.3 for a J2EE application.
I'd like to profile and monitor the status of my queries in "real time" because I've noticed some slow loading.
Can anyone tell me how to get a "good" profiling tool for the Postgres database system?
Thanks in advance.

Re: Profiling tool for postgres under win32

From
"A. Kretschmer"
Date:
In response to aymen marouani:
> Hi,
> I'm using the Postgres database system version 8.3 for a J2EE application.
> I'd  like to profile and monitor in "real time" the status of my queries
> because I notice some slow loading.
> Can anyone tell how to get a "good" profiling tool for the Postgres database
> system ?

Set log_min_duration_statement to a proper value and analyse your slow queries.
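
For example (the 250 ms threshold is only illustrative; pick one that fits your workload), in postgresql.conf:

  # log every statement that runs for 250 ms or longer
  log_min_duration_statement = 250ms

then reload the server (e.g. SELECT pg_reload_conf(); as a superuser) and watch the log for the statements that show up.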

Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: -> header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99


Re: Profiling tool for postgres under win32

From
Craig Ringer
Date:
On 26/11/2009 5:48 PM, aymen marouani wrote:
> Hi,
> I'm using the Postgres database system version 8.3 for a J2EE application.
> I'd  like to profile and monitor in "real time" the status of my queries
> because I notice some slow loading.
> Can anyone tell how to get a "good" profiling tool for the Postgres
> database system ?

I'm not aware of any tools that connect to the database to provide
profiling and monitoring.

What the right course of action is depends on what sort of slowdown
you're encountering. If it's particular activities within the program
that're a problem, then you should probably use logging in your
application to record database activity (including query runtimes) to
see what exactly it's doing.

If it's unpredictable slowdowns in operations that are normally fast,
then you need to look at the database end. Look into locking issues (
see: pg_catalog.pg_locks ), table bloat and VACUUM / autovacuum, and the
possibility of simple concurrent load spikes ( see
pg_catalog.pg_stat_activity ).
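
As a rough sketch (column names are the 8.3 ones), something like this shows who is blocked on a lock and what every backend is currently running:

  -- lock requests that have not been granted yet
  SELECT pid, mode, relation::regclass AS relation
  FROM pg_catalog.pg_locks
  WHERE NOT granted;

  -- current activity, longest-running statements first
  SELECT procpid, usename, waiting,
         now() - query_start AS runtime, current_query
  FROM pg_catalog.pg_stat_activity
  ORDER BY runtime DESC;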

Typically what you'll do if you're seeing unpredictably slow queries is
use log_min_duration_statement to log problem statements to the PostgreSQL log,
which you can then analyse. If you enable CSV logging, you can pull the
PostgreSQL log into a database, spreadsheet, or whatever for easier
analysis.
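
A minimal sketch of the CSV logging side, in postgresql.conf:

  logging_collector = on       # collect server log output; changing this needs a restart
  log_destination = 'csvlog'   # writes a machine-readable .csv file alongside the normal log

The "Error Reporting and Logging" chapter of the 8.3/8.4 docs has a ready-made CREATE TABLE definition and COPY command for pulling those .csv files into a table you can query.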

The new auto_explain module in 8.4 is excellent and very helpful too,
since it can help you find out what plans were used to execute problem
queries easily and conveniently.
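
For instance (thresholds are illustrative), you can try it for a single session like this; to capture plans from every session you would add it to shared_preload_libraries in postgresql.conf instead:

  LOAD 'auto_explain';                        -- needs superuser
  SET auto_explain.log_min_duration = '2s';   -- log plans of statements slower than 2 seconds
  SET auto_explain.log_analyze = on;          -- include actual timings/row counts (adds overhead)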

Of course, this won't help you much if your problem is an application
issuing *huge* numbers of very small queries. You can set Pg to log
every query, but you'll rapidly have an appalling amount of data to trawl
through, and it's hard to know which ones are related to user-perceived
slowdowns. Application logging is usually a better option for tracking
this sort of thing down.

Out of interest, how are you talking to PostgreSQL? Hand-written SQL via
JDBC? Hibernate / TopLink / some other ORM (possibly via JPA)?

If you're using Hibernate, just enable its query logging features via
log4j and watch what happens.
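
A minimal sketch, assuming a log4j.properties-based setup and Hibernate's usual logger categories:

  # log the SQL statements Hibernate issues
  log4j.logger.org.hibernate.SQL=DEBUG
  # also log the bound parameter values (very verbose)
  log4j.logger.org.hibernate.type=TRACE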

--
Craig Ringer


Re: Profiling tool for postgres under win32

From
aymen marouani
Date:
Hi,
I'm sorry, and thanks for the help. Concerning your question:

"Out of interest, how are you talking to PostgreSQL? Hand-written SQL via
JDBC? Hibernate / TopLink / some other ORM (possibly via JPA)?"

I'm using JPA/TopLink to talk to PostgreSQL, and I asked my question about profiling because of a slow simple query

"SELECT i FROM Item i"

which takes 4s to execute.

Cordially and best regards.


Re: Profiling tool for postgres under win32

From
Craig Ringer
Date:
On 30/11/2009 4:42 PM, aymen marouani wrote:
> Hi,
> I'm sorry, and thanks for the help. Concerning your question:
>
> "Out of interest, how are you talking to PostgreSQL? Hand-written SQL via
> JDBC? Hibernate / TopLink / some other ORM (possibly via JPA)?"
>
> I'm using JPA/TopLink to talk to PostgreSQL, and I asked my
> question about profiling because of a slow simple query
>
> "SELECT i FROM Item i"
>
> which takes 4s to execute.
>
> Cordially and best regards.

In my off-list reply, where I suggested that you follow up on the list 
instead, I pointed you to the EXPLAIN command. Also, the initial reply I 
made pointed you to the logging options like log_min_duration_statement.

You might want to use those tools to see what's going on. Start with:
  EXPLAIN ANALYZE SELECT i FROM Item i;

... but I'd expect to see just a normal sequential scan of a table with 
lots of entries. If that's the case, options to make it faster include:

- Don't do it if you don't have to; it's always going to be expensive.

- Make sure your tables aren't bloated. See:

http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html

http://wiki.postgresql.org/wiki/Introduction_to_VACUUM%2C_ANALYZE%2C_EXPLAIN%2C_and_COUNT
 ... and use proper autovacuuming.

- Don't have tables that are too "wide", i.e. with too many fields. While
they're supported fine, they can be slower to scan because there's just
more data there. If you need crazy-fast sequential scans of the whole
table for just a few points of data, consider splitting the table into
two tables with a one-to-one relationship (see the sketch after this
list) - but understand that that'll slow other things down. A materialized
view is another alternative.

- Write your app to deal with the latency. Sometimes queries are slow, 
especially over slow links. Do your work in a background worker thread, 
and keep the UI responsive. (Doesn't make sense for web apps, but is 
important for normal GUI apps).

- Get faster disks, more RAM for caching, etc.
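
As a rough sketch of the split-table idea mentioned above (table and column names are made up; your schema will differ):

  -- hot columns that the frequent full-table scan actually needs
  CREATE TABLE item_core (
      item_id  integer PRIMARY KEY,
      name     text NOT NULL,
      price    numeric
  );

  -- rarely-read, bulky columns, one row per item
  CREATE TABLE item_detail (
      item_id     integer PRIMARY KEY REFERENCES item_core(item_id),
      description text,
      image       bytea
  );

The frequent "select everything" query then scans only item_core, while the queries that need the bulky columns join the two tables on item_id.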

--
Craig Ringer


Schemas, roles and privileges

From
Michael Gould
Date:
I have a database with a schema called ISS. This is where all of our
application definitions are stored. We did add 2 contribute modules (citext)
and guid generator and both of these by default went to the public schema.
It is our intent to not allow any access to public by our users.

A few questions

1. Can I reinstall the contrib modules in the ISS schema only, or do they
need to be in the public schema?

2. If they need to stay in the public schema and I don't want to give any
insert, update, delete or select access to public, can I revoke those
privileges and just grant execute on the functions that were added by the
contrib modules?

3. If I can reinstall the contrib modules in the application schema, can I
drop the public schema, or does it still need to be there? If it has to stay,
I would revoke all privileges except for the superuser id, which is for our
installer or tech support if needed. We have a separate userid for the
security administrator. All of the functions that the security administrator
needs are provided by an application module, and they will not be directly
accessing the database via a SQL utility at all.

Best Regards


--
Michael Gould, Managing Partner
Intermodal Software Solutions, LLC
904.226.0978
904.592.5250 fax





Re: Schemas, roles and privileges

From
Adrian Klaver
Date:
On Monday 30 November 2009 6:50:27 am Michael Gould wrote:
> I have a database with a schema called ISS. This is where all of our
> application definitions are stored. We did add 2 contribute modules
> (citext) and guid generator and both of these by default went to the public
> schema. It is our intent to not allow any access to public by our users.
>
> A few questions
>
> 1. Can I reinstall the contrib modules in the ISS schema only or do they
> need to be in the public schema
>
> 2. If they need to stay in the public schema and I don't want to give any
> insert, update, delete or select access to public, can I revoke those
> privileges and just give execute on the functions that were added by the
> contrib module.
>
> 3. If I can reinstall the contrib modules in the application schema, can I
> delete the public schema or does it still need to be there and I would just
> revoke all except for the superuser id which would be for our installer or
> tech support if needed. We have a separate userid for the security
> administrator. All of the functions that the security administrator needs
> are provided by a application module and they will not be directly
> accessing the database via a SQL utility at all.
>
> Best Regards
>
>
> --
> Michael Gould, Managing Partner
> Intermodal Software Solutions, LLC
> 904.226.0978
> 904.592.5250 fax

From a quick look it would seem the easiest solution would be to change the
search_path in:
citext.sql.in
uuid-ossp.sql.in
These files are found in the respective contrib directories. Uninstall the
modules. Rerun make and then reinstall.
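
For illustration (this assumes the stock 8.3/8.4 contrib layout, where each module's generated .sql script starts with a search_path setting), the top of the generated citext.sql contains something like:

  -- Adjust this setting to control where the objects get created.
  SET search_path = public;

and changing that line to

  SET search_path = iss;

(in the .sql.in before rebuilding, as described above, or in a copy of the generated .sql) makes the module's objects land in the ISS schema instead.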

From here:
http://www.postgresql.org/docs/8.4/interactive/ddl-schemas.html
"There is nothing special about the public schema except that it exists by
default. It can be dropped, too. "
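
Concretely, once nothing you still need lives in public, either of these (run as a superuser) does the job; the REVOKE form is the gentler option if you'd rather keep the schema around:

  DROP SCHEMA public;
  -- or keep it, but shut ordinary users out of it:
  REVOKE ALL ON SCHEMA public FROM PUBLIC;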

--
Adrian Klaver
aklaver@comcast.net


Re: Schemas, roles and privileges

From
Michael Gould
Date:
Thanks

Mike Gould

"Adrian Klaver" <aklaver@comcast.net> wrote:
> On Monday 30 November 2009 6:50:27 am Michael Gould wrote:
>> I have a database with a schema called ISS. This is where all of our
>> application definitions are stored. We did add 2 contribute modules
>> (citext) and guid generator and both of these by default went to the
public
>> schema. It is our intent to not allow any access to public by our users.
>>
>> A few questions
>>
>> 1. Can I reinstall the contrib modules in the ISS schema only or do they
>> need to be in the public schema
>>
>> 2. If they need to stay in the public schema and I don't want to give any
>> insert, update, delete or select access to public, can I revoke those
>> privileges and just give execute on the functions that were added by the
>> contrib module.
>>
>> 3. If I can reinstall the contrib modules in the application schema, can
I
>> delete the public schema or does it still need to be there and I would
just
>> revoke all except for the superuser id which would be for our installer
or
>> tech support if needed. We have a separate userid for the security
>> administrator. All of the functions that the security administrator needs
>> are provided by a application module and they will not be directly
>> accessing the database via a SQL utility at all.
>>
>> Best Regards
>>
>>
>> --
>> Michael Gould, Managing Partner
>> Intermodal Software Solutions, LLC
>> 904.226.0978
>> 904.592.5250 fax
>
> From a quick look it would seem the easiest solution would be to change
the
> search_path in:
> citext.sql.in
> uuid-ossp.sql.in
> These files are found in the respective contrib directories. Uninstall the

> modules. Rerun make and then reinstall.
>
> From here:
> http://www.postgresql.org/docs/8.4/interactive/ddl-schemas.html
> "There is nothing special about the public schema except that it exists by

> default. It can be dropped, too. "
>
> --
> Adrian Klaver
> aklaver@comcast.net
>
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql
>

--
Michael Gould, Managing Partner
Intermodal Software Solutions, LLC
904.226.0978
904.592.5250 fax




Re: Schemas, roles and privileges

From
Harald Fuchs
Date:
In article <200911300708.12397.aklaver@comcast.net>,
Adrian Klaver <aklaver@comcast.net> writes:

> From a quick look it would seem the easiest solution would be to change the 
> search_path in:
> citext.sql.in 
> uuid-ossp.sql.in
> These files are found in the respective contrib directories. Uninstall the 
> modules. Rerun make and then reinstall.

FWIW:

I also manage some databases needing one or more contrib/pgfoundry
modules.  By habit, I install each module in its own schema (with read
access to everybody and write access to nobody) - this helps to
distinguish between your own DDL and the DDL of helper modules.  For
example:
  CREATE SCHEMA mymodule;
  SET search_path = mymodule;
  \i path_to/mymodule.sql
  ALTER DATABASE mydb SET search_path = public, mymodule;

The SQL files of a few modules mess with the search_path themselves;
this can be fixed by editing a copy of <module>.sql and loading the edited
copy with \i.