Re: Query performance problem - Mailing list pgsql-general

From Richard Huxton
Subject Re: Query performance problem
Date
Msg-id 4239560A.50707@archonet.com
In response to Query performance problem  (Phil Daintree <weberp@paradise.net.nz>)
List pgsql-general
Phil Daintree wrote:
>
> There are 2 tables used in the sql we need to optimise .....
>
> CREATE TABLE chartdetails (
> CREATE TABLE gltrans (

> So there is a chartdetail record for every period for every general ledger
> account.  So if there are 5 years x 12 periods (months) and 200 general
> ledger accounts this table will be 12,000 records.
>
> There is a gltrans record for every side of a journal entry. This can get to
> be quite a significant table - easily more than 200,000 per annum - depending
> on the size of the business obviously.
>
>
> Can anyone tell me why the following code chokes ... literally - this
> works almost invisibly under MySQL - pg takes more than an hour even on a very
> small 30 record database.

There's nothing obvious in the code below. You could probably do it all
with one (more complex) SQL query (maybe two), but it shouldn't be too
bad as it stands.
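
As a sketch of the "one query" idea (the column names here are only
guesses, since the schema is only partially quoted): if the loop is
summing gltrans amounts into chartdetails per account and period, the
whole thing can be rolled into a single statement instead of one
SELECT/UPDATE pair per combination, along these lines:

   -- hypothetical column names: accountcode/period/actual, account/periodno/amount
   UPDATE chartdetails
      SET actual = t.total
     FROM (SELECT account, periodno, SUM(amount) AS total
             FROM gltrans
            GROUP BY account, periodno) t
    WHERE chartdetails.accountcode = t.account
      AND chartdetails.period = t.periodno;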

The code doesn't seem to use gltrans at all though (unless I'm missing
something). Is there something missing?

Anyway, the first step will be to get some accurate figures. Can you
turn on statement logging in postgresql.conf, and timestamps too (unless
you're logging to syslogd, in which case you'll get them automatically)?
That way we can identify exactly where the time is being spent.
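
Something like the following should do it (exact parameter names depend
on your PostgreSQL version):

   log_statement = 'all'      # log every statement
   log_duration = on          # log how long each statement took
   log_line_prefix = '%t '    # prefix each log line with a timestamp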

--
   Richard Huxton
   Archonet Ltd
