Re: Triggers made with plpythonu performance issue - Mailing list pgsql-general

From Adrian Klaver
Subject Re: Triggers made with plpythonu performance issue
Date
Msg-id 200912191244.09746.aklaver@comcast.net
In response to Triggers made with plpythonu performance issue  (sabrina miller <sabrina.miller@gmail.com>)
List pgsql-general
On Friday 18 December 2009 11:00:33 am sabrina miller wrote:
> Hi everybody,
> My requirements were:
>  + Make a table charge partitioned by carrier and month
>  + summarize by charges
>  + summarize by users,
>  + each summarization must be by month and several other columns.
>
>
>
> Doesn't sound like too much? As I say, I'm new and I haven't found anything
> better. But an insert takes around 135ms in the worst case (create tables
> and insert rows) and about 85ms in the best case (only updates). Is there
> something better?

If I am following this, it means there is an average of 50ms extra overhead to do
an INSERT on charges.charges compared to an UPDATE, correct? If so, you have to
consider that an INSERT is actually doing quite a lot besides creating a new row
in charges.charges: there is a time cost to querying the database for the
existence of objects, making decisions based on the result, creating new database
objects, and then populating those objects. The issue then becomes where you want
to pay that cost, so the "something better" question is really a question of where
it is best to incur it. If the 135ms worst case works and does not impede your
process, then it may be the best solution. Unfortunately there is not enough
information to give a definitive answer.
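One common way to shift that cost is to pay the existence check and DDL only the
first time a given partition is seen, by caching the names of known partitions
across trigger calls (in plpythonu, the SD dictionary persists between
invocations of a function and can hold such a cache). A plain-Python sketch of
the idea, with all table and function names hypothetical:

```python
# Sketch of partition-name construction plus an existence cache, as a
# plpythonu trigger might use them. In a real trigger, plpy.execute()
# would run the CREATE TABLE, and SD would hold the cache so it survives
# across calls. Names like charges.charges_<carrier>_<year>_<month> are
# assumptions, not the poster's actual schema.

def partition_name(carrier, charge_date):
    """Build the child-table name, e.g. charges.charges_acme_2009_12."""
    return "charges.charges_%s_%04d_%02d" % (
        carrier.lower(), charge_date.year, charge_date.month)

def ensure_partition(carrier, charge_date, cache, create_fn):
    """Run the DDL only on first sight of a partition; subsequent
    inserts skip both the catalog lookup and the CREATE TABLE, so the
    135ms worst case is paid once per carrier/month rather than per row."""
    name = partition_name(carrier, charge_date)
    if name not in cache:
        create_fn(name)  # stands in for plpy.execute("CREATE TABLE ...")
        cache.add(name)
    return name
```

The same trade-off can also be moved out of the trigger entirely, e.g. by
pre-creating next month's partitions from a scheduled job, so every insert takes
the fast path.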

>
> Thanks in advance, Sabrina



--
Adrian Klaver
aklaver@comcast.net
