Thread: left join with smaller table or index on (XXX is not null) to avoid upsert

left join with smaller table or index on (XXX is not null) to avoid upsert

From
Ivan Sergio Borgonovo
Date:
I have to apply discounts to products.

For each promotion I have a query that selects a list of products to
which a discount should be applied.

Queries may have intersections; where they intersect, the highest
discount should be applied.

Since the queries may be slow I decided to proxy the discount this way:

create table Product(
  ProductID int primary key,
  ListPrice numeric
);

create table ProductPrice(
  ProductID int references Product (ProductID),
  DiscountedPrice numeric
);

Case A)
If I want ProductPrice to contain just the products with a discount,
I'll have to update, then check whether the update was successful,
and insert otherwise.
I expect the products involved to be around 10% of the overall
catalogue.

To retrieve a list of products I could:
select [some columns from Product],
  least(coalesce(p.ListPrice,0),
    coalesce(pp.DiscountedPrice, p.ListPrice, 0)) as Price
  from Product p
  left join ProductPrice pp on p.ProductID=pp.ProductID
  where [some conditions on Product table];

create index ProductPrice_ProductID_idx on ProductPrice
(ProductID);

Case B)
Or ProductPrice may contain ALL the products, and everything
will be managed with updates.

select [some columns from Product],
  least(coalesce(p.ListPrice,0),
    coalesce(pp.DiscountedPrice, p.ListPrice, 0)) as Price
  from Product p
  left join ProductPrice pp on p.ProductID=pp.ProductID and
    pp.DiscountedPrice is not null
  where [some conditions on Product table];

create index ProductPrice_DiscountedPrice_idx on ProductPrice
((DiscountedPrice is not null));
create index ProductPrice_ProductID_idx on ProductPrice
(ProductID);
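
A partial index might express the same intent more directly (just a
guess on my side, untested):

create index ProductPrice_discounted_idx on ProductPrice (ProductID)
  where DiscountedPrice is not null;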

I'm expecting that:
- ProductPrice will contain roughly, but less than, 10% of the
catalogue.
- I may have from 0 to 60% overlap between the queries generating the
lists of products to be discounted.
- The overall number of promotions/queries running concurrently may
be in the range of 20-100.
- Promotions will be created/deleted at a rate of 5-10 a day, so
discounts will have to be recalculated.
- Searches in the catalogue have to be fast.

Since I haven't been able to find a quick way to build up a
hierarchy of promotions to apply/re-apply discounts when promotions
are added/deleted, creating/deleting promotions looks critical as
well.
The best plan I could come up with was simply to reapply all
promotions when one is deleted.

So it looks to me that approach B is going to make updating
discounts easier, but I was wondering if it makes retrieval of
products and prices slower.

Having a larger table that is updated at a rate of 5% to 10% a
day may make it a bit "fragmented".

Advice on the overall problem of discount overlap management would
be appreciated too.

--
Ivan Sergio Borgonovo
http://www.webthatworks.it






Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Martin Gainty
Date:
Sergio-

1) Index all joined columns.
2) Put your NOT NULL test up front, e.g.:

left join ProductPrice pp on p.ProductID = pp.ProductID
where
  pp.DiscountedPrice is not null
  AND [some conditions on Product table]
Martin





Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
"Scott Marlowe"
Date:
On Sun, Jan 18, 2009 at 2:12 PM, Ivan Sergio Borgonovo
<mail@webthatworks.it> wrote:
> I have to apply discounts to products.
>
> For each promotion I have a query that selects a list of products to
> which a discount should be applied.
>
> Queries may have intersections; where they intersect, the highest
> discount should be applied.
>
> Since the queries may be slow I decided to proxy the discount this way:
>
> create table Product(
>  ProductID int primary key,
>  ListPrice numeric
> );
>
> create table ProductPrice(
>  ProductID int references Product (ProductID),
>  DiscountedPrice numeric
> );
>
> Case A)
> If I want ProductPrice to contain just the products with a discount,
> I'll have to update, then check whether the update was successful,
> and insert otherwise.
> I expect the products involved to be around 10% of the overall
> catalogue.

You could use UPDATE ... RETURNING to get a list of all the rows
that were updated.  Then build a simple select with WHERE NOT IN
(those rows) to get the rest for inserting.
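
A rough sketch of that idea (NewDiscount is a made-up staging table
here, and the returned IDs have to be fed back in by the application
or a temp table, since RETURNING only sends rows to the caller):

update ProductPrice pp
  set DiscountedPrice = nd.DiscountedPrice
  from NewDiscount nd
  where pp.ProductID = nd.ProductID
  returning pp.ProductID;

insert into ProductPrice (ProductID, DiscountedPrice)
  select nd.ProductID, nd.DiscountedPrice
  from NewDiscount nd
  where nd.ProductID not in (/* the IDs returned above */);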



> I'm expecting that:
> - ProductPrice will contain roughly but less than 10% of the
> catalogue.

Then an index will only help when you're selecting on something more
selective.  Unless your rows are really skinny, a sequential scan will
usually win over an index scan.

> Since I haven't been able to find a quick way to build up a
> hierarchy of promotions to apply/re-apply discounts when promotions
> are added/deleted, creating/deleting promotions looks critical as
> well.
> The best plan I could come up with was simply to reapply all
> promotions when one is deleted.

Watch out for bloat when doing this.  A simple where change of

update table set b = 45 ;

to

update table set b = 45 where b <> 45 ;

can save the db a lot of work, and if you can apply the same logic to
your update to save some dead tuples it's worth looking into.
Updating whole tables wholesale is definitely not pgsql's strong
suit.

> So it looks to me that approach B is going to make updating of
> discounts easier, but I was wondering if it makes retrieval of
> Products and Prices slower.

If you do bulk updates, you'll blow out your tables if you don't keep
them vacuumed.  50% dead space is manageable, if your data set is
reasonably small (under a few hundred meg).  Just make sure you don't
run 20 updates on a table in a row, that kind of thing.

> Having a larger table that is being updated at a rate of 5% to 10% a
> day may make it a bit "fragmented".

Nah, autovacuum should keep it clean and running smoothly.
Fragmentation isn't much of a problem in postgresql.

Tips: Look at indexes that match common where clauses.  If you do a
lot of "where a.x=b.y and b.x is not null" then create an index on b.y
where b.x is not null, that kind of thing.

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Grzegorz Jaśkiewicz
Date:
On Mon, Jan 19, 2009 at 2:44 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> Watch out for bloat when doing this.  A simple where change of
>
> update table set b = 45 ;
>
> to
>
> update table set b = 45 where b <> 45 ;
>
> can save the db a lot of work, and if you can apply the same logic to
> your update to save some dead tuples it's worth looking into.

I wonder why the DB can't do it on its own :)

--
GJ

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
"Scott Marlowe"
Date:
On Mon, Jan 19, 2009 at 12:12 AM, Grzegorz Jaśkiewicz <gryzman@gmail.com> wrote:
> On Mon, Jan 19, 2009 at 2:44 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> Watch out for bloat when doing this.  A simple where change of
>>
>> update table set b = 45 ;
>>
>> to
>>
>> update table set b = 45 where b <> 45 ;
>>
>> can save the db a lot of work, and if you can apply the same logic to
>> your update to save some dead tuples it's worth looking into.
>
> I wonder why the DB can't do it on its own :)

Submit a patch. :)

But seriously, it's doing what you told it to do. There might be
corner cases where you need a trigger to fire for a row on change, and
short-circuiting could cause things to fail in unexpected ways.

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Grzegorz Jaśkiewicz
Date:
2009/1/19 Scott Marlowe <scott.marlowe@gmail.com>:
> Submit a patch. :)
>
> But seriously, it's doing what you told it to do. There might be
> corner cases where you need a trigger to fire for a row on change, and
> short-circuiting could cause things to fail in unexpected ways.

as far as my little knowledge about pg goes, that would be just
another addition to planner. <daydreaming> Say - when there's more
than X % of value Y, and we do set column X to Y, it could add that
'where'. But what if we have more WHERE statements, and they are quite
contradictory, etc, etc. It could actually do more damage than good.
(yes, I do have quite few more 'against' than for)</daydreaming>

I wrote that previous email, while waiting for breakfast, so I guess
it wasn't the best idea in the world ;)

--
GJ

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
"Scott Marlowe"
Date:
On Mon, Jan 19, 2009 at 12:53 AM, Grzegorz Jaśkiewicz <gryzman@gmail.com> wrote:
> 2009/1/19 Scott Marlowe <scott.marlowe@gmail.com>:
>> Submit a patch. :)
>>
>> But seriously, it's doing what you told it to do. There might be
>> corner cases where you need a trigger to fire for a row on change, and
>> short-circuiting could cause things to fail in unexpected ways.
>
> as far as my little knowledge about pg goes, that would be just
> another addition to planner. <daydreaming> Say - when there's more
> than X % of value Y, and we do set column X to Y, it could add that
> 'where'. But what if we have more WHERE statements, and they are quite
> contradictory, etc, etc. It could actually do more damage than good.
> (yes, I do have quite few more 'against' than for)</daydreaming>

Yes, but what about a table with an update trigger on it that does
some interesting bit of housekeeping when rows are updated?  It might
be that you have ten rows, all with the number 4 in them, and you
update the same field again to 4.  With the trigger some other
processing gets kicked off and some maintenance script picks up those
values and does something.  If the db autoshort-circuited like you
want, the trigger would never fire.  According to the strictest
interpretation, setting a value from 4 to 4 is still a change.  But
the database just changed the rules underneath you.

It's a prime example of fixing a problem created by not knowing how
the database works, and creating a possible problem for people who do
know how it works.

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Grzegorz Jaśkiewicz
Date:
2009/1/19 Scott Marlowe <scott.marlowe@gmail.com>:

> Yes, but what about a table with an update trigger on it that does
> some interesting bit of housekeeping when rows are updated?
exactly, that's another one of reasons why I wouldn't write that patch :P

> It's a prime example of fixing a problem created by not knowing how
> the database works, and creating a possible problem for people who do
> know how it works.

Like I said, I was just daydreaming right after getting out of bed.
Forgive me, also - I do know how it works, but it is interesting to
explore such options sometimes - to learn that the simple design of db
is the best possible :)


--
GJ

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Ivan Sergio Borgonovo
Date:
On Mon, 19 Jan 2009 01:18:35 -0700
"Scott Marlowe" <scott.marlowe@gmail.com> wrote:

> On Mon, Jan 19, 2009 at 12:53 AM, Grzegorz Jaśkiewicz
> <gryzman@gmail.com> wrote:
> > 2009/1/19 Scott Marlowe <scott.marlowe@gmail.com>:
> >> Submit a patch. :)
> >>
> >> But seriously, it's doing what you told it to do. There might be
> >> corner cases where you need a trigger to fire for a row on
> >> change, and short-circuiting could cause things to fail in
> >> unexpected ways.

> > as far as my little knowledge about pg goes, that would be just
> > another addition to planner. <daydreaming> Say - when there's
> > more than X % of value Y, and we do set column X to Y, it could
> > add that 'where'. But what if we have more WHERE statements, and
> > they are quite contradictory, etc, etc. It could actually do
> > more damage than good. (yes, I do have quite few more 'against'
> > than for)</daydreaming>

> Yes, but what about a table with an update trigger on it that does
> some interesting bit of housekeeping when rows are updated?  It
> might be that you have ten rows, all with the number 4 in them,
> and you update the same field again to 4.  With the trigger some

But what should the expected/standard behaviour be?
It seems that if an update fired triggers only when columns actually
get changed... things would start to be a bit non-deterministic.
You'd have to take rules etc. into account...

e.g. FOUND is set to true when the conditions are met, not when
columns are actually changed, etc...

--
Ivan Sergio Borgonovo
http://www.webthatworks.it


Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Ivan Sergio Borgonovo
Date:
On Sun, 18 Jan 2009 19:44:40 -0700
"Scott Marlowe" <scott.marlowe@gmail.com> wrote:

> You could use UPDATE ... RETURNING to get a list of all the rows
> that were updated.  Then build a simple select with WHERE NOT IN
> (those rows) to get the rest for inserting.

Uh, nice addition. I didn't check all the goodies I got when I moved
from 8.1 to 8.3; I was mostly interested in tsearch.
Still, while it makes writing an upsert nearly trivial, it looks like
it will still make the server sweat compared to MySQL's REPLACE.
In postgresql I could write a rule, but it would be globally defined
and it is a much more permanent solution than an upsert (aka
REPLACE) on a per-statement basis.

> > I'm expecting that:
> > - ProductPrice will contain roughly but less than 10% of the
> > catalogue.

> Then an index will only help when you're selecting on something
> more selective.  Unless your rows are really skinny, a sequential
> scan will usually win over an index scan.

They should be very skinny.
create table ProductPrice(
  ProductID int references Product (ProductID),
  DiscountedPrice numeric(4,2)
);

> > Since I haven't been able to find a quick way to build up a
> > hierarchy of promotions to apply/re-apply discounts when
> > promotions are added/deleted, creating/deleting promotions looks
> > critical as well.
> > The best plan I could come up with was simply to reapply all
> > promotions when one is deleted.

> Watch out for bloat when doing this.  A simple where change of

> update table set b = 45 ;

> to

> update table set b = 45 where b <> 45 ;

> can save the db a lot of work, and if you can apply the same logic
> to your update to save some dead tuples it's worth looking into.
> Updating whole tables wholesale is definitely not pgsql's
> strong suit.

Oh, that's really a good suggestion, since only a higher discount has
to be applied.

I just have to understand how to apply it.

If I actually have all the rows, everything will be an update and I
can exploit your suggestion:

update ProductPrice set DiscountedPrice = round(q.Price*Discount,2)
  from somefunction() q
  where ProductPrice.ProductID = q.ProductID
    and ProductPrice.DiscountedPrice > round(q.Price*Discount,2);

If I have to "upsert", most of the advantage of reducing the # of
updates with an additional condition seems to be lost.

> > So it looks to me that approach B is going to make updating of
> > discounts easier, but I was wondering if it makes retrieval of
> > Products and Prices slower.
>
> If you do bulk updates, you'll blow out your tables if you don't
> keep them vacuumed.  50% dead space is manageable, if your data
> set is reasonably small (under a few hundred meg).  Just make sure
> you don't run 20 updates on a table in a row, that kind of thing.

That's exactly going to be the case. If I can't easily spot
intersections between promotions, and I doubt I can do it cheaply,
I'll have to run 20 to 100 updates every time I delete a promotion:
set DiscountedPrice to null for every ProductID I'm removing from the
promotion and reapply all the promotions.

But maybe you helped me find another approach.

Since I'm going to reapply all promotions I could:
- delete the whole table
- insert prices starting from the promotions with the highest discount
- skip failed inserts, since the other promotions will have the same
or a lower discount.

So I'll have just the rows I need in the ProductPrice table: no need
to index on "is not null", and a smaller table, so it's faster to left
join and to keep in memory.

But... well, how am I going to do that?

-- Discount=40
insert into ProductPrice
  select q.ProductID, round(q.Price*Discount,2)
  from mypromofunction(...) q;
-- Discount=40
insert into ProductPrice
  select q.ProductID, round(q.Price*Discount,2)
  from mypromofunction(...) q;
-- Discount=30
insert into ProductPrice
  select q.ProductID, round(q.Price*Discount,2)
  from mypromofunction(...) q;
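
One way to make each later insert skip products that already got a
better price could be something like this (just a sketch, untested):

insert into ProductPrice (ProductID, DiscountedPrice)
  select q.ProductID, round(q.Price*Discount,2)
  from mypromofunction(...) q
  where not exists (select 1 from ProductPrice pp
                    where pp.ProductID = q.ProductID);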

OK, one more approach:

create table ProductPromoPrice(
  PromoID int references Promo (PromoID) on delete cascade,
  ProductID int references Product (ProductID) on delete cascade,
  DiscountedPrice numeric
);
create table ProductPrice(
  ProductID int references Product (ProductID) on delete cascade,
  DiscountedPrice numeric
);

insert into ProductPromoPrice
  select PromoID, q.ProductID, round(q.Price*Discount,2)
  from mypromofunction(...) q;
...

[1]
insert into ProductPrice select ProductID, min(DiscountedPrice) from
  ProductPromoPrice group by ProductID;

I just did:
create table test.Prices(ItemID int, Price real);

insert into test.Prices select BrandID, max(ListPrice) from
catalog_items group by BrandID;
took 1sec

insert into test.Prices select ProductID, ListPrice from
catalog_items;
took 4 sec

If I'm expecting that:
- discounted articles may be 10% of the whole catalogue
- there is a maximum overlap of 60%
- a large overlap may involve only a small # of promotions
- there are no more than 100 promotions
what execution time should I expect from query [1]?

Should an index on ProductPromoPrice.DiscountedPrice help with
picking out the min (i.e. the highest discount)?
Should I index ProductPromoPrice.ProductID?

Meanwhile I realised I may have another requirement that could make
things complicated enough to need a completely different strategy:
if discounts have to be applied only when goods are in stock... I'll
have to rebuild the discount table every time a product goes out of
stock.

I may have a PriceInStock table and a PriceBackOrder table and then
just choose the LEAST.
Since few products will go out of stock concurrently, I'll have to
regenerate the entries just for those products... with a rule or a
trigger.
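
Something along these lines, where PriceInStock and PriceBackOrder
are hypothetical tables with the same shape as ProductPrice:

select [some columns from Product],
  least(p.ListPrice,
    coalesce(s.DiscountedPrice, p.ListPrice),
    coalesce(b.DiscountedPrice, p.ListPrice)) as Price
  from Product p
  left join PriceInStock s on s.ProductID = p.ProductID
  left join PriceBackOrder b on b.ProductID = p.ProductID
  where [some conditions on Product table];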

I'm still looking for advice on an overall better strategy, or just
on reducing the number of actual tests needed to see whether this
stuff is feasible in a decent time.

--
Ivan Sergio Borgonovo
http://www.webthatworks.it


"Scott Marlowe" <scott.marlowe@gmail.com> writes:
> But seriously, it's doing what you told it to do. There might be
> corner cases where you need a trigger to fire for a row on change, and
> short-circuiting could cause things to fail in unexpected ways.

The other argument against doing this by default is that with
non-stupidly-written applications, the cycles expended to check for
vacuous updates would invariably be wasted.  Even if the case did
come up occasionally, it's not hard at all to foresee that the extra
checking could be a net loss overall.

But having said that: 8.4 will provide a standard trigger that
short-circuits vacuous updates, which you can apply to tables in which
you think vacuous updates are likely.  It's your responsibility to place
the trigger so that it doesn't interfere with any other trigger
processing you may have.

            regards, tom lane

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Grzegorz Jaśkiewicz
Date:
On Mon, Jan 19, 2009 at 4:43 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> But having said that: 8.4 will provide a standard trigger that
> short-circuits vacuous updates, which you can apply to tables in which
> you think vacuous updates are likely.  It's your responsibility to place
> the trigger so that it doesn't interfere with any other trigger
> processing you may have.

Tom, can you point us to where in
http://developer.postgresql.org/pgdocs/postgres/ it is described
in more detail?

--
GJ

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Alex Hunsaker
Date:
On Mon, Jan 19, 2009 at 09:48, Grzegorz Jaśkiewicz <gryzman@gmail.com> wrote:
> On Mon, Jan 19, 2009 at 4:43 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> But having said that: 8.4 will provide a standard trigger that
>> short-circuits vacuous updates, which you can apply to tables in which
>> you think vacuous updates are likely.  It's your responsibility to place
>> the trigger so that it doesn't interfere with any other trigger
>> processing you may have.
>
> Tom, Can you point us to
> http://developer.postgresql.org/pgdocs/postgres/ where it is described
> in more detail ?

I assume he is talking about suppress_redundant_updates_trigger, see
http://developer.postgresql.org/pgdocs/postgres/functions-trigger.html
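
Typical usage looks like this (tablename being whatever table gets
the vacuous updates; see the docs page above):

CREATE TRIGGER z_min_update
BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();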

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Ivan Sergio Borgonovo
Date:
On Sun, 18 Jan 2009 22:12:07 +0100
Ivan Sergio Borgonovo <mail@webthatworks.it> wrote:

> I have to apply discounts to products.
>
> For each promotion I have a query that selects a list of products to
> which a discount should be applied.
>
> Queries may have intersections; where they intersect, the highest
> discount should be applied.
>
> Since the queries may be slow I decided to proxy the discount this way:

Actually:
premature optimization is the root of all evil (Knuth).

Although I haven't reached any definitive conclusion, clean design
and normalisation seem to have paid off.

A normal query to retrieve a list of products seems nearly
unaffected by keeping a

create table Promo (
  PromoID serial primary key,
  PromoStart timestamp,
  PromoEnd timestamp,
  ...
);

and a

create table PromoItem(
  PromoID int references Promo (PromoID) on delete cascade,
  ItemID int references Product (ProductID) on delete cascade,
  Discount numeric(4,2) not null
);

and looking for the max discount in a join on the fly.
That's on a 1M-item catalogue with 40K products on promo.
The distribution of promos was random; I'll dig further to get an
idea of the worst case.
What's important is that a simple search over the catalogue takes
nearly the same time as a query that searches the catalogue and
finds the appropriate discount.
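
The on-the-fly lookup is roughly this shape (just a sketch using the
tables above; how Discount translates into a price is omitted here,
and the real query is still being refined):

select p.ProductID, p.ListPrice, max(pi.Discount) as BestDiscount
  from Product p
  left join (PromoItem pi
    join Promo pr on pr.PromoID = pi.PromoID
      and now() between pr.PromoStart and pr.PromoEnd)
    on pi.ItemID = p.ProductID
  where [some conditions on Product table]
  group by p.ProductID, p.ListPrice;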

Thanks to Knuth and to the Postgresql coders.

I'll post a more detailed solution as soon as it's refined enough
and I'm sure of its correctness.

--
Ivan Sergio Borgonovo
http://www.webthatworks.it


Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Grzegorz Jaśkiewicz
Date:
The only difference here is that the trigger will memcmp (compare)
all the data. Say, if we have two columns, int and bytea, and just
want to compare the first one - it will use a lot of cpu in vain.
I have to say, it is a shame sometimes that the trigger isn't aware
of which fields we actually update.
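
A hand-rolled variant that only looks at the column of interest could
be written like this (a sketch, assuming the ProductPrice table from
earlier in the thread):

create or replace function skip_unchanged_discount() returns trigger as $$
begin
  -- compare only the column we care about; returning NULL skips the UPDATE
  if new.DiscountedPrice is not distinct from old.DiscountedPrice then
    return null;
  end if;
  return new;
end;
$$ language plpgsql;

create trigger productprice_skip_unchanged
  before update on ProductPrice
  for each row execute procedure skip_unchanged_discount();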

Re: left join with smaller table or index on (XXX is not null) to avoid upsert

From
Dimitri Fontaine
Date:
Hi,

Le lundi 19 janvier 2009, Tom Lane a écrit :
> But having said that: 8.4 will provide a standard trigger that
> short-circuits vacuous updates, which you can apply to tables in which
> you think vacuous updates are likely.  It's your responsibility to place
> the trigger so that it doesn't interfere with any other trigger
> processing you may have.

I'm preparing an 8.3 backport of it, which is in fact already running
just fine. Had pgfoundry let me check out the module I imported
earlier today, the code and debian packaging would already be on a
public CVS:
  http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/backports/min_update/

If people want to see the code before pgfoundry allows me to put it
in the CVS over there, here it is (slow server):
  http://pgsql.tapoueh.org/min_update

Regards,
--
dim
