AW: day 2 results - Mailing list pgsql-hackers

From: Zeugswetter Andreas SB
Subject: AW: day 2 results
Date:
Msg-id: 11C1E6749A55D411A9670001FA687963368191@sdexcsrv1.f000.d0188.sd.spardat.at
List: pgsql-hackers
> > VACUUM ANALYZE after the INSERTs made no performance difference at all,
> > which is good since no other modern database requires anything to be done
> > to improve performance after a large number of INSERTs. (i can understand
> > why COPY would need it, but not INSERT.)

I know of no DB that keeps statistics up to date on the fly for each insert/update (maybe Adabas D?).

> afaik every modern database requires something like this to update
> optimizer stats, since on-the-fly stats accumulation can be expensive
> and inaccurate. But most of my recent experience has been with
> PostgreSQL and perhaps some other DBs have added some hacks to get
> around this. Of course, some databases advertised as modern don't do
> much optimization, so don't need the stats.

To add another two cents: "most :-)" other DBs have two modes of operation.
One is rule-based, used when statistics are missing altogether (ANALYZE /
UPDATE STATISTICS was never run), which is actually not bad for a pure OLTP
access pattern with a high modification volume. The other is the cost-based
optimizer, which needs statistics and typically improves performance for
query-intensive applications that also have OLAP-style access.
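The difference is visible in PostgreSQL itself by comparing plans before and after statistics exist. A minimal sketch (the table and predicate are made up for illustration):

```sql
-- Hypothetical table for illustration
CREATE TABLE orders (id int, customer int, amount numeric);

-- With no statistics gathered yet, the planner must fall back on
-- built-in default selectivity estimates (rule-of-thumb behaviour):
EXPLAIN SELECT * FROM orders WHERE customer = 42;

-- After statistics are gathered, the planner costs plans from real
-- row counts and value distributions instead:
VACUUM ANALYZE orders;
EXPLAIN SELECT * FROM orders WHERE customer = 42;
```

Running the two EXPLAINs side by side shows how the estimated row counts, and sometimes the chosen plan, change once statistics are available.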

Actually, PostgreSQL also has this "sort of" rule-based optimizer, which works well
before the first vacuum of a table. The downside is that the first vacuum
creates statistics, and that is not avoidable. My suggestion would be to give
vacuum a mode of operation that does not create (or even drops) the statistics.
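A sketch of what such separated modes might look like; the option name is purely illustrative, not existing syntax:

```sql
-- Today: vacuuming a table also refreshes its statistics.
VACUUM ANALYZE orders;      -- reclaim space and (re)build statistics

-- Suggested mode: reclaim space without touching (or while dropping)
-- the statistics, so the rule-based plan behaviour is preserved:
-- VACUUM NOSTATS orders;   -- hypothetical option, for illustration only
```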

Andreas

