Re: Re: [HACKERS] Re: [QUESTIONS] Business cases - Mailing list pgsql-hackers

From Mattias Kregert
Subject Re: Re: [HACKERS] Re: [QUESTIONS] Business cases
Msg-id 34C34E4C.3440D5B0@algonet.se
In response to Re: [QUESTIONS] Business cases  (Tom <tom@sdf.com>)
Responses Re: Re: [HACKERS] Re: [QUESTIONS] Business cases  (Bruce Momjian <maillist@candle.pha.pa.us>)
List pgsql-hackers
Tom wrote:
> > >   How are large users handling the vacuum problem?  Vacuum locks other
> > > users out of tables for too long.  I don't need a lot of performance (a few
> > > queries per minute), but I need to be able to handle queries non-stop.
> >
> >       Not sure, but this one is about the only major thing that is continuing
> > to bother me :(  Is there any method of improving this?
>
>   vacuum seems to do a _lot_ of stuff.  It seems that crash recovery
> features and maintenance features should be separated.  I believe the
> only required maintenance features are recovering space used by deleted
> tuples and updating stats?  Neither of these should need to lock the
> database for long periods of time.

Would it be possible to add an option to VACUUM, like a max number
of blocks to sweep? Or is this impossible because of the way PG works?

Would it be possible to (for example) compact data near the front of
the file so that one block becomes free somewhere near the beginning,
and then move rows from the last block into this newly freed block?

-- To limit the number of rows to compact:
psql=> VACUUM MoveMax 1000; -- move max 1000 rows

-- To limit the time used for vacuuming:
psql=> VACUUM MaxSweep 1000; -- Sweep max 1000 blocks
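
To make the idea a bit more concrete, here is a rough, self-contained
sketch in plain C -- not PostgreSQL source; the Block layout, the slot
bitmap and the compact() function are all made up for illustration --
of what a bounded compaction pass could look like: scan occupied slots
from the tail of the file, move them into free slots near the front,
and stop after a caller-supplied limit so the pass stays short:

/* Toy model: a table file is an array of fixed-size blocks, each with
 * a fixed number of row slots.  compact() moves at most max_moves rows
 * from the tail of the file into free slots near the front, so that
 * trailing blocks can eventually be truncated. */
#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS       8     /* blocks in the (toy) table file */
#define ROWS_PER_BLK  4     /* row slots per block            */

typedef struct Block {
    bool used[ROWS_PER_BLK];        /* slot occupancy bitmap  */
} Block;

static int compact(Block *blk, int nblocks, int max_moves)
{
    int moved = 0;
    int src = nblocks - 1;          /* scan backwards from the tail  */
    int dst = 0;                    /* scan forwards from the front  */

    while (moved < max_moves && src > dst) {
        /* find an occupied slot in the tail block */
        int s;
        for (s = 0; s < ROWS_PER_BLK && !blk[src].used[s]; s++)
            ;
        if (s == ROWS_PER_BLK) {    /* tail block already empty      */
            src--;
            continue;
        }

        /* find a free slot in a front block */
        int d;
        for (d = 0; d < ROWS_PER_BLK && blk[dst].used[d]; d++)
            ;
        if (d == ROWS_PER_BLK) {    /* front block is full, advance  */
            dst++;
            continue;
        }

        /* "move" the row; a real system would copy the tuple data
         * and fix up any index entries pointing at it */
        blk[src].used[s] = false;
        blk[dst].used[d] = true;
        moved++;
    }
    return moved;
}

int main(void)
{
    Block blk[NBLOCKS] = {0};

    /* scatter a few rows, leaving holes where tuples were deleted */
    blk[0].used[0] = true;
    blk[2].used[1] = true;
    blk[6].used[3] = true;
    blk[7].used[0] = true;
    blk[7].used[2] = true;

    printf("moved %d rows toward the front of the file\n",
           compact(blk, NBLOCKS, 1000));
    return 0;
}

A real implementation would of course also have to copy the tuple data,
update index entries and keep the locking window short -- which is
exactly the open question here.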

Could this work with the current method of updating statistics?


*** Btw, why doesn't PG update statistics when inserting/updating?


/* m */
