Re: Almost happy ending (from "Data files became huge with - Mailing list pgsql-general

From Bruce Momjian
Subject Re: Almost happy ending (from "Data files became huge with
Date
Msg-id 200209021350.g82DoKJ26980@candle.pha.pa.us
Whole thread Raw
In response to Almost happy ending (from "Data files became huge with no apparent reason" thread)  (dfumagalli@tin.it)
List pgsql-general
dfumagalli@tin.it wrote:
> Hi all,
>
> Al last I recieved a new disk and was able to VACUUM ANALYZE.
>
> The repaired database from the initial 500MB became 185 MB.
> I'm very happy that the combined disk full + unexpected server shutdown
> didn't suffice to crash the data beyond recovery.
>
> The next thing I did was to further reduce this rescued database. In fact,
> the original data files took only 26 MB, not 185.
>
> I checked the objects file size (by looking at pg_class -> it's a bit difficult
> with those "numeric table names") and quicky found that I had some 150 MB
> of index files!
>
> My first try was to go stand alone and to reindex force the entire database,
> but the rebuilt indexes still took the same space (I thought the reindex
> would rebuild them from scratch, freeing unused index pages, but perhaps
> I'm wrong, at least for PostgreSQL 7.1.1).

That is strange.  How did you reindex, and did you reindex the toast
tables too?  7.3, due out in a few months, has a reindex script that
reindexes everything.

> So I dropped all non-primary key indexes (they were auto-created as *_pkey)
> and recreated them. I did not drop primary keys because I fear it could
> break something about constraints. Now the database is 47MB and I'm pretty
> happy with it. I only I could safely drop and recreate the primary keys
> I'm sure the database would return to its original 26 MB.
>
> Now, three simple questions to the pros on this mailing list:
> - Is there a cleaner way of recreating packed indexes?

Yes, 7.3 will have a reindexdb script.

> - May I drop and recreate primary indexes without breaking rules, triggers,
> constraints and sequences?

Not easily, no.

> - We are evaluating PostgreSQL for a mission critical application, currently
> managed by an Informix IDS 2000 server on a Sun Ultra 420 with Solaris 8
> with raid disks and SAN for storage. Currently there are about 1000 concurrent
> users (each with one to three ODBC sessions), with a CPU load of about 15%.
> There are 30 databases: some have only some thousand records, while others
> have a thousand tables with some of them totalling 10M+ records, with need
> of sub-selects, joins and transactions (no constraints for now). Is PostgreSQL
> able to manage *RELIABLY* and 24/7/365 this amount of data? Does the newest

You need 7.2.2, which was designed for 24 hour operation.

> version come with some handy tool like the Informix Onstat (it reports users
> sessions and executed queries given a connection id)? Does it use all the

There is pgmonitor on gborg.postgresql.org that displays active sessions
and queries.

> available processors or only one? How may we work around the periodic need
> of a VACUUM on such big tables (and they are vastly modified every day),
> without disrupting the server availability?

In 7.2.2, VACUUM doesn't lock tables so you can vacuum more easily.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

pgsql-general by date:

Previous
From: "Jerome Chochon"
Date:
Subject: Triggers and Rules
Next
From: "Warren Massengill"
Date:
Subject: php