Data files became huge with no apparent reason - Mailing list pgsql-general
From | Dario Fumagalli
Subject | Data files became huge with no apparent reason
Date |
Msg-id | 3D6C7193.7060702@tin.it
Responses | Re: Data files became huge with no apparent reason
List | pgsql-general
[Please disregard my previous post; this one is more up to date.]

Hello to all,

I have just returned from a two-week vacation and I am now facing a problem that is effectively blocking a critical database server.

I have a main database (proj_store) that holds 34 objects (tables and sequences). Only one table has more than a few hundred records (the products table, totalling 975 records). The backend is PostgreSQL 7.1.1 on a Caldera Linux OpenServer, compiled from sources. A full, uncompressed SQL dump (with full INSERTs) is about 3.4 MB.

Each day, a cron-driven program I wrote one year ago updates the database from a MS-SQL 7.0 server (inserts and updates only, in the range of 2-5 each day). So this database server (hosted on its own stand-alone corporate, pretty high-end server, a Compaq ML-370 PIII PC with RAID, 1 GB RAM, etc.) should run very smoothly, and in fact it had done so for at least a year.

But this is no longer the case. The database has grown in some abnormal fashion and "du -h" now reports the following:

    1.6M ./data/base/1
    1.5M ./data/base/18719
    470M ./data/base/242014
    11M  ./data/base/46821
    1.7M ./data/base/197097
    2.3M ./data/base/279236
    488M ./data/base
    750k ./data/global
    16M  ./data/pg_xlog
    505M ./data

"ls -l" reports:

    drwx------ 2 postgres database 2048 Jul 17 16:42 1
    drwx------ 2 postgres database 2048 May 22  2001 18719
    drwx------ 2 postgres database 2048 Feb 14  2002 197097
    drwx------ 2 postgres database 3072 Jul  3 08:35 242014
    drwx------ 2 postgres database 2048 Jul 17 16:16 279236
    drwx------ 2 postgres database 3072 Jan 29  2002 46821

That is, this DB is now 500+ MB! And to think that I perform VACUUM ANALYZE at least weekly! The only period I did not VACUUM the database was during the two weeks of vacation mentioned above.

The database filled its partition and the backend crashed. This morning, when I was told of the malfunction, I tried to restart the daemon as always with:

    /etc/rc.d/init.d/postgres start

and it blew up. Then I tried manually, and it reported a FATAL 2 error (searching the archives revealed that it means "disk full", and from that I understood that the database had grown too large). So I freed some disk space (52 MB) and the backend finally restarted.

By the way, I have a mirrored development machine, last updated just before I left for vacation. There, "du -h" reports the following reassuring stats:

    1.6M ./1
    1.5M ./18719
    7.3M ./250600
    11M  ./46821
    1.7M ./197097
    2.2M ./259865
    26M  .

(Some directories have different names, but I assure you the contents are the same.)

Now, the questions are:

- How is this exceptional growth possible (N.B. the database server has all the default values set in its configuration files, i.e. it was not "optimized")?
- More urgently: why does it now take 30 seconds to perform a SELECT count(*) on 900 records (no other clients connected), while in the past it was almost instantaneous? All database operations are now slow as dogs. And I have to live with these 52 MB until the technician comes with a new disk (one week... he is on vacation now).
- Why does the backend crash immediately if I try to VACUUM (ANALYZE) it? I think the backend claims additional disk space for this operation and fills the disk again, but I am not sure.
- And last, but not least: is it possible to restore the situation without losing data (my backup is three weeks old)? I can start the daemon and perform SQL operations, but I do not know how to make PostgreSQL release the disk space after I dump the database in order to reload it. And I fear an initdb would destroy the db users and their privileges. (A diagnostic sketch and the reload procedure I have in mind follow below.)
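For what it is worth, this is how I am trying to work out where the space went. A sketch only: I am assuming the directory names under data/base are database OIDs and that pg_class.relpages roughly reflects on-disk size (it is only refreshed by VACUUM/ANALYZE, so it may be stale here):

    # Which database owns the 470 MB directory?
    psql template1 -c "SELECT datname FROM pg_database WHERE oid = 242014;"

    # Which relations (tables and indexes) occupy the most pages?
    psql proj_store -c "SELECT relname, relpages FROM pg_class ORDER BY relpages DESC LIMIT 10;"

With the default 8 kB page size, relpages * 8 kB should point at the relations (or their indexes) accounting for the bloat.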
Thanks in advance,
Dario Fumagalli
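P.S. In case dump and reload turns out to be the answer, this is roughly the procedure I have in mind. Again only a sketch, untested, and it assumes there is a partition with enough free space for the dump file (the /mnt/spare path below is just a placeholder):

    # Dump the bloated database to a partition with free space
    pg_dump proj_store > /mnt/spare/proj_store.sql

    # Drop and recreate it, which should release the files under data/base/242014
    psql template1 -c "DROP DATABASE proj_store;"
    psql template1 -c "CREATE DATABASE proj_store;"

    # Reload the data
    psql proj_store < /mnt/spare/proj_store.sql

My understanding is that users and their privileges live in the shared catalogs under data/global, so dropping a single database should not touch them and no initdb should be needed. Please correct me if this is wrong.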