Thread: BUG #8612: Truncate did not release disk space
The following bug has been logged on the website:

Bug reference:      8612
Logged by:          Eduardo Armendariz
Email address:      eduardoa@mirthcorp.com
PostgreSQL version: 9.0.13
Operating system:   CentOS
Description:

Ran out of disk space and postgres shut down. Recovered enough disk space
for the database to be operational. Truncated the largest table in the
database, the message table. This table had over 600 GB of data. The result
of the truncate was that only about 200 GB of the data was actually released
to the OS. I made sure the data was actually in the postgres directory:

[05:41 PM|V1000-04-3DE62003-0001|mirth primary device|/opt]# du -sh *
3.4M    openvpn
380K    pdmenu
437G    pgsql

Size of tables after the truncate of the message table:

mirthdb=# SELECT relname AS "tablename",
                 pg_size_pretty(pg_total_relation_size(relid)) AS "total_usage",
                 pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) AS "external_table_usage"
          FROM pg_catalog.pg_statio_user_tables
          ORDER BY pg_total_relation_size(relid) DESC;

     tablename      | total_usage | external_table_usage
--------------------+-------------+----------------------
 event              | 101 MB      | 85 MB
 channel_statistics | 37 MB       | 35 MB
 channel            | 1944 kB     | 1808 kB
 script             | 1552 kB     | 624 kB
 message            | 1032 kB     | 768 kB
 template           | 168 kB      | 160 kB
 alert              | 64 kB       | 56 kB
 person             | 64 kB       | 56 kB
 configuration      | 48 kB       | 40 kB
 person_password    | 40 kB       | 32 kB
 channel_alert      | 40 kB       | 32 kB
 alert_email        | 40 kB       | 32 kB
 attachment         | 24 kB       | 24 kB
 code_template      | 16 kB       | 16 kB
 schema_info        | 8192 bytes  | 0 bytes

Here is the size of the actual mirthdb database, though:

mirthdb=# SELECT pg_database.datname,
                 pg_size_pretty(pg_database_size(pg_database.datname)) AS size
          FROM pg_database;

  datname  |  size
-----------+---------
 template1 | 5329 kB
 template0 | 5329 kB
 postgres  | 5457 kB
 mirthdb   | 465 GB
(4 rows)

I have never seen truncate behave this way, and I am not able to explain
this behavior.
Here are the logs from around the time of the truncate:

2013-11-16 16:29:16 MST - LOG: could not write temporary statistics file "pg_stat_tmp/pgstat.tmp": No space left on device
2013-11-16 16:29:16 MST - LOG: could not write temporary statistics file "pg_stat_tmp/pgstat.tmp": No space left on device
2013-11-16 17:24:19 MST - LOG: database system shutdown was interrupted; last known up at 2013-11-16 16:42:57 MST
2013-11-16 17:24:19 MST - LOG: database system was not properly shut down; automatic recovery in progress
2013-11-16 17:24:19 MST - LOG: redo starts at CDE/EFFFEFF8
2013-11-16 17:24:19 MST - FATAL: the database system is starting up
2013-11-16 17:24:20 MST - LOG: record with zero length at CDE/F34E3B98
2013-11-16 17:24:20 MST - LOG: redo done at CDE/F34E3B58
2013-11-16 17:24:20 MST - LOG: last completed transaction was at log time 2013-11-16 16:42:53.361929-07
2013-11-16 17:24:20 MST - FATAL: the database system is starting up
2013-11-16 17:24:21 MST - LOG: autovacuum launcher started
2013-11-16 17:24:21 MST - LOG: database system is ready to accept connections
2013-11-16 17:32:51 MST - LOG: automatic analyze of table "mirthdb.public.channel_statistics" system usage: CPU 0.00s/0.00u sec elapsed 0.04 sec
2013-11-16 17:32:52 MST - LOG: automatic vacuum of table "mirthdb.public.script": index scans: 1 pages: 0 removed, 116 remain tuples: 294 removed, 346 remain system usage: CPU 0.00s/0.00u sec elapsed 0.08 sec
2013-11-16 17:32:52 MST - LOG: automatic analyze of table "mirthdb.public.script" system usage: CPU 0.00s/0.00u sec elapsed 0.03 sec
2013-11-16 17:33:52 MST - LOG: automatic analyze of table "mirthdb.public.message" system usage: CPU 0.00s/0.02u sec elapsed 0.15 sec
2013-11-16 17:33:57 MST - LOG: automatic vacuum of table "mirthdb.public.channel_statistics": index scans: 1 pages: 0 removed, 228 remain tuples: 168 removed, 147 remain system usage: CPU 0.00s/0.00u sec elapsed 5.02 sec
2013-11-16 17:33:57 MST - LOG: automatic analyze of table "mirthdb.public.channel_statistics" system usage: CPU 0.00s/0.00u sec elapsed 0.04 sec
2013-11-16 17:34:52 MST - LOG: automatic vacuum of table "mirthdb.pg_toast.pg_toast_2619": index scans: 1 pages: 0 removed, 69 remain tuples: 207 removed, 37 remain system usage: CPU 0.00s/0.00u sec elapsed 0.13 sec

Thanks,
Eduardo Armendariz
On 11/20/2013 08:35 PM, eduardoa@mirthcorp.com wrote:
> Ran out of disk space and postgres shut down. Recovered enough disk space
> for database to be operational. Truncated the largest table in the database,
> the message table. This table had over 600gb of data. The result of the
> truncate was that only about 200gb of the data was actually released to the
> OS.

Are you sure that no other backend was/is actually still holding an open
file handle on that table? An open file handle will prevent the OS from
showing the freed space on the filesystem. That can happen if you have
backends still running that once referenced the now-truncated table and
have not done any work since you did the truncate (like a large connection
pool or idle client connections, maybe an open psql session or something
like that).


Stefan
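Stefan's open-file-handle theory can be checked from the shell. A minimal, Linux-specific sketch (the function name is made up; `lsof +L1` on the data filesystem is the more usual tool, but scanning /proc avoids the dependency):

```shell
#!/bin/sh
# List file descriptors of a given PID that still point at deleted files.
# On Linux, /proc/<pid>/fd/* are symlinks; once the target file has been
# unlinked, readlink reports the old path followed by " (deleted)".
# Usage: find_deleted_open <pid>
find_deleted_open() {
    pid="$1"
    for fd in /proc/"$pid"/fd/*; do
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            *" (deleted)") echo "$pid $fd $target" ;;
        esac
    done
}
```

Run against each postgres backend PID (e.g. from `pgrep postgres`), this would show whether any backend is still pinning the truncated relation's old segment files; nothing in the report rules that out, since `du` and `pg_database_size` were not compared against `df`.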
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
> On 11/20/2013 08:35 PM, eduardoa@mirthcorp.com wrote:
>> Ran out of disk space and postgres shut down. Recovered enough disk space
>> for database to be operational. Truncated the largest table in the database,
>> the message table. This table had over 600gb of data. The result of the
>> truncate was that only about 200gb of the data was actually released to the
>> OS.

> sure that no other backend was/is actually still a file-handle
> referenced? That open filehandle will prevent the OS from showing up the
> freed space on the filesystem and can happen if you have backends still
> running that once referenced the table now truncated and have not done
> any work since you did the truncate (like a large connection pool or
> idle client connections, maybe an open psql session or something like that).

I think recent versions of PG contain logic that should ensure such open
handles will be released within a reasonable period of time (like one
checkpoint cycle). 9.0 I wouldn't bet on, though.

If nothing else, a quick shutdown-and-restart of the database should
release the space.

			regards, tom lane
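Tom's fallback amounts to one pg_ctl invocation: a restart forces every backend to close and reopen its files, letting the kernel reclaim space held by deleted-but-still-open segments. A sketch that prints the command rather than executing it, since it acts on a live cluster (the PGDATA path is a hypothetical placeholder):

```shell
#!/bin/sh
# "-m fast" disconnects clients immediately but still shuts down cleanly,
# so no recovery is needed on the way back up.
PGDATA=/opt/pgsql/data                              # hypothetical data directory
restart_cmd="pg_ctl -D $PGDATA -m fast restart"
echo "$restart_cmd"
```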
On Fri, Nov 22, 2013 at 11:12 AM, Stefan Kaltenbrunner
<stefan@kaltenbrunner.cc> wrote:

> sure that no other backend was/is actually still a file-handle
> referenced? That open filehandle will prevent the OS from showing up the
> freed space on the filesystem and can happen if you have backends still
> running that once referenced the table now truncated and have not done
> any work since you did the truncate (like a large connection pool or
> idle client connections, maybe an open psql session or something like
> that).

If that were the case, I don't think pg_database_size would still be
counting the size, as it would no longer be able to find the files in
that directory.

I think the next step would be to get an 'ls -l' listing of the
$PGDATA/base/<dboid> directory.

Cheers,

Jeff
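Jeff's suggestion needs the database OID, which names the per-database subdirectory under $PGDATA/base. A sketch of the listing step (the helper name is made up; the OID comes from the standard pg_database catalog, e.g. `psql -Atc "SELECT oid FROM pg_database WHERE datname='mirthdb'"`):

```shell
#!/bin/sh
# List one database's data files, largest first, so any leaked segments of
# the truncated relation stand out at the top.
# Usage: list_db_files <pgdata> <dboid>
list_db_files() {
    pgdata="$1"
    dboid="$2"
    ls -lS "$pgdata/base/$dboid"
}
```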
Some of the data was still in the base directory which corresponded to the
table that was truncated.

Unfortunately, this was somewhat of a time-sensitive issue, so I did end up
dropping the whole database and recreating it. That did clear the disk space
which was originally left behind from the truncate. I don't really have the
means to reproduce this. The only application that had connections open with
the database was down at the time of the truncate. The only thing out of the
ordinary was that there was very little disk space left when I attempted to
truncate.

Thanks,
Eduardo Armendariz

On Nov 22, 2013, at 12:04 PM, Jeff Janes <jeff.janes@gmail.com> wrote:

> If that were the case, I don't think pg_database_size would still be
> counting the size, as it would no longer be able to find the files in
> that directory.
>
> I think the next step would be to get an 'ls -l' listing of the
> $PGDATA/base/<dboid> directory.
>
> Cheers,
>
> Jeff
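Since files were left behind under base/<dboid> that no surviving table accounted for, the generic diagnosis (had the database not been dropped) would be to diff the filenames on disk against the relfilenodes PostgreSQL knows about. A sketch of the comparison step, with illustrative names; the two input lists would come from `ls` and from `pg_class`, as described in the comments:

```shell
#!/bin/sh
# Report relfilenodes present on disk but unknown to pg_class ("orphans").
# Inputs are plain-text files, one number per line:
#   disk_list  - e.g.: ls $PGDATA/base/<dboid> | sed 's/[._].*//'
#                (strips fork/segment suffixes such as _fsm, _vm, .1)
#   known_list - e.g.: psql -Atc "SELECT relfilenode FROM pg_class
#                                 WHERE relfilenode <> 0"
# comm -23 prints lines appearing only in the first of two sorted inputs.
# Usage: find_orphans <disk_list> <known_list>
find_orphans() {
    sort -u "$1" > "$1.sorted"
    sort -u "$2" > "$2.sorted"
    comm -23 "$1.sorted" "$2.sorted"
    rm -f "$1.sorted" "$2.sorted"
}
```

Note this is a heuristic: a few catalog relations are mapped (relfilenode 0 in pg_class), so their on-disk files would show up as false positives and must be filtered out before deleting anything.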