Thread: Excess disk usage

Excess disk usage

From: John Summerfield
I attempted to load data amounting to 21 mbytes into a table which has
a unique key but otherwise doesn't have indexes.

The WALs consumed 2.9 Gigabytes of disk (and doubtless would have taken
more if there were more to be had).


Considering that the entire data would fit into RAM (I have 128 Mbytes),
I think this is a little excessive.
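
Roughly speaking, the load looked like this (the table, column, and file
names below are stand-ins, not the real schema):

  # stand-ins only: a single unique key, no other indexes, ~21 mbytes of input
  psql mydb -c "CREATE TABLE t (id integer UNIQUE, payload text);"
  psql mydb -c "COPY t FROM '/tmp/load.dat';"   # runs as one transaction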

In the interests of having space for other programs, I removed all the
logs (recognising that I might have to start again).

Now I find that postmaster won't start.

2001-08-28 06:56:48 [5093]   DEBUG:  redo starts at (1, 259081116)
2001-08-28 06:56:54 [5093]   DEBUG:  open(logfile 1 seg 16) failed: No such file or directory
2001-08-28 06:56:54 [5093]   DEBUG:  redo done at (1, 268428788)
2001-08-28 06:57:01 [5093]   FATAL 2:  ZeroFill(/var/lib/pgsql/data/pg_xlog/xlogtemp.5093) failed: No such file or directory
/usr/bin/postmaster: Startup proc 5093 exited with status 512 - abort
2001-08-28 09:16:19 [18473]  DEBUG:  database system was shut down at 2001-08-28 06:57:00 WST
2001-08-28 09:16:19 [18473]  DEBUG:  open(logfile 1 seg 15) failed: No such file or directory
2001-08-28 09:16:19 [18473]  DEBUG:  Invalid primary checkPoint record
2001-08-28 09:16:19 [18473]  DEBUG:  open(logfile 1 seg 15) failed: No such file or directory
2001-08-28 09:16:19 [18473]  DEBUG:  Invalid secondary checkPoint record
2001-08-28 09:16:19 [18473]  FATAL 2:  Unable to locate a valid CheckPoint record
/usr/bin/postmaster: Startup proc 18473 exited with status 512 - abort
[root@dugite log]#

The actual messages are no surprise.
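
For what it's worth, the only escape hatch I can find short of initdb is
contrib's pg_resetxlog, assuming it's built for this installation (the
data directory below matches the log output above):

  # assumption: contrib/pg_resetxlog from the matching 7.1 source tree is built.
  # It writes a fresh pg_control and an empty WAL segment so the postmaster
  # can start; anything not checkpointed before the logs were deleted is lost.
  pg_resetxlog /var/lib/pgsql/data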


I appreciate that I've not provided a lot of information about what I'm
trying to do. However, I don't think that ANYTHING I do should cause PG
to use so much disk for so little data.


--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my
disposition.

Re: Excess disk usage

From: Tom Lane
John Summerfield <summer@os2.ami.com.au> writes:
> I attempted to load data amounting to 21 mbytes into a table which has
> a unique key but otherwise doesn't have indexes.

> The WALs consumed 2.9 Gigabytes of disk (and doubtless would have taken
> more if there was more to be had).

That seems like a large growth factor.  What is the exact schema
declaration of the table, and how are you measuring the "21 mbytes"?

The immediate problem should be fixed if you update to 7.1.3, but
I'm curious about the 100:1 WAL-size-to-data-size ratio that you're
reporting.
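
A quick sanity check on both numbers is just du over the relevant paths
(the input filename below is a placeholder; the data-directory paths are
taken from your log output):

  # placeholder input path; data-directory layout taken from the log above
  du -sh /path/to/input.dat               # raw input data
  du -sh /var/lib/pgsql/data/base         # heap and index files
  du -sh /var/lib/pgsql/data/pg_xlog      # WAL segments, 16 MB apiece by default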

            regards, tom lane