[For whatever reason this got delayed in delivery. Sending again from another route. -jwb]
On Thu, 2004-05-13 at 09:28, Tom Lane wrote:
> "Jeffrey W. Baker" <jwbaker@acm.org> writes:
> > Sorry, my last mail got cut off. The server aborted because it couldn't
> > write the xlog. Looks like I omitted this from my last mail:
>
> Selective quoting of the log output? Naughty naughty.
>
> However, that still doesn't explain how you got into the current state.
> Had you once had checkpoint_segments set much higher than the current
> value of 24? On looking at the code I see that it doesn't make any
> attempt to prune future log segments after a decrease in
> checkpoint_segments, so if a previous misconfiguration had allowed the
> number of future segments to get really large, that could be the root of
> the issue.
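For anyone who wants to compare on their own install, here's a quick check (assuming $PGDATA points at the data directory; each segment file is 16MB, and the 7.4 docs say steady state should normally stay under about 2 * checkpoint_segments + 1 files):

psql -c "SHOW checkpoint_segments"
ls $PGDATA/pg_xlog | wc -l      # compare against the expected ceiling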
Okay, I installed a fresh, completely stock 7.4.2 and did the following:
#!/bin/sh
# Reproduce runaway pg_xlog growth on a stock 7.4.2 install.
createdb growxlog
echo "create table data (a int, b int, c int, d int, e int)" | psql growxlog
# Bulk-load 100 million rows of random integers via COPY.
perl -e 'use POSIX qw(floor); print "COPY data FROM STDIN;\n";
  for ($i = 0; $i < 100000000; $i++) {
    print(join("\t", $i, floor(rand()*1000000), floor(rand()*1000000),
               floor(rand()*1000000), floor(rand()*1000000)), "\n");
  }' | psql growxlog
echo "create unique index data_pkey on data(a,b,c)" | psql growxlog
The result was a table with 100 million rows, a 5.3GB data table, a
2.7GB index, and 2.6GB in pg_xlog. Reproducible every time.
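Back-of-the-envelope (assuming 16MB per segment file, and checkpoint_segments = 3 if I remember the stock default right):

echo $(( 2600 / 16 ))    # 2.6GB of xlog is ~162 segment files
echo $(( 2*3 + 1 ))      # vs. ~7 files expected at the stock setting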
For the less patient, the problem is also reproducible at only 10
million rows (xlog = 337MB), but not at 1 million rows (presumably the
whole mess fits inside the usual # of segments).
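Even at that size the count is well past the stock ceiling (same 16MB-per-segment assumption):

echo $(( 337 / 16 ))     # ~21 segment files at 10 million rows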
You'll need approximately rows*125 bytes of free space to run the test.
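Worked out for the two sizes above (plain shell arithmetic, rounding down):

echo $(( 100000000 * 125 / 1024 / 1024 / 1024 ))   # ~11GB for the 100M-row run
echo $(( 10000000 * 125 / 1024 / 1024 ))           # ~1192MB for the 10M-row run

which roughly squares with the 5.3 + 2.7 + 2.6GB actually observed at 100M rows.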
-jwb