Re: error updating a very large table - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: error updating a very large table
Msg-id: 9060.1239803497@sss.pgh.pa.us
In response to: error updating a very large table (Brian Cox <brian.cox@ca.com>)
Responses: Re: error updating a very large table (Simon Riggs <simon@2ndQuadrant.com>)
List: pgsql-performance
Brian Cox <brian.cox@ca.com> writes:
> I changed the logic to update the table in 1M row batches. However,
> after 159M rows, I get:

> ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of
> 8192 bytes at block 7621407

You're out of disk space.
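
For what it's worth, the numbers in that path can be mapped back to the object
involved: 1663 is the pg_default tablespace, 16385 is the database OID, and
19505 is the relation's relfilenode. A lookup along these lines should name
the relation (the database name is a placeholder; use whichever database has
OID 16385):

    # "your_database" is hypothetical -- substitute your own
    psql -d your_database -c \
        "SELECT relname FROM pg_class WHERE relfilenode = 19505;"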

> A df run on this machine shows plenty of space:

Per-user quota restriction, perhaps?
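
To rule that out, one could check the quota for the OS account the server
runs as (commonly "postgres"; adjust if yours differs), e.g.:

    # run as root; -s prints sizes in human-readable units
    quota -s -u postgres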

I'm also wondering about temporary files, although I suppose 100G worth
of temp files is a bit much for this query.  But you need to watch df
while the query is happening, rather than suppose that an after-the-fact
reading means anything.
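
A crude way to do that is to sample free space and temp-file usage every few
seconds for the duration of the run. The data-directory path below is a
placeholder; with the default tablespace, temporary files land under
base/pgsql_tmp inside it:

    # refresh every 5 seconds while the UPDATE is running
    watch -n 5 'df -k /path/to/pgdata; du -sk /path/to/pgdata/base/pgsql_tmp'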

            regards, tom lane
