Re: error updating a very large table

From: Tom Lane
Subject: Re: error updating a very large table
Date:
Msg-id: 9060.1239803497@sss.pgh.pa.us
In response to: error updating a very large table  (Brian Cox)
Responses: Re: error updating a very large table  (Simon Riggs)
List: pgsql-performance

Tree view

error updating a very large table  (Brian Cox)
 Re: error updating a very large table  (Grzegorz Jaśkiewicz)
 Re: error updating a very large table  (Tom Lane)
  Re: error updating a very large table  (Simon Riggs)

Brian Cox <> writes:
> I changed the logic to update the table in 1M row batches. However,
> after 159M rows, I get:

> ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of
> 8192 bytes at block 7621407

You're out of disk space.
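
The path in the message is tablespace-OID/database-OID/relfilenode, and
block 7621407 at the default 8K block size means the relation had already
reached roughly 58GB when the write failed.  A quick way to see which
relation that is and how big it's gotten (untested sketch; "yourdb" stands
in for whatever database has OID 16385):

    # 1663 is normally the pg_default tablespace
    psql -c "SELECT datname FROM pg_database WHERE oid = 16385"
    # connect to that database and look up the relfilenode
    psql -d yourdb -c "SELECT relname, pg_size_pretty(pg_relation_size(oid))
                         FROM pg_class WHERE relfilenode = 19505"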

> A df run on this machine shows plenty of space:

Per-user quota restriction, perhaps?
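
That's easy enough to rule out from a shell (untested sketch; run it as
whatever OS account the postmaster runs under):

    quota -s    # per-user usage against any limits, if quotas are enabled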

I'm also wondering about temporary files, although I suppose 100G worth
of temp files is a bit much for this query.  But you need to watch df
while the query is happening, rather than suppose that an after-the-fact
reading means anything.
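
Something like this in another terminal would catch both the table growth
and any temp-file spike as they happen (paths are guesses at a stock
install; adjust for your PGDATA and any tablespaces):

    # sorts and hashes spill under $PGDATA/base/pgsql_tmp by default
    watch -n 10 '
      df -h /var/lib/pgsql/data
      du -sh /var/lib/pgsql/data/base/pgsql_tmp 2>/dev/null
    '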

            regards, tom lane

