error updating a very large table - Mailing list pgsql-performance

From: Brian Cox
Subject: error updating a very large table
Msg-id: 49E52D34.4080200@ca.com
Responses: Re: error updating a very large table (Grzegorz Jaśkiewicz <gryzman@gmail.com>)
           Re: error updating a very large table (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
ts_defect_meta_values has 460M rows. The following query, in retrospect
not too surprisingly, runs out of memory on a 32-bit postgres:

update ts_defect_meta_values set ts_defect_date=(select ts_occur_date
from ts_defects where ts_id=ts_defect_id)

I changed the logic to update the table in 1M row batches. However,
after 159M rows, I get:

ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of
8192 bytes at block 7621407
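
For reference, one common way to batch such an update is to carve it into key
ranges and run one UPDATE per range. The post does not show the actual batched
SQL, so the use of ts_defect_id ranges and the :lo/:hi placeholders below are
assumptions, a sketch of a single batch only:

-- Sketch of one 1M-row batch; :lo and :hi stand for the current slice's
-- boundaries (assumed here to be ranges of ts_defect_id). Per the post,
-- all batches ran inside a single transaction.
UPDATE ts_defect_meta_values
   SET ts_defect_date = (SELECT ts_occur_date
                           FROM ts_defects
                          WHERE ts_id = ts_defect_id)
 WHERE ts_defect_id >= :lo
   AND ts_defect_id <  :hi;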

A df run on this machine shows plenty of space:

[root@rql32xeoall03 tmp]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2            276860796 152777744 110019352  59% /
/dev/sda1               101086     11283     84584  12% /boot
none                   4155276         0   4155276   0% /dev/shm
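
Since df shows free space, one diagnostic step is to find out which table or
index the failing file belongs to. The last number in the error's path (19505)
is the relation's filenode, which can be looked up in pg_class; a quick check,
run in the database being updated:

-- Map the relfilenode from the error message back to a relation name.
SELECT relname, relkind
  FROM pg_class
 WHERE relfilenode = 19505;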

The updates are all done inside a single transaction. This is postgres 8.3.5.

Ideas on what is going on would be appreciated.

Thanks,
Brian
