Re: error updating a very large table - Mailing list pgsql-performance

From: Grzegorz Jaśkiewicz
Subject: Re: error updating a very large table
Date:
Msg-id: 2f4958ff0904150419i7a8c2461m5c74acc508a0cfa6@mail.gmail.com
In response to: error updating a very large table (Brian Cox <brian.cox@ca.com>)
List: pgsql-performance
On Wed, Apr 15, 2009 at 1:41 AM, Brian Cox <brian.cox@ca.com> wrote:
> ts_defect_meta_values has 460M rows. The following query, in retrospect not
> too surprisingly, runs out of memory on a 32 bit postgres:
>
> update ts_defect_meta_values set ts_defect_date=(select ts_occur_date from
> ts_defects where ts_id=ts_defect_id)
>
> I changed the logic to update the table in 1M row batches. However, after
> 159M rows, I get:
>
> ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of 8192
> bytes at block 7621407
>
> A df run on this machine shows plenty of space:
>
> [root@rql32xeoall03 tmp]# df
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sda2            276860796 152777744 110019352  59% /
> /dev/sda1               101086     11283     84584  12% /boot
> none                   4155276         0   4155276   0% /dev/shm
>
> The updates are done inside of a single transaction. postgres 8.3.5.
>
> Ideas on what is going on appreciated.
>
Any triggers on the updated table?
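You can check quickly with \d ts_defect_meta_values in psql, or against the catalog (a sketch, using the table name from your post):

SELECT tgname FROM pg_trigger WHERE tgrelid = 'ts_defect_meta_values'::regclass;

On 8.3 that also lists the RI_ConstraintTrigger_* rows backing foreign keys, which you can ignore.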

As for the update query performance, try a different way of doing it, with UPDATE ... FROM:

UPDATE foo SET bar = foo2.z FROM foo2 WHERE foo.id = foo2.id;
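
Applied to your case that would be something like (an untested sketch, table and column names taken from your original query):

UPDATE ts_defect_meta_values m
SET ts_defect_date = d.ts_occur_date
FROM ts_defects d
WHERE d.ts_id = m.ts_defect_id;

That way the planner can join the two tables instead of running the subselect once per updated row.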


--
GJ
