Thread: error updating a very large table

error updating a very large table

From: Brian Cox
ts_defect_meta_values has 460M rows. The following query, in retrospect
not too surprisingly, runs out of memory on a 32 bit postgres:

update ts_defect_meta_values set ts_defect_date=(select ts_occur_date
from ts_defects where ts_id=ts_defect_id)

I changed the logic to update the table in 1M row batches. However,
after 159M rows, I get:

ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of
8192 bytes at block 7621407

A df run on this machine shows plenty of space:

[root@rql32xeoall03 tmp]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2            276860796 152777744 110019352  59% /
/dev/sda1               101086     11283     84584  12% /boot
none                   4155276         0   4155276   0% /dev/shm

The updates are done inside of a single transaction. postgres 8.3.5.
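
(For reference, a batched version of this update, driven by ranges of a
surrogate key, might look roughly like the sketch below; the post doesn't
show the actual batching code, so the "id" column and the range driver are
assumptions.)

-- sketch only: assumes ts_defect_meta_values has an integer key "id"
-- (not shown above) and that the batches step through it 1M rows at a time
update ts_defect_meta_values
set ts_defect_date = (select ts_occur_date
                      from ts_defects
                      where ts_id = ts_defect_id)
where id >= 1 and id < 1000001;
-- ...then repeat with id >= 1000001 and id < 2000001, and so on,
-- all inside the single transaction described above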

Ideas on what is going on appreciated.

Thanks,
Brian

Re: error updating a very large table

From: Grzegorz Jaśkiewicz
On Wed, Apr 15, 2009 at 1:41 AM, Brian Cox <brian.cox@ca.com> wrote:
> ts_defect_meta_values has 460M rows. The following query, in retrospect not
> too surprisingly, runs out of memory on a 32 bit postgres:
>
> update ts_defect_meta_values set ts_defect_date=(select ts_occur_date from
> ts_defects where ts_id=ts_defect_id)
>
> I changed the logic to update the table in 1M row batches. However, after
> 159M rows, I get:
>
> ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of 8192
> bytes at block 7621407
>
> A df run on this machine shows plenty of space:
>
> [root@rql32xeoall03 tmp]# df
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sda2            276860796 152777744 110019352  59% /
> /dev/sda1               101086     11283     84584  12% /boot
> none                   4155276         0   4155276   0% /dev/shm
>
> The updates are done inside of a single transaction. postgres 8.3.5.
>
> Ideas on what is going on appreciated.
>
Any triggers on the updated table?

As for the update query performance, try a different way of doing it:
update foo set bar = foo2.z from foo2 where foo.id = foo2.id;
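
Applied to the statement from the original post, that form would look
something like this (untested sketch, using only the table and column
names given above):

update ts_defect_meta_values
set ts_defect_date = d.ts_occur_date
from ts_defects d
where d.ts_id = ts_defect_meta_values.ts_defect_id;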


--
GJ

Re: error updating a very large table

From: Tom Lane
Brian Cox <brian.cox@ca.com> writes:
> I changed the logic to update the table in 1M row batches. However,
> after 159M rows, I get:

> ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of
> 8192 bytes at block 7621407

You're out of disk space.

> A df run on this machine shows plenty of space:

Per-user quota restriction, perhaps?

I'm also wondering about temporary files, although I suppose 100G worth
of temp files is a bit much for this query.  But you need to watch df
while the query is happening, rather than suppose that an after-the-fact
reading means anything.
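
(If watching df isn't convenient, one rough way to spot-check temp file
usage from SQL while the query runs is sketched below; it requires
superuser, assumes the default temp file location under the data
directory, and pg_ls_dir will raise an error if base/pgsql_tmp doesn't
exist yet.)

-- sketch: list current temp files and their sizes (superuser only);
-- base/pgsql_tmp is the default location and may not exist yet
select fn as temp_file,
       (pg_stat_file('base/pgsql_tmp/' || fn)).size as bytes
from pg_ls_dir('base/pgsql_tmp') as fn;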

            regards, tom lane

Re: error updating a very large table

From: Simon Riggs
On Wed, 2009-04-15 at 09:51 -0400, Tom Lane wrote:
> Brian Cox <brian.cox@ca.com> writes:
> > I changed the logic to update the table in 1M row batches. However,
> > after 159M rows, I get:
>
> > ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of
> > 8192 bytes at block 7621407
>
> You're out of disk space.
>
> > A df run on this machine shows plenty of space:
>
> Per-user quota restriction, perhaps?
>
> I'm also wondering about temporary files, although I suppose 100G worth
> of temp files is a bit much for this query.  But you need to watch df
> while the query is happening, rather than suppose that an after-the-fact
> reading means anything.

Any time we get an out-of-space error we will be in the same situation.

When we get this error, we should report:
* a summary of current temp file usage
* df output (if possible on the OS)

Otherwise we'll always be wondering what caused the error.

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support