oops,

> and then call your function with the values 1..15 (when using 16
> slices)

it should of course be 0..15.

Marc Mamin
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-
> owner@postgresql.org] On Behalf Of Marc Mamin
> Sent: Donnerstag, 9. August 2012 09:12
> To: Geert Mak; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] processing large amount of rows with plpgsql
>
> > > There is (almost) no way to
> > > force commit inside a function --
> >
> > So what you are saying is that this behavior is normal and we should
> > either equip ourselves with enough disk space (which I am trying now,
> > it is a cloud server, which I am resizing to gain more disk space and
> > see what will happen) or do it with an external (scripting) language?
> >
>
> Hello,
>
> A relatively simple way to work around your performance/resource
> problem is to slice the update.
>
> e.g.:
>
> create function myupdate(slice int) ...
>
> for statistics_row in
>   SELECT * FROM statistics
>   WHERE id % 16 = slice
> or, if id is unevenly distributed:
>   WHERE abs(hashtext(id::text)) % 16 = slice
>   (hashtext can return negative values, hence the abs)
> ...
>
> and then call your function with the values 1..15 (when using 16
> slices)
>
> Use a power of 2 for the number of slices.
>
> Using many slices may also be faster, since it allows the job to be
> run in parallel on a few threads.
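The slicing idea above can be illustrated outside SQL. A minimal Python sketch, where myupdate is a hypothetical stub standing in for "SELECT myupdate(s)" against the database, and the id range is made up for the demo:

```python
from concurrent.futures import ThreadPoolExecutor

NSLICES = 16                   # a power of 2, as suggested above
ids = list(range(1, 1001))     # stand-in for the statistics table's ids

def myupdate(slice_no):
    # placeholder for the real update over
    # "WHERE id % 16 = slice"; here it just returns the ids
    # this slice would touch
    return [i for i in ids if i % NSLICES == slice_no]

# each slice is independent, so they can run in parallel
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(myupdate, range(NSLICES)))

# every id lands in exactly one slice, so the 16 calls together
# cover the whole table exactly once
assert sorted(i for rows in results for i in rows) == ids
```

In practice each call would be a separate database session running "SELECT myupdate(s);", each committing on its own, which is what keeps the per-transaction resource usage bounded.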
>
> HTH,
>
> Marc Mamin
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general