On Thu, Feb 21, 2008 at 1:07 AM, hewei <heweiweihe@gmail.com> wrote:
> Hi, Scott Marlowe:
>
> You said that "As for processing them in order versus randomly, that's a
> common problem."
> Do you know why? How does Postgres work in this scenario?
Pretty much the same way any database would. It's likely that the
data in your table is in some order on disk. When you update one row,
the whole page it lives on is read into memory, so the next n rows
come along for free. Updating those rows is cheaper because the page
doesn't have to be read from disk again; the changes just get written
to the write-ahead log and the dirty page is flushed later. If you
have very random access on a table much larger than your
shared_buffers or the OS cache, then it's likely that by the time you
get back to a row on page x, that page has already been evicted from
the OS cache or Postgres's buffers and has to be fetched from disk
again.
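To make that concrete, here's a toy simulation of the effect. Nothing
here is Postgres-specific: the LRU cache is just a stand-in for the
buffer pool (Postgres actually uses a clock-sweep algorithm, but the
caching effect is the same), and all the sizes are made up for the demo.

```python
# Toy model: why in-order updates beat random ones when the table is
# bigger than the cache. A plain LRU dict stands in for the buffer pool.
from collections import OrderedDict
import random

ROWS_PER_PAGE = 100      # assumption: ~100 rows fit on one 8 kB page
NUM_ROWS = 100_000       # table of 100k rows -> 1,000 pages
CACHE_PAGES = 100        # cache holds only 10% of the table

def hit_rate(row_order):
    cache = OrderedDict()            # page number -> True, in LRU order
    hits = reads = 0
    for row in row_order:
        page = row // ROWS_PER_PAGE
        reads += 1
        if page in cache:
            hits += 1
            cache.move_to_end(page)  # mark page as recently used
        else:
            cache[page] = True       # "read" the page from disk
            if len(cache) > CACHE_PAGES:
                cache.popitem(last=False)  # evict least recently used
    return hits / reads

rows = list(range(NUM_ROWS))
print(f"in-order hit rate: {hit_rate(rows):.1%}")  # ~99%: neighbors share a page
random.shuffle(rows)
print(f"random hit rate:   {hit_rate(rows):.1%}")  # ~10%: pages evicted before reuse
```

With in-order access each page is read once and then serves the next
99 rows from memory; with random access on a table ten times the
cache size, a page is usually gone again before you come back to it.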