Re: Performance with very large tables - Mailing list pgsql-general

From: Jan van der Weijde
Subject: Re: Performance with very large tables
Date:
Msg-id: 4B9C73D1EB78FE4A81475AE8A553B3C67DC54E@exch-lei1.attachmate.com
In response to: Performance with very large tables ("Jan van der Weijde" <Jan.van.der.Weijde@attachmate.com>)
List: pgsql-general

Hi Bruno,

Good to read that the solution you advise is the one I was already
considering! Although I think this is something PostgreSQL should solve
internally, I prefer the WHERE-clause approach over a long-lasting
SERIALIZABLE transaction.
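
For contrast, that long-lasting-transaction alternative would keep a
cursor open inside a single SERIALIZABLE transaction, so that every page
sees the same snapshot. A minimal sketch, assuming a hypothetical table
"orders" with an integer primary key "id":

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- The cursor's snapshot is fixed inside this transaction, so
    -- paging stays consistent for as long as the transaction lives.
    DECLARE page_cur CURSOR FOR
        SELECT id, payload FROM orders ORDER BY id;
    FETCH 50 FROM page_cur;  -- first page
    FETCH 50 FROM page_cur;  -- next page
    CLOSE page_cur;
    COMMIT;

The drawback is that the transaction and its snapshot must stay open for
the whole browsing session, which among other things keeps VACUUM from
reclaiming dead rows in the meantime.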

Thanks,
Jan

-----Original Message-----
From: Bruno Wolff III [mailto:bruno@wolff.to]
Sent: Tuesday, January 16, 2007 19:12
To: Jan van der Weijde; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

On Tue, Jan 16, 2007 at 12:06:38 -0600,
  Bruno Wolff III <bruno@wolff.to> wrote:
>
> Depending on exactly what you want to happen, you may be able to
> continue where you left off using a condition on the primary key,
> using the last primary key value for a row that you have viewed,
> rather than OFFSET. This will still be fast and will not skip rows
> that are now visible to your transaction (or show duplicates when
> deleted rows are no longer visible to your transaction).

I should have mentioned that you also will need to use an ORDER BY
clause on the primary key when doing things this way.
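
As a minimal sketch of the pattern described above, again using the
hypothetical "orders" table:

    -- First page: ORDER BY the primary key, no OFFSET needed.
    SELECT id, payload
    FROM orders
    ORDER BY id
    LIMIT 50;

    -- Later pages: restart after the last primary key value already
    -- seen, which lets PostgreSQL descend the primary key index
    -- directly instead of reading and discarding OFFSET rows.
    SELECT id, payload
    FROM orders
    WHERE id > 12345  -- last id returned by the previous page
    ORDER BY id
    LIMIT 50;

Unlike OFFSET n, whose cost grows with n because the skipped rows must
still be read, the WHERE form stays fast no matter how deep into the
table you page.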
