Performance with very large tables - Mailing list pgsql-general

From Jan van der Weijde
Subject Performance with very large tables
Date
Msg-id 4B9C73D1EB78FE4A81475AE8A553B3C67DC532@exch-lei1.attachmate.com
Responses Re: Performance with very large tables  (Richard Huxton <dev@archonet.com>)
Re: Performance with very large tables  (Bruno Wolff III <bruno@wolff.to>)
List pgsql-general
Hello all,
 
One of our customers is using PostgreSQL with tables containing millions of records. A simple 'SELECT * FROM <table>' takes far too long in that case, so we advised him to use the LIMIT and OFFSET clauses. However, he now has a concurrency problem: records deleted, added, or updated by one process shift the OFFSET seen by another process, so records are either skipped or read again.
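For concreteness, the paging pattern in use looks roughly like this (the table name 'orders' and the batch size are made up for illustration):

    SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 0;
    -- ... process the batch, then fetch the next one:
    SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 1000;
    -- If another session deletes rows that fell inside the first batch
    -- between these two queries, later rows shift forward and the second
    -- query skips them; inserts before the offset cause rows to be
    -- returned twice.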
The solution to that problem is to use transactions with isolation level SERIALIZABLE. But wrapping a loop that reads millions of records in a single transaction is far from ideal, I think.
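That is, something like the following around the whole loop (a sketch only; 'orders' is again a placeholder):

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- every LIMIT/OFFSET query in the loop now runs against one
    -- stable snapshot, so concurrent changes no longer shift the offset
    SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 0;
    -- ... many more batches ...
    COMMIT;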
Does anyone have a suggestion for this problem? Is there, for instance, an alternative to LIMIT/OFFSET so that SELECT on large tables performs well?
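One direction we wondered about is a server-side cursor, which would pin a single snapshot without an OFFSET that grows with every batch (sketch only; the cursor and table names are made up):

    BEGIN;
    DECLARE big_scan CURSOR FOR SELECT * FROM orders ORDER BY id;
    FETCH 1000 FROM big_scan;   -- repeat until no rows are returned
    CLOSE big_scan;
    COMMIT;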
 
Thank you for your help
 
Jan van der Weijde
 
