Re: Huge Data - Mailing list pgsql-general

From Richard Huxton
Subject Re: Huge Data
Date
Msg-id 200401141148.15286.dev@archonet.com
In response to Huge Data  (Sezai YILMAZ <sezai.yilmaz@pro-g.com.tr>)
Responses Re: Huge Data  (Sezai YILMAZ <sezai.yilmaz@pro-g.com.tr>)
Re: Huge Data  (Sezai YILMAZ <sezai.yilmaz@pro-g.com.tr>)
List pgsql-general
On Wednesday 14 January 2004 11:11, Sezai YILMAZ wrote:
> Hi,
>
> I use PostgreSQL 7.4 to store a huge amount of data, for example 7
> million rows. But when I run the query "select count(*) from table;", it
> takes about 120 seconds to return. Is this normal for such a huge
> table? Are there any methods to speed up the query? The huge
> table has an integer primary key and some other indexes on other columns.

PG uses MVCC to manage concurrency. A downside of this is that to get the
exact number of rows in a table you have to visit them all, since each
transaction may see a different set of rows.

There's plenty on this in the archives, and probably the FAQ too.
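One workaround often suggested in those discussions, sketched here on the assumption that an approximate count is acceptable: read the planner's row estimate from the pg_class system catalog instead of scanning the table. Note that reltuples is only as fresh as the last VACUUM or ANALYZE.

```sql
-- Approximate row count from the planner's statistics, kept current by
-- VACUUM/ANALYZE. Returns in milliseconds, but it is an estimate, not
-- an exact count. Replace 'table' with your actual table name.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'table';
```

If you truly need an exact, frequently read count, another common pattern is to maintain a counter in a side table via triggers on INSERT and DELETE.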

What are you using the count() for?

--
  Richard Huxton
  Archonet Ltd
