If you could post the queries in question along with the table structure and
EXPLAIN output of the queries, I'm sure someone would be able to suggest
something.
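For readers following along, the kind of information being asked for can be produced as below. The table and column names here are hypothetical, invented to match the problem description:

```sql
-- Hypothetical table matching the report: an int8 primary key, ~500,000 rows.
CREATE TABLE events (
    id      int8 PRIMARY KEY,
    payload text
);

-- Ask the planner how it intends to execute the statement.
EXPLAIN DELETE FROM events WHERE id = 42;
-- A plan line like "Seq Scan on events" means the whole table is read;
-- "Index Scan using events_pkey on events" means the index is being used.
```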
-Mitch
----- Original Message -----
From: "Alfred Perlstein" <bright@wintelcom.net>
To: "Igor V. Rafienko" <igorr@ifi.uio.no>
Cc: <pgsql-general@postgresql.org>
Sent: Friday, October 13, 2000 10:47 AM
Subject: Re: [GENERAL] Postgres-7.0.2 optimization question
> * Igor V. Rafienko <igorr@ifi.uio.no> [001013 05:09] wrote:
> >
> >
> > Hi,
> >
> >
> > I've got a slight optimization problem with postgres and I was hoping
> > someone could give me a clue as to what could be tweaked.
> >
> > I have a couple of tables which contain relatively little data (around
> > 500,000 tuples each), yet most operations take an insanely long time to
> > complete. The primary keys in both tables are ints (int8, iirc). When I
> > perform a delete (with a where clause on part of a primary key), an
> > strace shows that postgres reads the entire table sequentially (lseek()
> > and read()). Since each table is around 200MB, this takes time.
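One likely culprit with int8 keys on 7.0, worth checking though not a certainty for this case: the planner of that era treated a bare integer literal as int4 and would not use an int8 index for the cross-type comparison. Quoting or casting the constant lets the index be considered (the table name below is hypothetical):

```sql
-- Sequential scan on 7.0: the bare literal 42 is int4, the column is int8.
EXPLAIN DELETE FROM events WHERE id = 42;

-- An index scan becomes possible once the constant has the column's type.
EXPLAIN DELETE FROM events WHERE id = '42';        -- untyped quoted literal
EXPLAIN DELETE FROM events WHERE id = 42::int8;    -- explicit cast
```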
>
> PostgreSQL fails to use the index on several of our tables; an
> 'EXPLAIN <query>' would probably show a lot of lines about
> a 'sequential scan'.
>
> The only workaround I've come across is to issue a
> 'set enable_seqscan=off;' SQL statement before most of my queries
> to force PostgreSQL to use an index.
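A minimal sketch of that workaround, scoping the setting to a single statement and restoring it afterwards (query and table names are placeholders):

```sql
-- Discourage sequential scans for this session only; this does not disable
-- them outright, it just makes the planner strongly prefer an index.
SET enable_seqscan = off;

DELETE FROM events WHERE id = '42';

-- Restore the default so subsequent queries are planned normally.
SET enable_seqscan = on;
```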
>
> hope this helps,
> -Alfred
>