Re: query cost too high, anyway to reduce it - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: query cost too high, anyway to reduce it
Date
Msg-id dcc563d10912051245l5336c7b3w8b1056c846e46b66@mail.gmail.com
In response to query cost too high, anyway to reduce it  (nair rajiv <rajivnair@gnu.org>)
List pgsql-performance
On Fri, Dec 4, 2009 at 3:15 AM, nair rajiv <rajivnair@gnu.org> wrote:

> We populated the table with data and used EXPLAIN
>
>
> dbpedia=# EXPLAIN   SELECT   nid,max(ssid) FROM gbobjects  where ssid<= 100000  group by nid  ;
>
>               QUERY PLAN
> --------------------------------------------------------------------------------------------------
>  GroupAggregate  (cost=20966.03..22944.49 rows=98923 width=27)
>    ->  Sort  (cost=20966.03..21213.34 rows=98923 width=27)
>          Sort Key: nid
>          ->  Index Scan using ssid_object on gbobjects  (cost=0.00..10388.88 rows=98923 width=27)
>                Index Cond: (ssid <= 100000)
>
>
> Total rows : 875459
>
>
> The cost is very high.

Compared to what?

> Is there a way to reduce the cost? We have kept the postgresql
> configuration files as they are, i.e. they are the default
> configuration files.
> Can the cost be reduced by changing some parameters in the
> postgresql.conf file? If yes, which are those parameters?

Sure, you can change the numbers for random_page_cost and
seq_page_cost, but the query isn't going to run any faster.  You're
retrieving close to 100k rows from an 875k-row table, and that's never
going to be cheap.
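
For example (just a sketch; the values below are illustrative, and changing
them mostly rescales the planner's estimates -- it can change which plan gets
picked, but it won't make fetching that many rows cheap):

  -- experiment with the planner cost settings for the current session only
  SET random_page_cost = 2.0;   -- default is 4.0
  SET seq_page_cost = 1.0;      -- default is 1.0

  -- re-run the estimate to see how the numbers move
  EXPLAIN SELECT nid, max(ssid) FROM gbobjects WHERE ssid <= 100000 GROUP BY nid;

The same settings can be made permanent in postgresql.conf, but again, they
change the cost units the planner reports, not the amount of work the query does.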

A better approach is to run EXPLAIN ANALYZE and look at the actual times
you're getting for each step in the query plan.
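
For instance (same query as above; output omitted, since the timings depend
entirely on your data and hardware):

  EXPLAIN ANALYZE SELECT nid, max(ssid) FROM gbobjects WHERE ssid <= 100000 GROUP BY nid;

That actually executes the query and reports real time and row counts for each
plan node, which tells you where the time is really going.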
