Re: Table partition for very large table - Mailing list pgsql-general

From Scott Marlowe
Subject Re: Table partition for very large table
Date
Msg-id 1112041102.22988.4.camel@state.g2switchworks.com
In response to Re: Table partition for very large table  (Yudie Gunawan <yudiepg@gmail.com>)
Responses Re: Table partition for very large table
List pgsql-general
On Mon, 2005-03-28 at 13:50, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain <query> tell you?  Have you analyzed the database?
>
>
> This is the QUERY PLAN
> Hash Left Join  (cost=25.00..412868.31 rows=4979686 width=17)
>   Hash Cond: (("outer".groupnum = "inner".groupnum) AND
> (("outer".sku)::text = ("inner".sku)::text))
>   Filter: (("inner".url IS NULL) OR (("inner".url)::text = ''::text))
>   ->  Seq Scan on prdt_old mc  (cost=0.00..288349.86 rows=4979686 width=17)
>   ->  Hash  (cost=20.00..20.00 rows=1000 width=78)
>         ->  Seq Scan on prdt_new mi  (cost=0.00..20.00 rows=1000 width=78)
>
>
> > What are your postgresql.conf settings?
>
> What suspected specific setting need to be changed?

sort_mem, also known as work_mem (in 8.0)
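
Something along these lines is worth trying; the 64 MB figure below is only
an illustration, not a recommendation for your hardware:

  -- 8.0: value is in kB, so this is 64 MB for the current session
  SET work_mem = 65536;
  -- on pre-8.0 releases the same setting is called sort_mem
  SET sort_mem = 65536;

You can also set it in postgresql.conf to make it the default for all sessions.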

Also, and this is important, have you analyzed the table?  I'm guessing no,
since the estimates are 1,000 rows, but the hash join is getting a little
bit more than that.  :)

Analyze your database and then run the query again.
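
Roughly like this, using the table names from your plan, then re-run the
EXPLAIN to see whether the row estimates change:

  ANALYZE prdt_old;
  ANALYZE prdt_new;
  -- or plain ANALYZE; to do every table in the database

  EXPLAIN <your query>;  -- compare the new estimates against the old plan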
