Re: DB cache size strategies - Mailing list pgsql-general

From NTPT
Subject Re: DB cache size strategies
Date
Msg-id 001301c3f083$5e43f5c0$d300a8c0@webpropag.cz
In response to DB cache size strategies  ("Ed L." <pgsql@bluepolka.net>)
Responses Re: DB cache size strategies
Re: DB cache size strategies
List pgsql-general
I ran some tests on my production system during a maintenance window, so no users were using the system. It runs on an old box with a K6-2/500 and 3x 128 MB DIMMs, PostgreSQL 7.3.2.


I have a table with around 100,000 rows like this: owner int8, id int8, content text, with an index on column owner (this index is used by the test query).

I ran a test query: select * from table where owner='10000' order by id offset 100 limit 50. There are around 10,000 rows before limit and offset apply. I ran EXPLAIN ANALYZE and got this:
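For reference, a minimal sketch of the setup as described; the table name "t" and the index name are my assumptions (the post just says "table"), only the columns and the index on owner come from the description above:

    -- hypothetical reconstruction of the test table
    CREATE TABLE t (
        owner   int8,
        id      int8,
        content text
    );
    CREATE INDEX t_owner_idx ON t (owner);

    -- the test query from above
    EXPLAIN ANALYZE
    SELECT * FROM t WHERE owner = '10000' ORDER BY id OFFSET 100 LIMIT 50;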

Case 1 (almost no memory for PG):
sort_mem             = 64 kB
effective_cache_size = 64 (8 kB blocks, around 512 kB)

Took 3300 ms.


Case 2 (give a lot of RAM to PostgreSQL):
sort_mem             = 64000 kB
effective_cache_size = 8192 (8 kB blocks, around 64 MB)

Swapping occurred; execution took 5400 ms.
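A rough back-of-the-envelope (my arithmetic, not something measured in the run) for why this configuration swaps: sort_mem is a per-sort ceiling, so a single sort may allocate up to

    64000 kB ~ 62.5 MB  per sort operation, per backend

while the whole box has only 3 x 128 MB = 384 MB, shared with the kernel, shared_buffers, and the OS disk cache. Even one busy backend can push the machine into swap, and with concurrent users (each backend gets its own sort_mem) it would be far worse.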


Case 3:
sort_mem             = 16000 kB
effective_cache_size = 512 (8 kB blocks, around 4 MB)

Took 1200 ms. The best result.


Case 4:
sort_mem             = 16000 kB
effective_cache_size = 64 (8 kB blocks, around 512 kB)

Took 1900 ms. In this case I tried increasing effective_cache_size step by step (64, 128, 256, 512, 1024), but raising it above 512 had no dramatic impact on performance.
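Both settings can be changed per session with SET in 7.3, so stepping through values needs no restart. A minimal sketch of one step (the table name "t" is the assumed one from the sketch above):

    SET sort_mem = 16000;             -- in kB, as in cases 3 and 4
    SET effective_cache_size = 512;   -- in 8 kB pages, ~4 MB
    EXPLAIN ANALYZE
    SELECT * FROM t WHERE owner = '10000' ORDER BY id OFFSET 100 LIMIT 50;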


I also tested shared_buffers settings, from 2*max_connections (i.e. 128*2 = 256) up to 1024, but saw no dramatic impact on performance; my tests did not focus on this, though.
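Current values can be checked from any session; note that unlike the two settings above, changing shared_buffers itself means editing postgresql.conf and restarting the postmaster:

    SHOW shared_buffers;         -- number of 8 kB shared buffer pages
    SHOW max_connections;
    SHOW sort_mem;               -- per-sort memory ceiling, in kB
    SHOW effective_cache_size;   -- planner hint, in 8 kB pages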


It seems that:

1: Setting memory limits so high that the machine starts using swap space is WORSE than giving PostgreSQL as little memory as possible.

2: The sort_mem setting has a bigger impact on performance than effective_cache_size, as long as the DB cache holds at least hundreds of disk pages. Going above 1000 disk pages in effective_cache_size makes no sense.

I cannot run reliable simulations of this database under real user load :(, so with more concurrent users the results may be different. Maybe a good "stress test" suite for the PostgreSQL database is needed for future testing.




----- Original Message -----
From: "Ed L." <pgsql@bluepolka.net>
To: "Tom Lane" <tgl@sss.pgh.pa.us>
Cc: "Martijn van Oosterhout" <kleptog@svana.org>; <pgsql-general@postgresql.org>
Sent: February 11, 2004 7:01
Subject: Re: [GENERAL] DB cache size strategies


> On Tuesday February 10 2004 10:26, Tom Lane wrote:
> >
> > Giving PG half the RAM is counterproductive no matter what --- that
> > pretty much guarantees that every page that's in RAM will be in RAM
> > twice, once in PG buffers and once in kernel buffers.  The two
> > reasonable schools of thought are (1) to make PG buffers relatively
> > small and let the kernel do the bulk of the buffering, or (2) to give
> > PG most of the RAM and expect it to do the bulk of the buffering.
> >
> > Past experience has been that theory (1) generally wins.  Jan's recent
> > work on ARC buffering may make things more interesting, though.
>
> So, pursuing theory (1) in 'the time before ARC', assuming you have a
> dedicated box with little or no non-db competition for resources, why give
> PG any more than is absolutely required (max_connections*2 or 16)?  Why not
> just let the kernel reign as completely as possible?  Is that what you mean
> by "relatively small"?
>
> TIA.
