Thread: Problem with database performance, Debian 4gb ram ?

Problem with database performance, Debian 4gb ram ?

From: Grant Masan
Hi all,

I have now read many forums and tried many different solutions, but I am still not getting good performance from the database. My server is Debian Linux with 4 GB of RAM; there is also a Java application, to which I give 512 MB of memory (JAVA_OPTS). The database now has about 4 million rows. What should I do better?
My postgresql.conf is now:

max_connections = 80
shared_buffers = 512MB
temp_buffers = 8MB
work_mem = 20MB
maintenance_work_mem = 384MB
wal_buffers = 8MB
checkpoint_segments = 128MB
effective_cache_size = 2304MB
cpu_tuple_cost = 0.0030
cpu_index_tuple_cost = 0.0010
cpu_operator_cost = 0.0005
fsync = off
checkpoint_timeout = 1h

and I am giving kernels like:

sysctl -w kernel.shmmax=1073741824
sysctl -w kernel.shmall=2097152

By the way, which file should I modify to make these kernel settings the defaults?

Thank you very much! I hope you can help solve my problem!

Re: Problem with database performance, Debian 4gb ram ?

From: Grzegorz Jaśkiewicz

On Mon, Nov 2, 2009 at 2:16 PM, Grant Masan <grant.massan@gmail.com> wrote:
Hi all,

I have now read many forums and tried many different solutions, but I am still not getting good performance from the database. My server is Debian Linux with 4 GB of RAM; there is also a Java application, to which I give 512 MB of memory (JAVA_OPTS). The database now has about 4 million rows. What should I do better?


I would rather start by looking at query performance.

What's the size of the db?
select pg_size_pretty(pg_database_size('yourDbName'));
 
Start logging queries, with execution times, to see which ones are causing problems.
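For example, something like this in postgresql.conf turns on duration logging (the 250 ms threshold is just an illustration; adjust to taste):

```
# Log every statement that takes longer than 250 ms
log_min_duration_statement = 250   # in milliseconds; -1 disables, 0 logs everything
log_line_prefix = '%t [%p] '       # timestamp and backend PID on each log line
```

Reload the server afterwards (pg_ctl reload) and watch the log for repeat offenders.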



--
GJ

Re: Problem with database performance, Debian 4gb ram ?

From: Ivan Voras
Grant Masan wrote:
> Hi Hi all,
>
> I have now readed many many forums and tried many different solutions
> and I am not getting good performance to database. My server is Debian
> linux, with 4gb ram, there is also java application and I am giving to
> that 512mb (JAVA_OPTS) memory. In database there is now like 4milj rows.
> What should I do better.
> Now .conf is:
>
> max_connections = 80
> shared_buffers = 512MB
> temp_buffers = 8MB
> work_mem = 20MB
> maintenance_work_mem = 384MB
> wal_buffers = 8MB
> checkpoint_segments = 128MB
> effective_cache_size = 2304MB
> cpu_tuple_cost = 0.0030
> cpu_index_tuple_cost = 0.0010
> cpu_operator_cost = 0.0005
> fsync = off
> checkpoint_timeout = 1h
>
> and I am giving kernels like:
>
> sysctl -w kernel.shmmax=1073741824
> sysctl -w kernel.shmall=2097152
>
> btw, what file I should modify to give this kernels as defaults ?
>
> Thank you very much ! Hope you can clear my problem !

You have given almost no information that can be used to help you. In
particular, you seem to be mixing up Java performance and database
performance (JAVA_OPTS has nothing to do with pg performance).

How do you know your performance is low? What is your hardware, what
performance do you get and what do you expect?

If after this you are still convinced the problem is database-related,
you will probably need to run a tool like
http://pqa.projects.postgresql.org/ to find out which queries are slow,
and then start analyzing each particular query.
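Once you have a suspect query, EXPLAIN ANALYZE shows where the time actually goes. A sketch, with made-up table and column names:

```sql
-- Compare the planner's row estimates with the actual counts and timings;
-- large mismatches usually point at missing indexes or stale statistics.
EXPLAIN ANALYZE
SELECT o.id, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '7 days';
```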

In short, there is no magical answer to your question :)

Re: Problem with database performance, Debian 4gb ram ?

From: "Kevin Grittner"
Grant Masan <grant.massan@gmail.com> wrote:

> max_connections = 80
> shared_buffers = 512MB
> temp_buffers = 8MB
> work_mem = 20MB
> maintenance_work_mem = 384MB
> wal_buffers = 8MB
> checkpoint_segments = 128MB
> effective_cache_size = 2304MB
> checkpoint_timeout = 1h

Pending further information, these seem sane to me.

> cpu_tuple_cost = 0.0030
> cpu_index_tuple_cost = 0.0010
> cpu_operator_cost = 0.0005

Why did you make these adjustments?  I usually have to change the
ratio between page and cpu costs toward the other direction.  Unless
you have carefully measured performance with and without these changes
and found a clear win with these, I would recommend going back to the
defaults for these three and tuning from there.

> fsync = off

Only use this if you can afford to lose all data in the database.
(There are some situations where this is actually OK, but they are
unusual.)

As others have stated, though, we'd need more information to really
give much useful advice.  An EXPLAIN ANALYZE of a query which isn't
performing to expectations would be helpful, especially if you include
the table definitions (with indexes) of all tables involved in the
query.  Output from vmstat or iostat with a fairly small interval (I
usually use 1) while the query is running would be useful, too.
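For example (fixed sample counts so the commands terminate; iostat requires the sysstat package on Debian):

```shell
vmstat 1 10        # ten one-second samples: swap, block I/O, CPU wait
iostat -x 1 10     # per-device utilization and average wait times
```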

Knowing the exact version of PostgreSQL (like from SELECT version();)
would be useful, as well as knowing more about your disk array and
controller(s).

-Kevin

Re: Problem with database performance, Debian 4gb ram ?

From: Jeff Janes
On Tue, Nov 3, 2009 at 7:13 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Grant Masan <grant.massan@gmail.com> wrote:
>
>
>> cpu_tuple_cost = 0.0030
>> cpu_index_tuple_cost = 0.0010
>> cpu_operator_cost = 0.0005
>
> Why did you make these adjustments?  I usually have to change the
> ratio between page and cpu costs toward the other direction.

Is that because the database is mostly cached in memory?  If I take the
documented descriptions of the cost parameters at face value, I find
that cpu_tuple_cost should be even lower yet.


Cheers,

Jeff

Re: Problem with database performance, Debian 4gb ram ?

From: "Kevin Grittner"
Jeff Janes <jeff.janes@gmail.com> wrote:
> On Tue, Nov 3, 2009 at 7:13 AM, Kevin Grittner
> <Kevin.Grittner@wicourts.gov> wrote:
>> Grant Masan <grant.massan@gmail.com> wrote:
>>
>>
>>> cpu_tuple_cost = 0.0030
>>> cpu_index_tuple_cost = 0.0010
>>> cpu_operator_cost = 0.0005
>>
>> Why did you make these adjustments?  I usually have to change the
>> ratio between page and cpu costs toward the other direction.
>
> Is that because the database is mostly cached in memory?  If I take
> the documented descriptions of the costs parameters at face value, I
> find that cpu_tuple_cost should be even lower yet.

Right, the optimizer doesn't model caching effects very well, so I
find that in practice I have to fudge these from their putative
meanings to allow for typical caching.  Even with only a small
fraction of the database cached, the heavily accessed indexes tend to
be fairly well cached, so overall performance improves markedly by
dropping random_page_cost to about 2, even in our lowest-percentage-
cached databases.

I've occasionally tried using the defaults for that GUC, which has
always resulted in user complaints about unacceptable performance of
important queries.  While I tend to reduce the random_page_cost and
seq_page_cost to tweak things, raising the cpu_*_cost settings would
accomplish the same thing, so reducing them as shown above would tend
to push things into sequential scans where indexed access might be
faster.
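For illustration only (not a tuned recommendation), that direction looks like this in postgresql.conf:

```
# Favor index scans when hot data is mostly cached (illustrative values)
random_page_cost = 2.0   # default is 4.0; lower it when random reads usually hit cache
seq_page_cost = 1.0      # leave at default so the random/sequential ratio stays meaningful
```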

-Kevin

Re: Problem with database performance, Debian 4gb ram ?

From: "Kevin Grittner"
Please keep the list copied.

Grant Masan <grant.massan@gmail.com> wrote:

> CREATE FUNCTION ... RETURNS SETOF ...

> FOR ... IN SELECT ... LOOP

>     FOR ... IN SELECT ... LOOP

> FOR ... IN SELECT ... LOOP
>
>                 RETURN NEXT text_output;
>
>             END LOOP;
>     END LOOP;
>     END LOOP;

I don't have time to work through the logic of all this to try to
discern what your goal is; but in my experience, such procedural code
can usually be rewritten as a single query.  The results are typically
orders of magnitude better.
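As a sketch of what such a rewrite can look like (the table and column names here are invented, since your function body is abbreviated): three nested FOR loops over related tables usually collapse into a single join, which lets the planner pick a strategy instead of forcing row-at-a-time execution.

```sql
-- Instead of looping over a, then b for each a, then c for each b in PL/pgSQL,
-- express the whole thing as one statement ($1 and $2 are the function's parameters):
SELECT a.id, b.name, c.value
FROM a
JOIN b ON b.a_id = a.id
JOIN c ON c.b_id = b.id
WHERE a.created_at BETWEEN $1 AND $2;
```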

> SELECT * FROM info_tool(linest,date,date)

> "Function Scan on info_tool  (cost=0.00..260.00 rows=1000 width=108)
> (actual time=437712.611..437712.629 rows=14 loops=1)"
> "Total runtime: 437712.686 ms"

To get useful information you need EXPLAIN ANALYZE from statements
inside the function, not of the execution of the function.
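If you are on 8.4 or later, the auto_explain contrib module can capture those nested plans for you; a sketch of a one-off session (plans go to the server log):

```sql
-- Load the auto_explain contrib module for this session only
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log the plan of every statement
SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
SELECT * FROM info_tool(...);                 -- the ... stands for your actual arguments
```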

-Kevin