Thread: PostgreSQL + FreeBSD memory configuration, and an issue
Hi,
I've got an IBM x3200 server with a 1.8 GHz dual-core processor and 4 GB RAM. I've installed FreeBSD 8.2, PostgreSQL 9.0.3, Apache22, and PHP 5.3.5 with extensions for PostgreSQL, session, PDF and others.
After the install, I received a lot of "too many user" errors in postgresql.log, so I reconfigured postgresql.conf with these parameters:
max_connections = 200
shared_buffers = 512MB
work_mem = 1MB
maintenance_work_mem = 128MB
checkpoint_segments = 32
checkpoint_timeout = 10min
deadlock_timeout = 1s
max_locks_per_transaction = 64
I saw in the PostgreSQL manual that I have to reconfigure the kernel, with these parameters in sysctl.conf:
kern.ipc.shmall=262144
kern.ipc.shmmax=1073741824
kern.ipc.semmap=256
and loader.conf:
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
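As a sanity check on those numbers (assuming FreeBSD's usual 4 kB page size, since kern.ipc.shmall is counted in pages rather than bytes):

262144 pages * 4096 bytes/page = 1073741824 bytes = 1 GB

so shmall and shmmax both allow a 1 GB segment, about twice the 512MB shared_buffers, which leaves headroom for PostgreSQL's other shared structures.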
My question is the following: if this is a dedicated database server with at most 30 users (but they are using ODBC with Microsoft Access, and each of them opens 4-6 connections at the same time), and another 200 people will use this server through Drupal, PHP and Apache, not on a daily basis but weekly, what is the ideal memory configuration?
After the settings in postgresql.conf our system is much faster, and there are no more error messages in the postgres.log. But if I try to drop a table or add a new one, the system hangs until I kill the process that is dropping or adding the table.
Thank you in advance. The config files can be found here: http://kissceg.sitefarm.hu/conf_files.tar
regards,
Carl
On Fri, Apr 8, 2011 at 4:00 AM, Gipsz Jakab <clausewitz45@gmail.com> wrote:
> My question is the following: if this is a dedicated database server with at most 30 users (but they are using ODBC with Microsoft Access, and each of them opens 4-6 connections at the same time), and another 200 people will use this server through Drupal, PHP and Apache, not on a daily basis but weekly, what is the ideal memory configuration?
If it is a dedicated DB server, give shared_buffers about 1/4 of the RAM, and perhaps a slightly larger maintenance_work_mem. Depending on your workload you may want to increase checkpoint_segments (if write-mostly, add more segments).
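A quick worked example for the 4 GB machine described above: 1/4 of RAM works out to shared_buffers = 1024MB. Note that kern.ipc.shmmax must then be raised past the current 1 GB, since PostgreSQL's shared memory segment is somewhat larger than shared_buffers itself, and kern.ipc.shmall scaled up to match.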
Here is what I use on my FreeBSD 8, Pg 9 big-memory servers. These have 24GB or more of RAM and are attached to SSD external storage for the database:
max_connections = 200
shared_buffers = 5120MB
work_mem = 512MB
maintenance_work_mem = 1024MB
max_stack_depth = 8MB
vacuum_cost_delay = 15
checkpoint_segments = 64
checkpoint_timeout = 15min
checkpoint_completion_target = 0.8
random_page_cost = 1.0 # RAM disk. set equal seq_page_cost
effective_cache_size = 6400MB # shared_buffers + `sysctl -n vfs.hibufspace` / 8192 (BLKSZ)
For individual complicated queries, you can increase work_mem on a per-connection basis as needed.
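A minimal sketch of that per-connection override (the 256MB figure is only an illustration, not a recommendation):

-- raise work_mem for this session only; the server-wide default is untouched
SET work_mem = '256MB';

-- or scope the change to a single transaction:
BEGIN;
SET LOCAL work_mem = '256MB';
-- run the expensive sort or join here
COMMIT;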
> After the settings in postgresql.conf our system is much faster, and there are no more error messages in the postgres.log. But if I try to drop a table or add a new one, the system hangs until I kill the process that is dropping or adding the table.
Is something else using the table you want to drop, blocking the DROP statement from taking the locks it needs? It should be fairly instant.
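One way to check, as a sketch using the 9.0 catalog column names ('mytable' stands in for the table being dropped):

-- every backend holding or waiting for a lock on the table
SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE c.relname = 'mytable';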
On Fri, Apr 8, 2011 at 3:00 AM, Gipsz Jakab <clausewitz45@gmail.com> wrote:
> After the settings in postgresql.conf our system is much faster, and there
> are no more error messages in the postgres.log. But if I try to drop a table
> or add a new one, the system hangs until I kill the process that is dropping
> or adding the table.

You are almost certainly waiting on a lock. Check pg_locks for ungranted locks and pg_stat_activity for idle/long-running transactions. If you ever find yourself having to bump max_connections again, it is almost certainly a good idea to investigate a connection pooler like pgbouncer.

merlin
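As a concrete version of that suggestion, here is a sketch against the 9.0 catalogs (the query text column is still called current_query there):

-- locks requested but not yet granted
SELECT locktype, relation::regclass, pid, mode
FROM pg_locks
WHERE NOT granted;

-- idle transactions, which can hold locks indefinitely
SELECT procpid, usename, xact_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;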
On Fri, Apr 8, 2011 at 9:53 AM, Gipsz Jakab <clausewitz45@gmail.com> wrote:
> Thanks Vick, I'll try it tonight. I will give 1024MB to shared_buffers and maintenance_work_mem, and 102 MB of work_mem.
> A question: I didn't use effective_cache_size (it's marked with #), nor any other planner config option. Is it OK to turn it on with the value 1036MB?
The variables I listed earlier are all the ones I change from default, other than the logging settings (I like more verbose logging).
I have no opinion or experience with any other settings.
> DROP/ADD TABLE stuck: I realized that the number of locks is high. What about these settings:
I don't think it has to do with the number of locks, but with actually waiting for a lock.
> deadlock_timeout = 1s
> max_locks_per_transaction = 64
> Is it OK, or is it too high?
That really depends on your application's needs. I wouldn't change it unless you get warnings that you are hitting this limit.
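For context, the shared lock table has room for about max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks; with the settings above that is 64 * 200 = 12,800 entries. Current usage is easy to watch:

-- how many lock table entries are in use right now
SELECT count(*) FROM pg_locks;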
Please keep the message on-list.
Thanks Vick, I'll try it tonight. I will give 1024MB to shared_buffers and maintenance_work_mem, and 102 MB of work_mem.
A question: I didn't use effective_cache_size (it's marked with #), nor any other planner config option. Is it OK to turn it on with the value 1036MB?
DROP/ADD TABLE stuck: I realized that the number of locks is high. What about these settings:
deadlock_timeout = 1s
max_locks_per_transaction = 64
Is it OK, or is it too high?
Thanks,
Carl
2011/4/8 Vick Khera <vivek@khera.org>
I think the main problem is the following: all of the users are authenticated in PostgreSQL with the same username, plus the locks issue you've mentioned (I will talk with the programmer, or create new users in PostgreSQL and modify the ODBC settings on the client side). How can I set up a shorter idle period or lock wait time?
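For what it's worth, PostgreSQL 9.0 has no built-in idle-session timeout (that setting only arrived in much later releases), so a common stopgap is to find and terminate long-idle transactions by hand. A sketch, assuming superuser access and the 9.0 column names:

-- transactions that have been idle for more than five minutes
SELECT procpid, usename, xact_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
  AND xact_start < now() - interval '5 minutes';

-- terminate an offending backend by pid (12345 is a placeholder)
SELECT pg_terminate_backend(12345);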
regards,
Carl
2011/4/8 Merlin Moncure <mmoncure@gmail.com>
Ok Vick, thanks, and sorry for the off-list message.
regards,
Carl
2011/4/8 Vick Khera <vivek@khera.org>