Thread: Very specific server situation
Hi,

We're facing some performance problems with the database for a web site with very specific needs. First of all, we're using version 8.1 on a server with 1GB of RAM. I know there should normally be more memory, but since our tables are not big (as a matter of fact, they are small) I don't think the solution is adding more RAM.

What we basically have is a site where each user has a box with links to other randomly selected users. Whenever a user's box is shown, a stored procedure is executed: a credit is added to that user and a credit is subtracted from the accounts of the linked users that were shown. Accounts with no credits must not be listed. So we have lots (LOTS) of users querying and updating the same table, sometimes with big peaks.

Our first attempt was to split that table in two: one for the actual credits and another for the users. Only the credits table gets updated on every request, but it has a trigger that updates a flag field in the users table saying whether the user has credits. This had a good impact, but I guess it's not enough. For now we only have 23,000 users, but it's going to grow.

Do you have any advice? Is this possible with Postgres, or would you recommend a volatile in-memory approach for the credits instead?

We're using pgpool, and the output from free shows only 350M of RAM being used. Some relevant parts of the .conf:

max_connections = 160
shared_buffers = 40000
work_mem = 3096
maintenance_work_mem = 131072
max_fsm_pages = 70000
fsync = false
autovacuum = on

Any help would be really appreciated.

Thanks in advance,
Mauro.
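For reference, the split-table-plus-trigger scheme described above amounts to something like the sketch below. This is only an illustration of the idea, not the poster's actual code: table and column names are invented, and it assumes PL/pgSQL is installed in the database.

-- Sketch: credits table updated on every request; a trigger keeps a
-- has_credits flag on the users table in sync (names are made up).
CREATE TABLE users (
    user_id     integer PRIMARY KEY,
    has_credits boolean NOT NULL DEFAULT false
);

CREATE TABLE credits (
    user_id integer PRIMARY KEY REFERENCES users,
    credits integer NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION sync_has_credits() RETURNS trigger AS $$
BEGIN
    -- only touch users when the flag actually flips, so routine credit
    -- updates don't generate a dead users row every time
    UPDATE users
       SET has_credits = (NEW.credits > 0)
     WHERE user_id = NEW.user_id
       AND has_credits IS DISTINCT FROM (NEW.credits > 0);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER credits_flag_sync
    AFTER INSERT OR UPDATE ON credits
    FOR EACH ROW EXECUTE PROCEDURE sync_has_credits();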
"Mauro N. Infantino" <mauroi@digbang.com> writes: > What we basically have is a site where each user has a box with links to > other randomly selected users. Whenever a box from a user is shown, a SPs is > executed: a credit is added to that user and a credit is substracted from > the accounts of the shown links. Accounts with no credits do not have to be > listed. So, we've lots (LOTS) of users querying and updating the same table. Have you checked to make sure the query plans are reasonable? Have you checked that autovacuum is running often enough? (You might want to try contrib/pgstattuple to see how much dead space there is in your heavily-updated tables.) Also, on a high-update workload it is absolutely critical to boost checkpoint_segments far enough that you are not doing checkpoints oftener than maybe once every five minutes. If the performance problems seem "bursty" then you may also need to look at adjusting bgwriter and/or vacuum cost delay parameters to smooth out the I/O load. regards, tom lane
Tom,

Thank you very much for your suggestions.

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>
> Have you checked to make sure the query plans are reasonable?

I've attached the main query and its explain plan. I can't find a way to improve it. Does it make any difference if it's executed from a stored procedure? Is there any difference between the procedure's language (PL/pgSQL, PL/php, etc.)? It needs to do a few other tiny things besides the query.

> You might want to try contrib/pgstattuple

Thanks. I'll give it a try and report the results here.

> absolutely critical to boost checkpoint_segments far enough

How do I know how often checkpoints are done? I've modified the parameters:

checkpoint_segments = 36     # it was 12 before
checkpoint_timeout = 1000
checkpoint_warning = 300     # so I'll get a warning if they're too frequent
commit_delay = 5000
commit_siblings = 2

> adjusting bgwriter and/or vacuum cost delay parameters

I've set a moderate cost delay configuration to see how it responds (vacuum_cost_delay = 100 & vacuum_cost_limit = 200). Do you have any advice on how to configure the bgwriter? I have no clue about it and couldn't find anything clear.

Also, I know an upgrade to 8.2 is always a good thing, but is there any particular change that would help this specific situation?

Again, thank you very much for your answers (and, of course, for everything you do in pgsql).

Regards,
Mauro.
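For reference, on 8.1 the background writer is controlled by the postgresql.conf parameters below. The values shown are only an illustrative starting point for experimentation, not a recommendation tuned for this workload.

# 8.1 background writer settings (illustrative values, not a recommendation)
bgwriter_delay = 200             # ms to sleep between bgwriter rounds
bgwriter_lru_percent = 2.0       # % of recently-used buffers scanned per round
bgwriter_lru_maxpages = 100      # cap on dirty pages written per round from that scan
bgwriter_all_percent = 1.0       # % of the whole buffer pool scanned per round
bgwriter_all_maxpages = 200      # cap on dirty pages written per round from the full scan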