Thread: Lack of Performance
Hi all!

I wonder if there is any possibility to boost the performance of my
PostgreSQL machine. I used to have it running on a dual P3-450 + 512 MB RAM
(with Apache/PHP on the same machine). Recently I moved it to a new machine,
a P3-800 + 512 MB RAM, and moved Apache to another machine.

Now the performance is quite BAD. The load of the machine is around 10.00
and the postmaster processes are taking all of the machine's CPU power. I
don't have exact stats at hand, but the web server is currently handling
about 220,000 page impressions per day (220,000 page calls, each of which
opens a PHP -> DB connection running at least one query).

I haven't gone too deep into query optimization yet, because I haven't read
enough about it so far. So I tried adjusting the command-line options...

What are the best values to use for sort_mem (-S) and shared buffers (-B)?

I'm currently starting the service with

-B 256
-i
-N 48
-o '-F -S 512'

These do NOT seem to be the right values, but I don't know how I should
experiment with them. If somebody has some tips handy, I'd like to hear
them.

Thanks,
Thomas
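[For context, a quick back-of-the-envelope check of those flags. This is a
sketch: the 8 kB buffer page size matches the PostgreSQL 7.x era of this
thread, and the larger -B value shown is illustrative, not a tuned
recommendation.]

```shell
# -B counts 8 kB shared buffer pages, so -B 256 gives a tiny cache:
echo "current cache: $((256 * 8)) kB"      # 2048 kB, i.e. only 2 MB
# A 512 MB box can afford far more, e.g. -B 16384:
echo "proposed cache: $((16384 * 8)) kB"   # 131072 kB, i.e. 128 MB
# Average request rate implied by 220,000 page views/day:
echo "avg conns/sec: $((220000 / 86400))"  # about 2; peaks will be much higher
```

The point of the last line: the average load is modest, so the pain is
likely concentrated in peak bursts and per-connection overhead rather than
raw query volume.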
* Thomas Heller <th.heller@comtron.net> [001121 14:59] wrote:
> I'm currently starting the service with
>
> -B 256
> -i
> -N 48
> -o '-F -S 512'
>
> This seems NOT to be the right value but I don't know in what way I can
> experiment with them?

I have a gig of RAM and use:

-B 32768
-o "-F -S 65534"

Maybe try half those values, since you have half the RAM. You may also
need to vacuum the database. What OS are you running?

--
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."
Hi again!

> > -B 256
> > -i
> > -N 48
> > -o '-F -S 512'
>
> I have a gig of ram and use:
>
> -B 32768
> -o "-F -S 65534"

Hmmm, during peak time these values have no influence on performance at
all. :( The values help to decrease the load during "not-so-busy" times,
but during peak times the load is still around 12-20. This is absolutely
unacceptable for me.

What I don't understand about it is that the DB is not THAT big. The
tables are around 10,000-30,000 rows and there are only about 6 tables.
They all use indexes where needed, and everything is vacuumed up to 8
times a day. But the load is not affected by it. I can't seem to find
"what" is pressing the DB.

Does query optimization help a lot, or does it only affect performance in
a minor way? Most queries look up rows by a specific primary id and return
part of or the whole row.

Any optimization hints?

Thanks,
Thomas
I was having the same problems with performance with Postgres earlier this
year. I used to have my information in a single table with about 30,000
rows. The problem I was having was that I get an average of about 50,000
users per day hitting the site, and about 375,000 to 425,000 pages served
per day. I was using MySQL until a friend suggested Postgres to me.

The table isn't all that big; it's the combination of all those people
making queries against a table of that size that is the problem. I would
never see free CPU, even in the middle of the night between 4-6 AM EST,
which is usually a pretty dead time period. Postgres didn't solve the
problem that I was having with MySQL: high CPU.

I totally restructured the way my scripts work and how they interact with
the database. I now have the site creating tables on the fly to hold the
other sections of the site, with an "index" table that holds the table
information pointing to other spots in the site. It's semi-involved, so I
won't bore you with details. Here are my findings with my new setup:
Postgres is MUCH happier to do many more queries that pull from little
tables than to do one query against one bigger table. The biggest table I
have at this point will always be the main index table, which now holds
about 400 rows. With my new setup I see about 40% CPU free on average.

You can see the final product: http://www.nutz.org/ This is generally an
adult humor site and holds rather graphic images, so you guys are being
warned in advance. The whole thing runs on PHP and Postgres.

-----Original Message-----
From: pgsql-admin-owner@postgresql.org
[mailto:pgsql-admin-owner@postgresql.org] On Behalf Of Thomas Heller
Sent: Thursday, November 23, 2000 4:52 AM
To: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Lack of Performance

[original message snipped]
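[Toma's "index table pointing at per-section tables" layout might look
roughly like the following. This is a sketch with invented table and
column names; the thread gives no actual schema, and the psql line assumes
a running server and database.]

```shell
# Hypothetical illustration of the split-table layout described above.
# All names (section_index, section_42_items, etc.) are made up.
cat > /tmp/split_schema.sql <<'EOF'
-- One small "index" table naming the per-section tables (~400 rows).
CREATE TABLE section_index (
    section_id  integer PRIMARY KEY,
    table_name  text NOT NULL      -- e.g. 'section_42_items'
);
-- Many small per-section tables created on the fly, e.g.:
CREATE TABLE section_42_items (
    item_id  integer PRIMARY KEY,
    title    text,
    body     text
);
EOF
# psql -d mydb -f /tmp/split_schema.sql   # run against a live server
```

The design trade-off: each query touches a table small enough to stay
cached, at the cost of an extra lookup in the index table and much more
schema-management code in the application.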
> What I don't understand about it is that the DB is not THAT big. The
> tables are around 10,000-30,000 rows and there are only about 6 tables.
> They all use indexes where needed, and everything is vacuumed up to 8
> times a day. But the load is not affected by it. I can't seem to find
> "what" is pressing the db.
>
> Does query optimization help a lot, or does it only affect performance
> in a minor way?

I think you need to be looking at your queries to see if one or more of
them is causing the performance problem. Look at the EXPLAIN output for
each query, and also be aware that certain operations are always going to
be slow, e.g. a function call for every row of a large table; I also found
the built-in regular expression operators to be VERY slow. Perhaps post
more details here to the list in case anyone can spot anything.

Do some tests and find which query takes the longest. Find out if it is
actually PostgreSQL that is using all the CPU time. Include details of
which versions of PostgreSQL and operating system you are using, as well
as your hardware configuration.

- Andrew
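[As a concrete starting point, Andrew's EXPLAIN suggestion can be tried
like this. A sketch only: `mydb`, `pages`, and `page_id` are invented
names, and the commented psql line assumes a running server.]

```shell
# Write the hot query to a file and ask the planner how it will run it.
cat > /tmp/explain_check.sql <<'EOF'
EXPLAIN SELECT * FROM pages WHERE page_id = 42;
EOF
# psql -d mydb -f /tmp/explain_check.sql
# An "Index Scan" in the output means the primary-key index is being used;
# a "Seq Scan" on a 30,000-row table under this load is a red flag.
```

Repeating this for each query the PHP pages issue, and timing them, should
quickly show whether one bad plan is responsible for the load.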
Yeah, I guess I didn't make myself clear. It wouldn't have mattered which
DB I ended up sticking with, Postgres or MySQL. Using a table that is
30,000+ rows and growing fast would have dogged either system. What it all
comes down to is that if you are expecting a massive number of queries
constantly, then you need to manage your tables to be more efficient, so
you can get the answers you want without the system needing to parse
through too much BS. This holds true for any DB.

I ended up sticking with Postgres, not necessarily because I preferred it,
but because I'm a lazy sack and didn't want to reconvert my scripts to
pull from a different DB. In my personal experience I haven't noticed any
performance benefit either way between MySQL and Postgres.

-----Original Message-----
From: Marten Feldtmann [mailto:M.Feldtmann@t-online.de]
Sent: Saturday, November 25, 2000 12:43 PM
To: toma@nutz.org
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Lack of Performance

[original message snipped]
Toma Vailikit wrote:
>
> Postgres is MUCH happier to do many more queries that pull from little
> tables than to do one query against one bigger table. The biggest table
> I have at this point will always be the main index table, which now
> holds about 400 rows.

But this is typical for every database!?

* Smaller tables increase the chance that no further pages have to be
  reloaded from disk; the table fits fully in RAM.

* Smaller tables mean smaller index storage, or perhaps even no need for
  indices at all.

etc. ...

Marten
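[A rough size check supports Marten's "fits in RAM" point. A sketch: the
100-byte average row size is a guess for illustration, not a figure from
the thread.]

```shell
# Does a 30,000-row table plausibly fit in the buffer cache?
ROWS=30000
ROW_BYTES=100   # assumed average row width, made up for this estimate
echo "approx table size: $((ROWS * ROW_BYTES / 1024)) kB"   # ~2929 kB, ~3 MB
```

Even the largest table Thomas describes is a few MB, so with a reasonably
sized -B setting the whole working set should be cached, which again
points at query plans or connection churn rather than table size.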