Re: Hardware upgraded but performance still ain't good enough - Mailing list pgsql-performance
From: Alex Turner
Subject: Re: Hardware upgraded but performance still ain't good enough
Date:
Msg-id: 33c6269f0608072333o67ab6adei29095091718901dc@mail.gmail.com
In response to: Hardware upgraded but performance still ain't good enough (Alvaro Nunes Melo <al_nunes@atua.com.br>)
Responses: Re: Hardware upgraded but performance still ain't good enough
           Re: Hardware upgraded but performance still ain't good enough
           Re: Hardware upgraded but performance still ain't good enough
List: pgsql-performance
First off - very few third-party tools support Debian. Debian is a surefire way to have an unsupported system. Use RedHat or SuSE (flame me all you want, it doesn't make it less true).
Second, run the bonnie++ benchmark against your disk array(s) to see what performance you are getting, and make sure it's reasonable.
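Something like the following will give you sequential throughput and seek numbers (just a sketch -- the directory, size and user here are placeholders; point -d at the array you want to test and make the file size roughly twice your RAM so caching doesn't skew the results):

    # -d: directory on the array under test, -s: file size in MB,
    # -u: user to run as (required if you launch it as root)
    bonnie++ -d /var/lib/postgresql/bonnie -s 2048 -u postgres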
Putting everything on single drives is not a great way to go for either speed or reliability, and is highly inadvisable for a production system. Use SAS or SATA for the best speed for your $$s; don't buy a SAN, they are overpriced and often don't perform. More RAM wouldn't hurt either, to be honest.
Diagnosing the bottleneck can be done with a combination of top, iostat and vmstat. If you have high iowait numbers then your system is waiting on the disks. If you have high system CPU usage, then the disks are also worth a look, though that's not as bad a sign as high iowait. If you have high user CPU with little iowait, little system CPU and very little I/O activity in iostat, then you are CPU bound. If you are I/O bound, you need to figure out whether it's reads or writes. If it's reads, then more RAM will help. If it's writes, then you need more spindles and more controller cache with RAID (please think carefully before using RAID 5 in a write-intensive environment; it's not ideal).
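For example, run something like this while the load is on (the 5-second interval is arbitrary) and watch the us/sy/wa CPU columns in vmstat and the per-device numbers in iostat:

    vmstat 5        # wa = iowait, us = user CPU, sy = system CPU
    iostat -x 5     # -x shows extended per-device stats (await, %util)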
The other thing is you will probably want to turn on stats in Postgres to figure out which queries are the bad ones (does anyone have good docs posted for this?). Once you have identified the bad queries, you can EXPLAIN ANALYZE them and figure out why they suck.
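As a rough sketch (these are the 8.1-era parameter names, and the threshold is just a guess -- tune it to your workload), something like this in postgresql.conf will populate the stats views and log the slow queries for you:

    stats_start_collector = on
    stats_command_string = on
    stats_block_level = on
    stats_row_level = on
    log_min_duration_statement = 250    # log any statement slower than 250 ms

Reload (or restart, for the collector itself), pull the worst offenders out of the log and EXPLAIN ANALYZE them; the pg_stat_user_tables and pg_statio_user_tables views will also show which tables are getting hammered with sequential scans and block reads.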
Alex.
On 8/7/06, Alvaro Nunes Melo <al_nunes@atua.com.br> wrote:
Hi,
First of all I must tell you that my reality in a southern Brazilian city is
way different from what we read on the list. I was looking for ways to
find the HW bottleneck and saw a configuration like:
"we recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4
opterons, 16GB of memory and MegaRAID with enough disks. OS is Debian
Sarge amd64, PostgreSQL is 8.0.3." on
(http://archives.postgresql.org/pgsql-performance/2005-07/msg00431.php )
Our old server was a very modest Dell Xeon 2.8 (512 KB cache), with 1 GB
RAM and one SCSI disk. This server runs PostgreSQL (8.1.4), Apache (PHP)
and other minor services. We managed to get a test machine, an HP Xeon
3.2 (2 MB cache), also with 1 GB RAM but 4 SCSI disks (on one single
array controller). They're organized in the following way:
disk 0: Linux Root
disk 1: Database Cluster
disk 2: pg_xlog
disk 3: a dir that suffers constant read/write operations
The database size stands at around 10 GB. The new server performs better
than the old one, but sometimes it still gets stuck. We tried to use an
HP proprietary tool to monitor the server and find out what the
bottleneck is, but it's been difficult to install it on Debian. The
tool is only certified for SuSE and RedHat. So we tried to use some
Linux tools to see what's going on, like vmstat and iostat. Are these
tools (vmstat and iostat) enough? Should we use something else? Is there any
specific material about finding bottlenecks in Linux/PostgreSQL
machines? Is our disk layout proper?
I really apologize for my lack of knowledge in this area, and for the
excessive number of questions in a single e-mail.
Best regards,
Alvaro