Re: please please please PLEASE help! - Mailing list pgsql-admin
From:           Andrew Biagioni
Subject:        Re: please please please PLEASE help!
Msg-id:         410914D7.8010106@e-greek.net
In response to: please please please PLEASE help! (Steve <steve@hotmail.com>)
List:           pgsql-admin
Steve,

I'm not as much of an expert on PostgreSQL as others, but at least I have SOME answer.

The way to leverage the multi-disk capability is to use RAID, either software or hardware, rather than trying to twist the PostgreSQL structure and files to do so. RAID (Redundant Array of Inexpensive Disks) is basically a family of methods for getting increased reliability and/or performance out of multiple disks, while having them appear to programs (including DB programs) as a single disk. You will want to look for RAID information elsewhere, especially since you should weigh the pros and cons of the various RAID levels, but basically what you want is to have your data spread out uniformly across the disks so that writes and reads (in your case, especially the reads) are done in parallel on all three devices, significantly cutting the access time.

Another solution that MIGHT help is adding memory. If a heavily-used index can be cached in RAM, it will speed searches up a lot.

I would also recommend that you study ways to redistribute your data and/or add better indices. Keep in mind that an index on a big table will itself be quite large, and may not be used to its full extent even when it seems that it should be (at least in my experience). You can also use EXPLAIN and EXPLAIN ANALYZE to find out where your bottlenecks are and see what else can be done.

I hope this helps, and that the last paragraph didn't come across as patronizing. I'm aware that you may already have done all that, but I can't know -- and if you didn't, it would be a shame not to mention it!

Andrew

Steve wrote:

> Hi,
>
> I've asked this question a couple of times before on this forum but no
> one seems to be nice enough to point me in the right direction or help
> me out with any information, if possible. Please help me out with this,
> because this is a very serious issue for me and I need to learn more
> about this.
> And here it is again:
>
> I've been running postgres on my server for over a year now and the
> tables have become huge. I have 3 tables that have data over 10GB each,
> and these tables are read very, very frequently. In fact, heavy
> searches on these tables are expected every 2 to 3 minutes. This
> unfortunately gives a very poor response time to the end user, so I'm
> looking at other alternatives now.
>
> Currently, the postgresql installation is on a single disk, so all the
> tables have their data read from a single disk. Searching on different
> tables by multiple users at the same time results in very slow
> searches, as it's mainly dependent on the spindle speed. I recently
> gained access to another server which has 3 SCSI disks. I know there is
> a way to mirror the tables across the three different disks, but I'm
> not sure if it's as easy as symlinking the files (WAL files only?)
> across. Can anyone please tell me what to do here and how to harness
> the power of the three SCSI drives that I have. Which files in the data
> directory need to be moved? Is this safe? Can backups etc. be easily
> done? Any information will be greatly appreciated. Thank you,
>
> Steve
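And a concrete sketch of the EXPLAIN ANALYZE workflow Andrew suggests (table and column names here are made up for illustration):

```sql
-- Hypothetical table/column names, purely for illustration.
-- Compare the plan and timings before and after adding an index.
EXPLAIN ANALYZE
SELECT * FROM big_table WHERE customer_id = 42;

-- If the plan shows a sequential scan over the 10GB table, an index
-- on the searched column may help:
CREATE INDEX big_table_customer_id_idx ON big_table (customer_id);

-- Re-check: the planner should now consider an index scan.
EXPLAIN ANALYZE
SELECT * FROM big_table WHERE customer_id = 42;
```

EXPLAIN shows the plan the optimizer chose; EXPLAIN ANALYZE actually runs the query and reports real row counts and timings, which is what reveals the bottleneck.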
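To make the symlink part of the question concrete: the usual trick at the time was to stop the server, move a hot directory (for example pg_xlog, the WAL directory) onto a mount point on another disk, and leave a symlink behind in the data directory. A minimal sketch of the mechanics, using throwaway temp directories as stand-ins for the real $PGDATA and the second disk -- on a live system you must stop PostgreSQL first and have a backup:

```shell
# Sketch of the "move a directory to another disk and symlink it" trick,
# applied to pg_xlog (the WAL directory). Temp directories stand in for
# the real data directory and the second disk's mount point.
PGDATA=$(mktemp -d)      # stand-in for the PostgreSQL data directory
DISK2=$(mktemp -d)       # stand-in for a mount point on another disk

mkdir "$PGDATA/pg_xlog"
touch "$PGDATA/pg_xlog/000000010000000000000001"   # fake WAL segment

mv "$PGDATA/pg_xlog" "$DISK2/pg_xlog"      # move the directory to the other disk
ln -s "$DISK2/pg_xlog" "$PGDATA/pg_xlog"   # symlink it back into place

ls -l "$PGDATA/pg_xlog/"   # the segment is now served from the other disk
```

The same pattern works for individual table files, but table files are renamed by operations like VACUUM FULL and CLUSTER, which silently replaces a symlinked file with a plain one, so per-table symlinks are fragile; RAID (as above) avoids the problem entirely.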