Re: PostgreSQL clustering VS MySQL clustering - Mailing list pgsql-performance
From | Tatsuo Ishii
---|---
Subject | Re: PostgreSQL clustering VS MySQL clustering
Date | 2005-01-22 12:13:00
Msg-id | 20050122.121300.41631131.t-ishii@sra.co.jp
In response to | Re: PostgreSQL clustering VS MySQL clustering (Marty Scholes <marty@outputservices.com>)
Responses | Re: PostgreSQL clustering VS MySQL clustering; Re: PostgreSQL clustering VS MySQL clustering
List | pgsql-performance
IMO the bottleneck is not WAL but table/index bloat. Lots of updates on large tables produce lots of dead tuples. The problem is that there is no effective way to reuse these dead tuples, since VACUUM on huge tables takes a long time. 8.0 adds new vacuum delay parameters, but unfortunately they do not help: they just make VACUUM's execution time longer, which means more and more dead tuples are created while updates continue. VACUUM probably works well for small to medium size tables, but not for huge ones. I'm considering implementing "on-the-spot salvaging of dead tuples".
--
Tatsuo Ishii

> This is probably a lot easier than you would think. You say that your
> DB will have lots of data, lots of updates and lots of reads.
>
> Very likely the disk bottleneck is mostly index reads and writes, with
> some critical WAL fsync() calls. In the grand scheme of things, the
> actual data is likely not accessed very often.
>
> The indexes can be put on a RAM disk tablespace and that's the end of
> index problems -- just make sure you have enough memory available. Also
> make sure that the machine can restart correctly after a crash: the
> tablespace is dropped and recreated, along with the indexes. This will
> cause a machine restart to take some time.
>
> After that, if the WAL fsync() calls are becoming a problem, put the WAL
> files on a fast RAID array, either a card or an external enclosure, that
> has a good amount of battery-backed write cache. This way, the WAL
> fsync() calls will flush quickly to the RAM and Pg can move on while the
> RAID controller worries about putting the data to disk. With WAL, low
> access time is usually more important than total throughput.
>
> The truth is that you could have this running for not much money.
>
> Good Luck,
> Marty
>
> > On Thursday 20 January 2005 at 19:09, Bruno Almeida do Lago wrote:
> > > Could you explain to us what you have in mind for that solution? I
> > > mean, forget the PostgreSQL (or any other database) restrictions and
> > > explain to us how this hardware would look. Where would the data be
> > > stored?
> > >
> > > I have something in mind for you, but first I need to understand
> > > your needs!
> >
> > I just want to build a big database, as explained in my first mail. At
> > the beginning we will have approx. 150,000,000 records; each month we
> > will add about 4-8 million new rows in a constant flow during the day,
> > and at the same time web users will access the database to read those
> > data. The stored data are quite close to what Google stores (we are
> > not making a Google clone, just a lot of data: many small values and
> > some big ones; that's why I compare with Google for data storage).
> > Then we will have a search engine searching in those data.
> >
> > As for the hardware, for the moment we have only a dual Pentium Xeon
> > 2.8 GHz with 4 GB of RAM, and we saw that we had bad performance
> > results, so we are thinking about a new solution with maybe several
> > servers (server design may vary from one to another) to get a kind of
> > cluster for better performance.
> >
> > Am I clear?
> >
> > Regards,
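For reference, a minimal sketch of the 8.0 cost-based vacuum delay parameters Tatsuo mentions; the values and the table name are illustrative assumptions, not recommendations:

    -- Illustrative values only: 8.0's cost-based vacuum delay settings.
    -- Sleeping whenever the cost limit is reached softens VACUUM's I/O
    -- impact, but, as noted above, it also stretches VACUUM's runtime, so
    -- dead tuples can accumulate faster than they are reclaimed on
    -- heavily updated tables.
    SET vacuum_cost_delay = 10;   -- milliseconds to sleep when the limit is hit
    SET vacuum_cost_limit = 200;  -- accumulated page cost that triggers a sleep
    VACUUM ANALYZE bigtable;      -- 'bigtable' is a placeholder name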
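Marty's RAM-disk index tablespace could look roughly like the following under 8.0's new tablespace support; the mount point, table, and index names here are hypothetical:

    -- Hypothetical sketch: indexes on a RAM-backed tablespace (8.0 syntax).
    -- '/mnt/ramdisk/pg_idx' is assumed to be a tmpfs/ramfs mount owned by
    -- the postgres user; 'bigtable' and the index name are placeholders.
    CREATE TABLESPACE ram_idx LOCATION '/mnt/ramdisk/pg_idx';
    CREATE INDEX bigtable_key_idx ON bigtable (key) TABLESPACE ram_idx;
    -- Caveat from the post above: after a crash or reboot the RAM disk
    -- comes back empty, so a startup script must drop and recreate the
    -- tablespace and rebuild every index in it, which slows restarts.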
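As for the WAL-on-battery-backed-cache suggestion, there is no SQL-level knob for the WAL location in 8.0; the directory is relocated at the filesystem level. A hedged sketch of the idea and the settings worth checking:

    -- With pg_xlog on an array with battery-backed write cache, fsync()
    -- returns as soon as the controller's RAM holds the data. In 8.0 the
    -- WAL directory is moved at the filesystem level (server stopped),
    -- e.g. (paths are assumptions):
    --     mv $PGDATA/pg_xlog /raid_bbwc/pg_xlog
    --     ln -s /raid_bbwc/pg_xlog $PGDATA/pg_xlog
    SHOW fsync;            -- keep this 'on'; the write cache absorbs the latency
    SHOW wal_sync_method;  -- platform-dependent; worth benchmarking per system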