Thread: PostgreSQL HardWare
I need to build a PostgreSQL database with 2 tables containing 70.000 records each, but they'll grow by 4.000 records monthly, and some triggers and functions will be running on these tables, plus other smaller tables of fewer than 40.000 records.

So I'm planning to implement an Intel Pentium III server with 2 CPUs, 1 GB RAM and a 10 GB SCSI HDD, running Red Hat 7.2.

Will it be enough? Do you have any experience with this? Any tips?

Thanks, and happy new year....

Fernando San Martín Woerner          counter.li.org Linux User #216550
Head of IT Dept., Galilea S.A.
Talca, VII Región, Chile             (56)71-224876
----------------------------------------
"If I had foreseen the consequences, I would have become a watchmaker."
Albert Einstein
I ran some performance tests on a system with ~100k records in each of several primary tables. The system was being used both as a DB host and as a host for the testing applications (it was just my home system). It was a PIII 1 GHz, 256 (maybe 400 at the time) MB RAM, IDE disk.

It performed very well, maxing out at some very respectable number of inserts and selects per second. I don't recall the exact statistics, but I posted all that info to the list, so you should be able to find it in the archives (around September or November of 2001). It also scaled without problem once the appropriate indices were in place (this is very important ... performance will really hurt without them). I wasn't doing any extremely complicated queries. I had no triggers, but most tables had one or two foreign keys on them.

I think your configuration will be fine.

Regards,
Sheer
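To make the point about indices concrete: PostgreSQL does not automatically index foreign key columns, so the columns your joins and constraint checks use typically need explicit indexes. A minimal sketch with hypothetical table and column names, not taken from any poster's actual schema:

    -- Hypothetical tables, just to illustrate where an index is needed.
    CREATE TABLE customers (
        id   integer PRIMARY KEY,
        name text
    );

    CREATE TABLE orders (
        id          integer PRIMARY KEY,
        customer_id integer REFERENCES customers (id),
        placed_on   date
    );

    -- The REFERENCES constraint does NOT create an index on orders.customer_id;
    -- joins and foreign-key checks against it benefit from one.
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);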
On Fri, 4 Jan 2002, Fernando San Martín Woerner wrote:

> I need to build a PostgreSQL database with 2 tables containing 70.000
> records each, but they'll grow by 4.000 records monthly [...]

That's a really small database. You should be able to run it on practically any hardware, and probably store all the data in memory.

-jwb
In my last job we ran a multi-tiered online futures and options trading system. Our database was originally on a 2-processor system. What was interesting was that when we ran tests we decided to try a single-processor system, and found that performance was only marginally (1-2%) better on the 2-processor system. So for future upgrades we spent the extra cash on the fastest single processor we could find rather than on 2 processors. We never tested with 4 or more processors, so I can't comment on the performance issues there, but my 2 cents would be to spend the extra money on a faster processor (if you even need to -- maybe save the money altogether!).

Mike Shelton
Well, did he mean 70.000 records or 70,000 records?
It all depends on what you want to do with that data. To give you an idea of what I mean, I currently have a database that resides on a Pentium II 450 with 768 MB of RAM and IDE hard drives. This database has several tables with over ten million records in them, and each of these tables gets an average of nearly 18,000 inserts a day (there are no updates or deletes on these large tables). Of course, this system has a fairly limited number of users (fewer than 30), and the queries generally only ask for a small subset of the data (sequential scans of the large tables take more than a minute to complete, but index scans return very fast).

My guess is that you are going to be just fine :), and if you do end up with a query that takes a long time to return, chances are good that someone on the lists will have a solution.

Jason
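The seq-scan-versus-index-scan difference Jason mentions is easy to check with EXPLAIN. A small sketch with a hypothetical table and column; the exact plan output varies by PostgreSQL version:

    -- Hypothetical: does a filter on a large table use an index?
    -- (Running VACUUM ANALYZE first keeps the planner's statistics current.)
    EXPLAIN SELECT * FROM measurements WHERE recorded_on = '2002-01-04';

    -- If the plan shows a sequential scan and the predicate is selective,
    -- add an index and check the plan again:
    CREATE INDEX measurements_recorded_on_idx ON measurements (recorded_on);
    EXPLAIN SELECT * FROM measurements WHERE recorded_on = '2002-01-04';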
Don't forget to check eBay out -- they have a lot of decent deals from the recent dot-bomb fallout. I got a Dell 4350 w/ dual PIII 600s, 3x18 GB SCSI (RAID 5), RAID controller, 1 GB RAM, 3 hot-swap power supplies, 4 hot-swap fans, etc. for under $2000 including shipping.

rjsjr
I'm guessing he meant what he said, 70.000, which to American audiences reads as seventy thousand and is written as 70,000. Many European countries use a period as the thousands separator rather than a comma like we do.

rjsjr
In Europe 70.000,00 = 70,000.00 in the US (got bit by that little bug when we brought on a brokerage in Germany -- also watch out for dates: 5/6/02 = May 6th, 2002 in the US and June 5th, 2002 in Europe!!).

Mike
So in ten years you will have about a million records. That's not really a lot of data; it would all fit in RAM at once given your 1 GB RAM size.

Now for the data rate:

    30 days x (8 hrs/day) x (3600 sec/hour)
    ---------------------------------------  ~= 216 seconds per record
                 4000 records

Is that right?? You have a few __minutes__ to process each new row.

I think your hardware is overkill. Any low-end box would work for you. But if you have MANY users trying to query this data all at once, the hardware may be needed. I don't think you need that second CPU _unless_ you plan for many concurrent client connections or if the server will be performing other services (apache, mail, NFS...) at the same time.

Chris Albertson
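The same figure can be sanity-checked straight from psql, using the numbers from the thread (30 days of 8 hours, 4.000 new rows a month):

    -- Rough time budget per new row, with the figures from the thread.
    SELECT (30 * 8 * 3600) / 4000 AS seconds_per_new_row;   -- 216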
On Friday 04 January 2002 02:30 pm, Troy.Campano@LibertyMutual.com wrote:
> Well, did he mean 70.000 records or 70,000 records?

There is more than one way of writing seventy thousand, depending upon where you are in the world. Many people use the period and comma differently than we do in the US -- seeing seven hundred million and three hundred seventy-five thousandths written as 700.000.000,375 is jarring the first time -- but then it also depends upon what a million is (that also depends upon where you are... :-) -- it could mean writing 700.000.000.000,375 instead....)

-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
I meant seventy thousand, in fact...

The main question is that I have 70.000 records in one table and 90.000 in the other, but there is a lot of backend processing: triggers, plpgsql functions, inserts, updates. Simple queries work very well, but what about some heavier work? Are you saying that this amount of records is not a problem?

> I'm guessing he meant what he said, 70.000, which to American audiences
> reads as seventy thousand and is written as 70,000. [...]

-- 
Galilea S.A.
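For the kind of per-row backend work Fernando describes, the usual shape in PostgreSQL is a PL/pgSQL function attached to a trigger. A minimal sketch with a hypothetical table, not Fernando's actual schema (it assumes the plpgsql language has been added to the database, e.g. with createlang):

    -- Hypothetical table; the point is the trigger/function shape.
    CREATE TABLE items (
        id         integer PRIMARY KEY,
        payload    text,
        updated_at timestamp
    );

    -- On 7.2-era PostgreSQL, trigger functions are declared RETURNS opaque.
    CREATE FUNCTION stamp_row() RETURNS opaque AS '
    BEGIN
        NEW.updated_at := now();
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER items_stamp
        BEFORE INSERT OR UPDATE ON items
        FOR EACH ROW EXECUTE PROCEDURE stamp_row();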
> I don't think you need that second CPU _unless_ you plan for many
> concurrent client connections or if the server will be performing other
> services (apache, mail, NFS...) at the same time.

Adding a second CPU to a machine you're building yourself costs a (relatively) very small amount of money, but nearly doubles the capacity of the machine, and greatly extends its useful lifetime. I think the benefits far outweigh the cost - adding a second CPU may add 20% (or less) to the cost of the machine, but get you an 80% increase in capacity.

As an example, I have an old dual Pentium-133 that I picked up for $40. Compared to a machine with a single 650 MHz P3, the little machine is usually MORE responsive, and always at least nearly as responsive. Very CPU-intensive apps do take longer, but overall the machine is extremely pleasant to work on. When you compare the significant performance difference between a Pentium 133 and a P3/650, I think that says a LOT about the merits of multi-processor systems.

For production servers, it's a pretty rare day when I wouldn't fork over $40 more for a dual-CPU board and buy a second processor. Or, if money was tight, I'd buy the board and get the second CPU in a month or two. : )

(And, hey, the first time you see a PCI device using IRQ 27, it makes you do a double-take!)

steve
> What was interesting was that when we ran tests we decided to try a
> single-processor system, and found that performance was only marginally
> (1-2%) better on the 2-processor system. [...]

Were you testing with a single process? Multiple processors under most database systems don't really speed up the execution time of a single connection, but they let you run multiple connections simultaneously in parallel. I know that I can run a lot more concurrent postgres connections on a dual-CPU than on a single-CPU machine, and the quad-CPU machine we use can handle a LOT of simultaneous traffic thrown its way, and handle it quite quickly.

In other words, it's not a matter of "I have a query that I want to run more quickly", it's "My goodness, there are a lot of people hitting the database" where multi-processors become just what the doctor ordered....

steve
I found when prototyping my PostgreSQL application that there is a BIG difference in performance if the entire set of data fits in RAM. Just about anything is fast if that is the case. But when your data gets to be 10 or 100 times what will fit in RAM, it can slow down drastically. When you do your testing you must use realistically sized test data. I wrote some functions to produce random numbers and strings and then COPYed them into tables. In your case, even after years, the data will still fit in the RAM cache. You can expect good performance.

Also, PostgreSQL is very uneven. Some things it does well and fast, and then one small change to the SQL that should not matter and it just dies. Sometimes when a query is running slow you can re-write the SQL to something equivalent and see a speedup. This is IMO one of the major differences between PostgreSQL and Oracle: Oracle is not so uneven, while with PostgreSQL very similar queries can take very different times to complete. A lot depends on your exact SQL query. I can write one that would take hours even on a small table.

That second CPU will ONLY help you if more than one client is connected to PostgreSQL at the same time, or if the computer has some other non-database task to run. So if you expect much concurrent access, go with a multi-CPU setup; if not, go with a faster single CPU. In every case RAM helps more than anything else. The more RAM the better. At today's prices 1 GB is not unreasonable.

Chris Albertson
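One concrete example of the "equivalent SQL, very different speed" effect Chris describes: on the PostgreSQL releases of that era, an IN (subquery) was often far slower than a logically equivalent EXISTS. A sketch with hypothetical customers and orders tables:

    -- Often slow on 7.x-era PostgreSQL: IN with a subquery
    SELECT c.* FROM customers c
    WHERE c.id IN (SELECT o.customer_id FROM orders o
                   WHERE o.placed_on > '2002-01-01');

    -- Logically equivalent here, and usually much faster on those releases:
    SELECT c.* FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o
                  WHERE o.customer_id = c.id
                    AND o.placed_on > '2002-01-01');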
> Adding a second CPU to a machine you're building yourself costs a
> (relatively) very small amount of money, but nearly doubles the capacity
> of the machine, and greatly extends its useful lifetime. [...]

I agree 100% on this. I have a bi-Celeron 500 at one of my clients and it is a very fast machine when compared to my single Athlon 800. Everything just seems so much more snappy. SCSI 160 and software RAID with two mirrored drives, and only 256 MB RAM. I would now go for 3 drives in RAID 5 and 1 GB RAM.

Building a machine like this yourself is a lot cheaper than buying. I usually charge a day's labor for building the box and software installation and take a 40% margin on the hardware, and am still cheaper than any equivalent-quality machine. When you build yourself you do have a tendency to use the best components money can buy. Unlike certain "name brands" that I won't name here but which I have opened to look inside. If I had to buy a machine I would buy IBM.

Cheers

Tony
-- 
tony@animaproductions.com
> When you compare the significant performance difference between a
> Pentium 133 and a P3/650, I think that says a LOT about the merits of
> multi-processor systems.

I think that says both systems are disk-I/O bound, so the faster CPU's power is wasted. My bet would be to put the extra money into the disk system: battery-backed caching controllers, and separating the transaction log disk from the data storage disk.

Len
Hi!

I'm new to Postgres, so I have one question here. Many of you say that you can put the database in RAM.

What should I do to tell Postgres to use, for example, 512 MB of RAM and put everything in it? What do I have to put in the config?

-- 
bye, Uros
On Sun, 6 Jan 2002, Uros Gruber wrote:

> What should I do to tell Postgres to use, for example, 512 MB of RAM and
> put everything in it? What do I have to put in the config?

This isn't really necessary. On any reasonable operating system, the kernel will keep disk contents in RAM for fast access. You needn't worry about it.

If you REALLY insist on doing this, you need to make a filesystem in RAM and put your databases there. In Linux, use the ramdisk driver.

-jwb
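For the config question specifically: on the PostgreSQL 7.x of that era the main knob was shared_buffers in postgresql.conf, counted in 8 kB pages; it sizes a shared buffer cache rather than pinning the whole database in RAM, and the kernel's own file cache uses the remaining free memory anyway. A minimal sketch; the values are illustrative, not recommendations, and large settings may also require raising the kernel's shared memory limit (SHMMAX):

    -- postgresql.conf (illustrative values):
    --   shared_buffers = 8192      -- 8192 * 8 kB = 64 MB of shared buffer cache
    --   sort_mem = 4096            -- per-sort memory, in kB
    --
    -- From psql, the active settings can be checked with:
    SHOW shared_buffers;
    SHOW sort_mem;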