Re: Multiple buffer cache? - Mailing list pgsql-general
From: Bret S. Lambert
Subject: Re: Multiple buffer cache?
Date:
Msg-id: 20100206140419.GA27464@FlamingKaty.my.domain
In response to: Multiple buffer cache? (Alexei Vladishev <alexei.vladishev@zabbix.com>)
Responses: Re: Multiple buffer cache?
List: pgsql-general
On Sat, Feb 06, 2010 at 03:46:58PM +0200, Alexei Vladishev wrote:
> Bret,
>
> Thank you for your response!
>
> >> Greetings,
> >>
> >> Is there a way of configuring PostgreSQL so that one specific table
> >> would use, say, 4GB of buffer cache while other tables would use the
> >> rest?
> >>
> >> I would like to keep the table and its indexes always in "hot" state,
> >> so that other queries won't pollute this part of the buffer cache. It
> >> would ensure reliable performance and much less disk IOPS working
> >> with the table.
> >
> > Fiddling with the buffer cache like that would require some sort of
> > OS support, if I'm not mistaken in what you're asking for.
>
> I am talking about the PostgreSQL buffer cache, not the OS level.
> I believe it has nothing to do with OS support.

Well, kinda; but I'd been spending too much time doing admin, so I'd
completely spaced on Postgres terms when you used "buffer cache", so
sorry for the mixup.

> It would be great to have support for multiple buffer caches assigned
> to different sets of tables. Having this implemented, I would assign
> frequently accessed configuration tables (selects and updates) to one
> buffer and historical tables (lots of insert operations) to another
> buffer, so the sets would use independent buffers and won't affect
> each other.

Fair enough.

> > And then, even if the support is there, you'd need to outline exactly
> > how you're planning on pushing this button.
> >
> > Specifically, what's your usage pattern that would make this a
> > win for you?
>
> Let me explain. I have a very busy application generating thousands of
> SQLs per second. There is an application-level cache built into the
> application already.
>
> The important part is that once per hour the application writes
> collected data to huge historical tables (100M up to billions of
> records, partitioned).
> Since it happens every hour, the database buffer cache is already
> overwritten by data and indexes of other tables, so the write
> operation is very slow and requires a huge amount of disk seeks,
> causing a 50-100x drop in performance.

The disk seeks will happen regardless of what Postgres does, as the OS
pulls in new disk blocks to perform the write. If your OS' buffer
cache is large enough to hold all the data you need, then your best
bet is likely partitioning data across multiple disks, so that queuing
the archive reads doesn't get in the way of production reads.

As I'm a unix admin mostly, I'm not qualified to give advice on
whether or not that's possible, or how to do it if it is ;)

> So, my idea is to assign a separate buffer cache for the historical
> tables. It would guarantee that index data is always cached, so the
> write operation will be very fast.
>
> Is it possible? Are there any other techniques available?

If it were at all possible, I'd actually set up a secondary archiving
server (unless you need the historical data on tap for the production
system as well), either on another port on the same machine, or on
another machine which won't impact your production system if it has to
suddenly do a bunch of disk I/O, and log the history to that.

> > If the table and its indexes can already fit into the buffer cache,
> > and it's as commonly accessed as you think it is, the OS should
> > probably have it cached anyway.
>
> I see what you are saying, but the problem is that it is normally
> accessed once per hour only.
>
> Any thoughts?
>
> Alexei
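The "partition data across multiple disks" suggestion above maps onto PostgreSQL tablespaces. A minimal sketch, assuming a second disk mounted at /mnt/disk2 with a directory owned by the postgres user; the table and index names are hypothetical, not from this thread:

```sql
-- Create a tablespace on the second disk (path is an assumption).
CREATE TABLESPACE archive_space LOCATION '/mnt/disk2/pg_archive';

-- Move one historical partition and its index off the production disk,
-- so the hourly bulk write queues against the archive spindles instead
-- of competing with production reads.
ALTER TABLE history_2010_02 SET TABLESPACE archive_space;
ALTER INDEX history_2010_02_clock_idx SET TABLESPACE archive_space;

-- New partitions can be placed there directly at creation time.
CREATE TABLE history_2010_03 (LIKE history_2010_02 INCLUDING INDEXES)
    TABLESPACE archive_space;
```

Note that ALTER TABLE ... SET TABLESPACE physically rewrites the table, so moving an existing multi-million-row partition should happen in a maintenance window; creating future partitions directly in the tablespace avoids that cost.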