On Wed, May 11, 2016 at 4:09 AM, Digit Penguin <digitpenguin@gmail.com> wrote:
> Hello,
>
> We use PostgreSQL 9.x in conjunction with BIND/DNS for some companies,
> handling about 1,000 queries per second.
> Now we have to scale the system up to roughly 100,000 queries per second.
>
> BIND/DNS is very light, so I don't think it will be the bottleneck.
> The question is how to size the backend database.
>
> The workload is almost entirely selects (only a few inserts or updates);
> the 100,000 queries per second are selects only.
>
> How can I calculate/size this?
> We are thinking of putting more than one BIND server (each with its own
> backend database) behind a router with load-balancing capabilities.
>
> The problem is knowing what requirements and limits a 64-bit
> PostgreSQL 9.x installation has.
> Furthermore, we tried Rubyrep (it is quite old!); can you suggest other
> replication modules that keep working even if the connection link from
> the database server goes down?
Definitely looks like multiple read slaves is the answer. How many you need depends on a few things.
How big is your data set? How many clients need to have an open connection at a time? How many updates / inserts per second does "a few" actually mean? One per second? Ten, a hundred, a thousand?
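As a rough way to size it, assuming the working set fits in RAM and the lookups stay as simple as a typical DNS query, you could benchmark one candidate box with pgbench in select-only mode and divide your target rate by what a single node actually delivers. The database name, scale factor and client/thread counts below are just placeholders to adjust:

    pgbench -i -s 100 dnsdb
    pgbench -S -c 32 -j 8 -T 60 dnsdb

    # replicas needed ~= target qps / measured select tps,
    # plus headroom for failover and replication replay

The built-in select-only script won't match your real query, so for a better number run your actual lookup as a custom script with pgbench -f against a copy of your real data.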
How often and for how long will your connection link be going down? Slony is quite robust. PostgreSQL's built-in streaming replication works well enough if you use something like WAL-E or OmniPITR to archive xlogs and make them available in case of loss of connection.
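If you go with the built-in streaming replication, the 9.x-era setup is roughly as follows; the host name, replication user and WAL-E environment directory here are placeholders, so treat it as a sketch rather than a drop-in config.

On the master, in postgresql.conf:

    wal_level = hot_standby
    max_wal_senders = 5
    wal_keep_segments = 256
    archive_mode = on
    # ship xlogs to an archive so standbys can catch up after an outage
    archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'

On each read slave, set hot_standby = on in postgresql.conf, plus a recovery.conf along these lines:

    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com user=replication'
    # fall back to the archive when the streaming link is down
    restore_command = 'envdir /etc/wal-e.d/env wal-e wal-fetch "%f" "%p"'

With the xlogs archived like that, a slave that loses its link just falls behind and replays from the archive once it can reach it again, instead of needing a full re-sync from the master.

--
To understand recursion, one must first understand recursion.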