Re: Scaling Database for heavy load - Mailing list pgsql-general

From Chris Travers
Subject Re: Scaling Database for heavy load
Date
Msg-id CAKt_ZfsOni=JJiL=b=Faj9doyQN2mfM12pijvxfxc-d+i07sbw@mail.gmail.com
In response to Scaling Database for heavy load  (Digit Penguin <digitpenguin@gmail.com>)
List pgsql-general


On Wed, May 11, 2016 at 12:09 PM, Digit Penguin <digitpenguin@gmail.com> wrote:
Hello,


We use PostgreSQL 9.x in conjunction with BIND/DNS for some companies, at about 1,000 queries per second.
Now we have to scale the system up to roughly 100,000 queries per second.

BIND/DNS is very lightweight, and I do not think it will be the bottleneck.
The question is how to size the backend database.

The queries are almost all SELECTs (only a few INSERTs or UPDATEs); the 100,000 queries per second are SELECTs only.

How can I calculate/size this?
We are thinking of putting more than one BIND server (each with a backend database) behind a router with load-balancing capabilities.

The problem is knowing what requirements and limits a 64-bit PostgreSQL 9.x installation has.
Furthermore, we tried Rubyrep (it is quite old!); can you suggest other replication solutions that keep working even if the connection link from a database server goes down?

If they are almost all SELECT queries and a little lag between a write and its read visibility is acceptable, I would recommend streaming replication, Slony, or Bucardo, and querying against your replicas.  A specific architecture using one or more of these replication technologies would of course need to be designed around your specific needs.
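
To make the streaming-replication option concrete, here is a minimal sketch for a 9.x primary plus one hot standby that takes the read-only traffic. The hostnames, the replication role, the network range, and the data directory are placeholders, not values from this thread:

    # Primary: postgresql.conf
    wal_level = hot_standby      # 9.0-9.5 setting; use 'replica' from 9.6 onward
    max_wal_senders = 5          # one per standby plus some headroom

    # Primary: pg_hba.conf -- let the standby connect for replication
    host  replication  replicator  192.168.0.0/24  md5

    # Standby: clone the primary, then start it in recovery
    pg_basebackup -h primary.example.com -U replicator -D /var/lib/postgresql/9.5/main -X stream -P

    # Standby: recovery.conf (pre-PostgreSQL 12 layout)
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com port=5432 user=replicator'

    # Standby: postgresql.conf -- allow read-only queries while in recovery
    hot_standby = on

For sizing, a select-only pgbench run against one replica (for example, pgbench -S -c 50 -j 4 -T 60 on a data set comparable to yours) gives a rough per-node SELECT throughput, from which the number of replicas needed behind the balancer can be estimated.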

Thank you!
Francesco



--
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor lock-in.
