Thread: Scaling Database for heavy load

Scaling Database for heavy load

From: Digit Penguin
Hello,


We use PostgreSQL 9.x as the backend for BIND/DNS for several companies, currently handling about 1,000 queries per second.
Now we have to scale the system up to roughly 100,000 queries per second.

BIND itself is very light and I don't think it will be the bottleneck.
The question is how to size the backend database.

The workload is almost entirely SELECTs (only a few INSERTs or UPDATEs); the 100,000 queries per second are SELECTs only.

How do I calculate the sizing?
We are thinking of putting more than one BIND server (each with its own backend database) behind a router with load-balancing capabilities.

The problem is knowing what the requirements and limits of a 64-bit PostgreSQL 9.x installation are.
Furthermore, we tried rubyrep (it is quite old!); can you suggest other replication tools that keep working even if the connection link to a database server goes down?

Thank you!
Francesco

Re: Scaling Database for heavy load

From: Chris Travers


On Wed, May 11, 2016 at 12:09 PM, Digit Penguin <digitpenguin@gmail.com> wrote:
> Hello,
>
> We use PostgreSQL 9.x as the backend for BIND/DNS for several companies,
> currently handling about 1,000 queries per second. Now we have to scale
> the system up to roughly 100,000 queries per second.
>
> BIND itself is very light and I don't think it will be the bottleneck.
> The question is how to size the backend database.
>
> The workload is almost entirely SELECTs (only a few INSERTs or UPDATEs);
> the 100,000 queries per second are SELECTs only.
>
> How do I calculate the sizing? We are thinking of putting more than one
> BIND server (each with its own backend database) behind a router with
> load-balancing capabilities.
>
> The problem is knowing what the requirements and limits of a 64-bit
> PostgreSQL 9.x installation are. Furthermore, we tried rubyrep (it is
> quite old!); can you suggest other replication tools that keep working
> even if the connection link to a database server goes down?

If they are almost all SELECT queries and a little lag between write and read visibility is acceptable, I would recommend streaming replication, Slony, or Bucardo, and querying against your replicas.  A specific architecture using one or more of these replication technologies would, of course, need to be designed around your specific needs.
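
A minimal sketch of the streaming replication option, assuming a 9.x-era
setup (host names, paths, and the "repl" role below are placeholders, not
recommendations):

    # postgresql.conf on the master
    wal_level = hot_standby
    max_wal_senders = 5
    wal_keep_segments = 128   # keep enough xlog for replicas that fall behind

    # pg_hba.conf on the master: let the replicas connect for replication
    host  replication  repl  10.0.0.0/24  md5

    # clone each replica (9.2+):
    #   pg_basebackup -h master.example.com -U repl -D /var/lib/pgsql/data -X stream
    # then give it a recovery.conf:
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com user=repl password=secret'

    # and in the replica's postgresql.conf, so it serves read-only queries:
    hot_standby = on

With that in place the BIND backends can point their SELECTs at the
replicas, and only the few writes go to the master.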

> Thank you!
> Francesco



--
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor lock-in.

Re: Scaling Database for heavy load

From: Scott Marlowe
On Wed, May 11, 2016 at 4:09 AM, Digit Penguin <digitpenguin@gmail.com> wrote:
> Hello,
>
> We use PostgreSQL 9.x as the backend for BIND/DNS for several companies,
> currently handling about 1,000 queries per second. Now we have to scale
> the system up to roughly 100,000 queries per second.
>
> BIND itself is very light and I don't think it will be the bottleneck.
> The question is how to size the backend database.
>
> The workload is almost entirely SELECTs (only a few INSERTs or UPDATEs);
> the 100,000 queries per second are SELECTs only.
>
> How do I calculate the sizing? We are thinking of putting more than one
> BIND server (each with its own backend database) behind a router with
> load-balancing capabilities.
>
> The problem is knowing what the requirements and limits of a 64-bit
> PostgreSQL 9.x installation are. Furthermore, we tried rubyrep (it is
> quite old!); can you suggest other replication tools that keep working
> even if the connection link to a database server goes down?

Definitely looks like multiple read slaves are the answer. How many
depends on a few things.

How big is your data set? How many clients need to have an open
connection at a time? How many updates / inserts per second does
"a few" mean? One per second? Ten, a hundred, a thousand?
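
For a rough back-of-envelope: benchmark a single replica with pgbench in
select-only mode (pgbench -S) against a copy of your real data. If one
box sustains, say, ~15,000 simple indexed SELECTs/sec (an assumed figure,
measure your own), then 100,000 / 15,000 ≈ 7 replicas, plus two or three
extra for headroom and node failure.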

How often and for how long will your connection link be going down?
Slony is quite robust. PostgreSQL's built-in streaming replication
works well enough if you use something like WAL-E or OmniPITR to
archive xlogs and make them available in case of loss of connection.
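
In case it helps, the WAL-E hookup is roughly this (the envdir path is a
placeholder; see the WAL-E docs for your storage backend):

    # postgresql.conf on the master
    archive_mode = on
    archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'

    # recovery.conf on a replica: if streaming drops, fall back to
    # fetching archived xlogs instead of giving up
    restore_command = 'envdir /etc/wal-e.d/env wal-e wal-fetch "%f" "%p"'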

--
To understand recursion, one must first understand recursion.


Re: Scaling Database for heavy load

From: Melvin Davidson


On Wed, May 11, 2016 at 8:52 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Wed, May 11, 2016 at 4:09 AM, Digit Penguin <digitpenguin@gmail.com> wrote:
> > [snip]

> Definitely looks like multiple read slaves are the answer. How many
> depends on a few things.
>
> How big is your data set? How many clients need to have an open
> connection at a time? How many updates / inserts per second does
> "a few" mean? One per second? Ten, a hundred, a thousand?
>
> How often and for how long will your connection link be going down?
> Slony is quite robust. PostgreSQL's built-in streaming replication
> works well enough if you use something like WAL-E or OmniPITR to
> archive xlogs and make them available in case of loss of connection.
>
> --
> To understand recursion, one must first understand recursion.



You might also want to consider putting pgbouncer in front of the slaves to pool connections; combined with your load balancer (or round-robin DNS) it helps spread the queries among them.
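
Something like this as a starting point (hosts, ports, and pool sizes are
placeholders to tune, not recommendations):

    ; /etc/pgbouncer/pgbouncer.ini
    [databases]
    dns = host=replica1.example.com port=5432 dbname=dns

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 2000
    default_pool_size = 50

Run one pgbouncer per replica (or point separate [databases] entries at
different replicas) and let the balancer in front choose among them.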

--
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.