Re: Which replication is the best for our case ? - Mailing list pgsql-general

From Arthur Silva
Subject Re: Which replication is the best for our case ?
Date
Msg-id CAO_YK0XF90mGCXDaV5XGFwmv6kAPBG=a4YC9O2zp1HwgtZpUDQ@mail.gmail.com
In response to Re: Which replication is the best for our case ?  ("ben.play" <benjamin.cohen@playrion.com>)
List pgsql-general

On Wed, Jul 1, 2015 at 7:08 AM, ben.play <benjamin.cohen@playrion.com> wrote:
In fact, the cron job will:
-> select about 10,000 rows from a big table (>100 GB of data); one user has
about 10 rows
-> each row is examined by an algorithm
-> at the end of each row, the cron job updates a few parameters for the
user (adds some points, for example)
-> then, it inserts a row in another table to record each transaction for
the user.

All updates and inserts can be made ONLY by the cron job ...
Therefore the merge can be done easily: no one else can update this new
data.

But ... how can big companies like Facebook or YouTube run calculations on
(a) dedicated server(s) without impacting users?

I'm assuming this query is really HUGE; otherwise I can't see why it would
bring your database to a halt, especially with that amount of main memory.

That aside, I don't see why you can't send inserts in small batches back to the master DB.
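
For what it's worth, here is a minimal sketch of that batching approach,
assuming a Python cron job using psycopg2 and entirely hypothetical table
and column names (big_table, user_points, user_transactions). The idea is
to stream the rows from the replica with a server-side cursor, run the
algorithm per row, and push the resulting updates and inserts back to the
master a few hundred rows at a time, so each write transaction on the
master stays short:

# Minimal sketch, not the original poster's code: batch the cron job's
# writes back to the master. Table and column names are placeholders.
import psycopg2
from psycopg2.extras import execute_values

BATCH_SIZE = 500  # keep each write transaction on the master short


def examine(payload):
    """Placeholder for the per-row algorithm described above."""
    points_delta, amount = 10, 10
    return points_delta, amount


def flush_batch(master, point_updates, transaction_rows):
    """Apply one batch of updates and inserts in one short transaction."""
    with master.cursor() as cur:
        # Batched UPDATE: join the user table against a VALUES list.
        execute_values(cur,
            """
            UPDATE user_points AS up
               SET points = up.points + v.points_delta
              FROM (VALUES %s) AS v(user_id, points_delta)
             WHERE up.user_id = v.user_id
            """,
            point_updates)
        # Batched INSERT of the per-user transaction records.
        execute_values(cur,
            "INSERT INTO user_transactions (user_id, amount) VALUES %s",
            transaction_rows)
    master.commit()


def run_job(replica_dsn, master_dsn):
    replica = psycopg2.connect(replica_dsn)
    master = psycopg2.connect(master_dsn)
    point_updates, transaction_rows = [], []
    # Server-side cursor: stream rows instead of loading them all at once.
    with replica.cursor(name="cron_job_cursor") as cur:
        cur.itersize = 1000
        cur.execute("SELECT user_id, payload FROM big_table"
                    " WHERE needs_processing")
        for user_id, payload in cur:
            points_delta, amount = examine(payload)
            point_updates.append((user_id, points_delta))
            transaction_rows.append((user_id, amount))
            if len(point_updates) >= BATCH_SIZE:
                flush_batch(master, point_updates, transaction_rows)
                point_updates, transaction_rows = [], []
    if point_updates:
        flush_batch(master, point_updates, transaction_rows)
    replica.close()
    master.close()

Nothing here is meant to be definitive; the point is only that the heavy
read can stay on the replica (or a dump) while the master only ever sees a
handful of short write transactions.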

Regards.
