using pg_basebackup for point in time recovery - Mailing list pgsql-general

From Pierre Timmermans
Subject using pg_basebackup for point in time recovery
Date
Msg-id 202988089.1841507.1529409838414@mail.yahoo.com
In response to Re: PostgreSQL Volume Question  (Ron <ronljohnsonjr@gmail.com>)
Responses Re: using pg_basebackup for point in time recovery  (Michael Paquier <michael@paquier.xyz>)
List pgsql-general
Hi,
I find the documentation about pg_basebackup misleading: it states that standalone hot backups cannot be used for point-in-time recovery, but I don't get the point. If one has a combination of the nightly pg_basebackup and the archived WALs, then it is totally OK to do point-in-time recovery, I assume? (Of course, recovery.conf must be manually edited to set the restore_command and the recovery target time.)
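For illustration, a minimal recovery.conf along those lines might look like the sketch below (the archive path and the target time are placeholders, assuming the WALs were archived with a matching archive_command):

    restore_command = 'cp /mnt/server/archivedir/%f %p'
    recovery_target_time = '2018-06-18 03:00:00'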
Here is the doc. The sentence I find misleading is "These are backups that cannot be used for point-in-time recovery"; also, mentioning that they are faster than pg_dump dumps adds to the confusion (since pg_dump dumps cannot be used for PITR either).

It is possible to use PostgreSQL's backup facilities to produce standalone hot backups. These are backups that cannot be used for point-in-time recovery, yet are typically much faster to backup and restore than pg_dump dumps. (They are also much larger than pg_dump dumps, so in some cases the speed advantage might be negated.)

As with base backups, the easiest way to produce a standalone hot backup is to use the pg_basebackup tool. If you include the -X parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is required to restore the backup.
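For instance, a standalone hot backup of that kind could be taken like this (a sketch only; the backup destination is a placeholder):

    # stream the WAL generated during the backup into the backup itself
    pg_basebackup -D /backup/standalone -X stream -P

With -X stream (or -X fetch) the backup directory is self-contained and can be restored without any restore_command, so, as I understand it, on its own it can only be recovered to the end of the backup, not to an arbitrary later point in time.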

Thanks and regards,


Pierre


On Tuesday, June 19, 2018, 1:38:40 PM GMT+2, Ron <ronljohnsonjr@gmail.com> wrote:


On 06/15/2018 11:26 AM, Data Ace wrote:

Well, I think my question strayed from my intention because of my poor understanding and phrasing :(

Actually, I have 1 TB of data and hardware specs sufficient to handle that amount, but the problem is that the analysis requires too many join operations and is running too slowly right now.

I've searched and found that a graph model fits network data, such as social data, nicely in terms of query performance.


If your data is hierarchical, then storing it in a network database is perfectly reasonable.  I'm not sure, though, that there are many network databases for Linux; Raima is the only one I can think of.


Should I change my DB (I mean my DB for analysis), or do I need some other solution or extension?


Thanks


--
Angular momentum makes the world go 'round.
