Re: Regarding db dump with Fc taking very long time to completion - Mailing list pgsql-general

From Durgamahesh Manne
Subject Re: Regarding db dump with Fc taking very long time to completion
Date
Msg-id CAJCZkoKE+q6C67EwKLcn+H7xC2NaHs8pp7Yt0G1VK++EtypZRg@mail.gmail.com
In response to Re: Regarding db dump with Fc taking very long time to completion  (Luca Ferrari <fluca1978@gmail.com>)
Responses Re: Regarding db dump with Fc taking very long time to completion
List pgsql-general


On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari <fluca1978@gmail.com> wrote:
On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
<maheshpostgres9@gmail.com> wrote:
>  The logical dump of that table is taking more than 7 hours to complete.
>
>  I need to reduce the dump time of that table, which is 88 GB in size.

Good luck!
I would see two possible solutions to the problem:
1) use a physical backup and switch to incremental backups (e.g.,
pgbackrest; see the sketch below)
2) partition the table and back up single pieces, if possible
(constraints?), and be aware it will become harder to maintain (added
partitions, and so on).
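
For option 1, a minimal sketch of what an incremental setup with
pgbackrest can look like (the stanza name, paths, and PostgreSQL
version below are placeholders, not taken from this thread):

    # /etc/pgbackrest/pgbackrest.conf -- illustrative values only
    # [global]
    # repo1-path=/var/lib/pgbackrest
    # repo1-retention-full=2
    #
    # [main]
    # pg1-path=/var/lib/postgresql/11/main

    # initialise the stanza once, take a full backup, then cheap incrementals
    pgbackrest --stanza=main stanza-create
    pgbackrest --stanza=main --type=full backup
    pgbackrest --stanza=main --type=incr backup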

Are all of the 88 GB written during a bulk process? I guess not, so
with partitioning you could avoid locking the whole dataset and reduce
contention (and thus time).
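
With declarative partitioning each partition is an ordinary table, so
pg_dump --table can dump partitions individually (and several can be
dumped in parallel from separate sessions). A rough sketch only; the
column names, ranges, and database name below are made up:

    # hypothetical range-partitioned replacement for the big table
    psql -d mydb -c "CREATE TABLE big_table_part (id bigint, created_at timestamptz, payload text) PARTITION BY RANGE (created_at);"
    psql -d mydb -c "CREATE TABLE big_table_2019q3 PARTITION OF big_table_part FOR VALUES FROM ('2019-07-01') TO ('2019-10-01');"

    # each partition can then be dumped on its own
    pg_dump -Fc --table=big_table_2019q3 -f big_table_2019q3.dump mydb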

Luca


Hi respected postgres team

  Are all of the 88 GB written during a bulk process?
   No.
 Earlier, the table size was 88 GB.
 Now the table size is about 148 GB.
 Is there any way to reduce the dump time when I dump this table, now 148 GB in size, without partitioning it?
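
(For illustration only, not from this thread: with a single table,
pg_dump -j does not help, because parallel dumps split the work per
table; the usual things to try are lowering or disabling the -Fc
compression, or compressing a plain dump externally with a parallel
compressor. The database and table names below are placeholders.)

    # -Fc compresses inside a single pg_dump process; -Z 0 turns that off,
    # which often helps when compression is the bottleneck
    pg_dump -Fc -Z 0 --table=big_table -f big_table.dump mydb

    # or emit a plain-format dump and compress it with a parallel tool
    # (pigz is an external utility, not part of PostgreSQL)
    pg_dump -Fp --table=big_table mydb | pigz -p 8 > big_table.sql.gz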


Regards
Durgamahesh Manne
