On 9/25/19 12:15 AM, Krishnakant Mane wrote:
> Hello all,
>
> I have been using PostgreSQL for enterprise-quality accounting
> automation and inventory management software called GNUKhata
> <https://gnukhata.in>
>
> Our team is planning to add a backup and restore function to the software.
>
> But we don't want to dump the entire database and then restore the same.
>
> What we are trying to do is copy the data specific to one organization.
>
> The challenge here is that I might copy all data (account heads, bills,
> vouchers, etc.) for one organization from an instance on one machine.
>
> I then take the archive, in whatever format, to another machine and
> attempt to restore it.
>
> The risk is that, for example, if the primary key value for orgcode in
> the organization table is 5, it might conflict with existing data on the
> machine where I am attempting the restore.
>
> The same holds true for bills, invoices, etc.
>
> A certain account head with accountcode 1 might already be present on
> the second machine.
>
> I am not expecting users to empty all data from the destination
> machine before restoring a backup.
>
> The reason is that an auditor may have many clients' data, and one can't
> predict what primary key values are going to come from a backup.
>
> Basically, this is more of a copy-paste than a pure backup and restore.
>
> Can anyone suggest how to handle such conflicts?
Hard to say. If the data is held in common tables (bills, vouchers,
etc.), then the only thing I see happening is changing the PK values to
unused values. That could turn into a nightmare, though, and on top of
that you lose the connection to the original data source. If the data
can be broken out into separate tables, then I could see placing them in
their own schema.
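
The re-keying approach above can be sketched as follows. This is only an
illustration of the idea, not GNUKhata's actual schema: the table and
column names (organization/orgcode, bill/billcode) are hypothetical, and
sqlite3 stands in for PostgreSQL so the snippet is self-contained. The
core of it is an old-key -> new-key map built while inserting, which is
then used to translate foreign keys in the dependent rows. On real
PostgreSQL you would more likely let serial/identity columns assign the
new keys and use RETURNING to capture them.

```python
import sqlite3

# Destination database that already holds conflicting data.
dest = sqlite3.connect(":memory:")
dest.executescript("""
    CREATE TABLE organization (orgcode INTEGER PRIMARY KEY, orgname TEXT);
    CREATE TABLE bill (billcode INTEGER PRIMARY KEY,
                       orgcode  INTEGER REFERENCES organization(orgcode),
                       amount   REAL);
    -- Pre-existing rows on the destination machine.
    INSERT INTO organization VALUES (5, 'Existing Org');
    INSERT INTO bill VALUES (1, 5, 100.0);
""")

# Rows exported from the source machine; orgcode 5 and billcode 1
# would conflict if inserted as-is.
exported_org = (5, 'Imported Org')
exported_bills = [(1, 5, 250.0), (2, 5, 75.5)]

cur = dest.cursor()

# Assign the organization a fresh key and remember old -> new.
cur.execute("SELECT COALESCE(MAX(orgcode), 0) + 1 FROM organization")
new_orgcode = cur.fetchone()[0]
cur.execute("INSERT INTO organization VALUES (?, ?)",
            (new_orgcode, exported_org[1]))
org_map = {exported_org[0]: new_orgcode}

# Re-key the dependent rows, translating their foreign keys via the map.
cur.execute("SELECT COALESCE(MAX(billcode), 0) + 1 FROM bill")
next_bill = cur.fetchone()[0]
for old_billcode, old_orgcode, amount in exported_bills:
    cur.execute("INSERT INTO bill VALUES (?, ?, ?)",
                (next_bill, org_map[old_orgcode], amount))
    next_bill += 1

dest.commit()
cur.execute("SELECT COUNT(*) FROM bill WHERE orgcode = ?", (new_orgcode,))
print(cur.fetchone()[0])
```

The downside Adrian mentions is visible here: after the import, nothing
in the destination records that orgcode 6 was orgcode 5 at the source,
so an audit trail back to the original data would need an extra mapping
table of its own.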
>
>
> --
> Regards,
> Krishnakant Mane,
> Project Founder and Leader,
> GNUKhata <https://gnukhata.in/>
> (Open-source Accounting, Billing and Inventory Management Software)
--
Adrian Klaver
adrian.klaver@aklaver.com