Re: Populating large tables with occasional bad values - Mailing list pgsql-jdbc

From John T. Dow
Subject Re: Populating large tables with occasional bad values
Date
Msg-id 200806111658.m5BGwRZX057718@web2.nidhog.com
In response to Re: Populating large tables with occasional bad values  (Craig Ringer <craig@postnewspapers.com.au>)
Responses Re: Populating large tables with occasional bad values
Re: Populating large tables with occasional bad values
List pgsql-jdbc

Latency it is.

I just had no idea it would add up so fast. I guess I was thinking that you could pump a lot of data over the Internet
without realizing the overhead when the data is broken down into little chunks, each of which waits on its own round trip.
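
If per-statement round trips really are the bottleneck, one partial fix is the standard JDBC batch API: queue many INSERTs client-side and send them in one round trip. A minimal sketch, with hypothetical connection details and table/column names:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Collections;
    import java.util.List;

    public class BatchLoad {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; substitute your own.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://host/db", "user", "pass");
            conn.setAutoCommit(false);

            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO accounts (id, name) VALUES (?, ?)");
            int pending = 0;
            for (String[] row : legacyRows()) {  // stand-in for the massaged legacy data
                ps.setInt(1, Integer.parseInt(row[0]));
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending == 1000) {
                    ps.executeBatch();  // one round trip for 1000 rows
                    conn.commit();      // commit per batch so a bad row later
                    pending = 0;        // doesn't cost the whole load
                }
            }
            ps.executeBatch();          // flush the remainder
            conn.commit();
            ps.close();
            conn.close();
        }

        private static List<String[]> legacyRows() {
            return Collections.emptyList();  // placeholder row source
        }
    }

The catch, given this thread's topic: if one row in a batch violates a constraint, executeBatch throws BatchUpdateException and Postgres aborts the open transaction, so a failed batch has to be retried row by row to locate the bad value.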

I'm not sure what the best solution is. I do this rarely, usually when first loading the data from the legacy system. When
ready to go live, my (remote) client will send the data, I'll massage it for loading, then load it into their (remote)
Postgres server. This usually takes place over a weekend, but last time it was an evening that lasted until 4 AM.

If I did this regularly, three options seem easiest.

1 - Load locally to get clean data and then COPY. This requires the server to have local access to the file to be
copied, and if the server is hosted by an ISP, whether that's easy depends on them. (But see the CopyManager sketch
after this list, which streams COPY over the JDBC connection instead.)

2 - Send the data to the client and have them run the Java app to insert over their LAN (this only works if the database
server is local to them and not at an ISP).

3 - If the only problem is duplicate keys, load into a special table without the constraint, issue UPDATE commands to
rewrite the keys as needed, then SELECT/INSERT into the correct table (a rough sketch follows the CopyManager one below).
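
Regarding option 1: some builds of the JDBC driver expose a CopyManager that drives COPY FROM STDIN over the ordinary connection, in which case the file only needs to exist on the client, not on the server. A minimal sketch, assuming your driver version includes the copy API (table and file names are hypothetical):

    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyLoad {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://host/db", "user", "pass");

            // COPY FROM STDIN reads the stream the client supplies, so the
            // file only has to exist here, not on the server's filesystem.
            CopyManager copy = ((PGConnection) conn).getCopyAPI();
            long rows = copy.copyIn(
                    "COPY accounts FROM STDIN WITH CSV",
                    new FileReader("accounts.csv"));  // hypothetical local file
            System.out.println("copied " + rows + " rows");
            conn.close();
        }
    }

The caveat for this thread's topic still applies: a single bad row aborts the entire COPY, so the data has to be clean first.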
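
And option 3 might look roughly like this; the table names, the sequence, and the renumbering rule are all placeholders for whatever "rewrite the keys as needed" means in your data:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class StagedLoad {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://host/db", "user", "pass");
            Statement st = conn.createStatement();

            // Same columns as the target, but LIKE does not copy the primary
            // key, so duplicate keys load without complaint.
            st.execute("CREATE TABLE accounts_stage (LIKE accounts)");

            // ... bulk-load into accounts_stage here (batched INSERTs or COPY) ...

            // Rewrite any key that collides, either within the staged data or
            // with rows already in the target. Drawing fresh ids from a
            // sequence is just one possible rule.
            st.executeUpdate(
                "UPDATE accounts_stage SET id = nextval('accounts_id_seq') " +
                "WHERE id IN (SELECT id FROM accounts_stage " +
                "             GROUP BY id HAVING count(*) > 1 " +
                "             UNION SELECT id FROM accounts)");

            // With the keys unique, the final insert succeeds in one shot.
            st.executeUpdate("INSERT INTO accounts SELECT * FROM accounts_stage");
            st.execute("DROP TABLE accounts_stage");

            st.close();
            conn.close();
        }
    }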

Thanks

John

