Albretch Mueller wrote:
> On Tue, Jun 3, 2008 at 11:03 PM, Oliver Jowett <oliver@opencloud.com> wrote:
>> That's essentially the same as the COPY you quoted in your original email,
>> isn't it? So.. what exactly is it you want to do that COPY doesn't do?
> ~
> well, actually, not exactly; based on:
> ~
> http://postgresql.com.cn/docs/8.3/static/sql-copy.html
> ~
> COPY <table_name> [FROM|TO] <data_feed> <OPTIONS>
> ~
> imports/exports the data into/out of PG, so you will essentially be
> duplicating the data and having to keep it in sync. This is exactly what
> I am trying to avoid; I would like PG to handle the data right from the
> data feed
As Dave said, PG won't magically keep the data up to date for you; you
will need some external process to do the synchronization with the feed.
That process could use COPY if it wanted to.
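For example, a cron job could refresh the table from the feed in a single
transaction. This is only a sketch: the table, column names and file path
are placeholders, and a server-side COPY reads a file on the database
server and needs superuser (from psql you would use \copy instead):

    BEGIN;
    -- throw away the previous snapshot and reload it from the feed's CSV
    TRUNCATE tbluniq;
    COPY tbluniq (uniqname, uniqcity, uniqcomments)
        FROM '/path/to/uniq.csv' WITH CSV;
    COMMIT;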
Then you said:
> Hmm! Doesn't PG have a way to do something like this, say in MySQL:
>
> load data local infile 'uniq.csv' into table tblUniq
> fields terminated by ','
> enclosed by '"'
> lines terminated by '\n'
> (uniqName, uniqCity, uniqComments)
>
> and even in low end (not real) DBs like MS Access?
But isn't this doing exactly what PG's COPY does - loading data once, from
a local file, with no ongoing synchronization?
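For the record, a rough COPY equivalent of your LOAD DATA example (untested;
COPY wants an absolute path on the server, or psql's \copy for a client-side
file) would be something like:

    COPY tblUniq (uniqName, uniqCity, uniqComments)
        FROM '/path/to/uniq.csv' WITH CSV;

Comma delimiters and double-quote quoting are already the defaults in CSV
mode, so they don't need to be spelled out.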
> Is there a technical reason for that, or should I apply for an RFE?
Personally, I don't see this sort of synchronization as something you want
the core DB to be doing anyway. The rules for how you get the data, how
often you check for updates, how you merge the updates, and so on are very
application-specific.
-O