Thread: backup and restore questions
I had a few questions concerning the backup/restore process for pg.

1) Is it possible to dump data onto an existing database that already contains data (assuming the schema of both are the same)? Has anyone done this? I am thinking of this in order to expedite the data load process.

2) I read that when dumping and restoring data the insert option is safer but slower than copy. Does anyone know from experience how much slower (especially for a database containing millions of records)?

3) Can pg_restore accept a file that is not archived, like a zipped file or a plain text file (file.gz or file)?

4) Is the general practice to have one whole dump of a database, or several separate dumps (by table etc.)?

Thanks a lot
Sally
Dear Sally,

> I had a few questions concerning the backup/restore process for pg.
>
> 1) Is it possible to dump data onto an existing database that contains
> data (assuming the schema of both are the same). Has anyone done
> this? I am thinking of this in order to expedite the data load process

This can work provided both databases have the same schema. I have not tried it myself, though.

> 2) I read that when dumping and restoring data the insert option is
> safer but slower than copy? Does anyone know from experience how much
> slower (especially for a database containing millions of records).

If you are really serious about your data, the best way AFAIK is INSERT, because with INSERT statements you have more room to move: you can still restore old data into a table after upgrading the database or adding a new column to it. On a Celeron 900 with PostgreSQL 7.3.4 on RH 9.0, a 151 KB tarred backup takes about 5 minutes to restore. Restore speed depends roughly 99% on disk throughput and 1% on CPU for a plain INSERT file, and about 90% on disk throughput and 10% on CPU for a tarred file.

> 3) can pg_restore accept a file that is not archived like a zipped
> file or plain text file (file.gz or file)

You can use both zipped and plain files. Newer versions of pg_restore (7.3 and above) identify the file format automatically.

> 4) Is the general practice to have one whole dump of a database or
> several separate dumps (by table etc...)?

One dump for the data and another dump for the schema will always help (a sketch of that split follows below).

--
Regards,
Vishal Kashyap
Director / Lead Developer,
Sai Hertz And Control Systems Pvt Ltd,
http://saihertz.rediffblogs.com
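To illustrate that split, a rough sketch (the database name mydb is just a placeholder, and exact option spellings can vary a little between pg_dump versions, so check your man page):

  pg_dump -s mydb > mydb-schema.sql              # schema only: tables, indexes, constraints
  pg_dump -a mydb > mydb-data.sql                # data only, as COPY blocks
  pg_dump -a --inserts mydb > mydb-inserts.sql   # data only, as INSERT statements

Restoring is then just a matter of feeding the files to psql, e.g. psql mydb < mydb-schema.sql and then the data file.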
On Thu, 19 Feb 2004, Sally Sally wrote:

> I had a few questions concerning the backup/restore process for pg.
>
> 1) Is it possible to dump data onto an existing database that contains data (assuming the schema of
> both are the same). Has anyone done this? I am thinking of this in order to expedite the data load
> process

I do it all the time. Note that if you have constraints that adding the new data would violate, it's likely to not import anything.

> 2) I read that when dumping and restoring data the insert option is safer but slower than copy? Does
> anyone know from experience how much slower (especially for a database containing millions of
> records).

Depends, but usually from about twice as slow to as much as ten times slower. It isn't really any "safer", just more portable to other databases.

> 3) can pg_restore accept a file that is not archived like a zipped file or plain text file (file.gz
> or file)

Yes, plain text is fine. For a .gz file you might have to run gunzip first. I usually just stick to plain text.

> 4) Is the general practice to have one whole dump of a database or several separate dumps (by table
> etc...)?

It's normal to see a single large dump. Where I work we run >80 databases (running on 7.2.x, so no schemas), with each database belonging to a particular application. I wrote a custom wrapper for pg_dump that acts something like pg_dumpall but dumps each database to a separate file (a sketch of the idea follows below). That makes restoring one table or so for a single database much easier, since you don't have to slog through gigabytes of unrelated data.
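A minimal sketch of that kind of per-database dump - not the actual wrapper mentioned above, just one way to do it, assuming you can connect locally without a password:

  # dump every non-template database to its own file
  for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate" template1); do
      pg_dump "$db" > "$db.sql"
  done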
On Thursday 19 February 2004 17:41, Sally Sally wrote:

> I had a few questions concerning the backup/restore process for pg.

Please don't post HTML if you can avoid it.

> 1) Is it possible to dump data onto an existing database that contains data (assuming the schema of
> both are the same). Has anyone done this? I am thinking of this in order to expedite the data load
> process

No reason why you can't copy the data in - assuming primary keys don't clash, of course.

> 2) I read that when dumping and restoring data the insert option is safer but slower than copy? Does
> anyone know from experience how much slower (especially for a database containing millions of
> records).

It's not safer so much as more standard - any database can handle a series of INSERT statements. Inserts are a lot slower.

> 3) can pg_restore accept a file that is not archived like a zipped file or plain text file (file.gz
> or file)

You can't restore a whole database that way. It's straightforward enough to use COPY or \copy with psql to handle a simple tab (or whatever) separated file, though - see the sketch below. If you want to unzip the file first, well, that's what scripting is for.

> 4) Is the general practice to have one whole dump of a database or several separate dumps (by table
> etc...)?

Personally, I dump the data+schema in one file and the schema (again) in another file. That makes it easy to scan through the schema. You can restore just a single table from a full dump anyway - see the manuals for full details.

--
  Richard Huxton
  Archonet Ltd
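For example, assuming a table called mytable and a tab-separated file /tmp/mytable.txt (both names are only placeholders):

  psql mydb -c "\copy mytable FROM '/tmp/mytable.txt'"   # \copy defaults to tab-separated text

And unzipping on the fly is a one-liner:

  gunzip -c mydb-dump.sql.gz | psql mydb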