Thread: Re: [Retrieved]RE: backup and recovery

Re: [Retrieved]RE: backup and recovery

From: Naomi Walker
Date: Tue, 23 Mar 2004 17:02:36 -0700
I'm not sure of the correct protocol for getting things on the "todo"
list.  Whom shall we beg?


At 10:13 AM 3/22/2004, Mark M. Huber wrote:
>That sounds like a brilliant idea. Who do we tell to make it so?
>
>Mark H
>
>-----Original Message-----
>From: Naomi Walker [mailto:nwalker@eldocomp.com]
>Sent: Monday, March 22, 2004 8:19 AM
>To: Mark M. Huber
>Cc: Naomi Walker; pgsql-admin@postgresql.org
>Subject: Re: [ADMIN] backup and recovery
>
>
>That brings up a good point.  It would be extremely helpful to add two
>parameters to pg_dump.  One, to set how many rows to insert before each
>commit, and two, to live through X number of errors before dying (and
>putting the "bad" rows in a file).
>
>
>At 10:15 AM 3/19/2004, Mark M. Huber wrote:
> >What it was, I guess, is that pg_dump makes one large transaction, and
> >our shell-script wizard wrote a Perl program to add a commit every 500
> >rows, or whatever you set. Also, I should have said that we were doing
> >the recovery with the INSERT statements created by pg_dump. So... my
> >500,000-row table recovery took < 10 min.
> >
> >Thanks for your help.
> >
> >Mark H
> >
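
A minimal sketch of the kind of filter Mark describes above, wrapping
every N INSERT statements from an INSERT-style dump in a transaction.
This is a hypothetical reconstruction, not the actual script from the
thread, and the script name in the usage line below is made up:

    #!/usr/bin/perl
    # Sketch: wrap every N INSERTs from "pg_dump -d" output in a
    # BEGIN/COMMIT block.  N defaults to 500, the figure used above.
    use strict;
    use warnings;

    my $batch = shift @ARGV || 500;
    my $count = 0;

    print "BEGIN;\n";
    while (my $line = <STDIN>) {
        print $line;
        if ($line =~ /^INSERT\s/) {
            $count++;
            # Commit and start a new transaction every $batch rows.
            print "COMMIT;\nBEGIN;\n" if $count % $batch == 0;
        }
    }
    print "COMMIT;\n";

Usage would be something like:

    pg_dump -d mydb | perl batch_commits.pl 500 | psql mydb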


Naomi Walker                         Chief Information Officer
                                               Eldorado Computing, Inc.
nwalker@eldocomp.com           602-604-3100

-------------------------------------------------------------------------------------------------------------------------
Forget past mistakes. Forget failures. Forget everything except what you're
going to do now and do it.
- William Durant, founder of General Motors

------------------------------------------------------------------------------------------------------------------------


Re: [Retrieved]RE: backup and recovery

From: Bruce Momjian
Date: Tue, 23 Mar 2004 19:50:24 -0500 (EST)
Naomi Walker wrote:
>
> I'm not sure of the correct protocol for getting things on the "todo"
> list.  Whom shall we beg?
>

Uh, you just ask and we discuss it on the list.

Are you using INSERTs from pg_dump?  I assume so because COPY uses a
single transaction per command.  Right now with pg_dump -d I see:

    --
    -- Data for Name: has_oids; Type: TABLE DATA; Schema: public; Owner: postgres
    --

    INSERT INTO has_oids VALUES (1);
    INSERT INTO has_oids VALUES (1);
    INSERT INTO has_oids VALUES (1);
    INSERT INTO has_oids VALUES (1);

Seems that should be inside a BEGIN/COMMIT for performance reasons, and
to have the same behavior as COPY (fail if any row fails).  Comments?

As far as skipping on errors, I am unsure on that one, and if we put the
INSERTs in a transaction, we will have no way of rolling back only the
few inserts that fail.
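
For illustration, the dump section above with the suggested wrapping
(hypothetical output; pg_dump does not emit this today):

    BEGIN;
    INSERT INTO has_oids VALUES (1);
    INSERT INTO has_oids VALUES (1);
    INSERT INTO has_oids VALUES (1);
    INSERT INTO has_oids VALUES (1);
    COMMIT;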

---------------------------------------------------------------------------

> >
> >That brings up a good point.  It would be extremely helpful to add two
> >parameters to pg_dump.  One, to set how many rows to insert before each
> >commit, and two, to live through X number of errors before dying (and
> >putting the "bad" rows in a file).
> >
> >
> >At 10:15 AM 3/19/2004, Mark M. Huber wrote:
> > >What it was, I guess, is that pg_dump makes one large transaction, and
> > >our shell-script wizard wrote a Perl program to add a commit every 500
> > >rows, or whatever you set. Also, I should have said that we were doing
> > >the recovery with the INSERT statements created by pg_dump. So... my
> > >500,000-row table recovery took < 10 min.
> > >

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: [Retrieved]RE: backup and recovery

From: Tsirkin Evgeny
Date:
Wouldn't it be better to put this in pg_restore, or to integrate such a
thing into psql?
On Tue, 23 Mar 2004 17:02:36 -0700, Naomi Walker <nwalker@eldocomp.com>
wrote:

>
> I'm not sure of the correct protocol for getting things on the "todo"
> list.  Whom shall we beg?
>
>
> At 10:13 AM 3/22/2004, Mark M. Huber wrote:
>> That sounds like a brilliant idea. Who do we tell to make it so?
>>
>> Mark H
>>
>> -----Original Message-----
>> From: Naomi Walker [mailto:nwalker@eldocomp.com]
>> Sent: Monday, March 22, 2004 8:19 AM
>> To: Mark M. Huber
>> Cc: Naomi Walker; pgsql-admin@postgresql.org
>> Subject: Re: [ADMIN] backup and recovery
>>
>>
>> That brings up a good point.  It would be extremely helpful to add two
>> parameters to pg_dump.  One, to set how many rows to insert before each
>> commit, and two, to live through X number of errors before dying (and
>> putting the "bad" rows in a file).
>>
>>
>> At 10:15 AM 3/19/2004, Mark M. Huber wrote:
>> >What it was, I guess, is that pg_dump makes one large transaction, and
>> >our shell-script wizard wrote a Perl program to add a commit every 500
>> >rows, or whatever you set. Also, I should have said that we were doing
>> >the recovery with the INSERT statements created by pg_dump. So... my
>> >500,000-row table recovery took < 10 min.
>> >
>> >Thanks for your help.
>> >
>> >Mark H

Re: [Retrieved]RE: backup and recovery

From: Tsirkin Evgeny
Date:
On Tue, 23 Mar 2004 19:50:24 -0500 (EST), Bruce Momjian
<pgman@candle.pha.pa.us> wrote:

> Naomi Walker wrote:
>>
>> I'm not sure of the correct protocol for getting things on the "todo"
>> list.  Whom shall we beg?
>>
>
> Uh, you just ask and we discuss it on the list.
>
> Are you using INSERTs from pg_dump?  I assume so because COPY uses a
> single transaction per command.  Right now with pg_dump -d I see:
>
>     --
>     -- Data for Name: has_oids; Type: TABLE DATA; Schema: public; Owner: postgres
>     --
>
>     INSERT INTO has_oids VALUES (1);
>     INSERT INTO has_oids VALUES (1);
>     INSERT INTO has_oids VALUES (1);
>     INSERT INTO has_oids VALUES (1);
>
> Seems that should be inside a BEGIN/COMMIT for performance reasons, and
> to have the same behavior as COPY (fail if any row fails).  Comments?
>
> As far as skipping on errors, I am unsure on that one, and if we put the
> INSERTs in a transaction, we will have no way of rolling back only the
> few inserts that fail.
>
That is right, but there are situations where you would prefer at least
some of the data to be inserted rather than having all the changes
rolled back because of errors.
> ---------------------------------------------------------------------------
>
>> >
>> >That brings up a good point.  It would be extremely helpful to add two
>> >parameters to pg_dump.  One, to set how many rows to insert before each
>> >commit, and two, to live through X number of errors before dying (and
>> >putting the "bad" rows in a file).
>> >
>> >
>> >At 10:15 AM 3/19/2004, Mark M. Huber wrote:
>> > >What it was, I guess, is that pg_dump makes one large transaction,
>> > >and our shell-script wizard wrote a Perl program to add a commit
>> > >every 500 rows, or whatever you set. Also, I should have said that
>> > >we were doing the recovery with the INSERT statements created by
>> > >pg_dump. So... my 500,000-row table recovery took < 10 min.
>> > >
>



Re: [Retrieved]RE: backup and recovery

From: Bruce Momjian
Date:
Tsirkin Evgeny wrote:
> > Uh, you just ask and we discuss it on the list.
> >
> > Are you using INSERTs from pg_dump?  I assume so because COPY uses a
> > single transaction per command.  Right now with pg_dump -d I see:
> >
> >     --
> >     -- Data for Name: has_oids; Type: TABLE DATA; Schema: public; Owner: postgres
> >     --
> >
> >     INSERT INTO has_oids VALUES (1);
> >     INSERT INTO has_oids VALUES (1);
> >     INSERT INTO has_oids VALUES (1);
> >     INSERT INTO has_oids VALUES (1);
> >
> > Seems that should be inside a BEGIN/COMMIT for performance reasons, and
> > to have the same behavior as COPY (fail if any row fails).  Comments?
> >
> > As far as skipping on errors, I am unsure on that one, and if we put the
> > INSERTs in a transaction, we will have no way of rolling back only the
> > few inserts that fail.
> >
> That is right, but there are situations where you would prefer at least
> some of the data to be inserted rather than having all the changes
> rolled back because of errors.

Added to TODO:

    * Have pg_dump use multi-statement transactions for INSERT dumps

For simple performance reasons, it would be good.  I am not sure about
allowing errors to continue loading.   Anyone else?
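
For concreteness, a rough sketch of the "live through X errors" behavior
done client-side with Perl DBI rather than inside pg_dump. This is
hypothetical; the database name, reject-file name, and error limit are
made up:

    #!/usr/bin/perl
    # Sketch: apply INSERT statements from a dump one at a time,
    # saving rows that fail to a reject file and giving up after a
    # maximum number of errors.
    use strict;
    use warnings;
    use DBI;

    my $max_errors = 10;    # the "X" in the proposal
    my $errors     = 0;

    # AutoCommit on: each INSERT commits by itself, so one bad row
    # does not roll back the others (at the cost of reload speed).
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', '', '',
                           { RaiseError => 0, AutoCommit => 1 })
        or die $DBI::errstr;

    open my $bad, '>', 'bad_rows.sql' or die $!;
    while (my $stmt = <STDIN>) {
        next unless $stmt =~ /^INSERT\s/;
        next if defined $dbh->do($stmt);       # success: move on
        print {$bad} $stmt;                    # keep the failing row
        warn 'row failed: ' . $dbh->errstr . "\n";
        die "giving up after $max_errors errors\n"
            if ++$errors >= $max_errors;
    }
    close $bad;
    $dbh->disconnect;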

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: [Retrieved]RE: backup and recovery

From: Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Added to TODO:
>     * Have pg_dump use multi-statement transactions for INSERT dumps

> For simple performance reasons, it would be good.  I am not sure about
> allowing errors to continue loading.   Anyone else?

Of course, anyone who actually cares about reload speed shouldn't be
using INSERT-style dumps anyway ... I'm not sure why we should expend
effort on that rather than just telling people to use the COPY mode.

            regards, tom lane
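
For reference, the two dump styles being contrasted (database and file
names are hypothetical):

    pg_dump mydb    > mydb.sql    # default: COPY-style data, fast to reload
    pg_dump -d mydb > mydb.sql    # INSERT-style data, much slower to reload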

Re: [Retrieved]RE: backup and recovery

From: Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Added to TODO:
> >     * Have pg_dump use multi-statement transactions for INSERT dumps
>
> > For simple performance reasons, it would be good.  I am not sure about
> > allowing errors to continue loading.   Anyone else?
>
> Of course, anyone who actually cares about reload speed shouldn't be
> using INSERT-style dumps anyway ... I'm not sure why we should expend
> effort on that rather than just telling people to use the COPY mode.

My bigger issue is that COPY will fail on a single row failure, leaving
nothing in the table, while INSERTs will keep the rows that loaded
successfully.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: [Retrieved]RE: backup and recovery

From: Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> My bigger issue is that COPY will fail on a single row failure, and
> nothing will be in the table, while INSERT will not.

In theory, there shouldn't be any failures, because the data was known
valid when it was dumped.

Of course, practice often differs from theory, but I wonder whether we
aren't talking about palliating a symptom instead of fixing the real
problem.

            regards, tom lane

Torn pages Detection in SQL SERVER

From: Hemapriya
Date:
Hi,

Does anyone know how to detect torn pages / torn bits from the memory
dump of the DBCC PAGE command in SQL Server?

Any help or hint is highly appreciated.

Thanks
priya
