Yes, but that's not always a valid assumption.

And PITR must still update the indexes on each replayed insert, which is
much slower than pg_dump's bulk load followed by a single CREATE INDEX.
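The cost gap can be sketched with a small Python analogy (this is an illustration, not PostgreSQL internals): maintaining a sorted structure one row at a time, as WAL replay effectively forces the index to do, versus loading everything first and building the ordering once, as a pg_dump restore does with COPY followed by CREATE INDEX.

```python
import bisect
import random

random.seed(0)
rows = [random.random() for _ in range(20000)]

# Per-row index maintenance: every insert must locate its slot and
# shift existing entries, roughly the work of updating an index for
# each replayed INSERT.
idx = []
for r in rows:
    bisect.insort(idx, r)

# Bulk load, then build the index once: a single O(n log n) sort,
# analogous to COPY followed by CREATE INDEX on restore.
idx_bulk = sorted(rows)

assert idx == idx_bulk  # same final index; the bulk build is far cheaper
```

Timing the two loops makes the difference obvious even at this toy scale; the per-insert version does O(n) shifting work for every row.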
On 05/17/07 16:01, Ben wrote:
> Yes, but the implication is that large databases probably don't update
> every row between backup periods.
>
> On Thu, 17 May 2007, Ron Johnson wrote:
>
>> On 05/17/07 11:04, Jim C. Nasby wrote:
>> [snip]
>>>
>>> Ultimately though, once your database gets past a certain size, you
>>> really want to be using PITR and not pg_dump as your main recovery
>>> strategy.
>
> But doesn't that just replay each transaction? It must manage the
> index nodes during each update/delete/insert, and multiple UPDATE
> statements mean that you hit the same page over and over again.
-- 
Ron Johnson, Jr.
Jefferson LA USA
Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!