Re: For the ametures. (related to "Are we losing momentum?") - Mailing list pgsql-hackers

From Matthew T. O'Connor
Subject Re: For the ametures. (related to "Are we losing momentum?")
Msg-id 00ca01c3051a$13a2c9b0$5a00a8c0@hplaptop
In response to Re: For the ametures. (related to "Are we losing momentum?")  ("Dave Page" <dpage@vale-housing.co.uk>)
Responses Re: For the ametures. (related to "Are we losing momentum?")
List pgsql-hackers
----- Original Message -----
From: "Ben Clewett" <B.Clewett@roadrunner.uk.com>
> >>- There are no mandatory administrative tasks, e.g. VACUUM.
> >>(A stand-alone commercial app, like an email client, will be
> >>constrained by having to be an app and a DBA in one.)
> >
> > PostgreSQL is by no means alone in this requirement. SQL Server, for
> > example, has 'optimizations' that are usually performed as part of a
> > scheduled maintenance plan and are analogous to vacuum in some ways.
>
> Is this a weakness in DBMSs that don't require this (MySQL, Liant,
> etc.)?  Is there a way of building a garbage collector into the system?
> My Windows PC has no 'cron'.

Work is being done to build vacuum into the backend so that cron is not
required.  Hopefully it will be in 7.4.

> >>- The tables (not InnoDB) are in separate files of the same name,
> >>giving the OS administrator great flexibility, e.g. putting tables on
> >>separate partitions and therefore greatly speeding performance.
> >
> > One reason for not doing this is that a table in PostgreSQL might span
> > multiple files if it exceeds a couple of gigs in size.
>
> Working with IDE drives on PCs, you can double the performance of a DB
> just by putting half the tables on a disk on another IDE chain.  Adding
> a DB using 'tar' is a very powerful ability.

You can do this using symlinks, but you do have to shut down the postmaster
before you play with the files directly.
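
Roughly like this (the data directory, second disk path, and file names
below are made up for illustration; you'd find a table's actual file via
its OID/relfilenode in pg_class):

    # stop the postmaster before touching anything
    pg_ctl stop -D /usr/local/pgsql/data

    # move one table's file to the second disk and leave a symlink behind
    mv /usr/local/pgsql/data/base/16384/16385 /disk2/pgdata/16385
    ln -s /disk2/pgdata/16385 /usr/local/pgsql/data/base/16384/16385

    # start it back up
    pg_ctl start -D /usr/local/pgsql/data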

> >>- They have extensive backup support, including now concurrent backup
> >>without user interruption or risk of inconsistency.
> >
> > So does PostgreSQL (pg_dump/pg_dumpall).
>
> I have used this, and it's a great command.
>
> I could not work out from the documentation (app-pg-dump.html) whether
> it takes a snapshot at the start time, or archives data at the time it
> finds it.  As the documentation does not clarify this very important
> point, I decided it's not safe to use while the system is in use.
>
> Can this command be used with users in the system making heavy changes,
> and when it takes many hours to complete, does it produce a valid and
> consistent backup?

Yes, it takes a snapshot as of when it starts dumping the database, so it's
consistent no matter how much activity goes on after you start pg_dump.
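
So something as simple as this should be safe to run against a live
database (the database name is just an example):

    # dump one database to a script file; the dump reflects a snapshot
    # taken when pg_dump starts, regardless of concurrent updates
    pg_dump mydb > mydb.sql

    # or dump every database in the cluster
    pg_dumpall > cluster.sql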


