Constructing a backup routine

From: Johnny Jørgensen
I've been over the docs, Bruce's book, and a good part of the web, where
there is plenty of documentation on COPY, \copy and pg_dump, but I have yet
to make a working backup (and the system goes into production in 3
weeks..). The problem is, I get my data moved out & in fine, but the
sequences get reset. This is, of course, something of a problem, since
it breaks new inserts into unique columns fed by sequences after a restore.
Also, I've been putting a lot of work into some plpgsql functions that
I have to load manually - there's only a handful, so it isn't too big a
problem, but all in all, getting a near-workable backup takes me an hour
or so by hand..
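
Roughly what I do now - the database and table names below are just
examples, not my real schema - is a per-table copy out and back in:

$ psql mydb -c "\copy customers to 'customers.txt'"
$ psql mydb -c "\copy orders to 'orders.txt'"

and on the restore side:

$ psql mydb -c "\copy customers from 'customers.txt'"
$ psql mydb -c "\copy orders from 'orders.txt'"

That moves the table data fine, but nothing about the sequences or the
functions comes along with it.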

How do you big folks handle backing up your systems?

regards,
-- 
Johnny Jørgensen <pgsql@halfahead.dk>



Performance problem, what to do?

From: Archibald Zimonyi
Hi there,

My name is Archibald and I am fairly new to PostgreSQL, but I've worked
with databases on and off for a few years, and I consider myself a fairly
good database designer who knows his SQL.

I am currently building a database to represent a card game, where you
search through the cards to find the ones you want.

Each card has certain attributes (cost, name, requirements and such). What
I have done is create one table, cards, which holds only the name, and then
"objectified" the database by using tables connected to the cards table.

Once all the data is in (and we're talking a small amount of data here -
the whole database is 1.4M), I add my two views and run them. This is
where my problems start. At first I did nothing to improve performance with
VACUUM or CLUSTER, but I frequently used EXPLAIN to see how much disk access
I had (since I felt that my query ran too slowly). The numbers went down
drastically and now I have an EXPLAIN output that I think looks very, very
good: very few rows affected, and the cost is very low (it started at around
6000 and up and is now down at 200, so it is a big cut).

The EXPLAIN on my view can be found in the attachment.

How come my query takes almost a minute to execute?

My problems actually started when I ran VACUUM ANALYZE. After that the
query took forever. A friend suggested I recreate the database and run
VACUUM ANALYZE right after the data is loaded, something that is also
suggested in various docs and books, but that didn't help. So I tried to
CLUSTER some of my tables; that worked better, but then suddenly everything
was slow again.
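
For reference, the kind of commands I mean look roughly like this (the
index name here is just a placeholder, and the real query is the one in
the attachment):

VACUUM ANALYZE cards;
CLUSTER cards_name_idx ON cards;
EXPLAIN SELECT * FROM cards WHERE name = 'Some Card';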

We're running this database on a 700 MHz PC running Linux with 396M of RAM,
which is fairly decent for a "hobby" database. The hard drive is a fast one
as well.

The database design follows the normalization rules for database design. I
have made some odd choices where I created a table that could have been
dropped and folded into another table as a column, but we're talking about
such small amounts of data that this shouldn't be a problem.

If anyone has any idea at all feel free to respond, I am grateful for any
help I can get on this.

Thanks in advance,

Archie

Re: Performance problem, what to do?

From: Markus Bertheau
On Tue, 2001-12-18 at 13:34, Archibald Zimonyi wrote:
> How come my query takes almost a minute to execute?

Could you post your table structure and the query for the explain?

Markus Bertheau


Re: Performance problem, what to do?

From: Archibald Zimonyi
On demand, I'll post the database structure as well as the SQL query that
creates the slow response. I'll add them as attachments (do they come
through at all?).

Thanks in advance,

Archie


Re: Constructing a backup routine

From: GB Clark II
On Tuesday 18 December 2001 05:50, Johnny Jørgensen wrote:
> I've been over the docs, Bruce's book, and a good part of the web, where
> there is plenty of documentation on COPY, \copy and pg_dump, but I have yet
> to make a working backup (and the system goes into production in 3
> weeks..). The problem is, I get my data moved out & in fine, but the
> sequences get reset. This is, of course, something of a problem, since
> it breaks new inserts into unique columns fed by sequences after a restore.
> Also, I've been putting a lot of work into some plpgsql functions that
> I have to load manually - there's only a handful, so it isn't too big a
> problem, but all in all, getting a near-workable backup takes me an hour
> or so by hand..
>
> How do you big folks handle backing up your systems?
>
> regards,
Hi,

I ran into the same problem and the only way I could get pg_dump to work was
to dump the entire database instead of just the tables.

GB
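
Something along these lines, with the database name as a placeholder; a
dump of the whole database should also carry the SELECT setval() calls for
the sequences and the function definitions, not just the table data:

$ pg_dump mydb > mydb.dump
$ createdb mydb_restore
$ psql mydb_restore < mydb.dump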

-- 
GB Clark II             | Roaming FreeBSD Admin
gclarkii@VSServices.COM | General Geek
CTHULU for President - Why choose the lesser of two evils?


Re: Performance problem, what to do?

From: Tom Lane
Archibald Zimonyi <archie@netg.se> writes:
> On demand, I'll post the database structure as well as the SQL query that
> creates the slow response. I'll add them as attachments (do they come
> through at all?).

You're forcing the join order by using explicit JOIN syntax.
Rearranging the join order might yield a better plan.  See
http://www.postgresql.org/users-lounge/docs/7.1/postgres/explicit-joins.html
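
For illustration only (these table and column names are invented, not
taken from your schema), a query written as

SELECT c.name
FROM cards c
     JOIN costs co ON co.card_id = c.id
     JOIN reqs r ON r.card_id = c.id
WHERE co.cost < 3;

tells the planner to join cards to costs first and bring in reqs last,
whereas

SELECT c.name
FROM cards c, costs co, reqs r
WHERE co.card_id = c.id
  AND r.card_id = c.id
  AND co.cost < 3;

leaves it free to pick whichever join order it estimates to be cheapest.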
        regards, tom lane


Re: Performance problem, what to do?

From: Archibald Zimonyi


> Archibald Zimonyi <archie@netg.se> writes:
> > On demand, I'll post the database structure as well as the SQL query that
> > creates the slow response. I'll add them as attachments (do they come
> > through at all?).
> 
> You're forcing the join order by using explicit JOIN syntax.
> Rearranging the join order might yield a better plan.  See
> http://www.postgresql.org/users-lounge/docs/7.1/postgres/explicit-joins.html
> 
I know I am, but if you look at the plan (which I will again add as an
attachment) you see that the plan looks pretty good, in fact way better
than the original plan did.

I don't know how much time each "unit" takes, but a normal select (SELECT
* FROM <table>), which might yield a plan of 0..22.2 "units", takes no
time at all. This plan tops out at around 200, which I don't think is costly
considering what it does.

Archie

Re: Constructing a backup routine

From: Kovacs Zoltan
In our production system we usually do this:

$ pg_dump -xacnDO database_name | grep "^SELECT setval "

This gives SQL statements which can be reloaded into an arbitrary
database. It's a bit slow; IMHO a clever select on the system tables
should give you the same result much faster.
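
A rough, untested sketch of that idea (the database name is a
placeholder): sequences are listed in pg_class with relkind 'S', so a
small shell loop can read each one and emit the setval statements
directly:

for seq in `psql -t -A -c "SELECT relname FROM pg_class WHERE relkind = 'S'" mydb`
do
    val=`psql -t -A -c "SELECT last_value FROM \"$seq\"" mydb`
    echo "SELECT setval('$seq', $val);"
done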

HTH, Zoltan

--
Kovács, Zoltán          kovacsz@pc10.radnoti-szeged.sulinet.hu
http://www.math.u-szeged.hu/~kovzol
ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz