Re: Maintenance question / DB size anomaly... - Mailing list pgsql-performance

From Kurt Overberg
Subject Re: Maintenance question / DB size anomaly...
Date Jun 19, 2007
Msg-id B79F5693-AD32-4254-8768-059E911D2703@hotdogrecords.com
In response to Re: Maintenance question / DB size anomaly...  (Richard Huxton <dev@archonet.com>)
List pgsql-performance
Richard,

Thanks for the feedback!  I found oid2name and have been mucking
about with it, but haven't really found anything that stands out
yet.  Most of the tables/indexes I'm comparing across machines seem
to take up a similar amount of disk space.  I think I'm going to have
to get fancy and write some shell scripts.  Regarding the slony
configuration scripts, you're assuming that I have such scripts.  Our
slony install was originally installed by a contractor, and modified
since then so "getting my act together with respect to slony" is
kinda beyond the scope of what I'm trying to accomplish with this
maintenance.  I really just want to figure out what's going on with
db1, and want to do so in a way that won't ruin slony since right now
it runs pretty well, and I doubt I'd be able to fix it if it
seriously broke.

Upon a cursory pass with oid2name, it seems that my sl_log_1_idx1
index is out of hand:


-bash-3.00$ oid2name -d mydb -f 955960160
 From database "mydb":
    Filenode     Table Name
--------------------------
   955960160  sl_log_1_idx1

-bash-3.00$ ls -al 955960160*
-rw-------  1 postgres postgres 1073741824 Jun 19 11:08 955960160
-rw-------  1 postgres postgres 1073741824 Jun 13  2006 955960160.1
-rw-------  1 postgres postgres  909844480 Jun 19 10:47 955960160.10
-rw-------  1 postgres postgres 1073741824 Jul 31  2006 955960160.2
-rw-------  1 postgres postgres 1073741824 Sep 12  2006 955960160.3
-rw-------  1 postgres postgres 1073741824 Oct 19  2006 955960160.4
-rw-------  1 postgres postgres 1073741824 Nov 27  2006 955960160.5
-rw-------  1 postgres postgres 1073741824 Feb  3 12:57 955960160.6
-rw-------  1 postgres postgres 1073741824 Mar  2 11:57 955960160.7
-rw-------  1 postgres postgres 1073741824 Mar 29 09:46 955960160.8
-rw-------  1 postgres postgres 1073741824 Mar 29 09:46 955960160.9


Those ten full 1GB segments plus the partial one come to about
10.8GB for a single index.  I know that slony runs its vacuuming
in the background, but it doesn't seem to be cleaning this stuff
up.  Interestingly, in the pgfouine output for my VACUUMs, that
index doesn't take long at all to vacuum analyze (compared to my
other, much larger tables).  Am I making the OID->filename
translation properly?
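
As a sanity check, I believe this query should confirm the mapping
(relfilenode is the on-disk filename, and it only matches the OID
until the relation gets rewritten by a REINDEX, CLUSTER, etc.):

SELECT oid, relfilenode, relname
FROM pg_class
WHERE relname = 'sl_log_1_idx1';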

Running this:
SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;
...gives me...

sl_log_1_idx1        |  1421785
xrefmembergroup      |  1023460
answerselectinstance |   565343

...does this jibe with what I'm seeing above?  I guess I'll run a
full vacuum on the slony tables too?  I figured something else
would jump out bigger than this.  FWIW, the same index on db2 and
db3 is very small, like zero.  I guess this is looking like
overhead from slony?  Should I take this problem over to the slony
group?
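
Doing the math, the numbers do seem to line up exactly: 1421785
pages * 8192 bytes/page (the default block size) = 11,647,262,720
bytes, which is precisely the ten 1GB segments plus the
909844480-byte partial segment above.  Something like this makes
the comparison easier (relpages is only updated by VACUUM/ANALYZE,
so treat it as an estimate):

SELECT relname, relpages,
       relpages::bigint * 8192 AS approx_bytes
FROM pg_class
ORDER BY relpages DESC
LIMIT 10;

From what I've read, if this really is index bloat, a REINDEX of
sl_log_1_idx1 is probably what's needed; my understanding is that
VACUUM FULL alone won't shrink a bloated index.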

Thanks again, gang-

/kurt



On Jun 19, 2007, at 10:13 AM, Richard Huxton wrote:

> Kurt Overberg wrote:
>> In my investigation of this anomaly, I noticed that the data/ dir
>> on db1 (the master) is around 60 Gigs.  The data directory on the
>> slaves is around 25Gb.  After about 3 months of head scratching,
>> someone on the irc channel suggested that it may be due to index
>> bloat.  Although, doing some research, it would seem that those
>> problems were resolved in 7.4(ish), and it wouldn't account for
>> one database being 2.5x bigger.  Another unknown is Slony overhead
>> (both in size and vacuum times).
>
> Check the oid2name/dbsize utilities in the contrib RPM for 8.0.x
> http://www.postgresql.org/docs/8.0/static/diskusage.html
> Shouldn't be too hard to find out where the disk space is going.
>
> Oh and 8.0.13 is the latest release of 8.0 series, so you'll want
> to use your maintenance window to upgrade too. Lots of good
> bugfixes there.
>
>> The ONLY thing I can think of is that I DROPped a large number of
>> tables from db1 a few months ago (they weren't getting
>> replicated).  This is on the order of 1700+ fairly largeish
>> (50,000+ row) tables.  I do not remember doing a vacuum full after
>> dropping them, so perhaps that's my problem.  I'm planning on
>> doing some maintenance this weekend, during which I will take the
>> whole system down, then on db1, run a VACUUM FULL ANALYZE on the
>> whole database, then a REINDEX on my very large tables.  I may
>> drop and recreate the indexes on my big tables, as I hear that may
>> be faster than a REINDEX.  I will probably run a VACUUM FULL
>> ANALYZE on the slaves as well.
>
> You'll probably find CLUSTER to be quicker than VACUUM FULL,
> although you need enough disk-space free for temporary copies of
> the table/indexes concerned.
>
> Dropping and recreating indexes should prove much faster than
> VACUUMing with them in place. Shouldn't matter for CLUSTER afaict.
>
>> Thoughts?  Suggestions?  Anyone think this will actually help my
>> problem of size and vacuum times?  Do I need to take Slony down
>> while I do this?  Will the VACUUM FULL table locking interfere
>> with Slony?
>
> Well, I'd take the opportunity to uninstall/reinstall slony just to
> check my scripts/procedures are working.
>
> --
>   Richard Huxton
>   Archonet Ltd

