On Thu, Mar 10, 2011 at 07:01:10AM -0600, Scott Whitney wrote:
> Oops... I accidentally took this off list, as Kevin was nice enough to point out.
>
>
> >> What am I looking for?
>
> >Outliers.
>
> > Yeah. It's just those 2. I'd assume that the db I created
> > yesterday would be an outlier, but template0 has been there all along
> > (of course) and is still listed as 648, a significantly smaller number.
>
>
> >> The output shows me 345 rows, most of which are 132xxxxx numbers.
> >> Two of them (template0 and a database created yesterday) say 648.
>
> >The template0 database is what's keeping the clog files from being
> >cleaned up, but I guess the big question is why you care. They will
> >go away eventually, and shouldn't affect performance. Are they
> >taking enough space to merit extraordinary effort to clean them up?
> > -Kevin
>
>
> My concern is that when we had a failure a few years ago, one of the clog files went bad. I had to manually
> recreate some customer data after bringing up the previous backup. So I'd rather not have them there, because,
> well, if there are 200 of them in the directory, there's a higher chance that one goes bad in a crash than if
> I have 15.
>
> Would adding -f (full) clean these up? I seem to recall it did in earlier versions. I've added -F to it already,
> and that didn't seem to help.
>
If you have hardware problems like that, you have much bigger problems: silent corruption could be
occurring in any of the other database files, not just the clog. Good luck.
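
For what it's worth, the outlier check Kevin mentioned is just a look at pg_database. A rough, untested
sketch (assumes a reasonably recent PostgreSQL where age() works on xid columns):

    -- List databases by how far their frozen xid lags behind; the laggards
    -- (often template0) are what keep old pg_clog segments from being removed.
    SELECT datname, datfrozenxid, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY age(datfrozenxid) DESC;

Once autovacuum (or a manual VACUUM FREEZE on the databases you can connect to) advances those values,
the old clog files become removable on their own.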
Cheers,
Ken