Thread: Postgres problems with 6.4 / 6.5 (fwd)

Postgres problems with 6.4 / 6.5 (fwd)

From: "Oliver Elphick"
------- Forwarded Message

Date:    Tue, 19 Oct 1999 07:45:23 +1300
From:    Andrew McMillan <Andrew@cat-it.co.nz>
To:      Oliver Elphick <olly@lfix.co.uk>
Subject: Postgres problems with 6.4 / 6.5

Hi,

I have a couple of problems with Postgres 6.5 and I'm not sure where to
report them (who should I tell?).

Do you know if there is a place to report bugs for Postgres?  I am
using the Debian packages, so I can file them there if necessary.
Anyway, here's a brief description of the bugs I'm experiencing:

1)    Doing a pg_dump and psql -f on a database, I get lots of errors saying
"query buffer max length of 16384 exceeded" and then (eventually) I get
a segmentation fault.  The load lines don't seem to be that large (the
full insert statement, error message included, is maybe 220 bytes).  It
seems that if I split the dumped file into 40-line chunks and do a vacuum
after each one, I can get the whole thing to load without the errors.

I have only tested this on Version 6.5.1.
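
For reference, the dump and reload are just the standard commands, roughly
(database names are placeholders):

    $ pg_dump mydb > mydb.dump
    $ createdb mydb_copy
    $ psql -f mydb.dump mydb_copy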


2)    I have a table with around 85 fields in it, and a cron job running
every 20 minutes which does a "SELECT INTO ..." from that table, does some
processing and then DROPs the new table.  After a few days I found
that my database was around 13MB, which seemed odd.  A couple of days
later it was around 17MB, even though only a couple of records had been
added.
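
The job boils down to something like this (table names are made up):

    SELECT * INTO work_copy FROM main_table;
    -- ... processing against work_copy ...
    DROP TABLE work_copy;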

Further investigation reveals that if I do a VACUUM immediately after
the DROP TABLE, things are OK, but otherwise the pg_attribute* files
in the database directory just get bigger and bigger.  This is the case
even when I do a VACUUM after every second 'DROP TABLE': for the space
to be recovered, I have to VACUUM immediately after a DROP TABLE, which
doesn't seem right somehow.
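
I've been watching this directly on disk, with something like (the path to
the data directory will vary from system to system):

    $ ls -l $PGDATA/base/mydb/pg_attribute*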

The same behaviour seems to happen on both versions 6.5.1 and 6.4.3.



If you can pass these bugs on to an appropriate person I would
appreciate it.  In our company we are just starting to use Postgres and
I would like to see it become an important part of our repertoire.

Many thanks,
                    Andrew McMillan.

_____________________________________________________________________
            Andrew McMillan, e-mail: Andrew@cat-it.co.nz
Catalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington
Me: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267


------- End of Forwarded Message


--
      Vote against SPAM: http://www.politik-digital.de/spam/
                 ========================================
Oliver Elphick                                Oliver.Elphick@lfix.co.uk
Isle of Wight                              http://www.lfix.co.uk/oliver
               PGP key from public servers; key ID 32B8FAA1
                 ========================================
     "Commit thy way unto the LORD; trust also in him and
      he shall bring it to pass."          Psalms 37:5



Re: [BUGS] Postgres problems with 6.4 / 6.5 (fwd)

From: Tom Lane
Hi Andrew,

> 1)    Doing a pg_dump and psql -f on a database, I get lots of errors saying
> "query buffer max length of 16384 exceeded" and then (eventually) I get
> a segmentation fault.  The load lines don't seem to be that large (the
> full insert statement, error message included, is maybe 220 bytes).  It
> seems that if I split the dumped file into 40-line chunks and do a vacuum
> after each one, I can get the whole thing to load without the errors.

I think there must be some specific peculiarity in your data that's
causing this; certainly lots of people rely on pg_dump for backup
without problems.  Can you provide a sample script that triggers the
problem?

> Further investigation reveals that if I do a VACUUM immediately after
> the DROP TABLE, things are OK, but otherwise the pg_attribute* files
> in the database directory just get bigger and bigger.  This is the case
> even when I do a VACUUM after every second 'DROP TABLE': for the space
> to be recovered, I have to VACUUM immediately after a DROP TABLE, which
> doesn't seem right somehow.

That does seem odd.  If you just create and drop tables like mad then
I'd expect pg_class, pg_attribute, etc to grow --- the rows in them
that describe your dropped tables don't get recycled until you vacuum.
But vacuum should reclaim the space.
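
The cycle looks something like this (a tiny two-column table just for
illustration):

    CREATE TABLE scratch (a int, b text);  -- adds pg_attribute rows for its columns
    DROP TABLE scratch;                    -- those rows become dead but stay in the file
    VACUUM;                                -- recycles the dead rows; pg_attribute should shrink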

Actually, wait a minute.  Is it pg_attribute itself that fails to shrink
after vacuum, or is it the indexes on pg_attribute?  IIRC we have a known
problem with vacuum failing to reclaim space in indexes.  There is a
patch available that improves the behavior for 6.5.*, and I believe that
improving it further is on the TODO list for 7.0.
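
One way to check is to vacuum and then compare the table and its indexes
in pg_class, along these lines:

    SELECT relname, relpages FROM pg_class WHERE relname ~ '^pg_attribute';
    -- relpages is refreshed by VACUUM; if pg_attribute itself shrinks while
    -- the *_index entries keep growing, it's the index problem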

I think you can find that patch in the patch mailing list archives at
www.postgresql.org, or it may already be in 6.5.2 (or failing that,
in the upcoming 6.5.3).  [Anyone know for sure?]

For user tables it's possible to work around the problem by dropping and
rebuilding indexes every so often, but DO NOT try that on pg_attribute.
As a stopgap solution you might consider not dropping and recreating
your temp table; leave it around and just delete all the rows in it
between uses.
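
In other words, something like (table names are just placeholders):

    -- instead of SELECT ... INTO work_copy / DROP TABLE work_copy each run:
    DELETE FROM work_copy;
    INSERT INTO work_copy SELECT * FROM main_table;
    -- work_copy itself will still want an occasional VACUUM, but pg_attribute
    -- stops churning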

            regards, tom lane