vacuum analyze: Tuple is too big (corrupts DB?)

From: Dirk Lutzebaeck
Hello,

when I VACUUM ANALYZE my db (6.5.3 on Linux), I cannot access
some data afterwards, because the vacuum terminates with

ERROR:  Tuple is too big: size 8596

I did a pg_dump -o and read it back in again; still the same error.

E.g. accessing data after the vacuum, the query

SELECT envpart_map.*,
       envpart_map.oid AS recioid,
       member.*,
       member.oid AS memoid
FROM   envpart_map, member
WHERE  envpart_map.member = member.oid
  AND  envpart_map.role > 0
  AND  envpart_map.envelope IN (1072343, 1073358, 1074472, 1073722, 1423582);

fails with

pqReadData() -- backend closed the channel unexpectedly.


Where do I go from here?

Dirk

[GENERAL/INTERFACES] Dynamically detecting need to vacuum?

From: Ed Loehr
I'm seeing a few intermittent show-stopper errors (some of which I've asked
about previously in these forums) which go away after vacuum is run.  The
pgsql documentation recommends vacuuming nightly, or after copying a large
class (not sure exactly what that means) or deleting a large number of
records.  I'm tempted to run vacuum almost any time I detect pgsql errors
(via perl DBI), in order to salvage/retry the query at hand on the fly, at
the cost of a much slower query from the user's perspective.
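
In rough Perl/DBI terms, the idea would be something like this minimal
sketch (the connection details, the query, and the table name are
illustrative only, not my real code):

#!/usr/bin/perl -w
# Sketch: on query failure, run VACUUM ANALYZE and retry once.
use strict;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', '', '',
                       { RaiseError => 0, PrintError => 1, AutoCommit => 1 })
    or die $DBI::errstr;

my $sql  = 'SELECT count(*) FROM some_table';
my $rows = $dbh->selectall_arrayref($sql);

unless (defined $rows) {
    # Query failed; vacuum and retry once.  (If the backend itself
    # died, the handle is dead too and we'd have to reconnect first.)
    warn 'query failed (' . $dbh->errstr . "), vacuuming and retrying\n";
    $dbh->do('VACUUM ANALYZE');
    $rows = $dbh->selectall_arrayref($sql);
}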

I'd like to learn how to programmatically calculate/detect when a vacuum
is needed, and would like to hear how folks are successfully doing this,
or how they think it would best be done.
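
One application-side possibility, since 6.5 keeps no server statistic for
this, would be a simple write counter; a sketch (the threshold is a pure
guess that would have to be tuned):

# Count writes in the application; vacuum past a threshold.
my $writes_since_vacuum = 0;
my $vacuum_threshold    = 10_000;

sub write_and_maybe_vacuum {
    my ($dbh, $sql) = @_;              # $sql is an INSERT/UPDATE/DELETE
    my $rv = $dbh->do($sql);
    if (++$writes_since_vacuum >= $vacuum_threshold) {
        $dbh->do('VACUUM ANALYZE');
        $writes_since_vacuum = 0;
    }
    return $rv;
}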

Any comments?

[Hackers:  it'd be nice to have pgsql optionally take care of this...]

I'm using Pg 6.5.2 on RH Linux 6.0 (kernel 2.2.5-15smp) via perl DBI/DBD::Pg...

Thanks in advance,
Ed Loehr


Re: [GENERAL] Is it safe to INSERT/DELETE during VACUUM?

From: Ed Loehr
Question:  is it safe to insert/update/delete during VACUUM?

Ed


Re: [GENERAL] [GENERAL/INTERFACES] Dynamically detecting need to vacuum?

From: Peter Eisentraut
On 1999-12-13, Ed Loehr mentioned:

> documentation recommends vacuuming nightly, or after copying a large class
> (not sure exactly what that means) or deleting a large number of records.

copying -- as invoked via the COPY command (in general this means
inserting, though)
large -- probably anything with triple-digit row counts
class -- table, relation; in general rows, records, data, stuff

> [Hackers:  it'd be nice to have pgsql optionally take care of this...]

There are some disabled code fragments that take into account a "vacuum
daemon". It's even on the TODO list, but vacuum itself needs to be sorted
out first.

Sorry, no free lunch here, probably.

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden



Re: [GENERAL] Is it safe to INSERT/DELETE during VACUUM?

From: Adriaan Joubert
Ed Loehr wrote:
>
> Question:  is it safe to insert/update/delete during VACUUM?
>

It is supposed to be (MVCC should take care of it), but in practice I've
had bad experiences with it.  The system should at least be under a very
low load.  You can get deadlock errors, as the tables being vacuumed are
locked for a long time and are therefore inaccessible to other queries
(that's unavoidable, but still annoying).  And a couple of times something
went wrong and I ended up with duplicate tuples in my tables -- really
duplicates, including the oids.  That required a table dump/sort/uniq/load
to fix :-(.
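
The repair was along these lines (sketched with an illustrative table name
"member"; COPY ... WITH OIDS keeps the oid column, so each surviving row
keeps its original oid):

#!/usr/bin/perl -w
# Sketch of the dump/sort/uniq/load fix; assumes the backend may
# write /tmp and that duplicate rows are byte-for-byte identical.
use strict;

system(q{psql -c "COPY member WITH OIDS TO '/tmp/member.dump'" mydb}) == 0
    or die "dump failed";
system(q{sort -u /tmp/member.dump > /tmp/member.uniq}) == 0
    or die "sort failed";
system(q{psql -c "DELETE FROM member" mydb}) == 0
    or die "delete failed";
system(q{psql -c "COPY member WITH OIDS FROM '/tmp/member.uniq'" mydb}) == 0
    or die "reload failed";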

Adriaan

Re: [GENERAL] Is it safe to INSERT/DELETE during VACUUM?

From: postgres@taifun.interface-business.de
Hi Ed,

> Question:  is it safe to insert/update/delete during VACUUM?

We ran a stress test on FreeBSD 2.2.7 with PG 6.4.3 and
6.5.2, with two processes in parallel: the first runs VACUUM VERBOSE
in a loop and the second does INSERTs/UPDATEs/DELETEs.
The database crashed after <5 minutes, with varying errors (not
only "duplicate tuples").

How to VACUUM an online database?

Gerald