Re: Strange Problem - Mailing list pgsql-novice

From Thom Brown
Subject Re: Strange Problem
Date
Msg-id CAA-aLv7vVYk6E6Ddy3gahUPSsqkjqB-q9cD1qq-Tf42-XeAHJg@mail.gmail.com
In response to Strange Problem  (Tim Sailer <tps@unslept.org>)
Responses Re: Strange Problem  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-novice
On 11 February 2016 at 17:30, Tim Sailer <tps@unslept.org> wrote:
> I have an application that parses web server log files and inserts a few
> million rows into a table each month, one table per month. Lately, I've been
> getting reports that there is no recent data, so I went looking. I'll
> concentrate on the first table that seems to have an issue, because there are
> a few, all seemingly the same. \dt+ shows a table size of 6463 MB. OK,
> there's data there. "select count(*) from table" comes back with 0; "select
> ctid from table" comes back with 0 rows, but only after a while. pg_dump and
> pg_dumpall completely skip the table.
>
> I'm not a complete novice, but I'm stumped. There are no errors in the OS
> system logs, and Postgres starts and stops without errors. This is running
> on Ubuntu 14.04.3 with PostgreSQL 9.3.

There are two possibilities I can think of:

1) The application that inserts the rows still has the inserting
transaction open and hasn't yet committed, so the rows aren't visible
to any other session.

2) A transaction is open, and another transaction has since deleted
all the rows and committed; the open transaction still has visibility
of the deleted rows, so they can't be cleaned up by VACUUM.

I'd suggest checking pg_stat_activity for connections with an old
xact_start and a state of 'idle in transaction', and also checking
pg_locks to see whether the affected table is listed as locked by
another process.
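Something along these lines should surface both, run as a superuser or the table owner; the table name is a placeholder to substitute:

```sql
-- Sessions with a long-open transaction (pg_stat_activity has the
-- "state" and "xact_start" columns in 9.2 and later):
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state LIKE 'idle in transaction%'
ORDER BY xact_start;

-- Locks currently held or awaited on the affected table:
SELECT l.pid, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'your_table';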

Thom

