Thread: Crash in postgres/linux on very large database

Crash in postgres/linux on very large database

From: Bernhard Ankenbrand
Date:
Hi,

we have a table with about 60,000,000 entries and about 4GB storage size.
When creating an index on this table, the whole Linux box freezes and the
reiser-fs file system is corrupted beyond recovery.

Does anybody have experience with this amount of data in postgres 7.4.2?
Is there a limit anywhere?

Thanks

Bernhard Ankenbrand


Re: Crash in postgres/linux on very large database

From: Tom Lane
Date:
Bernhard Ankenbrand <b.ankenbrand@media-one.de> writes:
> we have a table with about 60,000,000 entries and about 4GB storage size.
> When creating an index on this table, the whole Linux box freezes and the
> reiser-fs file system is corrupted beyond recovery.

> Does anybody have experience with this amount of data in postgres 7.4.2?
> Is there a limit anywhere?

Many people run Postgres with databases far larger than that.  In any
case a Postgres bug could not cause a system-level freeze or filesystem
corruption, since it's not a privileged process.

I'd guess that you are dealing with a hardware problem: flaky disk
and/or bad RAM are the usual suspects.  See memtest86 and badblocks
as the most readily available hardware test aids.

            regards, tom lane
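Tom's suggested checks can be rehearsed safely before pointing them at real hardware. A minimal sketch, assuming a stock Linux box with e2fsprogs installed; the /dev/sda name and the scratch-image path are placeholders, not from the thread:

```shell
#!/bin/sh
# badblocks in read-only mode (the default, with no -w) is safe even on a
# disk that is currently mounted; -s shows progress, -v reports counts.
# A real run against the database disk would look like:
#   badblocks -sv /dev/sda        # /dev/sda is a placeholder
# To rehearse on a harmless file-backed image instead:
IMG=/tmp/scratch.img
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null

if command -v badblocks >/dev/null 2>&1; then
    badblocks -sv "$IMG"
else
    echo "badblocks not found (it ships with e2fsprogs)"
fi
```

memtest86, by contrast, boots from its own floppy/CD image rather than running under Linux, so it has no in-OS equivalent here.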

Re: Crash in postgres/linux on very large database

From: Richard Huxton
Date:
On Tuesday 06 April 2004 12:22, Bernhard Ankenbrand wrote:
> Hi,
>
> we have a table with about 60,000,000 entries and about 4GB storage size.
> When creating an index on this table, the whole Linux box freezes and the
> reiser-fs file system is corrupted beyond recovery.
>
> Does anybody have experience with this amount of data in postgres 7.4.2?
> Is there a limit anywhere?

Plenty of people run with more data than that. It should be impossible for an
application to corrupt a file-system in any case. The two things to look at
would be your hardware or perhaps reiser-fs itself. I have heard about
problems with SMP machines locking up (some unusual glitch in some versions
of the Linux kernel, IIRC).

You might be able to see what is going wrong with careful use of vmstat and
strace -p <pid>. Start to create your index, find the pid of the backend
doing so and strace it. See if anything interesting comes out of it.

HTH, and stick around - someone else might have better advice.
--
  Richard Huxton
  Archonet Ltd
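Richard's vmstat/strace recipe, spelled out as a sketch; the process name ("postmaster", as in the 7.4 era) and the grep pattern are assumptions to adapt to your setup:

```shell
#!/bin/sh
# Find the pid of a running Postgres backend, then attach strace to it.
# The grep pattern is an assumption -- check your own ps output; on 7.4
# the server processes are named "postmaster".
PID=$(ps ax 2>/dev/null | grep '[p]ostmaster' | awk 'NR==1 {print $1}')

if [ -n "$PID" ] && command -v strace >/dev/null 2>&1; then
    # -p attaches to the running process; Ctrl-C detaches without killing it.
    strace -p "$PID"
else
    echo "no postmaster process found (or strace not installed)"
fi
```

strace -p needs the same uid as the backend or root; during an index build, expect a stream of read()/write()/lseek() calls, and watch for the call it hangs on.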

Re: Crash in postgres/linux on very large database

From: "scott.marlowe"
Date:
On Tue, 6 Apr 2004, Bernhard Ankenbrand wrote:

> Hi,
>
> we have a table with about 60,000,000 entries and about 4GB storage size.
> When creating an index on this table, the whole Linux box freezes and the
> reiser-fs file system is corrupted beyond recovery.
>
> Does anybody have experience with this amount of data in postgres 7.4.2?
> Is there a limit anywhere?

If your file system is getting corrupted, then you have likely found a bug
in reiserfs or the Linux kernel.  While some pgsql bug might be able to
corrupt the contents of a file belonging to it, it doesn't have the power
to corrupt the file system itself.

Or is the problem a corrupted database, not a corrupted file system?


Re: Crash in postgres/linux on very large database

From: "Gregory S. Williamson"
Date:
Different file system, but we've loaded 39,000,000-row tables (about 30 gigs of spatial data) without any issues ... CPU
load was 1.something for a while, but no problems with loading, indexing or doing the stats command. I might suspect the
underlying OS ...

Greg Williamson
DBA
GlobeXplorer LLC

-----Original Message-----
From: Bernhard Ankenbrand [mailto:b.ankenbrand@media-one.de]
Sent: Tuesday, April 06, 2004 4:22 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Crash in postgres/linux on very large database


Hi,

we have a table with about 60,000,000 entries and about 4GB storage size.
When creating an index on this table, the whole Linux box freezes and the
reiser-fs file system is corrupted beyond recovery.

Does anybody have experience with this amount of data in postgres 7.4.2?
Is there a limit anywhere?

Thanks

Bernhard Ankenbrand



Re: Crash in postgres/linux on very large database

From
Date:
On 4/6/04 5:36 PM, "Gregory S. Williamson" <gsw@globexplorer.com> wrote:

> we have a table with about 60,000,000 entries and about 4GB storage size.
> When creating an index on this table, the whole Linux box freezes and the
> reiser-fs file system is corrupted beyond recovery.
>
> Does anybody have experience with this amount of data in postgres 7.4.2?
> Is there a limit anywhere?

This may or may not have anything to do with your problem, but I have seen
issues with the 2.4.x kernel where kscand goes ape under heavy I/O.  In the
cases I've seen, the system appears to freeze for a period of time (15-30
seconds) while kscand goes nuts, then the system takes off again for a
while, freezes, etc...  Upgrading to the 2.6 kernel solved our problem.

I assume you gave it plenty of time before hitting the button?

Wes
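Wes's 2.4-vs-2.6 point is easy to check on the affected box. A trivial sketch, assuming standard procps tools:

```shell
#!/bin/sh
# Print the running kernel release; a "2.4.x" value here would make the
# kscand behavior Wes describes a plausible suspect.
uname -r

# On a 2.4 kernel, kscand shows up as a kernel thread in the process list;
# during a freeze window it would dominate CPU in top.
ps ax 2>/dev/null | grep '[k]scand' || echo "no kscand thread (not a 2.4 kernel)"
```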