Re: file size issue? - Mailing list pgsql-general

From Johnson, Shaunn
Subject Re: file size issue?
Date
Msg-id 73309C2FDD95D11192E60008C7B1D5BB0452E14C@snt452.corp.bcbsm.com
In response to file size issue?  ("Johnson, Shaunn" <SJohnson6@bcbsm.com>)
List pgsql-general
--I think you've answered at least 1/2 of my question,
Andrew.
 
--I'd like to figure out whether Postgres reaches a point where
it will no longer index or vacuum a table because of its size (your answer
tells me 'No' -  it will continue until it is done, splitting each
table into 1 Gig segments).
 
--And if THAT is true, then why am I getting failures when
I'm vacuuming or indexing a table just after reaching 2 Gig?
 
--And if it's an OS (or any other) problem, how can I factor
out Postgres?
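One way to factor Postgres out (a sketch, not from the thread: the path and sizes here are just examples) is to ask the OS alone to create a file whose size passes 2 Gig. The `seek` makes a sparse file, so almost no data is actually written; if `dd` fails at that offset, the limit is in the OS or filesystem (missing large-file support), not in Postgres.

```shell
# Create a sparse file one byte past the 2 GiB boundary, with no
# Postgres involvement at all. /tmp/bigtest is a throwaway path.
dd if=/dev/zero of=/tmp/bigtest bs=1 count=1 seek=2147483648 2>/dev/null
ls -l /tmp/bigtest     # should report a size of 2147483649 bytes
rm -f /tmp/bigtest
```

If this fails with a "File too large" error, the 2 Gig wall is the OS's, and recompiling Postgres won't help until large-file support is sorted out at that level.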
 
--Thanks!
 
-X
 
 
[snip]
 
 
> Has anyone seen if it is a problem with the OS or with the way
> Postgres handles large files (or, if I should compile it again
> with some new options).
 
 
What do you mean by "postgres handles large files"? The file-size
problem isn't related to the size of your table, because postgres
splits table files into 1 Gig segments.
If it were an output problem (say, a dump file growing past the OS
limit) you could see something like this, but you said you
were vacuuming.
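Because of that split, a table's total size only determines how many segment files exist on disk, never how big any single file gets. A quick sketch of the arithmetic (the 2.5 Gig table size is a made-up example):

```shell
# Each table file is capped at the segment size, 1 GiB by default; a
# larger table continues in files named <relfilenode>.1, .2, and so on.
TABLE_BYTES=2684354560     # hypothetical 2.5 GiB table
SEG_BYTES=1073741824       # 1 GiB per-file cap
SEGMENTS=$(( (TABLE_BYTES + SEG_BYTES - 1) / SEG_BYTES ))
echo "$SEGMENTS"           # 3 segment files, each no larger than 1 GiB
```

So no single file Postgres writes for the table should ever approach 2 Gig, which is why a failure at that size points away from the table storage itself.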
 
 
A

[snip]
