Re: [GENERAL] Linux Largefile Support In Postgresql RPMS - Mailing list pgsql-hackers

From Greg Copeland
Subject Re: [GENERAL] Linux Largefile Support In Postgresql RPMS
Msg-id 1028989269.32105.314.camel@mouse.copelandconsulting.net
In response to Re: [GENERAL] Linux Largefile Support In Postgresql RPMS  (Mark Kirkwood <markir@slingshot.co.nz>)
Responses Re: [GENERAL] Linux Largefile Support In Postgresql RPMS  (Andrew Sullivan <andrew@libertyrms.info>)
List pgsql-hackers
On Sat, 2002-08-10 at 00:25, Mark Kirkwood wrote:
> Ralph Graulich wrote:
>
> >Hi,
> >
> >just my two cents worth: I like having the files sized in a way I can
> >handle them easily with any UNIX tool on nearly any system. No matter
> >whether I want to cp, tar, dump, dd, cat or gzip the file: just keep it
> >below any size limits, and it stays easy to handle.
> >
> Good point... however I was thinking that being able to dump the entire
> database without resorting to "gzips and splits" was handy...
>
> >
> >For example, Oracle suggests it somewhere in their documentation, to keep
> >datafiles at a reasonable size, e.g. 1 GB. Seems right to me, never had
> >any problems with it.
> >
> Yep, fixed or controlled sizes for data files are great... I was thinking
> about databases rather than data files (although I may not have made that
> clear in my mail)
>

I'm actually amazed that postgres isn't already built with large file
support, especially for tools like pg_dump.  I do recognize the need to
keep files manageable in size, but the file sizes that suit my needs may
differ from the ones that suit yours.
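To illustrate what enabling it involves (my own sketch, not anything
from the tree): on 32-bit Linux with glibc, defining
_FILE_OFFSET_BITS=64 before the system headers widens off_t to 64 bits,
so stdio can address files past the 2GB mark.

    /*
     * Illustration only: compile with
     *   gcc -D_FILE_OFFSET_BITS=64 lfs_demo.c -o lfs_demo
     * or put the defines first, as below.
     */
    #define _LARGEFILE_SOURCE
    #define _FILE_OFFSET_BITS 64

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        /* With largefile support this prints 8, even on 32-bit Linux. */
        printf("sizeof(off_t) = %zu\n", sizeof(off_t));

        /* fseeko/ftello take off_t rather than long, so an offset
         * beyond 2GB no longer overflows. */
        FILE *fp = fopen("lfs_probe.tmp", "w");
        if (fp == NULL)
            return 1;
        fseeko(fp, (off_t) 3 * 1024 * 1024 * 1024, SEEK_SET);
        printf("position = %lld\n", (long long) ftello(fp));
        fclose(fp);
        remove("lfs_probe.tmp");
        return 0;
    }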

Seems like it would be a good thing to enable, leaving file sizing as a
matter for the DBA to handle.  After all, even if I'm trying to keep my
dumps at around 1GB, I'd probably be fine with a 1.1GB dump too.  To me,
that just seems more flexible.
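To make the "DBA decides" idea concrete, here's a rough sketch (mine,
purely hypothetical, with a made-up 1GB limit) of rolling output over
to a fresh file whenever it reaches an operator-chosen size, much like
piping a dump through split:

    #include <stdio.h>

    #define CHUNK_LIMIT (1024L * 1024 * 1024)   /* 1GB: the DBA's choice */

    int main(void)
    {
        char name[64];
        char buf[8192];
        long written = 0;
        int chunk = 0;
        size_t n;
        FILE *out;

        snprintf(name, sizeof(name), "dump.%03d", chunk);
        out = fopen(name, "wb");
        if (out == NULL)
            return 1;

        /* Copy stdin to dump.000, dump.001, ... rolling over to a new
         * file before any write would push it past the limit. */
        while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
        {
            if (written + (long) n > CHUNK_LIMIT)
            {
                fclose(out);
                chunk++;
                snprintf(name, sizeof(name), "dump.%03d", chunk);
                out = fopen(name, "wb");
                if (out == NULL)
                    return 1;
                written = 0;
            }
            fwrite(buf, 1, n, out);
            written += (long) n;
        }
        fclose(out);
        return 0;
    }

Something like "pg_dump mydb | ./chunker" would then produce dump.000,
dump.001, and so on, each below the limit, without the backend itself
ever having to care about file size.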

Greg

