Thread: limited disk space

From: "David Parker"
Date:
We need to run a PostgreSQL server (7.4.5, Solaris 9/Intel) in an environment with a hard limit on available disk space. We know basically the set of data we will be working with and can size the disk accordingly, but there will be a fair amount of update churn in the data.
 
We are running autovacuum, but we still seem to run out of disk space in our long-running tests. The testers claim that disk usage does not go down even after a VACUUM FULL, but I have not verified that independently.
 
Given that our "real" dataset is fairly fixed and the growth in database size is due to updates, I'm wondering whether there is a way to allocate enough disk space at the outset to give the database a large enough "working set" of free pages, so that once it reaches a certain threshold it no longer has to grow the data files.
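(In 7.4 the closest knob to such a "working set" is the free space map in postgresql.conf: if max_fsm_pages covers all the pages freed by updates between vacuums, plain VACUUM lets new rows reuse existing space instead of extending the files. The numbers below are only illustrative; size them to your own churn rate.)

```
# postgresql.conf -- illustrative FSM sizing, not tuned values
max_fsm_pages = 200000      # track free space on up to 200k pages (~1.6 GB at 8 kB/page)
max_fsm_relations = 1000    # number of tables and indexes tracked in the FSM
```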
 
I'm also wondering how the WAL settings affect disk usage. Placing the WAL files on a separate device is not an option in this case, so I imagine I want to limit their size, too.
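(If I read the 7.4 docs right, pg_xlog should normally hold at most about 2 * checkpoint_segments + 1 segment files of 16 MB each, so something like this bounds WAL space; values are a sketch, not a recommendation:)

```
# postgresql.conf -- WAL sizing sketch; each segment file is 16 MB
checkpoint_segments = 3     # caps pg_xlog at roughly (2*3 + 1) * 16 MB = 112 MB
checkpoint_timeout = 300    # also force a checkpoint at least every 5 minutes
```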
 
Is anybody running Postgres in a similarly constrained environment, or are there any general tips on controlling disk usage that somebody could point me to?
 
Thanks.

- DAP
----------------------------------------------------------------------------------
David Parker    Tazz Networks    (401) 709-5130

Re: limited disk space

From: Peter Eisentraut
Date:
David Parker wrote:
> Is anybody running postgres in a similar constrained environment, or
> are there any general tips on controlling disk usage that somebody
> could point me to?

PostgreSQL is not particularly tuned to such scenarios.  The only chance
you have to control disk usage is to vacuum and checkpoint a lot.
There is no general "use only X bytes" control, nor a combination of
controls that amount to such.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/