Re: About replication minimal disk space usage - Mailing list pgsql-general

From: Tomas Vondra
Subject: Re: About replication minimal disk space usage
Date:
Msg-id: 912190b3-7aaf-4f64-9870-9e96888f968e@vondra.me
In response to: About replication minimal disk space usage (Manan Kansara <manan.kansara@vlo.city>)
List: pgsql-general
On 8/24/24 14:18, Manan Kansara wrote:
> Hello All,
> I have a self-hosted Postgres server on AWS with 16 GB of disk space
> attached to it. For ML and analysis work we use Vertex AI, so I have
> set up live replication from Postgres to a BigQuery table using the
> Datastream service. We use BigQuery as our data warehouse because we
> have many different data sources, so all of our analysis and ML can
> happen in one place.
> The problem is that when I start replication, pg_wal fills almost
> the whole disk (about 15.8 GB) within a few days.
>
> _Question_: How can I set this up to use disk space optimally, so
> that old pg_wal data that is no longer needed gets deleted? I was
> thinking of creating a cron job to take care of this, but I don't
> know the right approach. Can you please guide me?
> In the future, as the data grows, I will attach more disk space to
> the instance, but I want an optimal setup so the disk never fills up
> and crashes the server again.
> 

Why don't you just give it more disk space? I'm not a fan of blindly
throwing hardware at an issue, but 16GB is tiny these days, especially
if it's shared by both data and WAL, and the time you spend optimizing
this is likely more expensive than any savings.

If you really want to keep this on 16GB, I think we'll need more details
about what exactly you see on the instance / how it runs out of disk
space. AFAIK datastream relies on logical replication, and there's a
couple ways how that may consume disk space.

For example, the datastream replication may pause for a while, in which
case the replication slot will block removal of still-needed WAL, and if
the pause is long enough, that may be an issue. Of course, we have no
idea how much data you're dealing with (clearly not much, if it fits
onto 16GB of disk space with everything else).

Another option is that you have a huge transaction (inserting and/or
modifying a lot of data at once), and the logical decoding ends up
spilling the decoded transaction to disk.
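
On PostgreSQL 14 or newer, pg_stat_replication_slots shows how much
decoded data has spilled to disk per slot, something like:

  SELECT slot_name, spill_txns, spill_count,
         pg_size_pretty(spill_bytes) AS spilled
  FROM pg_stat_replication_slots;

If spill_bytes keeps growing, raising logical_decoding_work_mem (64MB
by default) lets more of each decoded transaction stay in memory.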

If you want a better answer, I think you'll have to provide a lot more
details. For example, which PostgreSQL version are you using, and how is
it configured? What config parameters have non-default values?
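
A quick way to get both: SELECT version(); for the server version, and
a query along these lines for the non-default settings:

  SELECT name, setting, source
  FROM pg_settings
  WHERE source NOT IN ('default', 'override');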


regards

-- 
Tomas Vondra


