Re: [GENERAL] huge table occupation after updates - Mailing list pgsql-general

From Tom DalPozzo
Subject Re: [GENERAL] huge table occupation after updates
Date
Msg-id CAK77FCThRptU8Coi-Ckv6xVmOHQtxHCYMsjvvDSxmUPomy3tLQ@mail.gmail.com
In response to Re: [GENERAL] huge table occupation after updates  (Rob Sargent <robjsargent@gmail.com>)
Responses Re: [GENERAL] huge table occupation after updates
List pgsql-general
Hi,
I'd like to do that! But my DB must be crash proof! Very high reliability is a must.
I also use synchronous replication.
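(For reference, a minimal sketch of the settings I mean, in postgresql.conf on the primary; the standby name is just a placeholder:)

    # Sketch only: wait for one named standby to confirm each commit.
    synchronous_commit = on
    synchronous_standby_names = 'standby1'   # placeholder standby name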
Regards
Pupillo

2016-12-10 16:04 GMT+01:00 Rob Sargent <robjsargent@gmail.com>:

> On Dec 10, 2016, at 6:25 AM, Tom DalPozzo <t.dalpozzo@gmail.com> wrote:
>
> Hi,
> you're right, VACUUM FULL recovered the space completely.
> So, at this point I'm worried about my needs.
> I cannot issue VACUUM FULL, as I read that it locks the table.
> In my DB, I (would) need a table with one bigint id field + 10 bytea fields, each about 100 bytes long (more or less, not fixed).
> 5,000-10,000 rows maximum, but let's say 5,000.
> As traffic, I can assume 10,000 updates per row per day (spread over groups of hours), each update involving two of those fields, chosen randomly.
> Rows are also chosen randomly (in my test I used a block of 2,000 just to try one possibility).
> So it's a total of 50 million updates per day, hence (50 million * 100 bytes * 2 fields updated) 10 GB net per day.
> I'm afraid it's not possible, according to my results.
> Regards
> Pupillo
>

Is each update visible to a user or read/analyzed by another activity?  If not, you can do most of the updating in memory and periodically flush a snapshot to the database.
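(A sketch of what such a periodic flush could look like: accumulate changes in memory and write them out as one multi-row UPDATE rather than one statement per change. Table and column names here are placeholders, not from the thread:)

    -- Sketch only: flush an in-memory batch of changes in one statement.
    -- mytable, f1, f2 and the values are hypothetical.
    UPDATE mytable AS t
    SET    f1 = v.f1,
           f2 = v.f2
    FROM (VALUES
           (1::bigint, '\x0011'::bytea, '\x2233'::bytea),
           (2::bigint, '\x4455'::bytea, '\x6677'::bytea)
         ) AS v(id, f1, f2)
    WHERE  t.id = v.id;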


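(For an update-heavy table like the one described in the quoted message, one standard mitigation, not discussed above, is a lower fillfactor, so that updates can be HOT (heap-only tuples) and plain VACUUM can reuse the freed space without VACUUM FULL. A sketch with placeholder names:)

    -- Sketch only: reserve ~50% free space per page so updated row versions
    -- can stay on the same page (HOT updates), reclaimable by plain VACUUM.
    -- HOT applies here because only id is indexed and updates touch the bytea fields.
    CREATE TABLE mytable (
        id   bigint PRIMARY KEY,
        f1   bytea, f2  bytea, f3  bytea, f4  bytea, f5  bytea,
        f6   bytea, f7  bytea, f8  bytea, f9  bytea, f10 bytea
    ) WITH (fillfactor = 50);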