Re: Writing 1100 rows per second - Mailing list pgsql-performance

From Arya F
Subject Re: Writing 1100 rows per second
Date
Msg-id CAFoK1az5D3SJqHd6oH8zmPFTGvgi-qVhjjs2fEhx5WL4ozF4SQ@mail.gmail.com
In response to Re: Writing 1100 rows per second  (Justin Pryzby <pryzby@telsasoft.com>)
Responses Re: Writing 1100 rows per second  (Haroldo Kerry <hkerry@callix.com.br>)
Re: Writing 1100 rows per second  (Jeff Janes <jeff.janes@gmail.com>)
List pgsql-performance
If I run the database on a server that has enough RAM to hold all the indexes and tables in memory, and it then writes the index changes out to the HDD every x seconds, would that dramatically increase performance?
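
Something like this is what I have in mind, if I understand the knobs correctly; shared_buffers plus the checkpoint settings seem to be the relevant ones (values below are only placeholders, not recommendations):

  ALTER SYSTEM SET shared_buffers = '32GB';       -- enough to cache the hot tables/indexes; needs a restart
  ALTER SYSTEM SET checkpoint_timeout = '15min';  -- how often dirty pages must be written back to the HDD
  ALTER SYSTEM SET max_wal_size = '16GB';         -- avoid forced checkpoints between timeouts
  ALTER SYSTEM SET synchronous_commit = 'off';    -- optional: trade a small durability window for write latency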

On Wed, Feb 5, 2020, 12:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:
On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:
> I'm looking to write about 1100 rows per second to tables up to 100 million
> rows. I'm trying to come up with a design that I can do all the writes to a
> database with no indexes. When having indexes the write performance slows
> down dramatically after the table gets bigger than 30 million rows.
>
> I was thinking of having a server dedicated for all the writes and have
> another server for reads that has indexes and use logical replication to
> update the read only server.

Wouldn't the read-only server still have bad performance, with all the writes being
replicated to it?
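
For concreteness, that split would presumably look something like the below with
logical replication (table and connection details are made up); note that the
subscriber still has to apply every insert into its own indexes:

  -- On the write-optimized primary (no extra indexes):
  CREATE PUBLICATION writes_pub FOR TABLE measurements;

  -- On the read server, after creating the same table there (plus its indexes):
  CREATE SUBSCRIPTION reads_sub
      CONNECTION 'host=write-server dbname=mydb user=replicator'
      PUBLICATION writes_pub;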

> Would that work? Or any recommendations how I can achieve good performance
> for a lot of writes?

Can you use partitioning so the updates mostly affect only one table at a
time, and its indices are of reasonable size, such that they fit easily in
shared_buffers?
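
Roughly like this, for example (table name, columns, and ranges are only a sketch):

  CREATE TABLE measurements (
      device_id  bigint      NOT NULL,
      created_at timestamptz NOT NULL,
      payload    jsonb
  ) PARTITION BY RANGE (created_at);

  CREATE TABLE measurements_2020_02 PARTITION OF measurements
      FOR VALUES FROM ('2020-02-01') TO ('2020-03-01');

  -- An index created on the partitioned parent (PostgreSQL 11+) is created on
  -- each partition, so each index stays partition-sized rather than 100M rows:
  CREATE INDEX ON measurements (device_id, created_at);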

BRIN indices may help for some of your indices, but likely not for all of them.
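
For example, assuming an append-only timestamp column like created_at above,
whose values correlate with physical row order:

  -- A BRIN index stays tiny even at 100M+ rows, but only helps range scans on
  -- well-correlated columns, not selective point lookups:
  CREATE INDEX ON measurements USING brin (created_at);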

Justin
