On Wed, Feb 26, 2025 at 9:21 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:
>
> I have done the performance testing for cases where we distribute a
> small amount of invalidation messages to many concurrently decoded
> transactions.
> Here are results:
>
> Concurrent Txn | Head (sec) | Patch (sec) | Degradation in %
> -----------------------------------------------------------------
>             50 |  0.2627734 |   0.2654608 | 1.022706256
>            100 |  0.4801048 |   0.4869254 | 1.420648158
>            500 |  2.2170336 |   2.2438656 | 1.210265825
>           1000 |  4.4957402 |   4.5282574 | 0.723289126
>           2000 |  9.2013082 |   9.2116400 | 0.112286207
>
> The steps I followed are:
> 1. Initially, logical replication is set up.
> 2. Then we start 'n' concurrent transactions.
> Each txn looks like:
> BEGIN;
> Insert into t1 values(11);
> 3. Now we add two invalidations, which will be distributed to each
> transaction, by running the commands:
> ALTER PUBLICATION regress_pub1 DROP TABLE t1;
> ALTER PUBLICATION regress_pub1 ADD TABLE t1;
> 4. Then run an insert in each txn. It will rebuild the relation cache
> in each txn.
> 5. Commit each transaction.
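>
> Condensed, the same steps look like this (assuming table t1 and
> publication regress_pub1 already exist; the value in the second
> insert is just illustrative):
>
>   -- In each of the 'n' concurrent sessions:
>   BEGIN;
>   INSERT INTO t1 VALUES (11);
>
>   -- In a separate session, add two invalidations to be distributed:
>   ALTER PUBLICATION regress_pub1 DROP TABLE t1;
>   ALTER PUBLICATION regress_pub1 ADD TABLE t1;
>
>   -- Back in each open session, rebuild the relation cache and finish:
>   INSERT INTO t1 VALUES (12);  -- illustrative value
>   COMMIT;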
>
> I have also attached the script.
>
The tests are done using a pub-sub setup, which has some overhead
from logical replication as well. Can we try this test by fetching
changes via the SQL API, using pgoutput as the plugin, to see the impact?
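
For example, something like the following (slot name is illustrative;
since pgoutput emits binary output, the *_binary_changes variant of the
function is needed, with proto_version and publication_names passed as
options):

  SELECT pg_create_logical_replication_slot('regress_slot1', 'pgoutput');
  SELECT count(*)
    FROM pg_logical_slot_get_binary_changes('regress_slot1', NULL, NULL,
         'proto_version', '1', 'publication_names', 'regress_pub1');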
--
With Regards,
Amit Kapila.