Re: Read write performance check - Mailing list pgsql-general

From veem v
Subject Re: Read write performance check
Msg-id CAB+=1TW_Ss7wHhJB+CKwT5W9TvBc-pLttH5VBzW8K+-cvAcMSA@mail.gmail.com
In response to Re: Read write performance check  (veem v <veema0000@gmail.com>)
Responses Re: Read write performance check
Re: Read write performance check
List pgsql-general
Can someone please advise whether any standard scripts are available for doing such read/write performance tests, or point me to any relevant docs?

On Wed, 20 Dec, 2023, 10:39 am veem v, <veema0000@gmail.com> wrote:
Thank you. 

It would really be helpful if such test scripts or similar setups are already available. Can someone please point me to some docs, blogs, or sample scripts on the same?

On Wed, 20 Dec, 2023, 10:34 am Lok P, <loknath.73@gmail.com> wrote:
As Rob mentioned, the syntax you posted is not correct. You need to read or process a batch of rows at a time, say 1,000 or 10,000, not all 100M in one shot.

But then, your use case seems a common one: you want to compare read and write performance across multiple databases with a similar table structure. In that case, you may want to reuse test scripts that others have already written rather than reinventing the wheel.
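
For what it's worth, the usual batching pattern is keyset pagination: each query fetches one batch and remembers where it stopped. A minimal sketch, assuming a table big_tab with a bigint id key (both names are made up here; :last_seen_id is a pgbench/psql-style variable):

SELECT id, payload
FROM   big_tab
WHERE  id > :last_seen_id   -- highest id returned by the previous batch; start at 0
ORDER  BY id
LIMIT  10000;               -- one 10k-row batch per round trip

Each iteration feeds the largest id it saw back into the next query, so no batch rescans rows that were already processed.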


On Wed, 20 Dec, 2023, 10:19 am veem v, <veema0000@gmail.com> wrote:
Thank you. 

Yes, we are trying to compare and see what maximum TPS we are able to reach with both the row-by-row and the batch read/write tests. Afterwards, these figures can be compared with other databases under similar setups.

So I wanted to understand from the experts here whether this approach is fine, or whether some other approach is advisable.

I agree that the network will play a role in a real-world app, but here we mainly want to see the database's capability, since the network will play a similar role across all databases. Do you suggest some other approach to achieve this objective?


On Wed, 20 Dec, 2023, 2:42 am Peter J. Holzer, <hjp-pgsql@hjp.at> wrote:
On 2023-12-20 00:44:48 +0530, veem v wrote:
>  So at first, we need to populate the base tables with the necessary data
> (say 100 million rows) with the required skew, using random functions to
> generate variation in the values of the different data types. Then, for the
> row-by-row write/read test, we can traverse in a cursor loop, and for the
> batch write/insert test, we can traverse in a bulk collect loop. Something
> like the below, which can then be wrapped into a procedure, passed to
> pgbench, and executed from there. Please correct me if I'm wrong.
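
The snippet referred to above isn't quoted in the thread, so as a rough sketch only: here is what the setup and the two loop styles could look like in PL/pgSQL. All table, column, and procedure names are made up, and since PostgreSQL has no BULK COLLECT, the batch variant is written as a set-based INSERT ... SELECT per batch:

-- Base table plus a copy target (hypothetical names).
CREATE TABLE IF NOT EXISTS perf_test (
    id     bigint PRIMARY KEY,
    cat    int,
    amount numeric,
    note   text
);
CREATE TABLE IF NOT EXISTS perf_test_copy (LIKE perf_test);

-- Populate with skewed random data: multiplying two random() calls
-- biases cat toward low values.
INSERT INTO perf_test
SELECT g,
       (random() * random() * 100)::int,
       round((random() * 10000)::numeric, 2),
       md5(g::text)
FROM generate_series(1, 100000000) AS g;   -- 100M rows; use fewer to try it out

-- Row-by-row variant: a cursor loop issuing one INSERT per row.
CREATE OR REPLACE PROCEDURE row_by_row_test()
LANGUAGE plpgsql AS $$
DECLARE
    r perf_test%ROWTYPE;
BEGIN
    FOR r IN SELECT * FROM perf_test LOOP
        INSERT INTO perf_test_copy (id, cat, amount, note)
        VALUES (r.id, r.cat, r.amount, r.note);
    END LOOP;
END;
$$;

-- Batch variant: one set-based INSERT ... SELECT per batch, committed
-- per batch (transaction control in procedures needs PostgreSQL 11+).
CREATE OR REPLACE PROCEDURE batch_test(batch_size int DEFAULT 10000)
LANGUAGE plpgsql AS $$
DECLARE
    last_id bigint := 0;
    n       bigint;
BEGIN
    LOOP
        INSERT INTO perf_test_copy
        SELECT * FROM perf_test
        WHERE id > last_id
        ORDER BY id
        LIMIT batch_size;
        GET DIAGNOSTICS n = ROW_COUNT;
        EXIT WHEN n = 0;
        last_id := last_id + batch_size;   -- ids are dense 1..N in this setup
        COMMIT;
    END LOOP;
END;
$$;

Either procedure can then be driven with CALL from a pgbench custom script.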

One important point to consider for benchmarks is that your benchmark
has to be similar to the real application to be useful. If your real
application runs on a different node and connects to the database over
the network, a benchmark running within a stored procedure may not be
very indicative of real performance.
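
If the network is to be part of the measurement, the per-row statements have to come from the client side rather than from inside one big CALL. A minimal sketch of a custom pgbench script doing single-row writes (reusing the hypothetical perf_test_copy table from the sketch above; \set and :id are pgbench syntax):

-- rowwrite.sql: one single-row INSERT per pgbench transaction.
-- Run from the application's node so each statement crosses the network:
--   pgbench -h dbhost -n -f rowwrite.sql -c 10 -T 60 mydb
\set id random(1, 100000000)
INSERT INTO perf_test_copy (id, cat, amount, note)
VALUES (:id, 1, 0, md5(:id::text));

Comparing the TPS of this script against the TPS of one server-side CALL makes the network's contribution visible.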

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
