On Mon, 2023-10-30 at 08:05 -0700, David Ventimiglia wrote:
> Can someone help me develop a good mental model for estimating PostgreSQL throughput?
> Here's what I mean. Suppose I have:
> * 1000 connections
> * typical query execution time of 1ms
> * but additional network latency of 100ms
> What, if at all, would be an estimate of the number of operations that can be performed
> within 1 second? My initial guess would be ~10000, but then perhaps I'm overlooking
> something. I expect a more reliable figure would be obtained through testing, but
> I'm looking for an a priori back-of-the-envelope estimate. Thanks!
If the workload is CPU bound, it depends on the number of cores.
If the workload is disk bound, look for the number of I/O requests a typical query
needs, and how many of them you can perform per second.
The network latency might well be the limiting factor: with 101ms per synchronous
round trip, each connection can complete at most ~10 queries per second, so 1000
connections top out around 9900 operations per second — close to your guess.
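To make the back-of-the-envelope estimate explicit, here is a minimal sketch
using the numbers from the question (1000 connections, 1ms query time, 100ms
network latency; the core count of 16 is an assumption for illustration):

```python
def estimate_ops_per_second(connections, query_ms, network_ms, cores):
    """Rough throughput ceiling for synchronous clients.

    Each connection completes one query per (query_ms + network_ms)
    interval, so latency caps per-connection throughput.  Separately,
    the CPUs can only execute cores * (1000 / query_ms) queries per
    second if each query burns query_ms of CPU time.
    """
    latency_bound = connections * 1000.0 / (query_ms + network_ms)
    cpu_bound = cores * 1000.0 / query_ms
    return min(latency_bound, cpu_bound)

# Numbers from the question; 16 cores is an assumed figure.
print(round(estimate_ops_per_second(1000, 1.0, 100.0, 16)))  # -> 9901
```

With these inputs the latency bound (~9900 ops/s) is below the CPU bound
(16000 ops/s), so latency dominates; with fewer cores or slower queries,
the CPU bound would take over.  Disk-bound workloads need a third bound
(I/O requests per query vs. IOPS the storage can deliver).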
1000 connections is a lot for a PostgreSQL server; use PgBouncer with
transaction mode pooling to keep the number of server connections low.
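A minimal pgbouncer.ini sketch for transaction pooling (host, database
name, and pool sizes are placeholder assumptions — tune them for your
workload):

```ini
[databases]
; placeholder connection string
mydb = host=10.0.0.5 port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

This lets 1000 clients share a small pool of server connections, at the
cost of losing session-level features (prepared statements, SET, etc.)
across transactions.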
Yours,
Laurenz Albe