From: Konstantin Knizhnik <k.knizhnik@postgrespro.ru>
> Unfortunately, we do not have to wait a decade or two.
> Postgres already faces multiple problems on existing multiprocessor
> systems (64, 96, ... cores).
> And it is not even necessary to initiate thousands of connections: it is
> enough to load all of these cores and let them compete for some
> resource (LWLock, buffer, ...). Even standard pgbench/YCSB benchmarks
> with a zipfian distribution can illustrate these problems.
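(For anyone who wants to see the contention for themselves, here is a minimal sketch of such a skewed workload. It assumes PostgreSQL 11 or later, where pgbench provides random_zipfian, a database initialized with "pgbench -i -s 100", and a hypothetical script file name zipf_update.sql.)

    -- zipf_update.sql: concentrate updates on a few hot rows of pgbench_accounts
    \set aid random_zipfian(1, 100000 * :scale, 1.05)
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    END;

    # roughly one client per core, e.g. on a 96-core box
    pgbench -f zipf_update.sql -s 100 -c 96 -j 96 -T 60 -P 10

The skew parameter 1.05 makes most transactions hit the same handful of rows, so backends pile up on the same buffer and lock waits instead of spreading the load across cores.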
I concur with you. VMs and bare metal machines with 100-200 CPU cores and TBs of RAM are already available, even on
public clouds. Users readily set max_connections to a high value like 10,000, create thousands or tens of thousands
of relations, and expect everything to run smoothly. Although that may be a horror for PG developers who know the
internals well, Postgres has grown into a great database to be relied upon.
Besides, I don't want people to think, "Postgres cannot scale up on one machine, so we need scale-out." I
understand some form of scale-out is necessary for large-scale analytics and web-scale multitenant OLTP, but it would be
desirable to be able to cover the OLTP workloads of a single organization/region with the advances in hardware, and with
Postgres leveraging those advances, without something like Oracle RAC.
> There have been many proposed patches which help to improve this situation.
> But since these patches increase performance only on huge servers
> with a large number of cores and show almost no
> improvement (or even some degradation) on standard 4-core desktops,
> almost none of them were committed.
> Consequently, our customers have a lot of trouble trying to replace
> Oracle with Postgres and getting the same performance on the same
> (quite good and expensive) hardware.
Yeah, it's a pity that the shiny-looking patches from Postgres Pro (mostly from Konstantin-san?) -- autoprepare,
built-in connection pooling, fair lwlock, and the revolutionary multi-threaded backend -- haven't gained much attention.
Regards
Takayuki Tsunakawa