On 6/27/23 9:32 AM, Ben Chobot wrote:
> We certainly have databases where far more than 100 tables are updated
> within a 10 second period. Is there a specific concern you have?
>
Thanks Ben, not a concern, but I'm trying to better understand how common
this might be. And I think sharing general statistics about how people
use PostgreSQL is a great help to the developers who build and maintain it.
One really nice thing about PostgreSQL is that with two quick snapshots of
pg_stat_all_tables you can easily see this sort of info.
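Something like this rough sketch works (the temp table name and the
pg_sleep duration are just illustrative):

    -- first snapshot of the per-table write counters
    create temp table snap1 as
      select relid, schemaname, relname,
             n_tup_ins + n_tup_upd + n_tup_del as writes
        from pg_stat_all_tables;

    select pg_sleep(10);

    -- any table whose counters moved was written during the window
    select count(*)
      from pg_stat_all_tables t
      join snap1 s using (relid)
     where t.n_tup_ins + t.n_tup_upd + t.n_tup_del > s.writes;

(Keep in mind the stats counters are reported asynchronously, so the
window is approximate.)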
If you have a database where more than 100 tables are updated within a
10-second period - this seems really uncommon to me - I'm very curious
about the workload.
For example:
1) Is the overall total number of tables for this database in the
thousands, tens of thousands, or hundreds of thousands?
2) How many CPUs or cores does the server have?
3) Are you using partitions and counting each one? What's the number if
you count each partitioned table as a single table?
4) Would you characterize this database as SaaS, i.e. many copies of a
similar schema? Or is it one very large schema with many different tables?
-Jeremy
--
http://about.me/jeremy_schneider