Thread: Operational performance: one big table versus many smaller tables
If I have various record types that are "one up" records that are structurally similar (same columns) and are mostly retrieved one at a time by their primary key, is there any performance or operational benefit to having millions of such records split across multiple tables (say by their application-level purpose) rather than all in one big table? I am thinking of PG query performance (handling queries against multiple tables each with hundreds of thousands of rows, versus queries against a single table with millions of rows), and operational performance (number of WAL files created, pg_dump, vacuum, etc.). If anybody has any tips, I'd much appreciate it.

Thanks,
David
David Wall wrote:
> If I have various record types that are "one up" records that are
> structurally similar (same columns) and are mostly retrieved one at a
> time by its primary key, is there any performance or operational benefit
> to having millions of such records split across multiple tables (say by
> their application-level purpose) rather than all in one big table?

Probably doesn't matter if you're accessing by pkey (and hence index). Certainly not when you're talking about a few million rows. Arrange your tables so they have meaning and only change that if necessary.

--
Richard Huxton
Archonet Ltd
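To illustrate Richard's point that a primary-key lookup goes through a B-tree index regardless of table size: the sketch below uses Python's built-in sqlite3 as a stand-in for PostgreSQL (the table name "records" and the "purpose" column are made up for the example; PostgreSQL's planner and EXPLAIN output differ, but the principle — an indexed PK lookup touches only a handful of index pages, so one big table behaves much like several small ones — is the same). The single-table design keeps the application-level purpose as an ordinary column instead of splitting it into separate tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One big table: the record "purpose" is just a column, not a table split.
conn.execute("""
    CREATE TABLE records (
        id      INTEGER PRIMARY KEY,  -- B-tree-backed key, as in PG
        purpose TEXT,                 -- application-level record type
        payload TEXT
    )
""")
conn.executemany(
    "INSERT INTO records (id, purpose, payload) VALUES (?, ?, ?)",
    [(i, "type_%d" % (i % 5), "data") for i in range(1, 10001)],
)

# The plan shows a keyed search, not a full-table scan, so lookup cost
# grows only logarithmically with row count.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT payload FROM records WHERE id = ?", (42,)
).fetchall()
print(plan[0][-1])  # e.g. SEARCH ... USING INTEGER PRIMARY KEY (rowid=?)

row = conn.execute(
    "SELECT payload FROM records WHERE id = ?", (42,)
).fetchone()
```

In PostgreSQL you would check the same thing with EXPLAIN and expect an Index Scan on the primary-key index rather than a Seq Scan.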