We have several tables, each of which has an objectid field. These fields
contain id values, 99% of which fall in the 0 to 1,000,000 range, and the
remaining 1% are randomly dispersed between 1,000,000 and 10,000,000,000
(these are the identity values we imported from SQL Server when moving to
Postgres).
We started a new project and want to use id values > 10,000,000,000 for
it, so that they do not overlap with existing values.
1. Will this hurt the performance of queries involving these tables?
2. Can I help the planner by providing statistics about the ranges of
values in these tables? If so, how? (A rough sketch of what I have in
mind is below.)
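
For example (table and column names here are just placeholders), I was
wondering whether simply raising the per-column statistics target would
be enough, something along these lines:

    -- Ask ANALYZE to build a finer-grained histogram of objectid
    -- (the default statistics target is 100 buckets).
    ALTER TABLE some_table ALTER COLUMN objectid SET STATISTICS 1000;
    ANALYZE some_table;

or whether there is a better way to tell the planner that the values
fall into these distinct ranges.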
Thanks.
Oleg