I am curious whether there is a significant speedup from doing this when most of the queries run against it are going to be table-wide. I won't drop the data, and the data won't really grow. Do I get better performance with one large table and large indexes, or with many small tables, each with its own small indexes?
Each small table would be defined with a constraint restricting its primary key to a specific range. A view would then be created over all of the small tables, with triggers handling insert and update operations; selects would have to go through the view, but that is cheap compared to updates. This should have the additional benefit that an update only hits the specific table(s) whose range it touches. A rough sketch of what I mean is below.
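For concreteness, here is roughly the setup I have in mind (the table name, column names, and range boundaries are made up for illustration; an analogous INSTEAD OF UPDATE trigger would route updates the same way):

```sql
-- Two small tables, each constrained to a primary-key range.
CREATE TABLE measurements_0 (
    id   bigint PRIMARY KEY CHECK (id >= 0 AND id < 1000000),
    data text
);

CREATE TABLE measurements_1 (
    id   bigint PRIMARY KEY CHECK (id >= 1000000 AND id < 2000000),
    data text
);

-- A view that presents the small tables as one logical table.
CREATE VIEW measurements AS
    SELECT * FROM measurements_0
    UNION ALL
    SELECT * FROM measurements_1;

-- An INSTEAD OF trigger routes each insert to the small table
-- whose range contains the new primary key.
CREATE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.id < 1000000 THEN
        INSERT INTO measurements_0 VALUES (NEW.*);
    ELSE
        INSERT INTO measurements_1 VALUES (NEW.*);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    INSTEAD OF INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```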
Basically, I don't see how this particular configuration breaks, and I'm wondering whether PostgreSQL already has the ability to do this, since it seems very useful for managing very large data sets.