Joshua Marsh <icub3d@gmail.com> writes:
> ... We did some initial testing on a server with 8GB of RAM and
> found we can do operations on data files of up to 50 million rows
> fairly well, but performance drops dramatically after that.
What you have to ask is *why* it drops dramatically. There aren't
any inherent limits in Postgres that are going to kick in at that level.
I'm suspicious that you could improve the situation by adjusting
sort_mem and/or other configuration parameters; but there's not enough
info here to make specific recommendations.
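
For instance, a minimal sketch of raising per-sort memory for a single
session (the value here is illustrative, not a tuned recommendation;
sort_mem is measured in kilobytes and is allocated per sort step, so
size it with your connection count in mind):

    -- Raise sort memory to 64MB for this session only (value in KB).
    SET sort_mem = 65536;

    -- To make it the default, set sort_mem = 65536 in postgresql.conf
    -- and reload the server.
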
I would suggest posting EXPLAIN ANALYZE results for your most important
queries, both in the size range where you are getting good results and
in the range where you are not. Then we'd have something to chew on.
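
Something along these lines, run from psql against one of the slow
queries (the table and column names are placeholders, of course):

    -- Shows the actual plan chosen, row counts, and per-node timings.
    EXPLAIN ANALYZE
    SELECT *
    FROM your_big_table
    WHERE some_column = 42;

Note that EXPLAIN ANALYZE actually executes the query, so expect the
slow ones to take a while.
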
regards, tom lane