I am constructing a large (by some standards) database where the largest table
threatens to be about 6-10 GB on a Linux system. I understand that PostgreSQL
splits tables into manageable chunks, and I have no problem with that as a
workaround for the 2 GB filesystem limit.
My question concerns the indexes, the first of which looks to be around
40% of the table size. How are large index files handled, and how do I create
subsequent indexes on large tables, given that I can't interrupt the build
process to move and symbolically link the chunks?
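
For the table chunks the workaround is straightforward enough -- roughly the
sketch below (the paths and segment names are illustrative, not my actual
layout) -- but an index seems to be built in one uninterruptible run, so I
don't see where I'd get the chance to do the same:

    import os
    import shutil

    # Illustrative paths -- substitute the actual PGDATA table directory
    # and the target filesystem with room to spare.
    PGDATA_TABLE_DIR = "/usr/local/pgsql/data/base/mydb"
    BIG_FILESYSTEM = "/bigdisk/pgsql"

    def relocate_segment(segment_name):
        """Move one table segment (e.g. 'bigtable.1') to another
        filesystem and leave a symbolic link in its place, so the
        backend keeps finding it at the original path."""
        src = os.path.join(PGDATA_TABLE_DIR, segment_name)
        dst = os.path.join(BIG_FILESYSTEM, segment_name)
        shutil.move(src, dst)   # copies across filesystems, then removes src
        os.symlink(dst, src)    # old location now points at the new one

    # Example: relocate two segments of 'bigtable'
    # (only while the postmaster is stopped, of course).
    for name in ("bigtable.1", "bigtable.2"):
        relocate_segment(name)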