On Thu, Sep 15, 2011 at 3:32 PM, marvin.deoliveira
<marvin.deoliveira@gmail.com> wrote:
> Hi.
> I'm restoring a database (data only) that has some tables with 9 million
> rows, and others with even more.
> It's going slowly (more than 24 hours so far).
> I'm disabling the triggers, but I guess that if I drop the indexes it
> will perform better. Am I right?
> If yes, does anyone have a script that generates the DROP and CREATE
> INDEX statements?
What part of the import process is slow? If you're running pg_restore
with the -v option, you should be able to see which portion is taking
a long time. Is it the indices, or the actual data as well?
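For example, something along these lines (mydb and mydump.dump are
placeholders for your database and dump file):

    pg_restore -v --disable-triggers -d mydb mydump.dump

The -v output prints each step (processing the data for each table,
then creating indexes and constraints) as it starts, so you can tell
where the time is going.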
Also, the number of rows isn't necessarily the reason an import is
slow. As an extreme example, 9 million rows with a single column would
likely import faster than a much smaller number of rows with many
columns.
Additionally, what kind of hardware are you using, and which version of
PostgreSQL?
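As for a script to generate the drop and create statements: I don't
have a canned one, but a rough sketch against the pg_indexes view could
look like this (untested, and note that indexes backing PRIMARY KEY or
UNIQUE constraints can't be dropped with DROP INDEX; those need
ALTER TABLE ... DROP CONSTRAINT instead):

    -- Save the CREATE INDEX statements first, so you can replay them
    -- after the data load:
    SELECT indexdef || ';'
      FROM pg_indexes
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema');

    -- Then generate the matching DROP statements:
    SELECT 'DROP INDEX ' || quote_ident(schemaname) || '.'
           || quote_ident(indexname) || ';'
      FROM pg_indexes
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema');

You can dump each result to a file with something like
psql -t -c '...' > drop_indexes.sql and run it back later.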
Anyway, here are some good suggestions on how to improve restore performance:
http://stackoverflow.com/questions/2094963/postgresql-improving-pg-dump-pg-restore-performance
http://postgresql.1045698.n5.nabble.com/Fastest-pq-restore-td3911438.html
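From those threads, the usual quick wins for a big data-only restore
are a parallel restore (pg_restore -j, available in 8.4 and later with
custom-format archives) and more memory for the index builds. One
common trick, with illustrative numbers you'd adjust to your hardware:

    PGOPTIONS="-c maintenance_work_mem=1GB" pg_restore -j 4 -v -d mydb mydump.dump

PGOPTIONS is passed through libpq to each backend session, so the
larger maintenance_work_mem applies to the CREATE INDEX steps.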