Re: hundreds of millions row dBs - Mailing list pgsql-general

From Dann Corbit
Subject Re: hundreds of millions row dBs
Msg-id D425483C2C5C9F49B5B7A41F894415470557A0@postal.corporate.connx.com
In response to hundreds of millions row dBs  ("Greer, Doug [NTK]" <doug.r.greer@mail.sprint.com>)
Responses Re: hundreds of millions row dBs
List pgsql-general

-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Wes
Sent: Tuesday, January 04, 2005 8:59 AM
To: Guy Rouillier; pgsql-general@postgresql.org; Greer, Doug [NTK]
Subject: Re: [GENERAL] hundreds of millions row dBs

> We're getting about 64 million rows inserted in about 1.5 hrs into a
> table with a multiple-column primary key - that's the only index.
> That seems pretty good to me - SQL Loader takes about 4 hrs to do the
> same job.

As I recall, the last time we rebuilt our database, it took about 3
hours to import 265 million rows of data.
>>
265 million rows in 3 hours works out to about 24,537 rows per second.
<<
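
A quick sanity check on that figure (a small Python sketch; the row
count and wall-clock time are Wes's numbers from above):

    # 265 million rows loaded in roughly 3 hours
    rows = 265_000_000
    seconds = 3 * 3600
    print(round(rows / seconds))  # -> 24537 rows per second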

It then took another 16 hours to rebuild all the indexes.  I think the
entire pg_dumpall/reload process took about 21 hours +/-.  I wonder
what it will be like with 1.5 billion rows...
>>
Load will probably scale linearly, so I think you could just multiply
by 5.66 to get about 17 hours to load.

Building the indexes is likely to scale at least as n*log(n), and maybe
even as n^2.  Either way, count on it taking at least a whole weekend.
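
To put rough numbers on those guesses, here is a small Python estimate
(assumptions: the load is linear in row count and the index build is
n*log(n); both are extrapolations from Wes's timings, not
measurements):

    import math

    old_rows, new_rows = 265e6, 1.5e9
    load_hours, index_hours = 3.0, 16.0

    # Linear scaling for the data load itself.
    scale = new_rows / old_rows                # ~5.66
    print(load_hours * scale)                  # ~17 hours

    # n*log(n) scaling for rebuilding the indexes.
    growth = (new_rows * math.log(new_rows)) / (old_rows * math.log(old_rows))
    print(index_hours * growth)                # ~99 hours, about four days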

Here is an instance where a really big RAM disk might be handy: create
the database on the RAM disk, load it, and build the indexes there,
then shut down the database and move it to hard disk (see the sketch
below).  It could save days of effort if you have billions of rows to
load.
<<
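
A minimal sketch of that RAM-disk workflow, assuming a tmpfs or ramfs
mount at /mnt/ramdisk (the paths, database name, and dump file are
hypothetical, and initdb/pg_ctl/createdb/psql are assumed to be on the
PATH):

    import shutil
    import subprocess

    RAM_DIR = "/mnt/ramdisk/pgdata"    # hypothetical RAM-disk mount point
    DISK_DIR = "/var/lib/pgsql/data"   # hypothetical final home on hard disk

    # 1. Create a fresh cluster on the RAM disk and start it.
    subprocess.run(["initdb", "-D", RAM_DIR], check=True)
    subprocess.run(["pg_ctl", "-D", RAM_DIR, "-w", "start"], check=True)

    # 2. Load the data and build the indexes while everything lives in RAM.
    subprocess.run(["createdb", "mydb"], check=True)
    subprocess.run(["psql", "-d", "mydb", "-f", "dump.sql"], check=True)

    # 3. Shut down cleanly and copy the whole cluster to hard disk;
    #    the indexes built at RAM speed come along for free.
    subprocess.run(["pg_ctl", "-D", RAM_DIR, "-w", "stop"], check=True)
    shutil.copytree(RAM_DIR, DISK_DIR)

    # From here on, start the server against the on-disk copy:
    #   pg_ctl -D /var/lib/pgsql/data start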
