On 12/27/05, Eugene <evgenius@hot.ee> wrote:
Hello!
I've managed to import 3,800,000 rows of data into a PostgreSQL DB (500 MB of raw
CSV, ~2 GB as an SQL DB).
It looks like this:
"69110784","69111807","US","UNITED STATES","ILLINOIS","BLOOMINGTON","40.4758","-88.9894","61701","LEVEL 3 COMMUNICATIONS INC","DSL-VERIZON.NET"
"69111808","69112831","US","UNITED STATES","TEXAS","GRAPEVINE","32.9309","-97.0755","76051","LEVEL 3 COMMUNICATIONS INC","DSL-VERIZON.NET"
"69112832","69113087","US","UNITED STATES","TEXAS","DENTON","33.2108","-97.1231","76201","LEVEL 3 COMMUNICATIONS INC","DSL-VERIZON.NET"
CREATE TABLE ipdb2
(
    ipFROM int4 NOT NULL,
    ipTO int4 NOT NULL,
    countrySHORT CHARACTER(2) NOT NULL,
    countryLONG VARCHAR(64) NOT NULL,
    ipREGION VARCHAR(128) NOT NULL,
    ipCITY VARCHAR(128) NOT NULL,
    ipLATITUDE DOUBLE PRECISION,
    ipLONGITUDE DOUBLE PRECISION,
    ipZIPCODE VARCHAR(5),
    ipISP VARCHAR(255) NOT NULL,
    ipDOMAIN VARCHAR(128) NOT NULL
);
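For reference, a file like that is typically loaded with COPY; a minimal sketch, assuming PostgreSQL 8.0+ and a server-side path for the file (use \copy in psql if the file sits on the client):

-- the path is just an example; the default CSV quote character " matches your data
COPY ipdb2 FROM '/path/to/ipdb.csv' WITH CSV;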
1st. of all - change the ipfrom and ipto column types to int8.
Integer types in PostgreSQL are signed, so int4 effectively maxes out around 2,000,000,000 (2,147,483,647 to be exact), which makes any of your values over 3,000,000,000 technically not storable.
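A sketch of that change, assuming PostgreSQL 8.0 or later (older versions can't retype a column in place, so there you'd have to rebuild the table):

-- each ALTER rewrites the whole 3.8M-row table, so expect it to take a while
ALTER TABLE ipdb2 ALTER COLUMN ipfrom TYPE int8;
ALTER TABLE ipdb2 ALTER COLUMN ipto TYPE int8;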
2nd. do VACUUM ANALYZE, so the planner has up-to-date statistics for the freshly loaded table.
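That is simply:

VACUUM ANALYZE ipdb2;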
3rd. show EXPLAIN ANALYZE output. How can we tell you how to speed it up when we don't know what PostgreSQL is doing with your query?
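For example, for a typical range lookup (the IP value here is just a made-up sample):

EXPLAIN ANALYZE
SELECT * FROM ipdb2 WHERE 69111900 BETWEEN ipfrom AND ipto;

and paste the full output.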
depesz