Thanks everyone for your tips. I forgot to mention that the load is being done
via JDBC, so if you have any other tips, please let me know!
Thanks a lot again; I managed to reduce the load time to 37 minutes, really
close to the 30 minutes I need.
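
In case it's useful to anyone else finding this thread, here is roughly the
batched-JDBC pattern being discussed (a minimal sketch only; the connection
string, table, and column names are placeholders), with autocommit off so the
whole load commits once instead of once per row:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchLoad {
        public static void main(String[] args) throws Exception {
            // All connection details and names here are placeholders.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");
            conn.setAutoCommit(false);  // one commit for the whole load

            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO mytable (id, name) VALUES (?, ?)");
            for (int i = 1; i <= 300000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if (i % 19000 == 0) {
                    ps.executeBatch();  // send one 19k-row batch to the server
                }
            }
            ps.executeBatch();  // flush whatever is left over
            conn.commit();
            ps.close();
            conn.close();
        }
    }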
JuanF
On 9/10/03 12:37 PM, "M. Bastin" <marcbastin@mindspring.com> wrote:
> Hi Juan,
>
> Why don't you do a COPY FROM STDIN? You could import these records
> in a few minutes' time over the LAN or even the Internet. I do 5
> million records in less than 3 minutes this way over localhost on a
> PowerBook G4 550 MHz.
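
Since my load goes through JDBC: the PostgreSQL JDBC driver's CopyManager can
drive COPY FROM STDIN straight from a file, which is what I'd try next. A rough
sketch (assumes a driver version that ships the copy API; the table and file
names are made up):

    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyLoad {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");

            // getCopyAPI() needs the driver's own Connection, not a pooled wrapper.
            CopyManager copy = ((PGConnection) conn).getCopyAPI();
            long rows = copy.copyIn(
                    "COPY mytable FROM STDIN WITH DELIMITER ','",  // table is a placeholder
                    new FileReader("data.csv"));                   // file is a placeholder
            System.out.println(rows + " rows copied");
            conn.close();
        }
    }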
>
> It's also recommended to drop your indexes while you do such large
> inserts/imports. (Re-create them afterwards, of course.)
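
From JDBC, the drop/rebuild around the load is just two plain statements; the
index, table, and column names below are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ReindexAroundLoad {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");
            Statement st = conn.createStatement();
            st.execute("DROP INDEX mytable_name_idx");  // drop before the bulk load
            // ... run the batched INSERTs or COPY here ...
            st.execute("CREATE INDEX mytable_name_idx ON mytable (name)");  // rebuild once
            st.close();
            conn.close();
        }
    }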
>
> Marc
>
>>>>> Juan Francisco Diaz <j-diaz@publicar.com> 09/09/2003 23:05:54 >>>
>> Hi, I have tried by all means to optimize the insertion of data into my
>> db, but it has been impossible.
>> Right now, inserting around 300 thousand records takes something like 50
>> to 65 minutes (too long).
>> I'm using a Mac PowerPC G4 533 MHz with 256 MB of RAM.
>> I would really appreciate it if the insertion process could be done in 30
>> or 35 minutes TOPS. So far it has been impossible.
>> My db right now has no FKs and no indexes, and the insertion is being done
>> in batches (19 thousand records each).
>> Is it possible to achieve the level of performance I've mentioned with my
>> current machine?
>> Any help would be greatly appreciated. By the way, the same insertion
>> takes 25 minutes in MS SQL Server 2000 on a P3 1.4 GHz with 1 GB of RAM.
>> Thanks
>>
>> JuanF