On Tue 18 Nov 2003 17:43, Rick Gigger wrote:
> I am currently trying to import a text data file with about 45,000
> records. At the end of the import it does an update on each of the 45,000
> records. Doing all of the inserts completes in a fairly short amount of
> time (about 2 1/2 minutes). Once it gets to the updates, though, it slows
> to a crawl. After about 10 minutes it's only done about 3000 records.
That's not a big number of rows; it shouldn't be causing that much trouble
unless something is wrong with the update query.
Try inserting all 45K rows and then run EXPLAIN ANALYZE on the update query
to see what's wrong (and reply to the list with the EXPLAIN ANALYZE output).
Posting the update query itself would help a lot, too.
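
For example, if the per-row update looks anything like the sketch below (the
table and column names are invented here, since we haven't seen your schema),
wrapping it in a transaction lets you roll the test back, because EXPLAIN
ANALYZE really executes the statement:

  BEGIN;
  EXPLAIN ANALYZE
  UPDATE import_data            -- hypothetical table name
     SET status = 'done'        -- hypothetical column and value
   WHERE record_id = 12345;     -- hypothetical key column
  ROLLBACK;                     -- undo the test update

If the plan shows a sequential scan over the whole 45K-row table for each
single-row update, a missing index on the key column would explain the
slowdown.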
> Is that normal? Is it because it's inside such a large transaction? Is
> there anything I can do to speed that up? It seems awfully slow to me.
Not sure, but I have had lots of big transactions under heavy load and never
had a problem like the one you describe.
> I didn't think that giving it more shared buffers would help but I tried
> anyway. It didn't help.
>
> I tried doing a full vacuum with analyze (vacuumdb -z -f) and it cleaned up
> a lot of stuff, but it didn't speed up the updates at all.
>
> I am using a dual 800MHz Xeon box with 2 GB of RAM. I've tried anywhere
> from about 16,000 to 65,000 shared buffers.
How's memory performance while you're running the updates? What does free
say (if you are in some Unix environment)?
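
It also can't hurt to double-check from inside psql what the backend actually
ended up with, and how big the table is compared to those buffers (again, the
table name below is just a placeholder):

  SHOW shared_buffers;               -- confirm the setting really took effect
  SELECT relname, relpages, reltuples
    FROM pg_class
   WHERE relname = 'import_data';    -- hypothetical table name

With the default block size, relpages * 8kB is roughly the table's size on
disk, and reltuples should be close to 45,000 after the analyze you ran.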
--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
-----------------------------------------------------------------
Martín Marqués | mmarques@unl.edu.ar
Programmer, Administrator, DBA  |  Centro de Telemática
                                |  Universidad Nacional del Litoral
-----------------------------------------------------------------