Thread: Re: [GENERAL] So slow inserts... Why?
As I don't know anything about triggers :)
Have you done a vacuum on the databases to clear out old transactions?

/Johan

Michal Samek <webmaster@tony.cz> on 99-05-26 10:47:43
To: pgsql-general@postgreSQL.org
cc: (bcc: Johan Granlund/Solid AB)
Subject: [GENERAL] So slow inserts... Why?

Hi. First of all, sorry for my English :)

I'm using PostgreSQL in a music e-shop; the main table kat (catalogue of CDs and MCs) is filled from .dbf files by a PHP script. And it's just very, very slow - about 1 inserted record per second. Table kat has one primary index and three other indexes, and it is triggered on both insert and update by a plpgsql function that tracks updates/inserts, marking rows which need to be propagated to the production database on the internet.

It's not PHP slowing down the process; only the PostgreSQL backend loads the processor very heavily (about 50% on a Celeron 266 with 64 MB RAM). I can understand that it takes time to insert a row, run the trigger (which is only several lines of code) and update the indices, but I think it's really SLOWER THAN IT SHOULD BE. I've tried MySQL doing the same job, of course without the trigger, and it was TERRIBLY QUICK: the same job took about one minute with MySQL, while with PostgreSQL it lasted about 45 minutes. I know that MySQL is considered rather quick, but so much of a difference? I can't believe that PostgreSQL is so slow; rather, I'm thinking about how to optimize my job.

So, if you are smart enough and want to help me, please give me some information about how to affect the speed of inserting. I don't know, maybe it is better to drop the non-primary-key indexes, do the inserts, and then re-create the indexes? Or to forget about using triggers, or to optimize the backend some way? Of course I can provide you with details about my queries and data structures, but I don't think I'm doing anything special. And, the PostgreSQL version is 6.4.2.

Many thanks for any ideas.

--
Michal Samek, Tony distribuce s.r.o.
webmaster@tony.cz (++420659/321350)
ICQ: 38607210
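For readers who want a concrete picture of the setup Michal describes, a row-tracking trigger of this kind might look roughly like the sketch below. This is a hypothetical reconstruction, not Michal's actual code: the needs_sync column and all function/trigger names are assumptions, and `RETURNS opaque` is the trigger-function declaration style of that PostgreSQL era.

```sql
-- Hypothetical sketch of an insert/update tracking trigger.
-- Table kat is from the post; needs_sync and the names are assumptions.
CREATE FUNCTION kat_track() RETURNS opaque AS '
BEGIN
    -- mark the row so it gets copied to the production database later
    NEW.needs_sync := ''t'';
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER kat_track_trig
    BEFORE INSERT OR UPDATE ON kat
    FOR EACH ROW EXECUTE PROCEDURE kat_track();
```

Note that a per-row trigger like this runs once for every inserted or updated row, which is part of why the bulk load pays a per-record cost.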
johan.granlund@solid.se wrote:
>
> As i dont know anything about triggers:)
> Have you done a vacuum on the databases to clear out old transactions?

I recreated the table, so I think it is not necessary to vacuum it (it takes an unbelievable amount of time); or maybe, when I'm filling the table by sending a lot of INSERT queries, I should do a vacuum between the inserts from time to time? That looks strange to me.

And many thanks for your help; I was thinking about dropping the indexes and the trigger, but it's impossible. Imagine: this is not just a series of insert queries. The process I'm using is (simplified): for each dbase record (the catalogue from the music label), look up whether it exists; if so, update; if not, insert. So I can't drop the indexes, which are needed by the look-up, nor can I drop the trigger, because it tracks the inserts/updates of my table.

I'm testing grouping the inserts/updates into blocks enclosed in a transaction with BEGIN; END;, but for now it looks like it will help only a little.

PS: I plan to rewrite the whole thing in MySQL (even though it has no triggers, so it will be harder, but possible of course) and to report the result here if it turns out to be interesting.

--
Michal Samek, Tony distribuce s.r.o.
webmaster@tony.cz (++420659/321350)
ICQ: 38607210
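The per-record process Michal describes could be sketched as follows. The column names (katnum, title) are illustrative assumptions, not from the post; PostgreSQL 6.4 had no built-in "upsert", so the existence check and the write are separate statements issued by the PHP script.

```sql
-- Hypothetical sketch of the per-record look-up-then-write loop.
-- katnum and title are assumed column names.
SELECT katnum FROM kat WHERE katnum = 12345;

-- If the SELECT returned a row, the script updates it:
UPDATE kat SET title = 'Some Album' WHERE katnum = 12345;

-- Otherwise it inserts a new one:
INSERT INTO kat (katnum, title) VALUES (12345, 'Some Album');
```

Each record therefore costs at least two round trips plus the trigger and index maintenance, which compounds the slowness Michal measured.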
> PS I plan to rewrite the whole thing to Mysql (even it's no triggers, it
> will be harder but possible of course) and to report the result here if
> will be interesting)

Why don't you, at the same time, try the approach you're going to use for MySQL on a PostgreSQL system? It's kinda unfair to use triggers with PostgreSQL, no triggers with MySQL, and then say that MySQL is faster (although it very likely is).

Maarten

--
Maarten Boekhold, boekhold@tibco.com
TIBCO Finance Technology Inc.
The Atrium
Strawinskylaan 3051
1077 ZX Amsterdam, The Netherlands
tel: +31 20 3012158, fax: +31 20 3012358
http://www.tibco.com
Try using transactions. I found when doing single inserts they could take hours, but when grouping all the inserts into a single transaction, it was relatively quick. It's been a while, but I believe the syntax is:

BEGIN
insert ...
insert ...
insert ...
COMMIT;

At 11:15 AM 5/26/99, Michal Samek wrote:
>johan.granlund@solid.se wrote:
>>
>> As i dont know anything about triggers:)
>> Have you done a vacuum on the databases to clear out old transactions?
>
>I recreated the table, so I think there is not necessary to vacuum it
>(it takes unbelieveable amount of time); maybe when I'm filling the
>table by sending a lot of INSERT queries I should sometimes between
>inserts do a vacuum ? It looks strange for me.
>
>And many thanks for your help; I was thinking about dropping indexes and
>trigger, but It's impossible. Imagine this is not just a series of
>insert queries, the process I'm using is (simplicied): For each dbase
>record (the catalogue from the music label) to look-up if exist and if
>so, update; if no, insert. So I can't drop indexes which are needed by
>look-up function; nor can't drop trigger because it tracks inserts /
>updates of my table.
>I'm testing to group inserts / updates to blocks enclosed with
>transaction BEGIN; END;, but it now looks that it will help only little.
>
>PS I plan to rewrite the whole thing to Mysql (even it's no triggers, it
>will be harder but possible of course) and to report the result here if
>will be interesting)
>
>--
>Michal Samek, Tony distribuce s.r.o.
>webmaster@tony.cz (++420659/321350)
>ICQ: 38607210
> Try using transactions. I found when doing single inserts they could take
> hours, but when grouping all the inserts into a single transaction, it was
> relatively quick. It's been a while, but I believe the syntax is:
> BEGIN
> insert ...
> insert ...
> insert ...
> COMMIT;

Btw, I want to ask how many inserts I can put between BEGIN and END. When I did it last time (I wanted to add about 1000 inserts), I got an error saying that the buffer can contain up to 20 000 characters. I use psql, so I'm not sure if this is a limit of psql itself or a limit of the backend. And maybe somebody knows whether it is possible to change this limit, and where?

TIA
Rem

-------------------------------------------------------------------*------------
Remigiusz Sokolowski       e-mail: rems@gdansk.sprint.pl          * *
-----------------------------------------------------------------*****----------
Remigiusz Sokolowski wrote:
>
> > Try using transactions. I found when doing single inserts they could take
> > hours, but when grouping all the inserts into a single transaction, it was
> > relatively quick. It's been a while, but I believe the syntax is:
> > BEGIN
> > insert ...
> > insert ...
> > insert ...
> > COMMIT;
>
> Btw. I want to ask about how much "inserts" can I insert between BEGIN and

2^32 - 2 -:)

> END; When I do it last time ( I want to add about 1000 "inserts") - I get
> error - that buffer can contain to 20 000 characters.
> I use psql, so I'm not sure if this is a limit of psql itself or limit of
> backend.
> And may be anybody knows if it is possible change this limit and where?

You have to add ; after each insert...

Vadim
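Spelling out Vadim's point: with a semicolon after each statement, psql sends every INSERT to the backend as its own query, so the query buffer presumably only has to hold one statement at a time; without the semicolons, psql keeps accumulating input into one huge query until the 20 000-character limit is hit. A corrected version of the example from earlier in the thread would be (kat and the VALUES lists are placeholders borrowed from the thread, not real data):

```sql
BEGIN;
INSERT INTO kat VALUES (...);  -- each statement ends with its own ;
INSERT INTO kat VALUES (...);
INSERT INTO kat VALUES (...);
COMMIT;
```

With this shape, the number of inserts between BEGIN and COMMIT is no longer bounded by the input buffer, only (as Vadim jokes) by the transaction's command counter.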