Thread: Time to insert
Hi everyone,

Could someone explain to me why insertion time becomes so long as the table
grows?  For the same kind of insertion:

    When the table is empty : 0.03 s/row
    Table has 3663 rows     : 2.07 s/row

Is that normal?

Sincerely, Renaud THONNART
On Thu, 3 May 2001, Renaud Thonnart wrote:
> Hi everyone,
> Could someone explain to me why insertion time becomes so long as the
> table grows?
> For the same kind of insertion:
>   When the table is empty : 0.03 s/row
>   Table has 3663 rows     : 2.07 s/row
> Is that normal?
>
> Sincerely, Renaud THONNART

Have you performed a VACUUM [ANALYZE]? If you are inserting data in larger
batches, consider using COPY or dropping indexes. What version of pgsql are
you using? There used to be (if I remember correctly) a bug in the 7.1 betas
that made inserts slow down.

- Einar Karttunen
It is a little difficult to perform VACUUM ANALYZE because I'm writing an
application in C++ using libpq++.
I use version 7.0.3.
I tried COPY too, but the result was about the same.

Renaud.

Einar Karttunen wrote:
> On Thu, 3 May 2001, Renaud Thonnart wrote:
>
> > Hi everyone,
> > Could someone explain to me why insertion time becomes so long as the
> > table grows?
> > For the same kind of insertion:
> >   When the table is empty : 0.03 s/row
> >   Table has 3663 rows     : 2.07 s/row
> > Is that normal?
> >
> > Sincerely, Renaud THONNART
>
> Have you performed a VACUUM [ANALYZE]? If you are inserting data in larger
> batches, consider using COPY or dropping indexes. What version of pgsql are
> you using? There used to be (if I remember correctly) a bug in the 7.1
> betas that made inserts slow down.
>
> - Einar Karttunen
On Thu, 3 May 2001, Renaud Thonnart wrote:
> It is a little difficult to perform VACUUM ANALYZE because I'm writing an
> application in C++ using libpq++.
> I use version 7.0.3.
> I tried COPY too, but the result was about the same.

You can send it through libpq++ like any other command, e.g.:

    conn.ExecCommandOk("VACUUM ANALYZE");

If you can, upgrading to 7.1 will probably help with the performance
problem. COPY is only worthwhile if you perform very many inserts at the
same time. I use a cron job that performs the vacuuming, so I don't
have to worry about it in client apps. If you have multiple inserts,
but not so many that you should use COPY, try wrapping them in a
single transaction.

- Einar Karttunen
Thank you Einar and Colin for your help.

I have some little questions more:

- How do I interpret a VACUUM ANALYZE?
- I'm going to load version 7.1: what is the difference between 7.1 and
  7.1rc4?

Renaud THONNART

Einar Karttunen wrote:
> On Thu, 3 May 2001, Renaud Thonnart wrote:
>
> > It is a little difficult to perform VACUUM ANALYZE because I'm writing an
> > application in C++ using libpq++.
> > I use version 7.0.3.
> > I tried COPY too, but the result was about the same.
>
> You can send it through libpq++ like any other command, e.g.:
>
>     conn.ExecCommandOk("VACUUM ANALYZE");
>
> If you can, upgrading to 7.1 will probably help with the performance
> problem. COPY is only worthwhile if you perform very many inserts at the
> same time. I use a cron job that performs the vacuuming, so I don't
> have to worry about it in client apps. If you have multiple inserts,
> but not so many that you should use COPY, try wrapping them in a
> single transaction.
>
> - Einar Karttunen
On Thu, 3 May 2001, Renaud Thonnart wrote:
> Thank you Einar and Colin for your help.
>
> I have some little questions more:
>
> - How do I interpret a VACUUM ANALYZE?
> - I'm going to load version 7.1: what is the difference between 7.1 and
>   7.1rc4?

7.1rc4 is the fourth release candidate; 7.1 is the final release. So you
should use 7.1.

You just send the "VACUUM ANALYZE" command to the backend. VACUUM ANALYZE
reclaims rows that are marked as unused and recreates the statistics
about the table. Its use is essential (once every 2-48 hours, or after
large modifications) and keeps the database running smoothly. You use
VACUUM ANALYZE as any other command, e.g. (using libpq++):

    PgDatabase data("dbname=mydb");  // connect string for your database

    data.ExecTuplesOk("MISC SQL QUERY");

    // when you need it
    data.ExecCommandOk("VACUUM ANALYZE");

I think it is better to do the vacuuming server side if the
application doesn't change a large amount of data.

- Einar Karttunen
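[Editor's note: the server-side cron approach Einar mentions could look like
the fragment below. The schedule, install path, and database name are
examples only; `vacuumdb` is the wrapper script shipped with PostgreSQL.]

```
# Hypothetical crontab entry: run VACUUM ANALYZE nightly at 03:00
0 3 * * * /usr/local/pgsql/bin/vacuumdb --analyze mydb
```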
OK, I will try to do a VACUUM ANALYZE in my program. I suppose it is
better to do it too often than not often enough...

Many thanks for your help, Einar.

Renaud

Einar Karttunen wrote:
> On Thu, 3 May 2001, Renaud Thonnart wrote:
>
> > Thank you Einar and Colin for your help.
> >
> > I have some little questions more:
> >
> > - How do I interpret a VACUUM ANALYZE?
> > - I'm going to load version 7.1: what is the difference between 7.1 and
> >   7.1rc4?
>
> 7.1rc4 is the fourth release candidate; 7.1 is the final release. So you
> should use 7.1.
>
> You just send the "VACUUM ANALYZE" command to the backend. VACUUM ANALYZE
> reclaims rows that are marked as unused and recreates the statistics
> about the table. Its use is essential (once every 2-48 hours, or after
> large modifications) and keeps the database running smoothly. You use
> VACUUM ANALYZE as any other command, e.g. (using libpq++):
>
>     PgDatabase data("dbname=mydb");  // connect string for your database
>
>     data.ExecTuplesOk("MISC SQL QUERY");
>
>     // when you need it
>     data.ExecCommandOk("VACUUM ANALYZE");
>
> I think it is better to do the vacuuming server side if the
> application doesn't change a large amount of data.
>
> - Einar Karttunen
Is it possible to view the last 100 queries issued to the database?

-r

---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01
If you have the RAM, does pg need to be tuned to keep more data in RAM?

-r
On Thu, 3 May 2001, Ryan Mahoney wrote:
> Is it possible to view the last 100 queries issued to the database?

If you have set the backend to log queries, this should be trivial. You
can set the level of output from the command line or from
postgresql.conf, which I think is a better approach. If you don't log
the queries, I don't think there is a way to implement it.

- Einar Karttunen
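[Editor's note: a sketch of the postgresql.conf settings involved. Parameter
names are from the 7.1-era configuration, if memory serves; check your
version's postgresql.conf.sample before relying on them.]

```
# postgresql.conf: log every query the backend executes
debug_print_query = true
# prepend a timestamp to each log line
log_timestamp = true
```

With logging enabled, something like `tail -100` on the server log file
approximates the "last 100 queries" view asked about above.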
Ryan Mahoney writes:
> If you have the RAM, does pg need to be tuned to keep more data in RAM?

You can use the postmaster -B option to allocate more buffers, but
remember to leave enough memory to the kernel for file caching.

--
Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter
The fastest way to add many records is to drop the indexes, use COPY,
then rebuild the indexes. Yes, I know it's not pretty.

You can in fact use VACUUM from C/C++: use fork/exec to start a shell
that runs vacuum. It takes about six or eight lines of code.

> -----Original Message-----
> From: Renaud Thonnart [mailto:thonnart@amwdb.u-strasbg.fr]
> Sent: Thursday, May 03, 2001 3:50 AM
> To: Einar Karttunen
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Time to insert
>
> It is a little difficult to perform VACUUM ANALYZE because I'm writing an
> application in C++ using libpq++.
> I use version 7.0.3.
> I tried COPY too, but the result was about the same.
>
> Renaud.
>
> Einar Karttunen wrote:
>
> > On Thu, 3 May 2001, Renaud Thonnart wrote:
> >
> > > Hi everyone,
> > > Could someone explain to me why insertion time becomes so long as the
> > > table grows?
> > > For the same kind of insertion:
> > >   When the table is empty : 0.03 s/row
> > >   Table has 3663 rows     : 2.07 s/row
> > > Is that normal?
> > >
> > > Sincerely, Renaud THONNART
> >
> > Have you performed a VACUUM [ANALYZE]? If you are inserting data in
> > larger batches, consider using COPY or dropping indexes. What version
> > of pgsql are you using? There used to be (if I remember correctly) a
> > bug in the 7.1 betas that made inserts slow down.
> >
> > - Einar Karttunen
> If you have the RAM, does pg need to be tuned to keep more data in RAM?
>
> -r

I would recommend increasing the number of shared buffers with the
postmaster -B switch. You can read my new performance article:

    http://candle.pha.pa.us/main/writings/pgsql/performance/index.html

It talks about memory usage and performance.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
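[Editor's note: a rough illustration of the -B switch. The data directory
path is an example; each buffer is one 8 kB disk page by default, so 2048
buffers is roughly 16 MB, and a large setting may also require raising the
kernel's shared-memory limit (e.g. SHMMAX).]

```
# Start the postmaster with 2048 shared buffers (~16 MB at the default
# 8 kB page size); the data directory path is an example.
postmaster -B 2048 -D /usr/local/pgsql/data
```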
Ryan Mahoney writes:
> Is it possible to view the last 100 queries issued to the database?

Turn on query logging and search the log file.

--
Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter