Thread: ecpg: how select/insert n rows (array) in one query

ecpg: how select/insert n rows (array) in one query

From: "Jorma O. Tähtinen"
Hello

I'm a new postgres user (currently converting a db application with a quite
large database from mysql to postgres; hopefully pg is not too slow, but
I really need triggers and views).

Anyway, could somebody please explain how multiple-row select/insert
should be used in embedded SQL (ecpg)?

1) select - this is simple (well documented):

   EXEC SQL BEGIN DECLARE SECTION;
   struct data_t {
       char md5[100][32];
       int  size[100];
   } data;
   struct ind_t { short md5, size; } ind;
   EXEC SQL END DECLARE SECTION;
   int n;

   EXEC SQL BEGIN TRANSACTION;
   EXEC SQL DECLARE cur CURSOR FOR
       select md5, size FROM filetable;
   EXEC SQL OPEN cur;
   EXEC SQL WHENEVER NOT FOUND DO break;
   while (1) {
       /* fetch at most as many rows as the host arrays can hold */
       EXEC SQL FETCH 100 FROM cur INTO :data:ind;
       for (n = 0; n < sqlca.sqlerrd[2]; ++n) {
           /* do something */
       }
   }
   EXEC SQL CLOSE cur;
   EXEC SQL COMMIT;

   This is fine; speed on my hardware is roughly (order of magnitude)
   10,000 rows per second (mostly limited by disk I/O).

   Only question here is: why can't I declare that data struct like

   struct data_t {
       char md5[32];
       int  size;
   } data[100];

   Is this possible, and how do I use it?
2) insert - Now this is my real problem.

   If I have understood the docs correctly, an insert used like the select
   in the above example is NOT possible. Please somebody say that I'm
   wrong - a small example would be nice to see ;-)

   And yes, I have tried to insert multiple rows in one transaction
   block, like

   EXEC SQL BEGIN TRANSACTION;
       for (....) {
           insert .....
       }
   EXEC SQL COMMIT;

   but this is still way too slow, at most 50-100 inserts per second. I
   understand that inserting is a slow operation, but 10,000/sec on select
   and 50/sec on insert - it can't be THAT slow??? (Inserting 12-13M rows
   is a bit of a pain... ;-)
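   To be concrete, the loop I am timing looks roughly like this (simplified
   to the md5/size columns of filetable from the select example above;
   one_md5, one_size, rows[] and nrows are just placeholders for whatever
   the real app keeps in memory):

   EXEC SQL BEGIN DECLARE SECTION;
   char one_md5[33];   /* 32-char md5 plus terminating '\0' */
   int  one_size;
   EXEC SQL END DECLARE SECTION;

   EXEC SQL BEGIN TRANSACTION;
   for (n = 0; n < nrows; ++n) {
       /* copy the n-th row of the in-memory data into the host variables */
       strcpy(one_md5, rows[n].md5);
       one_size = rows[n].size;
       EXEC SQL INSERT INTO filetable (md5, size)
                VALUES (:one_md5, :one_size);
   }
   EXEC SQL COMMIT;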
   And yes, I really want to use C + embedded SQL code to do the inserting
   (the real app/table structure is of course more complex).

   -jt-




Re: ecpg: how select/insert n rows (array) in one query

From: Michael Meskes
On Sat, Jun 10, 2000 at 05:49:42PM +0300, Jorma O. Tähtinen wrote:
> 1) select - this is simple (well documented)
> ...

>     EXEC SQL BEGIN DECLARE SECTION;
>     struct data_t {
>         char md5[100][32];
>         int  size[100];
>     } data;
> ...

>     Only question here is: why can't I declare that data struct like
>     struct data_t {
>         char md5[32];
>         int size;
>     } data[100];
>     Is this possible, and how do I use it?

Hmm, it certainly should be possible. I will look into it as soon as I find
some spare time. I do not remember if there was a real problem with this.
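
Off the top of my head, and completely untested, I would expect the
array-of-structs variant to look something like this if ecpg accepts it:

    EXEC SQL BEGIN DECLARE SECTION;
    struct data_t {
        char md5[32];
        int  size;
    } data[100];
    EXEC SQL END DECLARE SECTION;

    /* ask for at most as many rows as the array of structs can hold */
    EXEC SQL FETCH 100 FROM cur INTO :data;
    /* sqlca.sqlerrd[2] should still tell you how many rows came back */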

>     If I have understood the docs correctly, an insert used like the
> select in the above example is NOT possible. Please somebody say that
> I'm wrong - a small example would be nice to see ;-)

Selects work because the backend returns more than one tuple at a time.
Inserts, however, do not work that way: an SQL INSERT command is designed to
insert one tuple. I wouldn't know which syntax to use for that sort of
bulk loading.
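
Schematically, with the declarations from your mail (and a pair of
single-row host variables as placeholders):

    /* one FETCH can fill a whole set of host arrays in one round trip ... */
    EXEC SQL FETCH 100 FROM cur INTO :data:ind;

    /* ... but one INSERT always takes exactly one row of values */
    EXEC SQL INSERT INTO filetable (md5, size) VALUES (:one_md5, :one_size);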

>     And yes, I have tried to insert multiple rows in one transaction
> block, like
>
>     EXEC SQL BEGIN TRANSACTION;
>         for (....) {
>             insert .....
>         }
>     EXEC SQL COMMIT;
>
>     but this is still way too slow, at most 50-100 inserts per second. I
> understand that inserting is a slow operation

Try starting the backend without -F and putting each insert into its own
transaction. Keeping lots of updates/inserts in one transaction always slows
down a system. The very same would happen to mysql if it used transactions.
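
That is, as a test, something like this (untested sketch; one_md5 and
one_size are single-row host variables you fill in each time around):

    for (n = 0; n < nrows; ++n) {
        /* ... load one_md5 and one_size for row n here ... */

        /* one small transaction per row instead of one huge one */
        EXEC SQL BEGIN TRANSACTION;
        EXEC SQL INSERT INTO filetable (md5, size)
                 VALUES (:one_md5, :one_size);
        EXEC SQL COMMIT;
    }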

>     And yes, I really want to use c+embedded sql code to do inserting
> (real app/table struct of course more complex)

If this is an ecpg problem, we surely will fix it. But I doubt the
performance would be better when using psql to enter the data.

Michael
-- 
Michael Meskes
Michael@Fam-Meskes.De
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!