On 3/9/24 08:00, kuldeep singh wrote:
> Copy may not work in our scenario since we need to join data from
> multiple tables & then convert it to json using row_to_json. This
> json data eventually needs to be stored in a target table.
"A SELECT, VALUES, INSERT, UPDATE, or DELETE command whose results are to be copied. Note that parentheses are required around the query.

For INSERT, UPDATE, and DELETE queries a RETURNING clause must be provided, and the target relation must not have a conditional rule, nor an ALSO rule, nor an INSTEAD rule that expands to multiple statements."
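So COPY with a query can still cover the join/row_to_json case: export the result of the join with \copy (query), then load the resulting file into the jsonb column of the target table with a second \copy. A minimal sketch, assuming hypothetical tables orders and customers and a hypothetical target table order_docs(doc jsonb); adjust the names and the join to your schema:

    -- export one JSON document per line from the joined query (client-side \copy)
    \copy (SELECT row_to_json(t) FROM (SELECT o.id, o.total, c.name FROM orders o JOIN customers c ON c.id = o.customer_id) t) TO 'order_docs.json'

    -- load the file into the jsonb column of the target table
    \copy order_docs (doc) FROM 'order_docs.json'

If everything has to stay inside the database, the INSERT ... SELECT you are already running is the in-database equivalent; the export/import route mainly buys you a cheaper write path and a checkpoint between producing the JSON and loading it.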
>
> Will it be better if we break the process into batches of, say, 10,000
> rows & insert the data in individual transactions? Or is any other
> better solution available?
>
> On Sat, Mar 9, 2024 at 9:01 PM hector vass <hector.vass@gmail.com
> <mailto:hector.vass@gmail.com>> wrote:
>
>     On Sat, Mar 9, 2024 at 3:02 PM kuldeep singh
>     <kuldeeparora89@gmail.com <mailto:kuldeeparora89@gmail.com>> wrote:
>
>         Hi,
>
>         We are inserting close to 1M records with a single jsonb
>         column, but the query is getting stuck.
>
>         We are using insert into ... select * ..., so all the operations
>         are within the DB.
>
>         If we run the select query on its own it returns the data for
>         all rows in 40 sec, but with the insert it gets stuck.
>
>         PG Version - 15.
>
>         What could be the problem here?
>
>         Regards,
>         KD
>
>
>     Inserting 1M rows, especially JSON that can be large, variable in
>     size, stored as blobs and indexed, is perhaps not the correct way
>     to do this. Insert performance will also depend on your tuning.
>     Supporting transactions, users or bulk processing are three sides
>     of a compromise. You should perhaps consider that insert is for
>     inserting a few rows into live tables ... you might be better off
>     using copy or \copy, or pg_dump if you are just trying to replicate
>     a large table.
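One way to read the batching idea is to drive the same INSERT ... SELECT in keyset-paginated chunks and commit after each one, so every transaction stays small. A minimal sketch, reusing the hypothetical names from above and assuming the source join exposes a unique integer key o.id; the :last_id psql variable stands for wherever the previous batch stopped:

    -- run repeatedly, committing between runs (e.g. from a psql or shell loop)
    INSERT INTO order_docs (doc)
    SELECT row_to_json(t)
    FROM (
        SELECT o.id, o.total, c.name
        FROM   orders o
        JOIN   customers c ON c.id = o.customer_id
        WHERE  o.id > :last_id       -- boundary carried over from the previous batch
        ORDER  BY o.id
        LIMIT  10000
    ) t;

Each batch commits on its own, so progress is visible and a failure only rolls back the current chunk; the trade-off is that the overall load is no longer atomic.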