""" INSERT statements with 16 clients Another common scenario that I tested was accessing the database from multiple clients - 16 in this case. What I found out, as can be seen below, is that compression performance of single large files (HTML, English text, source code, executable binary, pictures) using LZ4 was 60% to 70% faster compared to PGLZ, and that there was also a small improvement while inserting multiple small files (PostgreSQL document).
"""
Kind regards, Imre
On Fri, Mar 4, 2022 at 19:42, aditya desai <admad123@gmail.com> wrote:
Hi Bruce,
Correct rows are wider. One of the columns is text and one is bytea.
Regards,
Aditya.
On Sat, Mar 5, 2022 at 12:08 AM Bruce Momjian <bruce@momjian.us> wrote:
On Sat, Mar 5, 2022 at 12:01:52AM +0530, aditya desai wrote: > Hi, > One of the service layer app is inserting Millions of records in a table but > one row at a time. Although COPY is the fastest way to import a file in a > table. Application has a requirement of processing a row and inserting it into > a table. Is there any way this INSERT can be tuned by increasing parameters? It > is taking almost 10 hours for just 2.2 million rows in a table. Table does not > have any indexes or triggers.