Thread: My Query to insert and retrieve takes time

My Query to insert and retrieve takes time

From
Shivakumar Ramannavar
Date:
Hi Community,

I need some help. I am running a PostgreSQL 9.1 database with a table of around 300,000 records, and my problem is that it takes 8.314 milliseconds to update one record. Below is the EXPLAIN plan (terminal_id has a UNIQUE index, and term_index is the primary key):

 EXPLAIN ANALYZE UPDATE terminal
    SET unit_address = E'\\x00000015a3'
    WHERE terminal_id = '320000';


                                                          QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Update  (cost=0.00..8.38 rows=1 width=64) (actual time=0.074..0.074 rows=0 loops=1)
   ->  Index Scan using terminal_ind2 on terminal  (cost=0.00..8.38 rows=1 width=64) (actual time=0.047..0.048 rows=1 loops=1)
         Index Cond: (terminal_id = 320000)
 Total runtime: 0.182 ms

There are around 300,000 update operations in total, and they are taking approximately 80 minutes. Please let me know how to improve the performance.

Thanks!

--
Shivakumar Ramannavar

Re: My Query to insert and retrieve takes time

From
"Kevin Grittner"
Date:
Shivakumar Ramannavar <shivasr@gmail.com> wrote:

> my problem is that it takes 8.314 milliseconds to update one
> record

> EXPLAIN ANALYZE UPDATE terminal

> Update  ...  actual time=0.074..0.074

If you look at the EXPLAIN ANALYZE of the UPDATE, you'll see that
the UPDATE itself actually took only a small fraction of one ms.

My guess is that your write-ahead log (WAL) is on a 7200 or 7500 RPM
drive, and that you're committing each update separately.  Further,
I would bet that you're not going through a RAID controller with
battery-backup (BBU) cache configured for write-back.  And you're
also using a single connection to do all the updates.

Each commit must wait until the WAL is persisted, which can mean
waiting for a disk drive to spin all the way around again -- which
for a 7200 RPM drive takes 8.3 ms and for a 7500 RPM drive takes 8
ms.  Without better hardware, you face a hard limit on how many
database transactions can commit on a single connection based on
that rotational delay.
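
To spell out the arithmetic: a single connection can commit at most
once per rotation, so

    60,000 ms/min / 7,200 rotations/min ≈ 8.33 ms per commit  (7200 RPM)
    60,000 ms/min / 7,500 rotations/min  = 8.00 ms per commit  (7500 RPM)

which lines up almost exactly with the 8.314 ms per update reported
above.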

> There are around 300,000 update operations and it is taking approx
> 80 min,

One way to solve the problem would be to BEGIN a transaction, do
your 300,000 updates, and then COMMIT the transaction.  Another
would be to use a good RAID controller.  A third would be to turn
off synchronous_commit.  (Be sure to read the description of that
in the documentation to understand the implications.)  Or you could
use a number of connections working in parallel.
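
As a sketch of the first suggestion, reusing the terminal table from
your example (the second id and value here are made-up placeholders):

    BEGIN;
    UPDATE terminal SET unit_address = E'\\x00000015a3'
        WHERE terminal_id = '320000';
    UPDATE terminal SET unit_address = E'\\x00000015a4'
        WHERE terminal_id = '320001';
    -- ... and so on for the remaining rows ...
    COMMIT;  -- the WAL is flushed once here, not once per row

Batched this way, you pay the rotational delay once per transaction
instead of 300,000 times.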

-Kevin

Fwd: My Query to insert and retrieve takes time

From
Shivakumar Ramannavar
Date:
Hi Community,

We are analyzing performance behaviour with a PostgreSQL database. For an INSERT operation covering 300,000 records (on an SLES 10 Linux 64-bit server), PostgreSQL is taking around 150 minutes.

Here are some observations drawn from PostgreSQL performance testing:

There are around 310,000 records; it takes 8.313 milliseconds to retrieve/search a record, and a further 8.321 milliseconds to update it. In total, a search-and-update during a synchronisation operation takes approximately 17.9289 ms (these timings are recorded from the PostgreSQL Embedded SQL modules).

This is very slow compared to Sybase for the same INSERT operation (with 300,000 students). The roughly 300,000 update operations take approximately 150 minutes, as against 30-40 minutes on Sybase running the same operation on the same hardware.

However, when checked from the psql client, the queries are much faster. Below are the EXPLAIN plans of the SELECT and the UPDATE (student_id has a UNIQUE index, and id is the primary key):

 EXPLAIN ANALYZE SELECT id FROM students WHERE student_id = '9101';
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Index Scan using student_ind2 on students  (cost=0.00..8.38 rows=1 width=4) (actual time=0.096..0.097 rows=1 loops=1)
   Index Cond: (student_id = 9101)
 Total runtime: 0.111 ms

(3 rows)


 EXPLAIN ANALYZE UPDATE students
    SET marks = 10
    WHERE student_id = '9101';

                                                          QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Update  (cost=0.00..8.38 rows=1 width=64) (actual time=0.039..0.039 rows=0 loops=1)
   ->  Index Scan using student_ind2 on students  (cost=0.00..8.38 rows=1 width=64) (actual time=0.016..0.017 rows=1 loops=1)
         Index Cond: (student_id = 9101)
 Total runtime: 0.080 ms

As the above plans show, the queries are using indexes as expected.

Can anyone help us improve the performance?
Shiva

---------- Forwarded message ----------
From: Shivakumar Ramannavar <shivasr@gmail.com>
Date: Mon, Oct 31, 2011 at 4:51 PM
Subject: My Query to insert and retrieve takes time
To: pgsql-admin@postgresql.org


--
Shivakumar Ramannavar

Re: Fwd: My Query to insert and retrieve takes time

From
Tom Lane
Date:
Shivakumar Ramannavar <shivasr@gmail.com> writes:
> Here is some observation drawn from PostgreSQL performance testing:

> There are around 310,000 records; it takes 8.313 milliseconds to
> retrieve/search a record, and a further 8.321 milliseconds to update
> it.

It was already pointed out to you that these numbers are suspiciously
close together, and also suspiciously close to the rotation time of a
7200 RPM disk.  It seems very likely that you have configured things so
that each operation is done in a transaction that has to be explicitly
committed to disk, thus requiring a write to WAL, which would limit the
transaction rate to one per rotation.  However, since you haven't shown
us anything about how the queries are being issued, we can't confirm
or disprove that theory.

Consider batching multiple operations into a single transaction.
Turning off synchronous_commit is another possibility, depending on
what your crash-safety requirements are.
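
As a sketch of the synchronous_commit route (weigh this against your
crash-safety requirements: a crash can lose the most recently
committed transactions, though it cannot corrupt the database):

    SET synchronous_commit = off;  -- affects this session only
    BEGIN;
    -- ... issue the 300,000 updates ...
    COMMIT;  -- returns without waiting for the WAL flush

Either approach lifts the one-commit-per-disk-rotation ceiling.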

            regards, tom lane