Re: PostgreSQL insert speed tests - Mailing list pgsql-general

From Sezai YILMAZ
Subject Re: PostgreSQL insert speed tests
Date
Msg-id 403F5488.5050705@pro-g.com.tr
In response to PostgreSQL insert speed tests  (Sezai YILMAZ <sezai.yilmaz@pro-g.com.tr>)
Responses Re: PostgreSQL insert speed tests
List pgsql-general
Sezai YILMAZ wrote:

> create index agentid_ndx on logs using hash (agentid);
> create index ownerid_ndx on logs using hash (ownerid);
> create index hostid_ndx on logs using hash (hostid);
> ------------------------------------------------------------
>                           speed for         speed for
> # of EXISTING RECORDS    PostgreSQL 7.3.4  PostgreSQL 7.4.1
> =========================================================================
>
>      0 initial records   1086 rows/s       1324 rows/s
> 200.000 initial records    781 rows/s        893 rows/s
> 400.000 initial records    576 rows/s        213 rows/s
> 600.000 initial records    419 rows/s        200 rows/s
> 800.000 initial records    408 rows/s       not tested because of bad results

I changed the three hash indexes to btree.

Performance roughly doubled (PostgreSQL 7.3.4 now reaches about 1905
rows/s).

Concurrent inserts now work.
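For reference, the change amounts to replacing the hash indexes from the earlier post with btree ones (a minimal sketch; dropping and recreating under the same names is one reasonable way to do it):

```sql
-- drop the three hash indexes from the original test
DROP INDEX agentid_ndx;
DROP INDEX ownerid_ndx;
DROP INDEX hostid_ndx;

-- recreate them as btree, which is the default index type,
-- so "USING btree" can be omitted
CREATE INDEX agentid_ndx ON logs (agentid);
CREATE INDEX ownerid_ndx ON logs (ownerid);
CREATE INDEX hostid_ndx ON logs (hostid);
```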

In principle these columns are a natural fit for hash indexes: the
queries match exact values rather than ranges, and equality lookups are
exactly what hash indexes are designed for. But hash indexes can
deadlock under multiple concurrent inserts, so I think I can live with
btree indexes. They work better. :-)
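To be clear about what "exact values" means here, the lookups these indexes serve are plain equality matches, for example (hypothetical value):

```sql
-- equality lookup: the only access pattern a hash index can serve;
-- a btree handles this too, plus range and ORDER BY queries
SELECT * FROM logs WHERE agentid = 42;
```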

-sezai
