Thread: gin fast insert performance

gin fast insert performance

From: Jeff Davis
Here is a test of the fast insert patch. The patch has gone through some
changes, so this set of tests is to see the current performance impact
compared with HEAD.

The test is simple: inserting a bunch of integer arrays into a table
with a GIN index on the array column.

I'm testing with small work_mem and large work_mem because the smaller
work_mem should be periodically flushing the pending list throughout a
large insert, while large work_mem should allow larger batches to build
up before flushing the pending list. For HEAD, larger work_mem should
have no effect.
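
Schematically, the test looks something like this (the table and index names
here are just placeholders, not my exact script):

CREATE TABLE gin_test (a int[]);
CREATE INDEX gin_test_idx ON gin_test USING gin (a);

SET work_mem = '1MB';            -- or '1GB' for the large-work_mem runs

-- e.g. the 10000-tuple case, each tuple holding a 1000-element array:
INSERT INTO gin_test
    SELECT ARRAY(SELECT generate_series(1, 1000))
    FROM generate_series(1, 10000);

VACUUM ANALYZE gin_test;         -- timed separately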

HEAD (work_mem = 1MB):

  10000 tuples, 1000 array elements each
    insert:         run1: 127859.012 ms; run2: 124092.098 ms
    vacuum analyze: run1:   1681.521 ms; run2:   1688.483 ms

  1000000 tuples, 10 array elements each
    insert:         run1: 122069.072 ms; run2: 116476.058 ms
    vacuum analyze: run1:   3349.210 ms; run2:   2826.328 ms

With Fast Insert Patch (work_mem = 1MB):

  10000 tuples, 1000 array elements each
    insert:         run1: 111376.670 ms; run2: 110733.455 ms
    vacuum analyze: run1:   2078.427 ms; run2:   1781.152 ms

  1000000 tuples, 10 array elements each
    insert:         run1:  65743.075 ms; run2:  65252.698 ms
    vacuum analyze: run1:   3458.500 ms; run2:   3719.916 ms

With Fast Insert Patch (work_mem = 1GB):

  10000 tuples, 1000 array elements each
    insert:         run1:  69419.986 ms; run2:  68985.635 ms
    vacuum analyze: run1:  57521.067 ms; run2:  43385.682 ms

  1000000 tuples, 10 array elements each
    insert:         run1:  48343.827 ms; run2:  49192.153 ms
    vacuum analyze: run1:  21134.267 ms; run2:  20474.775 ms


With the fast insert patch, the total time for insert + vacuum isn't
much different with increased work_mem, but increased work_mem clearly
defers a lot of the work to VACUUM.

For comparison, building the index afterward:

  10000 tuples, 1000 array elements each
    insert time:        7531.645 ms
    index build time:  24393.737 ms

  1000000 tuples, 10 array elements each
    insert time:       11564.604 ms
    index build time:  12753.891 ms

So, unfortunately, the fast insert patch does not appear to bring the
overall time anywhere close to building the index from scratch. When the
work_mem is set to 1GB, the VACUUM took about twice as long to run as
the entire index build. Teodor, can you explain why that might be?

It does show improvement, and I think my test case might just be too
small. It seems like a lot of time is spent inserting into the pending
list, even though that should be a simple operation. Maybe that can
be optimized?

Regards,
    Jeff Davis





Re: gin fast insert performance

From: Teodor Sigaev
> Here is a test of the fast insert patch. The patch has gone through some
> changes, so this set of tests is to see the current performance impact
> compared with HEAD.
> 
> The test is simple: inserting a bunch of integer arrays into a table
> with a GIN index on the array column.
> 
> I'm testing with small work_mem and large work_mem because the smaller
> work_mem should be periodically flushing the pending list throughout a
> large insert, while large work_mem should allow larger batches to build
> up before flushing the pending list. For HEAD, larger work_mem should
> have no effect.

You didn't provide the distribution of array elements, the number of unique
elements, and so on. So I made a simple test script which generates data
rather close to a typical tsearch installation (see tst.sql).

And results on my notebook (in seconds):

     rows  | fastupdate |   insert   | vacuum
 ----------+------------+------------+--------
   100000  | off        |    316.147 |  0.770
   100000  | on         |     65.461 | 12.795
  1000000  | off        |  >16 hours |    -
  1000000  | on         |   6612.595 | 12.795

I stopped the test with fastupdate=off and one million rows - it ran too long :).
Changes in postgresql.conf:
shared_buffers=128MB
temp_buffers=16MB
work_mem=16MB
maintenance_work_mem=256MB
effective_cache_size=1024MB
autovacuum=off

The fastest way is to create the table, fill it, then create the index and
vacuum it (for 100000 records):
17 secs to insert
27 secs to create the index
 1 sec  to vacuum
So, in summary, it takes 45 secs, versus 78 secs with fast update and 317
seconds without fast update. I think it's a win in performance.
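
Concretely, with the same ta table and gena() generator as in tst.sql, that
ordering is roughly:

DROP TABLE IF EXISTS ta;
CREATE TABLE ta (a int[]);

INSERT INTO ta SELECT gena() FROM generate_series(1,100000);  -- ~17 secs
CREATE INDEX taidx ON ta USING gin (a);                       -- ~27 secs
VACUUM ANALYZE ta;                                            -- ~1 sec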

> With the fast insert patch, the total time for insert + vacuum isn't
> much different with increased work_mem, but increased work_mem clearly
> defers a lot of the work to VACUUM.

"but increased work_mem clearly *may* defer a lot of the work to VACUUM." 
Because in real world it's impossible to predict when clearing of pending list 
will be started. And autovacuum usually will fire the clearing earlier than 
pending list reaches the limit.
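
For example (ta/taidx as in tst.sql; I assume the reloption can also be
changed with ALTER INDEX, like other storage parameters):

-- A plain VACUUM is one of the things that flushes the pending list into
-- the main GIN structure:
VACUUM ta;

-- fastupdate is a per-index storage parameter, so the deferred work can be
-- turned off for a given index (this affects new insertions only):
ALTER INDEX taidx SET (fastupdate = off);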

> So, unfortunately, the fast insert patch does not appear to bring the
> overall time anywhere close to building the index from scratch. When the
> work_mem is set to 1GB, the VACUUM took about twice as long to run as
> the entire index build. Teodor, can you explain why that might be?
Yeah, index creation is much more optimizable than sequential insertions.
With fast update enabled there is the overhead of writing the pending pages
(and WAL too), and those pages have to be read back to collect their data
into memory. Next, the clearing process uses work_mem instead of
maintenance_work_mem, which is usually larger. Also, the bulk insertion
algorithm (used by both index creation and cleanup) prefers TIDs at the end
of the table: if the lowest TID to insert is greater than the latest TID
already in the tree, the algorithm can insert more than one TID at once.
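
In other words, the two code paths are capped by different settings (the
values below are just the ones from my postgresql.conf):

-- Index creation is bounded by maintenance_work_mem:
SET maintenance_work_mem = '256MB';
CREATE INDEX taidx ON ta USING gin (a);

-- Pending list cleanup (by VACUUM, or by an insertion that hits the limit)
-- is bounded by the work_mem of the backend doing the cleanup:
SET work_mem = '16MB';
VACUUM ta;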


> It does show improvement, and I think my test case might just be too
The bigger the dataset, the bigger the improvement :).

> small. It seems like a lot of time is spent inserting into the pending
> list, even though that should be a simple operation. Maybe that can
> be optimized?

As you can see in ginfast.c, insertion into the pending list can leave pages
that are not fully filled; in the worst case the pending list will contain
about 50% unused space (if every indexed value takes GIN_PAGE_FREESIZE+1
bytes, then each value takes two pages). This is the price of keeping
concurrency high :(. If you have an idea for how to do compact, fast and
concurrent insertion into the pending list (or another structure) while
keeping search times reasonable, please don't be quiet :)


-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: gin fast insert performance

From: Teodor Sigaev
Sorry, I forgot to attach the test script.

BTW, in your opinion, is btree_gin ready to commit?

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/
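
-- gena(): build an int4 array of random length (roughly 3..103 elements)
-- filled with random integers in the range 0..100000.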
CREATE OR REPLACE FUNCTION gena()
RETURNS _int4 AS
$$
SELECT array(
    SELECT (100000*random())::int
    FROM generate_series( 0, 2 + (100*random())::int )
);
$$
LANGUAGE SQL VOLATILE;


\echo ============ FU = off =============
DROP TABLE IF EXISTS ta;
CREATE TABLE ta (
    a    int[]
);

CREATE INDEX taidx ON ta USING gin (a) with (fastupdate=off);
INSERT INTO ta (SELECT gena() FROM generate_series(1,100000));
VACUUM ANALYZE ta;


\echo ============ FU = on =============

DROP TABLE IF EXISTS ta;
CREATE TABLE ta (
    a    int[]
);

CREATE INDEX taidx ON ta USING gin (a) with (fastupdate=on);
INSERT INTO ta (SELECT gena() FROM generate_series(1,100000));
VACUUM ANALYZE ta;

Re: gin fast insert performance

From: Jeff Davis
On Tue, 2009-01-27 at 20:36 +0300, Teodor Sigaev wrote:
> You didn't provide the distribution of array elements, the number of unique
> elements, and so on. So I made a simple test script which generates data
> rather close to a typical tsearch installation (see tst.sql).

The arrays I was inserting were actually all identical. In the case of a
1000-element array inserted 10000 times, it was just ARRAY[1, 2, ...,
1000].

My test case must have been much too simple, but I expected that it would
still benefit from fast insert.

> "but increased work_mem clearly *may* defer a lot of the work to VACUUM." 
> Because in real world it's impossible to predict when clearing of pending list 
> will be started. And autovacuum usually will fire the clearing earlier than 
> pending list reaches the limit.

Yes, that is the expected result and part of the design. It was just an
observation, not a criticism.

I will try with a better test case.

Regards,
    Jeff Davis