Re: [WIP] Effective storage of duplicates in B-tree index. - Mailing list pgsql-hackers

From Aleksander Alekseev
Subject Re: [WIP] Effective storage of duplicates in B-tree index.
Date
Msg-id 20160129184733.2ca9026a@fujitsu
Whole thread Raw
In response to Re: [WIP] Effective storage of duplicates in B-tree index.  (Anastasia Lubennikova <a.lubennikova@postgrespro.ru>)
Responses Re: [WIP] Effective storage of duplicates in B-tree index.  (Thom Brown <thom@linux.com>)
List pgsql-hackers
I tested this patch on x64 and ARM servers for a few hours today. The
only problem I found is that INSERT works considerably slower after
applying the patch. Besides that, everything looks fine: no crashes,
tests pass, memory doesn't seem to leak, etc.
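For reference, this is roughly the kind of measurement I ran (a sketch only; the table shape, row count, and duplicate ratio are arbitrary choices, not part of the patch's test suite). The idea is to time a bulk INSERT into a table with a non-unique index containing many duplicates, once on master and once on a patched build:

```sql
-- Hypothetical benchmark: many duplicates in the indexed column,
-- which is the case this patch targets.
CREATE TABLE t (id serial, val int);
CREATE INDEX t_val_idx ON t (val);

\timing on
-- 1M rows, only 100 distinct indexed values => heavy duplication.
INSERT INTO t (val)
SELECT i % 100 FROM generate_series(1, 1000000) AS i;
```

Comparing the \timing output of the INSERT between the two builds is what showed the slowdown for me.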

> Okay, now for some badness.  I've restored a database containing 2
> tables, one 318MB, another 24kB.  The 318MB table contains 5 million
> rows with a sequential id column.  I get a problem if I try to delete
> many rows from it:
> # delete from contacts where id % 3 != 0 ;
> WARNING:  out of shared memory
> WARNING:  out of shared memory
> WARNING:  out of shared memory

I didn't manage to reproduce this. Thom, could you please describe the
exact steps to reproduce the issue?
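If I understand the quoted report correctly, something along these lines should trigger it (a guess at the setup: the table layout is inferred from the DELETE statement, since the original 318MB dump isn't available to me):

```sql
-- Synthetic stand-in for the reported 5M-row table with a
-- sequential id column; column names beyond "id" are invented.
CREATE TABLE contacts (id serial PRIMARY KEY, name text);

INSERT INTO contacts (name)
SELECT 'contact ' || i FROM generate_series(1, 5000000) AS i;

-- The statement from the report: deletes roughly two thirds of the rows.
DELETE FROM contacts WHERE id % 3 != 0;
```

On my machines this completed without any "out of shared memory" warnings, so there must be something different about the original setup.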


