Any feedback, bug reports, and suggestions are welcome.
The vertical representation of the data is stored in PostgreSQL shared memory, which is why it is important to be able to utilize all available physical memory. Servers with a terabyte or more of RAM are no longer exotic, especially in the financial world. But Linux with standard 4KB pages imposes a limit on the maximal size of a mapped memory segment: 256GB. It is possible to overcome this limitation either by creating multiple segments, which would require too many changes in the PostgreSQL memory manager, or simply by setting the MAP_HUGETLB flag (assuming that huge pages have been allocated in the system).
Excellent work! I have begun testing and it's very fast. By the way, I found a strange case of an "endless" query with CPU at 100% when the value used as a filter does not exist:
I am testing with PostgreSQL 9.3.1 on Debian, using the default values for the extension except memory (512MB).
How to recreate the test case:
## create a table :
create table endless (col1 int, col2 char(30), col3 int);