Thread: Cube Index Size

Cube Index Size

From
Nick Raj
Date:
Hi,

The cube code is provided by the postgres contrib folder. It uses the NDBOX structure.
On creating an index, its size increases at a high rate.

On inserting some tuples and creating indexes, its behaviour is shown below.

1. When there is only one tuple

select pg_size_pretty(pg_relation_size('cubtest'));   //Table size without index
 pg_size_pretty
----------------
 8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest'));   //Table size with index
 pg_size_pretty
----------------
 16 kB
(1 row)

i.e. Index size is nearly 8 kB

2. When tuples are 20,000

Table Size without index - 1.6 MB
Table Size with index - 11 MB
i.e. Index size is nearly 9.4 MB

3. When tuples are 5 lakh

Table Size without index - 40 MB
Table Size with index - 2117 MB
i.e. Index size is nearly 2077 MB ~ 2 GB
It is taking nearly 20-25 min to create the index for 5 lakh tuples.

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?

Thanks,
Nick

Re: Cube Index Size

From
Heikki Linnakangas
Date:
On 30.05.2011 21:51, Nick Raj wrote:
> Hi,
>
> The cube code is provided by the postgres contrib folder. It uses the NDBOX structure.
> On creating an index, its size increases at a high rate.
>
> On inserting some tuples and creating indexes, its behaviour is shown below.
>
> 1. When there is only one tuple
> select pg_size_pretty(pg_relation_size('cubtest'));   //Table size without index
>   pg_size_pretty
> ----------------
>   8192 bytes
> (1 row)
>
> select pg_size_pretty(pg_total_relation_size('cubtest')); //Table size with index
>   pg_size_pretty
> ----------------
>   16 kB
> (1 row)
>
> i.e. Index size is nearly 8 kB
>
> 2. When tuples are 20,000
>
> Table Size without index - 1.6 MB
> Table Size with index - 11 MB
> i.e. Index size is nearly 9.4 MB
>
> 3. When tuples are 5 lakh
>
> Table Size without index - 40 MB
> Table Size with index - 2117 MB
> i.e. Index size is nearly 2077 MB ~ 2GB
> It is taking nearly 20-25 min to create the index for 5 lakh tuples.
>
> Can someone tell me why the index is becoming so large?
> How can I compress or reduce its size?

Which version of PostgreSQL are you using? I wonder if this could be due 
to the bug in cube's picksplit algorithm that was fixed a while ago:

http://archives.postgresql.org/message-id/AANLkTimC8W6guHpWJeWdjQA6WGoVH-7qG9Ar4pem2N2V@mail.gmail.com

If not, please post a self-contained test case to create and populate 
the table, so that others can easily try to reproduce it.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com


Re: Cube Index Size

From
Nicolas Barbier
Date:
2011/5/30, Nick Raj <nickrajjain@gmail.com>:

> 3. When tuples are 5 lakh

For the benefit of the others: "5 lakh" seems to mean 500,000.

<URL:http://en.wikipedia.org/wiki/Lakh>

Nicolas

-- 
A. Because it breaks the logical sequence of discussion.
Q. Why is top posting bad?


Re: Cube Index Size

From
Heikki Linnakangas
Date:
On 01.06.2011 10:48, Nick Raj wrote:
> On Tue, May 31, 2011 at 12:46 PM, Heikki Linnakangas<
> heikki.linnakangas@enterprisedb.com>  wrote:
>> If not, please post a self-contained test case to create and populate the
>> table, so that others can easily try to reproduce it.
>>
>
>    I have attached a .sql file containing 20,000 tuples
>    Table creation - create table cubtest(c cube);
>    Index creation - create index t on cubtest using gist(c);

Ok, I can reproduce the issue with that. The index is only 4 MB in size
when I populate it with random data (vs. 15 MB with your data). The 
command I used is:

INSERT INTO cubtest SELECT cube(random(), random()) FROM 
generate_series(1,20000);

My guess is that the picksplit algorithm performs poorly with that data. 
Unfortunately, I have no idea how to improve that.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com


Re: Cube Index Size

From
Alexander Korotkov
Date:
On Wed, Jun 1, 2011 at 3:37 PM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
> My guess is that the picksplit algorithm performs poorly with that data. Unfortunately, I have no idea how to improve that.

The current cube picksplit function has no storage utilization guarantee, while Guttman's original picksplit does (once one group reaches some size threshold, all remaining entries go to the other group). Also, the current picksplit is a mix of Guttman's linear and quadratic algorithms: it picks seeds quadratically, but distributes entries linearly.
I see the following ways of solving the picksplit problem for cube:
1) Add a storage utilization guarantee to the current picksplit. It may increase overlaps, but should decrease index size (see the sketch after this list).
2) Add a storage utilization guarantee to the current picksplit and replace the entry distribution algorithm with the quadratic one. Picksplit will take more time, but it should give a more stable and predictable result.
3) I have experimented with my own picksplit algorithm, which showed pretty good results on the tests I've run. But the current implementation is dirty and requires more testing.
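
A minimal standalone sketch of what option 1 could look like, using invented 1-D intervals in place of cube's NDBOX (the seed choice is deliberately trivial, and nothing here is taken from the actual contrib/cube code):

/*
 * Sketch: a Guttman-style minimum-fill guarantee added to a linear
 * distribution phase.  All names are invented for illustration.
 */
#include <stdio.h>

typedef struct { double lo, hi; } Interval;

static Interval extend(Interval g, Interval e)
{
    if (e.lo < g.lo) g.lo = e.lo;
    if (e.hi > g.hi) g.hi = e.hi;
    return g;
}

/* How much group g would grow if entry e were added to it. */
static double enlargement(Interval g, Interval e)
{
    Interval u = extend(g, e);
    return (u.hi - u.lo) - (g.hi - g.lo);
}

/*
 * Distribute entries[0..n-1] into two groups (membership[i] = 0 or 1),
 * guaranteeing that each group receives at least minfill entries.
 */
static void split_with_guarantee(Interval *entries, int n, int minfill,
                                 int *membership)
{
    Interval left = entries[0], right = entries[n - 1];  /* trivial seeds */
    int nleft = 1, nright = 1;

    membership[0] = 0;
    membership[n - 1] = 1;

    for (int i = 1; i < n - 1; i++)
    {
        int remaining = (n - 1) - i;    /* entries i .. n-2 unassigned */

        /* The guarantee: if a group can only reach minfill by taking
         * every remaining entry, it takes them unconditionally. */
        if (nleft + remaining <= minfill)
        {
            membership[i] = 0; left = extend(left, entries[i]); nleft++;
        }
        else if (nright + remaining <= minfill)
        {
            membership[i] = 1; right = extend(right, entries[i]); nright++;
        }
        /* Otherwise the usual linear rule: smaller enlargement wins. */
        else if (enlargement(left, entries[i]) <= enlargement(right, entries[i]))
        {
            membership[i] = 0; left = extend(left, entries[i]); nleft++;
        }
        else
        {
            membership[i] = 1; right = extend(right, entries[i]); nright++;
        }
    }
}

int main(void)
{
    Interval ents[] = {{0, 1}, {0.1, 0.9}, {0.2, 0.8},
                       {5, 6}, {5.1, 5.9}, {5.2, 6.1}};
    int memb[6];

    split_with_guarantee(ents, 6, 2, memb);
    for (int i = 0; i < 6; i++)
        printf("entry %d -> group %d\n", i, memb[i]);
    return 0;
}

The two guarantee branches are the whole point: they bound how lopsided a split can get, at the cost of possibly larger overlap between the resulting keys.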

 ------
With best regards,
Alexander Korotkov.

Re: Cube Index Size

From
Teodor Sigaev
Date:
> Ok, I can reproduce the issue with that. The index is only 4MB in size
> when I populate it with random data (vs. 15 MB with your data). The
> command I used is:
>
> INSERT INTO cubtest SELECT cube(random(), random()) FROM
> generate_series(1,20000);
>
> My guess is that the picksplit algorithm performs poorly with that data.
> Unfortunately, I have no idea how to improve that.

One idea is to add sorting of the datums to be split by insertion cost. This
is implemented in the intarray/tsearch GiST indexes.

I'm not sure that it will help, but our research on Guttman's picksplit
algorithm showed significant improvements.
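
Roughly, the sorting step could look like this (a standalone sketch over invented 1-D intervals, not the actual intarray/tsearch code):

/* Sort entries by |penalty(left) - penalty(right)| before distribution,
 * so the most "decided" entries are assigned while both groups still
 * have room.  Types and names are invented for illustration. */
#include <math.h>
#include <stdlib.h>

typedef struct { double lo, hi; } Interval;
typedef struct { Interval ent; double costdiff; } SortItem;

/* How much group g would grow if entry e were added to it. */
static double enlargement(Interval g, Interval e)
{
    double lo = g.lo < e.lo ? g.lo : e.lo;
    double hi = g.hi > e.hi ? g.hi : e.hi;
    return (hi - lo) - (g.hi - g.lo);
}

static int cmp_costdiff(const void *a, const void *b)
{
    double da = ((const SortItem *) a)->costdiff;
    double db = ((const SortItem *) b)->costdiff;
    return (da < db) - (da > db);       /* descending */
}

static void sort_by_insertion_cost(SortItem *items, int n,
                                   Interval seed_l, Interval seed_r)
{
    for (int i = 0; i < n; i++)
        items[i].costdiff = fabs(enlargement(seed_l, items[i].ent) -
                                 enlargement(seed_r, items[i].ent));
    qsort(items, n, sizeof(SortItem), cmp_costdiff);
}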
-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: Cube Index Size

From
Alexander Korotkov
Date:
2011/6/1 Teodor Sigaev <teodor@sigaev.ru>
> One idea is to add sorting of the datums to be split by insertion cost. This is implemented in the intarray/tsearch GiST indexes.

Yes, it's a good compromise between the linear and quadratic entry distribution algorithms. In the quadratic algorithm, the entry with the maximal difference in insertion cost is inserted at each step. The quadratic algorithm runs more slowly than the sorting one, but in my tests it showed slightly better results.
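
For comparison, the quadratic step described above could be sketched like this (again standalone, invented types; not PostgreSQL code):

#include <math.h>

typedef struct { double lo, hi; } Interval;

static Interval extend(Interval g, Interval e)
{
    if (e.lo < g.lo) g.lo = e.lo;
    if (e.hi > g.hi) g.hi = e.hi;
    return g;
}

/* How much group g would grow if entry e were added to it. */
static double enlargement(Interval g, Interval e)
{
    Interval u = extend(g, e);
    return (u.hi - u.lo) - (g.hi - g.lo);
}

/* One O(n^2) distribution pass: at every iteration, scan all unassigned
 * entries, pick the one whose insertion-cost difference between the two
 * groups is largest, and put it in its cheaper group. */
static void distribute_quadratic(Interval *entries, int *assigned, int n,
                                 Interval groups[2], int membership[])
{
    for (;;)
    {
        int best = -1, bestgrp = 0;
        double bestdiff = -1.0;

        for (int i = 0; i < n; i++)
        {
            double c0, c1, diff;

            if (assigned[i])
                continue;
            c0 = enlargement(groups[0], entries[i]);
            c1 = enlargement(groups[1], entries[i]);
            diff = fabs(c0 - c1);
            if (diff > bestdiff)
            {
                bestdiff = diff;
                best = i;
                bestgrp = (c0 <= c1) ? 0 : 1;
            }
        }
        if (best < 0)
            break;              /* everything is assigned */
        groups[bestgrp] = extend(groups[bestgrp], entries[best]);
        membership[best] = bestgrp;
        assigned[best] = 1;
    }
}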
 
 ------
With best regards,
Alexander Korotkov. 

Re: Cube Index Size

From
Nick Raj
Date:


2011/6/1 Alexander Korotkov <aekorotkov@gmail.com>
> 2011/6/1 Teodor Sigaev <teodor@sigaev.ru>
>> One idea is to add sorting of the datums to be split by insertion cost. This is implemented in the intarray/tsearch GiST indexes.
>
> Yes, it's a good compromise between the linear and quadratic entry distribution algorithms. In the quadratic algorithm, the entry with the maximal difference in insertion cost is inserted at each step. The quadratic algorithm runs more slowly than the sorting one, but in my tests it showed slightly better results.
 

Can we figure out some information about the index, i.e. what is the height of the index tree, and how many values are placed in one leaf node and one non-leaf node?

Regards,
Nick

Re: Cube Index Size

From
Teodor Sigaev
Date:
> Can we figure out some information about the index, i.e. what is the height
> of the index tree, and how many values are placed in one leaf node and one
> non-leaf node?

http://www.sigaev.ru/cvsweb/cvsweb.cgi/gevel/
-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: Cube Index Size

From
Nick Raj
Date:


2011/6/2 Teodor Sigaev <teodor@sigaev.ru>
>> Can we figure out some information about the index, i.e. what is the height
>> of the index tree, and how many values are placed in one leaf node and one
>> non-leaf node?
>
> http://www.sigaev.ru/cvsweb/cvsweb.cgi/gevel/

To improve space utilization, when a node is split we have to assign the entries to two groups. Once one group reaches some threshold (m), the remaining entries are inserted into the other group.

Can you suggest some way to choose 'm' (because cube is stored as a variable-length NDBOX structure), or provide some guidance with code?
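
For example, would something like the following byte-based budget make sense? (Purely a hypothetical sketch on my part; sizes[] stands for the on-page size of each entry, e.g. the VARSIZE of each NDBOX datum, and nothing here comes from the actual contrib code.)

#include <stddef.h>

/* Hypothetical: express the minimum-fill threshold in bytes instead of
 * an entry count, since the entries vary in size.  Once one group has
 * consumed the returned budget, remaining entries go to the other group. */
static size_t group_byte_limit(const size_t *sizes, int n, double maxfrac)
{
    size_t total = 0;
    for (int i = 0; i < n; i++)
        total += sizes[i];
    /* With maxfrac = 0.75, the other group is guaranteed at least 25%
     * of the bytes being split. */
    return (size_t) (total * maxfrac);
}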

Thanks 
