On 2017-03-28 07:15 PM, Claudio Freire wrote:
> On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander <rstander@exa.co.za> wrote:
>> CREATE TABLE [dbo].[usrUserRights] (
>> [UserId] [dbo].[dm_Id] NOT NULL,
>> [SiteId] [dbo].[dm_Id] NOT NULL,
>> [RightId] [dbo].[dm_Id] NOT NULL,
>> CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY
>> CLUSTERED([UserId],[RightId],[SiteId])
>> );
>>
>> Takes 23GB for data and 200MB for indexes.
>>
>> Postgres table with over 700 million records:
>>
>> CREATE TABLE security.user_right_site2
>> (
>> user_id bigint NOT NULL,
>> right_id bigint NOT NULL,
>> site_id bigint NOT NULL
>> );
>> create index on security.user_right_site2(user_id, right_id);
>>
>> Takes 35GB data and 26GB index, for a total of 61GB.
>>
>> That is quite a large increase over SQL Server storage. Am I missing
>> something? It makes me worry about the rest of the database we still
>> have to convert.
> Indexes are quite fat in postgres, especially if you index all
> columns. To make the difference even bigger, it seems like there is
> very hardcore compression going on in SQL Server, for that index to be
> only 200MB. Are you sure you measured it correctly?
>
> In any case, yes, indexes will be fatter in postgres. Their
> performance shouldn't suffer considerably, though, given enough RAM.
>
>
That 200MB is for another index on that table. Because the table is
clustered on those three columns, SQL Server treats the clustered index
as the table storage itself, so the 23GB data figure already includes
the primary key.