Re: query a table with lots of columns - Mailing list pgsql-performance

From Pavel Stehule
Subject Re: query a table with lots of columns
Msg-id CAFj8pRDufkFjWkdHUWOfzHVJ71E0iVHuRdLuiL0K3j9fgHjpSA@mail.gmail.com
In response to query a table with lots of columns  (Björn Wittich <Bjoern_Wittich@gmx.de>)
List pgsql-performance


2014-09-19 13:51 GMT+02:00 Björn Wittich <Bjoern_Wittich@gmx.de>:
Hi mailing list,

I am relatively new to Postgres. I have a table with 500 columns and about 40 million rows. I call this the cache table: one column is a unique, indexed key, and the other 499 columns (type integer) hold values belonging to that key.

Now I have a second (temporary) table with only 2 columns, one of which is the key of my cache table, and I want to do an inner join between the temporary table and the large cache table and export all matching rows. I found that performance improves when I split the join into lots of small parts.
But it seems that the database needs a lot of disk I/O to gather all 499 data columns.
Is there a possibility to tell the database that these columns are always treated as a tuple and I always want the whole row? Perhaps the on-disk organization could then be optimized?
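For reference, the setup described above might look roughly like this; the table and column names are invented for illustration and do not come from the original post:

```sql
-- Hypothetical sketch of the schema and join described above.
CREATE TABLE cache_table (
    key  bigint PRIMARY KEY,  -- unique, indexed key
    col1 integer,
    col2 integer
    -- ... up to col499
);

CREATE TEMP TABLE lookup (
    key   bigint,
    extra integer
);

-- Export all cache rows whose key appears in the temporary table.
SELECT c.*
FROM lookup l
JOIN cache_table c ON c.key = l.key;
```

Because `SELECT c.*` touches all 499 value columns, every matching heap row must be read in full, which is consistent with the heavy disk I/O observed.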

Sorry for the off-topic suggestion, but an array database may be better suited to your purpose:

http://rasdaman.com/
http://www.scidb.org/
 


Thank you for feedback and ideas
Best
Neo


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
