Kirk Wythers <kirk.wythers@gmail.com> wrote:
> I have a fairly large table (235 million rows) with two columns that
> I need to "de-normalize". There has got to be a better (i.e. faster)
> approach than what I am doing. I am using a MAX CASE on each of the
> 24 variables (column names "variable" and "value") that I want to
> unstack. Any suggestions would be most appreciated.
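[For illustration only: a minimal sketch of the MAX(CASE ...) "unstack" being described, using SQLite through Python as a stand-in for PostgreSQL. The table and column names (data, id, variable, value) and the sample rows are assumptions, not taken from the actual schema.]

```python
import sqlite3

# Build a tiny "long" (normalized) table: one row per (id, variable).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE data (id INTEGER, variable TEXT, value REAL)")
cur.executemany(
    "INSERT INTO data VALUES (?, ?, ?)",
    [
        (1, "temp", 20.5),
        (1, "rh", 55.0),
        (2, "temp", 21.0),
        (2, "rh", 60.0),
    ],
)

# Pivot to "wide" form: one row per id, one column per variable,
# using MAX over a CASE expression for each variable.
rows = cur.execute(
    """
    SELECT id,
           MAX(CASE WHEN variable = 'temp' THEN value END) AS temp,
           MAX(CASE WHEN variable = 'rh'   THEN value END) AS rh
    FROM data
    GROUP BY id
    ORDER BY id
    """
).fetchall()
print(rows)  # -> [(1, 20.5, 55.0), (2, 21.0, 60.0)]
```

This kind of small before/after script is roughly the shape of self-contained test case being requested below.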
I didn't understand your description of what you are trying to do,
and the example has so many columns and cases that it would take a
long time to work through. Can you distill this down to just a few
columns and cases so that it is easier to see what you are trying
to accomplish? Even better would be a self-contained test case with
just a few rows, so people can see the "before" and "after" data.
What you have already posted helps give context on how the solution
needs to scale, which is important too; but if you make the issue
easier to understand, the odds improve that someone will volunteer
the time needed to make a suggestion.
-Kevin
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company