From: Merlin Moncure
> FWIW, Distributed analytical queries is the right market to be in.
> This is the field in which I work, and this is where the action is at.
> I am very, very, sure about this. My view is that many of the
> existing solutions to this problem (in particular hadoop class
> solutions) have major architectural downsides that make them
> inappropriate in use cases that postgres really shines at; direct
> hookups to low latency applications for example. postgres is
> fundamentally a more capable 'node' with its multiple man-millennia of
> engineering behind it. Unlimited vertical scaling (RAC etc) is
> interesting too, but this is not the way the market is moving as
> hardware advancements have reduced or eliminated the need for that in
> many spheres.
I'm feeling the same. As Moore's Law ceases to hold, software
needs to make the most of the processor power. Hadoop and Spark are
written in Java and Scala. According to Google [1] (see Fig. 8), Java
is slower than C++ by 3.7x - 12.6x, and Scala is slower than C++ by
2.5x - 3.6x.
Won't PostgreSQL be able to cover the workloads of Hadoop and Spark
someday, once PostgreSQL supports scale-out, an in-memory database,
multi-model capability, and an in-database filesystem? That may be a
pipe dream, but why do people have to tolerate the separation of the
relational data warehouse and the Hadoop-based data lake?
[1] Robert Hundt. "Loop Recognition in C++/Java/Go/Scala".
Proceedings of Scala Days 2011.
Regards
MauMau