Re: What limits Postgres performance when the whole database lives in cache?

On 2016-09-02 11:10:35 -0600, Scott Marlowe wrote:
> On Fri, Sep 2, 2016 at 4:49 AM, dandl <david@andl.org> wrote:
> > Re this talk given by Michael Stonebraker:
> >
> > http://slideshot.epfl.ch/play/suri_stonebraker
> >
> >
> >
> > He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS
> > SQL Server, Postgres, given enough memory that the entire database lives in
> > cache, the server will spend 96% of its memory cycles on unproductive
> > overhead. This includes buffer management, locking, latching (thread/CPU
> > conflicts) and recovery (including log file reads and writes).

I think those numbers are overblown, and more PR than reality.

But there certainly are some things that can be made more efficient if
you don't care about durability and replication.
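
For example, a rough sketch of the relevant knobs (illustration only, not a
recommendation; the table name is made up):

    -- per session/transaction: don't wait for the WAL flush at commit
    SET synchronous_commit = off;

    -- tables whose contents you can afford to lose after a crash
    CREATE UNLOGGED TABLE hot_cache (id int PRIMARY KEY, payload text);

and in postgresql.conf, if you genuinely don't care about surviving a crash:

    fsync = off               # unsafe: a crash can corrupt the whole cluster
    full_page_writes = off

None of that removes buffer management or locking overhead, it only cuts down
the durability/WAL side of the equation.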


> > I wondered if there are any figures or measurements on Postgres performance
> > in this ‘enough memory’ environment to support or contest this point of
> > view?

I don't think that's really answerable without individual use-cases in
mind.  The answer for analytics, operational, ... workloads is going to
look different, and the overheads sit in different places.
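
If you want numbers for your own case, one quick way to get a ballpark is to
size a pgbench database so it fits comfortably in RAM/shared_buffers and
compare a read-only run against the default read-write run, e.g. (illustrative
invocation, adjust scale, clients and database name to your setup):

    pgbench -i -s 100 bench              # initialize, roughly 1.5GB of data
    pgbench -S -c 16 -j 16 -T 60 bench   # select-only, fully cached
    pgbench -c 16 -j 16 -T 60 bench      # TPC-B-like, exercises WAL and locking

But that still only tells you about that particular workload shape.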

I personally think that each implementation's restrictions are more
likely to be an issue than anything "fundamental".


> What limits postgresql when everything fits in memory? The fact that
> it's designed to survive a power outage and not lose all your data.
>
> Stonebraker's new stuff is cool, but it is NOT designed to survive
> total power failure.
>
> Two totally different design concepts. It's apples and oranges to compare them.

I don't think they're that fundamentally different.


Greetings,

Andres Freund
