Re: Millions of tables - Mailing list pgsql-performance

From Stephen Frost
Subject Re: Millions of tables
Date 2016-09-28
Msg-id 20160928180035.GX5148@tamriel.snowman.net
In response to Re: Millions of tables  (Greg Spiegelberg <gspiegelberg@gmail.com>)
List pgsql-performance
Greg,

* Greg Spiegelberg (gspiegelberg@gmail.com) wrote:
> On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost <sfrost@snowman.net> wrote:
> > * Greg Spiegelberg (gspiegelberg@gmail.com) wrote:
> > > Bigger buckets mean a wider possibility of response times.  Some buckets
> > > may contain 140k records and some 100X more.
> >
> > Have you analyzed the depth of the btree indexes to see how many more
> > pages need to be read to handle finding a row in 140k records vs. 14M
> > records vs. 140M records?
> >
> > I suspect you'd find that the change in actual depth (meaning how many
> > pages have to actually be read to find the row you're looking for) isn't
> > very much and that your concern over the "wider possibility of response
> > times" isn't well founded.
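
[Editorial note: the depth argument above can be sketched with a back-of-envelope calculation. The fanout value of 300 keys per page is a hypothetical round number; the real fanout depends on key width and page fill, but for narrow keys on 8 kB pages it is typically in the hundreds, so a 1000x increase in row count adds only one or two levels.]

```python
import math

def btree_depth(rows, fanout=300):
    """Rough estimate of B-tree levels needed to index `rows` entries,
    assuming a hypothetical fanout of `fanout` keys per page."""
    return max(1, math.ceil(math.log(rows, fanout)))

# The three bucket sizes discussed in the thread:
for rows in (140_000, 14_000_000, 140_000_000):
    print(rows, btree_depth(rows))
# 140k and 14M rows both fit in ~3 levels; 140M needs ~4,
# i.e. one extra page read for 1000x the data.
```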

> Excellent feedback!   Um, how does one look at tree depth in PostgreSQL?
> Oracle I know but have not done the same in PG.  Pointers?

CREATE EXTENSION pageinspect;

SELECT * FROM bt_metap('indexname');

https://www.postgresql.org/docs/9.5/static/pageinspect.html
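
[Editorial note: in the bt_metap output, the `level` column is the tree depth, i.e. the number of page reads from root to leaf before the heap fetch. A minimal session sketch, assuming a hypothetical index named `t_pkey`:]

```sql
-- pageinspect ships with PostgreSQL; requires superuser in 9.5.
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- "level" is the tree depth; "root" is the block number of the root page.
SELECT level, root
FROM bt_metap('t_pkey');  -- hypothetical index name
```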

Thanks!

Stephen

