Thread: BRIN index on timestamptz

BRIN index on timestamptz

From
Mohan Radhakrishnan
Date:
Hi,
         I am planning to use a BRIN index, as I search based on timestamptz fields. There are millions of records. I refer to https://www.percona.com/blog/2019/07/16/brin-index-for-postgresql-dont-forget-the-benefits

I execute this on the AWS RDS instance. Is there something in the plans I should pay attention to? I notice the difference in Execution Time.

Thanks,
Mohan

INSERT INTO testtab (id, date, level, msg) SELECT g, CURRENT_TIMESTAMP + ( g || 'minute' ) :: interval, random() * 6, md5(g::text) FROM generate_series(1,8000000) as g;
create index testtab_date_idx  on testtab(date);

"Gather  (cost=1000.00..133475.57 rows=1 width=49) (actual time=848.040..862.638 rows=0 loops=1)"
"  Workers Planned: 2"
"  Workers Launched: 2"
"  ->  Parallel Seq Scan on testtab  (cost=0.00..132475.47 rows=1 width=49) (actual time=832.108..832.109 rows=0 loops=3)"
"        Filter: ((date >= '2019-08-08 14:40:47.974791'::timestamp without time zone) AND (date <= '2019-08-08 14:50:47.974791'::timestamp without time zone))"
"        Rows Removed by Filter: 2666667"
"Planning Time: 0.238 ms"
"Execution Time: 862.662 ms"

explain analyze select * from public.testtab where date between '2019-08-08 14:40:47.974791' and '2019-08-08 14:50:47.974791';

"Gather  (cost=1000.00..133475.57 rows=1 width=49) (actual time=666.283..681.586 rows=0 loops=1)"
"  Workers Planned: 2"
"  Workers Launched: 2"
"  ->  Parallel Seq Scan on testtab  (cost=0.00..132475.47 rows=1 width=49) (actual time=650.661..650.661 rows=0 loops=3)"
"        Filter: ((date >= '2019-08-08 14:40:47.974791'::timestamp without time zone) AND (date <= '2019-08-08 14:50:47.974791'::timestamp without time zone))"
"        Rows Removed by Filter: 2666667"
"Planning Time: 0.069 ms"
"Execution Time: 681.617 ms"

create index testtab_date_brin_idx  on rm_owner.testtab using brin (date);

explain analyze select * from public.testtab where date between '2019-08-08 14:40:47.974791' and '2019-08-08 14:50:47.974791';

"Bitmap Heap Scan on testtab  (cost=20.03..33406.84 rows=1 width=49) (actual time=0.143..0.143 rows=0 loops=1)"
"  Recheck Cond: ((date >= '2019-08-08 14:40:47.974791'::timestamp without time zone) AND (date <= '2019-08-08 14:50:47.974791'::timestamp without time zone))"
"  ->  Bitmap Index Scan on "testtab_date_brin_idx "  (cost=0.00..20.03 rows=12403 width=0) (actual time=0.141..0.141 rows=0 loops=1)"
"        Index Cond: ((date >= '2019-08-08 14:40:47.974791'::timestamp without time zone) AND (date <= '2019-08-08 14:50:47.974791'::timestamp without time zone))"
"Planning Time: 0.126 ms"
"Execution Time: 0.161 ms"

Re: BRIN index on timestamptz

From
Tomas Vondra
Date:

On 4/23/21 10:31 AM, Mohan Radhakrishnan wrote:
> Hi,
>           I am planning to use a BRIN index, as I search based on timestamptz fields.
> There are millions of records. I refer to
> https://www.percona.com/blog/2019/07/16/brin-index-for-postgresql-dont-forget-the-benefits
> 
> I execute this on the AWS RDS instance. Is there something in the plan I 
> should pay attention to ? I notice the Execution Time.
> 
> Thanks,
> Mohan
> 
> INSERT INTO testtab (id, date, level, msg) SELECT g, CURRENT_TIMESTAMP + 
> ( g || 'minute' ) :: interval, random() * 6, md5(g::text) FROM 
> generate_series(1,8000000) as g;

Two things:

a) You need to do ANALYZE, otherwise there are no statistics the 
optimizer could use (which is why the row estimates in the plans are 
entirely bogus).

b) BRIN indexes don't work on random data, because the whole idea is
about eliminating large blocks of data (e.g. 1MB). But with random data
that's not going to happen, because each such range will match anything.
Which is why a seqscan is a bit faster than when using the BRIN index.
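As a concrete sketch of point a), reusing the table and predicate from the original post, refreshing the statistics and re-running the comparison would look roughly like this:

ANALYZE testtab;  -- collect column statistics so the planner's row estimates stop being bogus
EXPLAIN ANALYZE
SELECT * FROM testtab
WHERE date BETWEEN '2019-08-08 14:40:47.974791' AND '2019-08-08 14:50:47.974791';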


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: BRIN index on timestamptz

From
Mohan Radhakrishnan
Date:
> a) You need to do ANALYZE, otherwise there are no statistics the optimizer could use

I do execute ANALYZE. The actual timestamps I have are not random; I will order them chronologically.

Thanks


Re: BRIN index on timestamptz

From
Michael Lewis
Date:


On Fri, Apr 23, 2021, 2:32 AM Mohan Radhakrishnan <radhakrishnan.mohan@gmail.com> wrote:
> I execute this on the AWS RDS instance. Is there something in the plan I should pay attention to? I notice the Execution Time.
>
> "  ->  Bitmap Index Scan on "testtab_date_brin_idx "  (cost=0.00..20.03 rows=12403 width=0) (actual time=0.141..0.141 rows=0 loops=1)"
> "        Index Cond: ((date >= '2019-08-08 14:40:47.974791'::timestamp without time zone) AND (date <= '2019-08-08 14:50:47.974791'::timestamp without time zone))"
> "Planning Time: 0.126 ms"
> "Execution Time: 0.161 ms"

What's your question exactly? If you have confidence that correlation will remain high (an insert-only table, or an occasional CLUSTER/repack is done), then BRIN can be a good fit. If you do updates and deletes, and new tuples (from inserts and updates) come in and fill the gaps left behind in early pages even though their timestamps are recent, then correlation will go down and BRIN will no longer be a good fit.
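One way to see where that correlation currently stands is the planner's statistics view; a minimal sketch, assuming ANALYZE has run and reusing the table and column names from the example:

SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'testtab'
  AND attname = 'date';
-- values close to +1 or -1 mean the physical row order follows the column order, which is
-- what BRIN relies on; values near 0 mean every block range spans most of the value domain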

Note: timestamp *with* time zone is recommended.

Re: BRIN index on timestamptz

From
Michael Lewis
Date:


On Sat, Apr 24, 2021, 1:27 AM Mohan Radhakrishnan <radhakrishnan.mohan@gmail.com> wrote:
> The timestamptz isn't deleted or updated. It is only inserted. Another ENUM column will be updated.
> It looks like I should use brin. We also have other history tables like this.
>
> Thanks.

That's not a correct conclusion. Reply all btw.

Updating any value in the row means a new version of the row is inserted and the old one is marked expired; it will be cleaned up by vacuum once no transaction can still need that row version (tuple). Research a bit about how MVCC is implemented in Postgres.
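A quick way to watch that churn in practice (a sketch; pg_stat_user_tables is the standard cumulative statistics view, and testtab is just the table from the example):

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'testtab';
-- n_dead_tup counts expired row versions still waiting for vacuum; if it keeps
-- climbing, autovacuum is not keeping up with the update rate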

If those updates would only be on recent rows and autovacuum is tuned aggressively to keep the maintenance task under control, then the correlation may remain high as only recent rows are being updated. If the updates go back a long ways, then BRIN is unlikely to be a good fit. Maybe it still could be if table fillfactor is lowered a bit and the enum is not indexed so you get mostly HOT (heap only tuple) updates, but I wouldn't count on it.
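If the fillfactor route were tried, the setting itself is a one-liner; the value below is only an illustrative guess, and it only affects newly written pages (existing pages get repacked only by a rewrite such as VACUUM FULL or CLUSTER):

ALTER TABLE testtab SET (fillfactor = 90);  -- keep ~10% of each heap page free so updates can stay HOT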

Why not use a btree index for the timestamptz column?

Re: BRIN index on timestamptz

From
Mohan Radhakrishnan
Date:
> Why not use a btree index for the timestamptz column?
There are some capabilities our team lacks. Because of that, autovacuum tuning isn't considered at all right now; it may be in the future.

I know about basic MVCC, though. BRIN was an option because the characteristics you describe match our requirements.

1. Only recent rows are updated. One or two ENUMs, with 4-5 states per ENUM, for each timestamptz.
2. ENUMs are not indexed. Will that help too? That is probably an unrelated question.

Btree may be the default option.

Thanks.



Re: BRIN index on timestamptz

From
Mohan Radhakrishnan
Date:
Isn't a btree subject to these effects too? So when I update the ENUMs for each timestamptz, are btree indexes less susceptible to these effects than BRIN indexes?

Thanks.


Re: BRIN index on timestamptz

From
"Peter J. Holzer"
Date:
On 2021-04-26 17:23:49 +0530, Mohan Radhakrishnan wrote:
> Isn't a btree subject to these effects ? So when I update ENUMS for each
> timestamptz, btree indexes are less susceptible
> to the effects than BRIN indexes  ?

A btree index contains one entry for each record, which points to that
record. If you select a small range of values via a btree index, in the
worst case you will have one random seek per row. This is not ideal, but
doesn't matter much if the number of records is low.

A BRIN index contains a minimum and maximum value per range of blocks.
In the worst case (each block range contains a totally random sample of
values) the minimum for each block range will be near the minimum of the
whole table and the maximum of each block range will be near the maximum
for the whole table. So when searching, the BRIN index will exclude very
few block ranges.

So a BRIN index will work best when each block range contains only a
small range of indexed values.
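The size of those block ranges is configurable when the index is created; as a sketch, reusing the names from the earlier posts (pages_per_range = 32 is just an example value, the default being 128 heap pages per range):

DROP INDEX IF EXISTS testtab_date_brin_idx;  -- replace the index built earlier in the thread
CREATE INDEX testtab_date_brin_idx ON testtab USING brin (date) WITH (pages_per_range = 32);
-- smaller ranges give the index a better chance to exclude blocks,
-- at the cost of a somewhat larger index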

If you index on a timestamp this will work nicely if you either don't
update rows at all after inserting them, or only update them for a short
time relative to the total time spanned by the table. So if your table
contains, say, records from the last year, and records are normally only
updated for one or two days after being created, that would probably
still work quite well. If there is a substantial number of records which
are still updated after a year, it probably won't work at all.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
