Thread: less than 2 sec for response - possible?

less than 2 sec for response - possible?

From: trafdev
Hi.

I'm trying to build an OLAP-oriented DB based on PostgreSQL.

Users work with a paginated report in the web browser. The interface
allows fetching data for a custom date-range selection, displaying
individual rows (20-50 per page) and totals (for the entire selection,
including rows not visible on the current page), and sorting by any column.

The main goal is to deliver results of the basic SELECT queries to the
end-user in less than 2 seconds.

I was able to achieve that except for one table (the biggest one).

It consists of multiple dimensions (date, gran, aid, pid, sid, fid,
subid) and metrics (see below).
Users can filter by any dimension and sort by any metric.

Here is a CREATE script for this table:

CREATE TABLE stats.feed_sub
(
   date date NOT NULL,
   gran interval NOT NULL,
   aid smallint NOT NULL,
   pid smallint NOT NULL,
   sid smallint NOT NULL,
   fid smallint NOT NULL,
   subid text NOT NULL,
   rev_est_pub real NOT NULL,
   rev_est_feed real NOT NULL,
   rev_raw real NOT NULL,
   c_total bigint NOT NULL,
   c_passed bigint NOT NULL,
   q_total bigint NOT NULL,
   q_passed bigint NOT NULL,
   q_filt_geo bigint NOT NULL,
   q_filt_browser bigint NOT NULL,
   q_filt_os bigint NOT NULL,
   q_filt_ip bigint NOT NULL,
   q_filt_subid bigint NOT NULL,
   q_filt_pause bigint NOT NULL,
   q_filt_click_cap_ip bigint NOT NULL,
   q_filt_query_cap bigint NOT NULL,
   q_filt_click_cap bigint NOT NULL,
   q_filt_rev_cap bigint NOT NULL,
   q_filt_erpm_floor bigint NOT NULL,
   c_filt_click_cap_ip bigint NOT NULL,
   c_filt_doubleclick bigint NOT NULL,
   c_filt_url_expired bigint NOT NULL,
   c_filt_fast_click bigint NOT NULL,
   c_filt_delay_clicks bigint NOT NULL,
   c_filt_ip_mismatch bigint NOT NULL,
   c_filt_ref_mismatch bigint NOT NULL,
   c_filt_lng_mismatch bigint NOT NULL,
   c_filt_ua_mismatch bigint NOT NULL,
   res_impr bigint NOT NULL,
   rev_ver_pub real,
   rev_ver_feed real,
   c_ver bigint,
   q_filt_ref bigint NOT NULL
)
WITH (
   OIDS=FALSE
);

CREATE INDEX ix_feed_sub_date
   ON stats.feed_sub
   USING brin
   (date);

CREATE UNIQUE INDEX ixu_feed_sub
   ON stats.feed_sub
   USING btree
   (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog."default");

Here is some sizing info (https://wiki.postgresql.org/wiki/Disk_Usage):

relation           | size
-------------------+--------
stats.feed_sub     | 5644 MB
stats.ixu_feed_sub | 1594 MB

row_estimate: 15865627

Here is the typical query (for the totals page):
SELECT
   sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,
   sum(stats.feed_sub.c_filt_doubleclick)  AS clicks_on_target,
   sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,
   sum(stats.feed_sub.c_filt_fast_click)   AS fast_click,
   sum(stats.feed_sub.c_filt_ip_mismatch)  AS ip_mismatch,
   sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,
   sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,
   sum(stats.feed_sub.c_filt_ua_mismatch)  AS ua_mismatch,
   sum(stats.feed_sub.c_filt_url_expired)  AS url_expired,
   stats.feed_sub.subid                    AS stats_feed_sub_subid,
   stats.feed_sub.sid                      AS stats_feed_sub_sid
FROM stats.feed_sub
WHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND
       stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND
       stats.feed_sub.gran = '1 day'
       AND stats.feed_sub.aid = 3
GROUP BY
   stats.feed_sub.subid, stats.feed_sub.sid;

QUERY PLAN
HashAggregate  (cost=901171.72..912354.97 rows=344100 width=86) (actual time=7207.825..7335.473 rows=126044 loops=1)
  Group Key: subid, sid
  Buffers: shared hit=3635804
  ->  Index Scan using ixu_feed_sub on feed_sub  (cost=0.56..806544.38 rows=3440994 width=86) (actual time=0.020..3650.208 rows=3578344 loops=1)
        Index Cond: ((date >= '2016-06-01 00:00:00'::timestamp without time zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time zone) AND (gran = '1 day'::interval) AND (aid = 3))
        Buffers: shared hit=3635804
Planning time: 0.150 ms
Execution time: 7352.009 ms

As I can see, it takes 3.6 seconds just for the index scan (on data that
sits entirely in RAM).
Add ~3 seconds for grouping and 1-2 seconds for network transfer, and I'm
completely out of my "sub 2 seconds" goal.

My questions are:
1. Am I using the right DB/architecture to achieve my goal? Is there any
better solution?
2. Have I reached some physical limit? Will installing faster RAM/CPU help?

Thanks in advance!

Server config:

OS:
 > uname -a
FreeBSD sqldb 10.2-RELEASE-p9 FreeBSD 10.2-RELEASE-p9 #0: Thu Jan 14
01:32:46 UTC 2016
root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64


CPU: Intel(R) Xeon(R) CPU E5-1630 v3
 > sysctl -a | egrep -i 'hw.machine|hw.model|hw.ncpu'
hw.machine: amd64
hw.model: Intel(R) Xeon(R) CPU E5-1630 v3 @ 3.70GHz
hw.ncpu: 8
hw.machine_arch: amd64


MEM: 64GB
 > sysctl hw.physmem
hw.physmem: 68572983296


HDD: 2x480GB SSD (ZFS mirror)
 > camcontrol devlist
<INTEL SSDSC2BB480H4 D2010380>     at scbus5 target 0 lun 0 (ada0,pass1)
<INTEL SSDSC2BB480H4 D2010380>     at scbus6 target 0 lun 0 (ada1,pass2)


FS:
 > zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
zroot                  36.5G   390G    96K  /zroot
...
zroot/ara/sqldb/pgsql  33.7G   390G  33.7G  /ara/sqldb/pgsql

 > zfs get primarycache,recordsize,logbias,compression zroot/ara/sqldb/pgsql
NAME                   PROPERTY      VALUE         SOURCE
zroot/ara/sqldb/pgsql  primarycache  all           local
zroot/ara/sqldb/pgsql  recordsize    8K            local
zroot/ara/sqldb/pgsql  logbias       latency       local
zroot/ara/sqldb/pgsql  compression   lz4           inherited from zroot


Misc:
 > cat /etc/sysctl.conf
vfs.zfs.metaslab.lba_weighting_enabled=0


Postgres:
 > /usr/local/bin/postgres --version
postgres (PostgreSQL) 9.5.3

 > cat postgresql.conf:
...
listen_addresses = '*'

max_connections = 100
shared_buffers = 16GB
effective_cache_size = 48GB
work_mem = 500MB
maintenance_work_mem = 2GB
min_wal_size = 4GB
max_wal_size = 8GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500
random_page_cost = 1

log_lock_waits = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_destination = 'csvlog'
logging_collector = on
log_min_duration_statement = 10000

shared_preload_libraries = 'pg_stat_statements'
track_activity_query_size = 10000
track_io_timing = on


Re: less than 2 sec for response - possible?

From: Tom Lane
trafdev <trafdev@mail.ru> writes:
> CREATE INDEX ix_feed_sub_date
>    ON stats.feed_sub
>    USING brin
>    (date);

> CREATE UNIQUE INDEX ixu_feed_sub
>    ON stats.feed_sub
>    USING btree
>    (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog."default");

> HashAggregate  (cost=901171.72..912354.97 rows=344100 width=86) (actual time=7207.825..7335.473 rows=126044 loops=1)
>   Group Key: subid, sid
>   Buffers: shared hit=3635804
>   ->  Index Scan using ixu_feed_sub on feed_sub  (cost=0.56..806544.38 rows=3440994 width=86) (actual time=0.020..3650.208 rows=3578344 loops=1)
>         Index Cond: ((date >= '2016-06-01 00:00:00'::timestamp without time zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time zone) AND (gran = '1 day'::interval) AND (aid = 3))
>         Buffers: shared hit=3635804
> Planning time: 0.150 ms
> Execution time: 7352.009 ms

Neither of those indexes is terribly well designed for this query.
A btree index on (aid, gran, date) or (gran, aid, date) would work
much better.  See

https://www.postgresql.org/docs/9.5/static/indexes-multicolumn.html

You could rearrange the column order in that giant unique index
and get some of the benefit.  But if you're desperate to optimize
this particular query, an index not bearing so many irrelevant columns
would probably be better for it.
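
For illustration, that could look like this (the index name is mine, not
from the thread):

CREATE INDEX ix_feed_sub_aid_gran_date
    ON stats.feed_sub (aid, gran, date);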

An alternative way of thinking would be to create an index with those
three leading columns and then all of the other columns used by this
query as later columns.  That would be an even larger index, but it would
allow an index-only scan, which might be quite a lot faster.  The fact
that you seem to be hitting about one page for each row retrieved says
that the data you need is pretty badly scattered, so constructing an index
that concentrates everything you need into one range of the index might
be the ticket.
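
A sketch of that covering variant (the name and trailing-column order are
illustrative; it simply lists every column this query touches, so on a
well-vacuumed table the planner can answer it with an index-only scan):

CREATE INDEX ix_feed_sub_covering
    ON stats.feed_sub (aid, gran, date, sid, subid,
        c_filt_click_cap_ip, c_filt_doubleclick, c_filt_delay_clicks,
        c_filt_fast_click, c_filt_ip_mismatch, c_filt_lng_mismatch,
        c_filt_ref_mismatch, c_filt_ua_mismatch, c_filt_url_expired);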

Either of these additional-index ideas is going to penalize table
insertions/updates, so keep an eye on that end of the performance
question too.

            regards, tom lane


Re: less than 2 sec for response - possible?

From: trafdev
Thanks Tom.

I've created index on aid, date:

create index aaa on stats.feed_sub(aid,date);

and simplified the query (dropped gran, as it's equal for all rows anyway):

SELECT
   sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,
   sum(stats.feed_sub.c_filt_doubleclick)  AS clicks_on_target,
   sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,
   sum(stats.feed_sub.c_filt_fast_click)   AS fast_click,
   sum(stats.feed_sub.c_filt_ip_mismatch)  AS ip_mismatch,
   sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,
   sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,
   sum(stats.feed_sub.c_filt_ua_mismatch)  AS ua_mismatch,
   sum(stats.feed_sub.c_filt_url_expired)  AS url_expired,
   stats.feed_sub.subid                    AS stats_feed_sub_subid,
   stats.feed_sub.sid                      AS stats_feed_sub_sid
FROM stats.feed_sub
WHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND
       stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND
       stats.feed_sub.aid = 3
GROUP BY
   stats.feed_sub.subid, stats.feed_sub.sid;

All data is in the cache and it still takes almost 5 seconds to complete:

QUERY PLAN
HashAggregate  (cost=792450.42..803727.24 rows=346979 width=86) (actual time=4742.145..4882.468 rows=126533 loops=1)
  Group Key: subid, sid
  Buffers: shared hit=1350371
  ->  Index Scan using aaa on feed_sub  (cost=0.43..697031.39 rows=3469783 width=86) (actual time=0.026..1655.394 rows=3588376 loops=1)
        Index Cond: ((aid = 3) AND (date >= '2016-06-01 00:00:00'::timestamp without time zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time zone))
        Buffers: shared hit=1350371
Planning time: 0.159 ms
Execution time: 4899.934 ms

It's better, but still far from the "<2 secs" goal.

Any thoughts?


On 07/01/16 18:23, Tom Lane wrote:
> [...]


Re: less than 2 sec for response - possible?

From: Pujol Mathieu
Hello

Have you solved your problem?

Could it be conversion overhead from 'timestamp without time zone' to
'date'? In that case, I don't know whether the planner stores constants
as date or timestamp.

Mathieu Pujol

On 02/07/2016 at 04:48, trafdev wrote:
> [...]



Re: less than 2 sec for response - possible?

From: Torsten Zuehlsdorff

On 02.07.2016 02:54, trafdev wrote:
> [...]
>
> Here is the typical query (for the totals page):
> SELECT
>   sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,
>   sum(stats.feed_sub.c_filt_doubleclick)  AS clicks_on_target,
>   sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,
>   sum(stats.feed_sub.c_filt_fast_click)   AS fast_click,
>   sum(stats.feed_sub.c_filt_ip_mismatch)  AS ip_mismatch,
>   sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,
>   sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,
>   sum(stats.feed_sub.c_filt_ua_mismatch)  AS ua_mismatch,
>   sum(stats.feed_sub.c_filt_url_expired)  AS url_expired,
>   stats.feed_sub.subid                    AS stats_feed_sub_subid,
>   stats.feed_sub.sid                      AS stats_feed_sub_sid
> FROM stats.feed_sub
> WHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND
>       stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND
>       stats.feed_sub.gran = '1 day'
>       AND stats.feed_sub.aid = 3
> GROUP BY
>   stats.feed_sub.subid, stats.feed_sub.sid;

You cast every date to a timestamp. Why? You can adjust the index to:

CREATE UNIQUE INDEX ixu_feed_sub
ON stats.feed_sub
USING btree
((date::timestamp), gran, aid, pid, sid, fid, subid COLLATE
pg_catalog."default");

But since I see no need for the cast at all (maybe I missed it), try it
without!
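
For instance (a minimal sketch; the aggregate is arbitrary, it's the
predicate that matters), plain date literals keep the comparison in the
date domain:

SELECT sum(stats.feed_sub.c_total) AS c_total
FROM stats.feed_sub
WHERE stats.feed_sub.date >= DATE '2016-06-01'
  AND stats.feed_sub.date <= DATE '2016-06-30'
  AND stats.feed_sub.gran = '1 day'
  AND stats.feed_sub.aid = 3;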

Greetings,
Torsten


Re: less than 2 sec for response - possible?

From: trafdev
Hi. Yes, I've tried that in the past; it makes no difference at all:

With TIMESTAMP cast:

QUERY PLAN
HashAggregate  (cost=1405666.90..1416585.93 rows=335970 width=86) (actual time=4794.585..4923.062 rows=126533 loops=1)
  Group Key: subid, sid
  Buffers: shared hit=1486949
  ->  Index Scan using ix_feed_sub_aid_date on feed_sub  (cost=0.44..1313275.32 rows=3359694 width=86) (actual time=0.020..1736.005 rows=3588376 loops=1)
        Index Cond: ((aid = 3) AND (date >= '2016-06-01 00:00:00'::timestamp without time zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time zone))
        Buffers: shared hit=1486949
Planning time: 0.158 ms
Execution time: 4939.965 ms


Without TIMESTAMP cast:

QUERY PLAN
HashAggregate  (cost=1405666.90..1416585.93 rows=335970 width=86) (actual time=4797.272..4924.015 rows=126533 loops=1)
  Group Key: subid, sid
  Buffers: shared hit=1486949
  ->  Index Scan using ix_feed_sub_aid_date on feed_sub  (cost=0.44..1313275.32 rows=3359694 width=86) (actual time=0.019..1783.104 rows=3588376 loops=1)
        Index Cond: ((aid = 3) AND (date >= '2016-06-01'::date) AND (date <= '2016-06-30'::date))
        Buffers: shared hit=1486949
Planning time: 0.164 ms
Execution time: 4941.259 ms

I need to be sure it's a physical limitation of PostgreSQL (that when all
data is in memory, fetching/joining 1.5 mln rows can't be done in less
than 2-3 seconds) and there is no way to improve it.



On 07/05/16 04:39, Torsten Zuehlsdorff wrote:
> [...]



Re: less than 2 sec for response - possible?

From
Torsten Zuehlsdorff
Date:
On 05.07.2016 17:35, trafdev wrote:
> [...]
> Without TIMESTAMP cast:
>
> QUERY PLAN
> HashAggregate  (cost=1405666.90..1416585.93 rows=335970 width=86) (actual time=4797.272..4924.015 rows=126533 loops=1)
>   Group Key: subid, sid
>   Buffers: shared hit=1486949
>   ->  Index Scan using ix_feed_sub_aid_date on feed_sub  (cost=0.44..1313275.32 rows=3359694 width=86) (actual time=0.019..1783.104 rows=3588376 loops=1)
>         Index Cond: ((aid = 3) AND (date >= '2016-06-01'::date) AND (date <= '2016-06-30'::date))
>         Buffers: shared hit=1486949
> Planning time: 0.164 ms
> Execution time: 4941.259 ms
>
> I need to be sure it's a physical limitation of PostgreSQL (that when all
> data is in memory, fetching/joining 1.5 mln rows can't be done in less
> than 2-3 seconds) and there is no way to improve it.

It could be a physical limitation of your hardware. I just did a short
test on one of my databases:

Aggregate  (cost=532018.95..532018.96 rows=1 width=0) (actual time=3396.689..3396.689 rows=1 loops=1)
  Buffers: shared hit=155711
  ->  Index Only Scan using requests_request_time_idx on requests  (cost=0.43..493109.90 rows=15563620 width=0) (actual time=0.021..2174.614 rows=16443288 loops=1)
        Index Cond: ((request_time >= '2016-07-01 00:00:00+00'::timestamp with time zone) AND (request_time <= '2017-07-06 00:00:00+00'::timestamp with time zone))
        Heap Fetches: 31254
        Buffers: shared hit=155711
Planning time: 0.143 ms
Execution time: 3396.715 ms
(8 rows)

As you can see, I can get 16.4 mio rows from cache within 3.4 seconds.
Your index scan fetches 3.5 mio rows in 1.7 seconds; that's barely half
the throughput of my database.

Greetings,
Torsten


Re: less than 2 sec for response - possible?

From: trafdev
Wondering, what are your CPU/RAM characteristics?

On 07/06/16 01:35, Torsten Zuehlsdorff wrote:
> [...]


Re: less than 2 sec for response - possible?

From: Torsten Zuehlsdorff
On 06.07.2016 17:06, trafdev wrote:
> Wondering, what are your CPU/RAM characteristics?

Intel Core i7-2600 Quad Core
32 GB DDR3 RAM
2x 3 TB SATA III HDD

HDD is:
Model Family:     Seagate Barracuda XT
Device Model:     ST33000651AS
Firmware Version: CC45
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)

RAM is for example:

Handle 0x002D, DMI type 17, 28 bytes
Memory Device
         Array Handle: 0x002A
         Error Information Handle: No Error
         Total Width: 64 bits
         Data Width: 64 bits
         Size: 8192 MB
         Form Factor: DIMM
         Set: None
         Locator: DIMM0
         Bank Locator: BANK0
         Type: DDR3
         Type Detail: Synchronous
         Speed: 1333 MHz
         Manufacturer: Undefined
         Serial Number: 4430793
         Asset Tag: AssetTagNum0
         Part Number: CT102464BA160B.C16
         Rank: 2

OS is FreeBSD 10.3. Do you need more information?

Greetings,
Torsten


Re: less than 2 sec for response - possible?

From: trafdev
Well, our CPU/RAM configs are almost the same...

The difference is that you're fetching/grouping 8 times fewer rows than I am:

You scan 16.5 mln rows and fetch ~200k rows in 2 seconds, then spend
1.4 sec on aggregation.

I'm scanning 3.5 mln rows and fetching 1.5 mln rows (8 times more than
you) in 1.8 seconds, then spending the rest (2.3 seconds) on aggregation...

So please try extending the date range 8 times and repeat your test.



On 07/06/16 08:27, Torsten Zuehlsdorff wrote:
> [...]


Re: less than 2 sec for response - possible?

From: trafdev
So does that mean Postgres is not capable of scanning/aggregating fewer
than 10 mln rows and delivering the result in less than 2 seconds?

On 07/06/16 09:46, trafdev wrote:
> [...]


Re: less than 2 sec for response - possible?

From: Jim Nasby
On 7/9/16 12:26 PM, trafdev wrote:
> So does that mean Postgres is not capable of scanning/aggregating fewer
> than 10 mln rows and delivering the result in less than 2 seconds?

That's going to depend entirely on your hardware, and how big the rows
are. At some point you're simply going to run out of memory bandwidth,
especially since your access pattern is very scattered.

> On 07/06/16 09:46, trafdev wrote:
>> Well, our CPU/RAM configs are almost the same...
>>
>> The difference is that you're fetching/grouping 8 times fewer rows than I am:

Huh? The explain output certainly doesn't show that.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461


Re: less than 2 sec for response - possible?

From: trafdev
>>> The difference is that you're fetching/grouping 8 times fewer rows than I am:
>
> Huh? The explain output certainly doesn't show that.

Why not?

My output:
Buffers: shared hit=1486949

Torsten's output:
Buffers: shared hit=155711

This is the amount of rows fetched for further processing (when all data
is in memory), isn't it?


Re: less than 2 sec for response - possible?

From: Jim Nasby
On 7/19/16 9:28 AM, trafdev wrote:
>>>> The difference is that you're fetching/grouping 8 times fewer rows than I am:
>>
>> Huh? The explain output certainly doesn't show that.
>
> Why not?
>
> My output:
> Buffers: shared hit=1486949
>
> Torsten's output:
> Buffers: shared hit=155711
>
> This is the amount of rows fetched for further processing (when all data
> is in memory), isn't it?

That's buffers, not rows.

BTW, if my math is correct, reading 1486949 8K buffers is 11GB, which
your query did in ~1.8s at 6GB/s. Admittedly that's pretty hand-wavy
(pulling a datum from a shared buffer doesn't require reading the whole
buffer; on the other hand, you also visited each buffer
3359694/1486949≈2.3 times), but last time I measured, 6GB/s was a pretty
reasonable amount of memory bandwidth for something hitting main memory.

You've got ~30 bigints in that table (240 bytes) plus a bunch of other
stuff. That means you'll only be able to fit maybe 20 rows per 8K page.
At some point you'll simply hit the limits of hardware.
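
To make that arithmetic explicit, here is a back-of-the-envelope check
(runnable in any psql session; the 400-byte row width is an assumption of
mine: ~240 bytes of bigints plus the other columns and tuple overhead):

SELECT round(1486949::bigint * 8192 / 1024.0^3, 1)       AS gib_read,     -- ~11.3 GiB touched
       round(1486949::bigint * 8192 / 1024.0^3 / 1.8, 1) AS gib_per_sec,  -- ~6.3 GiB/s
       round(3359694 / 1486949.0, 1)                     AS visits_per_buffer,  -- ~2.3
       8192 / 400                                        AS rows_per_page;      -- ~20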

If you really need that kind of performance you'll probably need to have
some form of aggregate tables that you pull from. In your case, an
aggregate of each day would presumably work well; that would mean you'd
be reading 30 rows instead of 3.5M.
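
A sketch of that idea (the view name and retained dimensions are
assumptions; note it collapses gran, pid and fid, so it only serves
reports that don't filter on those):

CREATE MATERIALIZED VIEW stats.feed_sub_daily AS
SELECT date, aid, sid, subid,
       sum(c_filt_click_cap_ip) AS c_filt_click_cap_ip,
       sum(c_filt_doubleclick)  AS c_filt_doubleclick,
       sum(c_filt_delay_clicks) AS c_filt_delay_clicks,
       sum(c_filt_fast_click)   AS c_filt_fast_click,
       sum(c_filt_ip_mismatch)  AS c_filt_ip_mismatch,
       sum(c_filt_lng_mismatch) AS c_filt_lng_mismatch,
       sum(c_filt_ref_mismatch) AS c_filt_ref_mismatch,
       sum(c_filt_ua_mismatch)  AS c_filt_ua_mismatch,
       sum(c_filt_url_expired)  AS c_filt_url_expired
FROM stats.feed_sub
GROUP BY date, aid, sid, subid;

CREATE INDEX ON stats.feed_sub_daily (aid, date);
-- refresh after each load:
-- REFRESH MATERIALIZED VIEW stats.feed_sub_daily;
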
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461


Re: less than 2 sec for response - possible?

From: trafdev
Right, buffers are not rows, but that's still 8 times fewer...

The table I'm reading from is already aggregated on a daily basis (so
there is no way to aggregate it further).

Will extending the page size to, say, 128K improve performance?

On 07/19/16 07:41, Jim Nasby wrote:
> [...]


Re: less than 2 sec for response - possible?

From: Jim Nasby
On 7/19/16 9:56 AM, trafdev wrote:
> Will extending page to say 128K improve performance?

Well, you can't go to more than 32K, but yes, it might.

Even then, I think your biggest problem is that the data locality is too
low. You're only grabbing ~3 rows every time you read a buffer that
probably contains ~20 rows. So that's an area for improvement. The other
thing that would help a lot is to trim the table down so it's not as wide.

Actually, something else that could potentially help a lot is to store
arrays of many data points in each row, either by turning each column
into an array or storing an array of a composite type. [1] is exploring
those ideas right now.

You could also try cstore_fdw. It's not a magic bullet, but its storage
will be much more efficient than what you're doing right now.

[1] https://github.com/ElephantStack/ElephantStack
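
For reference, a cstore_fdw table is declared as a foreign table; a
minimal sketch (abbreviated column list; a real conversion would carry
over all columns and then be loaded with COPY or INSERT ... SELECT):

CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

CREATE FOREIGN TABLE stats.feed_sub_cstore (
    date  date     NOT NULL,
    aid   smallint NOT NULL,
    sid   smallint NOT NULL,
    subid text     NOT NULL,
    c_filt_click_cap_ip bigint NOT NULL
    -- ... remaining metric columns ...
) SERVER cstore_server OPTIONS (compression 'pglz');
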
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461