Thread: Why is my query not using the index to sort?
My query is not using the name index to sort the result.
explain analyze select
    "vessels".*
from
    "vessels"
where
    "vessels"."deleted_at" is null
    and "vessels"."company_id" = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'
order by
    "vessels"."name" ASC;
I have the following indexes on my vessels table:
Indexes:
"vessels_pkey" PRIMARY KEY, btree (id)
"inspector_tool_idx20_1" btree (company_id)
"inspector_tool_idx20_2" btree (name)
and the following is my plan output:
Sort (cost=17.29..17.48 rows=76 width=107) (actual time=0.468..0.473 rows=77 loops=1)
Sort Key: name
Sort Method: quicksort Memory: 38kB
-> Bitmap Heap Scan on vessels (cost=4.90..14.91 rows=76 width=107) (actual time=0.036..0.059 rows=77 loops=1)
Recheck Cond: (company_id = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'::uuid)
Filter: (deleted_at IS NULL)
Rows Removed by Filter: 4
Heap Blocks: exact=3
-> Bitmap Index Scan on inspector_tool_idx20_1 (cost=0.00..4.88 rows=81 width=0) (actual time=0.022..0.023 rows=81 loops=1)
Index Cond: (company_id = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'::uuid)
Planning time: 0.178 ms
Execution time: 0.527 ms
########################################################
After that I changed my index as follows, and got almost the same plan output.
Indexes:
"vessels_pkey" PRIMARY KEY, btree (id)
"inspector_tool_idx20_1" btree (company_id, name)
Sort (cost=17.29..17.48 rows=76 width=107) (actual time=0.475..0.480 rows=77 loops=1)
Sort Key: name
Sort Method: quicksort Memory: 38kB
-> Bitmap Heap Scan on vessels (cost=4.90..14.91 rows=76 width=107) (actual time=0.038..0.058 rows=77 loops=1)
Recheck Cond: (company_id = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'::uuid)
Filter: (deleted_at IS NULL)
Rows Removed by Filter: 4
Heap Blocks: exact=3
-> Bitmap Index Scan on inspector_tool_idx20_1 (cost=0.00..4.88 rows=81 width=0) (actual time=0.025..0.025 rows=81 loops=1)
Index Cond: (company_id = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'::uuid)
Planning time: 0.168 ms
Execution time: 0.533 ms
Is there any way I can improve the sorting so that it uses the index?
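(For reference, an index matching this query's shape would be a partial composite index: leading column for the equality filter, second column for the ORDER BY, and a predicate for the soft-delete condition. A sketch only; the index name is hypothetical:)

-- Hypothetical index name; matches the filter, the sort key, and the
-- deleted_at IS NULL condition of the query above.
create index vessels_company_id_name_live_idx
    on vessels (company_id, name)
    where deleted_at is null;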
Arup Rakshit <ar@zeit.io> writes:
> My query is not using the name index to sort the result.

Given the rowcounts here, I think the planner is making the right choice.
Sorting 70-some rows with a Sort node is probably cheaper than doing
random disk I/O to get them in sorted order. With more rows involved,
it might make the other choice.

As a testing measure (don't do it in production!), you could set
enable_sort = off, which will force the planner to pick a non-Sort
plan if possible. Then you could see whether that's actually faster
or slower, and by how much.

regards, tom lane
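(A minimal sketch of a safe way to try this: scope the setting to a single transaction with SET LOCAL so it cannot leak into other work.)

begin;
-- SET LOCAL lasts only until the end of the current transaction.
set local enable_sort = off;
explain analyze select
    "vessels".*
from
    "vessels"
where
    "vessels"."deleted_at" is null
    and "vessels"."company_id" = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'
order by
    "vessels"."name" ASC;
rollback;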
Hello Tom,
Here is the plan with `enable_sort = off`.
# set enable_sort = off;
SET
# explain analyze select "vessels" .* from "vessels" where "vessels"."deleted_at" is null and "vessels"."company_id" = '86529964-6e9b-4bfa-ba9e-62bd24eaa954' order by "vessels"."name" ASC;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Index Scan using inspector_tool_idx20_1 on vessels (cost=0.27..40.76 rows=76 width=107) (actual time=0.047..0.120 rows=77 loops=1)
Index Cond: (company_id = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'::uuid)
Filter: (deleted_at IS NULL)
Rows Removed by Filter: 4
Planning time: 1.867 ms
Execution time: 0.252 ms
(6 rows)
Why is it showing *6 rows*? Also the time seems less than what I had before:
# explain analyze select "vessels" .* from "vessels" where "vessels"."deleted_at" is null and "vessels"."company_id" = '86529964-6e9b-4bfa-ba9e-62bd24eaa954' order by "vessels"."name" ASC;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=17.29..17.48 rows=76 width=107) (actual time=0.789..0.796 rows=77 loops=1)
Sort Key: name
Sort Method: quicksort Memory: 38kB
-> Bitmap Heap Scan on vessels (cost=4.90..14.91 rows=76 width=107) (actual time=0.090..0.122 rows=77 loops=1)
Recheck Cond: (company_id = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'::uuid)
Filter: (deleted_at IS NULL)
Rows Removed by Filter: 4
Heap Blocks: exact=3
-> Bitmap Index Scan on inspector_tool_idx20_1 (cost=0.00..4.88 rows=81 width=0) (actual time=0.059..0.059 rows=81 loops=1)
Index Cond: (company_id = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'::uuid)
Planning time: 1.743 ms
Execution time: 0.954 ms
(12 rows)
On 28-Sep-2018, at 7:07 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:Arup Rakshit <ar@zeit.io> writes:My query is not using name index to sort the result.
Given the rowcounts here, I think the planner is making the right choice.
Sorting 70-some rows with a Sort node is probably cheaper than doing
random disk I/O to get them in sorted order. With more rows involved,
it might make the other choice.
As a testing measure (don't do it in production!), you could set
enable_sort = off, which will force the planner to pick a non-Sort
plan if possible. Then you could see whether that's actually faster
or slower, and by how much.
regards, tom lane
On 28.09.2018 at 16:49, Arup Rakshit wrote:
> Planning time: 1.867 ms
> Execution time: 0.252 ms
> (6 rows)
>
> Why is it showing *6 rows*? Also the time seems less than what I had before:

The explain output is 6 rows ;-)

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
Oh, I see. That makes sense; I was reading too much into that line. :)
Also, I meant the execution time is less than what I had before with enable_sort = off. The 6 rows was a side question. :)
On 28.09.2018 at 18:03, Arup Rakshit wrote:
> Also, I meant the execution time is less than what I had before with
> enable_sort = off.

Yeah, but not by that much. Different plan.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
Yes. But I thought I could improve it so that it finishes in 0.xx ms. It now takes between 1.7 and 1.9 ms in production.
On 9/28/18 11:32 AM, Arup Rakshit wrote:
> Yes. But I thought I could improve it so that it finishes in 0.xx ms.
> It now takes between 1.7 and 1.9 ms in production.

That is not what you showed in your first post:

Planning time: 0.178 ms
Execution time: 0.527 ms

--
Adrian Klaver
adrian.klaver@aklaver.com
Yes, I have shown the explain plan output. But in my application log, the SQL query prints 1.7 to 1.9 ms.
On 09/28/2018 12:51 PM, Arup Rakshit wrote:
> Yes, I have shown the explain plan output. But in my application log,
> the SQL query prints 1.7 to 1.9 ms.

How often does the production app make this call? Apparently it could do it
500 times per second. But at such a rate the network overhead would be
stupendous - you might think about getting more data per call, fewer calls.
On 9/28/18 11:51 AM, Arup Rakshit wrote:
> Yes, I have shown the explain plan output. But in my application log,
> the SQL query prints 1.7 to 1.9 ms.

So you have added another layer to the process.

The application is?
The log settings are?
What is being printed?
Where is it being printed?

--
Adrian Klaver
adrian.klaver@aklaver.com
Hi Adrian,
> The application is?
It is a Ruby on Rails application.
> The log settings are?
> Where is it being printed?
The default application log, the production.log file.
> What is being printed?
Forgot to mention in my previous email; it was a quick send click. Sorry for that.
In the gist you need to look at all the lines with Vessel Load(... I loaded the page multiple times to catch the different timings, so you will see that line multiple times in the log file.
On 9/28/18 12:14 PM, Arup Rakshit wrote:
> In the gist you need to look at all the lines with Vessel Load(... I
> loaded the page multiple times to catch the different timings, so you
> will see that line multiple times in the log file.

Do you know what Vessel Load() is actually measuring?

To me it looks like it is covering both the query (including ROR overhead)
and the HTTP request/response cycle.

Also have you looked at:

https://guides.rubyonrails.org/debugging_rails_applications.html#impact-of-logs-on-performance

--
Adrian Klaver
adrian.klaver@aklaver.com
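(One way to get a server-side timing that is independent of any application-layer overhead is to have PostgreSQL itself log statement durations. A sketch, assuming superuser access:)

-- Log the duration of every completed statement (0 ms threshold).
-- Requires superuser; this is verbose, so revert it after testing.
alter system set log_min_duration_statement = 0;
select pg_reload_conf();
-- Durations then appear in the server log as lines like:
--   LOG:  duration: 0.527 ms  statement: select "vessels".* from ...
-- Revert with:
--   alter system reset log_min_duration_statement;
--   select pg_reload_conf();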
Hello Adrian,
I used to consider this time to be approximately the SQL execution time. I'll do some research and get back to you. Thanks for mentioning it.
Hello Tom,
I have another query where I am expecting the sort to come from the index, but it is done in memory and takes a lot of time.
Query:
explain analyze select
    *
from
    "standard_workitems"
where
    "standard_workitems"."deleted_at" is null
    and "standard_workitems"."company_id" = '6fed40b7-fdd7-4efb-a163-c2b42e6486ae'
order by
    standard_workitems.item_code asc;
Explain plan:
Sort (cost=3454.03..3458.18 rows=1660 width=810) (actual time=20.302..20.502 rows=2071 loops=1)
Sort Key: item_code
Sort Method: quicksort Memory: 800kB
-> Bitmap Heap Scan on standard_workitems (cost=57.29..3365.25 rows=1660 width=810) (actual time=0.297..0.781 rows=2071 loops=1)
Recheck Cond: ((company_id = '6fed40b7-fdd7-4efb-a163-c2b42e6486ae'::uuid) AND (deleted_at IS NULL))
Heap Blocks: exact=139
-> Bitmap Index Scan on standard_workitems_partial_index_idx_1_1 (cost=0.00..56.87 rows=1660 width=0) (actual time=0.272..0.272 rows=2071 loops=1)
Index Cond: (company_id = '6fed40b7-fdd7-4efb-a163-c2b42e6486ae'::uuid)
Planning time: 0.199 ms
Execution time: 20.688 ms
Indexes I have:
Indexes:
"standard_workitems_pkey" PRIMARY KEY, btree (id)
"index_standard_workitems_on_company_id" btree (company_id)
"index_standard_workitems_on_deleted_at" btree (deleted_at)
"index_standard_workitems_on_item_code" btree (item_code)
"index_standard_workitems_on_workitem_category_id" btree (workitem_category_id)
"standard_workitems_partial_index_idx_1_1" btree (company_id, item_code) WHERE deleted_at IS NULL
When I keep enable_sort off, I see it executes much faster.
set enable_sort = off;
explain analyze select
    *
from
    "standard_workitems"
where
    "standard_workitems"."deleted_at" is null
    and "standard_workitems"."company_id" = '6fed40b7-fdd7-4efb-a163-c2b42e6486ae'
order by
    item_code asc;
Index Scan using standard_workitems_partial_index_idx_1_1 on standard_workitems (cost=0.42..5802.04 rows=1697 width=763) (actual time=0.018..1.076 rows=2071 loops=1)
Index Cond: (company_id = '6fed40b7-fdd7-4efb-a163-c2b42e6486ae'::uuid)
Planning time: 0.191 ms
Execution time: 1.210 ms
I have no idea why PG thinks the in-memory sort will be cheaper.
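(The planner compares estimated costs, not actual times: the index-scan plan above is costed at about 5802 versus about 3458 for the bitmap-scan-plus-sort plan, largely because reading the table in index order is charged at the random-page rate. A common experiment on systems where the data is cached or on SSD, offered as a sketch rather than a tested recommendation, is to lower random_page_cost for the session and re-check the plan:)

-- Experiment only: tell the planner random page reads are cheap
-- (the default random_page_cost is 4.0; values near 1.1 are common
-- for SSD or fully cached data), then re-run the query.
set random_page_cost = 1.1;

explain analyze select
    *
from
    "standard_workitems"
where
    "standard_workitems"."deleted_at" is null
    and "standard_workitems"."company_id" = '6fed40b7-fdd7-4efb-a163-c2b42e6486ae'
order by
    item_code asc;

reset random_page_cost;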
Hi Adrian,
I am on psql (10.5, server 9.5.14).
I am still investigating the Rails side. I found a blog post (https://schneems.com/2015/10/27/sql-in-rails-logs.html), where a Rails core team member said that the Load time is basically the SQL execution time.
I also did a little test in psql. Here too I see the explain output and the actual execution time far apart, and the latter matches what I see in the Rails log.
docking_dev=# explain analyze select "vessels" .* from "vessels" where "vessels"."deleted_at" is null and "vessels"."company_id" in ('6f56abfe-a355-44b9-b728-a642f661a8e7', 'c82d08d7-bec1-4413-b627-63e1898c568c', '5404cd95-29d9-44c3-8ca3-261be373bf31', '19d5224a-ff2b-43c4-b756-4e82c29a4712', '6fed40b7-fdd7-4efb-a163-c2b42e6486ae', 'a0793a69-330c-4a6b-8a55-ee6daf574371', '3e936613-04e6-4db0-8595-5b8ae9cc2a40', '059275e2-933d-4543-8eef-91a5af87849b', 'c64a658a-6ff1-4e76-8551-dfa62383bac3', 'ce634593-555a-4f3c-af79-d7a7cf3796fb', 'e18c4dab-f536-46f8-9f85-8bf51dbe989e', 'cfe6a534-b33b-4493-862e-6aa4f0b4a790', '58d628c1-af0d-4e64-b672-87c77b677c7b', '36cb8ab0-5e8a-40db-a296-5e4b5dc666f3', '0d15288d-c042-461e-b4cb-0b2ad47ead29', 'e8cd498d-9da5-4d54-ac07-e4fc48c84a67', 'c192c117-0ec7-4c1d-94a6-0a773e70e58b', '97dd15db-5862-4d49-9047-e3e0307e5c95', '03766e4f-4f64-4927-be6d-a126958ac1a8', '26987b95-8652-491c-b950-4fb3128f4714', 'c4f55044-3f4e-439e-a586-8b6978c7b406', '36825da8-2f58-4a62-bdb4-2b91cbe18299', 'cca256cf-b415-472c-8b9f-a8432d02c580');
QUERY PLAN
----------------------------------------------------------------------------------------------------
Seq Scan on vessels (cost=0.00..630.62 rows=923 width=1439) (actual time=0.279..4.921 rows=753 loops=1)
Filter: ((deleted_at IS NULL) AND (company_id = ANY ('{6f56abfe-a355-44b9-b728-a642f661a8e7,c82d08d7-bec1-4413-b627-63e1898c568c,5404cd95-29d9-44c3-8ca3-261be373bf31,19d5224a-ff2b-43c4-b756-4e82c29a4712,6fed40b7-fdd7-4efb-a163-c2b42e6486ae,a0793a69-330c-4a6b-8a55-ee6daf574371,3e936613-04e6-4db0-8595-5b8ae9cc2a40,059275e2-933d-4543-8eef-91a5af87849b,c64a658a-6ff1-4e76-8551-dfa62383bac3,ce634593-555a-4f3c-af79-d7a7cf3796fb,e18c4dab-f536-46f8-9f85-8bf51dbe989e,cfe6a534-b33b-4493-862e-6aa4f0b4a790,58d628c1-af0d-4e64-b672-87c77b677c7b,36cb8ab0-5e8a-40db-a296-5e4b5dc666f3,0d15288d-c042-461e-b4cb-0b2ad47ead29,e8cd498d-9da5-4d54-ac07-e4fc48c84a67,c192c117-0ec7-4c1d-94a6-0a773e70e58b,97dd15db-5862-4d49-9047-e3e0307e5c95,03766e4f-4f64-4927-be6d-a126958ac1a8,26987b95-8652-491c-b950-4fb3128f4714,c4f55044-3f4e-439e-a586-8b6978c7b406,36825da8-2f58-4a62-bdb4-2b91cbe18299,cca256cf-b415-472c-8b9f-a8432d02c580}'::uuid[])))
Rows Removed by Filter: 1947
Planning time: 2.249 ms
Execution time: 5.046 ms
(5 rows)
docking_dev=# \timing on
Timing is on.
docking_dev=# \o ~/Downloads/a.txt
docking_dev=# select "vessels" .* from "vessels" where "vessels"."deleted_at" is null and "vessels"."company_id" in ('6f56abfe-a355-44b9-b728-a642f661a8e7', 'c82d08d7-bec1-4413-b627-63e1898c568c', '5404cd95-29d9-44c3-8ca3-261be373bf31', '19d5224a-ff2b-43c4-b756-4e82c29a4712', '6fed40b7-fdd7-4efb-a163-c2b42e6486ae', 'a0793a69-330c-4a6b-8a55-ee6daf574371', '3e936613-04e6-4db0-8595-5b8ae9cc2a40', '059275e2-933d-4543-8eef-91a5af87849b', 'c64a658a-6ff1-4e76-8551-dfa62383bac3', 'ce634593-555a-4f3c-af79-d7a7cf3796fb', 'e18c4dab-f536-46f8-9f85-8bf51dbe989e', 'cfe6a534-b33b-4493-862e-6aa4f0b4a790', '58d628c1-af0d-4e64-b672-87c77b677c7b', '36cb8ab0-5e8a-40db-a296-5e4b5dc666f3', '0d15288d-c042-461e-b4cb-0b2ad47ead29', 'e8cd498d-9da5-4d54-ac07-e4fc48c84a67', 'c192c117-0ec7-4c1d-94a6-0a773e70e58b', '97dd15db-5862-4d49-9047-e3e0307e5c95', '03766e4f-4f64-4927-be6d-a126958ac1a8', '26987b95-8652-491c-b950-4fb3128f4714', 'c4f55044-3f4e-439e-a586-8b6978c7b406', '36825da8-2f58-4a62-bdb4-2b91cbe18299', 'cca256cf-b415-472c-8b9f-a8432d02c580');
Time: 14.377 ms
docking_dev=#
On 29-Sep-2018, at 2:50 AM, Adrian Klaver <adrian.klaver@aklaver.com> wrote:On 9/28/18 12:14 PM, Arup Rakshit wrote:Forgot to mention in my previous email, it was a quick send click. Sorry for that.
In the gist you need to see all the line with Vessel Load(.. . I load the page multiple times to catch the different times, so you will the line multiple times there in the log file.
Do you know what Vessel Load () is actually measuring?
To me it looks like it is covering both the query(including ROR overhead) and the HTTP request/response cycle.
Also have you looked at:
https://guides.rubyonrails.org/debugging_rails_applications.html#impact-of-logs-on-performanceThanks,
Arup Rakshit
ar@zeit.io <mailto:ar@zeit.io>
--
Adrian Klaver
adrian.klaver@aklaver.com
On 9/30/18 1:21 PM, Arup Rakshit wrote:
> I am on psql (10.5, server 9.5.14).
>
> I am still investigating the Rails side. I found a blog post
> (https://schneems.com/2015/10/27/sql-in-rails-logs.html), where a Rails
> core team member said that the Load time is basically the SQL execution
> time.

From what I read, he said it is the SQL call, with no indication of
whether that includes the ORM/framework overhead to make the call or not.

> I also did a little test in psql. Here too I see the explain output and
> the actual execution time far apart, and the latter matches what I see
> in the Rails log.

No it does not. In a previous post you said:

"It now takes between 1.7 and 1.9 ms in production."

Which was confirmed by the information here:

https://gist.github.com/aruprakshit/a6bd7ca221c9a13cd583e0827aa24ad6

Below you show 5.046 ms and then 14.377 ms. The second number is not
really relevant, as it includes the time to write out to a file, so it is
a different operation altogether.

--
Adrian Klaver
adrian.klaver@aklaver.com
Hello Adrian,
The one I showed in the gist is a different query, and in the previous mail I posted yet another query, because I was testing with different types of queries. So the 1.5/1.7 ms figures are not for this current one. My main point now is that I am trying to understand why the numbers don't match at all. The current query's timing in psql and in the production log are very close to each other, but not to the explain output.
Regarding the file write time: I don't think it is the time to write to the file, because even if I don't write to the file, it still always shows 14+ ms. I used the file rather than the terminal output so that I could paste clean output here. If you want, I can show the output without writing to an external file, and you will see the same time.
docking_dev=# \timing on
Timing is on.
docking_dev=# select "vessels" .* from "vessels" where "vessels"."deleted_at" is null and "vessels"."company_id" in ('6f56abfe-a355-44b9-b728-a642f661a8e7', 'c82d08d7-bec1-4413-b627-63e1898c568c', '5404cd95-29d9-44c3-8ca3-261be373bf31', '19d5224a-ff2b-43c4-b756-4e82c29a4712', '6fed40b7-fdd7-4efb-a163-c2b42e6486ae', 'a0793a69-330c-4a6b-8a55-ee6daf574371', '3e936613-04e6-4db0-8595-5b8ae9cc2a40', '059275e2-933d-4543-8eef-91a5af87849b', 'c64a658a-6ff1-4e76-8551-dfa62383bac3', 'ce634593-555a-4f3c-af79-d7a7cf3796fb', 'e18c4dab-f536-46f8-9f85-8bf51dbe989e', 'cfe6a534-b33b-4493-862e-6aa4f0b4a790', '58d628c1-af0d-4e64-b672-87c77b677c7b', '36cb8ab0-5e8a-40db-a296-5e4b5dc666f3', '0d15288d-c042-461e-b4cb-0b2ad47ead29', 'e8cd498d-9da5-4d54-ac07-e4fc48c84a67', 'c192c117-0ec7-4c1d-94a6-0a773e70e58b', '97dd15db-5862-4d49-9047-e3e0307e5c95', '03766e4f-4f64-4927-be6d-a126958ac1a8', '26987b95-8652-491c-b950-4fb3128f4714', 'c4f55044-3f4e-439e-a586-8b6978c7b406', '36825da8-2f58-4a62-bdb4-2b91cbe18299', 'cca256cf-b415-472c-8b9f-a8432d02c580');
 code | name         | technical_details                                                              | created_at                 | updated_at                 | deleted_at | id                                   | company_id                           | vessel_category_id
------+--------------+--------------------------------------------------------------------------------+----------------------------+----------------------------+------------+--------------------------------------+--------------------------------------+--------------------------------------
 HHS  | Hamburg Star | {"class_number":"9298325","owner_salutation":"Dear Mr.","charterer":"Scorpio", ...} | 2015-06-29 13:19:44.449149 | 2015-07-09 07:21:43.842282 |            | 113e113d-9fbe-4429-99a6-c8ed306f9e29 | e8cd498d-9da5-4d54-ac07-e4fc48c84a67 | 18497ecf-5435-4e76-bc13-cda11f106457
 4020 | Hansa Fyn    | {"vessels_registered_name":"Hansa Fyn","type_of_vessel":"Container Geared","imo_number":"9256418", ... "port_of_registry":"Lux
Time: 14.173 ms
docking_dev=#
docking_dev=# explain analyze select "vessels" .* from "vessels" where "vessels"."deleted_at" is null and "vessels"."company_id" in ('6f56abfe-a355-44b9-b728-a642f661a8e7', 'c82d08d7-bec1-4413-b627-63e1898c568c', '5404cd95-29d9-44c3-8ca3-261be373bf31', '19d5224a-ff2b-43c4-b756-4e82c29a4712', '6fed40b7-fdd7-4efb-a163-c2b42e6486ae', 'a0793a69-330c-4a6b-8a55-ee6daf574371', '3e936613-04e6-4db0-8595-5b8ae9cc2a40', '059275e2-933d-4543-8eef-91a5af87849b', 'c64a658a-6ff1-4e76-8551-dfa62383bac3', 'ce634593-555a-4f3c-af79-d7a7cf3796fb', 'e18c4dab-f536-46f8-9f85-8bf51dbe989e', 'cfe6a534-b33b-4493-862e-6aa4f0b4a790', '58d628c1-af0d-4e64-b672-87c77b677c7b', '36cb8ab0-5e8a-40db-a296-5e4b5dc666f3', '0d15288d-c042-461e-b4cb-0b2ad47ead29', 'e8cd498d-9da5-4d54-ac07-e4fc48c84a67', 'c192c117-0ec7-4c1d-94a6-0a773e70e58b', '97dd15db-5862-4d49-9047-e3e0307e5c95', '03766e4f-4f64-4927-be6d-a126958ac1a8', '26987b95-8652-491c-b950-4fb3128f4714', 'c4f55044-3f4e-439e-a586-8b6978c7b406', '36825da8-2f58-4a62-bdb4-2b91cbe18299', 'cca256cf-b415-472c-8b9f-a8432d02c580');
QUERY PLAN
----------------------------------------------------------------------------------------------------
Seq Scan on vessels (cost=0.00..630.62 rows=923 width=1439) (actual time=0.201..2.927 rows=753 loops=1)
Filter: ((deleted_at IS NULL) AND (company_id = ANY ('{6f56abfe-a355-44b9-b728-a642f661a8e7,c82d08d7-bec1-4413-b627-63e1898c568c,5404cd95-29d9-44c3-8ca3-261be373bf31,19d5224a-ff2b-43c4-b756-4e82c29a4712,6fed40b7-fdd7-4efb-a163-c2b42e6486ae,a0793a69-330c-4a6b-8a55-ee6daf574371,3e936613-04e6-4db0-8595-5b8ae9cc2a40,059275e2-933d-4543-8eef-91a5af87849b,c64a658a-6ff1-4e76-8551-dfa62383bac3,ce634593-555a-4f3c-af79-d7a7cf3796fb,e18c4dab-f536-46f8-9f85-8bf51dbe989e,cfe6a534-b33b-4493-862e-6aa4f0b4a790,58d628c1-af0d-4e64-b672-87c77b677c7b,36cb8ab0-5e8a-40db-a296-5e4b5dc666f3,0d15288d-c042-461e-b4cb-0b2ad47ead29,e8cd498d-9da5-4d54-ac07-e4fc48c84a67,c192c117-0ec7-4c1d-94a6-0a773e70e58b,97dd15db-5862-4d49-9047-e3e0307e5c95,03766e4f-4f64-4927-be6d-a126958ac1a8,26987b95-8652-491c-b950-4fb3128f4714,c4f55044-3f4e-439e-a586-8b6978c7b406,36825da8-2f58-4a62-bdb4-2b91cbe18299,cca256cf-b415-472c-8b9f-a8432d02c580}'::uuid[])))
Rows Removed by Filter: 1947
Planning time: 0.528 ms
Execution time: 3.076 ms
(5 rows)
On 9/30/18 9:26 PM, Arup Rakshit wrote:
> The one I showed in the gist is a different query, and in the previous
> mail I posted yet another query, because I was testing with different
> types of queries. So the 1.5/1.7 ms figures are not for this current
> one. My main point now is that I am trying to understand why the
> numbers don't match at all. The current query's timing in psql and in
> the production log are very close to each other, but not to the explain
> output.

1) If you want to track down what is happening you will need to provide a
consistent test case. Jumping from query to query, dataset to dataset,
condition to condition is not conducive to coming up with an answer.

2) What is the answer you are looking for? Or to put it another way, what
is the problem you are trying to solve?

3) Taking 1) & 2) into account you need to do something along the lines of:
a) Information about the size of the table.
b) A set query against said table.
c) Some indication of the problem the query is causing.

> Regarding the file write time: I don't think it is the time to write to
> the file, because even if I don't write to the file, it still always
> shows 14+ ms.

Well, you are looking at two different times. The EXPLAIN ANALYZE is
showing the time to execute the query. The \timing is including the time
to display the output, which is why it is comparable to what you see in
the log. To illustrate:

select count(*) from projection;
 count
-------
 28447
(1 row)

\timing
Timing is on.

explain analyze select * from projection;
                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------
 Seq Scan on projection  (cost=0.00..751.47 rows=28447 width=109) (actual time=0.012..3.853 rows=28447 loops=1)
 Planning time: 0.066 ms
 Execution time: 5.381 ms
(3 rows)

explain analyze select * from projection where p_item_no = 2;
                                                         QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Index Scan using pj_pno_idx on projection  (cost=0.29..21.93 rows=5 width=109) (actual time=0.021..0.032 rows=10 loops=1)
   Index Cond: (p_item_no = 2)
 Planning time: 0.117 ms
 Execution time: 0.061 ms
(4 rows)

select * from projection;  -- Returns 28447 rows
Time: 56.186 ms

select * from projection where p_item_no = 2;  -- Returns 10 rows
Time: 0.372 ms

Repeating it:

explain analyze select * from projection;
 Seq Scan on projection  (cost=0.00..751.47 rows=28447 width=109) (actual time=0.012..3.377 rows=28447 loops=1)
 Planning time: 0.056 ms
 Execution time: 4.759 ms
(3 rows)

explain analyze select * from projection where p_item_no = 2;
 Index Scan using pj_pno_idx on projection  (cost=0.29..21.93 rows=5 width=109) (actual time=0.021..0.031 rows=10 loops=1)
   Index Cond: (p_item_no = 2)
 Planning time: 0.112 ms
 Execution time: 0.059 ms
(4 rows)

select * from projection;
Time: 56.548 ms

select * from projection where p_item_no = 2;
Time: 0.463 ms

--
Adrian Klaver
adrian.klaver@aklaver.com
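(A related sketch for separating execution time from client display time: wrap the query in an aggregate so that only one row crosses the wire, then compare the \timing number against the full select.)

\timing on
-- Only a single count is transferred, so \timing now reflects mostly
-- server-side execution rather than row transfer and display.
select count(*)
from (
    select "vessels".*
    from "vessels"
    where "vessels"."deleted_at" is null
      and "vessels"."company_id" = '86529964-6e9b-4bfa-ba9e-62bd24eaa954'
) as q;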