Re: Query taking long time - Mailing list pgsql-performance

From acanada
Subject Re: Query taking long time
Date
Msg-id FA02E6B1-9C0A-4122-8642-7D497D7DBB42@cnio.es
In response to Query taking long time  ("acanada" <acanada@cnio.es>)
List pgsql-performance
Hello,

I have changed the multicolumn index from:
    "entity2document_name_qualifier_index" btree (name, qualifier)
 to:
     "document_qualifier_name_index" btree (qualifier, name)

And now the planner doesn't do the "Recheck Cond:" step (there are only three different qualifiers vs. millions of names...)
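The change described above amounts to something like the following DDL (a sketch; the exact statements are not in the original message):

```sql
-- Sketch of the index swap described above (statements reconstructed, not verbatim).
-- The new column order puts the low-cardinality column first: (qualifier, name).
CREATE INDEX document_qualifier_name_index
    ON entity2document (qualifier, name);

-- Drop the old (name, qualifier) index once the planner is using the new one.
DROP INDEX entity2document_name_qualifier_index;
```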

But the query is still taking a long time:


Limit  (cost=384043.64..384043.66 rows=10 width=313) (actual time=80555.930..80555.934 rows=10 loops=1)
   ->  Sort  (cost=384043.64..384081.19 rows=15020 width=313) (actual time=80555.928..80555.931 rows=10 loops=1)
         Sort Key: d1_.hepval
         Sort Method:  top-N heapsort  Memory: 29kB
         ->  Nested Loop  (cost=0.00..383719.06 rows=15020 width=313) (actual time=223.778..80547.196 rows=3170 loops=1)
               ->  Index Scan using document_qualifier_name_index on entity2document e0_  (cost=0.00..52505.40 rows=15020 width=59) (actual time=126.880..11549.392 rows=3170 loops=1)
                     Index Cond: (((qualifier)::text = 'CompoundDict'::text) AND ((name)::text = 'galactosamine'::text))
               ->  Index Scan using "DocumentOLD_pkey" on documentold d1_  (cost=0.00..22.04 rows=1 width=254) (actual time=21.747..21.764 rows=1 loops=3170)
                     Index Cond: (d1_.id = e0_.document_id)
                     Filter: (d1_.hepval IS NOT NULL)
 Total runtime: 80556.027 ms



Any help/point to any direction, would be very appreciated.
Thank you,
Andrés

On Feb 26, 2014, at 4:41 PM, acanada wrote:

> Hello to everybody and thanks in advance to take a look to this message.
> I'm new in this list and with PostgreSQL.
> My queries are taking too much time to complete and I don't know what to do right now. I think I'm providing all the info required for you to help me. If you need extra info, please tell me.
>
> I am using DQL included in the latest version of symfony2 (2.4.2). This is the query, formed by DQL, but copied and pasted into the psql client (9.1.11; server 8.3.8):
>
> explain analyze SELECT e0_.id AS id0, e0_.name AS name1, e0_.qualifier AS qualifier2, e0_."tagMethod" AS tagmethod3,
> e0_.curation AS curation4, e0_.created AS created5, e0_.updated AS updated6, d1_.id AS id7, d1_.kind AS kind8,
> d1_.uid AS uid9, d1_."sentenceId" AS sentenceid10, d1_.text AS text11, d1_.hepval AS hepval12, d1_.cardval AS cardval13,
> d1_.nephval AS nephval14, d1_.phosval AS phosval15, d1_."patternCount" AS patterncount16, d1_."ruleScore" AS rulescore17,
> d1_."hepTermNormScore" AS heptermnormscore18, d1_."hepTermVarScore" AS heptermvarscore19, d1_.created AS created20,
> d1_.updated AS updated21, e0_.document_id AS document_id22 FROM Entity2Document e0_ INNER JOIN documentold d1_
> ON e0_.document_id = d1_.id WHERE e0_.name = 'ranitidine' AND e0_.qualifier = 'CompoundDict' AND d1_.hepval IS NOT NULL
> ORDER BY d1_.hepval DESC limit 10;
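> Stripped of the generated column aliases, the shape of the query is essentially (an equivalent restatement for readability, not the literal DQL output):
>
> ```sql
> -- Equivalent, more readable form of the query above.
> EXPLAIN ANALYZE
> SELECT e0_.*, d1_.*
> FROM entity2document e0_
> JOIN documentold d1_ ON e0_.document_id = d1_.id
> WHERE e0_.name = 'ranitidine'
>   AND e0_.qualifier = 'CompoundDict'
>   AND d1_.hepval IS NOT NULL
> ORDER BY d1_.hepval DESC
> LIMIT 10;
> ```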
>
>
> limtox=> \d+ documentold;
>                               Table "public.documentold"
>      Column      |              Type              | Modifiers | Storage  | Description
> ------------------+--------------------------------+-----------+----------+-------------
> id               | integer                        | not null  | plain    |
> kind             | character varying(255)         | not null  | extended |
> uid              | character varying(255)         | not null  | extended |
> sentenceId       | character varying(255)         | not null  | extended |
> text             | text                           | not null  | extended |
> hepval           | double precision               |           | plain    |
> created          | timestamp(0) without time zone | not null  | plain    |
> updated          | timestamp(0) without time zone |           | plain    |
> cardval          | double precision               |           | plain    |
> nephval          | double precision               |           | plain    |
> phosval          | double precision               |           | plain    |
> patternCount     | double precision               |           | plain    |
> ruleScore        | double precision               |           | plain    |
> hepTermNormScore | double precision               |           | plain    |
> hepTermVarScore  | double precision               |           | plain    |
> Indexes:
>    "DocumentOLD_pkey" PRIMARY KEY, btree (id)
>    "document_cardval_index" btree (cardval)
>    "document_heptermnorm_index" btree ("hepTermNormScore" DESC NULLS LAST)
>    "document_heptermvar_index" btree ("hepTermVarScore" DESC NULLS LAST)
>    "document_hepval_index" btree (hepval DESC NULLS LAST)
>    "document_kind_index" btree (kind)
>    "document_nephval_index" btree (nephval DESC NULLS LAST)
>    "document_patterncount_index" btree ("patternCount" DESC NULLS LAST)
>    "document_phosval_index" btree (phosval DESC NULLS LAST)
>    "document_rulescore_index" btree ("ruleScore" DESC NULLS LAST)
>    "document_sentenceid_index" btree ("sentenceId")
>    "document_uid_index" btree (uid)
> Referenced by:
>    TABLE "hepkeywordtermnorm2document" CONSTRAINT "fk_1c19bcd0c33f7837" FOREIGN KEY (document_id) REFERENCES
documentold(id)
>    TABLE "cytochrome2document" CONSTRAINT "fk_21f7636fc33f7837" FOREIGN KEY (document_id) REFERENCES documentold(id)
>    TABLE "hepkeywordtermvariant2document" CONSTRAINT "fk_a316e36bc33f7837" FOREIGN KEY (document_id) REFERENCES
documentold(id)
>    TABLE "entity2document" CONSTRAINT "fk_a6020c0dc33f7837" FOREIGN KEY (document_id) REFERENCES documentold(id)
>    TABLE "specie2document" CONSTRAINT "fk_b6e551c8c33f7837" FOREIGN KEY (document_id) REFERENCES documentold(id)
> Has OIDs: no
>
>
>
>
> limtox=> \d+ entity2document;
>                               Table "public.entity2document"
>    Column    |              Type              |            Modifiers            | Storage  | Description
> -------------+--------------------------------+---------------------------------+----------+-------------
> id          | integer                        | not null                        | plain    |
> document_id | integer                        |                                 | plain    |
> name        | character varying(255)         | not null                        | extended |
> qualifier   | character varying(255)         | not null                        | extended |
> tagMethod   | character varying(255)         | default NULL::character varying | extended |
> created     | timestamp(0) without time zone | not null                        | plain    |
> updated     | timestamp(0) without time zone |                                 | plain    |
> curation    | integer                        |                                 | plain    |
> Indexes:
>    "Entity2Document_pkey" PRIMARY KEY, btree (id)
>    "entity2Document_name_index" btree (name)
>    "entity2document_name_qualifier_index" btree (name, qualifier)
>    "idx_a6020c0dc33f7837" btree (document_id)
>    "qualifier_index" btree (qualifier)
> Foreign-key constraints:
>    "fk_a6020c0dc33f7837" FOREIGN KEY (document_id) REFERENCES documentold(id)
> Has OIDs: no
>
>
>
>
>
>
> Table metadata:
>    documentold: 124,515,592 rows. It has several columns with a large proportion of NULLs (updated, patternCount, ruleScore, hepTermNormScore, hepTermVarScore)
>    entity2document: 93,785,968 rows. It has two columns with a large proportion of NULLs (updated, curation)
>
> None of the tables receive updates or deletes regularly
>
>
>                                                                                QUERY PLAN
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> Limit  (cost=387929.02..387929.05 rows=10 width=313) (actual time=55980.472..55980.476 rows=10 loops=1)
>   ->  Sort  (cost=387929.02..387966.75 rows=15090 width=313) (actual time=55980.471..55980.473 rows=10 loops=1)
>         Sort Key: d1_.hepval
>         Sort Method:  top-N heapsort  Memory: 28kB
>         ->  Nested Loop  (cost=469.14..387602.93 rows=15090 width=313) (actual time=96.716..55974.004 rows=2774 loops=1)
>               ->  Bitmap Heap Scan on entity2document e0_  (cost=469.14..54851.25 rows=15090 width=59) (actual time=51.299..8452.592 rows=2774 loops=1)
>                     Recheck Cond: (((name)::text = 'Cimetidine'::text) AND ((qualifier)::text = 'CompoundDict'::text))
>                     ->  Bitmap Index Scan on entity2document_name_qualifier_index  (cost=0.00..465.36 rows=15090 width=0) (actual time=36.467..36.467 rows=2774 loops=1)
>                           Index Cond: (((name)::text = 'Cimetidine'::text) AND ((qualifier)::text = 'CompoundDict'::text))
>               ->  Index Scan using "DocumentOLD_pkey" on documentold d1_  (cost=0.00..22.04 rows=1 width=254) (actual time=17.113..17.129 rows=1 loops=2774)
>                     Index Cond: (d1_.id = e0_.document_id)
>                     Filter: (d1_.hepval IS NOT NULL)
> Total runtime: 55980.554 ms
> (13 rows)
>
> version
> -----------------------------------------------------------------------------------------------------
> PostgreSQL 8.3.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 10.4.1-3ubuntu3) 10.4.1
>
> This query has always been slow; it's fast only when it's cached. Vacuum and analyze have been run manually very recently.
>
>
>
>
> SELECT name, current_setting(name), source
>  FROM pg_settings
>  WHERE source NOT IN ('default', 'override');
>
>             name            |  current_setting   |        source
> ----------------------------+--------------------+----------------------
> client_encoding            | UTF8               | client
> DateStyle                  | ISO, DMY           | configuration file
> default_text_search_config | pg_catalog.spanish | configuration file
> effective_cache_size       | 7500MB             | configuration file
> lc_messages                | es_ES.UTF-8        | configuration file
> lc_monetary                | es_ES.UTF-8        | configuration file
> lc_numeric                 | C                  | configuration file
> lc_time                    | es_ES.UTF-8        | configuration file
> listen_addresses           | *                  | configuration file
> log_line_prefix            | %t                 | configuration file
> log_timezone               | localtime          | command line
> maintenance_work_mem       | 2000MB             | configuration file
> max_connections            | 100                | configuration file
> max_fsm_pages              | 63217760           | configuration file
> max_stack_depth            | 2MB                | environment variable
> port                       | 5432               | configuration file
> shared_buffers             | 1500MB             | configuration file
> ssl                        | on                 | configuration file
> tcp_keepalives_count       | 9                  | configuration file
> tcp_keepalives_idle        | 7200               | configuration file
> tcp_keepalives_interval    | 75                 | configuration file
> TimeZone                   | localtime          | command line
> timezone_abbreviations     | Default            | command line
> work_mem                   | 50MB               | configuration file
>
> Setting the work_mem to 3000MB doesn't change anything...
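> For reference, work_mem can be raised for a single session without touching postgresql.conf (a minimal sketch):
>
> ```sql
> -- Raise work_mem for the current session only, then re-run the query.
> SET work_mem = '3000MB';
> ```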
>
> Everything seems good to me, but the Recheck Cond, because of the large amount of rows, is slowing the query too much. I have read that it is not a good idea to try to get rid of the recheck cond (maybe it's not even possible, I don't know, I'm new to PostgreSQL). I'd like to know what I am doing wrong and how I can solve it...
>
> Any help please?
>
> Thank you very much,
>
> Andrés
> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.
>
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance




