Re: I could not get postgres to utilizy indexes - Mailing list pgsql-performance

From Leeuw van der, Tim
Subject Re: I could not get postgres to utilizy indexes
Date
Msg-id BF88DF69D9E2884B9BE5160DB2B97A85010D6D03@nlshl-exch1.eu.uis.unisys.com
Whole thread Raw
In response to I could not get postgres to utilizy indexes  ("Igor Artimenko" <iartimenko@bluecatnetworks.com>)
Responses Re: I could not get postgres to utilizy indexes
List pgsql-performance
Hi,

You asked this very same question yesterday, and I believe you got some useful answers. Why post it again?

You don't even mention your previous post, and you didn't continue the thread which you started yesterday.

Did you try out any of the suggestions which you got yesterday? Do you have further questions about, for instance, how
to do casting of values? If so, please continue posting in the previous thread, rather than reposting the same
question with a different subject.
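For what it's worth, here is a sketch of the usual casting workaround on 7.4 (the suggestions from the earlier thread are not quoted here, so this is an assumption about what was recommended): a bare integer literal is typed int4, which does not match an int8 column's index, so the planner falls back to a sequential scan. Casting the literal, or quoting it so the parser infers the type, lets the index be used.

```sql
-- Original form: the int4 literal does not match the int8 index
-- on PostgreSQL 7.4, so a sequential scan is chosen:
EXPLAIN ANALYZE SELECT id FROM test WHERE id = 50000;

-- Workaround 1: cast the literal explicitly to int8:
EXPLAIN ANALYZE SELECT id FROM test WHERE id = 50000::int8;

-- Workaround 2: quote the literal and let the parser infer int8:
EXPLAIN ANALYZE SELECT id FROM test WHERE id = '50000';
```

(PostgreSQL 8.0 and later resolve such cross-type comparisons automatically, so the cast is only needed on older releases.)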

regards,

--Tim


-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org]On Behalf Of Igor
Artimenko
Sent: dinsdag 17 augustus 2004 16:23
To: pgsql-performance@postgresql.org
Subject: [PERFORM] I could not get postgres to utilizy indexes


Hi everybody!

I can't make use of indexes even though I tried the same test while changing different settings in postgresql.conf, like geqo
on/off and the geqo-related parameters, enable_seqscan on/off, and so on. The result is the same.

Here is test itself:

I've created the simplest table, test, and executed the same statement "explain analyze select id from test where id =
50000;". A few times I added 100,000 records, ran vacuum full, and issued the above explain command.
Postgres uses a sequential scan instead of an index scan.
Of course the time to execute the same statement constantly grows. In my mind an index should not allow the time to grow so much.

Why does Postgres not utilize the primary unique index?
What am I missing? The runtime continues growing even with 1,200,000 records. It should at least start using the index at some
point.


Details are below:
100,000 records:
QUERY PLAN
----------------------------------------------------------------------------------------------------
 Seq Scan on test  (cost=0.00..2427.00 rows=2 width=8) (actual time=99.626..199.835 rows=1 loops=1)
   Filter: (id = 50000)
 Total runtime: 199.990 ms

200,000 records:
QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Seq Scan on test  (cost=0.00..4853.00 rows=2 width=8) (actual time=100.389..402.770 rows=1 loops=1)
   Filter: (id = 50000)
 Total runtime: 402.926 ms


300,000 records:
QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Seq Scan on test  (cost=0.00..7280.00 rows=1 width=8) (actual time=100.563..616.064 rows=1 loops=1)
   Filter: (id = 50000)
 Total runtime: 616.224 ms
(3 rows)

I've created test table by script:

CREATE SEQUENCE next_id_seq
  INCREMENT 1
  MINVALUE 1
  MAXVALUE 10000000000
  START 1
  CACHE 5
  CYCLE;

CREATE TABLE test
(
  id int8 NOT NULL DEFAULT nextval('next_id_seq'::text) UNIQUE,
  description char(50),
  CONSTRAINT users_pkey PRIMARY KEY (id)
);

I use Postgres 7.4.2.




---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

               http://archives.postgresql.org
