BUG REPORT - Mailing list pgsql-bugs

From Marcin Polak
Subject BUG REPORT
Date
Msg-id Pine.LNX.3.96.990319194851.1247A-200000@lokalik.on.the.Sun
List pgsql-bugs
============================================================================
                        POSTGRESQL BUG REPORT TEMPLATE
============================================================================


Your name        :Marcin Polak
Your email address    :marcin@indigo.pl


System Configuration
---------------------
  Architecture (example: Intel Pentium)      :Intel Pentium

  Operating System (example: Linux 2.0.26 ELF)     :Linux 2.0.35 ELF

  PostgreSQL version (example: PostgreSQL-6.4)  :PostgreSQL-6.4.2

  Compiler used (example:  gcc 2.8.0)        :egcs-1.1b-2


Please enter a FULL description of your problem:
------------------------------------------------

After creating an index on a text column (table with 1000000 records),
Postgres doesn't return the correct answer for a query using this index.

Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible:
----------------------------------------------------------------------

I created the database with a Python script (attached).
It runs:
"create table tt2_1 (ntt int, nttext int, ntttext varchar(50) )"
and then inserts 1000000 records.

Then I created an index:
create index i1 on tt2_1(ntttext); -- index on third column

Connected with psql, I run:
select * from tt2_1 where nttext = 123456 and ntttext =
'123456123456123456';

and the answer is:
   ntt|nttext|           ntttext
------+------+------------------
123456|123456|123456123456123456
(1 row)

and that's correct;
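It may help to compare which plan the planner picks for the working and the failing query; a hypothetical psql session (EXPLAIN exists in 6.4, but these statements are not from my original session):

```sql
-- Compare the chosen plan (index scan vs. sequential scan) for the
-- query that works and the query that returns zero rows.
EXPLAIN select * from tt2_1
        where nttext = 123456 and ntttext = '123456123456123456';
EXPLAIN select * from tt2_1
        where ntttext = '123456123456123456';
```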

But on:
marcin=> select * from tt2_1 where ntttext = '123456123456123456';
ntt|nttext|ntttext
---+------+-------
(0 rows)


and that's not right, am I right :-) ?
The same happens when I use a similar LIKE condition, like
"blah blah ntttext like '123456%' "


One more detail:

postgres is run with the parameters: -B 1000 -o -F


If you know how this problem might be fixed, list the solution below:
---------------------------------------------------------------------

Sorry :-(.


BTW, how big a database do you think Postgres can manage?

PS. An interesting observation: if I create six or seven tables (I
don't remember exactly how many) with 1000 to 5000 records each and run a
simple SELECT joining all of them, Postgres answers in a second (with
indices); with one more table, when GEQO kicks in, the answer takes 30
seconds. I can't see under which conditions it actually helps?
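If GEQO is what slows the larger join down, it can be tuned or switched off per session; as far as I remember the 6.x syntax is roughly this (please correct me):

```sql
-- Raise the number of tables at which the genetic optimizer takes over,
-- or disable it for the session (syntax from memory, may be inexact).
SET GEQO TO 'ON=10';
SET GEQO TO 'OFF';
```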

Regards from Poland
Marcin
#!/usr/bin/python

# Loader script (PyGreSQL): creates tt2_1 and fills it with 1000000 rows.
import pg

c = pg.connect("marcin")

for i in range(1, 2):   # just one table: tt2_1
  try:
    qs = "drop table tt2_%d" % i
    print qs
    c.query(qs)
  except:
    pass                # table may not exist on the first run
  qs = "create table tt2_%d (ntt int, nttext int, ntttext varchar(50) )" % i
  print qs
  c.query(qs)
  # range(1, 1000001) so that exactly 1000000 rows are inserted
  for j in range(1, 1000001):
    qs = "insert into tt2_%d values(%d, %d, '%d%d%d')" % (i, j, j, j, j, j)
    print qs
    c.query(qs)