Thread: quest:optimization with indexes

quest:optimization with indexes

From: Clayton Cottingham
Hello,

I'm looking for info/help on creating indexes. I'm using 6.5, and we have
a table with over 100,000 rows.

Whether we put in an index or not, EXPLAIN says it still does a seq scan,
and the same amount of time is taken!

I've tried putting the index on just the oid, and on a combo of some fields
of our data, using btree.

What rules should be followed for setting up a good indexing
scheme under pg?




Re: [SQL] quest:optimization with indexes

From: Herouth Maoz
At 01:42 +0300 on 11/08/1999, Clayton Cottingham wrote:


> Whether we put in an index or not, EXPLAIN says it still does a seq scan,
> and the same amount of time is taken!
>
> I've tried putting the index on just the oid, and on a combo of some fields
> of our data, using btree.
>
> What rules should be followed for setting up a good indexing
> scheme under pg?

You should put indices on the fields mentioned in your query's WHERE clause.

You should try to avoid using functions on the fields in a WHERE clause.
For example, even if you have an index on the field "surname" in your
table, a query with WHERE upper(surname) = 'SMITH' will not use that
index. You can define an index on the expression upper(surname) if this
sort of query is common.
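
For example (a sketch only, with a made-up table `people` and column
`surname`; exact functional-index syntax may vary with your version):

```sql
-- A plain index: used by "WHERE surname = 'Smith'",
-- but NOT by "WHERE upper(surname) = 'SMITH'".
CREATE INDEX people_surname_idx ON people (surname);

-- A functional index on upper(surname): lets a query like
--   SELECT * FROM people WHERE upper(surname) = 'SMITH';
-- use an index scan instead of a sequential scan.
CREATE INDEX people_upper_surname_idx ON people (upper(surname));
```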

The same goes for operators such as LIKE. LIKE will only use the index if
the expression is anchored to the beginning of the string, as in WHERE
surname LIKE 'Smi%', but not WHERE surname LIKE '%mit%'.
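
You can check which plan you are getting with EXPLAIN (again, table and
column names here are hypothetical):

```sql
-- Anchored pattern: the planner can use an index on surname.
EXPLAIN SELECT * FROM people WHERE surname LIKE 'Smi%';

-- Unanchored pattern: forces a sequential scan.
EXPLAIN SELECT * FROM people WHERE surname LIKE '%mit%';
```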

And always keep in mind that adding indices penalizes you on insert and update.

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma




Re: [SQL] quest:optimization with indexes

From: "Ross J. Reedstrom"
On Wed, Aug 11, 1999 at 04:30:13PM +0300, Herouth Maoz wrote:
> At 01:42 +0300 on 11/08/1999, Clayton Cottingham wrote:
> >
> > What rules should be followed for setting up a good indexing
> > scheme under pg?
> 
<snipped Herouth's good summation of the rules of indices>

One thing to add: after adding your index, VACUUM ANALYZE your table, or else
the optimizer may think you've only got a few rows in your table and refuse
to use the indices, because it thinks sequential scans are relatively cheap.
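
That is (substitute your own table name):

```sql
-- Rebuild the planner's statistics so it knows how big the table is:
VACUUM ANALYZE mytable;

-- Then check whether the index is actually being used:
EXPLAIN SELECT * FROM mytable WHERE id = 42;
```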

Ross
-- 
Ross J. Reedstrom, Ph.D., <reedstrm@rice.edu> 
NSBRI Research Scientist/Programmer
Computer and Information Technology Institute
Rice University, 6100 S. Main St.,  Houston, TX 77005


Re: [SQL] quest:optimization with indexes

From: Clayton Cottingham
Thanks all, I actually was able to piece this all together last night!

What I did was create another column populated from a nextval() sequence,
and I'm using that as the index. Instead of over a thousand seconds, I am
now getting results in about 2 seconds using the index.
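
In case it's useful to anyone, the steps were roughly as follows (the table
and column names here are made up):

```sql
-- Create a sequence and a new column filled from it:
CREATE SEQUENCE big_table_id_seq;
ALTER TABLE big_table ADD COLUMN id int4;
UPDATE big_table SET id = nextval('big_table_id_seq');

-- Index the new column and refresh the planner's statistics:
CREATE INDEX big_table_id_idx ON big_table (id);
VACUUM ANALYZE big_table;
```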

Thanks for all those LIKE tips. I was wondering: would using a regex be the
same as far as the indexing rules go?






Re: [SQL] quest:optimization with indexes

From: Tom Lane
Clayton Cottingham <drfrog@smartt.com> writes:
> Thanks for all those LIKE tips. I was wondering: would using a regex be the
> same as far as the indexing rules go?

It's equivalent --- an index can help if the pattern is anchored to the
start of the string.  Of course the pattern notation is different...
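
For instance (hypothetical table and column names):

```sql
-- Anchored regex: the planner can use an index on surname.
EXPLAIN SELECT * FROM people WHERE surname ~ '^Smi';

-- Unanchored regex: sequential scan.
EXPLAIN SELECT * FROM people WHERE surname ~ 'mit';
```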
        regards, tom lane


Re: [SQL] quest:optimization with indexes

From: Clayton Cottingham
Tom Lane wrote:

> Clayton Cottingham <drfrog@smartt.com> writes:
> > Thanks for all those LIKE tips. I was wondering: would using a regex be the
> > same as far as the indexing rules go?
>
> It's equivalent --- an index can help if the pattern is anchored to the
> start of the string.  Of course the pattern notation is different...
>
>                         regards, tom lane

What about helper software like glimpse, as per my other thread?

I was told to use glimpse because full text indexing (fti) isn't robust
enough to handle large DBs.

What do you think the most optimized way of handling this sort of thing
would be?



Re: [SQL] quest:optimization with indexes

From: Tom Lane
Clayton Cottingham <drfrog@smartt.com> writes:
> What about helper software like glimpse, as per my other thread?
> I was told to use glimpse because full text indexing (fti) isn't robust
> enough to handle large DBs.

AFAIK there isn't any real good way to use glimpse in combination with
Postgres.  Glimpse is a file-granularity indexer --- that is, if you are
searching for a word, say "foobar", then glimpse's index tells it which
files within a large directory tree contain that word, and then it goes
and runs "grep foobar" on each such file to find the actual occurrences
of that word.  Glimpse works great for large, slowly changing
collections of files (I say "slowly changing" because updating the index
is fairly expensive).  For instance, I use it all the time to search the
Postgres source files.  But it'd be quite useless for a Postgres
database where all the data is in one table (= one file).  Also, there's
no natural equivalent of file granularity in a database; if you did
adapt glimpse to know about the inner structure of a Postgres database,
you'd end up indexing at the record level, which'd be a mighty
inefficient way to use glimpse.

If you do have a moderately large collection of slowly-changing
documents, I'd suggest storing them as a plain tree of files, and using
glimpse and Postgres separately as index-maintaining tools.  (That is,
your database would just contain references to the files, plus whatever
information you wanted to manipulate with SQL operations.)
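
A minimal sketch of such a setup (table and column names are illustrative
only):

```sql
-- The documents themselves live in the filesystem, where glimpse
-- indexes and greps them; the database just keeps a pointer plus
-- whatever metadata you want to query with SQL:
CREATE TABLE documents (
    id     int4,      -- filled from a sequence, and indexed
    path   text,      -- filesystem path within the glimpse-indexed tree
    title  text,
    added  datetime
);
CREATE INDEX documents_id_idx ON documents (id);
```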
        regards, tom lane