How good I can get - Mailing list pgsql-performance

From Vernon Wu
Subject How good I can get
Date
Msg-id 65TWVMK3ZVRRMF0DYTHF74KGKE08CA.3e24680f@kimiko
List pgsql-performance
In my current project, of the more than twenty main tables, three are master tables and the rest are multivalued detail tables. All of those tables have a field named userid of the varchar data type.

A selection statement with one level of subquery can involve up to twenty tables. After some query tuning and indexing (on all of the multivalued fields of all the detail tables), query performance improved spectacularly. The following is the output of "explain analyze" on a query involving ten tables.

NOTICE:  QUERY PLAN:

Result  (cost=0.00..23.42 rows=1 width=939) (actual time=28.00..28.00 rows=0 loops=1)
. . .
Total runtime: 31.00 msec

I would like to find any remaining performance improvement potential in the DB table design (indexing can always be done later). The first thing I know I can do is change the join key, userid, to a numeric type. Since implementing that change requires some work on the system, I would like to know how significant a performance improvement it can bring.
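As a sketch of what that change might look like (the table names `master` and `detail` here are placeholders, not from the actual schema):

```sql
-- Hypothetical migration sketch: add an integer surrogate key,
-- backfill it from the existing varchar userid, and index it.
ALTER TABLE master ADD COLUMN uid serial;        -- new numeric key
ALTER TABLE detail ADD COLUMN uid integer;

UPDATE detail d
   SET uid = m.uid
  FROM master m
 WHERE m.userid = d.userid;                      -- backfill from old key

CREATE INDEX detail_uid_idx ON detail (uid);     -- joins now use uid
```

Joins on a fixed-width integer key avoid the per-row varchar comparisons, which is where the gain would come from.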

Almost all fields of those tables hold a single-digit character. I would guess that changing them to a numeric type could also improve selection performance; my question, again, is how much. The application is a web application, and all data on a page is a string (text) type. All numeric data would have to be parsed from a string on the way to the back end and converted back into a string on the way out, so the change would add some overhead in the application.
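One way to keep that overhead out of the application layer, assuming the values fit in an integer: the web layer can continue to pass text and let the database perform a single cast per query (column and table names below are placeholders):

```sql
-- Hypothetical example: the parameter still arrives as text;
-- one explicit cast converts it for the integer join key.
SELECT *
  FROM detail
 WHERE uid = CAST('1234' AS integer);
```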

Thanks for your input.

Vernon




