Thread: cidr question (resent)

cidr question (resent)

From
"Mir S Islam"
Date:
Resending this. Apologies for any duplicates.


Maybe I am not understanding it right. Here is my problem.

I have a table like this

  Attribute   |           Type           | Modifier
--------------+--------------------------+----------
 ip           | cidr                     | not null

with following rows

       ip
---------------
 10.0.0.1/32
 10.0.0.50/32
 10.0.0.255/32
 11.0.0.0/32
 11.0.1.200/32
 11.0.1.20/32
(6 rows)

Should not the command
DELETE from ip_space where ip = '11.0.1.0/24'::cidr
delete the last two rows? But it does not delete anything. Basically I would
like to be able to delete a whole range or block of ip/network addresses.

I suppose I could rewrite the above SQL as
delete from ip_space where ip >= '11.0.1.0/24'::cidr and ip
<= '11.0.2.0/24'::cidr;
Then it works. But I should not have to do that, right?

Thanks in advance. Please email and post here.

Mir








Re: cidr question (resent)

From
Alex Pilosov
Date:
On Wed, 12 Dec 2001, Mir S Islam wrote:

> should not the command
> DELETE from ip_space where ip = '11.0.1.0/24'::cidr
> delete last two rows ? But it does not delete anything. Basically I would
> like to be able to delete a whole range or block of ip/network addresses.

You need:
DELETE from ip_space where ip << '11.0.1.0/24'::cidr

<< means 'is contained within' (i.e. the left value is a subnet of the right).

-alex



Re: cidr question (resent)

From
David Stanaway
Date:
There are some operators that you can use.

col1 << col2    BOOLEAN indicating if col1 is a subnet of col2
col1 <<= col2   BOOLEAN indicating if col1 is equal or a subnet of col2
col1 >> col2    BOOLEAN indicating if col1 is a supernet of col2
col1 >>= col2   BOOLEAN indicating if col1 is equal or a supernet of col2
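Applied to the table from the original post, the distinction looks like this (a sketch; `=` compares whole cidr values, network and mask bits, which is why `'11.0.1.0/24'` matches none of the /32 rows):

```sql
-- Rows whose address falls inside the 11.0.1.0/24 block:
SELECT ip FROM ip_space WHERE ip << '11.0.1.0/24'::cidr;
-- expected: 11.0.1.200/32 and 11.0.1.20/32

-- Delete the whole block; <<= would also remove an exact
-- 11.0.1.0/24 row, if one existed:
DELETE FROM ip_space WHERE ip <<= '11.0.1.0/24'::cidr;
```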


On Thursday, December 13, 2001, at 01:32  PM, Mir S Islam wrote:

> Resending this. Apologies for any duplicates.
>
>
> maybe I am not understanding it right. Here is my problem.
>
> I have a table like this
>
>   Attribute   |           Type           | Modifier
> --------------+--------------------------+----------
>  ip           | cidr                     | not null
>
> with following rows
>       ip
> ---------------
>  10.0.0.1/32
>  10.0.0.50/32
>  10.0.0.255/32
>  11.0.0.0/32
>  11.0.1.200/32
>  11.0.1.20/32
> (6 rows)
>
> should not the command
> DELETE from ip_space where ip = '11.0.1.0/24'::cidr
> delete last two rows ? But it does not delete anything. Basically I 
> would
> like to be able to delete a whole range or block of ip/network 
> addresses.
>
> I suppose I could rewrite the above sql as
> delete from ip_space where ip >='11.0.1.0/24'::cidr and ip
> <='11.0.2.0/24'::cidr;
> Then it works. But I should not have to do that. Right ?
>
> Thanks in advance. Please email and post here.
>
> Mir
>
>
>
>
>
>
>
>
>
--
Best Regards
David Stanaway
================================
Technology Manager
Australia's Premier Internet Broadcasters
Phone: +612 9357 1699
Fax: +612 9357 1169
Web: http://www.netventures.com.au
Support: support@netventures.com.au
================================
The Inspire Foundation is proudly supported by Net Ventures through the 
provision of streaming solutions for its national centres.  The Inspire 
Foundation is an Internet-based foundation that inspires young people to 
help themselves, get involved and get online. Please visit Inspire at 
http://www.inspire.org.au



performance tuning in large function / transaction

From
MindTerm
Date:
Hi all,
 I am writing a large function which performs more
than 2000 modifying statements (delete / insert). The
execution time of this function is not acceptable. How
can I configure postgresql (in postgresql.conf?) so
that the performance will be better? The machine
running postgresql has spare memory and hard disk
space.


M.T. 

__________________________________________________
Do You Yahoo!?
Check out Yahoo! Shopping and Yahoo! Auctions for all of
your unique holiday gifts! Buy at http://shopping.yahoo.com
or bid at http://auctions.yahoo.com


Re: performance tuning in large function / transaction

From
Jason Earl
Date:
Start here:

http://candle.pha.pa.us/main/writings/pgsql/hw_performance/index.html

If you have already fiddled around with these settings then share some
of the queries (and their plans).

Jason

MindTerm <mindterm@yahoo.com> writes:

> Hi all,
> 
>   I am writing a large function which perform more
> than 2000 modify statements ( delete / insert ) . The
> execution time of this function is not acceptable. How
> can I config postgresql ( in postgresql.conf ?? ) so
> that the performance will be better ? The machine
> runing postgresql have spare memory and harddisk
> space.
> 
> 
> M.T. 
> 
> 


Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Thu, 13 Dec 2001, MindTerm wrote:

> Hi all,
>
>   I am writing a large function which perform more
> than 2000 modify statements ( delete / insert ) . The
> execution time of this function is not acceptable. How
> can I config postgresql ( in postgresql.conf ?? ) so
> that the performance will be better ? The machine
> runing postgresql have spare memory and harddisk
> space.

It's hard to say without more information: is the function
an internal function, or do you mean an external one?
In the latter case, the first thing to do is make
sure you're not in autocommit mode (i.e., you've begun a
transaction), since otherwise you're doing 2000
separate transactions.
The first settings to look at in
postgresql.conf are shared_buffers and sort_mem.
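As a rough illustration only (the values below are made up for the example, not recommendations), those two settings live in postgresql.conf:

```
shared_buffers = 2048        # shared cache: 2048 x 8 kB pages = 16 MB
sort_mem = 4096              # per-sort working memory, in kB (7.x name)
```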



Re: performance tuning in large function / transaction

From
"Christopher Kings-Lynne"
Date:
Try running the entire set of statements within one transaction - that will
speed things up considerably.

You can also increase the number of buffers you are using, but that also
requires fiddling about with shared memory.
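A minimal sketch of the batching (the table and values here are placeholders, not from the thread):

```sql
BEGIN;                                      -- one transaction for the batch
DELETE FROM mytable WHERE id = 1;
INSERT INTO mytable (id, val) VALUES (1, 'new');
-- ... the remaining ~2000 delete/insert statements ...
COMMIT;                                     -- a single commit, one WAL flush
```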

Chris

> -----Original Message-----
> From: pgsql-sql-owner@postgresql.org
> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of MindTerm
> Sent: Friday, 14 December 2001 12:27 AM
> To: pgsql-sql@postgresql.org
> Subject: [SQL] performance tuning in large function / transaction
>
>
> Hi all,
>
>   I am writing a large function which perform more
> than 2000 modify statements ( delete / insert ) . The
> execution time of this function is not acceptable. How
> can I config postgresql ( in postgresql.conf ?? ) so
> that the performance will be better ? The machine
> runing postgresql have spare memory and harddisk
> space.
>
>
> M.T.
>
>
>



Re: performance tuning in large function / transaction

From
MindTerm
Date:
Hi,
 I am writing a function (plpgsql), which amounts to a
single transaction in postgresql (as far as I
know). So I think it is not in autocommit mode.
 I have added the following lines in postgresql.conf.

postgresql.conf:
====================
shared_buffers = 640
wal_buffers = 80


M.T.
 

--- Stephan Szabo <sszabo@megazone23.bigpanda.com>
wrote:
> On Thu, 13 Dec 2001, MindTerm wrote:
> 
> > Hi all,
> >
> >   I am writing a large function which perform more
> > than 2000 modify statements ( delete / insert ) .
> The
> > execution time of this function is not acceptable.
> How
> > can I config postgresql ( in postgresql.conf ?? )
> so
> > that the performance will be better ? The machine
> > runing postgresql have spare memory and harddisk
> > space.
> 
> It's hard without more information, is the function
> like
> an internal function or do you mean an external one?
> In the latter case, the first thing to do is make
> sure you're not in autocommit (ie, you've begun a
> transaction) since otherwise you're doing 2000
> separate transactions.
> The settings that are the first to look at in the
> postgresql.conf are shared_buffers and sort_mem.
> 
> 




Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Thu, 13 Dec 2001, MindTerm wrote:

>   I am writing a function (plpgsql) which equals to a
> single transaction transaction in postgresql ( as I
> known ). So I think that it is not a autocommmit mode.
>
>   I have add following lines in postgresql.conf.
>
> postgresql.conf:
> ====================
> shared_buffers = 640
> wal_buffers = 80

It depends on how much memory you have, but even 640 is pretty
low (at 8 kB per buffer that works out to only 5 MB).  Probably a few thousand
is better if you've got the RAM.



Re: performance tuning in large function / transaction

From
MindTerm
Date:
Hi all,
 I will try to assign more memory to shared memory,
say 50M ~ 100M .
 I am running something like this:

 cursor loop 1
  cursor loop 2
   cursor loop 3
    tmp = primary key
    delete tmp ....
    insert tmp ....
   end loop 3
  end loop 2
 end loop 1

Will deleting and inserting a record with the same primary key
within a transaction reduce the performance?

--- Stephan Szabo <sszabo@megazone23.bigpanda.com>
wrote:
> On Thu, 13 Dec 2001, MindTerm wrote:
> 
> >   I am writing a function (plpgsql) which equals
> to a
> > single transaction transaction in postgresql ( as
> I
> > known ). So I think that it is not a autocommmit
> mode.
> >
> >   I have add following lines in postgresql.conf.
> >
> > postgresql.conf:
> > ====================
> > shared_buffers = 640
> > wal_buffers = 80
> 
> It depends on how much memory you have, but even 640
> is pretty
> low (I think that works out to 5M).  Probably a few
> thousand
> is better if you've got the ram.
> 




Re: performance tuning in large function / transaction

From
"Christopher Kings-Lynne"
Date:
Hmmm...

I have a database server for a website for which I am 'dedicating' at least
128MB of ram (I could say that it can have 256MB)

I have max_connections 64
and shared_buffers 256
and sort_mem 1024

Is that really small?

I have this SHM config:

options         SYSVSHM
options         SYSVMSG
options         SYSVSEM

options         SHMMAXPGS=16384         # 64MB shared mem?
#options        SHMALL=1025             # max kb of shared mem
options         SHMSEG=256              # 256 shared segs per proc

options         SEMMNI=256              # 256 semaphore identifiers
options         SEMMNS=512              # 512 semaphores in the system
options         SEMMNU=256              # 256 undo structures in system
options         SEMMAP=256              # 256 entries in semaphore map

How do you calculate the shared memory required by postgres given the
shared_buffers value?

Chris

> -----Original Message-----
> From: pgsql-sql-owner@postgresql.org
> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of Stephan Szabo
> Sent: Friday, 14 December 2001 11:02 AM
> To: MindTerm
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] performance tuning in large function / transaction
>
>
> On Thu, 13 Dec 2001, MindTerm wrote:
>
> >   I am writing a function (plpgsql) which equals to a
> > single transaction transaction in postgresql ( as I
> > known ). So I think that it is not a autocommmit mode.
> >
> >   I have add following lines in postgresql.conf.
> >
> > postgresql.conf:
> > ====================
> > shared_buffers = 640
> > wal_buffers = 80
>
> It depends on how much memory you have, but even 640 is pretty
> low (I think that works out to 5M).  Probably a few thousand
> is better if you've got the ram.
>
>
>



Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Fri, 14 Dec 2001, Christopher Kings-Lynne wrote:

> Hmmm...
>
> I have a database server for a website for which I am 'dedicating' at least
> 128MB of ram (I could say that it can have 256MB)
>
> I have max_connections 64
> and shared_buffers 256
> and sort_mem 1024
>
> Is that really small?
>
> I have this SHM config:
>
> options         SYSVSHM
> options         SYSVMSG
> options         SYSVSEM
>
> options         SHMMAXPGS=16384         # 64MB shared mem?
> #options        SHMALL=1025             # max kb of shared mem
> options         SHMSEG=256              # 256 shared segs per proc
>
> options         SEMMNI=256              # 256 semaphore identifiers
> options         SEMMNS=512              # 512 semaphores in the system
> options         SEMMNU=256              # 256 undo structures in system
> options         SEMMAP=256              # 256 entries in semaphore map
>
> How do you calculate the shared memory required by postgres given the
> shared_buffers value???

IIRC each buffer is 8k, so 256 shared buffers (about 2 MB) is still pretty small.

sort_mem is weird due to the way it's used (I think it's per sort, so
a backend could use more than that if a query has multiple
sort steps), but I believe that's backend-local memory, not shared.



Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Thu, 13 Dec 2001, MindTerm wrote:

> Hi all,
>
>   I will try to assign more memory to shared memory,
> say 50M ~ 100M .

Forgot to mention.  This depends on how much memory you've
actually got in the machine, but keep a bunch for backend
local memory stuff and system disk caching too.  Going too
high on shared memory can also be bad.



Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Thu, 13 Dec 2001, MindTerm wrote:

> Hi all,
>
>   I will try to assign more memory to shared memory,
> say 50M ~ 100M .
>
>   I am runing something like that ..
>
>   cursor loop 1
>    cursor loop 2
>     cursor loop 3
>      tmp = primary key
>      delete tmp ....
>      insert tmp ....
>     end loop 3
>    end loop 2
>   end loop 3
>
> will delete and insert record with same primary key
> within a transaction reduce the performance ?

Hmm, it might; is the tmp different for each
combination of loops 1, 2 and 3?  I'm not really sure.
You might also want to watch the backend's memory
usage during the call to see how big it gets, just
to make sure there's not something bad happening.




Re: performance tuning in large function / transaction

From
MindTerm
Date:
Hi all,

  scenario:

  cursor 1 loop ( e.g. find student id )
   cursor 2 loop ( e.g. find courses of this student )
    cursor 3 loop ( e.g. update course information )
      delete course detail ...
      or
      delete course detail ... ( same primary key )
      insert course detail ... ( same primary key )
    end loop 3
   end loop 2
  end loop 1

 It did 75 delete actions and 140 update actions (delete
and insert). The process time was about 5-6 minutes,
while Oracle took 10 to 20 seconds.

postgresql.conf : 
=================
shared_buffers = 3200
wal_buffers = 80


Running on linux 7.1, 512M RAM.

M.T.




--- Stephan Szabo <sszabo@megazone23.bigpanda.com>
wrote:
> On Thu, 13 Dec 2001, MindTerm wrote:
> 
> > Hi all,
> >
> >   I will try to assign more memory to shared
> memory,
> > say 50M ~ 100M .
> >
> >   I am runing something like that ..
> >
> >   cursor loop 1
> >    cursor loop 2
> >     cursor loop 3
> >      tmp = primary key
> >      delete tmp ....
> >      insert tmp ....
> >     end loop 3
> >    end loop 2
> >   end loop 3
> >
> > will delete and insert record with same primary
> key
> > within a transaction reduce the performance ?
> 
> Hmm, that might, is the tmp different for each
> combination of 1,2 and 3?  I'm not really sure.
> You might also want to watch the backend's memory
> usage during the call to see how big it gets just
> to make sure there's not something bad happening.
> 
> 
> 




Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Fri, 14 Dec 2001, MindTerm wrote:

> Hi all,
>
>   senario:
>
>   cursor 1 loop ( e.g. find student id )
>    cursor 2 loop ( e.g. find courses of this student )
>     cursor 3 loop ( e.g. update course information )
>       delele course detail ...
>       or
>       delete course detail ... ( same primary key )
>       insert course detail ... ( same primary key )
>     end loop 3
>    end loop 2
>   end loop 1
>
>   It did 75 delete actions 140 update actions ( delete
> and insert ). The process time was about 5-6 minutes
> while oracle was 10 seconds to 20 seconds .

Can you send the actual function, the table schemas for the associated
tables, and the sizes of the tables involved?  (I figure the data itself is
probably not available, but we may be able to see what's going on then.)




Re: performance tuning in large function / transaction

From
MindTerm
Date:
Hi all,

  table :
  application : 220 rows ;
  usermenu    : 6055 rows ;

M.T.

--- Stephan Szabo <sszabo@megazone23.bigpanda.com>
wrote:
> On Fri, 14 Dec 2001, MindTerm wrote:
>
> > Hi all,
> >
> >   senario:
> >
> >   cursor 1 loop ( e.g. find student id )
> >    cursor 2 loop ( e.g. find courses of this
> student )
> >     cursor 3 loop ( e.g. update course information
> )
> >       delele course detail ...
> >       or
> >       delete course detail ... ( same primary key
> )
> >       insert course detail ... ( same primary key
> )
> >     end loop 3
> >    end loop 2
> >   end loop 1
> >
> >   It did 75 delete actions 140 update actions (
> delete
> > and insert ). The process time was about 5-6
> minutes
> > while oracle was 10 seconds to 20 seconds .
>
> Can you send the actual function and table schema
> for the associated
> tables and sizes of the tables involved (I figure
> the data itself is
> probably not available, but we may be able to see
> what's going on then)
>
>


-- Table: usermenu
CREATE TABLE "usermenu" (
  "um_user" numeric(11, 0) NOT NULL,
  "um_app_id" numeric(11, 0) NOT NULL,
  "um_order" varchar(50),
  "um_gif" varchar(100),
  "um_alt" varchar(50),
  "um_link" varchar(100),
  "um_level" varchar(5),
  "um_initial" varchar(1),
  "um_gallery_id" numeric(11, 0),
  "um_en_length" numeric(4, 0),
  "um_tc_length" numeric(4, 0),
  "um_sc_length" numeric(4, 0),
  "um_terminator" varchar(1),
  "um_menu" varchar(1),
  "um_en_name" varchar(1000),
  "um_tc_name" varchar(1000),
  "um_sc_name" varchar(1000),
  "um_ext" varchar(1),
  CONSTRAINT "usermenu_pkey" PRIMARY KEY ("um_user", "um_app_id"),
  CONSTRAINT "usermenu_fk1992931308295" FOREIGN KEY (um_app_id) REFERENCES "application" (app_id)
);

-- Table: application
CREATE TABLE "application" (
  "app_id" numeric(11, 0) NOT NULL,
  "app_en_name" varchar(100) NOT NULL,
  "app_tc_name" varchar(100),
  "app_sc_name" varchar(100),
  "app_menu" varchar(1),
  "app_installed" varchar(1),
  "app_order" varchar(50),
  "app_alt" varchar(50),
  "app_gif" varchar(100),
  "app_link" varchar(100),
  "app_initial" varchar(1),
  "app_gallery_id" numeric(11, 0),
  "app_terminator" varchar(1),
  "app_en_length" numeric(4, 0),
  "app_tc_length" numeric(4, 0),
  "app_sc_length" numeric(4, 0),
  "app_ext" varchar(1),
  "app_type" varchar(30),
  CONSTRAINT "application_pkey" PRIMARY KEY ("app_id")
);



BEGIN TRANSACTION;

drop function hku_usermenu( numeric, varchar, int, varchar );
CREATE FUNCTION hku_USERMENU ( numeric, varchar , int , varchar ) RETURNS varchar AS '
DECLARE
 vGroupId        ALIAS FOR $1 ;
 vUserID         ALIAS FOR $2 ;
 vGalleryId      ALIAS FOR $3 ;
 rSuccess        ALIAS FOR $4 ;
 vTmpUserID      user_group.ug_user_id%TYPE;
 vTmpGroupID     user_group.ug_id%TYPE;
 vTmpOrder       application.app_order%TYPE;
 vAclLevel       acl.acl_level%TYPE;
 vAppId          application.app_id%TYPE;
 vAppMenu        application.app_menu%TYPE;
 vAppInstalled   application.app_installed%TYPE;
 vAppTerminator  application.app_terminator%TYPE;
 vAppOrder       application.app_order%TYPE;
 vAppOrderLen    int:=0;
 vLength         int:=0;
 vOrder          application.app_order%TYPE;
 app_rec         application%ROWTYPE;
 cnt1            int:=0;
 cnt2            int:=0;
 vcnt            varchar(10):='''';
 v_str           varchar(1500):='''';
 c1              record ;
 c2              record ;
 tmptime         timestamp;
 t1              int := 0 ;
 t2              int := 0 ;
BEGIN

select timeofday() into tmptime ;
RAISE NOTICE ''[MT][START]HKU_USERMENU % % % %'', vGroupId, vUserID, vGalleryId, tmptime ;

  if vUserID = '' '' or vUserID = '''' or vUserID is null then
    vTmpGroupID := vGroupId;
  else
    vTmpGroupID := 0;
  end if;

  for c1  in
    select user_id from (
    (SELECT UG_USER_ID as user_id
     FROM   USER_GROUP
     WHERE  UG_ID          = vTmpGroupID
     and    GALLERY_ID     = vGalleryId)
    union
    (SELECT USER_ID as user_id
     FROM   HKU_USERS
     WHERE  text( USER_ID )   = vUserId
     and    GALLERY_ID     = vGalleryId)
    ) tmp
  loop
        vTmpUserID := text( c1.user_id );

        delete from usermenu where um_user = vTmpUserID ;
        for c2 in
        select ACL_APP, max(ACL_LEVEL) as acl_level, app.APP_ORDER, length(app.APP_ORDER) as app_order_length,
app.APP_MENU,app.app_installed, app.app_terminator 
        from   ACL, APPLICATION app
        where  ACL_APP            = app.APP_ID
        and    ACL_GALLERY_ID     = vGalleryId
        and    app.APP_GALLERY_ID = vGalleryId
        and    ACL_GRP in
          (select u.ug_id
           from   user_group u, user_group_master ug
           where  u.ug_user_id = vTmpUserID
           and    ug.ug_id     = u.ug_id
           and   (ug.deleted   = ''N'' or ug.deleted IS NULL )
           and    u.gallery_id = vGalleryID
           and    ug.gallery_id = vGalleryID
          )
        group by acl_app, app.app_order,app.APP_MENU, app.app_installed, app.app_terminator
        order  by app.app_order desc, ACL_LEVEL
        loop

          vAppId         := c2.ACL_APP ;
          vAclLevel      := c2.acl_level ;
          vAppOrder      := c2.APP_ORDER ;
          vAppOrderLen   := c2.app_order_length ;
          vAppMenu       := c2.APP_MENU ;
          vAppInstalled  := c2.app_installed ;
          vAppTerminator := c2.app_terminator ;

       vcnt := 0;
       vLength := vAppOrderLen;
       vTmpOrder  := vAppOrder;
       select count(*) into vcnt
       from   ACL, APPLICATION app
       where  ACL_APP            = app.app_id
       and    ACL_LEVEL          > 0
       and    APP_INSTALLED      = ''Y''
       and    ACL_GALLERY_ID     = vGalleryID
       and    app.APP_GALLERY_ID = vGalleryID
       and    substr(app.app_order,1,vAppOrderLen) = vAppOrder
       and    app.app_order      <> vAppOrder
       and    ACL_GRP in
              (select u.ug_id
               from   user_group u, user_group_master ug
               where  u.ug_user_id = vTmpUserID
               and    ug.ug_id     = u.ug_id
               and   (ug.deleted   = ''N'' or ug.deleted IS NULL)
               and    u.gallery_id = vGalleryID
               and    ug.gallery_id = vGalleryID);


       IF vAclLevel IS NULL or vAclLevel <= 0  or upper(vAppInstalled) = ''N'' THEN
         delete from usermenu where um_user = vTmpUserID and substr( um_order, 1, vAppOrderLen ) = vAppOrder;

         t1 = t1 + 1 ;

       ELSE IF vcnt > 0 or vAppTerminator = ''Y'' THEN
         for app_rec in SELECT * FROM APPLICATION WHERE app_order = vTmpOrder and  app_gallery_id = vGalleryId
         loop

           delete from usermenu where um_user = vTmpUserID and um_app_id = app_rec.app_id;
             insert into usermenu(um_user, um_app_id, um_order, um_level, um_menu, um_gallery_id,
                     um_gif, um_alt, um_link, um_terminator,
                     um_en_name, um_tc_name, um_sc_name, um_ext)
                    values (vTmpUserID, app_rec.app_id, app_rec.app_order, text( vAclLevel ) , app_rec.app_menu,
vGalleryId,
                     app_rec.app_gif, app_rec.app_alt, app_rec.app_link, app_rec.app_terminator,
                     app_rec.app_en_name, app_rec.app_tc_name, app_rec.app_sc_name, app_rec.app_ext);

/*RAISE NOTICE ''insert into usermenu(um_user, um_app_id, um_order, um_level, um_menu, um_gallery_id,
                     um_gif, um_alt, um_link, um_terminator,
                     um_en_name, um_tc_name, um_sc_name, um_ext)
                    values ( %, % , ''''%'''', % , ''''%'''', % , ''''%'''', ''''%'''' , ''''%'''', ''''%'''' ,
''''%'''',''''%'''' , ''''%'''', ''''%'''' ); ''                      
                    ,
                    vTmpUserID, app_rec.app_id, app_rec.app_order,  vAclLevel , app_rec.app_menu, vGalleryId,
                     app_rec.app_gif, app_rec.app_alt, app_rec.app_link, app_rec.app_terminator,
                     app_rec.app_en_name, app_rec.app_tc_name, app_rec.app_sc_name, app_rec.app_ext ; */

           t2 = t2 + 1 ;

           /* SELECT count(*) into rSuccess from usermenu  WHERE um_user = vTmpUserID  and   um_app_id =
app_rec.app_id;*/ 

           EXIT; /* run once (if c3%FOUND then) */

         end loop;
       END IF;
       END IF;
       vcnt := 0;
     end loop;

   end loop;

select timeofday() into tmptime ;
RAISE NOTICE ''[MT][STOP]HKU_USERMENU % % % % % %'', vGroupId, vUserID, vGalleryId, tmptime, t1, t2 ;


return ''Y'' ;

END;
' LANGUAGE 'plpgsql' ;

Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Fri, 14 Dec 2001, MindTerm wrote:

> Hi all,
>
>   table :
>   application : 220 rows ;
>   usermenu    : 6055 rows ;

Well, I'd need the other tables involved in the function
too, but to start with, try rewriting the queries in the
function to use EXISTS rather than IN (see the FAQ), or
bring them into a normal join with the other tables where
possible.
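A sketch of that rewrite, using a simplified version of the ACL lookup from the posted function (column and table names come from the function and schema earlier in the thread; the literal user id is a placeholder):

```sql
-- IN form: PostgreSQL 7.x executes the uncorrelated subselect poorly.
SELECT acl_app, max(acl_level)
FROM   acl
WHERE  acl_grp IN (SELECT u.ug_id
                   FROM   user_group u
                   WHERE  u.ug_user_id = 42)
GROUP  BY acl_app;

-- EXISTS form: correlated, and typically planned much better on 7.x.
SELECT acl_app, max(acl_level)
FROM   acl
WHERE  EXISTS (SELECT 1
               FROM   user_group u
               WHERE  u.ug_id      = acl.acl_grp
               AND    u.ug_user_id = 42)
GROUP  BY acl_app;
```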



Re: performance tuning in large function / transaction

From
MindTerm
Date:
Dear Stephan,

-- Table: user_group
CREATE TABLE "user_group" (
  "ug_id" numeric(10, 0),
  "ug_user_id" numeric(11, 0),
  "gallery_id" numeric(11, 0),
  "deleted" varchar(1) DEFAULT 'N',
  CONSTRAINT "ug_mid" FOREIGN KEY (ug_id) REFERENCES "user_group_master" (ug_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT "user_group_fk" FOREIGN KEY (ug_user_id) REFERENCES "i2users" (user_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE
);


-- Table: usermenu
CREATE TABLE "usermenu" (
  "um_user" numeric(11, 0) NOT NULL,
  "um_app_id" numeric(11, 0) NOT NULL,
  "um_order" varchar(50),
  "um_gif" varchar(100),
  "um_alt" varchar(50),
  "um_link" varchar(100),
  "um_level" varchar(5),
  "um_initial" varchar(1),
  "um_gallery_id" numeric(11, 0),
  "um_en_length" numeric(4, 0),
  "um_tc_length" numeric(4, 0),
  "um_sc_length" numeric(4, 0),
  "um_terminator" varchar(1),
  "um_menu" varchar(1),
  "um_en_name" varchar(1000),
  "um_tc_name" varchar(1000),
  "um_sc_name" varchar(1000),
  "um_ext" varchar(1),
  CONSTRAINT "usermenu_pkey" PRIMARY KEY ("um_user", "um_app_id"),
  CONSTRAINT "usermenu_fk1992931308295" FOREIGN KEY (um_app_id) REFERENCES "application" (app_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE
);


-- Table: application
CREATE TABLE "application" (
  "app_id" numeric(11, 0) NOT NULL,
  "app_en_name" varchar(100) NOT NULL,
  "app_tc_name" varchar(100),
  "app_sc_name" varchar(100),
  "app_menu" varchar(1),
  "app_installed" varchar(1),
  "app_order" varchar(50),
  "app_alt" varchar(50),
  "app_gif" varchar(100),
  "app_link" varchar(100),
  "app_initial" varchar(1),
  "app_gallery_id" numeric(11, 0),
  "app_terminator" varchar(1),
  "app_en_length" numeric(4, 0),
  "app_tc_length" numeric(4, 0),
  "app_sc_length" numeric(4, 0),
  "app_ext" varchar(1),
  "app_type" varchar(30),
  CONSTRAINT "application_pkey" PRIMARY KEY ("app_id")
);


-- Table: acl
CREATE TABLE "acl" (
  "acl_id" numeric(10, 0) DEFAULT 0 NOT NULL,
  "acl_app" numeric(10, 0),
  "acl_grp" numeric(10, 0),
  "acl_level" numeric(3, 0),
  "acl_gallery_id" numeric(11, 0),
  CONSTRAINT "acl_acl_app_key" UNIQUE ("acl_app", "acl_grp"),
  CONSTRAINT "acl_pkey" PRIMARY KEY ("acl_id"),
  CONSTRAINT "acl_fk9992931283980" FOREIGN KEY (acl_app) REFERENCES "application" (app_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT "acl_ug" FOREIGN KEY (acl_grp) REFERENCES "user_group_master" (ug_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE
);

-- Table: i2users
CREATE TABLE "i2users" (
  "user_id" numeric(11, 0) NOT NULL,
  "parent_id" numeric(11, 0) NOT NULL,
  "password" varchar(128) NOT NULL,
  "status" varchar(1),
  "tx_password" varchar(125),
  "login_name" varchar(125),
  "join_date" timestamp,
  "deleted_date_time" timestamp,
  "sc_title" varchar(50),
  "sc_department" varchar(50),
  "en_title" varchar(50),
  "en_department" varchar(50),
  "gallery_id" numeric(11, 0),
  "tc_title" varchar(50),
  "tc_department" varchar(50),
  "reject_reason" varchar(500),
  "approver" numeric(11, 0),
  "report_to" varchar(50),
  "the_level" numeric(11, 0),
  "modified_date" timestamp DEFAULT 'now',
  "modified_password" timestamp DEFAULT 'now',
  "max_prospect" numeric(11, 0),
  "ratio" numeric(11, 0),
  "location" varchar(20),
  "nickname" varchar(50),
  "team" varchar(255),
  "email_address" varchar(125),
  "mobile" varchar(20),
  "en_name" varchar(100),
  "tc_name" varchar(100),
  "sc_name" varchar(100),
  "approve_limit" numeric(14, 4),
  "default_language" numeric(1, 0),
  CONSTRAINT "i2users_pkey" PRIMARY KEY ("user_id"),
  CONSTRAINT "i2users_fk2995438601367" FOREIGN KEY (parent_id) REFERENCES "i2company" (company_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE
);

-- Table: user_group_master
CREATE TABLE "user_group_master" (
  "ug_id" numeric(11, 0) NOT NULL,
  "ug_en_name" varchar(50),
  "ug_tc_name" varchar(50),
  "ug_sc_name" varchar(50),
  "gallery_id" numeric(11, 0),
  "deleted" varchar(1) DEFAULT 'N',
  "code" varchar(10),
  "company_id" numeric(11, 0) NOT NULL,
  "ug_en_description" varchar(1000),
  "ug_tc_description" varchar(1000),
  "ug_sc_description" varchar(1000),
  CONSTRAINT "user_group_maste_gallery_id_key" UNIQUE ("gallery_id", "code", "company_id"),
  CONSTRAINT "user_group_master_pkey" PRIMARY KEY ("ug_id"),
  CONSTRAINT "user_group_ma_fk2996119764375" FOREIGN KEY (company_id) REFERENCES "i2company" (company_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE
);

All the tables are here.

M.T. 

--- Stephan Szabo <sszabo@megazone23.bigpanda.com>
wrote:
> 
> On Fri, 14 Dec 2001, MindTerm wrote:
> 
> > Hi all,
> >
> >   table :
> >   application : 220 rows ;
> >   usermenu    : 6055 rows ;
> 
> Well, I'd need the other tables involved in the
> function
> too, but to start with, try rewriting the queries in
> the
> function to use EXISTS rather than IN (see the faq),
> or
> bring them into a normal join with the other tables
> where
> possible.
> 
> 
> ---------------------------(end of
> broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to
majordomo@postgresql.org


__________________________________________________
Do You Yahoo!?
Check out Yahoo! Shopping and Yahoo! Auctions for all of
your unique holiday gifts! Buy at http://shopping.yahoo.com
or bid at http://auctions.yahoo.com


Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
> --- Stephan Szabo <sszabo@megazone23.bigpanda.com>
> wrote:
> >
> > On Fri, 14 Dec 2001, MindTerm wrote:
> >
> > > Hi all,
> > >
> > >   table :
> > >   application : 220 rows ;
> > >   usermenu    : 6055 rows ;
> >
> > Well, I'd need the other tables involved in the
> > function
> > too, but to start with, try rewriting the queries in
> > the
> > function to use EXISTS rather than IN (see the faq),
> > or
> > bring them into a normal join with the other tables
> > where
> > possible.

As a note, does rewriting the queries in the function
to use exists rather than in help any?
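The suggested rewrite converts an uncorrelated IN subquery into a correlated EXISTS. A minimal sketch of the pattern, with hypothetical table and column names:

```sql
-- Before: uncorrelated IN; the subquery output is scanned per outer row
SELECT o.*
FROM   orders o
WHERE  o.customer_id IN (SELECT c.id FROM customers c WHERE c.region = 'EU');

-- After: correlated EXISTS; the inner query can stop at the first match
SELECT o.*
FROM   orders o
WHERE  EXISTS (SELECT 1
               FROM   customers c
               WHERE  c.id = o.customer_id
               AND    c.region = 'EU');
```

The two forms return the same rows; whether EXISTS is actually faster depends on the indexes available to the inner query.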



Re: performance tuning in large function / transaction

From
"Christopher Kings-Lynne"
Date:
> As a note, does rewriting the queries in the function
> to use exists rather than in help any?

Is it true that the IN command is implemented sort of as a linked list
linear time search?  Is there any plan for a super-fast implementation of
'IN'?

Chris



Re: performance tuning in large function / transaction

From
Bruce Momjian
Date:
> > As a note, does rewriting the queries in the function
> > to use exists rather than in help any?
> 
> Is it true that the IN command is implemented sort of as a linked list
> linear time search?  Is there any plan for a super-fast implementation of
> 'IN'?

I keep asking for it and am told it is hard to do.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: performance tuning in large function / transaction

From
Tom Lane
Date:
"Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> Is it true that the IN command is implemented sort of as a linked list
> linear time search?  Is there any plan for a super-fast implementation of
> 'IN'?

This deserves a somewhat long-winded answer.

Postgres presently supports two kinds of IN (I'm not sure whether SQL92
allows any additional kinds):

1. Scalar-list IN:  foo IN ('bar', 'baz', 'quux', ...)

2. Sub-select IN:   foo IN (SELECT bar FROM ...)

In the scalar-list form, a variable is compared to an explicit list of
constants or expressions.  This form is exactly equivalent to
    foo = 'bar' OR foo = 'baz' OR foo = 'quux' OR ...
and is converted into that form by the parser.  The planner is capable
of converting a WHERE clause of this kind into multiple passes of
indexscan, when foo is an indexed column and all the IN-list elements
are constants.  Whether it actually will make that conversion depends
on the usual vagaries of pg_statistic entries, etc.  But if it's a
unique or fairly-selective index, and there aren't a huge number of
entries in the IN list, a multiple indexscan should be a good plan.
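As a concrete sketch of that transformation (table and column names hypothetical), these two queries are parsed into the same form:

```sql
SELECT * FROM t WHERE foo IN ('bar', 'baz', 'quux');
SELECT * FROM t WHERE foo = 'bar' OR foo = 'baz' OR foo = 'quux';
```

With an index on foo, both can become the multiple-pass indexscan described above.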

In the sub-select form, we pretty much suck: for each tuple in the outer
query, we run the inner query until we find a matching value or the
inner query ends.  This is basically a nested-loop scenario, with the
only (minimally) redeeming social value being that the planner realizes
it should pick a fast-start plan for the inner query.  I think it should
be possible to convert this form into a modified kind of join (sort of
the reverse of an outer join: rather than at least one result per
lefthand row, at most one result per lefthand row), and then we could
use join methods that are more efficient than nested-loop.  But no one's
tried to make that happen yet.
        regards, tom lane
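Until such a join type exists, the usual manual workaround is to pull the subquery into the FROM clause as a de-duplicated derived table (table and column names hypothetical):

```sql
-- Instead of: SELECT t.* FROM t WHERE t.foo IN (SELECT s.bar FROM s);
SELECT t.*
FROM   t
JOIN  (SELECT DISTINCT s.bar FROM s) AS sq ON t.foo = sq.bar;
```

The DISTINCT guarantees at most one join partner per lefthand row, so the join returns the same rows as the IN form while leaving the planner free to choose a merge or hash join.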


'IN' performance

From
"Christopher Kings-Lynne"
Date:
> In the sub-select form, we pretty much suck: for each tuple in the outer
> query, we run the inner query until we find a matching value or the
> inner query ends.  This is basically a nested-loop scenario, with the
> only (minimally) redeeming social value being that the planner realizes
> it should pick a fast-start plan for the inner query.  I think it should
> be possible to convert this form into a modified kind of join (sort of
> the reverse of an outer join: rather than at least one result per
> lefthand row, at most one result per lefthand row), and then we could
> use join methods that are more efficient than nested-loop.  But no one's
> tried to make that happen yet.

That's what I was thinking... whereabouts does all that activity happen?

I assume the planner knows that it doesn't have to reevaluate the subquery
if it's not correlated?

Chris



Re: 'IN' performance

From
Tom Lane
Date:
"Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> That's what I was thinking...where abouts does all that activity happen?

The infrastructure for different join rules already exists.  There'd
need to be a new JOIN_xxx type added to the various join nodes in the
executor, but AFAICS that's just a minor extension.  The part that is
perhaps not trivial is in the planner.  All the existing inner and outer
join types start out expressed as joins in the original query.  To make
IN into a join, the planner would have to hoist up a clause from WHERE
into the join-tree structure.  I think it can be done, but I have not
thought hard about where and how, nor about what semantic restrictions
might need to be checked.
        regards, tom lane


Re: performance tuning in large function / transaction

From
MindTerm
Date:
Dear Stephan,
 I found that the long execution time was due to the
following statement, which executes many times in a
loop:
      select count(*)
      from   ACL, APPLICATION app
      where  ACL_APP            = app.app_id
      and    APP_INSTALLED      = 'Y'
      and    ACL_LEVEL          > 0
      and    ACL_GALLERY_ID     = 1
      and    app.APP_GALLERY_ID = 1
      and    substr(app.app_order,1, 6 ) = '021101'
      and    app.app_order      <> '021101'
      and    ACL_GRP in
             (select u.ug_id
              from   user_group u, user_group_master ug
              where  u.ug_user_id = 5170
              and    ug.ug_id     = u.ug_id
              and   (ug.deleted   = 'N' or ug.deleted IS NULL)
              and    u.gallery_id = 1
              and    ug.gallery_id = 1 );


I ran EXPLAIN on it and got this result:

Aggregate  (cost=4836.61..4836.61 rows=1 width=24)
  ->  Nested Loop  (cost=0.00..4836.61 rows=2 width=24)
        ->  Index Scan using application_app_gallery_id on application app  (cost=0.00..3.05 rows=1 width=12)
        ->  Index Scan using acl_acl_gallery_id on acl  (cost=0.00..4830.80 rows=220 width=12)
              SubPlan
                ->  Materialize  (cost=6.10..6.10 rows=1 width=24)
                      ->  Nested Loop  (cost=0.00..6.10 rows=1 width=24)
                            ->  Index Scan using user_group_ug_user_id on user_group u  (cost=0.00..2.02 rows=1 width=12)
                            ->  Index Scan using user_group_master_gallery_id on user_group_master ug  (cost=0.00..4.07 rows=1 width=12)


After rewriting it to:

      select count(*)
      from   ACL a, APPLICATION app
      where  ACL_APP            = app.app_id
      and    APP_INSTALLED      = 'Y'
      and    ACL_LEVEL          > 0
      and    ACL_GALLERY_ID     = 1
      and    app.APP_GALLERY_ID = 1
      and    substr(app.app_order,1, 6 ) = '021101'
      and    app.app_order      <> '021101'
      and    exists
             (select u.ug_id
              from   user_group u, user_group_master ug
              where  a.ACL_GRP = u.ug_id
              and    u.ug_user_id = 5170
              and    ug.ug_id     = u.ug_id
              and   (ug.deleted   = 'N' or ug.deleted IS NULL)
              and    u.gallery_id = 1
              and    ug.gallery_id = 1 );


the EXPLAIN output was:

Aggregate  (cost=4836.69..4836.69 rows=1 width=24)
  ->  Nested Loop  (cost=0.00..4836.69 rows=2 width=24)
        ->  Index Scan using application_app_gallery_id on application app  (cost=0.00..3.05 rows=1 width=12)
        ->  Index Scan using acl_acl_gallery_id on acl a  (cost=0.00..4830.89 rows=220 width=12)
              SubPlan
                ->  Nested Loop  (cost=0.00..6.10 rows=1 width=24)
                      ->  Index Scan using user_group_ug_id on user_group u  (cost=0.00..2.02 rows=1 width=12)
                      ->  Index Scan using user_group_master_gallery_id on user_group_master ug  (cost=0.00..4.07 rows=1 width=12)


The performance shows no improvement.

The table ACL contains 106057 rows and has indexes on
acl_gallery_id, acl_grp, and acl_level.

The table APPLICATION contains 220 rows and has indexes on
app_gallery_id and app_order.

-- Table: acl
CREATE TABLE "acl" (
  "acl_id" numeric(10, 0) DEFAULT 0 NOT NULL,
  "acl_app" numeric(10, 0),
  "acl_grp" numeric(10, 0),
  "acl_level" numeric(3, 0),
  "acl_gallery_id" numeric(11, 0),
  CONSTRAINT "acl_acl_app_key" UNIQUE ("acl_app", "acl_grp"),
  CONSTRAINT "acl_pkey" PRIMARY KEY ("acl_id"),
  CONSTRAINT "acl_fk9992931283980" FOREIGN KEY (acl_app) REFERENCES "application" (app_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT "acl_ug" FOREIGN KEY (acl_grp) REFERENCES "user_group_master" (ug_id)
    ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE
);

-- Table: application
CREATE TABLE "application" (
  "app_id" numeric(11, 0) NOT NULL,
  "app_en_name" varchar(100) NOT NULL,
  "app_tc_name" varchar(100),
  "app_sc_name" varchar(100),
  "app_menu" varchar(1),
  "app_installed" varchar(1),
  "app_order" varchar(50),
  "app_alt" varchar(50),
  "app_gif" varchar(100),
  "app_link" varchar(100),
  "app_initial" varchar(1),
  "app_gallery_id" numeric(11, 0),
  "app_terminator" varchar(1),
  "app_en_length" numeric(4, 0),
  "app_tc_length" numeric(4, 0),
  "app_sc_length" numeric(4, 0),
  "app_ext" varchar(1),
  "app_type" varchar(30),
  CONSTRAINT "application_pkey" PRIMARY KEY ("app_id")
);

M.T.

--- Stephan Szabo <sszabo@megazone23.bigpanda.com>
wrote:
> > --- Stephan Szabo <sszabo@megazone23.bigpanda.com>
> > wrote:
> > >
> > > On Fri, 14 Dec 2001, MindTerm wrote:
> > >
> > > > Hi all,
> > > >
> > > >   table :
> > > >   application : 220 rows ;
> > > >   usermenu    : 6055 rows ;
> > >
> > > Well, I'd need the other tables involved in the
> > > function
> > > too, but to start with, try rewriting the
> queries in
> > > the
> > > function to use EXISTS rather than IN (see the
> faq),
> > > or
> > > bring them into a normal join with the other
> tables
> > > where
> > > possible.
> 
> As a note, does rewriting the queries in the
> function
> to use exists rather than in help any?
> 
> 


Re: performance tuning in large function / transaction

From
Stephan Szabo
Date:
On Mon, 17 Dec 2001, MindTerm wrote:

>        select count(*)
>        from   ACL a, APPLICATION app
>        where  ACL_APP            = app.app_id
>        and    APP_INSTALLED      = 'Y'
>        and    ACL_LEVEL          > 0
>        and    ACL_GALLERY_ID     = 1
>        and    app.APP_GALLERY_ID = 1
>        and    substr(app.app_order,1, 6 ) = '021101'
>        and    app.app_order      <> '021101'
>        and    exists
>               (select u.ug_id
>                from   user_group u, user_group_master
> ug
>                where  a.ACL_GRP = u.ug_id
>                and    u.ug_user_id = 5170
>                and    ug.ug_id     = u.ug_id
>                and   (ug.deleted   = 'N' or ug.deleted
> IS NULL)
>                and    u.gallery_id = 1
>                and    ug.gallery_id = 1 );

I don't know if it'll help, but does:

select count(*)
from   ACL a, APPLICATION app, user_group u, user_group_master ug
where  ACL_APP            = app.app_id
and    APP_INSTALLED      = 'Y'
and    ACL_LEVEL          > 0
and    ACL_GALLERY_ID     = 1
and    app.APP_GALLERY_ID = 1
and    substr(app.app_order,1, 6 ) = '021101'
and    app.app_order      <> '021101'
and    a.ACL_GRP = u.ug_id
and    u.ug_user_id = 5170
and    ug.ug_id     = u.ug_id
and   (ug.deleted   = 'N' or ug.deleted IS NULL)
and    u.gallery_id = 1
and    ug.gallery_id = 1;
 

give the same results as the original query.  Maybe
that form will work faster.  (I'm going to play with
it a little tomorrow, but since I don't have much data
in there, I'm not sure how well it'll translate)



Re: performance tuning in large function / transaction

From
MindTerm
Date:
Dear Stephan,
 Yes, you are correct. After implementing your
suggestion, the execution time was half the original
(4 minutes -> 2 minutes). Then I made some other
modifications, and execution took about one minute.
 The execution time for this type of operation is still
3-4 times longer compared to Oracle. :(

M.T.

--- Stephan Szabo <sszabo@megazone23.bigpanda.com>
wrote:
> On Mon, 17 Dec 2001, MindTerm wrote:
> 
> >        select count(*)
> >        from   ACL a, APPLICATION app
> >        where  ACL_APP            = app.app_id
> >        and    APP_INSTALLED      = 'Y'
> >        and    ACL_LEVEL          > 0
> >        and    ACL_GALLERY_ID     = 1
> >        and    app.APP_GALLERY_ID = 1
> >        and    substr(app.app_order,1, 6 ) =
> '021101'
> >        and    app.app_order      <> '021101'
> >        and    exists
> >               (select u.ug_id
> >                from   user_group u,
> user_group_master
> > ug
> >                where  a.ACL_GRP = u.ug_id
> >                and    u.ug_user_id = 5170
> >                and    ug.ug_id     = u.ug_id
> >                and   (ug.deleted   = 'N' or
> ug.deleted
> > IS NULL)
> >                and    u.gallery_id = 1
> >                and    ug.gallery_id = 1 );
> 
> I don't know if it'll help, but does:
> 
> select count(*)
>  from ACL a, APPLICATION app, user_group u,
>  user_group_master ug
>         where  ACL_APP            = app.app_id
>         and    APP_INSTALLED      = 'Y'
>         and    ACL_LEVEL          > 0
>         and    ACL_GALLERY_ID     = 1
>         and    app.APP_GALLERY_ID = 1
>         and    substr(app.app_order,1, 6 ) =
> '021101'
>         and    app.app_order      <> '021101'
>         and    a.ACL_GRP = u.ug_id
>                 and    u.ug_user_id = 5170
>                 and    ug.ug_id     = u.ug_id
>                 and   (ug.deleted   = 'N' or
> ug.deleted
>  IS NULL)
>                 and    u.gallery_id = 1
>                 and    ug.gallery_id = 1;
> 
> give the same results as the original query.  Maybe
> that form will work faster.  (I'm going to play with
> it a little tomorrow, but since I don't have much
> data
> in there, I'm not sure how well it'll translate)
> 
> 