Thread: Resources

Resources

From
Gurudutt
Date:
Hi,

I have a Pentium III server running RHL 7.1 with 256 MB RAM.

The following is the output of the "top" command for a query that
fetches from a table with at most about 10,000 rows.

-------------------------------------TOP------------------------------
  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND

 3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58 postmaster
 1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
 3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
    1 root       8   0   544  544   472 S    0.0  0.2   0:04 init
    2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
    3 root


Now, my question is: it takes ages (2 minutes) for the query to run (on a
regularly VACUUM ANALYZEd database), and if you look closely at the
resources consumed by postgres, it is taking almost 100% of the CPU time.

How can we make it faster and consume fewer resources?

Can anybody suggest the steps they take to make time-critical
applications run efficiently?

Thanks in advance

-- 
Best regards,
 Gurudutt                          mailto:guru@indvalley.com

Life is not fair - get used to it.
Bill Gates



Re: Resources

From
Jason Earl
Date:
You've shown that the query takes up a lot of CPU, and that it is
slower than you would like.  We have an idea as to how big the table
is (not very), and we know that you have done your homework and have
vacuum analyzed.

Next we need the query in question (so we can check for some of the
more well known performance pitfalls like using IN on large result
sets), and also the explain output (so we can see what PostgreSQL
thinks of your query).  You also might want to consider including the
schema of the tables involved.

Jason

Gurudutt <guru@indvalley.com> writes:

> Hi,
> 
> I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
> 
> The following is output of the "top" command for query which involves
> fetch from a table with about MAX of 10,000 rows.
> 
> -------------------------------------TOP------------------------------
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
> 
>  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58 postmaster
>  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
>  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
>     1 root       8   0   544  544   472 S     0.0  0.2   0:04 init
>     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
>     3 root
> 
> 
> Now, my question is, it takes ages(2 mints) for the query to run (regularly
> VACUUM ANALYZED Database) and if you look closely at the resources
> consumed by the postgres, it is almost taking away 100% CPU time.
> 
> How can we make it faster and to consume less resources ??
> 
> Can anybody suggest the steps they are taking for time-critical
> applications to run efficiently.
> 
> Thanks in advance
> 
> -- 
> Best regards,
>  Gurudutt                          mailto:guru@indvalley.com
> 
> Life is not fair - get used to it.
> Bill Gates
> 
> 
> ---------------------------(end of broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org


Re: Resources

From
Stephan Szabo
Date:
On Thu, 10 Jan 2002, Gurudutt wrote:

> I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
>
> The following is output of the "top" command for query which involves
> fetch from a table with about MAX of 10,000 rows.
>
> -------------------------------------TOP------------------------------
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>
>  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58 postmaster
>  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
>  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
>     1 root       8   0   544  544   472 S     0.0  0.2   0:04 init
>     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
>     3 root
>
>
> Now, my question is, it takes ages(2 mints) for the query to run (regularly
> VACUUM ANALYZED Database) and if you look closely at the resources
> consumed by the postgres, it is almost taking away 100% CPU time.
>
> How can we make it faster and to consume less resources ??
>
> Can anybody suggest the steps they are taking for time-critical
> applications to run efficiently.

An important thing is checking the explain output for the query.  If you
want to  post the schema, query and explain output, we might be able to
come up with suggestions on that level.

You may also want to look at your ram usage.  The default shared buffers
and sort memory are very low and you'll probably want to raise them.
You don't want to make them too big because you want to leave memory for
disk caching, but you can raise them to low thousands and see if that
helps.
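
For example, in postgresql.conf (restart the postmaster afterwards; these
numbers are only an illustrative sketch, not tuned values, and larger
shared_buffers may require raising the kernel's SHMMAX):

    # defaults in 7.x are very small
    shared_buffers = 2048          # 8 kB buffers each, so roughly 16 MB
    sort_mem = 4096                # kB available per sort before spilling to disk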



Re: Resources

From
"Nick Fankhauser"
Date:
I think we'll need a lot more of the particulars about this query to help.
The query itself & the results of an "explain" on the query are a good
starting place.

Unless the query is really unusual, that seems like an excessive amount of
time, which can probably be improved.

Please give us some of the details.

Regards,

-Nick



> -----Original Message-----
> From: pgsql-sql-owner@postgresql.org
> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of Gurudutt
> Sent: Thursday, January 10, 2002 8:33 AM
> To: pgsql-sql@postgresql.org
> Subject: [SQL] Resources
>
>
> Hi,
>
> I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
>
> The following is output of the "top" command for query which involves
> fetch from a table with about MAX of 10,000 rows.
>
> -------------------------------------TOP------------------------------
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>
>  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58 postmaster
>  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
>  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
>     1 root       8   0   544  544   472 S     0.0  0.2   0:04 init
>     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
>     3 root
>
>
> Now, my question is, it takes ages(2 mints) for the query to run
> (regularly
> VACUUM ANALYZED Database) and if you look closely at the resources
> consumed by the postgres, it is almost taking away 100% CPU time.
>
> How can we make it faster and to consume less resources ??
>
> Can anybody suggest the steps they are taking for time-critical
> applications to run efficiently.
>
> Thanks in advance
>
> --
> Best regards,
>  Gurudutt                          mailto:guru@indvalley.com
>
> Life is not fair - get used to it.
> Bill Gates
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
>



Re: Resources

From
Wei Weng
Date:
I have a side question. How do I stop this kind of crazy query from sucking
up my CPU power if it is fed by a database driver (like ODBC, for example)?

Since kill -9 on the postmaster is strongly discouraged, can I do a
/sbin/service postgresql stop to force it to shut down? (I am a Red Hat
user.)

Thank you!

On Thu, 10 Jan 2002 08:12:12 -0800 (PST)
Stephan Szabo <sszabo@megazone23.bigpanda.com> wrote:

> On Thu, 10 Jan 2002, Gurudutt wrote:
> 
> > I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
> >
> > The following is output of the "top" command for query which involves
> > fetch from a table with about MAX of 10,000 rows.
> >
> > -------------------------------------TOP------------------------------
> >   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
> >
> >  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58 postmaster
> >  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
> >  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
> >     1 root       8   0   544  544   472 S    0.0  0.2   0:04 init
> >     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
> >     3 root
> >
> >
> > Now, my question is, it takes ages(2 mints) for the query to run
> > (regularly VACUUM ANALYZED Database) and if you look closely at the
> > resources consumed by the postgres, it is almost taking away 100% CPU
> > time.
> >
> > How can we make it faster and to consume less resources ??
> >
> > Can anybody suggest the steps they are taking for time-critical
> > applications to run efficiently.
> 
> An important thing is checking the explain output for the query.  If you
> want to  post the schema, query and explain output, we might be able to
> come up with suggestions on that level.
> 
> You may also want to look at your ram usage.  The default shared buffers
> and sort memory are very low and you'll probably want to raise them.
> You don't want to make them too big because you want to leave memory for
> disk caching, but you can raise them to low thousands and see if that
> helps.
> 
> 
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
> 


-- 
Wei Weng
Network Software Engineer
KenCast Inc.



Re: Resources

From
"SHELTON,MICHAEL (Non-HP-Boise,ex1)"
Date:
Try "pg_ctl stop" -- you can get the usage and explanation in the idocs:

http://www.postgresql.org/idocs/index.php?app-pg-ctl.html

Mike

-----Original Message-----
From: Wei Weng [mailto:wweng@kencast.com]
Sent: Thursday, January 10, 2002 1:34 PM
To: Stephan Szabo
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Resources


I have a side question. How do I stop this kind of crazy query to suck up
my CPU power if it is fed by a database driver? (like ODBC, for example)

Since kill -9 postmaster is highly not recommended, can i do a
/sbin/service postgresql stop to force it to shut down? (I am a redhat
user.)

Thank you!

On Thu, 10 Jan 2002 08:12:12 -0800 (PST)
Stephan Szabo <sszabo@megazone23.bigpanda.com> wrote:

> On Thu, 10 Jan 2002, Gurudutt wrote:
> 
> > I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
> >
> > The following is output of the "top" command for query which involves
> > fetch from a table with about MAX of 10,000 rows.
> >
> > -------------------------------------TOP------------------------------
> >   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
> >
> >  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58 postmaster
> >  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
> >  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
> >     1 root       8   0   544  544   472 S    0.0  0.2   0:04 init
> >     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
> >     3 root
> >
> >
> > Now, my question is, it takes ages(2 mints) for the query to run
> > (regularly VACUUM ANALYZED Database) and if you look closely at the
> > resources consumed by the postgres, it is almost taking away 100% CPU
> > time.
> >
> > How can we make it faster and to consume less resources ??
> >
> > Can anybody suggest the steps they are taking for time-critical
> > applications to run efficiently.
> 
> An important thing is checking the explain output for the query.  If you
> want to  post the schema, query and explain output, we might be able to
> come up with suggestions on that level.
> 
> You may also want to look at your ram usage.  The default shared buffers
> and sort memory are very low and you'll probably want to raise them.
> You don't want to make them too big because you want to leave memory for
> disk caching, but you can raise them to low thousands and see if that
> helps.
> 
> 
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
> 


-- 
Wei Weng
Network Software Engineer
KenCast Inc.


---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to majordomo@postgresql.org so that your
message can get through to the mailing list cleanly


Re: Resources

From
Gurudutt
Date:
Hi,

Please find the table schemas, queries and also explain on those
queries.

############### TABLES SCHEMA ##########################

/* --------------------------------------------------------
   Table structure for table "subscribertab"
   -------------------------------------------------------- */
CREATE TABLE "subscribertab" (
  "custcode" int8 NOT NULL,
  "netcode" int8 NOT NULL,
  "custname" varchar(100) NOT NULL,
  "address1" varchar(100) NOT NULL,
  "address2" varchar(100),
  "citycode" int8 NOT NULL,
  "pincode" int8 NOT NULL,
  "telephoneno" varchar(15),
  "emailid" varchar(50),
  "typeconnection" char(1) NOT NULL,
  "pagerno" varchar(10),
  "mobileno" varchar(10),
  "notv" int8,
  "nocomp" int8,
  "userid" varchar(12),
  "packagecode" int8,
  "techcode" int8,
  "cablesubamount" float8,
  "esn" varchar(25),
  "contactperson" varchar(50),
  "rfctx" float8,
  "rfcrx" float8,
  "rfstx" float8,
  "rfsrx" float8,
  "latencygateway" float8,
  "latencyrouter" float8,
  "pitstop" float8,
  "topsites" varchar(150),
  "usagetime" varchar(100),
  "typicalusage" varchar(100),
  "activatedate" date,
  CONSTRAINT "subscribertab_pkey" PRIMARY KEY ("custcode")
);

/* --------------------------------------------------------
   Table structure for table "tickettab"
   -------------------------------------------------------- */
CREATE TABLE "tickettab" (
  "tokenid" varchar(25) NOT NULL,
  "subcode" int8,
  "techcode" int8,
  "agentid" varchar(12) NOT NULL,
  "hmsupcode" int8,
  "problemstmt" text,
  "arrivaldate" date NOT NULL,
  "arrivaltime" time NOT NULL,
  "attendeddate" date,
  "attendedtime" time,
  "statuscode" int8,
  "shiftcode" int8,
  "catcode" int8,
  "othercat" varchar(50),
  "remarks" text,
  "updatedby" varchar(12),
  "routedby" varchar(12),
  "routeddate" date,
  "routedtime" time,
  "solonline" char(1),
  CONSTRAINT "tickettab_pkey" PRIMARY KEY ("tokenid")
);

/* --------------------------------------------------------
   Table structure for table "ticketmultab"
   -------------------------------------------------------- */
CREATE TABLE "ticketmultab" (
  "tcode" int8 NOT NULL,
  "tokenid" varchar(25) NOT NULL,
  "techcode" int8,
  "agentid" varchar(12) NOT NULL,
  "problemstmt" text,
  "arrivaldate" date NOT NULL,
  "arrivaltime" time NOT NULL,
  "attendeddate" date,
  "attendedtime" time,
  "statuscode" int8,
  "shiftcode" int8,
  "catcode" int8,
  "othercat" varchar(50),
  "remarks" text,
  "updatedby" varchar(12),
  "routedby" varchar(12),
  "routeddate" date,
  "routedtime" time,
  "solonline" char(1),
  CONSTRAINT "ticketmultab_pkey" PRIMARY KEY ("tcode")
);

#################### QUERIES ############################

/* These queries are called at most 4 times with different matching strings */

/* QUERY No 1
   ---------- */

select count(tickettab.tokenid) as ticketcount
from tickettab,subscribertab
where (tickettab.arrivaldate >='2001-12-12'
and tickettab.arrivaldate <='2002-01-12') and
tickettab.subcode=subscribertab.custcode and
((subscribertab.address1 ILIKE '%Cunningham%') OR
(subscribertab.address2 ILIKE '%Cunningham%'))

/* QUERY No 2
   ---------- */

select count(ticketmultab.tokenid) as ticketmulcount
from ticketmultab,subscribertab
where (ticketmultab.arrivaldate >='2001-12-12'
and ticketmultab.arrivaldate <='2002-01-12') and
(substring(tokenid FROM 0 FOR strpos(tokenid,'A')))=subscribertab.custcode and
((subscribertab.address1 ILIKE '%Cunningham%') OR
(subscribertab.address2 ILIKE '%Cunningham%'))


######################EXPLAINS ON QUERIES ########################

/* EXPLAIN ON QUERY No 1
   --------------------- */

NOTICE:  QUERY PLAN:

Aggregate  (cost=276.27..276.27 rows=1 width=28)
  ->  Nested Loop  (cost=0.00..276.26 rows=1 width=28)
        ->  Seq Scan on subscribertab  (cost=0.00..265.87 rows=1 width=8)
        ->  Seq Scan on tickettab  (cost=0.00..8.40 rows=160 width=20)

/* EXPLAIN ON QUERY No 2
   --------------------- */

NOTICE:  QUERY PLAN:

Aggregate  (cost=269.02..269.02 rows=1 width=20)
  ->  Nested Loop  (cost=0.00..269.02 rows=1 width=20)
        ->  Seq Scan on subscribertab  (cost=0.00..265.87 rows=1 width=8)
        ->  Seq Scan on ticketmultab  (cost=0.00..2.50 rows=33 width=12)


################ END OF REQUIRED DATA ########################

Thanks in advance. I hope I have provided the required information.
Basically, I need to optimize my operations, but changing the table
structures may be too late in the day.

-- 
Best regards,
 Gurudutt                            mailto:guru@indvalley.com

Life is not fair - get used to it.
Bill Gates


Thursday, January 10, 2002, 9:20:36 PM, you wrote:


JE> You've shown that the query takes up a lot of CPU, and that it is
JE> slower than you would like.  We have an idea as to how big the table
JE> is (not very), and we know that you have done your homework and have
JE> vacuum analyzed.

JE> Next we need the query in question (so we can check for some of the
JE> more well known performance pitfalls like using IN on large result
JE> sets), and also the explain output (so we can see what PostgreSQL
JE> thinks of your query).  You also might want to consider including the
JE> schema of the tables involved.

JE> Jason

JE> Gurudutt <guru@indvalley.com> writes:

>> Hi,
>> 
>> I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
>> 
>> The following is output of the "top" command for query which involves
>> fetch from a table with about MAX of 10,000 rows.
>> 
>> -------------------------------------TOP------------------------------
>>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>> 
>>  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58 postmaster
>>  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
>>  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
>>     1 root       8   0   544  544   472 S     0.0  0.2   0:04 init
>>     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
>>     3 root
>> 
>> 
>> Now, my question is, it takes ages(2 mints) for the query to run (regularly
>> VACUUM ANALYZED Database) and if you look closely at the resources
>> consumed by the postgres, it is almost taking away 100% CPU time.
>> 
>> How can we make it faster and to consume less resources ??
>> 
>> Can anybody suggest the steps they are taking for time-critical
>> applications to run efficiently.
>> 
>> Thanks in advance
>> 
>> -- 
>> Best regards,
>>  Gurudutt                          mailto:guru@indvalley.com
>> 
>> Life is not fair - get used to it.
>> Bill Gates
>> 
>> 
>> ---------------------------(end of broadcast)---------------------------
>> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org

JE> ---------------------------(end of broadcast)---------------------------
JE> TIP 2: you can get off all lists at once with the unregister command
JE>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)



Re: Resources

From
Isabelle Brette
Date:
Hello,

I just looked at your tables/queries/explains.

My piece of advice would be: add indexes on the columns you use the
most in the query (especially those used for joins).

A "seq scan" takes quite a lot of time; an "index scan" takes nearly none.

Well, indexes take a little space, but if your database is not that big,
this should be what you're looking for. Disk space does not cost that much
these days ^^

-- 
Isabelle Brette - isabelle@apartia.fr


Re: Resources

From
"Nick Fankhauser"
Date:
The explains indicate that a sequential scan is happening, so you want to
index the columns involved.

For instance for query No. 1, create an index on tickettab.arrivaldate,
tickettab.subcode, subscribertab.custcode, subscribertab.address1 &
subscribertab.address2.
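
For example (index names are arbitrary; custcode is already indexed by
subscribertab_pkey, so that one may be redundant):

    create index tickettab_arrivaldate_idx on tickettab (arrivaldate);
    create index tickettab_subcode_idx on tickettab (subcode);
    create index subscribertab_address1_idx on subscribertab (address1);
    create index subscribertab_address2_idx on subscribertab (address2);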

This will hurt your performance on inserts, but should really help the
query. Try doing an explain again with these indexes on to see how it
changes the situation. If inserts are an issue, you may need to play with it
a bit to find the indexes that give you the best gain.

-Nick



> /* QUERY No 1
>    ---------- */
>
> select count(tickettab.tokenid) as ticketcount
> from tickettab,subscribertab
> where (tickettab.arrivaldate >='2001-12-12'
> and tickettab.arrivaldate <='2002-01-12') and
> tickettab.subcode=subscribertab.custcode and
> ((subscribertab.address1 ILIKE '%Cunningham%') OR
> (subscribertab.address2 ILIKE '%Cunningham%'))
>
> /* QUERY No 2
>    ---------- */
>
> select count(ticketmultab.tokenid) as ticketmulcount
> from ticketmultab,subscribertab
> where (ticketmultab.arrivaldate >='2001-12-12'
> and ticketmultab.arrivaldate <='2002-01-12') and
> (substring(tokenid FROM 0 FOR
> strpos(tokenid,'A')))=subscribertab.custcode and
> ((subscribertab.address1 ILIKE '%Cunningham%') OR
> (subscribertab.address2 ILIKE '%Cunningham%'))
>
>
> ######################EXPLAINS ON QUERIES ########################
>
> /* EXPLAIN ON QUERY No 1
>    --------------------- */
>
> NOTICE:  QUERY PLAN:
>
> Aggregate  (cost=276.27..276.27 rows=1 width=28)
>   ->  Nested Loop  (cost=0.00..276.26 rows=1 width=28)
>         ->  Seq Scan on subscribertab  (cost=0.00..265.87 rows=1 width=8)
>         ->  Seq Scan on tickettab  (cost=0.00..8.40 rows=160 width=20)
>
> /* EXPLAIN ON QUERY No 1
>    --------------------- */
>
> NOTICE:  QUERY PLAN:
>
> Aggregate  (cost=269.02..269.02 rows=1 width=20)
>   ->  Nested Loop  (cost=0.00..269.02 rows=1 width=20)
>         ->  Seq Scan on subscribertab  (cost=0.00..265.87 rows=1 width=8)
>         ->  Seq Scan on ticketmultab  (cost=0.00..2.50 rows=33 width=12)
>
>
> ################ END OF REQUIRED DATA ########################
>
> Thanks in advance I hope I have provided the required information.
> Basically, I need to optimize my operations, but changing the table
> structures may be too late in the day.
>
> --
> Best regards,
>  Gurudutt                            mailto:guru@indvalley.com
>
> Life is not fair - get used to it.
> Bill Gates
>
>
> Thursday, January 10, 2002, 9:20:36 PM, you wrote:
>
>
> JE> You've shown that the query takes up a lot of CPU, and that it is
> JE> slower than you would like.  We have an idea as to how big the table
> JE> is (not very), and we know that you have done your homework and have
> JE> vacuum analyzed.
>
> JE> Next we need the query in question (so we can check for some of the
> JE> more well known performance pitfalls like using IN on large result
> JE> sets), and also the explain output (so we can see what PostgreSQL
> JE> thinks of your query).  You also might want to consider including the
> JE> schema of the tables involved.
>
> JE> Jason
>
> JE> Gurudutt <guru@indvalley.com> writes:
>
> >> Hi,
> >>
> >> I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
> >>
> >> The following is output of the "top" command for query which involves
> >> fetch from a table with about MAX of 10,000 rows.
> >>
> >> -------------------------------------TOP------------------------------
> >>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
> >>
> >>  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58
> postmaster
> >>  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
> >>  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
> >>     1 root       8   0   544  544   472 S     0.0  0.2   0:04 init
> >>     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
> >>     3 root
> >>
> >>
> >> Now, my question is, it takes ages(2 mints) for the query to
> run (regularly
> >> VACUUM ANALYZED Database) and if you look closely at the resources
> >> consumed by the postgres, it is almost taking away 100% CPU time.
> >>
> >> How can we make it faster and to consume less resources ??
> >>
> >> Can anybody suggest the steps they are taking for time-critical
> >> applications to run efficiently.
> >>
> >> Thanks in advance
> >>
> >> --
> >> Best regards,
> >>  Gurudutt                          mailto:guru@indvalley.com
> >>
> >> Life is not fair - get used to it.
> >> Bill Gates
> >>
> >>
> >> ---------------------------(end of
> broadcast)---------------------------
> >> TIP 1: subscribe and unsubscribe commands go to
> majordomo@postgresql.org
>
> JE> ---------------------------(end of
> broadcast)---------------------------
> JE> TIP 2: you can get off all lists at once with the unregister command
> JE>     (send "unregister YourEmailAddressHere" to
> majordomo@postgresql.org)
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org
>



Re: Resources

From
"Nick Fankhauser"
Date:
> I don't beleive indexes will improve SELECT using LIKE.

I wondered about that too, so I did a test using a database I'm working
with. The results indicate that it helps substantially- here is my terminal
log file:

staging=#
staging=#
staging=# explain select count(*) from actor where actor_full_name like
'A%';
NOTICE:  QUERY PLAN:

Aggregate  (cost=1067.72..1067.72 rows=1 width=0)
  ->  Seq Scan on actor  (cost=0.00..1066.29 rows=572 width=0)

EXPLAIN
staging=# create index test on actor(actor_full_name);
CREATE
staging=# explain select count(*) from actor where actor_full_name like
'A%';
NOTICE:  QUERY PLAN:

Aggregate  (cost=577.46..577.46 rows=1 width=0)
  ->  Index Scan using test on actor  (cost=0.00..576.03 rows=572 width=0)

EXPLAIN
staging=#


-Nick

--------------------------------------------------------------------------
Nick Fankhauser  nickf@ontko.com  Phone 1.765.935.4283  Fax 1.765.962.9788
Ray Ontko & Co.     Software Consulting Services     http://www.ontko.com/



Re: Resources

From
Frank Bax
Date:
I don't believe indexes will improve SELECT using LIKE.

In the second query, don't index on tokenid; index on the expression:

    (substring(tokenid FROM 0 FOR strpos(tokenid,'A')))

Frank


At 08:45 AM 1/11/02 -0500, Nick Fankhauser wrote:
>The explains indicate that a sequential scan is happening, so you want to
>index the columns involved.
>
>For instance for query No. 1, create an index on tickettab.arrivaldate,
>tickettab.subcode, subscribertab.custcode, subscribertab.address1 &
>subscribertab.address2.
>
>This will hurt your performance on inserts, but should really help the
>query. Try doing an explain again with these indexes on to see how it
>changes the situation. If inserts are an issue, you may need to play with it
>a bit to find the indexes that give you the best gain.
>
>-Nick
>
>
>
>> /* QUERY No 1
>>    ---------- */
>>
>> select count(tickettab.tokenid) as ticketcount
>> from tickettab,subscribertab
>> where (tickettab.arrivaldate >='2001-12-12'
>> and tickettab.arrivaldate <='2002-01-12') and
>> tickettab.subcode=subscribertab.custcode and
>> ((subscribertab.address1 ILIKE '%Cunningham%') OR
>> (subscribertab.address2 ILIKE '%Cunningham%'))
>>
>> /* QUERY No 2
>>    ---------- */
>>
>> select count(ticketmultab.tokenid) as ticketmulcount
>> from ticketmultab,subscribertab
>> where (ticketmultab.arrivaldate >='2001-12-12'
>> and ticketmultab.arrivaldate <='2002-01-12') and
>> (substring(tokenid FROM 0 FOR
>> strpos(tokenid,'A')))=subscribertab.custcode and
>> ((subscribertab.address1 ILIKE '%Cunningham%') OR
>> (subscribertab.address2 ILIKE '%Cunningham%'))
>>
>>
>> ######################EXPLAINS ON QUERIES ########################
>>
>> /* EXPLAIN ON QUERY No 1
>>    --------------------- */
>>
>> NOTICE:  QUERY PLAN:
>>
>> Aggregate  (cost=276.27..276.27 rows=1 width=28)
>>   ->  Nested Loop  (cost=0.00..276.26 rows=1 width=28)
>>         ->  Seq Scan on subscribertab  (cost=0.00..265.87 rows=1 width=8)
>>         ->  Seq Scan on tickettab  (cost=0.00..8.40 rows=160 width=20)
>>
>> /* EXPLAIN ON QUERY No 1
>>    --------------------- */
>>
>> NOTICE:  QUERY PLAN:
>>
>> Aggregate  (cost=269.02..269.02 rows=1 width=20)
>>   ->  Nested Loop  (cost=0.00..269.02 rows=1 width=20)
>>         ->  Seq Scan on subscribertab  (cost=0.00..265.87 rows=1 width=8)
>>         ->  Seq Scan on ticketmultab  (cost=0.00..2.50 rows=33 width=12)
>>
>>
>> ################ END OF REQUIRED DATA ########################
>>
>> Thanks in advance I hope I have provided the required information.
>> Basically, I need to optimize my operations, but changing the table
>> structures may be too late in the day.
>>
>> --
>> Best regards,
>>  Gurudutt                            mailto:guru@indvalley.com
>>
>> Life is not fair - get used to it.
>> Bill Gates
>>
>>
>> Thursday, January 10, 2002, 9:20:36 PM, you wrote:
>>
>>
>> JE> You've shown that the query takes up a lot of CPU, and that it is
>> JE> slower than you would like.  We have an idea as to how big the table
>> JE> is (not very), and we know that you have done your homework and have
>> JE> vacuum analyzed.
>>
>> JE> Next we need the query in question (so we can check for some of the
>> JE> more well known performance pitfalls like using IN on large result
>> JE> sets), and also the explain output (so we can see what PostgreSQL
>> JE> thinks of your query).  You also might want to consider including the
>> JE> schema of the tables involved.
>>
>> JE> Jason
>>
>> JE> Gurudutt <guru@indvalley.com> writes:
>>
>> >> Hi,
>> >>
>> >> I have a pentium III server, running on RHL 7.1 with 256 MB RAM,
>> >>
>> >> The following is output of the "top" command for query which involves
>> >> fetch from a table with about MAX of 10,000 rows.
>> >>
>> >> -------------------------------------TOP------------------------------
>> >>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>> >>
>> >>  3174 postgres  19   0  3328 3328  2672 R    99.0  1.3   0:58
>> postmaster
>> >>  1199 nobody     9   0  3728 3728  2704 S     0.5  1.4   0:03 httpd
>> >>  3035 root      10   0  1048 1048   840 R     0.3  0.4   0:15 top
>> >>     1 root       8   0   544  544   472 S     0.0  0.2   0:04 init
>> >>     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
>> >>     3 root
>> >>
>> >>
>> >> Now, my question is, it takes ages(2 mints) for the query to
>> run (regularly
>> >> VACUUM ANALYZED Database) and if you look closely at the resources
>> >> consumed by the postgres, it is almost taking away 100% CPU time.
>> >>
>> >> How can we make it faster and to consume less resources ??
>> >>
>> >> Can anybody suggest the steps they are taking for time-critical
>> >> applications to run efficiently.
>> >>
>> >> Thanks in advance
>> >>
>> >> --
>> >> Best regards,
>> >>  Gurudutt                          mailto:guru@indvalley.com
>> >>
>> >> Life is not fair - get used to it.
>> >> Bill Gates
>> >>
>> >>
>> >> ---------------------------(end of
>> broadcast)---------------------------
>> >> TIP 1: subscribe and unsubscribe commands go to
>> majordomo@postgresql.org
>>
>> JE> ---------------------------(end of
>> broadcast)---------------------------
>> JE> TIP 2: you can get off all lists at once with the unregister command
>> JE>     (send "unregister YourEmailAddressHere" to
>> majordomo@postgresql.org)
>>
>>
>> ---------------------------(end of broadcast)---------------------------
>> TIP 6: Have you searched our list archives?
>>
>> http://archives.postgresql.org
>>
>
>
>---------------------------(end of broadcast)---------------------------
>TIP 5: Have you checked our extensive FAQ?
>
>http://www.postgresql.org/users-lounge/docs/faq.html
>


Re: Resources

From
Frank Bax
Date:
Sorry, I made too general a statement.  An index will help when the
expression is anchored to the beginning of the field.  So 'A%' gets help
from an index, but '%A%' does not.

Frank


At 11:13 AM 1/11/02 -0500, Nick Fankhauser wrote:
>
>> I don't beleive indexes will improve SELECT using LIKE.
>
>I wondered about that too, so I did a test using a database I'm working
>with. The results indicate that it helps substantially- here is my terminal
>log file:
>
>staging=#
>staging=#
>staging=# explain select count(*) from actor where actor_full_name like
>'A%';
>NOTICE:  QUERY PLAN:
>
>Aggregate  (cost=1067.72..1067.72 rows=1 width=0)
>  ->  Seq Scan on actor  (cost=0.00..1066.29 rows=572 width=0)
>
>EXPLAIN
>staging=# create index test on actor(actor_full_name);
>CREATE
>staging=# explain select count(*) from actor where actor_full_name like
>'A%';
>NOTICE:  QUERY PLAN:
>
>Aggregate  (cost=577.46..577.46 rows=1 width=0)
>  ->  Index Scan using test on actor  (cost=0.00..576.03 rows=572 width=0)
>
>EXPLAIN
>staging=#
>
>
>-Nick
>
>--------------------------------------------------------------------------
>Nick Fankhauser  nickf@ontko.com  Phone 1.765.935.4283  Fax 1.765.962.9788
>Ray Ontko & Co.     Software Consulting Services     http://www.ontko.com/
>
>


Re: Resources

From
jtp
Date:

On Fri, 11 Jan 2002, Nick Fankhauser wrote:

> 
> > I don't beleive indexes will improve SELECT using LIKE.
> 
> I wondered about that too, so I did a test using a database I'm working
> with. The results indicate that it helps substantially- here is my terminal
> log file:

[snip]

an interesting (?) addendum to this.  Yes, it does help substantially,
unless your wildcard character has characters after it.

> staging=# explain select count(*) from actor where actor_full_name like
> 'A%';

try: explain select count(*) from actor where actor_full_name like '%a';

not too relevant for searches here, but occasionally it could be.  My
situation was having a merged last name + zipcode field instead of two
separately indexed fields.  I found out I couldn't do the job I wanted
that way.





Re: Resources

From
"Nick Fankhauser"
Date:
> Sorry, I made too general a statement.  Index will help when the expession

...and I didn't notice that Gurudutt's LIKE wasn't anchored to the front.
Oops, mea culpa.

Taking a second look then, the LIKE is probably very costly.

Take a look at the section in the docs on making explicit joins. If the date
check significantly constrains the results, index it and make sure that
constraint gets evaluated first; then the sequential scan on the addresses
has less to scan. This might help a lot (see the sketch after the link below).

The doc page for explicit joins:
http://www.postgresql.org/idocs/index.php?explicit-joins.html
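
Here's a sketch (untested) of query No. 1 in explicit-join form; with
explicit JOIN syntax the 7.x planner joins the tables in the order written,
so with an index on arrivaldate the date constraint is applied before the
address scan:

    select count(t.tokenid) as ticketcount
    from tickettab t
        join subscribertab s on (t.subcode = s.custcode)
    where t.arrivaldate >= '2001-12-12'
      and t.arrivaldate <= '2002-01-12'
      and ((s.address1 ILIKE '%Cunningham%') OR
           (s.address2 ILIKE '%Cunningham%'));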

You mention that it may be too late to change the tables, but I guess that's
where I'd look next- I once had a similar situation when I worked for an
electric utility & we ended up parsing the address lines to find road names
& numbers and then loading them into special indexed fields just used for
sorting. If you control the input app, maybe you could even have the user
enter the fields you want to query on separately.


-Nick




Re: Resources

From
Chester Carlton Young
Date:
In regard to not being able to change tables: a common trick (e.g., in
data warehousing, where tables are read-only) is to have a trigger on
the table populate a secondary table with good index info (like ripping
the street name out of the address) plus the oid of the original row.  The
query is a little more complex, but performance is very good.
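
A minimal sketch of the idea (untested; assumes plpgsql is installed, uses
lower(address1) as a stand-in for real street extraction, and keys on
custcode rather than oid for simplicity):

    create table subscriber_street (
        custcode int8,
        street   text
    );
    create index subscriber_street_idx on subscriber_street (street);

    create function subscriber_street_trig() returns opaque as '
    begin
        insert into subscriber_street
            values (NEW.custcode, lower(NEW.address1));
        return NEW;
    end;
    ' language 'plpgsql';

    create trigger subscriber_street_ins after insert on subscribertab
        for each row execute procedure subscriber_street_trig();

Matching update/delete triggers would be needed to keep the copy in sync.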


> You mention that it may be too late to change the tables, but I guess
> that's
> where I'd look next- I once had a similar situation when I worked for
> an
> electric utility & we ended up parsing the address lines to find road
> names
> & numbers and then loading them into special indexed fields just used
> for
> sorting. If you control the input app, maybe you could even have the
> user
> enter the fields you want to query on separately.
> 
> 
> -Nick
> 
> 
> 
> ---------------------------(end of
> broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to
majordomo@postgresql.org)


__________________________________________________
Do You Yahoo!?
Send FREE video emails in Yahoo! Mail!
http://promo.yahoo.com/videomail/


Re: Resources

From
Frank Bax
Date:
Or you could create a function to extract the data from an existing field,
then index on the function?
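
For instance, the expression from the second query could be wrapped up like
this (a rough, untested sketch; in 7.x the function must be marked
iscachable before a functional index will accept it, and the comparison
against custcode still needs compatible types on both sides):

    create function token_subcode(varchar) returns text as '
        select substring($1 from 0 for strpos($1, ''A''))
    ' language 'sql' with (iscachable);

    create index ticketmultab_subcode_idx
        on ticketmultab (token_subcode(tokenid));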

At 09:39 AM 1/11/02 -0800, Chester Carlton Young wrote:
>In regards to not being able to change tables, a common trick (eg, in
>data warehousing) where tables are read only, is to have a trigger on
>the table populate a secondary table with good index info (like ripping
>street name out of the address) and the oid of the original row.  The
>query is a little more complex, but very good performance.
>
>
>> You mention that it may be too late to change the tables, but I guess
>> that's where I'd look next- I once had a similar situation when I
>> worked for an electric utility & we ended up parsing the address lines
>> to find road names & numbers and then loading them into special indexed
>> fields just used for sorting. If you control the input app, maybe you
>> could even have the user enter the fields you want to query on separately.
>>
>> -Nick


Re: Resources

From
Frank Bax
Date:
At 03:42 PM 1/11/02 -0500, Wei Weng wrote:
>Can you index on a function?
>How exactly does that help performance of a query?

If a table "employee" contains a field "last" and you are not sure how the
last name might make use of capital letters, then to get all names starting
with 'MC', you might code a query like:select * from employee where last ILIKE 'MC%';
which does not use an index on "last" because we used ILIKE instead of LIKE.
In this case, we would create an index using "lower" function, like:create index employee_last on employee (
lower(last));
 
then write the query as:select * from employee where lower(last) LIKE 'mc%';
which does make use of the index (and a faster executing query).

Frank


Re: Resources

From
Tom Lane
Date:
Wei Weng <wweng@kencast.com> writes:
> I have a side question. How do I stop this kind of crazy query to suck up
> my CPU power if it is fed by a database driver? (like ODBC, for example)
> Since kill -9 postmaster is highly not recommended, can i do a
> /sbin/service postgresql stop to force it to shut down?

That would shut down the whole database, which is likely not what you
want.  The best way to cancel a runaway query is to cancel it, ie,
send a cancel request from the client (cf. control-C in psql, dunno
whether the ODBC driver supports this).  If you are logged into the
postgres or root account on the DB machine, you can get the same
effect with "kill -INT pid-of-backend".  (not pid-of-postmaster)
        regards, tom lane


Re: Resources - Regular Expressions

From
Gurudutt
Date:
Hi!!

Thanks a lot for all the advice; I knew I could always get help from
the lists. We had a festival and were closed, so I couldn't reply earlier.

I have a parallel question. I read about regular expressions being
faster than LIKE; is that correct? And one more thing: how do you
extract information from a given regular expression?

E.g., in Perl, anything inside the () can be extracted into a
variable

like...

# Program Segment Starts here

$disksize = $1 if (/Storage Swap size:\s+([-0-9\.]+)\s+MB/);

# Program segment ends here

as you can see, '$1' has the value of the expression that got evaluated.

Now my question is: when I use the same in a SQL statement, all I can
get is a boolean value saying whether it evaluates to true or false,
nothing more than that.

I would like to extract the value from the evaluated regular
expression in a SQL query

Is that possible?

Thanks in advance...


Best regards,
 Gurudutt                            mailto:guru@indvalley.com

Life is not fair - get used to it.
Bill Gates


Saturday, January 12, 2002, 2:43:13 AM, you wrote:

FB> At 03:42 PM 1/11/02 -0500, Wei Weng wrote:



Re: Resources - Regular Expressions

From
Jeff Eckermann
Date:
Not implemented in PostgreSQL RE's AFAIK.
But you can use pl/perl to do it.  Just write a
function that takes a text type parameter (your RE
string) and returns the $1 value.
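
A rough sketch of what such a function might look like (untested; assumes
plperl has been installed, e.g. with createlang, and remember that
backslashes in SQL string literals must be doubled):

    create function first_match(text, text) returns text as '
        my ($str, $re) = @_;
        return $1 if $str =~ /$re/;   # first captured group
        return undef;                 # comes back as SQL NULL
    ' language 'plperl';

    -- hypothetical usage:
    select first_match('Storage Swap size: 512.0 MB',
                       'size:\\s+([-0-9\\.]+)\\s+MB');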

--- Gurudutt <guru@indvalley.com> wrote:
> Hi!!
> 
> Thanks a lot for all the advices, I knew i could
> always get help from
> the lists. Had festival so couldn't reply earlier
> and we were closed.
> 
> I have parallel question, I read about regular
> expressions being
> faster than the 'like', is it correct and one more
> thing how do u
> extract information fro a given regular expression
> 
> eg: like in perl anything inside the () can be
> extracted into a
> variable
> 
> like...
> 
> # Program Segment Starts here
> 
> $disksize = $1 if (/Storage Swap
> size:\s+([-0-9\.]+)\s+MB/);
> 
> # Program segment ends here
> 
> as u can see the '$1' has the value of the
> expression thats got evaluated.
> 
> Now my question is when i use the same in a SQL
> statement all I can
> get is just a boolean value whether it evaluates to
> true or false ,
> nothing more than that.
> 
> I would like to extract the value from the evaluated
> regular
> expression in a SQL query
> 
> Is that possible?
> 
> Thanks in advance...
> 
> 
> Best regards,
>  Gurudutt                           
> mailto:guru@indvalley.com
> 
> Life is not fair - get used to it.
> Bill Gates
> 
> 
> Saturday, January 12, 2002, 2:43:13 AM, you wrote:
> 
> FB> At 03:42 PM 1/11/02 -0500, Wei Weng wrote:
> 
> 
> ---------------------------(end of
> broadcast)---------------------------
> TIP 3: if posting/reading through Usenet, please
> send an appropriate
> subscribe-nomail command to majordomo@postgresql.org
> so that your
> message can get through to the mailing list cleanly


__________________________________________________
Do You Yahoo!?
Send FREE video emails in Yahoo! Mail!
http://promo.yahoo.com/videomail/


Re: Resources

From
"Lg"
Date:
>Sorry, I made too general a statement.  Index will help when the
>expession is anchored to beginning of field.  So 'A%' gets help from
>index, but '%A%' does not.
>
>Frank

This is of great interest to me because this is exactly what I am trying
to do: use indices to speed up anchored searches.

What you say mirrors what the faq says. However, I just can't get it to
work. I created a new database and a scratch table and inserted some
random values. After the creation of an index and turning seqscan off, I
got indexed lookups for exact searches but not for likes. What am I
doing wrong?

create table waa(a text, b text);
insert into waa values('sdf','dsafasf');
insert into waa values('sdsff','dsafasfsf');
insert into waa values('sffdsff','dsafasfssf');
insert into waa values('sfsdffdsff','dsafassdfffssf');
insert into waa values('sfsdffasfsafdsff','dsafassdfffssfaf');
insert into waa values('df','dsafasf');
insert into waa values('dsff','dsafasfsf');
insert into waa values('ffdsff','dsafasfssf');
insert into waa values('fsdffdsff','dsafassdfffssf');
insert into waa values('fsdffasfsafdsff','dsafassdfffssfaf');
insert into waa values('f','dsafas');
insert into waa values('sff','dsafsfsf');
insert into waa values('fdsff','dsfasfssf');
insert into waa values('sdffdsff','dsafassdfffssf');
insert into waa values('sdffasfsaf','dsafassdfffssfaf');

create index i_waa on waa(a);
set enable_seqscan to off;
explain select * from waa where a = 'f';

>--NOTICE:  QUERY PLAN:
>--Index Scan using i_waa on waa  (cost=0.00..2.01 rows=1 width=24)

explain select * from waa where a like 'f%';

>--NOTICE:  QUERY PLAN:
>--Seq Scan on waa  (cost=100000000.00..100000001.19 rows=1 width=24)



Re: Resources

From
Tom Lane
Date:
"Lg" <postgres@lg.ndirect.co.uk> writes:
> This is of great interest to me because this is exactly what I am trying
> to do: use indices to speed up anchored searches.

> What you say mirrors what the faq says. However, I just can't get it to
> work.

Probably you initialized the database in non-C locale.  Anchored
searches can only be optimized with index scans in C locale; the
index ordering isn't necessarily right in other locales.

You can check the database locale with contrib/pg_controldata,
or if that's not handy try
    od -c $PGDATA/global/pg_control
and look for locale names near the end of the file.
        regards, tom lane


Using indices with LIKE

From
"Llew Sion Goodstadt"
Date:
Thanks. You have saved my life!
I should perhaps have noticed the bit in the administrator's guide on
localisation.
Never thought it would be relevant.    
I have reloaded the database after changing the environment ("export
LANG=C"), doing a new initdb, and restarting the postmaster.
Everything now works.
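
In outline, the sequence was roughly this (paths illustrative; dump first!):

    pg_dumpall > backup.sql            # save everything
    pg_ctl -D $PGDATA stop
    mv $PGDATA $PGDATA.old             # keep the old cluster around
    export LANG=C
    initdb -D $PGDATA                  # new cluster, now in C locale
    pg_ctl -D $PGDATA start
    psql -f backup.sql template1       # reload the dump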

(I write in all this detail so that the next poor sod doesn't have to go
through the same hellish experience. :-> )

Llew

> Probably you initialized the database in non-C locale.  
> Anchored searches can only be optimized with index scans in C 
> locale; the index ordering isn't necessarily right in other locales.
> 
> You can check the database locale with 
> contrib/pg_controldata, or if that's not handy try
>     od -c $PGDATA/global/pg_control
> and look for locale names near the end of the file.
> 
>             regards, tom lane
> 
> ---------------------------(end of 
> broadcast)---------------------------
> TIP 3: if posting/reading through Usenet, please send an 
> appropriate subscribe-nomail command to 
> majordomo@postgresql.org so that your message can get through 
> to the mailing list cleanly
>