Re: [osdldbt-general] Re: getting execution plans with multiple database connections - Mailing list pgsql-general

From Jenny Zhang
Subject Re: [osdldbt-general] Re: getting execution plans with multiple database connections
Date
Msg-id 1060990634.8166.212.camel@ibm-a
In response to Re: getting execution plans with multiple database connections  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [osdldbt-general] Re: getting execution plans with multiple database connections  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Tom,

Thanks for your reply.  It does look like we are hitting the limit of some
system resource, since the failures are random, but I am not sure it is
NFILE (a quick way to check that is sketched below, after the ulimit
output).  I changed max_files_per_process to 200 and increased the
open-files limit to 16384.
Here is my ulimit -a output:
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 16384
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 67587
virtual memory        (kbytes, -v) unlimited
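In case it is useful, here is roughly how the limits can be checked
directly on Linux (a sketch; the /proc names are from my kernel and may
differ on other systems, and the psql invocation assumes a database you
can connect to):

    # kernel-wide open-file table usage: allocated, free, maximum
    cat /proc/sys/fs/file-nr

    # kernel-wide maximum (the NFILE analogue on Linux)
    cat /proc/sys/fs/file-max

    # the per-backend cap as Postgres sees it
    psql -c 'SHOW max_files_per_process;'

Watching file-nr while the throughput streams run should show whether
the table actually gets close to full.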

And I still have the same problem.  Since PG should fail once no more
files can be opened, I would expect my test to fail as well, but I got
all the other results back at the end of the run.
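If the problem really is the explain script silently failing to open its
output file, a more defensive wrapper ought to catch it.  A minimal
sketch (hypothetical names: STREAM, QUERY, SQL, and the dbt3 database
are stand-ins for whatever the real script uses):

    #!/bin/sh
    # Run EXPLAIN for one query and fail loudly if the output file
    # cannot be written or comes back empty.
    outfile="throughput_stream${STREAM}_query${QUERY}.txt"
    if ! psql -d dbt3 -c "EXPLAIN $SQL" > "$outfile" 2>&1; then
        echo "EXPLAIN failed for $outfile" >&2
        exit 1
    fi
    if [ ! -s "$outfile" ]; then
        echo "empty plan file: $outfile" >&2
        exit 1
    fi

That would at least show which step is producing the zero-size files.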

Thanks,
Jenny
On Fri, 2003-08-15 at 11:46, Tom Lane wrote:
> Jenny Zhang <jenny@osdl.org> writes:
> > It works for the power test (single database connection).  In other
> > words, I can get execution plans for all 21 queries if there is only
> > one stream connecting to the database.
>
> > But for the throughput test (multiple database connections), some query
> > execution plan files are blank.  It seems to be random, for example, for
> > the run at:
> > http://khack.osdl.org/stp/277495/results/plan/
> > throughput_stream1_query14.txt, throughput_stream3_query4.txt,
> > throughput_stream3_query5.txt, throughput_stream4_query4.txt,
> > throughput_stream4_query18.txt are of size 0.  And if I do the run
> > again, a different set of files are blank.
>
> I wonder if you are not running out of kernel file table slots.  The
> described behavior sounds rather like the explain-ing script is failing
> to notice that it couldn't open its output file.  If you haven't bumped
> up the kernel's file table size (NFILE or similar parameter) and/or
> adjusted Postgres' max_files_per_process parameter downwards to hold
> Postgres to a small number of open files per backend, it's quite easy
> for Postgres to eat all your file table slots.  Postgres itself
> generally will not complain (it has strategies for dealing with ENFILE
> failures), but an awful lot of other stuff will fall right over when
> it can't open files.
>
>             regards, tom lane
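
For the archives, the two knobs Tom describes map to something like this
on Linux (a sketch with illustrative values, not what any particular
machine needs):

    # raise the kernel-wide file table (the NFILE analogue on Linux)
    sysctl -w fs.file-max=65536

    # and/or hold each backend to fewer descriptors, in postgresql.conf
    # (I believe this one is only read at postmaster start):
    #   max_files_per_process = 100

With S concurrent streams, Postgres alone can pin roughly
S x max_files_per_process descriptors, so the kernel-wide limit has to
leave room for that plus everything else on the box.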
--
Jenny Zhang
Open Source Development Lab Inc
12725 SW Millikan Way
Suite 400
Beaverton, OR 97005
(503)626-2455 ext 31

