Thread: Can PostGreSQL handle 100 user database?

Can PostGreSQL handle 100 user database?

From
martin.chantler@convergys.com
Date:

I plan to convert a FoxPro system to client/server - hopefully using PostgreSQL
(about 100 tables / 300 MB / 100 users).

Firstly, I heard a rumour that PostgreSQL doesn't process queries in parallel,
i.e. performs them sequentially.
Is this true? If so, it would surely make it impractical when more than a few
clients are connected.
I tried this out by running 2 VB programs via ODBC that randomly performed
queries - they appeared to work in parallel - however, I then started a psql
session and entered a slow query and it appeared to stop the 2 VB programs
until it had completed. Anyone got the answer to this?

The other question I have is how much memory I should really have to support
100 connected clients. There must be a formula / rule of thumb for this?

I am hoping I can convince my customer to use PostgreSQL, but first I need to
convince myself it is up to the job :)
I am actually pretty impressed with it so far; it's got a lot of functionality
that DB2 doesn't have.

Thanks,

M Chantler
Southampton






Re: Can PostGreSQL handle 100 user database?

From
Tom Lane
Date:
martin.chantler@convergys.com writes:
> I tried this out by running 2 VB programs via ODBC that randomly
> performed queries - they appeared to work in parallel - however, I then
> started a psql session and entered a slow query and it appeared to stop
> the 2 VB programs until it had completed. Anyone got the answer to
> this?

With only that amount of detail, there is no answer.  Some things run
in parallel, some things interlock; but you didn't tell us what your
test case was.

            regards, tom lane

Re: Can PostGreSQL handle 100 user database?

From
"Brian C. Doyle"
Date:
Another thing to remember about PostgreSQL is DB size vs. CPU bits.

8.1 CPU types - 32-bit or 64-bit
Performance of 32-bit cpu machines will decline rapidly when the database size exceeds 5 GigaByte. You can run 30 gig database on 32-bit cpu but the performance will be degraded. Machines with 32-bit cpu imposes a limitation of 2 GB on RAM, 2 GB on file system sizes and other limitations on the operating system. Use the special filesystems for linux made by SGI, IBM or HP or ext3-fs to support file-sizes greater than 2 GB on 32-bit linux machines.
For extremely large databases, it is strongly advised to use 64-bit machines like Digital Alpha cpu, Sun Ultra-sparc 64-bit cpu, Silicon graphics 64-bit cpu, Intel Merced IA-64 cpu, HPUX 64bit machines or IBM 64-bit machines. Compile PostgreSQL under 64-bit cpu and it can support huge databases and large queries. Performance of PostgreSQL for queries on large tables and databases will be several times faster than PostgreSQL on 32-bit cpu machines. Advantage of 64-bit machines are that you get very large memory addressing space and the operating system can support very large file-systems, provide better performance with large databases, support much larger memory (RAM), have more capabilities etc..

found at http://www.linuxdoc.org/HOWTO/PostgreSQL-HOWTO-8.html



Re: Can PostGreSQL handle 100 user database?

From
Lamar Owen
Date:
"Brian C. Doyle" wrote:
> Another thing to remember about PostgreSQL is DB size vs. CPU bits

> Performance of 32-bit cpu machines will decline rapidly when the
> database size exceeds 5 GigaByte. You can run 30 gig database on
> 32-bit cpu but the performance will be degraded. Machines with 32-bit
> cpu imposes a limitation of 2 GB on RAM, 2 GB on file system sizes and
> other limitations on the operating system. Use the special filesystems
> for linux made by SGI, IBM or HP or ext3-fs to support file-sizes
> greater than 2 GB on 32-bit linux machines.

> found at http://www.linuxdoc.org/HOWTO/PostgreSQL-HOWTO-8.html

I would take anything found in that document with a grain or barrel of
salt. (Read chapter 2 of it for a taste of it unsalted....)

Above a certain configurable size (correct me if I am wrong, but I think
it is currently 1GB), tables are segmented in PostgreSQL.  The more
segments, the more likely you might take a performance hit.

But the CPU bittedness is not the real limit for large tables, thanks to
the automatic segmentation -- hard disk transfer rates and seek times
are.  Of course, the more RAM you can throw at it, the better -- but
that is true with any CPU, not just 32 bit ones.

But of course you do have real limits for 32-bit addressing CPUs.  Note
that a 32-bit CPU is not necessarily limited to 32 address bus bits --
the bittedness of a CPU is its basic register word size, not its bus
size.  The 8088, with an 8 bit data bus, a 20 bit address bus, and 16
bit register words, was a 16 bit CPU; the 80386SX, with a 16 bit data
bus and a 32 bit address bus (well, in all technicality, IIRC it's a 31
bit address bus with a high-byte/low-byte/full-word control
implementation -- ie, it addresses 32 bits for byte addressing, not word
addressing), was a 32 bit CPU; and the original 68000, with a 16 bit
data bus and a 24 bit address bus, was, in the register sense, a 32 bit
CPU.  The physical RAM is limited by the physical address bus size -- on
the ia32 architecture that is 4 GB, IIRC.

A 64 bit CPU with a 32 bit address bus is still limited to 4 gigawords
(32 GB) physical RAM.

RAM size has everything to do with the address bus size -- but file size
has to do with basic data representation, typically limited by the
register size of the CPU.

The 2GB size limits of ia32 come in to play due to byte addressing
(versus word addressing) in ia32 plus the use of signed single register
two's-complement integers.
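
Put concretely, a signed 32-bit two's-complement offset tops out at
2^31 - 1 bytes, which is where the familiar 2 GB figure comes from.  A
minimal C sketch of that arithmetic (my own illustration, nothing
PostgreSQL-specific):

#include <stdio.h>

int main(void)
{
    /* Largest value a signed 32-bit two's-complement integer can hold:
     * 2^31 - 1 bytes, i.e. just under 2 GiB. */
    long max_offset = 2147483647L;              /* 2^31 - 1 */

    printf("max signed 32-bit offset: %ld bytes\n", max_offset);
    printf("                        = %ld MiB (2 GiB would be 2048 MiB)\n",
           max_offset / (1024L * 1024L));
    return 0;
}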

But, as always, I reserve the right to be wrong.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: Can PostGreSQL handle 100 user database?

From
Vivek Khera
Date:
>>>>> "BCD" == Brian C Doyle <bcdoyle@mindspring.com> writes:

BCD> performance will be degraded. Machines with 32-bit cpu imposes a
BCD> limitation of 2 GB on RAM, 2 GB on file system sizes and other
BCD> limitations on the operating system. Use the special filesystems
BCD> for linux made by SGI, IBM or

No; a 32-bit CPU does not limit you to 2 GB RAM and 2 GB files.  BSD/OS
runs on the x86 architecture and has neither limit.  The limit is not in
the CPU but in your choice of operating system.
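
A quick way to convince yourself is to check the size of the off_t
file-offset type your system actually gives you; on a 32-bit CPU it can
perfectly well be 64 bits if the OS and C library define it that way.
A minimal sketch (my own illustration, nothing BSD/OS-specific):

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* The width of off_t is chosen by the OS and C library, not by the
     * CPU's register size -- it can be 64 bits on a 32-bit machine. */
    printf("sizeof(off_t) = %lu bytes (%lu bits)\n",
           (unsigned long) sizeof(off_t),
           (unsigned long) sizeof(off_t) * 8);
    return 0;
}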

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: Can PostGreSQL handle 100 user database?

From
The Hermit Hacker
Date:
Note that this is a Linux limitation ... and even then, I'm not quite sure
how accurate that is anymore ... the *BSDs have supported >2GB file
systems for ages now, and, since IBM supports Linux, I'd be shocked if
there was a 2GB limit on memory, considering a lot of IBM's servers support
up to 4 or 8GB of RAM ...

On Thu, 30 Nov 2000, Brian C. Doyle wrote:

> Another thing to remember about PostgreSQL is DB size vs. CPU bits
>
> 8.1 CPU types - 32-bit or 64-bit
> Performance of 32-bit cpu machines will decline rapidly when the database
> size exceeds 5 GigaByte. You can run 30 gig database on 32-bit cpu but the
> performance will be degraded. Machines with 32-bit cpu imposes a limitation
> of 2 GB on RAM, 2 GB on file system sizes and other limitations on the
> operating system. Use the special filesystems for linux made by SGI, IBM or
> HP or ext3-fs to support file-sizes greater than 2 GB on 32-bit linux
> machines.
> For extremely large databases, it is strongly advised to use 64-bit
> machines like Digital Alpha cpu, Sun Ultra-sparc 64-bit cpu, Silicon
> graphics 64-bit cpu, Intel Merced IA-64 cpu, HPUX 64bit machines or IBM
> 64-bit machines. Compile PostgreSQL under 64-bit cpu and it can support
> huge databases and large queries. Performance of PostgreSQL for queries on
> large tables and databases will be several times faster than PostgreSQL on
> 32-bit cpu machines. Advantage of 64-bit machines are that you get very
> large memory addressing space and the operating system can support very
> large file-systems, provide better performance with large databases,
> support much larger memory (RAM), have more capabilities etc..
>
> found at http://www.linuxdoc.org/HOWTO/PostgreSQL-HOWTO-8.html

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org


Re: Can PostGreSQL handle 100 user database?

From
Vivek Khera
Date:
>>>>> "LO" == Lamar Owen <lamar.owen@wgcr.org> writes:

LO> The 2GB size limits of ia32 come in to play due to byte addressing
LO> (versus word addressing) in ia32 plus the use of signed single register
LO> two's-complement integers.

LO> But, as always, I reserve the right to be wrong.

You are wrong.  The file size limit has to do with the data size of
your file offset pointer.  This is not necessarily a 32 bit quantity
on a 32-bit processor.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: Can PostGreSQL handle 100 user database?

From
Lamar Owen
Date:
Vivek Khera wrote:
> LO> The 2GB size limits of ia32 come in to play due to byte addressing
> LO> (versus word addressing) in ia32 plus the use of signed single register
> LO> two's-complement integers.

> LO> But, as always, I reserve the right to be wrong.

> You are wrong.  The file size limit has to do with the data size of
> your file offset pointer.  This is not necessarily a 32 bit quantity
> on a 32-bit processor.

If the file offset pointer is a signed integer, then it holds.  That is
an OS specific issue as to the type of the pointer.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: Can PostGreSQL handle 100 user database?

From
Bruce Guenter
Date:
On Thu, Nov 30, 2000 at 01:48:43PM -0400, The Hermit Hacker wrote:
> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2GB file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering a lot of IBM's servers support
> up to 4 or 8GB of RAM ...

Correct.  With the 36-bit PAE extensions on PII and above CPUs, Linux
supports up to the full 64GB of physical RAM.  Individual processes are
limited to either 2GB or 3GB (or 3.5GB), depending on the kernel compile
option as to the division point between kernel and user memory.  Linux
also supports >2GB files (the kernel is limited to 2TB IIRC -- 2^32 512
byte blocks).
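
The arithmetic behind that figure, as a quick sketch of my own (not
taken from the kernel source): 2^32 blocks of 512 bytes each is 2^41
bytes, i.e. 2 TiB.

#include <stdio.h>

int main(void)
{
    /* 2^32 blocks of 512 (= 2^9) bytes each = 2^41 bytes */
    unsigned long long blocks = 1ULL << 32;
    unsigned long long bytes  = blocks * 512ULL;

    printf("%llu bytes\n", bytes);       /* 2199023255552 */
    printf("= %llu GiB\n", bytes >> 30); /* 2048 GiB      */
    printf("= %llu TiB\n", bytes >> 40); /* 2 TiB         */
    return 0;
}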

Of course, on a 64-bit CPU, all these limitations are off, which really
makes them the platform of choice for heavy data manipulation (I/O).
--
Bruce Guenter <bruceg@em.ca>                       http://em.ca/~bruceg/


Re: Can PostGreSQL handle 100 user database?

From
Marc SCHAEFER
Date:
On Thu, 30 Nov 2000, The Hermit Hacker wrote:

> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2GB file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering a lot of IBM's servers support
> up to 4 or 8GB of RAM ...

Linux 2.2.x on ix86 only supports files up to 2 GB. Linux 2.4.x or any
64-bit platform (SPARC, Alpha, m68k) fixes this (through the Large File
Summit support, and a new libc).

Memory: Up to 1 GB is supported stock, 2 GB by recompiling the kernel.
There is work in progress in 2.4 for supporting the >32-bit ix86
addressing modes available in some processors.



Re: Can PostGreSQL handle 100 user database?

From
"Martin A. Marques"
Date:
On Thursday 30 November 2000 14:48, The Hermit Hacker wrote:
> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2GB file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering a lot of IBM's servers support
> up to 4 or 8GB of RAM ...

As far as I know, that limitation has recently been lifted with kernel 2.4.

Saludos... :-)

--
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-----------------------------------------------------------------
Martín Marqués            email:     martin@math.unl.edu.ar
Santa Fe - Argentina        http://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-----------------------------------------------------------------

Re: Can PostGreSQL handle 100 user database?

From
"Adam Lang"
Date:
Here is a link that explains memory.  It is from a Windows 2000 magazine, but
it isn't very NT-specific.  It speaks in generalities.  I thought it was a
rather good article.

http://www.win2000mag.com/Articles/Index.cfm?ArticleID=7290

I don't think you need to be a subscriber to read it.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
----- Original Message -----
From: "Vivek Khera" <khera@kciLink.com>
To: <pgsql-general@postgresql.org>
Sent: Thursday, November 30, 2000 1:31 PM
Subject: Re: [GENERAL] Can PostGreSQL handle 100 user database?


> >>>>> "LO" == Lamar Owen <lamar.owen@wgcr.org> writes:
>
> LO> The 2GB size limits of ia32 come in to play due to byte addressing
> LO> (versus word addressing) in ia32 plus the use of signed single
register
> LO> two's-complement integers.
>
> LO> But, as always, I reserve the right to be wrong.
>
> You are wrong.  The file size limit has to do with the data size of
> your file offset pointer.  This is not necessarily a 32 bit quantity
> on a 32-bit processor.
>
> --
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Vivek Khera, Ph.D.                Khera Communications, Inc.
> Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
> AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/


Re: Can PostGreSQL handle 100 user database?

From
"Mr. Shannon Aldinger"
Date:

On Thu, 30 Nov 2000, The Hermit Hacker wrote:

>
> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2GB file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering a lot of IBM's servers support
> up to 4 or 8GB of RAM ...
>
Linux kernel 2.2.x unpatched has the 2GB file size and 1GB RAM limit.
Patched with the lfs package, the 2GB file size limit goes away.
The lfs patch needs to be applied against gnu-libc as well. This alone may not
avoid the 2GB limit; the application must use lseek64 instead of
lseek, for example. lfs will be included by default in the upcoming 2.4.x
kernels. The upcoming 2.4.x kernels also support more RAM. I'm fairly
certain RAM patches exist for the 2.2.x series.

I have just one question: will PostgreSQL 7.1 include full support
for using lseek64, stat64, etc.?



RE: Can PostGreSQL handle 100 user database?

From
"Francis Solomon"
Date:
Hi,

2GB file *systems* have been supported forever and a day on Linux. ext2
supports this without batting an eyelid. 2GB *files* have not been
supported very well or very long on 32-bit systems. Essentially you need
a recent 2.4.0-test kernel version (test7 and up) or a patched 2.2.x
kernel (more likely if you're in a production environment). For more
information, see http://www.suse.de/~aj/linux_lfs.html

2GB memory is a limitation under x86 (ia32) Linux in current production
kernels (2.2.x).
Again, the new 2.4.0 kernels go one better by using Intel's PAE
(Physical Address Extension) mode on Pentium Pro CPUs and newer. This
raises the available memory on Linux to 64GB. Of course, 2.4.0-testx
kernels are not production quality, but it's a good taste of what's
imminent.

Hope this helps.

Francis Solomon

>
> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2GB file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering a lot of IBM's servers support
> up to 4 or 8GB of RAM ...


Re: Can PostGreSQL handle 100 user database?

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
Marc SCHAEFER <schaefer@alphanet.ch> writes:

> On Thu, 30 Nov 2000, The Hermit Hacker wrote:
>
> > Note that this is a Linux limitation ... and even then, I'm not quite sure
> > how accurate that is anymore ... the *BSDs have supported >2GB file
> > systems for ages now, and, since IBM supports Linux, I'd be shocked if
> > there was a 2GB limit on memory, considering a lot of IBM's servers support
> > up to 4 or 8GB of RAM ...
>
> Linux 2.2.x on ix86 only supports files up to 2 GB.

This support has been backported and is available in some kernels
shipped with Red Hat Linux, and has been for some time.  Possibly in
others as well.


--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: Can PostGreSQL handle 100 user database?

From
Alex Pilosov
Date:
On Thu, 30 Nov 2000, Mr. Shannon Aldinger wrote:

> Linux kernel 2.2.x unpatched has the 2GB file size and 1GB RAM limit.
> Patched with the lfs package, the 2GB file size limit goes away.
> The lfs patch needs to be applied against gnu-libc as well. This alone may not
> avoid the 2GB limit; the application must use lseek64 instead of
> lseek, for example. lfs will be included by default in the upcoming 2.4.x
That's automatic, if you compile your source with -D_LARGE_FILES (or
something like that, don't remember exact names).

> kernels. The upcoming 2.4.x kernels also support more RAM. I'm fairly
> certain RAM patches exist for the 2.2.x series.
>
> I have just one question to ask will postgresql 7.1 include full support
> for using lseek64, stat64, etc?
Postgres doesn't need to.  The autoconf script just needs a
--with-large-files option that would add -D_LARGE_FILES to CFLAGS.
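
For what it's worth, the glibc spelling I know of for the transparent
route is -D_FILE_OFFSET_BITS=64 (plus -D_LARGEFILE_SOURCE), which widens
off_t to 64 bits and maps open()/lseek() onto their 64-bit variants
without source changes, provided the kernel and libc have LFS support.
A minimal sketch (my own illustration, not the actual configure change):

/* Compile with: cc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE bigseek.c */
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Seek 3 GiB into the file -- past the old 2 GB signed-32-bit limit.
     * With a 64-bit off_t this is an ordinary lseek(); with a 32-bit
     * off_t the offset would not even fit in the type. */
    off_t target = (off_t) 3 * 1024 * 1024 * 1024;
    if (lseek(fd, target, SEEK_SET) == (off_t) -1) {
        perror("lseek");
        return 1;
    }

    printf("seek to %lld succeeded\n", (long long) target);
    close(fd);
    return 0;
}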

-alex


pltcl: missing close-brace

From
"Jonathan Ellis"
Date:
I have a pl/tcl function that doesn't work, but the same function in a
plain-vanilla tcl script does.  (The function emulates Oracle's "replace"
function.)

The PL/TCL results:

bf2=# create function replace2 (varchar, varchar, varchar) returns varchar
as '
    regsub -all {&} "$3" {\\\&} 3
    regsub -all {\\[0-9]} "$3" {\\\0} 3
    eval "regsub -all \{$2\} \{$1\} \{$3\} rval"
    return $rval
' language 'pltcl';

bf2'# bf2'# bf2'# bf2'# bf2'# CREATE

bf2=# select replace2('a b', ' ', 'c');
ERROR:  pltcl: missing close-brace

The plain tcl results:

#!/usr/bin/tcl

proc replace { 1 2 3 } {
    regsub -all {&} "$3" {\\\&} 3
    regsub -all {\\[0-9]} "$3" {\\\0} 3
    eval "regsub -all \{$2\} \{$1\} \{$3\} rval"
    return $rval
}

puts [replace "a b" " " "c"]

$ ./test.tcl
acb

Any idea what this error message means?

-Jonathan


Re: pltcl: missing close-brace

From
Tom Lane
Date:
"Jonathan Ellis" <jellis@advocast.com> writes:
> I have a pl/tcl function that doesn't work, but the same function in a
> plain-vanilla tcl script does.  (The function emulates Oracle's "replace"
> function.)

> bf2=# create function replace2 (varchar, varchar, varchar) returns varchar
> as '
>     regsub -all {&} "$3" {\\\&} 3
>     regsub -all {\\[0-9]} "$3" {\\\0} 3
>     eval "regsub -all \{$2\} \{$1\} \{$3\} rval"
>     return $rval
> ' language 'pltcl';

Those backslashes all need to be protected by backslashes ...
the SQL parser will strip off one level of backslashing as it parses
the command.

            regards, tom lane