Thread: Basic questions before start

Basic questions before start

From: gogulus@eqnet.hu
Hello,

  I am planning to implement a system where there is one Master database
running on a Linux box with as many resources as necessary, and one or more
client PCs with a processor speed of 100 MHz, 32-64 MB of memory, and a
10 Mb/s network card.
 

  The major task is that the clients should work on the actual state of the
data, meaning the basic database (data unchangeable for the clients) plus
some statistical data on other clients, and they should not stop working
when there is no network connection to the master PC. They should send
their detailed transactions on the basic data back to the master PC.

  For this reason I am considering running Postgres on the client computers,
but I am quite concerned about system overhead.

  The clients will only do basic database work:
   - selects from the database, without nested selects (or with nested
     selects of at most 1-2 levels)
   - writing their transactions into the database, with commit/rollback
     functionality (a rough sketch follows below)
   - updating some tables for synchronization with the master db
   - updating some tables to summarize the value of transactions (this
     could be done by triggers, but if they need too many resources, there
     is an existing solution using basic operations)
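
  To make this concrete, here is a rough sketch of the per-sale write path
I have in mind, in PHP just for brevity (all table names, columns and
values are placeholders invented for the example):

<?php
// Sketch of the per-sale write path on a client box: record the
// transaction detail and maintain a summary row in one transaction,
// so a failure rolls both back together.  Table names, columns and
// values are invented for illustration.
$db = pg_connect("dbname=posdb user=pos");

pg_query($db, "BEGIN");
$ok = pg_query($db, "INSERT INTO sales (item_id, qty, price, sold_at)
                     VALUES (42, 1, 9.99, now())")
   && pg_query($db, "UPDATE sales_summary
                     SET total = total + 9.99, items = items + 1
                     WHERE summary_date = current_date");
pg_query($db, $ok ? "COMMIT" : "ROLLBACK");
pg_close($db);
?>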
 

  Size of the database: the basic data comprises 50,000-100,000 records in
each of 3-4 tables, and much less data in the other tables. The number of
tables is around 100.
 

  I would like to know the opinion of experienced Postgres users: can I
embark upon this road, or should I choose another way, using a different
db system with lower resource needs?
 

  Thanks in advance,

  Gogulus

Re: Basic questions before start

From: Paul Thomas
On 29/07/2003 18:04 gogulus@eqnet.hu wrote:
> Hello,
>
>   I am planning to implement a system, where there is one Master database
> running on a Linux box with as many resources as necessary, and there are
> one or more client pc computers,with processor speed of 100 Mhz, memory
> of 32-64 Mbytes and a 10Mb/s network card.
>
>   The major task is that the clients should work on the actual state of
> data, which means the basic database (data, unchangeable for clients),
> and some statistical data on other clients, and they should not stop work
> when there is no network connection to the master pc. They should give
> back their detailed transactions on the basic data to the master pc.
>
>   For this reason I consider to run postgres on the client computers, but
> I am quite concerned about system overhead.
>
>   The clients will only do basic database work:
>    - selects from the database, without nested selects (or with nested
> selects with the maximum of 1-2 levels)
>    - writing their transactions into the database, with commit/rollback
> functionality.
>    - update some tables because of synchronization with master db.
>    - update some tables to summarize the value of transactions. (They
> could be done by triggers, but if they need resources, there is an
> existing solution with basic operations).
>
>   Size of the database: The basic data includes 50-100.000 elements in
> 3-4 tables each, and much less data in other tables. The number of tables
> is around 100.
>
>   I would like to know the opinion of experienced users of Postgres, if I
> can embark upon this road, or should choose another way which uses
> another db system with lower resource-needs.

I've run Linux together with Gnome 1.4, Apache, Sendmail, Postgresql and a
Java VM on a laptop with 64MB RAM with no problem. You just have to accept
that it will be a bit slower.

HTH

--
Paul Thomas
+------------------------------+---------------------------------------------+
| Thomas Micro Systems Limited | Software Solutions for the Smaller Business |
| Computer Consultants         | http://www.thomas-micro-systems-ltd.co.uk   |
+------------------------------+---------------------------------------------+




Re: Basic questions before start

From: Dmitry Tkach
gogulus@eqnet.hu wrote:
How are you going to synchronize the databases on the client boxes with
your 'Master database'?

And how are your clients going to be able to use "some statistical data
on other clients" when "there is no network connection"? Or did you mean
that the connection to the "Master database" is down, but the clients
are still able to talk to each other?
That would be kinda weird (why would that happen?), and still... are you
planning to have each client write its statistical data to *all* the
client databases at once? What will you do if some writes succeed, and
some fail?

... and also, if you do this, why do you need that 'Master database' at
all to begin with?

Dima

> [snip]



Re: Basic questions before start

From: "scott.marlowe"
On Tue, 29 Jul 2003 gogulus@eqnet.hu wrote:

> Hello,
>
>   I am planning to implement a system, where there is one Master
> database running on a Linux box with as many resources as necessary, and
> there are one or more client pc computers,with processor speed of 100
> Mhz, memory of 32-64 Mbytes and a 10Mb/s network card.

That's pretty slow for graphics and fancy stuff.  I'd run a lightweight
window manager if you HAVE to have a GUI front end.

If you can set them up to use a curses based client, you might get better
performance out of them.

If the subordinate databases on each workstation are kept trim and
vacuumed and analyzed often enough, they should be quite snappy from a
database perspective.  It's handling large data sets that slows down
machines like that.  If there's only a few hundred or thousand rows, a
P100 / 64 Megs is plenty fast.
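
For example, a nightly cron job along these lines keeps a small database
snappy (a PHP sketch, assuming the pgsql extension; the db name and user
are made up):

#!/usr/bin/php -q
<?php
// Nightly maintenance for a small workstation database: reclaim dead
// tuples and refresh the planner statistics.  Run it from cron.
$db = pg_connect("dbname=posdb user=pos");
pg_query($db, "VACUUM ANALYZE");
pg_close($db);
?>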


Re: Basic questions before start

From: Gogulus
Dmitry Tkach wrote:

> gogulus@eqnet.hu wrote:
> How are you going to synchronize the databases on the client boxes
> with your 'Master database'?
>
> And how are your clients going to be able to use "some statistical
> data on other clients" when "there is no network connection"? Or did
> you mean that the connection to the "Master database" is down, but the
> clients are still able to talk to each other?
> That would be kinda weird (why would that happen?), and still... are you
> planning to have each client write its statistical data to *all*
> the client databases at once? What will you do if some writes succeed,
> and some fail?

The clients are sales points (cash desks), working on 50-60,000 item
records with description, price, quantity unit, etc. This is what I call
basic data. The fill-up is handled on the master db side. There is an
application sitting on the master db which takes care of sending the data
down to each of the clients, and of receiving transactions from them.

Clients work on the data, and send back selling information: time,
quantity, discounts, payments, etc. This is handed over to the master db
for doing the calculations there. As the clients can switch with each
other (e.g. a salesperson can go to another box), we need to know some
data on clients which is machine-independent. That's why some statistical
data of the clients should be synchronized across the client boxes. This
is handled by the master db as well.

As the clients should be able to work without a network connection, they
have to have a local database and, when the net connection is up, do the
synchronization with the master db. The main idea is that sales cannot
stop because of a net connection breakage.
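
One synchronization pass would look roughly like this sketch (PHP-style
pseudo-code; connection strings, tables and columns are all invented):

<?php
// One sync pass: push unsent sales to the master and mark each row
// as sent only after the master accepted it, so a connection breakage
// mid-way simply leaves rows to be retried on the next pass.
$local  = pg_connect("dbname=posdb user=pos");
$master = @pg_connect("host=master dbname=central user=pos");
if (!$master) {
    exit(0);                  // no network: keep selling, retry later
}

$res = pg_query($local,
    "SELECT id, item_id, qty, price FROM sales WHERE sent = false");
while ($row = pg_fetch_assoc($res)) {
    $ok = pg_query($master, sprintf(
        "INSERT INTO sales_central (client_id, item_id, qty, price)
         VALUES (7, %d, %d, '%s')",
        $row['item_id'], $row['qty'], $row['price']));
    if (!$ok) {
        break;                // master trouble: retry this row next pass
    }
    pg_query($local,
        "UPDATE sales SET sent = true WHERE id = " . (int)$row['id']);
}
?>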

That's why I am asking if 100 MHz of CPU and 32 MB of RAM can take care
of a database with around 100 tables, 3-4 of these tables having
50-60,000 records, the others at most 1,000.

TIA,

Gogulus

>
> ... and also, if you do this, why do you need that 'Master database'
> at all to begin with?
>
> Dima
>
>> [snip]




Re: Basic questions before start

From: "scott.marlowe"
On Wed, 30 Jul 2003, Gogulus wrote:

> As the clients should be able to work without network connection, they
> have to have a local database, and if net connection is on, do the
> synchronization with master db. The main idea is, sale cannot stop
> because of net connection breakage.
>
> That's why I am asking if 100 Mhz of CPU, 32 Mbytes of RAM can take care
> of a database with around 100 tables, 3-4 of these tables having
> 50-60000 of records, others have at most 1000.

I would say yes, but I would also say that you should design this around
a character-based interface.  The overhead of a GUI is gonna make it much
slower.

I don't know if you're familiar with the ncurses library, but that's what
I'd use, along with C or a lightweight scripting language like Perl or
PHP.
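
For example, a minimal screen with PHP's ncurses binding looks something
like this (assuming PHP is built with the ncurses extension; the text is
arbitrary):

<?php
// Minimal ncurses screen: initialize the terminal, print a line,
// wait for a keypress, and restore the terminal on the way out.
ncurses_init();
ncurses_mvaddstr(0, 0, "POS terminal ready - press any key");
ncurses_refresh();
$key = ncurses_getch();
ncurses_end();
?>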


Minimal system (was Re: Basic questions before start)

From: Ron Johnson
On Wed, 2003-07-30 at 09:25, scott.marlowe wrote:
> On Wed, 30 Jul 2003, Gogulus wrote:
>
> > As the clients should be able to work without network connection, they
> > have to have a local database, and if net connection is on, do the
> > synchronization with master db. The main idea is, sale cannot stop
> > because of net connection breakage.
> >
> > That's why I am asking if 100 Mhz of CPU, 32 Mbytes of RAM can take care
> > of a database with around 100 tables, 3-4 of these tables having
> > 50-60000 of records, others have at most 1000.
>
> I would say yes, but I would also say that you should design this around a
> character based interface.  The overhead of a GUI is gonna make it much
> slower.
>
> I don't know if you're familiar with the ncurses library, but that's what
> I'd use, along with C or a lightweight scripting language like Perl or
> PHP.

Or Python, which has an excellent curses library.

How could he do local and remote access in PHP?  Wouldn't a local
Apache server (which takes more RAM) be necessary?

Also regarding PHP, "links" is a great text-mode web browser that
handles style sheets and frames.

--
+-----------------------------------------------------------------+
| Ron Johnson, Jr.        Home: ron.l.johnson@cox.net             |
| Jefferson, LA  USA                                              |
|                                                                 |
| "I'm not a vegetarian because I love animals, I'm a vegetarian  |
|  because I hate vegetables!"                                    |
|    unknown                                                      |
+-----------------------------------------------------------------+



Re: Minimal system (was Re: Basic questions before start)

From: DeJuan Jackson
Ron Johnson wrote:

> On Wed, 2003-07-30 at 09:25, scott.marlowe wrote:
> > On Wed, 30 Jul 2003, Gogulus wrote:
> > > [snip]
> >
> > I would say yes, but I would also say that you should design this around
> > a character-based interface.  The overhead of a GUI is gonna make it
> > much slower.
> >
> > I don't know if you're familiar with the ncurses library, but that's
> > what I'd use, along with C or a lightweight scripting language like
> > Perl or PHP.
>
> Or Python, which has an excellent curses library.
>
> How could he do local and remote access in PHP?  Wouldn't a local
> Apache server (which takes more RAM) be necessary?
>
> Also regarding PHP, "links" is a great text-mode web browser that
> handles style sheets and frames.

PHP has a command-line version, and its own GTK binding (PHP-GTK).

I write all my processing scripts in PHP to leverage all the functions
and classes I've written for the web.

Re: Minimal system (was Re: Basic questions before start)

From: "scott.marlowe"
On 30 Jul 2003, Ron Johnson wrote:

> On Wed, 2003-07-30 at 09:25, scott.marlowe wrote:
> > On Wed, 30 Jul 2003, Gogulus wrote:
> >
> > > [snip]
> >
> > I would say yes, but I would also say that you should design this around a
> > character based interface.  The overhead of a GUI is gonna make it much
> > slower.
> >
> > I don't know if you're familiar with the ncurses library, but that's what
> > I'd use, along with C or a lightweight scripting language like Perl or
> > PHP.
>
> Or Python, which has an excellent curses library.
>
> How could he do local and remote access in PHP?

I'm not sure what you mean.  Local and remote access of PostgreSQL?
That's easy.  But do you mean something else?  I'm just talking about
using ncurses from a command line.

http://us2.php.net/manual/en/ref.ncurses.php

If you HAVE to have graphics, you can use gtk too, but that would need
X11.

http://gtk.php.net/

>  Wouldn't a local
> Apache server (which takes more RAM) be necessary?

PHP doesn't need apache any more than Python needs Zope.  PHP can run from
the command line just like perl, python, etc...  We write most of our
system maintenance and cron job stuff in it.  :-)
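
For instance, a stand-alone script that opens both a local and a remote
connection (host, db and table names are made up):

#!/usr/local/bin/php -q
<?php
// A stand-alone command-line PHP script, no web server involved.
// With the CGI build, -q on the shebang line suppresses the HTTP
// headers PHP would otherwise print before the output.
$local  = pg_connect("dbname=posdb user=pos");
$master = pg_connect("host=master.example.com dbname=central user=pos");

$res = pg_query($local, "SELECT count(*) FROM sales WHERE sent = false");
echo "rows waiting for sync:  " . pg_fetch_result($res, 0, 0) . "\n";

$res = pg_query($master, "SELECT count(*) FROM sales_central");
echo "rows already on master: " . pg_fetch_result($res, 0, 0) . "\n";
?>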

But even if you did run apache, a local server uses about 1M per child
(only need one, two max) and that's more than usual.  On a box JUST
running apache/php with both stripped down, you're probably looking at
~650k or so per child.

> Also regarding PHP, "links" is a great text-mode web browser that
> handles style sheets and frames.

I'll have to look it up.


Re: Minimal system (was Re: Basic questions before start)

From: Ron Johnson
On Wed, 2003-07-30 at 14:49, scott.marlowe wrote:
> On 30 Jul 2003, Ron Johnson wrote:
>
> > On Wed, 2003-07-30 at 09:25, scott.marlowe wrote:
> > > On Wed, 30 Jul 2003, Gogulus wrote:
[snip]
> > How could he do local and remote access in PHP?
>
> I'm not sure what you mean.  Local and remote access of postgresql?
> that's easy.  But do you mean something else?  I'm just talking about
> using ncurses from a command line.
>
> http://us2.php.net/manual/en/ref.ncurses.php

This was all predicated upon my not knowing that there is a stand-alone
PHP...

> [snip]

--
+-----------------------------------------------------------------+
| Ron Johnson, Jr.        Home: ron.l.johnson@cox.net             |
| Jefferson, LA  USA                                              |
|                                                                 |
| "I'm not a vegetarian because I love animals, I'm a vegetarian  |
|  because I hate vegetables!"                                    |
|    unknown                                                      |
+-----------------------------------------------------------------+



Re: Minimal system (was Re: Basic questions before start)

From: "scott.marlowe"
On 30 Jul 2003, Ron Johnson wrote:

> On Wed, 2003-07-30 at 14:49, scott.marlowe wrote:
> > On 30 Jul 2003, Ron Johnson wrote:
> >
> > > On Wed, 2003-07-30 at 09:25, scott.marlowe wrote:
> > > > On Wed, 30 Jul 2003, Gogulus wrote:
> [snip]
> > > How could he do local and remote access in PHP?
> >
> > I'm not sure what you mean.  Local and remote access of postgresql?
> > that's easy.  But do you mean something else?  I'm just talking about
> > using ncurses from a command line.
> >
> > http://us2.php.net/manual/en/ref.ncurses.php
>
> This was all predicated upon my not knowing that there is a stand-alone
> PHP...

Ahhh.  OK.  Yeah, when I first started writing stuff in it and figured out
the cgi version worked fine as a scripting language I was so happy.  Due
to its http heritage, you have to run it with a -q switch to tell it not
to add headers to its output.

It can even do async stream handling, but I haven't played with that much.


Re: Basic questions before start

From: Gogulus
scott.marlowe wrote:

>On Wed, 30 Jul 2003, Gogulus wrote:
>
>
>
>>As the clients should be able to work without network connection, they
>>have to have a local database, and if net connection is on, do the
>>synchronization with master db. The main idea is, sale cannot stop
>>because of net connection breakage.
>>
>>That's why I am asking if 100 Mhz of CPU, 32 Mbytes of RAM can take care
>>of a database with around 100 tables, 3-4 of these tables having
>>50-60000 of records, others have at most 1000.
>>
>>
>
>I would say yes, but I would also say that you should design this around a
>character based interface.  The overhead of a GUI is gonna make it much
>slower.
>
>I don't know if you're familiar with the ncurses library, but that's what
>I'd use, along with C or a lightweight scripting language like Perl or
>PHP.
>
>
Well, the whole application is written in C or C++ after the redesign. It
has to be, because the PC needs to cooperate with a special driver
handling special hardware connected to the PC. The GUI is already
implemented using SVGALib and MicroWin. So I wanna say, not only PGSQL
will run on it... And it is all on Red Hat 7.1 because of driver
limitations.

Gogulus





Re: Minimal system (was Re: Basic questions before start)

From: DeJuan Jackson
scott.marlowe wrote:

> Ahhh.  OK.  Yeah, when I first started writing stuff in it and figured
> out the cgi version worked fine as a scripting language I was so happy.
> Due to its http heritage, you have to run it with a -q switch to tell it
> not to add headers to its output.
>
> It can even do async stream handling, but I haven't played with that
> much.
This is off topic but...

Newer versions (4.3.*) of PHP build a command-line (php-cli) binary as
well, so you no longer have to do two compiles when building the apache
module. Some of its features (from
http://www.php.net/manual/en/features.commandline.php):

  • Unlike the CGI SAPI, no headers are written to the output.

    Though the CGI SAPI provides a way to suppress HTTP headers, there's no
    equivalent switch to enable them in the CLI SAPI.

    CLI is started up in quiet mode by default, though the -q switch is
    kept for compatibility so that you can use older CGI scripts.

  • It does not change the working directory to that of the script (the -C
    switch is kept for compatibility).

  • Plain text error messages (no HTML formatting).

  • There are certain php.ini directives which are overridden by the CLI
    SAPI because they do not make sense in shell environments.

  • ...

And yes, the stream handling is very nice and will be even better when
5.0.0 hits the release stage, not to mention the class implementation
improvements.
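
A quick way to see the header difference for yourself (the file and
binary names here just follow the naming above; your install may differ):

<?php
// hello.php -- run it under both SAPIs to see the difference:
//   php-cli hello.php   -> prints only the line below
//   php hello.php       -> the CGI build prints "Content-type: ..."
//                          first, unless invoked as: php -q hello.php
echo "hello from the command line\n";
?>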

Re: Basic questions before start

From: "scott.marlowe"
On Wed, 30 Jul 2003, Gogulus wrote:

> scott.marlowe wrote:
>
> > [snip]
>
> Well, the whole application is written in C or C++ after the redesign.
> It has to be, because the PC needs to cooperate with a special driver
> handling special hardware connected to the PC. The GUI is already
> implemented using SVGALib and MicroWin. So I wanna say, not only PGSQL
> will run on it... And it is all on Red Hat 7.1 because of driver
> limitations.

That's a pretty lightweight environment, so pg should be fine on it.  If
the machines start swapping at 32 megs, you may need machines with 64
megs of RAM, but that's as big as you should need.