From: PFC
Subject: Re: [QUESTION]Concurrent Access
Msg-id: op.udpvkdv6cigqcu@apollo13.peufeu.com
In response to: [QUESTION]Concurrent Access ("Leví Teodoro da Silva" <tlevisilva@gmail.com>)
List: pgsql-performance

> I want to know if the PostGree has limitations about the concurrent
> access,
> because a lot of people will access this database at the same time.

    PostgreSQL has excellent concurrency provided you use it correctly.

    But what do you mean by concurrent access?

    * Number of open Postgres connections at the same time?
    => Each of these uses a little bit of RAM (see the manual), but idle
connections use no CPU.
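
    For reference, you can count the open connections by querying
pg_stat_activity, which has one row per backend. A minimal sketch in
Python, assuming psycopg2 and a database named "test" (both illustrative
choices, not from the original post):

        import psycopg2

        # Each connection like this one is a Postgres backend process
        # holding a little RAM, but using no CPU while idle.
        conn = psycopg2.connect("dbname=test")
        cur = conn.cursor()
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        print("open connections:", cur.fetchone()[0])
        conn.close()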

    * Number of open transactions at the same time?
    (between BEGIN and COMMIT)
    If your transactions are long and many of them run at the same time,
you can get lock problems: for instance, transaction A updates row X and
transaction B updates the same row X, so one has to wait for the other to
commit or roll back. If your transactions last 1 ms there is no problem;
if they last 5 minutes you will suffer.
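
    A minimal sketch of that blocking, again with psycopg2 (the "orders"
table and row id are hypothetical):

        import psycopg2

        a = psycopg2.connect("dbname=test")
        b = psycopg2.connect("dbname=test")

        # A updates row X; psycopg2 opens a transaction implicitly, so A
        # now holds a row lock until it commits or rolls back.
        a.cursor().execute("UPDATE orders SET note = 'A' WHERE id = 1")

        # B touching the same row would block right here until A finishes:
        #   b.cursor().execute("UPDATE orders SET note = 'B' WHERE id = 1")

        a.commit()   # lock released; B's update could now proceed
        a.close()
        b.close()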

    * Number of queries executing at the same time?
    This is different from the above: each running query eats some CPU and
IO resources, and memory too.
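
    On PostgreSQL 9.2 and later, pg_stat_activity has a "state" column
that distinguishes idle connections from running queries, so a sketch
like this counts only the queries actually executing:

        import psycopg2

        conn = psycopg2.connect("dbname=test")
        cur = conn.cursor()
        # state = 'active' means the backend is executing a query now.
        cur.execute("SELECT count(*) FROM pg_stat_activity "
                    "WHERE state = 'active';")
        print("queries executing:", cur.fetchone()[0])
        conn.close()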

    * Number of concurrent HTTP connections to your website?
    If you have a website, you will probably use some form of connection
pooling, or lighttpd/fastcgi, or a proxy, whatever, so the number of open
database connections at the same time won't be that high. Unless you use
mod_php without connection pooling, in which case it will suck of course,
but that's normal.
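
    A minimal client-side pooling sketch using psycopg2's built-in pool
(the pool sizes and the "test" database are illustrative):

        from psycopg2.pool import SimpleConnectionPool

        # Keep between 1 and 10 connections open and reuse them instead
        # of opening a fresh one for every HTTP request.
        pool = SimpleConnectionPool(1, 10, "dbname=test")

        conn = pool.getconn()
        try:
            cur = conn.cursor()
            cur.execute("SELECT 1;")
            conn.commit()
        finally:
            pool.putconn(conn)   # return to the pool, don't close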

    * Number of people using your client?
    See the point about idle connections above, or use a connection pool.

> I want to know about the limitations, like how much memory do i have to
> use

    That depends on what you want to do ;)

> How big could be my database ?

    That depends on what you do with it ;)

    Working set size is more relevant than total database size.

    For instance, if your database contains orders from the last 10 years
but only current orders (say, orders from this month) are accessed all
the time, with old orders rarely touched, you want the last 1-2 months'
worth of orders to fit in RAM for fast access (caching), but you don't
need enough RAM to fit the entire database.
    So, think about working sets, not total sizes.

    And there is no limit on table size (well, there is, but you'll never
hit it). People have terabytes in Postgres and it seems to work ;)
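
    If you want to see how big your database and your hot tables actually
are, something like this works (the "orders" table is hypothetical):

        import psycopg2

        conn = psycopg2.connect("dbname=test")
        cur = conn.cursor()

        # Total on-disk size of the current database.
        cur.execute(
            "SELECT pg_size_pretty(pg_database_size(current_database()));")
        print("database size:", cur.fetchone()[0])

        # One table plus its indexes -- compare this (the working set),
        # not the total size, against your RAM.
        cur.execute(
            "SELECT pg_size_pretty(pg_total_relation_size('orders'));")
        print("orders + indexes:", cur.fetchone()[0])
        conn.close()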
