Re: PostgreSQL backend process high memory usage issue - Mailing list pgsql-general

From Merlin Moncure
Subject Re: PostgreSQL backend process high memory usage issue
Msg-id BANLkTiniE+Aoey3deGW_hVAMVBELxgJqQg@mail.gmail.com
In response to PostgreSQL backend process high memory usage issue  (Shianmiin <Shianmiin@gmail.com>)
List pgsql-general
On Thu, Apr 7, 2011 at 3:42 PM, Shianmiin <Shianmiin@gmail.com> wrote:
> Hi there,
>
> We are evaluating PostgreSQL for implementing a multitenant database.
> Currently we are running some tests on a single-database-multiple-schema model
> (basically, all tenants have the same set of database objects under their own
> schema within the same database).
> The application will maintain a connection pool that will be shared among
> all tenants/schemas.
>
> e.g. If the database has 500 tenants/schemas and each tenant has 200
> tables/views,
> the total number of tables/views will be 500 * 200 = 100,000.
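(For concreteness, the layout described above looks roughly like this; the
schema and object names are illustrative, not taken from the attached test
code:)

-- one schema per tenant, each holding an identical set of objects
CREATE SCHEMA tenant1;
CREATE TABLE tenant1.accounts (id int PRIMARY KEY, payload text);
CREATE VIEW tenant1.v_accounts AS SELECT * FROM tenant1.accounts;

CREATE SCHEMA tenant2;
CREATE TABLE tenant2.accounts (id int PRIMARY KEY, payload text);
CREATE VIEW tenant2.v_accounts AS SELECT * FROM tenant2.accounts;
-- ...and so on, for all 500 tenants and 200 objects each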
>
> Since the connection pool will be used by all tenants, eventually each
> connection will hit all the tables/views.
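(Presumably each pooled connection gets pointed at one tenant per request,
either with fully qualified names or with something like a search_path
switch; tenant and view names again illustrative:)

-- route a pooled connection to one tenant's objects for a request
SET search_path TO tenant42;
SELECT * FROM v_accounts;  -- resolves to tenant42.v_accounts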
>
> In our tests, we found that as a connection hits more views, the memory
> usage of the backend process increases quite quickly, and most of it is
> private memory.
> That memory is held until the connection is closed.
>
> We have a test case in which one backend process uses more than 30 GB of
> memory and eventually gets an out-of-memory error.
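(One way to see where that private memory sits is to dump the backend's
memory context statistics; on 9.0 that takes gdb, and the output goes to
the backend's stderr/server log:)

# attach to the backend (pid from pg_backend_pid) and print its
# memory context tree; needs gdb and postgres debug symbols
gdb -p <pid> -batch -ex 'call MemoryContextStats(TopMemoryContext)'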
>
> To help understand the issue, I wrote code to create a simplified test case:
>  - MTDB_destroy: used to clear tenant schemas
>  - MTDB_Initialize: used to create a multitenant DB
>  - MTDB_RunTests: simplified test case; basically, select from all tenant
> views one by one.
>
> The tests I've done were on PostgreSQL 9.0.3 on CentOS 5.4.
> To make sure I had a clean environment, I re-created the database cluster and
> left most configuration settings at their defaults
> (the only thing I HAD to change was to increase "max_locks_per_transaction",
> since MTDB_destroy needs to drop many objects).
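(That is a one-line postgresql.conf change; the value below is just an
example, the default is 64, and the server must be restarted for it to
take effect:)

# postgresql.conf -- raised so MTDB_destroy can drop many objects
# in a single transaction (default is 64)
max_locks_per_transaction = 256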
>
> This is what I do to reproduce the issue:
> 1. create a new database
> 2. create the three functions using the code attached
> 3. connect to the newly created db and run the initialize scripts
>
> -- Initialize
> select MTDB_Initialize('tenant', 100, 100, true);
> -- not sure if vacuum analyze is useful here, I just run it
> vacuum analyze;
> -- check the tables/views created
> select table_schema, table_type, count(*) from information_schema.tables
> where table_schema like 'tenant%' group by table_schema, table_type order by
> table_schema, table_type;
>
> 4. open another connection to the newly created db and run the test scripts
>
> -- get backend process id for current connection
> SELECT pg_backend_pid();
>
> -- open a linux console and run top -p <pid> and watch VIRT, RES and SHR
>
> -- run tests
> select MTDB_RunTests('tenant', 1);
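(One way to watch the growth from that console; RES/RSS should climb
steadily as the connection touches more tenant views for the first time:)

# refresh every 5 seconds while MTDB_RunTests is running;
# substitute the pid reported by pg_backend_pid() above
watch -n 5 "ps -p <pid> -o pid,vsz,rss,cmd"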

don't write your test environment this way.  postgresql functions !=
stored procedures. Your entire suite of tests is running in a single
transaction context, which is very problematic and not a real test of
any reasonably written application.   Convert your test suite into a
sql/perl/bash/etc script that is run against the database through libpq
or psql.
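Something along these lines (a rough sketch only; the database name,
tenant schema names, and view name are placeholders that assume the
MTDB_Initialize layout above):

#!/bin/bash
# emit one SELECT per tenant view, then run them all through a
# single psql session, so each statement commits in its own
# transaction on one long-lived connection
for i in $(seq 1 100); do
    echo "select count(*) from tenant${i}.v_accounts;"
done > run_tests.sql
psql -d testdb -f run_tests.sql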

merlin
