Thread: limiting connections per user/database

limiting connections per user/database

From
Petr Jelínek
Date:
Hello,

The attached patch allows setting connection limits per user or per database
(it's on the TODO list).
It's a proposal because I am not sure whether this implementation can be
accepted (I have never made a new-feature patch for Postgres before, so I
really don't know), and I would like to know what you think about it and
what I should change.

Something about the patch:
I added two new GUC variables, named max_db_connections and
max_user_connections, which can be set only by a superuser, which means they
can go in the main config file or in per-user/per-database config.
I was thinking about three different approaches (the other two were using
max_connections with set/get hooks, or a change in the catalog tables), but
new GUC variables seemed the best solution to me.
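
For illustration, usage might look like this (values are made up; the ALTER
syntax for per-database settings mirrors the example later in this thread):

   # in postgresql.conf (global defaults)
   max_db_connections = 40
   max_user_connections = 10

   -- or attached to a specific database or user
   ALTER DATABASE mydb SET max_db_connections = '10';
   ALTER USER someuser SET max_user_connections = '3';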

Connection limits are checked in the InitPostgres function, after the user
and database configs are loaded.
The patch works only when stats are on, because it takes the number of
connections per user and per database from there; I had to patch pgstat to
store per-user connection stats.

Also, this patch relies on a bugfix I sent on Thursday, but I wasn't
subscribed and it is still awaiting moderation, so I attached it to this
mail too (pgstat.c.diff); without it, database stats are broken in
current CVS.

I modified only the .c sources, no documentation; I will make the
documentation changes when (and if) this is finished and accepted.

The diffs should be against the latest CVS.

--
Regards
Petr Jelinek (PJMODOS)


*** pgstat.c    Sat Jun 18 01:17:26 2005
--- pgstat.c.new    Thu Jun 23 21:38:06 2005
***************
*** 2613,2619 ****
  static void
  pgstat_recv_bestart(PgStat_MsgBestart *msg, int len)
  {
!     PgStat_StatBeEntry *entry;

      /*
       * If the backend is known dead, we ignore the message -- we don't
--- 2613,2621 ----
  static void
  pgstat_recv_bestart(PgStat_MsgBestart *msg, int len)
  {
!     PgStat_StatBeEntry *beentry;
!     PgStat_StatDBEntry *dbentry;
!     bool        found;

      /*
       * If the backend is known dead, we ignore the message -- we don't
***************
*** 2623,2632 ****
      if (pgstat_add_backend(&msg->m_hdr) != 0)
          return;

!     entry = &(pgStatBeTable[msg->m_hdr.m_backendid - 1]);
!     entry->userid = msg->m_userid;
!     memcpy(&entry->clientaddr, &msg->m_clientaddr, sizeof(entry->clientaddr));
!     entry->databaseid = msg->m_databaseid;
  }


--- 2625,2675 ----
      if (pgstat_add_backend(&msg->m_hdr) != 0)
          return;

!     beentry = &(pgStatBeTable[msg->m_hdr.m_backendid - 1]);
!     beentry->userid = msg->m_userid;
!     memcpy(&beentry->clientaddr, &msg->m_clientaddr, sizeof(beentry->clientaddr));
!     beentry->databaseid = msg->m_databaseid;
!
!     /*
!      * Lookup or create the database entry for this backend's DB.
!      */
!     dbentry = (PgStat_StatDBEntry *) hash_search(pgStatDBHash,
!                                            (void *) &(msg->m_databaseid),
!                                                  HASH_ENTER, &found);
!     if (dbentry == NULL)
!         ereport(ERROR,
!                 (errcode(ERRCODE_OUT_OF_MEMORY),
!              errmsg("out of memory in statistics collector --- abort")));
!
!     /*
!      * If not found, initialize the new one.
!      */
!     if (!found)
!     {
!         HASHCTL        hash_ctl;
!
!         dbentry->tables = NULL;
!         dbentry->n_xact_commit = 0;
!         dbentry->n_xact_rollback = 0;
!         dbentry->n_blocks_fetched = 0;
!         dbentry->n_blocks_hit = 0;
!         dbentry->n_backends = 0;
!         dbentry->destroy = 0;
!
!         memset(&hash_ctl, 0, sizeof(hash_ctl));
!         hash_ctl.keysize = sizeof(Oid);
!         hash_ctl.entrysize = sizeof(PgStat_StatTabEntry);
!         hash_ctl.hash = tag_hash;
!         dbentry->tables = hash_create("Per-database table",
!                                       PGSTAT_TAB_HASH_SIZE,
!                                       &hash_ctl,
!                                       HASH_ELEM | HASH_FUNCTION);
!     }
!
!     /*
!      * Count number of connects to the database
!      */
!     dbentry->n_backends++;
  }


diff -Nacr -x CVS bah2\src\backend\postmaster\pgstat.c bah\src\backend\postmaster\pgstat.c
*** bah2\src\backend\postmaster\pgstat.c    Sat Jun 25 21:57:22 2005
--- bah\src\backend\postmaster\pgstat.c    Sat Jun 25 21:56:06 2005
***************
*** 131,136 ****
--- 131,137 ----

  static TransactionId pgStatDBHashXact = InvalidTransactionId;
  static HTAB *pgStatDBHash = NULL;
+ static HTAB *pgStatUserHash = NULL;
  static HTAB *pgStatBeDead = NULL;
  static PgStat_StatBeEntry *pgStatBeTable = NULL;
  static int    pgStatNumBackends = 0;
***************
*** 163,173 ****
--- 164,177 ----
  static void pgstat_beshutdown_hook(int code, Datum arg);

  static PgStat_StatDBEntry *pgstat_get_db_entry(int databaseid);
+ static PgStat_StatUserEntry *pgstat_get_user_entry(int userid);
  static int    pgstat_add_backend(PgStat_MsgHdr *msg);
  static void pgstat_sub_backend(int procpid);
  static void pgstat_drop_database(Oid databaseid);
+ static void pgstat_drop_user(Oid userid);
  static void pgstat_write_statsfile(void);
  static void pgstat_read_statsfile(HTAB **dbhash, Oid onlydb,
+                       HTAB **userhash,
                        PgStat_StatBeEntry **betab,
                        int *numbackends);
  static void backend_read_statsfile(void);
***************
*** 181,186 ****
--- 185,191 ----
  static void pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len);
  static void pgstat_recv_tabpurge(PgStat_MsgTabpurge *msg, int len);
  static void pgstat_recv_dropdb(PgStat_MsgDropdb *msg, int len);
+ static void pgstat_recv_dropuser(PgStat_MsgDropuser *msg, int len);
  static void pgstat_recv_resetcounter(PgStat_MsgResetcounter *msg, int len);


***************
*** 772,782 ****
      Relation    dbrel;
      HeapScanDesc dbscan;
      HeapTuple    dbtup;
!     Oid           *dbidlist;
!     int            dbidalloc;
!     int            dbidused;
      HASH_SEQ_STATUS hstat;
      PgStat_StatDBEntry *dbentry;
      PgStat_StatTabEntry *tabentry;
      HeapTuple    reltup;
      int            nobjects = 0;
--- 777,791 ----
      Relation    dbrel;
      HeapScanDesc dbscan;
      HeapTuple    dbtup;
!     Relation    userrel;
!     HeapScanDesc userscan;
!     HeapTuple    usertup;
!     Oid           *oidlist;
!     int            oidalloc;
!     int            oidused;
      HASH_SEQ_STATUS hstat;
      PgStat_StatDBEntry *dbentry;
+     PgStat_StatUserEntry *userentry;
      PgStat_StatTabEntry *tabentry;
      HeapTuple    reltup;
      int            nobjects = 0;
***************
*** 866,886 ****
      /*
       * Read pg_database and remember the Oid's of all existing databases
       */
!     dbidalloc = 256;
!     dbidused = 0;
!     dbidlist = (Oid *) palloc(sizeof(Oid) * dbidalloc);

      dbrel = heap_open(DatabaseRelationId, AccessShareLock);
      dbscan = heap_beginscan(dbrel, SnapshotNow, 0, NULL);
      while ((dbtup = heap_getnext(dbscan, ForwardScanDirection)) != NULL)
      {
!         if (dbidused >= dbidalloc)
          {
!             dbidalloc *= 2;
!             dbidlist = (Oid *) repalloc((char *) dbidlist,
!                                         sizeof(Oid) * dbidalloc);
          }
!         dbidlist[dbidused++] = HeapTupleGetOid(dbtup);
      }
      heap_endscan(dbscan);
      heap_close(dbrel, AccessShareLock);
--- 875,895 ----
      /*
       * Read pg_database and remember the Oid's of all existing databases
       */
!     oidalloc = 256;
!     oidused = 0;
!     oidlist = (Oid *) palloc(sizeof(Oid) * oidalloc);

      dbrel = heap_open(DatabaseRelationId, AccessShareLock);
      dbscan = heap_beginscan(dbrel, SnapshotNow, 0, NULL);
      while ((dbtup = heap_getnext(dbscan, ForwardScanDirection)) != NULL)
      {
!         if (oidused >= oidalloc)
          {
!             oidalloc *= 2;
!             oidlist = (Oid *) repalloc((char *) oidlist,
!                                         sizeof(Oid) * oidalloc);
          }
!         oidlist[oidused++] = HeapTupleGetOid(dbtup);
      }
      heap_endscan(dbscan);
      heap_close(dbrel, AccessShareLock);
***************
*** 894,902 ****
      {
          Oid            dbid = dbentry->databaseid;

!         for (i = 0; i < dbidused; i++)
          {
!             if (dbidlist[i] == dbid)
              {
                  dbid = InvalidOid;
                  break;
--- 903,911 ----
      {
          Oid            dbid = dbentry->databaseid;

!         for (i = 0; i < oidused; i++)
          {
!             if (oidlist[i] == dbid)
              {
                  dbid = InvalidOid;
                  break;
***************
*** 910,919 ****
          }
      }

      /*
!      * Free the dbid list.
       */
!     pfree(dbidlist);

      /*
       * Tell the caller how many removeable objects we found
--- 919,977 ----
          }
      }

+
      /*
!      * Clear the Oid list.
       */
!     memset(oidlist, 0, sizeof(Oid) * oidalloc);
!
!     /*
!      * Read pg_shadow and remember the Oid's of all existing users
!      */
!     userrel = heap_open(ShadowRelationId, AccessShareLock);
!     userscan = heap_beginscan(userrel, SnapshotNow, 0, NULL);
!     while ((usertup = heap_getnext(userscan, ForwardScanDirection)) != NULL)
!     {
!         if (oidused >= oidalloc)
!         {
!             oidalloc *= 2;
!             oidlist = (Oid *) repalloc((char *) oidlist,
!                                         sizeof(Oid) * oidalloc);
!         }
!         oidlist[oidused++] = HeapTupleGetOid(usertup);
!     }
!     heap_endscan(userscan);
!     heap_close(userrel, AccessShareLock);
!
!     /*
!      * Search the user hash table for dead users and tell the
!      * collector to drop them as well.
!      */
!     hash_seq_init(&hstat, pgStatUserHash);
!     while ((userentry = (PgStat_StatUserEntry *) hash_seq_search(&hstat)) != NULL)
!     {
!         Oid            userid = userentry->userid;
!
!         for (i = 0; i < oidused; i++)
!         {
!             if (oidlist[i] == userid)
!             {
!                 userid = InvalidOid;
!                 break;
!             }
!         }
!
!         if (userid != InvalidOid)
!         {
!             nobjects++;
!             pgstat_drop_user(userid);
!         }
!     }
!
!     /*
!      * Free the Oid list.
!      */
!     pfree(oidlist);

      /*
       * Tell the caller how many removeable objects we found
***************
*** 946,951 ****
--- 1004,1032 ----


  /* ----------
+  * pgstat_drop_user() -
+  *
+  *    Tell the collector that we just dropped a user.
+  *    This is the only message that shouldn't get lost in space. Otherwise
+  *    the collector will keep the statistics for the dead users until its
+  *    stats file gets removed while the postmaster is down.
+  * ----------
+  */
+ static void
+ pgstat_drop_user(Oid userid)
+ {
+     PgStat_MsgDropuser msg;
+
+     if (pgStatSock < 0)
+         return;
+
+     pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_DROPUSER);
+     msg.m_userid = userid;
+     pgstat_send(&msg, sizeof(msg));
+ }
+
+
+ /* ----------
   * pgstat_reset_counters() -
   *
   *    Tell the statistics collector to reset counters for our database.
***************
*** 1196,1201 ****
--- 1277,1308 ----
                                                HASH_FIND, NULL);
  }

+ /* ----------
+  * pgstat_fetch_stat_userentry() -
+  *
+  *    Support function for the SQL-callable pgstat* functions. Returns
+  *    the collected statistics for one user or NULL. NULL doesn't mean
+  *    that the user doesn't exist, it is just not yet known by the
+  *    collector, so the caller is better off to report ZERO instead.
+  * ----------
+  */
+ PgStat_StatUserEntry *
+ pgstat_fetch_stat_userentry(Oid userid)
+ {
+     /*
+      * If not done for this transaction, read the statistics collector
+      * stats file into some hash tables.
+      */
+     backend_read_statsfile();
+
+     /*
+      * Lookup the requested user; return NULL if not found
+      */
+     return (PgStat_StatUserEntry *) hash_search(pgStatUserHash,
+                                               (void *) &userid,
+                                               HASH_FIND, NULL);
+ }
+

  /* ----------
   * pgstat_fetch_stat_tabentry() -
***************
*** 1490,1496 ****
       * to zero.
       */
      pgStatRunningInCollector = TRUE;
!     pgstat_read_statsfile(&pgStatDBHash, InvalidOid, NULL, NULL);

      /*
       * Create the dead backend hashtable
--- 1597,1603 ----
       * to zero.
       */
      pgStatRunningInCollector = TRUE;
!     pgstat_read_statsfile(&pgStatDBHash, InvalidOid, &pgStatUserHash, NULL, NULL);

      /*
       * Create the dead backend hashtable
***************
*** 1670,1675 ****
--- 1777,1786 ----
                      pgstat_recv_dropdb((PgStat_MsgDropdb *) &msg, nread);
                      break;

+                 case PGSTAT_MTYPE_DROPUSER:
+                     pgstat_recv_dropuser((PgStat_MsgDropuser *) &msg, nread);
+                     break;
+
                  case PGSTAT_MTYPE_RESETCOUNTER:
                      pgstat_recv_resetcounter((PgStat_MsgResetcounter *) &msg,
                                               nread);
***************
*** 2087,2092 ****
--- 2198,2228 ----
      return result;
  }

+ /*
+  * Lookup the hash table entry for the specified user. If no hash
+  * table entry exists, initialize it.
+  */
+ static PgStat_StatUserEntry *
+ pgstat_get_user_entry(int userid)
+ {
+     PgStat_StatUserEntry *result;
+     bool found;
+
+     /* Lookup or create the hash table entry for this user */
+     result = (PgStat_StatUserEntry *) hash_search(pgStatUserHash,
+                                                 &userid,
+                                                 HASH_ENTER, &found);
+
+     /* If not found, initialize the new one. */
+     if (!found)
+     {
+         result->n_backends  = 0;
+         result->destroy = 0;
+     }
+
+     return result;
+ }
+
  /* ----------
   * pgstat_sub_backend() -
   *
***************
*** 2156,2161 ****
--- 2292,2298 ----
      HASH_SEQ_STATUS hstat;
      HASH_SEQ_STATUS tstat;
      PgStat_StatDBEntry *dbentry;
+     PgStat_StatUserEntry *userentry;
      PgStat_StatTabEntry *tabentry;
      PgStat_StatBeDead *deadbe;
      FILE       *fpout;
***************
*** 2254,2259 ****
--- 2391,2412 ----
      }

      /*
+      * Walk through the user table.
+      */
+     ereport(DEBUG3, (errmsg_internal("before write 'U'")));
+     hash_seq_init(&hstat, pgStatUserHash);
+     ereport(DEBUG3, (errmsg_internal("write 'U' - before while")));
+     while ((userentry = (PgStat_StatUserEntry *) hash_seq_search(&hstat)) != NULL)
+     {
+         ereport(DEBUG3, (errmsg_internal("write 'U' - in while 1")));
+         fputc('U', fpout);
+         ereport(DEBUG3, (errmsg_internal("write 'U' - in while 2")));
+         fwrite(userentry, sizeof(PgStat_StatUserEntry), 1, fpout);
+         ereport(DEBUG3, (errmsg_internal("write 'U' - in while 3")));
+     }
+     ereport(DEBUG3, (errmsg_internal("after write 'U'")));
+
+     /*
       * Write out the known running backends to the stats file.
       */
      i = MaxBackends;
***************
*** 2327,2336 ****
--- 2480,2492 ----
   */
  static void
  pgstat_read_statsfile(HTAB **dbhash, Oid onlydb,
+                       HTAB **userhash,
                        PgStat_StatBeEntry **betab, int *numbackends)
  {
      PgStat_StatDBEntry *dbentry;
      PgStat_StatDBEntry dbbuf;
+     PgStat_StatUserEntry *userentry;
+     PgStat_StatUserEntry userbuf;
      PgStat_StatTabEntry *tabentry;
      PgStat_StatTabEntry tabbuf;
      HASHCTL        hash_ctl;
***************
*** 2371,2376 ****
--- 2527,2545 ----
                            HASH_ELEM | HASH_FUNCTION | mcxt_flags);

      /*
+      * Create users hashtable
+      */
+     ereport(DEBUG3, (errmsg_internal("before Create users hashtable")));
+     memset(&hash_ctl, 0, sizeof(hash_ctl));
+     hash_ctl.keysize = sizeof(Oid);
+     hash_ctl.entrysize = sizeof(PgStat_StatUserEntry);
+     hash_ctl.hash = oid_hash;
+     hash_ctl.hcxt = use_mcxt;
+     *userhash = hash_create("Users hash", PGSTAT_DB_HASH_SIZE, &hash_ctl,
+                           HASH_ELEM | HASH_FUNCTION | mcxt_flags);
+     ereport(DEBUG3, (errmsg_internal("after Create users hashtable")));
+
+     /*
       * Initialize the number of known backends to zero, just in case we do
       * a silent error return below.
       */
***************
*** 2501,2506 ****
--- 2670,2707 ----
                  break;

                  /*
+                  * 'U'    A PgStat_StatUserEntry struct describing a user follows.
+                  */
+             case 'U':
+                 ereport(DEBUG3, (errmsg_internal("in read 'U' start")));
+                 if (fread(&userbuf, 1, sizeof(userbuf), fpin) != sizeof(userbuf))
+                 {
+                     ereport(pgStatRunningInCollector ? LOG : WARNING,
+                             (errmsg("corrupted pgstat.stat file")));
+                     goto done;
+                 }
+
+                 /*
+                  * Add to the user hash
+                  */
+                 userentry = (PgStat_StatUserEntry *) hash_search(*userhash,
+                                               (void *) &userbuf.userid,
+                                                              HASH_ENTER,
+                                                              &found);
+                 if (found)
+                 {
+                     ereport(pgStatRunningInCollector ? LOG : WARNING,
+                             (errmsg("corrupted pgstat.stat file")));
+                     goto done;
+                 }
+
+                 memcpy(userentry, &userbuf, sizeof(PgStat_StatUserEntry));
+                 userentry->destroy = 0;
+                 userentry->n_backends = 0;
+                 ereport(DEBUG3, (errmsg_internal("in read 'U' end")));
+                 break;
+
+                 /*
                   * 'M'    The maximum number of backends to expect follows.
                   */
              case 'M':
***************
*** 2557,2562 ****
--- 2758,2772 ----
                  if (dbentry)
                      dbentry->n_backends++;

+                 /*
+                  * Count backends per user here.
+                  */
+                 userentry = (PgStat_StatUserEntry *) hash_search(*userhash,
+                            (void *) &((*betab)[havebackends].userid),
+                                                         HASH_FIND, NULL);
+                 if (userentry)
+                     userentry->n_backends++;
+
                  havebackends++;
                  if (numbackends != 0)
                      *numbackends = havebackends;
***************
*** 2598,2603 ****
--- 2808,2814 ----
      {
          Assert(!pgStatRunningInCollector);
          pgstat_read_statsfile(&pgStatDBHash, MyDatabaseId,
+                               &pgStatUserHash,
                                &pgStatBeTable, &pgStatNumBackends);
          pgStatDBHashXact = topXid;
      }
***************
*** 2615,2620 ****
--- 2826,2832 ----
  {
      PgStat_StatBeEntry *beentry;
      PgStat_StatDBEntry *dbentry;
+     PgStat_StatUserEntry *userentry;
      bool        found;

      /*
***************
*** 2670,2675 ****
--- 2882,2913 ----
       * Count number of connects to the database
       */
      dbentry->n_backends++;
+
+
+     /*
+      * Lookup or create the user entry for this backend's user.
+      */
+     userentry = (PgStat_StatUserEntry *) hash_search(pgStatUserHash,
+                                            (void *) &(msg->m_userid),
+                                                  HASH_ENTER, &found);
+     if (userentry == NULL)
+         ereport(ERROR,
+                 (errcode(ERRCODE_OUT_OF_MEMORY),
+              errmsg("out of memory in statistics collector --- abort")));
+
+     /*
+      * If not found, initialize the new one.
+      */
+     if (!found)
+     {
+         userentry->n_backends = 0;
+         userentry->destroy = 0;
+     }
+
+     /*
+      * Count number of connects of the user
+      */
+     userentry->n_backends++;
  }


***************
*** 2865,2870 ****
--- 3103,3137 ----
       * Mark the database for destruction.
       */
      dbentry->destroy = PGSTAT_DESTROY_COUNT;
+ }
+
+
+ /* ----------
+  * pgstat_recv_dropuser() -
+  *
+  *    Arrange for dead user removal
+  * ----------
+  */
+ static void
+ pgstat_recv_dropuser(PgStat_MsgDropuser *msg, int len)
+ {
+     PgStat_StatUserEntry *userentry;
+
+     /*
+      * Make sure the backend is counted for.
+      */
+     if (pgstat_add_backend(&msg->m_hdr) < 0)
+         return;
+
+     /*
+      * Lookup the user in the hashtable.
+      */
+     userentry = pgstat_get_user_entry(msg->m_userid);
+
+     /*
+      * Mark the user for destruction.
+      */
+     userentry->destroy = PGSTAT_DESTROY_COUNT;
  }


diff -Nacr -x CVS bah2\src\backend\utils\init\globals.c bah\src\backend\utils\init\globals.c
*** bah2\src\backend\utils\init\globals.c    Sat Jan 01 00:01:40 2005
--- bah\src\backend\utils\init\globals.c    Sat Jun 25 21:56:06 2005
***************
*** 92,97 ****
--- 92,99 ----
  /* Primary determinants of sizes of shared-memory structures: */
  int            NBuffers = 1000;
  int            MaxBackends = 100;
+ int            MaxDBBackends = 0;
+ int            MaxUserBackends = 0;

  int            VacuumCostPageHit = 1;        /* GUC parameters for vacuum */
  int            VacuumCostPageMiss = 10;
diff -Nacr -x CVS bah2\src\backend\utils\init\postinit.c bah\src\backend\utils\init\postinit.c
*** bah2\src\backend\utils\init\postinit.c    Fri Jun 24 19:42:44 2005
--- bah\src\backend\utils\init\postinit.c    Sat Jun 25 21:56:06 2005
***************
*** 43,48 ****
--- 43,49 ----
  #include "utils/portal.h"
  #include "utils/relcache.h"
  #include "utils/syscache.h"
+ #include "pgstat.h"


  static bool FindMyDatabase(const char *name, Oid *db_id, Oid *db_tablespace);
***************
*** 50,55 ****
--- 51,57 ----
  static void InitCommunication(void);
  static void ShutdownPostgres(int code, Datum arg);
  static bool ThereIsAtLeastOneUser(void);
+ static void CheckMaxConnections(const char *dbname, const char *username);


  /*** InitPostgres support ***/
***************
*** 436,441 ****
--- 438,450 ----
      if (!bootstrap)
          ReverifyMyDatabase(dbname);

+
+     /* Now we have database-specific & user-specific configs loaded,
+      * we can check for max_db_connections and max_user_connections
+      */
+     CheckMaxConnections(dbname, username);
+
+
      /*
       * Final phase of relation cache startup: write a new cache file if
       * necessary.  This is done after ReverifyMyDatabase to avoid writing
***************
*** 548,551 ****
--- 557,599 ----
      heap_close(pg_shadow_rel, AccessExclusiveLock);

      return result;
+ }
+
+
+ /*
+  * Check that we are not over the max_db_connections and max_user_connections limits
+  */
+ static void
+ CheckMaxConnections(const char *dbname, const char *username)
+ {
+     PgStat_StatDBEntry *dbentry;
+     PgStat_StatUserEntry *userentry;
+
+     if (MaxDBBackends > 0)
+     {
+         if ((dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId)) != NULL)
+         {
+             if (dbentry->n_backends > MaxDBBackends)
+             {
+                 ereport(FATAL,
+                     (errcode(ERRCODE_TOO_MANY_CONNECTIONS),
+                  errmsg("sorry, too many clients already for database \"%s\"",
+                     dbname)));
+             }
+         }
+     }
+
+     if (MaxUserBackends > 0)
+     {
+         if ((userentry = pgstat_fetch_stat_userentry(GetUserId())) != NULL)
+         {
+             if (userentry->n_backends > MaxUserBackends)
+             {
+                 ereport(FATAL,
+                     (errcode(ERRCODE_TOO_MANY_CONNECTIONS),
+                  errmsg("sorry, too many clients already for user \"%s\"",
+                     username)));
+             }
+         }
+     }
  }
diff -Nacr -x CVS bah2\src\backend\utils\misc\guc.c bah\src\backend\utils\misc\guc.c
*** bah2\src\backend\utils\misc\guc.c    Thu Jun 23 03:57:06 2005
--- bah\src\backend\utils\misc\guc.c    Sat Jun 25 21:56:06 2005
***************
*** 981,986 ****
--- 981,1004 ----
      },

      {
+         {"max_db_connections", PGC_SUSET, CONN_AUTH_SETTINGS,
+             gettext_noop("Sets the maximum number of concurrent connections per database."),
+             NULL
+         },
+         &MaxDBBackends,
+         0, 0, INT_MAX / BLCKSZ, NULL, NULL
+     },
+
+     {
+         {"max_user_connections", PGC_SUSET, CONN_AUTH_SETTINGS,
+             gettext_noop("Sets the maximum number of concurrent connections per user."),
+             NULL
+         },
+         &MaxUserBackends,
+         0, 0, INT_MAX / BLCKSZ, NULL, NULL
+     },
+
+     {
          {"shared_buffers", PGC_POSTMASTER, RESOURCES_MEM,
              gettext_noop("Sets the number of shared memory buffers used by the server."),
              NULL
diff -Nacr -x CVS bah2\src\bin\psql\tab-complete.c bah\src\bin\psql\tab-complete.c
*** bah2\src\bin\psql\tab-complete.c    Thu Jun 23 03:57:08 2005
--- bah\src\bin\psql\tab-complete.c    Sat Jun 25 21:56:06 2005
***************
*** 576,581 ****
--- 576,583 ----
          "log_statement_stats",
          "maintenance_work_mem",
          "max_connections",
+         "max_db_connections",
+         "max_user_connections",
          "max_files_per_process",
          "max_fsm_pages",
          "max_fsm_relations",
diff -Nacr -x CVS bah2\src\include\miscadmin.h bah\src\include\miscadmin.h
*** bah2\src\include\miscadmin.h    Sat Feb 26 20:43:34 2005
--- bah\src\include\miscadmin.h    Sat Jun 25 21:56:06 2005
***************
*** 130,135 ****
--- 130,137 ----

  extern DLLIMPORT int NBuffers;
  extern int    MaxBackends;
+ extern int    MaxDBBackends;
+ extern int    MaxUserBackends;

  extern DLLIMPORT int MyProcPid;
  extern struct Port *MyProcPort;
diff -Nacr -x CVS bah2\src\include\pgstat.h bah\src\include\pgstat.h
*** bah2\src\include\pgstat.h    Wed May 11 03:41:42 2005
--- bah\src\include\pgstat.h    Sat Jun 25 21:56:06 2005
***************
*** 28,33 ****
--- 28,34 ----
  #define PGSTAT_MTYPE_TABPURGE        5
  #define PGSTAT_MTYPE_DROPDB            6
  #define PGSTAT_MTYPE_RESETCOUNTER    7
+ #define PGSTAT_MTYPE_DROPUSER        8

  /* ----------
   * The data type used for counters.
***************
*** 175,180 ****
--- 176,193 ----


  /* ----------
+  * PgStat_MsgDropuser            Sent by the backend to tell the collector
+  *                                about dropped user
+  * ----------
+  */
+ typedef struct PgStat_MsgDropuser
+ {
+     PgStat_MsgHdr m_hdr;
+     Oid            m_userid;
+ } PgStat_MsgDropuser;
+
+
+ /* ----------
   * PgStat_MsgResetcounter        Sent by the backend to tell the collector
   *                                to reset counters
   * ----------
***************
*** 224,229 ****
--- 237,253 ----
      int            destroy;
  } PgStat_StatDBEntry;

+ /* ----------
+  * PgStat_StatUserEntry            The collectors data per user
+  * ----------
+  */
+ typedef struct PgStat_StatUserEntry
+ {
+     Oid            userid;
+     int            n_backends;
+     int            destroy;
+ } PgStat_StatUserEntry;
+

  /* ----------
   * PgStat_StatBeEntry            The collectors data per backend
***************
*** 424,429 ****
--- 448,454 ----
   */
  extern PgStat_StatDBEntry *pgstat_fetch_stat_dbentry(Oid dbid);
  extern PgStat_StatTabEntry *pgstat_fetch_stat_tabentry(Oid relid);
+ extern PgStat_StatUserEntry *pgstat_fetch_stat_userentry(Oid userid);
  extern PgStat_StatBeEntry *pgstat_fetch_stat_beentry(int beid);
  extern int    pgstat_fetch_stat_numbackends(void);


Re: limiting connections per user/database

From
Tom Lane
Date:
Petr Jelínek <pjmodos@parba.cz> writes:
> The attached patch allows setting connection limits per user or per database
> (it's on the TODO list).

I don't think this is going to work.  The first problem with it is that
it changes pgstats from an optional into a required part of the system.
The second is that since the pgstats output lags well behind the actual
state of the system, it'll be at best a very approximate limit on the
number of connections.  Perhaps for some people that's good enough, but
I kinda doubt it's what the people asking for the feature have in mind.

            regards, tom lane

Re: limiting connections per user/database

From
Petr Jelínek
Date:
OK, I didn't know that the stats lag well behind the actual state; I used
pgstat because it already stores the number of connections per database, so I
thought doing the same thing again would be needless.
So you think I should store those data somewhere myself. What do you think is
the best place: some extra hash, the system catalog, or ... ?

BTW, you should still apply that pgstat.c.diff, because it fixes a bug in
current CVS.

--
Regards
Petr Jelinek (PJMODOS)
www.parba.cz


Re: limiting connections per user/database

From
"Andrew Dunstan"
Date:
Petr Jelínek said:
>
> Something about the patch:
> I added two new GUC variables, named max_db_connections and
> max_user_connections, which can be set only by a superuser, which means
> they can go in the main config file or in per-user/per-database config.


Is this what is intended by the TODO item? I thought that it was intended to
allow max connections to be specified on a per-db or per-user basis, not
just for global limits on per-user or per-db connections.

cheers

andrew



Re: limiting connections per user/database

From
Petr Jelínek
Date:
Andrew Dunstan wrote:

>Is this what is intended by the TODO item? I thought that it was intended to
>allow max connections to be specified on a per-db or per-user basis, not
>just for global limits on per-user or per-db connections.
>
>
They are: ALTER DATABASE dbname SET max_db_connections = '10';
Like I said, they're checked after the user-specific and database-specific
config is loaded.
But after what Tom said, I think I will have to rewrite it all, so it doesn't
really matter.

--
Regards
Petr Jelinek (PJMODOS)



Re: limiting connections per user/database

From
Tom Lane
Date:
Petr Jelínek <pjmodos@parba.cz> writes:
> BTW, you should still apply that pgstat.c.diff, because it fixes a bug in
> current CVS

What bug exactly?

            regards, tom lane

Re: limiting connections per user/database

From
Petr Jelínek
Date:
Tom Lane wrote:

> What bug exactly?
>
Database stats aren't initialized so everything in pg_stat_database is
always zero.

--
Regards
Petr Jelinek (PJMODOS)

Re: limiting connections per user/database

From
Tom Lane
Date:
Petr Jelínek <pjmodos@parba.cz> writes:
> Tom Lane wrote:
>> What bug exactly?
>>
> Database stats aren't initialized so everything in pg_stat_database is
> always zero.

[ shrug... ]  Don't see that here; sure it isn't something broken in
your local modified version?

            regards, tom lane

Re: limiting connections per user/database

From
Petr Jelínek
Date:
Tom Lane wrote:

>[ shrug... ]  Don't see that here; sure it isn't something broken in
>your local modified version?
>
>
Well, I tried it with an unmodified CVS copy, and numbackends is 0 when 40
clients are connected (using the default config).

I am a bit confused now, because I am not really sure whether it's intended
to be this way or not: the 8.0 behaviour was to report numbackends when
stats were on; now it reports numbackends only when stats_row_level is true.
I should probably talk with neilc to see what he intended when he committed
the code which changed this (but I believe it's a side effect, not an
intended change).

--
Regards
Petr Jelinek (PJMODOS)



Re: limiting connections per user/database

From
Petr Jelínek
Date:
One more thing: do you have any more concerns about the max-connections patch
apart from the fact that it uses pgstat?
For example, are GUC variables OK for this, or should it be a new column in
pg_shadow & pg_database, etc.? I would like to rewrite it just once or twice,
not ten times.

--
Regards
Petr Jelinek (PJMODOS)



Re: limiting connections per user/database

From
Euler Taveira de Oliveira
Date:
Hi Petr,

> One more thing: do you have any more concerns about the max-connections
> patch apart from the fact that it uses pgstat?
> For example, are GUC variables OK for this, or should it be a new column
> in pg_shadow & pg_database, etc.? I would like to rewrite it just once or
> twice, not ten times.
>
IIRC, people want the last one. We have more control if we can set it
per user or per database. Talking about pgstat, I think you can rely on
it 'cause it can be disabled. Could you describe the design you
intended to implement?


Euler Taveira de Oliveira
euler[at]yahoo_com_br


Re: limiting connections per user/database

From
Petr Jelínek
Date:
Euler Taveira de Oliveira wrote:

>IIRC, people want the last one. We have more control if we can set it
>per user or per database.
>
Like I said, GUC variables _can_ be set per user or per database if you use
the right context.

>Talking about pgstat, I think you can rely on
>it 'cause it can be disabled. Could you describe the design you
>intended to implement?
>
>
I "can rely on it 'cause it can be disabled" - did you mean that I can't
rely on it ?
Well I am still not sure how I will implement it when I can't use
pgstat. I think that I will have to use similar aproach which makes me sad.

--
Regards
Petr Jelinek (PJMODOS)



Re: limiting connections per user/database

From
Alvaro Herrera
Date:
On Sun, Jun 26, 2005 at 07:46:32PM +0200, Petr Jelínek wrote:
> Euler Taveira de Oliveira wrote:
>
> >IIRC, people want the last one. We have more control if we can set it
> >per user or per database.
>
> Like I said, GUC variables _can_ be set per user or per database if you
> use the right context.

I don't think this approach is very user-friendly.  I'd vote for the
catalog approach, I think.


> >Talking about pgstat, I think you can rely on
> >it 'cause it can be disabled. Could you describe the design you
> >intended to implement?
>
> I "can rely on it 'cause it can be disabled" - did you mean that I can't
> rely on it ?
> Well I am still not sure how I will implement it when I can't use
> pgstat. I think that I will have to use similar aproach which makes me sad.

Maybe you could make some checks against the shared array of PGPROCs
(procarray.c), for the per-database limit at least.  Not sure about
per-user limit.

--
Alvaro Herrera (<alvherre[a]surnet.cl>)
"[PostgreSQL] is a great group; in my opinion it is THE best open source
development communities in existence anywhere."                (Lamar Owen)

Re: limiting connections per user/database

From
Petr Jelínek
Date:
Alvaro Herrera wrote:

>I don't think this approach is very user-friendly.  I'd vote for the
>catalog approach, I think.
>
>
OK, I am fine with both, but catalog changes would mean more hacking of
ALTER DATABASE and ALTER USER.

>Maybe you could make some checks against the shared array of PGPROCs
>(procarray.c), for the per-database limit at least.  Not sure about
>per-user limit.
>
>
That's a good idea (I could maybe add the userid to the PGPROC struct too),
but I think there could be a problem with two-phase commit, because prepared
transactions add a new entry to that array of PGPROCs too, and I don't know
if we want to include them in that limit.

--
Regards
Petr Jelinek (PJMODOS)



Re: limiting connections per user/database

From
Heikki Linnakangas
Date:
On Sun, 26 Jun 2005, Petr Jelínek wrote:

> Alvaro Herrera wrote:
>
>> Maybe you could make some checks against the shared array of PGPROCs
>> (procarray.c), for the per-database limit at least.  Not sure about
>> per-user limit.
>>
> That's a good idea (I could maybe add the userid to the PGPROC struct too),
> but I think there could be a problem with two-phase commit, because prepared
> transactions add a new entry to that array of PGPROCs too, and I don't know
> if we want to include them in that limit.

You can ignore PGPROCs that belong to prepared transactions. They have 0 in
the pid field; see the TransactionIdIsActive and CountActiveBackends
functions in procarray.c for an example.
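
A minimal sketch of such a count, loosely modeled on CountActiveBackends (the
helper name CountDBBackends is an assumption for illustration, not code from
the patch; field and lock names follow the sources of that era):

    /* hypothetical helper in procarray.c: count live backends in one DB */
    int
    CountDBBackends(Oid databaseid)
    {
        ProcArrayStruct *arrayP = procArray;
        int         count = 0;
        int         index;

        LWLockAcquire(ProcArrayLock, LW_SHARED);
        for (index = 0; index < arrayP->numProcs; index++)
        {
            PGPROC     *proc = arrayP->procs[index];

            if (proc->pid == 0)
                continue;       /* skip prepared transactions */
            if (proc->databaseId == databaseid)
                count++;
        }
        LWLockRelease(ProcArrayLock);

        return count;
    }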

- Heikki

Re: limiting connections per user/database

From
Alvaro Herrera
Date:
On Sun, Jun 26, 2005 at 08:52:55PM +0200, Petr Jelínek wrote:
> Alvaro Herrera wrote:

> >Maybe you could make some checks against the shared array of PGPROCs
> >(procarray.c), for the per-database limit at least.  Not sure about
> >per-user limit.
>
> That's a good idea (I could maybe add the userid to the PGPROC struct too),
> but I think there could be a problem with two-phase commit, because prepared
> transactions add a new entry to that array of PGPROCs too, and I don't know
> if we want to include them in that limit.

Prepared transactions can be filtered out by checking the pid struct
member.  I'm not sure if anybody would object to adding the
authenticated user Id to ProcArray, but I don't see why not.

--
Alvaro Herrera (<alvherre[a]surnet.cl>)
"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.
 That's because in Europe they call me by name, and in the US by value!"

Re: limiting connections per user/database

From
Petr Jelínek
Date:
Alvaro Herrera wrote:

>Prepared transactions can be filtered out by checking the pid struct
>member.  I'm not sure if anybody would object to adding the
>authenticated user Id to ProcArray, but I don't see why not.
>
>
Very well, it seems to work this way (although the code for storing the
userid in PGPROC isn't as clean as I hoped).

The problem now is that I am storing those limits in the system catalog, but
there is no ALTER DATABASE implementation which I could use to change them;
there are only the RENAME, OWNER and SET implementations, and those are not
usable for "normal" properties (SET is for GUC variables only, which was the
actual reason I used GUC variables the first time).
That said, I think that I will have to implement some ALTER DATABASE command
for this purpose, and I am not sure if I can handle it, because I am not
familiar with bison (I even have problems adding a new ALTER USER property
to the existing implementation).
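
For reference, the kind of gram.y production this would involve might look
roughly like this (a hypothetical sketch; the option-list nonterminal and the
node it builds are assumptions, not the patch's actual grammar):

    AlterDatabaseStmt:
            ALTER DATABASE database_name opt_with alterdb_opt_list
                {
                    AlterDatabaseStmt *n = makeNode(AlterDatabaseStmt);
                    n->dbname = $3;
                    n->options = $5;
                    $$ = (Node *) n;
                }
        ;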

--
Regards
Petr Jelinek (PJMODOS)



Re: limiting connections per user/database

From
Neil Conway
Date:
Petr Jelínek wrote:
> I am a bit confused now, because I am not really sure whether it's intended
> to be this way or not: the 8.0 behaviour was to report numbackends when
> stats were on; now it reports numbackends only when stats_row_level is true.

Yeah, this is a bug. Attached is a fix. I'll apply this to HEAD later
today barring any objections.

-Neil
Index: src/backend/postmaster/pgstat.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/backend/postmaster/pgstat.c,v
retrieving revision 1.96
diff -c -r1.96 pgstat.c
*** src/backend/postmaster/pgstat.c    25 Jun 2005 23:58:57 -0000    1.96
--- src/backend/postmaster/pgstat.c    27 Jun 2005 04:47:44 -0000
***************
*** 2073,2078 ****
--- 2073,2079 ----
          result->n_blocks_fetched = 0;
          result->n_blocks_hit = 0;
          result->destroy = 0;
+         result->n_backends = 0;

          memset(&hash_ctl, 0, sizeof(hash_ctl));
          hash_ctl.keysize = sizeof(Oid);
***************
*** 2321,2327 ****
   * pgstat_read_statsfile() -
   *
   *    Reads in an existing statistics collector and initializes the
!  *    databases hash table (who's entries point to the tables hash tables)
   *    and the current backend table.
   * ----------
   */
--- 2322,2328 ----
   * pgstat_read_statsfile() -
   *
   *    Reads in an existing statistics collector and initializes the
!  *    databases hash table (whose entries point to the table's hash tables)
   *    and the current backend table.
   * ----------
   */
***************
*** 2627,2632 ****
--- 2628,2636 ----
      entry->userid = msg->m_userid;
      memcpy(&entry->clientaddr, &msg->m_clientaddr, sizeof(entry->clientaddr));
      entry->databaseid = msg->m_databaseid;
+
+     /* Initialize the backend's entry in the db hash table */
+     (void) pgstat_get_db_entry(msg->m_databaseid);
  }


Index: src/include/pgstat.h
===================================================================
RCS file: /var/lib/cvs/pgsql/src/include/pgstat.h,v
retrieving revision 1.30
diff -c -r1.30 pgstat.h
*** src/include/pgstat.h    25 Jun 2005 23:58:58 -0000    1.30
--- src/include/pgstat.h    27 Jun 2005 02:46:23 -0000
***************
*** 209,215 ****
   */

  /* ----------
!  * PgStat_StatDBEntry            The collectors data per database
   * ----------
   */
  typedef struct PgStat_StatDBEntry
--- 209,215 ----
   */

  /* ----------
!  * PgStat_StatDBEntry            The collector's data per database
   * ----------
   */
  typedef struct PgStat_StatDBEntry
***************
*** 226,232 ****


  /* ----------
!  * PgStat_StatBeEntry            The collectors data per backend
   * ----------
   */
  typedef struct PgStat_StatBeEntry
--- 226,232 ----


  /* ----------
!  * PgStat_StatBeEntry            The collector's data per backend
   * ----------
   */
  typedef struct PgStat_StatBeEntry
***************
*** 269,275 ****


  /* ----------
!  * PgStat_StatTabEntry            The collectors data table data
   * ----------
   */
  typedef struct PgStat_StatTabEntry
--- 269,275 ----


  /* ----------
!  * PgStat_StatTabEntry            The collector's data table data
   * ----------
   */
  typedef struct PgStat_StatTabEntry

Re: limiting connections per user/database

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> Yeah, this is a bug. Attached is a fix. I'll apply this to HEAD later
> today barring any objections.

I looked at this but did not actually see the code path that requires
forcing creation of the per-DB entry right at this spot.  The HASH_FIND
calls for this hashtable seem to all happen on the backend side not the
collector side.  Can you explain why we need this?

            regards, tom lane

Re: limiting connections per user/database

From
Neil Conway
Date:
Tom Lane wrote:
> I looked at this but did not actually see the code path that requires
> forcing creation of the per-DB entry right at this spot.  The HASH_FIND
> calls for this hashtable seem to all happen on the backend side not the
> collector side.  Can you explain why we need this?

Yeah, I missed this when making the original change (this code is rather
opaque :-\). The problem is that if we don't initialize the dbentry for
the database we connect to, it won't get written out to the statsfile in
pgstat_write_statsfile(). So the database won't be counted as having any
backends connected to it in pgstat_read_statsfile() (see line 2558 of
pgstat.c in HEAD).

BTW, the comment at line 2210 of pgstat.c is misleading: the n_backends
in the entries of the dbentry hash table are explicitly ignored when
reading in the stats file -- the value is instead derived from the
number of beentries that are seen.

-Neil

Re: limiting connections per user/database

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> Tom Lane wrote:
>> I looked at this but did not actually see the code path that requires
>> forcing creation of the per-DB entry right at this spot.  The HASH_FIND
>> calls for this hashtable seem to all happen on the backend side not the
>> collector side.  Can you explain why we need this?

> Yeah, I missed this when making the original change (this code is rather
> opaque :-\).

No kidding.  Somebody ought to separate the collector-side code from the
backend-side code sometime.

> BTW, the comment at line 2210 of pgstat.c is misleading: the n_backends
> in the entries of the dbentry hash table are explicitly ignored when
> reading in the stats file -- the value is instead derived from the
> number of beentries that are seen.

Right.  So do we care whether the collector has the right number?
Or should we push the responsibility for tracking that count over
to the collector (+1 for that personally)?

            regards, tom lane

Re: limiting connections per user/database

From
Neil Conway
Date:
Tom Lane wrote:
> Right.  So do we care whether the collector has the right number?

Not at present; n_backends is not read/written by the stats collector
itself, it is just in the hash table for the convenience of backends who
read in the stats file.

> Or should we push the responsibility for tracking that count over
> to the collector (+1 for that personally)?

Makes sense to me -- I'll take a look at implementing this. For now I'll
just commit the bug fix.

-Neil

Re: limiting connections per user/database

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> Makes sense to me -- I'll take a look at implementing this. For now I'll
> just commit the bug fix.

I'm still missing what the exact "bug fix" is here.  So far we have
established that (a) the n_backends count is not tracked by the
collector; (b) the collector does not need it; (c) the count is
recomputed by each backend that might need it; and (d) we could probably
save some cycles by letting the collector count active backends instead
of making the backends do it.

(d) is a performance bug, but if there is a functionality bug I'm not
seeing it.

            regards, tom lane

Re: limiting connections per user/database

From
Neil Conway
Date:
Tom Lane wrote:
> I'm still missing what the exact "bug fix" is here.

The bug is:

- a backend starts up and sends the collector a BESTART message. For the
sake of clarity, suppose that the backend is the first and only backend
connected to its database.

- the stats collector receives the BESTART message and records the
existence of the backend via pgstat_add_backend(). In current sources,
it does not initialize the entry for the backend's database in pgStatDBHash.

- the stats collector then decides to write out the stats file. Since
there is no pgStatDBHash entry for the backend's database, we don't
write out anything for it.

- when we read in the stats collector's stats file in a normal backend,
there will be no pgStatDBHash entry for the backend's database.
Therefore we'll read the beentry for the backend, and when we go to
increment the n_backends for the corresponding dbentry, there will be no
dbentry found (HASH_FIND at pgstat.c:2554), so n_backends won't be updated.

- therefore we won't count the n_backends for the database correctly.

This can be seen in current sources with a fresh initdb and default
postgresql.conf settings: connect to a database, and do select * from
pg_stat_database.
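
That is, with default settings, something like the following shows zero even
for the database you are currently connected to:

    SELECT datname, numbackends FROM pg_stat_database;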

-Neil

Re: limiting connections per user/database

From
Petr Jelínek
Date:
Tom Lane wrote:

>(d) is a performance bug, but if there is a functionality bug I'm not
>seeing it.
>
>
You must use the default config (or at least turn off everything in stats
that's off by default) to see this bug.

And if you guys want to be more confused: when I make that connection-limit
patch (and if it gets accepted), there will be a function to get the exact
number of backends for a database (at least that's my current approach), so
the whole n_backends thing will not be needed anymore :)

--
Regards
Petr Jelinek (PJMODOS)



Re: limiting connections per user/database

From
Petr Jelínek
Date:
So now I have it all working, including a brand new ALTER DATABASE (this
would have to be double-checked by somebody from the core team) and using the
proc array to get the number of backends per database and per user.

My only concern now is the userid in PGPROC; does anybody have anything
against it?

And one more thing about the userid: at the time MyProc is inserted into the
proc array we don't know the userid, so I am now updating MyProc as soon as I
know it (the check for the connection limit is done afterwards).
There is another solution I know of: I could use the flat files to get the
userid before MyProc is stored in the proc array (which is how we get
MyDatabaseId anyway).
Which of those solutions is better in your opinion?

Oh, and at the moment those limits are not enforced for superusers; any
objections?
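
A rough sketch of what that check might boil down to in InitPostgres (the
helper names CountDBBackends/CountUserBackends and the limit variables are
assumptions for illustration, not the patch itself):

    /* hypothetical per-database / per-user limit check */
    if (!superuser())
    {
        if (dbconnlimit >= 0 && CountDBBackends(MyDatabaseId) > dbconnlimit)
            ereport(FATAL,
                    (errcode(ERRCODE_TOO_MANY_CONNECTIONS),
                     errmsg("too many connections for database \"%s\"",
                            dbname)));
        if (userconnlimit >= 0 && CountUserBackends(GetUserId()) > userconnlimit)
            ereport(FATAL,
                    (errcode(ERRCODE_TOO_MANY_CONNECTIONS),
                     errmsg("too many connections for user \"%s\"",
                            username)));
    }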

--
Regards
Petr Jelinek (PJMODOS)
www.parba.cz


Re: limiting connections per user/database

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> - when we read in the stats collector's stats file in a normal backend,
> there will be no pgStatDBHash entry for the backend's database.
> Therefore we'll read the beentry for the backend, and when we go to
> increment the n_backends for the corresponding dbentry, there will be no
> dbentry found (HASH_FIND at pgstat.c:2554), so n_backends won't be updated.

> - therefore we won't count the n_backends for the database correctly.

However, this could equally be fixed by replacing the hash_search call
at l. 2554 by pgstat_get_db_entry().  If your definition of "correct"
is that "a new backend can see at least one backend in its own database"
then this would be a more appropriate fix anyway, since there's a lag
between sending BESTART and seeing any result in the collector's output.

This isn't an argument against moving the responsibility for counting
n_backends into the collector, but as long as it's done in
pgstat_read_statsfile the proposed fix is pretty bogus.

            regards, tom lane

Re: limiting connections per user/database

From
Karel Zak
Date:
On Sun, 2005-06-26 at 20:52 +0200, Petr Jelínek wrote:
> Alvaro Herrera wrote:
>
> >I don't think this approach is very user-friendly.  I'd vote for the
> >catalog approach, I think.
> >
> >
> OK, I am fine with both, but catalog changes would mean more hacking of
> ALTER DATABASE and ALTER USER.

IMHO Oracle has a better solution for per-user settings:

CREATE PROFILE <name>
   SESSIONS_PER_USER <int>
   CONNECT_TIME <int>
   IDLE_TIME <int>;

ALTER USER <name> PROFILE <name>;


    karel

--
Karel Zak <zakkr@zf.jcu.cz>