Re: very slow after a while... - Mailing list pgsql-general

From Costin Manda
Subject Re: very slow after a while...
Date
Msg-id 1469.193.226.119.24.1112787238.squirrel@193.226.119.24
In response to Re: very slow after a while...  (Richard Huxton <dev@archonet.com>)
Responses Re: very slow after a while...  ("Costin Manda" <siderite@madnet.ro>)
List pgsql-general
> Some more info please:
> 1. This is this one INSERT statement per transaction, yes? If that
> fails, you do an UPDATE

  correct.

> 2. Are there any foreign-keys the insert will be checking?
> 3. What indexes are there on the main table/foreign-key-related tables?

This is the table; the only constraint checked at insert time is that logid
must be unique.

           Table "public.pgconnectionlog"
     Column     |         Type          | Modifiers
----------------+-----------------------+-----------
 logid          | integer               | not null
 username       | character varying(20) |
 logtime        | integer               |
 connecttime    | integer               |
 disconnecttime | integer               |
 usedcredit     | double precision      |
 usedtime       | integer               |
 phonenum       | character varying(30) |
 prephonenum    | character varying(20) |
 pricelistname  | character varying(30) |
 precode        | character varying(20) |
 effectivetime  | integer               |
 callerid       | character varying(30) |
 serialnumber   | character varying(30) |
 prefix         | character varying(20) |
 tara           | character varying     |
Indexes:
    "pgconnectionlog_pkey" PRIMARY KEY, btree (logid)
    "connecttime_index" btree (connecttime)
    "disconnecttime_index" btree (disconnecttime)
    "logtime_index" btree (logtime)
    "prefix_index" btree (prefix)
    "tara_index" btree (tara)
    "username_index" btree (username)
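
For clarity, the insert-then-update pattern from the script looks roughly like
this (column list shortened and values made up for illustration; only logid and
the primary key matter here):

```sql
BEGIN;

-- Try the plain insert first; it fails on pgconnectionlog_pkey
-- if this logid was already logged.
INSERT INTO pgconnectionlog (logid, username, logtime, usedcredit)
VALUES (12345, 'someuser', 1112787238, 0.5);

-- On a duplicate-key error the script rolls back and
-- updates the existing row instead:
UPDATE pgconnectionlog
SET username = 'someuser', logtime = 1112787238, usedcredit = 0.5
WHERE logid = 12345;

COMMIT;
```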


> Whatever the answers to these questions, perhaps look into loading your
> data into a temporary table, inserting any rows without matching primary
> keys and then deleting those and updating what's left.

  You think this will be faster? It does make sense. Anyway, the problem
is not optimising the script, it is the speed change, which is dramatic I
would say.
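
If I understand the suggestion correctly, it would look something like the
sketch below: load the batch into a staging table, update the rows that
already exist, then insert the rest as one set-based statement. (The staging
table name and the COPY path are placeholders; the updated columns are just a
sample of the full list.)

```sql
-- Staging table with the same columns as the main table:
CREATE TEMP TABLE staging (LIKE pgconnectionlog);

-- Bulk-load the batch, e.g.:
-- COPY staging FROM '/path/to/batch.dat';

-- Update rows whose logid already exists in the main table:
UPDATE pgconnectionlog p
SET username   = s.username,
    usedcredit = s.usedcredit,
    usedtime   = s.usedtime
FROM staging s
WHERE p.logid = s.logid;

-- Insert the rows that were not matched above:
INSERT INTO pgconnectionlog
SELECT s.*
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM pgconnectionlog p WHERE p.logid = s.logid
);
```

Two statements instead of one round-trip per row, so each logid is checked
against pgconnectionlog_pkey only once per batch.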


  It is possible the problem doesn't come from this script, but from
others. The question is why does the database slow down to such a
degree? I repeat: dumping all data into a file, recreating the data
directory and reloading the data results in almost instantaneous inserts
and updates.


-------------------------
E-Mail powered by MadNet.
http://www.madnet.ro/

