I have a small test program (using libpq) that inserts a lot of data into the
database. Each command inserts a small large object (about 5 KB) into the
database and inserts one row into a table that references the large object's
OID. I repeat this 100,000 times. Each insert runs in its own transaction
(BEGIN -> insert large object, insert row -> COMMIT ...). I also measure the
time taken for every 100 inserts.
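Each iteration of the loop looks roughly like this (a simplified sketch; the
table name "blobs" and the omitted error handling are just for illustration):

    /* Sketch of one test run: 100,000 small large objects, one row each.
     * Assumes a table like: CREATE TABLE blobs (id serial, loid oid). */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>   /* INV_WRITE */

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        char data[5 * 1024];               /* ~5 KB payload */
        memset(data, 'x', sizeof(data));

        for (int i = 0; i < 100000; i++) {
            PQclear(PQexec(conn, "BEGIN"));

            /* create the large object and write the payload */
            Oid loid = lo_creat(conn, INV_WRITE);
            int fd = lo_open(conn, loid, INV_WRITE);
            lo_write(conn, fd, data, sizeof(data));
            lo_close(conn, fd);

            /* insert a row referencing the large object's OID */
            char sql[80];
            snprintf(sql, sizeof(sql),
                     "INSERT INTO blobs (loid) VALUES (%u)", (unsigned) loid);
            PQclear(PQexec(conn, sql));

            PQclear(PQexec(conn, "COMMIT"));
        }

        PQfinish(conn);
        return 0;
    }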
The performance is OK and stays constant over the whole run. But I see the
following effect: from time to time a short interruption occurs (my test
program stands still for a moment) and then it continues.
Does anyone have an idea what might cause these pauses? Is it due to the
database's caching mechanisms?
Another question concerns reading back the written data. When the test
finished, I used psql to check the written data. I ran some queries searching
for certain large objects in pg_largeobject (... where loid = XX). These
queries took a very long time (about 5 seconds or more). After running VACUUM
on the database, the queries became fast. Can anyone explain this? Is the
index on pg_largeobject built by running VACUUM?
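For concreteness, the psql session went roughly like this (the loid value is
only an example):

    SELECT * FROM pg_largeobject WHERE loid = 12345;  -- took ~5 seconds
    VACUUM;
    SELECT * FROM pg_largeobject WHERE loid = 12345;  -- now fast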
Thanks, Andreas.