Thread: I: Migrating a very large db

I: Migrating a very large db

From
"Samuele Brignoli"
Date:
Hi all,
I'm trying to migrate a very large db with PGAdmin II's migration
plug-in. It all goes well until one of the last tables, where it fails
with an out-of-memory error. Dave Page told me that this is a server error:


An error occured at: 4/4/2002 11:18:45 AM:
-2147467259: Error while executing the query;
ERROR:  LockAcquire: holder table out of memory

Rolling back... Done.


Can someone help me?

    Bye, Samuele.


Re: I: Migrating a very large db

From
Tom Lane
Date:
"Samuele Brignoli" <samuele.brignoli@darpha.com> writes:
> ERROR:  LockAcquire: holder table out of memory

You could probably work around that by increasing the postmaster
configuration parameter max_locks_per_transaction (note you will need to
restart postmaster to make it take effect).  However, I'm curious to
know what you were doing that made it fail; I've not seen very many
reports of people running into this limit.  How many tables are in this
database?  Does PGAdmin try to restore them all in one transaction?
Also, do you have problems pg_dumping the database?

            regards, tom lane
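For reference, a minimal sketch of the workaround Tom describes (the parameter name is real; 256 is an arbitrary example value, not a recommendation):

```
# In postgresql.conf; the postmaster must be restarted for the new
# value to take effect:
max_locks_per_transaction = 256
```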

R: I: Migrating a very large db

From
"Samuele Brignoli"
Date:
Hi Tom,

thanks for answering me. This is in fact a bit strange. I don't know
PostgreSQL's locking mechanism well, but it seems to be what is causing
the problem. I was migrating a very large db from MS SQL Server with the
pgadmin porting tool. The failure occurs when, during the migration, I
click on some areas of pgadmin, trying to refresh the content of the db
or view the data in a table. During the migration I was also doing some
development, and twice I tried to work on the migration and the
development in parallel. I noticed that this LockAcquire is a routine
for deadlock avoidance. I also did a dump of the db a month ago, and
everything went perfectly.

Anyway, my db is, I think, very large: I have 700 tables, and some of
them have more than 10000 records. Now I will try to migrate the db
without doing anything else, and with max_locks_per_transaction
increased.

    regards, Samuele Brignoli.




Re: R: I: Migrating a very large db

From
Tom Lane
Date:
"Samuele Brignoli" <samuele.brignoli@darpha.com> writes:
> Anyway, my db is, I think, very large: I have 700 tables, and some of
> them have more than 10000 records.

Hmm.  The number of records per table is not relevant here.  The number
of tables doesn't seem that large either.  We've heard one or two
reports of this error message from people who were trying to dump
databases containing thousands of "large objects", back in the days when
each large object was a separate table and so had its own lock.  But I'm
surprised to hear of it at 700 tables.  Would you be willing to send me
a schema dump (pg_dump -s, no data) of your database for testing?

            regards, tom lane

explain and explain analyze with functions

From
elein
Date:
EXPLAIN and EXPLAIN ANALYZE do not seem to count any queries run
inside called functions.

select la, di, da from mytab where la=fala(di,da);

gets a lower cost estimate from EXPLAIN, and a lower reported runtime
from EXPLAIN ANALYZE (19 ms vs. 55 ms), than the query inside fala()
does by itself (or select fala()). The function fala() is just a SQL
function.

Is this a bug? Is this a feature request? Is there any way, besides
doing the arithmetic myself, to get this information?

I understand the function runs in a different context, but shouldn't
EXPLAIN traverse the execution tree and sum up the costs?
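For what it's worth, "doing the arithmetic" amounts to timing the pieces separately; a sketch (the table, columns, and sample arguments mirror the hypothetical query above):

```sql
-- Time the outer query; fala() is treated as a black box, so its
-- internal plan and cost do not appear in the output.
EXPLAIN ANALYZE SELECT la, di, da FROM mytab WHERE la = fala(di, da);

-- Time the function standalone with sample arguments, then add the
-- reported runtimes by hand.
EXPLAIN ANALYZE SELECT fala(1, 2);
```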

elein@nextbus.com






Re: explain and explain analyze with functions

From
Tom Lane
Date:
elein <elein@nextbus.com> writes:
> Explain and explain analyze do not seem to count any
> subqueries in functions called.

Functions are considered black boxes.  Unless you have an unpublished
solution to the halting problem, I doubt we can do much better than that
for PL-language functions.

I believe there was once code in Postgres (Berkeley era) to try to be
intelligent about the costs of SQL-language functions.  It's been dead
code for a long time though, and I'm not currently seeing a good
argument for reviving it.  People don't seem to use SQL-language
functions all that much.

            regards, tom lane