Thread: Having a problem retrieving a huge(?) table

Having a problem retrieving a huge(?) table

From
Ana Roizen
Date:
I have a table with approximately 30 columns and 45,000 tuples.
When I execute (from psql) "select * from mytable" it shows me one of
the following messages:
1) Can't allocate memory
or
2) pqReadData() - backend closed the channel unexpectedly.
This probably means the backend terminated abnormally before or while
processing the request.
We have lost the connection to the backend, so further processing is
impossible.  Terminating.
I also receive message 2) when querying:
"select oid from mytable where oid >= all(select oid from mytable)"
Please, I appreciate your help.  Ana Roizen.
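
One way to sidestep both failures is to read the table through a cursor,
fetching it in slices rather than materializing the whole result in the
client, and to replace the ALL subquery with max(). A minimal sketch,
assuming the table is named mytable as above (the cursor name and fetch
size are arbitrary):

    BEGIN;
    DECLARE bigcur CURSOR FOR SELECT * FROM mytable;
    FETCH FORWARD 1000 IN bigcur;   -- repeat until no more rows come back
    CLOSE bigcur;
    END;

    -- same result as "oid >= all(...)", without re-scanning the table per row:
    SELECT max(oid) FROM mytable;

Each FETCH returns only the requested slice, so memory use stays bounded
on the client side.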



Slow Inserts Leads To Unable To Dump

From
"Frank Morton"
Date:
Remember the thread last week about slow inserts? I still
have more to do, but basically I ended up waiting about 7
DAYS to insert 150,000 rows into a single table.

Now that that is done, I thought I should dump the database
before doing any more to save those 7 days if I mess up.
After 18 HOURS of processing with "pg_dump -d", it ran
out of memory and quit. Are there tools to do this differently,
that is, not requiring the system to do a SELECT on the
whole table first? Just dump it? (I did try pg_dump without
-d but stopped it after an hour or so)

I am getting concerned that I can't really use PostgreSQL
for large databases. Sure, some have built large databases,
but have any of you ever had to dump and restore them?
What happens when an update requires that? I think some
have reported 1,000,000-row databases. By my calculation,
if you could get it to dump, it would take my database 46 DAYS
to reload if I get to that size. Maybe "copy" will help, but now
I'm more concerned about being totally unable to "dump."

I really like PostgreSQL and think everyone working on it is doing
great work. But I need to hear some comments from all of you
confirming what I am seeing or telling me I am missing something.

Thanks to all, Frank


Re: [SQL] Slow Inserts Leads To Unable To Dump

From
Tom Lane
Date:
"Frank Morton" <fmorton@base2inc.com> writes:
> Now that that is done, I thought I should dump the database
> before doing any more to save those 7 days if I mess up.
> After processing 18 HOURS doing the "pg_dump -d", it ran
> out of memory and quit. Are there tools to do this differently,

Yes: pg_dump without -d.  COPY should work fine.
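
Concretely, a COPY-based dump and a test reload might look something like
this (the database names here are only placeholders):

    pg_dump mydb > mydb.out       # default mode: table data comes out as COPY blocks
    createdb mydb_restore         # a fresh database to reload into (name is arbitrary)
    psql mydb_restore < mydb.out  # COPY streams the rows in bulk, not row-by-row INSERTs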

This does suggest an item for the to-do list: making pg_dump use
DECLARE CURSOR and FETCH rather than a plain SELECT to pull the
contents of tables in -d mode.  However, if your table is so large
that it doesn't fit in memory inside pg_dump, then I'd have to say
that you wouldn't want to reload it from a series of INSERT statements
anyway.  So I'm not very excited about this...

It does seem that your installation is remarkably slow; other people
are working with tables exceeding 2Gb and don't seem unhappy about
the speed.  Perhaps you need to check to see if the thing is thrashing;
how much real memory do you have, and what -B setting are you using?
        regards, tom lane
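
For reference, -B is the postmaster switch that sets how many shared disk
buffers the server allocates (one 8kB block apiece in a default build). A
start line with a larger pool might look like the following sketch; the
buffer count and data directory are only examples:

    # 256 buffers of 8kB each = 2MB of shared buffer cache; adjust to your RAM
    postmaster -i -B 256 -D /usr/local/pgsql/data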


Re: [SQL] Slow Inserts Leads To Unable To Dump

From
Engard Ferenc
Date:
On Mon, 10 May 1999, Tom Lane wrote:

>"Frank Morton" <fmorton@base2inc.com> writes:
>> Now that that is done, I thought I should dump the database
>> before doing any more to save those 7 days if I mess up.
>> After processing 18 HOURS doing the "pg_dump -d", it ran
>> out of memory and quit. Are there tools to do this differently,
[...]
>It does seem that your installation is remarkably slow; other people
>are working with tables exceeding 2Gb and don't seem unhappy about

Try 'copy': for ca. 170,000 records the import took me about 5-15
minutes (I don't remember exactly) on a P100 with 16MB of RAM. It
should not be a big deal to convert the INSERT statements into
tab-separated text.
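
A minimal sketch of that route, with the file path and table name chosen
purely for illustration:

    -- COPY's default field delimiter is a tab; the path is read by the backend,
    -- so it must be visible to the server process
    COPY mytable FROM '/tmp/mytable.tab';

    -- or, if your psql has \copy, read the file on the client side instead:
    \copy mytable from '/tmp/mytable.tab'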

Circum
__  @
/  \    _   _                                           Engard Ferenc
l    | ( \ /  | | (\/)                      mailto:s-fery@kkt.sote.hu
\__/ | |   \_ \_/ I  I                    http://pons.sote.hu/~s-fery