Thread: Out of swap space & memory

Out of swap space & memory

From: "Kevin Bartz"
I'm de-duping a 45-million-row table (about 4 gigs) by creating a new table
with the unique rows of the first table (should be about 3 gigs). I'm on a
64-bit SuSE Linux with 2 gigs of memory and another gig of swap space. I
tracked postmaster's use of memory and swap space after running the query, and
I noticed that as soon as postmaster first uses up all available memory and
swap space, I get this:

2004-08-02 19:35:52 LOG:  checkpoint record is at 7/4AA2F590
2004-08-02 19:35:52 LOG:  redo record is at 7/4AA2F590; undo record is at
0/0; shutdown TRUE
2004-08-02 19:35:52 LOG:  next transaction ID: 740; next OID: 116212470
2004-08-02 19:35:52 LOG:  database system is ready
2004-08-02 19:38:04 LOG:  server process (PID 25302) was terminated by
signal 9
2004-08-02 19:38:04 LOG:  terminating any other active server processes
2004-08-02 19:38:04 LOG:  all server processes terminated; reinitializing
2004-08-02 19:38:04 FATAL:  the database system is starting up
2004-08-02 19:38:04 LOG:  database system was interrupted at 2004-08-02
19:35:52 PDT
2004-08-02 19:38:05 LOG:  checkpoint record is at 7/4AA2F590
2004-08-02 19:38:05 LOG:  redo record is at 7/4AA2F590; undo record is at
0/0; shutdown TRUE
2004-08-02 19:38:05 LOG:  next transaction ID: 740; next OID: 116212470
2004-08-02 19:38:05 LOG:  database system was not properly shut down;
automatic recovery in progress
2004-08-02 19:38:05 LOG:  redo starts at 7/4AA2F5D8
2004-08-02 19:38:05 LOG:  record with zero length at 7/4AA4D5C0
2004-08-02 19:38:05 LOG:  redo done at 7/4AA4B560
2004-08-02 19:38:07 LOG:  database system is ready

And the query never completes. This happens reproducibly and consistently. I
like to think that two gigs should be enough memory. What is killing the
postmaster process and how can I stop it? And, more importantly, is there
any way I can run this query? Thanks for any help you can provide,

Kevin


Re: Out of swap space & memory

From: mike g
What version of Postgresql are you running?

I believe this was a problem in 7.4.1 but fixed by 7.4.3 if I remember
correctly.

Mike

On Mon, 2004-08-02 at 22:08, Kevin Bartz wrote:
> I'm de-duping a 45-million-row table (about 4 gigs) by creating a new table
> with the unique rows of the first table (should be about 3 gigs). [...]
> What is killing the postmaster process and how can I stop it? And, more
> importantly, is there any way I can run this query?

Re: Out of swap space & memory

From: Manfred Koizar
On Mon, 2 Aug 2004 20:08:45 -0700, "Kevin Bartz"
<kbartz@loyaltymatrix.com> wrote:
>is there any way I can run this query?

What query?  You didn't show us your SQL.

Servus
 Manfred

Re: Out of swap space & memory

From: Tom Lane
"Kevin Bartz" <kbartz@loyaltymatrix.com> writes:
> I'm de-duping a 45-million-row table (about 4 gigs) by creating a new table
> with the unique rows of the first table (should be about 3 gigs). I'm on a
> 64-bit SuSE Linux with 2 gigs of memory and another gig of swap space. I
> tracked postmaster's use of memory and swap space after I run the query, and
> I noticed that as soon as postmaster first uses up all available memory and
> swap space, I get this:

> 2004-08-02 19:38:04 LOG:  server process (PID 25302) was terminated by
> signal 9

This is the infamous "out of memory kill" that is perpetrated by some
versions of Linux after the kernel realizes that it has given out memory
it does not have.  Google for "OOM kill" and you'll find info.  It's an
extremely good idea to disable this kernel bug^H^H^Hfeature, as a lot of
the time the process that gets zapped is not the one that was actually
consuming all the RAM, but some innocent bystander.  The first time the
OOM killer takes out your postmaster, your mail daemon, or some other
highly critical process, you'll wish you had turned it off.  (You turn
it off by adjusting kernel settings so that it won't give out more
memory than it has in the first place.)
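
On Linux the knob is the vm.overcommit_memory sysctl.  A sketch, assuming a
2.6-series kernel (older 2.4 kernels may not offer the strict mode):

    # /etc/sysctl.conf
    vm.overcommit_memory = 2   # strict accounting: don't promise more memory
                               # than swap plus a fraction of RAM

    # apply without a reboot
    sysctl -p

Check your distribution's kernel documentation before relying on this.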

However, that doesn't really answer your problem, which is why your
query is consuming unreasonable amounts of RAM and what you can do about
it.  What PG version is this, what is the query *exactly*, what does
EXPLAIN show for the query, and what nondefault postgresql.conf settings
are you using?
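
For reference, all of those can be pulled from a psql session -- a sketch,
with the placeholder standing in for the real statement:

    SELECT version();         -- server version
    EXPLAIN <your query>;     -- the plan, without running the query
    SHOW ALL;                 -- current settings, to compare against defaults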

            regards, tom lane

Re: Out of swap space & memory

From: "Kevin Bartz"
Unfortunately, updating to 7.4.3 did not fix the problem. But thanks for
your suggestion! Using 7.4.3 is cooler, anyway. Enjoy your weekend,

Kevin

-----Original Message-----
From: mike g
Sent: Monday, August 02, 2004 8:40 PM

What version of Postgresql are you running?

I believe this was a problem in 7.4.1 but fixed by 7.4.3 if I remember
correctly.

Mike

[...]


Re: Out of swap space & memory

From: "Kevin Bartz"
Mike, thanks so much for your reply. I'm sorry for not showing you my SQL. I
didn't show it because I couldn't manage to boil it down to something
reproducible that everyone could try. But here's what it was:

drop table octanenights;
CREATE TABLE octanenights (
    member_id varchar(100), campaign_id varchar(100), catalog_type varchar(100),
    pushed int, delivered int, clicks int, opened int, month varchar(100),
    type1 int, type2 int, type3 int, type4 int, type5 int);

copy octanenights from '/home/kevin/octanenights/proc/uberfile/uberfile1.txt' null as '';
copy octanenights from '/home/kevin/octanenights/proc/uberfile/uberfile2.txt' null as '';
copy octanenights from '/home/kevin/octanenights/proc/uberfile/uberfile3.txt' null as '';

select * from octanenights limit 10;
alter table octanenights rename to octanenights_raw;

-- de-dup the table
select member_id, campaign_id, catalog_type, pushed, delivered, clicks,
opened, month, type1, type2, type3, type4, type5
into octanenights
from octanenights_raw
group by member_id, campaign_id, catalog_type, pushed, delivered, clicks,
         opened, month, type1, type2, type3, type4, type5;

Let me tell you a little about octanenights. It's a table of about 36,000,000
rows, each describing an e-mail sent. Unfortunately, duplicate records are
scattered throughout the table, and I do not care about them. One might
suggest that I could've used uniq from the command line for this, but the
data were never sorted, so the duplicates may appear anywhere in the table.
The objective of the final statement is to de-dup the table into
octanenights, leaving the original in octanenights_raw in case I ever need
to refer back to it.

MS SQL Server, with as much RAM and less clock speed, de-dups the table in
about six minutes. The de-duped version has about 26,000,000 rows. The final
line is where Postgres gobbles up all my swap and RAM and then conks out
completely.

Am I doing something wrong? Maybe there was a better way to approach this
problem? I'd be open to suggestions of any kind, since I'm still very, very
new to the world of optimizing Postgres.

Kevin

-----Original Message-----
From: Manfred Koizar
Sent: Tuesday, August 03, 2004 3:04 AM

>is there any way I can run this query?

What query?  You didn't show us your SQL.


Re: Out of swap space & memory

From: mike g
Ok,

This is a long shot, but how are you executing your code?  Say, in a
pgAdmin III SQL window, with the statements below entered line after line?

If so, I believe it will all be treated as one transaction.  With the
default settings Postgres would have to keep track of everything done, to
be able to roll back all the changes if the transaction failed.  I believe
that would force it to keep track of all 56 million rows combined
(probably just the oid column -- I am sure the more experienced PostgreSQL
wizards can verify), and that can take a lot of resources.

If by chance you are doing it in one sweep, try executing it in separate
steps so that each one can be committed.  Hopefully you won't run out of
resources then.

Are you doing a drop/create, say, every night to update your data?  If so,
perhaps using TRUNCATE octanenights would be more efficient.

If you must drop a full table, perhaps a VACUUM should be done afterwards?
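
For example (just a sketch of the idea, reusing the paths from your message
-- each step commits on its own instead of piling into one transaction):

    TRUNCATE octanenights;   -- reclaims the space; cheaper than drop/create

    BEGIN;
    COPY octanenights FROM '/home/kevin/octanenights/proc/uberfile/uberfile1.txt' NULL AS '';
    COMMIT;

    BEGIN;
    COPY octanenights FROM '/home/kevin/octanenights/proc/uberfile/uberfile2.txt' NULL AS '';
    COMMIT;

    -- ...and likewise for uberfile3.txt and the de-dup step.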

Mike

On Fri, 2004-08-06 at 21:32, Kevin Bartz wrote:
> Mike, thanks so much for your reply. I'm sorry for not showing you my SQL. I
> didn't show it because I couldn't manage to boil it down to something
> reproducible that everyone could try. But here's what it was: [...]

Re: Out of swap space & memory

From: Lincoln Yeoh
What does EXPLAIN <your query here> say?

I had an issue[1] with 7.4.1 where it was using "HashAggregate" because it
thought everything could fit in memory, but it couldn't.

So I downgraded to 7.3.4 and it worked.

In my case Tom Lane suggested a workaround for the 7.4.1 planner
erroneously using HashAggregate:

set enable_hashagg = off

But without an EXPLAIN it's harder for people here to help you.
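
In a session that looks roughly like this (a sketch, with the long GROUP BY
query abbreviated):

    SET enable_hashagg = off;   -- lasts only for this session
    EXPLAIN SELECT ... INTO octanenights FROM octanenights_raw GROUP BY ...;

If the plan no longer shows HashAggregate, run the real statement.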

I wonder if 7.4.x could throw a warning and switch to a less
memory-intensive query plan if it discovers it is running out of memory.

Hope this helps,
Link.

[1] see thread: "postmaster growing to consume all memory"

At 07:32 PM 8/6/2004 -0700, Kevin Bartz wrote:

>Mike, thanks so much for your reply. I'm sorry for not showing you my SQL. I
>didn't show it because I couldn't manage to boil it down to something
>reproducible that everyone could try. But here's what it was: [...]


Re: Out of swap space & memory

From: Jeff
On Aug 6, 2004, at 10:32 PM, Kevin Bartz wrote:
>
> -- de-dup the table

> MS SQL Server, with as much RAM and less clock speed, de-dups the table in
> about six minutes. The de-duped version has about 26,000,000 rows. The
> final line is where Postgres gobbles up all my swap and RAM and then
> conks out completely.

What is your sort_mem set to and how much ram is on the box?

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/


Re: Out of swap space & memory

From: Pierre-Frédéric Caillaud
> catalog_type varchar(100), pushed int, delivered int, clicks int, opened
> int, month varchar(100), type1 int, type2 int, type3 int, type4 int,
> type5 int);

    You could use the TEXT type (see the PostgreSQL docs).
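
    For instance, a sketch (in PostgreSQL, text performs the same as
    varchar(n); it mainly drops the arbitrary length limit):

    CREATE TABLE octanenights (
        member_id text, campaign_id text, catalog_type text,
        pushed int, delivered int, clicks int, opened int, month text,
        type1 int, type2 int, type3 int, type4 int, type5 int);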

Re: Out of swap space & memory

From: "Kevin Bartz"
Well, I didn't set sort_mem, which means (doesn't it?) that it defaults to
512. There are 2 gigs of RAM on the box and one gig of swap space. Should I
have set it to some different value? Thanks for your reply.

Kevin

-----Original Message-----
From: Jeff
Sent: Saturday, August 07, 2004 4:31 AM

What is your sort_mem set to and how much ram is on the box?


Re: Out of swap space & memory

From: "Kevin Bartz"
Hi Lincoln! Thanks for your reply. On the problematic SELECT INTO, EXPLAIN
says:

test=#
                             QUERY PLAN
------------------------------------------------------------------
 HashAggregate  (cost=42.50..42.50 rows=1000 width=356)
   ->  Seq Scan on hp_raw  (cost=0.00..20.00 rows=1000 width=356)
(2 rows)

Correct me if I'm wrong, but I gather from this that Postgres is indeed
using hash aggregation. I just tried set enable_hashagg = off, per your
suggestion, and this time the SELECT INTO is grouping away without blowing
up. Unfortunately, there's no end in sight. I've
let it sit and crank for an hour now, and it's still going. MS SQL Server
with as much RAM and less clock speed took six minutes for this de-duping!
For comparison, I also tried some other operations, like "select count(*)
from octanenights" and creating indexes, and in all instances Postgres
finishes in about three-fourths of the time SQL Server does. So why is this
SELECT INTO taking so much more time in comparison? Is there any other
option I can set to make it move a little faster? Thanks for all your help.

Kevin

-----Original Message-----
From: Lincoln Yeoh
Sent: Friday, August 06, 2004 9:25 PM

What does EXPLAIN <your query here> say?

I had an issue[1] with 7.4.1 where it was using "HashAggregate" because it
thought everything could fit in memory, but it couldn't.

[...]


Re: Out of swap space & memory

From: Alvaro Herrera
On Mon, Aug 09, 2004 at 11:33:04AM -0700, Kevin Bartz wrote:
> Hi Lincoln! Thanks for your reply. On the problematic SELECT INTO, EXPLAIN
> says:
>
> test=#
>                              QUERY PLAN
> ------------------------------------------------------------------
>  HashAggregate  (cost=42.50..42.50 rows=1000 width=356)
>    ->  Seq Scan on hp_raw  (cost=0.00..20.00 rows=1000 width=356)
> (2 rows)

Maybe a hash aggregate is not a good idea in this case.  Try applying
ANALYZE to the table before the SELECT INTO, and EXPLAIN again.  Or
execute "SET enable_hashagg TO off" before the SELECT INTO.


Food for thought: do we need to make the HashAggregate code able to save
to disk if the space estimate turns out to be wrong?

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Uno combate cuando es necesario... ¡no cuando está de humor!
El humor es para el ganado, o para hacer el amor, o para tocar el
baliset.  No para combatir."  (Gurney Halleck)


Re: Out of swap space & memory

From: Tom Lane
"Kevin Bartz" <kbartz@loyaltymatrix.com> writes:
> Hi Lincoln! Thanks for your reply. On the problematic SELECT INTO, EXPLAIN
> says:

> test=#
>                              QUERY PLAN
> ------------------------------------------------------------------
>  HashAggregate  (cost=42.50..42.50 rows=1000 width=356)
>    ->  Seq Scan on hp_raw  (cost=0.00..20.00 rows=1000 width=356)
> (2 rows)

It's fairly obvious that you have never vacuumed or analyzed hp_raw,
since those numbers are the fallback defaults when a table is completely
unknown to the planner :-(

> Correct me if I'm wrong, but I'm guessing that what I should infer from this
> is that Postgres is indeed using hash aggregation. I just tried set
> enable_hashagg = off, per your suggestion, and the SELECT INTO is grouping
> away without blowup this time. Unfortunately, there's no end in sight. I've
> let it sit and crank for an hour now, and it's still going.

If you are running with the default value of sort_mem, it wouldn't be
surprising for a sort-based aggregation to take a while :-(.  I would
suggest cranking sort_mem up to something large (several hundred meg,
maybe) for this specific operation.
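
For example, a sketch (sort_mem is measured in kilobytes, and SET only
affects the current session):

    SET sort_mem = 262144;   -- about 256 MB, for this session only
    SELECT ... INTO octanenights FROM octanenights_raw GROUP BY ...;
    RESET sort_mem;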

            regards, tom lane