Please remove the domains *@delinvest.com and *@lynch-mayer from your
mailing lists. You are sending multiple emails to invalid addresses on
our servers.
Thank you,
Postmaster
Delaware Investments
postmaster@delinvest.com
-----Original Message-----
From: Postmaster
Sent: Thursday, July 01, 1999 1:34 AM
To: Postmaster
Subject: Notification: Inbound Mail Failure
The following recipients did not receive the attached mail. A NDR was not
sent to the originator for the following recipients for one of the following
reasons:
* The Delivery Status Notification options did not request failure
notification, or requested no notification.
* The message was of precedence bulk.
NDR reasons are listed with each recipient, along with the notification
requested for that recipient, or the precedence.
<listserv-postgres@delinvest.com> listserv-postgres@delinvest.com
MSEXCH:IMS:Delaware Group:PHL-1818:HOOPER 0 (000C05A6) Unknown Recipient
Precedence: bulk
The message that caused this notification was:
<<Re: [BUGS] General Bug Report: Files greater than 1 GB are created while
sorting>>
Nice analysis of the problem. Probably the psort code is not using those
routines because we never expected the sorts to get over 2 gigs.
> The backend is creating files bigger than 1 GB when sorting
> and it will break when the file gets to 2 GB.
>
> Here are the biggest files:
>
> 1049604 -rw------- 1 postgres postgres 1073741824 Jun 30 19:10 bigtable_pkey
> 1049604 -rw------- 1 postgres postgres 1073741824 Jun 30 19:36 pg_temp.2446.0
> 1049604 -rw------- 1 postgres postgres 1073741824 Jun 30 19:55 bigtable
> 1122136 -rw------- 1 postgres postgres 1147937412 Jun 30 21:39 pg_temp2769.3
> 1148484 -rw------- 1 postgres postgres 1174890288 Jun 30 21:26 pg_temp2769.4
>
> I also have some smaller ".1" files that are the rest of the above
> files along with everything else you might expect to find in a
> PG database directory. It's those two last big ones that are
> troublesome.
>
> Tables and indices are segmenting just fine at 1 GB, but
> some sort files just keep growing. I did actually get a
> back-end error one time when one exceeded 2 GB.
>
> Thanks,
> Doug
>
>
> --------------------------------------------------------------------------
>
> Test Case:
> ----------
> Just do:
> mydb=> select * into bigtable2 from bigtable order by custno;
>
> You might want to decrease RELSEGSZ to see it faster.
> Mail me back if you can't reproduce it.
> (and please make the bug report form boxes bigger!)
>
> --------------------------------------------------------------------------
>
> Solution:
> ---------
> Something is not using the Magnetic Disk Storage Manager,
> but is writing a temp file out on its own during the sort.
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026