This is under Windows 2000 SP4 using the 8.0 beta1
installer package. If I create the following table:
CREATE TABLE test1
(
    idnbr int4 NOT NULL,
    num1 numeric(12,2),
    num2 numeric(12,2),
    text1 varchar(600),
    CONSTRAINT test1_idnbr_key UNIQUE (idnbr)
)
WITH OIDS;
and then use the following Perl script to generate a
test-data insert script:
#!/usr/bin/perl -w
use strict;

my $ct = 0;
open(FILE, ">inserts.sql") or die "can't open inserts.sql: $!";

# Empty the table, then load all 100,000 rows in one transaction.
print FILE "truncate test1;\n";
print FILE "begin;\n";
while ($ct < 100000)
{
    print FILE "insert into test1(idnbr, num1, num2, text1) values (";
    print FILE $ct.",";
    # Random values for the two numeric(12,2) columns.
    print FILE int(rand(100)).".".int(rand(100)).",";
    print FILE int(rand(100)).".".int(rand(100)).",";
    print FILE "'";
    # 20 random integers (0-998) concatenated for text1.
    my $width = 20;
    while ($width > 0)
    {
        print FILE int(rand(999));
        $width--;
    }
    print FILE "');"."\n";
    $ct++;
}
print FILE "commit;\n";
close(FILE);
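To help narrow down whether the growth is tied to the size of the single
large transaction, it may be worth generating a variant that commits every
10,000 rows instead. Here is a sketch of such a generator in Python rather
than Perl (the file name inserts_batched.sql and the batch size are
arbitrary choices of mine):

```python
import random

ROWS = 100_000
BATCH = 10_000  # arbitrary: commit after every BATCH inserts

with open("inserts_batched.sql", "w") as f:
    f.write("truncate test1;\n")
    f.write("begin;\n")
    for ct in range(ROWS):
        # Random values for the two numeric(12,2) columns.
        num1 = f"{random.randrange(100)}.{random.randrange(100)}"
        num2 = f"{random.randrange(100)}.{random.randrange(100)}"
        # 20 random integers (0-998) concatenated for text1.
        text1 = "".join(str(random.randrange(999)) for _ in range(20))
        f.write(f"insert into test1(idnbr, num1, num2, text1) "
                f"values ({ct},{num1},{num2},'{text1}');\n")
        # Close the current transaction and open a new one every BATCH rows.
        if (ct + 1) % BATCH == 0 and ct + 1 < ROWS:
            f.write("commit;\nbegin;\n")
    f.write("commit;\n")
```

If the working set still climbs to the same level with ten smaller
transactions, the transaction size itself is probably not the cause.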
When I execute this script inside a fresh psql session
via "\i inserts.sql", I can watch the working set size
of the backend climb at roughly 500 KB to 1 MB per
second until the script finishes. By that point, the
backend servicing the psql session has grown to a
working set size of nearly 100 MB. Executing the script
a second time, or executing other commands in the same
session after the initial "bloating" period, causes
virtually no further change in the working set size of
the backend.
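As a point of comparison, loading the same data with COPY avoids the
100,000 separate INSERT statements entirely; if the working set stays flat
under COPY, the growth would appear to be per-statement rather than
per-row. A minimal Python sketch that writes an equivalent COPY script
(the file name copy_test1.sql is my own):

```python
import random

with open("copy_test1.sql", "w") as f:
    f.write("truncate test1;\n")
    # COPY ... FROM stdin reads tab-separated rows until a line with \.
    f.write("COPY test1 (idnbr, num1, num2, text1) FROM stdin;\n")
    for ct in range(100_000):
        num1 = f"{random.randrange(100)}.{random.randrange(100)}"
        num2 = f"{random.randrange(100)}.{random.randrange(100)}"
        text1 = "".join(str(random.randrange(999)) for _ in range(20))
        f.write(f"{ct}\t{num1}\t{num2}\t{text1}\n")
    f.write("\\.\n")
```

The resulting file loads the same way, via "\i copy_test1.sql" in psql,
since psql handles inline COPY data in script files.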
I can replicate the behavior from both the native Win32
psql client shipped with 8.0 beta1 and the Cygwin 7.4.x
psql client.
Regards,
Shelby Cain