Hi.
I am in the middle of moving a product from MySQL to Postgres. One of
the tables is relatively big, with 100M+ rows and growing, each of
which has a column that usually contains between 1 and 500k of data
(the 'MYD' file is currently 94G).
Most of the software that interacts with this database is written in
Perl. The machine has 2G of memory, and a lot of the time I need to
process result sets that are bigger than that. Under normal
circumstances, using the Perl DBI, you have something like
my $sth = $dbh->prepare(qq{SOME_QUERY});
$sth->execute;
while (my $href = $sth->fetchrow_hashref) {
    # do stuff
}
right? The Perl DBI tries to stuff the entire result set into memory
and dies if it can't. The fetchrow_hashref calls on $sth then yield
references to hashes that represent the rows, but they don't really
'fetch' at all; they're just shifting references to data that has
already been fetched. This is my understanding of things; feel free
to correct me anywhere.
So with MySQL, I can just say $dbh->{mysql_use_result} = 1, and then
it switches so that the fetchrow_hashref calls actually fetch from
the database, and I only have one row in memory at a time, unless I
copy the reference and let it wander off somewhere else, or
something.
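To make that concrete, the MySQL-side pattern I'm using looks roughly
like this (simplified, with made-up table and column names and
placeholder connection details):

use strict;
use warnings;
use DBI;

# DSN and credentials are placeholders.
my $dbh = DBI->connect('DBI:mysql:database=mydb;host=localhost',
                       'user', 'password', { RaiseError => 1 });

# Stream rows (mysql_use_result) instead of buffering the whole
# result set on the client (mysql_store_result, the default).
$dbh->{mysql_use_result} = 1;

my $sth = $dbh->prepare(qq{SELECT id, payload FROM big_table});
$sth->execute;
while (my $href = $sth->fetchrow_hashref) {
    # only the current row is held in memory here, as long as I
    # don't keep the reference around
}
$sth->finish;
$dbh->disconnect;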
So all I'm really asking is: how does Postgres approach the use
result/store result issue? Can I easily process result sets that are
larger than memory? And if it handles this similarly to MySQL, does
it also cause the same table locking behaviour?
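In case it helps to show what I'm imagining, here's the kind of
cursor-based loop I'm guessing I'd need with DBD::Pg; the cursor,
table, and column names are made up, and I could be completely off
base about this being the right approach:

use strict;
use warnings;
use DBI;

# DSN and credentials are placeholders.
my $dbh = DBI->connect('DBI:Pg:dbname=mydb;host=localhost',
                       'user', 'password', { RaiseError => 1 });

# Cursors only live inside a transaction.
$dbh->begin_work;

$dbh->do(qq{DECLARE big_cur CURSOR FOR SELECT id, payload FROM big_table});

my $fetch = $dbh->prepare(qq{FETCH 1000 FROM big_cur});
while (1) {
    $fetch->execute;
    my $got = 0;
    while (my $href = $fetch->fetchrow_hashref) {
        $got++;
        # do stuff, one batch of up to 1000 rows in memory at a time
    }
    last unless $got;    # FETCH returned nothing, so we're done
}

$dbh->do(qq{CLOSE big_cur});
$dbh->commit;
$dbh->disconnect;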
Thanks,
Neal Clark