This message was sent from Geocrawler.com by "Tim Perdue" <tim@perdue.net>
Be sure to reply to that address.
> >It uses over 1GB of disk space to do that sort,
> >and it would have used a lot more if I hadn't run out.
> >
> >It then won't fail gracefully; instead it just hangs
> >and leaves temp files completely filling up the hard drive.
> Because maybe you're doing a really dumb join before you sort?
> SQL is full of such "gotchas".
No, sorry.
select distinct serial into serial_good from serial_half;
serial_half is a 1-column list of 10-digit
numbers. I'm doing a select distinct because I
believe there may be duplicates in that column.
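For anyone following along, here's a minimal, self-contained sketch of what that query does, using Python's sqlite3 as a stand-in for Postgres (SQLite doesn't support Postgres's `SELECT ... INTO new_table` syntax, so the equivalent `CREATE TABLE ... AS` form is used; the table and column names match the ones above, the sample serials are made up):

```python
import sqlite3

# SQLite stand-in for: select distinct serial into serial_good from serial_half;
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE serial_half (serial TEXT)")

# Three rows, one duplicate, mimicking a serial-number list with dupes.
con.executemany(
    "INSERT INTO serial_half VALUES (?)",
    [("1234567890",), ("1234567890",), ("0987654321",)],
)

# DISTINCT collapses the duplicates into serial_good.
con.execute("CREATE TABLE serial_good AS SELECT DISTINCT serial FROM serial_half")
rows = con.execute("SELECT COUNT(*) FROM serial_good").fetchone()[0]
print(rows)  # duplicates collapsed: 2 rows remain
```

The catch in Postgres of that era is that DISTINCT is implemented by sorting the whole column first, which is what generated the temp-sort files in question.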
The misunderstanding on my end came because
serial_half was a 60MB text file, but once it was
loaded into postgres, the table grew to 345MB (6.8
million rows carry a lot of per-row overhead, apparently).
So the temp-sort space for a 345MB table could easily
surpass the 1GB I had free on my hard disk, although
how a database can take a 60MB text file and turn it
into > 1GB is beyond me.
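The blow-up is less mysterious once you divide it out. A back-of-the-envelope check (the ~40-byte per-tuple header is my assumption about 6.x-era Postgres on-disk row overhead; exact figures vary by version):

```python
# Why does a 60 MB text file become a ~345 MB table?
# Divide both sizes by the row count and compare bytes per row.
ROWS = 6_800_000
TEXT_FILE_MB = 60
TABLE_MB = 345

bytes_per_row_text = TEXT_FILE_MB * 1024 * 1024 / ROWS   # ~9 B: 10 digits + newline-ish
bytes_per_row_table = TABLE_MB * 1024 * 1024 / ROWS      # ~53 B on disk in the table
overhead = bytes_per_row_table - bytes_per_row_text      # ~44 B/row of tuple overhead

print(f"text: {bytes_per_row_text:.1f} B/row, "
      f"table: {bytes_per_row_table:.1f} B/row, "
      f"overhead: {overhead:.1f} B/row")
```

So roughly 44 bytes of fixed per-row overhead on a 9-byte payload gives the ~6x growth, and the external merge sort then needs temp space on the order of the full 345MB table (or more, across merge passes), which is how 60MB of input can chew through 1GB of disk.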
> And, of course, you've posed your question stupidly - "my query's
> slow, why is Postgres so horrible?" - and you haven't bothered
> posting your query.
None of that was ever stated.
Actually, what was stated is that it's absurd to
fill up a hard disk and then hang instead of
bowing out gracefully, forcing the user to
manually delete the temp_sort files and kill -9
postgres.
You can't argue with that part.
And it happens on v6.4, v6.4.2, and v6.5.2, on Red Hat
6.1 and LinuxPPC.
Yes, my post was rather harsh - I posted it while I
was angry, and that was a mistake. I hit this
same problem in March when trying to sort a 2.5GB
file with 9GB free.
I use postgres on every project I work on,
including this site, Geocrawler.com, and my
PHPBuilder.com site, because it's a decent and
free database and it will scale beyond 2GB, unlike
MySQL.
Tim
Geocrawler.com - The Knowledge Archive