Thread: could not write to hash-join temporary file: No space left on device
My database totals around 100 GB of data, and I have 50 GB free on the volume, yet I get this error message. It was a simple join. Is there a way to see more information on why I got this error?
--
Moshe Jacobson
Nead Werx, Inc. | Manager of Systems Engineering
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
moshe@neadwerx.com | www.neadwerx.com
"Quality is not an act, it is a habit." -- Aristotle
On 14/06/13 18:25, Moshe Jacobson wrote:
> My database totals around 100 GB of data, and I have 50 GB free on
> the volume, yet I get this error message. It was a simple join. Is there
> a way to see more information on why I got this error?

Hash joins can use a lot of space in temp files. Try running the query and, at the same time, monitoring disk usage in the cluster's temp directory ($PGDATA/base/pgsql_tmp/).

--
Martín Marqués
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
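A sketch of how to watch this from SQL instead of the filesystem, assuming PostgreSQL 9.2 or later (where pg_stat_database grew temp-file counters) and superuser rights to change log_temp_files:

    -- Cumulative temp-file usage per database (columns exist since 9.2):
    SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written
    FROM pg_stat_database
    WHERE datname = current_database();

    -- Log the creation of any temp file larger than 10 MB (value is in kB):
    SET log_temp_files = 10240;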
On 14/06/13 18:25, Moshe Jacobson wrote:
> My database totals around 100 GB of data, and I have 50 GB free on
> the volume, yet I get this error message. It was a simple join. Is there
> a way to see more information on why I got this error?

Forgot to say: a low value of work_mem makes it more likely that PostgreSQL will use temp files when sorts or hashes need a good amount of memory. You could try different values of work_mem to see what happens (watch out for large values of work_mem, since the limit applies per sort or hash operation), or rethink your query altogether.

Cheers,

--
Martín Marqués
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
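A minimal sketch of trying this out; the table and column names are placeholders for your own schema, and note that EXPLAIN ANALYZE actually executes the query:

    -- Session-local; work_mem applies per sort/hash node, so raise it with care:
    SET work_mem = '256MB';

    -- In the plan output, a Hash node showing "Batches: 1" fit in memory;
    -- a "Batches" value greater than 1 means it spilled to temp files:
    EXPLAIN ANALYZE
    SELECT a.id, b.payload
    FROM big_table a
    JOIN other_table b ON b.big_table_id = a.id;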
On Fri, Jun 14, 2013 at 2:25 PM, Moshe Jacobson <moshe@neadwerx.com> wrote:
> My database totals around 100 GB of data, and I have 50 GB free on the
> volume, yet I get this error message. It was a simple join. Is there a
> way to see more information on why I got this error?
How big are the particular tables that participate in the query? How many of the columns participate in the query?
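A quick way to gather those numbers (a sketch; 'big_table' and 'other_table' are placeholders for the tables in the join):

    -- On-disk size of each table, including its indexes and TOAST data:
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relname IN ('big_table', 'other_table');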
It would also be useful to see the same information for this query as is recommended here:
Cheers,
Jeff