Thread: Mount Point /datastorage is 93 percent full

Mount Point /datastorage is 93 percent full

From: Satyamnaidu Gokeda

Hi Team,


We are getting the following error while running this query. As checked

org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [53400]: ERROR: temporary file size exceeds temp_file_limit (52428800kB)
    at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:134)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:467)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:407)
    at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:146)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:405)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:849)

Thanks & Regards,
Satyam Naidu

Re: Mount Point /datastorage is 93 percent full

From: Julien Rouhaud

Hi,

On Wed, Aug 03, 2022 at 02:32:05PM +0530, Satyamnaidu Gokeda wrote:
>
> We are getting the following error while running this query. As checked

You probably forgot to copy the query here.

> org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [53400]: ERROR:
> temporary file size exceeds temp_file_limit (52428800kB)

This error is raised because the query used more than temp_file_limit (see
https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-DISK).
Your options are to raise that parameter (and maybe increase the
underlying disk given the message subject) or find a way to consume less disk
in temporary files.  It's hard to know without knowing the query and the
underlying data, but maybe rewriting the query or splitting it into multiple
queries over a temporary table would work.
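
For reference, a minimal sketch of how that parameter can be inspected and raised (this assumes superuser access; the 100GB value is only an illustration, not a recommendation):

    -- current limit; the unit is kB and -1 means "no limit"
    SHOW temp_file_limit;

    -- raise it cluster-wide, then reload the configuration
    ALTER SYSTEM SET temp_file_limit = '100GB';
    SELECT pg_reload_conf();

    -- or only for the current (superuser) session
    SET temp_file_limit = '100GB';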



Fwd: Mount Point /datastorage is 93 percent full

From: Satyamnaidu Gokeda

Hi Team,

We are getting the following error while running this query. As checked, the query is fine.
Please guide us on this.

Query:

with recursive ctee as (
    select *, 1 as level
    from kms_chunked_content
    where processid = '6598'
      and nodeid = 1
    union all
    select e.*, c.level + 1
    from ctee c
    join kms_chunked_content e
      on e.parentnodeid = cast(c.nodeid as text)
     and e.is_active = 'Y'
     and e.processid = '6598'
)
select *
from ctee
order by position;

Exception:

org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [53400]: ERROR: temporary file size exceeds temp_file_limit (52428800kB)
    at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:134)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:467)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:407)
    at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:146)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:405)
    at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:849)

Note: We have set temp_file_limit = 50GB, but the query still fails with this error.
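
(For reference, one way to see how much temporary file space the query actually consumes before hitting the limit is to log temporary files and watch the per-database counters. This is only a sketch, assuming superuser access; the setting value is illustrative.)

    -- log every temporary file a backend creates, together with its size
    SET log_temp_files = 0;

    -- cumulative temporary file usage for the current database
    SELECT datname, temp_files, temp_bytes
    FROM pg_stat_database
    WHERE datname = current_database();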




Thanks & Regards,
Satyam Naidu

On Wed, 3 Aug 2022 at 14:41, Julien Rouhaud <rjuju123@gmail.com> wrote:
Hi,

On Wed, Aug 03, 2022 at 02:32:05PM +0530, Satyamnaidu Gokeda wrote:
>
> We are getting the following error while running this query. As checked

You probably forgot to copy the query here.

> org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [53400]: ERROR:
> temporary file size exceeds temp_file_limit (52428800kB)

This error is raised because the query used more than temp_file_limit (see
https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-DISK).
Your options are to raise that parameter (and maybe increase the
underlying disk given the message subject) or find a way to consume less disk
in temporary files.  It's hard to know without knowing the query and the
underlying data, but maybe rewriting the query or splitting it into multiple
queries over a temporary table would work.

Re: Mount Point /datastorage is 93 percent full

From: Scott Ribe

I suspect you have a cycle in your data and that query will never terminate. (Note that 52428800kB is 50TB, which is a rather large temporary table.)
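
If it is a cycle, one way to make the recursion terminate is to track it explicitly. The following is only a sketch, reusing the table and filters from the query above; it assumes PostgreSQL 14 or later for the CYCLE clause and that the added is_cycle and path columns do not clash with existing column names (older releases need a manually maintained array of visited node ids):

    with recursive ctee as (
        select *, 1 as level
        from kms_chunked_content
        where processid = '6598'
          and nodeid = 1
        union all
        select e.*, c.level + 1
        from ctee c
        join kms_chunked_content e
          on e.parentnodeid = cast(c.nodeid as text)
         and e.is_active = 'Y'
         and e.processid = '6598'
    ) cycle nodeid set is_cycle using path   -- stop expanding once a nodeid repeats
    select *
    from ctee
    where not is_cycle
    order by position;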


Re: Mount Point /datastorage is 93 percent full

From: Scott Ribe

> On Aug 8, 2022, at 7:24 AM, Satyamnaidu Gokeda <satyamnaidu215@gmail.com> wrote:
>
> The Postgres DB cluster mount point /data is 1.5TB in size, with 200GB used and 1.3TB free. When the query runs, the temp
> file size increases to 1.3TB and the query still fails.


You are correct that your limit is 50GB. I miscounted places in my prior post, sorry for the confusion.

However, my point still stands: either this data is VERY large for that one root node, or you have a cycle in the data.
Are you sure you do not have a cycle in the data?
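
One way to check is to walk the hierarchy while carrying the list of node ids already visited and flag the first repeat. This is only a sketch, using the table and column names from the query earlier in the thread:

    with recursive walk as (
        select nodeid, parentnodeid,
               array[nodeid] as seen,
               false as is_cycle
        from kms_chunked_content
        where processid = '6598'
          and nodeid = 1
        union all
        select e.nodeid, e.parentnodeid,
               w.seen || e.nodeid,
               e.nodeid = any(w.seen)
        from walk w
        join kms_chunked_content e
          on e.parentnodeid = cast(w.nodeid as text)
         and e.is_active = 'Y'
         and e.processid = '6598'
        where not w.is_cycle            -- do not expand past a detected repeat
    )
    select *
    from walk
    where is_cycle;                     -- any row here closes a cycle

If that returns rows, the data itself loops and the recursive query can never finish within any temp_file_limit.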