Thread: PSQLException: An I/O error occurred while sending to the backend.

PSQLException: An I/O error occurred while sending to the backend.

From: Argha Deep Ghoshal
Hi Team,

We are using PostgreSQL 11, where the exception below pops up intermittently, causing our application to lose its connection to the database. It doesn't reconnect until the application is restarted.

    org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:335)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
    at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
    at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:224)
    at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:728)
    at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:470)
    at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:395)
    at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:316)
    at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1069)
    at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:528)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:479)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:810)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1506)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException
    at org.postgresql.core.PGStream.receiveChar(PGStream.java:308)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1952)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
   
We have checked the PostgreSQL logs in detail; however, we are unable to find any significant errors related to this issue.

We have set up HAProxy between our application and the DB, so requests reach the DB via HAProxy.

PostgreSQL Version: 11

JDBC Version: 42.2.5

All the servers are present in the same region and building.
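
For context, pgJDBC exposes connection properties that affect how quickly a dead connection is noticed; a minimal sketch of enabling them is below (host, database, and credentials are placeholders, and the timeout values are only illustrative, not our actual settings):

    // Sketch only: enabling pgJDBC's tcpKeepAlive plus a socketTimeout so a
    // stalled read fails instead of hanging. Host/db/user are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class KeepAliveSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "appuser");        // placeholder
            props.setProperty("password", "secret");     // placeholder
            props.setProperty("tcpKeepAlive", "true");   // SO_KEEPALIVE on the client socket
            props.setProperty("socketTimeout", "300");   // seconds; abort reads stalled longer than this
            props.setProperty("connectTimeout", "10");   // seconds allowed to establish the TCP connection
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://haproxy-host:5432/appdb", props)) {  // placeholder host/db
                System.out.println("connection valid: " + conn.isValid(5));
            }
        }
    }

If socketTimeout is used, it needs to be longer than the slowest legitimate query, otherwise healthy statements would be cancelled.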

Re: PSQLException: An I/O error occurred while sending to the backend.

From: Tom Lane

Argha Deep Ghoshal <ghoshal.arghadeep@gmail.com> writes:
> We are using PostgreSQL 11, where the exception below pops up
> intermittently, causing our application to lose its connection to the
> database. It doesn't reconnect until the application is restarted.

>     org.postgresql.util.PSQLException: An I/O error occurred while sending
> to the backend.

That certainly looks like loss of network connection.  Had the connection
been sitting idle for awhile before this query attempt?

> We have checked the PostgreSQL logs in detail; however, we are unable to
> find any significant errors related to this issue.

I'd expect that the backend would eventually notice the dead connection.
But the timeout before it does so might be completely different from the
time at which the client notices the dead connection, so the relationship
might not be very obvious.

> All the servers are present in the same region and building.

Doesn't mean there's not routers or firewalls between them.  I'd start
by looking for network timeouts, and possibly configuring the server
to send TCP keepalives more aggressively.  (In this case it might be
HAProxy that needs to be sending keepalives ... don't know what options
it has for that.)
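
For reference, the server-side knobs involved are the tcp_keepalives_* parameters in postgresql.conf; something along these lines, where the numbers are purely illustrative:

    # postgresql.conf -- illustrative values only, not recommendations
    tcp_keepalives_idle = 60        # seconds of idle time before the first keepalive probe
    tcp_keepalives_interval = 10    # seconds between unanswered probes
    tcp_keepalives_count = 6        # unanswered probes before the connection is considered dead

A value of 0 for any of these means "use the operating system default".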

            regards, tom lane



Re: PSQLException: An I/O error occurred while sending to the backend.

From: Argha Deep Ghoshal

Hi Tom,

Appreciate your inputs. Please find my comments inline below.


> We are using PostgreSQL 11, where the exception below pops up
> intermittently, causing our application to lose its connection to the
> database. It doesn't reconnect until the application is restarted.

>     org.postgresql.util.PSQLException: An I/O error occurred while sending
> to the backend.

That certainly looks like loss of network connection.  Had the connection
been sitting idle for awhile before this query attempt?

- We are sending requests continuously using JMeter, and the exceptions are interspersed: out of roughly 100 requests, 8-9 hit this exception, with no lag between them. I believe the connections are kept open after a test run finishes, but in that case shouldn't the error appear on the first request of a new run? Instead, the exceptions show up after 10-15 requests.
 
> We have checked the PostgreSQL logs in detail; however, we are unable to
> find any significant errors related to this issue.

I'd expect that the backend would eventually notice the dead connection.
But the timeout before it does so might be completely different from the
time at which the client notices the dead connection, so the relationship
might not be very obvious.

- Initially I was seeing connection termination errors in the logs. However, currently this exception is not breaking connectivity, so no errors are being logged on the database side.

> All the servers are present in the same region and building.

Doesn't mean there's not routers or firewalls between them.  I'd start
by looking for network timeouts, and possibly configuring the server
to send TCP keepalives more aggressively.  (In this case it might be
HAProxy that needs to be sending keepalives ... don't know what options
it has for that.)


- I have made the below changes on our HAProxy server:

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20

Currently we are testing to see whether this did the trick.
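
One thing we still need to verify: as far as I understand, the kernel keepalive settings above only apply to sockets that actually have SO_KEEPALIVE enabled, which in HAProxy is controlled by option clitcpka / option srvtcpka, and HAProxy's own idle timeouts also matter. A sketch of the directives we are looking at (values are illustrative only):

    # haproxy.cfg -- sketch only; values are illustrative
    defaults
        option  clitcpka        # TCP keepalives toward the application (client side)
        option  srvtcpka        # TCP keepalives toward PostgreSQL (server side)
        timeout client  30m     # idle timeout on the application side
        timeout server  30m     # idle timeout on the PostgreSQL side

If those timeouts are shorter than the time pooled connections sit idle, HAProxy itself could be closing them, which would produce exactly this kind of error on the next use.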

 
                        regards, tom lane