Thread: pq_flush: send() failed: Broken pipe

pq_flush: send() failed: Broken pipe

From
Yumiko Izumi
Date:
Hello.
I use PostgreSQL at work.

When I executed an SQL file in a PostgreSQL 7.3.8 environment,
PostgreSQL logged the following error messages.
==============================
pq_flush: send() failed: Broken pipe
pq_recvbuf: unexpected EOF on client connection
==============================

Why do these messages appear?
Could you suggest a workaround for this problem?

Detailed information follows.

<Work procedure>
(1) I installed PostgreSQL 7.3.8 on an HP-UX machine.
(2) I ran initdb and created gyomuDB.
(3) I ran an SQL file that creates the TABLEs and FUNCTIONs in gyomuDB:
      psql -d gyomuDB -f 003.sql

<Error contents>
Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: Broken pipe
Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF on client connection

Multiple environments output the above error messages.
I checked disk usage at the time and found no resource bottleneck,
so I believe this error is not caused by resource exhaustion.

In addition, this error occurred while I was creating the DB
just after installing PostgreSQL.

I don't think PostgreSQL was under heavy access,
and I don't think the DB was large enough to exhaust memory.

Incidentally, step (3) above runs seven SQL files consecutively,
but the error seems to be caused by executing the FUNCTION in the third file.

This FUNCTION creates 1000 TABLEs in one batch.
Their names differ, but their definitions are identical.

I run step (3) above via a script written in bash.
The script was then terminated by signal 16 (SIGUSR1):

  # ./gyomuDB_setup.com
  Signal 16
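The "Signal 16" report could be confirmed from the wrapper script itself; this is a sketch under assumptions (the actual gyomuDB_setup.com is not shown in the thread), relying on the POSIX convention that a shell reports 128 + the signal number when a child dies from a signal, so a status of 144 would mean SIGUSR1 (16) on HP-UX:

```shell
# Sketch: classify a child's exit status as a normal exit or a signal death.
# POSIX shells encode signal deaths as 128 + signal number.
report_exit() {
    if [ "$1" -ge 128 ]; then
        echo "killed by signal $(($1 - 128))"
    else
        echo "exited with status $1"
    fi
}

# Demonstration: a subshell sends itself SIGUSR1 and is terminated by the
# signal's default action, just as psql appears to have been.
sh -c 'kill -USR1 $$'
report_exit $?
```

In the real script this check would follow the `psql -d gyomuDB -f 003.sql` invocation, replacing the bare "Signal 16" message with the signal number itself.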

<About the signal>
I read the PostgreSQL source and various documents,
and I understand the SIGUSR1 signal as follows.

* SIGUSR1 is used only to trigger writing of the transaction log
  that restores the DB after a failure.
* All transaction processing is carried out by the child processes (the backends).
  When a fixed amount of transaction log has accumulated,
  a backend sends SIGUSR1 to the postmaster,
  which then carries out checkpoint processing.
  (Actually, the postmaster spawns another child process to do it.)

In this case, however, the signal seems to have been delivered not to the postmaster but to psql, and psql stops.

<postgresql.conf>
--------------------------------------------------
#
#    Connection Parameters
#
#tcpip_socket = false
#ssl = false

#max_connections = 32
#superuser_reserved_connections = 2

#port = 5432
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777    # octal

#virtual_host = ''

#krb_server_keyfile = ''


#
#    Shared Memory Size
#
#shared_buffers = 64        # min max_connections*2 or 16, 8KB each
#max_fsm_relations = 1000    # min 10, fsm is free space map, ~40 bytes
#max_fsm_pages = 10000        # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 64    # min 10
#wal_buffers = 8        # min 4, typically 8KB each

shared_buffers = 256
max_fsm_relations = 4000
max_fsm_pages = 131072
max_locks_per_transaction = 512

#
#    Non-shared Memory Sizes
#
#sort_mem = 1024        # min 64, size in KB
#vacuum_mem = 8192        # min 1024, size in KB


#
#    Write-ahead log (WAL)
#
#checkpoint_segments = 3    # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300    # range 30-3600, in seconds
#
#commit_delay = 0        # range 0-100000, in microseconds
#commit_siblings = 5        # range 1-1000
#
#fsync = true
#wal_sync_method = fsync    # the default varies across platforms:
#                # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0            # range 0-16


#
#    Optimizer Parameters
#
#enable_seqscan = true
#enable_indexscan = true
#enable_tidscan = true
#enable_sort = true
#enable_nestloop = true
#enable_mergejoin = true
#enable_hashjoin = true

#effective_cache_size = 1000    # typically 8KB each
#random_page_cost = 4        # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01        # (same)
#cpu_index_tuple_cost = 0.001    # (same)
#cpu_operator_cost = 0.0025    # (same)

#default_statistics_target = 10    # range 1-1000

#
#    GEQO Optimizer Parameters
#
#geqo = true
#geqo_selection_bias = 2.0    # range 1.5-2.0
#geqo_threshold = 11
#geqo_pool_size = 0        # default based on tables in statement,
                # range 128-1024
#geqo_effort = 1
#geqo_generations = 0
#geqo_random_seed = -1        # auto-compute seed


#
#    Message display
#
#server_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   info, notice, warning, error, log, fatal,
                #   panic
#client_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   log, info, notice, warning, error
#silent_mode = false

#log_connections = false
#log_pid = false
#log_statement = false
#log_duration = false
#log_timestamp = false

#log_min_error_statement = panic # Values in order of increasing severity:
                 #   debug5, debug4, debug3, debug2, debug1,
                 #   info, notice, warning, error, panic(off)

#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false

#explain_pretty_print = true

# requires USE_ASSERT_CHECKING
#debug_assertions = true


#
#    Syslog
#
#syslog = 0            # range 0-2
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'


#
#    Statistics
#
#show_parser_stats = false
#show_planner_stats = false
#show_executor_stats = false
#show_statement_stats = false

# requires BTREE_BUILD_STATS
#show_btree_build_stats = false


#
#    Access statistics collection
#
#stats_start_collector = true
#stats_reset_on_server_start = true
#stats_command_string = false
#stats_row_level = false
#stats_block_level = false


#
#    Lock Tracing
#
#trace_notify = false

# requires LOCK_DEBUG
#trace_locks = false
#trace_userlocks = false
#trace_lwlocks = false
#debug_deadlocks = false
#trace_lock_oidmin = 16384
#trace_lock_table = 0


#
#    Misc
#
#autocommit = true
#dynamic_library_path = '$libdir'
#search_path = '$user,public'
#datestyle = 'iso, us'
#timezone = unknown        # actually, defaults to TZ environment setting
#australian_timezones = false
#client_encoding = sql_ascii    # actually, defaults to database encoding
#authentication_timeout = 60    # 1-600, in seconds
#deadlock_timeout = 1000    # in milliseconds
#default_transaction_isolation = 'read committed'
#max_expr_depth = 10000        # min 10
#max_files_per_process = 1000    # min 25
#password_encryption = true
#sql_inheritance = true
#transform_null_equals = false
#statement_timeout = 0        # 0 is disabled, in milliseconds
#db_user_namespace = false

deadlock_timeout = 10000
--------------------------------------------------

Thank you in advance.


****************************************
  Yumiko Izumi
     E-mail:izumi-yumiko@scnet.co.jp
****************************************



Re: pq_flush: send() failed: Broken pipe

From
Tom Lane
Date:
Yumiko Izumi <izumi-yumiko@scnet.co.jp> writes:
> When I executed an SQL file in a PostgreSQL 7.3.8 environment,
> PostgreSQL logged the following error messages.

You haven't shown us the failing SQL command, nor indeed anything else
that would let someone else reproduce the problem.  However, 7.3.8
is more than two years old, so your first step ought to be to update
to a more recent release to see if the problem is already fixed.

            regards, tom lane

Re: pq_flush: send() failed: Broken pipe

From
Yumiko Izumi
Date:
Hello Tom.

I run the SQL file with the psql command:

% psql -d gyomuDB -f 003.sql

In 003.sql I create various tables and indexes, but the following SQL statements terminated abnormally.
(The whole file is about 1500 lines; the following is an extract.)
#######################
CREATE FUNCTION mon_CreateTable_WbemMonitorLog( text ) RETURNS integer AS '
DECLARE
        str_basename    ALIAS FOR $1;
        str_sql         text;
        str_tablename   text;
        nb              integer;
        nb_end          integer;
BEGIN
        nb := 0;
        nb_end := 999;
        while nb <= nb_end loop
                -- table
                str_tablename := str_basename || CAST( nb AS text );
                str_sql := ''CREATE TABLE '' || str_tablename
                        || ''(''
                        || ''MonID int NOT NULL ,''
                        || ''CategoryID int NOT NULL ,''
                        || ''ExtensionID int NOT NULL ,''
                        || ''SummaryID int NOT NULL ,''
                        || ''KeyValue varchar (256) NULL ,''
                        || ''Data varchar (512) NULL ,''
                        || ''GetDate timestamp NOT NULL ,''
                        || ''Status int NOT NULL ,''
                        || ''Summarized int NOT NULL);'';
                execute str_sql;
                nb := nb + 1;
        end loop;
        RETURN ( 0 );
END;
'
LANGUAGE 'plpgsql';
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogMinute_' );
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogHour_' );
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogDay_' );
DROP FUNCTION mon_CreateTable_WbemMonitorLog( text );
#######################
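To localize which of the statements the connection drops on, the file could be run with statement echoing and stop-on-error enabled; this is a sketch, assuming the `-e` and `-v ON_ERROR_STOP=1` options behave in this psql build as documented:

```shell
# Sketch: echo each statement as it is sent and stop at the first error,
# so the last echoed statement is the one the backend died on.
run_sql_file() {
    # $1 = database name, $2 = SQL file
    psql -d "$1" -v ON_ERROR_STOP=1 -e -f "$2"
}

# Usage: run_sql_file gyomuDB 003.sql
```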
In other words, I create a function that makes 1,000 tables, and then I call it.

This function worked normally on various servers,
but it terminated abnormally on only one particular server.

The only difference is that the server on which it fails has a high-speed multiprocessor.
Other than that, there is no remarkable difference.

When this function works normally, it outputs the following.
#######################
CREATE FUNCTION
 mon_createtable_wbemmonitorlog
--------------------------------
                              0
(1 row)

 mon_createtable_wbemmonitorlog
--------------------------------
                              0
(1 row)

 mon_createtable_wbemmonitorlog
--------------------------------
                              0
(1 row)

DROP FUNCTION
#######################

When the function terminated abnormally, it output only "CREATE FUNCTION",
and then the psql command terminated.

In other words, I suppose the failure occurs while executing the first function call (the first SELECT).

At this time, PostgreSQL outputs the following error messages.
#######################
Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: Broken pipe
Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF on client connection
#######################

If any further information is needed, please let me know.

Thank you in advance.

****************************************
  Yumiko Izumi
     E-mail:izumi-yumiko@scnet.co.jp
****************************************



Re: pq_flush: send() failed: Broken pipe

From
Tom Lane
Date:
Yumiko Izumi <izumi-yumiko@scnet.co.jp> writes:
> At this time, PostgreSQL outputs the following error messages.
> #######################
> Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: Broken pipe
> Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF on client connection
> #######################

Hmm ... this indicates that psql is failing, not the server.  Perhaps
you have a corrupt copy of psql or libpq?  But in any case, you don't
want to be running 7.3.8.  If you must run a 7.3 system then it should
be 7.3.18 or close to that.

            regards, tom lane