Thread: pq_flush: send() failed: Broken pipe

pq_flush: send() failed: Broken pipe

From
Yumiko Izumi
Hello.

When I executed SQL statements in a PostgreSQL 7.3.8 environment,
PostgreSQL output the following error messages.
==============================
Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: Broken pipe
Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF on client connection
==============================

Why do these messages appear?
Could you please tell me how to work around this problem?

My work procedure is as follows.
(1) Install PostgreSQL 7.3.8 on an HP-UX machine.
(2) Run initdb and create gyomuDB.
(3) Run an SQL script that creates the tables and functions in gyomuDB:
    % psql -d gyomuDB -f 003.sql
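
For reference, the same step can also be run so that psql echoes each
statement and stops at the first error; that should at least show which
statement the session dies on (-e and ON_ERROR_STOP are standard psql
options, nothing specific to my setup):

    % psql -d gyomuDB -v ON_ERROR_STOP=1 -e -f 003.sql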

003.sql creates various tables and indexes, but the following SQL statements terminate abnormally.
(The whole file is about 1500 lines; the following is just an extract.)
#######################
CREATE FUNCTION mon_CreateTable_WbemMonitorLog( text ) RETURNS integer AS '
DECLARE
        str_basename    ALIAS FOR $1;
        str_sql         text;
        str_tablename   text;
        nb              integer;
        nb_end          integer;
BEGIN
        nb := 0;
        nb_end := 999;
        while nb <= nb_end loop
                -- table
                str_tablename := str_basename || CAST( nb AS text );
                str_sql := ''CREATE TABLE '' || str_tablename
                        || ''(''
                        || ''MonID int NOT NULL ,''
                        || ''CategoryID int NOT NULL ,''
                        || ''ExtensionID int NOT NULL ,''
                        || ''SummaryID int NOT NULL ,''
                        || ''KeyValue varchar (256) NULL ,''
                        || ''Data varchar (512) NULL ,''
                        || ''GetDate timestamp NOT NULL ,''
                        || ''Status int NOT NULL ,''
                        || ''Summarized int NOT NULL);'';
                execute str_sql;
                nb := nb + 1;
        end loop;
        RETURN ( 0 );
END;
'
LANGUAGE 'plpgsql';
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogMinute_' );
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogHour_' );
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogDay_' );
DROP FUNCTION mon_CreateTable_WbemMonitorLog( text );
#######################

This function works normally on various servers,
but it terminates abnormally on one particular server only.

The only difference is that the server on which it terminates abnormally
is a fast multiprocessor machine; apart from that, there is no notable difference.

When the function works normally, it produces the following output.
#######################
CREATE FUNCTION
 mon_createtable_wbemmonitorlog
--------------------------------
                              0
(1 row)

 mon_createtable_wbemmonitorlog
--------------------------------
                              0
(1 row)

 mon_createtable_wbemmonitorlog
--------------------------------
                              0
(1 row)

DROP FUNCTION
#######################

When the function terminates abnormally, psql prints only "CREATE FUNCTION"
and then the psql command exits.

In other words, I suppose the failure occurs while executing the first SELECT that calls the function.

I checked disk usage at the time and found no resource bottleneck,
so I believe this error is not caused by a shortage of resources.

In addition, the error occurs while I am creating the database
just after installing PostgreSQL.

I don't think there is heavy access to PostgreSQL at that point,
and I don't think the database is large enough to run out of memory.

<About a signal>
I run step (3) above from a script written in bash.
At that point the script is terminated by signal 16 (SIGUSR1 on HP-UX):

  # ./gyomuDB_setup.com
  Signal 16
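
When a command is killed by a signal, bash reports an exit status of
128 plus the signal number, so checking $? right after the psql call
shows which process was actually killed.  (The wrapper below is only an
illustration, not my real gyomuDB_setup.com.)

  psql -d gyomuDB -f 003.sql
  echo "psql exit status: $?"    # 144 = 128 + 16, i.e. psql itself died on SIGUSR1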

I read the PostgreSQL source and various documents,
and my understanding of the SIGUSR1 signal is as follows.

* SIGUSR1 is used only in connection with the transaction log that is
  written so the database can be recovered after a crash.
* All transaction processing is done by the child processes (the backends).
  When a certain amount of transaction log has accumulated,
  a backend sends SIGUSR1 to the postmaster to request checkpoint processing.
  (The postmaster then forks another child process to actually perform it.)

In my case, however, the signal seems to be delivered not to the postmaster but to psql, and psql stops.

<postgresql.conf>
--------------------------------------------------
#
#    Connection Parameters
#
#tcpip_socket = false
#ssl = false

#max_connections = 32
#superuser_reserved_connections = 2

#port = 5432
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777    # octal

#virtual_host = ''

#krb_server_keyfile = ''


#
#    Shared Memory Size
#
#shared_buffers = 64        # min max_connections*2 or 16, 8KB each
#max_fsm_relations = 1000    # min 10, fsm is free space map, ~40 bytes
#max_fsm_pages = 10000        # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 64    # min 10
#wal_buffers = 8        # min 4, typically 8KB each

shared_buffers = 256
max_fsm_relations = 4000
max_fsm_pages = 131072
max_locks_per_transaction = 512

#
#    Non-shared Memory Sizes
#
#sort_mem = 1024        # min 64, size in KB
#vacuum_mem = 8192        # min 1024, size in KB


#
#    Write-ahead log (WAL)
#
#checkpoint_segments = 3    # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300    # range 30-3600, in seconds
#
#commit_delay = 0        # range 0-100000, in microseconds
#commit_siblings = 5        # range 1-1000
#
#fsync = true
#wal_sync_method = fsync    # the default varies across platforms:
#                # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0            # range 0-16


#
#    Optimizer Parameters
#
#enable_seqscan = true
#enable_indexscan = true
#enable_tidscan = true
#enable_sort = true
#enable_nestloop = true
#enable_mergejoin = true
#enable_hashjoin = true

#effective_cache_size = 1000    # typically 8KB each
#random_page_cost = 4        # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01        # (same)
#cpu_index_tuple_cost = 0.001    # (same)
#cpu_operator_cost = 0.0025    # (same)

#default_statistics_target = 10    # range 1-1000

#
#    GEQO Optimizer Parameters
#
#geqo = true
#geqo_selection_bias = 2.0    # range 1.5-2.0
#geqo_threshold = 11
#geqo_pool_size = 0        # default based on tables in statement,
                # range 128-1024
#geqo_effort = 1
#geqo_generations = 0
#geqo_random_seed = -1        # auto-compute seed


#
#    Message display
#
#server_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   info, notice, warning, error, log, fatal,
                #   panic
#client_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   log, info, notice, warning, error
#silent_mode = false

#log_connections = false
#log_pid = false
#log_statement = false
#log_duration = false
#log_timestamp = false

#log_min_error_statement = panic # Values in order of increasing severity:
                 #   debug5, debug4, debug3, debug2, debug1,
                 #   info, notice, warning, error, panic(off)

#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false

#explain_pretty_print = true

# requires USE_ASSERT_CHECKING
#debug_assertions = true


#
#    Syslog
#
#syslog = 0            # range 0-2
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'


#
#    Statistics
#
#show_parser_stats = false
#show_planner_stats = false
#show_executor_stats = false
#show_statement_stats = false

# requires BTREE_BUILD_STATS
#show_btree_build_stats = false


#
#    Access statistics collection
#
#stats_start_collector = true
#stats_reset_on_server_start = true
#stats_command_string = false
#stats_row_level = false
#stats_block_level = false


#
#    Lock Tracing
#
#trace_notify = false

# requires LOCK_DEBUG
#trace_locks = false
#trace_userlocks = false
#trace_lwlocks = false
#debug_deadlocks = false
#trace_lock_oidmin = 16384
#trace_lock_table = 0


#
#    Misc
#
#autocommit = true
#dynamic_library_path = '$libdir'
#search_path = '$user,public'
#datestyle = 'iso, us'
#timezone = unknown        # actually, defaults to TZ environment setting
#australian_timezones = false
#client_encoding = sql_ascii    # actually, defaults to database encoding
#authentication_timeout = 60    # 1-600, in seconds
#deadlock_timeout = 1000    # in milliseconds
#default_transaction_isolation = 'read committed'
#max_expr_depth = 10000        # min 10
#max_files_per_process = 1000    # min 25
#password_encryption = true
#sql_inheritance = true
#transform_null_equals = false
#statement_timeout = 0        # 0 is disabled, in milliseconds
#db_user_namespace = false

deadlock_timeout = 10000
--------------------------------------------------

If any other information is needed, please let me know.

Thank you in advance.




Re: pq_flush: send() failed: Broken pipe

From
"Andrej Ricnik-Bay"
On 3/6/07, Yumiko Izumi <izumi-yumiko@scnet.co.jp> wrote:
> Hello.
Hi,

> When I executed SQL statements in a PostgreSQL 7.3.8 environment,
> PostgreSQL output the following error messages.
Can't say anything sensible regarding the error message,
but if you have to carry on using 7.x you should upgrade
to at least 7.4.16.  Your install is 2.5 years behind, and there
have been quite a few patches/security fixes since then.
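
Roughly, the upgrade would look like this (paths are only examples and
the details will differ on HP-UX; within 7.3.x you can simply install
the new binaries, but moving to 7.4.x needs a dump and reload):

  pg_dumpall > backup.sql              # take a backup first in any case
  # stop the old server, install 7.4.16, run initdb for a fresh data directory
  psql -d template1 -f backup.sql      # reload the dump into the new cluster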


Cheers,
Andrej

Re: pq_flush: send() failed: Broken pipe

From
Tom Lane
Yumiko Izumi <izumi-yumiko@scnet.co.jp> writes:
> When I executed SQL statements in a PostgreSQL 7.3.8 environment,
> PostgreSQL output the following error messages.
> ==============================
> Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: Broken pipe
> Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF on client connection

These messages indicate that psql crashed, not the backend.  Did you not
see any interesting messages on the client side?  Can you get a stack
trace from the psql crash?
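
Something along these lines should do, assuming gdb is installed on the
HP-UX box and core dumps are enabled (the psql path is only an example):

    ulimit -c unlimited                  # allow a core file to be written
    psql -d gyomuDB -f 003.sql           # reproduce the crash
    gdb /usr/local/pgsql/bin/psql core   # open the core dump in gdb
    (gdb) bt                             # print the stack trace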

Also, as someone already mentioned, the current release in the 7.3
branch is 7.3.18 not 7.3.8.  If you want us to expend time looking for
the problem, it would be polite to first make sure it's not a
long-since-solved problem.

            regards, tom lane