pq_flush: send() failed: Broken pipe

From
Yumiko Izumi
Date:
Hello.
I use PostgreSQL at work.

When I executed SQL statements in a PostgreSQL 7.3.8 environment,
PostgreSQL logged the following error messages.
==============================
pq_flush: send() failed: Broken pipe
pq_recvbuf: unexpected EOF on client connection
==============================

Why do these messages appear?
Could you suggest a workaround for this problem?

Detailed information follows.

<Work procedure>
(1) I install PostgreSQL 7.3.8 on an HP-UX machine
(2) I run initdb and create the database gyomuDB
(3) I execute an SQL file that creates the TABLEs and FUNCTIONs in gyomuDB:
      psql -d gyomuDB -f 003.sql

<Error contents>
Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: Broken pipe
Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF on client connection

Multiple environments output the error messages above.
I checked disk usage at the time and found no resource bottleneck,
so I don't believe this error is caused by resource exhaustion.

In addition, this error occurs while I am creating the database,
just after installing PostgreSQL.

I don't believe there is heavy access to PostgreSQL,
and I don't believe the database is so large that memory ran short.

By the way, in step (3) above I actually execute seven SQL files consecutively.
The error seems to be triggered by executing the FUNCTION in the third file.

This FUNCTION creates 1000 TABLEs in one batch.
Their names differ, but their definitions are identical.
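
A simplified sketch of such a function (the function name, table-name prefix,
and columns below are placeholders, not the actual contents of 003.sql;
this assumes the plpgsql language is installed in gyomuDB):

  CREATE FUNCTION make_tables() RETURNS integer AS '
  DECLARE
      i integer;
  BEGIN
      -- create 1000 identically-defined tables: tbl_1 .. tbl_1000
      FOR i IN 1..1000 LOOP
          EXECUTE ''CREATE TABLE tbl_'' || i
               || '' (id integer, val text)'';
      END LOOP;
      RETURN 1;
  END;
  ' LANGUAGE 'plpgsql';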

I run step (3) above via a script written in bash.
The script was then terminated by signal 16 (SIGUSR1 on HP-UX):

  # ./gyomuDB_setup.com
  Signal 16

<About a signal>
I read the PostgreSQL source and various documents,
and my understanding of the "SIGUSR1" signal is as follows.

* A "SIGUSR1" signal is used only for timing making a transaction log
  to restore DB at the time of DB disorder outbreak.
* All the transaction processing is carried out by the child process that is a backend.
  But when accumulated a fixed quantity transaction log,
  postmaster carries out checkpoint processing
  by transmitting a "SIGUSR1" signal to postmaster from a backend.
  (Actually, postmaster generates child process more and carries it out)

In this case, however, the signal is delivered not to the postmaster but to psql, and psql appears to stop.
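
(As a side note, and this is my own test idea rather than anything from those
documents: a checkpoint can also be forced by hand with the SQL command below,
run as a superuser, to see whether checkpoint activity by itself coincides
with the disconnect.)

  -- force an immediate checkpoint (superuser only)
  CHECKPOINT;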

<postgresql.conf>
--------------------------------------------------
#
#    Connection Parameters
#
#tcpip_socket = false
#ssl = false

#max_connections = 32
#superuser_reserved_connections = 2

#port = 5432
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777    # octal

#virtual_host = ''

#krb_server_keyfile = ''


#
#    Shared Memory Size
#
#shared_buffers = 64        # min max_connections*2 or 16, 8KB each
#max_fsm_relations = 1000    # min 10, fsm is free space map, ~40 bytes
#max_fsm_pages = 10000        # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 64    # min 10
#wal_buffers = 8        # min 4, typically 8KB each

shared_buffers = 256
max_fsm_relations = 4000
max_fsm_pages = 131072
max_locks_per_transaction = 512

#
#    Non-shared Memory Sizes
#
#sort_mem = 1024        # min 64, size in KB
#vacuum_mem = 8192        # min 1024, size in KB


#
#    Write-ahead log (WAL)
#
#checkpoint_segments = 3    # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300    # range 30-3600, in seconds
#
#commit_delay = 0        # range 0-100000, in microseconds
#commit_siblings = 5        # range 1-1000
#
#fsync = true
#wal_sync_method = fsync    # the default varies across platforms:
#                # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0            # range 0-16


#
#    Optimizer Parameters
#
#enable_seqscan = true
#enable_indexscan = true
#enable_tidscan = true
#enable_sort = true
#enable_nestloop = true
#enable_mergejoin = true
#enable_hashjoin = true

#effective_cache_size = 1000    # typically 8KB each
#random_page_cost = 4        # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01        # (same)
#cpu_index_tuple_cost = 0.001    # (same)
#cpu_operator_cost = 0.0025    # (same)

#default_statistics_target = 10    # range 1-1000

#
#    GEQO Optimizer Parameters
#
#geqo = true
#geqo_selection_bias = 2.0    # range 1.5-2.0
#geqo_threshold = 11
#geqo_pool_size = 0        # default based on tables in statement,
                # range 128-1024
#geqo_effort = 1
#geqo_generations = 0
#geqo_random_seed = -1        # auto-compute seed


#
#    Message display
#
#server_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   info, notice, warning, error, log, fatal,
                #   panic
#client_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   log, info, notice, warning, error
#silent_mode = false

#log_connections = false
#log_pid = false
#log_statement = false
#log_duration = false
#log_timestamp = false

#log_min_error_statement = panic # Values in order of increasing severity:
                 #   debug5, debug4, debug3, debug2, debug1,
                 #   info, notice, warning, error, panic(off)

#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false

#explain_pretty_print = true

# requires USE_ASSERT_CHECKING
#debug_assertions = true


#
#    Syslog
#
#syslog = 0            # range 0-2
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'


#
#    Statistics
#
#show_parser_stats = false
#show_planner_stats = false
#show_executor_stats = false
#show_statement_stats = false

# requires BTREE_BUILD_STATS
#show_btree_build_stats = false


#
#    Access statistics collection
#
#stats_start_collector = true
#stats_reset_on_server_start = true
#stats_command_string = false
#stats_row_level = false
#stats_block_level = false


#
#    Lock Tracing
#
#trace_notify = false

# requires LOCK_DEBUG
#trace_locks = false
#trace_userlocks = false
#trace_lwlocks = false
#debug_deadlocks = false
#trace_lock_oidmin = 16384
#trace_lock_table = 0


#
#    Misc
#
#autocommit = true
#dynamic_library_path = '$libdir'
#search_path = '$user,public'
#datestyle = 'iso, us'
#timezone = unknown        # actually, defaults to TZ environment setting
#australian_timezones = false
#client_encoding = sql_ascii    # actually, defaults to database encoding
#authentication_timeout = 60    # 1-600, in seconds
#deadlock_timeout = 1000    # in milliseconds
#default_transaction_isolation = 'read committed'
#max_expr_depth = 10000        # min 10
#max_files_per_process = 1000    # min 25
#password_encryption = true
#sql_inheritance = true
#transform_null_equals = false
#statement_timeout = 0        # 0 is disabled, in milliseconds
#db_user_namespace = false

deadlock_timeout = 10000
--------------------------------------------------

Thank you in advance.


****************************************
  Yumiko Izumi
     E-mail:izumi-yumiko@scnet.co.jp
****************************************



not able to view information schema

From
"Ashish Karalkar"
Date:
Hello All,
When I query from the command line I can see the system schemas such as
pg_catalog, but the same objects are absent in pgAdmin. Why?
Thanks in advance

with regards
Ashish


Re: not able to view information schema

From
George Weaver
Date:
From "Ashish Karalkar"

> but the same objects are absent in pgAdmin. Why?

Select View > System Objects to see them in PgAdmin.
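
Either way, the system schemas exist in every database; the GUI just hides
them by default. You can confirm from any client with a catalog query, for
example:

  -- list all schemas, including system ones such as pg_catalog
  SELECT nspname FROM pg_namespace ORDER BY nspname;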

Regards,
George