Thread: Low Insert/Update Performance

Low Insert/Update Performance

From: Rhaoni Chiu Pereira
Hi List,

   I have a P4 1.7 GHz, 512 MB RAM, 7200 RPM HD, on Red Hat 9, running
PostgreSQL 7.3.2-3.
   I have a Delphi application that updates the Oracle database using
information from .dbf files (converting the data from the old Clipper
application), and it takes about 3 min 45 s to update Jan/2003.
   My problem is that I must replace this Oracle database with PostgreSQL,
and the same Delphi application takes 45 min to update Jan/2003.
   All Delphi routines have been converted and optimized to work with PgSQL.

Here follows my postgresql.conf:

#
#    Connection Parameters
#
tcpip_socket = true
#ssl = false

max_connections = 10
#superuser_reserved_connections = 2

port = 5432
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777    # octal

#virtual_host = ''

#krb_server_keyfile = ''


#
#    Shared Memory Size
#
shared_buffers = 10000        # min max_connections*2 or 16, 8KB each
max_fsm_relations = 2000    # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 20000        # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 64    # min 10
#wal_buffers =         # min 4, typically 8KB each

#
#    Non-shared Memory Sizes
#
sort_mem = 8000          # min 64, size in KB
vacuum_mem = 16192        # min 1024, size in KB


#
#    Write-ahead log (WAL)
#
checkpoint_segments = 9     # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300    # range 30-3600, in seconds
#
#commit_delay = 0        # range 0-100000, in microseconds
#commit_siblings = 5        # range 1-1000
#
fsync = false
#wal_sync_method = fsync    # the default varies across platforms:
#                # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0            # range 0-16


#
#    Optimizer Parameters
#
enable_seqscan = false
enable_indexscan = true
enable_tidscan = true
enable_sort = true
enable_nestloop = true
enable_mergejoin = true
enable_hashjoin = true

effective_cache_size = 16000    # typically 8KB each
#random_page_cost = 4        # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01        # (same)
#cpu_index_tuple_cost = 0.001    # (same)
#cpu_operator_cost = 0.0025    # (same)

default_statistics_target = 1000    # range 1-1000

#
#    GEQO Optimizer Parameters
#
#geqo = true
#geqo_selection_bias = 2.0    # range 1.5-2.0
#geqo_threshold = 11
#geqo_pool_size = 0        # default based on tables in statement,
                # range 128-1024
#geqo_effort = 1
#geqo_generations = 0
#geqo_random_seed = -1        # auto-compute seed


#
#    Message display
#
#server_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   info, notice, warning, error, log, fatal,
                #   panic
#client_min_messages = notice    # Values, in order of decreasing detail:
                #   debug5, debug4, debug3, debug2, debug1,
                #   log, info, notice, warning, error
#silent_mode = false

#log_connections = false
#log_pid = false
#log_statement = false
#log_duration = false
log_timestamp = true

#log_min_error_statement = error # Values in order of increasing severity:
                 #   debug5, debug4, debug3, debug2, debug1,
                 #   info, notice, warning, error, panic(off)

#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false

#explain_pretty_print = true

# requires USE_ASSERT_CHECKING
#debug_assertions = true


#
#    Syslog
#
#syslog = 0            # range 0-2
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'


#
#    Statistics
#
#show_parser_stats = false
#show_planner_stats = false
#show_executor_stats = false
#show_statement_stats = false

# requires BTREE_BUILD_STATS
#show_btree_build_stats = false


#
#    Access statistics collection
#
#stats_start_collector = true
#stats_reset_on_server_start = true
#stats_command_string = false
#stats_row_level = false
#stats_block_level = false


#
#    Lock Tracing
#
#trace_notify = false

# requires LOCK_DEBUG
#trace_locks = false
#trace_userlocks = false
#trace_lwlocks = false
#debug_deadlocks = false
#trace_lock_oidmin = 16384
#trace_lock_table = 0


#
#    Misc
#
#autocommit = true
#dynamic_library_path = '$libdir'
search_path = 'vendas'
#datestyle = 'iso, us'
#timezone = unknown        # actually, defaults to TZ environment setting
#australian_timezones = false
#client_encoding = sql_ascii    # actually, defaults to database encoding
#authentication_timeout = 60    # 1-600, in seconds
#deadlock_timeout = 1000    # in milliseconds
#default_transaction_isolation = 'read committed'
#max_expr_depth = 10000        # min 10
#max_files_per_process = 1000    # min 25
#password_encryption = true
#sql_inheritance = true
#transform_null_equals = false
#statement_timeout = 0        # 0 is disabled, in milliseconds
#db_user_namespace = false



#
#    Locale settings
#
# (initialized by initdb -- may be changed)
LC_MESSAGES = 'en_US.UTF-8'
LC_MONETARY = 'en_US.UTF-8'
LC_NUMERIC = 'en_US.UTF-8'
LC_TIME = 'en_US.UTF-8'

Best regards,

Rhaoni Chiu Pereira
Sistêmica Computadores

Visit us on the Web: http://sistemica.info
Phone/Fax: +55 51 3328 1122






Re: Low Insert/Update Performance

From: Jeff
On Mon, 20 Oct 2003 12:13:26 -0200
Rhaoni Chiu Pereira <rhaoni@sistemica.info> wrote:

> Hi List,
>
>    I have a P4 1.7 GHz, 512 MB RAM, 7200 RPM HD, on Red Hat 9, running
>    PostgreSQL 7.3.2-3.

[clip]

Please send the schema and queries, or we will not be able to help you.
Also, if you could provide EXPLAIN ANALYZE output for each query, it would
be even more helpful!
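
For anyone following along, the syntax is just the statement prefixed with
EXPLAIN ANALYZE.  Note that it actually executes the statement, so for an
UPDATE you may want to wrap it in a transaction and roll back; the table and
column names here are placeholders, not from Rhaoni's schema:

BEGIN;
EXPLAIN ANALYZE
UPDATE some_table SET some_col = 0 WHERE id = 42;
ROLLBACK;  -- undoes the update that EXPLAIN ANALYZE just performed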

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/

Re: Low Insert/Update Performance

From: Josh Berkus
Rhaoni,

>     My problem is that I must replace this Oracle database with
> PostgreSQL, and the same Delphi application takes 45 min to update
> Jan/2003.
>     All Delphi routines have been converted and optimized to work with
> PgSQL.

Obviously not.

How about posting the update queries?

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: Low Insert/Update Performance

From: Hannu Krosing
Rhaoni Chiu Pereira wrote on Mon, 20 Oct 2003 at 17:13:
> Hi List,
>
>    I have a P4 1.7 GHz, 512 MB RAM, 7200 RPM HD, on Red Hat 9, running
> PostgreSQL 7.3.2-3.
>    I have a Delphi application that updates the Oracle database using
> information from .dbf files (converting the data from the old Clipper
> application), and it takes about 3 min 45 s to update Jan/2003.

Have you tried contrib/dbase to do the same?

How fast does that run?
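
A minimal sketch of what that could look like, using the dbf2pg loader from
contrib/dbase; the file, database, and table names are made up, and the
exact flags should be checked against the contrib README:

dbf2pg -c -d mydb -t ftnfco00 JAN2003.DBF   # -c creates the table first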

>     My problem is that I must replace this Oracle database with
> PostgreSQL, and the same Delphi application takes 45 min to update
> Jan/2003.
>     All Delphi routines have been converted and optimized to work with
> PgSQL.

Could it be that you are running each insert in a separate transaction in
the PgSQL version?
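
If so, every INSERT pays the cost of its own commit.  A minimal sketch of
the batched alternative, with a hypothetical table:

BEGIN;
INSERT INTO items (id, qty) VALUES (1, 5);
INSERT INTO items (id, qty) VALUES (2, 7);
-- ... many more rows ...
COMMIT;  -- one commit for the whole batch instead of one per row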

Another possibility is that a primary key index created on the empty tables
is not used in subsequent UNIQUE checks once the tables start to fill and
using the index would be useful. An ANALYZE from a parallel backend could
help here. The same can be true for foreign keys and unique constraints.
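
Concretely, partway through the load you can refresh the planner statistics
from a second psql session, using the table from this thread:

ANALYZE ftnfco00;  -- re-gather statistics so the index looks attractive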

---------------
Hannu


Re: Low Insert/Update Performance

From: Josh Berkus
Rhaoni,

> The Delphi program does just one commit for all queries.
> I was wondering if there are some configuration parameters to be changed
> to improve the performance?

To help you, we'll need you to trap a query and run an EXPLAIN ANALYZE on it.

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: Low Insert/Update Performance

From: Rhaoni Chiu Pereira
Hi List,

 Here follow the update query, its EXPLAIN ANALYZE output, my
postgresql.conf, and my DB configuration. This is my first PostgreSQL DB,
so I would like to know if its performance is normal!
  If there is some postgresql.conf parameter that you think will optimize
the database, just tell me!

QUERY:

update ftnfco00 set
empfil = 0,
data_entrega = NULL,
situacao_nf  = 'N',
cod_fiscal   = 61010000,
base_calc_icm_trib = '264.1'::float8,
nf_emitida   = 'S',
tipo_cad_clicre ='C',
cod_cliente = '55380'::float8,
cod_repres  = 8,
cod_tipo_cliente = 1,
estado_cliente = 'PR',
pais_cliente = 978,
classif_cliente = '',
cod_suframa = ' ',
ordem_compra = ' ',
banco_cobranca = 0,
situacao_comissao = '0',
perc_comissao = '6'::float8,
emitir_bloqueto = 'N',
cod_tipo_venda = 0,
prazo_pgto_01 = 68,
prazo_pgto_02 = 0,
prazo_pgto_03 = 0,
prazo_pgto_04 = 0,
prazo_pgto_05 = 0,
prazo_pgto_06 = 0,
prazo_pgto_07 = 0,
prazo_pgto_08 = 0,
prazo_pgto_09 = 0,
prazo_pgto_desc_duplic = 0,
perc_desc_duplic = '0'::float8,
qtde_fisica   = '5'::float8,
vlr_liquido   = '264.1'::float8,
vlr_ipi       = '0'::float8,
vlr_compl_nf  = 0,
vlr_frete     = '26.4'::float8,
vlr_acresc_fin_emp = 0,
vlr_acresc_fin_tab = 0,
vlr_dolar_vcto_dupl = 1,
vlr_dolar_dia_fatur = 1,
vlr_icm = '31.69'::float8,
vlr_ipi_consignacao = 0,
perc_juro_dia = '0.15'::float8,
cod_texto_padrao = 19,
cod_transp = 571,
cod_transp_redesp = 0,
placa_transp = '',
peso_liquido = '5.832'::float8,
peso_bruto   = '6.522'::float8,
qtde_volumes = 5,
proxima_nf   = '0'::float8,
lista_preco  = '03RS',
lista_preco_basico = ' ',
atu_guia_embarque  = 'N',
vlr_pis_cofins = 0,
qtde_duzias = 5,
obs_nf = 'ORDEM DE COMPRA 40851583',
margem_comercial = 0,
margem_operac = 0
where
emp = 909 and
fil = 101 and
nota_fiscal = '57798'::float8 and
serie = 'UNICA' and
data_emissao = cast('2003-01-03 00:00:00'::timestamp as timestamp)


EXPLAIN ANALYZE:
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Index Scan using ftnfco06 on ftnfco00  (cost=0.00..20.20 rows=1 width=535) (actual time=1.14..1.27 rows=1 loops=1)
   Index Cond: ((emp = 909::numeric) AND (fil = 101::numeric) AND (data_emissao = '2003-01-03 00:00:00'::timestamp without time zone))
   Filter: (((nota_fiscal)::double precision = 57798::double precision) AND (serie = 'UNICA'::character varying))
 Total runtime: 3.56 msec
(4 rows)

postgresql.conf:

(identical to the postgresql.conf posted earlier in this thread)


db configuration:

    Pentium 4 1.7 GHz, 512 MB DDR RAM, 7200 RPM HD, RH 9, PostgreSQL 7.3.2-3



Best regards,

Rhaoni Chiu Pereira
Sistêmica Computadores

Visit us on the Web: http://sistemica.info
Phone/Fax: +55 51 3328 1122



Quoting Josh Berkus <josh@agliodbs.com>:

> Rhaoni,
>
> > The Delphi program does just one commit for all queries.
> > I was wondering if there are some configuration parameters to be changed
> > to improve the performance?
>
> To help you, we'll need you to trap a query and run an EXPLAIN ANALYZE on
> it.
>
> --
> Josh Berkus
> Aglio Database Solutions
> San Francisco


Re: Low Insert/Update Performance

From: Josh Berkus
Rhaoni,

>  Total runtime: 3.56 msec
> (4 rows)

Well, from that figure it's not the query that's holding you up.

Does the system bog down when you're doing a whole series of these updates,
or with just one?  If the former, then I'm afraid your disk is to blame ...
large numbers of rapid-fire updates simply won't be fast on a single IDE
disk.  Try getting a second disk and moving the transaction log to it.

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: Low Insert/Update Performance

From: Josh Berkus
Rhaoni,

>    First of all, thanks for your attention and fast answer. The system
> really bogs down when I'm doing a whole series of these updates.

That would be consistent with a single-disk problem.

> Take a look at my postgresql.conf; I'm afraid of setting some parameters
> wrong (too high or too low). And sorry if it sounds stupid, but how can I
> move the transaction log to this second disk?

1) Install the 2nd disk.
2) With PostgreSQL shut down, copy the PGDATA/pg_xlog directory to the 2nd
disk.
3) Delete the old pg_xlog directory.
4) Symlink or mount the new pg_xlog directory under PGDATA as
PGDATA/pg_xlog.
5) Restart PostgreSQL (see the shell sketch below).
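
As shell commands, run as the postgres user with the server stopped; the
PGDATA path and mount point below are assumptions for illustration:

PGDATA=/var/lib/pgsql/data                # adjust to your installation
mv $PGDATA/pg_xlog /mnt/disk2/pg_xlog     # copy + delete in one step
ln -s /mnt/disk2/pg_xlog $PGDATA/pg_xlog  # link it back under PGDATA
# then start PostgreSQL again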

What I am interested in is your original assertion that this ran faster on
Oracle.   Was Oracle installed on this particular machine, or a different
one?

--
Josh Berkus
Aglio Database Solutions
San Francisco