Re: Insert speed question - Mailing list pgsql-general

From: Josué Maldonado
Subject: Re: Insert speed question
Msg-id: 40BCAAEE.2050207@lamundial.hn
In response to: Re: Insert speed question (Shridhar Daithankar <shridhar@frodo.hserus.net>)
List: pgsql-general
Thanks for your responses,

I did the vacuum, but I cannot rerun the insert at this moment. Even
though that server is not in production, so all of its resources should
be dedicated to Postgres, I think I still have some performance issues.
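
For reference, when I can rerun it, the statements Shridhar asked for would
look roughly like this (source_table and target_table are just placeholders,
not the real table names):

   VACUUM VERBOSE ANALYZE source_table;
   -- note: EXPLAIN ANALYZE really executes the statement, so the rows do get inserted
   EXPLAIN ANALYZE INSERT INTO target_table SELECT * FROM source_table;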

I made some changes to postgresql.conf according to the tuning guide:
tcpip_socket = true
max_connections = 28
shared_buffers = 32768          # min max_connections*2 or 16, 8KB each
max_fsm_relations = 500     # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 80000           # min 1000, fsm is free space map, ~6 bytes
max_locks_per_transaction = 64  # min 10
sort_mem = 16384                # min 64, size in KB
vacuum_mem = 419430             # min 1024, size in KB
checkpoint_segments = 10
effective_cache_size = 819200   # typically 8KB each
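
To double-check that the running server actually picked up these values after
the restart, I believe something like this should work (just a sketch):

   SHOW shared_buffers;
   SHOW sort_mem;
   SELECT name, setting FROM pg_settings
    WHERE name IN ('shared_buffers', 'sort_mem',
                   'effective_cache_size', 'checkpoint_segments');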

Shmmax is:
/proc/sys/kernel: cat shmmax
536870912

A simple query on the 4.8 million row table:

dbmund=# explain analyze select * from pkardex where pkd_procode='8959';
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using ix_pkardex_procode on pkardex  (cost=0.00..3865.52 rows=991 width=287) (actual time=10.879..100.914 rows=18 loops=1)
   Index Cond: (pkd_procode = '8959'::bpchar)
 Total runtime: 101.057 ms
(3 rows)
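
One thing I notice in this plan is the estimate of 991 rows against an actual
18, which may mean the statistics on pkd_procode are stale or too coarse. A
rough sketch of what I could try (the target of 100 is just an example; the
default is 10):

   ANALYZE pkardex;
   -- optionally collect finer statistics on the filtered column
   ALTER TABLE pkardex ALTER COLUMN pkd_procode SET STATISTICS 100;
   ANALYZE pkardex;
   EXPLAIN ANALYZE SELECT * FROM pkardex WHERE pkd_procode = '8959';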


A simple query on the 1.2 million row table:

explain analyze select * from pmdoc where pdc_docto='744144';
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Index Scan using ix_pmdoc_docto on pmdoc  (cost=0.00..5.20 rows=2 width=206) (actual time=0.081..0.085 rows=1 loops=1)
   Index Cond: (pdc_docto = '744144'::bpchar)
 Total runtime: 0.140 ms
(3 rows)
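
I also wonder whether clustering pkardex on that index would help, since the
18 matching rows are probably scattered all over the 830MB table and most of
the 101 ms looks like random page reads. Something like this (with the caveat
that CLUSTER takes an exclusive lock and rewrites the whole table):

   CLUSTER ix_pkardex_procode ON pkardex;
   ANALYZE pkardex;
   EXPLAIN ANALYZE SELECT * FROM pkardex WHERE pkd_procode = '8959';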


I would appreciate any comments or suggestions: is a hardware upgrade
needed, or does this seem "normal" for PostgreSQL performance?

Thanks in advance




On 01/06/2004 1:35 AM, Shridhar Daithankar wrote in his message:

> On Tuesday 01 June 2004 01:35, Josué Maldonado wrote:
>
>>Hello List,
>>
>>I'm importing some data from FoxPro to Postgres; there is a table which
>>contains approx. 4.8 million rows and its size is about 830MB. I uploaded it
>>to Postgres using dbf2pg and it worked fine; it took about 10-15 minutes.
>>Now I'm inserting some data from that table into a brand new table in
>>PostgreSQL; for that I'm doing insert into ... select from. The point is
>>that inserting this data from one table to another table in PostgreSQL took
>>about 35 minutes. Is that the expected behavior in Postgres?
>
>
> Can you generate explain analyze for insert into.. select from? Most probably
> it is using seq. scan because you haven't analysed after inserting 4.8M rows.
>
> Do a vacuum verbose analyze tablename and reattempt insert into.. select from.
>
> You can also read general tuning guide at
>
> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
>
>  HTH
>
>  Shridhar
>


--
Sincerely,
Josué Maldonado.
"All other knowledge is harmful to one who does not possess the knowledge
of goodness." Michel Eyquem de Montaigne, French philosopher and writer.
