Thread: Slow Recordset

Slow Recordset

From: Francisco Jose Toledano Alcala

Hello everyone.

I think I've got a big problem developing a client-server application.
On the server side I use a PostgreSQL server on Linux (the SuSE 6.3
distribution), over an Ethernet network with approximately 10 clients.
On the client side, I use Windows 98 and Visual Basic 6 client programs
using ADO through ODBC (the PostgreSQL ODBC driver).

Everything runs OK. I created a database and 5 tables in Access and
exported everything through ODBC to the PostgreSQL server. The export
worked fine, all records appear correctly, and the primary key was
exported correctly too.

The table has approximately 15 fields, one of them of type 'text'.

Then I ran a Visual Basic program to fill the database with 50,000
records using ADO. Everything ran OK, but when a client runs a query
like "select * from alumnos order by codalumno;" over all records, the
query takes approximately 35 seconds. The server is a Pentium III with
128 MB RAM and an 8.4 GB UDMA/66 IDE disk.

I think there must be something wrong with the running postmaster,
because it shouldn't be possible for it to run as slowly as this.

Should I optimize the database with a view or something like that?
Is this normal?
Do I need more indexes? (I only have the primary key.)

Please, I need help as soon as possible.



Re: Slow Recordset

From: Joe Shevland

Francisco Jose Toledano Alcala wrote:
> 
> Hello everyone.
> 
> I think I've got a big problem developing a client-server application.
> On the server side I use a PostgreSQL server on Linux (the SuSE 6.3
> distribution), over an Ethernet network with approximately 10 clients.
> On the client side, I use Windows 98 and Visual Basic 6 client programs
> using ADO through ODBC (the PostgreSQL ODBC driver).
> 
> Everything runs OK. I created a database and 5 tables in Access and
> exported everything through ODBC to the PostgreSQL server. The export
> worked fine, all records appear correctly, and the primary key was
> exported correctly too.
> 
> The table has approximately 15 fields, one of them of type 'text'.
> 
> Then I ran a Visual Basic program to fill the database with 50,000
> records using ADO. Everything ran OK, but when a client runs a query
> like "select * from alumnos order by codalumno;" over all records, the
> query takes approximately 35 seconds. The server is a Pentium III with
> 128 MB RAM and an 8.4 GB UDMA/66 IDE disk.
> 
> I think there must be something wrong with the running postmaster,
> because it shouldn't be possible for it to run as slowly as this.
> 
> Should I optimize the database with a view or something like that?
> Is this normal?
> Do I need more indexes? (I only have the primary key.)

Is 'codalumno' the primary key? If not, defining an index on this column
may speed things up.
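Something like this, say (the index name is just a suggestion):

    -- Build an index on the ORDER BY column, then refresh the planner's
    -- statistics so it knows the index is worth using.
    CREATE INDEX alumnos_codalumno_idx ON alumnos (codalumno);
    VACUUM ANALYZE alumnos;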

Can you execute the query on the server using 'psql' and compare
results?
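
It would also be interesting to see the plan, e.g.:

    EXPLAIN SELECT * FROM alumnos ORDER BY codalumno;

which should tell you whether it's doing an index scan or a sequential
scan followed by a sort.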

Joe

> 
> Please, I need help as soon as possible.

-- 
Joe Shevland
Principal Consultant
KPI Logistics Pty Ltd
http://www.kpi.com.au
mailto:shevlandj@kpi.com.au

"Support bacteria -- it's the only culture some people have!"


Re: Slow Recordset

From: Joe Shevland

You could try a clustered index with the CLUSTER command, though there are
a few things worth reading about it first. I don't know that it would
really help in this case, though.
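
If you do want to experiment, from memory the syntax is as follows,
assuming the primary key index got the default name 'alumnos_pkey'
(check with \d alumnos):

    -- CLUSTER rewrites the table in the order of the given index. Note
    -- that in these versions it recreates the table, so other indexes
    -- and grants on it are lost; read the docs before trying it.
    CLUSTER alumnos_pkey ON alumnos;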

Joe

Francisco Jose Toledano Alcala wrote:
> 
> Joe Shevland wrote:
> >
> > Francisco Jose Toledano Alcala wrote:
> > >
> > > [...]
> > >
> > > Should I optimize the database with a view or something like that?
> > > Is this normal?
> > > Do I need more indexes? (I only have the primary key.)
> >
> > Is 'codalumno' the primary key? If not, defining an index on this column
> > may speed things up.
> >
> 
> Yes, it's the primary key.
> 
> > Can you execute the query on the server using 'psql' and compare
> > results?
> >
> The timing results were measured on the server side through psql.

-- 
Joe Shevland
Principal Consultant
KPI Logistics Pty Ltd
http://www.kpi.com.au
mailto:shevlandj@kpi.com.au


Thread: postgres crash. HELP

postgres crash. HELP

From: "Oscar Serrano"

Hi:
I don't know how, but Postgres has done a very odd thing.
Suddenly, in the directory where I have all the tables, there are now about
1000 files with names like this:

pg_class_oid_index.165               pg_class_oid_index.2835
pg_class_oid_index.1650              pg_class_oid_index.2836
pg_class_oid_index.1651              pg_class_oid_index.2837
pg_class_oid_index.1652              pg_class_oid_index.2838
pg_class_oid_index.1653              pg_class_oid_index.2839
pg_class_oid_index.1654              pg_class_oid_index.284
pg_class_oid_index.1655              pg_class_oid_index.2840
pg_class_oid_index.1656              pg_class_oid_index.2841
pg_class_oid_index.1657              pg_class_oid_index.2842
pg_class_oid_index.1658              pg_class_oid_index.2843
pg_class_oid_index.1659              pg_class_oid_index.2844
pg_class_oid_index.166               pg_class_oid_index.2845
pg_class_oid_index.1660              pg_class_oid_index.2846
pg_class_oid_index.1661              pg_class_oid_index.2847
pg_class_oid_index.1662              pg_class_oid_index.2848
pg_class_oid_index.1663              pg_class_oid_index.2849


And one of my biggest tables now looks like this:

+------------+-----------+--------+
| Field      | Type      | Length |
+------------+-----------+--------+
| fecha      | datetime  |      8 |
| fecha      | datetime  |      8 |
| cc         | int4      |      4 |
| cc         | int4      |      4 |
| isin       | varchar() |     12 |
| isin       | varchar() |     12 |
| valor      | float8    |      8 |
| valor      | float8    |      8 |
| patrimonio | float8    |      8 |
| patrimonio | float8    |      8 |
| participes | int4      |      4 |
| participes | int4      |      4 |
| pre_max    | float8    |      8 |
| pre_max    | float8    |      8 |
+------------+-----------+--------+
Index:    if_bbsacu


I mean, all the fields have been duplicated :-???
And the index is the same:

Table    = if_bbsacu
+-------+------+--------+
| Field | Type | Length |
+-------+------+--------+
| cc    | int4 |      4 |
| cc    | int4 |      4 |
+-------+------+--------+


And now, when I restart postgres, I get this message:
Stopping postgresql service:                      [  OK  ]
Starting postgresql service: FindExec: found "/home/pgsql/bin/postgres"
using argv[0]
binding ShmemCreate(key=56a3b1, size=3896320)
postmaster [16106]


And each time I access the table with the duplicated fields in any way,
postgres crashes and I have to restart it.

And when trying to vacuum the table....
[f@svr1 pgsql]$ vacuumdb --analyze --verbose -t tf_bbsacu aii
NOTICE:  --Relation tf_bbsacu--
NOTICE:  Pages 0: Changed 0, Reapped 0, Empty 0, New 0; Tup 0: Vac 0,
Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail.
Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.
ERROR:  cannot find attribute 2 of relation if_bbsacu
vacuumdb: database vacuum failed on aii.

Does anybody know how to fix it?

Thank you very much.



Re: postgres crash. HELP

From: Tom Lane

"Oscar Serrano" <oserra@fondos.net> writes:
> Suddenly in the directory where I have all the tables, now there is about
> 1000 files with names like this:
> [ and a bunch of apparently-duplicated entries in system catalogs ]

Looks pretty messy :-(.  What version are you running?

I hope you have a fairly recent pg_dumpall backup, because otherwise
you are likely to be in trouble.  If the backup is new enough that you
can just restore from it without losing much, then I'd recommend just
initdb and restore.  If you have newer data than what is in the backup,
but the schema is still the same, you could probably use pg_upgrade to
recover the system tables from the backup and then move your current
user data into the database.  (It looks like the trouble is just in the
system tables, so your user tables are most likely OK.)
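
The initdb-and-restore path is roughly this (just a sketch --- exact
commands depend on your installation; this assumes PGDATA is set and
your pg_dumpall backup is in backup.sql):

    pg_ctl stop                      # stop the postmaster
    mv $PGDATA $PGDATA.broken        # keep the damaged directory, just in case
    initdb                           # create a fresh data directory at $PGDATA
    pg_ctl start
    psql -f backup.sql template1     # replay the pg_dumpall script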

If you want to try the pg_upgrade method, make sure you have the latest
pg_upgrade script (from 7.0RC5, current nightly snapshot, or direct from
CVS server) --- previous versions are known to have problems in
multi-user databases.  Then, follow the man page's directions for
pg_upgrade, except don't bother trying to make a "pg_dumpall -s" dump
from your corrupted database; instead use your last good backup file
as the input script for pg_upgrade.
        regards, tom lane


Re: postgres crash. HELP

From: "Ken J. Wright"

On Fri, 05 May 2000, Oscar Serrano wrote:
> Hi:
> I don't know how, but Postgres has made a very odd thing.
> Suddenly in the directory where I have all the tables, now there is about
> 1000 files with names like this:
> 
> pg_class_oid_index.165               pg_class_oid_index.2835
> pg_class_oid_index.1650              pg_class_oid_index.2836

I had the same problem at a site with 6.5.0. Although postgres did not crash, it
did suck up a lot of disk space. After upgrading to 6.5.3 and doing an
initdb with a good backup, the problem appears to have been fixed.

Ken


RE: postgres crash. HELP

From: "Oscar Serrano"

> -----Original Message-----
> From: pgsql-interfaces-owner@hub.org
> [mailto:pgsql-interfaces-owner@hub.org] On behalf of Tom Lane
> Sent: Sunday, May 07, 2000 18:18
> To: Oscar Serrano
> Cc: pgsql-interfaces@postgresql.org
> Subject: Re: [INTERFACES] postgres crash. HELP
>
>
> "Oscar Serrano" <oserra@fondos.net> writes:
> > Suddenly in the directory where I have all the tables, now
> there is about
> > 1000 files with names like this:
> > [ and a bunch of apparently-duplicated entries in system catalogs ]
>
> Looks pretty messy :-(.  What version are you running?
>
> I hope you have a fairly recent pg_dumpall backup, because otherwise
> you are likely to be in trouble.  If the backup is new enough that you
> can just restore from it without losing much, then I'd recommend just
> initdb and restore.  If you have newer data than what is in the backup,
> but the schema is still the same, you could probably use pg_upgrade to
> recover the system tables from the backup and then move your current
> user data into the database.  (It looks like the trouble is just in the
> system tables, so your user tables are most likely OK.)

Fortunately, I had a backup. I first had to pg_dump some tables that were
not corrupted and restore them later.
I have Postgres 6.5.3.
I didn't know about pg_upgrade. I will have to read about it.
Actually, I have my databases open to every IP address, so anybody who
enters our web page and visits some of our applets connects via JDBC to
our database. I think the postgres crashed during one of those
connections. I'm going to stop using JDBC.
Is there any known problem with Postgres's ODBC driver?
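
About having the databases open to every IP: I suppose I should also lock
that down in pg_hba.conf. If I read the format right (host DBNAME IP_ADDR
ADDR_MASK AUTHTYPE), something like this would only let our web servers'
subnet in, with passwords (the addresses here are made up):

    host  aii  192.168.1.0  255.255.255.0  password

and connections from hosts that match no line get refused.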



> If you want to try the pg_upgrade method, make sure you have the latest
> pg_upgrade script (from 7.0RC5, current nightly snapshot, or direct from
> CVS server) --- previous versions are known to have problems in
> multi-user databases.  Then, follow the man page's directions for
> pg_upgrade, except don't bother trying to make a "pg_dumpall -s" dump
> from your corrupted database; instead use your last good backup file
> as the input script for pg_upgrade.
>
>             regards, tom lane


The problem was that I could not access the corrupted database at all.
I could neither do a SELECT nor a pg_dump. The fields of the index were
also duplicated, and that was generating more errors. I really think the
data was still there, but the system tables were corrupted.
Our schema doesn't change every day, so I'm going to back up the schema
every day.
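
Something like this in a nightly cron job, I suppose (the paths are made
up; 'aii' is the database from the vacuumdb output above):

    pg_dumpall > /backups/full_`date +%Y%m%d`.out          # everything
    pg_dump -s aii > /backups/schema_`date +%Y%m%d`.sql    # schema only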

Thank you very much.



Re: postgres crash. HELP

From: Tom Lane

"Oscar Serrano" <oserra@fondos.net> writes:
> I had a backup fortunately. I first had to make some pg_dump of some tables
> that were not corrupted and restore them later.
> I have postgres 6.5.3.
> I didn't know about pg_upgrade. I will have to read about it.
> Actually I have my dabatases open to every IP address, so anybody that
> enters our web page, and visits some of our applets, the connect via JDBC to
> our database. I think that during one of this connection, the postgres
> crashed. I'm going to stop using JDBC.
> Is there any known problem with the ODBC of the postgres?

I don't think you can fairly blame JDBC for this problem; it's an
internal-to-the-backend bug of some sort.  I would suggest upgrading
to 7.0 as a more reasonable response... we've fixed an awful lot of
bugs since 6.5.*.
        regards, tom lane