Thread: Access'97 and ODBC
Hello,

I have a problem using Access97 and PostODBC (po021-32.tgz). I can link
PostgreSQL 6.3.1 tables to Access'97, but I can open them only if they are
empty. If I insert data into the tables and then try to access them, I get
the following message:

    Receiving an unsupported type from Postgres (#14) SELECT (#513)

Thanks for any help,
Jose'

Please see the Insight Dist site for a newer source and binary distribution
of the ODBC driver:

    http://www.insightdist.com/psqlodbc

Julie

Quoting Jose' Soares Da Silva (sferac@proxy.bazzanese.com):
> I have a problem using Access97 and PostODBC (po021-32.tgz). I can link
> PostgreSQL 6.3.1 tables to Access'97 but I can open them only if they are
> empty. If I insert data into tables and then I try to access it, I have
> the following message:
>
>     Receiving an unsupported type from Postgres (#14) SELECT (#513)

--
[ Julia Anne Case   ] [ Ships are safe inside the harbor,             ]
[ Programmer at large ] [ but is that what ships are really for.        ]
[ Admining Linux    ] [ To thine own self be true.                    ]
[ Windows/WindowsNT ] [ Fair is where you take your cows to be judged.]

On Fri, 24 Apr 1998, Julia A.Case wrote:
> Please see the Insight Dist site for a newer source and binary
> distribution of the ODBC driver
>
>     http://www.insightdist.com/psqlodbc

Thanks Julie. Now it works, but I have a small problem with date formats.
I have a table with field1 DATE and field2 TIMESTAMP. If I insert data into
these fields, field2 looks OK, but Access97 shows me a strange date in
field1.

This is the Access97 output:
    field1: 27/7/99
    field2: 1998-04-27 12:20:21+02

This is the psql output:
    Field | Value
    -- RECORD 0 --
    field1| 1998-04-27
    field2| 1998-04-27 12:20:21+02

PS: My DateStyle is set to 'ISO'.

Jose'

Hello,

The ODBC driver cannot yet handle multiple datestyle formats. Currently, it
expects dates to be in US format. A future option will allow you to
configure this for the driver or per data source.

Byron

Jose' Soares Da Silva wrote:
> Thanks Julie. Now it works, but I have a small problem with date formats.
> I have a table with field1 DATE and field2 TIMESTAMP. If I insert data
> into these fields, field2 looks OK, but Access97 shows me a strange date
> in field1.
> [...]
> PS: My DateStyle is set to 'ISO'.

Jose' Soares Da Silva wrote:
> I have a table with field1 DATE and field2 TIMESTAMP. If I insert data
> into these fields, field2 looks OK, but Access97 shows me a strange date
> in field1.
> [...]
> PS: My DateStyle is set to 'ISO'.

You should set it to 'US' when using the Insight ODBC drivers.

It should not affect the output in any way, but the driver expects dates
from the backend in US format. As this is a per-connection setting, it can
safely be set by the driver at startup without affecting other connections.

There has been some discussion about 'fixing' this by making the driver
recognize other date formats. That would be, IMHO, unnecessary. It should be
enough just to do "SET DateStyle TO 'US';" at startup.

This can currently be done by setting some registry entries, but it should
really just be part of driver startup.

Hannu

Hannu Krosing wrote:
> You should set it to 'US' when using the Insight ODBC drivers.
> [...]
> It should be enough just to do "SET DateStyle TO 'US';" at startup.
> This can currently be done by setting some registry entries, but it
> should really just be part of driver startup.

Hannu,

I understand what you are saying here, and am very tempted to just go with
setting the datestyle to US at connection time by default. It is true that
this would have no negative effect on applications such as Access.

But before I do, are there cases out there where people are executing DIRECT
queries through the driver and expecting the date to be in a particular
format, such as:

    insert into tablex (date1) values('28-04-1998')    -- DD-MM-YYYY format

If the driver always sets the datestyle to "US", the above insert might not
work. Of course, I would imagine the query should be written more portably
using the ODBC shorthand escape syntax, as:

    insert into tablex (date1) values( {d '1998-04-28'} )

which would work correctly. The reverse is true also: if the user does
"select date1 from tablex" and uses SQL_C_CHAR as the return type, they may
expect the format to be EURO when in fact it would be US.

If no one has any objections, I will change the driver to always set the
datestyle to US, and forget about adding a selection to the dialogs to
select it.

Byron

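(A minimal psql sketch of the ambiguity Byron describes, assuming a backend
that accepts the 'US' and 'European' DateStyle settings; exact parsing and
output may vary by server version:

    set datestyle to 'European';
    select '03-04-1998'::date;    -- read as 3 April 1998 (DD-MM-YYYY)
    set datestyle to 'US';
    select '03-04-1998'::date;    -- the same literal is now read as March 4, 1998
    select '1998-04-28'::date;    -- an ISO literal is unambiguous under either setting

The {d '1998-04-28'} escape form is rewritten by the ODBC driver before it
reaches the backend, so it stays unambiguous regardless of the session's
DateStyle.)
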
Hello,

At 17.32 28/04/98 -0400, Byron Nikolaidis wrote:
> If no one has any objections, I will change the driver to always set the
> datestyle to US, and forget about adding a selection to the dialogs to
> select it.

Microsoft says that the US date format is *always* recognized by the Jet
database engine, regardless of the Windows international settings, and it
suggests using the US date format as a kind of international date format.
This means that whenever you don't know in which country your program will
be executed, it is safe to use the US date format.

Setting the US datestyle by default in the ODBC driver will provide
behaviour very similar to that of the Jet database engine, i.e. the
behaviour Access/VB programmers usually have to deal with. So go on with
this solution!

Bye!

P.S. I tested the new ODBC driver with index support. VisData still isn't
able to show the index list; anyway, it sees the indexes because it allows
updates. Used with VB, the ODBC driver is rather slow compared with other
ODBC drivers (about 10 times slower than MS SQL and Velocis, about 30 times
slower than MySQL), but it works pretty well. Anyway, it is about 3 to 4
times faster than the OpenLink driver, which is also pretty buggy ;)
Really good job Byron!

Dr. Sbragion Denis
InfoTecna
Tel, Fax: +39 39 2324054
URL: http://space.tin.it/internet/dsbragio

On Tue, 28 Apr 1998, Byron Nikolaidis wrote:
> If no one has any objections, I will change the driver to always set the
> datestyle to US, and forget about adding a selection to the dialogs to
> select it.

Why not ISO-8601? It is the standard SQL92 date format (i.e. YYYY-MM-DD),
and it would be coherent with the PostgreSQL User's Guide. Quoting Thomas
Lockhart at page 14, chapter 4, under "Date/Time Styles":

    "For Postgres v6.3 (and earlier) the default date/time style is
    "traditional Postgres". In future releases, the default may become
    ISO-8601, which alleviates date specification ambiguities and Y2K
    collation problems."

I vote for changing the default date format to ISO-8601, to reflect the
PostgreSQL documentation and for adherence to standard SQL92.

Jose'

Sbragion Denis wrote:
> P.S. I tested the new ODBC driver with index support. VisData still isn't
> able to show the index list; anyway, it sees the indexes because it allows
> updates. Used with VB, the ODBC driver is rather slow compared with other
> ODBC drivers (about 10 times slower than MS SQL and Velocis, about 30
> times slower than MySQL), but it works pretty well. Anyway, it is about
> 3 to 4 times faster than the OpenLink driver, which is also pretty buggy ;)
> Really good job Byron!

I'm not sure why VisData still isn't able to show the index list. First of
all, I don't know what "VisData" is anyway! Perhaps you could use the ODBC
tracing feature (through the 32-bit ODBC administrator) and send the
"sql.log" to me. Make sure it is empty before you begin your session. This
will really slow things down, by the way.

As for performance, the backend affects that equation greatly. You should
see what happens in Access when you are using unique indexes. Even with one
keypart, Access generates that infamous query we have been talking about
(with all the ANDs and ORs), which really slows things down.

Byron

Hello,

At 09.31 29/04/98 -0400, Byron Nikolaidis wrote:
> I'm not sure why VisData still isn't able to show the index list. First
> of all, I don't know what "VisData" is anyway! Perhaps you could use the
> ODBC tracing feature...

VisData is a small tool provided with Visual Basic 5.0. It gives a graphical
representation of all the features of any database that can be opened
through Visual Basic, including ODBC databases. It is quite a hard test for
any ODBC driver because it tries to show *almost anything* that can be
retrieved through an ODBC driver, not only data. Most ODBC drivers, even
some "famous" ones, fail with VisData and can still be used perfectly well
in normal applications.

> feature (through the 32-bit ODBC administrator) and send the "sql.log" to
> me. Make sure it is empty before you begin your session. This will really
> slow things down, by the way.

I'll do it ASAP, and I'll also provide the exact sequence of operations
performed to show the problems. Anyway, the problem shown by VisData has no
importance at all, at least using Visual Basic and Access. ASAP I'll also
perform some tests using PowerBuilder, which uses ODBC in a different way
than VB.

> As for performance, the backend affects that equation greatly. You should
> see what happens in Access when you are using unique indexes. Even with
> one keypart, Access generates that infamous query we have been talking
> about (with all the ANDs and ORs), which really slows things down.

I know. Anyway, I was not using Access but a small test program I wrote
myself. This program performs random operations (insert, update, select and
delete) through recordsets opened on simple tables, so it doesn't suffer
from the Access "feature" of creating overly complex queries. I know this is
not a deep test; anyway, it is the sort of operation 90% of VB code performs
on databases.

I think first we should obtain a functioning ODBC driver, i.e. you should
continue on the way you are going now. After that we can take care of
performance. Doing things in the reverse order usually produces "very fast
non-functioning code", which is not useful at all ;)

Bye!

Dr. Sbragion Denis
InfoTecna
Tel, Fax: +39 39 2324054
URL: http://space.tin.it/internet/dsbragio

"Jose' Soares Da Silva" <sferac@bo.nettuno.it> writes: > I vote for changing default date format to ISO-8601 to reflect > PostgreSQL documentation and for adherence to Standard SQL92. Hear! Hear! Good standards beat silly conventions any day! -tih -- Popularity is the hallmark of mediocrity. --Niles Crane, "Frasier"
Thanks to everybody who replied to my question. Dates are OK now.

Now I have another problem using M$-Access. I have a table like this one:

Table = comuni
+----------------+------------------+--------+
| Field          | Type             | Length |
+----------------+------------------+--------+
| istat          | char() not null  |      6 |
| nome           | varchar()        |     50 |
| provincia      | char()           |      2 |
| codice_fiscale | char()           |      4 |
| cap            | char()           |      5 |
| regione        | char()           |      3 |
| distretto      | char()           |      4 |
+----------------+------------------+--------+

In this table I have stored about 8000 rows. If I load it from M$-Access and
then modify a row and try to save it to the database, Access goes into a
loop. I don't know what's happening. Please help me.

Thanks, Jose'

Jose' Soares Da Silva wrote:
> In this table I have stored about 8000 rows. If I load it from M$-Access
> and then modify a row and try to save it to the database, Access goes
> into a loop. I don't know what's happening.

This problem has to do with Postgres' locking mechanism. You can't update a
table while you have the table open for reading. You may be asking yourself,
"but I do not have the table open for reading". Ahhh, but Access does,
because of the way the ODBC driver uses cursors to manage backend data.

Here is the illustration
------------------------
Access uses two backend connections. On one connection, it does a query to
get key values from the table:

    declare c1 cursor for select key from table

It then fetches 101 keys from this query. This fetch results in the
following two queries to the backend:

    fetch 100 in c1
    fetch 100 in c1

(Note that there are 8000+ rows in the table, so this leaves the table
locked.)

On the other connection, it actually does the update query:

    update table set a1=2 where key=1

This update will wait forever because the other query has the table
completely locked.

Workarounds
-----------
In Access, you can go to the end of the table first, before you begin your
update. Then, any update or insert you do should work.

You can also do your update on a smaller subset of records by using a filter
in Access. 200 rows or fewer would allow the driver to handle it, since all
the keys would have been read in, as illustrated above.

Now for the ultimate question
-----------------------------
What is the current status/priority of the locking enhancements for
Postgres? Clearly, this is an important problem and needs to be addressed.
Even though the above example only involves Microsoft Access, we have
applications which need to write data to tables that may already be open for
reading for a long time, such as while doing a massive report with lots of
joins. With the current locking strategy, these applications are impossible.

Regards,
Byron

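(The blocking behaviour Byron describes can be reproduced outside Access
with two psql sessions. A minimal sketch, assuming Jose's "comuni" table
with its "istat" key column and 6.3-era table-level read locks; the literal
values are made up for illustration:

    -- session 1: mimic the driver's keyset cursor, leaving it open
    begin;
    declare c1 cursor for select istat from comuni;
    fetch 100 in c1;

    -- session 2: mimic Access' update on the second connection;
    -- with table-level locking this statement blocks until session 1
    -- commits or aborts
    update comuni set cap = '40033' where istat = '037011';

    -- session 1: ending the transaction releases the read lock and
    -- lets session 2 proceed
    end;
)
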
On Thu, 30 Apr 1998, Byron Nikolaidis wrote:
> This problem has to do with Postgres' locking mechanism. You can't update
> a table while you have the table open for reading. [...]
> You can also do your update on a smaller subset of records by using a
> filter in Access. 200 rows or fewer would allow the driver to handle it,
> since all the keys would have been read in, as illustrated above.

Thank you very much, Byron, for your explanation.

It seems this problem also exists when I read only one row. I tried this: I
got the first row using a form, then I modified a field on that form, and
then I tried to load the next row (using the right arrow), and Access was
already stuck there, locked by PostgreSQL.

The ps command gives me the following result (two backend connections, as
you said):

    3033  ?  S  0:00 postmaster -i -o -F -B 512 -S
    5034  ?  S  0:01 /usr/local/pgsql/bin/postgres -p -Q -P5 -F -B 512 -v 6553
    5035  ?  S  0:07 /usr/local/pgsql/bin/postgres -p -Q -P5 -F -B 512 -v 6553

> Now for the ultimate question
> -----------------------------
> What is the current status/priority of the locking enhancements for
> Postgres? Clearly, this is an important problem and needs to be addressed.

Is there a project to work on this problem?

Jose'

Hi, all!

I created a table with a TIMESTAMP data type to use with M$-Access, because
Access uses such a field to control concurrent access to records. But I have
a problem: M$-Access doesn't recognize the TIMESTAMP type; it sees such
fields as "text" instead of "date/time". Is there a way to make Access
recognize TIMESTAMPs?

Thanks, Jose'

On Tue, 9 Jun 1998, Jose' Soares Da Silva wrote:
> I created a table with a TIMESTAMP data type to use with M$-Access,
> because Access uses such a field to control concurrent access to records.
> But I have a problem: M$-Access doesn't recognize the TIMESTAMP type; it
> sees such fields as "text" instead of "date/time".

The following types are also recognized as text:

    int28
    oid8
    oidint2
    oidint4

I forgot to say that I'm using:

    PostgreSQL 6.3.2
    Linux ELF 2.0.33
    psqlodbc-06.30.0243
    M$-Access97

Ciao, Jose'

Jose' Soares Da Silva wrote:
> I created a table with a TIMESTAMP data type to use with M$-Access,
> because Access uses such a field to control concurrent access to records.
> But I have a problem: M$-Access doesn't recognize the TIMESTAMP type; it
> sees such fields as "text" instead of "date/time". Is there a way to make
> Access recognize TIMESTAMPs?

I could add timestamp as a supported data type of the ODBC driver.
Currently, 'abstime' is supported but not 'timestamp'.

Byron

Byron Nikolaidis wrote:
> I could add timestamp as a supported data type of the ODBC driver.
> Currently, 'abstime' is supported but not 'timestamp'.

Also, the postgres "datetime" type is already supported as well. Maybe that
would work for you temporarily. As a matter of fact, all the date/time types
"look" the same since we now use 'ISO'.

Byron

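(A minimal sketch of the interim workaround Byron suggests: declare the
concurrency column with the already-supported "datetime" type instead of
"timestamp". The table and column names here are made up for illustration:

    -- "datetime" already maps to SQL_TIMESTAMP in the driver, so Access
    -- should see this column as Date/Time rather than text
    create table comuni_test (
        istat   char(6) not null,
        nome    varchar(50),
        rowver  datetime
    );
)
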
Jose' Soares Da Silva wrote:
> The following types are also recognized as text:
>     int28
>     oid8
>     oidint2
>     oidint4

Just a little history here: any data type that is not directly supported by
the ODBC driver gets mapped to SQL_VARCHAR or SQL_LONGVARCHAR, depending on
the driver's 'data type options'. That allows you to view it, and possibly
update it if there is an appropriate operator. This is great compared to
what the driver used to do in the old days with unsupported types (i.e.,
crash with no descriptive error message)!

For int28 and oid8, there is no SQL data type that maps; text is the only
way to display them that I know of. oidint2 and oidint4 are just integers,
I guess, and could probably be mapped to SQL_SMALLINT and SQL_INTEGER,
respectively.

Byron

On Tue, 9 Jun 1998, Byron Nikolaidis wrote:
> I could add timestamp as a supported data type of the ODBC driver.
> Currently, 'abstime' is supported but not 'timestamp'.

Thank you, Byron. I think this is great. M$-Access should work well with a
timestamp field; I have problems with concurrent access and I think they are
caused by this data type.

Jose'

On Tue, 9 Jun 1998, Byron Nikolaidis wrote:
> Also, the postgres "datetime" type is already supported as well. Maybe
> that would work for you temporarily. As a matter of fact, all the
> date/time types "look" the same since we now use 'ISO'.

My problem is that I need a TIMESTAMP data type defined in M$-Access,
because M$-Access wants one in order to get the best performance when it
updates a table via ODBC. M$-Access doesn't lock a record being modified;
to control concurrent access to data, it re-reads the record to verify
whether it was modified by another user before writing the update to the
database. If there is a TIMESTAMP, M$-Access only checks whether that field
was modified; otherwise it checks every field of the table, which is
obviously slower.

I believe it would be very useful if you could add this feature to psqlodbc.

Thanks, Jose'

Jose' Soares Da Silva
SFERA CARTA SOFTWARE - Progetto "OS LUSIADAS"
Via Bazzanese, 69 - Casalecchio R. BO - Italy
http://www.sferacarta.com   sferac@bo.nettuno.it
Tel. ++39 51 591054   Fax ++39 51 6131537

Jose' Soares Da Silva wrote:
> My problem is that I need a TIMESTAMP data type defined in M$-Access,
> because M$-Access wants one in order to get the best performance when it
> updates a table via ODBC. [...]
> If there is a TIMESTAMP, M$-Access only checks whether that field was
> modified; otherwise it checks every field of the table, which is
> obviously slower.

I have absolutely no problem with adding the postgres 'timestamp' type; in
fact, I already added it. But the thing is, the postgres types abstime and
datetime ALREADY map to SQL_TIMESTAMP!

I think that this actually has to do with SQLSpecialColumns 'SQL_ROWVER'.
Access checks for this, but we don't return anything. SQL_ROWVER is defined
as the column(s) in the specified table, if any, that are automatically
updated by the data source when any value in the row is updated by any
transaction (as in SQLBase ROWID or Sybase TIMESTAMP).

It seems to me that this suggests that if we had a hidden timestamp column,
Access would use that to verify. I don't believe we have such a column in
postgres?

Byron

Byron Nikolaidis wrote:
> Jose' Soares Da Silva wrote:
> > My problem is that I need a TIMESTAMP data type defined in M$-Access,
> > because M$-Access wants one in order to get the best performance when
> > it updates a table via ODBC. [...]

I did some testing with SQLSpecialColumns 'SQL_ROWVER'. As I noted in my
previous mail, we don't return anything for this function in the driver. I
tried hard-coding a column that was a SQL_TIMESTAMP type (in my table it was
a postgres 'datetime'). Access did use that column. Here are the results:

test1 table
-----------
a,c,d,e,f,g = int2
b,h         = varchar
datetim     = datetime

Access results without ROWVER (this is the way things currently are)
---------------------------------------------------------------------
BEGIN
update test1 set b='hello' where a=7 AND b='other' AND c=3 AND d=4 AND
    e is NULL AND f is NULL AND g=5 AND h='stuff'
COMMIT

Access results with ROWVER
--------------------------
BEGIN
update test1 set b='hello' where a=7 AND datetim = '1998-05-30 10:59:00';
select a,b,c,d,e,f,g,h,datetim where a=7;
COMMIT

Conclusion
----------
The update statement was definitely smaller and only involved the key and
the timestamp column. The extra select that it does to verify that no one
has changed anything (using the value of the timestamp) slowed the update
down, though. I don't think the speed gain on the smaller update statement
makes up for the extra query. In either case, the backend locking problem
would still prevent the update if the table was opened by someone else (or
even by the same application, as in our declare/fetch problem).

Also, something would have to be done to actually put in a timestamp value
every time a row is added or updated. Access actually prevented me from
entering a value in my 'datetim' field because it assumed the DBMS would
fill it in. I guess you could use a trigger to update the timestamp field.
OR, if we had a pseudo column that qualified, we could use that; however,
when I tried using a pseudo column, Access barfed on me, complaining "Table
TMP%#$$^ already exists". If I added the pseudo column to the output, the
message went away. I have no idea what the heck that means.

Any ideas or thoughts?

Byron

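(A sketch of the trigger approach Byron mentions for keeping the timestamp
column current, assuming a procedural language such as plpgsql is available;
it is not part of 6.3.x, where the trigger function would have to be written
in C instead. The function and trigger names are made up for illustration:

    -- set the row-version column on every insert and update
    create function touch_rowver() returns opaque as '
    begin
        new.datetim := now();
        return new;
    end;
    ' language 'plpgsql';

    create trigger test1_touch_rowver
        before insert or update on test1
        for each row execute procedure touch_rowver();
)
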
On Wed, 10 Jun 1998, Byron Nikolaidis wrote:
> Conclusion
> ----------
> The update statement was definitely smaller and only involved the key and
> the timestamp column. The extra select that it does to verify that no one
> has changed anything (using the value of the timestamp) slowed the update
> down, though. I don't think the speed gain on the smaller update statement
> makes up for the extra query.

I don't know for sure whether Access is faster this way; I read in the
Access manual that it is faster using ROWVER during updates. I think the
extra select is there to refresh the data on the client side; otherwise
Access doesn't refresh the client and it says that another user has modified
the record (but that other user is me).

> Also, something would have to be done to actually put in a timestamp value
> every time a row is added or updated. [...]
> Any ideas or thoughts?

Jose'