VB ADODB .Open failing

From: Finn Lassen
I'm new to this, so please bear with me.

Here is a VB code snippet:
    Dim dbOut As ADODB.Connection
    Dim rsOut As ADODB.Recordset
    Set dbOut = New ADODB.Connection
    Set rsOut = New ADODB.Recordset

    With dbOut
        .ConnectionString = "Driver={PostgreSQL ANSI};Server=localhost;" & _
            "Port=5432;Database=postgres;Uid=postgres;Pwd=none;"
        .Open
        With rsOut
            rsOut.Open """Contact1""", dbOut, adOpenDynamic, _
                adLockOptimistic, adCmdTable

The .Open statement fails with:
"ERROR: column "oid" does not exist;
Error while executing the query".

I get the same error with the following:
rsOut.Open "select * from ""Contact1"";", dbOut, adOpenDynamic,
adLockOptimistic, adCmdText

Should I have created the table "Contact1" WITH OIDS?

Finn

--
Finn Lassen
Deputy CIO
Axiom
1805 Drew Street
Clearwater, Florida 33765
727-442-7774 voice
727-442-8344 fax
dcio@AxiomInt.com
www.AxiomInt.com



Re: VB ADODB .Open failing

From: Richard Broersma Jr
--- On Tue, 11/20/07, Finn Lassen <dcio@AxiomInt.com> wrote:
> Here is a VB code snippet:
>     Dim dbOut As ADODB.Connection
>     Dim rsOut As ADODB.Recordset
>     Set dbOut = New ADODB.Connection
>     Set rsOut = New ADODB.Recordset
>         .ConnectionString = "Driver={PostgreSQL ANSI};Server=localhost;" & _
>             "Port=5432;Database=postgres;Uid=postgres;Pwd=none;"

> Should I have created the table "Contact1" WITH OIDS?

I wonder if this is a problem with the way your ODBC driver is configured.
If so, you might also try posting to the PostgreSQL-ODBC mailing list.

Also, why did you choose the ANSI driver over the Unicode driver?
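
If memory serves, the driver uses the hidden "oid" system column to
identify rows when you open an updatable (dynamic/keyset) cursor, which
fails on tables created WITHOUT OIDS (the default in recent server
releases). Untested, but two things to try: enable the "Row Versioning"
option in your DSN settings so the driver uses xmin to identify rows
instead, or add OIDs back to the table on the server side:

    -- untested sketch: re-adds the hidden oid column (PostgreSQL 8.0 and later)
    ALTER TABLE "Contact1" SET WITH OIDS;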

Regards,
Richard Broersma Jr.

Normalization tools for postgres?

From: Dane Springmeyer
Anyone have recommendations on tools/utilities or SQL approaches to
quickly break apart a large imported flat file into normal forms,
ideally 1NF or 2NF?

I noticed this tool for MySQL which captures what I am looking for:
http://www.sqldbu.com/eng/sections/tips/normalize.html

Given the fields with data to be output into separate tables, it
takes a CSV and automatically generates a set of INSERT queries to
build all the related tables, with a new primary key for the main
table and serialized codes for each of the new tables to maintain
relationships.

Perhaps a customized php script could accomplish the same thing for
postgres?

Any suggestions would be helpful.

Dane




Re: Normalization tools for postgres?

From: Richard Huxton
Please don't start a new thread by replying to another question.

If you are going to do that, please don't cross-post it in addition.
This doesn't appear to be anything to do with PHP.

Dane Springmeyer wrote:
> Anyone have recommendations on tools/utilities or SQL approaches to
> quickly break apart a large imported flat file into normal forms,
> ideally 1NF or 2NF?

I don't know of any tool that could do this automatically. Without
understanding what the columns *mean*, how could it?

> I noticed this tool for MySQL which captures what I am looking for:
> http://www.sqldbu.com/eng/sections/tips/normalize.html

Well, given that the example just shows the author doesn't know what 1NF
means, I don't think it does what you want.

> Given the fields with data to be output into separate tables, it takes a
> csv and automatically generates a set of INSERT queries to build all the
> related tables, with a new primary key for the main table and
> serialized codes for each of the new tables to maintain relationships.

I don't see any reason why the INSERTs wouldn't work for PG just as well.

> Perhaps a customized php script could accomplish the same thing for
> postgres?
>
> Any suggestions would be helpful.

Can you state what problem you are trying to solve? Do people just email
you flat files and ask you to send them a relational database back? Is
there a specific file that gets imported on a regular basis?

--
   Richard Huxton
   Archonet Ltd

Re: Normalization tools for postgres?

From: Richard Huxton
Dane Springmeyer wrote:
> On Nov 21, 2007, at 1:37 AM, Richard Huxton wrote:
>> Dane Springmeyer wrote:

>>> These mostly reside in MS Access in flat tables and I am
>>> importing them into postgres.
>>
>> If you are familiar/comfortable with Access and VB, I'd probably do
>> the work there.
>
> Unfortunately I am not. I am only familiar with very basic SQL in Access
> and not with VB. And I am not interested in investing any time learning
> microsoft products. I'd rather put time in learning postgres and php or
> other languages to manipulate data in postgres.

Fair enough. PHP is easy enough to get started with. Note that you don't
need to run it in a webserver; you can run it from the command line too.
Any of the scripting languages will do nicely for this sort of thing -
Perl, Python, Ruby etc.


>> 1. Import as-is into a table called e.g. raw_data
>>
>> 2. Identify/add the primary key (presumably ID in this case) in raw_data.
>> If there is no ID, you can add a column of type SERIAL to raw_data and let
>> it be populated automatically (see the example after this list).
>>
>> 3. CREATE TABLE lookups.region (id SERIAL, description text NOT NULL
>> UNIQUE, PRIMARY KEY (id));  -- run CREATE SCHEMA lookups; first
>> INSERT INTO lookups.region (description) SELECT DISTINCT region from
>> raw_data;
>>
>> 4. Repeat #3 for other lookups
>>
>> 5. CREATE TABLE processed_data (...);
>> INSERT INTO processed_data (id, region_id, ...)
>> SELECT raw.id, lkp_reg.id, ...
>> FROM raw_data raw
>> JOIN lookups.region lkp_reg ON raw.region = lkp_reg.description
>> JOIN lookups.whatever...
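
(Expanding on #2 above: if the flat file has no usable key, something like
this - untested - gives you one. PostgreSQL numbers the existing rows
automatically:

    -- sketch: add an auto-numbered id column to the imported table
    ALTER TABLE raw_data ADD COLUMN id SERIAL;
)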

> Wow. That was EXTREMELY helpful. With those concepts I've now been able
> to do EXACTLY what I was shooting for AND now understand the SQL well
> enough to start thinking of better ways to do it as well. Thank you.

Good.

> Here is the SQL which inserts the sample data and processes it into 4
> different tables. Perhaps after you take a look I could post this back
> to the group?

I've cc:ed the list on this one for you - plenty of smart people on that
list.

> CREATE TABLE raw_data (

Later on, you might want to consider CREATE TEMPORARY TABLE... but don't
worry for the moment.

>     wid int4,
>     region character(35),
>     drain character(65),
>     eco character(29)

These should almost certainly be "varchar" (or "character varying") -
unless you actually want each field padded with spaces. In fact, since
this is just a temporary thing I'd make them all type "text".
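
I.e. something like this, using your columns:

    -- sketch: same raw table, but with unpadded text columns
    CREATE TABLE raw_data (
        wid    int4,
        region text,
        drain  text,
        eco    text
    );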

> );
>
> INSERT INTO raw_data VALUES (11210, 'SW Oregon / N Cali Coast', 'Rogue
> River', 'California Coast');
> INSERT INTO raw_data VALUES (11100, 'SW Oregon / N Cali Coast', 'Coastal
> grouping of Chetco River and Pistol River', 'California Coast');

You might want to read the manuals regarding the COPY <table> FROM STDIN
command. Good for bulk-loading data.
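
For example (untested; in psql you can also use \copy to read a file on
the client side):

    COPY raw_data (wid, region, drain, eco) FROM STDIN WITH CSV;
    11210,SW Oregon / N Cali Coast,Rogue River,California Coast
    11100,SW Oregon / N Cali Coast,Coastal grouping of Chetco River and Pistol River,California Coast
    \.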

Also, you could download the pg-odbc driver and link to PG from Access
to copy the data over.

>
> drop table region;

There's an "IF EXISTS" clause you can add to DROP TABLE - prevents errors.

> CREATE TABLE region (id SERIAL, name text NOT NULL UNIQUE, PRIMARY KEY
> (id));
> INSERT INTO region (name) SELECT DISTINCT region from raw_data;
> Select * from region;
[snip repeats for other tables]
> drop table processed_data;
> CREATE TABLE processed_data (wid int4 NOT NULL UNIQUE, region_id int4
> NOT NULL, eco_id int4 NOT NULL, drain_id int4 NOT NULL, PRIMARY KEY (wid));
> INSERT INTO processed_data (wid, region_id, eco_id, drain_id)
> SELECT w.wid, r.id, e.id, d.id
> FROM raw_data as w, region as r, eco as e, drain as d
> WHERE w.region = r.name AND w.drain = d.name AND w.eco = e.name;

Good. You'll want to read up on foreign-keys too. You can define them
when you build the table, or add them after. Something like (not checked):
ALTER TABLE processed_data ADD CONSTRAINT valid_region
FOREIGN KEY (region_id) REFERENCES region (id);
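
Or inline when you build the table - an untested sketch of what
processed_data might look like with all three references declared:

    CREATE TABLE processed_data (
        wid       int4 PRIMARY KEY,
        region_id int4 NOT NULL REFERENCES region (id),
        eco_id    int4 NOT NULL REFERENCES eco (id),
        drain_id  int4 NOT NULL REFERENCES drain (id)
    );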

>> 6. Wrap the entire lot above in BEGIN...END so it either all works or
>> it doesn't, then VACUUM FULL the database at key points.
>
> I'm not familiar with these ideas, but I'll look into them in the docs...

Read up on VACUUM, ANALYSE and the autovacuum daemon (agent).
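
The shape of it, roughly (untested):

    BEGIN;
    -- all the CREATE TABLE / INSERT ... SELECT statements from above
    COMMIT;

    -- VACUUM cannot run inside a transaction block, so do it afterwards:
    VACUUM ANALYZE;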

>> If you're comfortable with a bit of VB/Perl/Python/PHP/plpgsql then
>> you can automate that fairly simply. If not, a bit of cut+paste will
>> see you there.
>
> Yes, I think I'll experiment with trying to produce this SQL text with
> php. Seems like in a very short time I could have a custom script to
> parse any table given column names and types which I'd like to break
> out. Very nice.

The fun comes when you have to clean up the data - correct spelling
mistakes, remove duplicates etc.

--
   Richard Huxton
   Archonet Ltd